From f24a801c68c6a73d071e602dce66684c892f05b3 Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Fri, 13 Mar 2026 17:27:12 +0000 Subject: [PATCH 01/14] better build an agent --- .../docs/docs/tutorials/build-an-agent.md | 315 ++++++++++++++++-- 1 file changed, 291 insertions(+), 24 deletions(-) diff --git a/src/poly/docs/docs/tutorials/build-an-agent.md b/src/poly/docs/docs/tutorials/build-an-agent.md index 1895936..3f69f28 100644 --- a/src/poly/docs/docs/tutorials/build-an-agent.md +++ b/src/poly/docs/docs/tutorials/build-an-agent.md @@ -3,9 +3,16 @@ title: Build an agent with the ADK description: Follow the end-to-end workflow for going from a blank Agent Studio project to a production-ready voice agent with the PolyAI ADK. --- -This guide walks through how to go from a blank slate to a production-ready voice agent with a real backend using **Agent Studio**, the **PolyAI ADK**, and a coding agent such as **Claude Code**. +# Build an agent with the ADK -The overall workflow is simple: +This guide walks through how to go from a blank slate to a production-ready voice agent with a real backend using **Agent Studio**, the **PolyAI ADK**, and optionally a coding agent such as **Claude Code**. + +There are two common ways to build with the ADK: + +| Workflow | Description | +|---|---| +| **CLI workflow** | The hands-on developer path. You run the commands yourself, edit files locally, and push changes back to Agent Studio. | +| **AI-agent workflow** | A coding agent uses the ADK on your behalf, generating and pushing the project files from a brief. |
@@ -15,11 +22,11 @@ The overall workflow is simple: Gather the requirements, business rules, API information, and reference material. -- **The coding agent builds** +- **The agent or developer builds** --- - Using the ADK, the coding agent generates the project files needed for the agent. + Using the ADK, the project files are created, edited, validated, and prepared locally. - **Agent Studio hosts and deploys** @@ -29,19 +36,249 @@ The overall workflow is simple:
-!!! info "No manual flow-building required" - - This workflow is designed so that the coding agent does the heavy lifting of building the agent, while Agent Studio remains the place where the work is reviewed, tested, and deployed. - ## Architecture at a glance | Role | Responsibility | |---|---| | **You** | Provide requirements, project context, and business rules | -| **Coding agent + ADK** | Generate project files and push changes | +| **PolyAI ADK** | Connect the local project to Agent Studio and manage sync, validation, and tooling | +| **Coding agent** | Optionally generate and update files using the ADK | | **Agent Studio** | Host, preview, review, merge, and deploy the agent | -## Step 1 — Gather requirements +## Local project structure + +When an Agent Studio project is linked locally, it follows this general structure: + +~~~text +// +├── agent_settings/ +│ ├── personality.yaml +│ ├── role.yaml +│ ├── rules.txt +│ └── experimental_config.json +├── config/ +│ ├── entities.yaml +│ ├── handoffs.yaml +│ ├── sms_templates.yaml +│ └── variant_attributes.yaml +├── voice/ +│ ├── configuration.yaml +│ ├── speech_recognition/ +│ │ ├── asr_settings.yaml +│ │ ├── keyphrase_boosting.yaml +│ │ └── transcript_corrections.yaml +│ └── response_control/ +│ ├── pronunciations.yaml +│ └── phrase_filtering.yaml +├── chat/ +│ └── configuration.yaml +├── flows/ +│ └── {flow_name}/ +│ ├── flow_config.yaml +│ ├── steps/ +│ │ └── {step_name}.yaml +│ ├── function_steps/ +│ │ └── {function_step}.py +│ └── functions/ +│ └── {function_name}.py +├── functions/ +│ ├── start_function.py +│ ├── end_function.py +│ └── {function_name}.py +├── topics/ +│ └── {topic_name}.yaml +└── project.yaml +~~~ + +This structure mirrors the parts of the agent that Agent Studio understands: settings, flows, functions, topics, channel configuration, and supporting resources. + +## Workflow 1 — CLI workflow + +The CLI workflow is the manual developer path. 
You use the ADK directly, edit the project locally, and push changes back to Agent Studio. + +### Step 1 — Initialise your project + +Link a local folder to an existing Agent Studio project. The agent must already exist in Agent Studio. + +~~~bash +poly init +poly init --region --account_id --project_id +~~~ + +This creates the local project structure and writes the metadata needed to connect the folder to Agent Studio. + +### Step 2 — Pull remote config and set up the environment + +Pull the current configuration into your local project. + +~~~bash +poly pull +poly pull -f +~~~ + +At this point, configure any API keys or environment variables needed for the project. + +!!! note "Run commands from the project folder" + + All CLI commands should be run from within the local project folder, unless you explicitly use the relevant path flag. + +### Step 3 — Run the agent locally + +Start an interactive chat session to confirm the connection works and inspect runtime behaviour. + +~~~bash +poly chat +poly chat --environment sandbox --channel voice +poly chat --functions --flows --state +~~~ + +### Step 4 — Review the docs and understand the SDK + +Use the CLI docs command to inspect the available resources and learn how they fit together. + +~~~bash +poly docs --all +poly docs flows functions topics +~~~ + +Resource-specific documentation is available for agent settings, voice settings, chat settings, flows, functions, topics, entities, handoffs, variants, SMS templates, variables, speech recognition, response control, and experimental config. + +### Step 5 — Customise the agent + +This is the core build phase. Create a branch, edit resources locally, track changes, and push them back. + +#### Branching + +~~~bash +poly branch create my-feature +poly branch switch my-feature +poly branch current +poly branch list +~~~ + +#### Functions + +Create or modify backend functions the agent calls at runtime. 
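As a sketch, a global function might look like the following. The file name, function name, state variable, and return string are all illustrative, and `conv` stands in for the ADK `Conversation` object; the return value is assumed to be guidance surfaced back to the model, as in the functions reference.

~~~python
# functions/opening_hours_requested.py (hypothetical file name)
# Called by the LLM via {{fn:opening_hours_requested}} in a prompt or topic.
# `conv` stands in for the ADK Conversation object; only conv.state is used here.

def opening_hours_requested(conv):
    # Record that the caller asked, so later logic can branch on it.
    conv.state.asked_opening_hours = True
    # The return value is assumed to be guidance surfaced back to the model.
    return "Tell the user we are open 9am-5pm, Monday to Friday."
~~~

Naming the function after the event that should trigger it (here, the caller asking about opening hours) makes it easier for the model to decide when to call it.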
+ +Typical locations include: + +- global functions under the functions directory +- lifecycle hooks such as start and end functions +- flow-scoped functions +- function steps inside flows + +#### Topics + +Add or edit knowledge-base topics used for retrieval. + +#### Agent settings + +Update the personality, role, and rules that define the agent’s global behaviour. + +#### Flows + +Build conversation flows, including prompts, step transitions, entities, and function steps. + +#### Channel-specific settings + +Adjust greeting messages, disclaimers, and style prompts for voice and chat. + +#### Handoffs, SMS, and variants + +Define escalation paths, SMS templates, and per-variant configuration. + +#### ASR and response control + +Tune speech recognition and control TTS behaviour. + +#### Experimental config + +Enable or tune experimental features where needed. + +### Step 6 — Track and validate changes + +Inspect the local changes before pushing. + +~~~bash +poly status +poly diff +poly diff +poly validate +poly format +poly revert --all +poly revert +~~~ + +### Step 7 — Push changes + +Push the local changes back to Agent Studio. + +~~~bash +poly push +poly push --dry-run +poly push -f +poly push --skip-validation +~~~ + +### Step 8 — Test against sandbox + +Validate the behaviour in the sandbox environment. + +~~~bash +poly chat --environment sandbox +poly chat --environment sandbox --functions --flows +~~~ + +### Step 9 — Iterate on quality + +Review, refine, and test again. You can also use the review command to share diffs with teammates. + +~~~bash +poly review +poly review --before main --after my-feature +~~~ + +Make test calls, inspect transcripts, refine prompts, flows, and functions, and then re-push. + +### Step 10 — Deploy to production + +Once the changes are pushed and validated, merge the branch in Agent Studio and deploy the project. 
+ +### Step 11 — Monitor performance + +Use Agent Studio analytics to monitor containment, CSAT, handle time, and flagged transcripts. Pull changes back locally as needed and continue iterating. + +## Workflow 2 — AI-agent workflow + +The AI-agent workflow uses a coding agent such as **Claude Code** to execute the same development loop on your behalf. + +
+ +- **You provide the brief** + + --- + + Requirements, business rules, integrations, and API documentation. + +- **The coding agent generates the project** + + --- + + It uses the ADK to inspect the SDK, generate files, and push the result. + +- **You review and deploy** + + --- + + Agent Studio remains the place where the work is checked, merged, and deployed. + +
+ +!!! info "No manual flow-building required" + + In this workflow, the coding agent does the heavy lifting of building the agent, while Agent Studio remains the place where the work is reviewed, tested, and deployed. + +### Step 1 — Gather requirements Collect the project context before you begin. @@ -60,7 +297,7 @@ The more complete and structured the input is, the better the generated output i This workflow works best when you gather the important information up front rather than feeding it in piecemeal later. -## Step 2 — Create a new project in Agent Studio +### Step 2 — Create a new project in Agent Studio Open **Agent Studio** and create a brand-new project. @@ -76,7 +313,7 @@ That blank starting point is intentional. The coding agent will populate the pro Agent Studio is where the project lives, but the coding agent does most of the actual building work. -## Step 3 — Launch the coding agent via the CLI +### Step 3 — Launch the coding agent via the CLI Open your command line interface and launch your coding agent. @@ -86,9 +323,16 @@ At this stage: - the Agent Studio project must already exist - the coding agent should be linked to the project using the ADK +A typical starting point is: + +~~~bash +poly init --region --account_id --project_id +poly pull +~~~ + The ADK acts as the bridge between your local development environment and Agent Studio in the cloud. It allows the coding agent to read from and write back to the project. -## Step 4 — Feed context to the coding agent +### Step 4 — Feed context to the coding agent Now provide the coding agent with the information you gathered earlier. @@ -99,9 +343,15 @@ This is the core input step. Include: - relevant internal context - useful patterns or best practices from previous projects +The coding agent can also use the docs command to inspect the SDK and understand the available resources. 
+ +~~~bash +poly docs --all +~~~ + Reusing proven patterns from earlier projects can improve both speed and output quality. -## Step 5 — Let the agent build +### Step 5 — Let the agent build Once the context has been provided, let the coding agent generate the project files. @@ -137,7 +387,7 @@ The coding agent can produce the assets needed for the agent, including: The generated assets are structured for Agent Studio and prepared to be pushed back to the platform. -## Step 6 — Push back to Agent Studio +### Step 6 — Push back to Agent Studio Once the coding agent has generated the project files, it uses the ADK to push them back into Agent Studio. @@ -154,7 +404,7 @@ When you switch to that branch in Agent Studio, you should see the generated cha The branch-based workflow makes it possible to inspect what was generated before merging it into the main project. -## Step 7 — Review, merge, and deploy +### Step 7 — Review, merge, and deploy Review the generated work inside Agent Studio. @@ -172,23 +422,38 @@ Once everything looks right: At that point, the agent is live. 
+## CLI command overview + +| Command | Description | +|---|---| +| **poly init** | Initialise a new project locally | +| **poly pull** | Pull remote config into the local project | +| **poly push** | Push local changes to Agent Studio | +| **poly status** | List changed files | +| **poly diff** | Show diffs | +| **poly revert** | Revert local changes | +| **poly branch** | Branch management | +| **poly format** | Format resource files | +| **poly validate** | Validate project configuration locally | +| **poly review** | Create a diff review page | +| **poly chat** | Start an interactive session with the agent | +| **poly docs** | Output resource documentation | + ## Summary | Metric | Value | |---|---| -| **Steps** | 7 | -| **Total time** | ~30 minutes | -| **Manual flows built** | 0 | +| **Manual workflow** | Supported | +| **AI-agent workflow** | Supported | +| **Production-ready path** | Yes | The overall loop is straightforward: -1. provide context -2. let the coding agent generate the project assets -3. use the ADK to push them into Agent Studio +1. create or connect a project +2. build locally using the ADK +3. push to Agent Studio 4. review, merge, and deploy -By reusing patterns from previous projects, the coding agent can produce production-grade output much faster than a fully manual workflow. - ## Next steps
@@ -198,6 +463,7 @@ By reusing patterns from previous projects, the coding agent can produce product --- Explore the available ADK commands and options. + [Open CLI reference](../reference/cli.md) - **Walkthrough video** @@ -205,6 +471,7 @@ By reusing patterns from previous projects, the coding agent can produce product --- See the workflow demonstrated in video form. + [Open the walkthrough video](../get-started/walkthrough-video.md)
\ No newline at end of file From fb6c011bcaf7dc398465d0b1b9cdcfe87c84a199 Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Wed, 25 Mar 2026 14:15:31 +0000 Subject: [PATCH 02/14] fix formatting issues --- docs/docs/reference/agent_settings.md | 10 +++---- docs/docs/reference/chat_settings.md | 4 +-- docs/docs/reference/flows.md | 6 ++-- docs/docs/reference/testing.md | 2 +- docs/docs/reference/topics.md | 4 +-- docs/docs/reference/variables.md | 2 +- docs/docs/reference/variants.md | 2 +- docs/docs/reference/voice_settings.md | 2 +- docs/docs/tutorials/build-an-agent.md | 40 +++++++++++++-------------- src/poly/docs/agent_settings.md | 10 +++---- src/poly/docs/chat_settings.md | 2 +- src/poly/docs/entities.md | 2 +- src/poly/docs/flows.md | 2 +- src/poly/docs/functions.md | 8 +++--- src/poly/docs/response_control.md | 4 +-- src/poly/docs/speech_recognition.md | 6 ++-- src/poly/docs/topics.md | 16 +++++------ src/poly/docs/variables.md | 14 +++++----- src/poly/docs/variants.md | 4 +-- src/poly/docs/voice_settings.md | 6 ++-- 20 files changed, 73 insertions(+), 73 deletions(-) diff --git a/docs/docs/reference/agent_settings.md b/docs/docs/reference/agent_settings.md index 9840240..a41d777 100644 --- a/docs/docs/reference/agent_settings.md +++ b/docs/docs/reference/agent_settings.md @@ -1,12 +1,12 @@ --- title: Agent settings -description: Define the agent’s identity, role, and behavioral rules in the PolyAI ADK. +description: Define the agent's identity, role, and behavioral rules in the PolyAI ADK. --- # Agent settings

-Agent settings define the agent’s identity and behavioral rules. +Agent settings define the agent's identity and behavioral rules. They live in agent_settings/ and are made up of personality, role, and rules resources.

@@ -32,7 +32,7 @@ agent_settings/ --- - Controls the agent’s tone and conversational style. + Controls the agent's tone and conversational style. - **Role** @@ -56,7 +56,7 @@ agent_settings/ ## Personality -The `personality.yaml` file controls the agent’s conversational tone. +The `personality.yaml` file controls the agent's conversational tone. ### Fields @@ -102,7 +102,7 @@ custom: "" The `role.yaml` file defines what the agent is. -This is usually the agent’s role, title, or function in the business context. +This is usually the agent's role, title, or function in the business context. ### Fields diff --git a/docs/docs/reference/chat_settings.md b/docs/docs/reference/chat_settings.md index 27ef64f..8f9b917 100644 --- a/docs/docs/reference/chat_settings.md +++ b/docs/docs/reference/chat_settings.md @@ -63,7 +63,7 @@ Use this for chat-specific guidance such as: - using bullet points for lists - adjusting formatting for readability -This is separate from the agent’s broader personality. Use it to control how the agent communicates specifically in web chat. +This is separate from the agent's broader personality. Use it to control how the agent communicates specifically in web chat. ### Fields @@ -100,7 +100,7 @@ style_prompt: --- - Define the agent’s overall identity, role, and rules. + Define the agent's overall identity, role, and rules. [Open agent settings](./agent_settings.md) - **Voice settings** diff --git a/docs/docs/reference/flows.md b/docs/docs/reference/flows.md index a95b438..c93d23d 100644 --- a/docs/docs/reference/flows.md +++ b/docs/docs/reference/flows.md @@ -6,7 +6,7 @@ description: Define multi-step processes that guide the agent through structured # Flows

-Flows choreograph multi-step processes. At any given moment, the model only sees the current step’s prompt and tools. +Flows choreograph multi-step processes. At any given moment, the model only sees the current step's prompt and tools.

A good flow keeps each step focused on a single task. Use Python for branching, validation, and routing logic, and use prompts for conversational behavior. @@ -108,7 +108,7 @@ start_step: Collect Details ## Step types -A step represents the agent’s current position in the flow. +A step represents the agent's current position in the flow. There are three step types: @@ -166,7 +166,7 @@ A condition can: Use the correct step identifier depending on target type: -- **Default step** or **advanced step**: use the step’s `name` +- **Default step** or **advanced step**: use the step's `name` - **Function step**: use the Python filename without `.py` ## Advanced steps diff --git a/docs/docs/reference/testing.md b/docs/docs/reference/testing.md index c01225b..93b2741 100644 --- a/docs/docs/reference/testing.md +++ b/docs/docs/reference/testing.md @@ -40,7 +40,7 @@ Testing is useful when you want to: --- - Use `pytest` to run the project’s test suite locally. + Use `pytest` to run the project's test suite locally. - **Validation** diff --git a/docs/docs/reference/topics.md b/docs/docs/reference/topics.md index ae0997c..fb3d18e 100644 --- a/docs/docs/reference/topics.md +++ b/docs/docs/reference/topics.md @@ -6,7 +6,7 @@ description: Define knowledge-base topics that the agent can retrieve and act on # Topics

-Topics are the agent’s knowledge base. They are queried through retrieval-augmented generation (RAG), and when a user’s input matches a topic, the agent retrieves its content and follows its actions. +Topics are the agent's knowledge base. They are queried through retrieval-augmented generation (RAG), and when a user's input matches a topic, the agent retrieves its content and follows its actions.

Topics are how you teach the agent facts and guidance about specific subject areas without putting everything into flows or rules. @@ -164,7 +164,7 @@ Use markdown headers like `##` and `###` to break up branches or conditions. - prefer structured `##` branches in actions - disable topics with `enabled: false` during development instead of deleting them -!!! tip "Tell, don’t script" +!!! tip "Tell, don't script" Prefer instructions like “Tell the user that ...” over hard-coded dialogue such as `Say: '...'`. This gives the agent more room to behave naturally, especially across languages. diff --git a/docs/docs/reference/variables.md b/docs/docs/reference/variables.md index 1262201..f08b646 100644 --- a/docs/docs/reference/variables.md +++ b/docs/docs/reference/variables.md @@ -6,7 +6,7 @@ description: Understand how state variables are discovered, stored, and referenc # Variables

-Variables are virtual resources that represent state values used in the agent’s code. +Variables are virtual resources that represent state values used in the agent's code.

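A variable comes into existence simply because function code writes it; a minimal sketch with hypothetical names:

~~~python
# Hypothetical global function: assigning to conv.state is what makes a
# "caller_verified" variable appear as a resource in the project.
def mark_caller_verified(conv):
    conv.state.caller_verified = True
~~~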
Unlike most resources, variables do not have files on disk. They are discovered automatically by scanning function code for `conv.state.` usage. diff --git a/docs/docs/reference/variants.md b/docs/docs/reference/variants.md index 7ca2d4e..4cc28bf 100644 --- a/docs/docs/reference/variants.md +++ b/docs/docs/reference/variants.md @@ -172,7 +172,7 @@ Common uses include: !!! warning "Missing values will fail validation" - If a variant is missing from an attribute’s `values` map, validation will fail. + If a variant is missing from an attribute's `values` map, validation will fail. ## Best practices diff --git a/docs/docs/reference/voice_settings.md b/docs/docs/reference/voice_settings.md index 43ce261..7ef3965 100644 --- a/docs/docs/reference/voice_settings.md +++ b/docs/docs/reference/voice_settings.md @@ -72,7 +72,7 @@ Use this for voice-specific guidance such as: - spoken tone - conversational pacing -This is separate from the agent’s broader personality. Use it to shape how the agent should sound specifically on phone calls. +This is separate from the agent's broader personality. Use it to shape how the agent should sound specifically on phone calls. ### Fields diff --git a/docs/docs/tutorials/build-an-agent.md b/docs/docs/tutorials/build-an-agent.md index 24b86d8..32f8c55 100644 --- a/docs/docs/tutorials/build-an-agent.md +++ b/docs/docs/tutorials/build-an-agent.md @@ -96,11 +96,11 @@ When an Agent Studio project is linked locally, it follows this general structur This structure mirrors the parts of the agent that Agent Studio understands: settings, flows, functions, topics, channel configuration, and supporting resources. -## Workflow 1 — CLI workflow +## Workflow 1 - CLI workflow The CLI workflow is the manual developer path. You use the ADK directly, edit the project locally, and push changes back to Agent Studio. -### Step 1 — Initialise your project +### Step 1 - Initialise your project Link a local folder to an existing Agent Studio project. 
The agent must already exist in Agent Studio. @@ -111,7 +111,7 @@ poly init --region --account_id --project_id This creates the local project structure and writes the metadata needed to connect the folder to Agent Studio. -### Step 2 — Pull remote config and set up the environment +### Step 2 - Pull remote config and set up the environment Pull the current configuration into your local project. @@ -126,7 +126,7 @@ At this point, configure any API keys or environment variables needed for the pr All CLI commands should be run from within the local project folder, unless you explicitly use the relevant path flag. -### Step 3 — Run the agent locally +### Step 3 - Run the agent locally Start an interactive chat session to confirm the connection works and inspect runtime behaviour. @@ -136,7 +136,7 @@ poly chat --environment sandbox --channel voice poly chat --functions --flows --state ~~~ -### Step 4 — Review the docs and understand the SDK +### Step 4 - Review the docs and understand the SDK Use the CLI docs command to inspect the available resources and learn how they fit together. @@ -147,7 +147,7 @@ poly docs flows functions topics Resource-specific documentation is available for agent settings, voice settings, chat settings, flows, functions, topics, entities, handoffs, variants, SMS templates, variables, speech recognition, response control, and experimental config. -### Step 5 — Customise the agent +### Step 5 - Customise the agent This is the core build phase. Create a branch, edit resources locally, track changes, and push them back. @@ -199,7 +199,7 @@ Tune speech recognition and control TTS behaviour. Enable or tune experimental features where needed. -### Step 6 — Track and validate changes +### Step 6 - Track and validate changes Inspect the local changes before pushing. @@ -213,7 +213,7 @@ poly revert --all poly revert ~~~ -### Step 7 — Push changes +### Step 7 - Push changes Push the local changes back to Agent Studio. 
@@ -224,7 +224,7 @@ poly push -f poly push --skip-validation ~~~ -### Step 8 — Test against sandbox +### Step 8 - Test against sandbox Validate the behaviour in the sandbox environment. @@ -233,7 +233,7 @@ poly chat --environment sandbox poly chat --environment sandbox --functions --flows ~~~ -### Step 9 — Iterate on quality +### Step 9 - Iterate on quality Review, refine, and test again. You can also use the review command to share diffs with teammates. @@ -244,15 +244,15 @@ poly review --before main --after my-feature Make test calls, inspect transcripts, refine prompts, flows, and functions, and then re-push. -### Step 10 — Deploy to production +### Step 10 - Deploy to production Once the changes are pushed and validated, merge the branch in Agent Studio and deploy the project. -### Step 11 — Monitor performance +### Step 11 - Monitor performance Use Agent Studio analytics to monitor containment, CSAT, handle time, and flagged transcripts. Pull changes back locally as needed and continue iterating. -## Workflow 2 — AI-agent workflow +## Workflow 2 - AI-agent workflow The AI-agent workflow uses a coding agent such as **Claude Code** to execute the same development loop on your behalf. @@ -282,7 +282,7 @@ The AI-agent workflow uses a coding agent such as **Claude Code** to execute the In this workflow, the coding agent does the heavy lifting of building the agent, while Agent Studio remains the place where the work is reviewed, tested, and deployed. -### Step 1 — Gather requirements +### Step 1 - Gather requirements Collect the project context before you begin. @@ -301,7 +301,7 @@ The more complete and structured the input is, the better the generated output i This workflow works best when you gather the important information up front rather than feeding it in piecemeal later. -### Step 2 — Create a new project in Agent Studio +### Step 2 - Create a new project in Agent Studio Open **Agent Studio** and create a brand-new project. 
@@ -317,7 +317,7 @@ That blank starting point is intentional. The coding agent will populate the pro Agent Studio is where the project lives, but the coding agent does most of the actual building work. -### Step 3 — Launch the coding agent via the CLI +### Step 3 - Launch the coding agent via the CLI Open your command line interface and launch your coding agent. @@ -336,7 +336,7 @@ poly pull The ADK acts as the bridge between your local development environment and Agent Studio in the cloud. It allows the coding agent to read from and write back to the project. -### Step 4 — Feed context to the coding agent +### Step 4 - Feed context to the coding agent Now provide the coding agent with the information you gathered earlier. @@ -355,7 +355,7 @@ poly docs --all Reusing proven patterns from earlier projects can improve both speed and output quality. -### Step 5 — Let the agent build +### Step 5 - Let the agent build Once the context has been provided, let the coding agent generate the project files. @@ -391,7 +391,7 @@ The coding agent can produce the assets needed for the agent, including: The generated assets are structured for Agent Studio and prepared to be pushed back to the platform. -### Step 6 — Push back to Agent Studio +### Step 6 - Push back to Agent Studio Once the coding agent has generated the project files, it uses the ADK to push them back into Agent Studio. @@ -408,7 +408,7 @@ When you switch to that branch in Agent Studio, you should see the generated cha The branch-based workflow makes it possible to inspect what was generated before merging it into the main project. -### Step 7 — Review, merge, and deploy +### Step 7 - Review, merge, and deploy Review the generated work inside Agent Studio. 
diff --git a/src/poly/docs/agent_settings.md b/src/poly/docs/agent_settings.md index ddf5a9f..5327dbe 100644 --- a/src/poly/docs/agent_settings.md +++ b/src/poly/docs/agent_settings.md @@ -51,11 +51,11 @@ custom: "" Plain-text behavioral instructions the agent follows on every turn. This is a key file for shaping agent behavior. ### Supported references -- `{{fn:function_name}}` — global functions -- `{{twilio_sms:template_name}}` — SMS templates -- `{{ho:handoff_name}}` — handoffs -- `{{attr:attribute_name}}` — variant attributes -- `{{vrbl:variable_name}}` — variables +- `{{fn:function_name}}` - global functions +- `{{twilio_sms:template_name}}` - SMS templates +- `{{ho:handoff_name}}` - handoffs +- `{{attr:attribute_name}}` - variant attributes +- `{{vrbl:variable_name}}` - variables ### Example ```text diff --git a/src/poly/docs/chat_settings.md b/src/poly/docs/chat_settings.md index 7c759db..12d314b 100644 --- a/src/poly/docs/chat_settings.md +++ b/src/poly/docs/chat_settings.md @@ -24,7 +24,7 @@ greeting: ## Style Prompt -Channel-specific instructions that shape how the agent writes. Separate from personality — use this for chat-specific guidance (e.g. "keep responses concise", "use bullet points for lists"). +Channel-specific instructions that shape how the agent writes. Separate from personality - use this for chat-specific guidance (e.g. "keep responses concise", "use bullet points for lists"). ### Fields - **prompt**: Free-text style instructions. No resource references allowed. diff --git a/src/poly/docs/entities.md b/src/poly/docs/entities.md index 6763059..f119932 100644 --- a/src/poly/docs/entities.md +++ b/src/poly/docs/entities.md @@ -35,7 +35,7 @@ Each entity has: - **In flow prompts**: `{{entity:entity_name}}` to reference the collected value. - **In function steps**: `conv.entities.entity_name.value` to read; check with `if conv.entities.entity_name: ...`. 
-- **In default step conditions**: `required_entities` gates a condition — it only triggers once all listed entities are collected. +- **In default step conditions**: `required_entities` gates a condition - it only triggers once all listed entities are collected. - **In default steps**: `extracted_entities` tells the agent which entities to collect in that step. ASR biasing is automatically configured based on entity types. ## Example diff --git a/src/poly/docs/flows.md b/src/poly/docs/flows.md index d49dbb3..b220200 100644 --- a/src/poly/docs/flows.md +++ b/src/poly/docs/flows.md @@ -60,7 +60,7 @@ These define how the agent can transition out of one default node. They can tran Example: - **condition_type**: `step_condition` (go to another step) or `exit_flow_condition` (exit flow) - **description**: When this condition applies -- **child_step**: Next step — **only for step_condition**; omit for exit_flow_condition +- **child_step**: Next step - **only for step_condition**; omit for exit_flow_condition - **required_entities**: Entities that must be collected before this condition can trigger **child_step rules:** diff --git a/src/poly/docs/functions.md b/src/poly/docs/functions.md index b675c35..f1fd0a9 100644 --- a/src/poly/docs/functions.md +++ b/src/poly/docs/functions.md @@ -7,8 +7,8 @@ Functions are Python files that add deterministic logic to your agent. They can ## Location ``` functions/ # Global functions -├── start_function.py # Optional — runs once at call start -├── end_function.py # Optional — runs once at call end +├── start_function.py # Optional - runs once at call start +├── end_function.py # Optional - runs once at call end └── {function_name}.py # Called by LLM via {{fn:function_name}} flows/{flow_name}/ ├── functions/ @@ -96,7 +96,7 @@ Prefer naming after the **event that should trigger the call** (e.g. `first_name ### Start function (`start_function.py`) - Runs **once at call start**, before the first user input. 
-- Signature: `def start_function(conv: Conversation):` — no `flow`, no `@func_parameter`. +- Signature: `def start_function(conv: Conversation):` - no `flow`, no `@func_parameter`. - Typical use: initialize state, read SIP headers, set language, write initial metrics, then `conv.goto_flow("...")`. ### End function (`end_function.py`) @@ -111,7 +111,7 @@ If a function file isn't intended to be called by the LLM, it still needs a main `conv.state` is preserved between turns. Use it to set variables for future logic or to be used in prompts - **Set**: `conv.state.variable_name = value` - **Read**: `conv.state.variable_name` (returns `None` if not set) -- **In prompts**: `$variable` or `{{vrbl:variable}}` (not `conv.state.variable`). No `$var.attribute` — stringify in Python first. +- **In prompts**: `$variable` or `{{vrbl:variable}}` (not `conv.state.variable`). No `$var.attribute` - stringify in Python first. ### Counters Use `conv.state.counter` (initialize and increment) to avoid infinite retries. After a limit (e.g. 3), hand off or exit. 
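The counter pattern can be sketched as follows. The function and variable names are illustrative, `conv` stands in for the ADK `Conversation` object, and the returned strings are assumed to be instructions surfaced to the model; the `or 0` initialization relies on unset state variables reading back as `None`.

~~~python
# Sketch of the retry-counter pattern with illustrative names.
# Unset state variables read back as None, so `or 0` initializes the counter.

def postcode_not_understood(conv):
    retries = (conv.state.postcode_retries or 0) + 1
    conv.state.postcode_retries = retries
    if retries >= 3:
        # Stop retrying after the limit and escalate instead.
        return "Apologise and hand the caller over to a human agent."
    return "Ask the caller to repeat their postcode."
~~~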
diff --git a/src/poly/docs/response_control.md b/src/poly/docs/response_control.md index ca97f7e..52289c1 100644 --- a/src/poly/docs/response_control.md +++ b/src/poly/docs/response_control.md @@ -6,8 +6,8 @@ Response control resources manage what the agent says before it reaches the user ``` voice/response_control/ -├── pronunciations.yaml # Optional — TTS pronunciation rules -└── phrase_filtering.yaml # Optional — block/intercept phrases before TTS +├── pronunciations.yaml # Optional - TTS pronunciation rules +└── phrase_filtering.yaml # Optional - block/intercept phrases before TTS ``` ## Pronunciations (`pronunciations.yaml`) diff --git a/src/poly/docs/speech_recognition.md b/src/poly/docs/speech_recognition.md index 47d7fa6..1fd428d 100644 --- a/src/poly/docs/speech_recognition.md +++ b/src/poly/docs/speech_recognition.md @@ -7,8 +7,8 @@ Speech recognition resources control how the agent processes user speech input o ``` voice/speech_recognition/ ├── asr_settings.yaml # Barge-in, interaction style -├── keyphrase_boosting.yaml # Optional — bias ASR toward specific words -└── transcript_corrections.yaml # Optional — regex corrections on ASR output +├── keyphrase_boosting.yaml # Optional - bias ASR toward specific words +└── transcript_corrections.yaml # Optional - regex corrections on ASR output ``` ## ASR Settings (`asr_settings.yaml`) @@ -39,7 +39,7 @@ Bias the speech recognizer toward specific words or phrases (brand names, produc ### Structure A `keyphrases` list where each entry has: - **keyphrase** (required): The word or phrase to boost. -- **level**: Boost strength — `default`, `boosted`, or `maximum`. Default: `default`. +- **level**: Boost strength - `default`, `boosted`, or `maximum`. Default: `default`. ### Example ```yaml diff --git a/src/poly/docs/topics.md b/src/poly/docs/topics.md index c45ebc4..e8045f4 100644 --- a/src/poly/docs/topics.md +++ b/src/poly/docs/topics.md @@ -41,7 +41,7 @@ actions: |- ## Example queries - Maximum **20 queries**. 
- Cover different ways a user might ask about the same thing. -- Don't try to cover every minor variation — the model generalizes. +- Don't try to cover every minor variation - the model generalizes. ## Content - Factual information only. This is what gets retrieved via RAG. @@ -51,18 +51,18 @@ actions: |- ## Actions - Behavioral instructions: what to say, when to call functions, how to branch. - **This is the only place** where you can use references in a topic: - - `{{fn:function_name}}` — call a global function - - `{{fn:function_name}}('arg')` — call with an argument - - `{{attr:attribute_name}}` — variant attribute - - `{{twilio_sms:template_name}}` — SMS template - - `{{ho:handoff_name}}` — handoff - - `$variable` — state variable + - `{{fn:function_name}}` - call a global function + - `{{fn:function_name}}('arg')` - call with an argument + - `{{attr:attribute_name}}` - variant attribute + - `{{twilio_sms:template_name}}` - SMS template + - `{{ho:handoff_name}}` - handoff + - `$variable` - state variable - **Branching**: Use markdown headers (`##`, `###`) for conditional sections. - Keep actions clear and scannable; avoid one long paragraph with mixed conditions. ## Best practices - Don't prompt the model to `"Say: '...'"` (hurts multilingual support); use `"Tell the user that ..."`. - Prefer structured actions with `## Conditional Branch` sections over a single dense paragraph. -- Keep content and actions separate — content is facts, actions is behavior. +- Keep content and actions separate - content is facts, actions is behavior. - One topic per subject area. If a topic is getting too large, split it. - Disable topics with `enabled: false` rather than deleting them during development. 
diff --git a/src/poly/docs/variables.md b/src/poly/docs/variables.md index ed8305a..0bc994a 100644 --- a/src/poly/docs/variables.md +++ b/src/poly/docs/variables.md @@ -2,13 +2,13 @@ ## Overview -Variables are virtual resources that represent state values used in the agent's code. Unlike other resources, variables do not have files on disk — they are automatically discovered by scanning function code for `conv.state.` usage. +Variables are virtual resources that represent state values used in the agent's code. Unlike other resources, variables do not have files on disk - they are automatically discovered by scanning function code for `conv.state.` usage. ## How variables work When you write `conv.state.customer_name = "Alice"` in a function, `customer_name` becomes a tracked variable. The ADK discovers these by scanning all function files (global functions, flow functions, and function steps) for state attribute access patterns. -Variables can be referenced in prompts and templates using `$variable_name` or `{{vrbl:variable_name}}` — these are interchangeable. Prefer `{{vrbl:variable_name}}` as it is validated by the ADK. +Variables can be referenced in prompts and templates using `$variable_name` or `{{vrbl:variable_name}}` - these are interchangeable. Prefer `{{vrbl:variable_name}}` as it is validated by the ADK. ## Setting state in code ```python @@ -26,9 +26,9 @@ if conv.state.is_verified: ## Using variables in prompts and templates -In flow step prompts, topic actions, SMS templates, and other text fields, use either syntax — they are interchangeable: +In flow step prompts, topic actions, SMS templates, and other text fields, use either syntax - they are interchangeable: -- `{{vrbl:variable_name}}` (preferred — validated by the ADK) +- `{{vrbl:variable_name}}` (preferred - validated by the ADK) - `$variable_name` ``` @@ -39,12 +39,12 @@ The customer's name is $customer_name and their balance is $account_balance. 
text: "Hi {{vrbl:customer_name}}, your booking is confirmed for {{vrbl:booking_date}}." ``` -Do not use `conv.state.variable` syntax in prompts — use `$variable` or `{{vrbl:variable}}` only. +Do not use `conv.state.variable` syntax in prompts - use `$variable` or `{{vrbl:variable}}` only. -Do not use `$var.attribute` — stringify complex objects in Python first, then store the string in state. +Do not use `$var.attribute` - stringify complex objects in Python first, then store the string in state. ## Best practices -- Variables are discovered automatically — no manual registration needed. +- Variables are discovered automatically - no manual registration needed. - Use descriptive snake_case names. - Initialize state variables in `start_function` or early in the flow to avoid `None` values. - Keep variable names consistent across functions and prompts. diff --git a/src/poly/docs/variants.md b/src/poly/docs/variants.md index 2da8050..ab0599f 100644 --- a/src/poly/docs/variants.md +++ b/src/poly/docs/variants.md @@ -12,11 +12,11 @@ Variant attributes provide per-variant configuration (per location, environment, The file has two top-level keys: -### `variants` — List of variants +### `variants` - List of variants - **name** (required): Unique identifier (e.g. a location name, environment, or tenant). Used as the key in attribute `values`. - **is_default** (optional): Exactly one variant must have `is_default: true`. Used when no variant is resolved at runtime. -### `attributes` — List of attributes +### `attributes` - List of attributes - **name**: Attribute identifier (snake_case recommended), e.g. `greeting_name`, `support_phone_number`. - **values**: Map from **variant name** to string value. Must have one entry per variant. Values can be `""`, a single line, or multi-line (`|-`). 
diff --git a/src/poly/docs/voice_settings.md b/src/poly/docs/voice_settings.md index ce4ef4b..d4f254f 100644 --- a/src/poly/docs/voice_settings.md +++ b/src/poly/docs/voice_settings.md @@ -24,7 +24,7 @@ greeting: ## Style Prompt -Channel-specific instructions that shape how the agent speaks. Separate from personality — use this for voice-specific guidance (e.g. phrasing, verbosity, tone of speech). +Channel-specific instructions that shape how the agent speaks. Separate from personality - use this for voice-specific guidance (e.g. phrasing, verbosity, tone of speech). ### Fields - **prompt**: Free-text style instructions. No resource references allowed. @@ -66,5 +66,5 @@ disclaimer_messages: ``` ## Related voice resources -- [Speech Recognition](speech_recognition.md) — ASR settings, keyphrase boosting, transcript corrections -- [Response Control](response_control.md) — pronunciations, phrase filters +- [Speech Recognition](speech_recognition.md) - ASR settings, keyphrase boosting, transcript corrections +- [Response Control](response_control.md) - pronunciations, phrase filters From 6baf8715fc8cc4dc2bed21b19f24b83d4b29cf7e Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Thu, 26 Mar 2026 11:13:54 +0000 Subject: [PATCH 03/14] docs: improve quality, clarity, and active language across all pages MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Remove AI-isms: "heavy lifting", "trust the model", "composable", passive agent-speak - Use active, task-oriented language throughout (headers, instructions, tips) - Remove internal history paragraph from what-is-the-adk (Local Agent Studio provenance) - Add Python 3.14 install guidance and checklist to prerequisites - Add standard uv install command (astral.sh) to prerequisites - Remove "Running tests" from installation.md (belongs in testing.md) - Add `uv run pytest` variant to testing.md - Remove low-value Summary table from build-an-agent tutorial - Make VS Code extension a 
proper hyperlink in tooling.md - Fix "Pre-requisites" → "Prerequisites" across index and access pages - Clarify import line note in functions.md - Improve CLI reference lead paragraph Co-Authored-By: Claude Sonnet 4.6 --- docs/docs/concepts/anti-patterns.md | 6 +-- .../concepts/multi-user-and-guardrails.md | 16 +++--- docs/docs/concepts/working-locally.md | 24 ++------- docs/docs/get-started/access-and-waitlist.md | 22 +++----- docs/docs/get-started/first-commands.md | 16 ++---- docs/docs/get-started/installation.md | 18 ++----- docs/docs/get-started/prerequisites.md | 54 +++++++++---------- docs/docs/get-started/what-is-the-adk.md | 40 +++++++------- docs/docs/index.md | 21 ++++---- docs/docs/reference/cli.md | 6 +-- docs/docs/reference/flows.md | 2 +- docs/docs/reference/functions.md | 4 +- docs/docs/reference/testing.md | 6 +++ docs/docs/reference/tooling.md | 6 +-- docs/docs/reference/topics.md | 8 +-- docs/docs/tutorials/build-an-agent.md | 42 ++++++--------- 16 files changed, 117 insertions(+), 174 deletions(-) diff --git a/docs/docs/concepts/anti-patterns.md b/docs/docs/concepts/anti-patterns.md index aa02f0a..37d39d8 100644 --- a/docs/docs/concepts/anti-patterns.md +++ b/docs/docs/concepts/anti-patterns.md @@ -3,7 +3,7 @@ title: Common anti-patterns description: Avoid common mistakes when building flows, writing prompts, and handling control flow in the PolyAI ADK. --- -This page collects common implementation mistakes that make agents harder to reason about, harder to maintain, or more likely to behave incorrectly at runtime. +This page collects common implementation mistakes that make agents harder to predict, harder to maintain, or more likely to behave incorrectly at runtime. The general rule is simple: keep prompts focused on conversation, keep Python focused on deterministic logic, and make control flow explicit. 
@@ -71,9 +71,9 @@ Implement the check in Python and transition to the correct step or flow explici When branching logic is buried in prompts: -- behavior becomes harder to test +- behavior becomes harder to test and verify - routing becomes harder to debug -- the agent becomes more dependent on model interpretation for deterministic behavior +- deterministic behavior becomes dependent on how the model interprets the instruction
diff --git a/docs/docs/concepts/multi-user-and-guardrails.md b/docs/docs/concepts/multi-user-and-guardrails.md index f678edb..0b0c068 100644 --- a/docs/docs/concepts/multi-user-and-guardrails.md +++ b/docs/docs/concepts/multi-user-and-guardrails.md @@ -69,9 +69,7 @@ You can also use: ## Validation as a guardrail -Before pushing, the ADK can validate the project locally. - -This is one of the main guardrails in the workflow. It helps catch issues early, before they are pushed into Agent Studio. +Run `poly validate` before pushing to catch issues locally, before they reach Agent Studio. Examples of what validation protects against include: @@ -82,7 +80,7 @@ Examples of what validation protects against include: !!! info "Validate before pushing" - In collaborative workflows, `poly validate` should be treated as a normal part of the editing cycle, not as an optional afterthought. + In collaborative workflows, treat `poly validate` as a standard step in the editing cycle, not an optional one. ## Pulling and merge behavior @@ -94,25 +92,23 @@ poly pull If the pulled changes conflict with your own local edits, the ADK will merge them and surface merge markers where conflicts occur. -That means the local workflow is not isolated from Agent Studio UI work — both sides can affect branch state, and developers should be aware of that. +The local workflow is not isolated from Agent Studio UI work — both sides affect branch state. Keep that in mind when collaborating. ## Review workflow -When changes are ready for review, the ADK can help generate a review artifact using: +When changes are ready for review, generate a review artifact: ~~~bash poly review ~~~ -This can be used to compare: +Use this to compare: - local changes against the remote project - one branch against another - a feature branch against `main` or `sandbox` -A GitHub environment token is required for this step. 
- -The review flow is useful when you want to share changes without asking a reviewer to inspect the raw local filesystem. +A GitHub environment token is required. The output lets reviewers inspect changes without access to your local filesystem. ## Guardrails inherited from Agent Studio diff --git a/docs/docs/concepts/working-locally.md b/docs/docs/concepts/working-locally.md index 6a06fab..126a9fc 100644 --- a/docs/docs/concepts/working-locally.md +++ b/docs/docs/concepts/working-locally.md @@ -3,15 +3,9 @@ title: Working locally description: Understand how the PolyAI ADK maps Agent Studio projects onto a local development workflow. --- -The **PolyAI ADK** is a CLI tool and Python package for managing **PolyAI Agent Studio** projects locally. +With the ADK, you work on Agent Studio projects from your local machine instead of exclusively through the browser. -It provides a Git-like workflow for synchronizing project configuration between your local filesystem and the Agent Studio platform, so agent development can fit into normal engineering build, review, and collaboration cycles. - -## What “working locally” means - -With the ADK, you are not building an agent only inside the browser. - -Instead, you work with a project on your machine, where you can: +Your local filesystem becomes your primary editing surface. You can: - edit agent resources directly - review changes with Git-style workflows @@ -135,21 +129,11 @@ These placeholders are used in prompts, rules, topic actions, and related text f | `{{ho:handoff_name}}` | Handoff destination | Rules | | `{{vrbl:variable_name}}` | State variable | Prompts, topic actions, SMS templates | -These references make the local project composable: settings, prompts, and behaviors can refer to resources by name rather than hard-coding values. 
- -## Why local development matters - -Working locally makes the ADK especially useful for teams that want to: - -- review changes before deployment -- reuse patterns across projects -- work with large or complex resource trees -- generate or modify resources using coding agents -- fit agent development into existing engineering processes +These references let settings, prompts, and behaviors point to resources by name rather than repeating hard-coded values. !!! tip "A Git-like workflow for Agent Studio" - The ADK is easiest to understand if you think of it as a synchronization layer between your local project files and the Agent Studio platform. + Think of the ADK as a synchronization layer between your local files and the Agent Studio platform. ## Related pages diff --git a/docs/docs/get-started/access-and-waitlist.md b/docs/docs/get-started/access-and-waitlist.md index b6fd0c0..8306652 100644 --- a/docs/docs/get-started/access-and-waitlist.md +++ b/docs/docs/get-started/access-and-waitlist.md @@ -11,34 +11,26 @@ description: Learn how access to the PolyAI ADK currently works during the Early The **PolyAI ADK** is currently available through an **Early Access Program (EAP)**. -To use the ADK, you must have access to a **workspace in PolyAI Agent Studio** before using the tool. - ## Current availability -The ADK is not yet generally available. - -At the moment, access is limited to users who have been granted access through the Early Access Program. +The ADK is not yet generally available. Access is limited to participants in the Early Access Program. ## What you need -Before using the ADK, you must have: +To use the ADK, you must have: - access to a **PolyAI Agent Studio workspace** - an **API key** -Without these, you will not be able to use the ADK with Agent Studio projects. +Both are provided by your PolyAI contact. Without them, the ADK cannot connect to Agent Studio. 
## Requesting access -If you are an existing client and would like to gain access to the PolyAI ADK, or if you are interested in trying the ADK with Agent Studio, submit a request using the relevant waitlist or interest form. - -!!! info "Early access only" - - Access to the ADK is currently managed through the Early Access Program, so availability is controlled rather than open by default. +If you are an existing client or are interested in trying the ADK, submit a request using the waitlist or interest form linked above. ## Planned availability -The PolyAI ADK is currently scheduled for a **general availability release in Q2 2026**. +The PolyAI ADK is scheduled for **general availability in Q2 2026**. ## Next steps @@ -46,11 +38,11 @@ If you already have the required access, continue to the prerequisites page.
-- **Pre-requisites** +- **Prerequisites** --- Confirm the local tools and access requirements needed before installation. - [Open pre-requisites](./prerequisites.md) + [Open prerequisites](./prerequisites.md)
\ No newline at end of file diff --git a/docs/docs/get-started/first-commands.md b/docs/docs/get-started/first-commands.md index 77cc83a..72a80e6 100644 --- a/docs/docs/get-started/first-commands.md +++ b/docs/docs/get-started/first-commands.md @@ -9,13 +9,13 @@ Once the ADK is installed, the fastest way to get oriented is to inspect the CLI ## View top-level help -To see all available commands and options, run: +Run `poly --help` to see every available command: ~~~bash poly --help ~~~ -Each command also supports its own help output. For example: +Each command also accepts `--help` for its own flags and options: ~~~bash poly push --help @@ -95,15 +95,9 @@ The ADK provides the following core commands:
-## Recommended starting point +## Explore any command -A good way to explore the CLI is: - -1. run `poly --help` -2. identify the command you need -3. run that command with `--help` - -For example: +To learn what a command does and what flags it accepts, run it with `--help`: ~~~bash poly init --help @@ -113,7 +107,7 @@ poly push --help ## Next step -Once you understand the CLI shape, continue to the command reference or the tutorial. +Continue to the command reference for a complete listing, or go straight to the tutorial to see a real workflow.
diff --git a/docs/docs/get-started/installation.md b/docs/docs/get-started/installation.md index ffac1cd..61bce03 100644 --- a/docs/docs/get-started/installation.md +++ b/docs/docs/get-started/installation.md @@ -23,17 +23,17 @@ Once installed, you can use the `poly` command to interact with Agent Studio pro ## Verify the installation -To confirm that the CLI is available, run: +Confirm the CLI is available: ~~~bash poly --help ~~~ -If installation has completed successfully, this will display the top-level command help. +You should see the top-level command help if installation succeeded. ## Development setup from source -If you are contributing to the ADK itself or working directly from the repository, you can set it up locally from source instead. +To contribute to the ADK or work directly from the repository: ~~~bash git clone https://github.com/polyai/adk.git @@ -44,17 +44,7 @@ uv pip install -e ".[dev]" pre-commit install ~~~ -This installs the project in editable mode and enables the repository’s development hooks. - -## Running tests - -To run the test suite: - -~~~bash -pytest -~~~ - -Test files are located in `src/poly/tests/`. +This installs the project in editable mode and registers the development hooks. ## Next step diff --git a/docs/docs/get-started/prerequisites.md b/docs/docs/get-started/prerequisites.md index 5e84f40..ec69e24 100644 --- a/docs/docs/get-started/prerequisites.md +++ b/docs/docs/get-started/prerequisites.md @@ -1,9 +1,9 @@ --- -title: Pre-requisites +title: Prerequisites description: Understand the access requirements and local tools needed before using the PolyAI ADK. --- -Before using the **PolyAI ADK**, you need both the correct **platform access** and the required **local tools**. +Before using the **PolyAI ADK**, you need the correct **platform access** and the required **local tools**. 
## Platform access @@ -20,47 +20,45 @@ If you need access to the PolyAI platform, contact: ## Local requirements -You will also need the following installed on your machine: +Install the following tools before continuing: -
- -- **Python 3.14 or higher** - - --- +| Tool | Version | Notes | +|---|---|---| +| **Python** | 3.14+ | Required to run the ADK | +| **uv** | latest | Recommended for development setup from source | +| **Git** | any | Required to clone the repository or contribute | - The ADK requires a modern Python runtime. +### Install Python 3.14+ -- **uv** +Python 3.14 is a recent release. Use one of these methods: - --- +- **Homebrew** (macOS): `brew install python@3.14` +- **pyenv**: `pyenv install 3.14` then `pyenv global 3.14` +- **Official installer**: [python.org/downloads](https://www.python.org/downloads/){ target="_blank" rel="noopener" } - Recommended for local development setup from source. +### Install uv -- **Git** +The recommended way to install `uv`: - --- - - Required if you plan to clone the repository or contribute from source. - -
- -## Install uv +~~~bash +curl -LsSf https://astral.sh/uv/install.sh | sh +~~~ -If you use Homebrew on macOS, you can install `uv` with: +Alternatively, with Homebrew on macOS: ~~~bash brew install uv ~~~ -## Before you continue +## Checklist -Before moving on, make sure you have: +Before continuing, confirm: -- confirmed access to **Agent Studio** -- obtained an **API key** -- installed **Python 3.14+** -- installed `uv` if you are using the development setup -- confirmed that **Git** is available locally +- [ ] You have access to an **Agent Studio workspace** +- [ ] You have obtained an **API key** from your PolyAI contact +- [ ] Python 3.14+ is installed and on your `PATH` +- [ ] `uv` is installed if you plan to use the development setup +- [ ] `git` is available locally ## Next step diff --git a/docs/docs/get-started/what-is-the-adk.md b/docs/docs/get-started/what-is-the-adk.md index 277083f..3a59eac 100644 --- a/docs/docs/get-started/what-is-the-adk.md +++ b/docs/docs/get-started/what-is-the-adk.md @@ -9,44 +9,40 @@ description: Learn what the PolyAI Agent Development Kit is, why it exists, and [Join the waitlist](https://fehky.share-eu1.hsforms.com/2oSGLpUctRvyqXcb6K44DAQ){ target="_blank" rel="noopener" } -The **PolyAI ADK (Agent Development Kit)** is a **CLI tool and Python package** for interacting with **Agent Studio** projects on your local machine. +The **PolyAI ADK (Agent Development Kit)** is a **CLI tool and Python package** for managing **Agent Studio** projects on your local machine. -It provides a Git-like workflow for synchronizing project configurations between your local filesystem and the Agent Studio platform. +It gives you a Git-like workflow for synchronizing project configuration between your local filesystem and the Agent Studio platform. 
-The ADK originated as **Local Agent Studio**, an internal Deployment team tool, and was later repackaged for external use with changes such as API key authentication and the removal of internal-only process references. +## What you can do with the ADK -## What it enables - -With the ADK, developers can: - -- build and edit Agent Studio projects locally -- synchronize project configuration with Agent Studio -- use Git-based workflows alongside agent development -- work with AI coding tools such as **Cursor** or **Claude Code** -- accelerate onboarding and agent building +- Build and edit Agent Studio projects locally using standard tooling +- Synchronize project configuration with Agent Studio using `poly push` and `poly pull` +- Branch, validate, and review changes before deployment +- Use AI coding tools such as **Claude Code** to generate and update project files +- Collaborate across multiple developers on the same project ## Why it exists -The ADK was designed to move developer workflows out of the browser and into a local development environment. +The ADK moves development work out of the browser and into your local environment. -Instead of editing everything directly inside Agent Studio, developers can pull a project locally, make changes using standard development tooling, and push those changes back to the platform. +Instead of editing everything directly inside Agent Studio, you pull a project locally, make changes using your normal tools, and push those changes back to the platform. 
-This makes it easier to: +This makes it straightforward to: -- iterate quickly -- collaborate across multiple contributors -- run validation and tests before deployment -- use coding agents to automate work that would otherwise be manual +- iterate quickly without browser round-trips +- collaborate across a team without overwriting each other's work +- validate and test changes before pushing them live +- automate repetitive build work with coding tools ## Multi-developer workflows -The ADK was designed with **multi-user collaboration** in mind. +The ADK supports team workflows out of the box. -It preserves the same guardrails as Agent Studio, so developers should not be able to push changes that are incompatible with the project. +It preserves the same guardrails as Agent Studio, so developers cannot push changes that are incompatible with the project. !!! tip "Git-like, but for Agent Studio" - The ADK is best understood as a developer workflow layer for Agent Studio: pull, edit locally, validate, and push. + Think of the ADK as the local development layer for Agent Studio: pull, edit locally, validate, and push. ## Next steps diff --git a/docs/docs/index.md b/docs/docs/index.md index ff43081..af62578 100644 --- a/docs/docs/index.md +++ b/docs/docs/index.md @@ -13,7 +13,7 @@ description: Documentation for the PolyAI Agent Development Kit. Build, edit, and deploy Agent Studio projects locally with the **PolyAI ADK**. -The ADK gives developers a local, Git-like workflow for working with Agent Studio projects using standard tooling, validation, and AI-assisted development. +The ADK gives you a local, Git-like workflow for Agent Studio projects: pull, edit with standard tooling, validate, and push. ## Start here @@ -23,14 +23,14 @@ The ADK gives developers a local, Git-like workflow for working with Agent Studi --- - Learn what the ADK is, why it exists, and how it fits into Agent Studio workflows. 
+ Understand what the ADK does and where it fits in the Agent Studio workflow. [Read the overview](get-started/what-is-the-adk.md) - **Installation** --- - Set up the ADK locally and prepare your development environment. + Install the ADK and prepare your local environment. [Open installation](get-started/installation.md) - **Build an agent** @@ -44,20 +44,19 @@ The ADK gives developers a local, Git-like workflow for working with Agent Studi --- - Review the core commands available in the `poly` CLI. + See every `poly` command and its flags. [Open CLI reference](reference/cli.md)
 ## What this site covers
 
-This documentation is organised around the developer journey:
+This documentation follows the developer journey:
 
-- understanding what the ADK is
-- getting access and installing it
-- learning the core CLI workflow
-- building and reviewing agents
-- looking up commands, testing, and tooling details
+- understanding what the ADK is and how to get access
+- installing it and running the first commands
+- building, reviewing, and deploying agents
+- looking up CLI commands, resource types, and tooling
 
 ## Recommended path
 
 If you are new to the ADK, follow this order:
 
 1. read **What is the PolyAI ADK?**
 2. check **Access and waitlist**
-3. complete **Pre-requisites**
+3. complete **Prerequisites**
 4. follow **Installation**
 5. use **First commands**
 6. continue to **Build an agent with the ADK**
\ No newline at end of file
diff --git a/docs/docs/reference/cli.md b/docs/docs/reference/cli.md
index cbeabbd..19986e2 100644
--- a/docs/docs/reference/cli.md
+++ b/docs/docs/reference/cli.md
@@ -5,7 +5,7 @@ description: Reference for the core commands provided by the PolyAI ADK CLI.

 The PolyAI ADK is accessed through the poly command.
-Use the CLI help output as the first source of truth.
+When in doubt about a flag or option, run the command with `--help`; that output reflects your installed version exactly.

## Start with help @@ -22,9 +22,9 @@ Each command also supports its own help output. For example: poly push --help ~~~ -!!! tip "Use help output as the source of truth" +!!! tip "Help output reflects your installed version" - The installed CLI is the fastest way to confirm the commands and flags available in your local environment. + This reference page covers the standard commands. Run `poly --help` to confirm the exact flags available in your environment. ## Core commands diff --git a/docs/docs/reference/flows.md b/docs/docs/reference/flows.md index c93d23d..1ec9d84 100644 --- a/docs/docs/reference/flows.md +++ b/docs/docs/reference/flows.md @@ -248,7 +248,7 @@ Python should be used for: Function steps live in `function_steps/*.py`. -These are deterministic Python steps with no LLM decision-making involved. They are ideal for: +These are deterministic Python steps. They execute directly, with no model interpretation. Use them for: - API calls - validation diff --git a/docs/docs/reference/functions.md b/docs/docs/reference/functions.md index fac827f..0862434 100644 --- a/docs/docs/reference/functions.md +++ b/docs/docs/reference/functions.md @@ -83,13 +83,13 @@ Every `.py` file must define a function with the same name as the file, excludin That function is the entry point when the file is called by the model or runtime. -Use this import pattern: +Every function file must include this import line: ~~~python from _gen import * # ~~~ -This line must match the expected pattern exactly. +Do not modify this line. The ADK matches it exactly when reading function files. 
## Decorators diff --git a/docs/docs/reference/testing.md b/docs/docs/reference/testing.md index 93b2741..5675bb5 100644 --- a/docs/docs/reference/testing.md +++ b/docs/docs/reference/testing.md @@ -15,6 +15,12 @@ In the ADK workflow, testing usually sits alongside validation and manual review Run tests with: +~~~bash +uv run pytest src/poly/tests/ -v +~~~ + +Or, if pytest is on your path directly: + ~~~bash pytest ~~~ diff --git a/docs/docs/reference/tooling.md b/docs/docs/reference/tooling.md index 7b72338..32d29e6 100644 --- a/docs/docs/reference/tooling.md +++ b/docs/docs/reference/tooling.md @@ -38,11 +38,7 @@ Then reference rules.md in your prompt or agent context. This gives your coding ### VS Code extension -A **PolyAI ADK VS Code extension** is available in the VS Code Marketplace: - -- `https://marketplace.visualstudio.com/items?itemName=PolyAI.adk-extension` - -This can be useful when working directly with ADK resources in a local editor. +A **PolyAI ADK VS Code extension** is available in the VS Code Marketplace. Search for `PolyAI.adk-extension` or install it directly from the [marketplace listing](https://marketplace.visualstudio.com/items?itemName=PolyAI.adk-extension){ target="_blank" rel="noopener" }. ## Other local tools diff --git a/docs/docs/reference/topics.md b/docs/docs/reference/topics.md index fb3d18e..e38fcef 100644 --- a/docs/docs/reference/topics.md +++ b/docs/docs/reference/topics.md @@ -89,8 +89,8 @@ This split is important: content is for facts, actions are for behavior. ### Limits and guidance - use no more than **20** example queries -- cover meaningful variation, not every tiny wording change -- trust the model to generalize from the examples you give it +- cover meaningful variation, not every minor wording change +- the system generalizes from your examples; exhaustive coverage is not needed ## Content @@ -164,9 +164,9 @@ Use markdown headers like `##` and `###` to break up branches or conditions. 
- prefer structured `##` branches in actions - disable topics with `enabled: false` during development instead of deleting them -!!! tip "Tell, don't script" +!!! tip "Tell, don't script" - Prefer instructions like “Tell the user that ...” over hard-coded dialogue such as `Say: '...'`. This gives the agent more room to behave naturally, especially across languages. + Prefer instructions like “Tell the user that ...” over hard-coded dialogue such as `Say: '...'`. This lets the agent vary phrasing naturally, especially across languages. ## Related pages diff --git a/docs/docs/tutorials/build-an-agent.md b/docs/docs/tutorials/build-an-agent.md index 32f8c55..c73546e 100644 --- a/docs/docs/tutorials/build-an-agent.md +++ b/docs/docs/tutorials/build-an-agent.md @@ -9,14 +9,14 @@ description: Follow the end-to-end workflow for going from a blank Agent Studio [Join the waitlist](https://fehky.share-eu1.hsforms.com/2oSGLpUctRvyqXcb6K44DAQ){ target="_blank" rel="noopener" } -This guide walks through how to go from a blank slate to a production-ready voice agent with a real backend using **Agent Studio**, the **PolyAI ADK**, and optionally a coding agent such as **Claude Code**. +This guide walks through how to go from a blank slate to a production-ready voice agent using **Agent Studio**, the **PolyAI ADK**, and optionally a coding tool such as **Claude Code**. There are two common ways to build with the ADK: | Workflow | Description | |---|---| | **CLI workflow** | The hands-on developer path. You run the commands yourself, edit files locally, and push changes back to Agent Studio. | -| **AI-agent workflow** | A coding agent uses the ADK on your behalf, generating and pushing the project files from a brief. | +| **AI-agent workflow** | You provide a brief; a coding tool uses the ADK to generate and push the project files on your behalf. |
@@ -280,7 +280,7 @@ The AI-agent workflow uses a coding agent such as **Claude Code** to execute the !!! info "No manual flow-building required" - In this workflow, the coding agent does the heavy lifting of building the agent, while Agent Studio remains the place where the work is reviewed, tested, and deployed. + In this workflow, the coding tool generates the project files. Agent Studio is where the output is reviewed, tested, and deployed. ### Step 1 - Gather requirements @@ -295,11 +295,11 @@ This should include anything the coding agent will need in order to produce a wo - reference material - links to API documentation -The more complete and structured the input is, the better the generated output is likely to be. +The more complete and structured your input is, the less correction the output requires. !!! tip "Front-load the context" - This workflow works best when you gather the important information up front rather than feeding it in piecemeal later. + Gather everything up front. Providing context piecemeal produces piecemeal output. ### Step 2 - Create a new project in Agent Studio @@ -336,30 +336,30 @@ poly pull The ADK acts as the bridge between your local development environment and Agent Studio in the cloud. It allows the coding agent to read from and write back to the project. -### Step 4 - Feed context to the coding agent +### Step 4 - Give the coding tool its context -Now provide the coding agent with the information you gathered earlier. +Provide the coding tool with the information you gathered earlier. -This is the core input step. Include: +Include: - project-specific requirements - the URL to the business’s public API documentation - relevant internal context - useful patterns or best practices from previous projects -The coding agent can also use the docs command to inspect the SDK and understand the available resources. 
+Use the docs command to generate a reference file the coding tool can read: ~~~bash poly docs --all ~~~ -Reusing proven patterns from earlier projects can improve both speed and output quality. +Including patterns from earlier projects reduces correction time and improves consistency. -### Step 5 - Let the agent build +### Step 5 - Generate the project files -Once the context has been provided, let the coding agent generate the project files. +Once the context is in place, the coding tool generates the project files. -The coding agent can produce the assets needed for the agent, including: +This produces the assets the agent needs, including:
@@ -389,11 +389,11 @@ The coding agent can produce the assets needed for the agent, including:
-The generated assets are structured for Agent Studio and prepared to be pushed back to the platform. +The generated files follow ADK structure and are ready to push to Agent Studio. -### Step 6 - Push back to Agent Studio +### Step 6 - Push to Agent Studio -Once the coding agent has generated the project files, it uses the ADK to push them back into Agent Studio. +Once the files are generated, use the ADK to push them to Agent Studio. A new branch is created in the project so the generated work can be reviewed safely before anything goes live. @@ -443,15 +443,7 @@ At that point, the agent is live. | **poly chat** | Start an interactive session with the agent | | **poly docs** | Output resource documentation | -## Summary - -| Metric | Value | -|---|---| -| **Manual workflow** | Supported | -| **AI-agent workflow** | Supported | -| **Production-ready path** | Yes | - -The overall loop is straightforward: +## The overall loop 1. create or connect a project 2. build locally using the ADK From b1d0477f26f827095787c673300756f9fdea2a21 Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Thu, 26 Mar 2026 11:27:39 +0000 Subject: [PATCH 04/14] =?UTF-8?q?docs:=20second=20sweep=20=E2=80=94=20remo?= =?UTF-8?q?ve=20remaining=20AI-isms=20and=20fix=20missed=20files?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Remove all remaining "coding agent" → "coding tool" across build-an-agent.md and development/docs.md - Rewrite development/docs.md fully: remove Summary table with fake metrics, update all step headings and body copy - Fix typo "walkthroug" in walkthrough-video.md - Fix "lets the model refer" in sms.md → plain language - Fix "This helps the model understand" in functions.md → "This tells the model when to call the function" - Fix passive constructions in experimental_config.md - Improve response_control.md lead sentence - Remove "virtual resources" jargon from variables.md lead - Fix "Loading ADK rules into your coding agent" 
heading in tooling.md Co-Authored-By: Claude Sonnet 4.6 --- docs/docs/development/docs.md | 91 +++++++++++----------- docs/docs/get-started/walkthrough-video.md | 4 +- docs/docs/reference/experimental_config.md | 6 +- docs/docs/reference/functions.md | 2 +- docs/docs/reference/response_control.md | 2 +- docs/docs/reference/sms.md | 2 +- docs/docs/reference/tooling.md | 6 +- docs/docs/reference/variables.md | 4 +- docs/docs/tutorials/build-an-agent.md | 22 +++--- 9 files changed, 68 insertions(+), 71 deletions(-) diff --git a/docs/docs/development/docs.md b/docs/docs/development/docs.md index 1bbd72b..69fae97 100644 --- a/docs/docs/development/docs.md +++ b/docs/docs/development/docs.md @@ -1,11 +1,11 @@ --- title: Build an agent with the ADK -description: Go from a blank Agent Studio project to a production-ready voice agent using the PolyAI ADK and an AI coding agent. +description: Go from a blank Agent Studio project to a production-ready voice agent using the PolyAI ADK and a coding tool such as Claude Code. --- # Build an agent with the ADK -This guide walks through how to go from a blank slate to a production-ready voice agent with a real backend using **PolyAI ADK**, **Agent Studio**, and a coding agent such as **Claude Code**. +This guide walks through how to go from a blank slate to a production-ready voice agent with a real backend using **PolyAI ADK**, **Agent Studio**, and a coding tool such as **Claude Code**. The intended workflow is simple: @@ -17,23 +17,23 @@ The intended workflow is simple: Gather the project requirements, business rules, API information, and reference material. -- **The coding agent builds** +- **The coding tool builds** --- - Using the ADK, the coding agent generates the files needed for the agent. + Using the ADK, the coding tool generates the project files. - **Agent Studio hosts and deploys** --- - The generated work is pushed back into Agent Studio, where it can be reviewed, merged, and deployed. 
+ The generated work is pushed into Agent Studio, where it can be reviewed, merged, and deployed.
!!! info "No manual flow-building required" - This workflow is designed so that the coding agent does the heavy lifting of building the agent, while Agent Studio remains the place where the finished work is reviewed, tested, and deployed. + In this workflow, the coding tool generates the project files. Agent Studio is where the output is reviewed, tested, and deployed. ## Architecture at a glance @@ -45,9 +45,9 @@ The intended workflow is simple: ## Step 1 — Gather requirements -Collect the project context from your team’s communication channels before you begin. +Collect the project context from your team's communication channels before you begin. -This should include anything the coding agent will need to produce a working agent, such as: +Include anything needed to produce a working agent: - API endpoint URLs - business rules @@ -56,11 +56,11 @@ This should include anything the coding agent will need to produce a working age - reference material - links to relevant documentation -The more complete and structured the input is, the better the coding agent’s output will be. +The more complete and structured your input, the less correction the output requires. !!! tip "Front-load the context" - This workflow works best when you gather the requirements up front rather than feeding them in piecemeal later. + Gather everything up front. Providing context piecemeal produces piecemeal output. ## Step 2 — Create a new project in Agent Studio @@ -72,42 +72,53 @@ The project starts empty: - no flows - no configuration -That blank starting point is intentional. The coding agent will populate the project in later steps. +That blank starting point is intentional. The coding tool populates the project in later steps. !!! note "Think of Agent Studio as the deployment target" - Agent Studio is where the project lives, but the coding agent does most of the actual building work. + Agent Studio is where the project lives, but the coding tool generates the actual content. 
-## Step 3 — Launch the coding agent via the CLI +## Step 3 — Start the coding tool via the CLI -Open your command line interface and launch your coding agent. +Open your terminal and start your coding tool. At this stage: - the ADK must already be installed -- the new Agent Studio project should already exist -- the coding agent should be linked to the project using the ADK +- the new Agent Studio project must already exist +- the coding tool should initialize and link the project using the ADK -The ADK acts as the bridge between your local development environment and Agent Studio in the cloud. It allows the coding agent to read from and write back to the project. +~~~bash +poly init --region --account_id --project_id +poly pull +~~~ -## Step 4 — Feed context to the coding agent +The ADK acts as the bridge between your local environment and Agent Studio. It lets the coding tool read from and write back to the project. -Now provide the coding agent with the information you gathered earlier. +## Step 4 — Give the coding tool its context -This is the core input step. Include: +Provide the coding tool with the information you gathered earlier. + +Include: - the project-specific requirements -- the URL to the business’s public API documentation +- the URL to the business's public API documentation - any relevant internal project context - best practices or patterns from previous projects -Reusing proven patterns from earlier projects can improve both speed and output quality. +Use the docs command to generate a reference file the coding tool can read: + +~~~bash +poly docs --all +~~~ -## Step 5 — Let the agent build +Including patterns from earlier projects reduces correction time and improves consistency. -Once the context has been provided, let the coding agent generate the project files. 
+## Step 5 — Generate the project files -The coding agent can produce the assets needed for the agent, including: +Once the context is in place, the coding tool generates the project files. + +This produces the assets the agent needs, including:
@@ -115,7 +126,7 @@ The coding agent can produce the assets needed for the agent, including: --- - Dialogue logic and routing for the agent. + Dialogue logic and routing. - **Callable functions** @@ -127,7 +138,7 @@ The coding agent can produce the assets needed for the agent, including: --- - Information the agent can reference when answering questions. + Information the agent can retrieve when answering questions. - **API integrations** @@ -137,13 +148,13 @@ The coding agent can produce the assets needed for the agent, including:
-The generated assets are structured for Agent Studio and prepared to be pushed back to the platform. +The generated files follow ADK structure and are ready to push to Agent Studio. -## Step 6 — Push back to Agent Studio +## Step 6 — Push to Agent Studio -Once the coding agent has generated the project files, it uses the ADK to push them back into Agent Studio. +Once the files are generated, use the ADK to push them to Agent Studio. -A new branch is created in the project so the generated work can be reviewed safely before anything goes live. +A new branch is created so the generated work can be reviewed safely before anything goes live. When you switch to that branch in Agent Studio, you should see the generated changes, such as: @@ -160,7 +171,7 @@ When you switch to that branch in Agent Studio, you should see the generated cha Review the generated work inside Agent Studio. -Check that the key parts of the agent look correct: +Check the key parts of the agent: - flows - functions @@ -174,23 +185,13 @@ Once everything looks right: At that point, the agent is live. -## Summary - -| Metric | Value | -|---|---| -| **Steps** | 7 | -| **Total time** | ~30 minutes | -| **Manual flows built** | 0 | - -The overall loop is straightforward: +## The overall loop 1. provide context -2. let the coding agent generate the project assets -3. use the ADK to push them into Agent Studio +2. generate the project files using the coding tool +3. push to Agent Studio with the ADK 4. review, merge, and deploy -By reusing patterns from previous projects, the coding agent can produce production-grade output much faster than a fully manual workflow. - ## Next steps
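The four-step loop above is plain CLI work, but the context-gathering half of it lends itself to small throwaway scripts. As a sketch of what collating the step-1 context into a single brief for the coding tool might look like, assuming nothing beyond the standard library (the `build_brief` helper and its format are illustrative, not an ADK feature):

```python
def build_brief(requirements: list[str], api_docs_url: str, rules_file: str) -> str:
    # Illustrative only: collate gathered context into one markdown brief
    # that can be passed to the coding tool alongside the rules file.
    lines = ["# Project brief", "", "## Requirements"]
    lines += [f"- {item}" for item in requirements]
    lines += ["", f"API documentation: {api_docs_url}"]
    lines += [f"ADK reference file: {rules_file}"]
    return "\n".join(lines)

brief = build_brief(
    ["Greet the caller by name", "Confirm bookings via SMS"],
    "https://example.com/api-docs",
    "rules.md",
)
print(brief.splitlines()[0])  # prints "# Project brief"
```

A brief assembled this way pairs naturally with the output of `poly docs --all --output rules.md` mentioned elsewhere in these patches: both files become the session context.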
diff --git a/docs/docs/get-started/walkthrough-video.md b/docs/docs/get-started/walkthrough-video.md index 88992d0..578db8c 100644 --- a/docs/docs/get-started/walkthrough-video.md +++ b/docs/docs/get-started/walkthrough-video.md @@ -3,9 +3,9 @@ title: Walkthrough Video description: Watch a walkthrough of building a production-ready voice agent with the PolyAI ADK. --- -This walkthroug shows how quickly a production-ready voice agent can be built using the **PolyAI ADK**. +This walkthrough shows how quickly a production-ready voice agent can be built using the **PolyAI ADK**. -It gives a practical look at the developer workflow and shows how the ADK fits into modern AI-assisted agent development. +It gives a practical look at the developer workflow and shows how the ADK fits into a coding-tool-assisted development process. ## Watch the video diff --git a/docs/docs/reference/experimental_config.md b/docs/docs/reference/experimental_config.md index 4838a93..b5fc798 100644 --- a/docs/docs/reference/experimental_config.md +++ b/docs/docs/reference/experimental_config.md @@ -9,7 +9,7 @@ description: Enable experimental features and advanced runtime settings for an a The experimental config file is an optional JSON file used to enable experimental features and advanced runtime settings for an agent.

-It can be used for things such as: +Use it for: - feature flags - ASR tuning @@ -66,11 +66,11 @@ The ADK validates `experimental_config.json` against this schema when you run: poly validate ~~~ -If the configuration is invalid, it will fail validation locally. Invalid experimental config in deployed agents is not read by the runtime. +Invalid configuration fails `poly validate` locally. Experimental config that fails validation is not read by the runtime in deployed agents. !!! info "Validate before pushing" - Because experimental config can affect runtime behavior in subtle ways, it should always be validated locally before changes are pushed. + Experimental config can affect runtime behavior in subtle ways. Always run `poly validate` locally before pushing changes. ## When to use it diff --git a/docs/docs/reference/functions.md b/docs/docs/reference/functions.md index 0862434..551db1d 100644 --- a/docs/docs/reference/functions.md +++ b/docs/docs/reference/functions.md @@ -147,7 +147,7 @@ Prefer naming functions after the **event that should trigger them**, rather tha - `store_first_name` - `send_confirmation` -This helps the model understand when the function should be called. +This tells the model when to call the function. ## Returns and control flow diff --git a/docs/docs/reference/response_control.md b/docs/docs/reference/response_control.md index 3877ce9..4743574 100644 --- a/docs/docs/reference/response_control.md +++ b/docs/docs/reference/response_control.md @@ -6,7 +6,7 @@ description: Control how voice-agent output is filtered and pronounced before it # Response control

-Response control resources manage what the agent says before it reaches the user. +Response control resources process the agent's output before it is spoken.

They are used to adjust spoken output by: diff --git a/docs/docs/reference/sms.md b/docs/docs/reference/sms.md index 44397f4..b112f72 100644 --- a/docs/docs/reference/sms.md +++ b/docs/docs/reference/sms.md @@ -87,7 +87,7 @@ SMS templates can be referenced in rules, topics, and related instructions using {{twilio_sms:template_name}} ~~~ -This lets the model refer to the correct SMS template by name, rather than embedding the message body directly into prompt text. +This lets you reference the correct template by name without embedding the full message body in prompt text. ## Using variables diff --git a/docs/docs/reference/tooling.md b/docs/docs/reference/tooling.md index 32d29e6..1567e39 100644 --- a/docs/docs/reference/tooling.md +++ b/docs/docs/reference/tooling.md @@ -26,15 +26,15 @@ Claude Code is particularly useful for: - applying patterns reused across previous projects - speeding up repetitive implementation work -#### Loading ADK rules into your coding agent +#### Loading ADK rules into Claude Code -Before using Claude Code or another AI coding tool, generate a local documentation file and reference it in your session: +Before starting a session with Claude Code or another coding tool, generate a documentation file and pass it as context: ~~~bash poly docs --all --output rules.md ~~~ -Then reference rules.md in your prompt or agent context. This gives your coding agent accurate knowledge of ADK resource types, constraints, and conventions. +Reference `rules.md` in your session prompt. This gives the coding tool accurate knowledge of ADK resource types, constraints, and conventions. ### VS Code extension diff --git a/docs/docs/reference/variables.md b/docs/docs/reference/variables.md index f08b646..2b647f0 100644 --- a/docs/docs/reference/variables.md +++ b/docs/docs/reference/variables.md @@ -6,11 +6,9 @@ description: Understand how state variables are discovered, stored, and referenc # Variables

-Variables are virtual resources that represent state values used in the agent's code. +Variables are not files on disk. They represent values stored in `conv.state` and are discovered automatically by scanning function code.

-Unlike most resources, variables do not have files on disk. They are discovered automatically by scanning function code for `conv.state.` usage. - ## How variables work When you assign a value to `conv.state.customer_name` in code, `customer_name` becomes a tracked variable. diff --git a/docs/docs/tutorials/build-an-agent.md b/docs/docs/tutorials/build-an-agent.md index a87f2db..fb91401 100644 --- a/docs/docs/tutorials/build-an-agent.md +++ b/docs/docs/tutorials/build-an-agent.md @@ -254,7 +254,7 @@ Use Agent Studio analytics to monitor containment, CSAT, handle time, and flagge ## Workflow 2 - AI-agent workflow -The AI-agent workflow uses a coding agent such as **Claude Code** to execute the same development loop on your behalf. +The AI-agent workflow uses a coding tool such as **Claude Code** to run the same development loop on your behalf.
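The variables.md hunk above describes discovery by scanning function code for `conv.state.` usage. A minimal approximation of that scan, using a regular expression, might look like the following; this is a sketch of the idea, not how the ADK necessarily implements it.

```python
import re

def discover_variables(source: str) -> set[str]:
    # Illustrative approximation: collect every name reached via conv.state.<name>.
    return set(re.findall(r"\bconv\.state\.([A-Za-z_]\w*)", source))

code = (
    "conv.state.customer_name = utterance.strip()\n"
    "if conv.state.booking_id:\n"
    "    send_confirmation(conv.state.customer_name)\n"
)
print(sorted(discover_variables(code)))  # prints ['booking_id', 'customer_name']
```

Note that a real implementation would likely parse the code rather than pattern-match it, so that string literals and comments containing `conv.state.` are not picked up.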
@@ -264,11 +264,11 @@ The AI-agent workflow uses a coding agent such as **Claude Code** to execute the Requirements, business rules, integrations, and API documentation. -- **The coding agent generates the project** +- **The coding tool generates the project** --- - It uses the ADK to inspect the SDK, generate files, and push the result. + It uses the ADK to read documentation, generate files, and push the result. - **You review and deploy** @@ -286,7 +286,7 @@ The AI-agent workflow uses a coding agent such as **Claude Code** to execute the Collect the project context before you begin. -This should include anything the coding agent will need in order to produce a working agent, such as: +Include anything the coding tool will need to produce a working agent: - API endpoint URLs - business rules @@ -311,30 +311,28 @@ The project starts empty: - no flows - no configuration -That blank starting point is intentional. The coding agent will populate the project in later steps. +That blank starting point is intentional. The coding tool populates the project in later steps. !!! note "Think of Agent Studio as the deployment target" - Agent Studio is where the project lives, but the coding agent does most of the actual building work. + Agent Studio is where the project lives, but the coding tool generates the actual content. -### Step 3 - Launch the coding agent via the CLI +### Step 3 - Start the coding tool via the CLI -Open your command line interface and launch your coding agent. +Open your terminal and start the coding tool. At this stage: - the ADK must already be installed - the Agent Studio project must already exist -- the coding agent should be linked to the project using the ADK - -A typical starting point is: +- the coding tool should initialize and link the project using the ADK ~~~bash poly init --region --account_id --project_id poly pull ~~~ -The ADK acts as the bridge between your local development environment and Agent Studio in the cloud. 
It allows the coding agent to read from and write back to the project. +The ADK acts as the bridge between your local environment and Agent Studio. It lets the coding tool read from and write back to the project. ### Step 4 - Give the coding tool its context From 0477d6f8f8517afe3b2d0bc28102cf44e89eeea7 Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Thu, 26 Mar 2026 11:34:40 +0000 Subject: [PATCH 05/14] docs: fix rebase-restored coding agent references in build-an-agent.md Co-Authored-By: Claude Sonnet 4.6 --- docs/docs/tutorials/build-an-agent.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/docs/tutorials/build-an-agent.md b/docs/docs/tutorials/build-an-agent.md index fb91401..70c9414 100644 --- a/docs/docs/tutorials/build-an-agent.md +++ b/docs/docs/tutorials/build-an-agent.md @@ -345,13 +345,13 @@ Include: - relevant internal context - useful patterns or best practices from previous projects -The coding agent can also use the docs command to inspect the SDK and understand the available resources. +Use the docs command to generate a reference file the coding tool can read: ~~~bash poly docs --all ~~~ -### Step 5 - Let the agent build +### Step 5 - Generate the project files Once the context is in place, the coding tool generates the project files. From fdbd287fae06ed745aa435f2d07d5ac69757480f Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Thu, 26 Mar 2026 11:46:47 +0000 Subject: [PATCH 06/14] chore: add Google Analytics (G-TTMRZWJPP4) to MkDocs config MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Also fix nav label "Pre-requisites" → "Prerequisites". 
Co-Authored-By: Claude Sonnet 4.6 --- docs/mkdocs.yml | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index a5c295f..539b365 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -55,6 +55,10 @@ markdown_extensions: permalink: true extra: + analytics: + provider: google + property: G-TTMRZWJPP4 + first_repo_url: https://github.com/polyai/adk first_repo_name: adk first_repo_icon: fontawesome/brands/github @@ -87,7 +91,7 @@ nav: - What is the PolyAI ADK?: get-started/what-is-the-adk.md - Walkthrough video: get-started/walkthrough-video.md - Access and waitlist: get-started/access-and-waitlist.md - - Pre-requisites: get-started/prerequisites.md + - Prerequisites: get-started/prerequisites.md - Installation: get-started/installation.md - First commands: get-started/first-commands.md From 7c6fbc576c434eee0a1dec93beda3831858dec76 Mon Sep 17 00:00:00 2001 From: Aaron Forinton Date: Sat, 28 Mar 2026 21:35:49 +0000 Subject: [PATCH 07/14] docs: update installation guide, move dev setup, remove pytest, fix em dashes - Replace bare pip install in Installation page with uv venv guidance - Add Generate API Key section (POLY_ADK_KEY) to Installation page - Remove Development setup from source from Installation page - Move Development setup from source to Working locally page - Remove pytest references from testing.md (users should use their own testing tools) - Replace em dashes with regular dashes across all docs pages Co-Authored-By: Claude Sonnet 4.6 --- .../concepts/multi-user-and-guardrails.md | 2 +- docs/docs/concepts/working-locally.md | 19 +++++++- docs/docs/development/docs.md | 14 +++--- docs/docs/get-started/installation.md | 35 +++++++++------ docs/docs/reference/cli.md | 4 +- docs/docs/reference/testing.md | 43 +++---------------- 6 files changed, 53 insertions(+), 64 deletions(-) diff --git a/docs/docs/concepts/multi-user-and-guardrails.md b/docs/docs/concepts/multi-user-and-guardrails.md index 
0b0c068..b463599 100644 --- a/docs/docs/concepts/multi-user-and-guardrails.md +++ b/docs/docs/concepts/multi-user-and-guardrails.md @@ -92,7 +92,7 @@ poly pull If the pulled changes conflict with your own local edits, the ADK will merge them and surface merge markers where conflicts occur. -The local workflow is not isolated from Agent Studio UI work — both sides affect branch state. Keep that in mind when collaborating. +The local workflow is not isolated from Agent Studio UI work - both sides affect branch state. Keep that in mind when collaborating. ## Review workflow diff --git a/docs/docs/concepts/working-locally.md b/docs/docs/concepts/working-locally.md index 126a9fc..ea21ab3 100644 --- a/docs/docs/concepts/working-locally.md +++ b/docs/docs/concepts/working-locally.md @@ -51,7 +51,7 @@ A typical project structure looks like this: ~~~text // -├── _gen/ # Generated stubs — do not edit +├── _gen/ # Generated stubs - do not edit ├── agent_settings/ # Agent identity and behavior │ ├── personality.yaml │ ├── role.yaml @@ -68,7 +68,7 @@ A typical project structure looks like this: │ └── response_control/ ├── chat/ # Chat channel settings │ └── configuration.yaml -├── flows/ # Optional — flow definitions +├── flows/ # Optional - flow definitions ├── functions/ # Global functions ├── topics/ # Knowledge base topics └── project.yaml # Project metadata @@ -135,6 +135,21 @@ These references let settings, prompts, and behaviors point to resources by name Think of the ADK as a synchronization layer between your local files and the Agent Studio platform. +## Development setup from source + +To contribute to the ADK or work directly from the repository: + +~~~bash +git clone https://github.com/polyai/adk.git +cd adk +uv venv +source .venv/bin/activate +uv pip install -e ".[dev]" +pre-commit install +~~~ + +This installs the project in editable mode and registers the development hooks. + ## Related pages
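The working-locally.md changes above show the expected project tree and the development setup. As a rough companion, a pre-push sanity check could confirm the key top-level entries exist after a `poly pull`; the `EXPECTED` list and `missing_entries` helper are invented for this sketch and are not ADK features.

```python
import tempfile
from pathlib import Path

EXPECTED = ["agent_settings", "functions", "topics", "project.yaml"]  # subset of the tree above

def missing_entries(root: str) -> list[str]:
    # Report which expected top-level entries are absent from a pulled project.
    base = Path(root)
    return [name for name in EXPECTED if not (base / name).exists()]

with tempfile.TemporaryDirectory() as root:
    (Path(root) / "functions").mkdir()
    (Path(root) / "project.yaml").touch()
    print(missing_entries(root))  # prints ['agent_settings', 'topics']
```

A check like this catches an incomplete pull before `poly validate` or `poly push` is run, at the cost of hard-coding assumptions about the layout.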
diff --git a/docs/docs/development/docs.md b/docs/docs/development/docs.md index 69fae97..d2ccf8a 100644 --- a/docs/docs/development/docs.md +++ b/docs/docs/development/docs.md @@ -43,7 +43,7 @@ The intended workflow is simple: | **Claude Code + ADK** | Generate project files and push changes | | **Agent Studio** | Host, preview, review, and deploy the agent | -## Step 1 — Gather requirements +## Step 1 - Gather requirements Collect the project context from your team's communication channels before you begin. @@ -62,7 +62,7 @@ The more complete and structured your input, the less correction the output requ Gather everything up front. Providing context piecemeal produces piecemeal output. -## Step 2 — Create a new project in Agent Studio +## Step 2 - Create a new project in Agent Studio Open **Agent Studio** and create a brand-new project. @@ -78,7 +78,7 @@ That blank starting point is intentional. The coding tool populates the project Agent Studio is where the project lives, but the coding tool generates the actual content. -## Step 3 — Start the coding tool via the CLI +## Step 3 - Start the coding tool via the CLI Open your terminal and start your coding tool. @@ -95,7 +95,7 @@ poly pull The ADK acts as the bridge between your local environment and Agent Studio. It lets the coding tool read from and write back to the project. -## Step 4 — Give the coding tool its context +## Step 4 - Give the coding tool its context Provide the coding tool with the information you gathered earlier. @@ -114,7 +114,7 @@ poly docs --all Including patterns from earlier projects reduces correction time and improves consistency. -## Step 5 — Generate the project files +## Step 5 - Generate the project files Once the context is in place, the coding tool generates the project files. @@ -150,7 +150,7 @@ This produces the assets the agent needs, including: The generated files follow ADK structure and are ready to push to Agent Studio. 
-## Step 6 — Push to Agent Studio +## Step 6 - Push to Agent Studio Once the files are generated, use the ADK to push them to Agent Studio. @@ -167,7 +167,7 @@ When you switch to that branch in Agent Studio, you should see the generated cha The branch-based workflow makes it possible to inspect what was generated before merging it into the main project. -## Step 7 — Review, merge, and deploy +## Step 7 - Review, merge, and deploy Review the generated work inside Agent Studio. diff --git a/docs/docs/get-started/installation.md b/docs/docs/get-started/installation.md index 61bce03..b2ece5f 100644 --- a/docs/docs/get-started/installation.md +++ b/docs/docs/get-started/installation.md @@ -13,6 +13,18 @@ The **PolyAI ADK** can be installed as a Python package. ## Install the ADK +We recommend installing in a virtual environment rather than installing to the global system Python. Run the following to create one: + +~~~bash +uv venv --python=3.14 --seed +~~~ + +Activate the virtual environment: + +~~~bash +source .venv/bin/activate +~~~ + Install the package with pip: ~~~bash @@ -21,30 +33,25 @@ pip install polyai-adk Once installed, you can use the `poly` command to interact with Agent Studio projects locally. -## Verify the installation +## Generate API key -Confirm the CLI is available: +Set your API key as an environment variable: ~~~bash -poly --help +export POLY_ADK_KEY= ~~~ -You should see the top-level command help if installation succeeded. +You can generate an API key from the Agent Studio platform. The `POLY_ADK_KEY` environment variable must be set before running any `poly` commands. 
-## Development setup from source +## Verify the installation -To contribute to the ADK or work directly from the repository: +Confirm the CLI is available: ~~~bash -git clone https://github.com/polyai/adk.git -cd adk -uv venv -source .venv/bin/activate -uv pip install -e ".[dev]" -pre-commit install +poly --help ~~~ -This installs the project in editable mode and registers the development hooks. +You should see the top-level command help if installation succeeded. ## Next step @@ -59,4 +66,4 @@ Once the ADK is installed, continue to the first commands page to explore the CL Learn the core ADK commands and how to inspect the CLI. [Open first commands](./first-commands.md) -
\ No newline at end of file +
diff --git a/docs/docs/reference/cli.md b/docs/docs/reference/cli.md index 19986e2..d4fccfe 100644 --- a/docs/docs/reference/cli.md +++ b/docs/docs/reference/cli.md @@ -5,7 +5,7 @@ description: Reference for the core commands provided by the PolyAI ADK CLI.

The PolyAI ADK is accessed through the poly command. -When in doubt about a flag or option, run the command with --help — that output reflects your installed version exactly. +When in doubt about a flag or option, run the command with --help - that output reflects your installed version exactly.

## Start with help @@ -167,7 +167,7 @@ poly docs --all poly docs --all --output rules.md ~~~ -Use `--output` to write the documentation to a local file. This is useful when working with AI coding tools — pass the output file as context to give the agent accurate knowledge of ADK resource types and conventions. +Use `--output` to write the documentation to a local file. This is useful when working with AI coding tools - pass the output file as context to give the agent accurate knowledge of ADK resource types and conventions. ## Working pattern diff --git a/docs/docs/reference/testing.md b/docs/docs/reference/testing.md index 5675bb5..95323e3 100644 --- a/docs/docs/reference/testing.md +++ b/docs/docs/reference/testing.md @@ -1,6 +1,6 @@ --- title: Testing -description: Run tests and validate project changes when working with the PolyAI ADK. +description: Validate project changes when working with the PolyAI ADK. --- # Testing @@ -11,26 +11,6 @@ Testing helps confirm that your project changes behave as expected before they a In the ADK workflow, testing usually sits alongside validation and manual review in Agent Studio. -## Run the test suite - -Run tests with: - -~~~bash -uv run pytest src/poly/tests/ -v -~~~ - -Or, if pytest is on your path directly: - -~~~bash -pytest -~~~ - -Test files are located in: - -~~~text -src/poly/tests/ -~~~ - ## What testing is for Testing is useful when you want to: @@ -42,12 +22,6 @@ Testing is useful when you want to:
-- **Automated tests** - - --- - - Use `pytest` to run the project's test suite locally. - - **Validation** --- @@ -69,13 +43,8 @@ A typical development loop looks like this: 1. edit files locally 2. inspect changes with `poly status` and `poly diff` 3. run `poly validate` -4. run `pytest` where relevant -5. push changes with `poly push` -6. test the branch in Agent Studio - -!!! tip "Validation and testing are complementary" - - `poly validate` checks configuration correctness. `pytest` checks code behavior. They solve different problems and are both useful. +4. push changes with `poly push` +5. test the branch in Agent Studio ## What to test @@ -89,11 +58,9 @@ The exact tests will depend on the kind of work you are doing, but common areas ## Best practices -- run tests before pushing substantial changes -- keep tests focused and readable - use validation as part of the normal workflow, not just before merge - test important error paths, not only success cases -- combine automated testing with interactive review when behavior depends on conversation flow +- combine interactive review when behavior depends on conversation flow ## Related pages @@ -113,4 +80,4 @@ The exact tests will depend on the kind of work you are doing, but common areas See how testing fits into the end-to-end workflow. [Open build an agent](../tutorials/build-an-agent.md) -
\ No newline at end of file +
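The development loop on the testing page (edit, then `poly status` and `poly diff`, then `poly validate`, then `poly push`) can be chained so the push only happens when every earlier step succeeds. Defaulting `POLY` to `echo poly` is our stand-in so this sketch runs without the ADK installed; set `POLY=poly` to drive the real CLI.

```shell
# Review-loop sketch: each step must succeed before the next runs.
POLY="${POLY:-echo poly}"

review_and_push() {
  $POLY status   && \
  $POLY diff     && \
  $POLY validate && \
  $POLY push
}

review_and_push
```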
From 3e5f00ba6476fe7e49486c4ccb2756785cba09ef Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Sat, 28 Mar 2026 21:39:53 +0000 Subject: [PATCH 08/14] docs: update installation guide, move dev setup, remove pytest, fix em dashes --- CHANGELOG.md | 264 ---------- CONTRIBUTING.md | 2 +- pyproject.toml | 2 +- src/poly/cli.py | 732 ++++---------------------- src/poly/{output => }/console.py | 0 src/poly/handlers/interface.py | 85 +-- src/poly/handlers/sdk.py | 3 + src/poly/handlers/sync_client.py | 200 +++---- src/poly/output/__init__.py | 1 - src/poly/output/json_output.py | 31 -- src/poly/project.py | 423 ++++++++------- src/poly/resources/__init__.py | 4 +- src/poly/resources/api_integration.py | 17 +- src/poly/resources/function.py | 4 +- src/poly/tests/project_test.py | 282 ++++------ src/poly/tests/resources_test.py | 18 +- uv.lock | 46 +- 17 files changed, 561 insertions(+), 1553 deletions(-) rename src/poly/{output => }/console.py (100%) delete mode 100644 src/poly/output/__init__.py delete mode 100644 src/poly/output/json_output.py diff --git a/CHANGELOG.md b/CHANGELOG.md index f8c4239..65a89a3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,270 +1,6 @@ # CHANGELOG -## v0.6.0 (2026-03-27) - -### Features - -- Add resource caching and progress spinner for init/pull/branch - ([#50](https://github.com/polyai/adk/pull/50), - [`2d4fc0a`](https://github.com/polyai/adk/commit/2d4fc0ae78c348d0cc9269d7f0749c83a06adedd)) - -## Summary - -Batch `MultiResourceYamlResource` writes during `poly init` so each YAML file is written once - instead of once per resource, and add a progress spinner to `init`, `pull`, and `branch switch` so - the CLI doesn't appear stuck on large projects. - -Also edited CONTRIBUTING.md to edit the clone url - changed org to PolyAI. - -## Motivation - -`poly init` is very slow on projects with many pronunciations (or other multi-resource YAML types) - because `save()` rewrites the full YAML file for every single item. 
On large projects like pacden, - the process appears stuck with no output. The `save_to_cache` + `write_cache_to_file` pattern - already exists for `poly pull` — this reuses it for `init` and adds a progress spinner across all - three commands. - -## Changes - -- Use `save_to_cache=True` for all `MultiResourceYamlResource` saves during `init_project()`, then - flush to disk once via `write_cache_to_file()` - Add an optional `on_save(current, total)` - callback to `init_project()`, `pull_project()`, `_update_multi_resource_yaml_resources()`, - `_update_pulled_resources()`, and `switch_branch()` for progress reporting - Wire up - `console.status()` spinners in `cli.py` for `init`, `pull`, and `branch switch`, using - `nullcontext` to skip the spinner in `--json` mode - Progress counter includes both multi-resource - (per batch total) and non-multi-resource types for an accurate total - -- CONTRIBUTING.md to edit the clone url - changed org from PolyAI-LDN to PolyAI. - -## Test strategy - -- [x] Added/updated unit tests - [x] Manual CLI testing (`poly `) - [ ] Tested against a - live Agent Studio project - [ ] N/A (docs, config, or trivial change) - -## Checklist - -- [x] `ruff check .` and `ruff format --check .` pass - [x] `pytest` passes (361 tests, 0 failures) - - [x] No breaking changes to the `poly` CLI interface (or migration path documented) - [x] Commit - messages follow [conventional commits](https://www.conventionalcommits.org/) - -## Screenshots / Logs Before: Screenshot 2026-03-25 at 10 04
-  14 PM - -After: Screenshot 2026-03-25 at 10 04 01 PM - - -## v0.5.1 (2026-03-27) - -### Bug Fixes - -- Display branch name instead of branch id ([#45](https://github.com/polyai/adk/pull/45), - [`5a54240`](https://github.com/polyai/adk/commit/5a54240418d1848d195af23e87b3cb7005462d4b)) - -## Summary Display new branch name in CLI when the tool switches branch - -## Motivation On push when creating a new branch, users would be shown branch ID not new branch name - -## Changes - -- Change logger level for some logs to hide on usual CLI usage - Make it more clear when a branch id - is used in logs - When branch_id changes, output this in CLI with new branch name - Update auto - branch name to exclude `sdk-user` - -## Test strategy - - - -- [ ] Added/updated unit tests - [x] Manual CLI testing (`poly `) - [ ] Tested against a - live Agent Studio project - [ ] N/A (docs, config, or trivial change) - -## Checklist - -- [x] `ruff check .` and `ruff format --check .` pass - [x] `pytest` passes - [x] No breaking - changes to the `poly` CLI interface (or migration path documented) - [x] Commit messages follow - [conventional commits](https://www.conventionalcommits.org/) - -## Screenshots / Logs Screenshot 2026-03-26 at 15 54 24 - ---------- - -Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> - - -## v0.5.0 (2026-03-26) - -### Features - -- **cli**: Machine-readable --json, projection-based pull/push, and serialized push commands - ([#41](https://github.com/polyai/adk/pull/41), - [`cb91e2a`](https://github.com/polyai/adk/commit/cb91e2abffe97dfdbc6e3db8770f16a369f6da29)) - -## Summary - -Adds a global-style `--json` mode across `poly` subcommands so stdout is a single JSON object for - scripting and CI. Introduces `--from-projection` / optional projection output for `init` and - `pull`, and `--output-json-commands` on `push` to include the queued Agent Studio commands (as - dicts). 
Moves console helpers under `poly.output` and adds `json_output` helpers (including - protobuf → JSON via `MessageToDict`). - -## Motivation - -Operators and automation need stable, parseable CLI output and the ability to drive pull/push from a - captured projection (without hitting the projection API). Exposing staged push commands supports - dry-run review and integration testing. - -Closes #23 - -## Changes - -- Wire `json_parent` (`--json`) into relevant subparsers; many code paths now emit structured JSON - and exit with non-zero on failure where appropriate. - Add `--from-projection` (JSON string or `-` - for stdin) to `pull` and `push`; `SyncClientHandler.pull_resources` uses an inline projection when - provided instead of fetching. - Add `--output-json-projection` on `init` / `pull` (and related - flows) to include the projection in JSON output when requested. - Add `--output-json-commands` on - `push` to append serialized commands to the JSON payload; `push_project` returns `(success, - message, commands)`. - `pull_project` returns `(files_with_conflicts, projection)`; - `pull_resources` returns `(resources, projection)`. - New `poly/output/json_output.py` - (`json_print`, `commands_to_dicts`); relocate `console.py` to `poly/output/console.py` and update - imports. - Update `project_test` mocks/expectations for new return shapes; `uv.lock` updated for - dependencies. - -## Test strategy - -- [x] Added/updated unit tests - [ ] Manual CLI testing (`poly `) - [ ] Tested against a - live Agent Studio project - [ ] N/A (docs, config, or trivial change) - -## Checklist - -- [ ] `ruff check .` and `ruff format --check .` pass - [ ] `pytest` passes - [ ] No breaking - changes to the `poly` CLI interface (or migration path documented) - [ ] Commit messages follow - [conventional commits](https://www.conventionalcommits.org/) - -**Note for reviewers:** The **CLI** remains backward compatible (new flags only). 
- **`AgentStudioProject.pull_project` / `push_project`** (and `pull_resources` on the handler) - **change return types** vs `main`; any direct Python callers must be updated to unpack the new - tuples and optional `projection_json` argument. - -## Screenshots / Logs - - - ---------- - -Co-authored-by: Oliver Eisenberg - -Co-authored-by: Claude Sonnet 4.6 - - -## v0.4.1 (2026-03-26) - -### Bug Fixes - -- Error on merges ([#44](https://github.com/polyai/adk/pull/44), - [`b3d8d62`](https://github.com/polyai/adk/commit/b3d8d62b8b36e476f7027691d0d18da33edf9a74)) - -## Summary Fix issue where merges were marked as successful when there is an internal API error - -## Motivation - -This error breaks pipelines that rely on this output - -Closes # - -## Changes - -- Make success response more explicit instead of relying on errors/conflicts lists - -## Test strategy - - - -- [ ] Added/updated unit tests - [ ] Manual CLI testing (`poly `) - [x] Tested against a - live Agent Studio project - [ ] N/A (docs, config, or trivial change) - -## Checklist - -- [x] `ruff check .` and `ruff format --check .` pass - [x] `pytest` passes - [x] No breaking - changes to the `poly` CLI interface (or migration path documented) - [x] Commit messages follow - [conventional commits](https://www.conventionalcommits.org/) - -## Screenshots / Logs - - - -- Guard uv.lock checkout in coverage workflow ([#42](https://github.com/polyai/adk/pull/42), - [`2383405`](https://github.com/polyai/adk/commit/238340568a8bdbe8ece9612f94d7bd7664154fad)) - -## Summary - -- Prevent coverage CI from failing when `uv.lock` is absent on a branch - Wrap both `git checkout -- - uv.lock` calls with a conditional `git rev-parse --verify` check before and after the base branch - checkout step - -🤖 Generated with [Claude Code](https://claude.com/claude-code) - -Co-authored-by: Claude Sonnet 4.6 - -### Chores - -- Add pytest-cov and coverage to dev dependencies ([#36](https://github.com/polyai/adk/pull/36), - 
[`649ccb7`](https://github.com/polyai/adk/commit/649ccb7d10f3ce59ba9e0f0094bf93b3c90736a7)) - -## Summary - Adds `pytest-cov>=6.0.0` and `coverage>=7.0.0` to the `[dev]` optional dependencies in - `pyproject.toml` - -## Test plan - [x] Run `uv pip install -e ".[dev]"` and verify `pytest-cov` and `coverage` install - successfully image - -🤖 Generated with [Claude Code](https://claude.com/claude-code) - ---------- - -Co-authored-by: Claude Sonnet 4.6 - -### Documentation - -- Fix formatting issues ([#40](https://github.com/polyai/adk/pull/40), - [`eafff58`](https://github.com/polyai/adk/commit/eafff58ab877a65d3fd204a850bcb7489083a1fa)) - -## Summary - - - -## Motivation - - - -Closes # - -## Changes - - - -- - -## Test strategy - - - -- [ ] Added/updated unit tests - [ ] Manual CLI testing (`poly `) - [ ] Tested against a - live Agent Studio project - [ ] N/A (docs, config, or trivial change) - -## Checklist - -- [ ] `ruff check .` and `ruff format --check .` pass - [ ] `pytest` passes - [ ] No breaking - changes to the `poly` CLI interface (or migration path documented) - [ ] Commit messages follow - [conventional commits](https://www.conventionalcommits.org/) - -## Screenshots / Logs - - - - ## v0.4.0 (2026-03-25) ### Bug Fixes diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 6787d65..cad67d2 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -12,7 +12,7 @@ Contributions are welcome! 
Please ensure all tests pass before submitting a pull request. ### Getting Started ```bash -git clone https://github.com/PolyAI/adk.git +git clone https://github.com/PolyAI-LDN/adk.git cd adk uv venv source .venv/bin/activate diff --git a/pyproject.toml b/pyproject.toml index b6c83c9..a5e69cf 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -106,7 +106,7 @@ tag_format = "v{version}" [project] name = "polyai-adk" -version = "0.6.0" +version = "0.4.0" description = "Agent Development Kit (ADK) — a CLI for managing Agent Studio projects locally" readme = "README.md" requires-python = ">=3.14.0" diff --git a/src/poly/cli.py b/src/poly/cli.py index de8fd6a..f96e0a7 100644 --- a/src/poly/cli.py +++ b/src/poly/cli.py @@ -12,16 +12,15 @@ import shutil import subprocess import sys -from argparse import SUPPRESS, ArgumentParser, RawTextHelpFormatter -from contextlib import nullcontext +from argparse import ArgumentParser, RawTextHelpFormatter from importlib.metadata import version as get_package_version -from typing import Any, Optional +from typing import Optional import argcomplete import requests import questionary -from poly.output.console import ( +from poly.console import ( console, error, handle_exception,
help="Initialize a new Agent Studio project.", description="Initialize a new Agent Studio project.\n\nExamples:\n poly init --region eu-west-1 --account_id 123 --project_id my_project\n poly init # (interactive selection)", formatter_class=RawTextHelpFormatter, @@ -156,25 +147,12 @@ def _create_parser(cls) -> ArgumentParser: init_parser.add_argument( "--format", action="store_true", help="Format resources after init." ) - init_parser.add_argument( - "--from-projection", - type=str, - metavar="JSON|-", - help=SUPPRESS, - default=None, - ) - init_parser.add_argument( - "--output-json-projection", - action="store_true", - help=SUPPRESS, - default=False, - ) init_parser.add_argument("--debug", action="store_true", help="Display debug logs.") # PULL pull_parser = subparsers.add_parser( "pull", - parents=[verbose_parent, json_parent], + parents=[verbose_parent], help="Pull the latest project configuration from Agent Studio.", description="Pull the latest project configuration from Agent Studio.\n\nExamples:\n poly pull --path /path/to/project\n poly pull -f # force overwrite local changes", formatter_class=RawTextHelpFormatter, @@ -197,25 +175,12 @@ def _create_parser(cls) -> ArgumentParser: help="Format resources after pulling.", default=False, ) - pull_parser.add_argument( - "--from-projection", - type=str, - metavar="JSON|-", - help=SUPPRESS, - default=None, - ) - pull_parser.add_argument( - "--output-json-projection", - action="store_true", - help=SUPPRESS, - default=False, - ) pull_parser.add_argument("--debug", action="store_true", help="Display debug logs.") # PUSH push_parser = subparsers.add_parser( "push", - parents=[verbose_parent, json_parent], + parents=[verbose_parent], help="Push the project configuration to Agent Studio.", description="Push the project configuration to Agent Studio.\n\nExamples:\n poly push --path /path/to/project\n poly push --skip-validation --dry-run", formatter_class=RawTextHelpFormatter, @@ -249,30 +214,11 @@ def _create_parser(cls) 
-> ArgumentParser: default=False, ) push_parser.add_argument("--debug", action="store_true", help="Display debug logs.") - push_parser.add_argument( - "--from-projection", - type=str, - metavar="JSON|-", - help=SUPPRESS, - default=None, - ) - push_parser.add_argument( - "--output-json-commands", - action="store_true", - help=SUPPRESS, - default=False, - ) - push_parser.add_argument( - "--email", - type=str, - help="Email to use for metadata creation for push", - default=None, - ) # STATUS status_parser = subparsers.add_parser( "status", - parents=[verbose_parent, json_parent], + parents=[verbose_parent], help="Check the changed files of the project.", description="Check the changed files of the project.\n\nExamples:\n poly status\n poly status --path /path/to/project", formatter_class=RawTextHelpFormatter, @@ -289,7 +235,7 @@ def _create_parser(cls) -> ArgumentParser: # REVERT revert_parser = subparsers.add_parser( "revert", - parents=[verbose_parent, json_parent], + parents=[verbose_parent], help="Revert changes in the project.", description="Revert changes in the project.\n\nExamples:\n poly revert --all\n poly revert file1.yaml file2.yaml", formatter_class=RawTextHelpFormatter, @@ -317,7 +263,7 @@ def _create_parser(cls) -> ArgumentParser: # DIFF diff_parser = subparsers.add_parser( "diff", - parents=[verbose_parent, json_parent], + parents=[verbose_parent], help="Show the changes made to the project.", description="Show the changes made to the project.\n\nExamples:\n poly diff\n poly diff file1.yaml", formatter_class=RawTextHelpFormatter, @@ -381,7 +327,7 @@ def _create_parser(cls) -> ArgumentParser: # GET BRANCHES 'branch list' branches_parser = subparsers.add_parser( "branch", - parents=[verbose_parent, json_parent], + parents=[verbose_parent], help="Manage branches in the Agent Studio project.", description="Manage branches in the Agent Studio project.\n\nExamples:\n poly branch list\n poly branch create new-branch\n poly branch switch existing-branch", 
formatter_class=RawTextHelpFormatter, @@ -411,24 +357,11 @@ def _create_parser(cls) -> ArgumentParser: action="store_true", help="Force switch to a different branch and discard changes.", ) - branches_parser.add_argument( - "--from-projection", - type=str, - metavar="JSON|-", - help=SUPPRESS, - default=None, - ) - branches_parser.add_argument( - "--output-json-projection", - action="store_true", - help="Output the projection in json format", - default=False, - ) # FORMAT format_parser = subparsers.add_parser( "format", - parents=[verbose_parent, json_parent], + parents=[verbose_parent], help="Run ruff and YAML/JSON formatting on the project (optional ty with --ty).", description=( "Run ruff (lint + format) on Python and formatting on YAML/JSON resources.\n\n" @@ -466,7 +399,7 @@ def _create_parser(cls) -> ArgumentParser: # Validate validate_parser = subparsers.add_parser( "validate", - parents=[verbose_parent, json_parent], + parents=[verbose_parent], help="Validate the project configuration locally.", description="Validate the project configuration locally.\n\nExamples:\n poly validate --path /path/to/project\n", formatter_class=RawTextHelpFormatter, @@ -584,42 +517,22 @@ def _run_command(cls, args): args.account_id, args.project_id, args.format, - args.from_projection, - output_json=args.json, - output_json_projection=args.output_json_projection, ) elif args.command == "pull": - cls.pull( - args.path, - args.force, - args.format, - args.from_projection, - output_json=args.json, - output_json_projection=args.output_json_projection, - ) + cls.pull(args.path, args.force, args.format) elif args.command == "push": - cls.push( - args.path, - args.force, - args.skip_validation, - args.dry_run, - args.format, - args.email, - args.from_projection, - output_json=args.json, - output_commands=args.output_json_commands, - ) + cls.push(args.path, args.force, args.skip_validation, args.dry_run, args.format) elif args.command == "status": - cls.status(args.path, args.json) + 
cls.status(args.path) elif args.command == "revert": - cls.revert(args.path, args.all, args.files, output_json=args.json) + cls.revert(args.path, args.all, args.files) elif args.command == "diff": - cls.diff(args.path, args.files, args.json) + cls.diff(args.path, args.files) elif args.command == "review": if args.delete: @@ -636,10 +549,10 @@ def _run_command(cls, args): elif args.command == "branch": if args.action == "list": - cls.branch_list(args.path, args.json) + cls.branch_list(args.path) elif args.action == "create": - cls.branch_create(args.path, args.branch_name, args.json) + cls.branch_create(args.path, args.branch_name) elif args.action == "switch": cls.branch_switch( @@ -647,13 +560,10 @@ def _run_command(cls, args): args.branch_name, getattr(args, "force", False), getattr(args, "format", False), - args.json, - output_json_projection=args.output_json_projection, - from_projection=args.from_projection, ) elif args.action == "current": - cls.get_current_branch(args.path, args.json) + cls.get_current_branch(args.path) elif args.command == "format": cls.format( @@ -661,11 +571,10 @@ def _run_command(cls, args): args.files, getattr(args, "check", False), getattr(args, "ty", False), - output_json=args.json, ) elif args.command == "validate": - cls.validate_project(args.path, args.json) + cls.validate_project(args.path) elif args.command == "docs": cls.docs( @@ -721,59 +630,15 @@ def main(cls, sys_args=None): except Exception as e: handle_exception(e) - @staticmethod - def _parse_from_projection_json( - from_projection: Optional[str], - *, - json_errors: bool, - ) -> Optional[dict[str, Any]]: - """Parse ``--from-projection`` CLI value into a projection dict, or exit on failure. - - If the value is ``-`` (after stripping), JSON is read from stdin until EOF. 
- """ - if not from_projection: - return None - raw = from_projection.strip() - if raw == "-": - raw = sys.stdin.read() - try: - parsed: Any = json.loads(raw) - if isinstance(parsed, dict) and "projection" in parsed: - parsed = parsed["projection"] - except json.JSONDecodeError as e: - msg = f"Invalid JSON in --from-projection: {e}" - if json_errors: - json_print({"success": False, "error": msg}) - else: - error(msg) - sys.exit(1) - if not isinstance(parsed, dict): - msg = "--from-projection must be a JSON object (dictionary)." - if json_errors: - json_print({"success": False, "error": msg}) - else: - error(msg) - sys.exit(1) - return parsed - @classmethod - def _load_project(cls, base_path: str, output_json: bool = False) -> AgentStudioProject: + def _load_project(cls, base_path: str) -> AgentStudioProject: """Read project config or exit with a helpful error if not found. Args: base_path: Path to the project directory. - output_json: If True, print JSON and exit when config is missing. """ project = cls.read_project_config(base_path) if not project: - if output_json: - json_print( - { - "success": False, - "error": "No project configuration found. Run poly init to initialize a project.", - } - ) - sys.exit(1) error( "No project configuration found. Run [bold]poly init[/bold] to initialize a project." 
) @@ -814,22 +679,9 @@ def init_project( account_id: str = None, project_id: str = None, format: bool = False, - from_projection: str = None, - output_json: bool = False, - output_json_projection: bool = False, - ) -> None: + ) -> AgentStudioProject: """Initialize a new Agent Studio project.""" - if output_json and not (region and account_id and project_id): - json_print( - { - "success": False, - "error": "init with --json requires --region, --account_id, and --project_id.", - } - ) - sys.exit(1) - - if not output_json: - info("Initialising project...") + info("Initialising project...") if not region: regions = REGIONS @@ -847,14 +699,6 @@ def init_project( use_jk_keys=False, ).ask() if not account_menu: - if output_json: - json_print( - { - "success": False, - "error": "No account selected.", - } - ) - sys.exit(1) warning("No account selected. Exiting.") return account_id = accounts[account_menu] @@ -868,133 +712,39 @@ def init_project( use_jk_keys=False, ).ask() if not project_menu: - if output_json: - json_print( - { - "success": False, - "error": "No project selected.", - } - ) - sys.exit(1) warning("No project selected. 
Exiting.") return project_id = projects[project_menu] - if not output_json: - info(f"Initializing project [bold]{account_id}/{project_id}[/bold]...") + info(f"Initializing project [bold]{account_id}/{project_id}[/bold]...") - projection_json = cls._parse_from_projection_json( - from_projection, - json_errors=output_json or output_json_projection, - ) - - ctx = ( - console.status("[info]Saving resources...[/info]") if not output_json else nullcontext() + project = AgentStudioProject.init_project( + base_path=base_path, + region=region, + account_id=account_id, + project_id=project_id, + format=format, ) - on_save = None - - with ctx as status: - if status: - - def on_save(current: int, total: int) -> None: - status.update(f"[info]Saving resources ({current}/{total})...[/info]") - - project, projection = AgentStudioProject.init_project( - base_path=base_path, - region=region, - account_id=account_id, - project_id=project_id, - format=format, - projection_json=projection_json, - on_save=on_save, - ) if not project: - if output_json: - json_print( - { - "success": False, - "error": "Failed to initialize the project.", - } - ) - else: - error("Failed to initialize the project.") - sys.exit(1) + error("Failed to initialize the project.") + return None - if output_json or output_json_projection: - json_output = { - "success": True, - "root_path": project.root_path, - } - if output_json_projection: - json_output["projection"] = projection - json_print(json_output) - else: - success(f"Project initialized at {project.root_path}") + success(f"Project initialized at {project.root_path}") + return project @classmethod - def pull( - cls, - base_path: str, - force: bool = False, - format: bool = False, - from_projection: str = None, - output_json: bool = False, - output_json_projection: bool = False, - ) -> None: + def pull(cls, base_path: str, force: bool = False, format: bool = False) -> AgentStudioProject: """Pull the latest project configuration from the Agent Studio.""" - 
project = cls._load_project(base_path, output_json=output_json) - if not output_json: - info(f"Pulling project [bold]{project.account_id}/{project.project_id}[/bold]...") - - projection_json = cls._parse_from_projection_json( - from_projection, - json_errors=output_json or output_json_projection, - ) - - original_branch_id = project.branch_id - - ctx = ( - console.status("[info]Saving resources...[/info]") if not output_json else nullcontext() - ) - on_save = None - - with ctx as status: - if status: - - def on_save(current: int, total: int) -> None: - status.update(f"[info]Saving resources ({current}/{total})...[/info]") - - files_with_conflicts, projection = project.pull_project( - force=force, format=format, projection_json=projection_json, on_save=on_save - ) - - new_branch_name = None - if original_branch_id != project.branch_id: - new_branch_name = project.get_current_branch() - if output_json or output_json_projection: - json_output = { - "success": not bool(files_with_conflicts), - "files_with_conflicts": files_with_conflicts, - } - if new_branch_name: - json_output["new_branch_name"] = new_branch_name - json_output["new_branch_id"] = project.branch_id - if output_json_projection: - json_output["projection"] = projection - json_print(json_output) - if files_with_conflicts: - sys.exit(1) - return + project = cls._load_project(base_path) + info(f"Pulling project [bold]{project.account_id}/{project.project_id}[/bold]...") - if new_branch_name: - warning( - f"Current branch no longer exists in Agent Studio. Switched to branch '{new_branch_name}'." 
- ) + files_with_conflicts = project.pull_project(force=force, format=format) if files_with_conflicts: print_file_list("Merge conflicts detected", files_with_conflicts, "filename.conflict") success(f"Pulled {project.account_id}/{project.project_id}") + return project @classmethod def push( @@ -1004,76 +754,29 @@ def push( skip_validation: bool = False, dry_run: bool = False, format: bool = False, - email: Optional[str] = None, - from_projection: str = None, - output_json: bool = False, - output_commands: bool = False, - ) -> None: + ) -> AgentStudioProject: """Push the project configuration to the Agent Studio.""" - project = cls._load_project(base_path, output_json=output_json) - if not output_json and not output_commands: - info( - f"Pushing local changes for [bold]{project.account_id}/{project.project_id}[/bold]..." - ) + project = cls._load_project(base_path) + info(f"Pushing local changes for [bold]{project.account_id}/{project.project_id}[/bold]...") - projection_json = cls._parse_from_projection_json( - from_projection, - json_errors=output_json or output_commands, + push_ok, output = project.push_project( + force=force, skip_validation=skip_validation, dry_run=dry_run, format=format ) - - original_branch_id = project.branch_id - push_ok, output, commands = project.push_project( - force=force, - skip_validation=skip_validation, - dry_run=dry_run, - format=format, - email=email, - projection_json=projection_json, - ) - new_branch_name = None - if original_branch_id != project.branch_id: - new_branch_name = project.get_current_branch() - if output_json or output_commands: - json_output = { - "success": push_ok, - "message": output, - "dry_run": dry_run, - } - if new_branch_name: - json_output["new_branch_name"] = new_branch_name - json_output["new_branch_id"] = project.branch_id - if output_commands: - json_output["commands"] = commands_to_dicts(commands) - json_print(json_output) - if not push_ok: - sys.exit(1) - return - - if new_branch_name: - 
-            warning(f"Created and switched to new branch '{new_branch_name}'.")

         if push_ok:
             success(f"Pushed {project.account_id}/{project.project_id} to Agent Studio.")
         else:
             error(f"Failed to push {project.account_id}/{project.project_id} to Agent Studio.")
             plain(output)
+        return project
+
     @classmethod
-    def status(cls, base_path: str, output_json: bool = False) -> None:
+    def status(cls, base_path: str) -> None:
         """Check the changed files of the project."""
-        project = cls._load_project(base_path, output_json=output_json)
+        project = cls._load_project(base_path)
         files_with_conflicts, modified_files, new_files, deleted_files = project.project_status()

-        if output_json:
-            json_output = {
-                "files_with_conflicts": files_with_conflicts,
-                "modified_files": modified_files,
-                "new_files": new_files,
-                "deleted_files": deleted_files,
-            }
-            json_print(json_output)
-            return
-
         branch_info = project.get_current_branch()

         print_status(
@@ -1093,41 +796,18 @@ def status(cls, base_path: str, output_json: bool = False) -> None:
         plain("\n[muted]No changes detected.[/muted]")

     @classmethod
-    def revert(
-        cls,
-        base_path: str,
-        all_files: bool = False,
-        files: list[str] = None,
-        output_json: bool = False,
-    ) -> None:
+    def revert(cls, base_path: str, all_files: bool = False, files: list[str] = None) -> None:
         """Revert changes in the project."""
         if not all_files and not files:
-            if output_json:
-                json_print(
-                    {
-                        "success": False,
-                        "error": "No files specified to revert. Use --all or list files.",
-                    }
-                )
-                sys.exit(1)
             error("No files specified to revert. Use [bold]--all[/bold] to revert all changes.")
             return

-        project = cls._load_project(base_path, output_json=output_json)
+        project = cls._load_project(base_path)

         # If relative paths are provided, convert them to absolute paths
         files = [os.path.abspath(os.path.join(os.getcwd(), file)) for file in files or []]

         files_reverted = project.revert_changes(all_files=all_files, files=files)
-        if output_json:
-            json_print(
-                {
-                    "success": bool(files_reverted),
-                    "files_reverted": files_reverted,
-                }
-            )
-            return
-
         if not files_reverted:
             plain("[muted]No changes to revert.[/muted]")
             return
@@ -1135,37 +815,25 @@ def revert(
         success("Changes reverted successfully.")

     @classmethod
-    def _diff(
-        cls, base_path: str, files: list[str] = None, output_json: bool = False
-    ) -> dict[str, str]:
-        """Compute local diffs; may print a human hint when there are no changes."""
+    def _diff(cls, base_path: str, files: list[str] = None) -> Optional[dict[str, str]]:
+        """Show the changes made to the project."""
-        project = cls._load_project(base_path, output_json=output_json)
+        project = cls._load_project(base_path)

         files = [os.path.abspath(os.path.join(os.getcwd(), file)) for file in files or []]
-        diffs = project.get_diffs(all_files=not files, files=files) or {}
+        diffs = project.get_diffs(all_files=not files, files=files)

-        if not diffs and not output_json:
+        if not diffs:
             plain("[muted]No changes detected.[/muted]")
+            return None

         return diffs

     @classmethod
-    def diff(cls, base_path: str, files: list[str] = None, output_json: bool = False) -> None:
+    def diff(cls, base_path: str, files: list[str] = None) -> None:
         """Show the changes made to the project."""
-        diffs = cls._diff(base_path, files, output_json=output_json)
-        if output_json:
-            json_print(
-                {
-                    "diffs": diffs,
-                }
-            )
-            return
-
-        if not diffs:
-            return
-
+        diffs = cls._diff(base_path, files) or {}
         for file_path, diff_text in diffs.items():
             console.rule(f"[bold]{file_path}[/bold]")
             print_diff(diff_text)
@@ -1189,7 +857,7 @@ def _review(
             diffs = project.diff_remote_named_versions(before_name, after_name) or {}
         else:
             # Compare local vs remote (existing behavior)
-            diffs = cls._diff(base_path)
+            diffs = cls._diff(base_path) or {}

         if not diffs:
             return {}
@@ -1257,20 +925,11 @@ def delete_gists(cls) -> None:
         return

     @classmethod
-    def branch_list(cls, base_path: str, output_json: bool = False) -> None:
+    def branch_list(cls, base_path: str) -> None:
         """List branches in the Agent Studio project."""
-        project = cls._load_project(base_path, output_json=output_json)
+        project = cls._load_project(base_path)
         current_branch, branches = project.get_branches()

-        if output_json:
-            json_output = {
-                "current_branch": current_branch,
-                "branches": branches,
-            }
-            json_print(json_output)
-            return
-
         if not branches:
             plain("[muted]No branches found.[/muted]")
             return
@@ -1279,88 +938,42 @@ def branch_list(cls, base_path: str, output_json: bool = False) -> None:
         if current_branch is None:
             warning(
-                f"Current local branch does not exist in Agent Studio. "
+                f"Current local branch '{project.branch_id}' does not exist in Agent Studio. "
                 "It may have been deleted or merged."
             )

     @classmethod
-    def branch_create(
-        cls, base_path: str, branch_name: str = None, output_json: bool = False
-    ) -> None:
+    def branch_create(cls, base_path: str, branch_name: str = None) -> None:
         """Create a new branch in the Agent Studio project."""
-        project = cls._load_project(base_path, output_json=output_json)
+        project = cls._load_project(base_path)

         if project.branch_id != "main":
-            if output_json:
-                json_print(
-                    {
-                        "success": False,
-                        "error": "Branches can only be created from the main branch (sandbox).",
-                    }
-                )
-            else:
-                error(
-                    "Branches can only be created from the [bold]main[/bold] branch (sandbox). "
-                    "Please switch and try again."
-                )
-            sys.exit(1)
+            error(
+                "Branches can only be created from the [bold]main[/bold] branch (sandbox). "
+                "Please switch and try again."
+            )
+            return

         if not branch_name:
-            if output_json:
-                json_print(
-                    {
-                        "success": False,
-                        "error": "branch create with --json requires a branch name argument.",
-                    }
-                )
-                sys.exit(1)
             branch_name = input("Enter the name of the new branch: ").strip()
             if not branch_name:
                 warning("No branch name provided. Exiting.")
                 return

         new_branch_id = project.create_branch(branch_name)
-        if output_json:
-            json_print(
-                {
-                    "success": bool(new_branch_id),
-                    "new_branch_id": new_branch_id,
-                    "branch_name": branch_name,
-                }
-            )
-            if not new_branch_id:
-                sys.exit(1)
-            return
-
         if new_branch_id:
             success(f"Branch '{branch_name}' created (ID: {new_branch_id})")
         else:
             error("Failed to create the branch.")
-            sys.exit(1)

     @classmethod
     def branch_switch(
-        cls,
-        base_path: str,
-        branch_name: str = None,
-        force: bool = False,
-        format: bool = False,
-        output_json: bool = False,
-        output_json_projection: bool = False,
-        from_projection: str = None,
+        cls, base_path: str, branch_name: str = None, force: bool = False, format: bool = False
     ) -> None:
         """Switch to a different branch in the Agent Studio project."""
-        project = cls._load_project(base_path, output_json=output_json)
+        project = cls._load_project(base_path)

         if not branch_name:
-            if output_json:
-                json_print(
-                    {
-                        "success": False,
-                        "error": "branch switch with --json requires a branch name argument.",
-                    }
-                )
-                sys.exit(1)
             # Drop down menu to select branch
             current_branch, branches = project.get_branches()
             if not branches:
@@ -1386,64 +999,21 @@ def branch_switch(
             selected_option = branch_menu
             branch_name = selected_option.replace(" (current)", "")

-        projection_json = cls._parse_from_projection_json(
-            from_projection,
-            json_errors=output_json or output_json_projection,
-        )
-
-        ctx = (
-            console.status("[info]Saving resources...[/info]") if not output_json else nullcontext()
-        )
-        on_save = None
-
-        with ctx as status:
-            if status:
-
-                def on_save(current: int, total: int) -> None:
-                    status.update(f"[info]Saving resources ({current}/{total})...[/info]")
-
-            switch_ok, projection = project.switch_branch(
-                branch_name,
-                force=force,
-                format=format,
-                projection_json=projection_json,
-                on_save=on_save,
-            )
-
-        if output_json or output_json_projection:
-            json_output = {
-                "success": switch_ok,
-                "branch_name": branch_name,
-            }
-            if output_json_projection:
-                json_output["projection"] = projection
-            json_print(json_output)
-            if not switch_ok:
-                sys.exit(1)
-            return
-
+        switch_ok = project.switch_branch(branch_name, force=force, format=format)
         if switch_ok:
             success(f"Switched to branch '{branch_name}'.")
         else:
             error(f"Failed to switch to branch '{branch_name}'.")
-            sys.exit(1)

     @classmethod
-    def get_current_branch(cls, base_path: str, output_json: bool = False) -> None:
+    def get_current_branch(cls, base_path: str) -> None:
         """Get the current branch of the Agent Studio project."""
-        project = cls._load_project(base_path, output_json=output_json)
+        project = cls._load_project(base_path)
         current_branch = project.get_current_branch()

-        if output_json:
-            json_output = {
-                "current_branch": current_branch,
-            }
-            json_print(json_output)
-            return
-
         if current_branch is None:
             warning(
-                f"Current local branch does not exist in Agent Studio. "
+                f"Current local branch '{project.branch_id}' does not exist in Agent Studio. "
                 "It may have been deleted or merged."
             )
             return
@@ -1456,133 +1026,73 @@ def format(
         files: list[str] = None,
         check_only: bool = False,
         run_ty: bool = False,
-        output_json: bool = False,
     ) -> None:
         """Format project resources (Python via ruff, YAML/JSON via in-process formatting); optionally run ty."""
-        project = cls._load_project(base_path, output_json=output_json)
+        project = cls._load_project(base_path)
+        # Resolve to absolute paths so they match resource_mapping.file_path
         files_resolved: list[str] | None = None
         if files:
             files_resolved = [os.path.abspath(os.path.join(base_path, f)) for f in files]

-        if not output_json:
-            if check_only:
-                info("[bold]Check-only[/bold]: verifying formatting (no files will be modified).")
-            else:
-                info("[bold]Fix mode[/bold]: formatting project resources.")
-            plain("")
-            info(
-                "Checking project resources (Python + YAML/JSON)"
-                if check_only
-                else "Formatting project resources (Python + YAML/JSON)"
-            )
+        if check_only:
+            info("[bold]Check-only[/bold]: verifying formatting (no files will be modified).")
+        else:
+            info("[bold]Fix mode[/bold]: formatting project resources.")

-        affected, format_errors = project.format_files(files=files_resolved, check_only=check_only)
-        rel_affected = [os.path.relpath(p, base_path) or p for p in affected]
+        plain("")
+        step = (
+            "Checking project resources (Python + YAML/JSON)"
+            if check_only
+            else "Formatting project resources (Python + YAML/JSON)"
+        )
+        info(step)

+        affected, format_errors = project.format_files(files=files_resolved, check_only=check_only)
+        for msg in format_errors:
+            plain(f"[red]{msg}[/red]")
         if format_errors:
-            if output_json:
-                json_print(
-                    {
-                        "success": False,
-                        "check_only": check_only,
-                        "format_errors": format_errors,
-                        "affected": rel_affected,
-                        "ty_ran": False,
-                        "ty_returncode": None,
-                        "ty_timed_out": False,
-                    }
-                )
-            else:
-                for msg in format_errors:
-                    plain(f"[red]{msg}[/red]")
-                error("Format failed for some files.")
+            error("Format failed for some files.")
             sys.exit(1)
-
+            return
         if check_only and affected:
-            if output_json:
-                json_print(
-                    {
-                        "success": False,
-                        "check_only": check_only,
-                        "format_errors": [],
-                        "affected": rel_affected,
-                        "ty_ran": False,
-                        "ty_returncode": None,
-                        "ty_timed_out": False,
-                    }
-                )
-            else:
-                for path in affected:
-                    rel = os.path.relpath(path, base_path) or path
-                    plain(f"[red]{rel}[/red]")
-                info("Try [bold]poly format[/bold] to fix.")
-            sys.exit(1)
-
-        if not output_json:
             for path in affected:
                 rel = os.path.relpath(path, base_path) or path
-                plain(rel)
-            success("Passed.")
-            if check_only:
-                success("All checks passed (no changes written).")
-            else:
-                success("All issues fixed." if affected else "No issues found.")
+                plain(f"[red]{rel}[/red]")
+            info("Try [bold]poly format[/bold] to fix.")
+            sys.exit(1)
+            return
+        for path in affected:
+            rel = os.path.relpath(path, base_path) or path
+            plain(rel)
+        success("Passed.")
+        if check_only:
+            success("All checks passed (no changes written).")
+        else:
+            success("All issues fixed." if affected else "No issues found.")

-        ty_returncode: int | None = None
-        ty_timed_out = False
+        # Ty (type check only; no fix) — off by default; use --ty to enable.
         if run_ty:
             ty_cmd = [sys.executable, "-m", "ty"]
             if shutil.which("ty"):
                 ty_cmd = ["ty"]
-            if not output_json:
-                info("Type checking (ty)")
+            info("Type checking (ty)")
             try:
                 r = subprocess.run(
                     ty_cmd + ["check"],
                     cwd=base_path,
-                    capture_output=output_json,
+                    capture_output=False,
                     text=True,
                     timeout=15,
                     stdin=subprocess.DEVNULL,
                 )
-                ty_returncode = r.returncode
             except subprocess.TimeoutExpired:
-                ty_timed_out = True
-                if output_json:
-                    json_print(
-                        {
-                            "success": False,
-                            "check_only": check_only,
-                            "format_errors": [],
-                            "affected": rel_affected,
-                            "ty_ran": True,
-                            "ty_returncode": None,
-                            "ty_timed_out": True,
-                        }
-                    )
-                else:
-                    plain("[red]Timed out after 15s.[/red]")
-                sys.exit(1)
-
-        if not output_json and ty_returncode != 0:
+                plain("[red]Timed out after 15s.[/red]")
                 sys.exit(1)
-        if not output_json:
-            success("Passed.")
-
-        if output_json:
-            json_print(
-                {
-                    "success": not (run_ty and ty_returncode not in (None, 0)),
-                    "check_only": check_only,
-                    "format_errors": [],
-                    "affected": rel_affected,
-                    "ty_ran": run_ty,
-                    "ty_returncode": ty_returncode,
-                    "ty_timed_out": ty_timed_out,
-                }
-            )
-            if run_ty and ty_returncode != 0:
+                return
+            if r.returncode != 0:
                 sys.exit(1)
+                return
+            success("Passed.")

     @classmethod
     def chat(
@@ -1734,19 +1244,11 @@ def _run_chat_loop(
         return restart

     @classmethod
-    def validate_project(cls, base_path: str, output_json: bool = False) -> None:
+    def validate_project(cls, base_path: str) -> None:
         """Validate the project configuration locally."""
-        project = cls._load_project(base_path, output_json=output_json)
-        errors = project.validate_project()
-
-        if output_json:
-            json_output = {
-                "valid": bool(not errors),
-                "errors": errors,
-            }
-            json_print(json_output)
-            return
+        project = cls._load_project(base_path)
+        errors = project.validate_project()

         if not errors:
             success("Project configuration is valid.")
         else:
diff --git a/src/poly/output/console.py b/src/poly/console.py
similarity index 100%
rename from src/poly/output/console.py
rename to src/poly/console.py
diff --git a/src/poly/handlers/interface.py b/src/poly/handlers/interface.py
index 400d387..97d2f12 100644
--- a/src/poly/handlers/interface.py
+++ b/src/poly/handlers/interface.py
@@ -4,8 +4,6 @@

 from typing import Any, Optional

-from google.protobuf.message import Message
-
 from poly.handlers.platform_api import PlatformAPIHandler
 from poly.handlers.sync_client import SyncClientHandler
 from poly.resources import BaseResource, Resource
@@ -110,24 +108,13 @@ def pull_deployment_resources(
         """
         return self.sync_client.pull_deployment_resources(deployment_id)

-    def pull_resources(
-        self, projection_json: Optional[dict[str, Any]] = None
-    ) -> tuple[dict[type[Resource], dict[str, Resource]], dict[str, Any]]:
+    def pull_resources(self) -> dict[type[Resource], dict[str, Resource]]:
         """Fetch all resources for the specific project.

-        Args:
-            projection_json (Optional[dict[str, Any]]): A dictionary containing the projection.
-                If provided, the projection will be used instead of fetching it from the API.
-
         Returns:
             dict[type[Resource], dict[str, Resource]]: A dictionary mapping resource types
                 to their resources
-            dict[str, Any]: The projection data
         """
-        if projection_json is not None:
-            return SyncClientHandler.load_resources_from_projection(
-                projection_json
-            ), projection_json
         return self.sync_client.pull_resources()

     def push_resources(
@@ -142,80 +129,26 @@ def push_resources(
         """Upload multiple resources for the specific project.

         Args:
-            new_resources (dict[type[BaseResource], dict[str, BaseResource]]): New resources to upload
-            deleted_resources (dict[type[BaseResource], dict[str, BaseResource]]): Resources to delete
-            updated_resources (dict[type[BaseResource], dict[str, BaseResource]]): Updated resources to upload
+            new_resources (dict[type[Resource], dict[str, Resource]]): New resources to upload
+            deleted_resources (dict[type[Resource], dict[str, Resource]]): Resources to delete
+            updated_resources (dict[type[Resource], dict[str, Resource]]): Updated resources to upload
             dry_run (bool): If True, only log the upload actions without actually uploading
-            queue_pushes (bool): If True, queue the resources for pushing.
             email (str): Email to use for metadata creation.
                 If None, use the email of the current user.

         Returns:
             bool: True if the resources were pushed successfully, False otherwise
         """
-        self.queue_resources(
+        return self.sync_client.push_resources(
             deleted_resources=deleted_resources,
             new_resources=new_resources,
             updated_resources=updated_resources,
+            dry_run=dry_run,
+            queue_pushes=queue_pushes,
             email=email,
         )
-        if queue_pushes:
-            return True
-
-        if dry_run:
-            self.clear_command_queue()
-            return True
-
-        return self.send_queued_commands()
-
-    def queue_resources(
-        self,
-        deleted_resources: dict[type[BaseResource], dict[str, BaseResource]],
-        new_resources: dict[type[BaseResource], dict[str, BaseResource]],
-        updated_resources: dict[type[BaseResource], dict[str, BaseResource]],
-        email: Optional[str] = None,
-    ) -> list[Message]:
-        """Queue multiple resources for the specific project.
-
-        Args:
-            deleted_resources (dict[type[BaseResource], dict[str, BaseResource]]): Resources to delete
-            new_resources (dict[type[BaseResource], dict[str, BaseResource]]): New resources to upload
-            updated_resources (dict[type[BaseResource], dict[str, BaseResource]]): Updated resources to upload
-            email (str): Email to use for metadata creation.
-                If None, use the email of the current user.
-
-        Returns:
-            list[Message]: A list of queued Command protobuf messages.
-        """
-        return self.sync_client.queue_resources(
-            deleted_resources=deleted_resources,
-            new_resources=new_resources,
-            updated_resources=updated_resources,
-            email=email,
-        )
-
-    def send_queued_commands(self) -> bool:
-        """Send all queued commands as a batch and clear the queue.
-
-        Returns:
-            bool: True if the commands were sent successfully, False otherwise
-        """
-        return self.sync_client.send_queued_commands()
-
-    def clear_command_queue(self) -> None:
-        """Clear all queued commands without sending."""
-        self.sync_client.clear_command_queue()
-
-    def get_queued_commands(self) -> list[Message]:
-        """Get all queued commands.
-
-        Returns:
-            list[Message]: A list of queued Command protobuf messages.
-        """
-        return self.sync_client.get_queued_commands()
-
     def get_branches(self) -> dict[str, str]:
         """Get a list of branches.

@@ -251,7 +184,7 @@ def switch_branch(self, branch_id: str) -> bool:

     def merge_branch(
         self, message: str, conflict_resolutions: Optional[list[dict[str, Any]]] = None
-    ) -> tuple[bool, list[dict[str, str]], list[dict[str, str]]]:
+    ) -> tuple[list[dict[str, str]], list[dict[str, str]]]:
         """Merge the current branch into main.

         Args:
@@ -262,9 +195,7 @@ def merge_branch(
                 - value: Optional custom value (only used with custom strategy)

         Returns:
-            success (bool): True if the merge was successful, False otherwise
             list[dict[str, str]]: A list of conflict information if the merge failed, empty list if successful
-            list[dict[str, str]]: A list of error information if the merge failed, empty list if successful
         """
         return self.sync_client.merge_branch(message, conflict_resolutions)
diff --git a/src/poly/handlers/sdk.py b/src/poly/handlers/sdk.py
index cfd3de9..719b7bf 100644
--- a/src/poly/handlers/sdk.py
+++ b/src/poly/handlers/sdk.py
@@ -316,6 +316,9 @@ def merge_branch(
             response_data = response.json()
             # Check if this is a conflict response
             if "conflicts" in response_data or "hasConflicts" in response_data:
+                logger.warning(
+                    f"Merge has conflicts: {len(response_data.get('conflicts', []))} conflicts detected"
+                )
                 return response_data
             # Otherwise, it's a different error
             error_msg = f"API Error 400: {response_data}"
diff --git a/src/poly/handlers/sync_client.py b/src/poly/handlers/sync_client.py
index f5f2e5c..76a14f8 100644
--- a/src/poly/handlers/sync_client.py
+++ b/src/poly/handlers/sync_client.py
@@ -5,18 +5,17 @@

 import logging
 import uuid
-from copy import deepcopy
 from typing import Any, Optional

 from poly.handlers.protobuf.commands_pb2 import Command
 from poly.handlers.protobuf.handoff_pb2 import Handoff_SetDefault
 from poly.handlers.sdk import SourcererAPIError, SourcererSDK
 from poly.resources import (
-    ApiIntegration,
-    ApiIntegrationEnvironments,
-    ApiIntegrationOperation,
     ASRBiasing,
     AsrSettings,
+    ApiIntegration,
+    ApiIntegrationOperation,
+    ApiIntegrationEnvironments,
     BaseResource,
     ChatGreeting,
     ChatStylePrompt,
@@ -78,7 +77,7 @@ class SyncClientHandler:

     @property
     def branch_id(self) -> str:
         """Get the current branch ID."""
-        return self._sdk.branch_id
+        return self.sdk.branch_id

     def __init__(
         self,
@@ -102,46 +101,40 @@ def __init__(
             project_id=project_id,
             branch_id=branch_id,
         )
+        # Switch to the specified branch if exists and provided.
+        if branch_id and branch_id != "main":
+            found_branches = self._sdk.fetch_branches().get("branches", [])
+            branch = next((b for b in found_branches if b.get("branchId") == branch_id), None)
+            if branch:
+                self._sdk.branch_id = branch_id
+            else:
+                logger.warning(f"Branch {branch_id} does not exist. Switching to 'main' branch.")
+                self._sdk.branch_id = "main"

     @property
     def sdk(self) -> SourcererSDK:
         """Get the Sourcerer SDK instance."""
         return self._sdk

-    def assert_branch_exists(self) -> str:
-        """Assert that the branch exists and switch to 'main' if it doesn't."""
-        if self.branch_id != "main":
-            found_branches = self._sdk.fetch_branches().get("branches", [])
-            branch = next((b for b in found_branches if b.get("branchId") == self.branch_id), None)
-            if not branch:
-                logger.info(
-                    f"Branch ID:'{self.branch_id}' does not exist. Switching to 'main' branch."
-                )
-                self._sdk.branch_id = "main"
-        return self.branch_id
-
-    @classmethod
-    def load_resources_from_projection(
-        cls, projection: dict
-    ) -> dict[type[Resource], dict[str, Resource]]:
+    def _load_resources(self, projection: dict) -> dict[type[Resource], dict[str, Resource]]:
         return {
-            Topic: cls._read_topics_from_projection(projection),
-            Function: cls._read_functions_from_projection(projection),
-            Entity: cls._read_entities_from_projection(projection),
-            Variable: cls._read_variables_from_projection(projection),
-            **cls._read_agent_settings_from_projection(projection),
-            **cls._read_channel_settings_from_projection(projection),
-            **cls._read_flows_from_projection(projection),
-            ExperimentalConfig: cls._read_experimental_config_from_projection(projection),
-            SMSTemplate: cls._read_sms_templates_from_projection(projection),
-            Handoff: cls._read_handoffs_from_projection(projection),
-            **cls._read_variants_from_projection(projection),
-            PhraseFilter: cls._read_phrase_filters_from_projection(projection),
-            Pronunciation: cls._read_pronunciations_from_projection(projection),
-            KeyphraseBoosting: cls._read_keyphrase_boosting_from_projection(projection),
-            TranscriptCorrection: cls._read_transcript_corrections_from_projection(projection),
-            **cls._read_asr_settings_from_projection(projection),
-            ApiIntegration: cls._read_api_integrations_from_projection(projection),
+            Topic: self._read_topics_from_projection(projection),
+            Function: self._read_functions_from_projection(projection),
+            Entity: self._read_entities_from_projection(projection),
+            Variable: self._read_variables_from_projection(projection),
+            **self._read_agent_settings_from_projection(projection),
+            **self._read_channel_settings_from_projection(projection),
+            **self._read_flows_from_projection(projection),
+            ExperimentalConfig: self._read_experimental_config_from_projection(projection),
+            SMSTemplate: self._read_sms_templates_from_projection(projection),
+            Handoff: self._read_handoffs_from_projection(projection),
+            **self._read_variants_from_projection(projection),
+            PhraseFilter: self._read_phrase_filters_from_projection(projection),
+            Pronunciation: self._read_pronunciations_from_projection(projection),
+            KeyphraseBoosting: self._read_keyphrase_boosting_from_projection(projection),
+            TranscriptCorrection: self._read_transcript_corrections_from_projection(projection),
+            **self._read_asr_settings_from_projection(projection),
+            ApiIntegration: self._read_api_integrations_from_projection(projection),
         }  # ty:ignore[invalid-return-type]

     def pull_deployment_resources(
@@ -157,31 +150,27 @@ def pull_deployment_resources(
         logger.info(
             f"Fetching project data for project {self.project_id} for deployment {deployment_id}"
         )
-        self.assert_branch_exists()
         projection = self.sdk.fetch_deployment_projection(deployment_id=deployment_id)
         logger.info(
             f"Successfully fetched project data for project {self.project_id} for deployment {deployment_id}"
         )
-        return self.load_resources_from_projection(projection)
+        return self._load_resources(projection)

-    def pull_resources(self) -> tuple[dict[type[Resource], dict[str, Resource]], dict[str, Any]]:
+    def pull_resources(self) -> dict[type[Resource], dict[str, Resource]]:
         """Fetch all resources from a specific project.

         Returns:
             dict[type[Resource], dict[str, Resource]]: A dictionary mapping resource types
                 to their resources
-            dict[str, Any]: The projection data
         """
         logger.info(
             f"Fetching project data for project {self.project_id} on branch {self.sdk.branch_id}"
         )
-        self.assert_branch_exists()
         projection = self.sdk.fetch_projection(force_refresh=True)
-        logger.debug(f"Projection: {projection}")
         logger.info(
             f"Successfully fetched project data for project {self.project_id} on branch {self.sdk.branch_id}"
         )
-        return self.load_resources_from_projection(projection), projection
+        return self._load_resources(projection)

     @staticmethod
     def _read_topics_from_projection(projection: dict) -> dict[str, Topic]:
@@ -880,14 +869,16 @@ def _read_api_integrations_from_projection(
             Variable,
         ]

-    def queue_resources(
+    def push_resources(
         self,
         deleted_resources: dict[type[BaseResource], dict[str, BaseResource]],
         new_resources: dict[type[BaseResource], dict[str, BaseResource]],
         updated_resources: dict[type[BaseResource], dict[str, BaseResource]],
+        dry_run: bool = False,
+        queue_pushes: bool = False,
         email: Optional[str] = None,
-    ) -> list[Command]:
-        """Queue multiple resources for the specific project.
+    ) -> bool:
+        """Upload multiple resources for the specific project.

         Sends in order:
         - delete
@@ -898,16 +889,18 @@ def queue_resources(
             deleted_resources (dict[type[BaseResource], dict[str, BaseResource]]): Resources to delete
             new_resources (dict[type[BaseResource], dict[str, BaseResource]]): New resources to upload
             updated_resources (dict[type[BaseResource], dict[str, BaseResource]]): Updated resources to upload
-            email (str): Email to use for metadata creation.
+            dry_run (bool): If True, only log the upload actions without actually
+                uploading

         Returns:
-            list[Command]: A list of queued Command protobuf messages.
+            bool: True if the resources were pushed successfully, False otherwise
         """
         metadata = self.sdk.create_metadata()
         if email:
             metadata.created_by = email

-        commands = []
+        if self.sdk.branch_id == "main":
+            self.create_branch()  # creates branch and switches to it

         delete_resources_priority: list[type[BaseResource]] = []
         for resource_type in self.PRIORITY_DELETE_TYPES:
@@ -920,7 +913,7 @@ def queue_resources(
         for resource_type in delete_resources_priority:
             for resource_id, resource in deleted_resources.get(resource_type, {}).items():
                 delete_type = resource.delete_command_type
-                commands.append(
+                self.sdk.add_command_to_queue(
                     Command(
                         type=delete_type,
                         command_id=str(uuid.uuid4()),
@@ -941,7 +934,7 @@ def queue_resources(
             resources = new_resources.get(resource_type, {})
             for resource_id, resource in resources.items():
                 create_type = resource.create_command_type
-                commands.append(
+                self.sdk.add_command_to_queue(
                     Command(
                         type=create_type,
                         command_id=str(uuid.uuid4()),
@@ -962,7 +955,7 @@ def queue_resources(
             resources = updated_resources.get(resource_type, {})
             for resource_id, resource in resources.items():
                 update_type = resource.update_command_type
-                commands.append(
+                self.sdk.add_command_to_queue(
                     Command(
                         type=update_type,
                         command_id=str(uuid.uuid4()),
@@ -975,7 +968,7 @@ def queue_resources(
         for resource_dict in [new_resources, updated_resources]:
             for resource_id, resource in resource_dict.get(Handoff, {}).items():
                 if isinstance(resource, Handoff) and resource.is_default:
-                    commands.append(
+                    self.sdk.add_command_to_queue(
                         Command(
                             type="handoff_set_default",
                             command_id=str(uuid.uuid4()),
@@ -984,49 +977,21 @@ def queue_resources(
                         )
                     )

-        for command in commands:
-            self.sdk.add_command_to_queue(command)
-
-        logger.info(f"Queued {len(commands)} commands")
-        logger.debug(f"Commands: {commands!r}")
-        return commands
-
-    def send_queued_commands(self) -> bool:
-        """Send all queued commands as a batch and clear the queue.
-
-        Returns:
-            bool: True if the commands were sent successfully, False otherwise
-        """
-        if self.sdk.get_queue_size() == 0:
-            logger.info("No commands to send")
-            return True
-
-        self.assert_branch_exists()
-
-        # Creates branch and switches to it
-        if self.sdk.branch_id == "main":
-            self.create_branch()
-
-        try:
-            logger.info(f"Sending {len(self.sdk._command_queue)} commands to {self.sdk.branch_id}")
-            self.sdk.send_command_batch()
+        if not (dry_run or queue_pushes):
+            logger.info(f"Sending commands command_queue={self.sdk._command_queue!r}")
+            try:
+                self.sdk.send_command_batch()
+            except SourcererAPIError as e:
+                logger.error(f"Failed to push resources: {e}")
+                # If the batch fails, we assume all commands failed
+                return False
+        elif queue_pushes:
             return True
-        except SourcererAPIError as e:
-            logger.error(f"Failed to send commands: {e}")
-            return False
+        elif dry_run:
+            logger.info(f"Created commands command_queue={self.sdk._command_queue!r}")
+            self.sdk.clear_queue()

-    def clear_command_queue(self) -> None:
-        """Clear all queued commands without sending."""
-        logger.info(f"Clearing {len(self.sdk._command_queue)} commands")
-        self.sdk.clear_queue()
-
-    def get_queued_commands(self) -> list[Command]:
-        """Get all queued commands.
-
-        Returns:
-            list[Command]: A list of queued Command protobuf messages.
-        """
-        return deepcopy(self.sdk._command_queue)
+        return True

     def switch_branch(self, branch_id: str) -> bool:
         """Switch to a different branch within the same project.
@@ -1037,16 +1002,14 @@ def switch_branch(self, branch_id: str) -> bool:
         Returns:
             bool: True if the switch was successful, False otherwise
         """
-        self.assert_branch_exists()
-
         if self.sdk.branch_id == branch_id:
-            logger.info(f"Already on branch ID:'{branch_id}'")
+            logger.info(f"Already on branch {branch_id}")
             return True

         if branch_id == "main":
             self.sdk.branch_id = "main"
             self.sdk.get_project_data()
-            logger.info(f"Switched to branch ID:'{branch_id}'")
+            logger.info(f"Switched to branch {branch_id}")
             return True

         if found_branches := self.sdk.fetch_branches().get("branches"):
@@ -1056,12 +1019,11 @@ def switch_branch(self, branch_id: str) -> bool:
             # Re-fetch project data to ensure the SDK is up-to-date
             self.sdk.clear_cache()
             self.sdk.get_project_data()
-            logger.info(f"Switched to branch ID:'{branch_id}'")
+            logger.info(f"Switched to branch {branch_id}")
             return True
         else:
-            logger.error(f"Branch ID:'{branch_id}' does not exist.")
+            logger.error(f"Branch {branch_id} does not exist.")
             return False
-        return False

     def create_branch(self, branch_name: Optional[str] = None) -> str:
         """Create a new branch for the project
@@ -1077,10 +1039,9 @@ def create_branch(self, branch_name: Optional[str] = None) -> str:

         if branch_name is None:
             metadata = self.sdk.create_metadata()
-            time_suffix = f"{metadata.created_at.seconds % 100000:05d}"
-            random_suffix = uuid.uuid4().hex[:4]
-            suffix = f"{time_suffix}-{random_suffix}"  # to avoid duplicate names
-            branch_name = f"ADK-{suffix}"
+            email = metadata.created_by.split("@")[0]
+            suffix = f"{metadata.created_at.seconds % 10000:04d}"  # to avoid duplicate names
+            branch_name = f"ADK-{email}-{suffix}"

         logger.info(f"Creating new branch '{branch_name}' from 'main' branch")

@@ -1088,9 +1049,7 @@ def create_branch(self, branch_name: Optional[str] = None) -> str:
             expected_main_last_known_sequence=self.sdk._last_known_sequence,
             branch_name=branch_name,
         )
-        logger.info(
-            f"Created and switched to new branch. Name:'{branch_name}' ID:'{self.sdk.branch_id}'"
-        )
+        logger.warning(f"Created and switched to new branch '{self.sdk.branch_id}'")
        return self.sdk.branch_id

     def get_branches(self) -> dict[str, str]:
@@ -1119,20 +1078,20 @@ def delete_branch(self, branch_id):
             logger.error("Cannot delete 'main' branch.")
             return False

-        logger.info(f"Deleting branch ID:'{branch_id}'")
+        logger.info(f"Deleting branch '{branch_id}'")

         try:
             self.sdk.delete_branch(branch_id=branch_id)
         except SourcererAPIError as e:
-            logger.error(f"Failed to delete branch ID:'{branch_id}': {e}")
+            logger.error(f"Failed to delete branch '{branch_id}': {e}")
             return False

-        logger.info(f"Successfully deleted branch ID:'{branch_id}'")
+        logger.info(f"Successfully deleted branch '{branch_id}'")
         return True

     def merge_branch(
         self, message: str, conflict_resolutions: Optional[list[dict[str, Any]]] = None
-    ) -> tuple[bool, list[dict[str, str]], list[dict[str, str]]]:
+    ) -> tuple[list[dict[str, str]], list[dict[str, str]]]:
         """Merge the current branch into main.
Args: @@ -1143,15 +1102,12 @@ def merge_branch( - value: Optional custom value (only used with custom strategy) Returns: - success (bool): True if the merge was successful, False otherwise list[dict[str, str]]: A list of conflict information if the merge failed, empty list if successful list[dict[str, str]]: A list of error information if the merge failed, empty list if successful """ - self.assert_branch_exists() - if self.sdk.branch_id == "main": logger.error("Cannot merge 'main' branch into itself.") - return False, [], [] + return [], [] logger.info(f"Merging branch '{self.sdk.branch_id}' into 'main'") @@ -1163,7 +1119,9 @@ def merge_branch( ) except SourcererAPIError as e: logger.error(f"Failed to merge branch '{self.sdk.branch_id}' into 'main': {e}") - return False, [], [] + return [], [] + + conflicts, errors = [], [] if result.get("hasConflicts", False) or result.get("errors", []): logger.error( @@ -1171,12 +1129,10 @@ def merge_branch( ) conflicts = result.get("conflicts", []) errors = result.get("errors", []) - return False, conflicts, errors logger.info(f"Successfully merged branch '{self.sdk.branch_id}' into 'main'") - return True, [], [] + return conflicts, errors def get_branch_chat_info(self, branch_id: str) -> dict[str, Any]: """Get deployment info needed to start a draft chat on a branch.""" - self.assert_branch_exists() return self.sdk.get_branch_chat_info(branch_id) diff --git a/src/poly/output/__init__.py b/src/poly/output/__init__.py deleted file mode 100644 index 742efb2..0000000 --- a/src/poly/output/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright PolyAI Limited diff --git a/src/poly/output/json_output.py b/src/poly/output/json_output.py deleted file mode 100644 index a4fcb65..0000000 --- a/src/poly/output/json_output.py +++ /dev/null @@ -1,31 +0,0 @@ -"""JSON output helpers for machine-readable CLI output. 
- -Copyright PolyAI Limited -""" - -import json -import sys - -from google.protobuf.json_format import MessageToDict - - -def json_print(data: dict) -> None: - """Print data as formatted JSON to stdout. - - Args: - data: Dictionary to serialize and print. - """ - json.dump(data, sys.stdout, indent=2, default=str) - sys.stdout.write("\n") - - -def commands_to_dicts(commands: list) -> list[dict]: - """Convert a list of Command protobufs to JSON-serializable dicts. - - Args: - commands: List of Command protobuf messages. - - Returns: - list[dict]: Each Command serialized via MessageToDict. - """ - return [MessageToDict(cmd, preserving_proto_field_name=True) for cmd in commands] diff --git a/src/poly/project.py b/src/poly/project.py index 78d5fde..88e3033 100644 --- a/src/poly/project.py +++ b/src/poly/project.py @@ -10,11 +10,8 @@ import uuid from dataclasses import dataclass, fields from datetime import datetime -from collections.abc import Callable from typing import Any, Optional, TypeAlias -from google.protobuf.message import Message - import poly.resources.resource_utils as resource_utils import poly.utils as utils from poly.handlers.interface import ( @@ -321,9 +318,7 @@ def init_project( account_id: str, project_id: str, format: bool = False, - projection_json: Optional[dict[str, Any]] = None, - on_save: Callable[[int, int], None] | None = None, - ) -> tuple["AgentStudioProject", dict[str, Any]]: + ) -> "AgentStudioProject": """Get project from the Agent Studio Interactor Args: @@ -332,14 +327,9 @@ def init_project( account_id (str): The account ID of the project project_id (str): The project ID format (bool): If True, format resources after pulling - projection_json (dict[str, Any]): A dictionary containing the projection - If provided, the projection will be used instead of fetching it from the API. - on_save: Optional callback invoked with (current, total) - during the resource save loop. 
Returns: AgentStudioProject: An instance of AgentStudioProject with functions loaded - dict[str, Any]: The projection data """ base_path = os.path.join(base_path, account_id, project_id) @@ -353,41 +343,28 @@ def init_project( last_updated=datetime.now(), branch_id="main", ) - project.resources, projection = project.api_handler.pull_resources( - projection_json=projection_json - ) + project.resources = project.api_handler.pull_resources() project._check_no_duplicate_resource_paths(project.resources) resource_mappings: list[ResourceMapping] = project._make_resource_mappings( project.resources ) - all_resources = project.all_resources - total = len(all_resources) - - MultiResourceYamlResource._file_cache.clear() - - for i, resource in enumerate(all_resources, 1): - if on_save: - on_save(i, total) - is_multi = isinstance(resource, MultiResourceYamlResource) + # Save functions and topics + for resource in project.all_resources: resource.save( base_path, resource_mappings=resource_mappings, resource_name=resource.name, format=format, - save_to_cache=is_multi, ) - MultiResourceYamlResource.write_cache_to_file() - MultiResourceYamlResource._file_cache.clear() - project.save_config(write_project_yaml=True) utils.export_decorators(DECORATORS, base_path) utils.save_imports(base_path) - return project, projection + return project def save_config(self, write_project_yaml: bool = False) -> None: """Save the project configuration to a file @@ -414,11 +391,7 @@ def save_config(self, write_project_yaml: bool = False) -> None: yaml_content = resource_utils.dump_yaml(config_dict) f.write(yaml_content) - def load_project( - self, - preserve_not_loaded_resources: bool = False, - projection_json: Optional[dict[str, Any]] = None, - ) -> None: + def load_project(self, preserve_not_loaded_resources: bool = False) -> None: """Load the current state of project on Agent Studio into memory This is used when no current resources are loaded. 
@@ -427,10 +400,8 @@ def load_project( preserve_not_loaded_resources: If True, retain the current _not_loaded_resources value across the load (used when reloading for comparison without affecting local state). - projection_json: If set, build resources from this projection dict - instead of fetching from the API (same shape as a sourcerer projection). """ - resources, _ = self.api_handler.pull_resources(projection_json=projection_json) + resources = self.api_handler.pull_resources() self._check_no_duplicate_resource_paths(resources) self.resources = resources @@ -439,13 +410,7 @@ def load_project( self._not_loaded_resources = [] self.save_config() - def pull_project( - self, - force: bool = False, - format: bool = False, - projection_json: Optional[dict[str, Any]] = None, - on_save: Callable[[int, int], None] | None = None, - ) -> tuple[list[str], dict[str, Any]]: + def pull_project(self, force: bool = False, format: bool = False) -> list[str]: """Pull the project configuration from the Agent Studio Interactor. If there are local changes, it will merge them with the incoming changes. @@ -462,19 +427,14 @@ def pull_project( Returns: list[str]: A list of file names with merge conflicts. 
- dict[str, Any]: The projection data """ # ------- # Pull resources # ------- - incoming_resources, projection = self.api_handler.pull_resources( - projection_json=projection_json - ) - # Only update branch id if we used the API to pull the resources - if projection_json is None: - self.branch_id = self.api_handler.branch_id + incoming_resources = self.api_handler.pull_resources() + self.branch_id = self.api_handler.branch_id self._check_no_duplicate_resource_paths(incoming_resources) # ------- @@ -486,7 +446,6 @@ def pull_project( incoming_resources=incoming_resources, force=force, format=format, - on_save=on_save, ) # ------- @@ -522,7 +481,7 @@ def pull_project( utils.save_imports(self.root_path) self.save_config() - return files_with_conflicts, projection + return files_with_conflicts @staticmethod def _delete_empty_folders(folder_path: str) -> None: @@ -588,10 +547,7 @@ def _update_multi_resource_yaml_resources( incoming_resource_mappings: list[ResourceMapping], force: bool, format: bool = False, - on_save: Callable[[int, int], None] | None = None, - progress_offset: int = 0, - progress_total: int = 0, - ) -> tuple[list[str], int]: + ) -> list[str]: """Merge MultiResourceYaml resources when pulling As files are merged on a per file basis, we must first compute the whole file: @@ -681,10 +637,6 @@ def _update_multi_resource_yaml_resources( ): resource_type.delete_resource(file_path, save_to_cache=True) - progress_offset += len(resources) - if on_save: - on_save(progress_offset, progress_total) - incoming_file_contents = { file: resource_utils.dump_yaml(top_level_yaml_dict) for file, (_, top_level_yaml_dict) in MultiResourceYamlResource._file_cache.items() @@ -727,7 +679,7 @@ def _update_multi_resource_yaml_resources( MultiResourceYamlResource.save_to_file(merged_contents, file) MultiResourceYamlResource._file_cache.clear() - return files_with_conflicts, progress_offset + return files_with_conflicts def _update_pulled_resources( self, @@ -735,7 +687,6 @@ def 
_update_pulled_resources( incoming_resources: ResourceMap, force: bool, format: bool = False, - on_save: Callable[[int, int], None] | None = None, ) -> list[str]: files_with_conflicts = [] @@ -750,34 +701,26 @@ def _update_pulled_resources( ) # Merging is done on a per file basis. - # For most resources - a resource is a single file - # For MultiResourceYamlResources - a resource is a part of a file, - So first compute the whole file, then do merge process separately for each file. - total = sum(len(res) for res in incoming_resources.values()) - - multi_conflicts, current = self._update_multi_resource_yaml_resources( - original_resources=self.resources, - incoming_resources=incoming_resources, - original_resource_mappings=original_resource_mappings, - incoming_resource_mappings=incoming_resource_mappings, - force=force, - format=format, - on_save=on_save, - progress_offset=0, - progress_total=total, + # For most resources, a resource is a single file + # For MultiResourceYamlResources, a resource is part of a file, + # So we must first compute the whole file, then do the merge process separately for each file. 
+ files_with_conflicts.extend( + self._update_multi_resource_yaml_resources( + original_resources=self.resources, + incoming_resources=incoming_resources, + original_resource_mappings=original_resource_mappings, + incoming_resource_mappings=incoming_resource_mappings, + force=force, + format=format, + ) ) - files_with_conflicts.extend(multi_conflicts) - # For other resources, we follow the usual process for resource_type, incoming in incoming_resources.items(): if issubclass(resource_type, MultiResourceYamlResource): continue for resource_id, incoming_resource in incoming.items(): - current += 1 - if on_save: - on_save(current, total) # If force is True, overwrite local changes # If the resource is not loaded, save it directly if force or ( @@ -891,72 +834,6 @@ def _update_pulled_resources( return files_with_conflicts - def _stage_commands( - self, - new_state: ResourceMap, - new_resources: ResourceMap, - updated_resources: ResourceMap, - deleted_resources: ResourceMap, - email: Optional[str] = None, - ) -> list[Message]: - """Stage commands for the project.""" - - # Group flow resources together - # Creating flow config, group all new steps/functions under it and remove from - # new resources - push_changes = self._clean_resources_before_push( - new_state, - new_resources, - updated_resources, - deleted_resources, - ) - new_resources = push_changes.main.new - updated_resources = push_changes.main.updated - deleted_resources = push_changes.main.deleted - pre_changes = push_changes.pre - post_changes = push_changes.post - - # Assign positions to new flows - new_resources, updated_resources = self._assign_flow_positions( - new_resources, - updated_resources, - new_state, - ) - - # Queue new/updated/deleted resources - commands = [] - if pre_changes.new or pre_changes.deleted or pre_changes.updated: - commands.extend( - self.api_handler.queue_resources( - new_resources=pre_changes.new, - deleted_resources=pre_changes.deleted, - updated_resources=pre_changes.updated, - 
email=email, - ) - ) - - if new_resources or deleted_resources or updated_resources: - commands.extend( - self.api_handler.queue_resources( - new_resources=new_resources, - deleted_resources=deleted_resources, - updated_resources=updated_resources, - email=email, - ) - ) - - if post_changes.new or post_changes.deleted or post_changes.updated: - commands.extend( - self.api_handler.queue_resources( - new_resources=post_changes.new, - deleted_resources=post_changes.deleted, - updated_resources=post_changes.updated, - email=email, - ) - ) - - return commands - def push_project( self, force=False, @@ -964,8 +841,7 @@ def push_project( dry_run=False, format=False, email=None, - projection_json: Optional[dict[str, Any]] = None, - ) -> tuple[bool, str, list[Message]]: + ) -> tuple[bool, str]: """Push the project configuration to the Agent Studio Interactor. Args: @@ -973,40 +849,31 @@ def push_project( skip_validation (bool): If True, skip local validation. dry_run (bool): If True, do not actually push changes. format (bool): If True, format the resource before saving. - projection_json (dict[str, Any]): A dictionary containing the projection - If provided, the projection will be used instead of fetching it from the API. email (str): Email to use for metadata creation. If None, use the email of the current user. Returns: - Tuple[bool, str, list[Message]]: - - Boolean indicating success. - - String message. - - List of commands serialized to protobuf. + Tuple[bool, str]: A tuple containing a boolean indicating success, + and a string message. 
""" if not dry_run: # If force, load latest version of the project # to compare against if force: - self.load_project( - preserve_not_loaded_resources=True, projection_json=projection_json - ) + self.load_project(preserve_not_loaded_resources=True) # If not force, pull and merge latest version of the project else: - files_with_conflicts, _ = self.pull_project( - format=format, projection_json=projection_json - ) + files_with_conflicts = self.pull_project(format=format) if files_with_conflicts: conflicts = "\n- ".join(files_with_conflicts) return ( False, f"Merge conflicts detected in the following files:\n- {conflicts}\nPlease resolve the conflicts and try again.", - [], ) - # Push Algorithm + # Push Algorithm # 1. Get new/kept/deleted resources new_resource_mappings, kept_resource_mappings, deleted_resource_mappings = ( self.find_new_kept_deleted(self.discover_local_resources()) @@ -1074,7 +941,7 @@ def push_project( deleted_resources.update(subresource_changes.deleted) if not (updated_resources or new_resources or deleted_resources): - return False, "No changes detected", [] + return False, "No changes detected" # 4. Validate all resources with new state if not skip_validation: @@ -1083,17 +950,111 @@ def push_project( ) if validation_errors: error_messages = "\n".join(validation_errors) - return False, f"Validation errors detected:\n{error_messages}", [] + return False, f"Validation errors detected:\n{error_messages}" - commands = self._stage_commands( - new_state, new_resources, updated_resources, deleted_resources, email=email + # 5. 
Group flow resources together + # Creating flow config, group all new steps/functions under it and remove from + # new resources + push_changes = self._clean_resources_before_push( + new_state, + new_resources, + updated_resources, + deleted_resources, + ) + new_resources = push_changes.main.new + updated_resources = push_changes.main.updated + deleted_resources = push_changes.main.deleted + pre_changes = push_changes.pre + post_changes = push_changes.post + + # Assign positions to new flows + new_resources, updated_resources = self._assign_flow_positions( + new_resources, + updated_resources, + new_state, ) - if not dry_run: - success = self.api_handler.send_queued_commands() - self.branch_id = self.api_handler.branch_id - else: - self.api_handler.clear_command_queue() - success = True + + pre_and_post_push = any( + [ + pre_changes.new, + pre_changes.updated, + pre_changes.deleted, + post_changes.new, + post_changes.updated, + post_changes.deleted, + ] + ) + + # Assign positions to new flows + for flow_config in new_resources.get(FlowConfig, {}).values(): + if not isinstance(flow_config, FlowConfig): + raise TypeError(f"Flow config is not a FlowConfig: {flow_config}") + resource_utils.assign_flow_positions(flow_config.steps, flow_config.start_step) + + # Assign positions to flows with new/updated steps + updated_flow_ids = set() + for flow_step in ( + list(new_resources.get(FlowStep, {}).values()) + + list(updated_resources.get(FlowStep, {}).values()) + + list(new_resources.get(FunctionStep, {}).values()) + + list(updated_resources.get(FunctionStep, {}).values()) + ): + if not isinstance(flow_step, BaseFlowStep): + raise TypeError(f"Flow step is not a FlowStep: {flow_step}") + updated_flow_ids.add(flow_step.flow_id) + + for updated_flow_id in updated_flow_ids: + flow_config = new_state.get(FlowConfig, {}).get(updated_flow_id) + if not flow_config: + raise ValueError(f"Flow config not found for flow id: {updated_flow_id}") + if not isinstance(flow_config, 
FlowConfig): + raise TypeError(f"Flow config is not a FlowConfig: {flow_config}") + flow_steps = [ + step + for step in ( + list(new_state.get(FlowStep, {}).values()) + + list(new_state.get(FunctionStep, {}).values()) + ) + if isinstance(step, BaseFlowStep) and step.flow_id == updated_flow_id + ] + + resource_utils.assign_flow_positions(flow_steps, flow_config.start_step) + + # 6. Push new/updated/deleted resources + if self.branch_id: + logger.info(f"Pushing changes to branch {self.branch_id}") + + if pre_and_post_push: + self.api_handler.push_resources( + new_resources=pre_changes.new, + deleted_resources=pre_changes.deleted, + updated_resources=pre_changes.updated, + dry_run=dry_run, + email=email, + queue_pushes=True, + ) + + # Push changed resources (queue only when pre_push ran, so we send pre+main together) + success = self.api_handler.push_resources( + new_resources=new_resources, + deleted_resources=deleted_resources, + updated_resources=updated_resources, + dry_run=dry_run, + email=email, + queue_pushes=pre_and_post_push, + ) + + if pre_and_post_push: + success = self.api_handler.push_resources( + new_resources=post_changes.new, + deleted_resources=post_changes.deleted, + updated_resources=post_changes.updated, + dry_run=dry_run, + email=email, + queue_pushes=False, + ) + + self.branch_id = self.api_handler.branch_id if not success: failed_resources = [] @@ -1105,17 +1066,17 @@ def push_project( for resources in resource_dict.values(): failed_resources.extend([res.name for res in resources.values()]) errors_names = "\n-".join(failed_resources) - return False, f"Failed to push resources: \n-{errors_names}", commands + return False, f"Failed to push resources: \n-{errors_names}" if dry_run: - return True, "Dry run completed. No changes were pushed.", commands + return True, "Dry run completed. No changes were pushed." 
else: # Update local state self.resources = new_state self.file_structure_info = self.compute_file_structure_info(self.resources) self.save_config() - return True, "Resources pushed successfully.", commands + return True, "Resources pushed successfully." @staticmethod def _assign_flow_positions( @@ -1131,13 +1092,10 @@ def _assign_flow_positions( # Assign positions to flows with new/updated steps updated_flow_ids = set() - for flow_step in ( - list(new_resources.get(FlowStep, {}).values()) - + list(updated_resources.get(FlowStep, {}).values()) - + list(new_resources.get(FunctionStep, {}).values()) - + list(updated_resources.get(FunctionStep, {}).values()) + for flow_step in list(new_resources.get(FlowStep, {}).values()) + list( + updated_resources.get(FlowStep, {}).values() ): - if not isinstance(flow_step, BaseFlowStep): + if not isinstance(flow_step, FlowStep): raise TypeError(f"Flow step is not a FlowStep: {flow_step}") updated_flow_ids.add(flow_step.flow_id) @@ -1149,11 +1107,8 @@ def _assign_flow_positions( raise TypeError(f"Flow config is not a FlowConfig: {flow_config}") flow_steps = [ step - for step in ( - list(new_state.get(FlowStep, {}).values()) - + list(new_state.get(FunctionStep, {}).values()) - ) - if isinstance(step, BaseFlowStep) and step.flow_id == updated_flow_id + for step in new_state.get(FlowStep, {}).values() + if isinstance(step, FlowStep) and step.flow_id == updated_flow_id ] resource_utils.assign_flow_positions(flow_steps, flow_config.start_step) @@ -1654,7 +1609,7 @@ def get_remote_resources_by_name(self, name: str) -> ResourceMap: ) deployment_id = (deployments.get(name) or {}).get("deployment_id") if not deployment_id: - logger.error(f"No active deployment found for environment '{name}'.") + logger.warning(f"No active deployment found for environment '{name}'.") return {} logger.info(f"Pulling resources from deployment '{deployment_id}' ({name})...") return self.api_handler.pull_deployment_resources(deployment_id) @@ -1667,8 +1622,7 @@ 
def get_remote_resources_by_name(self, name: str) -> ResourceMap: self.region, self.account_id, self.project_id, branch_id ) logger.info(f"Pulling resources from branch '{name}'...") - resources, _ = branch_api_handler.pull_resources() - return resources + return branch_api_handler.pull_resources() # 3) Deployment version hash prefix -> deployment resources version_hash = (name or "")[:9].lower() @@ -1685,7 +1639,7 @@ def get_remote_resources_by_name(self, name: str) -> ResourceMap: ) return self.api_handler.pull_deployment_resources(deployment_id) - logger.error(f"Name '{name}' not found in environments, branches, or deployments.") + logger.warning(f"Name '{name}' not found in environments, branches, or deployments.") return {} def diff_remote_named_versions( @@ -1696,7 +1650,7 @@ def diff_remote_named_versions( after_resources = self.get_remote_resources_by_name(after_name) if not before_resources or not after_resources: - logger.error( + logger.warning( "Could not retrieve resources for one or both specified names: " f"before={before_name}, after={after_name}" ) @@ -2045,28 +1999,16 @@ def create_branch(self, branch_name: str = None) -> str: self.save_config() return branch_id - def switch_branch( - self, - branch_name: str, - force: bool = False, - format: bool = False, - projection_json: Optional[dict[str, Any]] = None, - on_save: Callable[[int, int], None] | None = None, - ) -> tuple[bool, dict[str, Any]]: + def switch_branch(self, branch_name: str, force: bool = False, format: bool = False) -> bool: """Switch to a different branch in the project. Args: branch_name (str): The name of the branch force (bool): If True, discard uncommitted changes when switching branches. format (bool): If True, format resources after switching branches. - projection_json (dict[str, Any]): A dictionary containing the projection - If provided, the projection will be used instead of fetching it from the API. 
- on_save: Optional callback invoked with (current, total) - during the resource save loop. Returns: bool: True if the switch was successful, False otherwise - dict[str, Any]: The projection data """ if self.get_diffs(all_files=True) and not force: raise ValueError( @@ -2077,13 +2019,10 @@ def switch_branch( if branch_name not in branches: raise ValueError(f"Branch {branch_name} does not exist.") success = self.api_handler.switch_branch(branches[branch_name]) - projection = {} if success: self.branch_id = branches[branch_name] - _, projection = self.pull_project( - force=force, format=format, projection_json=projection_json, on_save=on_save - ) - return success, projection + self.pull_project(force=force, format=format) + return success def get_current_branch(self) -> Optional[str]: """Get the current branch name. @@ -2431,10 +2370,10 @@ def merge_branch( f"Cannot merge branch with uncommitted changes, diffs: {list(diffs.keys())}" ) - success, conflicts, errors = self.api_handler.merge_branch( + conflicts, errors = self.api_handler.merge_branch( message=message, conflict_resolutions=conflict_resolutions ) - if success: + if not (conflicts or errors): self.switch_branch("main", force=True) return True, [], [] @@ -2573,10 +2512,62 @@ def sync_ids_with_sandbox(self, email: str = None) -> bool: if not (updated_resources or new_resources or deleted_resources): return True - self._stage_commands( - new_state, new_resources, updated_resources, deleted_resources, email=email + push_changes = self._clean_resources_before_push( + new_state, + new_resources, + updated_resources, + deleted_resources, ) - success = self.api_handler.send_queued_commands() + new_resources = push_changes.main.new + updated_resources = push_changes.main.updated + deleted_resources = push_changes.main.deleted + pre_changes = push_changes.pre + post_changes = push_changes.post + + new_resources, updated_resources = self._assign_flow_positions( + new_resources, + updated_resources, + new_state, + ) + 
+ # 6. Push new/updated/deleted resources + if self.branch_id: + logger.info(f"Pushing changes to branch {self.branch_id}") + + pre_and_post_push = ( + pre_changes.new + or pre_changes.updated + or pre_changes.deleted + or post_changes.new + or post_changes.updated + or post_changes.deleted + ) + + if pre_and_post_push: + success = self.api_handler.push_resources( + new_resources=pre_changes.new, + deleted_resources=pre_changes.deleted, + updated_resources=pre_changes.updated, + queue_pushes=True, + email=email, + ) + + # Push changed resources + success = self.api_handler.push_resources( + new_resources=new_resources, + deleted_resources=deleted_resources, + updated_resources=updated_resources, + queue_pushes=pre_and_post_push, + email=email, + ) + + if pre_and_post_push: + success = self.api_handler.push_resources( + new_resources=post_changes.new, + deleted_resources=post_changes.deleted, + updated_resources=post_changes.updated, + email=email, + ) self.branch_id = self.api_handler.branch_id diff --git a/src/poly/resources/__init__.py b/src/poly/resources/__init__.py index 7722d6f..1ab8b39 100644 --- a/src/poly/resources/__init__.py +++ b/src/poly/resources/__init__.py @@ -4,12 +4,12 @@ SettingsRole, SettingsRules, ) +from poly.resources.asr_settings import AsrSettings from poly.resources.api_integration import ( ApiIntegration, - ApiIntegrationEnvironments, ApiIntegrationOperation, + ApiIntegrationEnvironments, ) -from poly.resources.asr_settings import AsrSettings from poly.resources.channel_settings import ( ChatGreeting, ChatStylePrompt, diff --git a/src/poly/resources/api_integration.py b/src/poly/resources/api_integration.py index b130c2b..f98d5c5 100644 --- a/src/poly/resources/api_integration.py +++ b/src/poly/resources/api_integration.py @@ -11,21 +11,22 @@ from typing import ClassVar, Optional import poly.resources.resource_utils as utils +from poly.resources.resource import ( + MultiResourceYamlResource, + ResourceMapping, + SubResource, +) + from 
poly.handlers.protobuf import api_integrations_pb2 from poly.handlers.protobuf.api_integrations_pb2 import ( ApiIntegration_Create, - ApiIntegration_Delete, ApiIntegration_Update, ApiIntegrationConfig_Update, + ApiIntegration_Delete, + Environments, ApiIntegrationOperation_Create, - ApiIntegrationOperation_Delete, ApiIntegrationOperation_Update, - Environments, -) -from poly.resources.resource import ( - MultiResourceYamlResource, - ResourceMapping, - SubResource, + ApiIntegrationOperation_Delete, ) logger = logging.getLogger(__name__) diff --git a/src/poly/resources/function.py b/src/poly/resources/function.py index 665d0fa..eee5e0d 100644 --- a/src/poly/resources/function.py +++ b/src/poly/resources/function.py @@ -692,9 +692,7 @@ def _extract_variable_references(code: str, resource_mappings: list[ResourceMapp } for name in variable_names: if name not in known_variables: - logger.warning( - f"Variable {name} not found in resource mappings, will be added in the next push" - ) + logger.warning(f"Variable {name} not found in resource mappings") continue variable_references[known_variables[name]] = True return variable_references diff --git a/src/poly/tests/project_test.py b/src/poly/tests/project_test.py index f6a27e5..7adb4ac 100644 --- a/src/poly/tests/project_test.py +++ b/src/poly/tests/project_test.py @@ -9,6 +9,7 @@ import os import unittest from copy import deepcopy +from unittest import mock from unittest.mock import MagicMock, patch import poly.resources.resource_utils as resource_utils @@ -39,6 +40,7 @@ VoiceGreeting, VoiceStylePrompt, ) +from poly.resources.resource import MultiResourceYamlResource from poly.resources.flows import ( ASRBiasing, Condition, @@ -46,7 +48,6 @@ StepType, ) from poly.resources.function import FunctionType -from poly.resources.resource import MultiResourceYamlResource from poly.tests.testing_utils import mock_read_from_file DIR = os.path.dirname(os.path.abspath(__file__)) @@ -70,61 +71,6 @@ def test_init(self): 
self.assertEqual(project.project_id, "test_project") -class InitProjectOnSaveTest(unittest.TestCase): - """Tests for the on_save callback in init_project""" - - def setUp(self): - self.mock_api_handler = patch.object( - AgentStudioProject, "api_handler", new_callable=MagicMock - ).start() - self.mock_save_config = patch.object(AgentStudioProject, "save_config").start() - self.mock_save_imports = patch("poly.utils.save_imports").start() - self.mock_export_decorators = patch("poly.utils.export_decorators").start() - self.mock_resource_save = patch.object(Resource, "save").start() - self.mock_write_cache = patch.object( - MultiResourceYamlResource, "write_cache_to_file" - ).start() - - def tearDown(self): - patch.stopall() - - def test_on_save_called_with_correct_progress(self): - """on_save should be called once per resource with (current, total)""" - self.mock_api_handler.pull_resources.return_value = ( - AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR).resources, - {}, - ) - on_save = MagicMock() - - project, _ = AgentStudioProject.init_project( - base_path=os.path.join(TEST_DIR, "tmp"), - region="us-1", - account_id="test_account", - project_id="test_project", - on_save=on_save, - ) - - total = len(project.all_resources) - self.assertEqual(on_save.call_count, total) - on_save.assert_any_call(1, total) - on_save.assert_any_call(total, total) - - def test_no_on_save_does_not_error(self): - """init_project without on_save should work without errors""" - self.mock_api_handler.pull_resources.return_value = ( - AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR).resources, - {}, - ) - - project, _ = AgentStudioProject.init_project( - base_path=os.path.join(TEST_DIR, "tmp"), - region="us-1", - account_id="test_account", - project_id="test_project", - ) - self.assertIsNotNone(project) - - class SortPathsForReverseDeletionTest(unittest.TestCase): """Tests for _sort_paths_for_reverse_deletion (Pronunciation vs lexicographic order).""" @@ -1550,10 +1496,8 @@ def 
setUp(self): AgentStudioProject, "api_handler", new_callable=MagicMock ).start() self.mock_save_config = patch.object(AgentStudioProject, "save_config").start() - self.mock_pull.return_value = ([], {}) - self.mock_api_handler.queue_resources = MagicMock(return_value=[]) - self.mock_api_handler.send_queued_commands = MagicMock(return_value=True) - self.mock_api_handler.clear_command_queue = MagicMock() + self.mock_pull.return_value = [] + self.mock_api_handler.push_resources = MagicMock(return_value=True) self.mock_load_project = patch.object(AgentStudioProject, "load_project").start() def tearDown(self): @@ -1563,17 +1507,17 @@ def tearDown(self): def test_push_project_no_changes(self): project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertFalse(success) self.assertEqual(message, "No changes detected") - self.mock_api_handler.queue_resources.assert_not_called() + self.mock_api_handler.push_resources.assert_not_called() def test_push_project_merge_conflict(self): project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR) - self.mock_pull.return_value = (["functions/test_function.py"], {}) + self.mock_pull.return_value = ["functions/test_function.py"] - success, message, commands = project.push_project(force=False) + success, message = project.push_project(force=False) self.assertFalse(success) self.assertIn("Merge conflicts detected", message) @@ -1584,11 +1528,11 @@ def test_push_project_new_resources(self): project_data["resources"]["topics"].pop("TOPIC-Topic 1") project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + 
self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args new_resources = call_args.kwargs["new_resources"] self.assertIn(Topic, new_resources) # New resources get random IDs, so check by name @@ -1606,11 +1550,11 @@ def test_push_project_new_resource_flow(self): number_steps += 1 project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True, skip_validation=True) + success, message = project.push_project(force=True, skip_validation=True) self.assertTrue(success, f"Push failed: {message}") - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args new_resources = call_args.kwargs["new_resources"] self.assertIn(FlowConfig, new_resources) # New resources get random IDs, so check by name @@ -1635,11 +1579,11 @@ def test_push_project_deleted_resource(self): } project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args deleted_resources = call_args.kwargs["deleted_resources"] self.assertIn(Function, deleted_resources) self.assertIn("FUNCTION-extra_function", deleted_resources[Function]) @@ -1681,11 +1625,11 @@ def mock_discover(self): return result with patch.object(AgentStudioProject, "discover_local_resources", mock_discover): - success, message, commands = project.push_project(force=True, skip_validation=True) + success, message = project.push_project(force=True, 
skip_validation=True) self.assertTrue(success, f"Push failed: {message}") - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args deleted_resources = call_args.kwargs["deleted_resources"] # Must NOT include VariantAttribute - we never had them locally self.assertNotIn(VariantAttribute, deleted_resources) @@ -1698,11 +1642,11 @@ def test_push_project_modified_resource(self): ) project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args updated_resources = call_args.kwargs["updated_resources"] self.assertIn(Function, updated_resources) self.assertIn("FUNCTION-test_function", updated_resources[Function]) @@ -1714,11 +1658,11 @@ def test_push_project_modified_sub_resources_dtmf(self): ] = True project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args updated_resources = call_args.kwargs["updated_resources"] self.assertIn(DTMFConfig, updated_resources) @@ -1729,11 +1673,11 @@ def test_push_project_new_sub_resources_condition(self): project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, 
commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args new_resources = call_args.kwargs["new_resources"] self.assertIn(Condition, new_resources) # Deleted 2 conditions, so check that 2 new conditions are pushed @@ -1760,11 +1704,11 @@ def test_push_project_deleted_sub_resource_condition(self): project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args deleted_resources = call_args.kwargs["deleted_resources"] self.assertIn(Condition, deleted_resources) @@ -1779,11 +1723,11 @@ def test_push_project_updated_sub_resource_asr_biasing(self): project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args updated_resources = call_args.kwargs["updated_resources"] self.assertIn(ASRBiasing, updated_resources) @@ -1809,11 +1753,11 @@ def test_push_project_mixed_changes(self): ] = False project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = 
project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args new_resources = call_args.kwargs["new_resources"] updated_resources = call_args.kwargs["updated_resources"] deleted_resources = call_args.kwargs["deleted_resources"] @@ -1827,11 +1771,11 @@ def test_push_project_new_keyphrase_boosting(self): project_data["resources"]["keyphrase_boosting"].pop("KEYPHRASE_BOOSTING-polyai") project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args new_resources = call_args.kwargs["new_resources"] self.assertIn(KeyphraseBoosting, new_resources) kp_names = [r.keyphrase for r in new_resources[KeyphraseBoosting].values()] @@ -1847,11 +1791,11 @@ def test_push_project_deleted_keyphrase_boosting(self): } project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args deleted_resources = call_args.kwargs["deleted_resources"] self.assertIn(KeyphraseBoosting, deleted_resources) self.assertIn("KEYPHRASE_BOOSTING-extra", 
deleted_resources[KeyphraseBoosting]) @@ -1861,11 +1805,11 @@ def test_push_project_modified_keyphrase_boosting(self): project_data["resources"]["keyphrase_boosting"]["KEYPHRASE_BOOSTING-polyai"]["level"] = "default" project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args updated_resources = call_args.kwargs["updated_resources"] self.assertIn(KeyphraseBoosting, updated_resources) self.assertIn("KEYPHRASE_BOOSTING-polyai", updated_resources[KeyphraseBoosting]) @@ -1875,11 +1819,11 @@ def test_push_project_new_transcript_correction(self): project_data["resources"]["transcript_corrections"].pop("TRANSCRIPT_CORRECTIONS-email_domain") project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args new_resources = call_args.kwargs["new_resources"] self.assertIn(TranscriptCorrection, new_resources) tc_names = [r.name for r in new_resources[TranscriptCorrection].values()] @@ -1897,11 +1841,11 @@ def test_push_project_deleted_transcript_correction(self): } project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - 
self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args deleted_resources = call_args.kwargs["deleted_resources"] self.assertIn(TranscriptCorrection, deleted_resources) self.assertIn("TRANSCRIPT_CORRECTIONS-extra", deleted_resources[TranscriptCorrection]) @@ -1911,11 +1855,11 @@ def test_push_project_modified_asr_settings(self): project_data["resources"]["asr_settings"]["asr_settings"]["barge_in"] = True project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True) + success, message = project.push_project(force=True) self.assertTrue(success) - self.mock_api_handler.queue_resources.assert_called_once() - call_args = self.mock_api_handler.queue_resources.call_args + self.mock_api_handler.push_resources.assert_called_once() + call_args = self.mock_api_handler.push_resources.call_args updated_resources = call_args.kwargs["updated_resources"] self.assertIn(AsrSettings, updated_resources) self.assertIn("asr_settings", updated_resources[AsrSettings]) @@ -1930,7 +1874,7 @@ def test_push_project_validation_error(self): invalid_content = "name: test_flow\ndescription:\nstart_step: start_step\n" with mock_read_from_file({flow_config_path: invalid_content}): - success, message, commands = project.push_project(force=True, skip_validation=False) + success, message = project.push_project(force=True, skip_validation=False) self.assertFalse(success) self.assertIn("Validation errors", message) @@ -1945,7 +1889,7 @@ def test_push_project_validation_error_skip(self): invalid_content = "name: test_flow\ndescription:\nstart_step: start_step\n" with mock_read_from_file({flow_config_path: invalid_content}): - success, message, commands = project.push_project(force=True, skip_validation=True) + success, message = 
project.push_project(force=True, skip_validation=True) self.assertTrue(success) @@ -1954,13 +1898,18 @@ def test_push_project_dry_run(self): project_data["resources"]["topics"].pop("TOPIC-Topic 1") project = AgentStudioProject.from_dict(project_data, TEST_DIR) - success, message, commands = project.push_project(force=True, dry_run=True) + success, message = project.push_project(force=True, dry_run=True) self.assertTrue(success) self.assertIn("Dry run completed", message) - self.mock_api_handler.queue_resources.assert_called_once() - self.mock_api_handler.send_queued_commands.assert_not_called() - self.mock_api_handler.clear_command_queue.assert_called_once() + self.mock_api_handler.push_resources.assert_called_once_with( + new_resources=mock.ANY, + deleted_resources=mock.ANY, + updated_resources=mock.ANY, + dry_run=True, + email=None, + queue_pushes=mock.ANY, + ) class ValidateProjectTest(unittest.TestCase): @@ -2031,9 +1980,9 @@ def test_pull_project_no_changes(self): # Incoming resources are the same as project.resources # Use the actual resources from the project to ensure they match original_resources = deepcopy(project.resources) - self.mock_api_handler.pull_resources.return_value = (original_resources, {}) + self.mock_api_handler.pull_resources.return_value = original_resources - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) self.assertEqual(files_with_conflicts, []) self.assertEqual(project.resources, original_resources) @@ -2058,7 +2007,7 @@ def test_pull_project_not_loaded_resources_force_save(self): # Simulate pull: incoming has variant_attributes from remote full_project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR) incoming_resources = full_project.resources - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources with mock_read_from_file( { @@ -2067,7 +2016,7 @@ def 
test_pull_project_not_loaded_resources_force_save(self): ): "{}\n" } ): - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) self.assertEqual(files_with_conflicts, []) # Variant attributes are now present in project resources with the correct keys @@ -2092,9 +2041,9 @@ def test_pull_project_addition(self): example_queries=["New query"], ) incoming_resources.setdefault(Topic, {})["TOPIC-new_topic"] = new_topic - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) self.assertEqual(files_with_conflicts, []) # Verify the new resource was saved via save_to_file or save self.assertTrue(self.mock_save_to_file.called or self.mock_resource_save.called) @@ -2108,9 +2057,9 @@ def test_pull_project_deletion(self): incoming_resources = deepcopy(project.resources) if Topic in incoming_resources and "TOPIC-Topic 1" in incoming_resources[Topic]: del incoming_resources[Topic]["TOPIC-Topic 1"] - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) self.assertEqual(files_with_conflicts, []) # Verify the resource file was removed via os.remove @@ -2127,9 +2076,9 @@ def test_pull_project_modify_1(self): modified_func = deepcopy(incoming_resources[Function][func_id]) modified_func.code = 'def test_function(conv: Conversation):\n """Modified remotely."""\n return "Modified"\n' incoming_resources[Function][func_id] = modified_func - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources - 
files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) self.assertEqual(files_with_conflicts, []) # Verify resource is updated in project resources self.assertIn(func_id, project.resources.get(Function, {})) @@ -2145,7 +2094,7 @@ def test_pull_project_modify_conflict(self): incoming_resources[Function][ "FUNCTION-test_function" ].code = 'def test_function(conv: Conversation):\n """Modified remotely."""\n return "Remote change"\n' - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources with mock_read_from_file( { @@ -2154,7 +2103,7 @@ def test_pull_project_modify_conflict(self): ): 'from _gen import * # \n\n@func_description(\'A test function for global use.\')\ndef test_function(conv: Conversation):\n """Modified locally."""\n return "Local change"\n' } ): - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) # Should detect merge conflict self.assertEqual( files_with_conflicts, [os.path.join(TEST_DIR, "functions", "test_function.py")] @@ -2185,7 +2134,7 @@ def test_pull_project_modify_flow_config_conflict(self): modified_flow_config = deepcopy(incoming_resources[FlowConfig][flow_config_id]) modified_flow_config.description = "Modified remotely - new description" incoming_resources[FlowConfig][flow_config_id] = modified_flow_config - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources # Mock local file with different changes flow_config_path = os.path.join( @@ -2196,7 +2145,7 @@ def test_pull_project_modify_flow_config_conflict(self): flow_config_path: "name: test_flow\ndescription: Modified locally - different description\nstart_step: start_step\n" } ): - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = 
project.pull_project(force=False) # Should detect merge conflict self.assertEqual(files_with_conflicts, [flow_config_path]) # Resources are now incoming resources @@ -2229,7 +2178,7 @@ def test_pull_project_modify_no_conflict(self): incoming_resources[Function][ "FUNCTION-test_function" ].code = 'def test_function(conv: Conversation):\n """Modified remotely."""\n return "Remote change"\n' - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources with mock_read_from_file( { @@ -2238,7 +2187,7 @@ def test_pull_project_modify_no_conflict(self): ): 'from _gen import * # \n\ndef added_extra_function():\n pass\n\n@func_description(\'A test function for global use.\')\ndef test_function(conv: Conversation):\n """A test function for global use."""\n return "Hello from global function"\n' } ): - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) # Should detect no merge conflict self.assertEqual(files_with_conflicts, []) # Resources are now incoming resources @@ -2266,7 +2215,7 @@ def test_pull_project_force(self): incoming_resources[Function][ "FUNCTION-test_function" ].code = 'def test_function(conv: Conversation):\n """Modified remotely."""\n return "Remote change"\n' - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources with mock_read_from_file( { @@ -2275,7 +2224,7 @@ def test_pull_project_force(self): ): 'from _gen import * # \n\n@func_description(\'A test function for global use.\')\ndef test_function(conv: Conversation):\n """Modified locally."""\n return "Local change"\n' } ): - files_with_conflicts, _ = project.pull_project(force=True) + files_with_conflicts = project.pull_project(force=True) # Should detect no merge conflict self.assertEqual(files_with_conflicts, []) @@ -2291,8 +2240,8 @@ def 
test_pull_project_added_locally_and_remote_same(self): full_project_resources = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR).resources incoming_resources = deepcopy(full_project_resources) - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) - files_with_conflicts, _ = project.pull_project(force=False, format=True) + self.mock_api_handler.pull_resources.return_value = incoming_resources + files_with_conflicts = project.pull_project(force=False, format=True) self.assertEqual(files_with_conflicts, []) # Verify resource is updated in project resources self.assertIn("FUNCTION-test_function_with_parameters", project.resources.get(Function, {})) @@ -2316,8 +2265,8 @@ def test_pull_project_added_locally_and_remote_different(self): incoming_resources = deepcopy(full_project_resources) incoming_resources[Function]["FUNCTION-test_function_with_parameters"].code = 'def test_function_with_parameters(conv: Conversation):\n """Test function with parameters."""\n return "Test function with parameters"\n' - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) - files_with_conflicts, _ = project.pull_project(force=False) + self.mock_api_handler.pull_resources.return_value = incoming_resources + files_with_conflicts = project.pull_project(force=False) self.assertEqual(len(files_with_conflicts), 1) def test_pull_project_deleted_locally(self): @@ -2334,8 +2283,8 @@ def test_pull_project_deleted_locally(self): project = AgentStudioProject.from_dict(project_data, TEST_DIR) incoming_resources = deepcopy(project.resources) - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) - files_with_conflicts, _ = project.pull_project(force=False) + self.mock_api_handler.pull_resources.return_value = incoming_resources + files_with_conflicts = project.pull_project(force=False) self.assertEqual(files_with_conflicts, []) # Verify it wasn't saved to the file system @@ -2363,9 +2312,9 @@ def 
test_pull_project_resource_moved(self): # Rename the topic (this changes the file path) renamed_topic.name = "renamed_topic" - self.mock_api_handler.pull_resources.return_value = (original_resources, {}) + self.mock_api_handler.pull_resources.return_value = original_resources - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) self.assertEqual(files_with_conflicts, []) # Verify old file would be removed @@ -2379,8 +2328,6 @@ def test_pull_project_resource_moved(self): def test_pull_project_empty_flow_folder_deletion(self): """Test that empty flow folders are deleted after pull""" project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR) - original_resources = deepcopy(project.resources) - self.mock_api_handler.pull_resources.return_value = (original_resources, {}) # Mock os.listdir and os.rmdir to verify empty folder deletion empty_flow_path = os.path.join(TEST_DIR, "flows", "test_flow") @@ -2403,7 +2350,7 @@ def mock_isdir(path): patch("os.path.isdir", side_effect=mock_isdir), patch("os.rmdir") as mock_rmdir, ): - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) # Empty flow folder should be deleted # _delete_empty_folders is called after pull_project @@ -2448,7 +2395,7 @@ def test_pull_project_multi_resource_yaml_remote_change_no_local_change(self): project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR) incoming_resources = deepcopy(project.resources) incoming_resources[KeyphraseBoosting]["KEYPHRASE_BOOSTING-polyai"].level = "boosted" - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources kp_path = os.path.join( TEST_DIR, "voice", "speech_recognition", "keyphrase_boosting.yaml" @@ -2469,7 +2416,7 @@ def test_pull_project_multi_resource_yaml_remote_change_no_local_change(self): 
"poly.resources.resource.Resource.read_from_file", side_effect=self._make_kp_read_mock(original_kp_content, original_kp_content), ): - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) MultiResourceYamlResource._file_cache.clear() self.assertEqual(files_with_conflicts, []) @@ -2495,7 +2442,7 @@ def test_pull_project_multi_resource_yaml_merge_no_conflict(self): incoming_resources = deepcopy(project.resources) # Remote: PolyAI level maximum → boosted incoming_resources[KeyphraseBoosting]["KEYPHRASE_BOOSTING-polyai"].level = "boosted" - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources kp_path = os.path.join( TEST_DIR, "voice", "speech_recognition", "keyphrase_boosting.yaml" @@ -2525,7 +2472,7 @@ def test_pull_project_multi_resource_yaml_merge_no_conflict(self): "poly.resources.resource.Resource.read_from_file", side_effect=self._make_kp_read_mock(original_kp_content, local_kp_content), ): - files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) MultiResourceYamlResource._file_cache.clear() self.assertEqual(files_with_conflicts, []) @@ -2554,7 +2501,7 @@ def test_pull_project_multi_resource_yaml_conflict(self): incoming_resources = deepcopy(project.resources) # Remote: PolyAI level maximum → boosted incoming_resources[KeyphraseBoosting]["KEYPHRASE_BOOSTING-polyai"].level = "boosted" - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources kp_path = os.path.join( TEST_DIR, "voice", "speech_recognition", "keyphrase_boosting.yaml" @@ -2584,7 +2531,7 @@ def test_pull_project_multi_resource_yaml_conflict(self): "poly.resources.resource.Resource.read_from_file", side_effect=self._make_kp_read_mock(original_kp_content, local_kp_content), ): - 
files_with_conflicts, _ = project.pull_project(force=False) + files_with_conflicts = project.pull_project(force=False) MultiResourceYamlResource._file_cache.clear() self.assertIn(kp_path, files_with_conflicts) @@ -2607,14 +2554,14 @@ def test_pull_project_multi_resource_yaml_force(self): incoming_resources = deepcopy(project.resources) # Remote: PolyAI level maximum → boosted incoming_resources[KeyphraseBoosting]["KEYPHRASE_BOOSTING-polyai"].level = "boosted" - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) + self.mock_api_handler.pull_resources.return_value = incoming_resources kp_path = os.path.join( TEST_DIR, "voice", "speech_recognition", "keyphrase_boosting.yaml" ) MultiResourceYamlResource._file_cache.clear() - files_with_conflicts, _ = project.pull_project(force=True) + files_with_conflicts = project.pull_project(force=True) MultiResourceYamlResource._file_cache.clear() self.assertEqual(files_with_conflicts, []) @@ -2631,31 +2578,6 @@ def test_pull_project_multi_resource_yaml_force(self): self.assertIn("level: boosted", saved_content) self.assertNotIn("<<<<<<<", saved_content) - def test_pull_project_on_save_callback(self): - """on_save should be called during pull with correct final progress""" - project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR) - incoming_resources = deepcopy(project.resources) - self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) - - on_save = MagicMock() - files_with_conflicts, _ = project.pull_project(on_save=on_save) - - self.assertEqual(files_with_conflicts, []) - self.assertGreater(on_save.call_count, 0) - last_call = on_save.call_args_list[-1] - current, total = last_call[0] - self.assertEqual(current, total) - - def test_pull_project_no_on_save_does_not_error(self): - """pull_project without on_save should work without errors""" - project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR) - incoming_resources = deepcopy(project.resources) - 
self.mock_api_handler.pull_resources.return_value = (incoming_resources, {}) - - files_with_conflicts, _ = project.pull_project() - self.assertEqual(files_with_conflicts, []) - - class DocsTest(unittest.TestCase): """Tests for the docs module""" diff --git a/src/poly/tests/resources_test.py b/src/poly/tests/resources_test.py index b132856..44ecba9 100644 --- a/src/poly/tests/resources_test.py +++ b/src/poly/tests/resources_test.py @@ -14,15 +14,6 @@ SettingsRole, SettingsRules, ) -from poly.resources.api_integration import ( - AVAILABLE_AUTH_TYPES, - AVAILABLE_OPERATIONS, - URL_PATTERN, - ApiIntegration, - ApiIntegrationConfig, - ApiIntegrationEnvironments, - ApiIntegrationOperation, -) from poly.resources.asr_settings import AsrSettings from poly.resources.channel_settings import ( ChatGreeting, @@ -50,6 +41,15 @@ FunctionParameters, FunctionType, ) +from poly.resources.api_integration import ( + AVAILABLE_AUTH_TYPES, + AVAILABLE_OPERATIONS, + URL_PATTERN, + ApiIntegration, + ApiIntegrationConfig, + ApiIntegrationOperation, + ApiIntegrationEnvironments, +) from poly.resources.handoff import Handoff from poly.resources.keyphrase_boosting import KeyphraseBoosting from poly.resources.phrase_filter import PhraseFilter diff --git a/uv.lock b/uv.lock index 41c0c73..8261d23 100644 --- a/uv.lock +++ b/uv.lock @@ -332,7 +332,7 @@ wheels = [ [[package]] name = "polyai-adk" -version = "0.4.0" +version = "0.3.3" source = { editable = "." 
} dependencies = [ { name = "argcomplete" }, @@ -758,28 +758,28 @@ wheels = [ [[package]] name = "uv" -version = "0.11.1" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/2b/e9/691eb77e5e767cdec695db3f91ec259bbb66f9af7c86a8dbe462ef72a120/uv-0.11.1.tar.gz", hash = "sha256:8aa7e4983fabb06d0ba58e8b8c969d568ce495ad5f2f0426af97b55720f0dee1", size = 4007244, upload-time = "2026-03-24T23:14:18.269Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/16/f9/a95c44fba785c27a966087154a8f6825774d49a38b3c5cd35f80e07ca5ca/uv-0.11.1-py3-none-linux_armv6l.whl", hash = "sha256:424b5b412d37838ea6dc11962f037be98b92e83c6ec755509e2af8a4ca3fbf2a", size = 23320598, upload-time = "2026-03-24T23:13:44.998Z" }, - { url = "https://files.pythonhosted.org/packages/5d/de/b7e24956a2508debf2addefcad93c72165069370f914d90db6264e0cf96a/uv-0.11.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:c2133b0532af0217bf252d981bded8bff0c770f174f91f20655f88705f28c03f", size = 22832732, upload-time = "2026-03-24T23:13:33.677Z" }, - { url = "https://files.pythonhosted.org/packages/93/bd/1ac91bc704c22a427a44262f09e208ae897817a856d0e8dc0d60e4032e92/uv-0.11.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:1a7b74e5a15b9bc6e61ce807adeca5a2807f557d3f06a5586de1da309d844c1d", size = 21406409, upload-time = "2026-03-24T23:14:32.231Z" }, - { url = "https://files.pythonhosted.org/packages/34/1d/f767701e1160538d25ee6c1d49ce1e72442970b6658365afdd57339d10e0/uv-0.11.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:fb1f32ec6c7dffb7ae71afaf6bf1defca0bd20a73a25e61226210c0a3e8bb13d", size = 23154066, upload-time = "2026-03-24T23:14:07.334Z" }, - { url = "https://files.pythonhosted.org/packages/55/21/d2cfa3571557ba68ffd530656b1d7159fe59a6b01be94595351b1eec1c29/uv-0.11.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.musllinux_1_1_armv7l.whl", hash = 
"sha256:0d5cf3c1c96f8afd67072d80479a58c2d69471916bac4ac36cc55f2aa025dc8e", size = 22922490, upload-time = "2026-03-24T23:13:25.83Z" }, - { url = "https://files.pythonhosted.org/packages/59/3c/68119f555b2ec152235951cc9aa0f40006c5f03d17c98adaab6a3d36d42b/uv-0.11.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5829a254c64b19420b9e48186182d162b01f8da0130e770cbb8851fd138bb820", size = 22923054, upload-time = "2026-03-24T23:14:03.595Z" }, - { url = "https://files.pythonhosted.org/packages/70/ce/0df944835519372b1d698acaa388baa874cf69a6183b5f0980cb8855b81a/uv-0.11.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d4259027e80f4dcc9ae3dceddcd5407173d334484737166fc212e96bb760d6ea", size = 24576177, upload-time = "2026-03-24T23:14:25.263Z" }, - { url = "https://files.pythonhosted.org/packages/db/04/0076335413c618fe086e5a4762103634552e638a841e12a4bb8f5137d710/uv-0.11.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b6169eb49d1d2b5df7a7079162e1242e49ad46c6590c55f05b182fa526963763", size = 25207026, upload-time = "2026-03-24T23:14:11.579Z" }, - { url = "https://files.pythonhosted.org/packages/bb/57/79c0479e12c2291ad9777be53d813957fa38283975b708eead8e855ba725/uv-0.11.1-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c96a7310a051b1013efffe082f31d718bce0538d4abc20a716d529bf226b7c44", size = 24393748, upload-time = "2026-03-24T23:13:48.553Z" }, - { url = "https://files.pythonhosted.org/packages/c3/25/9ef73c8b6ef04b0cead7d8f1547034568e3e58f3397b55b83167e587f84a/uv-0.11.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:41ccc438dbb905240a3630265feb25be1bda61656ec7c32682a83648a686f4aa", size = 24518525, upload-time = "2026-03-24T23:13:41.129Z" }, - { url = "https://files.pythonhosted.org/packages/a0/a3/035c7c2feb2139efb5d70f2e9f68912c34f7d92ee2429bacd708824483bb/uv-0.11.1-py3-none-manylinux_2_28_aarch64.whl", hash = 
"sha256:44f528ba3d66321cea829770982cccb14af142203e4e19d00ff0c23b28e3cd33", size = 23270167, upload-time = "2026-03-24T23:13:51.937Z" }, - { url = "https://files.pythonhosted.org/packages/25/59/2dd782b537bfd1e41cb06de4f4a529fe2f9bd10034fb3fcce225ec86c1a5/uv-0.11.1-py3-none-manylinux_2_31_riscv64.musllinux_1_1_riscv64.whl", hash = "sha256:4fcc3d5fdea24181d77e7765bf9d16cdd9803fd524820c62c66f91b2e2644d5b", size = 24011976, upload-time = "2026-03-24T23:13:37.402Z" }, - { url = "https://files.pythonhosted.org/packages/7b/f0/9983e6f31d495cc548f1e211cab5b89a3716f406a2d9d8134b8245ec103c/uv-0.11.1-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5de9e43a32079b8d57093542b0cd8415adba5ed9944fa49076c0927f3ff927e1", size = 24029605, upload-time = "2026-03-24T23:14:28.819Z" }, - { url = "https://files.pythonhosted.org/packages/19/dc/9c59e803bfc1b9d6c4c4b7374689c688e9dc0a1ecc2375399d3a59fd4a58/uv-0.11.1-py3-none-musllinux_1_1_i686.whl", hash = "sha256:f13ae98a938effae5deb587a63e7e42f05d6ba9c1661903ef538e4e87b204f8c", size = 23702811, upload-time = "2026-03-24T23:14:21.207Z" }, - { url = "https://files.pythonhosted.org/packages/7d/77/b1cbfdac0b2dd3e7aa420e9dad1abe8badb47eabd8741a9993586b14f8dc/uv-0.11.1-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:57d38e8b6f6937e1521da568adf846bb89439c73e146e89a8ab2cfe7bb15657a", size = 24714239, upload-time = "2026-03-24T23:13:29.814Z" }, - { url = "https://files.pythonhosted.org/packages/e4/d3/94917751acbbb5e053cb366004ae8be3c9664f82aef7de54f55e38ec15cb/uv-0.11.1-py3-none-win32.whl", hash = "sha256:36f4552b24acaa4699b02baeb1bb928202bb98d426dcc5041ab7ebae082a6430", size = 22404606, upload-time = "2026-03-24T23:13:55.614Z" }, - { url = "https://files.pythonhosted.org/packages/aa/87/8dadfe03944a4a493cd58b6f4f13e5181069a0048aeb2fae7da2c587a542/uv-0.11.1-py3-none-win_amd64.whl", hash = "sha256:d6a1c4cdb1064e9ceaa59e89a7489dd196222a0b90cfb77ca37a909b5e024ea0", size = 24850092, upload-time = "2026-03-24T23:14:15.186Z" }, - { url = 
"https://files.pythonhosted.org/packages/38/1b/dad559273df0c8263533afa4a28570cf6804272f379df9830b528a9cf8bc/uv-0.11.1-py3-none-win_arm64.whl", hash = "sha256:3bc9632033c7a280342f9b304bd12eccb47d6965d50ea9ee57ecfaf4f1f393c4", size = 23376127, upload-time = "2026-03-24T23:13:59.59Z" }, +version = "0.11.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/5a/c3/8fe199f300c8c740a55bc7a0eb628aa21ce6fd81130ab26b1b74597e3566/uv-0.11.0.tar.gz", hash = "sha256:8065cd54c2827588611a1de334901737373602cb64d7b84735a08b7d16c8932b", size = 4007038, upload-time = "2026-03-23T22:04:50.132Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/78/29/188d4abb5bbae1d815f4ca816ad5a3df570cb286600b691299424f5e0798/uv-0.11.0-py3-none-linux_armv6l.whl", hash = "sha256:0a66d95ded54f76be0b3c5c8aefd4a35cc453f8d3042563b3a06e2dc4d54dbb6", size = 23338895, upload-time = "2026-03-23T22:04:53.4Z" }, + { url = "https://files.pythonhosted.org/packages/49/d3/e8c91242e5bf2c10e8da8ad4568bc41741f497ba6ae7ebfa3f931ef56171/uv-0.11.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:130f5dd799e8f50ab5c1cdc51b044bb990330d99807c406d37f0b09b3fdf85fe", size = 22812837, upload-time = "2026-03-23T22:05:13.426Z" }, + { url = "https://files.pythonhosted.org/packages/d9/1c/6ddd0febcea06cf23e59d9bff90d07025ecfd600238807f41ed2bdafd159/uv-0.11.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:4b0ebbd7ae019ea9fc4bff6a07d0c1e1d6784d1842bbdcb941982d30e2391972", size = 21363278, upload-time = "2026-03-23T22:05:48.771Z" }, + { url = "https://files.pythonhosted.org/packages/79/25/2bf8fb0ae419a9dd7b7e13ab6d742628146ed9dd0d2205c2f7d5c437f3d5/uv-0.11.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:50f3d0c4902558a2a06afb4666e6808510879fb52b0d8cc7be36e509d890fd88", size = 23132924, upload-time = "2026-03-23T22:05:52.759Z" }, + { url = 
"https://files.pythonhosted.org/packages/ff/af/c83604cf9d2c2a07f50d779c8a51c50bc6e31bcc196d58c76c4af5de363c/uv-0.11.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.musllinux_1_1_armv7l.whl", hash = "sha256:16b7850ac8311eb04fe74c6ec1b3a7b6d7d84514bb6176877fcf5df9b7d6464a", size = 22935016, upload-time = "2026-03-23T22:05:45.023Z" }, + { url = "https://files.pythonhosted.org/packages/8d/1f/2b4bbab1952a9c28f09e719ca5260fb6ae013d0a8b5025c3813ba86708ed/uv-0.11.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f2c3ec280a625c77ff6d9d53ebc0af9277ca58086b8ab2f8e66b03569f6aecb9", size = 22929000, upload-time = "2026-03-23T22:05:17.039Z" }, + { url = "https://files.pythonhosted.org/packages/ca/bc/038b3df6e22413415ae1eec748ee5b5f0c32ac2bdd80350a1d1944a4b8aa/uv-0.11.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:24fbec6a70cee6e2bf5619ff71e4c984664dbcc03dcf77bcef924febf9292293", size = 24575116, upload-time = "2026-03-23T22:05:01.095Z" }, + { url = "https://files.pythonhosted.org/packages/76/91/6adc039c3b701bd4a65d8fdfada3e7f3ee54eaca1759b3199699bf338d0e/uv-0.11.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:15d2380214518375713c8da32e84e3d1834bee324b43a5dff8097b4d8b1694a9", size = 25158577, upload-time = "2026-03-23T22:05:21.049Z" }, + { url = "https://files.pythonhosted.org/packages/ae/1e/fa1a4f5845c4081c0ace983608ae8fbe00fa27eefb4f0f884832c519b289/uv-0.11.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:74cf7401fe134dde492812e478bc0ece27f01f52be29ebbd103b4bb238ce2a29", size = 24390099, upload-time = "2026-03-23T22:04:43.756Z" }, + { url = "https://files.pythonhosted.org/packages/36/fa/086616d98b0b8a2cc5e7b49c389118a8196027a79a5a501f5e738f718f59/uv-0.11.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:30a08ee4291580784a5e276a1cbec8830994dba2ed5c94d878cce8b2121367cf", size = 24508501, upload-time = "2026-03-23T22:05:05.062Z" }, + { url = 
"https://files.pythonhosted.org/packages/cc/e5/628d21734684c3413ae484229815c04dc9c5639b71b53c308e4e7faec225/uv-0.11.0-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:fb45be97641214df78647443e8fa0236deeef4c7995f2e3df55879b0bc42d71d", size = 23213423, upload-time = "2026-03-23T22:05:37.112Z" }, + { url = "https://files.pythonhosted.org/packages/84/53/56df3017a738de6170f8937290f45e3cd33c6d8aa7cf21b7fb688e9eaa07/uv-0.11.0-py3-none-manylinux_2_31_riscv64.musllinux_1_1_riscv64.whl", hash = "sha256:509f6e04ba3a38309a026874d2d99652d16fee79da26c8008886bc9e42bc37df", size = 24014494, upload-time = "2026-03-23T22:05:25.013Z" }, + { url = "https://files.pythonhosted.org/packages/44/a4/1cf99ae80dd3ec08834e55c12ea22a6a36efc16ad39ea256c9ebe4e0682c/uv-0.11.0-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:30eed93f96a99a97e64543558be79c628d6197059227c0789f9921aa886e83f6", size = 24049669, upload-time = "2026-03-23T22:05:09.865Z" }, + { url = "https://files.pythonhosted.org/packages/bc/ad/621271fa73f268bea996e3e296698097b5c557d48de1d316b319105e45ef/uv-0.11.0-py3-none-musllinux_1_1_i686.whl", hash = "sha256:81b73d7e9d811131636f0010533a98dd9c1893d5b7aa9672cc1ed00452834ba3", size = 23677683, upload-time = "2026-03-23T22:04:57.211Z" }, + { url = "https://files.pythonhosted.org/packages/20/03/daf51de08504529dc3de94d15d81590249e4d0394aa881dc305d7e6d6478/uv-0.11.0-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:7cbcf306d71d84855972a24a760d33f44898ac5e94b680de62cd28e30d91b69a", size = 24728106, upload-time = "2026-03-23T22:05:29.149Z" }, + { url = "https://files.pythonhosted.org/packages/22/ac/26ed5b0792f940bab892be65de7c9297c6ef1ec879adf7d133300eba31a3/uv-0.11.0-py3-none-win32.whl", hash = "sha256:801604513ec0cc05420b382a0f61064ce1c7800758ed676caba5ff4da0e3a99e", size = 22440703, upload-time = "2026-03-23T22:05:32.806Z" }, + { url = "https://files.pythonhosted.org/packages/8b/86/5449b6cd7530d1f61a77fde6186f438f8a5291cb063a8baa3b4addaa24b9/uv-0.11.0-py3-none-win_amd64.whl", 
hash = "sha256:7e16194cf933c9803478f83fb140cefe76cd37fc0d9918d922f6f6fbc6ca7297", size = 24860392, upload-time = "2026-03-23T22:05:41.019Z" }, + { url = "https://files.pythonhosted.org/packages/04/5b/b93ef560e7b69854a83610e7285ebc681bb385dd321e6f6d359bef5db4c0/uv-0.11.0-py3-none-win_arm64.whl", hash = "sha256:1960ae9c73d782a73b82e28e5f735b269743d18a467b3f14ec35b614435a2aef", size = 23347957, upload-time = "2026-03-23T22:04:47.727Z" }, ] [[package]] From 60695f084fee649d4188ed5f79803238f27a04c6 Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Sat, 28 Mar 2026 22:13:43 +0000 Subject: [PATCH 09/14] Revert "docs: update installation guide, move dev setup, remove pytest, fix em dashes" This reverts commit 3e5f00ba6476fe7e49486c4ccb2756785cba09ef. --- CHANGELOG.md | 264 ++++++++++ CONTRIBUTING.md | 2 +- pyproject.toml | 2 +- src/poly/cli.py | 732 ++++++++++++++++++++++---- src/poly/handlers/interface.py | 85 ++- src/poly/handlers/sdk.py | 3 - src/poly/handlers/sync_client.py | 200 ++++--- src/poly/output/__init__.py | 1 + src/poly/{ => output}/console.py | 0 src/poly/output/json_output.py | 31 ++ src/poly/project.py | 423 +++++++-------- src/poly/resources/__init__.py | 4 +- src/poly/resources/api_integration.py | 17 +- src/poly/resources/function.py | 4 +- src/poly/tests/project_test.py | 282 ++++++---- src/poly/tests/resources_test.py | 18 +- uv.lock | 46 +- 17 files changed, 1553 insertions(+), 561 deletions(-) create mode 100644 src/poly/output/__init__.py rename src/poly/{ => output}/console.py (100%) create mode 100644 src/poly/output/json_output.py diff --git a/CHANGELOG.md b/CHANGELOG.md index 65a89a3..f8c4239 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,270 @@ # CHANGELOG +## v0.6.0 (2026-03-27) + +### Features + +- Add resource caching and progress spinner for init/pull/branch + ([#50](https://github.com/polyai/adk/pull/50), + [`2d4fc0a`](https://github.com/polyai/adk/commit/2d4fc0ae78c348d0cc9269d7f0749c83a06adedd)) + +## Summary + +Batch 
`MultiResourceYamlResource` writes during `poly init` so each YAML file is written once
+ instead of once per resource, and add a progress spinner to `init`, `pull`, and `branch switch` so
+ the CLI doesn't appear stuck on large projects.
+
+Also updated the clone URL in CONTRIBUTING.md, changing the org from PolyAI-LDN to PolyAI.
+
+## Motivation
+
+`poly init` is very slow on projects with many pronunciations (or other multi-resource YAML types)
+ because `save()` rewrites the full YAML file for every single item. On large projects like pacden,
+ the process appears stuck with no output. The `save_to_cache` + `write_cache_to_file` pattern
+ already exists for `poly pull` — this reuses it for `init` and adds a progress spinner across all
+ three commands.
+
+## Changes
+
+- Use `save_to_cache=True` for all `MultiResourceYamlResource` saves during `init_project()`, then
+ flush to disk once via `write_cache_to_file()` - Add an optional `on_save(current, total)`
+ callback to `init_project()`, `pull_project()`, `_update_multi_resource_yaml_resources()`,
+ `_update_pulled_resources()`, and `switch_branch()` for progress reporting - Wire up
+ `console.status()` spinners in `cli.py` for `init`, `pull`, and `branch switch`, using
+ `nullcontext` to skip the spinner in `--json` mode - Progress counter includes both multi-resource
+ (per batch total) and non-multi-resource types for an accurate total
+
+- Update the clone URL in CONTRIBUTING.md: change the org from PolyAI-LDN to PolyAI
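+The batching pattern this entry describes can be sketched roughly as follows. Only the
+ `save_to_cache` / `write_cache_to_file` / `on_save(current, total)` names come from the
+ description above; the class shape, method bodies, and resource format are illustrative
+ assumptions, not the actual ADK implementation:
+
+```python
+from collections import defaultdict
+
+
+class MultiResourceYamlCache:
+    """Illustrative sketch of batching per-resource saves into one flush per file."""
+
+    def __init__(self):
+        self._pending = defaultdict(list)  # target YAML file -> queued resources
+        self.writes = []  # (path, resource_count) pairs, standing in for disk writes
+
+    def save(self, resource, save_to_cache=False):
+        if save_to_cache:
+            # Queue the resource instead of rewriting its YAML file immediately.
+            self._pending[resource["file"]].append(resource)
+        else:
+            # Old behaviour: one full file rewrite per resource.
+            self._write_file(resource["file"], [resource])
+
+    def write_cache_to_file(self, on_save=None):
+        # Flush each target file exactly once, reporting progress as we go.
+        total = len(self._pending)
+        for current, (path, resources) in enumerate(self._pending.items(), start=1):
+            self._write_file(path, resources)
+            if on_save is not None:
+                on_save(current, total)
+        self._pending.clear()
+
+    def _write_file(self, path, resources):
+        # Stand-in for the real YAML serialization step.
+        self.writes.append((path, len(resources)))
+```
+
+With three queued resources spread over two files, `write_cache_to_file` performs two writes and
+ invokes the callback as `(1, 2)` then `(2, 2)` — the kind of counter a status spinner can render.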
+ +## Test strategy + +- [x] Added/updated unit tests - [x] Manual CLI testing (`poly `) - [ ] Tested against a + live Agent Studio project - [ ] N/A (docs, config, or trivial change) + +## Checklist + +- [x] `ruff check .` and `ruff format --check .` pass - [x] `pytest` passes (361 tests, 0 failures) + - [x] No breaking changes to the `poly` CLI interface (or migration path documented) - [x] Commit + messages follow [conventional commits](https://www.conventionalcommits.org/) + +## Screenshots / Logs Before: Screenshot 2026-03-25 at 10 04
+  14 PM + +After: Screenshot 2026-03-25 at 10 04 01 PM + + +## v0.5.1 (2026-03-27) + +### Bug Fixes + +- Display branch name instead of branch id ([#45](https://github.com/polyai/adk/pull/45), + [`5a54240`](https://github.com/polyai/adk/commit/5a54240418d1848d195af23e87b3cb7005462d4b)) + +## Summary Display new branch name in CLI when the tool switches branch + +## Motivation On push when creating a new branch, users would be shown branch ID not new branch name + +## Changes + +- Change logger level for some logs to hide on usual CLI usage - Make it more clear when a branch id + is used in logs - When branch_id changes, output this in CLI with new branch name - Update auto + branch name to exclude `sdk-user` + +## Test strategy + + + +- [ ] Added/updated unit tests - [x] Manual CLI testing (`poly `) - [ ] Tested against a + live Agent Studio project - [ ] N/A (docs, config, or trivial change) + +## Checklist + +- [x] `ruff check .` and `ruff format --check .` pass - [x] `pytest` passes - [x] No breaking + changes to the `poly` CLI interface (or migration path documented) - [x] Commit messages follow + [conventional commits](https://www.conventionalcommits.org/) + +## Screenshots / Logs Screenshot 2026-03-26 at 15 54 24 + +--------- + +Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> + + +## v0.5.0 (2026-03-26) + +### Features + +- **cli**: Machine-readable --json, projection-based pull/push, and serialized push commands + ([#41](https://github.com/polyai/adk/pull/41), + [`cb91e2a`](https://github.com/polyai/adk/commit/cb91e2abffe97dfdbc6e3db8770f16a369f6da29)) + +## Summary + +Adds a global-style `--json` mode across `poly` subcommands so stdout is a single JSON object for + scripting and CI. Introduces `--from-projection` / optional projection output for `init` and + `pull`, and `--output-json-commands` on `push` to include the queued Agent Studio commands (as + dicts). 
Moves console helpers under `poly.output` and adds `json_output` helpers (including + protobuf → JSON via `MessageToDict`). + +## Motivation + +Operators and automation need stable, parseable CLI output and the ability to drive pull/push from a + captured projection (without hitting the projection API). Exposing staged push commands supports + dry-run review and integration testing. + +Closes #23 + +## Changes + +- Wire `json_parent` (`--json`) into relevant subparsers; many code paths now emit structured JSON + and exit with non-zero on failure where appropriate. - Add `--from-projection` (JSON string or `-` + for stdin) to `pull` and `push`; `SyncClientHandler.pull_resources` uses an inline projection when + provided instead of fetching. - Add `--output-json-projection` on `init` / `pull` (and related + flows) to include the projection in JSON output when requested. - Add `--output-json-commands` on + `push` to append serialized commands to the JSON payload; `push_project` returns `(success, + message, commands)`. - `pull_project` returns `(files_with_conflicts, projection)`; + `pull_resources` returns `(resources, projection)`. - New `poly/output/json_output.py` + (`json_print`, `commands_to_dicts`); relocate `console.py` to `poly/output/console.py` and update + imports. - Update `project_test` mocks/expectations for new return shapes; `uv.lock` updated for + dependencies. + +## Test strategy + +- [x] Added/updated unit tests - [ ] Manual CLI testing (`poly `) - [ ] Tested against a + live Agent Studio project - [ ] N/A (docs, config, or trivial change) + +## Checklist + +- [ ] `ruff check .` and `ruff format --check .` pass - [ ] `pytest` passes - [ ] No breaking + changes to the `poly` CLI interface (or migration path documented) - [ ] Commit messages follow + [conventional commits](https://www.conventionalcommits.org/) + +**Note for reviewers:** The **CLI** remains backward compatible (new flags only). 
+ **`AgentStudioProject.pull_project` / `push_project`** (and `pull_resources` on the handler) + **change return types** vs `main`; any direct Python callers must be updated to unpack the new + tuples and optional `projection_json` argument. + +## Screenshots / Logs + + + +--------- + +Co-authored-by: Oliver Eisenberg + +Co-authored-by: Claude Sonnet 4.6 + + +## v0.4.1 (2026-03-26) + +### Bug Fixes + +- Error on merges ([#44](https://github.com/polyai/adk/pull/44), + [`b3d8d62`](https://github.com/polyai/adk/commit/b3d8d62b8b36e476f7027691d0d18da33edf9a74)) + +## Summary Fix issue where merges were marked as successful when there is an internal API error + +## Motivation + +This error breaks pipelines that rely on this output + +Closes # + +## Changes + +- Make success response more explicit instead of relying on errors/conflicts lists + +## Test strategy + + + +- [ ] Added/updated unit tests - [ ] Manual CLI testing (`poly `) - [x] Tested against a + live Agent Studio project - [ ] N/A (docs, config, or trivial change) + +## Checklist + +- [x] `ruff check .` and `ruff format --check .` pass - [x] `pytest` passes - [x] No breaking + changes to the `poly` CLI interface (or migration path documented) - [x] Commit messages follow + [conventional commits](https://www.conventionalcommits.org/) + +## Screenshots / Logs + + + +- Guard uv.lock checkout in coverage workflow ([#42](https://github.com/polyai/adk/pull/42), + [`2383405`](https://github.com/polyai/adk/commit/238340568a8bdbe8ece9612f94d7bd7664154fad)) + +## Summary + +- Prevent coverage CI from failing when `uv.lock` is absent on a branch - Wrap both `git checkout -- + uv.lock` calls with a conditional `git rev-parse --verify` check before and after the base branch + checkout step + +🤖 Generated with [Claude Code](https://claude.com/claude-code) + +Co-authored-by: Claude Sonnet 4.6 + +### Chores + +- Add pytest-cov and coverage to dev dependencies ([#36](https://github.com/polyai/adk/pull/36), + 
[`649ccb7`](https://github.com/polyai/adk/commit/649ccb7d10f3ce59ba9e0f0094bf93b3c90736a7)) + +## Summary - Adds `pytest-cov>=6.0.0` and `coverage>=7.0.0` to the `[dev]` optional dependencies in + `pyproject.toml` + +## Test plan - [x] Run `uv pip install -e ".[dev]"` and verify `pytest-cov` and `coverage` install + successfully image + +🤖 Generated with [Claude Code](https://claude.com/claude-code) + +--------- + +Co-authored-by: Claude Sonnet 4.6 + +### Documentation + +- Fix formatting issues ([#40](https://github.com/polyai/adk/pull/40), + [`eafff58`](https://github.com/polyai/adk/commit/eafff58ab877a65d3fd204a850bcb7489083a1fa)) + +## Summary + + + +## Motivation + + + +Closes # + +## Changes + + + +- + +## Test strategy + + + +- [ ] Added/updated unit tests - [ ] Manual CLI testing (`poly `) - [ ] Tested against a + live Agent Studio project - [ ] N/A (docs, config, or trivial change) + +## Checklist + +- [ ] `ruff check .` and `ruff format --check .` pass - [ ] `pytest` passes - [ ] No breaking + changes to the `poly` CLI interface (or migration path documented) - [ ] Commit messages follow + [conventional commits](https://www.conventionalcommits.org/) + +## Screenshots / Logs + + + + ## v0.4.0 (2026-03-25) ### Bug Fixes diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index cad67d2..6787d65 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -12,7 +12,7 @@ Contributions are welcome! 
Please ensure all tests pass before submitting a pull ### Getting Started ```bash -git clone https://github.com/PolyAI-LDN/adk.git +git clone https://github.com/PolyAI/adk.git cd adk uv venv source .venv/bin/activate diff --git a/pyproject.toml b/pyproject.toml index a5e69cf..b6c83c9 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -106,7 +106,7 @@ tag_format = "v{version}" [project] name = "polyai-adk" -version = "0.4.0" +version = "0.6.0" description = "Agent Development Kit (ADK) — a CLI for managing Agent Studio projects locally" readme = "README.md" requires-python = ">=3.14.0" diff --git a/src/poly/cli.py b/src/poly/cli.py index f96e0a7..de8fd6a 100644 --- a/src/poly/cli.py +++ b/src/poly/cli.py @@ -12,15 +12,16 @@ import shutil import subprocess import sys -from argparse import ArgumentParser, RawTextHelpFormatter +from argparse import SUPPRESS, ArgumentParser, RawTextHelpFormatter +from contextlib import nullcontext from importlib.metadata import version as get_package_version -from typing import Optional +from typing import Any, Optional import argcomplete import requests import questionary -from poly.console import ( +from poly.output.console import ( console, error, handle_exception, @@ -36,6 +37,7 @@ success, warning, ) +from poly.output.json_output import json_print, commands_to_dicts from poly.handlers.github_api_handler import GitHubAPIHandler from poly.handlers.interface import ( REGIONS, @@ -81,6 +83,13 @@ def _create_parser(cls) -> ArgumentParser: help="Show full error tracebacks for debugging.", ) + json_parent = ArgumentParser(add_help=False) + json_parent.add_argument( + "--json", + action="store_true", + help="Print a single JSON object on stdout (machine-readable).", + ) + subparsers = parser.add_subparsers(dest="command", required=True) # DOCS @@ -115,7 +124,7 @@ def _create_parser(cls) -> ArgumentParser: # INIT init_parser = subparsers.add_parser( "init", - parents=[verbose_parent], + parents=[verbose_parent, json_parent], 
help="Initialize a new Agent Studio project.", description="Initialize a new Agent Studio project.\n\nExamples:\n poly init --region eu-west-1 --account_id 123 --project_id my_project\n poly init # (interactive selection)", formatter_class=RawTextHelpFormatter, @@ -147,12 +156,25 @@ def _create_parser(cls) -> ArgumentParser: init_parser.add_argument( "--format", action="store_true", help="Format resources after init." ) + init_parser.add_argument( + "--from-projection", + type=str, + metavar="JSON|-", + help=SUPPRESS, + default=None, + ) + init_parser.add_argument( + "--output-json-projection", + action="store_true", + help=SUPPRESS, + default=False, + ) init_parser.add_argument("--debug", action="store_true", help="Display debug logs.") # PULL pull_parser = subparsers.add_parser( "pull", - parents=[verbose_parent], + parents=[verbose_parent, json_parent], help="Pull the latest project configuration from Agent Studio.", description="Pull the latest project configuration from Agent Studio.\n\nExamples:\n poly pull --path /path/to/project\n poly pull -f # force overwrite local changes", formatter_class=RawTextHelpFormatter, @@ -175,12 +197,25 @@ def _create_parser(cls) -> ArgumentParser: help="Format resources after pulling.", default=False, ) + pull_parser.add_argument( + "--from-projection", + type=str, + metavar="JSON|-", + help=SUPPRESS, + default=None, + ) + pull_parser.add_argument( + "--output-json-projection", + action="store_true", + help=SUPPRESS, + default=False, + ) pull_parser.add_argument("--debug", action="store_true", help="Display debug logs.") # PUSH push_parser = subparsers.add_parser( "push", - parents=[verbose_parent], + parents=[verbose_parent, json_parent], help="Push the project configuration to Agent Studio.", description="Push the project configuration to Agent Studio.\n\nExamples:\n poly push --path /path/to/project\n poly push --skip-validation --dry-run", formatter_class=RawTextHelpFormatter, @@ -214,11 +249,30 @@ def _create_parser(cls) 
-> ArgumentParser: default=False, ) push_parser.add_argument("--debug", action="store_true", help="Display debug logs.") + push_parser.add_argument( + "--from-projection", + type=str, + metavar="JSON|-", + help=SUPPRESS, + default=None, + ) + push_parser.add_argument( + "--output-json-commands", + action="store_true", + help=SUPPRESS, + default=False, + ) + push_parser.add_argument( + "--email", + type=str, + help="Email to use for metadata creation for push", + default=None, + ) # STATUS status_parser = subparsers.add_parser( "status", - parents=[verbose_parent], + parents=[verbose_parent, json_parent], help="Check the changed files of the project.", description="Check the changed files of the project.\n\nExamples:\n poly status\n poly status --path /path/to/project", formatter_class=RawTextHelpFormatter, @@ -235,7 +289,7 @@ def _create_parser(cls) -> ArgumentParser: # REVERT revert_parser = subparsers.add_parser( "revert", - parents=[verbose_parent], + parents=[verbose_parent, json_parent], help="Revert changes in the project.", description="Revert changes in the project.\n\nExamples:\n poly revert --all\n poly revert file1.yaml file2.yaml", formatter_class=RawTextHelpFormatter, @@ -263,7 +317,7 @@ def _create_parser(cls) -> ArgumentParser: # DIFF diff_parser = subparsers.add_parser( "diff", - parents=[verbose_parent], + parents=[verbose_parent, json_parent], help="Show the changes made to the project.", description="Show the changes made to the project.\n\nExamples:\n poly diff\n poly diff file1.yaml", formatter_class=RawTextHelpFormatter, @@ -327,7 +381,7 @@ def _create_parser(cls) -> ArgumentParser: # GET BRANCHES 'branch list' branches_parser = subparsers.add_parser( "branch", - parents=[verbose_parent], + parents=[verbose_parent, json_parent], help="Manage branches in the Agent Studio project.", description="Manage branches in the Agent Studio project.\n\nExamples:\n poly branch list\n poly branch create new-branch\n poly branch switch existing-branch", 
formatter_class=RawTextHelpFormatter, @@ -357,11 +411,24 @@ def _create_parser(cls) -> ArgumentParser: action="store_true", help="Force switch to a different branch and discard changes.", ) + branches_parser.add_argument( + "--from-projection", + type=str, + metavar="JSON|-", + help=SUPPRESS, + default=None, + ) + branches_parser.add_argument( + "--output-json-projection", + action="store_true", + help="Output the projection in json format", + default=False, + ) # FORMAT format_parser = subparsers.add_parser( "format", - parents=[verbose_parent], + parents=[verbose_parent, json_parent], help="Run ruff and YAML/JSON formatting on the project (optional ty with --ty).", description=( "Run ruff (lint + format) on Python and formatting on YAML/JSON resources.\n\n" @@ -399,7 +466,7 @@ def _create_parser(cls) -> ArgumentParser: # Validate validate_parser = subparsers.add_parser( "validate", - parents=[verbose_parent], + parents=[verbose_parent, json_parent], help="Validate the project configuration locally.", description="Validate the project configuration locally.\n\nExamples:\n poly validate --path /path/to/project\n", formatter_class=RawTextHelpFormatter, @@ -517,22 +584,42 @@ def _run_command(cls, args): args.account_id, args.project_id, args.format, + args.from_projection, + output_json=args.json, + output_json_projection=args.output_json_projection, ) elif args.command == "pull": - cls.pull(args.path, args.force, args.format) + cls.pull( + args.path, + args.force, + args.format, + args.from_projection, + output_json=args.json, + output_json_projection=args.output_json_projection, + ) elif args.command == "push": - cls.push(args.path, args.force, args.skip_validation, args.dry_run, args.format) + cls.push( + args.path, + args.force, + args.skip_validation, + args.dry_run, + args.format, + args.email, + args.from_projection, + output_json=args.json, + output_commands=args.output_json_commands, + ) elif args.command == "status": - cls.status(args.path) + 
cls.status(args.path, args.json) elif args.command == "revert": - cls.revert(args.path, args.all, args.files) + cls.revert(args.path, args.all, args.files, output_json=args.json) elif args.command == "diff": - cls.diff(args.path, args.files) + cls.diff(args.path, args.files, args.json) elif args.command == "review": if args.delete: @@ -549,10 +636,10 @@ def _run_command(cls, args): elif args.command == "branch": if args.action == "list": - cls.branch_list(args.path) + cls.branch_list(args.path, args.json) elif args.action == "create": - cls.branch_create(args.path, args.branch_name) + cls.branch_create(args.path, args.branch_name, args.json) elif args.action == "switch": cls.branch_switch( @@ -560,10 +647,13 @@ def _run_command(cls, args): args.branch_name, getattr(args, "force", False), getattr(args, "format", False), + args.json, + output_json_projection=args.output_json_projection, + from_projection=args.from_projection, ) elif args.action == "current": - cls.get_current_branch(args.path) + cls.get_current_branch(args.path, args.json) elif args.command == "format": cls.format( @@ -571,10 +661,11 @@ def _run_command(cls, args): args.files, getattr(args, "check", False), getattr(args, "ty", False), + output_json=args.json, ) elif args.command == "validate": - cls.validate_project(args.path) + cls.validate_project(args.path, args.json) elif args.command == "docs": cls.docs( @@ -630,15 +721,59 @@ def main(cls, sys_args=None): except Exception as e: handle_exception(e) + @staticmethod + def _parse_from_projection_json( + from_projection: Optional[str], + *, + json_errors: bool, + ) -> Optional[dict[str, Any]]: + """Parse ``--from-projection`` CLI value into a projection dict, or exit on failure. + + If the value is ``-`` (after stripping), JSON is read from stdin until EOF. 
+ """ + if not from_projection: + return None + raw = from_projection.strip() + if raw == "-": + raw = sys.stdin.read() + try: + parsed: Any = json.loads(raw) + if isinstance(parsed, dict) and "projection" in parsed: + parsed = parsed["projection"] + except json.JSONDecodeError as e: + msg = f"Invalid JSON in --from-projection: {e}" + if json_errors: + json_print({"success": False, "error": msg}) + else: + error(msg) + sys.exit(1) + if not isinstance(parsed, dict): + msg = "--from-projection must be a JSON object (dictionary)." + if json_errors: + json_print({"success": False, "error": msg}) + else: + error(msg) + sys.exit(1) + return parsed + @classmethod - def _load_project(cls, base_path: str) -> AgentStudioProject: + def _load_project(cls, base_path: str, output_json: bool = False) -> AgentStudioProject: """Read project config or exit with a helpful error if not found. Args: base_path: Path to the project directory. + output_json: If True, print JSON and exit when config is missing. """ project = cls.read_project_config(base_path) if not project: + if output_json: + json_print( + { + "success": False, + "error": "No project configuration found. Run poly init to initialize a project.", + } + ) + sys.exit(1) error( "No project configuration found. Run [bold]poly init[/bold] to initialize a project." 
) @@ -679,9 +814,22 @@ def init_project( account_id: str = None, project_id: str = None, format: bool = False, - ) -> AgentStudioProject: + from_projection: str = None, + output_json: bool = False, + output_json_projection: bool = False, + ) -> None: """Initialize a new Agent Studio project.""" - info("Initialising project...") + if output_json and not (region and account_id and project_id): + json_print( + { + "success": False, + "error": "init with --json requires --region, --account_id, and --project_id.", + } + ) + sys.exit(1) + + if not output_json: + info("Initialising project...") if not region: regions = REGIONS @@ -699,6 +847,14 @@ def init_project( use_jk_keys=False, ).ask() if not account_menu: + if output_json: + json_print( + { + "success": False, + "error": "No account selected.", + } + ) + sys.exit(1) warning("No account selected. Exiting.") return account_id = accounts[account_menu] @@ -712,39 +868,133 @@ def init_project( use_jk_keys=False, ).ask() if not project_menu: + if output_json: + json_print( + { + "success": False, + "error": "No project selected.", + } + ) + sys.exit(1) warning("No project selected. 
Exiting.") return project_id = projects[project_menu] - info(f"Initializing project [bold]{account_id}/{project_id}[/bold]...") + if not output_json: + info(f"Initializing project [bold]{account_id}/{project_id}[/bold]...") - project = AgentStudioProject.init_project( - base_path=base_path, - region=region, - account_id=account_id, - project_id=project_id, - format=format, + projection_json = cls._parse_from_projection_json( + from_projection, + json_errors=output_json or output_json_projection, ) + ctx = ( + console.status("[info]Saving resources...[/info]") if not output_json else nullcontext() + ) + on_save = None + + with ctx as status: + if status: + + def on_save(current: int, total: int) -> None: + status.update(f"[info]Saving resources ({current}/{total})...[/info]") + + project, projection = AgentStudioProject.init_project( + base_path=base_path, + region=region, + account_id=account_id, + project_id=project_id, + format=format, + projection_json=projection_json, + on_save=on_save, + ) + if not project: - error("Failed to initialize the project.") - return None + if output_json: + json_print( + { + "success": False, + "error": "Failed to initialize the project.", + } + ) + else: + error("Failed to initialize the project.") + sys.exit(1) - success(f"Project initialized at {project.root_path}") - return project + if output_json or output_json_projection: + json_output = { + "success": True, + "root_path": project.root_path, + } + if output_json_projection: + json_output["projection"] = projection + json_print(json_output) + else: + success(f"Project initialized at {project.root_path}") @classmethod - def pull(cls, base_path: str, force: bool = False, format: bool = False) -> AgentStudioProject: + def pull( + cls, + base_path: str, + force: bool = False, + format: bool = False, + from_projection: str = None, + output_json: bool = False, + output_json_projection: bool = False, + ) -> None: """Pull the latest project configuration from the Agent Studio.""" - 
project = cls._load_project(base_path) - info(f"Pulling project [bold]{project.account_id}/{project.project_id}[/bold]...") + project = cls._load_project(base_path, output_json=output_json) + if not output_json: + info(f"Pulling project [bold]{project.account_id}/{project.project_id}[/bold]...") + + projection_json = cls._parse_from_projection_json( + from_projection, + json_errors=output_json or output_json_projection, + ) + + original_branch_id = project.branch_id - files_with_conflicts = project.pull_project(force=force, format=format) + ctx = ( + console.status("[info]Saving resources...[/info]") if not output_json else nullcontext() + ) + on_save = None + + with ctx as status: + if status: + + def on_save(current: int, total: int) -> None: + status.update(f"[info]Saving resources ({current}/{total})...[/info]") + + files_with_conflicts, projection = project.pull_project( + force=force, format=format, projection_json=projection_json, on_save=on_save + ) + + new_branch_name = None + if original_branch_id != project.branch_id: + new_branch_name = project.get_current_branch() + if output_json or output_json_projection: + json_output = { + "success": not bool(files_with_conflicts), + "files_with_conflicts": files_with_conflicts, + } + if new_branch_name: + json_output["new_branch_name"] = new_branch_name + json_output["new_branch_id"] = project.branch_id + if output_json_projection: + json_output["projection"] = projection + json_print(json_output) + if files_with_conflicts: + sys.exit(1) + return + + if new_branch_name: + warning( + f"Current branch no longer exists in Agent Studio. Switched to branch '{new_branch_name}'." 
+ ) if files_with_conflicts: print_file_list("Merge conflicts detected", files_with_conflicts, "filename.conflict") success(f"Pulled {project.account_id}/{project.project_id}") - return project @classmethod def push( @@ -754,29 +1004,76 @@ def push( skip_validation: bool = False, dry_run: bool = False, format: bool = False, - ) -> AgentStudioProject: + email: Optional[str] = None, + from_projection: str = None, + output_json: bool = False, + output_commands: bool = False, + ) -> None: """Push the project configuration to the Agent Studio.""" - project = cls._load_project(base_path) - info(f"Pushing local changes for [bold]{project.account_id}/{project.project_id}[/bold]...") + project = cls._load_project(base_path, output_json=output_json) + if not output_json and not output_commands: + info( + f"Pushing local changes for [bold]{project.account_id}/{project.project_id}[/bold]..." + ) - push_ok, output = project.push_project( - force=force, skip_validation=skip_validation, dry_run=dry_run, format=format + projection_json = cls._parse_from_projection_json( + from_projection, + json_errors=output_json or output_commands, ) + + original_branch_id = project.branch_id + push_ok, output, commands = project.push_project( + force=force, + skip_validation=skip_validation, + dry_run=dry_run, + format=format, + email=email, + projection_json=projection_json, + ) + new_branch_name = None + if original_branch_id != project.branch_id: + new_branch_name = project.get_current_branch() + if output_json or output_commands: + json_output = { + "success": push_ok, + "message": output, + "dry_run": dry_run, + } + if new_branch_name: + json_output["new_branch_name"] = new_branch_name + json_output["new_branch_id"] = project.branch_id + if output_commands: + json_output["commands"] = commands_to_dicts(commands) + json_print(json_output) + if not push_ok: + sys.exit(1) + return + + if new_branch_name: + warning(f"Created and switched to new branch '{new_branch_name}'.") if push_ok: 
success(f"Pushed {project.account_id}/{project.project_id} to Agent Studio.") else: error(f"Failed to push {project.account_id}/{project.project_id} to Agent Studio.") plain(output) - return project - @classmethod - def status(cls, base_path: str) -> None: + def status(cls, base_path: str, output_json: bool = False) -> None: """Check the changed files of the project.""" - project = cls._load_project(base_path) + project = cls._load_project(base_path, output_json=output_json) files_with_conflicts, modified_files, new_files, deleted_files = project.project_status() + if output_json: + json_output = { + "files_with_conflicts": files_with_conflicts, + "modified_files": modified_files, + "new_files": new_files, + "deleted_files": deleted_files, + } + json_print(json_output) + return + branch_info = project.get_current_branch() print_status( @@ -796,18 +1093,41 @@ def status(cls, base_path: str) -> None: plain("\n[muted]No changes detected.[/muted]") @classmethod - def revert(cls, base_path: str, all_files: bool = False, files: list[str] = None) -> None: + def revert( + cls, + base_path: str, + all_files: bool = False, + files: list[str] = None, + output_json: bool = False, + ) -> None: """Revert changes in the project.""" if not all_files and not files: + if output_json: + json_print( + { + "success": False, + "error": "No files specified to revert. Use --all or list files.", + } + ) + sys.exit(1) error("No files specified to revert. 
Use [bold]--all[/bold] to revert all changes.") return - project = cls._load_project(base_path) + project = cls._load_project(base_path, output_json=output_json) # If relative paths are provided, convert them to absolute paths files = [os.path.abspath(os.path.join(os.getcwd(), file)) for file in files or []] files_reverted = project.revert_changes(all_files=all_files, files=files) + if output_json: + json_print( + { + "success": bool(files_reverted), + "files_reverted": files_reverted, + } + ) + return + if not files_reverted: plain("[muted]No changes to revert.[/muted]") return @@ -815,25 +1135,37 @@ def revert(cls, base_path: str, all_files: bool = False, files: list[str] = None success("Changes reverted successfully.") @classmethod - def _diff(cls, base_path: str, files: list[str] = None) -> Optional[dict[str, str]]: - """Show the changes made to the project.""" + def _diff( + cls, base_path: str, files: list[str] = None, output_json: bool = False + ) -> dict[str, str]: + """Compute local diffs; may print a human hint when there are no changes.""" - project = cls._load_project(base_path) + project = cls._load_project(base_path, output_json=output_json) files = [os.path.abspath(os.path.join(os.getcwd(), file)) for file in files or []] - diffs = project.get_diffs(all_files=not files, files=files) + diffs = project.get_diffs(all_files=not files, files=files) or {} - if not diffs: + if not diffs and not output_json: plain("[muted]No changes detected.[/muted]") - return None return diffs @classmethod - def diff(cls, base_path: str, files: list[str] = None) -> None: + def diff(cls, base_path: str, files: list[str] = None, output_json: bool = False) -> None: """Show the changes made to the project.""" - diffs = cls._diff(base_path, files) or {} + diffs = cls._diff(base_path, files, output_json=output_json) + if output_json: + json_print( + { + "diffs": diffs, + } + ) + return + + if not diffs: + return + for file_path, diff_text in diffs.items(): 
console.rule(f"[bold]{file_path}[/bold]") print_diff(diff_text) @@ -857,7 +1189,7 @@ def _review( diffs = project.diff_remote_named_versions(before_name, after_name) or {} else: # Compare local vs remote (existing behavior) - diffs = cls._diff(base_path) or {} + diffs = cls._diff(base_path) if not diffs: return {} @@ -925,11 +1257,20 @@ def delete_gists(cls) -> None: return @classmethod - def branch_list(cls, base_path: str) -> None: + def branch_list(cls, base_path: str, output_json: bool = False) -> None: """List branches in the Agent Studio project.""" - project = cls._load_project(base_path) + project = cls._load_project(base_path, output_json=output_json) current_branch, branches = project.get_branches() + + if output_json: + json_output = { + "current_branch": current_branch, + "branches": branches, + } + json_print(json_output) + return + if not branches: plain("[muted]No branches found.[/muted]") return @@ -938,42 +1279,88 @@ def branch_list(cls, base_path: str) -> None: if current_branch is None: warning( - f"Current local branch '{project.branch_id}' does not exist in Agent Studio. " + f"Current local branch does not exist in Agent Studio. " "It may have been deleted or merged." ) @classmethod - def branch_create(cls, base_path: str, branch_name: str = None) -> None: + def branch_create( + cls, base_path: str, branch_name: str = None, output_json: bool = False + ) -> None: """Create a new branch in the Agent Studio project.""" - project = cls._load_project(base_path) + project = cls._load_project(base_path, output_json=output_json) if project.branch_id != "main": - error( - "Branches can only be created from the [bold]main[/bold] branch (sandbox). " - "Please switch and try again." - ) - return + if output_json: + json_print( + { + "success": False, + "error": "Branches can only be created from the main branch (sandbox).", + } + ) + else: + error( + "Branches can only be created from the [bold]main[/bold] branch (sandbox). 
" + "Please switch and try again." + ) + sys.exit(1) if not branch_name: + if output_json: + json_print( + { + "success": False, + "error": "branch create with --json requires a branch name argument.", + } + ) + sys.exit(1) branch_name = input("Enter the name of the new branch: ").strip() if not branch_name: warning("No branch name provided. Exiting.") return new_branch_id = project.create_branch(branch_name) + if output_json: + json_print( + { + "success": bool(new_branch_id), + "new_branch_id": new_branch_id, + "branch_name": branch_name, + } + ) + if not new_branch_id: + sys.exit(1) + return + if new_branch_id: success(f"Branch '{branch_name}' created (ID: {new_branch_id})") else: error("Failed to create the branch.") + sys.exit(1) @classmethod def branch_switch( - cls, base_path: str, branch_name: str = None, force: bool = False, format: bool = False + cls, + base_path: str, + branch_name: str = None, + force: bool = False, + format: bool = False, + output_json: bool = False, + output_json_projection: bool = False, + from_projection: str = None, ) -> None: """Switch to a different branch in the Agent Studio project.""" - project = cls._load_project(base_path) + project = cls._load_project(base_path, output_json=output_json) if not branch_name: + if output_json: + json_print( + { + "success": False, + "error": "branch switch with --json requires a branch name argument.", + } + ) + sys.exit(1) # Drop down menu to select branch current_branch, branches = project.get_branches() if not branches: @@ -999,21 +1386,64 @@ def branch_switch( selected_option = branch_menu branch_name = selected_option.replace(" (current)", "") - switch_ok = project.switch_branch(branch_name, force=force, format=format) + projection_json = cls._parse_from_projection_json( + from_projection, + json_errors=output_json or output_json_projection, + ) + + ctx = ( + console.status("[info]Saving resources...[/info]") if not output_json else nullcontext() + ) + on_save = None + + with ctx as 
status: + if status: + + def on_save(current: int, total: int) -> None: + status.update(f"[info]Saving resources ({current}/{total})...[/info]") + + switch_ok, projection = project.switch_branch( + branch_name, + force=force, + format=format, + projection_json=projection_json, + on_save=on_save, + ) + + if output_json or output_json_projection: + json_output = { + "success": switch_ok, + "branch_name": branch_name, + } + if output_json_projection: + json_output["projection"] = projection + json_print(json_output) + if not switch_ok: + sys.exit(1) + return + if switch_ok: success(f"Switched to branch '{branch_name}'.") else: error(f"Failed to switch to branch '{branch_name}'.") + sys.exit(1) @classmethod - def get_current_branch(cls, base_path: str) -> None: + def get_current_branch(cls, base_path: str, output_json: bool = False) -> None: """Get the current branch of the Agent Studio project.""" - project = cls._load_project(base_path) + project = cls._load_project(base_path, output_json=output_json) current_branch = project.get_current_branch() + if output_json: + json_output = { + "current_branch": current_branch, + } + json_print(json_output) + return + if current_branch is None: warning( - f"Current local branch '{project.branch_id}' does not exist in Agent Studio. " + f"Current local branch does not exist in Agent Studio. " "It may have been deleted or merged." 
) return @@ -1026,73 +1456,133 @@ def format( files: list[str] = None, check_only: bool = False, run_ty: bool = False, + output_json: bool = False, ) -> None: """Format project resources (Python via ruff, YAML/JSON via in-process formatting); optionally run ty.""" - project = cls._load_project(base_path) - # Resolve to absolute paths so they match resource_mapping.file_path + project = cls._load_project(base_path, output_json=output_json) files_resolved: list[str] | None = None if files: files_resolved = [os.path.abspath(os.path.join(base_path, f)) for f in files] - if check_only: - info("[bold]Check-only[/bold]: verifying formatting (no files will be modified).") - else: - info("[bold]Fix mode[/bold]: formatting project resources.") - - plain("") + if not output_json: + if check_only: + info("[bold]Check-only[/bold]: verifying formatting (no files will be modified).") + else: + info("[bold]Fix mode[/bold]: formatting project resources.") + plain("") + info( + "Checking project resources (Python + YAML/JSON)" + if check_only + else "Formatting project resources (Python + YAML/JSON)" + ) - step = ( - "Checking project resources (Python + YAML/JSON)" - if check_only - else "Formatting project resources (Python + YAML/JSON)" - ) - info(step) affected, format_errors = project.format_files(files=files_resolved, check_only=check_only) - for msg in format_errors: - plain(f"[red]{msg}[/red]") + rel_affected = [os.path.relpath(p, base_path) or p for p in affected] + if format_errors: - error("Format failed for some files.") + if output_json: + json_print( + { + "success": False, + "check_only": check_only, + "format_errors": format_errors, + "affected": rel_affected, + "ty_ran": False, + "ty_returncode": None, + "ty_timed_out": False, + } + ) + else: + for msg in format_errors: + plain(f"[red]{msg}[/red]") + error("Format failed for some files.") sys.exit(1) - return + if check_only and affected: + if output_json: + json_print( + { + "success": False, + "check_only": 
check_only, + "format_errors": [], + "affected": rel_affected, + "ty_ran": False, + "ty_returncode": None, + "ty_timed_out": False, + } + ) + else: + for path in affected: + rel = os.path.relpath(path, base_path) or path + plain(f"[red]{rel}[/red]") + info("Try [bold]poly format[/bold] to fix.") + sys.exit(1) + + if not output_json: for path in affected: rel = os.path.relpath(path, base_path) or path - plain(f"[red]{rel}[/red]") - info("Try [bold]poly format[/bold] to fix.") - sys.exit(1) - return - for path in affected: - rel = os.path.relpath(path, base_path) or path - plain(rel) - success("Passed.") - if check_only: - success("All checks passed (no changes written).") - else: - success("All issues fixed." if affected else "No issues found.") + plain(rel) + success("Passed.") + if check_only: + success("All checks passed (no changes written).") + else: + success("All issues fixed." if affected else "No issues found.") - # Ty (type check only; no fix) — off by default; use --ty to enable. 
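The `--json` branches added throughout this hunk give `format` a machine-readable contract in place of the Rich console output. A minimal sketch of consuming that payload from a wrapper script, assuming only the field names visible in this diff (the sample values and the wrapper itself are hypothetical, not part of the patch):

```python
import json

# Sample payload with the fields the new `format --json` branches emit;
# the values here are illustrative only.
raw = (
    '{"success": true, "check_only": true, "format_errors": [], '
    '"affected": [], "ty_ran": false, "ty_returncode": null, '
    '"ty_timed_out": false}'
)

result = json.loads(raw)

if not result["success"]:
    # Mirrors the CLI contract in the diff: failures also exit non-zero.
    problems = result["format_errors"] or result["affected"]
    print(f"format failed: {problems}")
elif result["check_only"]:
    print("check passed; no changes would be written")
else:
    print(f"formatted {len(result['affected'])} file(s)")
```

Because every exit path emits the same seven keys, a consumer can branch on `success`, `check_only`, and the `ty_*` fields without parsing console text.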
+        ty_returncode: int | None = None
+        ty_timed_out = False
         if run_ty:
             ty_cmd = [sys.executable, "-m", "ty"]
             if shutil.which("ty"):
                 ty_cmd = ["ty"]
-            info("Type checking (ty)")
+            if not output_json:
+                info("Type checking (ty)")
             try:
                 r = subprocess.run(
                     ty_cmd + ["check"],
                     cwd=base_path,
-                    capture_output=False,
+                    capture_output=output_json,
                     text=True,
                     timeout=15,
                     stdin=subprocess.DEVNULL,
                 )
+                ty_returncode = r.returncode
             except subprocess.TimeoutExpired:
-                plain("[red]Timed out after 15s.[/red]")
+                ty_timed_out = True
+                if output_json:
+                    json_print(
+                        {
+                            "success": False,
+                            "check_only": check_only,
+                            "format_errors": [],
+                            "affected": rel_affected,
+                            "ty_ran": True,
+                            "ty_returncode": None,
+                            "ty_timed_out": True,
+                        }
+                    )
+                else:
+                    plain("[red]Timed out after 15s.[/red]")
                 sys.exit(1)
-                return
-            if r.returncode != 0:
+
+        # Only gate the exit and the final "Passed." on ty when ty actually ran;
+        # ty_returncode is None otherwise, and None != 0 would always exit 1.
+        if not output_json and run_ty and ty_returncode != 0:
+            sys.exit(1)
+        if not output_json and run_ty:
+            success("Passed.")
+
+        if output_json:
+            json_print(
+                {
+                    "success": not (run_ty and ty_returncode not in (None, 0)),
+                    "check_only": check_only,
+                    "format_errors": [],
+                    "affected": rel_affected,
+                    "ty_ran": run_ty,
+                    "ty_returncode": ty_returncode,
+                    "ty_timed_out": ty_timed_out,
+                }
+            )
+        if run_ty and ty_returncode != 0:
             sys.exit(1)
-            return
-        success("Passed.")
 
     @classmethod
     def chat(
@@ -1244,11 +1734,19 @@ def _run_chat_loop(
         return restart
 
     @classmethod
-    def validate_project(cls, base_path: str) -> None:
+    def validate_project(cls, base_path: str, output_json: bool = False) -> None:
         """Validate the project configuration locally."""
-        project = cls._load_project(base_path)
-
+        project = cls._load_project(base_path, output_json=output_json)
         errors = project.validate_project()
+
+        if output_json:
+            json_output = {
+                "valid": bool(not errors),
+                "errors": errors,
+            }
+            json_print(json_output)
+            return
+
         if not errors:
             success("Project configuration is valid.")
         else:
diff --git a/src/poly/handlers/interface.py b/src/poly/handlers/interface.py
index 97d2f12..400d387 
100644 --- a/src/poly/handlers/interface.py +++ b/src/poly/handlers/interface.py @@ -4,6 +4,8 @@ from typing import Any, Optional +from google.protobuf.message import Message + from poly.handlers.platform_api import PlatformAPIHandler from poly.handlers.sync_client import SyncClientHandler from poly.resources import BaseResource, Resource @@ -108,13 +110,24 @@ def pull_deployment_resources( """ return self.sync_client.pull_deployment_resources(deployment_id) - def pull_resources(self) -> dict[type[Resource], dict[str, Resource]]: + def pull_resources( + self, projection_json: Optional[dict[str, Any]] = None + ) -> tuple[dict[type[Resource], dict[str, Resource]], dict[str, Any]]: """Fetch all resources for the specific project. + Args: + projection_json (Optional[dict[str, Any]]): A dictionary containing the projection. + If provided, the projection will be used instead of fetching it from the API. + Returns: dict[type[Resource], dict[str, Resource]]: A dictionary mapping resource types to their resources + dict[str, Any]: The projection data """ + if projection_json is not None: + return SyncClientHandler.load_resources_from_projection( + projection_json + ), projection_json return self.sync_client.pull_resources() def push_resources( @@ -129,26 +142,80 @@ def push_resources( """Upload multiple resources for the specific project. 
Args: - new_resources (dict[type[Resource], dict[str, Resource]]): New resources to upload - deleted_resources (dict[type[Resource], dict[str, Resource]]): Resources to delete - updated_resources (dict[type[Resource], dict[str, Resource]]): Updated resources to upload + new_resources (dict[type[BaseResource], dict[str, BaseResource]]): New resources to upload + deleted_resources (dict[type[BaseResource], dict[str, BaseResource]]): Resources to delete + updated_resources (dict[type[BaseResource], dict[str, BaseResource]]): Updated resources to upload dry_run (bool): If True, only log the upload actions without actually uploading + queue_pushes (bool): If True, queue the resources for pushing. email (str): Email to use for metadata creation. If None, use the email of the current user. Returns: bool: True if the resources were pushed successfully, False otherwise """ - return self.sync_client.push_resources( + self.queue_resources( deleted_resources=deleted_resources, new_resources=new_resources, updated_resources=updated_resources, - dry_run=dry_run, - queue_pushes=queue_pushes, email=email, ) + if queue_pushes: + return True + + if dry_run: + self.clear_command_queue() + return True + + return self.send_queued_commands() + + def queue_resources( + self, + deleted_resources: dict[type[BaseResource], dict[str, BaseResource]], + new_resources: dict[type[BaseResource], dict[str, BaseResource]], + updated_resources: dict[type[BaseResource], dict[str, BaseResource]], + email: Optional[str] = None, + ) -> list[Message]: + """Queue multiple resources for the specific project. + + Args: + deleted_resources (dict[type[BaseResource], dict[str, BaseResource]]): Resources to delete + new_resources (dict[type[BaseResource], dict[str, BaseResource]]): New resources to upload + updated_resources (dict[type[BaseResource], dict[str, BaseResource]]): Updated resources to upload + email (str): Email to use for metadata creation. + If None, use the email of the current user. 
+ + Returns: + list[Message]: A list of queued Command protobuf messages. + """ + return self.sync_client.queue_resources( + deleted_resources=deleted_resources, + new_resources=new_resources, + updated_resources=updated_resources, + email=email, + ) + + def send_queued_commands(self) -> bool: + """Send all queued commands as a batch and clear the queue. + + Returns: + bool: True if the commands were sent successfully, False otherwise + """ + return self.sync_client.send_queued_commands() + + def clear_command_queue(self) -> None: + """Clear all queued commands without sending.""" + self.sync_client.clear_command_queue() + + def get_queued_commands(self) -> list[Message]: + """Get all queued commands. + + Returns: + list[Message]: A list of queued Command protobuf messages. + """ + return self.sync_client.get_queued_commands() + def get_branches(self) -> dict[str, str]: """Get a list of branches. @@ -184,7 +251,7 @@ def switch_branch(self, branch_id: str) -> bool: def merge_branch( self, message: str, conflict_resolutions: Optional[list[dict[str, Any]]] = None - ) -> tuple[list[dict[str, str]], list[dict[str, str]]]: + ) -> tuple[bool, list[dict[str, str]], list[dict[str, str]]]: """Merge the current branch into main. 
Args: @@ -195,7 +262,9 @@ def merge_branch( - value: Optional custom value (only used with custom strategy) Returns: + success (bool): True if the merge was successful, False otherwise list[dict[str, str]]: A list of conflict information if the merge failed, empty list if successful + list[dict[str, str]]: A list of error information if the merge failed, empty list if successful """ return self.sync_client.merge_branch(message, conflict_resolutions) diff --git a/src/poly/handlers/sdk.py b/src/poly/handlers/sdk.py index 719b7bf..cfd3de9 100644 --- a/src/poly/handlers/sdk.py +++ b/src/poly/handlers/sdk.py @@ -316,9 +316,6 @@ def merge_branch( response_data = response.json() # Check if this is a conflict response if "conflicts" in response_data or "hasConflicts" in response_data: - logger.warning( - f"Merge has conflicts: {len(response_data.get('conflicts', []))} conflicts detected" - ) return response_data # Otherwise, it's a different error error_msg = f"API Error 400: {response_data}" diff --git a/src/poly/handlers/sync_client.py b/src/poly/handlers/sync_client.py index 76a14f8..f5f2e5c 100644 --- a/src/poly/handlers/sync_client.py +++ b/src/poly/handlers/sync_client.py @@ -5,17 +5,18 @@ import logging import uuid +from copy import deepcopy from typing import Any, Optional from poly.handlers.protobuf.commands_pb2 import Command from poly.handlers.protobuf.handoff_pb2 import Handoff_SetDefault from poly.handlers.sdk import SourcererAPIError, SourcererSDK from poly.resources import ( - ASRBiasing, - AsrSettings, ApiIntegration, - ApiIntegrationOperation, ApiIntegrationEnvironments, + ApiIntegrationOperation, + ASRBiasing, + AsrSettings, BaseResource, ChatGreeting, ChatStylePrompt, @@ -77,7 +78,7 @@ class SyncClientHandler: @property def branch_id(self) -> str: """Get the current branch ID.""" - return self.sdk.branch_id + return self._sdk.branch_id def __init__( self, @@ -101,40 +102,46 @@ def __init__( project_id=project_id, branch_id=branch_id, ) - # Switch to the 
specified branch if exists and provided. - if branch_id and branch_id != "main": - found_branches = self._sdk.fetch_branches().get("branches", []) - branch = next((b for b in found_branches if b.get("branchId") == branch_id), None) - if branch: - self._sdk.branch_id = branch_id - else: - logger.warning(f"Branch {branch_id} does not exist. Switching to 'main' branch.") - self._sdk.branch_id = "main" @property def sdk(self) -> SourcererSDK: """Get the Sourcerer SDK instance.""" return self._sdk - def _load_resources(self, projection: dict) -> dict[type[Resource], dict[str, Resource]]: + def assert_branch_exists(self) -> str: + """Assert that the branch exists and switch to 'main' if it doesn't.""" + if self.branch_id != "main": + found_branches = self._sdk.fetch_branches().get("branches", []) + branch = next((b for b in found_branches if b.get("branchId") == self.branch_id), None) + if not branch: + logger.info( + f"Branch ID:'{self.branch_id}' does not exist. Switching to 'main' branch." + ) + self._sdk.branch_id = "main" + return self.branch_id + + @classmethod + def load_resources_from_projection( + cls, projection: dict + ) -> dict[type[Resource], dict[str, Resource]]: return { - Topic: self._read_topics_from_projection(projection), - Function: self._read_functions_from_projection(projection), - Entity: self._read_entities_from_projection(projection), - Variable: self._read_variables_from_projection(projection), - **self._read_agent_settings_from_projection(projection), - **self._read_channel_settings_from_projection(projection), - **self._read_flows_from_projection(projection), - ExperimentalConfig: self._read_experimental_config_from_projection(projection), - SMSTemplate: self._read_sms_templates_from_projection(projection), - Handoff: self._read_handoffs_from_projection(projection), - **self._read_variants_from_projection(projection), - PhraseFilter: self._read_phrase_filters_from_projection(projection), - Pronunciation: 
self._read_pronunciations_from_projection(projection), - KeyphraseBoosting: self._read_keyphrase_boosting_from_projection(projection), - TranscriptCorrection: self._read_transcript_corrections_from_projection(projection), - **self._read_asr_settings_from_projection(projection), - ApiIntegration: self._read_api_integrations_from_projection(projection), + Topic: cls._read_topics_from_projection(projection), + Function: cls._read_functions_from_projection(projection), + Entity: cls._read_entities_from_projection(projection), + Variable: cls._read_variables_from_projection(projection), + **cls._read_agent_settings_from_projection(projection), + **cls._read_channel_settings_from_projection(projection), + **cls._read_flows_from_projection(projection), + ExperimentalConfig: cls._read_experimental_config_from_projection(projection), + SMSTemplate: cls._read_sms_templates_from_projection(projection), + Handoff: cls._read_handoffs_from_projection(projection), + **cls._read_variants_from_projection(projection), + PhraseFilter: cls._read_phrase_filters_from_projection(projection), + Pronunciation: cls._read_pronunciations_from_projection(projection), + KeyphraseBoosting: cls._read_keyphrase_boosting_from_projection(projection), + TranscriptCorrection: cls._read_transcript_corrections_from_projection(projection), + **cls._read_asr_settings_from_projection(projection), + ApiIntegration: cls._read_api_integrations_from_projection(projection), } # ty:ignore[invalid-return-type] def pull_deployment_resources( @@ -150,27 +157,31 @@ def pull_deployment_resources( logger.info( f"Fetching project data for project {self.project_id} for deployment {deployment_id}" ) + self.assert_branch_exists() projection = self.sdk.fetch_deployment_projection(deployment_id=deployment_id) logger.info( f"Successfully fetched project data for project {self.project_id} for deployment {deployment_id}" ) - return self._load_resources(projection) + return self.load_resources_from_projection(projection) - def 
pull_resources(self) -> dict[type[Resource], dict[str, Resource]]: + def pull_resources(self) -> tuple[dict[type[Resource], dict[str, Resource]], dict[str, Any]]: """Fetch all resources from a specific project. Returns: dict[type[Resource], dict[str, Resource]]: A dictionary mapping resource types to their resources + dict[str, Any]: The projection data """ logger.info( f"Fetching project data for project {self.project_id} on branch {self.sdk.branch_id}" ) + self.assert_branch_exists() projection = self.sdk.fetch_projection(force_refresh=True) + logger.debug(f"Projection: {projection}") logger.info( f"Successfully fetched project data for project {self.project_id} on branch {self.sdk.branch_id}" ) - return self._load_resources(projection) + return self.load_resources_from_projection(projection), projection @staticmethod def _read_topics_from_projection(projection: dict) -> dict[str, Topic]: @@ -869,16 +880,14 @@ def _read_api_integrations_from_projection( Variable, ] - def push_resources( + def queue_resources( self, deleted_resources: dict[type[BaseResource], dict[str, BaseResource]], new_resources: dict[type[BaseResource], dict[str, BaseResource]], updated_resources: dict[type[BaseResource], dict[str, BaseResource]], - dry_run: bool = False, - queue_pushes: bool = False, email: Optional[str] = None, - ) -> bool: - """Upload multiple resources for the specific project. + ) -> list[Command]: + """Queue multiple resources for the specific project. Sends in order: - delete @@ -889,18 +898,16 @@ def push_resources( deleted_resources (dict[type[BaseResource], dict[str, BaseResource]]): Resources to delete new_resources (dict[type[BaseResource], dict[str, BaseResource]]): New resources to upload updated_resources (dict[type[BaseResource], dict[str, BaseResource]]): Updated resources to upload - dry_run (bool): If True, only log the upload actions without actually - uploading + email (str): Email to use for metadata creation. 
Returns: - bool: True if the resources were pushed successfully, False otherwise + list[Command]: A list of queued Command protobuf messages. """ metadata = self.sdk.create_metadata() if email: metadata.created_by = email - if self.sdk.branch_id == "main": - self.create_branch() # creates branch and switches to it + commands = [] delete_resources_priority: list[type[BaseResource]] = [] for resource_type in self.PRIORITY_DELETE_TYPES: @@ -913,7 +920,7 @@ def push_resources( for resource_type in delete_resources_priority: for resource_id, resource in deleted_resources.get(resource_type, {}).items(): delete_type = resource.delete_command_type - self.sdk.add_command_to_queue( + commands.append( Command( type=delete_type, command_id=str(uuid.uuid4()), @@ -934,7 +941,7 @@ def push_resources( resources = new_resources.get(resource_type, {}) for resource_id, resource in resources.items(): create_type = resource.create_command_type - self.sdk.add_command_to_queue( + commands.append( Command( type=create_type, command_id=str(uuid.uuid4()), @@ -955,7 +962,7 @@ def push_resources( resources = updated_resources.get(resource_type, {}) for resource_id, resource in resources.items(): update_type = resource.update_command_type - self.sdk.add_command_to_queue( + commands.append( Command( type=update_type, command_id=str(uuid.uuid4()), @@ -968,7 +975,7 @@ def push_resources( for resource_dict in [new_resources, updated_resources]: for resource_id, resource in resource_dict.get(Handoff, {}).items(): if isinstance(resource, Handoff) and resource.is_default: - self.sdk.add_command_to_queue( + commands.append( Command( type="handoff_set_default", command_id=str(uuid.uuid4()), @@ -977,21 +984,49 @@ def push_resources( ) ) - if not (dry_run or queue_pushes): - logger.info(f"Sending commands command_queue={self.sdk._command_queue!r}") - try: - self.sdk.send_command_batch() - except SourcererAPIError as e: - logger.error(f"Failed to push resources: {e}") - # If the batch fails, we assume all 
commands failed - return False - elif queue_pushes: + for command in commands: + self.sdk.add_command_to_queue(command) + + logger.info(f"Queued {len(commands)} commands") + logger.debug(f"Commands: {commands!r}") + return commands + + def send_queued_commands(self) -> bool: + """Send all queued commands as a batch and clear the queue. + + Returns: + bool: True if the commands were sent successfully, False otherwise + """ + if self.sdk.get_queue_size() == 0: + logger.info("No commands to send") return True - elif dry_run: - logger.info(f"Created commands command_queue={self.sdk._command_queue!r}") - self.sdk.clear_queue() - return True + self.assert_branch_exists() + + # Creates branch and switches to it + if self.sdk.branch_id == "main": + self.create_branch() + + try: + logger.info(f"Sending {len(self.sdk._command_queue)} commands to {self.sdk.branch_id}") + self.sdk.send_command_batch() + return True + except SourcererAPIError as e: + logger.error(f"Failed to send commands: {e}") + return False + + def clear_command_queue(self) -> None: + """Clear all queued commands without sending.""" + logger.info(f"Clearing {len(self.sdk._command_queue)} commands") + self.sdk.clear_queue() + + def get_queued_commands(self) -> list[Command]: + """Get all queued commands. + + Returns: + list[Command]: A list of queued Command protobuf messages. + """ + return deepcopy(self.sdk._command_queue) def switch_branch(self, branch_id: str) -> bool: """Switch to a different branch within the same project. 
@@ -1002,14 +1037,16 @@ def switch_branch(self, branch_id: str) -> bool: Returns: bool: True if the switch was successful, False otherwise """ + self.assert_branch_exists() + if self.sdk.branch_id == branch_id: - logger.info(f"Already on branch {branch_id}") + logger.info(f"Already on branch ID:'{branch_id}'") return True if branch_id == "main": self.sdk.branch_id = "main" self.sdk.get_project_data() - logger.info(f"Switched to branch {branch_id}") + logger.info(f"Switched to branch ID:'{branch_id}'") return True if found_branches := self.sdk.fetch_branches().get("branches"): @@ -1019,11 +1056,12 @@ def switch_branch(self, branch_id: str) -> bool: # Re-fetch project data to ensure the SDK is up-to-date self.sdk.clear_cache() self.sdk.get_project_data() - logger.info(f"Switched to branch {branch_id}") + logger.info(f"Switched to branch ID:'{branch_id}'") return True else: - logger.error(f"Branch {branch_id} does not exist.") + logger.error(f"Branch ID:'{branch_id}' does not exist.") return False + return False def create_branch(self, branch_name: Optional[str] = None) -> str: """Create a new branch for the project @@ -1039,9 +1077,10 @@ def create_branch(self, branch_name: Optional[str] = None) -> str: if branch_name is None: metadata = self.sdk.create_metadata() - email = metadata.created_by.split("@")[0] - suffix = f"{metadata.created_at.seconds % 10000:04d}" # to avoid duplicate names - branch_name = f"ADK-{email}-{suffix}" + time_suffix = f"{metadata.created_at.seconds % 100000:05d}" + random_suffix = uuid.uuid4().hex[:4] + suffix = f"{time_suffix}-{random_suffix}" # to avoid duplicate names + branch_name = f"ADK-{suffix}" logger.info(f"Creating new branch '{branch_name}' from 'main' branch") @@ -1049,7 +1088,9 @@ def create_branch(self, branch_name: Optional[str] = None) -> str: expected_main_last_known_sequence=self.sdk._last_known_sequence, branch_name=branch_name, ) - logger.warning(f"Created and switched to new branch '{self.sdk.branch_id}'") + logger.info( 
+ f"Created and switched to new branch. Name:'{branch_name}' ID:'{self.sdk.branch_id}'" + ) return self.sdk.branch_id def get_branches(self) -> dict[str, str]: @@ -1078,20 +1119,20 @@ def delete_branch(self, branch_id): logger.error("Cannot delete 'main' branch.") return False - logger.info(f"Deleting branch '{branch_id}'") + logger.info(f"Deleting branch ID:'{branch_id}'") try: self.sdk.delete_branch(branch_id=branch_id) except SourcererAPIError as e: - logger.error(f"Failed to delete branch '{branch_id}': {e}") + logger.error(f"Failed to delete branch ID:'{branch_id}': {e}") return False - logger.info(f"Successfully deleted branch '{branch_id}'") + logger.info(f"Successfully deleted branch ID:'{branch_id}'") return True def merge_branch( self, message: str, conflict_resolutions: Optional[list[dict[str, Any]]] = None - ) -> tuple[list[dict[str, str]], list[dict[str, str]]]: + ) -> tuple[bool, list[dict[str, str]], list[dict[str, str]]]: """Merge the current branch into main. Args: @@ -1102,12 +1143,15 @@ def merge_branch( - value: Optional custom value (only used with custom strategy) Returns: + success (bool): True if the merge was successful, False otherwise list[dict[str, str]]: A list of conflict information if the merge failed, empty list if successful list[dict[str, str]]: A list of error information if the merge failed, empty list if successful """ + self.assert_branch_exists() + if self.sdk.branch_id == "main": logger.error("Cannot merge 'main' branch into itself.") - return [], [] + return False, [], [] logger.info(f"Merging branch '{self.sdk.branch_id}' into 'main'") @@ -1119,9 +1163,7 @@ def merge_branch( ) except SourcererAPIError as e: logger.error(f"Failed to merge branch '{self.sdk.branch_id}' into 'main': {e}") - return [], [] - - conflicts, errors = [], [] + return False, [], [] if result.get("hasConflicts", False) or result.get("errors", []): logger.error( @@ -1129,10 +1171,12 @@ def merge_branch( ) conflicts = result.get("conflicts", []) errors 
= result.get("errors", []) + return False, conflicts, errors logger.info(f"Successfully merged branch '{self.sdk.branch_id}' into 'main'") - return conflicts, errors + return True, [], [] def get_branch_chat_info(self, branch_id: str) -> dict[str, Any]: """Get deployment info needed to start a draft chat on a branch.""" + self.assert_branch_exists() return self.sdk.get_branch_chat_info(branch_id) diff --git a/src/poly/output/__init__.py b/src/poly/output/__init__.py new file mode 100644 index 0000000..742efb2 --- /dev/null +++ b/src/poly/output/__init__.py @@ -0,0 +1 @@ +# Copyright PolyAI Limited diff --git a/src/poly/console.py b/src/poly/output/console.py similarity index 100% rename from src/poly/console.py rename to src/poly/output/console.py diff --git a/src/poly/output/json_output.py b/src/poly/output/json_output.py new file mode 100644 index 0000000..a4fcb65 --- /dev/null +++ b/src/poly/output/json_output.py @@ -0,0 +1,31 @@ +"""JSON output helpers for machine-readable CLI output. + +Copyright PolyAI Limited +""" + +import json +import sys + +from google.protobuf.json_format import MessageToDict + + +def json_print(data: dict) -> None: + """Print data as formatted JSON to stdout. + + Args: + data: Dictionary to serialize and print. + """ + json.dump(data, sys.stdout, indent=2, default=str) + sys.stdout.write("\n") + + +def commands_to_dicts(commands: list) -> list[dict]: + """Convert a list of Command protobufs to JSON-serializable dicts. + + Args: + commands: List of Command protobuf messages. + + Returns: + list[dict]: Each Command serialized via MessageToDict. 
+ """ + return [MessageToDict(cmd, preserving_proto_field_name=True) for cmd in commands] diff --git a/src/poly/project.py b/src/poly/project.py index 88e3033..78d5fde 100644 --- a/src/poly/project.py +++ b/src/poly/project.py @@ -10,8 +10,11 @@ import uuid from dataclasses import dataclass, fields from datetime import datetime +from collections.abc import Callable from typing import Any, Optional, TypeAlias +from google.protobuf.message import Message + import poly.resources.resource_utils as resource_utils import poly.utils as utils from poly.handlers.interface import ( @@ -318,7 +321,9 @@ def init_project( account_id: str, project_id: str, format: bool = False, - ) -> "AgentStudioProject": + projection_json: Optional[dict[str, Any]] = None, + on_save: Callable[[int, int], None] | None = None, + ) -> tuple["AgentStudioProject", dict[str, Any]]: """Get project from the Agent Studio Interactor Args: @@ -327,9 +332,14 @@ def init_project( account_id (str): The account ID of the project project_id (str): The project ID format (bool): If True, format resources after pulling + projection_json (dict[str, Any]): A dictionary containing the projection + If provided, the projection will be used instead of fetching it from the API. + on_save: Optional callback invoked with (current, total) + during the resource save loop. 
Returns: AgentStudioProject: An instance of AgentStudioProject with functions loaded + dict[str, Any]: The projection data """ base_path = os.path.join(base_path, account_id, project_id) @@ -343,28 +353,41 @@ def init_project( last_updated=datetime.now(), branch_id="main", ) - project.resources = project.api_handler.pull_resources() + project.resources, projection = project.api_handler.pull_resources( + projection_json=projection_json + ) project._check_no_duplicate_resource_paths(project.resources) resource_mappings: list[ResourceMapping] = project._make_resource_mappings( project.resources ) - # Save functions and topics - for resource in project.all_resources: + all_resources = project.all_resources + total = len(all_resources) + + MultiResourceYamlResource._file_cache.clear() + + for i, resource in enumerate(all_resources, 1): + if on_save: + on_save(i, total) + is_multi = isinstance(resource, MultiResourceYamlResource) resource.save( base_path, resource_mappings=resource_mappings, resource_name=resource.name, format=format, + save_to_cache=is_multi, ) + MultiResourceYamlResource.write_cache_to_file() + MultiResourceYamlResource._file_cache.clear() + project.save_config(write_project_yaml=True) utils.export_decorators(DECORATORS, base_path) utils.save_imports(base_path) - return project + return project, projection def save_config(self, write_project_yaml: bool = False) -> None: """Save the project configuration to a file @@ -391,7 +414,11 @@ def save_config(self, write_project_yaml: bool = False) -> None: yaml_content = resource_utils.dump_yaml(config_dict) f.write(yaml_content) - def load_project(self, preserve_not_loaded_resources: bool = False) -> None: + def load_project( + self, + preserve_not_loaded_resources: bool = False, + projection_json: Optional[dict[str, Any]] = None, + ) -> None: """Load the current state of project on Agent Studio into memory This is used when no current resources are loaded. 
@@ -400,8 +427,10 @@ def load_project(self, preserve_not_loaded_resources: bool = False) -> None: preserve_not_loaded_resources: If True, retain the current _not_loaded_resources value across the load (used when reloading for comparison without affecting local state). + projection_json: If set, build resources from this projection dict + instead of fetching from the API (same shape as a sourcerer projection). """ - resources = self.api_handler.pull_resources() + resources, _ = self.api_handler.pull_resources(projection_json=projection_json) self._check_no_duplicate_resource_paths(resources) self.resources = resources @@ -410,7 +439,13 @@ def load_project(self, preserve_not_loaded_resources: bool = False) -> None: self._not_loaded_resources = [] self.save_config() - def pull_project(self, force: bool = False, format: bool = False) -> list[str]: + def pull_project( + self, + force: bool = False, + format: bool = False, + projection_json: Optional[dict[str, Any]] = None, + on_save: Callable[[int, int], None] | None = None, + ) -> tuple[list[str], dict[str, Any]]: """Pull the project configuration from the Agent Studio Interactor. If there are local changes, it will merge them with the incoming changes. @@ -427,14 +462,19 @@ def pull_project(self, force: bool = False, format: bool = False) -> list[str]: Returns: list[str]: A list of file names with merge conflicts. 
+ dict[str, Any]: The projection data """ # ------- # Pull resources # ------- - incoming_resources = self.api_handler.pull_resources() - self.branch_id = self.api_handler.branch_id + incoming_resources, projection = self.api_handler.pull_resources( + projection_json=projection_json + ) + # Only update branch id if we used the API to pull the resources + if projection_json is None: + self.branch_id = self.api_handler.branch_id self._check_no_duplicate_resource_paths(incoming_resources) # ------- @@ -446,6 +486,7 @@ def pull_project(self, force: bool = False, format: bool = False) -> list[str]: incoming_resources=incoming_resources, force=force, format=format, + on_save=on_save, ) # ------- @@ -481,7 +522,7 @@ def pull_project(self, force: bool = False, format: bool = False) -> list[str]: utils.save_imports(self.root_path) self.save_config() - return files_with_conflicts + return files_with_conflicts, projection @staticmethod def _delete_empty_folders(folder_path: str) -> None: @@ -547,7 +588,10 @@ def _update_multi_resource_yaml_resources( incoming_resource_mappings: list[ResourceMapping], force: bool, format: bool = False, - ) -> list[str]: + on_save: Callable[[int, int], None] | None = None, + progress_offset: int = 0, + progress_total: int = 0, + ) -> tuple[list[str], int]: """Merge MultiResourceYaml resources when pulling As files are merged on a per file basis, we must first compute the whole file: @@ -637,6 +681,10 @@ def _update_multi_resource_yaml_resources( ): resource_type.delete_resource(file_path, save_to_cache=True) + progress_offset += len(resources) + if on_save: + on_save(progress_offset, progress_total) + incoming_file_contents = { file: resource_utils.dump_yaml(top_level_yaml_dict) for file, (_, top_level_yaml_dict) in MultiResourceYamlResource._file_cache.items() @@ -679,7 +727,7 @@ def _update_multi_resource_yaml_resources( MultiResourceYamlResource.save_to_file(merged_contents, file) MultiResourceYamlResource._file_cache.clear() - return 
files_with_conflicts + return files_with_conflicts, progress_offset def _update_pulled_resources( self, @@ -687,6 +735,7 @@ def _update_pulled_resources( incoming_resources: ResourceMap, force: bool, format: bool = False, + on_save: Callable[[int, int], None] | None = None, ) -> list[str]: files_with_conflicts = [] @@ -701,26 +750,34 @@ def _update_pulled_resources( ) # Merging is done on a per file basis. - # For most resources, a resource is a single file - # For MultiResourceYamlResources, a resource is part of a file, - # So we must first compute the whole file, so do merge process separately for each file. - files_with_conflicts.extend( - self._update_multi_resource_yaml_resources( - original_resources=self.resources, - incoming_resources=incoming_resources, - original_resource_mappings=original_resource_mappings, - incoming_resource_mappings=incoming_resource_mappings, - force=force, - format=format, - ) + # For most resources - a resource is a single file + # For MultiResourceYamlResources - a resource is a part of a file, + # So first compute the whole file, then do merge process separately for each file. 
+ total = sum(len(res) for res in incoming_resources.values()) + + multi_conflicts, current = self._update_multi_resource_yaml_resources( + original_resources=self.resources, + incoming_resources=incoming_resources, + original_resource_mappings=original_resource_mappings, + incoming_resource_mappings=incoming_resource_mappings, + force=force, + format=format, + on_save=on_save, + progress_offset=0, + progress_total=total, ) + files_with_conflicts.extend(multi_conflicts) + # For other resources, we follow the usual process for resource_type, incoming in incoming_resources.items(): if issubclass(resource_type, MultiResourceYamlResource): continue for resource_id, incoming_resource in incoming.items(): + current += 1 + if on_save: + on_save(current, total) # If force is True, overwrite local changes # If the resource is not loaded, save it directly if force or ( @@ -834,6 +891,72 @@ def _update_pulled_resources( return files_with_conflicts + def _stage_commands( + self, + new_state: ResourceMap, + new_resources: ResourceMap, + updated_resources: ResourceMap, + deleted_resources: ResourceMap, + email: Optional[str] = None, + ) -> list[Message]: + """Stage commands for the project.""" + + # Group flow resources together + # Creating flow config, group all new steps/functions under it and remove from + # new resources + push_changes = self._clean_resources_before_push( + new_state, + new_resources, + updated_resources, + deleted_resources, + ) + new_resources = push_changes.main.new + updated_resources = push_changes.main.updated + deleted_resources = push_changes.main.deleted + pre_changes = push_changes.pre + post_changes = push_changes.post + + # Assign positions to new flows + new_resources, updated_resources = self._assign_flow_positions( + new_resources, + updated_resources, + new_state, + ) + + # Queue new/updated/deleted resources + commands = [] + if pre_changes.new or pre_changes.deleted or pre_changes.updated: + commands.extend( + 
self.api_handler.queue_resources( + new_resources=pre_changes.new, + deleted_resources=pre_changes.deleted, + updated_resources=pre_changes.updated, + email=email, + ) + ) + + if new_resources or deleted_resources or updated_resources: + commands.extend( + self.api_handler.queue_resources( + new_resources=new_resources, + deleted_resources=deleted_resources, + updated_resources=updated_resources, + email=email, + ) + ) + + if post_changes.new or post_changes.deleted or post_changes.updated: + commands.extend( + self.api_handler.queue_resources( + new_resources=post_changes.new, + deleted_resources=post_changes.deleted, + updated_resources=post_changes.updated, + email=email, + ) + ) + + return commands + def push_project( self, force=False, @@ -841,7 +964,8 @@ def push_project( dry_run=False, format=False, email=None, - ) -> tuple[bool, str]: + projection_json: Optional[dict[str, Any]] = None, + ) -> tuple[bool, str, list[Message]]: """Push the project configuration to the Agent Studio Interactor. Args: @@ -849,31 +973,40 @@ def push_project( skip_validation (bool): If True, skip local validation. dry_run (bool): If True, do not actually push changes. format (bool): If True, format the resource before saving. + projection_json (dict[str, Any]): A dictionary containing the projection + If provided, the projection will be used instead of fetching it from the API. email (str): Email to use for metadata creation. If None, use the email of the current user. Returns: - Tuple[bool, str]: A tuple containing a boolean indicating success, - and a string message. + Tuple[bool, str, list[Message]]: + - Boolean indicating success. + - String message. + - List of commands serialized to protobuf. 
""" if not dry_run: # If force, load latest version of the project # to compare against if force: - self.load_project(preserve_not_loaded_resources=True) + self.load_project( + preserve_not_loaded_resources=True, projection_json=projection_json + ) # If not force, pull and merge latest version of the project else: - files_with_conflicts = self.pull_project(format=format) + files_with_conflicts, _ = self.pull_project( + format=format, projection_json=projection_json + ) if files_with_conflicts: conflicts = "\n- ".join(files_with_conflicts) return ( False, f"Merge conflicts detected in the following files:\n- {conflicts}\nPlease resolve the conflicts and try again.", + [], ) - # Push Algorithm + # Push Algorithm # 1. Get new/kept/deleted resources new_resource_mappings, kept_resource_mappings, deleted_resource_mappings = ( self.find_new_kept_deleted(self.discover_local_resources()) @@ -941,7 +1074,7 @@ def push_project( deleted_resources.update(subresource_changes.deleted) if not (updated_resources or new_resources or deleted_resources): - return False, "No changes detected" + return False, "No changes detected", [] # 4. Validate all resources with new state if not skip_validation: @@ -950,111 +1083,17 @@ def push_project( ) if validation_errors: error_messages = "\n".join(validation_errors) - return False, f"Validation errors detected:\n{error_messages}" + return False, f"Validation errors detected:\n{error_messages}", [] - # 5. 
Group flow resources together - # Creating flow config, group all new steps/functions under it and remove from - # new resources - push_changes = self._clean_resources_before_push( - new_state, - new_resources, - updated_resources, - deleted_resources, + commands = self._stage_commands( + new_state, new_resources, updated_resources, deleted_resources, email=email ) - new_resources = push_changes.main.new - updated_resources = push_changes.main.updated - deleted_resources = push_changes.main.deleted - pre_changes = push_changes.pre - post_changes = push_changes.post - - # Assign positions to new flows - new_resources, updated_resources = self._assign_flow_positions( - new_resources, - updated_resources, - new_state, - ) - - pre_and_post_push = any( - [ - pre_changes.new, - pre_changes.updated, - pre_changes.deleted, - post_changes.new, - post_changes.updated, - post_changes.deleted, - ] - ) - - # Assign positions to new flows - for flow_config in new_resources.get(FlowConfig, {}).values(): - if not isinstance(flow_config, FlowConfig): - raise TypeError(f"Flow config is not a FlowConfig: {flow_config}") - resource_utils.assign_flow_positions(flow_config.steps, flow_config.start_step) - - # Assign positions to flows with new/updated steps - updated_flow_ids = set() - for flow_step in ( - list(new_resources.get(FlowStep, {}).values()) - + list(updated_resources.get(FlowStep, {}).values()) - + list(new_resources.get(FunctionStep, {}).values()) - + list(updated_resources.get(FunctionStep, {}).values()) - ): - if not isinstance(flow_step, BaseFlowStep): - raise TypeError(f"Flow step is not a FlowStep: {flow_step}") - updated_flow_ids.add(flow_step.flow_id) - - for updated_flow_id in updated_flow_ids: - flow_config = new_state.get(FlowConfig, {}).get(updated_flow_id) - if not flow_config: - raise ValueError(f"Flow config not found for flow id: {updated_flow_id}") - if not isinstance(flow_config, FlowConfig): - raise TypeError(f"Flow config is not a FlowConfig: 
{flow_config}") - flow_steps = [ - step - for step in ( - list(new_state.get(FlowStep, {}).values()) - + list(new_state.get(FunctionStep, {}).values()) - ) - if isinstance(step, BaseFlowStep) and step.flow_id == updated_flow_id - ] - - resource_utils.assign_flow_positions(flow_steps, flow_config.start_step) - - # 6. Push new/updated/deleted resources - if self.branch_id: - logger.info(f"Pushing changes to branch {self.branch_id}") - - if pre_and_post_push: - self.api_handler.push_resources( - new_resources=pre_changes.new, - deleted_resources=pre_changes.deleted, - updated_resources=pre_changes.updated, - dry_run=dry_run, - email=email, - queue_pushes=True, - ) - - # Push changed resources (queue only when pre_push ran, so we send pre+main together) - success = self.api_handler.push_resources( - new_resources=new_resources, - deleted_resources=deleted_resources, - updated_resources=updated_resources, - dry_run=dry_run, - email=email, - queue_pushes=pre_and_post_push, - ) - - if pre_and_post_push: - success = self.api_handler.push_resources( - new_resources=post_changes.new, - deleted_resources=post_changes.deleted, - updated_resources=post_changes.updated, - dry_run=dry_run, - email=email, - queue_pushes=False, - ) - - self.branch_id = self.api_handler.branch_id + if not dry_run: + success = self.api_handler.send_queued_commands() + self.branch_id = self.api_handler.branch_id + else: + self.api_handler.clear_command_queue() + success = True if not success: failed_resources = [] @@ -1066,17 +1105,17 @@ def push_project( for resources in resource_dict.values(): failed_resources.extend([res.name for res in resources.values()]) errors_names = "\n-".join(failed_resources) - return False, f"Failed to push resources: \n-{errors_names}" + return False, f"Failed to push resources: \n-{errors_names}", commands if dry_run: - return True, "Dry run completed. No changes were pushed." + return True, "Dry run completed. 
No changes were pushed.", commands else: # Update local state self.resources = new_state self.file_structure_info = self.compute_file_structure_info(self.resources) self.save_config() - return True, "Resources pushed successfully." + return True, "Resources pushed successfully.", commands @staticmethod def _assign_flow_positions( @@ -1092,10 +1131,13 @@ def _assign_flow_positions( # Assign positions to flows with new/updated steps updated_flow_ids = set() - for flow_step in list(new_resources.get(FlowStep, {}).values()) + list( - updated_resources.get(FlowStep, {}).values() + for flow_step in ( + list(new_resources.get(FlowStep, {}).values()) + + list(updated_resources.get(FlowStep, {}).values()) + + list(new_resources.get(FunctionStep, {}).values()) + + list(updated_resources.get(FunctionStep, {}).values()) ): - if not isinstance(flow_step, FlowStep): + if not isinstance(flow_step, BaseFlowStep): raise TypeError(f"Flow step is not a FlowStep: {flow_step}") updated_flow_ids.add(flow_step.flow_id) @@ -1107,8 +1149,11 @@ def _assign_flow_positions( raise TypeError(f"Flow config is not a FlowConfig: {flow_config}") flow_steps = [ step - for step in new_state.get(FlowStep, {}).values() - if isinstance(step, FlowStep) and step.flow_id == updated_flow_id + for step in ( + list(new_state.get(FlowStep, {}).values()) + + list(new_state.get(FunctionStep, {}).values()) + ) + if isinstance(step, BaseFlowStep) and step.flow_id == updated_flow_id ] resource_utils.assign_flow_positions(flow_steps, flow_config.start_step) @@ -1609,7 +1654,7 @@ def get_remote_resources_by_name(self, name: str) -> ResourceMap: ) deployment_id = (deployments.get(name) or {}).get("deployment_id") if not deployment_id: - logger.warning(f"No active deployment found for environment '{name}'.") + logger.error(f"No active deployment found for environment '{name}'.") return {} logger.info(f"Pulling resources from deployment '{deployment_id}' ({name})...") return 
self.api_handler.pull_deployment_resources(deployment_id) @@ -1622,7 +1667,8 @@ def get_remote_resources_by_name(self, name: str) -> ResourceMap: self.region, self.account_id, self.project_id, branch_id ) logger.info(f"Pulling resources from branch '{name}'...") - return branch_api_handler.pull_resources() + resources, _ = branch_api_handler.pull_resources() + return resources # 3) Deployment version hash prefix -> deployment resources version_hash = (name or "")[:9].lower() @@ -1639,7 +1685,7 @@ def get_remote_resources_by_name(self, name: str) -> ResourceMap: ) return self.api_handler.pull_deployment_resources(deployment_id) - logger.warning(f"Name '{name}' not found in environments, branches, or deployments.") + logger.error(f"Name '{name}' not found in environments, branches, or deployments.") return {} def diff_remote_named_versions( @@ -1650,7 +1696,7 @@ def diff_remote_named_versions( after_resources = self.get_remote_resources_by_name(after_name) if not before_resources or not after_resources: - logger.warning( + logger.error( "Could not retrieve resources for one or both specified names: " f"before={before_name}, after={after_name}" ) @@ -1999,16 +2045,28 @@ def create_branch(self, branch_name: str = None) -> str: self.save_config() return branch_id - def switch_branch(self, branch_name: str, force: bool = False, format: bool = False) -> bool: + def switch_branch( + self, + branch_name: str, + force: bool = False, + format: bool = False, + projection_json: Optional[dict[str, Any]] = None, + on_save: Callable[[int, int], None] | None = None, + ) -> tuple[bool, dict[str, Any]]: """Switch to a different branch in the project. Args: branch_name (str): The name of the branch force (bool): If True, discard uncommitted changes when switching branches. format (bool): If True, format resources after switching branches. 
+ projection_json (dict[str, Any]): A dictionary containing the projection + If provided, the projection will be used instead of fetching it from the API. + on_save: Optional callback invoked with (current, total) + during the resource save loop. Returns: bool: True if the switch was successful, False otherwise + dict[str, Any]: The projection data """ if self.get_diffs(all_files=True) and not force: raise ValueError( @@ -2019,10 +2077,13 @@ def switch_branch(self, branch_name: str, force: bool = False, format: bool = Fa if branch_name not in branches: raise ValueError(f"Branch {branch_name} does not exist.") success = self.api_handler.switch_branch(branches[branch_name]) + projection = {} if success: self.branch_id = branches[branch_name] - self.pull_project(force=force, format=format) - return success + _, projection = self.pull_project( + force=force, format=format, projection_json=projection_json, on_save=on_save + ) + return success, projection def get_current_branch(self) -> Optional[str]: """Get the current branch name. 
@@ -2370,10 +2431,10 @@ def merge_branch( f"Cannot merge branch with uncommitted changes, diffs: {list(diffs.keys())}" ) - conflicts, errors = self.api_handler.merge_branch( + success, conflicts, errors = self.api_handler.merge_branch( message=message, conflict_resolutions=conflict_resolutions ) - if not (conflicts or errors): + if success: self.switch_branch("main", force=True) return True, [], [] @@ -2512,62 +2573,10 @@ def sync_ids_with_sandbox(self, email: str = None) -> bool: if not (updated_resources or new_resources or deleted_resources): return True - push_changes = self._clean_resources_before_push( - new_state, - new_resources, - updated_resources, - deleted_resources, + self._stage_commands( + new_state, new_resources, updated_resources, deleted_resources, email=email ) - new_resources = push_changes.main.new - updated_resources = push_changes.main.updated - deleted_resources = push_changes.main.deleted - pre_changes = push_changes.pre - post_changes = push_changes.post - - new_resources, updated_resources = self._assign_flow_positions( - new_resources, - updated_resources, - new_state, - ) - - # 6. 
Push new/updated/deleted resources - if self.branch_id: - logger.info(f"Pushing changes to branch {self.branch_id}") - - pre_and_post_push = ( - pre_changes.new - or pre_changes.updated - or pre_changes.deleted - or post_changes.new - or post_changes.updated - or post_changes.deleted - ) - - if pre_and_post_push: - success = self.api_handler.push_resources( - new_resources=pre_changes.new, - deleted_resources=pre_changes.deleted, - updated_resources=pre_changes.updated, - queue_pushes=True, - email=email, - ) - - # Push changed resources - success = self.api_handler.push_resources( - new_resources=new_resources, - deleted_resources=deleted_resources, - updated_resources=updated_resources, - queue_pushes=pre_and_post_push, - email=email, - ) - - if pre_and_post_push: - success = self.api_handler.push_resources( - new_resources=post_changes.new, - deleted_resources=post_changes.deleted, - updated_resources=post_changes.updated, - email=email, - ) + success = self.api_handler.send_queued_commands() self.branch_id = self.api_handler.branch_id diff --git a/src/poly/resources/__init__.py b/src/poly/resources/__init__.py index 1ab8b39..7722d6f 100644 --- a/src/poly/resources/__init__.py +++ b/src/poly/resources/__init__.py @@ -4,12 +4,12 @@ SettingsRole, SettingsRules, ) -from poly.resources.asr_settings import AsrSettings from poly.resources.api_integration import ( ApiIntegration, - ApiIntegrationOperation, ApiIntegrationEnvironments, + ApiIntegrationOperation, ) +from poly.resources.asr_settings import AsrSettings from poly.resources.channel_settings import ( ChatGreeting, ChatStylePrompt, diff --git a/src/poly/resources/api_integration.py b/src/poly/resources/api_integration.py index f98d5c5..b130c2b 100644 --- a/src/poly/resources/api_integration.py +++ b/src/poly/resources/api_integration.py @@ -11,22 +11,21 @@ from typing import ClassVar, Optional import poly.resources.resource_utils as utils -from poly.resources.resource import ( - MultiResourceYamlResource, - 
-    ResourceMapping,
-    SubResource,
-)
-
 from poly.handlers.protobuf import api_integrations_pb2
 from poly.handlers.protobuf.api_integrations_pb2 import (
     ApiIntegration_Create,
+    ApiIntegration_Delete,
     ApiIntegration_Update,
     ApiIntegrationConfig_Update,
-    ApiIntegration_Delete,
-    Environments,
     ApiIntegrationOperation_Create,
-    ApiIntegrationOperation_Update,
     ApiIntegrationOperation_Delete,
+    ApiIntegrationOperation_Update,
+    Environments,
+)
+from poly.resources.resource import (
+    MultiResourceYamlResource,
+    ResourceMapping,
+    SubResource,
 )
 
 logger = logging.getLogger(__name__)
diff --git a/src/poly/resources/function.py b/src/poly/resources/function.py
index eee5e0d..665d0fa 100644
--- a/src/poly/resources/function.py
+++ b/src/poly/resources/function.py
@@ -692,7 +692,9 @@ def _extract_variable_references(code: str, resource_mappings: list[ResourceMapp
         }
         for name in variable_names:
             if name not in known_variables:
-                logger.warning(f"Variable {name} not found in resource mappings")
+                logger.warning(
+                    f"Variable {name} not found in resource mappings, will be added in the next push"
+                )
                 continue
             variable_references[known_variables[name]] = True
         return variable_references
diff --git a/src/poly/tests/project_test.py b/src/poly/tests/project_test.py
index 7adb4ac..f6a27e5 100644
--- a/src/poly/tests/project_test.py
+++ b/src/poly/tests/project_test.py
@@ -9,7 +9,6 @@
 import os
 import unittest
 from copy import deepcopy
-from unittest import mock
 from unittest.mock import MagicMock, patch
 
 import poly.resources.resource_utils as resource_utils
@@ -40,7 +39,6 @@
     VoiceGreeting,
     VoiceStylePrompt,
 )
-from poly.resources.resource import MultiResourceYamlResource
 from poly.resources.flows import (
     ASRBiasing,
     Condition,
@@ -48,6 +46,7 @@
     StepType,
 )
 from poly.resources.function import FunctionType
+from poly.resources.resource import MultiResourceYamlResource
 from poly.tests.testing_utils import mock_read_from_file
 
 DIR = os.path.dirname(os.path.abspath(__file__))
@@ -71,6 +70,61 @@ def test_init(self):
         self.assertEqual(project.project_id, "test_project")
 
 
+class InitProjectOnSaveTest(unittest.TestCase):
+    """Tests for the on_save callback in init_project"""
+
+    def setUp(self):
+        self.mock_api_handler = patch.object(
+            AgentStudioProject, "api_handler", new_callable=MagicMock
+        ).start()
+        self.mock_save_config = patch.object(AgentStudioProject, "save_config").start()
+        self.mock_save_imports = patch("poly.utils.save_imports").start()
+        self.mock_export_decorators = patch("poly.utils.export_decorators").start()
+        self.mock_resource_save = patch.object(Resource, "save").start()
+        self.mock_write_cache = patch.object(
+            MultiResourceYamlResource, "write_cache_to_file"
+        ).start()
+
+    def tearDown(self):
+        patch.stopall()
+
+    def test_on_save_called_with_correct_progress(self):
+        """on_save should be called once per resource with (current, total)"""
+        self.mock_api_handler.pull_resources.return_value = (
+            AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR).resources,
+            {},
+        )
+        on_save = MagicMock()
+
+        project, _ = AgentStudioProject.init_project(
+            base_path=os.path.join(TEST_DIR, "tmp"),
+            region="us-1",
+            account_id="test_account",
+            project_id="test_project",
+            on_save=on_save,
+        )
+
+        total = len(project.all_resources)
+        self.assertEqual(on_save.call_count, total)
+        on_save.assert_any_call(1, total)
+        on_save.assert_any_call(total, total)
+
+    def test_no_on_save_does_not_error(self):
+        """init_project without on_save should work without errors"""
+        self.mock_api_handler.pull_resources.return_value = (
+            AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR).resources,
+            {},
+        )
+
+        project, _ = AgentStudioProject.init_project(
+            base_path=os.path.join(TEST_DIR, "tmp"),
+            region="us-1",
+            account_id="test_account",
+            project_id="test_project",
+        )
+        self.assertIsNotNone(project)
+
+
 class SortPathsForReverseDeletionTest(unittest.TestCase):
     """Tests for _sort_paths_for_reverse_deletion (Pronunciation vs lexicographic order)."""
 
@@ -1496,8 +1550,10 @@ def setUp(self):
             AgentStudioProject, "api_handler", new_callable=MagicMock
         ).start()
         self.mock_save_config = patch.object(AgentStudioProject, "save_config").start()
-        self.mock_pull.return_value = []
-        self.mock_api_handler.push_resources = MagicMock(return_value=True)
+        self.mock_pull.return_value = ([], {})
+        self.mock_api_handler.queue_resources = MagicMock(return_value=[])
+        self.mock_api_handler.send_queued_commands = MagicMock(return_value=True)
+        self.mock_api_handler.clear_command_queue = MagicMock()
         self.mock_load_project = patch.object(AgentStudioProject, "load_project").start()
 
     def tearDown(self):
@@ -1507,17 +1563,17 @@ def tearDown(self):
 
     def test_push_project_no_changes(self):
         project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR)
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertFalse(success)
         self.assertEqual(message, "No changes detected")
-        self.mock_api_handler.push_resources.assert_not_called()
+        self.mock_api_handler.queue_resources.assert_not_called()
 
     def test_push_project_merge_conflict(self):
         project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR)
-        self.mock_pull.return_value = ["functions/test_function.py"]
+        self.mock_pull.return_value = (["functions/test_function.py"], {})
 
-        success, message = project.push_project(force=False)
+        success, message, commands = project.push_project(force=False)
 
         self.assertFalse(success)
         self.assertIn("Merge conflicts detected", message)
@@ -1528,11 +1584,11 @@ def test_push_project_new_resources(self):
         project_data["resources"]["topics"].pop("TOPIC-Topic 1")
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         new_resources = call_args.kwargs["new_resources"]
         self.assertIn(Topic, new_resources)
         # New resources get random IDs, so check by name
@@ -1550,11 +1606,11 @@ def test_push_project_new_resource_flow(self):
                 number_steps += 1
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True, skip_validation=True)
+        success, message, commands = project.push_project(force=True, skip_validation=True)
 
         self.assertTrue(success, f"Push failed: {message}")
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         new_resources = call_args.kwargs["new_resources"]
         self.assertIn(FlowConfig, new_resources)
         # New resources get random IDs, so check by name
@@ -1579,11 +1635,11 @@ def test_push_project_deleted_resource(self):
         }
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         deleted_resources = call_args.kwargs["deleted_resources"]
         self.assertIn(Function, deleted_resources)
         self.assertIn("FUNCTION-extra_function", deleted_resources[Function])
@@ -1625,11 +1681,11 @@ def mock_discover(self):
             return result
 
         with patch.object(AgentStudioProject, "discover_local_resources", mock_discover):
-            success, message = project.push_project(force=True, skip_validation=True)
+            success, message, commands = project.push_project(force=True, skip_validation=True)
 
         self.assertTrue(success, f"Push failed: {message}")
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         deleted_resources = call_args.kwargs["deleted_resources"]
         # Must NOT include VariantAttribute - we never had them locally
         self.assertNotIn(VariantAttribute, deleted_resources)
@@ -1642,11 +1698,11 @@ def test_push_project_modified_resource(self):
         )
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         updated_resources = call_args.kwargs["updated_resources"]
         self.assertIn(Function, updated_resources)
         self.assertIn("FUNCTION-test_function", updated_resources[Function])
@@ -1658,11 +1714,11 @@ def test_push_project_modified_sub_resources_dtmf(self):
         ] = True
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
 
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         updated_resources = call_args.kwargs["updated_resources"]
         self.assertIn(DTMFConfig, updated_resources)
@@ -1673,11 +1729,11 @@ def test_push_project_new_sub_resources_condition(self):
 
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         new_resources = call_args.kwargs["new_resources"]
         self.assertIn(Condition, new_resources)
         # Deleted 2 conditions, so check that 2 new conditions are pushed
@@ -1704,11 +1760,11 @@ def test_push_project_deleted_sub_resource_condition(self):
 
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         deleted_resources = call_args.kwargs["deleted_resources"]
         self.assertIn(Condition, deleted_resources)
@@ -1723,11 +1779,11 @@ def test_push_project_updated_sub_resource_asr_biasing(self):
 
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         updated_resources = call_args.kwargs["updated_resources"]
         self.assertIn(ASRBiasing, updated_resources)
@@ -1753,11 +1809,11 @@ def test_push_project_mixed_changes(self):
         ] = False
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         new_resources = call_args.kwargs["new_resources"]
         updated_resources = call_args.kwargs["updated_resources"]
         deleted_resources = call_args.kwargs["deleted_resources"]
@@ -1771,11 +1827,11 @@ def test_push_project_new_keyphrase_boosting(self):
         project_data["resources"]["keyphrase_boosting"].pop("KEYPHRASE_BOOSTING-polyai")
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         new_resources = call_args.kwargs["new_resources"]
         self.assertIn(KeyphraseBoosting, new_resources)
         kp_names = [r.keyphrase for r in new_resources[KeyphraseBoosting].values()]
@@ -1791,11 +1847,11 @@ def test_push_project_deleted_keyphrase_boosting(self):
         }
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         deleted_resources = call_args.kwargs["deleted_resources"]
         self.assertIn(KeyphraseBoosting, deleted_resources)
         self.assertIn("KEYPHRASE_BOOSTING-extra", deleted_resources[KeyphraseBoosting])
@@ -1805,11 +1861,11 @@ def test_push_project_modified_keyphrase_boosting(self):
         project_data["resources"]["keyphrase_boosting"]["KEYPHRASE_BOOSTING-polyai"]["level"] = "default"
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         updated_resources = call_args.kwargs["updated_resources"]
         self.assertIn(KeyphraseBoosting, updated_resources)
         self.assertIn("KEYPHRASE_BOOSTING-polyai", updated_resources[KeyphraseBoosting])
@@ -1819,11 +1875,11 @@ def test_push_project_new_transcript_correction(self):
         project_data["resources"]["transcript_corrections"].pop("TRANSCRIPT_CORRECTIONS-email_domain")
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         new_resources = call_args.kwargs["new_resources"]
         self.assertIn(TranscriptCorrection, new_resources)
         tc_names = [r.name for r in new_resources[TranscriptCorrection].values()]
@@ -1841,11 +1897,11 @@ def test_push_project_deleted_transcript_correction(self):
         }
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         deleted_resources = call_args.kwargs["deleted_resources"]
         self.assertIn(TranscriptCorrection, deleted_resources)
         self.assertIn("TRANSCRIPT_CORRECTIONS-extra", deleted_resources[TranscriptCorrection])
@@ -1855,11 +1911,11 @@ def test_push_project_modified_asr_settings(self):
         project_data["resources"]["asr_settings"]["asr_settings"]["barge_in"] = True
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True)
+        success, message, commands = project.push_project(force=True)
 
         self.assertTrue(success)
-        self.mock_api_handler.push_resources.assert_called_once()
-        call_args = self.mock_api_handler.push_resources.call_args
+        self.mock_api_handler.queue_resources.assert_called_once()
+        call_args = self.mock_api_handler.queue_resources.call_args
         updated_resources = call_args.kwargs["updated_resources"]
         self.assertIn(AsrSettings, updated_resources)
         self.assertIn("asr_settings", updated_resources[AsrSettings])
@@ -1874,7 +1930,7 @@ def test_push_project_validation_error(self):
         invalid_content = "name: test_flow\ndescription:\nstart_step: start_step\n"
 
         with mock_read_from_file({flow_config_path: invalid_content}):
-            success, message = project.push_project(force=True, skip_validation=False)
+            success, message, commands = project.push_project(force=True, skip_validation=False)
 
         self.assertFalse(success)
         self.assertIn("Validation errors", message)
@@ -1889,7 +1945,7 @@ def test_push_project_validation_error_skip(self):
         invalid_content = "name: test_flow\ndescription:\nstart_step: start_step\n"
 
         with mock_read_from_file({flow_config_path: invalid_content}):
-            success, message = project.push_project(force=True, skip_validation=True)
+            success, message, commands = project.push_project(force=True, skip_validation=True)
 
         self.assertTrue(success)
 
@@ -1898,18 +1954,13 @@ def test_push_project_dry_run(self):
         project_data["resources"]["topics"].pop("TOPIC-Topic 1")
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
 
-        success, message = project.push_project(force=True, dry_run=True)
+        success, message, commands = project.push_project(force=True, dry_run=True)
 
         self.assertTrue(success)
         self.assertIn("Dry run completed", message)
-        self.mock_api_handler.push_resources.assert_called_once_with(
-            new_resources=mock.ANY,
-            deleted_resources=mock.ANY,
-            updated_resources=mock.ANY,
-            dry_run=True,
-            email=None,
-            queue_pushes=mock.ANY,
-        )
+        self.mock_api_handler.queue_resources.assert_called_once()
+        self.mock_api_handler.send_queued_commands.assert_not_called()
+        self.mock_api_handler.clear_command_queue.assert_called_once()
 
 
 class ValidateProjectTest(unittest.TestCase):
@@ -1980,9 +2031,9 @@ def test_pull_project_no_changes(self):
         # Incoming resources are the same as project.resources
         # Use the actual resources from the project to ensure they match
        original_resources = deepcopy(project.resources)
-        self.mock_api_handler.pull_resources.return_value = original_resources
+        self.mock_api_handler.pull_resources.return_value = (original_resources, {})
 
-        files_with_conflicts = project.pull_project(force=False)
+        files_with_conflicts, _ = project.pull_project(force=False)
 
         self.assertEqual(files_with_conflicts, [])
         self.assertEqual(project.resources, original_resources)
@@ -2007,7 +2058,7 @@ def test_pull_project_not_loaded_resources_force_save(self):
         # Simulate pull: incoming has variant_attributes from remote
         full_project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR)
         incoming_resources = full_project.resources
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
         with mock_read_from_file(
             {
@@ -2016,7 +2067,7 @@
                 ): "{}\n"
             }
         ):
-            files_with_conflicts = project.pull_project(force=False)
+            files_with_conflicts, _ = project.pull_project(force=False)
 
         self.assertEqual(files_with_conflicts, [])
         # Variant attributes are now present in project resources with the correct keys
@@ -2041,9 +2092,9 @@ def test_pull_project_addition(self):
             example_queries=["New query"],
         )
         incoming_resources.setdefault(Topic, {})["TOPIC-new_topic"] = new_topic
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
-        files_with_conflicts = project.pull_project(force=False)
+        files_with_conflicts, _ = project.pull_project(force=False)
         self.assertEqual(files_with_conflicts, [])
         # Verify the new resource was saved via save_to_file or save
         self.assertTrue(self.mock_save_to_file.called or self.mock_resource_save.called)
@@ -2057,9 +2108,9 @@ def test_pull_project_deletion(self):
         incoming_resources = deepcopy(project.resources)
         if Topic in incoming_resources and "TOPIC-Topic 1" in incoming_resources[Topic]:
             del incoming_resources[Topic]["TOPIC-Topic 1"]
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
-        files_with_conflicts = project.pull_project(force=False)
+        files_with_conflicts, _ = project.pull_project(force=False)
         self.assertEqual(files_with_conflicts, [])
 
         # Verify the resource file was removed via os.remove
@@ -2076,9 +2127,9 @@ def test_pull_project_modify_1(self):
         modified_func = deepcopy(incoming_resources[Function][func_id])
         modified_func.code = 'def test_function(conv: Conversation):\n """Modified remotely."""\n return "Modified"\n'
         incoming_resources[Function][func_id] = modified_func
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
-        files_with_conflicts = project.pull_project(force=False)
+        files_with_conflicts, _ = project.pull_project(force=False)
         self.assertEqual(files_with_conflicts, [])
         # Verify resource is updated in project resources
         self.assertIn(func_id, project.resources.get(Function, {}))
@@ -2094,7 +2145,7 @@ def test_pull_project_modify_conflict(self):
         incoming_resources[Function][
             "FUNCTION-test_function"
         ].code = 'def test_function(conv: Conversation):\n """Modified remotely."""\n return "Remote change"\n'
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
         with mock_read_from_file(
             {
@@ -2103,7 +2154,7 @@
                 ): 'from _gen import * # \n\n@func_description(\'A test function for global use.\')\ndef test_function(conv: Conversation):\n """Modified locally."""\n return "Local change"\n'
             }
         ):
-            files_with_conflicts = project.pull_project(force=False)
+            files_with_conflicts, _ = project.pull_project(force=False)
         # Should detect merge conflict
         self.assertEqual(
             files_with_conflicts, [os.path.join(TEST_DIR, "functions", "test_function.py")]
@@ -2134,7 +2185,7 @@ def test_pull_project_modify_flow_config_conflict(self):
         modified_flow_config = deepcopy(incoming_resources[FlowConfig][flow_config_id])
         modified_flow_config.description = "Modified remotely - new description"
         incoming_resources[FlowConfig][flow_config_id] = modified_flow_config
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
         # Mock local file with different changes
         flow_config_path = os.path.join(
@@ -2145,7 +2196,7 @@
                 flow_config_path: "name: test_flow\ndescription: Modified locally - different description\nstart_step: start_step\n"
             }
         ):
-            files_with_conflicts = project.pull_project(force=False)
+            files_with_conflicts, _ = project.pull_project(force=False)
         # Should detect merge conflict
         self.assertEqual(files_with_conflicts, [flow_config_path])
         # Resources are now incoming resources
@@ -2178,7 +2229,7 @@ def test_pull_project_modify_no_conflict(self):
         incoming_resources[Function][
             "FUNCTION-test_function"
         ].code = 'def test_function(conv: Conversation):\n """Modified remotely."""\n return "Remote change"\n'
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
         with mock_read_from_file(
             {
@@ -2187,7 +2238,7 @@
                 ): 'from _gen import * # \n\ndef added_extra_function():\n pass\n\n@func_description(\'A test function for global use.\')\ndef test_function(conv: Conversation):\n """A test function for global use."""\n return "Hello from global function"\n'
             }
         ):
-            files_with_conflicts = project.pull_project(force=False)
+            files_with_conflicts, _ = project.pull_project(force=False)
         # Should detect no merge conflict
         self.assertEqual(files_with_conflicts, [])
         # Resources are now incoming resources
@@ -2215,7 +2266,7 @@ def test_pull_project_force(self):
         incoming_resources[Function][
             "FUNCTION-test_function"
         ].code = 'def test_function(conv: Conversation):\n """Modified remotely."""\n return "Remote change"\n'
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
         with mock_read_from_file(
             {
@@ -2224,7 +2275,7 @@
                 ): 'from _gen import * # \n\n@func_description(\'A test function for global use.\')\ndef test_function(conv: Conversation):\n """Modified locally."""\n return "Local change"\n'
             }
         ):
-            files_with_conflicts = project.pull_project(force=True)
+            files_with_conflicts, _ = project.pull_project(force=True)
 
         # Should detect no merge conflict
         self.assertEqual(files_with_conflicts, [])
@@ -2240,8 +2291,8 @@ def test_pull_project_added_locally_and_remote_same(self):
         full_project_resources = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR).resources
         incoming_resources = deepcopy(full_project_resources)
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
-        files_with_conflicts = project.pull_project(force=False, format=True)
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
+        files_with_conflicts, _ = project.pull_project(force=False, format=True)
 
         self.assertEqual(files_with_conflicts, [])
         # Verify resource is updated in project resources
         self.assertIn("FUNCTION-test_function_with_parameters", project.resources.get(Function, {}))
@@ -2265,8 +2316,8 @@ def test_pull_project_added_locally_and_remote_different(self):
         incoming_resources = deepcopy(full_project_resources)
         incoming_resources[Function]["FUNCTION-test_function_with_parameters"].code = 'def test_function_with_parameters(conv: Conversation):\n """Test function with parameters."""\n return "Test function with parameters"\n'
 
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
-        files_with_conflicts = project.pull_project(force=False)
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
+        files_with_conflicts, _ = project.pull_project(force=False)
 
         self.assertEqual(len(files_with_conflicts), 1)
 
     def test_pull_project_deleted_locally(self):
@@ -2283,8 +2334,8 @@ def test_pull_project_deleted_locally(self):
         project = AgentStudioProject.from_dict(project_data, TEST_DIR)
         incoming_resources = deepcopy(project.resources)
 
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
-        files_with_conflicts = project.pull_project(force=False)
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
+        files_with_conflicts, _ = project.pull_project(force=False)
 
         self.assertEqual(files_with_conflicts, [])
         # Verify it wasn't saved to the file system
@@ -2312,9 +2363,9 @@ def test_pull_project_resource_moved(self):
         # Rename the topic (this changes the file path)
         renamed_topic.name = "renamed_topic"
-        self.mock_api_handler.pull_resources.return_value = original_resources
+        self.mock_api_handler.pull_resources.return_value = (original_resources, {})
 
-        files_with_conflicts = project.pull_project(force=False)
+        files_with_conflicts, _ = project.pull_project(force=False)
 
        self.assertEqual(files_with_conflicts, [])
         # Verify old file would be removed
@@ -2328,6 +2379,8 @@ def test_pull_project_resource_moved(self):
     def test_pull_project_empty_flow_folder_deletion(self):
         """Test that empty flow folders are deleted after pull"""
         project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR)
+        original_resources = deepcopy(project.resources)
+        self.mock_api_handler.pull_resources.return_value = (original_resources, {})
 
         # Mock os.listdir and os.rmdir to verify empty folder deletion
         empty_flow_path = os.path.join(TEST_DIR, "flows", "test_flow")
@@ -2350,7 +2403,7 @@ def mock_isdir(path):
             patch("os.path.isdir", side_effect=mock_isdir),
             patch("os.rmdir") as mock_rmdir,
         ):
-            files_with_conflicts = project.pull_project(force=False)
+            files_with_conflicts, _ = project.pull_project(force=False)
 
         # Empty flow folder should be deleted
         # _delete_empty_folders is called after pull_project
@@ -2395,7 +2448,7 @@ def test_pull_project_multi_resource_yaml_remote_change_no_local_change(self):
         project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR)
         incoming_resources = deepcopy(project.resources)
         incoming_resources[KeyphraseBoosting]["KEYPHRASE_BOOSTING-polyai"].level = "boosted"
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
         kp_path = os.path.join(
             TEST_DIR, "voice", "speech_recognition", "keyphrase_boosting.yaml"
@@ -2416,7 +2469,7 @@ def test_pull_project_multi_resource_yaml_remote_change_no_local_change(self):
             "poly.resources.resource.Resource.read_from_file",
             side_effect=self._make_kp_read_mock(original_kp_content, original_kp_content),
         ):
-            files_with_conflicts = project.pull_project(force=False)
+            files_with_conflicts, _ = project.pull_project(force=False)
         MultiResourceYamlResource._file_cache.clear()
 
         self.assertEqual(files_with_conflicts, [])
@@ -2442,7 +2495,7 @@ def test_pull_project_multi_resource_yaml_merge_no_conflict(self):
         incoming_resources = deepcopy(project.resources)
         # Remote: PolyAI level maximum → boosted
         incoming_resources[KeyphraseBoosting]["KEYPHRASE_BOOSTING-polyai"].level = "boosted"
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
         kp_path = os.path.join(
             TEST_DIR, "voice", "speech_recognition", "keyphrase_boosting.yaml"
@@ -2472,7 +2525,7 @@ def test_pull_project_multi_resource_yaml_merge_no_conflict(self):
             "poly.resources.resource.Resource.read_from_file",
             side_effect=self._make_kp_read_mock(original_kp_content, local_kp_content),
        ):
-            files_with_conflicts = project.pull_project(force=False)
+            files_with_conflicts, _ = project.pull_project(force=False)
         MultiResourceYamlResource._file_cache.clear()
 
         self.assertEqual(files_with_conflicts, [])
@@ -2501,7 +2554,7 @@ def test_pull_project_multi_resource_yaml_conflict(self):
         incoming_resources = deepcopy(project.resources)
         # Remote: PolyAI level maximum → boosted
         incoming_resources[KeyphraseBoosting]["KEYPHRASE_BOOSTING-polyai"].level = "boosted"
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
         kp_path = os.path.join(
             TEST_DIR, "voice", "speech_recognition", "keyphrase_boosting.yaml"
@@ -2531,7 +2584,7 @@ def test_pull_project_multi_resource_yaml_conflict(self):
             "poly.resources.resource.Resource.read_from_file",
             side_effect=self._make_kp_read_mock(original_kp_content, local_kp_content),
         ):
-            files_with_conflicts = project.pull_project(force=False)
+            files_with_conflicts, _ = project.pull_project(force=False)
         MultiResourceYamlResource._file_cache.clear()
 
         self.assertIn(kp_path, files_with_conflicts)
@@ -2554,14 +2607,14 @@ def test_pull_project_multi_resource_yaml_force(self):
         incoming_resources = deepcopy(project.resources)
         # Remote: PolyAI level maximum → boosted
         incoming_resources[KeyphraseBoosting]["KEYPHRASE_BOOSTING-polyai"].level = "boosted"
-        self.mock_api_handler.pull_resources.return_value = incoming_resources
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
 
         kp_path = os.path.join(
             TEST_DIR, "voice", "speech_recognition", "keyphrase_boosting.yaml"
         )
 
         MultiResourceYamlResource._file_cache.clear()
-        files_with_conflicts = project.pull_project(force=True)
+        files_with_conflicts, _ = project.pull_project(force=True)
         MultiResourceYamlResource._file_cache.clear()
 
         self.assertEqual(files_with_conflicts, [])
@@ -2578,6 +2631,31 @@ def test_pull_project_multi_resource_yaml_force(self):
             self.assertIn("level: boosted", saved_content)
             self.assertNotIn("<<<<<<<", saved_content)
 
+    def test_pull_project_on_save_callback(self):
+        """on_save should be called during pull with correct final progress"""
+        project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR)
+        incoming_resources = deepcopy(project.resources)
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
+
+        on_save = MagicMock()
+        files_with_conflicts, _ = project.pull_project(on_save=on_save)
+
+        self.assertEqual(files_with_conflicts, [])
+        self.assertGreater(on_save.call_count, 0)
+        last_call = on_save.call_args_list[-1]
+        current, total = last_call[0]
+        self.assertEqual(current, total)
+
+    def test_pull_project_no_on_save_does_not_error(self):
+        """pull_project without on_save should work without errors"""
+        project = AgentStudioProject.from_dict(PROJECT_DATA, TEST_DIR)
+        incoming_resources = deepcopy(project.resources)
+        self.mock_api_handler.pull_resources.return_value = (incoming_resources, {})
+
+        files_with_conflicts, _ = project.pull_project()
+        self.assertEqual(files_with_conflicts, [])
+
+
 class DocsTest(unittest.TestCase):
     """Tests for the docs module"""
 
diff --git a/src/poly/tests/resources_test.py b/src/poly/tests/resources_test.py
index 44ecba9..b132856 100644
--- a/src/poly/tests/resources_test.py
+++ b/src/poly/tests/resources_test.py
@@ -14,6 +14,15 @@
     SettingsRole,
     SettingsRules,
 )
+from poly.resources.api_integration import (
+    AVAILABLE_AUTH_TYPES,
+    AVAILABLE_OPERATIONS,
+    URL_PATTERN,
+    ApiIntegration,
+    ApiIntegrationConfig,
+    ApiIntegrationEnvironments,
+    ApiIntegrationOperation,
+)
 from poly.resources.asr_settings import AsrSettings
 from poly.resources.channel_settings import (
     ChatGreeting,
@@ -41,15 +50,6 @@
     FunctionParameters,
     FunctionType,
 )
-from poly.resources.api_integration import (
-    AVAILABLE_AUTH_TYPES,
-    AVAILABLE_OPERATIONS,
-    URL_PATTERN,
-    ApiIntegration,
-    ApiIntegrationConfig,
-    ApiIntegrationOperation,
-    ApiIntegrationEnvironments,
-)
 from poly.resources.handoff import Handoff
 from poly.resources.keyphrase_boosting import KeyphraseBoosting
 from poly.resources.phrase_filter import PhraseFilter
diff --git a/uv.lock b/uv.lock
index 8261d23..41c0c73 100644
--- a/uv.lock
+++ b/uv.lock
@@ -332,7 +332,7 @@ wheels = [
 
 [[package]]
 name = "polyai-adk"
-version = "0.3.3"
+version = "0.4.0"
 source = { editable = "." }
 dependencies = [
     { name = "argcomplete" },
@@ -758,28 +758,28 @@ wheels = [
 
 [[package]]
 name = "uv"
-version = "0.11.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/5a/c3/8fe199f300c8c740a55bc7a0eb628aa21ce6fd81130ab26b1b74597e3566/uv-0.11.0.tar.gz", hash = "sha256:8065cd54c2827588611a1de334901737373602cb64d7b84735a08b7d16c8932b", size = 4007038, upload-time = "2026-03-23T22:04:50.132Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/78/29/188d4abb5bbae1d815f4ca816ad5a3df570cb286600b691299424f5e0798/uv-0.11.0-py3-none-linux_armv6l.whl", hash = "sha256:0a66d95ded54f76be0b3c5c8aefd4a35cc453f8d3042563b3a06e2dc4d54dbb6", size = 23338895, upload-time = "2026-03-23T22:04:53.4Z" },
-    { url = "https://files.pythonhosted.org/packages/49/d3/e8c91242e5bf2c10e8da8ad4568bc41741f497ba6ae7ebfa3f931ef56171/uv-0.11.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:130f5dd799e8f50ab5c1cdc51b044bb990330d99807c406d37f0b09b3fdf85fe", size = 22812837, upload-time = "2026-03-23T22:05:13.426Z" },
-    { url = "https://files.pythonhosted.org/packages/d9/1c/6ddd0febcea06cf23e59d9bff90d07025ecfd600238807f41ed2bdafd159/uv-0.11.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:4b0ebbd7ae019ea9fc4bff6a07d0c1e1d6784d1842bbdcb941982d30e2391972", size = 21363278, upload-time = "2026-03-23T22:05:48.771Z" },
-    { url = "https://files.pythonhosted.org/packages/79/25/2bf8fb0ae419a9dd7b7e13ab6d742628146ed9dd0d2205c2f7d5c437f3d5/uv-0.11.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:50f3d0c4902558a2a06afb4666e6808510879fb52b0d8cc7be36e509d890fd88", size = 23132924, upload-time = "2026-03-23T22:05:52.759Z" },
-    { url = "https://files.pythonhosted.org/packages/ff/af/c83604cf9d2c2a07f50d779c8a51c50bc6e31bcc196d58c76c4af5de363c/uv-0.11.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.musllinux_1_1_armv7l.whl", hash = "sha256:16b7850ac8311eb04fe74c6ec1b3a7b6d7d84514bb6176877fcf5df9b7d6464a", size = 22935016, upload-time = "2026-03-23T22:05:45.023Z" },
-    { url = "https://files.pythonhosted.org/packages/8d/1f/2b4bbab1952a9c28f09e719ca5260fb6ae013d0a8b5025c3813ba86708ed/uv-0.11.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f2c3ec280a625c77ff6d9d53ebc0af9277ca58086b8ab2f8e66b03569f6aecb9", size = 22929000, upload-time = "2026-03-23T22:05:17.039Z" },
-    { url = "https://files.pythonhosted.org/packages/ca/bc/038b3df6e22413415ae1eec748ee5b5f0c32ac2bdd80350a1d1944a4b8aa/uv-0.11.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:24fbec6a70cee6e2bf5619ff71e4c984664dbcc03dcf77bcef924febf9292293", size = 24575116, upload-time = "2026-03-23T22:05:01.095Z" },
-    { url = "https://files.pythonhosted.org/packages/76/91/6adc039c3b701bd4a65d8fdfada3e7f3ee54eaca1759b3199699bf338d0e/uv-0.11.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:15d2380214518375713c8da32e84e3d1834bee324b43a5dff8097b4d8b1694a9", size = 25158577, upload-time = "2026-03-23T22:05:21.049Z" },
-    { url = "https://files.pythonhosted.org/packages/ae/1e/fa1a4f5845c4081c0ace983608ae8fbe00fa27eefb4f0f884832c519b289/uv-0.11.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:74cf7401fe134dde492812e478bc0ece27f01f52be29ebbd103b4bb238ce2a29", size = 24390099, upload-time = "2026-03-23T22:04:43.756Z" },
-    { url = "https://files.pythonhosted.org/packages/36/fa/086616d98b0b8a2cc5e7b49c389118a8196027a79a5a501f5e738f718f59/uv-0.11.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:30a08ee4291580784a5e276a1cbec8830994dba2ed5c94d878cce8b2121367cf", size = 24508501, upload-time = "2026-03-23T22:05:05.062Z" },
-    { url = "https://files.pythonhosted.org/packages/cc/e5/628d21734684c3413ae484229815c04dc9c5639b71b53c308e4e7faec225/uv-0.11.0-py3-none-manylinux_2_28_aarch64.whl", hash =
"sha256:fb45be97641214df78647443e8fa0236deeef4c7995f2e3df55879b0bc42d71d", size = 23213423, upload-time = "2026-03-23T22:05:37.112Z" }, - { url = "https://files.pythonhosted.org/packages/84/53/56df3017a738de6170f8937290f45e3cd33c6d8aa7cf21b7fb688e9eaa07/uv-0.11.0-py3-none-manylinux_2_31_riscv64.musllinux_1_1_riscv64.whl", hash = "sha256:509f6e04ba3a38309a026874d2d99652d16fee79da26c8008886bc9e42bc37df", size = 24014494, upload-time = "2026-03-23T22:05:25.013Z" }, - { url = "https://files.pythonhosted.org/packages/44/a4/1cf99ae80dd3ec08834e55c12ea22a6a36efc16ad39ea256c9ebe4e0682c/uv-0.11.0-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:30eed93f96a99a97e64543558be79c628d6197059227c0789f9921aa886e83f6", size = 24049669, upload-time = "2026-03-23T22:05:09.865Z" }, - { url = "https://files.pythonhosted.org/packages/bc/ad/621271fa73f268bea996e3e296698097b5c557d48de1d316b319105e45ef/uv-0.11.0-py3-none-musllinux_1_1_i686.whl", hash = "sha256:81b73d7e9d811131636f0010533a98dd9c1893d5b7aa9672cc1ed00452834ba3", size = 23677683, upload-time = "2026-03-23T22:04:57.211Z" }, - { url = "https://files.pythonhosted.org/packages/20/03/daf51de08504529dc3de94d15d81590249e4d0394aa881dc305d7e6d6478/uv-0.11.0-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:7cbcf306d71d84855972a24a760d33f44898ac5e94b680de62cd28e30d91b69a", size = 24728106, upload-time = "2026-03-23T22:05:29.149Z" }, - { url = "https://files.pythonhosted.org/packages/22/ac/26ed5b0792f940bab892be65de7c9297c6ef1ec879adf7d133300eba31a3/uv-0.11.0-py3-none-win32.whl", hash = "sha256:801604513ec0cc05420b382a0f61064ce1c7800758ed676caba5ff4da0e3a99e", size = 22440703, upload-time = "2026-03-23T22:05:32.806Z" }, - { url = "https://files.pythonhosted.org/packages/8b/86/5449b6cd7530d1f61a77fde6186f438f8a5291cb063a8baa3b4addaa24b9/uv-0.11.0-py3-none-win_amd64.whl", hash = "sha256:7e16194cf933c9803478f83fb140cefe76cd37fc0d9918d922f6f6fbc6ca7297", size = 24860392, upload-time = "2026-03-23T22:05:41.019Z" }, - { url = 
"https://files.pythonhosted.org/packages/04/5b/b93ef560e7b69854a83610e7285ebc681bb385dd321e6f6d359bef5db4c0/uv-0.11.0-py3-none-win_arm64.whl", hash = "sha256:1960ae9c73d782a73b82e28e5f735b269743d18a467b3f14ec35b614435a2aef", size = 23347957, upload-time = "2026-03-23T22:04:47.727Z" }, +version = "0.11.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/2b/e9/691eb77e5e767cdec695db3f91ec259bbb66f9af7c86a8dbe462ef72a120/uv-0.11.1.tar.gz", hash = "sha256:8aa7e4983fabb06d0ba58e8b8c969d568ce495ad5f2f0426af97b55720f0dee1", size = 4007244, upload-time = "2026-03-24T23:14:18.269Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/16/f9/a95c44fba785c27a966087154a8f6825774d49a38b3c5cd35f80e07ca5ca/uv-0.11.1-py3-none-linux_armv6l.whl", hash = "sha256:424b5b412d37838ea6dc11962f037be98b92e83c6ec755509e2af8a4ca3fbf2a", size = 23320598, upload-time = "2026-03-24T23:13:44.998Z" }, + { url = "https://files.pythonhosted.org/packages/5d/de/b7e24956a2508debf2addefcad93c72165069370f914d90db6264e0cf96a/uv-0.11.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:c2133b0532af0217bf252d981bded8bff0c770f174f91f20655f88705f28c03f", size = 22832732, upload-time = "2026-03-24T23:13:33.677Z" }, + { url = "https://files.pythonhosted.org/packages/93/bd/1ac91bc704c22a427a44262f09e208ae897817a856d0e8dc0d60e4032e92/uv-0.11.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:1a7b74e5a15b9bc6e61ce807adeca5a2807f557d3f06a5586de1da309d844c1d", size = 21406409, upload-time = "2026-03-24T23:14:32.231Z" }, + { url = "https://files.pythonhosted.org/packages/34/1d/f767701e1160538d25ee6c1d49ce1e72442970b6658365afdd57339d10e0/uv-0.11.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:fb1f32ec6c7dffb7ae71afaf6bf1defca0bd20a73a25e61226210c0a3e8bb13d", size = 23154066, upload-time = "2026-03-24T23:14:07.334Z" }, + { url = 
"https://files.pythonhosted.org/packages/55/21/d2cfa3571557ba68ffd530656b1d7159fe59a6b01be94595351b1eec1c29/uv-0.11.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.musllinux_1_1_armv7l.whl", hash = "sha256:0d5cf3c1c96f8afd67072d80479a58c2d69471916bac4ac36cc55f2aa025dc8e", size = 22922490, upload-time = "2026-03-24T23:13:25.83Z" }, + { url = "https://files.pythonhosted.org/packages/59/3c/68119f555b2ec152235951cc9aa0f40006c5f03d17c98adaab6a3d36d42b/uv-0.11.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5829a254c64b19420b9e48186182d162b01f8da0130e770cbb8851fd138bb820", size = 22923054, upload-time = "2026-03-24T23:14:03.595Z" }, + { url = "https://files.pythonhosted.org/packages/70/ce/0df944835519372b1d698acaa388baa874cf69a6183b5f0980cb8855b81a/uv-0.11.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d4259027e80f4dcc9ae3dceddcd5407173d334484737166fc212e96bb760d6ea", size = 24576177, upload-time = "2026-03-24T23:14:25.263Z" }, + { url = "https://files.pythonhosted.org/packages/db/04/0076335413c618fe086e5a4762103634552e638a841e12a4bb8f5137d710/uv-0.11.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b6169eb49d1d2b5df7a7079162e1242e49ad46c6590c55f05b182fa526963763", size = 25207026, upload-time = "2026-03-24T23:14:11.579Z" }, + { url = "https://files.pythonhosted.org/packages/bb/57/79c0479e12c2291ad9777be53d813957fa38283975b708eead8e855ba725/uv-0.11.1-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c96a7310a051b1013efffe082f31d718bce0538d4abc20a716d529bf226b7c44", size = 24393748, upload-time = "2026-03-24T23:13:48.553Z" }, + { url = "https://files.pythonhosted.org/packages/c3/25/9ef73c8b6ef04b0cead7d8f1547034568e3e58f3397b55b83167e587f84a/uv-0.11.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:41ccc438dbb905240a3630265feb25be1bda61656ec7c32682a83648a686f4aa", size = 24518525, upload-time = "2026-03-24T23:13:41.129Z" }, + { url = 
"https://files.pythonhosted.org/packages/a0/a3/035c7c2feb2139efb5d70f2e9f68912c34f7d92ee2429bacd708824483bb/uv-0.11.1-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:44f528ba3d66321cea829770982cccb14af142203e4e19d00ff0c23b28e3cd33", size = 23270167, upload-time = "2026-03-24T23:13:51.937Z" }, + { url = "https://files.pythonhosted.org/packages/25/59/2dd782b537bfd1e41cb06de4f4a529fe2f9bd10034fb3fcce225ec86c1a5/uv-0.11.1-py3-none-manylinux_2_31_riscv64.musllinux_1_1_riscv64.whl", hash = "sha256:4fcc3d5fdea24181d77e7765bf9d16cdd9803fd524820c62c66f91b2e2644d5b", size = 24011976, upload-time = "2026-03-24T23:13:37.402Z" }, + { url = "https://files.pythonhosted.org/packages/7b/f0/9983e6f31d495cc548f1e211cab5b89a3716f406a2d9d8134b8245ec103c/uv-0.11.1-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5de9e43a32079b8d57093542b0cd8415adba5ed9944fa49076c0927f3ff927e1", size = 24029605, upload-time = "2026-03-24T23:14:28.819Z" }, + { url = "https://files.pythonhosted.org/packages/19/dc/9c59e803bfc1b9d6c4c4b7374689c688e9dc0a1ecc2375399d3a59fd4a58/uv-0.11.1-py3-none-musllinux_1_1_i686.whl", hash = "sha256:f13ae98a938effae5deb587a63e7e42f05d6ba9c1661903ef538e4e87b204f8c", size = 23702811, upload-time = "2026-03-24T23:14:21.207Z" }, + { url = "https://files.pythonhosted.org/packages/7d/77/b1cbfdac0b2dd3e7aa420e9dad1abe8badb47eabd8741a9993586b14f8dc/uv-0.11.1-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:57d38e8b6f6937e1521da568adf846bb89439c73e146e89a8ab2cfe7bb15657a", size = 24714239, upload-time = "2026-03-24T23:13:29.814Z" }, + { url = "https://files.pythonhosted.org/packages/e4/d3/94917751acbbb5e053cb366004ae8be3c9664f82aef7de54f55e38ec15cb/uv-0.11.1-py3-none-win32.whl", hash = "sha256:36f4552b24acaa4699b02baeb1bb928202bb98d426dcc5041ab7ebae082a6430", size = 22404606, upload-time = "2026-03-24T23:13:55.614Z" }, + { url = "https://files.pythonhosted.org/packages/aa/87/8dadfe03944a4a493cd58b6f4f13e5181069a0048aeb2fae7da2c587a542/uv-0.11.1-py3-none-win_amd64.whl", 
hash = "sha256:d6a1c4cdb1064e9ceaa59e89a7489dd196222a0b90cfb77ca37a909b5e024ea0", size = 24850092, upload-time = "2026-03-24T23:14:15.186Z" }, + { url = "https://files.pythonhosted.org/packages/38/1b/dad559273df0c8263533afa4a28570cf6804272f379df9830b528a9cf8bc/uv-0.11.1-py3-none-win_arm64.whl", hash = "sha256:3bc9632033c7a280342f9b304bd12eccb47d6965d50ea9ee57ecfaf4f1f393c4", size = 23376127, upload-time = "2026-03-24T23:13:59.59Z" }, ] [[package]] From 69b5730bab7056f22820d75e50284a0c55a3d7ff Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Mon, 30 Mar 2026 12:14:33 +0100 Subject: [PATCH 10/14] docs: address reviewer feedback on installation and prerequisites --- docs/docs/concepts/working-locally.md | 15 --------------- docs/docs/get-started/installation.md | 14 ++++++++------ docs/docs/get-started/prerequisites.md | 20 ++++++-------------- 3 files changed, 14 insertions(+), 35 deletions(-) diff --git a/docs/docs/concepts/working-locally.md b/docs/docs/concepts/working-locally.md index ea21ab3..1aad70b 100644 --- a/docs/docs/concepts/working-locally.md +++ b/docs/docs/concepts/working-locally.md @@ -135,21 +135,6 @@ These references let settings, prompts, and behaviors point to resources by name Think of the ADK as a synchronization layer between your local files and the Agent Studio platform. -## Development setup from source - -To contribute to the ADK or work directly from the repository: - -~~~bash -git clone https://github.com/polyai/adk.git -cd adk -uv venv -source .venv/bin/activate -uv pip install -e ".[dev]" -pre-commit install -~~~ - -This installs the project in editable mode and registers the development hooks. - ## Related pages
diff --git a/docs/docs/get-started/installation.md b/docs/docs/get-started/installation.md index b2ece5f..9e68523 100644 --- a/docs/docs/get-started/installation.md +++ b/docs/docs/get-started/installation.md @@ -13,7 +13,9 @@ The **PolyAI ADK** can be installed as a Python package. ## Install the ADK -We recommend installing in a virtual environment rather than installing to the global system Python. Run the following to create one: +We recommend installing in a virtual environment rather than installing to the global system Python. If you don't have `uv` installed, see the [uv installation guide](https://docs.astral.sh/uv/getting-started/installation/){ target="_blank" rel="noopener" }. + +Run the following to create a virtual environment: ~~~bash uv venv --python=3.14 --seed @@ -31,17 +33,17 @@ Install the package with pip: pip install polyai-adk ~~~ -Once installed, you can use the `poly` command to interact with Agent Studio projects locally. - ## Generate API key -Set your API key as an environment variable: +Log in to the Agent Studio platform and generate an API key. Then set it as an environment variable: ~~~bash export POLY_ADK_KEY= ~~~ -You can generate an API key from the Agent Studio platform. The `POLY_ADK_KEY` environment variable must be set before running any `poly` commands. +The `POLY_ADK_KEY` environment variable must be set before running any `poly` commands. + +Once the ADK is installed and your API key is set, you can use the `poly` command to interact with Agent Studio projects locally. ## Verify the installation @@ -55,7 +57,7 @@ You should see the top-level command help if installation succeeded. ## Next step -Once the ADK is installed, continue to the first commands page to explore the CLI. +Continue to the first commands page to explore the CLI.
diff --git a/docs/docs/get-started/prerequisites.md b/docs/docs/get-started/prerequisites.md index ec69e24..3074ab3 100644 --- a/docs/docs/get-started/prerequisites.md +++ b/docs/docs/get-started/prerequisites.md @@ -24,21 +24,12 @@ Install the following tools before continuing: | Tool | Version | Notes | |---|---|---| -| **Python** | 3.14+ | Required to run the ADK | -| **uv** | latest | Recommended for development setup from source | +| **uv** | latest | Manages Python and virtual environments | | **Git** | any | Required to clone the repository or contribute | -### Install Python 3.14+ - -Python 3.14 is a recent release. Use one of these methods: - -- **Homebrew** (macOS): `brew install python@3.14` -- **pyenv**: `pyenv install 3.14` then `pyenv global 3.14` -- **Official installer**: [python.org/downloads](https://www.python.org/downloads/){ target="_blank" rel="noopener" } - ### Install uv -The recommended way to install `uv`: +`uv` manages Python versions for you, including the version required by the ADK. Install it with: ~~~bash curl -LsSf https://astral.sh/uv/install.sh | sh @@ -50,14 +41,15 @@ Alternatively, with Homebrew on macOS: brew install uv ~~~ +See the [uv installation guide](https://docs.astral.sh/uv/getting-started/installation/){ target="_blank" rel="noopener" } for more options. + ## Checklist Before continuing, confirm: - [ ] You have access to an **Agent Studio workspace** - [ ] You have obtained an **API key** from your PolyAI contact -- [ ] Python 3.14+ is installed and on your `PATH` -- [ ] `uv` is installed if you plan to use the development setup +- [ ] `uv` is installed - [ ] `git` is available locally ## Next step @@ -73,4 +65,4 @@ Once these requirements are in place, continue to installation. Install the ADK and set up your local environment. [Open installation](./installation.md) -
\ No newline at end of file +
From ee8ccd626278f1a52fb47881077c930b7dc83d41 Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Mon, 30 Mar 2026 12:22:40 +0100 Subject: [PATCH 11/14] fix: convert backtick code fences to tilde fences in licensing.md --- docs/docs/legal/licensing.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/docs/legal/licensing.md b/docs/docs/legal/licensing.md index 8cf4a39..37cec45 100644 --- a/docs/docs/legal/licensing.md +++ b/docs/docs/legal/licensing.md @@ -9,7 +9,7 @@ PolyAI ADK uses several third-party open source software packages. We gratefully ### MIT License -``` +~~~ MIT License Permission is hereby granted, free of charge, to any person obtaining a copy @@ -29,11 +29,11 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -``` +~~~ ### Apache License 2.0 -``` +~~~ Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ @@ -130,11 +130,11 @@ liable for any damages arising from the use of the Work. warranty, support, indemnity, or other liability obligations. END OF TERMS AND CONDITIONS -``` +~~~ ### BSD 3-Clause License -``` +~~~ BSD 3-Clause License Redistribution and use in source and binary forms, with or without @@ -161,11 +161,11 @@ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-``` +~~~ ### Mozilla Public License 2.0 (MPL 2.0) -``` +~~~ Mozilla Public License Version 2.0 ================================== @@ -539,4 +539,4 @@ Exhibit B - "Incompatible With Secondary Licenses" Notice This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 2.0. -``` \ No newline at end of file +~~~ \ No newline at end of file From 0a769911087b3f2d26a299c84431f795f7036b6a Mon Sep 17 00:00:00 2001 From: Aaron Forinton <89849359+AaronForinton@users.noreply.github.com> Date: Mon, 30 Mar 2026 14:03:20 +0100 Subject: [PATCH 12/14] Remove redundant line in anti-patterns documentation --- docs/docs/concepts/anti-patterns.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/docs/docs/concepts/anti-patterns.md b/docs/docs/concepts/anti-patterns.md index 37d39d8..a2fd7c1 100644 --- a/docs/docs/concepts/anti-patterns.md +++ b/docs/docs/concepts/anti-patterns.md @@ -172,9 +172,8 @@ If the user is expected to answer, put the full question in the utterance and le ## Design principle -When in doubt: - - make control flow explicit - keep prompts conversational - keep code deterministic -- prefer simple, testable paths over clever prompt tricks \ No newline at end of file +- prefer simple, testable paths over clever prompt tricks + From 058206b01a69c953675a3b67484996f040a08de0 Mon Sep 17 00:00:00 2001 From: aaronforinton Date: Mon, 30 Mar 2026 17:29:57 +0100 Subject: [PATCH 13/14] fix: correct bare code fences in licensing.md --- docs/docs/legal/licensing.md | 11 ++++------- 1 file changed, 4 insertions(+), 7 deletions(-) diff --git a/docs/docs/legal/licensing.md b/docs/docs/legal/licensing.md index 37cec45..0da8b16 100644 --- a/docs/docs/legal/licensing.md +++ b/docs/docs/legal/licensing.md @@ -9,7 +9,7 @@ PolyAI ADK uses several third-party open source software packages. 
We gratefully ### MIT License -~~~ +~~~text MIT License Permission is hereby granted, free of charge, to any person obtaining a copy @@ -30,10 +30,9 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ~~~ - ### Apache License 2.0 -~~~ +~~~text Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ @@ -131,10 +130,9 @@ warranty, support, indemnity, or other liability obligations. END OF TERMS AND CONDITIONS ~~~ - ### BSD 3-Clause License -~~~ +~~~text BSD 3-Clause License Redistribution and use in source and binary forms, with or without @@ -162,10 +160,9 @@ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ~~~ - ### Mozilla Public License 2.0 (MPL 2.0) -~~~ +~~~text Mozilla Public License Version 2.0 ================================== From 9186da55833d34128cb5f4344b08f374f6c14ed1 Mon Sep 17 00:00:00 2001 From: Aaron Forinton <89849359+AaronForinton@users.noreply.github.com> Date: Mon, 30 Mar 2026 20:45:20 +0100 Subject: [PATCH 14/14] Add pygments version 2.18.0 to requirements --- docs/requirements.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/requirements.txt b/docs/requirements.txt index 8782856..a95c902 100644 --- a/docs/requirements.txt +++ b/docs/requirements.txt @@ -1,3 +1,4 @@ mkdocs==1.6.1 mkdocs-material==9.6.20 pymdown-extensions==10.14.3 +pygments==2.18.0