Jido Integration is an Elixir integration platform for publishing connector capabilities, managing auth lifecycle, invoking work across direct and Harness-backed runtimes, and reviewing durable execution state.
This repository includes the public platform facade, bridge packages, connector packages, durability tiers, and app-level proofs for hosted webhook and async flows. If you are evaluating or using the platform, start here. If you are changing the internals of the monorepo itself, use the developer guides linked below.
Connector packages that depend on external SDK or runtime repos should prefer
sibling-relative paths during active local development and fall back to pinned
git refs otherwise. They should not rely on connector-local vendored `deps/`
trees for runtime dependency sourcing.
- read Architecture for the platform shape and package responsibilities
- read Runtime Model to choose between direct, session, and stream execution
- read Durability before selecting in-memory, local-file, or Postgres-backed state
- read Publishing for the welded package release flow
- read Reference Apps for end-to-end proof surfaces
- read Developer Index only if you are working on repo internals
- Guide Index
- Architecture
- Runtime Model
- Durability
- Connector Lifecycle
- Conformance
- Async And Webhooks
- Publishing
- Reference Apps
- Observability
- `Jido.Integration.V2` is the stable public entrypoint for connector discovery, auth lifecycle calls, invocation, review lookups, and target lookup.
- connector packages publish authored capability contracts and may also expose curated generated `Jido.Action`, `Jido.Sensor`, and `Jido.Plugin` surfaces.
- `core/dispatch_runtime` and `core/webhook_router` provide the hosted async and webhook APIs above the main facade.
Key public capabilities today include:
- connector discovery through `connectors/0`, `capabilities/0`, `fetch_connector/1`, `fetch_capability/1`, and `projected_catalog_entries/0`
- auth lifecycle through `start_install/3`, `complete_install/2`, `fetch_install/1`, `connection_status/1`, `request_lease/2`, `rotate_connection/2`, and `revoke_connection/2`
- invocation through `InvocationRequest.new!/1`, `invoke/1`, and `invoke/3`
- review and targeting through `fetch_run/1`, `fetch_attempt/1`, `events/1`, `run_artifacts/1`, `fetch_artifact/1`, `announce_target/1`, `fetch_target/1`, `compatible_targets/1`, and `review_packet/2`
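As a hedged sketch of how these pieces might compose (the `InvocationRequest` field names such as `:connector`, `:capability`, `:connection_id`, and `:input` are illustrative assumptions, not confirmed struct keys):

```elixir
alias Jido.Integration.V2
alias Jido.Integration.V2.InvocationRequest

# Hypothetical shape: bind auth through a connection_id, then invoke.
# The exact InvocationRequest fields are assumptions for illustration.
request =
  InvocationRequest.new!(%{
    connector: :github,
    capability: "github.issues.create",
    connection_id: "conn_123",
    input: %{title: "Example issue"}
  })

case V2.invoke(request) do
  {:ok, run} -> V2.fetch_run(run.id)
  {:error, reason} -> {:error, reason}
end
```

The review functions (`fetch_run/1`, `events/1`, `review_packet/2`) then read back the durable execution state that the invocation produced.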
Phase 7 also lands the explicit cross-repo reference seam in `core/contracts`:

- `SubjectRef` names the primary source subject for a higher-order record
- `EvidenceRef` names exact source records plus the review-packet lineage they were read through
- `GovernanceRef` names approval, denial, override, rollback, or policy-decision lineage without creating duplicate control-plane ownership or a separate persisted review record family
- `ReviewProjection` is the contracts-only `packet.metadata` shape for northbound consumers that need review packet lineage without depending on `core/platform`
Phase 8 also freezes the higher-order seam: higher-order sidecars such as
`jido_memory`, `jido_skill`, and `jido_eval` stay on the `core/contracts` seam
and may persist only derived state.
Phase 9 provider-factory work builds on that already-correct ownership split instead of reopening control-plane, catalog, or review authority in those repos.
Hosted webhook routing and async replay are intentionally separate public package APIs:
- `Jido.Integration.V2.DispatchRuntime`
- `Jido.Integration.V2.WebhookRouter`
Runtime families proved in-tree:
- `:direct`: GitHub issue and comment operations; Notion user, search, page, block, data-source, and comment operations
- `:session`: `codex.exec.session`
- `:stream`: `market.ticks.pull`
Reference apps:
- `apps/trading_ops` proves one operator-visible workflow across stream, session, and direct runtimes; it keeps trigger admission in `core/ingress` and durable review truth in `core/control_plane`
- `apps/devops_incident_response` proves hosted webhook registration, async dispatch, dead-letter, replay, and restart recovery; it keeps webhook behavior app-local instead of widening `connectors/github`
The current surface also proves:
- connectors execute through short-lived auth leases, not durable credential truth
- public invocation binds auth through `connection_id`; `credential_ref` remains internal execution plumbing
- GitHub and Notion both publish generated common consumer surfaces from authored contracts
- conformance runs from the root while connector evidence stays package-local
- local durability, async queue state, and webhook route state are all explicit opt-in packages
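The lease-based auth posture above can be sketched end to end using the facade arities listed earlier; the argument shapes and return values here are illustrative assumptions, not the confirmed API contract:

```elixir
alias Jido.Integration.V2

# Hypothetical argument shapes for the listed arities; only the function
# names and arities come from the facade's documented surface.
{:ok, install} = V2.start_install(:github, "workspace_1", %{redirect_uri: "https://example.invalid/callback"})
{:ok, connection} = V2.complete_install(install.id, %{code: "oauth_code"})

# Execution requests a short-lived lease rather than reading durable
# credential truth; connectors never see stored credentials directly.
{:ok, _lease} = V2.request_lease(connection.id, "github.issues.create")
```

Rotation (`rotate_connection/2`) and revocation (`revoke_connection/2`) then operate on the durable connection record, not on any connector-held credential copy.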
The source monorepo remains the system of record. The publishable Hex package
is generated from this repo through weld.
The release path is explicit:
- `mix release.prepare`
- `mix release.publish.dry_run`
- `mix release.publish`
- `mix release.archive`
`mix release.prepare` generates the welded package, runs the artifact quality
lane, builds the tarball, and writes a durable release bundle under `dist/`.
`mix release.publish` publishes from that prepared bundle snapshot rather than
from the monorepo root. `mix release.archive` then preserves the prepared
bundle in the archive tree so the exact released artifact remains inspectable.
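Run in order from the monorepo root, the release path reads as:

```shell
# Generate the welded package, run the artifact quality lane,
# build the tarball, and write the release bundle under dist/
mix release.prepare

# Rehearse publication from the prepared bundle without publishing
mix release.publish.dry_run

# Publish from the prepared bundle snapshot, not the monorepo root
mix release.publish

# Preserve the released bundle in the archive tree for inspection
mix release.archive
```

The dry-run step sits between preparation and publication so the prepared bundle can be validated before anything leaves the repo.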
The first published welded artifact intentionally ships the direct-runtime, webhook, async, durability, auth, and public-facade surface. The Harness-backed session and stream packages, plus lower-boundary bridge packages, stay source-repo packages until their external runtime dependencies become independently publishable.
The repo root is a workspace and documentation layer. Runtime code lives in child packages and top-level apps.
jido_integration/
  mix.exs                     # workspace root only
  README.md                   # user-facing repo entry point
  guides/                     # user-facing and developer guide entry points
  docs/                       # repo-level developer notes and workflows
  lib/                        # root Mix tasks and workspace helpers only
  test/                       # root tooling tests only
  core/
    platform/                 # public facade package (`:jido_integration_v2`)
    contracts/                # shared public structs and behaviours
    auth/                     # install, connection, credential, and lease truth
    control_plane/            # durable run, trigger, and artifact truth
    harness_runtime/          # Harness-backed session/stream adapter package
    consumer_surfaces/        # generated common Jido surface runtime support
    direct_runtime/           # direct capability execution
    runtime_asm_bridge/       # integration-owned `asm` Harness driver projection
    session_runtime/          # integration-owned `jido_session` Harness driver
    ingress/                  # trigger normalization and durable admission
    policy/                   # pre-attempt policy and shed decisions
    dispatch_runtime/         # async queue, retry, replay, recovery
    webhook_router/           # hosted route lifecycle and ingress bridge
    conformance/              # reusable connector conformance engine
    store_local/              # restart-safe local durability tier
    store_postgres/           # database-backed durable tier
  bridges/
    boundary_bridge/          # lower-boundary sandbox bridge package
  connectors/
    github/                   # direct GitHub connector + live acceptance runbook
    notion/                   # direct Notion connector + package-local live proofs
    codex_cli/                # Harness-routed session connector via `asm`
    market_data/              # Harness-routed stream connector via `asm`
  apps/
    trading_ops/              # cross-runtime operator proof
    devops_incident_response/ # hosted webhook + async recovery proof
GitHub and Notion stay on the direct provider-SDK path and do not inherit session or stream runtime-kernel coupling merely because the repo also ships non-direct capability families.
Jido.Integration.V2 -> DirectRuntime -> connector -> provider SDK -> pristine
Only actual `:session` and `:stream` capabilities use
/home/home/p/g/n/jido_harness via `Jido.Harness`.
Jido.Integration.V2 -> HarnessRuntime -> Jido.Harness -> {asm | jido_session}
`asm` routes through `core/runtime_asm_bridge` into /home/home/p/g/n/agent_session_manager
and /home/home/p/g/n/cli_subprocess_core, while `jido_session` routes
through `core/session_runtime` via `Jido.Session.HarnessDriver`.
Stage 4 brings the in-repo `jido_session` lane to the same lower-boundary seam
as `asm`: boundary-backed internal sessions now allocate or reopen through
`Jido.BoundaryBridge`, fail closed on unsupported `descriptor_version`, and
carry normalized boundary descriptor metadata through the public session
projection without treating `policy_intent_echo` as governance truth.
Phase 6A removed the old `core/session_kernel` and `core/stream_runtime`
bridge packages. They are not part of the repo or the target runtime
architecture.
For lower-boundary readiness, `TargetDescriptor.extensions["boundary"]` is the
authored baseline boundary capability advertisement. Runtime code may merge
worker-local facts into a runtime-merged live capability view when a
boundary-backed `asm` or boundary-backed `jido_session` lane learns a more
precise attach or checkpoint posture at execution time.
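The baseline-plus-runtime-facts merge described above can be sketched as follows; the module, key names, and value shapes are illustrative assumptions, not the actual runtime API:

```elixir
# Hypothetical sketch: combine the authored boundary advertisement from
# TargetDescriptor.extensions["boundary"] with worker-local facts learned
# at execution time. Worker-local facts take precedence in the live view;
# the authored baseline remains the advertisement of record.
defmodule BoundaryCapabilityView do
  def merge(authored_baseline, worker_local_facts)
      when is_map(authored_baseline) and is_map(worker_local_facts) do
    Map.merge(authored_baseline, worker_local_facts)
  end
end

authored = %{"attach" => "unknown", "checkpoint" => false}
live = BoundaryCapabilityView.merge(authored, %{"attach" => "supported"})
# live => %{"attach" => "supported", "checkpoint" => false}
```

The one-directional merge matters: runtime facts refine the live view but never rewrite the authored advertisement itself.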
User-facing guides live under `guides/`. Developer-focused repo notes remain in
`docs/`, and package-specific workflows remain in package-local READMEs.
Primary package and app runbooks:
- `core/platform/README.md`
- `core/consumer_surfaces/README.md`
- `core/conformance/README.md`
- `core/session_runtime/README.md`
- `core/store_local/README.md`
- `core/dispatch_runtime/README.md`
- `core/webhook_router/README.md`
- `bridges/boundary_bridge/README.md`
- `connectors/github/README.md`
- `connectors/github/docs/live_acceptance.md`
- `connectors/notion/README.md`
- `connectors/notion/docs/live_acceptance.md`
- `apps/trading_ops/README.md`
- `apps/devops_incident_response/README.md`
The monorepo test and CI surface now includes packages that wire
`core/store_postgres` in `:test`.
`mix mr.test` and `mix ci` therefore expect a reachable Postgres test store.
Before calling the repo blocked on Postgres reachability, run:
`mix mr.pg.preflight`

That check validates the canonical `core/store_postgres` test tier only. The
repo still supports the other two durability tiers in parallel:

- in-memory defaults in `core/auth` and `core/control_plane`
- `core/store_local` for restart-safe local durability
- `core/store_postgres` for the shared database-backed tier
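Because the durability tiers are explicit opt-in packages, selecting one is a matter of adding the package to an app's deps. The application names and path layout below are assumptions inferred from the directory names, not verified package names:

```elixir
# Hypothetical mix.exs fragment: opt into a durability tier explicitly.
# Dep names here are illustrative guesses based on the package directories.
defp deps do
  [
    # restart-safe local durability tier
    {:jido_integration_store_local, path: "../../core/store_local"}

    # or, alternatively, the shared database-backed tier:
    # {:jido_integration_store_postgres, path: "../../core/store_postgres"}
  ]
end
```

Apps that add neither dep fall back to the in-memory defaults in `core/auth` and `core/control_plane`.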