
✨ adding docker-api-proxy service ⚠️ #7070

Status: Open. Wants to merge 49 commits into base: master.

Commits (49)
3398e50
added docker-api-proxy with integration tests
Jan 23, 2025
47a1b9b
Merge remote-tracking branch 'upstream/master' into pr-osparc-docker-…
Jan 23, 2025
8a4c101
refactor tests
Jan 23, 2025
47eb688
refactor
Jan 23, 2025
c4f8276
using docker IP
Jan 23, 2025
7ab73c9
making test mandatory
Jan 23, 2025
ff6539e
Merge remote-tracking branch 'upstream/master' into pr-osparc-docker-…
Jan 23, 2025
9eeaee9
giving service more time to start
Jan 23, 2025
5bb62ea
added secure flag to settings
Jan 23, 2025
92b9086
removed unused
Jan 23, 2025
714dbfc
renamed
Jan 23, 2025
b080606
fixed dependencies
Jan 23, 2025
6ffef9e
update healthcheck and remove logs
Jan 23, 2025
333ac43
trigger test on any package change
Jan 23, 2025
36b54fa
refactor
Jan 23, 2025
da0d8da
rename
Jan 23, 2025
10cd8ea
rename
Jan 23, 2025
0393243
refactor with new pattern
Jan 23, 2025
2999a03
refactor
Jan 23, 2025
a763834
rename
Jan 23, 2025
b4d34b6
fixed integration tests
Jan 23, 2025
ab35648
running as non root user
Jan 23, 2025
6168b5e
dropped dependencies
Jan 23, 2025
4771022
added required dependencies
Jan 23, 2025
b0eb0d1
remove unused
Jan 29, 2025
1e54972
remove unused
Jan 29, 2025
ce13d5d
Merge remote-tracking branch 'upstream/master' into pr-osparc-docker-…
Jan 29, 2025
1837ca2
Merge remote-tracking branch 'upstream/master' into pr-osparc-docker-…
Jan 30, 2025
e71c113
added combine_lifespan_context_managers
Jan 30, 2025
faaf938
using new docker client
Jan 30, 2025
096f943
Merge remote-tracking branch 'upstream/master' into pr-osparc-docker-…
Jan 30, 2025
f39fa91
refactor to use custom setting var name
Jan 30, 2025
b9da94d
added env vars
Jan 31, 2025
0d92121
adding check to see that API is responsive
Jan 31, 2025
e1fe41d
Merge remote-tracking branch 'upstream/master' into pr-osparc-docker-…
Jan 31, 2025
92a682d
ensure it times out
Jan 31, 2025
e53c60c
renamed network
Jan 31, 2025
44d8ce4
added healthcheck error
Jan 31, 2025
959d1b6
fixed healthcheck
Jan 31, 2025
bd7e1c7
refactor
Jan 31, 2025
911bcdf
Merge remote-tracking branch 'upstream/master' into pr-osparc-docker-…
Feb 4, 2025
cce8765
refactor position
Feb 4, 2025
b3bd6e6
typos and notes
Feb 4, 2025
d84efbf
refactor to use settings instead of property name
Feb 4, 2025
09a4a0b
fixed specs generation
Feb 4, 2025
72e2bc7
rename
Feb 4, 2025
f0709d3
fixed tests
Feb 4, 2025
5d87c53
fixed warning
Feb 4, 2025
8f062d2
added missing dependencies
Feb 4, 2025
6 changes: 6 additions & 0 deletions .env-devel
@@ -83,6 +83,12 @@ DIRECTOR_REGISTRY_CACHING=True
DIRECTOR_SERVICES_CUSTOM_CONSTRAINTS=null
DIRECTOR_TRACING=null

DOCKER_API_PROXY_HOST=docker-api-proxy
DOCKER_API_PROXY_PASSWORD=null
DOCKER_API_PROXY_PORT=8888
DOCKER_API_PROXY_SECURE=False
DOCKER_API_PROXY_USER=null

EFS_USER_ID=8006
EFS_USER_NAME=efs
EFS_GROUP_ID=8106
70 changes: 70 additions & 0 deletions .github/workflows/ci-testing-deploy.yml
@@ -77,6 +77,7 @@ jobs:
      migration: ${{ steps.filter.outputs.migration }}
      payments: ${{ steps.filter.outputs.payments }}
      dynamic-scheduler: ${{ steps.filter.outputs.dynamic-scheduler }}
      docker-api-proxy: ${{ steps.filter.outputs.docker-api-proxy }}
      resource-usage-tracker: ${{ steps.filter.outputs.resource-usage-tracker }}
      static-webserver: ${{ steps.filter.outputs.static-webserver }}
      storage: ${{ steps.filter.outputs.storage }}
@@ -233,6 +234,9 @@ jobs:
            - 'services/docker-compose*'
            - 'scripts/mypy/*'
            - 'mypy.ini'
          docker-api-proxy:
            - 'packages/**'
            - 'services/docker-api-proxy/**'
          resource-usage-tracker:
            - 'packages/**'
            - 'services/resource-usage-tracker/**'
@@ -2190,6 +2194,71 @@ jobs:
with:
flags: integrationtests #optional


  integration-test-docker-api-proxy:
    needs: [changes, build-test-images]
    if: ${{ needs.changes.outputs.anything-py == 'true' || needs.changes.outputs.docker-api-proxy == 'true' || github.event_name == 'push' }}
    timeout-minutes: 30 # if this timeout gets too small, then split the tests
    name: "[int] docker-api-proxy"
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        python: ["3.11"]
        os: [ubuntu-22.04]
      fail-fast: false
    steps:
      - uses: actions/checkout@v4
      - name: setup docker buildx
        id: buildx
        uses: docker/setup-buildx-action@v3
        with:
          driver: docker-container
      - name: setup python environment
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python }}
      - name: expose github runtime for buildx
        uses: crazy-max/ghaction-github-runtime@v3
      # FIXME: Workaround for https://github.com/actions/download-artifact/issues/249
      - name: download docker images with retry
        uses: Wandalen/wretry.action@master
        with:
          action: actions/download-artifact@v4
          with: |
            name: docker-buildx-images-${{ runner.os }}-${{ github.sha }}-backend
            path: /${{ runner.temp }}/build
          attempt_limit: 5
          attempt_delay: 1000
      - name: load docker images
        run: make load-images local-src=/${{ runner.temp }}/build
      - name: install uv
        uses: astral-sh/setup-uv@v5
        with:
          version: "0.5.x"
          enable-cache: false
          cache-dependency-glob: "**/docker-api-proxy/requirements/ci.txt"
      - name: show system version
        run: ./ci/helpers/show_system_versions.bash
      - name: install
        run: ./ci/github/integration-testing/docker-api-proxy.bash install
      - name: test
        run: ./ci/github/integration-testing/docker-api-proxy.bash test
      - name: upload failed tests logs
        if: ${{ failure() }}
        uses: actions/upload-artifact@v4
        with:
          name: ${{ github.job }}_docker_logs
          path: ./services/docker-api-proxy/test_failures
      - name: cleanup
        if: ${{ !cancelled() }}
        run: ./ci/github/integration-testing/docker-api-proxy.bash clean_up
      - uses: codecov/codecov-action@v5
        if: ${{ !cancelled() }}
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
        with:
          flags: integrationtests # optional

  integration-test-simcore-sdk:
    needs: [changes, build-test-images]
    if: ${{ needs.changes.outputs.anything-py == 'true' || needs.changes.outputs.simcore-sdk == 'true' || github.event_name == 'push' }}
@@ -2262,6 +2331,7 @@ jobs:
integration-test-director-v2-01,
integration-test-director-v2-02,
integration-test-dynamic-sidecar,
integration-test-docker-api-proxy,
integration-test-simcore-sdk,
integration-test-webserver-01,
integration-test-webserver-02,
1 change: 1 addition & 0 deletions Makefile
@@ -47,6 +47,7 @@ SERVICES_NAMES_TO_BUILD := \
payments \
resource-usage-tracker \
dynamic-scheduler \
docker-api-proxy \
service-integration \
static-webserver \
storage \
40 changes: 40 additions & 0 deletions ci/github/integration-testing/docker-api-proxy.bash
@@ -0,0 +1,40 @@
#!/bin/bash
# http://redsymbol.net/articles/unofficial-bash-strict-mode/
set -o errexit # abort on nonzero exitstatus
set -o nounset # abort on unbound variable
set -o pipefail # don't hide errors within pipes
IFS=$'\n\t'

install() {
  make devenv
  # shellcheck source=/dev/null
  source .venv/bin/activate
  pushd services/docker-api-proxy
  make install-ci
  popd
  uv pip list
  make info-images
}

test() {
  # shellcheck source=/dev/null
  source .venv/bin/activate
  pushd services/docker-api-proxy
  make test-ci-integration
  popd
}

clean_up() {
  docker images
  make down
}

# Check if the function exists (bash specific)
if declare -f "$1" >/dev/null; then
# call arguments verbatim
"$@"
else
# Show a helpful error
echo "'$1' is not a known function name" >&2
exit 1
fi
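The name-based dispatch at the bottom of the script (`declare -f "$1"` followed by `"$@"`) has a direct Python analogue, sketched below for illustration; the function names are hypothetical stand-ins, not part of the PR:

```python
def install() -> str:
    # stand-in for the real install step
    return "installing"


def clean_up() -> str:
    # stand-in for the real cleanup step
    return "cleaning up"


def dispatch(name: str, *args: str) -> str:
    # look the function up by name, mirroring bash's `declare -f "$1"`
    fn = globals().get(name)
    if not callable(fn):
        raise SystemExit(f"'{name}' is not a known function name")
    return fn(*args)
```

As in the bash version, an unknown subcommand exits with an error instead of silently doing nothing.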
1 change: 1 addition & 0 deletions packages/models-library/src/models_library/errors.py
@@ -36,6 +36,7 @@ class ErrorDict(_ErrorDictRequired, total=False):

RABBITMQ_CLIENT_UNHEALTHY_MSG = "RabbitMQ client is in a bad state!"
REDIS_CLIENT_UNHEALTHY_MSG = "Redis cannot be reached!"
DOCKER_API_PROXY_UNHEALTHY_MSG = "docker-api-proxy service is not reachable!"


# NOTE: Here we do not just import as 'from pydantic.error_wrappers import ErrorDict'
52 changes: 52 additions & 0 deletions packages/pytest-simcore/src/pytest_simcore/docker_api_proxy.py
@@ -0,0 +1,52 @@
import logging

import pytest
from aiohttp import ClientSession, ClientTimeout
from pydantic import TypeAdapter
from settings_library.docker_api_proxy import DockerApiProxysettings
from tenacity import before_sleep_log, retry, stop_after_delay, wait_fixed

from .helpers.docker import get_service_published_port
from .helpers.host import get_localhost_ip
from .helpers.typing_env import EnvVarsDict

_logger = logging.getLogger(__name__)


@retry(
    wait=wait_fixed(1),
    stop=stop_after_delay(10),
    before_sleep=before_sleep_log(_logger, logging.INFO),
    reraise=True,
)
async def _wait_till_docker_api_proxy_is_responsive(
    settings: DockerApiProxysettings,
) -> None:
    async with ClientSession(timeout=ClientTimeout(1, 1, 1, 1, 1)) as client:
        response = await client.get(f"{settings.base_url}/version")
        assert response.status == 200, await response.text()


@pytest.fixture
async def docker_api_proxy_settings(
    docker_stack: dict, env_vars_for_docker_compose: EnvVarsDict
) -> DockerApiProxysettings:
    """Returns the settings of a docker-api-proxy service that is up and responsive"""

    prefix = env_vars_for_docker_compose["SWARM_STACK_NAME"]
    assert f"{prefix}_docker-api-proxy" in docker_stack["services"]

    published_port = get_service_published_port(
        "docker-api-proxy", int(env_vars_for_docker_compose["DOCKER_API_PROXY_PORT"])
    )

    settings = TypeAdapter(DockerApiProxysettings).validate_python(
        {
            "DOCKER_API_PROXY_HOST": get_localhost_ip(),
            "DOCKER_API_PROXY_PORT": published_port,
        }
    )

    await _wait_till_docker_api_proxy_is_responsive(settings)

    return settings
@@ -52,10 +52,12 @@
"invitations": "/",
"payments": "/",
"resource-usage-tracker": "/",
"docker-api-proxy": "/version",
}
AIOHTTP_BASED_SERVICE_PORT: int = 8080
FASTAPI_BASED_SERVICE_PORT: int = 8000
DASK_SCHEDULER_SERVICE_PORT: int = 8787
DOCKER_API_PROXY_SERVICE_PORT: int = 8888

_SERVICE_NAME_REPLACEMENTS: dict[str, str] = {
"dynamic-scheduler": "dynamic-schdlr",
@@ -133,6 +135,7 @@ def services_endpoint(
        AIOHTTP_BASED_SERVICE_PORT,
        FASTAPI_BASED_SERVICE_PORT,
        DASK_SCHEDULER_SERVICE_PORT,
        DOCKER_API_PROXY_SERVICE_PORT,
    ]
    endpoint = URL(
        f"http://{get_localhost_ip()}:{get_service_published_port(full_service_name, target_ports)}"
83 changes: 83 additions & 0 deletions packages/service-library/src/servicelib/fastapi/docker.py
@@ -0,0 +1,83 @@
import asyncio
import logging
from collections.abc import AsyncIterator
from contextlib import AsyncExitStack
from typing import Final

import aiodocker
import aiohttp
import tenacity
from aiohttp import ClientSession
from fastapi import FastAPI
from fastapi_lifespan_manager import State
from pydantic import NonNegativeInt
from servicelib.fastapi.lifespan_utils import LifespanGenerator
from settings_library.docker_api_proxy import DockerApiProxysettings

_logger = logging.getLogger(__name__)

_DEFAULT_DOCKER_API_PROXY_HEALTH_TIMEOUT: Final[NonNegativeInt] = 5


def get_lifespan_remote_docker_client(
    settings: DockerApiProxysettings,
) -> LifespanGenerator:
    async def _(app: FastAPI) -> AsyncIterator[State]:
        session: ClientSession | None = None
        if settings.DOCKER_API_PROXY_USER and settings.DOCKER_API_PROXY_PASSWORD:
            session = ClientSession(
                auth=aiohttp.BasicAuth(
                    login=settings.DOCKER_API_PROXY_USER,
                    password=settings.DOCKER_API_PROXY_PASSWORD.get_secret_value(),
                )
            )

        async with AsyncExitStack() as exit_stack:
            if session is not None:
                # close the authenticated session together with the stack
                await exit_stack.enter_async_context(session)

            client = await exit_stack.enter_async_context(
                aiodocker.Docker(url=settings.base_url, session=session)
            )

            app.state.remote_docker_client = client

            await wait_till_docker_api_proxy_is_responsive(app)

            # NOTE this has to be inside exit_stack scope
            yield {}

    return _


@tenacity.retry(
    wait=tenacity.wait_fixed(5),
    stop=tenacity.stop_after_delay(60),
    before_sleep=tenacity.before_sleep_log(_logger, logging.INFO),
    reraise=True,
)
async def wait_till_docker_api_proxy_is_responsive(app: FastAPI) -> None:
    # raise so that tenacity actually retries; a plain False return would be ignored
    if not await is_docker_api_proxy_ready(app):
        msg = "docker-api-proxy service is not reachable!"
        raise RuntimeError(msg)


async def is_docker_api_proxy_ready(
    app: FastAPI, *, timeout=_DEFAULT_DOCKER_API_PROXY_HEALTH_TIMEOUT  # noqa: ASYNC109
) -> bool:
    try:
        await asyncio.wait_for(get_remote_docker_client(app).version(), timeout=timeout)
    except (aiodocker.DockerError, TimeoutError):
        return False
    return True


def get_remote_docker_client(app: FastAPI) -> aiodocker.Docker:
    assert isinstance(app.state.remote_docker_client, aiodocker.Docker)  # nosec
    return app.state.remote_docker_client
@@ -0,0 +1,24 @@
from functools import cached_property

from pydantic import Field, SecretStr

from .base import BaseCustomSettings
from .basic_types import PortInt


class DockerApiProxysettings(BaseCustomSettings):
    DOCKER_API_PROXY_HOST: str = Field(
        description="hostname of the docker-api-proxy service"
    )
    DOCKER_API_PROXY_PORT: PortInt = Field(
        8888, description="port of the docker-api-proxy service"
    )
    DOCKER_API_PROXY_SECURE: bool = False

    DOCKER_API_PROXY_USER: str | None = None
    DOCKER_API_PROXY_PASSWORD: SecretStr | None = None

    @cached_property
    def base_url(self) -> str:
        protocol = "https" if self.DOCKER_API_PROXY_SECURE else "http"
        return f"{protocol}://{self.DOCKER_API_PROXY_HOST}:{self.DOCKER_API_PROXY_PORT}"
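For illustration, the `base_url` construction can be exercised without pydantic. This stdlib-only mirror of the `DOCKER_API_PROXY_*` fields is a sketch (class and defaults are hypothetical, defaults taken from the `.env-devel` hunk above), not the shipped settings class:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProxyEndpoint:
    # mirrors DOCKER_API_PROXY_HOST / _PORT / _SECURE with .env-devel defaults
    host: str = "docker-api-proxy"
    port: int = 8888
    secure: bool = False

    @property
    def base_url(self) -> str:
        protocol = "https" if self.secure else "http"
        return f"{protocol}://{self.host}:{self.port}"
```

With defaults, `ProxyEndpoint().base_url` yields `http://docker-api-proxy:8888`; setting `secure=True` switches the scheme to `https`.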
44 changes: 44 additions & 0 deletions services/docker-api-proxy/Dockerfile
@@ -0,0 +1,44 @@
FROM alpine:3.21 AS base

LABEL maintainer=GitHK

# simcore-user uid=8004(scu) gid=8004(scu) groups=8004(scu)
ENV SC_USER_ID=8004 \
    SC_USER_NAME=scu \
    SC_BUILD_TARGET=base \
    SC_BOOT_MODE=default

RUN addgroup -g ${SC_USER_ID} ${SC_USER_NAME} && \
    adduser -u ${SC_USER_ID} -G ${SC_USER_NAME} \
    --disabled-password \
    --gecos "" \
    --shell /bin/sh \
    --home /home/${SC_USER_NAME} \
    ${SC_USER_NAME}

RUN apk add --no-cache socat curl && \
    curl -L -o /usr/local/bin/gosu https://github.com/tianon/gosu/releases/download/1.16/gosu-amd64 && \
    chmod +x /usr/local/bin/gosu && \
    gosu --version


# Health check to ensure the proxy is running
HEALTHCHECK \
    --interval=10s \
    --timeout=5s \
    --start-period=30s \
    --start-interval=1s \
    --retries=5 \
    CMD curl --fail http://localhost:8888/version || exit 1

COPY --chown=scu:scu services/docker-api-proxy/docker services/docker-api-proxy/docker
RUN chmod +x services/docker-api-proxy/docker/*.sh
GitHK marked this conversation as resolved.

ENTRYPOINT [ "/bin/sh", "services/docker-api-proxy/docker/entrypoint.sh" ]
CMD ["/bin/sh", "services/docker-api-proxy/docker/boot.sh"]
Member:
question, why do you need the entrypoint/boot stuff at all?
Why not just switch to USER scu at this point?

Member:
and second question, on a second thought: since this service shall never run without the docker socket, which requires root access, I'm wondering whether this makes sense. Maybe you can just grant it the privilege to access the docker socket.

Contributor Author:
Only as root can permissions to the docker socket be granted: a group matching the docker socket's group is created and the new user is added to it.


FROM base AS development
ENV SC_BUILD_TARGET=development

FROM base AS production
ENV SC_BUILD_TARGET=production
2 changes: 2 additions & 0 deletions services/docker-api-proxy/Makefile
@@ -0,0 +1,2 @@
include ../../scripts/common.Makefile
include ../../scripts/common-service.Makefile