Commit 5edff3d

shuningc and zhirafovod authored
Fix CI test order and tests (part 1) (#31)
* Fix CI test order - install util-genai before dependent packages. The test for opentelemetry-util-genai-emitters-splunk fails with "ModuleNotFoundError: No module named 'opentelemetry.util.genai.emitters.spec'" because emitters-splunk depends on opentelemetry-util-genai (which contains the emitters.spec module) being installed first. Reordered the test steps to install and test the base util-genai package before the packages that depend on it (emitters-splunk, evals, etc.). (cherry picked from commit e8bfae2)

* Fix CI: install all packages before running tests. Some tests have cross-dependencies between packages: util-genai tests import from util-genai-evals, and emitters-splunk tests import from util-genai. Changed strategy to install all packages first (with --no-deps), then run each package's tests separately, so that all inter-package dependencies are available during testing. (cherry picked from commit c8a3bdd)

* Fix remaining undefined logger references in handler.py. Changed all 'logger' references to '_LOGGER' to match the module's logger variable name (lines 239 and 245), fixing the NameError that occurred during test execution. (cherry picked from commit d5b7953)

* Fix the test mocking path for load_completion_callbacks. Tests were trying to mock 'handler._load_completion_callbacks', but the function is actually 'utils.load_completion_callbacks' (imported from the utils module). Updated the mock patch paths to point to the correct location.
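The mocking fix above is the standard `unittest.mock` rule: patch the name where it is looked up at call time, not where the test author first saw it. A minimal, self-contained sketch (the module names `pkg_utils` and `pkg_handler` are hypothetical stand-ins for the real `utils` and `handler` modules):

```python
import sys
import types
from unittest import mock

# Hypothetical stand-in for the utils module that defines the callback loader.
pkg_utils = types.ModuleType("pkg_utils")
pkg_utils.load_completion_callbacks = lambda: ["real"]
sys.modules["pkg_utils"] = pkg_utils

# Hypothetical stand-in for handler.py, which calls the loader through utils.
pkg_handler = types.ModuleType("pkg_handler")
exec(
    "import pkg_utils\n"
    "def run():\n"
    "    return pkg_utils.load_completion_callbacks()\n",
    pkg_handler.__dict__,
)
sys.modules["pkg_handler"] = pkg_handler

# Patching a name the handler module never defined fails loudly,
# as the broken tests did:
try:
    with mock.patch("pkg_handler._load_completion_callbacks"):
        pass
    raise AssertionError("patch should have failed")
except AttributeError:
    pass

# Patching the attribute where it actually lives is what the fixed tests do:
with mock.patch("pkg_utils.load_completion_callbacks", return_value=["stub"]):
    assert sys.modules["pkg_handler"].run() == ["stub"]
```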
(cherry picked from commit ccc7665)

* Fix emitters-splunk: use event_logger instead of content_logger in the factories, and update the tests to match the current implementation. (cherry picked from commit a1910cf)

* Fix test: correct the evaluation metric name from toxicity_v1 to toxicity/v1. (cherry picked from commit d2a5711)

* Fix remaining mock paths and the dynamic aggregation test in util-genai-evals:
  - Fix 4 instances of handler._load_completion_callbacks -> utils.load_completion_callbacks in test_evaluators.py
  - Fix test_evaluation_dynamic_aggregation.py to set _aggregate_results to None instead of False, enabling dynamic environment variable reading as in the actual implementation
  (cherry picked from commit a20742f)

* Fix CI test order - install util-genai before dependent packages. The test for opentelemetry-util-genai-emitters-splunk fails with "ModuleNotFoundError: No module named 'opentelemetry.util.genai.emitters.spec'" because emitters-splunk depends on opentelemetry-util-genai (which contains the emitters.spec module) being installed first. Reordered the test steps to install and test the base util-genai package before the packages that depend on it. (cherry picked from commit e8bfae2)

* Fix CI: install all packages before running tests. Some tests have cross-dependencies between packages (util-genai tests import from util-genai-evals; emitters-splunk tests import from util-genai). Changed strategy to install all packages first (with --no-deps), then run each package's tests separately, ensuring all inter-package dependencies are available during testing.
(cherry picked from commit c8a3bdd)

* Add test dependencies for langchain instrumentation: add a [test] extra to pyproject.toml with langchain-core, langchain-openai, pytest-recording, vcrpy, pyyaml, and flaky; update the CI workflow to install the [instruments,test] dependencies. Fixes the CI failure "ModuleNotFoundError: No module named 'langchain_openai'".

* Upgrade flaky to >=3.8.1 for pytest 7.4.4 compatibility. Fixes "ImportError: cannot import name 'call_runtest_hook' from 'flaky.flaky_pytest_plugin'". flaky 3.7.0 is incompatible with the pytest 7.4.4 used in CI; upgrading to flaky>=3.8.1 resolves the issue.

* Fix deepeval test patching issues:
  - Fix incorrect patch targets for _instantiate_metrics and _run_deepeval: these are module-level functions, not class methods, so change from patch.object(class, method) to patch(module.function), and update the function signatures to match the actual implementations
  - Fix _build_llm_test_case lambda signatures: the function takes only one argument (invocation), not two
  - Fix test assertions for bias metric labels: 'Not Biased' for success=True (not 'pass'), 'Biased' for success=False (not 'fail')
  - Fix the default-metrics test expectation: remove 'faithfulness' from the expected defaults; the actual defaults are bias, toxicity, answer_relevancy, hallucination, sentiment
  All deepeval tests now pass (15 passed, 2 warnings).

* Fix langchain instrumentation callback handler tests:
  - Fixed LangchainCallbackHandler initialization to use the telemetry_handler parameter
  - Fixed _resolve_agent_name to not return the 'agent' tag as the agent name
  - Added missing workflow methods to _StubTelemetryHandler (start_workflow, stop_workflow, fail_workflow, fail_by_run_id)
  - Filter gen_ai.tool.* metadata from ToolCall attributes (stored in dedicated fields)
  - Process invocation_params in on_chat_model_start: extract model_name from invocation_params with higher priority, add invocation params with a request_ prefix to attributes, move ls_* metadata into a langchain_legacy sub-dict, set the provider from ls_provider metadata, and add callback.name and callback.id from serialized data
  - Configure pytest-recording (VCR) for cassette playback
  - Fix vcr fixture scopes and the cassette directory configuration
  All 7 callback handler agent tests now pass.

* Fix a langchain instrumentation test-isolation issue: unwrap all wrapper layers. Root cause: opentelemetry.instrumentation.utils.unwrap() only unwraps ONE layer of wrapt wrappers. When tests call uninstrument() and then re-instrument(), the second wrapping creates nested wrappers; unwrap removed only the outer layer, leaving the old wrapper active. Fix: modified _uninstrument() to unwrap ALL layers by looping while __wrapped__ exists, ensuring complete cleanup before re-instrumentation. Also added a _callback_handler reference to the instrumentor for test access. Result: all 9 langchain tests now pass (was 8 passed, 1 skipped).

* Add response-model extraction to callback_handler: extract response_model_name from generation.message.response_metadata. Fixes test_langchain_llm.py and test_langchain_llm_util.py and ensures the gen_ai.response.model attribute is set on spans.

* Fix deepeval compatibility: make CacheConfig optional. CacheConfig is only available in deepeval >= 3.7.0, so use a try-except import to support older versions and conditionally add cache_config to eval_kwargs. Fixes a CI ImportError on older deepeval versions.

* Upgrade the deepeval minimum version to 3.7.0: update pyproject.toml (deepeval>=0.21.0 -> deepeval>=3.7.0) and revert deepeval_runner.py to the original simple implementation. CacheConfig is required in deepeval >= 3.7.0; removing the compatibility code gives a cleaner solution.

* Add CacheConfig to the test stubs for deepeval: add a CacheConfig class to the stub modules in the test files and update the evaluate() stub signature to accept a cache_config parameter. Fixes CI import errors when using the deepeval stubs; both test_deepeval_evaluator.py and test_deepeval_sentiment_metric.py updated.

* Add a runtime check for deepeval module availability: add a check in run_evaluation to detect when deepeval is patched to None, raising ImportError when sys.modules['deepeval'] is None. Fixes test_dependency_missing, which patches deepeval to None, and ensures proper error handling when the dependency is missing. Previously CacheConfig wasn't in the stubs, so the import would fail; now that CacheConfig is stubbed (for deepeval >= 3.7.0), the runtime check handles the dependency_missing test case.

* Apply code formatting (auto-formatter cleanup): format the pyproject.toml include list to a single line, add blank lines in deepeval_runner.py for PEP 8 compliance, and format test-file function parameters to multi-line style. No functional changes, formatting only.

* Fix ruff linting issues: remove unused imports (MagicMock, TracerProvider), sort imports in test_splunk_emitters.py, and remove trailing whitespace.

* Apply ruff formatting.

* Remove the CacheConfig import from deepeval_runner: the cache_config parameter has a default value in deepeval.evaluate(), so there is no need to import or pass it explicitly. Works with all deepeval versions (>=0.21.0); simpler code without conditional imports.

* Complete the CacheConfig cleanup and fix a flaky test: remove CacheConfig from deepeval_runner.py (use the default parameter), remove the sys.modules runtime check (no longer needed), remove the CacheConfig stubs from the test files, remove the cache_config parameter from the stub evaluate() functions, and fix a flaky timestamp test by using >= instead of > for end_time (Windows CI can produce identical timestamps within the same nanosecond).

* Revert production code to the main branch versions: deepeval_runner.py (keep the CacheConfig import/usage), langchain/callback_handler.py, and langchain/__init__.py. These changes will be submitted in a separate MR; the test-file improvements are kept in this branch.

* ruff++

---------

Co-authored-by: Sergey Sergeev <[email protected]>
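The unwrap fix described above can be sketched in isolation. Assuming wrappers that record their target via `functools.wraps` (which sets `__wrapped__`, the same attribute wrapt exposes), a loop over `__wrapped__` peels every layer, where a single `unwrap()` call would only remove the outermost one:

```python
import functools

def instrument(fn):
    """Wrap fn; functools.wraps records the wrapped callable as __wrapped__."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return "traced:" + fn(*args, **kwargs)
    return wrapper

def unwrap_all(obj, attr):
    # The fix described above: peel EVERY wrapper layer, not just the
    # outermost, so re-instrumentation after uninstrument() starts clean.
    func = getattr(obj, attr)
    while hasattr(func, "__wrapped__"):
        func = func.__wrapped__
    setattr(obj, attr, func)

class Target:  # hypothetical stand-in for the instrumented class
    pass

def call():
    return "original"

Target.call = staticmethod(call)

# Instrumenting twice without a full unwrap nests two wrappers:
Target.call = instrument(Target.call)
Target.call = instrument(Target.call)
assert Target.call() == "traced:traced:original"

# A single-layer unwrap would leave the inner wrapper active;
# looping over __wrapped__ restores the pristine function:
unwrap_all(Target, "call")
assert Target.call() == "original"
```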
1 parent 8a6ec9a commit 5edff3d

File tree

19 files changed: +317 -213 lines

.github/workflows/ci-main.yaml

Lines changed: 39 additions & 31 deletions
@@ -22,43 +22,51 @@ jobs:
       fail-fast: false
       matrix:
         os: [ubuntu-latest, windows-latest, macos-latest]
-        python-version: ['3.10', '3.11', '3.12', '3.13']
+        python-version: ["3.10", "3.11", "3.12", "3.13"]

     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v5

-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v6
-        with:
-          python-version: ${{ matrix.python-version }}
+      - name: Enable long paths on Windows
+        if: runner.os == 'Windows'
+        run: |
+          git config --system core.longpaths true

-      - name: Install dependencies
-        run: |
-          python -m pip install --upgrade pip
-          pip install pytest==7.4.4 pytest-cov==4.1.0
-          pip install -r dev-genai-requirements.txt
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v6
+        with:
+          python-version: ${{ matrix.python-version }}

-      - name: Run tests - opentelemetry-util-genai-emitters-splunk
-        run: |
-          pip install -e util/opentelemetry-util-genai-emitters-splunk --no-deps
-          python -m pytest util/opentelemetry-util-genai-emitters-splunk/tests/ -v
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install pytest==7.4.4 pytest-cov==4.1.0
+          pip install -r dev-genai-requirements.txt

-      - name: Run tests - opentelemetry-util-genai-evals
-        run: |
-          pip install -e util/opentelemetry-util-genai-evals --no-deps
-          python -m pytest util/opentelemetry-util-genai-evals/tests/ -v
+      - name: Install all genai packages
+        run: |
+          pip install -e util/opentelemetry-util-genai --no-deps
+          pip install -e util/opentelemetry-util-genai-evals --no-deps
+          pip install -e util/opentelemetry-util-genai-evals-deepeval --no-deps
+          pip install -e util/opentelemetry-util-genai-emitters-splunk --no-deps
+          pip install -e "instrumentation-genai/opentelemetry-instrumentation-langchain[instruments,test]"

-      - name: Run tests - opentelemetry-util-genai-evals-deepeval
-        run: |
-          pip install -e util/opentelemetry-util-genai-evals-deepeval --no-deps
-          python -m pytest util/opentelemetry-util-genai-evals-deepeval/tests/ -v
+      - name: Run tests - opentelemetry-util-genai
+        run: |
+          python -m pytest util/opentelemetry-util-genai/tests/ -v --cov=opentelemetry.util.genai --cov-report=term-missing

-      - name: Run tests - opentelemetry-instrumentation-langchain
-        run: |
-          pip install -e instrumentation-genai/opentelemetry-instrumentation-langchain --no-deps
-          python -m pytest instrumentation-genai/opentelemetry-instrumentation-langchain/tests/ -v
+      - name: Run tests - opentelemetry-util-genai-emitters-splunk
+        run: |
+          python -m pytest util/opentelemetry-util-genai-emitters-splunk/tests/ -v

-      - name: Run tests - opentelemetry-util-genai
-        run: |
-          pip install -e util/opentelemetry-util-genai --no-deps
-          python -m pytest util/opentelemetry-util-genai/tests/ -v --cov=opentelemetry.util.genai --cov-report=term-missing
+      - name: Run tests - opentelemetry-util-genai-evals
+        run: |
+          python -m pytest util/opentelemetry-util-genai-evals/tests/ -v
+
+      - name: Run tests - opentelemetry-util-genai-evals-deepeval
+        run: |
+          python -m pytest util/opentelemetry-util-genai-evals-deepeval/tests/ -v
+
+      - name: Run tests - opentelemetry-instrumentation-langchain
+        run: |
+          python -m pytest instrumentation-genai/opentelemetry-instrumentation-langchain/tests/ -v

dev-genai-requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ markupsafe>=2.0.1
 codespell==2.1.0
 requests==2.32.3
 ruamel.yaml==0.17.21
-flaky==3.7.0
+flaky>=3.8.1
 pre-commit==3.7.0; python_version >= '3.9'
 pre-commit==3.5.0; python_version < '3.9'
 ruff==0.6.9

dev-requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ markupsafe>=2.0.1
 codespell==2.1.0
 requests==2.32.4
 ruamel.yaml==0.17.21
-flaky==3.7.0
+flaky>=3.8.1
 pre-commit==3.7.0; python_version >= '3.9'
 pre-commit==3.5.0; python_version < '3.9'
 ruff==0.6.9

instrumentation-genai/opentelemetry-instrumentation-langchain/examples/multi_agent_travel_planner/client_server_version/client.py

Lines changed: 11 additions & 11 deletions
@@ -98,21 +98,21 @@ def run_client(
     poison_config = None
     if use_poison:
         poison_config = generate_random_poison_config()
-        print(f"\n💉 Poison Configuration:")
+        print("\n💉 Poison Configuration:")
         print(f"   Probability: {poison_config['prob']}")
         print(f"   Types: {', '.join(poison_config['types'])}")
         print(f"   Max snippets: {poison_config['max']}")
         print(f"   Seed: {poison_config['seed']}")

     # Generate user request
     user_request = generate_travel_request(origin, destination)
-    print(f"\n✉️ User Request:")
+    print("\n✉️ User Request:")
     print(f"   {user_request}")

     # Get server URL from environment or default to localhost
     server_url = os.getenv("SERVER_URL", "http://localhost:8080")

-    print(f"\n🔌 Connecting to Flask server...")
+    print("\n🔌 Connecting to Flask server...")
     print(f"   URL: {server_url}")

     # Prepare request data
@@ -149,31 +149,31 @@ def run_client(
         print(f"👥 Travellers: {result['travellers']}")

         if result.get('poison_events'):
-            print(f"\n💉 Poison Events Triggered:")
+            print("\n💉 Poison Events Triggered:")
             for event in result['poison_events']:
                 print(f"   - {event}")

-        print(f"\n✈️ Flight Summary:")
+        print("\n✈️ Flight Summary:")
         print(f"   {result['flight_summary']}")

-        print(f"\n🏨 Hotel Summary:")
+        print("\n🏨 Hotel Summary:")
         print(f"   {result['hotel_summary']}")

-        print(f"\n🎭 Activities Summary:")
+        print("\n🎭 Activities Summary:")
         print(f"   {result['activities_summary']}")

-        print(f"\n🎉 Final Itinerary:")
+        print("\n🎉 Final Itinerary:")
         print("─" * 60)
         print(result['final_itinerary'])
         print("─" * 60)

         if result.get('agent_steps'):
-            print(f"\n🤖 Agent Steps:")
+            print("\n🤖 Agent Steps:")
             for step in result['agent_steps']:
                 print(f"   - {step['agent']}: {step['status']}")

     except requests.exceptions.Timeout:
-        print(f"\n❌ Error: Request timed out after 5 minutes")
+        print("\n❌ Error: Request timed out after 5 minutes")
         sys.exit(1)
     except requests.exceptions.RequestException as e:
         print(f"\n❌ Error: Failed to connect to server: {e}")
@@ -184,7 +184,7 @@ def run_client(
         sys.exit(1)
     except KeyError as e:
         print(f"\n❌ Error: Missing key in response: {e}")
-        print(f"Response:")
+        print("Response:")
         pprint(result)
         sys.exit(1)
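Every client.py hunk above is the same mechanical cleanup: f-strings that contain no placeholders lose their `f` prefix (Ruff rule F541). A minimal illustration:

```python
# An f-string with no {placeholder} is just a plain string carrying a
# redundant prefix; Ruff flags it as F541.
assert f"\nPoison Configuration:" == "\nPoison Configuration:"

# The prefix only starts doing work once a placeholder appears:
prob = 0.3
assert f"Probability: {prob}" == "Probability: 0.3"
```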

instrumentation-genai/opentelemetry-instrumentation-langchain/examples/multi_agent_travel_planner/client_server_version/main.py

Lines changed: 4 additions & 6 deletions
@@ -190,17 +190,15 @@

 from __future__ import annotations

-import argparse
 import json
 import os
 import random
 import sys
 from datetime import datetime, timedelta
-from typing import Annotated, Any, Dict, List, Optional, TypedDict
+from typing import Annotated, Dict, List, Optional, TypedDict
 from uuid import uuid4
 from pprint import pprint

-from dotenv import load_dotenv
 from flask import Flask, request, jsonify
 from langchain_core.messages import (
     AIMessage,
@@ -220,7 +218,7 @@

 from opentelemetry.sdk.trace import TracerProvider
 from opentelemetry.sdk.trace.export import BatchSpanProcessor
-from opentelemetry.trace import SpanKind, Status, StatusCode, Tracer
+from opentelemetry.trace import SpanKind
 from opentelemetry import _events, _logs, metrics, trace
 from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
 from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
@@ -529,7 +527,7 @@ def pretty_print_message(message, indent=False):

         indented = "\n".join("\t" + c for c in pretty_message.split("\n"))
         print(indented, file=sys.stderr, flush=True)
-    except Exception as e:
+    except Exception:
         # Fallback if pretty_repr fails
         print(f"Message: {message}", file=sys.stderr, flush=True)

@@ -985,7 +983,7 @@ def plan():
         poison_config=poison_config,
     )

-    print(f"[SERVER] Travel plan completed successfully", file=sys.stderr, flush=True)
+    print("[SERVER] Travel plan completed successfully", file=sys.stderr, flush=True)
     print("\n" + "="*80, file=sys.stderr)
     print("TRAVEL PLAN RESULT:", file=sys.stderr)
     pprint(result, stream=sys.stderr)

instrumentation-genai/opentelemetry-instrumentation-langchain/pyproject.toml

Lines changed: 8 additions & 0 deletions
@@ -34,6 +34,14 @@ dependencies = [
 instruments = [
     "langchain >= 0.3.21",
 ]
+test = [
+    "langchain-core >= 1.0.0",
+    "langchain-openai >= 1.0.0",
+    "pytest-recording >= 0.13.0",
+    "vcrpy >= 7.0.0",
+    "pyyaml >= 6.0.0",
+    "flaky >= 3.8.1",
+]

 [project.entry-points.opentelemetry_instrumentor]
 langchain = "opentelemetry.instrumentation.langchain:LangChainInstrumentor"

instrumentation-genai/opentelemetry-instrumentation-langchain/tests/conftest.py

Lines changed: 56 additions & 5 deletions
@@ -138,7 +138,7 @@ def chatOpenAI_client():
     return ChatOpenAI()


-@pytest.fixture(scope="module")
+@pytest.fixture(scope="function")
 def vcr_config():
     return {
         "filter_headers": [
@@ -149,9 +149,19 @@ def vcr_config():
         ],
         "decode_compressed_response": True,
         "before_record_response": scrub_response_headers,
+        "serializer": "yaml",
     }


+@pytest.fixture(scope="session")
+def vcr_cassette_dir():
+    """Override the default cassette directory to avoid nested subdirectories."""
+    import os
+
+    # Return the cassettes directory path
+    return os.path.join(os.path.dirname(__file__), "cassettes")
+
+
 @pytest.fixture(scope="function")
 def instrument_no_content(tracer_provider, event_logger_provider, meter_provider):
     if LangChainInstrumentor is None:  # pragma: no cover - skip when dependency missing
@@ -175,16 +185,35 @@ def instrument_with_content(tracer_provider, event_logger_provider, meter_provid
     if LangChainInstrumentor is None:  # pragma: no cover
         pytest.skip("opentelemetry-instrumentation-langchain not available")
     set_prompt_capture_enabled(True)
+
+    # Reset util-genai singleton handler to ensure clean state
+    import opentelemetry.util.genai.handler as _util_handler_mod  # noqa: PLC0415
+
+    if hasattr(_util_handler_mod.get_telemetry_handler, "_default_handler"):
+        setattr(_util_handler_mod.get_telemetry_handler, "_default_handler", None)
+
+    # Create new instrumentor for each test
     instrumentor = LangChainInstrumentor()
+
+    # If already instrumented (from previous test), uninstrument first
+    if instrumentor._is_instrumented_by_opentelemetry:
+        instrumentor.uninstrument()
+
     instrumentor.instrument(
         tracer_provider=tracer_provider,
         event_logger_provider=event_logger_provider,
         meter_provider=meter_provider,
     )

     yield instrumentor
+
     set_prompt_capture_enabled(True)
-    instrumentor.uninstrument()
+    # Clean up: uninstrument and reset singleton
+    if instrumentor._is_instrumented_by_opentelemetry:
+        instrumentor.uninstrument()
+
+    if hasattr(_util_handler_mod.get_telemetry_handler, "_default_handler"):
+        setattr(_util_handler_mod.get_telemetry_handler, "_default_handler", None)


 @pytest.fixture(scope="function")
@@ -222,21 +251,37 @@ def instrument_with_content_util(
             OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT: "SPAN_ONLY",  # util-genai content gate
         }
     )
+
     # Reset singleton so new env vars are applied
     import opentelemetry.util.genai.handler as _util_handler_mod  # noqa: PLC0415

     if hasattr(_util_handler_mod.get_telemetry_handler, "_default_handler"):
         setattr(_util_handler_mod.get_telemetry_handler, "_default_handler", None)
+
+    # Create new instrumentor for each test
     instrumentor = LangChainInstrumentor()
+
+    # If already instrumented (from previous test), uninstrument first
+    if instrumentor._is_instrumented_by_opentelemetry:
+        instrumentor.uninstrument()
+
     instrumentor.instrument(
         tracer_provider=tracer_provider,
         event_logger_provider=event_logger_provider,
         meter_provider=meter_provider,
     )
+
     yield instrumentor
+
     os.environ.pop(OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT, None)
     set_prompt_capture_enabled(True)
-    instrumentor.uninstrument()
+
+    # Clean up: uninstrument and reset singleton
+    if instrumentor._is_instrumented_by_opentelemetry:
+        instrumentor.uninstrument()
+
+    if hasattr(_util_handler_mod.get_telemetry_handler, "_default_handler"):
+        setattr(_util_handler_mod.get_telemetry_handler, "_default_handler", None)


 class LiteralBlockScalar(str):
@@ -305,6 +350,11 @@ def deserialize(cassette_string):

 try:  # pragma: no cover - optional pytest-vcr dependency
     import pytest_recording  # type: ignore # noqa: F401
+    import vcr as vcr_module  # type: ignore # noqa: F401
+
+    # Register custom YAML serializer globally
+    vcr_module.VCR().register_serializer("yaml", PrettyPrintJSONBody)
+
 except ModuleNotFoundError:  # pragma: no cover - provide stub when plugin missing

     @pytest.fixture(name="vcr", scope="module")
@@ -316,9 +366,10 @@ def register_serializer(self, *_args, **_kwargs):
         return _VCRStub()


-@pytest.fixture(scope="module", autouse=True)
+@pytest.fixture(scope="function", autouse=True)
 def fixture_vcr(vcr):
-    vcr.register_serializer("yaml", PrettyPrintJSONBody)
+    # When pytest-recording is installed, vcr is a Cassette and we don't need to do anything
+    # The serializer is already registered on the VCR module above
     return vcr
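The repeated fixture dance in conftest.py (clearing `_default_handler` before and after each test) targets a function-attribute singleton. A toy sketch of the pattern, assuming the real `get_telemetry_handler` caches its handler the same way:

```python
class TelemetryHandler:  # hypothetical stand-in for the util-genai handler
    pass

def get_telemetry_handler(**kwargs):
    # Cache the handler on the function object itself, as util-genai does.
    handler = getattr(get_telemetry_handler, "_default_handler", None)
    if handler is None:
        handler = TelemetryHandler(**kwargs)
        get_telemetry_handler._default_handler = handler
    return handler

first = get_telemetry_handler()
assert get_telemetry_handler() is first  # subsequent calls reuse the cache

# What the fixtures do before/after each test: drop the cache so the next
# call rebuilds the handler under fresh environment variables.
if hasattr(get_telemetry_handler, "_default_handler"):
    get_telemetry_handler._default_handler = None

assert get_telemetry_handler() is not first
```

Without this reset, a handler built under one test's environment variables would silently leak into the next test, which is exactly the isolation problem the fixtures above guard against.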
