Commit 39c4082

feat(docs): Fix broken link to Stubs in docs, added observability-testing.
Signed-off-by: BillyDoesDev <[email protected]>
1 parent 2b066db

File tree: 2 files changed, +84 −2 lines changed

src/pages/concepts/reference/glossary.js

Lines changed: 8 additions & 2 deletions
@@ -4,7 +4,7 @@ import useDocusaurusContext from "@docusaurus/useDocusaurusContext";

 function Glossary() {
   const [state, setState] = useState(() => {
-    const alphabet = "ABCEFGIMRSTUW";
+    const alphabet = "ABCEFGIMORSTUW";
     const initialState = {};
     for (let i = 0; i < alphabet.length; i++) {
       initialState[alphabet[i]] = true;
@@ -96,6 +96,12 @@ function Glossary() {
       link: "/docs/concepts/reference/glossary/microservice-testing",
     },
   ],
+  O: [
+    {
+      name: "Observability Testing",
+      link: "/docs/concepts/reference/glossary/observability-testing",
+    },
+  ],
   R: [
     {
       name: "Regression Testing",
@@ -105,7 +111,7 @@ function Glossary() {
   S: [
     {
       name: "Stubs",
-      ink: "/docs/concepts/reference/glossary/stubs",
+      link: "/docs/concepts/reference/glossary/stubs",
     },
     {
       name: "Software Testing Life Cycle",
Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
---
id: observability-testing
title: Observability Testing with Keploy
sidebar_label: Observability Testing
description: This glossary has an explanation of all the terminologies that beginners find difficult to understand at first glance.
tags:
  - explanation
  - glossary
  - observability
  - testing
  - monitoring
keywords:
  - API
  - observability
  - testing
  - logs
  - metrics
  - traces
---
## What is Observability Testing?

<figure>
<img src="https://grafana.com/media/blog/otel-lgtm-docker-image/docker-image_components.png?w=900" />
<figcaption class="figcaption">The OTEL-LGTM Stack. Image Credits: <a href="https://grafana.com/blog/2024/03/13/an-opentelemetry-backend-in-a-docker-image-introducing-grafana/otel-lgtm/">Grafana</a></figcaption>
</figure>

Observability in testing ensures that software systems produce sufficient data (logs, metrics, and traces, the "holy trinity" of telemetry) to understand their internal state and diagnose issues during or after tests.

Popular tools for collecting and visualizing this telemetry are [Grafana](https://grafana.com) and [Prometheus](https://prometheus.io/). To learn more about getting started with these tools, check out the official [getting-started guide](https://grafana.com/docs/grafana/latest/getting-started/get-started-grafana-prometheus/).
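The three signal types can be sketched with nothing beyond the standard library: a structured log line, a simple latency metric, and a trace identifier that ties both to one request. This is a minimal illustration; the logger name and field names (`trace_id`, `latency_ms`) are hypothetical, and a real service would emit these through an SDK such as OpenTelemetry.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")  # hypothetical service name

def handle_request():
    """Simulate one request that emits all three telemetry signals."""
    trace_id = uuid.uuid4().hex          # trace: unique id for this request
    start = time.perf_counter()
    time.sleep(0.01)                     # stand-in for real work
    latency_ms = (time.perf_counter() - start) * 1000  # metric: latency

    # log: structured record carrying the trace id for later correlation
    log.info(json.dumps({
        "event": "request_done",
        "trace_id": trace_id,
        "latency_ms": round(latency_ms, 2),
    }))
    return trace_id, latency_ms

trace_id, latency_ms = handle_request()
```

Because the log line carries the same `trace_id` as the trace, a backend such as the LGTM stack pictured above can join the two signals for a single request.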
## What Does Observability Add To Testing?

- **Improved debugging**: easier to pinpoint failures and their causes.
- **Faster incident response**: helps teams react to test failures or outages.
- **Enhanced confidence in system behavior**: tests validate not just outcomes, but how systems behave under load or failure.

## Example Observability Checks During Testing

- Validate that API request traces are emitted for each call.
- Ensure error logs are generated when failures occur.
- Confirm that metrics (e.g., response times, throughput) meet expected thresholds during load tests.
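The three checks above can be sketched as plain assertions, assuming the test harness has already captured telemetry into Python collections. The names (`requests`, `traces`, `error_logs`, `latencies_ms`) and the sample values are hypothetical:

```python
import statistics

# Telemetry captured during a hypothetical test run
requests = ["req-1", "req-2", "req-3"]
traces = {"req-1", "req-2", "req-3"}          # trace ids seen by the collector
error_logs = [{"level": "ERROR", "msg": "timeout on req-2"}]
latencies_ms = [12.0, 18.5, 25.0, 31.2, 14.8]

# 1. Every request should have produced a trace.
missing = [r for r in requests if r not in traces]
assert not missing, f"requests without traces: {missing}"

# 2. A failure (req-2 timed out) should have produced an error log.
assert any(e["level"] == "ERROR" for e in error_logs)

# 3. Latency metrics should stay under an agreed threshold.
p95 = statistics.quantiles(latencies_ms, n=20)[-1]   # 95th percentile
assert p95 < 50.0, f"p95 latency too high: {p95:.1f} ms"
```

In a real pipeline the lists would be filled by querying the telemetry backend (e.g., Prometheus or a trace collector) after the test run, rather than hard-coded.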
## Challenges in Observability Testing

- **Signal overload (too much data)**
  - Systems emit large volumes of logs, metrics, and traces, making it difficult to identify meaningful signals amidst the noise.
- **Lack of automated assertions**
  - Observability data is collected but not actively validated in test cases, so issues go undetected unless the data is reviewed manually.
- **Lack of production fidelity in test environments**
  - CI or staging environments may not emit the same telemetry as production, leading to false positives or missed issues.
- **Non-determinism in metrics**
  - Performance data fluctuates across test runs, making it difficult to assert on exact values reliably.
- **Difficulty correlating logs, metrics, and traces**
  - Without unified tooling, it is hard to trace a single issue across the different observability signals.
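A common mitigation for the correlation problem is to stamp every log record with the active trace id, so logs and traces can be joined on that field later. A stdlib-only sketch follows; the `TraceIdFilter` class and the logger name are hypothetical, and production systems would typically get the id from an OpenTelemetry context instead of passing it in by hand:

```python
import logging

class TraceIdFilter(logging.Filter):
    """Inject the current trace id into every log record."""
    def __init__(self, trace_id: str):
        super().__init__()
        self.trace_id = trace_id

    def filter(self, record: logging.LogRecord) -> bool:
        record.trace_id = self.trace_id  # attach the id to the record
        return True                      # keep the record

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(trace_id)s %(levelname)s %(message)s"))
log = logging.getLogger("payments")  # hypothetical service name
log.addHandler(handler)
log.setLevel(logging.INFO)

# Every log emitted while this filter is attached shares one trace id,
# so a backend query for "abc123" returns the trace *and* its related logs.
log.addFilter(TraceIdFilter("abc123"))
log.info("charge authorized")
```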
## Overcoming Challenges with Keploy

Keploy is an innovative testing tool designed to address many of the challenges associated with observability testing. Here's how it helps:

<img src="https://keploy.io/docs/gif/record-replay.gif?raw=true"/>
<br/>

- **Automated Test Case Generation**: Keploy can generate test cases by recording your application's network calls. This automation significantly reduces the time and effort required to create comprehensive test suites.
- **Dependency Mocking**: Keploy automatically generates dependency mocks from recorded network interactions, allowing faster and more efficient testing than traditional manual mocking.
- **Realistic Testing Environment**: With its built-in proxy, Keploy records the calls between services, creating a more accurate representation of the production environment in your tests.
- **Efficient Integration Testing**: By capturing and replaying inter-service communications, Keploy enables effective integration testing without the need to set up complex environments.
- **Reduced Test Maintenance**: Because Keploy generates tests from actual system behavior, it helps keep tests up to date as that behavior changes, reducing the maintenance burden.
- **Performance Testing**: The recorded interactions can be used to simulate realistic load scenarios, aiding performance testing of both the system and its observability pipeline.

By leveraging Keploy's capabilities, development teams can overcome many of the traditional challenges associated with observability testing, leading to more robust and reliable distributed systems.
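The record-and-replay idea behind such tools can be sketched generically: capture each outbound request/response pair once against the live dependency, then serve the recorded responses as mocks on replay. This is a conceptual illustration only, not Keploy's actual API; all names here are invented:

```python
from typing import Callable, Dict

class RecordReplayClient:
    """Record live responses once, then replay them as mocks."""
    def __init__(self, live_call: Callable[[str], str]):
        self.live_call = live_call        # the real downstream service
        self.recordings: Dict[str, str] = {}
        self.mode = "record"

    def get(self, url: str) -> str:
        if self.mode == "record":
            # Hit the real service and store the response.
            self.recordings[url] = self.live_call(url)
        # In replay mode the stored response acts as the mock.
        return self.recordings[url]

# A lambda stands in for a real downstream HTTP call.
client = RecordReplayClient(live_call=lambda url: f"payload for {url}")
client.get("/users/42")        # recorded from the "live" service

client.mode = "replay"         # tests now run without the dependency
response = client.get("/users/42")
```

Replaying recorded traffic this way is what lets integration tests run without standing up the full environment, which is the core of the workflow shown in the GIF above.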
