big refactoring, following consistent format
Signed-off-by: Juraci Paixão Kröhling <[email protected]>
1 parent: 201cf85. Commit: e4e5695. 73 changed files, with 490 additions and 380 deletions.
# 🍚 Contributing your recipes

Do you have a cool recipe to contribute? That's great! Here's what I look for when reviewing a contribution:

- Keep a narrow focus and demonstrate one concept or integration
- The recipe has to be reproducible by everyone
- Specify the versions of the software you used for testing the recipe
- It should be self-contained: it might reference other recipes, but everything needed to succeed is included in the recipe itself
- Unless you are demonstrating a specific exporter, export the telemetry to Grafana Cloud for consistency with the other recipes
- Recipes featuring other commercial vendors are OK as long as they can be reproduced by everyone with a free account. If the main purpose of the recipe is to show how to export data to a particular vendor, propose one that is similar to [`grafana-cloud`](./grafana-cloud/)
# 🍜 Recipe: Grafana Cloud from Kubernetes

This recipe shows how to send telemetry data to Grafana Cloud with the OpenTelemetry Collector running on a Kubernetes cluster. It is very similar to the ["grafana-cloud"](../grafana-cloud/) recipe, with extra steps to make it run on Kubernetes. If you are not familiar with the Collector or with sending data to Grafana Cloud via OTLP, try that one first.

One interesting aspect of this recipe is that it keeps the credentials in a Secret, mounting them as environment variables. This way, the credentials aren't exposed in plain text.

## 🧄 Ingredients

- OpenTelemetry Operator; see the main [`README.md`](../README.md) for instructions
- The `telemetrygen` tool, or any other application that is able to send OTLP data to our Collector
- The `otelcol-cr.yaml` file from this directory
- A `GRAFANA_CLOUD_USER` environment variable, also known as the Grafana Cloud instance ID, found under the instructions for "OpenTelemetry" on your Grafana Cloud stack
- A `GRAFANA_CLOUD_TOKEN` environment variable, which can be generated under the instructions for "OpenTelemetry" on your Grafana Cloud stack
- The endpoint for your stack

## 🥣 Preparation

1. Create and switch to a namespace for our recipe
   ```terminal
   kubectl create ns grafana-cloud-from-kubernetes
   kubens grafana-cloud-from-kubernetes
   ```

2. Create a secret with the credentials:
   ```terminal
   kubectl create secret generic grafana-cloud-credentials --from-literal=GRAFANA_CLOUD_USER="..." --from-literal=GRAFANA_CLOUD_TOKEN="..."
   ```

3. Change the `endpoint` parameter of the `otlphttp` exporter to point to your stack's endpoint

4. Install the OTel Collector custom resource
   ```terminal
   kubectl apply -f grafana-cloud-from-kubernetes/otelcol-cr.yaml
   ```

5. Open a port-forward to the Collector:
   ```terminal
   kubectl port-forward svc/grafana-cloud-from-kubernetes-collector 4317
   ```

6. Send some telemetry to your Collector
   ```terminal
   telemetrygen traces --traces 2 --otlp-insecure --otlp-attributes='recipe="grafana-cloud-from-kubernetes"'
   ```

7. Open your Grafana instance, go to Explore, and select the appropriate data source, such as "...-traces". If you used the command above, click "Search" and you should see two traces listed, each with two spans.

## 😋 Executed last time with these versions

The most recent execution of this recipe was done with these versions:

- OpenTelemetry Operator v0.100.1
- OpenTelemetry Collector Contrib v0.101.0
- `telemetrygen` v0.101.0
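The secret created in step 2 feeds the Collector's `basicauth` extension, which combines the user/token pair into a standard HTTP Basic `Authorization` header on the outgoing OTLP requests. A minimal shell sketch of that encoding, with placeholder credentials (`user` and `token` below are illustrative, not real values):

```shell
# Illustrative only: the basicauth extension performs this encoding
# internally; the values below are placeholders, not real credentials.
GRAFANA_CLOUD_USER="user"
GRAFANA_CLOUD_TOKEN="token"
AUTH_HEADER="Authorization: Basic $(printf '%s:%s' "$GRAFANA_CLOUD_USER" "$GRAFANA_CLOUD_TOKEN" | base64)"
echo "$AUTH_HEADER"
# prints: Authorization: Basic dXNlcjp0b2tlbg==
```

Because the raw pair is only base64-encoded, not encrypted, the recipe's choice to keep it in a Kubernetes Secret rather than in the config file is what keeps it out of plain sight.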
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: grafana-cloud-from-kubernetes
spec:
  image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.101.0
  envFrom:
  - secretRef:
      name: grafana-cloud-credentials
  config:
    extensions:
      basicauth:
        client_auth:
          username: "${env:GRAFANA_CLOUD_USER}"
          password: "${env:GRAFANA_CLOUD_TOKEN}"

    receivers:
      otlp:
        protocols:
          grpc: {}

    exporters:
      otlphttp:
        endpoint: https://otlp-gateway-prod-eu-west-2.grafana.net/otlp
        auth:
          authenticator: basicauth

    service:
      extensions: [ basicauth ]
      pipelines:
        traces:
          receivers: [ otlp ]
          processors: [ ]
          exporters: [ otlphttp ]
        logs:
          receivers: [ otlp ]
          processors: [ ]
          exporters: [ otlphttp ]
        metrics:
          receivers: [ otlp ]
          processors: [ ]
          exporters: [ otlphttp ]
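The `${env:...}` placeholders above are resolved by the Collector's configuration loader at startup, reading the variables that `envFrom` injected from the Secret. The effect is comparable to this shell sketch (illustrative only; the Collector performs its own expansion, and this is not its actual code):

```shell
# Simulate the Collector's ${env:VAR} expansion for one config line.
# Placeholder value; in the recipe it comes from the Kubernetes Secret.
GRAFANA_CLOUD_USER="user"
line='username: "${env:GRAFANA_CLOUD_USER}"'
rendered=$(printf '%s\n' "$line" | sed "s/\${env:GRAFANA_CLOUD_USER}/$GRAFANA_CLOUD_USER/")
echo "$rendered"
# prints: username: "user"
```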
# 🍜 Recipe: Grafana Cloud

This recipe shows how to send telemetry data to Grafana Cloud with the OpenTelemetry Collector. For most use cases, the recommended way to send data is via the OTLP endpoint, given that the Collector already has an internal representation in that format. It's also better to send OTLP logs via the OTLP endpoint to Grafana Cloud Logs, as it appropriately places some values, such as the trace ID and span ID, in its new metadata storage, which isn't part of the message body or index.

## 🧄 Ingredients

- The `telemetrygen` tool, or any other application that is able to send OTLP data to our Collector
- The `otelcol.yaml` file from this directory
- A `GRAFANA_CLOUD_USER` environment variable, also known as the Grafana Cloud instance ID, found under the instructions for "OpenTelemetry" on your Grafana Cloud stack
- A `GRAFANA_CLOUD_TOKEN` environment variable, which can be generated under the instructions for "OpenTelemetry" on your Grafana Cloud stack
- The endpoint for your stack

## 🥣 Preparation

1. Change the `endpoint` parameter of the `otlphttp` exporter to point to your stack's endpoint

2. Export the environment variables `GRAFANA_CLOUD_USER` and `GRAFANA_CLOUD_TOKEN`

3. Run a Collector distribution with the provided configuration file
   ```terminal
   otelcol-contrib --config otelcol.yaml
   ```

4. Send some telemetry to your Collector
   ```terminal
   telemetrygen traces --traces 2 --otlp-insecure --otlp-attributes='recipe="grafana-cloud"'
   ```

5. Open your Grafana instance, go to Explore, and select the appropriate data source, such as "...-traces". If you used the command above, click "Search" and you should see two traces listed, each with two spans.

## 😋 Executed last time with these versions

The most recent execution of this recipe was done with these versions:

- OpenTelemetry Collector Contrib v0.101.0
- `telemetrygen` v0.101.0
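Step 2 is easy to miss: if either variable is empty, the exporter authenticates with blank credentials and the backend rejects the data. A small pre-flight check can catch that before starting the Collector. This is a generic shell pattern with placeholder values, not part of the recipe's files:

```shell
#!/usr/bin/env bash
# Fail fast if the credentials this recipe expects are not exported.
# Variable names come from the recipe; the check itself is generic.
require_env() {
  local var missing=0
  for var in "$@"; do
    if [ -z "${!var}" ]; then
      echo "error: $var is not set" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Placeholder values for illustration; in real use, export your own.
export GRAFANA_CLOUD_USER="user" GRAFANA_CLOUD_TOKEN="token"
require_env GRAFANA_CLOUD_USER GRAFANA_CLOUD_TOKEN && echo "credentials present"
```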
# 🍜 Recipe: Kafka on Kubernetes

This recipe places a Kafka cluster between two layers of Collectors, showing how to scale a collection pipeline so that it can absorb spikes in traffic without putting extra pressure on the backend. The idea is that the first layer can scale according to demand, sending data to Kafka, which is then consumed by a static set of Collectors. This architecture is suitable for scaling Collectors when the backend can eventually catch up with the traffic, as is usually the case with sudden spikes.

Note that we've used the `transform` processor to add the current timestamp to all spans at both layers (`published_at` and `consumed_at`), so that we know when something was sent to the queue and when it was consumed from the queue.
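The Collector configs that do this timestamping are not reproduced in this diff. A `transform` processor stanza along these lines would produce the `published_at` attribute on the publisher side; the exact OTTL statement is an assumption based on the description above, not copied from the recipe's files:

```yaml
processors:
  transform:
    trace_statements:
      - context: span
        statements:
          # Assumed statement: stamp each span with the publish time.
          - set(attributes["published_at"], Now())
```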
## 🧄 Ingredients

- OpenTelemetry Operator; see the main [`README.md`](../README.md) for instructions
- A Kafka cluster and one topic for each telemetry data type (metrics, logs, traces)
- The `telemetrygen` tool, or any other application that is able to send OTLP data to our Collector
- The `otelcol-pub.yaml` and `otelcol-sub.yaml` files from this directory
- A `GRAFANA_CLOUD_USER` environment variable, also known as the Grafana Cloud instance ID, found under the instructions for "OpenTelemetry" on your Grafana Cloud stack
- A `GRAFANA_CLOUD_TOKEN` environment variable, which can be generated under the instructions for "OpenTelemetry" on your Grafana Cloud stack
- The endpoint for your stack

## 🥣 Preparation

1. Install [Strimzi](https://strimzi.io), a Kubernetes Operator for Kafka
   ```terminal
   kubectl create ns kafka
   kubens kafka
   kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka'
   kubectl wait --for=condition=Available deployments/strimzi-cluster-operator --timeout=300s
   ```

2. Install the Kafka cluster and topics for our recipe
   ```terminal
   kubectl apply -f kafka-for-otelcol.yaml
   kubectl wait kafka/kafka-for-otelcol --for=condition=Ready --timeout=300s
   ```

3. Create and switch to a namespace for our recipe
   ```terminal
   kubectl create ns kafka-on-kubernetes
   kubens kafka-on-kubernetes
   ```

4. Create a secret with the credentials:
   ```terminal
   kubectl create secret generic grafana-cloud-credentials --from-literal=GRAFANA_CLOUD_USER="..." --from-literal=GRAFANA_CLOUD_TOKEN="..."
   ```

5. Create the OTel Collector custom resources that publish to and consume from the Kafka topics
   ```terminal
   kubectl apply -f otelcol-pub.yaml
   kubectl apply -f otelcol-sub.yaml
   kubectl wait --for=condition=Available deployments/otelcol-pub-collector
   kubectl wait --for=condition=Available deployments/otelcol-sub-collector
   ```

6. Open a port-forward to the Collector that is publishing to Kafka:
   ```terminal
   kubectl port-forward svc/otelcol-pub-collector 4317
   ```

7. Send some telemetry to your Collector
   ```terminal
   telemetrygen traces --traces 2 --otlp-insecure --otlp-attributes='recipe="kafka-on-kubernetes"'
   ```

8. Open your Grafana instance, go to Explore, and select the appropriate data source, such as "...-traces". If you used the command above, click "Search" and you should see two traces listed, each with two spans.

## 😋 Executed last time with these versions

The most recent execution of this recipe was done with these versions:

- Strimzi v0.41.0
- OpenTelemetry Operator v0.100.1
- OpenTelemetry Collector Contrib v0.101.0
- `telemetrygen` v0.101.0
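The pub/sub Collector configurations themselves are not part of this diff. Assuming they use the Collector's `kafka` exporter and receiver against the Strimzi bootstrap service (Strimzi names it `<cluster>-kafka-bootstrap`, so `kafka-for-otelcol-kafka-bootstrap.kafka.svc:9092` here), the relevant fragments would look roughly like this sketch; the recipe's real `otelcol-pub.yaml` and `otelcol-sub.yaml` may differ:

```yaml
# Publisher side (sketch): receive OTLP, write spans to the otlp-spans topic.
exporters:
  kafka:
    brokers: [ kafka-for-otelcol-kafka-bootstrap.kafka.svc:9092 ]
    topic: otlp-spans
---
# Subscriber side (sketch): read the topic, forward to Grafana Cloud.
receivers:
  kafka:
    brokers: [ kafka-for-otelcol-kafka-bootstrap.kafka.svc:9092 ]
    topic: otlp-spans
```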
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: kafka-for-otelcol
spec:
  replicas: 1
  roles:
    - controller
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
        kraftMetadata: shared
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka-for-otelcol
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    version: 3.7.0
    metadataVersion: 3.7-IV4
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: otlp-spans
  labels:
    strimzi.io/cluster: kafka-for-otelcol
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: otlp-metrics
  labels:
    strimzi.io/cluster: kafka-for-otelcol
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: otlp-logs
  labels:
    strimzi.io/cluster: kafka-for-otelcol