Add Vapor OTel integration post #106
base: main
Conversation
@@ -0,0 +1,303 @@
---
date: 2024-12-16 14:00
Let's update this to whatever publish date we want (the day we merge).
As you can see here there's a bunch of things going on, but we'll get into each of them.
1. Our Vapor HTTP server (which can be either an API or a web app) directly sends data to Prometheus, the metrics database;
We need to clarify this (and the diagram). There's no functionality to send data to Prometheus; it works the other way around: something (normally a collector/sidecar) scrapes the metrics endpoint. I guess in our example it's Prometheus directly.
The way this works with an OTel Collector is that the server (Swift OTel) pushes signals (logs, metrics, spans) to the collector. The collector then either sends those signals on to other tools, or, in the case of Prometheus for metrics, exposes an HTTP endpoint itself which Prometheus scrapes for the metrics that were pushed to the collector.
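For reference, a minimal sketch of a collector config that does exactly that (receive OTLP pushes, expose a Prometheus scrape endpoint); the ports and pipeline shown here are assumptions, not taken from the post:

```yaml
# Hypothetical OTel Collector config sketch.
# The app pushes metrics via OTLP; the collector re-exposes them
# on an HTTP endpoint that Prometheus scrapes.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```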
@slashmo That's the case for the OTel Collector; however, we're not using it for the HTTP server. We're using the collector only to collect metrics from the queues (this was a real project in which queue metrics collection was added on top of an already established direct Vapor-to-Prometheus setup). Since the HTTP server metrics are shared directly with Prometheus without going through the collector, I believe Tim is right in saying that it's Prometheus fetching the data.
Yep, either way, Prometheus is always fetching.
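To make the pull model concrete, this is roughly what the scrape side might look like; the job name, target host, and port are hypothetical placeholders, not taken from the post:

```yaml
# Hypothetical prometheus.yml fragment.
# Prometheus pulls from the app's metrics endpoint;
# nothing is ever pushed to Prometheus itself.
scrape_configs:
  - job_name: vapor-app
    metrics_path: /metrics
    static_configs:
      - targets: ["app:8080"]
```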
That being said, I was a bit surprised to see this shown in an OTel integration post. Is the goal of the post to show different ways of emitting metrics, one of which is OTel? Otherwise, if the goal is to integrate OTel, I'd expect Metrics and Tracing in there, with perhaps a couple of OTLP-compatible observability backends so that folks know that OTel is more than just the collector.
The post was born because we implemented this for a client and it seemed interesting to share. However, I agree the title might not do it justice; we could update it to something like "Collecting metrics in Vapor with OTel and Prometheus". I haven't found any tutorial or blog post remotely similar to this, so I guess anything's fine 😄
```
This should be enough to get Prometheus up and running and scraping the Vapor application.
We'll update both the `docker-compose.yml` and the `prometheus.yml` files later on to include the queue workers.
Let's add a note here about the sidecar setup that most people will use in a production app.
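Something along these lines could work for that note; a sketch of a Compose fragment running the collector as a sidecar next to the app, with Prometheus scraping the collector instead of the app directly. Image tag, file paths, and ports are assumptions:

```yaml
# Hypothetical docker-compose fragment: OTel Collector as a sidecar.
# The app pushes OTLP to the sidecar; Prometheus scrapes the sidecar.
services:
  otel-collector:
    image: otel/opentelemetry-collector:latest
    volumes:
      - ./otel-collector.yml:/etc/otelcol/config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC in (from the app)
      - "8889:8889"   # Prometheus scrape endpoint out
```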
- ./grafana.yml:/etc/grafana/provisioning/datasources/grafana.yml
``` | ||
And that's it! Once the Docker Compose stack is running, you should be able to access Grafana at `http://localhost:3000` and start creating dashboards to visualize the data.
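For completeness, a minimal sketch of the `grafana.yml` datasource provisioning file mounted above, assuming Prometheus is reachable at `http://prometheus:9090` inside the Compose network (the service name is an assumption):

```yaml
# Hypothetical Grafana datasource provisioning file.
# Pre-registers Prometheus so dashboards work without manual setup.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```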
It would be really cool to show an example of some queue metrics here, even if we have to redact the queue names.
Co-authored-by: Tim Condon <[email protected]>
No description provided.