Split the reporter into client and server modules #73
base: main
Conversation
Signed-off-by: Mickael Maison <[email protected]>
The CI is failing. I can reproduce it locally, but I'm not sure I understand why it's happening. Running:
hits the issue. But if I run:
then it works.
README.md (outdated)
@@ -37,47 +37,51 @@ The metrics reporter has the following configurations:
### Kafka Brokers
Should this say Kafka Brokers and Controllers?
I updated that section. I think we'll probably want further improvements to the README; it's still a bit bare-bones.
I think [...] then verify does not install artifacts to the local repo.
<artifactId>client-metrics-reporter</artifactId>

<dependencies>
    <dependency>
For multi-module projects, Maven best practice is to express shared dependencies in the parent POM using a dependencyManagement section. You can then remove the version numbers from the dependencies in the child POMs.
Thanks for the tip! I already had the versions defined in the parent POM so I removed the versions from the modules.
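For readers unfamiliar with the pattern, here is a minimal sketch of what the reviewer describes. The dependency shown is only an example and the version property name is made up; this is not this project's actual POM.

```xml
<!-- Parent pom.xml (sketch): shared versions declared once. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.prometheus</groupId>
      <artifactId>prometheus-metrics-core</artifactId>
      <version>${prometheus-metrics.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- Child module pom.xml (sketch): no <version>, it is inherited via dependencyManagement. -->
<dependencies>
  <dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>prometheus-metrics-core</artifactId>
  </dependency>
</dependencies>
```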
}

@Override
public void contextChange(MetricsContext metricsContext) {
    String prefix = metricsContext.contextLabels().get(MetricsContext.NAMESPACE);
    if (!PREFIXES.contains(prefix)) {
        throw new IllegalStateException("ClientMetricsReporter should only be used in Kafka servers");
ClientMetricsReporter should only be used by clients?
Yep, fixed.
private static final Logger LOG = LoggerFactory.getLogger(AbstractReporter.class);

private final Map<Object, MetricWrapper> allowedMetrics = new ConcurrentHashMap<>();
I don't find the naming of otherMetrics too clear. It took me a while to grok what was going on: you keep track of the otherMetrics so that, in case the allowlist is changed, you can start to report on the newly matched metrics. You only need to do this if the implementation is reconfigurable.
Can I ask for a few more comments (class level)?
I agree with Keith; it would be better to have some comments.
I renamed otherMetrics to disallowedMetrics. I also added a few comments to this class to clarify its role. Let me know if some parts are still not clear.
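To make the discussion concrete, here is a rough sketch of the bookkeeping described above; it is not the PR's actual code. MetricWrapper is reduced to a name() accessor, the allowlist is a plain Pattern, and the method names are assumptions based on this thread.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.regex.Pattern;

// Sketch only: metrics matching the allowlist are reported; non-matching metrics
// are parked in disallowedMetrics purely so that a later allowlist change can
// promote them without re-registering anything.
public abstract class AbstractReporterSketch {

    /** Simplified stand-in for the project's MetricWrapper. */
    public interface MetricWrapper {
        String name();
    }

    protected final Map<Object, MetricWrapper> allowedMetrics = new ConcurrentHashMap<>();
    protected final Map<Object, MetricWrapper> disallowedMetrics = new ConcurrentHashMap<>();

    protected volatile Pattern allowlist;

    protected AbstractReporterSketch(Pattern allowlist) {
        this.allowlist = allowlist;
    }

    /** Only the server reporters support changing the allowlist at runtime. */
    protected abstract boolean isReconfigurable();

    public void addMetric(Object key, MetricWrapper metric) {
        if (allowlist.matcher(metric.name()).matches()) {
            allowedMetrics.put(key, metric);
        } else if (isReconfigurable()) {
            // Track it only if a future allowlist could make it reportable.
            disallowedMetrics.put(key, metric);
        }
    }

    public void removeMetric(Object key) {
        allowedMetrics.remove(key);
        disallowedMetrics.remove(key);
    }
}
```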
 * Update the allowed metrics based on the current allowlist pattern.
 */
public void updateAllowedMetrics() {
    if (!isReconfigurable()) return;
We are guaranteed that this method is called by a single thread, right?
Yes, in KRaft mode this should be called by a single thread. While this runs, the addMetric() and removeMetric() methods could be called from another thread, but that should be fine with ConcurrentHashMaps.
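As an illustration of that point, a possible updateAllowedMetrics() for the sketch above could look like this; the weakly consistent iteration of ConcurrentHashMap is what keeps concurrent addMetric()/removeMetric() calls safe while it runs. Again, this is only an illustration, not the PR's code.

```java
// Continuation of the sketch above; called from the single (re)configuration thread.
public void updateAllowedMetrics() {
    if (!isReconfigurable()) return;
    // ConcurrentHashMap iteration is weakly consistent, so concurrent
    // addMetric()/removeMetric() calls never throw ConcurrentModificationException.
    allowedMetrics.forEach((key, metric) -> {
        if (!allowlist.matcher(metric.name()).matches()) {
            allowedMetrics.remove(key);
            disallowedMetrics.put(key, metric);
        }
    });
    disallowedMetrics.forEach((key, metric) -> {
        if (allowlist.matcher(metric.name()).matches()) {
            disallowedMetrics.remove(key);
            allowedMetrics.put(key, metric);
        }
    });
}
```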
/**
 * MetricsReporter implementation that exposes Kafka server metrics in the Prometheus format.
 */
public class ServerKafkaMetricsReporter extends ClientMetricsReporter {
The class hierarchy is giving me pause for thought. Is ServerKafkaMetricsReporter really a ClientMetricsReporter? I wonder whether the common stuff should be pushed to the abstract class, with only ServerKafkaMetricsReporter knowing about reconfigurability and 'otherMetric' tracking.
Yeah, it's a bit complicated.
The allowedMetrics logic is shared across all 3 reporters (ClientMetricsReporter, ServerKafkaMetricsReporter and ServerYammerMetricsReporter), so it makes sense to have that in the abstract class.
The reconfiguration logic (otherMetric) is used by both ServerKafkaMetricsReporter and ServerYammerMetricsReporter, so it's still 2 out of 3 reporters, which is why I put it there too. Otherwise we'd need a server abstract reporter, but that seems overkill.
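For orientation, the hierarchy being discussed looks roughly like the sketch below. Where ServerYammerMetricsReporter hangs off the tree is my assumption; the rest follows the comments above.

```java
// Shared allowlist bookkeeping (allowedMetrics / disallowedMetrics) lives here.
abstract class AbstractReporter { }

// Kafka MetricsReporter used by client applications; not reconfigurable.
class ClientMetricsReporter extends AbstractReporter { }

// Reuses the client reporter and adds the server-only reconfiguration behaviour.
class ServerKafkaMetricsReporter extends ClientMetricsReporter { }

// Handles Yammer metrics on the server side (assumed to extend AbstractReporter directly).
class ServerYammerMetricsReporter extends AbstractReporter { }
```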
Signed-off-by: Mickael Maison <[email protected]>
So what do you recommend? Should I add a step to run [...]?
I was looking at the bridge as it's simpler compared to the operator. We have verify and spotbugs as two separate stages anyway; you can see it here: https://github.com/strimzi/strimzi-kafka-bridge/blob/main/.azure/templates/jobs/build_java.yaml#L27 and here: https://github.com/strimzi/strimzi-kafka-bridge/blob/main/.azure/templates/jobs/build_java.yaml#L35
Signed-off-by: Mickael Maison <[email protected]>
I'm not sure I follow. What do you suggest doing?
Well, we simply do [...] where make [...], so Spotbugs is part of the build process (but last to execute). I would go with [...]
Yes, that seems the simplest option to me. But I wanted to see what other projects did and understand if there was a good reason to have separate steps.
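For context, one common way to fold SpotBugs into a single mvn verify run is to bind the plugin's check goal to the verify phase, as sketched below. This is not necessarily what this PR ends up doing, and the version property is a placeholder.

```xml
<plugin>
  <groupId>com.github.spotbugs</groupId>
  <artifactId>spotbugs-maven-plugin</artifactId>
  <version>${spotbugs-maven-plugin.version}</version>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <!-- Fails the build on SpotBugs findings during `mvn verify`. -->
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```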
Signed-off-by: Mickael Maison <[email protected]>
Signed-off-by: Mickael Maison <[email protected]>
lgtm - thanks @mimaison
This implements Proposal 96 and splits the reporter into 2 modules: client-metrics-reporter and server-metrics-reporter.
This also adds support for dynamic reconfiguration (#55).
I'm still unsure about the packaging. Running mvn package produces archives under client-metrics-reporter and server-metrics-reporter. Is that sufficient, or do we want a single archive?