
Collation fetching fairness #4880

Open · wants to merge 106 commits into master from tsv-collator-proto-fairness
Conversation

@tdimitrov tdimitrov commented Jun 26, 2024

Related to #1797

When fetching collations on the validator side of the collator protocol, we need to ensure that each parachain gets a fair share of core time based on its assignments in the claim queue. This means that the number of collations fetched per parachain should ideally be equal to (and definitely no greater than) the number of claims for that parachain in the claim queue.

The current implementation doesn't guarantee such fairness. For each relay parent there is a waiting_queue (PerRelayParent -> Collations -> waiting_queue) which holds any unfetched collations advertised to the validator. Collations are fetched on a first-in, first-out basis, which means that if two parachains share a core and one of them is more aggressive, it can starve the other. How? At each relay parent up to max_candidate_depth candidates are accepted (enforced in fn is_seconded_limit_reached), so if one parachain is quick enough to fill the queue with its advertisements, the validator will never fetch anything from the other parachains even though they are scheduled. This doesn't mean the aggressive parachain will occupy all the core time (the runtime guarantees it won't), but it will prevent the other parachains sharing the same core from getting collations backed.

The solution I am proposing extends the checks in is_seconded_limit_reached: fetches and advertisements are limited based on the state of the claim queue. At each relay parent the claim queue for the core assigned to the validator is fetched. For each parachain a fetch limit is calculated, equal to the number of its entries in the claim queue. Advertisements from a parachain which has exceeded its claims in the claim queue are not fetched. This solves the problem of aggressive parachains advertising too many collations.
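For illustration, here is a minimal sketch of such a limit check, assuming a hypothetical flat claim_queue of para ids for the assigned core and a simple per-para fetch counter (the names and shapes are illustrative, not the PR's actual API):

```rust
use std::collections::HashMap;

type ParaId = u32;

/// Returns true if another collation may be fetched for `para`, i.e. the
/// number of fetches so far is still below its number of claims in the queue.
fn within_claim_queue_limit(
    claim_queue: &[ParaId],
    fetched_per_para: &HashMap<ParaId, usize>,
    para: ParaId,
) -> bool {
    let claims = claim_queue.iter().filter(|p| **p == para).count();
    let fetched = fetched_per_para.get(&para).copied().unwrap_or(0);
    fetched < claims
}
```

For example, with a claim queue [A, A, B], advertisements for B are no longer fetched once one B collation has been fetched, while A may still get two.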

The second part is in the collation fetching logic. The validator keeps track of which collations it has fetched so far. When a new collation needs to be fetched, instead of popping the first entry from the waiting_queue, the validator examines the claim queue and looks for the earliest claim which doesn't yet have a corresponding fetch. This way the validator always prioritises the most urgent entries.
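A hedged sketch of that selection logic, reusing the toy types above (the real implementation works over the relay parents in the view rather than a flat queue):

```rust
use std::collections::HashMap;

type ParaId = u32;

/// Walks the claim queue from its head and returns the para id of the
/// earliest claim that has no corresponding fetch yet, if any.
fn next_para_to_fetch(
    claim_queue: &[ParaId],
    fetched_per_para: &HashMap<ParaId, usize>,
) -> Option<ParaId> {
    // Clone the counters so that earlier claims "consume" fetches already made.
    let mut remaining = fetched_per_para.clone();
    for para in claim_queue {
        match remaining.get_mut(para) {
            Some(count) if *count > 0 => *count -= 1, // claim already covered by a fetch
            _ => return Some(*para),                  // earliest unfulfilled claim
        }
    }
    None // every claim already has a corresponding fetch
}
```

E.g. with a claim queue [A, B, A] and one fetch already made for A, the earliest unfulfilled claim is B, so B is fetched next even if A advertised first.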

@tdimitrov tdimitrov added the T8-polkadot label (This PR/Issue is related to/affects the Polkadot network) on Jun 26, 2024
@tdimitrov tdimitrov force-pushed the tsv-collator-proto-fairness branch from c7f24aa to 0f28aa8 on June 28, 2024 08:19
@tdimitrov tdimitrov requested a review from a team as a code owner October 18, 2024 08:10
@eskimor (Member) left a comment:

Hmm. Had a quick look, but not yet following. We still seem to track per relay parent (and per peer): How can we guarantee fairness in such a scheme, given that collators are free in picking relay parents?

@tdimitrov (author) replied:

> Hmm. Had a quick look, but not yet following. We still seem to track per relay parent (and per peer): How can we guarantee fairness in such a scheme, given that collators are free in picking relay parents?

We count candidates at relay parent X and all previous relay parents within the view (here).

Why do you say we track per peer? There is a check that a peer doesn't provide more entries than there are elements in the claim queue in insert_advertisement (here), but it only serves as initial spam protection so that PeerData doesn't grow indefinitely. We do check the claim queue after that.

@alindima (Contributor) left a comment:

Thanks for the effort you invested in this so far 👍🏻

Not introduced here, but this subsystem has not aged very well and the code is quite complicated and convoluted.

Generally I would love to see a refactor of the collator protocol. Maybe this could be done as part of the issue for removing the async backing parameters (which will probably involve modifying the collator protocol as well).

@@ -398,7 +369,7 @@ struct State {
/// support prospective parachains. This mapping works as a replacement for
/// [`polkadot_node_network_protocol::View`] and can be dropped once the transition
/// to asynchronous backing is done.
-    active_leaves: HashMap<Hash, ProspectiveParachainsMode>,
+    active_leaves: HashMap<Hash, AsyncBackingParams>,
Contributor:

It looks like we can remove active_leaves altogether now that we don't need to support pre-async backing code (as the comment on it says also).

When we want to check if a relay parent is in the implicit view, we can check against all_allowed_relay_parents instead of known_allowed_relay_parents_under.

Please also update the comments; there are several left over about pre-async-backing behaviour.

// The number of current assignments equals the length of the claim queue. No honest
// collator should send that many advertisements.
if candidates.len() > per_relay_parent.assignment.current.len() {
return Err(InsertAdvertisementError::PeerLimitReached)
Contributor:

we should do this check on the else branch as well

)
.await;
let head = Hash::from_low_u64_be(128);
let head_num: u32 = 0;
Contributor:

Slightly unrelated: AFAICT the ReportCollator message is never sent by any subsystem. We should remove it along with this test.

@tdimitrov (author) commented Nov 8, 2024:

I agree, but let's not handle this here. I've opened issue #6415.

@@ -1314,3 +1385,596 @@ fn child_blocked_from_seconding_by_parent(#[case] valid_parent: bool) {
virtual_overseer
});
}

#[test]
Contributor:

The advertisement_spam_protection test should also check the actual advertisement limit, not only duplicates.

@tdimitrov (author):

Can you elaborate?

If advertisement_spam_protection is extended to check that the claim queue limit is respected, it will overlap with collations_outside_limits_are_not_fetched. IMO they test different cases and it's nice to have them separated. The test names could be better, though.

)
.map_err(AdvertisementError::Invalid)?;

if per_relay_parent.collations.is_seconded_limit_reached(relay_parent_mode) {
let claims_for_para = per_relay_parent.collations.claims_for_para(&para_id);
let seconded_and_pending_at_ancestors = seconded_and_pending_for_para_in_view(
Contributor:

We are checking our ancestors only here, but we could have already seconded collations at the latest leaf.
Suppose relay parents A, B, C (C being the latest and active): we second two collations on C, then want to second one on B; we'll only take into account the collations on A (of which there are none).

@tdimitrov (author):

I built the whole PR under the wrong assumption that relay parents can't 'go backwards'. After talking with you offline and looking through the code again I can see this is not the case.

I'll think about how to fix this.

@tdimitrov (author):

This is now reworked to consider old relay parents and all the paths up to an outer leaf.
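For illustration only, a toy sketch of what path-aware counting could look like, under assumed types (the function name is taken from the diff above, but the body and data shapes here are hypothetical): for a given relay parent, consider every known chain of relay parents through it up to an outer leaf and take the worst case.

```rust
use std::collections::HashMap;

type RelayParent = u8; // toy stand-in for a relay-parent hash
type ParaId = u32;

/// Seconded + pending count for `para` relevant at `rp`: the maximum over all
/// known chains (paths from the view's root to an outer leaf) containing `rp`,
/// summing the per-relay-parent counts along each chain.
fn seconded_and_pending_for_para_in_view(
    paths: &[Vec<RelayParent>],
    counts: &HashMap<(RelayParent, ParaId), usize>,
    rp: RelayParent,
    para: ParaId,
) -> usize {
    paths
        .iter()
        .filter(|path| path.contains(&rp))
        .map(|path| {
            path.iter()
                .map(|h| counts.get(&(*h, para)).copied().unwrap_or(0))
                .sum::<usize>()
        })
        .max()
        .unwrap_or(0)
}
```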

@eskimor (Member) commented Oct 30, 2024:

> > Hmm. Had a quick look, but not yet following. We still seem to track per relay parent (and per peer): How can we guarantee fairness in such a scheme, given that collators are free in picking relay parents?
>
> We count candidates at relay parent X and all previous relay parents within the view (here).

Got it, thanks! Was a bit hidden ;-)
