6 scheduling #36

Open · wants to merge 25 commits into main from 6-scheduling

Conversation

@vlerkin vlerkin (Collaborator) commented Nov 7, 2024

What happens in the PR:

  1. The logic of the event watcher was separated into an observer class; the log-watching logic stayed in the log handler class, but its initialization was changed to subscribe to the event watcher in case the joblogs feature is configured;
  2. A new class, KubernetesScheduler, was created to handle the logic of when jobs must be unsuspended and in what order;
  3. The schedule endpoint was modified; logic to set a value for the start_suspended parameter was added;
  4. The schedule method of the k8s launcher has a new start_suspended parameter, whose value is passed when it is called inside the API. New methods were also added: unsuspend_job patches an existing suspended job with suspend=False, get_running_jobs_count returns the number of jobs that are currently running, list_suspended_jobs returns the list of jobs where spec.suspend is true, and _get_job_name extracts the job name from the metadata, which is then used by the unsuspend function (see the sketch after this list);
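
For illustration, a minimal sketch of what these launcher helpers could look like with the official kubernetes Python client; the namespace handling and the label selector (including the job_id label) are assumptions, not the actual PR code:

```python
from kubernetes import client

class K8sLauncherHelpers:
    """Illustrative sketch of the helpers described above (not the PR implementation)."""

    def __init__(self, namespace="default", label_selector="app=scrapyd-k8s"):
        self._batch = client.BatchV1Api()
        self._namespace = namespace
        self._label_selector = label_selector  # assumption: jobs carry a common label

    def unsuspend_job(self, job_name):
        # Patch an existing suspended Job so Kubernetes starts creating its pods.
        self._batch.patch_namespaced_job(
            name=job_name, namespace=self._namespace, body={"spec": {"suspend": False}}
        )

    def get_running_jobs_count(self):
        jobs = self._batch.list_namespaced_job(self._namespace, label_selector=self._label_selector)
        # A Job with status.active > 0 currently has running pods.
        return sum(1 for j in jobs.items if j.status.active)

    def list_suspended_jobs(self):
        jobs = self._batch.list_namespaced_job(self._namespace, label_selector=self._label_selector)
        return [j for j in jobs.items if j.spec.suspend]

    def _get_job_name(self, job_id):
        # Find the Job carrying the given job_id label and return its metadata.name.
        jobs = self._batch.list_namespaced_job(
            self._namespace, label_selector=f"{self._label_selector},job_id={job_id}"
        )
        if not jobs.items:
            return None
        return jobs.items[0].metadata.name
```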

The big picture:
The event watcher connects to the k8s API and receives a stream of events; whenever a new event arrives, it notifies the subscribers by passing the event to the callback they provided. The subscriber, KubernetesScheduler, receives the event in its handle_pod_event method. This method reacts to changes in job statuses: if a job completed or failed, it calls another method, check_and_unsuspend_jobs, which checks capacity and unsuspends jobs until the allowed number of parallel jobs is reached. While doing this it relies on another method, get_next_suspended_job_id, to unsuspend the earliest-created suspended job, so the order in which jobs were initially scheduled is preserved.
When a job is scheduled, based on the number of currently active jobs and the max_proc value provided in the config (default is 4), the job either runs or goes into the queue of suspended jobs (native k8s queue). Events that change the number of active jobs then trigger the KubernetesScheduler logic, which unsuspends suspended jobs until the desired state (the configured number of parallel jobs) is reached.
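
A compact sketch of this flow, using the method names from the description above (the method bodies, the event shape, and the job_id label are assumptions, not the PR code):

```python
class KubernetesSchedulerSketch:
    """Illustrative event-driven flow: a pod finishing frees a slot, so queued jobs get unsuspended."""

    def __init__(self, launcher, max_proc=4):
        self.launcher = launcher  # k8s launcher exposing the helpers shown earlier
        self.max_proc = max_proc  # maximum number of jobs allowed to run in parallel

    def handle_pod_event(self, event):
        # Called by the event watcher for every pod event it receives.
        phase = event["object"].status.phase
        if phase in ("Succeeded", "Failed"):
            # A slot freed up, so try to start queued (suspended) jobs.
            self.check_and_unsuspend_jobs()

    def check_and_unsuspend_jobs(self):
        # Unsuspend jobs one by one until the parallelism limit is reached again.
        while self.launcher.get_running_jobs_count() < self.max_proc:
            job_id = self.get_next_suspended_job_id()
            if job_id is None:
                break
            self.launcher.unsuspend_job(self.launcher._get_job_name(job_id))

    def get_next_suspended_job_id(self):
        # Pick the oldest suspended job so the original scheduling order is preserved.
        suspended = self.launcher.list_suspended_jobs()
        if not suspended:
            return None
        oldest = min(suspended, key=lambda j: j.metadata.creation_timestamp)
        return oldest.metadata.labels.get("job_id")  # assumption: job_id is stored as a label
```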

@vlerkin vlerkin requested a review from wvengen November 7, 2024 17:25
@wvengen wvengen (Member) left a comment

Ah, nice you were able to come up with something so quickly already!
I looked at it from a high level and noticed that this is currently implemented for Kubernetes only (that makes sense), and also set up in such a way that it needs refactoring for Docker. I would think of the scheduler as something that could work for both Docker and Kubernetes, especially the scheduling decisions. Also, there is now k8s-specific code in the main file (e.g. the import) as well as the Kubernetes scheduler itself, which makes the code somewhat spaghetti-like: there are implementation-specific classes to which responsibility is meant to be delegated. If you need to access the scheduler in the main file, use a generic scheduler, and make the Docker-based parts not implemented. I think that would give a much cleaner design.
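
A hypothetical sketch of that split; the class and method names below are invented for illustration and are not part of the PR:

```python
from abc import ABC, abstractmethod

class Launcher(ABC):
    """Backend-specific operations the generic scheduler needs (illustrative interface)."""

    @abstractmethod
    def running_jobs_count(self) -> int: ...

    @abstractmethod
    def next_queued_job_id(self) -> str | None: ...

    @abstractmethod
    def start_queued_job(self, job_id: str) -> None: ...


class Scheduler:
    """Backend-agnostic decision: whenever capacity frees up, start queued jobs."""

    def __init__(self, launcher: Launcher, max_proc: int):
        self.launcher = launcher
        self.max_proc = max_proc

    def on_capacity_changed(self) -> None:
        while self.launcher.running_jobs_count() < self.max_proc:
            job_id = self.launcher.next_queued_job_id()
            if job_id is None:
                break
            self.launcher.start_queued_job(job_id)


class DockerLauncher(Launcher):
    # Docker support can be filled in later; the generic Scheduler stays unchanged.
    def running_jobs_count(self) -> int: raise NotImplementedError
    def next_queued_job_id(self) -> str | None: raise NotImplementedError
    def start_queued_job(self, job_id: str) -> None: raise NotImplementedError
```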

Also, I would consider making the launcher responsible for scheduling. And then have the scheduler talk to the launcher to actually start jobs.

I'm not yet sure if we should allow running without the scheduler, or if it would always be active.

@wvengen wvengen (Member) commented Nov 8, 2024

Hope my feedback was at an angle that helps you at this stage. In any case, well done, keep it going!

P.S. The CI error looks like it could be caused by Kubernetes-specific things having entered the main API code, which wouldn't work when running with Docker.

@vlerkin vlerkin (Collaborator, Author) commented Nov 11, 2024

Working on Docker implementation to be added to this PR

@wvengen wvengen (Member) left a comment

Great to see a working version! Quite readable :)
It needs a little cleanup, but you're getting there, I think.

@@ -0,0 +1,96 @@
import logging
Member:

Can we fit this in the directory structure? I wouldn't expect this in the src root.

Member:

For me, this functionality seems related to the kubernetes launcher.

Collaborator (Author):

k8s_scheduler is not part of the launcher because it 1) is a subscriber and needs the functionality to subscribe to the observer, 2) is the part that is initialized optionally, when a user wants to limit the number of parallel jobs (the launcher is not optional), and 3) contains higher-level logic that uses low-level methods and helper methods from the launcher.

If you don't like that this file is located in the root, I can relocate it to a directory that belongs to this feature, say, limit_jobs or something like that; you can pick any name you like and I will add it.
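
For illustration, the subscription mechanism this argument relies on can be as small as the following sketch (not the actual ResourceWatcher code):

```python
class ResourceWatcherSketch:
    """Minimal observer: subscribers register a callback and get every event passed to it."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        # Both the log handler and KubernetesScheduler.handle_pod_event would register here.
        self._subscribers.append(callback)

    def _notify(self, event):
        for callback in self._subscribers:
            callback(event)
```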

if not jobs.items:
logger.error(f"No job found with job_id={job_id}")
return None
return jobs.items[0].metadata.name
Member:

when you're listing jobs, would you also get the name already?

@vlerkin vlerkin (Collaborator, Author) commented Nov 20, 2024

I'm having problems because I partially split this PR, and now I have a multiverse of branches that I need to reconcile into a single source of truth. This is going to take an uncertain amount of time.

@wvengen wvengen (Member) commented Nov 21, 2024

The way I would do this:

  1. Continue working on this PR, until you need the functionality developed in the other PR (or until it has been merged).
  2. Interactive rebase on the branch of the other PR. Filter out the commits you had here that you rewrote in the other branch.
  3. There may be little or much work to do in resolving conflicts. If it is really many, in various commits, you may consider another route (see below).
  4. Test, done.

If this is much work, spread over many commits, you may consider first doing an interactive rebase of this PR to simplify it and reduce the number of commits (each of which may need amending).

Yes, this is a bit of work, but something I come across now and then, in various projects.
Sorry for the complexity!

@vlerkin vlerkin (Collaborator, Author) commented Nov 21, 2024

Thank you for the advice!
I was thinking of dropping the commit that merged main into this branch, then making the code work so the tests run if needed. After that, I'd merge the other branch that refactored the observer further, make the code of both branches work together, and then check for any conflicts with main and resolve those. This is a somewhat longer route than simply redoing the merge with the main branch, but I messed up the last merge because I lost track of the changes, so gradually rebuilding this branch is a bit easier for me.

No worries, I'm the one who messed up the merging; complexity is part of the job :D Learning to make more granular commits and cleaner PRs the hard way :D

@vlerkin vlerkin force-pushed the 6-scheduling branch 2 times, most recently from c8b35ad to 6394633 on November 21, 2024 17:38
@vlerkin vlerkin (Collaborator, Author) commented Nov 27, 2024

I modified one of the methods in the scheduler (get_next_suspended_job_id) to handle cases where a job does not have a creation_timestamp. This is not expected, but if someone used a custom resource and forgot to add this field, or made any other error, the job gets a (maximum) timestamp assigned and is processed like the other jobs in the queue.
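
A minimal sketch of that fallback, assuming jobs are sorted by creation_timestamp and untimestamped jobs are pushed to the end of the queue (the warning text is taken from the unit test quoted below; the helper name is made up):

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger(__name__)

# Sentinel for jobs without a creation_timestamp so they sort to the end of the queue.
MAX_TIMESTAMP = datetime.max.replace(tzinfo=timezone.utc)

def creation_time_or_max(job):
    ts = getattr(job.metadata, "creation_timestamp", None)
    if ts is None:
        logger.warning(f"Job {job} missing 'metadata.creation_timestamp'; assigned max timestamp.")
        return MAX_TIMESTAMP
    return ts

# Inside get_next_suspended_job_id (sketch): pick the oldest suspended job.
#   oldest = min(suspended_jobs, key=creation_time_or_max)
```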

Also, there are now unit tests that cover different scenarios for the scheduler.

If you have any other comments for improvements, let me know!

@wvengen wvengen (Member) commented Jan 21, 2025

Could you please resolve conflicts on main, so that I can see what this PR specifically changes?

@wvengen wvengen (Member) left a comment

At first glance, well done! It seems to do the job (though I haven't tested it).
An integration test would increase my confidence that it performs well (right now I'd have to run it locally to see if it actually works; I think it would, but I would still feel obliged to do so).

Also, I think the scheduling logic is now implemented in K8s and Docker separately. Would it make sense to have a single piece of code decide when to schedule, and let e.g. the launcher and listener be the interface to K8s/Docker? Haven't thought this fully through, but the question comes up.

Some questions and notes remain, otherwise it's well on the way.

Comment on lines +22 to +30
# Number of attempts to reconnect with k8s API to watch events, default is 5
reconnection_attempts = 5

# Minimum time in seconds to wait before reconnecting to k8s API to watch events, default is 5
backoff_time = 5

# Coefficient that is multiplied by backoff_time to provide exponential backoff to prevent k8s API from being overwhelmed
# default is 2, every reconnection attempt will take backoff_time*backoff_coefficient
backoff_coefficient = 2
Member:

Do we think the average user needs to configure this? If yes, keep. If no, the documentation would be enough.
Good to remove max_proc.

Collaborator (Author):

max_proc is not present by default, yes, so all scheduled jobs will run immediately. But when it is provided, the scheduler is activated.

Member:

Can we leave out the reconnection parameters from the sample config? They are documented, and wouldn't need tweaking when getting started.
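
For reference, these settings drive a watch/reconnect loop roughly along the lines of the sketch below, pieced together from the commit descriptions (the actual implementation in the PR may differ):

```python
import time
import urllib3
from kubernetes import client, watch

def watch_pods_with_backoff(namespace, reconnection_attempts=5, backoff_time=5, backoff_coefficient=2):
    """Sketch: watch pod events and reconnect with exponential backoff on stream failures."""
    v1 = client.CoreV1Api()
    resource_version = None
    attempts = reconnection_attempts
    delay = backoff_time
    while attempts > 0:
        try:
            w = watch.Watch()
            # Resume from the last seen resource_version so no events are lost on reconnect.
            for event in w.stream(v1.list_namespaced_pod, namespace, resource_version=resource_version):
                resource_version = event["object"].metadata.resource_version
                yield event
            # Stream ended cleanly: reset the failure budget and the backoff delay.
            attempts, delay = reconnection_attempts, backoff_time
        except urllib3.exceptions.ProtocolError:
            attempts -= 1
            time.sleep(delay)
            delay *= backoff_coefficient  # exponential backoff to avoid flooding the API
```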

assert job_id == 'job1'
mock_logger.warning.assert_called_with(
f"Job {job} missing 'metadata.creation_timestamp'; assigned max timestamp."
)
Member:

Very nice you're testing this!
Yet ... can we think of an integration test? It's much more reliable if we can test this against Kubernetes (and Docker).

With max_proc = 0 no job should ever run.
With max_proc = 1 scheduling two jobs, where one has a sleep, we want to see the last job suspended, and started after the first job has ended.
(etc?)

I think this is not that hard either, except we need to test with different configs, something we don't really do yet.
The current test seems tightly coupled to the implementation. An integration test would allow us to refactor the code, and have the test still tell us if it works or not.
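
A sketch of what such a max_proc = 1 integration test could look like, driving scrapyd-k8s only through its REST API; the base URL, project and spider names, and the polling helpers are assumptions for illustration:

```python
import time
import requests

BASE = "http://localhost:6800"          # assumed address of the scrapyd-k8s instance under test
PROJECT, SPIDER = "example", "sleeper"  # assumed project with a spider that sleeps for a while

def schedule():
    resp = requests.post(f"{BASE}/schedule.json", data={"project": PROJECT, "spider": SPIDER})
    return resp.json()["jobid"]

def job_state(jobid):
    jobs = requests.get(f"{BASE}/listjobs.json", params={"project": PROJECT}).json()
    for state in ("pending", "running", "finished"):
        if any(j["id"] == jobid for j in jobs.get(state, [])):
            return state
    return None

def wait_for(jobid, state, timeout=120):
    for _ in range(timeout):
        if job_state(jobid) == state:
            return
        time.sleep(1)
    raise AssertionError(f"job {jobid} did not reach {state!r} within {timeout}s")

def test_max_proc_one_runs_jobs_sequentially():
    jobid1 = schedule()  # occupies the single slot
    jobid2 = schedule()  # should be started suspended

    wait_for(jobid1, "running")
    assert job_state(jobid2) == "pending"  # second job is queued, not running

    wait_for(jobid1, "finished")
    wait_for(jobid2, "finished")           # unsuspended and completed after the first one
```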

Member:

Anyway, this may be for later :)

Collaborator (Author):

Normally, unit tests that rely on a method's signature are enough to confirm the way a method works; integration tests are extras. We can add a ticket to create integration tests that would test different config files.

@vlerkin vlerkin (Collaborator, Author) commented Jan 28, 2025

The launcher is already sort of an interface, since every implementation uses its own launcher. The listener cannot be an interface because the event watcher is a native Kubernetes thing, if I understand your comment correctly. So for k8s we still use the watcher to start suspended jobs when at least one job is done or deleted, and for Docker we have a background thread that checks the state of existing containers, as we discussed previously.
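
For illustration, the Docker-side background thread described here could look roughly like the sketch below; the docker SDK calls exist as written, but the container label, the launcher interface and the 5-second polling design are assumptions:

```python
import threading
import docker

class DockerCapacityChecker:
    """Sketch: background thread that polls container state and starts queued jobs when capacity frees up."""

    def __init__(self, launcher, interval=5):
        self.launcher = launcher   # docker launcher exposing the start-queued-jobs logic (assumed)
        self.interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        # Allows a graceful shutdown of the background thread.
        self._stop.set()
        self._thread.join()

    def _run(self):
        client = docker.from_env()
        while not self._stop.is_set():
            running = client.containers.list(
                filters={"label": "org.scrapy.job", "status": "running"}  # assumed label
            )
            free_slots = self.launcher.max_proc - len(running)
            if free_slots > 0:
                self.launcher.start_queued_jobs(free_slots)  # assumed launcher method
            self._stop.wait(self.interval)
```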

except AttributeError as attr_e:
logger.error(f"AttributeError in get_next_suspended_job_id: {attr_e}")
except TypeError as type_e:
logger.error(f"TypeError in get_next_suspended_job_id: {type_e}")
Member:

We talked about this before, that the scheduler is used by the launcher, correct?
If yes, then I would expect this file in the directory structure in a way that it is 'part' of the k8s launcher, e.g. in launcher/k8s_scheduler.py

vlerkin added 12 commits March 4, 2025 14:25
…logic for observer in a ResourceWatcher class; added method to stop a thread gracefully
…that handles the logic to unsuspend jobs and get the next in order according to the creation timestamp; modify schedule endpoint to start jobs suspended if there are already enough jobs running; modify corresponding function in k8s launcher; add to k8s launcher methods to unsuspend job, to get current number of running jobs, to list suspended jobs and a private method to get job name to be used for unsuspend function
…source watcher instance to enable_joblogs to subscribe to the event watcher if the log feature is configured; delete logic about event watcher from main; pass container for list objects function instead of container name; remove start method from log handler class; modify joblogs init to subscribe to event watcher
…rs and run more from the queue of created jobs when capacity is available; add background thread that sleeps for 5 sec and triggers the function that starts additional containers up to capacity; add a method to gracefully stop the background thread that might be used in the future to stop the thread when app stops; encapsulate k8s and docker related schedule functionality in corresponding launchers and keep api.py launcher agnostic; add max_proc to config for docker
…nnect loop for event watcher; make number of reconnect attempts, backoff time and a coefficient for exponential growth configurable via config; add backoff_time, reconnection_attempts and backoff_coefficient as attributes to the resource watcher init; add resource_version as a param to w.stream so a failed stream can read from the last resource it was able to catch; add urllib3.exceptions.ProtocolError and handle reconnection after some exponential backoff time to avoid API flooding; add config as a param for init for resource watcher; modify config in kubernetes.yaml and k8s config to contain backoff_time, reconnection_attempts and backoff_coefficient
…and a label selector to make the code in listjobs, get_running_jobs and list_suspended_jobs DRY; refactor listjobs to use the helper function with the existing _parse_job as a filter_func parameter
…unction because list jobs uses a different logic
…nnect loop for event watcher; make number of reconnect attempts, backoff time and a coefficient for exponential growth configurable via config; add backoff_time, reconnection_attempts and backoff_coefficient as attributes to the resource watcher init; add resource_version as a param to w.stream so a failed stream can read from the last resource it was able to catch; add urllib3.exceptions.ProtocolError and handle reconnection after some exponential backoff time to avoid API flooding; add config as a param for init for resource watcher; modify config in kubernetes.yaml and k8s config to contain backoff_time, reconnection_attempts and backoff_coefficient
… connection to the k8s was achieved so only sequential failures detected; add exception handling to watch_pods to handle failure in urllib3, when resource version is old and not available anymore, and when stream is ended; remove k8s resource watcher initialization from run function in api.py and move it to k8s.py launcher as _init_resource_watcher; refactor existing logic from joblogs/__init__.py to keep it in _init_resource_watcher and enable_joblogs in k8s launcher
vlerkin added 9 commits March 4, 2025 14:29
… a package that has an enable function in launcher/k8s.py which is also part of resource watcher initialization; initialize the scheduler if max_proc was provided in the scrapyd section of the config file; refactor related methods in the launcher to use extra functionality for job number limiting only if max_proc is provided
…; remove max_proc from config file since by default we want to run all scheduled jobs in parallel; add a section about max_proc to the CONFIG.md
…on_timestamps to the jobs that do not have them, so they are processed at the end of the queue; add unit tests for k8s_scheduler class
@wvengen wvengen (Member) commented Mar 4, 2025

Rebased on main; adapted the integration tests to a setup with different configuration files.

@wvengen wvengen force-pushed the 6-scheduling branch 9 times, most recently from 9a9e29b to f8418ea on March 5, 2025 09:33
@wvengen wvengen (Member) commented Mar 10, 2025

I'm not fully happy with the current integration tests using a shell script to patch the k8s setup. It might be cleaner to add a YAML file with the desired k8s manifest (in this case, for the role), have the CI script save the cluster state on cluster setup, and restore it before running a test (kubectl apply -f first the pristine cluster state, then the test-specific resource; that would even cover the scale-down, as it is part of the pristine cluster state - but not the waiting on it, so it perhaps remains useful to keep the scale-down step).
The downside would be that the full role state is still necessary, so any change to scrapyd-k8s's required roles needs to be included in the test-specific role manifest as well; in that respect, a patch is actually more to the point.
So perhaps it is fine as it is, just wanted to share my thoughts here.

@vlerkin vlerkin (Collaborator, Author) commented Mar 10, 2025

Wait until the second is finished too

listjobs_wait(jobid2, 'finished', max_wait=STATIC_SLEEP+MAX_WAIT)

Just curious: why would you wait until the second job is done? If it was scheduled after the first job finished, then the feature works properly; I am not sure what we are testing by waiting for the second one.

@vlerkin vlerkin (Collaborator, Author) commented Mar 10, 2025

I would group the unit tests under 4 classes:
TestKubernetesSchedulerInitialization: test_k8s_scheduler_init, test_k8s_scheduler_init_invalid_max_proc

TestPodEventHandling:
test_handle_pod_event_with_non_dict_event,
test_handle_pod_event_pod_missing_status, etc

TestJobSuspensionManagement: test_check_and_unsuspend_jobs_with_capacity_and_suspended_jobs, test_check_and_unsuspend_jobs_no_suspended_jobs, etc

TestSuspendedJobSelection: test_get_next_suspended_job_id_with_suspended_jobs, test_get_next_suspended_job_id_no_suspended_jobs, etc

This is much more readable and makes it easier to contribute new tests or change existing ones. What do you think?

@vlerkin vlerkin (Collaborator, Author) commented Mar 10, 2025

And to make the code more compact, we could use @pytest.mark.parametrize to avoid code duplication.
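
A sketch of how one of the proposed groups could look with parametrize; the import path, constructor signature and event shapes are assumptions for illustration, not the actual tests:

```python
import pytest
from unittest import mock

from scrapyd_k8s.k8s_scheduler import KubernetesScheduler  # assumed module path


@pytest.fixture
def scheduler():
    # Scheduler with a mocked launcher so no cluster is needed; constructor args are assumed.
    return KubernetesScheduler(launcher=mock.Mock(), max_proc=2)


class TestPodEventHandling:
    @pytest.mark.parametrize("event", [
        "not-a-dict",                   # non-dict event
        {},                             # dict without an object
        {"object": {"metadata": {}}},   # pod missing status
    ])
    def test_malformed_events_are_ignored(self, scheduler, event):
        # Malformed events should neither raise nor trigger unsuspending.
        scheduler.handle_pod_event(event)
        scheduler.launcher.unsuspend_job.assert_not_called()
```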

@wvengen wvengen (Member) commented Mar 10, 2025

Hi 👋 Thanks for your input!

Just curious why would you wait until the second job is done? If it was scheduled after the first job is done then the feature works properly, I am not sure what we are testing here with the waiting for the second one.

As a general note, the integration tests are, well, integration tests, and are meant to test the system as a whole. Here that means it is not strictly necessary to wait for the second job, but it is still part of the expected flow, and it doesn't hurt to test it. These tests are more like what a user would expect when using the system; they don't handle specific edge cases in isolation, but may include edge cases in the integrated flow.

TestKubernetesSchedulerInitialization, TestPodEventHandling, TestJobSuspensionManagement, TestSuspendedJobSelection

These tests sound Kubernetes-specific, and are not really about testing a full interaction cycle with scrapyd-k8s using its API only. Therefore they don't really belong in the (current) integration tests, I think.

There are probably specific cases to cover, as you write. Very useful to know about. Maybe we could express them as full integration tests that trigger a certain corner case and should work in a certain way, regardless of backend (k8s/docker).

If we want to test the launchers (incl. the possible schedulers), then we'd need another kind of tests, perhaps docker- and k8s-specific tests, that check how a cluster/node responds to launcher commands, and vice versa. Here we might test the surface API of the launcher (instead of the REST API). This is a kind of test we don't have yet (all tests are now backend-agnostic).

In the early stages of this project, to keep the testing work manageable, no backend-specific tests were created. There may come a time when this project grows and needs backend-specific tests, but I see the overhead as a bit too much as of now - as long as we can cover enough ground with the backend-independent integration tests.

test_k8s_scheduler_init_invalid_max_proc

This seems like it would be a separate integration test with a config file having an invalid value. I think the API does not expose this, so either the daemon does not start at all, or it runs with a reduced feature set. Neither is reported through the API, so this cannot be tested well right now. It requires a different kind of testing setup, currently out of scope. I think this belongs in a different issue, about revising the testing infrastructure.

Note that I'd really like to keep a distinction between running actual tests, and setting up the daemon under test in an environment.

@vlerkin vlerkin (Collaborator, Author) commented Mar 11, 2025

These tests sound Kubernetes-specific, and are not really about testing a full interaction cycle with scrapyd-k8s using its API only. Therefore they don't really belong in the (current) integration tests, I think.

They are unit tests; I just looked at them again and suggested possible improvements. It was probably a bit confusing :)

@vlerkin vlerkin (Collaborator, Author) commented Mar 11, 2025

I don't really see any improvements needed for the integration tests; they look good as they are.
