Scrapyd's listjobs endpoint shows a log path for finished jobs (as seen here). We don't have this yet.
First, add log serving to the API: plain-text output of the container log messages, streamed if feasible.
Then, add the extra entry to the listjobs endpoint for finished jobs.
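As a rough illustration of the two steps (not the project's actual code), a Flask-style sketch might look like the following; the `/logs` route, the joblogs directory layout and the `finished_jobs()` helper are all assumptions:

```python
# Minimal sketch only: assumes a Flask-style API, a hypothetical
# <JOBLOGS_DIR>/<project>/<job_id>.log layout, and a placeholder job tracker.
import os
from flask import Flask, Response, abort, jsonify

app = Flask(__name__)
JOBLOGS_DIR = "/data/joblogs"  # hypothetical location of finished-job logs

def finished_jobs():
    # Placeholder: the real project would query its job tracker here.
    return []

@app.route("/logs/<project>/<job_id>", methods=["GET"])
def job_log(project, job_id):
    """Serve a finished job's container log as plain text, streamed in chunks."""
    path = os.path.join(JOBLOGS_DIR, project, f"{job_id}.log")
    if not os.path.isfile(path):
        abort(404)

    def generate():
        with open(path, "rb") as f:
            while chunk := f.read(64 * 1024):
                yield chunk

    return Response(generate(), mimetype="text/plain")

@app.route("/listjobs", methods=["GET"])
def listjobs():
    """The extra entry: a log_url per finished job, pointing at the log route."""
    finished = [
        {**job, "log_url": f"/logs/{job['project']}/{job['id']}"}
        for job in finished_jobs()
    ]
    return jsonify({"status": "ok", "finished": finished})
```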
After #28 (including PRs #31, #42, #48), this feature can be implemented relatively easily. If the log file is present in the joblogs dir, we can serve it from there (e.g. via sendfile, ideally with support for Range headers); if it lives on object/container storage instead, the endpoint could redirect to a signed URL, or alternatively proxy it.
The log_url would then be a per-job path, handled by the API to return the log.
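To make that dispatch concrete, a hedged sketch could look like this; `local_log_path()` and `signed_log_url()` are hypothetical helpers, not the project's real API, and `send_file(..., conditional=True)` is one way to get Range header support in Flask:

```python
# Hedged sketch of the local-file vs. object-storage dispatch described above.
import os
from flask import Flask, abort, redirect, send_file

app = Flask(__name__)

def local_log_path(project, job_id):
    # Placeholder: map a job to its file in the joblogs dir.
    return os.path.join("/data/joblogs", project, f"{job_id}.log")

def signed_log_url(project, job_id):
    # Placeholder: return a time-limited signed URL on object storage, or None.
    return None

@app.route("/logs/<project>/<job_id>", methods=["GET"])
def job_log(project, job_id):
    path = local_log_path(project, job_id)
    if path and os.path.isfile(path):
        # conditional=True makes Flask/Werkzeug honor Range request headers.
        return send_file(path, mimetype="text/plain", conditional=True)
    url = signed_log_url(project, job_id)
    if url:
        # Log lives on object/container storage: redirect to the signed URL.
        return redirect(url, code=302)
    abort(404)
```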