Hello,

I am trying to include scrapy-playwright in a binary built with PyInstaller. I have tried a few different setups, but all I get when trying to execute the crawler is:
On startup:
```
2022-10-07 20:19:11 [scrapy.core.downloader.handlers] ERROR: Loading "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler" for scheme "http"
Traceback (most recent call last):
  File "scrapy/core/downloader/handlers/__init__.py", line 49, in _load_handler
  File "scrapy/utils/misc.py", line 61, in load_object
  File "importlib/__init__.py", line 127, in import_module
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'scrapy_playwright.handler'
```
And later on:
```
2022-10-07 20:19:12 [scrapy.core.scraper] ERROR: Error downloading <GET https://XXXXXXXXXX.com>
Traceback (most recent call last):
  File "twisted/internet/defer.py", line 1692, in _inlineCallbacks
  File "twisted/python/failure.py", line 518, in throwExceptionIntoGenerator
  File "scrapy/core/downloader/middleware.py", line 49, in process_request
  File "scrapy/utils/defer.py", line 67, in mustbe_deferred
  File "scrapy/core/downloader/handlers/__init__.py", line 74, in download_request
scrapy.exceptions.NotSupported: Unsupported URL scheme 'https': No module named 'scrapy_playwright.handler'
```
(Obfuscated the URL myself)
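For reference, the handler path in the tracebacks is the one registered through Scrapy's `DOWNLOAD_HANDLERS` setting, as in the setup described in scrapy-playwright's documentation. The snippet below is that documented form, not a verbatim copy of my settings.py:

```python
# settings.py -- the scrapy-playwright registration as documented by the project
# (assumed to match my setup; my exact file is not reproduced here)
DOWNLOAD_HANDLERS = {
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
```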
@bernardodalfovo In my experience, running into a ModuleNotFoundError is an indication that you need to add that module as a hidden import, either through the command line or through the spec file. PyInstaller can't always find the packages it needs to collect, so this is a way of adding them explicitly.
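For example, the relevant part of a generated spec file with the missing module listed as a hidden import could look roughly like this (a sketch only; "crawler.py" is a placeholder for your entry script):

```python
# crawler.spec -- trimmed sketch: only the hidden-import part of a generated
# spec file is shown; "crawler.py" is a placeholder for your entry script.
a = Analysis(
    ["crawler.py"],
    hiddenimports=[
        "scrapy_playwright",
        "scrapy_playwright.handler",  # the module the traceback reports as missing
    ],
)
# ... the rest of the generated spec (PYZ, EXE, etc.) stays unchanged.
```

The command-line equivalent is `pyinstaller --hidden-import scrapy_playwright.handler crawler.py`; there is also `--collect-all scrapy_playwright` if more than the handler module turns out to be missing.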