I've been running OpenCTI for two and a half years, on VMware/CentOS (16 CPU, 100 GB RAM) with Docker, using the GitHub docker-compose and connectors.
External: AlienVault, CISA, CVE, CrowdStrike, MITRE, Malpedia, OpenCTI, RiskIQ
Enrichment: AbuseIPDB, Hygiene, IPQS, IpInfo, SHODAN, Greynoise
I recently had a system crash because RAM and swap filled up, which left errors in the Elasticsearch shards that I wasn't able to recover from.
So I started over with a fresh 5.7.2 install. I've now set REDIS_TRIMMING to 2000000, and RAM usage seems to settle around 56 GB.
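For context, this is roughly how the trimming setting is wired into the compose file (a minimal excerpt assuming the standard layout from the GitHub docker repo; the platform reads it as the REDIS__TRIMMING environment variable, with a double underscore, mapping to the redis:trimming config key):

```yaml
services:
  opencti:
    image: opencti/platform:5.7.2
    environment:
      # Excerpt only; the other platform variables are omitted here.
      - REDIS__HOSTNAME=redis
      - REDIS__PORT=6379
      # Cap the Redis stream length so memory stops growing unbounded.
      - REDIS__TRIMMING=2000000
```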
One thing I notice now when monitoring processing speed is that ingestion slows down dramatically over time. There is currently a queue of 500k messages from AlienVault and 1M from CrowdStrike, and the system added 2,300 entities and 110,000 relations over the last 24 hours. That works out to roughly 0.03 entities/sec and 1.3 relations/sec.
I also notice that some connectors have State=null and show no 'In progress' or 'Completed' works. However, if I filter entities by that connector's author, I see a lot of newly ingested entities.
In the opencti_opencti container logs, I see it repeatedly throwing:
2023-04-26T06:48:49.357451577Z (node:7) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
with the following distribution of seconds between log events:

Seconds between events    Occurrences
0-9                       17
10-19                     23
20-29                     22
30-39                     11
40-49                     9
50-59                     5
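For reference, this is roughly how such a gap histogram can be tallied from the container logs (a quick sketch, not part of OpenCTI; it assumes docker-style ISO timestamps at the start of each line, and the gaps.py name and grep pipeline are only illustrative):

```python
# gaps.py -- rough sketch for bucketing the time between warning lines.
# Usage (illustrative): docker logs -t opencti_opencti 2>&1 \
#   | grep MaxListenersExceededWarning | python3 gaps.py
import sys
from collections import Counter
from datetime import datetime

def parse_ts(line):
    # Timestamps look like 2023-04-26T06:48:49.357451577Z;
    # trim nanoseconds to microseconds so fromisoformat accepts them.
    ts = line.split()[0].rstrip("Z")
    date, _, frac = ts.partition(".")
    return datetime.fromisoformat(f"{date}.{frac[:6]}" if frac else date)

stamps = [parse_ts(line) for line in sys.stdin if line.strip()]
buckets = Counter()
for prev, cur in zip(stamps, stamps[1:]):
    gap = int((cur - prev).total_seconds())
    low = (gap // 10) * 10
    buckets[f"{low}-{low + 9}"] += 1

for label, count in sorted(buckets.items(), key=lambda kv: int(kv[0].split("-")[0])):
    print(label, count)
```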
I'm worried that these recurring abort events are disrupting ingestion and the reporting of the connectors' ingestion status.