Replies: 3 comments 2 replies
-
Hi @zirkelc, thank you for opening this discussion. From what I understand, this sounds quite similar to the existing feature request #519. If that's the case, I'd encourage you to 👍 there, as this helps us understand demand.

Regarding the actual feature: I think this makes sense and it's something we have on our roadmap for the medium term. Before working on it, we'd like to better understand the impact of buffering logs on memory usage and find a way to decrease the chances of running out of memory. For example, we're unsure whether we should buffer based on a certain number of messages, a certain total payload size, or both.

Additionally, we're looking into doing something slightly more advanced than what was suggested, and not just flushing the buffer when you explicitly log an error:

```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({ logLevel: 'error' });

export const handler = async (event) => {
  logger.logEventIfEnabled(event); // this is an info log
  logger.debug('some other log');
  // At this point, before the runtime exits, we'd also emit both
  // of the logs present in the buffer.
  throw new Error('something bad happened');
};
```

In order to do this, however, we need to resolve some open points with the AWS Lambda team, since today it's not possible to do this without opting into Lambda Extensions. Either way, depending on how the conversation goes, we might want to implement this in two waves and first enable the behavior you described, which we could do independently.
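The count-versus-size trade-off mentioned above could be sketched as a buffer that evicts its oldest entries whenever either limit is exceeded. All names below are illustrative, invented for this sketch; this is not the Powertools API:

```typescript
// Hypothetical log buffer bounded by both entry count and total payload size.
type BufferedLog = { level: string; message: string };

class BoundedLogBuffer {
  private items: BufferedLog[] = [];
  private totalSize = 0;

  constructor(
    private readonly maxItems: number,
    private readonly maxSize: number,
  ) {}

  add(item: BufferedLog): void {
    this.items.push(item);
    // Approximate payload size via the serialized length.
    this.totalSize += JSON.stringify(item).length;
    // Evict oldest entries until both limits are satisfied,
    // so memory stays bounded even for chatty functions.
    while (this.items.length > this.maxItems || this.totalSize > this.maxSize) {
      const evicted = this.items.shift();
      if (!evicted) break;
      this.totalSize -= JSON.stringify(evicted).length;
    }
  }

  // Drain the buffer, e.g. on error or before the runtime exits.
  flush(): BufferedLog[] {
    const drained = this.items;
    this.items = [];
    this.totalSize = 0;
    return drained;
  }
}
```

Evicting the oldest entries first keeps the most recent context leading up to an error, which is usually what you want when debugging.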
-
Hey @dreamorosi, as mentioned via email, I made a very naive implementation of this feature and I'd like to get your feedback on it if you get a chance. Here's the diff in my forked repo: #3178

All log items below the configured log level are stored in a buffer. On a side note, the way it's implemented could actually be done quite easily by sub-classing the `Logger` class.
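For illustration, the sub-classing approach could look roughly like the sketch below. `BaseLogger`, `BufferingLogger`, and the level table are stand-ins invented for this example, not the actual Powertools `Logger` API:

```typescript
// Numeric log levels, ordered by severity (values are arbitrary).
const LEVELS: Record<string, number> = { DEBUG: 8, INFO: 12, WARN: 16, ERROR: 20 };

// Minimal stand-in for a logger base class.
class BaseLogger {
  constructor(protected readonly logLevel: string) {}

  protected emit(level: string, message: string): void {
    console.log(JSON.stringify({ level, message }));
  }

  log(level: string, message: string): void {
    // Default behavior: silently drop logs below the threshold.
    if (LEVELS[level] >= LEVELS[this.logLevel]) this.emit(level, message);
  }
}

// Subclass: logs below the threshold go to an in-memory buffer instead of
// being dropped; any ERROR log first flushes the buffer, so the context
// leading up to the error is emitted as well.
class BufferingLogger extends BaseLogger {
  private buffer: Array<[string, string]> = [];

  log(level: string, message: string): void {
    if (LEVELS[level] < LEVELS[this.logLevel]) {
      this.buffer.push([level, message]);
      return;
    }
    if (level === 'ERROR') this.flushBuffer();
    this.emit(level, message);
  }

  flushBuffer(): void {
    for (const [level, message] of this.buffer) this.emit(level, message);
    this.buffer = [];
  }
}
```

Because only `log` is overridden, callers keep the same interface; the buffering is invisible until an error occurs.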
-
Closing this in favor of #3410 - the proposal has moved on to the RFC phase and will be part of the 2025 backlog for Powertools for AWS (ETA Q1 2025).
-
Hi,

I currently log lots of information at INFO or DEBUG level to CloudWatch. In order to reduce my CloudWatch bill, I'm thinking about setting the default log level `POWERTOOLS_LOG_LEVEL` to `ERROR`. But then, in case of errors, I don't have the useful debug logs available to analyze the error.

I didn't look at the implementation yet, but maybe it would be possible to collect all DEBUG/INFO/... logs below the configured log level, and then, in case of an error logged via `logger.error()` (or any other log message at or above the log level), flush all collected logs to CloudWatch. That means only critical logs (log level >= `POWERTOOLS_LOG_LEVEL`) would trigger non-critical logs (log level < `POWERTOOLS_LOG_LEVEL`) to be logged in CloudWatch.