v2 API Proposal Document #339
Conversation
Thank you @aryan-25! I'll have a look this morning

This is not related to this doc, but should we move the Current struct:
Proposed struct
Great steps forward!
> `LambdaContext` will be largely unchanged, but the `eventLoop` property as well as the `allocator` property (of type `ByteBufferAllocator`) will be removed.
>
> A new function `backgroundTask()` will also be added. This will allow tasks to be run in the background while and after the response is/has been sent. Please note that `LambdaContext` will not be `Sendable` anymore.
Suggested change:

```diff
-A new function `backgroundTask()` will also be added. This will allow tasks to be run in the background while and after the response is/has been sent. Please note that `LambdaContext` will not be Sendable anymore.
+A new function `addBackgroundTask(_:)` will also be added. This will allow tasks to be run in the background while and after the response is/has been sent. Please note that `LambdaContext` will not be Sendable anymore.
```
What's the reasoning behind making the context non-sendable? Is it for the background task stuff? It should be fine given the scope of sharing it will be pretty narrow but it could cause issues with strict concurrency for those that aren't aware
In order to allow adding background tasks in a structured way, we will need to back the `LambdaContext` with a `TaskGroup`. Since `TaskGroup` is not `Sendable`, we can't mark `LambdaContext` as `Sendable`. However, if users need any property (all of them are `Sendable`) out of the `LambdaContext`, they can just get that property out of it and then pass it around or close over it.
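That escape hatch can be sketched as follows (a hypothetical illustration; `requestID` is assumed to be one of the context's `Sendable` properties):

```swift
import NIOCore

// Hypothetical sketch: even if LambdaContext itself is not Sendable,
// its individual properties are, so copies of them can cross
// concurrency domains safely.
func handle(_ event: ByteBuffer, context: LambdaContext) async {
    // Copy the Sendable property out of the non-Sendable context ...
    let requestID = context.requestID

    // ... and close over the copy, never over the context itself.
    await withTaskGroup(of: Void.self) { group in
        group.addTask {
            print("processing request \(requestID)")
        }
    }
}
```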
Why does the context need to hold the `TaskGroup`? I smell structured concurrency violations :)
`LambdaContext` is now marked as `Sendable` like before. We have got rid of the `addBackgroundTask(_:)` function in the revised proposal.
> ### StreamingLambdaHandler
>
> The new `StreamingLambdaHandler` protocol is the base protocol to implement a Lambda function. Most users will not use this protocol and instead use the `LambdaHandler` protocol defined in the `Codable` Support section.
Just for clarity, it might be nice to have a section talking about the `LambdaHandler`; I spent a bit of time trying to find the detailed explanation of it before realising it's in the top section.
> ### LambdaRuntime
>
> `LambdaRuntime` is the class that communicates with the Lambda control plane as defined in [Building a custom runtime for AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html) and forwards the invocations to the provided `StreamingLambdaHandler`. It will conform to `ServiceLifecycle.Service` to provide support for `swift-service-lifecycle`.
Does this work with `LambdaHandler` as well, or does that conform to `StreamingLambdaHandler` to ferry calls through?
In order to use a `LambdaHandler`, users will wrap it in a `LambdaCodableAdapter`. `LambdaCodableAdapter` conforms to `StreamingLambdaHandler`. So the `LambdaRuntime` type that you will use when using `LambdaHandler` will be:

```swift
let runtime: LambdaRuntime<LambdaCodableAdapter<MyLambdaHandler, Event, Output, JSONDecoder, JSONEncoder>>
```

but users basically never have to use that type explicitly. So to answer your question: yes, it works with `LambdaHandler` as well, but via the `LambdaCodableAdapter`.
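For illustration, the intended user-facing flow might look like this (a sketch based on the proposal; `Request`/`Response` are assumed `Codable` types, and the `LambdaRuntime` initializer shape is assumed from the proposal text):

```swift
struct Request: Codable { let name: String }
struct Response: Codable { let message: String }

struct MyLambdaHandler: LambdaHandler {
    func handle(_ event: Request, context: LambdaContext) async throws -> Response {
        Response(message: "Hello, \(event.name)!")
    }
}

// The full generic type is
// LambdaRuntime<LambdaCodableAdapter<MyLambdaHandler, Request, Response, JSONDecoder, JSONEncoder>>,
// but type inference means it never has to be written out.
let runtime = LambdaRuntime(handler: MyLambdaHandler())
try await runtime.run()
```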
> We decided to implement the approach in which a `LambdaResponseWriter` is passed to the function, since the approach in which a `LambdaResponse` is returned can trivially be built on top of it. This is not true vice versa.
>
> We welcome the discussion on this topic and are open to changing our minds and API here.
It would be interesting to see if anyone has any use cases that integrate something like Swift Middleware
I think this is a fair point and I would love to see how middleware would work with this writer approach. IMO middleware does make some sense for Lambda.
If using middleware with a request and context, do you think the response writer would be placed in the context, or be a separate parameter, thus meaning a departure from the current swift-middleware setup?
Regarding whether you provide a response writer versus returning a response that includes a writer, I'm not sure. I have gone back and forth on this when building HB. In most cases I have not seen any need for a response writer, but there are still situations where it makes things clearer, e.g. tracing (you want to finish your request span once the response has been written, not when you have returned a response that includes the closure that'll write the response).

In the end you can implement returning a response on top of an API that uses a response writer, but not the reverse, so for flexibility it should probably be a response writer.
I made this point above: middleware definitely does make sense for lambda (for example tracing should just be a middleware that can be used anywhere) and should just be integrated IMO. My only quasi-concern with the passed-in-writer approach is that for it to work with middleware, the middleware protocol has to suppress `Copyable` conformance on one of its associated types (its writer type), something that isn't going to be supported in Swift 6.0.

That said, I do think the argument that this is a lower-level and more flexible API makes sense.
I don't think we want to integrate the swift-middleware library directly. I think if users have those needs they should consider using hummingbird/smoke/vapor in lambda mode, which then brings the option to use middleware.

This doesn't mean that writing a middleware should be impossible. We should allow users to write middleware that they can stack.

Also, I think swift-middleware is currently not targeted for anytime soon. So in order to make progress we should not depend on it. Integrating swift-middleware (as is) on top of this proposed API is absolutely possible and therefore should not be considered a blocker.
We can debate whether we think the swift-middleware library should be integrated directly but I do agree it shouldn't be considered a blocker for this work/release.
```swift
/// the runtime can ask for further events. Note that AWS will continue to
/// charge you after the response has been returned, but work is still
/// processing in the background.
public func addBackgroundTask(_ body: sending @escaping () async -> ())
```
This doesn't feel necessary to me with the current API proposal, since we pass in an independent writer. A user can just write some bytes, then call the writer's `finish`, and then do any background work before returning from the `handle` method. They can even set that up in child tasks in the `handle` method to start the background work while handling the actual request.
That's a great call. Yes, if we keep the API that passes in a `LambdaResponseWriter`, we can put the user in charge!
Agree. This is unstructured concurrency, and given that the user controls `finish`, they know when the response is done.
@weissi this would not be unstructured, as the underlying runtime would inject a `TaskGroup` into the context and that task group would be used to schedule the child task here. The underlying runtime would then ensure that all tasks have completed before it would ask for more work. This approach would be necessary if we used an approach in which the user returns a `LambdaResponse`:

```swift
protocol LambdaStreamingResponse {
    func handle(_ event: ByteBuffer, context: LambdaContext) async throws -> LambdaResponse
}
```

However, this is needed for the `LambdaHandler` API:

```swift
public protocol CodableLambdaHandler: ~Copyable {
    associatedtype Event
    associatedtype Output

    // The only way to schedule background work that can continue after
    // returning the Output here is by having addBackgroundTask on `LambdaContext`.
    func handle(_ event: Event, context: LambdaContext) async throws -> Output
}
```

@FranzBusch given this, do we want to have two different `LambdaContext`s? Or are we fine with keeping `LambdaContext.addBackgroundTask` even in the situation where it isn't really needed? I think I would opt for keeping it (so that we have it for the 99% use-case).
@fabianfett I'm pretty sure that it's unstructured; it just manages to not use a `Task {` to achieve it. It passes the background work into a reference which will then be owned by something further up the stack.

The test is easy: if you write this code

```swift
do { // creating a useless piece of structure (so we can observe Structured Concurrency)
    thing.addBackgroundWork { print("AAAA") }
} // the structure ends here
print("BBBB")
```

is there any way in which `AAAA` could print after `BBBB`? If so: unstructured.
> The problem with splitting it across two methods is if the post work needs context from the initial work.

but we can pass the context, no?
@weissi what if you want to create a task that starts immediately but is allowed to run longer... I totally get the benefits. I think we need something that is better than `Task {}` here. And I think `addBackgroundTask {}` is significantly better here, as it will guarantee that you get runtime until your background task finishes. We could not guarantee this for `Task {}`. Also, two methods on the interface won't work with the closure API.
@fabianfett Not sure I understand what "a task that starts immediately but is allowed to run longer" is. Could you provide some example code?
An API call that

- does a short computation which is returned to you immediately
- spawns a longer computation, whose result is sent to you via email once it finishes (can take up to 15 min)

Both should start as soon as the API is invoked. We can potentially question the API design here. But this is possible with Lambda and we should allow users to do this, if they opt to.
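Under the writer-based API, this pattern could be sketched roughly as follows (names such as `writeAndFinish` are taken from the proposal; `startLongComputationAndEmailResult` is a hypothetical helper):

```swift
struct QuickAckHandler: StreamingLambdaHandler {
    mutating func handle(
        _ event: ByteBuffer,
        responseWriter: some LambdaResponseStreamWriter,
        context: LambdaContext
    ) async throws {
        // 1. Return the short computation's result to the caller immediately.
        try await responseWriter.writeAndFinish(ByteBuffer(string: #"{"status":"accepted"}"#))

        // 2. Keep running the long computation. Lambda keeps the sandbox
        //    (and keeps billing) until handle(...) returns, even though the
        //    caller already received its response.
        try await startLongComputationAndEmailResult(event)
    }
}
```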
Probably the most consistent approach for @fabianfett's use case is to provide a wrapping `CodableResponseWriter` (potentially with just a `writeAndFinish` API) while allowing work to continue past the result being returned to the control plane. It is not as simple as returning the result directly. We could provide both forms, but that does add its own complexity to the library overall.
```swift
public var invokedFunctionARN: String { get }

/// The timestamp at which the function times out.
public var deadline: DispatchWallTime { get }
```
Can we use `ContinuousClock.Instant` here?
Nope this is epoch time.
As per the Lambda runtime API:

> `Lambda-Runtime-Deadline-Ms` – The date that the function times out in Unix time milliseconds. For example, `1542409706888`.
Got you, so we would need a `UTCClock.Instant` here.
Yes. Sadly we don't have that yet. Only way out would be if Lambda built its own Clock. But I don't think we want to do that.
Maybe you should create a `LambdaUTCTimeClock` type that conforms to all the new hotness but implements the clock, with the hope that one day you can deprecate it to `typealias LambdaUTCTimeClock = UTCClock` with a bit of luck.
@weissi would you build the sleeping on top of dispatch?
> @weissi would you build the sleeping on top of dispatch?

No. I'd just create a custom Clock type (because the language lacks `UTCClock` for some reason), which you could for example back with `clock_gettime` and `struct timespec`, hidden as an implementation detail.
How would you implement the required sleeping methods though?
https://developer.apple.com/documentation/swift/clock/sleep(for:tolerance:)
You can just use the normal `Task.sleep` on the non-suspending clock for that. UTC time and the non-suspending clock run at the same speed.
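A minimal sketch of that idea (a hypothetical helper, not part of the proposal): convert the Unix-epoch-millisecond deadline into a remaining duration and sleep on the continuous clock, which advances at the same rate as UTC:

```swift
import Foundation

func sleep(untilDeadlineUnixMs deadline: Int64) async throws {
    let nowMs = Int64(Date().timeIntervalSince1970 * 1000)
    let remaining = deadline - nowMs
    guard remaining > 0 else { return }
    // ContinuousClock and wall-clock time tick at the same speed, so
    // sleeping for the remaining interval lands on the UTC deadline.
    try await Task.sleep(for: .milliseconds(remaining), clock: .continuous)
}
```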
```swift
enum Lambda {
    /// This returns the default EventLoop that a LambdaRuntime is scheduled on.
    /// It uses `NIOSingletons.posixEventLoopGroup.next()` under the hood.
    public static var defaultEventLoop: any EventLoop { get }
}
```
Like noted I don't see why we would need to create a separate static var for this?
answered above.
```swift
handler: Handler,
encoder: Encoder,
decoder: Decoder
```
Would it make more sense for `handler` to be the final argument so it could be a trailing closure if the user wants?

Is there an end-to-end example somewhere of how this looks from `main()` for the case where custom coders need to be used?
```swift
let runtime = LambdaRuntime { (event: Input, context: LambdaContext) in
    Greeting(echoedMessage: event.message)
}

try await runtime.run()
```
For this, the simplest use case, would it make sense to provide a static `main()` on `LambdaRuntime` that just calls its `run` method, so users can use `@main` if this is all they need to do?
Not sure I can follow here. What would that code look like?
Wouldn't it look like:

```swift
@main
LambdaRuntime { (event: Input, context: LambdaContext) in
    Greeting(echoedMessage: event.message)
}
```
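For comparison, what already works without any new API is an explicit `@main` entry point (a sketch; `Input`/`Greeting` are the example types from this thread):

```swift
@main
struct MyLambda {
    static func main() async throws {
        // Wrap the closure-based runtime in a type so @main has an entry point.
        let runtime = LambdaRuntime { (event: Input, context: LambdaContext) in
            Greeting(echoedMessage: event.message)
        }
        try await runtime.run()
    }
}
```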
> The current API extensively uses the `EventLoop` family of interfaces from SwiftNIO in many areas. To use these interfaces correctly though, it requires developers to exercise great care and be aware of certain details such as never running blocking code on the same `EventLoop` the library uses. Developers also need to understand the various transform
same applies to async/await. I'd scratch this
```swift
/// - Parameter logger: A logger
public init(
    handler: consuming sending Handler,
    eventLoop: EventLoop = Lambda.defaultEventLoop,
```
Will we still have the ability to run everything in just one thread (the main thread)? I think for many lambdas you'll never want to fork off a second thread at all. NIO thread & async/await should just run in the main thread (but as global default executor, not as `@MainActor`).
I don't have further input other than what was already mentioned. Looks like a solid update to me, with some small considerations here and there.
- Remove the `reportError(_:)` method from `LambdaResponseStreamWriter` and instead make the `handle(...)` method of `StreamingLambdaHandler` throwing.
- Remove the `addBackgroundTask(_:)` method from `LambdaContext` due to structured concurrency concerns and introduce the `LambdaWithBackgroundProcessingHandler` protocol as a solution.
- Introduce `LambdaHandlerAdapter`, which adapts handlers conforming to `LambdaHandler` with `LambdaWithBackgroundProcessingHandler`.
- Update `LambdaCodableAdapter` to now be generic over any handler conforming to `LambdaWithBackgroundProcessingHandler` instead of `LambdaHandler`.
cc @Joannis @sebsto @czechboy0 @weissi @FranzBusch @adam-fowler @0xTim @tachyonics We have updated the proposal to address the concerns raised. We are looking forward to hearing your feedback.

@aryan-25 Thanks so much for continuing to push this forward! 🙏🚀
Thanks @aryan-25, this is looking great!
I am fine with having the three handler protocols; we will just need to make sure that each is documented with the use case it is recommended for, and, for the writer protocols, what it means for the control plane to finish the response but delay returning from the `handle` function.
> type `ByteBufferAllocator` will also be removed because (1), we generally want to reduce the number of SwiftNIO types exposed in the API, and (2), `ByteBufferAllocator` does not optimize the allocation strategies. The common pattern observed across many libraries is to re-use existing `ByteBuffer`s as much as possible. This is also what we do for the `LambdaCodableAdapter` (explained in the **Codable Support** section) implementation.
Perfect, this is a good justification for this change!
Additional comments still to be addressed.
This is looking good!
> same way the lifecycles of the required services are managed, e.g. `try await ServiceGroup(services: [postgresClient, ..., lambdaRuntime], ...).run()`.
> - Dependencies can now be injected into `LambdaRuntime` — `swift-service-lifecycle` guarantees that the services will be initialized _before_ the `LambdaRuntime`'s `run()` function is called.
What guarantees are these? ServiceLifecycle doesn't currently have any initialization order guarantees, unless I missed something.
Yes, you are right. I was not aware of this. The proposal has been updated to remove the term "guarantees" and now states that the required services will be initialized together with `LambdaRuntime`.

This leads to cases where ServiceLifecycle has called `LambdaRuntime`'s `run()` function, and then `LambdaRuntime` requests something from a service that has not been started up yet.

The solution for this has to be provided by the services, e.g. PostgresNIO hangs on such requests until the `run()` function has been called.
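Putting the pieces together, the wiring might look like this (a sketch; the PostgresClient configuration is elided, and the `LambdaRuntime` closure shape and `Request`/`Response` types are assumed from the proposal):

```swift
import Logging
import ServiceLifecycle

let logger = Logger(label: "lambda")
let postgresClient = PostgresClient(configuration: /* ... */)

let runtime = LambdaRuntime { (event: Request, context: LambdaContext) in
    // If the client's run() has not started connections yet,
    // PostgresNIO holds the query until it has.
    try await postgresClient.query("SELECT 1")
    return Response(message: "ok")
}

// ServiceGroup starts all services together; there is no ordering guarantee.
let serviceGroup = ServiceGroup(
    services: [postgresClient, runtime],
    gracefulShutdownSignals: [.sigterm],
    logger: logger
)
try await serviceGroup.run()
```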
```swift
/// Wraps an underlying handler conforming to ``LambdaHandler``
/// with ``LambdaWithBackgroundProcessingHandler``.
public struct LambdaHandlerAdapter<
```
It's not clear how the `Event` is decoded and the `Output` encoded in the `LambdaHandlerAdapter`.
`LambdaHandlerAdapter` does not perform any encoding/decoding, since it just wraps `LambdaHandler` with `LambdaWithBackgroundProcessingHandler` (both protocols have the generic `Event` and `Output` types).

`LambdaCodableAdapter` (which wraps `LambdaWithBackgroundProcessingHandler` with `StreamingLambdaHandler`) does the encoding/decoding through the encoder and decoder passed to its constructor.
```swift
try await outputWriter.write(result: Greeting(echoedMessage: event.messageToEcho))

// Perform some background work, e.g:
try await Task.sleep(for: .seconds(10))
```
I like this setup a lot
Remove `~Copyable` from `LambdaResponseStreamWriter` and `LambdaResponseWriter`. Instead, throw an error when `finish()` is called multiple times or when `write`/`writeAndFinish` is called after `finish()`.
```swift
init(context: AWSLambdaRuntimeCore.LambdaInitializationContext) async throws {
    /// Instantiate service
    let client = PostgresClient(configuration: ...)
```
Using Postgres as an example of something to use with AWS Lambda is quite misleading for users, for two reasons:

- The main point of using Lambda is autoscaling; this is usually very difficult to obtain from an RDS like Postgres and could require connection pooling features. Ideally, the use case we present should be a connection to a NoSQL database such as DynamoDB or MongoDB.
- In addition, given the connection-oriented nature of the DB protocol, opening the client connection during the Lambda's initialisation could lead to connection time-outs due to the life cycle of the Lambda. There is no guarantee of how long a Lambda will stay allocated after the first usage. At some point the connection will be closed from the DB side, leaving a subsequent invocation with a closed connection on the Lambda side.
> The main point of using Lambda is autoscaling, this is usually something very difficult to obtain from a RDS like Postgres and could require connection pooling features. Ideally, the use case we should present is the connection to a NO-SQL database such as DynamoDB or MongoDB.

While I generally agree, neither Soto nor MongoDB integrate with swift-service-lifecycle today. The goal of this new API is to integrate better with the shared building blocks of the Swift server ecosystem. Since swift-service-lifecycle is an important one, we opted for PostgresNIO, as it integrates with it today.

> In addition giving the nature of connection oriented protocol of the DB, opening the client connection during the Lambda's initialisation could lead to connection time-out due to the life cycle of the Lambda.

PostgresClient can lazily create connections. This is totally configurable.

> At some point the connection of the DB will be closed from the DB side, leaving subsequent invocation with a closed connection from the Lambda side.

That's totally fine, the connection pool will automatically reconnect for the user.
The discussion here has settled. @aryan-25 addressed most of the feedback. From my point of view, the only open issue concerns the use of …

Last call to all participants: I intend to merge this PR on Monday. Is there any other unaddressed feedback besides #384 that we should create an extra ticket for?
Co-authored-by: Fabian Fett <[email protected]>
Hello 👋

I am Aryan, a Computer Science student and currently an intern at Apple on the Swift on Server team. As part of my internship, I have been working with @fabianfett to propose a new v2 API for the `swift-aws-lambda-runtime` library.

`swift-aws-lambda-runtime` is an important library for the Swift on Server ecosystem. The initial API was written before async/await was introduced to Swift. When async/await was introduced, shims were added to bridge between the underlying SwiftNIO `EventLoop` interfaces and async/await.

However, just like `gRPC-swift` and `postgres-nio`, we now want to shift to solely using async/await constructs instead of `EventLoop` interfaces. For this, large parts of the current API have to be reconsidered. This also provides a good opportunity to add support for new AWS Lambda features such as response streaming.

We have written a document that explains the current limitations of the library and proposes a detailed design for the v2 API.

Please read the proposal and voice your opinions either here or on the forum post. We are looking forward to your feedback!

cc: @sebsto @tachyonics @FranzBusch @Lukasa @tomerd @weissi @adam-fowler @ktoso @0xTim