
Write Callback Block on Queue vs on Write #36

Open
carlbrown opened this issue Aug 26, 2017 · 6 comments

@carlbrown
Contributor

I'm pulling a comment from @helje5 over from #28 into this issue, because that issue is getting cluttered.

The worse aspect is that this calls the done-cb before the queued write has actually finished! This needs to be passed along the stream, that's the whole point.

My reason for not doing this was two-fold:

  1. I was concerned about side-effects from passing the completion blocks into the Socket code, either in the form of memory leaks or in the form of deadlocks.
  2. I don't see a case where it matters. The best this code can do is to hold onto the CB until after the socket write() completes. But, as the documentation for write(2) explicitly says "A successful return from write() does not make any guarantee that data has been committed," holding onto the CB that long doesn't provide any more reliability or information than if it were called where it is now.

There's no guarantee the connection won't be closed before the queued bytes are written to the socket, but there's no guarantee they'll make it to the far end of the TCP connection either.

So why does it matter whether the callback happens after the bytes are queued for write() or once write() has completed? The guarantee (or lack thereof) is the same either way. By Occam's Razor, then, I think the simpler design is better.
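To make the question concrete, here's a rough sketch (hypothetical names, not the actual server code) of the two placements being discussed - firing the completion block as soon as the chunk is enqueued (what the code does today) versus carrying it with the chunk and firing it only after the bytes have been handed to the socket:

import Foundation

final class PendingWriteQueue {
  typealias Completion = () -> Void

  private let queue = DispatchQueue(label: "http.write-queue")
  private var pending: [(chunk: [UInt8], done: Completion)] = []

  // Injected so the sketch stays self-contained; in the server this would wrap
  // the actual socket send.
  private let sendBytes: ([UInt8]) -> Void

  init(sendBytes: @escaping ([UInt8]) -> Void) {
    self.sendBytes = sendBytes
  }

  func write(_ chunk: [UInt8], completion: @escaping Completion) {
    queue.async {
      // Current behaviour: call completion() right here, as soon as the chunk is
      // queued. The alternative: keep the block with the chunk and call it only
      // after the send, as flush() does below.
      self.pending.append((chunk: chunk, done: completion))
      self.flush()
    }
  }

  private func flush() {
    while !pending.isEmpty {
      let (chunk, done) = pending.removeFirst()
      sendBytes(chunk)  // hand the bytes to the socket
      done()            // only now report the write as "done"
    }
  }
}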

I might well be missing something here. If so, I'd love to know what it is.

Thanks

@helje5
Contributor

helje5 commented Aug 26, 2017

You don't need a done-CB at all if it doesn't provide 'done' functionality. What purpose does it serve in the current setup? None at all - just drop it then ;-)

The reason we added the CB to Johannes' original proposal was to add support for proper back pressure handling. Which is somewhat necessary for async-based servers ...

P.S.: When talking about sockets, refer to man 2 send, not write. The "does not make any guarantee that data has been committed" refers to filesystem buffers that may not have been flushed to disk. The semantics of a write on a socket file handle are different.

@carlbrown
Contributor Author

The reason we added the CB to Johannes' original proposal was to add support for proper back pressure handling. Which is somewhat necessary for async-based servers ...

I need to understand better how this would work, because even if I waited to call the CB until after send(), I don't understand how that would help. Is there documentation or a book or something you can point me to?

The first indication of congestion we would get (I think) is that the send() call would return fewer bytes sent than requested (in which case we spin up a DispatchSourceWrite to handle the retry). In that case, would the CB just be held until the DispatchSourceWrite was done? Or should there be a "partially written" value for the callback? And then what would the application do with that information?
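For the sake of discussion, something like this rough sketch (hypothetical names, simplified error handling - not the actual server code) is how I'm picturing the "hold the CB until the DispatchSourceWrite is done" option:

import Foundation
#if canImport(Glibc)
import Glibc
#else
import Darwin
#endif

// Try send(2); if the kernel accepts fewer bytes than requested, park the remainder
// behind a DispatchSourceWrite and retry once the socket is writable again. The
// completion block only fires after every byte has been handed to the kernel.
func sendAll(fd: Int32, bytes: [UInt8], queue: DispatchQueue,
             completion: @escaping () -> Void) {
  var remaining = bytes

  func attempt() {
    let sent = remaining.withUnsafeBytes { raw in
      send(fd, raw.baseAddress, raw.count, 0)
    }
    if sent > 0 { remaining.removeFirst(sent) }

    if remaining.isEmpty {
      completion()  // everything was accepted by the kernel
    } else {
      // Partial write (or the socket buffer is full): wait until the descriptor
      // is writable again, then retry with whatever is left.
      let source = DispatchSource.makeWriteSource(fileDescriptor: fd, queue: queue)
      source.setEventHandler {
        source.cancel()
        attempt()
      }
      source.resume()
    }
  }

  queue.async { attempt() }
}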

I'm happy to fix this (or review Pull Requests), but I'm not clear on what would need to happen.

P.S.: When talking about sockets, refer to man 2 send, not write. ...

You are, of course, correct - sorry. That's what I get for writing up a bug while I'm watching a hurricane make landfall live on my TV. (I'm in a safe part of Texas, but I have family along the coast that I'm worried about, so I wasn't fully engaged with what I was writing. My apologies.)

@helje5
Contributor

helje5 commented Aug 26, 2017

You want information on back pressure? I suppose it is kind of self-explanatory via its water analogy :-) You can only put more stuff in when you take stuff out. Well, the basic idea is that you suspend a stream of input data until the output stream is ready to take on more data (usually with additional buffers). Otherwise you can quickly spool up much more data than the backend can actually process (in a timely manner, or before running out of buffer memory).
Resources, hm. I like the Stream tutorial; it may explain it, but it is a good resource on async I/O in any case.

Doing a quick Google search, this also seems to explain it: Node Backpressuring in Streams; and this also looks good: Some thoughts on asynchronous API design.

As a very basic example, say you are piping data from a DispatchIO source to the HTTP server output stream. It may look something like this (not working code):

source.onData { data in
  source.suspend()
  httpResponse.write(data) {
    source.resume()
  }
}

(As mentioned, you would usually use buffers. If you have a lot of time, feel free to browse the Noze.io implementation of GCD-based streams ;-) )
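A somewhat more self-contained sketch of the same pattern (hypothetical names, with a stand-in for the slow HTTP response - not this project's API or Noze.io):

import Foundation
#if canImport(Glibc)
import Glibc
#else
import Darwin
#endif

// Stand-in for httpResponse.write(data) { ... } above; the delayed completion
// simulates a consumer that drains more slowly than the producer reads.
final class SlowConsumer {
  func write(_ chunk: [UInt8], completion: @escaping () -> Void) {
    DispatchQueue.global().asyncAfter(deadline: .now() + .milliseconds(50),
                                      execute: completion)
  }
}

// Reading is paused while a chunk is in flight and resumed only when the consumer's
// completion fires, so the producer can never run ahead of the consumer.
func pipeWithBackPressure(readFD: Int32, queue: DispatchQueue, consumer: SlowConsumer) {
  let reader = DispatchSource.makeReadSource(fileDescriptor: readFD, queue: queue)

  reader.setEventHandler {
    var buffer = [UInt8](repeating: 0, count: 4096)
    let count = read(readFD, &buffer, buffer.count)
    guard count > 0 else { reader.cancel(); return }  // EOF or error: stop reading

    reader.suspend()                                  // stop producing
    consumer.write(Array(buffer[0..<count])) {
      reader.resume()                                 // consumer ready: produce more
    }
  }
  reader.resume()
}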

@carlbrown
Contributor Author

So the idea is just that the application has the opportunity to pause until after socket().send() has completed?

Ok, so let me think out loud here:

  • So the pathological case is that we have a server with a really large receive bandwidth and a small send bandwidth (which is the opposite of what you'd normally expect, but possible). And the server is accepting a chunk from the client on the big pipe, then processing it somehow, and then queuing it to be sent back to the client on the small pipe.

  • So with the current implementation, chunks would be added to the WriteQueue much faster than they could be sent to the client (the proverbial Leaky Bucket), and so the WriteQueue would just continue to grow until failure (for example, until available RAM was exhausted).

  • But if the CB waited until after socket().send() was done, then receiving from the client could be paused and each processed chunk would wait for the network socket to have buffer space available. This would mean that things would be added to the queue only at roughly the same rate that they were taken off, preventing the queue from growing without bound.

If I've got that right, it makes sense to me, and it's worth risking potential memory leaks to support.

I'll take a pass at this and see what I can do.

Thanks for the feedback. I hadn't thought the scenario all the way through.

@carlbrown carlbrown self-assigned this Aug 26, 2017
@carlbrown carlbrown added the bug label Aug 26, 2017
@helje5
Contributor

helje5 commented Aug 26, 2017

So the pathological case is that we have a server with a really large receive bandwidth and a small send bandwidth

Yeah, this is a very common scenario; the most obvious example is just serving a plain file (though that has other optimisations). The outbound is not always, but quite often, slower than the producer (the application, back-office db/cache server, etc.).

@carlbrown
Contributor Author

Status update: I did a naïve implementation of this at carlbrown@5d503a3 and it didn't work - I got duplicated bytes in the output, so the completionBlock was getting called more than once.

I started trying to troubleshoot it, but I kept needing to make (temporary, debugging, mostly logging) changes to BlueSocket, and its complicated design was slowing me way down, so I switched gears and ripped it out, leading to #39. (To be clear, I don't think BlueSocket is the problem; it was just making it more difficult for me to find the problem than I was willing to tolerate.)

Once #39 is merged, I'll revisit this.
