Distributed module documentation #326

Draft · wants to merge 7 commits into base: main
Conversation

@ktoso (Contributor) commented on Aug 7, 2024:

This is a very early version of the Distributed module documentation; it is not polished yet and may contain stray sentences and sections.

We'll figure out how best to present the information through the pitch posted on the forums over here:

I'll also keep working on improving the tone and voice; right now it is not the most consistent.

For now, we can discuss what needs to be explained in this documentation, and I have started a forums thread for this purpose: TSPL Pitch: Distributed.

ktoso added 2 commits on August 7, 2023 at 09:56
Konrad originally drafted this content in RST, in May 2022. Alex converted it to Markdown without preserving the Git history, because that history was only one commit that added this content.

The original parent commit was ed422ae.

@martialln left a comment:


My comments are mainly on the last section, around Emulating callbacks, as I found some errors in the code examples.

> we registered a closure to be called at some later point in time, when some additional information was looked up by the
> actor. In local-only actors, a common way of modeling this is passing a closure to the actor.
>
> In distributed actors, the same can be achieved using distributed method calls rather than closures, like this:
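The documentation's actual code example is not included in this hunk. As a rough sketch of the pattern being described, it could look something like the following; the Alice/Bob shapes and the LocalTestingDistributedActorSystem default are assumptions, not the text under review:

```swift
import Distributed

// Assumed default actor system, only so the sketch is self-contained;
// LocalTestingDistributedActorSystem ships with the Distributed module.
typealias DefaultDistributedActorSystem = LocalTestingDistributedActorSystem

distributed actor Alice {
    // Instead of accepting a closure, accept a reference to the caller
    // and call back into one of its distributed methods later.
    distributed func call(back bob: Bob) -> String {
        Task {
            // Do some asynchronous processing AFTER returning from 'call'...
            try await bob.deliver(laterReply: "Here's the info you asked for!")
        }
        return "Thanks for your call!"
    }
}

distributed actor Bob {
    distributed func deliver(laterReply: String) {
        print("later: \(laterReply)")
    }

    func test(alice: Alice) async throws {
        let immediateReply = try await alice.call(back: self)
        print("immediately: \(immediateReply)")
    }
}
```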


We should explain that Alice and Bob must share the same ActorSystem. There are also two cases to cover: whether or not Bob's type is known on Alice's side.

print("later: \(laterReply)")
}
}


Using a resolvable protocol, we could have something like this:

import Distributed

@Resolvable
protocol InfoListener: DistributedActor, Codable {
    distributed func additionalInfo(_ info: String)
}

distributed actor Alice {
    distributed func call(later: $InfoListener) async -> String {
        Task.detached {
            // Do some asynchronous processing AFTER returning from the 'call' method...
            try await later.additionalInfo("Here's the info you asked for!")
        }

        return "Thanks for your call!"
    }
}

// Bob declares conformance to the actor system's serialization requirement
// (e.g. Codable), which InfoListener also requires.
distributed actor Bob: DefaultDistributedActorSystem.SerializationRequirement {
    func test(alice: Alice) async throws {
        // Resolve an InfoListener stub that points back at Bob itself.
        let infoListener = try $InfoListener.resolve(id: self.id, using: self.actorSystem)
        let immediateReply = try await alice.call(later: infoListener)

        print("immediately: \(immediateReply)")
    }
}

extension Bob: InfoListener {
    distributed func additionalInfo(_ info: String) {
        print("later: \(info)")
    }
}

@ktoso (Contributor, Author) replied:

That one's tricky; I would not want `$` types in distributed method signatures, but perhaps there's no workaround yet... I'll dig into this one.


@akbashev left a comment:


Overall I like the page; it gives a quick recap of the module!

I want to reread it a bit later with a fresh mind, but at the moment I have two small comments. I've added them as suggestions, but take them with a grain of salt.

  1. Overall, I think it would be nice to add a short sentence on why exactly one should use the Distributed module.
  2. I think memory management could also be highlighted; it's not quite obvious at first glance. I'm not sure about placement, though, since it's quite specific. Maybe it could even go in the Implementing Your Own DistributedActorSystem section, as it's mostly something for actor system implementors to care about?

> The actor system may return a local or remote reference; however, it should not perform asynchronous work, such as
> trying to confirm whether the actor exists remotely. The resolve method should quickly return either a known
> local actor identified by the passed `id`, or a remote reference if the actor may exist remotely.
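As a rough illustration of that contract, the resolve logic of a hypothetical actor system might look like the sketch below. MyActorSystemSketch, its String-based ActorID, and the registry are assumptions; the other DistributedActorSystem requirements (assignID, remoteCall, and so on) are omitted.

```swift
import Distributed

final class MyActorSystemSketch {
    typealias ActorID = String

    enum ResolveError: Error {
        case wrongActorType(ActorID)
    }

    // Actors known to be local to this system. A real system would typically
    // hold these weakly, so the registry doesn't keep them alive on its own.
    private var managedActors: [ActorID: any DistributedActor] = [:]

    // Mirrors the shape of DistributedActorSystem's resolve(id:as:) requirement.
    func resolve<Act>(id: ActorID, as actorType: Act.Type) throws -> Act?
        where Act: DistributedActor, Act.ID == ActorID
    {
        // No asynchronous work here, e.g. no network round-trip to check
        // whether the actor exists remotely.
        if let known = managedActors[id] {
            // A local instance is registered under this id; hand it back.
            guard let actor = known as? Act else {
                throw ResolveError.wrongActorType(id)
            }
            return actor
        }
        // Not known locally: returning nil tells the runtime to create a
        // remote reference (proxy) for this id instead.
        return nil
    }
}
```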


@akbashev commented on Aug 8, 2024:


Suggested change:
> Note: Be careful with memory management. Local actors are already stored on the node, so it might be better to keep references to them weak. However, for remote actors a reference is generated, which could be cleaned up prematurely if not kept strongly. // TODO: Link to the Automatic Reference Counting page?
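If such a note lands in the text, a small sketch of the weak-reference side might help as well. The WeakDistributedActorRef box below is hypothetical, not an API from the Distributed module:

```swift
import Distributed

// Hypothetical registry entry: hold a local actor weakly so that registering
// it with the actor system does not extend its lifetime.
final class WeakDistributedActorRef {
    weak var actor: (any DistributedActor)?

    init(_ actor: any DistributedActor) {
        self.actor = actor
    }
}

// Usage sketch inside a system's registry:
//   var managedActors: [ActorID: WeakDistributedActorRef] = [:]
// Remote references, by contrast, are freshly created proxies and are only
// kept alive by whoever holds them strongly.
```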

@ktoso (Contributor, Author) replied:

Hmmm, I agree we need to document this, but it maybe needs its own section with more info.

ktoso and others added 5 commits on August 28, 2024 at 10:32
Thank you for the input, everyone! I'll keep working on this :)

Co-authored-by: Jaleel Akbashev <[email protected]>
Co-authored-by: Martial Lienert <[email protected]>
@ktoso changed the title from "[WIP] Add Distributed module documentation" to "Distributed module documentation" on Feb 23, 2025