mutex.lock
should give you a disposable token, rather than the mutex itself being disposable
#27
Neither is the design we've been working on, though it is close. The approach we've been considering is somewhat similar to the C++ `std::unique_lock`. The API I've been discussing with the V8 teams at Google and Microsoft looks something like this:

type Lockable = Mutex; // `| SharedMutex` or others in the future
class UniqueLock<T extends Lockable> {
constructor(mutex?: Lockable, t: "lock" | "defer-lock" | "try-to-lock" | "adopt-lock" = "lock");
get mutex(): Lockable | undefined;
get ownsLock(): boolean;
tryLock(): boolean; // try to lock the mutex. non-blocking
lock(): void; // lock the mutex (possibly with a timeout?). blocking
lockAsync(): Promise<void>; // locks the mutex asynchronously. non-blocking
unlock(): void; // unlocks the mutex
release(): void; // detach the mutex without unlocking
swap(other: UniqueLock<T>): void; // swap state with another mutex
move(): UniqueLock<T>; // create a new lock with this lock's state, releasing this lock
[Symbol.dispose](): void; // unlock mutex and release the lock if it is taken.
}

Examples:

// take lock synchronously (off UI thread)
{
using lck = new UniqueLock(mut);
} // lock released
// take lock asynchronously (on UI thread)
{
using lck = new UniqueLock(mut, "defer-lock");
await lck.lockAsync();
}
// try to take lock (on or off UI thread) 1/2
{
using lck = new UniqueLock(mut, "try-to-lock");
if (lck.ownsLock) {
...
}
}
// try to take lock (on or off UI thread) 2/2
{
using lck = new UniqueLock(mut, "defer-lock");
if (lck.tryLock()) {
...
}
}
// take, release, and retake lock (off UI thread)
{
using lck = new UniqueLock(mut);
// has lock
lck.unlock();
// does not have lock
lck.lock();
// has lock again
}
// adopt an existing lock, such as when bootstrapping an application started from another
// short-lived thread that is no longer participating in coordination.
{
const lck1 = new UniqueLock(mut, "adopt-lock"); // adopts lock that was already taken
lck1.ownsLock; // true
}
// swapping locks
{
using lck1 = new UniqueLock();
{
using lck2 = new UniqueLock(mut);
...
lck1.ownsLock; // false
lck1.mutex; // undefined
lck2.ownsLock; // true
lck2.mutex; // [object Mutex]
lck1.swap(lck2);
lck1.ownsLock; // true
lck1.mutex; // [object Mutex]
lck2.ownsLock; // false
lck2.mutex; // undefined
}
}

And if/when we later add a `SharedMutex`, we could introduce a corresponding shared-lock class along the same lines. By having the lock be a separate object from the mutex, the mutex itself remains purely shared state while the locking behavior stays thread-local.
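To make the semantics above concrete, here is a minimal, single-threaded mock of the `UniqueLock` surface, written against a toy non-blocking mutex. `ToyMutex` and the throw-on-contention behavior of `lock()` are assumptions for illustration only; the real proposal would block or suspend instead.

```javascript
// Fallback for runtimes that predate the explicit-resource-management symbol.
const DISPOSE = Symbol.dispose ?? Symbol.for("Symbol.dispose");

class ToyMutex {
  locked = false;
}

class UniqueLock {
  #mutex; #owns = false;
  constructor(mutex, t = "lock") {
    this.#mutex = mutex;
    if (mutex === undefined) return;
    if (t === "lock") this.lock();
    else if (t === "try-to-lock") this.tryLock();
    else if (t === "adopt-lock") this.#owns = true; // caller already holds the mutex
    // "defer-lock": attach to the mutex without locking it
  }
  get mutex() { return this.#mutex; }
  get ownsLock() { return this.#owns; }
  tryLock() { // non-blocking
    if (this.#owns || !this.#mutex || this.#mutex.locked) return false;
    this.#mutex.locked = true;
    return (this.#owns = true);
  }
  lock() { // a real implementation would block; the mock can only throw
    if (!this.tryLock()) throw new Error("mutex is already locked");
  }
  unlock() {
    if (!this.#owns) throw new Error("lock is not owned");
    this.#mutex.locked = false;
    this.#owns = false;
  }
  release() { this.#mutex = undefined; this.#owns = false; } // detach without unlocking
  swap(other) { // exchange mutex and ownership with another lock
    [this.#mutex, other.#mutex] = [other.#mutex, this.#mutex];
    [this.#owns, other.#owns] = [other.#owns, this.#owns];
  }
  [DISPOSE]() { if (this.#owns) this.unlock(); }
}
```

The point of the mock is only to show that all mutation of lock state goes through the thread-local lock object, never through the shared mutex directly.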
Unfortunately, a simple |
Can you give examples of cases for which the more complicated API is needed?
I gave a number of examples, above. While fairly simplistic, they do illustrate a number of real-world uses of `UniqueLock`.
Sorry, I meant example use cases. Things you're trying to do with the more complicated API, not just how you call it.
Of the above API examples, several rely on capabilities a bare lock/unlock surface doesn't provide. The use cases for multi-mutex locking with deadlock avoidance are a concrete illustration. Let's imagine, for the moment, that `Mutex` had the simpler token-based API:

interface Mutex {
lock(): Disposable;
lockAsync(): Promise<Disposable>;
tryLock(): Disposable | undefined;
}

To implement a multi-mutex lock with deadlock prevention, like C++'s `std::scoped_lock`, you could write:

/**
* @param {Iterable<Mutex>} mutexes
* @returns {Disposable}
*/
export function lockAll(mutexes) {
mutexes = Array.from(mutexes);
let locks = new Array(mutexes.length);
let remaining = mutexes.length;
let index = 0;
let error;
let hasError = false;
while (!hasError) {
try {
if (remaining === 0) {
// all locks taken, return a disposable to release them later
const scope = new DisposableScope();
for (const lock of locks.slice(index)) {
scope.use(lock);
}
for (const lock of locks.slice(0, index)) {
scope.use(lock);
}
return scope;
}
if (remaining === mutexes.length) {
// always wait for the first lock
locks[index] = mutexes[index].lock(); // NOTE: lock token allocation
}
else {
locks[index] = mutexes[index].tryLock(); // NOTE: lock token allocation
if (!locks[index]) {
// failed to take lock. release each token and start over at the current index
let i = (index + mutexes.length - 1) % mutexes.length;
while (remaining < mutexes.length) {
// always unlock all locks taken, even if one unlock fails for some reason
try {
locks[i]?.[Symbol.dispose](); // NOTE: lock token release
locks[i] = undefined;
}
catch (e) {
error = error ? new SuppressedError(e, error) : e;
hasError = true;
}
i = (i + mutexes.length - 1) % mutexes.length;
remaining++;
}
continue;
}
// lock was taken, move to next mutex
index = (index + 1) % mutexes.length;
remaining--;
}
}
catch (e) {
error = error ? new SuppressedError(e, error) : e;
hasError = true;
}
}
// if we reach this point, an error occurred. unlock all locks taken in reverse order and throw the error
let i = (index + mutexes.length - 1) % mutexes.length;
while (remaining < mutexes.length) {
// always unlock all locks taken, even if one unlock fails for some reason
try {
locks[i]?.[Symbol.dispose]();
}
catch (e) {
error = new SuppressedError(e, error);
}
i = (i + mutexes.length - 1) % mutexes.length;
remaining++;
}
throw error;
}

This algorithm steps through all of the provided mutexes, locking each one in order. We only block on `lock` for the first mutex; for the rest we use `tryLock`, releasing everything and starting over if one cannot be taken, so we never hold some locks while blocking on another.

The downside of this approach is that there is overhead for producing and releasing new disposable lock tokens when there is significant contention. Even though these are nursery objects, that means they are cheap, not free. This is unfortunate because in the ideal scenario, all of the locks we take are expected to at least outlive the function call.

Here is the same algorithm utilizing `UniqueLock`:

/**
* @param {Iterable<Mutex>} mutexes
* @returns {Disposable}
*/
export function lockAll(mutexes) {
const locks = Array.from(mutexes, mutex => new UniqueLock(mutex, "defer-lock")); // all allocations occur once
let remaining = locks.length;
let index = 0;
let error;
let hasError = false;
while (!hasError) {
try {
if (remaining === 0) {
// all locks taken, return a disposable to release them later
const scope = new DisposableScope();
for (const lock of locks.slice(index)) {
scope.use(lock);
}
for (const lock of locks.slice(0, index)) {
scope.use(lock);
}
return scope;
}
if (remaining === locks.length) {
// always wait for the first lock
locks[index].lock(); // NOTE: no allocation
}
else {
if (!locks[index].tryLock()) { // NOTE: no allocation
// failed to take lock. release each token and start over at the current index
let i = (index + locks.length - 1) % locks.length;
while (remaining < locks.length) {
// always unlock all locks taken, even if one unlock fails for some reason
try {
locks[i].unlock(); // NOTE: no release
}
catch (e) {
error = error ? new SuppressedError(e, error) : e;
hasError = true;
}
i = (i + locks.length - 1) % locks.length;
remaining++;
}
continue;
}
// lock was taken, move to next mutex
index = (index + 1) % locks.length;
remaining--;
}
}
catch (e) {
error = error ? new SuppressedError(e, error) : e;
hasError = true;
}
}
// if we reach this point, an error occurred. unlock all locks taken in reverse order and throw the error
let i = (index + locks.length - 1) % locks.length;
while (remaining < locks.length) {
// always unlock all locks taken, even if one unlock fails for some reason
try {
locks[i].unlock();
}
catch (e) {
error = new SuppressedError(e, error);
}
i = (i + locks.length - 1) % locks.length;
remaining++;
}
throw error;
}

Here, there is no overhead from object allocation and release during high contention. All of the locks that will eventually outlive the function are allocated all at once.
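The back-off structure both versions share can be sketched in runnable form. `ToyMutex` is an assumption for illustration, and unlike the real proposal this single-threaded sketch retries instead of blocking on the first mutex:

```javascript
// A toy try-lock-only mutex standing in for the proposed Mutex.
class ToyMutex {
  locked = false;
  tryLock() { return this.locked ? false : (this.locked = true); }
  unlock() { this.locked = false; }
}

// Lock every mutex or none: on contention, release everything taken so far
// and start over at the contended mutex, so we never hold some locks while
// waiting on another (the source of lock-ordering deadlocks).
function lockAll(mutexes) {
  let start = 0;
  for (;;) {
    const taken = [];
    let contended = -1;
    for (let k = 0; k < mutexes.length; k++) {
      const m = mutexes[(start + k) % mutexes.length];
      if (m.tryLock()) taken.push(m);
      else { contended = (start + k) % mutexes.length; break; }
    }
    if (contended === -1) {
      // all taken: hand back a single releaser for the whole group
      return { unlock() { for (const m of taken.reverse()) m.unlock(); } };
    }
    for (const m of taken.reverse()) m.unlock(); // back off completely
    start = contended; // retry, trying the contended mutex first
  }
}
```

The allocation question in the thread is about what `tryLock` hands back on each attempt; the loop shape itself is the same under either API.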
I see. I agree that the object allocations in that specific algorithm are unfortunate, though I would want to hear from an engine that this matters in practice before committing to a more complex design just to avoid these allocations. In any case, if the goal was just "avoid allocations in the above algorithm", that would be more simply accomplished by making lock tokens reusable:

using token = mutex.lock();
//...
token.release();
// ...
mutex.lock(token);

You would then implement the above algorithm by changing your first implementation as follows:

- locks[index] = mutexes[index].lock();
+ locks[index] = mutexes[index].lock(locks[index]);
- locks[index] = mutexes[index].tryLock();
- if (!locks[index]) {
+ let lock = mutexes[index].tryLock(locks[index]);
+ if (lock) { locks[index] = lock; /* only does anything the first time through */ }
+ else {
- locks[i] = undefined;

That seems much simpler than the `UniqueLock` design.
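The reusable-token shape being proposed here can be sketched as a single-threaded mock. The names (`lock`, `tryLock`, `unlock`) and the re-binding behavior follow the discussion; the blocking semantics are elided (a contended `lock` throws), and `MockMutex`/`MockLockToken` are illustrative names, not the proposal:

```javascript
class MockMutex {
  #locked = false;
  lock(token) {
    if (this.#locked) throw new Error("would block (mock is single-threaded)");
    this.#locked = true;
    const t = token ?? new MockLockToken(); // reuse the caller's token if given
    t._bind(this);
    return t;
  }
  tryLock(token) {
    return this.#locked ? undefined : this.lock(token); // non-blocking
  }
  _release() { this.#locked = false; }
}

class MockLockToken {
  #mutex;
  _bind(mutex) { this.#mutex = mutex; }
  unlock() {
    if (!this.#mutex) throw new Error("token is not bound to a locked mutex");
    this.#mutex._release();
    this.#mutex = undefined; // unlock succeeds at most once per lock
  }
}
```

With this shape, `mutex.lock(token)` re-binds an existing token instead of allocating a new one, so steady-state locking allocates nothing.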
I'm not sure I would agree that is simpler. The mechanics of what you're describing add a lot more moving parts and complex interactions to reason over and a fair amount of back and forth. It's certainly not straightforward. What I am suggesting isn't complex. You have the mutex, which is purely shared state, and the lock, which is the thread-local mechanism to interact with that state. Everything you want to do to interact with the Mutex is on the lock object.

In addition, we'd likely change related parts of the proposal to match. IMO, it is the simpler of the two designs to reason about.
In your design, locking a mutex is done by creating a separate object (the creation of which may or may not lock the mutex, depending on a four-state enum argument to the constructor) and/or by calling a method on the separate object. Async locking is necessarily two steps because a constructor cannot be async.

In my design, locking a mutex is always done by calling a method on the mutex, which creates or updates a token. Async locking looks exactly like sync locking, just calling the async API instead of the sync API and adding an `await`. If there's some reason to want the extra complexity of `UniqueLock`, it can be built on top of the token design.

The design I've proposed incidentally is quite close to the design in Rust. Indeed, if you consider Rust's `MutexGuard`, it is essentially a token handed out by locking the mutex.
Ignoring the ergonomics question for a moment, the allocation isn't great. All things considered I'd like to enable the fine-grained locking use case (like, if an app wants a lock per object). Given that, my current preference is the simpler lock/unlock design (with a simple disposable built on top where needed).
No you can't: every token's `unlock` method can only be called once.
The main advantage is that it semantically ties the capability to unlock to the corresponding locking. You can't accidentally unlock a mutex held by someone else. That's the correctness guarantee it gives you.
There's an important difference here: calling `unlock` without having locked isn't even expressible, since you need the token to call it.
You can't statically prevent it from being called. You still have to spec what it does a second time if it is called, like throw, or do nothing.
Ah true, good point. For blocking locks we can do something simpler, like checking if the current agent holds the lock on the mutex. For async locks the token proposal seems superior. The allocation per locking operation annoyance stands, but mainly for the sync case. The async case is so expensive already, a small token allocation won't matter. Perhaps something like async locks hand out tokens, and sync locks/unlocks need to check for the current agent having acquired the lock already?

Edit: For a uniform API, there can also be a single, pre-allocated token for the blocking case per agent.
Actually I take this back. That is too inflexible. I see no reason why one agent can't lock a mutex, then another agent unlocks the mutex, if a critical section spans several agents. The token approach doesn't have this issue assuming you can clone the token, nor does the simple lock/unlock API.
We have concurrency within agents, and you can hold a lock across async turns. That means that multiple pieces of code within the same agent might be trying to lock/unlock the same lock, which can lead to the same problem as would happen across threads. Consider:

async function foo() {
await someSetup();
try {
await mutex.lockAsync();
await someOperationWhichNeedsTheLock();
// done, we can release the lock immediately
// but we stay in the `try` so we can get our error handling
mutex.unlock();
await someOtherOperation();
} catch (e) {
handle(e);
} finally {
// make sure we release the lock when the function is done
mutex.unlock();
}
}
await Promise.all([foo(1), foo(2)]);

This can lead to the following execution: the first call locks the mutex, finishes its operation, unlocks early, and then awaits `someOtherOperation`; during that await, the second call acquires the mutex; when the first call's `finally` block runs, its extra `mutex.unlock()` releases the lock the second call still believes it holds.

So you end up executing code while you think you have the lock, but you don't, even though there's only one agent. It is much harder to make this mistake with tokens.
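This hazard is reproducible today with a toy promise-based mutex standing in for the proposed `Atomics.Mutex` (the `ToyMutex` here is an illustration, not the proposal). The extra `unlock()` in the `finally` block releases a lock the *other* call has since acquired:

```javascript
class ToyMutex {
  #locked = false;
  #waiters = [];
  async lockAsync() {
    if (!this.#locked) { this.#locked = true; return; }
    await new Promise(resolve => this.#waiters.push(resolve));
  }
  unlock() {
    const next = this.#waiters.shift();
    if (next) next();            // hand the lock directly to the next waiter
    else this.#locked = false;
  }
  get locked() { return this.#locked; }
}

const mutex = new ToyMutex();
const log = [];
async function foo(id) {
  try {
    await mutex.lockAsync();
    log.push(`${id}: holds lock? ${mutex.locked}`);
    await Promise.resolve();     // someOperationWhichNeedsTheLock
    mutex.unlock();              // done early, but we stay in the try
    await Promise.resolve();     // someOtherOperation
  } finally {
    mutex.unlock();              // may release a lock this call no longer owns
  }
}
await Promise.all([foo(1), foo(2)]);
// foo(2) enters its "critical section" after foo(1)'s finally block has
// already unlocked the mutex out from under it.
```

Tracing the microtask interleaving shows the second call observing an unlocked mutex inside what it believes is its critical section.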
Updated to avoid the possibility of deadlock, just to make it clear this can happen in otherwise-reasonable code:

let resourceMutex = new Atomics.Mutex;
let deadlockMutex = new Atomics.Mutex;
async function foo() {
try {
await deadlockMutex.lockAsync();
for (let i = 0; i < 1e6; ++i) {
// use the sync locking API because we rarely need any `await`s in this loop
// deadlockMutex guarantees only one async function is in this loop at a time, so we can't deadlock
resourceMutex.lock();
someSyncOperationWhichNeedsTheLock();
if (unlikely()) await someAsyncOperationWhichNeedsTheLock();
resourceMutex.unlock();
someExpensiveComputation(); // we want to allow other threads to use the resource while this happens
}
deadlockMutex.unlock();
await someOtherOperation();
} catch (e) {
handle(e);
} finally {
// make sure we release the locks when the function is done even in case of errors
deadlockMutex.unlock();
resourceMutex.unlock();
}
}
await Promise.all([foo(1), foo(2)]);
Yes, I buy that. Ideas on how to overcome the allocation issue for the sync case?
I would have written it like this:

let resourceMutex = new Atomics.Mutex;
let deadlockMutex = new Atomics.Mutex;
async function foo() {
let resourceToken;
let deadlockToken;
try {
deadlockToken = await deadlockMutex.lockAsync();
for (let i = 0; i < 1e6; ++i) {
// use the sync locking API because we rarely need any `await`s in this loop
// deadlockMutex guarantees only one async function is in this loop at a time, so we can't deadlock
resourceToken = resourceMutex.lock(resourceToken);
someSyncOperationWhichNeedsTheLock();
if (unlikely()) await someAsyncOperationWhichNeedsTheLock();
resourceToken.unlock();
someExpensiveComputation(); // we want to allow other threads to use the resource while this happens
}
deadlockToken.unlock();
await someOtherOperation();
} catch (e) {
handle(e);
} finally {
// make sure we release the locks when the function is done even in case of errors
deadlockToken?.unlock();
resourceToken?.unlock();
}
}
await Promise.all([foo(1), foo(2)]);

In this example, this costs only four object allocations (2 mutexes × 2 calls to `foo`), no matter how many times the loop runs.
I think I'm okay with the reusable token idea. In the limit, for apps that want to minimize costs here, the cost is a constant one (1) additional allocation per Mutex. That is, the unsafe mutex API can be recovered by using (mutex, token) pairs where you reuse the same token.
Coming back to Ron's `UniqueLock`: can it be built on top of the token design?
I think it's buildable on top of the token design, yes, as long as its semantics can be expressed in terms of the token operations. Strictly as a question of design I am not currently convinced it should be built into the standard library, though.
Uses the reusable token-based design from #27
Let me walk back my position somewhat. One of the main motivators for `UniqueLock` was how it interacts with condition variables.

How do we see a condition variable working in the token design? A condition must atomically release the mutex when it starts waiting and reacquire it before the wait completes. As such, I would expect the condition to instead accept the lock token, which it would need to be able to unlock and reacquire. That either means we need a way to programmatically unlock and relock a lock token, or we need to handle that internally within the spec by giving the condition's wait operation that capability directly.

How would you anticipate that working? What do you expect would happen if something else re-binds the token while the condition is waiting?
OK, it sounds like we're mostly on the same page, in that case. Specifically, I am happy to have the condition's wait operation take the lock token.
Given that the mutex has the lock capability, and the token has the unlock capability, the obvious consequence is that the condition would require both: basically an atomic unlock-wait-relock operation taking the mutex and its token.

For the async case (or even the sync case, if tokens are cloneable to other threads, which they probably should be), this raises the question of what happens if someone re-binds the token while the condition variable is waiting to re-acquire the token. I see two possibilities: either the wait fails at that point, or the condition holds its own claim on the lock independent of the token. I think the remaining major differences between the two designs come down to how these edge cases are specced.
I could probably be convinced that it's OK for condition variables to magically have the ability to unlock a mutex given only the token which currently locks that mutex, instead of needing to explicitly pass in both the mutex and its token.
OK, I see we have re-invented Rust's design again. Their tokens can't be shared, so you don't have the issue of someone re-using the token for something else while you're waiting.
Yeah, that's what I did in the spec draft.
This point is important as it points out that shareable tokens add back the same problems but with even more complexity. I think the conclusion is that tokens must not be shareable, but instead must be moveable. For a lock, the owner is a thread of execution, so the problem arises if multiple threads of execution all have access to an unlock capability. This all works fine (I think?) if, on notification, lock ownership can be handed directly from one thread of execution to another.

Edit: That's a big if, actually, on whether it's feasible to directly hand lock ownership from one thread to another. For manually written ParkingLot-style mutexes, I think it's probably fine? But if it isn't, I can also be convinced the ability to move lock ownership across threads is just not a common enough use case to bend over backwards to enable.
It's re-bound to mutex2. Is this an issue?
No, it just informs the design. If lock tokens cannot be rebound then the only thing that could produce them would be the mutex itself, and you would have to write

// throw away mutex
const mut = new Atomics.Mutex();
const token = mut.lock();
token.unlock();

just to get an unbound lock token for reuse.
Agreed. The spec draft in #30 adds |
I'm not a fan of the name |
This is the approach in the spec draft, so closing. |
The readme mentions a possible integration with explicit resource management. It suggests adding `mutex.lock` and `mutex.unlock` and making mutexes disposable. I think the latter two points are wrong: `mutex.lock` should return an object with a `Symbol.dispose` method and a string-named `unlock` method (as an alias of `dispose`), rather than the mutex itself being disposable. This makes it impossible to accidentally unlock the mutex without first locking it.

The tokens need not be shared objects (though I guess they could be, if that's easy to do).

Edit: Actually, on talking this over, probably the ideal is for the tokens to be cloneable, in the sense of having explicit support in `structuredClone`, but not be shared objects.

cc @michaelficarra @lucacasonato
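That token shape can be sketched in plain JavaScript. `SketchMutex` is an illustration, not the proposed `Atomics.Mutex`: `lock()` returns a token whose string-named `unlock` is an alias of `Symbol.dispose`, so a `using` declaration could release it automatically.

```javascript
// Fallback for runtimes that predate the explicit-resource-management symbol.
const DISPOSE = Symbol.dispose ?? Symbol.for("Symbol.dispose");

class SketchMutex {
  #locked = false;
  get locked() { return this.#locked; }
  lock() {
    if (this.#locked) throw new Error("would block (sketch is single-threaded)");
    this.#locked = true;
    const mutex = this;
    let done = false;
    return {
      unlock() { // string-named alias of dispose; succeeds at most once
        if (done) throw new Error("already unlocked");
        done = true;
        mutex.#locked = false;
      },
      [DISPOSE]() { this.unlock(); },
    };
  }
}

// Without `using` support, the disposal is spelled out with try/finally:
const mut = new SketchMutex();
const token = mut.lock(); // `using token = mut.lock();` would be the sugar
try {
  // ...critical section...
} finally {
  token[DISPOSE]();
}
```

Because unlocking requires the token, there is no way to unlock a mutex you never locked, which is exactly the correctness property argued for above.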