Remove Legacy Lightning-RPC Client #579
base: main
Conversation
Before there was cln_rpc there was a custom LightningClient that served as the rpc client for cln. Since we now have cln_rpc we can use that instead. Signed-off-by: Peter Neuroth <[email protected]>
Syncing to the latest version of the protos. This needed some rework and left us with a bunch of unused code (also thanks to ripping out the cln rpc client LightningClient). Signed-off-by: Peter Neuroth <[email protected]>
Signed-off-by: Peter Neuroth <[email protected]>
Force-pushed from 9e28a75 to 0233d00
@@ -114,10 +106,11 @@ impl PluginNodeServer {
use tokio::time::{sleep, Duration};

// Move the lock into the closure so we can release it later.
let rpc = rrpc.lock().await;
let mut rpc = cln_rpc::ClnRpc::new(rpc_path.clone()).await.unwrap();
We got a bit unlucky here: we use the Mutex to prevent incoming JSON-RPC calls before the JSON-RPC socket is ready. This is done by acquiring the lock during startup and releasing it once the socket file is present. The internal use-cases are fine, since by definition they are invoked only after the node has started and the socket is ready.
The change below, however, breaks this: by not locking in get_rpc, incoming grpc calls can now be forwarded directly (without waiting behind the lock), and so they will fail. We will most likely notice this as the startup commands (i.e., the ones that caused the node to start) failing in a large number of cases.
Please leave the Mutex in get_rpc.
Just one regression: a lock is required to ensure that the node has its JSON-RPC socket available before we can try to talk to it.
let rpc = self.get_rpc().await;
let mut rpc = self.get_rpc().await.map_err(|e| {
    tonic::Status::new(
        tonic::Code::Internal,
And this is the error message all "early calls" will run into. Not the best UX.
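One way to soften that UX, sketched with local stand-ins (the `Status`/`Code`/`RpcError` types below are simplified illustrations, not tonic's actual API): distinguish "the node is still starting" from genuine internal failures instead of collapsing everything into one generic error.

```rust
#[derive(Debug, PartialEq)]
enum Code {
    Internal,
    Unavailable,
}

#[derive(Debug)]
struct Status {
    code: Code,
    message: String,
}

// Stand-in for the failures get_rpc can report.
#[derive(Debug)]
enum RpcError {
    SocketNotReady,
    Io(String),
}

fn to_status(e: RpcError) -> Status {
    match e {
        // Tell early callers the node is still starting, so a retry
        // is the obvious next step, rather than a generic failure.
        RpcError::SocketNotReady => Status {
            code: Code::Unavailable,
            message: "node is still starting; retry shortly".into(),
        },
        RpcError::Io(m) => Status {
            code: Code::Internal,
            message: m,
        },
    }
}

fn main() {
    let s = to_status(RpcError::SocketNotReady);
    assert_eq!(s.code, Code::Unavailable);
    println!("{:?}", s);
}
```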
@@ -1,336 +1,13 @@
tonic::include_proto!("greenlight");
use self::amount::Unit;
This deletion has been so long in the making, thank you ^^
The plugin still had some references to a custom lightning-rpc client here and there, which made it difficult to keep the greenlight.proto in sync. This PR removes this rpc client and syncs the greenlight.proto for the plugin and the signerproxy.