Starting Pd to connect to the shim (or otherwise using the guiport) twice in rapid succession may break the shim #9

@giuliomoro

Description

I was probing the open port, closing the probe and immediately connecting to it, and this caused the shim to think that there were 0 connections left. Using a short usleep() between closing the probe and starting Pd with -guiport fixes it.

Looking at the js code, I am thinking about what happens if the two events (probe closing, Pd connecting) are both pending before the next tick of the node event loop executes. What probably happens is that on connect, the connection is accepted and the file descriptor is opened. It is added to a set of file descriptors to be poll()ed or select()ed. At some point in the event loop, the poll/select is executed and any relevant callbacks are called. It may well be that it first looks at the fd that accepts new connections and only later at the one that checks for new data on an open connection (which is what would trigger the "end" callback). Anyhow, that's just guesswork.

The point seems to be that we probably need a combination of two things: push new connections to an array and remove them on 'close' (not sure whether handling 'end' or 'error' separately is better than 'close'). Then at any time, if there is at least one element in the array, there is one active connection. Before considering a connection as "pd" (for the purpose of telling the frontend that we are connected), we may want to wait until we receive some data from it (which I assume we would, in response to the init message), or at least set a small delay before notifying the frontend, to ensure that the connection is still there and was not just a fluke.
As to what to do if you actually have multiple "pd" connections active at one point ... I am not sure. Either send data to all of them or just to the newest one ...
You could in principle be live patching several instances of Pd at once (e.g.: you have a distributed system with several different boards, each processing different I/Os through the same patch). As long as you assume they are all identical AND you only fwd Pd->frontend messages from exactly one of them AND the frontend->Pd messages are broadcast to all of them, then this may actually work. But that of course is not a problem for today.
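The multi-instance routing described above could be sketched roughly as follows; all names here (makeRouter, toPd, toFrontend) are illustrative assumptions, not the shim's actual API, and the Pd->frontend filter picks the newest instance as the designated one:

```javascript
// Hypothetical routing for multiple simultaneous "pd" connections:
// frontend->Pd messages are broadcast to every instance, while
// Pd->frontend messages are forwarded from exactly one of them.
function makeRouter() {
  const pdConnections = [];
  return {
    addPd(socket) {
      pdConnections.push(socket);
    },
    removePd(socket) {
      const i = pdConnections.indexOf(socket);
      if (i !== -1) pdConnections.splice(i, 1);
    },
    // frontend -> Pd: broadcast to all instances
    toPd(msg) {
      pdConnections.forEach((s) => s.write(msg));
    },
    // Pd -> frontend: only forward from the newest instance
    toFrontend(socket, msg, sendToFrontend) {
      if (socket === pdConnections[pdConnections.length - 1]) {
        sendToFrontend(msg);
      }
    },
  };
}

// Demo with stub sockets that just record what they receive.
const a = { written: [], write(m) { this.written.push(m); } };
const b = { written: [], write(m) { this.written.push(m); } };
const router = makeRouter();
router.addPd(a);
router.addPd(b);
router.toPd('pd dsp 1;'); // broadcast: both instances receive it
const forwarded = [];
router.toFrontend(a, 'from a', (m) => forwarded.push(m)); // dropped
router.toFrontend(b, 'from b', (m) => forwarded.push(m)); // forwarded
```

This only works under the assumption stated above, that all instances run an identical patch, so suppressing all but one Pd->frontend stream loses no information.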
