
Rewrite the engine using the engine #74

Open
@dumconstantin

Description

If the engine is seen strictly as a process and data manager, then, in theory, we could rewrite the main engine packages using the engine itself.

Basically, in a running system, there are two state trees:
A. The state of the system
B. The state of the producers that run that system

Currently, the engine library is written to support A, but it deals with B in a very programmatic way. However, the state of views/producers and their relationships maps very well onto the same patterns that make A possible.

References to the producers' functions are stored in state B. These functions execute in the context of state A.

Looked at more broadly, state A is a subset of state B.

In practical terms, the following would describe the implementation:

```js
{
  producers: {
    xyz: {
      fn: (value, updater) => updater.set(value + 1),
      arguments: {
        value: Observe.foo.input,
        updater: Update.foo.output
      }
    }
  },
  dataUpdate: {
    path: Path.foo.output,
    operation: "set",
    value: 124
  },
  data: {
    foo: {
      input: 123,
      output: 124
    }
  }
}
```

As the engine needs to be single-process to keep the integrity of the state (a multi-process engine is another conversation), the dataUpdate is basically a JSON Patch that describes the where and the how. Two producers need to listen on this dataUpdate: 1) syncData, which applies the operation to the data path, and 2) triggerNext, which triggers all the producers/views interested in that path; the triggering is done through an update operation.
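
A minimal sketch of these two internal producers, written as plain functions over the state shape from the example above. The helper names (setAtPath, isInterested) and the use of string paths like "foo.output" are assumptions for illustration, not part of the current API:

```js
// Hypothetical sketch: operates on a plain state object shaped like the example above.

// 1) syncData: applies the JSON-Patch-like dataUpdate to state.data
function syncData(state) {
  const { path, operation, value } = state.dataUpdate
  if (operation === "set") {
    setAtPath(state.data, path, value)
  }
  // ...other operations (merge, remove, push, ...) would be handled here
}

// 2) triggerNext: collects the producers interested in the updated path
function triggerNext(state, isInterested) {
  return Object.keys(state.producers).filter(name =>
    isInterested(state.producers[name], state.dataUpdate.path)
  )
}

// Naive path setter used by syncData, e.g. setAtPath(data, "foo.output", 124)
function setAtPath(obj, path, value) {
  const keys = path.split(".")
  const last = keys.pop()
  const target = keys.reduce((acc, key) => (acc[key] = acc[key] || {}), obj)
  target[last] = value
}
```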

The triggerNext producer can have some interesting attributes. For example, it can wait until the current process window has finished gathering all the changes and then trigger the producers/views in the next process window (e.g. via setImmediate). It can also trigger another path for certain producers to recompute, etc.
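
As a rough illustration of that batching behaviour (all names here are illustrative; setImmediate assumes a Node.js environment):

```js
// Collect dataUpdates in the current process window and trigger the
// interested producers once, in the next window, via setImmediate.
// findInterested(path) is expected to return the names of interested producers.
function createTriggerNext(findInterested, runProducer) {
  let pendingPaths = new Set()
  let scheduled = false

  return function triggerNext(dataUpdate) {
    pendingPaths.add(dataUpdate.path)
    if (scheduled) return
    scheduled = true
    setImmediate(() => {
      const paths = [...pendingPaths]
      pendingPaths = new Set()
      scheduled = false
      // each interested producer runs at most once per batch
      new Set(paths.flatMap(findInterested)).forEach(runProducer)
    })
  }
}
```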

This way, the producer in state B is triggered, gets its argument values from the data, and then its function is called. In essence, there wouldn't be any need to listen on the state data paths themselves, as the triggerNext producer associated with the dataUpdate would be enough.
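
A sketch of that dispatch step, under the assumption that Observe/Update references can be reduced to { kind, path } descriptors (again, purely illustrative):

```js
// Run a producer from state B: resolve its argument references against
// state A (the data), call its fn, and turn any update it performs into
// the next dataUpdate.
function runProducer(state, name, emitDataUpdate) {
  const producer = state.producers[name]

  const args = Object.values(producer.arguments).map(ref => {
    if (ref.kind === "observe") {
      // e.g. { kind: "observe", path: "foo.input" } -> 123
      return getAtPath(state.data, ref.path)
    }
    // e.g. { kind: "update", path: "foo.output" } -> an updater object
    return {
      set: value => emitDataUpdate({ path: ref.path, operation: "set", value })
    }
  })

  // argument order follows the declaration order in producer.arguments
  producer.fn(...args)
}

function getAtPath(obj, path) {
  return path.split(".").reduce((acc, key) => (acc ? acc[key] : undefined), obj)
}
```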

Benefits of doing this experiment (I'm relying on intuition here):

  • the complexity of the engine internals can be reduced dramatically, as most of the logic around caching and managing updates can be described in a more structured way instead of trying to patch multiple graphs to work together
  • more focus can be put on improving the performance of the remaining moving parts: parsing paths, caching, deciding which producers to trigger
  • because the internal part is... well... internal, there is no need to clone the data when executing those producers, which makes the process much faster
  • much more optimization can be done at build time: the triggerNext producer could be hardwired at compile time, ensuring minimal computation on each update
  • less data cloning, which results in lower memory consumption and higher throughput, as we would have more control over the data used during execution
  • it would allow a simpler multi-process engine using a map-reduce pattern for the data update -> trigger decisions -> data compile process (of course, doing it in a language with proper multi-process support, e.g. Rust/Go, would make this worthwhile); see the conceptual sketch after this list
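
A conceptual, single-process sketch of that map-reduce shape (illustrative only; in a real multi-process engine the map step would run across workers or processes):

```js
// map: each dataUpdate is turned into a list of trigger decisions
function mapToTriggers(dataUpdate, producers, isInterested) {
  return Object.keys(producers).filter(name =>
    isInterested(producers[name], dataUpdate.path)
  )
}

// reduce: merge the per-update decisions into one de-duplicated batch
function reduceTriggers(decisionLists) {
  return [...new Set(decisionLists.flat())]
}

// usage: updates.map(u => mapToTriggers(u, producers, isInterested))
//        -> reduceTriggers(...) -> run the batch in the next process window
```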

Metadata

Labels: idea (anything new or radical that can later be broken down into features or changes)
