I have been working predominantly on one prototype that instantiates “pixels” while the user draws. These pixels live in a generic array of objects; they are also categorized by the drawing state that was active when they were created, and those state groups are further divided into combined objects.
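As a rough sketch of that organization (the project's actual environment and class names are not shown in this post, so everything here is an assumption in Python):

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical structures; the real prototype's classes are not given in the post.
@dataclass
class Pixel:
    x: float
    y: float
    z: float
    state: str  # the drawing state active when this pixel was created

pixels = []                   # the generic array holding every pixel object
by_state = defaultdict(list)  # the same pixels, categorized by drawing state

def add_pixel(x, y, z, state):
    # create a pixel and file it under both the flat array and its state group
    p = Pixel(x, y, z, state)
    pixels.append(p)
    by_state[state].append(p)
    return p

add_pixel(0, 0, 0, "black")
add_pixel(1, 0, 0, "red")
```

The point is only the double bookkeeping: one flat list for drawing order, one grouping by state for building per-state form libraries later.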
The main concept is that a library, or genealogy, of forms is built up for each drawing state. The goal is then to have a simple AI use those forms to draw alongside the user.
The video below illustrates a proof of concept of this system. It creates cubes for now: the user draws with the black cubes, and the forms created in each state are reinstantiated as RED and BLUE objects. Red here is a hairy vertical line and blue is a curve, though these two states are assigned randomly for this test.
There is no logic yet for where and when the objects are reinstantiated; for now, sub-forms are drawn 30 units above the original black pixels as a proof of concept.
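The placeholder placement rule is simple enough to sketch directly; a minimal version, assuming a form is just a list of (x, y, z) points (the function name is mine, not the project's):

```python
OFFSET_Y = 30  # placeholder rule from the post: sub-forms sit 30 units above the originals

def reinstantiate(form, offset_y=OFFSET_Y):
    # copy each point of a stored form, shifted straight up by the fixed offset
    return [(x, y + offset_y, z) for (x, y, z) in form]

black_form = [(0, 0, 0), (1, 0, 0), (2, 1, 0)]
red_copy = reinstantiate(black_form)  # → [(0, 30, 0), (1, 30, 0), (2, 31, 0)]
```

Swapping this fixed offset for a learned or history-driven placement is exactly where the intentionality described next would go.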
I need to give intention to how the forms are split, where they are placed, and when. I am planning to use a user-generated history of states for this.
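One way a user-generated state history could drive those decisions is to record each state change and let the AI sample from the transitions it has seen. A minimal sketch of that idea (my own construction, not the planned implementation):

```python
from collections import Counter

history = []  # ordered record of the drawing states the user has entered

def enter_state(state):
    history.append(state)

# simulate a user session
for s in ["black", "red", "black", "blue", "red"]:
    enter_state(s)

def most_likely_next(current):
    # look at every state that followed `current` in the history
    # and return the most frequent one (None if `current` never led anywhere)
    followers = [history[i + 1] for i in range(len(history) - 1) if history[i] == current]
    return Counter(followers).most_common(1)[0][0] if followers else None
```

Here `most_likely_next("blue")` returns `"red"`, since blue was only ever followed by red in this session. The same history could just as well inform where a sub-form is placed, not only which state draws next.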
I will post development updates on the other models soon, as well as the issues I ran into while coding them.