My experiments with ffmpeg were rather satisfactory. I wrote a program that assembles random-ish selections from audio files (it would not be difficult to add video too, but I wanted to start building a large collection of sample media, and that is easiest with plain audio for now). Here is what it sounds like:
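For anyone curious, the assembly step can be sketched roughly like this. This is a minimal sketch, not my exact program: it assumes ffmpeg is on your PATH and a folder of .mp3 clips, and the function names (`build_concat_list`, `assemble_random_audio`) are just placeholders.

```python
import random
import subprocess
from pathlib import Path

def build_concat_list(paths):
    """Render file paths in ffmpeg's concat-demuxer list format."""
    return "".join(f"file '{p}'\n" for p in paths)

def assemble_random_audio(source_dir, out_file, n_clips=5, seed=None):
    """Pick n random audio clips from a directory and join them with ffmpeg."""
    rng = random.Random(seed)
    clips = rng.sample(sorted(Path(source_dir).glob("*.mp3")), n_clips)
    list_file = Path(source_dir) / "playlist.txt"
    list_file.write_text(build_concat_list(clips))
    # -f concat reads the list file; -c copy joins without re-encoding
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", str(list_file), "-c", "copy", out_file],
        check=True,
    )
```

Using `-c copy` only works cleanly when all the clips share a codec and sample rate; otherwise you would re-encode instead.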
I think the same principles could be used for images. We could translate written stories into images and present them in a logical order (like cave paintings or documentary photography) to create a visual story.
I think what I'd like to do ultimately is create a streaming version of this, so I could generate a basically infinite, AI-generated story.
So the goal is to create a function that, given the phrase "a duck ate some bread" as input, outputs a list of images conveying that idea.
Maybe the output would look something like this: `['duck.png', 'verb:ate', 'bread.png']`
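A crude first version of that function could just be lexicon lookup. Everything here is an assumption for illustration: the toy verb list, the filler-word set, and the `image_lexicon` mapping of nouns to image files would all need to come from real data (or a proper NLP tagger) later.

```python
def phrase_to_media(phrase, image_lexicon):
    """Map each word to an image file if we have one, tag verbs, skip filler."""
    VERBS = {"ate", "ran", "flew", "swam"}    # toy verb list (assumption)
    FILLER = {"a", "an", "the", "some"}       # words with no visual
    out = []
    for word in phrase.lower().split():
        if word in FILLER:
            continue
        if word in VERBS:
            out.append(f"verb:{word}")
        elif word in image_lexicon:
            out.append(image_lexicon[word])
    return out

lexicon = {"duck": "duck.png", "bread": "bread.png"}
phrase_to_media("a duck ate some bread", lexicon)
# → ['duck.png', 'verb:ate', 'bread.png']
```

Words that are neither verbs, filler, nor in the lexicon are silently dropped here; a real version would probably want a fallback image or a log of misses.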
and then I could use ffmpeg to make animations from the verbs or something.
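Even before animating verbs, ffmpeg can turn a list of images into a simple slideshow video, again via the concat demuxer. A sketch, with hypothetical function names; note the concat demuxer ignores the `duration` of the final entry, so the last file is listed twice as a workaround.

```python
import subprocess

def build_frames_list(images, seconds_per_image=2):
    """Concat-demuxer list holding each image for a fixed duration."""
    body = "".join(f"file '{p}'\nduration {seconds_per_image}\n" for p in images)
    # repeat the last file: ffmpeg's concat demuxer ignores the final duration
    return body + f"file '{images[-1]}'\n"

def slideshow(images, out_file, seconds_per_image=2):
    """Render a slideshow video from a list of image paths."""
    with open("frames.txt", "w") as f:
        f.write(build_frames_list(images, seconds_per_image))
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "frames.txt",
         "-vf", "format=yuv420p", out_file],   # yuv420p for broad player support
        check=True,
    )
```

Swapping still frames for short verb animations would then just mean listing video clips instead of PNGs in the same list file.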
And then I could create an AI which would generate simple stories, feed those into the animation source-material function, and output the result onto the screen.
I appreciate any ideas you have. It is actually very simple to make something like this; it should only take a few hours once I sit down to do it. The hard part, I think, would be making it stream endlessly.
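The endless part might just be a generator driving the pipeline. A sketch under heavy assumptions: `story_source` is a random-sentence placeholder standing in for the eventual AI story generator, and `render` is whatever callback turns a phrase into frames on screen.

```python
import itertools
import random

def story_source(seed=None):
    """Placeholder for the AI story generator: yields simple phrases forever."""
    rng = random.Random(seed)
    subjects = ["duck", "cat", "dog"]
    verbs = ["ate", "found", "dropped"]
    objects = ["bread", "a hat", "a stick"]
    while True:
        yield f"a {rng.choice(subjects)} {rng.choice(verbs)} {rng.choice(objects)}"

def stream(render, limit=None):
    """Feed phrases to a render callback endlessly (or up to `limit` for testing)."""
    for phrase in itertools.islice(story_source(seed=0), limit):
        render(phrase)
```

Because the source is a generator, the renderer only ever pulls one phrase at a time, so memory stays flat no matter how long it runs; the real work is keeping the renderer fast enough that the stream never stalls.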