Gnoetry Daily: Introducing....
Words in white have been “selected” for keeping; words in pink will be replaced when regenerated.
Yeah, punctuation would be great to have more control over. I do kind of like the one-line-at-a-time process, though. I vary the syllable lengths, generate a bunch of lines, paste them into a text editor, and move the lines around or paste lines in between others until I get a poem I’m happy with.
One big difference I have noticed with jGnoetry when compared with what I do in Gnoetry 0.2 is that jGnoetry will often change words beyond what was highlighted. So if I try to regenerate individual words or a phrase, the program quickly begins adding additional words (and syllables) beyond the highlighted section. This would not be much of a problem, though, if an undo function could be added.
I have found that I can work with the Stein material more easily in jGnoetry. Gnoetry does maintain the punctuation better, but regenerating lines until all of the lines end in a period is extremely tedious and probably wears my mouse buttons down. I can write a Stein poem, if my mind is working properly, in 30 minutes to 2 hours in jGnoetry, whereas in Gnoetry it would take considerably longer just to get everything set up.
This comment’s getting stupidly long. Adding some punctuation functionality to jGnoetry would certainly be a very good thing.
Instead of a phrase, try using a seed text like aaaaa e iiii o uuuu a eeee i oooo u
http://www.eddeaddad.net/charNG/ - a character n-gram generator
charNG: case study of authoring a poetry generator
Since it is a character-based engine, his algorithm has no need to pre-tokenize the source text.
Instead of interrogating each key for its possible successors and picking one at random, the algorithm jumps to a random spot in the source string, searches forward for the next occurrence of the key, and takes the character that follows it.
Since most markov engines that I’ve seen first tokenize the source material, I found this to be a novel approach.
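A minimal Python sketch of this search-based sampling, under my own assumptions about the details (the function name and parameters are mine, not charNG’s): at each step the key is the last few characters of the output, and instead of a precomputed successor table we search the raw source string from a random offset.

```python
import random

def generate(source, order=4, length=120, seed=None):
    """Character-level Markov generation without pre-tokenizing.

    Each step takes the last `order` characters of the output as the
    key, picks a random offset into the source, finds the next
    occurrence of the key from there, and emits the character that
    follows it.
    """
    rng = random.Random(seed)
    start = rng.randrange(len(source) - order)
    out = source[start:start + order]  # seed with a real substring
    while len(out) < length:
        key = out[-order:]
        # Random starting point; wrap to the beginning of the source
        # if the key does not occur again past that point.
        pos = rng.randrange(len(source))
        i = source.find(key, pos)
        if i == -1:
            i = source.find(key)
        if i == -1 or i + order >= len(source):
            break  # key only occurs at the very end of the source
        out += source[i + order]
    return out
```

Because every emitted character is copied straight out of the source at a match of the current key, every n-gram of the output also occurs somewhere in the source, which is exactly the Markov property the table-based engines get from tokenizing first.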
There are three methods of text generation -- markov, cento (cut-up), and 1-character-overlap. Practically speaking, only the markov method produces “readable” results.
As with any markov engine with a small sample set, it can get bogged down in infinite loops (where one key points to another key that points back to the first).
I’ve made some stabs at rewiring the engine with some repetition governors, but the implementation is problematic (i.e., buggy).
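As an illustration only (this is not charNG’s actual code, and the names and parameters are mine), one crude governor is to count how often each key has been visited and, once a key exceeds a cap, abandon the chain and restart from a fresh random key in the source -- trading a seam in the output for an escape from the loop:

```python
import random

def generate_governed(source, order=4, length=120, max_repeats=3, seed=None):
    """Search-based character Markov sketch with a crude repetition
    governor: keys seen more than `max_repeats` times trigger a jump
    to a fresh random key instead of continuing the loop."""
    rng = random.Random(seed)
    counts = {}
    out = ""
    key = None
    while len(out) < length:
        if key is None or counts.get(key, 0) > max_repeats:
            # Governor fires: restart from a random substring,
            # breaking any A -> B -> A cycle at the cost of a seam.
            start = rng.randrange(len(source) - order)
            key = source[start:start + order]
            out += key
        counts[key] = counts.get(key, 0) + 1
        pos = rng.randrange(len(source))
        i = source.find(key, pos)
        if i == -1:
            i = source.find(key)
        if i == -1 or i + order >= len(source):
            key = None  # dead end; force a restart next iteration
            continue
        out += source[i + order]
        key = out[-order:]
    return out[:length]
```

The seams are the obvious weakness -- a jump can land mid-word -- which may be one reason a clean implementation is harder than it looks.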