Advances in sequence-prediction techniques have improved our ability to generate new sequences in various styles. The following audio and text sequences were generated using Long Short-Term Memory (LSTM) neural networks, trained on various source texts and audio samples.
I produced audio with Matt Vitelli’s GRUV library, which trains a recurrent neural network on music to produce new audio in the style of the source. The results are novel and promising, but overfitting is evident: fragments of the source audio are audible in the outputs, so more work is needed in this area. Some of the outputs:
John Coltrane -- Blue Train
Frédéric Chopin -- Nocturnes
Vibraphone sample
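Approaches like GRUV's treat raw waveform generation as next-step prediction: the audio is cut into fixed-length chunks, and the network learns to predict each chunk from the one before it. The sketch below shows only that windowing step, with hypothetical names and a toy sine wave; it is not GRUV's actual API.

```python
import numpy as np

def make_training_pairs(waveform, chunk_size):
    """Split a 1-D waveform into consecutive chunks; each input chunk's
    target is the chunk that follows it (next-step prediction)."""
    n_chunks = len(waveform) // chunk_size
    chunks = waveform[:n_chunks * chunk_size].reshape(n_chunks, chunk_size)
    X = chunks[:-1]   # inputs: chunks 0 .. n-2
    y = chunks[1:]    # targets: chunks 1 .. n-1
    return X, y

# toy example: one second of a 440 Hz sine at 8 kHz, 256-sample chunks
sr, chunk = 8000, 256
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t).astype(np.float32)
X, y = make_training_pairs(audio, chunk)
```

An LSTM trained on such pairs can then be run autoregressively, feeding each predicted chunk back in as input, which is also where the overfitting shows up: memorized chunks of the training audio reappear in the output.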
For text, I used the char-rnn technique first described and implemented by Andrej Karpathy and re-implemented in Chainer by Yusuke Tomoto, training on several corpora:
Book of Genesis
State of the union addresses by George W. Bush, 2001-2008
War & Peace by Leo Tolstoy
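char-rnn works at the character level: the model learns a distribution over the next character given what came before, and generation repeatedly samples from that distribution, with a temperature parameter controlling how adventurous the samples are. The sketch below illustrates that sampling loop; a simple bigram count table stands in for the trained LSTM, so this is an assumption-laden toy, not Karpathy's or Tomoto's implementation.

```python
import numpy as np

def sample_text(text, seed, length, temperature=1.0, rng=None):
    """Character-level generation in the spirit of char-rnn, with a
    bigram count table standing in for the trained network."""
    rng = rng or np.random.default_rng(0)
    chars = sorted(set(text))
    idx = {c: i for i, c in enumerate(chars)}
    # bigram counts as a stand-in for the LSTM's next-char distribution
    counts = np.ones((len(chars), len(chars)))  # add-one smoothing
    for a, b in zip(text, text[1:]):
        counts[idx[a], idx[b]] += 1
    out = seed
    for _ in range(length):
        # temperature < 1 sharpens the distribution, > 1 flattens it
        logits = np.log(counts[idx[out[-1]]]) / temperature
        p = np.exp(logits - logits.max())
        p /= p.sum()
        out += chars[rng.choice(len(chars), p=p)]
    return out

generated = sample_text("hello world hello world", seed="h", length=20)
```

With a real LSTM in place of the bigram table, the model conditions on the whole history rather than just the last character, which is what lets it pick up the longer-range style of a corpus like the State of the Union addresses.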
I also generated some more personal texts by training on my Gmail archive and on my personal journal.