Let me explain, very informally, what a predictive text imitator is. It is a computer program that takes as input a passage of training text and produces as output a new text that is composed quasi-randomly except that it matches the training text with regard to the frequencies of word or character sequences up to some fixed finite length k.
(There has to be such a length limit, of course: the only text in which the word sequence of Moby-Dick is matched perfectly is Moby-Dick, but what a predictive text imitator trained on Moby-Dick would do is to produce quasi-random Moby-Dickish gibberish in which each sequence of not more than k units matches Moby-Dick as regards the transition probabilities between adjacent units.)
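The mechanism is simple enough to sketch in a few lines. Here is a minimal, hypothetical word-level version in Python (the names `train` and `generate` are mine, not anything from an actual implementation): it records which word follows each sequence of k words in the training text, then walks those recorded transitions quasi-randomly, so every window of k+1 consecutive output words is one that actually occurs in the training text.

```python
import random
from collections import defaultdict

def train(text, k=2):
    """For every sequence of k consecutive words in the training
    text, record the word that follows it (with repetition, so
    frequent continuations are proportionally more likely)."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - k):
        model[tuple(words[i:i + k])].append(words[i + k])
    return model

def generate(model, k=2, length=30, seed=0):
    """Start from a random k-word state and repeatedly pick a
    recorded continuation, matching the training text's transition
    frequencies for sequences of length k."""
    rng = random.Random(seed)
    out = list(rng.choice(list(model)))
    for _ in range(length - k):
        followers = model.get(tuple(out[-k:]))
        if not followers:   # dead end: this k-gram only ends the text
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Train this on Moby-Dick with k=3 and you get fluent-looking Melvillean gibberish; raise k and the output hews ever closer to verbatim Melville, which is why the length limit matters.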
I tell you this because a couple of months ago Jamie Brew made a predictive text imitator and trained it on my least favorite book in the world, the 1918 edition of William Strunk's The Elements of Style. He then set it to work writing the first ten sections of a new quasi-randomly generated book. You can see the results here. The first point at which I broke down and laughed till there were tears in my eyes was at the section heading 'The Possessive Jesus of Composition and Publication'. But there were other later such points too. Take a look at it. And trust me: following the advice in Jamie Brew's version of the book won't do your writing much more harm than following the original.
My reasons for despising the original work by Strunk (and the even worse revision by E. B. White) are given here, and in greater detail here. Jamie Brew's astonishingly spare website is here. His code and various technical details are available on GitHub here. His own description of his program is that it is "a writing engine intended to imitate the predictive text function on smartphones." His description of the training corpus the program uses is that it consists of "a tree with the frequencies of all n-grams up to a certain size present in a source, and information about which words precede and follow these." (The n-grams might be character sequences; I'm not sure about that.) I don't know whether he tweaked the output to get his version of Elements or whether the program just spat out the whole hilarious thing, formatting and all.
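Brew's phrase "a tree with the frequencies of all n-grams up to a certain size" suggests a trie-like structure, keyed word by word. The following is purely my own illustrative guess at what such a structure looks like, not his actual code; the class and method names are invented for the sketch.

```python
class NGramTree:
    """A hypothetical sketch of an n-gram frequency tree: each path
    from the root spells out an n-gram (up to max_n words long), and
    each node stores how often that n-gram occurs in the source."""

    def __init__(self, max_n=3):
        self.max_n = max_n
        self.root = {}   # word -> {"count": int, "children": {...}}

    def add(self, words):
        """Insert every n-gram of length 1..max_n from the word list."""
        for i in range(len(words)):
            node = self.root
            for w in words[i:i + self.max_n]:
                entry = node.setdefault(w, {"count": 0, "children": {}})
                entry["count"] += 1
                node = entry["children"]

    def count(self, ngram):
        """Return the frequency of the given n-gram (0 if unseen)."""
        node, entry = self.root, None
        for w in ngram:
            entry = node.get(w)
            if entry is None:
                return 0
            node = entry["children"]
        return entry["count"]
```

A real implementation along these lines would also store, as Brew says, which words precede and follow each stored n-gram, so that the generator can offer smartphone-style next-word candidates.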
[Comments on this post have been closed by the possessive Jesus of composition and publication.]