Random Radicals: A Fake Kanji Experiment


As humans, we can communicate with others by drawing pictures, and over time this ability has evolved into modern written language. Being able to express our thoughts is a powerful tool in our society. Writing is generally more difficult than reading, and this is especially true for Chinese. From personal experience, writing Chinese is a lot more difficult than merely reading it, and requires a deeper understanding of the language.

We now have machines that can help us accurately classify images and read handwritten characters. However, for machines to gain a deeper understanding of the content they are processing, they will also need to be able to generate such content. The next natural step is to have machines draw simple pictures of what they are thinking about, and develop an ability to express themselves. Seeing how machines produce drawings may also provide us with some insights into their learning process.

In this work, we have trained a machine to learn Chinese characters by exposing it to a Kanji database. The machine learns by trying to form invariant patterns of the shapes and strokes that it sees, rather than recording exactly what it sees into memory, kind of like how our own brains work. Afterwards, using its neural connections, the machine attempts to write something out, stroke-by-stroke, onto the screen.


Kanji Machine Learning


Training Examples with Correct Stroke Order


After being exposed to enough stroke-level data from a Kanji database, the machine is able to group certain strokes together by itself to form more abstract concepts of basic radicals and components that make up the typical Kanji, such as 口, 豆, 辶. In addition, it naturally learns to write these radicals in the correct stroke order - for example, 口 must be written with three separate strokes (丨, ㇕, 一), in this order, and cannot be hastily drawn as a square □ with one stroke.

Furthermore, the machine tries to draw radicals not only in a logical location, but in a logical order as well. In the 逗 example, the 豆 radical must be placed inside the component 辶, and 豆 must also be written before 辶 (it is a common mistake for beginners to write 辶 before 豆). Our machine generally tries to construct fake Kanji with some internal logic of its own, at both the stroke level and the radical level, as it has developed a belief system about the relative locations and relative order with which abstract concepts of radicals are assembled into the even more abstract concept of a full Kanji.

Lastly, it also learns from variations in the way certain Kanji components are written. As you can see from the training examples, there are two different ways to write 逗, using either the simplified one-dot ⻌ or the more traditional two-dot ⻍. Kanji has been around for thousands of years, so it is normal to see forks and branches of the writing system. I will leave it to you to check whether the machine can also conceptualise and generalise these small variations.

The output of the machine is a set of lines, hence we are really modelling each Kanji as an ordered sequence of vectors. Unlike many recent machine learning image generation techniques, which are primarily pixel-based, our approach is a vector-based image generation technique. I believe Kanji are much more naturally represented with vectors, not only because writing in general is vector-based, but also because, for Kanji, the ordering of the strokes is of fundamental importance.
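To make this concrete, below is a minimal sketch of one plausible encoding, in the spirit of Graves-style handwriting models (the exact format used in this project may differ). Each character becomes a sequence of (dx, dy, pen) triples, where (dx, dy) is the offset from the previous pen position, pen = 0 draws a line segment, and pen = 1 lifts the pen to end a stroke. The example traces 口 with its three strokes, in the correct order:

```python
# Illustrative offset encoding of 口: three strokes, written in order.
# Screen coordinates: y grows downward; the pen starts at the top-left.
kou = [
    (0, 10, 0),    # stroke 1 (丨): draw down the left side
    (0, -10, 1),   # lift the pen, return to the top-left corner
    (10, 0, 0),    # stroke 2 (㇕): draw the top edge...
    (0, 10, 0),    #   ...then fold down the right side
    (-10, 0, 1),   # lift the pen, move to the bottom-left corner
    (10, 0, 0),    # stroke 3 (一): draw the bottom edge
]

def to_absolute(offsets, start=(0, 0)):
    """Decode (dx, dy, pen) offsets back into absolute strokes."""
    x, y = start
    strokes, current = [], [(x, y)]
    for dx, dy, pen in offsets:
        x, y = x + dx, y + dy
        if pen:                      # pen lifted: close the current stroke
            strokes.append(current)
            current = [(x, y)]
        else:
            current.append((x, y))
    if len(current) > 1:
        strokes.append(current)
    return strokes

print(to_absolute(kou))   # three strokes, in writing order
```

Because the model consumes and emits these offsets one at a time, stroke order and stroke count are baked directly into the representation.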

Using vectors to model the ordered strokes of a Kanji is much more meaningful than any pixel-based representation. As a result, even the chaotic output of a machine's dream will likely still conform to the natural structure of Kanji. This approach may also lead to future work on getting machines to sketch images.

For a more detailed description of the Long Short-Term Memory + Mixture Density Network algorithm used to generate these Fake Kanji, please read my blog post.
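As a rough illustration of the idea (a minimal PyTorch sketch with placeholder names and sizes, not the exact implementation used here), an LSTM reads the (dx, dy, pen) sequence, and a mixture density head emits, at every time step, the parameters of a mixture of bivariate Gaussians over the next pen offset, together with a Bernoulli probability that the pen is lifted:

```python
# Minimal sketch of an LSTM + Mixture Density Network head, in the style
# of Graves (2013). All names and hyperparameters are illustrative
# assumptions, not the project's actual code.
import torch
import torch.nn as nn

class StrokeMDN(nn.Module):
    def __init__(self, hidden_size=256, num_mixtures=20):
        super().__init__()
        self.num_mixtures = num_mixtures
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden_size,
                           batch_first=True)
        # Per mixture component: weight (pi), means (mu_x, mu_y),
        # std devs (sigma_x, sigma_y) and correlation (rho) -> 6 values,
        # plus a single logit for the pen-lift probability.
        self.head = nn.Linear(hidden_size, 6 * num_mixtures + 1)

    def forward(self, x, state=None):
        # x: (batch, seq_len, 3) sequence of (dx, dy, pen) triples
        h, state = self.rnn(x, state)
        m = self.num_mixtures
        pi, mu_x, mu_y, log_sx, log_sy, rho, pen = torch.split(
            self.head(h), [m, m, m, m, m, m, 1], dim=-1)
        return (torch.softmax(pi, dim=-1),   # mixture weights sum to 1
                mu_x, mu_y,                  # component means
                torch.exp(log_sx),           # std devs must be positive
                torch.exp(log_sy),
                torch.tanh(rho),             # correlation kept in (-1, 1)
                torch.sigmoid(pen),          # probability the pen lifts
                state)
```

To dream up a fake Kanji at generation time, one samples a mixture component from pi, draws (dx, dy) from the corresponding bivariate Gaussian, samples the pen bit, and feeds the result back in as the next input - exactly the stroke-by-stroke writing described above.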


Fake Kanji Hall of Fame


Wooden Fish

Rock Harvesting

Horse Food

Food Shelter and Girl

Fishing at Dawn

Insider Trading

A Hummer

Wife is Shopping


I have recorded here a few notable fake Kanji that the machine has come up with. If you see some good ones that you would like to share, please save the .svg file above and email it to me with a brief description of the fake Kanji, and every now and then I'll add new ones to the gallery.

A curated feed of interesting fake Kanji is now live on Twitter. Please follow @neokanji.


Related Works


1989 - A Book from the Sky (天書). This work is inspired by Xu Bing (徐冰)'s original artwork, which explores the creation of pseudo-Chinese characters.

1996 - King of Kowloon (九龍皇帝). The format of this work is heavily influenced by Tsang Tsou Choi (曾灶財), a legendary calligraphy street artist from Hong Kong. Google made a permanent homepage for his work. My favourite pieces were from the mid-1990s.

2001 - Alphabet Synthesis Machine by Golan Levin, Jonathan Feinberg and Cassidy Curtis. This work created 20,000 entirely new abstract writing systems, expressed as beautiful TrueType fonts. I consider this work way ahead of its time.

2006 - The Sheep Market by Aaron Koblin. A collection of 10,000 sketches of sheep created by workers on Amazon's Mechanical Turk. Each worker was paid $0.02 (USD) to "draw a sheep facing left".

2013 - RNN Handwriting Synthesis by Alex Graves. We extended Graves' algorithm to work well with Chinese characters. My favourite quote is hidden in the LaTeX comments of his paper: "Generating sequential data is the closest computers get to dreaming."

2015 - A Book from the Sky 天书, Exploring the Latent Space of Chinese Handwriting, by Gene Kogan. This approach uses Alec Radford's implementation of the DCGAN algorithm, trained on a rasterised Chinese handwriting dataset, and does a good job of exploring the latent space between actual Chinese characters.