At a time when computational resources seem abundant, there is much excitement around scaling up machine learning and training increasingly larger models on bigger datasets. Intelligent life, however, has arisen not from an abundance of resources, but rather from their scarcity. Evolution naturally selects systems that are able to do more with less. We can see many examples of such resource “bottlenecks” that helped shape our development as a species: from the way the brain is wired, and how our consciousness is able to process abstract thought, to how we convey abstract concepts to one another through drawings and gestures that developed into languages, stories, and culture. It is debatable whether such bottlenecks are a requirement for intelligence to emerge, but it is undeniable that our own intelligence is a result of resource constraints.
I am interested in studying how intelligence might emerge from limited resources. Below is a selection of projects I have worked on over the years that relate to this interest. Some of the works illustrate machine learning concepts interactively inside a web browser. Please read my blog for more information.
C. Daniel Freeman, Luke Metz, David Ha
Rather than assuming that forward models are needed, in this work we investigate to what extent world models trained with policy gradients behave like forward-predictive models, by restricting the agent’s ability to observe its environment.
Judith E. Fan, Monica Dinculescu, David Ha
A web-based environment for collaborative sketching of everyday visual concepts. We integrate an artificial agent (using Sketch-RNN) that is both cooperative and responsive to the actions performed by its human collaborator. We find that collaboration between humans and machines encourages the creation of novel and meaningful content.
Raphael Gontijo Lopes, David Ha, Douglas Eck, Jonathon Shlens
SVG-VAE is a latent space model, just like MusicVAE and sketch-rnn. It learns a latent representation of the visual style of icons by training on their pixel renderings (the VAE). This lets us create palettes for blending and exploring icon styles. It then learns to use this latent representation to generate the commands that draw the icon in SVG format.
Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, David Ha
We introduce Kuzushiji-MNIST, a drop-in replacement for MNIST, plus two other datasets (Kana and Kanji). In this work, we also try more interesting tasks like domain transfer from Kuzushiji Kanji to modern Kanji.
Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, James Davidson
PlaNet learns a world model from image inputs only and successfully leverages it for planning in latent space.
Cinjon Resnick, Wes Eldridge, David Ha, Denny Britz, Jakob Foerster, Julian Togelius, Kyunghyun Cho, Joan Bruna
NeurIPS Competitions held in 2018 and 2019
Presented at AAAI 2019 Workshop on RL in Games
We created Pommerman, a multi-agent environment based on the classic Bomberman game, with the aim of advancing the state of multi-agent RL research. It consists of a set of scenarios, each having at least four players and containing both cooperative and competitive aspects.
David Ha, Jürgen Schmidhuber
A generative recurrent neural network is quickly trained in an unsupervised manner to model pixel-based RL environments through compressed spatio-temporal representations. The world model's extracted features are fed into compact and simple policies trained by evolution. We also train our agent entirely inside of an environment generated by its own internal world model, and transfer this policy back into the actual environment.
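As a rough sketch of the final stage of this pipeline, the compact policy can be as simple as a single linear map from the world model’s features (the latent vector and the RNN’s hidden state) to an action. The dimensions and the tanh squashing below are illustrative assumptions, not the paper’s exact settings:

```python
import numpy as np

# Hypothetical feature and action sizes, chosen only for illustration.
Z_DIM, H_DIM, A_DIM = 32, 256, 3

def controller(params, z, h):
    # A compact linear policy: map the concatenated world-model features
    # [z, h] to an action, squashed into [-1, 1] with tanh.
    n_weights = A_DIM * (Z_DIM + H_DIM)
    W = params[:n_weights].reshape(A_DIM, Z_DIM + H_DIM)
    b = params[n_weights:]
    return np.tanh(W @ np.concatenate([z, h]) + b)
```

Because the parameter count of such a controller is tiny compared to the world model, it can be trained with black-box evolution strategies rather than backpropagation.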
I explain how evolution strategies work using a few visual examples. I try to keep the equations light, but provide links to original articles. This is the first post in a series of articles, where I plan to show how to apply these algorithms to a range of tasks from MNIST, Gym, Roboschool to PyBullet environments.
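The basic recipe behind these methods can be sketched in a few lines (a toy example of a simple Gaussian evolution strategy, not code from the post; all parameter values here are arbitrary):

```python
import numpy as np

def simple_es(fitness, dim, pop_size=50, sigma=0.1, lr=0.03, iters=300):
    # Estimate a search direction from fitness-weighted Gaussian noise:
    # sample a population around theta, score each sample, and move theta
    # toward the perturbations that scored above the population average.
    theta = np.zeros(dim)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        noise = rng.standard_normal((pop_size, dim))
        rewards = np.array([fitness(theta + sigma * n) for n in noise])
        rewards -= rewards.mean()  # baseline to reduce variance
        theta += lr / (pop_size * sigma) * noise.T @ rewards
    return theta

# Maximize a toy fitness function peaked at (3, -2).
target = np.array([3.0, -2.0])
best = simple_es(lambda w: -np.sum((w - target) ** 2), dim=2)
```

No gradients of the fitness function are ever computed, which is what makes the same recipe applicable to non-differentiable objectives like RL episode returns.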
David Ha, Douglas Eck
We present sketch-rnn, a generative recurrent neural network capable of producing sketches of common objects, with the goal of training a machine to draw and generalize abstract concepts in a manner similar to humans.
David Ha, Andrew M. Dai, Quoc V. Le
This work explores an approach of using one network, known as a hypernetwork, to generate the weights for another network. We apply hypernetworks to generate adaptive weights for recurrent networks. In this case, hypernetworks can be viewed as a relaxed form of weight-sharing across layers.
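To illustrate the core idea in its simplest static form (a minimal sketch with made-up dimensions, not the paper’s recurrent formulation): a small hypernetwork maps a per-layer embedding vector to that layer’s full weight matrix, so layers share the hypernetwork’s parameters while still receiving distinct weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small embedding per layer, and an 8x8 main layer.
EMB_DIM, IN_DIM, OUT_DIM = 4, 8, 8

# The hypernetwork here is a single linear map from a layer embedding
# to that layer's flattened weight matrix.
H = 0.1 * rng.standard_normal((OUT_DIM * IN_DIM, EMB_DIM))

def layer_weights(embedding):
    # Generate a main-network weight matrix from its layer embedding.
    return (H @ embedding).reshape(OUT_DIM, IN_DIM)

# Two layers share the hypernetwork H but get different weights,
# because each has its own embedding -- relaxed weight-sharing.
z1, z2 = rng.standard_normal(EMB_DIM), rng.standard_normal(EMB_DIM)
x = rng.standard_normal(IN_DIM)
h1 = np.tanh(layer_weights(z1) @ x)
h2 = np.tanh(layer_weights(z2) @ h1)
```

In the recurrent case explored in the paper, the embeddings themselves can depend on the input at each timestep, making the generated weights adaptive rather than fixed.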
I combined NEAT with the backpropagation algorithm and created demos for evolving efficient but atypical neural network structures for common ML toy tasks such as classification and regression.
In this project, I use recurrent neural networks to model Chinese characters. The model is trained on stroke data from a Kanji dataset, and can be used to come up with “fake” Kanji characters.
Each creature, controlled by a unique recurrent neural network brain, dies after contact with a plank. After some time, they reproduce by bumping into each other, passing on a version of their brain to future generations. Over time, they evolve a tendency to avoid planks and to attract one another.