David Dohan

QuHarrison Terry presents David Dohan with the WTF Innovators Award for his research on foundation model programs for LLMs and for his contributions to advancing and applying language models across fields ranging from mathematics to protein engineering to scientific reasoning.

The WTF Innovators Award recognizes excellence at the precipice of societal change, with the inaugural class focusing on AI innovators. As a memento, each of the 34 awardees is gifted a featured song by QuHarrison Terry and Genesis Renji. We present “Reading Palm”, produced by Nimso, to David Dohan.

David Dohan was an AI Researcher at Google Brain from June 2016 to February 2023 and currently works at OpenAI, where he studies scalable alignment of language models and methods for generally intelligent reasoning systems.

“David Dohan was part of the team at Google Brain that figured out how to chain language models together so they can interact with and learn from one another. The next advances in language models will rely in part on the ability of language models to connect with and advance one another. He’s furthering that research at OpenAI and actively working on AGI, which is why he should be on everyone’s radar.” – QuHarrison Terry

At Google Brain, David played a core role in the research and release of four major projects (among other work): Language Model Cascades, Minerva, ProtNLM, and PaLM.

Cascades is a probabilistic programming language and framework for expressing programs that chain together (or cascade) language models, letting them interact with themselves, with each other, and with external tools.
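
The chaining idea is easy to sketch. Below is a minimal, illustrative Python example in which one model call conditions on the output of another; the `complete` function and the prompts are hypothetical stand-ins, not Cascades’ actual API:

```python
# A minimal sketch of cascading two language model calls.
# Illustrative only, assuming a generic text-completion function.

def complete(prompt: str) -> str:
    # Stand-in for a real language model call; replace with your model API.
    return f"[model output for: {prompt[:30]}...]"

def solve(question: str) -> str:
    # Stage 1: ask the model for intermediate reasoning.
    reasoning = complete(f"Q: {question}\nLet's think step by step:")
    # Stage 2: a second call conditions on the first call's output,
    # chaining ("cascading") the two model calls together.
    return complete(f"Q: {question}\nReasoning: {reasoning}\nFinal answer:")

print(solve("What is 12 * 7?"))
```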

Minerva is a language model capable of solving mathematical and scientific problems using step-by-step reasoning.
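
For a flavor of what step-by-step solving looks like, here is a generic few-shot prompt in that spirit; the problems and format are our own illustration, not Minerva’s actual prompts:

```python
# Illustrative only: a generic step-by-step math prompt in the spirit
# of Minerva. The worked example shows the model the expected format.
prompt = (
    "Problem: A train travels 120 km in 2 hours. What is its average speed?\n"
    "Solution: Average speed is distance divided by time.\n"
    "120 km / 2 h = 60 km/h. Final answer: 60 km/h.\n\n"
    "Problem: What is the derivative of x^3?\n"
    "Solution:"
)
# The model continues the solution step by step, e.g.:
# "By the power rule, d/dx x^3 = 3x^2. Final answer: 3x^2."
print(prompt)
```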

ProtNLM is a language model that can predict protein function and provide a short functional description from a protein’s amino acid sequence.

PaLM is a transformer model trained with the Pathways system, which allows a single model to generalize across domains and tasks while being highly efficient.