On autoregression

  1. A model is a promise: a mathematical loop that, iterated recursively for a thousand or two thousand steps, will create something of value at the end (see the sketch after this list).
  2. A model knows the end before it begins.
  3. Language isn’t required to model thought, but thought is required to model all of language.
  4. Our empirical findings suggest that transformer LLMs solve compositional tasks by reducing multi-step compositional reasoning into linearized subgraph matching, without necessarily developing systematic problem-solving skills. (source)
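A minimal sketch of the loop in the first point, assuming nothing beyond plain Python: `toy_model` stands in for a trained network (its lookup table is invented purely for illustration), and `generate` is the recursion itself, feeding each output back in as the next input.

```python
def toy_model(prefix: list[str]) -> str:
    """Stand-in for a trained model: predict the next token from the prefix."""
    table = {"the": "cat", "cat": "sat", "sat": "down", "down": "."}
    return table.get(prefix[-1], ".")

def generate(model, tokens: list[str], n_steps: int = 4) -> list[str]:
    # The "mathematical loop": each iteration appends the model's own
    # output, which becomes part of the input at the next step.
    for _ in range(n_steps):
        tokens = tokens + [model(tokens)]
    return tokens

print(generate(toy_model, ["the"]))  # ['the', 'cat', 'sat', 'down', '.']
```

No single step produces anything of value; whatever value appears is a property of the iterated loop as a whole.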

On alignment

  1. To do good alignment work is to better understand capabilities.
  2. The model knows what you mean, and if you ask it to, it’ll probably care too.
  3. But make sure you ask it to do the right thing.
  4. It might be urged that when playing the “imitation game” the best strategy for the machine may possibly be something other than imitation of the behaviour of a man. This may be, but I think it is unlikely that there is any great effect of this kind. (source)
  5. Better to run happy GPTs on your GPUs than let them sit idle. (Actually using them is probably better still.)

On impact

  1. Due to comparative advantage, if governments protect the basic resources humans need, there will always be a job to do, even if AIs can outcompete humans at every economically relevant task (see the worked example after this list).
  2. It’s not priced in.
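A toy worked version of the comparative-advantage point, with all rates invented for illustration: the AI is absolutely better at both tasks, yet total output still rises when the human specializes in the task where their relative disadvantage is smallest.

```python
ai = {"widgets": 10.0, "reports": 8.0}    # output per hour (invented rates)
human = {"widgets": 2.0, "reports": 0.5}  # worse at both tasks

# Baseline: each agent works one hour, split 50/50 between tasks.
baseline = {g: 0.5 * ai[g] + 0.5 * human[g] for g in ai}
# -> {'widgets': 6.0, 'reports': 4.25}

# Specialization: the human only makes widgets, where their *relative*
# disadvantage is smallest (0.5/2.0 < 8.0/10.0 reports forgone per widget);
# the AI tops up widgets to the baseline level, then makes reports.
ai_widget_hours = (baseline["widgets"] - human["widgets"]) / ai["widgets"]
specialized = {
    "widgets": human["widgets"] + ai_widget_hours * ai["widgets"],
    "reports": (1.0 - ai_widget_hours) * ai["reports"],
}
# -> {'widgets': 6.0, 'reports': 4.8}: same widgets, more reports.

assert specialized["reports"] > baseline["reports"]
```

The gain vanishes only if relative productivities are identical across every task; as long as they differ, there is work worth trading for.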