Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Neuroscience tech tree

4 minute read

Published:

Status: Chaotic draft, will hopefully become more coherent within the next 2 months

Types of stochasticity and errors brains need to deal with/ways in which proteins in water are suboptimal for computation

6 minute read

Published:

Together, these may add orders of magnitude to the complexity and resources a brain has to use to accomplish tasks.

  • mutations in evolutionary history - it is preferable to have a brain architecture that keeps working under some mutations
  • diffusion times of neurotransmitters
  • stochasticity of reactions facilitated by proteins
  • protein folding: to implement the equivalent of a logic gate, one needs to invent some protein that folds in just the right way to facilitate a computation. That doesn’t sound easy at all and may introduce overhead
  • leakage (of current through axon/neuron walls)
  • stochasticity of neuronal growth and arborization?
  • Energy:
    • using too much energy is a much bigger concern for animals than for computers, so it is plausible that there are trade-offs favoring lower energy use over better performance
    • relatedly: mitochondrial volume limits energy inflow into the brain; the power density of biomatter is much lower than what is achievable in e.g. microchips
    • a warm-blooded brain’s temperature can’t rise more than a few kelvin before damage occurs, limiting sustained energy expenditure. It can’t be cooled with liquid nitrogen either…
  • Evolution: There must have been a continuous evolutionary path from a bacterium to the human, or any other, brain. This probably doesn’t mix well with many “brittle” algorithms typical in CS, like cryptography, compression, maybe error correction…
  • because proteins and neurons are so incredibly slow compared to transistors, algorithms that can’t be parallelized well may not even be worth it
  • Indeed, there is little evidence in biology for the sort of algorithms computer scientists devise (as in cryptography, compression, …). The most involved algorithm I have heard of is the use of “modified Bloom filters” by flies remembering odors.
  • Parasites
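To make the bullet about fly odor memory concrete, here is a minimal sketch of an ordinary Bloom filter - the fly circuit’s “modified” variant differs in detail, and the odor names below are invented for illustration:

```python
import hashlib

# Minimal Bloom filter sketch (illustrative only; not the fly's variant).
# m bit positions, k hash functions; no false negatives, rare false positives.
class BloomFilter:
    def __init__(self, m=64, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # derive k deterministic bit positions from k salted hashes
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # every position set -> "probably seen"; any position clear -> definitely new
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("geosmin")  # hypothetical odor name
print("geosmin" in bf)  # True - added items are always found
```

The appeal for a brain-like substrate is that membership queries need only a fixed, small number of noisy “bit” lookups rather than exact storage of each item.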

Is your brain more complicated than your fridge?

6 minute read

Published:

The human brain is often described as “the most complex object in the universe” - justified by the number of synapses it contains or similar. But a glass of water contains lots of molecules and degrees of freedom as well.1 While the amount of relevant computation a brain performs during its lifetime may be high,2 the amount of information needed to describe it is upper-bounded by how much relevant data

  • was contained in the fertilized egg it developed from, and
  • flowed into it after conception.
  1. This problem is not just hypothetical: as of 2016, the surrounding water molecules were a major expense in biochemical simulations, according to what I heard. I don’t know the current status (QUESTION). 

  2. See Joseph Carlsmith’s report for a thorough attempt at estimation. The number of operations needed may be much higher or much lower than the number of spikes occurring in a natural brain. Furthermore, as they become more complex, artificial and natural computation systems tend to become bottlenecked by communication rather than computation - so the “number of operations required” may turn out to be irrelevant. TODO either elaborate, or remove this? 
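The first part of that upper bound can be made concrete with back-of-envelope arithmetic - the genome figure below is a standard rough estimate, not a number from this post, and it ignores compressibility and the fact that only part of the genome is brain-relevant:

```python
# Back-of-envelope bound on the "design information" in the fertilized egg,
# counting only the genome (illustrative numbers):
genome_bp = 3.1e9            # approximate human genome length, base pairs
genome_bits = 2 * genome_bp  # 2 bits per base pair (4 possible letters)
genome_mb = genome_bits / 8 / 1e6
print(f"genome upper bound: ~{genome_mb:.0f} MB")  # ~775 MB
```

So the heritable part of the bound is on the order of a CD-ROM, which is what makes the “most complex object in the universe” framing questionable.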

Reachable states in quantum phase estimation

5 minute read

Published:

A famous, basic algorithm in quantum computing is the quantum phase estimation algorithm. We can see the algorithm as a quantum query algorithm1 for oracles $O_d=\ket{\mathrm{idle}}\bra{\mathrm{idle}} + d\ket{\mathrm{v}}\bra{\mathrm{v}}$, where $d\in\mathbb{C}$, $|d| = 1$ is the eigenvalue to be estimated - for discretization purposes, we choose $|D|\in\mathbb{N}^{+}$ and assume $d$ is a $|D|$th root of unity, i.e. $d\in D$ with $D:=\left\{z\in\mathbb{C}\mid z^{|D|}=1\right\}$.

  1. https://github.com/qudent/RhoPaths 
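Assuming $|D| = 2^n$ and an $n$-qubit counting register, the textbook phase estimation circuit for this setting can be simulated classically in a few lines - a toy sketch (the function and variable names are mine, not from the post): after the controlled oracle calls, basis state $\ket{x}$ of the counting register carries phase $d^x$, and the inverse QFT concentrates all amplitude on the index $k$ with $d = e^{2\pi i k/|D|}$.

```python
import numpy as np

def phase_estimate(d, n):
    """Recover k from d = exp(2*pi*1j*k / 2**n), simulating textbook QPE."""
    N = 2 ** n
    # counting register after the controlled-oracle calls:
    # uniform superposition where |x> has picked up phase d**x
    state = np.array([d ** x for x in range(N)]) / np.sqrt(N)
    # inverse QFT matrix: entries omega**(-j*x) / sqrt(N)
    omega = np.exp(2j * np.pi / N)
    iqft = np.array([[omega ** (-j * x) for x in range(N)]
                     for j in range(N)]) / np.sqrt(N)
    amplitudes = iqft @ state
    return int(np.argmax(np.abs(amplitudes)))

k, n = 5, 4
d = np.exp(2j * np.pi * k / 2 ** n)
print(phase_estimate(d, n))  # 5 - exact, since d is a |D|th root of unity
```

Because $d$ is assumed to lie exactly in $D$, the final amplitudes form a Kronecker delta and the measurement succeeds with certainty; for $d$ between grid points one would instead get a peaked distribution around the nearest $k$.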

How to formalize the notion that a computation implicitly contains another one?

6 minute read

Published:

Suppose you want to play the original version of Tetris, written for a Soviet Elektronika 60 computer - but you only have a binary version of that program. So you write an emulator that runs on your modern MacBook and simulates the Elektronika 60’s behaviour. Of course, the straightforward way to write such an emulator is to simulate the Elektronika 60’s memory states and CPU step by step. Then the emulator’s execution trace - the history of instructions and memory states it went through before termination, and the causal implications that gave rise to them - “implicitly contains” all the original computer’s calculations. The question is: can we formalize this notion of “implicit containment”, and is it possible to write an emulator that doesn’t implicitly perform all the original computer’s calculations to predict its display outputs?
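The “straightforward emulator” whose trace contains the emulated run can be illustrated with a toy machine invented for this purpose (an accumulator machine, nothing like the real Elektronika 60):

```python
# Toy step-by-step emulator: programs are lists of (op, arg) pairs acting
# on a single accumulator. The trace records every (pc, acc) state the
# emulated machine passes through - the "implicitly contained" computation.
def emulate(program, acc=0):
    trace, pc = [], 0
    while pc < len(program):
        trace.append((pc, acc))  # record the emulated machine's state
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        pc += 1
    trace.append((pc, acc))      # final state
    return acc, trace

result, trace = emulate([("ADD", 3), ("MUL", 4)])
print(result)  # 12
print(trace)   # [(0, 0), (1, 3), (2, 12)]
```

A formalization would have to say in what sense this trace “contains” the emulated run - e.g. that each emulated state is recoverable from the emulator’s state by a simple, fixed decoding map - while ruling out trivial decodings that smuggle in the computation themselves.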

portfolio

publications

talks

teaching