Neuroscience tech tree
Published:
Status: Chaotic draft, will hopefully become more coherent within the next 2 months
Together, these may add orders of magnitude to the complexity and resources a brain has to use to accomplish tasks.
Update: This project is now a collaboration with Kevin Kermani Nejad. Current material:
Published:
The human brain is often described as “the most complex object in the universe” - a claim justified by the number of synapses it contains, or similar. But a glass of water contains a lot of molecules and degrees of freedom as well.1 While the amount of relevant computation a brain performs during its lifetime may be high,2 the amount of information needed to describe it is upper-bounded by how much relevant data …
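To make the kind of bound this paragraph is driving at explicit, here is a rough sketch; the decomposition and the placeholder quantities are mine, not taken from the post:

$$K(\text{adult brain}) \;\lesssim\; K(\text{genome}) + K(\text{data absorbed during development and learning}) + K(\text{learning rules and noise}),$$

where $K(\cdot)$ denotes description length. For scale, the human genome has roughly $3\times 10^{9}$ base pairs, i.e. on the order of $6\times 10^{9}$ bits - well under a gigabyte - which is small compared with a naive “one parameter per synapse” description with $\sim 10^{14}$ entries.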
This problem is not just hypothetical: as of 2016, I heard that simulating the surrounding water molecules was a major expense in biochemical simulations. I don’t know the current status (QUESTION). ↩
See Joseph Carlsmith’s report for a thorough attempt at estimation. The number of operations needed may be much higher or much lower than the number of spikes occurring in a natural brain. Furthermore, as they become more complex, artificial and natural computation systems tend to become bottlenecked by communication rather than computation - so the “number of operations required” may turn out to be irrelevant. TODO either elaborate, or remove this? ↩
Published:
A famous, basic algorithm in quantum computing is the quantum phase estimation algorithm. We can see the algorithm as a quantum query algorithm 1 for oracles $O_d=\ket{\mathrm{idle}}\bra{\mathrm{idle}} + d\ket{\mathrm{v}}\bra{\mathrm{v}}$, where $d\in\mathbb{C}$, $\lvert d \rvert = 1$, is the eigenvalue to be estimated - for discretization purposes, we choose $|D|\in\mathbb{N}^{+}$ and assume $d$ is a $|D|$th root of unity, i.e. $d\in D$ with $D:=\left\{\omega\in\mathbb{C}\mid \omega^{|D|}=1\right\}$.
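As a concrete illustration, here is a minimal NumPy sketch of textbook phase estimation for this two-dimensional oracle, assuming the target register is prepared in the eigenstate $\ket{\mathrm{v}}$; the function name, the counting-register size, and the example value of $d$ are mine, not from the post:

```python
import numpy as np

def estimate_eigenvalue(d, n_count=3):
    """Textbook quantum phase estimation for the oracle
    O_d = |idle><idle| + d|v><v|, with the target register in the
    eigenstate |v> (eigenvalue d, |d| = 1).

    Simulates an n_count-qubit counting register and returns the most
    likely estimate exp(2*pi*i*k / 2**n_count) together with its probability.
    """
    N = 2 ** n_count
    ks = np.arange(N)
    # After Hadamards and the controlled-O_d^(2^j) queries, the target stays
    # in |v> and the counting register holds (1/sqrt(N)) * sum_k d^k |k>.
    state = d ** ks / np.sqrt(N)
    # Apply the inverse quantum Fourier transform to the counting register.
    omega = np.exp(-2j * np.pi / N)
    inverse_qft = omega ** np.outer(ks, ks) / np.sqrt(N)
    state = inverse_qft @ state
    probs = np.abs(state) ** 2
    k = int(np.argmax(probs))
    return np.exp(2j * np.pi * k / N), probs[k]

# If d is a (2**n_count)-th root of unity, the estimate is exact.
d = np.exp(2j * np.pi * 3 / 8)
estimate, prob = estimate_eigenvalue(d, n_count=3)
print(estimate, prob)   # recovers d with probability ~1
```

The simulation exploits the fact that, because the target register is an eigenstate, the controlled queries only kick phases back onto the counting register, so only $2^{n_{\mathrm{count}}}$ amplitudes need to be tracked.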
https://github.com/qudent/RhoPaths ↩
Published:
Suppose you want to play the original version of Tetris, written for a Soviet Elektronika 60 computer - but you only have a binary version of that program. So you write an emulator that runs on your modern MacBook and simulates the Elektronika 60’s behaviour to solve the problem. Of course, the straightforward way to write such an emulator is to simulate the Elektronika 60’s memory states and CPU step-by-step. Then the emulator’s execution trace - the history of instructions and memory states it went through before termination, and the causal implications that gave rise to them - “implicitly contains” all the original computer’s calculations. The question is: Can we formalize this notion of “implicit containment”, and is it possible to write an emulator that doesn’t implicitly perform all the original computer’s calculations to predict its display outputs?
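To make “step-by-step emulation with an execution trace” concrete, here is a toy sketch - a made-up three-instruction machine, not the Elektronika 60’s actual instruction set; the ToyMachine class and its opcodes are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ToyMachine:
    """A made-up 3-instruction machine, emulated step by step.

    Instructions are (opcode, a, b) triples stored in memory:
      0 = HALT, 1 = ADD (mem[a] += mem[b]), 2 = JNZ (jump to b if mem[a] != 0).
    """
    memory: list
    pc: int = 0
    # Execution trace: one (pc, instruction, memory snapshot) entry per step.
    trace: list = field(default_factory=list)

    def step(self):
        op, a, b = self.memory[self.pc:self.pc + 3]
        self.trace.append((self.pc, (op, a, b), tuple(self.memory)))
        if op == 0:                               # HALT
            return False
        if op == 1:                               # ADD
            self.memory[a] += self.memory[b]
        elif op == 2 and self.memory[a] != 0:     # JNZ, branch taken
            self.pc = b
            return True
        self.pc += 3
        return True

    def run(self):
        while self.step():
            pass
        return self.trace

# A tiny program: add mem[12] into mem[13] three times, then halt.
program = [1, 13, 12,   # ADD  mem[13] += mem[12]
           1, 14, 15,   # ADD  mem[14] += mem[15]   (decrement loop counter)
           2, 14, 0,    # JNZ  if mem[14] != 0, jump back to address 0
           0, 0, 0]     # HALT
machine = ToyMachine(memory=program + [5, 0, 3, -1])  # data at addresses 12..15
trace = machine.run()
print(machine.memory[13], len(trace))   # 15, and a 10-step trace
```

The point of the toy example is that `trace` records every intermediate instruction and memory state - this is the “implicit containment” in question, and the puzzle is whether an emulator could reproduce the display outputs without anything equivalent to this trace arising inside it.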