“Scrap back propagation” – Artificial intelligence pioneer says we need to start over: https://www.axios.com/ai-pioneer-advocates-starting-over-2485537027.html
“Scrap back propagation” discussion on r/MachineLearning: https://www.reddit.com/r/MachineLearning/comments/70e4ex/n_hinton_says_we_should_scrap_back_propagation/
Comment by reddit user @Optrode on “notable differences between brains and deep learning models”:
- Sensory systems in the brain usually have a great deal of top down modulation (think early layers receiving recurrent input from later layers). There aren’t really any sensory or motor systems in the brain that AREN’T recurrent.
- Sensory systems in the brain also tend to have a lot of lateral inhibition (i.e. neurons inhibiting other neurons in the same layer; see the first sketch after this list).
- Brain sensory systems tend to separate information into channels. E.g. at all levels of the visual system, there are separate pathways for low and high spatial frequency content (coarse outline & movement vs. fine texture) and for color information.
- Particularly with regard to the visual system, inputs are always scanned in a dynamic fashion. When a person views a picture, only a very small subsection of the image (see: fovea, saccade) is seen at high detail at any instant. The “high detail zone” skips around the image, lingering on salient points (a toy saccade loop is sketched after this list).
- Obviously, there’s spike-timing-dependent plasticity (STDP). STDP essentially pushes neurons to predict the future, and I think that unsupervised training methods that focus on predicting the future (this came up in the recent AMA, as I recall) obtain some of the same benefits as STDP. (A minimal sketch of the pairwise rule follows this list.)
- I’ve seen several comments in this thread on how reducing the number of weights per node (e.g. CNN, QRNN) is beneficial, and this resembles the state of affairs in the brain. There is no such thing as a fully connected layer in the brain; connectivity is usually sparse (though not random). This is usually related to the segregation of different channels of information (see the masked-layer sketch after this list).
- Lastly, most information processing / discrimination in the brain is assisted by semantic information. If you see a person in a hospital gown, you are primed to see a nurse or doctor. This priming remains in effect for a while afterwards, since we rarely use our sensory faculties to view collections of random, unrelated photos.
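A few of the mechanisms above are easy to caricature in code. First, lateral inhibition: a minimal NumPy sketch (the function name and all parameter values are hypothetical) in which each unit is suppressed in proportion to a Gaussian-weighted sum of its neighbors’ activity, a crude stand-in for the surround inhibition found in real sensory circuits.

```python
import numpy as np

def lateral_inhibition(activity, strength=0.5, sigma=1.0):
    """Suppress each unit by a Gaussian-weighted sum of its neighbors' activity."""
    n = activity.shape[0]
    idx = np.arange(n)
    # Gaussian inhibitory kernel over unit positions, self-connection zeroed.
    kernel = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(kernel, 0.0)
    inhibition = strength * kernel @ activity
    return np.maximum(activity - inhibition, 0.0)  # firing rates stay non-negative

rates = np.array([0.1, 0.9, 1.0, 0.8, 0.1])
print(lateral_inhibition(rates))  # weak flanking activity is silenced
```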
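The foveation point can likewise be caricatured as a toy saccade loop: crop a small high-detail patch at the most salient location, then suppress that location so the next fixation moves elsewhere. Treating raw intensity as “saliency,” the patch size, and the inhibition-of-return radius are all illustrative assumptions, not a model of real eye movements.

```python
import numpy as np

def glimpse(image, center, size=8):
    """Crop a small high-resolution patch around center = (row, col)."""
    r, c = center
    r0, c0 = max(0, r - size // 2), max(0, c - size // 2)
    return image[r0:r0 + size, c0:c0 + size]

rng = np.random.default_rng(1)
saliency = rng.random((32, 32))  # stand-in for a saliency map over an image

for step in range(3):
    center = np.unravel_index(np.argmax(saliency), saliency.shape)
    patch = glimpse(saliency, center)  # the "fovea": small but high detail
    # ...a downstream network would process `patch` here...
    r, c = center
    saliency[max(0, r - 4):r + 4, max(0, c - 4):c + 4] = 0.0  # inhibition of return
    print(f"saccade {step}: fixated {center}, patch shape {patch.shape}")
```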
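For the STDP bullet, here is the classic pairwise rule with exponential windows (the constants are illustrative): a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike, i.e. when it helped “predict” the postsynaptic firing, and weakened otherwise.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired first: the causal, "predictive" pairing
        return a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:  # post fired first: acausal pairing, so depress
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0

print(stdp_delta_w(10.0, 15.0))  # potentiation: pre led post by 5 ms
print(stdp_delta_w(15.0, 10.0))  # depression: pre lagged post by 5 ms
```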
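Finally, the sparse-connectivity point reduces to applying a fixed binary mask to a weight matrix so that each output unit sees only a small fraction of the inputs. Shapes and density below are arbitrary, and in cortex the sparsity is structured rather than random, so a random mask is only the simplest stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, density = 64, 32, 0.1

# Each output unit connects to roughly 10% of the inputs,
# rather than to every input as in a fully connected layer.
mask = (rng.random((n_out, n_in)) < density).astype(np.float64)
weights = rng.normal(0.0, 0.1, size=(n_out, n_in)) * mask

x = rng.random(n_in)
y = np.maximum(weights @ x, 0.0)  # ReLU over the masked projection

# During training the mask would also gate the gradient
# (grad_w *= mask) so that pruned connections stay pruned.
print(f"fraction of connections present: {mask.mean():.2f}")
```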
Comment by reddit user @Smike713 on “brains are born pre-wired to learn without supervision”:
As Hinton states in the article, this is the fundamental failure of his method: that it cannot, by itself, produce unsupervised learning. If our brains were wired like current ANNs – even if they were wired like ANNs with all the missing features you enumerated – it would be impossible for humans to learn anything at all. For backpropagation-trained ANNs to learn, some inputs must be pre-labeled with the correct answers; our brains receive no such labels, hence the need for them to be pre-wired with something extra.
What Hinton is getting at is the split between computationalism and connectionism. These two views represent complementary research agendas, but they disagree over the significance of explanatory knowledge. Connectionist approaches – such as Hinton’s – do not require that researchers have any explanatory insight into the workings of cognitive functions before implementing those functions in artificial systems. In fact, even when a function is successfully implemented, its inner workings remain black-boxed within the system (e.g., no one knows exactly what function each layer of an ANN is performing when it identifies faces). Computationalists, however, claim that weighting and re-weighting the connections between layers of fungible perceptrons is not sufficient to capture all the computational functions of the human brain; what’s missing are higher-order structures, as well as cellular-level specializations that optimize a system to perform a given function. Therefore, computationalists claim that the most important breakthroughs will follow not from tinkering with existing ANN methods but, rather, from developing new methods that are better informed by neuroscientific and psychological theory.
As Harvard psychologist Steven Pinker has argued in The Blank Slate and How the Mind Works, the brain is not a blank slate of neuronal layers waiting to be pieced together and wired up; we are born with brains already structured for unsupervised learning in a dozen cognitive domains, some of which work pretty well without any learning at all. For this reason, Pinker and computationalists like him are convinced that progress in AI is fundamentally dependent on developing a theoretical understanding of how the mind works. In this article, Hinton was essentially conceding that he’s been won over by the computationalists, which might explain why Pinker has been singing Hinton’s praises on Twitter today 🙂