Metacademy - Differential geometry for machine learning
If you could figure out which patterns of neural activity in a 100-dimensional population are fundamental to that population – and which are just recycled combinations of those fundamental patterns – you could explain that neural population with fewer than 100 dimensions.
This “blessing of nonuniformity,” whereby data is not spread uniformly throughout (hyper)space but instead concentrates near a lower-dimensional structure, is often what saves the day.
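As a minimal sketch of this idea (not from the original text), the snippet below builds synthetic 100-dimensional "population activity" that is secretly driven by only a handful of underlying patterns, then uses PCA to show that far fewer than 100 dimensions explain nearly all of the variance. The sizes, noise level, and 95% threshold are illustrative assumptions.

```python
# Illustrative sketch: low-dimensional structure hidden in 100-dimensional data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_samples, n_neurons, n_latent = 1000, 100, 5          # assumed, illustrative sizes
latent = rng.standard_normal((n_samples, n_latent))    # "fundamental" patterns
mixing = rng.standard_normal((n_latent, n_neurons))    # how each neuron recycles them
activity = latent @ mixing + 0.1 * rng.standard_normal((n_samples, n_neurons))

pca = PCA().fit(activity)
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_needed = int(np.searchsorted(cumvar, 0.95)) + 1      # dimensions for 95% of variance
print(f"{n_needed} of {n_neurons} dimensions explain 95% of the variance")
```

With the settings above, only about five components are needed, mirroring the point that the apparent dimensionality of the data can be much larger than the dimensionality of the structure generating it.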