Sublime
An inspiration engine for ideas
enneagram
Stuart Evans • 5 cards
It turns out that *all* independently trained neural nets form a connected, multidimensional manifold of low loss: you can always form a low-loss path from one SGD solution to any other. This can be used for efficient generation of ensembles. https://t.co/WqXid9nk9e
Nora Belrose • x.com
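A rough sketch of how that property gets used (hypothetical PyTorch code; `model_a`, `model_b`, `loader`, and `loss_fn` are placeholders, and the low-loss paths found in the literature are usually curved paths found by optimization rather than the straight line probed here): evaluate loss along a candidate path between two independently trained solutions, then average predictions from points on the path as a cheap ensemble.

```python
# Sketch only: probe loss along a candidate path between two SGD solutions and
# average predictions from points on it. model_a, model_b, loader and loss_fn
# are placeholders; real low-loss paths are typically curved, not straight lines.
import copy
import torch

def blend_state(state_a, state_b, t):
    """Weights at position t on the straight line between two solutions (0 <= t <= 1)."""
    return {k: (1 - t) * state_a[k] + t * state_b[k] for k in state_a}

def loss_along_path(model_a, model_b, loss_fn, loader, steps=11):
    """Average loss at evenly spaced points between the two solutions."""
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)
    losses = []
    for t in torch.linspace(0, 1, steps).tolist():
        probe.load_state_dict(blend_state(state_a, state_b, t))
        probe.eval()
        with torch.no_grad():
            total = sum(loss_fn(probe(x), y).item() for x, y in loader)
        losses.append(total / len(loader))
    return losses  # uniformly low values indicate a low-loss path

def path_ensemble(model_a, model_b, x, ts=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Cheap ensemble: average predictions from several points along the path."""
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)
    preds = []
    for t in ts:
        probe.load_state_dict(blend_state(state_a, state_b, t))
        probe.eval()
        with torch.no_grad():
            preds.append(probe(x))
    return torch.stack(preds).mean(dim=0)
```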
But generally neural nets need to "see a lot of examples" to train well. And at least for some tasks it's an important piece of neural net lore that the examples can be incredibly repetitive. And indeed it's a standard strategy to just show a neural net all the examples one has, over and over again. In each of these "training rounds" (or "epochs") ...
Stephen Wolfram • What Is ChatGPT Doing ... And Why Does It Work?
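As a concrete picture of that "over and over again" strategy, here is a generic training-loop sketch (toy data and an arbitrary little model, nothing from Wolfram's book): every epoch revisits the same examples, reshuffled.

```python
# Generic sketch of training over repeated epochs: the same examples are shown
# to the network over and over, reshuffled each pass. All names and numbers
# here are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(1000, 20)                      # toy inputs
y = (X.sum(dim=1) > 0).long()                  # toy labels
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):                        # each epoch revisits every example
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```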
They can combine frames into cognitive networks.
Steven Hayes • A Liberated Mind: The essential guide to ACT
learning
Mo Shafieeha • 20 cards
Cybernetics
Pedro Parrachia • 1 card
Mixture of experts, MoE or ME for short, is an ensemble learning technique that implements the idea of training experts on subtasks of a predictive modeling problem.
In the neural network community, several researchers have examined the decomposition methodology. [...] Mixture-of-Experts (ME) methodology that decomposes the input space, such that e...
Jason Brownlee • Machine Learning Mastery
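A minimal sketch of the mixture-of-experts idea described above (illustrative PyTorch with made-up layer sizes, not Brownlee's code): a gating network learns a soft decomposition of the input space, and the output is the gate-weighted combination of the experts' predictions.

```python
# Minimal mixture-of-experts sketch: a gating network softly assigns each input
# to experts, and the output is the gate-weighted sum of expert predictions.
# Architecture sizes are illustrative assumptions.
import torch
from torch import nn

class MixtureOfExperts(nn.Module):
    def __init__(self, in_dim, out_dim, n_experts=4, hidden=32):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(in_dim, n_experts)  # learns which expert handles which region

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)                # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, n_experts, out_dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)          # gate-weighted combination

moe = MixtureOfExperts(in_dim=10, out_dim=1)
y_hat = moe(torch.randn(8, 10))  # shape (8, 1)
```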

RenTec uses Hidden Markov Models in trading.
The technique generated 60% returns per year over 30 years.
The name of one of RenTec's co-founders is in the algorithm!
Here's how it works: https://t.co/aogI0bDtu7
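A toy illustration of the general technique, not RenTec's model: a two-state Gaussian hidden Markov model over synthetic daily returns, decoded with the Viterbi algorithm. The parameters are assumed known here; in practice they would be estimated from data, presumably via Baum-Welch, which carries Leonard Baum's name.

```python
# Toy sketch: infer "calm" vs "volatile" regimes from returns with a 2-state
# Gaussian hidden Markov model, decoded by the Viterbi algorithm. All numbers
# are made-up illustrations.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic returns: a calm stretch, a volatile stretch, then calm again.
returns = np.concatenate([
    rng.normal(0.0005, 0.005, 100),
    rng.normal(-0.001, 0.02, 50),
    rng.normal(0.0005, 0.005, 100),
])

# Model parameters (assumed known; Baum-Welch would estimate them from data).
pi = np.array([0.5, 0.5])                      # initial state probabilities
A = np.array([[0.95, 0.05],                    # state transition matrix
              [0.10, 0.90]])
means = np.array([0.0005, -0.001])             # per-state emission means
stds = np.array([0.005, 0.02])                 # per-state emission std devs

def log_gauss(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Viterbi decoding in log space: most likely state sequence given the returns.
T, K = len(returns), 2
log_emit = np.stack([log_gauss(returns, means[k], stds[k]) for k in range(K)], axis=1)
delta = np.zeros((T, K))
back = np.zeros((T, K), dtype=int)
delta[0] = np.log(pi) + log_emit[0]
for t in range(1, T):
    scores = delta[t - 1][:, None] + np.log(A)   # rows: previous state, cols: current state
    back[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + log_emit[t]

states = np.zeros(T, dtype=int)
states[-1] = delta[-1].argmax()
for t in range(T - 2, -1, -1):
    states[t] = back[t + 1, states[t + 1]]

print("Inferred volatile days:", int((states == 1).sum()))  # roughly the middle 50 days
```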