I’ve been trying to read Norbert Wiener’s Cybernetics recently, just to look into the origins of this whole AI thing. Among other things, he talks about the problem of perceiving a geometric figure (a square) the same way regardless of its size, a.k.a. the scale-invariance problem. I thought about how...

I interrupted my previous post because of a nasty error with whitened data. So, good news, me (I guess): it seems that whitening the MNIST dataset is just a bad idea and I shouldn’t do it at all. Now, I’m still not quite sure if this is in fact the case....
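For context, here is roughly what I mean by whitening, sketched as ZCA (the post itself doesn’t show code, and the function and `eps` value here are my own illustration, not the actual experiment). The `eps` term is one plausible culprit: MNIST has lots of near-constant border pixels, so tiny covariance eigenvalues make 1/sqrt(lam) explode and amplify noise.

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """ZCA-whiten a data matrix X (rows = samples, columns = features)."""
    Xc = X - X.mean(axis=0)            # center each feature
    cov = Xc.T @ Xc / Xc.shape[0]      # feature covariance matrix
    lam, U = np.linalg.eigh(cov)       # eigendecomposition (cov is symmetric)
    # Rotate, rescale each direction by 1/sqrt(eigenvalue), rotate back.
    W = U @ np.diag(1.0 / np.sqrt(lam + eps)) @ U.T
    return Xc @ W

# Toy data with correlated features, just to see the effect:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))
Xw = zca_whiten(X)
cov_w = Xw.T @ Xw / Xw.shape[0]
# cov_w is now approximately the identity matrix
```

With near-singular data (like MNIST pixels), a larger `eps` is less faithful but much better behaved.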

Checking my progress: about 10 days dedicated to energy-based models, including Hopfield nets, Boltzmann machines, and their little sisters, RBMs. And I’m starting to notice that I can read and understand (some) machine learning papers now, not just scroll through all the equations in a panic. Another thing I’ve...

This is a continuation of the previous post dedicated to (eventually) understanding Restricted Boltzmann Machines. I’ve already seen Hopfield nets, which act as associative memory systems by storing memories in local energy minima and recovering them from corrupted inputs via energy minimization, and now… for something completely different. The first unexpected...
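That Hopfield mechanism fits in a few lines, so here’s a toy sketch of my own (not from the posts): store one binary pattern with the Hebbian rule, flip a bit, and watch asynchronous updates walk back down the energy E = -½·sᵀWs to the stored memory.

```python
import numpy as np

def train_hebbian(patterns):
    """Hebbian learning: W is the averaged outer product of stored patterns,
    with the diagonal zeroed (no self-connections)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    return -0.5 * s @ W @ s

def recall(W, s, steps=20):
    """Asynchronous updates: flip each unit toward the sign of its input,
    which never increases the energy."""
    s = s.copy()
    rng = np.random.default_rng(42)
    for _ in range(steps):
        for i in rng.permutation(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1], dtype=float)
W = train_hebbian(pattern[None, :])
corrupted = pattern.copy()
corrupted[0] *= -1          # corrupt the input by flipping one bit
restored = recall(W, corrupted)
# restored equals pattern, and its energy is lower than the corrupted state's
```

With many stored patterns the minima start interfering (spurious states), which is part of what makes the jump to Boltzmann machines interesting.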

Trying to jump on the deep learning bandwagon, I often miss things. Sometimes I find my mind filled with models and algorithms I hardly fully understand: they become obscure concepts and fancy buzzwords. That actually bothers me, so I’ve decided to make a couple of detailed runs across the stuff...