OK, it’s been quite some time since I wrote anything here: partly because of all the stuff going on in my life (the thesis defense process and so on), but most importantly, because of a machine learning project that has been taking up lots of my time. I mean Kaggle’s right whale...

Apparently, what started as a couple of lectures and assignments has turned into a completely different understanding of probability and machine learning that I’m still trying to digest. So this is going to be a big (and possibly really oversimplified) post. Everything is random: we start from the idea that the world...

So before the summer school (which I’m going to briefly mention a little later) I decided to speed-read a randomly chosen set of recent papers, just to catch up with what’s going on. Here are some short notes I made during the reading. Most of the papers...

So I took a crack at Kaggle’s cats and dogs dataset, as I intended to. In short, some good and bad news: I managed to achieve a 75% recognition rate, which is, to be honest, not good at all for binary classification. Still, I hoped that’ll give me...

So, how do we learn feature hierarchies from images? Cut a lot of patches from the image data and feed them to an unsupervised algorithm. You get a bunch of features that look like edges (a well-known result, reproduced many times). Then we can take bigger patches, represent them in terms of...
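As a minimal sketch of that first step (the excerpt doesn’t name a specific unsupervised algorithm, so plain k-means is assumed here, and the image is a random stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "image"; in practice these would be real photos.
image = rng.random((64, 64))

def extract_patches(img, size=8, n=500):
    """Cut n random size-by-size patches from a single image."""
    ys = rng.integers(0, img.shape[0] - size, n)
    xs = rng.integers(0, img.shape[1] - size, n)
    return np.stack([img[y:y + size, x:x + size].ravel()
                     for y, x in zip(ys, xs)])

def kmeans(X, k=16, iters=10):
    """Plain k-means: the unsupervised step that, on real images,
    tends to produce edge-like centroids."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest centroid.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Move each centroid to the mean of its assigned patches.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return centroids

patches = extract_patches(image)
features = kmeans(patches)   # each row is one learned "feature" patch
print(features.shape)        # (16, 64): 16 filters of 8x8 pixels
```

The hierarchical part the excerpt goes on to describe would then repeat this on bigger patches, encoded in terms of the first-layer features.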