Sunday, November 21, 2010

Deep Transfer via Second-Order Markov Logic


Transfer learning is a topic I am not familiar with. It is said to cope with training data that differs from the testing data. There are two kinds of transfer learning: so-called shallow transfer, which deals with different distributions in training and testing (within the same domain), and so-called deep transfer, which deals with different domains in training and testing.

Deep transfer is possible only because different domains share the same underlying logical structure, and this is where I think Markov logic should be able to play an important role. The paper explains why second-order logic is necessary (to find domain-independent knowledge) and why the representation must be relational (for transfer learning to make sense). Their proposed algorithm is DTM (Deep Transfer via Markov logic).
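To make the second-order idea concrete for myself, here is a minimal sketch of the abstraction step as I understand it: a first-order clause becomes a second-order "template" when its predicate symbols are replaced by predicate variables, so clauses from unrelated domains can map to the same template. The predicate names and the `lift` function below are my own illustration, not the paper's actual implementation, which also scores templates by how well their groundings hold in the source domain.

```python
def lift(clause):
    """Replace each distinct predicate symbol with a predicate
    variable r0, r1, ..., yielding a second-order template."""
    mapping = {}
    template = []
    for pred, args in clause:
        if pred not in mapping:
            mapping[pred] = f"r{len(mapping)}"
        template.append((mapping[pred], args))
    return tuple(template)

# Two clauses from two hypothetical domains:
# protein interactions and web hyperlinks.
bio_clause = [("Interacts", ("x", "y")), ("Interacts", ("y", "z"))]
web_clause = [("LinksTo", ("x", "y")), ("LinksTo", ("y", "z"))]

# Both lift to the same template r0(x, y) ^ r0(y, z) -- this shared,
# domain-independent structure is the kind of knowledge DTM transfers.
print(lift(bio_clause))
print(lift(bio_clause) == lift(web_clause))  # True
```

The point of the sketch is just that once predicate names are abstracted away, what remains is a pattern over arbitrary relations, which is why the knowledge can cross domain boundaries.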

For the experiments, the authors use three domains that seem not in the least related to each other (yeast protein data, WebKB, and social network data from Facebook). I am still not sure whether these experiments really show how their transfer works. Maybe I should return to this paper after a more careful study.
