Although bootstrapped learners are common for semi-supervised learning, there are a few other approaches, such as EM, graphical modeling, or the graph-based scheme Prof. Smith mentioned during seminar. It's difficult, but if we can keep pushing the accuracy achievable from just a small set of seed data, that would be a major step up from fully supervised learning, even if the approach still needs a lot of work.
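To make the bootstrapping idea concrete, here is a minimal self-training sketch: a toy nearest-centroid classifier starts from a handful of labeled seed points, then repeatedly adopts the unlabeled points it is most confident about and retrains on the grown set. The function names, the `margin` confidence proxy, and the 1-D data are all illustrative assumptions on my part, not taken from any of the papers discussed.

```python
def centroid(points):
    """Mean of a list of 1-D points."""
    return sum(points) / len(points)

def self_train(seed, unlabeled, rounds=3, margin=1.0):
    """Bootstrapped (self-training) sketch.
    seed: dict mapping label -> list of 1-D seed points.
    unlabeled: list of 1-D points with no labels.
    Each round, any unlabeled point whose gap between its nearest and
    second-nearest class centroid exceeds `margin` (a crude confidence
    proxy) is adopted with the nearest centroid's label; centroids are
    then recomputed from the grown labeled set."""
    labeled = {k: list(v) for k, v in seed.items()}
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = {k: centroid(v) for k, v in labeled.items()}
        confident = []
        for x in pool:
            dists = sorted((abs(x - c), k) for k, c in cents.items())
            if len(dists) > 1 and dists[1][0] - dists[0][0] > margin:
                confident.append((x, dists[0][1]))
        if not confident:  # nothing confident left: stop early
            break
        for x, k in confident:
            labeled[k].append(x)
            pool.remove(x)
    return labeled, pool

# Tiny usage example: two seed points per class, three unlabeled points.
labeled, leftover = self_train(
    {"low": [0.0, 1.0], "high": [9.0, 10.0]},
    [0.5, 2.0, 8.5],
)
```

The appeal, as I understand it, is exactly what the seminar discussion suggested: the labeled set grows without further annotation effort, though errors in early rounds can snowball, which is presumably why pushing accuracy from simple seeds is hard.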
I also found the papers Daniel and Brendan introduced pretty interesting. The card-pyramid-like data structure seems like a cool approach regardless of the end goal, although its usefulness was unclear to me in the end. Daniel introduced the FACTORIE library, which constructs graphical models with pretty good performance results. I recall Markov Logic Networks were mentioned as the basis for comparison, although I'm unfamiliar with those at the moment.
As a note, I have yet to find anyone's presentation uninteresting, although sometimes the conversation goes a bit beyond my current knowledge base.