Category Archives: AI

Betting on UFC Fights – A Statistical Data Analysis

Mixed Martial Arts (MMA) is an incredibly entertaining and technical sport to watch. It’s become one of the fastest growing sports in the world. I’ve been following MMA organizations like the Ultimate Fighting Championship (UFC) for almost eight years now, and in that time have developed a great appreciation for MMA techniques. After watching dozens of fights, you begin to pick up on what moves win and when, and spot strengths and weaknesses in certain fighters. However, I’ve always wanted to test my knowledge against the actual stats – like do accomplished wrestlers really beat fighters with little wrestling experience?

To do this, we need fight data, so I crawled and parsed all the MMA fights from Sherdog.com. This data includes fighter profiles (birth date, weight, height, disciplines, training camp, location) and fight records (challenger, opponent, time, round, outcome, event). After some basic data cleaning, I had a dataset of 11,886 fight records, 1,390 of which correspond to the UFC.

I then trained a random forest classifier on this data to see whether a state-of-the-art machine learning model could identify any winning and losing characteristics. Over 10-fold cross-validation, the resulting model scored a surprisingly decent AUC of 0.69; an AUC closer to 0.5 would indicate that the model can’t predict winning fights any better than fair coin flips.
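For the curious, here’s a minimal sketch of that evaluation step. It assumes the fight records have already been turned into a numeric feature matrix with binary labels, and it uses scikit-learn purely for illustration – the exact features and implementation I used aren’t spelled out here.

# Minimal sketch, assuming X is a numeric feature matrix (one row per fight,
# e.g. age difference, height difference, win/loss counts) and y is a binary
# label vector (1 if the first listed fighter won). scikit-learn is assumed.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate_model(X, y):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # 10-fold cross-validation, scored by area under the ROC curve (AUC)
    scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
    return scores.mean()

# An AUC near 0.5 means the model does no better than a coin flip.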

So there may be interesting patterns in this data … Feeling motivated, I ran exhaustive searches over the data to find feature combinations that indicate winning or losing behaviors. Many hours later, several dozen such insights were found.
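To give a flavor of how a single candidate rule can be checked, here’s a hypothetical sketch: count how often the rule’s pick actually wins among the fights it covers, and test that count against a fair coin. The SciPy binomial test shown is one plausible way to get the 5%-level significance stars below; it isn’t necessarily the exact test I used.

# Hypothetical sketch of scoring one candidate rule. `fights` is assumed to be
# a list of dicts with whatever fields the rule needs; the binomial test
# against p=0.5 is an assumption.
from scipy.stats import binomtest

def score_rule(fights, applies, picked_fighter_won):
    """applies(f) -> bool: does the rule cover this fight?
       picked_fighter_won(f) -> bool: did the fighter the rule picks win?"""
    covered = [f for f in fights if applies(f)]
    wins = sum(1 for f in covered if picked_fighter_won(f))
    p_value = binomtest(wins, len(covered), p=0.5).pvalue if covered else 1.0
    return wins, len(covered), p_value

# Example: "fighters older than 32 will more likely lose" becomes a rule that
# covers fights with a 33+ year old fighter and picks the younger fighter.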

Here are the most interesting ones (stars indicate statistical significance at the 5% level):

Top UFC Insights

Fighters older than 32 years of age will more likely lose

This was validated in 173 out of 277 (62%) fights*

Fighters with more than 6 TKO victories fighting opponents older than 32 years of age will more likely win

This was validated in 47 out of 60 (78%) fights*

Fighters from Japan will more likely lose

This was validated in 36 out of 51 (71%) fights*

Fighters who have lost 2 or more KOs will more likely lose

This was validated in 54 out of 84 (64%) fights*

Fighters with 3x or more decision wins who are more than 3% taller than their opponents will more likely win

This was validated in 32 out of 38 (84%) fights*

Fighters who have won 3x or more decisions than their opponent will more likely win

This was validated in 142 out of 235 (60%) fights*

Fighters with no wrestling background will more likely lose against fighters who have one

This was validated in 136 out of 212 (64%) fights*

Fighters fighting opponents who have 3x or fewer decision wins and are on a 6 fight (or better) winning streak will more likely win

This was validated in 30 out of 39 (77%) fights*

Fighters younger than their opponents by 3 or more years will more likely win

This was validated in 324 out of 556 (58%) fights*

Fighters who haven’t fought in more than 210 days will more likely lose

This was validated in 162 out of 276 (59%) fights*

Fighters taller than their opponents by 3% will more likely win

This was validated in 159 out of 274 (58%) fights*

Fighters who have fewer submission losses than their opponents will more likely win

This was validated in 295 out of 522 (57%) fights*

Fighters who have lost 6 or more fights will more likely lose

This was validated in 172 out of 291 (60%) fights*

Fighters who have 18 or more wins and have never had a 2 fight losing streak will more likely win

This was validated in 79 out of 126 (63%) fights*

Fighters who have lost back to back fights will more likely lose

This was validated in 514 out of 906 (57%) fights*

Fighters with 0 TKO victories will more likely lose

This was validated in 90 out of 164 (55%) fights

Fighters fighting opponents out of Greg Jackson’s camp will more likely lose

This was validated in 38 out of 63 (60%) fights

 

Top Insights over All Fights

Fighters with 15 or more wins who have 50% fewer losses than their opponents will more likely win

This was validated in 239 out of 307 (78%) fights*

Fighters fighting American opponents will more likely win

This was validated in 803 out of 1303 (62%) fights*

Fighters with 2x or more wins than their opponents, where those opponents lost their last fight, will more likely win

This was validated in 709 out of 1049 (68%) fights*

Fighters who’ve lost their last 4 fights in a row will more likely lose

This was validated in 345 out of 501 (68%) fights*

Fighters currently on a 5 fight (or better) winning streak will more likely win

This was validated in 1797 out of 2960 (61%) fights*

Fighters with 3x or more wins than their opponents will more likely win

This was validated in 2831 out of 4764 (59%) fights*

Fighters who have lost 7 or more times will more likely lose

This was validated in 2551 out of 4547 (56%) fights*

Fighters with no jiu jitsu background will more likely lose against fighters who have one

This was validated in 334 out of 568 (59%) fights*

Fighters who have lost by submission 5 or more times will more likely lose

This was validated in 1166 out of 1982 (59%) fights*

Fighters in the Middleweight division who fought their last fight more recently than their opponents will more likely win

This was validated in 272 out of 446 (61%) fights*

Fighters in the Lightweight division fighting opponents who are 6 feet tall (or taller) will more likely win

This was validated in 50 out of 83 (60%) fights

 

Note – I separated UFC fights from all fights because regulations and rules can vary across MMA organizations.

Most of these insights are intuitive, except for maybe the last one and an earlier one which states that 77% of the time fighters beat opponents who are on a 6 fight (or better) winning streak but have 3x fewer decision wins.

Many of these insights demonstrate statistically significant winning biases. I couldn’t help but wonder – could we use these insights to effectively bet on UFC fights? For the sake of simplicity, what happens if we make bets based on just the very first insight which states that fighters older than 32 years old will more likely lose (with a 62% chance)?

To evaluate this betting rule, I pulled the most recent UFC fights where in each fight there’s a fighter that’s at least 33 years old. I found 52 such fights, spanning 2/5/2011 – 8/14/2011. I placed a $10K bet on the younger fighter in each of these fights.

Surprisingly, this rule calls 33 of these 52 fights correctly (63% – very close to the rule’s observed 62% overall win rate). Each fight called incorrectly results in a loss of $10,000, and for each of the fights called correctly I obtained the corresponding Bodog money line (betting odds) to compute the actual winning amount.

I’ve compiled the betting data for these fights in this Google spreadsheet.

Note, for 6 of the fights that our rule called correctly, the money lines favored the losing fighters.

Let’s compute the overall return of our simple betting rule:

  • For each of these 52 fights, we risked $10,000, or in all $520,000
  • We lost 19 times, or a total of $190,000
  • Based on the betting odds of the 33 fights we called correctly (see spreadsheet), we won $255,565.44
  • Profit = $255,565.44 – $190,000 = $65,565.44
  • Return on investment (ROI) = 100 * 65,565.44 / 520,000 = 12.6%
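The payout arithmetic for each correctly called fight follows standard American money-line conventions; here’s a small sketch of how the winnings and ROI above are computed (the actual lines used are in the linked spreadsheet):

# Sketch of the payout arithmetic, assuming standard American money lines:
# a -200 favorite pays $100 profit per $200 risked; a +150 underdog pays
# $150 profit per $100 risked.
def profit_from_moneyline(stake, line):
    if line < 0:                        # betting a favorite
        return stake * 100.0 / abs(line)
    return stake * line / 100.0         # betting an underdog

def roi(stake, winning_lines, num_losses):
    winnings = sum(profit_from_moneyline(stake, line) for line in winning_lines)
    losses = num_losses * stake
    total_risked = stake * (len(winning_lines) + num_losses)
    return 100.0 * (winnings - losses) / total_risked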

 

That’s a very decent return.

For kicks, let’s compare this to investing in the stock market over the same period of time. If we buy the S&P 500 with a conventional dollar cost averaging strategy to spread out the $520,000 investment, then we get an ROI of -7.31%. Ouch.

Keep in mind that we’re using a simple betting rule that’s based on a single insight. The random forest model, which optimizes over many insights, should predict better and be applicable to more fights.

Please note that I’m just poking fun at stocks – I’m not saying betting on UFC fights with this rule is a more sound investment strategy (risk should be thoroughly examined – the variance of the performance of the rule should be evaluated over many periods of time).

The main goal here is to demonstrate the effectiveness of data driven approaches for better understanding the patterns in a sport like MMA. The UFC could leverage these data mining approaches for coming up with fairer matches (dismiss fights that match obvious winning and losing biases). I don’t favor this, but given many fans want to see knockouts, the UFC could even use these approaches to design fights that will likely avoid decisions or submissions.

Anyways, there’s so much more analysis I’ve done (and haven’t done) over this data. Will post more results when cycles permit. Stay tuned.

23 Comments

Filed under AI, Blog Stuff, Computer Science, Data Mining, Economics, Machine Learning, Research, Science, Statistics, Trends

pplmatch – Find Like Minded People on LinkedIn

http://www.pplmatch.com

Just provide a link to a public LinkedIn profile and an email address and that’s it. The system will go find other folks on LinkedIn who best match that given profile and email back a summary of the results.

It leverages some very useful IR techniques along with a basic machine learned model to optimize the matching quality.
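As a rough illustration of the kind of IR matching involved (this is a toy sketch, not pplmatch’s actual pipeline), one could compare TF-IDF vectors of the public profile text using cosine similarity:

# Toy sketch only: TF-IDF plus cosine similarity over public profile text.
# `seed_profile_text` and `candidate_texts` are assumed to be text scraped
# from public LinkedIn profiles; the real service adds a learned model on top.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_matches(seed_profile_text, candidate_texts, top_k=10):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform([seed_profile_text] + candidate_texts)
    sims = cosine_similarity(X[0], X[1:]).ravel()
    best = sims.argsort()[::-1][:top_k]
    return [(candidate_texts[i], float(sims[i])) for i in best]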

Some use cases:

  • If I provide a link to a star engineer, I can find a bunch of folks like that person to go try to recruit. One could also use LinkedIn / Google search to find people, but sometimes it can be difficult to formulate the right query, and it may be easier to just pivot off an ideal candidate.
  • I recently shared it with a colleague of mine who just graduated from college. He really wants to join a startup but doesn’t know of any (he just knows about the big companies like Microsoft, Google, Yahoo!, etc.). With this tool he found people who shared similar backgrounds and saw which small companies they work at.
  • Generally browsing the people graph based on credentials as opposed to relationships. It seems to be a fun way to find like minded people around the world and see where they ended up. I’ve recently been using it to find advisors and customers based on folks I admire.

Anyways, just a fun application I developed on the side. It’s not perfect by any means but I figured it’s worth sharing.

It’s pretty compute intensive, so if you want to try it send mail to [contact at pplmatch dot com] to get your email address added to the list. Also, do make sure that the profiles you supply expose lots of text publicly – the more text the better the results.

Leave a comment

Filed under AI, Blog Stuff, Computer Science, CS, Data Mining, Information Retrieval, Machine Learning, NLP, Research, Science, Search, Social, Uncategorized, Web2.0

Build an Automatic Tagger in 200 lines with BOSS

My colleagues and I will be giving a talk on BOSS at Yahoo!’s Hack Day in NYC on October 9. To show developers the versatility of an open search API, I developed a simple toy example (see my past ones: TweetNews, Q&A) on the flight over that uses BOSS to generate data for training a machine learned text classifier. The resulting application basically takes two tags, some text, and tells you which tag best classifies that text. For example, you can ask the system if some piece of text is more liberal or conservative.

How does it work? BOSS offers delicious metadata for many search results that have been saved in delicious. This includes top tags, their frequencies, and the number of user saves. Additionally, BOSS makes available an option to retrieve extended search result abstracts. So, to generate a training set, I first build up a query list (100 delicious popular tags), search each query through BOSS (asking for 500 results per), and filter the results to just those that have delicious tags.

Basically, the collection logically looks like this:

[(result_1, delicious_tags), (result_2, delicious_tags) …]

Then, I invert the collection on the tags while retaining each result’s extended abstract and title fields (concatenated together).

This logically looks like this now:

[(tag_1, result_1.abstract + result_1.title), (tag_2, result_1.abstract + result_1.title), …, (tag_1, result_2.abstract + result_2.title), (tag_2, result_2.abstract + result_2.title) …]

To build a model comparing 2 tags, the system selects pairs from the above collection that have matching tags, converts the abstract + title text into features, and then passes the resulting pairs over to LibSVM to train a binary classification model.
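Here’s a rough sketch of that feature step – a simple bag-of-words over abstract + title written out in LibSVM’s sparse format. This is just an illustration; the real feature code lives in the repo linked below.

# Rough sketch: turn each (tag, abstract + title) pair into LibSVM's sparse
# format so the svm-train binary can consume it. Raw term counts are assumed.
from collections import Counter
import re

def build_vocab(texts):
    vocab = {}
    for text in texts:
        for tok in re.findall(r"[a-z]+", text.lower()):
            vocab.setdefault(tok, len(vocab) + 1)   # LibSVM indices start at 1
    return vocab

def to_libsvm_line(label, text, vocab):
    counts = Counter(t for t in re.findall(r"[a-z]+", text.lower()) if t in vocab)
    feats = sorted((vocab[t], c) for t, c in counts.items())
    return str(label) + " " + " ".join("%d:%d" % (i, c) for i, c in feats)

# label 1 for the first tag (e.g. liberal), 0 for the second (conservative);
# one line per search result, written to training_data.txt / test_data.txt.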

Here’s how it works:

tagger viksi$ python gen_training_test_set.py liberal conservative

tagger viksi$ python autosvm.py training_data.txt test_data.txt

__Searching / Training Best Model

____Trained A Better Model: 60.5263

____Trained A Better Model: 68.4211

__Predicting Test Data

__Evaluation

____Right: 16

____Wrong: 4

____Total: 20

____Accuracy: 0.800000

gen_training_test_set finds the pairs with matching tags and splits those results into a training set (80% of the pairs) and a test set (20%), saving the data as training_data.txt and test_data.txt respectively. autosvm learns the best model (brute-forcing the parameters for you – could be handy by itself as a general learning tool) and then applies it to the test set, reporting how well it did. In the above case, the system achieved 80% accuracy over 20 test instances.

Here’s another way to use it:

tagger viksi$ python classify.py apple microsoft bill gates steve ballmer windows vista xp

microsoft

tagger viksi$ python classify.py apple microsoft steve jobs ipod iphone macbook

apple

classify combines the above steps into an application that, given two tags and some text, will return which tag more likely describes the text. Or, in command line form, ‘python classify.py [tag1] [tag2] [some free text]’ => ‘tag1’ or ‘tag2’

My main goal here is not to build a perfect experiment or classifier (see caveats below), but to show a proof of concept of how BOSS or open search can be leveraged to build intelligent applications. BOSS isn’t just a search API, but really a general data API for powering any application that needs to party on a lot of the world’s knowledge.

I’ve open sourced the code here:

http://github.com/zooie/tagger

Caveats

Although the total code is only ~200 lines, the system is fairly state-of-the-art in that it employs LibSVM for its learning model. However, this classifier setup has several caveats due to my time constraints and goals – my main intention for this example was to show the awesomeness of the BOSS data. For example, training and testing on abstracts and titles means the top features will probably include the query terms, so the test set may be fairly easy to score well on and may not be representative of real input data. I did later add code to remove query-related features from the test set, and the accuracy only dipped slightly. For classify.py, the ‘some free text’ input needs to be fairly large (about an extended abstract’s size) to be classified accurately. Another caveat is what happens when both tags have been used to label a particular search result; the current system may only choose one tag, which may incur an error depending on what’s selected in the test set. Furthermore, the features I’m using are super simple and could be greatly improved with TF-IDF scaling, normalization, feature selection (mutual information gain), etc. Also, more training/test instances should be used (with the label distribution checked), along with more baselines and evaluation measures.

I could have made this code a lot cleaner and shorter if I had just used LibSVM’s Python interface, but for some reason I forgot about that and wrote scripts that parse the stdout of the binaries to get something working fast (but dirty).

Leave a comment

Filed under AI, Boss, Code, CS, Data Mining, delicious, Information Retrieval, Machine Learning, Open Source, Research, Search, Social, Statistics, Talk, Tutorial, Yahoo

Google Co-op just got del.icio.us!

Update: Sorry, link is going up and down. Worth trying, but will try to find a more stable option when time cycles free up.

This past week I decided to cook up a service (link in bold near the middle of this post) I feel will greatly assist users in developing advanced Google Custom Search Engines (CSE’s). I read through the Co-op discussion posts, digg/blog comments, reviews, emails, etc. and learned many of our users are fascinated by the refinements feature – in particular, building search engines that produce results like this:

‘linear regression’ on my Machine Learning Search Engine

… but unfortunately, many do not know how to do this, nor do they understand or want to hack up the XML. Additionally, I think it’s fair to say many users interested in building advanced CSE’s have already done similar site tagging/bookmarking through services like del.icio.us. del.icio.us really is great. Here are a couple of reasons why people should (and do) use del.icio.us:

  • It’s simple and clean
  • You can multi-tag a site quickly (comma separated field; don’t have to keep reopening the bookmarklet like with Google’s)
  • You can create new tags on the fly (don’t choose the labels from a fixed drop-down like with Google’s)
  • The bookmarklet provides auto-complete tag suggestions; shows you the popular tags others have used for that current site
  • Can have bundles (two level tag hierarchies)
  • Can see who else has bookmarked the site (can also view their comments); builds a user community
  • Generates a public page serving all your bookmarks

Understandably, we received several requests to support del.icio.us bookmark importing. My part-time role with Google just ended last Friday, so, as a non-Googler, I decided to build this project. Initially, I was planning to write a simple service to convert del.icio.us bookmarks into CSE annotations – and that’s it – but realized, as I learned more about del.icio.us, that there were several additional features I could develop that would make our users’ lives even easier. Instead of just generating the annotations, I decided to also generate the CSE contexts as well.

Ok, enough talk, here’s the final product:
http://basundi.com:8000/login.html

If you don’t have a del.icio.us account, and just want to see how it works, then shoot me an email (check the bottom of the Bio page) and I’ll send you a dummy account to play with (can’t publicize it or else people might spam it or change the password).

Here’s a quick feature list:

  • Can build a full search engine (like the machine learning one above) in two steps, without having to edit any XML, and in less than two minutes
  • Auto-generates the CSE annotations XML from your del.icio.us bookmarks and tags
  • Provides an option to auto-generate CSE annotations just for del.icio.us bookmarks that have a particular tag
  • Provides an option to Auto-calculate each annotation’s boost score (log normalizes over the max # of Others per bookmark)
  • Provides an option to Auto-expand links (appends a wildcard * to any links that point to a directory)
  • Auto-generates the CSE context XML
  • Auto-generates facet titles
  • Since there’s a four-facet by five-label restriction (that’s the max that can fit in the refinements display on the search results page), I provide two options for automatic facet/refinement generation (a rough sketch of the clustering idea appears after this list):
    • The first uses a machine learning algorithm to find the four most frequent disjoint 5-item-sets (based on the # of del.icio.us tag co-occurrences); it then does query expansion over the tag sets to determine good facet titles
    • The other option returns the user’s most popular del.icio.us bundles and corresponding tags
    • Any refinements that do not make it into the top 4 facets are dumped into a fifth facet in order of popularity. If you don’t understand this, don’t worry – you don’t need to! The point is that all of this is automated for you (just use the default Cluster option). If you want control over which refinements/facets get displayed, then just choose Bundle.
  • Provides help documentation links at key steps
  • And best of all … You don’t need to understand the advanced options of Google CSE/Co-op to build an advanced CSE! This seriously does all the hard, tedious work for you!
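Here’s the rough sketch of the Cluster option’s co-occurrence idea promised above. It’s a hypothetical, greedy illustration – not the service’s actual item-set algorithm – assuming the input is simply a list of tag sets, one per del.icio.us bookmark:

# Hypothetical greedy sketch: pick four disjoint groups of five tags that
# co-occur most often across the user's bookmarks. `bookmarks` is assumed to
# be a list of tag lists, one per saved URL.
from collections import Counter
from itertools import combinations

def facet_groups(bookmarks, group_size=5, num_groups=4):
    pair_counts = Counter()
    for tags in bookmarks:
        pair_counts.update(combinations(sorted(set(tags)), 2))

    def cohesion(group):
        return sum(pair_counts[p] for p in combinations(sorted(group), 2))

    tag_freq = Counter(t for tags in bookmarks for t in tags)
    remaining = [t for t, _ in tag_freq.most_common()]
    groups = []
    while remaining and len(groups) < num_groups:
        group, rest = [remaining[0]], remaining[1:]
        # grow the group around the seed tag by strongest co-occurrence
        while len(group) < group_size and rest:
            best = max(rest, key=lambda t: cohesion(group + [t]))
            group.append(best)
            rest.remove(best)
        groups.append(group)
        remaining = [t for t in remaining if t not in group]
    return groups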

In my opinion, there’s no question that this is the easiest way to make a fancy search engine. If I make any future examples I’m using this – I can simply use del.icio.us, sign in to this service, and voila, I have a search engine with facets and multi-label support.


Please note that this tool is not officially endorsed by nor affiliated with Google or Yahoo! It was just something I wanted to work on for fun that I think will benefit many users (including myself). Also, send your feedback/issues/bugs to me or post them on this blog.

74 Comments

Filed under AI, Co-op, CS, CSE, Google, Machine Learning, Research, Tagging

SQL Text Mining

One of the projects Jim Gray and I worked on this summer was classifying the types of SQL queries users ask on the SkyServer site ( http://cas.sdss.org/dr5/en/ ). We were surprised that we could not find any existing research describing methods for breaking down SQL for categorization – especially considering the number of websites and database workloads that keep query logs. Below is a link to the PowerPoint presentation I gave at MSR Mountain View last week, which describes how we analyzed the SQL. Notable features include text processing strategies, clustering algorithms, distance functions, and two example applications (Bot detection and Query recommendation). We plan to publish our algorithms and results in a technical report in the next month or so – but for now, enjoy the .ppt. As always, comments are more than welcome.
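As a purely illustrative sketch of the kind of text processing involved (not the exact pipeline from the talk), one common way to make a SQL log clusterable is to strip out literals to form query templates and then compare templates with a simple token-based distance:

# Illustrative sketch only: normalize SQL by replacing literals, then compare
# queries with a token-set Jaccard distance. The actual text processing and
# distance functions are described in the slides linked below.
import re

def normalize_sql(query):
    q = query.lower()
    q = re.sub(r"'[^']*'", "'?'", q)          # string literals
    q = re.sub(r"\b\d+(\.\d+)?\b", "?", q)     # numeric literals
    return re.sub(r"\s+", " ", q).strip()

def jaccard_distance(q1, q2):
    a, b = set(q1.split()), set(q2.split())
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

# normalize_sql("SELECT ra, dec FROM PhotoObj WHERE objID = 12345")
# -> "select ra, dec from photoobj where objid = ?"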

SQL Text Mining Presentation

Creative Commons License

1 Comment

Filed under AI, CS, Databases, Machine Learning, Research

Hierarchical Multi-Labeling Presentation Slides

Below is a small class talk I gave on the hierarchical multi-labeling classification framework I outlined in my previous ‘Future of Tagging’ posts. I did a small experiment classifying tech news articles as Pro/Anti- Microsoft/Google (along with some other tags, like the tech category and whether the article is a blog post or a publication, based off the text of the piece). The results are very promising – even with such a small corpus of training documents, the classifier performed very well. I do have some ideas on how to further improve accuracy, so when time cycles free up I’ll add those to the code and rerun it on a bigger and better (in terms of tag structure) training data set. By then I’m hoping the code will look less embarrassing for public release and the results will be more conclusive – but until that day, here are the presentation slides:

http://thumbstacks.com/play.html?show=b8f3f0c63f62631d9c5c77e33aba322a


5 Comments

Filed under AI, CS, Machine Learning, Research, Tagging

The Future of Tagging – Part II

Note: Refers to the ideas described in the original post

An Algorithmic Update

Just wanted to let people know that I’ve changed my algorithms/framework for hierarchical multi-labeling classification quite a bit. One thing that really bugged me about my initial idea was the error correction scheme – i.e. sampling the tag network (a Bayesian/MRF hybrid) for closely related bitstrings. All the SAT/conditional probability table values in this network are generated from the number of times tags occur together in the training data, thus making my error correction scheme a popularity contest. But what about the feature values? We SHOULD take these values into account and try to reduce our new input down to a training data example with closely related feature values THAT also happens to have a similar tag bitstring (based off the prediction string output by the binary classifiers).

With regard to assuming there are k errors in the bitstring (call it b) we get back from the classifiers: before, we sampled new candidate bitstrings based off the bit pattern produced after randomly unsetting k bits in b. Instead, since many classifiers (like the support vector machine I’m using) can return a probability confidence associated with the 0/1 output, my new algorithm chooses the k positions to unset not uniformly at random, but rather with a bias towards the bits with the smallest probabilities (since they are most likely the erroneous ones according to the classifiers).
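Here’s a minimal sketch of that biased selection step, assuming b is the predicted bitstring and conf holds each bit’s classifier probability (numpy is just an assumption for the weighted sampling):

# Minimal sketch: weight each position by how unsure its classifier was,
# so the least confident bits are the most likely to be unset.
import numpy as np

def choose_bits_to_unset(b, conf, k, rng=None):
    rng = rng or np.random.default_rng()
    uncertainty = 1.0 - np.asarray(conf, dtype=float)
    probs = uncertainty / uncertainty.sum()
    return rng.choice(len(b), size=k, replace=False, p=probs)

def unset_bits(b, positions):
    candidate = np.array(b).copy()
    candidate[positions] = 0
    return candidate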

Another thing I added were two tag normalization rules for determining how to choose labels:

  1. No more than one tag from each tree/hierarchy
  2. Each tag must be a leaf node in a tree

Why the rules? They provide some level of control over the placement and generality of the tags. The first ensures there’s some separation/disjointness among the tags. And for the second – I was afraid of mixing general and very specific tags together in a grouping because it could hurt my learner’s accuracy (since the tags/labels would not be at the same level of specificity). By forcing tags to be leaf nodes in the trees, we sort of normalize the labels to the same weighted level of specificity.

Another note – when generating the tag binary classifiers, I originally proposed just taking all the files/features that map to a label grouping containing that tag (set as the y=1 cases in the binary classifier’s training data) and all the files/features that map to a grouping that does not contain the tag for the y=0 cases. However, this split of the data seems likely to produce many bad/unnecessary features since (1) there can be a LOT of 0 cases and (2) 0-case files/examples can deal with ANYTHING, introducing their completely irrelevant features into the tag’s binary classifier’s model. But we have a way out of this dilemma thanks to the tag normalization rules above – since we can only choose a single tag from each tree, we can use all the inputs/files/training data examples that map to other leaf-node tags in the SAME tree for the zero cases. This selection of 0-case files scopes the context down to the one label hierarchy/tree that contains the tag we’re trying to learn.
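In code, that negative-example selection might look something like the following sketch (assuming each training example is a (features, tags) pair and tree_of maps a leaf tag to its hierarchy):

# Sketch of the per-tag training set described above: positives are examples
# labeled with the tag; negatives come only from examples labeled with other
# leaves of the same tree. `examples` and `tree_of` are assumed inputs.
def training_set_for_tag(examples, tag, tree_of):
    tree = tree_of(tag)
    pos, neg = [], []
    for features, tags in examples:
        if tag in tags:
            pos.append((features, 1))
        elif any(tree_of(t) == tree for t in tags):
            neg.append((features, 0))
        # examples with no tag from this tree are left out entirely
    return pos + neg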

Anyway, I’ll try to post the pseudo code (and actual code) for my algorithms and some initial experimental results on this blog shortly. Additionally, expect a tutorial describing the steps/software I used to perform these tests.


 

Leave a comment

Filed under AI, CS, Machine Learning, Research, Tagging, TODO

The Future of Tagging

Update (9/25): A Short Presentation On This (forgot to link)
Update (5/06): An Algorithmic Update
Update (3/29): Added a Motivation section

Motivation

After reviewing the comments/emails I’ve received so far, I realized my article could make the motivation clearer. As I’ve mentioned in this post and the comments, I agree with everyone regarding basic tagging – it’s by far the simplest way for any user to organize their media/text so that others can search and relate their content. I don’t see this ever going away. However, in this article I decided to take a step back, look at the issues with the current tagging model, and examine an alternative method, namely hierarchical multi-labeling. Hierarchical multi-labeling solves many of the issues with basic tagging and should lend itself to better auto-tagging algorithms, since it tells us how tags are related to each other. I definitely agree this isn’t something we should expect the average user to perform – but I do think power users and content aggregators like Google News could benefit greatly from this tagging model.

One of my goals this semester is to let anyone (most likely a hardcore geek) pass a tag network and training data (both represented as files) to my web service, which then generates a classifier page for them (with a search box that takes a webpage link or a string of words). Click ‘Classify’ and it’ll return the best set of tags for that input based off the training data examples. Services could use this to classify their news articles, emails, bookmarks, etc. Notice the grunt work (which isn’t too bad) is done by a geek, but all users can benefit from such a system.

Anyway, hope you enjoy the read and please comment if possible.

On Digg:

http://digg.com/software/Why_current_tagging_sucks,_and_how_to_fix_it

Two ways we can improve upon tagging:

  1. Support label hierarchies/groupings
  2. Use multiple (but ‘normalized’, explained below) labels per object

Why Hierarchical Tags?

Many apps make use of a single-level hierarchy of tags

  • Ex’s: Flickr, Gmail, Google Base, Technorati, WordPress, Delicious, YouTube
  • Big buzz around tagging
  • Finally provides users a simple way to structure their unstructured content (typically text, photos, video, and music media)
  • Makes it easier to search for related items

But a one-level hierarchy has issues

  • Namespace – redundant/similarly named tags are treated differently – wasteful
  • Not normalized – tags are not equal; general and specific tags sit at the same level
  • Loses relationships among different tags (how are tag x and tag y causally related?)
  • One-level hierarchy labels are really just keywords
  • Not much different than a concise textual description
  • Only provides structural support if multiple documents use the same tags
  • In its current form, it’s an unstructured way of structuring unstructured content
  • But better than nothing

Very simple example: Gmail has a one-level hierarchy of tags

  • I get an email regarding my CS294 class, so I tag it with ‘CS294’
  • However, I would also like to be able to search through all emails
    relevant to school, so I have to tag it with the label ‘Coursework’
  • But ‘Coursework’ encompasses ‘CS294’
  • I have to redundantly add ‘Coursework’ to each CS294-tagged email even though ‘CS294’ implies ‘Coursework’
  • I could label it ‘Coursework\CS294’, but that’s just one tag specific to
    CS294; I can’t separate out the ‘Coursework’ part for search
  • This slashing technique, popularized in many Delicious sites, provides hierarchy in name only
    • Fails to capture any practical benefits like searching or relating different hierarchies

Why Multi-Label?

Objects typically (and should) fall under several categories

Many cool AI applications/data representations motivate multiple labels:

  • Medical Diagnosis
    • Real life Dr. House without the attitude
    • Normally many causes/diseases are associated with a set of health features
    • Helps diagnosticians narrow down to the most likely set of causes
  • Computer Vision (i.e. Flickr, Riya, espgame.org, peekaboom.com)
    • For ex. Espgame/Peekaboom collect many labels for images/pixels
    • Could use their data to train computer vision learners for auto-tagging
  • Email/Filesystems/Databases/Search (i.e. Gmail, WinFS, SQL, Google Base)
    • File/Directory concept outdated
    • ‘Files’ are really objects which have metadata and relational aspects
    • Multi-Labels present a great, simple way to structure the diverse unstructured content in a file
    • (side-note: hierarchical tags could be used to provide backwards compatibility with File/Directory)
  • News (i.e. Google News, Digg, CNN, NY Times, Slashdot, News.com)
    • Multiple (hierarchical) labels for each news piece
    • Like seeing these labels { News.Tech.Computers.Hardware; Ideology.Geeky.Anti-Microsoft; Source.Online.Blog; People.Bill-Gates }
      … tells me a ton about an article before even reading it
    • Plus I can now search/relate these tags to find similar news articles based off specific attributes

Let’s get a bit more technical

We organize labels into trees (gives us hierarchies)

Per object, we choose multiple labels if each label comes from a different tree (hence ‘normalized’ – this provides some degree of separation/independence among tags)

So, what’s the point of adding all this complexity to tagging?

One of the nice benefits of tagging is that it’s so simple

I agree: I’m not expecting mommy and daddy to do hierarchical multi-labeling

But content providers can do this to reap the benefits described above

AND, because it will help our artificial intelligence algorithms learn how to multi-tag objects automatically (mainly because we know how tags are related to each other)

A possible machine learning algorithm for hierarchical multi-labeling

Design

We’ll build this algorithm based off binary supervised classifiers because:

  • Well understood in theory & practice; simpler, best accuracy
  • Many multiclass classifiers actually use several pairwise (all-pairs, one-versus-all, etc) binary classifiers
  • Many algorithms to work with: Perceptron, Kernels (Support Vector Machines), Neural Nets, Decision Trees, etc.

Want to create a Bayesian network based off the tag trees (actually it’s more like a Markov random field since there are undirected edges between tree nodes which occur together in the training data, annotated with CPT/SAT-based representations describing the causalities)

Ex. of a Tag Network

News

  • Sports
    • Editorial

Ideology

  • Liberal
    • Democrat
    • Marxist
  • Geeky
    • Anti-Microsoft
  • Nationalism
    • Brazil
    • USA

Athletics

  • Outdoor
    • Baseball
    • Football
  • Indoor
    • Soccer

Source

  • Paper
    • Magazine
  • Online
    • Blog
      • Wikipedia

* Does not show (undirected) links between hierarchies (like
News.Sports to Ideology.Nationalism) since it’s hard to show in text

Ex. Training Data

x1=3.45, x2=2.10, x3=5.45, x4=0.20, x5=9.20

y =

  1. News.Sports.Editorial
  2. Ideology.Nationalism.Brazil
  3. Athletics.Indoor.Soccer
  4. Source.Online.Wikipedia

x1=1.25, x2=6.93, x3=3.11, x4=8.01, x5=0.20

y=

  1. News.Tech.Computers.Hardware
  2. Ideology.Geeky.Anti-Microsoft
  3. Source.Online.Blog

How do we fill in the CPT values for each Bayes node in the tag network?
We just count the tag groupings in the training data and use these counts to generate a distribution
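A minimal sketch of that counting step (the Laplace-style smoothing constant is an assumption so no probability ends up exactly zero):

# Minimal sketch: estimate a tag's CPT by counting, over the training label
# sets, how often the tag appears for each assignment of its parent tags.
from collections import Counter, defaultdict

def estimate_cpt(label_sets, tag, parents, alpha=1.0):
    # label_sets: list of tag sets, one per training example
    counts = defaultdict(Counter)
    for tags in label_sets:
        parent_assignment = tuple(p in tags for p in parents)
        counts[parent_assignment][tag in tags] += 1
    cpt = {}
    for pa, c in counts.items():
        total = c[True] + c[False] + 2 * alpha
        cpt[pa] = {True: (c[True] + alpha) / total,
                   False: (c[False] + alpha) / total}
    return cpt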

Learning

  • Create a Kernel (Support Vector) machine based binary classifier for each distinct tag
  • Train each binary classifier with the features from the training data whose y contains the tag (set the classifier’s y = 1 for each of these feature sets)
    and with the features from examples whose y does not contain the tag (set the classifier’s y = 0 for each)

    • (side-note: also known as one-versus-all approach, most common multiclass method)

Predicting

We run a new feature set through the set of binary classifiers, which each output a 0 or 1

Now we could just use this bitstring to immediately return a set of tags (the tags associated with the 1 bits), and several existing multi-label approaches do this, but I think we can do better

The two main issues I have using this bitstring directly:

  1. The binary classifiers treat the tags independently of one another
  2. We don’t know which features correlate to which tags, and to what degree
    • Therefore we may be using irrelevant features (since we use them all) for training the binary classifiers, which hurts accuracy

These issues introduce errors in our bitstring b

However, we can use the tag relationships in our Bayesian network to correct b

This problem lends nicely to an information theoretical approach

  • We received b over a noisy channel
  • Use what we know about tag relationships to reduce error (i.e. Hamming distance from the actual bitstring)

Reducing Error

There are several ways to go about error correcting b; here’s the one I came up with (a rough sketch follows the list – any feedback, especially here, would be great):

  • Use a Gibbs (MCMC) based sampling scheme to generate candidate bitstrings from b
  • Nice convergence properties
  • Not knowing which bits are wrong in b motivates randomized/sampling methods
  • For n iterations, randomly choose a bit and flip it with probability proportional to its value in its CPT, outputting the new bitstring each time
  • This scheme occasionally alternates between sampling new bitstrings based off previously sampled bitstrings and sampling off the original b (we could alternate after k iterations, where k is the median Hamming distance between the strings produced by the binary classifiers and the actual strings on a held-out portion of the training data)
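Here’s a rough sketch of that sampling loop. The description above leaves some details open, so this simplification treats each tag’s CPT as a single marginal probability that the bit is 1, and the restart_every parameter stands in for the alternation described in the last bullet:

# Rough sketch under simplifying assumptions: p_one[i] is a marginal
# probability that bit i should be 1 (a stand-in for the full CPT), and a
# randomly chosen bit is flipped with probability proportional to how
# plausible the flipped value is.
import random

def sample_candidates(b, p_one, n, restart_every=None):
    current = list(b)
    candidates = []
    for step in range(n):
        i = random.randrange(len(current))
        flipped = 1 - current[i]
        p_flip = p_one[i] if flipped == 1 else 1.0 - p_one[i]
        if random.random() < p_flip:
            current[i] = flipped
        candidates.append(tuple(current))
        # occasionally restart the chain from the original bitstring b
        if restart_every and (step + 1) % restart_every == 0:
            current = list(b)
    return candidates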

Now we wish to find ‘interesting’ bitstrings from our set of n

  • By ‘interesting’ I mean bitstrings that share frequent itemsets (i.e. are related)
  • Use the Apriori data mining algorithm to find these bitstrings; call the returned set s
  • Then, over the bitstrings in s, scan for bits that have the same assignment in every bitstring and for bits assigned 1 in at least one bitstring

Run a diagnosis (or MAP) query over the tags assigned 1 conditioned on tags assigned the same value in every bitstring, which returns our desired tag assignments

Here’s an Ex.:

Say the bitstrings we get from the Apriori algorithm are:

A B C D E F G
0 1 0 1 0 1 0
0 1 1 1 0 1 0
0 1 0 1 0 0 1

I scan the bits and see A is 0, B is 1, E is 0, in all the bitstrings

I also see C, F, G were assigned 1 in at least one bitstring

So I run this MAP query over my Bayes network to find the assignment that maximizes:

Pr(C=?, F=?, G=? | A=0, B=1, E=0)

and return tag B (since it was assigned 1 everywhere) along with whichever tags get assigned 1 by this query

Bayesian Inference and Diagnosis

Our previous step has left us with a very difficult problem to solve

  • Just doing plain ol’ Bayesian inference queries is #P-complete
  • But MAP queries are even harder since they need to infer the probabilities of all possible assignments

But luckily for me, I wanted to reduce a fun problem down to a MAP query

  • Lets me relate this project to some very interesting class material :)

Prof. Satish Kumar in CS294 lectured on an exciting method for quickly computing inference/MAP queries

Utilizes several interesting tricks:

  • Precompiling the bayesian network into an efficient SAT-based representation
  • AND’ing each DNF clause with the query terms
  • And then counting the # of solutions to each clause using Karp’s FPRAS algorithm
  • Adding up the counters (each multiplied by some factor) solves your inference queries!

Can do DNF sampling (Las-Vegas or Atlantic-City style) under the same scheme for computing diagnosis queries

(references to learn more:
http://www.eecs.berkeley.edu/~tksk/classes/s06/handouts/lecture-06.pdf http://www.eecs.berkeley.edu/~tksk/PAPERS/thesis.pdf starting on pg. 179)

So, if we just precompile our tag network (before ever learning/predicting labels) then at runtime we can answer MAP queries in polynomial time w.r.t. the size of the SAT-based representation & exponential w.r.t. the size of the largest communication link in our clique tree (an optimization that basically caches variables shared between family nodes)

Pretty cool!

(A nice side-note property: the more variable assignments we condition on, the more we reduce the size of our SAT-based representation)

Benefits

Everything is Parallelizable!

  • Counting the tag occurrences in the training data to populate the CPTs in the Bayes network (split by record)
  • Generating the set of binary classifiers and running a new feature set over the binary classifiers (since they are independent)
  • Induced correction sampling of bitstrings
  • Apriori is parallelizable
  • Scanning the bitstrings returned from Apriori for commonalities (split by bitstring, for each tag just output 1/0, then reduce the sum and check it with the total # of bitstrings)
  • Even Satish’s inference/diagnosis algorithms (split by DNF clause)

Decouples tag relations and learning

  • Our tag network does not condition on specific feature values
  • This could be seen as a bad thing but learning the causalities of specific feature values to a label (or group of labels) sounds very difficult/messy & at best application limiting since it assumes we:
    1. Know all (or even a subset of) the possible values each of the features can take
    2. Have a learning method that maps each of these individual values to a tags distribution
  • However, binary classifiers implicitly find discrepancies in particular features in order to differentiate data inputs
  • In our model, the user just needs to have tag-specific binary classifiers
  • They don’t need to incorporate causality with other tags in their one-tag specific learner
    • Avoiding MultiClass learners (which aren’t nearly as well-understood nor as accurate as binary classifiers)
    • Additionally, our paradigm lets users plug in their favorite binary classifier (many to choose from, as mentioned in the Design section)
  • Our model lets users simply construct tag dependency trees without having to explicitly show why those links exist

By making the problem more complex we might actually be improving learning accuracy

  • Tag networks, hierarchical labeling, & multi-labeling – the complexity added to the system – should improve learning
  • Lets us now exploit the overlap of tags
  • Multi-labeling increases the chances that our learner can predict one, a subset, or all of the tags correctly for a new input
    • This may be a better payoff than the currently popular paradigm where you either get the label right or you don’t

 

This work is licensed under a Creative Commons License

 

34 Comments

Filed under AI, Databases, Google, Machine Learning, Research, Tagging

Want to help the AI community? Play games!

Last week I heard Luis von Ahn‘s talk on CMU’s Human Computation project, which utilizes human cycles to solve significant AI-complete problems like computer vision and NLP. How do they enlist the help of free human labor? By wrapping these problems into addictive, interactive, web-based games. Genius. For example, their ESP Game gives two remote players a series of identical images to label. Each label they both match raises their scores.

So, how does this help advance AI research? Well, despite our major breakthroughs in computer hardware and theory, we haven’t quite figured out a robust method for understanding the contents of a picture. This is a big problem because most pictures on the web don’t come with sufficient metadata and often have uninformative file names like ‘3.jpg’. However, unlike machines, we humans can easily categorize a set of images. These labels would significantly help search engines like Google improve their image rankings and even enable users to search for related images.

Other interesting remarks from the talk – in just the past year humans spent over 9 billion hours playing Windows Solitaire. 9 billion! ASDFLKQOW!! To put this into perspective, it took 5 million hours to construct the Empire State Building and 20 million hours for the Panama Canal. That’s ridiculous. Luis estimates that if just 5,000 humans played ESP for two months, all the images in Google’s image search could be accurately labeled.

So, the moral of this post – if you’re bored and have a few human cycles to spare, try playing these ‘productive’ games.

1 Comment

Filed under AI, Games, Google, Research