notes:bayesian_classification [2013/03/14 16:33]
andy [Combining words]
notes:bayesian_classification [2013/03/15 14:11]
andy [Combining words]
Line 3: Line 3:
 This page discusses the application of Bayes' Theorem as a simple classifier for text and outlines the mathematical basis and the algorithmic approach.
  
-The information in this page is heavily cribbed from the Wikipedia articles on [[wikipedia>Bayesian spam filtering]], [[wikipedia>naive Bayes classifier]] and [[wikipedia>Bayes' Theorem]].
+The information in this page is heavily cribbed from the Wikipedia articles on [[wikipedia>Bayesian spam filtering]], [[wikipedia>naive Bayes classifier]] and [[wikipedia>Bayes' Theorem]]. There's also a [[http://cs.wellesley.edu/~anderson/writing/naive-bayes.pdf|useful paper on combining word probabilities]] which is worth a read, especially the final section, which discusses an erroneous assumption that some implementations make.
  
 ===== Bayes' Theorem =====
Line 45: Line 45:
 \begin{equation} P(C_i|W) = \frac{P(W|C_i)P(C_i)}{\sum\limits_{j=1}^n{P(W|C_j)P(C_j)}} \end{equation}
  
-This depends partly on the ratio of messages with particular classifications $P(C_i)$. However, some classifiers make the simplifying assumption that all classifications are initially equally likely, which yields:
+This depends partly on the ratio of messages with particular classifications $P(C_i)$, which makes it a **biased** classifier (i.e. it makes assumptions about the distribution of incoming messages before even looking at them). However, some classifiers make the simplifying assumption that all classifications are initially equally likely, which yields:
  
 \begin{equation*} P(C_1) = P(C_2) = ... = P(C_n) = \frac{1}{n} \end{equation*}
  
-Putting this into the equation above allows us to simplify it:
+Putting this into the equation above allows us to simplify it and obtain an **unbiased** classifier:
  
 \begin{equation*} P(C_i|W) = \frac{P(W|C_i) \frac{1}{n} }{ \frac{1}{n} \sum\limits_{j=1}^n{P(W|C_j)}} \end{equation*}
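To make the difference concrete, here is a minimal sketch in Python using entirely hypothetical values for $P(W|C_i)$ and $P(C_i)$; it only illustrates the two formulas above and is not an implementation taken from anywhere in particular.

<code python>
# Hypothetical numbers: p_word_given_class[i] is P(W|C_i), priors[i] is P(C_i).
p_word_given_class = [0.20, 0.05, 0.01]   # e.g. P("drugs"|spam), P("drugs"|work), P("drugs"|personal)
priors             = [0.60, 0.30, 0.10]   # observed ratio of trained messages per category

# Biased form: weight each category by its prior P(C_i) before normalising.
weighted = [p * q for p, q in zip(p_word_given_class, priors)]
biased   = [w / sum(weighted) for w in weighted]

# Unbiased form: assume P(C_1) = ... = P(C_n) = 1/n, so the priors cancel out.
unbiased = [p / sum(p_word_given_class) for p in p_word_given_class]

print(biased)    # P(C_i|W) using the category priors
print(unbiased)  # P(C_i|W) assuming equally likely categories
</code>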
Line 55: Line 55:
  
 This allows the probability of a given word classifying the message correctly to be expressed in terms of the relative frequencies of that word in the different categories, which are easily acquired through suitable training.
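As a rough illustration of what such training might look like, here is a minimal Python sketch; the whitespace tokenisation and the counter names are assumptions for the example, not anything specified on this page.

<code python>
from collections import Counter, defaultdict

# Count, per category, how many messages were trained and how many contained each word.
message_counts = Counter()              # messages trained per category
word_counts    = defaultdict(Counter)   # messages in a category containing each word

def train(category, message):
    message_counts[category] += 1
    for word in set(message.lower().split()):   # set(): count each word once per message
        word_counts[category][word] += 1

def p_word_given_category(word, category):
    """Relative frequency estimate of P(W|C_i)."""
    return word_counts[category][word] / message_counts[category]

train("spam", "cheap drugs online")
train("ham",  "meeting notes attached")
print(p_word_given_category("drugs", "spam"))   # 1.0 with this tiny training set
</code>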
- 
 ==== Combining words ====
  
Line 91: Line 90:
  
 Please forgive the slightly loose use of notation; there are a few too many dimensions over which to iterate for clarity.
 +
 +One slight simplification to note results from the fact that $P(C_i)$ is presumably determined by dividing the number of messages trained in a category by the total number of messages trained. Let $N_{C_i}$ indicate the number of messages trained in category $C_i$, $N$ the total number of messages trained, and $N_{C_i}(W_a)$ the number of messages containing token $W_a$ that were trained in category $C_i$. The equation above then becomes:
 +
 +\begin{equation*} P(C_i|W_a \cap W_b \cap ... \cap W_z) = \frac{\frac{1}{N}N_{C_i}\prod\limits_{j=a}^z{\frac{N_{C_i}(W_j)}{N_{C_i}}}}{\frac{1}{N}\sum\limits_{k=1}^n{N_{C_k}\prod\limits_{j=a}^z{\frac{N_{C_k}(W_j)}{N_{C_k}}}}} \end{equation*}
 +\begin{equation} \Rightarrow P(C_i|W_a \cap W_b \cap ... \cap W_z) = \frac{\prod\limits_{j=a}^z{N_{C_i}(W_j)}}{N_{C_i}^{x-1}\sum\limits_{k=1}^n{\frac{1}{N_{C_k}^{x-1}}\prod\limits_{j=a}^z{N_{C_k}(W_j)}}} \end{equation}
 +
 +Where $x$ is the total number of words considered (i.e. the number of factors in each product). This version may help avoid underflow, but may instead be susceptible to overflow due to the exponentiation involved.
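A minimal Python sketch of this count-based form, using small made-up values for $N_{C_i}$ and $N_{C_i}(W_j)$; note that it does nothing about zero counts or smoothing, since that is not covered here.

<code python>
# Hypothetical counts in the notation above.
message_counts = {"spam": 40, "ham": 60}                   # N_{C_i}
word_counts = {
    "spam": {"cheap": 30, "drugs": 25, "meeting": 1},      # N_{spam}(W)
    "ham":  {"cheap": 2,  "drugs": 1,  "meeting": 35},     # N_{ham}(W)
}

def classify(words):
    """Count-based form of the equation above (no smoothing of zero counts)."""
    x = len(words)
    scores = {}
    for c, n_c in message_counts.items():
        prod = 1
        for w in words:
            prod *= word_counts[c].get(w, 0)   # a single unseen word zeroes the whole product
        scores[c] = prod / n_c ** (x - 1)      # product of N_{C_i}(W_j), divided by N_{C_i}^(x-1)
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()} if total else scores

print(classify(["cheap", "drugs"]))   # heavily weighted towards "spam" with these counts
</code>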
 +==== Two-category case ====
 +
 +A common case is that there are only two categories --- for example, email spam detection. In this case it can be tempting to simplify the above equation using the fact that $P(C_1) = 1 - P(C_2)$. However, this is not as helpful as it seems: you would also need to assume that $P(W_i|C_1) = 1 - P(W_i|C_2)$ to achieve any significant simplification, and this is clearly not the case --- just because the word "drugs" occurs in 20% of spam email, for example, it doesn't follow that it occurs in 80% of non-spam.
 +
 +==== Precision issues ====
 +
 +Since many of the probabilities for particular words may be quite low once a large corpus of messages has been analysed, the product of large numbers of them can lead to underflow if floating point representations are used. One solution is to limit the analysis to a small number of "most interesting" words, which also improves performance.
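One plausible way to pick the "most interesting" words (an assumption here, not something this page prescribes) is to keep the tokens whose per-word probability lies furthest from the neutral value. A minimal two-category Python sketch, where the per-word spam probabilities are made up:

<code python>
# Hypothetical per-word probabilities, e.g. P(spam|W) for each token in a message.
word_probs = {"cheap": 0.92, "drugs": 0.97, "meeting": 0.08, "the": 0.51, "hello": 0.45}

def most_interesting(word_probs, limit=15):
    """Keep the words whose probability is furthest from the neutral 0.5 (two-category case)."""
    ranked = sorted(word_probs.items(), key=lambda kv: abs(kv[1] - 0.5), reverse=True)
    return dict(ranked[:limit])

print(most_interesting(word_probs, limit=3))   # {'drugs': 0.97, 'cheap': 0.92, 'meeting': 0.08}
</code>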
 +
 +Another technique is to perform the multiplications in log space, using addition instead of multiplication. This relies on the identity:
 +
 +\begin{equation*} p_1 p_2 ... p_n = e^{\ln{p_1} + \ln{p_2} + ... + \ln{p_n}} \end{equation*}
 +
 +This is probably of limited use in the equation above, because the summation requires converting back out of log space anyway, but it may prove useful elsewhere.
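As a small illustration (not taken from this page), the conversion back can be done safely with the usual log-sum-exp trick: subtract the largest log score before exponentiating. The per-category word likelihoods below are made up, and equal priors are assumed.

<code python>
import math

# Hypothetical P(W_j|C_i) values for the words of one message, keyed by category.
likelihoods = {
    "spam": [0.20, 0.05, 0.30, 0.0001],
    "ham":  [0.01, 0.15, 0.02, 0.0500],
}

# Sum logs instead of multiplying probabilities: p1*p2*...*pn = exp(ln p1 + ... + ln pn).
log_scores = {c: sum(math.log(p) for p in ps) for c, ps in likelihoods.items()}

# Normalise without underflow: subtract the largest log score before exponentiating (log-sum-exp).
m = max(log_scores.values())
total = sum(math.exp(s - m) for s in log_scores.values())
posteriors = {c: math.exp(s - m) / total for c, s in log_scores.items()}

print(posteriors)   # P(C_i|message) under the equal-priors assumption
</code>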