Tuesday, June 19, 2012

Keyword Extraction

Keyword extraction is a very difficult problem in natural language processing. The discussions of the topic on Stack Overflow and Wikipedia make good background reading.


In this post I will talk about two ways to extract keywords from a large chunk of text.


The first script (text2term_topia.py) is based on a Python package available at http://pypi.python.org/pypi/topia.termextract/. Please install this package before you run the code.

#text2term_topia.py
#coding: utf-8

from topia.termextract import extract

"""
Install the package from http://pypi.python.org/pypi/topia.termextract/ for this to work.
"""

def extract_keyword(text):
    extractor = extract.TermExtractor()
    try:
        # each extracted item is a (term, occurrences, strength) tuple
        taggedTerms = sorted(extractor(text))
    except Exception:
        taggedTerms = []
    terms = []
    for tterms in taggedTerms:
        terms.append(tterms[0])
    return terms

def main():
    text = 'University of California, Davis (also referred to as UCD, UC Davis, or Davis) is a public teaching and research university established in 1905 and located in Davis, California, USA. Spanning over 5,300 acres (2,100 ha), the campus is the largest within the University of California system and third largest by enrollment.[6] The Carnegie Foundation classifies UC Davis as a comprehensive doctoral research university with a medical program, veterinary program, and very high research activity.'

    fetched_keywords = extract_keyword(text)
    print fetched_keywords

if __name__ == "__main__":
    main()


The second script (text2term_yahoo.py) is based on Yahoo!'s term extraction engine. It requires an app_id from Yahoo! to run successfully. For more, read here: http://developer.yahoo.com/search/content/V1/termExtraction.html




# text2term_yahoo.py


import simplejson, urllib, sys

APP_ID = '' #INSERT YOUR APP_ID HERE
EXTRACT_BASE = 'http://search.yahooapis.com/ContentAnalysisService/V1/termExtraction'

class YahooSearchError(Exception):
    pass

def extract(context,query='',**kwargs):
    kwargs.update({
        'appid': APP_ID,
        'context': context,
        'query': query,
        'output': 'json'
    })
    url = EXTRACT_BASE + '?' + urllib.urlencode(kwargs)
    result = simplejson.load(urllib.urlopen(url))
    if 'Error' in result:
        # An error occurred; raise an exception
        raise YahooSearchError, result['Error']
    return result['ResultSet']

def extract_keyword(text):
    try:
        info = extract(text)
        if 'Result' in info:
            return info['Result']
        else:
            return []
    except YahooSearchError, e:
        print e, "\nAn API error occurred."
        sys.exit()
    except IOError:
        print "A network IO error occurred."
        sys.exit()

def main():
    text = 'University of California, Davis (also referred to as UCD, UC Davis, or Davis) is a public teaching and research university established in 1905 and located in Davis, California, USA. Spanning over 5,300 acres (2,100 ha), the campus is the largest within the University of California system and third largest by enrollment.[6] The Carnegie Foundation classifies UC Davis as a comprehensive doctoral research university with a medical program, veterinary program, and very high research activity.'
    print extract_keyword(text)

if __name__ == "__main__":
    main()

The Yahoo!-based code can run only 3,000 queries every 24 hours, which is a disadvantage, but it returns a small number of very high-quality (i.e. less noisy) keywords.

The topia-based code, on the other hand, runs locally and can handle as many queries as we want. Unfortunately, the number of keywords it extracts is large, often noisy, and may need further processing to separate out symbols and numbers. Despite this disadvantage, given the constraint of only 3,000 queries per day with the Yahoo! engine, I mostly prefer the topia term extractor for my work.
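For instance, here is a minimal post-processing sketch (the is_clean_term helper is my own, not part of topia) that drops terms consisting purely of symbols or digits:

import re

def is_clean_term(term):
    # keep only terms that start with a letter and contain nothing
    # beyond letters, digits, spaces and hyphens; tune to your needs
    return re.match(r'^[A-Za-z][A-Za-z0-9 \-]*$', term) is not None

def clean_keywords(terms):
    return [t for t in terms if is_clean_term(t)]

# usage: clean_keywords(extract_keyword(text))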


Stemming or Lemmatizing Words


From wiki (http://en.wikipedia.org/wiki/Stemming): Stemming is the process for reducing inflected (or sometimes derived) words to their stem, base or root form—generally a written word form. For example, the words "stemmer", "stemming", "stemmed" are based on the word "stem".

Multiple algorithms exist to stem words, e.g. the Porter stemming algorithm (http://tartarus.org/~martin/PorterStemmer/), the Snowball family of stemmers, etc. We are going to use the Python NLTK package's lemmatizer, which relies on WordNet's built-in morphy function.
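For comparison, NLTK also ships a Porter stemmer. Unlike a lemmatizer, it clips suffixes mechanically and can leave non-dictionary forms (a quick sketch, assuming NLTK is installed):

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
for word in ['stemmer', 'stemming', 'stemmed', 'feet']:
    # rule-based suffix stripping, no dictionary lookup
    print word, ":", stemmer.stem(word)

Note that a stemmer cannot handle irregular forms like 'feet'. The WordNet lemmatizer, by contrast, maps words to dictionary forms: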

>>> from nltk.stem.wordnet import WordNetLemmatizer
>>> lmtzr = WordNetLemmatizer()
>>> lmtzr.lemmatize('cars')
'car'
>>> lmtzr.lemmatize('feet')
'foot'
>>> lmtzr.lemmatize('stemmed')
'stemmed'
>>> lmtzr.lemmatize('stemmed', 'v')
'stem'
The wordnet lemmatizer only knows four parts of speech (ADJ, ADV, NOUN, and VERB), mapped as {'a': 'adj', 'n': 'noun', 'r': 'adv', 'v': 'verb'}. It considers every word passed to it a noun unless specifically told otherwise. We can get the part-of-speech value of a word from NLTK's pos_tag function, which uses the Penn Treebank tagset and has its own nomenclature to denote parts of speech: the noun tags all start with NN, the verb tags with VB, the adjective tags with JJ, and the adverb tags with RB.

So, we will convert from one set of labels, i.e. the treebank tags, to the other, i.e. the wordnet tags:

import nltk

wordnet_tag = {'NN': 'n', 'JJ': 'a', 'VB': 'v', 'RB': 'r'}

tokens = nltk.word_tokenize("stemmer stemming stemmed")
tagged = nltk.pos_tag(tokens)
for t in tagged:
    try:
        # map the first two letters of the treebank tag to a wordnet pos
        print t[0], ":", lmtzr.lemmatize(t[0], wordnet_tag[t[1][:2]])
    except KeyError:
        # tag not in our map: fall back to the default (noun)
        print t[0], ":", lmtzr.lemmatize(t[0])

Removing stop words


Frequently occurring words, or words that don't add value to the overall goal of the processing, need to be removed from a text. The definition of stop words is highly dependent on the context. Please look up the wiki page on stop words to know more about the concept: http://en.wikipedia.org/wiki/Stop_words

In this post, we are going to define stop words as used by MySQL version 5.6 (http://dev.mysql.com/doc/refman/5.6/en/fulltext-stopwords.html)

We will put the MySQL stop words into a text file (see the P.S. at the end of this section on how to create it) and remove those words from our text.

file = open("mysql_5.6_stopwords.txt")
stop_word_set = set(file.read().split("\n"))
file.close()

def remove_stopwords(wordlist):
    return set(wordlist).difference(stop_word_set)

>>> line = "Football refers to a number of sports that involve kicking a ball with the foot to score a goal"

>>> print remove_stopwords(line.split(" "))
set(['a', 'ball', 'goal', 'Football', 'number', 'sports', 'involve', 'kicking', 'score', 'foot', 'refers'])


In this code, the function is remove_stopwords(wordlist). The input parameter is a list of words, and the function returns a set with the words found in the stop word list removed. Due to the properties of the set data structure, the returned words are unordered.

If the exact input string with stop words removed is required, we can modify the above code as follows:

file = open("mysql_5.6_stopwords.txt")
stop_word_set = set(file.read().split("\n"))
file.close()

def remove_stopwords(line):
    output = []
    for l in line.split(" "):
        if l not in stop_word_set:
            output.append(l)
    return " ".join(output)

>>> line = "Football refers to a number of sports that involve kicking a ball with the foot to score a goal"

>>> print remove_stopwords(line)
Football refers a number sports involve kicking a ball foot score a goal

The first function will generally be faster than the second, thanks to its use of set operations. Select the one that suits your requirement.
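If performance matters for your data, it is easy to check this yourself; a rough timing sketch with timeit (the numbers will vary with your corpus):

import timeit

setup = '''
stop_word_set = set(['a', 'the', 'to', 'of', 'with', 'that'])
words = "Football refers to a number of sports that involve kicking a ball".split()

def v1(wordlist):
    return set(wordlist).difference(stop_word_set)

def v2(wordlist):
    return " ".join(w for w in wordlist if w not in stop_word_set)
'''

print timeit.timeit('v1(words)', setup=setup, number=100000)
print timeit.timeit('v2(words)', setup=setup, number=100000)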




P.S.: Blogger doesn't support uploading files. Another reason why WordPress is better! Best option: copy-paste the words from http://dev.mysql.com/doc/refman/5.6/en/fulltext-stopwords.html into a text file named mysql_5.6_stopwords.txt, and make sure one word appears on each line. Without correct formatting, the above code will break.
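A slightly more defensive loader (my own sketch) guards against such formatting slips by stripping stray whitespace and skipping blank lines:

stop_word_set = set()
with open("mysql_5.6_stopwords.txt") as f:
    for line in f:
        word = line.strip()
        if word:  # skip blank lines
            stop_word_set.add(word)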



Removing punctuation


Python's string module has a built-in constant containing all the punctuation characters:

>>> from string import punctuation
>>> print punctuation
!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~

Code to remove punctuation from a given string:

from string import punctuation

def removePunctuation(string, replacement='', exclude=''):
    # replace every punctuation character except those listed in exclude
    for p in set(punctuation).difference(set(exclude)):
        string = string.replace(p, replacement)
    return string

>>> removePunctuation("Hello World!!",' ')
"Hello World  "

>>> removePunctuation("Hello World!!")
"Hello World"

>>> removePunctuation("Hello-World!!",'  ','!')
"Hello World!!"

The replacement parameter replaces the punctuation characters with the given string; the default is an empty string.

The exclude parameter provides a way to retain specific punctuation. For example, when cleaning a paragraph of text, we might want to retain the full stop (.). The exclude parameter takes a string containing all the punctuation characters that the removal should skip.
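For example, a quick check that full stops survive while other punctuation goes:

>>> removePunctuation("E.g. clean this; keep periods.", '', '.')
'E.g. clean this keep periods.'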

Removing multiple whitespace in Python

import re
s = "The   fox jumped   over    the log."
s = re.sub(r"\s{2,}", " ", s)


This can also be done using lists, which for some reason the Python community really favors over all other methods:


s = "The  fox  jumped  over   the log."
s = filter(None,s.split())
s = " ".join(s)

Sunday, June 3, 2012

Change font matplotlib

How to change fonts in matplotlib code:

import matplotlib

# customization using the formats given at http://matplotlib.sourceforge.net/users/customizing.html

font = {'family': 'Trebuchet MS',  # options: 'serif' (e.g. Times), 'sans-serif' (e.g. Helvetica), 'cursive' (e.g. Zapf-Chancery), 'fantasy' (e.g. Western), 'monospace' (e.g. Courier)
        'style': 'normal',         # options: normal (or roman), italic, oblique
        'weight': 'normal',        # options: normal, bold, bolder, lighter, 100, 200, 300, ... 900
        'size': 10}
matplotlib.rc('font', **font)

axes_font = {'labelsize': 16,        # fontsize of the x and y labels
             'titlesize': 'medium'}  # fontsize of the axes title
matplotlib.rc('axes', **axes_font)
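To confirm the settings took effect, here is a minimal sketch with placeholder data:

import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [2, 4, 9])
plt.xlabel('x axis label')  # drawn at labelsize 16
plt.title('font check')     # drawn at the 'medium' title size
plt.savefig('font_check.png')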

Yep, that's all !!

Thursday, April 5, 2012

Chrome Downloads Bar Hide or Disable

The Chrome downloads bar is probably the least liked feature of Google Chrome, according to discussions in the Google Products Forum. And rightly so. Like so many others, I too hate it. The dislike gets intensified since Chrome provides no easy option to hide or disable the bar. After a long search, I finally discovered one option to achieve this: go to 'chrome://flags/' and disable the 'New Downloads UI'. A relaunch is required and, woohoo, the downloads bar hides nicely.


Now, the sad part. The page at 'chrome://flags/' mentions that the experiments may bite and they are serious about it! :( 


If you try to download a file that is an installer, i.e. a dmg/exe file, or even part of an extension, Chrome asks for a confirmation. With the downloads bar now hidden, the confirmation prompt stays hidden too, and you're left hanging: the download remains stuck, and at some point you must re-enable the 'New Downloads UI' to carry on with life :( Ahh.. Google.. Please give us the option!

Friday, March 23, 2012

pipe with rm

I wanted to delete a large number of files in a directory whose names matched a certain pattern. The command for this is:


find /path/to/dir -name '*criteria*' -exec rm {} \;

or



find /path/to/dir -name '*criteria*'| xargs rm

or


ls | grep criteria | xargs rm


Note: The last command might be slower than the first two.
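For completeness, the same cleanup can be scripted in Python (a sketch using the glob module; substitute your own pattern):

import glob, os

for path in glob.glob('/path/to/dir/*criteria*'):
    os.remove(path)  # delete each matching file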

Sunday, March 11, 2012

Plotting a random geometric graph using Networkx

I wanted to plot the random geometric graph shown in the networkx gallery, with a few tweaks. I wanted two plots: 1) a plot of 600 nodes in a single color, and 2) a similar plot of 600 nodes with a few (75) nodes highlighted in a different color. I thought it would be easy to download the code and add a few lines to do my work. Unfortunately, when I downloaded the code and executed it (without any changes), it crashed on the second line with the following error:


Traceback (most recent call last):
  File "random_geometric_graph.py", line 6, in <module>
    pos=nx.get_node_attributes(G,'pos')
AttributeError: 'module' object has no attribute 'get_node_attributes'

I checked that I had the latest versions of NetworkX and Matplotlib, and I did. No idea why a piece of code downloaded directly from the library's official website won't run. To be sure nothing was wrong with the way the libraries were linked on my machine, I rechecked the code on another machine. Same error, same crash. No idea. Finally I decided to check the code in the library itself. Everything in the get_node_attributes function seemed okay, so I copied it into my working file and it failed again! Narrowing down, it looked like the code had a problem accessing the node attributes, and thus the function crashed. Fast forward: I decided to copy the graph generation function and return the position attributes simultaneously in a separate variable. Here is my final code:



# -*- coding: utf-8 -*-
"""
Generators for geometric graphs.
"""
#    Copyright (C) 2004-2011 by 
#    Aric Hagberg <hagberg@lanl.gov>
#    Dan Schult <dschult@colgate.edu>
#    Pieter Swart <swart@lanl.gov>
#    All rights reserved.
#    BSD license.
from __future__ import print_function

__author__ = "\n".join(['Aric Hagberg (hagberg@lanl.gov)',
                        'Dan Schult (dschult@colgate.edu)',
                        'Ben Edwards (BJEdwards@gmail.com)'])

__all__ = ['random_geometric_graph',
           'waxman_graph',
           'geographical_threshold_graph',
           'navigable_small_world_graph']
from bisect import bisect_left
from functools import reduce
from itertools import product
import math, random, sys
import networkx as nx
import matplotlib.pyplot as plt

#---------------------------------------------------------------
#  Random Geometric Graphs
#---------------------------------------------------------------
def random_geometric_graph(n, radius, dim=2, pos=None):
    r"""Return the random geometric graph in the unit cube.

    The random geometric graph model places n nodes uniformly at random 
    in the unit cube. Two nodes `u,v` are connected with an edge if
    `d(u,v)<=r` where `d` is the Euclidean distance and `r` is a radius 
    threshold.

    Parameters
    ----------
    n : int
        Number of nodes
    radius: float
        Distance threshold value  
    dim : int, optional
        Dimension of graph
    pos : dict, optional
        A dictionary keyed by node with node positions as values.

    Returns
    -------
    Graph
      
    Examples
    --------
    >>> G = nx.random_geometric_graph(20,0.1)

    """
    G=nx.Graph()
    G.name="Random Geometric Graph"
    G.add_nodes_from(range(n)) 
    if pos is None:
        # random positions 
        for n in G:
            G.node[n]['pos']=[random.random() for i in range(0,dim)]
    else:
        nx.set_node_attributes(G,'pos',pos)
    
 
    name = 'pos'
    position_data = dict( (n,d[name]) for n,d in G.node.items() if name in d)
    
    # connect nodes within "radius" of each other
    # n^2 algorithm, could use a k-d tree implementation
    nodes = G.nodes(data=True)
    while nodes:
        u,du = nodes.pop()
        pu = du['pos']
        for v,dv in nodes:
            pv = dv['pos']
            d = sum(((a-b)**2 for a,b in zip(pu,pv)))
            if d <= radius**2:
                G.add_edge(u,v)
                
    return G,position_data


n = 600
d = 0.0725


G,pos = random_geometric_graph(n,d)
print(n, '\t', G.number_of_edges())
# find node near center (0.5,0.5)
dmin=1
ncenter=0
for n in pos:
    x,y=pos[n]
    d=(x-0.5)**2+(y-0.5)**2
    if d<dmin:
        ncenter=n
        dmin=d

# color by path length from node near center
#p=nx.single_source_shortest_path_length(G,ncenter)

highlighted_nodes = random.sample(xrange(n),75)
#print(highlighted_nodes)
p = {}

base_color = '#0F1C95'
trust_color = '#36D258'
for i in range(n):  # nodes are numbered 0 .. n-1
    p[i] = base_color
    if i in highlighted_nodes:
        p[i] = trust_color

plt.figure(figsize=(9,9))
nx.draw_networkx_edges(G,pos,alpha=0.4)
nx.draw_networkx_nodes(G,pos,nodelist=p.keys(),node_size=100,node_color=p.values())

plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
plt.axis('off')
plt.savefig('highlighted_graph.png')

# start a fresh figure for the single-color plot; otherwise the
# highlighted nodes from the first plot would show through
plt.figure(figsize=(9,9))
nx.draw_networkx_edges(G,pos,alpha=0.4)
nx.draw_networkx_nodes(G,pos,nodelist=p.keys(),node_size=100,node_color=base_color)
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
plt.axis('off')
plt.savefig('no-highlight_graph.png')