[Image: Pillowfights 51, by asterix611 via Flickr]

In preparation for a blog post I’m going to write some time this week, I found myself wanting to somehow quantify the vocabulary richness of a piece of text.

btw, code at the bottom 😉

It’s an interesting problem because when you read something it’s pretty easy to tell when an author is using rich vocabulary, but reducing that observation to a single number turns out to be a bit of a brainfuck. It’s obviously related to word frequencies somehow, and what we really want to measure is the shape of the word frequency distribution.

Luckily, googling around for about an hour turned up a clue. Way back in 1944, a statistician called G.U. Yule cracked this problem in a book titled The Statistical Study of Literary Vocabulary. What he came up with is the so-called Yule’s K characteristic.

Wikipedia is scarce on these things, so we know we’re treading strange, strange ground here. Persistent searching with Google Scholar turned up a version of the book that wasn’t paywalled into oblivion. However, since it was on Google Books, it was missing some key pages.

Luckily, what looks like a random R homework assignment explains perfectly how to compute Yule’s K value:

A complementary way of assessing the vocabulary difficulty of texts is to measure their lexical richness. Two indices one could use are Yule’s K or Yule’s I. These two are defined as follows:
(1) Yule’s K = 10,000 × (M2 − M1) ÷ (M1 × M1)
(2) Yule’s I = (M1 × M1) ÷ (M2 − M1)
where M1 is the number of all word forms a text consists of and M2 is the sum of the products of each observed frequency to the power of two and the number of word types observed with that frequency (cf. Oakes 1998:204). For example, if one word occurs three times and four words occur five times, M2 = (1·3²) + (4·5²) = 109. The larger Yule’s K, the smaller the diversity of the vocabulary (and thus, arguably, the easier the text). Since Yule’s I is based on the reciprocal of Yule’s K, the larger Yule’s I, the larger the diversity of the vocabulary (and thus, arguably, the more difficult the text).

Unfortunately I don’t have the link anymore. It was a seriously random PDF I found online; the title seems to be “Quantitative Corpus Linguistics with R: A Practical Introduction”.
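
To make those definitions concrete, here’s the toy example from the quote worked through in Python. This is just a sketch; I’m reading “the number of all word forms” as the total running word count.

# Toy example from the quote: one word occurs three times, four words occur five times
freq_of_freqs = {3: 1, 5: 4}  # frequency -> how many words occur with that frequency

M1 = sum(freq * count for freq, count in freq_of_freqs.items())       # 1*3 + 4*5 = 23 words in total
M2 = sum(freq ** 2 * count for freq, count in freq_of_freqs.items())  # 1*3**2 + 4*5**2 = 109

K = 10000 * (M2 - M1) / (M1 * M1)  # ~1625.7, low diversity
I = (M1 * M1) / (M2 - M1)          # ~6.15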

In the hope that this blog post saves somebody a few hours of googling when they’re trying to measure vocabulary richness, here’s my Python implementation of Yule’s K characteristic (or rather its inverse, Yule’s I):

from nltk.stem.porter import PorterStemmer
from itertools import groupby
 
def words(entry):
    return filter(lambda w: len(w) > 0,
                  [w.strip("0123456789!:,.?(){}[]") for w in entry.split()])
 
def yule(entry):
    # yule's I measure (the inverse of yule's K measure)
    # higher number is higher diversity - richer vocabulary
    d = {}
    stemmer = PorterStemmer()
    for w in words(entry):
        w = stemmer.stem(w).lower()
        try:
            d[w] += 1
        except KeyError:
            d[w] = 1
 
    # M1: number of distinct (stemmed) words
    M1 = float(len(d))
    # M2: sum of squared frequencies (words at each frequency * frequency^2)
    M2 = sum([len(list(g))*(freq**2) for freq,g in groupby(sorted(d.values()))])
 
    try:
        return (M1*M1)/(M2-M1)
    except ZeroDivisionError:
        return 0

For example, the output of that function for this post is 21.6.
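
If you want to try it on your own text, calling it is just a matter of feeding it a string (the filename here is made up):

with open("blogpost.txt") as f:
    print(yule(f.read()))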

I just wish I knew how to make that middle part more functional. I don’t like having weird for loops strewn about my code like that.
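
For what it’s worth, here’s a sketch of how that middle part could look with collections.Counter doing the tallying. It reuses the words() helper from above and computes the same M1 and M2 as the loop version:

from collections import Counter
from nltk.stem.porter import PorterStemmer

def yule_counter(entry):
    # same Yule's I measure, but the counting is a single expression instead of a loop
    stemmer = PorterStemmer()
    freqs = Counter(stemmer.stem(w).lower() for w in words(entry))

    M1 = float(len(freqs))                    # number of distinct (stemmed) words
    M2 = sum(f ** 2 for f in freqs.values())  # sum of squared frequencies

    try:
        return (M1 * M1) / (M2 - M1)
    except ZeroDivisionError:
        return 0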

