
1 million sentences to save in DB - removing non-relevant English words


There are two common approaches:

  1. Compile a stop list.
  2. POS tag the sentences and throw out those parts of speech that you think are not interesting.

In both cases, determining which words/POS tags are relevant can be done with a measure such as pointwise mutual information (PMI).
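For illustration, here is a rough sketch of both approaches in Python using NLTK (assuming the stopword corpus and tagger models have been downloaded), plus a toy PMI score computed from raw counts. The tag set and all function names are my own choices, not a prescribed recipe:

```python
import math
import nltk
from nltk.corpus import stopwords

STOP = set(stopwords.words('english'))
# An assumption: adjectives, adverbs, verbs and nouns carry most of the
# sentiment signal; everything else is thrown away.
KEEP_TAGS = ('JJ', 'RB', 'VB', 'NN')

def filter_sentence(sentence):
    tokens = nltk.word_tokenize(sentence)
    # Approach 1: drop stop-listed words.
    tokens = [t for t in tokens if t.lower() not in STOP]
    # Approach 2: keep only the POS classes above. Note this also drops
    # '!' and '?' -- see the caveat below before doing so blindly.
    return [tok for tok, tag in nltk.pos_tag(tokens)
            if tag.startswith(KEEP_TAGS)]

def pmi(n_wc, n_w, n_c, n):
    """PMI of word w and class c: log2(P(w,c) / (P(w) * P(c))),
    estimated from raw counts over n training sentences."""
    return math.log2((n_wc * n) / (n_w * n_c))

print(filter_sentence("The battery life is terrible, but I love the screen!"))
```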

Mind you: standard stop lists from information retrieval may or may not work in sentiment analysis. I recently read a paper (no reference, sorry) claiming that ! and ?, which search engines commonly strip out, are valuable clues for sentiment analysis. (The same may hold for 'I', especially when you also have a neutral category.)

Edit: you can also safely throw away everything that occurs only once in the training set (so-called hapax legomena). Words that occur once have little information value for your classifier, but may take up a lot of space.
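A minimal sketch of that hapax filter (function name is mine):

```python
from collections import Counter

def remove_hapax(tokenized_sentences):
    """Drop every word that occurs exactly once in the whole training set."""
    counts = Counter(tok for sent in tokenized_sentences for tok in sent)
    return [[tok for tok in sent if counts[tok] > 1]
            for sent in tokenized_sentences]
```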


To reduce the amount of data retrieved from your database, you may create a dictionary in your database -- a table that maps words* to numbers** -- and then retrieve only a number vector for training and the complete sentence for manually marking its sentiment.

* No scientific publication comes to mind, but it may be enough to use stems or lemmas instead of full words; that would reduce the size of the dictionary.

** If this operation kills your database, you can build the dictionary in a local application that uses a text-indexing engine (e.g., Apache Lucene) and store only the result in your database.
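A sketch of that idea in Python, building the word-to-number dictionary in memory on stems (per footnote *) so the database only ever stores the finished table and the number vectors. Here sqlite3 stands in for whatever database you actually use, and the table and function names are hypothetical:

```python
import sqlite3
from collections import defaultdict
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def build_dictionary(tokenized_sentences):
    """Map each stem to a small integer id, assigning ids on first sight."""
    ids = defaultdict(lambda: len(ids) + 1)
    vectors = [[ids[stemmer.stem(tok.lower())] for tok in sent]
               for sent in tokenized_sentences]
    return dict(ids), vectors

def store(db_path, tokenized_sentences):
    dictionary, vectors = build_dictionary(tokenized_sentences)
    conn = sqlite3.connect(db_path)
    conn.execute('CREATE TABLE IF NOT EXISTS dictionary (word TEXT, id INTEGER)')
    conn.execute('CREATE TABLE IF NOT EXISTS vectors (sentence_id INTEGER, word_ids TEXT)')
    conn.executemany('INSERT INTO dictionary VALUES (?, ?)', dictionary.items())
    # Store each sentence as a space-separated id vector; the training job
    # then reads only these short strings, not the raw text.
    conn.executemany('INSERT INTO vectors VALUES (?, ?)',
                     [(i, ' '.join(map(str, v))) for i, v in enumerate(vectors)])
    conn.commit()
    conn.close()
```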