How to get rid of punctuation using NLTK tokenizer?

Take a look at the other tokenizing options that NLTK provides (see the nltk.tokenize module documentation). For example, you can define a tokenizer that picks out sequences of alphanumeric characters as tokens and drops everything else:

from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r'\w+')
tokenizer.tokenize('Eighty-seven miles to go, yet.  Onward!')

Output:

['Eighty', 'seven', 'miles', 'to', 'go', 'yet', 'Onward']
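If all you need is this \w+ behaviour, you can get the same result with the standard library's re module alone; for a simple pattern like this, RegexpTokenizer effectively runs re.findall under the hood (a minimal sketch, no NLTK required):

```python
import re

def tokenize_words(text):
    # Pick out runs of alphanumeric characters (plus underscore),
    # dropping punctuation and whitespace, like RegexpTokenizer(r'\w+').
    return re.findall(r'\w+', text)

print(tokenize_words('Eighty-seven miles to go, yet.  Onward!'))
# ['Eighty', 'seven', 'miles', 'to', 'go', 'yet', 'Onward']
```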


You do not really need NLTK to remove punctuation; you can do it with plain Python. For strings (Python 2):

import string

s = '... some string with punctuation ...'
s = s.translate(None, string.punctuation)

Or for Unicode (this form also works for Python 3 strings):

import string

translate_table = dict((ord(char), None) for char in string.punctuation)
s = s.translate(translate_table)  # translate returns a new string, so reassign

and then use this string in your tokenizer.
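On Python 3, the idiomatic way to build that deletion table is str.maketrans. A minimal sketch of the clean-then-tokenize pipeline (using str.split as a stand-in tokenizer):

```python
import string

# Python 3: the third argument of str.maketrans maps each listed
# character to None, i.e. deletes it on translate().
table = str.maketrans('', '', string.punctuation)

s = 'Eighty-seven miles to go, yet.  Onward!'
cleaned = s.translate(table)
print(cleaned.split())
# ['Eightyseven', 'miles', 'to', 'go', 'yet', 'Onward']
```

Note the caveat: deleting the hyphen before tokenizing glues 'Eighty-seven' into 'Eightyseven', whereas tokenizing first (as RegexpTokenizer does) splits it into two tokens.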

P.S. The string module has other sets of characters that can be removed in the same way (such as string.digits).
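For instance, string.digits can simply be concatenated onto the deletion set (a small sketch using the Python 3 str.maketrans form):

```python
import string

# Delete punctuation and digits in a single pass.
table = str.maketrans('', '', string.punctuation + string.digits)
print('room 101, 2nd floor!'.translate(table).split())
# ['room', 'nd', 'floor']
```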


The code below removes all punctuation marks as well as non-alphabetic characters. It is adapted from the NLTK book:

http://www.nltk.org/book/ch01.html

import nltk

s = "I can't do this now, because I'm so tired.  Please give me some time. @ sd  4 232"
words = nltk.word_tokenize(s)
words = [word.lower() for word in words if word.isalpha()]
print(words)

Output:

['i', 'ca', 'do', 'this', 'now', 'because', 'i', 'so', 'tired', 'please', 'give', 'me', 'some', 'time', 'sd']
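Note the lone 'ca' in the output: word_tokenize splits contractions ("can't" becomes "ca" + "n't"), and isalpha() then drops the "n't" half. If you want to keep contracted forms, one option is to filter with a regex that also allows internal apostrophes. A sketch (the token list is hard-coded to what word_tokenize typically produces, so this example runs without NLTK installed):

```python
import re

# Tokens as nltk.word_tokenize would typically emit them
# (hard-coded here so the sketch runs without NLTK).
tokens = ['I', 'ca', "n't", 'do', 'this', 'now', ',', 'because',
          'I', "'m", 'so', 'tired', '.']

# Keep tokens that contain at least one letter and otherwise only
# letters or apostrophes; drop pure punctuation.
words = [t.lower() for t in tokens
         if re.fullmatch(r"[A-Za-z']*[A-Za-z][A-Za-z']*", t)]
print(words)
# ['i', 'ca', "n't", 'do', 'this', 'now', 'because', 'i', "'m", 'so', 'tired']
```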