CountVectorizer

CountVectorizer(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.int64'>)

Convert a collection of text documents to a matrix of token counts.

This implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.

If you do not provide an a priori dictionary and you do not use an analyzer that does some kind of feature selection, then the number of features will be equal to the vocabulary size found by analyzing the data.

Read more in the :ref:`User Guide <text_feature_extraction>`.

Parameters

Attributes

Examples

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> corpus = [
...     'This is the first document.',
...     'This document is the second document.',
...     'And this is the third one.',
...     'Is this the first document?',
... ]
>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
>>> print(X.toarray())
[[0 1 1 1 0 0 1 0 1]
 [0 2 0 1 0 1 1 0 1]
 [1 0 0 1 1 0 1 1 1]
 [0 1 1 1 0 0 1 0 1]]
>>> vectorizer2 = CountVectorizer(analyzer='word', ngram_range=(2, 2))
>>> X2 = vectorizer2.fit_transform(corpus)
>>> print(vectorizer2.get_feature_names())
['and this', 'document is', 'first document', 'is the', 'is this',
'second document', 'the first', 'the second', 'the third', 'third one',
'this document', 'this is', 'this the']
>>> print(X2.toarray())
[[0 0 1 1 0 0 1 0 0 0 0 1 0]
 [0 1 0 1 0 1 0 1 0 0 1 0 0]
 [1 0 0 1 0 0 0 0 1 1 0 1 0]
 [0 0 1 0 1 0 1 0 0 0 0 0 1]]

See Also

HashingVectorizer, TfidfVectorizer

Notes

The stop_words_ attribute can get large and increase the model size when pickling. This attribute is provided only for introspection and can be safely removed using delattr or set to None before pickling.
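
A minimal sketch of trimming a fitted vectorizer before pickling (the names slim and payload are illustrative; corpus is the one from the Examples section above):

>>> import pickle
>>> slim = CountVectorizer(min_df=2).fit(corpus)
>>> slim.stop_words_ = None                 # introspection-only attribute, safe to drop
>>> payload = pickle.dumps(slim)            # smaller pickle; transform() still works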

Methods


build_analyzer()

Return a callable that handles preprocessing, tokenization and n-grams generation.

Returns

analyzer : callable
    A function to handle preprocessing, tokenization and n-grams generation.
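
For illustration, the returned callable maps a single document to its list of analyzed tokens:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> analyze = CountVectorizer(ngram_range=(1, 2)).build_analyzer()
>>> analyze('Bi-grams are cool!')
['bi', 'grams', 'are', 'cool', 'bi grams', 'grams are', 'are cool']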


build_preprocessor()

Return a function to preprocess the text before tokenization.

Returns

preprocessor : callable
    A function to preprocess the text before tokenization.
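
For illustration, with the defaults (lowercase=True, strip_accents=None) the preprocessor simply lowercases the raw string:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> preprocess = CountVectorizer().build_preprocessor()
>>> preprocess('Hello World!')
'hello world!'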


build_tokenizer()

Return a function that splits a string into a sequence of tokens.

Returns

tokenizer : callable
    A function to split a string into a sequence of tokens.
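
For illustration, the default token_pattern keeps runs of two or more word characters and drops punctuation and single-character tokens:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> tokenize = CountVectorizer().build_tokenizer()
>>> tokenize('A quick brown fox!')
['quick', 'brown', 'fox']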


decode(doc)

Decode the input into a string of unicode symbols.

The decoding strategy depends on the vectorizer parameters.

Parameters

Returns

doc : str
    A string of unicode symbols.
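
For illustration, with the defaults (input='content', encoding='utf-8') bytes input is decoded and str input passes through unchanged:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> CountVectorizer().decode(b'caf\xc3\xa9')
'café'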


fit(raw_documents, y=None)

Learn a vocabulary dictionary of all tokens in the raw documents.

Parameters

Returns

self
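
A minimal sketch, reusing corpus from the Examples section above: fitting populates the vocabulary_ attribute, a dict mapping each term to its column index.

>>> cv = CountVectorizer().fit(corpus)
>>> sorted(cv.vocabulary_.items())[:3]
[('and', 0), ('document', 1), ('first', 2)]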


fit_transform(raw_documents, y=None)

Learn the vocabulary dictionary and return document-term matrix.

This is equivalent to fit followed by transform, but more efficiently implemented.

Parameters

Returns


get_feature_names()

Array mapping from feature integer indices to feature name.

Returns


get_params(deep=True)

Get parameters for this estimator.

Parameters

Returns
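
For illustration, the returned dict holds the constructor arguments keyed by name:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> params = CountVectorizer(ngram_range=(1, 2)).get_params()
>>> params['ngram_range'], params['lowercase']
((1, 2), True)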


get_stop_words()

Build or fetch the effective stop words list.

Returns

stop_words : list or None
    A list of stop words.
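
For illustration, with stop_words='english' the built-in English list is returned:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> stops = CountVectorizer(stop_words='english').get_stop_words()
>>> 'the' in stops, 'fox' in stops
(True, False)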


inverse_transform(X)

Return terms per document with nonzero entries in X.

Parameters

Returns
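
A minimal sketch, reusing vectorizer and X from the Examples section above; each entry is an array of the terms present in the corresponding document.

>>> terms_per_doc = vectorizer.inverse_transform(X)
>>> [str(t) for t in sorted(terms_per_doc[0])]
['document', 'first', 'is', 'the', 'this']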


set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters

Returns
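
A minimal sketch of the nested form, assuming the vectorizer sits inside a Pipeline whose step name ('countvectorizer') was generated by make_pipeline:

>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.naive_bayes import MultinomialNB
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> pipe = make_pipeline(CountVectorizer(), MultinomialNB())
>>> _ = pipe.set_params(countvectorizer__ngram_range=(1, 2))
>>> pipe.get_params()['countvectorizer__ngram_range']
(1, 2)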


transform(raw_documents)

Transform documents to document-term matrix.

Extract token counts out of raw text documents using the vocabulary fitted with fit or the one provided to the constructor.

Parameters

Returns
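
A minimal sketch, reusing vectorizer from the Examples section above: terms absent from the fitted vocabulary are ignored, and the result has one column per fitted feature.

>>> X_new = vectorizer.transform(['This is a brand new document.'])
>>> X_new.shape
(1, 9)
>>> print(X_new.toarray())
[[0 1 0 1 0 0 0 0 1]]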