
NAME
    Lucy::Analysis::RegexTokenizer - Split a string into tokens.
SYNOPSIS
    my $whitespace_tokenizer
        = Lucy::Analysis::RegexTokenizer->new( pattern => '\S+' );

    # or...
    my $word_char_tokenizer
        = Lucy::Analysis::RegexTokenizer->new( pattern => '\w+' );

    # or...
    my $apostrophising_tokenizer = Lucy::Analysis::RegexTokenizer->new;

    # Then... once you have a tokenizer, put it into a PolyAnalyzer:
    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer->new(
        analyzers => [ $case_folder, $word_char_tokenizer, $stemmer ],
    );
DESCRIPTION
    Generically, "tokenizing" is a process of breaking up a string into an
    array of "tokens". For instance, the string "three blind mice" might be
    tokenized into "three", "blind", "mice".

    Lucy::Analysis::RegexTokenizer decides where it should break up the
    text based on a regular expression compiled from a supplied "pattern"
    matching one token. If our source string is...

        "Eats, Shoots and Leaves."

    ... then a "whitespace tokenizer" with a "pattern" of "\S+" produces...

        Eats,
        Shoots
        and
        Leaves.

    ... while a "word character tokenizer" with a "pattern" of "\w+"
    produces...

        Eats
        Shoots
        and
        Leaves

    ... the difference being that the word character tokenizer skips over
    punctuation as well as whitespace when determining token boundaries.
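    The effect of the two patterns can be approximated with plain Perl
    matching; the following is a minimal sketch of the splitting behavior
    only, not of Lucy's internals:

        # List-context //g collects every non-overlapping match.
        my $text = "Eats, Shoots and Leaves.";
        my @whitespace_tokens = $text =~ /\S+/g;
        # ("Eats,", "Shoots", "and", "Leaves.")
        my @word_char_tokens  = $text =~ /\w+/g;
        # ("Eats", "Shoots", "and", "Leaves")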
CONSTRUCTORS
    new( [labeled params] )
        my $word_char_tokenizer = Lucy::Analysis::RegexTokenizer->new(
            pattern => '\w+',    # optional; a default is supplied
        );

        • pattern - A string specifying a Perl-syntax regular expression
          which should match one token. The default value is
          "\w+(?:[\x{2019}']\w+)*", which matches "it's" as well as "it",
          and "O'Henry's" as well as "Henry" (see the sketch below).
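    The default pattern keeps interior apostrophes (ASCII "'" or the curly
    apostrophe U+2019) inside a token, so contractions and possessives
    survive tokenization whole. A minimal sketch using plain Perl matching,
    as illustration rather than Lucy's internals:

        # Apply the default pattern directly with list-context //g.
        my @tokens = "It's O'Henry's turn" =~ /\w+(?:[\x{2019}']\w+)*/g;
        # ("It's", "O'Henry's", "turn")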
INHERITANCE
    Lucy::Analysis::RegexTokenizer isa Lucy::Analysis::Analyzer isa
    Lucy::Object::Obj.