TextTokenizingDataset(data: Union[str, List], transform: Union[Callable, List] = None, cache=None, generate_idx=None, delimiter=None, max_seq_len=None, sent_delimiter=None, char_level=False, hard_constraint=False)
A dataset for tagging tokenization tasks.
data – The local or remote path to a dataset, or a list of samples where each sample is a dict.
transform – Predefined transform(s).
cache – True to enable caching, so that transforms won’t be called twice.
generate_idx – Create an IDX field for each sample to store its order in the dataset. Useful for prediction when samples are re-ordered by a sampler.
delimiter – Delimiter between tokens used to split a line in the corpus.
max_seq_len – Sentences longer than max_seq_len will be split into shorter ones if possible.
sent_delimiter – Delimiter between sentences, like period or comma, which indicates a long sentence can be split here.
char_level – Whether the sequence length is measured at the character level.
hard_constraint – Whether to enforce a hard length constraint on sentences. If there is no sent_delimiter in a sentence, it will be split at a token anyway.
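A minimal sketch of how max_seq_len, sent_delimiter, char_level and hard_constraint could interact when splitting a long sentence. This is an illustration of the semantics described above, not the dataset's actual implementation; the function name and its greedy splitting strategy are assumptions.

```python
def split_long_sentence(tokens, max_seq_len, sent_delimiter=(',', '.', ';'),
                        char_level=False, hard_constraint=False):
    """Split a token list into chunks no longer than max_seq_len.

    Hypothetical helper illustrating the parameters documented above.
    Splits preferentially at a sent_delimiter; with hard_constraint,
    splits at an arbitrary token when no delimiter is available.
    """
    def length(chunk):
        # char_level measures length in characters, otherwise in tokens
        return sum(len(t) for t in chunk) if char_level else len(chunk)

    chunks, current, last_delim = [], [], None
    for token in tokens:
        current.append(token)
        if token in sent_delimiter:
            last_delim = len(current)  # split point just after the delimiter
        if length(current) >= max_seq_len:
            if last_delim is not None:
                chunks.append(current[:last_delim])
                current = current[last_delim:]
            elif hard_constraint:
                # no delimiter seen: split at a token boundary anyway
                chunks.append(current)
                current = []
            last_delim = None
    if current:
        chunks.append(current)
    return chunks
```

Without hard_constraint, a chunk that contains no sent_delimiter simply stays longer than max_seq_len, matching the "split into shorter ones if possible" behavior.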