span_rank¶
Span-ranking based SRL.
- class hanlp.components.srl.span_rank.span_rank.SpanRankingSemanticRoleLabeler(**kwargs)[source]¶
An implementation of “Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling” (He et al. 2018b). It generates candidate triples of (predicate, arg_start, arg_end) and ranks them. A usage sketch follows the parameter list below.
- Parameters
**kwargs – Predefined config.
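A minimal usage sketch, assuming a recent HanLP 2.x install; the pretrained-model identifier in the comment is illustrative and not taken from this page:

```python
import hanlp
from hanlp.components.srl.span_rank.span_rank import SpanRankingSemanticRoleLabeler

# Direct construction; the predefined config is passed as keyword arguments.
srl = SpanRankingSemanticRoleLabeler()

# Alternatively, pretrained span-ranking SRL models can be loaded by identifier.
# The identifier here is illustrative; check hanlp.pretrained.srl for the
# identifiers shipped with your HanLP version.
# srl = hanlp.load('CPB3_SRL_ELECTRA_SMALL')
```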
- build_criterion(**kwargs)[source]¶
Implement this method to build criterion (loss function).
- Parameters
**kwargs – The subclass decides the method signature.
- build_dataloader(data, batch_size, shuffle, device, logger: logging.Logger, generate_idx=False, transform=None, **kwargs) torch.utils.data.dataloader.DataLoader [source]¶
Build dataloader for training, dev and test sets. It’s suggested to build vocabs in this method if they are not built yet.
- Parameters
data – Data representing samples, which can be a path or a list of samples.
batch_size – Number of samples per batch.
shuffle – Whether to shuffle this dataloader.
device – Device tensors should be loaded onto.
logger – Logger for reporting messages when the dataloader takes a long time to build or when vocabs have to be built.
**kwargs – Arguments from **self.config.
- build_metric(**kwargs) Tuple[hanlp.metrics.f1.F1, hanlp.metrics.f1.F1] [source]¶
Implement this to build metric(s).
- Parameters
**kwargs – The subclass decides the method signature.
- build_model(training=True, **kwargs) torch.nn.modules.module.Module [source]¶
Build model.
- Parameters
training – True if called during training.
**kwargs – **self.config.
- build_optimizer(trn, epochs, lr, adam_epsilon, weight_decay, warmup_steps, transformer_lr, **kwargs)[source]¶
Implement this method to build an optimizer.
- Parameters
**kwargs – The subclass decides the method signature.
- build_vocabs(dataset, logger, **kwargs)[source]¶
Override this method to build vocabs.
- Parameters
dataset – Training set.
logger – Logger for reporting progress.
- evaluate_dataloader(data: torch.utils.data.dataloader.DataLoader, criterion: Callable, metric, logger, ratio_width=None, output=False, official=False, confusion_matrix=False, **kwargs)[source]¶
Evaluate on a dataloader. A usage sketch follows the parameter list below.
- Parameters
data – Dataloader which can be built from any data source.
criterion – Loss function.
metric – Metric(s).
output – Whether to save outputs into some file.
**kwargs – Not used.
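In a typical workflow this method is driven indirectly through the public evaluate wrapper inherited from the base component, which builds the dataloader and then calls into this method. A hedged sketch, assuming such a wrapper and an illustrative test-file path:

```python
# Sketch only: the test-file path is an assumption, and `evaluate` refers to the
# base component's public wrapper that builds a dataloader and calls this method.
metric = srl.evaluate('path/to/test.jsonlines', save_dir='data/model/srl_span_rank')
print(metric)
```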
- execute_training_loop(trn: torch.utils.data.dataloader.DataLoader, dev: torch.utils.data.dataloader.DataLoader, epochs, criterion, optimizer, metric, save_dir, logger: logging.Logger, devices, **kwargs)[source]¶
Implement this to run training loop.
- Parameters
trn – Training set.
dev – Development set.
epochs – Number of epochs.
criterion – Loss function.
optimizer – Optimizer(s).
metric – Metric(s).
save_dir – The directory to save this component.
logger – Logger for reporting progress.
devices – Devices this component and dataloader will live on.
ratio_width – The width of dataset size measured in number of characters. Used for logger to align messages.
**kwargs – Other hyper-parameters passed from sub-class.
- fit(trn_data, dev_data, save_dir, embed, context_layer, batch_size=40, batch_max_tokens=700, lexical_dropout=0.5, dropout=0.2, span_width_feature_size=20, ffnn_size=150, ffnn_depth=2, argument_ratio=0.8, predicate_ratio=0.4, max_arg_width=30, mlp_label_size=100, enforce_srl_constraint=False, use_gold_predicates=False, doc_level_offset=True, use_biaffine=False, lr=0.001, transformer_lr=1e-05, adam_epsilon=1e-06, weight_decay=0.01, warmup_steps=0.1, grad_norm=5.0, gradient_accumulation=1, loss_reduction='sum', transform=None, devices=None, logger=None, seed=None, **kwargs)[source]¶
Fit to data; this triggers the training procedure. The training and dev sets shall be local or remote files. A training sketch follows the parameter list below.
- Parameters
trn_data – Training set.
dev_data – Development set.
save_dir – The directory to save trained component.
batch_size – The number of samples in a batch.
epochs – Number of epochs.
devices – Devices this component will live on.
logger – Any logging.Logger instance.
seed – Random seed to reproduce this training.
finetune – True to load from save_dir instead of creating a randomly initialized component, or a str specifying a different save_dir to load from.
eval_trn – Evaluate training set after each update. This can slow down the training but provides a quick diagnostic for debugging.
_device_placeholder – True to create a placeholder tensor which triggers PyTorch to occupy devices so other components won’t take these devices as first choices.
**kwargs – Hyperparameters used by sub-classes.
- Returns
Any results sub-classes would like to return. Usually the best metrics on the training set.
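A rough training sketch corresponding to the signature above; the corpus paths and the embed / context_layer values are placeholders (assumptions), since their concrete types depend on the embedding and context encoder configuration you choose:

```python
from hanlp.components.srl.span_rank.span_rank import SpanRankingSemanticRoleLabeler

# Hypothetical training call: the paths, embedding and encoder configs below are
# placeholders, not values documented on this page.
srl = SpanRankingSemanticRoleLabeler()
srl.fit(
    trn_data='path/to/train.jsonlines',  # assumed local (or remote) training file
    dev_data='path/to/dev.jsonlines',    # assumed dev file
    save_dir='data/model/srl_span_rank',
    embed=my_embedding,                  # placeholder: a word/contextual embedding config
    context_layer=my_context_encoder,    # placeholder: e.g. an LSTM context encoder config
    batch_size=40,
    lr=1e-3,
    transformer_lr=1e-5,
)
```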
- fit_dataloader(trn: torch.utils.data.dataloader.DataLoader, criterion, optimizer, metric, logger: logging.Logger, linear_scheduler=None, gradient_accumulation=1, **kwargs)[source]¶
Fit onto a dataloader.
- Parameters
trn – Training set.
criterion – Loss function.
optimizer – Optimizer.
metric – Metric(s).
logger – Logger for reporting progress.
**kwargs – Other hyper-parameters passed from sub-class.
- predict(data: Union[str, List[str]], batch_size: Optional[int] = None, fmt='dict', **kwargs)[source]¶
Predict on data fed by the user. Users shall avoid calling this method directly since it is not guarded with torch.no_grad and will introduce unnecessary gradient computation. Use __call__ instead; a sketch follows the parameter list below.
- Parameters
data – Sentences or tokens.
**kwargs – Used in sub-classes.
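A minimal inference sketch; the tokenized sentence is illustrative and srl stands for a trained or loaded SpanRankingSemanticRoleLabeler:

```python
# Calling the component (rather than predict) keeps inference free of gradient
# computation, as recommended above.
tokens = ['The', 'stock', 'rose', 'sharply', 'today', '.']
result = srl(tokens)
print(result)
```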