biaffine_sdp

Biaffine semantic dependency parser.

class hanlp.components.parsers.biaffine.biaffine_sdp.BiaffineSemanticDependencyParser[source]

Implementation of “Stanford’s Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task” (Dozat et al. 2017) and “Establishing Strong Baselines for the New Decade” (He & Choi 2020).
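
A minimal usage sketch follows. The pretrained identifier and the (word, POS) input format are assumptions based on common HanLP 2.x usage; inspect hanlp.pretrained.sdp for the identifiers actually available in your installation.

    import hanlp

    # Load a pretrained biaffine SDP model. The identifier below is an
    # assumption; list hanlp.pretrained.sdp to see what your version ships.
    parser = hanlp.load(hanlp.pretrained.sdp.SEMEVAL16_NEWS_BIAFFINE_ZH)

    # Parse one tokenized, POS-tagged sentence. In a semantic dependency
    # graph a token may receive multiple heads, unlike in tree parsing.
    graph = parser([('蜡烛', 'NN'), ('两', 'CD'), ('头', 'NN'), ('烧', 'VV')])
    print(graph)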

build_criterion(**kwargs)[source]

Implement this method to build the criterion (loss function); a hedged sketch follows the parameter description below.

Parameters

**kwargs – The subclass decides the method signature.
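
As an illustration, a subclass might return a binary cross-entropy criterion, since a graph-based semantic dependency parser scores each candidate head-dependent arc as an independent binary decision. This is a sketch, not HanLP’s actual implementation:

    import torch.nn as nn

    class MySDPParser(BiaffineSemanticDependencyParser):
        def build_criterion(self, **kwargs):
            # Every candidate arc in a dependency graph is an independent
            # yes/no decision, so binary cross-entropy over arc logits is
            # a natural choice here.
            return nn.BCEWithLogitsLoss()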

build_metric(**kwargs)[source]

Implement this to build metric(s). A sketch follows the parameter description below.

Parameters

**kwargs – The subclass decides the method signature.
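
For illustration, the sketch below wires in a self-contained labeled-arc F1 counter. It is a simplified stand-in: the real method is expected to return a metric object implementing HanLP’s metric interface, and ArcF1 here is a hypothetical class, not part of the library.

    class ArcF1:
        # Hypothetical micro-averaged F1 over predicted arcs.
        def __init__(self):
            self.tp = self.n_pred = self.n_gold = 0

        def update(self, pred_arcs, gold_arcs):
            pred, gold = set(pred_arcs), set(gold_arcs)
            self.tp += len(pred & gold)
            self.n_pred += len(pred)
            self.n_gold += len(gold)

        @property
        def score(self):
            p = self.tp / self.n_pred if self.n_pred else 0.0
            r = self.tp / self.n_gold if self.n_gold else 0.0
            return 2 * p * r / (p + r) if p + r else 0.0

    class MySDPParser(BiaffineSemanticDependencyParser):
        def build_metric(self, **kwargs):
            return ArcF1()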

fit(trn_data, dev_data, save_dir, feat=None, n_embed=100, pretrained_embed=None, transformer=None, average_subwords=False, word_dropout: float = 0.2, transformer_hidden_dropout=None, layer_dropout=0, mix_embedding: Optional[int] = None, embed_dropout=0.33, n_lstm_hidden=400, n_lstm_layers=3, hidden_dropout=0.33, n_mlp_arc=500, n_mlp_rel=100, mlp_dropout=0.33, arc_dropout=None, rel_dropout=None, arc_loss_interpolation=0.4, lr=0.002, transformer_lr=5e-05, mu=0.9, nu=0.9, epsilon=1e-12, clip=5.0, decay=0.75, decay_steps=5000, weight_decay=0, warmup_steps=0.1, separate_optimizer=True, patience=100, batch_size=None, sampler_builder=None, lowercase=False, epochs=50000, apply_constraint=False, single_root=None, no_zero_head=None, punct=False, min_freq=2, logger=None, verbose=True, unk='<unk>', pad_rel=None, max_sequence_length=512, gradient_accumulation=1, devices: Optional[Union[float, int, List[int]]] = None, transform=None, **kwargs)[source]

Fit to data and trigger the training procedure. The training and development sets shall be local or remote files. A minimal invocation sketch follows the parameter and return descriptions below.

Parameters
  • trn_data – Training set.

  • dev_data – Development set.

  • save_dir – The directory to save trained component.

  • batch_size – The number of samples in a batch.

  • epochs – Number of epochs.

  • devices – Devices this component will live on.

  • logger – Any logging.Logger instance.

  • seed – Random seed to reproduce this training.

  • finetune – True to load from save_dir instead of creating a randomly initialized component. str to specify a different save_dir to load from.

  • eval_trn – Evaluate training set after each update. This can slow down the training but provides a quick diagnostic for debugging.

  • _device_placeholder – True to create a placeholder tensor which triggers PyTorch to occupy devices so other components won’t take these devices as first choices.

  • **kwargs – Hyperparameters used by sub-classes.

Returns

Any results sub-classes would like to return. Usually the best metrics on the training set.
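
As promised above, a minimal invocation sketch. The data paths and the transformer name are placeholders, not files or models shipped with HanLP; any CoNLL-U-style SDP corpus and any Hugging Face encoder identifier could be substituted.

    from hanlp.components.parsers.biaffine.biaffine_sdp import \
        BiaffineSemanticDependencyParser

    parser = BiaffineSemanticDependencyParser()
    parser.fit(
        trn_data='data/sdp/train.conllu',  # hypothetical path
        dev_data='data/sdp/dev.conllu',    # hypothetical path
        save_dir='models/biaffine_sdp',
        transformer='bert-base-chinese',   # any Hugging Face encoder
        batch_size=32,
        epochs=100,
    )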