Data Format

Input Format

RESTful Input


To make a RESTful call, send an HTTP POST request with a JSON body to the server; the body must contain at least a text field or a tokens field. The input to the RESTful API is flexible. It can be in one of the following 3 formats:

  1. A document as a raw str, filled into text. The server will split it into sentences.

  2. A list of sentences, each a raw str, filled into text.

  3. A list of tokenized sentences, each a list of str typed tokens, filled into tokens.

Additionally, fine-grained control is available through the arguments defined in hanlp_restful.HanLPClient.parse().
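The three payload shapes above can be sketched as plain JSON bodies. Only the text and tokens fields come from this document; the example sentences are illustrative:

```python
import json

# 1. A raw document in "text"; the server splits it into sentences.
doc_payload = {"text": "2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。"}

# 2. A list of sentences, each a raw str, in "text".
sents_payload = {"text": ["商品和服务。", "阿婆主来到北京。"]}

# 3. A list of tokenized sentences, each a list of str tokens, in "tokens".
tokens_payload = {"tokens": [["商品", "和", "服务", "。"]]}

# Serialize without escaping CJK characters.
body = json.dumps(tokens_payload, ensure_ascii=False)
```

Exactly one of text or tokens should be set per request; the tokens form skips server-side tokenization entirely.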


curl -X POST "" \
     -H "accept: application/json" -H "Content-Type: application/json" \
     -d "{\"text\":\"2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。阿婆主来到北京立方庭参观自然语义科技公司。\",\"tokens\":null,\"tasks\":null,\"skip_tasks\":null,\"language\":null}"

Model Input

The input format to models is specified per model and per task. Generally speaking, if a model has no built-in tokenizer, its input is a sentence in list[str] form (a list of tokens), or multiple such sentences nested in a list.

If a model has a built-in tokenizer, each sentence is a str. Additionally, you can pass skip_tasks='tok*' to make the model use your pre-tokenized inputs instead of tokenizing them, in which case each of your sentences needs to be in list[str] form, as if there were no tokenizer.

For any model, input is at the sentence level, which means you have to split a document into sentences beforehand. You may want to try NgramSentenceBoundaryDetector for sentence splitting.
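When loading NgramSentenceBoundaryDetector is not an option, a naive rule-based splitter on sentence-final punctuation can serve as a rough stand-in. This is a sketch, not a substitute for the model, which handles ambiguous boundaries far better:

```python
import re

def naive_split(document: str) -> list:
    """Split a Chinese document on sentence-final punctuation.

    A crude approximation of sentence boundary detection; quotes,
    ellipses, and abbreviations are not handled.
    """
    # Lookbehind keeps each delimiter attached to the sentence it ends.
    parts = re.split(r'(?<=[。！？])', document)
    return [p for p in parts if p]

sentences = naive_split("今天天气不错。我们去公园吧！")
```

Each resulting sentence can then be fed to a model individually or as a batch.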

Output Format

The outputs of both HanLPClient and MultiTaskLearning are unified into the same Document format.

For example, the RESTful code below outputs such an instance. The input text is recoverable from the tokens in the output; fill in your own API URL and auth token:

from hanlp_restful import HanLPClient
HanLP = HanLPClient('', auth=None)  # Fill in your auth
print(HanLP.parse('2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。英首相与特朗普通电话讨论华为与苹果公司。'))
{
  "tok/fine": [
    ["2021年", "HanLPv2.1", "为", "生产", "环境", "带来", "次", "世代", "最", "先进", "的", "多", "语种", "NLP", "技术", "。"],
    ["英", "首相", "与", "特朗普", "通", "电话", "讨论", "华为", "与", "苹果", "公司", "。"]
  ],
  "tok/coarse": [
    ["2021年", "HanLPv2.1", "为", "生产环境", "带来", "次世代", "最", "先进", "的", "多语种", "NLP", "技术", "。"],
    ["英", "首相", "与", "特朗普", "通电话", "讨论", "华为", "与", "苹果公司", "。"]
  ],
  "pos/ctb": [
    ["NT", "NR", "P", "NN", "NN", "VV", "JJ", "NN", "AD", "JJ", "DEG", "CD", "NN", "NR", "NN", "PU"],
    ["NR", "NN", "CC", "NR", "VV", "NN", "VV", "NR", "CC", "NR", "NN", "PU"]
  ],
  "pos/pku": [
    ["t", "nx", "p", "vn", "n", "v", "b", "n", "d", "a", "u", "a", "n", "nx", "n", "w"],
    ["j", "n", "p", "nr", "v", "n", "v", "nz", "c", "nz", "n", "w"]
  ],
  "pos/863": [
    ["nt", "w", "p", "v", "n", "v", "a", "nt", "d", "a", "u", "a", "n", "ws", "n", "w"],
    ["j", "n", "c", "nh", "v", "n", "v", "ni", "c", "ni", "n", "w"]
  ],
  "ner/pku": [
    [],
    [["特朗普", "nr", 3, 4], ["苹果公司", "nt", 9, 11]]
  ],
  "ner/msra": [
    [["2021年", "DATE", 0, 1], ["HanLPv2.1", "ORGANIZATION", 1, 2]],
    [["英", "LOCATION", 0, 1], ["特朗普", "PERSON", 3, 4], ["华为", "ORGANIZATION", 7, 8], ["苹果公司", "ORGANIZATION", 9, 11]]
  ],
  "ner/ontonotes": [
    [["2021年", "DATE", 0, 1], ["HanLPv2.1", "ORG", 1, 2]],
    [["英", "GPE", 0, 1], ["特朗普", "PERSON", 3, 4], ["华为", "ORG", 7, 8], ["苹果公司", "ORG", 9, 11]]
  ],
  "srl": [
    [[["2021年", "ARGM-TMP", 0, 1], ["HanLPv2.1", "ARG0", 1, 2], ["为生产环境", "ARG2", 2, 5], ["带来", "PRED", 5, 6], ["次世代最先进的多语种NLP技术", "ARG1", 6, 15]], [["最", "ARGM-ADV", 8, 9], ["先进", "PRED", 9, 10], ["技术", "ARG0", 14, 15]]],
    [[["英首相与特朗普", "ARG0", 0, 4], ["通", "PRED", 4, 5], ["电话", "ARG1", 5, 6]], [["英首相与特朗普", "ARG0", 0, 4], ["讨论", "PRED", 6, 7], ["华为与苹果公司", "ARG1", 7, 11]]]
  ],
  "dep": [
    [[6, "tmod"], [6, "nsubj"], [6, "prep"], [5, "nn"], [3, "pobj"], [0, "root"], [8, "amod"], [15, "nn"], [10, "advmod"], [15, "rcmod"], [10, "assm"], [13, "nummod"], [15, "nn"], [15, "nn"], [6, "dobj"], [6, "punct"]],
    [[2, "nn"], [5, "nsubj"], [5, "prep"], [3, "pobj"], [0, "root"], [5, "dobj"], [5, "conj"], [11, "conj"], [11, "cc"], [11, "nn"], [7, "dobj"], [5, "punct"]]
  ],
  "sdp": [
    [[[6, "Time"]], [[6, "Exp"]], [[5, "mPrep"]], [[5, "Desc"]], [[6, "Datv"]], [[13, "dDesc"]], [[0, "Root"], [8, "Desc"], [13, "Desc"]], [[15, "Time"]], [[10, "mDegr"]], [[15, "Desc"]], [[10, "mAux"]], [[8, "Quan"], [13, "Quan"]], [[15, "Desc"]], [[15, "Nmod"]], [[6, "Pat"]], [[6, "mPunc"]]],
    [[[2, "Nmod"]], [[5, "Agt"], [7, "Agt"]], [[4, "mPrep"]], [[2, "eCoo"]], [[7, "dMann"]], [[5, "Cont"]], [[5, "ePurp"]], [[7, "Cont"]], [[10, "mConj"], [11, "mConj"]], [[11, "Nmod"]], [[7, "Cont"], [8, "eCoo"]], [[7, "mPunc"]]]
  ],
  "con": [
    ["TOP", [["IP", [["NP", [["NT", ["2021年"]]]], ["NP", [["NR", ["HanLPv2.1"]]]], ["VP", [["PP", [["P", ["为"]], ["NP", [["NN", ["生产"]], ["NN", ["环境"]]]]]], ["VP", [["VV", ["带来"]], ["NP", [["ADJP", [["NP", [["ADJP", [["JJ", ["次"]]]], ["NP", [["NN", ["世代"]]]]]], ["ADVP", [["AD", ["最"]]]], ["VP", [["JJ", ["先进"]]]]]], ["DEG", ["的"]], ["NP", [["QP", [["CD", ["多"]]]], ["NP", [["NN", ["语种"]]]]]], ["NP", [["NR", ["NLP"]], ["NN", ["技术"]]]]]]]]]], ["PU", ["。"]]]]]],
    ["TOP", [["IP", [["NP", [["NP", [["NP", [["NR", ["英"]]]], ["NP", [["NN", ["首相"]]]]]], ["CC", ["与"]], ["NP", [["NR", ["特朗普"]]]]]], ["VP", [["VP", [["VV", ["通"]], ["NP", [["NN", ["电话"]]]]]], ["VP", [["VV", ["讨论"]], ["NP", [["NR", ["华为"]], ["CC", ["与"]], ["NR", ["苹果"]], ["NN", ["公司"]]]]]]]], ["PU", ["。"]]]]]]
  ]
}

The output above is represented as a JSON dictionary in which each key is a task name and each value is the output of the corresponding task. For each output, if it is a nested list, it contains multiple sentences; otherwise it is a single sentence.
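As a sketch of consuming such an output, using a hand-copied fragment of the example above, tokens can be zipped with their tags, and entity surface forms recovered via the exclusive end offsets:

```python
# A fragment of the example output, copied by hand for illustration.
doc = {
    "tok/fine": [["英", "首相", "与", "特朗普", "通", "电话", "。"]],
    "pos/ctb": [["NR", "NN", "CC", "NR", "VV", "NN", "PU"]],
    "ner/msra": [[["特朗普", "PERSON", 3, 4]]],
}

# Pair each token with its part-of-speech tag, sentence by sentence.
tagged = [list(zip(toks, tags))
          for toks, tags in zip(doc["tok/fine"], doc["pos/ctb"])]

# Recover each entity's surface form by slicing the token list with
# its (begin, end) offsets; end is exclusive.
entities = []
for sent_toks, sent_ents in zip(doc["tok/fine"], doc["ner/msra"]):
    for entity, etype, begin, end in sent_ents:
        entities.append(("".join(sent_toks[begin:end]), etype))
```

The same pattern works for any pair of token-aligned tasks, since all offsets refer to the same tokenization.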

We adopt the following naming convention for NLP tasks; each name consists of 3 letters.

Naming Convention

tok: Tokenization. Each element is a token.

pos: Part-of-Speech Tagging. Each element is a tag.

lem: Lemmatization. Each element is a lemma.

fea: Features of Universal Dependencies. Each element is a feature.

ner: Named Entity Recognition. Each element is a tuple of (entity, type, begin, end), where ends are exclusive offsets.

dep: Dependency Parsing. Each element is a tuple of (head, relation), where head index 0 denotes ROOT.

con: Constituency Parsing. Each list is a bracketed constituent.

srl: Semantic Role Labeling. Similar to ner, each element is a tuple of (arg/pred, label, begin, end), where the predicate is labeled as PRED.

sdp: Semantic Dependency Parsing. Similar to dep, however each token can have any number (including zero) of heads and corresponding relations.

amr: Abstract Meaning Representation. Each AMR graph is represented as a list of logical triples. See the AMR guidelines.
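The dep encoding can be decoded into (dependent, head, relation) triples as a sketch. Head value 0 points at the artificial ROOT, so a head h > 0 refers to token h - 1. The relation values below are illustrative, not model output:

```python
tokens = ["英", "首相", "与", "特朗普", "通", "电话", "。"]
# (head, relation) pairs in dep style; 0 means ROOT, h > 0 means token h-1.
dep = [[2, "nn"], [5, "nsubj"], [4, "cc"], [2, "conj"], [0, "root"],
       [5, "dobj"], [5, "punct"]]

triples = []
for i, (head, rel) in enumerate(dep):
    head_word = "ROOT" if head == 0 else tokens[head - 1]
    triples.append((tokens[i], head_word, rel))
```

The sdp format decodes the same way, except that each position holds a list of such (head, relation) pairs, which may be empty.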


When there are multiple models performing the same task, their keys are suffixed with a secondary identifier. For example, tok/fine and tok/coarse denote a fine-grained tokenization model and a coarse-grained one, respectively.