2.14 Project: Named Entity Recognition#

In this lesson, we’ll see how to extract relevant information in the form of entities from unstructured text.

What is Named Entity Recognition?#

Named Entity Recognition (NER) is the NLP task of identifying key information (entities) in text. An entity is a contiguous span of words in a document that refers to a single real-world thing. Some examples of entities are “Fabio”, “New York”, and “September 1st, 2022”. Usually, entities are classified into categories like “Person”, “City”, and “Date”.

As an example, if you pass the text “Hello! I’m Max. I’d like to book a flight to South Korea.” to a NER model, you’ll get something like the following.

Hello! I'm [Max PER]. I'd like to book a flight to [South Korea LOC].

The word “Max” has been classified as a person entity, whereas the words “South” and “Korea” form a location entity.

NER has several use cases, such as:

  • It’s one of the steps typically performed to extract knowledge graphs from texts. First, entities are extracted; then, relations between them are inferred from the text.

  • In chatbots, NER extracts entities from the user utterances, which may be required to perform the correct task. For example, the utterance “I’d like to book a flight to Rome” would be classified as a “Flight booking request” with the location entity “Rome”. Therefore, the chatbot should look for flights to Rome specifically.
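The chatbot flow above can be sketched in a few lines. Note that the intent name, the handler function, and the entity dictionary below are hypothetical illustrations, not part of any chatbot framework:

```python
# Hypothetical sketch of a chatbot backend: an intent classifier has
# already labeled the utterance, and a NER model has extracted its entities.
def handle_intent(intent, entities):
    """Route a classified intent and its extracted entities to a handler."""
    if intent == "flight_booking":
        # the LOC entity tells us where the user wants to fly
        return f"Searching flights to {entities.get('LOC')}..."
    return "Sorry, I can't help with that."

# e.g. parsed from "I'd like to book a flight to Rome"
print(handle_intent("flight_booking", {"LOC": "Rome"}))
# Searching flights to Rome...
```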

NER Models#

In the Hugging Face Hub, there are several pre-trained NER models available. For example, dslim/bert-base-NER is a BERT model fine-tuned specifically for NER. It has been trained to recognize four types of entities: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC).

How NER Models work#

NER models work by tokenizing the input text and then classifying each token in it. A popular scheme for encoding entity information in texts is the BIO (i.e. Beginning, Inside and Outside) scheme.

It works by marking each token with one of three kinds of tags: “O” (i.e. “outside”) if the token is not part of any entity; “B-PER”, “B-LOC”, and so on (i.e. “beginning person”, “beginning location”) if the token is the beginning of a person or location entity, respectively; and “I-PER”, “I-LOC”, and so on (i.e. “inside person”, “inside location”) if the token continues an entity that has already begun.

Suppose we are applying a NER model to the sentence “I’m Max and I’d like to book a flight to South Korea” and that the tokenization splits the sentence into its words. A NER model that follows the BIO scheme would classify the tokens as follows:

  • “I” -> “O” (no entity)

  • “‘m” -> “O” (no entity)

  • “Max” -> “B-PER” (start of a person entity)

  • “and” -> “O” (no entity)

  • “I”, “’d”, “like”, “to”, “book”, “a”, “flight”, “to” -> “O” (no entity)

  • “South” -> “B-LOC” (start of a location entity)

  • “Korea” -> “I-LOC” (continuation of a location entity)

Consequently, the text would have a person entity in “Max” and a location entity in “South Korea”.
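Decoding BIO tags back into entity spans takes only a few lines of Python. Here is a minimal sketch (the decode_bio helper below is our own illustration, not part of any library):

```python
# Minimal BIO decoder: groups (token, tag) pairs into (entity_text, label) spans.
def decode_bio(tokens, tags):
    entities = []
    current_tokens, current_label = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # a new entity begins: close the previous one, if any
            if current_tokens:
                entities.append((" ".join(current_tokens), current_label))
            current_tokens, current_label = [token], tag[2:]
        elif tag.startswith("I-") and current_tokens:
            # the current entity continues
            current_tokens.append(token)
        else:
            # an "O" tag ends any open entity
            if current_tokens:
                entities.append((" ".join(current_tokens), current_label))
            current_tokens, current_label = [], None
    if current_tokens:
        entities.append((" ".join(current_tokens), current_label))
    return entities

tokens = ["I", "'m", "Max", "and", "South", "Korea"]
tags   = ["O", "O", "B-PER", "O", "B-LOC", "I-LOC"]
print(decode_bio(tokens, tags))
# [('Max', 'PER'), ('South Korea', 'LOC')]
```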

NER with Python#

Let’s see how to use a NER model in a few lines of code with Python.

Install and Import Libraries#

First, we install and import the necessary libraries.

pip install transformers spacy
# NER model
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# show NER results
from spacy import displacy

spacy is a popular Python library for NLP. It includes the displacy visualizer, which provides nice visualizations of entities in texts.

Download and Use the NER Model#

Let’s download the dslim/bert-base-NER NER model and tokenizer and load them into a ner pipeline.

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
pipe = pipeline("ner", model=model, tokenizer=tokenizer)

Then, we try the NER pipeline with a sample text.

text = "Hello! I'm Max. I'd like to book a flight to South Korea."

ner_results = pipe(text)
print(ner_results)
[{'entity': 'B-PER', 'score': 0.99546933, 'index': 6, 'word': 'Max', 'start': 11, 'end': 14}, {'entity': 'B-LOC', 'score': 0.9997621, 'index': 17, 'word': 'South', 'start': 45, 'end': 50}, {'entity': 'I-LOC', 'score': 0.99933356, 'index': 18, 'word': 'Korea', 'start': 51, 'end': 56}]

The model returns the list of tokens that haven’t been classified with an “O” tag, that is, the tokens that are the start or the continuation of an entity.
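Note that the start and end fields are character offsets into the original text, so each entity string can be recovered by slicing. A quick sketch using the sample output above (scores and indices omitted for brevity):

```python
# The start/end offsets from the pipeline output index into the original
# string, so slicing recovers each entity's surface form.
text = "Hello! I'm Max. I'd like to book a flight to South Korea."

# sample entities as returned by the pipeline above (extra fields omitted)
ner_results = [
    {"entity": "B-PER", "start": 11, "end": 14},
    {"entity": "B-LOC", "start": 45, "end": 50},
    {"entity": "I-LOC", "start": 51, "end": 56},
]

for ent in ner_results:
    print(ent["entity"], "->", text[ent["start"]:ent["end"]])
# B-PER -> Max
# B-LOC -> South
# I-LOC -> Korea
```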

Visualize NER Results#

The displacy visualizer requires entity data to be presented in a different format from the one returned by the NER pipeline. So, let’s write a simple from_ner_results_to_displacy function that formats the entity data differently.

def from_ner_results_to_displacy(text, ner_results):
  d_result = {"text": text, "title": None}
  ents = []
  current_entity = None
  for ent in ner_results:
    if "B-" in ent["entity"]:
      # a new entity begins: store the previous one, if any
      if current_entity is not None:
        ents.append(current_entity)
      entity_label = ent["entity"][2:]  # e.g. "B-PER" -> "PER"
      current_entity = {
        "label": entity_label,
        "start": ent["start"],
        "end": ent["end"]
      }
    if "I-" in ent["entity"]:
      # the current entity continues: extend its end offset
      current_entity["end"] = ent["end"]
  # store the last entity, if any
  if current_entity is not None:
    ents.append(current_entity)
  d_result["ents"] = ents
  return d_result

This is how our entity data is now structured.

text = "Hello! I'm Max. I'd like to book a flight to South Korea."
ner_results = pipe(text)
d_displacy = from_ner_results_to_displacy(text, ner_results)
print(d_displacy)
{'text': "Hello! I'm Max. I'd like to book a flight to South Korea.",
 'title': None,
 'ents': [{'label': 'PER', 'start': 11, 'end': 14}, {'label': 'LOC', 'start': 45, 'end': 56}]}

Finally, we feed this data to the displacy.render function, which displays the results.

displacy.render(d_displacy, style="ent", manual=True)
Hello! I'm [Max PER]. I'd like to book a flight to [South Korea LOC].

Code Exercises#


What is the task of Named Entity Recognition (NER) in NLP?

  1. Generating a summary of a text.

  2. Identifying topics in a text.

  3. Identifying key information (entities) in text.

  4. Extracting relationships between entities.

What is not a popular real-world application of Named Entity Recognition?

  1. Extracting entities from user utterances in chatbots.

  2. Building knowledge graphs.

  3. Sentiment analysis.

Which of the following is not a typical entity extracted by a NER model?

  1. Person entity.

  2. Location entity.

  3. Organization entity.

  4. Noun entity.

What is the name of a popular scheme for classifying tokens in NER?

  1. OIB scheme.

  2. BIO scheme.

  3. SIO scheme.

  4. ISO scheme.

Questions and Feedback#

Have questions about this lesson? Would you like to exchange ideas? Or would you like to point out something that needs to be corrected? Join the NLPlanet Discord server and interact with the community! There’s a specific channel for this course called practical-nlp-nlplanet.