
Summarization in NLP: Get straight to the point

1 min read



Sequence-to-sequence models have introduced new approaches to abstractive summarization; however, they have two shortcomings:

  • liable to reproduce factual details inaccurately

  • tend to repeat themselves

The authors propose a novel architecture that builds on the standard sequence-to-sequence attentional model.

First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
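The mixing step described above can be sketched numerically: the final output distribution is a weighted blend of the generator's vocabulary distribution (weight p_gen) and the attention distribution scattered onto the source tokens' vocabulary ids (weight 1 − p_gen). The function name, the toy vocabulary, and the assumption that all source tokens are in-vocabulary are illustrative, not from the paper's code.

```python
import numpy as np

def final_distribution(p_vocab, attention, p_gen, src_ids):
    """Blend generator and copy distributions (pointer-generator sketch).

    p_vocab:   generator's distribution over the vocabulary
    attention: attention weights over the source tokens
    p_gen:     scalar in [0, 1] from the generation-probability switch
    src_ids:   vocabulary id of each source token (assumed in-vocab here)
    """
    # Generator part: scale the vocabulary distribution by p_gen.
    final = p_gen * p_vocab
    # Copy part: scatter the remaining mass onto the source words' ids.
    np.add.at(final, src_ids, (1.0 - p_gen) * attention)
    return final

p_vocab = np.array([0.5, 0.3, 0.2])   # toy vocabulary of 3 words
attention = np.array([0.7, 0.3])      # attention over 2 source tokens
src_ids = np.array([2, 0])            # source tokens' vocabulary ids
dist = final_distribution(p_vocab, attention, p_gen=0.8, src_ids=src_ids)
```

Because both inputs are probability distributions and the weights sum to one, the blended result is still a valid distribution; source words simply receive extra mass, which is what makes copying rare or out-of-vocabulary words possible.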

Second, we use coverage to keep track of what has been summarized, which discourages repetition.
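A minimal sketch of the coverage idea: the coverage vector is the running sum of attention over previous decoder steps, and the coverage loss at each step is the overlap (elementwise minimum) between the current attention and coverage, so re-attending to already-covered tokens is penalized. The function name and the two-token toy example are illustrative assumptions.

```python
import numpy as np

def coverage_loss(attentions):
    """Sum of per-step overlaps between attention and accumulated coverage."""
    coverage = np.zeros_like(attentions[0])
    total = 0.0
    for a in attentions:
        # Penalize attention mass that falls on already-covered positions.
        total += np.minimum(a, coverage).sum()
        coverage = coverage + a
    return total

# Repeatedly attending to the same token incurs a loss...
repeat = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
# ...while spreading attention across the source does not.
spread = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
```

In training this term is added to the primary loss with a small weight, nudging the decoder away from the repetition failure mode listed above.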

We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state of the art by at least 2 ROUGE points.


graph TB

sum[Summarization] --> abs[Abstractive]
sum --> ext[Extractive]

Frontiers in Natural Language Processing Expert Responses.