
An introduction to natural language processing

28 Jun, 2017
Dan Ward · 4 min read


Have you ever wondered how artificially intelligent robots are able to simulate real-life conversations with humans?

Welcome to the world of natural language processing: the computing techniques that help machines understand and produce natural language.

The origins of natural language processing (NLP) date back to the 1950s. Computer scientists began experimenting with hand-written rules that could program a machine to produce a specific output, based on the input it received.

In 1954, Georgetown University and IBM conducted the Georgetown–IBM experiment, which became the first successful case of machine translation. Equipped with a vocabulary of around 250 words and six grammatical rules, the computer could translate a number of Russian sentences into proper English.
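The spirit of those early rule-based systems can be illustrated with a toy sketch: translation by dictionary lookup over a hand-written word list. This is not the actual Georgetown/IBM program; the miniature lexicon below is entirely hypothetical.

```python
# Toy rule-based translation: look each word up in a hand-written
# lexicon, leaving unknown words untouched. (Illustrative only; the
# word list is hypothetical, not from the Georgetown/IBM experiment.)

LEXICON = {
    "eto": "this is",
    "horosho": "good",
    "mir": "peace",
}

def translate(sentence: str) -> str:
    """Translate word by word using the hand-written lexicon."""
    words = sentence.lower().split()
    return " ".join(LEXICON.get(w, w) for w in words)

print(translate("eto horosho"))  # -> "this is good"
```

Real systems of the era layered grammatical reordering rules on top of lookups like this, which is why they needed so many hand-written rules to cover even a narrow domain.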

Other experiments at the time included:

  • SHRDLU, a computer program that could build and move virtual blocks using natural language commands
  • ELIZA, the first chatbot capable of simulating a natural conversation

The most exciting developments in natural language processing have come about in the last 30 years, driven by advances in machine learning. Machine learning uses complex algorithms to help computers learn, rather than just following specific rules, which would require a vast number of hand-written inputs to cater for every possible variation.

For example, when you manually tag friends in images on Facebook, the system learns to attach certain names to faces so that it can automatically prompt you in the future. When you watch Netflix or shop on Amazon, the system learns from your preferences so it can offer new suggestions.

Machine learning has paved the way for major improvements in natural language processing, which form part of many applications we use today.

NLP in today’s world

You likely encounter natural language processing on a daily basis, such as through:

  • Language translation (e.g. the globe icon on Twitter and the ‘Translate’ features in Facebook and most search engines)
  • Automatic summarisation (programs capable of extracting a summary from a body of text)
  • Text classification (such as the filters in mail programs that automatically detect spam, trained on messages labelled “spam” or “ham”)
  • Spelling and grammar checks (which tell you if you have incorrectly used a word or phrase)
  • Information extraction (features in online software that can suggest events to add to your calendar or alert you to discussions surrounding topics of interest)
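The “spam or ham” classification mentioned above can be sketched with a tiny naive Bayes classifier, a common baseline for this task. The training messages below are hypothetical, and a real filter would use far more data and preprocessing:

```python
from collections import Counter
import math

# Minimal naive Bayes spam/ham sketch. Training examples are made up
# for illustration; real filters train on thousands of labelled messages.

TRAINING = [
    ("win a free prize now", "spam"),
    ("free money claim now", "spam"),
    ("meeting at noon today", "ham"),
    ("see you at the meeting", "ham"),
]

def train(examples):
    """Count word occurrences per label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the label with the highest (smoothed) log-probability."""
    vocab = len({w for c in counts.values() for w in c})
    scores = {}
    for label, words in counts.items():
        total = sum(words.values())
        # add-one (Laplace) smoothing so unseen words don't zero the score
        scores[label] = sum(
            math.log((words[w] + 1) / (total + vocab)) for w in text.split()
        )
    return max(scores, key=scores.get)

counts = train(TRAINING)
print(classify(counts, "claim your free prize"))  # -> "spam"
```

The system “learns” only from the labelled examples it is shown, which is exactly the shift from hand-written rules to machine learning described above.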

In marketing, natural language processing (combined with machine learning and big data) is producing highly useful applications for building deeper customer relationships.

For example, sentiment analysis uses NLP and machine learning to trawl huge data sets and return a report on how customers feel about and rate a particular brand. Otherwise known as polarity detection, the computer program learns to determine whether a text is positive, negative or neutral in order to draw a conclusion.
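A very simple form of polarity detection can be sketched with a lexicon-based approach: count words from positive and negative word lists. The word lists below are small hypothetical stand-ins; production systems learn these weights from data rather than hard-coding them:

```python
# Lexicon-based polarity sketch. The word lists are hypothetical
# examples; real sentiment analysers use learned models or much
# larger curated lexicons.

POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def polarity(text: str) -> str:
    """Score a text as positive, negative or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this great product"))   # -> "positive"
print(polarity("terrible service very poor"))  # -> "negative"
```

Note how brittle this is: sarcasm (“oh, great, another delay”) would be scored as positive, which is exactly why the nuances discussed later require machine learning rather than fixed word lists.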

Advancements in NLP are also making it possible for organisations to develop chatbots which can act as real-life customer service reps and provide information, answer questions, and facilitate sales with customers.

NLP and chatbots

We haven’t quite reached the level of HAL 9000 from 2001: A Space Odyssey, who could recognise faces, understand feelings, lip-read, and lie.

But question-answer chatbots are becoming incredibly life-like, such as Apple’s Siri and IBM’s Watson. In cases where a chatbot has a specific goal (such as answering general customer queries or matching requests to a product range for suggestions), the results are increasingly successful.

And while machines will only ever be able to simulate understanding, rather than actually understand, programmers are finding new ways to develop algorithms that cover the many complicated elements of human speech (think back to high school English lessons on phonology, morphology, syntax, semantics, and pragmatics).

The enemy of NLP lies in the nuances of our language, such as ambiguity, sarcasm and irony. As humans, we understand each other because we can infer meaning. Teaching a bot to achieve this requires a whole new level of machine learning.

The future of NLP will seek to answer the question: can machines be taught to derive meaning from language, and use that understanding to answer questions in a relevant and insightful way?

For more insights into NLP and the rise of chatbots, download our whitepaper: