Chatbots Using Natural Language Processing

October 16, 2020
Have you ever been caught in a situation where you go to a webpage with an unfriendly interface, stuffed with too much information, while the little piece of information you are looking for is hidden somewhere in the deepest sub-category? And you read a lot of irrelevant text just to find the right page and section? Unfortunately, some websites must contain a ton of information, but the way you interact with it can be simplified.

Chatbots and voice assistants play a big role in simplifying communication with our devices. It is human nature to communicate by voice, and with the expansion of Artificial Intelligence, particularly Natural Language Processing (NLP), engineers can create chatbots that understand human language, as well as the context of a conversation, and give relevant responses. In this article, I will show you how easy it is to train an NLP-based chatbot and integrate it into your application.

What are NLP and CUI? 🤨

There are plenty of branches of Artificial Intelligence out there, but in this article we will exclusively cover Natural Language Processing, because it is crucial for Intelligent Chatbots and it dramatically improves communication with them.

By definition:

Natural Language Processing is the ability of a computer to process the same language - spoken or written - that humans use in normal discourse.

A Conversational User Interface (CUI) is a digital interface where conversation is the main tool of interaction with the user. A CUI is more social and natural insofar as the user “messages”, “asks”, “agrees”, or “disagrees” instead of “navigates” or “browses.”

Why do we need NLP? 🤔

So why do we need Natural Language Processing in chatbots? To put it simply, NLP makes communication easier and more natural for the user, and therefore it attracts more people and makes technology more useful and attractive for businesses and for users in general. The role that NLP plays in chatbot implementation is shown in the picture below.

Botanalytics, Who’s doing it best: The top 10 industries for chatbots, [cit. 2019-05-16].

Different types of chatbots 🤖

The basic type of chatbot is the menu/button-based chatbot. It is built on decision-tree hierarchies presented to the user in the form of buttons: the user makes several selections by choosing buttons that lead to the answer. This type of chatbot is good for answering frequently asked questions and is the easiest to implement, but when scenarios are more complex, with more variables and harder-to-predict user reactions, these chatbots usually fail.

A more advanced type is the keyword-recognition-based chatbot. The main difference is that it can listen to what users type, recognize keywords, and give an appropriate response based on this input.

The most advanced type of chatbot these days is the contextual chatbot. In addition to Natural Language Processing, contextual chatbots also utilize another branch of Artificial Intelligence known as Machine Learning (ML). ML helps the bot remember conversations with specific users, learn from them, gather data, and improve automatically over time. This dramatically improves communication with a chatbot: it shortens conversations and makes interaction easier once the bot starts to understand users’ needs.

What tools are available for NLP? 🔨 🔧 🔩

There are tools shipped as services for working with Artificial Intelligence, including NLP. Most of them have APIs for integration into your application and some kind of tool, usually a UI, to create, train, and manage your NLP instance.

The most well-known are:

  • IBM Watson Assistant
  • Google Dialogflow
  • Microsoft LUIS
  • Amazon Lex

All of these solutions would be suitable, but in this article I would like to concentrate on IBM Watson, because it has a free Lite plan, a user-friendly interface, and a lot of SDKs. To me, it also appeared the most straightforward and easiest to start with.

How to train the NLP model in IBM Watson

Now let's see how we can train and integrate the assistant. First of all, have a look at the high-level architecture of any system which works with IBM Watson.

IBM, IBM Cloud documentation, [cit. 2019-05-16].

As you can see, you can integrate it with popular messaging platforms or any other application, since IBM has SDKs for many programming languages (including JavaScript). Your client connects to Watson Assistant, an IBM Cloud service, which in turn connects to other IBM services to provide a set of tools for chatbot creation. As a developer, you only need to care about connecting to Watson Assistant and calling its endpoints; this process will be described in the next section. For now, let's concentrate on creating a chatbot instance and training it.

Before we begin, I will introduce you to the key components of an IBM NLP chatbot: intents, entities, and dialogs.

The names are more or less self-explanatory, but to make things clear here are a few examples of intents:

  1. get_information_about_weather
  2. buy_a_smartphone
  3. write_an_article

You are the one who defines the intents, so you can call them whatever you want, but you have to give Watson examples so it can learn which questions should be associated with which intent. For example, the chatbot can learn that the question "Can I buy a smartphone?" maps to the intent "buy_a_smartphone". The idea is that you give several examples, and once you have given enough, the chatbot learns to recognize other questions that carry the same intent automatically.
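To make the idea concrete, here is a toy illustration of example-based intent matching. This is not how Watson works internally (Watson trains a statistical classifier on your examples); it is just a minimal sketch of the principle that input is mapped to the intent whose examples it resembles most, here using naive word overlap:

```javascript
// Toy sketch of example-based intent matching (NOT Watson's actual
// algorithm): each intent has example sentences, and new input is
// assigned to the intent whose example shares the most words with it.
const intentExamples = {
  buy_a_smartphone: ['can i buy a smartphone', 'i want a new phone'],
  get_information_about_weather: ['what is the weather like', 'will it rain today']
};

function classifyIntent(input) {
  const words = new Set(input.toLowerCase().split(/\W+/));
  let best = null;
  let bestScore = 0;
  for (const [intent, examples] of Object.entries(intentExamples)) {
    for (const example of examples) {
      // Count how many words of the example appear in the input.
      const score = example.split(/\W+/).filter((w) => words.has(w)).length;
      if (score > bestScore) {
        bestScore = score;
        best = intent;
      }
    }
  }
  return best;
}
```

A real classifier generalizes far better than word overlap, which is exactly why you hand the examples to Watson instead of writing rules yourself.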

The entities are a bit simpler. Examples of entities are: laptop, weather, monitor, Ikea.

You can teach the chatbot to recognize synonyms of "laptop", and if someone mistypes a particular word, Watson should understand it anyway and map it to the correct entity.
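The synonym mapping can be pictured with a tiny sketch. The data below is hypothetical (in Watson you enter synonyms in the UI, not in code); the point is simply that every surface form, including misspellings you register as synonyms, resolves to one canonical entity:

```javascript
// Hypothetical synonym table: each entity lists its surface forms,
// including a common misspelling registered as a synonym.
const entitySynonyms = {
  laptop: ['laptop', 'notebook', 'latop'],
  monitor: ['monitor', 'screen', 'display']
};

// Resolve a user's word to the canonical entity, or null if unknown.
function resolveEntity(word) {
  const w = word.toLowerCase();
  for (const [entity, synonyms] of Object.entries(entitySynonyms)) {
    if (synonyms.includes(w)) {
      return entity;
    }
  }
  return null;
}
```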

The third key component is the dialog. The dialog defines the way the chatbot should behave: the way it responds when it recognizes a particular intent and/or entity, and the actions it takes after receiving user input.

Here are the steps you should take to create an instance of Watson Assistant:

  1. Register in the IBM Cloud.
  2. Find the Watson service through Catalog → Watson Assistant.
  3. Choose an assistant plan, name it, choose a region, etc., and click Create.
  4. You will be redirected to the Watson Assistant page, where there is a credentials tutorial that you can read later. For now, just press Launch Watson Assistant.
  5. It will redirect you to a page where you can create an assistant and teach it (create skills, intents, entities, etc.). The first time you do this, you will go through a tutorial which explains what exactly each term means. But for now, everything can probably be understood from the examples below:
  6. Click Launch Watson Assistant (you should be redirected to the page shown below), then click on the icon that I highlighted.
  7. There you will see your chatbot skills. Click the first one, or create a skill with the "Create skill" button if none exists yet.
  8. Once you open the skill, you can see Intents, Entities, Dialog, and other options. First of all, you need to define intents. In this demo, I defined two simple intents. The first one covers the case where the user wants to return an item to an e-shop but doesn't know the conditions: once they ask something regarding the return of an item (which means withdrawing from the contract), the bot will identify it as the "#withdraw_from_contract" intent. The second one is just a "#whats_up" intent, so that the bot responds politely if someone asks how it is doing. The UI is really simple: you only need to name the intent and give some input examples. The recommended number of examples is more than 5, and the more examples you provide, the better the intent is identified.
  9. The next thing you need to do is define entities. For this example, I just need the bot to understand whether the user responds positively or negatively (yes or no), so I defined the entities @positive_answer and @negative_answer. You can also add synonyms, and Watson suggests different synonyms automatically.
  10. Now our chatbot is able to recognize whether the user wants to know if the chatbot is fine or wants to return an item in the e-shop, and it can identify a positive and a negative answer. This should be enough to build our first little dialog. Sooo, go to the Dialog section.
  11. In the Dialog section you can create nodes and subnodes. Think of a node as one step in the dialog. You can re-order nodes, create subnodes, and define the way the bot jumps from one node to another. Let me give you an example.
    As you see, it has a tree structure. Each node activates when user input matches the conditions of that node (conditions are defined with intents and entities). For example, here we have a "What's up" node which activates when the bot recognizes the "#whats_up" intent; otherwise the bot skips it.
    So once Watson receives input, it:
  • goes through the tree from top to bottom and checks every node against its predefined conditions;
  • does not check subnodes if the parent node didn't match its conditions;
  • once a parent node's condition matches, the parent node responds and waits for the user's reply;
  • after the user replies to a parent node that is waiting for a response, the bot checks this node's children first; if no child node meets its conditions, it starts checking from the top of the tree again.
  12. Now create a node, or open any of the existing nodes. A side panel will open, where you define the particular step.

The definition of a node looks like this: if the assistant recognizes (your conditions), the assistant responds (with some response), and then it either waits for a reply or jumps to another specified node.
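The traversal rules above can be sketched in a few lines. The node structure below is hypothetical (a real skill is configured in the Watson UI, not in code); the sketch only illustrates the matching order: children of the last matched node first, then the whole tree top to bottom.

```javascript
// Hypothetical dialog tree -- a real skill is configured in the Watson
// UI, not in code. Conditions are intents (#) or entities (@).
const tree = [
  { condition: '#whats_up', response: "I'm fine, thanks!" },
  {
    condition: '#withdraw_from_contract',
    response: 'Do you want to return an item?',
    children: [
      { condition: '@positive_answer', response: 'Here are the conditions...' },
      { condition: '@negative_answer', response: 'OK, how else can I help?' }
    ]
  }
];

// Find the node that should respond: children of the last matched node
// get priority; otherwise scan the tree from top to bottom. Subtrees of
// non-matching parents are never entered.
function findNode(nodes, recognized, lastMatched) {
  if (lastMatched && lastMatched.children) {
    for (const child of lastMatched.children) {
      if (recognized.includes(child.condition)) return child;
    }
  }
  for (const node of nodes) {
    if (recognized.includes(node.condition)) return node;
  }
  return null;
}
```

On a second turn, passing the previously matched node as lastMatched makes the bot prefer the yes/no children, which mirrors the behavior described in the steps above.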

Now let's use the "Try it" button and see how it works. I will intentionally use complicated sentences that I didn't define in the examples.

As you can see, it was able to understand everything correctly even though I phrased it in a complicated way.

How to integrate it into your application

I will show you how to do it in Node.js, as it's extremely easy. Just follow these steps:

  1. Create an Express app.
  2. Install body-parser and ibm-watson. They are runtime dependencies, so save them with --save rather than --save-dev:

{% c-block language="js" %}
npm install body-parser ibm-watson --save
{% c-block-end %}

  3. Make the app listen on whichever port you want (in our case 3000):

{% c-block language="js" %}
const express = require('express');
const app = express();

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
{% c-block-end %}

  4. Import body-parser, AssistantV2, and IamAuthenticator, and register the JSON body parser (without it, req.body would be undefined in the handlers below):

{% c-block language="js" %}
const bodyParser = require('body-parser');
/* parser for POST request bodies */
const AssistantV2 = require('ibm-watson/assistant/v2');
/* watson sdk */
const { IamAuthenticator } = require('ibm-watson/auth');

app.use(bodyParser.json());
{% c-block-end %}

  5. Define the credentials that you will need:

{% c-block language="js" %}
const credentials = {
  API_KEY: 'your_api_key',
  URL: '',
  VERSION: '2019-02-28',
  ASSISTANT_ID: 'your_assistant_id'
};
{% c-block-end %}

  6. Create the authenticator:

{% c-block language="js" %}
const authenticator = new IamAuthenticator({
  apikey: credentials.API_KEY
});
{% c-block-end %}

  7. Create the assistant, pointing it at your service URL:

{% c-block language="js" %}
const assistant = new AssistantV2({
  version: credentials.VERSION,
  authenticator: authenticator,
  serviceUrl: credentials.URL
});
{% c-block-end %}

  8. Create a GET method that will create a session and return you a session_id:

{% c-block language="js" %}
app.get('/session_id', function (req, res) {
  assistant.createSession(
    { assistantId: credentials.ASSISTANT_ID },
    function (error, response) {
      if (error) {
        return res.send(error);
      } else {
        return res.send(response);
      }
    }
  );
});
{% c-block-end %}

  9. Create a POST method that will send a message to IBM Watson and return you an object with the answer. Make sure you add sessionId and message to the request body.

{% c-block language="js" %}
app.post('/message', function (req, res) {
  if (!req.body.sessionId || !req.body.message) {
    return res.status(400).send('bad request');
  }
  var payload = {
    assistantId: credentials.ASSISTANT_ID,
    sessionId: req.body.sessionId,
    input: {
      message_type: 'text',
      text: req.body.message
    }
  };
  assistant.message(payload, function (err, data) {
    if (err) {
      return res.status(err.code || 500).json(err);
    }
    return res.json(data);
  });
});
{% c-block-end %}

And that's it! Now you are able to connect to your chatbot engine through your application. First you get a session ID through the /session_id endpoint, and then use this sessionId when sending messages through the /message endpoint.

Let’s see how it works in Postman. I’m sending a POST request to the /message endpoint with 'sessionId' and 'message' in the body, and the response is:

The response contains all the information you need: the intents and entities that Watson recognized (with a confidence value) and the response that was defined in the matched node. Now, as a developer, you can do whatever you want with this information: you can redirect the user, change content on your page, or simply display the answer.
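For example, a small helper can pull the most confident intent out of the response. The shape assumed below (an intents array with `intent` and `confidence` fields inside `output`) follows the Assistant v2 message response, and the 0.5 threshold is an arbitrary choice for this sketch:

```javascript
// Sketch of a helper that extracts the most confident intent from the
// `output` object of a Watson Assistant v2 message response. Returns
// null when nothing clears the (arbitrary) confidence threshold.
function topIntent(output, threshold = 0.5) {
  const intents = output.intents || [];
  const best = intents.reduce(
    (a, b) => (b.confidence > a.confidence ? b : a),
    { intent: null, confidence: 0 }
  );
  return best.confidence >= threshold ? best.intent : null;
}
```

Branching on the result (redirect, change content, or just display the node's text) is then an ordinary if/else in your client code.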

For this example, I would wait for the positive or negative answer, give the user a relevant response, and redirect them to the right section of the terms and conditions page.

The examples I showed here are quite primitive, but their main purpose was to show how an intelligent assistant works. Once you create a lot of different intents, entities, and dialog paths, this bot can serve as a replacement for many types of customer support, or just as a fun feature, and in case it doesn't know the answer, it can redirect the user to a real human.

I hope I have shown you how easily this can be done, and that using some types of artificial intelligence in your app doesn't require any special knowledge.

What do you think? Leave me a message in the contact form! 😃

Boris is a Junior Web App Developer who mainly works in JavaScript/TypeScript, React and MobX. He has a head full of fresh knowledge and is keen on applying it every day. In programming, he appreciates the combination of creativity and engineering and the possibility to see results of his work right away. He likes to play tennis and to spend time with his friends.

