Shivam Mishra
10 min read · Mar 11, 2021


Custom Bots using RASA Framework and Java (Part 2)

Welcome to the second blog of the Rasa chatbot project. Let's recapitulate what we covered in the first part:

  1. Introduction to chatbots
  2. Content Outline
  3. History Of Chatbots
  4. Jar Files
  5. Making DDF
  6. Further resources

Here is the link to the first part, in case you have not gone through it. Make sure you check it out; I can assure you that it's worth it ; )

Here is an outline of what we will cover in this blog:

  • About RASA
  • RASA Framework Description and tweaks
  • Updating RASA Files
  • Demonstration of Running Files
  • Further Read

Let’s start without further delay! ( :

About RASA : The name Rasa comes from the Latin phrase tabula rasa, which means "blank slate". True to the phrase, Rasa is an open-source machine learning framework, written in Python, that ships with a skeletal default framework for automated text- and voice-based conversations. Apart from launching the framework from the command prompt, you can also run it on a dedicated platform called Rasa X, a tool designed to make it easier to deploy and improve Rasa-powered assistants by learning from real conversations.

Rasa X can also be used to develop an assistant from scratch, but the main goal behind it was to build a new UI tool that makes it as easy as possible for any assistant to improve by learning from real conversations. Thus, the conventional approach is to deploy an assistant on the platform and let it learn through this conversation-driven approach.

Further insights can be gained from the references listed at the end of this blog.

RASA Framework Description and tweaks : Let's walk through the steps involved in downloading and executing the framework.

  • Create a virtual environment : Trust me, when you have to deal with packages, modules or frameworks that require a specific version of the language (Python) or of some other module, the option of creating a virtual environment comes as a blessing. Virtual environments are analogous to different rooms in a house, where each room keeps the modules required for one task or project.

Open the Anaconda Prompt as administrator (not strictly needed, but taking precautions never hurts, right!) and type the following command :
Command : conda create --name env_name

creating virtual environment using conda prompt

Run the command conda activate env_name to activate the environment. It can later be deactivated using conda deactivate, which returns you to the base/root environment.

*** If your Python version is 3.8 or above, then use the following command instead :

conda create --name env_name python=3.7.x (substitute a patch version; I used 3.7.6)

  • Install Prerequisites : Rasa has only a few prerequisites, so this will not take long (unless you run into version conflicts, which is exactly what the virtual environment helps avoid).

Commands : execute these after creating and activating the venv

  1. conda install ujson
  2. Install the Visual C++ redistributable if you have not done so before. Check this link for the same. Download the x64 version if you have a 64-bit system.
  3. conda install tensorflow
    After this step, you need to install Rasa itself. conda does not have a package for that, so we will be using pip instead. Use
  4. pip install rasa
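
For convenience, all of the above can also be captured declaratively in a conda environment.yml file and created in one shot with conda env create -f environment.yml. This is only a sketch of the steps above (the file itself and the environment name rasa_env are my own choice, not something Rasa prescribes); note that step 2, the Visual C++ redistributable, still has to be installed manually:

# environment.yml : declarative version of the manual steps above
name: rasa_env
channels:
  - defaults
dependencies:
  - python=3.7.6   # the version I used; see the note above if yours is >= 3.8
  - ujson          # step 1
  - tensorflow     # step 3
  - pip
  - pip:
      - rasa       # step 4 (via pip, since conda has no rasa package)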

For more detailed explanation, you can check this link.

After everything mentioned above is done, you can simply initialize the framework using the command rasa init.

  • Install Rasa X (optional, not needed here) : Execute the following command -> pip3 install rasa-x --extra-index-url https://pypi.rasa.com/simple
  • Train the default framework : This step is mandatory and critical if you are initializing the framework for the first time on your machine. Right after you run rasa init, the framework will ask for a project location and then ask whether to train an initial model. Say yes, and kaboom!! you have the default framework up and working. Try executing it. But it's not an advanced one, so don't expect it to be that efficient.

If you make some changes in the files and wish to train the framework again, just run the command rasa train in the Anaconda Prompt, and the training process repeats.

Training the framework. NLU portion is getting trained
Training the framework. Core portion is getting trained

Updating RASA Files : Now that we have reached the epicentre of bot creation using Rasa, let me first congratulate you for sticking with me this long :P

This is the critical part, where we change the existing framework according to our needs. Let me introduce some Natural Language Processing (NLP) and Machine Learning (ML) concepts which we will get acquainted with along the way.

  • Intents : An intent is the customer goal, or user intention. Simply put, it is a collection of the possible queries a user may throw at the bot with the same goal in mind.
  • Entities : An entity is a modifier or input supplied by the user that refines the intent. In raw terms, it is analogous to a variable which can alter the value of an equation.

While entity is the identifier used by the user to describe the issue, intent is what they mean to express.

  • Utterances : Anything that a user says. You can say that

Utterance = Intents + Entities

Components in a sentence
  • Slots : Slots are your bot's memory: the variables your program uses to store values (usually extracted entities) so the bot can categorize and interpret users' input. The snippet below shows how intents, entities and slots fit together.
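
To make these terms concrete, here is a minimal sketch of how they show up in Rasa's YAML training data. The intent ask_weather and the entity city are hypothetical names of my own, not something the default framework ships with:

# nlu.yml : an intent whose examples carry entity annotations in [value](entity) form
version: "2.0"
nlu:
- intent: ask_weather
  examples: |
    - what is the weather in [Delhi](city)
    - will it rain in [Mumbai](city) tomorrow

# domain.yml : the matching entity, plus a slot that stores its value (the bot's memory)
entities:
  - city
slots:
  city:
    type: text   # auto-filled from the "city" entity of the same name by default

So in the utterance "will it rain in Mumbai tomorrow", ask_weather is the intent, Mumbai is the city entity, and its value lands in the city slot for later turns.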
  • Pipeline Components :
  • WhitespaceTokenizer : Tokenization is a way of separating a piece of text into smaller units called tokens. Here, tokens can be words, characters, or subwords. Hence, tokenization can be broadly classified into 3 types: word, character, and subword (n-gram character) tokenization.

WhitespaceTokenizer is a tokenizer using whitespaces as a separator.

  • RegexFeaturizer : Creates a vector representation of user messages using regular expressions, producing features for entity extraction and intent classification. During training, the RegexFeaturizer compiles a list of the regular expressions defined in the training data. For each regex, a feature is set marking whether the expression was found in the user message or not. All features are later fed into an intent classifier / entity extractor to simplify classification (assuming the classifier has learned during the training phase that this set feature indicates a certain intent / entity). Regex features for entity extraction are currently only supported by the CRFEntityExtractor and the DIETClassifier components! (A small example of declaring such a regex in the training data appears after this list.)
  • LexicalSyntacticFeaturizer : Creates features for entity extraction. Moves with a sliding window over every token in the user message and creates features according to the configuration (see below). As a default configuration is present, you don’t need to specify a configuration.
  • CountVectorsFeaturizer : Creates features for intent classification and response selection. Creates bag-of-words representation of user message, intent, and response using sklearn’s CountVectorizer. All tokens which consist only of digits (e.g. 123 and 99 but not a123d) will be assigned to the same feature.
  • DIETClassifier : DIET (Dual Intent and Entity Transformer) is a multi-task architecture for intent classification and entity recognition. The architecture is based on a transformer which is shared for both tasks.

If you are interested, you can check out the Algorithm Whiteboard series on YouTube, where the developers have explained the model architecture in detail (though understanding the architecture is not needed for basic projects).

  • ResponseSelector : The ResponseSelector component can be used to build a response retrieval model that directly predicts a bot response from a set of candidate responses. The prediction of this model is used by the dialogue manager to utter the predicted response. It embeds user inputs and response labels into the same space, and follows the exact same neural network architecture and optimization as the DIETClassifier.
  • FallbackClassifier : The FallbackClassifier classifies a user message with the intent nlu_fallback if the previous intent classifier wasn't able to classify an intent with a confidence greater than or equal to the FallbackClassifier's threshold.
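
As a quick illustration of how the RegexFeaturizer gets its regular expressions, they are declared right inside the NLU training data. The pattern name pincode below is a hypothetical example of mine:

# nlu.yml : a named regex that the RegexFeaturizer turns into a feature
nlu:
- regex: pincode
  examples: |
    - \d{6}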
  • Pipeline Policies :
  1. TEDPolicy : The Transformer Embedding Dialogue (TED) Policy is a multi-task architecture for next action prediction and entity recognition.
    If you want to learn more about the model, check out the paper and the YouTube channel, where the developers have explained the model architecture in detail.
  2. AugmentedMemoizationPolicy : The MemoizationPolicy remembers the stories from your training data. It checks if the current conversation matches the stories in your stories.yml file. If so, it will predict the next action from the matching stories with a confidence of 1.0. If no matching conversation is found, the policy predicts None with confidence 0.0. The AugmentedMemoizationPolicy remembers examples from training stories for up to max_history turns, just like the MemoizationPolicy. Additionally, it has a forgetting mechanism that will forget a certain number of steps in the conversation history and try to find a match in your stories with the reduced history.
  3. RulePolicy : The RulePolicy is a policy that handles conversation parts that follow a fixed behavior (e.g. business logic). It makes predictions based on any rules you have in your training data.
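
All of these components and policies come together in config.yml. The sketch below roughly mirrors the default configuration rasa init generated for me on Rasa 2.x, trimmed to the components described above (the stock default uses MemoizationPolicy; I list AugmentedMemoizationPolicy since that is the variant explained here), so your file may differ slightly:

# config.yml : NLU pipeline (top) and dialogue policies (bottom)
language: en

pipeline:
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: char_wb   # a second, character n-gram copy helps with typos
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 100
  - name: ResponseSelector
    epochs: 100
  - name: FallbackClassifier
    threshold: 0.3      # below this confidence, classify as nlu_fallback

policies:
  - name: AugmentedMemoizationPolicy
  - name: TEDPolicy
    max_history: 5
    epochs: 100
  - name: RulePolicy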

If you are not aware of the NLP or neural network concepts, I would suggest you not meddle with these values; in any case, the default parameter values work almost perfectly for every small- to medium-range bot framework.

Still, it's better to have an understanding of the concepts, or at least know about the things embedded in the project : )

Let’s move onto the file details present inside the framework -

Default file structure and hierarchy of the framework

This is what the default project looks like when you install, initialize and execute the framework.

  1. The actions file contains custom actions which you define if you need to perform actions other than basic question answering.
  2. The data folder contains three files, viz. the intents (nlu.yml), the rules (rules.yml; yes! we can specify rules, implemented by the RulePolicy, if we want to) and the story flow (stories.yml, containing intents and their corresponding actions).
  3. The env folder comes into the picture only if you created your virtual environment inside the project folder.
  4. models contains the trained models, and tests contains sample test stories used to evaluate the bot. We do not need to bother with these files.
  5. config.yml contains the pipeline details. For basic bots, the defaults are left untouched (although you can tune them if you want to improve performance).
  6. credentials.yml is used if we want to link the bot with some social media platforms.
  7. domain.yml contains the list of intents, actions and the corresponding response details.
  8. endpoints.yml contains the various endpoints defined for trackers, event brokers, custom actions etc. Uncomment the action_endpoint section if you are going to define custom actions. A trimmed sketch of how these files reference each other follows this list.
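
Here is that sketch, reusing the hypothetical ask_weather intent from earlier plus a hypothetical custom action action_get_weather:

# data/stories.yml : a story is a sequence of intents and the actions that answer them
version: "2.0"
stories:
- story: weather query
  steps:
  - intent: greet
  - action: utter_greet
  - intent: ask_weather
  - action: action_get_weather   # custom action, served by the action server

# data/rules.yml : fixed behavior, enforced by the RulePolicy
rules:
- rule: always greet back
  steps:
  - intent: greet
  - action: utter_greet

# domain.yml : every intent, response and action must be declared here
intents:
  - greet
  - ask_weather
responses:
  utter_greet:
  - text: "Hey! How can I help you?"
actions:
  - action_get_weather

# endpoints.yml : point action_endpoint at your running action server
action_endpoint:
  url: "http://localhost:5055/webhook"   # 5055 is the default rasa action server port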

If your bot covers more than one domain/type of FAQ, then the ResponseSelector comes into the picture, as sketched below. (I will provide a link for one such project, provided by the Rasa developers themselves. I swear they are some of the most amazing devs you'll see around XD )
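
For reference, such multi-FAQ bots use so-called retrieval intents: each FAQ topic becomes a sub-intent written as faq/<topic>, a single rule maps the whole group to the ResponseSelector, and the answers live in the domain under the matching utter_faq/<topic> keys. The topic names below are hypothetical:

# nlu.yml : two FAQ sub-intents grouped under the retrieval intent "faq"
nlu:
- intent: faq/channels
  examples: |
    - which messaging channels does the bot support?
- intent: faq/languages
  examples: |
    - can the bot speak Hindi?

# data/rules.yml : one rule covers every faq/* sub-intent
rules:
- rule: answer FAQs
  steps:
  - intent: faq
  - action: utter_faq

# domain.yml : responses keyed by the full retrieval intent name
responses:
  utter_faq/channels:
  - text: "Channels are configured in credentials.yml."
  utter_faq/languages:
  - text: "Any language, as long as you supply training data for it."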

Now that I have given all the required details, it's time you prepare to get your hands dirty and understand what exactly is happening in each file, combining this blog with a sample project I created, which you can access using this link.

For an advanced chatbot framework, you can follow this link. But make sure you build a strong base first, so that you get the most out of the concepts mentioned in the project.

Demonstration of Running Files : There are two ways in which you can run the framework.

The first and relatively basic method is to run the command rasa shell, which deploys the created model right in the Anaconda Prompt.

Calling the framework in the command prompt

As you can see from the picture, this method does not give any background detail about the pipeline involved in the framework. You can see the conversation occurring like any normal chatbot interface.

The second method can be used if you want to monitor the performance of the framework while it runs in the Anaconda Prompt. This is done by running the command rasa shell --debug, which prints the values of the various parameters incorporated in the framework.

Calling the framework with debug mode

This is what happens when I ask for weather information. As evident from the picture, the framework uses all the pipeline components to predict the next action with a certain confidence.

Display of parameter values along with the result

And it gives the final result after calling the custom action, with a certain confidence (again!!)

Further Read : Phew!! We have finally reached the concluding part of the journey. But I bet all the patience you mustered to get here has been worth it, now that you have successfully implemented and visualized the result.

At this point, I would like to express my gratitude to you all for bearing with me till the end of the journey. Heartfelt acknowledgement also goes to the people who provide us with supplementary resources for a better understanding of the concepts. Here are some other resources you can refer to for a much better grasp of the basics :

  • History :
  1. https://chatbotslife.com/a-brief-history-of-chatbots-d5a8689cf52f
  2. https://analyticsindiamag.com/story-eliza-first-chatbot-developed-1966/
  3. https://www.engati.com/blog/history-of-chatbots
  4. https://insights.daffodilsw.com/blog/the-history-and-evolution-of-chatbots
  • JAR :
  1. https://docs.oracle.com/javase/8/docs/technotes/guides/jar/jarGuide.html
  2. https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html
  3. https://docs.oracle.com/javase/8/docs/technotes/guides/jar/jar.html
  4. https://en.wikipedia.org/wiki/JAR_(file_format)#:~:text=A%20JAR%20(Java%20ARchive)%20is,format%20and%20typically%20have%20a%20
  • RASA -
  1. https://rasa.com/docs/rasa-x/
  2. https://blog.rasa.com/rasa-x-getting-started-as-a-current-rasa-user/
  3. https://www.youtube.com/watch?v=4ewIABo0OkU

Also, I went through some other blogs which you can read along with mine; I recommend checking out this blog for creating APIs and this blog for implementing AR in a JAR file (in the chatbot frame), so you can create an end-to-end project all by yourself. The approach used by these authors is simple and easy to understand. After all, some extra knowledge never hurts, right ; )

Thanks for reading this blog. See you around over some other interesting topics. Till then, grow more, and stay safe!
