Overview
Text classification is a machine learning technique that automatically tags or categorizes text. Using natural language processing (NLP), text classifiers can analyze and sort text by sentiment, subject, or consumer intent faster and more consistently than humans can.
That matters because it’s challenging for people to keep up with the amount of data coming in from so many channels: emails, chats, web pages, social media, online reviews, support tickets, survey results, you name it.
How Does Text Classification Work?
Before you can use machine learning to build a classifier, you must translate text into something a machine can understand. A bag of words is frequently used for this: a vector records how often each word from a predetermined list of terms appears in the text.
Once the data has been vectorized, the text classifier model is given training data consisting of a feature vector and a tag for each text sample. With enough training examples, the model can make accurate predictions.
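Here is a minimal sketch of bag-of-words vectorization using scikit-learn; the library choice and tiny corpus are just for illustration, and nothing else in this article depends on them.
from sklearn.feature_extraction.text import CountVectorizer

# Tiny example corpus; each document becomes one count vector.
corpus = ["I loved the movie", "I hated the movie"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)

print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(X.toarray())  # word counts per document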
Let’s look at some of the most popular text classification algorithms: Support Vector Machines, Naive Bayes, Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs).
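To make one of these concrete, here is a hedged sketch of a Naive Bayes classifier trained on bag-of-words vectors with scikit-learn; the four labeled examples are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled data, purely for illustration.
texts = ["I loved it", "great movie", "I hated it", "terrible movie"]
labels = ["positive", "positive", "negative", "negative"]

# Vectorize with bag of words, then fit a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["what a great film"]))  # ['positive']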
That said, this article will examine text classification approaches that you can use for your next NLP project and that need only a tiny amount of code and little to no training data.
The techniques make use of Transformer models, which keep getting better at text classification. It’s worth being familiar with these methods, since choosing the right one might save you a ton of time on your upcoming text classification job.
Already Fine-Tuned Transformer Model
The most common way to classify text with Transformer models is to fine-tune a pre-trained model like BERT. Given adequate data and Happy Transformer, building a text classification model is a simple procedure.
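For context, fine-tuning your own model with Happy Transformer looks roughly like the sketch below. The file name train.csv is a placeholder, and the assumed CSV format (text and label columns) is based on the Happy Transformer documentation, so check it before relying on this.
from happytransformer import HappyTextClassification

# Assumed: "train.csv" contains "text" and "label" columns,
# per the Happy Transformer docs.
happy_tc = HappyTextClassification("DISTILBERT", "distilbert-base-uncased", num_labels=2)
happy_tc.train("train.csv")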
If that’s your situation, I advise you to look at Hugging Face’s Model Hub to find a model that can do your task. The Hub lists the text classification models that are currently available, and you can use its many search filters to choose a model that meets your needs.
Moving on, you will have to install Happy Transformer so that you can import the HappyTextClassification class into your project.
# install Happy Transformer.
pip install happytransformer
# import the HappyTextClassification
from happytransformer import HappyTextClassification
Next, instantiate the model. Provide the model type and name as the first two positional parameters. The named parameter num_labels should be the number of classes the model can sort text into; in this example, six: sadness, joy, love, anger, fear, and surprise.
happy_classifier = HappyTextClassification("DISTILBERT", "bhadresh-savani/distilbert-base-uncased-emotion", num_labels=6)
Easy! Now you can use the classify_text() method of happy_classifier to classify text.
text = "I had a lot of fun running today."
output = happy_classifier.classify_text(text)
print(output)
Output:
TextClassificationResult(label='joy', score=0.9929057359695435)
As the printed output shows, the result is a dataclass with two variables: label and score.
print(output.label)
print(output.score)
Output:
joy
0.9929057359695435
Zero-Shot NLI Model
A model trained for natural language inference (NLI) can categorize text into any number of classes without any training data. NLI is the task of determining whether one input entails, contradicts, or is neutral toward another input. Researchers from the University of Pennsylvania proposed a method for adapting an NLI model to text classification by reformulating the model’s input.
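To give a feel for the reformulation, here is a hedged sketch of the underlying idea: the text becomes the premise, each candidate label is slotted into a hypothesis template, and the model’s entailment probability serves as that label’s score. The template wording is an assumption based on the facebook/bart-large-mnli model card.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

premise = "I loved the movie so much."
hypothesis = "This example is positive."  # candidate label slotted into a template

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
logits = model(**inputs).logits[0]  # order: [contradiction, neutral, entailment]

# Drop "neutral" and softmax over contradiction vs. entailment;
# the entailment probability acts as the label's score.
probs = torch.softmax(logits[[0, 2]], dim=0)
print(probs[1].item())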
Through Hugging Face’s Transformers library, you can employ a zero-shot NLI text classification model with just a few lines of code. Import the pipeline function, which is what you’ll use to load the model.
from transformers import pipeline
The pipeline function requires two inputs to load a model: the task we’re performing and the model name. The task’s name is “zero-shot-classification,” and one of the most popular zero-shot text classification models on the Model Hub is “facebook/bart-large-mnli.”
task = "zero-shot-classification"
model = "facebook/bart-large-mnli"
classifier = pipeline(task, model)
Next, define the candidate labels and an example input for a sentiment analysis task.
labels = ["negative", "positive"]
text = "I loved the movie so much."
You can now classify text by invoking the classifier, passing the text to be classified as the first positional argument and the labels as the second. The output is a dictionary containing the results.
output = classifier(text, labels)
print(output)
Output:
{'sequence': 'I loved the movie so much.', 'labels': ['positive', 'negative'], 'scores': [0.9705924391746521, 0.02940751425921917]}
The dictionary contains three keys: sequence, labels, and scores. The labels and scores are sorted from highest to lowest score, so you can grab the top label, positive, along with its score.
print(output["labels"][0])
print(output["scores"][0])
Output:
positive
0.9705924391746521
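As a design note, the same pipeline handles more than two labels, and it also accepts a multi_label=True argument for texts that can belong to several categories at once. Here is a hedged sketch; the labels and text are made up for illustration.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", "facebook/bart-large-mnli")

labels = ["sports", "politics", "technology"]
text = "The new smartphone chip doubles battery life."

# With multi_label=True, each label is scored independently instead of
# the scores being normalized across labels, so several can score high.
output = classifier(text, labels, multi_label=True)
print(output["labels"][0], output["scores"][0])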
Conclusion
This marks the end of the article. You can also read up on other techniques for performing text classification with little or no data, such as Zero-Shot Data Generation and Few-Shot Learning with a Text Generation Model. This is a topic I find really interesting, and I hope you learned something too.
Let’s connect: Twitter & LinkedIn
You can also check out my YouTube channel.
Happy coding! 😊