
Zero-Shot and Few-Shot Classification with Scikit-LLM
In this article, you will learn:
- how Scikit-LLM integrates large language models like OpenAI’s GPT with the Scikit-learn framework for text analysis.
- the difference between zero-shot and few-shot classification and how to implement them using Scikit-LLM.
- a step-by-step guide on configuring Scikit-LLM with an OpenAI API key and applying it to a sample text classification task.
Introduction
Scikit-LLM is a Python library designed to integrate large language models (LLMs) like OpenAI’s GPT-3.5 and GPT-4 with the Scikit-learn machine learning framework. It provides a simple interface to use LLMs as zero-shot or few-shot classifiers using natural language prompts, making it handy for downstream text analysis tasks such as classification, sentiment analysis, and topic labeling.
This article focuses on the zero-shot and few-shot classification capabilities of Scikit-LLM and illustrates how to use it alongside Scikit-learn workflows for these tasks.
Before we get hands-on, let’s clarify the difference between zero-shot and few-shot classification.
- Zero-shot classification: The LLM classifies text without any prior labeled examples from the dataset; it is prompted only with the possible class labels.
- Few-shot classification: A small set of labeled examples is provided in the prompt—typically a few examples per possible class—to guide the LLM’s reasoning toward the requested classification.
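To make the distinction concrete, here is an illustrative sketch of how the two prompting strategies differ. These are not Scikit-LLM's actual internal prompts; the wording and helper functions are purely hypothetical:

```python
# Illustrative only: NOT Scikit-LLM's real prompt templates, just a sketch
# of the structural difference between the two strategies.
labels = ["positive", "negative", "neutral"]

def zero_shot_prompt(text):
    # Zero-shot: the prompt contains only the candidate labels and the text.
    return (
        f"Classify the following text as one of {labels}.\n"
        f"Text: {text}\nLabel:"
    )

def few_shot_prompt(examples, text):
    # Few-shot: a handful of labeled examples is prepended before the query.
    shots = "\n".join(f"Text: {x}\nLabel: {y}" for x, y in examples)
    return (
        f"Classify the following text as one of {labels}.\n"
        f"{shots}\nText: {text}\nLabel:"
    )

examples = [("I love it!", "positive"), ("Awful service.", "negative")]
print(zero_shot_prompt("Great battery life."))
print(few_shot_prompt(examples, "Great battery life."))
```

The only structural difference is the block of labeled examples; everything else in the prompt stays the same.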
Step-by-Step Process
It’s time to try these two use cases with our article’s starring library: Scikit-LLM. We start by installing it:
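The library is available on PyPI, so a standard pip install is all that is needed:

```shell
pip install scikit-llm
```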
We will now need to import the following two classes:
```python
from skllm.config import SKLLMConfig
from skllm import ZeroShotGPTClassifier
```
Since using OpenAI's models requires an API key, we need to configure one. Log in to the OpenAI platform (registering first if necessary) and create a new API key. Note that without a paid plan, your access to the models used in the examples below may be limited.
```python
SKLLMConfig.set_openai_key("API_KEY_GOES_HERE")
```
Let’s consider this small example dataset, containing several user reviews and their associated sentiment labels:
```python
X = [
    "I love this product! It works great and exceeded my expectations.",
    "This is the worst service I have ever received.",
    "The movie was okay, not bad but not great either.",
    "Excellent customer support and very quick response.",
    "I am disappointed with the quality of this item.",
]

y = ["positive", "negative", "neutral", "positive", "negative"]
```
We create an instance of a zero-shot classifier as follows:
```python
clf = ZeroShotGPTClassifier()
```
As its name suggests, Scikit-LLM is heavily based on Scikit-learn. Consequently, the process for training and evaluating a model will look very familiar if you are experienced with the Scikit-learn ecosystem:
```python
clf.fit(X, y)
labels = clf.predict(X)

print(labels)
```
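Because predict() returns a plain list of labels, the output plugs directly into standard Scikit-learn evaluation tools. The sketch below uses hypothetical predictions in place of a real clf.predict(X) call, which would require an API round trip:

```python
from sklearn.metrics import accuracy_score, classification_report

# Ground-truth labels from the example dataset.
y_true = ["positive", "negative", "neutral", "positive", "negative"]

# Hypothetical predictions standing in for clf.predict(X); a real run
# would call the OpenAI API.
y_pred = ["positive", "negative", "positive", "positive", "negative"]

print(accuracy_score(y_true, y_pred))  # fraction of matching labels
print(classification_report(y_true, y_pred, zero_division=0))
```

This interoperability with Scikit-learn metrics is one of the main conveniences of the library's estimator-style interface.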
But here’s where the real value of zero-shot classification stands out: training data is not mandatory. The classifier can be “fitted” using only the possible labels. In other words, it is possible to do something like this:
```python
clf_empty = ZeroShotGPTClassifier()
clf_empty.fit(None, ["positive", "negative", "neutral"])
labels = clf_empty.predict(X)
```
And it still works! This is the true essence of zero-shot classification.
Regarding few-shot classification, the process closely resembles the first zero-shot example where we provided training data. In fact, the fit() method is precisely how the few labeled examples are passed to the model.
```python
from skllm import FewShotGPTClassifier

clf = FewShotGPTClassifier()

# Fit uses the few-shot examples to build part of the prompt
clf.fit(X, y)

test_samples = [
    "The new update is fantastic and really smooth.",
    "I'm not happy with the experience at all.",
    "Meh, it was neither exciting nor terrible.",
]

predictions = clf.predict(test_samples)
print(predictions)
```
It might sound odd at first, but in the specific use case of few-shot classification, both the fit() and predict() methods are part of the inference process: fit() provides the labeled examples for the prompt, and predict() supplies the new text to be classified. Together, they build the complete prompt sent to the LLM.
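This division of labor can be illustrated with a toy class. This is a conceptual sketch, not Scikit-LLM's real implementation, and the class name and prompt format are invented for illustration:

```python
# Conceptual sketch (not skllm's actual code): fit() merely stores the
# examples; predict() assembles one prompt per input.
class ToyFewShotClassifier:
    def fit(self, X, y):
        self.examples = list(zip(X, y))  # no training, just storage
        return self

    def _build_prompt(self, text):
        shots = "\n".join(f"Text: {x}\nLabel: {y}" for x, y in self.examples)
        return f"{shots}\nText: {text}\nLabel:"

    def predict(self, X):
        # A real classifier would send each prompt to the LLM and parse the
        # response; here we return the prompts to show what gets sent.
        return [self._build_prompt(text) for text in X]

toy = ToyFewShotClassifier().fit(["Great!"], ["positive"])
prompts = toy.predict(["Not good."])
print(prompts[0])
```

Note that each prompt contains both the stored examples and the new text, which is why "fitting" and "predicting" are really two halves of a single inference step.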
Wrapping Up
This article demonstrated two text classification use cases using the Scikit-LLM library: zero-shot and few-shot classification. Following an interface similar to Scikit-learn's, these two techniques differ mainly in the prompting strategy they use to leverage example data during inference.
Zero-shot classification doesn’t require labeled examples and relies solely on the loaded model’s general understanding to assign labels. Meanwhile, few-shot classification incorporates a small set of labeled examples into the prompt to guide the model’s inference more accurately.