Building Interactive Machine Learning Applications with Gradio

Mar 10, 2025 · Mohsen Davarynejad · 5 min read
Created by AI with DALL·E

When working with machine learning models, we often need to showcase our models to stakeholders, gather feedback, or make them accessible to non-technical users. This is where Gradio comes in. Gradio allows you to quickly build interactive UIs for your models with minimal effort.

Key Features of Gradio:

  • Simple and intuitive: Requires minimal coding to get a UI up and running.
  • Supports multiple input and output types: Text, images, audio, and more.
  • Live deployment: Easily shareable with a public link.
  • Integration with ML frameworks: Works well with TensorFlow, PyTorch, and Hugging Face models.
  • Embeddable: Can be embedded in notebooks or websites.

Let’s dive into building our first Gradio application!


Setting Up the Environment with venv

Step 0: Install Required Dependencies

Update your system and install necessary tools:

sudo apt update && sudo apt upgrade -y
sudo apt install -y python3-venv git
python3 -m venv gradio-env
source gradio-env/bin/activate

Now let's install Gradio:

pip install --upgrade pip
pip install gradio
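
To confirm the installation worked inside the virtual environment, a quick sanity check is to import the package and print its version:

# Verify that Gradio imports cleanly and print the installed version
import gradio as gr
print(gr.__version__)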

Step 1: A Simple Gradio Interface

We’ll create a basic function that takes a name as input and returns a greeting.

import gradio as gr

def greet(name):
    return f"Hello, {name}! Welcome to Y1D trading 101!"

iface = gr.Interface(fn=greet, inputs="text", outputs="text")
iface.launch(server_name="0.0.0.0", server_port=7860)

If you need to access the app externally, you first need to open the firewall for port 7860.

sudo ufw allow 7860/tcp
sudo ufw reload

To check firewall status:

sudo ufw status

If ufw is inactive, enable it:

sudo ufw enable

The next step is to find the IP of your machine:

hostname -I

Now you can access your app in a browser at http://<IP>:7860.

Run the script, and you’ll see a simple web interface where users can enter their names and receive a greeting.
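
If you want to confirm the app is reachable over the network before opening a browser, a minimal check from another machine could look like this (the IP address below is a placeholder; use the one reported by hostname -I):

# Minimal reachability check; replace the placeholder IP with your machine's IP
import urllib.request

response = urllib.request.urlopen("http://192.168.1.50:7860", timeout=5)
print(response.status)  # 200 means the Gradio server is responding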

You can also use ngrok for an easy tunnel, but we will skip that approach here.

Explanation:

  • fn=greet: The function that processes the input.
  • inputs="text": Takes a text input from the user.
  • outputs="text": Outputs a text response.
  • iface.launch(): Runs the application in the browser.
  • iface.launch(server_name="0.0.0.0", server_port=7860): Runs the app on the specified host and port.

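The string shortcuts above map to default components. If you want more control over labels and placeholder text, an equivalent version with explicit components looks roughly like this (the label and placeholder strings are just illustrative choices):

import gradio as gr

def greet(name):
    return f"Hello, {name}! Welcome to Y1D trading 101!"

# Same app as above, but with explicit components instead of the "text" shortcuts
iface = gr.Interface(
    fn=greet,
    inputs=gr.Textbox(label="Your name", placeholder="Type a name..."),
    outputs=gr.Textbox(label="Greeting"),
)
iface.launch(server_name="0.0.0.0", server_port=7860)
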
Extending the Application

Now, let’s build a more useful application. Suppose we have a sentiment analysis model, and we want to create an interactive UI for it.

pip install torch transformers

Step 2: Adding a Sentiment Analysis Model

We’ll use the transformers library to load a pre-trained model from Hugging Face.

from transformers import pipeline
import gradio as gr

# Load sentiment analysis model
sentiment_model = pipeline("sentiment-analysis")

def analyze_sentiment(text):
    result = sentiment_model(text)[0]
    return f"Sentiment: {result['label']} (Confidence: {result['score']:.2f})"

iface = gr.Interface(
    fn=analyze_sentiment, 
    inputs="text", 
    outputs="text",
    title="Sentiment Analysis",
    description="Enter a sentence and get its sentiment prediction."
)

iface.launch(server_name="0.0.0.0", server_port=7860)

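To see the raw structure that analyze_sentiment unpacks, you can call the pipeline directly; it returns a list with one dictionary per input (the exact score will vary on your machine):

# The pipeline returns a list of dicts with "label" and "score" keys
print(sentiment_model("Gradio makes demos easy!"))
# Example output (score will vary): [{'label': 'POSITIVE', 'score': 0.99}]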

New Features Added:

  • Using a real ML model: We integrated a Hugging Face transformer for sentiment analysis.
  • Providing a title and description: Makes the UI more informative.

Making the App More Interactive

We can further enhance our Gradio app by adding dropdown menus, multiple inputs, and custom UI elements.

Step 3: Adding Multiple Models and Customization Options

Let’s extend our sentiment analysis app to allow users to choose between different models and adjust the confidence threshold.

import gradio as gr
from transformers import pipeline

# Define available models
models = {
    "DistilBERT": pipeline("sentiment-analysis"),
    "BERT": pipeline("sentiment-analysis", model="nlptown/bert-base-multilingual-uncased-sentiment"),
}

def analyze_sentiment(text, model_choice, threshold):
    model = models[model_choice]
    result = model(text)[0]
    sentiment = result["label"]
    confidence = result["score"]
    
    if confidence < threshold:
        return "Uncertain sentiment (confidence too low)"
    return f"Sentiment: {sentiment} (Confidence: {confidence:.2f})"

iface = gr.Interface(
    fn=analyze_sentiment,
    inputs=[
        "text", 
        gr.Dropdown(choices=["DistilBERT", "BERT"], label="Choose Model"),
        gr.Slider(0.0, 1.0, value=0.5, label="Confidence Threshold")
    ],
    outputs="text",
    title="Advanced Sentiment Analysis",
    description="Select a model, enter text, and set a confidence threshold."
)

iface.launch(server_name="0.0.0.0", server_port=7860)


New Features:

  • Dropdown menu: Allows users to choose between different ML models.
  • Slider control: Users can adjust the confidence threshold.
  • More informative output: Displays confidence scores and handles uncertainty better.
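
One design note on the code above: building both pipelines at import time downloads and loads both models before the UI even starts. If startup time or memory becomes a concern, a lazy-loading variant is one possible refactor. The sketch below keeps the same two models and only constructs each pipeline the first time it is selected:

import gradio as gr
from transformers import pipeline

# Arguments for each selectable model; the default sentiment-analysis
# pipeline is used when no model name is given
MODEL_ARGS = {
    "DistilBERT": {},
    "BERT": {"model": "nlptown/bert-base-multilingual-uncased-sentiment"},
}

_loaded = {}  # cache of already-constructed pipelines

def get_model(name):
    # Build the pipeline on first use, then reuse it
    if name not in _loaded:
        _loaded[name] = pipeline("sentiment-analysis", **MODEL_ARGS[name])
    return _loaded[name]

def analyze_sentiment(text, model_choice, threshold):
    result = get_model(model_choice)(text)[0]
    if result["score"] < threshold:
        return "Uncertain sentiment (confidence too low)"
    return f"Sentiment: {result['label']} (Confidence: {result['score']:.2f})"

The gr.Interface definition from Step 3 stays unchanged; only the model handling moves into get_model.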

Deploying Your Gradio App

Once satisfied with your application, you can deploy it:

  1. Running Locally: Simply execute your script.

  2. Public Sharing: Add share=True to .launch():

    iface.launch(share=True)
    

    This generates a temporary public Gradio link with a random URL that can be accessed from anywhere.

  3. Hosting on Hugging Face Spaces (Out of scope):

    • Push your script to a Hugging Face repo and create a Gradio Space.
  4. Docker Deployment:

    • Containerize your app and deploy it using Docker, AWS, or Google Cloud.
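
Whichever option you choose, keep in mind that model inference can tie up the server when several users submit requests at once. Gradio ships with a built-in request queue for this case; assuming a reasonably recent Gradio version, enabling it is a small change before launch:

# Enable Gradio's request queue so concurrent requests
# are processed in order instead of overwhelming the model
iface.queue()
iface.launch(server_name="0.0.0.0", server_port=7860)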

Conclusion

Gradio makes it incredibly easy to build and share interactive machine learning applications. We started with a simple example, then progressively enhanced it by integrating models and adding UI elements. You can now build your own interactive ML tools and deploy them effortlessly.

Read More

If you’re interested in diving deeper into Gradio and its capabilities, check out the following resources:

  • Time Series Visualization in Gradio: Learn how to create interactive time plots and visualize time-series data in your Gradio apps with ease.
    Check out the tutorial

  • Using Flagging in Gradio: Learn how to implement flagging in your Gradio apps to collect user feedback, improve models, and monitor predictions effectively.
    Read the guide here

  • Building Multi-Page Apps: Want to create a more structured app? This guide walks you through setting up multi-page applications in Gradio for better user navigation.
    Explore multi-page apps

  • Integrating Gradio with Slack: This guide walks you through creating a Slack bot powered by a Gradio app, making AI models easily accessible within Slack workspaces. This serves as a reminder of Gradio’s extensive customization options and the wide range of use cases it supports.
    Build a Slack bot with Gradio