Your First LLM App: Using OpenAI API with Streamlit and Python.

We’re going to summarize any given text like a pro. Think of it as turning long emails, documents, or even your cousin’s 3-paragraph WhatsApp message into a neat JSON summary.

This is a 101-style beginner’s blog, so if you’re new to this — worry not! I’ll walk you through everything step by step, and I promise to keep it simple (with just enough geeky fun). All the code is in this repo – nextweb repo

Here’s what we’ll cover in this AI-powered journey:

- Prerequisites and getting an OpenAI API key
- Setting up your project environment
- Writing app.py and calling the OpenAI API
- Building a basic Streamlit UI
- Running the app

By the end, you’ll have a basic yet functional AI summarizer app built with Streamlit and OpenAI, ready to wow your friends or at least make your reading list more manageable.

Let’s get building! 🛠️✨

Prerequisites

An OpenAI API key

Setting Up Your Project Environment

Install the packages we’ll use (for example, pip install openai streamlit python-dotenv), then create a .env file in your project root with your API key and the model name:

OPENAI_API_KEY=sk-proj-xxxxx
OPENAI_API_MODEL=gpt-4o-mini

Create app.py and use OpenAI

from openai import OpenAI
import streamlit as st
import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
llm_model = os.environ["OPENAI_API_MODEL"]

def get_completion(prompt, model=llm_model):
    messages = [
        {"role": "system", "content": "Act like a text summarizing tool and respond in JSON with a key called 'summary'."},
        {"role": "user", "content": prompt},
    ]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content

When you call the GPT model, you’re basically starting a chat. To keep things structured, you pass it a list of messages. Each message has a role and some content. Here’s what the roles mean:

role: "system"

Think of this as setting the stage. You’re telling the AI how to behave. In our case, we say:

“Act like a text summarizing tool and respond in JSON with a key called ‘summary’”

It’s like giving the AI a job title before it gets to work.

role: "user"

This is your voice — the actual input or question you’re giving the AI. For example:

“this is my long article : <article text>”


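Putting the two roles together, here’s a sketch of the messages list our app sends (the article text is just a placeholder):

```python
# The conversation we send to the Chat Completions API:
# a system message that sets the assistant's behavior,
# followed by the user's actual request.
messages = [
    {
        "role": "system",
        "content": "Act like a text summarizing tool and respond in JSON with a key called 'summary'.",
    },
    {
        "role": "user",
        "content": "this is my long article : <article text>",
    },
]
```

The order matters: the system message comes first so the model knows its job before it reads your input.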
temperature – The AI’s “Creativity Dial”

The temperature setting controls how random the model’s replies are: at 0 it sticks to the most likely wording every time, while higher values make the output more varied and creative.

💡 In our summarizer app, we’re using temperature=0 to keep things tight and consistent — because you probably don’t want a haiku when you just need the key points.

Creating a Basic Streamlit App

# Initialize the Streamlit UI
st.title("Text Summarizer")

# Display a text area for the text the user wants summarized
user_input = st.text_area("Give text to summarize:")

if user_input:
    # Pass the user input to get_completion for summarizing
    response = get_completion(user_input)

    # Display the model's JSON response
    st.write("Assistant:", response)

Running the App

streamlit run app.py

If all goes well (fingers crossed 🤞), it’ll launch a nice, clean UI in your default browser — and boom, you’re ready to start playing with sample inputs and watching your AI summarizer do its thing!

Screenshot: the app after running Streamlit.

Conclusion

And that’s a wrap! 🎉
You’ve just built a simple but powerful AI summarizer app using Streamlit and OpenAI. Along the way, you:

- set up a project environment with an OpenAI API key
- wrote a get_completion function that calls the Chat Completions API with system and user messages
- built a Streamlit UI to collect input and display the summary
- ran the app locally with streamlit run
