
How to Use Local Open WebUI API from Command Line


Open WebUI can be used not only through web browsers but also by directly interacting with its API from the command line. This article provides a detailed explanation of how to ask questions and get responses from the terminal using the Open WebUI API running on Docker.

Prerequisites

  • Open WebUI is running in Docker and reachable at a local URL
  • This guide uses localhost:3000
  • Models have been pulled with Ollama

Confirming Open WebUI Startup

docker ps

Example output:

CONTAINER ID   IMAGE                                COMMAND           CREATED       STATUS                 PORTS                    NAMES
67426e250958   ghcr.io/open-webui/open-webui:main   "bash start.sh"   2 hours ago   Up 2 hours (healthy)   0.0.0.0:3000->8080/tcp   open-webui

In my case, the container maps port 3000, so http://localhost:3000 is reachable and serves as the base URL for all API endpoints.

How to Obtain API Token

  1. Click your profile icon in the upper right
  2. Open Settings
  3. Select Account
  4. Under API Key, click Show
  5. Copy the JWT token

You will use this JWT token in the requests that follow.

How to Check Available Models

Before using the API, let's check what models are available.

Checking via GUI

Open the chat screen and you'll see the current model name in the upper left. Click it to open the model selector, which lists all available models.

Checking via Command

curl -H "Authorization: Bearer YOUR_TOKEN" http://localhost:3000/api/models

Replace YOUR_TOKEN with the JWT token you copied earlier. If your Open WebUI runs on a different host or port, adjust the URL accordingly.
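If you want to use the model list in a script, you can parse the JSON that this endpoint returns. A minimal sketch in Python, working from a sample payload: the "data" list with "id"/"name" fields is an assumption based on the OpenAI-compatible format Open WebUI appears to use, so check your own output first.

```python
import json

# Sample response in the assumed OpenAI-compatible shape;
# the "data"/"id"/"name" fields may differ in your version of Open WebUI
sample = '{"data": [{"id": "llama3:latest", "name": "llama3"}, {"id": "Test", "name": "Test"}]}'

models = json.loads(sample)
ids = [m["id"] for m in models.get("data", [])]
print(ids)  # -> ['llama3:latest', 'Test']
```

In a real script you would feed the body of the curl response into `json.loads` instead of the sample string.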

Basic Questions Using the API

The preliminary setup is now complete.

Let's actually try asking questions using the API.

Basic Request

curl -X POST http://localhost:3000/api/chat/completions \
-H "Content-Type: application/json" \
-H "Cookie: token=YOUR_JWT_TOKEN" \
-d '{
  "model": "your_model_name",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ]
}'

Replace YOUR_JWT_TOKEN with your JWT token, and your_model_name with the model you want to use.

Custom models created with a Knowledge Base can also be used; even a hastily created model named "Test" responded without issue.

Response Example

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! Is there anything I can help you with?"
      }
    }
  ]
}
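To use the reply in a script, extract choices[0].message.content from the response. A short sketch, using the sample response above (a real response contains additional fields):

```python
import json

# The sample response from above; a real response has more fields
raw = '''{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! Is there anything I can help you with?"
      }
    }
  ]
}'''

reply = json.loads(raw)["choices"][0]["message"]["content"]
print(reply)  # -> Hello! Is there anything I can help you with?
```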

Failed Example

According to the documentation, it seemed like the following should work:

curl -X POST http://localhost:3000/api/chat/completions -H "Authorization: Bearer my-jwt-token" -H "Content-Type: application/json" -d '{"model": "Test", "messages": [{"role": "user", "content": "Hello"}]}'
{"detail":"401 Unauthorized"}

But it didn't work. Why not?

Since the Bearer token kept returning 401, I inspected the browser's Network tab and found that the web UI sends the token as a cookie on POST requests. When I sent the token in a Cookie header instead, the request succeeded.
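The same cookie-based call can be made from a script. A minimal sketch using only the Python standard library; the base URL, model name, and token below are placeholders, and the Cookie header mirrors the curl call that worked in my environment:

```python
import json
import urllib.request

def build_chat_request(base_url: str, token: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST to /api/chat/completions with the JWT sent as a cookie."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Sending the token as a cookie is what worked in my environment
            "Cookie": f"token={token}",
        },
    )

req = build_chat_request("http://localhost:3000", "YOUR_JWT_TOKEN", "Test", "Hello")
# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The send is left commented out so the sketch stays offline; uncomment it once your token and model name are filled in.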

Is this just me? I hope this helps someone.


Please Provide Feedback
We would appreciate your feedback on this article. Feel free to leave a comment on any relevant YouTube video or reach out through the contact form. Thank you!