A proxy server for multiple Ollama instances with key-based security

Ollama Proxy Server

Ollama Proxy Server is a lightweight reverse proxy designed for load balancing and rate limiting across multiple Ollama backends. It is licensed under the Apache 2.0 license and can be installed using pip. This README covers installing, configuring, and using the Ollama Proxy Server.

Prerequisites

Make sure you have Python (>=3.8) and Apache installed on your system before proceeding.

Installation

  1. Clone or download the ollama_proxy_server repository from GitHub: https://github.com/ParisNeo/ollama_proxy_server
  2. Navigate to the cloned directory in the terminal and run pip install -e .

Configuration

Servers configuration (config.ini)

Create a file named config.ini in the same directory as your script, containing server configurations:

[Server1]
url = http://localhost:8080/
queue_size = 5

[Server2]
url = http://localhost:8081/
queue_size = 3

# Add as many servers as needed, in the same format as [Server1] and [Server2].

Replace http://localhost:8080/ with the URL and port of the first server. The queue_size value indicates the maximum number of requests that can be queued at a given time for this server.
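
To make the expected structure concrete, here is a minimal sketch of how such a file could be read with Python's standard configparser module. The function name and return shape are illustrative only, not the proxy's actual internals.

```python
import configparser

def load_servers(path="config.ini"):
    """Read each [ServerN] section into a (name, url, queue_size) tuple."""
    parser = configparser.ConfigParser()
    parser.read(path)
    servers = []
    for name in parser.sections():
        section = parser[name]
        servers.append((name, section["url"], int(section["queue_size"])))
    return servers

# Example result for the config above:
# [('Server1', 'http://localhost:8080/', 5), ('Server2', 'http://localhost:8081/', 3)]
print(load_servers())
```

The queue_size ceiling is what presumably drives both load balancing and rate limiting: a request can be routed to the backend with the fewest pending requests, and a backend whose queue is full can be skipped until a slot frees up.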

Authorized users (authorized_users.txt)

Create a file named authorized_users.txt in the same directory as your script, containing one user:key pair per line:

user1:key1
user2:key2

Replace user1, key1, user2, and key2 with the desired username and API key for each user.
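
As an illustration of how these credentials might be consumed, the sketch below loads the file into a dictionary and checks a bearer token against it. The helper names are hypothetical; only the user:key file format comes from this README.

```python
def load_authorized_users(path="authorized_users.txt"):
    """Map each user name to its key, one 'user:key' pair per line."""
    users = {}
    with open(path, encoding="utf8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            user, key = line.split(":", 1)
            users[user] = key
    return users

def is_authorized(users, bearer_token):
    """Accept a token of the form 'user:key' only if it matches the stored pair."""
    user, _, key = bearer_token.partition(":")
    return users.get(user) == key
```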

Usage

Starting the server

Start the Ollama Proxy Server by running the following command in your terminal:

python ollama_proxy_server.py

The server listens on a port in the 808x range (8080, 8081, and so on): starting from 8080, the first port not already in use is selected automatically, so 8080 is chosen when no other instance is running.
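
One way such a port scan can be implemented (a generic sketch, not the proxy's own code) is to try binding sockets upward from 8080 until a free port is found:

```python
import socket

def first_free_port(start=8080, attempts=10):
    """Return the first port in 8080, 8081, ... that can be bound."""
    for port in range(start, start + attempts):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
            try:
                probe.bind(("", port))
                return port
            except OSError:
                continue  # port already in use, try the next one
    raise RuntimeError("no free port found in the 808x range")
```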

Client requests

To send a request to the server, use the following command:

curl -X <METHOD> -H "Authorization: Bearer <USER_KEY>" http://localhost:<PORT>/<PATH> [--data <POST_DATA>]

Replace <METHOD> with the HTTP method (GET or POST), <USER_KEY> with a valid user:key pair from your authorized_users.txt, <PORT> with the port number of your running Ollama Proxy Server, and <PATH> with the target endpoint path (e.g., /api/generate). For a POST request, add the --data <POST_DATA> option to send data in the request body.

For example:

curl -X POST -H "Authorization: Bearer user1:key1" http://localhost:8080/api/generate --data '{"data": "Hello, World!"}'
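
If you prefer calling the proxy from Python rather than curl, the same request can be issued with the third-party requests package (assumed to be installed separately; it is not a dependency this README lists):

```python
import requests

# Mirrors the curl example above: POST to /api/generate through the proxy on
# localhost:8080, authenticating with the user1:key1 pair from authorized_users.txt.
response = requests.post(
    "http://localhost:8080/api/generate",
    headers={"Authorization": "Bearer user1:key1"},
    data='{"data": "Hello, World!"}',
)
print(response.status_code)
print(response.text)
```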