Ollama Proxy Server is a lightweight reverse proxy server designed for load balancing and rate limiting. It is licensed under the Apache 2.0 license and can be installed using pip. This README covers setting up, installing, and using the Ollama Proxy Server.
## Prerequisites
Make sure you have Python (>=3.8) installed on your system before proceeding. (Apache 2.0 is the project's license; the Apache web server is not required.)
## Installation
1. Clone or download the `ollama_proxy_server` repository from GitHub: https://github.com/ParisNeo/ollama_proxy_server
2. Navigate to the cloned directory in the terminal and run `pip install -e .`
## Configuration
### Servers configuration (config.ini)
Create a file named `config.ini` in the same directory as your script, containing server configurations:
```ini
[Server1]
url = http://localhost:11434
queue_size = 5

[Server2]
url = http://localhost:11435
queue_size = 3

# Add as many servers as needed, in the same format as [Server1] and [Server2].
```
Replace `http://localhost:11434` with the URL and port of each backend Ollama server (11434 is Ollama's default port; the backends must not use the port the proxy itself listens on). The `queue_size` value sets the maximum number of requests that can be queued for that server at any given time.
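As an illustration of how such a file can be read (this is a sketch, not the proxy's actual loading code), Python's standard `configparser` module handles the format directly; the server names, URLs, and queue sizes below are sample values, using Ollama's default port 11434 for the backends:

```python
import configparser

# Parse a sample config.ini (illustrative sketch, not the proxy's own code).
config = configparser.ConfigParser()
config.read_string("""
[Server1]
url = http://localhost:11434
queue_size = 5

[Server2]
url = http://localhost:11435
queue_size = 3
""")

# Build a simple mapping: server name -> (url, queue_size).
servers = {
    name: (section["url"], section.getint("queue_size"))
    for name, section in config.items()
    if name != "DEFAULT"
}

print(servers["Server1"])  # ('http://localhost:11434', 5)
```

Lines starting with `#` in the file are treated as comments by `configparser`, so the "Add as many servers" note above is ignored when parsing.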
### Authorized users (authorized_users.txt)
Create a file named `authorized_users.txt` in the same directory as your script, containing one `user:key` pair per line:
```text
user1:key1
user2:key2
```
Replace `user1`, `key1`, `user2`, and `key2` with the desired username and API key for each user.
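A minimal sketch of how such a file can be loaded and checked (assuming one `user:key` pair per line; this is illustrative, not the proxy's actual authentication code):

```python
# Illustrative sketch: load authorized users from the format described above
# (one user:key pair per line) and validate an incoming credential.
def load_authorized_users(text):
    users = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # Skip blank lines.
        user, _, key = line.partition(":")
        users[user.strip()] = key.strip()
    return users

def is_authorized(users, user_key):
    """Check a 'user:key' credential against the loaded users."""
    user, _, key = user_key.partition(":")
    return users.get(user) == key

users = load_authorized_users("user1:key1\nuser2:key2\n")
print(is_authorized(users, "user1:key1"))   # True
print(is_authorized(users, "user1:wrong"))  # False
```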
## Usage
### Starting the server
Start the Ollama Proxy Server by running the following command in your terminal:
```bash
python ollama_proxy_server.py
```
The server listens on a port in the 808x range: it starts at 8080 and, if that port is already taken by another instance, automatically falls back to the next available one (8081, 8082, and so on).
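This fallback behavior can be approximated as follows (an illustrative sketch, assuming the server simply probes 8080, 8081, … until a bind succeeds; this is not the project's actual startup code):

```python
import socket

def find_free_port(start=8080, attempts=10):
    """Return the first port in [start, start + attempts) that can be bound."""
    for port in range(start, start + attempts):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # Bind succeeded, so the port is free.
            except OSError:
                continue  # Port in use; try the next one.
    raise RuntimeError("no free port found in range")

port = find_free_port()
print(f"listening on port {port}")
```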
### Client requests
To send a request to the server, use an HTTP client such as `curl`:
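A typical request shape, assuming the user key is passed as a Bearer token in the `Authorization` header (an assumption; adjust to how your deployment expects credentials), looks like this:

```bash
curl -X <METHOD> \
  -H "Authorization: Bearer <USER_KEY>" \
  http://localhost:<PORT><PATH> \
  --data <POST_DATA>   # only for POST requests
```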
Replace `<METHOD>` with the HTTP method (GET or POST), `<USER_KEY>` with a valid `user:key` pair from your `authorized_users.txt`, `<PORT>` with the port number of your running Ollama Proxy Server, and `<PATH>` with the target endpoint path (e.g., `/api/generate`). For POST requests, include the `--data <POST_DATA>` option to send data in the request body.