Quick Start

This guide walks you through setting up a local runqy environment with working examples.

Prerequisites

  • Docker (for Redis, or use Docker Compose for full stack)
  • One of:
    • Pre-built binaries (recommended)
    • Go 1.24+ (to build from source)

No database required for development

SQLite is embedded in the server for development. PostgreSQL is only needed for production.

Choose Your Installation Method

Pre-built binaries are the fastest way to get started:

Linux/macOS:

# Install server
curl -fsSL https://raw.githubusercontent.com/publikey/runqy/main/install.sh | sh

# Install worker
curl -fsSL https://raw.githubusercontent.com/publikey/runqy-worker/main/install.sh | sh

Windows (PowerShell):

# Install server
iwr https://raw.githubusercontent.com/publikey/runqy/main/install.ps1 -useb | iex

# Install worker
iwr https://raw.githubusercontent.com/publikey/runqy-worker/main/install.ps1 -useb | iex

Run the full stack without cloning the repo:

curl -O https://raw.githubusercontent.com/Publikey/runqy/main/docker-compose.quickstart.yml
docker-compose -f docker-compose.quickstart.yml up -d

This starts Redis, PostgreSQL, server, and worker. Skip to Step 5.

Clone the repo and run the full stack:

git clone https://github.com/Publikey/runqy.git
cd runqy
docker-compose up -d

This starts Redis, PostgreSQL, server, and worker. Skip to Step 5.

Build from source:

git clone https://github.com/Publikey/runqy.git
git clone https://github.com/Publikey/runqy-worker.git

# Build server
cd runqy/app && go build -o runqy .

# Build worker
cd ../runqy-worker && go build -o runqy-worker ./cmd/worker

For more installation options, see the Installation Guide.

1. Start Redis

docker run -d --name redis -p 6379:6379 redis:7-alpine

Redis 8.x is not supported

runqy uses asynq which relies on Lua scripts that are incompatible with Redis 8.x. Use Redis 7.x (redis:7-alpine). The Docker Compose files already pin this version.

2. Start the Server

Linux/macOS (binary):

export REDIS_HOST=localhost
export REDIS_PORT=6379
export REDIS_PASSWORD=""
export RUNQY_API_KEY=dev-api-key
runqy serve --sqlite

Windows (PowerShell, binary):

$env:REDIS_HOST = "localhost"
$env:REDIS_PORT = "6379"
$env:REDIS_PASSWORD = ""
$env:RUNQY_API_KEY = "dev-api-key"
runqy serve --sqlite

Linux/macOS (from source):

cd runqy/app
export REDIS_HOST=localhost
export REDIS_PORT=6379
export REDIS_PASSWORD=""
export RUNQY_API_KEY=dev-api-key
go run . serve --sqlite

Windows (PowerShell, from source):

cd runqy/app
$env:REDIS_HOST = "localhost"
$env:REDIS_PORT = "6379"
$env:REDIS_PASSWORD = ""
$env:RUNQY_API_KEY = "dev-api-key"
go run . serve --sqlite

The server starts on port 3000 by default.

API Authentication

The server reads the API key from the RUNQY_API_KEY environment variable. HTTP clients (curl, SDKs) must send it as an Authorization: Bearer {key} header.
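As a sketch of what any HTTP client needs to do, the snippet below builds an authenticated request with Python's standard library. The endpoint path and key match the curl example in Step 5; actually sending the request of course requires the server from Step 2 to be running.

```python
import json
import urllib.request

API_KEY = "dev-api-key"  # must match RUNQY_API_KEY from Step 2
BASE_URL = "http://localhost:3000"

def authed_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a POST request carrying the Bearer token the server expects."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = authed_request("/queue/add", {"queue": "quickstart-oneshot", "data": {}})
# urllib.request.urlopen(req) sends it once the server is up
```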

3. Deploy the Example Queues

In a new terminal:

Linux/macOS (binary):

# Download example config
curl -fsSL https://raw.githubusercontent.com/Publikey/runqy/main/examples/quickstart.yaml -o quickstart.yaml

runqy login -s http://localhost:3000 -k dev-api-key
runqy config create -f quickstart.yaml

Windows (PowerShell, binary):

# Download example config
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/Publikey/runqy/main/examples/quickstart.yaml" -OutFile "quickstart.yaml"

runqy login -s http://localhost:3000 -k dev-api-key
runqy config create -f quickstart.yaml

Linux/macOS (from source):

cd runqy/app
go build -o runqy .
./runqy login -s http://localhost:3000 -k dev-api-key
./runqy config create -f ../examples/quickstart.yaml

Windows (PowerShell, from source):

cd runqy/app
go build -o runqy.exe .
.\runqy.exe login -s http://localhost:3000 -k dev-api-key
.\runqy.exe config create -f ..\examples\quickstart.yaml

This deploys two example queues:

Queue                    Mode           Description
quickstart-oneshot       one_shot       Spawns a new Python process per task
quickstart-longrunning   long_running   Keeps the Python process alive between tasks

Sub-queue naming: the .default suffix

When a queue has no explicit sub-queues defined, runqy automatically appends .default. So quickstart-oneshot becomes quickstart-oneshot.default at runtime. You'll see this suffix in worker logs, Redis keys, and API responses. When enqueueing, you can use either the short name (quickstart-oneshot) or the full name (quickstart-oneshot.default).
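The resolution rule can be sketched as a tiny helper. This is purely illustrative; the actual resolution happens inside the runqy server, not in client code:

```python
def resolve_queue(name: str) -> str:
    """Append the implicit sub-queue suffix when none is given.

    Illustrative only: mirrors the rule described above, where a queue
    name without an explicit sub-queue gains a ".default" suffix.
    """
    return name if "." in name else name + ".default"

resolve_queue("quickstart-oneshot")          # short name gains the suffix
resolve_queue("quickstart-oneshot.default")  # full name passes through unchanged
```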

4. Start a Worker

In a new terminal:

Linux/macOS (binary):

# Download example config
curl -fsSL https://raw.githubusercontent.com/publikey/runqy-worker/main/config.yml.example -o config.yml

# Start worker
runqy-worker -config config.yml

Windows (PowerShell, binary):

# Download example config
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/publikey/runqy-worker/main/config.yml.example" -OutFile "config.yml"

# Start worker
runqy-worker -config config.yml

Linux/macOS (from source):

cd runqy-worker
cp config.yml.example config.yml
go run ./cmd/worker

Windows (PowerShell, from source):

cd runqy-worker
Copy-Item config.yml.example config.yml
go run ./cmd/worker

The downloaded config is pre-configured for the quickstart (no changes needed):

server:
  url: "http://localhost:3000"
  api_key: "dev-api-key"

worker:
  queues:
    - "quickstart-oneshot.default"
    - "quickstart-longrunning.default"

The worker will:

  1. Register with the server
  2. Clone the example task code from the runqy repo
  3. Set up a Python virtual environment
  4. Start the Python process

Once these steps complete, the worker is ready to process tasks.

5. Enqueue a Task

In a new terminal:

Linux/macOS:

curl -X POST http://localhost:3000/queue/add \
  -H "Authorization: Bearer dev-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "queue": "quickstart-oneshot.default",
    "timeout": 60,
    "data": {"operation": "uppercase", "data": "hello world"}
  }'

Windows (PowerShell):

curl.exe -X POST http://localhost:3000/queue/add `
  -H "Authorization: Bearer dev-api-key" `
  -H "Content-Type: application/json" `
  -d '{\"queue\": \"quickstart-oneshot.default\", \"timeout\": 60, \"data\": {\"operation\": \"uppercase\", \"data\": \"hello world\"}}'

Response:

{
  "info": {
    "id": "abc123...",
    "state": "pending",
    "queue": "quickstart-oneshot.default",
    ...
  },
  "data": {...}
}

Task ID

Use the id from the response to check the result in the next step.

Queue name shorthand

You can omit the .default suffix when enqueueing. For example, quickstart-oneshot automatically resolves to quickstart-oneshot.default.

Request format: nested vs flat

The /queue/add endpoint accepts two formats:

Nested format (wraps payload in "data"):

{"queue": "myqueue", "timeout": 60, "data": {"operation": "uppercase", "data": "hello"}}

Flat format (all extra fields become the payload):

{"queue": "myqueue", "timeout": 60, "operation": "uppercase", "data": "hello"}

Both are equivalent. In the nested format, the "data" wrapper contains your task payload as-is. Note that in the quickstart example, the inner "data" key is a task input field (not the wrapper) — this is valid but can look confusing at first glance.
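One way to picture the equivalence is as a normalization step that converts a flat body into the nested form. This is a guess at the shape of the server's behavior, not its actual code, and it assumes `queue` and `timeout` are the only envelope fields:

```python
# Envelope fields (assumption based on the examples above); everything
# else in a flat body is treated as the task payload.
RESERVED = {"queue", "timeout"}

def to_nested(body: dict) -> dict:
    """Normalize a flat /queue/add body into the nested form (illustrative)."""
    # Already nested: "data" is a dict wrapper and no stray payload fields remain.
    if "data" in body and isinstance(body["data"], dict) \
            and RESERVED.issuperset(set(body) - {"data"}):
        return body
    envelope = {k: v for k, v in body.items() if k in RESERVED}
    envelope["data"] = {k: v for k, v in body.items() if k not in RESERVED}
    return envelope

flat   = {"queue": "myqueue", "timeout": 60, "operation": "uppercase", "data": "hello"}
nested = {"queue": "myqueue", "timeout": 60,
          "data": {"operation": "uppercase", "data": "hello"}}
# to_nested(flat) and to_nested(nested) yield the same nested body
```

Note how the inner "data" key from the flat form simply becomes one payload field among others inside the wrapper.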

Try Long-Running Mode

To try long-running mode, just enqueue to quickstart-longrunning.default — the worker already listens on both queues.

6. Check the Result

Replace {id} with the task ID from the previous step:

Linux/macOS:

curl http://localhost:3000/queue/{id}

Windows (PowerShell):

curl.exe http://localhost:3000/queue/{id}

Response:

{
  "info": {
    "state": "completed",
    "queue": "quickstart-oneshot.default",
    "result": {"result": "HELLO WORLD"}
  }
}
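A one-shot task may still be pending when you first check, so scripts typically poll until the state changes. The sketch below takes the fetch step as an injected callable so it works with any HTTP client (or a fake, for testing); the `wait_for_result` helper and its parameters are illustrative, not part of runqy:

```python
import time

def wait_for_result(task_id: str, fetch, attempts: int = 30, delay: float = 1.0) -> dict:
    """Poll GET /queue/{id} (via the injected `fetch` callable) until the
    task leaves the pending/active states, then return the final info dict.

    `fetch(task_id)` is expected to return the parsed JSON response shown
    above, i.e. a dict with an "info" key containing "state".
    """
    for _ in range(attempts):
        info = fetch(task_id)["info"]
        if info["state"] not in ("pending", "active"):
            return info
        time.sleep(delay)
    raise TimeoutError(f"task {task_id} did not finish in time")

# With the server running, `fetch` could wrap urllib or requests against
# http://localhost:3000/queue/{id}.
```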

7. Monitor

Visit http://localhost:3000/monitoring/ to see the web dashboard.

Example Task Code

The quickstart uses example tasks from runqy/examples/:

examples/quickstart-oneshot/main.py
from runqy_python import task, run_once

@task
def process(payload: dict) -> dict:
    operation = payload.get("operation", "echo")
    data = payload.get("data")

    if operation == "echo":
        return {"result": data}
    elif operation == "uppercase":
        return {"result": data.upper() if isinstance(data, str) else data}
    elif operation == "double":
        return {"result": data * 2 if isinstance(data, (int, float)) else data}
    else:
        return {"error": f"Unknown operation: {operation}"}

if __name__ == "__main__":
    run_once()
examples/quickstart-longrunning/main.py
from runqy_python import task, load, run

@load
def setup():
    # Initialize resources once at startup
    # (SimpleProcessor is defined in the full example file in the repo)
    return {"processor": SimpleProcessor()}

@task
def process(payload: dict, ctx: dict) -> dict:
    processor = ctx["processor"]
    operation = payload.get("operation", "echo")
    data = payload.get("data")
    result = processor.process(operation, data)
    return {"result": result, "calls": processor.call_count}

if __name__ == "__main__":
    run()

Next Steps