Initial commit

This commit is contained in:
Codex
2026-01-23 11:12:31 +01:00
commit 0c420a8697
27 changed files with 1767 additions and 0 deletions

SPEC.md Normal file

@@ -0,0 +1,158 @@
# Backup Orchestrator (Windows Client + Proxmox Server)
## Executive summary
- A PySide6 Windows client with folder selection, log/progress display, and a scheduling switch calls the FastAPI endpoints on the server.
- The Proxmox server runs a job that connects over SSH to the client's OpenSSH helper and performs an `rsync` pull into `/srv/backup/current`, keeping a history of 20 versions.
- Infrastructure hardening: HTTPS, SSH, least privilege, structured logging, retention/locking, and options for compression and task scheduling.
## Key requirements
### Functional
1. The client lets the user tag one or more folders for backup and enables the “Esegui” button only when the selection is valid.
2. The button sends the SSH credentials (username/password) and paths to the FastAPI backend.
3. The server connects to the client over SSH and runs `rsync` with `--backup --backup-dir=/srv/backup/.history/<run_id>` and without `--delete` (incremental pull).
4. Deletions on the client are never propagated; removed files remain available in `/current` or in the history.
5. Up to 20 historical runs are kept for overwritten files, archiving the previous versions.
6. The GUI shows a live log, a progress bar, and a final “Backup completato” notification or error details.
7. Local configuration (e.g. `config.json`) sits next to the executable and supports profiles, credentials, and a Task Scheduler toggle (to turn the `schtasks` task on/off).
8. The client can enable/disable a scheduler that calls the server API autonomously.
### Non-functional
- Reliability: jobs are idempotent and repeatable, avoiding state corruption when re-run.
- Performance: differential transfer (`rsync` with `-a --info=progress2`, optional `--compress`, `--partial`) and configurable compression.
- Maintainability: clean separation between the GUI, the orchestrator, and the `rsync`/logging runner.
- LAN security: token authentication on the API, HTTPS/SSH channels, least-privilege policies, credential protection (DPAPI or a prompt on every run), log rotation.
## Proposed architecture
### Core components
- **Client GUI (PySide6)**: manages profiles, folder selection, the Task Scheduler switch, and log display/status polling; communicates over HTTPS with a JWT token.
- **Local configuration (`config.json`)**: profiles, folders, API endpoint, compression preferences, scheduler flag.
- **Windows OpenSSH Server (`sshd`) + portable `rsync.exe`**: accepts SSH connections from the server and invokes `rsync --server` to execute the transfers.
- **Orchestrator API (FastAPI + Uvicorn)**: exposes `/auth/login`, `/profiles`, `/backup/start`, `/backup/status/{job_id}`, `/backup/log/{job_id}`, `/health`.
- **Job Runner (systemd or service)**: serializes jobs, takes a lock to avoid overlap, drives the SSH/rsync command, stores logs and progress in the DB, and updates the API state.
- **Metadata Store (SQLite/Postgres)**: persists jobs, logs, state, and retention history.
- **Backup Storage**: `/srv/backup/current` and `/srv/backup/.history/<run_id>`; optional ZFS snapshots.
### Key interactions
```mermaid
flowchart LR
    GUI["Windows Client GUI<br/>(PySide6)"]
    CFG["Local Config<br/>(config.json)"]
    SSHD["Windows OpenSSH Server<br/>(sshd)"]
    RSYNCW["Rsync Binary<br/>(rsync.exe portable)"]
    API["Backup Orchestrator API<br/>(FastAPI + Uvicorn)"]
    RUNNER["Job Runner<br/>(systemd service)"]
    DB["Metadata Store<br/>(SQLite/Postgres)"]
    STORE["Backup Storage<br/>/current + /.history"]
    GUI -->|read/write| CFG
    GUI -->|HTTPS LAN: start/status| API
    API -->|store jobs/log| DB
    API -->|enqueue/trigger| RUNNER
    RUNNER -->|SSH connect, pull| SSHD
    SSHD -->|remote rsync command| RSYNCW
    RUNNER -->|write backup data| STORE
```
### Operational flow (when the user clicks “Esegui”)
```mermaid
sequenceDiagram
    participant U as User
    participant GUI as Client GUI (Win)
    participant API as Orchestrator API (Server)
    participant RUN as Job Runner (Server)
    participant SSH as SSH (Win sshd)
    participant STO as Storage (Server)
    U->>GUI: Selects folders
    GUI->>GUI: Validates folders (>=1) and enables "Esegui"
    U->>GUI: Clicks "Esegui"
    GUI->>API: POST /backup/start (profile, folders, credentials)
    API->>API: Creates job_id with status QUEUED
    API->>RUN: Starts job(job_id)
    RUN->>SSH: ssh user@client "rsync --server ..."
    SSH->>RUN: Streams stdout/stderr (progress/log)
    RUN->>STO: Writes to /current, previous versions to /.history/<run_id>
    RUN->>API: Updates status COMPLETED/FAILED + summary
    GUI->>API: GET /backup/status/{job_id} (poll/stream)
    API->>GUI: status + progress + log
    GUI->>U: "Backup completato" (or error details)
```
### Deployment
```mermaid
flowchart TB
    subgraph LAN["LAN"]
        subgraph WIN["PC Windows 11 (Client)"]
            EXE["BackupClient.exe<br/>(PySide6 packaged)"]
            CJSON["config.json"]
            WSSHD["OpenSSH Server (sshd)"]
            WRSYNC["rsync.exe portable"]
        end
        subgraph PVE["Proxmox Host"]
            subgraph VM["VM Debian (Backup Server)"]
                SAPI["backup-orchestrator<br/>(FastAPI)"]
                SJR["job-runner<br/>(systemd)"]
                SRSYNC["rsync"]
                SDB["db (sqlite/postgres)"]
                SST["/srv/backup/...<br/>(current/.history)"]
            end
        end
    end
    EXE -->|HTTPS 443 LAN| SAPI
    SJR -->|SSH 22 LAN| WSSHD
    SJR -->|local filesystem| SST
```
## API and security
- **Authentication**: `/auth/login` returns a JWT with an expiry; rate limiting + lockout.
- **Profiles**: `/profiles` CRUD (name, folders, scheduling).
- **Backup**: `/backup/start` enqueues the main job; `GET /backup/status/{job_id}`, `GET /backup/log/{job_id}`.
- **Health**: `/health` (authenticated or LAN-only) returns status/version.
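A minimal client-side sketch of this flow using `requests` (host, credentials, and profile id are placeholders; TLS verification is relaxed because of the self-signed certificate):
```python
import time
import requests

BASE = "https://backup-server.local:8443"  # placeholder LAN endpoint
s = requests.Session()
s.verify = False  # self-signed LAN certificate; trust a local CA in production

# /auth/login expects an OAuth2 password form and returns a bearer token.
tok = s.post(f"{BASE}/auth/login",
             data={"username": "backup-admin", "password": "..."}).json()
s.headers["Authorization"] = f"Bearer {tok['access_token']}"

# Start a backup for an existing profile, then poll until it finishes.
job = s.post(f"{BASE}/backup/start", json={
    "profile_id": 1,                  # placeholder profile
    "client_host": "192.168.1.50",    # placeholder Windows client
    "ssh_username": "backupuser",
    "ssh_password": "...",
    "folders": ["C:/Users/demo/Documents"],
}).json()
while True:
    st = s.get(f"{BASE}/backup/status/{job['job_id']}").json()
    print(st["status"], st["progress"])
    if st["status"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(2)
```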
### Protections
- HTTPS with a self-signed certificate or a local CA.
- Token expiry and refresh.
- Path validation (no path traversal, folders must exist); only allow-listed folders are accepted.
- SSH accepted only from the server's IP, with a `backupsvc` user running a restricted shell and holding permissions on `/srv/backup` only.
- Structured JSON logs, rotation via `logrotate`, no passwords in the logs.
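A minimal sketch of that allow-list/traversal check (the allow-list contents are an assumption):
```python
from pathlib import PureWindowsPath

# Hypothetical allow-list of client-side roots the API will accept.
ALLOWED_ROOTS = [PureWindowsPath("C:/Users"), PureWindowsPath("D:/Data")]

def is_allowed(folder: str) -> bool:
    """Reject traversal tricks and anything outside the allow-list."""
    path = PureWindowsPath(folder)
    if not path.is_absolute() or ".." in path.parts:
        return False
    return any(root == path or root in path.parents for root in ALLOWED_ROOTS)

assert is_allowed("C:/Users/demo/Documents")
assert not is_allowed("C:/Users/../Windows/System32")
```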
## Retention and rsync
- Each run gets a `run_id`; `rsync` runs with `--backup --backup-dir=/srv/backup/.history/<run_id>` and `--info=progress2`.
- `current/` holds the live state; `.history/<run_id>` keeps the previous versions of files that changed.
- A cleanup task keeps only the last 20 runs, deleting older `.history` directories.
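A minimal sketch of the pull command this implies, mirroring the job runner later in this commit (host, user, folder, and password are placeholders):
```python
import subprocess

run_id = "0123abcd"  # placeholder run identifier
cmd = [
    "sshpass", "-p", "PLACEHOLDER",  # password held in memory only, never logged
    "rsync", "-a",
    "--backup", f"--backup-dir=/srv/backup/.history/{run_id}",
    "--info=progress2", "--partial", "-z",
    "-e", "ssh -o StrictHostKeyChecking=no",
    "backupuser@192.168.1.50:C:/Users/demo/Documents",  # placeholder client source
    "/srv/backup/current/Documents",                    # destination on the server
]
subprocess.run(cmd, check=True)  # no --delete, so client-side deletions never propagate
```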
## Scheduling and configuration strategy
- Client: a Task Scheduler switch that creates/removes a `schtasks` task launching `BackupClient.exe --auto` (or similar) with a token.
- Optional support for saved credentials (never stored in the file in cleartext) via Windows DPAPI; prompting for the password on every job remains the default (see the DPAPI sketch below).
- API: job queue with locking, a `systemd` service for the runner; configurable `rsync` timeouts/retries.
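A minimal sketch of the optional DPAPI protection, assuming the `pywin32` package (not among this commit's dependencies) is available on the client:
```python
# Assumes pywin32. DPAPI encrypts per Windows user account, so only the same
# account on the same machine can decrypt the stored blob.
import win32crypt

def protect(password: str) -> bytes:
    return win32crypt.CryptProtectData(
        password.encode("utf-8"), "BackupClient SSH", None, None, None, 0
    )

def unprotect(blob: bytes) -> str:
    _description, data = win32crypt.CryptUnprotectData(blob, None, None, None, 0)
    return data.decode("utf-8")
```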
## Implementation phases and estimates
1. **Server setup** (0.5-1 day): Debian on Proxmox, dedicated `/srv/backup` storage, user/permissions, `rsync`, OpenSSH client and service.
2. **FastAPI orchestrator + runner** (1.5-2.5 days): start/status/log API, DB handling (SQLite default, Postgres optional), job queue, automatic retention.
3. **PySide6 client** (2-4 days): folder/profile selection GUI, log/progress, Task Scheduler switch, config.json, PyInstaller packaging.
4. **Windows integration** (1-2 days): OpenSSH/rsync enablement scripts, firewall rules, permission and long-path tests.
5. **Hardening + QA** (1-2 days): logging, rate limiting, manual/scheduled E2E tests, lightweight threat modeling.
## Security and operations checklist
- SSH accepted only from the server; the Windows firewall is restricted to the backup server's IP.
- SSH password never logged; prompt on every job (DPAPI optional).
- Internal HTTPS with managed certificates.
- Input validation (allow-list, no path traversal, sanitization).
- Dedicated Linux user `backupsvc`, directories `700`, logs `640`.
- Job lock to prevent duplicates, `rsync` timeout (see the sketch below).
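A minimal sketch of the lock-plus-timeout item above, assuming a POSIX host and a file lock (the shipped runner serializes jobs through an in-process queue instead):
```python
import fcntl
import subprocess

LOCK_PATH = "/run/backup-orchestrator.lock"  # assumed lock-file location

def run_exclusive(cmd: list[str], timeout: int = 1800) -> int:
    """Run at most one rsync job at a time and kill it past the timeout."""
    with open(LOCK_PATH, "w") as lock:
        # Raises BlockingIOError if another job already holds the lock.
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        try:
            return subprocess.run(cmd, timeout=timeout).returncode
        except subprocess.TimeoutExpired:
            return -1  # treated as FAILED by the caller
```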
## Expected deliverables
- Server repo: FastAPI + runner, provisioning scripts, installation docs, basic tests.
- Client repo: PySide6 GUI, config schema, Task Scheduler script, portable rsync bundle.
- Documentation: user manual, admin runbook, threat model + checklist.
- CI pipeline: lint/test/build/release.
## Additional notes
- The SSH password is passed to the server in memory only, per job; it is never persisted in the DB.
- The client button remains the UX trigger, but the actual copy is a server-orchestrated pull over SSH.

client/FINAL_README.md Normal file

@@ -0,0 +1,39 @@
# Packaging the Backup Client (PyInstaller flow)
This guide explains how to turn the PySide6 UI into `BackupClient.exe` and bundle the portable configuration, scheduler helpers, and ship-ready assets.
## 1. Pre-requisites
1. Install Python 3.10+ on the Windows build machine.
2. Create a virtual environment inside `/path/to/projbck/client` and activate it.
3. Install the Python dependencies with `pip install -e .` to grab PySide6, requests, python-dotenv, and PyInstaller.
4. Ensure OpenSSH/rsync/sshpass and the scheduler PowerShell helper already exist inside the client directory (`scheduler/manage_scheduler.ps1`, `config.json`).
## 2. Building the executable
1. Open `cmd.exe` or PowerShell (elevated if you need scheduler permissions).
2. Navigate to the project root: `cd \path\to\projbck`.
3. Run the provided helper script: `client\package_client.bat`. It will:
- Invoke PyInstaller with `--onefile`, adding `config.json` to the exe root and the `scheduler` folder under the same directory.
- Expect PyInstaller to create `client\dist\BackupClient`.
- Call `client\post_build.bat dist\BackupClient` to copy `config.json` and the `scheduler` folder explicitly (in case PyInstaller didn't preserve them) next to `BackupClient.exe`.
4. Alternatively, if you prefer Bash/WSL, `./client/package_client.sh` performs the same operations.
## 3. What travels with the exe
The `dist\BackupClient` directory after running the helper will contain:
- `BackupClient.exe`: the packaged PySide6 client.
- `config.json`: portable configuration storing server URL, profiles, scheduler flag, and API token state.
- `scheduler\manage_scheduler.ps1`: PowerShell helper that creates/removes the Task Scheduler task when the checkbox is toggled.
## 4. Post-build validation
1. Confirm `BackupClient.exe` finds `config.json` next to it; this file must remain writable for the UI to save profiles.
2. Ensure `scheduler/manage_scheduler.ps1` exists relative to the executable at `scheduler/manage_scheduler.ps1` so the scheduler toggle can invoke it.
3. Run `BackupClient.exe` to verify the UI opens, loads the config, and `Esegui backup` works.
4. Toggle the scheduler checkbox to confirm PowerShell can find and execute `manage_scheduler.ps1` (it will create a task such as `BackupClient-<profilo>` under Task Scheduler).
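A small sanity check for the bundle layout, assuming the default `dist\BackupClient` output described above:
```python
from pathlib import Path

bundle = Path(r"client\dist\BackupClient")  # assumed output directory
for rel in ("BackupClient.exe", "config.json", r"scheduler\manage_scheduler.ps1"):
    print(rel, "OK" if (bundle / rel).exists() else "MISSING")
```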
## 5. Deployment
Package the entire `dist\BackupClient` folder. The deployment artifact must keep:
- `BackupClient.exe`
- `config.json`
- `scheduler\manage_scheduler.ps1`
- Any optional assets you add later (icons, documentation, etc.)
If you automate releases, call `client\package_client.bat` inside your CI/CD script and archive the resulting `dist\BackupClient` directory.

client/README.md Normal file

@@ -0,0 +1,22 @@
# Backup Client (PySide6)
Portable Windows client UI that drives the Proxmox FastAPI orchestrator via HTTPS.
## Features
- Profile management (name + folder list) stored next to the executable in `config.json`.
- Folders list, Task Scheduler toggle, SSH credentials (username/password) for the pull backup.
- Progress bar, status label and log stream fed by the orchestrator's `/backup/status` and `/backup/log` endpoints.
- Login prompt when the stored API token expires or is missing.
## Getting started
1. Install dependencies (`pip install -e .` inside the `client` folder or build a PyInstaller bundle).
2. Launch the client: `python -m backup_client.ui` (the module entry point calls `main()`).
3. Use the UI to create/save profiles, add folders, enter the Windows host/SSH info, and click **Esegui backup**.
4. Config (server URL, profiles, scheduler flag, token) is persisted in `config.json` sitting beside the executable for portability.
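For reference, a hypothetical `config.json` with the fields `src/config.py` reads and writes:
```json
{
  "server_url": "https://backup-server.local:8443",
  "api_user": "backup-admin",
  "token": null,
  "token_expires_at": null,
  "profiles": [
    {
      "name": "documents",
      "folders": ["C:\\Users\\demo\\Documents"],
      "schedule_enabled": false,
      "server_id": null
    }
  ],
  "active_profile": "documents",
  "scheduler_enabled": false
}
```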
## Scheduling and packaging notes
- The Task Scheduler switch is persisted in `config.json` and wired to `schtasks` through the PowerShell helper below.
- `rsync.exe`, `OpenSSH`, and wrapper scripts live on the Windows client; this UI only triggers the server pull.
- See `scheduler/manage_scheduler.ps1` for a helper that takes `-Action Enable|Disable`, `-TaskName`, `-ExecutablePath`, and optional profile metadata to build the `schtasks /Create` call. The checkbox invokes that script: enabling the scheduler creates the `ONLOGON` task that runs `BackupClient.exe --auto --profile "<name>"`, and disabling the switch removes it (see the usage sketch after this list).
- After PyInstaller finishes, run `post_build.bat path\to\dist\<bundle>` to copy `config.json` and the `scheduler` folder next to `BackupClient.exe` so the runtime can find them.
- Run `package_client.sh` from Bash/WSL or `package_client.bat` from cmd/PowerShell: both invoke PyInstaller with the required `--add-data` flags, target `dist/BackupClient`, and call `post_build.bat` so `config.json` and the scheduler helper travel with the exe.
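For testing the toggle outside the UI, the helpers in `src/scheduler.py` can be driven directly; a minimal sketch (the exe path and the import path are assumptions):
```python
from pathlib import Path

# Module path depends on how the package is installed; adjust the import to match.
from src.scheduler import enable_scheduler, disable_scheduler

exe = Path(r"C:\BackupClient\BackupClient.exe")  # assumed deployed exe path
print(enable_scheduler("BackupClient-default", "default", exe))   # creates the ONLOGON task
print(disable_scheduler("BackupClient-default", "default", exe))  # removes it again
```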

client/package_client.bat Normal file

@@ -0,0 +1,25 @@
@echo off
REM Usage: package_client.bat
setlocal enabledelayedexpansion
pushd "%~dp0"
REM --name/--distpath make PyInstaller emit the exe in dist\BackupClient, as post_build expects
set PYINSTALLER_ARGS=--onefile --name BackupClient --distpath dist\BackupClient --add-data "config.json;." --add-data "scheduler;scheduler" src/__main__.py
python -m PyInstaller %PYINSTALLER_ARGS%
if errorlevel 1 (
echo PyInstaller failed
popd
exit /b 1
)
set DIST_DIR=dist\BackupClient
if not exist "%DIST_DIR%" (
echo ERROR: expected %DIST_DIR% not created
popd
exit /b 1
)
"%CD%\post_build.bat" "%DIST_DIR%"
if errorlevel 1 (
echo post_build failed
popd
exit /b 1
)
popd
echo Packaging complete.

client/package_client.sh Executable file

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euo pipefail
cd "$(dirname "$0")"  # run from the client folder regardless of the caller's CWD
DIST_DIR="dist/BackupClient"
PYINSTALLER_ARGS=(
    "--onefile"
    "--name" "BackupClient"
    "--distpath" "${DIST_DIR}"
    "--add-data" "config.json;."   # ';' separator assumes a Windows Python (e.g. Git Bash)
    "--add-data" "scheduler;scheduler"
    "src/__main__.py"
)
python -m PyInstaller "${PYINSTALLER_ARGS[@]}"
if [ ! -d "${DIST_DIR}" ]; then
echo "ERROR: expected dist directory ${DIST_DIR} not created"
exit 1
fi
"$(pwd)/post_build.bat" "${DIST_DIR}"

client/post_build.bat Normal file

@@ -0,0 +1,27 @@
@echo off
REM Usage: post_build.bat <dist_dir>
set DIST_DIR=%~1
if "%DIST_DIR%"=="" (
echo Usage: %~nx0 path_to_dist
exit /b 1
)
if not exist "%DIST_DIR%" (
echo Target directory %DIST_DIR% does not exist
exit /b 1
)
set ROOT_DIR=%~dp0
copy /Y "%ROOT_DIR%\config.json" "%DIST_DIR%\config.json" >nul
if errorlevel 1 (
echo Failed to copy config.json
exit /b 1
)
if not exist "%DIST_DIR%\scheduler" (
mkdir "%DIST_DIR%\scheduler"
)
xcopy /Y /E "%ROOT_DIR%\scheduler" "%DIST_DIR%\scheduler" >nul
if errorlevel 1 (
echo Failed to copy scheduler scripts
exit /b 1
)
echo Deploy assets copied to %DIST_DIR%
exit /b 0

client/pyproject.toml Normal file

@@ -0,0 +1,15 @@
[project]
name = "backup-client"
version = "0.1.0"
description = "PySide6 Windows backup client for orchestrator"
authors = [
    { name = "truetype74", email = "max.mauri@gmail.com" },
    { name = "Codex", email = "dev@example.com" },
]
requires-python = ">=3.10"
dependencies = [
    "PySide6>=6.9,<7",
    "requests>=2.33,<3",
    "python-dotenv>=1.0,<2",
    "pyinstaller>=6.0",  # FINAL_README expects `pip install -e .` to provide PyInstaller
]

[build-system]
requires = ["hatchling>=1.8"]
build-backend = "hatchling.build"

client/scheduler/manage_scheduler.ps1 Normal file

@@ -0,0 +1,59 @@
param(
[Parameter(Mandatory=$true)]
[ValidateSet("Enable","Disable")]
[string]$Action,
[Parameter(Mandatory=$true)]
[string]$TaskName,
[Parameter(Mandatory=$true)]
[string]$ExecutablePath,
[string]$ProfileName = "default",
[string]$Trigger = "ONLOGON",
[string]$StartTime = "02:00"
)
function Write-Result {
    param([bool]$Success, [string]$Message)
    # Windows PowerShell 5.1 has no ternary operator; use if/else expressions instead.
    $label = if ($Success) { 'SUCCESS' } else { 'FAIL' }
    Write-Output "$label : $Message"
    if ($Success) { exit 0 } else { exit 1 }
}
if (-not (Test-Path -Path $ExecutablePath)) {
Write-Result -Success:$false -Message "Executable '$ExecutablePath' non trovato"
}
$taskArguments = "--auto --profile '$ProfileName'"
switch ($Action) {
'Enable' {
$existing = schtasks /Query /TN $TaskName 2>$null
if ($LASTEXITCODE -eq 0) {
schtasks /Delete /TN $TaskName /F | Out-Null
}
$escapedExe = "`"$ExecutablePath`""
$command = "$escapedExe $taskArguments"
$createArgs = @(
"/Create",
"/TN", $TaskName,
"/TR", $command,
"/SC", $Trigger,
"/ST", $StartTime,
"/RL", "HIGHEST",
"/F"
)
$result = schtasks @createArgs
if ($LASTEXITCODE -ne 0) {
Write-Result -Success:$false -Message "Creazione task fallita: $result"
}
Write-Result -Success:$true -Message "Task '$TaskName' abilitato"
}
'Disable' {
$result = schtasks /Delete /TN $TaskName /F
if ($LASTEXITCODE -ne 0) {
Write-Result -Success:$false -Message "Cancellazione task fallita: $result"
}
Write-Result -Success:$true -Message "Task '$TaskName' disabilitato"
}
}

client/src/__init__.py Normal file

@@ -0,0 +1 @@
"""Backup client package."""

client/src/__main__.py Normal file

@@ -0,0 +1,4 @@
from .ui import main
if __name__ == "__main__":
main()

client/src/api_client.py Normal file

@@ -0,0 +1,120 @@
from __future__ import annotations
from datetime import datetime
from typing import Any, Dict, List
import requests
import urllib3
from .config import (
AppConfig,
ProfileConfig,
is_token_valid,
save_config,
update_token,
)
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
class ApiClientError(Exception):
pass
class ApiClient:
def __init__(self, config: AppConfig) -> None:
self.config = config
self._session = requests.Session()
self._session.verify = False
def _headers(self) -> Dict[str, str]:
headers = {"Content-Type": "application/json"}
token = self.config.token
if token:
headers["Authorization"] = f"Bearer {token}"
return headers
def login(self, password: str) -> None:
payload = {"username": self.config.api_user, "password": password}
response = self._session.post(f"{self.config.server_url}/auth/login", data=payload, timeout=10)
if response.status_code != 200:
raise ApiClientError("Login failed")
body = response.json()
token = body.get("access_token")
expires_at_raw = body.get("expires_at")
if not token or not expires_at_raw:
raise ApiClientError("Invalid auth payload")
expires_at = datetime.fromisoformat(expires_at_raw)
update_token(self.config, token, expires_at)
def ensure_authenticated(self) -> None:
if is_token_valid(self.config):
return
raise ApiClientError("Client not authenticated (login required)")
def ensure_remote_profile(self, profile: ProfileConfig) -> ProfileConfig:
if profile.server_id:
return profile
self.ensure_authenticated()
payload = {
"name": profile.name,
"folders": profile.folders,
"description": None,
"schedule_enabled": profile.schedule_enabled,
}
response = self._session.post(
f"{self.config.server_url}/profiles", headers=self._headers(), json=payload, timeout=10
)
if response.status_code not in (200, 201):
raise ApiClientError("Unable to create profile on server")
data = response.json()
profile.server_id = data.get("id")
save_config(self.config)
return profile
def start_backup(
self,
*,
profile: ProfileConfig,
folders: List[str],
client_host: str,
ssh_username: str,
ssh_password: str,
) -> int:
self.ensure_authenticated()
remote_profile = self.ensure_remote_profile(profile)
if not remote_profile.server_id:
raise ApiClientError("Profile missing server identifier")
payload: Dict[str, Any] = {
"profile_id": remote_profile.server_id,
"client_host": client_host,
"ssh_username": ssh_username,
"ssh_password": ssh_password,
"folders": folders,
}
response = self._session.post(
f"{self.config.server_url}/backup/start",
headers=self._headers(),
json=payload,
timeout=20,
)
if response.status_code != 200:
raise ApiClientError(f"Backup start failed: {response.text}")
        job_id = response.json().get("job_id")
        if job_id is None:
            raise ApiClientError("Backup start response missing job_id")
        return job_id
def job_status(self, job_id: int) -> Dict[str, Any]:
response = self._session.get(
f"{self.config.server_url}/backup/status/{job_id}", headers=self._headers(), timeout=10
)
if response.status_code != 200:
raise ApiClientError("Unable to fetch job status")
return response.json()
def job_log(self, job_id: int) -> List[str]:
response = self._session.get(
f"{self.config.server_url}/backup/log/{job_id}", headers=self._headers(), timeout=10
)
if response.status_code != 200:
raise ApiClientError("Unable to fetch job log")
return response.json().get("lines", [])

client/src/config.py Normal file

@@ -0,0 +1,110 @@
from __future__ import annotations
import json
import sys
from dataclasses import asdict, dataclass, field
from datetime import datetime
from pathlib import Path
from typing import List, Optional
CONFIG_PATH = Path(sys.argv[0]).resolve().parent / "config.json"
def _ensure_config_path() -> Path:
CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
return CONFIG_PATH
@dataclass
class ProfileConfig:
name: str
folders: List[str]
schedule_enabled: bool = False
server_id: Optional[int] = None
@dataclass
class AppConfig:
server_url: str = "https://backup-server.local:8443"
api_user: str = "backup-admin"
token: Optional[str] = None
token_expires_at: Optional[str] = None
profiles: List[ProfileConfig] = field(default_factory=list)
active_profile: Optional[str] = None
scheduler_enabled: bool = False
@property
def active_profile_obj(self) -> Optional[ProfileConfig]:
if not self.active_profile:
return None
for profile in self.profiles:
if profile.name == self.active_profile:
return profile
return None
def to_dict(self) -> dict:
payload = asdict(self)
payload["profiles"] = [asdict(profile) for profile in self.profiles]
return payload
def load_config() -> AppConfig:
path = _ensure_config_path()
if not path.exists():
default = AppConfig()
save_config(default)
return default
raw = json.loads(path.read_text(encoding="utf-8"))
profiles = [ProfileConfig(**profile) for profile in raw.get("profiles", [])]
return AppConfig(
server_url=raw.get("server_url", "https://backup-server.local:8443"),
api_user=raw.get("api_user", "backup-admin"),
token=raw.get("token"),
token_expires_at=raw.get("token_expires_at"),
profiles=profiles,
active_profile=raw.get("active_profile"),
scheduler_enabled=raw.get("scheduler_enabled", False),
)
def save_config(config: AppConfig) -> None:
path = _ensure_config_path()
path.write_text(json.dumps(config.to_dict(), indent=2), encoding="utf-8")
def clear_token(config: AppConfig) -> None:
config.token = None
config.token_expires_at = None
save_config(config)
def is_token_valid(config: AppConfig) -> bool:
if not config.token or not config.token_expires_at:
return False
try:
expires = datetime.fromisoformat(config.token_expires_at)
except ValueError:
return False
return expires > datetime.utcnow()
def update_token(config: AppConfig, token: str, expires_at: datetime) -> None:
config.token = token
config.token_expires_at = expires_at.isoformat()
save_config(config)
def find_profile(config: AppConfig, name: str) -> Optional[ProfileConfig]:
for profile in config.profiles:
if profile.name == name:
return profile
return None
def ensure_profile(config: AppConfig, profile: ProfileConfig) -> ProfileConfig:
existing = find_profile(config, profile.name)
if existing:
return existing
config.profiles.append(profile)
save_config(config)
return profile

client/src/scheduler.py Normal file

@@ -0,0 +1,76 @@
from __future__ import annotations
import shutil
import subprocess
import sys
from pathlib import Path
class SchedulerError(Exception):
"""Raised when scheduler helper fails."""
def _sanitize(name: str) -> str:
return "".join(ch for ch in name if ch.isalnum() or ch in "-_").strip() or "default"
def _script_path() -> Path:
"""Return the path to the helper PowerShell script relative to the executable."""
base = Path(sys.argv[0]).resolve().parent
return base / "scheduler" / "manage_scheduler.ps1"
def _powershell_args(
action: str,
task_name: str,
executable: Path,
profile_name: str,
trigger: str = "ONLOGON",
start_time: str = "02:00",
) -> list[str]:
return [
shutil.which("powershell"),
"-ExecutionPolicy",
"Bypass",
"-File",
str(_script_path()),
"-Action",
action,
"-TaskName",
task_name,
"-ExecutablePath",
str(executable),
"-ProfileName",
profile_name,
"-Trigger",
trigger,
"-StartTime",
start_time,
]
def _run(action: str, task_name: str, profile_name: str, executable: Path) -> str:
script = _script_path()
if not script.exists():
raise SchedulerError(f"Script helpers not found at {script}")
ps_cmd = _powershell_args(action, task_name, executable, profile_name)
if not ps_cmd[0]:
raise SchedulerError("PowerShell not found in PATH")
result = subprocess.run(
[arg for arg in ps_cmd if arg], # filter None
capture_output=True,
text=True,
)
output = result.stdout.strip() or result.stderr.strip()
if result.returncode != 0:
raise SchedulerError(output or "Errore sconosciuto durante la configurazione dello scheduler")
return output
def enable_scheduler(task_name: str, profile_name: str, executable: Path) -> str:
return _run("Enable", task_name, _sanitize(profile_name), executable)
def disable_scheduler(task_name: str, profile_name: str, executable: Path) -> str:
return _run("Disable", task_name, _sanitize(profile_name), executable)

client/src/ui.py Normal file

@@ -0,0 +1,382 @@
from __future__ import annotations
import sys
import time
from pathlib import Path
from typing import List, Optional
from PySide6.QtCore import QThread, Signal
from PySide6.QtWidgets import (
QApplication,
QCheckBox,
QFormLayout,
QGridLayout,
QHBoxLayout,
QLabel,
QLineEdit,
QListWidget,
QMainWindow,
QMessageBox,
QPushButton,
QPlainTextEdit,
QProgressBar,
QVBoxLayout,
QWidget,
QFileDialog,
QComboBox,
QInputDialog,
)
from .api_client import ApiClient, ApiClientError
from .config import (
AppConfig,
ProfileConfig,
clear_token,
find_profile,
is_token_valid,
load_config,
save_config,
)
from .scheduler import SchedulerError, disable_scheduler, enable_scheduler
class BackupWorker(QThread):
status_update = Signal(dict)
log_update = Signal(list)
completed = Signal(bool, str)
def __init__(
self,
api_client: ApiClient,
profile: ProfileConfig,
folders: List[str],
client_host: str,
ssh_username: str,
ssh_password: str,
) -> None:
super().__init__()
self.api_client = api_client
self.profile = profile
self.folders = folders
self.client_host = client_host
self.ssh_username = ssh_username
self.ssh_password = ssh_password
def run(self) -> None:
try:
job_id = self.api_client.start_backup(
profile=self.profile,
folders=self.folders,
client_host=self.client_host,
ssh_username=self.ssh_username,
ssh_password=self.ssh_password,
)
except ApiClientError as exc:
self.completed.emit(False, str(exc))
return
status: Optional[dict] = None
while True:
try:
status = self.api_client.job_status(job_id)
except ApiClientError as exc:
self.completed.emit(False, str(exc))
return
self.status_update.emit(status)
self.log_update.emit(status.get("last_log_lines", []))
state = status.get("status")
if state in ("COMPLETED", "FAILED"):
break
time.sleep(2)
try:
final_log = self.api_client.job_log(job_id)
self.log_update.emit(final_log)
except ApiClientError:
pass
        summary = status.get("summary") if status else ""
        success = bool(status and status.get("status") == "COMPLETED")
        self.completed.emit(success, summary or ("Backup completato" if success else "Backup fallito"))
class MainWindow(QMainWindow):
def __init__(self, config: AppConfig) -> None:
super().__init__()
self.config = config
self.api_client = ApiClient(config)
self.worker: Optional[BackupWorker] = None
self._log_buffer: List[str] = []
self._setup_ui()
self._load_profiles()
self._sync_scheduler()
self._update_run_state()
def _setup_ui(self) -> None:
self.setWindowTitle("Backup Client Windows")
central = QWidget()
self.setCentralWidget(central)
layout = QVBoxLayout(central)
profile_layout = QGridLayout()
profile_layout.addWidget(QLabel("Profilo:"), 0, 0)
self.profile_combo = QComboBox()
profile_layout.addWidget(self.profile_combo, 0, 1)
profile_layout.addWidget(QLabel("Nome:"), 1, 0)
self.profile_line = QLineEdit()
profile_layout.addWidget(self.profile_line, 1, 1)
self.save_profile_btn = QPushButton("Salva profilo")
profile_layout.addWidget(self.save_profile_btn, 2, 0)
self.delete_profile_btn = QPushButton("Elimina profilo")
profile_layout.addWidget(self.delete_profile_btn, 2, 1)
layout.addLayout(profile_layout)
folders_layout = QHBoxLayout()
self.folders_list = QListWidget()
folders_layout.addWidget(self.folders_list)
folders_btn_layout = QVBoxLayout()
self.add_folder_btn = QPushButton("Aggiungi cartella")
self.remove_folder_btn = QPushButton("Rimuovi cartella")
folders_btn_layout.addWidget(self.add_folder_btn)
folders_btn_layout.addWidget(self.remove_folder_btn)
folders_layout.addLayout(folders_btn_layout)
layout.addLayout(folders_layout)
creds_layout = QFormLayout()
self.client_host_input = QLineEdit()
self.ssh_user_input = QLineEdit()
self.ssh_password_input = QLineEdit()
self.ssh_password_input.setEchoMode(QLineEdit.Password)
creds_layout.addRow("Host client:", self.client_host_input)
creds_layout.addRow("SSH user:", self.ssh_user_input)
creds_layout.addRow("SSH password:", self.ssh_password_input)
layout.addLayout(creds_layout)
self.scheduler_checkbox = QCheckBox("Attiva Task Scheduler locale")
layout.addWidget(self.scheduler_checkbox)
self.run_button = QPushButton("Esegui backup")
layout.addWidget(self.run_button)
self.progress_bar = QProgressBar()
self.progress_bar.setRange(0, 100)
layout.addWidget(self.progress_bar)
self.status_label = QLabel("Pronto")
layout.addWidget(self.status_label)
self.log_output = QPlainTextEdit()
self.log_output.setReadOnly(True)
layout.addWidget(self.log_output)
self.profile_combo.currentTextChanged.connect(self._on_profile_changed)
self.save_profile_btn.clicked.connect(self.save_profile)
self.delete_profile_btn.clicked.connect(self.delete_profile)
self.add_folder_btn.clicked.connect(self.add_folder)
self.remove_folder_btn.clicked.connect(self.remove_folder)
self.folders_list.itemSelectionChanged.connect(self._update_run_state)
self.scheduler_checkbox.toggled.connect(self._on_scheduler_toggled)
self.run_button.clicked.connect(self.run_backup)
def _load_profiles(self) -> None:
self.profile_combo.blockSignals(True)
self.profile_combo.clear()
for profile in self.config.profiles:
self.profile_combo.addItem(profile.name)
self.profile_combo.blockSignals(False)
target = self.config.active_profile
if target:
index = self.profile_combo.findText(target)
if index >= 0:
self.profile_combo.setCurrentIndex(index)
profile = find_profile(self.config, target)
if profile:
self._apply_profile(profile)
elif self.config.profiles:
first = self.config.profiles[0]
self.profile_combo.setCurrentIndex(0)
self._apply_profile(first)
def _apply_profile(self, profile: ProfileConfig) -> None:
self.profile_line.setText(profile.name)
self.folders_list.clear()
for folder in profile.folders:
self.folders_list.addItem(folder)
self.config.active_profile = profile.name
save_config(self.config)
self._update_run_state()
def _sync_scheduler(self) -> None:
self.scheduler_checkbox.setChecked(self.config.scheduler_enabled)
    def _on_scheduler_toggled(self, state: bool) -> None:
        # Capture the previous value before mutating it, so a failed toggle can roll back.
        prev_state = self.config.scheduler_enabled
        task_name = f"BackupClient-{self.profile_line.text().strip() or 'default'}"
        exe_path = Path(sys.argv[0]).resolve()
        profile_name = self.profile_line.text().strip() or "default"
try:
message = (
enable_scheduler(task_name, profile_name, exe_path) if state else disable_scheduler(task_name, profile_name, exe_path)
)
self.config.scheduler_enabled = state
save_config(self.config)
self.status_label.setText(message)
except SchedulerError as exc:
QMessageBox.warning(self, "Scheduler", str(exc))
self.scheduler_checkbox.blockSignals(True)
self.scheduler_checkbox.setChecked(prev_state)
self.scheduler_checkbox.blockSignals(False)
self.status_label.setText("Scheduler non aggiornato")
def _on_profile_changed(self, name: str) -> None:
profile = find_profile(self.config, name)
if profile:
self._apply_profile(profile)
else:
self.profile_line.clear()
self.folders_list.clear()
self.config.active_profile = None
save_config(self.config)
self._update_run_state()
def add_folder(self) -> None:
folder = QFileDialog.getExistingDirectory(self, "Seleziona cartella")
if folder:
if folder not in self._current_folders():
self.folders_list.addItem(folder)
self._update_run_state()
def remove_folder(self) -> None:
selected = self.folders_list.selectedItems()
for item in selected:
self.folders_list.takeItem(self.folders_list.row(item))
self._update_run_state()
def _current_folders(self) -> List[str]:
return [self.folders_list.item(i).text() for i in range(self.folders_list.count())]
def _update_run_state(self) -> None:
has_folders = bool(self._current_folders())
self.run_button.setEnabled(has_folders)
def save_profile(self) -> None:
name = self.profile_line.text().strip()
if not name:
QMessageBox.warning(self, "Profilo", "Inserisci un nome profilo")
return
folders = self._current_folders()
if not folders:
QMessageBox.warning(self, "Profilo", "Seleziona almeno una cartella")
return
profile = find_profile(self.config, name)
if profile:
profile.folders = folders
profile.schedule_enabled = self.scheduler_checkbox.isChecked()
else:
profile = ProfileConfig(name=name, folders=folders, schedule_enabled=self.scheduler_checkbox.isChecked())
self.config.profiles.append(profile)
self.config.active_profile = name
save_config(self.config)
self._load_profiles()
self.status_label.setText(f"Profilo '{name}' salvato")
def delete_profile(self) -> None:
name = self.profile_line.text().strip()
profile = find_profile(self.config, name)
if not profile:
return
self.config.profiles.remove(profile)
if self.config.active_profile == name:
self.config.active_profile = None
save_config(self.config)
self._load_profiles()
self.status_label.setText(f"Profilo '{name}' eliminato")
def run_backup(self) -> None:
if self.worker and self.worker.isRunning():
return
if not is_token_valid(self.config):
if not self._prompt_login():
return
profile_name = self.profile_line.text().strip()
if not profile_name:
QMessageBox.warning(self, "Backup", "Inserisci un nome profilo")
return
folders = self._current_folders()
if not folders:
QMessageBox.warning(self, "Backup", "Seleziona almeno una cartella")
return
client_host = self.client_host_input.text().strip()
ssh_user = self.ssh_user_input.text().strip()
ssh_password = self.ssh_password_input.text().strip()
if not client_host or not ssh_user or not ssh_password:
QMessageBox.warning(self, "Backup", "Inserire host e credenziali SSH")
return
profile = find_profile(self.config, profile_name)
if not profile:
profile = ProfileConfig(name=profile_name, folders=folders, schedule_enabled=self.scheduler_checkbox.isChecked())
self.config.profiles.append(profile)
else:
profile.folders = folders
profile.schedule_enabled = self.scheduler_checkbox.isChecked()
self.config.active_profile = profile_name
save_config(self.config)
self.worker = BackupWorker(
api_client=self.api_client,
profile=profile,
folders=folders,
client_host=client_host,
ssh_username=ssh_user,
ssh_password=ssh_password,
)
self.worker.status_update.connect(self._handle_status)
self.worker.log_update.connect(self._handle_log)
self.worker.completed.connect(self._handle_completion)
self.run_button.setEnabled(False)
self.progress_bar.setValue(0)
self.status_label.setText("Avvio backup...")
self.worker.start()
def _handle_status(self, data: dict) -> None:
progress = data.get("progress", 0)
status = data.get("status", "")
self.progress_bar.setValue(progress)
self.status_label.setText(status)
def _handle_log(self, lines: List[str]) -> None:
if not lines:
return
self._log_buffer.extend(lines)
self._log_buffer = self._log_buffer[-500:]
self.log_output.setPlainText("\n".join(self._log_buffer))
self.log_output.verticalScrollBar().setValue(self.log_output.verticalScrollBar().maximum())
def _handle_completion(self, success: bool, message: str) -> None:
self.run_button.setEnabled(True)
self.status_label.setText(message)
QMessageBox.information(self, "Backup", "Backup completato" if success else f"Errore: {message}")
def _prompt_login(self) -> bool:
password, ok = QInputDialog.getText(self, "Accesso API", "Password API", QLineEdit.Password)
if not ok:
return False
try:
self.api_client.login(password)
self.status_label.setText("Autenticato")
return True
except ApiClientError as exc:
QMessageBox.warning(self, "Accesso", str(exc))
clear_token(self.config)
return False
def main() -> None:
app = QApplication(sys.argv)
window = MainWindow(load_config())
window.show()
app.exec()
if __name__ == "__main__":
main()

client/windows-helpers/README.md Normal file

@@ -0,0 +1,27 @@
# Windows Client Tooling Setup
This folder holds helpers you run on the Windows host that will be pulled by the Proxmox orchestrator.
## 1. Prerequisites on Windows
- Run PowerShell with Administrative privileges.
- Ensure the OpenSSH capability is available (the setup script installs OpenSSH.Server on demand; modern Windows builds ship it as an optional feature).
- Copy a portable `rsync.exe` (and its DLLs) to `windows-helpers/assets/rsync.zip` so the setup script can expand it into `C:\BackupClient\bin`.
## 2. Run the helper
1. Open PowerShell in this folder.
2. Execute `.\setup_openssh_rsync.ps1` (adjust the path if you copy the scripts elsewhere) with optional parameters:
```powershell
.\setup_openssh_rsync.ps1 -InstallDir C:\BackupClient -RsyncZipPath .\assets\rsync.zip
```
3. The script:
- installs/starts the OpenSSH Server feature, sets `sshd` to auto-start and opens port 22.
- creates `C:\BackupClient` and copies the `rsync` binary into `C:\BackupClient\bin`.
## 3. Post-setup checks
- `sshd` should be running (`Get-Service sshd`).
- The firewall rule `BackupClient SSH` allows inbound TCP 22 on private/domain networks.
- From the Proxmox server, `ssh backupuser@<windows_ip>` succeeds and the `rsync.exe` inside `C:\BackupClient\bin` can be invoked.
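A quick scripted version of that check, run from the Proxmox side (host and user mirror the example above; you will be prompted for the SSH password):
```python
import subprocess

# Assumed Windows host/user from the check above.
result = subprocess.run(
    ["ssh", "backupuser@192.168.1.50", r"C:\BackupClient\bin\rsync.exe --version"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```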
## 4. Notes
- Keep the `rsync.exe` bundle in the installer so the orchestrator can invoke `rsync --server` over SSH.
- Store any helper scripts and configuration files near the packaged client so the scheduler toggle and future automation can find them.

client/windows-helpers/setup_openssh_rsync.ps1 Normal file

@@ -0,0 +1,45 @@
param(
[string]$InstallDir = "C:\BackupClient",
    [string]$RsyncZipPath = "$PSScriptRoot\assets\rsync.zip"
)
function Ensure-Directory {
param([string]$Path)
if (-not (Test-Path $Path)) {
New-Item -ItemType Directory -Path $Path -Force | Out-Null
}
}
function Install-OpenSshServer {
$capability = Get-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
if ($capability.State -ne "Installed") {
Write-Host "Installing OpenSSH.Server..."
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 | Out-Null
}
Set-Service sshd -StartupType Automatic
Start-Service sshd
}
function Configure-Firewall {
$rule = Get-NetFirewallRule -DisplayName "BackupClient SSH" -ErrorAction SilentlyContinue
if (-not $rule) {
New-NetFirewallRule -DisplayName "BackupClient SSH" -Direction Inbound -Action Allow -Protocol TCP -LocalPort 22 -Profile Private,Domain
}
}
function Deploy-Rsync {
$binDir = Join-Path $InstallDir "bin"
Ensure-Directory $binDir
if (Test-Path $RsyncZipPath) {
Expand-Archive -Path $RsyncZipPath -DestinationPath $binDir -Force
} else {
Write-Warning "Rsync zip not found at $RsyncZipPath, expecting rsync.exe already present in $binDir"
}
}
Ensure-Directory $InstallDir
Install-OpenSshServer
Configure-Firewall
Deploy-Rsync
Write-Host "OpenSSH + rsync helper ready in $InstallDir"

resume.txt Normal file

@@ -0,0 +1 @@
codex resume 019be9e2-cfbb-7c42-99ef-e72404473ac9

server/.env.example Normal file

@@ -0,0 +1,6 @@
API_USERNAME=backup-admin
API_PASSWORD=SuperSecret123
DATABASE_URL=sqlite:///data/backup.db
BACKUP_BASE=/srv/backup
LOG_DIR=./logs
TOKEN_LIFETIME_MINUTES=120

server/README.md Normal file

@@ -0,0 +1,26 @@
# Backup Orchestrator Server
Backend orchestrator for the Windows-to-Proxmox rsync pull pipeline.
## Structure
- `app/config.py`: settings derived from `.env` and helpers for backup/current/history/log directories.
- `app/models.py`: SQLModel schema for jobs, profiles, tokens and retention history.
- `app/database.py`: SQLite engine and helpers.
- `app/job_runner.py`: threaded job queue that executes `rsync` over SSH, streams logs, updates job status, and enforces 20-run retention.
- `app/main.py`: FastAPI application exposing auth, profile CRUD, backup start/status/log, and health.
## Getting started
1. Copy `.env.example` to `.env` and adjust values (especially `DATABASE_URL`, `BACKUP_BASE`, and credentials).
2. Create a Python environment and install the dependencies (e.g. `pip install .`).
3. Start the API/runner (the client defaults to `https://backup-server.local:8443`, so add TLS flags when exposing it directly):
```bash
uvicorn app.main:app --host 0.0.0.0 --port 8443
# for HTTPS, point uvicorn at your certificate pair:
# uvicorn app.main:app --host 0.0.0.0 --port 8443 --ssl-certfile <cert> --ssl-keyfile <key>
```
4. Ensure `/srv/backup/current` and `/srv/backup/.history` are writable by the service user.
5. Install `sshpass` and `rsync` on the Proxmox VM; the runner relies on them to pull from Windows.
## Notes
- Tokens live in the database for `TOKEN_LIFETIME_MINUTES`; expired tokens return 401.
- Each backup run writes to `<log_dir>/job_<id>.log`; status endpoints return the last lines.
- History retention keeps the 20 most recent runs per profile and prunes older directories under `/srv/backup/.history`.
- The job runner expects the client to expose OpenSSH on port 22; `sshpass` is used to hand the provided password to `ssh` non-interactively.

server/app/__init__.py Normal file

@@ -0,0 +1 @@
"""Server application package."""

server/app/config.py Normal file

@@ -0,0 +1,44 @@
from __future__ import annotations
import os
from pydantic import BaseSettings
def _ensure_dir(path: str) -> str:
os.makedirs(path, exist_ok=True)
return path
class Settings(BaseSettings):
api_username: str = "backup-admin"
api_password: str = "SuperSecret123"
token_lifetime_minutes: int = 120
database_url: str = "sqlite:///./data/backup.db"
backup_base: str = "/srv/backup"
history_retention: int = 20
log_dir: str = "./logs"
ssh_timeout_seconds: int = 1800
ssh_pass_command: str = "sshpass"
ssh_extra_args: str = "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
rsync_extra_args: str = "--info=progress2 --partial"
compress: bool = True
class Config:
env_file = os.path.join(os.path.dirname(__file__), "..", ".env")
env_file_encoding = "utf-8"
@property
def backup_current(self) -> str:
return _ensure_dir(os.path.join(self.backup_base, "current"))
@property
def backup_history(self) -> str:
return _ensure_dir(os.path.join(self.backup_base, ".history"))
@property
def runtime_logs(self) -> str:
return _ensure_dir(self.log_dir)
settings = Settings()

server/app/database.py Normal file

@@ -0,0 +1,18 @@
from __future__ import annotations
from sqlmodel import SQLModel, create_engine, Session
from .config import settings
engine = create_engine(
settings.database_url,
connect_args={"check_same_thread": False} if settings.database_url.startswith("sqlite") else {},
)
def init_db() -> None:
SQLModel.metadata.create_all(engine)
def get_session() -> Session:
return Session(engine)

server/app/job_runner.py Normal file

@@ -0,0 +1,203 @@
from __future__ import annotations
import os
import queue
import shlex
import shutil
import subprocess
import threading
from datetime import datetime
from typing import Iterable, NamedTuple
from sqlmodel import select
from .config import settings
from .database import get_session
from .models import Job, JobStatus, RunHistory
class JobPayload(NamedTuple):
job_id: int
ssh_password: str
def _sanitize_segment(value: str) -> str:
sanitized = "".join(ch if ch.isalnum() or ch in "-_" else "_" for ch in value)
return sanitized[:64]
def _split_args(raw: str) -> list[str]:
raw = raw.strip()
if not raw:
return []
return shlex.split(raw)
def _ensure_dirs(*paths: str) -> None:
for path in paths:
os.makedirs(path, exist_ok=True)
def _write_log(path: str, lines: Iterable[str]) -> None:
with open(path, "a", encoding="utf-8", errors="ignore") as fp:
for line in lines:
fp.write(line)
if not line.endswith("\n"):
fp.write("\n")
def _tail(log_path: str, max_lines: int = 20) -> list[str]:
try:
with open(log_path, "r", encoding="utf-8", errors="ignore") as fp:
return fp.readlines()[-max_lines:]
except FileNotFoundError:
return []
def _cleanup_history(profile_name: str, session, keep: int = 20) -> None:
if keep <= 0:
return
sanitized = _sanitize_segment(profile_name)
history_root = os.path.join(settings.backup_history, sanitized)
session.flush()
records = (
session.exec(
select(RunHistory)
.where(RunHistory.profile_name == profile_name)
.order_by(RunHistory.created_at.desc())
)
.all()
)
stale = records[keep:]
for record in stale:
path = os.path.join(history_root, record.run_id)
shutil.rmtree(path, ignore_errors=True)
session.delete(record)
if stale:
session.commit()
class JobRunner:
def __init__(self) -> None:
self.queue: "queue.Queue[JobPayload]" = queue.Queue()
self.worker = threading.Thread(target=self._worker, daemon=True)
self.worker.start()
def enqueue(self, payload: JobPayload) -> None:
self.queue.put(payload)
def _worker(self) -> None:
while True:
payload = self.queue.get()
try:
self._run(payload)
finally:
self.queue.task_done()
def _run(self, payload: JobPayload) -> None:
session = get_session()
job = session.get(Job, payload.job_id)
if not job:
session.close()
return
log_path = os.path.join(settings.runtime_logs, f"job_{job.id}.log")
try:
job.log_path = log_path
job.status = JobStatus.RUNNING
job.progress = 0
job.updated_at = datetime.utcnow()
session.add(job)
session.commit()
history_root = os.path.join(settings.backup_history, _sanitize_segment(job.profile_name), job.run_id)
_ensure_dirs(history_root)
target_base = os.path.join(settings.backup_current, _sanitize_segment(job.profile_name))
_ensure_dirs(target_base)
folders = [folder for folder in job.folders.split("||") if folder]
total = max(len(folders), 1)
completed = 0
summary_lines: list[str] = []
for folder in folders:
dest_folder = os.path.join(target_base, _sanitize_segment(folder))
_ensure_dirs(dest_folder)
args = [
settings.ssh_pass_command,
"-p",
payload.ssh_password,
"rsync",
"-a",
"--backup",
f"--backup-dir={history_root}",
]
args += _split_args(settings.rsync_extra_args)
if settings.compress:
args.append("-z")
args += [
"-e",
f"ssh {settings.ssh_extra_args}",
f"{job.ssh_username}@{job.client_host}:{folder}",
dest_folder,
]
process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1,
)
if process.stdout:
for line in process.stdout:
_write_log(log_path, [line])
summary_lines.append(line.strip())
process.wait()
if process.returncode != 0:
summary = f"Folder {folder} failed (rsync exit {process.returncode})."
job.summary = summary
job.status = JobStatus.FAILED
job.updated_at = datetime.utcnow()
session.add(job)
session.commit()
return
completed += 1
job.progress = int((completed / total) * 100)
job.updated_at = datetime.utcnow()
session.add(job)
session.commit()
job.status = JobStatus.COMPLETED
job.progress = 100
job.summary = "; ".join(summary_lines[-5:])
job.updated_at = datetime.utcnow()
session.add(job)
session.commit()
run_entry = RunHistory(profile_name=job.profile_name, run_id=job.run_id)
session.add(run_entry)
session.commit()
_cleanup_history(job.profile_name, session, keep=settings.history_retention)
finally:
session.close()
def tail_log(self, job: Job, lines: int = 20) -> list[str]:
if not job.log_path:
return []
return [line.strip() for line in _tail(job.log_path, lines)]
_runner: JobRunner | None = None

def ensure_runner() -> JobRunner:
    """Create the module-wide runner on first use."""
    global _runner
    if _runner is None:
        _runner = JobRunner()
    return _runner

runner = ensure_runner()

server/app/main.py Normal file

@@ -0,0 +1,207 @@
from __future__ import annotations
from datetime import datetime, timedelta
from typing import List
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer, OAuth2PasswordRequestForm
from sqlmodel import select
from .config import settings
from .database import get_session, init_db
from .job_runner import JobPayload, runner
from .models import Job, Profile, Token
from .schemas import (
AuthToken,
BackupLog,
BackupStartRequest,
HealthResponse,
JobStatusResponse,
ProfileCreate,
ProfileRead,
TokenType,
)
app = FastAPI(title="Backup Orchestrator", version="0.1.0")
oauth_scheme = OAuth2PasswordBearer(tokenUrl="/auth/login")
def _collect_folders(text: str) -> List[str]:
return [entry.strip() for entry in text.split("||") if entry.strip()]
def _profile_to_response(profile: Profile) -> ProfileRead:
return ProfileRead(
id=profile.id, # type: ignore[arg-type]
name=profile.name,
folders=_collect_folders(profile.folders),
description=profile.description,
schedule_enabled=profile.schedule_enabled,
created_at=profile.created_at,
updated_at=profile.updated_at,
)
def _get_current_user(token: str = Depends(oauth_scheme)): # noqa: C901
session = get_session()
try:
record = session.exec(select(Token).where(Token.token == token)).first()
if not record or record.expires_at < datetime.utcnow():
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid or expired token")
return record.user
finally:
session.close()
@app.on_event("startup")
def on_startup() -> None:
init_db()
@app.post("/auth/login", response_model=AuthToken)
def login(form_data: OAuth2PasswordRequestForm = Depends()) -> AuthToken:
if form_data.username != settings.api_username or form_data.password != settings.api_password:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid credentials")
session = get_session()
expires_at = datetime.utcnow() + timedelta(minutes=settings.token_lifetime_minutes)
token_value = Token(token=_generate_token(), user=form_data.username, expires_at=expires_at)
session.add(token_value)
session.commit()
session.close()
return AuthToken(access_token=token_value.token, token_type=TokenType.bearer, expires_at=expires_at)
def _generate_token(length: int = 36) -> str:
from secrets import token_urlsafe
return token_urlsafe(length)
@app.get("/profiles", response_model=List[ProfileRead])
def list_profiles(_: str = Depends(_get_current_user)) -> List[ProfileRead]:
session = get_session()
try:
results = session.exec(select(Profile).order_by(Profile.created_at)).all()
return [_profile_to_response(profile) for profile in results]
finally:
session.close()
@app.post("/profiles", response_model=ProfileRead, status_code=status.HTTP_201_CREATED)
def create_profile(data: ProfileCreate, _: str = Depends(_get_current_user)) -> ProfileRead:
session = get_session()
try:
profile = Profile(
name=data.name,
folders="||".join(data.folders),
description=data.description,
schedule_enabled=data.schedule_enabled,
)
session.add(profile)
session.commit()
session.refresh(profile)
return _profile_to_response(profile)
finally:
session.close()
@app.put("/profiles/{profile_id}", response_model=ProfileRead)
def update_profile(profile_id: int, data: ProfileCreate, _: str = Depends(_get_current_user)) -> ProfileRead:
session = get_session()
try:
profile = session.get(Profile, profile_id)
if not profile:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Profile not found")
profile.name = data.name
profile.folders = "||".join(data.folders)
profile.description = data.description
profile.schedule_enabled = data.schedule_enabled
profile.updated_at = datetime.utcnow()
session.add(profile)
session.commit()
session.refresh(profile)
return _profile_to_response(profile)
finally:
session.close()
@app.delete("/profiles/{profile_id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_profile(profile_id: int, _: str = Depends(_get_current_user)) -> None:
session = get_session()
try:
profile = session.get(Profile, profile_id)
if not profile:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Profile not found")
session.delete(profile)
session.commit()
finally:
session.close()
@app.post("/backup/start")
def start_backup(request: BackupStartRequest, _: str = Depends(_get_current_user)) -> dict:
session = get_session()
try:
profile = session.get(Profile, request.profile_id)
if not profile:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Profile not found")
folders = request.folders or _collect_folders(profile.folders)
if not folders:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="No folders provided")
job = Job(
profile_name=profile.name,
client_host=request.client_host,
folders="||".join(folders),
ssh_username=request.ssh_username,
run_id=_generate_run_id(),
)
session.add(job)
session.commit()
session.refresh(job)
runner.enqueue(JobPayload(job_id=job.id, ssh_password=request.ssh_password))
return {"job_id": job.id}
finally:
session.close()
def _generate_run_id() -> str:
from uuid import uuid4
return uuid4().hex
@app.get("/backup/status/{job_id}", response_model=JobStatusResponse)
def job_status(job_id: int, _: str = Depends(_get_current_user)) -> JobStatusResponse:
session = get_session()
try:
job = session.get(Job, job_id)
if not job:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Job not found")
return JobStatusResponse(
job_id=job.id, # type: ignore[arg-type]
status=job.status.value,
progress=job.progress,
summary=job.summary,
last_log_lines=runner.tail_log(job),
created_at=job.created_at,
updated_at=job.updated_at,
)
finally:
session.close()
@app.get("/backup/log/{job_id}", response_model=BackupLog)
def job_log(job_id: int, _: str = Depends(_get_current_user)) -> BackupLog:
session = get_session()
try:
job = session.get(Job, job_id)
if not job:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Job not found")
return BackupLog(job_id=job.id, lines=runner.tail_log(job, lines=200))
finally:
session.close()
@app.get("/health", response_model=HealthResponse)
def health() -> HealthResponse:
return HealthResponse(status="ok", version=app.version)

server/app/models.py Normal file

@@ -0,0 +1,52 @@
from __future__ import annotations
from datetime import datetime
from enum import Enum
from typing import Optional
from sqlmodel import SQLModel, Field
class JobStatus(str, Enum):
QUEUED = "QUEUED"
RUNNING = "RUNNING"
COMPLETED = "COMPLETED"
FAILED = "FAILED"
class Job(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
profile_name: str
client_host: str
folders: str
ssh_username: str
run_id: str
    status: JobStatus = Field(default=JobStatus.QUEUED)
progress: int = 0
summary: Optional[str] = None
log_path: Optional[str] = None
created_at: datetime = Field(default_factory=datetime.utcnow)
updated_at: datetime = Field(default_factory=datetime.utcnow)
class Profile(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
name: str
folders: str
schedule_enabled: bool = Field(default=False)
description: Optional[str] = None
created_at: datetime = Field(default_factory=datetime.utcnow)
updated_at: datetime = Field(default_factory=datetime.utcnow)
class Token(SQLModel, table=True):
token: str = Field(primary_key=True)
user: str
expires_at: datetime
class RunHistory(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
profile_name: str
run_id: str
created_at: datetime = Field(default_factory=datetime.utcnow)

server/app/schemas.py Normal file

@@ -0,0 +1,62 @@
from __future__ import annotations
from datetime import datetime
from enum import Enum
from typing import List, Optional
from pydantic import BaseModel, Field, constr
class TokenType(str, Enum):
bearer = "bearer"
class AuthToken(BaseModel):
access_token: str
token_type: TokenType
expires_at: datetime
class ProfileBase(BaseModel):
name: constr(min_length=1)
folders: List[str] = Field(..., min_items=1)
description: Optional[str] = None
schedule_enabled: bool = False
class ProfileCreate(ProfileBase):
pass
class ProfileRead(ProfileBase):
id: int
created_at: datetime
updated_at: datetime
class BackupStartRequest(BaseModel):
profile_id: int
client_host: str
ssh_username: str
ssh_password: str
folders: Optional[List[str]] = None
class JobStatusResponse(BaseModel):
job_id: int
status: str
progress: int
summary: Optional[str]
last_log_lines: List[str] = []
created_at: datetime
updated_at: datetime
class BackupLog(BaseModel):
job_id: int
lines: List[str]
class HealthResponse(BaseModel):
status: str
version: str

server/pyproject.toml Normal file

@@ -0,0 +1,19 @@
[project]
name = "backup-orchestrator"
version = "0.1.0"
description = "FastAPI orchestrator for Windows SSH pull backups"
authors = [
    { name = "Codex", email = "dev@example.com" },
]
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "fastapi>=0.111,<0.112",
    "uvicorn[standard]>=0.23,<0.24",
    "sqlmodel>=0.0.8,<0.0.9",
    "python-dotenv>=1.0,<2",
    "cryptography>=41.0,<42",
    "passlib>=1.7,<2",
    "python-multipart>=0.0.6",  # required at runtime by OAuth2PasswordRequestForm
]

[build-system]
requires = ["hatchling>=1.8"]
build-backend = "hatchling.build"