Compare commits
21 Commits
d9e9e95b5f ... 23c37f4590

@@ -3,6 +3,10 @@
 .codex/
 __pycache__/
 *.pyc
+tls/
+node_modules/
+playwright-report/
+test-results/

 # Keep firmware SDK tree out of this workspace-tracking repo
 CR_SDK_CK-main/
@@ -0,0 +1,240 @@

# Phase 5 Runbook (Session Reuse Prototype)

This runbook starts a minimal `k_server` + `k_proxy` prototype for session reuse testing.

Last updated: 2026-04-26

Related browser demo:

- `k_client_portal.py` can now be used in `k_client` at `http://127.0.0.1:8766` to show:
  - registration
  - the current registered-user list from `k_proxy`
  - unregistering from the browser page
  - login with card approval/denial
  - protected `k_server` counter access
  - logout
  - explicit "k_server was not called" behavior when login is denied
## What This Prototype Covers

- `k_proxy` creates short-lived sessions.
- Session creation uses a card-presence check (`fido2_probe.py --json`) as the current auth gate.
- Valid sessions can repeatedly access a protected `k_server` counter endpoint without re-running card auth on each request.
- Session status and logout/invalidation paths are implemented.
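The short-lived session model above can be sketched as a minimal in-memory store with TTL expiry and explicit invalidation. This is a simplified sketch; the real `k_proxy_app.py` additionally wraps these operations in a lock and runs the card-auth gate before creating a session. The token shape and per-session fields follow the prototype notes in `Setup.md`.

```python
import secrets
import time

class SessionStore:
    """Minimal in-memory session store with TTL expiry (prototype sketch)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        # token -> {"username": ..., "expires_at": ...}
        self.sessions = {}

    def create(self, username):
        token = secrets.token_urlsafe(32)  # opaque bearer token
        self.sessions[token] = {
            "username": username,
            "expires_at": time.time() + self.ttl,
        }
        return token

    def validate(self, token):
        entry = self.sessions.get(token)
        if entry is None or entry["expires_at"] < time.time():
            self.sessions.pop(token, None)  # drop expired entries lazily
            return None
        return entry["username"]

    def invalidate(self, token):
        # Logout path: removing the token is sufficient to deny reuse.
        return self.sessions.pop(token, None) is not None
```

With `--session-ttl 300` this gives the five-minute reuse window the runbook commands assume.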
## Modes

There are two useful ways to run this prototype:

- Same-VM quickstart: `k_proxy` and `k_server` run on one VM for app-local testing.
- Split-VM chain: `k_proxy` runs in `k_proxy`, `k_server` runs in `k_server`, and the Qubes forwarding layer must permit the chain.

## Start Services

### Same-VM quickstart

This matches the code defaults and is useful for basic app behavior only.

In the chosen VM:
```bash
python3 /home/user/chromecard/k_server_app.py --host 127.0.0.1 --port 8780 --proxy-token dev-proxy-token
```

In the same VM:

```bash
python3 /home/user/chromecard/k_proxy_app.py \
  --host 127.0.0.1 \
  --port 8770 \
  --session-ttl 300 \
  --server-base-url http://127.0.0.1:8780 \
  --proxy-token dev-proxy-token
```
### Split-VM chain

This is the current Qubes target shape.

In the `k_server` VM:

```bash
python3 /home/user/chromecard/k_server_app.py \
  --host 127.0.0.1 \
  --port 8780 \
  --proxy-token dev-proxy-token \
  --tls-certfile /home/user/chromecard/tls/phase2/k_server.crt \
  --tls-keyfile /home/user/chromecard/tls/phase2/k_server.key
```

In the `k_proxy` VM:

```bash
qvm-connect-tcp 9780:k_server:8780
```

Then start the proxy:
```bash
python3 /home/user/chromecard/k_proxy_app.py \
  --host 127.0.0.1 \
  --port 8771 \
  --session-ttl 300 \
  --server-base-url https://127.0.0.1:9780 \
  --server-ca-file /home/user/chromecard/tls/phase2/ca.crt \
  --proxy-token dev-proxy-token \
  --tls-certfile /home/user/chromecard/tls/phase2/k_proxy.crt \
  --tls-keyfile /home/user/chromecard/tls/phase2/k_proxy.key
```

In the `k_client` VM:

```bash
qvm-connect-tcp 9771:k_proxy:8771
```
Notes:

- The current validated split-VM path is `k_client localhost:9771 -> k_proxy localhost:8771 -> k_proxy localhost:9780 forward -> k_server localhost:8780`.
- Use `--cacert /home/user/chromecard/tls/phase2/ca.crt` for TLS verification in `curl`-based checks.
- Raw VM-IP routing is not the validated path for the current prototype.
## Ownership And Concurrency

- `k_proxy` is authoritative for session state.
- `k_server` is authoritative for the protected counter state.
- Sessions are in-memory only in `k_proxy` and are lost on proxy restart.
- The protected counter is in-memory only in `k_server` and resets on server restart.
- Both services use `ThreadingHTTPServer`.
- `k_proxy` guards its session store with a single process-local lock.
- `k_server` guards counter increments with a single process-local lock.
- Qubes localhost forwarders are transport plumbing only; they are not a source of state authority.
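The lock discipline described above can be sketched as follows. This is a simplified stand-in for the counter guard in `k_server_app.py` (the class and driver code here are illustrative, not the service's actual code), showing why concurrent increments stay unique and gap-free:

```python
import threading

class Counter:
    """Monotonic counter guarded by a single process-local lock (sketch)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def next(self):
        # Without the lock, two threads could read the same value and
        # return duplicates; with it, results are unique and gap-free.
        with self._lock:
            self._value += 1
            return self._value

counter = Counter()
results = []
results_lock = threading.Lock()

def worker():
    # Each worker models one handler thread of ThreadingHTTPServer.
    for _ in range(100):
        value = counter.next()
        with results_lock:
            results.append(value)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Eight workers taking 100 increments each yield exactly the values `1..800` with no duplicates, which is the property the Phase 5 regression script checks at smaller scale.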
## Test Flow

Use the proxy port that matches the mode you started:

- Same-VM quickstart: `8770`
- Split-VM chain: `9771` from `k_client`, `8771` inside `k_proxy`

Create a session (runs the auth gate once):
```bash
curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/session/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"alice"}'
```

Copy `session_token` from the response, then:

```bash
TOKEN='<paste-token>'
```

Check the session:

```bash
curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/session/status \
  -H "Authorization: Bearer $TOKEN"
```

Call the protected resource multiple times (should not require a new login):

```bash
curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/resource/counter \
  -H "Authorization: Bearer $TOKEN"
curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/resource/counter \
  -H "Authorization: Bearer $TOKEN"
```

Logout/invalidate:

```bash
curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/session/logout \
  -H "Authorization: Bearer $TOKEN"
```

Re-check after logout (should fail with `401`):

```bash
curl -i --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/resource/counter \
  -H "Authorization: Bearer $TOKEN"
```
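The same bearer-token call can also be scripted from Python with only the standard library. The endpoint path and header shape come from this runbook; `build_counter_request` is a hypothetical helper name, not code from the workspace:

```python
import urllib.request

def build_counter_request(base_url, session_token):
    """Build the authenticated POST used against /resource/counter."""
    return urllib.request.Request(
        f"{base_url}/resource/counter",
        method="POST",
        headers={"Authorization": f"Bearer {session_token}"},
    )

# Sending it requires trusting the Phase 2 CA, e.g.:
#   ctx = ssl.create_default_context(cafile="/home/user/chromecard/tls/phase2/ca.crt")
#   urllib.request.urlopen(build_counter_request("https://127.0.0.1:9771", token), context=ctx)
```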
## Regression Script

For the split-VM chain, use the host-side regression helper:

```bash
/home/user/chromecard/phase5_chain_regression.sh
```

Defaults:

- Drives the test from `k_client` over SSH.
- Uses `https://127.0.0.1:9771` and `/home/user/chromecard/tls/phase2/ca.crt` inside `k_client`.
- Logs in as `alice`.
- Runs `20` counter requests at parallelism `8`.
- Verifies that the returned counter values are unique and gap-free, then logs out and checks for `401` after logout.
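The uniqueness/gap-free check the script performs can be expressed as a small predicate. This is a sketch of the property being verified, not the script's actual implementation:

```python
def unique_and_gap_free(values):
    """True if the counter values form one contiguous run with no repeats."""
    if not values:
        return False
    ordered = sorted(values)
    if len(set(ordered)) != len(ordered):
        return False  # a duplicate counter value would indicate a lost-update race
    # Contiguous means the sorted values match the range from min upward.
    return ordered == list(range(ordered[0], ordered[0] + len(ordered)))
```

The verified burst below (values `23..42` for 20 requests) satisfies exactly this predicate.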
Useful overrides:

```bash
REQUESTS=50 PARALLELISM=12 /home/user/chromecard/phase5_chain_regression.sh
```

```bash
/home/user/chromecard/phase5_chain_regression.sh --username alice --client-host k_client
```

For the browser-facing `k_client` page, use the Playwright regression spec:

```bash
npm install
npx playwright install
npm run test:k-client
```

Notes:

- The default target is `http://127.0.0.1:8766`.
- Override it with `PORTAL_BASE_URL=http://127.0.0.1:8766`.
- The spec expects manual card confirmation during register and login.
- Timeouts can be tuned with `CARD_REGISTRATION_TIMEOUT_MS` and `CARD_LOGIN_TIMEOUT_MS`.
- From this host, a forwarded portal URL was used successfully:
  - `PORTAL_BASE_URL=http://127.0.0.1:18766 npm run test:k-client`
Verified result on 2026-04-25:

- The live split-VM chain passed end-to-end.
- Login, session status, counter reuse, and logout all worked from `k_client`.
- A `20`-request / `8`-worker concurrency burst returned unique, gap-free counter values `23..42`.
- The Playwright browser regression for `k_client_portal.py` also passed end-to-end:
  - register
  - login
  - protected counter
  - logout
  - unregister
## Current Limitations

- The stable deployed baseline still uses card-presence probing, not full assertion verification, as the default auth gate.
- Session and counter state are still process-local only; a restart loses state.
- Upstream trust still relies on a shared static `X-Proxy-Token`.
- An experimental direct FIDO2 mode exists in `k_proxy_app.py` behind `--auth-mode fido2-direct`:
  - direct `/enroll/register` now succeeds
  - direct `/session/login` now succeeds and returns `auth_mode: "fido2_assertion"`
  - direct `/session/status`, `/resource/counter`, and `/session/logout` also succeed end-to-end
  - the mode remains optional for now; the deployed service was returned to the default `probe` mode so the validated Phase 5 baseline stays reproducible
- A raw CTAP debugging helper exists at `/home/user/chromecard/raw_ctap_probe.py`:
  - use it on `k_proxy` to exercise low-level `makeCredential` / `getAssertion`
  - it logs keepalive callbacks and transport exceptions
- `phase5_chain_regression.sh` now supports card-interactive direct auth via:
  - `--interactive-card`
  - `--expect-auth-mode fido2_assertion`
## Current Focus

- Keep the HTTPS split-VM chain reproducible in the default `probe` mode.
- Decide whether `fido2-direct` is ready to become the default deployed auth path.
- Continue Phase 6.5 concurrency work; the active system limit is still higher-fan-out Qubes forwarding on the browser-facing path rather than basic Phase 5 functionality.
552
Setup.md

@@ -1,6 +1,6 @@
 # Setup

-Last updated: 2026-04-24
+Last updated: 2026-04-27

 This is a living setup/status file for the local ChromeCard workspace at `/home/user/chromecard`.
 Update this file whenever environment status or verified behavior changes.
@@ -44,7 +44,7 @@ Update this file whenever environment status or verified behavior changes.
 ## Target Qubes Topology

-- Base template for all AppVMs: Debian template.
+- Base template for all AppVMs: `debian-13-xfce`.
 - Allowed network paths:
   - `k_client` -> `k_proxy` over TLS
   - `k_proxy` -> `k_server` over TLS
@@ -68,6 +68,11 @@ Functional roles:
 - Provides a dummy protected resource for early integration testing (monotonic increasing number/counter).
 - May hold user/session state logic needed for authorization decisions.

+UI baseline for each AppVM (start-menu visible apps):
+
+- Firefox
+- XFCE Terminal
+- File Manager

 ## Target Request Flow

 1. `k_client` sends HTTPS request to `k_proxy`.
@@ -110,21 +115,538 @@ Thread-safety expectation:

## Current Status Snapshot (2026-04-24)

-- Python is available: `Python 3.13.12`.
-- `python3 fido2_probe.py --list` runs, but returns: `No CTAP HID devices found.`
-- No HID raw device nodes currently visible: `no hidraw devices visible`.
+- AppVM OS version is confirmed: Debian `13.4` (`k_server`, and same on `k_client`/`k_proxy`).
+- Python in AppVMs is available: `Python 3.13.5`.
+- `python3 /home/user/chromecard/fido2_probe.py --list` in `k_proxy` now detects ChromeCard on `/dev/hidraw0` (`vid:pid=4617:5`).
+- HID raw device nodes are now visible in `k_proxy`:
+  - `/dev/hidraw0` -> `crw-rw----+`
+  - `/dev/hidraw1` -> `crw-------`
+- `python3 /home/user/chromecard/fido2_probe.py --json` succeeds and returns CTAP2 `getInfo`:
+  - versions: `["FIDO_2_0"]`
+  - aaguid: `1234567890abcdef0123456789abcdef`
+  - options: `rk=false`, `up=true`, `uv=true`
+  - max_msg_size: `1024`
+- Local WebAuthn demo (`http://localhost:8765` in `k_proxy`) succeeded:
+  - register: `ok=true`, `username=alice`, `credential_count=1`
+  - login/auth: `ok=true`, `username=alice`, `authenticated=true`
+- Phase 5 prototype services are now available:
+  - `/home/user/chromecard/k_proxy_app.py`
+  - `/home/user/chromecard/k_server_app.py`
+  - `/home/user/chromecard/PHASE5_RUNBOOK.md`
+- Remote VM access is now available via SSH/SCP aliases:
+  - command execution: `ssh <host> <cmd>`
+  - file copy to VM home: `scp <file> <host>:~`
+  - validated hosts: `k_client`, `k_proxy`, `k_server`
- `west` is not currently installed/in PATH: `west not found`.
- The checked-out `CR_SDK_CK-main` tree appears incomplete for documented sysbuild role layout:
  - missing: `mvp`, `setup`, `components`, `samples`
- `CR_SDK_CK-main/scripts/build_flash_mvp.sh` exists, but it expects the above role directories.
- Python helper scripts were intentionally moved out of `CR_SDK_CK-main/scripts` and are now maintained at workspace root.
+- Qubes AppVM baseline is now up: `k_client`, `k_proxy`, `k_server` can start and have terminals running.
Implication:

-- We cannot currently confirm live FIDO2 connectivity from this host.
+- Live FIDO2 connectivity from `k_proxy` to ChromeCard is confirmed over USB HID/CTAPHID.
+- Local browser WebAuthn register/login flow is confirmed working in `k_proxy`.
- We cannot currently run the documented firmware build/flash flow.
Session note (2026-04-24):

- Markdown tracking was reviewed and normalized around `Setup.md` + `Workplan.md` as the active, continuously updated execution record.
- AppVM template decision recorded: use `debian-13-xfce` for `k_client`, `k_proxy`, and `k_server`.
- VM start attempt failed with Xen toolstack error: `libxenlight have failed to create new domain 'k_client'`.
- The VM start blocker was resolved by reducing VM memory to `400` MiB; all three AppVMs now start.
- Runtime check from VMs: Debian `13.4` and Python `3.13.5`; `k_proxy` still shows `no hidraw devices`.
- After USB assignment to `k_proxy`, `/dev/hidraw0` and `/dev/hidraw1` appeared.
- A CTAP probe re-run succeeded with a detected ChromeCard device and a valid CTAP2 `getInfo` response.
- The local WebAuthn demo completed successfully for user `alice` (register + login).
- Phase 5 starter implementation added with session TTL, logout/invalidation, and proxy->server protected counter forwarding.
Session note (2026-04-24, doc maintenance):

- Top-level Markdown files were re-scanned: `PHASE5_RUNBOOK.md`, `Setup.md`, `Workplan.md`.
- `PHASE5_RUNBOOK.md` remains consistent with the current Phase 5 prototype paths and flow.
- No plan/setup drift was found requiring behavioral changes; docs remain aligned.
- SSH-based VM operation was validated for `k_client`, `k_proxy`, `k_server` (Debian `13.4` confirmed remotely).
- SCP file transfer to the `k_proxy` home directory was validated with read-back.

Session note (2026-04-24, remote flow diagnostics):

- VM script staging gap found: `/home/user/chromecard/k_proxy_app.py`, `k_server_app.py`, and helper files were missing on the AppVMs and were copied via `scp`.
- Services were started in the VMs and verified locally:
  - `k_proxy` local health OK on `127.0.0.1:8770` and `127.0.0.1:8771`
  - `k_server` local health OK on `127.0.0.1:8780`
- Verified VM IPs during this run:
  - `k_proxy`: `10.137.0.12`
  - `k_server`: `10.137.0.13`
  - `k_client`: `10.137.0.16`
- The current chain failure is network pathing/firewall:
  - `k_client -> k_proxy` (`10.137.0.12:8771`) times out.
  - `k_proxy -> k_server` (`10.137.0.13:8780`) times out.
  - The proxy returns the upstream error payload: `server unavailable: timed out`.
Session note (2026-04-24, markdown re-scan):

- Re-read the top-level workspace Markdown files: `Setup.md`, `Workplan.md`, `PHASE5_RUNBOOK.md`.
- Re-skimmed the source-tree reference docs in `CR_SDK_CK-main`, including `BUILD.md`, `README.md`, `README_HOST.md`, `RELEASE.md`, and `distribute_bundle.md`.
- Current workspace docs remain aligned with the verified execution record.
- Source-tree doc drift remains unchanged:
  - `README_HOST.md` still points to `./scripts/fido2_probe.py` and `./scripts/webauthn_local_demo.py`.
  - Active workspace policy continues to treat those paths as historical; the maintained helper paths remain `/home/user/chromecard/fido2_probe.py` and `/home/user/chromecard/webauthn_local_demo.py`.
  - Source-tree build docs continue to describe a full SDK layout with `mvp`, `setup`, `components`, and `samples`, which is still not present in the current local checkout snapshot.

Session note (2026-04-24, policy retry):

- The Markdown re-scan was retried after local policy changes.
- Re-running the workspace doc scan with a non-login shell completed cleanly, without the earlier SSH/socat startup noise in command output.

Session note (2026-04-24, chain probe retry):

- Re-probed the Qubes access path for `k_client -> k_proxy -> k_server`.
- Local forwarded SSH listener ports still exist on the host:
  - `0.0.0.0:2222` -> `qrexec-client-vm 'k_client' qubes.ConnectTCP+22`
  - `0.0.0.0:2223` -> `qrexec-client-vm 'k_proxy' qubes.ConnectTCP+22`
  - `0.0.0.0:2224` -> `qrexec-client-vm 'k_server' qubes.ConnectTCP+22`
- These forwarded SSH ports currently fail immediately:
  - `ssh k_client` / `ssh k_proxy` / `ssh k_server` close immediately on the localhost forwarded ports.
  - Direct `qrexec-client-vm <target> qubes.ConnectTCP+22` returns `Request refused`.
- Chain ports are currently blocked at the same qrexec layer:
  - `qrexec-client-vm k_proxy qubes.ConnectTCP+8770` -> `Request refused`
  - `qrexec-client-vm k_server qubes.ConnectTCP+8780` -> `Request refused`
- This means the current blocker is active qrexec policy/service refusal for `qubes.ConnectTCP`, not the Python service code in `k_proxy_app.py` or `k_server_app.py`.
- A separate SSH config issue remains on the host:
  - `/etc/ssh/ssh_config.d/20-systemd-ssh-proxy.conf` is still owned `root:root` but mode `777`, which causes OpenSSH to reject it as insecure on the normal login-shell path.
Session note (2026-04-25, post-restart probe):

- The correct client-facing proxy port is `8771` for the current split-VM chain checks.
- SSH to `k_proxy` is working again.
- `k_proxy` card visibility is restored after VM restart and card reconnect:
  - `/dev/hidraw0` and `/dev/hidraw1` are present in `k_proxy`
- Current service state after restart:
  - `k_proxy` has no listener on `127.0.0.1:8771`
  - `k_server` has no listener on `127.0.0.1:8780`
- Current qrexec chain state after restart:
  - `qrexec-client-vm k_proxy qubes.ConnectTCP+8771` -> `Request refused`
  - `qrexec-client-vm k_server qubes.ConnectTCP+8780` -> `Request refused`
- Practical meaning:
  - SSH and card attachment recovered
  - the Phase 5 app services are not currently running in the VMs
  - qrexec forwarding for the chain ports is still being refused
Session note (2026-04-25, service restart):

- `k_server_app.py` was restarted successfully in `k_server`:
  - PID `1320`
  - listening on `127.0.0.1:8780`
  - `/health` returns `{"ok": true, "service": "k_server", ...}`
- `k_proxy_app.py` was restarted successfully in `k_proxy`:
  - PID `2774`
  - listening on `127.0.0.1:8771`
  - `/health` returns `{"ok": true, "service": "k_proxy", "active_sessions": 0, ...}`
- Despite local service recovery, qrexec forwarding is still denied:
  - `qrexec-client-vm k_proxy qubes.ConnectTCP+8771` -> `Request refused`
  - `qrexec-client-vm k_server qubes.ConnectTCP+8780` -> `Request refused`
Session note (2026-04-25, markdown refresh):

- Re-read the active workspace markdown files:
  - `Setup.md`
  - `Workplan.md`
  - `PHASE5_RUNBOOK.md`
- Corrected the Phase 5 runbook to distinguish the old same-VM quickstart from the current split-VM chain usage.
- The current documented client-facing proxy port for split-VM tests is `8771`.
- The current documented blocker remains unchanged:
  - local service health inside `k_proxy` and `k_server` is good
  - inter-VM forwarding via `qubes.ConnectTCP` is still refused
Session note (2026-04-25, Phase 2 HTTPS bring-up):

- Added direct TLS support to:
  - `/home/user/chromecard/k_proxy_app.py`
  - `/home/user/chromecard/k_server_app.py`
- Added a local certificate generator:
  - `/home/user/chromecard/generate_phase2_certs.py`
- Generated a local CA and service certs at:
  - `/home/user/chromecard/tls/phase2/ca.crt`
  - `/home/user/chromecard/tls/phase2/k_proxy.crt`
  - `/home/user/chromecard/tls/phase2/k_server.crt`
- Certificate generation was corrected to include the subject key identifier and authority key identifier so Python TLS verification succeeds.
- The current validated HTTPS shape is Qubes-localhost forwarding, not raw VM-IP routing:
  - in `k_client`: `qvm-connect-tcp 9771:k_proxy:8771`
  - in `k_proxy`: `qvm-connect-tcp 9780:k_server:8780`
  - `k_proxy` listens on `https://127.0.0.1:8771`
  - `k_server` listens on `https://127.0.0.1:8780`
  - the `k_proxy` upstream is `https://127.0.0.1:9780`
- Verified HTTPS checks:
  - `k_client -> k_proxy` `/health` over TLS succeeds with `--cacert /home/user/chromecard/tls/phase2/ca.crt`
  - `k_proxy -> k_server` `/health` and `/resource/counter` over TLS succeed through the `9780` forwarder
  - end-to-end `k_client -> k_proxy -> k_server` login + session reuse succeeded over HTTPS
- End-to-end verified results:
  - login returned `ok=true` for `alice`
  - the first protected counter call returned value `1`
  - the second protected counter call returned value `2`
  - session status remained valid after reuse
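A client-side TLS context pinned to that local CA can be built with the standard library. This is a sketch, not code from the workspace; the CA path is the Phase 2 file generated above, and the helper name is illustrative:

```python
import ssl

def make_phase2_context(ca_file=None):
    """TLS client context that trusts only the supplied CA bundle.

    In the VMs, pass ca_file="/home/user/chromecard/tls/phase2/ca.crt";
    with ca_file=None it falls back to the system trust store.
    """
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.check_hostname = True           # the default, stated for clarity
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject unverified upstream certs
    return ctx
```

Because the subject/authority key identifiers were added to the generated certs, this strict context verifies the `k_proxy` and `k_server` certificates cleanly.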
Session note (2026-04-25, Phase 2.5 ownership and concurrency):

- Current prototype state ownership is now explicit:
  - `k_proxy` is authoritative for session state
  - `k_server` is authoritative for protected resource state
  - `k_client` is not authoritative for either session validity or counter/resource state
- Current session model in `k_proxy`:
  - server-side in-memory session store only
  - opaque bearer token generated by `secrets.token_urlsafe(32)`
  - per-session fields are `username` and `expires_at`
  - expiry is enforced in `k_proxy`; `k_server` does not validate client sessions directly
- Current resource model in `k_server`:
  - in-memory monotonic counter guarded by a lock
  - access is allowed only when the request arrives from `k_proxy` with the expected `X-Proxy-Token`
- Current concurrency model in code:
  - both services use `ThreadingHTTPServer`
  - `k_proxy` protects session-map mutations and garbage collection with a single lock
  - `k_server` protects counter increments with a single lock
  - TLS verification and upstream fetches happen outside the session lock in `k_proxy`
- Current runtime assumptions and limits:
  - Qubes localhost forwarders are treated as transport plumbing, not as state authorities
  - if `k_proxy` restarts, in-memory sessions are lost
  - if `k_server` restarts, the in-memory counter resets
  - the current shared `X-Proxy-Token` is a prototype trust mechanism, not a final authorization design
- Practical meaning:
  - race-free behavior is currently defined for session CRUD and counter increments inside one process per VM
  - persistence, distributed session authority, and multi-proxy/multi-server coordination are not implemented yet
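The `X-Proxy-Token` gate described above can be sketched as a constant-time header comparison. This is illustrative only; the real check lives in `k_server_app.py`, and using `hmac.compare_digest` is our choice here, not necessarily what the prototype code does:

```python
import hmac

EXPECTED_PROXY_TOKEN = "dev-proxy-token"  # shared static prototype secret

def proxy_request_allowed(headers):
    """Allow the request only if it carries the expected X-Proxy-Token."""
    supplied = headers.get("X-Proxy-Token", "")
    # compare_digest avoids leaking match length through response timing.
    return hmac.compare_digest(supplied, EXPECTED_PROXY_TOKEN)
```

As the note says, this shared static token is a placeholder trust mechanism, not the final authorization design.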
Session note (2026-04-25, Phase 6 client portal prototype):

- Added a browser-facing client process:
  - `/home/user/chromecard/k_client_portal.py`
- Current Phase 6 prototype shape:
  - the portal runs in `k_client` on `http://127.0.0.1:8766`
  - the portal keeps local enrolled-username state in `k_client`
  - the portal calls `k_proxy` over the validated TLS forward `https://127.0.0.1:9771`
- Current local enrollment model:
  - enrollment is a client-local username selection stored by the portal
  - no dedicated server-side enrollment API exists yet
- Verified portal API flow in `k_client`:
  - `GET /health` returns `ok=true`
  - `POST /api/enroll` with `alice` succeeds
  - `POST /api/login` succeeds and returns a proxy session token
  - `POST /api/status` succeeds
  - `POST /api/resource/counter` succeeds twice with upstream values `3` and `4`
  - `POST /api/logout` succeeds
- Current implication:
  - `k_client` now has a concrete client-side process instead of only runbook curls
  - a browser-facing flow is now available through the local portal
  - the next hardening step is to replace client-local enrollment with the intended enrollment contract and to decide whether browser traffic should eventually talk to `k_proxy` directly or continue through a local client portal
Session note (2026-04-25, Phase 6 enrollment contract):

- Added a proxy-side enrollment API and storage:
  - `POST /enroll/register`
  - `GET /enroll/status?username=<name>`
  - a persisted prototype store at `/home/user/chromecard/k_proxy_enrollments.json` in `k_proxy`
- The current enrollment authority is now `k_proxy`, not the `k_client` portal.
- Current portal behavior:
  - portal enrollment calls `k_proxy` over TLS
  - the portal keeps only a preferred local username for convenience
  - portal login now depends on proxy-side enrollment existing
- Verified behavior:
  - direct proxy login for unenrolled `bob` returns `{"ok": false, "error": "user not enrolled", ...}`
  - portal enrollment of `alice` succeeds and persists in proxy-side enrollment storage
  - proxy enrollment status for `alice` returns `ok=true`
  - portal login and protected counter access still succeed after enrollment
- Practical meaning:
  - Phase 6 now has a real `k_client -> k_proxy` enrollment request path
  - the remaining gap is not basic routing; it is deciding the final enrollment semantics and whether the browser should stay behind a local portal or talk to `k_proxy` directly
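The persisted, create-only enrollment store can be sketched as follows. This is a simplified sketch of the contract above (JSON file persistence, duplicate create rejected with `user already enrolled`); the class name and internals are illustrative, not the `k_proxy_app.py` code:

```python
import json
import os

class EnrollmentStore:
    """Create-only enrollment store persisted to a JSON file (sketch)."""

    def __init__(self, path):
        self.path = path
        self.users = {}
        if os.path.exists(path):
            with open(path) as f:
                self.users = json.load(f)  # reload survives proxy restarts

    def register(self, username):
        if username in self.users:
            # Duplicate create is rejected, matching the verified behavior.
            return {"ok": False, "error": "user already enrolled"}
        self.users[username] = {}
        self._save()
        return {"ok": True, "username": username}

    def status(self, username):
        return {"ok": username in self.users, "username": username}

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.users, f)
```

Unlike the in-memory session store, this state survives a `k_proxy` restart because it is reloaded from the JSON file.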
Session note (2026-04-25, browser target moved to k_proxy):

- `k_proxy` now serves the browser-facing portal UI directly on `/` over `https://127.0.0.1:9771`.
- `k_client_portal.py` is now a temporary bridge page:
  - it points users to `https://127.0.0.1:9771/`
  - it is no longer the primary browser target
- Verified direct browser/API target behavior from `k_client`:
  - `GET https://127.0.0.1:9771/` returns the proxy portal HTML
  - `GET https://127.0.0.1:9771/health` returns `ok=true`
  - direct `POST /enroll/register` for `carol` succeeds
  - direct `POST /session/login` for `carol` succeeds
- Current implication:
  - browser traffic is now intended to go straight to `k_proxy`
  - the `k_client` portal remains only as a temporary bridge/compatibility layer
Session note (2026-04-25, k_client browser flow page):

- `k_client_portal.py` now also serves a local browser demo page again on `http://127.0.0.1:8766` inside `k_client`.
- The page is useful as an operator/demo surface:
  - register a user
  - login with card approval or denial in `k_proxy`
  - call the protected `k_server` counter
  - logout
- The page now also exposes current proxy enrollment state:
  - shows the registered users visible in `k_proxy`
  - lets the operator select a listed user into the username field
  - lets the operator unregister users from the browser page
  - login now uses the current username field instead of only the portal's last remembered user
- Added a browser regression harness for the `k_client` page:
  - `/home/user/chromecard/tests/k_client_portal.spec.js`
  - `/home/user/chromecard/playwright.config.js`
  - `/home/user/chromecard/package.json`
  - intended flow: register, login, call `k_server`, logout, unregister
  - verified passing live on 2026-04-25 from this host via a forwarded portal URL:
    - `PORTAL_BASE_URL=http://127.0.0.1:18766 npm run test:k-client`
- It also makes the negative path explicit:
  - if login is denied on the card, the page reports that `k_server` was not called
- Primary browser-facing app logic still lives on `k_proxy`, but the `k_client` page is now a concrete demo/control surface rather than just a redirect.
|
Session note (2026-04-25, provisional enrollment hardening):

- The enrollment contract in `k_proxy` is now explicit but provisional.
- Current prototype enrollment rules:
  - usernames are canonicalized to lowercase
  - allowed username pattern is `3-32` chars using lowercase letters, digits, `.`, `_`, `-`
  - optional `display_name` is allowed up to `64` chars
  - enrollment create is create-only; duplicate create returns `user already enrolled`
  - enrollment update is a separate operation
  - enrollment delete is a separate operation and removes any active sessions for that username
- Current enrollment endpoints on `k_proxy`:
  - `POST /enroll/register`
  - `GET /enroll/status?username=<name>`
  - `POST /enroll/update`
  - `POST /enroll/delete`
  - `GET /enroll/list`
- Verified behavior from `k_client` against `https://127.0.0.1:9771`:
  - invalid username `A!` is rejected
  - create for `dave` with `display_name` succeeds
  - duplicate create for `dave` is rejected
  - update for `dave` succeeds
  - list returns enrolled users and metadata
  - delete for `dave` succeeds
  - login for deleted `dave` fails with `user not enrolled`
- Deliberate current limit:
  - enrollment itself still does not require card presence; only login does
  - this was kept lightweight because the enrollment semantics are expected to change later

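The enrollment rules above can be sketched as a small validation helper. This is an illustrative sketch, not the actual `k_proxy_app.py` code; the function names and regex are assumptions derived from the rules listed in this note:

```python
import re

# 3-32 chars: lowercase letters, digits, '.', '_', '-' (per the prototype rules)
USERNAME_RE = re.compile(r"[a-z0-9._-]{3,32}")
DISPLAY_NAME_MAX = 64


def canonicalize_username(raw):
    """Lowercase the input, then accept only the allowed pattern.

    Returns the canonical name, or None when the name must be rejected.
    """
    name = raw.strip().lower()
    if USERNAME_RE.fullmatch(name):
        return name
    return None


def validate_display_name(name):
    """Optional display_name is allowed up to 64 chars."""
    return len(name) <= DISPLAY_NAME_MAX
```

With this shape, `canonicalize_username("Carol")` yields `carol`, while `A!` is rejected at the validation step, matching the verified behavior above.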
Session note (2026-04-25, Phase 6.5 concurrency probe):

- Added a reproducible concurrency probe:
  - `/home/user/chromecard/phase65_concurrency_probe.py`
  - the probe now supports `--max-workers` so client-side fan-out can be swept explicitly
- Successful baseline run from `k_client` against the direct proxy path:
  - `3` users
  - `4` protected requests per user
  - `12/12` requests succeeded
  - counter values were unique and contiguous from `6` to `17`
  - max observed latency was about `457 ms`
- A larger follow-up run exposed the current limit:
  - `5` users
  - `5` protected requests per user
  - `18/25` requests succeeded
  - failures returned TLS EOF / upstream-unavailable errors
  - successful counter values were still unique and contiguous from `18` to `35`
  - max observed latency was about `758 ms`
- Additional Phase 6.5 diagnosis:
  - fixed a keep-alive/body-drain bug in the HTTP/1.1 experiment so `k_server` no longer misparses follow-on requests as `{}POST`
  - added an upstream connection pool in `k_proxy`; the current default/test setting clamps `k_proxy -> k_server` to one pooled TLS connection
  - despite that change, a full fan-out run with `25` in-flight protected calls still fails on client-observed TLS EOFs
  - a worker-limited run now passes cleanly:
    - `5` users
    - `5` protected requests per user
    - `25/25` requests succeeded with `--max-workers 10`
  - raising client-side fan-out still breaks:
    - `22/25` requests succeeded with `--max-workers 15`
    - `15/25` requests succeeded with fully unbounded `25` workers in the latest rerun
- Current diagnosis:
  - the protected counter and session logic stay correct under load; successful values remain unique and contiguous
  - `k_proxy` and `k_server` can complete the requests that actually reach them
  - the primary collapse point in current testing is the client-facing Qubes forwarder on `9771`
  - `qvm_connect_9771.log` shows `qrexec-agent-data` / data-vchan failures and repeated `xs_transaction_start: No space left on device`
  - `qvm_connect_9780.log` also showed earlier qrexec failures, but the latest worker-threshold evidence points first to connection fan-out on `k_client -> k_proxy`
- Practical meaning:
  - the application logic is good for moderate concurrent use in the current prototype
  - the direct browser path appears stable around `10` in-flight protected calls in the current Qubes setup
  - the current concurrency ceiling is set by Qubes forwarding behavior rather than by the monotonic counter logic

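The worker-limited fan-out pattern the probe sweeps can be illustrated with an in-process stand-in for the protected counter. This sketch simulates the `k_server` monotonic counter locally instead of calling the real chain, so it only demonstrates the `--max-workers` fan-out shape and the uniqueness/contiguity invariant; names are illustrative:

```python
import threading
from concurrent.futures import ThreadPoolExecutor


class Counter:
    """Stand-in for the k_server monotonic counter: one lock per increment."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1
            return self._value


def fan_out(counter, users, requests_per_user, max_workers):
    """Issue users * requests_per_user 'protected calls' with bounded fan-out."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(counter.increment)
                   for _ in range(users * requests_per_user)]
        return sorted(f.result() for f in futures)


# Mirrors the 5-user / 5-request / --max-workers 10 run shape.
values = fan_out(Counter(), users=5, requests_per_user=5, max_workers=10)
```

In-process, all 25 values come back unique and contiguous regardless of worker count; in the real chain the same sweep is what exposed the Qubes forwarder ceiling rather than a counter bug.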
Session note (2026-04-25, in-VM forwarding test):

- Tested the intended in-VM forwarding path with `qvm-connect-tcp` instead of host-side `qrexec-client-vm`.
- Forwarders start and bind locally:
  - in `k_client`: `qvm-connect-tcp 8771:k_proxy:8771` binds `localhost:8771`
  - in `k_proxy`: `qvm-connect-tcp 8780:k_server:8780` binds `localhost:8780`
- But the actual client->proxy connection is still refused when used:
  - the `k_client` forward log shows `Request refused`
  - `socat` reports child exit status `126` and `Connection reset by peer`
- Local login on `k_proxy` reaches the app but fails on the auth dependency:
  - `POST /session/login` to `http://127.0.0.1:8771` returns `401`
  - details: `Missing dependency: python-fido2 ... No module named 'fido2'`
- `k_server` was not reached during this login test; the current `k_server.log` only shows `/health`.

Session note (2026-04-25, after python3-fido2 install):

- `k_proxy` was restarted after the `python3-fido2` installation and now listens again on `127.0.0.1:8771`.
- The previous Python import blocker is resolved; local login now reaches the CTAP probe path.
- Current local login result on `k_proxy`:
  - `{"ok": false, "error": "card auth failed", "details": "No CTAP HID devices found."}`
- Current forwarded login result from `k_client` is still not completing:
  - `curl http://127.0.0.1:8771/session/login` -> `Empty reply from server`
  - `qvm_connect_8771.log` still shows repeated `Request refused` and child exit status `126`
- Practical meaning:
  - the Python dependency issue in `k_proxy` is fixed
  - card access inside `k_proxy` is currently missing again at the CTAP/HID level
  - `k_client -> k_proxy` qrexec forwarding is still effectively denied/refused

Session note (2026-04-25, card reattached):

- Card visibility in `k_proxy` is restored again:
  - `/dev/hidraw0` and `/dev/hidraw1` are present
  - `fido2_probe.py --list` detects ChromeCard on `/dev/hidraw0`
- Local login on `k_proxy` now succeeds again:
  - `POST /session/login` on `127.0.0.1:8771` returns `200`
  - session creation for user `alice` succeeded
- The remaining failure is isolated to the client-facing qrexec path:
  - `k_client` -> `localhost:8771` through `qvm-connect-tcp` still returns `Empty reply from server`
  - `qvm_connect_8771.log` still shows `Request refused`

Session note (2026-04-25, clean forward retest):

- Re-ran both forwards and exercised each hop immediately after local bind.
- `k_proxy -> k_server`:
  - `qvm-connect-tcp 8780:k_server:8780` binds `localhost:8780` in `k_proxy`
  - the first real `POST /resource/counter` through that forward returns `Empty reply from server`
  - `qvm_connect_8780.log` then records `Request refused` with child exit status `126`
- `k_client -> k_proxy`:
  - `qvm-connect-tcp 8771:k_proxy:8771` binds `localhost:8771` in `k_client`
  - the first real `POST /session/login` through that forward returns `Empty reply from server`
  - `qvm_connect_8771.log` records `Request refused` with child exit status `126`
- Conclusion from this retest:
  - both forwards fail in the same way
  - local bind succeeds, but the actual qrexec `qubes.ConnectTCP` request is refused when the first connection is attempted

Session note (2026-04-25, dom0 policy fix validated):

- After changing the dom0 policy to use explicit destination VMs instead of `@default` for `qubes.ConnectTCP`, both forwards now work.
- Verified hop 1:
  - in `k_proxy`, `POST http://127.0.0.1:8780/resource/counter` with `X-Proxy-Token: dev-proxy-token` succeeds
  - the response included counter value `1`
- Verified hop 2:
  - in `k_client`, `POST http://127.0.0.1:8771/session/login` succeeds
  - a session token is returned through the `k_client -> k_proxy` forward
- Verified full end-to-end flow from `k_client`:
  - login succeeded and returned a session token
  - `POST /session/status` succeeded
  - `POST /resource/counter` succeeded twice with upstream values `2` and `3`
  - `POST /session/logout` succeeded
  - post-logout `POST /resource/counter` correctly returned `401 invalid or expired session`
- Current conclusion:
  - the `k_client -> k_proxy -> k_server` chain is operational
  - session reuse and logout behavior are working in the current prototype

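A dom0 policy of the shape described above might look like the following sketch, assuming the Qubes 4.x `/etc/qubes/policy.d/` policy format. The file name and exact lines are illustrative, not a copy of the deployed policy; the key point is the explicit destination VM in place of `@default`:

```
# /etc/qubes/policy.d/30-chromecard.policy  (illustrative path)
qubes.ConnectTCP  +8771  k_client  k_proxy   allow
qubes.ConnectTCP  +8780  k_proxy   k_server  allow
```

With explicit destinations, the `qubes.ConnectTCP` request issued by `qvm-connect-tcp` matches a concrete rule instead of relying on `@default` resolution, which is what cleared the `Request refused` failures.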
Session note (2026-04-25, live chain re-validation and regression helper):

- Re-validated the split-VM chain after restart using the current TLS/localhost-forward shape:
  - `k_client` local `9771` -> `k_proxy:8771`
  - `k_proxy` local `9780` -> `k_server:8780`
- Verified live service state during this run:
  - `k_server` local `https://127.0.0.1:8780/health` returned `ok=true`
  - `k_proxy` local `https://127.0.0.1:8771/health` returned `ok=true`
  - `k_proxy` local `https://127.0.0.1:9780/health` reached `k_server`
  - `k_client` local `https://127.0.0.1:9771/health` reached `k_proxy`
- Verified end-to-end behavior from `k_client`:
  - login for `alice` succeeded
  - session status succeeded
  - protected counter calls succeeded with session reuse
  - logout succeeded
  - post-logout protected access returned `401 invalid or expired session`
- Added a reproducible regression helper at:
  - `/home/user/chromecard/phase5_chain_regression.sh`
- Verified the new helper end-to-end on 2026-04-25:
  - the default run uses `20` requests at parallelism `8`
  - returned values were unique and gap-free
  - the latest verified counter range from the helper was `43..62`
- Practical meaning:
  - the current blocker is no longer Qubes forwarding for the base Phase 5 chain
  - the current next-step gap is auth semantics, not transport bring-up

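The "unique and gap-free" acceptance check the helper relies on can be expressed as a small predicate. This is a sketch of the invariant only, not the helper's actual shell code:

```python
def unique_and_gap_free(values):
    """True when counter values are unique and contiguous (e.g. 43..62).

    Uniqueness: no value returned twice (no lost or duplicated increments).
    Contiguity: max - min + 1 equals the count (no skipped increments).
    """
    if not values:
        return False
    seen = set(values)
    return len(seen) == len(values) and max(seen) - min(seen) + 1 == len(seen)
```

For the verified run above, the 20 returned values spanning `43..62` satisfy exactly this predicate.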
Session note (2026-04-25, direct FIDO2 auth attempt):

- Added an experimental direct FIDO2 path in `/home/user/chromecard/k_proxy_app.py`:
  - runtime switch: `--auth-mode fido2-direct`
  - the default runtime remains `probe`
- Added a low-level CTAP helper at `/home/user/chromecard/raw_ctap_probe.py`:
  - purpose: bypass `Fido2Client` and exercise raw CTAP2 `makeCredential` / `getAssertion`
  - logs keepalive callbacks and exact transport exceptions for host-side debugging
- Direct-mode intent:
  - replace the legacy `fido2_probe.py --json` session gate
  - perform real credential registration and real assertion verification locally in `k_proxy` with `python-fido2`
- Current observed blocker on `k_proxy`:
  - direct `make_credential` fails with `No compatible PIN/UV protocols supported!`
  - it reproduces outside the app in a minimal VM-side probe, so this is not just a handler bug
  - the likely cause is the current card / `python-fido2` stack selecting a PIN/UV-dependent CTAP2 path for registration
- Additional probe:
  - a forced CTAP1 fallback experiment did not fail immediately, but also did not complete quickly enough to treat as a usable working path in this turn
- Latest live blocker (2026-04-25, after refactor/deploy):
  - direct probing is currently blocked before the card Yes/No UI stage because `k_proxy` no longer sees any CTAP HID device
  - `ssh k_proxy "python3 /home/user/chromecard/fido2_probe.py --list"` now returns `No CTAP HID devices found.`
  - `ssh k_proxy "ls -l /dev/hidraw*"` shows no `hidraw` nodes at the moment
- Follow-up after card reattach (2026-04-25):
  - `k_proxy` again shows `/dev/hidraw0` and `/dev/hidraw1`
  - a direct node-open check confirms `/dev/hidraw0` is readable as the normal user
  - `/dev/hidraw1` still returns `PermissionError: [Errno 13] Permission denied`
  - the raw `makeCredential` probe still produced no on-card registration prompt, so the host path is hanging before the firmware Yes/No UI
  - hidraw mapping confirms `/dev/hidraw0` is the FIDO interface:
    - the report descriptor begins with usage page `0xF1D0`
    - `get_descriptor('/dev/hidraw0')` returns `report_size_in=64`, `report_size_out=64`
    - `/dev/hidraw1` is a separate vendor HID interface with usage page `0xFF00`
  - stale Python probes holding `/dev/hidraw0` were cleared, but the behavior did not change
  - a manual CTAPHID `INIT` packet sent directly to `/dev/hidraw0` writes successfully and still gets no response within `3s`
  - this places the current blocker below `python-fido2`: raw HID traffic is not getting a CTAPHID reply after the latest reattach
  - `webauthn_local_demo.py` was re-run inside `k_proxy` after reattach and still produced no card prompt on register
  - that confirms the current failure is below both the browser WebAuthn path and the direct `python-fido2` path
  - after a full power cycle and reattach, manual CTAPHID `INIT` on `/dev/hidraw0` started replying again
  - a `webauthn_local_demo.py` register in `k_proxy` then succeeded again, confirming the card transport was recovered by the power cycle
  - direct host-side registration via `raw_ctap_probe.py --device-path /dev/hidraw0 make-credential --rp-id localhost` also succeeded again after pressing `yes` on the card
  - returned credential material included:
    - `fmt="none"`
    - credential id `7986cfcf45663f625eb7fc7b52640d83cf3d0e8a6627eeadaba3126406b1e0b8`
  - this confirms the recovered direct path now reaches the real card confirmation UI and completes CTAP2 `makeCredential`
- `k_proxy_app.py --auth-mode fido2-direct` was then patched to:
  - use low-level CTAP2 instead of the higher-level `Fido2Client` registration/assertion calls
  - open the explicit FIDO node `/dev/hidraw0` instead of scanning devices
  - cache the direct device handle instead of reopening it for each operation
- Current remaining blocker:
  - it was narrowed through repeated retries to a mix of hidraw node disappearance, older `python-fido2` response-mapping requirements, and CTAP payload-shape mismatches
- Latest verified state:
  - after reattach with healthy CTAPHID `INIT`, real app registration through `k_proxy_app.py --auth-mode fido2-direct` now succeeds
  - `/enroll/register` for `directtest` returned `ok=true` and `has_credential=true`
  - real app login through `/session/login` for `directtest` also now succeeds after card confirmation
  - the returned `auth_mode` is `fido2_assertion`
  - session status succeeds
  - protected `/resource/counter` access succeeds again through `k_proxy -> k_server`
  - logout succeeds
  - post-logout protected access returns `401`
  - direct mode no longer depends on a fixed `/dev/hidraw0` path
  - after a later re-enumeration where the card appeared on `/dev/hidraw1`, `k_proxy_app.py` was patched to probe available `/dev/hidraw*` nodes and select the first working CTAPHID device automatically
  - browser registration then worked again without changing the configured `--direct-device-path`
  - temporary direct-mode hidraw lifetime logging has been removed again after diagnosis
- `/home/user/chromecard/phase5_chain_regression.sh` now supports the direct-auth baseline via:
  - `--interactive-card`
  - `--login-timeout`
  - `--expect-auth-mode fido2_assertion`
- Practical outcome for this session:
  - the experimental direct mode is kept in code for follow-up work
  - the deployed `k_proxy` service was restored to default `probe` mode
  - verified that `alice` login still works afterward, so the validated Phase 5 baseline remains intact

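The manual CTAPHID `INIT` check described above can be sketched as packet construction alone. The framing follows the CTAPHID specification (broadcast channel `0xFFFFFFFF`, command byte with the high bit set, big-endian payload length, zero padding to the 64-byte report); the actual write/read against `/dev/hidraw0` and the 3-second timeout are omitted here, and helper names are illustrative:

```python
import os

BROADCAST_CID = b"\xff\xff\xff\xff"  # channel ID used before a channel is assigned
CTAPHID_INIT = 0x06                  # INIT command number
REPORT_SIZE = 64                     # matches report_size_out=64 seen on /dev/hidraw0


def build_init_packet(nonce):
    """Build a CTAPHID INIT initialization packet for one 64-byte HID report.

    Layout: 4-byte CID, 1 command byte (0x80 | cmd), 2-byte big-endian
    payload length, 8-byte nonce, then zero padding to the report size.
    """
    assert len(nonce) == 8, "CTAPHID INIT carries an 8-byte nonce"
    header = BROADCAST_CID + bytes([0x80 | CTAPHID_INIT]) + len(nonce).to_bytes(2, "big")
    return (header + nonce).ljust(REPORT_SIZE, b"\x00")


pkt = build_init_packet(os.urandom(8))
```

A healthy card echoes the nonce back in its INIT response on a newly allocated channel; in the failed state above, the write succeeded but no response packet ever arrived, which is what placed the blocker below `python-fido2`.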
Session note (2026-04-27, fido2-direct end-to-end browser validation):

- Deployed all three services (`k_server`, `k_proxy`, `k_client_portal`) in the split-VM chain via SSH/SCP.
- `k_proxy` was restarted with `--auth-mode fido2-direct`.
- The full browser flow was verified from `k_client` at `http://127.0.0.1:8766` with the real card:
  - Register: `makeCredential` triggered on the card, button press confirmed.
  - Login: `getAssertion` triggered on the card, button press confirmed.
  - Counter: `k_server` returned an incremented value.
  - Logout: session correctly invalidated.
- Confirmed: probe mode showed a stale `directtest` enrollment (no `credential_data_b64`) from an earlier session; that is expected.
- Bug found and fixed: clicking Register after Login cleared the client-side session token but left the server-side session alive; the fix adds a best-effort `/session/logout` call to `k_proxy` before re-enrolling.
- Current deployed service state:
  - `k_server`: `https://127.0.0.1:8780`, TLS, proxy token `dev-proxy-token`
  - `k_proxy`: `https://127.0.0.1:8771`, TLS, `--auth-mode fido2-direct`, upstream `https://127.0.0.1:9780`
  - `k_client`: `http://127.0.0.1:8766`, proxy-base-url `https://127.0.0.1:9771`
  - forwards: `k_proxy` `9780 -> k_server:8780`, `k_client` `9771 -> k_proxy:8771`
- Unit test suite added: `tests/test_k_proxy.py` (100 tests, all passing; run locally with `python3 -m unittest tests/test_k_proxy.py`).

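The best-effort logout fix can be sketched as a helper that swallows all transport errors so re-enrollment proceeds either way. The actual fix lives in the portal page's browser logic; this Python shape, the helper name, and the `{"token": ...}` body are assumptions for illustration only:

```python
import json
import ssl
import urllib.request


def best_effort_logout(proxy_base_url, token, timeout=2.0):
    """Try POST /session/logout; never raise, so re-enrollment can proceed.

    Returns True on a 2xx response, False on any error (the caller ignores
    the result either way -- it only matters that the old server-side
    session is usually cleaned up before a fresh Register).
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # prototype uses a local CA; verification relaxed here
    req = urllib.request.Request(
        proxy_base_url.rstrip("/") + "/session/logout",
        data=json.dumps({"token": token}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout, context=ctx) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False  # best-effort: unreachable proxy must not block re-enrollment
```

The key design choice is that failure is silent: if `k_proxy` is unreachable, Register still runs, and the stale session simply ages out on its own.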
Session note (2026-04-26, markdown maintenance re-scan):

- Re-read the maintained workspace markdown set:
  - `/home/user/chromecard/Setup.md`
  - `/home/user/chromecard/Workplan.md`
  - `/home/user/chromecard/PHASE5_RUNBOOK.md`
- Re-checked that the currently referenced runtime artifacts still exist in the workspace:
  - `k_proxy_app.py`
  - `k_server_app.py`
  - `k_client_portal.py`
  - `phase5_chain_regression.sh`
  - `raw_ctap_probe.py`
  - `generate_phase2_certs.py`
  - `tls/phase2/ca.crt`
  - `tls/phase2/k_proxy.crt`
  - `tls/phase2/k_server.crt`
- Current documentation conclusion:
  - the workspace still supports the HTTPS localhost-forwarded split-VM chain as the active baseline
  - direct FIDO2 enrollment/login support exists in code and is documented as an optional follow-up path, not the default deployed runtime
  - the main unresolved engineering limit is still the higher-fan-out Qubes forwarding ceiling on the browser-facing path, not basic chain bring-up

## Known FIDO2 Transport Boundary

- `python3 /home/user/chromecard/fido2_probe.py --list`
- Then:
  - `python3 /home/user/chromecard/fido2_probe.py --json`
- For raw CTAP debugging on `k_proxy`:
  - `python3 /home/user/chromecard/raw_ctap_probe.py info`
  - `python3 /home/user/chromecard/raw_ctap_probe.py make-credential --rp-id localhost`

4. Run local WebAuthn bring-up demo.
   - `python3 /home/user/chromecard/webauthn_local_demo.py`

## Open Gaps To Resolve

- Whether a full `CR_SDK_CK-main` checkout (with role directories) is available locally.
- Whether server-side code should be pulled now for broader CIP/WebAuthn integration testing.
- Exact enrollment process interface running in `k_client` and how it reaches `k_proxy`.
- Upgrade the Phase 5 auth gate from card-presence probe to full WebAuthn assertion verification for session creation.
- Determine the viable path for real credential registration on `k_proxy`:
  - enable whatever PIN/UV support the card expects for direct CTAP2 registration, or
  - adopt a different one-time enrollment path that can persist real credential material for later direct assertion verification.
- Restore card visibility inside `k_proxy` so direct probes can reach the card UI again:
  - `/dev/hidraw*` must exist in `k_proxy`
  - `fido2_probe.py --list` must detect the card before the raw Yes/No probe can continue
- Identify why the host probe hangs before the card UI even with `/dev/hidraw0` readable:
  - determine why CTAPHID `INIT` on the correct FIDO hidraw node receives no reply after reattach
  - likely recovery targets are the Qubes USB mediation path, a fresh USB reassign, or a `k_proxy` VM/device reset
- Precise ownership split of session/user state between `k_proxy` and `k_server`.
- Concrete concurrency limits and acceptance criteria (requests/sec, parallel clients, latency/error thresholds).

# Workplan

Last updated: 2026-04-27

This is the execution plan for making ChromeCard FIDO2 development and validation reproducible on this machine.

- Treat `/home/user/chromecard/CR_SDK_CK-main` as read-only.
- Keep helper scripts such as `fido2_probe.py` and `webauthn_local_demo.py` at `/home/user/chromecard`.
- Target deployment model is Qubes OS with 3 AppVMs based on `debian-13-xfce`: `k_client`, `k_proxy`, `k_server`.
- Current authenticator link is card->`k_proxy` (USB), but the architecture must allow migration to wireless phone-mediated validation.
- VM execution path is SSH-first for experiments: `ssh <host> <cmd>` and `scp <file> <host>:~`.

## Goals

## Phase 0: Qubes VM Baseline (Blocking)

1. Provision/verify AppVMs.
   - Ensure `k_client`, `k_proxy`, `k_server` exist and are based on `debian-13-xfce`.

2. Assign functional responsibilities.
   - `k_client`: browser client + enrollment process.

Exit criteria:

- All 3 VMs exist, boot, and have clearly defined service ownership.

## Phase 1: Qubes Firewall Policy

1. Enforce allowed forward paths only.
   - Allow `k_client` outbound TLS only to `k_proxy` service port(s).

Exit criteria:

- Policy matches intended chain and is test-verified.

Status (2026-04-24, remote diagnostics):

- Confirmed the active blocker remains Phase 1 network policy/pathing.
- Evidence from live VM probes:
  - `k_client (10.137.0.16) -> k_proxy (10.137.0.12:8771)`: TCP timeout.
  - `k_proxy (10.137.0.12) -> k_server (10.137.0.13:8780)`: upstream timeout.
- Local service health inside each VM is good, so the failure is inter-VM reachability, not local process startup.

Status (2026-04-25, after restart and service recovery):

- Refined blocker: this is currently a qrexec/`qubes.ConnectTCP` refusal problem, not an app-local listener problem.
- Current evidence:
  - `k_proxy` local `/health` is up on `127.0.0.1:8771`
  - `k_server` local `/health` is up on `127.0.0.1:8780`
  - `qrexec-client-vm k_proxy qubes.ConnectTCP+8771` -> `Request refused`
  - `qrexec-client-vm k_server qubes.ConnectTCP+8780` -> `Request refused`
- Immediate next action for Phase 1:
  - verify and fix the dom0 policy/mechanism that should permit `qubes.ConnectTCP` forwarding for the chain ports

Status (2026-04-25, dom0 policy fix validated):

- The forwarding blocker is cleared for the current prototype shape.
- Verified working chain:
  - `k_client` localhost `9771` -> `k_proxy:8771`
  - `k_proxy` localhost `9780` -> `k_server:8780`
- Verified outcome:
  - TLS health checks pass on both hops
  - end-to-end login, session status, protected counter access, and logout all succeed from `k_client`
- Phase 1 is complete for the current localhost-forwarded `qubes.ConnectTCP` design.

## Phase 2: TLS Certificates and Service Endpoints

1. Certificate model.

Exit criteria:

- Mutual TLS trust decisions are documented and tested.
- HTTPS calls succeed on both links with expected cert validation.

Status (2026-04-25):

- Implemented HTTPS listeners in both prototype services.
- Added local CA + service certificate generation in `generate_phase2_certs.py`.
- Verified that the working Qubes path is localhost forwarding plus TLS:
  - `k_client` local `9771` forwards to `k_proxy:8771`
  - `k_proxy` local `9780` forwards to `k_server:8780`
- Verified cert validation on both hops using the generated CA.
- Verified the end-to-end HTTPS flow:
  - `k_client -> k_proxy` login over TLS
  - `k_proxy -> k_server` protected counter call over TLS
  - session reuse still works across repeated protected requests
- Phase 2 is now effectively complete for the current prototype shape.

## Phase 2.5: Define State Ownership and Concurrency Model

1. State ownership.

Exit criteria:

- Architecture clearly documents state authority and race-free update rules.

Next action (2026-04-25):

- Move into Phase 2.5 and make the current prototype decisions explicit:
  - authority for session state remains `k_proxy`
  - `k_server` remains the authority for the protected counter/resource state
  - localhost Qubes forwarders are part of the active runtime model for the two TLS hops
  - define concurrency assumptions and limits around the session store, forwarders, and counter access

Status (2026-04-25):

- The current ownership model is now explicit:
  - `k_proxy` is authoritative for session creation, expiry, lookup, and logout
  - `k_server` is authoritative for the protected monotonic counter
  - `k_client` is a client only; it holds bearer tokens but is not a state authority
- The current validation boundary is explicit:
  - `k_proxy` validates bearer tokens against its in-memory session store
  - `k_server` trusts only requests that arrive with the configured `X-Proxy-Token`
  - `k_server` does not currently validate end-user session tokens directly
- The current concurrency strategy is explicit:
  - `k_proxy` uses `ThreadingHTTPServer` plus one lock around the in-memory session map
  - `k_server` uses `ThreadingHTTPServer` plus one lock around counter increments
  - upstream HTTPS calls from `k_proxy` are made outside the session-store lock
- Current runtime limits are explicit:
  - sessions are process-local and disappear on `k_proxy` restart
  - counter state is process-local and resets on `k_server` restart
  - transport relies on Qubes localhost forwarders `9771` and `9780`
- Phase 2.5 is complete for the current prototype shape.


## Phase 3: Recover Basic Device Visibility on `k_proxy` (Blocking)

1. Verify physical + USB enumeration path.
@@ -129,6 +196,11 @@ Exit criteria:

Exit criteria:

- Register and login both complete with card interaction prompts.

Status (2026-04-24):

- Completed in `k_proxy` using `http://localhost:8765`.
- Registration result: `ok=true`, `username=alice`, `credential_count=1`.
- Authentication result: `ok=true`, `username=alice`, `authenticated=true`.

## Phase 5: Implement Proxy Auth + Session Reuse

1. Authenticate via card once per session window.
@@ -148,6 +220,47 @@ Exit criteria:

- Repeated authorized requests do not require card interaction until session expiry.
- Expired/invalid sessions are correctly rejected.

Status (2026-04-24):

- Started with a runnable prototype:
  - `/home/user/chromecard/k_proxy_app.py`
  - `/home/user/chromecard/k_server_app.py`
  - `/home/user/chromecard/PHASE5_RUNBOOK.md`
- Implemented in prototype:
  - session create/status/logout endpoints in `k_proxy`
  - TTL-based server-side session store with expiry garbage collection
  - protected monotonic counter endpoint in `k_server` with thread-safe increments
  - proxy forwarding from `k_proxy` to `k_server` using a shared upstream token
- The current auth gate for session creation is the card-presence probe (`fido2_probe.py --json`), pending upgrade to the full assertion verification path.

Status (2026-04-25):

- Prototype services were restarted successfully after VM restart.
- Current split-VM test shape:
  - `k_proxy` listening on `127.0.0.1:8771`
  - `k_server` listening on `127.0.0.1:8780`
- End-to-end validation is now passing through the live chain from `k_client`.
- Current verified behavior:
  - login succeeds for `alice`
  - session status succeeds
  - repeated protected counter requests succeed with session reuse
  - logout succeeds
  - post-logout protected access returns `401`
- Added repeatable host-side regression helper:
  - `/home/user/chromecard/phase5_chain_regression.sh`
- Phase 5 is complete for the current prototype semantics.
- Experimental follow-up in code:
  - `k_proxy_app.py` now also has `--auth-mode fido2-direct`
  - this mode attempts direct credential registration and direct assertion verification with `python-fido2`
  - it is not the deployed default because direct registration currently fails on `k_proxy` with `No compatible PIN/UV protocols supported!`
  - `/home/user/chromecard/raw_ctap_probe.py` now exists for lower-level CTAP2 probing with keepalive/error logging
  - latest retry result: after reattaching the card, `k_proxy` again exposes `/dev/hidraw0` and `/dev/hidraw1`, but raw `makeCredential` still reaches no Yes/No card prompt
  - `/dev/hidraw0` opens successfully as the normal user; `/dev/hidraw1` is still permission-denied
  - manual CTAPHID testing now shows `/dev/hidraw0` is the correct FIDO interface, yet a direct `INIT` write gets no response at all
  - rerunning `webauthn_local_demo.py` inside `k_proxy` also still gives no card prompt, so the current break is below both browser WebAuthn and direct host probes
  - after a full power cycle and reattach, manual CTAPHID `INIT` replies again and browser registration in `webauthn_local_demo.py` succeeds again
  - direct `raw_ctap_probe.py --device-path /dev/hidraw0 make-credential --rp-id localhost` now also succeeds again after card confirmation
  - `k_proxy_app.py --auth-mode fido2-direct` has been moved onto low-level CTAP2 with hidraw auto-detection; it still accepts `--direct-device-path`, but no longer breaks if the card re-enumerates onto `/dev/hidraw1`
  - after repeated fixes for hidraw lifetime, VM-side `python-fido2` response mapping, and CTAP payload shape, real app registration now succeeds for `directtest`

## Phase 5.5: Implement Dummy Resource + Access Policy on `k_server`

1. Protected dummy resource.
@@ -164,6 +277,14 @@ Exit criteria:

- Authorized requests obtain consistent increasing values.
- Unauthorized requests are rejected.

Status (2026-04-25):

- The protected counter resource is implemented and validated in the live split-VM chain.
- Verified behavior:
  - authorized requests from `k_proxy` obtain increasing values
  - unauthorized post-logout requests from `k_client` are rejected with `401`
  - `20` concurrent protected requests through the chain returned unique, gap-free values
- Phase 5.5 is complete for the current prototype shape.
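
The unique, gap-free property verified above can be reproduced in isolation with a thread-safe monotonic counter sketch (class and method names are illustrative, not the real `k_server_app.py` code):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class Counter:
    """Thread-safe monotonic counter mirroring the k_server resource."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._value = 0

    def increment(self) -> int:
        # Read-modify-write must happen under the lock so that
        # concurrent callers never observe the same value twice.
        with self._lock:
            self._value += 1
            return self._value

counter = Counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    values = list(pool.map(lambda _: counter.increment(), range(20)))

# 20 concurrent increments must yield unique, contiguous values 1..20.
print(sorted(values) == list(range(1, 21)))  # -> True
```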

## Phase 6: Integrate Client Enrollment + Proxy Login Flow

1. Enrollment process in `k_client`.
@@ -176,11 +297,107 @@ Exit criteria:

3. Browser flow in `k_client`.
   - Browser traffic goes only to `k_proxy`.
   - Validate end-to-end login to the `k_server` resource through the proxy chain.

Immediate next action:

- Preserve the now-working direct auth path as a tested option while keeping the default deployed baseline stable.
- Verified end-to-end state:
  - direct `/enroll/register` succeeds for `directtest`
  - direct `/session/login` succeeds for `directtest`
  - `/session/status` succeeds
  - protected `/resource/counter` succeeds through `k_proxy -> k_server`
  - `/session/logout` succeeds
  - post-logout protected access returns `401`
- Next work should be cleanup/hardening:
  - decide whether to keep `directtest` enrollment
  - rerun `phase5_chain_regression.sh --interactive-card --expect-auth-mode fido2_assertion` against the current direct-auth baseline
  - decide when `fido2-direct` should replace `probe` as the default deployed auth mode

Exit criteria:

- Enrollment and login both function end-to-end via `k_client -> k_proxy -> k_server`.

Status (2026-04-25):

- Added first `k_client` implementation at `/home/user/chromecard/k_client_portal.py`.
- Current prototype flow:
  - browser now targets `k_proxy` directly over `https://127.0.0.1:9771`
  - `k_client_portal.py` also serves a local browser flow page on `http://127.0.0.1:8766`
  - `k_proxy` continues to authenticate with the card and forward to `k_server`
  - the `k_client` page now also lists registered users from `k_proxy`
  - the `k_client` page can unregister users from the browser
  - the portal login action now uses the current username field instead of only the remembered local user
  - a Playwright regression spec now exists for the browser flow in `tests/k_client_portal.spec.js`
  - the Playwright browser regression has now passed end-to-end once from this host against a forwarded portal URL
- Verified end-to-end through the portal:
  - enroll `alice`
  - login succeeds
  - session status succeeds
  - protected counter succeeds repeatedly with session reuse
  - logout succeeds
- Enrollment contract progress:
  - `k_proxy` now exposes prototype enrollment endpoints
  - proxy-side enrollment storage exists and is checked before login is allowed
  - direct browser/API traffic can now use those proxy endpoints without going through the local bridge
- Phase 6 is materially further along for the current prototype shape:
  - direct browser target is on `k_proxy`
  - login/resource flow is integrated on the direct proxy path
  - enrollment now has a real client->proxy path
  - the `k_client` page is now a usable demo/operator surface in addition to the direct proxy path
  - final enrollment semantics are still provisional

Status (2026-04-25, enrollment hardening):

- Added a more explicit provisional enrollment contract in `k_proxy`:
  - username normalization and validation
  - optional `display_name`
  - separate create, update, delete, status, and list operations
  - delete invalidates existing sessions for that username
- Verified the hardened behaviors on the direct proxy path.
- Phase 6 is now strong enough to treat the browser/proxy flow as a stable prototype baseline.
- The remaining reason Phase 6 is not "final" is product semantics, not missing basic mechanics:
  - whether enrollment should require card presence
  - what user attributes belong in enrollment
  - what re-enroll and recovery should mean
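
Username normalization and validation could look like the sketch below. The pattern and length limits are assumed for illustration; the actual `k_proxy` contract may differ:

```python
import re

# Assumed policy: lowercase alphanumeric start, then 2-31 more chars
# from [a-z0-9_-]. Purely illustrative, not the real k_proxy rule.
USERNAME_RE = re.compile(r"[a-z0-9][a-z0-9_-]{2,31}")

def normalize_username(raw: str) -> str:
    """Trim and lowercase, then validate against the assumed pattern."""
    name = raw.strip().lower()
    if not USERNAME_RE.fullmatch(name):
        raise ValueError(f"invalid username: {raw!r}")
    return name

print(normalize_username("  Alice "))  # -> alice
```

Normalizing before the uniqueness check matters: `"Alice"` and `"alice "` must map to the same stored enrollment key.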

Status (2026-04-25, Phase 6.5 initial concurrency results):

- Added reproducible probe script at `/home/user/chromecard/phase65_concurrency_probe.py`.
- Probe now supports `--max-workers` so client-side fan-out can be tested separately from total request count.
- Moderate direct-path concurrency passes:
  - `3 users x 4 requests`
  - `12/12` successful protected calls
  - counter values remained unique and contiguous
- Larger direct-path concurrency currently fails:
  - `5 users x 5 requests`
  - only `18/25` successful protected calls
  - failed calls report TLS EOF / upstream-unavailable errors
- Follow-up findings are more precise:
  - body-drain handling was fixed for the HTTP/1.1 keep-alive experiment
  - `k_proxy -> k_server` upstream concurrency is now clampable and currently tested at one pooled connection
  - `5 users x 5 requests` passes at `25/25` when client fan-out is limited to `--max-workers 10`
  - the same total load still fails at higher fan-out:
    - `22/25` at `--max-workers 15`
    - `15/25` at fully unbounded `25` workers in the latest rerun
- Current bottleneck is still not counter correctness:
  - successful results still show unique, contiguous counter values
  - `k_proxy` and `k_server` complete the requests that actually arrive
- Current likely bottleneck is the client-facing Qubes forwarding layer:
  - `qvm_connect_9771.log` shows qrexec data-vchan failures
  - observed message includes `xs_transaction_start: No space left on device`
  - `qvm_connect_9780.log` showed earlier failures too, but the latest threshold test points first to connection fan-out on `k_client -> k_proxy`
- Phase 6.5 is therefore started but not complete:
  - application-level concurrency looks acceptable at moderate load
  - current working envelope is roughly `10` in-flight protected calls on the direct browser path
  - higher-load failures still need Qubes forwarding diagnosis before the phase can be closed
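
The idea behind `--max-workers` (decoupling total request count from in-flight fan-out) can be sketched with a bounded thread pool. The names below are illustrative; the real probe issues HTTPS requests where `protected_call` is a stand-in:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

in_flight = 0
peak = 0
gate = threading.Lock()

def protected_call(i: int) -> int:
    """Stand-in for one protected counter request; tracks peak concurrency."""
    global in_flight, peak
    with gate:
        in_flight += 1
        peak = max(peak, in_flight)
    # ... a real probe would perform the HTTPS request here ...
    with gate:
        in_flight -= 1
    return i

# 25 total requests, but at most 10 in flight at once: the total load
# stays the same while fan-out is clamped to the working envelope.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(protected_call, range(25)))

print(len(results), "requests completed; peak fan-out was", peak)
```

This is why `5 users x 5 requests` can pass at `--max-workers 10` while the same total fails unbounded: the forwarding layer sees fewer simultaneous connections, not fewer requests.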

Status (2026-04-25, Phase 5 regression helper):

- Added repeatable split-VM regression helper:
  - `/home/user/chromecard/phase5_chain_regression.sh`
- Verified helper result on the live chain:
  - `20` requests at parallelism `8`
  - the login/session-status/counter/logout sequence completed successfully
  - returned counter values were unique and gap-free
  - latest verified helper range was `43..62`
- Current implication:
  - the Phase 5 baseline is now reproducible
  - next work should target auth semantics rather than basic chain bring-up

## Phase 6.5: Concurrency and Multi-Client Test Setup

1. Single-VM concurrency tests.
@@ -234,6 +451,79 @@ Exit criteria:

- Re-scan relevant `.md` files before each new execution cycle and reconcile drift.
- Record date-stamped session notes when priorities or blockers change.

Status (2026-04-24, markdown maintenance):

- Re-scanned the active workspace Markdown set and the main source-tree reference docs.
- No workplan phase change was required from this pass.
- The ongoing documentation watch item remains path drift in `CR_SDK_CK-main/README_HOST.md`, which still uses historical `./scripts/...` helper locations instead of workspace-root helper paths.
- Operational note: the markdown scan path now runs cleanly after policy adjustment when invoked without a login shell.

Status (2026-04-24, chain probe retry):

- Phase 1 remains blocked, but the failure point is now narrowed further:
  - the current refusal occurs at Qubes `qubes.ConnectTCP` policy/service evaluation for ports `22`, `8770`, and `8780`
  - this happens before any end-to-end app-level request can be retried
- Practical implication:
  - do not spend time on `k_proxy_app.py` / `k_server_app.py` request handling until qrexec forwarding permits the intended hops again
  - the next recovery action is to fix/activate the relevant Qubes `qubes.ConnectTCP` policy and then re-run the qrexec bridge checks before testing HTTP flow

Status (2026-04-25, post-restart probe):

- Corrected the client-facing proxy port reference to `8771`.
- SSH access to `k_proxy` and card visibility recovered after VM restart.
- New immediate blockers:
  - `k_proxy` service not listening on `127.0.0.1:8771`
  - `k_server` service not listening on `127.0.0.1:8780`
  - qrexec forwarding for `8771` and `8780` still returns `Request refused`
- The next retry should start services first, then re-test qrexec forwarding, and only then attempt the end-to-end client flow.

Status (2026-04-25, service restart):

- Local VM services are running again on the intended loopback ports:
  - `k_server`: `127.0.0.1:8780`
  - `k_proxy`: `127.0.0.1:8771`
- Phase 1 remains blocked specifically by qrexec policy/forwarding refusal on those ports.
- The next action is no longer app startup; it is fixing the `qubes.ConnectTCP` allow path for `8771` and `8780`.

Status (2026-04-25, in-VM forwarding test):

- Verified that using `qvm-connect-tcp` inside the source VMs still does not complete the client->proxy hop:
  - the bind succeeds locally, but the first real connection gets `Request refused`
- An independent app-layer blocker was also found in `k_proxy`:
  - `python-fido2` is missing there, so local `/session/login` currently fails before card auth can succeed
- Current ordered blockers:
  - first: effective Qubes/qrexec allow path for `k_client -> k_proxy:8771`
  - second: install `python-fido2` in `k_proxy`
  - third: re-test end-to-end login and then the proxy->server counter flow

Status (2026-04-25, after python3-fido2 install):

- The `python3-fido2` blocker in `k_proxy` is resolved.
- Updated ordered blockers:
  - first: effective Qubes/qrexec allow path for `k_client -> k_proxy:8771`
  - second: restore CTAP HID device visibility/access in `k_proxy` (`No CTAP HID devices found`)
  - third: re-test end-to-end login and then the proxy->server counter flow

Status (2026-04-25, card reattached):

- CTAP HID visibility/access in `k_proxy` is restored.
- Local proxy login is working again with the attached card.
- The only currently confirmed blocker for the end-to-end path is the `k_client -> k_proxy:8771` qrexec/`qvm-connect-tcp` refusal.

Status (2026-04-25, clean forward retest):

- The retest shows the same qrexec failure mode on both hops, not just the client-facing one.
- Updated blocker statement:
  - the effective `qubes.ConnectTCP` allow path is failing for both
    - `k_client -> k_proxy:8771`
    - `k_proxy -> k_server:8780`
- App services and the card path are currently good; forwarding remains the single active system blocker.

Status (2026-04-25, dom0 policy fix validated):

- The explicit-destination dom0 `qubes.ConnectTCP` policy fix resolved forwarding on both hops.
- Current verified working chain:
  - `k_client -> k_proxy:8771`
  - `k_proxy -> k_server:8780`
- Current verified prototype behavior:
  - session login works from `k_client`
  - session status works
  - the protected counter flow reaches `k_server`
  - session reuse avoids re-login for repeated counter calls
  - logout invalidates the session and subsequent protected access returns `401`
- The immediate networking blocker is cleared.

Exit criteria:

- New team member can follow docs end-to-end without path or tooling ambiguity.

@@ -257,6 +547,31 @@ Exit criteria:

Exit criteria:

- `k_proxy` can validate via the wireless phone path with no client-facing API changes.

## Current Next Step

Status (2026-04-27):

- `fido2-direct` mode confirmed working end-to-end with a real card via the browser on `k_client`.
- The full register -> login -> counter -> logout flow is verified with physical card button presses.
- Bug fixed: `ClientState.enroll()` now calls `/session/logout` on `k_proxy` before re-enrolling.
- A 100-test unit suite was added for `k_proxy` (`tests/test_k_proxy.py`); it runs locally without the card or VMs.
- All three service files were refactored and re-deployed.

Phase status (2026-04-27):

- Phase 6.5 (concurrency): deferred. The ceiling (~10 in-flight) is acceptable until multi-card use cases arrive.
- Phase 7 (firmware build/flash): blocked on Chrome Roads (card vendor). No local action until that discussion concludes.
- Phase 9 (phone integration): awaiting go-ahead. When approved: a Flutter app (iOS + Android) replaces `k_proxy`; FIDO2 over WiFi to the card; depends on Phase 7 firmware capability.

No active engineering work is unblocked at this time. Resume when Chrome Roads responds or Phase 9 is approved.

Status (2026-04-26, markdown maintenance):

- Re-scanned `Setup.md`, `Workplan.md`, and `PHASE5_RUNBOOK.md` against the current workspace files.
- Updated the plan to match the verified state:
  - direct FIDO2 auth is no longer the primary blocker because register/login/logout already work in the experimental path
  - the main open system limit is concurrency/fan-out on the Qubes-forwarded browser path
  - the current planning split is now:
    - baseline path: keep `probe` mode stable and reproducible
    - follow-up path: decide whether to promote `fido2-direct`

## Inputs Expected During This Session

- Exact observed behavior on reconnect attempts (USB/hidraw/probe).
@@ -267,3 +582,14 @@ Exit criteria:

- Decision on where user/session authority lives (`k_proxy` vs `k_server` vs split).
- Target concurrency level for validation (parallel clients and parallel requests per client).
- Preferred wireless transport/protocol between `k_proxy` and phone (for future phase).

## Session Maintenance Notes (2026-04-24)

- Top-level Markdown review completed for `PHASE5_RUNBOOK.md`, `Setup.md`, and `Workplan.md`.
- Current execution plan remains in sync with the Phase 5 runbook:
  - prototype services at `/home/user/chromecard/k_proxy_app.py` and `/home/user/chromecard/k_server_app.py`
  - run sequence documented in `/home/user/chromecard/PHASE5_RUNBOOK.md`
- No phase ordering or blocker changes were required from this review pass.
- Remote execution support is now active and validated:
  - `ssh` command execution works for `k_client`, `k_proxy`, `k_server`
  - `scp` push to VM home works (validated on `k_proxy`)

@@ -0,0 +1,74 @@
#!/usr/bin/env python3
"""
Manual CTAPHID INIT probe for a specific hidraw node.

This bypasses python-fido2's device bootstrap so we can see whether the raw HID
transport itself exchanges packets on the expected FIDO interface.
"""

from __future__ import annotations

import argparse
import os
import secrets
import select
import struct
import sys
from pathlib import Path


CTAPHID_INIT = 0x06
TYPE_INIT = 0x80
BROADCAST_CID = 0xFFFFFFFF


def build_init_packet(nonce: bytes) -> bytes:
    # Report ID 0x00, then one 64-byte CTAPHID initialization packet:
    # 4-byte broadcast CID, command byte with the initialization type bit
    # set, big-endian payload length, then the nonce, zero-padded to 64.
    frame = struct.pack(">IBH", BROADCAST_CID, TYPE_INIT | CTAPHID_INIT, len(nonce)) + nonce
    return b"\0" + frame.ljust(64, b"\0")


def main() -> int:
    parser = argparse.ArgumentParser(description="Manual CTAPHID INIT probe")
    parser.add_argument("--device-path", default="/dev/hidraw0")
    parser.add_argument("--timeout", type=float, default=3.0)
    args = parser.parse_args()

    path = Path(args.device_path)
    if not path.exists():
        print(f"missing device: {path}", file=sys.stderr)
        return 2

    nonce = secrets.token_bytes(8)
    packet = build_init_packet(nonce)
    print(f"device={path}")
    print(f"nonce={nonce.hex()}")
    print(f"write_len={len(packet)}")
    print(f"write_hex={packet.hex()}")

    fd = os.open(str(path), os.O_RDWR)
    try:
        written = os.write(fd, packet)
        print(f"written={written}")
        poller = select.poll()
        poller.register(fd, select.POLLIN)
        events = poller.poll(int(args.timeout * 1000))
        print(f"events={events}")
        if not events:
            print("timeout_waiting_for_response")
            return 1
        response = os.read(fd, 64)
        print(f"read_len={len(response)}")
        print(f"read_hex={response.hex()}")
        if len(response) >= 24:
            # Decode the response header: channel ID, command, payload length.
            cid, cmd, bc = struct.unpack(">IBH", response[:7])
            print(f"resp_cid=0x{cid:08x}")
            print(f"resp_cmd=0x{cmd:02x}")
            print(f"resp_bc={bc}")
            print(f"resp_payload={response[7:7+bc].hex()}")
        return 0
    finally:
        os.close(fd)


if __name__ == "__main__":
    raise SystemExit(main())
@ -0,0 +1,157 @@
|
||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Generate a small local CA plus leaf certificates for Phase 2 HTTPS testing.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import ipaddress
|
||||||
|
from datetime import datetime, timedelta, timezone
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
from cryptography import x509
|
||||||
|
from cryptography.hazmat.primitives import hashes, serialization
|
||||||
|
from cryptography.hazmat.primitives.asymmetric import rsa
|
||||||
|
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID
|
||||||
|
|
||||||
|
|
||||||
|
def build_name(common_name: str) -> x509.Name:
|
||||||
|
return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
|
||||||
|
|
||||||
|
|
||||||
|
def new_private_key() -> rsa.RSAPrivateKey:
|
||||||
|
return rsa.generate_private_key(public_exponent=65537, key_size=2048)
|
||||||
|
|
||||||
|
|
||||||
|
def write_private_key(path: Path, key: rsa.RSAPrivateKey) -> None:
|
||||||
|
path.write_bytes(
|
||||||
|
key.private_bytes(
|
||||||
|
encoding=serialization.Encoding.PEM,
|
||||||
|
format=serialization.PrivateFormat.TraditionalOpenSSL,
|
||||||
|
encryption_algorithm=serialization.NoEncryption(),
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def write_cert(path: Path, cert: x509.Certificate) -> None:
|
||||||
|
path.write_bytes(cert.public_bytes(serialization.Encoding.PEM))
|
||||||
|
|
||||||
|
|
||||||
|
def parse_sans(names: list[str]) -> list[x509.GeneralName]:
|
||||||
|
sans: list[x509.GeneralName] = []
|
||||||
|
seen = set()
|
||||||
|
for value in names:
|
||||||
|
if value in seen:
|
||||||
|
continue
|
||||||
|
seen.add(value)
|
||||||
|
try:
|
||||||
|
sans.append(x509.IPAddress(ipaddress.ip_address(value)))
|
||||||
|
except ValueError:
|
||||||
|
sans.append(x509.DNSName(value))
|
||||||
|
return sans
|
||||||
|
|
||||||
|
|
||||||
|
def issue_ca(common_name: str, valid_days: int) -> tuple[rsa.RSAPrivateKey, x509.Certificate]:
|
||||||
|
now = datetime.now(timezone.utc)
|
||||||
|
key = new_private_key()
|
||||||
|
subject = issuer = build_name(common_name)
|
||||||
|
cert = (
|
||||||
|
x509.CertificateBuilder()
|
||||||
|
.subject_name(subject)
|
||||||
|
.issuer_name(issuer)
|
||||||
|
.public_key(key.public_key())
|
||||||
|
.serial_number(x509.random_serial_number())
|
||||||
|
.not_valid_before(now - timedelta(minutes=5))
|
||||||
|
.not_valid_after(now + timedelta(days=valid_days))
|
||||||
|
.add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
|
||||||
|
.add_extension(x509.SubjectKeyIdentifier.from_public_key(key.public_key()), critical=False)
|
||||||
|
.add_extension(x509.AuthorityKeyIdentifier.from_issuer_public_key(key.public_key()), critical=False)
|
||||||
|
.add_extension(x509.KeyUsage(digital_signature=True, key_encipherment=False, key_cert_sign=True, crl_sign=True, content_commitment=False, data_encipherment=False, key_agreement=False, encipher_only=False, decipher_only=False), critical=True)
|
||||||
|
.sign(key, hashes.SHA256())
|
||||||
|
)
|
||||||
|
return key, cert
|
||||||
|
|
||||||
|
|
||||||
|
def issue_leaf(
|
||||||
|
ca_key: rsa.RSAPrivateKey,
|
||||||
|
ca_cert: x509.Certificate,
|
||||||
|
common_name: str,
|
||||||
|
san_values: list[str],
|
||||||
|
valid_days: int,
|
||||||
|
) -> tuple[rsa.RSAPrivateKey, x509.Certificate]:
|
||||||
|
now = datetime.now(timezone.utc)
|
||||||
|
key = new_private_key()
|
||||||
|
cert = (
|
||||||
|
x509.CertificateBuilder()
|
||||||
|
.subject_name(build_name(common_name))
|
||||||
|
.issuer_name(ca_cert.subject)
|
||||||
|
.public_key(key.public_key())
|
||||||
|
.serial_number(x509.random_serial_number())
|
||||||
|
        .not_valid_before(now - timedelta(minutes=5))
        .not_valid_after(now + timedelta(days=valid_days))
        .add_extension(x509.BasicConstraints(ca=False, path_length=None), critical=True)
        .add_extension(x509.SubjectAlternativeName(parse_sans(san_values)), critical=False)
        .add_extension(x509.SubjectKeyIdentifier.from_public_key(key.public_key()), critical=False)
        .add_extension(x509.AuthorityKeyIdentifier.from_issuer_public_key(ca_key.public_key()), critical=False)
        .add_extension(x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]), critical=False)
        .add_extension(
            x509.KeyUsage(
                digital_signature=True,
                key_encipherment=True,
                key_cert_sign=False,
                crl_sign=False,
                content_commitment=False,
                data_encipherment=False,
                key_agreement=False,
                encipher_only=False,
                decipher_only=False,
            ),
            critical=True,
        )
        .sign(ca_key, hashes.SHA256())
    )
    return key, cert


def emit_leaf_bundle(
    out_dir: Path,
    leaf_name: str,
    ca_key: rsa.RSAPrivateKey,
    ca_cert: x509.Certificate,
    san_values: list[str],
    valid_days: int,
) -> None:
    key, cert = issue_leaf(ca_key, ca_cert, leaf_name, san_values, valid_days)
    write_private_key(out_dir / f"{leaf_name}.key", key)
    write_cert(out_dir / f"{leaf_name}.crt", cert)


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Generate local CA and Phase 2 service certificates")
    parser.add_argument("--out-dir", default="tls/phase2")
    parser.add_argument("--valid-days", type=int, default=30)
    parser.add_argument("--ca-common-name", default="ChromeCard Phase2 Local CA")
    parser.add_argument(
        "--proxy-san",
        action="append",
        default=[],
        help="Extra SAN for k_proxy certificate; may be repeated",
    )
    parser.add_argument(
        "--server-san",
        action="append",
        default=[],
        help="Extra SAN for k_server certificate; may be repeated",
    )
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    out_dir = Path(args.out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)

    ca_key, ca_cert = issue_ca(args.ca_common_name, args.valid_days)
    write_private_key(out_dir / "ca.key", ca_key)
    write_cert(out_dir / "ca.crt", ca_cert)

    proxy_sans = ["localhost", "127.0.0.1", "k_proxy", *args.proxy_san]
    server_sans = ["localhost", "127.0.0.1", "k_server", *args.server_san]

    emit_leaf_bundle(out_dir, "k_proxy", ca_key, ca_cert, proxy_sans, args.valid_days)
    emit_leaf_bundle(out_dir, "k_server", ca_key, ca_cert, server_sans, args.valid_days)

    print(f"Generated CA and leaf certificates in {out_dir}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
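The repeatable `--proxy-san`/`--server-san` flags above rely on argparse's `append` action, so each occurrence adds one SAN to the list that is then merged with the built-in defaults. A standalone sketch of that assembly (the flag values here are illustrative, not from the repo):

```python
import argparse

# Sketch only: mirrors the --proxy-san flag defined in parse_args() above.
parser = argparse.ArgumentParser()
parser.add_argument("--proxy-san", action="append", default=[])

# Each repeated flag appends one value, in command-line order.
args = parser.parse_args(["--proxy-san", "10.137.0.5", "--proxy-san", "proxy.local"])
proxy_sans = ["localhost", "127.0.0.1", "k_proxy", *args.proxy_san]
print(proxy_sans)
# → ['localhost', '127.0.0.1', 'k_proxy', '10.137.0.5', 'proxy.local']
```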
@@ -0,0 +1,850 @@
#!/usr/bin/env python3
"""
k_client_portal — browser-facing portal running in k_client.

Serves the single-page UI and thin API shim that delegates every auth and
resource operation to k_proxy over the localhost-forwarded TLS endpoint.
Persists one preferred username locally; all session and enrollment state
lives in k_proxy.
"""

from __future__ import annotations

import argparse
import json
import ssl
import threading
import time
from dataclasses import dataclass
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from pathlib import Path
from typing import Any
from urllib.error import HTTPError, URLError
from urllib.parse import urlparse
from urllib.request import Request, urlopen

HTML = """<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>ChromeCard Client Flow</title>
<style>
  :root {
    --bg: #f3efe8;
    --panel: #fffdf8;
    --ink: #181614;
    --muted: #655f56;
    --line: #d9cfbf;
    --accent: #0c6a60;
    --accent-2: #8a5b2b;
    --ok: #17653c;
    --warn: #8f5b00;
    --bad: #8a1f28;
    --shadow: rgba(55, 41, 19, 0.08);
  }
  * { box-sizing: border-box; }
  body {
    margin: 0;
    font-family: "Iowan Old Style", "Palatino Linotype", serif;
    background:
      radial-gradient(circle at top left, rgba(12,106,96,0.12), transparent 34%),
      linear-gradient(180deg, #f9f3e8 0%, var(--bg) 100%);
    color: var(--ink);
  }
  main {
    max-width: 980px;
    margin: 0 auto;
    padding: 32px 20px 56px;
  }
  .hero, .panel {
    padding: 22px 24px;
    border: 1px solid var(--line);
    background: linear-gradient(135deg, rgba(255,253,248,0.98), rgba(242,237,228,0.94));
    box-shadow: 0 18px 40px var(--shadow);
  }
  .hero {
    margin-bottom: 18px;
  }
  h1 {
    margin: 0 0 8px;
    font-size: clamp(2rem, 4vw, 3.4rem);
    line-height: 0.95;
    letter-spacing: -0.04em;
  }
  .subtitle {
    margin: 0;
    color: var(--muted);
    max-width: 62ch;
    font-size: 1rem;
  }
  .grid {
    display: grid;
    grid-template-columns: minmax(0, 1.3fr) minmax(300px, 0.9fr);
    gap: 18px;
    align-items: start;
  }
  .stack {
    display: grid;
    gap: 18px;
  }
  .actions, .row {
    display: flex;
    flex-wrap: wrap;
    gap: 10px;
  }
  .actions {
    margin-top: 18px;
  }
  input {
    width: 100%;
    padding: 10px 12px;
    border: 1px solid var(--line);
    background: #fff;
    font: inherit;
    color: var(--ink);
  }
  label {
    display: grid;
    gap: 6px;
    margin-top: 14px;
    color: var(--muted);
    font-size: 0.95rem;
  }
  button {
    text-decoration: none;
    border: 0;
    padding: 10px 14px;
    font: inherit;
    color: #fff;
    background: var(--accent);
    cursor: pointer;
  }
  button.secondary { background: var(--accent-2); }
  button.ghost {
    background: #fff;
    color: var(--ink);
    border: 1px solid var(--line);
  }
  button:disabled {
    opacity: 0.55;
    cursor: wait;
  }
  .status {
    display: grid;
    gap: 12px;
  }
  .status-card {
    padding: 14px;
    border: 1px solid var(--line);
    background: rgba(255,255,255,0.86);
  }
  .status-card h2 {
    margin: 0 0 6px;
    font-size: 1rem;
  }
  .status-line {
    font-size: 0.95rem;
    color: var(--muted);
  }
  #usersList {
    display: grid;
    gap: 8px;
    margin-top: 12px;
  }
  .user-row {
    display: flex;
    flex-wrap: wrap;
    justify-content: space-between;
    align-items: center;
    gap: 10px;
    padding: 10px 12px;
    border: 1px solid var(--line);
    background: rgba(255,255,255,0.86);
  }
  .user-meta {
    display: grid;
    gap: 2px;
  }
  .user-name {
    font-weight: 600;
  }
  .user-subtle {
    color: var(--muted);
    font-size: 0.9rem;
  }
  .user-actions {
    display: flex;
    flex-wrap: wrap;
    gap: 8px;
  }
  .small {
    padding: 8px 10px;
    font-size: 0.92rem;
  }
  .badge {
    display: inline-block;
    padding: 4px 8px;
    border: 1px solid var(--line);
    font-size: 0.86rem;
    background: #fff;
    color: var(--ink);
    margin-right: 6px;
    margin-bottom: 6px;
  }
  .timeline {
    display: grid;
    gap: 10px;
    margin-top: 16px;
  }
  .step {
    display: grid;
    grid-template-columns: 32px 1fr;
    gap: 12px;
    padding: 12px;
    border: 1px solid var(--line);
    background: rgba(255,255,255,0.84);
  }
  .step-index {
    width: 32px;
    height: 32px;
    display: grid;
    place-items: center;
    border-radius: 999px;
    border: 1px solid var(--line);
    background: #fff;
    font-size: 0.88rem;
  }
  .hint {
    margin-top: 14px;
    padding: 12px 14px;
    border-left: 4px solid var(--accent-2);
    background: rgba(138,91,43,0.08);
    color: var(--ink);
    font-size: 0.95rem;
  }
  pre {
    margin: 0;
    padding: 16px;
    overflow: auto;
    border: 1px solid var(--line);
    background: #16130f;
    color: #efe7da;
    font-family: "SFMono-Regular", Consolas, monospace;
    font-size: 0.9rem;
    line-height: 1.45;
    min-height: 360px;
  }
  @media (max-width: 860px) {
    .grid { grid-template-columns: 1fr; }
  }
</style>
</head>
<body>
<main>
  <section class="hero">
    <h1>ChromeCard Client Flow</h1>
    <p class="subtitle">
      This page runs in `k_client` and drives the real split-VM flow:
      register a user, ask the card in `k_proxy` for approval, and then call
      the protected counter on `k_server` only if auth succeeds.
    </p>
  </section>

  <div class="grid">
    <section class="stack">
      <section class="panel">
        <div class="row">
          <span class="badge">Browser: k_client</span>
          <span class="badge">Card: k_proxy</span>
          <span class="badge">Resource: k_server</span>
        </div>

        <label>
          Username
          <input id="username" value="directtest" autocomplete="off">
        </label>

        <div class="actions">
          <button id="registerBtn">Register User</button>
          <button id="loginBtn">Login</button>
          <button id="counterBtn">Call k_server</button>
          <button id="logoutBtn" class="secondary">Logout</button>
          <button id="runFlowBtn" class="ghost">Run Full Flow</button>
          <button id="refreshBtn" class="ghost">Refresh State</button>
        </div>

        <div class="hint" id="hintBox">
          Registration: press <strong>yes</strong> on the card to enroll.
          Login: press <strong>yes</strong> to allow the identity check, or
          <strong>no</strong> to deny it. If login is denied, this page will
          show that `k_server` was not called.
        </div>

        <div class="timeline">
          <div class="step">
            <div class="step-index">1</div>
            <div>
              <strong>Register user</strong><br>
              Creates or refreshes the enrolled identity in `k_proxy`.
            </div>
          </div>
          <div class="step">
            <div class="step-index">2</div>
            <div>
              <strong>Authenticate with the card</strong><br>
              `k_proxy` asks the card for approval. Press `yes` to continue or `no` to reject.
            </div>
          </div>
          <div class="step">
            <div class="step-index">3</div>
            <div>
              <strong>Call `k_server`</strong><br>
              The protected counter is only reached when login created a valid session.
            </div>
          </div>
        </div>
      </section>

      <section class="panel status">
        <div class="status-card">
          <h2>Client State</h2>
          <div class="status-line" id="stateUser">Enrolled user: unknown</div>
          <div class="status-line" id="stateSession">Session: unknown</div>
          <div class="status-line" id="stateExpires">Expires: unknown</div>
        </div>
        <div class="status-card">
          <h2>Registered Users</h2>
          <div class="status-line" id="usersSummary">Loading users...</div>
          <div id="usersList"></div>
        </div>
        <div class="status-card">
          <h2>Flow Result</h2>
          <div class="status-line" id="flowResult">No flow run yet.</div>
        </div>
      </section>
    </section>

    <section class="panel">
      <h2 style="margin-top:0">Event Log</h2>
      <pre id="log"></pre>
    </section>
  </div>
</main>

<script>
const logNode = document.getElementById("log");
const hintBox = document.getElementById("hintBox");
const flowResult = document.getElementById("flowResult");
const stateUser = document.getElementById("stateUser");
const stateSession = document.getElementById("stateSession");
const stateExpires = document.getElementById("stateExpires");
const usersSummary = document.getElementById("usersSummary");
const usersList = document.getElementById("usersList");
const usernameInput = document.getElementById("username");
const buttons = Array.from(document.querySelectorAll("button"));

function log(message, payload) {
  const stamp = new Date().toLocaleTimeString();
  let line = `[${stamp}] ${message}`;
  if (payload !== undefined) {
    line += "\\n" + JSON.stringify(payload, null, 2);
  }
  logNode.textContent = line + "\\n\\n" + logNode.textContent;
}

function setBusy(busy) {
  for (const button of buttons) button.disabled = busy;
}

function username() {
  return usernameInput.value.trim();
}

async function api(path, payload) {
  const resp = await fetch(path, {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify(payload || {})
  });
  const data = await resp.json();
  return {status: resp.status, data};
}

async function refreshState() {
  const resp = await fetch("/api/client/state");
  const data = await resp.json();
  stateUser.textContent = `Enrolled user: ${data.enrolled_username || "none"}`;
  stateSession.textContent = `Session active: ${data.session_active ? "yes" : "no"}`;
  stateExpires.textContent = `Expires: ${data.session_expires_at || "none"}`;
  return data;
}

function renderUsers(users) {
  usersList.innerHTML = "";
  if (!users.length) {
    usersSummary.textContent = "No registered users in k_proxy.";
    return;
  }
  usersSummary.textContent = `${users.length} registered user${users.length === 1 ? "" : "s"} visible in k_proxy.`;
  for (const user of users) {
    const row = document.createElement("div");
    row.className = "user-row";

    const meta = document.createElement("div");
    meta.className = "user-meta";
    meta.innerHTML =
      `<div class="user-name">${user.username}</div>` +
      `<div class="user-subtle">Credential present: ${user.has_credential ? "yes" : "no"}</div>`;

    const actions = document.createElement("div");
    actions.className = "user-actions";

    const useBtn = document.createElement("button");
    useBtn.className = "ghost small";
    useBtn.textContent = "Use";
    useBtn.addEventListener("click", () => {
      usernameInput.value = user.username;
      flowResult.textContent = `Selected user ${user.username}.`;
    });

    const deleteBtn = document.createElement("button");
    deleteBtn.className = "secondary small";
    deleteBtn.textContent = "Unregister";
    deleteBtn.addEventListener("click", async () => {
      setBusy(true);
      try { await deleteUser(user.username); } finally { setBusy(false); }
    });

    actions.appendChild(useBtn);
    actions.appendChild(deleteBtn);
    row.appendChild(meta);
    row.appendChild(actions);
    usersList.appendChild(row);
  }
}

async function refreshUsers() {
  const resp = await fetch("/api/enrollments");
  const data = await resp.json();
  renderUsers(data.users || []);
  return data;
}

async function registerUser() {
  hintBox.innerHTML = "Card step: if the card shows a <strong>registration</strong> prompt, press <strong>yes</strong> to enroll this user.";
  const result = await api("/api/enroll", {username: username()});
  log("Register user", result);
  flowResult.textContent = result.status === 200 ? "User registration succeeded." : "User registration failed.";
  await refreshState();
  await refreshUsers();
  return result;
}

async function loginUser() {
  hintBox.innerHTML = "Card step: if the card shows an <strong>authentication</strong> prompt, press <strong>yes</strong> to allow login or <strong>no</strong> to deny it.";
  const result = await api("/api/login", {username: username()});
  log("Login", result);
  await refreshState();
  return result;
}

async function callCounter() {
  const result = await api("/api/resource/counter", {});
  log("Call k_server counter", result);
  flowResult.textContent =
    result.status === 200
      ? `k_server was reached. Counter value: ${result.data.upstream?.value}`
      : "k_server was not reached successfully.";
  return result;
}

async function logoutUser() {
  const result = await api("/api/logout", {});
  log("Logout", result);
  flowResult.textContent = result.status === 200 ? "Session cleared." : "Logout failed.";
  await refreshState();
  return result;
}

async function deleteUser(usernameToDelete) {
  const result = await api("/api/enroll/delete", {username: usernameToDelete});
  log("Unregister user", result);
  flowResult.textContent =
    result.status === 200
      ? `User ${usernameToDelete} was unregistered.`
      : `Could not unregister ${usernameToDelete}.`;
  if (result.status === 200 && username() === usernameToDelete) {
    usernameInput.value = "";
  }
  await refreshState();
  await refreshUsers();
  return result;
}

async function runFlow() {
  setBusy(true);
  flowResult.textContent = "Flow running...";
  try {
    const login = await loginUser();
    if (login.status !== 200) {
      flowResult.textContent = "Login denied or failed. `k_server` was not called.";
      log("Flow stopped before k_server", {
        reason: "login failed",
        status: login.status,
        response: login.data
      });
      return;
    }
    const counter = await callCounter();
    if (counter.status === 200) {
      flowResult.textContent = `Flow succeeded. k_server returned counter ${counter.data.upstream?.value}.`;
    } else {
      flowResult.textContent = "Login succeeded, but the protected k_server call failed.";
    }
  } finally {
    setBusy(false);
  }
}

document.getElementById("registerBtn").addEventListener("click", async () => {
  setBusy(true);
  try { await registerUser(); } finally { setBusy(false); }
});
document.getElementById("loginBtn").addEventListener("click", async () => {
  setBusy(true);
  try {
    const result = await loginUser();
    flowResult.textContent = result.status === 200 ? "Login succeeded. You can now call k_server." : "Login denied or failed. k_server was not called.";
  } finally { setBusy(false); }
});
document.getElementById("counterBtn").addEventListener("click", async () => {
  setBusy(true);
  try { await callCounter(); } finally { setBusy(false); }
});
document.getElementById("logoutBtn").addEventListener("click", async () => {
  setBusy(true);
  try { await logoutUser(); } finally { setBusy(false); }
});
document.getElementById("runFlowBtn").addEventListener("click", runFlow);
document.getElementById("refreshBtn").addEventListener("click", async () => {
  setBusy(true);
  try {
    const state = await refreshState();
    const users = await refreshUsers();
    log("State refreshed", {state, users});
  } finally { setBusy(false); }
});

Promise.all([refreshState(), refreshUsers()]).then(([state, users]) => {
  log("Client flow page ready", {state, users});
});
</script>
</body>
</html>
"""

@dataclass
class EnrollmentRecord:
    username: str


class ClientState:
    def __init__(
        self,
        proxy_base_url: str,
        proxy_ca_file: str | None,
        enroll_db: Path,
        interactive_timeout_s: float = 90.0,
        default_timeout_s: float = 10.0,
    ):
        self.proxy_base_url = proxy_base_url.rstrip("/")
        self.proxy_ca_file = proxy_ca_file
        self.enroll_db = enroll_db
        # Registration and login both require a physical card touch, which can
        # take up to ~60 s in practice; 90 s gives a generous margin.
        self.interactive_timeout_s = interactive_timeout_s
        self.default_timeout_s = default_timeout_s
        self.lock = threading.Lock()
        self.preferred_enrollment: EnrollmentRecord | None = None
        self.session_token: str | None = None
        self.session_expires_at: int | None = None
        # Build the TLS context once; creating it on every request is expensive
        # and the CA file doesn't change at runtime.
        self._ssl_ctx: ssl.SSLContext | None = (
            ssl.create_default_context(cafile=self.proxy_ca_file)
            if proxy_base_url.startswith("https://")
            else None
        )
        self._load_preferred_enrollment()

    def _ssl_context(self) -> ssl.SSLContext | None:
        return self._ssl_ctx

    def _proxy_json(
        self,
        method: str,
        path: str,
        payload: dict[str, Any] | None = None,
        *,
        timeout_s: float | None = None,
    ) -> tuple[int, dict[str, Any]]:
        req = Request(f"{self.proxy_base_url}{path}", method=method)
        req.add_header("Content-Type", "application/json")
        token = self.get_session_token()
        if token:
            req.add_header("Authorization", f"Bearer {token}")
        body = json.dumps(payload or {}).encode("utf-8")
        try:
            with urlopen(
                req,
                data=body,
                timeout=timeout_s or self.default_timeout_s,
                context=self._ssl_context(),
            ) as resp:
                return resp.status, json.loads(resp.read().decode("utf-8"))
        except HTTPError as exc:
            try:
                return exc.code, json.loads(exc.read().decode("utf-8"))
            except Exception:
                return exc.code, {"ok": False, "error": f"proxy http error {exc.code}"}
        except URLError as exc:
            return 502, {"ok": False, "error": f"proxy unavailable: {exc.reason}"}
        except Exception as exc:
            return 502, {"ok": False, "error": f"proxy call failed: {exc}"}

    def _load_preferred_enrollment(self) -> None:
        if not self.enroll_db.exists():
            return
        try:
            data = json.loads(self.enroll_db.read_text())
            username = str(data.get("username", "")).strip()
            if username:
                self.preferred_enrollment = EnrollmentRecord(username=username)
        except Exception:
            self.preferred_enrollment = None

    def _save_preferred_enrollment_locked(self) -> None:
        self.enroll_db.parent.mkdir(parents=True, exist_ok=True)
        payload = {"username": self.preferred_enrollment.username if self.preferred_enrollment else None}
        self.enroll_db.write_text(json.dumps(payload, indent=2) + "\n")

    def enroll(self, username: str) -> dict[str, Any]:
        username = username.strip()
        if not username:
            return {"ok": False, "error": "username required"}
        # Best-effort: invalidate any active session on k_proxy before re-enrolling.
        # The new credential will differ from what the old session was issued for.
        with self.lock:
            old_token = self.session_token
        if old_token:
            self._proxy_json("POST", "/session/logout")
        status, data = self._proxy_json(
            "POST",
            "/enroll/register",
            {"username": username},
            timeout_s=self.interactive_timeout_s,
        )
        if status != 200:
            return data
        with self.lock:
            self.preferred_enrollment = EnrollmentRecord(username=username)
            self._save_preferred_enrollment_locked()
            self.session_token = None
            self.session_expires_at = None
        return {
            "ok": True,
            "enrolled_username": username,
            "proxy_enrollment": data,
        }

    def list_enrollments(self) -> tuple[int, dict[str, Any]]:
        return self._proxy_json("GET", "/enroll/list")

    def delete_enrollment(self, username: str) -> tuple[int, dict[str, Any]]:
        username = username.strip()
        if not username:
            return 400, {"ok": False, "error": "username required"}
        status, data = self._proxy_json("POST", "/enroll/delete", {"username": username})
        if status == 200:
            with self.lock:
                if self.preferred_enrollment and self.preferred_enrollment.username == username:
                    self.preferred_enrollment = None
                    self._save_preferred_enrollment_locked()
                    self.session_token = None
                    self.session_expires_at = None
        return status, data

    def snapshot(self) -> dict[str, Any]:
        with self.lock:
            return {
                "ok": True,
                "enrolled_username": self.preferred_enrollment.username if self.preferred_enrollment else None,
                "session_active": bool(self.session_token),
                "session_expires_at": self.session_expires_at,
                "proxy_base_url": self.proxy_base_url,
            }

    def get_session_token(self) -> str | None:
        with self.lock:
            return self.session_token

    def login(self, username: str | None = None) -> tuple[int, dict[str, Any]]:
        requested = (username or "").strip()
        with self.lock:
            if requested:
                username = requested
            elif self.preferred_enrollment:
                username = self.preferred_enrollment.username
            else:
                return 400, {"ok": False, "error": "no enrolled user"}

        status, data = self._proxy_json(
            "POST",
            "/session/login",
            {"username": username},
            timeout_s=self.interactive_timeout_s,
        )
        if status == 200 and data.get("session_token"):
            with self.lock:
                self.preferred_enrollment = EnrollmentRecord(username=username)
                self._save_preferred_enrollment_locked()
                self.session_token = data["session_token"]
                self.session_expires_at = int(data.get("expires_at", 0)) or None
        return status, data

    def status(self) -> tuple[int, dict[str, Any]]:
        return self._proxy_json("POST", "/session/status")

    def counter(self) -> tuple[int, dict[str, Any]]:
        return self._proxy_json("POST", "/resource/counter")

    def logout(self) -> tuple[int, dict[str, Any]]:
        status, data = self._proxy_json("POST", "/session/logout")
        if status == 200:
            with self.lock:
                self.session_token = None
                self.session_expires_at = None
        return status, data


class Handler(BaseHTTPRequestHandler):
    state: ClientState

    def _json(self, status: int, payload: dict[str, Any]) -> None:
        body = json.dumps(payload).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def _html(self, body: str) -> None:
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def _read_json(self) -> dict[str, Any]:
        length = int(self.headers.get("Content-Length", "0"))
        raw = self.rfile.read(length)
        if not raw:
            return {}
        return json.loads(raw.decode("utf-8"))

    def _require_json(self) -> dict[str, Any] | None:
        # Returns None and sends 400 when the body is unparseable; the caller
        # should return immediately without sending a second response.
        try:
            return self._read_json()
        except Exception:
            self._json(400, {"ok": False, "error": "invalid json"})
            return None

    def do_GET(self) -> None:  # noqa: N802
        path = urlparse(self.path).path
        if path == "/":
            self._html(HTML)
            return
        if path == "/health":
            self._json(200, {"ok": True, "service": "k_client_portal", "time": int(time.time())})
            return
        if path == "/api/client/state":
            self._json(200, self.state.snapshot())
            return
        if path == "/api/enrollments":
            status, data = self.state.list_enrollments()
            self._json(status, data)
            return
        self.send_error(404)

    def do_POST(self) -> None:  # noqa: N802
        path = urlparse(self.path).path
        if path == "/api/enroll":
            data = self._require_json()
            if data is None:
                return
            result = self.state.enroll(str(data.get("username", "")))
            self._json(200 if result.get("ok") else 400, result)
            return
        if path == "/api/login":
            data = self._require_json()
            if data is None:
                return
            status, data = self.state.login(str(data.get("username", "")))
            self._json(status, data)
            return
        if path == "/api/enroll/delete":
            data = self._require_json()
            if data is None:
                return
            status, data = self.state.delete_enrollment(str(data.get("username", "")))
            self._json(status, data)
            return
        if path == "/api/status":
            status, data = self.state.status()
            self._json(status, data)
            return
        if path == "/api/resource/counter":
            status, data = self.state.counter()
            self._json(status, data)
            return
        if path == "/api/logout":
            status, data = self.state.logout()
            self._json(status, data)
            return
        self.send_error(404)


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run browser-facing client portal in k_client")
    parser.add_argument("--host", default="127.0.0.1")
    parser.add_argument("--port", type=int, default=8766)
    parser.add_argument("--proxy-base-url", default="https://127.0.0.1:9771")
    parser.add_argument("--proxy-ca-file", help="CA certificate used to verify k_proxy HTTPS certificate")
    parser.add_argument("--enroll-db", default="/home/user/chromecard/k_client_enrollment.json")
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    if args.proxy_base_url.startswith("https://") and not args.proxy_ca_file:
        raise SystemExit("--proxy-ca-file is required when --proxy-base-url uses https")

    Handler.state = ClientState(
        proxy_base_url=args.proxy_base_url,
        proxy_ca_file=args.proxy_ca_file,
        enroll_db=Path(args.enroll_db),
    )
    server = ThreadingHTTPServer((args.host, args.port), Handler)
    print(f"k_client_portal listening on http://{args.host}:{args.port}")
    server.serve_forever()
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
File diff suppressed because it is too large

@ -0,0 +1,128 @@
#!/usr/bin/env python3
"""
k_server — protected resource backend.

Exposes a monotonic counter behind a shared proxy token. Only k_proxy
is expected to reach this service; k_client should have no direct path.
All state is process-local and resets on restart.
"""

from __future__ import annotations

import argparse
import json
import ssl
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from typing import Any
from urllib.parse import urlparse


class ServerState:
    # All state is process-local; a restart resets the counter to zero.
    def __init__(self, proxy_token: str):
        self.proxy_token = proxy_token
        self.counter = 0
        self.lock = threading.Lock()

    def next_counter(self) -> int:
        with self.lock:
            self.counter += 1
            return self.counter


class Handler(BaseHTTPRequestHandler):
    state: ServerState
    protocol_version = "HTTP/1.1"

    def _json(self, status: int, payload: dict[str, Any]) -> None:
        body = json.dumps(payload).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def _discard_request_body(self) -> None:
        # HTTP/1.1 keep-alive: the connection is reused, so the body must be fully
        # consumed before we send the response, even for endpoints that ignore it.
        length = int(self.headers.get("Content-Length", "0"))
        if length > 0:
            self.rfile.read(length)

    def _is_proxy_authorized(self) -> bool:
        return self.headers.get("X-Proxy-Token") == self.state.proxy_token

    def do_GET(self) -> None:  # noqa: N802
        path = urlparse(self.path).path
        if path == "/health":
            self._json(
                200,
                {
                    "ok": True,
                    "service": "k_server",
                    "time": int(time.time()),
                },
            )
            return
        self.send_error(404)

    def do_POST(self) -> None:  # noqa: N802
        path = urlparse(self.path).path
        if path != "/resource/counter":
            self.send_error(404)
            return
        self._discard_request_body()
        if not self._is_proxy_authorized():
            self._json(401, {"ok": False, "error": "unauthorized proxy"})
            return

        value = self.state.next_counter()
        self._json(
            200,
            {
                "ok": True,
                "resource": "counter",
                "value": value,
                "time": int(time.time()),
            },
        )


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run k_server counter service")
    parser.add_argument("--host", default="127.0.0.1")
    parser.add_argument("--port", type=int, default=8780)
    parser.add_argument("--tls-certfile", help="PEM certificate chain for HTTPS listener")
    parser.add_argument("--tls-keyfile", help="PEM private key for HTTPS listener")
    parser.add_argument(
        "--proxy-token",
        default="dev-proxy-token",
        help="Shared token expected in X-Proxy-Token from k_proxy",
    )
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    if bool(args.tls_certfile) != bool(args.tls_keyfile):
        raise SystemExit("Both --tls-certfile and --tls-keyfile are required to enable HTTPS")

    state = ServerState(proxy_token=args.proxy_token)
    Handler.state = state
    server = ThreadingHTTPServer((args.host, args.port), Handler)
    scheme = "http"
    if args.tls_certfile:
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        context.load_cert_chain(certfile=args.tls_certfile, keyfile=args.tls_keyfile)
        server.socket = context.wrap_socket(server.socket, server_side=True)
        scheme = "https"

    print(f"k_server listening on {scheme}://{args.host}:{args.port}")
    server.serve_forever()
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
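The lock in `ServerState.next_counter` above is load-bearing: `ThreadingHTTPServer` handles each request on its own thread, so concurrent `/resource/counter` calls race on the shared counter. A minimal standalone sketch (a hypothetical model of the same pattern, not the server itself) showing why the locked increment-and-read keeps values unique and gap-free:

```python
import threading

class Counter:
    # Same shape as ServerState.next_counter: take the lock, then
    # increment and read inside it so no two threads see the same value.
    def __init__(self) -> None:
        self.value = 0
        self.lock = threading.Lock()

    def next(self) -> int:
        with self.lock:
            self.value += 1
            return self.value

counter = Counter()
seen: list[int] = []
seen_lock = threading.Lock()

def worker() -> None:
    for _ in range(1000):
        v = counter.next()
        with seen_lock:
            seen.append(v)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 8 threads x 1000 calls: exactly the values 1..8000, each once.
assert sorted(seen) == list(range(1, 8001))
print(len(set(seen)))  # prints 8000
```

Without the lock, `self.value += 1` is a read-modify-write that can interleave across threads, producing duplicate counter values under load.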
@ -0,0 +1,78 @@
{
  "name": "chromecard-browser-regression",
  "version": "0.1.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "chromecard-browser-regression",
      "version": "0.1.0",
      "devDependencies": {
        "@playwright/test": "^1.54.2"
      }
    },
    "node_modules/@playwright/test": {
      "version": "1.59.1",
      "resolved": "https://registry.npmjs.org/@playwright/test/-/test-1.59.1.tgz",
      "integrity": "sha512-PG6q63nQg5c9rIi4/Z5lR5IVF7yU5MqmKaPOe0HSc0O2cX1fPi96sUQu5j7eo4gKCkB2AnNGoWt7y4/Xx3Kcqg==",
      "dev": true,
      "license": "Apache-2.0",
      "dependencies": {
        "playwright": "1.59.1"
      },
      "bin": {
        "playwright": "cli.js"
      },
      "engines": {
        "node": ">=18"
      }
    },
    "node_modules/fsevents": {
      "version": "2.3.2",
      "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
      "integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
      "dev": true,
      "hasInstallScript": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "darwin"
      ],
      "engines": {
        "node": "^8.16.0 || ^10.6.0 || >=11.0.0"
      }
    },
    "node_modules/playwright": {
      "version": "1.59.1",
      "resolved": "https://registry.npmjs.org/playwright/-/playwright-1.59.1.tgz",
      "integrity": "sha512-C8oWjPR3F81yljW9o5OxcWzfh6avkVwDD2VYdwIGqTkl+OGFISgypqzfu7dOe4QNLL2aqcWBmI3PMtLIK233lw==",
      "dev": true,
      "license": "Apache-2.0",
      "dependencies": {
        "playwright-core": "1.59.1"
      },
      "bin": {
        "playwright": "cli.js"
      },
      "engines": {
        "node": ">=18"
      },
      "optionalDependencies": {
        "fsevents": "2.3.2"
      }
    },
    "node_modules/playwright-core": {
      "version": "1.59.1",
      "resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.59.1.tgz",
      "integrity": "sha512-HBV/RJg81z5BiiZ9yPzIiClYV/QMsDCKUyogwH9p3MCP6IYjUFu/MActgYAvK0oWyV9NlwM3GLBjADyWgydVyg==",
      "dev": true,
      "license": "Apache-2.0",
      "bin": {
        "playwright-core": "cli.js"
      },
      "engines": {
        "node": ">=18"
      }
    }
  }
}
@ -0,0 +1,12 @@
{
  "name": "chromecard-browser-regression",
  "private": true,
  "version": "0.1.0",
  "description": "Playwright regression checks for the k_client browser flow",
  "scripts": {
    "test:k-client": "playwright test tests/k_client_portal.spec.js"
  },
  "devDependencies": {
    "@playwright/test": "^1.54.2"
  }
}
@ -0,0 +1,230 @@
#!/usr/bin/env bash
set -euo pipefail

CLIENT_HOST="${CLIENT_HOST:-k_client}"
CA_FILE="${CA_FILE:-/home/user/chromecard/tls/phase2/ca.crt}"
PROXY_URL="${PROXY_URL:-https://127.0.0.1:9771}"
USERNAME="${USERNAME:-alice}"
REQUESTS="${REQUESTS:-20}"
PARALLELISM="${PARALLELISM:-8}"
CONNECT_TIMEOUT="${CONNECT_TIMEOUT:-8}"
LOGIN_TIMEOUT="${LOGIN_TIMEOUT:-90}"
INTERACTIVE_CARD="${INTERACTIVE_CARD:-0}"
EXPECT_AUTH_MODE="${EXPECT_AUTH_MODE:-}"
SSH_CONFIG="${SSH_CONFIG:-/home/user/.ssh/config}"

usage() {
  cat <<'EOF'
Usage: phase5_chain_regression.sh [options]

Runs the Phase 5 split-VM regression from the host by executing the client-side
flow inside k_client over SSH.

Options:
  --client-host HOST       SSH host alias for k_client (default: k_client)
  --ca-file PATH           CA bundle path inside k_client
  --proxy-url URL          Proxy URL visible from k_client
  --username NAME          Username for session login
  --requests N             Number of counter requests to issue
  --parallelism N          Number of concurrent workers
  --connect-timeout SEC    SSH connect timeout
  --login-timeout SEC      Timeout for the interactive login request (default: 90)
  --interactive-card       Print card-confirmation instructions before login
  --expect-auth-mode NAME  Require login response auth_mode to match
  --ssh-config PATH        SSH config file to use (default: /home/user/.ssh/config)
  -h, --help               Show this help text
EOF
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    --client-host)
      CLIENT_HOST="$2"
      shift 2
      ;;
    --ca-file)
      CA_FILE="$2"
      shift 2
      ;;
    --proxy-url)
      PROXY_URL="$2"
      shift 2
      ;;
    --username)
      USERNAME="$2"
      shift 2
      ;;
    --requests)
      REQUESTS="$2"
      shift 2
      ;;
    --parallelism)
      PARALLELISM="$2"
      shift 2
      ;;
    --connect-timeout)
      CONNECT_TIMEOUT="$2"
      shift 2
      ;;
    --login-timeout)
      LOGIN_TIMEOUT="$2"
      shift 2
      ;;
    --interactive-card)
      INTERACTIVE_CARD=1
      shift
      ;;
    --expect-auth-mode)
      EXPECT_AUTH_MODE="$2"
      shift 2
      ;;
    --ssh-config)
      SSH_CONFIG="$2"
      shift 2
      ;;
    -h|--help)
      usage
      exit 0
      ;;
    *)
      echo "unknown argument: $1" >&2
      usage >&2
      exit 2
      ;;
  esac
done

if [[ "${INTERACTIVE_CARD}" == "1" ]]; then
  cat <<EOF
Starting interactive login for ${USERNAME}.
When the card shows the authentication prompt, press yes to approve.
Press no only if you want to reject the login.
EOF
fi

ssh \
  -F "${SSH_CONFIG}" \
  -o BatchMode=yes \
  -o StrictHostKeyChecking=accept-new \
  -o ConnectTimeout="${CONNECT_TIMEOUT}" \
  "${CLIENT_HOST}" \
  env \
  CA_FILE="${CA_FILE}" \
  PROXY_URL="${PROXY_URL}" \
  USERNAME="${USERNAME}" \
  REQUESTS="${REQUESTS}" \
  PARALLELISM="${PARALLELISM}" \
  LOGIN_TIMEOUT="${LOGIN_TIMEOUT}" \
  EXPECT_AUTH_MODE="${EXPECT_AUTH_MODE}" \
  python3 - <<'PY'
import concurrent.futures
import json
import os
import ssl
import sys
import urllib.error
import urllib.request

ca_file = os.environ["CA_FILE"]
proxy_url = os.environ["PROXY_URL"].rstrip("/")
username = os.environ["USERNAME"]
requests = int(os.environ["REQUESTS"])
parallelism = int(os.environ["PARALLELISM"])
login_timeout = int(os.environ["LOGIN_TIMEOUT"])
expect_auth_mode = os.environ["EXPECT_AUTH_MODE"]

if requests < 1:
    raise SystemExit("REQUESTS must be >= 1")
if parallelism < 1:
    raise SystemExit("PARALLELISM must be >= 1")

ctx = ssl.create_default_context(cafile=ca_file)

def post_json(path: str, payload: dict | None = None, token: str | None = None, timeout: int = 10):
    data = None if payload is None else json.dumps(payload).encode("utf-8")
    headers = {}
    if payload is not None:
        headers["Content-Type"] = "application/json"
    if token:
        headers["Authorization"] = f"Bearer {token}"
    req = urllib.request.Request(
        f"{proxy_url}{path}",
        data=data,
        headers=headers,
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, context=ctx, timeout=timeout) as resp:
            return resp.status, json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as exc:
        body = exc.read().decode("utf-8")
        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            payload = {"ok": False, "error": body}
        return exc.code, payload

status, login = post_json("/session/login", {"username": username}, timeout=login_timeout)
if status != 200 or "session_token" not in login:
    print(json.dumps({"ok": False, "stage": "login", "status": status, "response": login}))
    raise SystemExit(1)
if expect_auth_mode and login.get("auth_mode") != expect_auth_mode:
    print(
        json.dumps(
            {
                "ok": False,
                "stage": "login",
                "error": "unexpected auth_mode",
                "expected": expect_auth_mode,
                "response": login,
            }
        )
    )
    raise SystemExit(1)

token = login["session_token"]
values = []

def fetch_one(_: int) -> int:
    status, payload = post_json("/resource/counter", {}, token=token)
    if status != 200:
        raise RuntimeError(json.dumps({"status": status, "response": payload}))
    return int(payload["upstream"]["value"])

try:
    with concurrent.futures.ThreadPoolExecutor(max_workers=parallelism) as pool:
        for value in pool.map(fetch_one, range(requests)):
            values.append(value)

    status_resp, session = post_json("/session/status", {}, token=token)
    logout_status, logout = post_json("/session/logout", {}, token=token)
    invalid_status, invalid = post_json("/resource/counter", {}, token=token)
except Exception as exc:
    try:
        post_json("/session/logout", {}, token=token)
    finally:
        raise SystemExit(str(exc))

sorted_values = sorted(values)
expected = list(range(sorted_values[0], sorted_values[-1] + 1)) if sorted_values else []

summary = {
    "ok": True,
    "username": username,
    "proxy_url": proxy_url,
    "requests": requests,
    "parallelism": parallelism,
    "unique": len(set(values)) == len(values),
    "gap_free": sorted_values == expected,
    "min": min(sorted_values) if sorted_values else None,
    "max": max(sorted_values) if sorted_values else None,
    "values": sorted_values,
    "login": login,
    "session_status": {"status": status_resp, "response": session},
    "logout": {"status": logout_status, "response": logout},
    "post_logout": {"status": invalid_status, "response": invalid},
}
print(json.dumps(summary, indent=2, sort_keys=True))
if not summary["unique"] or not summary["gap_free"] or logout_status != 200 or invalid_status != 401:
    raise SystemExit(1)
PY
@ -0,0 +1,188 @@
#!/usr/bin/env python3
"""
Phase 6.5 concurrency probe for the direct browser-to-k_proxy path.

What it does:
- Creates a small batch of enrolled users.
- Logs each user in through k_proxy over TLS.
- Fires protected counter requests in parallel using the returned bearer tokens.
- Verifies that all calls succeed and that returned counter values are unique and contiguous.
"""

from __future__ import annotations

import argparse
import json
import ssl
import sys
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from dataclasses import dataclass
from typing import Any
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen


@dataclass
class Session:
    username: str
    token: str


def request_json(
    base_url: str,
    path: str,
    *,
    method: str = "GET",
    payload: dict[str, Any] | None = None,
    token: str | None = None,
    cafile: str | None = None,
    timeout: int = 10,
) -> tuple[int, dict[str, Any]]:
    req = Request(f"{base_url.rstrip('/')}{path}", method=method)
    req.add_header("Content-Type", "application/json")
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    data = None if payload is None else json.dumps(payload).encode("utf-8")
    context = ssl.create_default_context(cafile=cafile) if base_url.startswith("https://") else None
    try:
        with urlopen(req, data=data, timeout=timeout, context=context) as resp:
            return resp.status, json.loads(resp.read().decode("utf-8"))
    except HTTPError as exc:
        try:
            return exc.code, json.loads(exc.read().decode("utf-8"))
        except Exception:
            return exc.code, {"ok": False, "error": f"http error {exc.code}"}
    except URLError as exc:
        return 502, {"ok": False, "error": f"url error: {exc.reason}"}
    except Exception as exc:
        return 502, {"ok": False, "error": f"request failed: {exc}"}


def enroll_user(base_url: str, cafile: str, username: str, display_name: str) -> None:
    status, data = request_json(
        base_url,
        "/enroll/register",
        method="POST",
        payload={"username": username, "display_name": display_name},
        cafile=cafile,
    )
    if status == 200:
        return
    if status == 409 and data.get("error") == "user already enrolled":
        return
    raise RuntimeError(f"enroll failed for {username}: status={status} data={data}")


def login_user(base_url: str, cafile: str, username: str) -> Session:
    status, data = request_json(
        base_url,
        "/session/login",
        method="POST",
        payload={"username": username},
        cafile=cafile,
    )
    if status != 200 or not data.get("session_token"):
        raise RuntimeError(f"login failed for {username}: status={status} data={data}")
    return Session(username=username, token=data["session_token"])


def counter_call(base_url: str, cafile: str, session: Session, call_id: int) -> dict[str, Any]:
    started = time.time()
    status, data = request_json(
        base_url,
        "/resource/counter",
        method="POST",
        payload={},
        token=session.token,
        cafile=cafile,
    )
    finished = time.time()
    return {
        "call_id": call_id,
        "username": session.username,
        "status": status,
        "data": data,
        "latency_ms": int((finished - started) * 1000),
    }


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run Phase 6.5 concurrency probe against k_proxy")
    parser.add_argument("--base-url", default="https://127.0.0.1:9771")
    parser.add_argument("--ca-file", required=True)
    parser.add_argument("--users", type=int, default=3)
    parser.add_argument("--requests-per-user", type=int, default=4)
    parser.add_argument("--username-prefix", default="phase65")
    parser.add_argument(
        "--max-workers",
        type=int,
        help="Maximum number of in-flight protected calls; defaults to total requests",
    )
    return parser.parse_args()


def main() -> int:
    args = parse_args()

    sessions: list[Session] = []
    for idx in range(args.users):
        username = f"{args.username_prefix}_{idx}"
        enroll_user(args.base_url, args.ca_file, username, f"Phase65 User {idx}")
        sessions.append(login_user(args.base_url, args.ca_file, username))

    jobs: list[tuple[Session, int]] = []
    call_id = 0
    for session in sessions:
        for _ in range(args.requests_per_user):
            jobs.append((session, call_id))
            call_id += 1

    results: list[dict[str, Any]] = []
    max_workers = args.max_workers or len(jobs)
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        future_map = {
            executor.submit(counter_call, args.base_url, args.ca_file, session, job_id): (session.username, job_id)
            for session, job_id in jobs
        }
        for future in as_completed(future_map):
            username, job_id = future_map[future]
            try:
                results.append(future.result())
            except Exception as exc:
                results.append(
                    {
                        "call_id": job_id,
                        "username": username,
                        "status": 599,
                        "data": {"ok": False, "error": str(exc)},
                        "latency_ms": -1,
                    }
                )

    results.sort(key=lambda item: item["call_id"])
    ok_results = [item for item in results if item["status"] == 200 and item["data"].get("ok")]
    values = [item["data"]["upstream"]["value"] for item in ok_results]
    values_sorted = sorted(values)
    contiguous = bool(values_sorted) and values_sorted == list(range(values_sorted[0], values_sorted[0] + len(values_sorted)))

    summary = {
        "ok": len(ok_results) == len(results) and len(set(values)) == len(values) and contiguous,
        "users": args.users,
        "requests_per_user": args.requests_per_user,
        "total_requests": len(results),
        "max_workers": max_workers,
        "successful_requests": len(ok_results),
        "unique_counter_values": len(set(values)),
        "counter_min": min(values_sorted) if values_sorted else None,
        "counter_max": max(values_sorted) if values_sorted else None,
        "counter_contiguous": contiguous,
        "max_latency_ms": max((item["latency_ms"] for item in results), default=None),
        "results": results,
    }
    print(json.dumps(summary, indent=2))
    return 0 if summary["ok"] else 1


if __name__ == "__main__":
    raise SystemExit(main())
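The probe's pass/fail criterion above reduces to one invariant: the counter values collected across all users must be duplicate-free and form an unbroken run. A minimal standalone sketch of that check (the function name `contiguous` is illustrative; the probe computes the same expression inline):

```python
def contiguous(values: list[int]) -> bool:
    # Mirrors the probe's check: the sorted values must equal the run
    # starting at the smallest value with one entry per element.
    # An empty list fails, and any duplicate or gap breaks the run.
    s = sorted(values)
    return bool(s) and s == list(range(s[0], s[0] + len(s)))

print(contiguous([4, 2, 3, 1]))   # True: the run 1..4
print(contiguous([1, 2, 4]))      # False: gap at 3
print(contiguous([1, 2, 2, 3]))   # False: duplicate shifts the run
```

Comparing against `range(s[0], s[0] + len(s))` catches duplicates and gaps in one comparison, since either defect makes the sorted list diverge from the ideal run of the same length.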
@ -0,0 +1,18 @@
// Minimal local Playwright config for the k_client browser flow.
const { defineConfig } = require("@playwright/test");

module.exports = defineConfig({
  testDir: "./tests",
  timeout: 180_000,
  expect: {
    timeout: 15_000,
  },
  use: {
    baseURL: process.env.PORTAL_BASE_URL || "http://127.0.0.1:8766",
    headless: process.env.PW_HEADLESS === "1",
    trace: "on-first-retry",
    screenshot: "only-on-failure",
    video: "retain-on-failure",
  },
  reporter: [["list"]],
});
@ -0,0 +1,321 @@
#!/usr/bin/env python3
"""
Low-level CTAP2 probe for ChromeCard host debugging.

This bypasses the higher-level Fido2Client/WebAuthn helpers so we can inspect
raw makeCredential/getAssertion behavior, keepalive callbacks, and transport
errors on the host stack.
"""

from __future__ import annotations

import argparse
import hashlib
import json
import secrets
import sys
import time
import traceback
from typing import Any

try:
    from fido2.ctap import CtapError
    from fido2.ctap2 import Ctap2
    from fido2.hid import CtapHidDevice
    from fido2.hid.linux import get_descriptor, open_connection
except Exception as exc:
    print("Missing dependency: python-fido2", file=sys.stderr)
    print("Install with: python3 -m pip install fido2", file=sys.stderr)
    print(f"Import error: {exc}", file=sys.stderr)
    sys.exit(2)


def _json_default(value: Any) -> Any:
    if isinstance(value, bytes):
        return value.hex()
    if isinstance(value, set):
        return sorted(value)
    if hasattr(value, "items"):
        return dict(value.items())
    return str(value)


def _now() -> str:
    return time.strftime("%Y-%m-%dT%H:%M:%S", time.localtime())


def log(message: str) -> None:
    print(f"[{_now()}] {message}", file=sys.stderr, flush=True)


def list_devices() -> list[CtapHidDevice]:
    return list(CtapHidDevice.list_devices())


def describe_device(dev: CtapHidDevice) -> dict[str, Any]:
    desc = getattr(dev, "descriptor", None)
    return {
        "product_name": getattr(desc, "product_name", None),
        "manufacturer": getattr(desc, "manufacturer_string", None),
        "vendor_id": getattr(desc, "vid", None),
        "product_id": getattr(desc, "pid", None),
        "path": getattr(desc, "path", None),
    }


def get_ctap2(dev: CtapHidDevice) -> Ctap2:
    return Ctap2(dev)


def get_device(index: int, device_path: str | None) -> CtapHidDevice:
    if device_path:
        descriptor = get_descriptor(device_path)
        return CtapHidDevice(descriptor, open_connection(descriptor))
    devs = list_devices()
    if not devs:
        raise SystemExit("No CTAP HID devices found.")
    if index < 0 or index >= len(devs):
        raise SystemExit(f"Invalid --index {index}; found {len(devs)} device(s).")
    return devs[index]


def print_json(payload: dict[str, Any]) -> None:
    print(json.dumps(payload, indent=2, default=_json_default))


def keepalive_logger(status: int) -> None:
    log(f"keepalive status={status}")


def _coerce_hex_bytes(value: str | None, label: str) -> bytes | None:
    if value is None:
        return None
    raw = value.strip().lower()
    if raw.startswith("0x"):
        raw = raw[2:]
    try:
        return bytes.fromhex(raw)
    except ValueError as exc:
        raise SystemExit(f"invalid hex for {label}: {value}") from exc


def _client_data_hash(label: str) -> bytes:
    return hashlib.sha256(label.encode("utf-8")).digest()


def _key_params() -> list[dict[str, Any]]:
    return [
        {"type": "public-key", "alg": -7},
        {"type": "public-key", "alg": -257},
    ]


def do_info(ctap2: Ctap2, device_meta: dict[str, Any]) -> int:
    info = ctap2.get_info()
    print_json({"device": device_meta, "ctap2_info": info})
    return 0


def do_make_credential(ctap2: Ctap2, args: argparse.Namespace, device_meta: dict[str, Any]) -> int:
    rp = {"id": args.rp_id, "name": args.rp_name or args.rp_id}
    user_id = args.user_id.encode("utf-8")
    user = {
        "id": user_id,
        "name": args.user_name,
        "displayName": args.user_display_name or args.user_name,
    }
    client_data_hash = _client_data_hash(f"chromecard-make-credential:{args.rp_id}:{args.user_name}")
    options = {"rk": args.resident_key, "uv": args.user_verification}
    log(
        "starting makeCredential "
        f"rp_id={args.rp_id} user={args.user_name} rk={options['rk']} uv={options['uv']}"
    )
    try:
        response = ctap2.make_credential(
            client_data_hash=client_data_hash,
            rp=rp,
            user=user,
            key_params=_key_params(),
            options=options,
            on_keepalive=keepalive_logger,
        )
    except CtapError as exc:
        print_json(
            {
                "operation": "makeCredential",
                "device": device_meta,
                "rp": rp,
                "user": user,
                "options": options,
                "error_type": "CtapError",
                "error_code": getattr(exc, "code", None),
                "error_name": str(getattr(exc, "code", None)),
                "message": str(exc),
            }
        )
        return 1
    except Exception as exc:
        print_json(
            {
                "operation": "makeCredential",
                "device": device_meta,
                "rp": rp,
                "user": user,
                "options": options,
                "error_type": type(exc).__name__,
                "message": str(exc),
                "traceback": traceback.format_exc(),
            }
        )
        return 1

    auth_data = getattr(response, "auth_data", None)
    credential_data = getattr(auth_data, "credential_data", None)
    print_json(
        {
            "operation": "makeCredential",
            "device": device_meta,
            "rp": rp,
            "user": user,
            "options": options,
            "fmt": getattr(response, "fmt", None),
            "auth_data": auth_data,
            "credential_id_hex": getattr(credential_data, "credential_id", b"").hex()
            if credential_data is not None
            else None,
            "credential_data_hex": bytes(credential_data).hex() if credential_data is not None else None,
            "att_stmt": getattr(response, "att_stmt", None),
        }
    )
    return 0


def do_get_assertion(ctap2: Ctap2, args: argparse.Namespace, device_meta: dict[str, Any]) -> int:
    allow_credential = _coerce_hex_bytes(args.allow_credential_id, "allow-credential-id")
    allow_list = [{"type": "public-key", "id": allow_credential}] if allow_credential else None
    client_data_hash = _client_data_hash(f"chromecard-get-assertion:{args.rp_id}")
    options = {"up": True, "uv": args.user_verification}
    log(
        "starting getAssertion "
        f"rp_id={args.rp_id} allow_list={1 if allow_list else 0} uv={options['uv']}"
    )
    try:
        response = ctap2.get_assertion(
            rp_id=args.rp_id,
            client_data_hash=client_data_hash,
            allow_list=allow_list,
            options=options,
            on_keepalive=keepalive_logger,
|
||||||
|
)
|
||||||
|
except CtapError as exc:
|
||||||
|
print_json(
|
||||||
|
{
|
||||||
|
"operation": "getAssertion",
|
||||||
|
"device": device_meta,
|
||||||
|
"rp_id": args.rp_id,
|
||||||
|
"allow_list": allow_list,
|
||||||
|
"options": options,
|
||||||
|
"error_type": "CtapError",
|
||||||
|
"error_code": getattr(exc, "code", None),
|
||||||
|
"error_name": str(getattr(exc, "code", None)),
|
||||||
|
"message": str(exc),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
return 1
|
||||||
|
except Exception as exc:
|
||||||
|
print_json(
|
||||||
|
{
|
||||||
|
"operation": "getAssertion",
|
||||||
|
"device": device_meta,
|
||||||
|
"rp_id": args.rp_id,
|
||||||
|
"allow_list": allow_list,
|
||||||
|
"options": options,
|
||||||
|
"error_type": type(exc).__name__,
|
||||||
|
"message": str(exc),
|
||||||
|
"traceback": traceback.format_exc(),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
return 1
|
||||||
|
|
||||||
|
assertions: list[dict[str, Any]] = []
|
||||||
|
for item in getattr(response, "assertions", []) or []:
|
||||||
|
assertions.append(
|
||||||
|
{
|
||||||
|
"credential": getattr(item, "credential", None),
|
||||||
|
"auth_data": getattr(item, "auth_data", None),
|
||||||
|
"signature": getattr(item, "signature", None),
|
||||||
|
"user": getattr(item, "user", None),
|
||||||
|
"number_of_credentials": getattr(item, "number_of_credentials", None),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
print_json(
|
||||||
|
{
|
||||||
|
"operation": "getAssertion",
|
||||||
|
"device": device_meta,
|
||||||
|
"rp_id": args.rp_id,
|
||||||
|
"allow_list": allow_list,
|
||||||
|
"options": options,
|
||||||
|
"assertions": assertions,
|
||||||
|
}
|
||||||
|
)
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
def build_parser() -> argparse.ArgumentParser:
|
||||||
|
parser = argparse.ArgumentParser(description="Low-level CTAP2 host probe")
|
||||||
|
parser.add_argument("--index", type=int, default=0, help="Device index from --list output")
|
||||||
|
parser.add_argument(
|
||||||
|
"--device-path",
|
||||||
|
help="Use a specific hidraw node such as /dev/hidraw0 instead of scanning all devices",
|
||||||
|
)
|
||||||
|
subparsers = parser.add_subparsers(dest="command", required=True)
|
||||||
|
|
||||||
|
subparsers.add_parser("list", help="List CTAP HID devices")
|
||||||
|
subparsers.add_parser("info", help="Fetch CTAP2 getInfo")
|
||||||
|
|
||||||
|
make_credential = subparsers.add_parser("make-credential", help="Run raw CTAP2 makeCredential")
|
||||||
|
make_credential.add_argument("--rp-id", default="localhost")
|
||||||
|
make_credential.add_argument("--rp-name", default="ChromeCard Local Probe")
|
||||||
|
make_credential.add_argument("--user-name", default="probe-user")
|
||||||
|
make_credential.add_argument("--user-display-name", default="Probe User")
|
||||||
|
make_credential.add_argument("--user-id", default=secrets.token_hex(16))
|
||||||
|
make_credential.add_argument("--resident-key", action="store_true")
|
||||||
|
make_credential.add_argument("--user-verification", action="store_true")
|
||||||
|
|
||||||
|
get_assertion = subparsers.add_parser("get-assertion", help="Run raw CTAP2 getAssertion")
|
||||||
|
get_assertion.add_argument("--rp-id", default="localhost")
|
||||||
|
get_assertion.add_argument("--allow-credential-id", help="Credential id as hex")
|
||||||
|
get_assertion.add_argument("--user-verification", action="store_true")
|
||||||
|
|
||||||
|
return parser
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> int:
|
||||||
|
parser = build_parser()
|
||||||
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
if args.command == "list":
|
||||||
|
devs = list_devices()
|
||||||
|
print_json(
|
||||||
|
{
|
||||||
|
"devices": [describe_device(dev) for dev in devs],
|
||||||
|
}
|
||||||
|
)
|
||||||
|
return 0 if devs else 1
|
||||||
|
|
||||||
|
dev = get_device(args.index, args.device_path)
|
||||||
|
device_meta = describe_device(dev)
|
||||||
|
ctap2 = get_ctap2(dev)
|
||||||
|
|
||||||
|
if args.command == "info":
|
||||||
|
return do_info(ctap2, device_meta)
|
||||||
|
if args.command == "make-credential":
|
||||||
|
return do_make_credential(ctap2, args, device_meta)
|
||||||
|
if args.command == "get-assertion":
|
||||||
|
return do_get_assertion(ctap2, args, device_meta)
|
||||||
|
parser.error(f"unsupported command: {args.command}")
|
||||||
|
return 2
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
raise SystemExit(main())
|
||||||
|
|
@ -0,0 +1,70 @@
const { test, expect } = require("@playwright/test");

const registrationTimeoutMs = Number(process.env.CARD_REGISTRATION_TIMEOUT_MS || "90000");
const loginTimeoutMs = Number(process.env.CARD_LOGIN_TIMEOUT_MS || "90000");

function uniqueUsername() {
  return `pw_${Date.now().toString(36)}`;
}

async function waitForActionResult(page, action, expectedText, timeoutMs) {
  const flowResult = page.locator("#flowResult");
  await action();
  await expect(flowResult).toContainText(expectedText, { timeout: timeoutMs });
}

test.describe("k_client portal regression", () => {
  test("registers, logs in, reads counter, logs out, and unregisters", async ({ page }) => {
    const username = uniqueUsername();
    const usersList = page.locator("#usersList");
    const flowResult = page.locator("#flowResult");
    const sessionLine = page.locator("#stateSession");

    test.setTimeout(registrationTimeoutMs + loginTimeoutMs + 90_000);

    await page.goto("/");
    await expect(page.getByRole("heading", { name: "ChromeCard Client Flow" })).toBeVisible();
    await page.getByLabel("Username").fill(username);

    await test.step("Register user", async () => {
      // Card step: press yes on the registration prompt.
      await waitForActionResult(
        page,
        () => page.getByRole("button", { name: "Register User" }).click(),
        "User registration succeeded.",
        registrationTimeoutMs
      );
      await expect(usersList).toContainText(username);
    });

    await test.step("Login", async () => {
      // Card step: press yes on the authentication prompt.
      await waitForActionResult(
        page,
        () => page.getByRole("button", { name: "Login" }).click(),
        "Login succeeded. You can now call k_server.",
        loginTimeoutMs
      );
      await expect(sessionLine).toContainText("Session active: yes");
    });

    await test.step("Call k_server counter", async () => {
      await page.getByRole("button", { name: "Call k_server" }).click();
      await expect(flowResult).toContainText("k_server was reached. Counter value:");
    });

    await test.step("Logout", async () => {
      await page.getByRole("button", { name: "Logout" }).click();
      await expect(flowResult).toContainText("Session cleared.");
      await expect(sessionLine).toContainText("Session active: no");
    });

    await test.step("Unregister user", async () => {
      const row = usersList.locator(".user-row", { hasText: username });
      await expect(row).toBeVisible();
      await row.getByRole("button", { name: "Unregister" }).click();
      await expect(flowResult).toContainText(`User ${username} was unregistered.`);
      await expect(usersList).not.toContainText(username);
    });
  });
});
@ -0,0 +1,804 @@
#!/usr/bin/env python3
"""
Unit tests for k_proxy_app.py.

Card (FIDO2/CTAP) and k_server (UpstreamPool) are mocked throughout.
All tests run locally without any Qubes VMs or attached hardware.
"""

import http.client
import json
import sys
import tempfile
import threading
import time
import unittest
from http.server import ThreadingHTTPServer
from pathlib import Path
from unittest.mock import MagicMock, patch

sys.path.insert(0, str(Path(__file__).parent.parent))

import k_proxy_app as app
from k_proxy_app import (
    AUTH_MODE_FIDO2_DIRECT,
    AUTH_MODE_PROBE,
    Enrollment,
    Handler,
    ProxyState,
    UpstreamPool,
    b64u_decode,
    b64u_encode,
    enrollment_payload,
    normalize_display_name,
    normalize_username,
)


# ── test helpers ──────────────────────────────────────────────────────────────

def _make_state(tmp_path, *, auth_mode=AUTH_MODE_PROBE, session_ttl=300):
    return ProxyState(
        session_ttl_s=session_ttl,
        auth_mode=auth_mode,
        auth_command="echo ok",
        server_base_url="http://127.0.0.1:19999",
        server_ca_file=None,
        server_max_connections=1,
        proxy_token="test-token",
        enrollment_db=tmp_path / "enrollments.json",
        rp_id="localhost",
        rp_name="Test RP",
        origin="https://localhost",
        direct_device_path="",
    )


def _enrollment(username="alice", display_name=None, *, credential_data_b64=None):
    now = int(time.time())
    return Enrollment(
        username=username,
        display_name=display_name,
        created_at=now,
        updated_at=now,
        credential_data_b64=credential_data_b64,
    )


# ── pure function tests ───────────────────────────────────────────────────────

class TestNormalizeUsername(unittest.TestCase):
    def test_simple_valid(self):
        self.assertEqual(normalize_username("alice"), "alice")

    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username(" Alice "), "alice")

    def test_valid_with_dots_dashes_underscores(self):
        for name in ("alice.smith", "alice-smith", "alice_smith", "a1b"):
            with self.subTest(name=name):
                self.assertEqual(normalize_username(name), name)

    def test_too_short_raises(self):
        with self.assertRaises(ValueError):
            normalize_username("ab")

    def test_too_long_raises(self):
        with self.assertRaises(ValueError):
            normalize_username("a" * 33)

    def test_invalid_chars_raise(self):
        for bad in ("Alice!", "al ice", "al@ice", "AB"):
            with self.subTest(bad=bad):
                with self.assertRaises(ValueError):
                    normalize_username(bad)

    def test_minimum_length_valid(self):
        self.assertEqual(normalize_username("abc"), "abc")

    def test_maximum_length_valid(self):
        self.assertEqual(normalize_username("a" * 32), "a" * 32)


class TestNormalizeDisplayName(unittest.TestCase):
    def test_none_returns_none(self):
        self.assertIsNone(normalize_display_name(None))

    def test_whitespace_only_returns_none(self):
        self.assertIsNone(normalize_display_name(" "))

    def test_strips_whitespace(self):
        self.assertEqual(normalize_display_name(" Alice Smith "), "Alice Smith")

    def test_max_length_accepted(self):
        self.assertEqual(normalize_display_name("a" * 64), "a" * 64)

    def test_over_max_length_raises(self):
        with self.assertRaises(ValueError):
            normalize_display_name("a" * 65)


class TestBase64Utils(unittest.TestCase):
    def test_round_trip(self):
        original = b"\x00\x01\x02\xffsome\xffbinary"
        self.assertEqual(b64u_decode(b64u_encode(original)), original)

    def test_no_padding_chars_in_output(self):
        encoded = b64u_encode(b"x")
        self.assertNotIn("=", encoded)

    def test_decode_handles_missing_padding(self):
        encoded = b64u_encode(b"hello")
        self.assertEqual(b64u_decode(encoded), b"hello")


class TestEnrollmentPayload(unittest.TestCase):
    def test_basic_fields(self):
        e = _enrollment("alice", "Alice Smith")
        payload = enrollment_payload(e)
        self.assertTrue(payload["ok"])
        self.assertEqual(payload["username"], "alice")
        self.assertEqual(payload["display_name"], "Alice Smith")
        self.assertFalse(payload["has_credential"])

    def test_has_credential_true_when_data_present(self):
        e = _enrollment(credential_data_b64="abc")
        self.assertTrue(enrollment_payload(e)["has_credential"])

    def test_created_flag_included_when_given(self):
        e = _enrollment()
        self.assertIn("created", enrollment_payload(e, created=True))
        self.assertNotIn("created", enrollment_payload(e))


# ── session management ────────────────────────────────────────────────────────

class TestSessionManagement(unittest.TestCase):
    def setUp(self):
        self._tmpdir = tempfile.TemporaryDirectory()
        self.state = _make_state(Path(self._tmpdir.name))

    def tearDown(self):
        self._tmpdir.cleanup()

    def test_create_returns_token_and_future_expiry(self):
        token, expires_at = self.state.create_session("alice")
        self.assertIsInstance(token, str)
        self.assertGreater(len(token), 16)
        self.assertGreater(expires_at, time.time())

    def test_get_session_returns_correct_username(self):
        token, _ = self.state.create_session("alice")
        session = self.state.get_session(token)
        self.assertIsNotNone(session)
        self.assertEqual(session.username, "alice")

    def test_get_session_unknown_token_returns_none(self):
        self.assertIsNone(self.state.get_session("not-a-real-token"))

    def test_expired_session_returns_none(self):
        state = _make_state(Path(self._tmpdir.name), session_ttl=-1)
        token, _ = state.create_session("alice")
        self.assertIsNone(state.get_session(token))

    def test_invalidate_session_removes_it(self):
        token, _ = self.state.create_session("alice")
        self.assertTrue(self.state.invalidate_session(token))
        self.assertIsNone(self.state.get_session(token))

    def test_invalidate_unknown_token_returns_false(self):
        self.assertFalse(self.state.invalidate_session("ghost"))

    def test_active_session_count_tracks_correctly(self):
        self.assertEqual(self.state.active_session_count(), 0)
        t1, _ = self.state.create_session("alice")
        t2, _ = self.state.create_session("bob")
        self.assertEqual(self.state.active_session_count(), 2)
        self.state.invalidate_session(t1)
        self.assertEqual(self.state.active_session_count(), 1)

    def test_expired_sessions_garbage_collected(self):
        state = _make_state(Path(self._tmpdir.name), session_ttl=-1)
        state.create_session("alice")
        state.create_session("bob")
        self.assertEqual(state.active_session_count(), 0)

    def test_tokens_are_unique(self):
        tokens = {self.state.create_session("alice")[0] for _ in range(20)}
        self.assertEqual(len(tokens), 20)

    def test_uses_direct_fido2_false_in_probe_mode(self):
        self.assertFalse(self.state.uses_direct_fido2())

    def test_uses_direct_fido2_true_in_direct_mode(self):
        state = _make_state(Path(self._tmpdir.name), auth_mode=AUTH_MODE_FIDO2_DIRECT)
        self.assertTrue(state.uses_direct_fido2())

    def test_auth_mode_label_probe(self):
        self.assertEqual(self.state.auth_mode_label(), "card_presence_probe")

    def test_auth_mode_label_direct(self):
        state = _make_state(Path(self._tmpdir.name), auth_mode=AUTH_MODE_FIDO2_DIRECT)
        self.assertEqual(state.auth_mode_label(), "fido2_assertion")


# ── enrollment management ─────────────────────────────────────────────────────

class TestEnrollmentManagement(unittest.TestCase):
    def setUp(self):
        self._tmpdir = tempfile.TemporaryDirectory()
        self.tmp_path = Path(self._tmpdir.name)
        self.state = _make_state(self.tmp_path)

    def tearDown(self):
        self._tmpdir.cleanup()

    def test_register_creates_enrollment(self):
        e = self.state.register_enrollment("alice", "Alice Smith")
        self.assertEqual(e.username, "alice")
        self.assertEqual(e.display_name, "Alice Smith")
        self.assertTrue(self.state.has_enrollment("alice"))

    def test_register_persists_across_state_reload(self):
        self.state.register_enrollment("alice", None)
        state2 = _make_state(self.tmp_path)
        self.assertTrue(state2.has_enrollment("alice"))

    def test_register_duplicate_raises_file_exists_error(self):
        self.state.register_enrollment("alice", None)
        with self.assertRaises(FileExistsError):
            self.state.register_enrollment("alice", None)

    def test_register_invalid_username_raises_value_error(self):
        with self.assertRaises(ValueError):
            self.state.register_enrollment("A!", None)

    def test_register_display_name_too_long_raises(self):
        with self.assertRaises(ValueError):
            self.state.register_enrollment("alice", "x" * 65)

    def test_update_changes_display_name(self):
        self.state.register_enrollment("alice", "Old")
        updated = self.state.update_enrollment("alice", "New")
        self.assertEqual(updated.display_name, "New")
        self.assertEqual(self.state.get_enrollment("alice").display_name, "New")

    def test_update_unknown_user_raises_key_error(self):
        with self.assertRaises(KeyError):
            self.state.update_enrollment("nobody", "Name")

    def test_delete_removes_enrollment(self):
        self.state.register_enrollment("alice", None)
        self.state.delete_enrollment("alice")
        self.assertFalse(self.state.has_enrollment("alice"))

    def test_delete_invalidates_active_sessions(self):
        self.state.register_enrollment("alice", None)
        token, _ = self.state.create_session("alice")
        self.state.delete_enrollment("alice")
        self.assertIsNone(self.state.get_session(token))

    def test_delete_does_not_affect_other_users_sessions(self):
        self.state.register_enrollment("alice", None)
        self.state.register_enrollment("bob", None)
        bob_token, _ = self.state.create_session("bob")
        self.state.delete_enrollment("alice")
        self.assertIsNotNone(self.state.get_session(bob_token))

    def test_delete_unknown_user_raises_key_error(self):
        with self.assertRaises(KeyError):
            self.state.delete_enrollment("nobody")

    def test_list_enrollments_sorted_alphabetically(self):
        self.state.register_enrollment("charlie", None)
        self.state.register_enrollment("alice", None)
        self.state.register_enrollment("bob", None)
        names = [e.username for e in self.state.list_enrollments()]
        self.assertEqual(names, ["alice", "bob", "charlie"])

    def test_get_enrollment_found(self):
        self.state.register_enrollment("alice", "Alice")
        e = self.state.get_enrollment("alice")
        self.assertIsNotNone(e)
        self.assertEqual(e.username, "alice")

    def test_get_enrollment_not_found_returns_none(self):
        self.assertIsNone(self.state.get_enrollment("nobody"))

    def test_get_enrollment_invalid_username_returns_none(self):
        self.assertIsNone(self.state.get_enrollment("!bad!"))

    def test_has_enrollment_true(self):
        self.state.register_enrollment("alice", None)
        self.assertTrue(self.state.has_enrollment("alice"))

    def test_has_enrollment_false(self):
        self.assertFalse(self.state.has_enrollment("nobody"))

    def test_register_direct_mode_delegates_to_direct_method(self):
        state = _make_state(self.tmp_path, auth_mode=AUTH_MODE_FIDO2_DIRECT)
        fake = _enrollment("alice", credential_data_b64="cred")
        with patch.object(state, "_register_direct_fido2", return_value=fake) as mock_direct:
            result = state.register_enrollment("alice", None)
        mock_direct.assert_called_once_with("alice", None)
        self.assertEqual(result.username, "alice")


# ── authentication ────────────────────────────────────────────────────────────

class TestProbeAuth(unittest.TestCase):
    def setUp(self):
        self._tmpdir = tempfile.TemporaryDirectory()
        self.state = _make_state(Path(self._tmpdir.name))

    def tearDown(self):
        self._tmpdir.cleanup()

    def _mock_proc(self, returncode, stdout="", stderr=""):
        proc = MagicMock()
        proc.returncode = returncode
        proc.stdout = stdout
        proc.stderr = stderr
        return proc

    def test_success_when_subprocess_returns_zero(self):
        with patch("k_proxy_app.subprocess.run", return_value=self._mock_proc(0, '{"ok": true}')):
            ok, _ = self.state.authenticate_with_card("alice")
        self.assertTrue(ok)

    def test_failure_when_subprocess_returns_nonzero(self):
        with patch("k_proxy_app.subprocess.run", return_value=self._mock_proc(1, stderr="No CTAP HID devices")):
            ok, msg = self.state.authenticate_with_card("alice")
        self.assertFalse(ok)
        self.assertIn("No CTAP HID devices", msg)

    def test_failure_uses_stdout_when_stderr_empty(self):
        with patch("k_proxy_app.subprocess.run", return_value=self._mock_proc(2, stdout="probe failed")):
            ok, msg = self.state.authenticate_with_card("alice")
        self.assertFalse(ok)
        self.assertIn("probe failed", msg)

    def test_failure_when_subprocess_raises(self):
        with patch("k_proxy_app.subprocess.run", side_effect=TimeoutError("timed out")):
            ok, msg = self.state.authenticate_with_card("alice")
        self.assertFalse(ok)
        self.assertIn("auth command failed", msg)


class TestDirectFido2Auth(unittest.TestCase):
    def setUp(self):
        self._tmpdir = tempfile.TemporaryDirectory()
        self.state = _make_state(Path(self._tmpdir.name), auth_mode=AUTH_MODE_FIDO2_DIRECT)

    def tearDown(self):
        self._tmpdir.cleanup()

    def test_unenrolled_user_returns_false(self):
        ok, msg = self.state.authenticate_with_card("nobody")
        self.assertFalse(ok)
        self.assertEqual(msg, "user not enrolled")

    def test_enrolled_without_credential_returns_false(self):
        self.state.enrollments["alice"] = _enrollment("alice")
        ok, msg = self.state.authenticate_with_card("alice")
        self.assertFalse(ok)
        self.assertEqual(msg, "user has no registered credential")

    def test_exception_from_ctap_returns_false_with_message(self):
        self.state.enrollments["alice"] = _enrollment("alice", credential_data_b64="dW5pY29kZQ")
        with patch("k_proxy_app.AttestedCredentialData", side_effect=Exception("bad cbor")):
            ok, msg = self.state.authenticate_with_card("alice")
        self.assertFalse(ok)
        self.assertIn("assertion verification failed", msg)

    def test_success_path_with_mocked_internals(self):
        self.state.enrollments["alice"] = _enrollment("alice", credential_data_b64=b64u_encode(b"fake_cred"))

        mock_cred = MagicMock()
        mock_options = MagicMock()
        mock_options.public_key.rp_id = "localhost"
        mock_options.public_key.allow_credentials = []
        mock_options.public_key.challenge = b"challenge"
        mock_client_data = MagicMock()
        mock_client_data.hash = b"hash"
        mock_assertion = MagicMock()
        mock_assertion.assertions = None
        mock_assertion.credential = {"id": b"cred_id"}
        mock_assertion.auth_data = b"auth"
        mock_assertion.signature = b"sig"
        mock_assertion.user = None

        with patch("k_proxy_app.AttestedCredentialData", return_value=mock_cred), \
                patch("k_proxy_app.AuthenticationResponse", return_value=MagicMock()), \
                patch("k_proxy_app.AuthenticatorAssertionResponse", return_value=MagicMock()), \
                patch.object(self.state, "_drop_direct_device"), \
                patch.object(self.state.fido_server, "authenticate_begin", return_value=(mock_options, {})), \
                patch.object(self.state, "_collect_client_data", return_value=mock_client_data), \
                patch.object(self.state, "_with_direct_ctap2", return_value=mock_assertion), \
                patch.object(self.state.fido_server, "authenticate_complete"):
            ok, msg = self.state.authenticate_with_card("alice")

        self.assertTrue(ok)
        self.assertEqual(msg, "assertion verified")


# ── upstream pool ─────────────────────────────────────────────────────────────

class TestUpstreamPool(unittest.TestCase):
    def _pool(self):
        return UpstreamPool(
            server_base_url="http://127.0.0.1:19999",
            server_ca_file=None,
            max_connections=2,
        )

    def _mock_response(self, status, body, will_close=True):
        resp = MagicMock()
        resp.status = status
        resp.read.return_value = body
        resp.will_close = will_close
        return resp

    def test_successful_request_returns_status_and_parsed_json(self):
        pool = self._pool()
        conn = MagicMock()
        conn.getresponse.return_value = self._mock_response(200, b'{"ok": true, "value": 7}')
        with patch.object(pool, "_new_connection", return_value=conn):
            status, data = pool.request_json("/resource/counter", {"X-Proxy-Token": "tok"}, {})
        self.assertEqual(status, 200)
        self.assertTrue(data["ok"])
        self.assertEqual(data["value"], 7)

    def test_non_200_status_is_returned_as_is(self):
        pool = self._pool()
        conn = MagicMock()
        conn.getresponse.return_value = self._mock_response(403, b'{"ok": false, "error": "forbidden"}')
        with patch.object(pool, "_new_connection", return_value=conn):
            status, data = pool.request_json("/test", {}, {})
        self.assertEqual(status, 403)
        self.assertFalse(data["ok"])

    def test_oserror_returns_502(self):
        pool = self._pool()
        conn = MagicMock()
        conn.request.side_effect = OSError("connection refused")
        with patch.object(pool, "_new_connection", return_value=conn):
            status, data = pool.request_json("/test", {}, {})
        self.assertEqual(status, 502)
        self.assertIn("server unavailable", data["error"])

    def test_empty_body_returns_empty_dict(self):
        pool = self._pool()
        conn = MagicMock()
        conn.getresponse.return_value = self._mock_response(200, b"")
        with patch.object(pool, "_new_connection", return_value=conn):
            status, data = pool.request_json("/test", {}, {})
        self.assertEqual(data, {})

    def test_connection_reused_when_will_close_false(self):
        pool = self._pool()
        conn = MagicMock()
        conn.getresponse.return_value = self._mock_response(200, b'{"ok": true}', will_close=False)
        with patch.object(pool, "_new_connection", return_value=conn) as mock_new:
            pool.request_json("/test", {}, {})
            pool.request_json("/test", {}, {})
        self.assertEqual(mock_new.call_count, 1)
        self.assertEqual(conn.request.call_count, 2)

    def test_connection_not_reused_when_will_close_true(self):
        pool = self._pool()
        conn = MagicMock()
        conn.getresponse.return_value = self._mock_response(200, b'{"ok": true}', will_close=True)
        with patch.object(pool, "_new_connection", return_value=conn) as mock_new:
            pool.request_json("/test", {}, {})
            pool.request_json("/test", {}, {})
        self.assertEqual(mock_new.call_count, 2)


# ── HTTP handler integration tests ────────────────────────────────────────────

class ServerFixture(unittest.TestCase):
    """Spins up a real ThreadingHTTPServer backed by a ProxyState with mocked
    card and upstream. Card auth and fetch_counter are patched per-test via
    patch.object(self.state, ...) or the _login() helper."""

    def setUp(self):
        self._tmpdir = tempfile.TemporaryDirectory()
        self.tmp_path = Path(self._tmpdir.name)
        self.state = _make_state(self.tmp_path)
        Handler.state = self.state
        self.server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
        self.port = self.server.server_address[1]
        self._thread = threading.Thread(target=self.server.serve_forever, daemon=True)
        self._thread.start()

    def tearDown(self):
        self.server.shutdown()
        self.server.server_close()
        self._tmpdir.cleanup()

    # ── request helpers ──

    def _conn(self):
        return http.client.HTTPConnection("127.0.0.1", self.port, timeout=5)

    def _get(self, path):
        conn = self._conn()
        try:
            conn.request("GET", path)
            resp = conn.getresponse()
            return resp.status, resp.read()
        finally:
            conn.close()

    def _get_json(self, path):
        status, body = self._get(path)
        return status, json.loads(body)

    def _post(self, path, payload=None, token=None):
        conn = self._conn()
        try:
            body = json.dumps(payload or {}).encode()
            headers = {
                "Content-Type": "application/json",
                "Content-Length": str(len(body)),
            }
            if token:
|
||||||
|
headers["Authorization"] = f"Bearer {token}"
|
||||||
|
conn.request("POST", path, body=body, headers=headers)
|
||||||
|
resp = conn.getresponse()
|
||||||
|
return resp.status, json.loads(resp.read())
|
||||||
|
finally:
|
||||||
|
conn.close()
|
||||||
|
|
||||||
|
def _post_raw(self, path, raw_body):
|
||||||
|
conn = self._conn()
|
||||||
|
try:
|
||||||
|
headers = {
|
||||||
|
"Content-Type": "application/json",
|
||||||
|
"Content-Length": str(len(raw_body)),
|
||||||
|
}
|
||||||
|
conn.request("POST", path, body=raw_body, headers=headers)
|
||||||
|
resp = conn.getresponse()
|
||||||
|
return resp.status, resp.read()
|
||||||
|
finally:
|
||||||
|
conn.close()
|
||||||
|
|
||||||
|
# ── state helpers ──
|
||||||
|
|
||||||
|
def _enroll(self, username="alice", display_name=None):
|
||||||
|
self.state.register_enrollment(username, display_name)
|
||||||
|
|
||||||
|
def _login(self, username="alice"):
|
||||||
|
"""Enroll user and obtain a session token with the card mocked to succeed."""
|
||||||
|
self._enroll(username)
|
||||||
|
with patch.object(self.state, "authenticate_with_card", return_value=(True, "ok")):
|
||||||
|
status, data = self._post("/session/login", {"username": username})
|
||||||
|
self.assertEqual(status, 200, f"login setup failed: {data}")
|
||||||
|
return data["session_token"]
|
||||||
|
|
||||||
|
|
||||||
|
class TestHandlerHealth(ServerFixture):
|
||||||
|
def test_get_root_returns_html(self):
|
||||||
|
status, body = self._get("/")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertIn(b"ChromeCard", body)
|
||||||
|
|
||||||
|
def test_health_returns_service_info(self):
|
||||||
|
status, data = self._get_json("/health")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertTrue(data["ok"])
|
||||||
|
self.assertEqual(data["service"], "k_proxy")
|
||||||
|
self.assertIn("active_sessions", data)
|
||||||
|
|
||||||
|
def test_health_reflects_active_session_count(self):
|
||||||
|
self.state.create_session("alice")
|
||||||
|
_, data = self._get_json("/health")
|
||||||
|
self.assertEqual(data["active_sessions"], 1)
|
||||||
|
|
||||||
|
def test_unknown_get_returns_404(self):
|
||||||
|
status, _ = self._get("/nonexistent")
|
||||||
|
self.assertEqual(status, 404)
|
||||||
|
|
||||||
|
def test_unknown_post_returns_404(self):
|
||||||
|
status, _ = self._post_raw("/nonexistent", b"{}")
|
||||||
|
self.assertEqual(status, 404)
|
||||||
|
|
||||||
|
|
||||||
|
class TestHandlerEnrollment(ServerFixture):
|
||||||
|
def test_register_new_user_returns_200(self):
|
||||||
|
status, data = self._post("/enroll/register", {"username": "alice", "display_name": "Alice"})
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertTrue(data["ok"])
|
||||||
|
self.assertEqual(data["username"], "alice")
|
||||||
|
self.assertEqual(data["display_name"], "Alice")
|
||||||
|
|
||||||
|
def test_register_duplicate_returns_409(self):
|
||||||
|
self._enroll("alice")
|
||||||
|
status, data = self._post("/enroll/register", {"username": "alice"})
|
||||||
|
self.assertEqual(status, 409)
|
||||||
|
self.assertFalse(data["ok"])
|
||||||
|
|
||||||
|
def test_register_invalid_username_returns_400(self):
|
||||||
|
status, data = self._post("/enroll/register", {"username": "A!"})
|
||||||
|
self.assertEqual(status, 400)
|
||||||
|
self.assertFalse(data["ok"])
|
||||||
|
|
||||||
|
def test_register_invalid_json_returns_400(self):
|
||||||
|
status, _ = self._post_raw("/enroll/register", b"not-json")
|
||||||
|
self.assertEqual(status, 400)
|
||||||
|
|
||||||
|
def test_enroll_status_found(self):
|
||||||
|
self._enroll("alice", "Alice Smith")
|
||||||
|
status, data = self._get_json("/enroll/status?username=alice")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertTrue(data["ok"])
|
||||||
|
self.assertEqual(data["display_name"], "Alice Smith")
|
||||||
|
|
||||||
|
def test_enroll_status_not_found_returns_404(self):
|
||||||
|
status, data = self._get_json("/enroll/status?username=nobody")
|
||||||
|
self.assertEqual(status, 404)
|
||||||
|
|
||||||
|
def test_enroll_status_missing_param_returns_400(self):
|
||||||
|
status, data = self._get_json("/enroll/status")
|
||||||
|
self.assertEqual(status, 400)
|
||||||
|
|
||||||
|
def test_enroll_list_empty(self):
|
||||||
|
status, data = self._get_json("/enroll/list")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertEqual(data["users"], [])
|
||||||
|
|
||||||
|
def test_enroll_list_returns_sorted_users(self):
|
||||||
|
self._enroll("charlie")
|
||||||
|
self._enroll("alice")
|
||||||
|
_, data = self._get_json("/enroll/list")
|
||||||
|
names = [u["username"] for u in data["users"]]
|
||||||
|
self.assertEqual(names, ["alice", "charlie"])
|
||||||
|
|
||||||
|
def test_enroll_update_changes_display_name(self):
|
||||||
|
self._enroll("alice", "Old")
|
||||||
|
status, data = self._post("/enroll/update", {"username": "alice", "display_name": "New"})
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertEqual(data["display_name"], "New")
|
||||||
|
|
||||||
|
def test_enroll_update_unknown_returns_404(self):
|
||||||
|
status, _ = self._post("/enroll/update", {"username": "nobody"})
|
||||||
|
self.assertEqual(status, 404)
|
||||||
|
|
||||||
|
def test_enroll_delete_returns_200_and_deleted_true(self):
|
||||||
|
self._enroll("alice")
|
||||||
|
status, data = self._post("/enroll/delete", {"username": "alice"})
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertTrue(data["deleted"])
|
||||||
|
self.assertFalse(self.state.has_enrollment("alice"))
|
||||||
|
|
||||||
|
def test_enroll_delete_unknown_returns_404(self):
|
||||||
|
status, _ = self._post("/enroll/delete", {"username": "nobody"})
|
||||||
|
self.assertEqual(status, 404)
|
||||||
|
|
||||||
|
|
||||||
|
class TestHandlerSession(ServerFixture):
|
||||||
|
def test_login_success_returns_token(self):
|
||||||
|
self._enroll("alice")
|
||||||
|
with patch.object(self.state, "authenticate_with_card", return_value=(True, "ok")):
|
||||||
|
status, data = self._post("/session/login", {"username": "alice"})
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertTrue(data["ok"])
|
||||||
|
self.assertIn("session_token", data)
|
||||||
|
self.assertIn("expires_at", data)
|
||||||
|
self.assertEqual(data["auth_mode"], "card_presence_probe")
|
||||||
|
|
||||||
|
def test_login_unenrolled_user_returns_403(self):
|
||||||
|
status, data = self._post("/session/login", {"username": "nobody"})
|
||||||
|
self.assertEqual(status, 403)
|
||||||
|
self.assertFalse(data["ok"])
|
||||||
|
self.assertIn("not enrolled", data["error"])
|
||||||
|
|
||||||
|
def test_login_card_failure_returns_401(self):
|
||||||
|
self._enroll("alice")
|
||||||
|
with patch.object(self.state, "authenticate_with_card", return_value=(False, "No CTAP devices")):
|
||||||
|
status, data = self._post("/session/login", {"username": "alice"})
|
||||||
|
self.assertEqual(status, 401)
|
||||||
|
self.assertFalse(data["ok"])
|
||||||
|
self.assertIn("card auth failed", data["error"])
|
||||||
|
self.assertIn("No CTAP devices", data["details"])
|
||||||
|
|
||||||
|
def test_login_invalid_username_returns_400(self):
|
||||||
|
status, data = self._post("/session/login", {"username": "!bad!"})
|
||||||
|
self.assertEqual(status, 400)
|
||||||
|
|
||||||
|
def test_session_status_valid_token(self):
|
||||||
|
token = self._login()
|
||||||
|
status, data = self._post("/session/status", {}, token=token)
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertTrue(data["ok"])
|
||||||
|
self.assertEqual(data["username"], "alice")
|
||||||
|
self.assertIn("expires_at", data)
|
||||||
|
self.assertGreaterEqual(data["seconds_remaining"], 0)
|
||||||
|
|
||||||
|
def test_session_status_no_token_returns_401(self):
|
||||||
|
status, data = self._post("/session/status", {})
|
||||||
|
self.assertEqual(status, 401)
|
||||||
|
|
||||||
|
def test_session_status_invalid_token_returns_401(self):
|
||||||
|
status, data = self._post("/session/status", {}, token="bad-token")
|
||||||
|
self.assertEqual(status, 401)
|
||||||
|
self.assertIn("invalid or expired", data["error"])
|
||||||
|
|
||||||
|
def test_logout_valid_token(self):
|
||||||
|
token = self._login()
|
||||||
|
status, data = self._post("/session/logout", {}, token=token)
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertTrue(data["ok"])
|
||||||
|
self.assertTrue(data["invalidated"])
|
||||||
|
self.assertIsNone(self.state.get_session(token))
|
||||||
|
|
||||||
|
def test_logout_invalid_token_returns_200_not_invalidated(self):
|
||||||
|
status, data = self._post("/session/logout", {}, token="ghost")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertFalse(data["invalidated"])
|
||||||
|
|
||||||
|
def test_logout_no_token_returns_401(self):
|
||||||
|
status, data = self._post("/session/logout", {})
|
||||||
|
self.assertEqual(status, 401)
|
||||||
|
|
||||||
|
def test_session_invalid_after_logout(self):
|
||||||
|
token = self._login()
|
||||||
|
self._post("/session/logout", {}, token=token)
|
||||||
|
status, data = self._post("/session/status", {}, token=token)
|
||||||
|
self.assertEqual(status, 401)
|
||||||
|
|
||||||
|
def test_multiple_sessions_independent(self):
|
||||||
|
t1 = self._login("alice")
|
||||||
|
t2 = self._login("bob")
|
||||||
|
# logout alice, bob's session still valid
|
||||||
|
self._post("/session/logout", {}, token=t1)
|
||||||
|
status, data = self._post("/session/status", {}, token=t2)
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertEqual(data["username"], "bob")
|
||||||
|
|
||||||
|
|
||||||
|
class TestHandlerResource(ServerFixture):
|
||||||
|
def test_counter_with_valid_session(self):
|
||||||
|
token = self._login()
|
||||||
|
with patch.object(self.state, "fetch_counter", return_value=(200, {"ok": True, "value": 5})):
|
||||||
|
status, data = self._post("/resource/counter", {}, token=token)
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertTrue(data["ok"])
|
||||||
|
self.assertEqual(data["upstream"]["value"], 5)
|
||||||
|
self.assertEqual(data["username"], "alice")
|
||||||
|
self.assertTrue(data["session_reused"])
|
||||||
|
|
||||||
|
def test_counter_no_token_returns_401(self):
|
||||||
|
status, data = self._post("/resource/counter", {})
|
||||||
|
self.assertEqual(status, 401)
|
||||||
|
|
||||||
|
def test_counter_invalid_token_returns_401(self):
|
||||||
|
status, data = self._post("/resource/counter", {}, token="garbage")
|
||||||
|
self.assertEqual(status, 401)
|
||||||
|
|
||||||
|
def test_counter_upstream_failure_propagated(self):
|
||||||
|
token = self._login()
|
||||||
|
with patch.object(self.state, "fetch_counter", return_value=(502, {"ok": False, "error": "server unavailable"})):
|
||||||
|
status, data = self._post("/resource/counter", {}, token=token)
|
||||||
|
self.assertEqual(status, 502)
|
||||||
|
self.assertFalse(data["ok"])
|
||||||
|
self.assertIn("upstream failed", data["error"])
|
||||||
|
|
||||||
|
def test_counter_returns_upstream_non_200_as_error(self):
|
||||||
|
token = self._login()
|
||||||
|
with patch.object(self.state, "fetch_counter", return_value=(403, {"ok": False, "error": "forbidden"})):
|
||||||
|
status, data = self._post("/resource/counter", {}, token=token)
|
||||||
|
self.assertEqual(status, 403)
|
||||||
|
self.assertFalse(data["ok"])
|
||||||
|
|
||||||
|
def test_counter_session_still_valid_after_call(self):
|
||||||
|
token = self._login()
|
||||||
|
with patch.object(self.state, "fetch_counter", return_value=(200, {"ok": True, "value": 1})):
|
||||||
|
self._post("/resource/counter", {}, token=token)
|
||||||
|
status, _ = self._post("/session/status", {}, token=token)
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
unittest.main(verbosity=2)
|
||||||