Compare commits
No commits in common. "bd839ea42df50981efc30a5f3400080aeb050c44" and "37600548ace32ce484ac396655636093302d6601" have entirely different histories.
bd839ea42d ... 37600548ac
@@ -3,10 +3,6 @@
 .codex/
 __pycache__/
 *.pyc
-tls/
-node_modules/
-playwright-report/
-test-results/
 
 # Keep firmware SDK tree out of this workspace-tracking repo
 CR_SDK_CK-main/
@@ -2,19 +2,6 @@
 
 This runbook starts a minimal `k_server` + `k_proxy` prototype for session reuse testing.
 
-Last updated: 2026-04-25
-
-Related browser demo:
-
-- `k_client_portal.py` can now be used in `k_client` at `http://127.0.0.1:8766` to show:
-  - registration
-  - current registered-user list from `k_proxy`
-  - unregister from the browser page
-  - login with card approval/denial
-  - protected `k_server` counter access
-  - logout
-  - explicit "k_server was not called" behavior when login is denied
-
 ## What This Prototype Covers
 
 - `k_proxy` creates short-lived sessions.
@@ -22,26 +9,15 @@ Related browser demo:
 - Valid sessions can repeatedly access a protected `k_server` counter endpoint without re-running card auth each request.
 - Session status and logout/invalidation paths are implemented.
 
-## Modes
-
-There are two useful ways to run this prototype:
-
-- Same-VM quickstart: `k_proxy` and `k_server` run on one VM for app-local testing.
-- Split-VM chain: `k_proxy` runs in `k_proxy`, `k_server` runs in `k_server`, and the Qubes forwarding layer must permit the chain.
-
 ## Start Services
 
-### Same-VM quickstart
-
-This matches the code defaults and is useful for basic app behavior only.
-
-In the chosen VM:
+In `k_server` VM:
 
 ```bash
 python3 /home/user/chromecard/k_server_app.py --host 127.0.0.1 --port 8780 --proxy-token dev-proxy-token
 ```
 
-In the same VM:
+In `k_proxy` VM:
 
 ```bash
 python3 /home/user/chromecard/k_proxy_app.py \
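Both startup commands above share a static `--proxy-token`, and the runbook later notes that upstream trust relies on a shared `X-Proxy-Token` header. As a hedged illustration only (not the actual `k_server_app.py` code), a minimal server-side gate for that header could look like:

```python
# Hypothetical sketch of the shared-token gate the runbook describes:
# k_server accepts a request only when the X-Proxy-Token header matches
# the value passed via --proxy-token. Not the actual application code.
import hmac

EXPECTED_PROXY_TOKEN = "dev-proxy-token"  # assumption: mirrors --proxy-token

def proxy_token_ok(headers: dict) -> bool:
    # Compare in constant time so token length/prefix is not leaked via timing.
    presented = headers.get("X-Proxy-Token", "")
    return hmac.compare_digest(presented, EXPECTED_PROXY_TOKEN)
```

A static shared secret like this is prototype plumbing; the runbook itself flags it as not a final authorization design.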
@@ -52,75 +28,12 @@ python3 /home/user/chromecard/k_proxy_app.py \
   --proxy-token dev-proxy-token
 ```
 
-### Split-VM chain
-
-This is the current Qubes target shape.
-
-In `k_server` VM:
-
-```bash
-python3 /home/user/chromecard/k_server_app.py \
-  --host 127.0.0.1 \
-  --port 8780 \
-  --proxy-token dev-proxy-token \
-  --tls-certfile /home/user/chromecard/tls/phase2/k_server.crt \
-  --tls-keyfile /home/user/chromecard/tls/phase2/k_server.key
-```
-
-In `k_proxy` VM:
-
-```bash
-qvm-connect-tcp 9780:k_server:8780
-```
-
-Notes:
-
-```bash
-python3 /home/user/chromecard/k_proxy_app.py \
-  --host 127.0.0.1 \
-  --port 8771 \
-  --session-ttl 300 \
-  --server-base-url https://127.0.0.1:9780 \
-  --server-ca-file /home/user/chromecard/tls/phase2/ca.crt \
-  --proxy-token dev-proxy-token \
-  --tls-certfile /home/user/chromecard/tls/phase2/k_proxy.crt \
-  --tls-keyfile /home/user/chromecard/tls/phase2/k_proxy.key
-```
-
-In `k_client` VM:
-
-```bash
-qvm-connect-tcp 9771:k_proxy:8771
-```
-
-Notes:
-
-- Current validated split-VM path is `k_client localhost:9771 -> k_proxy localhost:8771 -> k_proxy localhost:9780 forward -> k_server localhost:8780`.
-- Use `--cacert /home/user/chromecard/tls/phase2/ca.crt` for TLS verification in `curl`-based checks.
-- Raw VM-IP routing is not the validated path for the current prototype.
-
-## Ownership And Concurrency
-
-- `k_proxy` is authoritative for session state.
-- `k_server` is authoritative for the protected counter state.
-- Sessions are in-memory only in `k_proxy` and are lost on proxy restart.
-- The protected counter is in-memory only in `k_server` and resets on server restart.
-- Both services use `ThreadingHTTPServer`.
-- `k_proxy` guards its session store with a single process-local lock.
-- `k_server` guards counter increments with a single process-local lock.
-- Qubes localhost forwarders are transport plumbing only; they are not a source of state authority.
-
 ## Test Flow
 
-Use the proxy port that matches the mode you started:
-
-- Same-VM quickstart: `8770`
-- Split-VM chain: `9771` from `k_client`, `8771` inside `k_proxy`
-
 Create a session (runs auth gate once):
 
 ```bash
-curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/session/login \
+curl -sS -X POST http://127.0.0.1:8770/session/login \
   -H 'Content-Type: application/json' \
   -d '{"username":"alice"}'
 ```
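The login call above returns a session token that the later `curl` steps reuse via `TOKEN='<paste-token>'` and a bearer header. A hedged sketch of that client-side handling (the `"token"` field name is an assumption about the login response shape):

```python
# Sketch of the client side of the test flow: pull the session token out of
# the /session/login JSON response and build the Authorization header that
# the subsequent status/counter/logout requests reuse. The "token" field
# name is an assumption, not confirmed from k_proxy_app.py.
import json

def bearer_headers(login_response_body: str) -> dict:
    token = json.loads(login_response_body)["token"]
    return {"Authorization": f"Bearer {token}"}
```

This is exactly what the manual flow does by hand: paste the token once, then attach `Authorization: Bearer $TOKEN` to every protected request.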
@@ -134,121 +47,34 @@ TOKEN='<paste-token>'
 Check session:
 
 ```bash
-curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/session/status \
+curl -sS -X POST http://127.0.0.1:8770/session/status \
   -H "Authorization: Bearer $TOKEN"
 ```
 
 Call protected resource multiple times (should not require new login):
 
 ```bash
-curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/resource/counter \
+curl -sS -X POST http://127.0.0.1:8770/resource/counter \
   -H "Authorization: Bearer $TOKEN"
-curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/resource/counter \
+curl -sS -X POST http://127.0.0.1:8770/resource/counter \
   -H "Authorization: Bearer $TOKEN"
 ```
 
 Logout/invalidate:
 
 ```bash
-curl -sS --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/session/logout \
+curl -sS -X POST http://127.0.0.1:8770/session/logout \
   -H "Authorization: Bearer $TOKEN"
 ```
 
 Re-check after logout (should fail with 401):
 
 ```bash
-curl -i --cacert /home/user/chromecard/tls/phase2/ca.crt -X POST https://127.0.0.1:<proxy-port>/resource/counter \
+curl -i -X POST http://127.0.0.1:8770/resource/counter \
   -H "Authorization: Bearer $TOKEN"
 ```
 
-## Regression Script
-
-For the split-VM chain, use the host-side regression helper:
-
-```bash
-/home/user/chromecard/phase5_chain_regression.sh
-```
-
-Defaults:
-
-- Drives the test from `k_client` over SSH.
-- Uses `https://127.0.0.1:9771` and `/home/user/chromecard/tls/phase2/ca.crt` inside `k_client`.
-- Logs in as `alice`.
-- Runs `20` counter requests at parallelism `8`.
-- Verifies that returned counter values are unique and gap-free, then logs out and checks for `401` after logout.
-
-Useful overrides:
-
-```bash
-REQUESTS=50 PARALLELISM=12 /home/user/chromecard/phase5_chain_regression.sh
-```
-
-```bash
-/home/user/chromecard/phase5_chain_regression.sh --username alice --client-host k_client
-```
-
-For the browser-facing `k_client` page, use the Playwright regression spec:
-
-```bash
-npm install
-npx playwright install
-npm run test:k-client
-```
-
-Notes:
-
-- default target is `http://127.0.0.1:8766`
-- override with `PORTAL_BASE_URL=http://127.0.0.1:8766`
-- the spec expects manual card confirmation during register and login
-- timeouts can be tuned with `CARD_REGISTRATION_TIMEOUT_MS` and `CARD_LOGIN_TIMEOUT_MS`
-- from this host, a forwarded portal URL was used successfully:
-  - `PORTAL_BASE_URL=http://127.0.0.1:18766 npm run test:k-client`
-
-Verified result on 2026-04-25:
-
-- Live split-VM chain passed end-to-end.
-- Login, session status, counter reuse, and logout all worked from `k_client`.
-- A `20` request / `8` worker concurrency burst returned unique, gap-free counter values `23..42`.
-- The Playwright browser regression for `k_client_portal.py` also passed end-to-end:
-  - register
-  - login
-  - protected counter
-  - logout
-  - unregister
-
 ## Current Limitation
 
 - This uses card-presence probing, not a full WebAuthn assertion verification path.
 - Intended as a Phase 5 starter for session semantics and proxy/server behavior.
-- Session and counter state are currently process-local only; restart loses state.
-- Upstream trust still relies on a shared static `X-Proxy-Token`.
-- Experimental direct FIDO2 mode now exists in `k_proxy_app.py` behind `--auth-mode fido2-direct`, but it is not the default runtime:
-  - direct registration on the current `k_proxy` card/library stack still fails with `No compatible PIN/UV protocols supported!`
-  - a CTAP1 fallback probe did not complete quickly enough to promote as the working path
-  - the deployed service was restored to default `probe` mode so the validated Phase 5 chain remains usable
-- Raw CTAP debugging helper now exists at `/home/user/chromecard/raw_ctap_probe.py`:
-  - use it on `k_proxy` to exercise low-level `makeCredential` / `getAssertion`
-  - it logs keepalive callbacks and transport exceptions
-- Current blocker before the next direct-auth attempt:
-  - `k_proxy` currently has no visible `/dev/hidraw*`
-  - `python3 /home/user/chromecard/fido2_probe.py --list` in `k_proxy` returns `No CTAP HID devices found.`
-  - restore card visibility first, then retry the raw CTAP probe and stop to tell the user when to press `yes` or `no`
-- Latest retry after card reattach:
-  - `/dev/hidraw0` and `/dev/hidraw1` are visible in `k_proxy` again
-  - `/dev/hidraw0` opens successfully as the normal user, but `/dev/hidraw1` is still permission-denied
-  - raw `makeCredential` still shows no card prompt, so the hang is before the firmware confirmation UI
-  - hidraw inspection confirms `/dev/hidraw0` is the real FIDO interface and `/dev/hidraw1` is a separate vendor HID interface
-  - manual CTAPHID `INIT` written directly to `/dev/hidraw0` gets no reply at all within `3s`
-  - rerunning `webauthn_local_demo.py` inside `k_proxy` also shows no card prompt on register
-  - next step is to recover the USB/Qubes transport path before retrying direct auth
-  - after a full power cycle and reattach, manual CTAPHID `INIT` replies again and `webauthn_local_demo.py` registration succeeds again
-  - direct `raw_ctap_probe.py --device-path /dev/hidraw0 make-credential --rp-id localhost` also succeeds again after pressing `yes` on the card
-  - `k_proxy_app.py --auth-mode fido2-direct` was patched to use low-level CTAP2 and to auto-detect the working `/dev/hidraw*` node when the card re-enumerates
-  - after additional fixes for hidraw lifetime, VM-side `python-fido2` response mapping, and CTAP payload shape, `/enroll/register` now succeeds again for `directtest`
-  - `/session/login` for `directtest` now also succeeds after card confirmation and returns `auth_mode: "fido2_assertion"`
-  - `/session/status` succeeds
-  - protected `/resource/counter` succeeds again through `k_proxy -> k_server`
-  - `/session/logout` succeeds
-  - post-logout protected access returns `401`
-  - temporary direct-mode hidraw lifetime logging was removed again after diagnosis
-- `phase5_chain_regression.sh` now supports card-interactive direct auth via `--interactive-card --expect-auth-mode fido2_assertion`
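The runbook's ownership notes describe `k_proxy`'s session model concretely: an in-memory store, an opaque token from `secrets.token_urlsafe(32)`, per-session `username` and `expires_at` fields, TTL expiry, and a single process-local lock under `ThreadingHTTPServer`. A minimal sketch of that shape, as an illustration only (not the actual `k_proxy_app.py` implementation):

```python
# Minimal sketch of the session model described in the runbook: in-memory
# store, opaque bearer token, TTL expiry, one process-local lock. Restarting
# the process loses all sessions, exactly as the runbook warns.
import secrets
import threading
import time

class SessionStore:
    def __init__(self, ttl_seconds: float = 300.0):  # mirrors --session-ttl 300
        self._ttl = ttl_seconds
        self._sessions = {}  # token -> {"username", "expires_at"}
        self._lock = threading.Lock()

    def login(self, username: str) -> str:
        token = secrets.token_urlsafe(32)  # opaque token, as documented
        with self._lock:
            self._sessions[token] = {
                "username": username,
                "expires_at": time.monotonic() + self._ttl,
            }
        return token

    def validate(self, token: str):
        """Return the username for a live session, or None if unknown/expired."""
        with self._lock:
            session = self._sessions.get(token)
            if session is None or session["expires_at"] < time.monotonic():
                self._sessions.pop(token, None)  # lazily drop expired entries
                return None
            return session["username"]

    def logout(self, token: str) -> None:
        with self._lock:
            self._sessions.pop(token, None)
```

Holding one lock around all map mutations is what makes session CRUD race-free within a single threaded process, which is exactly the scope the runbook claims and no more.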
Setup.md
@@ -1,6 +1,6 @@
 # Setup
 
-Last updated: 2026-04-25
+Last updated: 2026-04-24
 
 This is a living setup/status file for the local ChromeCard workspace at `/home/user/chromecard`.
 Update this file whenever environment status or verified behavior changes.
@@ -133,10 +133,6 @@ Thread-safety expectation:
 - `/home/user/chromecard/k_proxy_app.py`
 - `/home/user/chromecard/k_server_app.py`
 - `/home/user/chromecard/PHASE5_RUNBOOK.md`
-- Remote VM access is now available via SSH/SCP aliases:
-  - command execution: `ssh <host> <cmd>`
-  - file copy to VM home: `scp <file> <host>:~`
-  - validated hosts: `k_client`, `k_proxy`, `k_server`
 - `west` is not currently installed/in PATH: `west not found`.
 - The checked-out `CR_SDK_CK-main` tree appears incomplete for documented sysbuild role layout:
   - missing: `mvp`, `setup`, `components`, `samples`
@ -160,457 +156,6 @@ Session note (2026-04-24):
|
||||||
- Local WebAuthn demo completed successfully for user `alice` (register + login).
|
- Local WebAuthn demo completed successfully for user `alice` (register + login).
|
||||||
- Phase 5 starter implementation added with session TTL, logout/invalidation, and proxy->server protected counter forwarding.
|
- Phase 5 starter implementation added with session TTL, logout/invalidation, and proxy->server protected counter forwarding.
|
||||||
|
|
||||||
Session note (2026-04-24, doc maintenance):
|
|
||||||
- Top-level Markdown files were re-scanned: `PHASE5_RUNBOOK.md`, `Setup.md`, `Workplan.md`.
|
|
||||||
- `PHASE5_RUNBOOK.md` remains consistent with the current Phase 5 prototype paths and flow.
|
|
||||||
- No plan/setup drift was found requiring behavioral changes; docs remain aligned.
|
|
||||||
- SSH-based VM operation was validated for `k_client`, `k_proxy`, `k_server` (Debian `13.4` confirmed remotely).
|
|
||||||
- SCP file transfer to `k_proxy` home directory was validated with read-back.
|
|
||||||
|
|
||||||
Session note (2026-04-24, remote flow diagnostics):
|
|
||||||
- VM script staging gap found: `/home/user/chromecard/k_proxy_app.py`, `k_server_app.py`, and helper files were missing on AppVMs and were copied via `scp`.
|
|
||||||
- Services were started in VMs and verified locally:
|
|
||||||
- `k_proxy` local health OK on `127.0.0.1:8770` and `127.0.0.1:8771`
|
|
||||||
- `k_server` local health OK on `127.0.0.1:8780`
|
|
||||||
- Verified VM IPs during this run:
|
|
||||||
- `k_proxy`: `10.137.0.12`
|
|
||||||
- `k_server`: `10.137.0.13`
|
|
||||||
- `k_client`: `10.137.0.16`
|
|
||||||
- Current chain failure is network pathing/firewall:
|
|
||||||
- `k_client -> k_proxy` (`10.137.0.12:8771`) times out.
|
|
||||||
- `k_proxy -> k_server` (`10.137.0.13:8780`) times out.
|
|
||||||
- Proxy returns upstream error payload: `server unavailable: timed out`.
|
|
||||||
|
|
||||||
Session note (2026-04-24, markdown re-scan):
|
|
||||||
- Re-read top-level workspace Markdown files: `Setup.md`, `Workplan.md`, `PHASE5_RUNBOOK.md`.
|
|
||||||
- Re-skimmed source-tree reference docs in `CR_SDK_CK-main`, including `BUILD.md`, `README.md`, `README_HOST.md`, `RELEASE.md`, and `distribute_bundle.md`.
|
|
||||||
- Current workspace docs remain aligned with the verified execution record.
|
|
||||||
- Source-tree doc drift remains unchanged:
|
|
||||||
- `README_HOST.md` still points to `./scripts/fido2_probe.py` and `./scripts/webauthn_local_demo.py`.
|
|
||||||
- Active workspace policy continues to treat those paths as historical; maintained helper paths remain `/home/user/chromecard/fido2_probe.py` and `/home/user/chromecard/webauthn_local_demo.py`.
|
|
||||||
- Source-tree build docs continue to describe a full SDK layout with `mvp`, `setup`, `components`, and `samples`, which is still not present in the current local checkout snapshot.
|
|
||||||
|
|
||||||
Session note (2026-04-24, policy retry):
|
|
||||||
- Markdown re-scan was retried after local policy changes.
|
|
||||||
- Re-running the workspace doc scan with a non-login shell completed cleanly, without the earlier SSH/socat startup noise in command output.
|
|
||||||
|
|
||||||
Session note (2026-04-24, chain probe retry):
|
|
||||||
- Re-probed the Qubes access path for `k_client -> k_proxy -> k_server`.
|
|
||||||
- Local forwarded SSH listener ports still exist on the host:
|
|
||||||
- `0.0.0.0:2222` -> `qrexec-client-vm 'k_client' qubes.ConnectTCP+22`
|
|
||||||
- `0.0.0.0:2223` -> `qrexec-client-vm 'k_proxy' qubes.ConnectTCP+22`
|
|
||||||
- `0.0.0.0:2224` -> `qrexec-client-vm 'k_server' qubes.ConnectTCP+22`
|
|
||||||
- These forwarded SSH ports currently fail immediately:
|
|
||||||
- `ssh k_client` / `ssh k_proxy` / `ssh k_server` close immediately on localhost forwarded ports.
|
|
||||||
- Direct `qrexec-client-vm <target> qubes.ConnectTCP+22` returns `Request refused`.
|
|
||||||
- Chain ports are currently blocked at the same qrexec layer:
|
|
||||||
- `qrexec-client-vm k_proxy qubes.ConnectTCP+8770` -> `Request refused`
|
|
||||||
- `qrexec-client-vm k_server qubes.ConnectTCP+8780` -> `Request refused`
|
|
||||||
- This means the current blocker is active qrexec policy/service refusal for `qubes.ConnectTCP`, not the Python service code in `k_proxy_app.py` or `k_server_app.py`.
|
|
||||||
- Separate SSH config issue remains on the host:
|
|
||||||
- `/etc/ssh/ssh_config.d/20-systemd-ssh-proxy.conf` is still owned `root:root` but mode `777`, which causes OpenSSH to reject it as insecure on the normal login-shell path.
|
|
||||||
|
|
||||||
Session note (2026-04-25, post-restart probe):
|
|
||||||
- Correct client-facing proxy port is `8771` for the current split-VM chain checks.
|
|
||||||
- SSH to `k_proxy` is working again.
|
|
||||||
- `k_proxy` card visibility is restored after VM restart and card reconnect:
|
|
||||||
- `/dev/hidraw0` and `/dev/hidraw1` are present in `k_proxy`
|
|
||||||
- Current service state after restart:
|
|
||||||
- `k_proxy` has no listener on `127.0.0.1:8771`
|
|
||||||
- `k_server` has no listener on `127.0.0.1:8780`
|
|
||||||
- Current qrexec chain state after restart:
|
|
||||||
- `qrexec-client-vm k_proxy qubes.ConnectTCP+8771` -> `Request refused`
|
|
||||||
- `qrexec-client-vm k_server qubes.ConnectTCP+8780` -> `Request refused`
|
|
||||||
- Practical meaning:
|
|
||||||
- SSH and card attachment recovered
|
|
||||||
- phase-5 app services are not currently running in the VMs
|
|
||||||
- qrexec forwarding for the chain ports is still being refused
|
|
||||||
|
|
||||||
Session note (2026-04-25, service restart):
|
|
||||||
- `k_server_app.py` was restarted successfully in `k_server`:
|
|
||||||
- PID `1320`
|
|
||||||
- listening on `127.0.0.1:8780`
|
|
||||||
- `/health` returns `{"ok": true, "service": "k_server", ...}`
|
|
||||||
- `k_proxy_app.py` was restarted successfully in `k_proxy`:
|
|
||||||
- PID `2774`
|
|
||||||
- listening on `127.0.0.1:8771`
|
|
||||||
- `/health` returns `{"ok": true, "service": "k_proxy", "active_sessions": 0, ...}`
|
|
||||||
- Despite local service recovery, qrexec forwarding is still denied:
|
|
||||||
- `qrexec-client-vm k_proxy qubes.ConnectTCP+8771` -> `Request refused`
|
|
||||||
- `qrexec-client-vm k_server qubes.ConnectTCP+8780` -> `Request refused`
|
|
||||||
|
|
||||||
Session note (2026-04-25, markdown refresh):
|
|
||||||
- Re-read the active workspace markdown files:
|
|
||||||
- `Setup.md`
|
|
||||||
- `Workplan.md`
|
|
||||||
- `PHASE5_RUNBOOK.md`
|
|
||||||
- Corrected the Phase 5 runbook to distinguish the old same-VM quickstart from the current split-VM chain usage.
|
|
||||||
- Current documented client-facing proxy port for split-VM tests is `8771`.
|
|
||||||
- Current documented blocker remains unchanged:
|
|
||||||
- local service health inside `k_proxy` and `k_server` is good
|
|
||||||
- inter-VM forwarding via `qubes.ConnectTCP` is still refused
|
|
||||||
|
|
||||||
Session note (2026-04-25, Phase 2 HTTPS bring-up):
|
|
||||||
- Added direct TLS support to:
|
|
||||||
- `/home/user/chromecard/k_proxy_app.py`
|
|
||||||
- `/home/user/chromecard/k_server_app.py`
|
|
||||||
- Added local certificate generator:
|
|
||||||
- `/home/user/chromecard/generate_phase2_certs.py`
|
|
||||||
- Generated local CA and service certs at:
|
|
||||||
- `/home/user/chromecard/tls/phase2/ca.crt`
|
|
||||||
- `/home/user/chromecard/tls/phase2/k_proxy.crt`
|
|
||||||
- `/home/user/chromecard/tls/phase2/k_server.crt`
|
|
||||||
- Certificate generation was corrected to include subject key identifier and authority key identifier so Python TLS verification succeeds.
|
|
||||||
- Current validated HTTPS shape is Qubes-localhost forwarding, not raw VM-IP routing:
|
|
||||||
- in `k_client`: `qvm-connect-tcp 9771:k_proxy:8771`
|
|
||||||
- in `k_proxy`: `qvm-connect-tcp 9780:k_server:8780`
|
|
||||||
- `k_proxy` listens on `https://127.0.0.1:8771`
|
|
||||||
- `k_server` listens on `https://127.0.0.1:8780`
|
|
||||||
- `k_proxy` upstream is `https://127.0.0.1:9780`
|
|
||||||
- Verified HTTPS checks:
|
|
||||||
- `k_client -> k_proxy` `/health` over TLS succeeds with `--cacert /home/user/chromecard/tls/phase2/ca.crt`
|
|
||||||
- `k_proxy -> k_server` `/health` and `/resource/counter` over TLS succeed through the `9780` forwarder
|
|
||||||
- end-to-end `k_client -> k_proxy -> k_server` login + session reuse succeeded over HTTPS
|
|
||||||
- End-to-end verified results:
|
|
||||||
- login returned `ok=true` for `alice`
|
|
||||||
- first protected counter call returned value `1`
|
|
||||||
- second protected counter call returned value `2`
|
|
||||||
- session status remained valid after reuse
|
|
||||||
|
|
||||||
Session note (2026-04-25, Phase 2.5 ownership and concurrency):
|
|
||||||
- Current prototype state ownership is now explicit:
|
|
||||||
- `k_proxy` is authoritative for session state
|
|
||||||
- `k_server` is authoritative for protected resource state
|
|
||||||
- `k_client` is not authoritative for either session validity or counter/resource state
|
|
||||||
- Current session model in `k_proxy`:
|
|
||||||
- server-side in-memory session store only
|
|
||||||
- opaque bearer token generated by `secrets.token_urlsafe(32)`
|
|
||||||
- per-session fields are `username` and `expires_at`
|
|
||||||
- expiry is enforced in `k_proxy`; `k_server` does not validate client sessions directly
|
|
||||||
- Current resource model in `k_server`:
|
|
||||||
- in-memory monotonic counter guarded by a lock
|
|
||||||
- access allowed only when request arrives from `k_proxy` with the expected `X-Proxy-Token`
|
|
||||||
- Current concurrency model in code:
|
|
||||||
- both services use `ThreadingHTTPServer`
|
|
||||||
- `k_proxy` protects session-map mutations and garbage collection with a single lock
|
|
||||||
- `k_server` protects counter increments with a single lock
|
|
||||||
- TLS verification and upstream fetches happen outside the session lock in `k_proxy`
|
|
||||||
- Current runtime assumptions and limits:
|
|
||||||
- Qubes localhost forwarders are treated as transport plumbing, not as state authorities
|
|
||||||
- if `k_proxy` restarts, in-memory sessions are lost
|
|
||||||
- if `k_server` restarts, the in-memory counter resets
|
|
||||||
- the current shared `X-Proxy-Token` is a prototype trust mechanism, not a final authorization design
|
|
||||||
- Practical meaning:
|
|
||||||
- race-free behavior is currently defined for session CRUD and counter increments inside one process per VM
|
|
||||||
- persistence, distributed session authority, and multi-proxy/multi-server coordination are not implemented yet
|
|
||||||
|
|
||||||
Session note (2026-04-25, Phase 6 client portal prototype):
|
|
||||||
- Added browser-facing client process:
|
|
||||||
- `/home/user/chromecard/k_client_portal.py`
|
|
||||||
- Current Phase 6 prototype shape:
|
|
||||||
- portal runs in `k_client` on `http://127.0.0.1:8766`
|
|
||||||
- portal keeps local enrolled username state in `k_client`
|
|
||||||
- portal calls `k_proxy` over the validated TLS forward `https://127.0.0.1:9771`
|
|
||||||
- Current local enrollment model:
|
|
||||||
- enrollment is a client-local username selection stored by the portal
|
|
||||||
- no dedicated server-side enrollment API exists yet
|
|
||||||
- Verified portal API flow in `k_client`:
|
|
||||||
- `GET /health` returns `ok=true`
|
|
||||||
- `POST /api/enroll` with `alice` succeeds
|
|
||||||
- `POST /api/login` succeeds and returns a proxy session token
|
|
||||||
- `POST /api/status` succeeds
|
|
||||||
- `POST /api/resource/counter` succeeds twice with upstream values `3` and `4`
|
|
||||||
- `POST /api/logout` succeeds
|
|
||||||
- Current implication:
|
|
||||||
- `k_client` now has a concrete client-side process instead of only runbook curls
|
|
||||||
- browser-facing flow is now available through the local portal
|
|
||||||
- next hardening step is to replace client-local enrollment with the intended enrollment contract and decide whether browser traffic should eventually talk to `k_proxy` directly or continue through a local client portal
|
|
||||||
|
|
||||||
Session note (2026-04-25, Phase 6 enrollment contract):
|
|
||||||
- Added proxy-side enrollment API and storage:
|
|
||||||
- `POST /enroll/register`
|
|
||||||
- `GET /enroll/status?username=<name>`
|
|
||||||
- persisted prototype store at `/home/user/chromecard/k_proxy_enrollments.json` in `k_proxy`
|
|
||||||
- Current enrollment authority is now `k_proxy`, not the `k_client` portal.
|
|
||||||
- Current portal behavior:
|
|
||||||
- portal enrollment calls `k_proxy` over TLS
|
|
||||||
- portal keeps only a preferred local username for convenience
|
|
||||||
- portal login now depends on proxy-side enrollment existing
|
|
||||||
- Verified behavior:
|
|
||||||
- direct proxy login for unenrolled `bob` returns `{"ok": false, "error": "user not enrolled", ...}`
|
|
||||||
- portal enrollment of `alice` succeeds and persists in proxy-side enrollment storage
|
|
||||||
- proxy enrollment status for `alice` returns `ok=true`
|
|
||||||
- portal login and protected counter access still succeed after enrollment
|
|
||||||
- Practical meaning:
|
|
||||||
- Phase 6 now has a real `k_client -> k_proxy` enrollment request path
|
|
||||||
- the remaining gap is not basic routing; it is deciding the final enrollment semantics and whether the browser should stay behind a local portal or talk to `k_proxy` directly
|
|
||||||
|
|
||||||
Session note (2026-04-25, browser target moved to k_proxy):
|
|
||||||
- `k_proxy` now serves the browser-facing portal UI directly on `/` over `https://127.0.0.1:9771`.
|
|
||||||
- `k_client_portal.py` is now a temporary bridge page:
|
|
||||||
- it points users to `https://127.0.0.1:9771/`
|
|
||||||
- it is no longer the primary browser target
|
|
||||||
- Verified direct browser/API target behavior from `k_client`:
|
|
||||||
- `GET https://127.0.0.1:9771/` returns the proxy portal HTML
|
|
||||||
- `GET https://127.0.0.1:9771/health` returns `ok=true`
|
|
||||||
- direct `POST /enroll/register` for `carol` succeeds
|
|
||||||
- direct `POST /session/login` for `carol` succeeds
|
|
||||||
- Current implication:
|
|
||||||
- browser traffic is now intended to go straight to `k_proxy`
|
|
||||||
- the `k_client` portal remains only as a temporary bridge/compatibility layer
|
|
||||||
|
|
||||||
Session note (2026-04-25, k_client browser flow page):

- `k_client_portal.py` now also serves a local browser demo page again on `http://127.0.0.1:8766` inside `k_client`.
- The page is useful as an operator/demo surface:
  - register user
  - login with card approval or denial in `k_proxy`
  - call the protected `k_server` counter
  - logout
- The page now also exposes current proxy enrollment state:
  - shows the registered users visible in `k_proxy`
  - lets the operator select a listed user into the username field
  - lets the operator unregister users from the browser page
  - login now uses the current username field instead of only the portal's last remembered user
- Added a browser regression harness for the `k_client` page:
  - `/home/user/chromecard/tests/k_client_portal.spec.js`
  - `/home/user/chromecard/playwright.config.js`
  - `/home/user/chromecard/package.json`
  - intended flow: register, login, call `k_server`, logout, unregister
  - verified passing live on 2026-04-25 from this host via the forwarded portal URL:
    - `PORTAL_BASE_URL=http://127.0.0.1:18766 npm run test:k-client`
- It also makes the negative path explicit:
  - if login is denied on the card, the page reports that `k_server` was not called
- Primary browser-facing app logic still lives on `k_proxy`, but the `k_client` page is now a concrete demo/control surface rather than just a redirect.

Session note (2026-04-25, provisional enrollment hardening):

- The enrollment contract in `k_proxy` is now explicit but provisional.
- Current prototype enrollment rules:
  - usernames are canonicalized to lowercase
  - allowed username pattern is `3-32` chars using lowercase letters, digits, `.`, `_`, `-`
  - optional `display_name` is allowed up to `64` chars
  - enrollment create is create-only and duplicate create returns `user already enrolled`
  - enrollment update is a separate operation
  - enrollment delete is a separate operation and removes any active sessions for that username
- Current enrollment endpoints on `k_proxy`:
  - `POST /enroll/register`
  - `GET /enroll/status?username=<name>`
  - `POST /enroll/update`
  - `POST /enroll/delete`
  - `GET /enroll/list`
- Verified behavior from `k_client` against `https://127.0.0.1:9771`:
  - invalid username `A!` is rejected
  - create for `dave` with `display_name` succeeds
  - duplicate create for `dave` is rejected
  - update for `dave` succeeds
  - list returns enrolled users and metadata
  - delete for `dave` succeeds
  - login for deleted `dave` fails with `user not enrolled`
- Deliberate current limit:
  - enrollment itself still does not require card presence; only login does
  - this was kept lightweight because the enrollment semantics are expected to change later

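The enrollment rules above can be sketched as a small validator. This is an illustrative sketch under the stated rules, not the actual `k_proxy` code; the names `validate_enrollment` and `USERNAME_RE` are hypothetical:

```python
import re
from typing import Optional

# Rules from this runbook: lowercase canonicalization, 3-32 chars of
# [a-z0-9._-], optional display_name up to 64 chars.
USERNAME_RE = re.compile(r"^[a-z0-9._-]{3,32}$")
MAX_DISPLAY_NAME = 64

def validate_enrollment(raw_username: str, display_name: Optional[str] = None) -> str:
    """Return the canonical username, or raise ValueError on a rule violation."""
    username = raw_username.strip().lower()
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    if display_name is not None and len(display_name) > MAX_DISPLAY_NAME:
        raise ValueError("display_name too long")
    return username
```

With these rules, `Dave` canonicalizes to `dave`, while `A!` fails the pattern even after lowercasing, matching the verified rejections above.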
Session note (2026-04-25, Phase 6.5 concurrency probe):

- Added a reproducible concurrency probe:
  - `/home/user/chromecard/phase65_concurrency_probe.py`
  - the probe now supports `--max-workers` so client-side fan-out can be swept explicitly
- Successful baseline run from `k_client` against the direct proxy path:
  - `3` users
  - `4` protected requests per user
  - `12/12` requests succeeded
  - counter values were unique and contiguous from `6` to `17`
  - max observed latency was about `457 ms`
- A larger follow-up run exposed the current limit:
  - `5` users
  - `5` protected requests per user
  - `18/25` requests succeeded
  - failures returned TLS EOF / upstream-unavailable errors
  - successful counter values were still unique and contiguous from `18` to `35`
  - max observed latency was about `758 ms`
- Additional Phase 6.5 diagnosis:
  - fixed a keep-alive/body-drain bug in the HTTP/1.1 experiment so `k_server` no longer misparses follow-on requests as `{}POST`
  - added an upstream connection pool in `k_proxy`; the current default/test setting clamps `k_proxy -> k_server` to one pooled TLS connection
  - despite that change, a full fan-out run with `25` in-flight protected calls still fails on client-observed TLS EOFs
  - a worker-limited run now passes cleanly:
    - `5` users
    - `5` protected requests per user
    - `25/25` requests succeeded with `--max-workers 10`
  - raising client-side fan-out still breaks:
    - `22/25` requests succeeded with `--max-workers 15`
    - `15/25` requests succeeded with fully unbounded `25` workers in the latest rerun
- Current diagnosis:
  - the protected counter and session logic stay correct under load; successful values remain unique and contiguous
  - `k_proxy` and `k_server` can complete the requests that actually reach them
  - the primary collapse point in current testing is the client-facing Qubes forwarder on `9771`
  - `qvm_connect_9771.log` shows `qrexec-agent-data` / data-vchan failures and repeated `xs_transaction_start: No space left on device`
  - `qvm_connect_9780.log` also showed earlier qrexec failures, but the latest worker-threshold evidence points first to connection fan-out on `k_client -> k_proxy`
- Practical meaning:
  - the application logic is good for moderate concurrent use in the current prototype
  - the direct browser path appears stable around `10` in-flight protected calls in the current Qubes setup
  - the current concurrency ceiling is set by Qubes forwarding behavior rather than by the monotonic counter logic

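The uniqueness/contiguity acceptance check used in these runs can be expressed in a few lines. This is assumed logic matching the results described above, not the probe's actual code:

```python
def counters_ok(values):
    """True when counter values are unique and contiguous (no repeats, no gaps)."""
    if not values:
        return False
    unique = set(values)
    if len(unique) != len(values):
        return False  # a duplicate means two requests observed the same counter value
    # For a duplicate-free set, contiguity means the span equals the count.
    return max(unique) - min(unique) + 1 == len(values)
```

For example, the `12/12` baseline run's values `6..17` pass this check, while any gap or repeat in the successful subset would fail it.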
Session note (2026-04-25, in-VM forwarding test):

- Tested the intended in-VM forwarding path with `qvm-connect-tcp` instead of host-side `qrexec-client-vm`.
- Forwarders start and bind locally:
  - in `k_client`: `qvm-connect-tcp 8771:k_proxy:8771` binds `localhost:8771`
  - in `k_proxy`: `qvm-connect-tcp 8780:k_server:8780` binds `localhost:8780`
- But the actual client->proxy connection is still refused when used:
  - the `k_client` forward log shows `Request refused`
  - `socat` reports child exit status `126` and `Connection reset by peer`
- Local login on `k_proxy` reaches the app but fails on the auth dependency:
  - `POST /session/login` to `http://127.0.0.1:8771` returns `401`
  - details: `Missing dependency: python-fido2 ... No module named 'fido2'`
- `k_server` was not reached during this login test; the current `k_server.log` only shows `/health`.

Session note (2026-04-25, after python3-fido2 install):

- `k_proxy` was restarted after the `python3-fido2` installation and now listens again on `127.0.0.1:8771`.
- The previous Python import blocker is resolved; local login now reaches the CTAP probe path.
- Current local login result on `k_proxy`:
  - `{"ok": false, "error": "card auth failed", "details": "No CTAP HID devices found."}`
- Current forwarded login from `k_client` is still not completing:
  - `curl http://127.0.0.1:8771/session/login` -> `Empty reply from server`
  - `qvm_connect_8771.log` still shows repeated `Request refused` and child exit status `126`
- Practical meaning:
  - the Python dependency issue in `k_proxy` is fixed
  - card access inside `k_proxy` is currently missing again at the CTAP/HID level
  - `k_client -> k_proxy` qrexec forwarding is still effectively denied/refused

Session note (2026-04-25, card reattached):

- Card visibility in `k_proxy` is restored again:
  - `/dev/hidraw0` and `/dev/hidraw1` are present
  - `fido2_probe.py --list` detects ChromeCard on `/dev/hidraw0`
- Local login on `k_proxy` now succeeds again:
  - `POST /session/login` on `127.0.0.1:8771` returns `200`
  - session creation for user `alice` succeeded
- The remaining failure is isolated to the client-facing qrexec path:
  - `k_client` -> `localhost:8771` through `qvm-connect-tcp` still returns `Empty reply from server`
  - `qvm_connect_8771.log` still shows `Request refused`

Session note (2026-04-25, clean forward retest):

- Re-ran both forwards and exercised each hop immediately after local bind.
- `k_proxy -> k_server`:
  - `qvm-connect-tcp 8780:k_server:8780` binds `localhost:8780` in `k_proxy`
  - the first real `POST /resource/counter` through that forward returns `Empty reply from server`
  - `qvm_connect_8780.log` then records `Request refused` with child exit status `126`
- `k_client -> k_proxy`:
  - `qvm-connect-tcp 8771:k_proxy:8771` binds `localhost:8771` in `k_client`
  - the first real `POST /session/login` through that forward returns `Empty reply from server`
  - `qvm_connect_8771.log` records `Request refused` with child exit status `126`
- Conclusion from this retest:
  - both forwards fail in the same way
  - local bind succeeds, but the actual qrexec `qubes.ConnectTCP` request is refused when the first connection is attempted

Session note (2026-04-25, dom0 policy fix validated):

- After changing the dom0 policy to use explicit destination VMs instead of `@default` for `qubes.ConnectTCP`, both forwards now work.
- Verified hop 1:
  - in `k_proxy`, `POST http://127.0.0.1:8780/resource/counter` with `X-Proxy-Token: dev-proxy-token` succeeds
  - the response included counter value `1`
- Verified hop 2:
  - in `k_client`, `POST http://127.0.0.1:8771/session/login` succeeds
  - a session token is returned through the `k_client -> k_proxy` forward
- Verified full end-to-end flow from `k_client`:
  - login succeeded and returned a session token
  - `POST /session/status` succeeded
  - `POST /resource/counter` succeeded twice with upstream values `2` and `3`
  - `POST /session/logout` succeeded
  - post-logout `POST /resource/counter` correctly returned `401 invalid or expired session`
- Current conclusion:
  - the `k_client -> k_proxy -> k_server` chain is operational
  - session reuse and logout behavior are working in the current prototype

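The dom0 change described above has roughly this shape in the Qubes 4.x `/etc/qubes/policy.d/` format. This is a sketch: the policy file name is hypothetical, and only the move from `@default` to explicit destination VMs is from this note:

```
# /etc/qubes/policy.d/30-chromecard.policy   (hypothetical file name)
# Explicit destinations instead of @default, per the fix above:
qubes.ConnectTCP  +8771  k_client  k_proxy   allow
qubes.ConnectTCP  +8780  k_proxy   k_server  allow
```

With `@default` as the destination, the forwarded `qubes.ConnectTCP` call has no matching explicit rule, which is consistent with the `Request refused` results logged in the earlier retests.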
Session note (2026-04-25, live chain re-validation and regression helper):

- Re-validated the split-VM chain after restart using the current TLS/localhost-forward shape:
  - `k_client` local `9771` -> `k_proxy:8771`
  - `k_proxy` local `9780` -> `k_server:8780`
- Verified live service state during this run:
  - `k_server` local `https://127.0.0.1:8780/health` returned `ok=true`
  - `k_proxy` local `https://127.0.0.1:8771/health` returned `ok=true`
  - `k_proxy` local `https://127.0.0.1:9780/health` reached `k_server`
  - `k_client` local `https://127.0.0.1:9771/health` reached `k_proxy`
- Verified end-to-end behavior from `k_client`:
  - login for `alice` succeeded
  - session status succeeded
  - protected counter calls succeeded with session reuse
  - logout succeeded
  - post-logout protected access returned `401 invalid or expired session`
- Added a reproducible regression helper at:
  - `/home/user/chromecard/phase5_chain_regression.sh`
- Verified the new helper end-to-end on 2026-04-25:
  - the default run uses `20` requests at parallelism `8`
  - returned values were unique and gap-free
  - the latest verified counter range from the helper was `43..62`
- Practical meaning:
  - the current blocker is no longer Qubes forwarding for the base Phase 5 chain
  - the current next-step gap is auth semantics, not transport bring-up

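The unique, gap-free counter values the helper checks for follow from serializing increments behind one lock. A minimal sketch of that property, not the actual `k_server` code, mimicking 20 concurrent protected requests at parallelism 8:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class MonotonicCounter:
    """Process-local counter; the lock makes concurrent increments unique and gap-free."""
    def __init__(self, start: int = 0):
        self._lock = threading.Lock()
        self._value = start

    def next(self) -> int:
        with self._lock:
            self._value += 1
            return self._value  # returned inside the lock, so no two callers share a value

# 20 "requests" fanned out across 8 workers, like the helper's default run.
counter = MonotonicCounter()
with ThreadPoolExecutor(max_workers=8) as pool:
    values = list(pool.map(lambda _: counter.next(), range(20)))
```

Sorting `values` yields `1..20` with no repeats or gaps, which is the same acceptance criterion the helper applies to its `43..62` range.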
Session note (2026-04-25, direct FIDO2 auth attempt):

- Added an experimental direct FIDO2 path in `/home/user/chromecard/k_proxy_app.py`:
  - runtime switch: `--auth-mode fido2-direct`
  - the default runtime remains `probe`
- Added a low-level CTAP helper at `/home/user/chromecard/raw_ctap_probe.py`:
  - purpose: bypass `Fido2Client` and exercise raw CTAP2 `makeCredential` / `getAssertion`
  - logs keepalive callbacks and exact transport exceptions for host-side debugging
- Direct-mode intent:
  - replace the legacy `fido2_probe.py --json` session gate
  - perform real credential registration and real assertion verification locally in `k_proxy` with `python-fido2`
- Current observed blocker on `k_proxy`:
  - direct `make_credential` fails with `No compatible PIN/UV protocols supported!`
  - it reproduces outside the app in a minimal VM-side probe, so this is not just a handler bug
  - the likely cause is the current card / `python-fido2` stack selecting a PIN/UV-dependent CTAP2 path for registration
- Additional probe:
  - a forced CTAP1 fallback experiment did not fail immediately, but also did not complete quickly enough to treat as a usable working path in this turn
- Latest live blocker (2026-04-25, after refactor/deploy):
  - direct probing is currently blocked before the card Yes/No UI stage because `k_proxy` no longer sees any CTAP HID device
  - `ssh k_proxy "python3 /home/user/chromecard/fido2_probe.py --list"` now returns `No CTAP HID devices found.`
  - `ssh k_proxy "ls -l /dev/hidraw*"` shows no `hidraw` nodes at the moment
- Follow-up after card reattach (2026-04-25):
  - `k_proxy` again shows `/dev/hidraw0` and `/dev/hidraw1`
  - a direct node-open check confirms `/dev/hidraw0` is readable as the normal user
  - `/dev/hidraw1` still returns `PermissionError: [Errno 13] Permission denied`
  - the raw `makeCredential` probe still produced no on-card registration prompt, so the host path is hanging before the firmware Yes/No UI
  - hidraw mapping confirms `/dev/hidraw0` is the FIDO interface:
    - the report descriptor begins with usage page `0xF1D0`
    - `get_descriptor('/dev/hidraw0')` returns `report_size_in=64`, `report_size_out=64`
    - `/dev/hidraw1` is a separate vendor HID interface with usage page `0xFF00`
  - stale Python probes holding `/dev/hidraw0` were cleared, but behavior did not change
  - a manual CTAPHID `INIT` packet sent directly to `/dev/hidraw0` writes successfully and still gets no response within `3s`
  - this places the current blocker below `python-fido2`: raw HID traffic is not getting a CTAPHID reply after the latest reattach
  - `webauthn_local_demo.py` was re-run inside `k_proxy` after reattach and still produced no card prompt on register
  - that confirms the current failure is below both the browser WebAuthn path and the direct `python-fido2` path
  - after a full power cycle and reattach, manual CTAPHID `INIT` on `/dev/hidraw0` started replying again
  - `webauthn_local_demo.py` register in `k_proxy` then succeeded again, confirming the card transport was recovered by the power cycle
  - direct host-side registration via `raw_ctap_probe.py --device-path /dev/hidraw0 make-credential --rp-id localhost` also succeeded again after pressing `yes` on the card
  - the returned credential material included:
    - `fmt="none"`
    - credential id `7986cfcf45663f625eb7fc7b52640d83cf3d0e8a6627eeadaba3126406b1e0b8`
  - this confirms the recovered direct path now reaches the real card confirmation UI and completes CTAP2 `makeCredential`
- `k_proxy_app.py --auth-mode fido2-direct` was then patched to:
  - use low-level CTAP2 instead of the higher-level `Fido2Client` registration/assertion calls
  - open the explicit FIDO node `/dev/hidraw0` instead of scanning devices
  - cache the direct device handle instead of reopening it for each operation
- The current remaining blocker was narrowed through repeated retries to a mix of hidraw node disappearance, older `python-fido2` response-mapping requirements, and CTAP payload-shape mismatches.
- Latest verified state:
  - after reattach with healthy CTAPHID `INIT`, real app registration through `k_proxy_app.py --auth-mode fido2-direct` now succeeds
  - `/enroll/register` for `directtest` returned `ok=true` and `has_credential=true`
  - real app login through `/session/login` for `directtest` also now succeeds after card confirmation
  - the returned `auth_mode` is `fido2_assertion`
  - session status succeeds
  - protected `/resource/counter` access succeeds again through `k_proxy -> k_server`
  - logout succeeds
  - post-logout protected access returns `401`
- Direct mode no longer depends on a fixed `/dev/hidraw0` path:
  - after a later re-enumeration where the card appeared on `/dev/hidraw1`, `k_proxy_app.py` was patched to probe available `/dev/hidraw*` nodes and select the first working CTAPHID device automatically
  - browser registration then worked again without changing the configured `--direct-device-path`
  - temporary direct-mode hidraw lifetime logging has been removed again after diagnosis
- `/home/user/chromecard/phase5_chain_regression.sh` now supports the direct-auth baseline via:
  - `--interactive-card`
  - `--login-timeout`
  - `--expect-auth-mode fido2_assertion`
- Practical outcome for this session:
  - the experimental direct mode is kept in code for follow-up work
  - the deployed `k_proxy` service was restored to the default `probe` mode
  - verified that `alice` login still works afterward, so the validated Phase 5 baseline remains intact

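The manual CTAPHID `INIT` probe mentioned above amounts to writing one 64-byte initialization frame to the hidraw node. Per the FIDO CTAPHID framing, it looks like this sketch (the actual hidraw open/write and response read are omitted):

```python
import os
import struct

BROADCAST_CID = 0xFFFFFFFF  # channel used before the card assigns a real CID
CTAPHID_INIT = 0x86         # INIT command value (0x06) with the initialization bit set

def build_ctaphid_init_frame(report_size: int = 64) -> bytes:
    """Build one INIT frame: CID (4 bytes, big-endian) | CMD (1) | BCNT (2) | 8-byte nonce."""
    nonce = os.urandom(8)
    header = struct.pack(">IBH", BROADCAST_CID, CTAPHID_INIT, len(nonce))
    return (header + nonce).ljust(report_size, b"\x00")  # pad to the 64-byte report size
```

A healthy authenticator answers on the broadcast channel with the same nonce plus a newly assigned CID; the `3s` of silence logged above means even this very first exchange was failing until the power cycle.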
## Known FIDO2 Transport Boundary

- FIDO2 on this firmware is handled via USB HID (CTAPHID), not Wi-Fi/BLE/MQTT.

@@ -636,9 +181,6 @@ SUBSYSTEM=="hidraw", ATTRS{idVendor}=="1209", ATTRS{idProduct}=="0005", MODE="06

- `python3 /home/user/chromecard/fido2_probe.py --list`
- Then:
  - `python3 /home/user/chromecard/fido2_probe.py --json`
- For raw CTAP debugging on `k_proxy`:
  - `python3 /home/user/chromecard/raw_ctap_probe.py info`
  - `python3 /home/user/chromecard/raw_ctap_probe.py make-credential --rp-id localhost`

4. Run local WebAuthn bring-up demo.

- `python3 /home/user/chromecard/webauthn_local_demo.py`

@@ -669,16 +211,8 @@ SUBSYSTEM=="hidraw", ATTRS{idVendor}=="1209", ATTRS{idProduct}=="0005", MODE="06

- Whether a full `CR_SDK_CK-main` checkout (with role directories) is available locally.
- Whether server-side code should be pulled now for broader CIP/WebAuthn integration testing.
- Exact Qubes firewall and service binding rules to enforce the `k_client -> k_proxy -> k_server` chain.
- Exact enrollment process interface running in `k_client` and how it reaches `k_proxy`.
- Upgrade the Phase 5 auth gate from card-presence probe to full WebAuthn assertion verification for session creation.
- Determine the viable path for real credential registration on `k_proxy`:
  - enable whatever PIN/UV support the card expects for direct CTAP2 registration, or
  - adopt a different one-time enrollment path that can persist real credential material for later direct assertion verification.
- Restore card visibility inside `k_proxy` so direct probes can reach the card UI again:
  - `/dev/hidraw*` must exist in `k_proxy`
  - `fido2_probe.py --list` must detect the card before the raw Yes/No probe can continue
- Identify why the host probe hangs before card UI even with `/dev/hidraw0` readable:
  - determine why CTAPHID `INIT` on the correct FIDO hidraw node receives no reply after reattach
  - likely recovery targets are the Qubes USB mediation path, a fresh USB reassign, or a `k_proxy` VM/device reset
- Precise ownership split of session/user state between `k_proxy` and `k_server`.
- Concrete concurrency limits and acceptance criteria (requests/sec, parallel clients, latency/error thresholds).

298 Workplan.md

@@ -1,6 +1,6 @@

# Workplan

Last updated: 2026-04-25

This is the execution plan for making ChromeCard FIDO2 development and validation reproducible on this machine.

@@ -10,7 +10,6 @@ This is the execution plan for making ChromeCard FIDO2 development and validatio

- Keep helper scripts such as `fido2_probe.py` and `webauthn_local_demo.py` at `/home/user/chromecard`.
- Target deployment model is Qubes OS with 3 AppVMs based on `debian-13-xfce`: `k_client`, `k_proxy`, `k_server`.
- Current authenticator link is card->`k_proxy` (USB), but the architecture must allow migration to wireless phone-mediated validation.
- VM execution path is SSH-first for experiments: `ssh <host> <cmd>` and `scp <file> <host>:~`.

## Goals

@@ -42,7 +41,7 @@ This is the execution plan for making ChromeCard FIDO2 development and validatio

Exit criteria:

- All 3 VMs exist, boot, and have clearly defined service ownership.

## Phase 1: Qubes Firewall Policy

1. Enforce allowed forward paths only.
- Allow `k_client` outbound TLS only to `k_proxy` service port(s).

@@ -59,33 +58,6 @@ Exit criteria:

Exit criteria:

- Policy matches the intended chain and is test-verified.

Status (2026-04-24, remote diagnostics):

- Confirmed the active blocker remains Phase 1 network policy/pathing.
- Evidence from live VM probes:
  - `k_client (10.137.0.16) -> k_proxy (10.137.0.12:8771)`: TCP timeout.
  - `k_proxy (10.137.0.12) -> k_server (10.137.0.13:8780)`: upstream timeout.
- Local service health inside each VM is good, so the failure is inter-VM reachability, not local process startup.

Status (2026-04-25, after restart and service recovery):

- Refined blocker: this is currently a qrexec/`qubes.ConnectTCP` refusal problem, not an app-local listener problem.
- Current evidence:
  - `k_proxy` local `/health` is up on `127.0.0.1:8771`
  - `k_server` local `/health` is up on `127.0.0.1:8780`
  - `qrexec-client-vm k_proxy qubes.ConnectTCP+8771` -> `Request refused`
  - `qrexec-client-vm k_server qubes.ConnectTCP+8780` -> `Request refused`
- Immediate next action for Phase 1:
  - verify and fix the dom0 policy/mechanism that should permit `qubes.ConnectTCP` forwarding for the chain ports

Status (2026-04-25, dom0 policy fix validated):

- The forwarding blocker is cleared for the current prototype shape.
- Verified working chain:
  - `k_client` localhost `9771` -> `k_proxy:8771`
  - `k_proxy` localhost `9780` -> `k_server:8780`
- Verified outcome:
  - TLS health checks pass on both hops
  - end-to-end login, session status, protected counter access, and logout all succeed from `k_client`
- Phase 1 is complete for the current localhost-forwarded `qubes.ConnectTCP` design.

## Phase 2: TLS Certificates and Service Endpoints

1. Certificate model.

@@ -104,19 +76,6 @@ Exit criteria:

- Mutual TLS trust decisions are documented and tested.
- HTTPS calls succeed on both links with expected cert validation.

Status (2026-04-25):

- Implemented HTTPS listeners in both prototype services.
- Added local CA + service certificate generation in `generate_phase2_certs.py`.
- Verified the working Qubes path is localhost forwarding plus TLS:
  - `k_client` local `9771` forwards to `k_proxy:8771`
  - `k_proxy` local `9780` forwards to `k_server:8780`
- Verified cert validation on both hops using the generated CA.
- Verified the end-to-end HTTPS flow:
  - `k_client -> k_proxy` login over TLS
  - `k_proxy -> k_server` protected counter call over TLS
  - session reuse still works across repeated protected requests
- Phase 2 is now effectively complete for the current prototype shape.

## Phase 2.5: Define State Ownership and Concurrency Model

1. State ownership.

@@ -133,32 +92,6 @@ Status (2026-04-25):

Exit criteria:

- Architecture clearly documents state authority and race-free update rules.

Next action (2026-04-25):

- Move into Phase 2.5 and make the current prototype decisions explicit:
  - authority for session state remains `k_proxy`
  - `k_server` remains the authority for the protected counter/resource state
  - localhost Qubes forwarders are part of the active runtime model for the two TLS hops
  - define concurrency assumptions and limits around the session store, forwarders, and counter access

Status (2026-04-25):

- Current ownership model is now explicit:
  - `k_proxy` is authoritative for session creation, expiry, lookup, and logout
  - `k_server` is authoritative for the protected monotonic counter
  - `k_client` is a client only; it holds bearer tokens but is not a state authority
- Current validation boundary is explicit:
  - `k_proxy` validates bearer tokens against its in-memory session store
  - `k_server` trusts only requests that arrive with the configured `X-Proxy-Token`
  - `k_server` does not currently validate end-user session tokens directly
- Current concurrency strategy is explicit:
  - `k_proxy` uses `ThreadingHTTPServer` plus one lock around the in-memory session map
  - `k_server` uses `ThreadingHTTPServer` plus one lock around counter increments
  - upstream HTTPS calls from `k_proxy` are made outside the session-store lock
- Current runtime limits are explicit:
  - sessions are process-local and disappear on `k_proxy` restart
  - counter state is process-local and resets on `k_server` restart
  - transport relies on Qubes localhost forwarders `9771` and `9780`
- Phase 2.5 is complete for the current prototype shape.

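The session-ownership and locking rules above can be sketched as follows. This is illustrative only, not the actual `k_proxy_app.py` code; the class name and the TTL value are assumptions (the runbook only says sessions are short-lived):

```python
import secrets
import threading
import time

SESSION_TTL_SECONDS = 300.0  # assumed TTL; "short-lived" per the runbook

class SessionStore:
    """Process-local session authority: one lock guards the in-memory map."""
    def __init__(self):
        self._lock = threading.Lock()
        self._sessions = {}  # token -> (username, expires_at)

    def create(self, username: str) -> str:
        token = secrets.token_hex(16)
        with self._lock:
            self._sessions[token] = (username, time.monotonic() + SESSION_TTL_SECONDS)
        return token

    def lookup(self, token: str):
        """Return the username for a valid token, else None (expired entries are purged lazily)."""
        with self._lock:
            entry = self._sessions.get(token)
            if entry is None:
                return None
            username, expires_at = entry
            if time.monotonic() >= expires_at:
                del self._sessions[token]
                return None
            return username

    def logout(self, token: str) -> bool:
        with self._lock:
            return self._sessions.pop(token, None) is not None
```

Upstream HTTPS calls would be made after `lookup` returns and the lock is released, matching the "outside the session-store lock" rule above; because the map is process-local, a `k_proxy` restart drops all sessions, as documented.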
## Phase 3: Recover Basic Device Visibility on `k_proxy` (Blocking)

1. Verify physical + USB enumeration path.

@@ -232,35 +165,6 @@ Status (2026-04-24):

- proxy forwarding from `k_proxy` to `k_server` using a shared upstream token
- Current auth gate for session creation is the card-presence probe (`fido2_probe.py --json`), pending upgrade to the full assertion verification path.

Status (2026-04-25):

- Prototype services were re-started successfully after VM restart.
- Current split-VM test shape is:
  - `k_proxy` listening on `127.0.0.1:8771`
  - `k_server` listening on `127.0.0.1:8780`
- End-to-end validation is now passing through the live chain from `k_client`.
- Current verified behavior:
  - login succeeds for `alice`
  - session status succeeds
  - repeated protected counter requests succeed with session reuse
  - logout succeeds
  - post-logout protected access returns `401`
- Added a repeatable host-side regression helper:
  - `/home/user/chromecard/phase5_chain_regression.sh`
- Phase 5 is complete for the current prototype semantics.
- Experimental follow-up in code:
  - `k_proxy_app.py` now also has `--auth-mode fido2-direct`
  - this mode attempts direct credential registration and direct assertion verification with `python-fido2`
  - it is not the deployed default because direct registration currently fails on `k_proxy` with `No compatible PIN/UV protocols supported!`
  - `/home/user/chromecard/raw_ctap_probe.py` now exists for lower-level CTAP2 probing with keepalive/error logging
  - latest retry result: after reattaching the card, `k_proxy` again exposes `/dev/hidraw0` and `/dev/hidraw1`, but raw `makeCredential` still reaches no Yes/No card prompt
  - `/dev/hidraw0` opens successfully as the normal user; `/dev/hidraw1` is still permission-denied
  - manual CTAPHID testing now shows `/dev/hidraw0` is the correct FIDO interface and a direct `INIT` write gets no response at all
  - rerunning `webauthn_local_demo.py` inside `k_proxy` also still gives no card prompt, so the current break is below both browser WebAuthn and direct host probes
  - after a full power cycle and reattach, manual CTAPHID `INIT` replies again and browser registration in `webauthn_local_demo.py` succeeds again
  - direct `raw_ctap_probe.py --device-path /dev/hidraw0 make-credential --rp-id localhost` now also succeeds again after card confirmation
  - `k_proxy_app.py --auth-mode fido2-direct` has been moved onto low-level CTAP2 with hidraw auto-detection; it still accepts `--direct-device-path`, but no longer breaks if the card re-enumerates onto `/dev/hidraw1`
  - after repeated fixes for hidraw lifetime, VM-side `python-fido2` response mapping, and CTAP payload shape, real app registration now succeeds for `directtest`

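The hidraw auto-detection change noted above is, in outline, a probe over the candidate nodes. This is a sketch, not the actual `k_proxy_app.py` code: the real probe would send a CTAPHID `INIT`, which is stubbed out here as a callable so the selection logic stands alone:

```python
import glob
from typing import Callable, Iterable, Optional

def find_ctaphid_node(probe: Callable[[str], bool],
                      candidates: Optional[Iterable[str]] = None) -> Optional[str]:
    """Return the first /dev/hidraw* node that answers the CTAPHID probe, else None."""
    if candidates is None:
        candidates = sorted(glob.glob("/dev/hidraw*"))
    for path in candidates:
        try:
            if probe(path):
                return path
        except OSError:
            continue  # vanished or permission-denied node: try the next candidate
    return None
```

Selecting by probe result rather than by a fixed path is what lets registration keep working when the card re-enumerates from `/dev/hidraw0` onto `/dev/hidraw1`.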
## Phase 5.5: Implement Dummy Resource + Access Policy on `k_server`

1. Protected dummy resource.

@@ -277,14 +181,6 @@ Exit criteria:

- Authorized requests obtain consistent increasing values.
- Unauthorized requests are rejected.

Status (2026-04-25):

- The protected counter resource is implemented and validated in the live split-VM chain.
- Verified behavior:
  - authorized requests from `k_proxy` obtain increasing values
  - unauthorized post-logout requests from `k_client` are rejected with `401`
  - `20` concurrent protected requests through the chain returned unique, gap-free values
- Phase 5.5 is complete for the current prototype shape.

## Phase 6: Integrate Client Enrollment + Proxy Login Flow
|
## Phase 6: Integrate Client Enrollment + Proxy Login Flow
|
||||||
|
|
||||||
1. Enrollment process in `k_client`.
|
1. Enrollment process in `k_client`.
|
||||||
|
|
@ -297,107 +193,11 @@ Status (2026-04-25):
3. Browser flow in `k_client`.
   - Browser traffic goes only to `k_proxy`.
   - Validate end-to-end login to `k_server` resource through proxy chain.

Immediate next action:

- Preserve the now-working direct auth path and record it as the current baseline.
- Verified end-to-end state:
  - direct `/enroll/register` succeeds for `directtest`
  - direct `/session/login` succeeds for `directtest`
  - `/session/status` succeeds
  - protected `/resource/counter` succeeds through `k_proxy -> k_server`
  - `/session/logout` succeeds
  - post-logout protected access returns `401`
- Next work should be cleanup/hardening:
  - decide whether to keep `directtest` enrollment
  - rerun `phase5_chain_regression.sh --interactive-card --expect-auth-mode fido2_assertion` against the current direct-auth baseline

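The verified login/status/counter/logout/401 sequence above can be sketched as an in-memory session store. This is a stand-in for illustration only: the real `k_proxy` endpoints, token format, and TTL are not shown in this runbook, so the class, token minting, and 300-second window below are assumptions; only the endpoint semantics (reusable session, `401` after logout) come from the text.

```python
# Minimal sketch of the session-reuse semantics verified above, assuming an
# in-memory session store on k_proxy. Card auth is elided; only the token
# lifecycle (login -> reuse -> logout -> 401) is modeled.

import secrets
import time

SESSION_TTL_SECONDS = 300  # assumed short-lived session window


class SessionStore:
    def __init__(self) -> None:
        self._sessions: dict[str, tuple[str, float]] = {}  # token -> (user, expiry)
        self._counter = 0

    def login(self, username: str) -> str:
        """Card approval would gate this; on success a reusable token is minted."""
        token = secrets.token_urlsafe(16)
        self._sessions[token] = (username, time.monotonic() + SESSION_TTL_SECONDS)
        return token

    def _valid(self, token: str) -> bool:
        entry = self._sessions.get(token)
        return entry is not None and entry[1] > time.monotonic()

    def status(self, token: str) -> bool:
        return self._valid(token)

    def counter(self, token: str) -> int:
        """Protected resource: token validation only, no per-request card auth."""
        if not self._valid(token):
            raise PermissionError(401)
        self._counter += 1
        return self._counter

    def logout(self, token: str) -> None:
        self._sessions.pop(token, None)
```

After `logout`, the next `counter` call raises `PermissionError(401)`, mirroring the post-logout `401` behavior recorded above.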
Exit criteria:

- Enrollment and login both function end-to-end via `k_client -> k_proxy -> k_server`.

Status (2026-04-25):

- Added first `k_client` implementation at `/home/user/chromecard/k_client_portal.py`.
- Current prototype flow:
  - browser now targets `k_proxy` directly over `https://127.0.0.1:9771`
  - `k_client_portal.py` also serves a local browser flow page on `http://127.0.0.1:8766`
  - `k_proxy` continues to authenticate with the card and forward to `k_server`
  - the `k_client` page now also lists registered users from `k_proxy`
  - the `k_client` page can unregister users from the browser
  - the portal login action now uses the current username field instead of only the remembered local user
  - a Playwright regression spec now exists for the browser flow in `tests/k_client_portal.spec.js`
  - the Playwright browser regression has now passed end-to-end once from this host against a forwarded portal URL
- Verified end-to-end through the portal:
  - enroll `alice`
  - login succeeds
  - session status succeeds
  - protected counter succeeds repeatedly with session reuse
  - logout succeeds
- Enrollment contract progress:
  - `k_proxy` now exposes prototype enrollment endpoints
  - proxy-side enrollment storage exists and is checked before login is allowed
  - direct browser/API traffic can now use those proxy endpoints without going through the local bridge
- Phase 6 is materially further along for the current prototype shape:
  - direct browser target is on `k_proxy`
  - login/resource flow is integrated on the direct proxy path
  - enrollment now has a real client->proxy path
  - the `k_client` page is now a usable demo/operator surface in addition to the direct proxy path
  - final enrollment semantics are still provisional

Status (2026-04-25, enrollment hardening):

- Added a more explicit provisional enrollment contract in `k_proxy`:
  - username normalization and validation
  - optional `display_name`
  - separate create, update, delete, status, and list operations
  - delete invalidates existing sessions for that username
- Verified the hardened behaviors on the direct proxy path.
- Phase 6 is now strong enough to treat the browser/proxy flow as a stable prototype baseline.
- The remaining reason Phase 6 is not "final" is product semantics, not missing basic mechanics:
  - whether enrollment should require card presence
  - what user attributes belong in enrollment
  - what re-enroll and recovery should mean

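The "username normalization and validation" step in the provisional contract could look roughly like the following. The actual rules in `k_proxy` are not spelled out in this document, so the allowed charset, length bounds, and lowercasing here are all assumptions for illustration.

```python
# Sketch of enrollment username normalization/validation (assumed rules):
# trim whitespace, lowercase, then require a conservative charset and length.

import re

# Assumed policy: 3-32 chars, starts alphanumeric, then alphanumerics . _ -
_USERNAME_RE = re.compile(r"^[a-z0-9][a-z0-9._-]{2,31}$")


def normalize_username(raw: str) -> str:
    """Return the canonical form of a username, or raise ValueError."""
    candidate = raw.strip().lower()
    if not _USERNAME_RE.fullmatch(candidate):
        raise ValueError(f"invalid username: {raw!r}")
    return candidate
```

Normalizing before storage and lookup keeps `DirectTest` and `directtest` from becoming two distinct enrollments, which matters once delete must invalidate all sessions for "that username".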
Status (2026-04-25, Phase 6.5 initial concurrency results):

- Added reproducible probe script at `/home/user/chromecard/phase65_concurrency_probe.py`.
- Probe now supports `--max-workers` so client-side fan-out can be tested separately from total request count.
- Moderate direct-path concurrency passes:
  - `3 users x 4 requests`
  - `12/12` successful protected calls
  - counter values remained unique and contiguous
- Larger direct-path concurrency currently fails:
  - `5 users x 5 requests`
  - only `18/25` successful protected calls
  - failed calls report TLS EOF / upstream unavailable errors
- Follow-up findings are more precise:
  - body-drain handling was fixed for the HTTP/1.1 keep-alive experiment
  - `k_proxy -> k_server` upstream concurrency is now clampable and currently tested at one pooled connection
  - `5 users x 5 requests` passes at `25/25` when client fan-out is limited to `--max-workers 10`
  - the same total load still fails at higher fan-out:
    - `22/25` at `--max-workers 15`
    - `15/25` at fully unbounded `25` workers in the latest rerun
- Current bottleneck is still not counter correctness:
  - successful results still show unique, contiguous counter values
  - `k_proxy` and `k_server` complete the requests that actually arrive
- Current likely bottleneck is the client-facing Qubes forwarding layer:
  - `qvm_connect_9771.log` shows qrexec data-vchan failures
  - observed message includes `xs_transaction_start: No space left on device`
  - `qvm_connect_9780.log` showed earlier failures too, but the latest threshold test points first to connection fan-out on `k_client -> k_proxy`
- Phase 6.5 is therefore started but not complete:
  - application-level concurrency looks acceptable at moderate load
  - current working envelope is roughly `10` in-flight protected calls on the direct browser path
  - higher-load failures still need Qubes forwarding diagnosis before the phase can be closed

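The probe's core check ("unique and contiguous" counter values at a bounded fan-out) can be sketched as below. The real `phase65_concurrency_probe.py` issues HTTPS requests through the chain; the call here is a local stand-in so the fan-out and verification logic are visible on their own.

```python
# Sketch of the Phase 6.5 probe core, assuming it fans out protected calls
# with a bounded ThreadPoolExecutor (the --max-workers knob) and then checks
# that returned counter values are unique and gap-free.

import threading
from concurrent.futures import ThreadPoolExecutor

_lock = threading.Lock()
_counter = 0


def protected_counter_call() -> int:
    """Local stand-in for one authorized /resource/counter request."""
    global _counter
    with _lock:
        _counter += 1
        return _counter


def run_probe(total_requests: int, max_workers: int) -> list[int]:
    """Issue total_requests calls with at most max_workers in flight."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(protected_counter_call) for _ in range(total_requests)]
        return sorted(f.result() for f in futures)


def unique_and_gap_free(values: list[int]) -> bool:
    """True when sorted values form one contiguous run with no duplicates."""
    return values == list(range(values[0], values[0] + len(values)))
```

Separating `total_requests` from `max_workers` is what lets the probe show that `25` requests succeed at fan-out `10` but fail at fan-out `25`, pointing the blame at the forwarding layer rather than the counter.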
Status (2026-04-25, Phase 5 regression helper):

- Added repeatable split-VM regression helper:
  - `/home/user/chromecard/phase5_chain_regression.sh`
- Verified helper result on the live chain:
  - `20` requests at parallelism `8`
  - login/session-status/counter/logout sequence completed successfully
  - returned counter values were unique and gap-free
  - latest verified helper range was `43..62`
- Current implication:
  - the Phase 5 baseline is now reproducible
  - next work should target auth semantics rather than basic chain bring-up

## Phase 6.5: Concurrency and Multi-Client Test Setup

1. Single-VM concurrency tests.

@ -451,79 +251,6 @@ Exit criteria:
- Re-scan relevant `.md` files before each new execution cycle and reconcile drift.
- Record date-stamped session notes when priorities or blockers change.

Status (2026-04-24, markdown maintenance):

- Re-scanned the active workspace Markdown set and the main source-tree reference docs.
- No workplan phase change was required from this pass.
- Ongoing documentation watch item remains path drift in `CR_SDK_CK-main/README_HOST.md`, which still uses historical `./scripts/...` helper locations instead of workspace-root helper paths.
- Operational note: the markdown scan path now runs cleanly after policy adjustment when invoked without a login shell.

Status (2026-04-24, chain probe retry):

- Phase 1 remains blocked, but the failure point is now narrowed further:
  - current refusal occurs at Qubes `qubes.ConnectTCP` policy/service evaluation for ports `22`, `8770`, and `8780`
  - this happens before any end-to-end app-level request can be retried
- Practical implication:
  - do not spend time on `k_proxy_app.py` / `k_server_app.py` request handling until qrexec forwarding is permitting the intended hops again
  - next recovery action is to fix/activate the relevant Qubes `qubes.ConnectTCP` policy and then re-run the qrexec bridge checks before testing HTTP flow

Status (2026-04-25, post-restart probe):

- Corrected the client-facing proxy port reference to `8771`.
- SSH access to `k_proxy` and card visibility recovered after VM restart.
- New immediate blockers are:
  - `k_proxy` service not listening on `127.0.0.1:8771`
  - `k_server` service not listening on `127.0.0.1:8780`
  - qrexec forwarding for `8771` and `8780` still returns `Request refused`
- Next retry should start services first, then re-test qrexec forwarding and only then attempt end-to-end client flow.

Status (2026-04-25, service restart):

- Local VM services are running again on the intended loopback ports:
  - `k_server`: `127.0.0.1:8780`
  - `k_proxy`: `127.0.0.1:8771`
- Phase 1 remains blocked specifically by qrexec policy/forwarding refusal on those ports.
- Next action is no longer app startup; it is fixing the `qubes.ConnectTCP` allow path for `8771` and `8780`.

Status (2026-04-25, in-VM forwarding test):

- Verified that using `qvm-connect-tcp` inside the source VMs still does not complete the client->proxy hop:
  - bind succeeds locally, but first real connection gets `Request refused`
- Independent app-layer blocker also found in `k_proxy`:
  - `python-fido2` is missing there, so local `/session/login` currently fails before card auth can succeed
- Current ordered blockers:
  - first: effective Qubes/qrexec allow path for `k_client -> k_proxy:8771`
  - second: install `python-fido2` in `k_proxy`
  - third: re-test end-to-end login and then proxy->server counter flow

Status (2026-04-25, after python3-fido2 install):

- `python3-fido2` blocker in `k_proxy` is resolved.
- Updated ordered blockers:
  - first: effective Qubes/qrexec allow path for `k_client -> k_proxy:8771`
  - second: restore CTAP HID device visibility/access in `k_proxy` (`No CTAP HID devices found`)
  - third: re-test end-to-end login and then proxy->server counter flow

Status (2026-04-25, card reattached):

- CTAP HID visibility/access in `k_proxy` is restored.
- Local proxy login is working again with the attached card.
- The only currently confirmed blocker for the end-to-end path is the `k_client -> k_proxy:8771` qrexec/`qvm-connect-tcp` refusal.

Status (2026-04-25, clean forward retest):

- The retest shows the same qrexec failure mode on both hops, not just the client-facing one.
- Updated blocker statement:
  - effective `qubes.ConnectTCP` allow path is failing for both
    - `k_client -> k_proxy:8771`
    - `k_proxy -> k_server:8780`
- App services and card path are currently good; forwarding remains the single active system blocker.

Status (2026-04-25, dom0 policy fix validated):

- The explicit-destination dom0 `qubes.ConnectTCP` policy fix resolved forwarding on both hops.
- Current verified working chain:
  - `k_client -> k_proxy:8771`
  - `k_proxy -> k_server:8780`
- Current verified prototype behavior:
  - session login works from `k_client`
  - session status works
  - protected counter flow reaches `k_server`
  - session reuse avoids re-login for repeated counter calls
  - logout invalidates the session and subsequent protected access returns `401`
- Immediate networking blocker is cleared.

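An explicit-destination dom0 policy of the kind described above would look roughly like the sketch below. The file name and exact rule layout are assumptions (the actual fix is not quoted in this document); the general shape follows the Qubes 4.x qrexec policy format, with the port as the service argument and `target=` forcing the destination VM.

```
# /etc/qubes/policy.d/30-chromecard-connect.policy   (assumed file name)
qubes.ConnectTCP +8771  k_client  @default  allow  target=k_proxy
qubes.ConnectTCP +8780  k_proxy   @default  allow  target=k_server
```

Pinning `target=` to the intended VM is what makes these rules "explicit-destination": the caller cannot redirect the forwarded port anywhere else.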
Exit criteria:

- New team member can follow docs end-to-end without path or tooling ambiguity.

@ -547,14 +274,6 @@ Exit criteria:
Exit criteria:

- `k_proxy` can validate via wireless phone path with no client-facing API changes.

## Current Next Step

- Resolve the direct-registration blocker for `--auth-mode fido2-direct` in `k_proxy`.
- Candidate directions:
  - determine whether the current card can support the required PIN/UV path for direct CTAP2 registration from `python-fido2`
  - or provide a different one-time enrollment route that yields persistent real credential material for later direct assertion verification
- Keep the new regression helper as the fast check that transport, session reuse, and counter semantics still hold after each change.

## Inputs Expected During This Session

- Exact observed behavior on reconnect attempts (USB/hidraw/probe).

@ -565,14 +284,3 @@ Exit criteria:
- Decision on where user/session authority lives (`k_proxy` vs `k_server` vs split).
- Target concurrency level for validation (parallel clients and parallel requests per client).
- Preferred wireless transport/protocol between `k_proxy` and phone (for future phase).

## Session Maintenance Notes (2026-04-24)

- Top-level Markdown review completed for `PHASE5_RUNBOOK.md`, `Setup.md`, and `Workplan.md`.
- Current execution plan remains in sync with the Phase 5 runbook:
  - prototype services at `/home/user/chromecard/k_proxy_app.py` and `/home/user/chromecard/k_server_app.py`
  - run sequence documented in `/home/user/chromecard/PHASE5_RUNBOOK.md`
- No phase ordering or blocker changes were required from this review pass.
- Remote execution support is now active and validated:
  - `ssh` command execution works for `k_client`, `k_proxy`, `k_server`
  - `scp` push to VM home works (validated on `k_proxy`)

@ -1,74 +0,0 @@
#!/usr/bin/env python3
"""
Manual CTAPHID INIT probe for a specific hidraw node.

This bypasses python-fido2's device bootstrap so we can see whether the raw HID
transport itself exchanges packets on the expected FIDO interface.
"""

from __future__ import annotations

import argparse
import os
import secrets
import select
import struct
import sys
from pathlib import Path


CTAPHID_INIT = 0x06
TYPE_INIT = 0x80
BROADCAST_CID = 0xFFFFFFFF


def build_init_packet(nonce: bytes) -> bytes:
    frame = struct.pack(">IBH", BROADCAST_CID, TYPE_INIT | CTAPHID_INIT, len(nonce)) + nonce
    return b"\0" + frame.ljust(64, b"\0")


def main() -> int:
    parser = argparse.ArgumentParser(description="Manual CTAPHID INIT probe")
    parser.add_argument("--device-path", default="/dev/hidraw0")
    parser.add_argument("--timeout", type=float, default=3.0)
    args = parser.parse_args()

    path = Path(args.device_path)
    if not path.exists():
        print(f"missing device: {path}", file=sys.stderr)
        return 2

    nonce = secrets.token_bytes(8)
    packet = build_init_packet(nonce)
    print(f"device={path}")
    print(f"nonce={nonce.hex()}")
    print(f"write_len={len(packet)}")
    print(f"write_hex={packet.hex()}")

    fd = os.open(str(path), os.O_RDWR)
    try:
        written = os.write(fd, packet)
        print(f"written={written}")
        poller = select.poll()
        poller.register(fd, select.POLLIN)
        events = poller.poll(int(args.timeout * 1000))
        print(f"events={events}")
        if not events:
            print("timeout_waiting_for_response")
            return 1
        response = os.read(fd, 64)
        print(f"read_len={len(response)}")
        print(f"read_hex={response.hex()}")
        if len(response) >= 24:
            cid, cmd, bc = struct.unpack(">IBH", response[:7])
            print(f"resp_cid=0x{cid:08x}")
            print(f"resp_cmd=0x{cmd:02x}")
            print(f"resp_bc={bc}")
            print(f"resp_payload={response[7:7+bc].hex()}")
        return 0
    finally:
        os.close(fd)


if __name__ == "__main__":
    raise SystemExit(main())

@ -1,157 +0,0 @@
#!/usr/bin/env python3
"""
Generate a small local CA plus leaf certificates for Phase 2 HTTPS testing.
"""

from __future__ import annotations

import argparse
import ipaddress
from datetime import datetime, timedelta, timezone
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID


def build_name(common_name: str) -> x509.Name:
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])


def new_private_key() -> rsa.RSAPrivateKey:
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)


def write_private_key(path: Path, key: rsa.RSAPrivateKey) -> None:
    path.write_bytes(
        key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.TraditionalOpenSSL,
            encryption_algorithm=serialization.NoEncryption(),
        )
    )


def write_cert(path: Path, cert: x509.Certificate) -> None:
    path.write_bytes(cert.public_bytes(serialization.Encoding.PEM))


def parse_sans(names: list[str]) -> list[x509.GeneralName]:
    sans: list[x509.GeneralName] = []
    seen = set()
    for value in names:
        if value in seen:
            continue
        seen.add(value)
        try:
            sans.append(x509.IPAddress(ipaddress.ip_address(value)))
        except ValueError:
            sans.append(x509.DNSName(value))
    return sans


def issue_ca(common_name: str, valid_days: int) -> tuple[rsa.RSAPrivateKey, x509.Certificate]:
    now = datetime.now(timezone.utc)
    key = new_private_key()
    subject = issuer = build_name(common_name)
    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now - timedelta(minutes=5))
        .not_valid_after(now + timedelta(days=valid_days))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .add_extension(x509.SubjectKeyIdentifier.from_public_key(key.public_key()), critical=False)
        .add_extension(x509.AuthorityKeyIdentifier.from_issuer_public_key(key.public_key()), critical=False)
        .add_extension(x509.KeyUsage(digital_signature=True, key_encipherment=False, key_cert_sign=True, crl_sign=True, content_commitment=False, data_encipherment=False, key_agreement=False, encipher_only=False, decipher_only=False), critical=True)
        .sign(key, hashes.SHA256())
    )
    return key, cert


def issue_leaf(
    ca_key: rsa.RSAPrivateKey,
    ca_cert: x509.Certificate,
    common_name: str,
    san_values: list[str],
    valid_days: int,
) -> tuple[rsa.RSAPrivateKey, x509.Certificate]:
    now = datetime.now(timezone.utc)
    key = new_private_key()
    cert = (
        x509.CertificateBuilder()
        .subject_name(build_name(common_name))
        .issuer_name(ca_cert.subject)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now - timedelta(minutes=5))
        .not_valid_after(now + timedelta(days=valid_days))
        .add_extension(x509.BasicConstraints(ca=False, path_length=None), critical=True)
        .add_extension(x509.SubjectAlternativeName(parse_sans(san_values)), critical=False)
        .add_extension(x509.SubjectKeyIdentifier.from_public_key(key.public_key()), critical=False)
        .add_extension(x509.AuthorityKeyIdentifier.from_issuer_public_key(ca_key.public_key()), critical=False)
        .add_extension(x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]), critical=False)
        .add_extension(x509.KeyUsage(digital_signature=True, key_encipherment=True, key_cert_sign=False, crl_sign=False, content_commitment=False, data_encipherment=False, key_agreement=False, encipher_only=False, decipher_only=False), critical=True)
        .sign(ca_key, hashes.SHA256())
    )
    return key, cert


def emit_leaf_bundle(
    out_dir: Path,
    leaf_name: str,
    ca_key: rsa.RSAPrivateKey,
    ca_cert: x509.Certificate,
    san_values: list[str],
    valid_days: int,
) -> None:
    key, cert = issue_leaf(ca_key, ca_cert, leaf_name, san_values, valid_days)
    write_private_key(out_dir / f"{leaf_name}.key", key)
    write_cert(out_dir / f"{leaf_name}.crt", cert)


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Generate local CA and Phase 2 service certificates")
    parser.add_argument("--out-dir", default="tls/phase2")
    parser.add_argument("--valid-days", type=int, default=30)
    parser.add_argument("--ca-common-name", default="ChromeCard Phase2 Local CA")
    parser.add_argument(
        "--proxy-san",
        action="append",
        default=[],
        help="Extra SAN for k_proxy certificate; may be repeated",
    )
    parser.add_argument(
        "--server-san",
        action="append",
        default=[],
        help="Extra SAN for k_server certificate; may be repeated",
    )
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    out_dir = Path(args.out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)

    ca_key, ca_cert = issue_ca(args.ca_common_name, args.valid_days)
    write_private_key(out_dir / "ca.key", ca_key)
    write_cert(out_dir / "ca.crt", ca_cert)

    proxy_sans = ["localhost", "127.0.0.1", "k_proxy", *args.proxy_san]
    server_sans = ["localhost", "127.0.0.1", "k_server", *args.server_san]

    emit_leaf_bundle(out_dir, "k_proxy", ca_key, ca_cert, proxy_sans, args.valid_days)
    emit_leaf_bundle(out_dir, "k_server", ca_key, ca_cert, server_sans, args.valid_days)

    print(f"Generated CA and leaf certificates in {out_dir}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

@ -1,832 +0,0 @@
|
||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Minimal browser-facing client portal for Phase 6 bring-up.
|
|
||||||
|
|
||||||
This runs in k_client, keeps a local preferred username, and talks to k_proxy
|
|
||||||
over the localhost-forwarded TLS endpoint.
|
|
||||||
"""
|
|
||||||
|
|
||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
import argparse
|
|
||||||
import json
|
|
||||||
import ssl
|
|
||||||
import threading
|
|
||||||
import time
|
|
||||||
from dataclasses import dataclass
|
|
||||||
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import Any
|
|
||||||
from urllib.error import HTTPError, URLError
|
|
||||||
from urllib.parse import urlparse
|
|
||||||
from urllib.request import Request, urlopen
|
|
||||||
|
|
||||||
|
|
||||||
HTML = """<!doctype html>
|
|
||||||
<html lang="en">
|
|
||||||
<head>
|
|
||||||
<meta charset="utf-8">
|
|
||||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
|
||||||
<title>ChromeCard Client Flow</title>
|
|
||||||
<style>
|
|
||||||
:root {
|
|
||||||
--bg: #f3efe8;
|
|
||||||
--panel: #fffdf8;
|
|
||||||
--ink: #181614;
|
|
||||||
--muted: #655f56;
|
|
||||||
--line: #d9cfbf;
|
|
||||||
--accent: #0c6a60;
|
|
||||||
--accent-2: #8a5b2b;
|
|
||||||
--ok: #17653c;
|
|
||||||
--warn: #8f5b00;
|
|
||||||
--bad: #8a1f28;
|
|
||||||
--shadow: rgba(55, 41, 19, 0.08);
|
|
||||||
}
|
|
||||||
* { box-sizing: border-box; }
|
|
||||||
body {
|
|
||||||
margin: 0;
|
|
||||||
font-family: "Iowan Old Style", "Palatino Linotype", serif;
|
|
||||||
background:
|
|
||||||
radial-gradient(circle at top left, rgba(12,106,96,0.12), transparent 34%),
|
|
||||||
linear-gradient(180deg, #f9f3e8 0%, var(--bg) 100%);
|
|
||||||
color: var(--ink);
|
|
||||||
}
|
|
||||||
main {
|
|
||||||
max-width: 980px;
|
|
||||||
margin: 0 auto;
|
|
||||||
padding: 32px 20px 56px;
|
|
||||||
}
|
|
||||||
.hero, .panel {
|
|
||||||
padding: 22px 24px;
|
|
||||||
border: 1px solid var(--line);
|
|
||||||
background: linear-gradient(135deg, rgba(255,253,248,0.98), rgba(242,237,228,0.94));
|
|
||||||
box-shadow: 0 18px 40px var(--shadow);
|
|
||||||
}
|
|
||||||
.hero {
|
|
||||||
margin-bottom: 18px;
|
|
||||||
}
|
|
||||||
h1 {
|
|
||||||
margin: 0 0 8px;
|
|
||||||
font-size: clamp(2rem, 4vw, 3.4rem);
|
|
||||||
line-height: 0.95;
|
|
||||||
letter-spacing: -0.04em;
|
|
||||||
}
|
|
||||||
.subtitle {
|
|
||||||
margin: 0;
|
|
||||||
color: var(--muted);
|
|
||||||
max-width: 62ch;
|
|
||||||
font-size: 1rem;
|
|
||||||
}
|
|
||||||
.grid {
|
|
||||||
display: grid;
|
|
||||||
grid-template-columns: minmax(0, 1.3fr) minmax(300px, 0.9fr);
|
|
||||||
gap: 18px;
|
|
||||||
align-items: start;
|
|
||||||
}
|
|
||||||
.stack {
|
|
||||||
display: grid;
|
|
||||||
gap: 18px;
|
|
||||||
}
|
|
||||||
.actions, .row {
|
|
||||||
display: flex;
|
|
||||||
flex-wrap: wrap;
|
|
||||||
gap: 10px;
|
|
||||||
}
|
|
||||||
.actions {
|
|
||||||
margin-top: 18px;
|
|
||||||
}
|
|
||||||
input {
|
|
||||||
width: 100%;
|
|
||||||
padding: 10px 12px;
|
|
||||||
border: 1px solid var(--line);
|
|
||||||
background: #fff;
|
|
||||||
font: inherit;
|
|
||||||
color: var(--ink);
|
|
||||||
}
|
|
||||||
label {
|
|
||||||
display: grid;
|
|
||||||
gap: 6px;
|
|
||||||
margin-top: 14px;
|
|
||||||
color: var(--muted);
|
|
||||||
font-size: 0.95rem;
|
|
||||||
}
|
|
||||||
button {
|
|
||||||
text-decoration: none;
|
|
||||||
border: 0;
|
|
||||||
padding: 10px 14px;
|
|
||||||
font: inherit;
|
|
||||||
color: #fff;
|
|
||||||
background: var(--accent);
|
|
||||||
cursor: pointer;
|
|
||||||
}
|
|
||||||
button.secondary { background: var(--accent-2); }
|
|
||||||
button.ghost {
|
|
||||||
background: #fff;
|
|
||||||
color: var(--ink);
|
|
||||||
border: 1px solid var(--line);
|
|
||||||
}
|
|
||||||
button:disabled {
|
|
||||||
opacity: 0.55;
|
|
||||||
cursor: wait;
|
|
||||||
}
|
|
||||||
.status {
|
|
||||||
  display: grid;
  gap: 12px;
}
.status-card {
  padding: 14px;
  border: 1px solid var(--line);
  background: rgba(255,255,255,0.86);
}
.status-card h2 {
  margin: 0 0 6px;
  font-size: 1rem;
}
.status-line {
  font-size: 0.95rem;
  color: var(--muted);
}
#usersList {
  display: grid;
  gap: 8px;
  margin-top: 12px;
}
.user-row {
  display: flex;
  flex-wrap: wrap;
  justify-content: space-between;
  align-items: center;
  gap: 10px;
  padding: 10px 12px;
  border: 1px solid var(--line);
  background: rgba(255,255,255,0.86);
}
.user-meta {
  display: grid;
  gap: 2px;
}
.user-name {
  font-weight: 600;
}
.user-subtle {
  color: var(--muted);
  font-size: 0.9rem;
}
.user-actions {
  display: flex;
  flex-wrap: wrap;
  gap: 8px;
}
.small {
  padding: 8px 10px;
  font-size: 0.92rem;
}
.badge {
  display: inline-block;
  padding: 4px 8px;
  border: 1px solid var(--line);
  font-size: 0.86rem;
  background: #fff;
  color: var(--ink);
  margin-right: 6px;
  margin-bottom: 6px;
}
.timeline {
  display: grid;
  gap: 10px;
  margin-top: 16px;
}
.step {
  display: grid;
  grid-template-columns: 32px 1fr;
  gap: 12px;
  padding: 12px;
  border: 1px solid var(--line);
  background: rgba(255,255,255,0.84);
}
.step-index {
  width: 32px;
  height: 32px;
  display: grid;
  place-items: center;
  border-radius: 999px;
  border: 1px solid var(--line);
  background: #fff;
  font-size: 0.88rem;
}
.hint {
  margin-top: 14px;
  padding: 12px 14px;
  border-left: 4px solid var(--accent-2);
  background: rgba(138,91,43,0.08);
  color: var(--ink);
  font-size: 0.95rem;
}
pre {
  margin: 0;
  padding: 16px;
  overflow: auto;
  border: 1px solid var(--line);
  background: #16130f;
  color: #efe7da;
  font-family: "SFMono-Regular", Consolas, monospace;
  font-size: 0.9rem;
  line-height: 1.45;
  min-height: 360px;
}
@media (max-width: 860px) {
  .grid { grid-template-columns: 1fr; }
}
</style>
</head>
<body>
<main>
  <section class="hero">
    <h1>ChromeCard Client Flow</h1>
    <p class="subtitle">
      This page runs in `k_client` and drives the real split-VM flow:
      register a user, ask the card in `k_proxy` for approval, and then call
      the protected counter on `k_server` only if auth succeeds.
    </p>
  </section>

  <div class="grid">
    <section class="stack">
      <section class="panel">
        <div class="row">
          <span class="badge">Browser: k_client</span>
          <span class="badge">Card: k_proxy</span>
          <span class="badge">Resource: k_server</span>
        </div>

        <label>
          Username
          <input id="username" value="directtest" autocomplete="off">
        </label>

        <div class="actions">
          <button id="registerBtn">Register User</button>
          <button id="loginBtn">Login</button>
          <button id="counterBtn">Call k_server</button>
          <button id="logoutBtn" class="secondary">Logout</button>
          <button id="runFlowBtn" class="ghost">Run Full Flow</button>
          <button id="refreshBtn" class="ghost">Refresh State</button>
        </div>

        <div class="hint" id="hintBox">
          Registration: press <strong>yes</strong> on the card to enroll.
          Login: press <strong>yes</strong> to allow the identity check, or
          <strong>no</strong> to deny it. If login is denied, this page will
          show that `k_server` was not called.
        </div>

        <div class="timeline">
          <div class="step">
            <div class="step-index">1</div>
            <div>
              <strong>Register user</strong><br>
              Creates or refreshes the enrolled identity in `k_proxy`.
            </div>
          </div>
          <div class="step">
            <div class="step-index">2</div>
            <div>
              <strong>Authenticate with the card</strong><br>
              `k_proxy` asks the card for approval. Press `yes` to continue or `no` to reject.
            </div>
          </div>
          <div class="step">
            <div class="step-index">3</div>
            <div>
              <strong>Call `k_server`</strong><br>
              The protected counter is only reached when login created a valid session.
            </div>
          </div>
        </div>
      </section>

      <section class="panel status">
        <div class="status-card">
          <h2>Client State</h2>
          <div class="status-line" id="stateUser">Enrolled user: unknown</div>
          <div class="status-line" id="stateSession">Session: unknown</div>
          <div class="status-line" id="stateExpires">Expires: unknown</div>
        </div>
        <div class="status-card">
          <h2>Registered Users</h2>
          <div class="status-line" id="usersSummary">Loading users...</div>
          <div id="usersList"></div>
        </div>
        <div class="status-card">
          <h2>Flow Result</h2>
          <div class="status-line" id="flowResult">No flow run yet.</div>
        </div>
      </section>
    </section>

    <section class="panel">
      <h2 style="margin-top:0">Event Log</h2>
      <pre id="log"></pre>
    </section>
  </div>
</main>

<script>
const logNode = document.getElementById("log");
const hintBox = document.getElementById("hintBox");
const flowResult = document.getElementById("flowResult");
const stateUser = document.getElementById("stateUser");
const stateSession = document.getElementById("stateSession");
const stateExpires = document.getElementById("stateExpires");
const usersSummary = document.getElementById("usersSummary");
const usersList = document.getElementById("usersList");
const usernameInput = document.getElementById("username");
const buttons = Array.from(document.querySelectorAll("button"));

function log(message, payload) {
  const stamp = new Date().toLocaleTimeString();
  let line = `[${stamp}] ${message}`;
  if (payload !== undefined) {
    line += "\\n" + JSON.stringify(payload, null, 2);
  }
  logNode.textContent = line + "\\n\\n" + logNode.textContent;
}

function setBusy(busy) {
  for (const button of buttons) button.disabled = busy;
}

function username() {
  return usernameInput.value.trim();
}

async function api(path, payload) {
  const resp = await fetch(path, {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify(payload || {})
  });
  const data = await resp.json();
  return {status: resp.status, data};
}

async function refreshState() {
  const resp = await fetch("/api/client/state");
  const data = await resp.json();
  stateUser.textContent = `Enrolled user: ${data.enrolled_username || "none"}`;
  stateSession.textContent = `Session active: ${data.session_active ? "yes" : "no"}`;
  stateExpires.textContent = `Expires: ${data.session_expires_at || "none"}`;
  return data;
}

function renderUsers(users) {
  usersList.innerHTML = "";
  if (!users.length) {
    usersSummary.textContent = "No registered users in k_proxy.";
    return;
  }
  usersSummary.textContent = `${users.length} registered user${users.length === 1 ? "" : "s"} visible in k_proxy.`;
  for (const user of users) {
    const row = document.createElement("div");
    row.className = "user-row";

    const meta = document.createElement("div");
    meta.className = "user-meta";
    meta.innerHTML =
      `<div class="user-name">${user.username}</div>` +
      `<div class="user-subtle">Credential present: ${user.has_credential ? "yes" : "no"}</div>`;

    const actions = document.createElement("div");
    actions.className = "user-actions";

    const useBtn = document.createElement("button");
    useBtn.className = "ghost small";
    useBtn.textContent = "Use";
    useBtn.addEventListener("click", () => {
      usernameInput.value = user.username;
      flowResult.textContent = `Selected user ${user.username}.`;
    });

    const deleteBtn = document.createElement("button");
    deleteBtn.className = "secondary small";
    deleteBtn.textContent = "Unregister";
    deleteBtn.addEventListener("click", async () => {
      setBusy(true);
      try { await deleteUser(user.username); } finally { setBusy(false); }
    });

    actions.appendChild(useBtn);
    actions.appendChild(deleteBtn);
    row.appendChild(meta);
    row.appendChild(actions);
    usersList.appendChild(row);
  }
}

async function refreshUsers() {
  const resp = await fetch("/api/enrollments");
  const data = await resp.json();
  renderUsers(data.users || []);
  return data;
}

async function registerUser() {
  hintBox.innerHTML = "Card step: if the card shows a <strong>registration</strong> prompt, press <strong>yes</strong> to enroll this user.";
  const result = await api("/api/enroll", {username: username()});
  log("Register user", result);
  flowResult.textContent = result.status === 200 ? "User registration succeeded." : "User registration failed.";
  await refreshState();
  await refreshUsers();
  return result;
}

async function loginUser() {
  hintBox.innerHTML = "Card step: if the card shows an <strong>authentication</strong> prompt, press <strong>yes</strong> to allow login or <strong>no</strong> to deny it.";
  const result = await api("/api/login", {username: username()});
  log("Login", result);
  await refreshState();
  return result;
}

async function callCounter() {
  const result = await api("/api/resource/counter", {});
  log("Call k_server counter", result);
  flowResult.textContent =
    result.status === 200
      ? `k_server was reached. Counter value: ${result.data.upstream?.value}`
      : "k_server was not reached successfully.";
  return result;
}

async function logoutUser() {
  const result = await api("/api/logout", {});
  log("Logout", result);
  flowResult.textContent = result.status === 200 ? "Session cleared." : "Logout failed.";
  await refreshState();
  return result;
}

async function deleteUser(usernameToDelete) {
  const result = await api("/api/enroll/delete", {username: usernameToDelete});
  log("Unregister user", result);
  flowResult.textContent =
    result.status === 200
      ? `User ${usernameToDelete} was unregistered.`
      : `Could not unregister ${usernameToDelete}.`;
  if (result.status === 200 && username() === usernameToDelete) {
    usernameInput.value = "";
  }
  await refreshState();
  await refreshUsers();
  return result;
}

async function runFlow() {
  setBusy(true);
  flowResult.textContent = "Flow running...";
  try {
    const login = await loginUser();
    if (login.status !== 200) {
      flowResult.textContent = "Login denied or failed. `k_server` was not called.";
      log("Flow stopped before k_server", {
        reason: "login failed",
        status: login.status,
        response: login.data
      });
      return;
    }
    const counter = await callCounter();
    if (counter.status === 200) {
      flowResult.textContent = `Flow succeeded. k_server returned counter ${counter.data.upstream?.value}.`;
    } else {
      flowResult.textContent = "Login succeeded, but the protected k_server call failed.";
    }
  } finally {
    setBusy(false);
  }
}

document.getElementById("registerBtn").addEventListener("click", async () => {
  setBusy(true);
  try { await registerUser(); } finally { setBusy(false); }
});
document.getElementById("loginBtn").addEventListener("click", async () => {
  setBusy(true);
  try {
    const result = await loginUser();
    flowResult.textContent = result.status === 200 ? "Login succeeded. You can now call k_server." : "Login denied or failed. k_server was not called.";
  } finally { setBusy(false); }
});
document.getElementById("counterBtn").addEventListener("click", async () => {
  setBusy(true);
  try { await callCounter(); } finally { setBusy(false); }
});
document.getElementById("logoutBtn").addEventListener("click", async () => {
  setBusy(true);
  try { await logoutUser(); } finally { setBusy(false); }
});
document.getElementById("runFlowBtn").addEventListener("click", runFlow);
document.getElementById("refreshBtn").addEventListener("click", async () => {
  setBusy(true);
  try {
    const state = await refreshState();
    const users = await refreshUsers();
    log("State refreshed", {state, users});
  } finally { setBusy(false); }
});

Promise.all([refreshState(), refreshUsers()]).then(([state, users]) => {
  log("Client flow page ready", {state, users});
});
</script>
</body>
</html>
"""


@dataclass
class EnrollmentRecord:
    username: str


class ClientState:
    def __init__(
        self,
        proxy_base_url: str,
        proxy_ca_file: str | None,
        enroll_db: Path,
        interactive_timeout_s: float = 90.0,
        default_timeout_s: float = 10.0,
    ):
        self.proxy_base_url = proxy_base_url.rstrip("/")
        self.proxy_ca_file = proxy_ca_file
        self.enroll_db = enroll_db
        self.interactive_timeout_s = interactive_timeout_s
        self.default_timeout_s = default_timeout_s
        self.lock = threading.Lock()
        self.preferred_enrollment: EnrollmentRecord | None = None
        self.session_token: str | None = None
        self.session_expires_at: int | None = None
        self._load_preferred_enrollment()

    def _ssl_context(self):
        if self.proxy_base_url.startswith("https://"):
            return ssl.create_default_context(cafile=self.proxy_ca_file)
        return None

    def _proxy_json(
        self,
        method: str,
        path: str,
        payload: dict[str, Any] | None = None,
        *,
        timeout_s: float | None = None,
    ) -> tuple[int, dict[str, Any]]:
        req = Request(f"{self.proxy_base_url}{path}", method=method)
        req.add_header("Content-Type", "application/json")
        token = self.get_session_token()
        if token:
            req.add_header("Authorization", f"Bearer {token}")
        body = json.dumps(payload or {}).encode("utf-8")
        try:
            with urlopen(
                req,
                data=body,
                timeout=timeout_s or self.default_timeout_s,
                context=self._ssl_context(),
            ) as resp:
                return resp.status, json.loads(resp.read().decode("utf-8"))
        except HTTPError as exc:
            try:
                return exc.code, json.loads(exc.read().decode("utf-8"))
            except Exception:
                return exc.code, {"ok": False, "error": f"proxy http error {exc.code}"}
        except URLError as exc:
            return 502, {"ok": False, "error": f"proxy unavailable: {exc.reason}"}
        except Exception as exc:
            return 502, {"ok": False, "error": f"proxy call failed: {exc}"}

    def _load_preferred_enrollment(self) -> None:
        if not self.enroll_db.exists():
            return
        try:
            data = json.loads(self.enroll_db.read_text())
            username = str(data.get("username", "")).strip()
            if username:
                self.preferred_enrollment = EnrollmentRecord(username=username)
        except Exception:
            self.preferred_enrollment = None

    def _save_preferred_enrollment_locked(self) -> None:
        self.enroll_db.parent.mkdir(parents=True, exist_ok=True)
        payload = {"username": self.preferred_enrollment.username if self.preferred_enrollment else None}
        self.enroll_db.write_text(json.dumps(payload, indent=2) + "\n")

    def enroll(self, username: str) -> dict[str, Any]:
        username = username.strip()
        if not username:
            return {"ok": False, "error": "username required"}
        status, data = self._proxy_json(
            "POST",
            "/enroll/register",
            {"username": username},
            timeout_s=self.interactive_timeout_s,
        )
        if status != 200:
            return data
        with self.lock:
            self.preferred_enrollment = EnrollmentRecord(username=username)
            self._save_preferred_enrollment_locked()
            self.session_token = None
            self.session_expires_at = None
        return {
            "ok": True,
            "enrolled_username": username,
            "proxy_enrollment": data,
        }

    def list_enrollments(self) -> tuple[int, dict[str, Any]]:
        return self._proxy_json("GET", "/enroll/list")

    def delete_enrollment(self, username: str) -> tuple[int, dict[str, Any]]:
        username = username.strip()
        if not username:
            return 400, {"ok": False, "error": "username required"}
        status, data = self._proxy_json("POST", "/enroll/delete", {"username": username})
        if status == 200:
            with self.lock:
                if self.preferred_enrollment and self.preferred_enrollment.username == username:
                    self.preferred_enrollment = None
                    self._save_preferred_enrollment_locked()
                    self.session_token = None
                    self.session_expires_at = None
        return status, data

    def snapshot(self) -> dict[str, Any]:
        with self.lock:
            return {
                "ok": True,
                "enrolled_username": self.preferred_enrollment.username if self.preferred_enrollment else None,
                "session_active": bool(self.session_token),
                "session_expires_at": self.session_expires_at,
                "proxy_base_url": self.proxy_base_url,
            }

    def get_session_token(self) -> str | None:
        with self.lock:
            return self.session_token

    def login(self, username: str | None = None) -> tuple[int, dict[str, Any]]:
        requested = (username or "").strip()
        with self.lock:
            if requested:
                username = requested
            elif self.preferred_enrollment:
                username = self.preferred_enrollment.username
            else:
                return 400, {"ok": False, "error": "no enrolled user"}

        status, data = self._proxy_json(
            "POST",
            "/session/login",
            {"username": username},
            timeout_s=self.interactive_timeout_s,
        )
        if status == 200 and data.get("session_token"):
            with self.lock:
                self.preferred_enrollment = EnrollmentRecord(username=username)
                self._save_preferred_enrollment_locked()
                self.session_token = data["session_token"]
                self.session_expires_at = int(data.get("expires_at", 0)) or None
        return status, data

    def status(self) -> tuple[int, dict[str, Any]]:
        return self._proxy_json("POST", "/session/status")

    def counter(self) -> tuple[int, dict[str, Any]]:
        return self._proxy_json("POST", "/resource/counter")

    def logout(self) -> tuple[int, dict[str, Any]]:
        status, data = self._proxy_json("POST", "/session/logout")
        if status == 200:
            with self.lock:
                self.session_token = None
                self.session_expires_at = None
        return status, data


class Handler(BaseHTTPRequestHandler):
    state: ClientState

    def _json(self, status: int, payload: dict[str, Any]) -> None:
        body = json.dumps(payload).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def _html(self, body: str) -> None:
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def _read_json(self) -> dict[str, Any]:
        length = int(self.headers.get("Content-Length", "0"))
        raw = self.rfile.read(length)
        if not raw:
            return {}
        return json.loads(raw.decode("utf-8"))

    def do_GET(self) -> None:  # noqa: N802
        path = urlparse(self.path).path
        if path == "/":
            self._html(HTML)
            return
        if path == "/health":
            self._json(200, {"ok": True, "service": "k_client_portal", "time": int(time.time())})
            return
        if path == "/api/client/state":
            self._json(200, self.state.snapshot())
            return
        if path == "/api/enrollments":
            status, data = self.state.list_enrollments()
            self._json(status, data)
            return
        self.send_error(404)

    def do_POST(self) -> None:  # noqa: N802
        path = urlparse(self.path).path
        if path == "/api/enroll":
            try:
                data = self._read_json()
            except Exception:
                self._json(400, {"ok": False, "error": "invalid json"})
                return
            result = self.state.enroll(str(data.get("username", "")))
            self._json(200 if result.get("ok") else 400, result)
            return
        if path == "/api/login":
            try:
                data = self._read_json()
            except Exception:
                self._json(400, {"ok": False, "error": "invalid json"})
                return
            status, data = self.state.login(str(data.get("username", "")))
            self._json(status, data)
            return
        if path == "/api/enroll/delete":
            try:
                data = self._read_json()
            except Exception:
                self._json(400, {"ok": False, "error": "invalid json"})
                return
            status, data = self.state.delete_enrollment(str(data.get("username", "")))
            self._json(status, data)
            return
        if path == "/api/status":
            status, data = self.state.status()
            self._json(status, data)
            return
        if path == "/api/resource/counter":
            status, data = self.state.counter()
            self._json(status, data)
            return
        if path == "/api/logout":
            status, data = self.state.logout()
            self._json(status, data)
            return
        self.send_error(404)


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run browser-facing client portal in k_client")
    parser.add_argument("--host", default="127.0.0.1")
    parser.add_argument("--port", type=int, default=8766)
    parser.add_argument("--proxy-base-url", default="https://127.0.0.1:9771")
    parser.add_argument("--proxy-ca-file", help="CA certificate used to verify k_proxy HTTPS certificate")
    parser.add_argument("--enroll-db", default="/home/user/chromecard/k_client_enrollment.json")
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    if args.proxy_base_url.startswith("https://") and not args.proxy_ca_file:
        raise SystemExit("--proxy-ca-file is required when --proxy-base-url uses https")

    Handler.state = ClientState(
        proxy_base_url=args.proxy_base_url,
        proxy_ca_file=args.proxy_ca_file,
        enroll_db=Path(args.enroll_db),
    )
    server = ThreadingHTTPServer((args.host, args.port), Handler)
    print(f"k_client_portal listening on http://{args.host}:{args.port}")
    server.serve_forever()
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
k_proxy_app.py: 1087 changed lines. File diff suppressed because it is too large.
@@ -12,7 +12,6 @@ from __future__ import annotations

 import argparse
 import json
-import ssl
 import threading
 import time
 from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
@@ -34,7 +33,6 @@ class ServerState:

 class Handler(BaseHTTPRequestHandler):
     state: ServerState
-    protocol_version = "HTTP/1.1"

     def _json(self, status: int, payload: dict[str, Any]) -> None:
         body = json.dumps(payload).encode("utf-8")
@@ -44,11 +42,6 @@ class Handler(BaseHTTPRequestHandler):
         self.end_headers()
         self.wfile.write(body)

-    def _discard_request_body(self) -> None:
-        length = int(self.headers.get("Content-Length", "0"))
-        if length > 0:
-            self.rfile.read(length)
-
     def _is_proxy_authorized(self) -> bool:
         return self.headers.get("X-Proxy-Token") == self.state.proxy_token

@@ -71,7 +64,6 @@ class Handler(BaseHTTPRequestHandler):
         if path != "/resource/counter":
             self.send_error(404)
             return
-        self._discard_request_body()
         if not self._is_proxy_authorized():
             self._json(401, {"ok": False, "error": "unauthorized proxy"})
             return
@@ -92,8 +84,6 @@ def parse_args() -> argparse.Namespace:
     parser = argparse.ArgumentParser(description="Run k_server counter service")
     parser.add_argument("--host", default="127.0.0.1")
     parser.add_argument("--port", type=int, default=8780)
-    parser.add_argument("--tls-certfile", help="PEM certificate chain for HTTPS listener")
-    parser.add_argument("--tls-keyfile", help="PEM private key for HTTPS listener")
     parser.add_argument(
         "--proxy-token",
         default="dev-proxy-token",
@@ -104,20 +94,10 @@ def parse_args() -> argparse.Namespace:

 def main() -> int:
     args = parse_args()
-    if bool(args.tls_certfile) != bool(args.tls_keyfile):
-        raise SystemExit("Both --tls-certfile and --tls-keyfile are required to enable HTTPS")
-
     state = ServerState(proxy_token=args.proxy_token)
     Handler.state = state
     server = ThreadingHTTPServer((args.host, args.port), Handler)
-    scheme = "http"
-    if args.tls_certfile:
-        context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
-        context.load_cert_chain(certfile=args.tls_certfile, keyfile=args.tls_keyfile)
-        server.socket = context.wrap_socket(server.socket, server_side=True)
-        scheme = "https"
-
-    print(f"k_server listening on {scheme}://{args.host}:{args.port}")
+    print(f"k_server listening on http://{args.host}:{args.port}")
     server.serve_forever()
     return 0
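The TLS branch removed in this k_server diff follows the standard-library pattern for terminating HTTPS directly on a `ThreadingHTTPServer`. As a hedged sketch of that pattern (not the removed code verbatim; the certificate paths in the usage note are placeholders):

```python
import ssl
from http.server import ThreadingHTTPServer


def wrap_with_tls(server: ThreadingHTTPServer,
                  certfile: str, keyfile: str) -> ThreadingHTTPServer:
    # Build a server-side TLS context, load the cert/key pair, and wrap the
    # already-bound listening socket so every accepted connection is TLS.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=certfile, keyfile=keyfile)
    server.socket = context.wrap_socket(server.socket, server_side=True)
    return server

# Usage (placeholder paths):
# server = ThreadingHTTPServer(("127.0.0.1", 8780), Handler)
# wrap_with_tls(server, "/path/to/server.crt", "/path/to/server.key")
# server.serve_forever()
```

Wrapping must happen before `serve_forever()`; connections accepted after the wrap are handshaken as TLS.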
@@ -1,78 +0,0 @@
-{
-  "name": "chromecard-browser-regression",
-  "version": "0.1.0",
-  "lockfileVersion": 3,
-  "requires": true,
-  "packages": {
-    "": {
-      "name": "chromecard-browser-regression",
-      "version": "0.1.0",
-      "devDependencies": {
-        "@playwright/test": "^1.54.2"
-      }
-    },
-    "node_modules/@playwright/test": {
-      "version": "1.59.1",
-      "resolved": "https://registry.npmjs.org/@playwright/test/-/test-1.59.1.tgz",
-      "integrity": "sha512-PG6q63nQg5c9rIi4/Z5lR5IVF7yU5MqmKaPOe0HSc0O2cX1fPi96sUQu5j7eo4gKCkB2AnNGoWt7y4/Xx3Kcqg==",
-      "dev": true,
-      "license": "Apache-2.0",
-      "dependencies": {
-        "playwright": "1.59.1"
-      },
-      "bin": {
-        "playwright": "cli.js"
-      },
-      "engines": {
-        "node": ">=18"
-      }
-    },
-    "node_modules/fsevents": {
-      "version": "2.3.2",
-      "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
-      "integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
-      "dev": true,
-      "hasInstallScript": true,
-      "license": "MIT",
-      "optional": true,
-      "os": [
-        "darwin"
-      ],
-      "engines": {
-        "node": "^8.16.0 || ^10.6.0 || >=11.0.0"
-      }
-    },
-    "node_modules/playwright": {
-      "version": "1.59.1",
-      "resolved": "https://registry.npmjs.org/playwright/-/playwright-1.59.1.tgz",
-      "integrity": "sha512-C8oWjPR3F81yljW9o5OxcWzfh6avkVwDD2VYdwIGqTkl+OGFISgypqzfu7dOe4QNLL2aqcWBmI3PMtLIK233lw==",
-      "dev": true,
-      "license": "Apache-2.0",
-      "dependencies": {
-        "playwright-core": "1.59.1"
-      },
-      "bin": {
-        "playwright": "cli.js"
-      },
-      "engines": {
-        "node": ">=18"
-      },
-      "optionalDependencies": {
-        "fsevents": "2.3.2"
-      }
-    },
-    "node_modules/playwright-core": {
-      "version": "1.59.1",
-      "resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.59.1.tgz",
-      "integrity": "sha512-HBV/RJg81z5BiiZ9yPzIiClYV/QMsDCKUyogwH9p3MCP6IYjUFu/MActgYAvK0oWyV9NlwM3GLBjADyWgydVyg==",
-      "dev": true,
-      "license": "Apache-2.0",
-      "bin": {
-        "playwright-core": "cli.js"
-      },
-      "engines": {
-        "node": ">=18"
-      }
-    }
-  }
-}
12
package.json
12
package.json
|
|
@ -1,12 +0,0 @@
{
  "name": "chromecard-browser-regression",
  "private": true,
  "version": "0.1.0",
  "description": "Playwright regression checks for the k_client browser flow",
  "scripts": {
    "test:k-client": "playwright test tests/k_client_portal.spec.js"
  },
  "devDependencies": {
    "@playwright/test": "^1.54.2"
  }
}
phase5_chain_regression.sh (230 lines removed)
@@ -1,230 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

CLIENT_HOST="${CLIENT_HOST:-k_client}"
CA_FILE="${CA_FILE:-/home/user/chromecard/tls/phase2/ca.crt}"
PROXY_URL="${PROXY_URL:-https://127.0.0.1:9771}"
USERNAME="${USERNAME:-alice}"
REQUESTS="${REQUESTS:-20}"
PARALLELISM="${PARALLELISM:-8}"
CONNECT_TIMEOUT="${CONNECT_TIMEOUT:-8}"
LOGIN_TIMEOUT="${LOGIN_TIMEOUT:-90}"
INTERACTIVE_CARD="${INTERACTIVE_CARD:-0}"
EXPECT_AUTH_MODE="${EXPECT_AUTH_MODE:-}"
SSH_CONFIG="${SSH_CONFIG:-/home/user/.ssh/config}"

usage() {
  cat <<'EOF'
Usage: phase5_chain_regression.sh [options]

Runs the Phase 5 split-VM regression from the host by executing the client-side
flow inside k_client over SSH.

Options:
  --client-host HOST       SSH host alias for k_client (default: k_client)
  --ca-file PATH           CA bundle path inside k_client
  --proxy-url URL          Proxy URL visible from k_client
  --username NAME          Username for session login
  --requests N             Number of counter requests to issue
  --parallelism N          Number of concurrent workers
  --connect-timeout SEC    SSH connect timeout
  --login-timeout SEC      Timeout for the interactive login request (default: 90)
  --interactive-card       Print card-confirmation instructions before login
  --expect-auth-mode NAME  Require login response auth_mode to match
  --ssh-config PATH        SSH config file to use (default: /home/user/.ssh/config)
  -h, --help               Show this help text
EOF
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    --client-host)
      CLIENT_HOST="$2"
      shift 2
      ;;
    --ca-file)
      CA_FILE="$2"
      shift 2
      ;;
    --proxy-url)
      PROXY_URL="$2"
      shift 2
      ;;
    --username)
      USERNAME="$2"
      shift 2
      ;;
    --requests)
      REQUESTS="$2"
      shift 2
      ;;
    --parallelism)
      PARALLELISM="$2"
      shift 2
      ;;
    --connect-timeout)
      CONNECT_TIMEOUT="$2"
      shift 2
      ;;
    --login-timeout)
      LOGIN_TIMEOUT="$2"
      shift 2
      ;;
    --interactive-card)
      INTERACTIVE_CARD=1
      shift
      ;;
    --expect-auth-mode)
      EXPECT_AUTH_MODE="$2"
      shift 2
      ;;
    --ssh-config)
      SSH_CONFIG="$2"
      shift 2
      ;;
    -h|--help)
      usage
      exit 0
      ;;
    *)
      echo "unknown argument: $1" >&2
      usage >&2
      exit 2
      ;;
  esac
done

if [[ "${INTERACTIVE_CARD}" == "1" ]]; then
  cat <<EOF
Starting interactive login for ${USERNAME}.
When the card shows the authentication prompt, press yes to approve.
Press no only if you want to reject the login.
EOF
fi

ssh \
  -F "${SSH_CONFIG}" \
  -o BatchMode=yes \
  -o StrictHostKeyChecking=accept-new \
  -o ConnectTimeout="${CONNECT_TIMEOUT}" \
  "${CLIENT_HOST}" \
  env \
    CA_FILE="${CA_FILE}" \
    PROXY_URL="${PROXY_URL}" \
    USERNAME="${USERNAME}" \
    REQUESTS="${REQUESTS}" \
    PARALLELISM="${PARALLELISM}" \
    LOGIN_TIMEOUT="${LOGIN_TIMEOUT}" \
    EXPECT_AUTH_MODE="${EXPECT_AUTH_MODE}" \
    python3 - <<'PY'
import concurrent.futures
import json
import os
import ssl
import sys
import urllib.error
import urllib.request

ca_file = os.environ["CA_FILE"]
proxy_url = os.environ["PROXY_URL"].rstrip("/")
username = os.environ["USERNAME"]
requests = int(os.environ["REQUESTS"])
parallelism = int(os.environ["PARALLELISM"])
login_timeout = int(os.environ["LOGIN_TIMEOUT"])
expect_auth_mode = os.environ["EXPECT_AUTH_MODE"]

if requests < 1:
    raise SystemExit("REQUESTS must be >= 1")
if parallelism < 1:
    raise SystemExit("PARALLELISM must be >= 1")

ctx = ssl.create_default_context(cafile=ca_file)

def post_json(path: str, payload: dict | None = None, token: str | None = None, timeout: int = 10):
    data = None if payload is None else json.dumps(payload).encode("utf-8")
    headers = {}
    if payload is not None:
        headers["Content-Type"] = "application/json"
    if token:
        headers["Authorization"] = f"Bearer {token}"
    req = urllib.request.Request(
        f"{proxy_url}{path}",
        data=data,
        headers=headers,
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, context=ctx, timeout=timeout) as resp:
            return resp.status, json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as exc:
        body = exc.read().decode("utf-8")
        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            payload = {"ok": False, "error": body}
        return exc.code, payload

status, login = post_json("/session/login", {"username": username}, timeout=login_timeout)
if status != 200 or "session_token" not in login:
    print(json.dumps({"ok": False, "stage": "login", "status": status, "response": login}))
    raise SystemExit(1)
if expect_auth_mode and login.get("auth_mode") != expect_auth_mode:
    print(
        json.dumps(
            {
                "ok": False,
                "stage": "login",
                "error": "unexpected auth_mode",
                "expected": expect_auth_mode,
                "response": login,
            }
        )
    )
    raise SystemExit(1)

token = login["session_token"]
values = []

def fetch_one(_: int) -> int:
    status, payload = post_json("/resource/counter", {}, token=token)
    if status != 200:
        raise RuntimeError(json.dumps({"status": status, "response": payload}))
    return int(payload["upstream"]["value"])

try:
    with concurrent.futures.ThreadPoolExecutor(max_workers=parallelism) as pool:
        for value in pool.map(fetch_one, range(requests)):
            values.append(value)

    status_resp, session = post_json("/session/status", {}, token=token)
    logout_status, logout = post_json("/session/logout", {}, token=token)
    invalid_status, invalid = post_json("/resource/counter", {}, token=token)
except Exception as exc:
    try:
        post_json("/session/logout", {}, token=token)
    finally:
        raise SystemExit(str(exc))

sorted_values = sorted(values)
expected = list(range(sorted_values[0], sorted_values[-1] + 1)) if sorted_values else []

summary = {
    "ok": True,
    "username": username,
    "proxy_url": proxy_url,
    "requests": requests,
    "parallelism": parallelism,
    "unique": len(set(values)) == len(values),
    "gap_free": sorted_values == expected,
    "min": min(sorted_values) if sorted_values else None,
    "max": max(sorted_values) if sorted_values else None,
    "values": sorted_values,
    "login": login,
    "session_status": {"status": status_resp, "response": session},
    "logout": {"status": logout_status, "response": logout},
    "post_logout": {"status": invalid_status, "response": invalid},
}
print(json.dumps(summary, indent=2, sort_keys=True))
if not summary["unique"] or not summary["gap_free"] or logout_status != 200 or invalid_status != 401:
    raise SystemExit(1)
PY
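The success criterion the embedded Python applies to the parallel counter responses — every value unique, and the sorted values forming a gap-free run — can be sketched as a standalone check (the helper name here is illustrative, not part of the script):

```python
def counter_values_ok(values: list[int]) -> bool:
    """True when counter values are unique and form a contiguous run."""
    sorted_values = sorted(values)
    # A gap-free run spans exactly min..max with no holes.
    expected = list(range(sorted_values[0], sorted_values[-1] + 1)) if sorted_values else []
    unique = len(set(values)) == len(values)
    return unique and sorted_values == expected

print(counter_values_ok([7, 5, 6]))  # contiguous run 5..7 -> True
print(counter_values_ok([5, 7]))     # gap at 6 -> False
```

Together these two conditions imply every request hit the counter exactly once, with no lost or duplicated increments under concurrency.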
@@ -1,188 +0,0 @@
#!/usr/bin/env python3
"""
Phase 6.5 concurrency probe for the direct browser-to-k_proxy path.

What it does:
- Creates a small batch of enrolled users.
- Logs each user in through k_proxy over TLS.
- Fires protected counter requests in parallel using the returned bearer tokens.
- Verifies that all calls succeed and that returned counter values are unique and contiguous.
"""

from __future__ import annotations

import argparse
import json
import ssl
import sys
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from dataclasses import dataclass
from typing import Any
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen


@dataclass
class Session:
    username: str
    token: str


def request_json(
    base_url: str,
    path: str,
    *,
    method: str = "GET",
    payload: dict[str, Any] | None = None,
    token: str | None = None,
    cafile: str | None = None,
    timeout: int = 10,
) -> tuple[int, dict[str, Any]]:
    req = Request(f"{base_url.rstrip('/')}{path}", method=method)
    req.add_header("Content-Type", "application/json")
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    data = None if payload is None else json.dumps(payload).encode("utf-8")
    context = ssl.create_default_context(cafile=cafile) if base_url.startswith("https://") else None
    try:
        with urlopen(req, data=data, timeout=timeout, context=context) as resp:
            return resp.status, json.loads(resp.read().decode("utf-8"))
    except HTTPError as exc:
        try:
            return exc.code, json.loads(exc.read().decode("utf-8"))
        except Exception:
            return exc.code, {"ok": False, "error": f"http error {exc.code}"}
    except URLError as exc:
        return 502, {"ok": False, "error": f"url error: {exc.reason}"}
    except Exception as exc:
        return 502, {"ok": False, "error": f"request failed: {exc}"}


def enroll_user(base_url: str, cafile: str, username: str, display_name: str) -> None:
    status, data = request_json(
        base_url,
        "/enroll/register",
        method="POST",
        payload={"username": username, "display_name": display_name},
        cafile=cafile,
    )
    if status == 200:
        return
    if status == 409 and data.get("error") == "user already enrolled":
        return
    raise RuntimeError(f"enroll failed for {username}: status={status} data={data}")


def login_user(base_url: str, cafile: str, username: str) -> Session:
    status, data = request_json(
        base_url,
        "/session/login",
        method="POST",
        payload={"username": username},
        cafile=cafile,
    )
    if status != 200 or not data.get("session_token"):
        raise RuntimeError(f"login failed for {username}: status={status} data={data}")
    return Session(username=username, token=data["session_token"])


def counter_call(base_url: str, cafile: str, session: Session, call_id: int) -> dict[str, Any]:
    started = time.time()
    status, data = request_json(
        base_url,
        "/resource/counter",
        method="POST",
        payload={},
        token=session.token,
        cafile=cafile,
    )
    finished = time.time()
    return {
        "call_id": call_id,
        "username": session.username,
        "status": status,
        "data": data,
        "latency_ms": int((finished - started) * 1000),
    }


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run Phase 6.5 concurrency probe against k_proxy")
    parser.add_argument("--base-url", default="https://127.0.0.1:9771")
    parser.add_argument("--ca-file", required=True)
    parser.add_argument("--users", type=int, default=3)
    parser.add_argument("--requests-per-user", type=int, default=4)
    parser.add_argument("--username-prefix", default="phase65")
    parser.add_argument(
        "--max-workers",
        type=int,
        help="Maximum number of in-flight protected calls; defaults to total requests",
    )
    return parser.parse_args()


def main() -> int:
    args = parse_args()

    sessions: list[Session] = []
    for idx in range(args.users):
        username = f"{args.username_prefix}_{idx}"
        enroll_user(args.base_url, args.ca_file, username, f"Phase65 User {idx}")
        sessions.append(login_user(args.base_url, args.ca_file, username))

    jobs: list[tuple[Session, int]] = []
    call_id = 0
    for session in sessions:
        for _ in range(args.requests_per_user):
            jobs.append((session, call_id))
            call_id += 1

    results: list[dict[str, Any]] = []
    max_workers = args.max_workers or len(jobs)
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        future_map = {
            executor.submit(counter_call, args.base_url, args.ca_file, session, job_id): (session.username, job_id)
            for session, job_id in jobs
        }
        for future in as_completed(future_map):
            username, job_id = future_map[future]
            try:
                results.append(future.result())
            except Exception as exc:
                results.append(
                    {
                        "call_id": job_id,
                        "username": username,
                        "status": 599,
                        "data": {"ok": False, "error": str(exc)},
                        "latency_ms": -1,
                    }
                )

    results.sort(key=lambda item: item["call_id"])
    ok_results = [item for item in results if item["status"] == 200 and item["data"].get("ok")]
    values = [item["data"]["upstream"]["value"] for item in ok_results]
    values_sorted = sorted(values)
    contiguous = bool(values_sorted) and values_sorted == list(range(values_sorted[0], values_sorted[0] + len(values_sorted)))

    summary = {
        "ok": len(ok_results) == len(results) and len(set(values)) == len(values) and contiguous,
        "users": args.users,
        "requests_per_user": args.requests_per_user,
        "total_requests": len(results),
        "max_workers": max_workers,
        "successful_requests": len(ok_results),
        "unique_counter_values": len(set(values)),
        "counter_min": min(values_sorted) if values_sorted else None,
        "counter_max": max(values_sorted) if values_sorted else None,
        "counter_contiguous": contiguous,
        "max_latency_ms": max((item["latency_ms"] for item in results), default=None),
        "results": results,
    }
    print(json.dumps(summary, indent=2))
    return 0 if summary["ok"] else 1


if __name__ == "__main__":
    raise SystemExit(main())
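The probe's `request_json` normalizes every error path into the same `(status, dict)` shape so callers never have to branch on exception types. The HTTPError branch can be sketched in isolation (the function name is illustrative):

```python
import json

def normalize_error_body(code: int, body: str) -> tuple[int, dict]:
    """Parse an HTTP error body as JSON; else wrap it in a uniform failure dict."""
    try:
        return code, json.loads(body)
    except json.JSONDecodeError:
        # Non-JSON bodies (HTML error pages, plain text) get a synthetic payload.
        return code, {"ok": False, "error": f"http error {code}"}

print(normalize_error_body(401, '{"ok": false, "error": "bad token"}'))
print(normalize_error_body(502, "<html>upstream down</html>"))
```

This keeps the concurrency loop simple: a 4xx/5xx with a JSON body surfaces the server's own error object, while anything unparseable still yields a dict the summary code can aggregate.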
@@ -1,18 +0,0 @@
// Minimal local Playwright config for the k_client browser flow.
const { defineConfig } = require("@playwright/test");

module.exports = defineConfig({
  testDir: "./tests",
  timeout: 180_000,
  expect: {
    timeout: 15_000,
  },
  use: {
    baseURL: process.env.PORTAL_BASE_URL || "http://127.0.0.1:8766",
    headless: process.env.PW_HEADLESS === "1",
    trace: "on-first-retry",
    screenshot: "only-on-failure",
    video: "retain-on-failure",
  },
  reporter: [["list"]],
});
@@ -1,321 +0,0 @@
#!/usr/bin/env python3
"""
Low-level CTAP2 probe for ChromeCard host debugging.

This bypasses the higher-level Fido2Client/WebAuthn helpers so we can inspect
raw makeCredential/getAssertion behavior, keepalive callbacks, and transport
errors on the host stack.
"""

from __future__ import annotations

import argparse
import hashlib
import json
import secrets
import sys
import time
import traceback
from typing import Any

try:
    from fido2.ctap import CtapError
    from fido2.ctap2 import Ctap2
    from fido2.hid import CtapHidDevice
    from fido2.hid.linux import get_descriptor, open_connection
except Exception as exc:
    print("Missing dependency: python-fido2", file=sys.stderr)
    print("Install with: python3 -m pip install fido2", file=sys.stderr)
    print(f"Import error: {exc}", file=sys.stderr)
    sys.exit(2)


def _json_default(value: Any) -> Any:
    if isinstance(value, bytes):
        return value.hex()
    if isinstance(value, set):
        return sorted(value)
    if hasattr(value, "items"):
        return dict(value.items())
    return str(value)


def _now() -> str:
    return time.strftime("%Y-%m-%dT%H:%M:%S", time.localtime())


def log(message: str) -> None:
    print(f"[{_now()}] {message}", file=sys.stderr, flush=True)


def list_devices() -> list[CtapHidDevice]:
    return list(CtapHidDevice.list_devices())


def describe_device(dev: CtapHidDevice) -> dict[str, Any]:
    desc = getattr(dev, "descriptor", None)
    return {
        "product_name": getattr(desc, "product_name", None),
        "manufacturer": getattr(desc, "manufacturer_string", None),
        "vendor_id": getattr(desc, "vid", None),
        "product_id": getattr(desc, "pid", None),
        "path": getattr(desc, "path", None),
    }


def get_ctap2(dev: CtapHidDevice) -> Ctap2:
    return Ctap2(dev)


def get_device(index: int, device_path: str | None) -> CtapHidDevice:
    if device_path:
        descriptor = get_descriptor(device_path)
        return CtapHidDevice(descriptor, open_connection(descriptor))
    devs = list_devices()
    if not devs:
        raise SystemExit("No CTAP HID devices found.")
    if index < 0 or index >= len(devs):
        raise SystemExit(f"Invalid --index {index}; found {len(devs)} device(s).")
    return devs[index]


def print_json(payload: dict[str, Any]) -> None:
    print(json.dumps(payload, indent=2, default=_json_default))


def keepalive_logger(status: int) -> None:
    log(f"keepalive status={status}")


def _coerce_hex_bytes(value: str | None, label: str) -> bytes | None:
    if value is None:
        return None
    raw = value.strip().lower()
    if raw.startswith("0x"):
        raw = raw[2:]
    try:
        return bytes.fromhex(raw)
    except ValueError as exc:
        raise SystemExit(f"invalid hex for {label}: {value}") from exc


def _client_data_hash(label: str) -> bytes:
    return hashlib.sha256(label.encode("utf-8")).digest()


def _key_params() -> list[dict[str, Any]]:
    return [
        {"type": "public-key", "alg": -7},
        {"type": "public-key", "alg": -257},
    ]


def do_info(ctap2: Ctap2, device_meta: dict[str, Any]) -> int:
    info = ctap2.get_info()
    print_json({"device": device_meta, "ctap2_info": info})
    return 0


def do_make_credential(ctap2: Ctap2, args: argparse.Namespace, device_meta: dict[str, Any]) -> int:
    rp = {"id": args.rp_id, "name": args.rp_name or args.rp_id}
    user_id = args.user_id.encode("utf-8")
    user = {
        "id": user_id,
        "name": args.user_name,
        "displayName": args.user_display_name or args.user_name,
    }
    client_data_hash = _client_data_hash(f"chromecard-make-credential:{args.rp_id}:{args.user_name}")
    options = {"rk": args.resident_key, "uv": args.user_verification}
    log(
        "starting makeCredential "
        f"rp_id={args.rp_id} user={args.user_name} rk={options['rk']} uv={options['uv']}"
    )
    try:
        response = ctap2.make_credential(
            client_data_hash=client_data_hash,
            rp=rp,
            user=user,
            key_params=_key_params(),
            options=options,
            on_keepalive=keepalive_logger,
        )
    except CtapError as exc:
        print_json(
            {
                "operation": "makeCredential",
                "device": device_meta,
                "rp": rp,
                "user": user,
                "options": options,
                "error_type": "CtapError",
                "error_code": getattr(exc, "code", None),
                "error_name": str(getattr(exc, "code", None)),
                "message": str(exc),
            }
        )
        return 1
    except Exception as exc:
        print_json(
            {
                "operation": "makeCredential",
                "device": device_meta,
                "rp": rp,
                "user": user,
                "options": options,
                "error_type": type(exc).__name__,
                "message": str(exc),
                "traceback": traceback.format_exc(),
            }
        )
        return 1

    auth_data = getattr(response, "auth_data", None)
    credential_data = getattr(auth_data, "credential_data", None)
    print_json(
        {
            "operation": "makeCredential",
            "device": device_meta,
            "rp": rp,
            "user": user,
            "options": options,
            "fmt": getattr(response, "fmt", None),
            "auth_data": auth_data,
            "credential_id_hex": getattr(credential_data, "credential_id", b"").hex()
            if credential_data is not None
            else None,
            "credential_data_hex": bytes(credential_data).hex() if credential_data is not None else None,
            "att_stmt": getattr(response, "att_stmt", None),
        }
    )
    return 0


def do_get_assertion(ctap2: Ctap2, args: argparse.Namespace, device_meta: dict[str, Any]) -> int:
    allow_credential = _coerce_hex_bytes(args.allow_credential_id, "allow-credential-id")
    allow_list = [{"type": "public-key", "id": allow_credential}] if allow_credential else None
    client_data_hash = _client_data_hash(f"chromecard-get-assertion:{args.rp_id}")
    options = {"up": True, "uv": args.user_verification}
    log(
        "starting getAssertion "
        f"rp_id={args.rp_id} allow_list={1 if allow_list else 0} uv={options['uv']}"
    )
    try:
        response = ctap2.get_assertion(
            rp_id=args.rp_id,
            client_data_hash=client_data_hash,
            allow_list=allow_list,
            options=options,
            on_keepalive=keepalive_logger,
        )
    except CtapError as exc:
        print_json(
            {
                "operation": "getAssertion",
                "device": device_meta,
                "rp_id": args.rp_id,
                "allow_list": allow_list,
                "options": options,
                "error_type": "CtapError",
                "error_code": getattr(exc, "code", None),
                "error_name": str(getattr(exc, "code", None)),
                "message": str(exc),
            }
        )
        return 1
    except Exception as exc:
        print_json(
            {
                "operation": "getAssertion",
                "device": device_meta,
                "rp_id": args.rp_id,
                "allow_list": allow_list,
                "options": options,
                "error_type": type(exc).__name__,
                "message": str(exc),
                "traceback": traceback.format_exc(),
            }
        )
        return 1

    assertions: list[dict[str, Any]] = []
    for item in getattr(response, "assertions", []) or []:
        assertions.append(
            {
                "credential": getattr(item, "credential", None),
                "auth_data": getattr(item, "auth_data", None),
                "signature": getattr(item, "signature", None),
                "user": getattr(item, "user", None),
                "number_of_credentials": getattr(item, "number_of_credentials", None),
            }
        )
    print_json(
        {
            "operation": "getAssertion",
            "device": device_meta,
            "rp_id": args.rp_id,
            "allow_list": allow_list,
            "options": options,
            "assertions": assertions,
        }
    )
    return 0


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Low-level CTAP2 host probe")
    parser.add_argument("--index", type=int, default=0, help="Device index from --list output")
    parser.add_argument(
        "--device-path",
        help="Use a specific hidraw node such as /dev/hidraw0 instead of scanning all devices",
    )
    subparsers = parser.add_subparsers(dest="command", required=True)

    subparsers.add_parser("list", help="List CTAP HID devices")
    subparsers.add_parser("info", help="Fetch CTAP2 getInfo")

    make_credential = subparsers.add_parser("make-credential", help="Run raw CTAP2 makeCredential")
    make_credential.add_argument("--rp-id", default="localhost")
    make_credential.add_argument("--rp-name", default="ChromeCard Local Probe")
    make_credential.add_argument("--user-name", default="probe-user")
    make_credential.add_argument("--user-display-name", default="Probe User")
    make_credential.add_argument("--user-id", default=secrets.token_hex(16))
    make_credential.add_argument("--resident-key", action="store_true")
    make_credential.add_argument("--user-verification", action="store_true")

    get_assertion = subparsers.add_parser("get-assertion", help="Run raw CTAP2 getAssertion")
    get_assertion.add_argument("--rp-id", default="localhost")
    get_assertion.add_argument("--allow-credential-id", help="Credential id as hex")
    get_assertion.add_argument("--user-verification", action="store_true")

    return parser


def main() -> int:
    parser = build_parser()
    args = parser.parse_args()

    if args.command == "list":
        devs = list_devices()
        print_json(
            {
                "devices": [describe_device(dev) for dev in devs],
            }
        )
        return 0 if devs else 1

    dev = get_device(args.index, args.device_path)
    device_meta = describe_device(dev)
    ctap2 = get_ctap2(dev)

    if args.command == "info":
        return do_info(ctap2, device_meta)
    if args.command == "make-credential":
        return do_make_credential(ctap2, args, device_meta)
    if args.command == "get-assertion":
        return do_get_assertion(ctap2, args, device_meta)
    parser.error(f"unsupported command: {args.command}")
    return 2


if __name__ == "__main__":
    raise SystemExit(main())
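The probe's `_json_default` fallback is what makes raw CTAP2 structures printable: responses mix bytes (credential IDs, auth data), sets, and CBOR map-like objects, none of which `json.dumps` handles natively. A minimal standalone demonstration of the same conversion rules:

```python
import json

def json_default(value):
    # Mirrors the probe's fallback: bytes -> hex, sets -> sorted lists,
    # mapping-like objects -> plain dicts, anything else -> str().
    if isinstance(value, bytes):
        return value.hex()
    if isinstance(value, set):
        return sorted(value)
    if hasattr(value, "items"):
        return dict(value.items())
    return str(value)

# Illustrative payload shaped like CTAP2 response fragments.
print(json.dumps({"credential_id": b"\x01\xff", "transports": {"usb", "nfc"}},
                 default=json_default, sort_keys=True))
# -> {"credential_id": "01ff", "transports": ["nfc", "usb"]}
```

Sorting sets before emitting them keeps the JSON output deterministic, which matters when diffing probe runs.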
tests/k_client_portal.spec.js (70 lines removed)
@@ -1,70 +0,0 @@
const { test, expect } = require("@playwright/test");

const registrationTimeoutMs = Number(process.env.CARD_REGISTRATION_TIMEOUT_MS || "90000");
const loginTimeoutMs = Number(process.env.CARD_LOGIN_TIMEOUT_MS || "90000");

function uniqueUsername() {
  return `pw_${Date.now().toString(36)}`;
}

async function waitForActionResult(page, action, expectedText, timeoutMs) {
  const flowResult = page.locator("#flowResult");
  await action();
  await expect(flowResult).toContainText(expectedText, { timeout: timeoutMs });
}

test.describe("k_client portal regression", () => {
  test("registers, logs in, reads counter, logs out, and unregisters", async ({ page }) => {
    const username = uniqueUsername();
    const usersList = page.locator("#usersList");
    const flowResult = page.locator("#flowResult");
    const sessionLine = page.locator("#stateSession");

    test.setTimeout(registrationTimeoutMs + loginTimeoutMs + 90_000);

    await page.goto("/");
    await expect(page.getByRole("heading", { name: "ChromeCard Client Flow" })).toBeVisible();
    await page.getByLabel("Username").fill(username);

    await test.step("Register user", async () => {
      // Card step: press yes on the registration prompt.
      await waitForActionResult(
        page,
        () => page.getByRole("button", { name: "Register User" }).click(),
        "User registration succeeded.",
        registrationTimeoutMs
      );
      await expect(usersList).toContainText(username);
    });

    await test.step("Login", async () => {
      // Card step: press yes on the authentication prompt.
      await waitForActionResult(
        page,
        () => page.getByRole("button", { name: "Login" }).click(),
        "Login succeeded. You can now call k_server.",
        loginTimeoutMs
      );
      await expect(sessionLine).toContainText("Session active: yes");
    });

    await test.step("Call k_server counter", async () => {
      await page.getByRole("button", { name: "Call k_server" }).click();
      await expect(flowResult).toContainText("k_server was reached. Counter value:");
    });

    await test.step("Logout", async () => {
      await page.getByRole("button", { name: "Logout" }).click();
      await expect(flowResult).toContainText("Session cleared.");
      await expect(sessionLine).toContainText("Session active: no");
    });

    await test.step("Unregister user", async () => {
      const row = usersList.locator(".user-row", { hasText: username });
      await expect(row).toBeVisible();
      await row.getByRole("button", { name: "Unregister" }).click();
      await expect(flowResult).toContainText(`User ${username} was unregistered.`);
      await expect(usersList).not.toContainText(username);
    });
  });
});