
feat: AE-1225: poll request logs for qb workers#276

Open
jhcipar wants to merge 3 commits into main from jhcipar/AE-1225/qb-request-log-polling

Conversation


@jhcipar jhcipar commented Mar 17, 2026

Adds live QB worker log polling during ServerlessResource.run() and persists endpoint auth metadata (endpoint_id and aiKey) into manifests during provisioning/reconciliation, so follow-up calls can reliably access endpoint logs.

Introduces a new QBRequestLogFetcher that polls the /v2/{endpoint}/logs API while jobs are in progress, deduplicates and emits new log lines, and then filters duplicate stdout lines from the final job output. It also normalizes deployed resource attributes (id/endpoint_id, aiKey/ai_key) and writes them back to the local/state manifests.
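For reviewers, the polling loop can be pictured roughly like this. This is a standalone sketch, not the PR's code: FakeJob, FakeFetcher, and poll_request_logs are hypothetical stand-ins that only mirror the described shape (poll while in progress, dedupe via a seen set, emit fresh lines).

```python
import asyncio

class FakeJob:
    """Hypothetical stand-in for an in-flight serverless job."""
    def __init__(self, polls_until_done: int = 2):
        self._remaining = polls_until_done

    def done(self) -> bool:
        self._remaining -= 1
        return self._remaining < 0

class FakeFetcher:
    """Stand-in for QBRequestLogFetcher: dedupes via a seen set."""
    def __init__(self, batches):
        self._batches = iter(batches)
        self.seen: set[str] = set()

    async def fetch_logs(self):
        batch = next(self._batches, [])
        fresh = [line for line in batch if line not in self.seen]
        self.seen.update(fresh)
        return fresh

async def poll_request_logs(fetcher, job, interval: float = 1.0):
    """Poll endpoint logs while the job runs, collecting only unseen lines."""
    emitted = []
    while not job.done():
        emitted.extend(await fetcher.fetch_logs())
        await asyncio.sleep(interval)
    return emitted
```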

Two things it would be good to get thoughts on:

  1. If we poll and fetch logs during execution, do we show them again after the job completes? (The current state is that they're included in stdout and printed at the end if they weren't already fetched from endpoint logs.)
  2. Do we include system logs? Leaning towards yes, but we will have to poll separately for them, unfortunately.

@jhcipar jhcipar requested review from KAJdev and deanq March 18, 2026 00:19

@runpod-Henrik left a comment


1. Bug: start_time default is evaluated at import time, not instantiation

# request_logs.py:35
def __init__(
    self,
    ...
    start_time: datetime = datetime.now(timezone.utc),  # evaluated ONCE at module load
):

datetime.now() in a default argument is computed when the module is first imported, not when QBRequestLogFetcher() is called. Every instance created without an explicit start_time — including the one created in run() — gets the same stale timestamp. On a long-running process that calls run() hours after startup, the first poll will request logs from the entire lifetime of the process.

Fix:

start_time: Optional[datetime] = None
# in body:
self.start_time = start_time if start_time is not None else datetime.now(timezone.utc)
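The difference is easy to demonstrate in isolation. This is a minimal standalone sketch (not Flash code): `bad_start` mirrors the buggy default, `good_start` mirrors the suggested None-sentinel fix.

```python
import time
from datetime import datetime, timezone
from typing import Optional

# Bad: the default is evaluated exactly once, when `def` executes
# (i.e. at module import), so every call shares one stale timestamp.
def bad_start(start_time: datetime = datetime.now(timezone.utc)) -> datetime:
    return start_time

# Good: None sentinel; the timestamp is taken fresh on each call.
def good_start(start_time: Optional[datetime] = None) -> datetime:
    return start_time if start_time is not None else datetime.now(timezone.utc)
```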

2. Issue: print() for log output bypasses the logging system

# serverless.py:240
print(f"worker log: {line}")

Flash uses log = logging.getLogger(__name__) throughout. print() bypasses LOG_LEVEL filtering and SensitiveDataFilter. If a worker log line contains an API key or sensitive value, it will not be filtered.

Should be log.info("worker log: %s", line) (or rich.print if the intent is always-visible user output, but then it should be consistent with how the rest of the CLI surfaces output to users).
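To illustrate why this matters, here is a self-contained sketch. `RedactingFilter` is a hypothetical stand-in for Flash's SensitiveDataFilter (I have not reproduced the real filter): lines routed through the logger get masked, while print() bypasses the handler chain entirely.

```python
import io
import logging

class RedactingFilter(logging.Filter):
    """Hypothetical stand-in for Flash's SensitiveDataFilter."""
    SECRET = "rpa_SECRETKEY"

    def filter(self, record: logging.LogRecord) -> bool:
        # Format the message, mask the secret, and clear args so the
        # handler does not re-apply %-formatting.
        record.msg = record.getMessage().replace(self.SECRET, "***")
        record.args = ()
        return True

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.addFilter(RedactingFilter())
log = logging.getLogger("flash.demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.propagate = False  # keep the demo output out of the root logger

line = "token=rpa_SECRETKEY"
log.info("worker log: %s", line)   # goes through the filter -> redacted
print(f"worker log: {line}")       # bypasses logging -> leaks the key to stdout
```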


3. Issue: fetched_until stalls when logs have no timestamps

# request_logs.py:165
else:
    # not all logs have a timestamp, assume we should refetch
    self.fetched_until = self.start_time

When records have no dt field, fetched_until is set to start_time (unchanged). The next call to fetch_logs() then sets self.start_time = self.fetched_until = self.start_time — the window never advances. The deduplication seen set prevents duplicate output, but the API is called with the same time range on every poll for the entire duration of the job.

Should fall back to end_utc (the timestamp of the current fetch) rather than start_time.
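A sketch of the corrected window bookkeeping (names follow this review's description, not the PR's code verbatim):

```python
from datetime import datetime, timedelta, timezone

class LogWindow:
    """Sketch of the fetch-window state: [start_time, fetched_until]."""
    def __init__(self, start_time: datetime):
        self.start_time = start_time
        self.fetched_until = start_time

    def advance(self, records: list, end_utc: datetime) -> None:
        timestamps = [r["dt"] for r in records if r.get("dt") is not None]
        if timestamps:
            self.fetched_until = max(timestamps)
        else:
            # Fall back to the time of this fetch, not start_time, so the
            # window still moves forward when records carry no timestamps.
            self.fetched_until = end_utc
        self.start_time = self.fetched_until  # next poll starts here
```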


4. Issue: _fetch_worker_id is dead code

# request_logs.py:67
async def _fetch_worker_id(self, endpoint_id, request_id, runpod_api_key):
    ...

This method is never called anywhere in the PR. fetch_logs() doesn't pass a request_id, matched_by_request_id is always False, and worker_id is always None. If this is scaffolding for future work, a comment would help. If it's not needed, removing it keeps the surface area clean.


5. Question: stdout deduplication path has no test coverage

The tests for run() patch out _emit_endpoint_logs entirely with AsyncMock, so fetcher.seen is never populated. The stdout deduplication block:

# serverless.py:1157-1166
if raw in fetcher.seen:
    continue

has no test. A test that lets _emit_endpoint_logs populate fetcher.seen and then verifies that matching stdout lines are stripped from the final output would close this gap.
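The shape of the missing test, sketched standalone. `strip_seen_stdout` is a hypothetical extraction of the dedup block (the real code lives inline in run()), and the seen set stands in for what _emit_endpoint_logs would have populated.

```python
def strip_seen_stdout(stdout_lines: list, seen: set) -> list:
    """The stdout dedup block from run(), extracted for testability."""
    return [raw for raw in stdout_lines if raw not in seen]

def test_stdout_dedup():
    # Lines _emit_endpoint_logs already printed during polling...
    seen = {"jello: 0", "jello: 1"}
    # ...must be stripped from the final replayed stdout.
    stdout = ["jello: 0", "jello: 1", "jello: 2"]
    assert strip_seen_stdout(stdout, seen) == ["jello: 2"]
```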


6. Minor: _emit_endpoint_logs has inconsistent return

if not batch:
    return False   # explicit False
# other paths:
return             # implicit None

The return value is unused by the caller, so no functional impact — but the mixed False/None is confusing. Either make it -> None throughout (remove the return False) or commit to returning a bool.


Verdict

The mutable default argument is a real bug that will cause missed logs on any process that imports flash before calling run(). The other three issues (print vs logger, fetched_until stall, dead method) are worth fixing before merge. Tests are solid for the happy path — the one gap is the stdout deduplication path.


KAJdev commented Mar 19, 2026

I'm seeing double logs

2026-03-18 19:31:39,696 | INFO  | CpuLiveServerless:fyr7izk2pspcy6 | API /run
2026-03-18 19:31:39,901 | INFO  | CpuLiveServerless:fyr7izk2pspcy6 | Started Job:f510b3cf-6e83-44b0-9009-3bff3ac92a31-u1
2026-03-18 19:31:40,101 | INFO  | Job:f510b3cf-6e83-44b0-9009-3bff3ac92a31-u1 | Status: IN_QUEUE
2026-03-18 19:31:41,325 | INFO  | Job:f510b3cf-6e83-44b0-9009-3bff3ac92a31-u1 | .
2026-03-18 19:31:43,245 | INFO  | Job:f510b3cf-6e83-44b0-9009-3bff3ac92a31-u1 | ..
2026-03-18 19:31:45,342 | INFO  | Job:f510b3cf-6e83-44b0-9009-3bff3ac92a31-u1 | Status: COMPLETED
2026-03-18 19:31:45,626 | INFO  | Worker:qri408vqei7vdk | Delay Time: 3614 ms
2026-03-18 19:31:45,626 | INFO  | Worker:qri408vqei7vdk | Execution Time: 37 ms
2026-03-19 02:31:43,462 | INFO  | this is an info message
2026-03-19 02:31:43,462 | WARNING | this is a warning message
2026-03-19 02:31:43,462 | ERROR | this is an error message
2026-03-19 02:31:43,462 | INFO  | processing request for world
this is an info message
this is a warning message
this is an error message
processing request for world


jhcipar commented Mar 19, 2026

@KAJdev yeah, the double-logs thing happens on main too. At the end of the request, flash by default replays stdout from the completed job into your terminal, and currently the logging handler puts both formatted log lines and raw log lines into stdout. For example:

{
  "delayTime": 10342,
  "executionTime": 5078,
  "id": "3655fecf-ee84-45db-832d-6bde2b7031d9-u1",
  "output": {
    "instance_id": null,
    "instance_info": null,
    "json_result": null,
    "result": "gAWVFQAAAAAAAAB9lIwGcmVzdWx0lIwFaG93ZHmUcy4=",
    "stdout": "2026-03-19 13:55:24,435 | INFO  | 3655fecf-ee84-45db-832d-6bde2b7031d9-u1 | jello: 0\n2026-03-19 13:55:25,435 | INFO  | 3655fecf-ee84-45db-832d-6bde2b7031d9-u1 | jello: 1\n2026-03-19 13:55:26,436 | INFO  | 3655fecf-ee84-45db-832d-6bde2b7031d9-u1 | jello: 2\n2026-03-19 13:55:27,436 | INFO  | 3655fecf-ee84-45db-832d-6bde2b7031d9-u1 | jello: 3\n2026-03-19 13:55:28,436 | INFO  | 3655fecf-ee84-45db-832d-6bde2b7031d9-u1 | jello: 4\n\njello: 0\njello: 1\njello: 2\njello: 3\njello: 4\n",
    "success": true
  },
  "status": "COMPLETED",
  "workerId": "k7zjko9p3g579t"
}

I'm going to try to figure this out as a short-term fix. But also, in your case you didn't actually get any polled log lines 🫠 the latency before they're available as endpoint logs is pretty high
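One possible short-term mitigation for the replay duplication, sketched here as a standalone helper (hypothetical, not what the PR implements): drop raw replayed lines that already appeared as the tail of a formatted `... | LEVEL | message` line.

```python
def dedupe_replayed_stdout(stdout: str) -> str:
    """Suppress bare lines that duplicate the message part of a
    formatted log line earlier in the same stdout blob."""
    formatted_tails = set()
    kept = []
    for line in stdout.splitlines():
        if " | " in line:
            # Formatted line: remember its message part, keep it.
            formatted_tails.add(line.rsplit(" | ", 1)[-1])
            kept.append(line)
        elif line and line in formatted_tails:
            continue  # raw duplicate of an already-shown formatted line
        else:
            kept.append(line)
    return "\n".join(kept)
```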


KAJdev commented Mar 19, 2026

Yeah, I even tried 60s requests and would only get about 1 log out of 30 at runtime. I think we should probably consider a different method, since I'm not sure this is going to be that useful.
