Fix Codex prolite support and trust reported model availability #2006

ElSargo wants to merge 7 commits into pingdotgg:main from feat/codex-prolite-fix
- add prolite plan support for Spark eligibility and auth labels
- preserve built-in display names for known app-server models
- treat non-empty model/list results as trusted model availability
- ignore empty listed models and wait for model/list in one-shot discovery
Macroscope approvability verdict: Needs human review

This PR changes model availability resolution logic to trust app-server reported models instead of hardcoded plan-based gating. Combined with the new prolite plan support and custom models propagation, this affects which models users can access at runtime, warranting human review of the feature gating changes.
Cursor Bugbot has reviewed your changes and found 1 potential issue.

Reviewed by Cursor Bugbot for commit f7903e5.
```typescript
export function readString(value: unknown): string | undefined {
  return typeof value === "string" ? value : undefined;
}
```
Exported readString function is unused outside its file
Low Severity
The readString function is exported from codexModels.ts but is never imported by any other file. It's only used internally by nonEmptyTrimmed within the same module. The export adds unnecessary public API surface to this new shared module.
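One way to address this finding is to keep `readString` module-private and export only the helper that actually needs it. The shape of `nonEmptyTrimmed` below is an assumption — the review only mentions its name, not its implementation:

```typescript
// Sketch: readString stays module-private since it is only used internally.
function readString(value: unknown): string | undefined {
  return typeof value === "string" ? value : undefined;
}

// Hypothetical shape of nonEmptyTrimmed: return the trimmed string, or
// undefined for non-strings and whitespace-only input.
export function nonEmptyTrimmed(value: unknown): string | undefined {
  const s = readString(value)?.trim();
  return s ? s : undefined;
}
```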


Note
This is a revised version of #1980 with a narrower scope and a clearer write-up of the known tradeoffs.
What Changed
This PR adds support for the new distinction between `pro` and `prolite` in Codex account handling. Reference: https://help.openai.com/en/articles/9793128-what-is-chatgpt-pro

The change has two parts:

- The `prolite` account profile is recognized alongside the existing plan types.
- The `model/list` response is trusted when it returns a non-empty model list.

The second part is important because the app-server can currently report this account type as `unknown`, and in that case account-based gating alone is not sufficient to determine whether `gpt-5.3-codex-spark` should be available.

Why
In my testing, the Codex app-server currently reports `prolite` accounts as `unknown`. Before this change, T3 Code only treated `pro` accounts as Spark-capable, so `gpt-5.3-codex-spark` was filtered out for `prolite` users even when the app-server itself reported Spark as available.

This PR fixes that in two ways:
- Trust a non-empty `model/list` from the app-server as the source of truth for model availability.
- Add `prolite` support for when the app-server catches up and reports the newer plan type directly.

Potential Issues
Technically possible startup latency increase

This revised PR explicitly keeps `model/list` in the one-shot app-server discovery probe, so provider discovery now waits for `account/read`, `skills/list`, and `model/list`. If `model/list` were to hang while the other requests succeeded, startup could take longer than before.

Why I think this is acceptable:

- I have not observed a case where `model/list` hung while the other probe requests succeeded.
- There is no indication that `model/list` is uniquely failure-prone relative to the other probe requests.

Possibility of selecting a model that later fails server-side
Because this PR is specifically meant to handle cases where the account type is reported as `unknown`, model availability may depend on `model/list` rather than account classification alone.

Why I think this is acceptable:
UI Changes

`gpt-5.3-codex-spark` now appears in the model selector for `prolite` users.

Before: (screenshot)

After: (screenshot)
Other Considerations

I was not able to test this change against other account types.

This PR may look larger than the behavior change suggests because it also factors shared Codex model parsing and metadata into `apps/server/src/provider/codexModels.ts`.
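The shared parsing in `codexModels.ts` is described later as normalizing `model/list` responses, filtering hidden models, and preserving built-in display names for known slugs. A minimal sketch of that shape — the entry fields, helper name, and the display-name strings are all assumptions, not the PR's actual code:

```typescript
interface CodexModelEntry {
  id: string;
  displayName: string;
}

// Hypothetical table of built-in display names for known slugs.
const BUILT_IN_NAMES: Record<string, string> = {
  "gpt-5.3-codex": "GPT-5.3 Codex",
  "gpt-5.3-codex-spark": "GPT-5.3 Codex Spark",
};

// Normalize a raw model/list response: drop malformed or hidden entries
// and prefer the built-in display name for known slugs.
function normalizeModelList(raw: unknown): CodexModelEntry[] {
  if (!Array.isArray(raw)) return [];
  const out: CodexModelEntry[] = [];
  for (const item of raw) {
    if (typeof item !== "object" || item === null) continue;
    const { id, displayName, hidden } = item as Record<string, unknown>;
    if (typeof id !== "string" || id.length === 0) continue;
    if (hidden === true) continue; // filter hidden models
    out.push({
      id,
      displayName:
        BUILT_IN_NAMES[id] ??
        (typeof displayName === "string" && displayName ? displayName : id),
    });
  }
  return out;
}
```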
Medium Risk: Changes model selection and provider discovery to trust app-server `model/list` results (with new fallbacks) and adds custom-model passthrough, which can alter which model is used at runtime and affect session startup if `model/list` misbehaves.

Overview

- Adds Codex `prolite` plan support so ChatGPT Pro Lite accounts are labeled correctly and treated as Spark-capable.
- Shifts Codex model resolution to trust the app-server's `model/list` when it returns a non-empty set: `resolveCodexModelForAccount` now validates the requested model against reported availability, falls back to `gpt-5.3-codex` (or the first reported model), and preserves configured custom models even if not reported.
- Plumbs reported/configured models through the stack: the app-server manager fetches/parses `model/list` on session start, discovery probing now includes `model/list` in the snapshot, `CodexProvider` prefers reported models over account-based Spark gating, and `CodexAdapter` forwards `customModels` from settings.
- Tests are expanded/added to cover prolite, model-list fallbacks, discovery defaults, and custom model preservation.
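The fallback order described for `resolveCodexModelForAccount` can be sketched as follows. The simplified signature is an assumption based on the summary; the real function also takes account information:

```typescript
// Sketch of the resolution order: custom models are preserved even when
// unreported, requested models must otherwise appear in the reported list,
// and gpt-5.3-codex (or the first reported model) is the fallback.
function resolveModel(
  requested: string | undefined,
  availableModels: string[],
  customModels: string[],
): string {
  // A configured custom model is kept even if the app-server omits it.
  if (requested && customModels.includes(requested)) return requested;
  // Otherwise the request must match a reported model.
  if (requested && availableModels.includes(requested)) return requested;
  // Fall back to gpt-5.3-codex when reported, else the first reported model.
  if (availableModels.includes("gpt-5.3-codex")) return "gpt-5.3-codex";
  return availableModels[0] ?? "gpt-5.3-codex";
}
```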
Note

Fix Codex prolite plan support and prefer app-server reported model availability

- Adds `prolite` as a recognized Codex plan type that enables Spark, with the label 'ChatGPT Pro Lite Subscription'.
- `probeCodexDiscovery` now sends a `model/list` request and returns available models in `CodexDiscoverySnapshot`; provider status checks prefer this list over account-plan-based Spark gating.
- `resolveCodexModelForAccount` now accepts `availableModels` and `customModels`, preferring app-server reported models and configured custom models over plan-gating fallback logic.
- `CodexAppServerManager` forwards `availableModels` and `customModels` into model resolution on both session start and per-turn, and the `CodexAdapter` passes configured custom models through on `startSession`.
- Helpers in `codexModels.ts` normalize `model/list` responses, filtering hidden models and preserving built-in display names for known slugs.

Macroscope summarized f7903e5.
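The one-shot discovery probe described above — and the latency tradeoff of waiting on `model/list` — can be sketched like this. Only the three method names come from the PR; the request helper and snapshot fields here are assumptions:

```typescript
interface CodexDiscoverySnapshot {
  account: unknown;
  skills: unknown;
  availableModels: string[];
}

type Request = (method: string) => Promise<unknown>;

async function probeDiscovery(request: Request): Promise<CodexDiscoverySnapshot> {
  // All three probes run concurrently, but the snapshot resolves only once
  // model/list has also answered — the startup-latency tradeoff discussed.
  const [account, skills, modelList] = await Promise.all([
    request("account/read"),
    request("skills/list"),
    request("model/list"),
  ]);
  // Empty or malformed model lists are ignored (treated as no information).
  const availableModels = Array.isArray(modelList)
    ? modelList.filter((m): m is string => typeof m === "string")
    : [];
  return { account, skills, availableModels };
}
```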