NO-JIRA: devcontainer fixes and improvements#3511
smg247 wants to merge 5 commits into openshift:main
Conversation
- lint now works
- dual db mode with seeded and prodlike dbs for different workflows
- ability to restore prodlike db from production backup dump
- convert MCP tool functions to async (asyncio) to fix server connection loss
- Mount host ~/.claude into container; install Claude Code in post-create
- Isolate e2e redis db to avoid collisions
The previous commit removed the container fallback, breaking lint on bare hosts without golangci-lint installed. Now auto-detects: uses local golangci-lint when available, otherwise runs via Podman container. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
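The auto-detect behavior described in the commit can be sketched roughly as follows. This is an illustrative Python translation of the shell script's decision, not the script itself; the container image name, mount flags, and helper name are assumptions (only `--timeout 10m` is confirmed later in this PR).

```python
import shutil

def lint_command(extra_args, have_local=None):
    """Build the golangci-lint invocation: prefer a local binary on PATH,
    otherwise fall back to running inside a Podman container.

    The container image below is a placeholder, not the script's pinned image.
    """
    if have_local is None:
        have_local = shutil.which("golangci-lint") is not None
    if have_local:
        return ["golangci-lint", "run", "--timeout", "10m", *extra_args]
    return ["podman", "run", "--rm", "-v", ".:/app", "-w", "/app",
            "docker.io/golangci/golangci-lint:latest",
            "golangci-lint", "run", *extra_args]
```

Building the argument vector (rather than a shell string) keeps the fallback free of shell-quoting pitfalls in either path.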
|
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: smg247.
Walkthrough

Adds dual dev database modes (seed vs prod-like) selectable via `SIPPY_DATA_MODE`, migrates MCP server tools/helpers to async with mode-aware DSN helpers, and introduces MCP tools `restore_prodlike_db` and `run_e2e`. Updates devcontainer, init/post-create scripts, prompts, docs, tests, linting, and related tooling to support the two-database workflow.

Changes: Dual Database Modes and Async MCP Server
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant MCP as MCP Server
    participant Config as Env/Helpers
    participant Service as Backend (sippy)
    participant DB as PostgreSQL
    Note over User,MCP: Seed mode (default)
    User->>MCP: sippy_serve()
    MCP->>Config: _data_mode() -> "seed"
    Config-->>MCP: select seed DSN (SIPPY_SEED_DATABASE_DSN or default)
    MCP->>Service: spawn backend (data_provider=postgres)
    Service->>DB: connect to postgres (seed DB)
    DB-->>Service: seed data available
    Service-->>MCP: ready
    MCP-->>User: backend ready
    Note over User,MCP: Prod-like mode
    User->>MCP: sippy_serve()
    MCP->>Config: _data_mode() -> "prod-like"
    Config-->>MCP: select prodlike DSN (SIPPY_PRODLIKE_DATABASE_DSN or default)
    MCP->>Service: spawn backend (data_provider=bigquery, views_file=config/views.yaml)
    Service->>DB: connect to prodlike fallback
    DB-->>Service: prodlike data available
    Service-->>MCP: ready
    MCP-->>User: backend ready
```
```mermaid
sequenceDiagram
    actor User
    participant MCP as MCP Server
    participant Script as restore_prodlike_db.sh
    participant DB as PostgreSQL
    participant Logs as Log File
    User->>MCP: restore_prodlike_db(backup_file)
    MCP->>MCP: validate repo-relative path and permissions
    MCP->>Script: spawn restore subprocess
    Script->>Script: detect dump format
    alt custom/dir format
        Script->>DB: pg_restore -d prodlike ...
    else sql dump
        Script->>DB: psql -d prodlike -f ...
    end
    DB-->>Script: restore output
    Script->>Logs: write logs
    MCP->>Logs: tail logs and report status
    MCP-->>User: restore success/failure with log excerpt
```
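The "detect dump format" step in the diagram can be approximated as follows. Custom-format `pg_dump -Fc` archives start with the `PGDMP` magic bytes and directory-format dumps (`pg_dump -Fd`) are directories; everything else is treated as plain SQL. This is a sketch of the idea, not the script's actual detection logic.

```python
import os

def detect_dump_format(path: str) -> str:
    """Classify a PostgreSQL backup as 'directory', 'custom', or 'sql'.

    Directory and custom formats are restored with pg_restore,
    plain SQL dumps with psql -f, as in the restore flow above.
    """
    if os.path.isdir(path):
        return "directory"
    with open(path, "rb") as f:
        magic = f.read(5)
    return "custom" if magic == b"PGDMP" else "sql"
```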
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 16 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
mcp/server.py (2)
**377-432**: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fail fast when `prod-like` mode cannot resolve BigQuery credentials.

In `prod-like` mode this defaults `data_provider` to `bigquery`, but the `_resolve_bigquery_creds()` error is discarded. That lets `sippy_serve` return "started and ready" even when the required credentials are missing or invalid, and the misconfiguration only shows up later on the first BigQuery-backed request.

Suggested fix
```diff
-    creds_path, _ = _resolve_bigquery_creds(bigquery_credentials_file)
-    if creds_path:
+    creds_path = None
+    if data_provider == "bigquery" or bigquery_credentials_file is not None:
+        creds_path, err = _resolve_bigquery_creds(bigquery_credentials_file)
+        if err:
+            return err
+    if creds_path:
         args.extend(["--google-service-account-credential-file", str(creds_path)])
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@mcp/server.py` around lines 377 - 432, When running in prod-like mode the code must fail fast if BigQuery credentials are missing: capture both return values from _resolve_bigquery_creds (creds_path, err) instead of discarding err, and if data_mode == "prod-like" (or data_provider == "bigquery") and creds_path is None return or raise a clear error including err; only extend args with the google-service-account-credential-file when creds_path is present. This change should be applied around the existing call to _resolve_bigquery_creds in sippy_serve so the service does not report started when BigQuery creds are unresolved.
**455-466**: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Make `sippy_stop` async before calling `_stop_pids`.

`_stop_pids()` is now async, so calls at lines 461 and 465 create unawaited coroutine objects. The tool will report success without actually sending any signals to stop the processes.

Suggested fix
```diff
 @mcp.tool()
-def sippy_stop() -> str:
+async def sippy_stop() -> str:
     """Stop running sippy_serve and sippy_ng_start processes."""
     results = []
     serve_pids = _pids_sippy_serve()
     if serve_pids:
-        _stop_pids(serve_pids)
+        await _stop_pids(serve_pids)
         results.append(f"Stopped sippy_serve (pid(s) {', '.join(str(p) for p in serve_pids)})")
     ng_pids = _pids_sippy_ng_dev()
     if ng_pids:
-        _stop_pids(ng_pids)
+        await _stop_pids(ng_pids)
         results.append(f"Stopped sippy_ng (pid(s) {', '.join(str(p) for p in ng_pids)})")
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@mcp/server.py` around lines 455 - 466, The sippy_stop function currently calls the now-async _stop_pids without awaiting it; make sippy_stop an async function (change def sippy_stop() -> str: to async def sippy_stop() -> str:) and await each _stop_pids(...) call where serve_pids and ng_pids are handled, ensuring you await _stop_pids(serve_pids) and await _stop_pids(ng_pids); keep the rest of the logic (building results list and return) the same and keep the `@mcp.tool()` decorator in place if it supports async functions.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.apm/instructions/dev-commands.instructions.md:
- Around line 8-17: The single-database example still points SIPPY_DATABASE_DSN
at the seed DB which misleads prod-like workflows; update the docs so the
example is mode-specific or remove the ambiguous single-command and point
readers to the explicit commands using SIPPY_SEED_DATABASE_DSN and
SIPPY_PRODLIKE_DATABASE_DSN (or mention SIPPY_DATA_MODE=prodlike) when running
go run ./cmd/sippy migrate --database-dsn, ensuring the example clearly shows
how to migrate the seed vs prod-like databases.
In @.devcontainer/devcontainer.json:
- Around line 11-12: The devcontainer mounts for ~/.config/gcloud and ~/.claude
are unconditional and can fail on first-run; update
.devcontainer/devcontainer.json so the host directories are created before the
bind mounts or gate the mounts: add or update the initializeCommand to run a
host-side mkdir -p for ${localEnv:HOME}/.config/gcloud and
${localEnv:HOME}/.claude (so they exist before the container is created), or
alternatively change the mount entries to a gated/optional pattern supported by
your devcontainer runtime; touch the initializeCommand and the mount lines (the
entries shown in the diff) so they reference these ensured directories.
In @.devcontainer/post-create.sh:
- Around line 35-39: The current post-create.sh logic creates a symlink at
"$claude_projects/-workspace" only when it doesn't exist, causing it to remain
pointing to a previous repo; update the block that handles
HOST_WORKSPACE_FOLDER/host_project_dir/claude_projects to detect when
"$claude_projects/-workspace" is a symlink (use -L) and its target (readlink)
differs from "$claude_projects/$host_project_dir", and in that case remove the
existing symlink and recreate it to point at
"$claude_projects/$host_project_dir"; keep the existing checks for directory
existence and ensure the new code uses ln -s to recreate the symlink atomically
after removal.
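The recreate-if-stale behavior described above boils down to the following. It is shown as a Python sketch purely for illustration; the real fix belongs in the shell script (`-L`, `readlink`, `rm`, `ln -s`), and the function name here is hypothetical.

```python
import os

def ensure_symlink(link: str, target: str) -> None:
    """Point `link` at `target`, replacing the symlink if it is stale."""
    if os.path.islink(link):
        if os.readlink(link) == target:
            return  # already points at the right repo
        os.remove(link)  # stale: pointed at a previous repo
    os.symlink(target, link)
```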
In `@mcp/server.py`:
- Around line 13-17: The code that determines SIPPY_DATA_MODE should respect an
already-exported environment variable instead of unconditionally re-loading the
.env file; update the _data_mode() logic to first check
os.environ.get("SIPPY_DATA_MODE") and return that if present, and only
read/parse _DEVCONTAINER_ENV (or call load_dotenv) as a fallback when the env
var is absent; ensure any load_dotenv call uses override=False so it never
overwrites an explicit SIPPY_DATA_MODE.
---
Outside diff comments:
In `@mcp/server.py`:
- Around line 377-432: When running in prod-like mode the code must fail fast if
BigQuery credentials are missing: capture both return values from
_resolve_bigquery_creds (creds_path, err) instead of discarding err, and if
data_mode == "prod-like" (or data_provider == "bigquery") and creds_path is None
return or raise a clear error including err; only extend args with the
google-service-account-credential-file when creds_path is present. This change
should be applied around the existing call to _resolve_bigquery_creds in
sippy_serve so the service does not report started when BigQuery creds are
unresolved.
- Around line 455-466: The sippy_stop function currently calls the now-async
_stop_pids without awaiting it; make sippy_stop an async function (change def
sippy_stop() -> str: to async def sippy_stop() -> str:) and await each
_stop_pids(...) call where serve_pids and ng_pids are handled, ensuring you
await _stop_pids(serve_pids) and await _stop_pids(ng_pids); keep the rest of the
logic (building results list and return) the same and keep the `@mcp.tool`()
decorator in place if it supports async functions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 78335c80-d689-404c-b18e-07feee698aea
⛔ Files ignored due to path filters (29)
* `.claude/commands/sippy-dev-app.md` is excluded by `!.claude/**`
* `.claude/commands/sippy-dev-migrate.md` is excluded by `!.claude/**`
* `.claude/commands/sippy-dev-regression-cache.md` is excluded by `!.claude/**`
* `.claude/commands/sippy-dev-restore-prodlike.md` is excluded by `!.claude/**`
* `.claude/commands/sippy-dev-serve.md` is excluded by `!.claude/**`
* `.claude/commands/sippy-dev-setup.md` is excluded by `!.claude/**`
* `.claude/commands/sippy-dev-tests.md` is excluded by `!.claude/**`
* `.claude/rules/dev-commands.md` is excluded by `!.claude/**`
* `.claude/rules/general.md` is excluded by `!.claude/**`
* `.claude/rules/mcp.md` is excluded by `!.claude/**`
* `.cursor/rules/dev-commands.mdc` is excluded by `!.cursor/**`
* `.cursor/rules/general.mdc` is excluded by `!.cursor/**`
* `.cursor/rules/mcp.mdc` is excluded by `!.cursor/**`
* `.gemini/commands/sippy-dev-app.toml` is excluded by `!.gemini/**`
* `.gemini/commands/sippy-dev-migrate.toml` is excluded by `!.gemini/**`
* `.gemini/commands/sippy-dev-regression-cache.toml` is excluded by `!.gemini/**`
* `.gemini/commands/sippy-dev-restore-prodlike.toml` is excluded by `!.gemini/**`
* `.gemini/commands/sippy-dev-serve.toml` is excluded by `!.gemini/**`
* `.gemini/commands/sippy-dev-setup.toml` is excluded by `!.gemini/**`
* `.gemini/commands/sippy-dev-tests.toml` is excluded by `!.gemini/**`
* `.opencode/commands/sippy-dev-app.md` is excluded by `!.opencode/**`
* `.opencode/commands/sippy-dev-migrate.md` is excluded by `!.opencode/**`
* `.opencode/commands/sippy-dev-regression-cache.md` is excluded by `!.opencode/**`
* `.opencode/commands/sippy-dev-restore-prodlike.md` is excluded by `!.opencode/**`
* `.opencode/commands/sippy-dev-serve.md` is excluded by `!.opencode/**`
* `.opencode/commands/sippy-dev-setup.md` is excluded by `!.opencode/**`
* `.opencode/commands/sippy-dev-tests.md` is excluded by `!.opencode/**`
* `AGENTS.md` is excluded by `!AGENTS.md`
* `CLAUDE.md` is excluded by `!CLAUDE.md`
📒 Files selected for processing (25)
* `.apm/instructions/dev-commands.instructions.md`
* `.apm/instructions/general.instructions.md`
* `.apm/instructions/mcp.instructions.md`
* `.apm/prompts/sippy-dev-app.prompt.md`
* `.apm/prompts/sippy-dev-migrate.prompt.md`
* `.apm/prompts/sippy-dev-regression-cache.prompt.md`
* `.apm/prompts/sippy-dev-restore-prodlike.prompt.md`
* `.apm/prompts/sippy-dev-serve.prompt.md`
* `.apm/prompts/sippy-dev-setup.prompt.md`
* `.apm/prompts/sippy-dev-tests.prompt.md`
* `.devcontainer/.env.example`
* `.devcontainer/Dockerfile`
* `.devcontainer/README.md`
* `.devcontainer/devcontainer.json`
* `.devcontainer/init-services.sh`
* `.devcontainer/post-create.sh`
* `apm.lock.yaml`
* `hack/go-lint.sh`
* `mcp/AGENTS.md`
* `mcp/CLAUDE.md`
* `mcp/README.md`
* `mcp/requirements.txt`
* `mcp/server.py`
* `mcp/test_server.py`
* `scripts/e2e.sh`
…iner

- Validate database_dsn and redis_url inputs against scheme regexes before passing to subprocess (command injection finding)
- Fail fast in sippy_serve when data_provider=bigquery but no BigQuery credentials are available
- Make sippy_stop async and await _stop_pids so processes are actually terminated
- Ensure ~/.config/gcloud and ~/.claude exist before bind-mount
- Remove ambiguous SIPPY_DATABASE_DSN migration example from docs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
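The scheme-validation idea from the first point in that commit could look roughly like this. The regexes and function name are assumptions for illustration, not the PR's actual patterns.

```python
import re

# Assumed patterns -- illustrative only; the PR's actual regexes may differ.
_DSN_RE = re.compile(r"^postgres(ql)?://\S+$")
_REDIS_RE = re.compile(r"^rediss?://\S+$")

def validate_inputs(database_dsn: str, redis_url: str) -> None:
    """Reject values whose scheme doesn't match before they reach a subprocess."""
    if not _DSN_RE.match(database_dsn):
        raise ValueError(f"invalid database DSN: {database_dsn!r}")
    if not _REDIS_RE.match(redis_url):
        raise ValueError(f"invalid redis URL: {redis_url!r}")
```

Even though `create_subprocess_exec` never invokes a shell, rejecting malformed values early gives a clear error instead of passing junk to the child process.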
🧹 Nitpick comments (1)
.apm/instructions/dev-commands.instructions.md (1)
**12-15**: 💤 Low value

Add language specifier to fenced code block.
The code block lacks a language specifier, which helps with syntax highlighting and tooling.
Suggested fix
Suggested fix

````diff
-```
+```shell
 go run ./cmd/sippy migrate --database-dsn "$SIPPY_SEED_DATABASE_DSN"
 go run ./cmd/sippy migrate --database-dsn "$SIPPY_PRODLIKE_DATABASE_DSN"
````

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate.

In @.apm/instructions/dev-commands.instructions.md around lines 12 - 15: Add a language specifier to the fenced code block containing the two `go run ./cmd/sippy migrate` commands so tooling can syntax-highlight it; change the opening fence from a bare fence to one marked `shell` (or `bash`) so the lines `go run ./cmd/sippy migrate --database-dsn "$SIPPY_SEED_DATABASE_DSN"` and `go run ./cmd/sippy migrate --database-dsn "$SIPPY_PRODLIKE_DATABASE_DSN"` are rendered with shell highlighting.

---

ℹ️ Review info

⚙️ Run configuration

**Configuration used**: Repository YAML (base), Central YAML (inherited)
**Review profile**: CHILL
**Plan**: Enterprise
**Run ID**: `c80c7a83-f36c-459f-8d70-be7292e461fb`

📥 Commits

Reviewing files that changed from the base of the PR and between f8aa801902a2f08a95fd9851ccae93205449d50c and 90b1a9863954f79b13c491e56868d357b4e6cfa1.

⛔ Files ignored due to path filters (4)

* `.claude/rules/dev-commands.md` is excluded by `!.claude/**`
* `.cursor/rules/dev-commands.mdc` is excluded by `!.cursor/**`
* `AGENTS.md` is excluded by `!AGENTS.md`
* `CLAUDE.md` is excluded by `!CLAUDE.md`

📒 Files selected for processing (5)

* `.apm/instructions/dev-commands.instructions.md`
* `.devcontainer/init-services.sh`
* `apm.lock.yaml`
* `mcp/server.py`
* `mcp/test_server.py`

🚧 Files skipped from review as they are similar to previous changes (2)

* `.devcontainer/init-services.sh`
* `apm.lock.yaml`
- Restore `--timeout 10m` to golangci-lint in the local path of hack/go-lint.sh (was dropped when simplifying the script)
- Exclude mcp/server.py from Snyk: create_subprocess_exec does not use a shell and all inputs are validated before reaching the subprocess

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@smg247: This pull request explicitly references no jira issue.
Scheduling required tests: |
@smg247: all tests passed!
lint now works
dual db mode with seeded and prodlike dbs for different workflows
ability to restore prodlike db from production backup dump
convert MCP tool functions to async (asyncio) to fix server connection loss
Mount host ~/.claude into container
install Claude Code in post-create
Isolate e2e redis db to avoid collisions
Summary by CodeRabbit
New Features
Documentation
Chores
Tests