[RUFF] Enable G rule (logging format) #1040
base: devel
```diff
@@ -27,7 +27,7 @@
             continue
         if not provider.needs_refresh(token):
             continue
-        logger.info(f"Refreshing token for {name} (expires {token.expires_at})")
+        logger.info("Refreshing token for %s (expires %s)", name, token.expires_at)
```
Check failure — Code scanning / CodeQL: Clear-text logging of sensitive information (High)

This expression logs sensitive data (password).

Copilot Autofix (AI, 20 days ago): In general, to fix clear-text logging of sensitive information, either stop logging the sensitive value, replace it with a non-sensitive surrogate (e.g., a static label or hash/truncated form), or ensure it is properly redacted before logging. The goal is to keep logs useful for operations while avoiding exposing identifiers that might help correlate secrets or accounts if logs are compromised. Here, the sensitive part is […]. No changes are required in […].

Suggested changeset: 1 file, `airlock/oauth/refresh.py`. Copilot is powered by AI and may make mistakes. Always verify output.
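The "non-sensitive surrogate (hash/truncated form)" the autofix mentions can be sketched as a small helper. This is hypothetical and not part of the PR:

```python
import hashlib

def surrogate(value: str, digest_chars: int = 8) -> str:
    """Replace a sensitive value with a short, stable SHA-256 surrogate.

    The same input always maps to the same tag, so log lines can still be
    correlated, but the original value cannot be read back from the log.
    """
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()
    return f"sha256:{digest[:digest_chars]}"

tag = surrogate("github-prod-token")
assert tag.startswith("sha256:")
assert len(tag) == len("sha256:") + 8
assert tag == surrogate("github-prod-token")  # stable across calls
assert tag != surrogate("github-dev-token")   # distinct inputs differ

# Usage sketch: logger.info("Refreshing token for %s", surrogate(name))
```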
```diff
         new_token = await provider.refresh_tokens(token.refresh_token)
         await k8s_store.write_token(
             provider.config.refresh_secret.name,
```
```diff
@@ -42,9 +42,9 @@
             annotations=provider.config.access_secret.annotations or None,
             fields=ACCESS_TOKEN_FIELDS,
         )
-        logger.info(f"Refreshed token for {name} (new expiry {new_token.expires_at})")
+        logger.info("Refreshed token for %s (new expiry %s)", name, new_token.expires_at)
```
Check failure — Code scanning / CodeQL: Clear-text logging of sensitive information (High)

This expression logs sensitive data (password).

Copilot Autofix (AI, 20 days ago): In general, the fix is to avoid logging potentially sensitive or user-derived data directly, especially in security-sensitive flows. Instead, log only non-sensitive metadata (e.g., a generic label, or an internal, sanitized identifier), or remove the data point entirely if it is not necessary for debugging/monitoring. Here, the problematic logs are `logger.info("Refreshing token for %s (expires %s)", name, token.expires_at)` and `logger.info("Refreshed token for %s (new expiry %s)", name, new_token.expires_at)`. We can eliminate the exposure by no longer logging […]. Concretely, in […]. No new imports or methods are needed; we simply adjust the existing log calls to remove the tainted argument while preserving the rest of the behavior.

Suggested changeset: 1 file, `airlock/oauth/refresh.py`. Copilot is powered by AI and may make mistakes. Always verify output.
```diff
         except Exception:
-            logger.exception(f"Failed to refresh token for {name}")
+            logger.exception("Failed to refresh token for %s", name)
```
Check failure — Code scanning / CodeQL: Clear-text logging of sensitive information (High)

This expression logs sensitive data (password).

Copilot Autofix (AI, 20 days ago): In general: avoid logging any value that might contain sensitive data (tokens, passwords, secrets, or identifiers that could embed them). When logging is needed for debugging/operations, log non-sensitive metadata or a redacted version instead. Best fix here: adjust the log messages in […]. Concretely, in […].

Suggested changeset: 1 file, `airlock/oauth/refresh.py`. Copilot is powered by AI and may make mistakes. Always verify output.
```diff
     try:
         await k8s_store.delete_orphaned_secrets(target_namespace, known_secret_names)
     except Exception:
```
```diff
@@ -92,7 +92,7 @@ async def callback_get(provider_name: str, request: Request) -> RedirectResponse
             annotations=provider.config.access_secret.annotations or None,
             fields=ACCESS_TOKEN_FIELDS,
         )
-        logger.info(f"Stored tokens for {provider_name} (expires {token.expires_at})")
+        logger.info("Stored tokens for %s (expires %s)", provider_name, token.expires_at)
         return RedirectResponse("/#/oauth")
```
```diff
     @router.post("/callback/{provider_name}")
```

```diff
@@ -117,7 +117,7 @@ async def callback_post(provider_name: str, body: _PlaidCallbackBody) -> Redirec
             annotations=provider.config.access_secret.annotations or None,
             fields=ACCESS_TOKEN_FIELDS,
         )
-        logger.info(f"Stored Plaid tokens for {provider_name}")
+        logger.info("Stored Plaid tokens for %s", provider_name)
         return RedirectResponse("/#/oauth", status_code=303)
```
```diff
     return router
```
```diff
@@ -88,7 +88,7 @@ def __init__(self, listen_port: int, max_workers: int = 100):
         self._upstream_url: str | None = None
         self._creds_lock = threading.Lock()
         self.server_socket: socket.socket | None = None
-        self._running = False
+        self.running = False
         self._thread: threading.Thread | None = None
         self._executor: ThreadPoolExecutor | None = None
         self._connections: list[socket.socket] = []
```
```diff
@@ -118,15 +118,15 @@ def start(self) -> None:
         self.server_socket.settimeout(0.5)

         self._executor = ThreadPoolExecutor(max_workers=self.max_workers, thread_name_prefix="proxy")
-        self._running = True
+        self.running = True
         self._thread = threading.Thread(target=self._serve, daemon=True)
         self._thread.start()

         logger.info("Auth proxy started on 127.0.0.1:%d (max_workers: %d)", self.listen_port, self.max_workers)

     def stop(self) -> None:
         """Stop the proxy server."""
-        self._running = False
+        self.running = False
         if self._thread:
             self._thread.join(timeout=2)
         if self._executor:
```
```diff
@@ -140,7 +140,7 @@ def stop(self) -> None:

     def _serve(self) -> None:
         """Main server loop."""
-        while self._running:
+        while self.running:
             try:
                 client_sock, _ = self.server_socket.accept()  # type: ignore[union-attr]
                 self._connections.append(client_sock)
```
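The rename from `self._running` to `self.running` makes the flag public (it is read from another module later in this diff). The underlying accept-loop pattern — a short socket timeout so `stop()` is observed within ~0.5 s — can be sketched standalone; this is a simplified toy (no executor, credentials, or connection tracking), not the PR's class:

```python
import socket
import threading

class TinyServer:
    """Minimal start/stop sketch of the pattern used by the proxies above."""

    def __init__(self) -> None:
        self.running = False
        self.server_socket: socket.socket | None = None
        self._thread: threading.Thread | None = None

    def start(self) -> None:
        self.server_socket = socket.socket()
        self.server_socket.bind(("127.0.0.1", 0))  # ephemeral port
        self.server_socket.listen()
        self.server_socket.settimeout(0.5)  # lets the loop re-check `running`
        self.running = True
        self._thread = threading.Thread(target=self._serve, daemon=True)
        self._thread.start()

    def _serve(self) -> None:
        while self.running:
            try:
                client, _ = self.server_socket.accept()
                client.close()
            except socket.timeout:
                continue  # timeout expired: loop back and re-check the flag

    def stop(self) -> None:
        self.running = False
        if self._thread:
            self._thread.join(timeout=2)  # at most one 0.5s accept() wait
        if self.server_socket:
            self.server_socket.close()

srv = TinyServer()
srv.start()
srv.stop()
assert srv.running is False
assert not srv._thread.is_alive()
```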
```diff
@@ -309,7 +309,7 @@ def __init__(self, sock_path: Path, remote_target: str, max_workers: int = 100):
         self._upstream_url: str | None = None
         self._creds_lock = threading.Lock()
         self.server_socket: socket.socket | None = None
-        self._running = False
+        self.running = False
         self._thread: threading.Thread | None = None
         self._executor: ThreadPoolExecutor | None = None
         self._connections: list[socket.socket] = []
```
```diff
@@ -341,15 +341,15 @@ def start(self) -> None:
         self.server_socket.settimeout(0.5)

         self._executor = ThreadPoolExecutor(max_workers=self.max_workers, thread_name_prefix="uds-proxy")
-        self._running = True
+        self.running = True
         self._thread = threading.Thread(target=self._serve, daemon=True)
         self._thread.start()

         logger.info("UDS remote proxy started on %s → %s", self.sock_path, self.remote_target)

     def stop(self) -> None:
         """Stop the UDS proxy server."""
-        self._running = False
+        self.running = False
         if self._thread:
             self._thread.join(timeout=2)
         if self._executor:
```
```diff
@@ -365,7 +365,7 @@ def stop(self) -> None:

     def _serve(self) -> None:
         """Main server loop."""
-        while self._running:
+        while self.running:
             try:
                 client_sock, _ = self.server_socket.accept()  # type: ignore[union-attr]
                 self._connections.append(client_sock)
```
```diff
@@ -347,7 +347,7 @@ async def setup_auth_proxy(
     # Create combined CA bundle (for tools like uv that use SSL_CERT_FILE)
     _create_combined_ca_bundle(paths)

-    status = (f"running (port {port})" if proxy._running else "configured") if proxy is not None else "uds-only"
+    status = (f"running (port {port})" if proxy.running else "configured") if proxy is not None else "uds-only"
     ca_status = "custom CA" if combined_ca.exists() else "system"

     logger.info("Auth proxy setup complete")
```
```diff
@@ -39,7 +39,7 @@
 MODEL_OPT = typer.Option(DEFAULT_MODEL, "--model", help="Model name (OPENAI_MODEL)")
 NETWORK_OPT = typer.Option(_ENV_NETWORK, "--network", help="Docker network (ADGN_EDITOR_DOCKER_NETWORK)")
 MAX_TURNS_OPT = typer.Option(40, "--max-turns", help="Maximum agent turns before abort")
-VERBOSE_OPT = typer.Option(False, "--verbose", "-v", help="Show agent actions in real-time")
+VERBOSE_OPT = typer.Option(default=False, help="Show agent actions in real-time")


 @app.callback(invoke_without_command=True)
```
```diff
@@ -108,12 +108,12 @@ async def commit(
     timeout_secs: int | None = typer.Option(
         None, "--timeout-secs", help="Maximum seconds for the AI request; 0 disables timeout"
     ),
-    stage_all: bool = typer.Option(False, "-a", "--all", help="Stage all tracked changes"),
-    no_verify: bool = typer.Option(False, "--no-verify", help="Skip pre-commit hooks"),
-    amend: bool = typer.Option(False, "--amend", help="Amend previous commit"),
-    accept_ai: bool = typer.Option(False, "--accept-ai", help="Commit with AI message, skip editor"),
-    verbose: bool = typer.Option(False, "-v", help="Verbose git commit"),
-    debug: bool = typer.Option(False, "--debug", help="Show logger output"),
+    stage_all: bool = typer.Option(default=False, help="Stage all tracked changes"),
+    no_verify: bool = typer.Option(default=False, help="Skip pre-commit hooks"),
+    amend: bool = typer.Option(default=False, help="Amend previous commit"),
+    accept_ai: bool = typer.Option(default=False, help="Commit with AI message, skip editor"),
+    verbose: bool = typer.Option(default=False, help="Verbose git commit"),
+    debug: bool = typer.Option(default=False, help="Show logger output"),
 ):
     """Run the git-commit-ai process."""
     repo = pygit2.Repository(get_build_workspace_directory())
```
```diff
@@ -76,7 +76,7 @@ class Settings(BaseModel):

     @classmethod
     def from_file(cls, path: Path) -> "Settings":
-        logger.info(f"Loading settings from {path.absolute()}")
+        logger.info("Loading settings from %s", path.absolute())
         with path.open() as f:
             data = yaml.safe_load(f)
         if not isinstance(data, dict):
```
```diff
@@ -74,11 +74,11 @@ async def _connection_loop(self) -> None:
                 self._connected.clear()
                 return
             except (CannotConnect, ConnectionFailed, NotConnected, OSError) as exc:
-                logger.warning(f"HA connection lost: {exc}. Reconnecting in {backoff:.1f}s")
+                logger.warning("HA connection lost: %s. Reconnecting in %.1fs", exc, backoff)
             except asyncio.CancelledError:
                 raise
             except Exception:
-                logger.exception(f"Unexpected error in HA connection loop. Reconnecting in {backoff:.1f}s")
+                logger.exception("Unexpected error in HA connection loop. Reconnecting in %.1fs", backoff)
             finally:
                 self._connected.clear()
                 if self._client is not None:
```
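Printf-style conversion specifiers carry over in these rewrites, so the f-string's `{backoff:.1f}` becomes `%.1f`. `LogRecord.getMessage()` shows the two forms produce identical text (the values below are made up for illustration):

```python
import logging

backoff = 1.5
exc_text = "Connection reset"

# Build a record by hand to inspect deferred %-formatting directly.
record = logging.LogRecord(
    name="ha", level=logging.WARNING, pathname=__file__, lineno=0,
    msg="HA connection lost: %s. Reconnecting in %.1fs",
    args=(exc_text, backoff), exc_info=None,
)

# Deferred %-formatting yields the same string the old f-string produced.
assert record.getMessage() == f"HA connection lost: {exc_text}. Reconnecting in {backoff:.1f}s"
assert record.getMessage() == "HA connection lost: Connection reset. Reconnecting in 1.5s"
```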
```diff
@@ -96,7 +96,7 @@ async def _ensure_entities(self) -> dict[str, EntityInfo]:
             self._entities_time = now
         except (ConnectionError, NotConnected, CannotConnect, ConnectionFailed) as exc:
             if self._entities is not None:
-                logger.warning(f"Registry refresh failed ({exc}), serving stale cache")
+                logger.warning("Registry refresh failed (%s), serving stale cache", exc)
             else:
                 raise
         return self._entities
```
```diff
@@ -126,7 +126,7 @@ async def _fetch_registry(self) -> dict[str, EntityInfo]:
             area_id = device_area.get(device_id)
             registry[entity_id] = EntityInfo(entity_id=entity_id, device_id=device_id, area_id=area_id)

-        logger.info(f"Fetched registry: {len(registry)} entities")
+        logger.info("Fetched registry: %d entities", len(registry))
         return registry

     def _get_entity(self, entities: dict[str, EntityInfo], entity_id: str) -> EntityInfo:
```
```diff
@@ -525,10 +525,10 @@ def main() -> None:
     seed_tasks = [t for t in all_tasks if t.type == task_type_enum.value]

     if not seed_tasks:
-        logger.error(f"No tasks found with type '{task_type_enum.value}' in {seeds_path}")
+        logger.error("No tasks found with type '%s' in %s", task_type_enum.value, seeds_path)
         sys.exit(1)

-    logger.info(f"Loaded {len(seed_tasks)} {task_type_enum.value} tasks from {len(all_tasks)} total tasks")
+    logger.info("Loaded %d %s tasks from %d total tasks", len(seed_tasks), task_type_enum.value, len(all_tasks))

     # Load grading criteria from YAML
     logger.info("Loading grading criteria")
```
```diff
@@ -403,7 +403,8 @@ async def _run_setup_script(self, script_path: str, script_type: str, log_prefix
         cmd_args = [str(setup_script), c.id, self.task_id, str(self._output_dir)]
         script_stat = await asyncio.to_thread(setup_script.stat)
         self._logger.info(
-            f"Running {script_type.lower()} script",
+            "%s script running",
+            script_type.lower(),
             script=str(setup_script),
             container_id=c.id,
             task_id=self.task_id,
```
```diff
@@ -445,13 +446,14 @@ async def _run_setup_script(self, script_path: str, script_type: str, log_prefix

         if exit_code != 0:
             self._logger.error(
-                f"{script_type} script failed - CONTAINER LEFT RUNNING FOR DEBUG",
+                "%s script failed - CONTAINER LEFT RUNNING FOR DEBUG",
+                script_type,
                 container_id=c.id,
                 exit_code=exit_code,
                 debug_hint=f"Run: docker logs {c.id}",
             )
             raise RuntimeError(f"{script_type} script failed with exit code {exit_code}")
-        self._logger.info(f"{script_type} script completed successfully", container_id=c.id)
+        self._logger.info("%s script completed successfully", script_type, container_id=c.id)

     async def _run_pre_task_always_setup(self):
         """Run always pre-task setup script (runs before every task)."""
```
```diff
@@ -63,15 +63,16 @@ def upload_lcsc_images(api: InvenTreeAPI):
         # Gather LCSC from single supplier
         sp_lcsc = [sp for sp in all_supplier_parts if sp.part == p.pk and sp.supplier == lcsc.pk]
         if len(sp_lcsc) != 1:
-            log.info(f"Skip, {len(sp_lcsc)} LCSC SupplierParts.")
+            log.info("Skip, %s LCSC SupplierParts.", len(sp_lcsc))
             continue
         lcsc_from_supplier = sp_lcsc[0].SKU

         # Decide if we have an LCSC ID
         if lcsc_from_link and lcsc_from_supplier:
             # If both are present, assert they match
             if lcsc_from_link != lcsc_from_supplier:
-                raise ValueError(f"Conflicting LCSC IDs: {lcsc_from_link=} != {lcsc_from_supplier=}", log._context)
+                msg = f"Conflicting LCSC IDs: {lcsc_from_link=} != {lcsc_from_supplier=}"
+                raise ValueError(msg, log._context)
             # Both match => use either one
             lcsc_id = lcsc_from_link
         elif lcsc_from_link or lcsc_from_supplier:
```
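The two-line rewrite here is ruff's EM102 pattern: assign the f-string to a variable before raising, so the traceback display shows `raise ValueError(msg)` instead of repeating the entire formatted message. A standalone sketch with hypothetical IDs (the function name is made up, not from the PR):

```python
def reconcile_ids(from_link: str, from_supplier: str) -> str:
    """Return the agreed ID, or raise with a pre-built message (EM102 style)."""
    if from_link != from_supplier:
        # Build the message first; the raise line stays short and readable.
        msg = f"Conflicting LCSC IDs: {from_link=} != {from_supplier=}"
        raise ValueError(msg)
    return from_link

assert reconcile_ids("C123456", "C123456") == "C123456"

try:
    reconcile_ids("C123456", "C654321")
except ValueError as err:
    # The f-string's {name=} form includes the variable name and repr.
    assert "from_link='C123456'" in str(err)
else:
    raise AssertionError("expected a ValueError")
```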
```diff
@@ -267,7 +267,7 @@ class BaseResponse(CamelCaseModel):
     """

     # continue_ needs explicit alias since to_camel("continue_") -> "continue_" not "continue"
-    continue_: bool = Field(True, alias="continue")
+    continue_: bool = Field(default=True, alias="continue")
     stop_reason: str | None = Field(None, description="Message shown to USER when continue is false")
     suppress_output: bool | None = None
```
```diff
@@ -150,7 +150,7 @@ def entrypoint(cls) -> None:

         except Exception:
             # Log the exception
-            logger.error("Hook execution failed", exc_info=True)
+            logger.exception("Hook execution failed")
             raise

         _emit_and_exit(response)
```
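This is ruff's TRY400 rewrite: inside an `except` block, `logger.exception(...)` is shorthand for `logger.error(..., exc_info=True)` — same ERROR level, with the active traceback appended automatically. A self-contained check:

```python
import io
import logging

buf = io.StringIO()
logger = logging.getLogger("try400-demo")
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.ERROR)

try:
    raise ValueError("boom")
except Exception:
    # Equivalent to: logger.error("Hook execution failed", exc_info=True)
    logger.exception("Hook execution failed")

out = buf.getvalue()
assert out.startswith("Hook execution failed")
assert "Traceback (most recent call last):" in out
assert "ValueError: boom" in out
```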
```diff
@@ -90,7 +90,7 @@ def load_page_titles():
         else:
             raise ValueError(f"Missing required 'title' in frontmatter for {page}.md")
     except Exception:
-        logger.exception(f"Error loading title for {page}.md")
+        logger.exception("Error loading title for %s.md", page)
         raise
```
```diff
@@ -116,9 +116,9 @@ def handle_page_rendering_error(error: Exception, page_name: str = "page") -> No
         HTTPException: Always raises with appropriate status code
     """
     if isinstance(error, FileNotFoundError):
-        logger.error(f"{page_name} not found")
+        logger.error("%s not found", page_name)
         raise HTTPException(status_code=404, detail="Document not found")
-    logger.error(f"Error rendering {page_name}: {error}")
+    logger.error("Error rendering %s: %s", page_name, error)
     raise HTTPException(status_code=500, detail="Internal server error")
```
```diff
@@ -216,7 +216,7 @@ async def analyze_page_tokens(
         tokens = count_tokens_for_models(final_markdown)
         return {"page": page_id, "title": title, "url": url, **tokens}
     except Exception:
-        logger.exception(f"Error analyzing {page_id} page")
+        logger.exception("Error analyzing %s page", page_id)
         return None
```
```diff
@@ -300,11 +300,11 @@ async def verify_token(request: Request, token: str = ""):

         ts.verify_token(token)
         result = {"status": "success", "message": "Token is valid ✅"}
-        logger.info(f"Token verification succeeded for: {token[:20]}...")
+        logger.info("Token verification succeeded for: %s...", token[:20])
     except VerificationError as exc:
         result = {"status": "failed", "errors": exc.issues}
         issues_str = " | ".join(f"✗ {issue}" for issue in exc.issues)
-        logger.exception(f"Token verification FAILED: {issues_str}")
+        logger.exception("Token verification FAILED: %s", issues_str)
     except FileNotFoundError:
         logger.exception("index.md not found for token verification")
         result = {"status": "failed", "errors": ["Source document not found"]}
```
```diff
@@ -323,7 +323,7 @@ def main():
     host = os.environ.get("HOST", "0.0.0.0")
     port = int(os.environ.get("PORT", "9000"))

-    logger.info(f"Starting FastAPI server on http://{host}:{port}")
+    logger.info("Starting FastAPI server on http://%s:%s", host, port)
     uvicorn.run(app, host=host, port=port, log_config=None)  # None to use our logging config
```
```diff
@@ -157,13 +157,13 @@ def _make_request_and_save(
     Raises:
         SystemExit: If expected_status is specified and doesn't match actual status
     """
-    logger.info(f"Making request: {name} ({method} {endpoint})")
+    logger.info("Making request: %s (%s %s)", name, method, endpoint)

     response = self.client.request(method=method, url=endpoint, params=params, json=json_data)

     # If expected status is provided, validate it
     if response.status_code != expected_status:
-        logger.error(f"Expected status {expected_status} but got {response.status_code}")
+        logger.error("Expected status %s but got %s", expected_status, response.status_code)
         sys.exit(1)

     # Create reference data structure
```
```diff
@@ -187,12 +187,12 @@ def _make_request_and_save(
     # Save to file in YAML format
     path = REFERENCE_DIR / f"{name.lower().replace(' ', '_')}.yaml"
     if path.exists():
-        logger.warning(f"Overwriting existing file: {path}")
+        logger.warning("Overwriting existing file: %s", path)

     with path.open("w") as f:
         yaml.dump(reference, f, sort_keys=False, indent=2, default_flow_style=False)

-    logger.info(f"Saved reference example to {path}")
+    logger.info("Saved reference example to %s", path)

     return response.json()
```
```diff
@@ -224,7 +224,7 @@ def collect_references(self) -> None:
         # Use the first habit for further API calls
         habit = habits[0]
         habit_id = habit["id"]
-        logger.info(f"Using habit with ID: {habit_id} and masked name: {self._mask_name(habit['name'])}")
+        logger.info("Using habit with ID: %s and masked name: %s", habit_id, self._mask_name(habit["name"]))

         # Get details for a specific habit by ID
         self._make_request_and_save(
```
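`_mask_name` itself is not shown in this diff. A plausible helper of that shape — purely hypothetical, the real implementation may differ — keeps a short prefix and stars out the rest, so logs stay identifiable without recording the full name:

```python
def mask_name(value: str, visible: int = 4) -> str:
    """Hypothetical sketch: keep the first few characters, star the rest."""
    if len(value) <= visible:
        return "*" * len(value)  # too short to reveal anything
    return value[:visible] + "*" * (len(value) - visible)

assert mask_name("Morning run") == "Morn*******"
assert mask_name("abc") == "***"
```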
Check failure — Code scanning / CodeQL: Clear-text logging of sensitive information (High)

Copilot Autofix (AI, 20 days ago): In general, to fix clear-text logging issues, stop including potentially sensitive values directly in log messages. Either remove them, replace them with non-sensitive identifiers, or heavily redact them.

Best targeted fix here: change the `logger.info` calls in `K8sTokenStore.write_token` so they no longer interpolate the namespace (and probably not the exact secret name either). Functionality of the method (reading/replacing/creating the secret) is unchanged; only the log message is made more generic. This keeps observability ("a secret was updated/created") while avoiding logging where that secret lives.

Concretely, in `airlock/oauth/k8s_client.py`: replace `logger.info("Updated secret %s/%s", namespace, secret_name)` with a message that does not include `namespace` or `secret_name`, for example `logger.info("Updated Kubernetes secret")`. No new imports, methods, or definitions are required; only the two log statements change.