Replace config ValueError with OumiConfigValueError#2325
idoudali wants to merge 2 commits into idoudali/config-exceptions
Conversation
…rror handling: introduce dedicated config exception types and add explicit configuration checks to catch invalid or missing values early. Wire these failures into the CLI global handler so users get clearer, more consistent error messages. Improve file checks during configuration parsing. Fixes OPE-1848
Use OumiConfigValueError for validation failures under core/configs so the CLI treats them as OumiConfigError and prints a short message instead of a full traceback. OumiConfigValueError subclasses both OumiConfigError and ValueError for backward compatibility with existing except ValueError tests. Update BaseConfig/BaseParams docstrings to recommend OumiConfigValueError. Fixes OPE-1853
Pull request overview
This PR introduces OumiConfigValueError (a subclass of both OumiConfigError and ValueError) and updates config validation code under src/oumi/core/configs to raise it instead of ValueError, so the CLI handles validation failures as configuration errors and prints a short message without a traceback.
Changes:
- Add `OumiConfigValueError` to the shared exception hierarchy (`oumi.exceptions`).
- Replace `ValueError` with `OumiConfigValueError` across config/params validation logic in `core/configs`.
- Update `BaseConfig`/`BaseParams` docstrings to recommend `OumiConfigValueError` for validation failures.
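The dual inheritance described above can be sketched as follows. This is an illustrative reconstruction, not the exact code in `oumi.exceptions`; the base-class names beyond `OumiConfigError` and `OumiConfigValueError` are assumptions:

```python
class OumiError(Exception):
    """Assumed common base; the real hierarchy may differ."""


class OumiConfigError(OumiError):
    """Configuration problem; the CLI prints these as short messages."""


class OumiConfigValueError(OumiConfigError, ValueError):
    """Config validation failure that still satisfies `except ValueError`."""


err = OumiConfigValueError("`learning_rate` must be positive.")
assert isinstance(err, OumiConfigError)  # CLI handler catches it
assert isinstance(err, ValueError)       # old tests keep passing
```

Because of the `ValueError` base, existing tests that assert on `ValueError` need no changes.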
Reviewed changes
Copilot reviewed 28 out of 28 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| src/oumi/exceptions.py | Adds OumiConfigValueError to the lightweight exception hierarchy. |
| src/oumi/core/configs/training_config.py | Switches training config validation raises to OumiConfigValueError. |
| src/oumi/core/configs/synthesis_config.py | Switches synthesis config validation raises to OumiConfigValueError. |
| src/oumi/core/configs/quantization_config.py | Switches quantization config validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/tuning_params.py | Switches tuning params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/training_params.py | Switches training params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/test_params.py | Switches test params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/synthesis_params.py | Switches synthesis params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/rule_judge_params.py | Switches rule-judge params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/remote_params.py | Switches remote/adaptive concurrency params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/profiler_params.py | Switches profiler params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/peft_params.py | Switches PEFT enum validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/model_params.py | Switches model params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/judge_params.py | Switches judge params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/guided_decoding_params.py | Switches guided decoding params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/grpo_params.py | Switches GRPO params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/gold_params.py | Switches GOLD params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/gkd_params.py | Switches GKD params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/generation_params.py | Switches generation params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/fsdp_params.py | Switches FSDP params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/evaluation_params.py | Switches evaluation params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/deepspeed_params.py | Switches DeepSpeed params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/data_params.py | Switches data params validation raises to OumiConfigValueError. |
| src/oumi/core/configs/params/base_params.py | Updates BaseParams docstring guidance to recommend OumiConfigValueError. |
| src/oumi/core/configs/judge_config.py | Switches JudgeConfig parse/resolve failures to OumiConfigValueError. |
| src/oumi/core/configs/base_config.py | Updates BaseConfig docstring guidance to recommend OumiConfigValueError. |
| src/oumi/core/configs/async_evaluation_config.py | Switches async evaluation config validation raises to OumiConfigValueError. |
| src/oumi/core/configs/analyze_config.py | Switches analyze config validation raises to OumiConfigValueError. |
```diff
             raise OumiConfigValueError(
                 "Invalid profiler schedule arguments. The parameters "
                 "wait: {self.wait}, warmup: {self.warmup}, repeat: {self.repeat}"
                 "skip_first: {self.skip_first} must be non-negative."
             )
         if not (self.active > 0):
-            raise ValueError(
+            raise OumiConfigValueError(
                 "Invalid profiler schedule arguments. The parameter "
                 "active: {self.active} must be positive."
             )
```
ProfilerScheduleParams.__post_init__ error messages include literal placeholders like "{self.wait}" because the strings aren't f-strings, and the concatenation drops separators (e.g. there is no comma/space before "skip_first"). This makes the validation error output unhelpful; please switch these messages to proper f-strings (or otherwise format the current values) and ensure separators and spaces are included.
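A minimal sketch of the fix this comment asks for. The dataclass below is illustrative (field defaults are assumptions), and plain `ValueError` stands in for `OumiConfigValueError` so the sketch runs standalone:

```python
from dataclasses import dataclass


@dataclass
class ProfilerScheduleParams:
    wait: int = 0
    warmup: int = 1
    active: int = 3
    repeat: int = 1
    skip_first: int = 1

    def __post_init__(self):
        if min(self.wait, self.warmup, self.repeat, self.skip_first) < 0:
            # f-strings interpolate the actual values; separators included.
            raise ValueError(
                "Invalid profiler schedule arguments. The parameters "
                f"wait: {self.wait}, warmup: {self.warmup}, "
                f"repeat: {self.repeat}, skip_first: {self.skip_first} "
                "must be non-negative."
            )
        if not (self.active > 0):
            raise ValueError(
                "Invalid profiler schedule arguments. The parameter "
                f"active: {self.active} must be positive."
            )
```

With this change, a bad value produces e.g. "wait: -1" in the message instead of the literal text "{self.wait}".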
```diff
         if self.sample_count is not None:
             if self.sample_count < 0:
-                raise ValueError("`sample_count` must be greater than 0.")
+                raise OumiConfigValueError("`sample_count` must be greater than 0.")
         if self.mixture_proportion is not None:
             if self.mixture_proportion < 0:
-                raise ValueError("`mixture_proportion` must be greater than 0.")
+                raise OumiConfigValueError(
+                    "`mixture_proportion` must be greater than 0."
+                )
             if self.mixture_proportion > 1:
-                raise ValueError("`mixture_proportion` must not be greater than 1.0 .")
+                raise OumiConfigValueError(
+                    "`mixture_proportion` must not be greater than 1.0 ."
+                )
```
Validation condition and error message are inconsistent: sample_count only errors when < 0 (so 0 is allowed), but the message says it “must be greater than 0”. Similarly mixture_proportion < 0 message says “greater than 0” and the > 1 message has an extra space before the period ("1.0 ."). Please align the checks/docstrings/messages (e.g., use “non-negative / >= 0” if 0 is valid, or change the predicate if 0 should be rejected) and fix the punctuation.
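One way to align the predicates with the messages, as suggested above. This is a standalone sketch (the helper name is hypothetical, and `ValueError` stands in for `OumiConfigValueError`), keeping 0 valid for `sample_count` and constraining `mixture_proportion` to [0.0, 1.0]:

```python
from typing import Optional


def validate_mixture(sample_count: Optional[int],
                     mixture_proportion: Optional[float]) -> None:
    if sample_count is not None and sample_count < 0:
        # Predicate allows 0, so the message says non-negative.
        raise ValueError("`sample_count` must be non-negative (>= 0).")
    if mixture_proportion is not None:
        # One range check replaces the two inconsistent messages.
        if not (0.0 <= mixture_proportion <= 1.0):
            raise ValueError("`mixture_proportion` must be within [0.0, 1.0].")
```

If 0 should instead be rejected for `sample_count`, flip the predicate to `<= 0` and keep the original "greater than 0" wording.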
```diff
         if self.judgment_scores:
             if not all(
                 isinstance(score, int | float)
                 for score in self.judgment_scores.values()
             ):
-                raise ValueError("All judgment_scores values must be numeric")
+                raise OumiConfigValueError("All judgment_scores values must be numeric")
             if len(self.judgment_scores) == 0:
-                raise ValueError("judgment_scores cannot be empty when provided")
+                raise OumiConfigValueError(
+                    "judgment_scores cannot be empty when provided"
+                )
```
Inside if self.judgment_scores: the len(self.judgment_scores) == 0 branch is unreachable (an empty dict is falsy and won’t enter this block). This is dead code and can be removed, or the emptiness check should be moved outside the truthiness guard if you intend to validate both “provided but empty” and “missing”.
```diff
         # Validate judgment scores are numeric if provided
         if self.judgment_scores:
             if not all(
                 isinstance(score, int | float)
                 for score in self.judgment_scores.values()
             ):
-                raise ValueError("All judgment_scores values must be numeric")
+                raise OumiConfigValueError("All judgment_scores values must be numeric")
             if not self.judgment_scores:
-                raise ValueError("judgment_scores cannot be empty when provided")
+                raise OumiConfigValueError(
+                    "judgment_scores cannot be empty when provided"
+                )
```
Inside if self.judgment_scores: the subsequent if not self.judgment_scores: check is unreachable (empty dicts are falsy and would not enter the outer block). Please remove this dead check, or restructure the logic if you need to distinguish between None and {} explicitly.
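One possible restructuring that makes the "provided but empty" branch reachable: test for `None` explicitly rather than relying on dict truthiness. Illustrative sketch with a hypothetical helper name; `ValueError` stands in for `OumiConfigValueError`:

```python
from typing import Optional


def validate_judgment_scores(judgment_scores: Optional[dict]) -> None:
    if judgment_scores is None:
        return  # not provided at all; nothing to validate
    if len(judgment_scores) == 0:
        # Reachable now: {} is provided but empty.
        raise ValueError("judgment_scores cannot be empty when provided")
    if not all(
        isinstance(score, (int, float)) for score in judgment_scores.values()
    ):
        raise ValueError("All judgment_scores values must be numeric")
```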
```diff
             return "all_exhausted"
         else:
-            raise ValueError("Unsupported value for MixtureStrategy")
+            raise OumiConfigValueError("Unsupported value for MixtureStrategy")
```
What's the difference between OumiConfigError and OumiConfigValueError? Should we just use one?
Force-pushed from e46d02e to 4804801
Depends on #2319
Description
Use OumiConfigValueError for validation failures under core/configs so the CLI treats them as OumiConfigError and prints a short message instead of a full traceback. OumiConfigValueError subclasses both OumiConfigError and ValueError for backward compatibility with existing except ValueError tests.
Update BaseConfig/BaseParams docstrings to recommend OumiConfigValueError.
Fixes OPE-1853
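A minimal sketch of the CLI behavior this description claims, under assumed names (the real handler lives in Oumi's CLI code and may differ): the global handler catches the base `OumiConfigError` and prints a short message instead of a traceback.

```python
import sys


# Illustrative stand-ins; the real classes live in oumi.exceptions.
class OumiConfigError(Exception):
    pass


class OumiConfigValueError(OumiConfigError, ValueError):
    pass


def run_cli(main) -> int:
    """Hypothetical global handler: short message, no traceback."""
    try:
        main()
        return 0
    except OumiConfigError as e:
        print(f"Configuration error: {e}", file=sys.stderr)
        return 1


def bad_main():
    raise OumiConfigValueError("`model_name` is required.")


exit_code = run_cli(bad_main)  # prints one short line to stderr
```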
Related issues
Fixes # (issue)
Before submitting
Reviewers
At least one review from a member of oumi-ai/oumi-staff is required.