
Initial liblzma-rs and liblzma-rs-sys with C2rust #2

Merged
youknowone merged 51 commits into main from c2rust
Apr 1, 2026

Conversation

Owner

@youknowone youknowone commented Mar 3, 2026

Summary by CodeRabbit

  • New Features

    • Full pure‑Rust LZMA stack (encoders/decoders, LZMA2/LZIP, block/stream/index handling), filter support (delta/BCJ), VLI utilities, CRC32/CRC64/SHA256 checks, microlzma path, outqueue, and multithreaded stream/block encoders/decoders.
    • One‑shot and streaming buffer encode/decode APIs and a benchmark suite comparing Rust vs C backends.
  • Documentation

    • Added governance and migration guidance document.
  • Chores

    • Workspace and manifest updates, feature/flag rework, dev/bench dependencies and CI MSRV bumped to Rust 1.63.


coderabbitai Bot commented Mar 3, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a pure-Rust liblzma core crate and a liblzma-rs-sys C ABI shim, expands the workspace and CI MSRV, introduces Criterion benchmarks, and implements a large set of new FFI-exposed encoders/decoders, checks, block/index/filter subsystems, multithreaded stream/encoder/decoder code, allocator and utility primitives.

Changes

  • Project config & governance — AGENTS.md, Cargo.toml, .github/workflows/main.yml, Cargo.lock.msrv: Adds governance doc; bumps MSRV 1.60→1.63; expands workspace members; rewrites feature wiring and dev/bench deps; updates lockfile.
  • New crates & manifests — liblzma-rs/Cargo.toml, liblzma-rs-sys/Cargo.toml: Adds liblzma-rs (Rust core) and liblzma-rs-sys (liblzma-sys-compatible C ABI) manifests and feature maps.
  • Benchmarks — benches/backend_comparison.rs: Adds Criterion bench comparing Rust vs C backends (encode/decode, crc) across multiple sizes.
  • C ABI compatibility layer — liblzma-rs-sys/src/lib.rs: New large set of #[no_mangle] extern "C" wrappers, canonical C type aliases, and many re-exports forwarding into liblzma-rs.
  • Allocator & core utilities — liblzma-rs/src/alloc.rs, liblzma-rs/src/common/...: Rust-backed allocator, stream/core lifecycle (lzma_code, lzma_end), VLI encode/decode/size, buf helpers, memlimit, version APIs, outqueue and hardware helpers.
  • Checks (CRC / SHA) — liblzma-rs/src/check/*: CRC32/CRC64 fast implementations, SHA-256, and FFI check API (is_supported, size, init/update/finish).
  • Filter framework & string parsing — liblzma-rs/src/common/filter_*.rs, liblzma-rs/src/common/string_conversion.rs: Filter encoder/decoder registries, copy/free/validate, props/flags encode-decode, filter string parsing/serialization and option maps.
  • Block & buffer APIs — liblzma-rs/src/common/block_*.rs, block_util.rs: Block header encode/decode, bounds, buffer encode/decode, block encoder/decoder state machines.
  • Index subsystem & hash — liblzma-rs/src/common/index*.rs, index_hash.rs: Full index model and APIs: init/end/prealloc/append/cat/dup/iterators, buffer encode/decode, encoder/decoder and streaming hash utilities.
  • Stream encoders/decoders (single & MT) — liblzma-rs/src/common/stream_*.rs, *_mt.rs, stream_mt.rs, outqueue.rs: Single-threaded and multithreaded stream encoder/decoder implementations, worker/thread orchestration, outqueue, index integration and memconfig hooks.
  • Easy helpers & streaming buffers — liblzma-rs/src/common/easy_*.rs, stream_buffer_*.rs: Easy preset builders and one-shot stream/block buffer encode/decode helpers and bounds functions.
  • Standalone formats & auto-detect — liblzma-rs/src/common/alone_*.rs, auto_decoder.rs, file_info.rs, lzip_decoder.rs: ALONE encoder/decoder, auto-detection decoder, file-info and LZIP decoder state machines.
  • LZ framework & match-finders — liblzma-rs/src/lz/*: LZ encoder/decoder, match-finder implementations (HC/BT variants), memusage and init wiring.
  • LZMA core, fastpos & LZMA2 — liblzma-rs/src/lzma/*, liblzma-rs/src/lzma/fastpos_table.rs: LZMA/LZMA2 encoder/decoder modules, precomputed fastpos table, props handling, LZMA2-specific logic.
  • Delta filter — liblzma-rs/src/delta/*: Delta encoder/decoder, props encode/decode, memusage and coder init wiring.
  • Module wiring & crate exports — liblzma-rs/src/lib.rs, */mod.rs: Adds c_str! macro, permits, and exposes many new public modules across the crate surface.

Sequence Diagram(s)

sequenceDiagram
    participant App
    participant Sys as "liblzma-rs-sys\n(C ABI)"
    participant Core as "liblzma-rs\n(Rust core)"
    participant MT as "MT workers / OutQ / Index"

    App->>Sys: lzma_stream_encoder(strm, filters, check)
    Sys->>Core: forward to stream_encoder init
    Core->>MT: init outqueue / index / block encoder
    MT-->>Core: ready

    loop Encode loop
        App->>Sys: lzma_code(stream, LZMA_RUN)
        Sys->>Core: lzma_code -> drive state machine
        Core->>MT: schedule block (maybe spawn worker)
        MT-->>Core: compressed bytes / status
        Core-->>Sys: return lzma_ret
        Sys-->>App: lzma_ret
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~180 minutes

Poem

🐇 I hop through bytes and tables bright,

I bind C calls to Rust with gentle might.
Streams and blocks, threads hum along,
CRCs chirp out a jaunty song.
A carrot-sized thanks for every exported prong.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 64

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@benches/backend_comparison.rs`:
- Around line 24-33: The lzma calls in the benchmark helpers
(lzma_easy_buffer_encode and lzma_stream_buffer_decode) currently ignore their
return values so out_pos may remain 0 on failure; change each helper (rs_encode,
c_encode, c_decode, rs_decode) to capture the return value, verify it equals
c_sys::LZMA_OK (or the proper success constant), and handle non-success by
returning an Err/panicking/logging rather than trusting out_pos; only truncate
the output buffer to out_pos after a successful call so failed operations do not
produce silent empty outputs and invalidate comparisons.
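A minimal sketch of the pattern this comment asks for; `encode_checked` and the `LZMA_OK` value are illustrative stand-ins for the real `c_sys` symbols and benchmark helpers, not the actual code in benches/backend_comparison.rs:

```rust
// Success code stand-in for c_sys::LZMA_OK (assumed value for illustration).
const LZMA_OK: i32 = 0;

/// Run an encode closure and only trust `out_pos` when the return code
/// reports success; a failed call yields Err instead of a silent empty output.
pub fn encode_checked(
    input: &[u8],
    encode: impl Fn(&[u8], &mut usize) -> i32,
) -> Result<usize, i32> {
    let mut out_pos = 0usize;
    let ret = encode(input, &mut out_pos);
    if ret != LZMA_OK {
        return Err(ret); // do not truncate to out_pos on failure
    }
    Ok(out_pos)
}
```

Each of the four helpers (rs_encode, c_encode, c_decode, rs_decode) would follow this shape so a failing backend invalidates the benchmark loudly rather than comparing empty buffers.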

In `@liblzma-rs/src/check/check.rs`:
- Around line 8-41: The two public C ABI helper functions
lzma_check_is_supported and lzma_check_size are missing the #[no_mangle]
attribute; add #[no_mangle] above each pub extern "C" fn (for
lzma_check_is_supported and lzma_check_size) so their symbol names are stable
and consistent with other exported functions in the crate, keeping the rest of
the signatures and logic unchanged.

In `@liblzma-rs/src/check/crc32_fast.rs`:
- Around line 2-316: The CRC32 table lzma_crc32_table is declared as mutable
static (pub static mut) but never modified; change its declaration to an
immutable static (remove mut) so it becomes pub static lzma_crc32_table: [[u32;
256]; 8] = [...] and update any call sites (e.g., lzma_crc32 or other functions
that currently access lzma_crc32_table via unsafe) to use the immutable static
without unsafe blocks. Ensure the table type and initializer stay the same and
remove unnecessary unsafe usage around accesses.
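The shape of the suggested change, sketched with a tiny placeholder table (the real `lzma_crc32_table` is `[[u32; 256]; 8]`): an immutable `#[no_mangle]` static keeps the exported symbol stable while letting read sites drop their `unsafe` blocks.

```rust
// Placeholder for the precomputed CRC32 table; immutable, so reads are safe.
#[no_mangle]
pub static LZMA_CRC32_TABLE_DEMO: [[u32; 4]; 2] = [[0, 1, 2, 3], [4, 5, 6, 7]];

/// Table lookup without any unsafe block, since the static is never mutated.
pub fn crc_lookup(row: usize, col: usize) -> u32 {
    LZMA_CRC32_TABLE_DEMO[row][col]
}
```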

In `@liblzma-rs/src/check/crc64_fast.rs`:
- Around line 2-4: The table lzma_crc64_table is declared as a mutable global
(pub static mut lzma_crc64_table) which allows external mutation and risks data
races; change it to an immutable global (pub static lzma_crc64_table: [[u64;
256]; 4]) while preserving #[no_mangle] so the symbol remains stable for FFI, or
if ABI requires a mutable symbol keep pub static mut but add a clear safety
comment and document thread-safety expectations near the lzma_crc64_table
declaration.

In `@liblzma-rs/src/check/sha256.rs`:
- Around line 7-16: Replace mutable globals that hold constant SHA-256 seeds
with immutable statics: change static mut SHA256_K: [u32; 64] = [...] to static
SHA256_K: [u32; 64] = [...], and do the same for the other read-only arrays
defined around lines 537-543 (the SHA-256 state/seed arrays); if any code
currently mutates those symbols, remove that mutation or relocate mutable state
into a local/controlled mutable structure (e.g., local variables, thread-local,
or a Mutex-wrapped static) so the constant arrays remain immutable.

In `@liblzma-rs/src/common/alone_encoder.rs`:
- Around line 148-153: init dereferences the FFI pointer options without a null check before calling lzma_lzma_lclppb_encode and reading (*options).dict_size; add a null check at the top of init and return LZMA_OPTIONS_ERROR when options is null, so that neither the lzma_lzma_lclppb_encode call nor the (*options).dict_size access can crash across the FFI boundary.
- Around line 194-214: The exported C entrypoint lzma_alone_encoder is missing
the #[no_mangle] attribute so its symbol will be Rust-mangled and not visible to
C; fix by adding #[no_mangle] immediately above the pub unsafe extern "C" fn
lzma_alone_encoder(...) declaration (keeping the extern "C" and pub unsafe
qualifiers) so the function exports with the exact C symbol name used by
callers.
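Both fixes for this file can be sketched together; the struct, constants, and function name below are illustrative placeholders for the real alone_encoder types:

```rust
// Placeholder error codes (values assumed for illustration).
const LZMA_OK_DEMO: i32 = 0;
const LZMA_OPTIONS_ERROR_DEMO: i32 = 8;

#[repr(C)]
pub struct OptionsDemo {
    pub dict_size: u32,
}

// #[no_mangle] keeps the C symbol un-mangled so C callers can link it.
#[no_mangle]
pub unsafe extern "C" fn demo_encoder_init(options: *const OptionsDemo) -> i32 {
    // Validate the FFI pointer before any dereference.
    if options.is_null() {
        return LZMA_OPTIONS_ERROR_DEMO;
    }
    let _dict_size = (*options).dict_size; // safe to read only after the guard
    LZMA_OK_DEMO
}
```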

In `@liblzma-rs/src/common/block_buffer_decoder.rs`:
- Around line 13-21: Add an upfront null-check for the incoming C pointer
"block" alongside the existing validation for in_pos/in_0/out_pos/out so that if
block.is_null() the function returns LZMA_PROG_ERROR immediately; update the
validation block that currently checks in_pos, in_0, in_size, out_pos, out,
out_size to include "|| block.is_null()" and also add the same guard before the
decoder initialization call (where "block" is forwarded to decoder init) to
prevent passing a null "block" into the decoder.

In `@liblzma-rs/src/common/block_buffer_encoder.rs`:
- Around line 30-33: The cast from u64 to size_t in lzma_block_buffer_bound can
truncate on narrow targets; update lzma_block_buffer_bound to check the u64
return from lzma_block_buffer_bound64 and clamp it to the maximum representable
size_t (i.e., if ret > size_t::MAX then return size_t::MAX, else ret as size_t)
so callers never receive an undersized buffer length; perform this check without
panicking (return the max value) and reference lzma_block_buffer_bound and
lzma_block_buffer_bound64 when making the change.
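The clamp described above is a one-liner; this sketch uses `usize` for size_t and a hypothetical function name:

```rust
/// Convert a u64 bound to size_t (usize) without truncation: saturate at
/// usize::MAX so callers on 32-bit targets never receive an undersized
/// buffer length, and never panic.
pub fn clamp_bound_to_size_t(bound: u64) -> usize {
    if bound > usize::MAX as u64 {
        usize::MAX
    } else {
        bound as usize
    }
}
```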

In `@liblzma-rs/src/common/block_decoder.rs`:
- Around line 233-236: Before calling lzma_block_unpadded_size or accessing
fields, ensure the incoming block pointer is non-null and validate the check ID
range; specifically, add a null-check for block and verify (*block).check is
within the expected enum/allowed range (so you don't later use an out-of-range
value in check_size-based copy logic). Move the block-null and check-ID
validation to the top of the conditional that currently calls
lzma_block_unpadded_size and checks (*block).uncompressed_size (symbols:
lzma_block_unpadded_size, (*block).uncompressed_size, LZMA_VLI_MAX,
LZMA_VLI_UNKNOWN, (*block).check, check_size), and if validation fails return
the appropriate error instead of dereferencing block.

In `@liblzma-rs/src/common/block_header_encoder.rs`:
- Around line 45-49: Compute the final aligned header size into a local (e.g.,
let aligned = size.wrapping_add(3) & !3), check it against
LZMA_BLOCK_HEADER_SIZE_MAX, and if it exceeds that limit immediately return the
appropriate error (for consistency use the same error code used elsewhere for
invalid options, e.g., LZMA_OPTIONS_ERROR); only after the check assign
(*block).header_size = aligned and continue to return LZMA_OK. Ensure this
change is applied in the same function that currently sets (*block).header_size
so the validation happens before storing the value.
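The validate-before-store ordering can be sketched as follows; the limit and error constants are placeholders for the real LZMA_BLOCK_HEADER_SIZE_MAX and lzma_ret values:

```rust
// Placeholder limit and return codes (values assumed for illustration).
const BLOCK_HEADER_SIZE_MAX_DEMO: u32 = 1024;
const LZMA_OK_DEMO: i32 = 0;
const LZMA_OPTIONS_ERROR_DEMO: i32 = 8;

/// Compute the 4-byte-aligned header size into a local, validate it, and
/// only then store it into the block's header_size field.
pub fn set_header_size(size: u32, header_size: &mut u32) -> i32 {
    let aligned = size.wrapping_add(3) & !3;
    if aligned > BLOCK_HEADER_SIZE_MAX_DEMO {
        return LZMA_OPTIONS_ERROR_DEMO; // reject before mutating the block
    }
    *header_size = aligned;
    LZMA_OK_DEMO
}
```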

In `@liblzma-rs/src/common/block_util.rs`:
- Around line 51-57: In lzma_block_total_size replace the mutable binding of
unpadded_size with an immutable one by computing it using a conditional
expression: call lzma_block_unpadded_size(block) into a non-mut variable (e.g.,
unpadded_size) and, if it is not LZMA_VLI_UNKNOWN, set the result to
vli_ceil4(unpadded_size) within the same let using an if expression; keep
function signature and return the final value as before. This touches
lzma_block_total_size, lzma_block_unpadded_size, vli_ceil4, and
LZMA_VLI_UNKNOWN.

In `@liblzma-rs/src/common/common.rs`:
- Around line 113-130: In lzma_next_filter_update, guard against a missing
update callback on next by checking (*next).update.is_some() (or equivalent)
before calling unwrap(); if update is None, return LZMA_PROG_ERROR instead of
unwrapping and panicking—keep the current checks on reversed_filters.id and call
the update function only after verifying (*next).update exists, passing
(*next).coder, allocator, core::ptr::null(), reversed_filters as before.
- Around line 69-93: In lzma_bufcpy, add explicit bounds checks before doing
pointer arithmetic: validate that *in_pos <= in_size and *out_pos <= out_size
(or return 0/compute zero available) to avoid underflow from wrapping_sub and
prevent out-of-bounds copy; compute in_avail and out_avail using
checked/conditional subtraction (set to 0 when pos > size), short-circuit when
copy_size == 0, and only then call core::ptr::copy_nonoverlapping using the safe
offsets derived from the validated positions (references: lzma_bufcpy, in_pos,
out_pos, in_size, out_size, in_avail, out_avail, copy_size).
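A safe-slice analogue of the bounds-checked lzma_bufcpy the second comment describes (the real function works on raw pointers; this sketch shows the saturating-subtraction and short-circuit logic only):

```rust
/// Copy as many bytes as both buffers allow. Available space is computed
/// with saturating_sub, so a position past its size yields zero instead of
/// underflowing, and the copy is skipped entirely when nothing fits.
pub fn bufcpy_checked(
    input: &[u8], in_pos: &mut usize,
    out: &mut [u8], out_pos: &mut usize,
) -> usize {
    let in_avail = input.len().saturating_sub(*in_pos);
    let out_avail = out.len().saturating_sub(*out_pos);
    let copy_size = in_avail.min(out_avail);
    if copy_size == 0 {
        return 0; // short-circuit: nothing safe to copy
    }
    out[*out_pos..*out_pos + copy_size]
        .copy_from_slice(&input[*in_pos..*in_pos + copy_size]);
    *in_pos += copy_size;
    *out_pos += copy_size;
    copy_size
}
```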

In `@liblzma-rs/src/common/easy_buffer_encoder.rs`:
- Around line 2-13: The extern "C" declaration of lzma_stream_buffer_encode
should be removed and the crate-local function imported and used directly:
delete the extern block in easy_buffer_encoder.rs, add a use
crate::common::stream_buffer_encoder::lzma_stream_buffer_encode import, and
update any call sites in easy_buffer_encoder (which previously relied on the FFI
symbol) to call lzma_stream_buffer_encode directly (wrapping the call in unsafe
as needed since the function is declared pub unsafe extern "C" in
stream_buffer_encoder.rs). Ensure no leftover references to the removed extern
symbol remain.

In `@liblzma-rs/src/common/easy_decoder_memusage.rs`:
- Around line 3-34: Replace the verbose manual initialization of
lzma_options_easy and its embedded lzma_options_lzma by implementing and using
the Default trait for those types: add Default impls for lzma_options_easy and
lzma_options_lzma (setting the same zero/null/LZMA_RESERVED_ENUM values used
now), then construct the value via lzma_options_easy::default() or
Default::default() in easy_decoder_memusage and similarly update
lzma_stream_encoder_mt_memusage to use Default to remove the long field-by-field
initializer; reference the types lzma_options_easy and lzma_options_lzma and the
other function lzma_stream_encoder_mt_memusage when making changes.
- Around line 35-37: The error return in the preset failure branch uses
UINT32_MAX cast to u64; update the branch so it returns UINT64_MAX instead to
match the memusage conventions (as done in lzma_stream_encoder_mt_memusage).
Locate the call to lzma_easy_preset with the opt_easy variable and replace the
UINT32_MAX as u64 return with UINT64_MAX (or the language-equivalent u64::MAX)
so callers can consistently detect error conditions.
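The Default-based construction suggested above might look like this; the struct and fields are placeholders standing in for lzma_options_easy / lzma_options_lzma:

```rust
#[repr(C)]
pub struct LzmaOptionsDemo {
    pub dict_size: u32,
    pub preset_dict: *const u8,
    pub lc: u32,
}

impl Default for LzmaOptionsDemo {
    fn default() -> Self {
        // The same zero/null values the verbose field-by-field
        // initializer currently spells out by hand.
        LzmaOptionsDemo {
            dict_size: 0,
            preset_dict: std::ptr::null(),
            lc: 0,
        }
    }
}
```

Call sites then shrink to `LzmaOptionsDemo::default()` instead of repeating every field.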

In `@liblzma-rs/src/common/easy_encoder_memusage.rs`:
- Around line 35-37: The function lzma_easy_encoder_memusage currently returns
UINT32_MAX as u64 when lzma_easy_preset(&mut opt_easy, preset) fails, which is
incorrect; change the error sentinel to return UINT64_MAX (as u64) instead.
Locate the lzma_easy_encoder_memusage implementation and replace the UINT32_MAX
return with UINT64_MAX, and apply the same change in the analogous
lzma_easy_decoder_memusage function (check the file easy_decoder_memusage.rs) so
both use the documented UINT64_MAX error sentinel; keep the unsafe
lzma_easy_preset call and opt_easy handling unchanged.

In `@liblzma-rs/src/common/easy_encoder.rs`:
- Around line 2-8: Replace the extern "C" declaration of lzma_stream_encoder
with a direct Rust call to the same-crate function: remove the extern block in
easy_encoder.rs and import the function from crate::common::stream_encoder (use
crate::common::stream_encoder::lzma_stream_encoder), then call
lzma_stream_encoder(...) directly where the extern was used (ensure signatures
match and keep the call in unsafe if required); this avoids the C ABI trampoline
and eliminates unnecessary FFI overhead while preserving behavior.
- Around line 14-49: The stack-allocated opt_easy (lzma_options_easy) and its
internals (filters array and opt_lzma) are being passed by pointer to
lzma_stream_encoder, causing a use-after-free when the function returns;
allocate the options and filters on the heap instead (e.g.,
Box<lzma_options_easy> or similar), ensure lzma_easy_preset is called on the
heap allocation so filters[0].options points to the heap-backed opt_lzma, and
pass a stable raw pointer (Box::into_raw or &*boxed as *mut _) to
lzma_stream_encoder; also document or wire up proper deallocation when the
stream is destroyed so the heap allocation is freed when the encoder no longer
needs it.
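The heap-allocation pattern proposed for the use-after-free can be sketched with a placeholder options type; Box::into_raw / Box::from_raw is the standard way to hand a stable pointer across FFI and reclaim it later:

```rust
#[repr(C)]
pub struct OptionsEasyDemo {
    pub preset: u32,
}

/// Move the options off the stack into a heap allocation and return a raw
/// pointer that stays valid after this function returns.
pub fn options_into_raw(preset: u32) -> *mut OptionsEasyDemo {
    Box::into_raw(Box::new(OptionsEasyDemo { preset }))
}

/// Safety: `ptr` must come from `options_into_raw` and must not be freed
/// twice; call this when the encoder no longer needs the options.
pub unsafe fn options_free(ptr: *mut OptionsEasyDemo) {
    if !ptr.is_null() {
        drop(Box::from_raw(ptr)); // reclaims the heap allocation
    }
}
```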

In `@liblzma-rs/src/common/file_info.rs`:
- Around line 332-337: The memlimit overflow branch in file_info.rs currently
returns LZMA_PROG_ERROR; update the conditional inside the block that checks
memused > (*coder).memlimit to return LZMA_MEMLIMIT_ERROR instead. Locate the
check that calls lzma_index_memused((*coder).combined_index) and replace the
return value LZMA_PROG_ERROR with LZMA_MEMLIMIT_ERROR so the function (involving
variables coder, combined_index, memused, and memlimit) reports the correct
error code.
- Around line 457-465: The checks calling lzma_index_stream_flags and
lzma_index_stream_padding currently replace any non-LZMA_OK return with
LZMA_PROG_ERROR; instead capture each call's return value (e.g., ret =
lzma_index_stream_flags((*coder).this_index, &mut (*coder).footer_flags)) and if
ret != LZMA_OK return ret, and do the same for lzma_index_stream_padding
(checking against (*coder).stream_padding). This preserves and propagates the
original error codes from those functions rather than collapsing them into
LZMA_PROG_ERROR.
- Around line 165-167: The current update to (*coder).index_remaining uses
wrapping_sub which allows silent underflow when consumed bytes
((*in_pos).wrapping_sub(in_start) as lzma_vli) exceed index_remaining; change
the logic in file_info.rs to prevent underflow by computing the consumed bytes
as a lzma_vli, then subtracting with a saturating/checked operation (e.g., use
saturating_sub or checked_sub and set to 0 on underflow) when updating
(*coder).index_remaining so it never wraps to a large value; keep references to
the same symbols: (*coder).index_remaining, (*in_pos), in_start, and lzma_vli.

In `@liblzma-rs/src/common/filter_buffer_decoder.rs`:
- Around line 39-49: The code calls unwrap() on the extern "C" function pointer
next.code in filter_buffer_decoder.rs (and similarly at the second site) which
can panic across the FFI boundary; replace the unwrap() with an explicit Option
check and call-site handling: if let Some(code_fn) = next.code { invoke
code_fn(next.coder, allocator, in_0, in_pos, in_size, out, out_pos, out_size,
LZMA_FINISH) } else { return an appropriate lzma_ret error (e.g.,
LZMA_PROG_ERROR or LZMA_MEM_ERROR as used elsewhere) } so no panic can escape
FFI; mirror the safe pattern used in stream_buffer_decoder.rs and ensure both
occurrences are fixed.
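The guarded-call pattern recommended here, sketched with placeholder types and error codes: check the Option-wrapped function pointer instead of unwrap(), so a missing callback becomes an error code rather than a panic unwinding across the FFI boundary.

```rust
// Placeholder return codes (values assumed for illustration).
const LZMA_OK_DEMO: i32 = 0;
const LZMA_PROG_ERROR_DEMO: i32 = 11;

pub type CodeFnDemo = unsafe extern "C" fn(coder: *mut u8) -> i32;

/// Invoke the coder's code callback only if it is present.
pub unsafe fn call_code(code: Option<CodeFnDemo>, coder: *mut u8) -> i32 {
    match code {
        Some(code_fn) => code_fn(coder),
        None => LZMA_PROG_ERROR_DEMO, // no panic can escape the FFI path
    }
}

// Trivial callback used to exercise the Some branch.
pub unsafe extern "C" fn demo_ok(_coder: *mut u8) -> i32 {
    LZMA_OK_DEMO
}
```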

In `@liblzma-rs/src/common/filter_common.rs`:
- Around line 11-103: The features table is declared as a mutable global (static
mut features) but is never modified; change it to an immutable global (e.g.,
static FEATURES: [filter_features; 13] = [...] or const FEATURES:
[filter_features; 13] = [...]) to eliminate data-race hazards, and update any
call sites that use unsafe mutable access to read-only access from FEATURES
(referencing symbols: features / filter_features / LZMA_FILTER_LZMA1 /
LZMA_FILTER_X86 / LZMA_VLI_UNKNOWN).

In `@liblzma-rs/src/common/filter_flags_decoder.rs`:
- Around line 18-24: The code currently dereferences the raw pointer `filter`
(through `(*filter).options` and `(*filter).id`) and uses other raw C-ABI
pointers before any null checks; add pointer guards at the start of the C-ABI
entrypoint to validate `filter` and any pointer parameters used (e.g., `in_pos`,
`in_size`, `in_0`) and return `LZMA_PROG_ERROR` if any are null or otherwise
invalid instead of proceeding; keep the subsequent calls like `lzma_vli_decode`
and assignments to `(*filter).options`/`(*filter).id` only after the guards pass
so you never dereference a null pointer.

In `@liblzma-rs/src/common/index_hash.rs`:
- Around line 81-88: The function lzma_index_hash_size unconditionally
dereferences the raw pointer index_hash; add a null guard at the start of
lzma_index_hash_size to return a defined lzma_vli (e.g., 0) when index_hash is
null, otherwise call index_size with (*index_hash).blocks.count and
(*index_hash).blocks.index_list_size as before; reference the function name
lzma_index_hash_size, the type lzma_index_hash, and the helper index_size to
locate and update the code.

In `@liblzma-rs/src/common/index.rs`:
- Around line 675-679: The duplication routine currently copies sizes and counts
but omits the checks metadata; inside the lzma_index_dup implementation, after
copying (*dest).index_list_size (the block where (*dest).uncompressed_size,
.total_size, .record_count, .index_list_size are set) add logic to preserve the
checks field by copying (*dest).checks = (*src).checks (or performing the
appropriate deep copy if checks owns heap data) so the duplicated index retains
the same check metadata; locate this change near the declaration of srcstream
(index_stream) and ensure any ownership/alloc semantics for checks are handled
consistently with the rest of the function.
- Around line 259-264: The preallocation setter lzma_index_prealloc may set
prealloc to 0 which allows lzma_index_append to write an OOB record; update
lzma_index_prealloc so that after capping to PREALLOC_MAX it enforces a minimum
of 1 (i.e., if records == 0 then set records = 1) before assigning
(*i).prealloc, preserving existing cast to size_t and the PREALLOC_MAX cap and
leaving lzma_index_append behavior unchanged.
- Around line 565-584: When copying the tree node with (*newg).node = (*g).node
you must also fix the parent pointers of the node's children so they point to
newg (not to g) before freeing g: after the copy, if (*newg).node.left is not
null set (*(*newg).node.left).parent = &mut (*newg).node, and likewise if
(*newg).node.right is not null set (*(*newg).node.right).parent = &mut
(*newg).node; keep the existing update that replaces the parent->right pointer
and the groups.leftmost/root/rightmost assignments, then free g; this ensures no
child still references the freed g (refer to symbols newg, g, node, parent,
left, right, groups.leftmost/root/rightmost).
- Around line 388-394: The code in lzma_index_stream_flags compares stream_flags to itself via lzma_stream_flags_compare(stream_flags, stream_flags), so mismatches against the stored stream flags are never detected; instead compare the incoming stream_flags with the flags stored on the rightmost stream ((*i).streams.rightmost / index_stream), return the comparison result if it is not LZMA_OK, and only assign (*s).stream_flags = *stream_flags when the comparison succeeds.
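The prealloc clamp described for lzma_index_prealloc is small enough to sketch directly; the cap constant is a placeholder for the real PREALLOC_MAX:

```rust
// Placeholder cap (the real PREALLOC_MAX value may differ).
const PREALLOC_MAX_DEMO: u64 = 1 << 20;

/// Cap the requested record count, then enforce a minimum of one record so
/// a later append never writes out of bounds into a zero-length allocation.
pub fn clamp_prealloc(records: u64) -> usize {
    let capped = records.min(PREALLOC_MAX_DEMO);
    let nonzero = if capped == 0 { 1 } else { capped };
    nonzero as usize
}
```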

In `@liblzma-rs/src/common/microlzma_encoder.rs`:
- Around line 24-32: The code calls unwrap() on external function pointers
(*coder).lzma.set_out_limit and (*coder).lzma.code inside an extern "C" context
which can panic across the FFI boundary; replace those unwrap()s with safe
checks (e.g., if let Some(set_out_limit) = (*coder).lzma.set_out_limit { ... }
else { return LZMA_PROG_ERROR; }) and similarly for (*coder).lzma.code, and then
call the unwrapped function via the local binding (set_out_limit(...) /
code(...)); ensure you return LZMA_PROG_ERROR (or appropriate lzma_ret) when the
Option is None to avoid any panic across FFI.
- Around line 66-102: The comparison and assignment that transmutes an
Option<unsafe extern "C" fn(...) -> lzma_ret> to uintptr_t around
microlzma_encoder_init is verbose and fragile; refactor by introducing a small
helper (e.g., fn_ptr_to_usize or a macro) that converts an Option<unsafe extern
"C" fn(...) -> lzma_ret> to usize in one place and use it for the equality check
and assignment instead of repeating the long transmute expression, updating the
branches that call lzma_next_end(next, allocator) and set (*next).init to use
the helper (referencing microlzma_encoder_init, lzma_next_end, and the
(*next).init field).
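The helper proposed above centralizes the Option-to-usize conversion so the equality check against an init function does not repeat a long transmute expression; the fn-pointer type here is a simplified stand-in for the real init signature:

```rust
pub type InitFnDemo = unsafe extern "C" fn() -> i32;

/// Convert an optional extern "C" fn pointer to its address, using 0 for
/// None, so call sites can compare and assign addresses in one place.
pub fn fn_ptr_to_usize(f: Option<InitFnDemo>) -> usize {
    match f {
        Some(p) => p as usize, // plain pointer cast, no transmute needed
        None => 0,
    }
}

// Example init function to exercise the Some branch.
pub unsafe extern "C" fn demo_init() -> i32 {
    0
}
```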

In `@liblzma-rs/src/common/outqueue.rs`:
- Around line 119-125: In lzma_outq_get_buf guard against a null cache pointer
before dereferencing: check if (*outq).cache is null and if so return
core::ptr::null_mut() (or the appropriate null sentinel) instead of
dereferencing; otherwise proceed to read buf = (*outq).cache, advance
(*outq).cache = (*buf).next and clear (*buf).next. This prevents null-pointer
dereference when lzma_outq.cache is empty; update references to lzma_outq,
lzma_outbuf, and lzma_outq_get_buf accordingly.
- Around line 189-195: In lzma_outq_enable_partial_output, avoid calling
enable_partial_output.unwrap() across the extern "C" boundary; instead check the
Option before invoking the callback (e.g., if let Some(cb) =
enable_partial_output) and only call cb((*(*outq).head).worker) when present,
otherwise skip calling and still null out the worker pointer as needed; update
the function body to perform a guarded call on enable_partial_output rather than
unwrapping to prevent panics in FFI paths.

In `@liblzma-rs/src/common/stream_buffer_decoder.rs`:
- Around line 42-52: The call to stream_decoder.code.unwrap() (and later
stream_decoder.memconfig) risks panicking across the FFI boundary; change the
code in stream_buffer_decoder.rs to defensively handle those Option fields
rather than unwrapping: check that stream_decoder.code and
stream_decoder.memconfig are Some before calling them, and if not return a
suitable lzma error code (or propagate an error) from the surrounding function
instead of panicking; include a clear message when converting to an error (or
use expect only if you intentionally want a descriptive panic during
development), and update the caller path so functions like stream_decoder.code
(the extern "C" function pointer) and stream_decoder.memconfig are invoked only
after confirming they exist.
- Around line 37-38: The code dereferences memlimit when calling
lzma_stream_decoder_init(&mut stream_decoder, allocator, *memlimit, flags)
without validating memlimit; add a null check for the memlimit pointer (the
memlimit parameter) before the call and handle the null case by returning the
appropriate error (e.g., an lzma_ret error such as LZMA_PROG_ERROR or other
existing error path used in this module) instead of dereferencing it, ensuring
the check occurs alongside the existing input validations and before using
stream_decoder, allocator, and flags.

In `@liblzma-rs/src/common/stream_decoder_mt.rs`:
- Around line 1271-1282: In stream_decoder_mt_end ensure the coder's
synchronization primitives are destroyed before freeing coder: call the
appropriate destructor functions for (*coder).cond and (*coder).mutex (the same
APIs used to create them—e.g., lzma_cond_end / lzma_mutex_end or the project
equivalents) after finishing threads and other cleanup (after threads_end and
before lzma_free), so that (*coder).cond and (*coder).mutex are properly torn
down prior to lzma_free(coder as *mut c_void).
- Around line 1382-1388: Add a null check for the incoming pointer "options"
before any dereference: if options.is_null() { return LZMA_OPTIONS_ERROR; }
Place this check at the top of the function (before accessing (*options).threads
and (*options).flags) so the existing validations for threads and flags (and any
future uses of options) do not dereference a null pointer; keep the return value
LZMA_OPTIONS_ERROR to match existing error handling.

In `@liblzma-rs/src/common/stream_decoder.rs`:
- Around line 55-326: The code uses the numeric sentinel variable
current_block_100 with opaque u64 constants to drive control flow; replace that
pattern with an explicit enum (e.g., DecoderPhase or reuse SEQ_* states) and
clear state-transition branches so each match arm returns/sets the enum variant
rather than assigning magic numbers. Locate the usage of current_block_100, the
match over (*coder).sequence and the subsequent match current_block_100 blocks
(and the internal nested match) and: (1) introduce a typed phase enum or use the
existing SEQ_* constants as enum variants, (2) update each place that sets
current_block_100 to set the enum variant or directly set (*coder).sequence and
branch to the next explicit match arm, (3) remove the numeric constants and
replace nested matches with clear match arms or helper functions (e.g.,
handle_stream_header, handle_block_header, handle_stream_footer) that return the
next phase or lzma_ret, and (4) ensure all transitions that previously relied on
the sentinels still set sequence/phase consistently (references:
current_block_100, (*coder).sequence, SEQ_BLOCK_HEADER, SEQ_INDEX,
SEQ_BLOCK_INIT, SEQ_BLOCK_RUN, SEQ_STREAM_FOOTER, SEQ_STREAM_PADDING,
stream_decoder_reset, lzma_bufcpy, lzma_block_header_decode) and adjust control
flow so behavior is unchanged while removing magic numeric sentinels.
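A toy sketch of the enum-based rewrite: the variants mirror the SEQ_* states named above, and each handler returns the next phase explicitly instead of assigning an opaque u64 sentinel. The transition logic is illustrative only, not the real decoder's.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum DecoderPhase {
    StreamHeader,
    BlockHeader,
    BlockRun,
    Index,
    StreamFooter,
}

/// Compute the next phase from the current one; every transition is a
/// named variant, so no magic numeric constants drive control flow.
pub fn next_phase(phase: DecoderPhase, block_done: bool) -> DecoderPhase {
    match phase {
        DecoderPhase::StreamHeader => DecoderPhase::BlockHeader,
        DecoderPhase::BlockHeader => DecoderPhase::BlockRun,
        DecoderPhase::BlockRun if block_done => DecoderPhase::Index,
        DecoderPhase::BlockRun => DecoderPhase::BlockHeader,
        DecoderPhase::Index => DecoderPhase::StreamFooter,
        DecoderPhase::StreamFooter => DecoderPhase::StreamFooter,
    }
}
```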

In `@liblzma-rs/src/common/stream_encoder_mt.rs`:
- Around line 968-973: The code checks u64 bounds against BLOCK_SIZE_MAX and
UINT64_MAX but doesn't ensure values fit into size_t on 32-bit platforms; add
checks against SIZE_MAX before any cast to size_t: after computing
*outbuf_size_max = lzma_block_buffer_bound64(*block_size) verify *block_size <=
SIZE_MAX (return LZMA_OPTIONS_ERROR) and then verify *outbuf_size_max != 0 &&
*outbuf_size_max <= SIZE_MAX (return LZMA_MEM_ERROR or LZMA_OPTIONS_ERROR as
appropriate) before any use/cast to size_t; update the checks around
BLOCK_SIZE_MAX, lzma_block_buffer_bound64, and the error returns so no silent
truncation occurs.

In `@liblzma-rs/src/common/stream_encoder.rs`:
- Around line 229-235: The code calls (*coder).block_encoder.update.unwrap()
without checking for None, which can panic if the block encoder has no update
hook; update the branch in stream_encoder (the SEQ_BLOCK_ENCODE path) to first
check whether (*coder).block_encoder.update is Some, and only call it when
present, otherwise handle the missing hook by setting ret to an appropriate
error/return value (or a no-op success) and avoid the unwrap; reference the
symbols block_encoder.update, block_encoder.coder, (*coder).sequence and
SEQ_BLOCK_ENCODE when making the change.

In `@liblzma-rs/src/common/stream_flags_common.rs`:
- Around line 2-7: The two exported globals lzma_header_magic and
lzma_footer_magic are declared as mutable but are constant read-only magic byte
arrays; change their declarations from pub static mut to immutable pub static
and remove the redundant "as u8" casts from the literal elements so the arrays
are plain [u8; N] statics (keep the #[no_mangle] and sizes the same).
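As a sketch, the two statics could end up looking like this; the byte values shown are the standard .xz stream header and footer magics:

```rust
// Immutable statics with the redundant `as u8` casts removed; #[no_mangle]
// keeps the exported C symbol names unchanged. Header magic is "\xFD7zXZ\0",
// footer magic is "YZ", per the .xz file format.
#[allow(non_upper_case_globals)]
#[no_mangle]
pub static lzma_header_magic: [u8; 6] = [0xfd, b'7', b'z', b'X', b'Z', 0x00];

#[allow(non_upper_case_globals)]
#[no_mangle]
pub static lzma_footer_magic: [u8; 2] = [b'Y', b'Z'];
```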

In `@liblzma-rs/src/common/stream_flags_decoder.rs`:
- Around line 17-20: The extern "C" decoder function lzma_stream_header_decode
is missing #[no_mangle], so Rust will mangle its symbol and break linkage with
the extern declarations in types.rs; add #[no_mangle] above
lzma_stream_header_decode (and any other FFI decode entry points implemented in
this module) so the symbol names remain unmangled and match the extern "C"
declarations used elsewhere.
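A minimal illustration of the linkage point, with a simplified signature and a `_sketch` suffix to avoid claiming the real entry point's shape:

```rust
// Without #[no_mangle], rustc mangles the symbol and the extern "C"
// declarations in types.rs no longer resolve at link time. The signature
// here is a simplified stand-in for the real lzma_stream_header_decode.
#[no_mangle]
pub unsafe extern "C" fn lzma_stream_header_decode_sketch(input: *const u8) -> i32 {
    if input.is_null() {
        return 11; // LZMA_PROG_ERROR
    }
    0 // LZMA_OK
}
```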

In `@liblzma-rs/src/common/stream_flags_encoder.rs`:
- Around line 28-33: The code currently returns LZMA_PROG_ERROR when
stream_flags_encode(...) fails, but that failure indicates invalid options
(validation/check failure), so change the error return to LZMA_OPTIONS_ERROR
wherever stream_flags_encode returns true (e.g., the call in
stream_flags_encoder.rs that uses out.offset(core::mem::size_of::<[u8; 6]>() as
isize) and the analogous call later). Update the return value from
LZMA_PROG_ERROR to LZMA_OPTIONS_ERROR in those branches so option validation
errors are reported correctly.
- Around line 16-19: The exported C ABI functions are missing the #[no_mangle]
attribute so their symbols will be renamed by Rust; add #[no_mangle] above the
extern "C" function definitions (e.g., before lzma_stream_header_encode) and any
other C ABI functions in this file that correspond to declarations in types.rs
so the linker sees the exact C symbols.

In `@liblzma-rs/src/common/vli_decoder.rs`:
- Around line 2-12: In lzma_vli_decode, add a null-check for the output pointer
vli at the start of the function (before any dereference like *vli = 0 or later
uses) and return the appropriate lzma error (e.g., LZMA_PROG_ERROR) when vli is
null; preserve the existing vli_pos fallback logic (the vli_pos_internal path)
and ensure all subsequent uses assume vli is non-null only after this check.
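A one-shot sketch of the guard together with the VLI byte scheme it protects (7 data bits per byte, high bit as continuation). The real lzma_vli_decode is streaming and pointer-based, so the Option/Result shape here is illustrative only:

```rust
// Illustrative one-shot VLI decode: reject a missing output slot up front
// (the null-pointer case in the C ABI), then accumulate 7 bits per byte
// until a byte without the continuation bit (0x80) terminates the value.
const LZMA_DATA_ERROR: i32 = 9;
const LZMA_PROG_ERROR: i32 = 11;

fn vli_decode(buf: &[u8], vli_out: Option<&mut u64>) -> Result<usize, i32> {
    let out = vli_out.ok_or(LZMA_PROG_ERROR)?; // the suggested early guard
    let mut value: u64 = 0;
    // xz VLIs are at most 9 bytes, which also keeps the shift in range.
    for (i, &b) in buf.iter().take(9).enumerate() {
        value |= u64::from(b & 0x7f) << (7 * i);
        if b & 0x80 == 0 {
            *out = value;
            return Ok(i + 1); // bytes consumed
        }
    }
    Err(LZMA_DATA_ERROR) // input ended mid-value
}

fn demo() -> (u64, usize) {
    let mut v = 0u64;
    let n = vli_decode(&[0x80, 0x01], Some(&mut v)).unwrap();
    (v, n)
}
```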

In `@liblzma-rs/src/common/vli_encoder.rs`:
- Around line 2-8: The function lzma_vli_encode currently dereferences out and
out_pos without null checks; add early null-pointer validation at the start of
lzma_vli_encode for both out and out_pos and return an appropriate lzma_ret
error (e.g., LZMA_PROG_ERROR) if either is NULL; ensure subsequent code can
assume non-null pointers and keep the existing behavior/error codes for
overflow/space conditions unchanged.

In `@liblzma-rs/src/delta/delta_common.rs`:
- Line 43: Replace the hardcoded 256 in the write_bytes call with a size
computed from the coder.history field so future changes to the array length stay
correct; specifically, when calling core::ptr::write_bytes on &raw mut
(*coder).history, compute the byte count with
core::mem::size_of_val(&(*coder).history) (or equivalent) and pass that instead
of 256, keeping the pointer expression and the zero byte value unchanged.
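A sketch of the suggested change, with `DeltaCoder` standing in for the real lzma_delta_coder:

```rust
// Derive the zeroing length from the field itself so it tracks any future
// change to the array size instead of hardcoding 256.
struct DeltaCoder {
    history: [u8; 256],
}

unsafe fn reset_history(coder: *mut DeltaCoder) {
    let n = core::mem::size_of_val(&(*coder).history); // 256 today, but computed
    core::ptr::write_bytes(core::ptr::addr_of_mut!((*coder).history) as *mut u8, 0, n);
}

fn demo() -> (u8, usize) {
    let mut c = DeltaCoder { history: [0xff; 256] };
    unsafe { reset_history(&mut c) };
    (*c.history.iter().max().unwrap(), c.history.len())
}
```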

In `@liblzma-rs/src/delta/delta_encoder.rs`:
- Around line 133-139: The FFI entrypoint lzma_delta_props_encode writes to *out
without checking for a null pointer; add an out.is_null() guard at the start of
lzma_delta_props_encode (alongside the existing lzma_delta_coder_memusage check)
and return an appropriate error (e.g., LZMA_PROG_ERROR) if out is null; keep the
rest of the logic that casts options to *const lzma_options_delta and computes
(*opt).dist.wrapping_sub(LZMA_DELTA_DIST_MIN) as u8 unchanged.

In `@liblzma-rs/src/lib.rs`:
- Around line 1-9: Replace the broad crate-level suppression
`#![allow(clippy::all)]` with targeted Clippy disables for known c2rust patterns
(e.g., `clippy::transmute_ptr_to_ptr`, `clippy::too_many_arguments`,
`clippy::cast_ptr_alignment`, `clippy::missing_safety_doc`) by updating the
crate attribute near the top of lib.rs and add a one-line TODO comment `// TODO:
Gradually enable Clippy lints as code is cleaned up` above that attribute so the
technical debt is tracked; keep the other existing allows (e.g.,
`non_camel_case_types`, `non_snake_case`) but remove `clippy::all` to avoid
silencing all lints.
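A sketch of the resulting crate attributes; the exact lint list is an assumption and may need tuning as cleanup progresses:

```rust
// TODO: Gradually enable Clippy lints as code is cleaned up
#![allow(
    clippy::transmute_ptr_to_ptr,
    clippy::too_many_arguments,
    clippy::cast_ptr_alignment,
    clippy::missing_safety_doc
)]
#![allow(non_camel_case_types, non_snake_case)]
```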

In `@liblzma-rs/src/lz/lz_decoder.rs`:
- Around line 181-189: The code calls lz_init.unwrap() on an FFI function
pointer that can be None and therefore panic across the extern "C" boundary;
replace the .unwrap() calls on lz_init (and any similar .unwrap() calls around
that symbol) with explicit handling — match on lz_init, or use if let
Some(init) = lz_init, and return an appropriate lzma_ret error
(lzma_ret::LZMA_PROG_ERROR or LZMA_MEM_ERROR as appropriate) when it is None —
so the unsafe extern "C" fn used for lzma_lz_decoder initialization is only
invoked through the checked binding.
- Around line 137-143: The end-of-stream leftover check in the lz_decoder (when
(*coder).this_finished) uses the wrong predicate: it currently tests
(*coder).temp.size != 0 which is true for any non-zero buffer capacity and can
falsely raise LZMA_DATA_ERROR; change this to check unread bytes by comparing
position against size (e.g., (*coder).temp.pos != (*coder).temp.size) so only
when there are unread bytes in coder.temp you return LZMA_DATA_ERROR, leaving
the existing next_finished -> LZMA_STREAM_END logic unchanged.
- Around line 292-297: The memusage calculation in lzma_lz_decoder_memusage
currently omits the init-time minimum dictionary clamp and 16-byte alignment
used in lzma_lz_decoder_init; update lzma_lz_decoder_memusage to first clamp
dictionary_size to at least LZ_DICT_MIN (matching lzma_lz_decoder_init), then
compute the dictionary bytes using that clamped value and round/align the
dictionary area up to 16 bytes before adding core::mem::size_of::<lzma_coder>(),
(2 * LZ_DICT_REPEAT_MAX), and LZ_DICT_EXTRA so the reported u64 memory matches
actual init requirements.
- Around line 260-267: Reallocation frees (*coder).dict.buf then tries
lzma_alloc but if allocation returns null (*coder).dict.size remains the old
(stale) value; update the failure path in the block handling lzma_alloc so that
when lzma_alloc returns null you explicitly set (*coder).dict.buf =
ptr::null_mut() (it already is) and reset (*coder).dict.size = 0 before
returning LZMA_MEM_ERROR; locate the code around the symbols (*coder).dict.buf,
(*coder).dict.size, lzma_free, lzma_alloc, LZ_DICT_EXTRA and ensure the size is
cleared on allocation failure to avoid later logic assuming a nonzero size with
a null buf.
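For the memusage bullet above, a sketch of the clamp-then-align computation; all constants are placeholders standing in for LZ_DICT_MIN, LZ_DICT_REPEAT_MAX, LZ_DICT_EXTRA, and the coder struct size:

```rust
// Keep memusage in sync with init: clamp the dictionary size to the minimum
// first, then round the dictionary area up to 16-byte alignment before
// adding the fixed overheads. All constants are illustrative placeholders.
const LZ_DICT_MIN: u64 = 4096;
const LZ_DICT_REPEAT_MAX: u64 = 273;
const LZ_DICT_EXTRA: u64 = 32;
const CODER_SIZE: u64 = 256; // placeholder for size_of::<lzma_coder>() as u64

fn lz_decoder_memusage(dictionary_size: u64) -> u64 {
    let dict = dictionary_size.max(LZ_DICT_MIN); // same clamp as init
    let dict_aligned = (dict + 15) & !15u64; // round up to a 16-byte boundary
    CODER_SIZE + dict_aligned + 2 * LZ_DICT_REPEAT_MAX + LZ_DICT_EXTRA
}
```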

In `@liblzma-rs/src/lz/lz_encoder_mf.rs`:
- Around line 169-193: The loop in lzma_mf_hc3_skip can underflow amount because
it decrements unguarded; change the loop to explicitly run while amount > 0 (or
check/return if amount == 0 before entering) so you never decrement a zero
value; update the same pattern in the other skip variants (the other skip
functions at the referenced locations) to use an amount>0 loop condition or an
early return, keeping the existing body (mf_avail(mf), move_pending(mf),
move_pos(mf), hash/son updates) unchanged and only altering the loop/guard
around amount to prevent underflow.
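The guard can be sketched as follows; the body is reduced to counters to stay self-contained, where the real loop calls mf_avail/move_pending/move_pos and updates the hash/son tables:

```rust
// Underflow-safe skip loop: guard with `amount > 0` so a zero count can
// never wrap around to u32::MAX.
fn skip(mut amount: u32, mut available: u32) -> u32 {
    let mut skipped = 0;
    while amount > 0 {
        if available == 0 {
            break; // real code would move_pending(mf) and return here
        }
        available -= 1;
        amount -= 1; // safe: the loop condition guarantees amount > 0
        skipped += 1;
    }
    skipped
}
```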

In `@liblzma-rs/src/lz/lz_encoder.rs`:
- Around line 342-369: In lzma_lz_encoder_memusage, avoid passing a null
lz_options into lz_encoder_prepare; add an explicit null check for the
lz_options pointer at the top of lzma_lz_encoder_memusage and return UINT64_MAX
(or the existing error sentinel) if it is null, before initializing or calling
lz_encoder_prepare with &mut mf; ensure the check references the lz_options
parameter and the call site of lz_encoder_prepare to prevent dereferencing a
null pointer.
- Around line 507-513: The code dereferences filters and calls lz_init.unwrap()
without validation; before invoking the init call in lz_encoder (the call that
uses lz_init, filters, coder, and lz_options), validate that filters is non-null
and that lz_init is Some: check that filters is not null and that
(*filters).id/options are safe to read, and match on lz_init (or use if let
Some(init) = lz_init) to avoid unwrap; if either check fails return an
appropriate lzma_ret error (e.g., LZMA_PROG_ERROR or LZMA_MEM_ERROR) instead of
panicking so the function exits safely. Ensure you reference the same variables
used in the diff (lz_init, filters, coder, lz_options) and only call init after
these guards pass.

In `@liblzma-rs/src/lzma/fastpos_table.rs`:
- Around line 2-344: The lzma_fastpos lookup table is declared as mutable
(static mut) though it's only read; change it to an immutable static to remove
unnecessary mutable global state. Locate the declaration of lzma_fastpos
(symbol: lzma_fastpos) and replace "static mut" with "static" so the array
becomes read-only; confirm callers (all index reads) compile without needing
mutable borrows and run tests/build to ensure no unsafe writes rely on the
previous mutability.

In `@liblzma-rs/src/lzma/lzma2_decoder.rs`:
- Around line 217-220: In lzma2_decoder_end, don't free (*coder).lzma.coder
directly; if the nested decoder has an end hook (the function pointer stored at
(*coder).lzma.end), call that end hook with (*coder).lzma.coder and allocator so
the decoder-specific teardown runs, otherwise fall back to lzma_free on
(*coder).lzma.coder; after handling the nested coder, free the outer
lzma_lzma2_coder with lzma_free as before. This uses lzma2_decoder_end,
lzma_lzma2_coder, the nested end hook (*coder).lzma.end, and lzma_free to avoid
bypassing decoder-specific cleanup.
- Around line 293-316: In lzma_lzma2_props_decode, guard against null pointers
before dereferencing `props` and before assigning `*options`: check that `props`
is not null before reading `*props` and return an appropriate LZMA_OPTIONS_ERROR
(or LZMA_PROG_ERROR if more appropriate) if it is, and check that `options` is
not null before writing `*options` (return LZMA_PROG_ERROR or
LZMA_OPTIONS_ERROR) so you don't dereference null; keep the existing allocation
via `lzma_alloc` and the `opt` handling (`lzma_options_lzma`, `dict_size`,
`preset_dict`, `preset_dict_size`) unchanged, only add the early null-checks and
returns.
- Around line 252-257: In lzma2_decoder_init, opt is cast to options and then
dereferenced without validation; add a null check on opt (opt.is_null()) before
casting or using it and return an appropriate error (e.g., LZMA_PROG_ERROR or
LZMA_OPTIONS_ERROR) instead of dereferencing; only compute
(*coder).need_dictionary_reset from (*options).preset_dict and
(*options).preset_dict_size after opt is proven non-null, and ensure
lzma_lzma_decoder_create is still called with a valid options pointer or a
safe fallback when opt is null.

In `@liblzma-rs/src/lzma/lzma2_encoder.rs`:
- Around line 456-468: The function lzma_lzma2_block_size currently dereferences
the incoming pointer options without a null check; update lzma_lzma2_block_size
to first check options.is_null() and immediately return UINT64_MAX for
null/invalid input, then perform the existing cast to lzma_options_lzma (opt)
and the subsequent dict_size validations and calculations; ensure the null
check happens before any use of opt or (*opt).
- Around line 432-453: The function lzma_lzma2_props_encode currently
dereferences out without checking for null; add a null check at the start of
lzma_lzma2_props_encode (after options validation) to return LZMA_PROG_ERROR if
out.is_null(), so neither write to *out at the branches (the code that sets *out
= 40 and *out = get_dist_slot(...).wrapping_sub(24) as u8) can occur on a null
pointer.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8f8d7a3 and 7875183.

📒 Files selected for processing (102)
  • AGENTS.md
  • Cargo.toml
  • benches/backend_comparison.rs
  • liblzma-rs-sys/Cargo.toml
  • liblzma-rs-sys/src/lib.rs
  • liblzma-rs/Cargo.toml
  • liblzma-rs/src/check/check.rs
  • liblzma-rs/src/check/crc32_fast.rs
  • liblzma-rs/src/check/crc64_fast.rs
  • liblzma-rs/src/check/mod.rs
  • liblzma-rs/src/check/sha256.rs
  • liblzma-rs/src/common/alone_decoder.rs
  • liblzma-rs/src/common/alone_encoder.rs
  • liblzma-rs/src/common/auto_decoder.rs
  • liblzma-rs/src/common/block_buffer_decoder.rs
  • liblzma-rs/src/common/block_buffer_encoder.rs
  • liblzma-rs/src/common/block_decoder.rs
  • liblzma-rs/src/common/block_encoder.rs
  • liblzma-rs/src/common/block_header_decoder.rs
  • liblzma-rs/src/common/block_header_encoder.rs
  • liblzma-rs/src/common/block_util.rs
  • liblzma-rs/src/common/common.rs
  • liblzma-rs/src/common/easy_buffer_encoder.rs
  • liblzma-rs/src/common/easy_decoder_memusage.rs
  • liblzma-rs/src/common/easy_encoder.rs
  • liblzma-rs/src/common/easy_encoder_memusage.rs
  • liblzma-rs/src/common/easy_preset.rs
  • liblzma-rs/src/common/file_info.rs
  • liblzma-rs/src/common/filter_buffer_decoder.rs
  • liblzma-rs/src/common/filter_buffer_encoder.rs
  • liblzma-rs/src/common/filter_common.rs
  • liblzma-rs/src/common/filter_decoder.rs
  • liblzma-rs/src/common/filter_encoder.rs
  • liblzma-rs/src/common/filter_flags_decoder.rs
  • liblzma-rs/src/common/filter_flags_encoder.rs
  • liblzma-rs/src/common/hardware_cputhreads.rs
  • liblzma-rs/src/common/hardware_physmem.rs
  • liblzma-rs/src/common/index.rs
  • liblzma-rs/src/common/index_decoder.rs
  • liblzma-rs/src/common/index_encoder.rs
  • liblzma-rs/src/common/index_hash.rs
  • liblzma-rs/src/common/lzip_decoder.rs
  • liblzma-rs/src/common/microlzma_decoder.rs
  • liblzma-rs/src/common/microlzma_encoder.rs
  • liblzma-rs/src/common/mod.rs
  • liblzma-rs/src/common/outqueue.rs
  • liblzma-rs/src/common/stream_buffer_decoder.rs
  • liblzma-rs/src/common/stream_buffer_encoder.rs
  • liblzma-rs/src/common/stream_decoder.rs
  • liblzma-rs/src/common/stream_decoder_mt.rs
  • liblzma-rs/src/common/stream_encoder.rs
  • liblzma-rs/src/common/stream_encoder_mt.rs
  • liblzma-rs/src/common/stream_flags_common.rs
  • liblzma-rs/src/common/stream_flags_decoder.rs
  • liblzma-rs/src/common/stream_flags_encoder.rs
  • liblzma-rs/src/common/stream_mt.rs
  • liblzma-rs/src/common/string_conversion.rs
  • liblzma-rs/src/common/vli_decoder.rs
  • liblzma-rs/src/common/vli_encoder.rs
  • liblzma-rs/src/common/vli_size.rs
  • liblzma-rs/src/delta/delta_common.rs
  • liblzma-rs/src/delta/delta_decoder.rs
  • liblzma-rs/src/delta/delta_encoder.rs
  • liblzma-rs/src/delta/mod.rs
  • liblzma-rs/src/lib.rs
  • liblzma-rs/src/lz/lz_decoder.rs
  • liblzma-rs/src/lz/lz_encoder.rs
  • liblzma-rs/src/lz/lz_encoder_mf.rs
  • liblzma-rs/src/lz/mod.rs
  • liblzma-rs/src/lzma/fastpos_table.rs
  • liblzma-rs/src/lzma/lzma2_decoder.rs
  • liblzma-rs/src/lzma/lzma2_encoder.rs
  • liblzma-rs/src/lzma/lzma_decoder.rs
  • liblzma-rs/src/lzma/lzma_encoder.rs
  • liblzma-rs/src/lzma/lzma_encoder_optimum_fast.rs
  • liblzma-rs/src/lzma/lzma_encoder_optimum_normal.rs
  • liblzma-rs/src/lzma/lzma_encoder_presets.rs
  • liblzma-rs/src/lzma/mod.rs
  • liblzma-rs/src/rangecoder/mod.rs
  • liblzma-rs/src/rangecoder/price_table.rs
  • liblzma-rs/src/simple/arm.rs
  • liblzma-rs/src/simple/arm64.rs
  • liblzma-rs/src/simple/armthumb.rs
  • liblzma-rs/src/simple/ia64.rs
  • liblzma-rs/src/simple/mod.rs
  • liblzma-rs/src/simple/powerpc.rs
  • liblzma-rs/src/simple/riscv.rs
  • liblzma-rs/src/simple/simple_coder.rs
  • liblzma-rs/src/simple/simple_decoder.rs
  • liblzma-rs/src/simple/simple_encoder.rs
  • liblzma-rs/src/simple/sparc.rs
  • liblzma-rs/src/simple/x86.rs
  • liblzma-rs/src/tuklib/mod.rs
  • liblzma-rs/src/tuklib/tuklib_cpucores.rs
  • liblzma-rs/src/tuklib/tuklib_physmem.rs
  • liblzma-rs/src/types.rs
  • scripts/run_xz_c_tests_with_liblzma_rs_sys.sh
  • src/stream.rs
  • systest/Cargo.toml
  • systest/build.rs
  • systest/src/main.rs
  • tests/sys_equivalence.rs

Comment on lines +24 to +33
c_sys::lzma_easy_buffer_encode(
6,
c_sys::LZMA_CHECK_CRC64,
ptr::null(),
input.as_ptr(),
input.len(),
out.as_mut_ptr(),
&mut out_pos,
out.len(),
);

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

find . -name "backend_comparison.rs" -type f

Repository: youknowone/xz2-rs

Length of output: 92


🏁 Script executed:

cat -n benches/backend_comparison.rs | head -100

Repository: youknowone/xz2-rs

Length of output: 3285


🌐 Web query:

liblzma lzma_easy_buffer_encode LZMA_OK return value documentation

💡 Result:

lzma_easy_buffer_encode() returns an lzma_ret. Per the official liblzma (XZ Utils) API docs, the return values are: LZMA_OK (encoding was successful), LZMA_BUF_ERROR (not enough output buffer space), LZMA_UNSUPPORTED_CHECK, LZMA_OPTIONS_ERROR, LZMA_MEM_ERROR, LZMA_DATA_ERROR, and LZMA_PROG_ERROR. [1]

Also note: *out_pos is updated only if encoding succeeds (i.e., when the function returns LZMA_OK). [1]

Source: [1] https://tukaani.org/xz/liblzma-api/container_8h.html


Add return value checks to benchmark helper functions; without them, failed operations silently produce empty output, invalidating comparisons.

The lzma_easy_buffer_encode() and lzma_stream_buffer_decode() calls discard their return values. Per the liblzma documentation, out_pos is only updated on success (LZMA_OK); if either call fails, out_pos keeps its initial value of 0, so truncating to it yields an empty buffer. The benchmarks would then measure encoding/decoding of wrong-sized data instead of the actual operations.

Suggested fix pattern (apply to both C and Rust helper pairs)
 unsafe fn c_encode(input: &[u8]) -> Vec<u8> {
     let bound = c_sys::lzma_stream_buffer_bound(input.len());
     let mut out = vec![0u8; bound];
     let mut out_pos: usize = 0;
-    c_sys::lzma_easy_buffer_encode(
+    let ret = c_sys::lzma_easy_buffer_encode(
         6,
         c_sys::LZMA_CHECK_CRC64,
         ptr::null(),
         input.as_ptr(),
         input.len(),
         out.as_mut_ptr(),
         &mut out_pos,
         out.len(),
     );
+    assert_eq!(ret, c_sys::LZMA_OK);
     out.truncate(out_pos);
     out
 }

 unsafe fn c_decode(compressed: &[u8], out_size: usize) -> Vec<u8> {
     let mut out = vec![0u8; out_size];
     let mut memlimit = u64::MAX;
     let mut in_pos = 0usize;
     let mut out_pos = 0usize;
-    c_sys::lzma_stream_buffer_decode(
+    let ret = c_sys::lzma_stream_buffer_decode(
         &mut memlimit,
         0,
         ptr::null(),
         compressed.as_ptr(),
         &mut in_pos,
         compressed.len(),
         out.as_mut_ptr(),
         &mut out_pos,
         out.len(),
     );
+    assert_eq!(ret, c_sys::LZMA_OK);
+    assert_eq!(in_pos, compressed.len());
     out.truncate(out_pos);
     out
 }

Also applies to: rs_encode (42-51), c_decode (61-71), rs_decode (81-91).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benches/backend_comparison.rs` around lines 24 - 33, The lzma calls in the
benchmark helpers (lzma_easy_buffer_encode and lzma_stream_buffer_decode)
currently ignore their return values so out_pos may remain 0 on failure; change
each helper (rs_encode, c_encode, c_decode, rs_decode) to capture the return
value, verify it equals c_sys::LZMA_OK (or the proper success constant), and
handle non-success by returning an Err/panicking/logging rather than trusting
out_pos; only truncate the output buffer to out_pos after a successful call so
failed operations do not produce silent empty outputs and invalidate
comparisons.

Comment thread liblzma-rs/src/check/crc32_fast.rs Outdated
Comment on lines +2 to +316
#[no_mangle]
pub static mut lzma_crc32_table: [[u32; 256]; 8] = [
[
0, 0x77073096, 0xee0e612c, 0x990951ba, 0x76dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
0xedb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988, 0x9b64c2b, 0x7eb17cbd, 0xe7b82d07,
0x90bf1d91, 0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb,
0xf4d4b551, 0x83d385c7, 0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f,
0x63066cd9, 0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b, 0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6,
0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59, 0x26d930ac, 0x51de003a,
0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f, 0x2802b89e,
0x5f058808, 0xc60cd9b2, 0xb10be924, 0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
0x76dc4190, 0x1db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x6b6b51f, 0x9fbfe4a5,
0xe8b8d433, 0x7807c9a2, 0xf00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x86d3d2d,
0x91646c97, 0xe6635c01, 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed,
0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65, 0x4db26158, 0x3ab551ce, 0xa3bc0074,
0xd4bb30e2, 0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, 0x346ed9fc,
0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c,
0x270241aa, 0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81, 0xb7bd5c3b,
0xc0ba6cad, 0xedb88320, 0x9abfb3b6, 0x3b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af,
0x4db2615, 0x73dc1683, 0xe3630b12, 0x94643b84, 0xd6d6a3e, 0x7a6a5aa8, 0xe40ecf0b,
0x9309ff9d, 0xa00ae27, 0x7d079eb1, 0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a,
0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, 0xd6d6a3e8, 0xa1d1937e,
0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b, 0xd80d2bda,
0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236, 0xcc0c7795, 0xbb0b4703, 0x220216b9,
0x5505262f, 0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31,
0x2cd99e8b, 0x5bdeae1d, 0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x26d930a, 0x9c0906a9,
0xeb0e363f, 0x72076785, 0x5005713, 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0xcb61b38,
0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0xbdbdf21, 0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8,
0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777, 0x88085ae6, 0xff0f6a70,
0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45, 0xa00ae278,
0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f,
0x30b5ffe9, 0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693,
0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94, 0xb40bbe37,
0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d,
],
[
0, 0x191b3141, 0x32366282, 0x2b2d53c3, 0x646cc504, 0x7d77f445, 0x565aa786, 0x4f4196c7,
0xc8d98a08, 0xd1c2bb49, 0xfaefe88a, 0xe3f4d9cb, 0xacb54f0c, 0xb5ae7e4d, 0x9e832d8e,
0x87981ccf, 0x4ac21251, 0x53d92310, 0x78f470d3, 0x61ef4192, 0x2eaed755, 0x37b5e614,
0x1c98b5d7, 0x5838496, 0x821b9859, 0x9b00a918, 0xb02dfadb, 0xa936cb9a, 0xe6775d5d,
0xff6c6c1c, 0xd4413fdf, 0xcd5a0e9e, 0x958424a2, 0x8c9f15e3, 0xa7b24620, 0xbea97761,
0xf1e8e1a6, 0xe8f3d0e7, 0xc3de8324, 0xdac5b265, 0x5d5daeaa, 0x44469feb, 0x6f6bcc28,
0x7670fd69, 0x39316bae, 0x202a5aef, 0xb07092c, 0x121c386d, 0xdf4636f3, 0xc65d07b2,
0xed705471, 0xf46b6530, 0xbb2af3f7, 0xa231c2b6, 0x891c9175, 0x9007a034, 0x179fbcfb,
0xe848dba, 0x25a9de79, 0x3cb2ef38, 0x73f379ff, 0x6ae848be, 0x41c51b7d, 0x58de2a3c,
0xf0794f05, 0xe9627e44, 0xc24f2d87, 0xdb541cc6, 0x94158a01, 0x8d0ebb40, 0xa623e883,
0xbf38d9c2, 0x38a0c50d, 0x21bbf44c, 0xa96a78f, 0x138d96ce, 0x5ccc0009, 0x45d73148,
0x6efa628b, 0x77e153ca, 0xbabb5d54, 0xa3a06c15, 0x888d3fd6, 0x91960e97, 0xded79850,
0xc7cca911, 0xece1fad2, 0xf5facb93, 0x7262d75c, 0x6b79e61d, 0x4054b5de, 0x594f849f,
0x160e1258, 0xf152319, 0x243870da, 0x3d23419b, 0x65fd6ba7, 0x7ce65ae6, 0x57cb0925,
0x4ed03864, 0x191aea3, 0x188a9fe2, 0x33a7cc21, 0x2abcfd60, 0xad24e1af, 0xb43fd0ee,
0x9f12832d, 0x8609b26c, 0xc94824ab, 0xd05315ea, 0xfb7e4629, 0xe2657768, 0x2f3f79f6,
0x362448b7, 0x1d091b74, 0x4122a35, 0x4b53bcf2, 0x52488db3, 0x7965de70, 0x607eef31,
0xe7e6f3fe, 0xfefdc2bf, 0xd5d0917c, 0xcccba03d, 0x838a36fa, 0x9a9107bb, 0xb1bc5478,
0xa8a76539, 0x3b83984b, 0x2298a90a, 0x9b5fac9, 0x10aecb88, 0x5fef5d4f, 0x46f46c0e,
0x6dd93fcd, 0x74c20e8c, 0xf35a1243, 0xea412302, 0xc16c70c1, 0xd8774180, 0x9736d747,
0x8e2de606, 0xa500b5c5, 0xbc1b8484, 0x71418a1a, 0x685abb5b, 0x4377e898, 0x5a6cd9d9,
0x152d4f1e, 0xc367e5f, 0x271b2d9c, 0x3e001cdd, 0xb9980012, 0xa0833153, 0x8bae6290,
0x92b553d1, 0xddf4c516, 0xc4eff457, 0xefc2a794, 0xf6d996d5, 0xae07bce9, 0xb71c8da8,
0x9c31de6b, 0x852aef2a, 0xca6b79ed, 0xd37048ac, 0xf85d1b6f, 0xe1462a2e, 0x66de36e1,
0x7fc507a0, 0x54e85463, 0x4df36522, 0x2b2f3e5, 0x1ba9c2a4, 0x30849167, 0x299fa026,
0xe4c5aeb8, 0xfdde9ff9, 0xd6f3cc3a, 0xcfe8fd7b, 0x80a96bbc, 0x99b25afd, 0xb29f093e,
0xab84387f, 0x2c1c24b0, 0x350715f1, 0x1e2a4632, 0x7317773, 0x4870e1b4, 0x516bd0f5,
0x7a468336, 0x635db277, 0xcbfad74e, 0xd2e1e60f, 0xf9ccb5cc, 0xe0d7848d, 0xaf96124a,
0xb68d230b, 0x9da070c8, 0x84bb4189, 0x3235d46, 0x1a386c07, 0x31153fc4, 0x280e0e85,
0x674f9842, 0x7e54a903, 0x5579fac0, 0x4c62cb81, 0x8138c51f, 0x9823f45e, 0xb30ea79d,
0xaa1596dc, 0xe554001b, 0xfc4f315a, 0xd7626299, 0xce7953d8, 0x49e14f17, 0x50fa7e56,
0x7bd72d95, 0x62cc1cd4, 0x2d8d8a13, 0x3496bb52, 0x1fbbe891, 0x6a0d9d0, 0x5e7ef3ec,
0x4765c2ad, 0x6c48916e, 0x7553a02f, 0x3a1236e8, 0x230907a9, 0x824546a, 0x113f652b,
0x96a779e4, 0x8fbc48a5, 0xa4911b66, 0xbd8a2a27, 0xf2cbbce0, 0xebd08da1, 0xc0fdde62,
0xd9e6ef23, 0x14bce1bd, 0xda7d0fc, 0x268a833f, 0x3f91b27e, 0x70d024b9, 0x69cb15f8,
0x42e6463b, 0x5bfd777a, 0xdc656bb5, 0xc57e5af4, 0xee530937, 0xf7483876, 0xb809aeb1,
0xa1129ff0, 0x8a3fcc33, 0x9324fd72,
],
[
0, 0x1c26a37, 0x384d46e, 0x246be59, 0x709a8dc, 0x6cbc2eb, 0x48d7cb2, 0x54f1685, 0xe1351b8,
0xfd13b8f, 0xd9785d6, 0xc55efe1, 0x91af964, 0x8d89353, 0xa9e2d0a, 0xb5c473d, 0x1c26a370,
0x1de4c947, 0x1fa2771e, 0x1e601d29, 0x1b2f0bac, 0x1aed619b, 0x18abdfc2, 0x1969b5f5,
0x1235f2c8, 0x13f798ff, 0x11b126a6, 0x10734c91, 0x153c5a14, 0x14fe3023, 0x16b88e7a,
0x177ae44d, 0x384d46e0, 0x398f2cd7, 0x3bc9928e, 0x3a0bf8b9, 0x3f44ee3c, 0x3e86840b,
0x3cc03a52, 0x3d025065, 0x365e1758, 0x379c7d6f, 0x35dac336, 0x3418a901, 0x3157bf84,
0x3095d5b3, 0x32d36bea, 0x331101dd, 0x246be590, 0x25a98fa7, 0x27ef31fe, 0x262d5bc9,
0x23624d4c, 0x22a0277b, 0x20e69922, 0x2124f315, 0x2a78b428, 0x2bbade1f, 0x29fc6046,
0x283e0a71, 0x2d711cf4, 0x2cb376c3, 0x2ef5c89a, 0x2f37a2ad, 0x709a8dc0, 0x7158e7f7,
0x731e59ae, 0x72dc3399, 0x7793251c, 0x76514f2b, 0x7417f172, 0x75d59b45, 0x7e89dc78,
0x7f4bb64f, 0x7d0d0816, 0x7ccf6221, 0x798074a4, 0x78421e93, 0x7a04a0ca, 0x7bc6cafd,
0x6cbc2eb0, 0x6d7e4487, 0x6f38fade, 0x6efa90e9, 0x6bb5866c, 0x6a77ec5b, 0x68315202,
0x69f33835, 0x62af7f08, 0x636d153f, 0x612bab66, 0x60e9c151, 0x65a6d7d4, 0x6464bde3,
0x662203ba, 0x67e0698d, 0x48d7cb20, 0x4915a117, 0x4b531f4e, 0x4a917579, 0x4fde63fc,
0x4e1c09cb, 0x4c5ab792, 0x4d98dda5, 0x46c49a98, 0x4706f0af, 0x45404ef6, 0x448224c1,
0x41cd3244, 0x400f5873, 0x4249e62a, 0x438b8c1d, 0x54f16850, 0x55330267, 0x5775bc3e,
0x56b7d609, 0x53f8c08c, 0x523aaabb, 0x507c14e2, 0x51be7ed5, 0x5ae239e8, 0x5b2053df,
0x5966ed86, 0x58a487b1, 0x5deb9134, 0x5c29fb03, 0x5e6f455a, 0x5fad2f6d, 0xe1351b80,
0xe0f771b7, 0xe2b1cfee, 0xe373a5d9, 0xe63cb35c, 0xe7fed96b, 0xe5b86732, 0xe47a0d05,
0xef264a38, 0xeee4200f, 0xeca29e56, 0xed60f461, 0xe82fe2e4, 0xe9ed88d3, 0xebab368a,
0xea695cbd, 0xfd13b8f0, 0xfcd1d2c7, 0xfe976c9e, 0xff5506a9, 0xfa1a102c, 0xfbd87a1b,
0xf99ec442, 0xf85cae75, 0xf300e948, 0xf2c2837f, 0xf0843d26, 0xf1465711, 0xf4094194,
0xf5cb2ba3, 0xf78d95fa, 0xf64fffcd, 0xd9785d60, 0xd8ba3757, 0xdafc890e, 0xdb3ee339,
0xde71f5bc, 0xdfb39f8b, 0xddf521d2, 0xdc374be5, 0xd76b0cd8, 0xd6a966ef, 0xd4efd8b6,
0xd52db281, 0xd062a404, 0xd1a0ce33, 0xd3e6706a, 0xd2241a5d, 0xc55efe10, 0xc49c9427,
0xc6da2a7e, 0xc7184049, 0xc25756cc, 0xc3953cfb, 0xc1d382a2, 0xc011e895, 0xcb4dafa8,
0xca8fc59f, 0xc8c97bc6, 0xc90b11f1, 0xcc440774, 0xcd866d43, 0xcfc0d31a, 0xce02b92d,
0x91af9640, 0x906dfc77, 0x922b422e, 0x93e92819, 0x96a63e9c, 0x976454ab, 0x9522eaf2,
0x94e080c5, 0x9fbcc7f8, 0x9e7eadcf, 0x9c381396, 0x9dfa79a1, 0x98b56f24, 0x99770513,
0x9b31bb4a, 0x9af3d17d, 0x8d893530, 0x8c4b5f07, 0x8e0de15e, 0x8fcf8b69, 0x8a809dec,
0x8b42f7db, 0x89044982, 0x88c623b5, 0x839a6488, 0x82580ebf, 0x801eb0e6, 0x81dcdad1,
0x8493cc54, 0x8551a663, 0x8717183a, 0x86d5720d, 0xa9e2d0a0, 0xa820ba97, 0xaa6604ce,
0xaba46ef9, 0xaeeb787c, 0xaf29124b, 0xad6fac12, 0xacadc625, 0xa7f18118, 0xa633eb2f,
0xa4755576, 0xa5b73f41, 0xa0f829c4, 0xa13a43f3, 0xa37cfdaa, 0xa2be979d, 0xb5c473d0,
0xb40619e7, 0xb640a7be, 0xb782cd89, 0xb2cddb0c, 0xb30fb13b, 0xb1490f62, 0xb08b6555,
0xbbd72268, 0xba15485f, 0xb853f606, 0xb9919c31, 0xbcde8ab4, 0xbd1ce083, 0xbf5a5eda,
0xbe9834ed,
],
[
0, 0xb8bc6765, 0xaa09c88b, 0x12b5afee, 0x8f629757, 0x37def032, 0x256b5fdc, 0x9dd738b9,
0xc5b428ef, 0x7d084f8a, 0x6fbde064, 0xd7018701, 0x4ad6bfb8, 0xf26ad8dd, 0xe0df7733,
0x58631056, 0x5019579f, 0xe8a530fa, 0xfa109f14, 0x42acf871, 0xdf7bc0c8, 0x67c7a7ad,
0x75720843, 0xcdce6f26, 0x95ad7f70, 0x2d111815, 0x3fa4b7fb, 0x8718d09e, 0x1acfe827,
0xa2738f42, 0xb0c620ac, 0x87a47c9, 0xa032af3e, 0x188ec85b, 0xa3b67b5, 0xb28700d0,
0x2f503869, 0x97ec5f0c, 0x8559f0e2, 0x3de59787, 0x658687d1, 0xdd3ae0b4, 0xcf8f4f5a,
0x7733283f, 0xeae41086, 0x525877e3, 0x40edd80d, 0xf851bf68, 0xf02bf8a1, 0x48979fc4,
0x5a22302a, 0xe29e574f, 0x7f496ff6, 0xc7f50893, 0xd540a77d, 0x6dfcc018, 0x359fd04e,
0x8d23b72b, 0x9f9618c5, 0x272a7fa0, 0xbafd4719, 0x241207c, 0x10f48f92, 0xa848e8f7,
0x9b14583d, 0x23a83f58, 0x311d90b6, 0x89a1f7d3, 0x1476cf6a, 0xaccaa80f, 0xbe7f07e1,
0x6c36084, 0x5ea070d2, 0xe61c17b7, 0xf4a9b859, 0x4c15df3c, 0xd1c2e785, 0x697e80e0,
0x7bcb2f0e, 0xc377486b, 0xcb0d0fa2, 0x73b168c7, 0x6104c729, 0xd9b8a04c, 0x446f98f5,
0xfcd3ff90, 0xee66507e, 0x56da371b, 0xeb9274d, 0xb6054028, 0xa4b0efc6, 0x1c0c88a3,
0x81dbb01a, 0x3967d77f, 0x2bd27891, 0x936e1ff4, 0x3b26f703, 0x839a9066, 0x912f3f88,
0x299358ed, 0xb4446054, 0xcf80731, 0x1e4da8df, 0xa6f1cfba, 0xfe92dfec, 0x462eb889,
0x549b1767, 0xec277002, 0x71f048bb, 0xc94c2fde, 0xdbf98030, 0x6345e755, 0x6b3fa09c,
0xd383c7f9, 0xc1366817, 0x798a0f72, 0xe45d37cb, 0x5ce150ae, 0x4e54ff40, 0xf6e89825,
0xae8b8873, 0x1637ef16, 0x48240f8, 0xbc3e279d, 0x21e91f24, 0x99557841, 0x8be0d7af,
0x335cb0ca, 0xed59b63b, 0x55e5d15e, 0x47507eb0, 0xffec19d5, 0x623b216c, 0xda874609,
0xc832e9e7, 0x708e8e82, 0x28ed9ed4, 0x9051f9b1, 0x82e4565f, 0x3a58313a, 0xa78f0983,
0x1f336ee6, 0xd86c108, 0xb53aa66d, 0xbd40e1a4, 0x5fc86c1, 0x1749292f, 0xaff54e4a,
0x322276f3, 0x8a9e1196, 0x982bbe78, 0x2097d91d, 0x78f4c94b, 0xc048ae2e, 0xd2fd01c0,
0x6a4166a5, 0xf7965e1c, 0x4f2a3979, 0x5d9f9697, 0xe523f1f2, 0x4d6b1905, 0xf5d77e60,
0xe762d18e, 0x5fdeb6eb, 0xc2098e52, 0x7ab5e937, 0x680046d9, 0xd0bc21bc, 0x88df31ea,
0x3063568f, 0x22d6f961, 0x9a6a9e04, 0x7bda6bd, 0xbf01c1d8, 0xadb46e36, 0x15080953,
0x1d724e9a, 0xa5ce29ff, 0xb77b8611, 0xfc7e174, 0x9210d9cd, 0x2aacbea8, 0x38191146,
0x80a57623, 0xd8c66675, 0x607a0110, 0x72cfaefe, 0xca73c99b, 0x57a4f122, 0xef189647,
0xfdad39a9, 0x45115ecc, 0x764dee06, 0xcef18963, 0xdc44268d, 0x64f841e8, 0xf92f7951,
0x41931e34, 0x5326b1da, 0xeb9ad6bf, 0xb3f9c6e9, 0xb45a18c, 0x19f00e62, 0xa14c6907,
0x3c9b51be, 0x842736db, 0x96929935, 0x2e2efe50, 0x2654b999, 0x9ee8defc, 0x8c5d7112,
0x34e11677, 0xa9362ece, 0x118a49ab, 0x33fe645, 0xbb838120, 0xe3e09176, 0x5b5cf613,
0x49e959fd, 0xf1553e98, 0x6c820621, 0xd43e6144, 0xc68bceaa, 0x7e37a9cf, 0xd67f4138,
0x6ec3265d, 0x7c7689b3, 0xc4caeed6, 0x591dd66f, 0xe1a1b10a, 0xf3141ee4, 0x4ba87981,
0x13cb69d7, 0xab770eb2, 0xb9c2a15c, 0x17ec639, 0x9ca9fe80, 0x241599e5, 0x36a0360b,
0x8e1c516e, 0x866616a7, 0x3eda71c2, 0x2c6fde2c, 0x94d3b949, 0x90481f0, 0xb1b8e695,
0xa30d497b, 0x1bb12e1e, 0x43d23e48, 0xfb6e592d, 0xe9dbf6c3, 0x516791a6, 0xccb0a91f,
0x740cce7a, 0x66b96194, 0xde0506f1,
],
[
0, 0x3d6029b0, 0x7ac05360, 0x47a07ad0, 0xf580a6c0, 0xc8e08f70, 0x8f40f5a0, 0xb220dc10,
0x30704bc1, 0xd106271, 0x4ab018a1, 0x77d03111, 0xc5f0ed01, 0xf890c4b1, 0xbf30be61,
0x825097d1, 0x60e09782, 0x5d80be32, 0x1a20c4e2, 0x2740ed52, 0x95603142, 0xa80018f2,
0xefa06222, 0xd2c04b92, 0x5090dc43, 0x6df0f5f3, 0x2a508f23, 0x1730a693, 0xa5107a83,
0x98705333, 0xdfd029e3, 0xe2b00053, 0xc1c12f04, 0xfca106b4, 0xbb017c64, 0x866155d4,
0x344189c4, 0x921a074, 0x4e81daa4, 0x73e1f314, 0xf1b164c5, 0xccd14d75, 0x8b7137a5,
0xb6111e15, 0x431c205, 0x3951ebb5, 0x7ef19165, 0x4391b8d5, 0xa121b886, 0x9c419136,
0xdbe1ebe6, 0xe681c256, 0x54a11e46, 0x69c137f6, 0x2e614d26, 0x13016496, 0x9151f347,
0xac31daf7, 0xeb91a027, 0xd6f18997, 0x64d15587, 0x59b17c37, 0x1e1106e7, 0x23712f57,
0x58f35849, 0x659371f9, 0x22330b29, 0x1f532299, 0xad73fe89, 0x9013d739, 0xd7b3ade9,
0xead38459, 0x68831388, 0x55e33a38, 0x124340e8, 0x2f236958, 0x9d03b548, 0xa0639cf8,
0xe7c3e628, 0xdaa3cf98, 0x3813cfcb, 0x573e67b, 0x42d39cab, 0x7fb3b51b, 0xcd93690b,
0xf0f340bb, 0xb7533a6b, 0x8a3313db, 0x863840a, 0x3503adba, 0x72a3d76a, 0x4fc3feda,
0xfde322ca, 0xc0830b7a, 0x872371aa, 0xba43581a, 0x9932774d, 0xa4525efd, 0xe3f2242d,
0xde920d9d, 0x6cb2d18d, 0x51d2f83d, 0x167282ed, 0x2b12ab5d, 0xa9423c8c, 0x9422153c,
0xd3826fec, 0xeee2465c, 0x5cc29a4c, 0x61a2b3fc, 0x2602c92c, 0x1b62e09c, 0xf9d2e0cf,
0xc4b2c97f, 0x8312b3af, 0xbe729a1f, 0xc52460f, 0x31326fbf, 0x7692156f, 0x4bf23cdf,
0xc9a2ab0e, 0xf4c282be, 0xb362f86e, 0x8e02d1de, 0x3c220dce, 0x142247e, 0x46e25eae,
0x7b82771e, 0xb1e6b092, 0x8c869922, 0xcb26e3f2, 0xf646ca42, 0x44661652, 0x79063fe2,
0x3ea64532, 0x3c66c82, 0x8196fb53, 0xbcf6d2e3, 0xfb56a833, 0xc6368183, 0x74165d93,
0x49767423, 0xed60ef3, 0x33b62743, 0xd1062710, 0xec660ea0, 0xabc67470, 0x96a65dc0,
0x248681d0, 0x19e6a860, 0x5e46d2b0, 0x6326fb00, 0xe1766cd1, 0xdc164561, 0x9bb63fb1,
0xa6d61601, 0x14f6ca11, 0x2996e3a1, 0x6e369971, 0x5356b0c1, 0x70279f96, 0x4d47b626,
0xae7ccf6, 0x3787e546, 0x85a73956, 0xb8c710e6, 0xff676a36, 0xc2074386, 0x4057d457,
0x7d37fde7, 0x3a978737, 0x7f7ae87, 0xb5d77297, 0x88b75b27, 0xcf1721f7, 0xf2770847,
0x10c70814, 0x2da721a4, 0x6a075b74, 0x576772c4, 0xe547aed4, 0xd8278764, 0x9f87fdb4,
0xa2e7d404, 0x20b743d5, 0x1dd76a65, 0x5a7710b5, 0x67173905, 0xd537e515, 0xe857cca5,
0xaff7b675, 0x92979fc5, 0xe915e8db, 0xd475c16b, 0x93d5bbbb, 0xaeb5920b, 0x1c954e1b,
0x21f567ab, 0x66551d7b, 0x5b3534cb, 0xd965a31a, 0xe4058aaa, 0xa3a5f07a, 0x9ec5d9ca,
0x2ce505da, 0x11852c6a, 0x562556ba, 0x6b457f0a, 0x89f57f59, 0xb49556e9, 0xf3352c39,
0xce550589, 0x7c75d999, 0x4115f029, 0x6b58af9, 0x3bd5a349, 0xb9853498, 0x84e51d28,
0xc34567f8, 0xfe254e48, 0x4c059258, 0x7165bbe8, 0x36c5c138, 0xba5e888, 0x28d4c7df,
0x15b4ee6f, 0x521494bf, 0x6f74bd0f, 0xdd54611f, 0xe03448af, 0xa794327f, 0x9af41bcf,
0x18a48c1e, 0x25c4a5ae, 0x6264df7e, 0x5f04f6ce, 0xed242ade, 0xd044036e, 0x97e479be,
0xaa84500e, 0x4834505d, 0x755479ed, 0x32f4033d, 0xf942a8d, 0xbdb4f69d, 0x80d4df2d,
0xc774a5fd, 0xfa148c4d, 0x78441b9c, 0x4524322c, 0x28448fc, 0x3fe4614c, 0x8dc4bd5c,
0xb0a494ec, 0xf704ee3c, 0xca64c78c,
],
[
0, 0xcb5cd3a5, 0x4dc8a10b, 0x869472ae, 0x9b914216, 0x50cd91b3, 0xd659e31d, 0x1d0530b8,
0xec53826d, 0x270f51c8, 0xa19b2366, 0x6ac7f0c3, 0x77c2c07b, 0xbc9e13de, 0x3a0a6170,
0xf156b2d5, 0x3d6029b, 0xc88ad13e, 0x4e1ea390, 0x85427035, 0x9847408d, 0x531b9328,
0xd58fe186, 0x1ed33223, 0xef8580f6, 0x24d95353, 0xa24d21fd, 0x6911f258, 0x7414c2e0,
0xbf481145, 0x39dc63eb, 0xf280b04e, 0x7ac0536, 0xccf0d693, 0x4a64a43d, 0x81387798,
0x9c3d4720, 0x57619485, 0xd1f5e62b, 0x1aa9358e, 0xebff875b, 0x20a354fe, 0xa6372650,
0x6d6bf5f5, 0x706ec54d, 0xbb3216e8, 0x3da66446, 0xf6fab7e3, 0x47a07ad, 0xcf26d408,
0x49b2a6a6, 0x82ee7503, 0x9feb45bb, 0x54b7961e, 0xd223e4b0, 0x197f3715, 0xe82985c0,
0x23755665, 0xa5e124cb, 0x6ebdf76e, 0x73b8c7d6, 0xb8e41473, 0x3e7066dd, 0xf52cb578,
0xf580a6c, 0xc404d9c9, 0x4290ab67, 0x89cc78c2, 0x94c9487a, 0x5f959bdf, 0xd901e971,
0x125d3ad4, 0xe30b8801, 0x28575ba4, 0xaec3290a, 0x659ffaaf, 0x789aca17, 0xb3c619b2,
0x35526b1c, 0xfe0eb8b9, 0xc8e08f7, 0xc7d2db52, 0x4146a9fc, 0x8a1a7a59, 0x971f4ae1,
0x5c439944, 0xdad7ebea, 0x118b384f, 0xe0dd8a9a, 0x2b81593f, 0xad152b91, 0x6649f834,
0x7b4cc88c, 0xb0101b29, 0x36846987, 0xfdd8ba22, 0x8f40f5a, 0xc3a8dcff, 0x453cae51,
0x8e607df4, 0x93654d4c, 0x58399ee9, 0xdeadec47, 0x15f13fe2, 0xe4a78d37, 0x2ffb5e92,
0xa96f2c3c, 0x6233ff99, 0x7f36cf21, 0xb46a1c84, 0x32fe6e2a, 0xf9a2bd8f, 0xb220dc1,
0xc07ede64, 0x46eaacca, 0x8db67f6f, 0x90b34fd7, 0x5bef9c72, 0xdd7beedc, 0x16273d79,
0xe7718fac, 0x2c2d5c09, 0xaab92ea7, 0x61e5fd02, 0x7ce0cdba, 0xb7bc1e1f, 0x31286cb1,
0xfa74bf14, 0x1eb014d8, 0xd5ecc77d, 0x5378b5d3, 0x98246676, 0x852156ce, 0x4e7d856b,
0xc8e9f7c5, 0x3b52460, 0xf2e396b5, 0x39bf4510, 0xbf2b37be, 0x7477e41b, 0x6972d4a3,
0xa22e0706, 0x24ba75a8, 0xefe6a60d, 0x1d661643, 0xd63ac5e6, 0x50aeb748, 0x9bf264ed,
0x86f75455, 0x4dab87f0, 0xcb3ff55e, 0x6326fb, 0xf135942e, 0x3a69478b, 0xbcfd3525,
0x77a1e680, 0x6aa4d638, 0xa1f8059d, 0x276c7733, 0xec30a496, 0x191c11ee, 0xd240c24b,
0x54d4b0e5, 0x9f886340, 0x828d53f8, 0x49d1805d, 0xcf45f2f3, 0x4192156, 0xf54f9383,
0x3e134026, 0xb8873288, 0x73dbe12d, 0x6eded195, 0xa5820230, 0x2316709e, 0xe84aa33b,
0x1aca1375, 0xd196c0d0, 0x5702b27e, 0x9c5e61db, 0x815b5163, 0x4a0782c6, 0xcc93f068,
0x7cf23cd, 0xf6999118, 0x3dc542bd, 0xbb513013, 0x700de3b6, 0x6d08d30e, 0xa65400ab,
0x20c07205, 0xeb9ca1a0, 0x11e81eb4, 0xdab4cd11, 0x5c20bfbf, 0x977c6c1a, 0x8a795ca2,
0x41258f07, 0xc7b1fda9, 0xced2e0c, 0xfdbb9cd9, 0x36e74f7c, 0xb0733dd2, 0x7b2fee77,
0x662adecf, 0xad760d6a, 0x2be27fc4, 0xe0beac61, 0x123e1c2f, 0xd962cf8a, 0x5ff6bd24,
0x94aa6e81, 0x89af5e39, 0x42f38d9c, 0xc467ff32, 0xf3b2c97, 0xfe6d9e42, 0x35314de7,
0xb3a53f49, 0x78f9ecec, 0x65fcdc54, 0xaea00ff1, 0x28347d5f, 0xe368aefa, 0x16441b82,
0xdd18c827, 0x5b8cba89, 0x90d0692c, 0x8dd55994, 0x46898a31, 0xc01df89f, 0xb412b3a,
0xfa1799ef, 0x314b4a4a, 0xb7df38e4, 0x7c83eb41, 0x6186dbf9, 0xaada085c, 0x2c4e7af2,
0xe712a957, 0x15921919, 0xdececabc, 0x585ab812, 0x93066bb7, 0x8e035b0f, 0x455f88aa,
0xc3cbfa04, 0x89729a1, 0xf9c19b74, 0x329d48d1, 0xb4093a7f, 0x7f55e9da, 0x6250d962,
0xa90c0ac7, 0x2f987869, 0xe4c4abcc,
],
[
0, 0xa6770bb4, 0x979f1129, 0x31e81a9d, 0xf44f2413, 0x52382fa7, 0x63d0353a, 0xc5a73e8e,
0x33ef4e67, 0x959845d3, 0xa4705f4e, 0x20754fa, 0xc7a06a74, 0x61d761c0, 0x503f7b5d,
0xf64870e9, 0x67de9cce, 0xc1a9977a, 0xf0418de7, 0x56368653, 0x9391b8dd, 0x35e6b369,
0x40ea9f4, 0xa279a240, 0x5431d2a9, 0xf246d91d, 0xc3aec380, 0x65d9c834, 0xa07ef6ba,
0x609fd0e, 0x37e1e793, 0x9196ec27, 0xcfbd399c, 0x69ca3228, 0x582228b5, 0xfe552301,
0x3bf21d8f, 0x9d85163b, 0xac6d0ca6, 0xa1a0712, 0xfc5277fb, 0x5a257c4f, 0x6bcd66d2,
0xcdba6d66, 0x81d53e8, 0xae6a585c, 0x9f8242c1, 0x39f54975, 0xa863a552, 0xe14aee6,
0x3ffcb47b, 0x998bbfcf, 0x5c2c8141, 0xfa5b8af5, 0xcbb39068, 0x6dc49bdc, 0x9b8ceb35,
0x3dfbe081, 0xc13fa1c, 0xaa64f1a8, 0x6fc3cf26, 0xc9b4c492, 0xf85cde0f, 0x5e2bd5bb,
0x440b7579, 0xe27c7ecd, 0xd3946450, 0x75e36fe4, 0xb044516a, 0x16335ade, 0x27db4043,
0x81ac4bf7, 0x77e43b1e, 0xd19330aa, 0xe07b2a37, 0x460c2183, 0x83ab1f0d, 0x25dc14b9,
0x14340e24, 0xb2430590, 0x23d5e9b7, 0x85a2e203, 0xb44af89e, 0x123df32a, 0xd79acda4,
0x71edc610, 0x4005dc8d, 0xe672d739, 0x103aa7d0, 0xb64dac64, 0x87a5b6f9, 0x21d2bd4d,
0xe47583c3, 0x42028877, 0x73ea92ea, 0xd59d995e, 0x8bb64ce5, 0x2dc14751, 0x1c295dcc,
0xba5e5678, 0x7ff968f6, 0xd98e6342, 0xe86679df, 0x4e11726b, 0xb8590282, 0x1e2e0936,
0x2fc613ab, 0x89b1181f, 0x4c162691, 0xea612d25, 0xdb8937b8, 0x7dfe3c0c, 0xec68d02b,
0x4a1fdb9f, 0x7bf7c102, 0xdd80cab6, 0x1827f438, 0xbe50ff8c, 0x8fb8e511, 0x29cfeea5,
0xdf879e4c, 0x79f095f8, 0x48188f65, 0xee6f84d1, 0x2bc8ba5f, 0x8dbfb1eb, 0xbc57ab76,
0x1a20a0c2, 0x8816eaf2, 0x2e61e146, 0x1f89fbdb, 0xb9fef06f, 0x7c59cee1, 0xda2ec555,
0xebc6dfc8, 0x4db1d47c, 0xbbf9a495, 0x1d8eaf21, 0x2c66b5bc, 0x8a11be08, 0x4fb68086,
0xe9c18b32, 0xd82991af, 0x7e5e9a1b, 0xefc8763c, 0x49bf7d88, 0x78576715, 0xde206ca1,
0x1b87522f, 0xbdf0599b, 0x8c184306, 0x2a6f48b2, 0xdc27385b, 0x7a5033ef, 0x4bb82972,
0xedcf22c6, 0x28681c48, 0x8e1f17fc, 0xbff70d61, 0x198006d5, 0x47abd36e, 0xe1dcd8da,
0xd034c247, 0x7643c9f3, 0xb3e4f77d, 0x1593fcc9, 0x247be654, 0x820cede0, 0x74449d09,
0xd23396bd, 0xe3db8c20, 0x45ac8794, 0x800bb91a, 0x267cb2ae, 0x1794a833, 0xb1e3a387,
0x20754fa0, 0x86024414, 0xb7ea5e89, 0x119d553d, 0xd43a6bb3, 0x724d6007, 0x43a57a9a,
0xe5d2712e, 0x139a01c7, 0xb5ed0a73, 0x840510ee, 0x22721b5a, 0xe7d525d4, 0x41a22e60,
0x704a34fd, 0xd63d3f49, 0xcc1d9f8b, 0x6a6a943f, 0x5b828ea2, 0xfdf58516, 0x3852bb98,
0x9e25b02c, 0xafcdaab1, 0x9baa105, 0xfff2d1ec, 0x5985da58, 0x686dc0c5, 0xce1acb71,
0xbbdf5ff, 0xadcafe4b, 0x9c22e4d6, 0x3a55ef62, 0xabc30345, 0xdb408f1, 0x3c5c126c,
0x9a2b19d8, 0x5f8c2756, 0xf9fb2ce2, 0xc813367f, 0x6e643dcb, 0x982c4d22, 0x3e5b4696,
0xfb35c0b, 0xa9c457bf, 0x6c636931, 0xca146285, 0xfbfc7818, 0x5d8b73ac, 0x3a0a617,
0xa5d7ada3, 0x943fb73e, 0x3248bc8a, 0xf7ef8204, 0x519889b0, 0x6070932d, 0xc6079899,
0x304fe870, 0x9638e3c4, 0xa7d0f959, 0x1a7f2ed, 0xc400cc63, 0x6277c7d7, 0x539fdd4a,
0xf5e8d6fe, 0x647e3ad9, 0xc209316d, 0xf3e12bf0, 0x55962044, 0x90311eca, 0x3646157e,
0x7ae0fe3, 0xa1d90457, 0x579174be, 0xf1e67f0a, 0xc00e6597, 0x66796e23, 0xa3de50ad,
0x5a95b19, 0x34414184, 0x92364a30,
],
[
0, 0xccaa009e, 0x4225077d, 0x8e8f07e3, 0x844a0efa, 0x48e00e64, 0xc66f0987, 0xac50919,
0xd3e51bb5, 0x1f4f1b2b, 0x91c01cc8, 0x5d6a1c56, 0x57af154f, 0x9b0515d1, 0x158a1232,
0xd92012ac, 0x7cbb312b, 0xb01131b5, 0x3e9e3656, 0xf23436c8, 0xf8f13fd1, 0x345b3f4f,
0xbad438ac, 0x767e3832, 0xaf5e2a9e, 0x63f42a00, 0xed7b2de3, 0x21d12d7d, 0x2b142464,
0xe7be24fa, 0x69312319, 0xa59b2387, 0xf9766256, 0x35dc62c8, 0xbb53652b, 0x77f965b5,
0x7d3c6cac, 0xb1966c32, 0x3f196bd1, 0xf3b36b4f, 0x2a9379e3, 0xe639797d, 0x68b67e9e,
0xa41c7e00, 0xaed97719, 0x62737787, 0xecfc7064, 0x205670fa, 0x85cd537d, 0x496753e3,
0xc7e85400, 0xb42549e, 0x1875d87, 0xcd2d5d19, 0x43a25afa, 0x8f085a64, 0x562848c8,
0x9a824856, 0x140d4fb5, 0xd8a74f2b, 0xd2624632, 0x1ec846ac, 0x9047414f, 0x5ced41d1,
0x299dc2ed, 0xe537c273, 0x6bb8c590, 0xa712c50e, 0xadd7cc17, 0x617dcc89, 0xeff2cb6a,
0x2358cbf4, 0xfa78d958, 0x36d2d9c6, 0xb85dde25, 0x74f7debb, 0x7e32d7a2, 0xb298d73c,
0x3c17d0df, 0xf0bdd041, 0x5526f3c6, 0x998cf358, 0x1703f4bb, 0xdba9f425, 0xd16cfd3c,
0x1dc6fda2, 0x9349fa41, 0x5fe3fadf, 0x86c3e873, 0x4a69e8ed, 0xc4e6ef0e, 0x84cef90,
0x289e689, 0xce23e617, 0x40ace1f4, 0x8c06e16a, 0xd0eba0bb, 0x1c41a025, 0x92cea7c6,
0x5e64a758, 0x54a1ae41, 0x980baedf, 0x1684a93c, 0xda2ea9a2, 0x30ebb0e, 0xcfa4bb90,
0x412bbc73, 0x8d81bced, 0x8744b5f4, 0x4beeb56a, 0xc561b289, 0x9cbb217, 0xac509190,
0x60fa910e, 0xee7596ed, 0x22df9673, 0x281a9f6a, 0xe4b09ff4, 0x6a3f9817, 0xa6959889,
0x7fb58a25, 0xb31f8abb, 0x3d908d58, 0xf13a8dc6, 0xfbff84df, 0x37558441, 0xb9da83a2,
0x7570833c, 0x533b85da, 0x9f918544, 0x111e82a7, 0xddb48239, 0xd7718b20, 0x1bdb8bbe,
0x95548c5d, 0x59fe8cc3, 0x80de9e6f, 0x4c749ef1, 0xc2fb9912, 0xe51998c, 0x4949095,
0xc83e900b, 0x46b197e8, 0x8a1b9776, 0x2f80b4f1, 0xe32ab46f, 0x6da5b38c, 0xa10fb312,
0xabcaba0b, 0x6760ba95, 0xe9efbd76, 0x2545bde8, 0xfc65af44, 0x30cfafda, 0xbe40a839,
0x72eaa8a7, 0x782fa1be, 0xb485a120, 0x3a0aa6c3, 0xf6a0a65d, 0xaa4de78c, 0x66e7e712,
0xe868e0f1, 0x24c2e06f, 0x2e07e976, 0xe2ade9e8, 0x6c22ee0b, 0xa088ee95, 0x79a8fc39,
0xb502fca7, 0x3b8dfb44, 0xf727fbda, 0xfde2f2c3, 0x3148f25d, 0xbfc7f5be, 0x736df520,
0xd6f6d6a7, 0x1a5cd639, 0x94d3d1da, 0x5879d144, 0x52bcd85d, 0x9e16d8c3, 0x1099df20,
0xdc33dfbe, 0x513cd12, 0xc9b9cd8c, 0x4736ca6f, 0x8b9ccaf1, 0x8159c3e8, 0x4df3c376,
0xc37cc495, 0xfd6c40b, 0x7aa64737, 0xb60c47a9, 0x3883404a, 0xf42940d4, 0xfeec49cd,
0x32464953, 0xbcc94eb0, 0x70634e2e, 0xa9435c82, 0x65e95c1c, 0xeb665bff, 0x27cc5b61,
0x2d095278, 0xe1a352e6, 0x6f2c5505, 0xa386559b, 0x61d761c, 0xcab77682, 0x44387161,
0x889271ff, 0x825778e6, 0x4efd7878, 0xc0727f9b, 0xcd87f05, 0xd5f86da9, 0x19526d37,
0x97dd6ad4, 0x5b776a4a, 0x51b26353, 0x9d1863cd, 0x1397642e, 0xdf3d64b0, 0x83d02561,
0x4f7a25ff, 0xc1f5221c, 0xd5f2282, 0x79a2b9b, 0xcb302b05, 0x45bf2ce6, 0x89152c78,
0x50353ed4, 0x9c9f3e4a, 0x121039a9, 0xdeba3937, 0xd47f302e, 0x18d530b0, 0x965a3753,
0x5af037cd, 0xff6b144a, 0x33c114d4, 0xbd4e1337, 0x71e413a9, 0x7b211ab0, 0xb78b1a2e,
0x39041dcd, 0xf5ae1d53, 0x2c8e0fff, 0xe0240f61, 0x6eab0882, 0xa201081c, 0xa8c40105,
0x646e019b, 0xeae10678, 0x264b06e6,
],
];

⚠️ Potential issue | 🟠 Major

Data race hazard: static mut should be immutable static.

The CRC32 lookup table is declared as pub static mut but its contents are never modified at runtime—it's initialized with constant values. Using static mut creates a data race hazard if multiple threads call lzma_crc32 concurrently, as accessing mutable statics requires unsafe and provides no synchronization.

Change to an immutable static to eliminate the race condition and allow safe concurrent access.

🛡️ Suggested fix
 #[no_mangle]
-pub static mut lzma_crc32_table: [[u32; 256]; 8] = [
+pub static lzma_crc32_table: [[u32; 256]; 8] = [
     [
         0, 0x77073096, 0xee0e612c, ...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/crc32_fast.rs` around lines 2 - 316, The CRC32 table
lzma_crc32_table is declared as mutable static (pub static mut) but never
modified; change its declaration to an immutable static (remove mut) so it
becomes pub static lzma_crc32_table: [[u32; 256]; 8] = [...] and update any call
sites (e.g., lzma_crc32 or other functions that currently access
lzma_crc32_table via unsafe) to use the immutable static without unsafe blocks.
Ensure the table type and initializer stay the same and remove unnecessary
unsafe usage around accesses.
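To make the difference concrete, here is a minimal sketch (hypothetical table and helper, not the actual liblzma symbols): reads from an immutable `static` are safe from any thread, while a `static mut` would force every access through `unsafe` with no synchronization.

```rust
// Immutable static: read-only data, safely shareable across threads.
// `#[no_mangle]` still exports a stable symbol name for FFI callers.
#[no_mangle]
pub static SMALL_TABLE: [u32; 4] = [0x0000_0000, 0x7707_3096, 0xee0e_612c, 0x9909_51ba];

// Lookup needs no `unsafe` block because the static is immutable.
pub fn table_lookup(index: usize) -> u32 {
    SMALL_TABLE[index & 3]
}
```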

Comment thread liblzma-rs/src/check/crc64_fast.rs Outdated
Comment on lines +2 to +4
#[no_mangle]
pub static mut lzma_crc64_table: [[u64; 256]; 4] = [
[

⚠️ Potential issue | 🟡 Minor

static mut table creates potential for data races.

The CRC64 table is declared as static mut with #[no_mangle], meaning it could theoretically be modified by external C code at runtime. If this table is intended to be immutable after initialization (which it appears to be, as a precomputed lookup table), consider using an immutable static instead.

Suggested change if table is read-only
 #[no_mangle]
-pub static mut lzma_crc64_table: [[u64; 256]; 4] = [
+pub static lzma_crc64_table: [[u64; 256]; 4] = [

Note: If the original C code requires a mutable symbol for ABI compatibility, keep static mut but document the thread-safety expectations.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 #[no_mangle]
-pub static mut lzma_crc64_table: [[u64; 256]; 4] = [
+pub static lzma_crc64_table: [[u64; 256]; 4] = [
     [
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/crc64_fast.rs` around lines 2 - 4, The table
lzma_crc64_table is declared as a mutable global (pub static mut
lzma_crc64_table) which allows external mutation and risks data races; change it
to an immutable global (pub static lzma_crc64_table: [[u64; 256]; 4]) while
preserving #[no_mangle] so the symbol remains stable for FFI, or if ABI requires
a mutable symbol keep pub static mut but add a clear safety comment and document
thread-safety expectations near the lzma_crc64_table declaration.

Comment on lines +7 to +16
static mut SHA256_K: [u32; 64] = [
0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
0xe49b69c1, 0xefbe4786, 0xfc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x6ca6351, 0x14292967,
0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2,
];

🧹 Nitpick | 🔵 Trivial

Use immutable static for SHA-256 constants/state seeds.

Line 7 and Line 537 define read-only data as static mut. This unnecessarily introduces mutable global state in hashing code.

♻️ Proposed change
-static mut SHA256_K: [u32; 64] = [
+static SHA256_K: [u32; 64] = [
@@
-    static mut s: [u32; 8] = [
+    static S: [u32; 8] = [
@@
-        &raw const s as *const u8,
+        &raw const S as *const u8,

Also applies to: 537-543

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/sha256.rs` around lines 7 - 16, Replace mutable globals
that hold constant SHA-256 seeds with immutable statics: change static mut
SHA256_K: [u32; 64] = [...] to static SHA256_K: [u32; 64] = [...], and do the
same for the other read-only arrays defined around lines 537-543 (the SHA-256
state/seed arrays); if any code currently mutates those symbols, remove that
mutation or relocate mutable state into a local/controlled mutable structure
(e.g., local variables, thread-local, or a Mutex-wrapped static) so the constant
arrays remain immutable.

Comment thread liblzma-rs/src/common/alone_encoder.rs Outdated
Comment on lines +148 to +153
if lzma_lzma_lclppb_encode(options, &raw mut (*coder).header as *mut u8) {
return LZMA_OPTIONS_ERROR;
}
if (*options).dict_size < LZMA_DICT_SIZE_MIN as u32 {
return LZMA_OPTIONS_ERROR;
}

⚠️ Potential issue | 🔴 Critical

Guard options before dereferencing in init.

Line 148 and Line 151 dereference options unconditionally. Add a null check to avoid FFI-side crashes.

🐛 Proposed fix
 unsafe extern "C" fn alone_encoder_init(
@@
 ) -> lzma_ret {
+    if options.is_null() {
+        return LZMA_PROG_ERROR;
+    }
+
     if core::mem::transmute::<
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/alone_encoder.rs` around lines 148 - 153, The code
dereferences the FFI pointer options in init without checking for null; before
calling lzma_lzma_lclppb_encode or accessing (*options).dict_size, add a null
check for options and return LZMA_PROG_ERROR if options is null, ensuring you
validate options first (e.g. check options == null before invoking
lzma_lzma_lclppb_encode or reading (*options).dict_size) so neither
lzma_lzma_lclppb_encode nor the subsequent (*options).dict_size access can cause
an FFI-side crash.
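The guard pattern can be sketched as follows; `Options`, `encoder_init`, the return codes, and the 4096 threshold are illustrative stand-ins, not the real liblzma definitions:

```rust
use std::os::raw::c_int;

// Hypothetical return codes standing in for lzma_ret values.
const OK: c_int = 0;
const OPTIONS_ERROR: c_int = 8;
const PROG_ERROR: c_int = 11;

#[repr(C)]
pub struct Options {
    pub dict_size: u32,
}

/// Validate the raw pointer before any field access.
///
/// # Safety
/// `options` must be either null or a pointer to a valid `Options`.
pub unsafe extern "C" fn encoder_init(options: *const Options) -> c_int {
    if options.is_null() {
        return PROG_ERROR; // reject before the first dereference
    }
    if (*options).dict_size < 4096 {
        return OPTIONS_ERROR;
    }
    OK
}
```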

Comment on lines +137 to +143
if (*coder).this_finished {
if (*coder).temp.size != 0 {
return LZMA_DATA_ERROR;
}
if (*coder).next_finished {
return LZMA_STREAM_END;
}

⚠️ Potential issue | 🔴 Critical

Incorrect end-of-stream leftover check can raise false LZMA_DATA_ERROR.

At Line 138, temp.size != 0 is the wrong predicate. It should validate unread bytes (temp.pos != temp.size). As written, a fully consumed temp buffer can still be treated as invalid.

💡 Suggested fix
-        if (*coder).temp.size != 0 {
+        if (*coder).temp.pos != (*coder).temp.size {
             return LZMA_DATA_ERROR;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 137 - 143, The end-of-stream
leftover check in the lz_decoder (when (*coder).this_finished) uses the wrong
predicate: it currently tests (*coder).temp.size != 0 which is true for any
non-zero buffer capacity and can falsely raise LZMA_DATA_ERROR; change this to
check unread bytes by comparing position against size (e.g., (*coder).temp.pos
!= (*coder).temp.size) so only when there are unread bytes in coder.temp you
return LZMA_DATA_ERROR, leaving the existing next_finished -> LZMA_STREAM_END
logic unchanged.
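The distinction can be illustrated with a toy buffer (names are illustrative): `size` is how much was filled, `pos` how much was consumed, so leftover data means `pos != size`, not `size != 0`.

```rust
// Toy stand-in for the coder's temp buffer.
struct Temp {
    pos: usize,  // bytes already consumed
    size: usize, // bytes filled into the buffer
}

// Correct end-of-stream predicate: unread bytes remain only while
// consumption lags the fill level. Testing `size != 0` instead would
// also flag a buffer that was filled and then fully drained.
fn has_unread(t: &Temp) -> bool {
    t.pos != t.size
}
```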

Comment thread liblzma-rs/src/lz/lz_decoder.rs
Comment on lines +260 to +267
if (*coder).dict.size != alloc_size {
lzma_free((*coder).dict.buf as *mut c_void, allocator);
(*coder).dict.buf =
lzma_alloc(alloc_size.wrapping_add(LZ_DICT_EXTRA as size_t), allocator) as *mut u8;
if (*coder).dict.buf.is_null() {
return LZMA_MEM_ERROR;
}
(*coder).dict.size = alloc_size;

⚠️ Potential issue | 🟠 Major

Reallocation failure leaves stale dictionary metadata.

Lines 261–266 free dict.buf before allocating a replacement, but on allocation failure dict.size stays stale. A later init can skip allocation for that stale size and dereference a null dict.buf.

💡 Suggested fix
     if (*coder).dict.size != alloc_size {
         lzma_free((*coder).dict.buf as *mut c_void, allocator);
+        (*coder).dict.buf = core::ptr::null_mut();
+        (*coder).dict.size = 0;
         (*coder).dict.buf =
             lzma_alloc(alloc_size.wrapping_add(LZ_DICT_EXTRA as size_t), allocator) as *mut u8;
         if (*coder).dict.buf.is_null() {
             return LZMA_MEM_ERROR;
         }
         (*coder).dict.size = alloc_size;
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     if (*coder).dict.size != alloc_size {
         lzma_free((*coder).dict.buf as *mut c_void, allocator);
+        (*coder).dict.buf = core::ptr::null_mut();
+        (*coder).dict.size = 0;
         (*coder).dict.buf =
             lzma_alloc(alloc_size.wrapping_add(LZ_DICT_EXTRA as size_t), allocator) as *mut u8;
         if (*coder).dict.buf.is_null() {
             return LZMA_MEM_ERROR;
         }
         (*coder).dict.size = alloc_size;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 260 - 267, Reallocation frees
(*coder).dict.buf then tries lzma_alloc but if allocation returns null
(*coder).dict.size remains the old (stale) value; update the failure path in the
block handling lzma_alloc so that when lzma_alloc returns null you explicitly
set (*coder).dict.buf = ptr::null_mut() (it already is) and reset
(*coder).dict.size = 0 before returning LZMA_MEM_ERROR; locate the code around
the symbols (*coder).dict.buf, (*coder).dict.size, lzma_free, lzma_alloc,
LZ_DICT_EXTRA and ensure the size is cleared on allocation failure to avoid
later logic assuming a nonzero size with a null buf.
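The invariant the fix restores (metadata never describes a freed buffer) can be sketched in safe Rust; `Dict`, `realloc_dict`, and the `alloc_ok` flag are illustrative, not the actual FFI code:

```rust
// Toy dictionary: `size` must be 0 whenever `buf` is absent.
struct Dict {
    buf: Option<Vec<u8>>,
    size: usize,
}

// `alloc_ok` simulates whether the allocator succeeds.
fn realloc_dict(d: &mut Dict, new_size: usize, alloc_ok: bool) -> bool {
    if d.size != new_size {
        d.buf = None; // free the old buffer
        d.size = 0;   // invalidate metadata *before* reallocating
        if !alloc_ok {
            return false; // mem-error path: size stays 0, buf stays None
        }
        d.buf = Some(vec![0u8; new_size]);
        d.size = new_size;
    }
    true
}
```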

Comment thread liblzma-rs/src/lz/lz_decoder.rs Outdated
Comment on lines +292 to +297
pub extern "C" fn lzma_lz_decoder_memusage(dictionary_size: size_t) -> u64 {
(core::mem::size_of::<lzma_coder>() as u64)
.wrapping_add(dictionary_size as u64)
.wrapping_add((2 * LZ_DICT_REPEAT_MAX) as u64)
.wrapping_add(LZ_DICT_EXTRA as u64)
}

⚠️ Potential issue | 🟠 Major

lzma_lz_decoder_memusage does not match init-time sizing rules.

At Lines 292–297, memusage omits the minimum dictionary clamp and 16-byte alignment used in lzma_lz_decoder_init (Lines 245–257), so it can underreport required memory.

💡 Suggested fix
 pub extern "C" fn lzma_lz_decoder_memusage(dictionary_size: size_t) -> u64 {
+    let mut dict = dictionary_size;
+    if dict < 4096 {
+        dict = 4096;
+    }
+    if dict
+        > (SIZE_MAX as size_t)
+            .wrapping_sub(15)
+            .wrapping_sub((2 * LZ_DICT_REPEAT_MAX) as size_t)
+            .wrapping_sub(LZ_DICT_EXTRA as size_t)
+    {
+        return UINT64_MAX;
+    }
+    dict = dict.wrapping_add(15) & !(15 as size_t);
     (core::mem::size_of::<lzma_coder>() as u64)
-        .wrapping_add(dictionary_size as u64)
+        .wrapping_add(dict as u64)
         .wrapping_add((2 * LZ_DICT_REPEAT_MAX) as u64)
         .wrapping_add(LZ_DICT_EXTRA as u64)
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 pub extern "C" fn lzma_lz_decoder_memusage(dictionary_size: size_t) -> u64 {
+    let mut dict = dictionary_size;
+    if dict < 4096 {
+        dict = 4096;
+    }
+    if dict
+        > (SIZE_MAX as size_t)
+            .wrapping_sub(15)
+            .wrapping_sub((2 * LZ_DICT_REPEAT_MAX) as size_t)
+            .wrapping_sub(LZ_DICT_EXTRA as size_t)
+    {
+        return UINT64_MAX;
+    }
+    dict = dict.wrapping_add(15) & !(15 as size_t);
     (core::mem::size_of::<lzma_coder>() as u64)
-        .wrapping_add(dictionary_size as u64)
+        .wrapping_add(dict as u64)
         .wrapping_add((2 * LZ_DICT_REPEAT_MAX) as u64)
         .wrapping_add(LZ_DICT_EXTRA as u64)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 292 - 297, The memusage
calculation in lzma_lz_decoder_memusage currently omits the init-time minimum
dictionary clamp and 16-byte alignment used in lzma_lz_decoder_init; update
lzma_lz_decoder_memusage to first clamp dictionary_size to at least LZ_DICT_MIN
(matching lzma_lz_decoder_init), then compute the dictionary bytes using that
clamped value and round/align the dictionary area up to 16 bytes before adding
core::mem::size_of::<lzma_coder>(), (2 * LZ_DICT_REPEAT_MAX), and LZ_DICT_EXTRA
so the reported u64 memory matches actual init requirements.
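The sizing rule in the suggested fix (clamp to a 4096-byte minimum, then round up to a 16-byte boundary) can be written as a small helper; the constants mirror the fix above and are not independently verified against liblzma:

```rust
// Clamp the requested dictionary size to the minimum, then round
// up to the next 16-byte boundary, as the init path is said to do.
fn effective_dict_size(requested: usize) -> usize {
    let clamped = requested.max(4096);
    clamped.wrapping_add(15) & !15
}
```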

Comment on lines +2 to +344
pub static mut lzma_fastpos: [u8; 8192] = [
0, 1, 2, 3, 4, 4, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,
14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,
14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15,
15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,
15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,
15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
];

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find all usages of lzma_fastpos
echo "=== All usages of lzma_fastpos ==="
rg -n 'lzma_fastpos' --type rust

echo ""
echo "=== Check for mutable access/writes ==="
rg -n 'lzma_fastpos\s*\[' --type rust -A 2

echo ""
echo "=== Check for assignment patterns ==="
rg -n '&mut.*lzma_fastpos|lzma_fastpos.*=' --type rust -A 2

echo ""
echo "=== File structure info ==="
wc -l liblzma-rs/src/lzma/fastpos_table.rs
head -5 liblzma-rs/src/lzma/fastpos_table.rs

Repository: youknowone/xz2-rs

Length of output: 3518


Remove unnecessary mut from read-only lookup table.

The lzma_fastpos array is declared as static mut but is only ever read from—all six usages in the codebase perform index-based reads with no writes or mutable borrows. Unnecessary mutable global state increases the surface for unsafe access; the table should be immutable.
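A minimal sketch of why this matters, using a hypothetical 8-entry table in place of the real 8192-entry one: every read of a `static mut` requires an `unsafe` block (and is rejected outright by newer compilers via the `static_mut_refs` lint), while a plain `static` can be indexed in safe code.

```rust
// Hypothetical stand-in for lzma_fastpos; the real table has 8192 entries.
static FASTPOS: [u8; 8] = [0, 1, 2, 3, 4, 4, 5, 5];

// With an immutable `static`, the lookup is ordinary safe Rust --
// no `unsafe` block needed at any of the call sites.
fn get_pos_slot(dist: usize) -> u8 {
    FASTPOS[dist & 7]
}

fn main() {
    println!("slot = {}", get_pos_slot(5)); // prints "slot = 4"
}
```

Had the table stayed `static mut`, each of the six read sites would need `unsafe { FASTPOS[i] }`, widening the audit surface for no benefit since nothing ever writes to it.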

Proposed fix
-pub static mut lzma_fastpos: [u8; 8192] = [
+pub static lzma_fastpos: [u8; 8192] = [
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub static mut lzma_fastpos: [u8; 8192] = [
0, 1, 2, 3, 4, 4, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,
14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,
14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15,
15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,
15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,
15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19,
19, 19, 19, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25,
];
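For readers following the table data above: this is the classic LZMA "fastpos" lookup, mapping a match distance to its position slot so the encoder can avoid a bit-scan per match. As a minimal sketch (the `dist_slot` helper name is hypothetical, not part of this crate's API), the 8192-entry table can be regenerated from the bit length of the distance and spot-checked against the literal values visible in the listing:

```rust
/// Position slot for a distance: identity below 4, otherwise
/// 2*(bitlen-1) plus the bit just below the leading one.
/// Hypothetical helper, shown only to illustrate the table's contents.
fn dist_slot(dist: u32) -> u8 {
    if dist < 2 {
        return dist as u8;
    }
    let n = 32 - dist.leading_zeros(); // bit length of `dist`
    (2 * (n - 1) + ((dist >> (n - 2)) & 1)) as u8
}

fn main() {
    // Regenerate all 8192 entries and compare a few against the
    // literals that appear in the static table above.
    let table: Vec<u8> = (0u32..8192).map(dist_slot).collect();
    assert_eq!(&table[..12], &[0, 1, 2, 3, 4, 4, 5, 5, 6, 6, 6, 6]);
    assert_eq!(table[4095], 23); // last entry before the run of 24s
    assert_eq!(table[4096], 24); // first entry of the run of 24s
    assert_eq!(table[8191], 25); // final entry, matching the table's tail
    println!("ok");
}
```

The table trades 8 KiB of read-only data for a branch-free lookup on the encoder's hot path, which is why the translated source carries it verbatim rather than computing slots on the fly.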
pub static lzma_fastpos: [u8; 8192] = [
0, 1, 2, 3, 4, 4, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11,
    11, 11, 11, 11,
    /* … several hundred generated entries elided: runs of the values
       11 through 25, as emitted into fastpos_table.rs … */
    25, 25, 25, 25,
];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lzma/fastpos_table.rs` around lines 2 - 344, The lzma_fastpos
lookup table is declared as mutable (static mut) though it's only read; change
it to an immutable static to remove unnecessary mutable global state. Locate the
declaration of lzma_fastpos (symbol: lzma_fastpos) and replace "static mut" with
"static" so the array becomes read-only; confirm callers (all index reads)
compile without needing mutable borrows and run tests/build to ensure no unsafe
writes rely on the previous mutability.
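
The `static mut` → `static` change can be sketched on a toy table (the table contents and names here are illustrative stand-ins, not the real `lzma_fastpos` data):

```rust
// A read-only lookup table: declaring it `static` (not `static mut`)
// makes every access safe and lets the compiler place the data in
// read-only memory.
static FASTPOS: [u8; 8] = [0, 0, 1, 1, 2, 2, 2, 2];

// Reads need no `unsafe` block once the table is immutable.
fn get_dist_slot(dist: usize) -> u8 {
    FASTPOS[dist & 7]
}

fn main() {
    // With `static mut`, each of these reads would require `unsafe`.
    assert_eq!(get_dist_slot(0), 0);
    assert_eq!(get_dist_slot(5), 2);
    println!("ok");
}
```

Since the table is never written after generation, nothing in the callers should need to change beyond dropping the `unsafe` blocks around reads.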


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 78

♻️ Duplicate comments (29)
liblzma-rs/src/common/vli_decoder.rs (1)

2-8: ⚠️ Potential issue | 🟡 Minor

Missing #[no_mangle] attribute and additional null pointer concerns.

The function uses extern "C" without #[no_mangle]. Additionally, in_pos and in_0 pointers are dereferenced without null checks (e.g., lines 13, 23, 28).

🛡️ Proposed fix for missing attribute and null checks
 use crate::types::*;
+#[no_mangle]
 pub unsafe extern "C" fn lzma_vli_decode(
     vli: *mut lzma_vli,
     mut vli_pos: *mut size_t,
     in_0: *const u8,
     in_pos: *mut size_t,
     in_size: size_t,
 ) -> lzma_ret {
+    if vli.is_null() || in_pos.is_null() || in_0.is_null() {
+        return LZMA_PROG_ERROR;
+    }
     let mut vli_pos_internal: size_t = 0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/vli_decoder.rs` around lines 2 - 8, The exported C
function lzma_vli_decode is missing #[no_mangle] and lacks null-pointer checks
before dereferencing inputs; add #[no_mangle] above pub unsafe extern "C" fn
lzma_vli_decode(...) and fold in early null checks for pointers used (at
minimum: vli, vli_pos, in_0, in_pos) and return an appropriate lzma_ret error
(e.g., LZMA_PROG_ERROR or equivalent) on null; ensure subsequent code assumes
non-null after these guards so dereferences (reads/writes) are safe and keep the
existing return path semantics for other error cases.
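
The guard pattern described above, sketched on a hypothetical exported function (the name `demo_decode` and the return codes are placeholders, not liblzma's API):

```rust
// Stand-in C-style return codes playing the role of lzma_ret.
const OK: i32 = 0;
const PROG_ERROR: i32 = 11;

/// Reads one byte from `buf` at `*pos` into `out`, advancing `*pos`.
/// All pointers are checked before any dereference, so a null
/// argument yields an error code instead of undefined behavior.
#[no_mangle]
pub unsafe extern "C" fn demo_decode(
    out: *mut u8,
    buf: *const u8,
    pos: *mut usize,
    len: usize,
) -> i32 {
    if out.is_null() || buf.is_null() || pos.is_null() {
        return PROG_ERROR; // early-out: nothing has been dereferenced yet
    }
    if *pos >= len {
        return PROG_ERROR;
    }
    *out = *buf.add(*pos);
    *pos += 1;
    OK
}

fn main() {
    let data = [0xAAu8, 0xBB];
    let mut pos = 0usize;
    let mut byte = 0u8;
    let ret = unsafe { demo_decode(&mut byte, data.as_ptr(), &mut pos, data.len()) };
    assert_eq!((ret, byte, pos), (OK, 0xAA, 1));
    // A null pointer is rejected instead of crashing.
    let ret = unsafe { demo_decode(core::ptr::null_mut(), data.as_ptr(), &mut pos, 2) };
    assert_eq!(ret, PROG_ERROR);
    println!("ok");
}
```

The same shape applies to `lzma_vli_decode`: guard every pointer once at entry, then the rest of the body can assume non-null.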
liblzma-rs/src/common/block_util.rs (1)

51-57: 🧹 Nitpick | 🔵 Trivial

Minor: unpadded_size doesn't need mut.

The variable can be computed with a conditional expression instead of mutation.

♻️ Suggested refactor
 pub unsafe extern "C" fn lzma_block_total_size(block: *const lzma_block) -> lzma_vli {
-    let mut unpadded_size: lzma_vli = lzma_block_unpadded_size(block);
-    if unpadded_size != LZMA_VLI_UNKNOWN {
-        unpadded_size = vli_ceil4(unpadded_size);
-    }
-    unpadded_size
+    let unpadded_size: lzma_vli = lzma_block_unpadded_size(block);
+    if unpadded_size != LZMA_VLI_UNKNOWN {
+        vli_ceil4(unpadded_size)
+    } else {
+        unpadded_size
+    }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_util.rs` around lines 51 - 57, In
lzma_block_total_size, remove the unnecessary mut by computing unpadded_size
with a conditional expression: call lzma_block_unpadded_size(block) into a
non-mutable let and assign either vli_ceil4(unpadded_size) or the original value
in an if expression, then return that value; update the function body around the
unpadded_size binding to avoid mutation while keeping the existing semantics.
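
The same expression-oriented rewrite in miniature, with a stand-in `ceil4` helper and sentinel (the real code uses `vli_ceil4` and `LZMA_VLI_UNKNOWN`):

```rust
const UNKNOWN: u64 = u64::MAX; // stand-in for LZMA_VLI_UNKNOWN

// Round up to the next multiple of four.
fn ceil4(v: u64) -> u64 {
    (v + 3) & !3
}

// Expression form: no `mut`, the result is decided in one place.
fn total_size(unpadded: u64) -> u64 {
    if unpadded != UNKNOWN {
        ceil4(unpadded)
    } else {
        unpadded
    }
}

fn main() {
    assert_eq!(total_size(10), 12);
    assert_eq!(total_size(UNKNOWN), UNKNOWN);
    println!("ok");
}
```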
liblzma-rs/src/common/easy_decoder_memusage.rs (1)

35-37: ⚠️ Potential issue | 🟠 Major

Inconsistent error return value: should return UINT64_MAX instead of UINT32_MAX as u64.

The function returns UINT32_MAX as u64 (0xFFFFFFFF) on preset failure, but memusage functions typically return UINT64_MAX to indicate errors. This inconsistency could cause callers to misinterpret a valid large memory usage as an error.

🐛 Proposed fix
     if unsafe { lzma_easy_preset(::core::ptr::addr_of_mut!(opt_easy), preset) } {
-        return UINT32_MAX as u64;
+        return UINT64_MAX;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/easy_decoder_memusage.rs` around lines 35 - 37, The
error return uses UINT32_MAX as u64 which is inconsistent with memusage
conventions; update the early-return in the memusage function in
easy_decoder_memusage.rs that checks the result of
lzma_easy_preset(::core::ptr::addr_of_mut!(opt_easy), preset) to return
UINT64_MAX instead of UINT32_MAX as u64 so callers receive the canonical
UINT64_MAX error sentinel (refer to the lzma_easy_preset call and the
opt_easy/preset variables to locate the statement).
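
The ambiguity behind this finding can be shown in a few lines of standalone Rust; the 16 GiB threshold and the helper name are illustrative, not liblzma constants:

```rust
// Why `UINT32_MAX as u64` is a poor error sentinel: 0xFFFF_FFFF (~4 GiB) is a
// value a caller could plausibly receive as a real memory-usage estimate,
// while u64::MAX can never be one. The 16 GiB threshold is illustrative only.
const BAD_SENTINEL: u64 = u32::MAX as u64; // 4_294_967_295
const GOOD_SENTINEL: u64 = u64::MAX;

fn looks_like_valid_usage(estimate: u64) -> bool {
    // Large presets can legitimately report several GiB of required memory.
    estimate <= 16 * 1024 * 1024 * 1024
}
```

A caller filtering on the 32-bit sentinel cannot distinguish an error from a genuine ~4 GiB estimate, which is exactly the failure mode described above.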
liblzma-rs/src/common/microlzma_encoder.rs (1)

24-42: ⚠️ Potential issue | 🔴 Critical

Panic risk: unwrap() calls in extern "C" function will cause undefined behavior.

Lines 24 and 32 use unwrap() on Option function pointers. If either is None, the panic will cross the FFI boundary causing undefined behavior. Replace with explicit checks that return an error code.

🛡️ Proposed fix
     let coder: *mut lzma_microlzma_coder = coder_ptr as *mut lzma_microlzma_coder;
     let out_start: size_t = *out_pos;
     let in_start: size_t = *in_pos;
     let mut uncomp_size: u64 = 0;
-    if (*coder).lzma.set_out_limit.unwrap()(
+    let set_out_limit = match (*coder).lzma.set_out_limit {
+        Some(f) => f,
+        None => return LZMA_PROG_ERROR,
+    };
+    if set_out_limit(
         (*coder).lzma.coder,
         ::core::ptr::addr_of_mut!(uncomp_size),
         out_size.wrapping_sub(*out_pos) as u64,
     ) != LZMA_OK
     {
         return LZMA_PROG_ERROR;
     }
-    let ret: lzma_ret = (*coder).lzma.code.unwrap()(
+    let code = match (*coder).lzma.code {
+        Some(f) => f,
+        None => return LZMA_PROG_ERROR,
+    };
+    let ret: lzma_ret = code(
         (*coder).lzma.coder,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/microlzma_encoder.rs` around lines 24 - 42, The code
currently calls unwrap() on the Option function pointers
(*coder).lzma.set_out_limit and (*coder).lzma.code inside the microlzma encoder
path, which can panic across the FFI boundary; replace these unwrap() calls with
explicit None checks: verify that (*coder).lzma.set_out_limit.is_some() before
calling it and return LZMA_PROG_ERROR if None, and likewise check
(*coder).lzma.code.is_some() before invoking it and return an appropriate
lzma_ret (e.g., LZMA_PROG_ERROR) if missing, then invoke the function only
through the binding obtained from that check (for example via match or if let)
so that no unwrap() remains on the FFI path, referencing the set_out_limit and
code function pointers and the surrounding lzma_ret handling.
liblzma-rs/src/common/vli_encoder.rs (1)

2-17: ⚠️ Potential issue | 🔴 Critical

Missing null pointer validation for out and out_pos.

The function dereferences out_pos (lines 12, 15) and out (lines 24, 35) without validating they are non-null. A null pointer would cause undefined behavior. The check for vli_pos at line 10 exists, but out and out_pos are unconditionally dereferenced.

🛡️ Proposed fix
 pub unsafe extern "C" fn lzma_vli_encode(
     mut vli: lzma_vli,
     mut vli_pos: *mut size_t,
     out: *mut u8,
     out_pos: *mut size_t,
     out_size: size_t,
 ) -> lzma_ret {
+    if out.is_null() || out_pos.is_null() {
+        return LZMA_PROG_ERROR;
+    }
     let mut vli_pos_internal: size_t = 0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/vli_encoder.rs` around lines 2 - 17, In
lzma_vli_encode, add null-pointer validation for out_pos and out before any
dereference: check if out_pos.is_null() and if so return LZMA_PROG_ERROR (do
this before using *out_pos or before the vli_pos fallback logic), and check if
out.is_null() and if so return LZMA_PROG_ERROR before writing to *out; keep
existing buffer-size checks (*out_pos >= out_size) but only after confirming
out_pos is non-null. Use the lzma_vli_encode symbol and the out/out_pos
identifiers to locate where to insert these guards.
liblzma-rs/src/common/easy_buffer_encoder.rs (2)

2-13: 🧹 Nitpick | 🔵 Trivial

Consider importing the crate-local function directly instead of via extern "C" declaration.

The lzma_stream_buffer_encode function appears to be defined within this crate as a pub unsafe extern "C" fn. Redeclaring it through an extern "C" block introduces indirection and relies on link-time symbol resolution. A direct import would make the dependency explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/easy_buffer_encoder.rs` around lines 2 - 13, The extern
"C" block declaring lzma_stream_buffer_encode should be removed and the
crate-local function imported directly; locate the pub unsafe extern "C" fn
lzma_stream_buffer_encode definition and replace the extern declaration with a
direct use/import (for example use crate::...::lzma_stream_buffer_encode;) so
callers continue to call the same unsafe function without relying on external
linkage, keep calls as unsafe and update any module path to match where the
function is defined.

14-23: ⚠️ Potential issue | 🟠 Major

Missing #[no_mangle] attribute for C ABI function.

The function lzma_easy_buffer_encode is declared pub unsafe extern "C" but lacks the #[no_mangle] attribute. Without it, the symbol name will be mangled by the Rust compiler, making it inaccessible to C callers expecting the unmangled name.

🔧 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_easy_buffer_encode(
     preset: u32,
     check: lzma_check,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/easy_buffer_encoder.rs` around lines 14 - 23, The
exported C ABI function lzma_easy_buffer_encode is missing the #[no_mangle]
attribute so the Rust compiler will mangle its symbol; add #[no_mangle]
immediately above the pub unsafe extern "C" fn lzma_easy_buffer_encode(...)
declaration to ensure the symbol is emitted with the exact C name and remains
callable from C code.
liblzma-rs/src/common/alone_encoder.rs (2)

202-202: ⚠️ Potential issue | 🟠 Major

Add #[no_mangle] to the exported C entrypoint.

lzma_alone_encoder is declared pub unsafe extern "C" but lacks #[no_mangle], so the symbol will be Rust-mangled and invisible to C callers.

🔧 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_alone_encoder(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/alone_encoder.rs` at line 202, The exported C
entrypoint lzma_alone_encoder is declared pub unsafe extern "C" but lacks
#[no_mangle], so add the #[no_mangle] attribute immediately above the
lzma_alone_encoder function definition to prevent Rust symbol mangling and
ensure the symbol is visible to C callers; keep the function signature (pub
unsafe extern "C" fn lzma_alone_encoder(...)) unchanged and only add the
attribute line above it.

148-156: ⚠️ Potential issue | 🔴 Critical

Guard options before dereferencing in alone_encoder_init.

Lines 148 and 154 dereference options unconditionally. If a null pointer is passed from FFI callers, this causes undefined behavior.

🐛 Proposed fix
 unsafe extern "C" fn alone_encoder_init(
     next: *mut lzma_next_coder,
     allocator: *const lzma_allocator,
     options: *const lzma_options_lzma,
 ) -> lzma_ret {
+    if options.is_null() {
+        return LZMA_PROG_ERROR;
+    }
+
     if core::mem::transmute::<
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/alone_encoder.rs` around lines 148 - 156, In
alone_encoder_init, guard the incoming options pointer before any dereference:
check if options.is_null() and return LZMA_OPTIONS_ERROR immediately (before
calling lzma_lzma_lclppb_encode or reading (*options).dict_size); this prevents
undefined behavior when FFI callers pass null and ensures subsequent uses like
::core::ptr::addr_of_mut!((*coder).header) and the dict_size check only run with
a valid options pointer.
liblzma-rs/src/lz/lz_decoder.rs (4)

137-145: ⚠️ Potential issue | 🔴 Critical

Incorrect end-of-stream leftover check can raise false LZMA_DATA_ERROR.

Line 138 checks (*coder).temp.size != 0, but this is true for any non-zero buffer capacity. The correct check is whether there are unread bytes: (*coder).temp.pos != (*coder).temp.size.

🐛 Proposed fix
         if (*coder).this_finished {
-            if (*coder).temp.size != 0 {
+            if (*coder).temp.pos != (*coder).temp.size {
                 return LZMA_DATA_ERROR;
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 137 - 145, The end-of-stream
leftover check in the lz decoder uses (*coder).temp.size which is buffer
capacity and can be non-zero even when empty; update the condition in the block
that checks (*coder).this_finished to test for unread bytes by comparing
(*coder).temp.pos != (*coder).temp.size (i.e., use temp.pos vs temp.size)
instead of temp.size != 0 so that LZMA_DATA_ERROR is only returned when there
are actually unread bytes in coder.temp.

235-244: ⚠️ Potential issue | 🔴 Critical

Guard lz_init before calling unwrap() to avoid FFI panic.

If lz_init is None, unwrap() will panic, which is undefined behavior across an extern "C" boundary.

🐛 Proposed fix
+    if lz_init.is_none() {
+        return LZMA_PROG_ERROR;
+    }
     let ret_: lzma_ret = lz_init.unwrap()(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 235 - 244, The call uses
lz_init.unwrap() which can panic across the FFI boundary; before invoking
lz_init, check whether lz_init is Some and handle the None case by returning an
appropriate lzma_ret error (e.g., LZMA_PROG_ERROR) instead of unwrapping; update
the call site around lz_init, ret_, (*coder).lz, lz_options and filters to match
on lz_init (or use if let) and only call the function pointer when present,
returning the chosen error code immediately when lz_init is None.

260-268: ⚠️ Potential issue | 🟠 Major

Reallocation failure leaves stale dictionary metadata.

After freeing dict.buf on line 261, if lzma_alloc fails on line 263, dict.size retains its old value. A subsequent init may skip allocation for that size and dereference a null dict.buf.

🐛 Proposed fix
     if (*coder).dict.size != alloc_size {
         lzma_free((*coder).dict.buf as *mut c_void, allocator);
+        (*coder).dict.buf = core::ptr::null_mut();
+        (*coder).dict.size = 0;
         (*coder).dict.buf =
             lzma_alloc(alloc_size.wrapping_add(LZ_DICT_EXTRA as size_t), allocator) as *mut u8;
         if (*coder).dict.buf.is_null() {
             return LZMA_MEM_ERROR;
         }
         (*coder).dict.size = alloc_size;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 260 - 268, The code frees
(*coder).dict.buf then attempts lzma_alloc but on allocation failure leaves
(*coder).dict.size with the old value, which can cause later code to skip
allocation and dereference a null dict.buf; change the logic in the block around
lzma_alloc so that you clear or reset (*coder).dict.size immediately after
lzma_free (e.g. set it to 0) and only assign (*coder).dict.size = alloc_size
after lzma_alloc successfully returns a non-null pointer (i.e. move the size
assignment to after the null check for the new buffer); reference symbols:
(*coder).dict.buf, (*coder).dict.size, lzma_free, lzma_alloc, allocator,
LZ_DICT_EXTRA, LZMA_MEM_ERROR.

295-301: ⚠️ Potential issue | 🟠 Major

lzma_lz_decoder_memusage does not match init-time sizing rules.

The memusage calculation omits the minimum dictionary clamp (4096) and 16-byte alignment applied in lzma_lz_decoder_init (lines 245–256), so it can underreport required memory for small dictionary sizes.

🐛 Proposed fix
 #[no_mangle]
 pub extern "C" fn lzma_lz_decoder_memusage(dictionary_size: size_t) -> u64 {
+    let mut dict = dictionary_size;
+    if dict < 4096 {
+        dict = 4096;
+    }
+    dict = dict.wrapping_add(15) & !(15 as size_t);
     (core::mem::size_of::<lzma_coder>() as u64)
-        .wrapping_add(dictionary_size as u64)
+        .wrapping_add(dict as u64)
         .wrapping_add((2 * LZ_DICT_REPEAT_MAX) as u64)
         .wrapping_add(LZ_DICT_EXTRA as u64)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 295 - 301, The memusage
calculation in lzma_lz_decoder_memusage must mirror the sizing rules used in
lzma_lz_decoder_init: clamp the input dictionary_size up to the minimum (4096)
used at init time, then account for 16-byte alignment rounding when computing
the dictionary buffer size before adding the fixed coder overheads; update
lzma_lz_decoder_memusage to use the same clamp (e.g., max(dictionary_size,
4096)) and round the dictionary length up to a 16-byte boundary (e.g., (dict +
15) & !15) prior to adding core::mem::size_of::<lzma_coder>(), (2 *
LZ_DICT_REPEAT_MAX), and LZ_DICT_EXTRA so memusage never underreports for small
dictionaries.
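
The init-time sizing rule the prompt references (clamp to a 4096-byte minimum, then round up to a 16-byte boundary) can be sketched as a standalone helper; the function name is ours, not the crate's:

```rust
// Sketch of the dictionary sizing rule described above: clamp small requests
// up to 4096 bytes, then round the result up to a 16-byte boundary. A
// memusage function sharing this helper cannot underreport for small sizes.
fn decoder_dict_alloc_size(dictionary_size: usize) -> usize {
    let clamped = dictionary_size.max(4096);
    (clamped + 15) & !15
}
```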
liblzma-rs/src/common/easy_encoder_memusage.rs (1)

35-36: ⚠️ Potential issue | 🔴 Critical

Return the documented u64::MAX error sentinel.

UINT32_MAX as u64 looks like a valid ~4 GiB estimate instead of an error. This helper's failure path needs the u64 sentinel so callers can reliably distinguish invalid presets from real memory usage.

Suggested fix
     if unsafe { lzma_easy_preset(::core::ptr::addr_of_mut!(opt_easy), preset) } {
-        return UINT32_MAX as u64;
+        return u64::MAX;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/easy_encoder_memusage.rs` around lines 35 - 36, The
failure path currently returns UINT32_MAX as u64 which can be mistaken for a
valid ~4GiB value; change the sentinel to the documented u64 error value by
returning u64::MAX instead in the branch that checks the result of
lzma_easy_preset(::core::ptr::addr_of_mut!(opt_easy), preset). Ensure the
replacement occurs in the helper function in easy_encoder_memusage.rs where
opt_easy and preset are used so callers can distinguish an invalid preset from a
real memory-usage estimate.
benches/backend_comparison.rs (1)

20-35: ⚠️ Potential issue | 🟠 Major

Check the liblzma return codes before truncating the buffers.

These helpers ignore lzma_ret and then trust out_pos/in_pos. On encode/decode failure the benchmark silently turns the error path into an empty or partial buffer instead of failing fast, so the numbers are no longer meaningful.

Also applies to: 38-53, 56-73, 76-93

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benches/backend_comparison.rs` around lines 20 - 35, The helpers (e.g.,
unsafe fn c_encode) currently ignore the lzma return value and then truncate
buffers based solely on out_pos, which hides encode/decode failures; capture the
return value of c_sys::lzma_easy_buffer_encode (and the corresponding decode
calls in the other helpers), check it against the expected success codes (e.g.,
LZMA_OK / LZMA_STREAM_END as appropriate) and handle errors deterministically
instead of truncating—propagate an Err or panic with the lzma_ret and any
in_pos/out_pos context so the benchmark fails fast and does not produce silent
partial results.
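
A minimal sketch of the fail-fast shape this finding asks for; `finish_buffer` is a hypothetical helper (the real fix belongs inside each bench helper), and only the LZMA_OK value is taken from liblzma:

```rust
// Sketch: propagate the coder's return code instead of silently truncating.
// LZMA_OK = 0 matches liblzma's lzma_ret; `finish_buffer` is hypothetical.
const LZMA_OK: i32 = 0;

fn finish_buffer(ret: i32, mut buf: Vec<u8>, out_pos: usize) -> Result<Vec<u8>, i32> {
    if ret != LZMA_OK {
        // Fail fast: surface the raw code rather than returning a partial buffer.
        return Err(ret);
    }
    buf.truncate(out_pos);
    Ok(buf)
}
```

Wiring this into the bench helpers makes a failed encode abort the run instead of feeding Criterion an empty buffer.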
liblzma-rs/src/common/easy_encoder.rs (1)

2-8: 🧹 Nitpick | 🔵 Trivial

Prefer the in-crate lzma_stream_encoder over an extern declaration.

This path already lives in the crate. Declaring it as an external C symbol gives up compile-time linkage checking and can accidentally bind the easy wrapper to a different backend at link time.

Suggested fix
-use crate::types::*;
-extern "C" {
-    fn lzma_stream_encoder(
-        strm: *mut lzma_stream,
-        filters: *const lzma_filter,
-        check: lzma_check,
-    ) -> lzma_ret;
-}
+use crate::{common::stream_encoder::lzma_stream_encoder, types::*};

Also applies to: 49-53

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/easy_encoder.rs` around lines 2 - 8, Replace the local
extern "C" declaration of lzma_stream_encoder in easy_encoder.rs with the
in-crate implementation: remove the extern block that declares fn
lzma_stream_encoder(strm: *mut lzma_stream, filters: *const lzma_filter, check:
lzma_check) -> lzma_ret and call the crate-provided function (the existing
in-crate lzma_stream_encoder implementation) instead so linkage and compile-time
checks are used; update references in easy_encoder.rs (and the duplicate at
lines ~49-53) to use the crate symbol (e.g., via the module path where the crate
defines lzma_stream_encoder) rather than declaring an external C symbol.
liblzma-rs/src/check/crc64_fast.rs (1)

2-4: 🛠️ Refactor suggestion | 🟠 Major

Make the CRC64 table immutable unless a writable symbol is truly required.

This table is used read-only, but pub static mut exports it as writable global state. One accidental FFI write turns every concurrent CRC computation into a race on shared memory.

Suggested fix
 #[no_mangle]
-pub static mut lzma_crc64_table: [[u64; 256]; 4] = [
+pub static lzma_crc64_table: [[u64; 256]; 4] = [
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/crc64_fast.rs` around lines 2 - 4, Replace the writable
exported symbol by making the CRC table immutable: change the declaration of
lzma_crc64_table (the #[no_mangle] pub static mut lzma_crc64_table: [[u64; 256];
4]) to an immutable exported static (e.g., #[no_mangle] pub static
lzma_crc64_table: [[u64; 256]; 4]) or const if appropriate; keep the same name
and shape so FFI reads still work, and only retain mutability if you truly need
a writable FFI symbol.
liblzma-rs/src/common/stream_flags_encoder.rs (3)

60-62: ⚠️ Potential issue | 🟠 Major

Return LZMA_OPTIONS_ERROR for invalid stream-flag options.

Same issue as header encoding: invalid options should return LZMA_OPTIONS_ERROR.

🛠️ Proposed fix
     if stream_flags_encode(options, out.offset((2 * 4) as isize)) {
-        return LZMA_PROG_ERROR;
+        return LZMA_OPTIONS_ERROR;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_encoder.rs` around lines 60 - 62, The call
checking stream_flags_encode(...) in stream_flags_encoder.rs currently returns
LZMA_PROG_ERROR on failure; change it to return LZMA_OPTIONS_ERROR for invalid
stream-flag options. Locate the conditional using stream_flags_encode (the if
that returns LZMA_PROG_ERROR) and replace the returned error constant with
LZMA_OPTIONS_ERROR so invalid option detection parallels the header encoding
behavior.

16-19: ⚠️ Potential issue | 🔴 Critical

Add #[no_mangle] to FFI entry points.

Both lzma_stream_header_encode and lzma_stream_footer_encode are missing #[no_mangle], which will cause linker failures when C code attempts to call these functions.

🔧 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_stream_header_encode(
     options: *const lzma_stream_flags,
     out: *mut u8,
 ) -> lzma_ret {
+#[no_mangle]
 pub unsafe extern "C" fn lzma_stream_footer_encode(
     options: *const lzma_stream_flags,
     out: *mut u8,
 ) -> lzma_ret {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_encoder.rs` around lines 16 - 19, The FFI
entry points lzma_stream_header_encode and lzma_stream_footer_encode are missing
#[no_mangle], causing their symbols to be mangled and break C linkage; add
#[no_mangle] immediately above each pub unsafe extern "C" fn declaration
(preserving pub, unsafe, and extern "C") so the functions export stable
unmangled symbols for the C callers.

28-33: ⚠️ Potential issue | 🟠 Major

Return LZMA_OPTIONS_ERROR for invalid stream-flag options.

When stream_flags_encode fails due to an invalid check value, this indicates option validation failure, not an internal program-state error.

🛠️ Proposed fix
     if stream_flags_encode(
         options,
         out.offset(core::mem::size_of::<[u8; 6]>() as isize),
     ) {
-        return LZMA_PROG_ERROR;
+        return LZMA_OPTIONS_ERROR;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_encoder.rs` around lines 28 - 33, The code
currently returns LZMA_PROG_ERROR when stream_flags_encode(...) fails; instead,
detect invalid option failures and return LZMA_OPTIONS_ERROR. Before or
immediately after calling stream_flags_encode (the call using options and
out.offset(...)), validate the check option (e.g., inspect options.check or call
an existing helper like lzma_check_is_supported/check_is_valid); if the check
value is invalid return LZMA_OPTIONS_ERROR, otherwise treat other failures as
LZMA_PROG_ERROR. Update the branch around stream_flags_encode to return
LZMA_OPTIONS_ERROR for invalid-check failures and leave LZMA_PROG_ERROR for
genuine program/encoding errors.
liblzma-rs/src/check/sha256.rs (2)

7-16: 🧹 Nitpick | 🔵 Trivial

Use immutable static for SHA-256 constants.

SHA256_K is read-only data but declared as static mut, introducing unnecessary unsafety. Change to an immutable static.

♻️ Proposed change
-static mut SHA256_K: [u32; 64] = [
+static SHA256_K: [u32; 64] = [
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/sha256.rs` around lines 7 - 16, SHA256_K is declared as
`static mut` but is constant data; change the declaration to an immutable
`static SHA256_K: [u32; 64] = [...]` and remove the `mut`. Then update any call
sites that currently use unsafe/mutable access to read from SHA256_K (e.g.,
replace unsafe derefs or mutable references with direct borrows or slices like
&SHA256_K or SHA256_K[..]) so no unsafe is required when reading the table.

536-546: 🧹 Nitpick | 🔵 Trivial

Use immutable static for SHA-256 initial state.

The initial hash values s are constant and should be declared as immutable static.

♻️ Proposed change
 #[no_mangle]
 pub unsafe extern "C" fn lzma_sha256_init(check: *mut lzma_check_state) {
-    static mut s: [u32; 8] = [
+    static S: [u32; 8] = [
         0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a, 0x510e527f, 0x9b05688c, 0x1f83d9ab,
         0x5be0cd19,
     ];
     core::ptr::copy_nonoverlapping(
-        ::core::ptr::addr_of!(s) as *const u8,
+        ::core::ptr::addr_of!(S) as *const u8,
         ::core::ptr::addr_of_mut!((*check).state.sha256.state) as *mut u8,
         core::mem::size_of::<[u32; 8]>(),
     );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/sha256.rs` around lines 536 - 546, The static array s in
lzma_sha256_init is declared mutable but it is a constant initial hash state;
change its declaration from static mut s: [u32; 8] to an immutable static S:
[u32; 8] (renamed to uppercase per the non_upper_case_globals lint, as in the
proposed diff) and keep the copy_nonoverlapping that reads from
::core::ptr::addr_of!(S) as *const u8 into (*check).state.sha256.state; no other
behavior changes are needed (retain use of addr_of! and size_of::<[u32; 8]>()).
liblzma-rs/src/common/block_header_encoder.rs (1)

47-51: ⚠️ Potential issue | 🟠 Major

Enforce final block header size limit before storing header_size.

The computed aligned size should be validated against LZMA_BLOCK_HEADER_SIZE_MAX before assignment to prevent invalid header sizes from being stored.

🛡️ Proposed fix
-    (*block).header_size = size.wrapping_add(3) & !(3);
-    LZMA_OK
+    let aligned_size: u32 = size.wrapping_add(3) & !(3);
+    if aligned_size < LZMA_BLOCK_HEADER_SIZE_MIN || aligned_size > LZMA_BLOCK_HEADER_SIZE_MAX {
+        return LZMA_OPTIONS_ERROR;
+    }
+    (*block).header_size = aligned_size;
+    LZMA_OK
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_header_encoder.rs` around lines 47 - 51, Before
storing the aligned header size into (*block).header_size, validate that the
computed value (the local variable size after size.wrapping_add(3) & !(3)) does
not exceed LZMA_BLOCK_HEADER_SIZE_MAX; if it does, return an appropriate LZMA
error instead of assigning. Update the code around the assignment of
(*block).header_size (the block header encoder routine that currently ends with
(*block).header_size = size.wrapping_add(3) & !(3); LZMA_OK) to perform this
check and return the error when the limit is exceeded.
liblzma-rs/src/check/crc32_fast.rs (1)

2-3: ⚠️ Potential issue | 🟠 Major

Data race hazard: static mut should be immutable static.

The CRC32 lookup table is never modified at runtime but declared as pub static mut, creating a data race hazard if multiple threads call lzma_crc32 concurrently.

🛡️ Suggested fix
 #[no_mangle]
-pub static mut lzma_crc32_table: [[u32; 256]; 8] = [
+pub static lzma_crc32_table: [[u32; 256]; 8] = [
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/crc32_fast.rs` around lines 2 - 3, The CRC32 lookup
table is declared as mutable static which creates a data race; change the
declaration of lzma_crc32_table from "pub static mut lzma_crc32_table: [[u32;
256]; 8]" to an immutable "pub static lzma_crc32_table: [[u32; 256]; 8] = [...]"
(or make it "pub const" if allowed) and remove any code that relies on mutating
it; update callers such as lzma_crc32 to read from the table without unsafe
mutable access (use immutable indexing) so the table is immutable and
thread-safe.
liblzma-rs/src/common/stream_flags_decoder.rs (1)

17-20: ⚠️ Potential issue | 🔴 Critical

Add #[no_mangle] to FFI entry points.

Both lzma_stream_header_decode and lzma_stream_footer_decode are declared as foreign symbols elsewhere but lack #[no_mangle]. Without this attribute, Rust mangles the symbol names, breaking FFI linkage.

🔧 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_stream_header_decode(
     options: *mut lzma_stream_flags,
     in_0: *const u8,
 ) -> lzma_ret {
+#[no_mangle]
 pub unsafe extern "C" fn lzma_stream_footer_decode(
     options: *mut lzma_stream_flags,
     in_0: *const u8,
 ) -> lzma_ret {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_decoder.rs` around lines 17 - 20, The FFI
entry points lzma_stream_header_decode and lzma_stream_footer_decode are missing
#[no_mangle], causing symbol name mangling; fix by adding #[no_mangle]
immediately above the function declarations (preserve pub unsafe extern "C" fn
signatures and existing parameters/return types) so the symbols are exported
with unmangled names for C linkage; update both functions in
stream_flags_decoder.rs where they are defined.
liblzma-rs/src/common/filter_common.rs (1)

11-103: ⚠️ Potential issue | 🟠 Major

Data race hazard: static mut features should be immutable static.

The features table is never modified at runtime—it contains constant filter metadata. Using static mut requires unsafe access and creates potential data races when accessed concurrently from multiple threads. Change to an immutable static.

🛡️ Suggested fix
-static mut features: [filter_features; 13] = [
+static FEATURES: [filter_features; 13] = [

Then update all access sites to use FEATURES without unsafe blocks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/filter_common.rs` around lines 11 - 103, The table
`features` is declared as `static mut` but is never mutated; change it to an
immutable `static` (suggest renaming to `FEATURES`) with the same type and
initializer so it can be safely read without unsafe blocks, and then update all
access sites that currently use `unsafe { features }` or similar to read
`FEATURES` directly (remove unsafe and any mutable assumptions); keep the
element type `filter_features` and array length the same.
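
The pattern the prompt describes, shown in isolation; the struct layout and the two filter IDs below are placeholders, not the crate's filter_features table:

```rust
// Sketch of an immutable read-only table: no `unsafe` is needed to index it,
// and concurrent readers cannot race. Fields and IDs are placeholders.
#[derive(Clone, Copy, Debug, PartialEq)]
struct FilterFeatures {
    id: u64,
    changes_size: bool,
}

static FEATURES: [FilterFeatures; 2] = [
    FilterFeatures { id: 0x03, changes_size: false },
    FilterFeatures { id: 0x21, changes_size: true },
];

fn find_feature(id: u64) -> Option<&'static FilterFeatures> {
    FEATURES.iter().find(|f| f.id == id)
}
```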
liblzma-rs/src/common/block_buffer_encoder.rs (1)

30-33: ⚠️ Potential issue | 🟠 Major

Add bounds check before casting u64 to size_t to prevent truncation on 32-bit targets.

Direct cast on line 32 truncates silently when the bound exceeds size_t::MAX (4GB on 32-bit systems), returning an invalidly small buffer size. This creates a memory safety hazard since downstream code allocates based on this potentially undersized bound.

🐛 Proposed fix
 #[no_mangle]
 pub extern "C" fn lzma_block_buffer_bound(uncompressed_size: size_t) -> size_t {
     let ret: u64 = lzma_block_buffer_bound64(uncompressed_size as u64);
+    if ret > size_t::MAX as u64 {
+        return 0;
+    }
     ret as size_t
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_buffer_encoder.rs` around lines 30 - 33, The
function lzma_block_buffer_bound currently casts a u64 result from
lzma_block_buffer_bound64 to size_t without checking for overflow; add a bounds
check comparing ret to the maximum representable size_t (e.g., size_t::MAX /
usize::MAX) and handle overflow instead of truncating—if ret exceeds size_t max,
return the maximum size_t value (or otherwise signal overflow per project
convention) so callers do not receive an undersized buffer; update
lzma_block_buffer_bound to perform this check before casting to size_t to
prevent truncation on 32-bit targets.
liblzma-rs/src/common/block_decoder.rs (1)

221-226: ⚠️ Potential issue | 🔴 Critical

Validate block pointer before dereference.

lzma_block_unpadded_size(block) at line 221 dereferences block before any null-pointer check. A null block will cause undefined behavior. Additionally, (*block).check should be validated against LZMA_CHECK_ID_MAX before use in downstream check_size logic.

🐛 Proposed fix
 pub unsafe extern "C" fn lzma_block_decoder_init(
     next: *mut lzma_next_coder,
     allocator: *const lzma_allocator,
     block: *mut lzma_block,
 ) -> lzma_ret {
+    if block.is_null() {
+        return LZMA_PROG_ERROR;
+    }
+    if (*block).check > LZMA_CHECK_ID_MAX {
+        return LZMA_PROG_ERROR;
+    }
     if core::mem::transmute::<
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_decoder.rs` around lines 221 - 226, The code
dereferences block in lzma_block_unpadded_size(block) and when reading
(*block).uncompressed_size without a null check; reorder and add a null-pointer
check for block at the start of the function (or before the
lzma_block_unpadded_size call) and return LZMA_PROG_ERROR if block is NULL, then
perform the unpadded-size and uncompressed_size checks (comparing against
LZMA_VLI_MAX and LZMA_VLI_UNKNOWN) as before; additionally validate
(*block).check against LZMA_CHECK_ID_MAX before any downstream use (e.g.,
check_size logic) and return an error if it is out of range so the check_size
path never reads an invalid check id.
liblzma-rs/src/common/stream_encoder_mt.rs (1)

993-998: ⚠️ Potential issue | 🔴 Critical

The 32-bit size_t truncation bug is still present here.

block_size and outbuf_size_max are only validated as u64s here, but later stored in size_t fields. On 32-bit targets that can silently shrink the worker/input buffer allocations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_encoder_mt.rs` around lines 993 - 998, The code
only validates block_size and outbuf_size_max as u64 but later stores them into
size_t-sized fields which can truncate on 32-bit targets; update the checks in
the block that uses BLOCK_SIZE_MAX, UINT64_MAX and lzma_block_buffer_bound64 to
also ensure both *block_size and the computed *outbuf_size_max fit into a size_t
on the target (compare against SIZE_MAX / usize::MAX or a platform SIZE_T_MAX),
and return LZMA_OPTIONS_ERROR or LZMA_MEM_ERROR as appropriate if they exceed
size_t before any assignment or cast to size_t.
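
The guard reduces to a checked u64-to-usize conversion before any value reaches a size_t field; the helper name and the choice of LZMA_OPTIONS_ERROR (8 in liblzma's lzma_ret) are a sketch, not the crate's code:

```rust
// Sketch: validate that u64 sizes fit the target's size_t (usize) before any
// `as` cast can truncate them on 32-bit targets. Helper name is hypothetical.
use std::convert::TryFrom;

const LZMA_OPTIONS_ERROR: i32 = 8; // liblzma's lzma_ret value for bad options

fn validate_mt_sizes(block_size: u64, outbuf_size_max: u64) -> Result<(usize, usize), i32> {
    // try_from performs exactly the range check an `as` cast skips.
    let block = usize::try_from(block_size).map_err(|_| LZMA_OPTIONS_ERROR)?;
    let outbuf = usize::try_from(outbuf_size_max).map_err(|_| LZMA_OPTIONS_ERROR)?;
    Ok((block, outbuf))
}
```

On a 64-bit host both conversions always succeed; the error arm only fires on targets where usize is narrower than u64, which is the case the review flags.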

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 6e8797c1-2462-4623-b902-971bf88476bd

📥 Commits

Reviewing files that changed from the base of the PR and between 8f8d7a3 and 4155aa2.

📒 Files selected for processing (105)
  • .github/workflows/main.yml
  • AGENTS.md
  • Cargo.lock.msrv
  • Cargo.toml
  • benches/backend_comparison.rs
  • liblzma-rs-sys/Cargo.toml
  • liblzma-rs-sys/src/lib.rs
  • liblzma-rs/Cargo.toml
  • liblzma-rs/src/check/check.rs
  • liblzma-rs/src/check/crc32_fast.rs
  • liblzma-rs/src/check/crc64_fast.rs
  • liblzma-rs/src/check/mod.rs
  • liblzma-rs/src/check/sha256.rs
  • liblzma-rs/src/common/alone_decoder.rs
  • liblzma-rs/src/common/alone_encoder.rs
  • liblzma-rs/src/common/auto_decoder.rs
  • liblzma-rs/src/common/block_buffer_decoder.rs
  • liblzma-rs/src/common/block_buffer_encoder.rs
  • liblzma-rs/src/common/block_decoder.rs
  • liblzma-rs/src/common/block_encoder.rs
  • liblzma-rs/src/common/block_header_decoder.rs
  • liblzma-rs/src/common/block_header_encoder.rs
  • liblzma-rs/src/common/block_util.rs
  • liblzma-rs/src/common/common.rs
  • liblzma-rs/src/common/easy_buffer_encoder.rs
  • liblzma-rs/src/common/easy_decoder_memusage.rs
  • liblzma-rs/src/common/easy_encoder.rs
  • liblzma-rs/src/common/easy_encoder_memusage.rs
  • liblzma-rs/src/common/easy_preset.rs
  • liblzma-rs/src/common/file_info.rs
  • liblzma-rs/src/common/filter_buffer_decoder.rs
  • liblzma-rs/src/common/filter_buffer_encoder.rs
  • liblzma-rs/src/common/filter_common.rs
  • liblzma-rs/src/common/filter_decoder.rs
  • liblzma-rs/src/common/filter_encoder.rs
  • liblzma-rs/src/common/filter_flags_decoder.rs
  • liblzma-rs/src/common/filter_flags_encoder.rs
  • liblzma-rs/src/common/hardware_cputhreads.rs
  • liblzma-rs/src/common/hardware_physmem.rs
  • liblzma-rs/src/common/index.rs
  • liblzma-rs/src/common/index_decoder.rs
  • liblzma-rs/src/common/index_encoder.rs
  • liblzma-rs/src/common/index_hash.rs
  • liblzma-rs/src/common/lzip_decoder.rs
  • liblzma-rs/src/common/microlzma_decoder.rs
  • liblzma-rs/src/common/microlzma_encoder.rs
  • liblzma-rs/src/common/mod.rs
  • liblzma-rs/src/common/outqueue.rs
  • liblzma-rs/src/common/stream_buffer_decoder.rs
  • liblzma-rs/src/common/stream_buffer_encoder.rs
  • liblzma-rs/src/common/stream_decoder.rs
  • liblzma-rs/src/common/stream_decoder_mt.rs
  • liblzma-rs/src/common/stream_encoder.rs
  • liblzma-rs/src/common/stream_encoder_mt.rs
  • liblzma-rs/src/common/stream_flags_common.rs
  • liblzma-rs/src/common/stream_flags_decoder.rs
  • liblzma-rs/src/common/stream_flags_encoder.rs
  • liblzma-rs/src/common/stream_mt.rs
  • liblzma-rs/src/common/string_conversion.rs
  • liblzma-rs/src/common/vli_decoder.rs
  • liblzma-rs/src/common/vli_encoder.rs
  • liblzma-rs/src/common/vli_size.rs
  • liblzma-rs/src/delta/delta_common.rs
  • liblzma-rs/src/delta/delta_decoder.rs
  • liblzma-rs/src/delta/delta_encoder.rs
  • liblzma-rs/src/delta/mod.rs
  • liblzma-rs/src/lib.rs
  • liblzma-rs/src/lz/lz_decoder.rs
  • liblzma-rs/src/lz/lz_encoder.rs
  • liblzma-rs/src/lz/lz_encoder_mf.rs
  • liblzma-rs/src/lz/mod.rs
  • liblzma-rs/src/lzma/fastpos_table.rs
  • liblzma-rs/src/lzma/lzma2_decoder.rs
  • liblzma-rs/src/lzma/lzma2_encoder.rs
  • liblzma-rs/src/lzma/lzma_decoder.rs
  • liblzma-rs/src/lzma/lzma_encoder.rs
  • liblzma-rs/src/lzma/lzma_encoder_optimum_fast.rs
  • liblzma-rs/src/lzma/lzma_encoder_optimum_normal.rs
  • liblzma-rs/src/lzma/lzma_encoder_presets.rs
  • liblzma-rs/src/lzma/mod.rs
  • liblzma-rs/src/rangecoder/mod.rs
  • liblzma-rs/src/rangecoder/price_table.rs
  • liblzma-rs/src/simple/arm.rs
  • liblzma-rs/src/simple/arm64.rs
  • liblzma-rs/src/simple/armthumb.rs
  • liblzma-rs/src/simple/ia64.rs
  • liblzma-rs/src/simple/mod.rs
  • liblzma-rs/src/simple/powerpc.rs
  • liblzma-rs/src/simple/riscv.rs
  • liblzma-rs/src/simple/simple_coder.rs
  • liblzma-rs/src/simple/simple_decoder.rs
  • liblzma-rs/src/simple/simple_encoder.rs
  • liblzma-rs/src/simple/sparc.rs
  • liblzma-rs/src/simple/x86.rs
  • liblzma-rs/src/tuklib/mod.rs
  • liblzma-rs/src/tuklib/tuklib_cpucores.rs
  • liblzma-rs/src/tuklib/tuklib_physmem.rs
  • liblzma-rs/src/types.rs
  • liblzma-sys/Cargo.toml
  • scripts/run_xz_c_tests_with_liblzma_rs_sys.sh
  • src/stream.rs
  • systest/Cargo.toml
  • systest/build.rs
  • systest/src/main.rs
  • tests/sys_equivalence.rs

Comment thread Cargo.toml
Comment on lines +29 to +32
rand = "=0.8.0"
quickcheck = "=1.0.1"
regex = "=1.9.0"
log = "=0.4.20"

🧹 Nitpick | 🔵 Trivial

Exact version pinning with = is unusual and may cause resolution conflicts.

Using exact version pins (=0.8.0, =1.0.1, etc.) for dev-dependencies is restrictive and can cause dependency resolution failures when other crates in the graph require different versions. Consider using caret requirements (e.g., 0.8.0) unless there's a specific compatibility reason.

♻️ Suggested change
 [dev-dependencies]
-rand = "=0.8.0"
-quickcheck = "=1.0.1"
-regex = "=1.9.0"
-log = "=0.4.20"
+rand = "0.8.0"
+quickcheck = "1.0.1"
+regex = "1.9.0"
+log = "0.4.20"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Cargo.toml` around lines 29 - 32, The Cargo.toml dev-dependencies are pinned
with exact-equality strings (rand, quickcheck, regex, log) which can cause
resolution conflicts; update the version specifiers for rand, quickcheck, regex
and log from the exact-pin form (e.g., "=0.8.0") to caret-style requirements
(e.g., "0.8.0") so Cargo can use compatible semver ranges unless there is a
documented, specific reason to keep exact pins.

Comment thread Cargo.toml Outdated
Comment thread liblzma-rs/Cargo.toml
Comment on lines +1 to +12
[package]
name = "liblzma-rs"
version = "0.1.0"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Pure Rust implementation of liblzma (transpiled from C via c2rust)"

[features]
default = []

[dependencies]
libc = "0.2"

🧹 Nitpick | 🔵 Trivial

Consider adding rust-version field for MSRV clarity.

The CI workflow specifies MSRV 1.63, but the manifest lacks a rust-version field. Adding it helps Cargo emit clearer errors when users attempt to build with an older toolchain.

📝 Proposed addition
 [package]
 name = "liblzma-rs"
 version = "0.1.0"
 edition = "2021"
+rust-version = "1.63"
 license = "MIT OR Apache-2.0"
 description = "Pure Rust implementation of liblzma (transpiled from C via c2rust)"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/Cargo.toml` around lines 1 - 12, Add the rust-version field to the
Cargo.toml package metadata so Cargo can enforce the MSRV declared in CI; update
the [package] section (where name = "liblzma-rs", version = "0.1.0", edition =
"2021") by inserting rust-version = "1.63" (or the exact MSRV used in CI) to
ensure builds on older toolchains fail with a clear error.

Comment thread liblzma-rs/src/common/alone_encoder.rs Outdated
@@ -0,0 +1,222 @@
use crate::types::*;
use libc::{c_uint, c_void};

⚠️ Potential issue | 🔴 Critical

WebAssembly builds fail: libc::c_uint unavailable on wasm32-unknown-unknown.

The libc crate does not provide c_uint (or most C types) on the wasm32-unknown-unknown target, causing compilation failures as shown in CI. Consider using core::ffi::c_uint (stabilized in Rust 1.64) or defining a type alias conditionally.

🔧 Proposed fix
-use libc::{c_uint, c_void};
+use core::ffi::{c_uint, c_void};

Or, if broader libc compatibility is needed elsewhere:

+#[cfg(not(target_arch = "wasm32"))]
 use libc::{c_uint, c_void};
+#[cfg(target_arch = "wasm32")]
+use core::ffi::{c_uint, c_void};
🧰 Tools
🪛 GitHub Check: Test WebAssembly (beta, wasm32-unknown-unknown)

[failure] 2-2:
unresolved import libc::c_uint

🪛 GitHub Check: Test WebAssembly (nightly, wasm32-unknown-unknown)

[failure] 2-2:
unresolved import libc::c_uint

🪛 GitHub Check: Test WebAssembly (stable, wasm32-unknown-unknown)

[failure] 2-2:
unresolved import libc::c_uint

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/alone_encoder.rs` at line 2, The import of libc types
causes wasm failures: replace the use of libc::{c_uint, c_void} in
alone_encoder.rs with the platform-stable core::ffi equivalents
(core::ffi::c_uint and core::ffi::c_void) or add a conditional type alias so the
code uses core::ffi on wasm32-unknown-unknown and libc on platforms that need
it; update references to c_uint and c_void in the file (e.g., function
signatures or FFI bindings) to use the chosen alias/paths so the crate compiles
on wasm and native targets.
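As a sanity check on the suggested swap: `core::ffi::c_uint` (stable since Rust 1.64) has the same width as C's `unsigned int` on the targets this crate builds for, including wasm32, so the change is layout-neutral. A tiny sketch:

```rust
use core::ffi::{c_uint, c_void};

// `core::ffi` provides the C scalar types on all targets, including
// wasm32-unknown-unknown where `libc` does not export them.
fn c_uint_width() -> usize {
    core::mem::size_of::<c_uint>()
}

// Opaque-handle code using `c_void` is unaffected by the swap.
fn null_handle() -> *mut c_void {
    core::ptr::null_mut()
}
```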

Comment on lines +26 to +37
if block.is_null()
|| (*block).version > 1
|| (*block).header_size < LZMA_BLOCK_HEADER_SIZE_MIN
|| (*block).header_size > LZMA_BLOCK_HEADER_SIZE_MAX
|| (*block).header_size & 3 != 0
|| !((*block).compressed_size <= LZMA_VLI_MAX
|| (*block).compressed_size == LZMA_VLI_UNKNOWN)
|| (*block).compressed_size == 0
|| (*block).check > LZMA_CHECK_ID_MAX
{
return 0;
}

🧹 Nitpick | 🔵 Trivial

Returning 0 for errors conflates with potentially valid size calculations.

The function returns 0 as an error indicator (lines 36, 46), but 0 could theoretically be confused with a valid size in error handling code. Consider whether LZMA_VLI_UNKNOWN should be the consistent error return value for all error cases, not just when compressed_size is unknown.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_util.rs` around lines 26 - 37, The function
currently returns 0 on validation failure (e.g., when checking (*block).version,
header_size, compressed_size, check) which can be confused with a valid size;
change all error return paths in this function to return LZMA_VLI_UNKNOWN
instead of 0 (replace the early `return 0;` and any other `return 0` in this
function with `return LZMA_VLI_UNKNOWN`) and ensure the function's return type
and callers handle LZMA_VLI_UNKNOWN as the canonical error sentinel rather than
0; keep all existing validation checks (e.g., comparisons against
LZMA_BLOCK_HEADER_SIZE_MIN/MAX, LZMA_VLI_MAX, LZMA_VLI_UNKNOWN,
LZMA_CHECK_ID_MAX) unchanged.
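The sentinel change can be sketched in isolation. This is a reduced model, not the crate's actual function: the real `lzma_block_unpadded_size` also accounts for the check size, and only two of the validation checks are shown. `LZMA_VLI_UNKNOWN` is `UINT64_MAX` in liblzma.

```rust
const LZMA_VLI_UNKNOWN: u64 = u64::MAX;
const LZMA_BLOCK_HEADER_SIZE_MIN: u32 = 8;

/// Return the unpadded size, using LZMA_VLI_UNKNOWN as the single
/// error sentinel so that 0 can never be mistaken for a valid size.
fn unpadded_size(header_size: u32, compressed_size: u64) -> u64 {
    if header_size < LZMA_BLOCK_HEADER_SIZE_MIN || header_size & 3 != 0 {
        return LZMA_VLI_UNKNOWN;
    }
    if compressed_size == 0 || compressed_size == LZMA_VLI_UNKNOWN {
        return LZMA_VLI_UNKNOWN;
    }
    u64::from(header_size) + compressed_size
}
```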

Comment on lines +102 to +107
pub unsafe extern "C" fn lzma_delta_encoder_init(
next: *mut lzma_next_coder,
allocator: *const lzma_allocator,
filters: *const lzma_filter_info,
) -> lzma_ret {
(*next).code = Some(

⚠️ Potential issue | 🟠 Major

Missing null check on next parameter in public FFI entrypoint.

Line 107 dereferences next without validation, same issue as in decoder init.

🛡️ Proposed fix
 pub unsafe extern "C" fn lzma_delta_encoder_init(
     next: *mut lzma_next_coder,
     allocator: *const lzma_allocator,
     filters: *const lzma_filter_info,
 ) -> lzma_ret {
+    if next.is_null() {
+        return LZMA_PROG_ERROR;
+    }
     (*next).code = Some(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/delta/delta_encoder.rs` around lines 102 - 107, The function
lzma_delta_encoder_init currently dereferences next without checking for null;
add a null check at the start of lzma_delta_encoder_init (matching the pattern
used in the decoder init) and return the appropriate lzma_ret error (e.g.,
LZMA_PROG_ERROR) if next is null, before any dereference of the lzma_next_coder
pointer; ensure the check occurs before accessing (*next).code so you don't
trigger undefined behavior.
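The guard pattern the review asks for, reduced to a standalone sketch: the struct and the `LZMA_*` return codes below are placeholders modeled on the crate's real types, with `LZMA_PROG_ERROR` set to liblzma's value of 11.

```rust
const LZMA_OK: i32 = 0;
const LZMA_PROG_ERROR: i32 = 11; // liblzma's lzma_ret value

#[repr(C)]
pub struct NextCoder {
    pub code: Option<fn() -> i32>,
}

/// FFI entry points must not trust their pointer arguments:
/// reject NULL before the first dereference.
pub unsafe extern "C" fn encoder_init(next: *mut NextCoder) -> i32 {
    if next.is_null() {
        return LZMA_PROG_ERROR;
    }
    (*next).code = Some(|| LZMA_OK);
    LZMA_OK
}
```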

Comment thread liblzma-rs/src/lz/lz_encoder_mf.rs Outdated
@@ -0,0 +1,734 @@
use crate::types::*;
use libc::c_uint;

⚠️ Potential issue | 🔴 Critical

WebAssembly builds fail: libc::c_uint unavailable on wasm32-unknown-unknown.

Same issue as in alone_encoder.rs. Use core::ffi::c_uint for cross-platform compatibility.

🔧 Proposed fix
-use libc::c_uint;
+use core::ffi::c_uint;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder_mf.rs` at line 2, Replace the platform-specific
libc::c_uint import with the portable core::ffi::c_uint to make WebAssembly
builds work; update the use statement in lz_encoder_mf.rs (the place that
currently imports libc::c_uint) to import core::ffi::c_uint so any functions or
type annotations in this file that reference c_uint continue to compile on
wasm32-unknown-unknown (no other logic changes required).

Comment thread liblzma-rs/src/lz/lz_encoder.rs
Comment thread liblzma-rs/src/lz/lz_encoder.rs Outdated
Comment thread liblzma-rs/src/lz/lz_encoder.rs Outdated
Comment on lines +537 to +546
pub extern "C" fn lzma_mf_is_supported(mf: lzma_match_finder) -> lzma_bool {
match mf {
3 => return true as lzma_bool,
4 => return true as lzma_bool,
18 => return true as lzma_bool,
19 => return true as lzma_bool,
20 => return true as lzma_bool,
_ => return false as lzma_bool,
};
}

⚠️ Potential issue | 🟠 Major

Add #[no_mangle] to lzma_mf_is_supported for C linkage.

This function is pub extern "C" but lacks #[no_mangle], so C callers expecting the symbol lzma_mf_is_supported will get a linker error.

🔧 Proposed fix
+#[no_mangle]
 pub extern "C" fn lzma_mf_is_supported(mf: lzma_match_finder) -> lzma_bool {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder.rs` around lines 537 - 546, The function
lzma_mf_is_supported is declared pub extern "C" but lacks the #[no_mangle]
attribute, causing C callers to fail to find the symbol; fix it by adding
#[no_mangle] immediately above the pub extern "C" fn lzma_mf_is_supported
signature (keep the existing signature and body intact so the symbol is exported
as exactly "lzma_mf_is_supported"), then rebuild to verify the symbol is present
for C linkage.
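For reference, the attribute in a standalone sketch. The function is renamed `demo_mf_is_supported` so it cannot collide with the real exported symbol; the body also shows a more idiomatic shape for the match (the numeric arms are HC3, HC4, BT2, BT3, BT4 in liblzma's numbering).

```rust
// Without #[no_mangle], rustc exports a mangled name, and a C caller
// looking for `demo_mf_is_supported` fails at link time.
#[no_mangle]
pub extern "C" fn demo_mf_is_supported(mf: u32) -> u8 {
    matches!(mf, 3 | 4 | 18..=20) as u8
}
```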

@coderabbitai coderabbitai Bot left a comment

Review continued from previous batch...

Comment on lines +119 to +130
let input = make_payload(size);
let c_compressed = unsafe { c_encode(&input) };
let rs_compressed = unsafe { rs_encode(&input) };
group.throughput(Throughput::Bytes(size as u64));

group.bench_with_input(BenchmarkId::new("c", label), &c_compressed, |b, data| {
b.iter(|| unsafe { c_decode(black_box(data), size) })
});
group.bench_with_input(
BenchmarkId::new("rust", label),
&rs_compressed,
|b, data| b.iter(|| unsafe { rs_decode(black_box(data), size) }),

🧹 Nitpick | 🔵 Trivial

Benchmark both decoders with identical compressed input.

Right now the C decoder is timed on c_encode() output and the Rust decoder on rs_encode() output. If those streams differ at all, the decode group is comparing different workloads, not isolated decoder throughput.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benches/backend_comparison.rs` around lines 119 - 130, The decode benchmarks
use different compressed inputs (c_compressed vs rs_compressed), so decoder
throughput compares unequal workloads; change the decode group to benchmark both
c_decode and rs_decode on the same compressed byte slice by reusing a single
compressed value (e.g., compute one compressed buffer via c_encode or rs_encode)
and pass that same buffer into both group.bench_with_input calls (locations:
c_encode, rs_encode, c_decode, rs_decode, and the group.bench_with_input
invocations) so both decoders decode identical data.

Comment thread liblzma-rs-sys/src/lib.rs
Comment thread liblzma-rs-sys/src/lib.rs Outdated
Comment on lines +629 to +654
#[cfg(feature = "parallel")]
#[repr(C)]
pub struct lzma_mt {
pub flags: u32,
pub threads: u32,
pub block_size: u64,
pub timeout: u32,
pub preset: u32,
pub filters: *const lzma_filter,
pub check: lzma_check,
_reserved_enum1: __enum_ty,
_reserved_enum2: __enum_ty,
_reserved_enum3: __enum_ty,
_reserved_int1: u32,
_reserved_int2: u32,
_reserved_int3: u32,
_reserved_int4: u32,
pub memlimit_threading: u64,
pub memlimit_stop: u64,
_reserved_int7: u64,
_reserved_int8: u64,
_reserved_ptr1: *mut c_void,
_reserved_ptr2: *mut c_void,
_reserved_ptr3: *mut c_void,
_reserved_ptr4: *mut c_void,
}

🧹 Nitpick | 🔵 Trivial

Consider providing a Default implementation or constructor for lzma_mt.

The lzma_mt struct contains raw pointer fields (filters, _reserved_ptr1 through _reserved_ptr4) that must be properly initialized. While C callers typically zero-initialize with memset, Rust consumers must explicitly initialize every field, including all reserved fields. Providing a Default implementation would make safe initialization easier and reduce the risk of undefined behavior from uninitialized pointers.

Proposed addition
#[cfg(feature = "parallel")]
impl Default for lzma_mt {
    fn default() -> Self {
        Self {
            flags: 0,
            threads: 0,
            block_size: 0,
            timeout: 0,
            preset: 0,
            filters: std::ptr::null(),
            check: 0,
            _reserved_enum1: 0,
            _reserved_enum2: 0,
            _reserved_enum3: 0,
            _reserved_int1: 0,
            _reserved_int2: 0,
            _reserved_int3: 0,
            _reserved_int4: 0,
            memlimit_threading: 0,
            memlimit_stop: 0,
            _reserved_int7: 0,
            _reserved_int8: 0,
            _reserved_ptr1: std::ptr::null_mut(),
            _reserved_ptr2: std::ptr::null_mut(),
            _reserved_ptr3: std::ptr::null_mut(),
            _reserved_ptr4: std::ptr::null_mut(),
        }
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs-sys/src/lib.rs` around lines 629 - 654, Add a safe
zero-initializer for lzma_mt by implementing Default for the struct (under
cfg(feature = "parallel")), returning a Self with all numeric fields set to 0,
enum/reserved fields set to 0, pointer fields (filters, _reserved_ptr1..4) set
to std::ptr::null() or std::ptr::null_mut(), and check set to 0; this lets Rust
callers call lzma_mt::default() instead of manually initializing every field and
avoids uninitialized raw pointers.

Comment thread liblzma-rs/src/check/check.rs Outdated
Comment thread liblzma-rs/src/check/check.rs Outdated
Comment on lines +1364 to +1370
*old_memlimit = (*coder).memlimit_stop;
if new_memlimit != 0 {
if new_memlimit < *memusage {
return LZMA_MEMLIMIT_ERROR;
}
(*coder).memlimit_stop = new_memlimit;
}

⚠️ Potential issue | 🟠 Major

Clamp memlimit_threading when the public memlimit is lowered.

memconfig() only updates memlimit_stop. Later threaded/block-selection decisions still compare against the stale memlimit_threading, so after a successful limit reduction the decoder can keep allocating above the caller's new cap.

Suggested fix
     *old_memlimit = (*coder).memlimit_stop;
     if new_memlimit != 0 {
         if new_memlimit < *memusage {
             return LZMA_MEMLIMIT_ERROR;
         }
         (*coder).memlimit_stop = new_memlimit;
+        if (*coder).memlimit_threading > new_memlimit {
+            (*coder).memlimit_threading = new_memlimit;
+        }
     }
     LZMA_OK
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_decoder_mt.rs` around lines 1364 - 1370, When
lowering the public memlimit in memconfig (the block that sets
(*coder).memlimit_stop), also clamp the threaded allocation cap so
threaded/block-selection uses the new lower limit: after storing
(*coder).memlimit_stop = new_memlimit (or when new_memlimit != 0) update
(*coder).memlimit_threading = min((*coder).memlimit_threading,
(*coder).memlimit_stop) (or set it to new_memlimit if memlimit_threading is
larger) and keep the existing memusage check; this ensures subsequent threaded
allocation decisions respect the caller's reduced limit.
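The clamp itself is one line; a reduced sketch of the intended memconfig behavior, with the coder flattened to just the two limit fields (names follow the diff above, everything else is simplified):

```rust
struct Coder {
    memlimit_stop: u64,
    memlimit_threading: u64,
}

/// Lowering the public limit must also cap the threading limit,
/// otherwise later threaded/block-mode decisions compare against
/// the stale, larger value.
fn memconfig(coder: &mut Coder, memusage: u64, new_memlimit: u64) -> Result<(), ()> {
    if new_memlimit != 0 {
        if new_memlimit < memusage {
            return Err(()); // LZMA_MEMLIMIT_ERROR
        }
        coder.memlimit_stop = new_memlimit;
        coder.memlimit_threading = coder.memlimit_threading.min(new_memlimit);
    }
    Ok(())
}
```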

Comment thread liblzma-rs/src/common/stream_decoder.rs Outdated
@@ -0,0 +1,480 @@
use crate::types::*;
use libc::{c_uint, c_void};

⚠️ Potential issue | 🔴 Critical

libc::c_uint unavailable on wasm32-unknown-unknown target.

Same issue as other files - the libc crate does not export C integer types on wasm32 targets. Use core::ffi types instead.

🐛 Proposed fix
-use libc::{c_uint, c_void};
+use core::ffi::{c_uint, c_void};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_decoder.rs` at line 2, The import of
libc::{c_uint, c_void} doesn't work for wasm32; replace it by importing
core::ffi::c_void (use core::ffi::c_void) and change all uses of the C unsigned
integer alias c_uint in this file (e.g., function signatures or type aliases in
stream_decoder.rs) to a concrete Rust integer type such as u32 so the code
compiles on wasm32-unknown-unknown without relying on libc.

Comment on lines +255 to +270
3500765272169221397 => {
(*coder).block_options.version = 1;
let mut filters: [lzma_filter; 5] = [lzma_filter {
id: 0,
options: core::ptr::null_mut(),
}; 5];
(*coder).block_options.filters =
::core::ptr::addr_of_mut!(filters) as *mut lzma_filter;
let ret_: lzma_ret = lzma_block_header_decode(
::core::ptr::addr_of_mut!((*coder).block_options),
allocator,
::core::ptr::addr_of_mut!((*coder).buffer) as *mut u8,
);
if ret_ != LZMA_OK {
return ret_;
}

⚠️ Potential issue | 🟠 Major

Stack-allocated filters array escapes via raw pointer.

The filters array is allocated on the stack (line 257-260), then a pointer to it is assigned to (*coder).block_options.filters (line 261-262). After lzma_block_header_decode returns, the pointer is used but filters remains valid only within this match arm's scope. While the pointer is nulled at line 294 before the function continues, any error path between lines 263-293 that returns early would leave a dangling pointer in block_options.filters.

🔍 Analysis

If lzma_block_header_decode (line 263) succeeds but a subsequent operation fails (e.g., lzma_block_decoder_init at line 283-287 returns an error), the code returns at line 296 with (*coder).block_options.filters still pointing to the now-invalid stack memory from filters. This could cause use-after-free if the caller attempts to access the filters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_decoder.rs` around lines 255 - 270, The
stack-allocated filters array escapes via a raw pointer assigned to
(*coder).block_options.filters which can dangle on early returns; allocate
filters on the heap instead and ensure ownership is transferred safely and
cleaned up on all paths. Replace the local stack array (filters) with a heap
allocation (e.g., Box/Vec) and store a raw pointer from
Box::into_raw/Vec::as_mut_ptr in (*coder).block_options.filters, then after
lzma_block_header_decode and any subsequent calls (including on error paths such
as after lzma_block_decoder_init) ensure you convert the raw pointer back and
drop it (or free it) and set (*coder).block_options.filters =
core::ptr::null_mut(); reference symbols: filters,
(*coder).block_options.filters, lzma_block_header_decode,
lzma_block_decoder_init.
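One way to keep the pointer valid across early returns is to give the array a stable heap home and reclaim it on every exit path. A sketch with placeholder types (`Filter`, `Options`, and the `fail_later` flag stand in for the real `lzma_filter` array and the fallible decode steps):

```rust
#[derive(Clone, Copy)]
struct Filter {
    id: u64,
    options: *mut core::ffi::c_void,
}

struct Options {
    filters: *mut Filter,
}

/// Move the filter array to the heap so the raw pointer stored in
/// `opts` cannot dangle, then reclaim it before every return.
fn decode_header(opts: &mut Options, fail_later: bool) -> Result<(), ()> {
    let boxed: Box<[Filter; 5]> =
        Box::new([Filter { id: 0, options: core::ptr::null_mut() }; 5]);
    opts.filters = Box::into_raw(boxed) as *mut Filter;

    // Stand-in for lzma_block_header_decode / lzma_block_decoder_init.
    let result = if fail_later { Err(()) } else { Ok(()) };

    // Reclaim ownership and null the pointer on success *and* error,
    // so no caller can observe a dangling `opts.filters`.
    unsafe { drop(Box::from_raw(opts.filters as *mut [Filter; 5])) };
    opts.filters = core::ptr::null_mut();
    result
}
```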

Comment thread liblzma-rs/src/common/stream_encoder_mt.rs Outdated
Comment on lines +8 to +11
pub unsafe extern "C" fn lzma_stream_flags_compare(
a: *const lzma_stream_flags,
b: *const lzma_stream_flags,
) -> lzma_ret {

⚠️ Potential issue | 🟠 Major

Missing #[no_mangle] and null pointer validation.

  1. Without #[no_mangle], this function won't be callable from C with the expected symbol name.
  2. Pointers a and b are dereferenced on line 12 without null checks.
🐛 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_stream_flags_compare(
     a: *const lzma_stream_flags,
     b: *const lzma_stream_flags,
 ) -> lzma_ret {
+    if a.is_null() || b.is_null() {
+        return LZMA_PROG_ERROR;
+    }
     if (*a).version != 0 || (*b).version != 0 {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_common.rs` around lines 8 - 11, Add the
missing #[no_mangle] to the existing unsafe extern "C" fn
lzma_stream_flags_compare so the symbol is exported to C, and add null-pointer
checks for both parameters (a and b) at the top of that function: if either is
null, return an appropriate lzma_ret error (e.g., LZMA_PROG_ERROR) instead of
dereferencing; only after validating non-null pointers proceed with the existing
comparison logic that currently dereferences *a and *b.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 10

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@liblzma-rs/src/common/auto_decoder.rs`:
- Around line 40-66: The code sets (*coder).sequence = SEQ_CODE before
attempting to initialize the nested decoder, which can leave sequence advanced
when lzma_stream_decoder_init or lzma_alone_decoder_init fail; change the
control flow in the auto-decoder initialization so that (*coder).sequence is
only assigned SEQ_CODE after the chosen init call returns LZMA_OK (i.e., perform
the init first using lzma_stream_decoder_init or lzma_alone_decoder_init on
::core::ptr::addr_of_mut!((*coder).next), check the ret_/ret__0 for LZMA_OK, and
only then set (*coder).sequence = SEQ_CODE), ensuring failures return
immediately without advancing sequence and avoiding a panic when next.code is
None.

In `@liblzma-rs/src/common/block_decoder.rs`:
- Around line 36-124: The code uses opaque magic-number values assigned to
current_block_40 (17473121293339793080, 9393557385011460022) to emulate gotos
based on (*coder).sequence; replace this pattern with idiomatic control flow:
remove current_block_40 and the magic constants in
liblzma-rs/src/common/block_decoder.rs and instead use a match or loop on
(*coder).sequence (and set (*coder).sequence = SEQ_PADDING where appropriate) or
define an enum for sequence states and match over that, preserving the existing
branches and early returns (LZMA_DATA_ERROR, ret returns, etc.); ensure the same
behaviors around the inner coder call (next.code.unwrap()), size updates
(compressed_size/uncompressed_size), lzma_check_update, and final sequence
transition to SEQ_PADDING remain unchanged while eliminating the numeric
placeholders.
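The enum-based refactor described above can be sketched as follows. This is a minimal illustration, not the actual block_decoder.rs code: the `Sequence` enum, `step` function, and `padding_done` flag are hypothetical stand-ins for the real sequence field and its transitions.

```rust
// Hypothetical sketch: replacing c2rust's numeric `current_block` goto
// emulation with an explicit state enum that is matched on directly.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Sequence {
    Code,
    Padding,
    Check,
}

// One step of a simplified decoder state machine: instead of assigning a
// magic integer and branching on it later, return the next state directly.
fn step(seq: Sequence, padding_done: bool) -> Sequence {
    match seq {
        Sequence::Code => Sequence::Padding,
        Sequence::Padding => {
            if padding_done {
                Sequence::Check
            } else {
                Sequence::Padding
            }
        }
        Sequence::Check => Sequence::Check,
    }
}

fn main() {
    let mut seq = Sequence::Code;
    seq = step(seq, false); // block body finished
    assert_eq!(seq, Sequence::Padding);
    seq = step(seq, true); // padding consumed
    assert_eq!(seq, Sequence::Check);
}
```

The real fix would preserve the existing early returns (LZMA_DATA_ERROR and friends) inside the match arms; only the numeric placeholders disappear.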

In `@liblzma-rs/src/common/block_encoder.rs`:
- Around line 135-177: The init guard currently uses a verbose
core::mem::transmute of an Option-wrapped function pointer; simplify by casting
the function pointer directly to uintptr_t and use that for comparison and
assignment (replace the transmute expressions with direct casts like
lzma_block_encoder_init as uintptr_t), keeping the same logic around
(*next).init and the lzma_next_end(next, allocator) call and assigning
(*next).init afterward.

In `@liblzma-rs/src/common/index.rs`:
- Around line 336-347: The current cumulative additions can overflow before the
LZMA_VLI_MAX checks; replace the unchecked adds with fallible checks (use
checked_add) when building file_size from compressed_base, the header sizes,
stream_padding, vli_ceil4(unpadded_sum), and the result of
index_size(record_count, index_list_size), and if any checked_add returns None
immediately return LZMA_VLI_UNKNOWN; update the logic around the file_size
variable and comparisons so you only compare to LZMA_VLI_MAX after successful
checked_adds to avoid wrapped values passing the guard.
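The checked_add chain suggested above can be sketched like this. The helper name `file_size` and its parameter list are illustrative, not the actual index.rs signature; the constants mirror liblzma's LZMA_VLI_MAX / LZMA_VLI_UNKNOWN convention.

```rust
// Overflow-safe accumulation: any intermediate overflow, or a total above
// LZMA_VLI_MAX, yields LZMA_VLI_UNKNOWN instead of a wrapped value that
// would slip past the guard.
const LZMA_VLI_MAX: u64 = u64::MAX / 2;
const LZMA_VLI_UNKNOWN: u64 = u64::MAX;

// Round up to a multiple of four (valid for inputs <= LZMA_VLI_MAX).
fn vli_ceil4(v: u64) -> u64 {
    (v + 3) & !3
}

fn file_size(
    compressed_base: u64,
    headers: u64,
    padding: u64,
    unpadded_sum: u64,
    index_size: u64,
) -> u64 {
    let total = compressed_base
        .checked_add(headers)
        .and_then(|s| s.checked_add(padding))
        .and_then(|s| s.checked_add(vli_ceil4(unpadded_sum)))
        .and_then(|s| s.checked_add(index_size));
    match total {
        Some(s) if s <= LZMA_VLI_MAX => s,
        _ => LZMA_VLI_UNKNOWN,
    }
}

fn main() {
    // 100 + 12 + 0 + ceil4(5) + 16 = 136
    assert_eq!(file_size(100, 12, 0, 5, 16), 136);
    // A sum that would wrap must not pass the LZMA_VLI_MAX comparison.
    assert_eq!(file_size(u64::MAX - 1, 10, 0, 0, 0), LZMA_VLI_UNKNOWN);
}
```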

In `@liblzma-rs/src/common/lzip_decoder.rs`:
- Around line 297-298: The function lzma_lzip_decoder is missing the
#[no_mangle] attribute required for C FFI linkage; add #[no_mangle] immediately
above the lzma_lzip_decoder definition (which is declared as pub unsafe extern
"C" fn lzma_lzip_decoder(...)) so the symbol is exported with the expected C
name, keeping the existing pub unsafe extern "C" signature intact to match
lzma_lzip_decoder_init.

In `@liblzma-rs/src/common/stream_decoder.rs`:
- Around line 29-39: stream_decoder_reset currently leaves the
stream_flags.check field uninitialized causing stream_decoder_get_check to
expose garbage or previous stream data; update stream_decoder_reset to
explicitly initialize/reset (*coder).stream_flags.check to a safe default (e.g.,
zero or the library's "unknown/none" check constant) whenever starting a new
stream so concatenated streams don't inherit the prior check value and the first
call to stream_decoder_get_check is well-defined.

In `@liblzma-rs/src/common/stream_encoder.rs`:
- Around line 424-449: Add the #[no_mangle] attribute to the exported C ABI
function lzma_stream_encoder so the symbol is not Rust-mangled and C callers can
link to "lzma_stream_encoder"; locate the pub unsafe extern "C" fn
lzma_stream_encoder(...) declaration and place #[no_mangle] directly above it
(also scan other similar extern "C" functions in this module to ensure they have
#[no_mangle] if they are intended for C visibility).
- Around line 109-115: The static array `convert` is declared as `static mut`
but is never mutated; change its declaration to an immutable `static` (e.g.,
`static convert: [lzma_action; 5] = [...]`) to remove unnecessary `mut` and
unsafe requirements, and then update any places that currently access it via
`unsafe { convert }` to read it directly without `unsafe`; reference the
`convert` symbol and its elements (`LZMA_RUN`, `LZMA_SYNC_FLUSH`, `LZMA_FINISH`)
when making the change.

In `@liblzma-rs/src/lz/lz_encoder_mf.rs`:
- Around line 168-192: The skip helpers (lzma_mf_hc3_skip, lzma_mf_hc4_skip,
lzma_mf_bt2_skip, lzma_mf_bt3_skip, lzma_mf_bt4_skip) run their loop body
unconditionally and then do amount -= 1, causing underflow when amount == 0; fix
each by returning early or changing the loop to while amount != 0 so the body is
not executed when amount is zero (e.g., replace the current loop { ... amount -=
1; if amount==0 break; } pattern with while amount != 0 { ... amount -= 1 } or
an initial if amount == 0 { return } before the existing loop), keeping the
existing inner logic and references to mf_avail, move_pending, move_pos,
hash/son updates, and cyclic_pos unchanged.
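The underflow fix above reduces to a loop-shape change, sketched here with the match-finder bookkeeping elided. The `skip` helper and its `positions_skipped` counter are illustrative only; the real functions update hash/son state on each iteration.

```rust
// The c2rust output ran the loop body once before decrementing, so
// amount == 0 underflowed. A plain `while amount != 0` guard means the
// body is never entered for a zero count.
fn skip(mut amount: u32) -> u32 {
    let mut positions_skipped = 0;
    while amount != 0 {
        // ... real code would call move_pos / update hash and son here ...
        positions_skipped += 1;
        amount -= 1;
    }
    positions_skipped
}

fn main() {
    assert_eq!(skip(0), 0); // previously: 0u32 - 1 panics/underflows
    assert_eq!(skip(3), 3);
}
```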

In `@liblzma-rs/src/lz/lz_encoder.rs`:
- Around line 471-475: The inner encoder vtable fields (lz.code, lz.end,
lz.options_update, lz.set_out_limit) on lzma_coder must be cleared on reuse
before calling lz_init to avoid dispatching stale function pointers; update the
reuse path in the initializer that currently sets (*coder).lz.coder =
core::ptr::null_mut() so it also sets (*coder).lz.code = None, (*coder).lz.end =
None, (*coder).lz.options_update = None and (*coder).lz.set_out_limit = None
prior to invoking lz_init, then after lz_init validate that lz.code was
installed and treat the call as a failure (return non-LZMA_OK) if lz.code
remains None (so lz_encode cannot be called with stale callbacks); apply the
same change to the analogous reuse block around the second region mentioned (the
506-515 area) and ensure lz_init error handling returns a failure instead of
LZMA_OK when callbacks are not set.

---

Duplicate comments:
In `@benches/backend_comparison.rs`:
- Around line 121-132: The decode benchmarks use different compressed inputs
(c_compressed vs rs_compressed) so they measure different workloads; compress
the payload once per size (e.g., call make_payload(size) then produce a single
compressed fixture like compressed = unsafe { c_encode(&input) } or pick a
canonical encoder) and pass that same compressed fixture to both
group.bench_with_input calls so both c_decode and rs_decode decode identical
bytes; update references to c_compressed/rs_compressed in the bench_with_input
closures to use the shared compressed variable and keep the same label/size
handling.
- Around line 22-37: The encode/decode helpers (e.g., c_encode) currently trust
out_pos (and in_pos for decoders) even when the liblzma call fails; update each
helper that calls c_sys::lzma_easy_buffer_encode / the decode counterparts to
check the returned lzma_ret value and only truncate and return the output buffer
when the call returns success (e.g., c_sys::LZMA_OK or appropriate success
code); for decoders also assert or verify that in_pos == compressed.len() (or
expected input consumption) before accepting the result; use the return value to
log/return an error or panic otherwise instead of trusting out_pos/in_pos.

In `@liblzma-rs-sys/src/lib.rs`:
- Around line 146-149: The wrapper lzma_memlimit_get currently accepts a raw
pointer but is declared safe; change its signature to match the adjacent
raw-pointer wrappers by making it unsafe (i.e., declare pub unsafe extern "C" fn
lzma_memlimit_get(...)) so callers must uphold the pointer contract before
forwarding to liblzma_rs::common::common::lzma_memlimit_get; update any internal
doc/comment if present and check callers/tests that call lzma_memlimit_get to
pass through unsafe blocks as needed.

In `@liblzma-rs/src/common/alone_encoder.rs`:
- Around line 201-221: The exported C ABI function lzma_alone_encoder is
declared as pub unsafe extern "C" but missing #[no_mangle], so its symbol will
be Rust-mangled and not linkable from C; add the #[no_mangle] attribute
immediately above the lzma_alone_encoder function declaration (keeping the
existing pub unsafe extern "C" signature and body) to ensure the symbol is
emitted with the exact name expected by C callers.
- Around line 65-106: The function alone_encoder_init currently dereferences the
options pointer (used by lzma_lzma_lclppb_encode and (*options).dict_size)
without checking for null; add an early null check inside alone_encoder_init
(before any use of options) that returns an appropriate lzma_ret (e.g.,
LZMA_OPTIONS_ERROR) when options.is_null() to avoid undefined behavior, and
ensure subsequent uses of options assume a non-null pointer.

In `@liblzma-rs/src/common/block_buffer_encoder.rs`:
- Around line 2-3: The constant HEADERS_BOUND uses a bitwise AND with !(3) but
relies on operator precedence to apply the mask to the whole summed expression;
wrap the entire additive expression (the sum of 1 + 1 + 2 * LZMA_VLI_BYTES_MAX +
3 + 4 + LZMA_CHECK_SIZE_MAX + 3) in parentheses so the & !(3) applies
unambiguously to that total (referencing HEADERS_BOUND, LZMA_VLI_BYTES_MAX, and
LZMA_CHECK_SIZE_MAX).
- Around line 28-32: The function lzma_block_buffer_bound currently casts a u64
(ret from lzma_block_buffer_bound64) directly to size_t which will truncate on
32-bit platforms; add a bounds check before casting: compare ret against the
maximum representable size_t and if ret exceeds that max, return the size_t max
(clamp) instead of truncating, otherwise cast and return; update references in
this function (ret, uncompressed_size, lzma_block_buffer_bound64) to perform the
check to prevent a truncated undersized bound.
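Both block_buffer_encoder fixes can be sketched together: parenthesize the sum before masking, and saturate rather than truncate when narrowing the 64-bit bound. The constant values below are illustrative stand-ins for the liblzma definitions.

```rust
// Stand-in values; the real constants live in the liblzma headers.
const LZMA_VLI_BYTES_MAX: u64 = 9;
const LZMA_CHECK_SIZE_MAX: u64 = 64;

// Mask the whole parenthesized sum with !3 so the 4-byte rounding is
// unambiguous instead of relying on operator precedence.
const HEADERS_BOUND: u64 =
    (1 + 1 + 2 * LZMA_VLI_BYTES_MAX + 3 + 4 + LZMA_CHECK_SIZE_MAX + 3) & !3;

// Narrow a u64 bound to size_t, clamping to the platform maximum instead
// of silently truncating on 32-bit targets.
fn bound_to_size(ret: u64) -> usize {
    if ret > usize::MAX as u64 {
        usize::MAX
    } else {
        ret as usize
    }
}

fn main() {
    assert_eq!(HEADERS_BOUND % 4, 0);
    assert_eq!(bound_to_size(42), 42);
}
```

An undersized bound is worse than an oversized one here, since callers size output buffers from it; clamping keeps the contract conservative.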

In `@liblzma-rs/src/common/block_decoder.rs`:
- Around line 177-225: In lzma_block_decoder_init, validate the incoming block
pointer before any dereference (e.g., before calling lzma_block_unpadded_size or
reading (*block).uncompressed_size); add a null-check for block at the top of
the function and return an appropriate lzma_ret (e.g., LZMA_PROG_ERROR) if block
is null, ensuring existing cleanup logic (like lzma_next_end on next) still runs
when needed and avoiding dereferencing block later in the function.

In `@liblzma-rs/src/common/block_encoder.rs`:
- Around line 34-76: The match on (*coder).sequence that assigns magic numbers
to current_block_34 (the values 6470892831169497455 and 47327340716975230)
should be replaced with an explicit Rust enum/state so control flow is clear and
idiomatic: introduce a small enum (e.g., BlockState or CurrentBlock) and map the
existing arms for sequence 0, 1, 2 to meaningful variants instead of numeric
constants, update the code that switches on current_block_34 to match on that
enum, and keep existing behavior around (*coder).sequence, SEQ_PADDING, the call
to (*coder).next.code.unwrap() in the sequence 0 branch, and the subsequent
returns (LZMA_STREAM_END, LZMA_SYNC_FLUSH, LZMA_DATA_ERROR, LZMA_PROG_ERROR)
unchanged; replace references to current_block_34 with the new enum variant to
preserve the original branching logic.

In `@liblzma-rs/src/common/common.rs`:
- Around line 117-128: The code currently calls (*next).update.unwrap(), which
panics when update is None; instead check whether (*next).update is Some before
calling it and if it is None return LZMA_PROG_ERROR (preserving the existing
early-return for LZMA_VLI_UNKNOWN which returns LZMA_OK); when Some, call the
callback with the same arguments ((*next).coder, allocator, core::ptr::null(),
reversed_filters). Ensure you reference the same symbols: reversed_filters,
next, update, coder, allocator, LZMA_PROG_ERROR, and LZMA_OK.
- Around line 76-91: The code must validate cursor positions before doing the
wrapping_sub math: check whether *in_pos >= in_size or *out_pos >= out_size (or
otherwise out-of-range) and reject/return an error (or no-op) immediately
instead of proceeding; update the block that computes in_avail/out_avail and the
subsequent call to core::ptr::copy_nonoverlapping to only run when the cursors
are valid, so that wrapping_sub cannot produce huge values for
in_avail/out_avail and prevent out-of-bounds reads/writes involving in_pos,
out_pos, in_size, out_size and copy_nonoverlapping.
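The cursor-validation fix above can be sketched in safe code, with slices standing in for the raw in/out pointers of the real FFI function. The `copy_available` name and `Result` return are hypothetical.

```rust
// Validate *in_pos / *out_pos against the buffer sizes before
// subtracting, so a corrupted cursor can never manufacture a huge
// "available" count via wrapping arithmetic.
fn copy_available(
    input: &[u8],
    in_pos: &mut usize,
    output: &mut [u8],
    out_pos: &mut usize,
) -> Result<usize, ()> {
    // Reject out-of-range cursors up front instead of wrapping.
    if *in_pos > input.len() || *out_pos > output.len() {
        return Err(());
    }
    let in_avail = input.len() - *in_pos;
    let out_avail = output.len() - *out_pos;
    let n = in_avail.min(out_avail);
    output[*out_pos..*out_pos + n].copy_from_slice(&input[*in_pos..*in_pos + n]);
    *in_pos += n;
    *out_pos += n;
    Ok(n)
}

fn main() {
    let input = [1u8, 2, 3, 4];
    let mut out = [0u8; 8];
    let (mut ip, mut op) = (1usize, 0usize);
    assert_eq!(copy_available(&input, &mut ip, &mut out, &mut op), Ok(3));
    assert_eq!(&out[..3], &[2, 3, 4]);
    // A cursor past the end is rejected, not wrapped into a huge length.
    let mut bad = 99usize;
    assert!(copy_available(&input, &mut bad, &mut out, &mut op).is_err());
}
```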

In `@liblzma-rs/src/common/easy_preset.rs`:
- Around line 3-9: The function lzma_easy_preset currently dereferences opt_easy
(e.g., in calls like addr_of_mut!((*opt_easy).opt_lzma) and accesses to
(*opt_easy).filters) before any validation; add an early null-pointer guard at
the top of lzma_easy_preset that checks if opt_easy.is_null() and returns true
immediately to preserve the existing failure convention, so no subsequent
dereferences of opt_easy (addr_of_mut!((*opt_easy).opt_lzma),
(*opt_easy).filters[0], etc.) occur when callers pass NULL.

In `@liblzma-rs/src/common/file_info.rs`:
- Around line 617-620: Ensure the caller's output pointer is cleared immediately
on entry: before doing any work (and before the early return on
dest_index.is_null()), set *dest_index to null to avoid leaving a stale pointer
on failure; locate the init path around the symbols dest_index, next and
lzma_file_info_coder and update the code so that if dest_index is non-null you
assign null to *dest_index at the very start of the function (still return
LZMA_PROG_ERROR if dest_index itself is null).

In `@liblzma-rs/src/common/filter_encoder.rs`:
- Around line 449-452: The code currently returns LZMA_PROG_ERROR when
encoder_find((*filter).id) yields null in the filter encoder lookup (the fe
pointer check in filter_encoder.rs), but lzma_properties_size() already treats
ids <= LZMA_VLI_MAX as “unsupported but valid” and would return
LZMA_OPTIONS_ERROR; change the null-path return to LZMA_OPTIONS_ERROR so the
public contract is consistent. Locate the fe null-check that follows the
encoder_find call and replace the LZMA_PROG_ERROR return with
LZMA_OPTIONS_ERROR, keeping the rest of the function (and any existing
validation that uses lzma_properties_size) unchanged.

In `@liblzma-rs/src/common/index_hash.rs`:
- Around line 86-93: The FFI entry point lzma_index_hash_size currently
dereferences the raw pointer unconditionally; update lzma_index_hash_size to
check if index_hash is null and return 0 (a safe lzma_vli zero-size) when it is,
otherwise call index_size with (*index_hash).blocks.count and
(*index_hash).blocks.index_list_size; ensure the null-check is inside the unsafe
block or precedes dereference to avoid UB.

In `@liblzma-rs/src/common/index.rs`:
- Around line 261-265: The lzma_index_prealloc function can store 0 into
(*i).prealloc which leads lzma_index_append to allocate an index_group with zero
slots and cause OOB writes; update lzma_index_prealloc to clamp the incoming
records value between 1 and PREALLOC_MAX (use records = max(1, min(records,
PREALLOC_MAX as lzma_vli))) before assigning (*i).prealloc, ensuring the stored
size_t is at least 1 and no zero-length allocations occur.
- Around line 691-695: The duplication routine lzma_index_dup() currently copies
uncompressed_size, total_size, record_count, index_list_size and stream pointers
but omits the checks bitset, causing lzma_index_checks() to lose preserved check
IDs for non-rightmost streams; update lzma_index_dup() to also copy the checks
field from src to dest (i.e., assign dest->checks = src->checks or the Rust
equivalent for (*dest).checks = (*src).checks) so the duplicated index retains
the original check metadata.
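The prealloc clamp from the first index.rs comment above is a one-liner, sketched here; the PREALLOC_MAX value shown is a hypothetical stand-in for the real constant derived from the record/group sizes.

```rust
// Hypothetical stand-in for liblzma's PREALLOC_MAX.
const PREALLOC_MAX: u64 = (1 << 16) / 24;

// Keep the stored slot count between 1 and PREALLOC_MAX so a zero-slot
// index_group can never be allocated (which would cause OOB writes in
// lzma_index_append).
fn clamp_prealloc(records: u64) -> u64 {
    records.clamp(1, PREALLOC_MAX)
}

fn main() {
    assert_eq!(clamp_prealloc(0), 1);
    assert_eq!(clamp_prealloc(10), 10);
    assert_eq!(clamp_prealloc(u64::MAX), PREALLOC_MAX);
}
```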

In `@liblzma-rs/src/common/microlzma_decoder.rs`:
- Around line 215-255: Check and validate uncomp_size against LZMA_VLI_MAX
before allocating and wiring next.coder/next.code/next.end to avoid leaving next
in a partially-initialized state; move the uncomp_size > LZMA_VLI_MAX check to
the top of the init path (before lzma_alloc and before assigning (*next).coder,
(*next).code, and (*next).end) so that the function returns LZMA_OPTIONS_ERROR
without mutating next, and only perform the allocation and lzma_next_coder_s
initialization when uncomp_size is valid.

In `@liblzma-rs/src/common/microlzma_encoder.rs`:
- Around line 23-41: The code currently unwraps optional callbacks on
lzma_next_coder which can panic across the FFI boundary; instead check that
(*coder).lzma.set_out_limit and (*coder).lzma.code are present and return
LZMA_PROG_ERROR if either is None, then call them safely (e.g., match or if let
Some(cb) = ...) to invoke set_out_limit and code, keeping the existing arguments
and assigning the lzma_ret to ret as before; specifically update the block that
calls set_out_limit and assigns ret from code to handle missing callbacks
without unwraps (references: set_out_limit, code, lzma_next_coder).
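The Option-callback pattern requested above can be sketched as follows. The `Coder` struct, `run` function, and error values are simplified stand-ins for the real lzma_next_coder fields and lzma_ret codes.

```rust
const LZMA_OK: i32 = 0;
const LZMA_PROG_ERROR: i32 = 11;

// Simplified stand-in for the Option-typed callbacks on lzma_next_coder.
struct Coder {
    set_out_limit: Option<fn(u64) -> i32>,
    code: Option<fn() -> i32>,
}

// Check both callbacks before invoking them: a missing callback becomes
// a recoverable LZMA_PROG_ERROR instead of a panic across the FFI
// boundary.
fn run(coder: &Coder, limit: u64) -> i32 {
    let (set_out_limit, code) = match (coder.set_out_limit, coder.code) {
        (Some(s), Some(c)) => (s, c),
        _ => return LZMA_PROG_ERROR,
    };
    let ret = set_out_limit(limit);
    if ret != LZMA_OK {
        return ret;
    }
    code()
}

fn main() {
    let ok = Coder { set_out_limit: Some(|_| LZMA_OK), code: Some(|| LZMA_OK) };
    assert_eq!(run(&ok, 1024), LZMA_OK);
    let broken = Coder { set_out_limit: None, code: Some(|| LZMA_OK) };
    assert_eq!(run(&broken, 1024), LZMA_PROG_ERROR);
}
```

Unwinding out of an extern "C" function is undefined behavior on older Rust (and an abort on newer toolchains), so returning an error code is the only safe option here.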

In `@liblzma-rs/src/common/outqueue.rs`:
- Around line 187-196: In lzma_outq_enable_partial_output, avoid calling
enable_partial_output.unwrap() unconditionally; first check the Option and only
invoke the callback when it is Some to prevent a panic across the FFI boundary.
Replace the direct unwrap with an if let Some(cb) = enable_partial_output (or
equivalent match) and then call cb((*(*outq).head).worker) and clear the worker
pointer as currently done; keep the existing pointer and finished checks around
((*outq).head).finished and ((*outq).head).worker.
- Around line 117-143: The function lzma_outq_get_buf dereferences (*outq).cache
without validating it; add a defensive null check at the start of
lzma_outq_get_buf (similar to lzma_outq_clear_cache) that returns
core::ptr::null_mut() if (*outq).cache.is_null() (or otherwise handles the error
consistently with the FFI contract), and document that callers should handle a
null return (or ensure lzma_outq_prealloc_buf is called); update any related
logic in lzma_outq_get_buf accordingly so no dereference occurs when cache is
null.

In `@liblzma-rs/src/common/stream_decoder_mt.rs`:
- Around line 614-616: In stream_decoder_reset(), after setting
(*coder).sequence = SEQ_STREAM_HEADER and (*coder).pos = 0, also explicitly
reset the stream_flags.check field (e.g. set (*coder).stream_flags.check = 0 or
the appropriate "unknown" sentinel) so stream_decoder_mt_get_check() never reads
stale/garbage data across new or concatenated streams; update the
stream_decoder_reset() implementation to clear that field alongside sequence and
pos.
- Around line 479-485: The code path that handles a worker thread error
currently sets (*coder).pending_error to LZMA_PROG_ERROR, losing the real worker
terminal code; change it to preserve the actual worker error by assigning
(*coder).pending_error = (*coder).thread_error (instead of LZMA_PROG_ERROR) in
the branch where (*coder).thread_error != LZMA_OK and (*coder).fail_fast is
false, ensuring the real error from the background decoder is propagated to
callers.
- Around line 1316-1325: The teardown must destroy the coder-level
synchronization primitives before freeing the coder: call
mythread_cond_destroy(::core::ptr::addr_of_mut!((*coder).cond), allocator) and
mythread_mutex_destroy(::core::ptr::addr_of_mut!((*coder).mutex), allocator) (or
the project’s equivalent mythread_*_destroy functions) after the other component
teardowns (e.g., after threads_end, lzma_outq_end, lzma_next_end,
lzma_filters_free, lzma_index_hash_end) and before lzma_free(coder as *mut
c_void, allocator) to avoid leaking OS resources and preserve init/teardown
symmetry for (*coder).cond and (*coder).mutex.
- Around line 1363-1369: The public memlimit update sets (*coder).memlimit_stop
but leaves the per-thread cap (*coder).memlimit_threading stale, allowing
threaded allocations to exceed the new public limit; after setting
(*coder).memlimit_stop when new_memlimit != 0, clamp the threading cap by
setting (*coder).memlimit_threading = min((*coder).memlimit_threading,
new_memlimit) (or assign new_memlimit if the threading cap is larger) so
memlimit_threading cannot exceed the lowered public limit; refer to symbols
memlimit_stop, memlimit_threading, new_memlimit, memusage, and coder to locate
where to apply the change.

In `@liblzma-rs/src/common/stream_decoder.rs`:
- Around line 255-268: The early return after calling lzma_block_header_decode
in stream_decoder.rs leaks per-filter options and leaves block_options.filters
pointing at a stack array; before returning on error from
lzma_block_header_decode, perform the same cleanup as the normal exit path: free
any per-filter options populated by the decode and reset
(*coder).block_options.filters to null (i.e. replicate the cleanup logic used
after successful decode), so that no dangling pointer or leaked filter options
remain when returning from the function.

In `@liblzma-rs/src/common/stream_encoder_mt.rs`:
- Around line 992-997: The code validates *block_size against BLOCK_SIZE_MAX and
UINT64_MAX but does not guard the later narrowing to size_t; add checks against
SIZE_MAX: after validating *block_size (the same place that checks
BLOCK_SIZE_MAX and UINT64_MAX) also return LZMA_OPTIONS_ERROR if *block_size >
SIZE_MAX, and after computing *outbuf_size_max =
lzma_block_buffer_bound64(*block_size) check if *outbuf_size_max > SIZE_MAX and
return LZMA_MEM_ERROR if so, using the existing
LZMA_OPTIONS_ERROR/LZMA_MEM_ERROR symbols to avoid silent truncation when
narrowing the u64 values to size_t.
- Around line 820-833: The code currently returns LZMA_STREAM_END for
LZMA_FULL_FLUSH / LZMA_FULL_BARRIER as soon as input is consumed and outq is
empty, but in-flight worker threads may still be processing; update the branch
handling LZMA_FULL_FLUSH and LZMA_FULL_BARRIER (the block with checks on *in_pos
== in_size, LZMA_RUN, LZMA_FULL_BARRIER, lzma_outq_is_empty and LZMA_FULL_FLUSH)
to first ensure worker activity is quiescent before returning: query or wait for
the encoder’s worker state (e.g. the thread pool / worker queue associated with
the coder) to be idle (spin/wait or join outstanding tasks) and only then return
LZMA_STREAM_END; keep existing checks for LZMA_FINISH/LZMA_RUN but replace the
immediate return for FULL_FLUSH/FULL_BARRIER with a worker-idle wait using the
coder’s worker state.

In `@liblzma-rs/src/common/stream_flags_decoder.rs`:
- Around line 16-19: The public C-ABI decode entry points have Rust name
mangling; add #[no_mangle] above each extern "C" fn export so the symbols are
stable for C linkage — specifically prepend #[no_mangle] to
lzma_stream_header_decode and the other public decode functions in this file
(the ones around the 50-53 region) to ensure the C callers can resolve the
unmangled symbols.

In `@liblzma-rs/src/common/string_conversion.rs`:
- Around line 159-172: Change read-only global option maps from mutable statics
to immutable statics and use safe lazy init for the one that is mutated: replace
static mut bcj_optmap and delta_optmap with static bcj_optmap and static
delta_optmap initialized immutably (removing unsafe mutability), and for
lzma12_optmap which is written in run_static_initializers switch to a OnceLock
(or similar sync primitive) so run_static_initializers stores the initialized
value into the OnceLock and callers access lzma12_optmap.get()/get_or_init()
instead of direct mutation; update any references from direct dereference to the
OnceLock API and remove unsafe blocks related to these statics.
- Around line 752-757: The code calls filter_name_map[i as usize].parse.unwrap()
which will panic if parse is None; guard the call by checking parse (e.g., if
let Some(parse_fn) = filter_name_map[i as usize].parse) and use parse_fn(str,
str_end, options) instead of unwrap(), and decide the correct behavior when
parse is None (e.g., skip to next iteration or return a proper error); ensure
you still free resources with lzma_free(options, allocator) when returning early
and preserve the existing errmsg handling.
- Around line 931-933: The error message returned by the C-ABI function
lzma_str_to_filters() uses Rust-specific wording "core::ptr::null_mut()", which
is inappropriate for C consumers; update the returned C string to use C
terminology such as "NULL pointer" (e.g., "Unexpected NULL pointer argument(s)
to lzma_str_to_filters()") so that callers using the C ABI receive a clear,
language-appropriate error message; ensure the change is made in the return
string inside the null-check block that references str and filters.
- Around line 112-119: The suffixes array is declared as mutable but never
modified—replace the unsafe mutable declaration with an immutable static by
changing `static mut suffixes: [[c_char; 4]; 4] = unsafe { ... };` to `static
suffixes: [[c_char; 4]; 4] = [ ... ];`, removing the unnecessary `mut` and the
surrounding `unsafe` block; keep the same initializer entries (the transmute
expressions or safe equivalents) so the symbol `suffixes` remains available to
the code that reads it.
- Around line 1350-1354: The platform-specific INIT_ARRAY definition only
applies to Linux/Windows/macOS so run_static_initializers (which populates
lzma12_optmap) never runs on other targets; fix by making the initializer run on
all targets — e.g. add a fallback build clause that defines the INIT_ARRAY (or a
platform-agnostic ctor) when not target_os = "linux"/"windows"/"macos", or use a
crate-level #[ctor] to call run_static_initializers unconditionally; ensure the
static INIT_ARRAY/run_static_initializers symbol referenced in
string_conversion.rs is present for all targets so lzma12_optmap is initialized
everywhere.
- Around line 238-245: Replace the magic number 101 in the match arm with a
character-byte literal for clarity: change the condition in the match on **str
(the block that sets LZMA_PRESET_EXTREME) to compare against b'e' cast to c_char
(i.e., b'e' as c_char) instead of 101 so the code reads explicitly that it's
checking for the 'e' flag.
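The OnceLock migration from the first string_conversion.rs comment above might look like this. The `OptPair` shape and contents are illustrative, not the real lzma12_optmap rows; note also that `std::sync::OnceLock` needs Rust 1.70, so with this repo's 1.63 MSRV an equivalent such as `once_cell::sync::Lazy` would be needed instead.

```rust
use std::sync::OnceLock;

#[derive(Debug, PartialEq)]
struct OptPair {
    name: &'static str,
    max: u32,
}

static LZMA12_OPTMAP: OnceLock<Vec<OptPair>> = OnceLock::new();

// Replaces run_static_initializers: the map is built thread-safely on
// first access, on every target, with no static mut and no unsafe.
fn lzma12_optmap() -> &'static [OptPair] {
    LZMA12_OPTMAP.get_or_init(|| {
        vec![
            OptPair { name: "dict", max: u32::MAX },
            OptPair { name: "lc", max: 4 },
            OptPair { name: "pb", max: 4 },
        ]
    })
}

fn main() {
    assert_eq!(lzma12_optmap().len(), 3);
    assert_eq!(lzma12_optmap()[1].name, "lc");
}
```

This also fixes the platform gap noted below: lazy init on first use does not depend on a linker INIT_ARRAY section existing for the target.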

In `@liblzma-rs/src/delta/delta_common.rs`:
- Around line 42-46: The zeroing currently uses a hardcoded 256 byte length;
replace that literal with a computed size using core::mem::size_of_val so the
clear stays correct if the history field changes. In the call that uses
core::ptr::write_bytes with ::core::ptr::addr_of_mut!((*coder).history) as *mut
u8, compute the count as core::mem::size_of_val(&(*coder).history) (cast to
usize if needed) and pass that instead of 256; keep the same address expression
and the zero byte value.
- Around line 8-13: The public FFI function lzma_delta_coder_init dereferences
the next pointer without checking for null; add a null-check at the start of
lzma_delta_coder_init to validate that next (pointer to lzma_next_coder) is not
null and return an appropriate lzma_ret error (e.g., LZMA_PROG_ERROR or similar)
if it is null, before any dereference or use of (*next).coder; ensure the same
pattern is used for any other public FFI entrypoints that take raw pointers.
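The size_of_val suggestion for delta_common.rs above can be sketched safely; `DeltaCoder` and `reset_history` are simplified stand-ins for the real coder struct and its raw-pointer write_bytes call.

```rust
use core::mem::size_of_val;

struct DeltaCoder {
    history: [u8; 256],
}

// Zero the history buffer using its computed size rather than a
// hardcoded 256, so the clear stays correct if the field's type changes.
// This is the safe equivalent of
// ptr::write_bytes(addr_of_mut!((*coder).history) as *mut u8, 0, len).
fn reset_history(coder: &mut DeltaCoder) {
    let len = size_of_val(&coder.history);
    coder.history[..len].fill(0);
}

fn main() {
    let mut coder = DeltaCoder { history: [0xAB; 256] };
    reset_history(&mut coder);
    assert!(coder.history.iter().all(|&b| b == 0));
    assert_eq!(size_of_val(&coder.history), 256);
}
```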

In `@liblzma-rs/src/delta/delta_decoder.rs`:
- Around line 45-50: In lzma_delta_decoder_init, add an explicit null check for
the incoming pointer parameter next before any dereference; if next.is_null()
return the appropriate lzma_ret error (e.g., LZMA_PROG_ERROR) from the FFI
entrypoint and avoid touching (*next). Implement this check at the top of the
unsafe extern "C" fn to ensure the public ABI validates the pointer before using
lzma_next_coder.
- Around line 27-37: The current call to (*coder).next.code.unwrap() can panic
if next.code is None; change this to defensively check next.code and return
LZMA_PROG_ERROR (the lzma_ret error) when it's None, otherwise call the function
pointer with the same arguments; locate the call site referencing
(*coder).next.code, (*coder).next.coder, and the lzma_ret return value and
implement a match/if let that avoids unwrap to prevent panics in FFI code.
- Around line 67-75: In lzma_delta_props_decode add explicit null checks for the
incoming C ABI pointers before any dereference: verify that options and props
are not null at the top of the function (before the existing props_size check or
immediately after it) and return an appropriate error (e.g. LZMA_PROG_ERROR) if
either is null; then proceed to read *props and write to *options as currently
implemented (this prevents the unsafe dereferences in the lines that read
props[0] and assign to *options).

In `@liblzma-rs/src/delta/delta_encoder.rs`:
- Around line 132-138: In lzma_delta_props_encode, validate the out pointer
before dereferencing: check out.is_null() and return an appropriate error (e.g.,
LZMA_PROG_ERROR) if null; keep the existing early check using
lzma_delta_coder_memusage(options) == UINT64_MAX and then cast options to *const
lzma_options_delta and write the encoded property only when out is non-null (use
LZMA_DELTA_DIST_MIN and the existing wrapping logic), preserving the final
LZMA_OK return on success.
- Around line 101-106: The public FFI function lzma_delta_encoder_init
dereferences the pointer parameter next without checking for null; add an
initial null check at the top of lzma_delta_encoder_init (validate next !=
std::ptr::null_mut()) and return the appropriate lzma_ret error (e.g.,
LZMA_PROG_ERROR) if it is null, ensuring no dereference occurs before the check;
keep the rest of the function unchanged and reference the symbol
lzma_delta_encoder_init and the parameter name next when making the change.

In `@liblzma-rs/src/lib.rs`:
- Around line 1-9: The crate-level attribute currently silences all Clippy lints
via clippy::all; replace this overly broad suppression with a targeted list of
specific lints to ignore (for example clippy::transmute_ptr_to_ptr,
clippy::too_many_arguments) or remove the clippy suppression entirely and only
add individual allows for the actual warnings emitted by the C2rust output;
update the attribute near the top of the file (the existing #![allow(...,
clippy::all)] declaration) to enumerate the precise clippy rules to silence and
add a brief TODO comment tracking this technical debt.

In `@liblzma-rs/src/lz/lz_decoder.rs`:
- Around line 295-300: The memusage function currently returns raw
dictionary_size; update lzma_lz_decoder_memusage to apply the same sizing rules
used in lzma_lz_decoder_init: clamp dictionary_size to a minimum of 4096 and
round it up to 16-byte alignment before computing the total
(size_of::<lzma_coder>() + aligned_dictionary + 2*LZ_DICT_REPEAT_MAX +
LZ_DICT_EXTRA). Locate lzma_lz_decoder_memusage and mirror the clamp/align logic
from lzma_lz_decoder_init so reported memory matches actual initialization
behavior.
- Around line 234-240: The code unconditionally calls lz_init.unwrap() which can
panic when a null/None callback is passed; update the extern "C" entrypoint
(where lz_init, coder.lz and lz_options are used) to check lz_init.is_some() or
match on lz_init and if None return LZMA_PROG_ERROR instead of unwrapping; if
Some(f) call f(::core::ptr::addr_of_mut!((*coder).lz), allocator, (*filters).id,
(*filters).options, ::core::ptr::addr_of_mut!(lz_options)) and propagate its
lzma_ret result as before. Ensure you reference the lz_init variable and the
returned LZMA_PROG_ERROR constant in your change.
- Around line 259-266: After calling lzma_free on (*coder).dict.buf, clear the
dictionary metadata so a subsequent failed allocation cannot leave stale values:
set (*coder).dict.buf = std::ptr::null_mut() and (*coder).dict.size = 0
immediately after lzma_free (or at minimum in the allocation-failure branch
before returning LZMA_MEM_ERROR). Update the block around lzma_free / lzma_alloc
that touches coder->dict so the code always zeroes dict.buf and dict.size when
the old buffer is freed or when lzma_alloc fails, referencing the symbols
(*coder).dict.buf, (*coder).dict.size, lzma_free, lzma_alloc and LZMA_MEM_ERROR.
- Around line 136-138: The EOF leftover check currently inspects
(*coder).temp.size after (*coder).this_finished which can misreport trailing
garbage; instead use the coder's unread-byte state to decide if residual data
exists. In the block that checks (*coder).this_finished, replace the
(*coder).temp.size test with a check of the unread-byte flag/state (the field
representing an unread byte on the coder struct) so the decoder treats a fully
consumed temp buffer as valid EOF; update the check around
(*coder).this_finished accordingly (use the coder.unread/unread_byte field
rather than temp.size).
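
The guarded-callback pattern the prompts above ask for (checking an `Option` callback instead of `.unwrap()`) can be sketched in isolation. The constants and callback signature below are simplified stand-ins for the real `lzma_ret` and `lz_init` types, not the crate's actual definitions:

```rust
// Simplified stand-ins for the real lzma_ret codes and callback type.
const LZMA_OK: i32 = 0;
const LZMA_PROG_ERROR: i32 = 11;

type InitFn = extern "C" fn(i32) -> i32;

// Guard the optional callback instead of calling `.unwrap()`, so a missing
// callback becomes a clean error code rather than a panic across FFI.
fn call_init(init: Option<InitFn>, arg: i32) -> i32 {
    match init {
        Some(f) => f(arg),
        None => LZMA_PROG_ERROR,
    }
}

extern "C" fn demo_init(_arg: i32) -> i32 {
    LZMA_OK
}
```

The same `match`-or-early-return shape applies wherever the c2rust output unwraps `lz_init`, `next.code`, or `next.update`.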

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3050f640-d0cb-4936-aabd-488f59f3e18c

📥 Commits

Reviewing files that changed from the base of the PR and between 4155aa2 and 4e55b4e.

📒 Files selected for processing (59)
  • Cargo.toml
  • benches/backend_comparison.rs
  • liblzma-rs-sys/src/lib.rs
  • liblzma-rs/src/check/sha256.rs
  • liblzma-rs/src/common/alone_decoder.rs
  • liblzma-rs/src/common/alone_encoder.rs
  • liblzma-rs/src/common/auto_decoder.rs
  • liblzma-rs/src/common/block_buffer_encoder.rs
  • liblzma-rs/src/common/block_decoder.rs
  • liblzma-rs/src/common/block_encoder.rs
  • liblzma-rs/src/common/common.rs
  • liblzma-rs/src/common/easy_preset.rs
  • liblzma-rs/src/common/file_info.rs
  • liblzma-rs/src/common/filter_decoder.rs
  • liblzma-rs/src/common/filter_encoder.rs
  • liblzma-rs/src/common/index.rs
  • liblzma-rs/src/common/index_decoder.rs
  • liblzma-rs/src/common/index_encoder.rs
  • liblzma-rs/src/common/index_hash.rs
  • liblzma-rs/src/common/lzip_decoder.rs
  • liblzma-rs/src/common/microlzma_decoder.rs
  • liblzma-rs/src/common/microlzma_encoder.rs
  • liblzma-rs/src/common/outqueue.rs
  • liblzma-rs/src/common/stream_decoder.rs
  • liblzma-rs/src/common/stream_decoder_mt.rs
  • liblzma-rs/src/common/stream_encoder.rs
  • liblzma-rs/src/common/stream_encoder_mt.rs
  • liblzma-rs/src/common/stream_flags_decoder.rs
  • liblzma-rs/src/common/string_conversion.rs
  • liblzma-rs/src/delta/delta_common.rs
  • liblzma-rs/src/delta/delta_decoder.rs
  • liblzma-rs/src/delta/delta_encoder.rs
  • liblzma-rs/src/lib.rs
  • liblzma-rs/src/lz/lz_decoder.rs
  • liblzma-rs/src/lz/lz_encoder.rs
  • liblzma-rs/src/lz/lz_encoder_mf.rs
  • liblzma-rs/src/lzma/lzma2_decoder.rs
  • liblzma-rs/src/lzma/lzma2_encoder.rs
  • liblzma-rs/src/lzma/lzma_decoder.rs
  • liblzma-rs/src/lzma/lzma_encoder.rs
  • liblzma-rs/src/lzma/lzma_encoder_optimum_fast.rs
  • liblzma-rs/src/lzma/lzma_encoder_optimum_normal.rs
  • liblzma-rs/src/lzma/lzma_encoder_presets.rs
  • liblzma-rs/src/simple/arm.rs
  • liblzma-rs/src/simple/arm64.rs
  • liblzma-rs/src/simple/armthumb.rs
  • liblzma-rs/src/simple/ia64.rs
  • liblzma-rs/src/simple/powerpc.rs
  • liblzma-rs/src/simple/riscv.rs
  • liblzma-rs/src/simple/simple_coder.rs
  • liblzma-rs/src/simple/simple_decoder.rs
  • liblzma-rs/src/simple/simple_encoder.rs
  • liblzma-rs/src/simple/sparc.rs
  • liblzma-rs/src/simple/x86.rs
  • liblzma-rs/src/types.rs
  • src/lib.rs
  • src/read.rs
  • src/write.rs
  • tests/sys_equivalence.rs

Comment on lines +40 to +66
(*coder).sequence = SEQ_CODE;
if *in_0.offset(*in_pos as isize) == 0xfd {
let ret_: lzma_ret = lzma_stream_decoder_init(
::core::ptr::addr_of_mut!((*coder).next),
allocator,
(*coder).memlimit,
(*coder).flags,
);
if ret_ != LZMA_OK {
return ret_;
}
} else {
let ret__0: lzma_ret = lzma_alone_decoder_init(
::core::ptr::addr_of_mut!((*coder).next),
allocator,
(*coder).memlimit,
true,
);
if ret__0 != LZMA_OK {
return ret__0;
}
if (*coder).flags & LZMA_TELL_NO_CHECK as u32 != 0 {
return LZMA_NO_CHECK;
}
if (*coder).flags & LZMA_TELL_ANY_CHECK as u32 != 0 {
return LZMA_GET_CHECK;
}

⚠️ Potential issue | 🔴 Critical

Don’t advance to SEQ_CODE until the nested decoder is initialized.

sequence is updated before either init call can fail. If initialization returns LZMA_MEMLIMIT_ERROR or LZMA_MEM_ERROR, a retry drops into Line 80 with next.code == None and panics across the extern "C" boundary.

🔧 Suggested fix
-            (*coder).sequence = SEQ_CODE;
             if *in_0.offset(*in_pos as isize) == 0xfd {
                 let ret_: lzma_ret = lzma_stream_decoder_init(
                     ::core::ptr::addr_of_mut!((*coder).next),
                     allocator,
                     (*coder).memlimit,
                     (*coder).flags,
                 );
                 if ret_ != LZMA_OK {
                     return ret_;
                 }
+                (*coder).sequence = SEQ_CODE;
             } else {
                 let ret__0: lzma_ret = lzma_alone_decoder_init(
                     ::core::ptr::addr_of_mut!((*coder).next),
                     allocator,
                     (*coder).memlimit,
                     true,
                 );
                 if ret__0 != LZMA_OK {
                     return ret__0;
                 }
+                (*coder).sequence = SEQ_CODE;
                 if (*coder).flags & LZMA_TELL_NO_CHECK as u32 != 0 {
                     return LZMA_NO_CHECK;
                 }
                 if (*coder).flags & LZMA_TELL_ANY_CHECK as u32 != 0 {
                     return LZMA_GET_CHECK;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/auto_decoder.rs` around lines 40 - 66, The code sets
(*coder).sequence = SEQ_CODE before attempting to initialize the nested decoder,
which can leave sequence advanced when lzma_stream_decoder_init or
lzma_alone_decoder_init fail; change the control flow in the auto-decoder
initialization so that (*coder).sequence is only assigned SEQ_CODE after the
chosen init call returns LZMA_OK (i.e., perform the init first using
lzma_stream_decoder_init or lzma_alone_decoder_init on
::core::ptr::addr_of_mut!((*coder).next), check the ret_/ret__0 for LZMA_OK, and
only then set (*coder).sequence = SEQ_CODE), ensuring failures return
immediately without advancing sequence and avoiding a panic when next.code is
None.

Comment on lines +36 to +124
let current_block_40: u64 = match (*coder).sequence {
0 => {
let in_start: size_t = *in_pos;
let out_start: size_t = *out_pos;
let in_stop: size_t = (*in_pos).wrapping_add(
(if (in_size.wrapping_sub(*in_pos) as lzma_vli)
< (*coder)
.compressed_limit
.wrapping_sub((*coder).compressed_size)
{
in_size.wrapping_sub(*in_pos) as lzma_vli
} else {
(*coder)
.compressed_limit
.wrapping_sub((*coder).compressed_size)
}) as size_t,
);
let out_stop: size_t = (*out_pos).wrapping_add(
(if (out_size.wrapping_sub(*out_pos) as lzma_vli)
< (*coder)
.uncompressed_limit
.wrapping_sub((*coder).uncompressed_size)
{
out_size.wrapping_sub(*out_pos) as lzma_vli
} else {
(*coder)
.uncompressed_limit
.wrapping_sub((*coder).uncompressed_size)
}) as size_t,
);
let ret: lzma_ret = (*coder).next.code.unwrap()(
(*coder).next.coder,
allocator,
in_0,
in_pos,
in_stop,
out,
out_pos,
out_stop,
action,
);
let in_used: size_t = (*in_pos).wrapping_sub(in_start);
let out_used: size_t = (*out_pos).wrapping_sub(out_start);
(*coder).compressed_size = (*coder).compressed_size.wrapping_add(in_used as lzma_vli);
(*coder).uncompressed_size = (*coder)
.uncompressed_size
.wrapping_add(out_used as lzma_vli);
if ret == LZMA_OK {
let comp_done: bool = (*coder).compressed_size == (*(*coder).block).compressed_size;
let uncomp_done: bool =
(*coder).uncompressed_size == (*(*coder).block).uncompressed_size;
if comp_done && uncomp_done {
return LZMA_DATA_ERROR;
}
if comp_done && *out_pos < out_size {
return LZMA_DATA_ERROR;
}
if uncomp_done && *in_pos < in_size {
return LZMA_DATA_ERROR;
}
}
if !(*coder).ignore_check && out_used > 0 {
lzma_check_update(
::core::ptr::addr_of_mut!((*coder).check),
(*(*coder).block).check,
out.offset(out_start as isize),
out_used,
);
}
if ret != LZMA_STREAM_END {
return ret;
}
if !is_size_valid((*coder).compressed_size, (*(*coder).block).compressed_size)
|| !is_size_valid(
(*coder).uncompressed_size,
(*(*coder).block).uncompressed_size,
)
{
return LZMA_DATA_ERROR;
}
(*(*coder).block).compressed_size = (*coder).compressed_size;
(*(*coder).block).uncompressed_size = (*coder).uncompressed_size;
(*coder).sequence = SEQ_PADDING;
17473121293339793080
}
1 => 17473121293339793080,
2 => 9393557385011460022,
_ => return LZMA_PROG_ERROR,
};

🧹 Nitpick | 🔵 Trivial

C2rust control flow pattern acceptable for initial port.

Similar to block_encoder.rs, the magic number pattern (17473121293339793080, 9393557385011460022) for current_block_40 simulates goto statements. Consider refactoring to idiomatic Rust in future cleanup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_decoder.rs` around lines 36 - 124, The code uses
opaque magic-number values assigned to current_block_40 (17473121293339793080,
9393557385011460022) to emulate gotos based on (*coder).sequence; replace this
pattern with idiomatic control flow: remove current_block_40 and the magic
constants in liblzma-rs/src/common/block_decoder.rs and instead use a match or
loop on (*coder).sequence (and set (*coder).sequence = SEQ_PADDING where
appropriate) or define an enum for sequence states and match over that,
preserving the existing branches and early returns (LZMA_DATA_ERROR, ret
returns, etc.); ensure the same behaviors around the inner coder call
(next.code.unwrap()), size updates (compressed_size/uncompressed_size),
lzma_check_update, and final sequence transition to SEQ_PADDING remain unchanged
while eliminating the numeric placeholders.
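As a hedged illustration of the refactor this prompt asks for, the numeric `current_block` labels can become an enum that each arm advances explicitly. The state names below are assumptions for the sketch, not the crate's actual types:

```rust
// Hypothetical state enum replacing c2rust's numeric goto labels.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Sequence {
    Code,
    Padding,
    Check,
}

// Each arm returns the next state by name instead of jumping to a
// magic-number label like 17473121293339793080.
fn advance(seq: Sequence, padding_done: bool) -> Sequence {
    match seq {
        Sequence::Code => Sequence::Padding,
        Sequence::Padding if padding_done => Sequence::Check,
        other => other,
    }
}
```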

Comment thread liblzma-rs/src/common/block_encoder.rs Outdated
Comment on lines +135 to +177
#[no_mangle]
pub unsafe extern "C" fn lzma_block_encoder_init(
next: *mut lzma_next_coder,
allocator: *const lzma_allocator,
block: *mut lzma_block,
) -> lzma_ret {
if core::mem::transmute::<
Option<
unsafe extern "C" fn(
*mut lzma_next_coder,
*const lzma_allocator,
*mut lzma_block,
) -> lzma_ret,
>,
uintptr_t,
>(Some(
lzma_block_encoder_init
as unsafe extern "C" fn(
*mut lzma_next_coder,
*const lzma_allocator,
*mut lzma_block,
) -> lzma_ret,
)) != (*next).init
{
lzma_next_end(next, allocator);
}
(*next).init = core::mem::transmute::<
Option<
unsafe extern "C" fn(
*mut lzma_next_coder,
*const lzma_allocator,
*mut lzma_block,
) -> lzma_ret,
>,
uintptr_t,
>(Some(
lzma_block_encoder_init
as unsafe extern "C" fn(
*mut lzma_next_coder,
*const lzma_allocator,
*mut lzma_block,
) -> lzma_ret,
));

🧹 Nitpick | 🔵 Trivial

Verbose transmute pattern for init guard is correct but could be simplified.

The pattern using core::mem::transmute to compare function pointers for the init guard is a direct C2rust translation. It's functionally correct for detecting re-initialization but quite verbose. Consider extracting this pattern into a helper macro or function in future refactoring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_encoder.rs` around lines 135 - 177, The init
guard currently uses a verbose core::mem::transmute of an Option-wrapped
function pointer; simplify by casting the function pointer directly to uintptr_t
and use that for comparison and assignment (replace the transmute expressions
with direct casts like lzma_block_encoder_init as uintptr_t), keeping the same
logic around (*next).init and the lzma_next_end(next, allocator) call and
assigning (*next).init afterward.
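A minimal sketch of the simpler cast, assuming the guard only needs address equality (function-pointer comparison is stable within one build, though rustc may merge identical bodies across codegen units, so this stays a judgment call for the real crate):

```rust
// Store the init function's address as usize, as `(*next).init` does,
// and compare with a plain cast instead of transmuting Option<fn>.
extern "C" fn init_a() -> i32 {
    1
}
extern "C" fn init_b() -> i32 {
    2
}

fn needs_reinit(stored: usize, current: extern "C" fn() -> i32) -> bool {
    stored != current as usize
}
```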

Comment on lines +336 to +347
let mut file_size: lzma_vli = compressed_base
.wrapping_add((2 * LZMA_STREAM_HEADER_SIZE) as lzma_vli)
.wrapping_add(stream_padding)
.wrapping_add(vli_ceil4(unpadded_sum));
if file_size > LZMA_VLI_MAX {
return LZMA_VLI_UNKNOWN;
}
file_size = file_size.wrapping_add(index_size(record_count, index_list_size));
if file_size > LZMA_VLI_MAX {
return LZMA_VLI_UNKNOWN;
}
file_size

⚠️ Potential issue | 🟠 Major

Reject arithmetic overflow before the LZMA_VLI_MAX checks.

The additions here can overflow u64 before the > LZMA_VLI_MAX guard runs, which can wrap an oversized file size back into a small valid-looking number instead of returning LZMA_VLI_UNKNOWN.

Suggested fix
 unsafe extern "C" fn index_file_size(
     compressed_base: lzma_vli,
     unpadded_sum: lzma_vli,
     record_count: lzma_vli,
     index_list_size: lzma_vli,
     stream_padding: lzma_vli,
 ) -> lzma_vli {
-    let mut file_size: lzma_vli = compressed_base
-        .wrapping_add((2 * LZMA_STREAM_HEADER_SIZE) as lzma_vli)
-        .wrapping_add(stream_padding)
-        .wrapping_add(vli_ceil4(unpadded_sum));
-    if file_size > LZMA_VLI_MAX {
-        return LZMA_VLI_UNKNOWN;
-    }
-    file_size = file_size.wrapping_add(index_size(record_count, index_list_size));
-    if file_size > LZMA_VLI_MAX {
-        return LZMA_VLI_UNKNOWN;
-    }
-    file_size
+    let mut file_size = match compressed_base
+        .checked_add((2 * LZMA_STREAM_HEADER_SIZE) as lzma_vli)
+        .and_then(|v| v.checked_add(stream_padding))
+        .and_then(|v| v.checked_add(vli_ceil4(unpadded_sum)))
+    {
+        Some(v) if v <= LZMA_VLI_MAX => v,
+        _ => return LZMA_VLI_UNKNOWN,
+    };
+
+    file_size = match file_size.checked_add(index_size(record_count, index_list_size)) {
+        Some(v) if v <= LZMA_VLI_MAX => v,
+        _ => return LZMA_VLI_UNKNOWN,
+    };
+
+    file_size
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/index.rs` around lines 336 - 347, The current
cumulative additions can overflow before the LZMA_VLI_MAX checks; replace the
unchecked adds with fallible checks (use checked_add) when building file_size
from compressed_base, the header sizes, stream_padding, vli_ceil4(unpadded_sum),
and the result of index_size(record_count, index_list_size), and if any
checked_add returns None immediately return LZMA_VLI_UNKNOWN; update the logic
around the file_size variable and comparisons so you only compare to
LZMA_VLI_MAX after successful checked_adds to avoid wrapped values passing the
guard.

Comment thread liblzma-rs/src/common/lzip_decoder.rs Outdated
Comment on lines +297 to +298
#[no_mangle]
pub unsafe extern "C" fn lzma_lzip_decoder_init(

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Cargo editions =="
fd '^Cargo\.toml$' -x sh -c '
  printf "\n-- %s --\n" "$1"
  sed -n "1,80p" "$1" | sed -n "/^\[package\]/,/^\[/p"
' sh {}

echo
echo "== FFI entry points in liblzma-rs/src/common/lzip_decoder.rs =="
python - <<'PY'
from pathlib import Path

path = Path("liblzma-rs/src/common/lzip_decoder.rs")
lines = path.read_text().splitlines()
targets = (
    'fn lzma_lzip_decoder_init',
    'fn lzma_lzip_decoder(',
)
for target in targets:
    for i, line in enumerate(lines, 1):
        if target in line:
            start = max(1, i - 4)
            end = min(len(lines), i + 4)
            print(f"\n-- {target} @ lines {start}-{end} --")
            for n in range(start, end + 1):
                print(f"{n}: {lines[n - 1]}")
            break
PY

Repository: youknowone/xz2-rs

Length of output: 2764


Add #[no_mangle] to lzma_lzip_decoder for C FFI linkage.

The lzma_lzip_decoder_init function at line 297 is marked with #[no_mangle], but lzma_lzip_decoder at line 392 is not. Without this attribute, the C code cannot link to the function using the expected public API name. Both extern "C" functions intended for FFI should have #[no_mangle].

Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_lzip_decoder(
     strm: *mut lzma_stream,
     memlimit: u64,
     flags: u32,
 ) -> lzma_ret {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/lzip_decoder.rs` around lines 297 - 298, The function
lzma_lzip_decoder is missing the #[no_mangle] attribute required for C FFI
linkage; add #[no_mangle] immediately above the lzma_lzip_decoder definition
(which is declared as pub unsafe extern "C" fn lzma_lzip_decoder(...)) so the
symbol is exported with the expected C name, keeping the existing pub unsafe
extern "C" signature intact to match lzma_lzip_decoder_init.

Comment thread liblzma-rs/src/common/stream_decoder.rs Outdated
Comment on lines +29 to +39
unsafe extern "C" fn stream_decoder_reset(
coder: *mut lzma_stream_coder,
allocator: *const lzma_allocator,
) -> lzma_ret {
(*coder).index_hash = lzma_index_hash_init((*coder).index_hash, allocator);
if (*coder).index_hash.is_null() {
return LZMA_MEM_ERROR;
}
(*coder).sequence = SEQ_STREAM_HEADER;
(*coder).pos = 0;
LZMA_OK

⚠️ Potential issue | 🔴 Critical

Reset stream_flags.check when a new stream starts.

stream_decoder_get_check() exposes this field immediately. Because stream_decoder_reset() leaves it unchanged, the first call can read uninitialized memory and concatenated streams can report the previous stream's check until the next header is parsed.

🔧 Suggested fix
 unsafe extern "C" fn stream_decoder_reset(
     coder: *mut lzma_stream_coder,
     allocator: *const lzma_allocator,
 ) -> lzma_ret {
     (*coder).index_hash = lzma_index_hash_init((*coder).index_hash, allocator);
     if (*coder).index_hash.is_null() {
         return LZMA_MEM_ERROR;
     }
     (*coder).sequence = SEQ_STREAM_HEADER;
     (*coder).pos = 0;
+    (*coder).stream_flags.check = LZMA_CHECK_NONE;
     LZMA_OK
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_decoder.rs` around lines 29 - 39,
stream_decoder_reset currently leaves the stream_flags.check field uninitialized
causing stream_decoder_get_check to expose garbage or previous stream data;
update stream_decoder_reset to explicitly initialize/reset
(*coder).stream_flags.check to a safe default (e.g., zero or the library's
"unknown/none" check constant) whenever starting a new stream so concatenated
streams don't inherit the prior check value and the first call to
stream_decoder_get_check is well-defined.

Comment thread liblzma-rs/src/common/stream_encoder.rs Outdated
Comment on lines +424 to +449
pub unsafe extern "C" fn lzma_stream_encoder(
strm: *mut lzma_stream,
filters: *const lzma_filter,
check: lzma_check,
) -> lzma_ret {
let ret_: lzma_ret = lzma_strm_init(strm);
if ret_ != LZMA_OK {
return ret_;
}
let ret__0: lzma_ret = stream_encoder_init(
::core::ptr::addr_of_mut!((*(*strm).internal).next),
(*strm).allocator,
filters,
check,
);
if ret__0 != LZMA_OK {
lzma_end(strm);
return ret__0;
}
(*(*strm).internal).supported_actions[LZMA_RUN as usize] = true;
(*(*strm).internal).supported_actions[LZMA_SYNC_FLUSH as usize] = true;
(*(*strm).internal).supported_actions[LZMA_FULL_FLUSH as usize] = true;
(*(*strm).internal).supported_actions[LZMA_FULL_BARRIER as usize] = true;
(*(*strm).internal).supported_actions[LZMA_FINISH as usize] = true;
LZMA_OK
}

⚠️ Potential issue | 🟠 Major

Add #[no_mangle] to lzma_stream_encoder for C ABI visibility.

The function is declared pub unsafe extern "C" but lacks #[no_mangle], causing symbol name mangling. C callers won't find the function with the expected name.

🔧 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_stream_encoder(
     strm: *mut lzma_stream,
     filters: *const lzma_filter,
     check: lzma_check,
 ) -> lzma_ret {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_encoder.rs` around lines 424 - 449, Add the
#[no_mangle] attribute to the exported C ABI function lzma_stream_encoder so the
symbol is not Rust-mangled and C callers can link to "lzma_stream_encoder";
locate the pub unsafe extern "C" fn lzma_stream_encoder(...) declaration and
place #[no_mangle] directly above it (also scan other similar extern "C"
functions in this module to ensure they have #[no_mangle] if they are intended
for C visibility).

Comment on lines +168 to +192
pub unsafe extern "C" fn lzma_mf_hc3_skip(mf: *mut lzma_mf, mut amount: u32) {
loop {
if mf_avail(mf) < 3 {
move_pending(mf);
} else {
let cur: *const u8 = mf_ptr(mf);
let pos: u32 = (*mf).read_pos.wrapping_add((*mf).offset);
let temp: u32 = lzma_crc32_table[0][*cur as usize] ^ *cur.offset(1) as u32;
let hash_2_value: u32 = temp & HASH_2_MASK as u32;
let hash_value: u32 = (temp ^ (*cur.offset(2) as u32) << 8) & (*mf).hash_mask;
let cur_match: u32 = *(*mf)
.hash
.offset((FIX_3_HASH_SIZE as u32).wrapping_add(hash_value) as isize);
*(*mf).hash.offset(hash_2_value as isize) = pos;
*(*mf)
.hash
.offset((FIX_3_HASH_SIZE as u32).wrapping_add(hash_value) as isize) = pos;
*(*mf).son.offset((*mf).cyclic_pos as isize) = cur_match;
move_pos(mf);
}
amount -= 1;
if amount == 0 {
break;
}
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

wc -l liblzma-rs/src/lz/lz_encoder_mf.rs

Repository: youknowone/xz2-rs

Length of output: 99


🏁 Script executed:

# Get an overview of the file structure around the claimed lines
sed -n '160,195p' liblzma-rs/src/lz/lz_encoder_mf.rs | cat -n

Repository: youknowone/xz2-rs

Length of output: 1630


🏁 Script executed:

# Check the other four ranges mentioned
sed -n '265,305p' liblzma-rs/src/lz/lz_encoder_mf.rs | cat -n

Repository: youknowone/xz2-rs

Length of output: 1884


🏁 Script executed:

# Third range
sed -n '442,490p' liblzma-rs/src/lz/lz_encoder_mf.rs | cat -n

Repository: youknowone/xz2-rs

Length of output: 2093


🏁 Script executed:

# Fourth range
sed -n '543,598p' liblzma-rs/src/lz/lz_encoder_mf.rs | cat -n

Repository: youknowone/xz2-rs

Length of output: 2541


🏁 Script executed:

# Fifth range
sed -n '675,737p' liblzma-rs/src/lz/lz_encoder_mf.rs | cat -n

Repository: youknowone/xz2-rs

Length of output: 2737


Handle amount == 0 before entering these skip loops.

Each of these five skip helpers executes the loop body at least once, even when amount is zero. The first amount -= 1 then underflows (to u32::MAX), which panics in debug builds and causes approximately 2^32 iterations in release builds. Use while amount != 0 (or return early on zero) to fix.

Affected functions
  • lzma_mf_hc3_skip (lines 168–192)
  • lzma_mf_hc4_skip (lines 270–302)
  • lzma_mf_bt2_skip (lines 447–485)
  • lzma_mf_bt3_skip (lines 548–593)
  • lzma_mf_bt4_skip (lines 680–732)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder_mf.rs` around lines 168 - 192, The skip helpers
(lzma_mf_hc3_skip, lzma_mf_hc4_skip, lzma_mf_bt2_skip, lzma_mf_bt3_skip,
lzma_mf_bt4_skip) run their loop body unconditionally and then do amount -= 1,
causing underflow when amount == 0; fix each by returning early or changing the
loop to while amount != 0 so the body is not executed when amount is zero (e.g.,
replace the current loop { ... amount -= 1; if amount==0 break; } pattern with
while amount != 0 { ... amount -= 1 } or an initial if amount == 0 { return }
before the existing loop), keeping the existing inner logic and references to
mf_avail, move_pending, move_pos, hash/son updates, and cyclic_pos unchanged.
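A hedged, self-contained sketch of the fix; the counter here is a stand-in for the real hash/son update work, not the actual match-finder logic:

```rust
// `while amount != 0` never executes the body, and never decrements, when
// amount starts at zero, unlike the c2rust `loop { ...; amount -= 1; ... }`
// pattern, which underflows on amount == 0.
fn skip(mut amount: u32) -> u32 {
    let mut processed = 0u32;
    while amount != 0 {
        processed += 1; // stand-in for the hash/son update and move_pos()
        amount -= 1;
    }
    processed
}
```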

Comment on lines +471 to +475
(*coder).lz.coder = core::ptr::null_mut();
(*coder).lz.code = None;
(*coder).lz.end = None;
(*coder).lz.options_update = None;
(*coder).lz.set_out_limit = None;

⚠️ Potential issue | 🔴 Critical

Reset and revalidate the inner encoder vtable on reuse.

Lines 471-475 only run on the first allocation. Reinitializing an existing lzma_coder with a different, or partially failing, lz_init can leave stale lz.code/lz.end pointers behind, and lz_encode() will then dispatch through them or panic on Line 131. Please clear the inner callback fields before invoking lz_init, then refuse to return LZMA_OK if the callback did not install lz.code.

Also applies to: 506-515

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder.rs` around lines 471 - 475, The inner encoder
vtable fields (lz.code, lz.end, lz.options_update, lz.set_out_limit) on
lzma_coder must be cleared on reuse before calling lz_init to avoid dispatching
stale function pointers; update the reuse path in the initializer that currently
sets (*coder).lz.coder = core::ptr::null_mut() so it also sets (*coder).lz.code
= None, (*coder).lz.end = None, (*coder).lz.options_update = None and
(*coder).lz.set_out_limit = None prior to invoking lz_init, then after lz_init
validate that lz.code was installed and treat the call as a failure (return
non-LZMA_OK) if lz.code remains None (so lz_encode cannot be called with stale
callbacks); apply the same change to the analogous reuse block around the second
region mentioned (the 506-515 area) and ensure lz_init error handling returns a
failure instead of LZMA_OK when callbacks are not set.
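The clear-then-validate flow this prompt describes can be sketched with simplified types; `LzVtable` and the init closures below are illustrative assumptions, not the crate's real `lzma_lz_encoder` struct or `lz_init` signature:

```rust
// Simplified stand-in for the inner encoder vtable.
#[derive(Default)]
struct LzVtable {
    code: Option<fn()>,
    end: Option<fn()>,
}

// Clear stale callbacks before re-running init, then refuse to report
// success if init did not install `code`.
fn reinit(vt: &mut LzVtable, init: fn(&mut LzVtable)) -> Result<(), ()> {
    vt.code = None;
    vt.end = None;
    init(vt);
    if vt.code.is_none() {
        return Err(()); // stale or missing vtable would be dispatched later
    }
    Ok(())
}
```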


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (2)
liblzma-rs/src/common/common.rs (2)

124-150: ⚠️ Potential issue | 🔴 Critical

Reject invalid cursor positions before doing the wrapping_sub math.

If *in_pos > in_size or *out_pos > out_size, the availability calculation wraps around to a huge value, and copy_nonoverlapping will read/write out of bounds. This exported helper should validate cursors before proceeding.

Proposed fix
 pub unsafe extern "C" fn lzma_bufcpy(
     in_0: *const u8,
     in_pos: *mut size_t,
     in_size: size_t,
     out: *mut u8,
     out_pos: *mut size_t,
     out_size: size_t,
 ) -> size_t {
+    if *in_pos > in_size || *out_pos > out_size {
+        return 0;
+    }
     let in_avail: size_t = in_size.wrapping_sub(*in_pos);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/common.rs` around lines 124 - 150, Validate cursor
positions in lzma_bufcpy before doing wrapping_sub: check the dereferenced
in_pos and out_pos against in_size and out_size (e.g., if *in_pos > in_size or
*out_pos > out_size) and return 0 (no bytes copied) or otherwise abort early
instead of performing the wrapping_sub math; do this check before computing
in_avail/out_avail and before calling core::ptr::copy_nonoverlapping to prevent
out-of-bounds reads/writes.

168-186: ⚠️ Potential issue | 🔴 Critical

Replace .unwrap() with proper error handling on (*next).update.

The update field is Option-typed (per lzma_next_coder_s definition) and is set to None in multiple places throughout the codebase. Calling .unwrap() on None will panic across the FFI boundary, which is undefined behavior.

Proposed fix
     if (*reversed_filters).id == LZMA_VLI_UNKNOWN {
         return LZMA_OK;
     }
-    (*next).update.unwrap()(
+    let Some(update_fn) = (*next).update else {
+        return LZMA_PROG_ERROR;
+    };
+    update_fn(
         (*next).coder,
         allocator,
         core::ptr::null(),
         reversed_filters,
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/common.rs` around lines 168 - 186, The function
lzma_next_filter_update currently calls (*next).update.unwrap() which can panic
if update is None; change it to check the Option before calling: inspect
(*next).update (the update field on lzma_next_coder) and if it is Some(func)
call func((*next).coder, allocator, core::ptr::null(), reversed_filters) and
return its lzma_ret; if it is None return an appropriate error such as
LZMA_PROG_ERROR (to avoid panicking across the FFI boundary). Ensure you perform
the Option check in lzma_next_filter_update and call the unwrapped function
pointer only when present.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@liblzma-rs/src/common/common.rs`:
- Around line 269-307: Replace the raw integer literals in the sequence match
and return values with the corresponding named constants to improve readability
and maintainability: use a named constant for the initial sequence value (e.g.,
ISEQ_RUN instead of 0) when matching (*(*strm).internal).sequence, and replace
the numeric action checks (1,2,3,4) with their LZMA action constants
(LZMA_SYNC_FLUSH, LZMA_FULL_FLUSH, LZMA_FINISH, LZMA_FULL_BARRIER) where
appropriate; also replace numeric return literals like 0, 101, 12, 1 with the
LZMA return codes (e.g., LZMA_OK, LZMA_TIMED_OUT, LZMA_GET_CHECK,
LZMA_PROG_ERROR) so the match arms that currently reference sequence values
(0..6) and the early returns use the named constants instead of magic
numbers—update comparisons against (*(*strm).internal).sequence and the return
statements in this block accordingly.
- Around line 212-246: The lzma_strm_init function allocates and zeroes some
fields of the lzma_internal struct but does not set avail_in; add an explicit
initialization of (*(*strm).internal).avail_in = 0 (or equivalent) in
lzma_strm_init after allocating/initializing (*strm).internal (near where
supported_actions is zeroed and sequence/allow_buf_error are set) so that
avail_in is deterministically initialized (this complements lzma_code which
later writes/reads avail_in).
- Around line 395-418: Both lzma_get_progress and lzma_get_check must validate
input pointers before any dereference: check that strm is non-null and
(*strm).internal is non-null (similar to lzma_memusage/lzma_memlimit_get). For
lzma_get_progress, if strm or internal is null, set *progress_in and
*progress_out to 0 and return without dereferencing; otherwise proceed with
existing logic (including checking next.get_progress). For lzma_get_check, if
strm or internal is null return LZMA_CHECK_NONE immediately; otherwise perform
the existing next.get_check check and call. Ensure all early-return checks occur
outside any dereference and keep the unsafe blocks as required.

---

Duplicate comments:
In `@liblzma-rs/src/common/common.rs`:
- Around line 124-150: Validate cursor positions in lzma_bufcpy before doing
wrapping_sub: check the dereferenced in_pos and out_pos against in_size and
out_size (e.g., if *in_pos > in_size or *out_pos > out_size) and return 0 (no
bytes copied) or otherwise abort early instead of performing the wrapping_sub
math; do this check before computing in_avail/out_avail and before calling
core::ptr::copy_nonoverlapping to prevent out-of-bounds reads/writes.
- Around line 168-186: The function lzma_next_filter_update currently calls
(*next).update.unwrap() which can panic if update is None; change it to check
the Option before calling: inspect (*next).update (the update field on
lzma_next_coder) and if it is Some(func) call func((*next).coder, allocator,
core::ptr::null(), reversed_filters) and return its lzma_ret; if it is None
return an appropriate error such as LZMA_PROG_ERROR (to avoid panicking across
the FFI boundary). Ensure you perform the Option check in
lzma_next_filter_update and call the unwrapped function pointer only when
present.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 6627a1bc-f3cd-40cb-8aa2-e6cde80813e5

📥 Commits

Reviewing files that changed from the base of the PR and between 4e55b4e and 325228d.

📒 Files selected for processing (2)
  • liblzma-rs/src/common/common.rs
  • liblzma-rs/src/types.rs


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 34

♻️ Duplicate comments (47)
liblzma-rs/src/common/alone_encoder.rs (2)

200-220: ⚠️ Potential issue | 🟠 Major

Add #[no_mangle] to exported C entrypoint.

lzma_alone_encoder is a public extern "C" function but lacks #[no_mangle], causing C callers expecting this symbol to fail at link time.

🔧 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_alone_encoder(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/alone_encoder.rs` around lines 200 - 220, The exported
C entrypoint lzma_alone_encoder is declared as pub unsafe extern "C" fn but
missing the #[no_mangle] attribute, causing the symbol to be mangled and
unavailable to C callers; add the #[no_mangle] attribute immediately above the
lzma_alone_encoder declaration (keeping the existing pub unsafe extern "C" fn
signature and body) so the symbol is emitted with the expected unmangled name
for C linkage.

146-154: ⚠️ Potential issue | 🔴 Critical

Guard options before dereferencing.

Lines 146-148 and 152 dereference options unconditionally. Add a null check to avoid FFI-side crashes.

🐛 Proposed fix
 unsafe extern "C" fn alone_encoder_init(
     next: *mut lzma_next_coder,
     allocator: *const lzma_allocator,
     options: *const lzma_options_lzma,
 ) -> lzma_ret {
+    if options.is_null() {
+        return LZMA_PROG_ERROR;
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/alone_encoder.rs` around lines 146 - 154, The code
dereferences the FFI pointer options (used in lzma_lzma_lclppb_encode call and
when checking (*options).dict_size) without null checking; add a guard at the
start of this block to return LZMA_OPTIONS_ERROR if options.is_null() (or
options == null) before calling lzma_lzma_lclppb_encode or accessing
(*options).dict_size, so lzma_lzma_lclppb_encode, coder->header access and the
dict size check are only performed when options is non-null.
liblzma-rs/src/common/auto_decoder.rs (1)

29-60: ⚠️ Potential issue | 🟠 Major

Sequence state set before decoder initialization completes.

Line 33 sets (*coder).sequence = SEQ_CODE before the decoder init calls. If lzma_stream_decoder_init or lzma_alone_decoder_init fails, the sequence is left at SEQ_CODE but next.code may be None. While lines 73-75 guard against None with let Some(code) = ... else { return LZMA_PROG_ERROR }, the state machine logic is still incorrect: a failed init should keep sequence at SEQ_INIT for proper retry semantics.

🔧 Suggested fix: move sequence update after successful init
         0 => {
             if *in_pos >= in_size {
                 return LZMA_OK;
             }
-            (*coder).sequence = SEQ_CODE;
             if *in_0.offset(*in_pos as isize) == 0xfd {
                 let ret_: lzma_ret = lzma_stream_decoder_init(
                     ::core::ptr::addr_of_mut!((*coder).next),
                     allocator,
                     (*coder).memlimit,
                     (*coder).flags,
                 );
                 if ret_ != LZMA_OK {
                     return ret_;
                 }
+                (*coder).sequence = SEQ_CODE;
             } else {
                 let ret__0: lzma_ret = lzma_alone_decoder_init(
                     ::core::ptr::addr_of_mut!((*coder).next),
                     allocator,
                     (*coder).memlimit,
                     true,
                 );
                 if ret__0 != LZMA_OK {
                     return ret__0;
                 }
+                (*coder).sequence = SEQ_CODE;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/auto_decoder.rs` around lines 29 - 60, The sequence is
being set to SEQ_CODE before initializing the actual decoder, so if
lzma_stream_decoder_init or lzma_alone_decoder_init fails the state machine is
inconsistent; move the assignment to (*coder).sequence = SEQ_CODE so it occurs
only after the init call returns LZMA_OK (for both lzma_stream_decoder_init and
lzma_alone_decoder_init), leaving (*coder).sequence as SEQ_INIT on failure, and
ensure any early returns propagate the error without changing sequence;
reference the fields/methods (*coder).sequence, SEQ_CODE, SEQ_INIT,
lzma_stream_decoder_init, lzma_alone_decoder_init, and (*coder).next when making
the change.
liblzma-rs/src/lz/lz_encoder.rs (2)

84-88: 🧹 Nitpick | 🔵 Trivial

No-op write_bytes call (count is 0).

core::ptr::write_bytes(..., 0) with a count of 0 bytes does nothing. This is a C2Rust artifact and can be removed for clarity.

♻️ Proposed cleanup
     (*coder).mf.write_pos = write_pos as u32;
-    core::ptr::write_bytes(
-        (*coder).mf.buffer.offset(write_pos as isize) as *mut u8,
-        0 as u8,
-        0,
-    );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder.rs` around lines 84 - 88, The call to
core::ptr::write_bytes((*coder).mf.buffer.offset(write_pos as isize) as *mut u8,
0 as u8, 0) is a no-op because the count is 0; remove that statement to clean up
the C2Rust artifact. Locate the write_bytes invocation referencing coder,
mf.buffer and write_pos in lz_encoder.rs and delete the entire call expression
(ensuring no other logic depends on its side effects).

515-524: ⚠️ Potential issue | 🟠 Major

Add #[no_mangle] to lzma_mf_is_supported for C linkage.

This function is pub extern "C" but lacks #[no_mangle], so C callers expecting the symbol lzma_mf_is_supported will get a linker error.

🔧 Proposed fix
+#[no_mangle]
 pub extern "C" fn lzma_mf_is_supported(mf: lzma_match_finder) -> lzma_bool {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder.rs` around lines 515 - 524, The function
lzma_mf_is_supported is declared pub extern "C" but missing the #[no_mangle]
attribute, causing linker errors for C callers; add the #[no_mangle] attribute
immediately above the pub extern "C" fn lzma_mf_is_supported(...) declaration so
the symbol is exported as exactly "lzma_mf_is_supported" for C linkage, and keep
the existing match logic unchanged; ensure no other attributes collide with
#[no_mangle].
liblzma-rs/src/common/filter_encoder.rs (1)

404-407: ⚠️ Potential issue | 🟠 Major

Return LZMA_OPTIONS_ERROR for unsupported (but valid) filter IDs in lzma_properties_encode.

Line 406 currently returns LZMA_PROG_ERROR, which is inconsistent with lzma_properties_size for the same lookup failure mode.

🐛 Suggested fix
     let fe: *const lzma_filter_encoder = encoder_find((*filter).id) as *const lzma_filter_encoder;
     if fe.is_null() {
-        return LZMA_PROG_ERROR;
+        return if (*filter).id <= LZMA_VLI_MAX {
+            LZMA_OPTIONS_ERROR
+        } else {
+            LZMA_PROG_ERROR
+        };
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/filter_encoder.rs` around lines 404 - 407, In
lzma_properties_encode, the lookup of the encoder via encoder_find((*filter).id)
returns null for unsupported (but valid) filter IDs; instead of returning
LZMA_PROG_ERROR you should return LZMA_OPTIONS_ERROR to match
lzma_properties_size behavior—locate the null-check around the
lzma_filter_encoder pointer (fe) in lzma_properties_encode and replace the
LZMA_PROG_ERROR return with LZMA_OPTIONS_ERROR so unsupported filter IDs produce
the correct error code.
liblzma-rs/src/common/filter_buffer_decoder.rs (1)

12-20: ⚠️ Potential issue | 🟠 Major

Validate filters before decoder initialization.

Validation covers input/output buffers but skips filters. A null filters reaches Line 33 and relies on downstream behavior.

🐛 Suggested fix
-    if in_0.is_null()
+    if filters.is_null()
+        || in_0.is_null()
         || in_pos.is_null()
         || *in_pos > in_size
         || out.is_null()

Also applies to: 33-33

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/filter_buffer_decoder.rs` around lines 12 - 20, The
code currently validates input/output pointers but does not check the `filters`
pointer before decoder initialization; add a null-check for `filters` (and
optionally validate `filters_len` if present) alongside the existing checks and
return `LZMA_PROG_ERROR` on null so the decoder creation code (referenced around
line 33) never receives a null `filters` pointer; update the conditional that
checks `in_0`, `in_pos`, `out`, `out_pos` to also include `filters.is_null()`
(or equivalent) and ensure the same validation is applied in the other
occurrence noted in the diff.
liblzma-rs/src/common/block_decoder.rs (1)

219-222: ⚠️ Potential issue | 🔴 Critical

Null-check block before using it in init-time validation.

Line 219 uses block before validating it. A null block causes immediate UB.

🐛 Suggested fix
 pub unsafe extern "C" fn lzma_block_decoder_init(
     next: *mut lzma_next_coder,
     allocator: *const lzma_allocator,
     block: *mut lzma_block,
 ) -> lzma_ret {
+    if block.is_null() {
+        return LZMA_PROG_ERROR;
+    }
@@
     if lzma_block_unpadded_size(block) == 0
         || !((*block).uncompressed_size <= LZMA_VLI_MAX
             || (*block).uncompressed_size == LZMA_VLI_UNKNOWN)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_decoder.rs` around lines 219 - 222, The
validation uses lzma_block_unpadded_size(block) and dereferences (*block) before
ensuring block isn't null, which can cause UB; add an explicit null-check for
block (e.g., if block.is_null() / block == ptr::null()) before calling
lzma_block_unpadded_size or accessing (*block). If null, return an appropriate
error or early-fail consistent with surrounding code paths; otherwise proceed to
the existing checks that reference lzma_block_unpadded_size,
(*block).uncompressed_size, LZMA_VLI_MAX, and LZMA_VLI_UNKNOWN.
liblzma-rs/src/common/filter_buffer_encoder.rs (1)

11-13: ⚠️ Potential issue | 🟠 Major

Add a null guard for filters in argument validation.

Current validation misses filters, but Line 26 passes it to encoder init unconditionally.

🐛 Suggested fix
-    if in_0.is_null() && in_size != 0 || out.is_null() || out_pos.is_null() || *out_pos > out_size {
+    if filters.is_null()
+        || in_0.is_null() && in_size != 0
+        || out.is_null()
+        || out_pos.is_null()
+        || *out_pos > out_size
+    {
         return LZMA_PROG_ERROR;
     }

Also applies to: 26-26

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/filter_buffer_encoder.rs` around lines 11 - 13, The
argument validation currently checks in_0, in_size, out, out_pos and out_size
but misses a null check for filters, yet filters is later passed unconditionally
to the encoder init (lzma_filter_encoder_init); update the validation condition
to also guard against filters being NULL (e.g., treat filters == NULL as a
LZMA_PROG_ERROR) so the method that contains in_0, in_size, out, out_pos,
out_size and filters returns LZMA_PROG_ERROR early if filters is null before
calling the encoder init.
liblzma-rs/src/common/lzip_decoder.rs (1)

138-144: ⚠️ Potential issue | 🟠 Major

Reset preset_dict_size when clearing preset_dict.

At Line 140 you clear preset_dict but keep preset_dict_size unchanged. This leaves an inconsistent preset dictionary tuple for subsequent decoder setup.

🐛 Proposed fix
                 (*coder).options.dict_size =
                     (1u32 << b2log).wrapping_sub(fracnum << b2log.wrapping_sub(4));
                 (*coder).options.preset_dict = core::ptr::null();
+                (*coder).options.preset_dict_size = 0;
                 (*coder).options.lc = LZIP_LC;
                 (*coder).options.lp = LZIP_LP;
                 (*coder).options.pb = LZIP_PB;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/lzip_decoder.rs` around lines 138 - 144, When clearing
the preset dictionary pointer you must also reset its size to keep the preset
tuple consistent: in the block where you set (*coder).options.preset_dict =
core::ptr::null() (inside lzip_decoder setup), also set
(*coder).options.preset_dict_size = 0 (or the appropriate zero field name) so
subsequent decoder setup sees both pointer and size cleared; update the same
function/closure that assigns (*coder).options.dict_size, preset_dict, lc, lp,
pb, and memusage to include resetting preset_dict_size.
liblzma-rs/src/common/easy_preset.rs (1)

2-7: ⚠️ Potential issue | 🔴 Critical

Validate opt_easy before dereferencing.

Line 3 dereferences opt_easy unconditionally. A null pointer from FFI causes UB.

🐛 Proposed fix
 pub unsafe extern "C" fn lzma_easy_preset(opt_easy: *mut lzma_options_easy, preset: u32) -> bool {
+    if opt_easy.is_null() {
+        return true;
+    }
     if lzma_lzma_preset(::core::ptr::addr_of_mut!((*opt_easy).opt_lzma), preset) != 0 {
         return true;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/easy_preset.rs` around lines 2 - 7, The function
lzma_easy_preset dereferences the FFI pointer opt_easy without checking for
null; add a null-pointer check at the start of lzma_easy_preset (e.g., if
opt_easy.is_null() { return true; }) before calling lzma_lzma_preset or
accessing (*opt_easy).filters and (*opt_easy).opt_lzma so we avoid UB from a
null FFI pointer; keep the existing return convention (true signals an error,
so return true on null; otherwise proceed to call lzma_lzma_preset and set
filters[0].id/options).
liblzma-rs/src/common/stream_buffer_decoder.rs (1)

13-20: ⚠️ Potential issue | 🔴 Critical

Validate memlimit before dereferencing it.

*memlimit is read at Line 40, but memlimit is never null-checked in the guard at Lines 13-19.

🐛 Proposed fix
 ) -> lzma_ret {
-    if in_pos.is_null()
+    if memlimit.is_null()
+        || in_pos.is_null()
         || in_0.is_null() && *in_pos != in_size
         || *in_pos > in_size
         || out_pos.is_null()
         || out.is_null() && *out_pos != out_size
         || *out_pos > out_size

Also applies to: 37-41

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_buffer_decoder.rs` around lines 13 - 20, The
function stream_buffer_decoder (in stream_buffer_decoder.rs) dereferences
memlimit later but never null-checks it in the initial pointer-guard; add a
memlimit.is_null() condition to the same initial if that checks
in_pos/in_0/out_pos/out so memlimit is validated before any use, and mirror that
null-check in the secondary guard around the later dereference (the block
covering the other pointer validations) so *memlimit is never read when memlimit
may be null.
liblzma-rs/src/common/stream_flags_common.rs (2)

6-10: ⚠️ Potential issue | 🟠 Major

Guard null pointers before dereferencing stream flags.

At Line 10, both a and b are dereferenced without validation. Add null checks at function entry to avoid UB from FFI callers.

🐛 Proposed fix
 pub unsafe extern "C" fn lzma_stream_flags_compare(
     a: *const lzma_stream_flags,
     b: *const lzma_stream_flags,
 ) -> lzma_ret {
+    if a.is_null() || b.is_null() {
+        return LZMA_PROG_ERROR;
+    }
     if (*a).version != 0 || (*b).version != 0 {
         return LZMA_OPTIONS_ERROR;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_common.rs` around lines 6 - 10, The
function lzma_stream_flags_compare currently dereferences a and b without null
checks; at the start of the unsafe extern "C" fn lzma_stream_flags_compare add
guards that return an appropriate lzma_ret (e.g., LZMA_PROG_ERROR) if a or b is
null before any dereference, so all subsequent uses of (*a).version and
(*b).version are safe for FFI callers.

2-5: 🧹 Nitpick | 🔵 Trivial

Prefer immutable statics for magic byte constants.

These arrays appear constant and don't need static mut; keeping them mutable unnecessarily expands unsafe/global mutation surface.

♻️ Proposed cleanup
-pub static mut lzma_header_magic: [u8; 6] = [
-    0xfd as u8, 0x37 as u8, 0x7a as u8, 0x58 as u8, 0x5a as u8, 0,
-];
-pub static mut lzma_footer_magic: [u8; 2] = [0x59 as u8, 0x5a as u8];
+pub static lzma_header_magic: [u8; 6] = [0xfd, 0x37, 0x7a, 0x58, 0x5a, 0x00];
+pub static lzma_footer_magic: [u8; 2] = [0x59, 0x5a];
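As a quick sanity check, the immutable form proposed above can be read without any unsafe block. A standalone sketch using the same magic-byte values quoted in the diff (local names here are illustrative, not the library's):

```rust
// Immutable statics: safe to read everywhere, no global-mutation
// surface. Values are the xz header/footer magic bytes from the diff.
static HEADER_MAGIC: [u8; 6] = [0xfd, 0x37, 0x7a, 0x58, 0x5a, 0x00];
static FOOTER_MAGIC: [u8; 2] = [0x59, 0x5a];

fn main() {
    // No unsafe needed, unlike `static mut`, where every access
    // requires an unsafe block.
    assert_eq!(&HEADER_MAGIC[..3], &[0xfd, 0x37, 0x7a]);
    assert_eq!(FOOTER_MAGIC, [0x59, 0x5a]);
    println!("magic bytes verified");
}
```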
benches/backend_comparison.rs (2)

121-133: ⚠️ Potential issue | 🟠 Major

Use identical compressed input for both decode benchmarks.

At Lines 126-133, C and Rust decoders are measured on different compressed streams, which makes decode throughput comparisons non-equivalent.

♻️ Proposed fix
     for &(size, label) in sizes {
         let input = make_payload(size);
-        let c_compressed = unsafe { c_encode(&input) };
-        let rs_compressed = unsafe { rs_encode(&input) };
+        let compressed = unsafe { c_encode(&input) };
         group.throughput(Throughput::Bytes(size as u64));

-        group.bench_with_input(BenchmarkId::new("c", label), &c_compressed, |b, data| {
+        group.bench_with_input(BenchmarkId::new("c", label), &compressed, |b, data| {
             b.iter(|| unsafe { c_decode(black_box(data), size) })
         });
         group.bench_with_input(
             BenchmarkId::new("rust", label),
-            &rs_compressed,
+            &compressed,
             |b, data| b.iter(|| unsafe { rs_decode(black_box(data), size) }),
         );
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benches/backend_comparison.rs` around lines 121 - 133, The two decode
benchmarks use different compressed buffers (c_compressed and rs_compressed) so
results aren't comparable; change the setup to compress the same payload once
and reuse that single compressed buffer for both BenchmarkId::new("c", label)
and BenchmarkId::new("rust", label). Locate make_payload(size), the unsafe
c_encode/rs_encode calls that produce c_compressed/rs_compressed, and the
group.bench_with_input calls around c_decode and rs_decode, remove the
duplicated compression step and pass the same shared compressed data (ensuring
its length/size still matches the size parameter) into both bench closures.

22-37: ⚠️ Potential issue | 🟠 Major

Check lzma_ret before truncating/using outputs in benchmark helpers.

These helpers currently ignore encoder/decoder return codes and always truncate with out_pos. Failed operations can silently benchmark invalid outputs.

🐛 Proposed fix pattern
 unsafe fn c_encode(input: &[u8]) -> Vec<u8> {
@@
-    c_sys::lzma_easy_buffer_encode(
+    let ret = c_sys::lzma_easy_buffer_encode(
@@
-    out.truncate(out_pos);
+    assert_eq!(ret, c_sys::LZMA_OK, "c_encode failed: {}", ret);
+    out.truncate(out_pos);
     out
 }
@@
-    rs_sys::lzma_easy_buffer_encode(
+    let ret = rs_sys::lzma_easy_buffer_encode(
@@
-    out.truncate(out_pos);
+    assert_eq!(ret, rs_sys::LZMA_OK, "rs_encode failed: {}", ret);
+    out.truncate(out_pos);
     out
 }
@@
-    c_sys::lzma_stream_buffer_decode(
+    let ret = c_sys::lzma_stream_buffer_decode(
@@
-    out.truncate(out_pos);
+    assert_eq!(ret, c_sys::LZMA_OK, "c_decode failed: {}", ret);
+    assert_eq!(in_pos, compressed.len(), "c_decode did not consume all input");
+    out.truncate(out_pos);
     out
 }
@@
-    rs_sys::lzma_stream_buffer_decode(
+    let ret = rs_sys::lzma_stream_buffer_decode(
@@
-    out.truncate(out_pos);
+    assert_eq!(ret, rs_sys::LZMA_OK, "rs_decode failed: {}", ret);
+    assert_eq!(in_pos, compressed.len(), "rs_decode did not consume all input");
+    out.truncate(out_pos);
     out
 }

Also applies to: 40-56, 58-76, 78-96

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@benches/backend_comparison.rs` around lines 22 - 37, The helper currently
ignores the return code from the liblzma call (e.g. the lzma_ret from
c_sys::lzma_easy_buffer_encode in unsafe fn c_encode) and truncates/uses out_pos
even when encoding failed; capture the return value (let ret =
c_sys::lzma_easy_buffer_encode(...)), check it against the success constant
(e.g. c_sys::LZMA_OK or the appropriate success enum), and on non-success return
either propagate an error or panic with the ret code and context instead of
truncating silently. Apply the same pattern to the other C helper functions
mentioned (the corresponding decoder/encoder helpers around lines 40-56, 58-76,
78-96) so each checks the lzma_ret and handles failures before using out_pos.
liblzma-rs/src/common/outqueue.rs (1)

111-117: ⚠️ Potential issue | 🟠 Major

Add a defensive null check for cache before dereference.

At Line 115, (*outq).cache is dereferenced unconditionally via (*buf).next. If cache is null, this is immediate UB at the FFI boundary.

🐛 Proposed fix
 pub unsafe extern "C" fn lzma_outq_get_buf(
     outq: *mut lzma_outq,
     worker: *mut c_void,
 ) -> *mut lzma_outbuf {
     let buf: *mut lzma_outbuf = (*outq).cache;
+    if buf.is_null() {
+        return core::ptr::null_mut();
+    }
     (*outq).cache = (*buf).next;
     (*buf).next = core::ptr::null_mut();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/outqueue.rs` around lines 111 - 117, The function
lzma_outq_get_buf currently dereferences (*outq).cache without checking for
null; modify lzma_outq_get_buf to first read let buf = (*outq).cache and if
buf.is_null() return core::ptr::null_mut() (or the equivalent sentinel) to avoid
UB, otherwise proceed to update (*outq).cache = (*buf).next and (*buf).next =
core::ptr::null_mut(); ensure all uses of buf are guarded so no dereference
occurs when cache is null.
liblzma-rs/src/common/stream_flags_decoder.rs (1)

13-15: ⚠️ Potential issue | 🔴 Critical

Export unmangled symbols for these C ABI functions.

extern "C" only fixes the calling convention. If liblzma-rs/src/types.rs still declares these names as foreign symbols, the missing #[no_mangle] means the definitions here won't satisfy those link names.

🔧 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_stream_header_decode(
     options: *mut lzma_stream_flags,
     in_0: *const u8,
 ) -> lzma_ret {
@@
+#[no_mangle]
 pub unsafe extern "C" fn lzma_stream_footer_decode(
     options: *mut lzma_stream_flags,
     in_0: *const u8,
 ) -> lzma_ret {
#!/bin/bash
set -euo pipefail

echo "== definitions =="
rg -n 'pub unsafe extern "C" fn lzma_stream_(header|footer)_decode' liblzma-rs/src/common/stream_flags_decoder.rs -C2

echo
echo "== no_mangle markers in this file =="
rg -n '#\[no_mangle\]' liblzma-rs/src/common/stream_flags_decoder.rs || true

echo
echo "== foreign declarations =="
rg -n 'lzma_stream_(header|footer)_decode' liblzma-rs/src/types.rs -C2 || true

Expected: the functions are defined here, there are no #[no_mangle] attributes above them, and liblzma-rs/src/types.rs still declares the same symbols as foreign functions.

Also applies to: 47-49

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_decoder.rs` around lines 13 - 15, The C
ABI functions lzma_stream_header_decode and lzma_stream_footer_decode are
missing #[no_mangle], so their Rust-defined symbols are still mangled and won't
match the foreign declarations in types.rs; add #[no_mangle] directly above the
pub unsafe extern "C" fn definitions for lzma_stream_header_decode and
lzma_stream_footer_decode (and any other exported extern "C" fns in the same
file) and verify their signatures exactly match the declarations in
liblzma-rs/src/types.rs so the linker resolves the expected names.
liblzma-rs/src/common/filter_flags_decoder.rs (1)

10-17: ⚠️ Potential issue | 🔴 Critical

Validate the FFI inputs before first use.

(*filter).options, addr_of_mut!((*filter).id), and Line 35 all consume raw caller pointers before any guard. Null or out-of-range inputs crash here instead of returning LZMA_PROG_ERROR.

🔧 Proposed fix
 pub unsafe extern "C" fn lzma_filter_flags_decode(
     filter: *mut lzma_filter,
     allocator: *const lzma_allocator,
     in_0: *const u8,
     in_pos: *mut size_t,
     in_size: size_t,
 ) -> lzma_ret {
+    if filter.is_null()
+        || in_pos.is_null()
+        || (in_0.is_null() && *in_pos != in_size)
+        || *in_pos > in_size
+    {
+        return LZMA_PROG_ERROR;
+    }
     (*filter).options = core::ptr::null_mut();

Also applies to: 35-41

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/filter_flags_decoder.rs` around lines 10 - 17, Validate
all incoming raw pointers and buffer indices before dereferencing in the FFI
entry: check that the `filter` pointer is non-null before accessing
`(*filter).options` or `(*filter).id`, verify `in_0` is non-null and that
`in_pos` and `in_size` are within bounds (e.g., `in_pos <= in_size`) before
calling `lzma_vli_decode`, and return `LZMA_PROG_ERROR` on any invalid input;
apply the same guarding pattern around the subsequent block that touches
`(*filter).options` and the second `lzma_vli_decode` call (the code around lines
35–41) so no raw pointer is consumed without validation.
liblzma-rs/src/common/block_header_encoder.rs (1)

41-41: ⚠️ Potential issue | 🟠 Major

Reject aligned headers above LZMA_BLOCK_HEADER_SIZE_MAX.

Line 41 stores any aligned size into header_size. Once it exceeds the format limit, Line 55 truncates the length byte and the encoder can overrun the caller's buffer.

🔧 Proposed fix
-    (*block).header_size = size.wrapping_add(3) & !(3);
+    let aligned_size: u32 = size.wrapping_add(3) & !3;
+    if aligned_size > LZMA_BLOCK_HEADER_SIZE_MAX {
+        return LZMA_OPTIONS_ERROR;
+    }
+    (*block).header_size = aligned_size;

Also applies to: 54-55

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_header_encoder.rs` at line 41, The code assigns
an aligned header size into (*block).header_size using size.wrapping_add(3) &
!(3) without checking the LZMA_BLOCK_HEADER_SIZE_MAX limit, which allows values
> LZMA_BLOCK_HEADER_SIZE_MAX to be stored and later truncated (causing buffer
overruns); compute the aligned_size into a local (e.g. let aligned_size =
size.wrapping_add(3) & !3), then check if aligned_size >
LZMA_BLOCK_HEADER_SIZE_MAX and handle it (return an error/Err or clamp and
surface an error) instead of blindly storing it in (*block).header_size so the
encoder never writes an out-of-range header_size; update the same guard where
the length byte is truncated to use the checked aligned_size.
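The round-up-and-check logic from this finding can be exercised on its own. A minimal sketch, assuming LZMA_BLOCK_HEADER_SIZE_MAX is 1024 as in lzma/block.h (the function name and Option-based error signaling are illustrative):

```rust
// Standalone sketch of the proposed guard: round up to a multiple of
// 4, then reject sizes the one-byte length field cannot represent.
const LZMA_BLOCK_HEADER_SIZE_MAX: u32 = 1024; // (0xFF + 1) * 4

fn aligned_header_size(size: u32) -> Option<u32> {
    let aligned = size.wrapping_add(3) & !3;
    if aligned > LZMA_BLOCK_HEADER_SIZE_MAX {
        return None; // caller would map this to LZMA_OPTIONS_ERROR
    }
    Some(aligned)
}

fn main() {
    assert_eq!(aligned_header_size(1), Some(4));
    assert_eq!(aligned_header_size(8), Some(8));
    assert_eq!(aligned_header_size(9), Some(12));
    // 1025 rounds to 1028, which the length byte cannot encode.
    assert_eq!(aligned_header_size(1025), None);
    println!("ok");
}
```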
liblzma-rs/src/common/block_buffer_decoder.rs (1)

12-19: ⚠️ Potential issue | 🔴 Critical

Guard block before decoder init.

block is the only C pointer missing from the upfront validation, and Line 34 forwards it straight into lzma_block_decoder_init. A null caller input still turns into UB instead of LZMA_PROG_ERROR.

🔧 Proposed fix
     if in_pos.is_null()
+        || block.is_null()
         || in_0.is_null() && *in_pos != in_size
         || *in_pos > in_size
         || out_pos.is_null()
         || out.is_null() && *out_pos != out_size
         || *out_pos > out_size

Also applies to: 34-34

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_buffer_decoder.rs` around lines 12 - 19, The
upfront pointer-validation is missing a check for the C pointer named block, so
a null block can be passed into lzma_block_decoder_init and cause UB; update the
existing guard condition (the big if that returns LZMA_PROG_ERROR) to include a
check for block.is_null() (or equivalent) so the function returns
LZMA_PROG_ERROR when block is null before calling lzma_block_decoder_init.
liblzma-rs/src/common/block_buffer_encoder.rs (1)

27-29: ⚠️ Potential issue | 🟠 Major

Avoid truncating the 64-bit bound to size_t.

Line 29 still casts the u64 result directly to size_t. On 32-bit targets this can hand callers an undersized buffer length, which later buffer encoders then reuse.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/block_buffer_encoder.rs` around lines 27 - 29, The
function lzma_block_buffer_bound currently casts the 64-bit result from
lzma_block_buffer_bound64 into size_t which truncates on 32-bit targets; update
lzma_block_buffer_bound to detect when the u64 result exceeds the maximum
representable size_t and handle that safely (e.g., clamp to SIZE_MAX or return
an error/0 as the API expects) instead of directly casting, using the symbols
lzma_block_buffer_bound, lzma_block_buffer_bound64 and size_t to locate and
modify the logic so callers never receive a truncated buffer length.
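
The checked narrowing could be sketched as below. `checked_bound` and its explicit `size_t_max` parameter are illustrative stand-ins (passing the target's SIZE_MAX as an argument makes the 32-bit behaviour testable on any host); returning 0 follows the existing convention of the bound functions, which use 0 as their error value.

```rust
// Hedged sketch of a guarded u64 -> size_t narrowing. `size_t_max`
// stands in for the target's SIZE_MAX; returning 0 mirrors the bound
// functions' existing "0 means error" convention.
fn checked_bound(bound64: u64, size_t_max: u64) -> u64 {
    if bound64 == 0 || bound64 > size_t_max {
        0 // upstream error, or a value that would truncate on this target
    } else {
        bound64
    }
}
```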
liblzma-rs/src/common/microlzma_decoder.rs (1)

215-221: ⚠️ Potential issue | 🟠 Major

Validate uncomp_size before attaching a new coder.

Line 221 stores the new allocation in (*next).coder, but Lines 252-253 can still return LZMA_OPTIONS_ERROR. That leaves partially initialized state behind on the error path.

Also applies to: 252-253

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/microlzma_decoder.rs` around lines 215 - 221, The code
stores a newly allocated lzma_microlzma_coder into (*next).coder before
validating uncomp_size, which can leave a partially-initialized coder in place
if later checks return LZMA_OPTIONS_ERROR; instead, validate uncomp_size (and
any other option checks) before attaching the coder or, if allocation is
required first, keep the pointer in a local variable (coder from
internal_alloc_object) and only assign to (*next).coder after all validations
succeed; if you must attach early, ensure you free the allocated coder on every
error path (including where LZMA_OPTIONS_ERROR is returned) to avoid leaking or
leaving a partially initialized (*next).coder.
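
The validate-before-attach ordering can be sketched as follows; `Coder`, `Next`, and `init_coder` are hypothetical stand-ins for the microlzma types, and 0/8 stand for LZMA_OK/LZMA_OPTIONS_ERROR.

```rust
// Illustrative shape only: run every option check first, and only then
// publish the allocation into the long-lived slot, so a failed init can
// never leave a partially initialized coder attached.
struct Coder { uncomp_size: u64 }
struct Next { coder: Option<Box<Coder>> }

const LZMA_OK: i32 = 0;
const LZMA_OPTIONS_ERROR: i32 = 8;

fn init_coder(next: &mut Next, uncomp_size: u64) -> i32 {
    // Validate before touching `next.coder`.
    if uncomp_size == 0 {
        return LZMA_OPTIONS_ERROR;
    }
    next.coder = Some(Box::new(Coder { uncomp_size }));
    LZMA_OK
}
```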
liblzma-rs/src/delta/delta_common.rs (1)

7-12: ⚠️ Potential issue | 🟠 Major

Guard next and filters before dereferencing them.

Line 12 reads (*next).coder and Line 34 reads (*filters).options with no boundary check. This public FFI initializer should return LZMA_PROG_ERROR instead of hitting UB on bad callers.

Also applies to: 34-37

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/delta/delta_common.rs` around lines 7 - 12, In
lzma_delta_coder_init, guard the FFI inputs by checking that `next` and
`filters` are non-null (and that `(*next).coder` is non-null if applicable)
before any dereference; if a check fails return `LZMA_PROG_ERROR` (the
function's `lzma_ret` error for bad callers). Specifically, at the top of
`lzma_delta_coder_init` validate `next != std::ptr::null_mut()` and `filters !=
std::ptr::null()` (and ensure `(*next).coder` is not null before casting to
`*mut lzma_delta_coder`) and likewise check `(*filters).options` before using
it, returning `LZMA_PROG_ERROR` on failure to avoid UB. Ensure the checks are
applied to the same dereferences at lines where `(*next).coder` and
`(*filters).options` are used.
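
A minimal sketch of that guard pattern, using simplified stand-in types (the real initializer would check the actual liblzma structs the same way before any dereference):

```rust
const LZMA_OK: i32 = 0;
const LZMA_PROG_ERROR: i32 = 11;

// Stand-in for a filter entry carrying a raw options pointer.
struct Filter { options: *const u8 }

// Guard every incoming C pointer before the first dereference and
// report LZMA_PROG_ERROR instead of hitting UB on bad callers.
unsafe fn coder_init(next: *mut u8, filters: *const Filter) -> i32 {
    if next.is_null() || filters.is_null() || (*filters).options.is_null() {
        return LZMA_PROG_ERROR;
    }
    LZMA_OK
}
```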
liblzma-rs/src/common/stream_decoder.rs (2)

29-39: ⚠️ Potential issue | 🔴 Critical

Reset stream_flags.check on every decoder reset.

stream_decoder_get_check() can be called immediately after init/reset, but stream_decoder_reset() leaves stream_flags.check untouched. On a fresh coder this reads uninitialized memory; on concatenated streams it can report the previous stream's check.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_decoder.rs` around lines 29 - 39,
stream_decoder_reset currently initializes index_hash, sequence, and pos but
does not clear the stored checksum field, leaving stream_flags.check
uninitialized or stale; update stream_decoder_reset (operating on
lzma_stream_coder) to explicitly reset/zero stream_flags.check (and any related
stream_flags state if present) so that after init/reset calls to
stream_decoder_get_check return a clean value rather than uninitialized or
previous-stream data (ensure this change is applied alongside existing setting
of sequence = SEQ_STREAM_HEADER and pos = 0).

254-267: ⚠️ Potential issue | 🟠 Major

Always clean up block_options.filters on header-decode failure.

Line 265 returns directly when lzma_block_header_decode() fails, but block_options.filters already points at the stack filters array and any partially allocated filter options are never freed. That leaves stale state in coder and leaks filter allocations on the error path.

Also applies to: 287-291

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_decoder.rs` around lines 254 - 267, The
header-decode failure path returns immediately while coder.block_options.filters
points at the stack-local filters array and any partially allocated filter
options remain allocated; before returning from the lzma_block_header_decode
failure branch (and the similar branch around the 287-291 block), call the
appropriate cleanup for partially-allocated filter options (e.g., invoke
lzma_free_filters or the crate-equivalent cleanup on (*coder).block_options) and
reset (*coder).block_options.filters to null (or a safe sentinel) so no stale
pointer remains, then return the error code from lzma_block_header_decode.
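
The cleanup-on-error shape can be sketched with safe stand-ins; `BlockOptions`, `free_filters`, and `decode_header` below are illustrative, and 9/0 stand for LZMA_DATA_ERROR/LZMA_OK.

```rust
// Stand-in for block_options holding partially allocated filter options.
struct BlockOptions { filters: Vec<Option<String>> }

// Release any per-filter allocations and drop the stale reference to
// the (stack-local) filters array before surfacing the error.
fn free_filters(opts: &mut BlockOptions) {
    for f in opts.filters.iter_mut() {
        *f = None;
    }
    opts.filters.clear();
}

fn decode_header(opts: &mut BlockOptions, header_ok: bool) -> i32 {
    // Simulate filters being attached before the header is validated.
    opts.filters = vec![Some("lzma2".to_string())];
    if !header_ok {
        free_filters(opts); // clean up before the early return
        return 9; // LZMA_DATA_ERROR
    }
    0 // LZMA_OK
}
```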
liblzma-rs/src/common/index_hash.rs (1)

83-89: ⚠️ Potential issue | 🟠 Major

Handle null in lzma_index_hash_size().

Line 86 dereferences index_hash unconditionally. This C ABI helper should return 0 for null input instead of crashing the caller.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/index_hash.rs` around lines 83 - 89, The function
lzma_index_hash_size currently dereferences the raw pointer index_hash
unconditionally; update lzma_index_hash_size to check if index_hash is null and
return 0 when it is, otherwise proceed to call index_size using
(*index_hash).blocks.count and (*index_hash).blocks.index_list_size; keep the
existing unsafe block but gate the dereference behind a null check to avoid
crashes across the C ABI.
liblzma-rs/src/delta/delta_encoder.rs (2)

100-128: ⚠️ Potential issue | 🟠 Major

Guard next before writing through it.

lzma_delta_encoder_init() dereferences next at Line 105 with no null check. This is a public C ABI initializer, so null should return LZMA_PROG_ERROR.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/delta/delta_encoder.rs` around lines 100 - 128, The function
lzma_delta_encoder_init currently dereferences the pointer next without checking
for null; add a null guard at the start of lzma_delta_encoder_init that returns
LZMA_PROG_ERROR if next is null, before assigning to (*next).code or
(*next).update, so the public C ABI validates inputs safely; keep the rest of
the function (including calling lzma_delta_coder_init) unchanged.

130-135: ⚠️ Potential issue | 🟠 Major

Validate out before storing the encoded property byte.

Line 135 writes *out unconditionally. A null out from C will crash even when options are otherwise valid.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/delta/delta_encoder.rs` around lines 130 - 135, In
lzma_delta_props_encode, validate that the out pointer is non-null before
writing to it: check if out.is_null() and return LZMA_PROG_ERROR (or the
appropriate error code) early, similar to the existing lzma_delta_coder_memusage
check; then proceed to cast options to *const lzma_options_delta and write the
encoded byte to *out only after the null check. Ensure you reference
lzma_delta_props_encode, out, lzma_options_delta, and LZMA_PROG_ERROR in the
change.
liblzma-rs/src/delta/delta_decoder.rs (3)

44-63: ⚠️ Potential issue | 🟠 Major

Guard next before touching it.

lzma_delta_decoder_init() writes through next without validating the incoming C pointer first. A null caller input here should return LZMA_PROG_ERROR, not dereference (*next).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/delta/delta_decoder.rs` around lines 44 - 63, The function
lzma_delta_decoder_init currently dereferences next without validation; add a
null-check at the start of lzma_delta_decoder_init and return LZMA_PROG_ERROR if
next is null before touching (*next). After that guard, keep existing behavior
(setting (*next).code and calling lzma_delta_coder_init). Reference the symbols
lzma_delta_decoder_init, lzma_delta_coder_init, lzma_ret, and LZMA_PROG_ERROR
when locating where to insert the check.

25-37: ⚠️ Potential issue | 🟠 Major

Avoid aborting on an unset downstream callback.

Line 27 still unwraps (*coder).next.code inside an extern "C" path. If the chain is ever left partially initialized, this can abort the process instead of returning LZMA_PROG_ERROR.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/delta/delta_decoder.rs` around lines 25 - 37, The code
currently unwraps the downstream callback pointer in delta_decoder (the casted
pointer `coder: *mut lzma_delta_coder` and the field `(*coder).next.code`) which
can abort if the chain is partially initialized; modify the extern "C" path to
check whether `(*coder).next.code` (and/or `(*coder).next.coder`) is NULL/None
before calling it and return `LZMA_PROG_ERROR` (an `lzma_ret` value) instead of
unwrapping, ensuring the function exits safely when the downstream callback is
unset.

65-82: ⚠️ Potential issue | 🟠 Major

Validate options and props before dereferencing them.

Lines 81-82 assume both pointers are valid. A null props or options from C becomes immediate undefined behavior instead of a normal lzma_ret error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/delta/delta_decoder.rs` around lines 65 - 82, In
lzma_delta_props_decode, validate the incoming pointer parameters before any
dereference: check that options and props are non-null (and keep the existing
props_size check) and return an appropriate lzma_ret (e.g. LZMA_PROG_ERROR) on
null pointers; only after those checks allocate and fill the lzma_options_delta
and then assign *options = opt as *mut c_void. Reference symbols:
lzma_delta_props_decode, options, props, props_size, lzma_options_delta, and the
final assignment to *options.
liblzma-rs/src/lz/lz_decoder.rs (3)

136-142: ⚠️ Potential issue | 🔴 Critical

Check unread temp bytes, not temp buffer capacity.

A fully consumed temp buffer can still have size != 0, so this path can raise LZMA_DATA_ERROR after a valid end-of-stream. Compare temp.pos against temp.size here.

Suggested fix
-        if (*coder).temp.size != 0 {
+        if (*coder).temp.pos != (*coder).temp.size {
             return LZMA_DATA_ERROR;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 136 - 142, The check in the
lz_decoder path that returns LZMA_DATA_ERROR is using temp.size
(coder.temp.size) to decide if there are unread bytes; instead compare temp.pos
against temp.size to detect unread data. Update the branch that checks
(*coder).this_finished to use (coder.temp.pos != coder.temp.size) or equivalent
to determine leftover unread temp bytes before returning LZMA_DATA_ERROR,
leaving the subsequent check of (*coder).next_finished and return of
LZMA_STREAM_END unchanged.

264-273: ⚠️ Potential issue | 🟠 Major

Clear dictionary metadata before returning on reallocation failure.

If this allocation fails, dict.size still advertises the old buffer length while dict.buf is null. A later init can then skip allocation and crash in lz_decoder_reset.

Suggested fix
     if (*coder).dict.size != alloc_size {
         crate::alloc::internal_free((*coder).dict.buf as *mut c_void, allocator);
+        (*coder).dict.buf = core::ptr::null_mut();
+        (*coder).dict.size = 0;
         (*coder).dict.buf = crate::alloc::internal_alloc_bytes(
             alloc_size.wrapping_add(LZ_DICT_EXTRA as size_t),
             allocator,
         ) as *mut u8;
         if (*coder).dict.buf.is_null() {
             return LZMA_MEM_ERROR;
         }
         (*coder).dict.size = alloc_size;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 264 - 273, When reallocating
the dictionary in the lz_decoder code path (the block that compares
(*coder).dict.size != alloc_size), ensure that if internal_alloc_bytes fails and
returns null you clear the dict metadata before returning LZMA_MEM_ERROR —
specifically set (*coder).dict.buf to null (already null) and reset
(*coder).dict.size to 0 so future calls (e.g., lz_decoder_reset or any init
checks) won't think a valid buffer exists; perform this update immediately after
the failed allocation and before returning LZMA_MEM_ERROR.

249-260: ⚠️ Potential issue | 🟠 Major

Keep lzma_lz_decoder_memusage consistent with init-time allocation.

The init path clamps the dictionary to at least 4096 bytes and rounds it up to a 16-byte boundary before adding repeat space, but this helper reports dictionary_size verbatim. That underestimates memory and can make callers accept a configuration that init later rejects, or whose real allocation exceeds the reported figure.

Also applies to: 301-306

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_decoder.rs` around lines 249 - 260, The memory-usage
helper lzma_lz_decoder_memusage must compute dictionary_size the same way as
init: clamp lz_options.dict_size to at least 4096, perform the same overflow
check using SIZE_MAX/wrapping_sub with 15, (2 * LZ_DICT_REPEAT_MAX) and
LZ_DICT_EXTRA, then round up by doing dict_size = (dict_size.wrapping_add(15) &
!15) and include the repeat/extra space when returning the total; update
lzma_lz_decoder_memusage to use lz_options.dict_size (after clamping/rounding)
and the same constants (LZ_DICT_REPEAT_MAX, LZ_DICT_EXTRA, SIZE_MAX) so reported
memory matches init-time allocation logic.
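
One way to keep the two paths in lockstep is a shared helper, sketched here with placeholder constants (the LZ_DICT_REPEAT_MAX and LZ_DICT_EXTRA values below are illustrative, not liblzma's real ones):

```rust
const LZ_DICT_REPEAT_MAX: u64 = 64; // assumption: illustrative value
const LZ_DICT_EXTRA: u64 = 32;      // assumption: illustrative value

// Single source of truth for the dictionary allocation size: clamp to
// the 4096-byte minimum, round up to 16 bytes, then add repeat/extra
// space — exactly what the init path does.
fn effective_dict_alloc(dict_size: u64) -> u64 {
    let clamped = dict_size.max(4096);
    let rounded = (clamped + 15) & !15;
    rounded + 2 * LZ_DICT_REPEAT_MAX + LZ_DICT_EXTRA
}

// memusage reports the same figure init will allocate.
fn decoder_memusage(dict_size: u64, coder_size: u64) -> u64 {
    effective_dict_alloc(dict_size) + coder_size
}
```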
liblzma-rs/src/common/file_info.rs (1)

602-605: ⚠️ Potential issue | 🟠 Major

Clear *dest_index before any work starts.

On failure this initializer leaves the caller's previous output pointer untouched, which makes stale indexes look valid and is easy to double-free from C.

Suggested fix
     if dest_index.is_null() {
         return LZMA_PROG_ERROR;
     }
+    *dest_index = core::ptr::null_mut();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/file_info.rs` around lines 602 - 605, The code must
clear the caller's output pointer to avoid leaving a stale valid-looking pointer
on failure: at the top of the function (after verifying dest_index is non-null)
set *dest_index = std::ptr::null_mut() (or equivalent null assignment in this
unsafe FFI context) before any further work or early returns; keep this
placement before the allocation/initialization that produces coder (referenced
as coder / lzma_file_info_coder) so any later return paths (including the
existing LZMA_PROG_ERROR return) cannot leave the caller with a dangling/stale
pointer.
liblzma-rs/src/common/stream_decoder_mt.rs (4)

1473-1480: ⚠️ Potential issue | 🟠 Major

Destroy the coder-level mutex and cond before freeing coder.

stream_decoder_mt_init() initializes these sync primitives, but stream_decoder_mt_end() never destroys them. Freeing the backing storage first leaks OS resources and is undefined on pthread-style backends.

Also applies to: 1309-1317

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_decoder_mt.rs` around lines 1473 - 1480, In
stream_decoder_mt_end (and the earlier error-exit path in stream_decoder_mt_init
around the other diff), ensure the coder's synchronization primitives are
destroyed before freeing its memory: call
mythread_cond_destroy(::core::ptr::addr_of_mut!((*coder).cond)) and then
mythread_mutex_destroy(::core::ptr::addr_of_mut!((*coder).mutex)) (use the same
address expressions as the init code) prior to calling
crate::alloc::internal_free(coder as *mut c_void, allocator); do this in both
the normal teardown (stream_decoder_mt_end) and the error paths where the coder
would otherwise be freed while its cond/mutex remain initialized.

470-476: ⚠️ Potential issue | 🟠 Major

Preserve the worker's real error code.

When fail_fast is false, this overwrites pending_error with LZMA_PROG_ERROR, so the public decoder later returns the wrong terminal cause.

Suggested fix
                     if (*coder).fail_fast {
                         ret = (*coder).thread_error;
                         break;
                     } else {
-                        (*coder).pending_error = LZMA_PROG_ERROR;
+                        (*coder).pending_error = (*coder).thread_error;
                     }

Also applies to: 758-779

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_decoder_mt.rs` around lines 470 - 476, The code
currently overwrites the worker's actual error by unconditionally setting
(*coder).pending_error = LZMA_PROG_ERROR when (*coder).thread_error != LZMA_OK
and fail_fast is false; change this to preserve the real worker error by
assigning (*coder).pending_error = (*coder).thread_error (or leaving an existing
non-zero pending_error intact if that logic is required) instead of
LZMA_PROG_ERROR; make the same change in the other analogous block around the
second occurrence (the block covering lines 758-779) so the public decoder
returns the worker's true terminal error code.

597-607: ⚠️ Potential issue | 🔴 Critical

Reset stream_flags.check before each new stream.

stream_decoder_mt_get_check() reads this field directly. Leaving it untouched here leaks the previous member's check, and the first member starts with garbage.

Suggested fix
     (*coder).sequence = SEQ_STREAM_HEADER;
+    (*coder).stream_flags.check = LZMA_CHECK_NONE;
     (*coder).pos = 0;
     LZMA_OK
 }

Also applies to: 1319-1323

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_decoder_mt.rs` around lines 597 - 607, The
stream flags' check field is not being cleared when a new stream is started, so
update stream_decoder_reset to explicitly reset (*coder).stream_flags.check =
lzma_check::LZMA_CHECK_NONE (or the equivalent zero/none value used in the
project) before returning; ensure the same fix is applied to the duplicate reset
block that also sets (*coder).sequence = SEQ_STREAM_HEADER and (*coder).pos = 0
(the other reset location around the duplicate at lines 1319-1323) so
stream_decoder_mt_get_check() never reads stale/garbage values.

889-927: ⚠️ Potential issue | 🟠 Major

Clamp memlimit_threading when the public limit is lowered.

Block-selection later compares mem_next_block against memlimit_threading, not memlimit_stop. After a successful reduction here, the decoder can still choose threaded allocations above the caller's new cap.

Also applies to: 1355-1361
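
A sketch of the suggested clamp, with an illustrative `Limits` stand-in for the coder's fields:

```rust
// Stand-in for the decoder's memory-limit pair; names mirror the coder
// but this is not liblzma's real struct.
struct Limits { memlimit_stop: u64, memlimit_threading: u64 }

// When the public limit is lowered, pull memlimit_threading down with
// it so block selection can never pick a threaded allocation above the
// caller's new cap.
fn set_memlimit(l: &mut Limits, new_limit: u64) {
    l.memlimit_stop = new_limit;
    if l.memlimit_threading > new_limit {
        l.memlimit_threading = new_limit;
    }
}
```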

liblzma-rs/src/common/string_conversion.rs (2)

319-324: ⚠️ Potential issue | 🟠 Major

Add a portable initialization path for lzma12_optmap.

parse_lzma12 and the LZMA1/LZMA2 entries in filter_name_map read lzma12_optmap, but the startup hook only exists for Linux/Windows/macOS. On wasm32 and other unsupported targets the table stays zeroed, so LZMA option parsing/stringifying breaks outright.

Also applies to: 348-365, 1340-1344

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/string_conversion.rs` around lines 319 - 324, The
lzma12_optmap table is left zeroed on targets without the OS-specific startup
hook, breaking parse_lzma12 and the LZMA1/LZMA2 entries in filter_name_map; add
a portable initialization path that guarantees lzma12_optmap is populated before
use (e.g., a lazy/once initializer or a cfg-gated static initializer used when
the Linux/Windows/macOS hook is not built). Ensure the initializer is invoked or
referenced from parse_lzma12 and the filter_name_map lookup code so
parse_options gets a valid &lzma12_optmap; reference symbols: lzma12_optmap,
parse_lzma12, filter_name_map, and the existing startup hook to mirror its
initialization behavior for unsupported targets.
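
A portable lazy-init fallback might look like the following sketch. The table contents are placeholders, and `std::sync::OnceLock` needs Rust 1.70+, so an MSRV-1.63 build would reach for `std::sync::Once` plus a static instead.

```rust
use std::sync::OnceLock;

// Build the option table the first time it is needed instead of relying
// on an OS-specific startup hook, so wasm32 and other targets get a
// populated table too. Entries here are placeholders, not the real
// lzma12_optmap contents.
fn lzma12_optmap() -> &'static [(&'static str, u32)] {
    static MAP: OnceLock<Vec<(&'static str, u32)>> = OnceLock::new();
    MAP.get_or_init(|| vec![("dict", 0), ("lc", 1), ("lp", 2), ("pb", 3)])
}
```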

1263-1337: ⚠️ Potential issue | 🟠 Major

Compute lzma_options_lzma offsets from the struct layout.

These literals only match the 64-bit layout. On 32-bit targets the preset_dict pointer is smaller, so lc/lp/pb/mode/nice/mf/depth are all read and written at the wrong addresses. Please derive them from the struct definition instead of baking in numeric constants.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/string_conversion.rs` around lines 1263 - 1337, The
option_map entries for lc/lp/pb/mode/nice/mf/depth use hardcoded numeric offsets
(20/24/28/32/36/40/44) which only match 64-bit layout; update lzma_options_lzma
so offsets are computed from the struct layout instead of literals. Replace the
numeric offset fields in the option_map initializers with values derived from
the lzma_options_lzma struct (e.g. compute offset = (addr_of!(opt.field) as
usize) - (addr_of!(opt) as usize) or use size_of::<*const c_void>() where
pointer-sized gaps exist) using core::ptr::addr_of! / core::mem::size_of to
derive preset_dict, lc, lp, pb, mode, nice, mf, depth offsets at build time;
target the option_map initializers and the lzma_options_lzma struct so the
offsets follow the actual field ordering and pointer size on 32/64-bit
platforms.
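
The layout-derived offsets can be sketched with the classic MaybeUninit/addr_of! projection; `Opts` below is a hypothetical stand-in for lzma_options_lzma, and on Rust 1.77+ the built-in `std::mem::offset_of!` does the same job directly.

```rust
// Stand-in layout: a u32, then a pointer whose size varies by target,
// then more u32 fields — the shape that breaks hardcoded offsets.
#[repr(C)]
struct Opts {
    dict_size: u32,
    preset_dict: *const u8,
    preset_dict_size: u32,
    lc: u32,
}

// Compute a field offset from the struct layout itself, so it is
// correct on both 32- and 64-bit targets.
macro_rules! offset_of {
    ($ty:ty, $field:ident) => {{
        let u = std::mem::MaybeUninit::<$ty>::uninit();
        let base = u.as_ptr();
        // addr_of! only projects the address; nothing is dereferenced.
        let field = unsafe { std::ptr::addr_of!((*base).$field) };
        field as usize - base as usize
    }};
}
```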
liblzma-rs/src/common/index.rs (3)

674-677: ⚠️ Potential issue | 🟠 Major

Copy the checks bitset when duplicating an index.

lzma_index_dup copies uncompressed_size, total_size, record_count, and index_list_size, but omits the checks field. This causes lzma_index_checks() to return incomplete metadata for duplicates of multi-stream indexes.

Proposed fix
     (*dest).uncompressed_size = (*src).uncompressed_size;
     (*dest).total_size = (*src).total_size;
     (*dest).record_count = (*src).record_count;
     (*dest).index_list_size = (*src).index_list_size;
+    (*dest).checks = (*src).checks;
     let mut srcstream: *const index_stream = (*src).streams.leftmost as *const index_stream;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/index.rs` around lines 674 - 677, The lzma_index_dup
implementation copies uncompressed_size, total_size, record_count, and
index_list_size but forgets to copy the checks bitset, causing
lzma_index_checks() to return incomplete data for duplicated multi-stream
indexes; update lzma_index_dup to copy the checks field from (*src).checks to
(*dest).checks (or perform an appropriate clone if checks is an owned bitset) so
the duplicated index retains the same checks metadata as the source.

320-339: ⚠️ Potential issue | 🟠 Major

Arithmetic overflow can bypass LZMA_VLI_MAX validation.

The cumulative wrapping_add operations at lines 327-330 can overflow u64 before the > LZMA_VLI_MAX check at line 331. An overflow could wrap a huge file size back to a small valid-looking number, causing the function to return an incorrect result instead of LZMA_VLI_UNKNOWN.

Proposed fix using checked arithmetic
 unsafe extern "C" fn index_file_size(
     compressed_base: lzma_vli,
     unpadded_sum: lzma_vli,
     record_count: lzma_vli,
     index_list_size: lzma_vli,
     stream_padding: lzma_vli,
 ) -> lzma_vli {
-    let mut file_size: lzma_vli = compressed_base
-        .wrapping_add((2 * LZMA_STREAM_HEADER_SIZE) as lzma_vli)
-        .wrapping_add(stream_padding)
-        .wrapping_add(vli_ceil4(unpadded_sum));
-    if file_size > LZMA_VLI_MAX {
-        return LZMA_VLI_UNKNOWN;
-    }
-    file_size = file_size.wrapping_add(index_size(record_count, index_list_size));
-    if file_size > LZMA_VLI_MAX {
-        return LZMA_VLI_UNKNOWN;
-    }
-    file_size
+    let mut file_size = match compressed_base
+        .checked_add((2 * LZMA_STREAM_HEADER_SIZE) as lzma_vli)
+        .and_then(|v| v.checked_add(stream_padding))
+        .and_then(|v| v.checked_add(vli_ceil4(unpadded_sum)))
+    {
+        Some(v) if v <= LZMA_VLI_MAX => v,
+        _ => return LZMA_VLI_UNKNOWN,
+    };
+
+    match file_size.checked_add(index_size(record_count, index_list_size)) {
+        Some(v) if v <= LZMA_VLI_MAX => v,
+        _ => LZMA_VLI_UNKNOWN,
+    }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/index.rs` around lines 320 - 339, The index_file_size
function uses multiple wrapping_add calls that can overflow before comparing to
LZMA_VLI_MAX; replace the cumulative wrapping_add sequence with checked
arithmetic (use checked_add on compressed_base with (2 *
LZMA_STREAM_HEADER_SIZE) as lzma_vli, then checked_add stream_padding, then
checked_add vli_ceil4(unpadded_sum)), and if any checked_add returns None
immediately return LZMA_VLI_UNKNOWN; then perform a final checked_add with
index_size(record_count, index_list_size) and return LZMA_VLI_UNKNOWN on
overflow, otherwise return the computed value—apply to the symbols
index_file_size, vli_ceil4, and index_size.

259-264: ⚠️ Potential issue | 🔴 Critical

Clamp records to at least 1 to prevent zero-sized allocation.

If a caller passes records = 0, lzma_index_append will later allocate an index_group with allocated = 0, then immediately access records[0], writing past the end of the allocation.

Proposed fix
 pub unsafe extern "C" fn lzma_index_prealloc(i: *mut lzma_index, mut records: lzma_vli) {
+    if records == 0 {
+        records = 1;
+    }
     if records > PREALLOC_MAX as lzma_vli {
         records = PREALLOC_MAX as lzma_vli;
     }
     (*i).prealloc = records as size_t;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/index.rs` around lines 259 - 264, The function
lzma_index_prealloc should clamp the incoming records to a minimum of 1 to avoid
zero-sized allocations; modify lzma_index_prealloc so after capping to
PREALLOC_MAX it also does if records == 0 { records = 1; } (or equivalent
records = records.max(1)) before assigning (*i).prealloc = records as size_t,
ensuring the prealloc field is never set to 0.
liblzma-rs/src/common/stream_encoder_mt.rs (2)

982-990: ⚠️ Potential issue | 🔴 Critical

Missing SIZE_MAX bounds check before casting to size_t.

The check at line 982 validates against BLOCK_SIZE_MAX, but on 32-bit platforms BLOCK_SIZE_MAX (≈1.1×10¹⁵) far exceeds SIZE_MAX (≈4.3×10⁹). When block_size or outbuf_size_max is later cast to size_t (at lines 1209-1210), values exceeding SIZE_MAX silently truncate, causing undersized buffer allocations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_encoder_mt.rs` around lines 982 - 990, The code
validates *block_size against BLOCK_SIZE_MAX and UINT64_MAX but misses checking
against SIZE_MAX before later casting to size_t; update the validation in the
function that sets *outbuf_size_max (the block size check near
lzma_block_buffer_bound64) to also ensure *block_size and the computed
*outbuf_size_max do not exceed SIZE_MAX (or return
LZMA_OPTIONS_ERROR/LZMA_MEM_ERROR as appropriate) so subsequent casts to size_t
(used where *block_size and *outbuf_size_max are converted) cannot truncate;
reference symbols: *block_size, BLOCK_SIZE_MAX, UINT64_MAX, SIZE_MAX,
lzma_block_buffer_bound64, and *outbuf_size_max.

810-824: ⚠️ Potential issue | 🔴 Critical

LZMA_FULL_BARRIER and LZMA_FULL_FLUSH may return LZMA_STREAM_END prematurely.

Line 815 returns LZMA_STREAM_END for LZMA_FULL_BARRIER immediately when in_pos == in_size, without verifying that worker threads have finished encoding their blocks. Similarly, LZMA_FULL_FLUSH at line 822 only checks lzma_outq_is_empty but doesn't wait for workers to go idle. Callers could observe a completed flush/barrier while blocks are still being encoded.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_encoder_mt.rs` around lines 810 - 824, The code
returns LZMA_STREAM_END for LZMA_FULL_BARRIER and LZMA_FULL_FLUSH as soon as
in_pos == in_size (and for FULL_FLUSH only after lzma_outq_is_empty), which can
report completion while worker threads are still encoding; update the branch
handling LZMA_FULL_BARRIER and LZMA_FULL_FLUSH (the block that checks in_pos ==
in_size and calls lzma_outq_is_empty on
::core::ptr::addr_of_mut!((*coder).outq)) to also wait for worker threads to
finish encoding their queued blocks and reach idle before returning
LZMA_STREAM_END—use the existing worker synchronization/wait primitive(s) in
this module (the function that waits for workers to go idle or finishes
outstanding tasks) or add one if missing, and only return LZMA_STREAM_END after
both the out queue is empty and all workers are confirmed idle/finished.
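
The missing wait-for-idle step can be sketched with std's mutex/condvar pair; `WorkerSync` and its `pending` counter are illustrative names, not the coder's real fields.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// A flush/barrier should block until no worker is still encoding, not
// just until the input is consumed or the out queue looks empty.
struct WorkerSync { pending: Mutex<usize>, cond: Condvar }

fn wait_until_idle(s: &WorkerSync) {
    let mut pending = s.pending.lock().unwrap();
    while *pending > 0 {
        // Condvar handles spurious wakeups via the loop re-check.
        pending = s.cond.wait(pending).unwrap();
    }
}

fn worker_done(s: &WorkerSync) {
    let mut pending = s.pending.lock().unwrap();
    *pending -= 1;
    if *pending == 0 {
        s.cond.notify_all();
    }
}
```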

Comment thread liblzma-rs-sys/src/lib.rs
Comment on lines +776 to +778
pub extern "C" fn lzma_index_memused(i: *const lzma_index) -> u64 {
liblzma_rs::common::index::lzma_index_memused(i.cast())
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n liblzma-rs-sys/src/lib.rs | sed -n '770,785p'

Repository: youknowone/xz2-rs

Length of output: 681


🏁 Script executed:

cat -n liblzma-rs-sys/src/lib.rs | sed -n '815,850p'

Repository: youknowone/xz2-rs

Length of output: 1399


🏁 Script executed:

cat -n liblzma-rs-sys/src/lib.rs | sed -n '960,975p'

Repository: youknowone/xz2-rs

Length of output: 531


🏁 Script executed:

rg "lzma_memlimit_get" liblzma-rs-sys/src/lib.rs -A 2

Repository: youknowone/xz2-rs

Length of output: 204


Add unsafe keyword to raw-pointer getters.

These functions dereference raw pointers but lack unsafe marking, exposing pointer-validity requirements as safe APIs:

  • lzma_index_memused (line 776)
  • lzma_index_stream_count (line 822)
  • lzma_index_block_count (line 827)
  • lzma_index_size (line 832)
  • lzma_index_stream_size (line 837)
  • lzma_index_total_size (line 842)
  • lzma_index_hash_size (line 968)

Mark them as unsafe extern "C" to match the pattern used elsewhere in the file (e.g., lzma_memlimit_get, lzma_index_init, lzma_index_file_size).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs-sys/src/lib.rs` around lines 776 - 778, The listed functions
currently accept raw pointers but are declared safe; change their signatures to
be unsafe extern "C" fn so callers know they must uphold pointer
validity—specifically update lzma_index_memused, lzma_index_stream_count,
lzma_index_block_count, lzma_index_size, lzma_index_stream_size,
lzma_index_total_size, and lzma_index_hash_size to use unsafe extern "C"
(matching existing patterns like lzma_memlimit_get and lzma_index_init) and keep
the internal calls (e.g.,
liblzma_rs::common::index::lzma_index_memused(i.cast())) unchanged.
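The requested shape, sketched on a stand-in `Index` type (illustrative only; the real shim also carries `#[no_mangle]` and casts through `lzma_index`):

```rust
#[repr(C)]
pub struct Index {
    memused: u64,
}

// Raw-pointer getters must be `unsafe extern "C"`: the caller is
// responsible for passing a valid, properly aligned pointer.
pub unsafe extern "C" fn index_memused(i: *const Index) -> u64 {
    unsafe { (*i).memused }
}

fn main() {
    let idx = Index { memused: 1024 };
    // The unsafe block documents that we uphold the pointer contract.
    let n = unsafe { index_memused(&idx) };
    assert_eq!(n, 1024);
}
```

With the `unsafe` keyword in place, callers can no longer invoke the getter from safe code without acknowledging the pointer contract.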

Comment thread liblzma-rs/src/alloc.rs
Comment on lines +96 to +100
if !allocator.is_null() && (*allocator).alloc.is_some() {
(*allocator).alloc.unwrap()((*allocator).opaque, 1, size)
} else {
rust_alloc_impl(size as usize, RUST_ALLOC_ALIGN, false)
}

⚠️ Potential issue | 🔴 Critical

Require alloc and free as a matched pair.

The internal_alloc_* helpers switch to the caller allocator whenever alloc is Some, but internal_free() switches independently on free. If either callback is missing, allocations and frees cross allocators, and rust_free_impl() will interpret foreign memory as RustAllocHeader.

Gate both paths on alloc.is_some() && free.is_some(), or reject partial allocators up front.

Also applies to: 108-116, 120-127, 138-159, 162-167

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/alloc.rs` around lines 96 - 100, The allocation helpers
currently switch to the caller allocator when (*allocator).alloc.is_some() but
internal_free() switches independently on (*allocator).free, allowing mismatched
alloc/free pairs; update the logic so allocator usage is only chosen when both
(*allocator).alloc.is_some() && (*allocator).free.is_some(), or reject partial
allocators up front (e.g., return error or treat as null) so allocations and
frees always use the same implementation; apply this change consistently in the
internal_alloc_* helpers and internal_free(), and adjust calls involving
rust_alloc_impl, rust_free_impl and the allocator struct checks to reference
both alloc and free together.
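The gating logic can be sketched like this; `Allocator` and `use_custom` are illustrative stand-ins for `lzma_allocator` and the internal helpers, using plain function types instead of the real C callback signatures:

```rust
// Illustrative mirror of lzma_allocator: both callbacks are optional.
#[derive(Clone, Copy)]
struct Allocator {
    alloc: Option<fn(usize) -> usize>, // stand-in for the real alloc fn pointer
    free: Option<fn(usize)>,           // stand-in for the real free fn pointer
}

// Use the caller's allocator only when it supplies a matched pair;
// otherwise fall back to the internal Rust allocator for BOTH paths,
// so an allocation is never freed by the wrong implementation.
fn use_custom(a: Option<&Allocator>) -> bool {
    match a {
        Some(a) => a.alloc.is_some() && a.free.is_some(),
        None => false,
    }
}

fn main() {
    let full = Allocator { alloc: Some(|n| n), free: Some(|_| ()) };
    let partial = Allocator { alloc: Some(|n| n), free: None };
    assert!(use_custom(Some(&full)));
    assert!(!use_custom(Some(&partial))); // would otherwise cross allocators
    assert!(!use_custom(None));
}
```

Evaluating both callbacks at the same decision point is what keeps alloc and free from diverging.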

Comment thread liblzma-rs/src/check/check.rs Outdated
Comment on lines +56 to +75
pub unsafe extern "C" fn lzma_check_update(
check: *mut lzma_check_state,
type_0: lzma_check,
buf: *const u8,
size: size_t,
) {
if check.is_null() {
return;
}

match type_0 {
LZMA_CHECK_CRC32 => {
(*check).state.crc32 = lzma_crc32(buf, size, (*check).state.crc32);
}
LZMA_CHECK_CRC64 => {
(*check).state.crc64 = lzma_crc64(buf, size, (*check).state.crc64);
}
LZMA_CHECK_SHA256 => {
lzma_sha256_update(buf, size, check);
}

⚠️ Potential issue | 🟠 Major

Guard buf when size != 0.

Lines 68, 71, and 74 forward buf straight to the checksum backends. A C caller can currently trigger UB with buf == NULL && size != 0.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/check.rs` around lines 56 - 75, In lzma_check_update,
guard against a NULL buf when size != 0 by returning early; add a check like "if
size != 0 && buf.is_null() { return; }" before the match so the calls to
lzma_crc32, lzma_crc64, and lzma_sha256_update (and use of lzma_check_state)
never receive a NULL buffer from C and thus avoid UB.
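A minimal sketch of the guard, with a toy byte-sum standing in for the real checksum backends:

```rust
// Sketch only: with buf == NULL and size != 0, bail out before the
// checksum backends ever see the pointer.
unsafe fn check_update(crc: &mut u32, buf: *const u8, size: usize) {
    if size != 0 && buf.is_null() {
        return; // nothing safe to hash
    }
    unsafe {
        for i in 0..size {
            // Stand-in for lzma_crc32 / lzma_crc64 / lzma_sha256_update.
            *crc = crc.wrapping_add(*buf.add(i) as u32);
        }
    }
}

fn main() {
    let mut crc = 0u32;
    // Defined behavior: the guard turns the bad call into a no-op.
    unsafe { check_update(&mut crc, std::ptr::null(), 8) };
    assert_eq!(crc, 0);

    let data = [1u8, 2, 3];
    unsafe { check_update(&mut crc, data.as_ptr(), data.len()) };
    assert_eq!(crc, 6);
}
```

The `buf == NULL && size == 0` combination stays allowed, matching the usual C convention for empty updates.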

Comment on lines +84 to +90
match type_0 {
LZMA_CHECK_CRC32 => {
(*check).buffer.u32_0[0] = (*check).state.crc32;
}
LZMA_CHECK_CRC64 => {
(*check).buffer.u64_0[0] = (*check).state.crc64;
}

⚠️ Potential issue | 🟠 Major

Write CRC outputs in little-endian bytes.

Lines 86 and 89 store native-endian integers into the union. block_buffer_encoder.rs later copies buffer.u8_0 verbatim, so big-endian targets will emit invalid Check-field bytes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/check.rs` around lines 84 - 90, The code currently
stores native-endian integers into (*check).buffer.u32_0 and u64_0, which emits
wrong bytes on big-endian platforms; update the LZMA_CHECK_CRC32 and
LZMA_CHECK_CRC64 match arms to write little-endian bytes explicitly by
converting (*check).state.crc32 and crc64 to little-endian byte arrays (e.g.,
crc32.to_le_bytes() / crc64.to_le_bytes()) and copying those bytes into
(*check).buffer.u8_0 (4 bytes for CRC32, 8 bytes for CRC64) so
block_buffer_encoder.rs, which reads buffer.u8_0 verbatim, always sees
little-endian Check-field bytes.
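A small illustration of the endianness-safe write; the helper names are hypothetical, but `to_le_bytes()` is the standard-library conversion the fix relies on:

```rust
// Emitting the Check field byte-for-byte: convert the native-endian
// CRC values with to_le_bytes() so big-endian hosts produce the same
// on-disk bytes as little-endian ones.
fn crc32_check_bytes(crc: u32) -> [u8; 4] {
    crc.to_le_bytes()
}

fn crc64_check_bytes(crc: u64) -> [u8; 8] {
    crc.to_le_bytes()
}

fn main() {
    // Least significant byte first, regardless of host endianness.
    assert_eq!(crc32_check_bytes(0x1122_3344), [0x44, 0x33, 0x22, 0x11]);
    assert_eq!(crc64_check_bytes(1), [1, 0, 0, 0, 0, 0, 0, 0]);
}
```

Copying these byte arrays into `buffer.u8_0` sidesteps the union's native-endian integer views entirely.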

Comment thread liblzma-rs/src/check/crc32_fast.rs Outdated
Comment on lines +353 to +355
pub unsafe extern "C" fn lzma_crc32(buf: *const u8, size: size_t, crc: u32) -> u32 {
lzma_crc32_generic(buf, size, crc)
}

⚠️ Potential issue | 🟡 Minor

Add #[no_mangle] for C linkage.

The function lzma_crc32 is declared pub unsafe extern "C" but lacks #[no_mangle], so C callers expecting the symbol lzma_crc32 won't find it.

🔧 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_crc32(buf: *const u8, size: size_t, crc: u32) -> u32 {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub unsafe extern "C" fn lzma_crc32(buf: *const u8, size: size_t, crc: u32) -> u32 {
lzma_crc32_generic(buf, size, crc)
}
#[no_mangle]
pub unsafe extern "C" fn lzma_crc32(buf: *const u8, size: size_t, crc: u32) -> u32 {
lzma_crc32_generic(buf, size, crc)
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/check/crc32_fast.rs` around lines 353 - 355, The exported C
symbol is being mangled because the function lzma_crc32 is missing the
#[no_mangle] attribute; add #[no_mangle] directly above the pub unsafe extern
"C" fn lzma_crc32(...) declaration (keeping the existing signature and call to
lzma_crc32_generic) so the symbol is emitted exactly as `lzma_crc32` for C
callers.

Comment thread liblzma-rs/src/common/stream_encoder.rs Outdated
Comment on lines +231 to +237
} else if (*coder).sequence <= SEQ_BLOCK_ENCODE {
let ret = (*coder).block_encoder.update.unwrap()(
(*coder).block_encoder.coder,
allocator,
filters,
reversed_filters,
);

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, find and examine the file
find . -name "stream_encoder.rs" -type f



🏁 Script executed:

# Read the relevant lines to see the actual code
cat -n liblzma-rs/src/common/stream_encoder.rs | sed -n '220,250p'

Repository: youknowone/xz2-rs

Length of output: 1255


🏁 Script executed:

# Look at the type definition of block_encoder to understand its structure
rg "block_encoder" liblzma-rs/src/common/stream_encoder.rs -B 5 -A 5 | head -100



🏁 Script executed:

# Find the definition of the coder type and block_encoder field
rg "struct.*coder\|block_encoder:" liblzma-rs/src/common/ -A 10 | head -150

Repository: youknowone/xz2-rs

Length of output: 43


🏁 Script executed:

# Search for lzma_ret type definition
rg "type lzma_ret|enum lzma_ret" liblzma-rs/src/



🏁 Script executed:

# Find the definition of lzma_next_coder
rg "struct lzma_next_coder|pub struct lzma_next_coder" liblzma-rs/src/ -A 15



🏁 Script executed:

# Also search in types.rs for the definition
cat -n liblzma-rs/src/types.rs | grep -A 20 "lzma_next_coder"



🏁 Script executed:

# Check how other optional fields are handled in the codebase
rg "\.get_progress\.|\.get_check\.|\.memconfig\." liblzma-rs/src/ -B 2 -A 2 | head -50



🏁 Script executed:

# Look for patterns of guarding optional fields before calling them
rg "\.is_some\(\)|if let Some" liblzma-rs/src/ -B 2 -A 2 | head -80



🏁 Script executed:

# Check if there are other unwrap() calls on optional fields in stream_encoder.rs
rg "\.unwrap\(\)" liblzma-rs/src/common/stream_encoder.rs -B 3 -A 1



Guard the optional update callback before calling it.

block_encoder.update is Option<...> and will panic if called with .unwrap() when None. Since this is an unsafe extern "C" function, panicking across the FFI boundary is unsound. Return an error code (e.g., LZMA_PROG_ERROR) if the update callback is not available, consistent with the error handling pattern used throughout the function.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_encoder.rs` around lines 231 - 237, The call to
(*coder).block_encoder.update.unwrap() may panic if update is None; modify the
branch where (*coder).sequence <= SEQ_BLOCK_ENCODE to first check whether
(*coder).block_encoder.update.is_some() (or pattern-match Option) and, if it is
None, return the appropriate error code (e.g., LZMA_PROG_ERROR) instead of
unwrapping; if present, call the callback via the safe dereference (e.g.,
update.unwrap()(...)) as before. Ensure to reference and use the existing
error-handling path in the same function so the returned error code matches
surrounding behavior.
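The pattern, sketched with stand-in types (`BlockEncoder` and the error values are illustrative, not the crate's definitions):

```rust
// Error codes mirrored from the lzma_ret convention (values illustrative).
const LZMA_OK: i32 = 0;
const LZMA_PROG_ERROR: i32 = 11;

struct BlockEncoder {
    update: Option<fn() -> i32>, // stand-in for the real update signature
}

// Guard the optional callback instead of unwrap(): a panic here would
// unwind across the extern "C" boundary, which is undefined behavior.
fn call_update(enc: &BlockEncoder) -> i32 {
    match enc.update {
        Some(update) => update(),
        None => LZMA_PROG_ERROR,
    }
}

fn main() {
    let ok = BlockEncoder { update: Some(|| LZMA_OK) };
    let missing = BlockEncoder { update: None };
    assert_eq!(call_update(&ok), LZMA_OK);
    assert_eq!(call_update(&missing), LZMA_PROG_ERROR);
}
```

The same match-or-error shape applies to every `Option` callback field reached from an `extern "C"` entry point.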

Comment on lines +13 to +24
pub unsafe extern "C" fn lzma_stream_header_encode(
options: *const lzma_stream_flags,
out: *mut u8,
) -> lzma_ret {
if (*options).version != 0 {
return LZMA_OPTIONS_ERROR;
}
core::ptr::copy_nonoverlapping(
::core::ptr::addr_of!(lzma_header_magic) as *const u8,
out as *mut u8,
core::mem::size_of::<[u8; 6]>(),
);

⚠️ Potential issue | 🟠 Major

Validate options and out at the FFI boundary.

Line 17 and Line 47 dereference options immediately, and the later writes/copies assume out is valid. A null caller can't be reported cleanly today; it just trips UB.

Also applies to: 43-67

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_encoder.rs` around lines 13 - 24, Add
explicit null-pointer checks at the FFI boundary before dereferencing `options`
or using `out` in the stream header encoder/decoder functions (e.g.,
lzma_stream_header_encode) and return an appropriate lzma error code instead of
causing UB; specifically, verify `options != core::ptr::null()` and `out !=
core::ptr::null()` at the top of lzma_stream_header_encode (and similarly in the
other function(s) that dereference these pointers in this file) and return a
clear error (e.g., LZMA_PROG_ERROR or LZMA_OPTIONS_ERROR per project convention)
when either pointer is null so callers get a clean, defined error instead of
undefined behavior.
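A sketch of the boundary check on a stand-in type; `StreamFlags`, `header_encode`, and the error values are illustrative, and the real function distinguishes LZMA_OPTIONS_ERROR from LZMA_PROG_ERROR:

```rust
const LZMA_OK: i32 = 0;
const LZMA_PROG_ERROR: i32 = 11;

#[repr(C)]
struct StreamFlags {
    version: u32,
}

// Validate every raw pointer before the first dereference, so a null
// caller gets a defined error code instead of undefined behavior.
unsafe extern "C" fn header_encode(options: *const StreamFlags, out: *mut u8) -> i32 {
    if options.is_null() || out.is_null() {
        return LZMA_PROG_ERROR;
    }
    unsafe {
        if (*options).version != 0 {
            return LZMA_PROG_ERROR; // real code returns LZMA_OPTIONS_ERROR here
        }
        *out = 0xFD; // first byte of the xz magic, as a stand-in write
    }
    LZMA_OK
}

fn main() {
    let flags = StreamFlags { version: 0 };
    let mut byte = 0u8;
    assert_eq!(unsafe { header_encode(std::ptr::null(), &mut byte) }, LZMA_PROG_ERROR);
    assert_eq!(unsafe { header_encode(&flags, std::ptr::null_mut()) }, LZMA_PROG_ERROR);
    assert_eq!(unsafe { header_encode(&flags, &mut byte) }, LZMA_OK);
    assert_eq!(byte, 0xFD);
}
```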

Comment on lines +265 to +269
core::ptr::write_bytes(
(*mf).buffer.offset((*mf).size as isize) as *mut u8,
0 as u8,
0,
);

🧹 Nitpick | 🔵 Trivial

Another no-op write_bytes call (count is 0).

Same C2Rust artifact as above—the count is 0, so this does nothing.

♻️ Proposed cleanup
-        core::ptr::write_bytes(
-            (*mf).buffer.offset((*mf).size as isize) as *mut u8,
-            0 as u8,
-            0,
-        );
📝 Committable suggestion


Suggested change
core::ptr::write_bytes(
(*mf).buffer.offset((*mf).size as isize) as *mut u8,
0 as u8,
0,
);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder.rs` around lines 265 - 269, The
core::ptr::write_bytes call is a no-op because the count argument is 0; remove
this redundant write (the three-line block that calls core::ptr::write_bytes
with (*mf).buffer.offset((*mf).size as isize), 0u8, 0) to clean up the C2Rust
artifact—look for the usage around the local/mutable `mf` (fields `buffer` and
`size`) in lz_encoder.rs and delete that call or replace it with the intended
behavior if a non-zero clear was actually required.

Comment thread liblzma-rs/src/lz/lz_encoder.rs Outdated
Comment on lines +317 to +349
pub extern "C" fn lzma_lz_encoder_memusage(lz_options: *const lzma_lz_options) -> u64 {
let mut mf: lzma_mf = lzma_mf_s {
buffer: core::ptr::null_mut(),
size: 0,
keep_size_before: 0,
keep_size_after: 0,
offset: 0,
read_pos: 0,
read_ahead: 0,
read_limit: 0,
write_pos: 0,
pending: 0,
find: None,
skip: None,
hash: core::ptr::null_mut(),
son: core::ptr::null_mut(),
cyclic_pos: 0,
cyclic_size: 0,
hash_mask: 0,
depth: 0,
nice_len: 0,
match_len_max: 0,
action: LZMA_RUN,
hash_count: 0,
sons_count: 0,
};
if unsafe { lz_encoder_prepare(::core::ptr::addr_of_mut!(mf), core::ptr::null(), lz_options) } {
return UINT64_MAX;
}
((mf.hash_count as u64) + (mf.sons_count as u64)) * core::mem::size_of::<u32>() as u64
+ mf.size as u64
+ core::mem::size_of::<lzma_coder>() as u64
}

⚠️ Potential issue | 🟠 Major

Add #[no_mangle] to lzma_lz_encoder_memusage for C linkage.

This function is pub extern "C" but lacks #[no_mangle].

🔧 Proposed fix
+#[no_mangle]
 pub extern "C" fn lzma_lz_encoder_memusage(lz_options: *const lzma_lz_options) -> u64 {
📝 Committable suggestion


Suggested change
pub extern "C" fn lzma_lz_encoder_memusage(lz_options: *const lzma_lz_options) -> u64 {
let mut mf: lzma_mf = lzma_mf_s {
buffer: core::ptr::null_mut(),
size: 0,
keep_size_before: 0,
keep_size_after: 0,
offset: 0,
read_pos: 0,
read_ahead: 0,
read_limit: 0,
write_pos: 0,
pending: 0,
find: None,
skip: None,
hash: core::ptr::null_mut(),
son: core::ptr::null_mut(),
cyclic_pos: 0,
cyclic_size: 0,
hash_mask: 0,
depth: 0,
nice_len: 0,
match_len_max: 0,
action: LZMA_RUN,
hash_count: 0,
sons_count: 0,
};
if unsafe { lz_encoder_prepare(::core::ptr::addr_of_mut!(mf), core::ptr::null(), lz_options) } {
return UINT64_MAX;
}
((mf.hash_count as u64) + (mf.sons_count as u64)) * core::mem::size_of::<u32>() as u64
+ mf.size as u64
+ core::mem::size_of::<lzma_coder>() as u64
}
#[no_mangle]
pub extern "C" fn lzma_lz_encoder_memusage(lz_options: *const lzma_lz_options) -> u64 {
let mut mf: lzma_mf = lzma_mf_s {
buffer: core::ptr::null_mut(),
size: 0,
keep_size_before: 0,
keep_size_after: 0,
offset: 0,
read_pos: 0,
read_ahead: 0,
read_limit: 0,
write_pos: 0,
pending: 0,
find: None,
skip: None,
hash: core::ptr::null_mut(),
son: core::ptr::null_mut(),
cyclic_pos: 0,
cyclic_size: 0,
hash_mask: 0,
depth: 0,
nice_len: 0,
match_len_max: 0,
action: LZMA_RUN,
hash_count: 0,
sons_count: 0,
};
if unsafe { lz_encoder_prepare(::core::ptr::addr_of_mut!(mf), core::ptr::null(), lz_options) } {
return UINT64_MAX;
}
((mf.hash_count as u64) + (mf.sons_count as u64)) * core::mem::size_of::<u32>() as u64
    + mf.size as u64
    + core::mem::size_of::<lzma_coder>() as u64
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder.rs` around lines 317 - 349, The function
lzma_lz_encoder_memusage is declared pub extern "C" but missing #[no_mangle], so
add the #[no_mangle] attribute immediately above the lzma_lz_encoder_memusage
function declaration to ensure stable C linkage and unmangled symbol name while
keeping the pub extern "C" signature unchanged; after adding the attribute,
rebuild/run tests to confirm the symbol is exported as expected.

Comment thread liblzma-rs/src/lz/lz_encoder.rs Outdated
Comment on lines +394 to +407
pub unsafe extern "C" fn lzma_lz_encoder_init(
next: *mut lzma_next_coder,
allocator: *const lzma_allocator,
filters: *const lzma_filter_info,
lz_init: Option<
unsafe extern "C" fn(
*mut lzma_lz_encoder,
*const lzma_allocator,
lzma_vli,
*const c_void,
*mut lzma_lz_options,
) -> lzma_ret,
>,
) -> lzma_ret {

⚠️ Potential issue | 🟠 Major

Add #[no_mangle] to lzma_lz_encoder_init for C linkage.

This function is pub unsafe extern "C" but lacks #[no_mangle].

🔧 Proposed fix
+#[no_mangle]
 pub unsafe extern "C" fn lzma_lz_encoder_init(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder.rs` around lines 394 - 407, The exported C ABI
function lzma_lz_encoder_init is declared as pub unsafe extern "C" but missing
the #[no_mangle] attribute, so add #[no_mangle] directly above the
lzma_lz_encoder_init function declaration to prevent Rust from mangling its
symbol name and ensure stable C linkage; keep the existing pub unsafe extern "C"
signature and all parameters/return type unchanged.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 24

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@AGENTS.md`:
- Around line 6-18: Update the AGENTS.md wording to explicitly ban any
#[no_mangle] C-ABI exports and public C-ABI entry points in liblzma-rs: state
that liblzma-rs must not define #[no_mangle] extern "C" functions or expose pub
unsafe extern "C" helpers as public FFI entry points (those belong in
liblzma-rs-sys); you may permit internal non-pub extern "C" helpers for
implementation-only interop if absolutely needed but prefer normal Rust
functions and keep all actual C ABI surface (wrappers) in liblzma-rs-sys,
referencing the symbols `#[no_mangle]`, `extern "C"`, and `pub unsafe extern
"C"` so the rule is unambiguous.

In `@Cargo.lock.msrv`:
- Around line 429-435: The Cargo.lock.msrv entry for the package liblzma-rs
currently lists only libc but must also list memchr to match
liblzma-rs/Cargo.toml; regenerate the lockfile so the [[package]] block for
liblzma-rs includes both "libc" and "memchr" in its dependencies array (e.g., by
running the appropriate cargo lockfile regeneration/update so the liblzma-rs
entry lists memchr alongside libc).

In `@Cargo.toml`:
- Line 16: Update the advertised MSRV in Cargo.toml to at least "1.65" because
the code in liblzma-rs uses the `let ... else` syntax (see
liblzma-rs/src/common/block_buffer_decoder.rs and
liblzma-rs/src/common/stream_buffer_decoder.rs). Edit the rust-version key in
Cargo.toml (the `rust-version = "1.63"` entry) and set it to "1.65" or newer to
match the stabilized language feature and avoid build failures.
- Around line 35-38: The benchmark feature doesn't enable the two optional
backends that backend_comparison.rs imports (liblzma_c_sys and liblzma_sys), so
add those dependency feature flags to the bench feature in Cargo.toml: update
the "bench" feature to include "liblzma_c_sys" and "liblzma_sys" (so running
cargo bench --features bench builds both backends used by
backend_comparison.rs).

In `@liblzma-rs/src/alloc.rs`:
- Around line 24-25: The offset calculation double-applies alignment:
header_size is already increased by align - 1 then passed to round_up(), causing
extra padding; change the match arm so you compute offset =
round_up(header_size, align) (and still perform an overflow check on
header_size, e.g. via header_size.checked_add(align - 1) solely to detect
overflow) instead of passing the already-adjusted value into round_up; update
the code that sets offset (the match on header_size.checked_add(align - 1) and
the call to round_up) to use round_up(header_size, align) while preserving
overflow handling.

In `@liblzma-rs/src/check/sha256.rs`:
- Around line 583-598: The code in lzma_sha256_finish currently constructs bytes
with manual shifts which unconditionally byte-swaps and breaks on big-endian
targets; replace the manual bit-twiddling for the padded length and final digest
words by using endianness-aware conversions: call
state.sha256.size.to_be_bytes() and state.sha256.state[i].to_be_bytes() (or
equivalent htobe64/htobe32) and store those bytes into buffer.u64_0[7] and
buffer.u32_0[i] via the appropriate from_be_bytes conversions (e.g.,
u64::from_be_bytes(...) and u32::from_be_bytes(...)) so the length and digest
are emitted in SHA-256's required big-endian/network order regardless of host
endianness.

In `@liblzma-rs/src/common/alone_decoder.rs`:
- Around line 176-223: lzma_alone_decoder_init and lzma_alone_decoder currently
dereference incoming C pointers (next, strm) before validating them; add null
guards at the start of these public entry points to check for null pointers and
return the standard API error (e.g., LZMA_PROG_ERROR) without dereferencing if
any required pointer is null, and only proceed to use lzma_next_end, set
(*next).init, or read allocator/internal fields after the null checks; apply the
same pattern to the other ALONE entry snippet noted around the second function
(the block referenced as also applies to: 275-285).

In `@liblzma-rs/src/common/block_encoder.rs`:
- Around line 38-48: Replace the unchecked unwrap on (*coder).next.code with a
safe check: retrieve the function pointer via if let Some(code_fn) =
(*coder).next.code { ... } and call code_fn(...) exactly as currently done; if
it is None, return an appropriate lzma_ret error (e.g. LZMA_PROG_ERROR or the
crate's chosen sentinel) instead of panicking. This touches the call site that
currently binds ret: lzma_ret = (*coder).next.code.unwrap()(...) — change it to
use the safe variable name (code_fn) and handle the missing-case return.

In `@liblzma-rs/src/common/file_info.rs`:
- Around line 61-76: The current in-buffer seek arithmetic in the conditional
for target inside pos_min..pos_max uses wrapping_add with a potentially negative
difference and causes large wraps when target_pos < (*coder).file_cur_pos; fix
by computing the delta directionally instead of adding a signed-negative value:
inside the if (target_pos >= pos_min && target_pos <= pos_max) branch, check if
target_pos >= (*coder).file_cur_pos and then adjust *in_pos by adding
(target_pos - (*coder).file_cur_pos) as size_t, otherwise subtract
((*coder).file_cur_pos - target_pos) as size_t from *in_pos; keep setting
external_seek_needed = false and (*coder).file_cur_pos = target_pos, and ensure
casts use unsigned sizes consistently for the addition/subtraction.

In `@liblzma-rs/src/common/filter_common.rs`:
- Around line 303-325: The accumulation loop currently uses wrapping_add (on
total and final total.wrapping_add(LZMA_MEMUSAGE_BASE)) which can silently
overflow; change these to use checked addition so any overflow returns
UINT64_MAX instead of wrapping. Specifically, in the loop replace total =
total.wrapping_add(1024) and total = total.wrapping_add(usage) with checked
additions (e.g., total.checked_add(...).ok_or(UINT64_MAX) style) and if
checked_add returns None immediately return UINT64_MAX; likewise replace the
final total.wrapping_add(LZMA_MEMUSAGE_BASE) with a checked add and return
UINT64_MAX on overflow. Ensure you reference the same symbols (total, usage,
LZMA_MEMUSAGE_BASE, UINT64_MAX, coder_find, lzma_filter_coder, memusage) and
preserve existing null/UINT64_MAX early returns.
- Around line 120-126: The loops in lzma_validate_chain (and the other mentioned
loops) dereference src[i] / filters[i] before ensuring i is within the
fixed-array bound (LZMA_FILTERS_MAX + 1); fix by moving the bounds check to
occur before every src.offset(i) or filters.offset(i) read (avoid post-increment
derefs like filters[i++] without a prior check), i.e., ensure i is <
(LZMA_FILTERS_MAX + 1) (or check i == LZMA_FILTERS_MAX as appropriate) before
doing any offset(i) access and return LZMA_OPTIONS_ERROR or equivalent when the
bound would be exceeded; apply the same change to the other occurrences
referenced (around the other loops at the noted locations) so no dereference
happens past the array end.
- Around line 255-275: The code currently calls coder_find.unwrap() (the Option
lzma_filter_find) in lzma_raw_coder_init (both encoder and decoder paths) and in
lzma_raw_coder_memusage which will panic if the callback is None; replace these
unwraps with an explicit check (e.g., if coder_find.is_none() / match
coder_find) and return LZMA_PROG_ERROR from lzma_raw_coder_init when the
callback is missing, and return UINT64_MAX from lzma_raw_coder_memusage,
otherwise call the callback via the safe reference to locate the
lzma_filter_coder; update uses of coder_find.unwrap() in the encoder branch
(where fc is looked up), decoder branch (fc_0 lookup), and memusage lookup
accordingly.

In `@liblzma-rs/src/common/filter_decoder.rs`:
- Around line 344-351: Guard the incoming pointer at the top of
lzma_properties_decode: check if filter.is_null() and if so return
LZMA_PROG_ERROR; only after that dereference filter to set (*filter).options =
core::ptr::null_mut() and to read (*filter).id for decoder_find. In short, move
the null-check before any use of filter and return LZMA_PROG_ERROR on null, then
proceed with existing logic that calls decoder_find and assigns options.

In `@liblzma-rs/src/common/index.rs`:
- Around line 528-533: The overflow check uses wrapping_add (e.g.,
dest_file_size.wrapping_add(lzma_index_file_size(src)) and
(*dest).uncompressed_size.wrapping_add((*src).uncompressed_size)) which can wrap
to 0 and mis-detect overflow; replace these with proper pre-checks like testing
a > LZMA_VLI_MAX - b (e.g., check if dest_file_size > LZMA_VLI_MAX -
lzma_index_file_size(src) and if (*dest).uncompressed_size > LZMA_VLI_MAX -
(*src).uncompressed_size) before performing the addition, and update the
corresponding concatenation/assignment code paths (the code that sets dest file
size and uncompressed_size later around the 591–597 region) to only add after
the safe check or use checked_add() and handle None as overflow.

In `@liblzma-rs/src/common/microlzma_decoder.rs`:
- Around line 102-118: The code currently calls .unwrap() on the FFI callback
pointer (*coder).lzma.code inside microlzma_decoder (the block creating
dummy_in/dummy_in_pos and invoking the code callback), which can panic if the
filter didn't set a code callback; change this to defensively check whether
(*coder).lzma.code is Some(cb) before invoking it, and if it's None return an
appropriate error (e.g. LZMA_PROG_ERROR) instead of panicking; ensure you use
the unwrapped cb variable only after checking and keep the rest of the call
parameters (dummy_in, dummy_in_pos, out, out_pos, out_size, LZMA_RUN) unchanged
so behavior remains the same when the callback exists.

In `@liblzma-rs/src/common/stream_buffer_decoder.rs`:
- Around line 61-84: The branch that maps LZMA_OK to a data/buf error runs after
you reset *in_pos and *out_pos to in_start/out_start, so the check *in_pos ==
in_size compares the reset value and misclassifies full-input OK as BUF_ERROR;
in stream_buffer_decoder.rs adjust the logic in the block handling ret by
performing the ret == LZMA_OK check (and mapping to LZMA_DATA_ERROR when the
caller's original in_pos reached in_size) before you restore *in_pos/*out_pos
(or capture the original in_pos/in_size comparison into a local boolean
beforehand), and keep the existing MEMLIMIT_ERROR handling and cleanup paths
(retain lzma_next_end call and memconfig invocation) unchanged.

In `@liblzma-rs/src/common/stream_buffer_encoder.rs`:
- Around line 4-5: The constant INDEX_BOUND uses incorrect operator precedence
so the bitmask (& !(3)) only applies to the literal 3; update INDEX_BOUND to
apply the mask to the full sum by grouping the sum in parentheses (i.e., compute
(1 + 1 + 2 * LZMA_VLI_BYTES_MAX + 4 + 3) & !(3)) so it rounds the entire value
to a 4-byte boundary, and leave HEADERS_BOUND (which references INDEX_BOUND and
LZMA_STREAM_HEADER_SIZE) unchanged.

In `@liblzma-rs/src/common/stream_decoder_mt.rs`:
- Around line 1422-1425: Check for a null options pointer before any
dereference: in stream_decoder_mt_init (and similarly at the other occurrence
around the region mentioned) add an early guard that if options.is_null() (or
options == NULL) returns LZMA_PROG_ERROR instead of dereferencing
(*options).threads; also update the caller lzma_stream_decoder_mt to ensure it
forwards nulls safely or propagates the same LZMA_PROG_ERROR. This prevents UB
from a null FFI input while preserving the existing LZMA_OPTIONS_ERROR path for
invalid thread counts.

In `@liblzma-rs/src/common/stream_encoder_mt.rs`:
- Around line 1344-1349: The code multiplies filters_memusage by
(*options).threads using wrapping_mul, which silently overflows; replace the
wrapping_mul call on filters_memusage with saturating_mul so that overflow
yields UINT64_MAX and preserves the API contract that UINT64_MAX denotes an
unacceptable memory requirement — i.e. after calling lzma_raw_encoder_memusage
and checking for UINT64_MAX, change filters_memusage =
filters_memusage.wrapping_mul((*options).threads as u64) to use saturating_mul
(filters_memusage = filters_memusage.saturating_mul((*options).threads as u64))
and keep the subsequent checks against UINT64_MAX.

In `@liblzma-rs/src/common/stream_encoder.rs`:
- Around line 116-127: The code currently calls .unwrap() on block_encoder.code
which can panic across the FFI boundary; change the call site (the block that
computes ret using (*coder).block_encoder.code) to first check whether
(*coder).block_encoder.code is Some, using if let or match, and if it is None
return an appropriate lzma_ret error (e.g., LZMA_PROG_ERROR) instead of
panicking; when Some, call the function pointer with the existing arguments
(block_encoder.coder, allocator, in_0, in_pos, in_size, out, out_pos, out_size,
CONVERT[action as usize]) and assign to ret as before, ensuring no unwrap() is
used directly.
- Around line 144-158: The code calls .unwrap() on (*coder).index_encoder.code
which can panic; change it to check for presence and handle the None case
explicitly. Locate the call site in stream_encoder.rs where index_encoder.code
is used (inside the match arm for variant 4) and replace the .unwrap() with a
safe match or if-let that binds the function (e.g., code_fn) and calls it; if
index_encoder.code is None, return an appropriate lzma_ret error such as
LZMA_PROG_ERROR (or another suitable sentinel used elsewhere) instead of
panicking. Ensure the bound function is then invoked with the same arguments
previously passed to the unwrap result.
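Both bullets above describe the same pattern: never `.unwrap()` an optional function pointer across the FFI boundary. A minimal sketch, with illustrative `Coder`/`run` names and a simplified callback signature:

```rust
// Sketch: replace `.unwrap()` on an optional code fn pointer with an
// explicit check that returns an error code instead of panicking.
// The struct shape, fn signature, and error values are illustrative.
const LZMA_OK: i32 = 0;
const LZMA_PROG_ERROR: i32 = 11;

struct Coder {
    code: Option<fn(&mut [u8]) -> i32>,
}

fn run(coder: &Coder, buf: &mut [u8]) -> i32 {
    match coder.code {
        // Call through the pointer only when it is present.
        Some(code_fn) => code_fn(buf),
        // A missing function pointer is a programming error, not a panic.
        None => LZMA_PROG_ERROR,
    }
}

fn main() {
    let ok = Coder { code: Some(|_| LZMA_OK) };
    let missing = Coder { code: None };
    let mut buf = [0u8; 4];
    assert_eq!(run(&ok, &mut buf), LZMA_OK);
    assert_eq!(run(&missing, &mut buf), LZMA_PROG_ERROR);
}
```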

In `@liblzma-rs/src/common/stream_flags_common.rs`:
- Around line 19-25: The current check only validates backward_size when both
sides are known, letting an invalid known value pass if the other side is
LZMA_VLI_UNKNOWN; change the logic in the function containing the backward_size
checks so that each side is validated independently: call
is_backward_size_valid(a) and return LZMA_PROG_ERROR if (*a).backward_size !=
LZMA_VLI_UNKNOWN and is_backward_size_valid(a) is false, and do the same for
(*b).backward_size; only after both backward_size fields are known (both !=
LZMA_VLI_UNKNOWN) perform the equality check and return LZMA_DATA_ERROR if they
differ. Ensure you reference the symbols backward_size, LZMA_VLI_UNKNOWN,
is_backward_size_valid, LZMA_PROG_ERROR, and LZMA_DATA_ERROR in the updated
condition flow.

In `@liblzma-rs/src/common/stream_flags_decoder.rs`:
- Around line 13-16: Both FFI entrypoints (lzma_stream_header_decode and the
analogous footer entrypoint) currently dereference options and in_0 (used by
memcmp, read32le, and stream_flags_decode) without checks; add explicit
null-pointer guards at the start of each extern "C" function to validate that
both options and in_0 are non-null and return an appropriate lzma_ret error
(e.g., LZMA_PROG_ERROR) when a null pointer is detected so we avoid immediate UB
before calling memcmp/read32le/stream_flags_decode.
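The guard pattern requested here (and in several other entrypoints below) can be sketched as follows; the function name `decode_header` and its trivial body are illustrative, not the real decoder:

```rust
// Sketch: null-pointer guards at the top of an extern "C" entrypoint,
// before any dereference. Names and return codes are illustrative.
const LZMA_OK: i32 = 0;
const LZMA_PROG_ERROR: i32 = 11;

unsafe extern "C" fn decode_header(options: *mut u32, in_0: *const u8) -> i32 {
    // Reject null pointers up front to avoid UB on the first dereference.
    if options.is_null() || in_0.is_null() {
        return LZMA_PROG_ERROR;
    }
    // Safe to dereference from here on.
    *options = u32::from(*in_0);
    LZMA_OK
}

fn main() {
    let mut opts = 0u32;
    let input = [7u8];
    unsafe {
        assert_eq!(
            decode_header(core::ptr::null_mut(), input.as_ptr()),
            LZMA_PROG_ERROR
        );
        assert_eq!(decode_header(&mut opts, input.as_ptr()), LZMA_OK);
    }
    assert_eq!(opts, 7);
}
```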

In `@liblzma-rs/src/lz/lz_encoder.rs`:
- Around line 317-343: The function lzma_lz_encoder_memusage currently forwards
lz_options into lz_encoder_prepare which will dereference it; add an explicit
null check at the start of lzma_lz_encoder_memusage that returns UINT64_MAX when
lz_options is null, otherwise call lz_encoder_prepare as before; reference the
pointer lz_options, the wrapper function lzma_lz_encoder_memusage, and the inner
helper lz_encoder_prepare so the null-check is placed before invoking
lz_encoder_prepare.

---

Duplicate comments:
In `@benches/backend_comparison.rs`:
- Around line 22-37: The helpers (e.g., unsafe fn c_encode and other
encode/decode helpers using c_sys::lzma_easy_buffer_encode /
c_sys::lzma_stream_buffer_decode) currently truncate output based solely on
out_pos/in_pos even when the liblzma call fails; change them to check the return
code from the liblzma single-call API first and only use out_pos/in_pos (and
truncate the buffer) when the return value indicates success (LZMA_OK or
LZMA_STREAM_END as appropriate); on error, propagate or handle the error (return
a Result or log and abort) instead of trusting out_pos/in_pos so benchmarks
won’t see partial/empty results.
- Around line 121-133: The decode benchmarks are using two different inputs
(c_compressed and rs_compressed) so they measure different workloads; compress
once and use the same compressed fixture for both decoder benches. Replace the
separate c_compressed/rs_compressed usage by producing a single compressed
buffer (e.g., from make_payload + one encoder) and pass that same buffer into
both group.bench_with_input calls that exercise c_decode and rs_decode (refer to
c_compressed, rs_compressed, c_decode, rs_decode, make_payload, and the
group.bench_with_input closures).

In `@liblzma-rs-sys/src/lib.rs`:
- Around line 776-778: The listed extern "C" functions (lzma_index_memused,
lzma_index_stream_count, lzma_index_block_count, lzma_index_size,
lzma_index_stream_size, lzma_index_total_size, lzma_index_hash_size and the
other occurrences around lines noted) currently accept raw pointers and
dereference them but are declared safe; change their signatures to be unsafe
(e.g., pub unsafe extern "C" fn ...) so callers must use an unsafe block, and
keep the internal call to liblzma_rs::common::index::... with the same pointer
casts; update all duplicated implementations mentioned (ranges ~776, 822-843,
968-970) consistently.

In `@liblzma-rs/src/alloc.rs`:
- Around line 96-97: The code currently calls allocator.alloc or allocator.free
independently which can mix caller-owned and Rust-owned frees; update the logic
in the helpers that touch the allocator struct (check the allocator variable and
its fields .alloc and .free) to require both (*allocator).alloc.is_some() &&
(*allocator).free.is_some() before opting into the caller allocator, or else
treat a partial allocator as invalid (reject up front). Adjust all affected call
sites that currently check only alloc or free (the branches around
allocator.alloc.unwrap() and allocator.free.unwrap()) to use the combined
predicate, and add regression tests covering alloc-only and free-only allocators
to ensure they are rejected or behave consistently.
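The combined predicate suggested above can be sketched like this; the `Allocator` struct and `use_caller_allocator` helper are illustrative stand-ins for the real `lzma_allocator` handling:

```rust
// Sketch: honour a caller-supplied allocator only when BOTH callbacks are
// present, so allocations and frees can never be split between the caller's
// allocator and Rust's own. Names are illustrative.
struct Allocator {
    alloc: Option<fn(usize) -> *mut u8>,
    free: Option<fn(*mut u8)>,
}

fn use_caller_allocator(a: Option<&Allocator>) -> bool {
    match a {
        // A partial allocator (alloc-only or free-only) is treated as
        // absent; rejecting it up front is the other valid policy.
        Some(a) => a.alloc.is_some() && a.free.is_some(),
        None => false,
    }
}

fn main() {
    fn my_alloc(_: usize) -> *mut u8 { core::ptr::null_mut() }
    fn my_free(_: *mut u8) {}
    let full = Allocator { alloc: Some(my_alloc), free: Some(my_free) };
    let alloc_only = Allocator { alloc: Some(my_alloc), free: None };
    let free_only = Allocator { alloc: None, free: Some(my_free) };
    assert!(use_caller_allocator(Some(&full)));
    assert!(!use_caller_allocator(Some(&alloc_only)));
    assert!(!use_caller_allocator(Some(&free_only)));
    assert!(!use_caller_allocator(None));
}
```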

In `@liblzma-rs/src/check/check.rs`:
- Around line 56-75: The function lzma_check_update must guard against buf being
NULL when size > 0 to avoid UB; update lzma_check_update to early-return (or
skip backend calls) if size != 0 and buf.is_null(), before dispatching to
LZMA_CHECK_CRC32, LZMA_CHECK_CRC64, or LZMA_CHECK_SHA256 so you never call
lzma_crc32, lzma_crc64, or lzma_sha256_update with a NULL buffer; reference the
lzma_check_update function and the backend calls lzma_crc32, lzma_crc64, and
lzma_sha256_update when making the change.
- Around line 84-89: The match arms for LZMA_CHECK_CRC32 and LZMA_CHECK_CRC64
currently store the CRC values using native-endian assignments (using
(*check).buffer.u32_0[0] and u64_0[0]); instead write the CRCs in little-endian
byte order into the byte buffer (buffer.u8_0) so reads that consume bytes work
on big-endian targets. Concretely, in the LZMA_CHECK_CRC32 arm convert
(*check).state.crc32 to little-endian bytes (e.g., via to_le_bytes()) and copy
those 4 bytes into (*check).buffer.u8_0[0..4], and in the LZMA_CHECK_CRC64 arm
convert (*check).state.crc64 to little-endian bytes and copy those 8 bytes into
(*check).buffer.u8_0[0..8]; remove the native u32_/u64_ assignments. Ensure you
reference the same symbols type_0, LZMA_CHECK_CRC32, LZMA_CHECK_CRC64,
(*check).state.crc32, (*check).state.crc64, and (*check).buffer.u8_0 when making
the change.
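The little-endian store requested in the second bullet can be sketched with `to_le_bytes()`; the fixed-size buffer and helper names are illustrative:

```rust
// Sketch: store CRC values as little-endian bytes so byte-wise readers
// of the buffer behave identically on big-endian targets.
fn store_crc32_le(buffer: &mut [u8; 64], crc32: u32) {
    buffer[0..4].copy_from_slice(&crc32.to_le_bytes());
}

fn store_crc64_le(buffer: &mut [u8; 64], crc64: u64) {
    buffer[0..8].copy_from_slice(&crc64.to_le_bytes());
}

fn main() {
    let mut buf = [0u8; 64];
    store_crc32_le(&mut buf, 0x1122_3344);
    // Least significant byte first, regardless of host endianness.
    assert_eq!(&buf[0..4], &[0x44, 0x33, 0x22, 0x11]);
    store_crc64_le(&mut buf, 0x0102_0304_0506_0708);
    assert_eq!(&buf[0..8], &[0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01]);
}
```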

In `@liblzma-rs/src/check/crc64_fast.rs`:
- Line 2: Change the mutable global to an immutable one: replace the declaration
of lzma_crc64_table (currently declared as "pub static mut lzma_crc64_table:
[[u64; 256]; 4] = [") with an immutable static (e.g. "pub static
lzma_crc64_table: [[u64; 256]; 4] = ["). Ensure no code writes to
lzma_crc64_table anywhere (remove or refactor any assignments) and remove
unnecessary unsafe blocks that only existed to read from the table so consumers
can access it safely without unsafe.

In `@liblzma-rs/src/common/alone_encoder.rs`:
- Around line 65-69: alone_encoder_init currently calls
lzma_lzma_lclppb_encode(options, ...) and accesses (*options).dict_size without
checking for a null options pointer; update alone_encoder_init to reject a null
options pointer early (return appropriate lzma_ret) before any use of options
and before calling lzma_lzma_lclppb_encode, and apply the same null-check
pattern to the other place that calls lzma_lzma_lclppb_encode / reads
(*options).dict_size (the second encoder/init block that mirrors this logic) so
neither location dereferences a null options pointer.
- Around line 42-53: The extern "C" branch currently calls
(*coder).next.code.unwrap() which can panic across the FFI boundary; change it
to check whether (*coder).next.code is Some before calling: if let Some(code_fn)
= (*coder).next.code { call code_fn((*coder).next.coder, allocator, in_0,
in_pos, in_size, out, out_pos, out_size, action) } else { return LZMA_PROG_ERROR
}; ensure you return the same LZMA_PROG_ERROR constant used elsewhere and keep
the call parameters identical to the current unwrap usage.

In `@liblzma-rs/src/common/auto_decoder.rs`:
- Around line 33-60: The code currently sets (*coder).sequence = SEQ_CODE before
attempting lzma_stream_decoder_init or lzma_alone_decoder_init, which means a
failed init leaves sequence advanced and future calls skip real init; move the
assignment of (*coder).sequence = SEQ_CODE so it only happens after a successful
init (after ret_ == LZMA_OK for lzma_stream_decoder_init and after checking
ret__0 == LZMA_OK and any subsequent flag checks for lzma_alone_decoder_init),
keeping sequence at SEQ_INIT until those calls complete successfully; reference
the symbols: sequence, SEQ_INIT, SEQ_CODE, (*coder), lzma_stream_decoder_init,
and lzma_alone_decoder_init.

In `@liblzma-rs/src/common/block_buffer_encoder.rs`:
- Around line 27-29: The current lzma_block_buffer_bound wrapper truncates the
u64 result from lzma_block_buffer_bound64() to size_t unconditionally, which can
overflow on 32-bit targets; change lzma_block_buffer_bound to check whether the
u64 return (from lzma_block_buffer_bound64) exceeds size_t::MAX and only cast
when safe, otherwise return the saturated maximum (e.g., size_t::MAX) to avoid
producing a too-small buffer length; locate the function lzma_block_buffer_bound
and adjust the conversion logic to perform a bounds check against size_t::MAX
before casting instead of blindly doing ret as size_t.
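The saturating conversion described above can be sketched in a few lines; `bound_to_size_t` is an illustrative helper name:

```rust
// Sketch: convert a u64 bound to usize (size_t) without silent truncation.
// On a 32-bit target a too-large bound saturates to usize::MAX instead of
// wrapping to a too-small buffer length.
fn bound_to_size_t(bound64: u64) -> usize {
    if bound64 > usize::MAX as u64 {
        usize::MAX // saturate instead of truncating
    } else {
        bound64 as usize
    }
}

fn main() {
    assert_eq!(bound_to_size_t(1024), 1024);
    // Holds on both 32- and 64-bit targets.
    assert_eq!(bound_to_size_t(u64::MAX), usize::MAX);
}
```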

In `@liblzma-rs/src/common/block_decoder.rs`:
- Around line 219-223: The initializer currently calls
lzma_block_unpadded_size(block) and uses (*block).check in
lzma_check_size()/lzma_check_init() before ensuring block is non-null and check
is in-range, which can dereference invalid input; fix by first validating that
block is not NULL and that (*block).check is within valid bounds (e.g., compare
against allowed check enum range) before calling lzma_block_unpadded_size(block)
or invoking lzma_check_size()/lzma_check_init(); apply the same
reordering/guarding to the similar logic around the
lzma_check_size()/lzma_check_init() usage (the block around lines 266–289) so
all helpers only run when block and block->check are validated.

In `@liblzma-rs/src/common/block_encoder.rs`:
- Around line 135-178: The init guard in lzma_block_encoder_init currently
dereferences (*next).init before validating next; add a null check for next at
the start of lzma_block_encoder_init (returning LZMA_PROG_ERROR) so you never
read (*next).init when next is null, and only call lzma_next_end(next,
allocator) or assign (*next).init after confirming next is non-null; update the
early-return logic to validate next first, then proceed with the existing
init-guard and block null check.
- Around line 34-76: The match uses C2rust-generated magic numbers
(current_block_34, values like 6470892831169497455) to simulate gotos; replace
this with idiomatic Rust state/control flow by introducing a clear enum (e.g.,
BlockState) or a loop with explicit state transitions and named variants instead
of magic integers, update uses of (*coder).sequence and SEQ_PADDING to set the
enum/state variant (or change sequence to that enum), and rewrite the match over
current_block_34 to normal control flow (break/continue/return) so that calls
through (*coder).next.code, the check/update of compressed/uncompressed sizes,
and the sequence transition are expressed without numeric sentinels; ensure
behavior around returning LZMA_STREAM_END, LZMA_DATA_ERROR, and LZMA_PROG_ERROR
is preserved.
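The goto-elimination in the second bullet can be sketched with a named state enum; the `Seq` variants and the `step` driver below are illustrative, not the real block-encoder sequence:

```rust
// Sketch: replace C2rust's numeric goto sentinels (current_block_34 etc.)
// with a named state enum and explicit transitions. Variant names and the
// loop shape are illustrative.
#[derive(Clone, Copy, PartialEq)]
enum Seq {
    Code,
    Padding,
    Check,
}

// Returns true while more work remains; the caller would return
// LZMA_STREAM_END once the state machine reports completion.
fn step(seq: &mut Seq, padding_left: &mut u32) -> bool {
    match *seq {
        Seq::Code => {
            *seq = Seq::Padding;
            true
        }
        Seq::Padding => {
            if *padding_left > 0 {
                *padding_left -= 1;
            } else {
                *seq = Seq::Check;
            }
            true
        }
        Seq::Check => false, // done
    }
}

fn main() {
    let mut seq = Seq::Code;
    let mut pad = 2u32;
    let mut steps = 0;
    while step(&mut seq, &mut pad) {
        steps += 1;
        assert!(steps < 10); // must terminate
    }
    assert!(seq == Seq::Check);
    assert_eq!(pad, 0);
}
```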

In `@liblzma-rs/src/common/block_header_encoder.rs`:
- Around line 44-54: In lzma_block_header_encode, guard the out pointer before
any writes: check that out is not null (and that (*block).header_size >= 4 so
out_size calculation is valid) immediately after validating block/unpadded size;
if the check fails return LZMA_PROG_ERROR. Update the function (referencing
lzma_block_header_encode, out, out_size, and (*block).header_size) to perform
this null/size validation before any writes at lines that use out (e.g., the
subsequent byte stores) to avoid undefined behavior.
- Around line 3-43: In lzma_block_header_size, first validate the incoming
pointer: if block.is_null() return LZMA_PROG_ERROR before any dereference; after
computing the final header size (the value assigned to (*block).header_size,
currently computed as size.wrapping_add(3) & !(3)), check that this final value
does not exceed LZMA_BLOCK_HEADER_SIZE_MAX and return LZMA_PROG_ERROR if it
does, only then assign to (*block).header_size and return LZMA_OK.

In `@liblzma-rs/src/common/block_util.rs`:
- Around line 48-54: In lzma_block_total_size, remove the unnecessary mutable
binding by computing unpadded_size with lzma_block_unpadded_size and returning
either vli_ceil4(unpadded_size) when unpadded_size != LZMA_VLI_UNKNOWN or
unpadded_size otherwise; update the function to use an immutable binding and a
conditional/if expression (referencing lzma_block_total_size,
lzma_block_unpadded_size, LZMA_VLI_UNKNOWN, and vli_ceil4) to achieve this
simpler, immutable form.
- Around line 3-22: lzma_block_compressed_size must validate the unpadded_size
value before arithmetic: check that unpadded_size != LZMA_VLI_UNKNOWN and
unpadded_size <= UNPADDED_SIZE_MAX (and treat values outside that range as an
error) before comparing to container_size or computing compressed_size; if the
check fails return an appropriate error (e.g. LZMA_DATA_ERROR) instead of
proceeding. Update the logic in lzma_block_compressed_size (and rely on
lzma_block_unpadded_size only for retrieving the stored unpadded size) so the
new bounds checks occur prior to the wrapping_sub and the subsequent comparison
to (*block).compressed_size.
- Around line 23-47: The function lzma_block_unpadded_size currently returns 0
on validation failures but LZMA_VLI_UNKNOWN when compressed_size is unknown,
which is inconsistent; change the early returns that now return 0 (the
validation-failure branches checking block null, version, header_size
bounds/align, compressed_size bounds/zero, and check ID) to return
LZMA_VLI_UNKNOWN instead, and update or add a short doc comment above
lzma_block_unpadded_size explaining that LZMA_VLI_UNKNOWN is used to signal
errors/unknown sizes (or retain 0 only if you must match the original C
API—document that choice explicitly) so callers have a consistent sentinel
value.
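The immutable form suggested for lzma_block_total_size, plus the LZMA_VLI_UNKNOWN sentinel convention, can be sketched together; `total_size` and the constant values are illustrative:

```rust
// Sketch: lzma_block_total_size as an immutable binding plus a conditional
// expression, with LZMA_VLI_UNKNOWN passed through as the error/unknown
// sentinel. Names and the helper body are illustrative.
const LZMA_VLI_UNKNOWN: u64 = u64::MAX;

fn vli_ceil4(v: u64) -> u64 {
    (v + 3) & !3 // round up to the next multiple of four
}

fn total_size(unpadded_size: u64) -> u64 {
    if unpadded_size != LZMA_VLI_UNKNOWN {
        vli_ceil4(unpadded_size)
    } else {
        unpadded_size // propagate the sentinel unchanged
    }
}

fn main() {
    assert_eq!(total_size(5), 8);
    assert_eq!(total_size(8), 8);
    assert_eq!(total_size(LZMA_VLI_UNKNOWN), LZMA_VLI_UNKNOWN);
}
```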

In `@liblzma-rs/src/common/common.rs`:
- Around line 247-255: The lzma_code function dereferences strm before checking
it; add an explicit null-pointer guard at the top of lzma_code that returns
LZMA_PROG_ERROR if strm.is_null() to avoid UB, then proceed with the existing
checks (e.g. (*strm).next_in, (*strm).avail_in, (*strm).next_out,
(*strm).avail_out, (*strm).internal, (*(*strm).internal).next.code, action >
LZMA_FULL_BARRIER, and the supported_actions lookup) so all subsequent
dereferences are safe; place this check before any use of (*strm) or other
fields in the function.
- Around line 410-418: The code currently calls the get_progress callback and
unconditionally writes through progress_in/progress_out, which is UB if either
pointer is null; fix by guarding against nulls: when invoking
(*(*strm).internal).next.get_progress (and its unwrap call) use local temporary
u64 variables for in/out (e.g. tmp_in/tmp_out), pass their addresses to the
callback instead of the raw pointers, and after the callback only write the
temporaries back into *progress_in/*progress_out if those pointers are non-null;
likewise in the fallback branch only assign to *progress_in and *progress_out
after checking they are non-null (or skip assignment if null). Ensure references
to strm, (*strm).internal, next.get_progress, next.coder, progress_in,
progress_out, total_in and total_out are used to locate and apply the changes.

In `@liblzma-rs/src/common/easy_decoder_memusage.rs`:
- Around line 35-36: The error sentinel returned for invalid presets is using a
32-bit max value; update the return in the lzma_easy_preset failure branch to
return the 64-bit memusage sentinel (u64::MAX or UINT64_MAX) so callers can
distinguish failure from a legitimate 4GiB result—locate the if block calling
lzma_easy_preset(::core::ptr::addr_of_mut!(opt_easy), preset) and replace the
UINT32_MAX as u64 return with u64::MAX to match other memusage entry points.

In `@liblzma-rs/src/common/easy_encoder_memusage.rs`:
- Around line 35-37: The error sentinel is wrong: in
lzma_easy_encoder_memusage() you currently return UINT32_MAX as u64 when
lzma_easy_preset(::core::ptr::addr_of_mut!(opt_easy), preset) fails; change the
error return to the 64-bit sentinel (u64::MAX or UINT64_MAX as u64) so the
function returns the documented UINT64_MAX on error and not a plausible ~4GiB
value. Ensure you update the return expression in the block that checks
lzma_easy_preset and keep the rest of lzma_easy_encoder_memusage() logic
unchanged.

In `@liblzma-rs/src/common/easy_preset.rs`:
- Around line 2-9: The function lzma_easy_preset should first validate the
incoming C pointer opt_easy for null before any dereference (i.e., check
opt_easy == null and return an error/false appropriately) to avoid UB; after
calling lzma_lzma_preset and setting filters[0], fully initialize the terminator
entry filters[1] by setting both filters[1].id = LZMA_VLI_UNKNOWN and
filters[1].options = core::ptr::null_mut() (or equivalent) so we don't leave a
stale pointer if lzma_options_easy is reused; locate these changes in the
lzma_easy_preset function and the lzma_options_easy.filters initialization sites
referenced in the diff.

In `@liblzma-rs/src/common/file_info.rs`:
- Around line 602-603: The caller's output pointer (dest_index) must be cleared
on early failure to avoid leaving a stale pointer in the C ABI: before any early
return or before reinitializing the decoder, check that dest_index is not null
and explicitly clear the caller's slot by writing a null value into *dest_index;
update all paths around the decoder reinitialization (the code referencing
dest_index and the decoder initialization logic) to set *dest_index = null prior
to returning or proceeding with decoder reinit so the caller cannot observe an
unchanged/stale pointer.

In `@liblzma-rs/src/common/filter_buffer_decoder.rs`:
- Around line 2-20: The function lzma_raw_buffer_decode currently doesn't
validate the filters pointer before passing it into lzma_raw_decoder_init; add a
null-check for filters at the start of lzma_raw_buffer_decode and return
LZMA_PROG_ERROR if filters.is_null() so the decoder init never receives a null
chain (preventing FFI UB). Ensure the check appears alongside the existing
input/out pointer and bounds checks and references the filters symbol and
lzma_raw_decoder_init call.

In `@liblzma-rs/src/common/filter_encoder.rs`:
- Around line 404-406: The LZMA properties helpers disagree on error codes: change
the early return in lzma_properties_encode() where encoder_find((*filter).id)
yields null so it mirrors lzma_properties_size()'s behavior; instead of
returning LZMA_PROG_ERROR, return LZMA_OPTIONS_ERROR for
in-range-but-unsupported filter IDs. Update the branch that checks fe.is_null()
(the pointer from encoder_find) so lzma_properties_encode() and
lzma_properties_size() use the same LZMA_OPTIONS_ERROR contract for unsupported
filter IDs.
- Around line 291-305: The temporary array reversed_filters in filter_encoder.rs
is fixed at length 5 but count is computed from the input without bounds,
allowing a stack overflow; modify the code that iterates over filters (the loop
using count and the copy into reversed_filters) to cap count to
reversed_filters.len() - 1 (reserving one slot for LZMA_VLI_UNKNOWN) or switch
to a heap allocation (Vec<lzma_filter>) sized to count+1, then ensure you always
write the terminator into reversed_filters[count] and return an appropriate
error if the input chain exceeds the allowed capacity when using the fixed-size
buffer; update the code paths that rely on reversed_filters (e.g., the copy loop
and the subsequent next.update call) to use the safely bounded or dynamically
allocated array.

In `@liblzma-rs/src/common/filter_flags_decoder.rs`:
- Around line 38-45: The code advances *in_pos after calling
lzma_properties_decode even when the decode fails, which incorrectly marks
property bytes as consumed; change the logic in the block using
lzma_properties_decode (the ret: lzma_ret variable, the call to
lzma_properties_decode, in_pos and props_size) so that *in_pos is only
incremented (wrapping_add(props_size)) when the decode returns success (check
ret for success) and otherwise leave *in_pos unchanged and return the error ret.
- Around line 10-16: Guard the caller pointers before any dereference: check
that filter and in_pos are not null and return LZMA_PROG_ERROR if they are; only
after that assign (*filter).options = core::ptr::null_mut(); also validate in_0
— if in_0 is null but in_size > *in_pos (i.e., there is unread input) return
LZMA_PROG_ERROR, otherwise allow passing a null in_0 to lzma_vli_decode when
there is no unread input; then call lzma_vli_decode with the validated pointers
(addr_of_mut!((*filter).id), null_mut(), in_0, in_pos, in_size) and proceed as
before.

In `@liblzma-rs/src/common/filter_flags_encoder.rs`:
- Around line 3-7: Add explicit null-pointer guards at the start of both FFI
entrypoints lzma_filter_flags_size and lzma_filter_flags_encode: check that
pointers size and filter are non-null in lzma_filter_flags_size, and that
filter, out, and out_pos are non-null in lzma_filter_flags_encode, returning an
appropriate lzma_ret parameter-error code instead of dereferencing; perform
these checks before any use of (*filter) or writes to *size/*out/*out_pos so
callers receive a normal error rather than UB.

In `@liblzma-rs/src/common/index_decoder.rs`:
- Around line 227-255: The function lzma_index_decoder_init (and other public C
ABI entrypoints in this file that dereference caller pointers) currently
dereferences pointers like next and strm before validating them; add explicit
null checks at the start of each exported entrypoint (e.g.,
lzma_index_decoder_init and the decode functions that use strm) and immediately
return LZMA_PROG_ERROR when any required input pointer (next, i, strm, allocator
as appropriate) is NULL instead of dereferencing; ensure checks occur before any
use of (*next).init or similar fields so no unsafe deref happens on null.

In `@liblzma-rs/src/common/index_encoder.rs`:
- Around line 152-176: The public C entrypoints lzma_index_encoder_init and
lzma_index_encoder must validate their pointer arguments up front: check that
`next` (and in lzma_index_encoder, `strm`, and any used nested pointers such as
`(*strm).allocator` and `(*strm).internal`) are not null before dereferencing;
if any are null return `LZMA_PROG_ERROR` immediately instead of touching
`(*next).init` or other fields. Add early null checks at the top of
`lzma_index_encoder_init` (before reading `(*next).init`) and the corresponding
checks in `lzma_index_encoder` (and the duplicate block referenced), returning
`LZMA_PROG_ERROR` on failure so the functions don't walk null pointers or call
`lzma_next_end` on invalid data.

In `@liblzma-rs/src/common/index_hash.rs`:
- Around line 83-89: The FFI function lzma_index_hash_size currently
dereferences the pointer unconditionally; update lzma_index_hash_size to check
if the incoming index_hash pointer is null and return 0 for that case, otherwise
call index_size using (*index_hash).blocks.count and
(*index_hash).blocks.index_list_size as before; perform the null check inside
the unsafe block (using index_hash.is_null()) to avoid UB when called from C.
- Around line 144-154: The function lzma_index_hash_decode currently
dereferences in_pos, index_hash, and in_0 without validating pointers; add
defensive checks at the top of lzma_index_hash_decode to return LZMA_PROG_ERROR
if in_pos, in_0, or index_hash are NULL, then only after confirming in_pos is
non-null read *in_pos and validate it against in_size (return LZMA_BUF_ERROR if
out of bounds) before any further dereferences or reads; reference the
lzma_index_hash_decode function and the pointers index_hash, in_0, and in_pos
when making these checks.

In `@liblzma-rs/src/common/index.rs`:
- Around line 674-677: The copy of index metadata currently assigns
uncompressed_size, total_size, record_count, and index_list_size but omits the
accumulated checks bitset, causing multi-stream duplicates to underreport
lzma_index_checks(); update the same copy block to also assign (*dest).checks =
(*src).checks so the duplicated index retains the full checks bitset used by
lzma_index_checks().
- Around line 259-263: The lzma_index_prealloc function allows prealloc to be
set to 0 which leads to an index_group being allocated with no space for records
and causes an out-of-bounds write; update lzma_index_prealloc to clamp the
incoming records value to the range [1, PREALLOC_MAX] (i.e. if records < 1 set
it to 1, and if > PREALLOC_MAX set it to PREALLOC_MAX) before assigning
(*i).prealloc so append logic that allocates index_group always has room for at
least one record.
- Around line 327-335: The addition chain in index_file_size currently uses
wrapping_add on compressed_base and other terms which can overflow before the
LZMA_VLI_MAX check; change these to use checked_add (or equivalent checked
operations) when computing file_size and when adding index_size(record_count,
index_list_size), and if any checked_add returns None return LZMA_VLI_UNKNOWN;
ensure you still reference the same variables and helpers (compressed_base,
LZMA_STREAM_HEADER_SIZE, stream_padding, vli_ceil4(unpadded_sum),
index_size(...)) and keep the LZMA_VLI_MAX and LZMA_VLI_UNKNOWN semantics
intact.
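The checked_add chain from the last bullet can be sketched in isolation; `file_size`, its four summands, and the constant values are illustrative stand-ins for the real index arithmetic:

```rust
// Sketch: build the file-size sum with checked_add so any overflow yields
// LZMA_VLI_UNKNOWN instead of a wrapped value slipping past the
// LZMA_VLI_MAX check. Names and constants are illustrative.
const LZMA_VLI_UNKNOWN: u64 = u64::MAX;
const LZMA_VLI_MAX: u64 = u64::MAX / 2;

fn file_size(compressed_base: u64, header: u64, padding: u64, index: u64) -> u64 {
    let sum = compressed_base
        .checked_add(header)
        .and_then(|s| s.checked_add(padding))
        .and_then(|s| s.checked_add(index));
    match sum {
        Some(s) if s <= LZMA_VLI_MAX => s,
        // Overflow or out-of-range: report "unknown" as the error sentinel.
        _ => LZMA_VLI_UNKNOWN,
    }
}

fn main() {
    assert_eq!(file_size(100, 12, 4, 8), 124);
    assert_eq!(file_size(u64::MAX - 1, 12, 0, 0), LZMA_VLI_UNKNOWN);
}
```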

In `@liblzma-rs/src/common/lzip_decoder.rs`:
- Around line 138-145: The code reuses the coder struct but sets
options.preset_dict to null without resetting options.preset_dict_size, leaving
an inconsistent preset-dictionary state for lzma_lzma_decoder_memusage_nocheck()
and the nested LZMA decoder initialization; fix it in lzip_decoder setup by
explicitly setting (*coder).options.preset_dict_size = 0 whenever you set
(*coder).options.preset_dict = core::ptr::null(), so the preset-dictionary tuple
is consistent before calling lzma_lzma_decoder_memusage_nocheck() and subsequent
LZMA decoder init.

In `@liblzma-rs/src/common/microlzma_decoder.rs`:
- Around line 252-260: The code currently validates uncomp_size against
LZMA_VLI_MAX after allocating and attaching a new coder to (*next).coder, which
can leave a partially initialized coder if validation fails; move the check "if
uncomp_size > LZMA_VLI_MAX as u64 { return LZMA_OPTIONS_ERROR; }" to occur
before allocating/assigning the coder so that allocation only happens for valid
uncomp_size; update the flow around the allocation/attachment (references:
(*next).coder, (*coder).comp_size, (*coder).uncomp_size, uncomp_size_is_exact,
LZMA_VLI_MAX, LZMA_OPTIONS_ERROR) to ensure the coder is only created and fields
set after validation passes.

In `@liblzma-rs/src/common/microlzma_encoder.rs`:
- Around line 20-55: Reserve the first output byte before invoking the inner
encoder so the inner encoder does not overwrite it: write a placeholder into
*out at out_start (e.g. 0), advance *out_pos to out_start.wrapping_add(1) before
calling set_out_limit/code so the inner encoder writes starting after the
reserved byte, and after successful return write the final !(*coder).props into
the reserved byte and set *out_pos to out_start.wrapping_add(uncomp_size as
size_t). Update uses of out_size.wrapping_sub(*out_pos) and the post-call
assignment to *out_pos accordingly; relevant symbols: out_start, *out_pos,
set_out_limit, (*coder).lzma.code, uncomp_size, and (*coder).props.
- Around line 173-185: Add a null-check for the public entrypoint parameter
`options` in lzma_microlzma_encoder: after calling lzma_strm_init and before
forwarding `options` to microlzma_encoder_init (which ultimately dereferences it
via lzma_lzma_lclppb_encode), if options.is_null() then call lzma_strm_end(strm)
to clean up and return an appropriate error (e.g. LZMA_PROG_ERROR) instead of
passing the null pointer into microlzma_encoder_init.

In `@liblzma-rs/src/common/outqueue.rs`:
- Around line 111-116: The function lzma_outq_get_buf currently dereferences
(*outq).cache unconditionally, which can dereference a null pointer; defend it by
loading the cache pointer into a local (e.g. buf) and if that pointer is null
return null immediately, otherwise update (*outq).cache = (*buf).next and return
buf. In short: in lzma_outq_get_buf, check if (*outq).cache is null before
accessing (*buf).next and return a null pointer on empty cache to avoid unsafe
deref of lzma_outq.cache / lzma_outbuf.
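The null-checked cache pop can be sketched like this; the `OutBuf` struct and `get_buf` signature are illustrative simplifications of lzma_outbuf / lzma_outq:

```rust
// Sketch: load the cache head into a local first and bail out on an empty
// cache, instead of unconditionally dereferencing it. Struct shape is
// illustrative.
struct OutBuf {
    next: *mut OutBuf,
}

unsafe fn get_buf(cache: &mut *mut OutBuf) -> *mut OutBuf {
    let buf = *cache;
    if buf.is_null() {
        // Empty cache: return null rather than dereferencing it.
        return core::ptr::null_mut();
    }
    *cache = (*buf).next; // pop the head
    buf
}

fn main() {
    let mut b = OutBuf { next: core::ptr::null_mut() };
    let mut cache: *mut OutBuf = &mut b;
    unsafe {
        let got = get_buf(&mut cache);
        assert_eq!(got, &mut b as *mut OutBuf);
        assert!(cache.is_null());
        // Popping an empty cache is now safe and returns null.
        assert!(get_buf(&mut cache).is_null());
    }
}
```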

In `@liblzma-rs/src/common/stream_buffer_decoder.rs`:
- Around line 13-20: The function performs pointer validation but misses
checking the memlimit pointer before dereferencing `*memlimit`; add a null check
for `memlimit` in the same entry-validation conditional (or as an early return)
so the function returns LZMA_PROG_ERROR when `memlimit` is null and before any
use of `*memlimit`; update the validation that currently guards `*memlimit` (and
the related checks around lines 37–40) to include `memlimit.is_null()` (or
equivalent) to avoid dereferencing a null pointer.

In `@liblzma-rs/src/common/stream_decoder_mt.rs`:
- Around line 597-607: The reset path fails to reinitialize the member-check
field, so update stream_decoder_reset to clear the previous member's checksum by
setting (*coder).stream_flags.check = LZMA_CHECK_NONE (or the appropriate
default) before returning; do the same in the other reset routine referenced
around the 1319–1323 region so both reset points reinitialize stream_flags.check
(use the same lzma_stream_coder and stream_flags.check symbols and mirror how
other stream_flags fields like sequence/pos are set).
- Around line 470-476: The current branch that handles a worker terminal error
overwrites the real worker error with LZMA_PROG_ERROR when fail_fast is false;
instead preserve the worker's terminal error by assigning (*coder).pending_error
= (*coder).thread_error (rather than LZMA_PROG_ERROR) inside the branch that
checks (*coder).thread_error != LZMA_OK and !(*coder).fail_fast so the public
decoder sees the actual worker failure reason; reference the fields
thread_error, fail_fast, and pending_error on the coder and the constants
LZMA_OK/LZMA_PROG_ERROR when making the change.
- Around line 1355-1361: The code updates (*coder).memlimit_stop when lowering
the public limit but does not adjust the threaded cap, so memlimit_threading can
remain above the new public cap; modify the branch that handles new_memlimit !=
0 (the block that compares new_memlimit and *memusage and sets
(*coder).memlimit_stop) to also clamp (*coder).memlimit_threading to the new,
lower limit (i.e., if (*coder).memlimit_threading > new_memlimit then set
(*coder).memlimit_threading = new_memlimit) after successfully setting
memlimit_stop, ensuring threaded scheduling cannot exceed the caller's reduced
cap (referencing symbols: new_memlimit, *memusage, (*coder).memlimit_stop,
(*coder).memlimit_threading).
- Around line 1308-1317: stream_decoder_mt_end currently frees the coder memory
without first destroying the coder-level synchronization primitives; ensure you
destroy the coder's mutex and condition variable (the fields accessed as
(*coder).mutex and (*coder).cond) before calling crate::alloc::internal_free.
Add calls to the appropriate destroy functions (mirror whatever platform
wrappers the crate uses, e.g., lzma_mutex_destroy / lzma_cond_destroy or the
equivalents) in stream_decoder_mt_end, placing them after lzma_index_hash_end
and before crate::alloc::internal_free so the mutex/cond are cleaned up prior to
freeing coder.

In `@liblzma-rs/src/common/stream_decoder.rs`:
- Around line 29-39: stream_decoder_reset leaves the previous stream_flags.check
advertised value intact so stream_decoder_get_check can return stale checks; in
stream_decoder_reset set the advertised check to the neutral/default value (e.g.
LZMA_CHECK_NONE or the equivalent zero/none constant used by this crate) by
assigning (*coder).stream_flags.check = <NONE_CONST> when resetting sequence/pos
and do the same in the other reset path (the alternate reset block around lines
336-340) so all reset code paths clear the advertised check before parsing the
next header.

In `@liblzma-rs/src/common/stream_encoder_mt.rs`:
- Around line 171-176: The code defines a shared mutable static `in_chunk_max`
although it is never mutated; replace this with an immutable constant and use it
where `in_chunk_max` is referenced. Change the `static mut in_chunk_max: size_t
= 16384;` to a `const` (e.g., `const IN_CHUNK_MAX: size_t = 16384;` or a local
immutable `let in_chunk_max: size_t = 16384;`) and update the subsequent logic
that uses `in_chunk_max` (the `in_limit` calculation and `if
in_size.wrapping_sub(in_pos) > in_chunk_max { ... action = LZMA_RUN; }`) to
reference the new constant/immutable binding, removing any unsafe/mutable static
state.
- Around line 982-987: The check currently rejects block sizes > BLOCK_SIZE_MAX
or == UINT64_MAX but misses rejecting sizes that exceed SIZE_MAX before later
casting to size_t; update the validation around the lzma_block_buffer_bound64
call (variables: *block_size, *outbuf_size_max, function
lzma_block_buffer_bound64) to also return LZMA_OPTIONS_ERROR if *block_size >
SIZE_MAX and to return LZMA_MEM_ERROR (or LZMA_OPTIONS_ERROR as appropriate) if
the computed *outbuf_size_max from lzma_block_buffer_bound64(*block_size) is 0
or > SIZE_MAX, ensuring no subsequent cast at the later size_t conversions (the
casts around the code referencing outbuf_size_max/block_size where they are
converted to size_t) can truncate on 32-bit builds.
- Around line 810-824: The branch that returns LZMA_STREAM_END for
LZMA_FULL_FLUSH / LZMA_FULL_BARRIER is currently based only on
lzma_outq_is_empty((*coder).outq) and can finish while worker threads still have
blocks in flight; change the logic so that before returning LZMA_STREAM_END for
action == LZMA_FULL_FLUSH or LZMA_FULL_BARRIER you also verify that there are no
in-flight worker encodings (e.g. check the coder's worker/threads-in-flight
counter or "workers active" flag) and only return STREAM_END when both the outq
is empty and workers are idle; otherwise treat the stream as not finished (do
not return STREAM_END—return LZMA_OK or wait) so callers cannot update filters
while encoding is still ongoing.
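The completion predicate suggested for LZMA_FULL_FLUSH / LZMA_FULL_BARRIER can be reduced to a two-condition check: an empty output queue alone is not enough while workers still hold blocks in flight. The field names below are illustrative, not the actual coder layout.

```rust
// Illustrative coder state: queued output plus blocks still being encoded.
struct Coder {
    outq_len: usize,
    threads_in_flight: usize,
}

// A flush may only report LZMA_STREAM_END when both the output queue is
// drained and no worker thread still owns an in-flight block.
fn flush_finished(c: &Coder) -> bool {
    c.outq_len == 0 && c.threads_in_flight == 0
}

fn main() {
    let draining = Coder { outq_len: 0, threads_in_flight: 2 };
    let done = Coder { outq_len: 0, threads_in_flight: 0 };
    assert!(!flush_finished(&draining)); // not safe to update filters yet
    assert!(flush_finished(&done));
    println!("ok");
}
```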

In `@liblzma-rs/src/common/stream_encoder.rs`:
- Around line 399-424: The exported C ABI function lzma_stream_encoder is
missing #[no_mangle], so add a #[no_mangle] attribute directly above the pub
unsafe extern "C" fn lzma_stream_encoder(...) declaration to prevent Rust symbol
mangling and ensure C callers can link to lzma_stream_encoder; keep the existing
signature and visibility unchanged.
- Around line 231-241: The code calls (*coder).block_encoder.update.unwrap()
unconditionally which can panic if the optional callback is None; before
invoking the callback in the SEQ_BLOCK_ENCODE branch, check whether
(*coder).block_encoder.update is Some and only call it when present — otherwise
free filters via lzma_filters_free(temp_ptr, allocator) and return an
appropriate error (e.g., LZMA_PROG_ERROR or another LZMA_* constant used for
missing callbacks) instead of unwrapping; keep the existing usage of
(*coder).block_encoder.coder, allocator, filters, and reversed_filters when
calling the update.
- Around line 71-79: In SEQ_BLOCK_INIT when there is no input buffered (the
branch testing *in_pos == in_size), don't return LZMA_STREAM_END for non-FINISH
actions; instead detect flush actions (LZMA_SYNC_FLUSH, LZMA_FULL_FLUSH,
LZMA_FULL_BARRIER or generally any action != LZMA_FINISH and not LZMA_RUN) and
return LZMA_OK so the encoder continues to produce index/footer; keep the
existing behavior that LZMA_RUN returns LZMA_OK and only LZMA_FINISH leads
toward stream end handling—update the branch around the LZMA_FINISH/LZMA_RUN
checks in SEQ_BLOCK_INIT to explicitly return LZMA_OK for flush actions rather
than LZMA_STREAM_END.
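The non-panicking dispatch suggested above for the optional `update`/`code` callbacks can be sketched like this. `Encoder` and `run` are illustrative names with simplified types, not the crate's actual definitions; the point is the `let ... else` (or `match`) on the `Option` instead of `.unwrap()`.

```rust
const LZMA_OK: i32 = 0;
const LZMA_PROG_ERROR: i32 = 11;

// Stand-in for a coder holding an optional callback function pointer.
struct Encoder {
    code: Option<fn(&mut [u8]) -> i32>,
}

fn run(enc: &Encoder, buf: &mut [u8]) -> i32 {
    // Return an error across the FFI boundary instead of panicking.
    // (`let ... else` needs Rust 1.65+, matching the MSRV discussion.)
    let Some(code_fn) = enc.code else {
        return LZMA_PROG_ERROR;
    };
    code_fn(buf)
}

fn main() {
    let missing = Encoder { code: None };
    let present = Encoder {
        code: Some(|b| {
            b[0] = 1;
            LZMA_OK
        }),
    };
    let mut buf = [0u8; 4];
    assert_eq!(run(&missing, &mut buf), LZMA_PROG_ERROR);
    assert_eq!(run(&present, &mut buf), LZMA_OK);
    assert_eq!(buf[0], 1);
    println!("ok");
}
```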

In `@liblzma-rs/src/common/stream_flags_decoder.rs`:
- Around line 13-16: The exported C entrypoints (e.g., the extern "C" functions
lzma_stream_header_decode and the corresponding lzma_stream_footer_decode in
this file) need the #[no_mangle] attribute so their symbol names match the C
declarations in liblzma-rs/src/types.rs; add #[no_mangle] directly above each
pub unsafe extern "C" fn (e.g., lzma_stream_header_decode and
lzma_stream_footer_decode) to prevent Rust from mangling the symbols and then
verify the symbol names still match the references in types.rs.

In `@liblzma-rs/src/common/stream_flags_encoder.rs`:
- Around line 43-69: Add the missing #[no_mangle] to lzma_stream_footer_encode
and add null-pointer validation for both options and out at the top (return
LZMA_PROG_ERROR if either is null) before dereferencing options; also change the
error returned when stream_flags_encode(...) fails from LZMA_PROG_ERROR to
LZMA_OPTIONS_ERROR so the call to stream_flags_encode signals an options error.
Locate the checks and the call to stream_flags_encode in
lzma_stream_footer_encode and update them accordingly (symbols:
lzma_stream_footer_encode, lzma_stream_flags, stream_flags_encode).
- Around line 13-42: Add #[no_mangle] to the lzma_stream_header_encode
declaration, validate that both options and out are non-null at the start
(return LZMA_PROG_ERROR on null pointers), and change the error returned when
stream_flags_encode(...) fails to LZMA_OPTIONS_ERROR (since that indicates
invalid options/checks); keep the existing uses of lzma_header_magic,
LZMA_STREAM_FLAGS_SIZE, lzma_crc32, and write32le and ensure these checks occur
before any dereference or write in lzma_stream_header_encode.
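The entrypoint shape both of these findings ask for, `#[no_mangle]` so C callers link against the unmangled symbol plus null guards before the first dereference, looks roughly like this. `StreamFlags` and `demo_header_encode` are simplified stand-ins for `lzma_stream_flags` / `lzma_stream_header_encode`, not the real definitions.

```rust
#[repr(C)]
pub struct StreamFlags {
    pub check: u32,
}

pub const LZMA_OK: i32 = 0;
pub const LZMA_PROG_ERROR: i32 = 11;

// #[no_mangle] keeps the symbol name stable for C callers.
#[no_mangle]
pub unsafe extern "C" fn demo_header_encode(
    options: *const StreamFlags,
    out: *mut u8,
) -> i32 {
    // Reject null pointers before any dereference or write.
    if options.is_null() || out.is_null() {
        return LZMA_PROG_ERROR;
    }
    *out = (*options).check as u8;
    LZMA_OK
}

fn main() {
    let flags = StreamFlags { check: 4 };
    let mut byte = 0u8;
    unsafe {
        assert_eq!(
            demo_header_encode(core::ptr::null(), &mut byte),
            LZMA_PROG_ERROR
        );
        assert_eq!(demo_header_encode(&flags, &mut byte), LZMA_OK);
    }
    assert_eq!(byte, 4);
    println!("ok");
}
```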

In `@liblzma-rs/src/common/string_conversion.rs`:
- Around line 152-165: The tables bcj_optmap, delta_optmap, and filter_name_map
are declared as mutable globals but are never mutated; change their declarations
from static mut to immutable statics (or const where appropriate) and initialize
them without unsafe transmute blocks so the parser/formatter can access them
without unsafe shared-global writes; update the declarations for bcj_optmap,
delta_optmap, and filter_name_map (and the same pattern at the other referenced
ranges) to use static (or const) and safe initializers to remove unnecessary
unsafe access.
- Around line 1340-1344: The static initializer placed in INIT_ARRAY calling
run_static_initializers only applies to Linux/Windows/macOS, leaving
lzma12_optmap zeroed on other targets; add a fallback initialization path that
runs run_static_initializers for unsupported targets (or make lzma12_optmap
lazily initialized) so the option map is properly populated everywhere.
Concretely, add a #[cfg(not(any(target_os = "linux", target_os = "windows",
target_os = "macos")))] static or an init-on-first-use (e.g., Lazy/Once) that
invokes run_static_initializers or otherwise initializes lzma12_optmap before
any parse routines use it (referencing INIT_ARRAY, run_static_initializers, and
lzma12_optmap to locate the relevant code). Ensure the fallback executes early
enough that LZMA1/LZMA2 string parsing never observes a zeroed map.
- Around line 1263-1337: The code hardcodes byte offsets for lzma_options_lzma
into option_map entries (numbers like 20,24,28,32,36,40,44), which breaks on
32-bit targets; replace those literal offsets with computed field offsets using
a safe offset macro (e.g. memoffset::offset_of! or core::mem::offset_of if
available) referencing the lzma_options_lzma struct fields (e.g.
offset_of!(lzma_options_lzma, lc), offset_of!(..., lp), offset_of!(..., pb),
offset_of!(..., mode), offset_of!(..., nice), offset_of!(..., mf),
offset_of!(..., depth)) and cast to the expected integer type for
option_map::offset so the layout is correct on all architectures. Ensure you
update the option_map initializers that currently use numbers to use these
computed offsets instead.
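A minimal sketch of the offset fix, assuming `core::mem::offset_of!` (stable since Rust 1.77; on the crate's lower MSRV the `memoffset` crate's `offset_of!` would be the substitute). `Opts` is a stand-in for `lzma_options_lzma`; the real struct has a different field set, which is exactly why hardcoded byte literals are fragile.

```rust
use core::mem::offset_of;

// Illustrative stand-in for lzma_options_lzma.
#[repr(C)]
struct Opts {
    dict_size: u64,
    lc: u32,
    lp: u32,
    pb: u32,
}

fn main() {
    // Computed from the layout, so these stay correct on 32-bit targets
    // or if fields are reordered, unlike literals such as 20/24/28.
    let lc_off = offset_of!(Opts, lc) as u16;
    let lp_off = offset_of!(Opts, lp) as u16;
    let pb_off = offset_of!(Opts, pb) as u16;
    assert_eq!(lc_off, 8); // u64 precedes it under repr(C)
    assert_eq!(lp_off, 12);
    assert_eq!(pb_off, 16);
    println!("lc at {lc_off}, lp at {lp_off}, pb at {pb_off}");
}
```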

In `@liblzma-rs/src/common/vli_decoder.rs`:
- Around line 9-13: Guard the incoming raw pointers vli, in_pos, and in_0 before
any dereference: check each for null and return LZMA_PROG_ERROR (or the
appropriate error constant) immediately if any is null; only after these checks
create/assign vli_pos_internal and set vli_pos = addr_of_mut!(vli_pos_internal)
and then safely dereference *vli, *in_pos, and *in_0. Update the logic in the
vli decoder function (references: vli, in_pos, in_0, vli_pos_internal) so all
pointer null-checks occur before the first use to avoid UB.

In `@liblzma-rs/src/common/vli_encoder.rs`:
- Around line 9-18: The function currently dereferences out_pos (and later
writes through out) without checking for null, causing UB for callers that pass
null; mirror the existing pattern used for vli_pos by validating both out and
out_pos early: if out_pos is null, create a local out_pos_internal and set
out_pos to its address and return LZMA_PROG_ERROR if the caller-supplied out
pointer is null (or if both are null) before any *out_pos or out writes;
otherwise, if out_pos is non-null but *out_pos >= out_size return
LZMA_BUF_ERROR. Update checks around *out_pos and subsequent writes to use the
validated out_pos and ensure out is non-null before writing.

In `@liblzma-rs/src/delta/delta_common.rs`:
- Around line 40-43: The hardcoded 256 should be derived from the history field
size; replace the literal with a computed size using
core::mem::size_of_val(&(*coder).history) (or
core::mem::size_of::<TypeOfHistory>()) so the call to
core::ptr::write_bytes(addr_of_mut!((*coder).history) as *mut u8, 0u8,
size_of_val(&(*coder).history)) uses the actual byte size of
lzma_delta_coder.history and won’t go stale if the field changes.
- Around line 7-12: Validate the incoming pointer parameters at the start of
lzma_delta_coder_init: explicitly check that next and filters are not null
before dereferencing (*next).coder or (*filters).options (and similarly guard
any later dereferences around lines 34-38). If a null pointer is detected,
return an appropriate lzma_ret error such as LZMA_PROG_ERROR (or another
project-appropriate lzma_ret) and avoid further dereferencing; update the
function flow so all uses of lzma_next_coder, lzma_filter_info, and the options
field are performed only after these null checks.
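The `size_of_val` pattern from the first finding above can be shown in isolation. `DeltaCoder` here is an assumed simplification of `lzma_delta_coder`; only the derived-length `write_bytes` call is the point.

```rust
use core::mem::size_of_val;
use core::ptr::addr_of_mut;

// Illustrative stand-in for lzma_delta_coder.
#[repr(C)]
struct DeltaCoder {
    pos: u8,
    history: [u8; 256],
}

// Zero the history using its actual byte size instead of a literal 256,
// so the call cannot go stale if the field's length changes.
unsafe fn clear_history(coder: *mut DeltaCoder) {
    core::ptr::write_bytes(
        addr_of_mut!((*coder).history) as *mut u8,
        0u8,
        size_of_val(&(*coder).history),
    );
}

fn main() {
    let mut c = DeltaCoder {
        pos: 0,
        history: [0xAA; 256],
    };
    unsafe { clear_history(&mut c) };
    assert!(c.history.iter().all(|&b| b == 0));
    println!("cleared {} bytes", size_of_val(&c.history));
}
```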

In `@liblzma-rs/src/delta/delta_decoder.rs`:
- Around line 26-37: The extern "C" decode path currently calls
(*coder).next.code.unwrap(), which will panic if next.code is None; instead
check whether (*coder).next.code is Some and if not return
lzma_ret::LZMA_PROG_ERROR (or the crate's equivalent) from the function; when
Some(handler) call handler((*coder).next.coder, allocator, in_0, in_pos,
in_size, out, out_pos, out_size, action) and preserve the existing logic for
handling its return value—update the code around coder/next.code in
delta_decoder.rs to explicitly match or if-let on (*coder).next.code and return
LZMA_PROG_ERROR on the None branch.
- Around line 44-63: The function lzma_delta_decoder_init currently dereferences
next immediately and will crash on a null pointer; add an early null check at
the start of lzma_delta_decoder_init (e.g., if next.is_null()) and return an
appropriate lzma_ret error (such as LZMA_PROG_ERROR) instead of proceeding, so
you never dereference next when it is null; keep the rest of the function
unchanged (the assignment to (*next).code and the call to lzma_delta_coder_init)
after the null-check.
- Around line 65-82: In lzma_delta_props_decode validate the incoming pointers
before dereferencing: check that options and props are not null at the top of
the function (return LZMA_PROG_ERROR if either is null), only read *props after
that check, and only write through *options after successful allocation; keep
the existing props_size check and existing allocation / LZMA_MEM_ERROR handling,
and ensure *options is never written when options is null by guarding the
assignment to *options until opt is non-null and options has been validated.

In `@liblzma-rs/src/delta/delta_encoder.rs`:
- Around line 67-85: The else branch in delta_encoder.rs calls
(*coder).next.code.unwrap(), which can still panic in FFI contexts; replace the
unwrap usage in the call site inside the function (the block that also calls
encode_in_place) with a defensive match or if let on (*coder).next.code to
extract the function pointer safely and return an appropriate error/ret value
when it is None instead of unwrapping; ensure you preserve the existing behavior
of computing out_start, calling the code function, computing size_0, and calling
encode_in_place only when size_0 > 0, but guard the code invocation with the
safe extraction and early-return error path.
- Around line 130-137: The function lzma_delta_props_encode writes to pointer
out without checking for null; before writing use a null check (e.g., if
out.is_null() { return LZMA_PROG_ERROR; }) so you don't dereference a null
pointer, and keep the existing early check using lzma_delta_coder_memusage;
locate lzma_delta_props_encode and validate out (and still cast options to
*const lzma_options_delta and read (*opt).dist only after confirming both
pointers are valid) then return LZMA_OK as before.
- Around line 100-129: Add a null-pointer guard at the start of
lzma_delta_encoder_init to avoid dereferencing next when it's NULL: check if
next == std::ptr::null_mut() and return an appropriate lzma_ret error (e.g.,
LZMA_PROG_ERROR) immediately; only assign to (*next).code and (*next).update and
call lzma_delta_coder_init after this check so no dereference happens on a null
pointer.

In `@liblzma-rs/src/lz/lz_decoder.rs`:
- Around line 264-273: When reallocating the sliding dictionary in the lz
decoder (symbols: (*coder).dict.buf, (*coder).dict.size,
crate::alloc::internal_alloc_bytes, crate::alloc::internal_free, LZ_DICT_EXTRA)
ensure that if the new allocation fails you clear/reset (*coder).dict.size
(e.g., set to 0) before returning LZMA_MEM_ERROR; this prevents later code (such
as the init/realloc path) from assuming dict.buf is valid when dict.size still
holds the old value. Update the failure branch after internal_alloc_bytes
returns null to reset dict.size appropriately and then return the error.
- Around line 136-142: The current end-of-stream check incorrectly tests
(*coder).temp.size != 0 (buffer capacity) which can be nonzero even when fully
consumed; change the condition to test unread bytes instead (e.g. check
((*coder).temp.size - (*coder).temp.pos) != 0 or equivalently (*coder).temp.pos
!= (*coder).temp.size) so only remaining, unconsumed bytes trigger
LZMA_DATA_ERROR; update the branch that returns LZMA_DATA_ERROR inside the
this_finished / next_finished logic (referencing this_finished, next_finished,
and coder.temp.pos/size).
- Around line 301-306: The memusage computation in lzma_lz_decoder_memusage
currently uses the raw dictionary_size and must mirror the decoder init
normalization: clamp dictionary_size to a minimum of 4096 and round it up to a
16-byte alignment before adding it to core::mem::size_of::<lzma_coder>() and the
constants LZ_DICT_REPEAT_MAX and LZ_DICT_EXTRA; update lzma_lz_decoder_memusage
to perform the same clamp-and-align normalization on the incoming
dictionary_size (convert to u64 safely, apply min=4096 and round-up to multiple
of 16) then use that normalized value in the wrapping_add chain so memlimit
checks match actual allocation.
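The clamp-and-align normalization the last finding asks `lzma_lz_decoder_memusage` to share with the init path is small enough to show directly; the minimum of 4096 and the 16-byte round-up are taken from the review text.

```rust
// Normalize a dictionary size the same way the decoder init path does:
// clamp to a 4096-byte minimum, then round up to a multiple of 16.
fn normalize_dict_size(dict_size: u64) -> u64 {
    let clamped = dict_size.max(4096);
    (clamped + 15) & !15
}

fn main() {
    assert_eq!(normalize_dict_size(1), 4096);
    assert_eq!(normalize_dict_size(4096), 4096);
    assert_eq!(normalize_dict_size(4097), 4112);
    // Memusage computed from the normalized value now matches what the
    // allocation path will actually request.
    println!("ok");
}
```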

In `@liblzma-rs/src/lz/lz_encoder.rs`:
- Around line 408-493: The inner lz callbacks (e.g., (*coder).lz.code,
(*coder).lz.end, (*coder).lz.options_update, (*coder).lz.set_out_limit) must be
cleared when reusing an existing coder so stale function pointers can't survive
into a new lz_init call; modify the initialization path around lz_init (in
lz_encoder setup where `let mut coder: *mut lzma_coder = (*next).coder as *mut
lzma_coder;` and before calling `lz_init`) to explicitly set those (*coder).lz.*
fields to None (or otherwise reset them) for both newly-allocated and reused
coder instances, then call `lz_init` and validate its installed callbacks (the
existing `if (*coder).lz.code.is_none()` check) to ensure the new initializer
provided fresh callbacks.

Comment thread AGENTS.md Outdated
Comment thread Cargo.lock.msrv
Comment thread Cargo.toml
categories = ["compression", "api-bindings"]
edition = "2021"
rust-version = "1.60"
rust-version = "1.63"

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat Cargo.toml | head -30

Repository: youknowone/xz2-rs

Length of output: 1229


🏁 Script executed:

rg "let\s+\w+\s+else\s*{" --multiline -c

Repository: youknowone/xz2-rs

Length of output: 164


🏁 Script executed:

fd "block_buffer_decoder.rs|stream_buffer_decoder.rs" --type f

Repository: youknowone/xz2-rs

Length of output: 153


🌐 Web query:

Rust let else syntax stabilization version

💡 Result:

The let-else syntax in Rust was stabilized in version 1.65.0.



🏁 Script executed:

rg "let\s+\w+\s+else\s*\{" liblzma-rs/src/common/block_buffer_decoder.rs liblzma-rs/src/common/stream_buffer_decoder.rs -c

Repository: youknowone/xz2-rs

Length of output: 43


🏁 Script executed:

cat -n liblzma-rs/src/common/block_buffer_decoder.rs | grep -A 2 "let.*else"

Repository: youknowone/xz2-rs

Length of output: 248


🏁 Script executed:

cat -n liblzma-rs/src/common/stream_buffer_decoder.rs | grep -A 2 "let.*else"

Repository: youknowone/xz2-rs

Length of output: 477


Raise the advertised MSRV to at least Rust 1.65.

The Rust backend code uses let ... else syntax (in liblzma-rs/src/common/block_buffer_decoder.rs line 36 and liblzma-rs/src/common/stream_buffer_decoder.rs lines 44 and 74), which was stabilized in Rust 1.65. The declared rust-version = "1.63" makes the default build fail on the advertised MSRV.

🛠️ Suggested fix
-rust-version = "1.63"
+rust-version = "1.65"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
rust-version = "1.63"
rust-version = "1.65"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Cargo.toml` at line 16, Update the advertised MSRV in Cargo.toml to at least
"1.65" because the code in liblzma-rs uses the `let ... else` syntax (see
liblzma-rs/src/common/block_buffer_decoder.rs and
liblzma-rs/src/common/stream_buffer_decoder.rs). Edit the rust-version key in
Cargo.toml (the `rust-version = "1.63"` entry) and set it to "1.65" or newer to
match the stabilized language feature and avoid build failures.

Comment thread Cargo.toml
Comment on lines +35 to +38
[[bench]]
name = "backend_comparison"
harness = false
required-features = ["bench"]

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's find and read the Cargo.toml file
find . -name "Cargo.toml" -type f | head -5

Repository: youknowone/xz2-rs

Length of output: 171




🏁 Script executed:

# Read the Cargo.toml file
cat Cargo.toml

Repository: youknowone/xz2-rs

Length of output: 2334




🏁 Script executed:

# Find the backend_comparison.rs file
find . -name "backend_comparison.rs" -type f

Repository: youknowone/xz2-rs

Length of output: 92




🏁 Script executed:

# Read backend_comparison.rs to check its imports
cat benches/backend_comparison.rs

Repository: youknowone/xz2-rs

Length of output: 5437




Enable both backends in the benchmark feature.

backend_comparison.rs unconditionally imports both liblzma_c_sys and liblzma_sys (lines 5-6), but the bench feature only enables Criterion. Running cargo bench --features bench will fail to compile because both optional backend dependencies remain disabled.

Suggested fix
-bench = ["criterion"]
+bench = ["criterion", "rust-backend", "c-backend"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Cargo.toml` around lines 35 - 38, The benchmark feature doesn't enable the
two optional backends that backend_comparison.rs imports (liblzma_c_sys and
liblzma_sys), so add those dependency feature flags to the bench feature in
Cargo.toml: update the "bench" feature to include "liblzma_c_sys" and
"liblzma_sys" (so running cargo bench --features bench builds both backends used
by backend_comparison.rs).

Comment thread liblzma-rs/src/alloc.rs
Comment on lines +24 to +25
let offset = match header_size.checked_add(align - 1) {
Some(value) => round_up(value, align),

⚠️ Potential issue | 🟠 Major

Fix the aligned offset calculation.

Line 24 already performs the + align - 1 step. Passing that value into round_up() adds it a second time, so every allocation gets an extra alignment unit of padding and large requests can fail earlier than necessary.

Suggested fix
-    let offset = match header_size.checked_add(align - 1) {
-        Some(value) => round_up(value, align),
+    let offset = match header_size.checked_add(align - 1) {
+        Some(value) => value & !(align - 1),
         None => return core::ptr::null_mut(),
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/alloc.rs` around lines 24 - 25, The offset calculation
double-applies alignment: header_size is already increased by align - 1 then
passed to round_up(), causing extra padding; change the match arm so you compute
offset = round_up(header_size, align) (and still perform an overflow check on
header_size, e.g. via header_size.checked_add(align - 1) solely to detect
overflow) instead of passing the already-adjusted value into round_up; update
the code that sets offset (the match on header_size.checked_add(align - 1) and
the call to round_up) to use round_up(header_size, align) while preserving
overflow handling.
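The double-padding this comment describes is easy to demonstrate numerically. The `round_up` below is an assumed implementation of the crate's helper (the usual add-then-mask form), used only to show why pre-bumping the input by `align - 1` wastes an extra alignment unit.

```rust
// Assumed shape of the crate's round_up helper: it already performs the
// "+ align - 1" step internally before masking.
fn round_up(value: usize, align: usize) -> usize {
    (value + align - 1) & !(align - 1)
}

fn main() {
    let header_size = 16usize;
    let align = 16usize;

    // Buggy shape from the review: bump first, then round again.
    let double = round_up(header_size + (align - 1), align);

    // Either correct shape: round once, or bump-and-mask once.
    let fixed_a = round_up(header_size, align);
    let fixed_b = (header_size + (align - 1)) & !(align - 1);

    assert_eq!(double, 32); // one full extra alignment unit of padding
    assert_eq!(fixed_a, 16);
    assert_eq!(fixed_b, 16);
    println!("ok");
}
```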

Comment on lines +116 to +127
let ret: lzma_ret = (*coder).block_encoder.code.unwrap()(
(*coder).block_encoder.coder,
allocator,
in_0,
in_pos,
in_size,
out,
out_pos,
out_size,
CONVERT[action as usize],
);
if ret != LZMA_STREAM_END || action == LZMA_SYNC_FLUSH {

⚠️ Potential issue | 🟠 Major

Potential panic from .unwrap() on block_encoder.code.

Line 116 uses .unwrap() which will panic if block_encoder.code is None, causing undefined behavior in FFI context.

🛡️ Proposed fix
-                let ret: lzma_ret = (*coder).block_encoder.code.unwrap()(
+                let Some(code_fn) = (*coder).block_encoder.code else {
+                    return LZMA_PROG_ERROR;
+                };
+                let ret: lzma_ret = code_fn(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_encoder.rs` around lines 116 - 127, The code
currently calls .unwrap() on block_encoder.code which can panic across the FFI
boundary; change the call site (the block that computes ret using
(*coder).block_encoder.code) to first check whether (*coder).block_encoder.code
is Some, using if let or match, and if it is None return an appropriate lzma_ret
error (e.g., LZMA_PROG_ERROR) instead of panicking; when Some, call the function
pointer with the existing arguments (block_encoder.coder, allocator, in_0,
in_pos, in_size, out, out_pos, out_size, CONVERT[action as usize]) and assign to
ret as before, ensuring no unwrap() is used directly.

Comment on lines +144 to +158
4 => {
let ret_0: lzma_ret = (*coder).index_encoder.code.unwrap()(
(*coder).index_encoder.coder,
allocator,
core::ptr::null(),
core::ptr::null_mut(),
0,
out,
out_pos,
out_size,
LZMA_RUN,
);
if ret_0 != LZMA_STREAM_END {
return ret_0;
}

⚠️ Potential issue | 🟠 Major

Same .unwrap() issue on index_encoder.code.

Line 145 uses .unwrap() on (*coder).index_encoder.code, same FFI panic risk.

🛡️ Proposed fix
-                let ret_0: lzma_ret = (*coder).index_encoder.code.unwrap()(
+                let Some(code_fn) = (*coder).index_encoder.code else {
+                    return LZMA_PROG_ERROR;
+                };
+                let ret_0: lzma_ret = code_fn(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_encoder.rs` around lines 144 - 158, The code
calls .unwrap() on (*coder).index_encoder.code which can panic; change it to
check for presence and handle the None case explicitly. Locate the call site in
stream_encoder.rs where index_encoder.code is used (inside the match arm for
variant 4) and replace the .unwrap() with a safe match or if-let that binds the
function (e.g., code_fn) and calls it; if index_encoder.code is None, return an
appropriate lzma_ret error such as LZMA_PROG_ERROR (or another suitable sentinel
used elsewhere) instead of panicking. Ensure the bound function is then invoked
with the same arguments previously passed to the unwrap result.

Comment on lines +19 to +25
if (*a).backward_size != LZMA_VLI_UNKNOWN && (*b).backward_size != LZMA_VLI_UNKNOWN {
if !is_backward_size_valid(a) || !is_backward_size_valid(b) {
return LZMA_PROG_ERROR;
}
if (*a).backward_size != (*b).backward_size {
return LZMA_DATA_ERROR;
}

⚠️ Potential issue | 🟠 Major

Validate known backward_size values independently.

Line 19 currently gates both validation and comparison on both sides being known. That lets an invalid non-LZMA_VLI_UNKNOWN backward_size slip through whenever the other side is unknown, and the function returns LZMA_OK for malformed input.

🐛 Suggested fix
-    if (*a).backward_size != LZMA_VLI_UNKNOWN && (*b).backward_size != LZMA_VLI_UNKNOWN {
-        if !is_backward_size_valid(a) || !is_backward_size_valid(b) {
-            return LZMA_PROG_ERROR;
-        }
-        if (*a).backward_size != (*b).backward_size {
-            return LZMA_DATA_ERROR;
-        }
-    }
+    if (*a).backward_size != LZMA_VLI_UNKNOWN && !is_backward_size_valid(a) {
+        return LZMA_PROG_ERROR;
+    }
+    if (*b).backward_size != LZMA_VLI_UNKNOWN && !is_backward_size_valid(b) {
+        return LZMA_PROG_ERROR;
+    }
+    if (*a).backward_size != LZMA_VLI_UNKNOWN
+        && (*b).backward_size != LZMA_VLI_UNKNOWN
+        && (*a).backward_size != (*b).backward_size
+    {
+        return LZMA_DATA_ERROR;
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_common.rs` around lines 19 - 25, The
current check only validates backward_size when both sides are known, letting an
invalid known value pass if the other side is LZMA_VLI_UNKNOWN; change the logic
in the function containing the backward_size checks so that each side is
validated independently: call is_backward_size_valid(a) and return
LZMA_PROG_ERROR if (*a).backward_size != LZMA_VLI_UNKNOWN and
is_backward_size_valid(a) is false, and do the same for (*b).backward_size; only
after both backward_size fields are known (both != LZMA_VLI_UNKNOWN) perform the
equality check and return LZMA_DATA_ERROR if they differ. Ensure you reference
the symbols backward_size, LZMA_VLI_UNKNOWN, is_backward_size_valid,
LZMA_PROG_ERROR, and LZMA_DATA_ERROR in the updated condition flow.
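The independent-validation flow this prompt describes can be sketched as a small pure function. The `valid` predicate and the constants are simplifications of `is_backward_size_valid` and the real `lzma_ret` / `LZMA_VLI_UNKNOWN` values; only the ordering of the checks is the point.

```rust
const VLI_UNKNOWN: u64 = u64::MAX;
const LZMA_OK: i32 = 0;
const LZMA_DATA_ERROR: i32 = 9;
const LZMA_PROG_ERROR: i32 = 11;

// Simplified stand-in for is_backward_size_valid: 4-byte aligned, >= 4.
fn valid(bs: u64) -> bool {
    bs >= 4 && bs % 4 == 0
}

fn compare_backward_sizes(a: u64, b: u64) -> i32 {
    // Validate each known side on its own, so a malformed value is
    // caught even when the other side is still unknown.
    if a != VLI_UNKNOWN && !valid(a) {
        return LZMA_PROG_ERROR;
    }
    if b != VLI_UNKNOWN && !valid(b) {
        return LZMA_PROG_ERROR;
    }
    // Only compare once both sides are known.
    if a != VLI_UNKNOWN && b != VLI_UNKNOWN && a != b {
        return LZMA_DATA_ERROR;
    }
    LZMA_OK
}

fn main() {
    // Previously this slipped through because b was unknown.
    assert_eq!(compare_backward_sizes(3, VLI_UNKNOWN), LZMA_PROG_ERROR);
    assert_eq!(compare_backward_sizes(8, 8), LZMA_OK);
    assert_eq!(compare_backward_sizes(8, 12), LZMA_DATA_ERROR);
    println!("ok");
}
```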

Comment on lines +13 to +16
pub unsafe extern "C" fn lzma_stream_header_decode(
options: *mut lzma_stream_flags,
in_0: *const u8,
) -> lzma_ret {

⚠️ Potential issue | 🟠 Major

Guard options and in_0 at the FFI boundary.

Both entrypoints dereference these pointers via memcmp, read32le, and stream_flags_decode without validation, so null input is immediate UB.

Suggested fix
 pub unsafe extern "C" fn lzma_stream_header_decode(
     options: *mut lzma_stream_flags,
     in_0: *const u8,
 ) -> lzma_ret {
+    if options.is_null() || in_0.is_null() {
+        return LZMA_PROG_ERROR;
+    }
     if memcmp(
 pub unsafe extern "C" fn lzma_stream_footer_decode(
     options: *mut lzma_stream_flags,
     in_0: *const u8,
 ) -> lzma_ret {
+    if options.is_null() || in_0.is_null() {
+        return LZMA_PROG_ERROR;
+    }
     if memcmp(

Also applies to: 47-50

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/common/stream_flags_decoder.rs` around lines 13 - 16, Both FFI
entrypoints (lzma_stream_header_decode and the analogous footer entrypoint)
currently dereference options and in_0 (used by memcmp, read32le, and
stream_flags_decode) without checks; add explicit null-pointer guards at the
start of each extern "C" function to validate that both options and in_0 are
non-null and return an appropriate lzma_ret error (e.g., LZMA_PROG_ERROR) when a
null pointer is detected so we avoid immediate UB before calling
memcmp/read32le/stream_flags_decode.

Comment thread liblzma-rs/src/lz/lz_encoder.rs Outdated
Comment on lines +317 to +343
pub extern "C" fn lzma_lz_encoder_memusage(lz_options: *const lzma_lz_options) -> u64 {
let mut mf: lzma_mf = lzma_mf_s {
buffer: core::ptr::null_mut(),
size: 0,
keep_size_before: 0,
keep_size_after: 0,
offset: 0,
read_pos: 0,
read_ahead: 0,
read_limit: 0,
write_pos: 0,
pending: 0,
find: None,
skip: None,
hash: core::ptr::null_mut(),
son: core::ptr::null_mut(),
cyclic_pos: 0,
cyclic_size: 0,
hash_mask: 0,
depth: 0,
nice_len: 0,
match_len_max: 0,
action: LZMA_RUN,
hash_count: 0,
sons_count: 0,
};
if unsafe { lz_encoder_prepare(::core::ptr::addr_of_mut!(mf), core::ptr::null(), lz_options) } {

⚠️ Potential issue | 🔴 Critical

lzma_lz_encoder_memusage needs a null check for lz_options.

This is a safe pub extern "C" function, but it forwards lz_options into lz_encoder_prepare(), which dereferences the pointer immediately. A null pointer should make the function return the existing UINT64_MAX error sentinel, not trigger UB.

Possible fix
 pub extern "C" fn lzma_lz_encoder_memusage(lz_options: *const lzma_lz_options) -> u64 {
+    if lz_options.is_null() {
+        return UINT64_MAX;
+    }
+
     let mut mf: lzma_mf = lzma_mf_s {
         buffer: core::ptr::null_mut(),
         size: 0,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@liblzma-rs/src/lz/lz_encoder.rs` around lines 317-343,
lzma_lz_encoder_memusage currently forwards lz_options into lz_encoder_prepare,
which dereferences it. Add an explicit null check at the start of
lzma_lz_encoder_memusage that returns UINT64_MAX when lz_options is null;
otherwise call lz_encoder_prepare as before. The check must run before
lz_encoder_prepare is invoked.
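
The sentinel pattern the fix relies on can be shown with a toy: a *safe* `extern "C"` function that checks for NULL and returns `u64::MAX` on error, matching liblzma's memusage convention. `demo_options` and the size formula are illustrative, not the PR's real code.

```rust
pub const UINT64_MAX: u64 = u64::MAX;

// Illustrative options struct; liblzma's lzma_lz_options has more fields.
#[allow(non_camel_case_types)]
#[repr(C)]
pub struct demo_options {
    pub dict_size: u64,
}

#[no_mangle]
pub extern "C" fn demo_memusage(opts: *const demo_options) -> u64 {
    // The function is not marked `unsafe`, so it must defend itself
    // before the unsafe dereference below.
    if opts.is_null() {
        return UINT64_MAX; // liblzma's "invalid options" sentinel
    }
    let opts = unsafe { &*opts };
    // Toy estimate: dictionary plus some fixed bookkeeping overhead.
    opts.dict_size.saturating_add(1024)
}

fn main() {
    // NULL yields the sentinel instead of UB.
    assert_eq!(demo_memusage(core::ptr::null()), UINT64_MAX);

    // A valid pointer produces a normal estimate.
    let o = demo_options { dict_size: 4096 };
    assert_eq!(demo_memusage(&o), 5120);
    println!("ok");
}
```

Returning a sentinel rather than panicking matters here: panics must not unwind across an `extern "C"` boundary, so an early-return error code is the only safe way for this function to report bad input to C callers.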

…h Rust use imports, add C ABI wrappers to liblzma-rs-sys

- Remove all 167 #[no_mangle] attributes from 58 files in liblzma-rs
- Replace 27 extern "C" {} import blocks with proper Rust use statements
- Replace opaque struct definitions with re-exports from canonical modules
- Remove duplicate lzma_lz_options struct definitions across 6 files
- Add 46 new #[no_mangle] extern "C" wrapper functions to liblzma-rs-sys
- Add 4 type re-exports (lzma_block, lzma_index_iter, lzma_index_iter_mode, lzma_index_hash)
- Only 2 extern "C" {} blocks remain: types.rs (pthread/libc) and common.rs (malloc/calloc/free)
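
The crate split described above can be sketched as follows; the module names and the CRC routine are stand-ins for illustration, not the PR's actual code. The core crate exports plain Rust functions with no `#[no_mangle]`, and the -sys layer is the only place a C ABI symbol is emitted.

```rust
mod core_crate {
    // "liblzma-rs" side: plain Rust, reachable via ordinary `use`.
    pub fn crc32_byte(crc: u32, byte: u8) -> u32 {
        // Classic bit-at-a-time CRC-32 (reflected polynomial).
        let mut crc = crc ^ u32::from(byte);
        for _ in 0..8 {
            crc = if crc & 1 != 0 {
                (crc >> 1) ^ 0xEDB8_8320
            } else {
                crc >> 1
            };
        }
        crc
    }
}

// "liblzma-rs-sys" side: a thin #[no_mangle] wrapper delegating to
// the core crate, so only this layer defines exported C symbols.
#[no_mangle]
pub extern "C" fn demo_crc32_byte(crc: u32, byte: u8) -> u32 {
    core_crate::crc32_byte(crc, byte)
}

fn main() {
    // CRC-32 (IEEE) of a single 0x00 byte, with the usual !0 pre/post
    // conditioning; the wrapper and the core fn agree by construction.
    let crc = !demo_crc32_byte(!0u32, 0x00);
    assert_eq!(crc, 0xD202_EF8D);
    println!("ok");
}
```

Keeping all `#[no_mangle]` symbols in one crate avoids duplicate-symbol clashes when the pure-Rust crate is linked alongside the C liblzma, which is presumably why the commit consolidates the 46 wrappers there.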
@youknowone youknowone merged commit 550061a into main Apr 1, 2026
47 checks passed