FAQ - Geyser gRPC Stream
Q. I've only used WebSocket before. Can I use gRPC? Do you have samples?
Yes. You can quickly test and start developing with gRPC using SLV.
Check out our gRPC Quickstart Guide for details.
Q. Can I register two IP addresses?
You can use one endpoint per subscription. If you wish to use two IP addresses, you will need two separate subscriptions.
Q. Are there any limitations?
In addition to connection limits per plan, shared endpoints (Standard, Premium) have filter limitations. For detailed specifications, please refer to:
To maximize performance, we recommend splitting filters into smaller segments and retrieving data in parallel. This approach helps distribute processing load and reduce latency.
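As an illustration of the splitting idea, here is a minimal TypeScript sketch. The `openAccountStream` helper is hypothetical and stands in for whichever gRPC client you use; the point is simply to divide one large account filter into segments and run one stream per segment.

```typescript
// Hypothetical helper: opens one gRPC subscription whose account filter
// contains only the given addresses. Swap in your actual client call here.
async function openAccountStream(accounts: string[]): Promise<void> {
  console.log(`subscribing with ${accounts.length} account filters`);
  // e.g. client.subscribe({ accounts: { myFilter: { account: accounts } } })
}

// Split a large watch list into fixed-size segments.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// One stream per segment: smaller filters, processed in parallel.
async function subscribeInParallel(watchedAccounts: string[], segments: number) {
  const size = Math.ceil(watchedAccounts.length / segments);
  await Promise.all(chunk(watchedAccounts, size).map(openAccountStream));
}
```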
With a dedicated gRPC node (€1280/m and up), there are no filter limitations, and no resources are shared with other customers. This ensures you always have the highest possible performance.
For more details, please contact us on Discord.
Q. I need latency of ~400ms or better.
To achieve latency within approximately 400ms, consider these essential points:
- Realistic Understanding of Ping Values: Ping values indicate ideal conditions and do not reflect actual latency in streaming communications, which typically experience around five times the ping latency. For example, a ping of 100ms across continents realistically results in about 500ms of latency. Thus, infrastructure must be established within the same region to achieve ~400ms latency.
- Typical Ping Value Reference:
- Same network: ~0.1ms
- Private Network Interconnect (PNI): ~0.2ms
- Same data center: ~0.3ms
- Same city: ~1ms
- Neighboring country: ~5–10ms
- Intercontinental: ~100–300ms
- Avoiding the Pitfall of Average Latency: Solana validators are geographically dispersed globally, and the leader schedule changes randomly with each epoch. Relying on average latency to achieve ~400ms is impractical. Instead, you should precisely track validator schedules in your specific region to identify slots with the lowest latency. To consistently achieve minimal latency, infrastructure across all relevant regions is required. Within the same region, data acquisition can occur in tens of milliseconds, with transmission possible in just a few milliseconds.
- Tracking the Leader Schedule: Continuously monitor the leader validator schedule for your region using tools such as the Solana Beach API or the Solana RPC APIs (getSlotLeaders and getClusterNodes). This allows you to identify optimal trading slots effectively.
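For reference, here is a minimal TypeScript sketch using @solana/web3.js (the RPC URL below is a placeholder; replace it with your own endpoint). It pulls the upcoming slot leaders and joins them against the gossip addresses from getClusterNodes, so you can decide, using your own IP-to-region data, which slots are led by validators near your infrastructure.

```typescript
import { Connection } from "@solana/web3.js";

const RPC_URL = "https://api.mainnet-beta.solana.com"; // placeholder: use your own endpoint

async function upcomingLeaders(limit = 100) {
  const connection = new Connection(RPC_URL, "confirmed");
  const currentSlot = await connection.getSlot();

  // Leader identity for each of the next `limit` slots.
  const leaders = await connection.getSlotLeaders(currentSlot, limit);

  // Gossip address per validator identity; map these IPs to regions yourself
  // to decide which slots are "close" to your servers.
  const nodes = await connection.getClusterNodes();
  const gossipByPubkey = new Map<string, string | null>();
  nodes.forEach((n) => gossipByPubkey.set(n.pubkey, n.gossip));

  leaders.forEach((leader, i) => {
    const gossip = gossipByPubkey.get(leader.toBase58()) ?? "unknown";
    console.log(`slot ${currentSlot + i}: ${leader.toBase58()} @ ${gossip}`);
  });
}

upcomingLeaders().catch(console.error);
```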

Q. How can I achieve zero-block (zero-slot) trading?
Successfully achieving zero-block (zero-slot) trading requires more sophisticated strategies, as follows:
- Identifying Opportunity Zones: Solana validators are distributed globally, and it's physically impossible to achieve optimal latency for every slot. Therefore, track validator leader schedules in the region where your infrastructure is located and identify your optimal opportunity zones. Deploying infrastructure across multiple regions can also be beneficial. Frankfurt, for example, is particularly popular due to its high concentration of validators, leading to more frequent leader selection and thus greater trading opportunities.
- Implementing Dedicated Nodes: If you struggle to compete, consider deploying dedicated nodes. Shared nodes experience latency due to traffic from other users, and thus are not recommended. Furthermore, placing your dedicated node within the same network as your application significantly reduces network latency and optimizes performance.
Q. Can I use a specific endpoint?
To maintain a low-latency environment, our system automatically selects the closest available node. If you wish to use a specific endpoint, we recommend renting a server located nearest to that endpoint.
Q. I'm getting a 401 error. Why?
To maintain a low-latency environment, we implement IP restrictions. If you don't have a subscription or your IP isn't registered, you'll receive a 401 error.
Please double-check if your registered IP matches your current access IP.
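If you are unsure which IP your requests actually leave from (NAT, proxies, and egress gateways are common culprits), a quick check like the following TypeScript snippet helps; api.ipify.org is just one example of a public "what is my IP" service.

```typescript
// Prints the public IP your requests originate from, so you can compare it
// against the IP registered for your subscription.
async function currentPublicIp(): Promise<string> {
  const res = await fetch("https://api.ipify.org?format=json");
  const body = (await res.json()) as { ip: string };
  return body.ip;
}

currentPublicIp().then((ip) => console.log(`Requests originate from ${ip}`));
```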
Q. I'm getting a 429 error. Why?
You have reached your plan’s connection limit.
If you encounter this error, consider upgrading your plan. If you require more connections than our premium plan provides, a dedicated gRPC node would be more suitable.
Q. Why are dedicated endpoints faster?
Shared endpoints distribute resources among multiple customers, causing increased latency as traffic grows. Server resources have physical limitations, meaning there is a finite amount of processing they can handle. When many requests arrive simultaneously, they must be processed sequentially, reducing overall response speeds.
While we optimize performance with various measures for shared endpoints, dedicated endpoints ensure exclusive resource use by you alone, entirely eliminating interference from other users. Consequently, dedicated endpoints consistently deliver stable and faster response times.
Additionally, dedicated endpoints offer communication options without TLS, such as HTTP. By skipping the TLS handshake, communication speeds improve significantly compared to HTTPS.
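As an illustration, with the Node gRPC library the difference comes down to the channel credentials you pass when constructing your client; `GeyserClient` below is a stand-in name for whatever generated stub you actually use.

```typescript
import * as grpc from "@grpc/grpc-js";

// TLS endpoint (shared/public): every new connection pays for a TLS handshake.
const tlsCreds = grpc.credentials.createSsl();

// Plaintext endpoint (dedicated node on a trusted/private network): no handshake.
const plaintextCreds = grpc.credentials.createInsecure();

// Pass whichever credentials object matches your endpoint when constructing
// your generated client, e.g. new GeyserClient("your-endpoint:port", plaintextCreds).
```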
Q. How can I achieve the lowest possible latency?
We highly recommend combining a dedicated gRPC node with our Bare-Metal server.
Both share the same network, allowing for private, zero-distance communication without traversing the internet. This setup achieves extremely low latency, typically around 0.1ms ping.
Please contact us on Discord for further details.
Q. What is the latency like?
Latency varies depending on the measurement method and your specific usage environment. Rather than focusing on exact numerical values, it's crucial to ensure that the latency meets your actual operational requirements.
We offer free trials across all our plans, enabling you to test performance directly in your real-world environment. Additionally, we provide easy-to-use tools in TypeScript and Rust for measuring latency. Feel free to utilize these tools alongside your free trial.
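As a rough baseline you can run yourself, the TypeScript sketch below measures request round-trip time from your own server using @solana/web3.js. It is not a substitute for our measurement tools (streaming latency behaves differently), but it quickly shows how far your machine sits from an endpoint; the RPC URL is a placeholder.

```typescript
import { Connection } from "@solana/web3.js";

const RPC_URL = "https://api.mainnet-beta.solana.com"; // placeholder: use your own endpoint

async function measureRoundTrip(samples = 20) {
  const connection = new Connection(RPC_URL, "confirmed");
  const timings: number[] = [];

  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await connection.getSlot(); // simple request/response round trip
    timings.push(performance.now() - start);
  }

  timings.sort((a, b) => a - b);
  console.log(`median round-trip: ${timings[Math.floor(samples / 2)].toFixed(1)} ms`);
}

measureRoundTrip().catch(console.error);
```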
Q. Is this RPC (gRPC, Shreds) faster than others?
We encourage you to try our free trial and compare the performance against other services. If you find our service slower, please let us know the specific conditions and competitors you've compared it against via Discord. We will identify the cause and improve the speed further.
We continually work on improving latency based on customer feedback. If you seek the fastest possible endpoint, please share detailed information with us. Providing specific metrics and comparison conditions against competitors allows us to deliver superior performance. This feedback-driven approach has consistently enabled us to enhance our services.
Q. Which plan offers the fastest performance?
Generally, our highest-tier plan provides the fastest performance due to superior CPUs, higher memory capacities, and robust hardware configurations.
We also offer customized solutions if you require even more powerful servers, but our standard plans are designed to deliver optimal price-to-performance ratios.
We are confident in providing world-class performance at every price level. If you find a faster provider within the same price range, please let us know so we can investigate and make improvements.
Q. I'm experiencing high latency. What can I do?
Latency heavily depends on your proximity to the endpoint. We recommend accessing from a server near the provided endpoint. The fastest connections are achieved with our Bare-Metal server and VPS service.
Q. Which is the fastest: WebSockets, gRPC, or Shreds?
Feedback from our customers consistently ranks speed as follows:
Shreds > gRPC > WebSockets
Please share your experience if you observe different results.
Q. Latency isn't what I expected.
Performance varies depending on the programming language used. Generally, language speed ranks:
Rust > Go > TypeScript (JavaScript) > Python
For detailed comparisons, see:
For maximum performance, we strongly recommend using Rust.