FAQ - Latency
Q. What is the latency like?
Latency varies depending on the measurement method and your specific usage environment. Rather than focusing on exact numerical values, it's crucial to ensure that the latency meets your actual operational requirements.
We offer free trials across all our plans, enabling you to test performance directly in your real-world environment. We also provide easy-to-use latency-measurement tools in TypeScript and Rust. Feel free to use these tools alongside your free trial.
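If you want a quick starting point before reaching for the full tools, a latency check can be as simple as timing repeated calls. Below is a minimal sketch, assuming any async call against your endpoint; the short timer used in the demo is a stand-in so the sketch runs as-is, and in practice you would pass a real request (e.g. a JSON-RPC POST via fetch):

```typescript
// Minimal latency sampler: times repeated invocations of an async call
// and reports min / median / max in milliseconds.
async function sampleLatencyMs(
  call: () => Promise<unknown>,
  samples = 10,
): Promise<{ min: number; median: number; max: number }> {
  const times: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await call(); // the request being measured
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return {
    min: times[0],
    median: times[Math.floor(times.length / 2)],
    max: times[times.length - 1],
  };
}

// Demo: a 5 ms timer stands in for a real request to your endpoint.
(async () => {
  const stats = await sampleLatencyMs(
    () => new Promise((resolve) => setTimeout(resolve, 5)),
    5,
  );
  console.log(stats);
})();
```

Reporting min and median rather than a single average helps separate network jitter from the steady-state latency you can actually plan around.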
Q. Is this RPC (gRPC, Shreds) faster than others?
We encourage you to use the free trial and benchmark our service against others. If you find ours slower, please tell us on Discord which providers you compared and under what conditions; we will identify the cause and work to close the gap.
We continually improve latency based on customer feedback. If you need the fastest possible endpoint, sharing specific metrics and comparison conditions lets us tune performance for your workload. This feedback-driven approach has consistently helped us improve the service.
Q. Which plan offers the fastest performance?
Generally, our highest-tier plan provides the fastest performance due to superior CPUs, higher memory capacities, and robust hardware configurations.
We also offer customized solutions if you require even more powerful servers, but our standard plans are designed to deliver optimal price-to-performance ratios.
We are confident in providing world-class performance at every price level. If you find a faster provider within the same price range, please let us know so we can investigate and make improvements.
Q. Why are dedicated endpoints faster?
Shared endpoints distribute resources among multiple customers, so latency increases as traffic grows. Server resources are finite: when many requests arrive simultaneously, they must be queued and processed in turn, which increases response times.
While we optimize performance with various measures for shared endpoints, dedicated endpoints ensure exclusive resource use by you alone, entirely eliminating interference from other users. Consequently, dedicated endpoints consistently deliver stable and faster response times.
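The effect of sequential processing can be made concrete with a toy model: if several requests arrive at once on a single worker with a fixed service time, each request waits behind all the ones ahead of it. The sketch below is an illustration under those simplifying assumptions (one worker, fixed service time), not a model of our actual servers:

```typescript
// Toy queueing model: n requests arrive simultaneously at one worker
// with a fixed per-request service time. Returns the mean time a
// request spends waiting plus being served, in milliseconds.
function meanResponseMs(n: number, serviceMs: number): number {
  // The k-th request (1-indexed) completes at k * serviceMs.
  let total = 0;
  for (let k = 1; k <= n; k++) total += k * serviceMs;
  return total / n;
}

// One request alone: 5 ms. Ten simultaneous requests: mean 27.5 ms,
// even though each individually still needs only 5 ms of work.
console.log(meanResponseMs(1, 5), meanResponseMs(10, 5));
```

This is why a dedicated endpoint helps: with no other tenants generating simultaneous requests, the queue ahead of you stays short.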
Additionally, dedicated endpoints can be accessed over plain HTTP without TLS. Skipping the TLS handshake noticeably shortens connection setup compared to HTTPS.
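The saving comes mostly from connection setup. As a back-of-the-envelope sketch, counting only protocol round trips on a fresh TCP connection (TLS 1.3 adds one round trip to the TCP handshake, TLS 1.2 adds two; these are protocol-level figures, not measurements of any specific endpoint):

```typescript
// Estimated connection-setup time derived from round-trip counts alone.
type TlsMode = "none" | "tls13" | "tls12";

function estimatedSetupMs(rttMs: number, tls: TlsMode): number {
  const tcpRtts = 1; // TCP three-way handshake: one RTT before data flows
  const tlsRtts = tls === "none" ? 0 : tls === "tls13" ? 1 : 2;
  return (tcpRtts + tlsRtts) * rttMs;
}

// With a 10 ms round trip: plain HTTP ~10 ms of setup,
// HTTPS with TLS 1.3 ~20 ms, HTTPS with TLS 1.2 ~30 ms.
console.log(
  estimatedSetupMs(10, "none"),
  estimatedSetupMs(10, "tls13"),
  estimatedSetupMs(10, "tls12"),
);
```

Keep-alive connections amortize this cost, so the gain from skipping TLS is largest for short-lived or frequently reopened connections.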
Q. I'm experiencing high latency. Why?
Are you accessing the endpoint from a physically close location? Distance significantly affects latency. We recommend accessing from servers close to the provided endpoint.
In theory, the fastest connections come from our Bare-Metal servers and VPS services, since they connect to the endpoints over our own network.
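To see why distance matters so much, physics alone sets a floor: signals in optical fiber propagate at roughly 200,000 km/s (about two-thirds of the speed of light in vacuum), so the round-trip distance divides out to a minimum RTT that no server tuning can beat. A rough sketch, using that approximate figure:

```typescript
// Lower bound on round-trip time imposed by distance alone, assuming
// fiber propagation at roughly 200,000 km/s. Real routes are longer
// than great-circle paths, so actual latency will be higher still.
function minRttMs(distanceKm: number): number {
  const fiberKmPerMs = 200; // ~200,000 km/s expressed per millisecond
  return (2 * distanceKm) / fiberKmPerMs;
}

// A client 6,000 km from the endpoint cannot see an RTT below 60 ms,
// no matter how fast the server responds.
console.log(minRttMs(6000)); // 60
```

This is why co-locating your client near the endpoint often buys more than any software-level optimization.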
Q. Which is the fastest among WebSockets, gRPC, and Shreds?
Feedback from many customers is quite consistent, and the performance order is as follows:
Shreds > gRPC > WebSockets
If you discover different results or conditions, please let us know.
Q. Latency isn't what I expected.
Performance can vary significantly depending on the programming language used. Typically, language speed comparisons rank as follows:
Rust > Go > TypeScript (JavaScript) > Python
For more detailed comparisons, please see the following resource:
We strongly recommend using Rust if maximum performance is your goal.