Recently, the official Redis website published a performance test report for RedisJSON (with RediSearch), and the results can fairly be described as crushing the other NoSQL stores. Below is the core of the report, starting with the conclusions:

  • For isolated writes, RedisJSON is 5.4x faster than MongoDB and more than 200x faster than ElasticSearch.

  • For isolated reads, RedisJSON is 12.7 times faster than MongoDB and more than 500 times faster than ElasticSearch.

In mixed-workload scenarios, real-time updates don’t impact search and read performance with RedisJSON, while they do with ElasticSearch. Here are the specifics:

  • RedisJSON* supports about 50x more operations/sec than MongoDB and about 7x more than ElasticSearch.

  • RedisJSON* has about 90x lower latency than MongoDB and 23.7x lower latency than ElasticSearch.

In addition, RedisJSON’s read, write, and search latencies under load are far more stable than ElasticSearch’s and MongoDB’s at higher percentiles. As the write ratio increases, RedisJSON handles higher and higher overall throughput, while ElasticSearch’s achievable overall throughput decreases.

Query engine

As mentioned earlier, the development of RediSearch and RedisJSON places a strong emphasis on performance. For each release, we want to make sure developers get a stable, performant product. To this end, we have built analysis tools and probes for performance profiling.

And every time we release a new version, we keep improving performance. Notably for RediSearch, version 2.2 is 1.7 times faster than 2.0 in both load and query performance, while also improving throughput and data-loading latency.

1. Load Optimization

The next two graphs show the results of running the NYC taxi benchmark. This benchmark measures fundamental metrics such as throughput and load time.

As you can see from these charts, each new version of RediSearch brings a substantial performance improvement.

2. Full-Text Search Optimization

To evaluate search performance, we indexed 5.9 million Wikipedia abstracts. Then we ran a panel of full-text search queries and got the results shown in the following figure.


As can be seen from the figure above, by migrating from v2.0 to v2.2, write, read, and search latencies over the same data all improve substantially (see the latency graph), which in turn raises the achievable throughput of running Search and JSON.

Comparison with other frameworks

To evaluate the performance of RedisJSON, we decided to compare it to MongoDB and ElasticSearch, both of which, like RedisJSON, offer document storage, local availability, cloud availability, professional support, and scalability and performance.

> We used the well-established YCSB standard for the test comparisons, which can evaluate different products on common workloads, measuring latency and throughput curves up to saturation. In addition to the CRUD YCSB operations, we added a two-word search operation specifically to help developers, system architects, and DevOps practitioners find the best search engine for their use case.
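As an illustrative sketch (not the report's actual files), a YCSB workload mixing CRUD with the added search operation could be configured along these lines. The standard YCSB keys (`recordcount`, `readproportion`, etc.) are real; `searchproportion` is an assumed name for however the extended YCSB binding exposes its custom search operation:

```properties
# Hypothetical YCSB workload: 65% search, 35% read, 0% update.
recordcount=1000000
operationcount=1000000
readproportion=0.35
updateproportion=0
insertproportion=0
scanproportion=0
# Assumed key for the custom two-word full-text search operation:
searchproportion=0.65
```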

1. Benchmark

For this test, we used the following software environment:

  • MongoDB v5.0.3

  • ElasticSearch 7.15

  • RedisJSON (RediSearch 2.2+RedisJSON 2.0)

This benchmark was run on Amazon Web Services instances; all three solutions are distributed databases and are most commonly deployed in a distributed manner in production. That’s why all products use the same generic m5d.8xlarge VMs with local SSDs, and each setup consists of four VMs: one client plus three database servers. Both the benchmark client and the database servers run on separate m5d.8xlarge instances under optimal network conditions, tightly packed in a single Availability Zone for the low latency and stable network performance required for steady-state analysis.

The tests were performed on a three-node cluster, with the following deployment details:

  • MongoDB 5.0.3: Primary-Secondary-Secondary. Replicas are used to increase read capacity and allow for lower latency reads. To support text search queries on string content, a text index is created on the search field.

  • ElasticSearch 7.15: a 15-shard setup with query caching enabled, using a RAID 0 array of 2 NVMe-based local SSDs for higher file-system performance. Of all the sharding variants we tried for Elastic, these 15 shards gave the best achievable performance results.

  • RedisJSON*: OSS Redis Cluster v6.2.6 with 27 shards, evenly distributed across the three nodes, loaded with the RediSearch 2.2 and RedisJSON 2.0 OSS modules.
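For illustration, the MongoDB text index mentioned above and a comparable JSON index on RediSearch 2.2 + RedisJSON 2.0 could be created roughly as follows; the collection, index, and field names here are made up for the example:

```
// MongoDB (mongosh): text index on the search field
db.docs.createIndex({ text: "text" })

// RediSearch 2.2 + RedisJSON 2.0 (redis-cli): index JSON documents
FT.CREATE idx ON JSON PREFIX 1 doc: SCHEMA $.text AS text TEXT
```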

In addition to this primary benchmark/profiling scenario, we ran benchmarks on network, memory, CPU, and I/O to understand the underlying network and virtual machine characteristics. Network performance was kept below the measured bandwidth and PPS limits throughout the benchmark set to produce stable, ultra-low-latency network transmissions (p99 < 100 μs per packet).

> Next, we’ll start with isolated operational performance (100% write and 100% read) and end with a mixed set of workloads that simulates real-world application scenarios.

2. 100% write benchmark

As shown in the chart below, the benchmark shows that RedisJSON* ingests 8.8x faster than ElasticSearch and 1.8x faster than MongoDB, while maintaining sub-millisecond latency for each operation. Notably, 99% of Redis requests complete in less than 1.5 milliseconds.

In addition, RedisJSON* is the only solution we tested that automatically updates its indexes on every write. This means that any subsequent search query will find the updated document. ElasticSearch doesn’t have this fine-grained capability; it places ingested documents in an internal queue that is flushed every N documents or every M seconds by the server (not controlled by the client). They call this approach near real-time (NRT). The Apache Lucene library, which implements the full-text capabilities of ElasticSearch, is designed to search quickly, but its indexing process is complex and heavyweight. As these write benchmark charts show, ElasticSearch pays a significant cost for this “design” limitation.
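The difference between per-write indexing and NRT refresh can be illustrated with a toy sketch. This is purely illustrative Python, not the actual RedisJSON or ElasticSearch internals: one index is searchable immediately after each write, the other buffers writes until a periodic refresh.

```python
# Illustrative sketch: immediate per-write indexing vs. near-real-time (NRT)
# buffered indexing. Not real RedisJSON/ElasticSearch code.

class ImmediateIndex:
    """Index updated on every write, so searches always see the latest doc."""
    def __init__(self):
        self.index = {}

    def write(self, doc_id, doc):
        self.index[doc_id] = doc          # searchable immediately

    def search(self, doc_id):
        return self.index.get(doc_id)

class NRTIndex:
    """Writes land in a buffer; searches only see them after refresh()."""
    def __init__(self):
        self.index = {}
        self.buffer = {}

    def write(self, doc_id, doc):
        self.buffer[doc_id] = doc         # queued, not yet searchable

    def refresh(self):                    # server-side: every N docs / M secs
        self.index.update(self.buffer)
        self.buffer.clear()

    def search(self, doc_id):
        return self.index.get(doc_id)

immediate, nrt = ImmediateIndex(), NRTIndex()
immediate.write("doc1", {"title": "hello"})
nrt.write("doc1", {"title": "hello"})
print(immediate.search("doc1"))   # visible right away
print(nrt.search("doc1"))         # not visible until the next refresh
nrt.refresh()
print(nrt.search("doc1"))         # now visible
```

The sketch shows why a client of an NRT system can read stale search results between refreshes, while a per-write index never exposes that window.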

Combining the latency and throughput improvements, RedisJSON* is 5.4x faster than MongoDB and more than 200x faster than ElasticSearch for isolated writes.

3. 100% read benchmark

Similar to writes, we can observe that Redis performs best on reads, allowing 15.8x more reads than ElasticSearch and 2.8x more than MongoDB, while maintaining sub-millisecond latency across the entire latency range, as shown in the table below.

When combined with latency and throughput improvements, RedisJSON* is 12.7x faster than MongoDB and more than 500x faster than ElasticSearch for isolated reads.

4. Mixed read/write/search benchmark

Real-world application workloads are almost always a mix of read, write, and search queries. Therefore, it is more important to understand the resulting mixed-workload throughput curve as you approach saturation.

As a starting point, we consider a scenario of 65% search and 35% read, which represents a common real-world situation in which more searches/queries are performed than direct reads. The initial combination of 65% search, 35% read, and 0% update also happens to produce equal throughput for ElasticSearch and RedisJSON*. Nevertheless, YCSB workloads let you specify the search/read/update ratio to match your requirements.

In each test variant, we added 10% writes to the mix and reduced the search and read percentages in the same proportion. The goal of these variants is to understand how each product handles real-time updates of data, which we consider the de facto architectural goal: writes are immediately committed to the index, and reads are always up to date.
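The resulting ratios for each variant can be derived with a few lines of arithmetic. This is a hedged sketch of the scheme described above, under the assumption that search and read keep their original 65:35 split as updates grow:

```python
# Sketch: derive the mixed-workload ratios. Starting from
# 65% search / 35% read / 0% update, each variant adds 10% updates and
# shrinks search and read proportionally (preserving the 65:35 split).

def workload_mix(update_pct):
    remaining = 100 - update_pct
    return {
        "search": round(remaining * 0.65, 1),
        "read": round(remaining * 0.35, 1),
        "update": float(update_pct),
    }

for upd in range(0, 60, 10):
    print(workload_mix(upd))
```

For example, the 10%-update variant works out to 58.5% search and 31.5% read, and every variant still sums to 100%.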

As you can see in the chart, constantly updating data and increasing the write ratio on RedisJSON* does not affect read or search performance and improves overall throughput. The more updates you make to your data, the greater the impact on ElasticSearch performance, ultimately resulting in slower reads and searches.

Looking at the evolution of the ops/sec that ElasticSearch can achieve as the update ratio grows from 0% to 50%, we noticed that it started at 10K ops/sec on the 0%-update benchmark and was severely affected, with throughput dropping 5x at the 50%-update benchmark.

Similar to what we observed in the single operational benchmark above, MongoDB search performance is orders of magnitude slower than RedisJSON* and ElasticSearch, with MongoDB having a maximum total throughput of 424 ops/sec and RedisJSON* having a maximum ops/sec of 16K.

Finally, for mixed workloads, RedisJSON* supports 50.8x more operations/sec than MongoDB and 7x more than ElasticSearch. If we focus our analysis on latency for each operation type during a mixed workload, RedisJSON* reduces latency by up to 91x compared to MongoDB and 23.7x compared to ElasticSearch.

5. Complete latency analysis

In addition to measuring the throughput curve up to the point where each solution saturates, it is important to perform a complete latency analysis under a sustainable load common to all solutions. This lets you understand which solution is the most stable in terms of latency across all the operations measured, and which is least susceptible to latency spikes caused by application logic (for example, Elastic query-cache misses). If you’d like to dive deeper into why we do this, Gil Tene provides an in-depth overview of latency measurement considerations.
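A minimal sketch of this kind of full latency analysis: summarize per-operation latency samples by percentile (p50/p99/p999) rather than by mean, because tail percentiles expose the spikes (GC pauses, cache misses) that a mean hides. The data below is synthetic, purely for illustration:

```python
# Sketch: nearest-rank percentile summary of latency samples.
import math

def percentile(samples, p):
    """Nearest-rank percentile; p is in [0, 100]."""
    ordered = sorted(samples)
    if p <= 0:
        return ordered[0]
    rank = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[min(rank, len(ordered) - 1)]

# Mostly fast operations plus a small tail of GC-like spikes.
latencies_ms = [0.4] * 985 + [12.0] * 15
for p in (50, 99, 99.9):
    print(f"p{p}: {percentile(latencies_ms, p)} ms")
```

On this synthetic sample, the median is unremarkable while p99 and p999 land on the spike values, which is exactly the effect the analysis in this section is looking for.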

Looking at the throughput chart in the previous section and focusing on the 10%-update baseline, which includes all three operations, we ran two different sustainable-load variations:

  • 250 ops/sec: compares MongoDB, ElasticSearch, and RedisJSON* at a rate below MongoDB’s saturation point.

  • 6000 ops/sec: compares ElasticSearch and RedisJSON* at a rate below ElasticSearch’s saturation point.

1. Latency analysis for MongoDB, ElasticSearch, and RedisJSON*

In the first image below, showing percentiles from p0 to p9999, it’s clear that MongoDB’s search latency is far worse than Elastic’s and RedisJSON*’s at every percentile. Also, comparing ElasticSearch with RedisJSON*, it’s clear that ElasticSearch is vulnerable to higher latencies, most likely caused by garbage collection (GC) triggers or search query cache misses.

RedisJSON* has a p99 of less than 2.61 milliseconds, while ElasticSearch p999 searches have reached 10.28 milliseconds.

In the read and update chart below, we can see that RedisJSON* performs best across all latency ranges, followed by MongoDB and ElasticSearch.

RedisJSON* is the only solution to maintain sub-millisecond latency across latency percentiles for all analyses. At p99, RedisJSON* has a latency of 0.23 ms, followed by MongoDB with 5.01 ms and ElasticSearch with 10.49 ms.

At write time, MongoDB and RedisJSON* maintain sub-millisecond latency even at p99. ElasticSearch, on the other hand, shows high tail latency (> 10ms), which is most likely the same cause (GC) that caused the spike in ElasticSearch search.

2. Latency analysis for ElasticSearch and RedisJSON*

Focusing only on ElasticSearch and RedisJSON*, while maintaining a sustainable load of 6K ops/sec, we can observe that the read and update patterns of Elastic and RedisJSON* are consistent with the analysis at 250 ops/sec. RedisJSON* is the more stable solution, with a p99 read latency of 3 ms versus Elastic’s p99 read latency of 162 ms.

RedisJSON* maintains a 3 ms p99 for updates, while ElasticSearch shows a 167 ms p99.

Focusing on search operations, ElasticSearch and RedisJSON* start with single-digit p50 latency (1.13 ms for p50 RedisJSON* versus 2.79 ms for ElasticSearch), where ElasticSearch pays the price of GC triggers and query cache misses on higher percentiles, clearly visible on >= p90 percentiles.

RedisJSON* keeps p99 below 33 milliseconds, compared to 163 milliseconds on ElasticSearch, which is 5x higher.

Source: xiangzhihong8

Link: https://blog.csdn.net/xiangzhihong8/article/details/121530019
