

Strategies for optimizing memory usage in Redis

Special encoding of small aggregate data types

Since Redis 2.2 many data types are optimized to use less space up to a certain size. Hashes, Lists, Sets composed of just integers, and Sorted Sets, when smaller than a given number of elements, and up to a maximum element size, are encoded in a very memory efficient way that uses up to 10 times less memory (with 5 times less memory used being the average saving).

This is completely transparent from the point of view of the user and the API. Since this is a CPU / memory trade off, it is possible to tune the maximum number of elements and the maximum element size for specially encoded types, using the directives sketched below.
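For reference, these are the redis.conf directives controlling the special encoding limits, with the names and default values from the Redis 2.x era (the hash directives were called hash-max-zipmap-* before Redis 2.6 and hash-max-ziplist-* afterwards; recent Redis versions renamed the ziplist directives to listpack, so check the redis.conf shipped with your version):

    hash-max-zipmap-entries 64    (hash-max-ziplist-entries for Redis >= 2.6)
    hash-max-zipmap-value 512     (hash-max-ziplist-value for Redis >= 2.6)
    list-max-ziplist-entries 512
    list-max-ziplist-value 64
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    set-max-intset-entries 512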

If a specially encoded value overflows the configured max size, Redis will automatically convert it into the normal encoding. This operation is very fast for small values, but if you change the settings in order to use specially encoded values for much larger aggregate types, the suggestion is to run some benchmarks and tests to check the conversion time.

Using 32 bit instances

Redis compiled with a 32 bit target uses a lot less memory per key, since pointers are small, but such an instance will be limited to 4 GB of maximum memory usage. To compile Redis as a 32 bit binary use make 32bit. RDB and AOF files are compatible between 32 bit and 64 bit instances (and between little and big endian of course), so you can switch from 32 to 64 bit, or the contrary, without problems.

Bit and byte level operations

Redis 2.2 introduced new bit and byte level operations: GETRANGE, SETRANGE, GETBIT and SETBIT. Using these commands you can treat the Redis string type as a random access array. For instance, if you have an application where users are identified by a unique progressive integer number, you can use a bitmap in order to save information about the subscription of users to a mailing list, setting the bit for subscribed and clearing it for unsubscribed, or the other way around. With 100 million users this data will take just 12 megabytes of RAM in a Redis instance (100 million bits is about 12 MB). You can do the same using GETRANGE and SETRANGE in order to store one byte of information for each user. This is just an example, but it is actually possible to model a number of problems in very little space with these new primitives; a short session illustrating the idea follows.
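A minimal redis-cli sketch of the mailing list idea, assuming users are identified by progressive integer IDs and using a hypothetical key name newsletter:subscribed (bit set means subscribed):

    redis> SETBIT newsletter:subscribed 100 1    # user 100 subscribes
    (integer) 0
    redis> GETBIT newsletter:subscribed 100      # is user 100 subscribed?
    (integer) 1
    redis> SETBIT newsletter:subscribed 100 0    # user 100 unsubscribes
    (integer) 1
    redis> GETBIT newsletter:subscribed 50       # bits never written read as 0
    (integer) 0

Redis grows the string to cover the highest bit offset written, so a bitmap spanning 100 million user IDs tops out at roughly 12.5 MB however many bits are set.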
Use hashes when possible

Small hashes are encoded in a very small space, so you should try representing your data using hashes whenever possible. For instance, if you have objects representing users in a web application, instead of using different keys for name, surname, email and password, use a single hash with all the required fields, as in the sketch below.
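A small sketch of the single-hash-per-user pattern, with hypothetical key and field names (HMSET is the historical multi-field command; from Redis 4.0 plain HSET also accepts multiple field-value pairs):

    redis> HMSET user:1000 name "John" surname "Smith" email "john@example.com"
    OK
    redis> HGET user:1000 surname
    "Smith"
    redis> HGETALL user:1000
    1) "name"
    2) "John"
    3) "surname"
    4) "Smith"
    5) "email"
    6) "john@example.com"

Kept to a handful of fields, a hash like this stays in the compact special encoding described above.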

If you want to know more about this, read the next section.

Using hashes to abstract a very memory efficient plain key-value store on top of Redis

I understand the title of this section is a bit scary, but I'm going to explain in detail what this is about. Basically it is possible to model a plain key-value store using Redis, where values can just be strings, in a way that is not just more memory efficient than plain Redis keys, but also much more memory efficient than memcached.

Let's start with some facts: a few keys use a lot more memory than a single key containing a hash with a few fields. How is this possible? In theory, in order to guarantee that we perform lookups in constant time (also known as O(1) in big O notation), there is the need to use a data structure with a constant time complexity in the average case, like a hash table.

But many times hashes contain just a few fields. When hashes are small we can instead just encode them in an O(N) data structure, like a linear array with length-prefixed key-value pairs. Since we do this only when N is small, the amortized time for HGET and HSET commands is still O(1): the hash will be converted into a real hash table as soon as the number of elements it contains grows too large (you can configure the limit in redis.conf, and observe the conversion from redis-cli, as shown at the end of this section).

This does not only work well from the point of view of time complexity, but also from the point of view of constant factors, since a linear array of key-value pairs happens to play very well with the CPU cache (it has better locality than a hash table).

However, since hash fields and values are not (always) represented as full featured Redis objects, hash fields can't have an associated time to live (expire) like a real key, and can only contain a string. But we are okay with this: it was the intention anyway when the hash data type API was designed (we trust simplicity more than features, so nested data structures are not allowed, just as expires of single fields are not allowed).

So hashes are memory efficient. This is very useful to represent objects or to model other problems when there are groups of related fields. But what about if we have a plain key-value business? Imagine we want to use Redis as a cache for many small objects, that can be JSON encoded or not.
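The special encoding and the conversion described above can be observed directly with OBJECT ENCODING (the reported name is version dependent: "zipmap" in Redis 2.2/2.4, "ziplist" from 2.6, "listpack" in recent versions). A sketch, using a hypothetical key and adding enough fields to exceed any common default entry limit:

    redis> HSET small field1 value1
    (integer) 1
    redis> OBJECT ENCODING small
    "ziplist"
    redis> EVAL "for i=1,1000 do redis.call('HSET', KEYS[1], 'f'..i, tostring(i)) end return 0" 1 small
    (integer) 0
    redis> OBJECT ENCODING small
    "hashtable"

Once the hash exceeds the configured entry limit (or a field or value exceeds the configured maximum size) it is converted to a real hash table, and the conversion is one way: the hash does not go back to the compact encoding even if fields are later removed.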
