Segcache: a memory-efficient, scalable cache for small objects with TTL
In collaboration with Carnegie Mellon University, Twitter is building Segcache, the next generation of storage backend, into Pelikan. Segcache enables high memory efficiency, high throughput, and…
Taming Tail Latency and Achieving Predictability
Twitter is accelerating its Pelikan Cache framework by using the Intel® Ethernet 800 Series Network Adapter with Application Device Queues (ADQ). Delivering data from in-memory cache should be the…
Why Pelikan
This post was originally written as an internal document. Some links, project names, and content are removed when adapting to a public audience. TL;DR Twemcache and Redis both solve some subset of…
Memory Matters
This is the fourth post in our blog series about the design, implementation and usage of caching in datacenters. Memory allocation and freeing don't always rank high among performance tips unless you need…
Separation of Concerns
This is the third post in our blog series about the design, implementation and usage of caching in datacenters. The most important design decision we adopted in building the server is to separate…
Server First
This is the second post in our blog series about the design, implementation and usage of caching in datacenters. If you don't care why we chose to work on the cache server first, skip this post. The mode…
Caching in datacenters
This is the first post in our blog series about the design, implementation and usage of caching in datacenters. There are many different definitions of caching. And indeed, caching is ubiquitous, as…