Pelikan Cache
Apr 13, 2021

Segcache: a memory-efficient, scalable cache for small objects with TTL

In collaboration with Carnegie Mellon University, Twitter is building the next generation of its storage backend, Segcache, into Pelikan. Segcache enables high memory efficiency, high throughput, and…

Yao Yue (draft by Juncheng Yang)
Oct 21, 2020

Taming Tail Latency and Achieving Predictability

Twitter is accelerating its Pelikan Cache framework by using the Intel® Ethernet 800 Series Network Adapter with Application Device Queues (ADQ). Delivering data from an in-memory cache should be the…

Yao Yue
Sep 20, 2019

Why Pelikan

This post was originally written as an internal document. Some links, project names, and content were removed when adapting it for a public audience. TL;DR: Twemcache and Redis both solve some subset of…

Yao Yue
May 8, 2019

Memory Matters

This is the fourth post in our blog series about the design, implementation and usage of caching in datacenters. Memory allocation and freeing don't always rank high among performance tips unless you need…

Yao Yue
May 25, 2016

Separation of Concerns

This is the third post in our blog series about the design, implementation and usage of caching in datacenters. The most important design decision we adopted in building the server is to separate…

Yao Yue
Apr 11, 2016

Server First

This is the second post in our blog series about the design, implementation and usage of caching in datacenters. If you don't care why we chose to work on the cache server first, skip this post. The mode…

Yao Yue
Apr 3, 2016

Caching in datacenters

This is the first post in our blog series about the design, implementation and usage of caching in datacenters. There are many different definitions of caching, and indeed caching is ubiquitous as…

Yao Yue



  • Discussions:
  • Pelikan on Discord
  • pelikan-io

Pelikan is a framework for building cache services.