
4 Alternatives for Managing the Cache Directory

We now consider three alternatives for managing the cache directory map in a CRISP collective Web cache. These alternatives aim to improve geographic scalability, reduce latency, and improve resilience to mapping server failures relative to CSD by eliminating synchronous interaction with a central mapping server, while preserving unicast probes and high hit yields with low overhead.

Many variants of these fundamental alternatives are possible. For example, the map could be replicated at a subset of the caching servers, partitioned as well as replicated, or distilled to a submap of some fixed size using LRU or another replacement algorithm. In all cases, the mechanics of the map are similar. Each proxy maintains some portion of the global map. On a local miss, a proxy queries the global map before fetching the document from the Web server. Each entry in the global map contains at minimum a URL and a copyset for the object. Stale map entries may produce false hits or misses, but are otherwise harmless. Additions to and deletions from the map are idempotent.
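As a concrete illustration, the following is a minimal sketch in Python of a directory entry and its idempotent update operations; the class and method names are ours, not the paper's.

```python
# Sketch of a global map entry and idempotent updates.
# DirectoryEntry and Directory are illustrative names, not from the paper.

class DirectoryEntry:
    """Maps a URL to the set of caching servers holding a copy (its copyset)."""
    def __init__(self, url):
        self.url = url
        self.copyset = set()

class Directory:
    def __init__(self):
        self.entries = {}          # url -> DirectoryEntry

    def add(self, url, proxy_id):
        # Idempotent: adding an existing (url, proxy) pair is a no-op.
        entry = self.entries.setdefault(url, DirectoryEntry(url))
        entry.copyset.add(proxy_id)

    def remove(self, url, proxy_id):
        # Idempotent: removing an absent pair is harmless.
        entry = self.entries.get(url)
        if entry is None:
            return
        entry.copyset.discard(proxy_id)
        if not entry.copyset:
            del self.entries[url]

    def lookup(self, url):
        # A stale entry may yield a false hit or miss; callers must tolerate both.
        entry = self.entries.get(url)
        return entry.copyset if entry else set()
```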

We evaluate the potential effectiveness of these structures qualitatively and by analyzing the relevant reference characteristics of the 25-day traces.

4.1 Partitioned Directory

The primary benefit of PD over CSD is that it scales to arbitrarily large numbers of caching servers, since map query load is distributed among an arbitrary number of map sites. PD is also more resilient to failures of a mapping server; any failure affects only the portion of the directory stored at the failed map site. PD is similar to PPC (Section 2.2) in its reliance on a balanced hashing function and its need to reconfigure the cache if the hash function changes. The advantage over PPC is that the UDP control messages generated by PD are much cheaper than the long-haul TCP communication required by PPC.
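To make the partitioning concrete, here is a hedged sketch of how a proxy might select the map site that owns a given URL; the paper requires only a balanced hash function, so the choice of MD5, the modulo assignment, and the example host names are illustrative assumptions.

```python
# Sketch of how a PD proxy might locate the map site responsible for a URL.
# MD5 and the modulo assignment are illustrative; any balanced hash works.

import hashlib

def map_site_for(url, map_sites):
    """Return the map site that owns the directory partition for this URL."""
    digest = hashlib.md5(url.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(map_sites)
    return map_sites[index]

# On a local miss, the proxy sends a single unicast (e.g., UDP) probe to the
# owning site. Changing map_sites changes the assignment and forces a cache
# reconfiguration, as with PPC.
sites = ["map1.example.edu", "map2.example.edu", "map3.example.edu"]
print(map_site_for("http://www.cs.duke.edu/index.html", sites))
```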

In other respects PD is similar to CSD. The advantages of both structures are: (1) the map is guaranteed to deliver a hit if the requested object is present in the cache, (2) exactly one unicast probe/update exchange is needed per local miss, and (3) only one copy of each global map entry is needed.

Unfortunately, PD fails to eliminate the need for a synchronous query to a potentially distant map site, a central weakness of CSD. Moreover, while some map sites may be closer to the requesting proxy than the CSD mapping server, others may be more distant. Thus worst-case latency bounds the geographic scalability of PD to networks of smaller diameter than CSD, where diameter is measured by maximum round-trip propagation latency.


  
Figure 7: Inter-reference gaps between various reference classes. The y-axis shows the percentage of shared objects with gaps over x minutes.

4.2 Replicated Directory

Probe latency for RD is lower than for either CSD or PD, since probe traffic is directed to a map replica local to the requesting proxy. However, the aggregate map storage required for RD grows with the product of the number of map replicas and the size of the map, as does the network traffic needed to keep the replicas coherent.

In one sense, RD is the reverse of the Flat-Harvest configuration studied in Section 3. Rather than multicasting queries and generating no updates to the map, RD multicasts the updates and generates no remote queries. One advantage of multicasting updates (RD) over multicasting queries (Harvest) is that the updates to the global map can be propagated asynchronously, whereas queries are by their nature synchronous. For example, proxies may batch updates to the map and propagate them with periodic anti-entropy exchanges using an efficient stream protocol [16]. While the propagation delay may produce some false misses, Figure 7 indicates that propagation delays of up to an hour can deliver over 90% of the potential shared hits.
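The following sketch shows one way a proxy might batch its map updates and flush them periodically; the class name, peer interface, and two-minute default interval are assumptions for illustration, and the paper's anti-entropy stream protocol [16] is replaced here by a simple placeholder call.

```python
# Sketch of asynchronous, batched update propagation for an RD replica.
# ReplicaUpdater and peer.apply() are illustrative stand-ins; the paper
# proposes periodic anti-entropy exchanges over an efficient stream
# protocol, not this exact interface.

import threading
import time

class ReplicaUpdater:
    def __init__(self, peers, flush_interval_s=120):
        self.peers = peers                  # other map replicas
        self.flush_interval_s = flush_interval_s
        self.pending = []                   # batched (op, url, proxy_id) updates
        self.lock = threading.Lock()

    def record(self, op, url, proxy_id):
        # Called on every local fetch or discard; no remote query is issued,
        # since lookups are served entirely from the local replica.
        with self.lock:
            self.pending.append((op, url, proxy_id))

    def flush_loop(self):
        # Run in a background thread: push the batch to each peer replica.
        while True:
            time.sleep(self.flush_interval_s)
            with self.lock:
                batch, self.pending = self.pending, []
            for peer in self.peers:
                peer.apply(batch)           # stand-in for an anti-entropy exchange
```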

4.3 Replicated Partial Directory

The goal of RPD is to reduce the storage and update costs of RD by replicating only a portion of the global map, without compromising hit ratios. We propose an instance of RPD called RPD-SD in which the replicated submap contains entries only for objects that are shared.

RPD-SD can be implemented as a variant of CSD in which the role of the mapping server is to identify shared documents and propagate updates to the submap, rather than to respond to synchronous queries. As in CSD, the mapping server maintains an authoritative copy of the complete global directory. Each proxy periodically contacts the mapping server and passes a stream of URLs fetched or discarded at that proxy. The mapping server responds with a stream of updates to the submap replica at that proxy. The mapping server asynchronously updates the global directory, identifies newly shared URLs, and adds them to the submap.
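A minimal sketch of how the mapping server might handle one proxy report follows. The class name, the rule that a URL becomes shared once a second distinct proxy reports it (the paper defines sharing in terms of distinct clients), and the tuple-list update stream are our assumptions.

```python
# Sketch of the RPD-SD mapping server handling a single proxy report.
# MappingServer and handle_report are illustrative names; the sharing rule
# used here (a second distinct proxy holds the object) is a simplification.

class MappingServer:
    def __init__(self):
        self.global_dir = {}      # url -> set of proxies (authoritative directory)
        self.shared = set()       # urls currently in the shared submap

    def handle_report(self, proxy_id, fetched, discarded):
        submap_updates = []
        for url in fetched:
            copyset = self.global_dir.setdefault(url, set())
            copyset.add(proxy_id)
            # Treat the URL as shared once a second distinct proxy holds it.
            if len(copyset) >= 2 and url not in self.shared:
                self.shared.add(url)
                submap_updates.append(("add", url, frozenset(copyset)))
        for url in discarded:
            copyset = self.global_dir.get(url, set())
            copyset.discard(proxy_id)
            if not copyset:
                self.global_dir.pop(url, None)
                if url in self.shared:
                    self.shared.remove(url)
                    submap_updates.append(("remove", url, frozenset()))
        return submap_updates     # stream of updates sent back to the proxy
```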


  
Figure 8: Breakdown of references to shared documents in the 25-day trace. Percentages are relative to all references in the trace.

We can predict the effectiveness of RPD-SD by analyzing the references to shared documents in the 25-day trace. Figure 8 shows a detailed breakdown of all shared document references, partitioned into several subcategories. For every shared object, the first reference is always a compulsory miss, and the first-shared reference is the access that marks it as shared. The second-shared and n-shared references are subsequent accesses by new clients. A repeat-shared reference is a further access to an object by a client that has already accessed it.
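As an illustration of this classification, here is a two-pass sketch over a trace of (client, URL) records; the record layout and function name are assumptions, and edge cases (e.g., repeat accesses that occur before an object becomes shared) may be accounted differently in the paper.

```python
# Sketch of the reference classification behind Figure 8, applied to a trace
# of (client_id, url) records. Category names follow the text; the trace
# layout is an assumption.

def classify_shared_references(trace):
    # Pass 1: find objects referenced by more than one distinct client.
    clients_by_url = {}
    for client, url in trace:
        clients_by_url.setdefault(url, set()).add(client)
    shared_urls = {u for u, c in clients_by_url.items() if len(c) > 1}

    # Pass 2: classify each reference to a shared object.
    counts = {"first": 0, "first-shared": 0, "second-shared": 0,
              "n-shared": 0, "repeat-shared": 0}
    seen_clients = {}                       # url -> distinct clients seen so far
    for client, url in trace:
        if url not in shared_urls:
            continue
        seen = seen_clients.setdefault(url, [])
        if client in seen:
            counts["repeat-shared"] += 1    # client has already accessed it
            continue
        seen.append(client)
        n = len(seen)
        if n == 1:
            counts["first"] += 1            # compulsory miss
        elif n == 2:
            counts["first-shared"] += 1     # access that marks the object shared
        elif n == 3:
            counts["second-shared"] += 1
        else:
            counts["n-shared"] += 1
    return counts
```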

While CSD and PD deliver all possible hits for first-shared references, RPD-SD yields hits only if the first and second clients to access a document happen to be bound to the same primary proxy. Our experiment with isolated proxies shows this probability declines with the number of proxies (Figure 4 in Section 2). Thus RPD-SD is likely to miss the majority of first-shared references, which make up 5.5% of all references in the 25-day trace.

Moreover, RPD-SD introduces two windows of vulnerability during which potential hits from references in the second-shared and (less likely) n-shared categories may be lost. First, such a reference may arrive before the mapping server can detect that the referenced object is shared (e.g., before the first and/or first-shared references have been reported to the mapping server). Second, second-shared and n-shared references may fail to yield hits if they occur before the new map entry has propagated to all submap replicas.

We can determine the size and impact of these windows from Figure 7, which shows the distribution of inter-reference gaps between the first-shared reference and the accesses by the next three clients to reference the object. The distribution predicts that RPD-SD will hit on over 95% of second-shared references if the maximum propagation delay of a map update (from cache to mapping server to submap replicas) is within 5 minutes. This can be satisfied, for example, by setting both the cache-to-mapping-server and mapping-server-to-submap update intervals to 2 minutes, for a worst-case propagation delay of 4 minutes. Similarly, RPD-SD will yield hits for 98% of third-shared references and 99% of fourth-shared references. The misses in these three categories combined decrease the overall hit ratio by only 0.21 percentage points, an insignificant amount.

Clearly the highest cost is the loss of first-shared references (a 5.5 percentage point drop in hit ratio). A simple optimization can recapture most of these missed hits: an RPD-SD cache may optionally issue a synchronous query to the central mapping server for a reference that misses in both the local cache and the submap replica. Such a query hits at the mapping server if the first reference to that object was reported to the mapping server before the query arrives. With the 2-minute update interval proposed above, RPD-SD then misses only 2% of the first-shared references (0.10% of all references), for a total drop of only 0.31 percentage points in the hit ratio. The miss latency and the best achievable hit ratio for this variant are then practically identical to those of CSD, but the shared submap eliminates the probe exchange for the majority of potential hits.
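The lookup path for this variant might look like the following sketch; the function names, interfaces, and the peer-fetch placeholder are assumptions, with only the order of the checks taken from the text.

```python
# Sketch of the RPD-SD proxy lookup path with the optional fallback query.
# The interfaces (get, lookup, query) and fetch_from_peer are illustrative;
# only the order of the checks is specified by the text.

def lookup(url, local_cache, submap_replica, mapping_server, web_fetch):
    obj = local_cache.get(url)
    if obj is not None:
        return obj                               # local hit
    copyset = submap_replica.lookup(url)
    if copyset:
        return fetch_from_peer(url, copyset)     # shared hit, no probe to the server
    # Local and submap miss: an optional synchronous query to the central
    # mapping server recaptures most first-shared hits.
    copyset = mapping_server.query(url)
    if copyset:
        return fetch_from_peer(url, copyset)
    return web_fetch(url)                        # compulsory (or false) miss

def fetch_from_peer(url, copyset):
    # Stand-in for retrieving the object from a cache in the copyset; a stale
    # entry may still force a fallback to the origin Web server.
    raise NotImplementedError
```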

