Diffstat (limited to 'chapters/tail-latency.tex')
 chapters/tail-latency.tex | 45 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 44 insertions(+), 1 deletion(-)
diff --git a/chapters/tail-latency.tex b/chapters/tail-latency.tex
index 4e79cff..5d4f214 100644
--- a/chapters/tail-latency.tex
+++ b/chapters/tail-latency.tex
@@ -111,7 +111,7 @@ implementation of these ideas, and then evaluate that prototype system
to demonstrate that the theoretical trade-offs are achievable in practice.
\section{The Insertion-Query Trade-off}
-
+\label{sec:tl-insert-query-tradeoff}
As reconstructions are at the heart of the insertion tail latency problem,
it seems worth taking a moment to consider \emph{why} they must be done
at all. Fundamentally, decomposition-based dynamization techniques trade
@@ -1101,6 +1101,49 @@ to provide a superior set of design trade-offs than the strict policies,
at least in environments where sufficient parallel processing and memory
are available to leverage parallel reconstructions.
+\subsection{Legacy Design Space}
+
+Our new system retains the concept of buffer size and scale factor from
+the previous version, although these have very different performance
+implications given our different compaction strategy. In this test, we
+examine the effects of these parameters on the insertion-query trade-off
+curves noted above, as well as on insertion tail latency. The results
+are shown in Figure~\ref{fig:tl-design-space}, for a dynamized ISAM Tree
+using the SOSD \texttt{OSM} dataset and point lookup queries.
+
+\begin{figure}
+\centering
+\subfloat[Insertion Throughput vs. Query Latency for Varying Scale Factors]{\includegraphics[width=.5\textwidth]{img/tail-latency/stall-sf-sweep.pdf} \label{fig:tl-sf-curve}}
+\subfloat[Insertion Tail Latency for Varying Buffer Sizes]{\includegraphics[width=.5\textwidth]{img/tail-latency/buffer-tail-latency.pdf} \label{fig:tl-buffer-tail}} \\
+\caption{``Legacy'' Design Space Examination}
+\label{fig:tl-design-space}
+\end{figure}
+
+First, we consider the insertion throughput vs. average query latency
+curves for our system using different values of scale factor in
+Figure~\ref{fig:tl-sf-curve}. Recall that our system of reconstruction in
+this chapter does not explicitly enforce any structural invariants, and so
+the scale factor's only role is in determining at what point a given level
+will have a reconstruction scheduled for it. Lower scale factors will
+more aggressively compact shards, while higher scale factors will allow
+more shards to accumulate before attempting to perform a reconstruction.
+Interestingly, there are clear differences in the curves, particularly at
+higher insertion throughputs. For lower throughputs, a scale factor of
+$s=2$ appears strictly inferior, while the other tested scale factors result
+in roughly equivalent curves. However, as the insertion throughput is
+increased, the curves begin to separate more, with $s = 6$ emerging as
+the superior option for the majority of the space.
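The scale factor's scheduling role described above can be sketched with a minimal model. This is an illustrative simplification, not the framework's actual implementation; the names `Level` and `maybe_schedule_reconstruction` are hypothetical:

```python
class Level:
    """Toy stand-in for a level holding multiple shards (hypothetical)."""
    def __init__(self, shards=None):
        self.shards = shards if shards is not None else []

def maybe_schedule_reconstruction(level, scale_factor):
    """Merge a level's shards once their count reaches the scale
    factor; below that threshold, shards simply accumulate. No
    structural invariant is enforced between reconstructions."""
    if len(level.shards) >= scale_factor:
        merged = sorted(x for shard in level.shards for x in shard)
        level.shards = [merged]
        return True  # a reconstruction was triggered
    return False
```

Under this model, a lower scale factor triggers merges more often (fewer shards per level, cheaper queries), while a higher one defers them (more shards, fewer reconstructions), matching the trade-off observed in the sweep.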
+
+Next, we consider the effect that buffer size has on insertion
+tail latency. Based on our discussion of the equal block method
+in Section~\ref{sec:tl-insert-query-tradeoff}, and the fact that
+our technique only blocks inserts on buffer flushes, it stands
+to reason that the buffer size should directly influence the
+worst-case insertion time. That bears out in practice, as shown in
+Figure~\ref{fig:tl-buffer-tail}. As the buffer size is increased,
+the worst-case insertion time also increases, although the effect is
+relatively small.
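The reasoning above can be made concrete with a small cost model, under the assumption that an insert pays unit cost except when it triggers a flush, whose cost is proportional to the number of buffered records. The function name and cost units are illustrative only:

```python
def insert(buffer, record, buffer_size):
    """Insert a record, returning its cost in record operations.
    A normal insert costs 1; an insert that finds the buffer full
    also pays to flush the entire buffer, so the worst-case insert
    cost grows linearly with the buffer size."""
    cost = 1
    if len(buffer) >= buffer_size:
        cost += len(buffer)  # flush cost proportional to buffer size
        buffer.clear()
    buffer.append(record)
    return cost
```

In this model the worst-case insert with a buffer of size $N_B$ costs $N_B + 1$, which is why larger buffers raise the insertion tail latency while leaving the common-case cost unchanged.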
+
\subsection{Thread Scaling}
\begin{figure}