authorDouglas Rumbaugh <dbr4@psu.edu>2025-06-06 18:57:16 -0400
committerDouglas Rumbaugh <dbr4@psu.edu>2025-06-06 18:57:16 -0400
commit50adf588694170699adfa75cd2d1763263085165 (patch)
treebec242e13ee934c44ea7c17869de6b0ac8df2630 /chapters
parentf7f1332d3ead7f61e8e1c8a72ade33f3296d2982 (diff)
downloaddissertation-50adf588694170699adfa75cd2d1763263085165.tar.gz
updates
Diffstat (limited to 'chapters')
-rw-r--r--chapters/tail-latency.tex36
1 files changed, 29 insertions, 7 deletions
diff --git a/chapters/tail-latency.tex b/chapters/tail-latency.tex
index 5d4f214..8ec8d26 100644
--- a/chapters/tail-latency.tex
+++ b/chapters/tail-latency.tex
@@ -1061,7 +1061,7 @@ count distribution shifts. In this test, we examine the average values
of insertion throughput and query latency over a variety of stall rates.
The results of this test for ISAM with the SOSD \texttt{OSM} dataset are
-shown in Figure~\ref{fig:tl-latency-curve}, which shows the insertion
+shown in Figure~\ref{fig:tl-latency-curve-isam}, which shows the insertion
throughput plotted against the average query latency for our system at
various stall rates, and with tiering configured with an equivalent
scale factor marked as red point for reference. This plot shows two
@@ -1086,11 +1086,26 @@ of a slightly larger than 2x increase in query latency. Moving down the
curve, we see that we are able to roughly match the performance of tiering
within this space, and even shift to more query-optimized configurations.
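The sweep that produces this curve amounts to benchmarking the structure once per configured stall rate and recording one (insertion throughput, query latency) point per rate. The sketch below is purely illustrative: `toy_benchmark` and its numbers are fabricated stand-ins for the real harness and bear no relation to the measured results.

```python
def toy_benchmark(stall_rate):
    """Toy model (not measured data): stalling inserts more aggressively
    leaves fewer shards in the structure, so insertion throughput falls
    while average query latency improves."""
    insert_tput = 1_000_000 * (1.0 - stall_rate)      # inserts/sec
    query_latency_us = 10.0 + 40.0 * (1.0 - stall_rate)
    return insert_tput, query_latency_us

def latency_curve(stall_rates):
    """One (throughput, latency) point per stall rate; plotting these
    points yields a trade-off curve of the kind shown in the figure."""
    return [toy_benchmark(r) for r in stall_rates]
```

Each point trades insertion throughput for query latency, which is why the resulting plot traces a curve rather than a single operating point.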
+We also performed the same testing for $k$-NN queries using
+VPTree and the \texttt{SBW} dataset. The results are shown in
+Figure~\ref{fig:tl-latency-curve-knn}. Because the run time of $k$-NN
+queries is significantly longer than the point lookups in the ISAM test,
+we additionally applied a rate limit to the query thread, issuing new
+queries every 100 milliseconds, and configured query preemption with a
+trigger point of approximately 40 milliseconds. We applied the same
+parameters for the tiering test, and counted any additional latency
+associated with query preemption towards the average query latency figures
+reported. This test shows that, as with ISAM, adjusting the insertion
+throughput gives access to a similarly clear trade-off space; however,
+in this case the standard tiering policy performed better in terms of
+both average insertion throughput and query latency.
+
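The rate-limited, preemptible query loop described above can be sketched as a minimal simulation. Only the 100 ms issue interval and the roughly 40 ms trigger point come from the text; `run_queries`, `retry_cost_ms`, and the halving of a retried query's run time are made-up modeling assumptions, not part of the actual benchmark harness.

```python
# Parameters taken from the text: queries issued every 100 ms, with a
# preemption trigger at roughly 40 ms of query run time.
ISSUE_INTERVAL_MS = 100   # paces query issuance; does not affect latency here
PREEMPT_TRIGGER_MS = 40

def run_queries(service_times_ms, retry_cost_ms=5):
    """Simulate the preemptible query loop (illustrative sketch only).

    Each entry of service_times_ms is how long a query would run if left
    uninterrupted. A query that exceeds the trigger is preempted and
    retried, and the wasted time is charged to that query's latency, as
    in the reported averages. retry_cost_ms and the assumption that a
    retry runs twice as fast are hypothetical.
    """
    latencies = []
    for t in service_times_ms:
        latency = 0.0
        while t > PREEMPT_TRIGGER_MS:
            # Charge the preempted partial run plus a restart overhead.
            latency += PREEMPT_TRIGGER_MS + retry_cost_ms
            # Assume the blocking reconstruction finished, so the retry
            # sees fewer shards and runs faster.
            t *= 0.5
        latency += t
        latencies.append(latency)
    return sum(latencies) / len(latencies)
```

Under this toy model a 10 ms point lookup completes untouched, while an 80 ms $k$-NN query is preempted once and its retry is charged the wasted 40 ms plus the restart overhead.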
\begin{figure}
\centering
-\includegraphics[width=.5\textwidth]{img/tail-latency/stall-latency-curve.pdf}
-\caption{Insertion Throughput vs. Query Latency for ISAM with 200M Records}
+\subfloat[ISAM w/ Point Lookup]{\includegraphics[width=.5\textwidth]{img/tail-latency/stall-latency-curve.pdf} \label{fig:tl-latency-curve-isam}}
+\subfloat[VPTree w/ $k$-NN]{\includegraphics[width=.5\textwidth]{img/tail-latency/knn-stall-latency-curve.pdf} \label{fig:tl-latency-curve-knn}} \\
+\caption{Insertion Throughput vs. Query Latency}
\label{fig:tl-latency-curve}
\end{figure}
@@ -1197,10 +1212,17 @@ query latency in two ways,
Interestingly, at least in this test, both of these effects are largely
suppressed with only a moderate reduction in insertion throughput. However,
insufficient parallelism does result in the higher-throughput
-configurations suffering a significant query latency increase.
-
-
-
+configurations generally suffering a significant query latency increase.
+
+Of particular note here is the single internal thread test. While at very
+low insertion throughputs even one thread is enough to keep pace, query
+performance degrades rapidly as the insertion throughput is increased,
+so much so that we had to cut off part of the curve to keep the other
+thread configurations visible in the plot at all. Recall that this
+configuration requires that both queries and reconstructions be
+scheduled on the same shared thread, so query latency suffers
+significantly from waiting behind long-running reconstructions, and
+queries themselves take longer due to the larger number of shards in
+the structure.
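The shared-thread effect can be illustrated with a small FIFO queueing sketch. The function name, task shapes, and numbers below are hypothetical; this is not the scheduler's actual implementation.

```python
def fifo_latencies(tasks):
    """Simulate one shared worker thread running tasks FIFO.

    tasks: list of (kind, arrival_ms, service_ms) tuples. Returns the
    latency (finish - arrival) of each query, showing how queries queued
    behind a long reconstruction inherit its remaining run time.
    """
    clock = 0.0
    query_latency = []
    for kind, arrival, service in sorted(tasks, key=lambda t: t[1]):
        clock = max(clock, arrival)  # thread may sit idle until arrival
        clock += service             # run the task to completion
        if kind == "query":
            query_latency.append(clock - arrival)
    return query_latency

# A 500 ms reconstruction admitted just before two short queries: each
# 5 ms query waits nearly the full reconstruction time.
tasks = [
    ("reconstruction", 0, 500),
    ("query", 10, 5),
    ("query", 20, 5),
]
```

With separate threads the two queries would finish in about 5 ms each; on the shared thread their latencies balloon to roughly the reconstruction's length, matching the cut-off curve described above.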
\section{Conclusion}