Diffstat (limited to 'chapters/dynamization.tex')
 chapters/dynamization.tex | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/chapters/dynamization.tex b/chapters/dynamization.tex
index 738a436..1012597 100644
--- a/chapters/dynamization.tex
+++ b/chapters/dynamization.tex
@@ -581,10 +581,10 @@ entire structure is compacted into a single block.
\label{fig:dyn-kbin}
\end{figure}
-One of the significant limitations of the logarithmic method is that it
-is incredibly rigid. In our earlier discussion of decomposition we noted
-that there exists a clear trade-off between insert and query performance
-for half-dynamic structures mediate by the number of blocks into which the
+One of the significant limitations of the logarithmic method is that it is
+incredibly rigid. In our earlier discussion of decomposition we noted that
+there exists a clear trade-off between insert and query performance for
+half-dynamic structures, mediated by the number of blocks into which the
structure is decomposed. However, the logarithmic method does not allow
any navigation of this trade-off. In their original paper on the topic,
Bentley and Saxe proposed a different decomposition scheme that does
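To make the block-count trade-off concrete, here is a minimal Python sketch of the logarithmic method. It is illustrative only: it assumes a sorted list with binary search as the static structure, and the class and method names are invented for this example. Level $i$ holds either nothing or a block of exactly $2^i$ records, and an insertion merges full levels upward like a binary-counter carry:

    from bisect import bisect_left

    class LogarithmicMethod:
        def __init__(self):
            # levels[i] is either None or a sorted list of exactly 2**i records
            self.levels = []

        def insert(self, key):
            carry = [key]
            for i, block in enumerate(self.levels):
                if block is None:
                    self.levels[i] = carry
                    return
                # Occupied level: rebuild (merge) into a block of 2**(i+1)
                # records and carry it upward, like a binary-counter increment.
                carry = sorted(block + carry)
                self.levels[i] = None
            self.levels.append(carry)

        def contains(self, key):
            # A query is answered against every non-empty block, of which
            # there are O(log n).
            for block in self.levels:
                if block is not None:
                    j = bisect_left(block, key)
                    if j < len(block) and block[j] == key:
                        return True
            return False

Here the block count is pinned at $O(\log n)$ by the doubling layout; nothing in the scheme lets extra insertion work buy a smaller block count, which is the rigidity at issue.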
@@ -758,9 +758,11 @@ results in a data structure with the following performance characteristics,
\text{Worst-case Query Cost:}& \quad \mathscr{Q}(n) \in O\left(f(n) \cdot \mathscr{Q}_S\left(\frac{n}{f(n)}\right)\right) \\
\end{align*}
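For an illustrative instantiation of this bound, suppose the static structure is a sorted array, so that $\mathscr{Q}_S(m) \in O(\log m)$; taking $f(n) = \sqrt{n}$ then yields
\begin{align*}
\mathscr{Q}(n) \in O\left(\sqrt{n} \cdot \log \frac{n}{\sqrt{n}}\right) = O\left(\sqrt{n} \log n\right)
\end{align*}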
The equal block method is generally \emph{worse} in terms of insertion
-performance than the logarithmic and $k$-binomial decompositions, because
-the sizes of reconstructions are typically much larger for an equivalent
-block count, due to all the blocks having approximately the same size.
+performance than the logarithmic and $k$-binomial decompositions. This
+is because, for a given number of blocks, every block in the equal block
+method has approximately the same size, so each reconstruction must
+process roughly $n/f(n)$ records. Reconstructions are therefore larger,
+on average, than those performed by the logarithmic method.
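A matching Python sketch of the equal block method makes the comparison concrete. It is again illustrative only: $f(n)$ is fixed at $\sqrt{n}$, the static structure is a sorted list, and the re-partitioning policy shown is one simple choice among several. Every block sits near the common size $n/f(n)$, so each rebuild processes a full-sized block:

    from bisect import bisect_left
    from math import isqrt

    class EqualBlockMethod:
        def __init__(self):
            # f(n) blocks, each a sorted list of roughly n / f(n) records
            self.blocks = []

        def insert(self, key):
            n = 1 + sum(len(b) for b in self.blocks)
            cap = max(1, n // max(1, isqrt(n)))  # target block size n / f(n)
            if self.blocks and len(self.blocks[-1]) < cap:
                # Rebuild one block together with the new record; since every
                # block is near full size, this costs roughly n / f(n).
                self.blocks[-1] = sorted(self.blocks[-1] + [key])
            else:
                self.blocks.append([key])
            if len(self.blocks) > 2 * isqrt(n):
                # Re-partition so all blocks return to the common size.
                records = sorted(r for b in self.blocks for r in b)
                self.blocks = [records[i:i + cap] for i in range(0, n, cap)]

        def contains(self, key):
            # A query searches all f(n) blocks, each in Q_S(n / f(n)) time.
            for b in self.blocks:
                j = bisect_left(b, key)
                if j < len(b) and b[j] == key:
                    return True
            return False

Unlike the logarithmic sketch, where most rebuilds touch only the small low levels, every rebuild here costs roughly $n/f(n)$, matching the insertion penalty described above.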
\subsection{Optimizations}
@@ -1232,8 +1234,8 @@ two decomposition approaches that expose some form of performance tuning
to the user, these techniques are targeted at asymptotic results, which
leads to poor performance in practice. Finally, most decomposition schemes
have poor worst-case insertion performance, resulting in extremely poor
-tail latency relative to native dynamic structures. While there do exist
-decomposition schemes that have exhibit better worst-case performance,
+tail latency relative to native dynamic structures. While there do
+exist decomposition schemes that have better worst-case performance,
they are impractical. This section will discuss these limitations in
more detail, and the rest of the document will be dedicated to proposing
solutions to them.
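To make the worst case concrete for the logarithmic method, using the standard Bentley-Saxe analysis (the symbol $B(m)$, denoting the cost of building the static structure over $m$ records, is introduced here only for this illustration): an insertion arriving when $n = 2^k - 1$ merges every existing block into a single block of $2^k$ records, so that
\begin{align*}
\text{Amortized Insertion Cost:}& \quad O\left(\frac{B(n)}{n} \cdot \log n\right) \\
\text{Worst-case Insertion Cost:}& \quad \Theta\left(B(n)\right) \\
\end{align*}
A single unlucky insertion can thus pay for a rebuild of the entire structure, which is precisely the tail-latency spike that native dynamic structures avoid.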