 chapters/dynamization.tex | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/chapters/dynamization.tex b/chapters/dynamization.tex
index c89f1fa..1e0d3e2 100644
--- a/chapters/dynamization.tex
+++ b/chapters/dynamization.tex
@@ -474,7 +474,7 @@ outside of the theoretical literature.
 \subsection{The Logarithmic Method}
 \label{ssec:bsm}
 
-The original, and most frequently used, dynamization technique is the
+The original, and most frequently used, decomposition technique is the
 logarithmic method, also called Bentley-Saxe method (BSM) in more recent
 literature. This technique decomposes the structure into logarithmically
 many blocks of exponentially increasing size. More specifically, the
@@ -828,7 +828,7 @@ perform data structure merges.\footnote{
 \subsubsection{Improved Worst-Case Insertion Performance}
 \label{ssec:bsm-worst-optimal}
 
-Dynamization based upon amortized global reconstruction has a
+Dynamization based upon decomposition and global reconstruction has a
 significant gap between its \emph{amortized} insertion performance, and
 its \emph{worst-case} insertion performance. When using the Bentley-Saxe
 method, the logarithmic decomposition ensures that the majority of inserts
@@ -945,7 +945,7 @@ for all $A, B \in \mathcal{PS}(\mathcal{D})$ where $A \cap B = \emptyset$.
 
 Given a search problem with this property, it is possible to emulate
 removing a record from the structure by instead inserting into a
-secondary ``ghost'' structure. When the dynamization is queried, this
+secondary ``ghost'' structure. When the decomposed structure is queried, this
 ghost structure is queried as well as the main one. The results from
 the ghost structure can be removed from the result set using the inverse
 merge operator. This simulates the result that would have been obtained
@@ -1156,10 +1156,10 @@ deletion decomposable search problem.
 \end{proof}
 
 For such problems, deletes can be supported by first identifying the
-block in the dynamization containing the record to be deleted, and
+block in the decomposition containing the record to be deleted, and
 then calling $\mathtt{delete}$ on it. In order to allow this block to
 be easily located, it is possible to maintain a hash table over all
-of the records, alongside the dynamization, which maps each record
+of the records, alongside the decomposition, which maps each record
 onto the block containing it. This table must be kept up to date as
 reconstructions occur, but this can be done at no extra asymptotic costs
 for any data structures having $B(n) \in \Omega(n)$, as it requires only
@@ -1168,7 +1168,7 @@ linear time. This allows for deletes to be performed in $\mathscr{D}(n)
 
 The presence of deleted records within the structure does introduce a
 new problem, however. Over time, the number of records in each block will
-drift away from the requirements imposed by the dynamization technique. It
+drift away from the requirements imposed by the decomposition technique. It
 will eventually become necessary to re-partition the records to restore
 these invariants, which are necessary for bounding the number of blocks,
 and thereby the query performance. The particular invariant maintenance
@@ -1176,7 +1176,7 @@ rules depend upon the decomposition scheme used. To our knowledge, there
 is no discussion of applying the $k$-binomial method to deletion
 decomposable search problems, and so method is not listed here.
 
-\Paragraph{Logarithmic Method.} When creating a logarithmic dynamization for
+\Paragraph{Logarithmic Method.} When creating a logarithmic decomposition for
 a deletion decomposable search problem, the $i$th block where $i \geq 2$,\footnote{
 	Block $i=1$ will only ever have one record, so no special maintenance
 	must be done for it. A delete will simply empty it completely.
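
To make the technique being renamed in these hunks concrete: the hunk at
line 474 describes the logarithmic (Bentley-Saxe) method, which keeps
logarithmically many static blocks of exponentially increasing size and,
on insert, rebuilds the full prefix of blocks like a binary counter
carrying. Below is a minimal sketch under assumed names (SortedArray,
LogarithmicMethod, and range_count are illustrative, not the thesis's
notation), using range counting as the decomposable search problem, whose
partial results merge by simple addition.

from bisect import bisect_left, bisect_right


class SortedArray:
    """A toy static structure: built once, never updated in place."""
    def __init__(self, records):
        self.records = sorted(records)

    def range_count(self, lo, hi):
        # Number of records x with lo <= x <= hi.
        return bisect_right(self.records, hi) - bisect_left(self.records, lo)


class LogarithmicMethod:
    def __init__(self):
        self.blocks = []          # blocks[i] is None or holds 2**i records

    def insert(self, rec):
        # Gather the records of every full prefix block, then rebuild them
        # (plus the new record) into the first empty slot: a binary-counter
        # carry, so each record is rebuilt O(log n) times amortized.
        carry = [rec]
        i = 0
        while i < len(self.blocks) and self.blocks[i] is not None:
            carry.extend(self.blocks[i].records)
            self.blocks[i] = None
            i += 1
        if i == len(self.blocks):
            self.blocks.append(None)
        self.blocks[i] = SortedArray(carry)

    def range_count(self, lo, hi):
        # Range counting is decomposable: query every block and merge the
        # partial results by addition.
        return sum(b.range_count(lo, hi)
                   for b in self.blocks if b is not None)


d = LogarithmicMethod()
for x in [5, 1, 9, 3, 7, 2]:
    d.insert(x)
assert d.range_count(2, 7) == 4   # {2, 3, 5, 7}

Since each record takes part in O(log n) rebuilds, most inserts are cheap
and the cost of the occasional full-prefix rebuild amortizes away, which is
the amortized/worst-case gap the hunk at line 828 refers to.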
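The hunk at line 945 concerns invertible search problems, where a delete is
emulated by inserting the record into a secondary ``ghost'' structure and
removing its contribution from the result with the inverse merge operator.
Here is a sketch of that idea for range counting, whose inverse merge is
ordinary subtraction; it reuses the illustrative LogarithmicMethod class
from the previous sketch and assumes every deleted record was previously
inserted.

class GhostDeleteLogarithmic:
    def __init__(self):
        self.main = LogarithmicMethod()    # live records
        self.ghost = LogarithmicMethod()   # "deleted" records

    def insert(self, rec):
        self.main.insert(rec)

    def delete(self, rec):
        # Nothing is physically removed; the ghost insert cancels the
        # record out at query time.
        self.ghost.insert(rec)

    def range_count(self, lo, hi):
        # Inverse merge operator: subtract the ghost result from the
        # main result.
        return self.main.range_count(lo, hi) - self.ghost.range_count(lo, hi)


g = GhostDeleteLogarithmic()
for x in [5, 1, 9, 3, 7, 2]:
    g.insert(x)
g.delete(3)
assert g.range_count(2, 7) == 3   # {2, 5, 7}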
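The hunks at lines 1156 through 1176 describe deletes for deletion
decomposable problems: a hash table kept alongside the decomposition maps
each record to its containing block, is refreshed during reconstructions at
linear (and so asymptotically free) cost, and routes each delete to the
right block, whose occupancy then drifts below its nominal size until a
re-partition restores the invariants. A sketch of the bookkeeping only: a
Python set stands in for a static structure with its own weak delete, and
block_of and LogarithmicWithDeletes are assumed names.

class LogarithmicWithDeletes:
    def __init__(self):
        self.blocks = []      # blocks[i] is None or a set of records; the
                              # set stands in for a static structure that
                              # supports delete(), as deletion
                              # decomposability requires
        self.block_of = {}    # record -> index of its containing block

    def insert(self, rec):
        # Same binary-counter reconstruction as plain Bentley-Saxe.
        carry = {rec}
        i = 0
        while i < len(self.blocks) and self.blocks[i] is not None:
            carry |= self.blocks[i]
            self.blocks[i] = None
            i += 1
        if i == len(self.blocks):
            self.blocks.append(None)
        self.blocks[i] = carry
        # Table upkeep rides along with the reconstruction and touches only
        # the rebuilt records, so it is linear in the block size and free
        # asymptotically whenever building already costs Omega(n).
        for r in carry:
            self.block_of[r] = i

    def delete(self, rec):
        # O(1) lookup of the containing block, then the block's own delete.
        i = self.block_of.pop(rec)
        self.blocks[i].discard(rec)
        # The block now holds fewer than 2**i records: the invariant drift
        # that eventually forces a re-partition, as the text notes.


lwd = LogarithmicWithDeletes()
for x in [5, 1, 9, 3]:
    lwd.insert(x)
lwd.delete(9)
assert all(9 not in b for b in lwd.blocks if b is not None)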