path: root/chapters/beyond-dsp.tex
author     Douglas Rumbaugh <dbr4@psu.edu>  2025-06-08 15:04:00 -0400
committer  Douglas Rumbaugh <dbr4@psu.edu>  2025-06-08 15:04:00 -0400
commit     33bc7e620276f4269ee5f1820e5477135e020b3f (patch)
tree       03a7bb2ccbf7f1d2943871a69bca18006270bd20 /chapters/beyond-dsp.tex
parent     50adf588694170699adfa75cd2d1763263085165 (diff)
download   dissertation-33bc7e620276f4269ee5f1820e5477135e020b3f.tar.gz
Julia updates v2
Diffstat (limited to 'chapters/beyond-dsp.tex')
-rw-r--r--  chapters/beyond-dsp.tex  10
1 file changed, 5 insertions, 5 deletions
diff --git a/chapters/beyond-dsp.tex b/chapters/beyond-dsp.tex
index 7a0df37..a2e7abf 100644
--- a/chapters/beyond-dsp.tex
+++ b/chapters/beyond-dsp.tex
@@ -251,7 +251,7 @@ O \left( \log_2 n \cdot P(n) + D(n) + \log_2 n \cdot \mathscr{Q}_\ell(n) + C_e(n
\end{equation}
As an example, we'll express IRS using the above interface and
-analyze its complexity to show that the resulting solution as the
+analyze its complexity to show that the resulting solution is the
same $\Theta(\log^2 n + k)$ cost as the specialized solution from
Chapter~\ref{chap:sampling}. We use $\mathbftt{local\_preproc}$
to determine the number of records on each block falling on the
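
Note: the hunk above concerns the IRS cost analysis, but the chapter's concrete per-operation costs are not visible in this diff, so the following LaTeX sketch is illustrative only. Assuming block pre-processing costs $P(n) \in O(\log n)$, result distribution $D(n) \in O(\log n)$, and a local sampling cost of $\mathscr{Q}_\ell(n) \in O(1)$ amortized per drawn record, the terms visible in the hunk header already account for the polylogarithmic part of the bound:

    % Illustrative substitution only; the assumed costs above are not
    % quoted anywhere in this diff.
    \[
    \log_2 n \cdot \underbrace{P(n)}_{O(\log n)}
      + \underbrace{D(n)}_{O(\log n)}
      + \log_2 n \cdot \underbrace{\mathscr{Q}_\ell(n)}_{O(1)}
      \;\in\; O(\log^2 n),
    \]

with the truncated trailing terms presumably covering the $O(k)$ cost of reporting the $k$ sampled records, which is consistent with the $\Theta(\log^2 n + k)$ bound cited in the corrected sentence.
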
@@ -904,7 +904,7 @@ previous section, as well as versions of $\mathbftt{local\_preproc}$
and $\mathbftt{local\_query}$ for pre-processing and querying an unsorted
set of records, which is necessary to allow the mutable buffer to be
used as part of the query process.\footnote{
- In the worst case, these routines could construct temporary shard
+ In the worst case, these routines could construct a temporary shard
over the mutable buffer, and use this to answer queries.
} The $\mathbftt{repeat}$ function is necessary even for
normal eDSP problems, and should just return \texttt{false} with no other
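
Note: the buffer-side hooks touched by this hunk are only named here, not defined, so the C++ below is a hypothetical sketch (Record, Shard, BufferQueryState, and all signatures are invented for illustration) of the worst-case strategy the footnote describes: build a temporary shard over the unsorted mutable buffer and answer the local query against it, while repeat simply reports that no further pass is needed.

    // Hypothetical sketch: the types and signatures below are invented for
    // illustration and are not the framework's actual interface.
    #include <algorithm>
    #include <vector>

    struct Record { long key; long value; };

    // A minimal "shard": an immutable, sorted copy of a set of records.
    struct Shard {
        std::vector<Record> recs;
        explicit Shard(std::vector<Record> r) : recs(std::move(r)) {
            std::sort(recs.begin(), recs.end(),
                      [](const Record &a, const Record &b) { return a.key < b.key; });
        }
        std::vector<Record> query(long lower, long upper) const {
            auto it = std::lower_bound(recs.begin(), recs.end(), lower,
                                       [](const Record &r, long k) { return r.key < k; });
            std::vector<Record> out;
            for (; it != recs.end() && it->key <= upper; ++it) out.push_back(*it);
            return out;
        }
    };

    // Worst-case buffer pre-processing: materialize a temporary shard over
    // the unsorted buffer so the ordinary shard query logic can be reused.
    struct BufferQueryState { Shard temp_shard; };

    BufferQueryState local_preproc(const std::vector<Record> &buffer) {
        return BufferQueryState{Shard(buffer)};
    }

    // Buffer-local query: delegate to the temporary shard.
    std::vector<Record> local_query(const BufferQueryState &st, long lo, long hi) {
        return st.temp_shard.query(lo, hi);
    }

    // For ordinary eDSP problems one round of local queries suffices, so
    // repeat just reports that no further pass is required.
    bool repeat() { return false; }

Building the temporary shard costs at most a sort of the buffered records, which is the worst case the footnote alludes to; a real implementation could instead scan the small buffer directly.
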
@@ -1023,7 +1023,7 @@ face of deleted records.
\item \textbf{Leveling.}\\
Our leveling policy is identical to the one discussed in
Chapter~\ref{chap:sampling}. The capacity of level $i$ is $N_b \cdot
-s^i+1$ records. The first level ($i$) with available capacity to hold
+s^{i+1}$ records. The first level ($i$) with available capacity to hold
all the records from the level above it ($i-1$ or the buffer, if $i
= 0$) is found. Then, for all levels $j < i$, the records in $j$ are
merged with the records in $j+1$ and the resulting shard placed in level
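
Note: since this hunk fixes the level-capacity exponent, a small sketch may help make the policy concrete. The structure below is hypothetical (the framework's real types are not shown in this diff): capacity(i) returns $N_b \cdot s^{i+1}$, and flush() finds the first level able to absorb everything above it, then cascades merges downward, deepest level first.

    // Hypothetical sketch of the leveling reconstruction described above;
    // Shard and LevelingStructure are invented stand-ins.
    #include <cstddef>
    #include <vector>

    struct Shard {
        std::size_t record_count = 0;
        // Merge two shards into a new one (details elided in this sketch).
        Shard merged_with(const Shard &other) const {
            return Shard{record_count + other.record_count};
        }
    };

    struct LevelingStructure {
        std::size_t buffer_size;          // N_b
        std::size_t scale_factor;         // s
        std::vector<Shard> levels;        // one shard per level under leveling

        // Capacity of level i is N_b * s^(i+1) records.
        std::size_t capacity(std::size_t i) const {
            std::size_t cap = buffer_size;
            for (std::size_t p = 0; p <= i; ++p) cap *= scale_factor;
            return cap;
        }

        // Flush the buffer: find the first level i that can absorb the records
        // above it, then merge each level j < i into level j+1, deepest first.
        void flush(const Shard &buffer_shard) {
            std::size_t incoming = buffer_shard.record_count;
            std::size_t i = 0;
            while (i < levels.size() &&
                   levels[i].record_count + incoming > capacity(i)) {
                incoming = levels[i].record_count;  // level i itself must move down
                ++i;
            }
            if (i == levels.size()) levels.emplace_back();  // grow by one level

            for (std::size_t j = i; j-- > 0;) {
                levels[j + 1] = levels[j + 1].merged_with(levels[j]);
                levels[j] = Shard{};
            }
            levels[0] = levels[0].merged_with(buffer_shard);
        }
    };

A real implementation would also need to track tombstone counts and handle delete-driven compactions, which this sketch omits.
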
@@ -1778,7 +1778,7 @@ number of records on it, rather than returning the set of records,
to overcome differences in the query interfaces in our baselines, some
of which make extra copies of the records. We consider traversing the
range and counting to be a more fair comparison. Range counts are true
-invertible search problems, and so we use tombstone-deletes. The query
+invertible search problems, and so we use tombstone deletes. The query
process itself performs no preprocessing. Local queries use the index to
identify the first record in the query range and then traverse the range,
counting the number of records and tombstones encountered. These counts
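
Note: to make the local range-count procedure concrete, here is a hypothetical C++ sketch; the shard layout, field names, and the final combination step are assumptions (the sentence describing how the counts are combined is truncated in this hunk). Binary search stands in for the index lookup that finds the first in-range record, the traversal counts live records and tombstones separately, and the totals are combined by subtracting tombstones, as fits an invertible search problem with tombstone deletes.

    // Hypothetical sketch of the range-count local query; not the
    // benchmark's actual implementation.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Record {
        long key;
        bool tombstone;   // true if this record marks a delete
    };

    struct RangeCounts {
        std::size_t records    = 0;
        std::size_t tombstones = 0;
    };

    // Local query against one sorted shard: locate the first record in
    // [lower, upper] (binary search stands in for the index), then traverse
    // the range, counting records and tombstones separately.
    RangeCounts local_range_count(const std::vector<Record> &shard,
                                  long lower, long upper) {
        RangeCounts counts;
        auto it = std::lower_bound(shard.begin(), shard.end(), lower,
                                   [](const Record &r, long k) { return r.key < k; });
        for (; it != shard.end() && it->key <= upper; ++it) {
            if (it->tombstone) ++counts.tombstones;
            else               ++counts.records;
        }
        return counts;
    }

    // Combine the per-shard counts: every tombstone cancels exactly one
    // record counted elsewhere, so the final answer is the difference.
    std::size_t combine(const std::vector<RangeCounts> &locals) {
        std::size_t recs = 0, dels = 0;
        for (const auto &c : locals) { recs += c.records; dels += c.tombstones; }
        return recs - dels;
    }
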
@@ -1891,7 +1891,7 @@ $N_B = 1200$, $s = 8$, the tiering layout policy, and tombstone deletes,
with a standard Bentley-Saxe dynamization (\textbf{BSM-FST}), as well
as a single static instance of the structure (\textbf{FST}).
-The results are show in Figure~\ref{fig:fst-eval}. As with range scans,
+The results are shown in Figure~\ref{fig:fst-eval}. As with range scans,
the Bentley-Saxe method shows horrible insertion performance relative to
our framework in Figure~\ref{fig:fst-insert}. Note that the significant
observed difference in update throughput for the two data sets is