Commit messages
* Query Interface Adjustments/Refactoring
Began the process of adjusting the query interface (and also the shard
interface, to a lesser degree) to better accommodate the user. In
particular, the following changes have been made:
1. The number of necessary template arguments for the query type
has been drastically reduced, while also removing the void pointers
and manual delete functions from the interface.
This was accomplished by requiring many of the sub-types associated
with a query (parameters, etc.) to be nested inside the main query
class, and by forcing the SHARD type to expose its associated
record type.
2. User-defined query return types are now supported.
Queries are no longer required to return strictly sets of records.
Instead, the query now has LocalResultType and ResultType
template parameters (which can be defaulted using a typedef in
the Query type itself), allowing much more flexibility.
Note that, at least for the short term, the LocalResultType must
still expose the same is_deleted/is_tombstone interface as a
Wrapped<R> used to, as this is currently needed for delete
filtering. A better approach to this is, hopefully, forthcoming.
3. Updated the ISAMTree.h shard and rangequery.h query to use the
new interfaces, and adjusted the associated unit tests as well.
4. Dropped the unnecessary "get_data()" function from the ShardInterface
concept.
5. Dropped the need to specify a record type in the ShardInterface
concept. This is now handled using a required Shard::RECORD
member of the Shard class itself, which should expose the name
of the record type.
* Updates to framework to support new Query/Shard interfaces
Pretty extensive adjustments to the framework, particularly to the
templates themselves, along with some type-renaming work, to support
the new query and shard interfaces.
Adjusted the external query interface to take an rvalue reference, rather
than a pointer, to the query parameters.
* Removed framework-level delete filtering
This was causing some issues with the new query interface, and should
probably be reworked anyway, so I'm temporarily (TM) removing the
feature.
* Updated benchmarks + remaining code for new interface
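The reshaped interfaces described above can be sketched roughly as follows. This is a minimal hypothetical example: `Record`, `ToyShard`, `RangeQuery`, and all member names here are illustrative stand-ins, not the framework's actual types.

```cpp
#include <cstdint>
#include <vector>

// Illustrative record type; the real framework's records differ.
struct Record { uint64_t key; uint64_t value; };

struct ToyShard {
    using RECORD = Record;    // shard exposes its record type (item 5)
    std::vector<Record> data;
};

struct RangeQuery {
    // sub-types nested inside the query class (item 1), no void pointers
    struct Parameters { uint64_t lo; uint64_t hi; };

    // user-defined result types (item 2), defaulted via typedefs
    using LocalResultType = Record;
    using ResultType = std::vector<Record>;

    static ResultType query(const ToyShard &shard, const Parameters &p) {
        ResultType out;
        for (const auto &r : shard.data)
            if (r.key >= p.lo && r.key <= p.hi)
                out.push_back(r);
        return out;
    }
};
```

With the sub-types nested this way, the framework can recover `Parameters`, `LocalResultType`, and `ResultType` from a single query template argument, rather than taking each as a separate template parameter.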
Necessary updates to get the codebase building under OpenBSD 7.5 with
clang. This is a minimal set of changes to get the build working, which
includes disabling several things that aren't directly compatible. More
work will be necessary to restore full functionality. In particular, Triespline,
PGM, and the reference M-tree do not currently build on OpenBSD with clang
due to GNU dependencies or other gcc-specific features.
The reconstruction task procedure can now simulate future reconstructions
to a specified depth.
The high watermark and low watermark can now be equal, to allow for
blocking reconstruction without requiring odd buffer sizes.
Added a ReconVector type to make it easier to do load balancing by
shifting tasks around, and cleaned up a few interfaces.
This approach should allow us to "simulate" a reconstruction to monitor
the future state of the structure. The idea is that we can then add
preemptive reconstructions to load balance and further smooth the tail
latency curve. If a given reconstruction is significantly smaller than
the next one will be, we can move some of the next one's work preemptively
into the current one.
The next phase is to do the simulation within the scratch_vector and
then do a second pass examining the state of that reconstruction. In
principle, we could look arbitrarily far ahead using this technique.
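The look-ahead idea might be modeled like this. This is a hypothetical sketch, not the framework's code: `ReconTask`, `simulate`, and `should_prepull` are invented names, and the cost model is deliberately simplistic.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical model of the look-ahead idea above; ReconTask and simulate
// are invented names, not the framework's actual API.
struct ReconTask {
    std::size_t source_level;   // levels [0, source_level] merge downward
};

// Simulate a reconstruction against a scratch copy of the per-level record
// counts, returning the structure's future state without modifying it.
std::vector<std::size_t> simulate(std::vector<std::size_t> levels,
                                  const ReconTask &task) {
    std::size_t merged = 0;
    for (std::size_t i = 0; i <= task.source_level && i < levels.size(); ++i) {
        merged += levels[i];
        levels[i] = 0;
    }
    if (task.source_level + 1 >= levels.size())
        levels.resize(task.source_level + 2, 0);
    levels[task.source_level + 1] += merged;
    return levels;
}

// If the next reconstruction would be much larger than the current one,
// it may pay to pull some of its work into the current pass.
bool should_prepull(std::size_t current_cost, std::size_t next_cost) {
    return next_cost > 2 * current_cost;   // threshold is arbitrary here
}
```

Applying `simulate` repeatedly to its own output is what gives the arbitrarily deep look-ahead the message describes.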
A poorly organized commit with fixes for a variety of bugs that were
causing missing records. The core problems all appear to be fixed,
though there is an outstanding problem with tombstones not being
completely canceled. A very small number are appearing in the wrong
order during the static structure test.
It isn't working right now (lots of test failures), but we've reached the
debugging phase.
You can't move-assign the result of std::bind, but you can move-construct
it, so I had to disable the move assignment operator. This means that when
you transfer BufferView ownership over to, say, a QueryBufferState object,
you need to do it by passing std::move(buffview) into a constructor call
only; you cannot assign it.
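The resulting ownership-transfer pattern looks roughly like this. This is a minimal stand-in, not the framework's code: the real BufferView holds buffer state rather than a bare callback, and these member names are illustrative.

```cpp
#include <functional>
#include <utility>

// Minimal stand-in for the pattern described above; the real BufferView
// wraps buffer state rather than a single callback.
struct BufferView {
    std::function<void()> on_release;   // e.g. a bound cleanup routine

    explicit BufferView(std::function<void()> f) : on_release(std::move(f)) {}
    BufferView(BufferView &&) = default;            // move construction: ok
    BufferView &operator=(BufferView &&) = delete;  // move assignment: disabled
};

struct QueryBufferState {
    BufferView view;
    // ownership can only be taken at construction time
    explicit QueryBufferState(BufferView &&v) : view(std::move(v)) {}
};
```

Because the assignment operator is deleted, `state.view = std::move(other)` fails to compile; the only way to hand the view off is `QueryBufferState state{std::move(buffview)}`.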
Plus some assorted fixes for move semantics stuff in BufferView that
accompanied these changes.
Because a BufferView's lifetime is so tightly linked to the lifetime of
regions of the buffer, it can't be copied without potentially breaking
things.
There are a few minor issues that this introduces, however. Global
tracking of a lot of secondary information, such as weights for WIRS/WSS,
or the exact number of tombstones, will need to be approached differently
than it has been historically.
I've also removed most of the tombstone capacity related code. We had
decided not to bother enforcing this at the buffer level anyway, and it
would greatly increase the complexity of the problem of predicting when
the next compaction will be.
On the whole this new approach seems like it'll simplify a lot. This
commit actually removes significantly more code than it adds.
One minor issue: the current implementation will have problems
in the circular array indexes once more than 2^64 records have been
inserted. This doesn't seem like a realistic problem at the moment.
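The indexing scheme that caveat implies can be sketched as follows. This is an assumed, simplified model, not the framework's actual buffer: head and tail are monotonically increasing 64-bit counters, physical slots are derived by modulo, and the counters wrap only after 2^64 insertions, which is exactly the limitation noted above.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified monotonic circular-array indexing (assumed, not the
// framework's actual code). head and tail only ever increase; fullness
// and emptiness are computed from their difference, so no slot is wasted.
struct CircularBuffer {
    std::vector<int> slots;
    uint64_t head = 0;   // next slot to read
    uint64_t tail = 0;   // next slot to write

    explicit CircularBuffer(std::size_t capacity) : slots(capacity) {}

    bool push(int v) {
        if (tail - head == slots.size()) return false;   // full
        slots[tail % slots.size()] = v;
        ++tail;
        return true;
    }

    bool pop(int &out) {
        if (head == tail) return false;                  // empty
        out = slots[head % slots.size()];
        ++head;
        return true;
    }
};
```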
The existing reconstruction logic will occasionally attempt to append an
empty level to another empty level, for some reason. While the underlying
cause of this needs to be looked into, this special case should prevent
shard constructors from being called with a shard count of 0 under tiering,
reducing the error handling overhead of shard code.
This also reduces the special-case overhead on shards. As it was,
shards would need to handle a special case when constructing from other
shards where the first of the two provided shards was a nullptr, which
caused a number of subtle issues (or outright crashes in some cases)
with existing shard implementations.
Currently, proactive buffer tombstone compaction is disabled by forcing
the buffer tombstone capacity to match its record capacity. It isn't
clear how to best handle proactive buffer compactions in an environment
where new buffers are spawned anyway.