Commit message log

* The reconstruction task procedure can now simulate future reconstructions
  to a specified depth.
* The high watermark and low watermark can now be equal, to allow for
  blocking reconstruction without requiring odd buffer sizes.
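A minimal sketch of the watermark behavior described above, under assumed semantics (the names `Watermarks`, `should_reconstruct`, and `insert_must_block` are illustrative, not from the codebase):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: reconstruction is triggered at the low watermark and
// inserts stall at the high watermark. Allowing low == high makes every
// reconstruction blocking, without padding the buffer to create slack.
struct Watermarks {
    size_t low;
    size_t high;   // invariant: low <= high (equality now permitted)
};

bool should_reconstruct(size_t used, const Watermarks &w) {
    return used >= w.low;
}

bool insert_must_block(size_t used, const Watermarks &w) {
    return used >= w.high;
}
```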
* Added a ReconVector type to make it easier to do load balancing by
  shifting tasks around, and to clean up a few interfaces.
* This approach should allow us to "simulate" a reconstruction to monitor
  the future state of the structure. The idea is that we can then add
  pre-emptive reconstructions to load balance and further smooth the tail
  latency curve: if a given reconstruction is significantly smaller than
  the next one will be, we can move some of the next one's work
  pre-emptively into the current one.
  The next phase is to do the simulation within the scratch_vector and
  then do a second pass examining the state of that reconstruction. In
  principle, we could look arbitrarily far ahead using this technique.
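The rebalancing idea above can be sketched as follows; the function name and the 2x threshold are assumptions for illustration, not taken from the actual implementation:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of pre-emptive rebalancing: if the simulated next
// reconstruction is much larger than the current one, pull some of its
// work forward into the current pass.
size_t rebalance(size_t &current, size_t &next, double ratio = 2.0) {
    if (next > static_cast<size_t>(current * ratio)) {
        size_t moved = (next - current) / 2;   // split the difference
        current += moved;
        next -= moved;
        return moved;
    }
    return 0;   // next reconstruction is not disproportionately large
}
```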
* I'm reasonably certain that this is a compiler bug...
* Cleaned up shard implementations, fixed a few bugs, and set up some
  tests. There's still some work to be done in creating tests for the
  weighted sampling operations for the alias and aug btree shards.
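The alias shard presumably implements Walker's alias method for O(1) weighted sampling. A standalone sketch of the table construction, sharing no code with the actual shard:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Walker's alias method: each of n buckets holds a probability and a
// fallback ("alias") index, so one bucket pick plus one coin flip yields
// a weighted sample in O(1).
struct AliasTable {
    std::vector<double> prob;
    std::vector<size_t> alias;

    explicit AliasTable(const std::vector<double> &weights) {
        size_t n = weights.size();
        prob.resize(n);
        alias.resize(n);

        double total = 0;
        for (double w : weights) total += w;

        // Scale weights so the average bucket holds probability 1.
        std::vector<double> scaled(n);
        std::vector<size_t> small, large;
        for (size_t i = 0; i < n; i++) {
            scaled[i] = weights[i] * n / total;
            (scaled[i] < 1.0 ? small : large).push_back(i);
        }

        // Pair each under-full bucket with an over-full one.
        while (!small.empty() && !large.empty()) {
            size_t s = small.back(); small.pop_back();
            size_t l = large.back(); large.pop_back();
            prob[s] = scaled[s];
            alias[s] = l;
            scaled[l] -= (1.0 - scaled[s]);
            (scaled[l] < 1.0 ? small : large).push_back(l);
        }
        for (size_t i : small) prob[i] = 1.0;
        for (size_t i : large) prob[i] = 1.0;
    }

    // Sample with a uniform bucket index and a coin flip in [0, 1).
    size_t sample(size_t bucket, double coin) const {
        return coin < prob[bucket] ? bucket : alias[bucket];
    }
};
```

A test for the shard could verify, as below, that the effective per-item probabilities reconstructed from the table match the normalized weights, independent of construction order.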
* Tweak the reconstruction trigger code to ensure that multiple
  reconstructions won't be triggered at the same time.
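One common way to provide this guarantee is a compare-and-swap on an atomic flag; a minimal sketch with illustrative names (the actual trigger code may differ):

```cpp
#include <atomic>
#include <cassert>

// Hypothetical sketch: an atomic flag is claimed with compare-and-swap,
// so only one caller can start a reconstruction at a time.
std::atomic<bool> recon_in_flight{false};

bool try_trigger_reconstruction() {
    bool expected = false;
    // Only the caller that flips false -> true actually starts the work.
    return recon_in_flight.compare_exchange_strong(expected, true);
}

void finish_reconstruction() {
    recon_in_flight.store(false);
}
```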
* Sometimes, when the max thread count is exceeded, it is possible for
  the scheduler to lock up. This is because the scheduler only runs when
  a new job is put into the queue, so a job blocked by the thread limit
  can be left sitting in the queue. If the main program is waiting on
  that job to finish before scheduling a new one, the system deadlocks.
  I added a second background thread that wakes the scheduler every 20us
  to resolve this and prevent these deadlocks.
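The periodic waker can be sketched as below, assuming the scheduler loop blocks on a condition variable (the class and member names are hypothetical):

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical sketch of the waker: a background thread nudges the
// scheduler's condition variable every 20us, so a job stuck behind the
// thread limit is eventually re-examined even if no new job arrives to
// run the scheduler.
class SchedulerWaker {
public:
    explicit SchedulerWaker(std::condition_variable &cv)
        : m_cv(cv), m_stop(false), m_thread([this] { run(); }) {}

    ~SchedulerWaker() {
        m_stop.store(true);
        m_thread.join();
    }

private:
    void run() {
        while (!m_stop.load()) {
            std::this_thread::sleep_for(std::chrono::microseconds(20));
            m_cv.notify_all();   // wake the scheduler loop
        }
    }

    std::condition_variable &m_cv;
    std::atomic<bool> m_stop;
    std::thread m_thread;
};
```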
* Need to figure out the best way to do the detailed tracking in
  a concurrent manner. I was thinking just an event log, with parsing
  routines for extracting statistics. But that'll be pretty slow.
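One possible shape of the event-log idea, sketched with assumed names: appends stay cheap on the hot path, and statistics are derived by a separate parsing pass:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

// Hypothetical sketch: threads append timestamped events under a lock,
// and statistics are extracted later by parsing routines rather than
// being maintained inline on the hot path.
struct Event {
    uint64_t timestamp;
    int type;
};

class EventLog {
public:
    void record(uint64_t ts, int type) {
        std::lock_guard<std::mutex> g(m_lock);
        m_events.push_back({ts, type});
    }

    // Example parsing routine: count events of a given type.
    size_t count(int type) const {
        std::lock_guard<std::mutex> g(m_lock);
        size_t n = 0;
        for (const auto &e : m_events)
            if (e.type == type) n++;
        return n;
    }

private:
    mutable std::mutex m_lock;
    std::vector<Event> m_events;
};
```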
* A poorly organized commit with fixes for a variety of bugs that were
  causing missing records. The core problems all appear to be fixed,
  though there is an outstanding problem with tombstones not being
  completely canceled. A very small number are appearing in the wrong
  order during the static structure test.
* It isn't working right now (lots of test failures), but we're at the
  debugging phase now.
* You cannot move-assign a std::bind expression, but you can
  move-construct from it, so I had to disable the move assignment
  operator. This means that when you change BufferView ownership over
  to, say, a QueryBufferState object, you need to do it by passing
  std::move(buffview) into a constructor call only; you cannot assign it.
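An illustrative sketch of that constraint, using simplified stand-ins for the classes named above (not the actual definitions): ownership transfers only through a constructor call.

```cpp
#include <cassert>
#include <utility>

// Move-constructible but not move-assignable, mirroring the constraint
// the bind expression imposes on the containing class.
struct BufferView {
    int *start;

    explicit BufferView(int *s) : start(s) {}
    BufferView(BufferView &&other) : start(other.start) { other.start = nullptr; }
    BufferView &operator=(BufferView &&) = delete;   // assignment disabled
};

struct QueryBufferState {
    BufferView view;
    // The only way in: move-construct from an existing view.
    explicit QueryBufferState(BufferView &&v) : view(std::move(v)) {}
};
```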
* Plus some assorted fixes for move semantics stuff in BufferView that
  accompanied these changes.
* I may still play with the shard from shards constructor, and queries
  need some work yet too.
* Because a BufferView's lifetime is so tightly linked to the lifetime of
  regions of the buffer, it can't be copied without potentially breaking
  things.
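The standard C++ expression of that rule is to delete the copy operations; a minimal sketch (fields are illustrative, not the real layout):

```cpp
#include <cassert>
#include <cstddef>
#include <type_traits>

// Deleting the copy operations ties the view to a single owner, so no
// stray copy can outlive the buffer region it refers to.
struct BufferView {
    const char *region;
    size_t length;

    BufferView(const char *r, size_t n) : region(r), length(n) {}
    BufferView(const BufferView &) = delete;
    BufferView &operator=(const BufferView &) = delete;
    BufferView(BufferView &&) = default;   // moving is still permitted
};
```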
* There are a few minor issues that this introduces, however. Global
  tracking of a lot of secondary information, such as weights for
  WIRS/WSS or the exact number of tombstones, will need to be approached
  differently than it has been historically.
  I've also removed most of the tombstone-capacity related code. We had
  decided not to bother enforcing this at the buffer level anyway, and it
  would greatly increase the complexity of predicting when the next
  compaction will be.
  On the whole, this new approach seems like it'll simplify a lot. This
  commit actually removes significantly more code than it adds.
  One minor issue: the current implementation will have problems in the
  circular array indexes once more than 2^64 records have been inserted.
  This doesn't seem like a realistic problem at the moment.
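The 2^64 limit suggests an index scheme like the following sketch (assumed, not the actual code): head and tail counters grow monotonically and are reduced modulo the capacity on access, so they only misbehave once the 64-bit counters themselves wrap.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Monotonic 64-bit indices over a fixed-size ring: fullness is tail - head,
// and physical slots are logical indices mod CAP. Breaks only after 2^64
// total insertions, when the counters wrap.
template <typename T, size_t CAP>
class CircularBuffer {
public:
    bool append(const T &val) {
        if (m_tail - m_head == CAP) return false;   // full
        m_data[m_tail % CAP] = val;
        m_tail++;
        return true;
    }

    T *get(uint64_t idx) {                          // logical index
        if (idx < m_head || idx >= m_tail) return nullptr;
        return &m_data[idx % CAP];
    }

    void advance_head() { if (m_head < m_tail) m_head++; }

private:
    T m_data[CAP] = {};
    uint64_t m_head = 0;   // oldest live logical index
    uint64_t m_tail = 0;   // next logical index to write
};
```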
* Clarified the reasoning for a few things in comments that just tripped
  me up during debugging.
* The existing reconstruction logic will occasionally attempt to append an
  empty level to another empty level, for some reason. While the underlying
  cause of this needs to be looked into, this special case should prevent
  shard constructors from being called with a shard count of 0 under
  tiering, reducing the error handling overhead of shard code.
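The special case reduces to an early-return guard; a sketch with hypothetical types (the real level/shard structures are more involved):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: an empty source level is appended as a no-op, so
// the shard-merge path below can never run with zero shards.
struct Shard {
    size_t record_count;
};

void append_level(std::vector<Shard> &dest, const std::vector<Shard> &src) {
    if (src.empty()) {
        return;   // nothing to merge; skip shard construction entirely
    }
    dest.insert(dest.end(), src.begin(), src.end());
}
```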
* This also reduces the special-case overhead on shards. As it was,
  shards would need to handle a special case when constructing from other
  shards where the first of the two provided shards was a nullptr, which
  caused a number of subtle issues (or outright crashes in some cases)
  with existing shard implementations.
* Currently, proactive buffer tombstone compaction is disabled by forcing
  the buffer tombstone capacity to match its record capacity. It isn't
  clear how to best handle proactive buffer compactions in an environment
  where new buffers are spawned anyway.
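Why matching the two capacities disables the feature can be sketched as follows (names assumed): tombstones count toward the record capacity, so the buffer always fills and flushes before a tombstone-capacity trigger could fire.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: the proactive trigger fires when the tombstone
// count reaches its capacity. Pinning that capacity to the record
// capacity means the trigger can never fire before a normal flush.
struct BufferConfig {
    size_t record_capacity;
    size_t tombstone_capacity;
};

bool tombstone_compaction_due(size_t tombstones, const BufferConfig &cfg) {
    return tombstones >= cfg.tombstone_capacity;
}

BufferConfig disable_proactive_compaction(size_t record_capacity) {
    return {record_capacity, record_capacity};   // caps made equal
}
```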