| Commit message | Author | Age | Files | Lines |

Tweak the reconstruction trigger code to ensure that multiple
reconstructions won't be triggered at the same time.
Sometimes, when the max thread count is exceeded, it is possible for
the scheduler to lock up. The scheduler only runs when a new job is put
into the queue, so a job can be blocked by the thread limit and left
sitting in the queue. If the main program is waiting on this job to
finish before scheduling a new one, the system deadlocks.
To resolve this, I added a second background thread that wakes the
scheduler up every 20us, preventing these deadlocks.
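A minimal sketch of the watchdog idea described above, assuming the scheduler's dispatch loop blocks on a `std::condition_variable` (the `SchedulerWakeup` name and the condition-variable coupling are illustrative, not the framework's actual API):

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <thread>

// Background thread that periodically nudges the scheduler so a job
// blocked by the thread limit is eventually re-examined, even if no
// new job ever arrives to trigger a scheduling pass.
class SchedulerWakeup {
public:
    explicit SchedulerWakeup(std::condition_variable &sched_cv)
        : m_cv(sched_cv), m_run(true), m_thread([this] { loop(); }) {}

    ~SchedulerWakeup() {
        m_run.store(false);
        m_thread.join();
    }

private:
    void loop() {
        while (m_run.load()) {
            std::this_thread::sleep_for(std::chrono::microseconds(20));
            m_cv.notify_one();  // wake the scheduler's dispatch loop
        }
    }

    std::condition_variable &m_cv;
    std::atomic<bool> m_run;
    std::thread m_thread;
};
```

The wakeup interval trades a small amount of idle CPU for a bound on how long a runnable job can sit unnoticed in the queue.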
Need to figure out the best way to do the detailed tracking in
a concurrent manner. I was thinking just an event log, with parsing
routines for extracting statistics. But that'll be pretty slow.
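One minimal shape the event-log idea could take (entirely hypothetical; `EventLog`, `EventType`, and the lock-per-append design are illustrative assumptions, and the full-scan `count()` is exactly the "pretty slow" part):

```cpp
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

// Threads append fixed-size records under a lock; statistics are
// extracted later by scanning the whole log.
enum class EventType : uint8_t { Insert, Delete, Reconstruct };

struct Event {
    EventType type;
    uint64_t timestamp;
};

class EventLog {
public:
    void record(EventType t, uint64_t ts) {
        std::lock_guard<std::mutex> g(m_lock);
        m_events.push_back({t, ts});
    }

    // Parsing routine: count events of one type by scanning the log.
    size_t count(EventType t) const {
        std::lock_guard<std::mutex> g(m_lock);
        size_t n = 0;
        for (const auto &e : m_events)
            if (e.type == t) n++;
        return n;
    }

private:
    mutable std::mutex m_lock;
    std::vector<Event> m_events;
};
```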
A poorly organized commit with fixes for a variety of bugs that were
causing missing records. The core problems all appear to be fixed,
though there is an outstanding problem with tombstones not being
completely canceled. A very small number are appearing in the wrong
order during the static structure test.
It isn't working right now (lotsa test failures), but we're to the
debugging phase now.
You can't move-assign the result of std::bind, but you can
move-construct it, so I had to delete the move assignment operator.
This means that when you change BufferView ownership over to, say, a
QueryBufferState object, you need to do it by passing std::move(buffview)
into a constructor call only--you cannot assign it.
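A sketch of the ownership rule described above. This is not the real BufferView; a `std::function` release callback stands in for the stored bind result, and `QueryBufferState` is reduced to the handoff pattern:

```cpp
#include <functional>
#include <utility>

// Move-construct-only view: move assignment is deleted, so ownership
// can only be transferred by passing std::move(view) to a constructor.
class BufferView {
public:
    explicit BufferView(std::function<void()> release)
        : m_release(std::move(release)), m_active(true) {}

    BufferView(BufferView &&other) noexcept
        : m_release(std::move(other.m_release)), m_active(other.m_active) {
        other.m_active = false;  // moved-from view no longer owns the region
    }

    BufferView(const BufferView &) = delete;
    BufferView &operator=(const BufferView &) = delete;
    BufferView &operator=(BufferView &&) = delete;  // cannot move-assign

    ~BufferView() {
        if (m_active && m_release) m_release();
    }

    bool active() const { return m_active; }

private:
    std::function<void()> m_release;
    bool m_active;
};

// Ownership handoff goes through a constructor call:
struct QueryBufferState {
    explicit QueryBufferState(BufferView view) : m_view(std::move(view)) {}
    BufferView m_view;
};
```

With move assignment deleted, `state.m_view = std::move(v);` fails to compile, while `QueryBufferState state(std::move(v));` works.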
Plus some assorted fixes for move semantics stuff in BufferView that
accompanied these changes.
I may still play with the shard from shards constructor, and queries
need some work yet too.
Because a BufferView's lifetime is so tightly linked to the lifetime of
regions of the buffer, it can't be copied without potentially breaking
things.
There are a few minor issues that this introduces, however. Global
tracking of a lot of secondary information, such as weights for WIRS/WSS,
or the exact number of tombstones, will need to be approached differently
than it has been historically with this new approach.
I've also removed most of the tombstone capacity related code. We had
decided not to bother enforcing this at the buffer level anyway, and it
would greatly increase the complexity of predicting when the next
compaction will occur.
On the whole, this new approach seems like it'll simplify a lot. This
commit actually removes significantly more code than it adds.
One remaining caveat: the current implementation will have problems
with the circular array indexes once more than 2^64 records have been
inserted. This doesn't seem like a realistic problem at the moment.
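A minimal sketch of the circular-index scheme behind that caveat (the `CircularArray` name and interface are illustrative): head and tail counters only ever increase, positions are taken modulo the capacity, and the structure only misbehaves when the 64-bit counters wrap after 2^64 insertions.

```cpp
#include <cstdint>

template <typename T, uint64_t CAP>
class CircularArray {
public:
    bool push(const T &v) {
        if (m_tail - m_head >= CAP) return false;  // full
        m_data[m_tail % CAP] = v;
        m_tail++;  // never reset; wraps only after 2^64 inserts
        return true;
    }

    bool pop(T *out) {
        if (m_head == m_tail) return false;  // empty
        *out = m_data[m_head % CAP];
        m_head++;
        return true;
    }

    uint64_t size() const { return m_tail - m_head; }

private:
    T m_data[CAP] = {};
    uint64_t m_head = 0;  // monotonically increasing
    uint64_t m_tail = 0;  // monotonically increasing
};
```

Monotonic indices make "full" and "empty" unambiguous (`tail - head` is the exact occupancy) without a separate count or a wasted slot, which is what makes the 2^64 wrap the only failure mode.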
Clarified the reasoning for a few things in comments that just tripped
me up during debugging.
The existing reconstruction logic will occasionally attempt to append an
empty level to another empty level, for reasons not yet understood. While
the underlying cause still needs to be investigated, this special case
should prevent shard constructors from being called with a shard count
of 0 under tiering, reducing the error-handling overhead of shard code.
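The special case reduces to a guard clause of roughly this shape (hypothetical sketch; `Level`, `Shard`, and `append_from` are stand-ins for the framework's actual types):

```cpp
#include <vector>

struct Shard {};

struct Level {
    std::vector<Shard *> shards;

    // Bail out before constructing from zero shards, so shard
    // constructors are never invoked with a shard count of 0.
    bool append_from(const Level &src) {
        if (src.shards.empty()) return false;  // empty-onto-empty case
        for (Shard *s : src.shards)
            shards.push_back(s);
        return true;
    }
};
```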
This also reduces the special-case overhead on shards. As it was,
shards would need to handle a special case when constructing from other
shards where the first of the two provided shards was a nullptr, which
caused a number of subtle issues (or outright crashes in some cases)
with existing shard implementations.
Currently, proactive buffer tombstone compaction is disabled by forcing
the buffer tombstone capacity to match its record capacity. It isn't
clear how to best handle proactive buffer compactions in an environment
where new buffers are spawned anyway.
In InternalLevel::clone(), the m_shard_cnt variable was not being set
appropriately in the clone, causing the record counts for multi-shard
levels to be reported incorrectly.
In DynamicExtension::merge(), the merges were being performed in the
wrong order, resulting in multi-level merges deleting records. The
leveling tests all passed even with this bug for some reason, but it
caused tiering tests to fail. It isn't clear _why_ leveling appeared to
work, but the bug is now fixed, so that's largely irrelevant I suppose.
1. The system should now shut down cleanly when the DynamicExtension
object is destroyed. Before now, this would lead to use-after-frees
and/or deadlocks.
2. Improved synchronization on mutable buffer structure management to
fix the issue of the framework losing track of buffers during Epoch
changeovers.
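The shutdown ordering in point 1 can be sketched as follows (a minimal stand-in for the real DynamicExtension: one worker thread, one stop flag): the destructor signals workers to stop and joins them before any member state is torn down, which is what prevents both the use-after-free and the deadlock.

```cpp
#include <atomic>
#include <thread>

class DynamicExtension {
public:
    DynamicExtension() : m_running(true), m_worker([this] { loop(); }) {}

    ~DynamicExtension() {
        m_running.store(false);  // signal shutdown first...
        m_worker.join();         // ...then wait for the worker to exit
        // Only after the join is it safe for members to be destroyed.
    }

    bool running() const { return m_running.load(); }

private:
    void loop() {
        while (m_running.load())
            std::this_thread::yield();  // placeholder for real work
    }

    std::atomic<bool> m_running;  // declared before m_worker on purpose
    std::thread m_worker;
};
```

Declaration order matters: `m_running` precedes `m_worker`, so the flag is initialized before the thread that reads it starts.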
This function wasn't ensuring that the epoch pinned and the epoch
returned were the same epoch when the epoch was advanced in the middle
of the call. This is now resolved; furthermore, the function will return
the newer epoch, rather than the older one, in such a situation.
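One common shape for this kind of fix is a pin-then-verify retry loop (hypothetical sketch; `Epoch`, `Framework`, and `get_active_epoch` are illustrative stand-ins, not the framework's real types):

```cpp
#include <atomic>

struct Epoch {
    std::atomic<int> pins{0};
};

struct Framework {
    std::atomic<Epoch *> active{nullptr};

    // Pin and return the active epoch. If the epoch advances between the
    // load and the pin, unpin and retry, so the epoch returned is always
    // the one actually pinned (and the newer one in a race).
    Epoch *get_active_epoch() {
        while (true) {
            Epoch *e = active.load();
            e->pins.fetch_add(1);              // pin it
            if (active.load() == e) return e;  // still current
            e->pins.fetch_sub(1);              // epoch advanced: retry
        }
    }
};
```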
Fixed a few bugs with concurrent operation of internal_append, and
enabled the spawning of multiple empty buffers while merges are active.
Fixed an incorrectly initialized lock guard
Add empty buffer now supports a CAS-like operation, where it will only
add a buffer if the currently active one is still the same as when the
decision to add a buffer was made. This is to support adding new buffers
on insert outside of the merge-lock, so that multiple concurrent threads
cannot add multiple new empty buffers.
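The CAS-like operation maps naturally onto a compare-and-swap on the active buffer pointer. A minimal sketch, assuming the active buffer is tracked by an atomic pointer (`Buffer`, `BufferChain`, and `add_empty_buffer` here are illustrative names):

```cpp
#include <atomic>

struct Buffer {};

struct BufferChain {
    std::atomic<Buffer *> head{nullptr};

    // Install new_buf only if `expected` is still the active buffer.
    // Returns false if another thread already swapped in a new buffer,
    // in which case the caller discards new_buf instead of adding a
    // second empty buffer.
    bool add_empty_buffer(Buffer *expected, Buffer *new_buf) {
        return head.compare_exchange_strong(expected, new_buf);
    }
};
```

This is what lets inserting threads add buffers outside the merge lock: of N concurrent threads that all decide a new buffer is needed, exactly one CAS succeeds.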
Use an explicit m_tail variable for insertion, rather than reusing
m_reccnt. This ensures that the record count doesn't increase the moment
an insertion begins, and allows m_tail to be decremented on failure
without the record count momentarily changing.
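A sketch of that separation (illustrative `AppendBuffer`; the member names `m_tail` and `m_reccnt` are from the commit, everything else is an assumption): slots are claimed through `m_tail`, which can be rolled back on failure, while `m_reccnt` only ever counts committed records.

```cpp
#include <atomic>
#include <cstdint>

struct AppendBuffer {
    static constexpr uint64_t CAP = 1024;
    int data[CAP];
    std::atomic<uint64_t> m_tail{0};    // insertion cursor, may roll back
    std::atomic<uint64_t> m_reccnt{0};  // committed records only

    bool append(int rec) {
        uint64_t pos = m_tail.fetch_add(1);  // claim a slot
        if (pos >= CAP) {
            m_tail.fetch_sub(1);  // give the slot back; m_reccnt untouched
            return false;
        }
        data[pos] = rec;
        m_reccnt.fetch_add(1);  // count the record only once it's in place
        return true;
    }
};
```

If `m_reccnt` itself were used as the cursor, a failed append would make the record count visibly rise and fall, which is exactly the transient the commit eliminates.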
Reordered some code in internal_append() to avoid use-after-frees on the
mutable buffer reference used for insertion.
The buffer isn't responsible for much concurrency control (CC) anymore
(just the append operation), so this code was no longer necessary. Also
removed the only calls to some of these CC operations within the rest
of the framework.