I may still adjust the shard-from-shards constructor, and the queries still need some work as well.
Because a BufferView's lifetime is so tightly linked to the lifetime of
regions of the buffer, it can't be copied without potentially breaking
things.
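A minimal sketch of what a non-copyable, move-only view type can look like (the member and method names here are illustrative, not taken from the codebase):

```cpp
#include <cassert>
#include <cstddef>
#include <type_traits>
#include <utility>

// Sketch: a view whose lifetime is tied to a region of a shared buffer.
// Copying is deleted so two views can never refer to the same region
// independently; moving transfers ownership of the view instead.
struct BufferView {
    const int *m_data;           // borrowed pointer into the buffer
    std::size_t m_head, m_tail;  // bounds of the viewed region

    BufferView(const int *data, std::size_t head, std::size_t tail)
        : m_data(data), m_head(head), m_tail(tail) {}

    BufferView(const BufferView &) = delete;             // no copies
    BufferView &operator=(const BufferView &) = delete;

    BufferView(BufferView &&other) noexcept
        : m_data(other.m_data), m_head(other.m_head), m_tail(other.m_tail) {
        other.m_data = nullptr;  // moved-from view no longer refers to the region
    }

    std::size_t get_record_count() const { return m_tail - m_head; }
};
```

Deleting the copy operations makes misuse a compile-time error rather than a runtime lifetime bug.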
There are a few minor issues that this introduces, however. Global tracking of a lot of secondary information, such as weights for WIRS/WSS or the exact number of tombstones, will need to be approached differently than it has been historically under this new approach.
I've also removed most of the tombstone-capacity related code. We had decided not to bother enforcing this at the buffer level anyway, and it would greatly increase the complexity of predicting when the next compaction will occur.
On the whole this new approach seems like it'll simplify a lot. This
commit actually removes significantly more code than it adds.
One minor issue: the current implementation will have problems with the circular array indexes once more than 2^64 records have been inserted. This doesn't seem like a realistic problem at the moment.
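The indexing scheme described above can be sketched as follows: head and tail are monotonically increasing 64-bit counters, and physical slots are derived by reduction modulo the capacity (names are illustrative). Because the counters never reset, they would only wrap after 2^64 insertions:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sketch of circular-array indexing with monotonically increasing
// logical indices. The live window is [m_head, m_tail); slot() maps a
// logical index to a physical array position.
struct CircularIndex {
    std::uint64_t m_head = 0;  // logical index of the oldest live record
    std::uint64_t m_tail = 0;  // logical index one past the newest record
    std::size_t m_capacity;

    explicit CircularIndex(std::size_t capacity) : m_capacity(capacity) {}

    std::size_t slot(std::uint64_t logical) const { return logical % m_capacity; }
    std::uint64_t size() const { return m_tail - m_head; }

    bool push() {
        if (size() == m_capacity) return false;  // window is full
        m_tail++;
        return true;
    }

    void pop() { if (size() > 0) m_head++; }
};
```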
Clarified the reasoning for a few things in comments that just tripped
me up during debugging.
The existing reconstruction logic will occasionally attempt to append an empty level to another empty level, for some reason. While the underlying cause of this needs to be looked into, this special case should prevent shard constructors from being called with a shard count of 0 under tiering, reducing the error-handling overhead of shard code.
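The guard could look something like this sketch (the types and the helper are hypothetical stand-ins for the reconstruction path): the source shards are counted before any shard constructor runs, and an all-empty input set is rejected outright.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a shard; only the record count matters here.
struct Shard { std::size_t reccnt; };

// Sketch of the special case: when every source level is empty, there is
// nothing to reconstruct, so we bail out before a shard constructor
// could ever see a shard count of 0.
bool can_reconstruct(const std::vector<std::vector<Shard*>> &levels) {
    std::size_t shard_cnt = 0;
    for (const auto &level : levels)
        shard_cnt += level.size();
    return shard_cnt > 0;
}
```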
This also reduces the special-case overhead on shards. As it was,
shards would need to handle a special case when constructing from other
shards where the first of the two provided shards was a nullptr, which
caused a number of subtle issues (or outright crashes in some cases)
with existing shard implementations.
Currently, proactive buffer tombstone compaction is disabled by forcing
the buffer tombstone capacity to match its record capacity. It isn't
clear how to best handle proactive buffer compactions in an environment
where new buffers are spawned anyway.
In InternalLevel::clone(), the m_shard_cnt variable was not being set appropriately in the clone, resulting in the record counts for a multi-shard level being reported incorrectly.
In DynamicExtension::merge(), the merges were being performed in the
wrong order, resulting in multi-level merges deleting records. The
leveling tests all passed even with this bug for some reason, but it
caused tiering tests to fail. It isn't clear _why_ leveling appeared to
work, but the bug is now fixed, so that's largely irrelevant I suppose.
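The clone bug can be illustrated with a simplified sketch (the shard representation here is a placeholder, not the real one): without copying m_shard_cnt, the count loop in the clone iterates over zero shards and reports nothing.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

struct Shard { std::size_t reccnt; };  // placeholder shard

// Simplified InternalLevel showing the fix: clone() must carry
// m_shard_cnt over, since get_record_count() iterates up to it rather
// than over the full vector.
struct InternalLevel {
    std::vector<Shard> m_shards;
    std::size_t m_shard_cnt = 0;

    std::unique_ptr<InternalLevel> clone() const {
        auto copy = std::make_unique<InternalLevel>();
        copy->m_shards = m_shards;
        copy->m_shard_cnt = m_shard_cnt;  // the assignment the bug was missing
        return copy;
    }

    std::size_t get_record_count() const {
        std::size_t cnt = 0;
        for (std::size_t i = 0; i < m_shard_cnt; i++)
            cnt += m_shards[i].reccnt;
        return cnt;
    }
};
```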
1. The system should now shut down cleanly when the DynamicExtension
object is destroyed. Before now, this would lead to use-after-frees
and/or deadlocks.
2. Improved synchronization on mutable buffer structure management to
fix the issue of the framework losing track of buffers during Epoch
changeovers.
This function wasn't ensuring that the epoch pinned and the epoch returned were the same epoch in the situation where the epoch was advanced in the middle of the call. This is now resolved; further, the function will return the newer epoch, rather than the older one, in such a situation.
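A common shape for this kind of fix is a pin-then-recheck loop, sketched below with hypothetical names: pin the epoch that looks active, then verify it is still active; if a changeover raced in between, drop the pin and retry on the newer epoch.

```cpp
#include <atomic>
#include <cassert>

// Minimal pinnable epoch.
struct Epoch {
    std::atomic<int> m_pins{0};
    void pin()   { m_pins++; }
    void unpin() { m_pins--; }
};

struct Framework {
    std::atomic<Epoch*> m_active;

    // Pin and return the active epoch. If the active epoch changes
    // between the load and the pin, the stale pin is released and the
    // loop retries, so the pinned epoch and the returned epoch always
    // match (and are the newer one).
    Epoch *get_active_epoch() {
        Epoch *epoch;
        do {
            epoch = m_active.load();
            epoch->pin();
            if (epoch == m_active.load()) break;  // still active: pin is valid
            epoch->unpin();                       // changeover raced us; retry
        } while (true);
        return epoch;
    }
};
```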
Fixed a few bugs with concurrent operation of internal_append, and enabled the spawning of multiple empty buffers while merges are active.
Fixed an incorrectly initialized lock guard.
Adding an empty buffer now supports a CAS-like operation: a buffer will only be added if the currently active one is still the same as when the decision to add a buffer was made. This is to support adding new buffers on insert outside of the merge lock, so that multiple concurrent threads cannot add multiple new empty buffers.
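The CAS-like behavior maps naturally onto std::atomic::compare_exchange_strong, as in this sketch (types and names are illustrative): the swap only happens if the buffer that prompted the decision is still the active one.

```cpp
#include <atomic>
#include <cassert>

struct Buffer { Buffer *next = nullptr; };  // placeholder buffer

struct BufferList {
    std::atomic<Buffer*> m_active;

    // Attach `fresh` only if `expected` is still the active buffer.
    // Returns true on success; false means another thread already
    // swapped `expected` out, so no duplicate empty buffer is added.
    bool add_empty_buffer(Buffer *expected, Buffer *fresh) {
        return m_active.compare_exchange_strong(expected, fresh);
    }
};
```

If two threads both see the same full buffer and race to add a replacement, exactly one CAS succeeds and the loser simply uses the winner's new buffer.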
Use an explicit m_tail variable for insertion, rather than using m_reccnt. This ensures that the record count doesn't increase until new records have actually been inserted, and allows the m_tail variable to be decremented on failure without causing the record count to momentarily change.
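A sketch of this reservation scheme (simplified to a plain int buffer with hypothetical names): slots are reserved by incrementing m_tail, while m_reccnt only advances once a record is fully written, so a failed reservation rolls m_tail back without the visible count ever fluctuating.

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

struct MutableBuffer {
    std::vector<int> m_data;
    std::atomic<std::uint64_t> m_tail{0};    // slot reservation counter
    std::atomic<std::uint64_t> m_reccnt{0};  // count of fully written records

    explicit MutableBuffer(std::size_t cap) : m_data(cap) {}

    bool append(int rec) {
        std::uint64_t slot = m_tail.fetch_add(1);
        if (slot >= m_data.size()) {
            m_tail.fetch_sub(1);   // roll back the failed reservation
            return false;
        }
        m_data[slot] = rec;
        m_reccnt.fetch_add(1);     // only now does the record become visible
        return true;
    }
};
```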
Reordered some code in internal_append() to avoid use-after-frees on the mutable buffer reference used for insertion.
The buffer isn't responsible for much concurrency control (CC) anymore (just the append operation), so this code was no longer necessary. Also removed the only calls to some of these CC operations within the rest of the framework.
Added a new scheduler for ensuring single-threaded
operation. Additionally, added a static assert to (at least for now)
restrict the use of tagging to this single threaded scheduler.
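One way such a restriction can be expressed (scheduler names here are placeholders, not necessarily the framework's own): a static_assert in the extension template rejects any configuration combining tagging with a concurrent scheduler at compile time.

```cpp
#include <cassert>
#include <type_traits>

struct SerialScheduler {};  // hypothetical single-threaded scheduler
struct FIFOScheduler {};    // hypothetical concurrent scheduler

// Sketch: instantiating the extension with delete tagging enabled and
// any scheduler other than the serial one fails to compile.
template <typename SchedType, bool DeleteTagging>
struct DynamicExtension {
    static_assert(!DeleteTagging ||
                      std::is_same<SchedType, SerialScheduler>::value,
                  "delete tagging is only supported under the single-threaded scheduler");
};
```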
The epochs must be released in the destructor prior to releasing the
buffers and structures, as otherwise there are references remaining to
these objects and their destructors will fail.
Additionally, fixed a bug in the constructor resulting in a memory leak
due to allocating an extra starting version and buffer.
When an epoch is created using the constructor Epoch(Structure, Buffer), it will call take_reference() on both.
This was necessary to ensure that the destructor doesn't fail, as it releases references and fails if the refcnt is 0. It also relieves the user of the object from the burden of manually taking references in this situation.
This is mostly just for testing purposes at the moment, though I'd
imagine it may be useful for other reasons too.
Instead of busy waiting on the active job count, a condition variable is
now used to wait for all active jobs to finish before freeing an epoch's
resources.
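The busy-wait-to-condition-variable change follows a standard pattern, sketched here with illustrative names: the last job to finish notifies, and the releasing thread sleeps until the predicate holds instead of spinning.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

struct Epoch {
    std::mutex m_mtx;
    std::condition_variable m_cv;
    int m_active_jobs = 0;

    void start_job() {
        std::lock_guard<std::mutex> g(m_mtx);
        m_active_jobs++;
    }

    void end_job() {
        std::lock_guard<std::mutex> g(m_mtx);
        if (--m_active_jobs == 0)
            m_cv.notify_all();  // wake anyone waiting to retire the epoch
    }

    // Blocks until every active job has finished; the predicate form of
    // wait() handles spurious wakeups and the already-quiescent case.
    void await_quiescence() {
        std::unique_lock<std::mutex> lk(m_mtx);
        m_cv.wait(lk, [this] { return m_active_jobs == 0; });
    }
};
```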
get_memory_usage, get_aux_memory_usage, get_record_count,
get_tombstone_count, and create_static_structure have been adjusted to
ensure that they pull from a consistent epoch, even if a change-over
occurs midway through the function.
These functions also now register with the epoch as a job, to ensure that
the epoch they are operating on isn't retired midway through the
function. Probably not a big issue for the accessors, but I could see it
being very important for create_static_structure.
I started moving over to an explicit Epoch based system, which has
necessitated a ton of changes throughout the code base. This will
ultimately allow for a much cleaner set of abstractions for managing
concurrency.
Currently there's a race condition of some type to sort out.