Plus some assorted fixes for move semantics stuff in BufferView that
accompanied these changes.
Because a BufferView's lifetime is so tightly linked to the lifetime of
regions of the buffer, it can't be copied without potentially breaking
things.
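
As a rough illustration of why copying is unsafe here (all names and members hypothetical, not the framework's actual BufferView), a view type like this deletes its copy operations and transfers ownership on move, so no two live views can refer to the same buffer region:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>

// Hypothetical sketch: a view over a region of buffer storage.
// Copying is deleted because two views of the same region could
// outlive it or double-release it; moving transfers ownership.
class BufferView {
public:
    BufferView(const int *data, size_t count) : m_data(data), m_count(count) {}

    BufferView(const BufferView&) = delete;             // no copies
    BufferView &operator=(const BufferView&) = delete;

    BufferView(BufferView &&other) noexcept
        : m_data(other.m_data), m_count(other.m_count) {
        other.m_data = nullptr;   // moved-from view no longer refers to the region
        other.m_count = 0;
    }

    BufferView &operator=(BufferView &&other) noexcept {
        if (this != &other) {
            m_data = other.m_data;
            m_count = other.m_count;
            other.m_data = nullptr;
            other.m_count = 0;
        }
        return *this;
    }

    size_t size() const { return m_count; }
    bool valid() const { return m_data != nullptr; }

private:
    const int *m_data;
    size_t m_count;
};
```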
There are a few minor issues that this introduces, however. Global
tracking of a lot of secondary information, such as weights for WIRS/WSS,
or the exact number of tombstones will need to be approached differently
than they have been historically with this new approach.
I've also removed most of the tombstone capacity related code. We had
decided not to bother enforcing this at the buffer level anyway, and it
would greatly increase the complexity of the problem of predicting when
the next compaction will be.
On the whole this new approach seems like it'll simplify a lot. This
commit actually removes significantly more code than it adds.
One minor issue: the current implementation will have problems
in the circular array indexes once more than 2^64 records have been
inserted. This doesn't seem like a realistic problem at the moment.
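
For illustration, a simplified sketch of this indexing scheme (names hypothetical): the logical head/tail indices grow monotonically and are mapped to physical slots by modulo, so they only wrap around after 2^64 insertions:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch: logical indices are never reset, only mapped into the
// physical array by modulo. A uint64_t index overflows only after
// 2^64 total insertions, which is the limitation noted above.
class CircularArray {
public:
    explicit CircularArray(uint64_t cap)
        : m_data(cap), m_cap(cap), m_head(0), m_tail(0) {}

    bool push(int rec) {
        if (m_tail - m_head == m_cap) return false;  // full
        m_data[m_tail % m_cap] = rec;
        m_tail++;                                    // monotonically increasing
        return true;
    }

    bool pop(int *out) {
        if (m_tail == m_head) return false;          // empty
        *out = m_data[m_head % m_cap];
        m_head++;
        return true;
    }

    uint64_t size() const { return m_tail - m_head; }

private:
    std::vector<int> m_data;
    uint64_t m_cap, m_head, m_tail;
};
```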
Clarified the reasoning for a few things in comments that just tripped
me up during debugging.
The existing reconstruction logic will occasionally attempt to append an
empty level to another empty level, for some reason. While the underlying
cause of this still needs to be looked into, this special case should
prevent shard constructors from being called with a shard count of 0
under tiering, reducing the error handling overhead of shard code.
This also reduces the special-case overhead on shards. As it was,
shards would need to handle a special case when constructing from other
shards where the first of the two provided shards was a nullptr, which
caused a number of subtle issues (or outright crashes in some cases)
with existing shard implementations.
Currently, proactive buffer tombstone compaction is disabled by forcing
the buffer tombstone capacity to match its record capacity. It isn't
clear how to best handle proactive buffer compactions in an environment
where new buffers are spawned anyway.
In InternalLevel::clone(), the m_shard_cnt variable was not being set
appropriately in the clone, resulting in the record counts for a
multi-shard level being reported incorrectly.
In DynamicExtension::merge(), the merges were being performed in the
wrong order, resulting in multi-level merges deleting records. The
leveling tests all passed even with this bug for some reason, but it
caused tiering tests to fail. It isn't clear _why_ leveling appeared to
work, but the bug is now fixed, so that's largely irrelevant I suppose.
1. The system should now cleanly shut down when the DynamicExtension
object is destroyed. Before now, this would lead to use-after-frees
and/or deadlocks.
2. Improved synchronization on mutable buffer structure management to
fix the issue of the framework losing track of buffers during Epoch
changeovers.
This function wasn't ensuring that the epoch pinned and the epoch
returned were the same epoch in the situation where the epoch was
advanced in the middle of the call. This is now resolved; further, the
function will return the newer epoch, rather than the older one, in
such a situation.
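
The fix can be sketched as a pin-then-verify retry loop (simplified, hypothetical types; the real epoch bookkeeping is more involved): pin the epoch you read, then re-check that it is still the active one, and retry if it advanced in between:

```cpp
#include <atomic>
#include <cassert>

// Hypothetical sketch. An epoch is pinned by bumping its pin count;
// if the globally active epoch advanced between reading the pointer
// and pinning it, the pin is dropped and the loop retries, so the
// pinned epoch and the returned epoch always match (and are always
// the newer one).
struct Epoch {
    std::atomic<unsigned> pins{0};
};

std::atomic<Epoch*> g_active_epoch{nullptr};

Epoch *get_active_epoch() {
    for (;;) {
        Epoch *e = g_active_epoch.load();
        e->pins.fetch_add(1);                  // pin the epoch we observed
        if (g_active_epoch.load() == e) {
            return e;                          // still current: pin and return match
        }
        e->pins.fetch_sub(1);                  // epoch advanced mid-call: retry
    }
}
```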
Fixed a few bugs with the concurrent operation of internal_append, and
enabled the spawning of multiple empty buffers while merges are active.
Fixed an incorrectly initialized lock guard
Add empty buffer now supports a CAS-like operation, where it will only
add a buffer if the currently active one is still the same as when the
decision to add a buffer was made. This is to support adding new buffers
on insert outside of the merge-lock, so that multiple concurrent threads
cannot add multiple new empty buffers.
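
A sketch of the CAS-like check (hypothetical names; the real version presumably operates on the framework's buffer structures): the new buffer is only installed if the active buffer is still the one observed when the decision was made, so only one of several racing threads succeeds:

```cpp
#include <atomic>
#include <cassert>

// Hypothetical sketch of the CAS-like buffer installation.
struct Buffer { int capacity; };

std::atomic<Buffer*> g_active_buffer{nullptr};

// Install `fresh` only if `expected` is still the active buffer.
// Exactly one thread racing with the same expected value can win.
bool add_empty_buffer(Buffer *expected, Buffer *fresh) {
    Buffer *cur = expected;
    return g_active_buffer.compare_exchange_strong(cur, fresh);
}
```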
Use an explicit m_tail variable for insertion, rather than using m_reccnt.
This ensures that the record count doesn't change while an insert is
still in flight, and allows the m_tail variable to be decremented on
failure without the record count momentarily changing.
Reordered some code in internal_append() to avoid use-after-frees on the
mutable buffer reference used for insertion.
The buffer isn't responsible for a lot of CC anymore (just the append
operation), so this code was no longer necessary. Also removed the only
calls to some of these CC operations within the rest of the framework.
Added a new scheduler for ensuring single-threaded
operation. Additionally, added a static assert to (at least for now)
restrict the use of tagging to this single threaded scheduler.
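
A sketch of how such a compile-time restriction can be expressed (type and parameter names hypothetical, not the framework's actual templates):

```cpp
#include <cassert>
#include <type_traits>

// Hypothetical sketch: tagging is only permitted when the
// single-threaded scheduler is selected, enforced at compile time.
struct SerialScheduler {};
struct FIFOScheduler {};

template <typename SchedType, bool Tagging>
struct DynamicExtension {
    static_assert(!Tagging || std::is_same<SchedType, SerialScheduler>::value,
                  "Tagging is only supported under the single-threaded scheduler");
};
```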
The epochs must be released in the destructor prior to releasing the
buffers and structures, as otherwise there are references remaining to
these objects and their destructors will fail.
Additionally, fixed a bug in the constructor resulting in a memory leak
due to allocating an extra starting version and buffer.
When an epoch is created using the constructor Epoch(Structure, Buffer),
it will call take_reference() on both.
This was necessary to ensure that the destructor doesn't fail, as it
releases references and fails if the refcnt is 0. It also relieves the
user of the object from the burden of manually taking references in this
situation.
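
In sketch form (refcount mechanics simplified, names hypothetical): the constructor takes a reference on both objects, so the destructor's releases are always balanced without the caller doing anything:

```cpp
#include <cassert>

// Hypothetical sketch: manually refcounted objects, with the Epoch
// constructor taking a reference on both so ~Epoch cannot underflow.
struct RefCounted {
    int refcnt = 0;
    void take_reference() { refcnt++; }
    void release_reference() { assert(refcnt > 0); refcnt--; }
};

struct Structure : RefCounted {};
struct Buffer : RefCounted {};

class Epoch {
public:
    Epoch(Structure *s, Buffer *b) : m_structure(s), m_buffer(b) {
        m_structure->take_reference();   // taken here, not by the caller
        m_buffer->take_reference();
    }
    ~Epoch() {
        m_structure->release_reference();
        m_buffer->release_reference();
    }
private:
    Structure *m_structure;
    Buffer *m_buffer;
};
```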
This is mostly just for testing purposes at the moment, though I'd
imagine it may be useful for other reasons too.
Instead of busy waiting on the active job count, a condition variable is
now used to wait for all active jobs to finish before freeing an epoch's
resources.
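
The pattern might look like this sketch (hypothetical names; the real epoch bookkeeping is more involved): finishing jobs signal a condition variable, and the retiring thread sleeps until the count hits zero instead of spinning:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical sketch: replace busy-waiting on an active-job counter
// with a condition variable signalled when the count reaches zero.
struct Epoch {
    std::mutex mtx;
    std::condition_variable cv;
    int active_jobs = 0;

    void start_job() {
        std::lock_guard<std::mutex> lk(mtx);
        active_jobs++;
    }
    void end_job() {
        std::lock_guard<std::mutex> lk(mtx);
        if (--active_jobs == 0) cv.notify_all();
    }
    void await_jobs() {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [this] { return active_jobs == 0; });  // no busy wait
    }
};
```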
get_memory_usage, get_aux_memory_usage, get_record_count,
get_tombstone_count, and create_static_structure have been adjusted to
ensure that they pull from a consistent epoch, even if a change-over
occurs midway through the function.
These functions also now register with the epoch as a job, to ensure that
the epoch they are operating on isn't retired midway through the
function. Probably not a big issue for the accessors, but I could see it
being very important for create_static_structure.
I started moving over to an explicit Epoch based system, which has
necessitated a ton of changes throughout the code base. This will
ultimately allow for a much cleaner set of abstractions for managing
concurrency.
Currently there's a race condition of some type to sort out.
I'll probably throw all this out, but I want to stash it just in case.
This is a big one--probably should have split it apart, but I'm feeling
lazy this morning.
* Organized the mess of header files in include/framework by splitting
them out into their own subdirectories, and renaming a few files to
remove redundancies introduced by the directory structure.
* Introduced a new framework/ShardRequirements.h header file for simpler
shard development. This header simply contains the necessary includes
from framework/* for creating shard files. This should help to remove
structural dependencies from the framework file structure and shards,
as well as centralizing the necessary framework files to make shard
development easier.
* Created a (currently dummy) SchedulerInterface, and made the scheduler
  implementation a template parameter of the dynamic extension for easier
testing of various scheduling policies. There's still more work to be
done to fully integrate the scheduler (queries, multiple buffers), but
some more of the necessary framework code for this has been added as well.
* Adjusted the Task interface setup for the scheduler. The task structures
have been removed from ExtensionStructure and placed in their own header
file. Additionally, I started experimenting with using std::variant,
  as opposed to inheritance, to implement subtype polymorphism on the
Merge and Query tasks. The scheduler now has a general task queue that
contains both, and std::variant, std::visit, and std::get are used to
manipulate them without virtual functions.
* Removed Alex.h, as it can't build anyway. There's a branch out there
containing the Alex implementation stripped of the C++20 stuff. So
there's no need to keep it here.
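
The std::variant task setup described above can be sketched as follows (task fields and the dispatch function are hypothetical): MergeTask and QueryTask share one queue with no common base class, and std::visit dispatches on whichever alternative is stored, with no virtual functions involved:

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <type_traits>
#include <variant>

// Hypothetical sketch of variant-based task polymorphism.
struct MergeTask { int level; };
struct QueryTask { int query_id; };

using Task = std::variant<MergeTask, QueryTask>;

// std::visit selects the right branch based on the stored alternative.
std::string describe(const Task &t) {
    return std::visit([](const auto &task) -> std::string {
        using T = std::decay_t<decltype(task)>;
        if constexpr (std::is_same_v<T, MergeTask>) {
            return "merge@" + std::to_string(task.level);
        } else {
            return "query#" + std::to_string(task.query_id);
        }
    }, t);
}
```

std::get and std::holds_alternative can likewise replace downcasts when a specific task type is expected.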
Fixed a few issues that manifested during the tiering tests,
1) When a version is copied, it now contains copies of the levels,
not just pointers (the levels themselves still hold pointers to
the shards, though).
2) Ensure that tasks are scheduled with the correct timestamp, they
were originally being scheduled backwards. The get_merge_tasks()
method already returns them in the correct order, so reversing
them again put it in the wrong order.
Merges are now executed from a separate thread within the scheduler that
wakes up via condition variables when new merge tasks are scheduled. In
addition, tombstone limits are now enforced by the scheduler, with new
merges being scheduled as needed.
There are still a few tests failing; notably, the invariant that the
last run contains zero tombstones is not holding under tiering with
tombstones. Need to look into that yet.