Prep some things for threading #1837

Merged: 13 commits into master from coolreader18/prep-multithread, Apr 11, 2020

Conversation

coolreader18 (Member):

Related to #1831

coolreader18 requested a review from palaviv on April 2, 2020.
coolreader18 (Member, Author):

Looks like crates.io is acting up.

coolreader18 force-pushed the coolreader18/prep-multithread branch 5 times, most recently from d0932a6 to 2febbe1 (April 3, 2020).
palaviv (Contributor) left a review:

Very nice work. Please add some comments regarding thread safety.

coolreader18 force-pushed the coolreader18/prep-multithread branch 3 times, most recently from 5006d25 to 1be3f0b (April 9, 2020).
Diff context in objiter (the review comment below attaches here):

```rust
// fragmentary excerpt around the changed line
}
Err(ref e) if objtype::isinstance(&e, &vm.ctx.exceptions.index_error) => {
    Err(new_stop_iteration(vm))
let mut prev = self.position.load();
```
palaviv (Contributor):

I might be missing something, but I think we can go for a simpler solution here:

```rust
let step = if self.reversed { -1 } else { 1 };
let curr_pos = self.position.fetch_add(step);
if curr_pos >= 0 {
    ...
```

then use the same code as before.

This captures what we expect from an iterator: each time next is called, we receive a unique item from the iterable. Once the iterator is exhausted, all consecutive calls get StopIteration.
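For concreteness, here is a minimal self-contained sketch of the fetch_add pattern being suggested. The SeqIter type, its fields, and the use of std's AtomicIsize with explicit orderings are illustrative stand-ins, not the PR's actual types (the crate's own atomic wrappers take no Ordering argument):

```rust
use std::sync::atomic::{AtomicIsize, Ordering};

// Hypothetical stand-in for the VM's sequence iterator.
struct SeqIter {
    reversed: bool,
    position: AtomicIsize, // signed, so a reversed iterator can step below 0
    items: Vec<i32>,
}

impl SeqIter {
    fn next(&self) -> Option<i32> {
        let step = if self.reversed { -1 } else { 1 };
        // fetch_add returns the *previous* value, so each caller claims a
        // unique index even when next() races across threads.
        let curr = self.position.fetch_add(step, Ordering::SeqCst);
        if curr >= 0 && (curr as usize) < self.items.len() {
            Some(self.items[curr as usize])
        } else {
            None // exhausted; the VM would raise StopIteration here
        }
    }
}

fn main() {
    let it = SeqIter {
        reversed: true,
        position: AtomicIsize::new(1), // len - 1 for a reversed iterator
        items: vec![10, 20],
    };
    assert_eq!(it.next(), Some(20));
    assert_eq!(it.next(), Some(10));
    // Stays exhausted on further calls (until the signed counter would
    // eventually wrap, the edge case discussed later in this thread).
    assert_eq!(it.next(), None);
}
```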

coolreader18 (Member, Author), Apr 9, 2020:

Ah, that would probably work: since it's an isize, it wouldn't matter if it went below 0; it wouldn't underflow.

Diff context in the tuple iterator (the review comment below attaches here):

```rust
// fragmentary excerpt: the old get/set pair and the new atomic load
let ret = self.tuple.as_slice()[self.position.get()].clone();
self.position.set(self.position.get() + 1);
Ok(ret)
let pos = self.position.load();
```
palaviv (Contributor):

Again, let's use fetch_add here. With the current solution, two threads can get the same value.
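To illustrate the race being pointed out: a load followed by a separate store lets two threads observe the same position before either writes it back, while fetch_add claims an index in one atomic step. A small demonstration, illustrative only, using std atomics directly:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let pos = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let pos = Arc::clone(&pos);
            // Each thread claims an index with a single read-modify-write.
            thread::spawn(move || pos.fetch_add(1, Ordering::SeqCst))
        })
        .collect();
    let claimed: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    // The two threads can never receive the same index.
    assert_ne!(claimed[0], claimed[1]);
}
```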

vm/src/util.rs (outdated diff context):

```rust
// fragmentary excerpt of the atomic-update helper's signature
    done: impl Fn(usize) -> bool,
    next: impl Fn(usize) -> usize,
) -> Option<usize> {
    let mut prev = pos.load();
```
palaviv (Contributor):

Please see the comment in objiter.

coolreader18 (Member, Author), Apr 9, 2020:

I made a comment on your last set of reviews: if it's a reverse iterator and it reaches the end, fetch_sub would underflow, and if next(it) is called again the index would be usize::MAX. What would you suggest for that?
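The underflow in question is easy to demonstrate in isolation: atomic read-modify-write operations wrap rather than panic, so fetch_sub on a usize that is already 0 silently produces usize::MAX. A sketch with std's AtomicUsize:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let pos = AtomicUsize::new(0);
    // fetch_sub wraps on overflow instead of panicking, even in debug builds.
    let prev = pos.fetch_sub(1, Ordering::SeqCst);
    assert_eq!(prev, 0);
    assert_eq!(pos.load(Ordering::SeqCst), usize::MAX); // now a "valid" huge index
}
```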

coolreader18 (Member, Author):

Maybe a separate done flag?

palaviv (Contributor):

I do prefer to keep this simple, as the current implementation is a little complex. I think we can go in a few directions here:

  1. Change the position to isize.
  2. Start the position at len(iterator) instead of len(iterator) - 1 and go down until 0.
  3. Check whether we are at 0 before doing the fetch_sub.

coolreader18 (Member, Author):

Oh, right, it'd be fine if we do a load and then a fetch_sub after we check that we're still in bounds, since it's still atomic. Okay, thanks for your feedback.

palaviv (Contributor):

Looking at this again, I am not sure options 2 or 3 will be simple at all. I think changing to isize will be the simplest. You can't actually do a load and then a fetch_sub, as the combination will not be atomic.
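The race being described: between the load and the fetch_sub, another thread can change the position, so the bounds check is stale by the time the decrement lands. The usual lock-free fix is a compare_exchange retry loop, which also matches the shape of the done/next helper excerpted from vm/src/util.rs above. A sketch, with the helper name update_until being hypothetical:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Apply `next` to the position atomically, but never move past the point
// where `done` reports exhaustion. Returns the index this caller claimed.
fn update_until(
    pos: &AtomicUsize,
    done: impl Fn(usize) -> bool,
    next: impl Fn(usize) -> usize,
) -> Option<usize> {
    let mut prev = pos.load(Ordering::SeqCst);
    loop {
        if done(prev) {
            return None; // already exhausted; never decrement past the bound
        }
        match pos.compare_exchange_weak(prev, next(prev), Ordering::SeqCst, Ordering::SeqCst) {
            Ok(claimed) => return Some(claimed),
            Err(observed) => prev = observed, // another thread moved it; retry
        }
    }
}
```

compare_exchange_weak can fail spuriously, which is harmless inside a retry loop and can be cheaper than compare_exchange on some architectures.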

coolreader18 (Member, Author):

Okay, I updated it to just use the fetch_* operations. I didn't really want to do that, but I think I was just being silly, because yes, if someone does this:

```python
it = iter([1, 2])
while True:
    try:
        next(it)
    except StopIteration:
        pass
```

then it's eventually going to wrap around and start yielding elements again, but that's a very small edge case that doesn't really matter in the grand scheme of things.
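Fast-forwarding that edge case: atomic fetch_add wraps on overflow, so after roughly 2^63 exhausted next() calls a signed position would re-enter the valid range. A sketch of the wrap itself:

```rust
use std::sync::atomic::{AtomicIsize, Ordering};

fn main() {
    let pos = AtomicIsize::new(isize::MAX);
    pos.fetch_add(1, Ordering::SeqCst); // wraps instead of panicking
    assert_eq!(pos.load(Ordering::SeqCst), isize::MIN);
}
```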

coolreader18 force-pushed the coolreader18/prep-multithread branch 3 times, most recently from 4eed7f9 to 11f96b6 (April 10, 2020).
palaviv (Contributor) left a review:

There are a few tests failing, but once they are fixed we can merge this.

coolreader18 force-pushed the coolreader18/prep-multithread branch 2 times, most recently from 7fe9fab to 15e31f4 (April 11, 2020).
coolreader18 force-pushed the coolreader18/prep-multithread branch from 15e31f4 to 0d0e973 (April 11, 2020).
coolreader18 force-pushed the coolreader18/prep-multithread branch from 0d0e973 to a10936c (April 11, 2020).
coolreader18 merged commit f2fbb07 into master on Apr 11, 2020.
coolreader18 deleted the coolreader18/prep-multithread branch on April 11, 2020.