
Kill Task reports Finished and then waits for tasks to finish #65

Open
DarinJ opened this issue Aug 24, 2015 · 4 comments

Comments

@DarinJ
Contributor

DarinJ commented Aug 24, 2015

In kill tasks, lines 128-138: scheduleSuicideTimer doesn't block, so the thread creating the TASK_FINISHED message possibly (and in some cases often) finishes before the executor kills the task. This can leave processes open and cause offers to go out to new tasks before the resources are actually available, leading to crashes due to port conflicts, etc.

I think the solution is to use a blocking version of scheduleSuicideTimer() in the thread that builds the TASK_FINISHED message.

Thoughts?
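For illustration, a blocking variant might be sketched roughly as below. This is not the project's actual code; BlockingKill, killTask, and the latch-based exit signal are hypothetical stand-ins for the real kill path, just to show the idea of waiting for the task process to exit before reporting TASK_FINISHED:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: block the kill-task thread until the task has
// actually exited before reporting TASK_FINISHED, instead of firing an
// asynchronous suicide timer and returning immediately.
public class BlockingKill {

    // Returns the status to report. Blocks until `exited` is signalled
    // (the task really finished) or the timeout expires.
    public static String killTask(Thread taskProcess, CountDownLatch exited)
            throws InterruptedException {
        taskProcess.interrupt();                            // ask the task to stop
        boolean done = exited.await(10, TimeUnit.SECONDS);  // block on actual exit
        return done ? "TASK_FINISHED" : "TASK_KILL_TIMED_OUT";
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch exited = new CountDownLatch(1);
        Thread task = new Thread(() -> {
            try {
                Thread.sleep(60_000);       // simulate long-running work
            } catch (InterruptedException ignored) {
                // interrupted by killTask: fall through and exit
            }
            exited.countDown();             // signal that the task has exited
        });
        task.start();
        System.out.println(killTask(task, exited)); // prints TASK_FINISHED
    }
}
```

The key difference from the current code is that the status update is only produced after the exit signal, so TASK_FINISHED can never race ahead of the process actually dying.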

@tarnfeld
Member

This is deliberate. You'll notice how the resources are broken down by executor/task; this behaviour is in place because when the task tracker is idle, we want to kill it and free up a chunk of resources that can be used by other frameworks. The problem is that we can't kill the tracker immediately, since it might have local map output data on disk that needs to be served to running reduce tasks on other trackers.

The details of this implementation can be found in #32 and #33.

TL;DR: If you run a big job that has 1000 maps and 200 reducers, without this behaviour the 1000 map slots would sit idle after they'd finished and would not be freed to Mesos until the 200 reducers had finished. This is not ideal in a multi-tenant cluster.

It'd be interesting to hear if you have any feedback on the approach, because it was really a first stab in the dark at trying to improve the utilization of Mesos when running sparse Hadoop jobs.

Note: It's perfectly legal behaviour for all of an executor's Mesos tasks to finish while the executor stays alive.

@DarinJ
Contributor Author

DarinJ commented Oct 16, 2015

So, I've spent a lot of time lately looking at the failure modes of this code.

What we found was that since the executor continued to run but the task was reported finished, other task trackers and Storm workers would attempt to use the port and then crash with port bind exceptions.
This essentially crashed Storm for long periods; it may be legal, but it's not polite. It also had a negative impact on Hadoop, as task trackers would fail when reducers needed to be launched. Keep in mind that a large MapReduce job may take hours to complete, so lots of tasks fail (often enough reduce tasks to kill a job).

We did attempt to correct this by creating a blocking version of the suicide timer inside the thread, so it would wait for everything to complete before declaring TASK_FINISHED. However, as the initial reducers got killed (idle waiting for the shuffle phase), there were no new resources for new task trackers to spawn.

I think the approach has some merit, and I spent some time looking at it, but ultimately didn't have the time to work on it. We eventually rolled back to 0.0.9.

@tarnfeld
Member

> What we found was that since the executor continued to run but the task was reported finished, other task trackers and Storm workers would attempt to use the port and then crash with port bind exceptions.

Hah, that's a bug here that is trivial to fix: we need to move the port resource to the ExecutorInfo that's defined just above, so that the ports don't get offered to other frameworks while the executor is still alive.

That should solve your issue entirely, assuming Storm also honours the port resource correctly (which I'd expect it does).
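As a rough sketch of what that change looks like with the Mesos Java bindings, a "ports" range resource can be attached to the ExecutorInfo builder instead of the TaskInfo, so Mesos accounts the ports to the executor for as long as it is alive. The executor ID, command, and port range below are made up for illustration and are not the project's actual values:

```java
import org.apache.mesos.Protos;

// Sketch: declare the "ports" resource on the ExecutorInfo rather than
// the TaskInfo, so the ports are not re-offered to other frameworks
// while the executor is still running after its tasks finish.
public class ExecutorPorts {

    public static Protos.ExecutorInfo withPorts(long begin, long end) {
        // A ports resource is a RANGES-typed resource in Mesos.
        Protos.Resource ports = Protos.Resource.newBuilder()
            .setName("ports")
            .setType(Protos.Value.Type.RANGES)
            .setRanges(Protos.Value.Ranges.newBuilder()
                .addRange(Protos.Value.Range.newBuilder()
                    .setBegin(begin)
                    .setEnd(end)))
            .build();

        return Protos.ExecutorInfo.newBuilder()
            // Illustrative executor ID and command, not the real ones.
            .setExecutorId(Protos.ExecutorID.newBuilder()
                .setValue("executor.tasktracker"))
            .setCommand(Protos.CommandInfo.newBuilder()
                .setValue("hadoop tasktracker"))
            .addResources(ports) // held until the executor itself exits
            .build();
    }
}
```

With the resource attached here, a TASK_FINISHED update releases the task's cpu/mem but the port range stays reserved until the executor terminates.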

@DarinJ
Contributor Author

DarinJ commented Oct 16, 2015

OK, I can try that and report back.
