David M Chess
12 years ago
Okay, next silly question. :)
We have a very simple multi-threaded system where a request comes in,
starts running in a thread, and then (zero, one, or two times per request)
gets to a serialization point, where the code does:
with lock:
    do_critical_section_stuff_that_might_take_awhile()
and then continues.
Which is almost the same as:
lock.acquire()
try:
    do_critical_section_stuff_that_might_take_awhile()
finally:
    lock.release()
Now we discover that It Would Be Nice if some requests got priority over
others, as in something like:
lock.acquire(importance=request.importance)
try:
    do_critical_section_stuff_that_might_take_awhile()
finally:
    lock.release()
and when lock.release() occurs, the next thread that gets to run is one of
the most important ones currently waiting in acquire() (that's the
exciting new thing).
Other requirements are that the code to do this be as simple as possible,
and that it not mess anything else up. :)
My first thought was something like a new lock-ish class that would do
roughly:
import threading

class PriorityLock(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._waiter_map = {}  # maps waiting threads to their importance

    def acquire(self, importance=0):
        this_thread = threading.current_thread()
        self._waiter_map[this_thread] = importance  # I want in
        while True:
            self._lock.acquire()
            if max(self._waiter_map.values()) <= importance:  # we win
                del self._waiter_map[this_thread]  # not waiting anymore
                return  # return with lock acquired
            self._lock.release()  # we are not most important: release/retry

    def release(self):
        self._lock.release()
(Hope the mail doesn't garble that too badly.)
Basically the acquire() method just immediately releases and tries again
if it finds that someone more important is waiting.
I think this is semantically correct, as long as the underlying lock
implementation doesn't have starvation issues, and it's nice and simple,
but on the other hand it looks eyerollingly inefficient.
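(For comparison, here is a sketch of the same idea without the release/retry spin, using a threading.Condition instead: waiters record their importance, and release() wakes everyone so that only the most important waiter passes its predicate and claims the lock. The class name PriorityCondLock and its internals are my own invention, not an existing API -- just one possible shape, untuned.)

```python
import threading

class PriorityCondLock(object):
    """Sketch: priority lock via a Condition, no busy-waiting."""

    def __init__(self):
        self._cond = threading.Condition()
        self._held = False
        self._waiters = []  # importance values of threads currently waiting

    def acquire(self, importance=0):
        with self._cond:
            self._waiters.append(importance)
            # Sleep until the lock is free AND nobody more important waits.
            while self._held or importance < max(self._waiters):
                self._cond.wait()
            self._waiters.remove(importance)
            self._held = True

    def release(self):
        with self._cond:
            self._held = False
            # Wake all waiters; only the most important one will proceed.
            self._cond.notify_all()
```

Waking every waiter on each release is still O(waiters) work, but waiters sleep in wait() rather than spinning on acquire/release, which should matter little at one request per second.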
Seeking any thoughts on other/better ways to do this, or on whether the
inefficiency will be too eyerolling given our load: say one request per
second, with an average service time a bit under a second but a maximum
service time well over a second; most requests are importance zero, but
every so many seconds one or two arrive with higher importance.
Tx,
DC
---
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-list/attachments/20121024/cb8e59ec/attachment.html>