
Commit 48541d2

Proof-read tutorial, added git repository urls for source download
1 parent 0098024 commit 48541d2

File tree

2 files changed: +22 -10 lines


doc/source/intro.rst

+9
@@ -28,4 +28,13 @@ Getting Started
===============
It is advised to have a look at the :ref:`Usage Guide <tutorial-label>` for a brief introduction.

+
+=================
+Source Repository
+=================
+The latest source can be cloned using git from one of the following locations:
+
+* git://gitorious.org/git-python/async.git
+* git://github.com/Byron/async.git
+
.. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools

doc/source/usage.rst

+13 -10
@@ -7,18 +7,18 @@ Usage Guide
******
Design
******
-The central instance within *async* is the **Pool**. A pool keeps a set of 0 or more workers which can run asynchronoously and process **Task**\ s. Tasks are added to the pool using the ``add_task`` function. Once added, the caller receives a **ChannelReader** instance which connects to a channel. Calling ``read`` on the instance will trigger the actual computation. A ChannelReader can serve as input for another task as well, which once added to the Pool, indicates a dependency between these tasks. To obtain one item from task 2, one item needs to be produced by task 1 beforehand - the pool takes care of the dependency handling as well as scheduling.
+The central instance within *async* is the **Pool**. A pool keeps a set of 0 or more workers which can run asynchronously and process **Task**\ s. Tasks are added to the pool using the ``add_task`` function. Once added, the caller receives a **ChannelReader** instance which connects to a channel. Calling ``read`` on the instance will trigger the actual computation. A ChannelReader can serve as input for another task as well, which, once added to the Pool, indicates a dependency between these tasks. To obtain one item from task 2, one item needs to be produced by task 1 beforehand - the pool takes care of the dependency handling when scheduling items to be processed.

-Task instances allow to define the minimum amount of items to be processed on each request, and the maximum amount of items per batch. This chunking behaviour allows you to have fine-grained control about the memory requirements as well as the actuall achieved concurrency for your chain of tasks.
+Task instances allow you to define the minimum number of items to be processed on each request, and the maximum number of items per batch. This chunking behaviour allows you to have fine-grained control over the memory requirements as well as the actually achieved concurrency for your chain of tasks.

-Task chunks are the units actually being processed by the workers, the pool assures these are processed in the right order. Chunks help to bridge the gap between slowly items that take a long time to process, and those which are quickly generated. Generally, slow tasks should have small chunks, otherwise some of the workers might just end up waiting for input while slowy processing items of a big chunk take place in another worker.
+Task chunks are the units actually being processed by the workers; the pool assures these are processed in the right order. Chunks help to bridge the gap between items that take a long time to process and those which are quickly generated. Generally, slow tasks should have small chunks, otherwise some of the workers might just end up waiting for input while another worker slowly processes the items of a big chunk. If chunks are too big, and there are many workers, it may also be that some workers don't get any work. By default, the size of the chunk is entirely determined by the number of items requested by the reader.

**************
The ThreadPool
**************
A thread pool is a pool implementation which uses threads as workers. ``ChannelReader``\ s are blocking channels which are used as a means of communication between tasks which are currently being processed.

-The ``set_size`` method is essential, as it determines the amount of workers in the pool. It defaults to 0 for newly created pools, which is equal to a fully synchonized mode of operation - all processing is effectively done by the calling thread::
+The ``set_size`` method is essential, as it determines the number of workers in the pool. It defaults to 0 for newly created pools, which is equal to a fully synchronized mode of operation - all processing is effectively done by the calling thread::

    from async.pool import ThreadPool

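For illustration only - a minimal sketch of the pool setup described above, using just the names that appear in this tutorial (``ThreadPool`` and ``set_size``); everything beyond those names is an assumption rather than part of this commit::

    from async.pool import ThreadPool

    pool = ThreadPool()   # newly created pools default to a size of 0
    pool.set_size(4)      # up to 4 worker threads now process task chunks asynchronously
    # ... add tasks and read their results here ...
    pool.set_size(0)      # back to fully synchronous mode - processing happens in the calling thread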
@@ -35,9 +35,9 @@ Currently this is the only implementation, but it was designed with the ``Multip
*****
Tasks
*****
-A task encapsulates properties of a task, and how its items should be processed. The processing is usually performed per item, calling a function with one item, to receive a processed item back which will be written to into the output channel. The reader end of that channel is either held by the client of the items, or by another task which performs additional processing.
+A task encapsulates properties of a task, and how its items should be processed. The processing is usually performed per item, calling a function with one item, to receive a processed item back which will be written into the output channel. The read-end of that channel is either held by the client of the items, or by another task which performs additional processing.

-In the following example, a simple task is created which takes integers and multiplies them by itself::
+In the following example, a simple task is created which takes integers and multiplies them by themselves::

    from async.task import IteratorThreadTask

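The diff only shows fragments of that example (the import above and the ``reader.read()``/``assert`` lines in the next hunk). A possible reconstruction is sketched below; the exact ``IteratorThreadTask`` constructor arguments - an input iterator, a task name and the per-item function - are an assumption, not something this commit shows::

    from async.pool import ThreadPool
    from async.task import IteratorThreadTask

    pool = ThreadPool()
    pool.set_size(1)

    # assumed signature: (input iterator, task name, function applied to each item)
    task = IteratorThreadTask(iter(range(10)), "square", lambda i: i * i)
    reader = pool.add_task(task)   # a ChannelReader connected to the task's output channel

    items = reader.read()          # triggers the computation and blocks until the items arrive
    assert len(items) == 10 and items[0] == 0 and items[-1] == 81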
@@ -49,13 +49,16 @@ In the following example, a simple task is created which takes integers and mult
    items = reader.read()
    assert len(items) == 10 and items[0] == 0 and items[-1] == 81

+.. note::
+    Due to the GIL, it makes no sense to process anything using pure Python - it will never run concurrently with other workers, but only asynchronously.
+    Concurrency can only be achieved when using C extensions which release the GIL before long-running or blocking portions of their code.

*****************************
Channels, Readers and Writers
*****************************
-Channels a the means of communication between tasks as well as clients to finally receive the processed itmes. A channel has one or more write ends and and one or more read ends. Readers will block if there are less than the requested amount of items, but will wake up once the missing items where sent through the write end.
+Channels are the means of communication between tasks, as well as the way for clients to finally receive the processed items. A channel has one or more write-ends and one or more read-ends. Readers will block if there are fewer than the requested number of items, but will wake up once the missing items have been sent through the write-end.

-A channel's major difference over a queue is its ability to be closed, which will immediately wake up all waiting readers.
+A channel's major difference to a queue is its ability to be closed, which will immediately wake up all waiting readers.

Reader Callbacks
================
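Regarding the note added above: for workers to run truly concurrently, the per-item function should spend its time in C code that drops the GIL. A sketch of such a function, using ``zlib`` from the standard library merely as an example of a GIL-releasing C extension; the task wiring repeats the assumptions of the previous sketch::

    import zlib
    from async.task import IteratorThreadTask

    def compress(chunk):
        # zlib.compress runs in C and releases the GIL while compressing,
        # so several workers can make progress at the same time
        return zlib.compress(chunk)

    chunks = (b"some payload " * 4096 for _ in range(100))
    task = IteratorThreadTask(chunks, "compress", compress)   # constructor arguments assumed as before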
@@ -64,9 +67,9 @@ The reader returned by the Pool's ``add_task`` method is a specialized version o
**************
Chaining Tasks
**************
-When using different task types, chains between tasks can be created. These will be understood by the pool, which realizes the implicit task dependency and will schedule the tasks in the right order.
+When using different task types, chains between tasks can be created. These will be understood by the pool, which then realizes the implicit task dependency and will schedule the tasks in the right order.

-The following example creates two tasks which combine their results. As the pool only has one worker, and as the chunk size is maximized, we can be sure that the items are returned in order in this case::
+The following example creates two tasks which combine their results. As the pool only has one worker, and as the chunk size is maximized, we can be sure that the items are returned in order::

    from async.task import ChannelThreadTask
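A sketch of what such a chain might look like, combining only the names that appear in this tutorial (``IteratorThreadTask``, ``ChannelThreadTask``, ``add_task``, ``read``); the assumption that ``ChannelThreadTask`` takes the upstream reader as its first argument is mine, not the commit's::

    from async.pool import ThreadPool
    from async.task import IteratorThreadTask, ChannelThreadTask

    pool = ThreadPool()
    pool.set_size(1)   # a single worker, matching the ordering guarantee described above

    # first task: square the input integers
    t1 = IteratorThreadTask(iter(range(10)), "square", lambda i: i * i)
    r1 = pool.add_task(t1)

    # second task: consume the first task's output channel (reader assumed as first argument)
    t2 = ChannelThreadTask(r1, "add-one", lambda i: i + 1)
    r2 = pool.add_task(t2)

    items = r2.read()  # reading at the end of the chain drives both tasks
    assert len(items) == 10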
