Parallelise adaptive mutation population fitness computations when parallel_processing
is enabled
#201
Comments
I encountered the same issue. I hope this feature can be enabled!
I was using this for my master's thesis but ended up simply not using adaptive mutation instead of enabling parallel processing. I have since finished my thesis and am now busy with work, so I'm no longer interested in adding this enhancement; just want to be clear about my intentions. This library has helped me a lot, so I'm sorry that I won't be able to contribute to it. I'll leave the issue open so others can work on it if they are interested in doing so.
Appreciate it @Ririshi! This is now supported: fitness calculation during adaptive mutation works with parallel processing. This will be available in the next release of the library.
Release Date: 29 January 2024

1. Solve bugs when multi-objective optimization is used. #238
2. When the `stop_criteria` parameter is used with the `reach` keyword, multiple numeric values can be passed when solving a multi-objective problem. For example, if a problem has 3 objective functions, then `stop_criteria="reach_10_20_30"` means the GA stops if the fitness values of the 3 objectives are at least 10, 20, and 30, respectively. The number of values must match the number of objective functions. If a single value is found (e.g. `stop_criteria="reach_5"`) when solving a multi-objective problem, it is used across all the objectives. #238
3. The `delay_after_gen` parameter is now deprecated and will be removed in a future release. If a time delay after each generation is necessary, assign a callback function/method to the `on_generation` parameter to pause the evolution.
4. Parallel processing now supports calculating the fitness during adaptive mutation. #201
5. The population size can be changed during runtime by changing all the parameters that affect the size of anything used by the GA. For more information, check the [Change Population Size during Runtime](https://pygad.readthedocs.io/en/latest/pygad_more.html#change-population-size-during-runtime) section. #234
6. When a dictionary exists in the `gene_space` parameter without a step, mutation occurs by adding a random value to the gene value. The random value is generated based on the 2 parameters `random_mutation_min_val` and `random_mutation_max_val`. For more information, check the [How Mutation Works with the gene_space Parameter?](https://pygad.readthedocs.io/en/latest/pygad_more.html#how-mutation-works-with-the-gene-space-parameter) section. #229
7. Add `object` as a supported data type for int (`GA.supported_int_types`) and float (`GA.supported_float_types`). #174
8. Use the `raise` clause instead of `sys.exit(-1)` to terminate the execution. #213
9. Fix a bug when multi-objective optimization is used with batch fitness calculation (e.g. `fitness_batch_size` set to a non-zero number).
10. Fix a bug in the `pygad.py` script when finding the index of the best solution. It did not work properly with multi-objective optimization, where `self.best_solutions_fitness` has multiple columns.
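The multi-value `reach` behavior described in item 2 above can be sketched as follows. This is an illustrative reimplementation, not PyGAD's actual code: a `reach_10_20_30` string is split into one threshold per objective, a single value is broadcast, and the GA would stop once every objective meets its threshold.

```python
def parse_reach_criterion(criterion, num_objectives):
    """Parse e.g. "reach_10_20_30" into one threshold per objective.

    A single value (e.g. "reach_5") is broadcast across all objectives.
    """
    parts = criterion.split("_")
    if parts[0] != "reach":
        raise ValueError("Criterion must start with 'reach'.")
    values = [float(v) for v in parts[1:]]
    if len(values) == 1:
        return values * num_objectives
    if len(values) != num_objectives:
        raise ValueError("Number of values must match number of objectives.")
    return values

def criterion_reached(fitness, thresholds):
    """True when every objective's fitness is at least its threshold."""
    return all(f >= t for f, t in zip(fitness, thresholds))
```

For example, `parse_reach_criterion("reach_10_20_30", 3)` yields `[10.0, 20.0, 30.0]`, and `criterion_reached([12, 25, 31], [10.0, 20.0, 30.0])` is `True`.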
I was checking my CPU usage while running a GA instance with adaptive mutation and noticed it's actually off most of the time. The fitness function I use takes several seconds to compute for every solution, so the lack of parallel processing on this type of mutation really hurts the overall time per generation.
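The speedup being asked for here can be sketched with the standard library alone. The snippet below is a minimal illustration in the spirit of PyGAD's `parallel_processing` option, not the library's code; `slow_fitness` is a hypothetical stand-in for a fitness function that takes seconds per solution.

```python
from concurrent.futures import ThreadPoolExecutor

def slow_fitness(solution):
    # Stand-in for an expensive fitness function.
    return sum(x * x for x in solution)

def evaluate_population(population, num_workers=4):
    # Evaluate all solutions concurrently; results come back in the
    # same order as the population.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(slow_fitness, population))
```

For CPU-bound fitness functions, `ProcessPoolExecutor` would replace `ThreadPoolExecutor` to sidestep the GIL, at the cost of pickling each solution.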
I had a look at the code and expected a simple copy-paste with some minor updates to enable parallel processing in the mutation, but the architecture of the package (i.e. `pygad.GA` is a child class of `pygad.utils.Mutation`) makes it difficult for the `Mutation` methods to know if parallel processing is enabled without passing additional parameters. What might work is to add an `__init__` method to `Mutation` and call this method from `GA.__init__`. The parallel processing setup could be moved to `Mutation`, which is then inherited by `GA`. I don't think this is the cleanest solution in terms of OOP best practices, but it would work. I might be able to put some time into creating a PR for this sometime soon.