add fractions benchmarks #10

Closed
scoder wants to merge 9 commits into python:master from scoder:master

Conversation


@scoder scoder commented Sep 1, 2016

Add fractions benchmarks that compare the decimal and fractions modules using the same benchmarking code.
See https://bugs.python.org/issue22458

@vstinner (Member) commented Sep 1, 2016

> Add fractions benchmarks that compare the decimal and fractions modules using the same benchmarking code.

I'm not sure that I understand the purpose of the change. Do you want to compare the performance of the fractions module with the performance of the decimal module?

It's not really how the "performance" module is used. This module is a set of benchmarks to compare the performance of two Python implementations.

Maybe we can add the benchmark and use fractions by default, but not run it with decimal automatically? I mean that you would run it with decimal manually.

Comment thread on performance/benchmarks/__init__.py (Outdated)

@VersionRange()
def BM_Telco_Decimal(python, options):
    bm_path = Relative("bm_telco_fractions.py")
Member

As I wrote in the comment above, I don't think it makes sense to test the decimal module here.


Agreed, unless you want to point out the speed difference. But that was my question on bugs.python.org: What is the specific goal of this benchmark? :)

@scoder (Author) commented Sep 1, 2016

What I mean is that there is one benchmark implementation that is executed with two different backends, giving comparable results for two different stdlib libraries. I therefore think it's good to execute both. The results are not directly comparable with the "bm_telco" benchmark, because that one does some unrelated processing along the way.
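That "one benchmark body, two backends" idea can be sketched roughly as follows (the names and workload here are illustrative, not the code from the pull request):

```python
import time
from decimal import Decimal
from fractions import Fraction

def benchmark(backend_class, loops=1000):
    """Run the same workload with the given numeric backend and time it."""
    start = time.perf_counter()
    for _ in range(loops):
        total = backend_class(0)
        for i in range(1, 20):
            total += backend_class(1) / backend_class(i)
    return time.perf_counter() - start

# Because the body is shared, the two stdlib timings are directly comparable:
for cls in (Fraction, Decimal):
    print('%s: %.4fs' % (cls.__name__, benchmark(cls)))
```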

    return perf.perf_counter() - start


def run_bench(n, impl):
Member

I suggest renaming "n" to "loops" to be consistent with the other benchmarks.
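With that naming convention, the timing helper might look like this (a minimal sketch under the convention described above, not the actual patch; the workload is illustrative):

```python
import time
from fractions import Fraction

def run_bench(loops, impl):
    # "loops" (rather than "n") matches the parameter name used by the
    # other benchmarks in the suite: run the workload `loops` times and
    # return the total elapsed time.
    start = time.perf_counter()
    for _ in range(loops):
        impl(355, 113) - impl(22, 7)
    return time.perf_counter() - start

print(run_bench(1000, Fraction))
```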

@scoder (Author) commented Sep 1, 2016

I've updated the pull request.

def find_benchmark_class(impl_name):
    if impl_name == 'fractions':
        from fractions import Fraction as backend_class
    elif impl_name == 'quicktions':

Hmm, what about gmpy2 rationals? As I understand it, this test suite is about regressions within Python itself. I'm not sure this would be a good precedent.

@scoder (Author) Sep 2, 2016

I can't see a reason why the benchmarks shouldn't support non-stdlib libraries that reimplement standard Python functionality, as long as it can be achieved with a reasonable amount of adaptation. The elementtree benchmark I wrote also supports lxml (AFAICT it's been modified since); you can pass the library to import by name.

And since you asked, yes, gmpy2 would certainly qualify, also for the original telco benchmark, I guess.
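A backend selector along the lines discussed, with optional third-party libraries, could be sketched like this (the quicktions/gmpy2 wiring is an assumption based on the discussion, not code from the pull request):

```python
def find_benchmark_class(impl_name):
    """Resolve a rational-number backend by name.

    'fractions' is stdlib; 'quicktions' and 'gmpy2' are optional
    third-party reimplementations.
    """
    if impl_name == 'fractions':
        from fractions import Fraction as backend_class
    elif impl_name == 'quicktions':
        from quicktions import Fraction as backend_class
    elif impl_name == 'gmpy2':
        from gmpy2 import mpq as backend_class
    else:
        raise ValueError('unknown implementation: %r' % impl_name)
    return backend_class

print(find_benchmark_class('fractions'))
```

Selecting a missing third-party backend simply raises ImportError, so the stdlib default always works.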

@scoder (Author) commented Sep 2, 2016

I've updated the pull request to address the comments.

@vstinner (Member)

I closed the CPython issue https://bugs.python.org/issue22458 to suggest continuing the discussion on this (GitHub) tracker, but please take a look at the discussion there.

@vstinner (Member)

Sorry, but I'm not really convinced that a fractions benchmark is helpful for comparing the performance of different Python implementations.

If you still want a fractions benchmark, maybe you can send a pull request to my https://github.com/haypo/pymicrobench project, which is a much wider collection of random CPython (micro)benchmarks. That project has a different purpose.

At least, I took your update to the Telco URL :-D I also wrote a longer description of each benchmark in the documentation. Here is the new documentation for telco:
http://pyperformance.readthedocs.io/benchmarks.html#telco

I'm closing the PR.

@vstinner vstinner closed this Apr 13, 2017