posts/2010/03/hello-5058108566628405592.html (+3 -3)

@@ -1,7 +1,7 @@
 <html><body><p>Hello.</p>
 <p>I recently did some benchmarking of <a class="reference external" href="https://twistedmatrix.com">twisted</a> on top of PyPy. For the very
 impatient: <b>PyPy is up to 285% faster than CPython</b>. For more patient people,
-there is a full explanation of what I did and how I performed measurments,
+there is a full explanation of what I did and how I performed measurements,
 so they can judge themselves.</p>
 <p>The benchmarks are living in <a class="reference external" href="https://code.launchpad.net/~exarkun/+junk/twisted-benchmarks">twisted-benchmarks</a> and were mostly written
 by <a class="reference external" href="https://jcalderone.livejournal.com/">Jean Paul Calderone</a>. Even though he called them "initial exploratory
@@ -11,7 +11,7 @@
 average benchmarks found out there.</p>
 <p>The methodology was to run each benchmark for
 quite some time (about 1 minute), measuring number of requests each 5s.
-Then I looked at <a class="reference external" href="https://codespeak.net/svn/user/fijal/txt/twisted-data.txt">dump</a> of data and substracted some time it took
+Then I looked at <a class="reference external" href="https://codespeak.net/svn/user/fijal/txt/twisted-data.txt">dump</a> of data and subtracted some time it took
 for JIT-capable interpreters to warm up (up to 15s), averaging
 everything after that. Averages of requests per second are in the table below (the higher the better):</p>
 <table border="1" class="docutils">
@@ -105,4 +105,4 @@
 </ul>
 <p>Twisted version used: svn trunk, revision 28580</p>
 <p>Machine: unfortunately 32bit virtual-machine under qemu, running ubuntu karmic,
-on top of Quad core intel Q9550 with 6M cache. Courtesy of Michael Schneider.</p></body></html>
+on top of Quad core intel Q9550 with 6M cache. Courtesy of Michael Schneider.</p></body></html>

@@ -1 +1 @@
-<html><body><p>What blog post somehow fails to mention is that we do not reimplement those but reuse whatever underlaying library is there. The measurments of the actual speed is then not that interesting, because we're only interested in the overhead of call.</p></body></html>
+<html><body><p>What blog post somehow fails to mention is that we do not reimplement those but reuse whatever underlaying library is there. The measurements of the actual speed is then not that interesting, because we're only interested in the overhead of call.</p></body></html>

@@ -1 +1 @@
-<html><body><p>Memory footprint is tricky to measure. PyPy usually starts at 60M (as opposed to say 6 for cpython), but then data structures are smaller. We'll try to get some measurments going on some point. Benchmarking is hard :-)<br><br>No, PyPy3 is not as fast as PyPy2. We should really look into it at some point.</p></body></html>
+<html><body><p>Memory footprint is tricky to measure. PyPy usually starts at 60M (as opposed to say 6 for cpython), but then data structures are smaller. We'll try to get some measurements going on some point. Benchmarking is hard :-)<br><br>No, PyPy3 is not as fast as PyPy2. We should really look into it at some point.</p></body></html>
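
The methodology quoted in the 2010 post above (sample requests per second every 5 s for about a minute, discard up to 15 s of JIT warm-up, then average the rest) amounts to a small calculation. A minimal sketch in Python, assuming a plain list of per-interval readings rather than the original twisted-data.txt dump format:

WARMUP_SECONDS = 15      # warm-up window to discard ("up to 15s" in the post)
SAMPLE_INTERVAL = 5      # one requests/second reading every 5 seconds

def average_after_warmup(samples, warmup=WARMUP_SECONDS, interval=SAMPLE_INTERVAL):
    """Average requests/second, ignoring samples taken during JIT warm-up."""
    skip = warmup // interval        # number of leading samples to drop
    steady = samples[skip:]
    return sum(steady) / len(steady)

# Hypothetical readings (req/s), one per 5 s over a ~60 s run:
readings = [210, 540, 780, 805, 798, 810, 802, 799, 806, 801, 803, 800]
print(average_after_warmup(readings))

Dropping a fixed warm-up window like this keeps the CPython and PyPy numbers comparable, since only steady-state throughput enters the average.
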
posts/2017/10/how-to-make-your-code-80-times-faster-1424098117108093942.html (+2 -2)

@@ -91,7 +91,7 @@
 2x faster than the original CPython. At this point, most people would be happy
 and go tweeting how PyPy is great.<br>
 <br>
-In general, when talking of CPython vs PyPy, I am rarely satified of a 2x
+In general, when talking of CPython vs PyPy, I am rarely satisfied with a 2x
 speedup: I know that PyPy can do much better than this, especially if you
 write code which is specifically optimized for the JIT. For a real-life
 example, have a look at <a class="reference external" href="https://capnpy.readthedocs.io/en/latest/benchmarks.html">capnpy benchmarks</a>, in which the PyPy version is
@@ -123,7 +123,7 @@
 actual array. Then we have a long list of 149 simple operations which set the
 fields of the resulting array, construct an iterator, and finally do a
 <tt class="docutils literal">call_assembler</tt>: this is the actual logic to do the addition, which was
-JITtted indipendently; <tt class="docutils literal">call_assembler</tt> is one of the operations to do
+JITtted independently; <tt class="docutils literal">call_assembler</tt> is one of the operations to do
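
The hunk above refers to reading PyPy's optimized JIT trace and its call_assembler operation. For context, a minimal sketch of how such a trace is typically dumped via PyPy's PYPYLOG environment variable (the script name is hypothetical; only the standard jit-log categories are assumed):

import os
import subprocess

# Run a program under PyPy with JIT logging enabled; the optimized traces
# (lists of operations such as call_assembler) end up in jit.log.
env = dict(os.environ, PYPYLOG="jit-log-opt,jit-backend:jit.log")
subprocess.run(["pypy", "myscript.py"], env=env, check=True)

Inspecting a dump like jit.log is how one reads traces such as the 149-operation loop described in the post.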