@@ -5,202 +5,25 @@ speedtest script, but this version allows arbitrary storage types and
configurations, provides more measurements, and produces numbers that
are easier to interpret.

- Although you can ``easy_install`` this package, the best way to get
- started is to follow the directions below to set up a complete testing
- environment with sample tests.

- .. contents::
+ ===============
+ Documentation
+ ===============

- Installing ``zodbshootout`` using Buildout
- ------------------------------------------
+ `Documentation`_, including `installation instructions`_, is hosted on `readthedocs`_.

- First, be sure you have certain packages installed so you can compile
- software. Ubuntu and Debian users should do this (tested with Ubuntu
- 8.04, Ubuntu 9.10, Debian Etch, and Debian Lenny)::
+ The complete `changelog`_ is also there.

- $ sudo apt-get install build-essential python-dev
- $ sudo apt-get install ncurses-dev libevent-dev libreadline-dev zlib1g-dev
+ .. _`Documentation`: http://zodbshootout.readthedocs.io/en/latest/
+ .. _`installation instructions`: http://zodbshootout.readthedocs.io/en/latest/install.html
+ .. _`readthedocs`: http://zodbshootout.readthedocs.io/en/latest/
+ .. _`changelog`: http://zodbshootout.readthedocs.io/en/latest/changelog.html

- Download the ``zodbshootout`` tar file. Unpack it and change to its
- top level directory::

- $ tar xvzf zodbshootout-*.tar.gz
- $ cd zodbshootout-*
+ =============
+ Development
+ =============

- Set up that same directory as a partly isolated Python environment
- using ``virtualenv``::
+ zodbshootout is hosted at GitHub:

- $ virtualenv --no-site-packages .
-
- Install Buildout in that environment. (This command will create a script
- named ``bin/buildout``.)::
-
- $ bin/easy_install zc.buildout
-
- Make sure you have adequate space in your temporary directory (normally
- ``/tmp``) to compile MySQL and PostgreSQL. You may want to switch to a
- different temporary directory by setting the TMPDIR environment
- variable::
-
- $ TMPDIR=/var/tmp
- $ export TMPDIR
-
- Run Buildout. Buildout will follow the instructions specified by
- ``buildout.cfg`` to download, compile, and initialize versions of MySQL
- and PostgreSQL. It will also install several other Python packages.
- This may take a half hour or more the first time::
-
- $ bin/buildout
-
- If that command fails, first check for missing dependencies. The
- dependencies are listed above. To retry, just run ``bin/buildout``
- again.
-
- Once Buildout completes successfully, start the test environment
- using Supervisord::
-
- $ bin/supervisord
-
- Confirm that Supervisor started all processes::
-
- $ bin/supervisorctl status
-
- If all processes are running, the test environment is now ready. Run
- a sample test::
-
- $ bin/zodbshootout etc/sample.conf
-
- The ``sample.conf`` test compares the performance of RelStorage with
- MySQL and PostgreSQL, along with FileStorage behind ZEO, where the
- client and server are located on the same computer.
-
- See also ``remote-sample.conf``, which tests database speed over a
- network link. Set up ``remote-sample.conf`` by building
- ``zodbshootout`` on two networked computers, then point the client at
- the server by changing the ``%define host`` line at the top of
- ``remote-sample.conf``. The ``etc`` directory contains other sample
- database configurations.
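
For illustration only (the address below is a placeholder rather than a value shipped with the package), the ``%define host`` line that ``remote-sample.conf`` asks you to edit is an ordinary ZConfig define, which later sections of the file reference as ``$host``::

  %define host 192.168.1.10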
-
- Running ``zodbshootout``
- ------------------------
-
- The ``zodbshootout`` script accepts the name of a database
- configuration file. The configuration file contains a list of databases
- to test, in ZConfig format. The script deletes all data from each of
- the databases, then writes and reads the databases while taking
- measurements. Finally, the script produces a tabular summary of objects
- written or read per second in each configuration. ``zodbshootout`` uses
- the names of the databases defined in the configuration file as the
- table column names.
-
- **Warning**: Again, ``zodbshootout`` **deletes all data** from all
- databases specified in the configuration file. Do not configure it to
- open production databases!
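
As a rough sketch of what such a configuration file might contain (this is not the shipped ``sample.conf``; the section and option names come from the ZODB, ZEO client, and RelStorage ZConfig components, and the DSN, password, and server address are placeholders)::

  %import relstorage

  <zodb postgresql>
    <relstorage>
      <postgresql>
        dsn dbname='zodbshootout' user='shootout' password='secret'
      </postgresql>
    </relstorage>
  </zodb>

  <zodb zeo_fs>
    <zeoclient>
      server localhost:24003
    </zeoclient>
  </zodb>

The database names (``postgresql`` and ``zeo_fs`` here) are what appear as column headers in the results table.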
-
- The ``zodbshootout`` script accepts the following options; a combined
- invocation using them is sketched after the list.
-
- * ``-n`` (``--object-counts``) specifies how many persistent objects to
- write or read per transaction. The default is 1000. An interesting
- value to use is 1, causing the test to primarily measure the speed of
- opening connections and committing transactions.
-
- * ``-c`` (``--concurrency``) specifies how many tests to run in
- parallel. The default is 2. Each of the concurrent tests runs in a
- separate process to prevent contention over the CPython global
- interpreter lock. In single-host configurations, the performance
- measurements should increase with the concurrency level, up to the
- number of CPU cores in the computer. In more complex configurations,
- performance will be limited by other factors such as network latency.
-
- * ``-p`` (``--profile``) enables the Python profiler while running the
- tests and outputs a profile for each test in the specified directory.
- Note that the profiler slows database operations down considerably;
- this option is intended to help developers isolate performance
- bottlenecks.
-
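
For example, a run that writes and reads only 100 objects per transaction, uses four concurrent processes, and saves profiler output (``profiles`` here is just an example directory name) might be invoked as::

  $ bin/zodbshootout -n 100 -c 4 -p profiles etc/sample.conf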
- You should write a configuration file that models your intended
- database and network configuration. Running ``zodbshootout`` may reveal
- configuration optimizations that would significantly increase your
- application's performance.
-
- Interpreting the Results
- ------------------------
-
- The table below shows typical output of running ``zodbshootout`` with
- ``etc/sample.conf`` on a dual core, 2.1 GHz laptop::
-
- "Transaction", postgresql, mysql, mysql_mc, zeo_fs
- "Add 1000 Objects", 6529, 10027, 9248, 5212
- "Update 1000 Objects", 6754, 9012, 8064, 4393
- "Read 1000 Warm Objects", 4969, 6147, 21683, 1960
- "Read 1000 Cold Objects", 5041, 10554, 5095, 1920
- "Read 1000 Hot Objects", 38132, 37286, 37826, 37723
- "Read 1000 Steamin' Objects", 4591465, 4366792, 3339414, 4534382
-
- ``zodbshootout`` runs six kinds of tests for each database. For each
- test, ``zodbshootout`` instructs all processes to perform similar
- transactions concurrently, computes the average duration of the
- concurrent transactions, takes the fastest timing of three test runs,
- and derives how many objects per second the database is capable of
- writing or reading under the given conditions.
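
To make those units concrete, here is simple arithmetic on the reported rates, using the ``zeo_fs`` column of the table above::

  1000 objects / 5212 objects per second    ≈ 0.19 s  ("Add 1000 Objects")
  1000 objects / 4534382 objects per second ≈ 0.2 ms  ("Read 1000 Steamin' Objects")

Higher numbers are therefore better: they mean less time spent per object in that scenario.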
-
- ``zodbshootout`` runs these tests:
-
- * Add objects
-
- ``zodbshootout`` begins a transaction, adds the specified number of
- persistent objects to a ``PersistentMapping``, and commits the
- transaction. In the sample output above, MySQL was able to add
- 10027 objects per second to the database, almost twice as fast as
- ZEO, which was limited to 5212 objects per second. Also, with
- memcached support enabled, MySQL write performance took a small hit
- due to the time spent storing objects in memcached.
-
- * Update objects
-
- In the same process, without clearing any caches, ``zodbshootout``
- makes a simple change to each of the objects just added and commits
- the transaction. The sample output above shows that MySQL and ZEO
- typically take a little longer to update objects than to add new
- objects, while PostgreSQL is faster at updating objects in this case.
- The sample tests only history-preserving databases; you may see
- different results with history-free databases.
-
- * Read warm objects
-
- In a different process, without clearing any caches,
- ``zodbshootout`` reads all of the objects just added. This test
- favors databases that use either a persistent cache or a cache
- shared by multiple processes (such as memcached). In the sample
- output above, this test with MySQL and memcached runs more than ten
- times faster than ZEO without a persistent cache. (See
- ``fs-sample.conf`` for a test configuration that includes a ZEO
- persistent cache.)
-
- * Read cold objects
-
- In the same process as was used for reading warm objects,
- ``zodbshootout`` clears all ZODB caches (the pickle cache, the ZEO
- cache, and/or memcached) then reads all of the objects written by
- the update test. This test favors databases that read objects
- quickly, independently of caching. The sample output above shows
- that cold read time is currently a significant ZEO weakness.
-
- * Read hot objects
-
- In the same process as was used for reading cold objects,
- ``zodbshootout`` clears the in-memory ZODB caches (the pickle
- cache), but leaves the other caches intact, then reads all of the
- objects written by the update test. This test favors databases that
- have a process-specific cache. In the sample output above, all of
- the databases have that type of cache.
-
- * Read steamin' objects
-
- In the same process as was used for reading hot objects,
- ``zodbshootout`` once again reads all of the objects written by the
- update test. This test favors databases that take advantage of the
- ZODB pickle cache. As can be seen from the sample output above,
- accessing an object from the ZODB pickle cache is around 100
- times faster than any operation that requires network access or
- unpickling.
+ https://github.com/zodb/zodbshootout