testrepository-0.0.20/doc/DEVELOPERS.txt

Development guidelines for Test Repository
++++++++++++++++++++++++++++++++++++++++++

Coding style
~~~~~~~~~~~~

PEP-8 please. We don't enforce a particular style, but being reasonably
consistent aids readability.

Copyrights and licensing
~~~~~~~~~~~~~~~~~~~~~~~~

Code committed to Test Repository must be licensed under the BSD + Apache-2.0
licences that Test Repository offers its users. Copyright assignment is not
required. Please see COPYING for details about how to make a license grant in
a given source file. Lastly, all copyright holders need to add their name to
the master list in COPYING the first time they make a change in a given
calendar year.

Testing and QA
~~~~~~~~~~~~~~

For Test Repository, please add tests where possible. There is no requirement
for one test per change (because some things are much harder to test
automatically than the benefit of such tests would justify). Fast tests are
preferred to slow tests, and understandable tests to fast tests.

http://build.robertcollins.net/ has a job testing every commit made to trunk
of Test Repository, and there is no automated test-before-merge process at the
moment. The quid pro quo for committers is that they should check that the
automated job found their change acceptable after merging it, and either roll
it back or fix it immediately. A broken trunk is not acceptable!

See DESIGN.txt for information about the code layout, which will help you find
where to add tests (and indeed where to change things).

Running the tests
-----------------

Generally just ``make`` is all that is needed to run all the tests. However,
if dropping into pdb, it is currently more convenient to use
``python -m testtools.run testrepository.tests.test_suite``.

Diagnosing issues
-----------------

The CLI UI will drop into pdb when an error is thrown if TESTR_PDB is set in
the environment. This can be very useful for diagnosing problems.

Releasing
---------

Update NEWS and the version number in testrepository/__init__.py. Release to
PyPI. Pivot the next milestone on Launchpad to the released version, and make
a new next milestone. Make a new tag and push it to GitHub.

testrepository-0.0.20/doc/MANUAL.txt

Test Repository user manual
+++++++++++++++++++++++++++

Overview
~~~~~~~~

Test Repository is a small application for tracking test results. Any test
run that can be represented as a subunit stream can be inserted into a
repository.

The typical workflow is to have a repository into which test runs are
inserted, and then to query the repository to find out about issues that need
addressing. testr can fully automate this, but let's start with the low-level
facilities, using the sample subunit stream included with testr::

  # Note that there is a .testr.conf already:
  ls .testr.conf
  # Create a store to manage test results in.
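  # (testr init creates the .testrepository directory and only needs to be
  # run once per source tree)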
$ testr init # add a test result (shows failures) $ testr load < doc/example-failing-subunit-stream # see the tracked failing tests again $ testr failing # fix things $ testr load < doc/example-passing-subunit-stream # Now there are no tracked failing tests $ testr failing Most commands in testr have comprehensive online help, and the commands:: $ testr help [command] $ testr commands Will be useful to explore the system. Configuration ~~~~~~~~~~~~~ testr is configured via the '.testr.conf' file which needs to be in the same directory that testr is run from. testr includes online help for all the options that can be set within it:: $ testr help run Python ------ If your test suite is written in Python, the simplest - and usually correct configuration is:: [DEFAULT] test_command=python -m subunit.run discover . $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list Running tests ~~~~~~~~~~~~~ testr is taught how to run your tests by interepreting your .testr.conf file. For instance:: [DEFAULT] test_command=foo $IDOPTION test_id_option=--bar $IDFILE will cause 'testr run' to run 'foo' and process it as 'testr load' would. Likewise 'testr run --failing' will automatically create a list file listing just the failing tests, and then run 'foo --bar failing.list' and process it as 'testr load' would. failing.list will be a newline separated list of the test ids that your test runner outputs. If there are no failing tests, no test execution will happen at all. Arguments passed to 'testr run' are used to filter test ids that will be run - testr will query the runner for test ids and then apply each argument as a regex filter. Tests that match any of the given filters will be run. Arguments passed to run after a ``--`` are passed through to your test runner command line. For instance, using the above config example ``testr run quux -- bar --no-plugins`` would query for test ids, filter for those that match 'quux' and then run ``foo bar --load-list tempfile.list --no-plugins``. Shell variables are expanded in these commands on platforms that have a shell. Having setup a .testr.conf, a common workflow then becomes:: # Fix currently broken tests - repeat until there are no failures. $ testr run --failing # Do a full run to find anything that regressed during the reduction process. $ testr run # And either commit or loop around this again depending on whether errors # were found. The --failing option turns on ``--partial`` automatically (so that if the partial test run were to be interrupted, the failing tests that aren't run are not lost). Another common use case is repeating a failure that occured on a remote machine (e.g. during a jenkins test run). There are two common ways to do approach this. Firstly, if you have a subunit stream from the run you can just load it:: $ testr load < failing-stream # Run the failed tests $ testr run --failing The streams generated by test runs are in .testrepository/ named for their test id - e.g. .testrepository/0 is the first stream. If you do not have a stream (because the test runner didn't output subunit or you don't have access to the .testrepository) you may be able to use a list file. 
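For illustration, a list file is nothing more than a plain-text file naming
tests - for example (the ids below are invented)::

  projectname.tests.test_module.TestCase.test_one
  projectname.tests.test_module.TestCase.test_two
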
If you can get a file that contains one test id per line, you can run the named tests like this: $ testr run --load-list FILENAME This can also be useful when dealing with sporadically failing tests, or tests that only fail in combination with some other test - you can bisect the tests that were run to get smaller and smaller (or larger and larger) test subsets until the error is pinpointed. ``testr run --until-failure`` will run your test suite again and again and again stopping only when interrupted or a failure occurs. This is useful for repeating timing-related test failures. Listing tests ~~~~~~~~~~~~~ It is useful to be able to query the test program to see what tests will be run - this permits partitioning the tests and running multiple instances with separate partitions at once. Set 'test_list_option' in .testr.conf like so:: test_list_option=--list-tests You also need to use the $LISTOPT option to tell testr where to expand things: test_command=foo $LISTOPT $IDOPTION All the normal rules for invoking test program commands apply: extra parameters will be passed through, if a test list is being supplied test_option can be used via $IDOPTION. The output of the test command when this option is supplied should be a subunit test enumeration. For subunit v1 that is a series of test ids, in any order, ``\n`` separated on stdout. For v2 use the subunit protocol and emit one event per test with each test having status 'exists'. To test whether this is working the `testr list-tests` command can be useful. You can also use this to see what tests will be run by a given testr run command. For instance, the tests that ``testr run myfilter`` will run are shown by ``testr list-tests myfilter``. As with 'run', arguments to 'list-tests' are used to regex filter the tests of the test runner, and arguments after a '--' are passed to the test runner. Parallel testing ~~~~~~~~~~~~~~~~ If both test listing and filtering (via either IDLIST or IDFILE) are configured then testr is able to run your tests in parallel:: $ testr run --parallel This will first list the tests, partition the tests into one partition per CPU on the machine, and then invoke multiple test runners at the same time, with each test runner getting one partition. Currently the partitioning algorithm is simple round-robin for tests that testr has not seen run before, and equal-time buckets for tests that testr has seen run. NB: This uses the anydbm Python module to store the duration of each test. On some platforms (to date only OSX) there is no bulk-update API and performance may be impacted if you have many (10's of thousands) of tests. To determine how many CPUs are present in the machine, testrepository will use the multiprocessing Python module (present since 2.6). On operating systems where this is not implemented, or if you need to control the number of workers that are used, the --concurrency option will let you do so:: $ testr run --parallel --concurrency=2 A more granular interface is available too - if you insert into .testr.conf:: test_run_concurrency=foo bar Then when testr needs to determine concurrency, it will run that command and read the first line from stdout, cast that to an int, and use that as the number of partitions to create. A count of 0 is interpreted to mean one partition per test. For instance in .test.conf:: test_run_concurrency=echo 2 Would tell testr to use concurrency of 2. When running tests in parallel, testrepository tags each test with a tag for the worker that executed the test. 
The tags are of the form ``worker-%d`` and are usually used to reproduce test isolation failures, where knowing exactly what test ran on a given backend is important. The %d that is substituted in is the partition number of tests from the test run - all tests in a single run with the same worker-N ran in the same test runner instance. To find out which slave a failing test ran on just look at the 'tags' line in its test error:: ====================================================================== label: testrepository.tests.ui.TestDemo.test_methodname tags: foo worker-0 ---------------------------------------------------------------------- error text And then find tests with that tag:: $ testr last --subunit | subunit-filter -s --xfail --with-tag=worker-3 | subunit-ls > slave-3.list Grouping Tests ~~~~~~~~~~~~~~ In certain scenarios you may want to group tests of a certain type together so that they will be run by the same backend. The group_regex option in .testr.conf permits this. When set, tests are grouped by the group(0) of any regex match. Tests with no match are not grouped. For example, extending the python sample .testr.conf from the configuration section with a group regex that will group python tests cases together by class (the last . splits the class and test method):: [DEFAULT] test_command=python -m subunit.run discover . $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list group_regex=([^\.]+\.)+ Remote or isolated test environments ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A common problem with parallel test running is test runners that use global resources such as well known ports, well known database names or predictable directories on disk. One way to solve this is to setup isolated environments such as chroots, containers or even separate machines. Such environments typically require some coordination when being used to run tests, so testr provides an explicit model for working with them. The model testr has is intended to support both developers working incrementally on a change and CI systems running tests in a one-off setup, for both statically and dynamically provisioned environments. The process testr follows is: 1. The user should perform any one-time or once-per-session setup. For instance, checking out source code, creating a template container, sourcing your cloud credentials. 2. Execute testr run. 3. testr queries for concurrency. 4. testr will make a callout request to provision that many instances. The provisioning callout needs to synchronise source code and do any other per-instance setup at this stage. 5. testr will make callouts to execute tests, supplying files that should be copied into the execution environment. Note that instances may be used for more than one command execution. 6. testr will callout to dispose of the instances after the test run completes. Instances may be expensive to create and dispose of. testr does not perform any caching, but the callout pattern is intended to facilitate external caching - the provisioning callout can be used to pull environments out of a cache, and the dispose to just return it to the cache. Configuring environment support ------------------------------- There are three callouts that testrepository depends on - configured in .testr.conf as usual. 
For instance:: instance_provision=foo -c $INSTANCE_COUNT instance_dispose=bar $INSTANCE_IDS instance_execute=quux $INSTANCE_ID $FILES -- $COMMAND These should operate as follows: * instance_provision should start up the number of instances provided in the $INSTANCE_COUNT parameter. It should print out on stdout the instance ids that testr should supply to the dispose and execute commands. There should be no other output on stdout (stderr is entirely up for grabs). An exit code of non-zero will cause testr to consider the command to have failed. A provisioned instance should be able to execute the list tests command and execute tests commands that testr will run via the instance_execute callout. Its possible to lazy-provision things if you desire - testr doesn't care - but to reduce latency we suggest performing any rsync or other code synchronisation steps during the provision step, as testr may make multiple calls to one environment, and re-doing costly operations on each command execution would impair performance. * instance_dispose should take a list of instance ids and get rid of them this might mean putting them back in a pool of instances, or powering them off, or terminating them - whatever makes sense for your project. * instance_execute should accept an instance id, a list of files that need to be copied into the instance and a command to run within the instance. It needs to copy those files into the instance (it may adjust their paths if desired). If the paths are adjusted, the same paths within $COMMAND should be adjusted to match. Execution that takes place with a shared filesystem can obviously skip file copying or adjusting (and the $FILES parameter). When the instance_execute terminates, it should use the exit code that the command used within the instance. Stdout and stderr from instance_execute are presumed to be that of $COMMAND. In particular, stdout is where the subunit test output, and subunit test listing output, are expected, and putting other output into stdout can lead to surprising results - such as corrupting the subunit stream. instance_execute is invoked for both test listing and test executing callouts. Hiding tests ~~~~~~~~~~~~ Some test runners (for instance, zope.testrunner) report pseudo tests having to do with bringing up the test environment rather than being actual tests that can be executed. These are only relevant to a test run when they fail - the rest of the time they tend to be confusing. For instance, the same 'test' may show up on multiple parallel test runs, which will inflate the 'executed tests' count depending on the number of worker threads that were used. Scheduling such 'tests' to run is also a bit pointless, as they are only ever executed implicitly when preparing (or finishing with) a test environment to run other tests in. testr can ignore such tests if they are tagged, using the filter_tags configuration option. Tests tagged with any tag in that (space separated) list will only be included in counts and reports if the test failed (or errored). Automated test isolation bisection ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ As mentioned above, its possible to manually analyze test isolation issues by interrogating the repository for which tests ran on which worker, and then creating a list file with those tests, re-running only half of them, checking the error still happens, rinse and repeat. However that is tedious. testr can perform this analysis for you:: $ testr run --analyze-isolation will perform that analysis for you. 
(This requires that your test runner is (mostly) deterministic with respect
to test ordering.) The process is:

1. The last run in the repository is used as a basis for analysing against -
   tests are only cross-checked against tests run in the same worker in that
   run. This means that failures accrued from several different runs would not
   be processed with the right basis tests - you should do a full test run to
   seed your repository. This can be local, or just testr load a full run from
   your Jenkins or other remote run environment.

2. Each test that is currently listed as a failure is run in a test process
   given just that id to run.

3. Tests that fail are excluded from analysis - they are broken on their own.

4. The remaining failures are then individually analysed one by one.

5. Each failing test is run in one worker along with the first half of the
   tests that were previously run before it.

6. If the test now passes, that set of prior tests is discarded, and the other
   half of the tests is promoted to be the full list. If the test still fails,
   the other half of the tests is discarded and the current set is promoted.

7. Go back to running the failing test along with half of the current list of
   priors, unless the list has only one test in it. If the failing test still
   failed with that single prior test, we have found the isolation issue. If
   it did not, then either the isolation issue is racy, or it is a 3-or-more
   test isolation issue. Neither of those cases is automated today.

Forcing isolation
~~~~~~~~~~~~~~~~~

Sometimes it is useful to force a separate test runner instance for each test
executed. The ``--isolated`` flag will cause testr to execute a separate
runner per test::

  $ testr run --isolated

In this mode testr first determines the tests to run (either automatically
listed, using the failing set, or a user-supplied load list), and then spawns
one test runner per test it runs. To avoid cross-test-runner interactions,
concurrency is disabled in this mode. ``--analyze-isolation`` supersedes
``--isolated`` if they are both supplied.

Repositories
~~~~~~~~~~~~

A testr repository is a very simple disk structure. It contains the following
files (for a format 1 repository - the only current format):

* format: This file identifies the precise layout of the repository, in case
  future changes are needed.

* next-stream: This file contains the serial number to be used when adding
  another stream to the repository.

* failing: This file is a stream containing just the known failing tests. It
  is updated whenever a new stream is added to the repository, so that it only
  references known failing tests.

* #N - all the streams inserted in the repository are given a serial number.

* repo.conf: This file contains user configuration settings for the
  repository. ``testr repo-config`` will dump the repository configuration and
  ``testr help repo-config`` has online help for all the repository settings.

setuptools integration
~~~~~~~~~~~~~~~~~~~~~~

testrepository provides a setuptools command for ease of integration with
setuptools-based workflows:

* testr: ``python setup.py testr`` will run testr in parallel mode. Options
  that would normally be passed to testr run can be added to the testr-options
  argument. ``python setup.py testr --testr-options="--failing"`` will append
  --failing to the test run.

* testr --coverage: ``python setup.py testr --coverage`` will run testr in
  code coverage mode. This assumes the installation of the python coverage
  module.

* ``python setup.py testr --coverage --omit=ModuleThatSucks.py`` will append
  --omit=ModuleThatSucks.py to the coverage report command.

testrepository-0.0.20/doc/DESIGN.txt

Design / Architecture of Test Repository
++++++++++++++++++++++++++++++++++++++++

Values
~~~~~~

Code reuse. Focus on the project. Do one thing well.

Goals
~~~~~

Achieve a clean UI, responsive UI, small-tools approach. Simultaneously have a
small, clean code base which is easily approachable.

Data model/storage
~~~~~~~~~~~~~~~~~~

testrepository stores subunit streams as subunit streams in .testrepository
with simple additional metadata. See the MANUAL for documentation on the
repository layout. The key design elements are that streams are stored
verbatim, and that a testr-managed stream called 'failing' is used to track
the current failures.

Code layout
~~~~~~~~~~~

One conceptual thing per module, packages for anything where multiple types
are expected (e.g. testrepository.commands, testrepository.ui).

Generic driver code should not trigger lots of imports: code dependencies
should be loaded when needed. For example, argument validation uses argument
types that each command can import, so the core code doesn't need to know
about all types.

The tests for the code in testrepository.foo.bar live in
testrepository.tests.foo.test_bar. Interface tests for testrepository.foo
live in testrepository.tests.test_foo.

External integration
~~~~~~~~~~~~~~~~~~~~

Test Repository command, ui, parsing etc. objects should all be suitable for
reuse from other programs.

Threads/concurrency
~~~~~~~~~~~~~~~~~~~

In general using any public interface is fine, but keep synchronisation needs
to a minimum, for code readability.

testrepository-0.0.20/doc/index.txt

.. Test Repository documentation master file, created by
   sphinx-quickstart on Mon Dec 3 23:24:00 2012.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Test Repository's documentation!
===========================================

Contents:

.. toctree::
   :maxdepth: 2

   MANUAL
   DESIGN
   DEVELOPERS

Indices and tables
==================

* :ref:`genindex`
* :ref:`search`

testrepository-0.0.20/BSD

Copyright (c) Robert Collins and Testrepository contributors
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of Robert Collins nor the names of Testrepository
   contributors may be used to endorse or promote products derived from this
   software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY ROBERT COLLINS AND TESTREPOSITORY CONTRIBUTORS
``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. testrepository-0.0.20/.bzrignore0000664000175000017500000000013612306632354020042 0ustar robertcrobertc00000000000000dist MANIFEST test.xml build .testrepository __pycache__ testrepository.egg-info ./doc/_build testrepository-0.0.20/NEWS0000664000175000017500000004526612377217343016561 0ustar robertcrobertc00000000000000############################ testrepository release notes ############################ NEXT (In development) +++++++++++++++++++++ 0.0.20 ++++++ IMPROVEMENTS ------------ * Tests will be reliably tagged with worker-%d. The previous tagging logic had an implicit race - the tag id was looked up via a closure which gets the state of the pos variable at the position the overall loop has advanced too, not the position when the closure was created. (Robert Collins, #1316858) 0.0.19 ++++++ CHANGES ------- * Passing --subunit to all testr commands will now consistently output subunit v2. Previously it would output v1 for stored streams and v2 for live streams. (Robert Collins) * ``run`` was outputting bad MIME types - test/plain, not text/plain. (Robert Collins) * Test filtering was failing under python3 and would only apply the filters to the first test listed by discover. (Clark Boylan, #1317607) * Tests that are enumerated but not executed will no longer reset the test timing data. Enumeration was incorrectly recording a 0 timestamp for enumerated tests. This leads to poor scheduling after an interrupted test run. (Robert Collins, #1322763) * Version 0.0.18 of subunit is now a hard dependency - the v2 protocol solves key issues in concurrency and stream handling. Users that cannot use subunit v2 can run an older testrepository, or contact upstream to work through whatever issue is blocking them. (Robert Collins) * When list-tests encounters an error, a much clearer response will now be shown. (Robert Collins, #1271133) INTERNALS --------- * The ``get_subunit_stream`` methods now return subunit v2 streams rather than v1 streams, preparing the way for storage of native v2 streams in the repository. (Robert Collins) * ``UI.output_stream`` is now tested for handling of non-utf8 bytestreams. (Robert Collins) 0.0.18 ++++++ CHANGES ------- * ``run`` now accepts ``--isolated`` as a parameter, which will cause each selected test to be run independently. This can be useful to both workaround isolation bugs and detect tests that can not be run independently. (Robert Collins) INTERNALS --------- * ``capture_ids`` in test_run now returns a list of captures, permitting tests that need to test multiple runs to do so. (Robert Collins) 0.0.17 ++++++ CHANGES ------- * Restore the ability to import testrepository.repository.memory on Python 2.6. (Robert Collins) 0.0.16 ++++++ CHANGES ------- * A new testr.conf option ``group_regex`` can be used for grouping tests so that they get run in the same backend runner. (Matthew Treinish) * Fix Python 3.* support for entrypoints; the initial code was Python3 incompatible. 
(Robert Collins, Clark Boylan, #1187192) * Switch to using multiprocessing to determine CPU counts. (Chris Jones, #1092276) * The cli UI now has primitive differentiation between multiple stream types. This is not yet exposed to the end user, but is sufficient to enable the load command to take interactive input without it reading from the raw subunit stream on stdin. (Robert Collins) * The scheduler can now groups tests together permitting co-dependent tests to always be scheduled onto the same backend. Note that this does not force co-dependent tests to be executed, so partial test runs (e.g. --failing) may still fail. (Matthew Treinish, Robert Collins) * When test listing fails, testr will now report an error rather than incorrectly trying to run zero tests. A test listing failure is detected by the returncode of the test listing process. (Robert Collins, #1185231) 0.0.15 ++++++ CHANGES ------- * Expects subunit v2 if the local library has v2 support in the subunit library. This should be seamless if the system under test shares the Python libraries. If it doesn't, either arrange to use ``subunit-2to1`` or upgrade the subunit libraries for the system under test. (Robert Collins) * ``--full-results`` is now a no-op, use ``--subunit`` to get unfiltered output. (Robert Collins) 0.0.14 ++++++ IMPROVEMENTS ------------ * First cut at full Python 3 support. The 'works for me' release. (Robert Collins) 0.0.13 ++++++ IMPROVEMENTS ------------ * ``setup.py testr`` was not indicating test failures via it's return code. (Monty Taylor) 0.0.12 ++++++ IMPROVEMENTS ------------ * There is now a setuptools extension provided by ``testrespository`` making it easy to invoke testr from setup.py driven workflows. (Monty Taylor, Robert Collins) INTERNALS --------- * BSD license file incorrectly claimed copyright by subunit contributors. (Monty Taylor) * .testr.conf is now shipped in the source distribution to aid folk wanting to validate that testrepository works correctly on their machine. (Robert Collins) 0.0.11 ++++++ IMPROVEMENTS ------------ * Fix another incompatability with Mac OS X - gdbm dbm modules don't support get. (Robert Collins, #1094330) 0.0.10 ++++++ IMPROVEMENTS ------------ * It's now possible to configure ``test_run_concurrency`` in ``.testr.conf`` to have concurrency defined by a callout. (Robert Collins) * Testr supports running tests in arbitrary environments. See ``Remote or isolated test environments`` in MANUAL.txt / ``testr help run`` (Robert Collins) INTERNALS --------- * TestCommand is now a fixture. This is used to ensure cached test instances are disposed of - if using the object to run or list tests, you will need to adjust your calls. (Robert Collins) * ``TestCommand`` now offers, and ``TestListingFixture`` consumes a small protocol for obtaining and releasing test execution instances. (Robert Collins) 0.0.9 +++++ IMPROVEMENTS ------------ * On OSX the ``anydbm`` module by default returns an implementation that doesn't support update(). Workaround that by falling back to a loop. (Robert Collins, #1091500) * ``testr --analyze-improvements`` now honours test regex filters and only analyzes matching tests. (Robert Collins) 0.0.8 +++++ CHANGES ------- * As a side effect of fixing bug #597060 additional arguments passed to testr run or testr list are only passed to the underlying test runner if they are preceeded by '--'. (Robert Collins, #597060) * ``testr run --failing`` will no longer run any tests at all if there are no failing tests. 
(Robert Collins, #904400) IMPROVEMENTS ------------ * ``AbstractArgument`` now handles the case where additional arguments are present that the argument type cannot parse, but enough have been parsed for it to be valid. This allows optional arguments to be in the middle of a grammar. (Robert Collins) * ``cli.UI`` now passed '--' down to the argument layer for handling rather than implicitly stripping it. (Robert Collins) * ``DoubledashArgument`` added to allow fine grained control over the impact of -- in command lines. (Robert Collins) * New argument type ``ExistingPathArgument`` for use when commands want to take the name of a file. (Robert Collins) * ``testr`` will now show the version. (Robert Collins) * ``testr last`` when just one test run has been run works again. (Robert Collins) * ``testr help command`` now shows the docstring for commands (Robert Collins) * ``testr --help command`` or ``testr command --help`` now shows the options for the command. (Robert Collins) * ``testr run --analyze-isolation`` will search the current failing tests for spurious failures caused by interactions with other tests. (Robert Collins, #684069) * ``testr run --until-failure`` will repeat a test run until interrupted by ctrl-C or until a failure occurs. (Robert Collins, #680995) * ``Repository.get_test_run`` now raises KeyError if asked for a missing or nonexistant test run. (Robert Collins) * Sphinx has been added to tie the documentation toghether (And it is available on testrepository.readthedocs.org). (Robert Collins) * ``StringArgument`` now rejects '--' - it should be handled by the use of a ``DoubledashArgument`` where one is expected. This is a bit awkward and does not permit passing '--' down to a child process, so further work may be needed - file a bug if this affects you. (Robert Collins) * ``test failing --subunit`` now exits 0 unless there was a problem generating the stream. This is consistent with the general processing model of subunit generators. (Robert Collins) * ``testr last`` now supports ``--subunit`` and when passed will output the stored subunit stream. Note that the exit code is always 0 when this is done (unless an exception occurs reading the stream) - subunit consumers should parse the subunit to determine success/failure. (Robert Collins) * ``testr load`` now supports passing filenames to subunit streams to load. (Robert Collins, #620386) * ``testr run`` will now fail a test run if the test process exits non-zero. As a side effect of this change, if the test program closes its stdout but does not exit, ``testr run`` will hang (waiting for the test program to exit). (Robert Collins) * ``testr run --load-list FILENAME`` will limit the tests run to the test ids supplied in the list file FILENAME. This is useful for manually specifying the tests to run, or running testr subordinate to testr (e.g. on remote machines). (Robert Collins, partial fix for #597060) * ``testr run foo`` now applies foo as a regex filter against the tests found by doing a listing of the test runners tests. Likewise ``testr list-tests foo`` will apply foo as a filter against the found tests. This makes it easy to limit the tests that will be requested for running by the backend test process - simply pass one or more regex filters into testr run. (Robert Collins, #597060) * Test tags are now shown in failures. Of particular interest for folk debgging cross-test interactions will be the worker-N tags which indicate which backend test process executed a given test. 
(Robert Collins) 0.0.7 +++++ CHANGES ------- * testrepository is now distributed via distribute rather than distutils, allowing installation via pip into virtualenv environments. (Robert Collins) IMPROVEMENTS ------------ * stream loading will now synthesise datestamps before demultiplexing rather than on insertion into the repository. This fixes erroneously short times being recorded on non timestamped streams. Additionally, moving the automatic addition of timestamp material in front of the demuxer has removed the skew that caused test times to be reported as longer than the stream could indicate (by the amount of time the test runner took to start outputting subunit). This time may be something we want to track later, but the prior mechanism was inconsistent between the current run and reporting on prior runs, which lead to a very confusing UI. Now it is consistent, but totally ignores that overhead. (Robert Collins, #1048126, #980950) * ``testr run`` now accepts a --concurrency option, allowing command line override of the number of workers spawned. This allows conccurency on operating systems where autodetection is not yet implemented, or just debugging problems with concurrent test suites. (Robert Collins, #957145) * ''test_id_list_default'' would prevent ''test_list_option'' being used in previous releases. For Python environments where the context to load tests from is always needed this was not an issue (and thus not uncovered). However given a test runner which wants either a global context or a list of specific tests with no global context, there was no way to achieve that with this bug. (Robert Collins, #1027042) 0.0.6 +++++ CHANGES ------- * Now relies on subunit 0.0.8 or better and testtools 0.9.15 or better. IMPROVEMENTS ------------ * Much better handling of unicode input from subunit streams. Specifically, we won't crash when we can't figure something out. (Francesco Banconi, Martin Packman, #955006) * Parallel tests now record their worker thread number as tags in tests. This makes identifying test ordering problems much easier. (Benji York, #974622) * Python2.7 changed the interface for DBM, this has been worked around. (Robert Collins, #775214, #961103) * Subunit 0.0.7 Changes its TestResultFilter implementation, requiring the subclass in testrepository.filter to be come more robust. (Robert Collins) * A horrible thinko in the testrepository test suite came to light and has been fixed. How the tests ever ran is a mystery. (Robert Collins, #881497) * ''failing'', ''run'' and ''load'' now both take a ''--subunit'' option, which displays output in raw subunit format. If ''--full-results'' is passed too, then all subunit information is displayed. (Brad Crittenden, #949950) * Setting ''filter_tags'' in ''.testr.conf'' will cause tests tagged with those tags to be hidden unless the fail/error. This requires Subunit 0.0.8. If an older version of subunit is configured, testr will return an error. (Robert Collins, #914166) * ``testr`` will drop into PDB from its command line UI if the environment variable TESTR_PDB is set. (Robert Collins) * Test partitioning now handles a corner case where multiple tests have a reported duration of 0. Previously they could all accumulate into one partition, now they split across partitions (the length of a partition is used as a tie breaker if two partitions have the same duration). (Robert Collins, #914359) * The test 'test_outputs_results_to_stdout' was sensitive to changes in testtools and has been made more generic. 
(Robert Collins) 0.0.5 +++++ CHANGES ------- * The testrepository test suite depends on testtools 0.9.8. (Robert Collins) * If interrupted while updating the ``failing`` list, temp files are now cleaned up - previously a carefully timed interrupt would leave the temporary failing file in place. (Robert Collins, #531665) * Local implementation of MatchesException has been removed in favour of the testtools implementation. All ``self.assertRaises`` have been migrated to this new testing interface. * ``setup.py`` will read the version number from PKG-INFO when it is running without a bzr tree : this makes it easier to snapshot without doing a release. (Jonathan Lange) * Testrepository should be more compatible with win32 environments. (Martin [gz]) * ``testr init-repo`` now has a ``--force-init`` option which when provided will cause a repository to be created just-in-time. (Jonathan Lange) * ``testr load`` and ``testr run`` now have a flag ``--partial``. When set this will cause existing failures to be preserved. When not set, doing a load will reset existing failures. The ``testr run`` flag ``--failing`` implicitly sets ``--partial`` (so that an interrupted incremental test run does not incorrectly discard a failure record). The ``--partial`` flag exists so that deleted or renamed tests do not persist forever in the database. (Robert Collins) * ``testr load`` now loads all input streams in parallel. This has no impact on the CLI as yet, but permits API users to load from parallel processes. (Robert Collins) * ``testr list-tests`` is a new command that will list the tests for a project when ``.testr.conf`` has been configured with a ``test_list_option``. (Robert Collins) * ``test run --parallel`` will parallelise by running each test command once per CPU in the machine (detection for this only implemented on Linux so far). An internally parallelising command will not benefit from this, but for many projects it will be a win either from simplicity or because getting their test runner to parallise is nontrivial. The observed duration of tests is used to inform the partitioning algorithm - so each test runner should complete at approximately the same time, minimising total runtime. (Robert Collins) * ``testr run`` no longer attempts to expand unknown variables. This permits the use of environmen variables to control the test run. For instance, ${PYTHON:-python} in the test_command setting will run the command with $PYTHON or python if $PYTHON is not set. (Robert Collins, #595295) * ``testr run`` now resets the SIGPIPE handler to default - which is what most Unix processes expect. (Robert Collins) * ``testr run`` now uses a unique file name rather than hard coding failing.list - while not as clear, this permits concurrent testr invocations, or parallel testing from within testr, to execute safely. (Robert Collins) * ``testr run`` uses an in-process load rather than reinvoking testr. This should be faster on Windows and avoids the issue with running the wrong testr when PYTHONPATH but not PATH is set. (Robert Collins, #613129) * ``testr run`` will now pass -d to the ``testr load`` invocation, so that running ``testr run -d /some/path`` will work correctly. (Robert Collins, #529698) * ``testr run`` will now pass ``-q`` down to ``testr load``. (Robert Collins, #529701) * The ``testrepository.repository.Repository`` interface now tracks test times for use in estimating test run duration and parallel test partitioning. 
(Robert Collins) * There are the beginnings of a samba buildfarm backend for testrepository, though it is not hooked into the UI yet, so is only useful for API users. (Jelmer Vernooij) * Updates to next-stream are done via a temporary file to reduce the chance of an empty next-stream being written to disk. (Robert Collins, #531664) * Variable expansion no longer does python \ escape expansion. (Robert Collins, #694800) * When next-stream is damaged testr will report that it is corrupt rather than reporting an invalid literal. (Robert Collins, #531663) 0.0.4 +++++ IMPROVEMENTS ------------ * ``failing`` now supports ``--list`` to list the failing tests. (Jonathan Lange) * Repository not found errors are now clearer. (Jonathan Lange, #530010) * The summary of a test run is now formatted as foo=NN rather than foo: NN, which some folk find easier to read. * The file implementation of Repository.open now performs ~ expansion. (Jonathan Lange, #529665) * Test failures and errors are now shown as we get them in 'load', 'failing' and 'last'. (Jonathan Lange, #613152) 0.0.3 +++++ IMPROVEMENTS ------------ * ``failing`` now correctly calls ``repository.get_failing`` and will this track all seen failures rather than just the latest observed failures. * New argument type ``StringArgument`` for use when a supplied argument is just a string, rather than a typed argument. * New subcommand 'failing' added. * New subcommand ``run`` added which reads a .testr.conf file to figure out how to run tests with subunit output. It then runs them and pipes into testr load. This allows simpler integration and permits a programming interface so that tools like Tribunal/Eclipe etc can refresh tests in a testrepository. ``run`` also passes arguments and options down to the child process. ``run`` can also supply test ids on the command, for test runners that want that. * The command 'last' will no longer error on a new repository. testrepository-0.0.20/Makefile0000664000175000017500000000201512306632354017476 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. all: README.txt check .testrepository: ./testr init check: .testrepository ./testr run --parallel check-xml: python -m subunit.run testrepository.tests.test_suite | subunit2junitxml -o test.xml -f | subunit2pyunit release: ./setup.py sdist upload --sign README.txt: testrepository/commands/quickstart.py ./testr quickstart > $@ .PHONY: check check-xml release all testrepository-0.0.20/setup.py0000775000175000017500000000720612376207464017571 0ustar robertcrobertc00000000000000#!/usr/bin/env python # # Copyright (c) 2009-2013 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. 
# # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. from setuptools import setup import email import os import testrepository def get_revno(): import bzrlib.workingtree t = bzrlib.workingtree.WorkingTree.open_containing(__file__)[0] return t.branch.revno() def get_version_from_pkg_info(): """Get the version from PKG-INFO file if we can.""" pkg_info_path = os.path.join(os.path.dirname(__file__), 'PKG-INFO') try: pkg_info_file = open(pkg_info_path, 'r') except (IOError, OSError): return None try: pkg_info = email.message_from_file(pkg_info_file) except email.MessageError: return None return pkg_info.get('Version', None) def get_version(): """Return the version of testrepository that we are building.""" version = '.'.join( str(component) for component in testrepository.__version__[0:3]) phase = testrepository.__version__[3] if phase == 'final': return version pkg_info_version = get_version_from_pkg_info() if pkg_info_version: return pkg_info_version revno = get_revno() if phase == 'alpha': # No idea what the next version will be return 'next-r%s' % revno else: # Preserve the version number but give it a revno prefix return version + '-r%s' % revno description = open(os.path.join(os.path.dirname(__file__), 'README.txt'), 'rt').read() setup(name='testrepository', author='Robert Collins', author_email='robertc@robertcollins.net', url='https://launchpad.net/testrepository', description='A repository of test results.', long_description=description, keywords="subunit unittest testrunner", classifiers = [ 'Development Status :: 6 - Mature', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'License :: OSI Approved :: Apache Software License', 'Operating System :: OS Independent', 'Programming Language :: Python', 'Programming Language :: Python :: 3', 'Topic :: Software Development :: Quality Assurance', 'Topic :: Software Development :: Testing', ], scripts=['testr'], version=get_version(), packages=['testrepository', 'testrepository.arguments', 'testrepository.commands', 'testrepository.repository', 'testrepository.tests', 'testrepository.tests.arguments', 'testrepository.tests.commands', 'testrepository.tests.repository', 'testrepository.tests.ui', 'testrepository.ui', ], install_requires=[ 'fixtures', 'python-subunit >= 0.0.18', 'testtools >= 0.9.30', ], extras_require = dict( test=[ 'bzr', 'pytz', 'testresources', 'testscenarios', ] ), entry_points={ 'distutils.commands': [ 'testr = testrepository.setuptools_command:Testr', ], }, ) testrepository-0.0.20/testrepository/0000775000175000017500000000000012377221137021160 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/__init__.py0000664000175000017500000000333112377217762023302 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. 
# # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """The testrepository library. This library is divided into some broad areas. The commands package contains the main user entry points into the application. The ui package contains various user interfaces. The repository package contains the core storage code. The tests package contains tests and test specific support code. """ # same format as sys.version_info: "A tuple containing the five components of # the version number: major, minor, micro, releaselevel, and serial. All # values except releaselevel are integers; the release level is 'alpha', # 'beta', 'candidate', or 'final'. The version_info value corresponding to the # Python version 2.0 is (2, 0, 0, 'final', 0)." Additionally we use a # releaselevel of 'dev' for unreleased under-development code. # # If the releaselevel is 'alpha' then the major/minor/micro components are not # established at this point, and setup.py will use a version of next-$(revno). # If the releaselevel is 'final', then the tarball will be major.minor.micro. # Otherwise it is major.minor.micro~$(revno). __version__ = (0, 0, 20, 'final', 0) testrepository-0.0.20/testrepository/commands/0000775000175000017500000000000012377221137022761 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/commands/slowest.py0000664000175000017500000000472012306632354025035 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Show the longest running tests in the repository.""" import math from operator import itemgetter import optparse from testrepository.commands import Command class slowest(Command): """Show the slowest tests from the last test run. This command shows a table, with the longest running tests at the top. """ DEFAULT_ROWS_SHOWN = 10 TABLE_HEADER = ('Test id', 'Runtime (s)') options = [ optparse.Option( "--all", action="store_true", default=False, help="Show timing for all tests."), ] @staticmethod def format_times(times): times = list(times) precision = 3 digits_before_point = int( math.log10(times[0][1])) + 1 min_length = digits_before_point + precision + 1 def format_time(time): # Limit the number of digits after the decimal # place, and also enforce a minimum width # based on the longest duration return "%*.*f" % (min_length, precision, time) times = [(name, format_time(time)) for name, time in times] return times def run(self): repo = self.repository_factory.open(self.ui.here) try: latest_id = repo.latest_id() except KeyError: return 3 # what happens when there is no timing info? 
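        # get_test_times returns a mapping whose 'known' entry holds only the
        # ids that have recorded durations; if nothing has been timed yet it
        # is expected to be empty, so no table is output and the command
        # still exits 0.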
test_times = repo.get_test_times(repo.get_test_ids(latest_id)) known_times =list( test_times['known'].items()) known_times.sort(key=itemgetter(1), reverse=True) if len(known_times) > 0: if not self.ui.options.all: known_times = known_times[:self.DEFAULT_ROWS_SHOWN] known_times = self.format_times(known_times) rows = [self.TABLE_HEADER] + known_times self.ui.output_table(rows) return 0 testrepository-0.0.20/testrepository/commands/quickstart.py0000664000175000017500000000472712306632354025536 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Get a quickstart on testrepository.""" from testrepository.commands import Command class quickstart(Command): """Introductory documentation for testrepository.""" def run(self): # This gets written to README.txt by Makefile. help = """Test Repository +++++++++++++++ Overview ~~~~~~~~ This project provides a database of test results which can be used as part of developer workflow to ensure/check things like: * No commits without having had a test failure, test fixed cycle. * No commits without new tests being added. * What tests have failed since the last commit (to run just a subset). * What tests are currently failing and need work. Test results are inserted using subunit (and thus anything that can output subunit or be converted into a subunit stream can be accepted). A mailing list for discussion, usage and development is at https://launchpad.net/~testrepository-dev - all are welcome to join. Some folk hang out on #testrepository on irc.freenode.net. CI for the project is at http://build.robertcollins.net/job/testrepository-default/. Licensing ~~~~~~~~~ Test Repository is under BSD / Apache 2.0 licences. See the file COPYING in the source for details. Quick Start ~~~~~~~~~~~ Create a config file:: $ touch .testr.conf Create a repository:: $ testr init Load a test run into the repository:: $ testr load < testrun Query the repository:: $ testr stats $ testr last $ testr failing Delete a repository:: $ rm -rf .testrepository Documentation ~~~~~~~~~~~~~ More detailed documentation including design and implementation details, a user manual, and guidelines for development of Test Repository itself can be found at https://testrepository.readthedocs.org/en/latest, or in the source tree at doc/ (run make -C doc html). """ self.ui.output_rest(help) return 0 testrepository-0.0.20/testrepository/commands/init.py0000664000175000017500000000155412306632354024302 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. 
# # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Initialise a new repository.""" from testrepository.commands import Command class init(Command): """Create a new repository.""" def run(self): self.repository_factory.initialise(self.ui.here) testrepository-0.0.20/testrepository/commands/list_tests.py0000664000175000017500000000446612306632354025541 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """List the tests from a project and show them.""" from io import BytesIO from testtools import TestResult from testtools.compat import _b from testrepository.arguments.doubledash import DoubledashArgument from testrepository.arguments.string import StringArgument from testrepository.commands import Command from testrepository.testcommand import testrconf_help, TestCommand class list_tests(Command): __doc__ = """Lists the tests for a project. """ + testrconf_help args = [StringArgument('testfilters', 0, None), DoubledashArgument(), StringArgument('testargs', 0, None)] # Can be assigned to to inject a custom command factory. command_factory = TestCommand def run(self): testcommand = self.command_factory(self.ui, None) ids = None filters = None if self.ui.arguments['testfilters']: filters = self.ui.arguments['testfilters'] testcommand.setUp() try: cmd = testcommand.get_run_command( ids, self.ui.arguments['testargs'], test_filters=filters) cmd.setUp() try: # Ugh. # List tests if the fixture has not already needed to to filter. if filters is None: ids = cmd.list_tests() else: ids = cmd.test_ids stream = BytesIO() for id in ids: stream.write(('%s\n' % id).encode('utf8')) stream.seek(0) self.ui.output_stream(stream) return 0 finally: cmd.cleanUp() finally: testcommand.cleanUp() testrepository-0.0.20/testrepository/commands/stats.py0000664000175000017500000000214712306632354024474 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Report stats about a repository. 
Current vestigial.""" from testrepository.commands import Command class stats(Command): """Report stats about a repository. This is currently vestigial, but should grow to be the main entry point for getting summary information about the repository. """ def run(self): repo = self.repository_factory.open(self.ui.here) self.ui.output_values([('runs', repo.count())]) return 0 testrepository-0.0.20/testrepository/commands/__init__.py0000664000175000017500000001666512306632354025107 0ustar robertcrobertc00000000000000# # Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """'Commands' for testr. The code in this module contains the Command base class, the run_argv entry point to run CLI commands. Actual commands can be found in testrepository.commands.$commandname. For example, testrepository.commands.init is the init command name, and testrepository.command.show_stats would be the show-stats command (if one existed). The Command discovery logic looks for a class in the module with the same name - e.g. tesrepository.commands.init.init would be the class. That class must obey the testrepository.commands.Command protocol, but does not need to be a subclass. Plugins and extensions wanting to add commands should install them into testrepository.commands (perhaps by extending the testrepository.commands __path__ to include a directory containing their commands - no __init__ is needed in that directory.) """ from inspect import getdoc from optparse import OptionParser import os import sys import subunit from testtools.compat import _u from testrepository.repository import file def _find_command(cmd_name): orig_cmd_name = cmd_name cmd_name = cmd_name.replace('-', '_') classname = "%s" % cmd_name modname = "testrepository.commands.%s" % cmd_name try: _temp = __import__(modname, globals(), locals(), [classname]) except ImportError: raise KeyError("Could not import command module %s" % modname) result = getattr(_temp, classname, None) if result is None: raise KeyError( "Malformed command module - no command class %s found in module %s." % (classname, modname)) if getattr(result, 'name', None) is None: # Store the name for the common case of name == lookup path. result.name = orig_cmd_name return result def iter_commands(): """Iterate over all the command classes.""" paths = __path__ names = set() for path in paths: # For now, only support regular installs. TODO: support zip, eggs. for filename in os.listdir(path): base = os.path.basename(filename) if base.startswith('.'): continue name = base.split('.', 1)[0] name = name.replace('_', '-') names.add(name) names.discard('--init--') names.discard('--pycache--') names = sorted(names) for name in names: yield _find_command(name) class Command(object): """A command that can be run. Commands contain non-UI non-domain specific behaviour - they are the glue between the UI and the object model. 
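
    A minimal command looks roughly like the sketch below; the ``frobnicate``
    name is made up for illustration, and real commands usually also declare
    ``args``, ``options`` and ``input_streams`` as described next. Saved as
    ``testrepository/commands/frobnicate.py`` it would be picked up by the
    discovery logic above and exposed as ``testr frobnicate``::

        class frobnicate(Command):
            # A class docstring normally supplies the one-line summary shown
            # by ``testr commands``; it is omitted here only so the example
            # nests inside this docstring.

            def run(self):
                repo = self.repository_factory.open(self.ui.here)
                self.ui.output_values([('runs', repo.count())])
                return 0
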
Commands are parameterised with: :ivar ui: a UI object which is responsible for brokering the command arguments, input and output. There is no default ui, it must be passed to the constructor. :ivar repository_factory: a repository factory which is used to create or open repositories. The default repository factory is suitable for use in the command line tool. Commands declare that they accept/need/emit: :ivar args: A list of testrepository.arguments.AbstractArgument instances. AbstractArgument arguments are validated when set_command is called on the UI layer. :ivar input_streams: A list of stream specifications. Mandatory streams are specified by a simple name. Optional streams are specified by a simple name with a ? ending the name. Optional multiple streams are specified by a simple name with a * ending the name, and mandatory multiple streams by ending the name with +. Multiple streams are used when a command can process more than one stream. :ivar options: A list of optparse.Option options to accept. These are merged with global options by the UI layer when set_command is called. """ # class defaults to no streams. input_streams = [] # class defaults to no arguments. args = [] # class defaults to no options. options = [] def __init__(self, ui): """Create a Command object with ui ui.""" self.ui = ui self.repository_factory = file.RepositoryFactory() self._init() def execute(self): """Execute a command. This interrogates the UI to ensure that arguments and options are supplied, performs any validation for the same that the command needs and finally calls run() to perform the command. Most commands should not need to override this method, and any user wanting to run a command should call this method. This is a synchronous method, and basically just a helper. GUI's or asynchronous programs can choose to not call it and instead should call lower level API's. """ if not self.ui.set_command(self): return 1 try: result = self.run() except Exception: error_tuple = sys.exc_info() self.ui.output_error(error_tuple) return 3 if not result: return 0 return result @classmethod def get_summary(klass): docs = klass.__doc__.split('\n') return docs[0] def _init(self): """Per command init call, called into by Command.__init__.""" def run(self): """The core logic for this command to be implemented by subclasses.""" raise NotImplementedError(self.run) def run_argv(argv, stdin, stdout, stderr): """Convenience function to run a command with a CLIUI. :param argv: The argv to run the command with. :param stdin: The stdin stream for the command. :param stdout: The stdout stream for the command. :param stderr: The stderr stream for the command. :return: An integer exit code for the command. """ cmd_name = None cmd_args = argv[1:] for arg in argv[1:]: if not arg.startswith('-'): cmd_name = arg break if cmd_name is None: cmd_name = 'help' cmd_args = ['help'] cmd_args.remove(cmd_name) cmdclass = _find_command(cmd_name) from testrepository.ui import cli ui = cli.UI(cmd_args, stdin, stdout, stderr) cmd = cmdclass(ui) result = cmd.execute() if not result: return 0 return result def get_command_parser(cmd): """Return an OptionParser for cmd. This populates the parser with the commands options and sets its usage string based on the arguments and docstring the command has. Global options are not provided (as they are UI specific). :return: An OptionParser instance. 
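
    For example, the ``help`` command renders a command's help text roughly
    like this (a sketch rather than the exact implementation)::

        parser = get_command_parser(cmd)
        ui.output_rest(parser.format_help())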
""" parser = OptionParser() for option in cmd.options: parser.add_option(option) usage = _u('%%prog %(cmd)s [options] %(args)s\n\n%(help)s') % { 'args': _u(' ').join(map(lambda x:x.summary(), cmd.args)), 'cmd': getattr(cmd, 'name', cmd), 'help': getdoc(cmd), } parser.set_usage(usage) return parser testrepository-0.0.20/testrepository/commands/failing.py0000664000175000017500000000614612306632354024752 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Show the current failures in the repository.""" import optparse import testtools from testtools import ExtendedToStreamDecorator, MultiTestResult from testrepository.commands import Command from testrepository.testcommand import TestCommand class failing(Command): """Show the current failures known by the repository. Today this is the failures from the most recent run, but once partial and full runs are understood it will be all the failures from the last full run combined with any failures in subsequent partial runs, minus any passes that have occured in a run more recent than a given failure. Deleted tests will only be detected on full runs with this approach. Without --subunit, the process exit code will be non-zero if the test run was not successful. With --subunit, the process exit code is non-zero if the subunit stream could not be generated successfully. """ options = [ optparse.Option( "--subunit", action="store_true", default=False, help="Show output as a subunit stream."), optparse.Option( "--list", action="store_true", default=False, help="Show only a list of failing tests."), ] # Can be assigned to to inject a custom command factory. command_factory = TestCommand def _show_subunit(self, run): stream = run.get_subunit_stream() self.ui.output_stream(stream) return 0 def _make_result(self, repo): testcommand = self.command_factory(self.ui, repo) if self.ui.options.list: list_result = testtools.StreamSummary() return list_result, list_result else: return self.ui.make_result(repo.latest_id, testcommand) def run(self): repo = self.repository_factory.open(self.ui.here) run = repo.get_failing() if self.ui.options.subunit: return self._show_subunit(run) case = run.get_test() failed = False result, summary = self._make_result(repo) result.startTestRun() try: case.run(result) finally: result.stopTestRun() failed = not summary.wasSuccessful() if failed: result = 1 else: result = 0 if self.ui.options.list: failing_tests = [ test for test, _ in summary.errors + summary.failures] self.ui.output_tests(failing_tests) return result testrepository-0.0.20/testrepository/commands/load.py0000664000175000017500000001455212376512243024261 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. 
You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Load data into a repository.""" from functools import partial from operator import methodcaller import optparse import threading from extras import try_import v2_avail = try_import('subunit.ByteStreamToStreamResult') import subunit.test_results import testtools from testrepository.arguments.path import ExistingPathArgument from testrepository.commands import Command from testrepository.repository import RepositoryNotFound from testrepository.testcommand import TestCommand class InputToStreamResult(object): """Generate Stream events from stdin. Really a UI responsibility? """ def __init__(self, stream): self.source = stream self.stop = False def run(self, result): while True: if self.stop: return char = self.source.read(1) if not char: return if char == b'a': result.status(test_id='stdin', test_status='fail') class load(Command): """Load a subunit stream into a repository. Failing tests are shown on the console and a summary of the stream is printed at the end. Unless the stream is a partial stream, any existing failures are discarded. """ input_streams = ['subunit+', 'interactive?'] args = [ExistingPathArgument('streams', min=0, max=None)] options = [ optparse.Option("--partial", action="store_true", default=False, help="The stream being loaded was a partial run."), optparse.Option( "--force-init", action="store_true", default=False, help="Initialise the repository if it does not exist already"), optparse.Option("--subunit", action="store_true", default=False, help="Display results in subunit format."), optparse.Option("--full-results", action="store_true", default=False, help="No-op - deprecated and kept only for backwards compat."), ] # Can be assigned to to inject a custom command factory. command_factory = TestCommand def run(self): path = self.ui.here try: repo = self.repository_factory.open(path) except RepositoryNotFound: if self.ui.options.force_init: repo = self.repository_factory.initialise(path) else: raise testcommand = self.command_factory(self.ui, repo) # Not a full implementation of TestCase, but we only need to iterate # back to it. Needs to be a callable - its a head fake for # testsuite.add. # XXX: Be nice if we could declare that the argument, which is a path, # is to be an input stream - and thus push this conditional down into # the UI object. if self.ui.arguments.get('streams'): opener = partial(open, mode='rb') streams = map(opener, self.ui.arguments['streams']) else: streams = self.ui.iter_streams('subunit') mktagger = lambda pos, result:testtools.StreamTagger( [result], add=['worker-%d' % pos]) def make_tests(): for pos, stream in enumerate(streams): if v2_avail: # Calls StreamResult API. case = subunit.ByteStreamToStreamResult( stream, non_subunit_name='stdout') else: # Calls TestResult API. case = subunit.ProtocolTestCase(stream) def wrap_result(result): # Wrap in a router to mask out startTestRun/stopTestRun from the # ExtendedToStreamDecorator. result = testtools.StreamResultRouter( result, do_start_stop_run=False) # Wrap that in ExtendedToStreamDecorator to convert v1 calls to # StreamResult. 
return testtools.ExtendedToStreamDecorator(result) # Now calls StreamResult API :). case = testtools.DecorateTestCaseResult(case, wrap_result, methodcaller('startTestRun'), methodcaller('stopTestRun')) decorate = partial(mktagger, pos) case = testtools.DecorateTestCaseResult(case, decorate) yield (case, str(pos)) case = testtools.ConcurrentStreamTestSuite(make_tests) # One unmodified copy of the stream to repository storage inserter = repo.get_inserter(partial=self.ui.options.partial) # One copy of the stream to the UI layer after performing global # filters. try: previous_run = repo.get_latest_run() except KeyError: previous_run = None output_result, summary_result = self.ui.make_result( inserter.get_id, testcommand, previous_run=previous_run) result = testtools.CopyStreamResult([inserter, output_result]) runner_thread = None result.startTestRun() try: # Convert user input into a stdin event stream interactive_streams = list(self.ui.iter_streams('interactive')) if interactive_streams: case = InputToStreamResult(interactive_streams[0]) runner_thread = threading.Thread( target=case.run, args=(result,)) runner_thread.daemon = True runner_thread.start() case.run(result) finally: result.stopTestRun() if interactive_streams and runner_thread: runner_thread.stop = True runner_thread.join(10) if not summary_result.wasSuccessful(): return 1 else: return 0 testrepository-0.0.20/testrepository/commands/run.py0000664000175000017500000003703612306632354024147 0ustar robertcrobertc00000000000000# # Copyright (c) 2010-2012 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Run a projects tests and load them into testrepository.""" from io import BytesIO from math import ceil import optparse import re from extras import try_import import subunit v2_avail = try_import('subunit.ByteStreamToStreamResult') import testtools from testtools import ( TestByTestResult, ) from testtools.compat import _b from testrepository.arguments.doubledash import DoubledashArgument from testrepository.arguments.string import StringArgument from testrepository.commands import Command from testrepository.commands.load import load from testrepository.ui import decorator from testrepository.testcommand import TestCommand, testrconf_help from testrepository.testlist import parse_list LINEFEED = _b('\n')[0] class ReturnCodeToSubunit(object): """Converts a process return code to a subunit error on the process stdout. The ReturnCodeToSubunit object behaves as a readonly stream, supplying the read, readline and readlines methods. If the process exits non-zero a synthetic test is added to the output, making the error accessible to subunit stream consumers. If the process closes its stdout and then does not terminate, reading from the ReturnCodeToSubunit stream will hang. This class will be deleted at some point, allowing parsing to read from the actual fd and benefit from select for aggregating non-subunit output. 
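
    A rough usage sketch (the test command shown is hypothetical; ``testr
    run`` normally performs this wiring itself)::

        import subprocess

        proc = subprocess.Popen(['python', '-m', 'subunit.run', 'mytests'],
                                stdout=subprocess.PIPE)
        stream = ReturnCodeToSubunit(proc)
        subunit_bytes = stream.read()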
""" def __init__(self, process): """Adapt a process to a readable stream. :param process: A subprocess.Popen object that is generating subunit. """ self.proc = process self.done = False self.source = self.proc.stdout self.lastoutput = LINEFEED def _append_return_code_as_test(self): if self.done is True: return self.source = BytesIO() returncode = self.proc.wait() if returncode != 0: if self.lastoutput != LINEFEED: # Subunit V1 is line orientated, it has to start on a fresh # line. V2 needs to start on any fresh utf8 character border # - which is not guaranteed in an arbitrary stream endpoint, so # injecting a \n gives us such a guarantee. self.source.write(_b('\n')) if v2_avail: stream = subunit.StreamResultToBytes(self.source) stream.status(test_id='process-returncode', test_status='fail', file_name='traceback', mime_type='text/plain;charset=utf8', file_bytes=('returncode %d' % returncode).encode('utf8')) else: self.source.write(_b('test: process-returncode\n' 'failure: process-returncode [\n' ' returncode %d\n' ']\n' % returncode)) self.source.seek(0) self.done = True def read(self, count=-1): if count == 0: return _b('') result = self.source.read(count) if result: self.lastoutput = result[-1] return result self._append_return_code_as_test() return self.source.read(count) def readline(self): result = self.source.readline() if result: self.lastoutput = result[-1] return result self._append_return_code_as_test() return self.source.readline() def readlines(self): result = self.source.readlines() if result: self.lastoutput = result[-1][-1] self._append_return_code_as_test() result.extend(self.source.readlines()) return result class run(Command): __doc__ = """Run the tests for a project and load them into testrepository. """ + testrconf_help options = [ optparse.Option("--failing", action="store_true", default=False, help="Run only tests known to be failing."), optparse.Option("--parallel", action="store_true", default=False, help="Run tests in parallel processes."), optparse.Option("--concurrency", action="store", type="int", default=0, help="How many processes to use. The default (0) autodetects your CPU count."), optparse.Option("--load-list", default=None, help="Only run tests listed in the named file."), optparse.Option("--partial", action="store_true", default=False, help="Only some tests will be run. Implied by --failing."), optparse.Option("--subunit", action="store_true", default=False, help="Display results in subunit format."), optparse.Option("--full-results", action="store_true", default=False, help="No-op - deprecated and kept only for backwards compat."), optparse.Option("--until-failure", action="store_true", default=False, help="Repeat the run again and again until failure occurs."), optparse.Option("--analyze-isolation", action="store_true", default=False, help="Search the last test run for 2-test test isolation interactions."), optparse.Option("--isolated", action="store_true", default=False, help="Run each test id in a separate test runner."), ] args = [StringArgument('testfilters', 0, None), DoubledashArgument(), StringArgument('testargs', 0, None)] # Can be assigned to to inject a custom command factory. 
command_factory = TestCommand def _find_failing(self, repo): run = repo.get_failing() case = run.get_test() ids = [] def gather_errors(test_dict): if test_dict['status'] == 'fail': ids.append(test_dict['id']) result = testtools.StreamToDict(gather_errors) result.startTestRun() try: case.run(result) finally: result.stopTestRun() return ids def run(self): repo = self.repository_factory.open(self.ui.here) if self.ui.options.failing or self.ui.options.analyze_isolation: ids = self._find_failing(repo) else: ids = None if self.ui.options.load_list: list_ids = set() # Should perhaps be text.. currently does its own decode. with open(self.ui.options.load_list, 'rb') as list_file: list_ids = set(parse_list(list_file.read())) if ids is None: # Use the supplied list verbatim ids = list_ids else: # We have some already limited set of ids, just reduce to ids # that are both failing and listed. ids = list_ids.intersection(ids) if self.ui.arguments['testfilters']: filters = self.ui.arguments['testfilters'] else: filters = None testcommand = self.command_factory(self.ui, repo) testcommand.setUp() try: if not self.ui.options.analyze_isolation: cmd = testcommand.get_run_command(ids, self.ui.arguments['testargs'], test_filters = filters) if self.ui.options.isolated: result = 0 cmd.setUp() try: ids = cmd.list_tests() finally: cmd.cleanUp() for test_id in ids: cmd = testcommand.get_run_command([test_id], self.ui.arguments['testargs'], test_filters=filters) run_result = self._run_tests(cmd) if run_result > result: result = run_result return result else: return self._run_tests(cmd) else: # Where do we source data about the cause of conflicts. # XXX: Should instead capture the run id in with the failing test # data so that we can deal with failures split across many partial # runs. latest_run = repo.get_latest_run() # Stage one: reduce the list of failing tests (possibly further # reduced by testfilters) to eliminate fails-on-own tests. spurious_failures = set() for test_id in ids: cmd = testcommand.get_run_command([test_id], self.ui.arguments['testargs'], test_filters = filters) if not self._run_tests(cmd): # If the test was filtered, it won't have been run. if test_id in repo.get_test_ids(repo.latest_id()): spurious_failures.add(test_id) # This is arguably ugly, why not just tell the system that # a pass here isn't a real pass? [so that when we find a # test that is spuriously failing, we don't forget # that it is actually failng. # Alternatively, perhaps this is a case for data mining: # when a test starts passing, keep a journal, and allow # digging back in time to see that it was a failure, # what it failed with etc... # The current solution is to just let it get marked as # a pass temporarily. if not spurious_failures: # All done. return 0 # spurious-failure -> cause. test_conflicts = {} for spurious_failure in spurious_failures: candidate_causes = self._prior_tests( latest_run, spurious_failure) bottom = 0 top = len(candidate_causes) width = top - bottom while width: check_width = int(ceil(width / 2.0)) cmd = testcommand.get_run_command( candidate_causes[bottom:bottom + check_width] + [spurious_failure], self.ui.arguments['testargs']) self._run_tests(cmd) # check that the test we're probing still failed - still # awkward. 
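                # Scan the freshly recorded failures for the test we are
                # probing: whether it reproduced tells us which half of
                # candidate_causes holds the interacting test, so the
                # bottom/top clamping below is a plain binary search.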
found_fail = [] def find_fail(test_dict): if test_dict['id'] == spurious_failure: found_fail.append(True) checker = testtools.StreamToDict(find_fail) checker.startTestRun() try: repo.get_failing().get_test().run(checker) finally: checker.stopTestRun() if found_fail: # Our conflict is in bottom - clamp the range down. top = bottom + check_width if width == 1: # found the cause test_conflicts[ spurious_failure] = candidate_causes[bottom] width = 0 else: width = top - bottom else: # Conflict in the range we did not run: discard bottom. bottom = bottom + check_width if width == 1: # there will be no more to check, so we didn't # reproduce the failure. width = 0 else: width = top - bottom if spurious_failure not in test_conflicts: # Could not determine cause test_conflicts[spurious_failure] = 'unknown - no conflicts' if test_conflicts: table = [('failing test', 'caused by test')] for failure, causes in test_conflicts.items(): table.append((failure, causes)) self.ui.output_table(table) return 3 return 0 finally: testcommand.cleanUp() def _prior_tests(self, run, failing_id): """Calculate what tests from the test run run ran before test_id. Tests that ran in a different worker are not included in the result. """ if not getattr(self, '_worker_to_test', False): # TODO: switch to route codes? case = run.get_test() # Use None if there is no worker-N tag # If there are multiple, map them all. # (worker-N -> [testid, ...]) worker_to_test = {} # (testid -> [workerN, ...]) test_to_worker = {} def map_test(test_dict): tags = test_dict['tags'] id = test_dict['id'] workers = [] for tag in tags: if tag.startswith('worker-'): workers.append(tag) if not workers: workers = [None] for worker in workers: worker_to_test.setdefault(worker, []).append(id) test_to_worker.setdefault(id, []).extend(workers) mapper = testtools.StreamToDict(map_test) mapper.startTestRun() try: case.run(mapper) finally: mapper.stopTestRun() self._worker_to_test = worker_to_test self._test_to_worker = test_to_worker failing_workers = self._test_to_worker[failing_id] prior_tests = [] for worker in failing_workers: worker_tests = self._worker_to_test[worker] prior_tests.extend(worker_tests[:worker_tests.index(failing_id)]) return prior_tests def _run_tests(self, cmd): """Run the tests cmd was parameterised with.""" cmd.setUp() try: def run_tests(): run_procs = [('subunit', ReturnCodeToSubunit(proc)) for proc in cmd.run_tests()] options = {} if (self.ui.options.failing or self.ui.options.analyze_isolation or self.ui.options.isolated): options['partial'] = True load_ui = decorator.UI(input_streams=run_procs, options=options, decorated=self.ui) load_cmd = load(load_ui) return load_cmd.execute() if not self.ui.options.until_failure: return run_tests() else: result = run_tests() while not result: result = run_tests() return result finally: cmd.cleanUp() testrepository-0.0.20/testrepository/commands/help.py0000664000175000017500000000276012306632354024267 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # license you chose for the specific language governing permissions and # limitations under that license. """Get help on a command.""" import testrepository from testrepository.arguments import command from testrepository.commands import ( Command, get_command_parser, ) class help(Command): """Get help on a command.""" args = [command.CommandArgument('command_name', min=0)] def run(self): if not self.ui.arguments['command_name']: version = '.'.join(map(str, testrepository.__version__)) help = """testr %s -- a free test repository https://launchpad.net/testrepository/ testr commands -- list commands testr quickstart -- starter documentation testr help [command] -- help system """ % version else: cmd = self.ui.arguments['command_name'][0] parser = get_command_parser(cmd) help = parser.format_help() self.ui.output_rest(help) return 0 testrepository-0.0.20/testrepository/commands/commands.py0000664000175000017500000000201712306632354025133 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """List available commands.""" import testrepository.commands class commands(testrepository.commands.Command): """List available commands.""" def run(self): table = [('command', 'description')] for command in testrepository.commands.iter_commands(): table.append((command.name, command.get_summary())) self.ui.output_table(table) testrepository-0.0.20/testrepository/commands/last.py0000664000175000017500000000461512306632354024303 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Show the last run loaded into a repository.""" import optparse import testtools from testrepository.commands import Command from testrepository.testcommand import TestCommand class last(Command): """Show the last run loaded into a repository. Failing tests are shown on the console and a summary of the run is printed at the end. Without --subunit, the process exit code will be non-zero if the test run was not successful. With --subunit, the process exit code is non-zero if the subunit stream could not be generated successfully. """ options = [ optparse.Option( "--subunit", action="store_true", default=False, help="Show output as a subunit stream."), ] # Can be assigned to to inject a custom command factory. 
command_factory = TestCommand def run(self): repo = self.repository_factory.open(self.ui.here) testcommand = self.command_factory(self.ui, repo) latest_run = repo.get_latest_run() if self.ui.options.subunit: stream = latest_run.get_subunit_stream() self.ui.output_stream(stream) # Exits 0 if we successfully wrote the stream. return 0 case = latest_run.get_test() try: previous_run = repo.get_test_run(repo.latest_id() - 1) except KeyError: previous_run = None failed = False result, summary = self.ui.make_result( latest_run.get_id, testcommand, previous_run=previous_run) result.startTestRun() try: case.run(result) finally: result.stopTestRun() failed = not summary.wasSuccessful() if failed: return 1 else: return 0 testrepository-0.0.20/testrepository/ui/0000775000175000017500000000000012377221137021575 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/ui/model.py0000664000175000017500000001473112306632354023254 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Am object based UI for testrepository.""" from io import BytesIO import optparse import testtools from testrepository import ui class ProcessModel(object): """A subprocess.Popen test double.""" def __init__(self, ui): self.ui = ui self.returncode = 0 self.stdin = BytesIO() self.stdout = BytesIO() def communicate(self): self.ui.outputs.append(('communicate',)) return self.stdout.getvalue(), b'' def wait(self): return self.returncode class TestSuiteModel(object): def __init__(self): self._results = [] def recordResult(self, method, *args): self._results.append((method, args)) def run(self, result): for method, args in self._results: getattr(result, method)(*args) class TestResultModel(ui.BaseUITestResult): def __init__(self, ui, get_id, previous_run=None): super(TestResultModel, self).__init__(ui, get_id, previous_run) self._suite = TestSuiteModel() def status(self, test_id=None, test_status=None, test_tags=None, runnable=True, file_name=None, file_bytes=None, eof=False, mime_type=None, route_code=None, timestamp=None): super(TestResultModel, self).status(test_id=test_id, test_status=test_status, test_tags=test_tags, runnable=runnable, file_name=file_name, file_bytes=file_bytes, eof=eof, mime_type=mime_type, route_code=route_code, timestamp=timestamp) self._suite.recordResult('status', test_id, test_status) def stopTestRun(self): if self.ui.options.quiet: return self.ui.outputs.append(('results', self._suite)) return super(TestResultModel, self).stopTestRun() class UI(ui.AbstractUI): """A object based UI. This is useful for reusing the Command objects that provide a simplified interaction model with the domain logic from python. It is used for testing testrepository commands. """ def __init__(self, input_streams=None, options=(), args=(), here='memory:', proc_outputs=(), proc_results=()): """Create a model UI. 
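
        A typical in-process use, shown as an illustrative sketch rather than
        a copy of the real test suite, pairs this UI with a command and the
        memory repository::

            from testrepository.commands.init import init
            from testrepository.repository import memory

            ui = UI(here='memory:')
            cmd = init(ui)
            cmd.repository_factory = memory.RepositoryFactory()
            cmd.execute()    # outputs are recorded in ui.outputs
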
:param input_streams: A list of stream name, (file or bytes) tuples to be used as the available input streams for this ui. :param options: Options to explicitly set values for. :param args: The argument values to give the UI. :param here: Set the here value for the UI. :param proc_outputs: byte strings to be returned in the stdout from created processes. :param proc_results: numeric exit code to be set in each created process. """ self.input_streams = {} if input_streams: for stream_type, stream_value in input_streams: if isinstance(stream_value, str) and str is not bytes: raise Exception('bad stream_value') self.input_streams.setdefault(stream_type, []).append( stream_value) self.here = here self.unparsed_opts = options self.outputs = [] # Could take parsed args, but for now this is easier. self.unparsed_args = args self.proc_outputs = list(proc_outputs) self.require_proc_stdout = False self.proc_results = list(proc_results) def _check_cmd(self): options = list(self.unparsed_opts) self.options = optparse.Values() seen_options = set() for option, value in options: setattr(self.options, option, value) seen_options.add(option) if not 'quiet' in seen_options: setattr(self.options, 'quiet', False) for option in self.cmd.options: if not option.dest in seen_options: setattr(self.options, option.dest, option.default) args = list(self.unparsed_args) parsed_args = {} failed = False for arg in self.cmd.args: try: parsed_args[arg.name] = arg.parse(args) except ValueError: failed = True break self.arguments = parsed_args return args == [] and not failed def _iter_streams(self, stream_type): streams = self.input_streams.pop(stream_type, []) for stream_value in streams: if getattr(stream_value, 'read', None): yield stream_value else: yield BytesIO(stream_value) def make_result(self, get_id, test_command, previous_run=None): result = TestResultModel(self, get_id, previous_run) return result, result._summary def output_error(self, error_tuple): self.outputs.append(('error', error_tuple)) def output_rest(self, rest_string): self.outputs.append(('rest', rest_string)) def output_stream(self, stream): self.outputs.append(('stream', stream.read())) def output_table(self, table): self.outputs.append(('table', table)) def output_tests(self, tests): """Output a list of tests.""" self.outputs.append(('tests', tests)) def output_values(self, values): self.outputs.append(('values', values)) def output_summary(self, successful, tests, tests_delta, time, time_delta, values): self.outputs.append( ('summary', successful, tests, tests_delta, time, time_delta, values)) def subprocess_Popen(self, *args, **kwargs): # Really not an output - outputs should be renamed to events. self.outputs.append(('popen', args, kwargs)) result = ProcessModel(self) if self.proc_outputs: result.stdout = BytesIO(self.proc_outputs.pop(0)) elif self.require_proc_stdout: raise Exception("No process output available") if self.proc_results: result.returncode = self.proc_results.pop(0) return result testrepository-0.0.20/testrepository/ui/__init__.py0000664000175000017500000002375612306632354023722 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. 
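
# A rough sketch of how a command and a UI are wired together in-process: the
# run_argv entry point builds a cli.UI around the given streams and hands it to
# the selected command.  Choosing the ``stats`` command here is only an example;
# any command name is dispatched the same way.
if __name__ == '__main__':
    import sys

    from testrepository.commands import run_argv

    sys.exit(run_argv(['testr', 'stats'], sys.stdin, sys.stdout, sys.stderr))
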
# # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """In testrepository a UI is an interface to a 'user' (which may be a machine). The testrepository.ui.cli module contains a command line interface, and the module testrepository.ui.model contains a purely object based implementation which is used for testing testrepository. See AbstractUI for details on what UI classes should do and are responsible for. """ from testtools import StreamResult from testrepository.results import SummarizingResult from testrepository.utils import timedelta_to_seconds class AbstractUI(object): """The base class for UI objects, this providers helpers and the interface. A UI object is responsible for brokering interactions with a particular user environment (e.g. the command line). These interactions can take several forms: - reading bulk data - gathering data - emitting progress or activity data - hints as to the programs execution. - providing notices about actions taken - showing the result of some query (including errors) All of these things are done in a structured fashion. See the methods iter_streams, query_user, progress, notice and result. UI objects are generally expected to be used once, with a fresh one created for each command executed. :ivar cmd: The command that is running using this UI object. :ivar here: The location that command is being run in. This may be a local path or a URL. This is only guaranteed to be set after set_command is called, as some UI's need to do option processing to determine its value. :ivar options: The parsed options for this ui, containing both global and command specific options. :ivar arguments: The parsed arguments for this ui. Set Command.args to define the accepted arguments for a command. """ def _check_cmd(self): """Check that cmd is valid. This method is meant to be overridden. :return: True if the cmd is valid - if options and args match up with the ones supplied to the UI, and so on. """ def iter_streams(self, stream_type): """Iterate over all the streams of type stream_type. Implementors of UI should implement _iter_streams which is called after argument checking is performed. :param stream_type: A simple string such as 'subunit' which matches one of the stream types defined for the cmd object this UI is being used with. :return: A generator of stream objects. stream objects have a read method and a close method which behave as for file objects. """ for stream_spec in self.cmd.input_streams: if '*' in stream_spec or '?' in stream_spec or '+' in stream_spec: found = stream_type == stream_spec[:-1] else: found = stream_type == stream_spec if found: return self._iter_streams(stream_type) raise KeyError(stream_type) def _iter_streams(self, stream_type): """Helper for iter_streams which subclasses should implement.""" raise NotImplementedError(self._iter_streams) def make_result(self, get_id, test_command, previous_run=None): """Make a `StreamResult` that can be used to display test results. This will also support the `TestResult` API until at least testrepository 0.0.16 to permit clients to migrate gracefully. :param get_id: A nullary callable that returns the id of the test run when called. :param test_command: A TestCommand object used to configure user transforms. 
        :param previous_run: An optional previous test run.
        :return: A two-tuple with the stream to forward events to, and a
            StreamSummary for querying success after the stream is finished.
        """
        raise NotImplementedError(self.make_result)

    def output_error(self, error_tuple):
        """Show an error to the user.

        This is typically used only by Command.execute when run raises an
        exception.

        :param error_tuple: An error tuple obtained from sys.exc_info().
        """
        raise NotImplementedError(self.output_error)

    def output_rest(self, rest_string):
        """Show rest_string - a ReST document.

        This is typically used as the entire output for command help or
        documentation.

        :param rest_string: A ReST source to display.
        """
        raise NotImplementedError(self.output_rest)

    def output_stream(self, stream):
        """Show a byte stream to the user.

        This is not currently typed, but in future a MIME type may be
        permitted.

        :param stream: A file like object that can be read from. The UI will
            not close the file.
        """
        raise NotImplementedError(self.output_stream)

    def output_table(self, table):
        """Show a table to the user.

        :param table: an iterable of rows. The first row is used for column
            headings, and every row needs the same number of cells.
            e.g. output_table([('name', 'age'), ('robert', 1234)])
        """
        raise NotImplementedError(self.output_table)

    def output_values(self, values):
        """Show values to the user.

        :param values: An iterable of (label, value).
        """
        raise NotImplementedError(self.output_values)

    def output_summary(self, successful, tests, tests_delta, time, time_delta,
                       values):
        """Output a summary of a test run.

        An example summary might look like:

          Run 565 (+2) tests in 2.968s
          FAILED (errors=13 (-2), successes=31 (+2))

        :param successful: A boolean indicating whether the result was
            successful.
        :param values: List of tuples in the form ``(name, value, delta)``.
            e.g. ``('failures', 5, -1)``. A ``delta`` of None means that
            either the delta is unknown or inappropriate.
        """
        raise NotImplementedError(self.output_summary)

    def set_command(self, cmd):
        """Inform the UI what command it is running.

        This is used to gather command line arguments, or prepare dialogs and
        otherwise ensure that the information the command has declared it
        needs will be available. The default implementation simply sets
        self.cmd to cmd.

        :param cmd: A testrepository.commands.Command.
        """
        self.cmd = cmd
        return self._check_cmd()

    def subprocess_Popen(self, *args, **kwargs):
        """Call an external process from the UI's context.

        The behaviour of this call should match the Popen process on any
        given platform, except that the UI can take care of any wrapping or
        manipulation needed to fit into its environment.
        """
        # This might not be the right place.
        raise NotImplementedError(self.subprocess_Popen)


class BaseUITestResult(StreamResult):
    """An abstract test result used with the UI.

    AbstractUI.make_result probably wants to return an object like this.
    """

    def __init__(self, ui, get_id, previous_run=None):
        """Construct an `AbstractUITestResult`.

        :param ui: The UI this result is associated with.
        :param get_id: A nullary callable that returns the id of the test run.
""" super(BaseUITestResult, self).__init__() self.ui = ui self.get_id = get_id self._previous_run = previous_run self._summary = SummarizingResult() def _get_previous_summary(self): if self._previous_run is None: return None previous_summary = SummarizingResult() previous_summary.startTestRun() test = self._previous_run.get_test() test.run(previous_summary) previous_summary.stopTestRun() return previous_summary def _output_summary(self, run_id): """Output a test run. :param run_id: The run id. """ if self.ui.options.quiet: return time = self._summary.get_time_taken() time_delta = None num_tests_run_delta = None num_failures_delta = None values = [('id', run_id, None)] failures = self._summary.get_num_failures() previous_summary = self._get_previous_summary() if failures: if previous_summary: num_failures_delta = failures - previous_summary.get_num_failures() values.append(('failures', failures, num_failures_delta)) if previous_summary: num_tests_run_delta = self._summary.testsRun - previous_summary.testsRun if time: previous_time_taken = previous_summary.get_time_taken() if previous_time_taken: time_delta = time - previous_time_taken skips = len(self._summary.skipped) if skips: values.append(('skips', skips, None)) self.ui.output_summary( not bool(failures), self._summary.testsRun, num_tests_run_delta, time, time_delta, values) def startTestRun(self): super(BaseUITestResult, self).startTestRun() self._summary.startTestRun() def stopTestRun(self): super(BaseUITestResult, self).stopTestRun() run_id = self.get_id() self._summary.stopTestRun() self._output_summary(run_id) def status(self, *args, **kwargs): self._summary.status(*args, **kwargs) testrepository-0.0.20/testrepository/ui/cli.py0000664000175000017500000003023212376512301022711 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """A command line UI for testrepository.""" import io import os import signal import subunit import sys from extras import try_import v2_avail = try_import('subunit.ByteStreamToStreamResult') import testtools from testtools import ExtendedToStreamDecorator, StreamToExtendedDecorator from testtools.compat import unicode_output_stream, _u from testrepository import ui from testrepository.commands import get_command_parser class CLITestResult(ui.BaseUITestResult): """A TestResult for the CLI.""" def __init__(self, ui, get_id, stream, previous_run=None, filter_tags=None): """Construct a CLITestResult writing to stream. :param filter_tags: Tags that should be used to filter tests out. When a tag in this set is present on a test outcome, the test is not counted towards the test run count. If the test errors, then it is still counted and the error is still shown. 
""" super(CLITestResult, self).__init__(ui, get_id, previous_run) self.stream = unicode_output_stream(stream) self.sep1 = _u('=' * 70 + '\n') self.sep2 = _u('-' * 70 + '\n') self.filter_tags = filter_tags or frozenset() self.filterable_states = set(['success', 'uxsuccess', 'xfail', 'skip']) def _format_error(self, label, test, error_text, test_tags=None): test_tags = test_tags or () tags = _u(' ').join(test_tags) if tags: tags = _u('tags: %s\n') % tags return _u('').join([ self.sep1, _u('%s: %s\n') % (label, test.id()), tags, self.sep2, error_text, ]) def status(self, test_id=None, test_status=None, test_tags=None, runnable=True, file_name=None, file_bytes=None, eof=False, mime_type=None, route_code=None, timestamp=None): super(CLITestResult, self).status(test_id=test_id, test_status=test_status, test_tags=test_tags, runnable=runnable, file_name=file_name, file_bytes=file_bytes, eof=eof, mime_type=mime_type, route_code=route_code, timestamp=timestamp) if test_status == 'fail': self.stream.write( self._format_error(_u('FAIL'), *(self._summary.errors[-1]), test_tags=test_tags)) if test_status not in self.filterable_states: return if test_tags and test_tags.intersection(self.filter_tags): self._summary.testsRun -= 1 class UI(ui.AbstractUI): """A command line user interface.""" def __init__(self, argv, stdin, stdout, stderr): """Create a command line UI. :param argv: Arguments from the process invocation. :param stdin: The stream for stdin. :param stdout: The stream for stdout. :param stderr: The stream for stderr. """ self._argv = argv self._stdin = stdin self._stdout = stdout self._stderr = stderr self._binary_stdout = None def _iter_streams(self, stream_type): # Only the first stream declared in a command can be accepted at the # moment - as there is only one stdin and alternate streams are not yet # configurable in the CLI. first_stream_type = self.cmd.input_streams[0] if (stream_type != first_stream_type and stream_type != first_stream_type[:-1]): return yield subunit.make_stream_binary(self._stdin) def make_result(self, get_id, test_command, previous_run=None): if getattr(self.options, 'subunit', False): if v2_avail: serializer = subunit.StreamResultToBytes(self._stdout) else: serializer = StreamToExtendedDecorator( subunit.TestProtocolClient(self._stdout)) # By pass user transforms - just forward it all, result = serializer # and interpret everything as success. summary = testtools.StreamSummary() summary.startTestRun() summary.stopTestRun() return result, summary else: # Apply user defined transforms. filter_tags = test_command.get_filter_tags() output = CLITestResult(self, get_id, self._stdout, previous_run, filter_tags=filter_tags) summary = output._summary return output, summary def output_error(self, error_tuple): if 'TESTR_PDB' in os.environ: import traceback self._stderr.write(_u('').join(traceback.format_tb(error_tuple[2]))) self._stderr.write(_u('\n')) # This is terrible: it is because on Python2.x pdb writes bytes to # its pipes, and the test suite uses io.StringIO that refuse bytes. import pdb; if sys.version_info[0]==2: if isinstance(self._stdout, io.StringIO): write = self._stdout.write def _write(text): return write(text.decode('utf8')) self._stdout.write = _write p = pdb.Pdb(stdin=self._stdin, stdout=self._stdout) p.reset() p.interaction(None, error_tuple[2]) error_type = str(error_tuple[1]) # XX: Python2. 
if type(error_type) is bytes: error_type = error_type.decode('utf8') self._stderr.write(error_type + _u('\n')) def output_rest(self, rest_string): self._stdout.write(rest_string) if not rest_string.endswith('\n'): self._stdout.write(_u('\n')) def output_stream(self, stream): if not self._binary_stdout: self._binary_stdout = subunit.make_stream_binary(self._stdout) contents = stream.read(65536) assert type(contents) is bytes, \ "Bad stream contents %r" % type(contents) # If there are unflushed bytes in the text wrapper, we need to sync.. self._stdout.flush() while contents: self._binary_stdout.write(contents) contents = stream.read(65536) self._binary_stdout.flush() def output_table(self, table): # stringify contents = [] for row in table: new_row = [] for column in row: new_row.append(str(column)) contents.append(new_row) if not contents: return widths = [0] * len(contents[0]) for row in contents: for idx, column in enumerate(row): if widths[idx] < len(column): widths[idx] = len(column) # Show a row outputs = [] def show_row(row): for idx, column in enumerate(row): outputs.append(column) if idx == len(row) - 1: outputs.append('\n') return # spacers for the next column outputs.append(' '*(widths[idx]-len(column))) outputs.append(' ') show_row(contents[0]) # title spacer for idx, width in enumerate(widths): outputs.append('-'*width) if idx == len(widths) - 1: outputs.append('\n') continue outputs.append(' ') for row in contents[1:]: show_row(row) self._stdout.write(_u('').join(outputs)) def output_tests(self, tests): for test in tests: # On Python 2.6 id() returns bytes. id_str = test.id() if type(id_str) is bytes: id_str = id_str.decode('utf8') self._stdout.write(id_str) self._stdout.write(_u('\n')) def output_values(self, values): outputs = [] for label, value in values: outputs.append('%s=%s' % (label, value)) self._stdout.write(_u('%s\n' % ', '.join(outputs))) def _format_summary(self, successful, tests, tests_delta, time, time_delta, values): # We build the string by appending to a list of strings and then # joining trivially at the end. Avoids expensive string concatenation. summary = [] a = summary.append if tests: a("Ran %s" % (tests,)) if tests_delta: a(" (%+d)" % (tests_delta,)) a(" tests") if time: if not summary: a("Ran tests") a(" in %0.3fs" % (time,)) if time_delta: a(" (%+0.3fs)" % (time_delta,)) if summary: a("\n") if successful: a('PASSED') else: a('FAILED') if values: a(' (') values_strings = [] for name, value, delta in values: value_str = '%s=%s' % (name, value) if delta: value_str += ' (%+d)' % (delta,) values_strings.append(value_str) a(', '.join(values_strings)) a(')') return _u('').join(summary) def output_summary(self, successful, tests, tests_delta, time, time_delta, values): self._stdout.write( self._format_summary( successful, tests, tests_delta, time, time_delta, values)) self._stdout.write(_u('\n')) def _check_cmd(self): parser = get_command_parser(self.cmd) parser.add_option("-d", "--here", dest="here", help="Set the directory or url that a command should run from. " "This affects all default path lookups but does not affect paths " "supplied to the command.", default=os.getcwd(), type=str) parser.add_option("-q", "--quiet", action="store_true", default=False, help="Turn off output other than the primary output for a command " "and any errors.") # yank out --, as optparse makes it silly hard to just preserve it. 
try: where_dashdash = self._argv.index('--') opt_argv = self._argv[:where_dashdash] other_args = self._argv[where_dashdash:] except ValueError: opt_argv = self._argv other_args = [] if '-h' in opt_argv or '--help' in opt_argv or '-?' in opt_argv: self.output_rest(parser.format_help()) # Fugly, but its what optparse does: we're just overriding the # output path. raise SystemExit(0) options, args = parser.parse_args(opt_argv) args += other_args self.here = options.here self.options = options parsed_args = {} failed = False for arg in self.cmd.args: try: parsed_args[arg.name] = arg.parse(args) except ValueError: exc_info = sys.exc_info() failed = True self._stderr.write(_u("%s\n") % str(exc_info[1])) break if not failed: self.arguments = parsed_args if args != []: self._stderr.write(_u("Unexpected arguments: %r\n") % args) return not failed and args == [] def _clear_SIGPIPE(self): """Clear SIGPIPE : child processes expect the default handler.""" signal.signal(signal.SIGPIPE, signal.SIG_DFL) def subprocess_Popen(self, *args, **kwargs): import subprocess if os.name == "posix": # GZ 2010-12-04: Should perhaps check for existing preexec_fn and # combine so both will get called. kwargs['preexec_fn'] = self._clear_SIGPIPE return subprocess.Popen(*args, **kwargs) testrepository-0.0.20/testrepository/ui/decorator.py0000664000175000017500000001010412306632354024124 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """A decorator for UIs to allow use of additional command objects in-process.""" from io import BytesIO import optparse from testrepository import ui class UI(ui.AbstractUI): """A decorating UI. Not comprehensive yet - only supports overriding input streams. Note that because UI objects carry command specific state only specific things can be delegated - option/argument lookup, streams. set_command for instance, does not get passed to the decorated UI unless it has not been initialised. """ def __init__(self, input_streams=None, options={}, decorated=None): """Create a decorating UI. :param input_streams: The input steams to present from this UI. Should be a list of (stream name, file) tuples. :param options: Dict of options to replace in the base UI. These are merged with the underlying ones when set_command is called. :param decorated: The UI to decorate. 
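
        For example, ``testr run`` feeds the subunit output of its spawned
        test processes back into the ``load`` command roughly like this (a
        sketch; ``proc_stream`` and ``self.ui`` come from the calling
        command)::

            from testrepository.commands.load import load

            load_ui = UI(input_streams=[('subunit', proc_stream)],
                         options={'partial': True}, decorated=self.ui)
            load(load_ui).execute()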
""" self._decorated = decorated self.input_streams = {} if input_streams: for stream_type, stream_value in input_streams: self.input_streams.setdefault(stream_type, []).append( stream_value) self._options = options @property def arguments(self): return self._decorated.arguments @property def here(self): return self._decorated.here def _iter_streams(self, stream_type): streams = self.input_streams.pop(stream_type, []) for stream_value in streams: if getattr(stream_value, 'read', None): yield stream_value else: yield BytesIO(stream_value) def make_result(self, get_id, test_command, previous_run=None): return self._decorated.make_result( get_id, test_command, previous_run=previous_run) def output_error(self, error_tuple): return self._decorated.output_error(error_tuple) def output_rest(self, rest_string): return self._decorated.output_rest(rest_string) def output_stream(self, stream): return self._decorated.output_stream(stream) def output_table(self, table): return self._decorated.output_table(table) def output_tests(self, tests): return self._decorated.output_tests(tests) def output_values(self, values): return self._decorated.output_values(values) def output_summary(self, successful, tests, tests_delta, time, time_delta, values): return self._decorated.output_summary( successful, tests, tests_delta, time, time_delta, values) def set_command(self, cmd): self.cmd = cmd result = True if getattr(self._decorated, 'cmd', None) is None: result = self._decorated.set_command(cmd) # Pickup the repository factory from the decorated UI's command. cmd.repository_factory = self._decorated.cmd.repository_factory # Merge options self.options = optparse.Values() for option in dir(self._decorated.options): if option.startswith('_'): continue setattr(self.options, option, getattr(self._decorated.options, option)) for option, value in self._options.items(): setattr(self.options, option, value) return result def subprocess_Popen(self, *args, **kwargs): return self._decorated.subprocess_Popen(*args, **kwargs) testrepository-0.0.20/testrepository/repository/0000775000175000017500000000000012377221137023377 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/repository/memory.py0000664000175000017500000001501012376335316025262 0ustar robertcrobertc00000000000000# # Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """In memory storage of test results.""" from extras import try_import OrderedDict = try_import('collections.OrderedDict', dict) from io import BytesIO from operator import methodcaller import subunit import testtools from testrepository.repository import ( AbstractRepository, AbstractRepositoryFactory, AbstractTestRun, RepositoryNotFound, ) class RepositoryFactory(AbstractRepositoryFactory): """A factory that can initialise and open memory repositories. This is used for testing where a repository may be created and later opened, but tests should not see each others repositories. 
""" def __init__(self): self.repos = {} def initialise(self, url): self.repos[url] = Repository() return self.repos[url] def open(self, url): try: return self.repos[url] except KeyError: raise RepositoryNotFound(url) class Repository(AbstractRepository): """In memory storage of test results.""" def __init__(self): # Test runs: self._runs = [] self._failing = OrderedDict() # id -> test self._times = {} # id -> duration def count(self): return len(self._runs) def get_failing(self): return _Failures(self) def get_test_run(self, run_id): if run_id < 0: raise KeyError("No such run.") return self._runs[run_id] def latest_id(self): result = self.count() - 1 if result < 0: raise KeyError("No tests in repository") return result def _get_inserter(self, partial): return _Inserter(self, partial) def _get_test_times(self, test_ids): result = {} for test_id in test_ids: duration = self._times.get(test_id, None) if duration is not None: result[test_id] = duration return result # XXX: Too much duplication between this and _Inserter class _Failures(AbstractTestRun): """Report on failures from a memory repository.""" def __init__(self, repository): self._repository = repository def get_id(self): return None def get_subunit_stream(self): result = BytesIO() serialiser = subunit.v2.StreamResultToBytes(result) serialiser = testtools.ExtendedToStreamDecorator(serialiser) serialiser.startTestRun() try: self.run(serialiser) finally: serialiser.stopTestRun() result.seek(0) return result def get_test(self): def wrap_result(result): # Wrap in a router to mask out startTestRun/stopTestRun from the # ExtendedToStreamDecorator. result = testtools.StreamResultRouter(result, do_start_stop_run=False) # Wrap that in ExtendedToStreamDecorator to convert v1 calls to # StreamResult. return testtools.ExtendedToStreamDecorator(result) return testtools.DecorateTestCaseResult( self, wrap_result, methodcaller('startTestRun'), methodcaller('stopTestRun')) def run(self, result): # Speaks original V1 protocol. 
for case in self._repository._failing.values(): case.run(result) class _Inserter(AbstractTestRun): """Insert test results into a memory repository, and describe them later.""" def __init__(self, repository, partial): self._repository = repository self._partial = partial self._tests = [] # Subunit V2 stream for get_subunit_stream self._subunit = None def startTestRun(self): self._subunit = BytesIO() serialiser = subunit.v2.StreamResultToBytes(self._subunit) self._hook = testtools.CopyStreamResult([ testtools.StreamToDict(self._handle_test), serialiser]) self._hook.startTestRun() def _handle_test(self, test_dict): self._tests.append(test_dict) start, stop = test_dict['timestamps'] if test_dict['status'] == 'exists' or None in (start, stop): return duration_delta = stop - start duration_seconds = ((duration_delta.microseconds + (duration_delta.seconds + duration_delta.days * 24 * 3600) * 10**6) / 10.0**6) self._repository._times[test_dict['id']] = duration_seconds def stopTestRun(self): self._hook.stopTestRun() self._repository._runs.append(self) self._run_id = len(self._repository._runs) - 1 if not self._partial: self._repository._failing = OrderedDict() for test_dict in self._tests: test_id = test_dict['id'] if test_dict['status'] == 'fail': case = testtools.testresult.real.test_dict_to_case(test_dict) self._repository._failing[test_id] = case else: self._repository._failing.pop(test_id, None) return self._run_id def status(self, *args, **kwargs): self._hook.status(*args, **kwargs) def get_id(self): return self._run_id def get_subunit_stream(self): self._subunit.seek(0) return self._subunit def get_test(self): def wrap_result(result): # Wrap in a router to mask out startTestRun/stopTestRun from the # ExtendedToStreamDecorator. result = testtools.StreamResultRouter(result, do_start_stop_run=False) # Wrap that in ExtendedToStreamDecorator to convert v1 calls to # StreamResult. return testtools.ExtendedToStreamDecorator(result) return testtools.DecorateTestCaseResult( self, wrap_result, methodcaller('startTestRun'), methodcaller('stopTestRun')) def run(self, result): # Speaks original. for test_dict in self._tests: case = testtools.testresult.real.test_dict_to_case(test_dict) case.run(result) testrepository-0.0.20/testrepository/repository/__init__.py0000664000175000017500000001526112306632354025514 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Storage of test results. A Repository provides storage and indexing of results. The AbstractRepository class defines the contract to which any Repository implementation must adhere. The file submodule is the usual repository that code will use for local access, and the memory submodule provides a memory only repository useful for testing. Repositories are identified by their URL, and new ones are made by calling the initialize function in the appropriate repository module. 
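A rough sketch of typical use, assuming the file-based repository (the methods
shown are those defined on AbstractRepository and the inserters it returns)::

    from testrepository.repository import file
    factory = file.RepositoryFactory()
    repo = factory.initialise('.')    # factory.open('.') for an existing repository
    inserter = repo.get_inserter()
    inserter.startTestRun()
    inserter.status(test_id='example.test', test_status='success')
    inserter.stopTestRun()
    run_id = inserter.get_id()        # id of the newly stored run
    failing = repo.get_failing()      # TestRun holding the currently failing tests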
""" from testtools import StreamToDict, TestResult class AbstractRepositoryFactory(object): """Interface for making or opening repositories.""" def initialise(self, url): """Create a repository at URL. Call on the class of the repository you wish to create. """ raise NotImplementedError(self.initialise) def open(self, url): """Open the repository at url. Raise RepositoryNotFound if there is no repository at the given url. """ raise NotImplementedError(self.open) class AbstractRepository(object): """The base class for Repository implementations. There are no interesting attributes or methods as yet. """ def count(self): """Return the number of test runs this repository has stored. :return count: The count of test runs stored in the repositor. """ raise NotImplementedError(self.count) def get_failing(self): """Get a TestRun that contains all of and only current failing tests. :return: a TestRun. """ raise NotImplementedError(self.get_failing) def get_inserter(self, partial=False): """Get an inserter that will insert a test run into the repository. Repository implementations should implement _get_inserter. get_inserter() does not add timing data to streams: it should be provided by the caller of get_inserter (e.g. commands.load). :param partial: If True, the stream being inserted only executed some tests rather than all the projects tests. :return an inserter: Inserters meet the extended TestResult protocol that testtools 0.9.2 and above offer. The startTestRun and stopTestRun methods in particular must be called. """ return self._get_inserter(partial) def _get_inserter(self): """Get an inserter for get_inserter. The result is decorated with an AutoTimingTestResultDecorator. """ raise NotImplementedError(self._get_inserter) def get_latest_run(self): """Return the latest run. Equivalent to get_test_run(latest_id()). """ return self.get_test_run(self.latest_id()) def get_test_run(self, run_id): """Retrieve a TestRun object for run_id. :param run_id: The test run id to retrieve. :return: A TestRun object. """ raise NotImplementedError(self.get_test_run) def get_test_times(self, test_ids): """Retrieve estimated times for the tests test_ids. :param test_ids: The test ids to query for timing data. :return: A dict with two keys: 'known' and 'unknown'. The unknown key contains a set with the test ids that did run. The known key contains a dict mapping test ids to time in seconds. """ test_ids = frozenset(test_ids) known_times = self._get_test_times(test_ids) unknown_times = test_ids - set(known_times) return dict(known=known_times, unknown=unknown_times) def _get_test_times(self, test_ids): """Retrieve estimated times for tests test_ids. :param test_ids: The test ids to query for timing data. :return: A dict mapping test ids to duration in seconds. Tests that no timing data is present for should not be returned - the base class get_test_times function will collate the missing test ids and put that in to its result automatically. """ raise NotImplementedError(self._get_test_times) def latest_id(self): """Return the run id for the most recently inserted test run.""" raise NotImplementedError(self.latest_id) def get_test_ids(self, run_id): """Return the test ids from the specified run. :param run_id: the id of the test run to query. :return: a list of test ids for the tests that were part of the specified test run. 
""" run = self.get_test_run(run_id) ids = [] def gather(test_dict): ids.append(test_dict['id']) result = StreamToDict(gather) result.startTestRun() try: run.get_test().run(result) finally: result.stopTestRun() return ids class AbstractTestRun(object): """A test run that has been stored in a repository. Should implement the StreamResult protocol as well as the testrepository specific methods documented here. """ def get_id(self): """Get the id of the test run. Sometimes test runs will not have an id, e.g. test runs for 'failing'. In that case, this should return None. """ raise NotImplementedError(self.get_id) def get_subunit_stream(self): """Get a subunit stream for this test run.""" raise NotImplementedError(self.get_subunit_stream) def get_test(self): """Get a testtools.TestCase-like object that can be run. :return: A TestCase like object which can be run to get the individual tests reported to a testtools.StreamResult/TestResult. (Clients of repository should provide an ExtendedToStreamDecorator decorator to permit either API to be used). """ raise NotImplementedError(self.get_test) class RepositoryNotFound(Exception): """Raised when we try to open a repository that isn't there.""" def __init__(self, url): self.url = url msg = 'No repository found in %s. Create one by running "testr init".' Exception.__init__(self, msg % url) testrepository-0.0.20/testrepository/repository/file.py0000664000175000017500000002572212376335265024707 0ustar robertcrobertc00000000000000# # Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Persistent storage of test results.""" from io import BytesIO try: import anydbm as dbm except ImportError: import dbm import errno from operator import methodcaller import os.path import sys import tempfile import subunit.v2 from subunit import TestProtocolClient import testtools from testtools.compat import _b from testrepository.repository import ( AbstractRepository, AbstractRepositoryFactory, AbstractTestRun, RepositoryNotFound, ) from testrepository.utils import timedelta_to_seconds def atomicish_rename(source, target): if os.name != "posix" and os.path.exists(target): os.remove(target) os.rename(source, target) class RepositoryFactory(AbstractRepositoryFactory): def initialise(klass, url): """Create a repository at url/path.""" base = os.path.join(os.path.expanduser(url), '.testrepository') os.mkdir(base) stream = open(os.path.join(base, 'format'), 'wt') try: stream.write('1\n') finally: stream.close() result = Repository(base) result._write_next_stream(0) return result def open(self, url): path = os.path.expanduser(url) base = os.path.join(path, '.testrepository') try: stream = open(os.path.join(base, 'format'), 'rt') except (IOError, OSError) as e: if e.errno == errno.ENOENT: raise RepositoryNotFound(url) raise if '1\n' != stream.read(): raise ValueError(url) return Repository(base) class Repository(AbstractRepository): """Disk based storage of test results. 
This repository stores each stream it receives as a file in a directory. Indices are then built on top of this basic store. This particular disk layout is subject to change at any time, as its primarily a bootstrapping exercise at this point. Any changes made are likely to have an automatic upgrade process. """ def __init__(self, base): """Create a file-based repository object for the repo at 'base'. :param base: The path to the repository. """ self.base = base def _allocate(self): # XXX: lock the file. K?! value = self.count() self._write_next_stream(value + 1) return value def _next_stream(self): next_content = open(os.path.join(self.base, 'next-stream'), 'rt').read() try: return int(next_content) except ValueError: raise ValueError("Corrupt next-stream file: %r" % next_content) def count(self): return self._next_stream() def latest_id(self): result = self._next_stream() - 1 if result < 0: raise KeyError("No tests in repository") return result def get_failing(self): try: run_subunit_content = open( os.path.join(self.base, "failing"), 'rb').read() except IOError: err = sys.exc_info()[1] if err.errno == errno.ENOENT: run_subunit_content = _b('') else: raise return _DiskRun(None, run_subunit_content) def get_test_run(self, run_id): try: run_subunit_content = open( os.path.join(self.base, str(run_id)), 'rb').read() except IOError as e: if e.errno == errno.ENOENT: raise KeyError("No such run.") return _DiskRun(run_id, run_subunit_content) def _get_inserter(self, partial): return _Inserter(self, partial) def _get_test_times(self, test_ids): # May be too slow, but build and iterate. # 'c' because an existing repo may be missing a file. db = dbm.open(self._path('times.dbm'), 'c') try: result = {} for test_id in test_ids: if type(test_id) != str: test_id = test_id.encode('utf8') # gdbm does not support get(). try: duration = db[test_id] except KeyError: duration = None if duration is not None: result[test_id] = float(duration) return result finally: db.close() def _path(self, suffix): return os.path.join(self.base, suffix) def _write_next_stream(self, value): # Note that this is unlocked and not threadsafe : for now, shrug - single # user, repo-per-working-tree model makes this acceptable in the short # term. Likewise we don't fsync - this data isn't valuable enough to # force disk IO. prefix = self._path('next-stream') stream = open(prefix + '.new', 'wt') try: stream.write('%d\n' % value) finally: stream.close() atomicish_rename(prefix + '.new', prefix) class _DiskRun(AbstractTestRun): """A test run that was inserted into the repository.""" def __init__(self, run_id, subunit_content): """Create a _DiskRun with the content subunit_content.""" self._run_id = run_id self._content = subunit_content assert type(subunit_content) is bytes def get_id(self): return self._run_id def get_subunit_stream(self): # Transcode - we want V2. v1_stream = BytesIO(self._content) v1_case = subunit.ProtocolTestCase(v1_stream) output = BytesIO() output_stream = subunit.v2.StreamResultToBytes(output) output_stream = testtools.ExtendedToStreamDecorator(output_stream) output_stream.startTestRun() try: v1_case.run(output_stream) finally: output_stream.stopTestRun() output.seek(0) return output def get_test(self): #case = subunit.ProtocolTestCase(self.get_subunit_stream()) case = subunit.ProtocolTestCase(BytesIO(self._content)) def wrap_result(result): # Wrap in a router to mask out startTestRun/stopTestRun from the # ExtendedToStreamDecorator. 
result = testtools.StreamResultRouter(result, do_start_stop_run=False) # Wrap that in ExtendedToStreamDecorator to convert v1 calls to # StreamResult. return testtools.ExtendedToStreamDecorator(result) return testtools.DecorateTestCaseResult( case, wrap_result, methodcaller('startTestRun'), methodcaller('stopTestRun')) class _SafeInserter(object): def __init__(self, repository, partial=False): # XXX: Perhaps should factor into a decorator and use an unaltered # TestProtocolClient. self._repository = repository fd, name = tempfile.mkstemp(dir=self._repository.base) self.fname = name stream = os.fdopen(fd, 'wb') self.partial = partial # The time take by each test, flushed at the end. self._times = {} self._test_start = None self._time = None subunit_client = testtools.StreamToExtendedDecorator( TestProtocolClient(stream)) self.hook = testtools.CopyStreamResult([ subunit_client, testtools.StreamToDict(self._handle_test)]) self._stream = stream def _handle_test(self, test_dict): start, stop = test_dict['timestamps'] if test_dict['status'] == 'exists' or None in (start, stop): return self._times[test_dict['id']] = str(timedelta_to_seconds(stop - start)) def startTestRun(self): self.hook.startTestRun() self._run_id = None def stopTestRun(self): self.hook.stopTestRun() self._stream.flush() self._stream.close() run_id = self._name() final_path = os.path.join(self._repository.base, str(run_id)) atomicish_rename(self.fname, final_path) # May be too slow, but build and iterate. db = dbm.open(self._repository._path('times.dbm'), 'c') try: db_times = {} for key, value in self._times.items(): if type(key) != str: key = key.encode('utf8') db_times[key] = value if getattr(db, 'update', None): db.update(db_times) else: for key, value in db_times.items(): db[key] = value finally: db.close() self._run_id = run_id def status(self, *args, **kwargs): self.hook.status(*args, **kwargs) def _cancel(self): """Cancel an insertion.""" self._stream.close() os.unlink(self.fname) def get_id(self): return self._run_id class _FailingInserter(_SafeInserter): """Insert a stream into the 'failing' file.""" def _name(self): return "failing" class _Inserter(_SafeInserter): def _name(self): return self._repository._allocate() def stopTestRun(self): super(_Inserter, self).stopTestRun() # XXX: locking (other inserts may happen while we update the failing # file). # Combine failing + this run : strip passed tests, add failures. # use memory repo to aggregate. a bit awkward on layering ;). # Should just pull the failing items aside as they happen perhaps. # Or use a router and avoid using a memory object at all. 
from testrepository.repository import memory repo = memory.Repository() if self.partial: # Seed with current failing inserter = testtools.ExtendedToStreamDecorator(repo.get_inserter()) inserter.startTestRun() failing = self._repository.get_failing() failing.get_test().run(inserter) inserter.stopTestRun() inserter= testtools.ExtendedToStreamDecorator(repo.get_inserter(partial=True)) inserter.startTestRun() run = self._repository.get_test_run(self.get_id()) run.get_test().run(inserter) inserter.stopTestRun() # and now write to failing inserter = _FailingInserter(self._repository) _inserter = testtools.ExtendedToStreamDecorator(inserter) _inserter.startTestRun() try: repo.get_failing().get_test().run(_inserter) except: inserter._cancel() raise else: _inserter.stopTestRun() return self.get_id() testrepository-0.0.20/testrepository/repository/samba_buildfarm.py0000664000175000017500000000533412306632354027065 0ustar robertcrobertc00000000000000# Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Access to the Samba build farm.""" import subunit import urllib from testrepository.repository import ( AbstractRepository, AbstractRepositoryFactory, AbstractTestRun, RepositoryNotFound, ) BUILD_FARM_URL = "http://build.samba.org/" class RepositoryFactory(AbstractRepositoryFactory): def initialise(klass, url): """Create a repository at url/path.""" raise NotImplementedError(klass.initialise) def open(self, url): if not url.startswith(BUILD_FARM_URL): raise RepositoryNotFound(url) return Repository(url) class Repository(AbstractRepository): """Access to the subunit results on the Samba build farm. """ def __init__(self, base): """Create a repository object for the Samba build farm at base. 
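        Normally reached via this module's factory, for example (illustrative)::

            repo = RepositoryFactory().open('http://build.samba.org/')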
""" self.base = base.rstrip("/")+"/" recent_ids_url = urllib.basejoin(self.base, "+recent-ids") f = urllib.urlopen(recent_ids_url, "r") try: self.recent_ids = [x.rstrip("\n") for x in f.readlines()] finally: f.close() def count(self): return len(self.recent_ids) def latest_id(self): if len(self.recent_ids) == 0: raise KeyError("No tests in repository") return len(self.recent_ids) - 1 def get_failing(self): raise NotImplementedError(self.get_failing) def get_test_run(self, run_id): return _HttpRun(self.base, self.recent_ids[run_id]) def _get_inserter(self, partial): raise NotImplementedError(self._get_inserter) class _HttpRun(AbstractTestRun): """A test run that was inserted into the repository.""" def __init__(self, base_url, run_id): """Create a _HttpRun with the content subunit_content.""" self.base_url = base_url self.run_id = run_id self.url = urllib.basejoin(self.base_url, "../../build/%s/+subunit" % self.run_id) def get_subunit_stream(self): return urllib.urlopen(self.url) def get_test(self): return subunit.ProtocolTestCase(self.get_subunit_stream()) testrepository-0.0.20/testrepository/results.py0000664000175000017500000000476512306632354023246 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. import subunit from testtools import ( StreamSummary, StreamResult, ) from testrepository.utils import timedelta_to_seconds class SummarizingResult(StreamSummary): def __init__(self): super(SummarizingResult, self).__init__() def startTestRun(self): super(SummarizingResult, self).startTestRun() self._first_time = None self._last_time = None def status(self, *args, **kwargs): if kwargs.get('timestamp') is not None: timestamp = kwargs['timestamp'] if self._last_time is None: self._first_time = timestamp self._last_time = timestamp if timestamp < self._first_time: self._first_time = timestamp if timestamp > self._last_time: self._last_time = timestamp super(SummarizingResult, self).status(*args, **kwargs) def get_num_failures(self): return len(self.failures) + len(self.errors) def get_time_taken(self): if None in (self._last_time, self._first_time): return None return timedelta_to_seconds(self._last_time - self._first_time) #XXX: Should be in testtools. 
class CatFiles(StreamResult): """Cat file attachments received to a stream.""" def __init__(self, byte_stream): self.stream = subunit.make_stream_binary(byte_stream) self.last_file = None def status(self, test_id=None, test_status=None, test_tags=None, runnable=True, file_name=None, file_bytes=None, eof=False, mime_type=None, route_code=None, timestamp=None): if file_name is None: return if self.last_file != file_name: self.stream.write(("--- %s ---\n" % file_name).encode('utf8')) self.last_file = file_name self.stream.write(file_bytes) self.stream.flush() testrepository-0.0.20/testrepository/testlist.py0000664000175000017500000000366612306632354023417 0ustar robertcrobertc00000000000000# # Copyright (c) 2012 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Handling of lists of tests - common code to --load-list etc.""" from io import BytesIO from extras import try_import bytestream_to_streamresult = try_import('subunit.ByteStreamToStreamResult') stream_result = try_import('testtools.testresult.doubles.StreamResult') from testtools.compat import _b, _u def write_list(stream, test_ids): """Write test_ids out to stream. :param stream: A file-like object. :param test_ids: An iterable of test ids. """ # May need utf8 explicitly? stream.write(_b('\n'.join(list(test_ids) + ['']))) def parse_list(list_bytes): """Parse list_bytes into a list of test ids.""" return _v1(list_bytes) def parse_enumeration(enumeration_bytes): """Parse enumeration_bytes into a list of test_ids.""" # If subunit v2 is available, use it. if bytestream_to_streamresult is not None: return _v2(enumeration_bytes) else: return _v1(enumeration_bytes) def _v1(list_bytes): return [id.strip() for id in list_bytes.decode('utf8').split(_u('\n')) if id.strip()] def _v2(list_bytes): parser = bytestream_to_streamresult(BytesIO(list_bytes), non_subunit_name='stdout') result = stream_result() parser.run(result) return [event[1] for event in result._events if event[2]=='exists'] testrepository-0.0.20/testrepository/utils.py0000664000175000017500000000035512306632354022674 0ustar robertcrobertc00000000000000 def timedelta_to_seconds(delta): """Return the number of seconds that make up the duration of a timedelta. """ return ( (delta.microseconds + (delta.seconds + delta.days * 24 * 3600) * 10**6) / float(10**6)) testrepository-0.0.20/testrepository/tests/0000775000175000017500000000000012377221137022322 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/tests/test_testr.py0000664000175000017500000000627512306632354025105 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. 
# # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for testr.""" import doctest import os.path import subprocess import sys from testresources import TestResource from testtools.matchers import ( DocTestMatches, ) from testrepository.tests import ResourcedTestCase from testrepository.tests.stubpackage import StubPackageResource class StubbedTestr(object): """Testr executable with replaced testrepository package for testing.""" def __init__(self, testrpath): self.execpath = testrpath def execute(self, args): # sys.executable is used so that this works on windows. proc = subprocess.Popen([sys.executable, self.execpath] + args, env={'PYTHONPATH': self.stubpackage.base}, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True) out, err = proc.communicate() return proc.returncode, out class StubbedTestrResource(TestResource): resources = [("stubpackage", StubPackageResource('testrepository', [('commands.py', r"""import sys def run_argv(argv, stdin, stdout, stderr): sys.stdout.write("%s %s %s\n" % (sys.stdin is stdin, sys.stdout is stdout, sys.stderr is stderr)) sys.stdout.write("%s\n" % argv) return len(argv) - 1 """)]))] def make(self, dependency_resources): stub = dependency_resources['stubpackage'] path = os.path.join(os.path.dirname(__file__), '..', '..', 'testr') # Make a copy of the testr script as running in place uses the current # library, not the stub library. execpath = os.path.join(stub.base, 'testr') source = open(path, 'rb') try: testr_contents = source.read() finally: source.close() target = open(execpath, 'wb') try: target.write(testr_contents) finally: target.close() return StubbedTestr(execpath) class TestExecuted(ResourcedTestCase): """Tests that execute testr. These tests are (moderately) expensive!.""" resources = [('testr', StubbedTestrResource())] def test_runs_and_returns_run_argv_some_args(self): status, output = self.testr.execute(["foo bar", "baz"]) self.assertEqual(2, status) self.assertThat(output, DocTestMatches("""True True True [..., 'foo bar', 'baz']\n""", doctest.ELLIPSIS)) def test_runs_and_returns_run_argv_no_args(self): status, output = self.testr.execute([]) self.assertThat(output, DocTestMatches("""True True True [...]\n""", doctest.ELLIPSIS)) self.assertEqual(0, status) testrepository-0.0.20/testrepository/tests/__init__.py0000664000175000017500000000441212306632354024433 0ustar robertcrobertc00000000000000# # Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""The testrepository tests and test only code.""" import unittest import testresources from testscenarios import generate_scenarios from testtools import TestCase class ResourcedTestCase(TestCase, testresources.ResourcedTestCase): """Make all testrepository tests have resource support.""" class _Wildcard(object): """Object that is equal to everything.""" def __repr__(self): return '*' def __eq__(self, other): return True def __ne__(self, other): return False Wildcard = _Wildcard() class StubTestCommand: def __init__(self, filter_tags=None): self.results = [] self.filter_tags = filter_tags or set() def __call__(self, ui, repo): return self def get_filter_tags(self): return self.filter_tags def test_suite(): packages = [ 'arguments', 'commands', 'repository', 'ui', ] names = [ 'arguments', 'commands', 'matchers', 'monkeypatch', 'repository', 'results', 'setup', 'stubpackage', 'testcommand', 'testr', 'ui', ] module_names = ['testrepository.tests.test_' + name for name in names] loader = unittest.TestLoader() suite = loader.loadTestsFromNames(module_names) result = testresources.OptimisingTestSuite() result.addTests(generate_scenarios(suite)) for pkgname in packages: pkg = __import__('testrepository.tests.' + pkgname, globals(), locals(), ['test_suite']) result.addTests(generate_scenarios(pkg.test_suite())) return result testrepository-0.0.20/testrepository/tests/commands/0000775000175000017500000000000012377221137024123 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/tests/commands/test_failing.py0000664000175000017500000001356712376202666027166 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the failing command.""" import doctest from io import BytesIO from subunit.v2 import ByteStreamToStreamResult import testtools from testtools.compat import _b from testtools.matchers import ( DocTestMatches, Equals, ) from testtools.testresult.doubles import StreamResult from testrepository.commands import failing from testrepository.ui.model import UI from testrepository.repository import memory from testrepository.tests import ( ResourcedTestCase, StubTestCommand, Wildcard, ) class TestCommand(ResourcedTestCase): def get_test_ui_and_cmd(self, options=(), args=()): ui = UI(options=options, args=args) cmd = failing.failing(ui) ui.set_command(cmd) return ui, cmd def test_shows_failures_from_last_run(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() inserter.status(test_id='failing', test_status='fail') inserter.status(test_id='ok', test_status='success') inserter.stopTestRun() self.assertEqual(1, cmd.execute()) # We should have seen test outputs (of the failure) and summary data. 
self.assertEqual([ ('results', Wildcard), ('summary', False, 1, None, Wildcard, None, [('id', 0, None), ('failures', 1, None)])], ui.outputs) suite = ui.outputs[0][1] result = testtools.StreamSummary() result.startTestRun() try: suite.run(result) finally: result.stopTestRun() self.assertEqual(1, result.testsRun) self.assertEqual(1, len(result.errors)) def test_with_subunit_shows_subunit_stream(self): ui, cmd = self.get_test_ui_and_cmd(options=[('subunit', True)]) cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() inserter.status(test_id='failing', test_status='fail') inserter.status(test_id='ok', test_status='success') inserter.stopTestRun() self.assertEqual(0, cmd.execute()) self.assertEqual(1, len(ui.outputs)) self.assertEqual('stream', ui.outputs[0][0]) as_subunit = BytesIO(ui.outputs[0][1]) stream = ByteStreamToStreamResult(as_subunit) log = StreamResult() log.startTestRun() try: stream.run(log) finally: log.stopTestRun() self.assertEqual( log._events, [ ('startTestRun',), ('status', 'failing', 'inprogress', None, True, None, None, False, None, None, Wildcard), ('status', 'failing', 'fail', None, True, None, None, False, None, None, Wildcard), ('stopTestRun',) ]) def test_with_subunit_no_failures_exit_0(self): ui, cmd = self.get_test_ui_and_cmd(options=[('subunit', True)]) cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() inserter.status(test_id='ok', test_status='success') inserter.stopTestRun() self.assertEqual(0, cmd.execute()) self.assertEqual(1, len(ui.outputs)) self.assertEqual('stream', ui.outputs[0][0]) self.assertThat(ui.outputs[0][1], Equals(_b(''))) def test_with_list_shows_list_of_tests(self): ui, cmd = self.get_test_ui_and_cmd(options=[('list', True)]) cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() inserter.status(test_id='failing1', test_status='fail') inserter.status(test_id='ok', test_status='success') inserter.status(test_id='failing2', test_status='fail') inserter.stopTestRun() self.assertEqual(1, cmd.execute(), ui.outputs) self.assertEqual(1, len(ui.outputs)) self.assertEqual('tests', ui.outputs[0][0]) self.assertEqual( set(['failing1', 'failing2']), set([test.id() for test in ui.outputs[0][1]])) def test_uses_get_failing(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() calls = [] open = cmd.repository_factory.open def decorate_open_with_get_failing(url): repo = open(url) inserter = repo.get_inserter() inserter.startTestRun() inserter.status(test_id='failing', test_status='fail') inserter.status(test_id='ok', test_status='success') inserter.stopTestRun() orig = repo.get_failing def get_failing(): calls.append(True) return orig() repo.get_failing = get_failing return repo cmd.repository_factory.open = decorate_open_with_get_failing cmd.repository_factory.initialise(ui.here) self.assertEqual(1, cmd.execute()) self.assertEqual([True], calls) testrepository-0.0.20/testrepository/tests/commands/test_slowest.py0000664000175000017500000001312012306632354027230 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. 
A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the "slowest" command.""" from datetime import ( datetime, timedelta, ) import pytz from testtools import PlaceHolder from testrepository.commands import slowest from testrepository.ui.model import UI from testrepository.repository import memory from testrepository.tests import ResourcedTestCase class TestCommand(ResourcedTestCase): def get_test_ui_and_cmd(self, options=(), args=()): ui = UI(options=options, args=args) cmd = slowest.slowest(ui) ui.set_command(cmd) return ui, cmd def test_shows_nothing_for_no_tests(self): """Having no tests leads to an error and no output.""" ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) self.assertEqual(3, cmd.execute()) self.assertEqual([], ui.outputs) def insert_one_test_with_runtime(self, inserter, runtime): """Insert one test, with the specified run time. :param inserter: the inserter to use to insert the test. :param runtime: the runtime (in seconds) that the test should appear to take. :return: the name of the test that was added. """ test_id = self.getUniqueString() start_time = datetime.now(pytz.UTC) inserter.status(test_id=test_id, test_status='inprogress', timestamp=start_time) inserter.status(test_id=test_id, test_status='success', timestamp=start_time + timedelta(seconds=runtime)) return test_id def test_shows_one_test_when_one_test(self): """When there is one test it is shown.""" ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() runtime = 0.1 test_id = self.insert_one_test_with_runtime( inserter, runtime) inserter.stopTestRun() retcode = cmd.execute() self.assertEqual( [('table', [slowest.slowest.TABLE_HEADER] + slowest.slowest.format_times([(test_id, runtime)]))], ui.outputs) self.assertEqual(0, retcode) def test_orders_tests_based_on_runtime(self): """Longer running tests are shown first.""" ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() runtime1 = 1.1 test_id1 = self.insert_one_test_with_runtime( inserter, runtime1) runtime2 = 0.1 test_id2 = self.insert_one_test_with_runtime( inserter, runtime2) inserter.stopTestRun() retcode = cmd.execute() rows = [(test_id1, runtime1), (test_id2, runtime2)] rows = slowest.slowest.format_times(rows) self.assertEqual(0, retcode) self.assertEqual( [('table', [slowest.slowest.TABLE_HEADER] + rows)], ui.outputs) def insert_lots_of_tests_with_timing(self, repo): inserter = repo.get_inserter() inserter.startTestRun() runtimes = [float(r) for r in range(slowest.slowest.DEFAULT_ROWS_SHOWN + 1)] test_ids = [ self.insert_one_test_with_runtime( inserter, runtime) for runtime in runtimes] inserter.stopTestRun() return test_ids, runtimes def test_limits_output_by_default(self): """Only the first 10 tests are shown by default.""" ui, cmd = 
self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) test_ids, runtimes = self.insert_lots_of_tests_with_timing(repo) retcode = cmd.execute() rows = list(zip(reversed(test_ids), reversed(runtimes)) )[:slowest.slowest.DEFAULT_ROWS_SHOWN] rows = slowest.slowest.format_times(rows) self.assertEqual(0, retcode) self.assertEqual( [('table', [slowest.slowest.TABLE_HEADER] + rows)], ui.outputs) def test_option_to_show_all_rows_does_so(self): """When the all option is given all rows are shown.""" ui, cmd = self.get_test_ui_and_cmd(options=[('all', True)]) cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) test_ids, runtimes = self.insert_lots_of_tests_with_timing(repo) retcode = cmd.execute() rows = zip(reversed(test_ids), reversed(runtimes)) rows = slowest.slowest.format_times(rows) self.assertEqual(0, retcode) self.assertEqual( [('table', [slowest.slowest.TABLE_HEADER] + rows)], ui.outputs) testrepository-0.0.20/testrepository/tests/commands/test_init.py0000664000175000017500000000242712306632354026503 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the init command.""" from testrepository.commands import init from testrepository.ui.model import UI from testrepository.tests import ResourcedTestCase from testrepository.tests.test_repository import RecordingRepositoryFactory from testrepository.repository import memory class TestCommandInit(ResourcedTestCase): def test_init_no_args_no_questions_no_output(self): ui = UI() cmd = init.init(ui) calls = [] cmd.repository_factory = RecordingRepositoryFactory(calls, memory.RepositoryFactory()) cmd.execute() self.assertEqual([('initialise', ui.here)], calls) testrepository-0.0.20/testrepository/tests/commands/__init__.py0000664000175000017500000000215212306632354026233 0ustar robertcrobertc00000000000000# # Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""Tests for commands.""" import unittest def test_suite(): names = [ 'commands', 'failing', 'help', 'init', 'last', 'list_tests', 'load', 'quickstart', 'run', 'slowest', 'stats', ] module_names = ['testrepository.tests.commands.test_' + name for name in names] loader = unittest.TestLoader() return loader.loadTestsFromNames(module_names) testrepository-0.0.20/testrepository/tests/commands/test_stats.py0000664000175000017500000000306112306632354026671 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the stats command.""" from testrepository.commands import stats from testrepository.ui.model import UI from testrepository.repository import memory from testrepository.tests import ResourcedTestCase class TestCommand(ResourcedTestCase): def get_test_ui_and_cmd(self,args=()): ui = UI(args=args) cmd = stats.stats(ui) ui.set_command(cmd) return ui, cmd def test_shows_number_of_runs(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() inserter.stopTestRun() inserter = repo.get_inserter() inserter.startTestRun() inserter.stopTestRun() self.assertEqual(0, cmd.execute()) self.assertEqual([('values', [('runs', 2)])], ui.outputs) testrepository-0.0.20/testrepository/tests/commands/test_last.py0000664000175000017500000001101112376202666026476 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""Tests for the last command.""" from io import BytesIO from subunit.v2 import ByteStreamToStreamResult import testtools from testtools.matchers import Equals from testtools.testresult.doubles import StreamResult from testrepository.commands import last from testrepository.ui.model import UI from testrepository.repository import memory from testrepository.tests import ( ResourcedTestCase, StubTestCommand, Wildcard, ) class TestCommand(ResourcedTestCase): def get_test_ui_and_cmd(self, args=(), options=()): ui = UI(args=args, options=options) cmd = last.last(ui) ui.set_command(cmd) return ui, cmd def test_shows_last_run_first_run(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() inserter.status(test_id='failing', test_status='fail') inserter.status(test_id='ok', test_status='success') inserter.stopTestRun() id = inserter.get_id() self.assertEqual(1, cmd.execute()) # We should have seen test outputs (of the failure) and summary data. self.assertEqual([ ('results', Wildcard), ('summary', False, 2, None, Wildcard, Wildcard, [('id', id, None), ('failures', 1, None)])], ui.outputs) suite = ui.outputs[0][1] result = testtools.StreamSummary() result.startTestRun() try: suite.run(result) finally: result.stopTestRun() self.assertEqual(1, len(result.errors)) self.assertEqual(2, result.testsRun) def _add_run(self, repo): inserter = repo.get_inserter() inserter.startTestRun() inserter.status(test_id='failing', test_status='fail') inserter.status(test_id='ok', test_status='success') inserter.stopTestRun() return inserter.get_id() def test_shows_last_run(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) self._add_run(repo) id = self._add_run(repo) self.assertEqual(1, cmd.execute()) # We should have seen test outputs (of the failure) and summary data. self.assertEqual([ ('results', Wildcard), ('summary', False, 2, 0, Wildcard, Wildcard, [('id', id, None), ('failures', 1, 0)])], ui.outputs) suite = ui.outputs[0][1] result = testtools.StreamSummary() result.startTestRun() try: suite.run(result) finally: result.stopTestRun() self.assertEqual(1, len(result.errors)) self.assertEqual(2, result.testsRun) def test_shows_subunit_stream(self): ui, cmd = self.get_test_ui_and_cmd(options=[('subunit', True)]) cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) self._add_run(repo) self.assertEqual(0, cmd.execute()) # We should have seen test outputs (of the failure) and summary data. self.assertEqual([ ('stream', Wildcard), ], ui.outputs) as_subunit = BytesIO(ui.outputs[0][1]) stream = ByteStreamToStreamResult(as_subunit) log = StreamResult() log.startTestRun() try: stream.run(log) finally: log.stopTestRun() self.assertEqual( log._events, [ ('startTestRun',), ('status', 'failing', 'fail', None, True, None, None, False, None, None, None), ('status', 'ok', 'success', None, True, None, None, False, None, None, None), ('stopTestRun',) ]) testrepository-0.0.20/testrepository/tests/commands/test_load.py0000664000175000017500000003243012306632354026454 0ustar robertcrobertc00000000000000# # Copyright (c) 2009, 2012 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. 
You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the load command.""" from datetime import datetime, timedelta from io import BytesIO from tempfile import NamedTemporaryFile from extras import try_import v2_avail = try_import('subunit.ByteStreamToStreamResult') import subunit from subunit import iso8601 import testtools from testtools.compat import _b from testtools.content import text_content from testtools.matchers import MatchesException from testtools.tests.helpers import LoggingResult from testrepository.commands import load from testrepository.ui.model import UI from testrepository.tests import ( ResourcedTestCase, StubTestCommand, Wildcard, ) from testrepository.tests.test_repository import RecordingRepositoryFactory from testrepository.tests.repository.test_file import HomeDirTempDir from testrepository.repository import memory, RepositoryNotFound class TestCommandLoad(ResourcedTestCase): def test_load_loads_subunit_stream_to_default_repository(self): ui = UI([('subunit', _b(''))]) cmd = load.load(ui) ui.set_command(cmd) calls = [] cmd.repository_factory = RecordingRepositoryFactory(calls, memory.RepositoryFactory()) repo = cmd.repository_factory.initialise(ui.here) del calls[:] cmd.execute() # Right repo self.assertEqual([('open', ui.here)], calls) # Stream consumed self.assertFalse('subunit' in ui.input_streams) # Results loaded self.assertEqual(1, repo.count()) def test_load_loads_named_file_if_given(self): datafile = NamedTemporaryFile() self.addCleanup(datafile.close) ui = UI([('subunit', _b(''))], args=[datafile.name]) cmd = load.load(ui) ui.set_command(cmd) calls = [] cmd.repository_factory = RecordingRepositoryFactory(calls, memory.RepositoryFactory()) repo = cmd.repository_factory.initialise(ui.here) del calls[:] self.assertEqual(0, cmd.execute()) # Right repo self.assertEqual([('open', ui.here)], calls) # Stream not consumed - otherwise CLI would block when someone runs # 'testr load foo'. XXX: Be nice if we could declare that the argument, # which is a path, is to be an input stream. 
self.assertTrue('subunit' in ui.input_streams) # Results loaded self.assertEqual(1, repo.count()) def test_load_initialises_repo_if_doesnt_exist_and_init_forced(self): ui = UI([('subunit', _b(''))], options=[('force_init', True)]) cmd = load.load(ui) ui.set_command(cmd) calls = [] cmd.repository_factory = RecordingRepositoryFactory(calls, memory.RepositoryFactory()) del calls[:] cmd.execute() self.assertEqual([('open', ui.here), ('initialise', ui.here)], calls) def test_load_errors_if_repo_doesnt_exist(self): ui = UI([('subunit', _b(''))]) cmd = load.load(ui) ui.set_command(cmd) calls = [] cmd.repository_factory = RecordingRepositoryFactory(calls, memory.RepositoryFactory()) del calls[:] cmd.execute() self.assertEqual([('open', ui.here)], calls) self.assertEqual([('error', Wildcard)], ui.outputs) self.assertThat( ui.outputs[0][1], MatchesException(RepositoryNotFound('memory:'))) def test_load_returns_0_normally(self): ui = UI([('subunit', _b(''))]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) self.assertEqual(0, cmd.execute()) def test_load_returns_1_on_failed_stream(self): if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='foo', test_status='inprogress') stream.status(test_id='foo', test_status='fail') subunit_bytes = buffer.getvalue() else: subunit_bytes = _b('test: foo\nfailure: foo\n') ui = UI([('subunit', subunit_bytes)]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) self.assertEqual(1, cmd.execute()) def test_load_new_shows_test_failures(self): if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='foo', test_status='inprogress') stream.status(test_id='foo', test_status='fail') subunit_bytes = buffer.getvalue() else: subunit_bytes = b'test: foo\nfailure: foo\n' ui = UI([('subunit', subunit_bytes)]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) self.assertEqual(1, cmd.execute()) self.assertEqual( [('summary', False, 1, None, Wildcard, None, [('id', 0, None), ('failures', 1, None)])], ui.outputs[1:]) def test_load_new_shows_test_failure_details(self): if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='foo', test_status='inprogress') stream.status(test_id='foo', test_status='fail', file_name="traceback", mime_type='text/plain;charset=utf8', file_bytes=b'arg\n') subunit_bytes = buffer.getvalue() else: subunit_bytes = b'test: foo\nfailure: foo [\narg\n]\n' ui = UI([('subunit', subunit_bytes)]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) self.assertEqual(1, cmd.execute()) suite = ui.outputs[0][1] self.assertEqual([ ('results', Wildcard), ('summary', False, 1, None, Wildcard, None, [('id', 0, None), ('failures', 1, None)])], ui.outputs) result = testtools.StreamSummary() result.startTestRun() try: suite.run(result) finally: result.stopTestRun() self.assertEqual(1, result.testsRun) self.assertEqual(1, len(result.errors)) def test_load_new_shows_test_skips(self): if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='foo', test_status='inprogress') stream.status(test_id='foo', test_status='skip') subunit_bytes = buffer.getvalue() else: subunit_bytes = 
b'test: foo\nskip: foo\n' ui = UI([('subunit', subunit_bytes)]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) self.assertEqual(0, cmd.execute()) self.assertEqual( [('results', Wildcard), ('summary', True, 1, None, Wildcard, None, [('id', 0, None), ('skips', 1, None)])], ui.outputs) def test_load_new_shows_test_summary_no_tests(self): ui = UI([('subunit', _b(''))]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) self.assertEqual(0, cmd.execute()) self.assertEqual( [('results', Wildcard), ('summary', True, 0, None, None, None, [('id', 0, None)])], ui.outputs) def test_load_quiet_shows_nothing(self): ui = UI([('subunit', _b(''))], [('quiet', True)]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) self.assertEqual(0, cmd.execute()) self.assertEqual([], ui.outputs) def test_load_abort_over_interactive_stream(self): ui = UI([('subunit', b''), ('interactive', b'a\n')]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) ret = cmd.execute() self.assertEqual( [('results', Wildcard), ('summary', False, 1, None, None, None, [('id', 0, None), ('failures', 1, None)])], ui.outputs) self.assertEqual(1, ret) def test_partial_passed_to_repo(self): ui = UI([('subunit', _b(''))], [('quiet', True), ('partial', True)]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) retcode = cmd.execute() self.assertEqual([], ui.outputs) self.assertEqual(0, retcode) self.assertEqual(True, cmd.repository_factory.repos[ui.here].get_test_run(0)._partial) def test_load_timed_run(self): if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) time = datetime(2011, 1, 1, 0, 0, 1, tzinfo=iso8601.Utc()) stream.status(test_id='foo', test_status='inprogress', timestamp=time) stream.status(test_id='foo', test_status='success', timestamp=time+timedelta(seconds=2)) timed_bytes = buffer.getvalue() else: timed_bytes = _b('time: 2011-01-01 00:00:01.000000Z\n' 'test: foo\n' 'time: 2011-01-01 00:00:03.000000Z\n' 'success: foo\n' 'time: 2011-01-01 00:00:06.000000Z\n') ui = UI( [('subunit', timed_bytes)]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() cmd.repository_factory.initialise(ui.here) self.assertEqual(0, cmd.execute()) # Note that the time here is 2.0, the difference between first and # second time: directives. That's because 'load' uses a # ThreadsafeForwardingResult (via ConcurrentTestSuite) that suppresses # time information not involved in the start or stop of a test. self.assertEqual( [('summary', True, 1, None, 2.0, None, [('id', 0, None)])], ui.outputs[1:]) def test_load_second_run(self): # If there's a previous run in the database, then show information # about the high level differences in the test run: how many more # tests, how many more failures, how much longer it takes. 
if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) time = datetime(2011, 1, 2, 0, 0, 1, tzinfo=iso8601.Utc()) stream.status(test_id='foo', test_status='inprogress', timestamp=time) stream.status(test_id='foo', test_status='fail', timestamp=time+timedelta(seconds=2)) stream.status(test_id='bar', test_status='inprogress', timestamp=time+timedelta(seconds=4)) stream.status(test_id='bar', test_status='fail', timestamp=time+timedelta(seconds=6)) timed_bytes = buffer.getvalue() else: timed_bytes = _b('time: 2011-01-02 00:00:01.000000Z\n' 'test: foo\n' 'time: 2011-01-02 00:00:03.000000Z\n' 'error: foo\n' 'time: 2011-01-02 00:00:05.000000Z\n' 'test: bar\n' 'time: 2011-01-02 00:00:07.000000Z\n' 'error: bar\n') ui = UI( [('subunit', timed_bytes)]) cmd = load.load(ui) ui.set_command(cmd) cmd.repository_factory = memory.RepositoryFactory() repo = cmd.repository_factory.initialise(ui.here) # XXX: Circumvent the AutoTimingTestResultDecorator so we can get # predictable times, rather than ones based on the system # clock. (Would normally expect to use repo.get_inserter()) inserter = repo._get_inserter(False) # Insert a run with different results. inserter.startTestRun() inserter.status(test_id=self.id(), test_status='inprogress', timestamp=datetime(2011, 1, 1, 0, 0, 1, tzinfo=iso8601.Utc())) inserter.status(test_id=self.id(), test_status='fail', timestamp=datetime(2011, 1, 1, 0, 0, 10, tzinfo=iso8601.Utc())) inserter.stopTestRun() self.assertEqual(1, cmd.execute()) self.assertEqual( [('summary', False, 2, 1, 6.0, -3.0, [('id', 1, None), ('failures', 2, 1)])], ui.outputs[1:]) testrepository-0.0.20/testrepository/tests/commands/test_list_tests.py0000664000175000017500000001167012306632354027735 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the list_tests command.""" from io import BytesIO import os.path from subprocess import PIPE from extras import try_import import subunit v2_avail = try_import('subunit.ByteStreamToStreamResult') from testtools.compat import _b from testtools.matchers import MatchesException from testrepository.commands import list_tests from testrepository.ui.model import UI from testrepository.repository import memory from testrepository.tests import ResourcedTestCase, Wildcard from testrepository.tests.stubpackage import TempDirResource from testrepository.tests.test_repository import make_test from testrepository.tests.test_testcommand import FakeTestCommand class TestCommand(ResourcedTestCase): resources = [('tempdir', TempDirResource())] def get_test_ui_and_cmd(self, options=(), args=()): self.dirty() ui = UI(options=options, args=args) ui.here = self.tempdir cmd = list_tests.list_tests(ui) ui.set_command(cmd) return ui, cmd def dirty(self): # Ugly: TODO - improve testresources to make this go away. 
dict(self.resources)['tempdir']._dirty = True def config_path(self): return os.path.join(self.tempdir, '.testr.conf') def set_config(self, text): with open(self.config_path(), 'wt') as stream: stream.write(text) def setup_repo(self, cmd, ui): repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() inserter.status(test_id='passing', test_status='success') inserter.status(test_id='failing', test_status='fail') inserter.stopTestRun() def test_no_config_file_errors(self): ui, cmd = self.get_test_ui_and_cmd() self.assertEqual(3, cmd.execute()) self.assertEqual(1, len(ui.outputs)) self.assertEqual('error', ui.outputs[0][0]) self.assertThat(ui.outputs[0][1], MatchesException(ValueError('No .testr.conf config file'))) def test_calls_list_tests(self): ui, cmd = self.get_test_ui_and_cmd(args=('--', 'bar', 'quux')) cmd.repository_factory = memory.RepositoryFactory() if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='returned', test_status='exists') stream.status(test_id='values', test_status='exists') subunit_bytes = buffer.getvalue() else: subunit_bytes = _b('returned\n\nvalues\n') ui.proc_outputs = [subunit_bytes] self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDOPTION\n' 'test_id_option=--load-list $IDFILE\n' 'test_list_option=--list\n') self.assertEqual(0, cmd.execute()) expected_cmd = 'foo --list bar quux' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdout': PIPE, 'stdin': PIPE}), ('communicate',), ('stream', _b('returned\nvalues\n')), ], ui.outputs) def test_filters_use_filtered_list(self): ui, cmd = self.get_test_ui_and_cmd( args=('returned', '--', 'bar', 'quux')) cmd.repository_factory = memory.RepositoryFactory() if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='returned', test_status='exists') stream.status(test_id='values', test_status='exists') subunit_bytes = buffer.getvalue() else: subunit_bytes = _b('returned\nvalues\n') ui.proc_outputs = [subunit_bytes] self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDOPTION\n' 'test_id_option=--load-list $IDFILE\n' 'test_list_option=--list\n') retcode = cmd.execute() expected_cmd = 'foo --list bar quux' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdout': PIPE, 'stdin': PIPE}), ('communicate',), ('stream', _b('returned\n')), ], ui.outputs) self.assertEqual(0, retcode) testrepository-0.0.20/testrepository/tests/commands/test_run.py0000664000175000017500000005333512306632354026350 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""Tests for the run command.""" from io import BytesIO import os.path from subprocess import PIPE import tempfile from extras import try_import from fixtures import ( Fixture, MonkeyPatch, ) import subunit v2_avail = try_import('subunit.ByteStreamToStreamResult') from subunit import RemotedTestCase from testscenarios.scenarios import multiply_scenarios from testtools.compat import _b from testtools.matchers import ( Equals, HasLength, MatchesException, MatchesListwise, ) from testrepository.commands import run from testrepository.ui.model import UI, ProcessModel from testrepository.repository import memory from testrepository.testlist import write_list from testrepository.tests import ResourcedTestCase, Wildcard from testrepository.tests.stubpackage import TempDirResource from testrepository.tests.test_testcommand import FakeTestCommand from testrepository.tests.test_repository import make_test class TestCommand(ResourcedTestCase): resources = [('tempdir', TempDirResource())] def get_test_ui_and_cmd(self, options=(), args=(), proc_outputs=(), proc_results=()): self.dirty() ui = UI(options=options, args=args, proc_outputs=proc_outputs, proc_results=proc_results) ui.here = self.tempdir cmd = run.run(ui) ui.set_command(cmd) return ui, cmd def dirty(self): # Ugly: TODO - improve testresources to make this go away. dict(self.resources)['tempdir']._dirty = True def config_path(self): return os.path.join(self.tempdir, '.testr.conf') def set_config(self, text): with open(self.config_path(), 'wt') as stream: stream.write(text) def setup_repo(self, cmd, ui, failures=True): repo = cmd.repository_factory.initialise(ui.here) inserter = repo.get_inserter() inserter.startTestRun() inserter.status(test_id='passing', test_status='success') if failures: inserter.status(test_id='failing1', test_status='fail') inserter.status(test_id='failing2', test_status='fail') inserter.stopTestRun() def test_no_config_file_errors(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory.initialise(ui.here) self.assertEqual(3, cmd.execute()) self.assertEqual(1, len(ui.outputs)) self.assertEqual('error', ui.outputs[0][0]) self.assertThat(ui.outputs[0][1], MatchesException(ValueError('No .testr.conf config file'))) def test_no_config_settings_errors(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory.initialise(ui.here) self.set_config('') self.assertEqual(3, cmd.execute()) self.assertEqual(1, len(ui.outputs)) self.assertEqual('error', ui.outputs[0][0]) self.assertThat(ui.outputs[0][1], MatchesException(ValueError( 'No test_command option present in .testr.conf'))) def test_IDFILE_failures(self): ui, cmd = self.get_test_ui_and_cmd(options=[('failing', True)]) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') cmd.command_factory = FakeTestCommand result = cmd.execute() listfile = os.path.join(ui.here, 'failing.list') expected_cmd = 'foo --load-list %s' % listfile self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]) ], ui.outputs) # TODO: check the list file is written, and deleted. 
self.assertEqual(0, result) def test_IDLIST_failures(self): ui, cmd = self.get_test_ui_and_cmd(options=[('failing', True)]) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\n') self.assertEqual(0, cmd.execute()) expected_cmd = 'foo failing1 failing2' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]), ], ui.outputs) # Failing causes partial runs to be used. self.assertEqual(True, cmd.repository_factory.repos[ui.here].get_test_run(1)._partial) def test_IDLIST_default_is_empty(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\n') self.assertEqual(0, cmd.execute()) expected_cmd = 'foo ' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]) ], ui.outputs) def test_IDLIST_default_passed_normally(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\ntest_id_list_default=whoo yea\n') self.assertEqual(0, cmd.execute()) expected_cmd = 'foo whoo yea' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]) ], ui.outputs) def test_IDFILE_not_passed_normally(self): ui, cmd = self.get_test_ui_and_cmd() cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') self.assertEqual(0, cmd.execute()) expected_cmd = 'foo ' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]), ], ui.outputs) def capture_ids(self, list_result=None): params = [] def capture_ids(self, ids, args, test_filters=None): params.append([self, ids, args, test_filters]) result = Fixture() result.run_tests = lambda:[] if list_result is not None: result.list_tests = lambda:list(list_result) return result return params, capture_ids def test_load_list_failing_takes_id_intersection(self): list_file = tempfile.NamedTemporaryFile() self.addCleanup(list_file.close) write_list(list_file, ['foo', 'quux', 'failing1']) # The extra tests - foo, quux - won't match known failures, and the # unlisted failure failing2 won't match the list. 
expected_ids = set(['failing1']) list_file.flush() ui, cmd = self.get_test_ui_and_cmd( options=[('load_list', list_file.name), ('failing', True)]) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') params, capture_ids = self.capture_ids() self.useFixture(MonkeyPatch( 'testrepository.testcommand.TestCommand.get_run_command', capture_ids)) cmd_result = cmd.execute() self.assertEqual([ ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]) ], ui.outputs) self.assertEqual(0, cmd_result) self.assertEqual([[Wildcard, expected_ids, [], None]], params) def test_load_list_passes_ids(self): list_file = tempfile.NamedTemporaryFile() self.addCleanup(list_file.close) expected_ids = set(['foo', 'quux', 'bar']) write_list(list_file, expected_ids) list_file.flush() ui, cmd = self.get_test_ui_and_cmd( options=[('load_list', list_file.name)]) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') params, capture_ids = self.capture_ids() self.useFixture(MonkeyPatch( 'testrepository.testcommand.TestCommand.get_run_command', capture_ids)) cmd_result = cmd.execute() self.assertEqual([ ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]) ], ui.outputs) self.assertEqual(0, cmd_result) self.assertEqual([[Wildcard, expected_ids, [], None]], params) def test_extra_options_passed_in(self): ui, cmd = self.get_test_ui_and_cmd(args=('--', 'bar', 'quux')) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') self.assertEqual(0, cmd.execute()) expected_cmd = 'foo bar quux' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]) ], ui.outputs) def test_quiet_passed_down(self): ui, cmd = self.get_test_ui_and_cmd(options=[('quiet', True)]) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo\n') result = cmd.execute() expected_cmd = 'foo' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ], ui.outputs) self.assertEqual(0, result) def test_partial_passed_to_repo(self): ui, cmd = self.get_test_ui_and_cmd( options=[('quiet', True), ('partial', True)]) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo\n') result = cmd.execute() expected_cmd = 'foo' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ], ui.outputs) self.assertEqual(0, result) self.assertEqual(True, cmd.repository_factory.repos[ui.here].get_test_run(1)._partial) def test_load_failure_exposed(self): if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='foo', test_status='inprogress') stream.status(test_id='foo', test_status='fail') subunit_bytes = buffer.getvalue() else: subunit_bytes = b'test: foo\nfailure: foo\n' ui, cmd = self.get_test_ui_and_cmd(options=[('quiet', True),], proc_outputs=[subunit_bytes]) cmd.repository_factory = 
memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config('[DEFAULT]\ntest_command=foo\n') result = cmd.execute() cmd.repository_factory.repos[ui.here].get_test_run(1) self.assertEqual(1, result) def test_process_exit_code_nonzero_causes_synthetic_error_test(self): if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='foo', test_status='inprogress') stream.status(test_id='foo', test_status='success') subunit_bytes = buffer.getvalue() else: subunit_bytes = b'test: foo\nsuccess: foo\n' ui, cmd = self.get_test_ui_and_cmd(options=[('quiet', True),], proc_outputs=[subunit_bytes], proc_results=[2]) # 2 is non-zero, and non-zero triggers the behaviour of exiting # with 1 - but we want to see that it doesn't pass-through the # value literally. cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config('[DEFAULT]\ntest_command=foo\n') result = cmd.execute() self.assertEqual(1, result) run = cmd.repository_factory.repos[ui.here].get_test_run(1) self.assertEqual([Wildcard, 'fail'], [test['status'] for test in run._tests]) def test_regex_test_filter(self): ui, cmd = self.get_test_ui_and_cmd(args=('ab.*cd', '--', 'bar', 'quux')) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST $LISTOPT\n' 'test_id_option=--load-list $IDFILE\n' 'test_list_option=--list\n') params, capture_ids = self.capture_ids() self.useFixture(MonkeyPatch( 'testrepository.testcommand.TestCommand.get_run_command', capture_ids)) cmd_result = cmd.execute() self.assertEqual([ ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]) ], ui.outputs) self.assertEqual(0, cmd_result) self.assertThat(params[0][1], Equals(None)) self.assertThat( params[0][2], MatchesListwise([Equals('bar'), Equals('quux')])) self.assertThat(params[0][3], MatchesListwise([Equals('ab.*cd')])) self.assertThat(params, HasLength(1)) def test_regex_test_filter_with_explicit_ids(self): ui, cmd = self.get_test_ui_and_cmd( args=('g1', '--', 'bar', 'quux'),options=[('failing', True)]) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST $LISTOPT\n' 'test_id_option=--load-list $IDFILE\n' 'test_list_option=--list\n') params, capture_ids = self.capture_ids() self.useFixture(MonkeyPatch( 'testrepository.testcommand.TestCommand.get_run_command', capture_ids)) cmd_result = cmd.execute() self.assertEqual([ ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]) ], ui.outputs) self.assertEqual(0, cmd_result) self.assertThat(params[0][1], Equals(['failing1', 'failing2'])) self.assertThat( params[0][2], MatchesListwise([Equals('bar'), Equals('quux')])) self.assertThat(params[0][3], MatchesListwise([Equals('g1')])) self.assertThat(params, HasLength(1)) def test_until_failure(self): ui, cmd = self.get_test_ui_and_cmd(options=[('until_failure', True)]) if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='foo', test_status='inprogress') stream.status(test_id='foo', test_status='success') subunit_bytes1 = buffer.getvalue() buffer.seek(0) buffer.truncate() stream.status(test_id='foo', test_status='inprogress') stream.status(test_id='foo', test_status='fail') subunit_bytes2 = buffer.getvalue() else: subunit_bytes1 = b'test: foo\nsuccess: foo\n' subunit_bytes2 = b'test: foo\nfailure: foo\n' ui.proc_outputs = [ subunit_bytes1, # stream one, works 
subunit_bytes2, # stream two, fails ] ui.require_proc_stdout = True cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST $LISTOPT\n' 'test_id_option=--load-list $IDFILE\n' 'test_list_option=--list\n') cmd_result = cmd.execute() expected_cmd = 'foo ' self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ('results', Wildcard), ('summary', True, 1, -2, Wildcard, Wildcard, [('id', 1, None)]), ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': PIPE, 'stdout': PIPE}), ('results', Wildcard), ('summary', False, 1, 0, Wildcard, Wildcard, [('id', 2, None), ('failures', 1, 1)]) ], ui.outputs) self.assertEqual(1, cmd_result) def test_failure_no_tests_run_when_no_failures_failures(self): ui, cmd = self.get_test_ui_and_cmd(options=[('failing', True)]) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui, failures=False) self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') cmd.command_factory = FakeTestCommand result = cmd.execute() self.assertEqual([ ('results', Wildcard), ('summary', True, 0, -1, None, None, [('id', 1, None)]) ], ui.outputs) self.assertEqual(0, result) def test_isolated_runs_multiple_processes(self): ui, cmd = self.get_test_ui_and_cmd(options=[('isolated', True)]) cmd.repository_factory = memory.RepositoryFactory() self.setup_repo(cmd, ui) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST $LISTOPT\n' 'test_id_option=--load-list $IDFILE\n' 'test_list_option=--list\n') params, capture_ids = self.capture_ids(list_result=['ab', 'cd', 'ef']) self.useFixture(MonkeyPatch( 'testrepository.testcommand.TestCommand.get_run_command', capture_ids)) cmd_result = cmd.execute() self.assertEqual([ ('results', Wildcard), ('summary', True, 0, -3, None, None, [('id', 1, None)]), ('results', Wildcard), ('summary', True, 0, 0, None, None, [('id', 2, None)]), ('results', Wildcard), ('summary', True, 0, 0, None, None, [('id', 3, None)]), ], ui.outputs) self.assertEqual(0, cmd_result) # once to list, then 3 each executing one test. 
self.assertThat(params, HasLength(4)) self.assertThat(params[0][1], Equals(None)) self.assertThat(params[1][1], Equals(['ab'])) self.assertThat(params[2][1], Equals(['cd'])) self.assertThat(params[3][1], Equals(['ef'])) def read_all(stream): return stream.read() def read_single(stream): return stream.read(1) def readline(stream): return stream.readline() def readlines(stream): return _b('').join(stream.readlines()) def accumulate(stream, reader): accumulator = [] content = reader(stream) while content: accumulator.append(content) content = reader(stream) return _b('').join(accumulator) class TestReturnCodeToSubunit(ResourcedTestCase): scenarios = multiply_scenarios( [('readdefault', dict(reader=read_all)), ('readsingle', dict(reader=read_single)), ('readline', dict(reader=readline)), ('readlines', dict(reader=readlines)), ], [('noeol', dict(stdout=_b('foo\nbar'))), ('trailingeol', dict(stdout=_b('foo\nbar\n')))]) def test_returncode_0_no_change(self): proc = ProcessModel(None) proc.stdout.write(self.stdout) proc.stdout.seek(0) stream = run.ReturnCodeToSubunit(proc) content = accumulate(stream, self.reader) self.assertEqual(self.stdout, content) def test_returncode_nonzero_fail_appended_to_content(self): proc = ProcessModel(None) proc.stdout.write(self.stdout) proc.stdout.seek(0) proc.returncode = 1 stream = run.ReturnCodeToSubunit(proc) content = accumulate(stream, self.reader) if v2_avail: buffer = BytesIO() buffer.write(b'foo\nbar\n') stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='process-returncode', test_status='fail', file_name='traceback', mime_type='text/plain;charset=utf8', file_bytes=b'returncode 1') expected_content = buffer.getvalue() else: expected_content = _b('foo\nbar\ntest: process-returncode\n' 'failure: process-returncode [\n returncode 1\n]\n') self.assertEqual(expected_content, content) testrepository-0.0.20/testrepository/tests/commands/test_commands.py0000664000175000017500000000305112306632354027333 0ustar robertcrobertc00000000000000# # Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the commands command.""" from testrepository.commands import commands from testrepository.ui.model import UI from testrepository.tests import ResourcedTestCase class TestCommandCommands(ResourcedTestCase): def get_test_ui_and_cmd(self): ui = UI() cmd = commands.commands(ui) ui.set_command(cmd) return ui, cmd def test_shows_a_table_of_commands(self): ui, cmd = self.get_test_ui_and_cmd() cmd.execute() self.assertEqual(1, len(ui.outputs)) self.assertEqual('table', ui.outputs[0][0]) self.assertEqual(('command', 'description'), ui.outputs[0][1][0]) command_names = [row[0] for row in ui.outputs[0][1]] summaries = [row[1] for row in ui.outputs[0][1]] self.assertTrue('load' in command_names) self.assertTrue( 'Load a subunit stream into a repository.' 
            in summaries)
testrepository-0.0.20/testrepository/tests/commands/test_help.py0000664000175000017500000000361312306632354026466 0ustar robertcrobertc00000000000000
#
# Copyright (c) 2010 Testrepository Contributors
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the users choice. A copy of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.

"""Tests for the help command."""

from inspect import getdoc

from testtools.matchers import Contains

from testrepository.commands import help, load
from testrepository.ui.model import UI
from testrepository.tests import ResourcedTestCase


class TestCommand(ResourcedTestCase):

    def get_test_ui_and_cmd(self, args=()):
        ui = UI(args=args)
        cmd = help.help(ui)
        ui.set_command(cmd)
        return ui, cmd

    def test_shows_rest_of__doc__(self):
        ui, cmd = self.get_test_ui_and_cmd(args=['load'])
        cmd.execute()
        expected_doc = getdoc(load.load)
        self.assertThat(ui.outputs[-1][1], Contains(expected_doc))

    def test_shows_cmd_arguments(self):
        ui, cmd = self.get_test_ui_and_cmd(args=['load'])
        cmd.execute()
        self.assertThat(ui.outputs[-1][1], Contains("streams*"))

    def test_shows_cmd_partial(self):
        ui, cmd = self.get_test_ui_and_cmd(args=['load'])
        cmd.execute()
        self.assertThat(ui.outputs[-1][1], Contains("--partial"))

    def test_shows_general_help_with_no_args(self):
        ui, cmd = self.get_test_ui_and_cmd()
        self.assertEqual(0, cmd.execute())
        self.assertEqual(1, len(ui.outputs))
        self.assertEqual('rest', ui.outputs[0][0])
testrepository-0.0.20/testrepository/tests/commands/test_quickstart.py0000664000175000017500000000244712306632354027734 0ustar robertcrobertc00000000000000
#
# Copyright (c) 2010 Testrepository Contributors
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the users choice. A copy of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.

"""Tests for the quickstart command."""

from testrepository.commands import quickstart
from testrepository.ui.model import UI
from testrepository.tests import ResourcedTestCase


class TestCommand(ResourcedTestCase):

    def get_test_ui_and_cmd(self, args=()):
        ui = UI(args=args)
        cmd = quickstart.quickstart(ui)
        ui.set_command(cmd)
        return ui, cmd

    def test_shows_some_rest(self):
        ui, cmd = self.get_test_ui_and_cmd()
        self.assertEqual(0, cmd.execute())
        self.assertEqual(1, len(ui.outputs))
        self.assertEqual('rest', ui.outputs[0][0])
        self.assertTrue('Overview' in ui.outputs[0][1])
testrepository-0.0.20/testrepository/tests/test_ui.py0000664000175000017500000002226512376333100024350 0ustar robertcrobertc00000000000000
#
# Copyright (c) 2009, 2010 Testrepository Contributors
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the users choice. A copy of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.

"""Tests for UI support logic and the UI contract."""

from io import BytesIO, TextIOWrapper
import optparse
import subprocess
import sys

from fixtures import EnvironmentVariable
from testtools.compat import _b, _u
from testtools.content import text_content
from testtools.matchers import raises

from testrepository import arguments, commands
from testrepository.commands import load
from testrepository.repository import memory
from testrepository.tests import ResourcedTestCase, StubTestCommand
from testrepository.ui import cli, decorator, model


def cli_ui_factory(input_streams=None, options=(), args=()):
    if input_streams and len(input_streams) > 1:
        # TODO: turn additional streams into argv and simulated files, or
        # something - however, may need to be cli specific tests at that
        # point.
        raise NotImplementedError(cli_ui_factory)
    stdout = TextIOWrapper(BytesIO(), line_buffering=True)
    if input_streams:
        stdin = TextIOWrapper(BytesIO(input_streams[0][1]))
    else:
        stdin = TextIOWrapper(BytesIO())
    stderr = TextIOWrapper(BytesIO(), line_buffering=True)
    argv = list(args)
    for option, value in options:
        # only bool handled so far
        if value:
            argv.append('--%s' % option)
    return cli.UI(argv, stdin, stdout, stderr)


def decorator_ui_factory(input_streams=None, options=(), args=()):
    base = model.UI(input_streams=input_streams, options=options, args=args)
    return decorator.UI(input_streams=input_streams, decorated=base)


# what ui implementations do we need to test?
ui_implementations = [ ('CLIUI', {'ui_factory': cli_ui_factory}), ('ModelUI', {'ui_factory': model.UI}), ('DecoratorUI', {'ui_factory': decorator_ui_factory}), ] class TestUIContract(ResourcedTestCase): scenarios = ui_implementations def get_test_ui(self): ui = self.ui_factory() cmd = commands.Command(ui) ui.set_command(cmd) return ui def test_factory_noargs(self): ui = self.ui_factory() def test_factory_input_stream_args(self): ui = self.ui_factory([('subunit', _b('value'))]) def test_here(self): ui = self.get_test_ui() self.assertNotEqual(None, ui.here) def test_iter_streams_load_stdin_use_case(self): # A UI can be asked for the streams that a command has indicated it # accepts, which is what load < foo will require. ui = self.ui_factory([('subunit', _b('test: foo\nsuccess: foo\n'))]) cmd = commands.Command(ui) cmd.input_streams = ['subunit+'] ui.set_command(cmd) results = [] for result in ui.iter_streams('subunit'): results.append(result.read()) self.assertEqual([_b('test: foo\nsuccess: foo\n')], results) def test_iter_streams_unexpected_type_raises(self): ui = self.get_test_ui() self.assertThat(lambda: ui.iter_streams('subunit'), raises(KeyError)) def test_output_error(self): self.useFixture(EnvironmentVariable('TESTR_PDB')) try: raise Exception('fooo') except Exception: err_tuple = sys.exc_info() ui = self.get_test_ui() ui.output_error(err_tuple) def test_output_rest(self): # output some ReST - used for help and docs. ui = self.get_test_ui() ui.output_rest(_u('')) def test_output_stream(self): # a stream of bytes can be output. ui = self.get_test_ui() ui.output_stream(BytesIO()) def test_output_stream_non_utf8(self): # When the stream has non-utf8 bytes it still outputs correctly. ui = self.get_test_ui() ui.output_stream(BytesIO(_b('\xfa'))) def test_output_table(self): # output_table shows a table. ui = self.get_test_ui() ui.output_table([('col1', 'col2'), ('row1c1','row1c2')]) def test_output_tests(self): # output_tests can be called, and takes a list of tests to output. ui = self.get_test_ui() ui.output_tests([self, self.__class__('test_output_table')]) def test_output_values(self): # output_values can be called and takes a list of things to output. ui = self.get_test_ui() ui.output_values([('foo', 1), ('bar', 'quux')]) def test_output_summary(self): # output_summary can be called, takes success boolean and list of # things to output. ui = self.get_test_ui() ui.output_summary(True, 1, None, 1, None, []) def test_set_command(self): # All ui objects can be given their command. 
ui = self.ui_factory() cmd = commands.Command(ui) self.assertEqual(True, ui.set_command(cmd)) def test_set_command_checks_args_unwanted_arg(self): ui = self.ui_factory(args=['foo']) cmd = commands.Command(ui) self.assertEqual(False, ui.set_command(cmd)) def test_set_command_checks_args_missing_arg(self): ui = self.ui_factory() cmd = commands.Command(ui) cmd.args = [arguments.command.CommandArgument('foo')] self.assertEqual(False, ui.set_command(cmd)) def test_set_command_checks_args_invalid_arg(self): ui = self.ui_factory(args=['a']) cmd = commands.Command(ui) cmd.args = [arguments.command.CommandArgument('foo')] self.assertEqual(False, ui.set_command(cmd)) def test_args_are_exposed_at_arguments(self): ui = self.ui_factory(args=['load']) cmd = commands.Command(ui) cmd.args = [arguments.command.CommandArgument('foo')] self.assertEqual(True, ui.set_command(cmd)) self.assertEqual({'foo':[load.load]}, ui.arguments) def test_set_command_with_no_name_works(self): # Degrade gracefully if the name attribute has not been set. ui = self.ui_factory() cmd = commands.Command(ui) self.assertEqual(True, ui.set_command(cmd)) def test_options_at_options(self): ui = self.get_test_ui() self.assertEqual(False, ui.options.quiet) def test_options_when_set_at_options(self): ui = self.ui_factory(options=[('quiet', True)]) cmd = commands.Command(ui) ui.set_command(cmd) self.assertEqual(True, ui.options.quiet) def test_options_on_command_picked_up(self): ui = self.ui_factory(options=[('subunit', True)]) cmd = commands.Command(ui) cmd.options = [optparse.Option("--subunit", action="store_true", default=False, help="Show output as a subunit stream.")] ui.set_command(cmd) self.assertEqual(True, ui.options.subunit) # And when not given the default works. ui = self.ui_factory() cmd = commands.Command(ui) cmd.options = [optparse.Option("--subunit", action="store_true", default=False, help="Show output as a subunit stream.")] ui.set_command(cmd) self.assertEqual(False, ui.options.subunit) def test_exec_subprocess(self): # exec_subprocess should 'work like popen'. ui = self.ui_factory() proc = ui.subprocess_Popen([sys.executable, "-V"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = proc.communicate() proc.returncode def test_subprocesses_have_stdin(self): # exec_subprocess should 'work like popen'. ui = self.ui_factory() proc = ui.subprocess_Popen([sys.executable, "-V"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) proc.stdout.read(0) out, err = proc.communicate() def test_subprocesses_have_stdout(self): # exec_subprocess should 'work like popen'. ui = self.ui_factory() proc = ui.subprocess_Popen([sys.executable, "-V"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) proc.stdout.read(0) out, err = proc.communicate() def test_make_result(self): # make_result should return a StreamResult and a summary result. ui = self.ui_factory() ui.set_command(commands.Command(ui)) result, summary = ui.make_result(lambda: None, StubTestCommand()) result.startTestRun() result.status() result.stopTestRun() summary.wasSuccessful() def test_make_result_previous_run(self): # make_result can take a previous run. 
ui = self.ui_factory() ui.set_command(commands.Command(ui)) result, summary = ui.make_result( lambda: None, StubTestCommand(), previous_run=memory.Repository().get_failing()) result.startTestRun() result.status() result.stopTestRun() summary.wasSuccessful() testrepository-0.0.20/testrepository/tests/ui/0000775000175000017500000000000012377221137022737 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/tests/ui/__init__.py0000664000175000017500000000167612306632354025061 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for ui modules.""" import unittest def test_suite(): names = [ 'cli', 'decorator', ] module_names = ['testrepository.tests.ui.test_' + name for name in names] loader = unittest.TestLoader() return loader.loadTestsFromNames(module_names) testrepository-0.0.20/testrepository/tests/ui/test_decorator.py0000664000175000017500000000245612306632354026340 0ustar robertcrobertc00000000000000# -*- encoding: utf-8 -*- # # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for UI decorator.""" from testrepository import commands from testrepository.ui import decorator, model from testrepository.tests import ResourcedTestCase class TestDecoratorUI(ResourcedTestCase): def test_options_overridable(self): base = model.UI(options=[('partial', True), ('other', False)]) cmd = commands.Command(base) base.set_command(cmd) ui = decorator.UI(options={'partial':False}, decorated=base) internal_cmd = commands.Command(ui) ui.set_command(internal_cmd) self.assertEqual(False, ui.options.partial) self.assertEqual(False, ui.options.other) testrepository-0.0.20/testrepository/tests/ui/test_cli.py0000664000175000017500000004241012376201045025113 0ustar robertcrobertc00000000000000# -*- encoding: utf-8 -*- # # Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for UI support logic and the UI contract.""" import doctest from io import BytesIO, StringIO, TextIOWrapper import optparse import os import sys from textwrap import dedent from fixtures import EnvironmentVariable import subunit import testtools from testtools import TestCase from testtools.compat import _b, _u from testtools.matchers import ( DocTestMatches, MatchesException, ) from testrepository import arguments from testrepository import commands from testrepository.commands import run from testrepository.ui import cli from testrepository.tests import ResourcedTestCase, StubTestCommand def get_test_ui_and_cmd(options=(), args=()): stdout = TextIOWrapper(BytesIO(), 'utf8', line_buffering=True) stdin = StringIO() stderr = StringIO() argv = list(args) for option, value in options: # only bool handled so far if value: argv.append('--%s' % option) ui = cli.UI(argv, stdin, stdout, stderr) cmd = run.run(ui) ui.set_command(cmd) return ui, cmd class TestCLIUI(ResourcedTestCase): def setUp(self): super(TestCLIUI, self).setUp() self.useFixture(EnvironmentVariable('TESTR_PDB')) def test_construct(self): stdout = BytesIO() stdin = BytesIO() stderr = BytesIO() cli.UI([], stdin, stdout, stderr) def test_stream_comes_from_stdin(self): stdout = BytesIO() stdin = BytesIO(_b('foo\n')) stderr = BytesIO() ui = cli.UI([], stdin, stdout, stderr) cmd = commands.Command(ui) cmd.input_streams = ['subunit'] ui.set_command(cmd) results = [] for stream in ui.iter_streams('subunit'): results.append(stream.read()) self.assertEqual([_b('foo\n')], results) def test_stream_type_honoured(self): # The CLI UI has only one stdin, so when a command asks for a stream # type it didn't declare, no streams are found. stdout = BytesIO() stdin = BytesIO(_b('foo\n')) stderr = BytesIO() ui = cli.UI([], stdin, stdout, stderr) cmd = commands.Command(ui) cmd.input_streams = ['subunit+', 'interactive?'] ui.set_command(cmd) results = [] for stream in ui.iter_streams('interactive'): results.append(stream.read()) self.assertEqual([], results) def test_dash_d_sets_here_option(self): stdout = BytesIO() stdin = BytesIO(_b('foo\n')) stderr = BytesIO() ui = cli.UI(['-d', '/nowhere/'], stdin, stdout, stderr) cmd = commands.Command(ui) ui.set_command(cmd) self.assertEqual('/nowhere/', ui.here) def test_outputs_error_string(self): try: raise Exception('fooo') except Exception: err_tuple = sys.exc_info() expected = str(err_tuple[1]) + '\n' bytestream = BytesIO() stdout = TextIOWrapper(bytestream, 'utf8', line_buffering=True) stdin = StringIO() stderr = StringIO() ui = cli.UI([], stdin, stdout, stderr) ui.output_error(err_tuple) self.assertThat(stderr.getvalue(), DocTestMatches(expected)) def test_error_enters_pdb_when_TESTR_PDB_set(self): os.environ['TESTR_PDB'] = '1' try: raise Exception('fooo') except Exception: err_tuple = sys.exc_info() expected = dedent("""\ File "...test_cli.py", line ..., in ...pdb_when_TESTR_PDB_set raise Exception('fooo') fooo """) # This should be a BytesIO + Textwrapper, but pdb on 2.7 writes bytes # - this code is the most pragmatic to test on 2.6 and up, and 3.2 and # up. 
stdout = StringIO() stdin = StringIO(_u('c\n')) stderr = StringIO() ui = cli.UI([], stdin, stdout, stderr) ui.output_error(err_tuple) self.assertThat(stderr.getvalue(), DocTestMatches(expected, doctest.ELLIPSIS)) def test_outputs_rest_to_stdout(self): ui, cmd = get_test_ui_and_cmd() ui.output_rest(_u('topic\n=====\n')) self.assertEqual(_b('topic\n=====\n'), ui._stdout.buffer.getvalue()) def test_outputs_results_to_stdout(self): ui, cmd = get_test_ui_and_cmd() class Case(ResourcedTestCase): def method(self): self.fail('quux') result, summary = ui.make_result(lambda: None, StubTestCommand()) result.startTestRun() Case('method').run(testtools.ExtendedToStreamDecorator(result)) result.stopTestRun() self.assertThat(ui._stdout.buffer.getvalue().decode('utf8'), DocTestMatches("""\ ====================================================================== FAIL: testrepository.tests.ui.test_cli.Case.method ---------------------------------------------------------------------- ...Traceback (most recent call last):... File "...test_cli.py", line ..., in method self.fail(\'quux\')... AssertionError: quux... """, doctest.ELLIPSIS)) def test_outputs_stream_to_stdout(self): ui, cmd = get_test_ui_and_cmd() stream = BytesIO(_b("Foo \n bar")) ui.output_stream(stream) self.assertEqual(_b("Foo \n bar"), ui._stdout.buffer.getvalue()) def test_outputs_tables_to_stdout(self): ui, cmd = get_test_ui_and_cmd() ui.output_table([('foo', 1), ('b', 'quux')]) self.assertEqual(_b('foo 1\n--- ----\nb quux\n'), ui._stdout.buffer.getvalue()) def test_outputs_tests_to_stdout(self): ui, cmd = get_test_ui_and_cmd() ui.output_tests([self, self.__class__('test_construct')]) self.assertThat( ui._stdout.buffer.getvalue().decode('utf8'), DocTestMatches( '...TestCLIUI.test_outputs_tests_to_stdout\n' '...TestCLIUI.test_construct\n', doctest.ELLIPSIS)) def test_outputs_values_to_stdout(self): ui, cmd = get_test_ui_and_cmd() ui.output_values([('foo', 1), ('bar', 'quux')]) self.assertEqual(_b('foo=1, bar=quux\n'), ui._stdout.buffer.getvalue()) def test_outputs_summary_to_stdout(self): ui, cmd = get_test_ui_and_cmd() summary = [True, 1, None, 2, None, []] expected_summary = ui._format_summary(*summary) ui.output_summary(*summary) self.assertEqual(_b("%s\n" % (expected_summary,)), ui._stdout.buffer.getvalue()) def test_parse_error_goes_to_stderr(self): bytestream = BytesIO() stdout = TextIOWrapper(bytestream, 'utf8', line_buffering=True) stdin = StringIO() stderr = StringIO() ui = cli.UI(['one'], stdin, stdout, stderr) cmd = commands.Command(ui) cmd.args = [arguments.command.CommandArgument('foo')] ui.set_command(cmd) self.assertEqual("Could not find command 'one'.\n", stderr.getvalue()) def test_parse_excess_goes_to_stderr(self): bytestream = BytesIO() stdout = TextIOWrapper(bytestream, 'utf8', line_buffering=True) stdin = StringIO() stderr = StringIO() ui = cli.UI(['one'], stdin, stdout, stderr) cmd = commands.Command(ui) ui.set_command(cmd) self.assertEqual("Unexpected arguments: ['one']\n", stderr.getvalue()) def test_parse_options_after_double_dash_are_arguments(self): stdout = BytesIO() stdin = BytesIO() stderr = BytesIO() ui = cli.UI(['one', '--', '--two', 'three'], stdin, stdout, stderr) cmd = commands.Command(ui) cmd.args = [arguments.string.StringArgument('myargs', max=None), arguments.doubledash.DoubledashArgument(), arguments.string.StringArgument('subargs', max=None)] ui.set_command(cmd) self.assertEqual({ 'doubledash': ['--'], 'myargs': ['one'], 'subargs': ['--two', 'three']}, ui.arguments) def 
test_double_dash_passed_to_arguments(self): class CaptureArg(arguments.AbstractArgument): def _parse_one(self, arg): return arg stdout = BytesIO() stdin = BytesIO() stderr = BytesIO() ui = cli.UI(['one', '--', '--two', 'three'], stdin, stdout, stderr) cmd = commands.Command(ui) cmd.args = [CaptureArg('args', max=None)] ui.set_command(cmd) self.assertEqual({'args':['one', '--', '--two', 'three']}, ui.arguments) def test_run_subunit_option(self): ui, cmd = get_test_ui_and_cmd(options=[('subunit', True)]) self.assertEqual(True, ui.options.subunit) def test_dash_dash_help_shows_help(self): bytestream = BytesIO() stdout = TextIOWrapper(bytestream, 'utf8', line_buffering=True) stdin = StringIO() stderr = StringIO() ui = cli.UI(['--help'], stdin, stdout, stderr) cmd = commands.Command(ui) cmd.args = [arguments.string.StringArgument('foo')] cmd.name = "bar" # By definition SystemExit is not caught by 'except Exception'. try: ui.set_command(cmd) except SystemExit: exc_info = sys.exc_info() self.assertThat(exc_info, MatchesException(SystemExit(0))) else: self.fail('ui.set_command did not raise') self.assertThat(bytestream.getvalue().decode('utf8'), DocTestMatches("""Usage: run.py bar [options] foo ... A command that can be run... ... -d HERE, --here=HERE... ...""", doctest.ELLIPSIS)) class TestCLISummary(TestCase): def get_summary(self, successful, tests, tests_delta, time, time_delta, values): """Get the summary that would be output for successful & values.""" ui, cmd = get_test_ui_and_cmd() return ui._format_summary( successful, tests, tests_delta, time, time_delta, values) def test_success_only(self): x = self.get_summary(True, None, None, None, None, []) self.assertEqual('PASSED', x) def test_failure_only(self): x = self.get_summary(False, None, None, None, None, []) self.assertEqual('FAILED', x) def test_time(self): x = self.get_summary(True, None, None, 3.4, None, []) self.assertEqual('Ran tests in 3.400s\nPASSED', x) def test_time_with_delta(self): x = self.get_summary(True, None, None, 3.4, 0.1, []) self.assertEqual('Ran tests in 3.400s (+0.100s)\nPASSED', x) def test_tests_run(self): x = self.get_summary(True, 34, None, None, None, []) self.assertEqual('Ran 34 tests\nPASSED', x) def test_tests_run_with_delta(self): x = self.get_summary(True, 34, 5, None, None, []) self.assertEqual('Ran 34 (+5) tests\nPASSED', x) def test_tests_and_time(self): x = self.get_summary(True, 34, -5, 3.4, 0.1, []) self.assertEqual('Ran 34 (-5) tests in 3.400s (+0.100s)\nPASSED', x) def test_other_values(self): x = self.get_summary( True, None, None, None, None, [('failures', 12, -1), ('errors', 13, 2)]) self.assertEqual('PASSED (failures=12 (-1), errors=13 (+2))', x) def test_values_no_delta(self): x = self.get_summary( True, None, None, None, None, [('failures', 12, None), ('errors', 13, None)]) self.assertEqual('PASSED (failures=12, errors=13)', x) def test_combination(self): x = self.get_summary( True, 34, -5, 3.4, 0.1, [('failures', 12, -1), ('errors', 13, 2)]) self.assertEqual( ('Ran 34 (-5) tests in 3.400s (+0.100s)\n' 'PASSED (failures=12 (-1), errors=13 (+2))'), x) class TestCLITestResult(TestCase): def make_exc_info(self): # Make an exc_info tuple for use in testing. 
try: 1/0 except ZeroDivisionError: return sys.exc_info() def make_result(self, stream=None, argv=None, filter_tags=None): if stream is None: stream = BytesIO() argv = argv or [] ui = cli.UI(argv, None, stream, None) cmd = commands.Command(ui) cmd.options = [ optparse.Option("--subunit", action="store_true", default=False, help="Display results in subunit format."), ] ui.set_command(cmd) return ui.make_result( lambda: None, StubTestCommand(filter_tags=filter_tags)) def test_initial_stream(self): # CLITestResult.__init__ does not do anything to the stream it is # given. bytestream = BytesIO() stream = TextIOWrapper(bytestream, 'utf8', line_buffering=True) ui = cli.UI(None, None, None, None) cli.CLITestResult(ui, stream, lambda: None) self.assertEqual(_b(''), bytestream.getvalue()) def test_format_error(self): # CLITestResult formats errors by giving them a big fat line, a title # made up of their 'label' and the name of the test, another different # big fat line, and then the actual error itself. result = self.make_result()[0] error = result._format_error('label', self, 'error text') expected = '%s%s: %s\n%s%s' % ( result.sep1, 'label', self.id(), result.sep2, 'error text') self.assertThat(error, DocTestMatches(expected)) def test_format_error_includes_tags(self): result = self.make_result()[0] error = result._format_error('label', self, 'error text', set(['foo'])) expected = '%s%s: %s\ntags: foo\n%s%s' % ( result.sep1, 'label', self.id(), result.sep2, 'error text') self.assertThat(error, DocTestMatches(expected)) def test_addFail_outputs_error(self): # CLITestResult.status test_status='fail' outputs the given error # immediately to the stream. bytestream = BytesIO() stream = TextIOWrapper(bytestream, 'utf8', line_buffering=True) result = self.make_result(stream)[0] error = self.make_exc_info() error_text = 'foo\nbar\n' result.startTestRun() result.status(test_id=self.id(), test_status='fail', eof=True, file_name='traceback', mime_type='text/plain;charset=utf8', file_bytes=error_text.encode('utf8')) self.assertThat( bytestream.getvalue().decode('utf8'), DocTestMatches(result._format_error('FAIL', self, error_text))) def test_addFailure_handles_string_encoding(self): # CLITestResult.addFailure outputs the given error handling non-ascii # characters. # Lets say we have bytes output, not string for some reason. stream = BytesIO() result = self.make_result(stream)[0] result.startTestRun() result.status(test_id='foo', test_status='fail', file_name='traceback', mime_type='text/plain;charset=utf8', file_bytes=b'-->\xe2\x80\x9c<--', eof=True) pattern = _u("...-->?<--...") self.assertThat( stream.getvalue().decode('utf8'), DocTestMatches(pattern, doctest.ELLIPSIS)) def test_subunit_output(self): bytestream = BytesIO() stream = TextIOWrapper(bytestream, 'utf8', line_buffering=True) result = self.make_result(stream, argv=['--subunit'])[0] result.startTestRun() result.stopTestRun() self.assertEqual(b'', bytestream.getvalue()) def test_make_result_tag_filter(self): bytestream = BytesIO() stream = TextIOWrapper(bytestream, 'utf8', line_buffering=True) result, summary = self.make_result( stream, filter_tags=set(['worker-0'])) # Generate a bunch of results with tags in the same events that # testtools generates them. 
tags = set(['worker-0']) result.startTestRun() result.status(test_id='pass', test_status='inprogress') result.status(test_id='pass', test_status='success', test_tags=tags) result.status(test_id='fail', test_status='inprogress') result.status(test_id='fail', test_status='fail', test_tags=tags) result.status(test_id='xfail', test_status='inprogress') result.status(test_id='xfail', test_status='xfail', test_tags=tags) result.status(test_id='uxsuccess', test_status='inprogress') result.status( test_id='uxsuccess', test_status='uxsuccess', test_tags=tags) result.status(test_id='skip', test_status='inprogress') result.status(test_id='skip', test_status='skip', test_tags=tags) result.stopTestRun() self.assertEqual("""\ ====================================================================== FAIL: fail tags: worker-0 ---------------------------------------------------------------------- Ran 1 tests FAILED (id=None, failures=1, skips=1) """, bytestream.getvalue().decode('utf8')) testrepository-0.0.20/testrepository/tests/repository/0000775000175000017500000000000012377221137024541 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/tests/repository/__init__.py0000664000175000017500000000170112306632354026650 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for specific repository types.""" import unittest def test_suite(): names = [ 'file', ] module_names = ['testrepository.tests.repository.test_' + name for name in names] loader = unittest.TestLoader() return loader.loadTestsFromNames(module_names) testrepository-0.0.20/testrepository/tests/repository/test_file.py0000664000175000017500000000727312306632354027101 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
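# Illustrative note (not part of the original module), summarising only what the
# tests below assert about the on-disk layout produced by
# file.RepositoryFactory().initialise(path):
#
#   path/.testrepository/format       contains "1\n"   (repository format marker)
#   path/.testrepository/next-stream  contains "0\n"   (id of the next run to store)
#   path/.testrepository/0, 1, ...    one file per inserted test run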
"""Tests for the file repository implementation.""" import os.path import shutil import tempfile from fixtures import Fixture from testtools.matchers import Raises, MatchesException from testrepository.repository import file from testrepository.tests import ResourcedTestCase from testrepository.tests.stubpackage import TempDirResource class FileRepositoryFixture(Fixture): def __init__(self, case): self.tempdir = case.tempdir self.resource = case.resources[0][1] def setUp(self): super(FileRepositoryFixture, self).setUp() self.repo = file.RepositoryFactory().initialise(self.tempdir) self.resource.dirtied(self.tempdir) class HomeDirTempDir(Fixture): """Creates a temporary directory in ~.""" def setUp(self): super(HomeDirTempDir, self).setUp() home_dir = os.path.expanduser('~') self.temp_dir = tempfile.mkdtemp(dir=home_dir) self.addCleanup(shutil.rmtree, self.temp_dir) self.short_path = os.path.join('~', os.path.basename(self.temp_dir)) class TestFileRepository(ResourcedTestCase): resources = [('tempdir', TempDirResource())] def test_initialise(self): self.useFixture(FileRepositoryFixture(self)) base = os.path.join(self.tempdir, '.testrepository') stream = open(os.path.join(base, 'format'), 'rt') try: contents = stream.read() finally: stream.close() self.assertEqual("1\n", contents) stream = open(os.path.join(base, 'next-stream'), 'rt') try: contents = stream.read() finally: stream.close() self.assertEqual("0\n", contents) def test_initialise_expands_user_directory(self): short_path = self.useFixture(HomeDirTempDir()).short_path repo = file.RepositoryFactory().initialise(short_path) self.assertTrue(os.path.exists(repo.base)) def test_inserter_output_path(self): repo = self.useFixture(FileRepositoryFixture(self)).repo inserter = repo.get_inserter() inserter.startTestRun() inserter.stopTestRun() self.assertTrue(os.path.exists(os.path.join(repo.base, '0'))) def test_inserting_creates_id(self): # When inserting a stream, an id is returned from stopTestRun. repo = self.useFixture(FileRepositoryFixture(self)).repo result = repo.get_inserter() result.startTestRun() result.stopTestRun() self.assertEqual(0, result.get_id()) def test_open_expands_user_directory(self): short_path = self.useFixture(HomeDirTempDir()).short_path repo1 = file.RepositoryFactory().initialise(short_path) repo2 = file.RepositoryFactory().open(short_path) self.assertEqual(repo1.base, repo2.base) def test_next_stream_corruption_error(self): repo = self.useFixture(FileRepositoryFixture(self)).repo open(os.path.join(repo.base, 'next-stream'), 'wb').close() self.assertThat(repo.count, Raises( MatchesException(ValueError("Corrupt next-stream file: ''")))) testrepository-0.0.20/testrepository/tests/test_arguments.py0000664000175000017500000000730012306632354025737 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""Tests for the arguments package.""" from testtools.matchers import ( Equals, raises, ) from testrepository import arguments from testrepository.tests import ResourcedTestCase class TestAbstractArgument(ResourcedTestCase): def test_init_base(self): arg = arguments.AbstractArgument('name') self.assertEqual('name', arg.name) self.assertEqual('name', arg.summary()) def test_init_optional(self): arg = arguments.AbstractArgument('name', min=0) self.assertEqual(0, arg.minimum_count) self.assertEqual('name?', arg.summary()) def test_init_repeating(self): arg = arguments.AbstractArgument('name', max=None) self.assertEqual(None, arg.maximum_count) self.assertEqual('name+', arg.summary()) def test_init_optional_repeating(self): arg = arguments.AbstractArgument('name', min=0, max=None) self.assertEqual(None, arg.maximum_count) self.assertEqual('name*', arg.summary()) def test_init_arbitrary(self): arg = arguments.AbstractArgument('name', max=2) self.assertEqual('name{1,2}', arg.summary()) def test_init_arbitrary_infinite(self): arg = arguments.AbstractArgument('name', min=2, max=None) self.assertEqual('name{2,}', arg.summary()) def test_parsing_calls__parse_one(self): calls = [] class AnArgument(arguments.AbstractArgument): def _parse_one(self, arg): calls.append(arg) return ('1', arg) argument = AnArgument('foo', max=2) args = ['thing', 'other', 'stranger'] # results are returned self.assertEqual([('1', 'thing'), ('1', 'other')], argument.parse(args)) # used args are removed self.assertEqual(['stranger'], args) # parse function was used self.assertEqual(['thing', 'other'], calls) def test_parsing_unlimited(self): class AnArgument(arguments.AbstractArgument): def _parse_one(self, arg): return arg argument = AnArgument('foo', max=None) args = ['thing', 'other'] # results are returned self.assertEqual(['thing', 'other'], argument.parse(args)) # used args are removed self.assertEqual([], args) def test_parsing_too_few(self): class AnArgument(arguments.AbstractArgument): def _parse_one(self, arg): return arg argument = AnArgument('foo') self.assertThat(lambda: argument.parse([]), raises(ValueError)) def test_parsing_optional_not_matching(self): class AnArgument(arguments.AbstractArgument): def _parse_one(self, arg): raise ValueError('not an argument') argument = AnArgument('foo', min=0) args = ['a', 'b'] self.assertThat(argument.parse(args), Equals([])) self.assertThat(args, Equals(['a', 'b'])) # No interface tests for now, because the interface we expect is really just # _parse_one; however if bugs or issues show up... then we should add them. testrepository-0.0.20/testrepository/tests/test_repository.py0000664000175000017500000004617212376335112026162 0ustar robertcrobertc00000000000000# # Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""Tests for Repository support logic and the Repository contract.""" from datetime import ( datetime, timedelta, ) import doctest from subunit import ( iso8601, v2, ) from testresources import TestResource from testtools import ( clone_test_with_new_id, PlaceHolder, ) import testtools from testtools.compat import _b from testtools.testresult.doubles import ( ExtendedTestResult, StreamResult, ) from testtools.matchers import DocTestMatches, raises from testrepository import repository from testrepository.repository import file, memory from testrepository.tests import ( ResourcedTestCase, Wildcard, ) from testrepository.tests.stubpackage import ( TempDirResource, ) class RecordingRepositoryFactory(object): """Test helper for tests wanting to check repository factory callers.""" def __init__(self, calls, decorated): self.calls = calls self.factory = decorated def initialise(self, url): self.calls.append(('initialise', url)) return self.factory.initialise(url) def open(self, url): self.calls.append(('open', url)) return self.factory.open(url) class DirtyTempDirResource(TempDirResource): def __init__(self): TempDirResource.__init__(self) self._dirty = True def isDirty(self): return True def _setResource(self, new_resource): """Set the current resource to a new value.""" self._currentResource = new_resource self._dirty = True class MemoryRepositoryFactoryResource(TestResource): def make(self, dependency_resources): return memory.RepositoryFactory() # what repository implementations do we need to test? repo_implementations = [ ('file', {'repo_impl': file.RepositoryFactory(), 'resources': [('sample_url', DirtyTempDirResource())] }), ('memory', { 'resources': [('repo_impl', MemoryRepositoryFactoryResource())], 'sample_url': 'memory:'}), ] class Case(ResourcedTestCase): """Reference tests.""" def passing(self): pass def failing(self): self.fail("oops") def unexpected_success(self): self.expectFailure("unexpected success", self.assertTrue, True) def make_test(id, should_pass): """Make a test.""" if should_pass: case = Case("passing") else: case = Case("failing") return clone_test_with_new_id(case, id) def run_timed(id, duration, result, enumeration=False): """Make and run a test taking duration seconds. :param enumeration: If True, don't run, just enumerate. """ start = datetime.now(tz=iso8601.Utc()) if enumeration: result.status(test_id=id, test_status='exists', timestamp=start) else: result.status(test_id=id, test_status='inprogress', timestamp=start) result.status(test_id=id, test_status='success', timestamp=start + timedelta(seconds=duration)) class TestRepositoryErrors(ResourcedTestCase): def test_not_found(self): url = 'doesntexistatall' error = repository.RepositoryNotFound(url) self.assertEqual( 'No repository found in %s. Create one by running "testr init".' 
% url, str(error)) class TestRepositoryContract(ResourcedTestCase): scenarios = repo_implementations def get_failing(self, repo): """Analyze a failing stream from repo and return it.""" run = repo.get_failing() analyzer = testtools.StreamSummary() analyzer.startTestRun() try: run.get_test().run(analyzer) finally: analyzer.stopTestRun() return analyzer def get_last_run(self, repo): """Return the results from a stream.""" run = repo.get_test_run(repo.latest_id()) analyzer = testtools.StreamSummary() analyzer.startTestRun() try: run.get_test().run(analyzer) finally: analyzer.stopTestRun() return analyzer def test_can_initialise_with_param(self): repo = self.repo_impl.initialise(self.sample_url) self.assertIsInstance(repo, repository.AbstractRepository) def test_can_get_inserter(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() self.assertNotEqual(None, result) def test_insert_stream_smoke(self): # We can insert some data into the repository. repo = self.repo_impl.initialise(self.sample_url) class Case(ResourcedTestCase): def method(self): pass case = Case('method') result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() case.run(legacy_result) legacy_result.stopTestRun() self.assertEqual(1, repo.count()) self.assertNotEqual(None, result.get_id()) def test_open(self): self.repo_impl.initialise(self.sample_url) self.repo_impl.open(self.sample_url) def test_open_non_existent(self): url = 'doesntexistatall' self.assertThat(lambda: self.repo_impl.open(url), raises(repository.RepositoryNotFound(url))) def test_inserting_creates_id(self): # When inserting a stream, an id is returned from stopTestRun. # Note that this is no longer recommended - but kept for compatibility. repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() result.startTestRun() self.assertNotEqual(None, result.stopTestRun()) def test_count(self): repo = self.repo_impl.initialise(self.sample_url) self.assertEqual(0, repo.count()) result = repo.get_inserter() result.startTestRun() result.stopTestRun() self.assertEqual(1, repo.count()) result = repo.get_inserter() result.startTestRun() result.stopTestRun() self.assertEqual(2, repo.count()) def test_latest_id_empty(self): repo = self.repo_impl.initialise(self.sample_url) self.assertThat(repo.latest_id, raises(KeyError("No tests in repository"))) def test_latest_id_nonempty(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() result.startTestRun() result.stopTestRun() inserted = result.get_id() self.assertEqual(inserted, repo.latest_id()) def test_get_failing_empty(self): # repositories can return a TestRun with just latest failures in it. repo = self.repo_impl.initialise(self.sample_url) analyzed = self.get_failing(repo) self.assertEqual(0, analyzed.testsRun) def test_get_failing_one_run(self): # repositories can return a TestRun with just latest failures in it. repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() make_test('passing', True).run(legacy_result) make_test('failing', False).run(legacy_result) legacy_result.stopTestRun() analyzed = self.get_failing(repo) self.assertEqual(1, analyzed.testsRun) self.assertEqual(1, len(analyzed.errors)) self.assertEqual('failing', analyzed.errors[0][0].id()) def test_unexpected_success(self): # Unexpected successes get forwarded too. 
(Test added because of a # NameError in memory repo). repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() test = clone_test_with_new_id(Case('unexpected_success'), 'unexpected_success') test.run(legacy_result) legacy_result.stopTestRun() analyzed = self.get_last_run(repo) self.assertEqual(1, analyzed.testsRun) self.assertEqual(1, len(analyzed.unexpectedSuccesses)) self.assertEqual('unexpected_success', analyzed.unexpectedSuccesses[0].id()) def test_get_failing_complete_runs_delete_missing_failures(self): # failures from complete runs replace all failures. repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() make_test('passing', True).run(legacy_result) make_test('failing', False).run(legacy_result) make_test('missing', False).run(legacy_result) legacy_result.stopTestRun() result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() make_test('passing', False).run(legacy_result) make_test('failing', True).run(legacy_result) legacy_result.stopTestRun() analyzed = self.get_failing(repo) self.assertEqual(1, analyzed.testsRun) self.assertEqual(1, len(analyzed.errors)) self.assertEqual('passing', analyzed.errors[0][0].id()) def test_get_failing_partial_runs_preserve_missing_failures(self): # failures from two runs add to existing failures, and successes remove # from them. repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() make_test('passing', True).run(legacy_result) make_test('failing', False).run(legacy_result) make_test('missing', False).run(legacy_result) legacy_result.stopTestRun() result = repo.get_inserter(partial=True) legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() make_test('passing', False).run(legacy_result) make_test('failing', True).run(legacy_result) legacy_result.stopTestRun() analyzed = self.get_failing(repo) self.assertEqual(2, analyzed.testsRun) self.assertEqual(2, len(analyzed.errors)) self.assertEqual(set(['passing', 'missing']), set([test[0].id() for test in analyzed.errors])) def test_get_test_run_missing_keyerror(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() result.startTestRun() result.stopTestRun() inserted = result.get_id() self.assertThat(lambda:repo.get_test_run(inserted - 1), raises(KeyError)) def test_get_test_run(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() result.startTestRun() inserted = result.stopTestRun() run = repo.get_test_run(inserted) self.assertNotEqual(None, run) def test_get_latest_run(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() result.startTestRun() inserted = result.stopTestRun() run = repo.get_latest_run() self.assertEqual(inserted, run.get_id()) def test_get_latest_run_empty_repo(self): repo = self.repo_impl.initialise(self.sample_url) self.assertRaises(KeyError, repo.get_latest_run) def test_get_test_run_get_id(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() result.startTestRun() inserted = result.stopTestRun() run = repo.get_test_run(inserted) self.assertEqual(inserted, run.get_id()) def test_get_test_run_preserves_time(self): self.skip('Fix me 
before releasing.') # The test run outputs the time events that it received. now = datetime(2001, 1, 1, 0, 0, 0, tzinfo=iso8601.Utc()) second = timedelta(seconds=1) repo = self.repo_impl.initialise(self.sample_url) test_id = self.getUniqueString() test = make_test(test_id, True) result = repo.get_inserter() result.startTestRun() result.status(timestamp=now, test_id=test_id, test_status='inprogress') result.status(timestamp=(now + 1 * second), test_id=test_id, test_status='success') inserted = result.stopTestRun() run = repo.get_test_run(inserted) result = ExtendedTestResult() run.get_test().run(result) self.assertEqual( [('time', now), ('tags', set(), set()), ('startTest', Wildcard), ('time', now + 1 * second), ('addSuccess', Wildcard), ('stopTest', Wildcard), ('tags', set(), set()), ], result._events) def test_get_failing_get_id(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() result.startTestRun() result.stopTestRun() run = repo.get_failing() self.assertEqual(None, run.get_id()) def test_get_failing_get_subunit_stream(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() make_test('testrepository.tests.test_repository.Case.method', False).run(legacy_result) legacy_result.stopTestRun() run = repo.get_failing() as_subunit = run.get_subunit_stream() stream = v2.ByteStreamToStreamResult(as_subunit) log = StreamResult() log.startTestRun() try: stream.run(log) finally: log.stopTestRun() self.assertEqual( log._events, [ ('startTestRun',), ('status', 'testrepository.tests.test_repository.Case.method', 'inprogress', None, True, None, None, False, None, None, Wildcard), ('status', 'testrepository.tests.test_repository.Case.method', None, None, True, 'traceback', Wildcard, True, Wildcard, None, Wildcard), ('status', 'testrepository.tests.test_repository.Case.method', 'fail', None, True, None, None, False, None, None, Wildcard), ('stopTestRun',) ]) def test_get_subunit_from_test_run(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() make_test('testrepository.tests.test_repository.Case.method', True).run(legacy_result) legacy_result.stopTestRun() inserted = result.get_id() run = repo.get_test_run(inserted) as_subunit = run.get_subunit_stream() stream = v2.ByteStreamToStreamResult(as_subunit) log = StreamResult() log.startTestRun() try: stream.run(log) finally: log.stopTestRun() self.assertEqual( log._events, [ ('startTestRun',), ('status', 'testrepository.tests.test_repository.Case.method', 'inprogress', None, True, None, None, False, None, None, Wildcard), ('status', 'testrepository.tests.test_repository.Case.method', 'success', None, True, None, None, False, None, None, Wildcard), ('stopTestRun',) ]) def test_get_test_from_test_run(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() make_test('testrepository.tests.test_repository.Case.method', True).run(legacy_result) legacy_result.stopTestRun() inserted = result.get_id() run = repo.get_test_run(inserted) test = run.get_test() result = testtools.StreamSummary() result.startTestRun() try: test.run(result) finally: result.stopTestRun() self.assertEqual(1, result.testsRun) def test_get_times_unknown_tests_are_unknown(self): repo = 
self.repo_impl.initialise(self.sample_url) test_ids = set(['foo', 'bar']) self.assertEqual(test_ids, repo.get_test_times(test_ids)['unknown']) def test_inserted_test_times_known(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() test_name = 'testrepository.tests.test_repository.Case.method' run_timed(test_name, 0.1, legacy_result) legacy_result.stopTestRun() self.assertEqual({test_name: 0.1}, repo.get_test_times([test_name])['known']) def test_inserted_exists_no_impact_on_test_times(self): repo = self.repo_impl.initialise(self.sample_url) result = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(result) legacy_result.startTestRun() test_name = 'testrepository.tests.test_repository.Case.method' run_timed(test_name, 0.1, legacy_result) legacy_result.stopTestRun() result = repo.get_inserter() result.startTestRun() test_name = 'testrepository.tests.test_repository.Case.method' run_timed(test_name, 0.2, result, True) result.stopTestRun() self.assertEqual({test_name: 0.1}, repo.get_test_times([test_name])['known']) def test_get_test_ids(self): repo = self.repo_impl.initialise(self.sample_url) inserter = repo.get_inserter() legacy_result = testtools.ExtendedToStreamDecorator(inserter) legacy_result.startTestRun() test_cases = [PlaceHolder(self.getUniqueString()) for r in range(5)] for test_case in test_cases: test_case.run(legacy_result) legacy_result.stopTestRun() run_id = inserter.get_id() self.assertEqual(run_id, repo.latest_id()) returned_ids = repo.get_test_ids(run_id) self.assertEqual([test.id() for test in test_cases], returned_ids) testrepository-0.0.20/testrepository/tests/test_results.py0000664000175000017500000000406512306632354025440 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
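# Hedged sketch (not part of the original module) mirroring what the tests below
# assert: the time taken is the span between the first and last timestamped
# events, and each 'fail' status adds to the failure count. 'example' is an
# illustrative test id only.
from datetime import datetime, timedelta

from testrepository.results import SummarizingResult

result = SummarizingResult()
result.startTestRun()
now = datetime.now()
result.status(timestamp=now)
result.status(test_id='example', test_status='fail')
result.status(timestamp=now + timedelta(seconds=2))
result.stopTestRun()
# result.get_num_failures() == 1 and result.get_time_taken() == 2.0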
from datetime import ( datetime, timedelta, ) import sys from testtools import TestCase from testrepository.results import SummarizingResult class TestSummarizingResult(TestCase): def test_empty(self): result = SummarizingResult() result.startTestRun() result.stopTestRun() self.assertEqual(0, result.testsRun) self.assertEqual(0, result.get_num_failures()) self.assertIs(None, result.get_time_taken()) def test_time_taken(self): result = SummarizingResult() now = datetime.now() result.startTestRun() result.status(timestamp=now) result.status(timestamp=now + timedelta(seconds=5)) result.stopTestRun() self.assertEqual(5.0, result.get_time_taken()) def test_num_failures(self): result = SummarizingResult() result.startTestRun() try: 1/0 except ZeroDivisionError: error = sys.exc_info() result.status(test_id='foo', test_status='fail') result.status(test_id='foo', test_status='fail') result.stopTestRun() self.assertEqual(2, result.get_num_failures()) def test_tests_run(self): result = SummarizingResult() result.startTestRun() for i in range(5): result.status(test_id='foo', test_status='success') result.stopTestRun() self.assertEqual(5, result.testsRun) testrepository-0.0.20/testrepository/tests/test_setup.py0000664000175000017500000000322212376202254025070 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for setup.py.""" import doctest import os.path import subprocess import sys from testtools import ( TestCase, ) from testtools.matchers import ( DocTestMatches, MatchesAny, ) class TestCanSetup(TestCase): def test_bdist(self): # Single smoke test to make sure we can build a package. path = os.path.join(os.path.dirname(__file__), '..', '..', 'setup.py') proc = subprocess.Popen([sys.executable, path, 'bdist'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True) output, err = proc.communicate() self.assertThat(output, MatchesAny( # win32 DocTestMatches("""... running install_scripts ... adding '...testr' ...""", doctest.ELLIPSIS), # unixen DocTestMatches("""... ...bin/testr ... """, doctest.ELLIPSIS) )) self.assertEqual(0, proc.returncode, "Setup failed out=%r err=%r" % (output, err)) testrepository-0.0.20/testrepository/tests/test_matchers.py0000664000175000017500000000235012306632354025540 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for matchers used by or for testing testrepository.""" import sys from testtools import TestCase class TestWildcard(TestCase): def test_wildcard_equals_everything(self): from testrepository.tests import Wildcard self.assertTrue(Wildcard == 5) self.assertTrue(Wildcard == 'orange') self.assertTrue('orange' == Wildcard) self.assertTrue(5 == Wildcard) def test_wildcard_not_equals_nothing(self): from testrepository.tests import Wildcard self.assertFalse(Wildcard != 5) self.assertFalse(Wildcard != 'orange') testrepository-0.0.20/testrepository/tests/test_stubpackage.py0000664000175000017500000000423512306632354026227 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the stubpackage test helper.""" import os.path from testrepository.tests import ResourcedTestCase from testrepository.tests.stubpackage import ( StubPackageResource, TempDirResource, ) class TestStubPackageResource(ResourcedTestCase): def test_has_tempdir(self): resource = StubPackageResource('foo', []) self.assertEqual(1, len(resource.resources)) self.assertIsInstance(resource.resources[0][1], TempDirResource) def test_writes_package(self): resource = StubPackageResource('foo', [('bar.py', 'woo')]) pkg = resource.getResource() self.addCleanup(resource.finishedWith, pkg) self.assertEqual('', open(os.path.join(pkg.base, 'foo', '__init__.py')).read()) self.assertEqual('woo', open(os.path.join(pkg.base, 'foo', 'bar.py')).read()) def test_no__init__(self): resource = StubPackageResource('foo', [('bar.py', 'woo')], init=False) pkg = resource.getResource() self.addCleanup(resource.finishedWith, pkg) self.assertFalse(os.path.exists(os.path.join(pkg.base, 'foo', '__init__.py'))) class TestTempDirResource(ResourcedTestCase): """Tests for the StubPackage resource.""" def test_makes_a_dir(self): resource = TempDirResource() tempdir = resource.getResource() try: self.assertTrue(os.path.exists(tempdir)) finally: resource.finishedWith(tempdir) self.assertFalse(os.path.exists(tempdir)) testrepository-0.0.20/testrepository/tests/stubpackage.py0000664000175000017500000000423212306632354025165 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
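# Hedged usage sketch (not part of the original module), mirroring
# tests/test_stubpackage.py: the resource writes a throwaway package into a
# temporary directory and removes it again once finishedWith() is called.
#
#   resource = StubPackageResource('foo', [('bar.py', 'woo')])
#   pkg = resource.getResource()
#   # pkg.base/foo/ now holds __init__.py and bar.py
#   resource.finishedWith(pkg)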
"""A TestResource that provides a temporary python package.""" import os.path import shutil import tempfile from testresources import TestResource class TempDirResource(TestResource): """A temporary directory resource. This resource is never considered dirty. """ def make(self, dependency_resources): return tempfile.mkdtemp() def clean(self, resource): shutil.rmtree(resource, ignore_errors=True) class StubPackage(object): """A temporary package for tests. :ivar base: The directory containing the package dir. """ class StubPackageResource(TestResource): def __init__(self, packagename, modulelist, init=True): super(StubPackageResource, self).__init__() self.packagename = packagename self.modulelist = modulelist self.init = init self.resources = [('base', TempDirResource())] def make(self, dependency_resources): result = StubPackage() base = dependency_resources['base'] root = os.path.join(base, self.packagename) os.mkdir(root) init_seen = not self.init for modulename, contents in self.modulelist: stream = open(os.path.join(root, modulename), 'wt') try: stream.write(contents) finally: stream.close() if modulename == '__init__.py': init_seen = True if not init_seen: open(os.path.join(root, '__init__.py'), 'wt').close() return result testrepository-0.0.20/testrepository/tests/test_commands.py0000664000175000017500000001651612306632354025544 0ustar robertcrobertc00000000000000# # Copyright (c) 2009, 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""Tests for the commands module.""" import optparse import os.path import sys from testresources import TestResource from testtools.matchers import ( IsInstance, MatchesException, raises, ) from testrepository import commands from testrepository.repository import file from testrepository.tests import ResourcedTestCase from testrepository.tests.monkeypatch import monkeypatch from testrepository.tests.stubpackage import ( StubPackageResource, ) from testrepository.ui import cli, model class TemporaryCommand(object): """A temporary command.""" class TemporaryCommandResource(TestResource): def __init__(self, cmd_name): TestResource.__init__(self) cmd_name = cmd_name.replace('-', '_') self.resources.append(('pkg', StubPackageResource('commands', [('%s.py' % cmd_name, """from testrepository.commands import Command class %s(Command): def run(self): pass """ % cmd_name)], init=False))) self.cmd_name = cmd_name def make(self, dependency_resources): pkg = dependency_resources['pkg'] result = TemporaryCommand() result.path = os.path.join(pkg.base, 'commands') commands.__path__.append(result.path) return result def clean(self, resource): commands.__path__.remove(resource.path) name = 'testrepository.commands.%s' % self.cmd_name if name in sys.modules: del sys.modules[name] class TestFindCommand(ResourcedTestCase): resources = [('cmd', TemporaryCommandResource('foo'))] def test_looksupcommand(self): cmd = commands._find_command('foo') self.assertIsInstance(cmd(None), commands.Command) def test_missing_command(self): self.assertThat(lambda: commands._find_command('bar'), raises(KeyError)) def test_sets_name(self): cmd = commands._find_command('foo') self.assertEqual('foo', cmd.name) class TestNameMangling(ResourcedTestCase): resources = [('cmd', TemporaryCommandResource('foo-bar'))] def test_looksupcommand(self): cmd = commands._find_command('foo-bar') self.assertIsInstance(cmd(None), commands.Command) def test_sets_name(self): cmd = commands._find_command('foo-bar') # The name is preserved, so that 'testr commands' shows something # sensible. 
self.assertEqual('foo-bar', cmd.name) class TestIterCommands(ResourcedTestCase): resources = [ ('cmd1', TemporaryCommandResource('one')), ('cmd2', TemporaryCommandResource('two')), ] def test_iter_commands(self): cmds = list(commands.iter_commands()) cmds = [cmd(None).name for cmd in cmds] # We don't care about all the built in commands cmds = [cmd for cmd in cmds if cmd in ('one', 'two')] self.assertEqual(['one', 'two'], cmds) class TestRunArgv(ResourcedTestCase): def stub__find_command(self, cmd_run): self.calls = [] self.addCleanup(monkeypatch('testrepository.commands._find_command', self._find_command)) self.cmd_run = cmd_run def _find_command(self, cmd_name): self.calls.append(cmd_name) real_run = self.cmd_run class SampleCommand(commands.Command): """A command that is used for testing.""" def execute(self): return real_run(self) return SampleCommand def test_looks_up_cmd(self): self.stub__find_command(lambda x:0) commands.run_argv(['testr', 'foo'], 'in', 'out', 'err') self.assertEqual(['foo'], self.calls) def test_looks_up_cmd_skips_options(self): self.stub__find_command(lambda x:0) commands.run_argv(['testr', '--version', 'foo'], 'in', 'out', 'err') self.assertEqual(['foo'], self.calls) def test_no_cmd_issues_help(self): self.stub__find_command(lambda x:0) commands.run_argv(['testr', '--version'], 'in', 'out', 'err') self.assertEqual(['help'], self.calls) def capture_ui(self, cmd): self.ui = cmd.ui return 0 def test_runs_cmd_with_CLI_UI(self): self.stub__find_command(self.capture_ui) commands.run_argv(['testr', '--version', 'foo'], 'in', 'out', 'err') self.assertEqual(['foo'], self.calls) self.assertIsInstance(self.ui, cli.UI) def test_returns_0_when_None_returned_from_execute(self): self.stub__find_command(lambda x:None) self.assertEqual(0, commands.run_argv(['testr', 'foo'], 'in', 'out', 'err')) def test_returns_execute_result(self): self.stub__find_command(lambda x:1) self.assertEqual(1, commands.run_argv(['testr', 'foo'], 'in', 'out', 'err')) class TestGetCommandParser(ResourcedTestCase): def test_trivial(self): cmd = InstrumentedCommand(model.UI()) parser = commands.get_command_parser(cmd) self.assertThat(parser, IsInstance(optparse.OptionParser)) class InstrumentedCommand(commands.Command): """A command which records methods called on it. The first line is the summary. 
""" def _init(self): self.calls = [] def execute(self): self.calls.append('execute') return commands.Command.execute(self) def run(self): self.calls.append('run') class TestAbstractCommand(ResourcedTestCase): def test_execute_calls_run(self): cmd = InstrumentedCommand(model.UI()) self.assertEqual(0, cmd.execute()) self.assertEqual(['execute', 'run'], cmd.calls) def test_execute_calls_set_command(self): ui = model.UI() cmd = InstrumentedCommand(ui) cmd.execute() self.assertEqual(cmd, ui.cmd) def test_execute_does_not_run_if_set_command_errors(self): class FailUI(object): def set_command(self, ui): return False cmd = InstrumentedCommand(FailUI()) self.assertEqual(1, cmd.execute()) def test_shows_errors_from_execute_returns_3(self): class FailCommand(commands.Command): def run(self): raise Exception("foo") ui = model.UI() cmd = FailCommand(ui) self.assertEqual(3, cmd.execute()) self.assertEqual(1, len(ui.outputs)) self.assertEqual('error', ui.outputs[0][0]) self.assertThat(ui.outputs[0][1], MatchesException(Exception('foo'))) def test_default_repository_factory(self): cmd = commands.Command(model.UI()) self.assertIsInstance(cmd.repository_factory, file.RepositoryFactory) def test_get_summary(self): cmd = InstrumentedCommand self.assertEqual('A command which records methods called on it.', cmd.get_summary()) testrepository-0.0.20/testrepository/tests/test_monkeypatch.py0000664000175000017500000000212112306632354026250 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the monkeypatch helper.""" from testrepository.tests import ResourcedTestCase from testrepository.tests.monkeypatch import monkeypatch reference = 23 class TestMonkeyPatch(ResourcedTestCase): def test_patch_and_restore(self): cleanup = monkeypatch( 'testrepository.tests.test_monkeypatch.reference', 45) self.assertEqual(45, reference) cleanup() self.assertEqual(23, reference) testrepository-0.0.20/testrepository/tests/monkeypatch.py0000664000175000017500000000272312306632354025221 0ustar robertcrobertc00000000000000# # Copyright (c) 2009 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Monkeypatch helper function for tests. This has been moved to fixtures, and should be removed from here. """ def monkeypatch(name, new_value): """Replace name with new_value. 
:return: A callable which will restore the original value. """ location, attribute = name.rsplit('.', 1) # Import, swallowing all errors as any element of location may be # a class or some such thing. try: __import__(location, {}, {}) except ImportError: pass components = location.split('.') current = __import__(components[0], {}, {}) for component in components[1:]: current = getattr(current, component) old_value = getattr(current, attribute) setattr(current, attribute, new_value) def restore(): setattr(current, attribute, old_value) return restore testrepository-0.0.20/testrepository/tests/arguments/0000775000175000017500000000000012377221137024327 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/tests/arguments/test_doubledash.py0000664000175000017500000000302412306632354030050 0ustar robertcrobertc00000000000000# # Copyright (c) 2012 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the doubledash argument type.""" from testrepository.arguments import doubledash from testrepository.tests import ResourcedTestCase class TestArgument(ResourcedTestCase): def test_parses_as_string(self): arg = doubledash.DoubledashArgument() result = arg.parse(['--']) self.assertEqual(['--'], result) def test_fixed_name(self): arg = doubledash.DoubledashArgument() self.assertEqual('doubledash', arg.name) def test_fixed_min_max(self): arg = doubledash.DoubledashArgument() self.assertEqual(0, arg.minimum_count) self.assertEqual(1, arg.maximum_count) def test_parses_non_dash_dash_as_nothing(self): arg = doubledash.DoubledashArgument() args = ['foo', '--'] result = arg.parse(args) self.assertEqual([], result) self.assertEqual(['foo', '--'], args) testrepository-0.0.20/testrepository/tests/arguments/__init__.py0000664000175000017500000000176612306632354026451 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
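# Hedged summary (not part of the original package) of the '--' separator
# behaviour covered in test_doubledash.py above: the argument matches only a
# literal leading '--' and is otherwise optional.
#
#   arg = doubledash.DoubledashArgument()
#   arg.parse(['--'])          # -> ['--']
#   arg.parse(['foo', '--'])   # -> []   (argument list left untouched)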
"""Tests for individual arguments.""" import unittest def test_suite(): names = [ 'command', 'doubledash', 'path', 'string', ] module_names = ['testrepository.tests.arguments.test_' + name for name in names] loader = unittest.TestLoader() return loader.loadTestsFromNames(module_names) testrepository-0.0.20/testrepository/tests/arguments/test_path.py0000664000175000017500000000337712306632354026705 0ustar robertcrobertc00000000000000# # Copyright (c) 2012 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the path argument type.""" import os from os.path import join import tempfile from fixtures import TempDir from testtools.matchers import raises from testrepository.arguments import path from testrepository.tests import ResourcedTestCase class TestArgument(ResourcedTestCase): def test_parses_as_string(self): existingfile = tempfile.NamedTemporaryFile() self.addCleanup(existingfile.close) arg = path.ExistingPathArgument('path') result = arg.parse([existingfile.name]) self.assertEqual([existingfile.name], result) def test_rejects_doubledash(self): base = self.useFixture(TempDir()).path arg = path.ExistingPathArgument('path') self.addCleanup(os.chdir, os.getcwd()) os.chdir(base) with open('--', 'wt') as f:pass self.assertThat(lambda: arg.parse(['--']), raises(ValueError)) def test_rejects_missing_file(self): base = self.useFixture(TempDir()).path arg = path.ExistingPathArgument('path') self.assertThat(lambda: arg.parse([join(base, 'foo')]), raises(ValueError)) testrepository-0.0.20/testrepository/tests/arguments/test_string.py0000664000175000017500000000225512306632354027251 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""Tests for the string argument type.""" from testtools.matchers import raises from testrepository.arguments import string from testrepository.tests import ResourcedTestCase class TestArgument(ResourcedTestCase): def test_parses_as_string(self): arg = string.StringArgument('name') result = arg.parse(['load']) self.assertEqual(['load'], result) def test_rejects_doubledash(self): arg = string.StringArgument('name') self.assertThat(lambda: arg.parse(['--']), raises(ValueError)) testrepository-0.0.20/testrepository/tests/arguments/test_command.py0000664000175000017500000000240112306632354027352 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """Tests for the command argument.""" from testtools.matchers import raises from testrepository.arguments import command from testrepository.commands import load from testrepository.tests import ResourcedTestCase class TestArgument(ResourcedTestCase): def test_looks_up_command(self): arg = command.CommandArgument('name') result = arg.parse(['load']) self.assertEqual([load.load], result) def test_no_command(self): arg = command.CommandArgument('name') self.assertThat(lambda: arg.parse(['one']), raises(ValueError("Could not find command 'one'."))) testrepository-0.0.20/testrepository/tests/test_testcommand.py0000664000175000017500000006313212336175304026255 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""Tests for the testcommand module.""" from io import BytesIO import os.path import optparse import re from extras import try_import import subunit v2_avail = try_import('subunit.ByteStreamToStreamResult') from testtools.compat import _b from testtools.matchers import ( Equals, MatchesAny, MatchesException, raises, ) from testtools.testresult.doubles import ExtendedTestResult from testrepository.commands import run from testrepository.ui.model import UI from testrepository.repository import memory from testrepository.testcommand import TestCommand from testrepository.tests import ResourcedTestCase, Wildcard from testrepository.tests.stubpackage import TempDirResource from testrepository.tests.test_repository import run_timed class FakeTestCommand(TestCommand): def __init__(self, ui, repo): TestCommand.__init__(self, ui, repo) self.oldschool = True class TestTestCommand(ResourcedTestCase): resources = [('tempdir', TempDirResource())] def get_test_ui_and_cmd(self, options=(), args=(), repository=None): self.dirty() ui = UI(options=options, args=args) ui.here = self.tempdir return ui, self.useFixture(TestCommand(ui, repository)) def get_test_ui_and_cmd2(self, options=(), args=()): self.dirty() ui = UI(options=options, args=args) ui.here = self.tempdir cmd = run.run(ui) ui.set_command(cmd) return ui, cmd def dirty(self): # Ugly: TODO - improve testresources to make this go away. dict(self.resources)['tempdir']._dirty = True def config_path(self): return os.path.join(self.tempdir, '.testr.conf') def set_config(self, text): stream = open(self.config_path(), 'wt') try: stream.write(text) finally: stream.close() def test_takes_ui(self): ui = UI() ui.here = self.tempdir command = TestCommand(ui, None) self.assertEqual(command.ui, ui) def test_TestCommand_is_a_fixture(self): ui = UI() ui.here = self.tempdir command = TestCommand(ui, None) command.setUp() command.cleanUp() def test_TestCommand_get_run_command_outside_setUp_fails(self): self.dirty() ui = UI() ui.here = self.tempdir command = TestCommand(ui, None) self.set_config('[DEFAULT]\ntest_command=foo\n') self.assertThat(command.get_run_command, raises(TypeError)) command.setUp() command.cleanUp() self.assertThat(command.get_run_command, raises(TypeError)) def test_TestCommand_cleanUp_disposes_instances(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo\n' 'instance_dispose=bar $INSTANCE_IDS\n') command._instances.update([_b('baz'), _b('quux')]) command.cleanUp() command.setUp() self.assertEqual([ ('values', [('running', 'bar baz quux')]), ('popen', ('bar baz quux',), {'shell': True}), ('communicate',)], ui.outputs) def test_TestCommand_cleanUp_disposes_instances_fail_raises(self): ui, command = self.get_test_ui_and_cmd() ui.proc_results = [1] self.set_config( '[DEFAULT]\ntest_command=foo\n' 'instance_dispose=bar $INSTANCE_IDS\n') command._instances.update([_b('baz'), _b('quux')]) self.assertThat(command.cleanUp, raises(ValueError('Disposing of instances failed, return 1'))) command.setUp() def test_get_run_command_no_config_file_errors(self): ui, command = self.get_test_ui_and_cmd() self.assertThat(command.get_run_command, raises(ValueError('No .testr.conf config file'))) def test_get_run_command_no_config_settings_errors(self): ui, command = self.get_test_ui_and_cmd() self.set_config('') self.assertThat(command.get_run_command, raises(ValueError( 'No test_command option present in .testr.conf'))) def test_get_run_command_returns_fixture_makes_IDFILE(self): ui, command = self.get_test_ui_and_cmd() 
self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') fixture = command.get_run_command(['failing', 'alsofailing']) try: fixture.setUp() list_file_path = fixture.list_file_name source = open(list_file_path, 'rt') try: list_file_content = source.read() finally: source.close() self.assertEqual("failing\nalsofailing\n", list_file_content) finally: fixture.cleanUp() self.assertFalse(os.path.exists(list_file_path)) def test_get_run_command_IDFILE_variable_setting(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') fixture = self.useFixture( command.get_run_command(['failing', 'alsofailing'])) expected_cmd = 'foo --load-list %s' % fixture.list_file_name self.assertEqual(expected_cmd, fixture.cmd) def test_get_run_command_IDLIST_variable_setting(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\n') fixture = self.useFixture( command.get_run_command(['failing', 'alsofailing'])) expected_cmd = 'foo failing alsofailing' self.assertEqual(expected_cmd, fixture.cmd) def test_get_run_command_IDLIST_default_is_empty(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\n') fixture = self.useFixture(command.get_run_command()) expected_cmd = 'foo ' self.assertEqual(expected_cmd, fixture.cmd) def test_get_run_command_default_and_list_expands(self): ui, command = self.get_test_ui_and_cmd() if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='returned', test_status='exists') stream.status(test_id='ids', test_status='exists') subunit_bytes = buffer.getvalue() else: subunit_bytes = _b('returned\nids\n') ui.proc_outputs = [subunit_bytes] ui.options = optparse.Values() ui.options.parallel = True ui.options.concurrency = 2 self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST $LISTOPT\n' 'test_id_list_default=whoo yea\n' 'test_list_option=--list\n') fixture = self.useFixture(command.get_run_command()) expected_cmd = 'foo returned ids ' self.assertEqual(expected_cmd, fixture.cmd) def test_get_run_command_IDLIST_default_passed_normally(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\ntest_id_list_default=whoo yea\n') fixture = self.useFixture(command.get_run_command()) expected_cmd = 'foo whoo yea' self.assertEqual(expected_cmd, fixture.cmd) def test_IDOPTION_evalutes_empty_string_no_ids(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') fixture = self.useFixture(command.get_run_command()) expected_cmd = 'foo ' self.assertEqual(expected_cmd, fixture.cmd) def test_group_regex_option(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\n' 'test_id_option=--load-list $IDFILE\n' 'group_regex=([^\\.]+\\.)+\n') fixture = self.useFixture(command.get_run_command()) self.assertEqual( 'pkg.class.', fixture._group_callback('pkg.class.test_method')) def test_extra_args_passed_in(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDOPTION\ntest_id_option=--load-list $IDFILE\n') fixture = self.useFixture(command.get_run_command( testargs=('bar', 'quux'))) expected_cmd = 'foo bar quux' self.assertEqual(expected_cmd, fixture.cmd) def test_list_tests_requests_concurrency_instances(self): # testr list-tests is 
non-parallel, so needs 1 instance. # testr run triggering list-tests will want to run parallel on all, so # avoid latency by asking for whatever concurrency is up front. # This covers the case for non-listing runs as well, as the code path # is common. self.dirty() ui = UI(options= [('concurrency', 2), ('parallel', True)]) ui.here = self.tempdir cmd = run.run(ui) ui.set_command(cmd) ui.proc_outputs = [_b('returned\ninstances\n')] command = self.useFixture(TestCommand(ui, None)) self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDLIST\ntest_id_list_default=whoo yea\n' 'test_list_option=--list\n' 'instance_provision=provision -c $INSTANCE_COUNT\n' 'instance_execute=quux $INSTANCE_ID -- $COMMAND\n') fixture = self.useFixture(command.get_run_command(test_ids=['1'])) fixture.list_tests() self.assertEqual(set([_b('returned'), _b('instances')]), command._instances) self.assertEqual(set([]), command._allocated_instances) self.assertThat(ui.outputs, MatchesAny(Equals([ ('values', [('running', 'provision -c 2')]), ('popen', ('provision -c 2',), {'shell': True, 'stdout': -1}), ('communicate',), ('values', [('running', 'quux instances -- foo --list whoo yea')]), ('popen',('quux instances -- foo --list whoo yea',), {'shell': True, 'stdin': -1, 'stdout': -1}), ('communicate',)]), Equals([ ('values', [('running', 'provision -c 2')]), ('popen', ('provision -c 2',), {'shell': True, 'stdout': -1}), ('communicate',), ('values', [('running', 'quux returned -- foo --list whoo yea')]), ('popen',('quux returned -- foo --list whoo yea',), {'shell': True, 'stdin': -1, 'stdout': -1}), ('communicate',)]))) def test_list_tests_uses_instances(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDLIST\ntest_id_list_default=whoo yea\n' 'test_list_option=--list\n' 'instance_execute=quux $INSTANCE_ID -- $COMMAND\n') fixture = self.useFixture(command.get_run_command()) command._instances.add(_b('bar')) fixture.list_tests() self.assertEqual(set([_b('bar')]), command._instances) self.assertEqual(set([]), command._allocated_instances) self.assertEqual([ ('values', [('running', 'quux bar -- foo --list whoo yea')]), ('popen', ('quux bar -- foo --list whoo yea',), {'shell': True, 'stdin': -1, 'stdout': -1}), ('communicate',)], ui.outputs) def test_list_tests_cmd(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDLIST\ntest_id_list_default=whoo yea\n' 'test_list_option=--list\n') fixture = self.useFixture(command.get_run_command()) expected_cmd = 'foo --list whoo yea' self.assertEqual(expected_cmd, fixture.list_cmd) def test_list_tests_parsing(self): if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='returned', test_status='exists') stream.status(test_id='ids', test_status='exists') subunit_bytes = buffer.getvalue() else: subunit_bytes = _b('returned\nids\n') ui, command = self.get_test_ui_and_cmd() ui.proc_outputs = [subunit_bytes] self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDLIST\ntest_id_list_default=whoo yea\n' 'test_list_option=--list\n') fixture = self.useFixture(command.get_run_command()) self.assertEqual(set(['returned', 'ids']), set(fixture.list_tests())) def test_list_tests_nonzero_exit(self): ui, command = self.get_test_ui_and_cmd() ui.proc_results = [1] self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDLIST\ntest_id_list_default=whoo yea\n' 'test_list_option=--list\n') fixture = self.useFixture(command.get_run_command()) 
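        # A non-zero exit from the listing callout should surface as a
        # ValueError rather than being treated as an empty test list.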
self.assertThat(lambda:fixture.list_tests(), raises(ValueError)) def test_partition_tests_smoke(self): repo = memory.RepositoryFactory().initialise('memory:') # Seed with 1 slow and 2 tests making up 2/3 the time. result = repo.get_inserter() result.startTestRun() run_timed("slow", 3, result) run_timed("fast1", 1, result) run_timed("fast2", 1, result) result.stopTestRun() ui, command = self.get_test_ui_and_cmd(repository=repo) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST $LISTOPT\n' 'test_list_option=--list\n') fixture = self.useFixture(command.get_run_command()) # partitioning by two generates 'slow' and the two fast ones as partitions # flushed out by equal numbers of unknown duration tests. test_ids = frozenset(['slow', 'fast1', 'fast2', 'unknown1', 'unknown2', 'unknown3', 'unknown4']) partitions = fixture.partition_tests(test_ids, 2) self.assertTrue('slow' in partitions[0]) self.assertFalse('fast1' in partitions[0]) self.assertFalse('fast2' in partitions[0]) self.assertFalse('slow' in partitions[1]) self.assertTrue('fast1' in partitions[1]) self.assertTrue('fast2' in partitions[1]) self.assertEqual(3, len(partitions[0])) self.assertEqual(4, len(partitions[1])) def test_partition_tests_914359(self): # When two partitions have the same duration, timed tests should be # appended to the shortest partition. In theory this doesn't matter, # but in practice, if a test is recorded with 0 duration (e.g. due to a # bug), it is better to have them split out rather than all in one # partition. 0 duration tests are unlikely to really be 0 duration. repo = memory.RepositoryFactory().initialise('memory:') # Seed with two 0-duration tests. result = repo.get_inserter() result.startTestRun() run_timed("zero1", 0, result) run_timed("zero2", 0, result) result.stopTestRun() ui, command = self.get_test_ui_and_cmd(repository=repo) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\n') fixture = self.useFixture(command.get_run_command()) # partitioning by two should generate two one-entry partitions. test_ids = frozenset(['zero1', 'zero2']) partitions = fixture.partition_tests(test_ids, 2) self.assertEqual(1, len(partitions[0])) self.assertEqual(1, len(partitions[1])) def test_partition_tests_with_grouping(self): repo = memory.RepositoryFactory().initialise('memory:') result = repo.get_inserter() result.startTestRun() run_timed("TestCase1.slow", 3, result) run_timed("TestCase2.fast1", 1, result) run_timed("TestCase2.fast2", 1, result) result.stopTestRun() ui, command = self.get_test_ui_and_cmd(repository=repo) self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST $LISTOPT\n' 'test_list_option=--list\n') fixture = self.useFixture(command.get_run_command()) test_ids = frozenset(['TestCase1.slow', 'TestCase1.fast', 'TestCase1.fast2', 'TestCase2.fast1', 'TestCase3.test1', 'TestCase3.test2', 'TestCase2.fast2', 'TestCase4.test', 'testdir.testfile.TestCase5.test']) regex = 'TestCase[0-5]' def group_id(test_id, regex=re.compile('TestCase[0-5]')): match = regex.match(test_id) if match: return match.group(0) # There isn't a public way to define a group callback [as yet]. 
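        # With this regex, group_id('TestCase1.slow') returns 'TestCase1', so all
        # TestCase1 tests share one group; ids that do not match (match() anchors
        # at the start, e.g. 'testdir.testfile.TestCase5.test') fall back to their
        # own test id as the group inside partition_tests.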
fixture._group_callback = group_id partitions = fixture.partition_tests(test_ids, 2) # Timed groups are deterministic: self.assertTrue('TestCase2.fast1' in partitions[0]) self.assertTrue('TestCase2.fast2' in partitions[0]) self.assertTrue('TestCase1.slow' in partitions[1]) self.assertTrue('TestCase1.fast' in partitions[1]) self.assertTrue('TestCase1.fast2' in partitions[1]) # Untimed groups just need to be kept together: if 'TestCase3.test1' in partitions[0]: self.assertTrue('TestCase3.test2' in partitions[0]) if 'TestCase4.test' not in partitions[0]: self.assertTrue('TestCase4.test' in partitions[1]) if 'testdir.testfile.TestCase5.test' not in partitions[0]: self.assertTrue('testdir.testfile.TestCase5.test' in partitions[1]) def test_run_tests_with_instances(self): # when there are instances and no instance_execute, run_tests acts as # normal. ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\n') command._instances.update([_b('foo'), _b('bar')]) fixture = self.useFixture(command.get_run_command()) procs = fixture.run_tests() self.assertEqual([ ('values', [('running', 'foo ')]), ('popen', ('foo ',), {'shell': True, 'stdin': -1, 'stdout': -1})], ui.outputs) def test_run_tests_with_existing_instances_configured(self): # when there are instances present, they are pulled out for running # tests. ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\n' 'instance_execute=quux $INSTANCE_ID -- $COMMAND\n') command._instances.add(_b('bar')) fixture = self.useFixture(command.get_run_command(test_ids=['1'])) procs = fixture.run_tests() self.assertEqual([ ('values', [('running', 'quux bar -- foo 1')]), ('popen', ('quux bar -- foo 1',), {'shell': True, 'stdin': -1, 'stdout': -1})], ui.outputs) # No --parallel, so the one instance should have been allocated. self.assertEqual(set([_b('bar')]), command._instances) self.assertEqual(set([_b('bar')]), command._allocated_instances) # And after the process is run, bar is returned for re-use. procs[0].stdout.read() procs[0].wait() self.assertEqual(0, procs[0].returncode) self.assertEqual(set([_b('bar')]), command._instances) self.assertEqual(set(), command._allocated_instances) def test_run_tests_allocated_instances_skipped(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDLIST\n' 'instance_execute=quux $INSTANCE_ID -- $COMMAND\n') command._instances.update([_b('bar'), _b('baz')]) command._allocated_instances.add(_b('baz')) fixture = self.useFixture(command.get_run_command(test_ids=['1'])) procs = fixture.run_tests() self.assertEqual([ ('values', [('running', 'quux bar -- foo 1')]), ('popen', ('quux bar -- foo 1',), {'shell': True, 'stdin': -1, 'stdout': -1})], ui.outputs) # No --parallel, so the one instance should have been allocated. self.assertEqual(set([_b('bar'), _b('baz')]), command._instances) self.assertEqual(set([_b('bar'), _b('baz')]), command._allocated_instances) # And after the process is run, bar is returned for re-use. 
procs[0].wait() procs[0].stdout.read() self.assertEqual(0, procs[0].returncode) self.assertEqual(set([_b('bar'), _b('baz')]), command._instances) self.assertEqual(set([_b('baz')]), command._allocated_instances) def test_run_tests_list_file_in_FILES(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\ntest_command=foo $IDFILE\n' 'instance_execute=quux $INSTANCE_ID $FILES -- $COMMAND\n') command._instances.add(_b('bar')) fixture = self.useFixture(command.get_run_command(test_ids=['1'])) list_file = fixture.list_file_name procs = fixture.run_tests() expected_cmd = 'quux bar %s -- foo %s' % (list_file, list_file) self.assertEqual([ ('values', [('running', expected_cmd)]), ('popen', (expected_cmd,), {'shell': True, 'stdin': -1, 'stdout': -1})], ui.outputs) # No --parallel, so the one instance should have been allocated. self.assertEqual(set([_b('bar')]), command._instances) self.assertEqual(set([_b('bar')]), command._allocated_instances) # And after the process is run, bar is returned for re-use. procs[0].stdout.read() self.assertEqual(0, procs[0].returncode) self.assertEqual(set([_b('bar')]), command._instances) self.assertEqual(set(), command._allocated_instances) def test_filter_tags_parsing(self): ui, command = self.get_test_ui_and_cmd() self.set_config('[DEFAULT]\nfilter_tags=foo bar\n') self.assertEqual(set(['foo', 'bar']), command.get_filter_tags()) def test_callout_concurrency(self): ui, command = self.get_test_ui_and_cmd() ui.proc_outputs = [_b('4')] self.set_config( '[DEFAULT]\ntest_run_concurrency=probe\n' 'test_command=foo\n') fixture = self.useFixture(command.get_run_command()) self.assertEqual(4, fixture.callout_concurrency()) self.assertEqual([ ('popen', ('probe',), {'shell': True, 'stdin': -1, 'stdout': -1}), ('communicate',)], ui.outputs) def test_callout_concurrency_failed(self): ui, command = self.get_test_ui_and_cmd() ui.proc_results = [1] self.set_config( '[DEFAULT]\ntest_run_concurrency=probe\n' 'test_command=foo\n') fixture = self.useFixture(command.get_run_command()) self.assertThat(lambda:fixture.callout_concurrency(), raises( ValueError("test_run_concurrency failed: exit code 1, stderr=''"))) self.assertEqual([ ('popen', ('probe',), {'shell': True, 'stdin': -1, 'stdout': -1}), ('communicate',)], ui.outputs) def test_callout_concurrency_not_set(self): ui, command = self.get_test_ui_and_cmd() self.set_config( '[DEFAULT]\n' 'test_command=foo\n') fixture = self.useFixture(command.get_run_command()) self.assertEqual(None, fixture.callout_concurrency()) self.assertEqual([], ui.outputs) def test_filter_tests_by_regex_only(self): if v2_avail: buffer = BytesIO() stream = subunit.StreamResultToBytes(buffer) stream.status(test_id='returned', test_status='exists') stream.status(test_id='ids', test_status='exists') subunit_bytes = buffer.getvalue() else: subunit_bytes = _b('returned\nids\n') ui, command = self.get_test_ui_and_cmd() ui.proc_outputs = [subunit_bytes] self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDLIST\ntest_id_list_default=whoo yea\n' 'test_list_option=--list\n') filters = ['return'] fixture = self.useFixture(command.get_run_command(test_filters=filters)) self.assertEqual(['returned'], fixture.test_ids) def test_filter_tests_by_regex_supplied_ids(self): ui, command = self.get_test_ui_and_cmd() ui.proc_outputs = [_b('returned\nids\n')] self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDLIST\ntest_id_list_default=whoo yea\n' 'test_list_option=--list\n') filters = ['return'] fixture = self.useFixture(command.get_run_command( 
test_ids=['return', 'of', 'the', 'king'], test_filters=filters)) self.assertEqual(['return'], fixture.test_ids) def test_filter_tests_by_regex_supplied_ids_multi_match(self): ui, command = self.get_test_ui_and_cmd() ui.proc_outputs = [_b('returned\nids\n')] self.set_config( '[DEFAULT]\ntest_command=foo $LISTOPT $IDLIST\ntest_id_list_default=whoo yea\n' 'test_list_option=--list\n') filters = ['return'] fixture = self.useFixture(command.get_run_command( test_ids=['return', 'of', 'the', 'king', 'thereisnoreturn'], test_filters=filters)) self.assertEqual(['return', 'thereisnoreturn'], fixture.test_ids) testrepository-0.0.20/testrepository/testcommand.py0000664000175000017500000006466112376174757024103 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """The test command that test repository knows how to run.""" from extras import ( try_import, try_imports, ) from collections import defaultdict ConfigParser = try_imports(['ConfigParser', 'configparser']) import io import itertools import operator import os.path import re import subprocess import sys import tempfile import multiprocessing from textwrap import dedent from fixtures import Fixture v2 = try_import('subunit.v2') from testrepository import results from testrepository.testlist import ( parse_enumeration, write_list, ) testrconf_help = dedent(""" Configuring via .testr.conf: --- [DEFAULT] test_command=foo $IDOPTION test_id_option=--bar $IDFILE --- will cause 'testr run' to run 'foo' to execute tests, and 'testr run --failing' will cause 'foo --bar failing.list ' to be run to execute tests. Shell variables are expanded in these commands on platforms that have a shell. The full list of options and variables for .testr.conf: * filter_tags -- a list of tags which should be used to filter test counts. This is useful for stripping out non-test results from the subunit stream such as Zope test layers. These filtered items are still considered for test failures. * test_command -- command line to run to execute tests. * test_id_option -- the value to substitute into test_command when specific test ids should be run. * test_id_list_default -- the value to use for $IDLIST when no specific test ids are being run. * test_list_option -- the option to use to cause the test runner to report on the tests it would run, rather than running them. When supplied the test_command should output on stdout all the test ids that would have been run if every other option and argument was honoured, one per line. This is required for parallel testing, and is substituted into $LISTOPT. * test_run_concurrency -- Optional call out to establish concurrency. Should return one line containing the number of concurrent test runner processes to run. * instance_provision -- provision one or more test run environments. Accepts $INSTANCE_COUNT for the number of instances desired. 
    * instance_execute -- execute a test runner process in a given environment.
      Accepts $INSTANCE_ID, $FILES and $COMMAND. Paths in $FILES should be
      synchronised into the test runner environment filesystem. $COMMAND can
      be adjusted if the paths are synched with different names.
    * instance_dispose -- dispose of one or more test running environments.
      Accepts $INSTANCE_IDS.
    * group_regex -- If set, group tests by the matched section of the test id.
    * $IDOPTION -- the variable to use to trigger running some specific tests.
    * $IDFILE -- A file created before the test command is run and deleted
      afterwards which contains a list of test ids, one per line. This can
      handle test ids with embedded whitespace.
    * $IDLIST -- A list of the test ids to run, separated by spaces. IDLIST
      defaults to an empty string when no test ids are known and no explicit
      default is provided. This will not handle test ids with spaces.

    See the testrepository manual for example .testr.conf files in different
    programming languages.
    """)


class CallWhenProcFinishes(object):
    """Convert a process object to trigger a callback when returncode is set.

    This just wraps the entire object and when the returncode attribute access
    finds a set value, calls the callback.
    """

    def __init__(self, process, callback):
        """Adapt process.

        :param process: A subprocess.Popen object.
        :param callback: The callback to call when the process completes.
        """
        self._proc = process
        self._callback = callback
        self._done = False

    @property
    def stdin(self):
        return self._proc.stdin

    @property
    def stdout(self):
        return self._proc.stdout

    @property
    def stderr(self):
        return self._proc.stderr

    @property
    def returncode(self):
        result = self._proc.returncode
        if not self._done and result is not None:
            self._done = True
            self._callback()
        return result

    def wait(self):
        return self._proc.wait()


compiled_re_type = type(re.compile(''))


class TestListingFixture(Fixture):
    """Write a temporary file to disk with test ids in it."""

    def __init__(self, test_ids, cmd_template, listopt, idoption, ui,
        repository, parallel=True, listpath=None, parser=None,
        test_filters=None, instance_source=None, group_callback=None):
        """Create a TestListingFixture.

        :param test_ids: The test_ids to use. May be None indicating that
            no ids are known and they should be discovered by listing or
            configuration if they must be known to run tests. Test ids are
            needed to run tests when filtering or partitioning is needed:
            if the run concurrency is > 1 partitioning is needed, and
            filtering is needed if the user has passed in filters.
        :param cmd_template: string to be filled out with IDFILE.
        :param listopt: Option to substitute into LISTOPT to cause test
            listing to take place.
        :param idoption: Option to substitute into cmd when supplying any
            test ids.
        :param ui: The UI in use.
        :param repository: The repository to query for test times, if needed.
        :param parallel: If not True, prohibit parallel use: used to implement
            --parallel run recursively.
        :param listpath: The file listing path to use. If None, a unique path
            is created.
        :param parser: An options parser for reading options from.
        :param test_filters: An optional list of test filters to apply. Each
            filter should be a string suitable for passing to re.compile.
            Filters are applied using search() rather than match(), so if
            anchoring is needed it should be included in the regex.
            The test ids used for executing are the union of all the
            individual filters: to take the intersection instead, craft a
            single regex that matches all your criteria.
Filters are automatically applied by run_tests(), or can be applied by calling filter_tests(test_ids). :param instance_source: A source of test run instances. Must support obtain_instance(max_concurrency) -> id and release_instance(id) calls. :param group_callback: If supplied, should be a function that accepts a test id and returns a group id. A group id is an arbitrary value used as a dictionary key in the scheduler. All test ids with the same group id are scheduled onto the same backend test process. """ self.test_ids = test_ids self.template = cmd_template self.listopt = listopt self.idoption = idoption self.ui = ui self.repository = repository self.parallel = parallel self._listpath = listpath self._parser = parser self.test_filters = test_filters self._group_callback = group_callback self._instance_source = instance_source def setUp(self): super(TestListingFixture, self).setUp() variable_regex = '\$(IDOPTION|IDFILE|IDLIST|LISTOPT)' variables = {} list_variables = {'LISTOPT': self.listopt} cmd = self.template try: default_idstr = self._parser.get('DEFAULT', 'test_id_list_default') list_variables['IDLIST'] = default_idstr # In theory we should also support casting this into IDFILE etc - # needs this horrible class refactored. except ConfigParser.NoOptionError as e: if e.message != "No option 'test_id_list_default' in section: 'DEFAULT'": raise default_idstr = None def list_subst(match): return list_variables.get(match.groups(1)[0], '') self.list_cmd = re.sub(variable_regex, list_subst, cmd) nonparallel = (not self.parallel or not getattr(self.ui, 'options', None) or not getattr(self.ui.options, 'parallel', None)) if nonparallel: self.concurrency = 1 else: self.concurrency = self.ui.options.concurrency if not self.concurrency: self.concurrency = self.callout_concurrency() if not self.concurrency: self.concurrency = self.local_concurrency() if not self.concurrency: self.concurrency = 1 if self.test_ids is None: if self.concurrency == 1: if default_idstr: self.test_ids = default_idstr.split() if self.concurrency != 1 or self.test_filters is not None: # Have to be able to tell each worker what to run / filter # tests. self.test_ids = self.list_tests() if self.test_ids is None: # No test ids to supply to the program. self.list_file_name = None name = '' idlist = '' else: self.test_ids = self.filter_tests(self.test_ids) name = self.make_listfile() variables['IDFILE'] = name idlist = ' '.join(self.test_ids) variables['IDLIST'] = idlist def subst(match): return variables.get(match.groups(1)[0], '') if self.test_ids is None: # No test ids, no id option. idoption = '' else: idoption = re.sub(variable_regex, subst, self.idoption) variables['IDOPTION'] = idoption self.cmd = re.sub(variable_regex, subst, cmd) def make_listfile(self): name = None try: if self._listpath: name = self._listpath stream = open(name, 'wb') else: fd, name = tempfile.mkstemp() stream = os.fdopen(fd, 'wb') self.list_file_name = name write_list(stream, self.test_ids) stream.close() except: if name: os.unlink(name) raise self.addCleanup(os.unlink, name) return name def filter_tests(self, test_ids): """Filter test_ids by the test_filters. :return: A list of test ids. """ if self.test_filters is None: return test_ids filters = list(map(re.compile, self.test_filters)) def include(test_id): for pred in filters: if pred.search(test_id): return True return list(filter(include, test_ids)) def list_tests(self): """List the tests returned by list_cmd. :return: A list of test ids. 
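        For example (illustrative): with ``test_command=foo $LISTOPT $IDLIST``,
        ``test_list_option=--list`` and ``test_id_list_default=whoo yea``
        configured in .testr.conf, the listing callout is
        ``foo --list whoo yea``.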
""" if '$LISTOPT' not in self.template: raise ValueError("LISTOPT not configured in .testr.conf") instance, list_cmd = self._per_instance_command(self.list_cmd) try: self.ui.output_values([('running', list_cmd)]) run_proc = self.ui.subprocess_Popen(list_cmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE) out, err = run_proc.communicate() if run_proc.returncode != 0: if v2 is not None: new_out = io.BytesIO() v2.ByteStreamToStreamResult(io.BytesIO(out), 'stdout').run( results.CatFiles(new_out)) out = new_out.getvalue() self.ui.output_stream(io.BytesIO(out)) self.ui.output_stream(io.BytesIO(err)) raise ValueError( "Non-zero exit code (%d) from test listing." % (run_proc.returncode)) ids = parse_enumeration(out) return ids finally: if instance: self._instance_source.release_instance(instance) def _per_instance_command(self, cmd): """Customise cmd to with an instance-id. :param concurrency: The number of instances to ask for (used to avoid death-by-1000 cuts of latency. """ if self._instance_source is None: return None, cmd instance = self._instance_source.obtain_instance(self.concurrency) if instance is not None: try: instance_prefix = self._parser.get( 'DEFAULT', 'instance_execute') variables = { 'INSTANCE_ID': instance.decode('utf8'), 'COMMAND': cmd, # --list-tests cannot use FILES, so handle it being unset. 'FILES': getattr(self, 'list_file_name', None) or '', } variable_regex = '\$(INSTANCE_ID|COMMAND|FILES)' def subst(match): return variables.get(match.groups(1)[0], '') cmd = re.sub(variable_regex, subst, instance_prefix) except ConfigParser.NoOptionError: # Per-instance execution environment not configured. pass return instance, cmd def run_tests(self): """Run the tests defined by the command and ui. :return: A list of spawned processes. """ result = [] test_ids = self.test_ids if self.concurrency == 1 and (test_ids is None or test_ids): # Have to customise cmd here, as instances are allocated # just-in-time. XXX: Indicates this whole region needs refactoring. instance, cmd = self._per_instance_command(self.cmd) self.ui.output_values([('running', cmd)]) run_proc = self.ui.subprocess_Popen(cmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE) # Prevent processes stalling if they read from stdin; we could # pass this through in future, but there is no point doing that # until we have a working can-run-debugger-inline story. run_proc.stdin.close() if instance: return [CallWhenProcFinishes(run_proc, lambda:self._instance_source.release_instance(instance))] else: return [run_proc] test_id_groups = self.partition_tests(test_ids, self.concurrency) for test_ids in test_id_groups: if not test_ids: # No tests in this partition continue fixture = self.useFixture(TestListingFixture(test_ids, self.template, self.listopt, self.idoption, self.ui, self.repository, parallel=False, parser=self._parser, instance_source=self._instance_source)) result.extend(fixture.run_tests()) return result def partition_tests(self, test_ids, concurrency): """Parition test_ids by concurrency. Test durations from the repository are used to get partitions which have roughly the same expected runtime. New tests - those with no recorded duration - are allocated in round-robin fashion to the partitions created using test durations. :return: A list where each element is a distinct subset of test_ids, and the union of all the elements is equal to set(test_ids). 
""" partitions = [list() for i in range(concurrency)] timed_partitions = [[0.0, partition] for partition in partitions] time_data = self.repository.get_test_times(test_ids) timed_tests = time_data['known'] unknown_tests = time_data['unknown'] # Group tests: generate group_id -> test_ids. group_ids = defaultdict(list) if self._group_callback is None: group_callback = lambda _:None else: group_callback = self._group_callback for test_id in test_ids: group_id = group_callback(test_id) or test_id group_ids[group_id].append(test_id) # Time groups: generate three sets of groups: # - fully timed dict(group_id -> time), # - partially timed dict(group_id -> time) and # - unknown (set of group_id) # We may in future treat partially timed different for scheduling, but # at least today we just schedule them after the fully timed groups. timed = {} partial = {} unknown = [] for group_id, group_tests in group_ids.items(): untimed_ids = unknown_tests.intersection(group_tests) group_time = sum([timed_tests[test_id] for test_id in untimed_ids.symmetric_difference(group_tests)]) if not untimed_ids: timed[group_id] = group_time elif group_time: partial[group_id] = group_time else: unknown.append(group_id) # Scheduling is NP complete in general, so we avoid aiming for # perfection. A quick approximation that is sufficient for our general # needs: # sort the groups by time # allocate to partitions by putting each group in to the partition with # the current (lowest time, shortest length[in tests]) def consume_queue(groups): queue = sorted( groups.items(), key=operator.itemgetter(1), reverse=True) for group_id, duration in queue: timed_partitions[0][0] = timed_partitions[0][0] + duration timed_partitions[0][1].extend(group_ids[group_id]) timed_partitions.sort(key=lambda item:(item[0], len(item[1]))) consume_queue(timed) consume_queue(partial) # Assign groups with entirely unknown times in round robin fashion to # the partitions. for partition, group_id in zip(itertools.cycle(partitions), unknown): partition.extend(group_ids[group_id]) return partitions def callout_concurrency(self): """Callout for user defined concurrency.""" try: concurrency_cmd = self._parser.get( 'DEFAULT', 'test_run_concurrency') except ConfigParser.NoOptionError: return None run_proc = self.ui.subprocess_Popen(concurrency_cmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE) out, err = run_proc.communicate() if run_proc.returncode: raise ValueError( "test_run_concurrency failed: exit code %d, stderr='%s'" % ( run_proc.returncode, err.decode('utf8', 'replace'))) return int(out.strip()) def local_concurrency(self): try: return multiprocessing.cpu_count() except NotImplementedError: # No concurrency logic known. return None class TestCommand(Fixture): """Represents the test command defined in .testr.conf. :ivar run_factory: The fixture to use to execute a command. :ivar oldschool: Use failing.list rather than a unique file path. TestCommand is a Fixture. Many uses of it will not require it to be setUp, but calling get_run_command does require it: the fixture state is used to track test environment instances, which are disposed of when cleanUp happens. This is not done per-run-command, because test bisection (amongst other things) uses multiple get_run_command configurations. """ run_factory = TestListingFixture oldschool = False def __init__(self, ui, repository): """Create a TestCommand. :param ui: A testrepository.ui.UI object which is used to obtain the location of the .testr.conf. 
:param repository: A testrepository.repository.Repository used for determining test times when partitioning tests. """ super(TestCommand, self).__init__() self.ui = ui self.repository = repository self._instances = None self._allocated_instances = None def setUp(self): super(TestCommand, self).setUp() self._instances = set() self._allocated_instances = set() self.addCleanup(self._dispose_instances) def _dispose_instances(self): instances = self._instances if instances is None: return self._instances = None self._allocated_instances = None try: dispose_cmd = self.get_parser().get('DEFAULT', 'instance_dispose') except (ValueError, ConfigParser.NoOptionError): return variable_regex = '\$INSTANCE_IDS' dispose_cmd = re.sub(variable_regex, ' '.join(sorted(instance.decode('utf') for instance in instances)), dispose_cmd) self.ui.output_values([('running', dispose_cmd)]) run_proc = self.ui.subprocess_Popen(dispose_cmd, shell=True) run_proc.communicate() if run_proc.returncode: raise ValueError('Disposing of instances failed, return %d' % run_proc.returncode) def get_parser(self): """Get a parser with the .testr.conf in it.""" parser = ConfigParser.ConfigParser() # This possibly should push down into UI. if self.ui.here == 'memory:': return parser if not parser.read(os.path.join(self.ui.here, '.testr.conf')): raise ValueError("No .testr.conf config file") return parser def get_run_command(self, test_ids=None, testargs=(), test_filters=None): """Get the command that would be run to run tests. See TestListingFixture for the definition of test_ids and test_filters. """ if self._instances is None: raise TypeError('TestCommand not setUp') parser = self.get_parser() try: command = parser.get('DEFAULT', 'test_command') except ConfigParser.NoOptionError as e: if e.message != "No option 'test_command' in section: 'DEFAULT'": raise raise ValueError("No test_command option present in .testr.conf") elements = [command] + list(testargs) cmd = ' '.join(elements) idoption = '' if '$IDOPTION' in command: # IDOPTION is used, we must have it configured. try: idoption = parser.get('DEFAULT', 'test_id_option') except ConfigParser.NoOptionError as e: if e.message != "No option 'test_id_option' in section: 'DEFAULT'": raise raise ValueError("No test_id_option option present in .testr.conf") listopt = '' if '$LISTOPT' in command: # LISTOPT is used, test_list_option must be configured. 
try: listopt = parser.get('DEFAULT', 'test_list_option') except ConfigParser.NoOptionError as e: if e.message != "No option 'test_list_option' in section: 'DEFAULT'": raise raise ValueError("No test_list_option option present in .testr.conf") try: group_regex = parser.get('DEFAULT', 'group_regex') except ConfigParser.NoOptionError: group_regex = None if group_regex: def group_callback(test_id, regex=re.compile(group_regex)): match = regex.match(test_id) if match: return match.group(0) else: group_callback = None if self.oldschool: listpath = os.path.join(self.ui.here, 'failing.list') result = self.run_factory(test_ids, cmd, listopt, idoption, self.ui, self.repository, listpath=listpath, parser=parser, test_filters=test_filters, instance_source=self, group_callback=group_callback) else: result = self.run_factory(test_ids, cmd, listopt, idoption, self.ui, self.repository, parser=parser, test_filters=test_filters, instance_source=self, group_callback=group_callback) return result def get_filter_tags(self): parser = self.get_parser() try: tags = parser.get('DEFAULT', 'filter_tags') except ConfigParser.NoOptionError as e: if e.message != "No option 'filter_tags' in section: 'DEFAULT'": raise return set() return set([tag.strip() for tag in tags.split()]) def obtain_instance(self, concurrency): """If possible, get one or more test run environment instance ids. Note this is not threadsafe: calling it from multiple threads would likely result in shared results. """ while len(self._instances) < concurrency: try: cmd = self.get_parser().get('DEFAULT', 'instance_provision') except ConfigParser.NoOptionError: # Instance allocation not configured return None variable_regex = '\$INSTANCE_COUNT' cmd = re.sub(variable_regex, str(concurrency - len(self._instances)), cmd) self.ui.output_values([('running', cmd)]) proc = self.ui.subprocess_Popen( cmd, shell=True, stdout=subprocess.PIPE) out, _ = proc.communicate() if proc.returncode: raise ValueError('Provisioning instances failed, return %d' % proc.returncode) new_instances = set([item.strip() for item in out.split()]) self._instances.update(new_instances) # Cached first. available_instances = self._instances - self._allocated_instances # We only ask for instances when one should be available. result = available_instances.pop() self._allocated_instances.add(result) return result def release_instance(self, instance_id): """Return instance_ids to the pool for reuse.""" self._allocated_instances.remove(instance_id) testrepository-0.0.20/testrepository/arguments/0000775000175000017500000000000012377221137023165 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository/arguments/doubledash.py0000664000175000017500000000206412306632354025652 0ustar robertcrobertc00000000000000# # Copyright (c) 2012 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""An Argument that checks for '--'.""" from testrepository.arguments import AbstractArgument class DoubledashArgument(AbstractArgument): """An argument that captures '--'.""" def __init__(self): super(DoubledashArgument, self).__init__('doubledash', min=0) def _parse_one(self, arg): if arg != '--': raise ValueError('not a doubledash %r' % (arg,)) return arg testrepository-0.0.20/testrepository/arguments/string.py0000664000175000017500000000174112306632354025047 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """An Argument that simply stores a string.""" from testrepository.arguments import AbstractArgument class StringArgument(AbstractArgument): """An argument that stores a string verbatim.""" def _parse_one(self, arg): if arg == '--': raise ValueError('-- is not a valid argument') return arg testrepository-0.0.20/testrepository/arguments/__init__.py0000664000175000017500000001004512306632354025275 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """'Arguments' for testr. This is a small typed arguments concept - which is perhaps obsoleted by argparse in Python 2.7, but for testrepository is an extension used with optparse. The code in this module contains the AbstractArgument base class. Individual argument types are present in e.g. testrepository.arguments.string. See testrepository.commands.Command for usage of Arguments. Plugins and extensions wanting to add argument types should either define them internally or install into testrepository.arguments as somename (perhaps by extending the testrepository.arguments __path__ to include a directory containing their argument types - no __init__ is needed in that directory.) """ import sys from testtools.compat import reraise class AbstractArgument(object): """A argument that a command may need. Arguments can be converted into a summary for describing the UI to users, and provide validator/parsers for the arguments. :ivar: The name of the argument. This is used for retrieving the argument from UI objects, and for generating the summary. """ def __init__(self, name, min=1, max=1): """Create an AbstractArgument. While conceptually a separate SequenceArgument could be used, all arguments support sequencing to avoid unnecessary boilerplate in user code. 
:param name: The name for the argument. :param min: The minimum number of occurences permitted. :param max: The maximum number of occurences permitted. None for unlimited. """ self.name = name self.minimum_count = min self.maximum_count = max def summary(self): """Get a regex-like summary of this argument.""" result = self.name if (self.minimum_count == self.maximum_count and self.minimum_count == 1): return result minmax = (self.minimum_count, self.maximum_count) if minmax == (0, 1): return result + '?' if minmax == (1, None): return result + '+' if minmax == (0, None): return result + '*' if minmax[1] == None: minmax = (minmax[0], '') return result + '{%s,%s}' % minmax def parse(self, argv): """Evaluate arguments in argv. Used arguments are removed from argv. :param argv: The arguments to parse. :return: The parsed results as a list. """ count = 0 result = [] error = None while len(argv) > count and ( self.maximum_count is None or count < self.maximum_count): arg = argv[count] count += 1 try: result.append(self._parse_one(arg)) except ValueError: # argument rejected this element error = sys.exc_info() count -= 1 break if count < self.minimum_count: if error is not None: reraise(error[0], error[1], error[2]) raise ValueError('not enough arguments present/matched in %s' % argv) del argv[:count] return result def _parse_one(self, arg): """Parse a single argument. :param arg: An arg from an argv. :result: The parsed argument. :raises ValueError: If the arg cannot be parsed/validated. """ raise NotImplementedError(self._parse_one) testrepository-0.0.20/testrepository/arguments/path.py0000664000175000017500000000213712306632354024475 0ustar robertcrobertc00000000000000# # Copyright (c) 2012 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """An Argument that gets the name of an existing path.""" import os.path from testrepository.arguments import AbstractArgument class ExistingPathArgument(AbstractArgument): """An argument that stores a string verbatim.""" def _parse_one(self, arg): if arg == '--': raise ValueError('-- is not a valid argument') if not os.path.exists(arg): raise ValueError('No such path %r' % (arg,)) return arg testrepository-0.0.20/testrepository/arguments/command.py0000664000175000017500000000221212306632354025151 0ustar robertcrobertc00000000000000# # Copyright (c) 2010 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. 
"""An Argument that looks up a command object.""" from testrepository.arguments import AbstractArgument from testrepository import commands class CustomError(ValueError): def __str__(self): return self.args[0] class CommandArgument(AbstractArgument): """An argument that looks up a command.""" def _parse_one(self, arg): try: return commands._find_command(arg) except KeyError: raise CustomError("Could not find command '%s'." % arg) testrepository-0.0.20/testrepository/setuptools_command.py0000664000175000017500000000636612306632354025463 0ustar robertcrobertc00000000000000# # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # Copyright (c) 2013 Testrepository Contributors # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the # project source as Apache-2.0 and BSD. You may not use this file except in # compliance with one of these two licences. # # Unless required by applicable law or agreed to in writing, software # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # license you chose for the specific language governing permissions and # limitations under that license. """setuptools/distutils commands to run testr via setup.py Currently provides 'testr' which runs tests using testr. You can pass --coverage which will also export PYTHON='coverage run --source ' and automatically combine the coverage from each testr backend test runner after the run completes. To use, just use setuptools/distribute and depend on testr, and it should be picked up automatically (as the commands are exported in the testrepository package metadata. """ from distutils import cmd import distutils.errors import os import sys from testrepository import commands class Testr(cmd.Command): description = "Run unit tests using testr" user_options = [ ('coverage', None, "Replace PYTHON with coverage and merge coverage " "from each testr worker."), ('testr-args=', 't', "Run 'testr' with these args"), ('omit=', 'o', 'Files to omit from coverage calculations'), ('slowest', None, "Show slowest test times after tests complete."), ] boolean_options = ['coverage', 'slowest'] def _run_testr(self, *args): return commands.run_argv([sys.argv[0]] + list(args), sys.stdin, sys.stdout, sys.stderr) def initialize_options(self): self.testr_args = None self.coverage = None self.omit = "" self.slowest = None def finalize_options(self): if self.testr_args is None: self.testr_args = [] else: self.testr_args = self.testr_args.split() if self.omit: self.omit = "--omit=%s" % self.omit def run(self): """Set up testr repo, then run testr""" if not os.path.isdir(".testrepository"): self._run_testr("init") if self.coverage: self._coverage_before() testr_ret = self._run_testr("run", "--parallel", *self.testr_args) if testr_ret: raise distutils.errors.DistutilsError( "testr failed (%d)" % testr_ret) if self.slowest: print ("Slowest Tests") self._run_testr("slowest") if self.coverage: self._coverage_after() def _coverage_before(self): package = self.distribution.get_name() if package.startswith('python-'): package = package[7:] options = "--source %s --parallel-mode" % package os.environ['PYTHON'] = ("coverage run %s" % options) def _coverage_after(self): os.system("coverage combine") os.system("coverage html -d ./cover %s" % self.omit) testrepository-0.0.20/.testr.conf0000664000175000017500000000027412306632354020131 0ustar 
robertcrobertc00000000000000[DEFAULT] test_command=${PYTHON:-python} -m subunit.run $LISTOPT $IDOPTION testrepository.tests.test_suite test_id_option=--load-list $IDFILE test_list_option=--list ;filter_tags=worker-0 testrepository-0.0.20/COPYING0000664000175000017500000000340312374475461017104 0ustar robertcrobertc00000000000000Testrepository is licensed under two licenses, the Apache License, Version 2.0 or the 3-clause BSD License. You may use this project under either of these licenses - choose the one that works best for you. We require contributions to be licensed under both licenses. The primary difference between them is that the Apache license takes care of potential issues with Patents and other intellectual property concerns that some users or contributors may find important. Generally every source file in Testrepository needs a license grant under both these licenses. As the code is shipped as a single unit, a brief form is used: ---- Copyright (c) [yyyy][,yyyy]* [name or 'Testrepository Contributors'] Licensed under either the Apache License, Version 2.0 or the BSD 3-clause license at the users choice. A copy of both licenses are available in the project source as Apache-2.0 and BSD. You may not use this file except in compliance with one of these two licences. Unless required by applicable law or agreed to in writing, software distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the license you chose for the specific language governing permissions and limitations under that license. ---- A concordance of contributors is maintained here to provide an easy reference for distributions such as Debian that wish to list all the copyright holders in their metadata: * Robert Collins , 2009 * Hewlett-Packard Development Company, L.P., 2013 * IBM Corp., 2013 Code that has been incorporated into Testrepository from other projects will naturally be under its own license, and will retain that license. A known list of such code is maintained here: * No entries. testrepository-0.0.20/Apache-2.00000664000175000017500000002613612306632354017451 0ustar robertcrobertc00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
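Editorial note, not part of the license text: as a purely illustrative sketch of the appendix's instructions, the boilerplate notice wrapped in Python comment syntax would look like the block below, with the bracketed fields replaced by a real year and owner. Test Repository's own sources carry a combined BSD/Apache-2.0 header (see the testr script later in this archive) rather than this notice:

  # Copyright [yyyy] [name of copyright owner]
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #     http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.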
testrepository-0.0.20/MANIFEST.in0000664000175000017500000000041712306632354017600 0ustar robertcrobertc00000000000000include .bzrignore
include .testr.conf
include Apache-2.0
include BSD
include COPYING
include INSTALL.txt
include MANIFEST.in
include Makefile
include NEWS
include README.txt
include doc/*.txt
include testrepository/tests/*.py
recursive-include testrepository/tests *.py
testrepository-0.0.20/testrepository.egg-info/0000775000175000017500000000000012377221137022652 5ustar robertcrobertc00000000000000testrepository-0.0.20/testrepository.egg-info/requires.txt0000664000175000017500000000014212377221136025246 0ustar robertcrobertc00000000000000fixtures
python-subunit >= 0.0.18
testtools >= 0.9.30

[test]
bzr
pytz
testresources
testscenarios
testrepository-0.0.20/testrepository.egg-info/SOURCES.txt0000664000175000017500000000555112377221137024544 0ustar robertcrobertc00000000000000.bzrignore
.testr.conf
Apache-2.0
BSD
COPYING
INSTALL.txt
MANIFEST.in
Makefile
NEWS
README.txt
setup.py
testr
doc/DESIGN.txt
doc/DEVELOPERS.txt
doc/MANUAL.txt
doc/index.txt
testrepository/__init__.py
testrepository/results.py
testrepository/setuptools_command.py
testrepository/testcommand.py
testrepository/testlist.py
testrepository/utils.py
testrepository.egg-info/PKG-INFO
testrepository.egg-info/SOURCES.txt
testrepository.egg-info/dependency_links.txt
testrepository.egg-info/entry_points.txt
testrepository.egg-info/requires.txt
testrepository.egg-info/top_level.txt
testrepository/arguments/__init__.py
testrepository/arguments/command.py
testrepository/arguments/doubledash.py
testrepository/arguments/path.py
testrepository/arguments/string.py
testrepository/commands/__init__.py
testrepository/commands/commands.py
testrepository/commands/failing.py
testrepository/commands/help.py
testrepository/commands/init.py
testrepository/commands/last.py
testrepository/commands/list_tests.py
testrepository/commands/load.py
testrepository/commands/quickstart.py
testrepository/commands/run.py
testrepository/commands/slowest.py
testrepository/commands/stats.py
testrepository/repository/__init__.py
testrepository/repository/file.py
testrepository/repository/memory.py
testrepository/repository/samba_buildfarm.py
testrepository/tests/__init__.py
testrepository/tests/monkeypatch.py
testrepository/tests/stubpackage.py
testrepository/tests/test_arguments.py
testrepository/tests/test_commands.py
testrepository/tests/test_matchers.py
testrepository/tests/test_monkeypatch.py
testrepository/tests/test_repository.py
testrepository/tests/test_results.py
testrepository/tests/test_setup.py
testrepository/tests/test_stubpackage.py
testrepository/tests/test_testcommand.py
testrepository/tests/test_testr.py
testrepository/tests/test_ui.py
testrepository/tests/arguments/__init__.py
testrepository/tests/arguments/test_command.py
testrepository/tests/arguments/test_doubledash.py
testrepository/tests/arguments/test_path.py
testrepository/tests/arguments/test_string.py
testrepository/tests/commands/__init__.py
testrepository/tests/commands/test_commands.py
testrepository/tests/commands/test_failing.py
testrepository/tests/commands/test_help.py
testrepository/tests/commands/test_init.py
testrepository/tests/commands/test_last.py
testrepository/tests/commands/test_list_tests.py
testrepository/tests/commands/test_load.py
testrepository/tests/commands/test_quickstart.py
testrepository/tests/commands/test_run.py
testrepository/tests/commands/test_slowest.py
testrepository/tests/commands/test_stats.py
testrepository/tests/repository/__init__.py
testrepository/tests/repository/test_file.py
testrepository/tests/ui/__init__.py
testrepository/tests/ui/test_cli.py
testrepository/tests/ui/test_decorator.py
testrepository/ui/__init__.py
testrepository/ui/cli.py
testrepository/ui/decorator.py
testrepository/ui/model.py
testrepository-0.0.20/testrepository.egg-info/entry_points.txt0000664000175000017500000000010612377221137026145 0ustar robertcrobertc00000000000000[distutils.commands]
testr = testrepository.setuptools_command:Testr
testrepository-0.0.20/testrepository.egg-info/dependency_links.txt0000664000175000017500000000000112377221137026720 0ustar robertcrobertc00000000000000
testrepository-0.0.20/testrepository.egg-info/top_level.txt0000664000175000017500000000001712377221137025402 0ustar robertcrobertc00000000000000testrepository
testrepository-0.0.20/testrepository.egg-info/PKG-INFO0000664000175000017500000000525212377221136023752 0ustar robertcrobertc00000000000000Metadata-Version: 1.1
Name: testrepository
Version: 0.0.20
Summary: A repository of test results.
Home-page: https://launchpad.net/testrepository
Author: Robert Collins
Author-email: robertc@robertcollins.net
License: UNKNOWN
Description: Test Repository
        +++++++++++++++

        Overview
        ~~~~~~~~

        This project provides a database of test results which can be used as part of developer workflow to ensure/check things like:

        * No commits without having had a test failure, test fixed cycle.
        * No commits without new tests being added.
        * What tests have failed since the last commit (to run just a subset).
        * What tests are currently failing and need work.

        Test results are inserted using subunit (and thus anything that can output subunit or be converted into a subunit stream can be accepted).

        A mailing list for discussion, usage and development is at https://launchpad.net/~testrepository-dev - all are welcome to join. Some folk hang out on #testrepository on irc.freenode.net.

        CI for the project is at http://build.robertcollins.net/job/testrepository-default/.

        Licensing
        ~~~~~~~~~

        Test Repository is under BSD / Apache 2.0 licences. See the file COPYING in the source for details.

        Quick Start
        ~~~~~~~~~~~

        Create a config file::

          $ touch .testr.conf

        Create a repository::

          $ testr init

        Load a test run into the repository::

          $ testr load < testrun

        Query the repository::

          $ testr stats
          $ testr last
          $ testr failing

        Delete a repository::

          $ rm -rf .testrepository

        Documentation
        ~~~~~~~~~~~~~

        More detailed documentation including design and implementation details, a user manual, and guidelines for development of Test Repository itself can be found at https://testrepository.readthedocs.org/en/latest, or in the source tree at doc/ (run make -C doc html).
Keywords: subunit unittest testrunner
Platform: UNKNOWN
Classifier: Development Status :: 6 - Mature
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Software Development :: Testing
testrepository-0.0.20/README.txt0000664000175000017500000000300112306632360017530 0ustar robertcrobertc00000000000000Test Repository
+++++++++++++++

Overview
~~~~~~~~

This project provides a database of test results which can be used as part of developer workflow to ensure/check things like:

* No commits without having had a test failure, test fixed cycle.
* No commits without new tests being added.
* What tests have failed since the last commit (to run just a subset).
* What tests are currently failing and need work.

Test results are inserted using subunit (and thus anything that can output subunit or be converted into a subunit stream can be accepted).

A mailing list for discussion, usage and development is at https://launchpad.net/~testrepository-dev - all are welcome to join. Some folk hang out on #testrepository on irc.freenode.net.

CI for the project is at http://build.robertcollins.net/job/testrepository-default/.

Licensing
~~~~~~~~~

Test Repository is under BSD / Apache 2.0 licences. See the file COPYING in the source for details.

Quick Start
~~~~~~~~~~~

Create a config file::

  $ touch .testr.conf

Create a repository::

  $ testr init

Load a test run into the repository::

  $ testr load < testrun

Query the repository::

  $ testr stats
  $ testr last
  $ testr failing

Delete a repository::

  $ rm -rf .testrepository

Documentation
~~~~~~~~~~~~~

More detailed documentation including design and implementation details, a user manual, and guidelines for development of Test Repository itself can be found at https://testrepository.readthedocs.org/en/latest, or in the source tree at doc/ (run make -C doc html).
testrepository-0.0.20/INSTALL.txt0000664000175000017500000000113012376204670017705 0ustar robertcrobertc00000000000000Installing Test Repository
++++++++++++++++++++++++++

Run time dependencies
~~~~~~~~~~~~~~~~~~~~~

* Python 2.4 or newer.
* subunit (0.0.18 or newer).
* fixtures (https://launchpad.net/python-fixtures, or http://pypi.python.org/pypi/fixtures/).

Test dependencies
~~~~~~~~~~~~~~~~~

* testtools 0.9.8 or newer (the python-testtools package, or http://pypi.python.org/pypi/testtools/).
* testresources (https://launchpad.net/testresources, or http://pypi.python.org/pypi/testresources/).
* testscenarios (https://launchpad.net/testscenarios).

Installing
~~~~~~~~~~

* ./setup.py install
testrepository-0.0.20/setup.cfg0000664000175000017500000000007312377221137017662 0ustar robertcrobertc00000000000000[egg_info]
tag_build = 
tag_date = 0
tag_svn_revision = 0
testrepository-0.0.20/testr0000775000175000017500000000170012306632354017125 0ustar robertcrobertc00000000000000#!/usr/bin/env python
#
# Copyright (c) 2009 Testrepository Contributors
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the users choice. A copy of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.

"""The CLI entry point to testrepository.

No program logic is in this script - see testrepository.commands.run_argv.
"""

import sys

from testrepository.commands import run_argv

if __name__ == "__main__":
    sys.exit(run_argv(sys.argv, sys.stdin, sys.stdout, sys.stderr))
testrepository-0.0.20/PKG-INFO0000664000175000017500000000525212377221137017142 0ustar robertcrobertc00000000000000Metadata-Version: 1.1
Name: testrepository
Version: 0.0.20
Summary: A repository of test results.
Home-page: https://launchpad.net/testrepository
Author: Robert Collins
Author-email: robertc@robertcollins.net
License: UNKNOWN
Description: Test Repository
        +++++++++++++++

        Overview
        ~~~~~~~~

        This project provides a database of test results which can be used as part of developer workflow to ensure/check things like:

        * No commits without having had a test failure, test fixed cycle.
        * No commits without new tests being added.
        * What tests have failed since the last commit (to run just a subset).
        * What tests are currently failing and need work.

        Test results are inserted using subunit (and thus anything that can output subunit or be converted into a subunit stream can be accepted).

        A mailing list for discussion, usage and development is at https://launchpad.net/~testrepository-dev - all are welcome to join. Some folk hang out on #testrepository on irc.freenode.net.

        CI for the project is at http://build.robertcollins.net/job/testrepository-default/.

        Licensing
        ~~~~~~~~~

        Test Repository is under BSD / Apache 2.0 licences. See the file COPYING in the source for details.

        Quick Start
        ~~~~~~~~~~~

        Create a config file::

          $ touch .testr.conf

        Create a repository::

          $ testr init

        Load a test run into the repository::

          $ testr load < testrun

        Query the repository::

          $ testr stats
          $ testr last
          $ testr failing

        Delete a repository::

          $ rm -rf .testrepository

        Documentation
        ~~~~~~~~~~~~~

        More detailed documentation including design and implementation details, a user manual, and guidelines for development of Test Repository itself can be found at https://testrepository.readthedocs.org/en/latest, or in the source tree at doc/ (run make -C doc html).
Keywords: subunit unittest testrunner
Platform: UNKNOWN
Classifier: Development Status :: 6 - Mature
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Software Development :: Testing
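One practical note on the ``entry_points.txt`` shown in the egg-info above: it registers ``testr = testrepository.setuptools_command:Testr`` in the ``distutils.commands`` group, so once testrepository is installed, setuptools-based projects gain a ``testr`` command for their ``setup.py``. A minimal, hypothetical sketch (the project name and package below are placeholders, and this assumes a ``.testr.conf`` sits next to ``setup.py``)::

  # setup.py of a hypothetical project that wants to run its suite via testr.
  # Installing testrepository is what makes the `testr` command available to
  # setuptools here; nothing testr-specific needs to be declared in setup().
  from setuptools import setup

  setup(
      name="example-project",        # placeholder name
      packages=["example_project"],  # placeholder package
  )

With that in place, ``python setup.py testr`` should behave much like running ``testr run`` directly, driven by the project's ``.testr.conf``.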