==> simplestreams_0.1.0-67-g8497b634/.bzrignore <==
gnupg
.gnupg
exdata
exdata-query
examples/**/*.sjson
examples/**/*.gpg
__pycache__
./lib
./include
./local
bin/activate*
bin/easy_install*
bin/pip*
bin/python*
==> simplestreams_0.1.0-67-g8497b634/.gitignore <==
*__pycache__*
*pyc
gnupg
.gnupg
exdata
exdata-query
*.sjson
*.gpg
MANIFEST
.coverage
.tox
*.snap
==> simplestreams_0.1.0-67-g8497b634/LICENSE <==
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year>  <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
==> simplestreams_0.1.0-67-g8497b634/Makefile <==
TENV := ./tools/tenv
EXDATA_SIGN ?= 1
ifeq ($(EXDATA_SIGN),1)
EXDATA_SIGN_ARG := --sign
endif
build:
@echo nothing to do for $@
test: examples-sign
$(TENV) python3 -m pytest -v tests/
flake8:
# any means run via 'flake8' or 'python3 -m flake8'
./tools/run-flake8 any
exdata: exdata/fake exdata/data
exdata/data: exdata-query gnupg
$(TENV) env REAL_DATA=1 ./tools/make-test-data $(EXDATA_SIGN_ARG) exdata-query/ exdata/data
exdata/fake: exdata-query gnupg
$(TENV) ./tools/make-test-data $(EXDATA_SIGN_ARG) exdata-query/ exdata/fake
exdata-query:
rsync -avz --delete --exclude "FILE_DATA_CACHE" --exclude ".bzr/*" cloud-images.ubuntu.com::cloud-images/query/ exdata-query
gnupg: gnupg/README
gnupg/README:
./tools/create-gpgdir
examples-sign: gnupg/README
$(TENV) ./tools/sign-examples
.PHONY: exdata/fake exdata/data exdata-query examples-sign flake8 test
==> simplestreams_0.1.0-67-g8497b634/README.txt <==
== Intro ==
This contains documentation, examples, a Python library, and some tools
for interacting with the simple streams format.
The intent of the simple streams format is to make well-formatted data
available about "products".
There is more documentation in doc/README.
There are examples in examples/.
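For orientation: a products file nests data by content_id, product,
version, and item, and the (product_name, version_name, item_name)
"pedigree" used by the tools below indexes into that tree. A
hand-written sketch of the products:1.0 shape (the values here are
made up; see examples/ for real data):

    tree = {
        "format": "products:1.0",
        "content_id": "com.example.foocloud:released:download",
        "products": {
            "com.example.foocloud:server:amd64": {
                "arch": "amd64",
                "versions": {
                    "20130326": {
                        "items": {
                            "disk1.img": {
                                "ftype": "disk1.img",
                                "path": "files/server-20130326-d1.img",
                                "size": 466772992,
                            },
                        },
                    },
                },
            },
        },
    }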
== Simple Streams Getting Started ==
= Mirroring =
To mirror one source (http or file) to a local directory, see tools/do-mirror.
For example, to mirror the 'foocloud' example content, do:
./tools/tenv do-mirror examples/foocloud/ my.out streams/v1/index.json
That will create a full mirror in my.out/.
./tools/tenv do-mirror --mirror=http://download.cirros-cloud.net/ \
--max=1 examples/cirros/ cirros.mirror/
That will create a mirror of cirros data in cirros.mirror, with only
the latest file from each product.
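The same sync can be driven from Python. The following is a minimal
sketch built from the classes bin/sstream-mirror (included later in
this tree) uses; the source URL and output directory are placeholders:

    from simplestreams import mirrors, objectstores, util

    # Split the mirror argument into a base URL and an index path
    # (a default index path is chosen when none is given).
    mirror_url, path = util.path_from_mirror_url(
        'http://cloud-images.ubuntu.com/releases/', None)

    def policy(content, path):
        # gpg-check signed (sjson) data; pass unsigned json through.
        if path.endswith('sjson'):
            return util.read_signed(content)
        return content

    smirror = mirrors.UrlMirrorReader(mirror_url, policy=policy)
    tstore = objectstores.FileStore('my.out')  # placeholder output dir
    tmirror = mirrors.ObjectFilterMirror(config={'filters': []},
                                         objectstore=tstore)
    tmirror.sync(smirror, path)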
= Hooks =
To use the "command hooks mirror" for invoking commands to synchronize
between one source and another, see bin/sstream-sync.
As an example, the following runs the debug hook against the example
'foocloud' data:
./tools/tenv sstream-sync --hook=hook-debug \
--path=streams/v1/index.json examples/foocloud/
You can also run it with cloud-images.ubuntu.com data like this:
./tools/tenv sstream-sync \
--item-skip-download --hook=./tools/hook-debug \
--path=streams/v1/index.sjson http://cloud-images.ubuntu.com/releases/
The 'hook-debug' program simply outputs the data it is invoked with. It does
not actually mirror anything.
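A hook can be any executable found on PATH; sstream-sync runs it for
each event with the stream fields passed through the environment
(tools/hook-debug is the working reference). A minimal stand-in,
assuming only that fields arrive as environment variables, might be:

    #!/usr/bin/env python3
    # Minimal debug-hook sketch: print whatever sstream-sync hands us
    # and exit 0 so the sync continues.  The exact variables provided
    # are defined by simplestreams.mirrors.command_hook.
    import os
    import sys

    def main():
        print("hook invoked with args: %s" % sys.argv[1:])
        for key in sorted(os.environ):
            print("%s=%s" % (key, os.environ[key]))
        return 0

    if __name__ == '__main__':
        sys.exit(main())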
== Glance ==
An example of mirroring a downloadable image source into Glance, with
Swift serving a localized image-id format:
./tools/sstream-mirror-glance --region=RegionOne \
--cloud-name=localcloud "--content-id=localcloud.%(region)s:partners" \
--output-swift=published/ --max=1 --name-prefix="ubuntu/" \
http://cloud-images.ubuntu.com/releases/ streams/v1/index.json
==> simplestreams_0.1.0-67-g8497b634/bin/json2streams <==
#!/usr/bin/env python3
# Copyright (C) 2013, 2015 Canonical Ltd.
import sys
from simplestreams.json2streams import main
if __name__ == '__main__':
sys.exit(main())
==> simplestreams_0.1.0-67-g8497b634/bin/sstream-mirror <==
#!/usr/bin/python3
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams.  If not, see <http://www.gnu.org/licenses/>.
import argparse
import sys
from simplestreams import filters
from simplestreams import log
from simplestreams import mirrors
from simplestreams import objectstores
from simplestreams import util
class DotProgress(object):
def __init__(self, expected=None, columns=80):
self.curpath = None
self.printed = None
self.expected = expected
self.bytes_read = 0
self.columns = columns
def write_progress(self, path, cur, total):
if self.curpath != path:
self.printed = 0
self.curpath = path
status = ""
if self.expected:
status = (" %02s%%" %
(int(self.bytes_read * 100 / self.expected)))
sys.stderr.write('=> %s [%s]%s\n' % (path, total, status))
if cur == total:
sys.stderr.write("\n")
if self.expected:
self.bytes_read += total
return
toprint = int(cur * self.columns / total) - self.printed
if toprint <= 0:
return
sys.stderr.write('.' * toprint)
sys.stderr.flush()
self.printed += toprint
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--keep', action='store_true', default=False,
help='keep items in target up to MAX items '
'even after they have fallen out of the source')
parser.add_argument('--max', type=int, default=None,
help='store at most MAX items in the target')
parser.add_argument('--path', default=None,
help='sync from index or products file in mirror')
parser.add_argument('--no-item-download', action='store_true',
default=False,
help='do not download items with a "path"')
parser.add_argument('--dry-run', action='store_true', default=False,
help='only report what would be done')
parser.add_argument('--progress', action='store_true', default=False,
help='show progress for downloading files')
parser.add_argument('--mirror', action='append', default=[],
dest="mirrors",
help='additional mirrors to find referenced files')
parser.add_argument('--verbose', '-v', action='count', default=0)
parser.add_argument('--log-file', default=sys.stderr,
type=argparse.FileType('w'))
parser.add_argument('--keyring', action='store', default=None,
help='keyring to be specified to gpg via --keyring')
parser.add_argument('--no-verify', '-U', action='store_false',
dest='verify', default=True,
help="do not gpg check signed json files")
parser.add_argument('--no-checksumming-reader', action='store_false',
dest='checksumming_reader', default=True,
help=("do not call 'insert_item' with a reader"
" that does checksumming."))
parser.add_argument('source_mirror')
parser.add_argument('output_d')
parser.add_argument('filters', nargs='*', default=[])
args = parser.parse_args()
(mirror_url, initial_path) = util.path_from_mirror_url(args.source_mirror,
args.path)
def policy(content, path):
if initial_path.endswith('sjson'):
return util.read_signed(content,
keyring=args.keyring,
checked=args.verify)
else:
return content
filter_list = filters.get_filters(args.filters)
mirror_config = {'max_items': args.max, 'keep_items': args.keep,
'filters': filter_list,
'item_download': not args.no_item_download,
'checksumming_reader': args.checksumming_reader}
level = (log.ERROR, log.INFO, log.DEBUG)[min(args.verbose, 2)]
log.basicConfig(stream=args.log_file, level=level)
smirror = mirrors.UrlMirrorReader(mirror_url, mirrors=args.mirrors,
policy=policy)
tstore = objectstores.FileStore(args.output_d)
drmirror = mirrors.DryRunMirrorWriter(config=mirror_config,
objectstore=tstore)
drmirror.sync(smirror, initial_path)
def print_diff(char, items):
for pedigree, path, size in items:
            fmt = "{char} {pedigree} {path} {size} MiB\n"
size = int(size / (1024 * 1024))
sys.stderr.write(fmt.format(
char=char, pedigree=' '.join(pedigree), path=path, size=size))
print_diff('+', drmirror.downloading)
print_diff('-', drmirror.removing)
sys.stderr.write("%d Mb change\n" % (drmirror.size / (1024 * 1024)))
if args.dry_run:
return True
if args.progress:
callback = DotProgress(expected=drmirror.size).write_progress
else:
callback = None
tstore = objectstores.FileStore(args.output_d, complete_callback=callback)
tmirror = mirrors.ObjectFilterMirror(config=mirror_config,
objectstore=tstore)
tmirror.sync(smirror, initial_path)
if __name__ == '__main__':
main()
# vi: ts=4 expandtab syntax=python
==> simplestreams_0.1.0-67-g8497b634/bin/sstream-mirror-glance <==
#!/usr/bin/python3
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams.  If not, see <http://www.gnu.org/licenses/>.
#
# This requires the OpenStack client libraries (swiftclient,
# keystoneclient, glanceclient).
#
import argparse
import os.path
import sys
from simplestreams import objectstores
from simplestreams.objectstores import swift
from simplestreams import log
from simplestreams import mirrors
from simplestreams import openstack
from simplestreams import util
from simplestreams.mirrors import glance
DEFAULT_FILTERS = ['ftype~(disk1.img|disk.img)', 'arch~(x86_64|amd64|i386)']
DEFAULT_KEYRING = "/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg"
def error(msg):
sys.stderr.write(msg)
class StdoutProgressAggregator(util.ProgressAggregator):
def __init__(self, remaining_items):
super(StdoutProgressAggregator, self).__init__(remaining_items)
def emit(self, progress):
size = float(progress['size'])
written = float(progress['written'])
print("%.2f %s (%d of %d images) - %.2f" %
(written / size, progress['name'],
self.total_image_count - len(self.remaining_items) + 1,
self.total_image_count,
float(self.total_written) / self.total_size))
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--keep', action='store_true', default=False,
help='keep items in target up to MAX items '
'even after they have fallen out of the source')
parser.add_argument('--max', type=int, default=None,
help='store at most MAX items in the target')
parser.add_argument('--region', action='append', default=None,
dest='regions',
help='operate on specified region '
'[useable multiple times]')
parser.add_argument('--mirror', action='append', default=[],
dest="mirrors",
help='additional mirrors to find referenced files')
parser.add_argument('--path', default=None,
help='sync from index or products file in mirror')
parser.add_argument('--output-dir', metavar="DIR", default=False,
help='write image data to storage in dir')
parser.add_argument('--output-swift', metavar="prefix", default=False,
help='write image data to swift under prefix')
parser.add_argument('--name-prefix', metavar="prefix", default=None,
help='prefix for each published image name')
parser.add_argument('--cloud-name', metavar="name", default=None,
required=True, help='unique name for this cloud')
parser.add_argument('--modify-hook', metavar="cmd", default=None,
required=False,
help='invoke cmd on each image prior to upload')
parser.add_argument('--content-id', metavar="name", default=None,
required=True,
help='content-id to use for published data.'
' may contain "%%(region)s"')
parser.add_argument('--progress', action='store_true', default=False,
help='display per-item download progress')
parser.add_argument('--verbose', '-v', action='count', default=0)
parser.add_argument('--log-file', default=sys.stderr,
type=argparse.FileType('w'))
parser.add_argument('--keyring', action='store', default=DEFAULT_KEYRING,
help='The keyring for gpg --keyring')
parser.add_argument('source_mirror')
parser.add_argument('item_filters', nargs='*', default=DEFAULT_FILTERS,
help="Filter expression for mirrored items. "
"Multiple filter arguments can be specified"
"and will be combined with logical AND. "
"Expressions are key[!]=literal_string "
"or key[!]~regexp.")
parser.add_argument('--hypervisor-mapping', action='store_true',
default=False,
help="Set hypervisor_type attribute on stored images "
"and the virt attribute in the associated stream "
"data. This is useful in OpenStack Clouds which use "
"multiple hypervisor types with in a single region.")
parser.add_argument('--custom-property', action='append', default=[],
dest="custom_properties",
help='additional properties to add to glance'
' image metadata (key=value format).')
parser.add_argument('--visibility', action='store', default='public',
choices=('public', 'private', 'community', 'shared'),
help='Visibility to apply to stored images.')
parser.add_argument('--image-import-conversion', action='store_true',
default=False,
help="Enable conversion of images to raw format using "
"image import option in Glance.")
parser.add_argument('--set-latest-property', action='store_true',
default=False,
help="Set 'latest=true' property to latest synced "
"os_version/architecture image metadata and remove "
"latest property from the old images.")
args = parser.parse_args()
modify_hook = None
if args.modify_hook:
modify_hook = args.modify_hook.split()
mirror_config = {'max_items': args.max, 'keep_items': args.keep,
'cloud_name': args.cloud_name,
'modify_hook': modify_hook,
'item_filters': args.item_filters,
'hypervisor_mapping': args.hypervisor_mapping,
'custom_properties': args.custom_properties,
'visibility': args.visibility,
'image_import_conversion': args.image_import_conversion,
'set_latest_property': args.set_latest_property}
(mirror_url, args.path) = util.path_from_mirror_url(args.source_mirror,
args.path)
def policy(content, path): # pylint: disable=W0613
if args.path.endswith('sjson'):
return util.read_signed(content, keyring=args.keyring)
else:
return content
smirror = mirrors.UrlMirrorReader(mirror_url, mirrors=args.mirrors,
policy=policy)
if args.output_dir and args.output_swift:
error("--output-dir and --output-swift are mutually exclusive\n")
sys.exit(1)
level = (log.ERROR, log.INFO, log.DEBUG)[min(args.verbose, 2)]
log.basicConfig(stream=args.log_file, level=level)
regions = args.regions
if regions is None:
regions = openstack.get_regions(services=['image'])
for region in regions:
if args.output_dir:
outd = os.path.join(args.output_dir, region)
tstore = objectstores.FileStore(outd)
elif args.output_swift:
tstore = swift.SwiftObjectStore(args.output_swift, region=region)
else:
sys.stderr.write("not writing data anywhere\n")
tstore = None
mirror_config['content_id'] = args.content_id % {'region': region}
if args.progress:
drmirror = glance.ItemInfoDryRunMirror(config=mirror_config,
objectstore=tstore)
drmirror.sync(smirror, args.path)
p = StdoutProgressAggregator(drmirror.items)
progress_callback = p.progress_callback
else:
progress_callback = None
tmirror = glance.GlanceMirror(config=mirror_config,
objectstore=tstore, region=region,
name_prefix=args.name_prefix,
progress_callback=progress_callback)
tmirror.sync(smirror, args.path)
if __name__ == '__main__':
main()
# vi: ts=4 expandtab syntax=python
==> simplestreams_0.1.0-67-g8497b634/bin/sstream-query <==
#!/usr/bin/python3
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams.  If not, see <http://www.gnu.org/licenses/>.
from simplestreams import filters
from simplestreams import mirrors
from simplestreams import log
from simplestreams import util
import argparse
import errno
import json
import pprint
import signal
import sys
FORMAT_PRETTY = "PRETTY"
FORMAT_JSON = "JSON"
DEFAULT_KEYRING = "/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg"
def warn(msg):
sys.stderr.write("WARN: %s" % msg)
class FilterMirror(mirrors.BasicMirrorWriter):
def __init__(self, config=None):
super(FilterMirror, self).__init__(config=config)
if config is None:
config = {}
self.config = config
self.filters = config.get('filters', [])
outfmt = config.get('output_format')
if not outfmt:
outfmt = "%s"
self.output_format = outfmt
self.json_entries = []
def load_products(self, path=None, content_id=None):
return {'content_id': content_id, 'products': {}}
def filter_item(self, data, src, target, pedigree):
return filters.filter_item(self.filters, data, src, pedigree)
def insert_item(self, data, src, target, pedigree, contentsource):
# src and target are top level products:1.0
# data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]]
# contentsource is a ContentSource if 'path' exists in data or None
data = util.products_exdata(src, pedigree)
if 'path' in data:
data.update({'item_url': contentsource.url})
if self.output_format == FORMAT_PRETTY:
pprint.pprint(data)
elif self.output_format == FORMAT_JSON:
self.json_entries.append(data)
else:
try:
print(self.output_format % (data))
except KeyError as e:
sys.stderr.write("output format failed. Missing %s\n" % e.args)
sys.stderr.write("item: %s\n" % data)
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--max', type=int, default=None, dest='max_items',
help='store at most MAX items in the target')
parser.add_argument('--path', default=None,
help='sync from index or products file in mirror')
fmt_group = parser.add_mutually_exclusive_group()
fmt_group.add_argument('--output-format', '-o', action='store',
dest='output_format', default=None,
help="specify output format per python str.format")
fmt_group.add_argument('--pretty', action='store_const',
const=FORMAT_PRETTY, dest='output_format',
help="pretty print output")
fmt_group.add_argument('--json', action='store_const',
const=FORMAT_JSON, dest='output_format',
help="output in JSON as a list of dicts.")
parser.add_argument('--verbose', '-v', action='count', default=0)
parser.add_argument('--log-file', default=sys.stderr,
type=argparse.FileType('w'))
parser.add_argument('--keyring', action='store', default=DEFAULT_KEYRING,
help='keyring to be specified to gpg via --keyring')
parser.add_argument('--no-verify', '-U', action='store_false',
dest='verify', default=True,
help="do not gpg check signed json files")
parser.add_argument('mirror_url')
parser.add_argument('filters', nargs='*', default=[])
cmdargs = parser.parse_args()
(mirror_url, path) = util.path_from_mirror_url(cmdargs.mirror_url,
cmdargs.path)
level = (log.ERROR, log.INFO, log.DEBUG)[min(cmdargs.verbose, 2)]
log.basicConfig(stream=cmdargs.log_file, level=level)
initial_path = path
def policy(content, path):
if initial_path.endswith('sjson'):
return util.read_signed(content,
keyring=cmdargs.keyring,
checked=cmdargs.verify)
else:
return content
smirror = mirrors.UrlMirrorReader(mirror_url, policy=policy)
filter_list = filters.get_filters(cmdargs.filters)
cfg = {'max_items': cmdargs.max_items,
'filters': filter_list,
'output_format': cmdargs.output_format}
tmirror = FilterMirror(config=cfg)
try:
tmirror.sync(smirror, path)
if tmirror.output_format == FORMAT_JSON:
print(json.dumps(tmirror.json_entries, indent=2, sort_keys=True,
separators=(',', ': ')))
except IOError as e:
if e.errno == errno.EPIPE:
sys.exit(0x80 | signal.SIGPIPE)
raise
if __name__ == '__main__':
main()
# vi: ts=4 expandtab syntax=python
==> simplestreams_0.1.0-67-g8497b634/bin/sstream-sync <==
#!/usr/bin/python3
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams.  If not, see <http://www.gnu.org/licenses/>.
from simplestreams import mirrors
from simplestreams.mirrors import command_hook
from simplestreams import log
from simplestreams import util
import argparse
import errno
import os
import signal
import sys
import yaml
def which(program):
def is_exe(fpath):
return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
fpath, _fname = os.path.split(program)
if fpath:
if is_exe(program):
return program
else:
for path in os.environ["PATH"].split(os.pathsep):
path = path.strip('"')
exe_file = os.path.join(path, program)
if is_exe(exe_file):
return exe_file
return None
def warn(msg):
sys.stderr.write("WARN: %s" % msg)
def main():
parser = argparse.ArgumentParser()
defhook = command_hook.DEFAULT_HOOK_NAME
hooks = [("--hook-%s" % hook.replace("_", "-"), hook, False)
for hook in command_hook.HOOK_NAMES]
hooks.append(('--hook', defhook, False,))
parser.add_argument('--config', '-c',
help='read config file',
type=argparse.FileType('rb'))
for (argname, cfgname, _required) in hooks:
parser.add_argument(argname, dest=cfgname, required=False)
parser.add_argument('--keep', action='store_true', default=False,
dest='keep_items',
help='keep items in target up to MAX items '
'even after they have fallen out of the source')
parser.add_argument('--max', type=int, default=None, dest='max_items',
help='store at most MAX items in the target')
parser.add_argument('--item-skip-download', action='store_true',
default=False,
help='Do not download items that are to be inserted.')
parser.add_argument('--delete', action='store_true', default=False,
dest='delete_filtered_items',
help='remove filtered items from the target')
parser.add_argument('--path', default=None,
help='sync from index or products file in mirror')
parser.add_argument('--verbose', '-v', action='count', default=0)
parser.add_argument('--log-file', default=sys.stderr,
type=argparse.FileType('w'))
parser.add_argument('--keyring', action='store', default=None,
help='keyring to be specified to gpg via --keyring')
parser.add_argument('--no-verify', '-U', action='store_false',
dest='verify', default=True,
help="do not gpg check signed json files")
parser.add_argument('mirror_url')
cmdargs = parser.parse_args()
known_cfg = [('--item-skip-download', 'item_skip_download', False),
('--max', 'max_items', False),
('--keep', 'keep_items', False),
('--delete', 'delete_filtered_items', False),
('mirror_url', 'mirror_url', True),
('--path', 'path', True)]
known_cfg.extend(hooks)
cfg = {}
if cmdargs.config:
cfg = yaml.safe_load(cmdargs.config)
if not cfg:
cfg = {}
known_names = [i[1] for i in known_cfg]
unknown = [key for key in cfg if key not in known_names]
if unknown:
warn("unknown keys in config: %s\n" % str(unknown))
missing = []
fallback = cfg.get(defhook, getattr(cmdargs, defhook, None))
for (argname, cfgname, _required) in known_cfg:
val = getattr(cmdargs, cfgname)
if val is not None:
cfg[cfgname] = val
if val == "":
cfg[cfgname] = None
if ((cfgname in command_hook.HOOK_NAMES or cfgname == defhook) and
cfg.get(cfgname) is not None):
if which(cfg[cfgname]) is None:
msg = "invalid input for %s. '%s' is not executable\n"
sys.stderr.write(msg % (argname, val))
sys.exit(1)
if (cfgname in command_hook.REQUIRED_FIELDS and
cfg.get(cfgname) is None and not fallback):
missing.append((argname, cfgname,))
pfm = util.path_from_mirror_url
(cfg['mirror_url'], cfg['path']) = pfm(cfg['mirror_url'], cfg.get('path'))
if missing:
sys.stderr.write("must provide input for (--hook/%s for default):\n"
% defhook)
for (flag, cname) in missing:
    sys.stderr.write(" cmdline '%s' or cfgname '%s'\n" % (flag, cname))
sys.exit(1)
level = (log.ERROR, log.INFO, log.DEBUG)[min(cmdargs.verbose, 2)]
log.basicConfig(stream=cmdargs.log_file, level=level)
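# As in sstream-query, an sjson path means stream metadata reads are
# treated as GPG cleartext-signed (subject to --no-verify).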
def policy(content, path):
if cfg['path'].endswith('sjson'):
return util.read_signed(content,
keyring=cmdargs.keyring,
checked=cmdargs.verify)
else:
return content
smirror = mirrors.UrlMirrorReader(cfg['mirror_url'], policy=policy)
tmirror = command_hook.CommandHookMirror(config=cfg)
try:
tmirror.sync(smirror, cfg['path'])
except IOError as e:
if e.errno == errno.EPIPE:
sys.exit(0x80 | signal.SIGPIPE)
raise
if __name__ == '__main__':
main()
# vi: ts=4 expandtab syntax=python
simplestreams_0.1.0-67-g8497b634/debian/ 0000775 0000000 0000000 00000000000 14605750330 0017320 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/debian/changelog.trunk 0000664 0000000 0000000 00000000232 14605750330 0022331 0 ustar 00root root 0000000 0000000 simplestreams (UPSTREAM_VER-0ubuntu1) UNRELEASED; urgency=low
* Initial release
-- Scott Moser Tue, 26 Mar 2013 01:10:01 +0000
simplestreams_0.1.0-67-g8497b634/debian/control 0000664 0000000 0000000 00000003435 14605750330 0020730 0 ustar 00root root 0000000 0000000 Source: simplestreams
Section: python
Priority: optional
Standards-Version: 4.6.0
Maintainer: Ubuntu Developers
Build-Depends: debhelper-compat (= 13),
dh-python,
python3,
python3-flake8,
python3-glanceclient,
python3-keystoneclient,
python3-mock,
python3-pytest,
python3-pytest-cov,
python3-requests (>= 1.1),
python3-setuptools,
python3-swiftclient,
python3-yaml
Homepage: http://launchpad.net/simplestreams
Rules-Requires-Root: no
Package: simplestreams
Architecture: all
Depends: python3-simplestreams,
python3-yaml,
${misc:Depends},
${python3:Depends}
Description: Library and tools for using Simple Streams data
This package provides a client for interacting with simple
streams data as is produced to describe Ubuntu's cloud images.
Package: python3-simplestreams
Architecture: all
Depends: gnupg, python3-boto3, ${misc:Depends}, ${python3:Depends}
Suggests: python3-requests (>= 1.1)
Description: Library and tools for using Simple Streams data
This package provides a client for interacting with simple
streams data as is produced to describe Ubuntu's cloud images.
Package: python3-simplestreams-openstack
Architecture: all
Depends: python3-glanceclient,
python3-keystoneclient,
python3-simplestreams (= ${binary:Version}),
python3-swiftclient,
${misc:Depends},
${python3:Depends}
Description: Library and tools for using Simple Streams data
This package depends on libraries necessary to use the openstack dependent
functionality in simplestreams. That includes interacting with glance,
swift and keystone.
simplestreams_0.1.0-67-g8497b634/debian/copyright 0000664 0000000 0000000 00000001157 14605750330 0021257 0 ustar 00root root 0000000 0000000 Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: simplestreams
Upstream-Contact: Scott Moser
Source: https://launchpad.net/simplestreams
Files: *
Copyright: 2013, Canonical Ltd.
License: AGPL-3
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
.
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
.
The complete text of the AGPL version 3 can be seen in
http://www.gnu.org/licenses/agpl-3.0.html
simplestreams_0.1.0-67-g8497b634/debian/python3-simplestreams-openstack.install 0000664 0000000 0000000 00000000036 14605750330 0027166 0 ustar 00root root 0000000 0000000 usr/bin/sstream-mirror-glance
simplestreams_0.1.0-67-g8497b634/debian/python3-simplestreams.install 0000664 0000000 0000000 00000000054 14605750330 0025201 0 ustar 00root root 0000000 0000000 usr/lib/python3*/*-packages/simplestreams/*
simplestreams_0.1.0-67-g8497b634/debian/rules 0000775 0000000 0000000 00000000613 14605750330 0020400 0 ustar 00root root 0000000 0000000 #!/usr/bin/make -f
export SS_REQUIRE_DISTRO_INFO = 0
PY3VERS := $(shell py3versions -r)
%:
dh $@ --with=python3
override_dh_auto_install:
dh_auto_install
set -ex; for python in $(PY3VERS); do \
$$python setup.py build --executable=/usr/bin/python3 && \
$$python setup.py install --root=$(CURDIR)/debian/tmp --install-layout=deb; \
done
override_dh_missing:
dh_missing --list-missing
simplestreams_0.1.0-67-g8497b634/debian/simplestreams.install 0000664 0000000 0000000 00000000225 14605750330 0023577 0 ustar 00root root 0000000 0000000 usr/bin/json2streams
usr/bin/sstream-mirror
usr/bin/sstream-query
usr/bin/sstream-sync
usr/lib/simplestreams/hook-debug usr/share/doc/simplestreams/
simplestreams_0.1.0-67-g8497b634/debian/source/ 0000775 0000000 0000000 00000000000 14605750330 0020620 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/debian/source/format 0000664 0000000 0000000 00000000014 14605750330 0022026 0 ustar 00root root 0000000 0000000 3.0 (quilt)
simplestreams_0.1.0-67-g8497b634/doc/ 0000775 0000000 0000000 00000000000 14605750330 0016643 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/doc/README 0000664 0000000 0000000 00000015113 14605750330 0017524 0 ustar 00root root 0000000 0000000 == Simple Sync Format ==
Simple Sync format consists of 2 different file formats:
* A "products list" (format=products:1.0)
* A "index" (format=index:1.0)
Files contain JSON formatted data.
Data can come in one of 2 formats:
* JSON file: .json
A .json file can be accompanied by a .json.gpg file which contains
signature data for the .json file.
Because the .json and .json.gpg files may not be obtainable from a
storage location at the same moment, signed data is preferably
delivered in the '.sjson' format.
* Signed JSON File: .sjson
This is a GPG cleartext signed message:
https://tools.ietf.org/html/rfc4880#section-7
The payload is the same content that would be included in the .json file.
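For illustration, a minimal sketch of loading either form with this
project's own helper (the same util.read_signed call used by the tools
in bin/; the file and keyring paths here are only examples):

  import json
  from simplestreams import util

  content = open('streams/v1/index.sjson').read()
  # read_signed checks the GPG cleartext signature against the given
  # keyring and returns the signed payload (checked=False would skip it).
  payload = util.read_signed(content, keyring='examples/keys/example.pub')
  tree = json.loads(payload)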
Special dictionary entries:
* 'path': If a 'product' dictionary in an index file or an item dictionary
in a products file contains a 'path' element, then that indicates there
is content to be downloaded associated with that element.
A 'path' value must be relative to the base of the mirror.
* 'md5', 'sha256', 'sha512':
If an item contains a 'path' and one of these fields, then the content
referenced must have the given checksum(s).
* 'size':
For an item with a 'path', this indicates the expected download size.
It should be present for an item with a path in a products file.
Having access to the expected size allows the client to report progress
and also reduces the potential for hash collision attacks (see the
sketch after this list).
* 'updated':
This field can exist at the top level of a products or index file, and
contains an RFC 2822 timestamp indicating when the file was last updated.
This allows a client to quickly note that it is up to date.
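Where an item carries a 'path' plus checksums and a size, a client can
verify a download with the standard library alone; a minimal sketch
(the file name and item dictionary are assumed to be in hand):

  import hashlib
  import os

  def verify_item(filename, item):
      # Check the cheap 'size' field first, then the strongest hash given.
      if 'size' in item and os.path.getsize(filename) != item['size']:
          return False
      for alg in ('sha512', 'sha256', 'md5'):
          if alg in item:
              h = hashlib.new(alg)
              with open(filename, 'rb') as fp:
                  for block in iter(lambda: fp.read(65536), b''):
                      h.update(block)
              return h.hexdigest() == item[alg]
      # No checksum fields present: nothing further to check.
      return True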
== Simple Sync Mirrors ==
The default/expected location of an index file is 'streams/v1/index.sjson'
or 'streams/v1/index.json' underneath the top level of a mirror.
'path' entries as described above are relative to the top level of a
mirror, not relative to the location of the index.
For example:
http://example.com/my-mirror/
would be the top level of a mirror, and the expected path of an index is
http://example.com/my-mirror/streams/v1/index.sjson
To describe a file that lives at:
http://example.com/my-mirror/streams/v1/products.sjson
The 'path' element must be: 'streams/v1/products.sjson'
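The bundled tools derive these two parts from a single user-supplied URL
with simplestreams.util.path_from_mirror_url; a sketch of that split
(the URL is illustrative, and exact trailing-slash handling is left to
the helper):

  from simplestreams import util

  url = 'http://example.com/my-mirror/streams/v1/index.sjson'
  # Expected roughly: mirror_url 'http://example.com/my-mirror/',
  # path 'streams/v1/index.sjson'
  (mirror_url, path) = util.path_from_mirror_url(url, None)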
== Products List ==
products list: (format=products:1.0)
For Ubuntu, an example product is 'server:precise:amd64'
A products list has a 'content_id' and multiple products.
A product has multiple versions.
A version has multiple items.
An item can be globally uniquely identified by the path to it.
That is, the 'content_id' for a products list and the key at each
level of the tree form a unique tuple for that item. Given:
content_id = tree['content_id']
prod_name = list(tree['products'])[0]
ver_name = list(tree['products'][prod_name]['versions'])[0]
item_name = list(tree['products'][prod_name]['versions'][ver_name]['items'])[0]
that unique tuple is:
(content_id, prod_name, ver_name, item_name)
The following is a description of each of these fields:
* content_id is formed similarly to an ISCSI qualified name (IQN)
An example is:
com.ubuntu.cloud:released:aws
It should have a reverse domain portion followed by a portion
that represents a name underneath that domain.
* product_name: product name is unique within a products list. The same
product name may appear in multiple products_lists. For example,
in Ubuntu, 'server:precise:amd64' will appear in both
'com.ubuntu.cloud:released:aws' and
'com.ubuntu.cloud:released:download'.
That name collision should imply that the two separate
(content_id, product_name) pairs are equivalent in some manner.
* version_name:
A 'version' of a product represents a release, build or collection of
that product. A key in the 'versions' dictionary should be sortable
by the rules of a 'LANG=C sort'. That allows the client to trivially
order versions to find the most recent (see the sketch after this
list). Ubuntu uses "serial" numbers for these keys, in the format
YYYYMMDD[.0-9].
* item_name:
Inside of a version, there may be multiple items. An example would be
a binary build and a source tarball.
For Ubuntu download images, these are things like '.tar.gz',
'-disk1.img' and '-root.tar.gz'.
The item name does not need to be user-friendly. It must be
consistent. Because this id is unique within the given
'version_name', a client needs only to store that key, rather than
trying to determine which keys inside the item dictionary identify it.
An 'item' dictionary may contain a 'path' element.
'path' entries for a given item must be immutable. That is, for a
given 'path' under a mirror, the content must never change.
An 'item' dictionary may contain a 'mirrors' element.
A 'mirrors' entry supplies a list of mirror URL prefixes from which
the item can be retrieved. When 'mirrors' is present, a client must
construct the URL of the resource to fetch from a mirror prefix and
the 'path' entry:
item_url = item_mirrors[0] + item_path
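Putting the version and mirror rules together, a client might select
the newest version of a product and form an item URL like this (a
sketch; 'product', 'mirror_url' and the 'disk1.img' item name are
assumed to be known):

  def latest_version(product):
      # Sorting the version keys matches 'LANG=C sort' ordering for the
      # ASCII serial numbers described above.
      return sorted(product['versions'])[-1]

  ver = latest_version(product)
  item = product['versions'][ver]['items']['disk1.img']
  prefix = item.get('mirrors', [mirror_url])[0]
  item_url = prefix + item['path']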
== Index ==
This is an index of the products files that are available.
It has a top level 'index' dictionary. Each entry in that dictionary is a
content_id of a products file. The entry should have a 'path' item that
indicates where to download the products file.
All other data inside the product entry is not required, but helps a client
to find what they're looking for.
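A short sketch of walking an index to locate a products file (the
content_id is taken from the foocloud example in this tree):

  import json

  with open('streams/v1/index.json') as fp:
      index = json.load(fp)

  entry = index['index']['com.example.foovendor:released:download']
  # 'path' is relative to the top of the mirror, per the rules above.
  products_path = entry['path']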
* stream: a list of item groups of the same "type".
This is the 'stream:1.0' format.
* stream collection: a list of content streams.
A stream collection is simply a way to provide an index of known content
streams, and information about them.
This is the 'stream-collection:1.0' format.
Useful definitions
* item group
an item group is a list of like items, e.g. all produced by the same build.
requirements:
* serial: a 'serial' entry that can be sorted by YYYYMMDD[.X]
* items: a list of items
Example item groups are:
* output of the amd64 cloud image build done on 2012-04-04
* amd64 images from the cirros release version 0.3.1
* item
There are 1 or more items in an item group.
requirements:
* name: must be unique within the item group.
special fields:
* path: If an item has a 'path', then the target must be obtainable and
should be downloaded when mirroring.
* md5sum: stores checksum
Example:
* "disk1.img" produced from the amd64 cloud image build done on 2012-04-04
* -root.tar.gz produced from the same build.
Notes:
* index files are not required to be signed, as they only
contain references to other content that is signed, and that is hosted
on the same mirror.
simplestreams_0.1.0-67-g8497b634/doc/files/ 0000775 0000000 0000000 00000000000 14605750330 0017745 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/doc/files/my.cfg 0000664 0000000 0000000 00000001354 14605750330 0021056 0 ustar 00root root 0000000 0000000 # This is an example CommandHookMirror config
# You can utilize it with:
# export MIRROR_D=mirror.out; ./tools/cmd-hook-sync hook.cfg doc/example ${MIRROR_D}
#
stream_load: |
for d in "${MIRROR_D}/%(iqn)s/"*; do [ -d "$d" ] && echo "${d##*/}"; done;
true
item_insert: |
echo "%(_hookname)s: %(iqn)s/%(serial)s/%(name)s"
cp "%(path_local)s" "$MIRROR_D/%(iqn)s/%(serial)s/%(name)s"
group_insert_pre: |
echo "%(_hookname)s: %(iqn)s/%(serial)s"
mkdir -p "$MIRROR_D/%(iqn)s/%(serial)s"
group_remove_post: |
echo "%(_hookname)s: %(iqn)s/%(serial)s"
rm -Rf "$MIRROR_D/%(iqn)s/%(serial)s"
# ignore files unless they match the provided regex
item_filter:
echo "%(name)s" | grep -q "disk1.img"
# vi: ts=4 expandtab syntax=yaml
simplestreams_0.1.0-67-g8497b634/doc/files/openstack-sync.cfg 0000664 0000000 0000000 00000003121 14605750330 0023364 0 ustar 00root root 0000000 0000000 # This is an example CommandHookMirror config for uploading to glance
# PYTHONPATH=$PWD ./tools/cmd-hook-sync ./my.cfg \
# http://download.cirros-cloud.net/streams/v1/unsigned/streams.yaml
stream_load: |
set -e; set -f;
iqn="%(iqn)s"
output=$(glance image-list --property-filter "iqn=$iqn")
ids=$(echo "$output" | awk '$2 ~ uuid { print $2 }' "uuid=[0-9a-f-]{36}")
for id in $ids; do
out=$(glance image-show $id)
serial=$(echo "$out" |
awk '$2 == "Property" && $3 == sname { print $5 }' sname="'serial'")
# for debug, list what we find to stderr
echo "$iqn $serial $id" 1>&2
# report we have the given serial for this iqn
echo "$serial"
done
item_insert: |
iqn="%(iqn)s"
serial="%(serial)s"
path_local="%(path_local)s"
[ "${arch}" = "amd64" ] && arch="x86_64"
uuid=$(uuidgen)
glance image-create --disk-format=qcow2 --container-format=bare \
"--name=${pubname:-${name##*/}}" "--id=$uuid" \
${arch:+"--property=architecture=${arch}"} \
"--property=iqn=$iqn" "--property=serial=$serial" \
${md5:+"--checksum=$md5"} \
"--file=${path_local}"
group_remove_pre: |
iqn="%(iqn)s"
serial="%(serial)s"
set -e; set -f;
output=$(glance image-list "--property-filter=iqn=$iqn" \
"--property-filter=serial=$serial")
ids=$(echo "$output" | awk '$2 ~ uuid { print $2 }' "uuid=[0-9a-f-]{36}")
for id in $ids; do
echo "remove $iqn $serial $id"
glance image-delete "$id";
done
# ignore files unless they match the provided regex
item_filter:
echo "%(name)s" | grep -q "disk.img$"
# vi: ts=4 expandtab syntax=yaml
simplestreams_0.1.0-67-g8497b634/examples/ 0000775 0000000 0000000 00000000000 14605750330 0017714 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/cirros/ 0000775 0000000 0000000 00000000000 14605750330 0021215 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/cirros/streams/ 0000775 0000000 0000000 00000000000 14605750330 0022673 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/cirros/streams/v1/ 0000775 0000000 0000000 00000000000 14605750330 0023221 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/cirros/streams/v1/index.json 0000664 0000000 0000000 00000001534 14605750330 0025226 0 ustar 00root root 0000000 0000000 {
"index": {
"net.cirros-cloud:devel:download": {
"datatype": "image-downloads",
"path": "streams/v1/net.cirros-cloud:devel:download.json",
"updated": "Sat, 04 May 2013 01:58:57 +0000",
"products": [
"net.cirros-cloud.devel:standard:0.3:arm",
"net.cirros-cloud.devel:standard:0.3:i386",
"net.cirros-cloud.devel:standard:0.3:x86_64"
],
"format": "products:1.0"
},
"net.cirros-cloud:released:download": {
"datatype": "image-downloads",
"path": "streams/v1/net.cirros-cloud:released:download.json",
"updated": "Sat, 04 May 2013 01:58:57 +0000",
"products": [
"net.cirros-cloud:standard:0.3:i386",
"net.cirros-cloud:standard:0.3:x86_64",
"net.cirros-cloud:standard:0.3:arm"
],
"format": "products:1.0"
}
},
"updated": "Sat, 04 May 2013 01:58:57 +0000",
"format": "index:1.0"
}
simplestreams_0.1.0-67-g8497b634/examples/cirros/streams/v1/net.cirros-cloud:devel:download.json 0000664 0000000 0000000 00000025643 14605750330 0032254 0 ustar 00root root 0000000 0000000 {
"datatype": "image-downloads",
"updated": "Sat, 04 May 2013 01:58:57 +0000",
"content_id": "net.cirros-cloud:devel:download",
"products": {
"net.cirros-cloud.devel:standard:0.3:arm": {
"arch": "arm",
"stream": "devel",
"versions": {
"20130111": {
"items": {
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-arm-uec.tar.gz",
"sha256": "2b77d7c4225793b7271f550310f6e0827b64434b88fec589dd97e98e8828d254",
"md5": "797e2d488c799eab0a8eb09a9c1ff4a3",
"size": 7314153
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-arm-rootfs.img.gz",
"sha256": "ff2d70f474ee78209083d88caa04add60ada5cb71cfec57a69f6b696ef57eee2",
"md5": "986c9cabd412f12cb5027b7d7eb4ec03",
"size": 10949810
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-arm-lxc.tar.gz",
"sha256": "a9225fed02b071d0cf9a60cb3d17578d19714b9de3fe4083e34eb3c1110f3f83",
"md5": "55b092bde364aad125a1db57c932f1d0",
"size": 3466163
}
},
"version": "0.3.1~pre4",
"pubname": "cirros-0.3.1~pre4-arm"
},
"20120611": {
"items": {
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-arm-uec.tar.gz",
"sha256": "679823406624c380a1d3c5af659a41aab25ff42007b0eb0a7afd4b58142e738e",
"md5": "22bb53be0daf975e35a4fbc856ae89c2",
"size": 7173849
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-arm-rootfs.img.gz",
"sha256": "5dd775b57a624108c82838a47e57aee576955ca162c2e8c2e50ee8ffea5f93d2",
"md5": "ad7422c2124c59466724dea9658db20f",
"size": 10762344
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-arm-lxc.tar.gz",
"sha256": "8e619113cfeb0a799b89dd570fd3c978ac83b5f78ea7b257a3b2e2a2bb2de30c",
"md5": "b0a7d585b42d8cfff18e2a76c5115495",
"size": 3428595
}
},
"version": "0.3.1~pre1",
"pubname": "cirros-0.3.1~pre1-arm"
},
"20120827": {
"items": {
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-arm-uec.tar.gz",
"sha256": "cc5362c2c9e84547100aed3e6df0011e2b4d91c47243c85de2643d1d72ed3946",
"md5": "d1703407ad1483e2bbf68d4d78987581",
"size": 7302383
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-arm-rootfs.img.gz",
"sha256": "dc5c0f6f02592d6af164ae175b5f5e69d5d016734672982e9b4f29294782e61a",
"md5": "c98688744bdf66b56b19405acfc48966",
"size": 10922886
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-arm-lxc.tar.gz",
"sha256": "136e2d3a4a870eb6a4b3dcbde44209bcd050d36b3c20c2a1f21f4dd7f6e00004",
"md5": "fbc83952ca51071c1aea8b0d82c51d1b",
"size": 3456925
}
},
"version": "0.3.1~pre3",
"pubname": "cirros-0.3.1~pre3-arm"
}
}
},
"net.cirros-cloud.devel:standard:0.3:i386": {
"arch": "i386",
"stream": "devel",
"versions": {
"20130111": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-i386-disk.img",
"sha256": "ac09fe84e4aa5315b017ee8a522dcf0f2780ebd27b9a9eaca56a24c5e0818977",
"md5": "108c6f694a10b5376bde18e71a238ae0",
"size": 12380672
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-i386-uec.tar.gz",
"sha256": "da62e3351c60bf40ab2b07d808aed8e0cbeed1df0b766c3870aab29330569614",
"md5": "c078a3e7b7c758a69217391f54fa4cba",
"size": 8195614
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-i386-rootfs.img.gz",
"sha256": "3944219a3957a787735a54bdc267e2daf6c7c2e02cb4fac133ae8fa6ba4bfbaa",
"md5": "538576ec0ca05ab9eaf1ada9fc7f3084",
"size": 11566741
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-i386-lxc.tar.gz",
"sha256": "a4ff395c1e9bf8dda5b7e25f75fdc90f9726631ffb5f6df821be0da0209dd7ef",
"md5": "ae66a7dd1c050f4d313d1fe8febd87d0",
"size": 3192308
}
},
"version": "0.3.1~pre4",
"pubname": "cirros-0.3.1~pre4-i386"
},
"20120611": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-i386-disk.img",
"sha256": "3a453c98a4f46cb9b8acc34e4d2b39b8d9315082f80cefc4659320741ab94fcf",
"md5": "483f1377c284b5ba61e8038a7bb53849",
"size": 12204544
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-i386-uec.tar.gz",
"sha256": "ceaea29b7808e434712cac7380659367aa41e049810fab53a36d6ecfe5ae014c",
"md5": "b5a730c2cfd08c78e8af41be06082a46",
"size": 8165911
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-i386-rootfs.img.gz",
"sha256": "751fc761ab9e1db5ddb156cb8d54f0bb67e68bf82a92e700f9e280d6da0cad79",
"md5": "e413bfe8627aef63bc8c4cb96954f1e3",
"size": 11494115
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-i386-lxc.tar.gz",
"sha256": "e727a344cc19caf173b376bdf1321f13dded5eaa83d47eba0c65354e7a6d5c81",
"md5": "04d84c5ed7f4d2ced01a7da82a4d573b",
"size": 3158748
}
},
"version": "0.3.1~pre1",
"pubname": "cirros-0.3.1~pre1-i386"
},
"20120827": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-i386-disk.img",
"sha256": "4f531df04463eb9fd4a4ca05d013f6e48ef66bf87287a3d22037a6d845784390",
"md5": "919b7f3c9d1740f57cc208acbf098ae5",
"size": 12287488
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-i386-uec.tar.gz",
"sha256": "5df8c17124c5f0b5a38c8cdd763c6432d5ca8b32f6148ffc8e77486a36cdd0f5",
"md5": "04dfffbb656e536775d522148bc031b2",
"size": 8184352
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-i386-rootfs.img.gz",
"sha256": "d99a1eee31ee469eaf5cac51de711071282112b8b1d398759db92eefe1daf83e",
"md5": "a9f6519bf540331d74c3f938b7752ce9",
"size": 11546128
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-i386-lxc.tar.gz",
"sha256": "9cf93aeefdb00601bdf44401adf5c89473883ae408fee0a9d65ed8dd355c2cac",
"md5": "8f59168a9f5b4962d3327f50d5d18ad7",
"size": 3186156
}
},
"version": "0.3.1~pre3",
"pubname": "cirros-0.3.1~pre3-i386"
}
}
},
"net.cirros-cloud.devel:standard:0.3:x86_64": {
"arch": "x86_64",
"stream": "devel",
"versions": {
"20130111": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-x86_64-disk.img",
"sha256": "6f50d6a8874610ad25196cad3296e0cb55274fb3aa6963cef04012b413cca3af",
"md5": "c32b60592301c1cf714a93fea0a25352",
"size": 13118976
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-x86_64-uec.tar.gz",
"sha256": "bb31bb2372f66949799f0ee7f272078e1d0339f3f790d40f1dbcadf9c24225f3",
"md5": "414dc72831718ebaaf8a994b59e71f62",
"size": 8633179
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-x86_64-rootfs.img.gz",
"sha256": "8f14fa9de3deee186fc2fa7778913f93d93f0fb0d2e0436d7e2af92f0b98f5e6",
"md5": "8cc226def4fa6a50b4dbb0ebae57675e",
"size": 12343217
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1~pre4/cirros-0.3.1~pre4-x86_64-lxc.tar.gz",
"sha256": "867bc516aa0721ed7c4e7069249f8f891c7f186b7d7cecc3c131ef4225c6df4d",
"md5": "019857146c4d5c2f99fa03c04ba300db",
"size": 3534613
}
},
"version": "0.3.1~pre4",
"pubname": "cirros-0.3.1~pre4-x86_64"
},
"20120611": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-x86_64-disk.img",
"sha256": "0cfdda25485a0b51cc9356199446028403e0745699011265ff304dd51ce3b36b",
"md5": "8875836383dcf5de32e708945a5455b5",
"size": 12992512
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-x86_64-uec.tar.gz",
"sha256": "1d45297df580bef76ec2c2303124e4673b76c2b61c4a1064581171b0f8e35a79",
"md5": "af842e35f335fc55c78311303474b121",
"size": 8602015
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-x86_64-rootfs.img.gz",
"sha256": "99c2c0f64f51d1184311597c87f3911799c345786a6987fd76fe742d5ef29481",
"md5": "980984e8787426059edf4ab6fe1e680f",
"size": 12277772
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1~pre1/cirros-0.3.1~pre1-x86_64-lxc.tar.gz",
"sha256": "f32cc1a108322c9b8967d97c301d4d9412bf31a321cea026773d932a5594dab6",
"md5": "d68c898e4f4e8afc9221e7c8f4f34e7d",
"size": 3497630
}
},
"version": "0.3.1~pre1",
"pubname": "cirros-0.3.1~pre1-x86_64"
},
"20120827": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-x86_64-disk.img",
"sha256": "3471124e2943922fe2a14ef06ef051c8b21e43835de7a06c58b33035b4124943",
"md5": "4b65df206c61e30af1947ee21d5aad40",
"size": 13089280
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-x86_64-uec.tar.gz",
"sha256": "901b13ea2f4f9216e859c1f90f5acc36b85468043ba243b75a919a9e34a4a70d",
"md5": "0a4cb79338406b275ad0d5be08c9e0dd",
"size": 8623028
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-x86_64-rootfs.img.gz",
"sha256": "cba564b4f9176ba0efceb09f646a38485c647e938b7d1150c1b8d00e012905b1",
"md5": "1c649a0761e80e6e81b34371e3186deb",
"size": 12328845
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1~pre3/cirros-0.3.1~pre3-x86_64-lxc.tar.gz",
"sha256": "9abee551668c7afada5b550d6f5760c105ba8b0011e37cbe77985b3834862395",
"md5": "9fb61f717b367c0d0459eada51fee35f",
"size": 3527698
}
},
"version": "0.3.1~pre3",
"pubname": "cirros-0.3.1~pre3-x86_64"
}
}
}
},
"format": "products:1.0"
}
simplestreams_0.1.0-67-g8497b634/examples/cirros/streams/v1/net.cirros-cloud:released:download.json 0000664 0000000 0000000 00000016246 14605750330 0032740 0 ustar 00root root 0000000 0000000 {
"datatype": "image-downloads",
"updated": "Sat, 04 May 2013 01:58:57 +0000",
"content_id": "net.cirros-cloud:released:download",
"products": {
"net.cirros-cloud:standard:0.3:i386": {
"arch": "i386",
"stream": "released",
"versions": {
"20111020": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.0/cirros-0.3.0-i386-disk.img",
"sha256": "3309675d0d409128b1c2651d576bc8092ca9ab93e15f3d3aa458f40947569b61",
"md5": "90169ba6f09b5906a7f0755bd00bf2c3",
"size": 9159168
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.0/cirros-0.3.0-i386-uec.tar.gz",
"sha256": "b57e0acb32852f89734ff11a511ae0897e8cecb41882d03551649289b6854a1b",
"md5": "115ca6afa47089dc083c0dc9f9b7ff03",
"size": 6596586
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.0/cirros-0.3.0-i386-rootfs.img.gz",
"sha256": "981eb0be5deed016a6b7d668537a2ca8c7c8f8ac02f265acb63eab9a8adc4b98",
"md5": "174d97541fadaf4e88d526f656c1e0a5",
"size": 8566441
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.0/cirros-0.3.0-i386-lxc.tar.gz",
"sha256": "cac7887628527604b65a89e8caa34096d51c2dc1acfe405c15db2fc58495142a",
"md5": "e760bf470841f57c2c1bb426d407d169",
"size": 1845928
}
},
"version": "0.3.0",
"pubname": "cirros-0.3.0-i386"
},
"20130207": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.1/cirros-0.3.1-i386-disk.img",
"sha256": "b8aa1ce5d11939eaa01205fc31348532a31b82790921d45ceb397fbe76492787",
"md5": "6ba617eafc992e33e7c141c679225e53",
"size": 12251136
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1/cirros-0.3.1-i386-uec.tar.gz",
"sha256": "88dda2e505b862a95d0e0044013addcaa3200e602150c9d73e32c2e29345d6f3",
"md5": "52845de5142e58faf211e135d2b45721",
"size": 8197543
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1/cirros-0.3.1-i386-rootfs.img.gz",
"sha256": "27decd793c659063988dbb48ecd159a3f6664f9552d0fda9c58ce8984af5ba54",
"md5": "cf14209217f41ea26844bf0b9cdd20ef",
"size": 11565966
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1/cirros-0.3.1-i386-lxc.tar.gz",
"sha256": "70a8c9c175589f7ac7054c6151cf2bb7eb9e210cefbe310446df2fb1a436b504",
"md5": "fc404d9fc9dc5f0eb33ebaa03920a046",
"size": 3191593
}
},
"version": "0.3.1",
"pubname": "cirros-0.3.1-i386"
}
}
},
"net.cirros-cloud:standard:0.3:x86_64": {
"arch": "x86_64",
"stream": "released",
"versions": {
"20111020": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.0/cirros-0.3.0-x86_64-disk.img",
"sha256": "648782e9287288630250d07531fed9944ecc3986764a6664f0bf6c050ec06afd",
"md5": "50bdc35edb03a38d91b1b071afb20a3c",
"size": 9761280
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.0/cirros-0.3.0-x86_64-uec.tar.gz",
"sha256": "043a3e090a5d76d23758a3919fcaff93f77ce7b97594d9d10fc8d00e85f83191",
"md5": "f56d3cffa47b7d209d2b6905628f07b9",
"size": 6957349
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.0/cirros-0.3.0-x86_64-rootfs.img.gz",
"sha256": "fb0f51c0ec8cfae9d2cef18e4897142e6a81688378fc52f3836ddb5d027a6761",
"md5": "83cd7edde7c99ae520e26d338d397875",
"size": 9184057
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.0/cirros-0.3.0-x86_64-lxc.tar.gz",
"sha256": "0e78796a30641dd7184f752c302a87d66f1eba9985c876911e4f26b4d8ba4a88",
"md5": "1a480e5150db4d93be2aa3c9ced94fa1",
"size": 2115217
}
},
"version": "0.3.0",
"pubname": "cirros-0.3.0-x86_64"
},
"20130207": {
"items": {
"disk.img": {
"ftype": "disk.img",
"path": "0.3.1/cirros-0.3.1-x86_64-disk.img",
"sha256": "e01302fb2d2b13ae65226a0300335172e4487bbe60bb1e5c8b0843a25f126d34",
"md5": "d972013792949d0d3ba628fbe8685bce",
"size": 13147648
},
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1/cirros-0.3.1-x86_64-uec.tar.gz",
"sha256": "51eb03b83123a68d4f866c7c15b195204e62db9e33475509a38b79b3122cde38",
"md5": "e1849016cb71a00808093b7bf986f36a",
"size": 8633554
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1/cirros-0.3.1-x86_64-rootfs.img.gz",
"sha256": "8af2572729827ef89eabcdba36eaa130abb04e83e5910a0c20009ef48f4be237",
"md5": "069f6411ea252bf4b343953017f35968",
"size": 12339939
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1/cirros-0.3.1-x86_64-lxc.tar.gz",
"sha256": "a086fcf2758468972d45957dc78ec6317c06f356930dbbc6cad6a8d1855f135e",
"md5": "38252f1a49fec0ebedebc820854497e0",
"size": 3534564
}
},
"version": "0.3.1",
"pubname": "cirros-0.3.1-x86_64"
}
}
},
"net.cirros-cloud:standard:0.3:arm": {
"arch": "arm",
"stream": "released",
"versions": {
"20111020": {
"items": {
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.0/cirros-0.3.0-arm-uec.tar.gz",
"sha256": "b871823406f818430f57744333b1bb17ce0047e551a316f316641f1bd70d9152",
"md5": "c31e05f7829ad45f9d9995c35d232769",
"size": 5761642
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.0/cirros-0.3.0-arm-rootfs.img.gz",
"sha256": "731821e5293f6b66688a2127094537ce79c103f45a82b27b00a08992e3aa5a7a",
"md5": "b268cfa7e634f070af401f8169adff79",
"size": 7914961
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.0/cirros-0.3.0-arm-lxc.tar.gz",
"sha256": "fcd3723c956a1c232730dc28513b466657cbe984232ba2fcc30a4e1f55aa91e9",
"md5": "91add49e56cbe6b5004015a4d2f51dbc",
"size": 2043822
}
},
"version": "0.3.0",
"pubname": "cirros-0.3.0-arm"
},
"20130207": {
"items": {
"uec.tar.gz": {
"ftype": "uec.tar.gz",
"path": "0.3.1/cirros-0.3.1-arm-uec.tar.gz",
"sha256": "09dcd3ea6f1d48b3519232973e4dc00fc5e73cbea974cda6b5f7cfa380c6b428",
"md5": "d04e6f26aed123bba2c096581b269e7f",
"size": 7314471
},
"rootfs.img.gz": {
"ftype": "rootfs.img.gz",
"path": "0.3.1/cirros-0.3.1-arm-rootfs.img.gz",
"sha256": "9841beefe7a60969585d07cb9f403ce8e73fd56f47b13f8ed111443a0ca50fb3",
"md5": "24205ff08d67b94b13588cee256e83b3",
"size": 10945168
},
"lxc.tar.gz": {
"ftype": "lxc.tar.gz",
"path": "0.3.1/cirros-0.3.1-arm-lxc.tar.gz",
"sha256": "2060e59e642b3b2bdf6e34aba3ed15f468bc6f9a8417fc196d01d29b2075493e",
"md5": "7ddea367ecb7ecb91554e18bed7c71bd",
"size": 3466149
}
},
"version": "0.3.1",
"pubname": "cirros-0.3.1-arm"
}
}
}
},
"format": "products:1.0"
}
simplestreams_0.1.0-67-g8497b634/examples/foocloud/ 0000775 0000000 0000000 00000000000 14605750330 0021526 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/ 0000775 0000000 0000000 00000000000 14605750330 0022630 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/beta-2/ 0000775 0000000 0000000 00000000000 14605750330 0023702 5 ustar 00root root 0000000 0000000 foovendor-6.1-beta2-server-cloudimg-amd64-disk1.img 0000664 0000000 0000000 00000000102 14605750330 0034637 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/beta-2 fake-content: foovendor-6.1-beta2-server-cloudimg-amd64-disk1.img
foovendor-6.1-beta2-server-cloudimg-amd64-root.tar.gz 0000664 0000000 0000000 00000000104 14605750330 0035242 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/beta-2 fake-content: foovendor-6.1-beta2-server-cloudimg-amd64-root.tar.gz
foovendor-6.1-beta2-server-cloudimg-amd64.tar.gz 0000664 0000000 0000000 00000000077 14605750330 0034272 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/beta-2 fake-content: foovendor-6.1-beta2-server-cloudimg-amd64.tar.gz
foovendor-6.1-beta2-server-cloudimg-i386-disk1.img 0000664 0000000 0000000 00000000101 14605750330 0034414 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/beta-2 fake-content: foovendor-6.1-beta2-server-cloudimg-i386-disk1.img
foovendor-6.1-beta2-server-cloudimg-i386-root.tar.gz 0000664 0000000 0000000 00000000103 14605750330 0035017 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/beta-2 fake-content: foovendor-6.1-beta2-server-cloudimg-i386-root.tar.gz
foovendor-6.1-beta2-server-cloudimg-i386.tar.gz 0000664 0000000 0000000 00000000076 14605750330 0034047 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/beta-2 fake-content: foovendor-6.1-beta2-server-cloudimg-i386.tar.gz
simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121001/ 0000775 0000000 0000000 00000000000 14605750330 0025134 5 ustar 00root root 0000000 0000000 foovendor-6.1-server-cloudimg-amd64-disk1.img 0000664 0000000 0000000 00000000074 14605750330 0035106 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121001 fake-content: foovendor-6.1-server-cloudimg-amd64-disk1.img
foovendor-6.1-server-cloudimg-amd64-root.tar.gz 0000664 0000000 0000000 00000000076 14605750330 0035511 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121001 fake-content: foovendor-6.1-server-cloudimg-amd64-root.tar.gz
foovendor-6.1-server-cloudimg-amd64.tar.gz 0000664 0000000 0000000 00000000071 14605750330 0034523 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121001 fake-content: foovendor-6.1-server-cloudimg-amd64.tar.gz
foovendor-6.1-server-cloudimg-i386-disk1.img 0000664 0000000 0000000 00000000073 14605750330 0034663 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121001 fake-content: foovendor-6.1-server-cloudimg-i386-disk1.img
foovendor-6.1-server-cloudimg-i386-root.tar.gz 0000664 0000000 0000000 00000000075 14605750330 0035266 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121001 fake-content: foovendor-6.1-server-cloudimg-i386-root.tar.gz
foovendor-6.1-server-cloudimg-i386.tar.gz 0000664 0000000 0000000 00000000070 14605750330 0034300 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121001 fake-content: foovendor-6.1-server-cloudimg-i386.tar.gz
simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121026.1/ 0000775 0000000 0000000 00000000000 14605750330 0025302 5 ustar 00root root 0000000 0000000 foovendor-6.1-server-cloudimg-amd64-disk1.img 0000664 0000000 0000000 00000000074 14605750330 0035254 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121026.1 fake-content: foovendor-6.1-server-cloudimg-amd64-disk1.img
foovendor-6.1-server-cloudimg-amd64-root.tar.gz 0000664 0000000 0000000 00000000076 14605750330 0035657 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121026.1 fake-content: foovendor-6.1-server-cloudimg-amd64-root.tar.gz
foovendor-6.1-server-cloudimg-amd64.tar.gz 0000664 0000000 0000000 00000000071 14605750330 0034671 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121026.1 fake-content: foovendor-6.1-server-cloudimg-amd64.tar.gz
foovendor-6.1-server-cloudimg-i386-disk1.img 0000664 0000000 0000000 00000000073 14605750330 0035031 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121026.1 fake-content: foovendor-6.1-server-cloudimg-i386-disk1.img
foovendor-6.1-server-cloudimg-i386-root.tar.gz 0000664 0000000 0000000 00000000075 14605750330 0035434 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121026.1 fake-content: foovendor-6.1-server-cloudimg-i386-root.tar.gz
foovendor-6.1-server-cloudimg-i386.tar.gz 0000664 0000000 0000000 00000000070 14605750330 0034446 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/files/release-20121026.1 fake-content: foovendor-6.1-server-cloudimg-i386.tar.gz
simplestreams_0.1.0-67-g8497b634/examples/foocloud/streams/ 0000775 0000000 0000000 00000000000 14605750330 0023204 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/streams/v1/ 0000775 0000000 0000000 00000000000 14605750330 0023532 5 ustar 00root root 0000000 0000000 com.example.foovendor:released:aws.json 0000664 0000000 0000000 00000006035 14605750330 0033166 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/streams/v1 {
"datatype": "image-ids",
"updated": "Wed, 20 Mar 2013 17:56:57 +0000",
"content_id": "com.example.foovendor:released:aws",
"products": {
"com.example.foovendor:pinky:server:amd64": {
"endpoint": "https://ec2.us-east-1.amazonaws.com",
"stream": "released",
"versions": {
"20121026.1": {
"items": {
"usee1pe": {
"name": "ebs/foovendor-pinky-12.04-amd64-server-20121026.1",
"root_store": "ebs",
"id": "ami-e720ad8e"
},
"usee1pi": {
"name": "foovendor-pinky-12.04-amd64-server-20121026.1",
"root_store": "instance-store",
"id": "ami-1f24a976"
}
},
"label": "release"
},
"20120929": {
"items": {
"usee1pe": {
"name": "ebs/foovendor-pinky-12.04-amd64-server-20120929",
"root_store": "ebs",
"id": "ami-3b4ff252"
},
"usee1pi": {
"name": "foovendor-pinky-12.04-amd64-server-20120929",
"root_store": "instance-store",
"id": "ami-cd4cf1a4"
}
},
"label": "beta2"
},
"20121001": {
"items": {
"usee1pe": {
"name": "ebs/foovendor-pinky-12.04-amd64-server-20121001",
"root_store": "ebs",
"id": "ami-9878c0f1"
},
"usee1pi": {
"name": "foovendor-pinky-12.04-amd64-server-20121001",
"root_store": "instance-store",
"id": "ami-52863e3b"
}
},
"label": "release"
}
},
"virt_type": "paravirtual",
"region": "us-east-1",
"version": "6.1",
"build": "server",
"release": "pinky",
"arch": "amd64"
},
"com.example.foovendor:pinky:server:i386": {
"endpoint": "https://ec2.us-east-1.amazonaws.com",
"stream": "released",
"versions": {
"20121026.1": {
"items": {
"usee1pe": {
"name": "ebs/foovendor-pinky-12.04-i386-server-20121026.1",
"root_store": "ebs",
"id": "ami-e720ad8e"
},
"usee1pi": {
"name": "foovendor-pinky-12.04-i386-server-20121026.1",
"root_store": "instance-store",
"id": "ami-1f24a976"
}
},
"label": "release"
},
"20120929": {
"items": {
"usee1pe": {
"name": "ebs/foovendor-pinky-12.04-i386-server-20120929",
"root_store": "ebs",
"id": "ami-3b4ff252"
},
"usee1pi": {
"name": "foovendor-pinky-12.04-i386-server-20120929",
"root_store": "instance-store",
"id": "ami-cd4cf1a4"
}
},
"label": "beta2"
},
"20121001": {
"items": {
"usee1pe": {
"name": "ebs/foovendor-pinky-12.04-i386-server-20121001",
"root_store": "ebs",
"id": "ami-9878c0f1"
},
"usee1pi": {
"name": "foovendor-pinky-12.04-i386-server-20121001",
"root_store": "instance-store",
"id": "ami-52863e3b"
}
},
"label": "release"
}
},
"virt_type": "paravirtual",
"region": "us-east-1",
"version": "6.1",
"build": "server",
"release": "pinky",
"arch": "i386"
}
},
"format": "products:1.0"
}
com.example.foovendor:released:download.json 0000664 0000000 0000000 00000015655 14605750330 0034213 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/foocloud/streams/v1 {
"datatype": "image-downloads",
"updated": "Wed, 20 Mar 2013 17:56:57 +0000",
"content_id": "com.example.foovendor:released:download",
"products": {
"com.example.foovendor:pinky:server:amd64": {
"version": "6.1",
"build": "server",
"stream": "released",
"versions": {
"20121026.1": {
"items": {
"tar.gz": {
"name": "foovendor-pinky-6.1-amd64-server-20121026.1.tar.gz",
"path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-amd64.tar.gz",
"md5": "187ea3b68f9080d4c447b910c8d0838e",
"size": 57,
"ftype": "tar.gz"
},
"disk1.img": {
"name": "foovendor-pinky-6.1-amd64-server-20121026.1-disk1.img",
"path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-amd64-disk1.img",
"md5": "da499b246265a34db8576983371c6c2a",
"size": 60,
"ftype": "disk1.img"
},
"root.tar.gz": {
"name": "foovendor-pinky-6.1-amd64-server-20121026.1-root.tar.gz",
"path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-amd64-root.tar.gz",
"md5": "da939f41059a1b1eca16c0c5856f47cd",
"size": 62,
"ftype": "root.tar.gz"
}
},
"label": "release"
},
"20120328": {
"items": {
"tar.gz": {
"name": "foovendor-pinky-6.1-beta2-amd64-server-20120328.tar.gz",
"path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64.tar.gz",
"md5": "c245123c1a7c16dd43962b71c604c5ee",
"size": 63,
"ftype": "tar.gz"
},
"disk1.img": {
"name": "foovendor-pinky-6.1-beta2-amd64-server-20120328-disk1.img",
"path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64-disk1.img",
"md5": "34cec541a18352783e736ba280a12201",
"size": 66,
"ftype": "disk1.img"
},
"root.tar.gz": {
"name": "foovendor-pinky-6.1-beta2-amd64-server-20120328-root.tar.gz",
"path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64-root.tar.gz",
"md5": "55686ef088f7baf0ebea9349055daa85",
"size": 68,
"ftype": "root.tar.gz"
}
},
"label": "beta2"
},
"20121001": {
"items": {
"tar.gz": {
"name": "foovendor-pinky-6.1-amd64-server-20121001.tar.gz",
"path": "files/release-20121001/foovendor-6.1-server-cloudimg-amd64.tar.gz",
"md5": "187ea3b68f9080d4c447b910c8d0838e",
"size": 57,
"ftype": "tar.gz"
},
"disk1.img": {
"name": "foovendor-pinky-6.1-amd64-server-20121001-disk1.img",
"path": "files/release-20121001/foovendor-6.1-server-cloudimg-amd64-disk1.img",
"md5": "da499b246265a34db8576983371c6c2a",
"size": 60,
"ftype": "disk1.img"
},
"root.tar.gz": {
"name": "foovendor-pinky-6.1-amd64-server-20121001-root.tar.gz",
"path": "files/release-20121001/foovendor-6.1-server-cloudimg-amd64-root.tar.gz",
"md5": "da939f41059a1b1eca16c0c5856f47cd",
"size": 62,
"ftype": "root.tar.gz"
}
},
"label": "release"
}
},
"release": "pinky",
"arch": "amd64"
},
"com.example.foovendor:pinky:server:i386": {
"version": "6.1",
"build": "server",
"stream": "released",
"versions": {
"20121026.1": {
"items": {
"tar.gz": {
"name": "foovendor-pinky-6.1-i386-server-20121026.1.tar.gz",
"path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-i386.tar.gz",
"md5": "1534e487730c8162131dde430aa5fa5a",
"size": 56,
"ftype": "tar.gz"
},
"disk1.img": {
"name": "foovendor-pinky-6.1-i386-server-20121026.1-disk1.img",
"path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-i386-disk1.img",
"md5": "d8fa9a536e8cf1bebcee1e26875060bb",
"size": 59,
"ftype": "disk1.img"
},
"root.tar.gz": {
"name": "foovendor-pinky-6.1-i386-server-20121026.1-root.tar.gz",
"path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-i386-root.tar.gz",
"md5": "de2b46e0fe2ce8c6eda2b4cd809505a9",
"size": 61,
"ftype": "root.tar.gz"
}
},
"label": "release"
},
"20120328": {
"items": {
"tar.gz": {
"name": "foovendor-pinky-6.1-beta2-i386-server-20120328.tar.gz",
"path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386.tar.gz",
"md5": "2cd18b60f892af68c9d49c64ce1638e4",
"size": 62,
"ftype": "tar.gz"
},
"disk1.img": {
"name": "foovendor-pinky-6.1-beta2-i386-server-20120328-disk1.img",
"path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386-disk1.img",
"md5": "e80df7995beb31571e104947e4d7b001",
"size": 65,
"ftype": "disk1.img"
},
"root.tar.gz": {
"name": "foovendor-pinky-6.1-beta2-i386-server-20120328-root.tar.gz",
"path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386-root.tar.gz",
"md5": "5d86b3e75e56e10e1019fe1153fe488f",
"size": 67,
"ftype": "root.tar.gz"
}
},
"label": "beta2"
},
"20121001": {
"items": {
"tar.gz": {
"name": "foovendor-pinky-6.1-i386-server-20121001.tar.gz",
"path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386.tar.gz",
"md5": "1534e487730c8162131dde430aa5fa5a",
"size": 56,
"ftype": "tar.gz"
},
"disk1.img": {
"name": "foovendor-pinky-6.1-i386-server-20121001-disk1.img",
"path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386-disk1.img",
"md5": "d8fa9a536e8cf1bebcee1e26875060bb",
"size": 59,
"ftype": "disk1.img"
},
"root.tar.gz": {
"name": "foovendor-pinky-6.1-i386-server-20121001-root.tar.gz",
"path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386-root.tar.gz",
"md5": "de2b46e0fe2ce8c6eda2b4cd809505a9",
"size": 61,
"ftype": "root.tar.gz"
}
},
"label": "release"
},
"samepaths": {
"items": {
"tar.gz": {
"name": "foovendor-pinky-6.1-i386-server-20121001.tar.gz",
"path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386.tar.gz",
"md5": "1534e487730c8162131dde430aa5fa5a",
"size": 56,
"ftype": "tar.gz"
},
"disk1.img": {
"name": "foovendor-pinky-6.1-i386-server-20121001-disk1.img",
"path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386-disk1.img",
"md5": "d8fa9a536e8cf1bebcee1e26875060bb",
"size": 59,
"ftype": "disk1.img"
},
"root.tar.gz": {
"name": "foovendor-pinky-6.1-i386-server-20121001-root.tar.gz",
"path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386-root.tar.gz",
"md5": "de2b46e0fe2ce8c6eda2b4cd809505a9",
"size": 61,
"ftype": "root.tar.gz"
}
},
"label": "release"
}
},
"release": "pinky",
"arch": "i386"
}
},
"format": "products:1.0"
}
simplestreams_0.1.0-67-g8497b634/examples/foocloud/streams/v1/index.json 0000664 0000000 0000000 00000001422 14605750330 0025533 0 ustar 00root root 0000000 0000000 {
"index": {
"com.example.foovendor:released:aws": {
"datatype": "image-ids",
"path": "streams/v1/com.example.foovendor:released:aws.json",
"updated": "Wed, 20 Mar 2013 17:56:57 +0000",
"products": [
"com.example.foovendor:pinky:server:amd64",
"com.example.foovendor:pinky:server:i386"
],
"format": "products:1.0"
},
"com.example.foovendor:released:download": {
"datatype": "image-downloads",
"path": "streams/v1/com.example.foovendor:released:download.json",
"updated": "Wed, 20 Mar 2013 17:56:57 +0000",
"products": [
"com.example.foovendor:pinky:server:amd64",
"com.example.foovendor:pinky:server:i386"
],
"format": "products:1.0"
}
},
"updated": "Wed, 20 Mar 2013 17:56:57 +0000",
"format": "index:1.0"
}
simplestreams_0.1.0-67-g8497b634/examples/keys/ 0000775 0000000 0000000 00000000000 14605750330 0020667 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/keys/README.txt 0000664 0000000 0000000 00000001307 14605750330 0022366 0 ustar 00root root 0000000 0000000 example.sec, example.pub:
These are example gpg public and private keys.
They are generated by tools/gen-example-key.
You should be sure to *not* trust anything simply because it is signed
with these keys.
cloud-images.pub:
pub 4096R/476CF100 2012-10-27 Ubuntu Cloud Image Builder
sub 4096R/9D817405 2012-10-27
This is the public key for the entity that signs data on
cloud-images.ubuntu.com.
cirros.pub:
pub 4096R/8F4CE6D0 2013-02-26 Cirros Signing Key
sub 4096R/0B2CED65 2013-02-26
sub 4096R/A5DDB840 2013-02-26 [expires: 2020-02-25]
This is the public key for the entity that signs simplestreams
data for http://download.cirros-cloud.net/.
simplestreams_0.1.0-67-g8497b634/examples/keys/cirros.pub 0000664 0000000 0000000 00000012307 14605750330 0022703 0 ustar 00root root 0000000 0000000 -----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.12 (GNU/Linux)
mQINBFEsA6QBEACyuJn4L+OaplLC8+W0cQw7Qqothe2bKArhXOYBwjiu231OCacd
k8B1ebvCcpNXYQ6KyAE7tDuaU0bxUNcgDKkpBDxH99nys4DHIovUgDOJS01G4RAI
Qagy5FtpEDrU1eJ41x5RP1GEA/waOxYvLnjbG7tAm3bpPpQhHldeD/ByYKset35P
he6b0kggUZM8ca9HDUty4I4yWPxyb2pSWLCHnM0oN05sFOig2o6X47dz8FYLtfiT
Ab1bLlhthVwc0vrYll7YFAoGKPwwFDA+ANuOSKcfHgR9gJtzWXODvAK0gYVsSWOh
0G2EVEufeohws1oS2tED3BCH1wyCYqLCxD2IQM1EShKUVhY81mQc4sdyoz1uurbO
gu9tPN7fCp6PRmS83q+Ic2m521dcykeWA/qkYksYIwwcfXnkZOs2XouXIv3lDKT0
rq+bS9qf87dyGAV3Jgat5IbK3nWzN+iOGu+q3X5XVWZxImK25MqHLRrxR3/n4Rkk
jQsa3dOk/o2Adj70gGyONKDH9/UTaBB6fEKBbUZyCW82K2HcZeuhCzYxV1BR5iQ/
1Jlk53tNac5gjIR2IzBu6r4zW+KWCnnE1AtNkJM4fe0ggsGvcV1j5VNSeL8XZOjn
4UCOXtgJ/rYkxaCcqCJNHzrDJCYltlUq7Tdx2FZlp+Z9RDhS4GoZBKnZHQARAQAB
tC1DaXJyb3MgU2lnbmluZyBLZXkgPHNpZ25pbmdAY2lycm9zLWNsb3VkLm5ldD6J
AjgEEwECACIFAlEsA6QCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEJWT
hISPTObQ+ooP/12NlKbc3EwG+37w9xR1/n05pUh1x0j0P++QEO8AFNiFhrt6gtoz
z50LpCYaqINuNDrY+7rXLKh5IWlNSxKnI/H06sTZaOsYtYshv9MlIZW8aJrtFuAv
HVLqDbHs4TnzAD0DL8GioepPjg424g9ocnh+J/A/sFEUw6zY+Q4kYeF+6kWWKovT
fWsb966fNRQmAclg4JkpJjyB+cRUbj8MAvP8ZMhvPDAs575kC7kOlk1isF1mZb6z
MVKybFzq4kg/s2Xgq8l3jBtx4BVGKXR8evTaJh9ABqN893yK+jpicAzeya9Yqshv
GoQkgk6X5xzcUQ3feGtM/zThUKQ4KA2tifoEDPQYKI2jHiHrf+SUsJQlAXo/VIC4
PFkUxyTKxnti3elwmj3WVGTuU87lfsdvbg0OM9FpfYhDVi1wYpjoFux3XRjHTrdy
ll6iWzhpcDPGsV8jqqXqIphoU2sEHwMZC/GwII87AFsRRiItUX9lTAMrKNzxunVe
YWTt1no1A/te5IsQK6hA31K9L0M30h0AC4vJvHpiwDl1PUfrAdyCDtcS++eT9jdK
NccMpyYmE5ZORR9xkOsfiTMNN8aP2fW2t/wB5rBOacTLVCZnu6FZ+rbB522G8vOa
yO4sBogj5H9SijrsUJq1W4CkGtFmMEEeCLjjwuhOd5bauTPrxb8G16/RuQINBFEs
A6QBEADEk7k2S2P2VH6l2HFUz5BaVnFtcGSMT8xQki2gGXt5bl67czs3Ej2vSVWa
D0lZzYxDixobwuih2ox5g9upez8SZKx6mxuLUDU493sWOTF+8uVH5h5ig+3+TOBx
U5uFks7xJpuihxmAwxslJUpoQyUA8a2WzzsXGUGKdnh/lTmm8h/XeUWuAu6Ht6ed
zgzgFaleG3BROaMVO5DWsHp3EZAq3HjEpCo6dFBV/A5b9ZQb6jIVpTInLFb6OoIh
W1w1UYIFkcsRf51Oek6W6TB9lkp5raNH3/Irg3M0zsK5i4Zaq15EdgqliH0XhpLC
1MJNLEJU6gP5Yr9GiWgRdWFmVykxG2AacHxH/hFPWZC32L4y4GX7WeAzJJg2k6Uz
5qz/DgJFa1/vfZW/0Df6TJAnSp4W12tesxgLghdwCzLdogEKqBoWLwoFhnNBe+5h
xpP8eO+nOvwfZeB1mXhVKcq0LNLrKGOediakxhrUlHUtrRDHcRSjzD1tRZO8EsCc
C2jcJjeVZ/ReH/Q/F7P3gQ3QoQzLkY0EZ+FO1z5g8T5C3KOyB7r8dHD2I2J29SOX
i+pFcSpHYw6mZXC4CvxrzpugxvvvmemZFc/Lw99NomFZZoXIUK5qhma+tMdyRQ88
MakkIll951X5U7LLtrKW5wWOSunXSYRpNSLbqCsfC/7u3NiUAwARAQABiQIfBBgB
AgAJBQJRLAOkAhsMAAoJEJWThISPTObQ/YgQAI/l1puhMa/+THkHH9j9RaEHA3cR
MVgp33QR6zA30+Rl2iFWSZjmSLOVXtlTRe2vIng9Gj5f+Oi24pdQcl6qu6UZ/89l
8X7l3pt1zOeGyliK2blCKXdBLR0tlHb+twCI5fPZ+K5NGPtMYS8zbAnrKKfWfuOr
04iiV3rJ5j4MHwGbu63HpgzwHQOQYcFWuqBk/bjPFwudt5+u/FVNAdstDkDDk9dF
W4DUFv0Yd2q1aI3/pF1HqzzmrP/1mFA7RoyiT/csihpp0WWlazN7+fTsNd6cikN5
hJtzEKD66abjLB2S3IYIgbJHKR1XylYTnFQG9fxlCR4W3xB27XaXAoGOni1DwAIT
3yUoAMWCxuXcMd5oqwQDNArc1VyRedhrQPQM4jF5vRljoLV2kkZ6WkDFNQBzB3I5
mD8NRGmkWgFGvyj6hhNgIwWijOBIG4FbwLT/HUDhiS0uoIG0lL6qGa5V+j9jnWaM
M77OG9U/z+SHuC0AL/DpKDvj0Xii+qpXeIiHQxA0XQMWTgnQozfKeBQLsKN0RMHb
IBzQg9qvws2gOtORniO7UMfv+U4rL9AQSMEan/Tohk6ZmUisHvi4vtWNsAxqNjXJ
7dqmmy8SBwkI/+eirT2Q9+vRGHqMC1Z7JDMGPSMGARRks9xKRln0FdPTyxTmObOQ
26TVo7HvWs+vVZDbuQINBFEsBg4BEADO6FgdZBI5PwpDvTplDVm10cQo7N9mcCJn
eRcrj4VQ89OTfSmJH3Qpf6JkG/UGS/kO6vIDF5b2SzRPAYT1QM3lVYhixU1gVPv1
mp4zHOJse5thnaD7HlpvM/VBnN8h+qAo6YSYhlmYc9FqLSmZRoNE2+WGp/6G/5iB
r2lOs1Iii1HW2XYgl9Rw/HnwHjXAaZiCCt0EXvLoutw3KvIirHHfkVEsX8p1dloR
HCSu40l2t29e1ujkl/LWb1UkBv9a/OrrQEVqpDfnO/ai/PJ8KSEMUYnB9eIhM5J4
OvhiaKVJI5uJVsMRFm8NUCmguVB1eXeQ1ndvWn1CeKJeyKO04BFR2ZwMVq9Pdcbe
Ga8Rz8poHI/dsut/EPPZdOnTr3LbqH+f0f+P+UsRMLXuV8O4tKZu4mrl9XEdb3yJ
18atOL2bHNvAHXXN45UXCLU6j1qWolO9yOuBNw6fPbsagK3kfvo6SBlAp1LcHjHd
OKVWM3cqUll7/xo3cBiIvMRTvZlzGSAssGWe0kt9O/0a5pPJRxyj8G1HGZ8BzDZz
JpPD9dcU/NFD5qB+gZjGrJqycdgw79JMyUCHRZpzeRDDZb0AN+NFiVqgQFnhMLek
UkILyJnVnLHaSM0vh0/++P3O6lqsQ99AKaFm1pNZCUMNF9ZU0i8/if8VKhu7Hly0
dbjDMsjWMQARAQABiQREBBgBAgAPBQJRLAYOAhsCBQkNKGiAAikJEJWThISPTObQ
wV0gBBkBAgAGBQJRLAYOAAoJEGRyWpml3bhA+SQQAMegxIPVOb7TnOzf5VazTa0a
sC+PuEWDNIz2jns2TfNHhRQx+Eb5BXQKW0OwXxyAv4+TgA2WTpWueCzEtnfXdIxH
rnjp96MLUOB/j1jIzAJADGVjcEc1kO6VVF61ba3D7ST2Z7YTQ07krsZiGD78UlxF
zoOHBkiQTIkXjIAFOu0/TqlwN4JBNRr61GtMHyI5awwgB0RParT0Vfkki3NlR2T/
bym02l2uIlGLsq0owiRdY5H9RoOyMdhKC72CSBv5UyeAqk9P5B99OyWHoQ7WZTWO
aRHkWpXaql4wWyzdoQdtJH5J5XN9/5RFUxlltcXRy/+grJUrlpd886hN8vrrg3eB
fOyts/K2JIk99WBMRf4BxwyFA9wWvoGoozFVjSzBFtluOqPtSWRHgzm5cqdZTCVT
TcSxgYkCdPlHClxv78aVi9MmEEk5TIWSHb/jjE5Lv0vOPCgdmRlhAVC7ghFj+BO1
5a5Um3TS5StZskOrAQtzg2Y7asWET9fsaWXCS/iom5saekmkbBXO4EwiQKlkoFWU
8oGSWFcIc4LnJ467QHDMGMBFq4AQfOLgi9+G1HiphXdJrDesBXV755/mBZfiIKOM
W2QLoCWi065FTRD3V7y0V5dpKjt6wVkNrVozeF0rWivErVu/r8H6Aee2L0VCPB6j
NF73z67E9Dzj4QT3LtoaWZgQAI445UYbBD/SdwtbYWTwlRkJY88isJGA2nPpDMfq
PR9JlBHoCCBf0IrsXoLXuFoCFlviROqOCsfvJ6R7bsZmmWwqWvBqENvrW8rwJTVB
XcSj23TNLaqQ/kef8JDuscVn7tDwb7dCIye0fG/eZskTsMJ9Osgt9I6S5Fz0yFHv
7xJlXjEwkxYXdyGoXEW3MbQuM7Y3LHSm8Y/M6boo9OMyrdK1XObiqhgYUoBzbD0y
nBdxQvr18wNLoM1KysvOs0yvBHhMRZXyt3pdTFCzLBYl5C94267DhiL9EJ3x8qMG
E7Yqgu9sjHrgGy/rZUlGj/l2uzd6pYzlcyaABKLO9T0H1Y2Xpx/+d9GGyy0DHVdO
FYORNtZIi3Gc5mKlL38P8eL4AAoPLp5nasy9Y3aAAKOBR2YW/BhnEpMjRndOvoxy
NCQgPZxyAKKnzCFyLvlbRN0AYhOWQDjXFHbc+RZsrJcgAR6XS9PwBe6yW1sh888c
dCSi9/XNyKJX9IR8ucdInZUAc4qrfi9zCt//4ZoHeW48OS7OgnoA3Y6RZR+UbSQE
6aBTt6z5e6SyaKRx7OlcqK3G19KmEuFzH18THtPD1s+kiUMHKXouooUoLbMEvW8P
oKte55W4stewkhS8DZV5IkvYZzPMQ8ks6GMdzHiNVkEujUkUNsEualm5Q8hEzPBA
8dxY
=NgGK
-----END PGP PUBLIC KEY BLOCK-----
simplestreams_0.1.0-67-g8497b634/examples/keys/cloud-images.pub 0000664 0000000 0000000 00000006212 14605750330 0023751 0 ustar 00root root 0000000 0000000 -----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.12 (GNU/Linux)
mQINBFCMc9EBEADDKn9mOi9VZhW+0cxmu3aFZWMg0p7NEKuIokkEdd6P+BRITccO
ddDLaBuuamMbt/V1vrxWC5J+UXe33TwgO6KGfH+ECnXD5gYdEOyjVKkUyIzYV5RV
U5BMrxTukHuh+PkcMVUy5vossCk9MivtCRIqM6eRqfeXv6IBV9MFkAbG3x96ZNI/
TqaWTlaHGszz2Axf9JccHCNfb3muLI2uVnUaojtDiZPm9SHTn6O0p7Tz7M7+P8qy
vc6bdn5FYAk+Wbo+zejYVBG/HLLE4+fNZPESGVCWZtbZODBPxppTnNVm3E84CTFt
pmWFBvBE/q2G9e8s5/mP2ATrzLdUKMxr3vcbNX+NY1Uyvn0Z02PjbxThiz1G+4qh
6Ct7gprtwXPOB/bCITZL9YLrchwXiNgLLKcGF0XjlpD1hfELGi0aPZaHFLAa6qq8
Ro9WSJljY/Z0g3woj6sXpM9TdWe/zaWhxBGmteJl33WBV7a1GucN0zF1dHIvev4F
krp13Uej3bMWLKUWCmZ01OHStLASshTqVxIBj2rgsxIcqH66DKTSdZWyBQtgm/kC
qBvuoQLFfUgIlGZihTQ96YZXqn+VfBiFbpnh1vLt24CfnVdKmzibp48KkhfqduDE
Xxx/f/uZENH7t8xCuNd3p+u1zemGNnxuO8jxS6Ico3bvnJaG4DAl48vaBQARAQAB
tG9VYnVudHUgQ2xvdWQgSW1hZ2UgQnVpbGRlciAoQ2Fub25pY2FsIEludGVybmFs
IENsb3VkIEltYWdlIEJ1aWxkZXIpIDx1YnVudHUtY2xvdWRidWlsZGVyLW5vcmVw
bHlAY2Fub25pY2FsLmNvbT6JAjgEEwECACIFAlCMc9ECGwMGCwkIBwMCBhUIAgkK
CwQWAgMBAh4BAheAAAoJEH/z9AhHbPEAvRIQAMLE4ZMYiLvwSoWPAicM+3FInaqP
2rf1ZEf1k6175/G2n8cG3vK0nIFQE9Cus+ty2LrTggm79onV2KBGGScKe3ga+meO
txj601Wd7zde10IWUa1wlTxPXBxLo6tpF4s4aw6xWOf4OFqYfPU4esKblFYn1eMK
Dd53s3/123u8BZqzFC8WSMokY6WgBa+hvr5J3qaNT95UXo1tkMf65ZXievcQJ+Hr
bp1m5pslHgd5PqzlultNWePwzqmHXXf14zI1QKtbc4UjXPQ+a59ulZLVdcpvmbjx
HdZfK0NJpQX+j5PU6bMuQ3QTMscuvrH4W41/zcZPFaPkdJE5+VcYDL17DBFVzknJ
eC1uzNHxRqSMRQy9fzOuZ72ARojvL3+cyPR1qrqSCceX1/Kp838P2/CbeNvJxadt
liwI6rzUgK7mq1Bw5LTyBo3mLwzRJ0+eJHevNpxl6VoFyuoA3rCeoyE4on3oah1G
iAJt576xXMDoa1Gdj3YtnZItEaX3jb9ZB3iz9WkzZWlZsssdyZMNmpYV30Ayj3CE
KyurYF9lzIQWyYsNPBoXORNh73jkHJmL6g1sdMaxAZeQqKqznXbuhBbt8lkbEHMJ
Stxc2IGZaNpQ+/3LCwbwCphVnSMq+xl3iLg6c0s4uRn6FGX+8aknmc/fepvRe+ba
ntqvgz+SMPKrjeevuQINBFCMc9EBEADKGFPKBL7/pMSTKf5YH1zhFH2lr7tf5hbz
ztsx6j3y+nODiaQumdG+TPMbrFlgRlJ6Ah1FTuJZqdPYObGSQ7qd/VvvYZGnDYJv
Z1kPkNDmCJrWJs+6PwNARvyLw2bMtjCIOAq/k8wByKkMzegobJgWsbr2Jb5fT4cv
FxYpm3l0QxQSw49rriO5HmwyiyG1ncvaFUcpxXJY8A2s7qX1jmjsqDY1fWsv5PaN
ue0Fr3VXfOi9p+0CfaPY0Pl4GHzat/D+wLwnOhnjl3hFtfbhY5bPl5+cD51SbOnh
2nFv+bUK5HxiZlz0bw8hTUBN3oSbAC+2zViRD/9GaBYY1QjimOuAfpO1GZmqohVI
msZKxHNIIsk5H98mN2+LB3vH+B6zrSMDm3d2Hi7ZA8wH26mLIKLbVkh7hr8RGQjf
UZRxeQEf+f8F3KVoSqmfXGJfBMUtGQMTkaIeEFpMobVeHZZ3wk+Wj3dCMZ6bbt2i
QBaoa7SU5ZmRShJkPJzCG3SkqN+g9ZcbFMQsybl+wLN7UnZ2MbSk7JEy6SLsyuVi
7EjLmqHmG2gkybisnTu3wjJezpG12oz//cuylOzjuPWUWowVQQiLs3oANzYdZ0Hp
SuNjjtEILSRnN5FAeogs0AKH6sy3kKjxtlj764CIgn1hNidSr2Hyb4xbJ/1GE3Rk
sjJi6uYIJwARAQABiQIfBBgBAgAJBQJQjHPRAhsMAAoJEH/z9AhHbPEA6IsP/3jJ
DaowJcKOBhU2TXZglHM+ZRMauHRZavo+xAKmqgQc/izgtyMxsLwJQ+wcTEQT5uqE
4DoWH2T7DGiHZd/89Qe6HuRExR4p7lQwUop7kdoabqm1wQfcqr+77Znp1+KkRDyS
lWfbsh9ARU6krQGryODEOpXJdqdzTgYhdbVRxq6dUopz1Gf+XDreFgnqJ+okGve2
fJGERKYynUmHxkFZJPWZg5ifeGVt+YY6vuOCg489dzx/CmULpjZeiOQmWyqUzqy2
QJ70/sC8BJYCjsESId9yPmgdDoMFd+gf3jhjpuZ0JHTeUUw+ncf+1kRf7LAALPJp
2PTSo7VXUwoEXDyUTM+dI02dIMcjTcY4yxvnpxRFFOtklvXt8Pwa9x/aCmJb9f0E
5FO0nj7l9pRd2g7UCJWETFRfSW52iktvdtDrBCft9OytmTl492wAmgbbGeoRq3ze
QtzkRx9cPiyNQokjXXF+SQcq586oEd8K/JUSFPdvth3IoKlfnXSQnt/hRKv71kbZ
IXmR3B/q5x2Msr+NfUxyXfUnYOZ5KertdprUfbZjudjmQ78LOvqPF8TdtHg3gD2H
+G2z+IoH7qsOsc7FaJsIIa4+dljwV3QZTE7JFmsas90bRcMuM4D37p3snOpHAHY3
p7vH1ewg+vd9ySST0+OkWXYpbMOIARfBKyrGM3nu
=+MFT
-----END PGP PUBLIC KEY BLOCK-----
simplestreams_0.1.0-67-g8497b634/examples/keys/example.pub 0000664 0000000 0000000 00000001255 14605750330 0023035 0 ustar 00root root 0000000 0000000 -----BEGIN PGP PUBLIC KEY BLOCK-----
mI0EZCL9pgEEALjMscvUXnyoFeP9McF+0yW4SFQsl8WwpXosSae3DSInIJSoEOG7
HBewSS3dOK0lHIYnDZLtA0kSuC/a43mzgRLnY2paGKRL/cC/M2z66Dib83kZgOG8
phVZH2HEMJxhb9XLibfTeIUvSgYLSnE4a00xLmlUjI11mCk36RYh4xhhABEBAAG0
XFNpbXBsZSBTdHJlYW1zIFRlc3QgVXNlciAoVGVzdCBVc2FnZSBPbmx5LiBEbyBO
b3QgSW1wb3J0LikgPHNpbXBsZXN0cmVhbXNAYm9ndXMuZXhhbXBsZS5jb20+iM4E
EwEKADgWIQQkk5C2cpQTdyB9ZpY4cK3qoW5CfAUCZCL9pgIbLwULCQgHAgYVCgkI
CwIEFgIDAQIeAQIXgAAKCRA4cK3qoW5CfP9KBACxeVNSRzLHOLvthMxvqoqB/775
AmJOPH2OiEfQOAr9C04zcW4FseBDXTS+6vydk5WsG3M7QA7p+zPiKy1atXhUOHY1
VySd2AAB0u2RWUAWgg2DaQfbzfuxztqGBqlxPsGXgpobvumXf2pNehoBH9J9T4W4
8NH75blul5zZaFvkmA==
=C/S4
-----END PGP PUBLIC KEY BLOCK-----
simplestreams_0.1.0-67-g8497b634/examples/keys/example.sec 0000664 0000000 0000000 00000002156 14605750330 0023022 0 ustar 00root root 0000000 0000000 -----BEGIN PGP PRIVATE KEY BLOCK-----
lQHYBGQi/aYBBAC4zLHL1F58qBXj/THBftMluEhULJfFsKV6LEmntw0iJyCUqBDh
uxwXsEkt3TitJRyGJw2S7QNJErgv2uN5s4ES52NqWhikS/3AvzNs+ug4m/N5GYDh
vKYVWR9hxDCcYW/Vy4m303iFL0oGC0pxOGtNMS5pVIyNdZgpN+kWIeMYYQARAQAB
AAP9HOQzr9BF7WtB8OD21G+Fh1ImTLKkD84sMMuXwFbIANzpJRSZfxEHtVRkPH1n
jPpOWVLltmDDsLryfNjV04MS3KOtCHnfM9poK+CdlLcrH6pg5QyATgktAQZdR6qA
fWZY7aKoO43bWkpj7iKg1F/eGNA//WJehZLRyZzZSQ/cvyECANFfzcJ/BlFOVrho
9s92JjGQDLVjYjrgSyGhqQr1NZptpBWbCXyaSPqCJJ+ulRz/W7/JKF1RBHdaw0Ug
+sjcB3UCAOHz6sYUs60hSlNPQCH0hk9j4y/DFm51O2qNtiFqxrNDDn3eVdbFPwLN
9r5vqR6bpe2z3eoxSuVe0V6Ylj8/W70CAI4ecsC4cBJkT83yiqWuNwndGZ1zt8+q
12THqYUZ6KSBRHrCYOt8LT5QKFzvAzAJ6oNwYk+DMbMX24mpYo+0eQ6gxbRcU2lt
cGxlIFN0cmVhbXMgVGVzdCBVc2VyIChUZXN0IFVzYWdlIE9ubHkuIERvIE5vdCBJ
bXBvcnQuKSA8c2ltcGxlc3RyZWFtc0Bib2d1cy5leGFtcGxlLmNvbT6IzgQTAQoA
OBYhBCSTkLZylBN3IH1mljhwreqhbkJ8BQJkIv2mAhsvBQsJCAcCBhUKCQgLAgQW
AgMBAh4BAheAAAoJEDhwreqhbkJ8/0oEALF5U1JHMsc4u+2EzG+qioH/vvkCYk48
fY6IR9A4Cv0LTjNxbgWx4ENdNL7q/J2TlawbcztADun7M+IrLVq1eFQ4djVXJJ3Y
AAHS7ZFZQBaCDYNpB9vN+7HO2oYGqXE+wZeCmhu+6Zd/ak16GgEf0n1Phbjw0fvl
uW6XnNloW+SY
=uILn
-----END PGP PRIVATE KEY BLOCK-----
simplestreams_0.1.0-67-g8497b634/examples/minimal/ 0000775 0000000 0000000 00000000000 14605750330 0021342 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/minimal/product1/ 0000775 0000000 0000000 00000000000 14605750330 0023103 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/minimal/product1/20150915/ 0000775 0000000 0000000 00000000000 14605750330 0024011 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/minimal/product1/20150915/root.img 0000664 0000000 0000000 00000000046 14605750330 0025472 0 ustar 00root root 0000000 0000000 content of product1/20150915/root.img
simplestreams_0.1.0-67-g8497b634/examples/minimal/product1/20150915/text.txt 0000664 0000000 0000000 00000000046 14605750330 0025536 0 ustar 00root root 0000000 0000000 content of product1/20150915/text.txt
simplestreams_0.1.0-67-g8497b634/examples/minimal/streams/ 0000775 0000000 0000000 00000000000 14605750330 0023020 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/minimal/streams/v1/ 0000775 0000000 0000000 00000000000 14605750330 0023346 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/examples/minimal/streams/v1/download.json 0000664 0000000 0000000 00000001236 14605750330 0026052 0 ustar 00root root 0000000 0000000 {
"datatype": "image-downloads",
"format": "products:1.0",
"updated": "Tue, 15 Sep 2015 16:15:24 -0400",
"content_id": "com.example:download",
"products": {
"com.example:product1": {
"versions": {
"20150915": {
"items": {
"item1": {
"ftype": "text",
"path": "product1/20150915/text.txt",
"sha256": "f4797060c79363d37129656e5069e817cfd144ba9a1db0a392eeba0c534316dd",
"size": 38
},
"item2": {
"ftype": "root.img",
"path": "product1/20150915/root.img",
"sha256": "0e2f431f4e4063483708ce24448af8586e115c984e483d60bdb532194536ece4",
"size": 38
}
}
}
}
}
}
}
simplestreams_0.1.0-67-g8497b634/examples/minimal/streams/v1/index.json 0000664 0000000 0000000 00000000506 14605750330 0025351 0 ustar 00root root 0000000 0000000 {
"index": {
"com.example:download": {
"datatype": "image-downloads",
"path": "streams/v1/download.json",
"updated": "Tue, 15 Sep 2015 16:15:24 -0400",
"products": [
"com.example:product1"
],
"format": "products:1.0"
}
},
"updated": "Tue, 15 Sep 2015 16:15:24 -0400",
"format": "index:1.0"
}
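# Example: walking the minimal stream above with only the standard
# library. An illustrative sketch; it assumes the current directory
# contains the examples/minimal tree shown here. The simplestreams
# library itself provides richer mirror and filter APIs for this.
import json
import os

def iter_items(stream_root):
    # follow index.json to each products file and yield its items
    with open(os.path.join(stream_root, "streams/v1/index.json")) as fp:
        index = json.load(fp)
    for entry in index["index"].values():
        with open(os.path.join(stream_root, entry["path"])) as fp:
            tree = json.load(fp)
        for product in tree["products"].values():
            for version in product["versions"].values():
                for item in version["items"].values():
                    yield item["path"], item["sha256"], item["size"]

for path, sha256, size in iter_items("examples/minimal"):
    print(path, sha256, size)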
simplestreams_0.1.0-67-g8497b634/setup.py 0000664 0000000 0000000 00000001254 14605750330 0017612 0 ustar 00root root 0000000 0000000 from setuptools import setup
from glob import glob
import os
VERSION = '0.1.0'
def is_f(p):
return os.path.isfile(p)
setup(
name="python-simplestreams",
description='Library and tools for using Simple Streams data',
version=VERSION,
author='Scott Moser',
author_email='scott.moser@canonical.com',
license="AGPL",
url='http://launchpad.net/simplestreams/',
packages=['simplestreams', 'simplestreams.mirrors',
'simplestreams.objectstores'],
scripts=glob('bin/*'),
data_files=[
('lib/simplestreams', glob('tools/hook-*')),
('share/doc/simplestreams',
[f for f in glob('doc/*') if is_f(f)]),
]
)
simplestreams_0.1.0-67-g8497b634/simplestreams/ 0000775 0000000 0000000 00000000000 14605750330 0020766 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/simplestreams/__init__.py 0000664 0000000 0000000 00000000025 14605750330 0023074 0 ustar 00root root 0000000 0000000 # vi: ts=4 expandtab
simplestreams_0.1.0-67-g8497b634/simplestreams/checksum_util.py 0000664 0000000 0000000 00000007236 14605750330 0024207 0 ustar 00root root 0000000 0000000 # Copyright (C) 2015 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import hashlib
# these are in order of increasing preference
CHECKSUMS = ("md5", "sha256", "sha512")
try:
ALGORITHMS = list(getattr(hashlib, 'algorithms'))
except AttributeError:
ALGORITHMS = list(hashlib.algorithms_available)
class checksummer(object):
_hasher = None
algorithm = None
expected = None
def __init__(self, checksums):
if not checksums:
self._hasher = None
return
for meth in CHECKSUMS:
if meth in checksums and meth in ALGORITHMS:
self._hasher = hashlib.new(meth)
self.algorithm = meth
self.expected = checksums.get(self.algorithm, None)
if not self._hasher:
raise TypeError("Unable to find suitable hash algorithm")
def update(self, data):
if self._hasher is None:
return
self._hasher.update(data)
def hexdigest(self):
if self._hasher is None:
return None
return self._hasher.hexdigest()
def check(self):
return (self.expected is None or self.expected == self.hexdigest())
def __str__(self):
return ("checksummer (algorithm=%s expected=%s)" %
(self.algorithm, self.expected))
def item_checksums(item):
return {k: item[k] for k in CHECKSUMS if k in item}
class SafeCheckSummer(checksummer):
"""SafeCheckSummer raises ValueError if checksums are not provided."""
def __init__(self, checksums, allowed=None):
if allowed is None:
allowed = CHECKSUMS
super(SafeCheckSummer, self).__init__(checksums)
if self.algorithm not in allowed:
raise ValueError(
"provided checksums (%s) did not include any allowed (%s)" %
(checksums, allowed))
class InvalidChecksum(ValueError):
def __init__(self, path, cksum, size=None, expected_size=None, msg=None):
self.path = path
self.cksum = cksum
self.size = size
self.expected_size = expected_size
self.msg = msg
def __str__(self):
if self.msg is not None:
return self.msg
if not isinstance(self.expected_size, int):
msg = "Invalid size '%s' at %s." % (self.expected_size, self.path)
else:
msg = ("Invalid %s Checksum at %s. Found %s. Expected %s. "
"read %s bytes expected %s bytes." %
(self.cksum.algorithm, self.path,
self.cksum.hexdigest(), self.cksum.expected,
self.size, self.expected_size))
if self.size:
msg += (" (size %s expected %s)" %
(self.size, self.expected_size))
return msg
def invalid_checksum_for_reader(reader, msg=None):
return InvalidChecksum(path=reader.url, cksum=reader.checksummer,
size=reader.bytes_read, expected_size=reader.size,
msg=msg)
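# Example: using checksummer and item_checksums from this module. A
# minimal sketch; the expected digests are computed with hashlib so the
# check succeeds by construction, and sha256 is preferred over md5.
import hashlib

from simplestreams.checksum_util import checksummer, item_checksums

data = b"content of product1/20150915/text.txt\n"
item = {"path": "product1/20150915/text.txt", "size": len(data),
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest()}
csummer = checksummer(item_checksums(item))
csummer.update(data)
assert csummer.algorithm == "sha256" and csummer.check()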
simplestreams_0.1.0-67-g8497b634/simplestreams/contentsource.py 0000664 0000000 0000000 00000033764 14605750330 0024250 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import errno
import io
import os
import sys
from . import checksum_util
if sys.version_info > (3, 0):
import urllib.parse as urlparse
import urllib.request as urllib_request
import urllib.error as urllib_error
else:
import urlparse
import urllib2 as urllib_request
urllib_error = urllib_request
READ_BUFFER_SIZE = 1024 * 10
TIMEOUT = 9.05
try:
# We try to use requests because we can do gzip encoding with it.
# however requests < 1.1 didn't have 'stream' argument to 'get'
# making it completely unsuitable for downloading large files.
import requests
from distutils.version import LooseVersion
import pkg_resources
_REQ = pkg_resources.get_distribution('requests')
_REQ_VER = LooseVersion(_REQ.version)
if _REQ_VER < LooseVersion('1.1'):
raise ImportError("Requests version < 1.1, not suitable for usage.")
URL_READER_CLASSNAME = "RequestsUrlReader"
except ImportError:
URL_READER_CLASSNAME = "Urllib2UrlReader"
requests = None
class ContentSource(object):
url = None
def open(self):
# open() may be called explicitly, but it is also called
# implicitly from read()
pass
def read(self, size=-1):
raise NotImplementedError()
def set_start_pos(self, offset):
""" Implemented if the ContentSource supports seeking within content.
Used to resume failed transfers. """
class SetStartPosNotImplementedError(NotImplementedError):
pass
raise SetStartPosNotImplementedError()
def __enter__(self):
self.open()
return self
def __exit__(self, etype, value, trace):
self.close()
def close(self):
raise NotImplementedError()
class UrlContentSource(ContentSource):
fd = None
def __init__(self, url, mirrors=None, url_reader=None):
if mirrors is None:
mirrors = []
self.mirrors = mirrors
self.input_url = url
self.url = url
self.offset = None
self.fd = None
if url_reader is None:
self.url_reader = URL_READER
else:
self.url_reader = url_reader
def _urlinfo(self, url):
parsed = urlparse.urlparse(url)
if not parsed.scheme:
if url.startswith("/"):
url = "file://%s" % url
else:
url = "file://%s/%s" % (os.getcwd(), url)
parsed = urlparse.urlparse(url)
if parsed.scheme == "file":
return (url, FileReader, (parsed.path,))
else:
return (url, self.url_reader, (url,))
def _open(self):
for url in [self.input_url] + self.mirrors:
try:
(normurl, opener, oargs) = self._urlinfo(url)
self.url = normurl
return opener(*oargs, offset=self.offset)
except IOError as e:
if e.errno != errno.ENOENT:
raise
continue
myerr = IOError("Unable to open %s. mirrors=%s" %
(self.input_url, self.mirrors))
myerr.errno = errno.ENOENT
raise myerr
def open(self):
if self.fd is None:
self.fd = self._open()
def read(self, size=-1):
if self.fd is None:
self.open()
return self.fd.read(size)
def set_start_pos(self, offset):
if self.fd is not None:
raise Exception("can't set start pos after open()")
self.offset = offset
def close(self):
if self.fd:
self.fd.close()
self.fd = None
class FdContentSource(ContentSource):
def __init__(self, fd, url=None):
self.fd = fd
self.url = url
def read(self, size=-1):
return self.fd.read(size)
def close(self):
self.fd.close()
class IteratorContentSource(ContentSource):
def __init__(self, itgen, url=None):
self.itgen = itgen
self.url = url
self.r_iter = None
self.leftover = bytes()
self.consumed = False
def open(self):
if self.r_iter:
return
try:
self.r_iter = self.itgen()
except Exception as exc:
if self.is_enoent(exc):
enoent = IOError(exc)
enoent.errno = errno.ENOENT
raise enoent
raise exc
def is_enoent(self, exc):
return (isinstance(exc, IOError) and exc.errno == errno.ENOENT)
def read(self, size=None):
self.open()
if self.consumed:
return bytes()
if (size is None or size < 0):
# read everything
ret = self.leftover
self.leftover = bytes()
for buf in self.r_iter:
ret += buf
self.consumed = True
return ret
ret = bytes()
if self.leftover:
if len(self.leftover) > size:
ret = self.leftover[0:size]
self.leftover = self.leftover[size:]
return ret
ret = self.leftover
self.leftover = bytes()
while True:
try:
ret += next(self.r_iter)
if len(ret) >= size:
self.leftover = ret[size:]
ret = ret[0:size]
break
except StopIteration:
self.consumed = True
break
return ret
def close(self):
pass
class MemoryContentSource(FdContentSource):
def __init__(self, url=None, content=""):
if isinstance(content, str):
content = content.encode('utf-8')
fd = io.BytesIO(content)
if url is None:
url = "MemoryContentSource://undefined"
super(MemoryContentSource, self).__init__(fd=fd, url=url)
class ChecksummingContentSource(ContentSource):
def __init__(self, csrc, checksums, size=None):
self.cs = csrc
self.bytes_read = 0
self.checksummer = None
self.size = size
try:
csummer = checksum_util.SafeCheckSummer(checksums)
except ValueError as e:
raise checksum_util.invalid_checksum_for_reader(self, msg=str(e))
self._set_checksummer(csummer)
try:
self.size = int(size)
except TypeError:
self.size = size
raise checksum_util.invalid_checksum_for_reader(self)
def resume(self, offset, checksummer):
self.cs.set_start_pos(offset)
self._set_checksummer(checksummer)
self.bytes_read = offset
@property
def algorithm(self):
return self.checksummer.algorithm
def _set_checksummer(self, checksummer):
if checksummer.algorithm not in checksum_util.CHECKSUMS:
raise ValueError("algorithm %s is not valid (%s)" %
(checksummer.algorithm, checksum_util.CHECKSUMS))
self.checksummer = checksummer
def check(self):
return self.bytes_read == self.size and self.checksummer.check()
def read(self, size=-1):
buf = self.cs.read(size)
buflen = len(buf)
self.checksummer.update(buf)
self.bytes_read += buflen
# read returned a different size than requested;
# if we are not at the end of the content, something is wrong
if buflen != size and self.size != self.bytes_read:
raise checksum_util.invalid_checksum_for_reader(self)
if self.bytes_read == self.size and not self.check():
raise checksum_util.invalid_checksum_for_reader(self)
return buf
def open(self):
return self.cs.open()
def close(self):
return self.cs.close()
@property
def url(self):
return self.cs.url
class UrlReader(object):
def read(self, size=-1):
raise NotImplementedError()
def close(self):
raise NotImplementedError()
class FileReader(UrlReader):
def __init__(self, path, offset=None, user_agent=None):
if path.startswith("file://"):
path = path[7:]
self.fd = open(path, "rb")
if offset is not None:
self.fd.seek(offset)
def read(self, size=-1):
return _read_fd(self.fd, size)
def close(self):
return self.fd.close()
class Urllib2UrlReader(UrlReader):
timeout = TIMEOUT
def __init__(self, url, offset=None, user_agent=None):
(url, username, password) = parse_url_auth(url)
self.url = url
if username is None:
opener = urllib_request.urlopen
else:
mgr = urllib_request.HTTPPasswordMgrWithDefaultRealm()
mgr.add_password(None, url, username, password)
handler = urllib_request.HTTPBasicAuthHandler(mgr)
opener = urllib_request.build_opener(handler).open
try:
req = urllib_request.Request(url)
if user_agent is not None:
req.add_header('User-Agent', user_agent)
if offset is not None:
req.add_header('Range', 'bytes=%d-' % offset)
self.req = opener(req, timeout=self.timeout)
except urllib_error.HTTPError as e:
if e.code == 404:
myerr = IOError("Unable to open %s" % url)
myerr.errno = errno.ENOENT
raise myerr
raise e
def read(self, size=-1):
return _read_fd(self.req, size)
def close(self):
return self.req.close()
class RequestsUrlReader(UrlReader):
"""
This provides a url reader that supports deflate/gzip encoding
but still implements 'read'.
r = RequestsUrlReader('http://example.com')
r.read(10)
r.close()
"""
# requests version greater than or equal to 2.4.1 takes a 2-tuple
# for timeout, the first being a connect timeout, the second being
# a read timeout.
timeout = (TIMEOUT, None)
def __init__(self, url, buflen=None, offset=None, user_agent=None):
if requests is None:
raise ImportError("Attempt to use RequestsUrlReader "
"without suitable requests library.")
self.url = url
(url, user, password) = parse_url_auth(url)
if user is None:
auth = None
else:
auth = (user, password)
headers = {}
if user_agent is not None:
headers['User-Agent'] = user_agent
if offset is not None:
headers['Range'] = 'bytes=%d-' % offset
if headers == {}:
headers = None
# requests version less than 2.4.1 takes an optional
# float for timeout. There is no separate read timeout
if _REQ_VER < LooseVersion('2.4.1'):
self.timeout = TIMEOUT
self.req = requests.get(
url, stream=True, auth=auth, headers=headers, timeout=self.timeout
)
self.r_iter = None
if buflen is None:
buflen = READ_BUFFER_SIZE
self.buflen = buflen
self.leftover = bytes()
self.consumed = False
if (self.req.status_code == requests.codes.NOT_FOUND):
myerr = IOError("Unable to open %s" % url)
myerr.errno = errno.ENOENT
raise myerr
ce = self.req.headers.get('content-encoding', '').lower()
if 'gzip' in ce or 'deflate' in ce:
self._read = self.read_compressed
else:
self._read = self.read_raw
def read(self, size=-1):
return self._read(size)
def read_compressed(self, size=None):
if not self.r_iter:
self.r_iter = self.req.iter_content(self.buflen)
if self.consumed:
return bytes()
if (size is None or size < 0):
# read everything
ret = self.leftover
self.leftover = bytes()
for buf in self.r_iter:
ret += buf
self.consumed = True
return ret
ret = bytes()
if self.leftover:
if len(self.leftover) > size:
ret = self.leftover[0:size]
self.leftover = self.leftover[size:]
return ret
ret = self.leftover
self.leftover = bytes()
while True:
try:
ret += next(self.r_iter)
if len(ret) >= size:
self.leftover = ret[size:]
ret = ret[0:size]
break
except StopIteration:
self.consumed = True
break
return ret
def read_raw(self, size=-1):
return _read_fd(self.req.raw, size)
def close(self):
self.req.close()
def parse_url_auth(url):
parsed = urlparse.urlparse(url)
authtok = "%s:%s@" % (parsed.username, parsed.password)
if parsed.netloc.startswith(authtok):
url = url.replace(authtok, "", 1)
return (url, parsed.username, parsed.password)
def _read_fd(fd, size=-1):
# normalize calling of fd.read()
# python3 generally wants fd.read(size=None)
# python2 generally wants fd.read(size=-1)
if size is None or size < 0:
return fd.read()
return fd.read(size)
if URL_READER_CLASSNAME == "RequestsUrlReader":
URL_READER = RequestsUrlReader
elif URL_READER_CLASSNAME == "Urllib2UrlReader":
URL_READER = Urllib2UrlReader
else:
raise Exception("Unknown URL_READER_CLASSNAME: %s" % URL_READER_CLASSNAME)
# vi: ts=4 expandtab
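# Example: wrapping a ContentSource in ChecksummingContentSource so that
# reads are verified. An illustrative sketch; the sha256 is computed up
# front so the read succeeds, while a corrupt payload would raise
# checksum_util.InvalidChecksum from read().
import hashlib

from simplestreams.contentsource import (
    ChecksummingContentSource, MemoryContentSource)

payload = b"example image payload"
csrc = MemoryContentSource(url="memory://example", content=payload)
ccs = ChecksummingContentSource(
    csrc=csrc, size=len(payload),
    checksums={'sha256': hashlib.sha256(payload).hexdigest()})
with ccs as reader:
    data = reader.read()
assert data == payload and ccs.check()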
simplestreams_0.1.0-67-g8497b634/simplestreams/filters.py 0000664 0000000 0000000 00000004531 14605750330 0023013 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
from simplestreams import util
import re
class ItemFilter(object):
def __init__(self, content, noneval=""):
rparsefmt = r"([\w|\-]+)[ ]*([!]{0,1}[=~])[ ]*(.*)[ ]*$"
parsed = re.match(rparsefmt, content)
if not parsed:
raise ValueError("Unable to parse expression %s" % content)
(key, op, val) = parsed.groups()
if op in ("!=", "="):
self._matches = val.__eq__
elif op in ("!~", "~"):
self._matches = re.compile(val).search
else:
raise ValueError("Bad parsing of %s" % content)
self.negator = (op[0] != "!")
self.op = op
self.key = key
self.value = val
self.content = content
self.noneval = noneval
def __str__(self):
return "%s %s %s [none=%s]" % (self.key, self.op,
self.value, self.noneval)
def __repr__(self):
return self.__str__()
def matches(self, item):
val = str(item.get(self.key, self.noneval))
return (self.negator == bool(self._matches(val)))
def get_filters(filters, noneval=""):
flist = []
for f in filters:
flist.append(ItemFilter(f, noneval=noneval))
return flist
def filter_item(filters, data, src, pedigree):
"Apply filter list to a products entity. Flatten before doing so."
return filter_dict(filters, util.products_exdata(src, pedigree))
def filter_dict(filters, data):
"Apply filter list to dict. Does not flatten."
for f in filters:
if not f.matches(data):
return False
return True
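# Example: applying filter expressions to flattened item data. A small
# sketch of the parser above: '=' and '!=' compare literally, '~' and
# '!~' are regex searches, and keys missing from the item compare
# against noneval (default "").
from simplestreams.filters import get_filters, filter_dict

flist = get_filters(["arch=amd64", "release~^(trusty|xenial)$"])
assert filter_dict(flist, {"arch": "amd64", "release": "xenial"})
assert not filter_dict(flist, {"arch": "arm64", "release": "xenial"})
assert not filter_dict(flist, {"arch": "amd64"})  # no release key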
simplestreams_0.1.0-67-g8497b634/simplestreams/generate_simplestreams.py 0000664 0000000 0000000 00000007563 14605750330 0026115 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013, 2015 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>,
# Aaron Bentley
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
from collections import namedtuple
from copy import deepcopy
import json
import os
import sys
from simplestreams import util
Item = namedtuple('Item', ['content_id', 'product_name', 'version_name',
'item_name', 'data'])
def items2content_trees(itemslist, exdata):
# input is a list with each item having:
# (content_id, product_name, version_name, item_name, {data})
ctrees = {}
for (content_id, prodname, vername, itemname, data) in itemslist:
if content_id not in ctrees:
ctrees[content_id] = {'content_id': content_id,
'format': 'products:1.0', 'products': {}}
ctrees[content_id].update(exdata)
ctree = ctrees[content_id]
if prodname not in ctree['products']:
ctree['products'][prodname] = {'versions': {}}
prodtree = ctree['products'][prodname]
if vername not in prodtree['versions']:
prodtree['versions'][vername] = {'items': {}}
vertree = prodtree['versions'][vername]
if itemname in vertree['items']:
raise ValueError("%s: already existed" %
str([content_id, prodname, vername, itemname]))
vertree['items'][itemname] = data
return ctrees
class FileNamer:
streamdir = 'streams/v1'
@classmethod
def get_index_path(cls):
return "%s/%s" % (cls.streamdir, 'index.json')
@classmethod
def get_content_path(cls, content_id):
return "%s/%s.json" % (cls.streamdir, content_id)
def generate_index(trees, updated, namer):
index = {"index": {}, 'format': 'index:1.0', 'updated': updated}
not_copied_up = ['content_id']
for content_id, content in trees.items():
index['index'][content_id] = {
'products': sorted(list(content['products'].keys())),
'path': namer.get_content_path(content_id),
}
for k in util.stringitems(content):
if k not in not_copied_up:
index['index'][content_id][k] = content[k]
return index
def write_streams(out_d, trees, updated, namer=None, condense=True):
if namer is None:
namer = FileNamer
index = generate_index(trees, updated, namer)
to_write = [(namer.get_index_path(), index,)]
# Don't let products_condense modify the input
trees = deepcopy(trees)
for content_id in trees:
if condense:
util.products_condense(trees[content_id],
sticky=[
'path', 'sha256', 'md5',
'size', 'mirrors'
])
content = trees[content_id]
to_write.append((index['index'][content_id]['path'], content,))
out_filenames = []
for (outfile, data) in to_write:
filef = os.path.join(out_d, outfile)
util.mkdir_p(os.path.dirname(filef))
json_dump(data, filef)
out_filenames.append(filef)
return out_filenames
def json_dump(data, filename):
with open(filename, "w") as fp:
sys.stderr.write(u"writing %s\n" % filename)
fp.write(json.dumps(data, indent=2, sort_keys=True,
separators=(',', ': ')) + "\n")
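# Example: building a products:1.0 tree from Item tuples and writing the
# stream files. A minimal sketch mirroring examples/minimal; it writes
# streams/v1/index.json and streams/v1/com.example:download.json under a
# temporary directory.
import tempfile

from simplestreams import util
from simplestreams.generate_simplestreams import (
    Item, items2content_trees, write_streams)

items = [Item('com.example:download', 'com.example:product1', '20150915',
              'item1', {'path': 'product1/20150915/text.txt', 'size': 38})]
trees = items2content_trees(items, {'datatype': 'image-downloads'})
print(write_streams(tempfile.mkdtemp(), trees, util.timestamp()))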
simplestreams_0.1.0-67-g8497b634/simplestreams/json2streams.py 0000775 0000000 0000000 00000006176 14605750330 0024007 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3
# Copyright (C) 2013, 2015 Canonical Ltd.
from argparse import ArgumentParser
import json
import os
import sys
from simplestreams import util
from simplestreams.generate_simplestreams import (
FileNamer,
Item,
items2content_trees,
json_dump,
write_streams,
)
class JujuFileNamer(FileNamer):
@classmethod
def get_index_path(cls):
return "%s/%s" % (cls.streamdir, 'index2.json')
@classmethod
def get_content_path(cls, content_id):
return "%s/%s.json" % (cls.streamdir, content_id.replace(':', '-'))
def dict_to_item(item_dict):
"""Convert a dict into an Item, mutating input."""
item_dict.pop('item_url', None)
size = item_dict.get('size')
if size is not None:
item_dict['size'] = int(size)
content_id = item_dict.pop('content_id')
product_name = item_dict.pop('product_name')
version_name = item_dict.pop('version_name')
item_name = item_dict.pop('item_name')
return Item(content_id, product_name, version_name, item_name, item_dict)
def read_items_file(filename):
with open(filename) as items_file:
item_list = json.load(items_file)
return (dict_to_item(item) for item in item_list)
def write_release_index(out_d):
in_path = os.path.join(out_d, JujuFileNamer.get_index_path())
with open(in_path) as in_file:
full_index = json.load(in_file)
full_index['index'] = dict(
(k, v) for k, v in list(full_index['index'].items())
if k == 'com.ubuntu.juju:released:tools')
out_path = os.path.join(out_d, FileNamer.get_index_path())
json_dump(full_index, out_path)
return out_path
def filenames_to_streams(filenames, updated, out_d, juju_format=False):
"""Convert a list of filenames into simplestreams.
File contents must be json simplestream stanzas.
'updated' is the date to use for 'updated' in the streams.
out_d is the directory to create streams in.
"""
items = []
for items_file in filenames:
items.extend(read_items_file(items_file))
data = {'updated': updated, 'datatype': 'content-download'}
trees = items2content_trees(items, data)
if juju_format:
write = write_juju_streams
else:
write = write_streams
return write(out_d, trees, updated)
def write_juju_streams(out_d, trees, updated):
out_filenames = write_streams(out_d, trees, updated, JujuFileNamer)
out_filenames.append(write_release_index(out_d))
return out_filenames
def parse_args(argv=None):
parser = ArgumentParser()
parser.add_argument(
'items_file', metavar='items-file', help='File to read items from',
nargs='+')
parser.add_argument(
'out_d', metavar='output-dir',
help='The directory to write stream files to.')
parser.add_argument(
'--juju-format', action='store_true',
help='Write stream files in juju format.')
return parser.parse_args(argv)
def main():
args = parse_args()
updated = util.timestamp()
filenames_to_streams(args.items_file, updated, args.out_d,
args.juju_format)
if __name__ == '__main__':
sys.exit(main())
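# Example: converting an items file with filenames_to_streams. An
# illustrative sketch; each stanza carries the Item fields plus item
# data, and string sizes are coerced to int by dict_to_item.
import json
import os
import tempfile

from simplestreams import util
from simplestreams.json2streams import filenames_to_streams

stanza = {'content_id': 'com.example:download',
          'product_name': 'com.example:product1',
          'version_name': '20150915', 'item_name': 'item1',
          'path': 'product1/20150915/text.txt', 'size': '38'}
out_d = tempfile.mkdtemp()
items_file = os.path.join(out_d, 'items.json')
with open(items_file, 'w') as fp:
    json.dump([stanza], fp)
print(filenames_to_streams([items_file], util.timestamp(), out_d))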
simplestreams_0.1.0-67-g8497b634/simplestreams/log.py 0000664 0000000 0000000 00000004010 14605750330 0022114 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import logging
DEBUG = logging.DEBUG
ERROR = logging.ERROR
FATAL = logging.FATAL
INFO = logging.INFO
NOTSET = logging.NOTSET
WARN = logging.WARN
WARNING = logging.WARNING
class NullHandler(logging.Handler):
def emit(self, record):
pass
def basicConfig(**kwargs):
# basically like logging.basicConfig but only output for our logger
if kwargs.get('filename'):
handler = logging.FileHandler(filename=kwargs['filename'],
mode=kwargs.get('filemode', 'a'))
elif kwargs.get('stream'):
handler = logging.StreamHandler(stream=kwargs['stream'])
else:
handler = NullHandler()
level = kwargs.get('level', NOTSET)
handler.setFormatter(logging.Formatter(fmt=kwargs.get('format'),
datefmt=kwargs.get('datefmt')))
handler.setLevel(level)
logging.getLogger().setLevel(level)
logger = _getLogger()
for h in list(logger.handlers):
logger.removeHandler(h)
logger.setLevel(level)
logger.addHandler(handler)
def _getLogger(name='sstreams'):
return logging.getLogger(name)
if not logging.getLogger().handlers:
logging.getLogger().addHandler(NullHandler())
LOG = _getLogger()
# vi: ts=4 expandtab syntax=python
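# Example: sending simplestreams log output to stderr. A small sketch of
# the basicConfig helper above; it attaches a single StreamHandler to
# the 'sstreams' logger used throughout the package.
import sys

from simplestreams import log

log.basicConfig(stream=sys.stderr, level=log.DEBUG,
                format='%(asctime)s %(levelname)s: %(message)s')
log.LOG.debug("debug output is now visible on stderr")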
simplestreams_0.1.0-67-g8497b634/simplestreams/mirrors/ 0000775 0000000 0000000 00000000000 14605750330 0022463 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/simplestreams/mirrors/__init__.py 0000664 0000000 0000000 00000056432 14605750330 0024606 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import errno
import io
import json
import simplestreams.filters as filters
import simplestreams.util as util
from simplestreams import checksum_util
import simplestreams.contentsource as cs
from simplestreams.log import LOG
DEFAULT_USER_AGENT = "python-simplestreams/0.1"
class MirrorReader(object):
def __init__(self, policy=util.policy_read_signed):
""" policy should be a function which returns the extracted payload or
raises an exception if the policy is violated. """
self.policy = policy
def load_products(self, path):
_, content = self.read_json(path)
return util.load_content(content)
def read_json(self, path):
with self.source(path) as source:
raw = source.read().decode('utf-8')
return raw, self.policy(content=raw, path=path)
def source(self, path):
raise NotImplementedError()
def sources_list(self, path):
raise NotImplementedError()
class MirrorWriter(object):
def load_products(self, path=None, content_id=None):
raise NotImplementedError()
def sync_products(self, reader, path=None, products=None, content=None):
# reader: a Reader for opening files referenced in products
# path: the path of where to store this.
# if path is None, do not store the products file itself
# products: a products file in products:1.0 format
# content: a rendered products tree, allowing you to store
# externally signed content.
#
# One of content, path, or products is required.
# * if path is not given, no rendering of products tree will be stored
# * if content is None, it will be loaded from reader(path).read()
# or rendered (json.dumps(products)) from products.
# * if products is None, it will be loaded from content
raise NotImplementedError()
def sync_index(self, reader, path=None, src=None, content=None):
# reader: a Reader for opening files referenced in index or products
# files
# path: the path of where to store this.
# if path is None, do not store the index file itself
# src: a dictionary in index:1.0 format
# content: a rendered products tree, allowing you to store
# externally signed content.
#
# One of content, path, or src is required.
# * if path not given, no rendering of the index tree will be stored
# * if content is None, it will be loaded from reader(path).read()
# or rendered (json.dumps(src)) from src.
# * if src is None, it will be loaded from content
raise NotImplementedError()
def sync(self, reader, path):
content, payload = reader.read_json(path)
data = util.load_content(payload)
fmt = data.get("format", "UNSPECIFIED")
if fmt == "products:1.0":
return self.sync_products(reader, path, data, content)
elif fmt == "index:1.0":
return self.sync_index(reader, path, data, content)
else:
raise TypeError("Unknown format '%s' in '%s'" % (fmt, path))
# Index Operations
def filter_index_entry(self, data, src, pedigree):
# src is source index tree.
# data is src['index'][ped[0]]
return True
def insert_index(self, path, src, content):
# src is the source index tree
# content is None or a json rendering (possibly signed) of src
pass
def insert_index_entry(self, data, src, pedigree, contentsource):
# src is the top level index (index:1.0 format)
# data is src['index'][pedigree[0]]
# contentsource is a ContentSource if 'path' exists in data or None
pass
# Products Operations
def filter_product(self, data, src, target, pedigree):
# src and target are top level products:1.0
# data is src['products'][ped[0]]
return True
def filter_version(self, data, src, target, pedigree):
# src and target are top level products:1.0
# data is src['products'][ped[0]]['versions'][ped[1]]
return True
def filter_item(self, data, src, target, pedigree):
# src and target are top level products:1.0
# data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]]
return True
def insert_products(self, path, target, content):
# path is the path to store data (where it came from on source mirror)
# target is the target products:1.0 tree
# content is None or a json rendering (possibly signed) of src
pass
def insert_product(self, data, src, target, pedigree):
# src and target are top level products:1.0
# data is src['products'][ped[0]]
pass
def insert_version(self, data, src, target, pedigree):
# src and target are top level products:1.0
# data is src['products'][ped[0]]['versions'][ped[1]]
pass
def insert_item(self, data, src, target, pedigree, contentsource):
# src and target are top level products:1.0
# data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]]
# contentsource can be either a ContentSource (if 'path' exists in
# data), a list of URLs (if the Reader support it) or None
pass
def remove_product(self, data, src, target, pedigree):
# src and target are top level products:1.0
# data is src['products'][ped[0]]
pass
def remove_version(self, data, src, target, pedigree):
# src and target are top level products:1.0
# data is src['products'][ped[0]]['versions'][ped[1]]
pass
def remove_item(self, data, src, target, pedigree):
# src and target are top level products:1.0
# data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]]
pass
class UrlMirrorReader(MirrorReader):
def __init__(self, prefix, mirrors=None, policy=util.policy_read_signed,
user_agent=DEFAULT_USER_AGENT):
super(UrlMirrorReader, self).__init__(policy=policy)
self._cs = cs.UrlContentSource
if mirrors is None:
mirrors = []
self.mirrors = mirrors
self.user_agent = user_agent
self.prefix = prefix
self._trailing_slash_checked = self.prefix.endswith("/")
def source(self, path):
mirrors = [m + path for m in self.mirrors]
if self.user_agent is not None:
# Create a custom UrlReader with the user_agent passed in,
# using the default cs.URL_READER.
def url_reader_factory(*args, **kwargs):
return cs.URL_READER(
*args, user_agent=self.user_agent, **kwargs)
else:
url_reader_factory = None
if self._trailing_slash_checked:
return self._cs(self.prefix + path, mirrors=mirrors,
url_reader=url_reader_factory)
# A little hack to fix up the user's path. It's fairly common to
# specify URLs without a trailing slash, so we try to cope with that
# here. We open, then close, and then get a new source, so the one
# we return is not yet open (LP: #1237658).
self._trailing_slash_checked = True
try:
with self._cs(self.prefix + path, mirrors=None,
url_reader=url_reader_factory) as csource:
csource.read(1024)
except Exception as e:
if isinstance(e, IOError) and (e.errno == errno.ENOENT):
LOG.warning("got ENOENT for (%s, %s), trying with trailing /",
self.prefix, path)
self.prefix = self.prefix + '/'
else:
# the probe raised some other exception; it was only an
# opportunistic check, so just log and ignore it.
LOG.debug("trailing / check on (%s, %s) resulted in %s",
self.prefix, path, e)
return self._cs(self.prefix + path, mirrors=mirrors,
url_reader=url_reader_factory)
def sources_list(self, path):
sources = [(self.prefix if self._trailing_slash_checked else
self.prefix + '/') + path]
sources += [m + path for m in self.mirrors]
return sources
class ObjectStoreMirrorReader(MirrorReader):
def __init__(self, objectstore, policy=util.policy_read_signed):
super(ObjectStoreMirrorReader, self).__init__(policy=policy)
self.objectstore = objectstore
def source(self, path):
return self.objectstore.source(path)
class BasicMirrorWriter(MirrorWriter):
def __init__(self, config=None):
super(BasicMirrorWriter, self).__init__()
if config is None:
config = {}
self.config = config
self.checksumming_reader = self.config.get('checksumming_reader', True)
self.external_download = self.config.get('external_download', False)
def load_products(self, path=None, content_id=None):
super(BasicMirrorWriter, self).load_products(path, content_id)
def sync_index(self, reader, path=None, src=None, content=None):
(src, content) = _get_data_content(path, src, content, reader)
util.expand_tree(src)
check_tree_paths(src)
itree = src.get('index')
for content_id, index_entry in itree.items():
if not self.filter_index_entry(index_entry, src, (content_id,)):
continue
epath = index_entry.get('path', None)
mycs = None
if epath:
if index_entry.get('format') in ("index:1.0", "products:1.0"):
self.sync(reader, path=epath)
mycs = reader.source(epath)
self.insert_index_entry(index_entry, src, (content_id,), mycs)
self.insert_index(path, src, content)
def sync_products(self, reader, path=None, src=None, content=None):
(src, content) = _get_data_content(path, src, content, reader)
util.expand_tree(src)
check_tree_paths(src)
content_id = src['content_id']
target = self.load_products(path, content_id)
if not target:
target = util.stringitems(src)
util.expand_tree(target)
stree = src.get('products', {})
if 'products' not in target:
target['products'] = {}
tproducts = target['products']
filtered_products = []
prodname = None
# Apply filters to items before filtering versions
for prodname, product in list(stree.items()):
for vername, version in list(product.get('versions', {}).items()):
for itemname, item in list(version.get('items', {}).items()):
pgree = (prodname, vername, itemname)
if not self.filter_item(item, src, target, pgree):
LOG.debug("Filtered out item: %s/%s", itemname, item)
del stree[prodname]['versions'][vername]['items'][
itemname]
if not stree[prodname]['versions'][vername].get(
'items', {}):
del stree[prodname]['versions'][vername]
if not stree[prodname].get('versions', {}):
del stree[prodname]
for prodname, product in stree.items():
if not self.filter_product(product, src, target, (prodname,)):
filtered_products.append(prodname)
continue
if prodname not in tproducts:
tproducts[prodname] = util.stringitems(product)
tproduct = tproducts[prodname]
if 'versions' not in tproduct:
tproduct['versions'] = {}
src_filtered_items = []
def _filter(itemkey):
ret = self.filter_version(product['versions'][itemkey],
src, target, (prodname, itemkey))
if not ret:
src_filtered_items.append(itemkey)
return ret
(to_add, to_remove) = util.resolve_work(
src=list(product.get('versions', {}).keys()),
target=list(tproduct.get('versions', {}).keys()),
maxnum=self.config.get('max_items'),
keep=self.config.get('keep_items'), itemfilter=_filter)
LOG.info("%s/%s: to_add=%s to_remove=%s", content_id, prodname,
to_add, to_remove)
tversions = tproduct['versions']
skipped_versions = []
for vername in to_add:
version = product['versions'][vername]
if vername not in tversions:
tversions[vername] = util.stringitems(version)
added_items = []
for itemname, item in version.get('items', {}).items():
pgree = (prodname, vername, itemname)
added_items.append(itemname)
ipath = item.get('path', None)
ipath_cs = None
if ipath and reader:
if self.external_download and hasattr(reader,
"sources_list"):
ipath_cs = reader.sources_list(ipath)
elif self.checksumming_reader:
flat = util.products_exdata(src, pgree)
ipath_cs = cs.ChecksummingContentSource(
csrc=reader.source(ipath),
size=flat.get('size'),
checksums=checksum_util.item_checksums(flat))
else:
ipath_cs = reader.source(ipath)
self.insert_item(item, src, target, pgree, ipath_cs)
if len(added_items):
# do not insert versions that had all items filtered
self.insert_version(version, src, target,
(prodname, vername))
else:
skipped_versions.append(vername)
for vername in skipped_versions:
if vername in tproduct['versions']:
del tproduct['versions'][vername]
if self.config.get('delete_filtered_items', False):
tkeys = tproduct.get('versions', {}).keys()
for v in src_filtered_items:
if v not in to_remove and v in tkeys:
to_remove.append(v)
LOG.info("After deletions %s/%s: to_add=%s to_remove=%s",
content_id, prodname, to_add, to_remove)
for vername in to_remove:
tversion = tversions[vername]
for itemname in list(tversion.get('items', {}).keys()):
self.remove_item(tversion['items'][itemname], src, target,
(prodname, vername, itemname))
self.remove_version(tversion, src, target, (prodname, vername))
del tversions[vername]
self.insert_product(tproduct, src, target, (prodname,))
# FIXME: below will remove products if they're in target
# (result of load_products) but not in the source products.
# that could accidentally delete a lot.
#
del_products = []
if self.config.get('delete_products', False):
del_products.extend([p for p in list(tproducts.keys())
if p not in stree])
if self.config.get('delete_filtered_products', False):
del_products.extend([p for p in filtered_products
if p not in stree])
for prodname in del_products:
# FIXME: we remove a product here, but unless that acts
# recursively, nothing will remove the items in that product
self.remove_product(tproducts[prodname], src, target, (prodname,))
del tproducts[prodname]
self.insert_products(path, target, content)
# ObjectStoreMirrorWriter stores data in .data/<content_id> inside its object store.
class ObjectStoreMirrorWriter(BasicMirrorWriter):
def __init__(self, config, objectstore):
super(ObjectStoreMirrorWriter, self).__init__(config=config)
self.store = objectstore
def products_data_path(self, content_id):
return ".data/%s" % content_id
def _reference_count_data_path(self):
return ".data/references.json"
def _load_rc_dict(self):
try:
with self.source(self._reference_count_data_path()) as source:
raw = source.read()
return json.load(io.StringIO(raw.decode('utf-8')))
except IOError as e:
if e.errno == errno.ENOENT:
return {}
raise
def _persist_rc_dict(self, rc):
source = cs.MemoryContentSource(content=json.dumps(rc))
self.store.insert(self._reference_count_data_path(), source)
def _build_rc_id(self, src, pedigree):
return '/'.join([src['content_id']] + list(pedigree))
def _inc_rc(self, path, src, pedigree):
rc = self._load_rc_dict()
id_ = self._build_rc_id(src, pedigree)
if path not in rc:
rc[path] = [id_]
else:
rc[path].append(id_)
self._persist_rc_dict(rc)
def _dec_rc(self, path, src, pedigree):
rc = self._load_rc_dict()
id_ = self._build_rc_id(src, pedigree)
entry = rc.get(path, None)
ok_to_delete = False
if entry is not None:
if len(entry) == 1:
del rc[path]
ok_to_delete = True
else:
rc[path] = list(filter(lambda x: x != id_, rc[path]))
self._persist_rc_dict(rc)
return ok_to_delete
def load_products(self, path=None, content_id=None):
if content_id:
try:
dpath = self.products_data_path(content_id)
with self.source(dpath) as source:
return util.load_content(source.read())
except IOError as e:
if e.errno != errno.ENOENT:
raise
if path:
# we possibly have 'path' that we could read.
# but returning that would indicate we have inserted all items
# rather than just the list of items that were mirrored.
# this is because the .data/ entry was missing.
# thus, just return empty.
return {}
raise TypeError("unable to load_products with no path")
def source(self, path):
return self.store.source(path)
def insert_item(self, data, src, target, pedigree, contentsource):
util.products_set(target, data, pedigree)
if 'path' not in data:
return
if not self.config.get('item_download', True):
return
LOG.debug("inserting %s to %s", contentsource.url, data['path'])
self.store.insert(data['path'], contentsource,
checksums=checksum_util.item_checksums(data),
mutable=False, size=data.get('size'))
self._inc_rc(data['path'], src, pedigree)
def insert_index_entry(self, data, src, pedigree, contentsource):
epath = data.get('path', None)
if not epath:
return
self.store.insert(epath, contentsource,
checksums=checksum_util.item_checksums(data))
def insert_products(self, path, target, content):
dpath = self.products_data_path(target['content_id'])
self.store.insert_content(dpath, util.dump_data(target))
if not path:
return
if not content:
content = util.dump_data(target)
self.store.insert_content(path, content)
def insert_index(self, path, src, content):
if not path:
return
if not content:
content = util.dump_data(src)
self.store.insert_content(path, content)
def remove_item(self, data, src, target, pedigree):
util.products_del(target, pedigree)
if 'path' not in data:
return
if self._dec_rc(data['path'], src, pedigree):
self.store.remove(data['path'])
class ObjectFilterMirror(ObjectStoreMirrorWriter):
def __init__(self, *args, **kwargs):
super(ObjectFilterMirror, self).__init__(*args, **kwargs)
self.filters = self.config.get('filters', [])
def filter_item(self, data, src, target, pedigree):
return filters.filter_item(self.filters, data, src, pedigree)
class DryRunMirrorWriter(ObjectFilterMirror):
def __init__(self, *args, **kwargs):
super(DryRunMirrorWriter, self).__init__(*args, **kwargs)
self.downloading = []
self.removing = []
# All insert/remove operations are noops.
def noop(*args):
pass
insert_index = noop
insert_index_entry = noop
insert_products = noop
insert_product = noop
insert_version = noop
remove_product = noop
remove_version = noop
def insert_item(self, data, src, target, pedigree, contentsource):
data = util.products_exdata(src, pedigree)
if 'size' in data and 'path' in data:
self.downloading.append(
(pedigree, data['path'], int(data['size'])))
def remove_item(self, data, src, target, pedigree):
data = util.products_exdata(src, pedigree)
if 'size' in data and 'path' in data:
self.removing.append(
(pedigree, data['path'], int(data['size'])))
@property
def size(self):
downloading = sum([size for _, _, size in self.downloading])
removing = sum([size for _, _, size in self.removing])
return int(downloading - removing)
def _get_data_content(path, data, content, reader):
if content is None and path:
_, content = reader.read(path)
if isinstance(content, bytes):
content = content.decode('utf-8')
if data is None and content:
data = util.load_content(content)
if not data:
raise ValueError("Data could not be loaded. "
"Path or content is required")
return (data, content)
def check_tree_paths(tree, fmt=None):
if fmt is None:
fmt = tree.get('format')
if fmt == "products:1.0":
def check_path(item, tree, pedigree):
util.assert_safe_path(item.get('path'))
util.walk_products(tree, cb_item=check_path)
elif fmt == "index:1.0":
index = tree.get('index')
for content_id in index:
util.assert_safe_path(index[content_id].get('path'))
# vi: ts=4 expandtab
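# Example: mirroring a stream with the reader/writer classes above. An
# illustrative sketch: FileStore is assumed from simplestreams.objectstores
# (packaged per setup.py but not shown here), and the signing policy is
# replaced with a pass-through because the examples/minimal tree is
# unsigned; real mirrors should keep the default util.policy_read_signed.
from simplestreams.mirrors import ObjectFilterMirror, UrlMirrorReader
from simplestreams.objectstores import FileStore

def mirror_stream(source_url, out_d):
    # source_url e.g. "file:///path/to/examples/minimal/"
    reader = UrlMirrorReader(source_url,
                             policy=lambda content, path: content)
    writer = ObjectFilterMirror({'filters': [], 'max_items': 1},
                                FileStore(out_d))
    writer.sync(reader, "streams/v1/index.json")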
simplestreams_0.1.0-67-g8497b634/simplestreams/mirrors/command_hook.py 0000664 0000000 0000000 00000023610 14605750330 0025475 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import simplestreams.mirrors as mirrors
import simplestreams.util as util
import os
import errno
import signal
import subprocess
import tempfile
REQUIRED_FIELDS = ("load_products",)
HOOK_NAMES = (
"filter_index_entry",
"filter_item",
"filter_product",
"filter_version",
"insert_index",
"insert_index_entry",
"insert_item",
"insert_product",
"insert_products",
"insert_version",
"load_products",
"remove_item",
"remove_product",
"remove_version",
)
DEFAULT_HOOK_NAME = "hook"
ENV_HOOK_NAME = "HOOK"
ENV_FIELDS_NAME = "FIELDS"
class CommandHookMirror(mirrors.BasicMirrorWriter):
"""
CommandHookMirror: invoke commands to implement a SimpleStreamMirror
Available command hooks:
load_products:
invoked to list items in the products in a given content_id.
See product_load_output_format.
filter_index_entry, filter_item, filter_product, filter_version:
invoked to determine if the named entity should be operated on
exit 0 for "yes", 1 for "no".
insert_index, insert_index_entry, insert_item, insert_product,
insert_products, insert_version :
invoked to insert the given entity.
remove_product, remove_version, remove_item:
invoked to remove the given entity
Other Configuration:
product_load_output_format: one of [serial_list, json]
serial_list: the default; output should be whitespace-delimited
pairs of product_name and version, one per line.
json: output should be a json-formatted dictionary shaped like
products:1.0 content.
Environments / Variables:
When a hook is invoked, data about the relevant entity is
made available in the environment.
In all cases:
* a special 'FIELDS' key is available which is a space delimited
list of keys
* a special 'HOOK' field is available that specifies which
hook is being called.
For an item in a products:1.0 file that has a 'path' item, the item
will be downloaded and a 'path_local' field inserted into the metadata
which will contain the path to the local file.
If the configuration setting 'item_skip_download' is set to True, then
'path_url' will be set instead to a url where the item can be found.
"""
def __init__(self, config):
if isinstance(config, str):
config = util.load_content(config)
check_config(config)
super(CommandHookMirror, self).__init__(config=config)
def load_products(self, path=None, content_id=None):
(_rc, output) = self.call_hook('load_products',
data={'content_id': content_id},
capture=True)
fmt = self.config.get("product_load_output_format", "serial_list")
loaded = load_product_output(output=output, content_id=content_id,
fmt=fmt)
return loaded
def filter_index_entry(self, data, src, pedigree):
mdata = util.stringitems(src)
mdata['content_id'] = pedigree[0]
mdata.update(util.stringitems(data))
(ret, _output) = self.call_hook('filter_index_entry', data=mdata,
rcs=[0, 1])
return ret == 0
def filter_product(self, data, src, target, pedigree):
return self._call_filter('filter_product', src, pedigree)
def filter_version(self, data, src, target, pedigree):
return self._call_filter('filter_version', src, pedigree)
def filter_item(self, data, src, target, pedigree):
return self._call_filter('filter_item', src, pedigree)
def _call_filter(self, name, src, pedigree):
data = util.products_exdata(src, pedigree)
(ret, _output) = self.call_hook(name, data=data, rcs=[0, 1])
return ret == 0
def insert_index(self, path, src, content):
return self.call_hook('insert_index', data=src, content=content,
extra={'path': path})
def insert_products(self, path, target, content):
return self.call_hook('insert_products', data=target,
content=content, extra={'path': path})
def insert_product(self, data, src, target, pedigree):
return self.call_hook('insert_product',
data=util.products_exdata(src, pedigree))
def insert_version(self, data, src, target, pedigree):
return self.call_hook('insert_version',
data=util.products_exdata(src, pedigree))
def insert_item(self, data, src, target, pedigree, contentsource):
mdata = util.products_exdata(src, pedigree)
tmp_path = None
tmp_del = None
extra = {}
if 'path' in data:
extra.update({'item_url': contentsource.url})
if not self.config.get('item_skip_download', False):
try:
(tmp_path, tmp_del) = util.get_local_copy(contentsource)
extra['path_local'] = tmp_path
finally:
contentsource.close()
try:
ret = self.call_hook('insert_item', data=mdata, extra=extra)
finally:
if tmp_del and os.path.exists(tmp_path):
os.unlink(tmp_path)
return ret
def remove_product(self, data, src, target, pedigree):
return self.call_hook('remove_product',
data=util.products_exdata(src, pedigree))
def remove_version(self, data, src, target, pedigree):
return self.call_hook('remove_version',
data=util.products_exdata(src, pedigree))
def remove_item(self, data, src, target, pedigree):
return self.call_hook('remove_item',
data=util.products_exdata(target, pedigree))
def call_hook(self, hookname, data, capture=False, rcs=None, extra=None,
content=None):
command = self.config.get(hookname, self.config.get(DEFAULT_HOOK_NAME))
if not command:
# return successful execution with no output
return (0, '')
if isinstance(command, str):
command = ['sh', '-c', command]
fdata = util.stringitems(data)
content_file = None
if content is not None:
(tfd, content_file) = tempfile.mkstemp()
tfile = os.fdopen(tfd, "w")
tfile.write(content)
tfile.close()
fdata['content_file_path'] = content_file
if extra:
fdata.update(extra)
fdata['HOOK'] = hookname
try:
return call_hook(command=command, data=fdata,
unset=self.config.get('unset_value', None),
capture=capture, rcs=rcs)
finally:
if content_file:
os.unlink(content_file)
def call_hook(command, data, unset=None, capture=False, rcs=None):
env = os.environ.copy()
data = data.copy()
data[ENV_FIELDS_NAME] = ' '.join([k for k in data if k != ENV_HOOK_NAME])
mcommand = render(command, data, unset=unset)
env.update(data)
return run_command(mcommand, env=env, capture=capture, rcs=rcs)
def render(inputs, data, unset=None):
fdata = data.copy()
outputs = []
for i in inputs:
while True:
try:
outputs.append(i % fdata)
break
except KeyError as err:
if unset is None:
raise
for key in err.args:
fdata[key] = unset
return outputs
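# Illustrative sketch of render(): unknown %-format keys fall back to the
# 'unset' sentinel instead of raising KeyError (values here are made up):
#
#   render(['echo', '%(product_name)s', '%(serial)s'],
#          {'product_name': 'foo'}, unset='_unset_')
#   => ['echo', 'foo', '_unset_']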
def check_config(config):
missing = []
for f in REQUIRED_FIELDS:
if f not in config and config.get(DEFAULT_HOOK_NAME) is None:
missing.append(f)
if missing:
raise TypeError("Missing required config entries for %s" % missing)
def load_product_output(output, content_id, fmt="serial_list"):
# parse command output and return
if fmt == "serial_list":
# "line" format just is a list of serials that are present
working = {'content_id': content_id, 'products': {}}
for line in output.splitlines():
(product_id, version) = line.split(None, 1)
if product_id not in working['products']:
working['products'][product_id] = {'versions': {}}
working['products'][product_id]['versions'][version] = {}
return working
elif fmt == "json":
return util.load_content(output)
    # unknown format: nothing to parse, return None
    return None
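# Illustrative sketch of the "serial_list" parsing above (ids made up):
#
#   load_product_output("prod1 20130101\nprod1 20130102\n", "cid")
#   => {'content_id': 'cid',
#       'products': {'prod1': {'versions': {'20130101': {},
#                                           '20130102': {}}}}}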
def run_command(cmd, env=None, capture=False, rcs=None):
if not rcs:
rcs = [0]
if not capture:
stdout = None
else:
stdout = subprocess.PIPE
sp = subprocess.Popen(cmd, env=env, stdout=stdout, shell=False)
(out, _err) = sp.communicate()
rc = sp.returncode
if rc == 0x80 | signal.SIGPIPE:
exc = IOError("Child Received SIGPIPE: %s" % str(cmd))
exc.errno = errno.EPIPE
raise exc
if rc not in rcs:
raise subprocess.CalledProcessError(rc, cmd)
if out is None:
out = ''
elif isinstance(out, bytes):
out = out.decode('utf-8')
return (rc, out)
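# Illustrative sketch of run_command(): rcs whitelists acceptable exit codes
# and capture decides whether stdout is collected (command here is made up):
#
#   run_command(['sh', '-c', 'echo hi; exit 1'], capture=True, rcs=[0, 1])
#   => (1, 'hi\n')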
# vi: ts=4 expandtab syntax=python
simplestreams_0.1.0-67-g8497b634/simplestreams/mirrors/glance.py 0000664 0000000 0000000 00000066335 14605750330 0024303 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import simplestreams.filters as filters
import simplestreams.mirrors as mirrors
import simplestreams.util as util
from simplestreams import checksum_util
import simplestreams.openstack as openstack
from simplestreams.log import LOG
import copy
import collections
import errno
import glanceclient
import json
import os
import re
def get_glanceclient(version='1', **kwargs):
# newer versions of the glanceclient will do this 'strip_version' for
# us, but older versions do not.
kwargs['endpoint'] = _strip_version(kwargs['endpoint'])
pt = ('endpoint', 'token', 'insecure', 'cacert')
kskw = {k: kwargs.get(k) for k in pt if k in kwargs}
if kwargs.get('session'):
sess = kwargs.get('session')
return glanceclient.Client(version, session=sess)
else:
return glanceclient.Client(version, **kskw)
def empty_iid_products(content_id):
return {'content_id': content_id, 'products': {},
'datatype': 'image-ids', 'format': 'products:1.0'}
def canonicalize_arch(arch):
'''Canonicalize Ubuntu archs for use in OpenStack'''
newarch = arch.lower()
if newarch == "amd64":
newarch = "x86_64"
if newarch == "i386":
newarch = "i686"
if newarch == "ppc64el":
newarch = "ppc64le"
if newarch == "powerpc":
newarch = "ppc"
if newarch == "armhf":
newarch = "armv7l"
if newarch == "arm64":
newarch = "aarch64"
return newarch
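# Illustrative mapping examples for canonicalize_arch():
#   'amd64' => 'x86_64', 'arm64' => 'aarch64', 'ppc64el' => 'ppc64le'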
LXC_FTYPES = {
'root.tar.gz': 'root-tar',
'root.tar.xz': 'root-tar',
'squashfs': 'squashfs',
}
QEMU_FTYPES = {
'disk.img': 'qcow2',
'disk1.img': 'qcow2',
}
def disk_format(ftype):
'''Canonicalize disk formats for use in OpenStack.
Input ftype is a 'ftype' from a simplestream feed.
Return value is the appropriate 'disk_format' for glance.'''
newftype = ftype.lower()
if newftype in LXC_FTYPES:
return LXC_FTYPES[newftype]
if newftype in QEMU_FTYPES:
return QEMU_FTYPES[newftype]
return None
def hypervisor_type(ftype):
'''Determine hypervisor type based on image format'''
newftype = ftype.lower()
if newftype in LXC_FTYPES:
return 'lxc'
if newftype in QEMU_FTYPES:
return 'qemu'
return None
def virt_type(hypervisor_type):
'''Map underlying hypervisor types into high level virt types'''
newhtype = hypervisor_type.lower()
if newhtype == 'qemu':
return 'kvm'
if newhtype == 'lxc':
return 'lxd'
return None
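# Illustrative chain for a qcow2 image (ftype 'disk1.img'):
#   disk_format('disk1.img') => 'qcow2'
#   hypervisor_type('disk1.img') => 'qemu'
#   virt_type('qemu') => 'kvm'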
# glance mirror 'image-downloads' content into glance
# if provided an object store, it will produce a 'image-ids' mirror
class GlanceMirror(mirrors.BasicMirrorWriter):
"""
GlanceMirror syncs external simplestreams index and images to Glance.
`client` argument is used for testing to override openstack module:
allows dependency injection of fake "openstack" module.
"""
def __init__(self, config, objectstore=None, region=None,
name_prefix=None, progress_callback=None,
client=None):
super(GlanceMirror, self).__init__(config=config)
self.item_filters = self.config.get('item_filters', [])
if len(self.item_filters) == 0:
self.item_filters = ['ftype~(disk1.img|disk.img)',
'arch~(x86_64|amd64|i386)']
self.item_filters = filters.get_filters(self.item_filters)
self.index_filters = self.config.get('index_filters', [])
if len(self.index_filters) == 0:
self.index_filters = ['datatype=image-downloads']
self.index_filters = filters.get_filters(self.index_filters)
self.loaded_content = {}
self.store = objectstore
if client is None:
client = openstack
self.keystone_creds = client.load_keystone_creds()
self.name_prefix = name_prefix or ""
if region is not None:
self.keystone_creds['region_name'] = region
self.progress_callback = progress_callback
conn_info = client.get_service_conn_info(
'image', **self.keystone_creds)
self.glance_api_version = conn_info['glance_version']
self.gclient = get_glanceclient(version=self.glance_api_version,
**conn_info)
self.tenant_id = conn_info['tenant_id']
self.region = self.keystone_creds.get('region_name', 'nullregion')
self.cloudname = config.get("cloud_name", 'nullcloud')
self.crsn = '-'.join((self.cloudname, self.region,))
self.auth_url = self.keystone_creds['auth_url']
self.content_id = config.get("content_id")
self.modify_hook = config.get("modify_hook")
self.inserts = {}
if not self.content_id:
raise TypeError("content_id is required")
self.custom_properties = collections.OrderedDict(
prop.split('=') for prop in config.get("custom_properties", [])
)
self.visibility = config.get("visibility", "public")
self.image_import_conversion = config.get("image_import_conversion")
self.set_latest_property = config.get("set_latest_property")
def _cidpath(self, content_id):
return "streams/v1/%s.json" % content_id
def load_products(self, path=None, content_id=None):
"""
Load metadata for all currently uploaded active images in Glance.
Uses glance as the definitive store, but loads metadata from existing
simplestreams indexes as well.
"""
my_cid = self.content_id
# glance is the definitive store. Any data loaded from the store
# is secondary.
store_t = None
if self.store:
try:
path = self._cidpath(my_cid)
store_t = util.load_content(self.store.source(path).read())
except IOError as e:
if e.errno != errno.ENOENT:
raise
if not store_t:
store_t = empty_iid_products(my_cid)
glance_t = empty_iid_products(my_cid)
images = self.gclient.images.list()
for image in images:
if self.glance_api_version == "1":
image = image.to_dict()
props = image['properties']
else:
props = copy.deepcopy(image)
if image['owner'] != self.tenant_id:
continue
if props.get('content_id') != my_cid:
continue
if image.get('status') != "active":
LOG.warning("Ignoring inactive image %s with status '%s'" % (
image['id'], image.get('status')))
continue
source_content_id = props.get('source_content_id')
product = props.get('product_name')
version = props.get('version_name')
item = props.get('item_name')
if not (version and product and item and source_content_id):
LOG.warning("%s missing required fields" % image['id'])
continue
# get data from the datastore for this item, if it exists
# and then update that with glance data (just in case different)
try:
item_data = util.products_exdata(store_t,
(product, version, item,),
include_top=False,
insert_fieldnames=False)
except KeyError:
item_data = {}
# If original simplestreams-metadata is stored on the image,
# use that as well.
if 'simplestreams_metadata' in props:
simplestreams_metadata = json.loads(
props.get('simplestreams_metadata'))
else:
simplestreams_metadata = {}
item_data.update(simplestreams_metadata)
item_data.update({'name': image['name'], 'id': image['id']})
if 'owner_id' not in item_data:
item_data['owner_id'] = self.tenant_id
util.products_set(glance_t, item_data,
(product, version, item,))
for product in glance_t['products']:
glance_t['products'][product]['region'] = self.region
glance_t['products'][product]['endpoint'] = self.auth_url
return glance_t
def filter_item(self, data, src, target, pedigree):
return filters.filter_item(self.item_filters, data, src, pedigree)
def create_glance_properties(self, content_id, source_content_id,
image_metadata, hypervisor_mapping):
"""
Construct extra properties to store in Glance for an image.
Based on source image metadata.
"""
properties = {
'content_id': content_id,
'source_content_id': source_content_id,
}
        # An iterable of properties to carry over; entries that need
        # renaming use a tuple of (old name, new name).
carry_over_simple = (
'product_name', 'version_name', 'item_name')
carry_over = carry_over_simple + (
('os', 'os_distro'), ('version', 'os_version'))
for carry_over_property in carry_over:
if isinstance(carry_over_property, tuple):
name_old, name_new = carry_over_property
else:
name_old = name_new = carry_over_property
properties[name_new] = image_metadata.get(name_old)
if 'arch' in image_metadata:
properties['architecture'] = canonicalize_arch(
image_metadata['arch'])
if hypervisor_mapping and 'ftype' in image_metadata:
_hypervisor_type = hypervisor_type(image_metadata['ftype'])
if _hypervisor_type:
properties['hypervisor_type'] = _hypervisor_type
properties.update(self.custom_properties)
if self.set_latest_property:
properties['latest'] = "true"
# Store flattened metadata for a source image along with the
# image in 'simplestreams_metadata' property.
simplestreams_metadata = image_metadata.copy()
drop_keys = carry_over_simple + ('path',)
for remove_key in drop_keys:
if remove_key in simplestreams_metadata:
del simplestreams_metadata[remove_key]
properties['simplestreams_metadata'] = json.dumps(
simplestreams_metadata, sort_keys=True)
return properties
def prepare_glance_arguments(self, full_image_name, image_metadata,
image_md5_hash, image_size, image_properties):
"""
Prepare arguments to pass into Glance image creation method.
Uses `image_metadata` for source image to derive image size, md5 hash,
disk format (based on 'ftype' field, if defined, otherwise defaults to
'qcow2').
If `image_md5_hash` and `image_size` are defined, overrides the
values from image_metadata with their values.
Sets extra image properties to dict `image_properties`.
Returns a dict to use as keyword arguments passed directly to
GlanceClient.images.create().
"""
create_kwargs = {
'name': full_image_name,
'container_format': 'bare',
'is_public': self.visibility == 'public',
'properties': image_properties,
}
        # In v2, is_public=True is expressed as visibility='public'
if self.glance_api_version == "2":
del create_kwargs['is_public']
create_kwargs['visibility'] = self.visibility
# v2 automatically calculates size and checksum
if self.glance_api_version == "1":
if 'size' in image_metadata:
create_kwargs['size'] = int(image_metadata.get('size'))
if 'md5' in image_metadata:
create_kwargs['checksum'] = image_metadata.get('md5')
if image_md5_hash and image_size:
create_kwargs.update({
'checksum': image_md5_hash,
'size': image_size,
})
if self.image_import_conversion:
create_kwargs['disk_format'] = 'raw'
elif 'ftype' in image_metadata:
create_kwargs['disk_format'] = (
disk_format(image_metadata['ftype']) or 'qcow2'
)
else:
create_kwargs['disk_format'] = 'qcow2'
return create_kwargs
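    # Illustrative sketch (values made up): for glance v2, ftype 'disk1.img'
    # and no image_import_conversion, the returned kwargs resemble:
    #   {'name': 'prefix-ubuntu-...-disk1.img', 'container_format': 'bare',
    #    'visibility': 'public', 'properties': {...}, 'disk_format': 'qcow2'}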
def download_image(self, contentsource, image_stream_data):
"""
Download an image from contentsource.
`image_stream_data` represents a flattened image metadata structure
to use for any logging messages.
Returns a tuple of
(str(local-image-path), int(image-size), str(image-md5-hash)).
"""
image_name = image_stream_data.get('pubname')
image_size = image_stream_data.get('size')
if self.progress_callback:
def progress_wrapper(written):
self.progress_callback(
dict(status="Downloading", name=image_name,
size=None if image_size is None else int(image_size),
written=written))
else:
def progress_wrapper(written):
pass
try:
tmp_path, _ = util.get_local_copy(
contentsource, progress_callback=progress_wrapper)
if self.modify_hook:
(new_size, new_md5) = call_hook(
item=image_stream_data, path=tmp_path,
cmd=self.modify_hook)
else:
new_size = os.path.getsize(tmp_path)
new_md5 = image_stream_data.get('md5')
finally:
contentsource.close()
return tmp_path, new_size, new_md5
def adapt_source_entry(self, source_entry, hypervisor_mapping, image_name,
image_md5_hash, image_size):
"""
Adapts the source simplestreams dict `source_entry` for use in the
generated local simplestreams index.
"""
output_entry = source_entry.copy()
# Drop attributes not needed for the simplestreams index itself.
for property_name in ('path', 'product_name', 'version_name',
'item_name'):
if property_name in output_entry:
del output_entry[property_name]
if hypervisor_mapping and 'ftype' in output_entry:
_hypervisor_type = hypervisor_type(output_entry['ftype'])
if _hypervisor_type:
_virt_type = virt_type(_hypervisor_type)
if _virt_type:
output_entry['virt'] = _virt_type
output_entry['region'] = self.region
output_entry['endpoint'] = self.auth_url
output_entry['owner_id'] = self.tenant_id
output_entry['name'] = image_name
if image_md5_hash and image_size:
output_entry['md5'] = image_md5_hash
output_entry['size'] = str(image_size)
return output_entry
def _insert_item(self, data, src, target, pedigree, contentsource):
"""
Upload image into glance and add image metadata to simplestreams index.
`data` is the metadata for a particular image file from the source:
unused since all that data is present in the `src` entry for
the corresponding image as well.
`src` contains the entire simplestreams index from the image syncing
source.
`target` is the simplestreams index for currently available images
in glance (generated by load_products()) to add this item to.
`pedigree` is a "path" to get to the `data` for the image we desire,
a tuple of (product_name, version_name, image_type).
`contentsource` is a ContentSource to download the actual image data
from.
"""
# Extract and flatten metadata for a product image matching
# (product-name, version-name, image-type)
# from the tuple `pedigree` in the source simplestreams index.
flattened_img_data = util.products_exdata(
src, pedigree, include_top=False)
tmp_path = None
full_image_name = "{}{}".format(
self.name_prefix,
flattened_img_data.get('pubname', flattened_img_data.get('name')))
if not full_image_name.endswith(flattened_img_data['item_name']):
full_image_name += "-{}".format(flattened_img_data['item_name'])
# Download images locally into a temporary file.
tmp_path, new_size, new_md5 = self.download_image(
contentsource, flattened_img_data)
hypervisor_mapping = self.config.get('hypervisor_mapping', False)
glance_props = self.create_glance_properties(
target['content_id'], src['content_id'], flattened_img_data,
hypervisor_mapping)
LOG.debug("glance properties %s", glance_props)
create_kwargs = self.prepare_glance_arguments(
full_image_name, flattened_img_data, new_md5, new_size,
glance_props)
target_sstream_item = self.adapt_source_entry(
flattened_img_data, hypervisor_mapping, full_image_name, new_md5,
new_size)
try:
if self.glance_api_version == "1":
# Set data as string if v1
create_kwargs['data'] = open(tmp_path, 'rb')
else:
# Keep properties for v2 update call
_properties = create_kwargs['properties']
del create_kwargs['properties']
LOG.debug("glance create_kwargs %s", create_kwargs)
glance_image = self.gclient.images.create(**create_kwargs)
target_sstream_item['id'] = glance_image.id
if self.glance_api_version == "2":
if self.image_import_conversion:
# Stage the image before starting import
self.gclient.images.stage(glance_image.id,
open(tmp_path, 'rb'))
# Import the Glance image
self.gclient.images.image_import(glance_image.id)
else:
# Upload for v2
self.gclient.images.upload(glance_image.id,
open(tmp_path, 'rb'))
# Update properties for v2
self.gclient.images.update(glance_image.id, **_properties)
# Validate the image checksum and size. This will throw an
# IOError if they do not match.
# However when it is converted to Raw, the checksum will not match,
# so skip the validation.
if not self.image_import_conversion:
self.validate_image(glance_image.id, new_md5, new_size)
print("created %s: %s" % (glance_image.id, full_image_name))
# TODO(guimalufb) use self.loaded_content or
# self.load_products() instead
if self.set_latest_property:
                # Search all images with the same target attributes
_filter_properties = {'filters': {
'latest': 'true',
'os_version': glance_props['os_version'],
'architecture': glance_props['architecture']}}
images = self.gclient.images.list(**_filter_properties)
for image in images:
if image.id != glance_image.id:
self.gclient.images.update(image.id,
remove_props=['latest'])
finally:
if tmp_path and os.path.exists(tmp_path):
os.unlink(tmp_path)
util.products_set(target, target_sstream_item, pedigree)
# We can safely ignore path and content arguments since they are
# unused in insert_products below.
self.insert_products(None, target, None)
def validate_image(self, image_id, checksum, size, delete=True):
"""Validate an image's checksum and size after upload.
Check for expected checksum and size.
Throw an IOError if they do not match.
:param image_id: str Glance Image ID
:param checksum: str Expected MD5 sum of the image
:param size: int Expected size in bytes of the image
:returns: None
"""
if not isinstance(size, util.INTEGER_TYPES):
raise TypeError("size '%s' is not an integer" % str(size))
found = self.gclient.images.get(image_id)
if found.size == size and found.checksum == checksum:
return
msg = (
("Invalid glance image: %s. " % image_id) +
("Expected size=%s md5=%s. Found size=%s md5=%s." %
(size, checksum, found.size, found.checksum)))
if delete:
LOG.warning("Deleting image %s: %s", image_id, msg)
self.gclient.images.delete(image_id)
raise IOError(msg)
def insert_item(self, data, src, target, pedigree, contentsource):
"""Queue item to be inserted in subsequent call to insert_version
This adds the item to self.inserts which is then handled in
insert_version. That allows the code to have context on
        all the items for a given version, and "choose" one. I.e.,
        if both root.tar.xz and squashfs are available, preference
        can be given to the root tarball.
"""
product_name, version_name, item_name = pedigree
if product_name not in self.inserts:
self.inserts[product_name] = {}
if version_name not in self.inserts[product_name]:
self.inserts[product_name][version_name] = {}
if 'ftype' in data:
ftype = data['ftype']
else:
flat = util.products_exdata(src, pedigree, include_top=False)
ftype = flat.get('ftype')
self.inserts[product_name][version_name][item_name] = (
ftype, (data, src, target, pedigree, contentsource))
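    # After queueing, self.inserts is shaped like (illustrative):
    #   {product_name: {version_name: {item_name: (ftype, insert_args)}}}
    # where insert_args are the original _insert_item() arguments.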
def insert_version(self, data, src, target, pedigree):
"""Upload all images for this version into glance
and add image metadata to simplestreams index.
All the work actually happens in _insert_item.
"""
product_name, version_name = pedigree
        inserts = self.inserts.get(product_name, {}).get(version_name, {})
rtar_names = [f for f in inserts
if inserts[f][0] in ('root.tar.gz', 'root.tar.xz')]
for _iname, (ftype, iargs) in inserts.items():
if ftype == "squashfs" and rtar_names:
LOG.info("[%s] Skipping ftype 'squashfs' image in preference"
"for root tarball type in %s",
'/'.join(pedigree), rtar_names)
continue
self._insert_item(*iargs)
# we do not specifically do anything for insert_version, but
# call parent.
super(GlanceMirror, self).insert_version(data, src, target, pedigree)
def remove_item(self, data, src, target, pedigree):
util.products_del(target, pedigree)
if 'id' in data:
print("removing %s: %s" % (data['id'], data['name']))
self.gclient.images.delete(data['id'])
def filter_index_entry(self, data, src, pedigree):
return filters.filter_dict(self.index_filters, data)
def insert_products(self, path, target, content):
if not self.store:
return
tree = copy.deepcopy(target)
util.products_prune(tree, preserve_empty_products=True)
# stop these items from copying up when we call condense
sticky = ['ftype', 'md5', 'sha256', 'size', 'name', 'id']
# LP: #1329805. Juju expects these on the item.
if self.config.get('sticky_endpoint_region', True):
sticky += ['endpoint', 'region']
util.products_condense(tree, sticky=sticky)
tsnow = util.timestamp()
tree['updated'] = tsnow
dpath = self._cidpath(tree['content_id'])
LOG.info("writing data: %s", dpath)
self.store.insert_content(dpath, util.dump_data(tree))
# now insert or update an index
ipath = "streams/v1/index.json"
try:
index = util.load_content(self.store.source(ipath).read())
except IOError as exc:
if exc.errno != errno.ENOENT:
raise
index = {"index": {}, 'format': 'index:1.0',
'updated': util.timestamp()}
index['index'][tree['content_id']] = {
'updated': tsnow,
'datatype': 'image-ids',
'clouds': [{'region': self.region, 'endpoint': self.auth_url}],
'cloudname': self.cloudname,
'path': dpath,
'products': list(tree['products'].keys()),
'format': tree['format'],
}
LOG.info("writing data: %s", ipath)
self.store.insert_content(ipath, util.dump_data(index))
class ItemInfoDryRunMirror(GlanceMirror):
def __init__(self, config, objectstore):
super(ItemInfoDryRunMirror, self).__init__(config, objectstore)
self.items = {}
def noop(*args):
pass
insert_index = noop
insert_index_entry = noop
insert_products = noop
insert_product = noop
insert_version = noop
remove_item = noop
remove_product = noop
remove_version = noop
def insert_item(self, data, src, target, pedigree, contentsource):
data = util.products_exdata(src, pedigree)
if 'size' in data and 'path' in data and 'pubname' in data:
self.items[data['pubname']] = int(data['size'])
def _checksum_file(fobj, read_size=util.READ_SIZE, checksums=None):
if checksums is None:
checksums = {'md5': None}
cksum = checksum_util.checksummer(checksums=checksums)
while True:
buf = fobj.read(read_size)
cksum.update(buf)
if len(buf) != read_size:
break
return cksum.hexdigest()
def call_hook(item, path, cmd):
env = os.environ.copy()
env.update(item)
env['IMAGE_PATH'] = path
env['FIELDS'] = ' '.join(item.keys()) + ' IMAGE_PATH'
util.subp(cmd, env=env, capture=False)
with open(path, "rb") as fp:
md5 = _checksum_file(fp, checksums={'md5': None})
return (os.path.getsize(path), md5)
def _strip_version(endpoint):
"""Strip a version from the last component of an endpoint if present"""
# Get rid of trailing '/' if present
if endpoint.endswith('/'):
endpoint = endpoint[:-1]
url_bits = endpoint.split('/')
# regex to match 'v1' or 'v2.0' etc
if re.match(r'v\d+\.?\d*', url_bits[-1]):
endpoint = '/'.join(url_bits[:-1])
return endpoint
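# Illustrative sketch (URL made up):
#   _strip_version('http://glance.example:9292/v2.0/')
#   => 'http://glance.example:9292'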
# vi: ts=4 expandtab syntax=python
simplestreams_0.1.0-67-g8497b634/simplestreams/objectstores/ 0000775 0000000 0000000 00000000000 14605750330 0023474 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/simplestreams/objectstores/__init__.py 0000664 0000000 0000000 00000017363 14605750330 0025617 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import errno
import os
import simplestreams.contentsource as cs
import simplestreams.util as util
from simplestreams import checksum_util
from simplestreams.log import LOG
READ_BUFFER_SIZE = 1024 * 10
class ObjectStore(object):
read_size = READ_BUFFER_SIZE
def insert(self, path, reader, checksums=None, mutable=True, size=None):
# store content from reader.read() into path, expecting result checksum
raise NotImplementedError()
def insert_content(self, path, content, checksums=None, mutable=True):
if not isinstance(content, bytes):
content = content.encode('utf-8')
self.insert(path=path, reader=cs.MemoryContentSource(content=content),
checksums=checksums, mutable=mutable)
def remove(self, path):
# remove path from store
raise NotImplementedError()
def source(self, path):
# return a ContentSource for the provided path
raise NotImplementedError()
def exists_with_checksum(self, path, checksums=None):
return has_valid_checksum(path=path, reader=self.source,
checksums=checksums,
read_size=self.read_size)
class MemoryObjectStore(ObjectStore):
def __init__(self, data=None):
super(MemoryObjectStore, self).__init__()
if data is None:
data = {}
self.data = data
def insert(self, path, reader, checksums=None, mutable=True, size=None):
self.data[path] = reader.read()
reader.close()
def remove(self, path):
# remove path from store
del self.data[path]
def source(self, path):
try:
url = "%s://%s" % (self.__class__, path)
return cs.MemoryContentSource(content=self.data[path], url=url)
except KeyError:
raise IOError(errno.ENOENT, '%s not found' % path)
class FileStore(ObjectStore):
def __init__(self, prefix, complete_callback=None):
""" complete_callback is called periodically to notify users when a
file is being inserted. It takes three arguments: the path that is
inserted, the number of bytes downloaded, and the number of total
bytes. """
self.prefix = prefix
self.complete_callback = complete_callback
def insert(self, path, reader, checksums=None, mutable=True, size=None,
sparse=False):
wpath = self._fullpath(path)
if os.path.isfile(wpath):
if not mutable:
# if the file exists, and not mutable, return
return
if has_valid_checksum(path=path, reader=self.source,
checksums=checksums,
read_size=self.read_size):
return
zeros = None
if sparse is True:
            zeros = b'\0' * self.read_size  # bytes, to compare against read()
cksum = checksum_util.checksummer(checksums)
out_d = os.path.dirname(wpath)
partfile = os.path.join(out_d, "%s.part" % os.path.basename(wpath))
util.mkdir_p(out_d)
orig_part_size = 0
reader_does_checksum = (
isinstance(reader, cs.ChecksummingContentSource) and
cksum.algorithm == reader.algorithm)
if os.path.exists(partfile):
try:
orig_part_size = os.path.getsize(partfile)
if reader_does_checksum:
reader.resume(orig_part_size, cksum)
else:
reader.set_start_pos(orig_part_size)
LOG.debug("resuming partial (%s) download of '%s' from '%s'",
orig_part_size, path, partfile)
with open(partfile, "rb") as fp:
while True:
buf = fp.read(self.read_size)
cksum.update(buf)
if len(buf) != self.read_size:
break
except NotImplementedError:
# continuing not supported, just delete and retry
orig_part_size = 0
os.unlink(partfile)
with open(partfile, "ab") as wfp:
while True:
try:
buf = reader.read(self.read_size)
except checksum_util.InvalidChecksum:
break
buflen = len(buf)
if (buflen != self.read_size and zeros is not None and
zeros[0:buflen] == buf):
wfp.seek(wfp.tell() + buflen)
elif buf == zeros:
wfp.seek(wfp.tell() + buflen)
else:
wfp.write(buf)
if not reader_does_checksum:
cksum.update(buf)
if size is not None:
if self.complete_callback:
self.complete_callback(path, wfp.tell(), size)
if wfp.tell() > size:
# file is too big, so the checksum won't match; we
# might as well stop downloading.
break
if buflen != self.read_size:
break
if zeros is not None:
wfp.truncate(wfp.tell())
reader.close()
resume_msg = "resumed download of '%s' had bad checksum." % path
if reader_does_checksum:
if not reader.check():
os.unlink(partfile)
if orig_part_size:
LOG.warning(resume_msg)
raise checksum_util.invalid_checksum_for_reader(reader)
else:
if not cksum.check():
os.unlink(partfile)
if orig_part_size:
LOG.warning(resume_msg)
raise checksum_util.InvalidChecksum(path=path, cksum=cksum)
os.rename(partfile, wpath)
def remove(self, path):
try:
os.unlink(self._fullpath(path))
except OSError as e:
if e.errno != errno.ENOENT:
raise
cur_d = os.path.dirname(path)
prev_d = None
while cur_d and cur_d != prev_d:
try:
os.rmdir(cur_d)
except OSError as e:
if e.errno not in (errno.ENOENT, errno.ENOTEMPTY):
raise
prev_d = cur_d
cur_d = os.path.dirname(path)
def source(self, path):
return cs.UrlContentSource(url=self._fullpath(path))
def _fullpath(self, path):
return os.path.join(self.prefix, path)
def has_valid_checksum(path, reader, checksums=None,
read_size=READ_BUFFER_SIZE):
if checksums is None:
return False
try:
cksum = checksum_util.SafeCheckSummer(checksums)
with reader(path) as rfp:
while True:
buf = rfp.read(read_size)
cksum.update(buf)
if len(buf) != read_size:
break
return cksum.check()
except Exception:
return False
# vi: ts=4 expandtab
simplestreams_0.1.0-67-g8497b634/simplestreams/objectstores/s3.py 0000664 0000000 0000000 00000006424 14605750330 0024401 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import errno
import tempfile
import boto3
import simplestreams.objectstores as objectstores
import simplestreams.contentsource as cs
class S3ObjectStore(objectstores.ObjectStore):
def __init__(self, prefix):
# expect 's3://bucket/path_prefix'
self.prefix = prefix
if prefix.startswith("s3://"):
path = prefix[5:]
else:
path = prefix
(self.bucketname, self.path_prefix) = path.split("/", 1)
self._client = boto3.client("s3")
def insert(self, path, reader, checksums=None, mutable=True, size=None):
# store content from reader.read() into path, expecting result checksum
        tfile = tempfile.TemporaryFile()
        try:
with reader(path) as rfp:
while True:
buf = rfp.read(self.read_size)
tfile.write(buf)
if len(buf) != self.read_size:
break
tfile.seek(0)
self.insert_content(
path, tfile, checksums=checksums, mutable=mutable
)
finally:
tfile.close()
def insert_content(self, path, content, checksums=None, mutable=True):
self._client.put_object(
Body=content, Bucket=self.bucketname, Key=self.path_prefix + path
)
def remove(self, path):
# remove path from store
self._client.delete_object(
Bucket=self.bucketname, Key=self.path_prefix + path
)
def source(self, path):
# essentially return an 'open(path, r)'
try:
obj_resp = self._client.get_object(
Bucket=self.bucketname, Key=self.path_prefix + path
)
except self._client.exceptions.NoSuchKey:
fd = None
else:
fd = obj_resp.get("Body")
if not fd:
myerr = IOError("Unable to open %s" % path)
myerr.errno = errno.ENOENT
raise myerr
return cs.FdContentSource(fd=fd, url=self.path_prefix + path)
def exists_with_checksum(self, path, checksums=None):
try:
obj_resp = self._client.get_object(
Bucket=self.bucketname, Key=self.path_prefix + path
)
except self._client.exceptions.NoSuchKey:
return False
if "md5" in checksums:
md5 = obj_resp["ResponseMetadata"]["HTTPHeaders"]["etag"].replace(
'"', ""
)
return checksums["md5"] == md5
return False
# vi: ts=4 expandtab
simplestreams_0.1.0-67-g8497b634/simplestreams/objectstores/swift.py 0000664 0000000 0000000 00000013047 14605750330 0025207 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import simplestreams.objectstores as objectstores
import simplestreams.contentsource as cs
import simplestreams.openstack as openstack
import errno
import hashlib
from swiftclient import Connection, ClientException
def get_swiftclient(**kwargs):
    # nmap maps 'get_service_conn_info' key names to the corresponding
    # swift Connection argument names.
    # pt has names that pass straight through
nmap = {'endpoint': 'preauthurl', 'token': 'preauthtoken'}
pt = ('insecure', 'cacert')
connargs = {v: kwargs.get(k) for k, v in nmap.items() if k in kwargs}
connargs.update({k: kwargs.get(k) for k in pt if k in kwargs})
if kwargs.get('session'):
sess = kwargs.get('session')
try:
# If session is available try it
return Connection(session=sess,
cacert=kwargs.get('cacert'))
except TypeError:
            # The edge case where session is available but swiftclient is
# < 3.3.0. Use the old style method for Connection.
pass
return Connection(**connargs)
class SwiftContentSource(cs.IteratorContentSource):
def is_enoent(self, exc):
return is_enoent(exc)
class SwiftObjectStore(objectstores.ObjectStore):
def __init__(self, prefix, region=None):
# expect 'swift://bucket/path_prefix'
self.prefix = prefix
if prefix.startswith("swift://"):
path = prefix[8:]
else:
path = prefix
(self.container, self.path_prefix) = path.split("/", 1)
super(SwiftObjectStore, self).__init__()
self.keystone_creds = openstack.load_keystone_creds()
if region is not None:
self.keystone_creds['region_name'] = region
conn_info = openstack.get_service_conn_info('object-store',
**self.keystone_creds)
self.swiftclient = get_swiftclient(**conn_info)
# http://docs.openstack.org/developer/swift/misc.html#acls
self.swiftclient.put_container(self.container,
headers={'X-Container-Read':
'.r:*,.rlistings'})
def insert(self, path, reader, checksums=None, mutable=True, size=None):
# store content from reader.read() into path, expecting result checksum
self._insert(path=path, contents=reader, checksums=checksums,
mutable=mutable)
def insert_content(self, path, content, checksums=None, mutable=True):
self._insert(path=path, contents=content, checksums=checksums,
mutable=mutable)
def remove(self, path):
self.swiftclient.delete_object(container=self.container,
obj=self.path_prefix + path)
def source(self, path):
def itgen():
(_headers, iterator) = self.swiftclient.get_object(
container=self.container, obj=self.path_prefix + path,
resp_chunk_size=self.read_size)
return iterator
return SwiftContentSource(itgen=itgen, url=self.prefix + path)
def exists_with_checksum(self, path, checksums=None):
return headers_match_checksums(self._head_path(path), checksums)
def _head_path(self, path):
try:
headers = self.swiftclient.head_object(container=self.container,
obj=self.path_prefix + path)
except Exception as exc:
if is_enoent(exc):
return {}
raise
return headers
def _insert(self, path, contents, checksums=None, mutable=True, size=None):
# content is a ContentSource or a string
headers = self._head_path(path)
if headers:
if not mutable:
return
if headers_match_checksums(headers, checksums):
return
insargs = {'container': self.container, 'obj': self.path_prefix + path,
'contents': contents}
        if size is None and isinstance(contents, str):
            size = len(contents)
if size is not None:
insargs['content_length'] = size
if checksums and checksums.get('md5'):
insargs['etag'] = checksums.get('md5')
        elif isinstance(contents, str):
            insargs['etag'] = hashlib.md5(
                contents.encode('utf-8')).hexdigest()
self.swiftclient.put_object(**insargs)
def headers_match_checksums(headers, checksums):
if not (headers and checksums):
return False
if ('md5' in checksums and headers.get('etag') == checksums.get('md5')):
return True
return False
def is_enoent(exc):
return ((isinstance(exc, IOError) and exc.errno == errno.ENOENT) or
(isinstance(exc, ClientException) and exc.http_status == 404))
# vi: ts=4 expandtab
simplestreams_0.1.0-67-g8497b634/simplestreams/openstack.py 0000664 0000000 0000000 00000017610 14605750330 0023334 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import collections
import os
from keystoneclient.v2_0 import client as ksclient_v2
from keystoneclient.v3 import client as ksclient_v3
try:
from keystoneauth1 import session
from keystoneauth1.identity import (v2, v3)
_LEGACY_CLIENTS = False
except ImportError:
# 14.04 level packages do not have this.
session, v2, v3 = (None, None, None)
_LEGACY_CLIENTS = True
OS_ENV_VARS = (
'OS_AUTH_TOKEN', 'OS_AUTH_URL', 'OS_CACERT', 'OS_IMAGE_API_VERSION',
'OS_IMAGE_URL', 'OS_PASSWORD', 'OS_REGION_NAME', 'OS_STORAGE_URL',
'OS_TENANT_ID', 'OS_TENANT_NAME', 'OS_USERNAME', 'OS_INSECURE',
'OS_USER_DOMAIN_NAME', 'OS_PROJECT_DOMAIN_NAME',
'OS_USER_DOMAIN_ID', 'OS_PROJECT_DOMAIN_ID', 'OS_PROJECT_NAME',
'OS_PROJECT_ID'
)
# only used for legacy client connection
PT_V2 = ('username', 'password', 'tenant_id', 'tenant_name', 'auth_url',
'cacert', 'insecure', )
# annoyingly the 'insecure' option in the old client constructor is now called
# the 'verify' option in the session.Session() constructor
PASSWORD_V2 = ('auth_url', 'username', 'password', 'user_id', 'trust_id',
'tenant_id', 'tenant_name', 'reauthenticate')
PASSWORD_V3 = ('auth_url', 'password', 'username',
'user_id', 'user_domain_id', 'user_domain_name',
'trust_id', 'system_scope',
'domain_id', 'domain_name',
'project_id', 'project_name',
'project_domain_id', 'project_domain_name',
'reauthenticate')
SESSION_ARGS = ('cert', 'timeout', 'verify', 'original_ip', 'redirect',
'addition_headers', 'app_name', 'app_version',
'additional_user_agent',
'discovery_cache', 'split_loggers', 'collect_timing')
Settings = collections.namedtuple('Settings', 'mod ident arg_set')
KS_VERSION_RESOLVER = {2: Settings(mod=ksclient_v2,
ident=v2,
arg_set=PASSWORD_V2),
3: Settings(mod=ksclient_v3,
ident=v3,
arg_set=PASSWORD_V3)}
def load_keystone_creds(**kwargs):
# either via arguments or OS_* values in environment, the kwargs
# that are required are:
# 'username', 'auth_url',
# ('auth_token' or 'password')
# ('tenant_id' or 'tenant_name')
ret = {}
for name in OS_ENV_VARS:
lc = name.lower()
# take off 'os_'
short = lc[3:]
if short in kwargs:
ret[short] = kwargs.get(short)
elif name in os.environ:
# take off 'os_'
ret[short] = os.environ[name]
if 'insecure' in ret:
if isinstance(ret['insecure'], str):
ret['insecure'] = (ret['insecure'].lower() not in
("", "0", "no", "off", 'false'))
else:
ret['insecure'] = bool(ret['insecure'])
# verify is the key that is used by requests, and thus the Session object.
# i.e. verify is either False or a certificate path or file.
if not ret.get('insecure', False) and 'cacert' in ret:
ret['verify'] = ret['cacert']
missing = []
for req in ('username', 'auth_url'):
if not ret.get(req, None):
missing.append(req)
if not (ret.get('auth_token') or ret.get('password')):
missing.append("(auth_token or password)")
api_version = get_ks_api_version(ret.get('auth_url', '')) or 2
if (api_version == 2 and
not (ret.get('tenant_id') or ret.get('tenant_name'))):
missing.append("(tenant_id or tenant_name)")
if api_version == 3:
for k in ('user_domain_name', 'project_domain_name', 'project_name'):
if not ret.get(k, None):
missing.append(k)
if missing:
raise ValueError("Need values for: %s" % missing)
return ret
def get_regions(client=None, services=None, kscreds=None):
# if kscreds had 'region_name', then return that
if kscreds and kscreds.get('region_name'):
return [kscreds.get('region_name')]
if client is None:
creds = kscreds
if creds is None:
creds = load_keystone_creds()
client = get_ksclient(**creds)
endpoints = client.service_catalog.get_endpoints()
if services is None:
services = list(endpoints.keys())
regions = set()
for service in services:
for r in endpoints.get(service, {}):
regions.add(r['region'])
return list(regions)
def get_ks_api_version(auth_url=None, env=None):
"""Get the keystone api version based on the end of the auth url.
@param auth_url: String
@returns: 2 or 3 (int)
"""
if env is None:
env = os.environ
if env.get('OS_IDENTITY_API_VERSION'):
return int(env['OS_IDENTITY_API_VERSION'])
if auth_url is None:
auth_url = ""
if auth_url.endswith('/v3'):
return 3
elif auth_url.endswith('/v2.0'):
return 2
# Return None if we can't determine the keystone version
return None
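# Illustrative sketch (URLs made up; assumes OS_IDENTITY_API_VERSION unset):
#   get_ks_api_version('http://keystone.example:5000/v3')   => 3
#   get_ks_api_version('http://keystone.example:5000/v2.0') => 2
#   get_ks_api_version('http://keystone.example:5000')      => None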
def _legacy_ksclient(**kwargs):
"""14.04 does not have session available."""
kskw = {k: kwargs.get(k) for k in PT_V2 if k in kwargs}
return ksclient_v2.Client(**kskw)
def get_ksclient(**kwargs):
    # api version will be forced to 3 or 2
if _LEGACY_CLIENTS:
return _legacy_ksclient(**kwargs)
api_version = get_ks_api_version(kwargs.get('auth_url', '')) or 2
arg_set = KS_VERSION_RESOLVER[api_version].arg_set
# Filter/select the args for the api version from the kwargs dictionary
kskw = {k: v for k, v in kwargs.items() if k in arg_set}
auth = KS_VERSION_RESOLVER[api_version].ident.Password(**kskw)
authkw = {k: v for k, v in kwargs.items() if k in SESSION_ARGS}
authkw['auth'] = auth
sess = session.Session(**authkw)
client = KS_VERSION_RESOLVER[api_version].mod.Client(session=sess)
client.auth_ref = auth.get_access(sess)
return client
def get_service_conn_info(service='image', client=None, **kwargs):
# return a dict with token, insecure, cacert, endpoint
if not client:
client = get_ksclient(**kwargs)
endpoint = _get_endpoint(client, service, **kwargs)
# Session client does not have tenant_id set at client.tenant_id
# If client.tenant_id not set use method to get it
tenant_id = (client.tenant_id or client.get_project_id(client.session) or
client.auth.client.get_project_id())
info = {'token': client.auth_token, 'insecure': kwargs.get('insecure'),
'cacert': kwargs.get('cacert'), 'endpoint': endpoint,
'tenant_id': tenant_id}
if not _LEGACY_CLIENTS:
info['session'] = client.session
info['glance_version'] = '2'
else:
info['glance_version'] = '1'
return info
def _get_endpoint(client, service, **kwargs):
"""Get an endpoint using the provided keystone client."""
endpoint_kwargs = {
'service_type': service,
'interface': kwargs.get('endpoint_type') or 'publicURL',
'region_name': kwargs.get('region_name'),
}
if _LEGACY_CLIENTS:
del endpoint_kwargs['interface']
endpoint = client.service_catalog.url_for(**endpoint_kwargs)
return endpoint
simplestreams_0.1.0-67-g8497b634/simplestreams/util.py 0000664 0000000 0000000 00000045154 14605750330 0022326 0 ustar 00root root 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams. If not, see <http://www.gnu.org/licenses/>.
import errno
import os
import re
import subprocess
import tempfile
import time
import json
import simplestreams.contentsource as cs
import simplestreams.checksum_util as checksum_util
from simplestreams.log import LOG
ALIASNAME = "_aliases"
PGP_SIGNED_MESSAGE_HEADER = "-----BEGIN PGP SIGNED MESSAGE-----"
PGP_SIGNATURE_HEADER = "-----BEGIN PGP SIGNATURE-----"
PGP_SIGNATURE_FOOTER = "-----END PGP SIGNATURE-----"
_UNSET = object()
READ_SIZE = (1024 * 10)
PRODUCTS_TREE_DATA = (
("products", "product_name"),
("versions", "version_name"),
("items", "item_name"),
)
PRODUCTS_TREE_HIERARCHY = [_k[0] for _k in PRODUCTS_TREE_DATA]
_HAS_GPGV = None
class SignatureMissingException(Exception):
pass
try:
# python2
_STRING_TYPES = (str, basestring, unicode)
except NameError:
# python3
_STRING_TYPES = (str,)
try:
# python2
INTEGER_TYPES = (int, long)
except NameError:
# python3
INTEGER_TYPES = (int,)
def stringitems(data):
f = {}
for k, v in data.items():
if isinstance(v, _STRING_TYPES):
f[k] = v
elif isinstance(v, (int, float)):
f[k] = str(v)
return f
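# Illustrative sketch: only string and numeric values survive, numbers are
# stringified, containers are dropped:
#   stringitems({'a': 1, 'b': 'x', 'c': [1, 2]}) => {'a': '1', 'b': 'x'}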
def products_exdata(tree, pedigree, include_top=True, insert_fieldnames=True):
# given 'tree' and 'pedigree' return a 'flattened' dict that contains
    # entries for all attributes of this item and those that apply to its
    # pedigree
harchy = PRODUCTS_TREE_DATA
exdata = {}
if include_top and tree:
exdata.update(stringitems(tree))
clevel = tree
for (n, key) in enumerate(pedigree):
dictname, fieldname = harchy[n]
clevel = clevel.get(dictname, {}).get(key, {})
exdata.update(stringitems(clevel))
if insert_fieldnames:
exdata[fieldname] = key
return exdata
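# Illustrative sketch (tree contents made up):
#   tree = {'arch': 'amd64', 'products': {'p1': {
#       'versions': {'v1': {'items': {'i1': {'size': 10}}}}}}}
#   products_exdata(tree, ('p1', 'v1', 'i1'))
#   => {'arch': 'amd64', 'size': '10', 'product_name': 'p1',
#       'version_name': 'v1', 'item_name': 'i1'}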
def products_set(tree, data, pedigree):
harchy = PRODUCTS_TREE_HIERARCHY
cur = tree
for n in range(0, len(pedigree)):
if harchy[n] not in cur:
cur[harchy[n]] = {}
cur = cur[harchy[n]]
if n != (len(pedigree) - 1):
if pedigree[n] not in cur:
cur[pedigree[n]] = {}
cur = cur[pedigree[n]]
cur[pedigree[-1]] = data
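# Illustrative sketch: products_set() creates intermediate levels on demand:
#   tree = {}
#   products_set(tree, {'size': '10'}, ('p1', 'v1', 'i1'))
#   # tree => {'products': {'p1': {'versions': {'v1': {
#   #             'items': {'i1': {'size': '10'}}}}}}}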
def products_del(tree, pedigree):
harchy = PRODUCTS_TREE_HIERARCHY
cur = tree
for n in range(0, len(pedigree)):
if harchy[n] not in cur:
return
cur = cur[harchy[n]]
if n == (len(pedigree) - 1):
break
if pedigree[n] not in cur:
return
cur = cur[pedigree[n]]
if pedigree[-1] in cur:
del cur[pedigree[-1]]
def products_prune(tree, preserve_empty_products=False):
for prodname in list(tree.get('products', {}).keys()):
keys = list(tree['products'][prodname].get('versions', {}).keys())
for vername in keys:
vtree = tree['products'][prodname]['versions'][vername]
for itemname in list(vtree.get('items', {}).keys()):
if not vtree['items'][itemname]:
del vtree['items'][itemname]
if 'items' not in vtree or not vtree['items']:
del tree['products'][prodname]['versions'][vername]
if ('versions' not in tree['products'][prodname] or
not tree['products'][prodname]['versions']):
del tree['products'][prodname]
if (not preserve_empty_products and 'products' in tree and
not tree['products']):
del tree['products']
def walk_products(tree, cb_product=None, cb_version=None, cb_item=None,
ret_finished=_UNSET):
# walk a product tree. callbacks are called with (item, tree, (pedigree))
for prodname, proddata in tree['products'].items():
if cb_product:
ret = cb_product(proddata, tree, (prodname,))
if ret_finished != _UNSET and ret == ret_finished:
return
if (not cb_version and not cb_item) or 'versions' not in proddata:
continue
for vername, verdata in proddata['versions'].items():
if cb_version:
ret = cb_version(verdata, tree, (prodname, vername))
if ret_finished != _UNSET and ret == ret_finished:
return
if not cb_item or 'items' not in verdata:
continue
for itemname, itemdata in verdata['items'].items():
ret = cb_item(itemdata, tree, (prodname, vername, itemname))
if ret_finished != _UNSET and ret == ret_finished:
return
def expand_tree(tree, refs=None, delete=False):
if refs is None:
refs = tree.get(ALIASNAME, None)
expand_data(tree, refs, delete)
def expand_data(data, refs=None, delete=False):
if isinstance(data, dict):
if isinstance(refs, dict):
for key in list(data.keys()):
if key == ALIASNAME:
continue
ref = refs.get(key)
if not ref:
continue
value = data.get(key)
if value and isinstance(value, _STRING_TYPES):
data.update(ref[value])
if delete:
del data[key]
for key in data:
expand_data(data[key], refs)
elif isinstance(data, list):
for item in data:
expand_data(item, refs)
def resolve_work(src, target, maxnum=None, keep=False, itemfilter=None,
sort_reverse=True):
# if more than maxnum items are in src, only the most recent maxnum will be
# stored in target. If keep is true, then the most recent maxnum items
# will be kept in target even if they are no longer in src.
# if keep is false the number in target will never be greater than that
# in src.
add = []
remove = []
reverse = sort_reverse
if maxnum is None and keep:
raise TypeError("maxnum(%s) cannot be None if keep is True" % maxnum)
if not (maxnum is None or isinstance(maxnum, int)):
raise TypeError("maxnum(%s) must be integer or None" % maxnum)
if not (keep is None or isinstance(keep, int)):
raise TypeError("keep(%s) must be integer or None" % keep)
# Ensure that all source items are passed through filters
# In case the filters have changed from the last run
for item in sorted(src, reverse=reverse):
if itemfilter is None or itemfilter(item):
if item not in target:
add.append(item)
for item in sorted(target, reverse=reverse):
if item not in src:
remove.append(item)
if keep and len(remove):
after_add = len(target) + len(add)
while len(remove) and (maxnum > (after_add - len(remove))):
remove.pop(0)
mtarget = sorted([f for f in target + add if f not in remove],
reverse=reverse)
if maxnum is not None and len(mtarget) > maxnum:
for item in mtarget[maxnum:]:
if item in target:
remove.append(item)
else:
add.pop(add.index(item))
remove = sorted(remove, reverse=bool(not reverse))
return (add, remove)
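# Illustrative sketch (serials made up): with maxnum=2 only the two most
# recent source items are scheduled for addition:
#   resolve_work(['20130103', '20130102', '20130101'], [], maxnum=2)
#   => (['20130103', '20130102'], [])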
def policy_read_signed(content, path, keyring=None):
# convenience wrapper around 'read_signed' for use MirrorReader policy
return read_signed(content=content, keyring=keyring)
def has_gpgv():
global _HAS_GPGV
if _HAS_GPGV is not None:
return _HAS_GPGV
if which('gpgv'):
try:
env = os.environ.copy()
env['LANG'] = 'C'
out, err = subp(["gpgv", "--help"], capture=True, env=env)
_HAS_GPGV = 'gnupg' in out.lower() or 'gnupg' in err.lower()
except subprocess.CalledProcessError:
_HAS_GPGV = False
else:
_HAS_GPGV = False
return _HAS_GPGV
def read_signed(content, keyring=None, checked=True):
# ensure that content is signed by a key in keyring.
# if no keyring given use default.
if content.startswith(PGP_SIGNED_MESSAGE_HEADER):
if checked and keyring and has_gpgv():
cmd = ["gpgv", "--keyring=%s" % keyring, "-"]
elif checked:
# http://rfc-ref.org/RFC-TEXTS/2440/chapter7.html
cmd = ["gpg", "--batch", "--verify"]
if keyring:
cmd.append("--keyring=%s" % keyring)
cmd.append("-")
if checked:
try:
subp(cmd, data=content)
except subprocess.CalledProcessError as e:
LOG.debug("failed: %s\n out=%s\n err=%s" %
(' '.join(cmd), e.output[0], e.output[1]))
raise e
ret = {'body': [], 'signature': [], 'garbage': []}
lines = content.splitlines()
i = 0
for i in range(0, len(lines)):
if lines[i] == PGP_SIGNED_MESSAGE_HEADER:
mode = "header"
continue
elif mode == "header":
if lines[i] != "":
mode = "body"
continue
elif lines[i] == PGP_SIGNATURE_HEADER:
mode = "signature"
continue
elif lines[i] == PGP_SIGNATURE_FOOTER:
mode = "garbage"
continue
# dash-escaped content in body
if lines[i].startswith("- ") and mode == "body":
ret[mode].append(lines[i][2:])
else:
ret[mode].append(lines[i])
ret['body'].append('') # need empty line at end
return "\n".join(ret['body'])
else:
raise SignatureMissingException("No signature found!")
def load_content(content):
if isinstance(content, bytes):
content = content.decode('utf-8')
return json.loads(content)
def dump_data(data):
return json.dumps(data, indent=1, sort_keys=True,
separators=(',', ': ')).encode('utf-8')
def timestamp(ts=None):
return time.strftime("%a, %d %b %Y %H:%M:%S +0000", time.gmtime(ts))
def move_dups(src, target, sticky=None):
# given src = {e1: {a:a, b:c}, e2: {a:a, b:d, e:f}}
    # update target with {a:a}, and delete 'a' from entries in src
# if a key exists in target, it will not be copied or deleted.
if len(src) == 0:
return
candidates = set.intersection(*[set(e.keys()) for e in src.values()])
if sticky is not None:
candidates.difference_update(sticky)
updates = {}
for entry in list(src.keys()):
for k, v in src[entry].items():
if k not in candidates:
continue
if k in updates:
if v != updates[k] or not isinstance(v, _STRING_TYPES):
del updates[k]
candidates.remove(k)
else:
if isinstance(v, _STRING_TYPES) and target.get(k, v) == v:
updates[k] = v
else:
candidates.remove(k)
for entry in list(src.keys()):
for k in list(src[entry].keys()):
if k in updates:
del src[entry][k]
target.update(updates)
def products_condense(ptree, sticky=None, top='versions'):
# walk a products tree, copying up item keys as far as they'll go
# only move items to a sibling of the 'top'.
if top not in ('versions', 'products'):
raise ValueError("'top' must be one of: %s" %
','.join(PRODUCTS_TREE_HIERARCHY))
def call_move_dups(cur, _tree, pedigree):
(_mtype, stname) = (("product", "versions"),
("version", "items"))[len(pedigree) - 1]
move_dups(cur.get(stname, {}), cur, sticky=sticky)
walk_products(ptree, cb_version=call_move_dups)
walk_products(ptree, cb_product=call_move_dups)
if top == 'versions':
return
move_dups(ptree['products'], ptree)
def assert_safe_path(path):
if path == "" or path is None:
return
path = str(path)
if os.path.isabs(path):
raise TypeError("Path '%s' is absolute path" % path)
bad = (".." + os.path.sep, "..." + os.path.sep)
for tok in bad:
if path.startswith(tok):
raise TypeError("Path '%s' starts with '%s'" % (path, tok))
bad = (os.path.sep + ".." + os.path.sep, os.path.sep + "..." + os.path.sep)
for tok in bad:
if tok in path:
raise TypeError("Path '%s' contains '%s'" % (path, tok))
def read_url(url):
return cs.UrlContentSource(url).read()
def mkdir_p(path):
try:
os.makedirs(path)
except OSError as exc:
if exc.errno != errno.EEXIST:
raise
return
def get_local_copy(contentsource, read_size=READ_SIZE, progress_callback=None):
(tfd, tpath) = tempfile.mkstemp()
tfile = os.fdopen(tfd, "wb")
try:
LOG.debug("getting local copy of %s", contentsource.url)
while True:
buf = contentsource.read(read_size)
if progress_callback:
progress_callback(read_size)
tfile.write(buf)
if len(buf) != read_size:
break
return (tpath, True)
except Exception as e:
os.unlink(tpath)
raise e
def subp(args, data=None, capture=True, shell=False, env=None):
if not capture:
stdout, stderr = (None, None)
else:
stdout, stderr = (subprocess.PIPE, subprocess.PIPE)
sp = subprocess.Popen(args, stdout=stdout, stderr=stderr,
stdin=subprocess.PIPE, shell=shell, env=env)
if isinstance(data, str):
data = data.encode('utf-8')
(out, err) = sp.communicate(data)
if isinstance(out, bytes):
out = out.decode('utf-8')
if isinstance(err, bytes):
err = err.decode('utf-8')
rc = sp.returncode
if rc != 0:
raise subprocess.CalledProcessError(rc, args, output=(out, err))
return (out, err)
def get_sign_cmd(path, output=None, inline=False):
cmd = ['gpg']
defkey = os.environ.get('SS_GPG_DEFAULT_KEY')
if defkey:
cmd.extend(['--default-key', defkey])
batch = os.environ.get('SS_GPG_BATCH', "1").lower()
if batch not in ("0", "false"):
cmd.append('--batch')
if output:
cmd.extend(['--output', output])
if inline:
cmd.append('--clearsign')
else:
cmd.extend(['--armor', '--detach-sign'])
cmd.extend([path])
return cmd
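# Illustrative sketch (filenames made up; assumes SS_GPG_DEFAULT_KEY is
# unset and SS_GPG_BATCH is left at its default):
#   get_sign_cmd('index.json', output='index.sjson', inline=True)
#   => ['gpg', '--batch', '--output', 'index.sjson', '--clearsign',
#       'index.json']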
def make_signed_content_paths(content):
    # loads json content. If it is an index:1.0 file
# then it fixes up 'path' elements to point to signed names (.sjson)
# returns tuple of (changed, updated)
data = json.loads(content)
if data.get("format") != "index:1.0":
return (False, None)
for content_ent in list(data.get('index', {}).values()):
path = content_ent.get('path')
        if path and path.endswith(".json"):
content_ent['path'] = signed_fname(path, inline=True)
return (True, json.dumps(data, indent=1))
def signed_fname(fname, inline=True):
if inline:
sfname = fname[0:-len(".json")] + ".sjson"
else:
sfname = fname + ".gpg"
return sfname
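# Illustrative mappings: signed_fname("streams/v1/index.json", inline=True)
# returns "streams/v1/index.sjson", while inline=False yields
# "streams/v1/index.json.gpg".  make_signed_content_paths() above applies
# the inline form to every '.json' path in an index:1.0 document.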
def rm_f_file(fname, skip=None):
if skip is None:
skip = []
if fname in skip:
return
try:
os.unlink(fname)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
def which(program):
# Return path of program for execution if found in path
def is_exe(fpath):
return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
_fpath, _ = os.path.split(program)
if _fpath:
if is_exe(program):
return program
else:
for path in os.environ.get("PATH", "").split(os.pathsep):
path = path.strip('"')
exe_file = os.path.join(path, program)
if is_exe(exe_file):
return exe_file
return None
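# For example, which("gpg") returns a path such as "/usr/bin/gpg" when gpg
# is on PATH (the exact location is illustrative), or None otherwise.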
def sign_file(fname, inline=True, outfile=None):
if outfile is None:
outfile = signed_fname(fname, inline=inline)
rm_f_file(outfile, skip=["-"])
return subp(get_sign_cmd(path=fname, output=outfile, inline=inline))[0]
def sign_content(content, outfile="-", inline=True):
rm_f_file(outfile, skip=["-"])
return subp(args=get_sign_cmd(path="-", output=outfile, inline=inline),
data=content)[0]
def path_from_mirror_url(mirror, path):
if path is not None:
return (mirror, path)
path_regex = "streams/v1/.*[.](sjson|json)$"
result = re.search(path_regex, mirror)
if result:
path = mirror[result.start():]
mirror = mirror[:result.start()]
else:
path = "streams/v1/index.sjson"
return (mirror, path)
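# Examples (illustrative): a mirror URL that embeds a stream path is split
# apart; otherwise the default index path is assumed:
#
#   path_from_mirror_url("http://m.example.com/streams/v1/index.sjson", None)
#   # -> ("http://m.example.com/", "streams/v1/index.sjson")
#   path_from_mirror_url("http://m.example.com/", None)
#   # -> ("http://m.example.com/", "streams/v1/index.sjson")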
class ProgressAggregator(object):
def __init__(self, remaining_items=None):
self.last_emitted = 0
self.current_written = 0
self.current_file = None
self.remaining_items = remaining_items
if self.remaining_items:
self.total_image_count = len(self.remaining_items)
self.total_size = sum(self.remaining_items.values())
else:
self.total_image_count = 0
self.total_size = 0
self.total_written = 0
def progress_callback(self, progress):
if self.current_file != progress['name']:
if self.remaining_items and self.current_file is not None:
del self.remaining_items[self.current_file]
self.current_file = progress['name']
self.last_emitted = 0
self.current_written = 0
size = float(progress['size'])
written = float(progress['written'])
self.current_written += written
self.total_written += written
interval = self.current_written - self.last_emitted
if interval > size / 100:
self.last_emitted = self.current_written
progress['written'] = self.current_written
self.emit(progress)
def emit(self, progress):
raise NotImplementedError()
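# A minimal ProgressAggregator subclass sketch (hypothetical): emit() is
# the only method a concrete aggregator must provide, e.g.:
#
#   class PrintingAggregator(ProgressAggregator):
#       def emit(self, progress):
#           print("%s: %d/%d bytes" % (progress['name'],
#                                      progress['written'], progress['size']))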
# these are legacy aliases, retained for backwards compatibility
CHECKSUMS = checksum_util.CHECKSUMS
item_checksums = checksum_util.item_checksums
ALGORITHMS = checksum_util.ALGORITHMS
checksummer = checksum_util.checksummer
# vi: ts=4 expandtab
simplestreams_0.1.0-67-g8497b634/snap/ 0000775 0000000 0000000 00000000000 14605750330 0017037 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/snap/snapcraft.yaml 0000664 0000000 0000000 00000002721 14605750330 0021706 0 ustar 00root root 0000000 0000000 name: simplestreams
base: core18
adopt-info: simplestreams
summary: Library and tools for using Simple Streams data
description: Library and tools for using Simple Streams data
grade: stable
confinement: strict
layout:
/usr/share/keyrings:
bind: $SNAP/usr/share/keyrings
apps:
sstream-mirror:
command: bin/sstream-mirror
plugs:
- network
- home
sstream-mirror-glance:
command: bin/sstream-mirror-glance
plugs:
- network
- home
sstream-query:
command: bin/sstream-query
plugs:
- network
- home
sstream-sync:
command: bin/sstream-sync
plugs:
- network
- home
json2streams:
command: bin/json2streams
plugs:
- network
- home
parts:
simplestreams:
plugin: python
python-version: python3
source: .
source-type: git
constraints:
- https://raw.githubusercontent.com/openstack/requirements/stable/ussuri/upper-constraints.txt
python-packages:
- python-glanceclient
- python-keystoneclient
- python-swiftclient
stage-packages:
- gpgv
- ubuntu-keyring
build-packages:
- libffi-dev
- libssl-dev
- libxml2-dev
- libxslt1-dev
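    # The override-build step below derives the snap version from the newest
    # non-ubuntu git tag plus the current short commit hash, e.g.
    # "0.1.0-8497b63" (the exact value shown is illustrative).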
override-build: |
snapcraftctl build
last_release="$(git tag -l --sort=version:refname | grep -v ubuntu | tail -1)"
current_shorthash="$(git log --format="%h" | head -1)"
snapcraftctl set-version "${last_release}-${current_shorthash}"
simplestreams_0.1.0-67-g8497b634/tests/ 0000775 0000000 0000000 00000000000 14605750330 0017240 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/tests/__init__.py 0000664 0000000 0000000 00000000037 14605750330 0021351 0 ustar 00root root 0000000 0000000 # ts=4 expandtab syntax=python
simplestreams_0.1.0-67-g8497b634/tests/httpserver.py 0000664 0000000 0000000 00000003044 14605750330 0022021 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
import os
import sys
if sys.version_info.major == 2:
from SimpleHTTPServer import SimpleHTTPRequestHandler
from BaseHTTPServer import HTTPServer
else:
from http.server import SimpleHTTPRequestHandler
from http.server import HTTPServer
class LoggingHTTPRequestHandler(SimpleHTTPRequestHandler):
def log_request(self, code='-', size='-'):
"""
Log an accepted request along with user-agent string.
"""
user_agent = self.headers.get("user-agent")
self.log_message('"%s" %s %s (%s)',
self.requestline, str(code), str(size), user_agent)
def run(address, port,
HandlerClass=LoggingHTTPRequestHandler, ServerClass=HTTPServer):
try:
server = ServerClass((address, port), HandlerClass)
address, port = server.socket.getsockname()
sys.stderr.write("Serving HTTP: %s %s %s\n" %
(address, port, os.getcwd()))
server.serve_forever()
except KeyboardInterrupt:
server.socket.close()
if __name__ == '__main__':
if len(sys.argv) == 3:
# 2 args: address and port
address = sys.argv[1]
port = int(sys.argv[2])
elif len(sys.argv) == 2:
# 1 arg: port
address = '0.0.0.0'
port = int(sys.argv[1])
elif len(sys.argv) == 1:
# no args random port (port=0)
address = '0.0.0.0'
port = 0
else:
sys.stderr.write("Expect [address] [port]\n")
sys.exit(1)
run(address=address, port=port)
simplestreams_0.1.0-67-g8497b634/tests/testutil.py 0000664 0000000 0000000 00000001207 14605750330 0021467 0 ustar 00root root 0000000 0000000 import os
from simplestreams import objectstores
from simplestreams import mirrors
EXAMPLES_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__),
"..", "examples"))
def get_mirror_reader(name, docdir=None, signed=False):
if docdir is None:
docdir = EXAMPLES_DIR
    src_d = os.path.join(docdir, name)
sstore = objectstores.FileStore(src_d)
def policy(content, path): # pylint: disable=W0613
return content
kwargs = {} if signed else {"policy": policy}
return mirrors.ObjectStoreMirrorReader(sstore, **kwargs)
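# Usage sketch (mirroring the unit tests): build a reader over one of the
# bundled example trees, then sync it into a mirror writer:
#
#   src = get_mirror_reader("foocloud")
#   # some_writer.sync(src, "streams/v1/index.json")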
# vi: ts=4 expandtab syntax=python
simplestreams_0.1.0-67-g8497b634/tests/unittests/ 0000775 0000000 0000000 00000000000 14605750330 0021302 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/tests/unittests/__init__.py 0000664 0000000 0000000 00000000000 14605750330 0023401 0 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/tests/unittests/test_badmirrors.py 0000664 0000000 0000000 00000012616 14605750330 0025065 0 ustar 00root root 0000000 0000000 from unittest import TestCase
from tests.testutil import get_mirror_reader
from simplestreams.mirrors import (
ObjectStoreMirrorWriter, ObjectStoreMirrorReader)
from simplestreams.objectstores import MemoryObjectStore
from simplestreams import util
from simplestreams import checksum_util
from simplestreams import mirrors
class TestBadDataSources(TestCase):
"""Test of Bad Data in a datasource."""
dlpath = "streams/v1/download.json"
pedigree = ("com.example:product1", "20150915", "item1")
item_path = "product1/20150915/text.txt"
example = "minimal"
def setUp(self):
self.src = self.get_clean_src(self.example, path=self.dlpath)
self.target = ObjectStoreMirrorWriter(
config={}, objectstore=MemoryObjectStore())
def get_clean_src(self, exname, path):
good_src = get_mirror_reader(exname)
objectstore = MemoryObjectStore(None)
target = ObjectStoreMirrorWriter(config={}, objectstore=objectstore)
target.sync(good_src, path)
# clean the .data out of the mirror so it doesn't get read
keys = list(objectstore.data.keys())
for k in keys:
if k.startswith(".data"):
del objectstore.data[k]
return ObjectStoreMirrorReader(
objectstore=objectstore, policy=lambda content, path: content)
def test_sanity_valid(self):
# verify that the tests are fine on expected pass
_moditem(self.src, self.dlpath, self.pedigree, lambda c: c)
self.target.sync(self.src, self.dlpath)
def test_larger_size_causes_bad_checksum(self):
def size_plus_1(item):
item['size'] = int(item['size']) + 1
return item
_moditem(self.src, self.dlpath, self.pedigree, size_plus_1)
self.assertRaises(checksum_util.InvalidChecksum,
self.target.sync, self.src, self.dlpath)
def test_smaller_size_causes_bad_checksum(self):
def size_minus_1(item):
item['size'] = int(item['size']) - 1
return item
_moditem(self.src, self.dlpath, self.pedigree, size_minus_1)
self.assertRaises(checksum_util.InvalidChecksum,
self.target.sync, self.src, self.dlpath)
def test_too_much_content_causes_bad_checksum(self):
self.src.objectstore.data[self.item_path] += b"extra"
self.assertRaises(checksum_util.InvalidChecksum,
self.target.sync, self.src, self.dlpath)
def test_too_little_content_causes_bad_checksum(self):
orig = self.src.objectstore.data[self.item_path]
self.src.objectstore.data[self.item_path] = orig[0:-1]
self.assertRaises(checksum_util.InvalidChecksum,
self.target.sync, self.src, self.dlpath)
def test_busted_checksum_causes_bad_checksum(self):
def break_checksum(item):
chars = "0123456789abcdef"
orig = item['sha256']
item['sha256'] = ''.join(
[chars[(chars.find(c) + 1) % len(chars)] for c in orig])
return item
_moditem(self.src, self.dlpath, self.pedigree, break_checksum)
self.assertRaises(checksum_util.InvalidChecksum,
self.target.sync, self.src, self.dlpath)
def test_changed_content_causes_bad_checksum(self):
# correct size but different content should raise bad checksum
self.src.objectstore.data[self.item_path] = ''.join(
["x" for c in self.src.objectstore.data[self.item_path]])
self.assertRaises(checksum_util.InvalidChecksum,
self.target.sync, self.src, self.dlpath)
def test_no_checksums_cause_bad_checksum(self):
def del_checksums(item):
for c in checksum_util.item_checksums(item).keys():
del item[c]
return item
_moditem(self.src, self.dlpath, self.pedigree, del_checksums)
with _patched_missing_sum("fail"):
self.assertRaises(checksum_util.InvalidChecksum,
self.target.sync, self.src, self.dlpath)
def test_missing_size_causes_bad_checksum(self):
def del_size(item):
del item['size']
return item
_moditem(self.src, self.dlpath, self.pedigree, del_size)
with _patched_missing_sum("fail"):
self.assertRaises(checksum_util.InvalidChecksum,
self.target.sync, self.src, self.dlpath)
class _patched_missing_sum(object):
"""This patches the legacy mode for missing checksum info so
that it behaves like the new code path. Thus we can make
the test run correctly"""
def __init__(self, mode="fail"):
self.mode = mode
def __enter__(self):
self.modmcb = getattr(mirrors, '_missing_cksum_behavior', {})
self.orig = self.modmcb.copy()
if self.modmcb:
self.modmcb['mode'] = self.mode
return self
    def __exit__(self, type, value, traceback):
        # restore the original behavior dict on exit
        self.modmcb.clear()
        self.modmcb.update(self.orig)
def _moditem(src, path, pedigree, modfunc):
# load the products data at 'path' in 'src' mirror, then call modfunc
# on the data found at pedigree. and store the updated data.
sobj = src.objectstore
tree = util.load_content(sobj.source(path).read())
item = util.products_exdata(tree, pedigree, insert_fieldnames=False)
util.products_set(tree, modfunc(item), pedigree)
sobj.insert_content(path, util.dump_data(tree))
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_command_hook_mirror.py 0000664 0000000 0000000 00000003732 14605750330 0026750 0 ustar 00root root 0000000 0000000 from unittest import TestCase
import simplestreams.mirrors.command_hook as chm
from tests.testutil import get_mirror_reader
class TestCommandHookMirror(TestCase):
"""Test of CommandHookMirror."""
def setUp(self):
self._run_commands = []
def test_init_without_load_stream_fails(self):
self.assertRaises(TypeError, chm.CommandHookMirror, {})
def test_init_with_load_products_works(self):
chm.CommandHookMirror({'load_products': 'true'})
def test_stream_load_empty(self):
src = get_mirror_reader("foocloud")
target = chm.CommandHookMirror({'load_products': ['true']})
oruncmd = chm.run_command
try:
chm.run_command = self._run_command
target.sync(src, "streams/v1/index.json")
finally:
chm.run_command = oruncmd
        # the 'load_products' hook should be called once for each content
        # file in the stream.
self.assertEqual(self._run_commands, [['true'], ['true']])
def test_stream_insert_product(self):
src = get_mirror_reader("foocloud")
target = chm.CommandHookMirror(
{'load_products': ['load-products'],
'insert_products': ['insert-products']})
oruncmd = chm.run_command
try:
chm.run_command = self._run_command
target.sync(src, "streams/v1/index.json")
finally:
chm.run_command = oruncmd
        # the 'load_products' hook should be called once for each content
        # file in the stream; the same holds for 'insert-products'.
self.assertEqual(len([f for f in self._run_commands
if f == ['load-products']]), 2)
self.assertEqual(len([f for f in self._run_commands
if f == ['insert-products']]), 2)
def _run_command(self, cmd, env=None, capture=False, rcs=None):
self._run_commands.append(cmd)
rc = 0
output = ''
return (rc, output)
# vi: ts=4 expandtab syntax=python
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_contentsource.py 0000664 0000000 0000000 00000030434 14605750330 0025612 0 ustar 00root root 0000000 0000000 import os
import shutil
import sys
import tempfile
from os.path import join, dirname
from simplestreams import objectstores
from simplestreams import contentsource
from subprocess import Popen, PIPE, STDOUT
from unittest import TestCase, skipIf
import pytest
class RandomPortServer(object):
def __init__(self, path):
        self.path = path
        self.process = None
        self.port = None
        self.addr = None
def serve(self):
if self.port and self.process:
return
testserver_path = join(
dirname(__file__), "..", "..", "tests", "httpserver.py")
pre = b'Serving HTTP:'
cmd = [sys.executable, '-u', testserver_path, "0"]
p = Popen(cmd, cwd=self.path, stdout=PIPE, stderr=STDOUT)
line = p.stdout.readline() # pylint: disable=E1101
if line.startswith(pre):
data = line[len(pre):].strip()
addr, port_str, cwd = data.decode().split(" ", 2)
self.port = int(port_str)
self.addr = addr
self.process = p
# print("Running server on %s" % port_str)
return
else:
p.kill()
raise RuntimeError(
"Failed to start server in %s with %s. pid=%s. got: %s" %
(self.path, cmd, self.process, line))
def read_output(self):
return str(self.process.stdout.readline())
def unserve(self):
if self.process:
self.process.kill() # pylint: disable=E1101
self.process = None
self.port = None
def __enter__(self):
self.serve()
return self
def __exit__(self, _type, value, tb):
self.unserve()
def __repr__(self):
pid = None
if self.process:
pid = self.process.pid
return ("RandomPortServer(port=%s, addr=%s, process=%s, path=%s)" %
(self.port, self.addr, pid, self.path))
def url_for(self, fpath=""):
if self.port is None:
raise ValueError("No port available")
return 'http://127.0.0.1:%d/' % self.port + fpath
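# Illustrative use: RandomPortServer doubles as a context manager, e.g.
#
#   with RandomPortServer("/some/dir") as srv:   # the path is illustrative
#       url = srv.url_for("file.txt")
#       # fetch 'url' while the server is running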
class BaseDirUsingTestCase(TestCase):
http = False
server = None
tmpd = None
@classmethod
def setUpClass(cls):
cls.tmpd = os.path.abspath(tempfile.mkdtemp(prefix="ss-unit."))
if cls.http:
cls.server = RandomPortServer(cls.tmpd)
cls.server.serve()
print(cls.server)
@classmethod
def tearDownClass(cls):
if cls.http:
cls.server.unserve()
shutil.rmtree(cls.tmpd)
def mkdtemp(self):
return tempfile.mkdtemp(dir=self.tmpd)
def setUp(self):
# each individual test gets its own dir that can be served.
self.test_d = self.mkdtemp()
def getcs(self, path, url_reader=None, rel=None):
return contentsource.UrlContentSource(
self.url_for(path, rel=rel), url_reader=url_reader)
def path_for(self, fpath, rel=None):
        # return full path to fpath.
        # if fpath is an absolute path, it must be under self.tmpd.
        # if not absolute, it is joined to self.test_d when rel is None,
        # otherwise to self.tmpd.
if fpath is None:
fpath = ""
if os.path.isabs(fpath):
fullpath = os.path.abspath(fpath)
else:
if rel is None:
rel = self.test_d
else:
rel = self.tmpd
fullpath = os.path.abspath(os.path.sep.join([rel, fpath]))
if not fullpath.startswith(self.tmpd + os.path.sep):
raise ValueError(
"%s is not a valid path. Not under tmpdir: %s" %
(fpath, self.tmpd))
return fullpath
def furl_for(self, fpath=None, rel=None):
return "file://" + self.path_for(fpath=fpath, rel=rel)
def url_for(self, fpath=None, rel=None):
# return a url for fpath.
if not self.server:
raise ValueError("No server available, but proto == http")
return self.server.url_for(
self.path_for(fpath=fpath, rel=rel)[len(self.tmpd)+1:])
class TestUrlContentSource(BaseDirUsingTestCase):
http = True
fpath = 'foo'
fdata = b'hello world\n'
def setUp(self):
super(TestUrlContentSource, self).setUp()
with open(join(self.test_d, self.fpath), 'wb') as f:
f.write(self.fdata)
def test_default_url_read_handles_None(self):
scs = contentsource.UrlContentSource(self.url_for(self.fpath))
data = scs.read(None)
self.assertEqual(data, self.fdata)
def test_default_url_read_handles_negative_size(self):
scs = contentsource.UrlContentSource(self.url_for(self.fpath))
data = scs.read(-1)
self.assertEqual(data, self.fdata)
def test_fd_read_handles_None(self):
loc = self.furl_for(self.fpath)
scs = contentsource.UrlContentSource(loc)
data = scs.read(None)
self.assertEqual(data, self.fdata)
def test_fd_read_handles_negative_size(self):
loc = self.furl_for(self.fpath)
self.assertTrue(loc.startswith("file://"))
scs = contentsource.UrlContentSource(loc)
data = scs.read(-1)
self.assertEqual(data, self.fdata)
@skipIf(contentsource.requests is None, "requests not available")
def test_requests_default_timeout(self):
self.assertEqual(contentsource.RequestsUrlReader.timeout,
(contentsource.TIMEOUT, None))
@skipIf(contentsource.requests is None, "requests not available")
def test_requests_url_read_handles_None(self):
scs = self.getcs(self.fpath, contentsource.RequestsUrlReader)
data = scs.read(None)
self.assertEqual(data, self.fdata)
@skipIf(contentsource.requests is None, "requests not available")
def test_requests_url_read_handles_negative_size(self):
scs = self.getcs(self.fpath, contentsource.RequestsUrlReader)
data = scs.read(-2)
self.assertEqual(data, self.fdata)
@skipIf(contentsource.requests is None, "requests not available")
def test_requests_url_read_handles_no_size(self):
scs = self.getcs(self.fpath, contentsource.RequestsUrlReader)
data = scs.read()
self.assertEqual(data, self.fdata)
@skipIf(contentsource.requests is None, "requests not available")
def test_requests_url_read_handles_int(self):
scs = self.getcs(self.fpath, contentsource.RequestsUrlReader)
data = scs.read(3)
self.assertEqual(data, self.fdata[0:3])
def test_urllib_default_timeout(self):
self.assertEqual(contentsource.Urllib2UrlReader.timeout,
contentsource.TIMEOUT)
def test_urllib_url_read_handles_None(self):
scs = self.getcs(self.fpath, contentsource.Urllib2UrlReader)
data = scs.read(None)
self.assertEqual(data, self.fdata)
def test_urllib_url_read_handles_negative_size(self):
scs = self.getcs(self.fpath, contentsource.Urllib2UrlReader)
data = scs.read(-2)
self.assertEqual(data, self.fdata)
def test_urllib_url_read_handles_no_size(self):
scs = self.getcs(self.fpath, contentsource.Urllib2UrlReader)
data = scs.read()
self.assertEqual(data, self.fdata)
def test_urllib_url_read_handles_int(self):
scs = self.getcs(self.fpath, contentsource.Urllib2UrlReader)
data = scs.read(3)
self.assertEqual(data, self.fdata[0:3])
class TestResume(BaseDirUsingTestCase):
http = True
def setUp(self):
super(TestResume, self).setUp()
        self.target = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, self.target)
with open(join(self.target, 'foo.part'), 'wb') as f:
f.write(b'hello')
with open(join(self.test_d, 'foo'), 'wb') as f:
f.write(b'hello world\n')
def test_binopen_seek(self):
tcs = objectstores.FileStore(self.target)
scs = contentsource.UrlContentSource(self.furl_for('foo'))
tcs.insert('foo', scs)
with open(join(self.target, 'foo'), 'rb') as f:
contents = f.read()
assert contents == b'hello world\n', contents
def test_url_seek(self):
tcs = objectstores.FileStore(self.target)
loc = self.url_for('foo')
scs = contentsource.UrlContentSource(loc)
tcs.insert('foo', scs)
with open(join(self.target, 'foo'), 'rb') as f:
contents = f.read()
# Unfortunately, SimpleHTTPServer doesn't support the Range
# header, so we get two 'hello's.
assert contents == b'hellohello world\n', contents
def test_post_open_set_start_pos(self):
with pytest.raises(Exception):
cs = contentsource.UrlContentSource(self.furl_for('foo'))
cs.open()
cs.set_start_pos(1)
def test_percent_callback(self):
data = {'dld': 0}
def handler(path, downloaded, total):
data['dld'] = downloaded
tcs = objectstores.FileStore(self.target,
complete_callback=handler)
loc = self.url_for('foo')
scs = contentsource.UrlContentSource(loc)
tcs.insert('foo', scs, size=len('hellohello world'))
assert data['dld'] > 0 # just make sure it was called
class BaseReaderTest(BaseDirUsingTestCase):
__test__ = False
reader = None
fpath = 'foo'
fdata = b'hello world\n'
http = False
def setUp(self):
super(BaseReaderTest, self).setUp()
with open(join(self.test_d, self.fpath), 'wb') as f:
f.write(self.fdata)
def geturl(self, path):
if self.http:
return self.url_for(path)
else:
return self.furl_for(path)
def test_read_handles_None(self):
fp = self.reader(self.geturl(self.fpath))
data = fp.read(None)
fp.close()
self.assertEqual(data, self.fdata)
def test_read_handles_no_size(self):
fp = self.reader(self.geturl(self.fpath))
data = fp.read()
fp.close()
self.assertEqual(data, self.fdata)
def test_read_handles_negative_size(self):
fp = self.reader(self.geturl(self.fpath))
data = fp.read(-1)
fp.close()
self.assertEqual(data, self.fdata)
def test_read_handles_size(self):
size = len(self.fdata) - 2
fp = self.reader(self.geturl(self.fpath))
data = fp.read(size)
fp.close()
self.assertEqual(data, self.fdata[0:size])
def test_normal_usage(self):
buflen = 2
content = b''
buf = b'\0' * buflen
fp = None
try:
fp = self.reader(self.geturl(self.fpath))
while len(buf) == buflen:
buf = fp.read(buflen)
content += buf
finally:
if fp is not None:
fp.close()
self.assertEqual(content, self.fdata)
@skipIf(contentsource.requests is None, "requests not available")
class RequestsBase(object):
reader = contentsource.RequestsUrlReader
http = True
class Urllib2Base(object):
reader = contentsource.Urllib2UrlReader
http = True
class TestRequestsUrlReader(RequestsBase, BaseReaderTest):
__test__ = True
class TestUrllib2UrlReader(Urllib2Base, BaseReaderTest):
__test__ = True
class TestFileReader(BaseReaderTest):
__test__ = True
reader = contentsource.FileReader
def test_supports_file_scheme(self):
file_url = self.geturl(self.fpath)
self.assertTrue(file_url.startswith("file://"))
fp = self.reader(file_url)
data = fp.read()
fp.close()
self.assertEqual(data, self.fdata)
class UserAgentTests(BaseDirUsingTestCase):
fpath = "agent-test-filename-x"
fdata = b"this is my file content\n"
http = True
def read_url(self, reader, agent):
with open(join(self.test_d, self.fpath), 'wb') as f:
f.write(self.fdata)
fp = reader(self.url_for(self.fpath), user_agent=agent)
try:
return fp.read()
finally:
fp.close()
@skipIf(contentsource.requests is None, "requests not available")
def test_requests_sends_user_agent_when_supplied(self):
self.read_url(contentsource.RequestsUrlReader, "myagent1")
self.assertIn("myagent1", self.server.read_output())
def test_urllib2_sends_user_agent_when_supplied(self):
self.read_url(contentsource.Urllib2UrlReader, "myagent2")
self.assertIn("myagent2", self.server.read_output())
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_generate_simplestreams.py 0000664 0000000 0000000 00000024400 14605750330 0027455 0 ustar 00root root 0000000 0000000 from contextlib import contextmanager
from copy import deepcopy
import json
import os
import shutil
from io import StringIO
from tempfile import mkdtemp
from unittest import TestCase
from mock import patch
from simplestreams.generate_simplestreams import (
FileNamer,
generate_index,
Item,
items2content_trees,
write_streams,
)
@contextmanager
def temp_dir():
dirname = mkdtemp()
try:
yield dirname
finally:
shutil.rmtree(dirname)
class TestItems2ContentTrees(TestCase):
def test_items2content_trees_empty(self):
result = items2content_trees([], {})
self.assertEqual({}, result)
def test_items2content_trees_one(self):
result = items2content_trees([
Item('cid', 'pname', 'vname', 'iname',
{'data': 'foo'}),
], {'extra-data': 'bar'})
self.assertEqual(
{'cid': {
'content_id': 'cid',
'extra-data': 'bar',
'format': 'products:1.0',
'products': {'pname': {'versions': {'vname': {
'items': {'iname': {'data': 'foo'}}
}}}}
}}, result)
def test_items2content_trees_two_items(self):
result = items2content_trees([
Item('cid', 'pname', 'vname', 'iname',
{'data': 'foo'}),
Item('cid', 'pname', 'vname', 'iname2',
{'data': 'bar'}),
], {'extra-data': 'bar'})
self.assertEqual(
{'cid': {
'content_id': 'cid',
'extra-data': 'bar',
'format': 'products:1.0',
'products': {'pname': {'versions': {'vname': {
'items': {
'iname': {'data': 'foo'},
'iname2': {'data': 'bar'},
}
}}}}
}}, result)
def test_items2content_trees_two_products(self):
result = items2content_trees([
Item('cid', 'pname', 'vname', 'iname',
{'data': 'foo'}),
Item('cid', 'pname2', 'vname', 'iname',
{'data': 'bar'}),
], {'extra-data': 'bar'})
self.assertEqual(
{'cid': {
'content_id': 'cid',
'extra-data': 'bar',
'format': 'products:1.0',
'products': {
'pname': {'versions': {'vname': {
'items': {'iname': {'data': 'foo'}},
}}},
'pname2': {'versions': {'vname': {
'items': {'iname': {'data': 'bar'}},
}}},
}
}}, result)
def test_items2content_trees_two_versions(self):
result = items2content_trees([
Item('cid', 'pname', 'vname', 'iname',
{'data': 'foo'}),
Item('cid', 'pname', 'vname2', 'iname',
{'data': 'bar'}),
], {'extra-data': 'bar'})
self.assertEqual(
{'cid': {
'content_id': 'cid',
'extra-data': 'bar',
'format': 'products:1.0',
'products': {'pname': {'versions': {
'vname': {
'items': {'iname': {'data': 'foo'}},
},
'vname2': {
'items': {'iname': {'data': 'bar'}},
},
}}}
}}, result)
def test_items2content_trees_two_content_ids(self):
result = items2content_trees([
Item('cid', 'pname', 'vname', 'iname',
{'data': 'foo'}),
Item('cid2', 'pname', 'vname', 'iname',
{'data': 'bar'}),
], {'extra-data': 'bar'})
self.assertEqual(
{
'cid': {
'content_id': 'cid',
'extra-data': 'bar',
'format': 'products:1.0',
'products': {'pname': {'versions': {
'vname': {
'items': {'iname': {'data': 'foo'}},
},
}}}
},
'cid2': {
'content_id': 'cid2',
'extra-data': 'bar',
'format': 'products:1.0',
'products': {'pname': {'versions': {
'vname': {
'items': {'iname': {'data': 'bar'}},
},
}}}
},
}, result)
class TestFileNamer(TestCase):
def test_get_index_path(self):
self.assertEqual('streams/v1/index.json', FileNamer.get_index_path())
def test_get_content_path(self):
self.assertEqual(
'streams/v1/foo:bar.json', FileNamer.get_content_path('foo:bar'))
class FakeNamer:
@staticmethod
def get_index_path():
return 'foo.json'
@staticmethod
def get_content_path(content_id):
return '{}.json'.format(content_id)
def load_json(out_dir, filename):
with open(os.path.join(out_dir, filename)) as index:
return json.load(index)
class TestGenerateIndex(TestCase):
updated = 'January 1 1970'
def test_no_content(self):
index_json = generate_index({}, self.updated, FakeNamer)
self.assertEqual({
'format': 'index:1.0', 'index': {}, 'updated': self.updated},
index_json)
def test_two_content(self):
index_json = generate_index({
'bar': {'products': {'prodbar': {}}},
'baz': {'products': {'prodbaz': {}}},
}, self.updated, FakeNamer)
self.assertEqual({
'format': 'index:1.0', 'updated': self.updated, 'index': {
'bar': {
'path': 'bar.json',
'products': ['prodbar'],
},
'baz': {
'path': 'baz.json',
'products': ['prodbaz'],
}
}
}, index_json)
def test_products_sorted(self):
''' The products list in the index should be sorted '''
index_json = generate_index({
'foo': {'products': {'prodfooz': {}, 'prodfooa': {}}},
}, self.updated, FakeNamer)
self.assertEqual({
'format': 'index:1.0', 'updated': self.updated, 'index': {
'foo': {
'path': 'foo.json',
'products': ['prodfooa', 'prodfooz'],
},
}
}, index_json)
def load_stream_dir(stream_dir):
contents = {}
for filename in os.listdir(stream_dir):
contents[filename] = load_json(stream_dir, filename)
return contents
class TestWriteStreams(TestCase):
updated = 'January 1 1970'
def test_no_content(self):
with temp_dir() as out_dir, patch('sys.stderr', StringIO()):
filenames = write_streams(out_dir, {}, self.updated, FakeNamer)
contents = load_stream_dir(out_dir)
self.assertEqual(['foo.json'], list(contents.keys()))
self.assertEqual([os.path.join(out_dir, 'foo.json')], filenames)
self.assertEqual(generate_index({}, self.updated, FakeNamer),
contents['foo.json'])
def test_two_content(self):
trees = {
'bar': {'products': {'prodbar': {}}},
'baz': {'products': {'prodbaz': {}}},
}
with temp_dir() as out_dir, patch('sys.stderr', StringIO()):
filenames = write_streams(out_dir, trees, self.updated, FakeNamer)
contents = load_stream_dir(out_dir)
self.assertEqual(sorted(['foo.json', 'bar.json', 'baz.json']),
sorted(contents.keys()))
self.assertEqual(sorted([
os.path.join(out_dir, 'foo.json'),
os.path.join(out_dir, 'bar.json'),
os.path.join(out_dir, 'baz.json'),
]), sorted(filenames))
self.assertEqual(generate_index(trees, self.updated, FakeNamer),
contents['foo.json'])
self.assertEqual({'products': {'prodbar': {}}}, contents['bar.json'])
self.assertEqual({'products': {'prodbaz': {}}}, contents['baz.json'])
def test_no_input_compaction(self):
trees = {
'bar': {'products': {'prodbar': {'versions': {'1': {'items': {
'item-1': {'arch': 'amd64'},
'item-2': {'arch': 'amd64'}}}}}}}}
trees_copy = deepcopy(trees)
with temp_dir() as out_dir, patch('sys.stderr', StringIO()):
write_streams(out_dir, trees_copy, self.updated, FakeNamer)
self.assertEqual(trees, trees_copy)
def test_no_output_compaction(self):
trees = {
'bar': {'products': {'prodbar': {'versions': {'1': {'items': {
'item-1': {'arch': 'amd64'},
'item-2': {'arch': 'amd64'}}}}}}}}
with temp_dir() as out_dir, patch('sys.stderr', StringIO()):
write_streams(out_dir, trees, self.updated, FakeNamer,
condense=False)
with open(os.path.join(out_dir, 'bar.json')) as bar_file:
bar = json.load(bar_file)
expected = {
'products': {'prodbar': {'versions': {'1': {'items': {
'item-1': {'arch': 'amd64'},
'item-2': {'arch': 'amd64'}, }}}}}}
self.assertEqual(bar, expected)
def test_mirrors(self):
trees = {
'bar': {'products': {'prodbar': {'versions': {'1': {'items': {
'item-1': {'arch': 'amd64', 'mirrors': ['url1', 'url2']},
'item-2': {'arch': 'amd64'}}}}}}}}
with temp_dir() as out_dir, patch('sys.stderr', StringIO()):
write_streams(out_dir, trees, self.updated, FakeNamer,
condense=False)
with open(os.path.join(out_dir, 'bar.json')) as bar_file:
bar = json.load(bar_file)
expected = {
'products': {'prodbar': {'versions': {'1': {'items': {
'item-1': {'arch': 'amd64', 'mirrors': ['url1', 'url2']},
'item-2': {'arch': 'amd64'}, }}}}}}
self.assertEqual(bar, expected)
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_glancemirror.py 0000664 0000000 0000000 00000143722 14605750330 0025410 0 ustar 00root root 0000000 0000000 from simplestreams.contentsource import MemoryContentSource
try:
from simplestreams.mirrors.glance import GlanceMirror
from simplestreams.objectstores import MemoryObjectStore
HAVE_OPENSTACK_LIBS = True
except ImportError:
HAVE_OPENSTACK_LIBS = False
import simplestreams.util
import copy
import hashlib
import json
import os
from unittest import TestCase, skipIf
TEST_IMAGE_DATA = b'this is my test image data'
TEST_IMAGE_MD5 = hashlib.md5(TEST_IMAGE_DATA).hexdigest()
TEST_IMAGE_SIZE = len(TEST_IMAGE_DATA)
TEST_IMAGE_FAKE_SHA256 = (
"5b982d7d4dd1a03e88ae5f35f02ed44f579e2711f3e0f27ea2bff20aef8c8d9e")
# This is a real snippet from the simplestreams index entry for
# Ubuntu 14.04 amd64 image from cloud-images.ubuntu.com as of
# 2016-06-05.
TEST_SOURCE_INDEX_ENTRY = {
u'content_id': u'com.ubuntu.cloud:released:download',
u'datatype': u'image-downloads',
u'format': u'products:1.0',
u'license': (u'http://www.canonical.com/'
u'intellectual-property-policy'),
u'products': {u'com.ubuntu.cloud:server:14.04:amd64': {
u'aliases': u'14.04,default,lts,t,trusty',
u'arch': u'amd64',
u'os': u'ubuntu',
u'release': u'trusty',
u'release_codename': u'Trusty Tahr',
u'release_title': u'14.04 LTS',
u'support_eol': u'2019-04-17',
u'supported': True,
u'version': u'14.04',
u'versions': {u'20160602': {
u'items': {u'disk1.img': {
u'ftype': u'disk1.img',
u'md5': TEST_IMAGE_MD5,
u'path': (
u'server/releases/trusty/release-20160602/'
u'ubuntu-14.04-server-cloudimg-amd64-disk1.img'),
u'sha256': TEST_IMAGE_FAKE_SHA256,
u'size': TEST_IMAGE_SIZE}},
u'label': u'release',
u'pubname': u'ubuntu-trusty-14.04-amd64-server-20160602',
}}}
}
}
# "Pedigree" is basically a "path" to get to the image data in simplestreams
# index, going through "products", their "versions", and nested "items".
TEST_IMAGE_PEDIGREE = (
u'com.ubuntu.cloud:server:14.04:amd64', u'20160602', u'disk1.img')
# Nearly-real resulting data, as produced by simplestreams before the
# insert_item refactoring that enabled finer-grained testing.
EXPECTED_OUTPUT_INDEX = {
u'content_id': u'auto.sync',
u'datatype': u'image-ids',
u'format': u'products:1.0',
u'products': {
u"com.ubuntu.cloud:server:14.04:amd64": {
u"aliases": u"14.04,default,lts,t,trusty",
u"arch": u"amd64",
u"label": u"release",
u"os": u"ubuntu",
u"owner_id": u"bar456",
u"pubname": u"ubuntu-trusty-14.04-amd64-server-20160602",
u"release": u"trusty",
u"release_codename": u"Trusty Tahr",
u"release_title": u"14.04 LTS",
u"support_eol": u"2019-04-17",
u"supported": u"True",
u"version": u"14.04",
u"versions": {u"20160602": {u"items": {u"disk1.img": {
u"endpoint": u"http://keystone/api/",
u"ftype": u"disk1.img",
u"id": u"image-1",
u"md5": TEST_IMAGE_MD5,
u"name": (u"auto-sync/ubuntu-trusty-14.04-amd64-"
u"server-20160602-disk1.img"),
u"region": u"region1",
u"sha256": TEST_IMAGE_FAKE_SHA256,
u"size": str(TEST_IMAGE_SIZE),
}}}}
}
}
}
class FakeOpenstack(object):
"""Fake 'openstack' module replacement for testing GlanceMirror."""
def load_keystone_creds(self):
return {"auth_url": "http://keystone/api/"}
def get_service_conn_info(self, url, region_name=None, auth_url=None):
return {"endpoint": "http://objectstore/api/",
"tenant_id": "bar456",
"glance_version": "1"}
class FakeImage(object):
"""Fake image objects returned by GlanceClient.images.create()."""
    def __init__(self, identifier, size, checksum, properties=None):
        self.id = identifier
        self.size = None if size is None else int(size)
        self.checksum = checksum
        # avoid a shared mutable default argument
        self.properties = properties if properties is not None else {}
class FakeImages(object):
"""Fake GlanceClient.images implementation to track method calls."""
def __init__(self):
self.create_calls = []
self.delete_calls = []
self.get_calls = []
self.update_calls = []
self.upload_calls = []
self.stage_calls = []
self.image_import_calls = []
self.imgdb = {}
def create(self, **kwargs):
self.create_calls.append(kwargs)
num = len(self.create_calls)
iid = 'image-%d' % num
self.imgdb[iid] = FakeImage(
iid, size=kwargs.get('size'), checksum=kwargs.get('checksum'))
return self.imgdb[iid]
def delete(self, image_id):
self.delete_calls.append(image_id)
del self.imgdb[image_id]
def get(self, image_id):
return self.imgdb[image_id]
def update(self, *args, **kwargs):
self.update_calls.append(kwargs)
def upload(self, image_id, fp):
img = self.imgdb[image_id]
data = fp.read()
img.size = len(data)
img.checksum = hashlib.md5(data).hexdigest()
self.upload_calls.append((image_id, fp))
def stage(self, image_id, fp):
self.stage_calls.append((image_id, fp))
def image_import(self, image_id):
self.image_import_calls.append(image_id)
def list(self, **kwargs):
return list(self.imgdb.values())
class FakeGlanceClient(object):
"""Fake GlanceClient implementation to track images.create() calls."""
def __init__(self, *args):
self.images = FakeImages()
@skipIf(not HAVE_OPENSTACK_LIBS, "no python3 openstack available")
class TestGlanceMirror(TestCase):
"""Tests for GlanceMirror methods."""
def setUp(self):
self.config = {"content_id": "foo123"}
self.mirror = GlanceMirror(
self.config, name_prefix="auto-sync/", region="region1",
client=FakeOpenstack())
def test_adapt_source_entry(self):
# Adapts source entry for use in a local simplestreams index.
source_entry = {"source-key": "source-value"}
output_entry = self.mirror.adapt_source_entry(
source_entry, hypervisor_mapping=False, image_name="foobuntu-X",
image_md5_hash=None, image_size=None)
# Source and output entry are different objects.
self.assertNotEqual(source_entry, output_entry)
# Output entry gets a few new properties like the endpoint and
# owner_id taken from the GlanceMirror and OpenStack configuration,
# region from the value passed into GlanceMirror constructor, and
# image name from the passed in value.
# It also contains the source entries as well.
self.assertEqual(
{"endpoint": "http://keystone/api/",
"name": "foobuntu-X",
"owner_id": "bar456",
"region": "region1",
"source-key": "source-value"},
output_entry)
def test_adapt_source_entry_ignored_properties(self):
# adapt_source_entry() drops some properties from the source entry.
source_entry = {"path": "foo",
"product_name": "bar",
"version_name": "baz",
"item_name": "bah"}
output_entry = self.mirror.adapt_source_entry(
source_entry, hypervisor_mapping=False, image_name="foobuntu-X",
image_md5_hash=None, image_size=None)
# None of the values in 'source_entry' are preserved.
for key in ("path", "product_name", "version_name", "item"):
self.assertNotIn("path", output_entry)
def test_adapt_source_entry_image_md5_and_size(self):
# adapt_source_entry() will use passed in values for md5 and size.
# Even old stale values will be overridden when image_md5_hash and
# image_size are passed in.
source_entry = {"md5": "stale-md5"}
output_entry = self.mirror.adapt_source_entry(
source_entry, hypervisor_mapping=False, image_name="foobuntu-X",
image_md5_hash="new-md5", image_size=5)
self.assertEqual("new-md5", output_entry["md5"])
self.assertEqual("5", output_entry["size"])
def test_adapt_source_entry_image_md5_and_size_both_required(self):
        # adapt_source_entry() only honors md5 and size when both are given.
source_entry = {"md5": "stale-md5"}
# image_size is not passed in, so md5 value is not used either.
output_entry1 = self.mirror.adapt_source_entry(
source_entry, hypervisor_mapping=False, image_name="foobuntu-X",
image_md5_hash="new-md5", image_size=None)
self.assertEqual("stale-md5", output_entry1["md5"])
self.assertNotIn("size", output_entry1)
# image_md5_hash is not passed in, so image_size is not used either.
output_entry2 = self.mirror.adapt_source_entry(
source_entry, hypervisor_mapping=False, image_name="foobuntu-X",
image_md5_hash=None, image_size=5)
self.assertEqual("stale-md5", output_entry2["md5"])
self.assertNotIn("size", output_entry2)
def test_adapt_source_entry_hypervisor_mapping(self):
# If hypervisor_mapping is set to True, 'virt' value is derived from
# the source entry 'ftype'.
source_entry = {"ftype": "disk1.img"}
output_entry = self.mirror.adapt_source_entry(
source_entry, hypervisor_mapping=True, image_name="foobuntu-X",
image_md5_hash=None, image_size=None)
self.assertEqual("kvm", output_entry["virt"])
def test_adapt_source_entry_hypervisor_mapping_ftype_required(self):
# If hypervisor_mapping is set to True, but 'ftype' is missing in the
# source entry, 'virt' value is not added to the returned entry.
source_entry = {}
output_entry = self.mirror.adapt_source_entry(
source_entry, hypervisor_mapping=True, image_name="foobuntu-X",
image_md5_hash=None, image_size=None)
self.assertNotIn("virt", output_entry)
def test_create_glance_properties(self):
# Constructs glance properties to set on image during upload
# based on source image metadata.
source_entry = {
# All of these are carried over and potentially re-named.
"product_name": "foobuntu",
"version_name": "X",
"item_name": "disk1.img",
"os": "ubuntu",
"version": "16.04",
# Unknown entries are stored in 'simplestreams_metadata'.
"extra": "value",
}
properties = self.mirror.create_glance_properties(
"content-1", "source-1", source_entry, hypervisor_mapping=False)
# Output properties contain content-id and source-content-id based
# on the passed in parameters, and carry over (with changed keys
# for "os" and "version") product_name, version_name, item_name and
# os and version values from the source entry.
# All the fields except product_name, version_name and item_name are
# also stored inside 'simplestreams_metadata' property as JSON data.
self.assertEqual(
{"content_id": "content-1",
"source_content_id": "source-1",
"product_name": "foobuntu",
"version_name": "X",
"item_name": "disk1.img",
"os_distro": "ubuntu",
"os_version": "16.04",
"simplestreams_metadata": (
'{"extra": "value", "os": "ubuntu", "version": "16.04"}')},
properties)
def test_create_glance_properties_arch(self):
# When 'arch' is present in the source entry, it is adapted and
# returned inside 'architecture' field.
source_entry = {
"product_name": "foobuntu",
"version_name": "X",
"item_name": "disk1.img",
"os": "ubuntu",
"version": "16.04",
"arch": "amd64",
}
properties = self.mirror.create_glance_properties(
"content-1", "source-1", source_entry, hypervisor_mapping=False)
self.assertEqual("x86_64", properties["architecture"])
def test_create_glance_properties_hypervisor_mapping(self):
# When hypervisor_mapping is requested and 'ftype' is present in
# the image metadata, 'hypervisor_type' is added to returned
# properties.
source_entry = {
"product_name": "foobuntu",
"version_name": "X",
"item_name": "disk1.img",
"os": "ubuntu",
"version": "16.04",
"ftype": "root.tar.gz",
}
properties = self.mirror.create_glance_properties(
"content-1", "source-1", source_entry, hypervisor_mapping=True)
self.assertEqual("lxc", properties["hypervisor_type"])
def test_create_glance_properties_simplestreams_no_path(self):
# Other than 'product_name', 'version_name' and 'item_name', if 'path'
# is defined on the source entry, it is also not saved inside the
# 'simplestreams_metadata' property.
source_entry = {
"product_name": "foobuntu",
"version_name": "X",
"item_name": "disk1.img",
"os": "ubuntu",
"version": "16.04",
"path": "/path/to/foo",
}
properties = self.mirror.create_glance_properties(
"content-1", "source-1", source_entry, hypervisor_mapping=False)
# Path is omitted from the simplestreams_metadata property JSON.
self.assertEqual(
'{"os": "ubuntu", "version": "16.04"}',
properties["simplestreams_metadata"])
def test_create_glance_properties_latest(self):
self.mirror.set_latest_property = True
source_entry = {
"product_name": "foobuntu",
"version_name": "X",
"item_name": "disk1.img",
"os": "ubuntu",
"version": "16.04",
}
properties = self.mirror.create_glance_properties(
"content-1", "source-1", source_entry, hypervisor_mapping=False)
self.assertEqual(u'true', properties["latest"])
def test_prepare_glance_arguments_v1(self):
# Prepares arguments to pass to GlanceClient.images.create()
# based on image metadata from the simplestreams source.
self.mirror.glance_api_version = "1"
source_entry = {}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
        # The arguments always passed in include the image name, container
        # format, disk format, whether the image is public, and any
        # passed-in properties.
self.assertEqual(
{"name": "foobuntu-X",
"container_format": 'bare',
"disk_format": "qcow2",
"is_public": True,
"properties": None},
create_arguments)
def test_prepare_glance_arguments_v2(self):
# Prepares arguments to pass to GlanceClient.images.create()
# based on image metadata from the simplestreams source.
self.mirror.glance_api_version = "2"
source_entry = {}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
        # The arguments always passed in include the image name, container
        # format, disk format, the image visibility, and any passed-in
        # properties.
self.assertEqual(
{"name": "foobuntu-X",
"container_format": 'bare',
"disk_format": "qcow2",
"visibility": "public",
"properties": None},
create_arguments)
def test_prepare_glance_arguments_disk_format(self):
# Disk format is based on the image 'ftype' (if defined).
source_entry = {"ftype": "root.tar.gz"}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual("root-tar", create_arguments["disk_format"])
def test_prepare_glance_arguments_disk_format_squashfs(self):
# squashfs images are acceptable for nova-lxd
source_entry = {"ftype": "squashfs"}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual("squashfs", create_arguments["disk_format"])
def test_prepare_glance_arguments_size_v1(self):
# Size is read from image metadata if defined.
self.mirror.glance_api_version = "1"
source_entry = {"size": 5}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual(5, create_arguments["size"])
def test_prepare_glance_arguments_size_v2(self):
# Size is read from image metadata if defined.
self.mirror.glance_api_version = "2"
source_entry = {"size": 5}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual(None, create_arguments.get("size"))
def test_prepare_glance_arguments_checksum_v1(self):
# Checksum is based on the source entry 'md5' value, if defined.
self.mirror.glance_api_version = "1"
source_entry = {"md5": "foo123"}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual("foo123", create_arguments["checksum"])
def test_prepare_glance_arguments_size_and_md5_override_v1(self):
# Size and md5 hash are overridden from the passed-in values even if
# defined on the source entry.
source_entry = {"size": 5, "md5": "foo123"}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash="bar456", image_size=10,
image_properties=None)
self.assertEqual(10, create_arguments["size"])
self.assertEqual("bar456", create_arguments["checksum"])
def test_prepare_glance_arguments_size_and_md5_no_override_hash_v1(self):
# If only one of image_md5_hash or image_size is passed directly in,
# the other value is not overridden either.
source_entry = {"size": 5, "md5": "foo123"}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash="bar456",
image_size=None, image_properties=None)
self.assertEqual(5, create_arguments["size"])
self.assertEqual("foo123", create_arguments["checksum"])
def test_prepare_glance_arguments_size_and_md5_no_override_size_v1(self):
# If only one of image_md5_hash or image_size is passed directly in,
# the other value is not overridden either.
source_entry = {"size": 5, "md5": "foo123"}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=10,
image_properties=None)
self.assertEqual(5, create_arguments["size"])
self.assertEqual("foo123", create_arguments["checksum"])
def test_prepare_glance_arguments_checksum_v2(self):
# Checksum is based on the source entry 'md5' value, if defined.
self.mirror.glance_api_version = "2"
source_entry = {"md5": "foo123"}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual(None, create_arguments.get("checksum"))
def test_prepare_glance_arguments_visibility_public_v1(self):
# In v1, 'visibility': 'public' is mapped to 'is_public': True.
self.mirror.visibility = "public"
source_entry = {}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertIs(True, create_arguments["is_public"])
self.assertNotIn("visibility", create_arguments)
def test_prepare_glance_arguments_visibility_public_v2(self):
# In v2, 'visibility': 'public' is passed through.
self.mirror.glance_api_version = "2"
self.mirror.visibility = "public"
source_entry = {}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual("public", create_arguments["visibility"])
self.assertNotIn("is_public", create_arguments)
def test_prepare_glance_arguments_visibility_private_v1(self):
# In v1, 'visibility': 'private' is mapped to 'is_public': False.
self.mirror.visibility = "private"
source_entry = {}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertIs(False, create_arguments["is_public"])
self.assertNotIn("visibility", create_arguments)
def test_prepare_glance_arguments_visibility_private_v2(self):
# In v2, 'visibility': 'private' is passed through.
self.mirror.glance_api_version = "2"
self.mirror.visibility = "private"
source_entry = {}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual("private", create_arguments["visibility"])
self.assertNotIn("is_public", create_arguments)
def test_prepare_glance_arguments_image_import_conversion_true(self):
        # With image_import_conversion enabled, disk_format is forced to 'raw'.
self.mirror.image_import_conversion = True
source_entry = {}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual("raw", create_arguments["disk_format"])
def test_prepare_glance_arguments_image_import_conversion_false(self):
        # With image_import_conversion disabled, disk_format stays 'qcow2'.
self.mirror.image_import_conversion = False
source_entry = {}
create_arguments = self.mirror.prepare_glance_arguments(
"foobuntu-X", source_entry, image_md5_hash=None, image_size=None,
image_properties=None)
self.assertEqual("qcow2", create_arguments["disk_format"])
def test_download_image(self):
# Downloads image from a contentsource.
content = b"foo bazes the bar"
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img", content=content)
omd5 = hashlib.md5(content).hexdigest()
osize = len(content)
image_metadata = {"pubname": "foobuntu-X", "size": osize, "md5": omd5}
path, size, md5 = self.mirror.download_image(
content_source, image_metadata)
self.addCleanup(os.unlink, path)
self.assertIsNotNone(path)
self.assertEqual(osize, size)
self.assertEqual(omd5, md5)
def test_download_image_progress_callback(self):
# Progress callback is called with image name, size, status and buffer
# size after every 10kb of data: 3 times for 25kb of data below.
content = "abcdefghij" * int(1024 * 2.5)
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img", content=content)
image_metadata = {"pubname": "foobuntu-X", "size": len(content)}
self.progress_calls = []
def log_progress_calls(message):
self.progress_calls.append(message)
self.addCleanup(
setattr, self.mirror, "progress_callback",
self.mirror.progress_callback)
self.mirror.progress_callback = log_progress_calls
path, size, md5_hash = self.mirror.download_image(
content_source, image_metadata)
self.addCleanup(os.unlink, path)
self.assertEqual(
[{"name": "foobuntu-X", "size": 25600, "status": "Downloading",
"written": 10240}] * 3,
self.progress_calls)
def test_download_image_error(self):
        # When there's an error during download, the contentsource is still
        # closed and the error is propagated to the caller.
content = "abcdefghij"
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img", content=content)
image_metadata = {"pubname": "foobuntu-X", "size": len(content)}
# MemoryContentSource has an internal file descriptor which indicates
# if close() method has been called on it.
self.assertFalse(content_source.fd.closed)
self.addCleanup(
setattr, self.mirror, "progress_callback",
self.mirror.progress_callback)
self.mirror.progress_callback = lambda message: 1/0
self.assertRaises(
ZeroDivisionError,
self.mirror.download_image, content_source, image_metadata)
# We rely on the MemoryContentSource.close() side-effect to ensure
# close() method has indeed been called on the passed-in ContentSource.
self.assertTrue(content_source.fd.closed)
def test_insert_item(self):
# Downloads an image from a contentsource, uploads it into Glance,
# adapting and munging as needed (it updates the keystone endpoint,
# image and owner ids).
# We use a minimal source simplestreams index, fake ContentSource and
# GlanceClient, and only test for side-effects of each of the
# subparts of the insert_item method.
img_data = b'my-image-data'
md5 = hashlib.md5(img_data).hexdigest()
source_index = {
u'content_id': u'com.ubuntu.cloud:released:download',
u'products': {u'com.ubuntu.cloud:server:14.04:amd64': {
u'arch': u'amd64',
u'os': u'ubuntu',
u'release': u'trusty',
u'version': u'14.04',
u'versions': {u'20160602': {
u'items': {u'disk1.img': {
u'ftype': u'disk1.img',
u'md5': md5,
u'size': len(img_data)}},
u'pubname': u'ubuntu-trusty-14.04-amd64-server-20160602',
}}}
}
}
pedigree = (
u'com.ubuntu.cloud:server:14.04:amd64', u'20160602', u'disk1.img')
product = source_index[u'products'][pedigree[0]]
ver_data = product[u'versions'][pedigree[1]]
image_data = ver_data[u'items'][pedigree[2]]
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img",
content=img_data)
# Use a fake GlanceClient to track calls and arguments passed to
# GlanceClient.images.create().
self.addCleanup(setattr, self.mirror, "gclient", self.mirror.gclient)
self.mirror.gclient = FakeGlanceClient()
target = {
'content_id': 'auto.sync',
'datatype': 'image-ids',
'format': 'products:1.0',
}
self.mirror.insert_item(
image_data, source_index, target, pedigree, content_source)
self.mirror.insert_version(
ver_data, source_index, target, pedigree[0:2])
passed_create_kwargs = self.mirror.gclient.images.create_calls[0]
# There is a 'data' argument pointing to an open file descriptor
# for the locally downloaded image.
image_content = passed_create_kwargs.pop("data").read()
self.assertEqual(img_data, image_content)
# Value of "arch" from source entry is transformed into "architecture"
# image property in Glance: this ensures create_glance_properties()
# is called and result is properly passed.
self.assertEqual(
"x86_64", passed_create_kwargs["properties"]["architecture"])
# MD5 hash from source entry is put into 'checksum' field, and 'name'
# is based on full image name: this ensures prepare_glance_arguments()
# is called.
self.assertEqual(md5, passed_create_kwargs["checksum"])
self.assertEqual(
u'auto-sync/ubuntu-trusty-14.04-amd64-server-20160602-disk1.img',
passed_create_kwargs["name"])
# Our local endpoint is set in the resulting entry, which ensures
# a call to adapt_source_entry() was indeed made.
target_product = target["products"][pedigree[0]]
target_image = target_product["versions"][pedigree[1]]["items"].get(
pedigree[2])
self.assertEqual(u"http://keystone/api/", target_image["endpoint"])
def test_insert_item_full_v1(self):
# This test uses the full sample entries from the source simplestreams
# index from cloud-images.u.c and resulting local simplestreams index
# files.
self.mirror.glance_api_version = "1"
source_index = copy.deepcopy(TEST_SOURCE_INDEX_ENTRY)
# "Pedigree" is basically a "path" to get to the image data in
# simplestreams index, going through "products", their "versions",
# and nested "items".
pedigree = (
u'com.ubuntu.cloud:server:14.04:amd64', u'20160602', u'disk1.img')
product = source_index[u'products'][pedigree[0]]
ver_data = product[u'versions'][pedigree[1]]
image_data = ver_data[u'items'][pedigree[2]]
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img",
content=TEST_IMAGE_DATA)
# Use a fake GlanceClient to track arguments passed into
# GlanceClient.images.create().
self.addCleanup(setattr, self.mirror, "gclient", self.mirror.gclient)
self.mirror.gclient = FakeGlanceClient()
target = {
'content_id': 'auto.sync',
'datatype': 'image-ids',
'format': 'products:1.0',
}
self.mirror.insert_item(
image_data, source_index, target, pedigree, content_source)
self.mirror.insert_version(
            ver_data, source_index, target, pedigree[0:2])
passed_create_kwargs = self.mirror.gclient.images.create_calls[0]
# Drop the 'data' item pointing to an open temporary file.
passed_create_kwargs.pop("data")
expected_create_kwargs = {
'name': ('auto-sync/'
'ubuntu-trusty-14.04-amd64-server-20160602-disk1.img'),
'checksum': TEST_IMAGE_MD5,
'disk_format': 'qcow2',
'container_format': 'bare',
'is_public': True,
'properties': {
'os_distro': u'ubuntu',
'item_name': u'disk1.img',
'os_version': u'14.04',
'architecture': 'x86_64',
'version_name': u'20160602',
'content_id': 'auto.sync',
'product_name': u'com.ubuntu.cloud:server:14.04:amd64',
'source_content_id': u'com.ubuntu.cloud:released:download'},
'size': TEST_IMAGE_SIZE}
# expected to be json blob in properties/simplestreams_metadata
found_ssmd = passed_create_kwargs['properties'].pop(
'simplestreams_metadata')
expected_ssmd = {
"aliases": "14.04,default,lts,t,trusty",
"arch": "amd64", "ftype": "disk1.img",
"label": "release", "md5": TEST_IMAGE_MD5,
"os": "ubuntu",
"pubname": "ubuntu-trusty-14.04-amd64-server-20160602",
"release": "trusty", "release_codename": "Trusty Tahr",
"release_title": "14.04 LTS",
"sha256": TEST_IMAGE_FAKE_SHA256,
"size": str(TEST_IMAGE_SIZE),
"support_eol": "2019-04-17", "supported": "True",
"version": "14.04"}
self.assertEqual(expected_create_kwargs, passed_create_kwargs)
self.assertEqual(expected_ssmd, json.loads(found_ssmd))
# Apply the condensing as done in GlanceMirror.insert_products()
# to ensure we compare with the desired resulting simplestreams data.
sticky = ['ftype', 'md5', 'sha256', 'size', 'name', 'id', 'endpoint',
'region']
simplestreams.util.products_condense(target, sticky)
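        # (products_condense() moves values repeated across every child up a
        # level; keys listed in 'sticky' stay on the items. See
        # TestProductsCondense in test_util.py for focused examples.)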
self.assertEqual(EXPECTED_OUTPUT_INDEX, target)
def test_insert_item_stores_the_index(self):
        # Ensure insert_item calls insert_products() to generate the
        # resulting simplestreams index file and insert it into the store.
source_index = copy.deepcopy(TEST_SOURCE_INDEX_ENTRY)
pedigree = TEST_IMAGE_PEDIGREE
product = source_index[u'products'][pedigree[0]]
ver_data = product[u'versions'][pedigree[1]]
image_data = ver_data[u'items'][pedigree[2]]
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img",
content=TEST_IMAGE_DATA)
self.mirror.store = MemoryObjectStore()
self.addCleanup(setattr, self.mirror, "gclient", self.mirror.gclient)
self.mirror.gclient = FakeGlanceClient()
target = {
'content_id': 'auto.sync',
'datatype': 'image-ids',
'format': 'products:1.0',
}
self.mirror.insert_item(
image_data, source_index, target, pedigree, content_source)
self.mirror.insert_version(
ver_data, source_index, target, pedigree[0:2])
stored_index_content = self.mirror.store.data[
'streams/v1/auto.sync.json']
stored_index = json.loads(stored_index_content.decode('utf-8'))
# Full index contains the 'updated' key with the date of last update.
self.assertIn(u"updated", stored_index)
del stored_index[u"updated"]
self.assertEqual(EXPECTED_OUTPUT_INDEX, stored_index)
def test_insert_item_full_v2(self):
        # This test uses the full sample entries from the source simplestreams
        # index from cloud-images.u.c and the resulting local simplestreams
        # index files.
self.mirror.glance_api_version = "2"
source_index = copy.deepcopy(TEST_SOURCE_INDEX_ENTRY)
# "Pedigree" is basically a "path" to get to the image data in
# simplestreams index, going through "products", their "versions",
# and nested "items".
pedigree = (
u'com.ubuntu.cloud:server:14.04:amd64', u'20160602', u'disk1.img')
product = source_index[u'products'][pedigree[0]]
ver_data = product[u'versions'][pedigree[1]]
image_data = ver_data[u'items'][pedigree[2]]
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img",
content=TEST_IMAGE_DATA)
        # Use a fake GlanceClient to track arguments passed into
        # GlanceClient.images.create() and images.update().
self.addCleanup(setattr, self.mirror, "gclient", self.mirror.gclient)
self.mirror.gclient = FakeGlanceClient()
target = {
'content_id': 'auto.sync',
'datatype': 'image-ids',
'format': 'products:1.0',
}
self.mirror.insert_item(
image_data, source_index, target, pedigree, content_source)
self.mirror.insert_version(
            ver_data, source_index, target, pedigree[0:2])
passed_create_kwargs = self.mirror.gclient.images.create_calls[0]
passed_update_kwargs = self.mirror.gclient.images.update_calls[0]
expected_create_kwargs = {
'name': ('auto-sync/'
'ubuntu-trusty-14.04-amd64-server-20160602-disk1.img'),
'disk_format': 'qcow2',
'container_format': 'bare',
'visibility': "public"}
expected_update_kwargs = {
'os_distro': u'ubuntu',
'item_name': u'disk1.img',
'os_version': u'14.04',
'architecture': 'x86_64',
'version_name': u'20160602',
'content_id': 'auto.sync',
'product_name': u'com.ubuntu.cloud:server:14.04:amd64',
'source_content_id': u'com.ubuntu.cloud:released:download'}
# expected to be json blob in properties/simplestreams_metadata
found_ssmd = passed_update_kwargs.pop('simplestreams_metadata')
expected_ssmd = {
"aliases": "14.04,default,lts,t,trusty",
"arch": "amd64", "ftype": "disk1.img",
"label": "release", "md5": TEST_IMAGE_MD5,
"os": "ubuntu",
"pubname": "ubuntu-trusty-14.04-amd64-server-20160602",
"release": "trusty", "release_codename": "Trusty Tahr",
"release_title": "14.04 LTS",
"sha256": TEST_IMAGE_FAKE_SHA256,
"size": str(TEST_IMAGE_SIZE),
"support_eol": "2019-04-17", "supported": "True",
"version": "14.04"}
self.assertEqual(expected_create_kwargs, passed_create_kwargs)
self.assertEqual(expected_update_kwargs, passed_update_kwargs)
self.assertEqual(expected_ssmd, json.loads(found_ssmd))
# Apply the condensing as done in GlanceMirror.insert_products()
# to ensure we compare with the desired resulting simplestreams data.
sticky = ['ftype', 'md5', 'sha256', 'size', 'name', 'id', 'endpoint',
'region']
simplestreams.util.products_condense(target, sticky)
self.assertEqual(EXPECTED_OUTPUT_INDEX, target)
def test_insert_item_image_import_conversion_v2(self):
        # This test uses the full sample entries, with image_import_conversion
        # enabled, from the source simplestreams index from cloud-images.u.c
        # and the resulting local simplestreams index files.
self.mirror.glance_api_version = "2"
self.mirror.image_import_conversion = True
source_index = copy.deepcopy(TEST_SOURCE_INDEX_ENTRY)
# "Pedigree" is basically a "path" to get to the image data in
# simplestreams index, going through "products", their "versions",
# and nested "items".
pedigree = (
u'com.ubuntu.cloud:server:14.04:amd64', u'20160602', u'disk1.img')
product = source_index[u'products'][pedigree[0]]
ver_data = product[u'versions'][pedigree[1]]
image_data = ver_data[u'items'][pedigree[2]]
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img",
content=TEST_IMAGE_DATA)
        # Use a fake GlanceClient to track arguments passed into
        # GlanceClient.images.create() and images.update().
self.addCleanup(setattr, self.mirror, "gclient", self.mirror.gclient)
self.mirror.gclient = FakeGlanceClient()
target = {
'content_id': 'auto.sync',
'datatype': 'image-ids',
'format': 'products:1.0',
}
self.mirror.insert_item(
image_data, source_index, target, pedigree, content_source)
self.mirror.insert_version(
            ver_data, source_index, target, pedigree[0:2])
passed_create_kwargs = self.mirror.gclient.images.create_calls[0]
passed_update_kwargs = self.mirror.gclient.images.update_calls[0]
expected_create_kwargs = {
'name': ('auto-sync/'
'ubuntu-trusty-14.04-amd64-server-20160602-disk1.img'),
'disk_format': 'raw',
'container_format': 'bare',
'visibility': "public"}
expected_update_kwargs = {
'os_distro': u'ubuntu',
'item_name': u'disk1.img',
'os_version': u'14.04',
'architecture': 'x86_64',
'version_name': u'20160602',
'content_id': 'auto.sync',
'product_name': u'com.ubuntu.cloud:server:14.04:amd64',
'source_content_id': u'com.ubuntu.cloud:released:download'}
# expected to be json blob in properties/simplestreams_metadata
found_ssmd = passed_update_kwargs.pop('simplestreams_metadata')
expected_ssmd = {
"aliases": "14.04,default,lts,t,trusty",
"arch": "amd64", "ftype": "disk1.img",
"label": "release", "md5": TEST_IMAGE_MD5,
"os": "ubuntu",
"pubname": "ubuntu-trusty-14.04-amd64-server-20160602",
"release": "trusty", "release_codename": "Trusty Tahr",
"release_title": "14.04 LTS",
"sha256": TEST_IMAGE_FAKE_SHA256,
"size": str(TEST_IMAGE_SIZE),
"support_eol": "2019-04-17", "supported": "True",
"version": "14.04"}
self.assertEqual(expected_create_kwargs, passed_create_kwargs)
self.assertEqual(expected_update_kwargs, passed_update_kwargs)
self.assertEqual(expected_ssmd, json.loads(found_ssmd))
# Apply the condensing as done in GlanceMirror.insert_products()
# to ensure we compare with the desired resulting simplestreams data.
sticky = ['ftype', 'md5', 'sha256', 'size', 'name', 'id', 'endpoint',
'region']
simplestreams.util.products_condense(target, sticky)
self.assertEqual(EXPECTED_OUTPUT_INDEX, target)
def test_validate_image(self):
        # Ensure validate_image raises IOError when expected.
        # Successful validation: the size and checksum passed in match the
        # metadata of the fake image registered below.
size, checksum = (2016, '7b03a8ebace993d806255121073fed52')
myimg = FakeImage('i-abcdefg', size, checksum)
gc = FakeGlanceClient()
gc.images.imgdb[myimg.id] = myimg
self.mirror.gclient = gc
self.assertEqual(
None,
self.mirror.validate_image(myimg.id, checksum=checksum, size=size))
# Mismatched checksum
with self.assertRaises(IOError):
self.mirror.validate_image(
myimg.id, checksum="Wrong-checksum", size=size, delete=False)
# Mismatched size
with self.assertRaises(IOError):
self.mirror.validate_image(myimg.id, myimg.checksum, size=1999)
def test_insert_item_with_latest_property(self):
        # This test uses the full sample entries from the source simplestreams
        # index from cloud-images.u.c and the resulting local simplestreams
        # index files.
self.mirror.glance_api_version = "2"
self.mirror.set_latest_property = True
source_index = copy.deepcopy(TEST_SOURCE_INDEX_ENTRY)
# "Pedigree" is basically a "path" to get to the image data in
# simplestreams index, going through "products", their "versions",
# and nested "items".
pedigree = (
u'com.ubuntu.cloud:server:14.04:amd64', u'20160602', u'disk1.img')
product = source_index[u'products'][pedigree[0]]
ver_data = product[u'versions'][pedigree[1]]
image_data = ver_data[u'items'][pedigree[2]]
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img",
content=TEST_IMAGE_DATA)
        # Use a fake GlanceClient to track arguments passed into
        # GlanceClient.images.create() and images.update().
self.addCleanup(setattr, self.mirror, "gclient", self.mirror.gclient)
self.mirror.gclient = FakeGlanceClient()
target = {
'content_id': 'auto.sync',
'datatype': 'image-ids',
'format': 'products:1.0',
}
self.mirror.insert_item(
image_data, source_index, target, pedigree, content_source)
self.mirror.insert_version(
            ver_data, source_index, target, pedigree[0:2])
passed_create_kwargs = self.mirror.gclient.images.create_calls[0]
passed_update_kwargs = self.mirror.gclient.images.update_calls[0]
expected_create_kwargs = {
'name': ('auto-sync/'
'ubuntu-trusty-14.04-amd64-server-20160602-disk1.img'),
'disk_format': 'qcow2',
'container_format': 'bare',
'visibility': "public",
}
expected_update_kwargs = {
'os_distro': u'ubuntu',
'item_name': u'disk1.img',
'os_version': u'14.04',
'architecture': 'x86_64',
'version_name': u'20160602',
'content_id': 'auto.sync',
'latest': u'true',
'product_name': u'com.ubuntu.cloud:server:14.04:amd64',
'source_content_id': u'com.ubuntu.cloud:released:download'}
# expected to be json blob in properties/simplestreams_metadata
found_ssmd = passed_update_kwargs.pop('simplestreams_metadata')
expected_ssmd = {
"aliases": "14.04,default,lts,t,trusty",
"arch": "amd64", "ftype": "disk1.img",
"label": "release", "md5": TEST_IMAGE_MD5,
"os": "ubuntu",
"pubname": "ubuntu-trusty-14.04-amd64-server-20160602",
"release": "trusty", "release_codename": "Trusty Tahr",
"release_title": "14.04 LTS",
"sha256": TEST_IMAGE_FAKE_SHA256,
"size": str(TEST_IMAGE_SIZE),
"support_eol": "2019-04-17", "supported": "True",
"version": "14.04"}
self.assertEqual(expected_create_kwargs, passed_create_kwargs)
self.assertEqual(expected_update_kwargs, passed_update_kwargs)
self.assertEqual(expected_ssmd, json.loads(found_ssmd))
# Apply the condensing as done in GlanceMirror.insert_products()
# to ensure we compare with the desired resulting simplestreams data.
sticky = ['ftype', 'md5', 'sha256', 'size', 'name', 'id', 'endpoint',
'region']
simplestreams.util.products_condense(target, sticky)
self.assertEqual(EXPECTED_OUTPUT_INDEX, target)
def test_update_latest_property(self):
size, checksum = (2016, '7b03a8ebace993d806255121073fed52')
old_img = FakeImage('i-abcdefg', size, checksum,
{
'latest': 'true',
'version_name': '20160601',
'os_version': '14.04'})
gc = FakeGlanceClient()
gc.images.imgdb[old_img.id] = old_img
        # Track arguments passed into GlanceClient.images.create() and
        # images.update(); restore the real client afterwards via addCleanup.
self.addCleanup(setattr, self.mirror, "gclient", self.mirror.gclient)
self.mirror.gclient = gc
self.mirror.set_latest_property = True
self.mirror.glance_api_version = "2"
source_index = copy.deepcopy(TEST_SOURCE_INDEX_ENTRY)
# "Pedigree" is basically a "path" to get to the image data in
# simplestreams index, going through "products", their "versions",
# and nested "items".
pedigree = (
u'com.ubuntu.cloud:server:14.04:amd64', u'20160602', u'disk1.img')
product = source_index[u'products'][pedigree[0]]
ver_data = product[u'versions'][pedigree[1]]
image_data = ver_data[u'items'][pedigree[2]]
content_source = MemoryContentSource(
url="http://image-store/fooubuntu-X-disk1.img",
content=TEST_IMAGE_DATA)
target = {
'content_id': 'auto.sync',
'datatype': 'image-ids',
'format': 'products:1.0',
}
self.mirror.insert_item(
image_data, source_index, target, pedigree, content_source)
self.mirror.insert_version(
            ver_data, source_index, target, pedigree[0:2])
passed_create_kwargs = self.mirror.gclient.images.create_calls[0]
passed_update_kwargs = self.mirror.gclient.images.update_calls[0]
passed_update_kwargs_old_img = \
self.mirror.gclient.images.update_calls[1]
expected_update_kwargs_old_img = {'remove_props': ['latest']}
expected_create_kwargs = {
'name': ('auto-sync/'
'ubuntu-trusty-14.04-amd64-server-20160602-disk1.img'),
'disk_format': 'qcow2',
'container_format': 'bare',
'visibility': "public"}
expected_update_kwargs = {
'os_distro': u'ubuntu',
'item_name': u'disk1.img',
'latest': u'true',
'os_version': u'14.04',
'architecture': 'x86_64',
'version_name': u'20160602',
'content_id': 'auto.sync',
'product_name': u'com.ubuntu.cloud:server:14.04:amd64',
'source_content_id': u'com.ubuntu.cloud:released:download'}
# expected to be json blob in properties/simplestreams_metadata
found_ssmd = passed_update_kwargs.pop('simplestreams_metadata')
expected_ssmd = {
"aliases": "14.04,default,lts,t,trusty",
"arch": "amd64", "ftype": "disk1.img",
"label": "release", "md5": TEST_IMAGE_MD5,
"os": "ubuntu",
"pubname": "ubuntu-trusty-14.04-amd64-server-20160602",
"release": "trusty", "release_codename": "Trusty Tahr",
"release_title": "14.04 LTS",
"sha256": TEST_IMAGE_FAKE_SHA256,
"size": str(TEST_IMAGE_SIZE),
"support_eol": "2019-04-17", "supported": "True",
"version": "14.04"}
self.assertEqual(expected_create_kwargs, passed_create_kwargs)
self.assertEqual(expected_update_kwargs, passed_update_kwargs)
self.assertEqual(expected_ssmd, json.loads(found_ssmd))
self.assertEqual(expected_update_kwargs_old_img,
passed_update_kwargs_old_img)
# Apply the condensing as done in GlanceMirror.insert_products()
# to ensure we compare with the desired resulting simplestreams data.
sticky = ['ftype', 'md5', 'sha256', 'size', 'name', 'id', 'endpoint',
'region']
simplestreams.util.products_condense(target, sticky)
self.assertEqual(EXPECTED_OUTPUT_INDEX, target)
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_json2streams.py 0000664 0000000 0000000 00000016560 14605750330 0025355 0 ustar 00root root 0000000 0000000 from argparse import Namespace
from contextlib import contextmanager
import json
import os
from io import StringIO
from tempfile import NamedTemporaryFile
from unittest import TestCase
from mock import patch
from simplestreams.generate_simplestreams import (
FileNamer,
generate_index,
Item,
items2content_trees,
json_dump as json_dump_verbose,
)
from simplestreams.json2streams import (
dict_to_item,
filenames_to_streams,
JujuFileNamer,
parse_args,
read_items_file,
write_release_index,
)
from tests.unittests.test_generate_simplestreams import (
load_stream_dir,
temp_dir,
)
class TestJujuFileNamer(TestCase):
def test_get_index_path(self):
self.assertEqual('streams/v1/index2.json',
JujuFileNamer.get_index_path())
def test_get_content_path(self):
self.assertEqual('streams/v1/foo-bar-baz.json',
JujuFileNamer.get_content_path('foo:bar-baz'))
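    # From the expected paths above: JujuFileNamer evidently writes its index
    # as index2.json and maps ':' to '-' in content ids, leaving index.json
    # free for the plain FileNamer layout (see TestFilenamesToStreams below).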
def json_dump(data, filename):
    """Wrapper that silences the stderr output of json_dump_verbose."""
    with patch('sys.stderr', StringIO()):
        json_dump_verbose(data, filename)
class TestDictToItem(TestCase):
def test_dict_to_item(self):
pedigree = {
'content_id': 'cid', 'product_name': 'pname',
'version_name': 'vname', 'item_name': 'iname',
}
item_dict = {'size': '27'}
item_dict.update(pedigree)
item = dict_to_item(item_dict)
self.assertEqual(Item(data={'size': 27}, **pedigree), item)
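        # (Note: dict_to_item coerces the 'size' value from str to int.)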
def test_no_size(self):
pedigree = {
'content_id': 'cid', 'product_name': 'pname',
'version_name': 'vname', 'item_name': 'iname',
}
item_dict = {}
item_dict.update(pedigree)
item = dict_to_item(item_dict)
self.assertEqual(Item(data={}, **pedigree), item)
class TestReadItemsFile(TestCase):
def test_read_items_file(self):
pedigree = {
'content_id': 'cid', 'product_name': 'pname',
'version_name': 'vname', 'item_name': 'iname',
}
with NamedTemporaryFile() as items_file:
item_dict = {'size': '27'}
item_dict.update(pedigree)
json_dump([item_dict], items_file.name)
items = list(read_items_file(items_file.name))
self.assertEqual([Item(data={'size': 27}, **pedigree)], items)
class TestWriteReleaseIndex(TestCase):
def write_full_index(self, out_d, content):
os.makedirs(os.path.join(out_d, 'streams/v1'))
path = os.path.join(out_d, JujuFileNamer.get_index_path())
json_dump(content, path)
def read_release_index(self, out_d):
path = os.path.join(out_d, FileNamer.get_index_path())
with open(path) as release_index_file:
return json.load(release_index_file)
def test_empty_index(self):
with temp_dir() as out_d:
self.write_full_index(out_d, {'index': {}, 'foo': 'bar'})
with patch('sys.stderr', StringIO()):
write_release_index(out_d)
release_index = self.read_release_index(out_d)
self.assertEqual({'foo': 'bar', 'index': {}}, release_index)
def test_release_index(self):
with temp_dir() as out_d:
self.write_full_index(out_d, {
'index': {'com.ubuntu.juju:released:tools': 'foo'},
'foo': 'bar'})
with patch('sys.stderr', StringIO()):
write_release_index(out_d)
release_index = self.read_release_index(out_d)
self.assertEqual({'foo': 'bar', 'index': {
'com.ubuntu.juju:released:tools': 'foo'}
}, release_index)
def test_multi_index(self):
with temp_dir() as out_d:
self.write_full_index(out_d, {
'index': {
'com.ubuntu.juju:proposed:tools': 'foo',
'com.ubuntu.juju:released:tools': 'foo',
},
'foo': 'bar'})
with patch('sys.stderr', StringIO()):
write_release_index(out_d)
release_index = self.read_release_index(out_d)
self.assertEqual({'foo': 'bar', 'index': {
'com.ubuntu.juju:released:tools': 'foo'}
}, release_index)
class TestFilenamesToStreams(TestCase):
updated = 'updated'
@contextmanager
def filenames_to_streams_cxt(self):
item = {
'content_id': 'foo:1',
'product_name': 'bar',
'version_name': 'baz',
'item_name': 'qux',
'size': '27',
}
item2 = dict(item)
item2.update({
'size': '42',
'item_name': 'quxx'})
file_a = NamedTemporaryFile()
file_b = NamedTemporaryFile()
with temp_dir() as out_d, file_a, file_b:
json_dump([item], file_a.name)
json_dump([item2], file_b.name)
stream_dir = os.path.join(out_d, 'streams/v1')
with patch('sys.stderr', StringIO()):
yield item, item2, file_a, file_b, out_d, stream_dir
def test_filenames_to_streams(self):
with self.filenames_to_streams_cxt() as (item, item2, file_a, file_b,
out_d, stream_dir):
filenames_to_streams([file_a.name, file_b.name], self.updated,
out_d)
content = load_stream_dir(stream_dir)
self.assertEqual(
sorted(content.keys()),
sorted(['index.json', 'foo:1.json']))
items = [dict_to_item(item), dict_to_item(item2)]
trees = items2content_trees(items, {
'updated': self.updated, 'datatype': 'content-download'})
expected = generate_index(trees, 'updated', FileNamer)
self.assertEqual(expected, content['index.json'])
self.assertEqual(trees['foo:1'], content['foo:1.json'])
def test_filenames_to_streams_juju_format(self):
with self.filenames_to_streams_cxt() as (item, item2, file_a, file_b,
out_d, stream_dir):
filenames_to_streams([file_a.name, file_b.name], self.updated,
out_d, juju_format=True)
content = load_stream_dir(stream_dir)
self.assertEqual(
sorted(content.keys()),
sorted(['index.json', 'index2.json', 'foo-1.json']))
items = [dict_to_item(item), dict_to_item(item2)]
trees = items2content_trees(items, {
'updated': self.updated, 'datatype': 'content-download'})
expected = generate_index(trees, 'updated', JujuFileNamer)
self.assertEqual(expected, content['index2.json'])
index_expected = generate_index({}, 'updated', FileNamer)
self.assertEqual(index_expected, content['index.json'])
self.assertEqual(trees['foo:1'], content['foo-1.json'])
class TestParseArgs(TestCase):
def test_defaults(self):
args = parse_args(['file1', 'outdir'])
self.assertEqual(
Namespace(items_file=['file1'], out_d='outdir', juju_format=False),
args)
def test_multiple_files(self):
args = parse_args(['file1', 'file2', 'file3', 'outdir'])
self.assertEqual(
['file1', 'file2', 'file3'], args.items_file)
self.assertEqual('outdir', args.out_d)
def test_juju_format(self):
args = parse_args(['file1', 'outdir', '--juju-format'])
self.assertIs(True, args.juju_format)
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_mirrorreaders.py 0000664 0000000 0000000 00000012270 14605750330 0025575 0 ustar 00root root 0000000 0000000 from unittest import TestCase
from simplestreams.mirrors import UrlMirrorReader
from simplestreams.contentsource import URL_READER
import simplestreams.mirrors
def fake_url_reader(*args, **kwargs):
"""
Fake URL reader which returns all the arguments passed in as a dict.
Positional arguments are returned under the key "ARGS".
"""
all_args = kwargs.copy()
all_args["ARGS"] = args
return all_args
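# For example, a hypothetical call fake_url_reader("a", "b", something="c")
# returns {"ARGS": ("a", "b"), "something": "c"}, letting the tests below
# assert exactly what a ContentSource would hand to the real URL_READER.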
class TestUrlMirrorReader(TestCase):
def test_source(self):
"""source() method returns a ContentSource."""
# Verify source() returns a content source constructed using the
# appropriate path and mirrors.
reader = UrlMirrorReader(
"/prefix/", mirrors=["a/", "b/"], user_agent=None)
cs = reader.source("some/path")
# Resulting ContentSource is passed an URL as a concatenation of
# the prefix and the path.
self.assertEqual("/prefix/some/path", cs.url)
# Mirror URLs have path appended.
self.assertEqual(["a/some/path", "b/some/path"], cs.mirrors)
# Default URL_READER is returned.
self.assertEqual(URL_READER, cs.url_reader)
def test_source_no_trailing_slash(self):
"""Even if prefix lacks a trailing slash, it behaves the same."""
reader = UrlMirrorReader(
"/prefix", mirrors=["a/", "b/"], user_agent=None)
cs = reader.source("some/path")
self.assertEqual("/prefix/some/path", cs.url)
self.assertEqual(["a/some/path", "b/some/path"], cs.mirrors)
self.assertEqual(URL_READER, cs.url_reader)
def test_source_user_agent(self):
"""Default user_agent is set and passed to the ContentSource."""
reader = UrlMirrorReader("/prefix/", mirrors=["a/", "b/"])
cs = reader.source("some/path")
# A factory function is set instead of the URL_READER, and
# it constructs a URL_READER with user_agent passed in.
url_reader = cs.url_reader
self.assertNotEqual(URL_READER, url_reader)
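        # Conceptually the factory behaves like this sketch (not the actual
        # implementation):
        #   def reader(*args, **kwargs):
        #       return URL_READER(*args, user_agent=user_agent, **kwargs)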
        # Override the default URL_READER to track arguments being passed;
        # register a cleanup so the real reader is restored even if an
        # assertion below fails.
        self.addCleanup(
            setattr, simplestreams.mirrors.cs, "URL_READER", URL_READER)
        simplestreams.mirrors.cs.URL_READER = fake_url_reader
        result = url_reader("a", "b", something="c")
        # It passes all the same arguments, with "user_agent" added in.
        self.assertEqual(
            {"user_agent": "python-simplestreams/0.1", "something": "c",
             "ARGS": ("a", "b")},
            result)
def test_source_user_agent_override(self):
"""When user_agent is set, it is passed to the ContentSource."""
reader = UrlMirrorReader("/prefix/", mirrors=["a/", "b/"],
user_agent="test agent")
cs = reader.source("some/path")
# A factory function is set instead of the URL_READER, and
# it constructs a URL_READER with user_agent passed in.
url_reader = cs.url_reader
self.assertNotEqual(URL_READER, url_reader)
        # Override the default URL_READER to track arguments being passed;
        # register a cleanup so the real reader is restored even if an
        # assertion below fails.
        self.addCleanup(
            setattr, simplestreams.mirrors.cs, "URL_READER", URL_READER)
        simplestreams.mirrors.cs.URL_READER = fake_url_reader
        result = url_reader("a", "b", something="c")
        # It passes all the same arguments, with "user_agent" added in.
        self.assertEqual(
            {"user_agent": "test agent", "something": "c", "ARGS": ("a", "b")},
            result)
def test_source_user_agent_no_trailing_slash(self):
"""
When user_agent is set, it is passed to the ContentSource even
if there is no trailing slash.
"""
reader = UrlMirrorReader("/prefix", mirrors=["a/", "b/"],
user_agent="test agent")
cs = reader.source("some/path")
# A factory function is set instead of the URL_READER, and
# it constructs a URL_READER with user_agent passed in.
url_reader = cs.url_reader
self.assertNotEqual(URL_READER, url_reader)
        # Override the default URL_READER to track arguments being passed;
        # register a cleanup so the real reader is restored even if an
        # assertion below fails.
        self.addCleanup(
            setattr, simplestreams.mirrors.cs, "URL_READER", URL_READER)
        simplestreams.mirrors.cs.URL_READER = fake_url_reader
        result = url_reader("a", "b", something="c")
        # It passes all the same arguments, with "user_agent" added in.
        self.assertEqual(
            {"user_agent": "test agent", "something": "c", "ARGS": ("a", "b")},
            result)
def test_sources_list(self):
"""sources_list() method returns list of sources."""
reader = UrlMirrorReader("/prefix/", mirrors=["a/", "b/"])
cs = reader.sources_list("some/path")
self.assertIn("/prefix/some/path", cs)
self.assertIn("a/some/path", cs)
self.assertIn("b/some/path", cs)
def test_sources_list_no_trailing_slash(self):
"""Even if prefix lacks a trailing slash, it behaves the same."""
reader = UrlMirrorReader("/prefix", mirrors=["a/", "b/"])
cs = reader.sources_list("some/path")
self.assertIn("/prefix/some/path", cs)
self.assertIn("a/some/path", cs)
self.assertIn("b/some/path", cs)
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_mirrorwriters.py 0000664 0000000 0000000 00000002122 14605750330 0025642 0 ustar 00root root 0000000 0000000 from tests.testutil import get_mirror_reader
from simplestreams.filters import get_filters
from simplestreams.mirrors import DryRunMirrorWriter, ObjectFilterMirror
from simplestreams.objectstores import MemoryObjectStore
from unittest import TestCase
class TestMirrorWriters(TestCase):
def test_DryRunMirrorWriter_foocloud_no_filters(self):
src = get_mirror_reader("foocloud")
config = {}
objectstore = MemoryObjectStore(None)
target = DryRunMirrorWriter(config, objectstore)
target.sync(src, "streams/v1/index.json")
self.assertEqual(1277, target.size)
def test_ObjectFilterMirror_does_item_filter(self):
src = get_mirror_reader("foocloud")
filter_list = get_filters(['ftype!=disk1.img'])
config = {'filters': filter_list}
objectstore = MemoryObjectStore(None)
target = ObjectFilterMirror(config, objectstore)
target.sync(src, "streams/v1/index.json")
        unexpected = [f for f in objectstore.data if 'disk' in f]
        self.assertEqual([], unexpected)
        self.assertNotEqual(0, len(objectstore.data))
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_openstack.py 0000664 0000000 0000000 00000027061 14605750330 0024710 0 ustar 00root root 0000000 0000000 import mock
import unittest
import simplestreams.openstack as s_openstack
class TestOpenStack(unittest.TestCase):
MOCK_OS_VARS = {
'OS_AUTH_TOKEN': 'a-token',
'OS_AUTH_URL': 'http://0.0.0.0/v2.0',
'OS_CACERT': 'some-cert',
'OS_IMAGE_API_VERSION': '2',
'OS_IMAGE_URL': 'http://1.2.3.4:9292/',
'OS_PASSWORD': 'some-password',
'OS_REGION_NAME': 'region1',
'OS_STORAGE_URL': 'http://1.2.3.4:8080/v1/AUTH_123456',
'OS_TENANT_ID': '123456789',
'OS_TENANT_NAME': 'a-project',
'OS_USERNAME': 'openstack',
'OS_INSECURE': 'true',
'OS_USER_DOMAIN_NAME': 'default',
'OS_PROJECT_DOMAIN_NAME': 'Default',
'OS_USER_DOMAIN_ID': 'default',
'OS_PROJECT_DOMAIN_ID': 'default',
'OS_PROJECT_NAME': 'some-project',
'OS_PROJECT_ID': 'project-id',
}
def test_load_keystone_creds_V2_from_osvars(self):
with mock.patch('os.environ', new=self.MOCK_OS_VARS.copy()):
creds = s_openstack.load_keystone_creds()
self.assertEqual(creds,
{'auth_token': 'a-token',
'auth_url': 'http://0.0.0.0/v2.0',
'cacert': 'some-cert',
'image_api_version': '2',
'image_url': 'http://1.2.3.4:9292/',
'insecure': True,
'password': 'some-password',
'project_domain_id': 'default',
'project_domain_name': 'Default',
'project_id': 'project-id',
'project_name': 'some-project',
'region_name': 'region1',
'storage_url': 'http://1.2.3.4:8080/v1/AUTH_123456',
'tenant_id': '123456789',
'tenant_name': 'a-project',
'user_domain_id': 'default',
'user_domain_name': 'default',
'username': 'openstack'})
def test_load_keystone_creds_V2_from_kwargs(self):
with mock.patch('os.environ', new=self.MOCK_OS_VARS.copy()):
creds = s_openstack.load_keystone_creds(
password='the-password',
username='myuser')
self.assertEqual(creds,
{'auth_token': 'a-token',
'auth_url': 'http://0.0.0.0/v2.0',
'cacert': 'some-cert',
'image_api_version': '2',
'image_url': 'http://1.2.3.4:9292/',
'insecure': True,
'password': 'the-password',
'project_domain_id': 'default',
'project_domain_name': 'Default',
'project_id': 'project-id',
'project_name': 'some-project',
'region_name': 'region1',
'storage_url': 'http://1.2.3.4:8080/v1/AUTH_123456',
'tenant_id': '123456789',
'tenant_name': 'a-project',
'user_domain_id': 'default',
'user_domain_name': 'default',
'username': 'myuser'})
def test_load_keystone_creds_V3_from_osvars(self):
v3kwargs = self.MOCK_OS_VARS.copy()
v3kwargs['OS_AUTH_URL'] = 'http://0.0.0.0/v3'
with mock.patch('os.environ', new=v3kwargs):
creds = s_openstack.load_keystone_creds()
self.assertEqual(creds,
{'auth_token': 'a-token',
'auth_url': 'http://0.0.0.0/v3',
'cacert': 'some-cert',
'image_api_version': '2',
'image_url': 'http://1.2.3.4:9292/',
'insecure': True,
'password': 'some-password',
'project_domain_id': 'default',
'project_domain_name': 'Default',
'project_id': 'project-id',
'project_name': 'some-project',
'region_name': 'region1',
'storage_url': 'http://1.2.3.4:8080/v1/AUTH_123456',
'tenant_id': '123456789',
'tenant_name': 'a-project',
'user_domain_id': 'default',
'user_domain_name': 'default',
'username': 'openstack'})
def test_load_keystone_creds_insecure(self):
"""test load_keystone_creds behaves correctly for OS_INSECURE values.
"""
kwargs = self.MOCK_OS_VARS.copy()
test_pairs = (('off', False),
('no', False),
('false', False),
('', False),
('anything-else', True))
for val, expected in test_pairs:
kwargs['OS_INSECURE'] = val
with mock.patch('os.environ', new=kwargs):
creds = s_openstack.load_keystone_creds()
self.assertEqual(creds['insecure'], expected)
def test_load_keystone_creds_verify(self):
"""Test that cacert comes across as verify."""
kwargs = self.MOCK_OS_VARS.copy()
with mock.patch('os.environ', new=kwargs):
creds = s_openstack.load_keystone_creds()
self.assertNotIn('verify', creds)
kwargs['OS_INSECURE'] = 'false'
with mock.patch('os.environ', new=kwargs):
creds = s_openstack.load_keystone_creds()
self.assertEqual(creds['verify'], kwargs['OS_CACERT'])
kwargs['OS_INSECURE'] = 'false'
del kwargs['OS_CACERT']
with mock.patch('os.environ', new=kwargs):
creds = s_openstack.load_keystone_creds()
self.assertNotIn('verify', creds)
def test_load_keystone_creds_missing(self):
kwargs = self.MOCK_OS_VARS.copy()
del kwargs['OS_USERNAME']
with mock.patch('os.environ', new=kwargs):
with self.assertRaises(ValueError):
s_openstack.load_keystone_creds()
kwargs = self.MOCK_OS_VARS.copy()
del kwargs['OS_AUTH_URL']
with mock.patch('os.environ', new=kwargs):
with self.assertRaises(ValueError):
s_openstack.load_keystone_creds()
        # Either auth_token or password needs to exist; if both are
        # missing, a ValueError is raised.
kwargs = self.MOCK_OS_VARS.copy()
del kwargs['OS_AUTH_TOKEN']
with mock.patch('os.environ', new=kwargs):
s_openstack.load_keystone_creds()
kwargs = self.MOCK_OS_VARS.copy()
del kwargs['OS_PASSWORD']
with mock.patch('os.environ', new=kwargs):
s_openstack.load_keystone_creds()
kwargs = self.MOCK_OS_VARS.copy()
del kwargs['OS_AUTH_TOKEN']
del kwargs['OS_PASSWORD']
with mock.patch('os.environ', new=kwargs):
with self.assertRaises(ValueError):
s_openstack.load_keystone_creds()
# API version 3
for k in ('OS_USER_DOMAIN_NAME',
'OS_PROJECT_DOMAIN_NAME',
'OS_PROJECT_NAME'):
kwargs = self.MOCK_OS_VARS.copy()
kwargs['OS_AUTH_URL'] = 'http://0.0.0.0/v3'
del kwargs[k]
            with mock.patch('os.environ', new=kwargs):
                with self.assertRaises(ValueError):
                    s_openstack.load_keystone_creds()
@mock.patch.object(s_openstack, '_LEGACY_CLIENTS', new=False)
@mock.patch.object(s_openstack.session, 'Session')
def test_get_ksclient_v2(self, m_session):
kwargs = self.MOCK_OS_VARS.copy()
mock_ksclient_v2 = mock.Mock()
mock_ident_v2 = mock.Mock()
with mock.patch.object(
s_openstack, 'KS_VERSION_RESOLVER', new={}) as m:
m[2] = s_openstack.Settings(mod=mock_ksclient_v2,
ident=mock_ident_v2,
arg_set=s_openstack.PASSWORD_V2)
# test openstack ks 2
m_auth = mock.Mock()
mock_ident_v2.Password.return_value = m_auth
m_get_access = mock.Mock()
m_auth.get_access.return_value = m_get_access
m_session.return_value = mock.sentinel.session
with mock.patch('os.environ', new=kwargs):
creds = s_openstack.load_keystone_creds()
c = s_openstack.get_ksclient(**creds)
# verify that mock_ident_v2 is called with password
mock_ident_v2.Password.assert_has_calls([
mock.call(auth_url='http://0.0.0.0/v2.0',
password='some-password',
tenant_id='123456789',
tenant_name='a-project',
username='openstack')])
# verify that the session was called with the v2 password
m_session.assert_called_once_with(auth=m_auth)
# verify that the client was called with the session
mock_ksclient_v2.Client.assert_called_once_with(
session=mock.sentinel.session)
        # finally check that the client has an auth_ref and that it contains
# the get_access() call
self.assertEqual(c.auth_ref, m_get_access)
m_auth.get_access.assert_called_once_with(mock.sentinel.session)
@mock.patch.object(s_openstack, '_LEGACY_CLIENTS', new=False)
@mock.patch.object(s_openstack.session, 'Session')
def test_get_ksclient_v3(self, m_session):
kwargs = self.MOCK_OS_VARS.copy()
kwargs['OS_AUTH_URL'] = 'http://0.0.0.0/v3'
mock_ksclient_v3 = mock.Mock()
mock_ident_v3 = mock.Mock()
with mock.patch.object(
s_openstack, 'KS_VERSION_RESOLVER', new={}) as m:
m[3] = s_openstack.Settings(mod=mock_ksclient_v3,
ident=mock_ident_v3,
arg_set=s_openstack.PASSWORD_V3)
# test openstack ks 3
m_auth = mock.Mock()
mock_ident_v3.Password.return_value = m_auth
m_get_access = mock.Mock()
m_auth.get_access.return_value = m_get_access
m_session.return_value = mock.sentinel.session
with mock.patch('os.environ', new=kwargs):
creds = s_openstack.load_keystone_creds()
c = s_openstack.get_ksclient(**creds)
        # verify that mock_ident_v3 is called with password
mock_ident_v3.Password.assert_has_calls([
mock.call(auth_url='http://0.0.0.0/v3',
password='some-password',
project_domain_id='default',
project_domain_name='Default',
project_id='project-id',
project_name='some-project',
user_domain_id='default',
user_domain_name='default',
username='openstack')])
        # verify that the session was called with the v3 password
m_session.assert_called_once_with(auth=m_auth)
# verify that the client was called with the session
mock_ksclient_v3.Client.assert_called_once_with(
session=mock.sentinel.session)
        # finally check that the client has an auth_ref and that it contains
# the get_access() call
self.assertEqual(c.auth_ref, m_get_access)
m_auth.get_access.assert_called_once_with(mock.sentinel.session)
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_resolvework.py 0000664 0000000 0000000 00000010446 14605750330 0025302 0 ustar 00root root 0000000 0000000 from unittest import TestCase
from simplestreams.util import resolve_work
from simplestreams.objectstores import MemoryObjectStore
from simplestreams.mirrors import ObjectStoreMirrorWriter
from simplestreams.filters import filter_item, ItemFilter
from tests.testutil import get_mirror_reader
class TestStreamResolveWork(TestCase):
def tryit(self, src, target, maxnum=None, keep=False,
itemfilter=None, add=None, remove=None):
if add is None:
add = []
if remove is None:
remove = []
(r_add, r_remove) = resolve_work(src, target, maxnum=maxnum, keep=keep,
itemfilter=itemfilter)
self.assertEqual(r_add, add)
self.assertEqual(r_remove, remove)
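    # Semantics inferred from the cases below: resolve_work(src, target)
    # returns (to_add, to_remove). Items only in src are added (newest
    # first, capped at maxnum); items only in target are removed, unless
    # keep=True retains old target items up to maxnum.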
def test_keep_with_max_none_is_exception(self):
self.assertRaises(TypeError, resolve_work, [1], [2], None, True)
def test_full_replace(self):
src = [10, 9, 8]
target = [7, 6, 5]
self.tryit(src=src, target=target, add=src, remove=[5, 6, 7])
def test_only_new_with_max(self):
self.tryit(src=[10, 9, 8], target=[7, 6, 5],
add=[10, 9], remove=[5, 6, 7], maxnum=2)
def test_only_new_with_keep(self):
self.tryit(src=[10, 9, 8], target=[7, 6, 5],
add=[10, 9, 8], remove=[5, 6], maxnum=4, keep=True)
def test_only_remove(self):
self.tryit(src=[3], target=[3, 2, 1], add=[], remove=[1, 2])
def test_only_remove_with_keep(self):
self.tryit(src=[3], target=[3, 2, 1],
add=[], remove=[], maxnum=3, keep=True)
def test_only_remove_with_max(self):
self.tryit(src=[3], target=[3, 2, 1],
add=[], remove=[1, 2], maxnum=2)
def test_only_remove_with_no_max(self):
self.tryit(src=[3], target=[3, 2, 1],
add=[], remove=[1, 2], maxnum=None)
def test_null_remote_without_keep(self):
self.tryit(src=[], target=[3, 2, 1], add=[], remove=[1, 2, 3])
def test_null_remote_with_keep(self):
self.tryit(src=[], target=[3, 2, 1], maxnum=3, keep=True, add=[],
remove=[])
def test_null_remote_without_keep_with_maxnum(self):
self.tryit(src=[], target=[3, 2, 1], maxnum=3, keep=False, add=[],
remove=[1, 2, 3])
def test_max_forces_remove(self):
self.tryit(src=[2, 1], target=[2, 1], maxnum=1, keep=False,
add=[], remove=[1])
def test_nothing_needed_with_max(self):
self.tryit(src=[1], target=[1], maxnum=1, keep=False, add=[],
remove=[])
def test_filtered_items_not_present(self):
self.tryit(src=[1, 2, 3, 4, 5], target=[1], maxnum=None, keep=False,
itemfilter=lambda a: a < 3, add=[2], remove=[])
def test_max_and_target_has_newest(self):
self.tryit(src=[1, 2, 3, 4], target=[4], maxnum=1, keep=False,
add=[], remove=[])
def test_unordered_target_input(self):
self.tryit(src=['20121026.1', '20120328', '20121001'],
target=['20121001', '20120328', '20121026.1'], maxnum=2,
keep=False, add=[], remove=['20120328'])
def test_reduced_max(self):
self.tryit(src=[9, 5, 8, 4, 7, 3, 6, 2, 1],
target=[9, 8, 7, 6, 5], maxnum=4, keep=False,
add=[], remove=[5])
def test_foocloud_multiple_paths_remove(self):
config = {'delete_filtered_items': True}
memory = ObjectStoreMirrorWriter(config, MemoryObjectStore(None))
foocloud = get_mirror_reader("foocloud")
memory.sync(foocloud, "streams/v1/index.json")
        # We synced once; now sync everything that doesn't have the samepaths
        # version. samepaths reuses some file paths, so deleting anything
        # on this second pass would be wrong.
filters = [ItemFilter("version_name!=samepaths")]
def no_samepaths(data, src, _target, pedigree):
return filter_item(filters, data, src, pedigree)
def dont_remove(*_args):
# This shouldn't be called, because we are smart and do "reference
# counting".
assert False
memory.filter_version = no_samepaths
memory.store.remove = dont_remove
memory.sync(foocloud, "streams/v1/index.json")
# vi: ts=4 expandtab
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_signed_data.py 0000664 0000000 0000000 00000002356 14605750330 0025163 0 ustar 00root root 0000000 0000000 import shutil
import subprocess
import tempfile
import pytest
from os.path import join
from simplestreams import mirrors
from simplestreams import objectstores
from simplestreams.util import SignatureMissingException
from tests.testutil import get_mirror_reader, EXAMPLES_DIR
def _tmp_reader():
sstore = objectstores.FileStore(tempfile.gettempdir())
return mirrors.ObjectStoreMirrorReader(sstore)
def test_read_bad_data():
with pytest.raises(subprocess.CalledProcessError):
good = join(EXAMPLES_DIR, "foocloud", "streams", "v1", "index.sjson")
bad = join(tempfile.gettempdir(), "index.sjson")
shutil.copy(good, bad)
with open(bad, 'r+') as f:
lines = f.readlines()
f.truncate()
f.seek(0)
for line in lines:
f.write(line.replace('foovendor', 'attacker'))
_tmp_reader().read_json("index.sjson")
def test_read_unsigned():
with pytest.raises(SignatureMissingException):
# empty files aren't signed
open(join(tempfile.gettempdir(), 'index.json'), 'w').close()
_tmp_reader().read_json("index.json")
def test_read_signed():
reader = get_mirror_reader("foocloud")
reader.read_json("streams/v1/index.sjson")
simplestreams_0.1.0-67-g8497b634/tests/unittests/test_util.py 0000664 0000000 0000000 00000021361 14605750330 0023673 0 ustar 00root root 0000000 0000000 # pylint: disable=C0301
from simplestreams import util
from copy import deepcopy
import os
from unittest import TestCase
from tests.testutil import EXAMPLES_DIR
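# Behaviour exercised below: util.products_set(tree, data, pedigree) replaces
# the node addressed by a (product[, version[, item]]) pedigree with 'data',
# creating intermediate 'products'/'versions'/'items' dicts as needed.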
class TestProductsSet(TestCase):
def test_product_exists(self):
tree = {'products': {'P1': {"F1": "V1"}}}
util.products_set(tree, {'F2': 'V2'}, ('P1',))
self.assertEqual(tree, {'products': {'P1': {'F2': 'V2'}}})
def test_product_no_exists(self):
tree = {'products': {'A': 'B'}}
util.products_set(tree, {'F1': 'V1'}, ('P1',))
self.assertEqual(tree,
{'products': {'A': 'B', 'P1': {'F1': 'V1'}}})
def test_product_no_products_tree(self):
tree = {}
util.products_set(tree, {'F1': 'V1'}, ('P1',))
self.assertEqual(tree,
{'products': {'P1': {'F1': 'V1'}}})
def test_version_exists(self):
tree = {'products': {'P1': {'versions': {'FOO': {'1': 'one'}}}}}
util.products_set(tree, {'2': 'two'}, ('P1', 'FOO'))
self.assertEqual(tree,
{'products': {'P1': {'versions':
{'FOO': {'2': 'two'}}}}})
def test_version_no_exists(self):
tree = {'products': {'P1': {'versions': {'BAR': {'1': 'one'}}}}}
util.products_set(tree, {'2': 'two'}, ('P1', 'FOO'))
d = {'products': {'P1':
{'versions': {'BAR': {'1': 'one'},
'FOO': {'2': 'two'}}}}}
self.assertEqual(tree, d)
def test_item_exists(self):
items = {'item1': {'f1': '1'}}
tree = {'products': {'P1': {'versions':
{'VBAR': {'1': 'one',
'items': items}}}}}
mnew = {'f2': 'two'}
util.products_set(tree, mnew, ('P1', 'VBAR', 'item1',))
expvers = {'VBAR': {'1': 'one', 'items': {'item1': mnew}}}
self.assertEqual(tree, {'products': {'P1': {'versions': expvers}}})
def test_item_no_exists(self):
items = {'item1': {'f1': '1'}}
tree = {'products': {'P1': {
'versions': {'V1': {'VF1': 'VV1', 'items': items}}
}}}
util.products_set(tree, {'f2': '2'}, ('P1', 'V1', 'item2',))
expvers = {'V1': {'VF1': 'VV1', 'items': {'item1': {'f1': '1'},
'item2': {'f2': '2'}}}}
self.assertEqual(tree, {'products': {'P1': {'versions': expvers}}})
class TestProductsDel(TestCase):
def test_product_exists(self):
tree = {'products': {'P1': {"F1": "V1"}}}
util.products_del(tree, ('P1',))
self.assertEqual(tree, {'products': {}})
def test_product_no_exists(self):
ptree = {'P1': {'F1': 'V1'}}
tree = {'products': deepcopy(ptree)}
util.products_del(tree, ('P2',))
self.assertEqual(tree, {'products': ptree})
def test_version_exists(self):
otree = {'products': {
'P1': {"F1": "V1"},
'P2': {'versions': {'VER1': {'X1': 'X2'}}}
}}
tree = deepcopy(otree)
util.products_del(tree, ('P2', 'VER1'))
del otree['products']['P2']['versions']['VER1']
self.assertEqual(tree, otree)
def test_version_no_exists(self):
otree = {'products': {
'P1': {"F1": "V1"},
'P2': {'versions': {'VER1': {'X1': 'X2'}}}
}}
tree = deepcopy(otree)
util.products_del(tree, ('P2', 'VER2'))
self.assertEqual(tree, otree)
def test_item_exists(self):
otree = {'products': {
'P1': {"F1": "V1"},
'P2': {'versions': {'VER1': {'X1': 'X2',
'items': {'ITEM1': {'IF1': 'IV2'}}}}}
}}
tree = deepcopy(otree)
del otree['products']['P2']['versions']['VER1']['items']['ITEM1']
util.products_del(tree, ('P2', 'VER1', 'ITEM1'))
self.assertEqual(tree, otree)
def test_item_no_exists(self):
otree = {'products': {
'P1': {"F1": "V1"},
'P2': {'versions': {'VER1': {'X1': 'X2',
'items': {'ITEM1': {'IF1': 'IV2'}}}}}
}}
tree = deepcopy(otree)
util.products_del(tree, ('P2', 'VER1', 'ITEM2'))
self.assertEqual(tree, otree)
class TestProductsPrune(TestCase):
def test_products_empty(self):
tree = {'products': {}}
util.products_prune(tree)
self.assertEqual(tree, {})
def test_products_not_empty(self):
tree = {'products': {'fooproduct': {'a': 'b'}}}
util.products_prune(tree)
self.assertEqual(tree, {})
def test_has_item(self):
otree = {'products': {'P1': {'versions':
{'V1': {'items': {'I1': 'I'}}}}}}
tree = deepcopy(otree)
util.products_prune(tree)
self.assertEqual(tree, otree)
def test_deletes_one_version_leaves_one(self):
versions = {'V1': {'items': {}}, 'V2': {'items': {'I1': 'I'}}}
otree = {'products': {'P1': {'versions': versions}}}
tree = deepcopy(otree)
util.products_prune(tree)
del otree['products']['P1']['versions']['V1']
self.assertEqual(tree, otree)
class TestReadSigned(TestCase):
def test_read_signed(self):
path = os.path.join(EXAMPLES_DIR, 'foocloud/streams/v1/index.sjson')
with open(path) as fileobj:
util.read_signed(fileobj.read())
def test_no_check(self):
streams = os.path.join(EXAMPLES_DIR, 'foocloud/streams/v1')
path = os.path.join(streams, 'index.sjson')
with open(path) as fileobj:
util.read_signed(fileobj.read(), checked=False)
class TestProductsCondense(TestCase):
def test_condense_1(self):
tree = {'products': {'P1': {'versions': {'1': {'A': 'B'},
'2': {'A': 'B'}}}}}
exp = {'A': 'B',
'products': {'P1': {'versions': {'1': {}, '2': {}}}}}
util.products_condense(tree, top='products')
self.assertEqual(tree, exp)
def test_condense_unicode(self):
tree = {'products': {'P1': {'versions': {'1': {'A': u'B'},
'2': {'A': u'B'}}}}}
exp = {'A': u'B',
'products': {'P1': {'versions': {'1': {}, '2': {}}}}}
util.products_condense(tree, top='products')
self.assertEqual(tree, exp)
def test_condense_different_arch(self):
tree = {'products': {'P1': {'versions': {
'1': {'items': {'thing1': {'arch': 'amd64'},
'thing2': {'arch': 'amd64'}}},
'2': {'items': {'thing3': {'arch': 'i3867'}}}}}}}
exp = {'products': {'P1': {'versions': {
'1': {'arch': 'amd64',
'items': {'thing1': {}, 'thing2': {}}},
'2': {'arch': 'i3867',
'items': {'thing3': {}}}}}}}
util.products_condense(tree)
self.assertEqual(tree, exp)
def test_condense_no_arch(self):
tree = {'products': {'P1': {'versions': {
'1': {'items': {'thing1': {'arch': 'amd64'},
'thing2': {'arch': 'amd64'}}},
'2': {'items': {'thing3': {}}}}}}}
exp = {'products': {'P1': {'versions': {
'1': {'arch': 'amd64',
'items': {'thing1': {},
'thing2': {}}},
'2': {'items': {'thing3': {}}}}}}}
util.products_condense(tree)
self.assertEqual(tree, exp)
def test_repeats_removed(self):
tree = {'products': {'P1': {'A': 'B',
'versions': {'1': {'A': 'B'},
'2': {'A': 'B'}}}}}
exp = {'A': 'B',
'products': {'P1': {'versions': {'1': {}, '2': {}}}}}
util.products_condense(tree, top='products')
self.assertEqual(tree, exp)
def test_nonrepeats_stay(self):
tree = {'products': {'P1': {'A': 'C',
'versions': {'1': {'A': 'B'},
'2': {'A': 'B'}}}}}
exp = {'A': 'C',
'products': {'P1': {'versions': {'1': {'A': 'B'},
'2': {'A': 'B'}}}}}
util.products_condense(tree, top='products')
self.assertEqual(tree, exp)
def test_default_top_is_version(self):
# default top has to be version for backwards compat
tree = {'products': {'P1': {'versions': {'1': {'A': 'B'},
'2': {'A': 'B'}}}}}
exp = {'products': {'P1': {'A': 'B',
'versions': {'1': {}, '2': {}}}}}
util.products_condense(tree)
self.assertEqual(tree, exp)
# vi: ts=4 expandtab
simplestreams_0.1.0-67-g8497b634/tests/unittests/tests_filestore.py 0000664 0000000 0000000 00000004653 14605750330 0025102 0 ustar 00root root 0000000 0000000 import shutil
import tempfile
import os
from simplestreams import objectstores
from simplestreams import mirrors
from tests.testutil import get_mirror_reader
from unittest import TestCase
FOOCLOUD_FILE = ("files/release-20121026.1/"
"foovendor-6.1-server-cloudimg-amd64.tar.gz")
class TestResumePartDownload(TestCase):
def setUp(self):
self.target = tempfile.mkdtemp()
def tearDown(self):
shutil.rmtree(self.target)
def test_mirror_resume(self):
# test mirror resuming from filestore
smirror = get_mirror_reader("foocloud")
        # as long as this is less than the size of the file, it's valid
part_size = 10
# create a valid .part file
tfile = os.path.join(self.target, FOOCLOUD_FILE)
os.makedirs(os.path.dirname(tfile))
with open(tfile + ".part", "wb") as fw:
with smirror.source(FOOCLOUD_FILE) as fr:
fw.write(fr.read(part_size))
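        # The mirror writer is expected to pick up <file>.part, resume the
        # download from offset part_size, and rename it once complete.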
target_objstore = objectstores.FileStore(self.target)
tmirror = mirrors.ObjectStoreMirrorWriter(config=None,
objectstore=target_objstore)
tmirror.sync(smirror, "streams/v1/index.json")
# the part file should have been cleaned up. If this fails, then
# likely the part file wasn't used, and this test is no longer valid
self.assertFalse(os.path.exists(tfile + ".part"))
def test_corrupted_mirror_resume(self):
# test corrupted .part file is caught
smirror = get_mirror_reader("foocloud")
# create a corrupt .part file
tfile = os.path.join(self.target, FOOCLOUD_FILE)
os.makedirs(os.path.dirname(tfile))
with open(tfile + ".part", "w") as fw:
# just write some invalid data
fw.write("--bogus--")
target_objstore = objectstores.FileStore(self.target)
tmirror = mirrors.ObjectStoreMirrorWriter(config=None,
objectstore=target_objstore)
        self.assertRaisesRegex(Exception, r".*%s.*" % FOOCLOUD_FILE,
                               tmirror.sync,
                               smirror, "streams/v1/index.json")
# now the .part file should be removed, and trying again should succeed
self.assertFalse(os.path.exists(tfile + ".part"))
tmirror.sync(smirror, "streams/v1/index.json")
self.assertFalse(os.path.exists(tfile + ".part"))
simplestreams_0.1.0-67-g8497b634/tools/ 0000775 0000000 0000000 00000000000 14605750330 0017236 5 ustar 00root root 0000000 0000000 simplestreams_0.1.0-67-g8497b634/tools/build-deb 0000775 0000000 0000000 00000003712 14605750330 0021016 0 ustar 00root root 0000000 0000000 #!/bin/sh
set -e
sourcename="simplestreams"
TEMP_D=""
UNCOMMITTED=${UNCOMMITTED:-0}
fail() { echo "$@" 1>&2; exit 1; }
cleanup() {
[ -z "$TEMP_D" ] || rm -Rf "$TEMP_D"
}
if [ "$1" = "-h" -o "$1" = "--help" ]; then
cat <