barman-1.3.0/AUTHORS

Barman Core Team:
* Gabriele Bartolini
* Giuseppe Broccolo
* Giulio Calacoci
* Marco Nenciarini
Past contributors:
* Carlo Ascani
Many thanks go to our "Founding sponsors" (in alphabetical order):
* 4Caast - http://4caast.morfeo-project.org/
* CSI Piemonte - http://www.csipiemonte.it/
* GestionaleAuto - http://www.gestionaleauto.com/
* Navionics - http://www.navionics.com/
* XCon Internet Services - http://www.xcon.it/
barman-1.3.0/ChangeLog

2014-02-02 Marco Nenciarini
Update the ChangeLog file
Update RPM spec file for release 1.3.0
Review of NEWS and AUTHORS files
2014-01-31 Gabriele Bartolini
Updated files for final release
2014-01-30 Marco Nenciarini
Improve error messages during remote recovery
2014-01-29 Marco Nenciarini
Use fsync to avoid xlog.db file corruption (Closes #32)
Add network_compression configuration option (Closes #19)
When network_compression is enabled, all network transfers are done
using compression (if available).
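The changelog does not show the configuration syntax; a minimal barman.conf fragment sketching how this option might be set, assuming the option name as listed above (the server section name and value are illustrative):

```ini
; hypothetical server section; "main" is an illustrative name
[main]
; compress all network transfers for this server, when available
network_compression = true
```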
2014-01-29 Gabriele Bartolini
Check directories exist before executing a backup (#14)
2014-01-28 Giulio Calacoci
Reduce log verbosity during initialisation phase
2014-01-28 Gabriele Bartolini
Load configuration files after logger initialisation
2014-01-21 Marco Nenciarini
Avoid tablespaces inside pgdata directory from being copied twice
2014-01-09 Marco Nenciarini
Generalise recovery operations (local/remote)
2014-01-28 Gabriele Bartolini
Reviewed documentation of WAL archive hook scripts
2014-01-07 Marco Nenciarini
Add pre_archive_script and post_archive_script hook scripts
2014-01-23 Marco Nenciarini
Refactor the LockFile management class to report permission errors.
Fix 'Invalid cross-device link' error in cron when incoming is on a different filesystem (merge request #4 by Holger Hamann)
2014-01-22 Marco Nenciarini
Port 'show-server' command to the new output interface
2014-01-21 Giulio Calacoci
Updated copyright (2014)
2014-01-17 Marco Nenciarini
Port 'status' and 'list-server' commands to the new output interface
2014-01-09 Marco Nenciarini
Port the 'show-backup' command to the new output interface
2014-01-16 Giulio Calacoci
Added implementation of the backup command --immediate-checkpoint option and the immediate_checkpoint configuration option
2014-01-08 Gabriele Bartolini
Bump version number and add release notes for 1.3.0
2013-11-27 Giulio Calacoci
Add unit tests for infofile and compression modules
Fix some python3 compatibility bugs highlighted by the tests
2013-10-18 Marco Nenciarini
Move barman._pretty_size() to barman.utils.pretty_size()
2014-01-03 Marco Nenciarini
Implement BackupInfo as a FieldListFile and move it in infofile module.
2014-01-07 Marco Nenciarini
Refactor output to a dedicate module.
The following commands have been ported to the new interface:
* backup
* check
* list-backup
A special NagiosOutputWriter has been added to support Nagios compatible
output for the check command
WARNING: this code doesn't run due to a circular dependency. The issue
will be fixed in the next commit
2013-09-12 Marco Nenciarini
Isolate subprocesses' stdin/stdout in command_wrappers module
2014-01-07 Marco Nenciarini
Refactor hooks management
2013-09-12 Marco Nenciarini
Split out logging configuration and userid enforcement from the configuration class.
2013-12-16 Gabriele Bartolini
Added rebuild-xlogdb command man page
2013-11-08 Marco Nenciarini
Implement the rebuild-xlogdb command. (Closes #27)
2013-11-19 Giulio Calacoci
added documentation for tablespaces relocation (#22)
2013-10-30 Gabriele Bartolini
Added TODO list
2013-09-05 Marco Nenciarini
Update the ChangeLog file
Bump version to 1.2.3
2013-08-29 Gabriele Bartolini
Updated README and man page
Added stub of release notes
2013-08-26 Marco Nenciarini
Initial Python 3 support
Update setup.py to support py.test and recent setuptools
2013-08-24 Damon Snyder
27: Addresses potential corruption of WAL xlog.db files.
In barman.lockfile.release() the file is unlinked (deleted). This effectively
nullifies any future attempts to lock the file by a blocking process by deleting
the open file table entry upon which the flock is based.
This commit removes the unlink and instead unlocks the file and then closes the file
descriptor leaving the lock file and open file table entry intact.
2013-08-22 Marco Nenciarini
Add support for restore target name (PostgreSQL 9.1+)
2013-08-21 Marco Nenciarini
PostgreSQL version in backup.info file is an integer
Make WAL sequence calculation compatible with PostgreSQL 9.3
With PostgreSQL 9.3 WAL files are written in a continuous stream,
rather than skipping the last 16MB segment every 4GB, meaning WAL
filenames may end in FF.
2013-06-24 Marco Nenciarini
Update the ChangeLog file
Fix config file parser tests
Bump version to 1.2.2
Fix python 2.6 compatibility
Fix history in spec file
2013-06-17 Marco Nenciarini
Update RPM spec file
2013-06-13 Marco Nenciarini
Update the ChangeLog file
Fix remote recovery with bwlimit on a tablespace
2013-06-07 Marco Nenciarini
Added the "tablespace_bandwidth_limit" option
2013-06-12 Gabriele Bartolini
Updated docs and man pages for 1.2.1
Prepared NEWS file for 1.2.1 release
2013-04-26 Gabriele Bartolini
Added the "bandwidth_limit" global/server option, which allows limiting the I/O bandwidth (in KBps) for backup and recovery operations
Added /etc/barman/barman.conf as default location
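A sketch of how these bandwidth options might look in barman.conf, assuming the option names listed above (section name, tablespace name, and values are illustrative):

```ini
; hypothetical server section; "main" is an illustrative name
[main]
; overall transfer limit in KBps for backup and recovery
bandwidth_limit = 4000
; assumed per-tablespace override syntax (tablespace_name:KBps)
tablespace_bandwidth_limit = tbs_data:2000
```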
2013-03-13 Gabriele Bartolini
Removed duplicate message for previous backup in show command
2013-03-07 Gabriele Bartolini
Cosmetic change in message for "all" reserved section
2013-02-08 Marco Nenciarini
Avoid triggering the minimum_redundancy check on FAILED backups
Add BARMAN_VERSION to hook script environment
2013-01-31 Marco Nenciarini
Update the ChangeLog file
Update RPM's spec files
2013-01-30 Gabriele Bartolini
Finalised files for version 1.2.0
2013-01-28 Marco Nenciarini
Forbid the usage of 'all' word as server name
2013-01-11 Gabriele Bartolini
Added basic support for Nagios plugin output for check command through the --nagios option
2013-01-28 Marco Nenciarini
Add @expects_obj decorator to cli function as required by the upcoming Argh 1.0 API
2013-01-11 Marco Nenciarini
Migrate to the new argh API.
Now barman requires argh >= 0.21.2 and argcomplete
2013-01-11 Gabriele Bartolini
Prepared release notes
2012-12-18 Marco Nenciarini
Fix typo in doc/barman.conf
2012-12-14 Marco Nenciarini
Return failure exit code if backup command fails in any way
2012-12-14 Gabriele Bartolini
Prepared copyright lines for 2013
Updated documentation and man pages
Added retention policy examples in configuration file
2012-12-13 Marco Nenciarini
Q/A on retention policy code
2012-12-12 Marco Nenciarini
Fix configuration parser unit tests
Exit with error if an invalid server name is passed to any command which takes a list of servers
2012-12-08 Gabriele Bartolini
Add retention status to show-backup and list-backup commands
Auto-management of retention policies for base backups
Using the report() method for retention policies, enforce retention
policy through cron (if policy mode is 'auto'), by deleting OBSOLETE
backups.
Retention status and report() method for retention policies
Created the following states for retention policies:
VALID, OBSOLETE, NONE and POTENTIALLY_OBSOLETE (an
object which is OBSOLETE but cannot be removed
automatically due to minimum_redundancy requirements).
Created the report() method for the retention policy
base class, which executes the _backup_report() method
for base backups and the _wal_report() method for WAL
retention policies (currently not enforced).
The report method iterates through the DONE backups
and according to the retention policy, classifies
the backup. RedundancyRetentionPolicy uses the number
of backups, RecoveryWindowRetentionPolicy uses the
time window and the recoverability point concept.
Integrated minimum_redundancy with "barman check"
Initialisation of retention policies for a server
Added the _init_retention_policies() method in the
Server class constructor, which integrates with
the new RetentionPolicy classes and performs
syntax checking.
Integrated retention policies with log, 'barman check'
and 'barman status'.
String representation conforms to retention syntax
The string representation produces now a syntax-valid
retention policy configuration string.
The previous __str__ method has been renamed into debug()
SimpleWALRetentionPolicy objects are now created from
the server's main retention policy by the factory class.
2012-12-07 Gabriele Bartolini
Add the global/server option minimum_redundancy and check that it is >= 0. It guarantees that, when a delete is performed (or retention policies are enforced), this is the minimum number of backups kept for that server.
Add support for the retention_policy_mode global/server option, which defines the method for enforcing retention policies (currently only "auto"; in future versions "manual" will be allowed)
Added first stub of retention policy classes
Started version 1.2.0
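A hypothetical barman.conf fragment combining the retention options described above (section name and policy values are illustrative, assuming the option names as listed):

```ini
; hypothetical server section; "main" is an illustrative name
[main]
; keep enough backups to recover to any point in the last 4 weeks
retention_policy = RECOVERY WINDOW OF 4 WEEKS
; "auto": cron enforces the policy by deleting OBSOLETE backups
retention_policy_mode = auto
; never drop below this number of backups, even if OBSOLETE
minimum_redundancy = 1
```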
2012-12-04 Marco Nenciarini
Fix unit config tests
Update the ChangeLog file
Add ssl_*_file and unix_socket_directory to dangerous options list
Display tablespace's oid in show-backup output
Alphabetically sort servers in all commands output
Don't give up on first error in 'barman check all' command
2012-12-03 Gabriele Bartolini
Added sorting of files in configuration directory
2012-11-29 Marco Nenciarini
Fix regression in barman check command when configuration_files_directory is None
Update rpm files to 1.1.2 release
2012-11-29 Carlo Ascani
Update README
2012-11-29 Gabriele Bartolini
Prepared files for release
2012-11-28 Gabriele Bartolini
Add the configuration_files_directory option, which allows including multiple files from a directory
2012-11-29 Carlo Ascani
Update README
2012-11-28 Marco Nenciarini
Update NEWS file
2012-11-05 Gabriele Bartolini
Added support for list-backup all
2012-11-04 Gabriele Bartolini
Added latest/oldest for show-backup, delete, list-files and recover commands
Added get_first_backup and get_last_backup functions to Server class
Added application_name management for PostgreSQL >= 9.0
2012-11-13 Gabriele Bartolini
Switched to version 1.1.2
Continue if a WAL file is not found during delete (bug #18)
2012-11-04 Gabriele Bartolini
Include version 90200 for the new tablespace function
2012-10-16 Marco Nenciarini
Update the ChangeLog file
Update NEWS file and rpm package
Bump version to 1.1.1
Add more information about the failing line in xlogdb_parse_line errors
2012-10-15 Marco Nenciarini
Fix two bugs in the recover command
2012-10-12 Marco Nenciarini
Update the ChangeLog file
Update rpm changelog
Make recover fail if an invalid tablespace relocation rule is given
Remove unused imports from cli.py
2012-10-11 Gabriele Bartolini
Updated version to 1.1.0
Fixes bug #12
2012-10-11 Marco Nenciarini
Fail fast on recover command if the destination directory contains the ':' character (Closes: #4)
Fix typo in recovery messages
Report an informative message when pg_start_backup() invocation fails because an exclusive backup is already running (Closes: #8)
Make current_action an attribute of BackupManager class
2012-10-08 Gabriele Bartolini
Added ticket #10 to NEWS
Add pg_config_detect_possible_issues function for issue #10
2012-10-04 Gabriele Bartolini
Updated NEWS file with bug fixing #9
Fixes issue #9 on pg_tablespace_location() for 9.2
2012-08-31 Marco Nenciarini
Add BARMAN_PREVIOUS_ID variable to hooks environment
2012-08-20 Marco Nenciarini
Merge spec changes from Devrim
Add BARMAN_ERROR and BARMAN_STATUS variables to hook's environment
Added backup all documentation to README
2012-08-20 Gabriele Bartolini
Updated release notes
Set version to 1.0.1
2012-08-20 Marco Nenciarini
Document {pre,post}_backup_script in README
Document {pre,post}_backup_script in configuration man-page
2012-08-17 Marco Nenciarini
Add pre/post backup hook scripts definition (Closes: #7)
Add the possibility to manage hook scripts before and after a base
backup. Add the global (overridden per server) configuration options
called:
* pre_backup_script: executed before a backup
* post_backup_script: executed after a backup
Use the environment to pass at least the following variables:
* BARMAN_BACKUP_DIR: backup destination directory
* BARMAN_BACKUP_ID: ID of the backup
* BARMAN_CONFIGURATION: configuration file used by barman
* BARMAN_PHASE: 'pre' or 'post'
* BARMAN_SERVER: name of the server
The script definition is passed to the shell and can return any exit code.
Barman won't perform any exit code check. It will simply log the result in the log file.
To test it you can try adding
pre_backup_script = env | grep ^BARMAN
post_backup_script = env | grep ^BARMAN
in your barman config and you'll see the variables on console.
2012-08-16 Marco Nenciarini
Add documentation for 'backup all' command.
2012-07-19 Gabriele Bartolini
Add 'backup all' shortcut and, in general, multiple servers specification (issue #1)
2012-07-16 Gabriele Bartolini
Fixed typo (thanks to Daymel Bonne Solís)
2012-07-06 Marco Nenciarini
Initial commit
barman-1.3.0/INSTALL

Barman INSTALL instructions
Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.l.)
For further information, see the "Installation" section
in the README file.
barman-1.3.0/LICENSE

GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
Copyright (C)
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see .
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
Copyright (C)
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
.
barman-1.3.0/MANIFEST.in
recursive-include barman *.py
recursive-include rpm *
recursive-include scripts *.bash_completion
include doc/barman.1 doc/barman.5 doc/barman.conf
include AUTHORS NEWS ChangeLog LICENSE PKG-INFO MANIFEST.in MANIFEST setup.py INSTALL README
barman-1.3.0/NEWS
Barman News - History of user-visible changes
Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.l.)
Version 1.3.0 - 3 Feb 2014
* Refactored BackupInfo class for backup metadata to use the new
FieldListFile class (infofile module)
* Refactored output layer to use a dedicated module, in order to
facilitate integration with Nagios (NagiosOutputWriter class)
* Refactored subprocess handling in order to isolate stdin/stderr/
stdout channels (command_wrappers module)
* Refactored hook scripts management
* Extracted logging configuration and userid enforcement from the
configuration class.
* Support for hook scripts to be executed before and after a WAL
file is archived, through the 'pre_archive_script' and
'post_archive_script' configuration options.
* Implemented immediate checkpoint capability with
--immediate-checkpoint command option and 'immediate_checkpoint'
configuration option
* Implemented network compression for remote backup and recovery
through the 'network_compression' configuration option (#19)
* Implemented the 'rebuild-xlogdb' command (Closes #27 and #28)
* Added deduplication of tablespaces located inside the PGDATA
directory
* Refactored remote recovery code to work the same way local
recovery does, by performing remote directory preparation
(assuming the remote user has the right permissions on the remote
server)
* 'barman backup' now tries to create server directories before
attempting to execute a full backup (#14)
* Fixed bug #22: improved documentation for tablespaces relocation
* Fixed bug #31: 'barman cron' checks directory permissions for
lock file
* Fixed bug #32: xlog.db read access during cron activities
Version 1.2.3 - 5 September 2013
* Added support for PostgreSQL 9.3
* Added support for the "--target-name" recovery option, which allows
restoring to a named point previously specified with pg_create_restore_point
(only for users of PostgreSQL 9.1 and above)
* Fixed bug #27 about flock() usage with barman.lockfile (many thanks to
Damon Snyder)
* Introduced Python 3 compatibility
Version 1.2.2 - 24 June 2013
* Fix python 2.6 compatibility
Version 1.2.1 - 17 June 2013
* Added the "bandwidth_limit" global/server option, which allows
limiting the I/O bandwidth (in KBPS) for backup and recovery operations
* Added the "tablespace_bandwidth_limit" global/server option, which allows
limiting the I/O bandwidth (in KBPS) for backup and recovery operations
on a per-tablespace basis
* Added /etc/barman/barman.conf as default location
* Bug fix: avoid triggering the minimum_redundancy check
on FAILED backups (thanks to Jérôme Vanandruel)
Version 1.2.0 - 31 Jan 2013
* Added the "retention_policy_mode" global/server option which defines
the method for enforcing retention policies (currently only "auto")
* Added the "minimum_redundancy" global/server option which defines
the minimum number of backups to be kept for a server
* Added the "retention_policy" global/server option which defines
retention policy management based on redundancy (e.g. REDUNDANCY 4)
or recovery window (e.g. RECOVERY WINDOW OF 3 MONTHS)
* Added retention policy support to the logging infrastructure, the
"check" and the "status" commands
* The "check" command now integrates minimum redundancy control
* Added retention policy states (valid, obsolete and potentially obsolete)
to "show-backup" and "list-backup" commands
* The 'all' keyword is now forbidden as server name
* Added basic support for Nagios plugin output to the 'check'
command through the --nagios option
* Barman now requires argh >= 0.21.2 and argcomplete
* Minor bug fixes
Version 1.1.2 - 29 Nov 2012
* Added the "configuration_files_directory" option, which allows including
multiple server configuration files from a directory
* Support for special backup IDs: latest, last, oldest, first
* Management of multiple servers to the 'list-backup' command.
'barman list-backup all' now lists backups for all the configured servers.
* Added "application_name" management for PostgreSQL >= 9.0
* Fixed bug #18: ignore missing WAL files if not found during delete
Version 1.1.1 - 16 Oct 2012
* Fix regressions in recover command.
Version 1.1.0 - 12 Oct 2012
* Support for hook scripts to be executed before and after
a 'backup' command through the 'pre_backup_script' and 'post_backup_script'
configuration options.
* Management of multiple servers to the 'backup' command.
'barman backup all' now iteratively backs up all the configured servers.
* Fixed bug #9: "9.2 issue with pg_tablespace_location()"
* Add warning in recovery when file location options have been defined
in the postgresql.conf file (issue #10)
* Fail fast on recover command if the destination directory contains
the ':' character (Closes: #4) or if an invalid tablespace
relocation rule is passed
* Report an informative message when pg_start_backup() invocation
fails because an exclusive backup is already running (Closes: #8)
Version 1.0.0 - 6 July 2012
* Backup of multiple PostgreSQL servers, with different versions. Versions
from PostgreSQL 8.4+ are supported.
* Support for secure remote backup (through SSH)
* Management of a catalog of backups for every server, allowing users
to easily create new backups, delete old ones or restore them
* Compression of WAL files that can be configured on a per server
basis using compression/decompression filters, both predefined (gzip
and bzip2) or custom
* Support for INI configuration file with global and per-server directives.
Default location for configuration files are /etc/barman.conf or
~/.barman.conf. The '-c' option allows users to specify a different one
* Simple indexing of base backups and WAL segments that does not require
a local database
* Maintenance mode (invoked through the 'cron' command) which performs
ordinary operations such as WAL archival and compression, catalog
updates, etc.
* Added the 'backup' command which takes a full physical base backup
of the given PostgreSQL server configured in Barman
* Added the 'recover' command which performs local recovery of a given
backup, allowing DBAs to specify a point in time. The 'recover' command
supports relocation of both the PGDATA directory and, where applicable,
the tablespaces
* Added the '--remote-ssh-command' option to the 'recover' command for
remote recovery of a backup. Remote recovery does not currently support
relocation of tablespaces
* Added the 'list-server' command that lists all the active servers
that have been configured in barman
* Added the 'show-server' command that shows the relevant information
for a given server, including all configuration options
* Added the 'status' command which shows information about the current
state of a server, including Postgres version, current transaction ID,
archive command, etc.
* Added the 'check' command which returns 0 if everything Barman needs
is functioning correctly
* Added the 'list-backup' command that lists all the available backups
for a given server, including size of the base backup and total size
of the related WAL segments
* Added the 'show-backup' command that shows the relevant information
for a given backup, including time of start, size, number of related
WAL segments and their size, etc.
* Added the 'delete' command which removes a backup from the catalog
* Added the 'list-files' command which lists all the files for a
single backup
* RPM Package for RHEL 5/6
barman-1.3.0/PKG-INFO
Metadata-Version: 1.0
Name: barman
Version: 1.3.0
Summary: Backup and Recovery Manager for PostgreSQL
Home-page: http://www.pgbarman.org/
Author: 2ndQuadrant Italia (Devise.IT S.r.l.)
Author-email: info@2ndquadrant.it
License: GPL-3.0
Description: Barman (backup and recovery manager) is an administration
tool for disaster recovery of PostgreSQL servers written in Python.
It allows you to perform remote backups of multiple servers
in business critical environments and helps DBAs during the recovery phase.
Barman's most wanted features include backup catalogs, retention policies,
remote recovery, archiving and compression of WAL files and backups.
Barman is written and maintained by PostgreSQL professionals 2ndQuadrant.
Platform: Linux
Platform: Mac OS X
Classifier: Environment :: Console
Classifier: Development Status :: 5 - Production/Stable
Classifier: Topic :: System :: Archiving :: Backup
Classifier: Topic :: Database
Classifier: Topic :: System :: Recovery Tools
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
barman-1.3.0/README
Backup and Recovery Manager for PostgreSQL Tutorial
Barman (backup and recovery manager) is an administration tool for
disaster recovery of PostgreSQL servers written in Python. Barman can
perform remote backups of multiple servers in business critical
environments, and helps DBAs during the recovery phase.
Barman's most wanted features include: backup catalogues, retention
policies, remote recovery, archiving and compression of WAL files and
of backups. Barman is written and maintained by PostgreSQL
professionals 2ndQuadrant.
__________________________________________________________________
Introduction
In a perfect world, there would be no need for a backup. However, it is
important, especially in business environments, to be prepared for when
the "unexpected" happens. In a database scenario, the unexpected could
take any of the following forms:
* data corruption;
* system failure, including hardware failure;
* human error;
* natural disaster.
In such cases, any ICT manager or DBA should be able to repair the
incident and recover the database in the shortest possible time. We
normally refer to this discipline as Disaster recovery.
This guide assumes that you are familiar with theoretical disaster
recovery concepts, and you have a grasp of PostgreSQL fundamentals in
terms of physical backup and disaster recovery. If not, we encourage
you to read the PostgreSQL documentation or any of the recommended
books on PostgreSQL.
Professional training on this topic is another effective way of
learning these concepts. At any time of the year you can find many
courses available all over the world, delivered by PostgreSQL companies
such as 2ndQuadrant.
For now, you should be aware that any PostgreSQL physical/binary backup
(not to be confused with the logical backups produced by the pg_dump
utility) is composed of:
* a base backup;
* one or more WAL files (usually collected through continuous
archiving).
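In PostgreSQL terms (9.0 and later), continuous archiving is enabled through settings of this kind in postgresql.conf; the destination path and hostnames below are purely illustrative, and Barman's actual archiving setup is covered later in its documentation:

```
wal_level = archive     # 8.4 predates this setting; 'hot_standby' also works
archive_mode = on
archive_command = 'rsync -a %p barman@backup:/var/lib/barman/pg/incoming/%f'
```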
PostgreSQL offers the core primitives that allow DBAs to set up a really
robust Disaster Recovery environment. However, it becomes complicated
to manage multiple backups, from one or more PostgreSQL servers.
Restoring a given backup is another task that any PostgreSQL DBA would
love to see more automated and user friendly.
With these goals in mind, 2ndQuadrant started the development of Barman
for PostgreSQL. Barman is an acronym for "Backup and Recovery Manager".
Currently Barman works only on Linux and Unix operating systems.
__________________________________________________________________
Before you start
The first step is to decide the architecture of your backup. In a
simple scenario, you have one PostgreSQL instance (server) running on a
host. You want your data continuously backed up to another server,
called the backup server.
Barman allows you to launch PostgreSQL backups directly from the backup
server, using SSH connections. Furthermore, it allows you to centralise
your backups in case you have more than one PostgreSQL server to
manage.
During this guide, we will assume that:
* there is one PostgreSQL instance on a host (called pg for
simplicity)
* there is one backup server on another host (called backup)
* communication via SSH between the two servers is enabled
* the PostgreSQL server can be reached from the backup server as the
postgres operating system user (or another user with PostgreSQL
database superuser privileges, typically configured via ident
authentication)
It is important to note that, for disaster recovery, these two servers
must not share any physical resource except for the network. You can
use Barman in geographical redundancy scenarios for better disaster
recovery outcomes.
System requirements
* Linux/Unix
* Python 2.6 or 2.7
* Python modules:
+ argcomplete
+ argh >= 0.21.2
+ psycopg2
+ python-dateutil < 2.0 (since version 2.0 requires python3)
+ distribute (optional)
* PostgreSQL >= 8.4
* rsync >= 3.0.4
Important
The same major version of PostgreSQL should be installed on both
servers.
Tip
Users of RedHat Enterprise Linux, CentOS and Scientific Linux are
advised to install the Extra Packages for Enterprise Linux (EPEL)
repository.
[Further information at [1]http://fedoraproject.org/wiki/EPEL]
Note
Version 1.2.3 of Barman has been refactored for Python 3 support.
Please consider it experimental for now and report any bugs through the
ticketing system on SourceForge or the mailing list.
__________________________________________________________________
Installation
On RedHat/CentOS using RPM packages
Barman can be installed on RHEL5 and RHEL6 Linux systems using RPM
packages.
Barman is available through the PostgreSQL Global Development Group RPM
repository with Yum. You need to follow the instructions for your
distribution (RedHat, CentOS, Fedora, etc.) and architecture that you
can find at [2]http://yum.postgresql.org/.
Then, as root simply type:
yum install barman
On Debian/Ubuntu using packages
Barman can be installed on Debian and Ubuntu Linux systems using
packages.
It is available in the official repository for Debian Sid (unstable)
and Ubuntu 12.10 (Quantal Quetzal).
Note
You can install an up-to-date version of Barman on many Debian and
Ubuntu releases using the PostgreSQL Community APT repository at
[3]http://apt.postgresql.org/. Instructions can be found at
[4]https://wiki.postgresql.org/wiki/Apt.
Installing Barman is as simple as typing as root user:
apt-get install barman
From sources
Create a system user called barman on the backup server. As barman
user, download the sources and uncompress them.
For a system-wide installation, type:
barman@backup$ ./setup.py build
barman@backup# ./setup.py install # run this command with root privileges or sudo
For a local installation, type:
barman@backup$ ./setup.py install --user
Important
The --user option works only with python-distribute
Barman will be installed in your user directory (make sure that your
PATH environment variable is set properly).
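With a --user installation, the barman script typically lands under ~/.local/bin, although the exact path depends on your Python setup; a shell-profile line along these lines makes it reachable:

```shell
# Add the per-user script directory to PATH (the path is an assumption;
# check where setup.py actually installed the 'barman' script)
export PATH="$HOME/.local/bin:$PATH"
```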
__________________________________________________________________
Getting started
Prerequisites
SSH connection
Barman needs a bidirectional SSH connection between the barman user on
the backup server and the postgres user on the pg server. SSH must be
configured so that no password prompt is presented when connecting. As
the barman user on the backup server, generate an SSH key with an empty
passphrase, and append the public key to the authorized_keys file of the
postgres user on the pg server.
The barman user on the backup server should then be able to perform the
following operation without typing a password:
barman@backup$ ssh postgres@pg
The procedure must be repeated with sides swapped in order to allow the
postgres user on the pg server to connect to the backup server as the
barman user without typing a password:
postgres@pg$ ssh barman@backup
For further information, refer to OpenSSH documentation.
PostgreSQL connection
You need to make sure that the backup server allows connection to the
PostgreSQL server on pg as superuser (postgres).
You can choose your favourite client authentication method among those
offered by PostgreSQL. More information can be found here:
[5]http://www.postgresql.org/docs/current/static/client-authentication.
html
barman@backup$ psql -c 'SELECT version()' -U postgres -h pg
Note
As of version 1.1.2, Barman honours the application_name connection
option for PostgreSQL servers 9.0 or higher.
Backup directory
Barman needs a main backup directory to store all the backups. Even
though you can define a separate folder for each server you want to
back up and for each type of resource (backup or WAL segments, for
instance), we suggest that you adhere to the default rules and stick
with the conventions that Barman chooses for you.
You will see that the configuration file (as explained below) defines a
barman_home variable, which is the directory where Barman will store
all your backups by default. We choose /var/lib/barman as home
directory for Barman:
barman@backup$ sudo mkdir /var/lib/barman
barman@backup$ sudo chown barman:barman /var/lib/barman
Important
We assume that you have enough space, and that you have already thought
about redundancy and safety of your disks.
Basic configuration
In the docs directory you will find a minimal configuration file. Use
it as a template, and copy it to /etc/barman.conf, or to
~/.barman.conf. In general, the former applies to all the users on the
backup server, while the latter applies only to the barman user; for
the purpose of this tutorial there is no difference in using one or the
other.
From version 1.2.1, you can use /etc/barman/barman.conf as default
system configuration file.
The configuration file follows the standard INI format, and is split
in:
* a section for general configuration (identified by the barman
label)
* a section for each PostgreSQL server to be backed up (identified by
the server label, e.g. main or pg)
[all and barman are reserved words and cannot be used as server
labels]
As of version 1.1.2, you can now specify a directory for configuration
files similarly to other Linux applications, using the
configuration_files_directory option (empty by default). If the value
of configuration_files_directory is a directory, Barman will read all
the files with .conf extension that exist in that folder. For example,
if you set it to /etc/barman.d, you can specify your PostgreSQL servers
placing each section in a separate .conf file inside the /etc/barman.d
folder.
Otherwise, you can use Barman's standard way of specifying sections
within the main configuration file.
; Barman, Backup and Recovery Manager for PostgreSQL
; http://www.pgbarman.org/ - http://www.2ndQuadrant.com/
;
; Main configuration file
[barman]
; Main directory
barman_home = /var/lib/barman
; System user
barman_user = barman
; Log location
log_file = /var/log/barman/barman.log
; Default compression level: possible values are None (default), bzip2, gzip or custom
;compression = gzip
; Pre/post backup hook scripts
;pre_backup_script = env | grep ^BARMAN
;post_backup_script = env | grep ^BARMAN
; Pre/post archive hook scripts
;pre_archive_script = env | grep ^BARMAN
;post_archive_script = env | grep ^BARMAN
; Directory of configuration files. Place your sections in separate files with .conf extension
; For example place the 'main' server section in /etc/barman.d/main.conf
;configuration_files_directory = /etc/barman.d
; Minimum number of required backups (redundancy)
;minimum_redundancy = 0
; Global retention policy (REDUNDANCY or RECOVERY WINDOW) - default empty
;retention_policy =
; Global bandwidth limit in KBPS - default 0 (meaning no limit)
;bandwidth_limit = 4000
; Immediate checkpoint for backup command
;immediate_checkpoint = false
; Enable network compression for data transfers
;network_compression = false
;; ; 'main' PostgreSQL Server configuration
;; [main]
;; ; Human readable description
;; description = "Main PostgreSQL Database"
;;
;; ; SSH options
;; ssh_command = ssh postgres@pg
;;
;; ; PostgreSQL connection string
;; conninfo = host=pg user=postgres
;;
;; ; Minimum number of required backups (redundancy)
;; ; minimum_redundancy = 1
;;
;; ; Examples of retention policies
;;
;; ; Retention policy (disabled)
;; ; retention_policy =
;; ; Retention policy (based on redundancy)
;; ; retention_policy = REDUNDANCY 2
;; ; Retention policy (based on recovery window)
;; ; retention_policy = RECOVERY WINDOW OF 4 WEEKS
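The section layout shown above can be explored with Python's standard
configparser; this is a sketch with illustrative contents, not Barman's
own configuration loader:

```python
# Sketch: how a Barman-style INI file splits into a global [barman]
# section plus one section per server. Contents are illustrative only.
from configparser import ConfigParser

SAMPLE = """
[barman]
barman_home = /var/lib/barman

[main]
description = "Main PostgreSQL Database"
conninfo = host=pg user=postgres
"""

config = ConfigParser()
config.read_string(SAMPLE)

# 'barman' holds global settings; 'all' is reserved; every other
# section names a PostgreSQL server to back up.
servers = [s for s in config.sections() if s not in ("barman", "all")]
print(servers)                           # ['main']
print(config["barman"]["barman_home"])   # /var/lib/barman
```

The same code works unchanged when the server sections live in separate
.conf files under configuration_files_directory, since each file uses
the same INI syntax.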
You can now test the configuration by executing:
barman@backup$ barman show-server main
barman@backup$ barman check main
Write down the incoming_wals_directory, as printed by the barman
show-server main command, because you will need it to setup continuous
WAL archiving.
Important
Executing these two commands saves you the time of manually creating
backup directories for your servers.
Continuous WAL archiving
Edit the postgresql.conf file of the PostgreSQL instance on the pg
server and activate the archive mode:
wal_level = 'archive' # For PostgreSQL >= 9.0
archive_mode = on
archive_command = 'rsync -a %p barman@backup:INCOMING_WALS_DIRECTORY/%f'
Make sure you change the INCOMING_WALS_DIRECTORY placeholder with the
value returned by the barman show-server main command above.
In case you use Hot Standby, wal_level must be set to hot_standby.
Restart the PostgreSQL server.
In order to test that continuous archiving is on and properly working,
you need to check both the PostgreSQL server
[For more information, refer to the PostgreSQL documentation]
and the backup server (in particular, that the WAL files are collected
in the destination directory).
Listing the servers
The following command displays the list of all the available servers:
barman@backup$ barman list-server
Executing a full backup
To take a backup for the main server, issue the following command:
barman@backup$ barman backup main
As of version 1.1.0, you can serialise the backup of your managed
servers by using the all target for the server:
barman@backup$ barman backup all
This will iterate through your available servers and sequentially take
a backup for each of them.
Immediate Checkpoint
As of version 1.3.0, it is possible to use the immediate_checkpoint
configuration global/server option (set to false by default).
When issuing a backup, Barman normally waits for the checkpoint to
happen on the PostgreSQL server (depending on the configuration
settings for workload or time checkpoint control). This can delay the
start of the backup.
By setting immediate_checkpoint to true, you can force the checkpoint
on the Postgres server to happen immediately and start your backup copy
process as soon as possible.
At any time, you can override the configuration option behaviour by
issuing barman backup with either of these two options:
* --immediate-checkpoint, which forces an immediate checkpoint;
* --no-immediate-checkpoint, which forces to wait for the checkpoint
to happen.
Viewing the list of backups for a server
To list all the available backups for a given server, issue:
barman@backup$ barman list-backup main
the format of the output is as in:
main - 20120529T092136 - Wed May 30 15:20:25 2012 - Size: 5.0 TiB - WAL Size: 845.0 GiB (tablespaces: tb_name:/home/tblspace/name, tb_temp:/home/tblspace/temp)
where 20120529T092136 is the ID of the backup and Wed May 30 15:20:25
2012 is the start time of the operation, Size is the size of the base
backup and WAL Size is the size of WAL files archived.
As of version 1.1.2, you can get a listing of the available backups for
all your servers, using the all target for the server:
barman@backup$ barman list-backup all
Restoring a whole server
To restore a whole server issue the following command:
barman@backup$ barman recover main 20110920T185953 /path/to/recover/directory
where 20110920T185953 is the ID of the backup to be restored. When this
command completes successfully, /path/to/recover/directory contains a
complete data directory ready to be started as a PostgreSQL database
server.
Here is an example of a command that starts the server:
barman@backup$ pg_ctl -D /path/to/recover/directory start
Important
If you run this command as user barman, it will become the database
superuser.
You can retrieve a list of backup IDs for a specific server with:
barman@backup$ barman list-backup srvpgsql
Important
Barman does not currently keep track of symbolic links inside PGDATA
(except for tablespaces inside pg_tblspc). We encourage system
administrators to keep track of symbolic links and to add them to the
disaster recovery plans/procedures in case they need to be restored in
their original location.
Remote recovery
Barman is able to recover a backup on a remote server through the
--remote-ssh-command COMMAND option for the recover command.
If this option is specified, barman uses COMMAND to connect to a remote
host.
Note
The postgres user is normally used to recover on a remote host.
There are some limitations when using remote recovery. It is important
to be aware that:
* Barman needs at least 4GB of free space in the system temporary
directory (usually /tmp);
* the SSH connection between Barman and the remote host must use
public key exchange authentication method;
* the remote user must be able to create the required destination
directories for PGDATA and, where applicable, tablespaces;
* there must be enough free space on the remote server to contain the
base backup and the WAL files needed for recovery.
Relocating one or more tablespaces
Important
Relocating a tablespace is currently available only with local
recovery.
Barman is able to automatically relocate one or more tablespaces using
the recover command with the --tablespace option. The option accepts a
pair of values as arguments using the NAME:DIRECTORY format:
* name/identifier of the tablespace (NAME);
* destination directory (DIRECTORY).
If the destination directory does not exist, Barman will try to create
it (assuming you have enough privileges).
Restoring to a given point in time
Barman employs PostgreSQL's Point-in-Time Recovery (PITR) by allowing
DBAs to specify a recovery target, either as a timestamp or as a
transaction ID; you can also specify whether the recovery target should
be included or not in the recovery.
The recovery target can be specified using one of three mutually
exclusive options:
* --target-time TARGET_TIME: to specify a timestamp
* --target-xid TARGET_XID: to specify a transaction ID
* --target-name TARGET_NAME: to specify a named restore point -
previously created with the pg_create_restore_point(name) function
[Only available for PostgreSQL 9.1 and above users]
You can use the --exclusive option to specify whether to stop
immediately before or immediately after the recovery target.
Barman allows you to specify a target timeline for recovery, using the
--target-tli option. The notion of timeline goes beyond the scope of this
document; you can find more details in the PostgreSQL documentation, or
in one of 2ndQuadrant's Recovery training courses.
__________________________________________________________________
WAL compression
The barman cron command (see below) will compress WAL files if the
compression option is set in the configuration file. This option allows
three values:
* gzip: for Gzip compression (requires gzip)
* bzip2: for Bzip2 compression (requires bzip2)
* custom: for custom compression, which requires you to set the
following options as well:
+ custom_compression_filter: a compression filter
+ custom_decompression_filter: a decompression filter
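For example, a custom compression setup could look like the following
fragment (a sketch assuming an xz binary is available and that the
filters read from standard input and write to standard output; xz is
not one of the documented built-in options):

```ini
; Sketch: hypothetical custom compression using xz
compression = custom
custom_compression_filter = xz -c
custom_decompression_filter = xz -c -d
```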
__________________________________________________________________
Limiting bandwidth usage
From version 1.2.1 it is possible to limit the usage of I/O bandwidth
through the bandwidth_limit option (global/per server), by specifying
the maximum number of kilobytes per second. By default it is set to 0,
meaning no limit.
In case you have several tablespaces and you prefer to limit the I/O
workload of your backup procedures on one or more tablespaces, you can
use the tablespace_bandwidth_limit option (global/per server):
tablespace_bandwidth_limit = tbname:bwlimit[, tbname:bwlimit, ...]
The option accepts a comma separated list of pairs made up of the
tablespace name and the bandwidth limit (in kilobytes per second).
When backing up a server, Barman will try and locate any existing
tablespace in the above option. If found, the specified bandwidth limit
will be enforced. If not, the default bandwidth limit for that server
will be applied.
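The option value has a simple comma-separated shape; as an illustration
(not Barman's own parsing code), it could be interpreted like this:

```python
# Parse a "tbname:bwlimit[, tbname:bwlimit, ...]" option value into a
# {tablespace_name: kilobytes_per_second} mapping. Illustrative only.
def parse_tablespace_limits(value):
    limits = {}
    for pair in value.split(","):
        pair = pair.strip()
        if not pair:
            continue
        name, _, kbps = pair.partition(":")
        limits[name.strip()] = int(kbps)
    return limits

print(parse_tablespace_limits("tb_name:2000, tb_temp:500"))
# {'tb_name': 2000, 'tb_temp': 500}
```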
__________________________________________________________________
Network Compression
From version 1.3.0 it is possible to reduce the size of transferred
data using compression. It can be enabled using the network_compression
option (global/per server):
network_compression = true|false
Setting this option to true will enable data compression during network
transfers (for both backup and recovery). By default it is set to
false.
__________________________________________________________________
Backup ID shortcuts
As of version 1.1.2, you can use any of the following shortcuts to
identify a particular backup for a given server:
* latest: the latest available backup for that server, in
chronological order. You can also use the last synonym.
* oldest: the oldest available backup for that server, in
chronological order. You can also use the first synonym.
These aliases can be used with any of the following commands:
show-backup, delete, list-files and recover.
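Backup IDs are timestamps in YYYYMMDDTHHMMSS format, so chronological
order matches lexical order; here is a minimal sketch (not Barman's
implementation) of how these shortcuts could be resolved:

```python
# Resolve 'latest'/'last' and 'oldest'/'first' shortcuts against a
# catalogue of backup IDs (format: YYYYMMDDTHHMMSS). Illustrative only.
def resolve_backup_id(shortcut, backup_ids):
    ordered = sorted(backup_ids)  # lexical == chronological here
    if shortcut in ("latest", "last"):
        return ordered[-1]
    if shortcut in ("oldest", "first"):
        return ordered[0]
    return shortcut  # assume it is already a backup ID

catalogue = ["20120529T092136", "20110920T185953", "20130101T000000"]
print(resolve_backup_id("latest", catalogue))   # 20130101T000000
print(resolve_backup_id("oldest", catalogue))   # 20110920T185953
```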
__________________________________________________________________
Minimum redundancy safety
From version 1.2.0, you can define the minimum number of periodical
backups for a PostgreSQL server.
You can use the global/per server configuration option called
minimum_redundancy for this purpose, by default set to 0.
By setting this value to any number greater than 0, Barman makes sure
that at any time you will have at least that number of backups in a
server catalogue.
This will protect you from accidental barman delete operations.
Important
Make sure that your policy retention settings do not collide with
minimum redundancy requirements. Regularly check Barman's log for
messages on this topic.
__________________________________________________________________
Retention policies
From version 1.2.0, Barman supports retention policies for backups.
A backup retention policy is a user-defined policy that determines how
long backups and related archive logs (Write Ahead Log segments) need
to be retained for recovery procedures.
Based on the user's request, Barman retains the periodical backups
required to satisfy the current retention policy, and any archived WAL
files required for the complete recovery of those backups.
Barman users can define a retention policy in terms of backup
redundancy (how many periodical backups) or a recovery window (how
long).
Retention policy based on redundancy
A redundancy-based retention policy determines how many
periodical backups to keep. It is contrasted with a retention
policy that uses a recovery window.
Retention policy based on recovery window
A recovery window is one type of Barman backup retention policy,
in which the DBA specifies a period of time and Barman ensures
retention of backups and/or archived WAL files required for
point-in-time recovery to any time during the recovery window.
The interval always ends with the current time and extends back
in time for the number of days specified by the user. For
example, if the retention policy is set for a recovery window of
seven days, and the current time is 9:30 AM on Friday, Barman
retains the backups required to allow point-in-time recovery
back to 9:30 AM on the previous Friday.
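The recovery window rule can be sketched numerically (an illustration,
not Barman's implementation): the point of recoverability is the
current time minus the window, and the first valid backup is the newest
backup taken at or before that point:

```python
# Sketch: pick the first valid backup under a recovery-window policy.
# Backups older than the first valid one are obsolete. Illustrative only.
from datetime import datetime, timedelta

def first_valid_backup(backup_times, window_days, now):
    point_of_recoverability = now - timedelta(days=window_days)
    candidates = [t for t in backup_times if t <= point_of_recoverability]
    return max(candidates) if candidates else None

# Friday 9:30 AM, with weekly backups, mirroring the example above.
now = datetime(2014, 1, 31, 9, 30)
backups = [datetime(2014, 1, d, 9, 30) for d in (10, 17, 24, 31)]
print(first_valid_backup(backups, 7, now))   # 2014-01-24 09:30:00
```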
Scope
Retention policies can be defined for:
* PostgreSQL periodical base backups: through the retention_policy
configuration option;
* Archive logs, for Point-In-Time-Recovery: through the
wal_retention_policy configuration option.
Important
In a temporal dimension, archive logs must be included in the time
window of periodical backups.
There are two typical use cases here: full or partial point-in-time
recovery.
Full point in time recovery scenario
Base backups and archive logs share the same retention policy,
allowing DBAs to recover at any point in time from the first
available backup.
Partial point in time recovery scenario
Base backup retention policy is wider than that of archive logs,
allowing users for example to keep full weekly backups of the
last 6 months, but archive logs for the last 4 weeks (granting
to recover at any point in time starting from the last 4
periodical weekly backups).
Important
Currently, Barman implements only the full point in time recovery
scenario, by constraining the wal_retention_policy option to main.
How they work
Retention policies in Barman can be:
* automated: enforced by barman cron;
* manual: Barman simply reports obsolete backups and allows DBAs to
delete them.
Important
Currently Barman does not implement manual enforcement. This feature
will be available in future versions.
Configuration and syntax
Retention policies can be defined through the following configuration
options:
* retention_policy: for base backup retention;
* wal_retention_policy: for archive logs retention;
* retention_policy_mode: can only be set to auto (retention policies
are automatically enforced by the barman cron command).
These configuration options can be defined both at a global level and a
server level, allowing users maximum flexibility on a multi-server
environment.
Syntax for retention_policy
The general syntax for a base backup retention policy through
retention_policy is the following:
retention_policy = {REDUNDANCY value | RECOVERY WINDOW OF value {DAYS | WEEKS | MONTHS}}
Where:
* syntax is case insensitive;
* value is an integer and is > 0;
* in case of redundancy retention policy:
+ value must be greater than or equal to the server minimum
redundancy level (if not, it is assigned to that value and a
warning is generated);
+ the first valid backup is the value-th backup in a reverse
ordered time series;
* in case of recovery window policy:
+ the point of recoverability is: current time - window;
+ the first valid backup is the first available backup before
the point of recoverability; its value in a reverse ordered
time series must be greater than or equal to the server
minimum redundancy level (if not is is assigned to that value
and a warning is generated).
By default, retention_policy is empty (no retention enforced).
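As an illustration of the redundancy rules above (not Barman's
implementation), enforcing the clamp against minimum_redundancy and
keeping the value most recent backups could look like:

```python
# Sketch: apply a REDUNDANCY retention policy. The value is clamped to
# minimum_redundancy (with a warning), and the first valid backup is
# the value-th backup in a reverse ordered series. Illustrative only.
import warnings

def redundancy_keep(backup_ids, value, minimum_redundancy=0):
    if value < minimum_redundancy:
        warnings.warn("retention value below minimum_redundancy; clamping")
        value = minimum_redundancy
    newest_first = sorted(backup_ids, reverse=True)
    return newest_first[:value]

kept = redundancy_keep(
    ["20140101T000000", "20140108T000000", "20140115T000000"], 2)
print(kept)   # ['20140115T000000', '20140108T000000']
```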
Syntax for wal_retention_policy
Currently, the only allowed value for wal_retention_policy is the
special value main, that maps the retention policy of archive logs to
that of base backups.
__________________________________________________________________
Available commands
Barman commands are applied to three different levels:
* general commands, which apply to the backup catalogue
* server commands, which apply to a specific server (list available
backups, execute a backup, etc.)
* backup commands, which apply to a specific backup in the catalogue
(display information, issue a recovery, delete the backup, etc.)
In the following sections the available commands will be described in
detail.
General commands
List available servers
You can display the list of active servers that have been configured
for your backup system with:
barman list-server
Maintenance mode
You can perform maintenance operations, like compressing WAL files and
moving them from the incoming directory to the archived one, with:
barman cron
This command enforces retention policies on those servers that have:
* retention_policy not empty and valid;
* retention_policy_mode set to auto.
Note
This command should be executed in a cron script.
Server commands
Show the configuration for a given server
You can show the configuration parameters for a given server with:
barman show-server
Take a base backup
You can perform a full backup (base backup) for a given server with:
barman backup [--immediate-checkpoint]
Tip
You can use barman backup all to sequentially backup all your
configured servers.
Show available backups for a server
You can list the catalogue of available backups for a given server
with:
barman list-backup
Diagnostics check
You can check if the connection to a given server is properly working
with:
barman check
Tip
You can use barman check all to check all your configured servers.
Rebuild the WAL archive
At any time, you can regenerate the content of the WAL archive for a
specific server (or every server, using the all shortcut). The WAL
archive is contained in the xlog.db file, and every Barman server has
its own copy. From version 1.2.4 you can now rebuild the xlog.db file
with the rebuild-xlogdb command. This will scan all the archived WAL
files and regenerate the metadata for the archive.
Important
Users of Barman < 1.2.3 might have suffered from a bug due to bad
locking in highly concurrent environments. You can now regenerate the
WAL archive using the rebuild-xlogdb command.
barman rebuild-xlogdb
Backup commands
Note
Remember: a backup ID can be retrieved with barman list-backup <server_name>
Show backup information
You can show all the available information for a particular backup of a
given server with:
barman show-backup
From version 1.1.2, in order to show the latest backup, you can issue:
barman show-backup latest
Delete a backup
You can delete a given backup with:
barman delete
From version 1.1.2, in order to delete the oldest backup, you can
issue:
barman delete oldest
Warning
Until retention policies are natively supported, you must use the
oldest shortcut with extreme care and caution. Iteratively executing
this command can easily wipe out your backup archive.
List backup files
You can list the files (base backup and required WAL files) for a given
backup with:
barman list-files [--target TARGET_TYPE]
With the --target TARGET_TYPE option, it is possible to choose the
content of the list for a given backup.
Possible values for TARGET_TYPE are:
* data: lists just the data files;
* standalone: lists the base backup files, including required WAL
files;
* wal: lists all WAL files from the beginning of the base backup to
the start of the following one (or until the end of the log);
* full: same as data + wal.
The default value for TARGET_TYPE is standalone.
Important
The list-files command facilitates interaction with external tools, and
therefore can be extremely useful to integrate Barman into your
archiving procedures.
__________________________________________________________________
Hook scripts
Barman allows a database administrator to run hook scripts on these two
events:
* before and after a backup
* before and after a WAL file is archived
Important
No check is performed on the exit code of a script. The result will be
simply written in the log file.
Backup scripts
Version 1.1.0 introduced backup scripts.
These scripts can be configured with the following global configuration
options (which can be overridden on a per server basis):
* pre_backup_script: hook script launched before a base backup
* post_backup_script: hook script launched after a base backup
The script definition is passed to a shell and can return any exit
code.
The shell environment will contain the following variables:
* BARMAN_BACKUP_DIR: backup destination directory
* BARMAN_BACKUP_ID: ID of the backup
* BARMAN_CONFIGURATION: configuration file used by barman
* BARMAN_ERROR: error message, if any (only for the post phase)
* BARMAN_PHASE: phase of the script, either pre or post
* BARMAN_PREVIOUS_ID: ID of the previous backup (if present)
* BARMAN_SERVER: name of the server
* BARMAN_STATUS: status of the backup
* BARMAN_VERSION: version of Barman (from 1.2.1)
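A hypothetical post_backup_script using the variables above (the script
itself, its output format and any file paths are assumptions, not part
of Barman):

```python
#!/usr/bin/env python
# Hypothetical post-backup hook: build a one-line summary from the
# environment variables Barman exports to hook scripts.
import os

def summary(env):
    line = "backup %s of server %s finished with status %s" % (
        env.get("BARMAN_BACKUP_ID", "?"),
        env.get("BARMAN_SERVER", "?"),
        env.get("BARMAN_STATUS", "?"),
    )
    if env.get("BARMAN_ERROR"):
        line += " (error: %s)" % env["BARMAN_ERROR"]
    return line

if __name__ == "__main__":
    print(summary(os.environ))
```

Such a script would be referenced from the configuration via
post_backup_script (the path to it is up to you).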
WAL archive scripts
Version 1.3.0 introduced WAL archive hook scripts.
Similarly to backup scripts, archive scripts can be configured with
global configuration options (which can be overridden on a per server
basis):
* pre_archive_script: hook script launched before a WAL file is
archived by maintenance (usually barman cron)
* post_archive_script: hook script launched after a WAL file is
archived by maintenance
The script is executed through a shell and can return any exit code.
Archive scripts share with backup scripts some environmental variables:
* BARMAN_CONFIGURATION: configuration file used by barman
* BARMAN_ERROR: error message, if any (only for the post phase)
* BARMAN_PHASE: phase of the script, either pre or post
* BARMAN_SERVER: name of the server
The following variables are specific to archive scripts:
* BARMAN_SEGMENT: name of the WAL file
* BARMAN_FILE: full path of the WAL file
* BARMAN_SIZE: size of the WAL file
* BARMAN_TIMESTAMP: WAL file timestamp
* BARMAN_COMPRESSION: type of compression used for the WAL file
__________________________________________________________________
Support and sponsor opportunities
Barman is free software, written and maintained by 2ndQuadrant. If you
require support on using Barman, or if you need new features, please
get in touch with 2ndQuadrant. You can sponsor the development of new
features of Barman and PostgreSQL which will be made publicly available
as open source.
For further information, please visit our websites:
* Barman website: [6]http://www.pgbarman.org/
* 2ndQuadrant website: [7]http://www.2ndquadrant.com/
Useful information can be found in:
* the FAQ section of the website: [8]http://www.pgbarman.org/faq/
* the "Barman" category of 2ndQuadrant's blog:
[9]http://blog.2ndquadrant.com/tag/barman/
__________________________________________________________________
Authors
In alphabetical order:
* Gabriele Bartolini <[10]gabriele.bartolini@2ndquadrant.it> (core
team, project leader)
* Giuseppe Broccolo <[11]giuseppe.broccolo@2ndquadrant.it> (core
team, QA)
* Giulio Calacoci <[12]giulio.calacoci@2ndquadrant.it> (core team,
developer)
* Marco Nenciarini <[13]marco.nenciarini@2ndquadrant.it> (core team,
team leader)
Past contributors:
* Carlo Ascani
__________________________________________________________________
Links
* check-barman: a Nagios plugin for Barman, written by Holger Hamann
([14]https://github.com/hamann/check-barman, MIT license)
__________________________________________________________________
License and Contributions
Barman is the exclusive property of 2ndQuadrant Italia and its code is
distributed under GNU General Public License 3.
Copyright © 2011-2014 2ndQuadrant.it.
Barman has been partially funded through [15]4CaaSt, a research project
funded by the European Commission's Seventh Framework programme.
Contributions to Barman are welcome, and will be listed in the file
AUTHORS. 2ndQuadrant Italia requires that any contributions provide a
copyright assignment and a disclaimer of any work-for-hire ownership
claims from the employer of the developer. This lets us make sure that
all of the Barman distribution remains free code. Please contact
[16]info@2ndQuadrant.it for a copy of the relevant Copyright Assignment
Form.
__________________________________________________________________
Last updated 2014-01-30 16:20:35 CET
References
1. http://fedoraproject.org/wiki/EPEL
2. http://yum.postgresql.org/
3. http://apt.postgresql.org/
4. https://wiki.postgresql.org/wiki/Apt
5. http://www.postgresql.org/docs/current/static/client-authentication.html
6. http://www.pgbarman.org/
7. http://www.2ndquadrant.com/
8. http://www.pgbarman.org/faq/
9. http://blog.2ndquadrant.com/tag/barman/
10. mailto:gabriele.bartolini@2ndquadrant.it
11. mailto:giuseppe.broccolo@2ndquadrant.it
12. mailto:giulio.calacoci@2ndquadrant.it
13. mailto:marco.nenciarini@2ndquadrant.it
14. https://github.com/hamann/check-barman
15. http://4caast.morfeo-project.org/
16. mailto:info@2ndQuadrant.it
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
#!/usr/bin/env python
#
# barman - Backup and Recovery Manager for PostgreSQL
#
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.l.)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""Backup and Recovery Manager for PostgreSQL
Barman (backup and recovery manager) is an administration
tool for disaster recovery of PostgreSQL servers written in Python.
It allows to perform remote backups of multiple servers
in business critical environments and help DBAs during the recovery phase.
Barman's most wanted features include backup catalogs, retention policies,
remote recovery, archiving and compression of WAL files and backups.
Barman is written and maintained by PostgreSQL professionals 2ndQuadrant.
"""
import sys
try:
from setuptools import setup
from setuptools.command.test import test as TestCommand
class PyTest(TestCommand):
def finalize_options(self):
TestCommand.finalize_options(self)
self.test_args = ['tests']
self.test_suite = True
def run_tests(self):
#import here, cause outside the eggs aren't loaded
import pytest
errno = pytest.main(self.test_args)
sys.exit(errno)
cmdclass={'test': PyTest}
except ImportError:
from distutils.core import setup
cmdclass={}
if sys.version_info < (2, 6):
raise SystemExit('ERROR: Barman needs at least python 2.6 to work')
install_requires = ['psycopg2', 'argh >= 0.21.2', 'python-dateutil', 'argcomplete']
if sys.version_info < (2, 7):
install_requires.append('argparse')
barman = {}
with open('barman/version.py', 'r') as fversion:
exec (fversion.read(), barman)
setup(
name='barman',
version=barman['__version__'],
author='2ndQuadrant Italia (Devise.IT S.r.l.)',
author_email='info@2ndquadrant.it',
url='http://www.pgbarman.org/',
packages=['barman', ],
scripts=['bin/barman', ],
data_files=[
('share/man/man1', ['doc/barman.1']),
('share/man/man5', ['doc/barman.5']),
],
license='GPL-3.0',
description=__doc__.split("\n")[0],
long_description="\n".join(__doc__.split("\n")[2:]),
install_requires=install_requires,
platforms=['Linux', 'Mac OS X'],
classifiers=[
'Environment :: Console',
'Development Status :: 5 - Production/Stable',
'Topic :: System :: Archiving :: Backup',
'Topic :: Database',
'Topic :: System :: Recovery Tools',
'Intended Audience :: System Administrators',
'License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)',
'Programming Language :: Python',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
],
tests_require=['pytest', 'mock', 'pytest-capturelog'],
cmdclass=cmdclass,
use_2to3=True,
)
eval "$(register-python-argcomplete barman)"
barman-1.3.0/rpm/barman.spec000644 000765 000024 00000012030 12273464571 016451 0ustar00mnenciastaff000000 000000 %global pybasever 2.6
%if 0%{?rhel} == 5
%global with_python26 1
%endif
%if 0%{?with_python26}
%global __python_ver python26
%global __python %{_bindir}/python%{pybasever}
%global __os_install_post %{__multiple_python_os_install_post}
%else
%global __python_ver python
%endif
%{!?pybasever: %define pybasever %(%{__python} -c "import sys;print(sys.version[0:3])")}
%{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
%{!?python_sitearch: %define python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(1)")}
Summary: Backup and Recovery Manager for PostgreSQL
Name: barman
Version: 1.3.0
Release: 1%{?dist}
License: GPLv3
Group: Applications/Databases
Url: http://www.pgbarman.org/
Source0: %{name}-%{version}.tar.gz
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot-%(%{__id_u} -n)
BuildArch: noarch
Vendor: 2ndQuadrant Italia (Devise.IT S.r.l.)
Requires: python-abi = %{pybasever}, %{__python_ver}-psycopg2, %{__python_ver}-argh >= 0.21.2, %{__python_ver}-argcomplete, %{__python_ver}-dateutil
Requires: /usr/sbin/useradd
%description
Barman (backup and recovery manager) is an administration
tool for disaster recovery of PostgreSQL servers written in Python.
It allows you to perform remote backups of multiple servers
in business critical environments and helps DBAs during the recovery phase.
Barman's most wanted features include backup catalogs, retention policies,
remote recovery, archiving and compression of WAL files and backups.
Barman is written and maintained by PostgreSQL professionals at 2ndQuadrant.
%prep
%setup -n barman-%{version} -q
%build
%{__python} setup.py build
cat > barman.cron << EOF
# m h dom mon dow user command
* * * * * barman [ -x %{_bindir}/barman ] && %{_bindir}/barman -q cron
EOF
cat > barman.logrotate << EOF
/var/log/barman/barman.log {
    missingok
    notifempty
    create 0600 barman barman
}
EOF
%install
%{__python} setup.py install -O1 --skip-build --root %{buildroot}
mkdir -p %{buildroot}%{_sysconfdir}/bash_completion.d
mkdir -p %{buildroot}%{_sysconfdir}/cron.d/
mkdir -p %{buildroot}%{_sysconfdir}/logrotate.d/
mkdir -p %{buildroot}/var/lib/barman
mkdir -p %{buildroot}/var/log/barman
install -pm 644 doc/barman.conf %{buildroot}%{_sysconfdir}/barman.conf
install -pm 644 scripts/barman.bash_completion %{buildroot}%{_sysconfdir}/bash_completion.d/barman
install -pm 644 barman.cron %{buildroot}%{_sysconfdir}/cron.d/barman
install -pm 644 barman.logrotate %{buildroot}%{_sysconfdir}/logrotate.d/barman
touch %{buildroot}/var/log/barman/barman.log
%clean
rm -rf %{buildroot}
%files
%defattr(-,root,root)
%doc INSTALL NEWS README
%{python_sitelib}/%{name}-%{version}-py%{pybasever}.egg-info/
%{python_sitelib}/%{name}/
%{_bindir}/%{name}
%doc %{_mandir}/man1/%{name}.1.gz
%doc %{_mandir}/man5/%{name}.5.gz
%config(noreplace) %{_sysconfdir}/bash_completion.d/
%config(noreplace) %{_sysconfdir}/%{name}.conf
%config(noreplace) %{_sysconfdir}/cron.d/%{name}
%config(noreplace) %{_sysconfdir}/logrotate.d/%{name}
%attr(700,barman,barman) %dir /var/lib/%{name}
%attr(755,barman,barman) %dir /var/log/%{name}
%attr(600,barman,barman) %ghost /var/log/%{name}/%{name}.log
%pre
groupadd -f -r barman >/dev/null 2>&1 || :
useradd -M -n -g barman -r -d /var/lib/barman -s /bin/bash \
-c "Backup and Recovery Manager for PostgreSQL" barman >/dev/null 2>&1 || :
%changelog
* Mon Feb 3 2014 - Marco Nenciarini 1.3.0-1
- New release 1.3.0
* Thu Sep 5 2013 - Marco Nenciarini 1.2.3-1
- New release 1.2.3
* Mon Jun 24 2013 - Marco Nenciarini 1.2.2-1
- New release 1.2.2
* Mon Jun 17 2013 - Marco Nenciarini 1.2.1-1
- New release 1.2.1
* Thu Jan 31 2013 - Marco Nenciarini 1.2.0-1
- New release 1.2.0
- Depend on python-argh >= 0.21.2 and python-argcomplete
* Thu Nov 29 2012 - Marco Nenciarini 1.1.2-1
- New release 1.1.2
* Tue Oct 16 2012 - Marco Nenciarini 1.1.1-1
- New release 1.1.1
* Fri Oct 12 2012 - Marco Nenciarini 1.1.0-1
- New release 1.1.0
- Some improvements from Devrim Gunduz
* Fri Jul 6 2012 - Marco Nenciarini 1.0.0-1
- Open source release
* Thu May 17 2012 - Marco Nenciarini 0.99.0-5
- Fixed exception handling and documentation
* Thu May 17 2012 - Marco Nenciarini 0.99.0-4
- Fixed documentation
* Tue May 15 2012 - Marco Nenciarini 0.99.0-3
- Fixed cron job
* Tue May 15 2012 - Marco Nenciarini 0.99.0-2
- Add cron job
* Wed May 9 2012 - Marco Nenciarini 0.99.0-1
- Update to version 0.99.0
* Tue Dec 6 2011 - Marco Nenciarini 0.3.1-1
- Initial packaging.
barman-1.3.0/rpm/rhel5/000755 000765 000024 00000000000 12273465047 015357 5ustar00mnenciastaff000000 000000 barman-1.3.0/rpm/rhel6/000755 000765 000024 00000000000 12273465047 015360 5ustar00mnenciastaff000000 000000 barman-1.3.0/rpm/rhel6/python-argcomplete.spec000644 000765 000024 00000003537 12162017273 022054 0ustar00mnenciastaff000000 000000 %{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
%{!?python_sitearch: %define python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(1)")}
Summary: Bash tab completion for argparse
Name: python-argcomplete
Version: 0.3.5
Release: 1%{?dist}
License: ASL 2.0
Group: Development/Libraries
Url: https://github.com/kislyuk/argcomplete
Source0: http://pypi.python.org/packages/source/a/argcomplete/argcomplete-%{version}.tar.gz
BuildRequires: python-devel,python-setuptools
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot-%(%{__id_u} -n)
BuildArch: noarch
Requires: python-abi = %(%{__python} -c "import sys ; print sys.version[:3]")
Requires: python-argparse
%description
Argcomplete provides easy, extensible command line tab completion of
arguments for your Python script.
It makes two assumptions:
* You're using bash as your shell
* You're using argparse to manage your command line arguments/options
Argcomplete is particularly useful if your program has lots of
options or subparsers, and if your program can dynamically suggest
completions for your argument/option values (for example, if the user
is browsing resources over the network).
%prep
%setup -n argcomplete-%{version} -q
%build
%{__python} setup.py build
%install
%{__python} setup.py install -O1 --skip-build --root $RPM_BUILD_ROOT
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
%doc README.rst
%{python_sitelib}/argcomplete-%{version}-py2.6.egg-info
%{python_sitelib}/argcomplete/
%{_bindir}/activate-global-python-argcomplete
%{_bindir}/python-argcomplete-check-easy-install-script
%{_bindir}/register-python-argcomplete
%changelog
* Thu Jan 31 2013 - Marco Nenciarini 0.3.5-1
- Initial packaging.
barman-1.3.0/rpm/rhel6/python-argh.spec000644 000765 000024 00000003332 12162017273 020464 0ustar00mnenciastaff000000 000000 %{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
%{!?python_sitearch: %define python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(1)")}
Summary: A simple argparse wrapper
Name: python-argh
Version: 0.23.0
Release: 1%{?dist}
License: LGPLv3
Group: Development/Libraries
Url: http://bitbucket.org/neithere/argh/
Source0: http://pypi.python.org/packages/source/a/argh/argh-%{version}.tar.gz
BuildRequires: python-devel, python-setuptools
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot-%(%{__id_u} -n)
BuildArch: noarch
Requires: python-abi = %(%{__python} -c "import sys ; print sys.version[:3]")
Requires: python-argparse
%description
Agrh, argparse!
===============
Did you ever say "argh" trying to remember the details of optparse or argparse
API? If yes, this package may be useful for you. It provides a very simple
wrapper for argparse with support for hierarchical commands that can be bound
to modules or classes. Argparse can do it; argh makes it easy.
%prep
%setup -n argh-%{version} -q
%build
%{__python} setup.py build
%install
%{__python} setup.py install -O1 --skip-build --root $RPM_BUILD_ROOT
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
%doc README
%{python_sitelib}/argh-%{version}-py2.6.egg-info
%{python_sitelib}/argh/
%changelog
* Thu Jan 31 2013 - Marco Nenciarini 0.23.0-1
- Update to version 0.23.0
* Wed May 9 2012 - Marco Nenciarini 0.15.0-1
- Update to version 0.15.0
* Sat Dec 4 2011 - Marco Nenciarini 0.14.2-1
- Initial packaging.
barman-1.3.0/rpm/rhel5/python-dateutil-1.4.1-remove-embedded-timezone-data.patch000644 000765 000024 00000006367 12267465344 027771 0ustar00mnenciastaff000000 000000 diff -up python-dateutil-1.4.1/dateutil/tz.py.remove-embedded-timezone-data python-dateutil-1.4.1/dateutil/tz.py
--- python-dateutil-1.4.1/dateutil/tz.py.remove-embedded-timezone-data 2008-02-27 20:45:41.000000000 -0500
+++ python-dateutil-1.4.1/dateutil/tz.py 2010-07-13 14:40:30.228122861 -0400
@@ -930,9 +930,6 @@ def gettz(name=None):
except OSError:
pass
if not tz:
- from dateutil.zoneinfo import gettz
- tz = gettz(name)
- if not tz:
for c in name:
# name must have at least one offset to be a tzstr
if c in "0123456789":
diff -up python-dateutil-1.4.1/dateutil/zoneinfo/__init__.py.remove-embedded-timezone-data python-dateutil-1.4.1/dateutil/zoneinfo/__init__.py
--- python-dateutil-1.4.1/dateutil/zoneinfo/__init__.py.remove-embedded-timezone-data 2005-12-22 13:13:50.000000000 -0500
+++ python-dateutil-1.4.1/dateutil/zoneinfo/__init__.py 2010-07-13 14:40:30.228122861 -0400
@@ -3,6 +3,10 @@ Copyright (c) 2003-2005 Gustavo Niemeye
This module offers extensions to the standard python 2.3+
datetime module.
+
+This version of the code has been modified to remove the embedded copy
+of zoneinfo-2008e.tar.gz and instead use the system data from the tzdata
+package
"""
from dateutil.tz import tzfile
from tarfile import TarFile
@@ -13,49 +17,12 @@ __license__ = "PSF License"
__all__ = ["setcachesize", "gettz", "rebuild"]
-CACHE = []
-CACHESIZE = 10
-
-class tzfile(tzfile):
- def __reduce__(self):
- return (gettz, (self._filename,))
-
-def getzoneinfofile():
- filenames = os.listdir(os.path.join(os.path.dirname(__file__)))
- filenames.sort()
- filenames.reverse()
- for entry in filenames:
- if entry.startswith("zoneinfo") and ".tar." in entry:
- return os.path.join(os.path.dirname(__file__), entry)
- return None
-
-ZONEINFOFILE = getzoneinfofile()
-
-del getzoneinfofile
-
def setcachesize(size):
- global CACHESIZE, CACHE
- CACHESIZE = size
- del CACHE[size:]
+ pass
def gettz(name):
- tzinfo = None
- if ZONEINFOFILE:
- for cachedname, tzinfo in CACHE:
- if cachedname == name:
- break
- else:
- tf = TarFile.open(ZONEINFOFILE)
- try:
- zonefile = tf.extractfile(name)
- except KeyError:
- tzinfo = None
- else:
- tzinfo = tzfile(zonefile)
- tf.close()
- CACHE.insert(0, (name, tzinfo))
- del CACHE[CACHESIZE:]
- return tzinfo
+ from dateutil.tz import gettz
+ return gettz(name)
def rebuild(filename, tag=None, format="gz"):
import tempfile, shutil
diff -up python-dateutil-1.4.1/MANIFEST.in.remove-embedded-timezone-data python-dateutil-1.4.1/MANIFEST.in
--- python-dateutil-1.4.1/MANIFEST.in.remove-embedded-timezone-data 2010-07-13 14:42:07.974118722 -0400
+++ python-dateutil-1.4.1/MANIFEST.in 2010-07-13 14:42:14.409994960 -0400
@@ -1,4 +1,4 @@
-recursive-include dateutil *.py *.tar.*
+recursive-include dateutil *.py
recursive-include sandbox *.py
include setup.py setup.cfg MANIFEST.in README LICENSE NEWS Makefile
include test.py example.py
barman-1.3.0/rpm/rhel5/python26-argcomplete.spec000644 000765 000024 00000004170 12162017273 022215 0ustar00mnenciastaff000000 000000 # Use Python 2.6
%global pybasever 2.6
%global __python_ver 26
%global __python %{_bindir}/python%{pybasever}
%global __os_install_post %{__multiple_python_os_install_post}
%{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
%{!?python_sitearch: %define python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(1)")}
Summary: Bash tab completion for argparse
Name: python%{__python_ver}-argcomplete
Version: 0.3.5
Release: 1%{?dist}
License: ASL 2.0
Group: Development/Libraries
Url: https://github.com/kislyuk/argcomplete
Source0: http://pypi.python.org/packages/source/a/argcomplete/argcomplete-%{version}.tar.gz
BuildRequires: python%{__python_ver}-devel,python%{__python_ver}-setuptools
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot-%(%{__id_u} -n)
BuildArch: noarch
Requires: python-abi = %(%{__python} -c "import sys ; print sys.version[:3]")
%if "%{__python_ver}" == "26"
Requires: python%{__python_ver}-argparse
%endif
%description
Argcomplete provides easy, extensible command line tab completion of
arguments for your Python script.
It makes two assumptions:
* You're using bash as your shell
* You're using argparse to manage your command line arguments/options
Argcomplete is particularly useful if your program has lots of
options or subparsers, and if your program can dynamically suggest
completions for your argument/option values (for example, if the user
is browsing resources over the network).
%prep
%setup -n argcomplete-%{version} -q
%build
%{__python} setup.py build
%install
%{__python} setup.py install -O1 --skip-build --root $RPM_BUILD_ROOT
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
%doc README.rst
%{python_sitelib}/argcomplete-%{version}-py%{pybasever}.egg-info
%{python_sitelib}/argcomplete/
%{_bindir}/activate-global-python-argcomplete
%{_bindir}/python-argcomplete-check-easy-install-script
%{_bindir}/register-python-argcomplete
%changelog
* Thu Jan 31 2013 - Marco Nenciarini 0.3.5-1
- Initial packaging.
barman-1.3.0/rpm/rhel5/python26-argh.spec000644 000765 000024 00000003762 12162017273 020642 0ustar00mnenciastaff000000 000000 # Use Python 2.6
%global pybasever 2.6
%global __python_ver 26
%global __python %{_bindir}/python%{pybasever}
%global __os_install_post %{__multiple_python_os_install_post}
%{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
%{!?python_sitearch: %define python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(1)")}
Summary: A simple argparse wrapper
Name: python%{__python_ver}-argh
Version: 0.23.0
Release: 1%{?dist}
License: LGPLv3
Group: Development/Libraries
Url: http://bitbucket.org/neithere/argh/
Source0: http://pypi.python.org/packages/source/a/argh/argh-%{version}.tar.gz
BuildRequires: python%{__python_ver}-devel,python%{__python_ver}-setuptools
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot-%(%{__id_u} -n)
BuildArch: noarch
Requires: python-abi = %(%{__python} -c "import sys ; print sys.version[:3]")
%if "%{__python_ver}" == "26"
Requires: python%{__python_ver}-argparse
%endif
%description
Agrh, argparse!
===============
Did you ever say "argh" trying to remember the details of optparse or argparse
API? If yes, this package may be useful for you. It provides a very simple
wrapper for argparse with support for hierarchical commands that can be bound
to modules or classes. Argparse can do it; argh makes it easy.
%prep
%setup -n argh-%{version} -q
%build
%{__python} setup.py build
%install
%{__python} setup.py install -O1 --skip-build --root $RPM_BUILD_ROOT
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
%doc README
%{python_sitelib}/argh-%{version}-py%{pybasever}.egg-info
%{python_sitelib}/argh/
%changelog
* Thu Jan 31 2013 - Marco Nenciarini 0.23.0-1
- Update to version 0.23.0
* Wed May 9 2012 - Marco Nenciarini 0.15.0-1
- Update to version 0.15.0
* Sat Dec 4 2011 - Marco Nenciarini 0.14.2-1
- Initial packaging.
barman-1.3.0/rpm/rhel5/python26-dateutil.spec000644 000765 000024 00000010144 12162017273 021524 0ustar00mnenciastaff000000 000000 # Use Python 2.6
%global pybasever 2.6
%global __python_ver 26
%global __python %{_bindir}/python%{pybasever}
%global __os_install_post %{__multiple_python_os_install_post}
%{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
%{!?python_sitearch: %define python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(1)")}
Name: python%{__python_ver}-dateutil
Version: 1.4.1
Release: 6%{?dist}
Summary: Powerful extensions to the standard datetime module
Group: Development/Languages
License: Python
URL: http://labix.org/python-dateutil
Source0: http://labix.org/download/python-dateutil/python-dateutil-%{version}.tar.gz
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
# Redirect the exposed parts of the dateutil.zoneinfo API to remove references
# to the embedded copy of zoneinfo-2008e.tar.gz and instead use the system
# data from the "tzdata" package (rhbz#559309):
Patch0: python-dateutil-1.4.1-remove-embedded-timezone-data.patch
BuildArch: noarch
BuildRequires: python%{__python_ver}-devel,python%{__python_ver}-setuptools
Requires: tzdata
%description
The dateutil module provides powerful extensions to the standard datetime
module available in Python 2.3+.
%prep
%setup -n python-dateutil-%{version} -q
# Remove embedded copy of timezone data:
%patch0 -p1
rm dateutil/zoneinfo/zoneinfo-2008e.tar.gz
# Change encoding of NEWS file to UTF-8, preserving timestamp:
iconv -f ISO-8859-1 -t utf8 NEWS > NEWS.utf8 && \
touch -r NEWS NEWS.utf8 && \
mv NEWS.utf8 NEWS
%build
%{__python} setup.py build
%install
rm -rf $RPM_BUILD_ROOT
%{__python} setup.py install -O1 --skip-build --root $RPM_BUILD_ROOT
%clean
rm -rf $RPM_BUILD_ROOT
%check
%{__python} test.py
%files
%defattr(-,root,root,-)
%doc example.py LICENSE NEWS README
%{python_sitelib}/dateutil/
%{python_sitelib}/*.egg-info
%changelog
* Tue Jul 13 2010 David Malcolm - 1.4.1-6
- remove embedded copy of timezone data, and redirect the dateutil.zoneinfo
API accordingly
Resolves: rhbz#559309
- add a %%check, running the upstream selftest suite
* Tue Jul 13 2010 David Malcolm - 1.4.1-5
- add requirement on tzdata
Resolves: rhbz#559309
- fix encoding of the NEWS file
* Mon Nov 30 2009 Dennis Gregorovic - 1.4.1-4.1
- Rebuilt for RHEL 6
* Sun Jul 26 2009 Fedora Release Engineering - 1.4.1-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_12_Mass_Rebuild
* Thu Feb 26 2009 Fedora Release Engineering - 1.4.1-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_11_Mass_Rebuild
* Fri Feb 20 2009 Jef Spaleta - 1.4.1-2
- small specfile fix
* Fri Feb 20 2009 Jef Spaleta - 1.4.1-2
- New upstream version
* Sat Nov 29 2008 Ignacio Vazquez-Abrams - 1.4-3
- Rebuild for Python 2.6
* Fri Aug 29 2008 Tom "spot" Callaway - 1.4-2
- fix license tag
* Tue Jul 01 2008 Jef Spaleta 1.4-1
- Latest upstream release
* Fri Jan 04 2008 Jef Spaleta 1.2-2
- Fix for egg-info file creation
* Thu Jun 28 2007 Orion Poplawski 1.2-1
- Update to 1.2
* Mon Dec 11 2006 Jef Spaleta 1.1-5
- Fix python-devel BR, as per discussion in maintainers-list
* Mon Dec 11 2006 Jef Spaleta 1.1-4
- Release bump for rebuild against python 2.5 in devel tree
* Wed Jul 26 2006 Orion Poplawski 1.1-3
- Add patch to fix building on x86_64
* Wed Feb 15 2006 Orion Poplawski 1.1-2
- Rebuild for gcc/glibc changes
* Thu Dec 22 2005 Orion Poplawski 1.1-1
- Update to 1.1
* Thu Jul 28 2005 Orion Poplawski 1.0-1
- Update to 1.0
* Tue Jul 05 2005 Orion Poplawski 0.9-1
- Initial Fedora Extras package
barman-1.3.0/rpm/rhel5/python26-psycopg2.spec000644 000765 000024 00000014000 12162017273 021452 0ustar00mnenciastaff000000 000000 # Use Python 2.6
%global pybasever 2.6
%global __python_ver 26
%global __python %{_bindir}/python%{pybasever}
%global __os_install_post %{__multiple_python_os_install_post}
%{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
%{!?python_sitearch: %define python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib(1)")}
%define ZPsycopgDAdir %{_localstatedir}/lib/zope/Products/ZPsycopgDA
%global pgmajorversion 90
%global pginstdir /usr/pgsql-9.0
%global sname psycopg2
Summary: A PostgreSQL database adapter for Python
Name: python26-%{sname}
Version: 2.4.5
Release: 1%{?dist}
License: LGPLv3 with exceptions
Group: Applications/Databases
Url: http://www.psycopg.org/psycopg/
Source0: http://initd.org/psycopg/tarballs/PSYCOPG-2-4/%{sname}-%{version}.tar.gz
Patch0: setup.cfg.patch
BuildRequires: python%{__python_ver}-devel postgresql%{pgmajorversion}-devel
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot-%(%{__id_u} -n)
Requires: python-abi = %(%{__python} -c "import sys ; print sys.version[:3]")
%description
psycopg is a PostgreSQL database adapter for the Python programming
language (just like pygresql and popy.) It was written from scratch
with the aim of being very small and fast, and stable as a rock. The
main advantages of psycopg are that it supports the full Python
DBAPI-2.0 and is thread safe at level 2.
%package doc
Summary: Documentation for psycopg python PostgreSQL database adapter
Group: Documentation
Requires: %{name} = %{version}-%{release}
%description doc
Documentation and example files for the psycopg python PostgreSQL
database adapter.
%package test
Summary: Tests for psycopg2
Group: Development/Libraries
Requires: %{name} = %{version}-%{release}
%description test
Tests for psycopg2.
%package zope
Summary: Zope Database Adapter ZPsycopgDA
Group: Applications/Databases
Requires: %{name} = %{version}-%{release} zope
%description zope
Zope Database Adapter for PostgreSQL, called ZPsycopgDA
%prep
%setup -q -n psycopg2-%{version}
%patch0 -p0
%build
%{__python} setup.py build
# Fix for wrong-file-end-of-line-encoding problem; upstream also must fix this.
for i in `find doc -iname "*.html"`; do sed -i 's/\r//' $i; done
for i in `find doc -iname "*.css"`; do sed -i 's/\r//' $i; done
%install
rm -Rf %{buildroot}
mkdir -p %{buildroot}%{python_sitearch}/psycopg2
%{__python} setup.py install --no-compile --root %{buildroot}
install -d %{buildroot}%{ZPsycopgDAdir}
cp -pr ZPsycopgDA/* %{buildroot}%{ZPsycopgDAdir}
%clean
rm -rf %{buildroot}
%files
%defattr(-,root,root)
%doc AUTHORS ChangeLog INSTALL LICENSE README
%dir %{python_sitearch}/psycopg2
%{python_sitearch}/psycopg2/*.py
%{python_sitearch}/psycopg2/*.pyc
%{python_sitearch}/psycopg2/*.so
%{python_sitearch}/psycopg2/*.pyo
%{python_sitearch}/psycopg2-*.egg-info
%files doc
%defattr(-,root,root)
%doc doc examples/
%files test
%defattr(-,root,root)
%{python_sitearch}/%{sname}/tests/*
%files zope
%defattr(-,root,root)
%dir %{ZPsycopgDAdir}
%{ZPsycopgDAdir}/*.py
%{ZPsycopgDAdir}/*.pyo
%{ZPsycopgDAdir}/*.pyc
%{ZPsycopgDAdir}/dtml/*
%{ZPsycopgDAdir}/icons/*
%changelog
* Wed May 9 2012 - Marco Nenciarini 2.4.5-1
- Update to version 2.4.5
* Mon Aug 22 2011 Devrim GUNDUZ 2.4.2-1
- Update to 2.4.2
- Add a patch for pg_config path.
- Add new subpackage: test
* Tue Mar 16 2010 Devrim GUNDUZ 2.0.14-1
- Update to 2.0.14
* Mon Oct 19 2009 Devrim GUNDUZ 2.0.13-1
- Update to 2.0.13
* Mon Sep 7 2009 Devrim GUNDUZ 2.0.12-1
- Update to 2.0.12
* Tue May 26 2009 Devrim GUNDUZ 2.0.11-1
- Update to 2.0.11
* Fri Apr 24 2009 Devrim GUNDUZ 2.0.10-1
- Update to 2.0.10
* Thu Mar 2 2009 Devrim GUNDUZ 2.0.9-1
- Update to 2.0.9
* Wed Apr 30 2008 - Devrim GUNDUZ 2.0.7-1
- Update to 2.0.7
* Fri Jun 15 2007 - Devrim GUNDUZ 2.0.6-1
- Update to 2.0.6
* Sun May 06 2007 Thorsten Leemhuis
- rebuilt for RHEL5 final
* Wed Dec 6 2006 - Devrim GUNDUZ 2.0.5.1-4
- Rebuilt for PostgreSQL 8.2.0
* Mon Sep 11 2006 - Devrim GUNDUZ 2.0.5.1-3
- Rebuilt
* Wed Sep 6 2006 - Devrim GUNDUZ 2.0.5.1-2
- Remove ghost'ing, per Python Packaging Guidelines
* Mon Sep 4 2006 - Devrim GUNDUZ 2.0.5.1-1
- Update to 2.0.5.1
* Sun Aug 6 2006 - Devrim GUNDUZ 2.0.3-3
- Fixed zope package dependencies and macro definition, per bugzilla review (#199784)
- Fixed zope package directory ownership, per bugzilla review (#199784)
- Fixed cp usage for zope subpackage, per bugzilla review (#199784)
* Mon Jul 31 2006 - Devrim GUNDUZ 2.0.3-2
- Fixed 64 bit builds
- Fixed license
- Added Zope subpackage
- Fixed typo in doc description
- Added macro for zope subpackage dir
* Mon Jul 31 2006 - Devrim GUNDUZ 2.0.3-1
- Update to 2.0.3
- Fixed spec file, per bugzilla review (#199784)
* Sat Jul 22 2006 - Devrim GUNDUZ 2.0.2-3
- Removed python dependency, per bugzilla review. (#199784)
- Changed doc package group, per bugzilla review. (#199784)
- Replaced dos2unix with sed, per guidelines and bugzilla review (#199784)
- Fix changelog dates
* Sat Jul 21 2006 - Devrim GUNDUZ 2.0.2-2
- Added dos2unix to buildrequires
- removed python related part from package name
* Fri Jul 20 2006 - Devrim GUNDUZ 2.0.2-1
- Fix rpmlint errors, including dos2unix solution
- Re-engineered spec file
* Fri Jan 23 2006 - Devrim GUNDUZ
- First 2.0.X build
* Fri Jan 23 2006 - Devrim GUNDUZ
- Update to 1.2.21
* Tue Dec 06 2005 - Devrim GUNDUZ
- Initial release for 1.1.20
barman-1.3.0/rpm/rhel5/setup.cfg.patch000644 000765 000024 00000001001 12162017273 020255 0ustar00mnenciastaff000000 000000 --- setup.cfg.old 2011-08-22 12:16:18.703486005 +0300
+++ setup.cfg 2011-08-22 12:16:31.596486005 +0300
@@ -26,7 +26,7 @@
# libraries needed to build psycopg2. If pg_config is not in the path or
# is installed under a different name uncomment the following option and
# set it to the pg_config full path.
-#pg_config=
+pg_config=/usr/pgsql-9.0/bin/pg_config
# If "pg_config" is not available, "include_dirs" can be used to locate
# postgresql headers and libraries. Some extra checks on sys.platform will
barman-1.3.0/doc/barman.1000644 000765 000024 00000024342 12273464541 015634 0ustar00mnenciastaff000000 000000 '\" t
.\" Title: barman
.\" Author: [see the "AUTHORS" section]
.\" Generator: DocBook XSL Stylesheets v1.76.1
.\" Date: 01/24/2014
.\" Manual: \ \&
.\" Source: \ \&
.\" Language: English
.\"
.TH "BARMAN" "1" "01/24/2014" "\ \&" "\ \&"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
.ad l
.\" -----------------------------------------------------------------
.\" * MAIN CONTENT STARTS HERE *
.\" -----------------------------------------------------------------
.SH "NAME"
barman \- Backup and Recovery Manager for PostgreSQL
.SH "SYNOPSIS"
.sp
\fBbarman\fR [\fIOPTIONS\fR] {COMMAND}
.SH "DESCRIPTION"
.sp
barman(1) is an administration tool for disaster recovery of PostgreSQL servers written in Python\&. barman can perform remote backups of multiple servers in business critical environments and helps DBAs during the recovery phase\&.
.SH "OPTIONS"
.PP
\fB\-v, \-\-version\fR
.RS 4
Show program version number and exit\&.
.RE
.PP
\fB\-q, \-\-quiet\fR
.RS 4
Do not output anything\&. Useful for cron scripts\&.
.RE
.PP
\fB\-h, \-\-help\fR
.RS 4
Show a help message and exit\&.
.RE
.PP
\fB\-c CONFIG, \-\-config CONFIG\fR
.RS 4
Use the specified configuration file\&.
.RE
.SH "WHERE COMMAND CAN BE:"
.sp
Important: every command has a help option
.PP
\fBcron\fR
.RS 4
Perform maintenance tasks, such as moving incoming WAL files to the appropriate directory\&.
.RE
.PP
\fBlist\-server\fR
.RS 4
Show all the configured servers, and their descriptions\&.
.RE
.PP
\fBshow\-server SERVERNAME\fR
.RS 4
Show information about
SERVERNAME, including:
conninfo,
backup_directory,
wals_directory
and many more\&. Specify
all
as
SERVERNAME
to show information about all the configured servers\&.
.RE
.PP
\fBstatus SERVERNAME\fR
.RS 4
Show information about the status of a server, including: number of available backups,
archive_command,
archive_status
and many more\&.
.sp
.if n \{\
.RS 4
.\}
.nf
Example:
Server main:
description: PostgreSQL Example Database
PostgreSQL version: 9\&.1\&.1
PostgreSQL Data directory: /var/lib/pgsql/9\&.1/data
archive_command: rsync \-a %p barman@test\-backup\-server:/srv/barman/main/incoming/%f
archive_status: last shipped WAL segment 0000000100000009000000ED
current_xlog: 0000000100000009000000EF
No\&. of available backups: 1
first/last available backup: 20120528T113358
.fi
.if n \{\
.RE
.\}
.RE
.PP
\fBcheck SERVERNAME\fR
.RS 4
Show diagnostic information about
SERVERNAME, including: ssh connection check, PostgreSQL version, configuration and backup directories\&. Specify
all
as
SERVERNAME
to show diagnostic information about all the configured servers\&.
.PP
\fB\-\-nagios\fR
.RS 4
Nagios plugin compatible output
.RE
.RE
.PP
\fBbackup SERVERNAME\fR
.RS 4
Perform a backup of
SERVERNAME
using parameters specified in the configuration file\&. Specify
all
as
SERVERNAME
to perform a backup of all the configured servers\&.
.PP
\fB\-\-immediate\-checkpoint\fR
.RS 4
Forces the initial checkpoint to be done as quickly as possible\&. Overrides the value of the parameter
immediate_checkpoint, if present in the configuration file\&.
.RE
.PP
\fB\-\-no\-immediate\-checkpoint\fR
.RS 4
Forces Barman to wait for the checkpoint\&. Overrides the value of the parameter
immediate_checkpoint, if present in the configuration file\&.
.RE
.RE
.PP
\fBlist\-backup SERVERNAME\fR
.RS 4
Show available backups for
SERVERNAME\&. This command is useful to retrieve a backup ID\&.
.RE
.sp
Example: servername 20111104T102647 \- Fri Nov 4 10:26:48 2011 \- Size: 17\&.0 MiB \- WAL Size: 100 B
.sp
.if n \{\
.RS 4
.\}
.nf
Here 20111104T102647 is the backup ID\&.
.fi
.if n \{\
.RE
.\}
.PP
\fBshow\-backup SERVERNAME BACKUPID\fR
.RS 4
Show detailed information about a particular backup, identified by the server name and the backup ID\&. See the "Backup ID shortcuts" section below for available shortcuts\&.
.sp
.if n \{\
.RS 4
.\}
.nf
Example:
Backup 20111104T102647:
Server Name : main
PostgreSQL Version: 90101
PGDATA directory : /var/lib/pgsql/9\&.1/data
.fi
.if n \{\
.RE
.\}
.sp
.if n \{\
.RS 4
.\}
.nf
Base backup information:
Disk usage : 17\&.0 MiB
Timeline : 1
Begin WAL : 000000010000000000000002
End WAL : 000000010000000000000002
WAL number : 0
Begin time : 2011\-11\-04 10:26:47\&.357260
End time : 2011\-11\-04 10:26:48\&.888903
Begin Offset : 32
End Offset : 160
Begin XLOG : 0/2000020
End XLOG : 0/20000A0
.fi
.if n \{\
.RE
.\}
.sp
.if n \{\
.RS 4
.\}
.nf
WAL information:
No of files : 0
Disk usage : 0 B
Last available : None
.fi
.if n \{\
.RE
.\}
.sp
.if n \{\
.RS 4
.\}
.nf
Catalog information:
Previous Backup : \- (this is the oldest base backup)
Next Backup : \- (this is the latest base backup)
.fi
.if n \{\
.RE
.\}
.RE
.PP
\fBlist\-files [OPTIONS] SERVERNAME BACKUPID\fR
.RS 4
List all the files in a particular backup, identified by the server name and the backup ID\&. See the "Backup ID shortcuts" section below for available shortcuts\&.
.PP
\fB\-\-target TARGET_TYPE\fR
.RS 4
Possible values for TARGET_TYPE are:
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
\fBdata\fR
\- lists just the data files;
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
\fBstandalone\fR
\- lists the base backup files, including required WAL files;
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
\fBwal\fR
\- lists all the WAL files between the start of the base backup and the end of the log / the start of the following base backup (depending on whether the specified base backup is the most recent one available);
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
\fBfull\fR
\- same as data + wal\&. The default value for TARGET_TYPE is standalone\&.
.RE
.RE
.RE
.PP
\fBrebuild\-xlogdb SERVERNAME\fR
.RS 4
Rebuild the WAL file metadata for
SERVERNAME
(or every server, using the
all
shortcut) by inspecting the content of the WAL archive on disk\&. The metadata is contained in the
xlog\&.db
file, and every Barman server has its own copy\&.
.RE
.PP
\fBrecover [OPTIONS] SERVERNAME BACKUPID DESTINATIONDIRECTORY\fR
.RS 4
Recover a backup in a given directory (local or remote, depending on the
\-\-remote\-ssh\-command
option settings)\&. See the "Backup ID shortcuts" section below for available shortcuts\&.
.PP
\fB\-\-target\-tli TARGET_TLI\fR
.RS 4
Recover the specified timeline\&.
.RE
.PP
\fB\-\-target\-time TARGET_TIME\fR
.RS 4
Recover to the specified time\&. You can use any valid unambiguous representation, e\&.g\&. "YYYY\-MM\-DD HH:MM:SS\&.mmm"\&.
.RE
.PP
\fB\-\-target\-xid TARGET_XID\fR
.RS 4
Recover to the specified transaction ID\&.
.RE
.PP
\fB\-\-target\-name TARGET_NAME\fR
.RS 4
Recover to the named restore point previously created with the
pg_create_restore_point(name)
function (only for PostgreSQL 9\&.1 and above)\&.
.RE
.PP
\fB\-\-exclusive\fR
.RS 4
Set target xid to be non\-inclusive\&.
.RE
.PP
\fB\-\-tablespace NAME:LOCATION\fR
.RS 4
Specify a tablespace relocation rule (currently not available with remote recovery)\&.
.RE
.PP
\fB\-\-remote\-ssh\-command SSH_COMMAND\fR
.RS 4
This option activates remote recovery, by specifying the secure shell command to be launched on a remote host\&. It is the equivalent of the "ssh_command" server option in the configuration file for remote recovery\&. Example:
\fIssh postgres@db2\fR\&.
.RE
.RE
.PP
\fBdelete SERVERNAME BACKUPID\fR
.RS 4
Delete the specified backup\&. See the "Backup ID shortcuts" section below for available shortcuts\&.
.RE
.SH "BACKUP ID SHORTCUTS"
.sp
Rather than using the timestamp backup ID, you can use any of the following shortcuts/aliases to identify a backup for a given server:
.PP
\fBfirst\fR
.RS 4
Oldest available backup for that server, in chronological order\&.
.RE
.PP
\fBlast\fR
.RS 4
Latest available backup for that server, in chronological order\&.
.RE
.PP
\fBlatest\fR
.RS 4
Same as
\fBlast\fR\&.
.RE
.PP
\fBoldest\fR
.RS 4
Same as
\fBfirst\fR\&.
.RE
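.sp
For example, assuming a configured server called main (as in the examples above), the shortcuts can be used anywhere a backup ID is expected:
.sp
.if n \{\
.RS 4
.\}
.nf
barman show\-backup main latest
barman delete main oldest
.fi
.if n \{\
.RE
.\}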
.SH "EXIT STATUS"
.PP
\fB0\fR
.RS 4
Success
.RE
.PP
\fBNon\-zero\fR
.RS 4
Failure
.RE
.SH "BUGS"
.sp
Barman has been extensively tested and is currently used in several production installations\&. Due to the critical nature of backup operations, where data security is paramount, all reported bugs were fixed prior to the open source release; there are no known bugs at present\&. Any bug can be reported via the Sourceforge bug tracker\&.
.SH "AUTHORS"
.sp
In alphabetical order:
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Gabriele Bartolini
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Giuseppe Broccolo (core team, QA)
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Giulio Calacoci (core team, developer)
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Marco Nenciarini
.RE
.sp
Past contributors:
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Carlo Ascani
.RE
.SH "RESOURCES"
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Homepage:
http://www\&.pgbarman\&.org/
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Documentation:
http://docs\&.pgbarman\&.org/
.RE
.SH "COPYING"
.sp
Barman is the exclusive property of 2ndQuadrant Italia and its code is distributed under GNU General Public License v3\&.
.sp
Copyright \(co 2011\-2014 2ndQuadrant Italia (Devise\&.IT S\&.r\&.l\&.) \- http://www\&.2ndQuadrant\&.it/\&.
barman-1.3.0/doc/barman.5000644 000765 000024 00000021643 12273464542 015642 0ustar00mnenciastaff000000 000000 '\" t
.\" Title: barman
.\" Author: [see the "AUTHORS" section]
.\" Generator: DocBook XSL Stylesheets v1.78.1
.\" Date: 01/29/2014
.\" Manual: \ \&
.\" Source: \ \&
.\" Language: English
.\"
.TH "BARMAN" "5" "01/29/2014" "\ \&" "\ \&"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
.ad l
.\" -----------------------------------------------------------------
.\" * MAIN CONTENT STARTS HERE *
.\" -----------------------------------------------------------------
.SH "NAME"
barman \- backup and recovery manager for PostgreSQL
.SH "CONFIGURATION FILE LOCATIONS"
.sp
The system\-level Barman configuration file is located at
.sp
.if n \{\
.RS 4
.\}
.nf
/etc/barman\&.conf
.fi
.if n \{\
.RE
.\}
.sp
or
.sp
.if n \{\
.RS 4
.\}
.nf
/etc/barman/barman\&.conf
.fi
.if n \{\
.RE
.\}
.sp
and is overridden on a per\-user level by
.sp
.if n \{\
.RS 4
.\}
.nf
$HOME/\&.barman\&.conf
.fi
.if n \{\
.RE
.\}
.SH "CONFIGURATION FILE SYNTAX"
.sp
The Barman configuration file is a plain INI file\&. There is a general section called [barman] and a section [servername] for each server you want to back up\&. Rows starting with ; are comments\&.
.SH "CONFIGURATION FILE DIRECTORY"
.sp
Barman supports the inclusion of multiple configuration files, through the configuration_files_directory option\&. Included files must contain only server specifications, not global configurations\&. If the value of configuration_files_directory is a directory, Barman reads all files with a \&.conf extension that exist in that folder\&. For example, if you set it to /etc/barman\&.d, you can specify your PostgreSQL servers by placing each section in a separate \&.conf file inside the /etc/barman\&.d folder\&.
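.sp
For instance (an illustrative layout, using the same server settings as the example configuration below), the main server section could live in its own file:
.sp
.if n \{\
.RS 4
.\}
.nf
; /etc/barman\&.d/main\&.conf
[main]
description = "Main PostgreSQL Database"
ssh_command = ssh postgres@pg
conninfo = host=pg user=postgres
.fi
.if n \{\
.RE
.\}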
.SH "OPTIONS"
.PP
\fBactive\fR
.RS 4
Marks the server as active\&. Currently ignored\&. Server\&.
.RE
.PP
\fBdescription\fR
.RS 4
A human readable description of a server\&. Server\&.
.RE
.PP
\fBssh_command\fR
.RS 4
Command used by Barman to login to the Postgres server via ssh\&. Server\&.
.RE
.PP
\fBconninfo\fR
.RS 4
Connection string used by Barman to connect to the Postgres server\&. Server\&.
.RE
.PP
\fBbasebackups_directory\fR
.RS 4
Directory where base backups will be placed\&. Global/Server\&.
.RE
.PP
\fBwals_directory\fR
.RS 4
Directory which contains WAL files\&. Global/Server\&.
.RE
.PP
\fBincoming_wals_directory\fR
.RS 4
Directory where incoming WAL files are archived into\&. Global/Server\&.
.RE
.PP
\fBlock_file\fR
.RS 4
Lock file for a backup in progress\&. Global/Server\&.
.RE
.PP
\fBlog_file\fR
.RS 4
Location of Barman\(cqs log file\&. Global\&.
.RE
.PP
\fBlog_level\fR
.RS 4
Level of logging (DEBUG, INFO, WARNING, ERROR, CRITICAL)\&. Global\&.
.RE
.PP
\fBcustom_compression_filter\fR
.RS 4
Compression algorithm applied to WAL files\&. Global/Server\&.
.RE
.PP
\fBcustom_decompression_filter\fR
.RS 4
Decompression algorithm applied to compressed WAL files; this must match the compression algorithm\&. Global/Server\&.
.RE
.PP
\fBpre_backup_script\fR
.RS 4
Hook script launched before a base backup\&. Global/Server\&.
.RE
.PP
\fBpost_backup_script\fR
.RS 4
Hook script launched after a base backup\&. Global/Server\&.
.RE
.PP
\fBpre_archive_script\fR
.RS 4
Hook script launched before a WAL file is archived\&. Global/Server\&.
.RE
.PP
\fBpost_archive_script\fR
.RS 4
Hook script launched after a WAL file is archived\&. Global/Server\&.
.RE
.PP
\fBminimum_redundancy\fR
.RS 4
Minimum number of backups to be retained\&. Default 0\&. Global/Server\&.
.RE
.PP
\fBretention_policy\fR
.RS 4
Policy for retention of periodical backups and archive logs\&. If left empty, retention policies are not enforced\&. For redundancy based retention policy use "REDUNDANCY i" (where i is an integer > 0 and defines the number of backups to retain)\&. For recovery window retention policy use "RECOVERY WINDOW OF i DAYS" or "RECOVERY WINDOW OF i WEEKS" or "RECOVERY WINDOW OF i MONTHS" where i is a positive integer representing, specifically, the number of days, weeks or months to retain your backups\&. For more detailed information, refer to the official documentation\&. Default value is empty\&. Global/Server\&.
.RE
.PP
\fBwal_retention_policy\fR
.RS 4
Policy for retention of archive logs (WAL files)\&. Currently only "MAIN" is available\&. Global/Server\&.
.RE
.PP
\fBretention_policy_mode\fR
.RS 4
Currently only "auto" is implemented\&. Global/Server\&.
.RE
.PP
\fBbandwidth_limit\fR
.RS 4
This option allows you to specify a maximum transfer rate in kilobytes per second\&. A value of zero specifies no limit (default)\&. Global/Server\&.
.RE
.PP
\fBtablespace_bandwidth_limit\fR
.RS 4
This option allows you to specify a maximum transfer rate in kilobytes per second for individual tablespaces, as a comma separated list of TBNAME:BWLIMIT pairs\&. A value of zero specifies no limit (default)\&. Global/Server\&.
.RE
.PP
\fBimmediate_checkpoint\fR
.RS 4
This option allows you to control the way PostgreSQL handles the checkpoint at the start of the backup\&. If set to
false
(default), Postgres will wait for a checkpoint to happen before allowing the start of the backup\&. If set to
true, an immediate checkpoint is requested\&.
.RE
.PP
\fBnetwork_compression\fR
.RS 4
This option allows you to enable data compression for network transfers\&. If set to
false
(default), no compression is used\&. If set to
true, compression is enabled, reducing network usage\&.
.RE
.SH "HOOK SCRIPTS"
.sp
The script definition is passed to a shell and can return any exit code\&.
.sp
The shell environment will contain the following variables:
.PP
BARMAN_CONFIGURATION
.RS 4
configuration file used by barman
.RE
.PP
BARMAN_ERROR
.RS 4
error message, if any (only for the
\fIpost\fR
phase)
.RE
.PP
BARMAN_PHASE
.RS 4
\fIpre\fR
or
\fIpost\fR
.RE
.PP
BARMAN_SERVER
.RS 4
name of the server
.RE
.sp
Backup scripts specific variables:
.PP
BARMAN_BACKUP_DIR
.RS 4
backup destination directory
.RE
.PP
BARMAN_BACKUP_ID
.RS 4
ID of the backup
.RE
.PP
BARMAN_PREVIOUS_ID
.RS 4
ID of the previous backup (if present)
.RE
.PP
BARMAN_STATUS
.RS 4
status of the backup
.RE
.PP
BARMAN_VERSION
.RS 4
version of Barman
.RE
.sp
Archive scripts specific variables:
.PP
BARMAN_SEGMENT
.RS 4
name of the WAL file
.RE
.PP
BARMAN_FILE
.RS 4
full path of the WAL file
.RE
.PP
BARMAN_SIZE
.RS 4
size of the WAL file
.RE
.PP
BARMAN_TIMESTAMP
.RS 4
WAL file timestamp
.RE
.PP
BARMAN_COMPRESSION
.RS 4
type of compression used for the WAL file
.RE
.sp
No check is performed on the exit code of the script\&. The result is simply written to the log file\&.
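.sp
As an illustration (a sketch only: the script path and logging destination are hypothetical, not shipped with Barman), a post\-backup hook could use these variables to log the outcome of each base backup:
.sp
.if n \{\
.RS 4
.\}
.nf
#!/bin/sh
# hypothetical example; configured in barman\&.conf as:
#   post_backup_script = /usr/local/bin/barman_log_backup\&.sh
if [ "$BARMAN_STATUS" = "DONE" ]; then
    echo "$(date): backup $BARMAN_BACKUP_ID of $BARMAN_SERVER completed"
else
    echo "$(date): backup $BARMAN_BACKUP_ID of $BARMAN_SERVER failed: $BARMAN_ERROR"
fi >> /var/log/barman/hooks\&.log
.fi
.if n \{\
.RE
.\}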
.SH "EXAMPLE"
.sp
Example of the configuration file:
.sp
.if n \{\
.RS 4
.\}
.nf
[barman]
; Main directory
barman_home = /var/lib/barman
; System user
barman_user = barman
; Log location
log_file = /var/log/barman/barman\&.log
; Default compression level
;compression = gzip
; \*(Aqmain\*(Aq PostgreSQL Server configuration
[main]
; Human readable description
description = "Main PostgreSQL Database"
; SSH options
ssh_command = ssh postgres@pg
; PostgreSQL connection string
conninfo = host=pg user=postgres
; Minimum number of required backups (redundancy)
minimum_redundancy = 1
; Retention policy (based on redundancy)
retention_policy = REDUNDANCY 2
.fi
.if n \{\
.RE
.\}
.SH "AUTHORS"
.sp
In alphabetical order:
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Gabriele Bartolini
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Giuseppe Broccolo (core team, QA)
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Giulio Calacoci (core team, developer)
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Marco Nenciarini
.RE
.sp
Past contributors:
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Carlo Ascani
.RE
.SH "RESOURCES"
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Homepage:
http://www\&.pgbarman\&.org/
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
Documentation:
http://docs\&.pgbarman\&.org/
.RE
.SH "COPYING"
.sp
Barman is the exclusive property of 2ndQuadrant Italia and its code is distributed under GNU General Public License v3\&.
.sp
Copyright \(co 2011\-2014 2ndQuadrant Italia (Devise\&.IT S\&.r\&.l\&.) \- http://www\&.2ndQuadrant\&.it/\&.
barman-1.3.0/doc/barman.conf000644 000765 000024 00000003517 12273464542 016423 0ustar00mnenciastaff000000 000000 ; Barman, Backup and Recovery Manager for PostgreSQL
; http://www.pgbarman.org/ - http://www.2ndQuadrant.com/
;
; Main configuration file
[barman]
; Main directory
barman_home = /var/lib/barman
; System user
barman_user = barman
; Log location
log_file = /var/log/barman/barman.log
; Default compression level: possible values are None (default), bzip2, gzip or custom
;compression = gzip
; Pre/post backup hook scripts
;pre_backup_script = env | grep ^BARMAN
;post_backup_script = env | grep ^BARMAN
; Pre/post archive hook scripts
;pre_archive_script = env | grep ^BARMAN
;post_archive_script = env | grep ^BARMAN
; Directory of configuration files. Place your sections in separate files with .conf extension
; For example place the 'main' server section in /etc/barman.d/main.conf
;configuration_files_directory = /etc/barman.d
; Minimum number of required backups (redundancy)
;minimum_redundancy = 0
; Global retention policy (REDUNDANCY or RECOVERY WINDOW) - default empty
;retention_policy =
; Global bandwidth limit in KBPS - default 0 (meaning no limit)
;bandwidth_limit = 4000
; Immediate checkpoint for backup command
;immediate_checkpoint = false
; Enable network compression for data transfers
;network_compression = false
;; ; 'main' PostgreSQL Server configuration
;; [main]
;; ; Human readable description
;; description = "Main PostgreSQL Database"
;;
;; ; SSH options
;; ssh_command = ssh postgres@pg
;;
;; ; PostgreSQL connection string
;; conninfo = host=pg user=postgres
;;
;; ; Minimum number of required backups (redundancy)
;; ; minimum_redundancy = 1
;;
;; ; Examples of retention policies
;;
;; ; Retention policy (disabled)
;; ; retention_policy =
;; ; Retention policy (based on redundancy)
;; ; retention_policy = REDUNDANCY 2
;; ; Retention policy (based on recovery window)
;; ; retention_policy = RECOVERY WINDOW OF 4 WEEKS
barman-1.3.0/bin/barman000755 000765 000024 00000001545 12273464541 015503 0ustar00mnenciastaff000000 000000 #!/usr/bin/env python
#
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see .
#
# PYTHON_ARGCOMPLETE_OK
from barman.cli import main
if __name__ == '__main__':
main()
else:
raise NotImplementedError
barman-1.3.0/barman.egg-info/dependency_links.txt000644 000765 000024 00000000001 12273465046 022541 0ustar00mnenciastaff000000 000000
barman-1.3.0/barman.egg-info/PKG-INFO000644 000765 000024 00000002612 12273465046 017571 0ustar00mnenciastaff000000 000000 Metadata-Version: 1.0
Name: barman
Version: 1.3.0
Summary: Backup and Recovery Manager for PostgreSQL
Home-page: http://www.pgbarman.org/
Author: 2ndQuadrant Italia (Devise.IT S.r.l.)
Author-email: info@2ndquadrant.it
License: GPL-3.0
Description: Barman (backup and recovery manager) is an administration
tool for disaster recovery of PostgreSQL servers written in Python.
It allows you to perform remote backups of multiple servers
in business critical environments and helps DBAs during the recovery phase.
Barman's most wanted features include backup catalogs, retention policies,
remote recovery, archiving and compression of WAL files and backups.
Barman is written and maintained by PostgreSQL professionals 2ndQuadrant.
Platform: Linux
Platform: Mac OS X
Classifier: Environment :: Console
Classifier: Development Status :: 5 - Production/Stable
Classifier: Topic :: System :: Archiving :: Backup
Classifier: Topic :: Database
Classifier: Topic :: System :: Recovery Tools
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
barman-1.3.0/barman.egg-info/requires.txt000644 000765 000024 00000000074 12273465046 021074 0ustar00mnenciastaff000000 000000 psycopg2
argh >= 0.21.2
python-dateutil
argcomplete
argparsebarman-1.3.0/barman.egg-info/SOURCES.txt000644 000765 000024 00000001632 12273465047 020362 0ustar00mnenciastaff000000 000000 AUTHORS
ChangeLog
INSTALL
LICENSE
MANIFEST.in
NEWS
README
setup.py
barman/__init__.py
barman/backup.py
barman/cli.py
barman/command_wrappers.py
barman/compression.py
barman/config.py
barman/fs.py
barman/hooks.py
barman/infofile.py
barman/lockfile.py
barman/output.py
barman/retention_policies.py
barman/server.py
barman/testing_helpers.py
barman/utils.py
barman/version.py
barman/xlog.py
barman.egg-info/PKG-INFO
barman.egg-info/SOURCES.txt
barman.egg-info/dependency_links.txt
barman.egg-info/requires.txt
barman.egg-info/top_level.txt
bin/barman
doc/barman.1
doc/barman.5
doc/barman.conf
rpm/barman.spec
rpm/rhel5/python-dateutil-1.4.1-remove-embedded-timezone-data.patch
rpm/rhel5/python26-argcomplete.spec
rpm/rhel5/python26-argh.spec
rpm/rhel5/python26-dateutil.spec
rpm/rhel5/python26-psycopg2.spec
rpm/rhel5/setup.cfg.patch
rpm/rhel6/python-argcomplete.spec
rpm/rhel6/python-argh.spec
scripts/barman.bash_completionbarman-1.3.0/barman.egg-info/top_level.txt000644 000765 000024 00000000007 12273465046 021222 0ustar00mnenciastaff000000 000000 barman
barman-1.3.0/barman/__init__.py000644 000765 000024 00000001506 12273464541 017113 0ustar00mnenciastaff000000 000000 # Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see .
"""
The main Barman module
"""
from __future__ import absolute_import
from .version import __version__
__config__ = None
barman-1.3.0/barman/backup.py000644 000765 000024 00000126616 12273464571 016636 0ustar00mnenciastaff000000 000000 # Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see .
''' This module represents a backup. '''
from glob import glob
import datetime
import logging
import os
import sys
import shutil
import time
import tempfile
import re
from barman.infofile import WalFileInfo, BackupInfo, UnknownBackupIdException
from barman.fs import UnixLocalCommand, UnixRemoteCommand, FsOperationFailed
import dateutil.parser
from barman import xlog, output
from barman.command_wrappers import RsyncPgData
from barman.compression import CompressionManager, CompressionIncompatibility
from barman.hooks import HookScriptRunner
_logger = logging.getLogger(__name__)
class BackupManager(object):
'''Manager of the backup archive for a server'''
DEFAULT_STATUS_FILTER = (BackupInfo.DONE,)
DANGEROUS_OPTIONS = ['data_directory', 'config_file', 'hba_file',
'ident_file', 'external_pid_file', 'ssl_cert_file',
'ssl_key_file', 'ssl_ca_file', 'ssl_crl_file',
'unix_socket_directory']
def __init__(self, server):
'''Constructor'''
self.name = "default"
self.server = server
self.config = server.config
self.available_backups = {}
self.compression_manager = CompressionManager(self.config)
# used for error messages
self.current_action = None
def get_available_backups(self, status_filter=DEFAULT_STATUS_FILTER):
'''
Get a list of available backups
:param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup list returned
'''
if not isinstance(status_filter, tuple):
status_filter = tuple(status_filter)
if status_filter not in self.available_backups:
available_backups = {}
for filename in glob("%s/*/backup.info" % self.config.basebackups_directory):
backup = BackupInfo(self.server, filename)
if backup.status not in status_filter:
continue
available_backups[backup.backup_id] = backup
self.available_backups[status_filter] = available_backups
return available_backups
else:
return self.available_backups[status_filter]
def get_previous_backup(self, backup_id, status_filter=DEFAULT_STATUS_FILTER):
'''
Get the previous backup (if any) in the catalog
:param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup returned
'''
if not isinstance(status_filter, tuple):
status_filter = tuple(status_filter)
backup = BackupInfo(self.server, backup_id=backup_id)
available_backups = self.get_available_backups(status_filter + (backup.status,))
ids = sorted(available_backups.keys())
try:
current = ids.index(backup_id)
while current > 0:
res = available_backups[ids[current - 1]]
if res.status in status_filter:
return res
current -= 1
else:
return None
except ValueError:
raise UnknownBackupIdException('Could not find backup_id %s' % backup_id)
def get_next_backup(self, backup_id, status_filter=DEFAULT_STATUS_FILTER):
'''
Get the next backup (if any) in the catalog
:param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup returned
'''
if not isinstance(status_filter, tuple):
status_filter = tuple(status_filter)
backup = BackupInfo(self.server, backup_id=backup_id)
available_backups = self.get_available_backups(status_filter + (backup.status,))
ids = sorted(available_backups.keys())
try:
current = ids.index(backup_id)
while current < (len(ids) - 1):
res = available_backups[ids[current + 1]]
if res.status in status_filter:
return res
current += 1
else:
return None
except ValueError:
raise UnknownBackupIdException('Could not find backup_id %s' % backup_id)
def get_last_backup(self, status_filter=DEFAULT_STATUS_FILTER):
'''
Get the last backup (if any) in the catalog
:param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup returned
'''
available_backups = self.get_available_backups(status_filter)
if len(available_backups) == 0:
return None
ids = sorted(available_backups.keys())
return ids[-1]
def get_first_backup(self, status_filter=DEFAULT_STATUS_FILTER):
'''
Get the first backup (if any) in the catalog
:param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup returned
'''
available_backups = self.get_available_backups(status_filter)
if len(available_backups) == 0:
return None
ids = sorted(available_backups.keys())
return ids[0]
def delete_backup(self, backup):
'''
Delete a backup
:param backup: the backup to delete
'''
available_backups = self.get_available_backups()
# Honour minimum required redundancy
if backup.status == BackupInfo.DONE and self.server.config.minimum_redundancy >= len(available_backups):
yield "Skipping delete of backup %s for server %s due to minimum redundancy requirements (%s)" % (
backup.backup_id, self.config.name, self.server.config.minimum_redundancy)
_logger.warning("Could not delete backup %s for server %s - minimum redundancy = %s, current size = %s"
% (backup.backup_id, self.config.name, self.server.config.minimum_redundancy, len(available_backups)))
return
yield "Deleting backup %s for server %s" % (backup.backup_id, self.config.name)
previous_backup = self.get_previous_backup(backup.backup_id)
next_backup = self.get_next_backup(backup.backup_id)
# remove the backup
self.delete_basebackup(backup)
if not previous_backup: # backup is the first one
yield "Delete associated WAL segments:"
remove_until = None
if next_backup:
remove_until = next_backup.begin_wal
with self.server.xlogdb() as fxlogdb:
xlogdb_new = fxlogdb.name + ".new"
with open(xlogdb_new, 'w') as fxlogdb_new:
for line in fxlogdb:
name, _, _, _ = self.server.xlogdb_parse_line(line)
if remove_until and name >= remove_until:
fxlogdb_new.write(line)
continue
else:
yield "\t%s" % name
# Delete the WAL segment
self.delete_wal(name)
shutil.move(xlogdb_new, fxlogdb.name)
yield "Done"
def backup(self, immediate_checkpoint):
"""
Performs a backup for the server
"""
_logger.debug("initialising backup information")
backup_stamp = datetime.datetime.now()
self.current_action = "starting backup"
backup_info = None
try:
backup_info = BackupInfo(
self.server,
backup_id=backup_stamp.strftime('%Y%m%dT%H%M%S'))
backup_info.save()
output.info(
"Starting backup for server %s in %s",
self.config.name,
backup_info.get_basebackup_directory())
# Run the pre-backup-script if present.
script = HookScriptRunner(self, 'backup_script', 'pre')
script.env_from_backup_info(backup_info)
script.run()
# Start the backup
self.backup_start(backup_info, immediate_checkpoint)
backup_info.set_attribute("begin_time", backup_stamp)
backup_info.save()
output.info("Backup start at xlog location: %s (%s, %08X)",
backup_info.begin_xlog,
backup_info.begin_wal,
backup_info.begin_offset)
self.current_action = "copying files"
try:
# Start the copy
output.info("Copying files.")
backup_size = self.backup_copy(backup_info)
backup_info.set_attribute("size", backup_size)
output.info("Copy done.")
except:
raise
else:
self.current_action = "issuing stop of the backup"
output.info("Asking PostgreSQL server to finalize the backup.")
finally:
self.backup_stop(backup_info)
backup_info.set_attribute("status", "DONE")
except:
if backup_info:
backup_info.set_attribute("status", "FAILED")
backup_info.set_attribute("error", "failure %s" % self.current_action)
msg = "Backup failed %s" % self.current_action
output.exception(msg)
else:
output.info("Backup end at xlog location: %s (%s, %08X)",
backup_info.end_xlog,
backup_info.end_wal,
backup_info.end_offset)
output.info("Backup completed")
finally:
if backup_info:
backup_info.save()
# Run the post-backup-script if present.
script = HookScriptRunner(self, 'backup_script', 'post')
script.env_from_backup_info(backup_info)
script.run()
output.result('backup', backup_info)
def recover(self, backup, dest, tablespaces, target_tli, target_time, target_xid, target_name, exclusive, remote_command):
'''
Performs a recovery of a backup
:param backup: the backup to recover
:param dest: the destination directory
:param tablespaces: a dictionary of tablespaces
:param target_tli: the target timeline
:param target_time: the target time
:param target_xid: the target xid
:param target_name: the target name created previously with pg_create_restore_point() function call
:param exclusive: whether the recovery is exclusive or not
:param remote_command: default None. The remote command to recover the base backup,
in case of remote backup.
'''
for line in self.cron(False):
yield line
recovery_dest = 'local'
if remote_command:
recovery_dest = 'remote'
rsync = RsyncPgData(ssh=remote_command,
bwlimit=self.config.bandwidth_limit,
network_compression=self.config.network_compression)
try:
# create a UnixRemoteCommand obj if is a remote recovery
cmd = UnixRemoteCommand(remote_command)
except FsOperationFailed:
output.error(
"Unable to connect to the target host using the command "
"'%s'" % remote_command
)
return
else:
# if is a local recovery create a UnixLocalCommand
cmd = UnixLocalCommand()
msg = "Starting %s restore for server %s using backup %s " % (recovery_dest, self.config.name, backup.backup_id)
yield msg
_logger.info(msg)
msg = "Destination directory: %s" % dest
yield msg
_logger.info(msg)
if backup.tablespaces:
try:
tblspc_dir = os.path.join(dest, 'pg_tblspc')
# check for pg_tblspc dir into recovery destination folder.
# if does not exists, create it
cmd.create_dir_if_not_exists(tblspc_dir)
except FsOperationFailed:
msg = "ERROR: unable to initialize Tablespace Recovery Operation "
_logger.critical(msg)
info = sys.exc_info()[0]
_logger.critical(info)
raise SystemExit(msg)
for name, oid, location in backup.tablespaces:
try:
# check if relocation is needed
if name in tablespaces:
location = tablespaces[name]
tblspc_file = os.path.join(tblspc_dir, str(oid))
# delete destination directory for symlink file if exists
cmd.delete_dir_if_exists(tblspc_file)
# check that destination dir for tablespace exists
cmd.check_directory_exists(location)
# create it if does not exists
cmd.create_dir_if_not_exists(location)
# check for write permission into destination directory)
cmd.check_write_permission(location)
# create symlink between tablespace and recovery folder
cmd.create_symbolic_link(location, tblspc_file)
except FsOperationFailed:
msg = "ERROR: unable to prepare '%s' tablespace (destination '%s')" % (name, location)
_logger.critical(msg)
info = sys.exc_info()[0]
_logger.critical(info)
raise SystemExit(msg)
yield "\t%s, %s, %s" % (oid, name, location)
wal_dest = os.path.join(dest, 'pg_xlog')
target_epoch = None
target_datetime = None
if target_time:
try:
target_datetime = dateutil.parser.parse(target_time)
except:
msg = "ERROR: unable to parse the target time parameter %r" % target_time
_logger.critical(msg)
raise SystemExit(msg)
target_epoch = time.mktime(target_datetime.timetuple()) + (target_datetime.microsecond / 1000000.)
if target_time or target_xid or (target_tli and target_tli != backup.timeline) or target_name:
targets = {}
if target_time:
targets['time'] = str(target_datetime)
if target_xid:
targets['xid'] = str(target_xid)
if target_tli and target_tli != backup.timeline:
targets['timeline'] = str(target_tli)
if target_name:
targets['name'] = str(target_name)
yield "Doing PITR. Recovery target %s" % \
(", ".join(["%s: %r" % (k, v) for k, v in targets.items()]))
wal_dest = os.path.join(dest, 'barman_xlog')
# Copy the base backup
msg = "Copying the base backup."
yield msg
_logger.info(msg)
self.recover_basebackup_copy(backup, dest, remote_command)
_logger.info("Base backup copied.")
# Prepare WAL segments local directory
msg = "Copying required wal segments."
_logger.info(msg)
yield msg
# Retrieve the list of required WAL segments according to recovery options
xlogs = {}
required_xlog_files = tuple(self.server.get_required_xlog_files(backup, target_tli, target_epoch))
for filename in required_xlog_files:
hashdir = xlog.hash_dir(filename)
if hashdir not in xlogs:
xlogs[hashdir] = []
xlogs[hashdir].append(filename)
# Check decompression options
compressor = self.compression_manager.get_compressor()
# Restore WAL segments
self.recover_xlog_copy(compressor, xlogs, wal_dest, remote_command)
_logger.info("WAL segments copied.")
# Generate recovery.conf file (only if needed by PITR)
if target_time or target_xid or (target_tli and target_tli != backup.timeline) or target_name:
msg = "Generating recovery.conf"
yield msg
_logger.info(msg)
if remote_command:
tempdir = tempfile.mkdtemp(prefix='barman_recovery-')
recovery = open(os.path.join(tempdir, 'recovery.conf'), 'w')
else:
recovery = open(os.path.join(dest, 'recovery.conf'), 'w')
print >> recovery, "restore_command = 'cp barman_xlog/%f %p'"
print >> recovery, "recovery_end_command = 'rm -fr barman_xlog'"
if target_time:
print >> recovery, "recovery_target_time = '%s'" % target_time
if target_tli:
print >> recovery, "recovery_target_timeline = %s" % target_tli
if target_xid:
print >> recovery, "recovery_target_xid = '%s'" % target_xid
if target_name:
print >> recovery, "recovery_target_name = '%s'" % target_name
if (target_xid or target_time) and exclusive:
print >> recovery, "recovery_target_inclusive = '%s'" % (not exclusive)
recovery.close()
if remote_command:
recovery = rsync.from_file_list(['recovery.conf'], tempdir, ':%s' % dest)
shutil.rmtree(tempdir)
_logger.info('recovery.conf generated')
else:
# avoid shipping of just recovered pg_xlog files
if remote_command:
status_dir = tempfile.mkdtemp(prefix='barman_xlog_status-')
else:
status_dir = os.path.join(wal_dest, 'archive_status')
os.makedirs(status_dir) # no need to check, it must not exist
for filename in required_xlog_files:
with open(os.path.join(status_dir, "%s.done" % filename), 'a') as f:
f.write('')
if remote_command:
retval = rsync('%s/' % status_dir, ':%s' % os.path.join(wal_dest, 'archive_status'))
if retval != 0:
msg = "WARNING: unable to populate pg_xlog/archive_status directory"
yield msg
_logger.warning(msg)
shutil.rmtree(status_dir)
# Disable dangerous setting in the target data dir
if remote_command:
tempdir = tempfile.mkdtemp(prefix='barman_recovery-')
pg_config = os.path.join(tempdir, 'postgresql.conf')
shutil.copy2(os.path.join(backup.get_basebackup_directory(), 'pgdata', 'postgresql.conf'), pg_config)
else:
pg_config = os.path.join(dest, 'postgresql.conf')
if self.pg_config_mangle(pg_config,
{'archive_command': 'false'},
"%s.origin" % pg_config):
msg = "The archive_command was set to 'false' to prevent data losses."
yield msg
_logger.info(msg)
# Find dangerous options in the configuration file (locations)
clashes = self.pg_config_detect_possible_issues(pg_config)
if remote_command:
recovery = rsync.from_file_list(['postgresql.conf', 'postgresql.conf.origin'], tempdir, ':%s' % dest)
shutil.rmtree(tempdir)
yield ""
yield "Your PostgreSQL server has been successfully prepared for recovery!"
yield ""
yield "Please review network and archive related settings in the PostgreSQL"
yield "configuration file before starting the just recovered instance."
yield ""
if clashes:
yield "WARNING: Before starting up the recovered PostgreSQL server,"
yield "please review also the settings of the following configuration"
yield "options as they might interfere with your current recovery attempt:"
yield ""
for name, value in sorted(clashes.items()):
yield " %s = %s" % (name, value)
yield ""
_logger.info("Recovery completed successfully.")
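The target-time handling above relies on `dateutil.parser` and `time.mktime()`. A minimal self-contained sketch of the same conversion, using `datetime.strptime` with the documented "YYYY-MM-DD HH:MM:SS.mmm" format in place of dateutil's more liberal parser:

```python
import time
from datetime import datetime

def parse_target_time(target_time):
    """Parse a recovery target time string into an epoch float.

    Accepts the documented "YYYY-MM-DD HH:MM:SS.mmm" form; Barman
    itself uses dateutil.parser for more liberal parsing.
    """
    target_datetime = datetime.strptime(target_time, "%Y-%m-%d %H:%M:%S.%f")
    # time.mktime() interprets the tuple as local time and discards
    # sub-second precision, so microseconds are added back explicitly.
    return (time.mktime(target_datetime.timetuple())
            + target_datetime.microsecond / 1000000.0)
```

This mirrors the `target_epoch` computation used when filtering the required WAL files for PITR.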
def cron(self, verbose):
'''
Execute maintenance operations, such as WAL archiving
and retention policy enforcement.
:param verbose: print some information
'''
found = False
compressor = self.compression_manager.get_compressor()
with self.server.xlogdb('a') as fxlogdb:
if verbose:
yield "Processing xlog segments for %s" % self.config.name
available_backups = self.get_available_backups(BackupInfo.STATUS_ALL)
for filename in sorted(glob(os.path.join(self.config.incoming_wals_directory, '*'))):
if not found and not verbose:
yield "Processing xlog segments for %s" % self.config.name
found = True
if not len(available_backups):
msg = "No base backup available. Trashing file %s" % os.path.basename(filename)
yield "\t%s" % msg
_logger.warning(msg)
os.unlink(filename)
continue
# Archive the WAL file
wal_info = self.cron_wal_archival(compressor, filename)
# Update the WAL archive information with the latest segment's entry
fxlogdb.write(wal_info.to_xlogdb_line())
_logger.info('Processed file %s', filename)
yield "\t%s" % os.path.basename(filename)
if not found and verbose:
yield "\tno file found"
# Retention policy management
if self.server.enforce_retention_policies and self.config.retention_policy_mode == 'auto':
available_backups = self.get_available_backups(BackupInfo.STATUS_ALL)
retention_status = self.config.retention_policy.report()
for bid in sorted(retention_status.iterkeys()):
if retention_status[bid] == BackupInfo.OBSOLETE:
_logger.info("Enforcing retention policy: removing backup %s for server %s" % (
bid, self.config.name))
for line in self.delete_backup(available_backups[bid]):
yield line
def delete_basebackup(self, backup):
'''
Delete the given base backup
:param backup: the backup to delete
'''
backup_dir = backup.get_basebackup_directory()
shutil.rmtree(backup_dir)
def delete_wal(self, name):
'''
Delete a WAL segment, with the given name
:param name: the name of the WAL to delete
'''
hashdir = os.path.join(self.config.wals_directory, xlog.hash_dir(name))
try:
os.unlink(os.path.join(hashdir, name))
try:
os.removedirs(hashdir)
except OSError:
pass
except OSError:
_logger.warning('Expected WAL file %s not found during delete',
name)
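Both `delete_wal()` and the recovery code group WAL segments by their hash directory. A sketch of that grouping, assuming (as in `barman.xlog`) that the hash directory is named after the first 16 hex digits of the segment name:

```python
def hash_dir(name):
    # Barman stores WAL segments under a directory named after the
    # first 16 hex digits (timeline + log id) of the segment name.
    return name[:16]

def group_by_hashdir(segments):
    """Group segment names by their containing hash directory,
    as the recovery code does before copying them."""
    xlogs = {}
    for name in segments:
        xlogs.setdefault(hash_dir(name), []).append(name)
    return xlogs
```

`hash_dir` here is a simplified stand-in for `xlog.hash_dir`, which also special-cases backup labels and history files.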
def backup_start(self, backup_info, immediate_checkpoint):
'''
Start of the backup
:param backup_info: the backup information structure
'''
self.current_action = "connecting to database (%s)" % self.config.conninfo
_logger.debug(self.current_action)
# Set the PostgreSQL data directory
self.current_action = "detecting data directory"
_logger.debug(self.current_action)
data_directory = self.server.get_pg_setting('data_directory')
backup_info.set_attribute('pgdata', data_directory)
# Set server version
backup_info.set_attribute('version', self.server.server_version)
# Set configuration files location
cf = self.server.get_pg_configuration_files()
if cf:
for key in sorted(cf.keys()):
backup_info.set_attribute(key, cf[key])
# Get tablespaces information
self.current_action = "detecting tablespaces"
_logger.debug(self.current_action)
tablespaces = self.server.get_pg_tablespaces()
if tablespaces and len(tablespaces) > 0:
backup_info.set_attribute("tablespaces", tablespaces)
for oid, name, location in tablespaces:
msg = "\t%s, %s, %s" % (oid, name, location)
_logger.info(msg)
# Issue pg_start_backup on the PostgreSQL server
self.current_action = "issuing pg_start_backup command"
_logger.debug(self.current_action)
backup_label = ("Barman backup %s %s"
% (backup_info.server_name, backup_info.backup_id))
start_row = self.server.pg_start_backup(backup_label,
immediate_checkpoint)
if start_row:
start_xlog, start_file_name, start_file_offset = start_row
backup_info.set_attribute("status", "STARTED")
backup_info.set_attribute("timeline", int(start_file_name[0:8], 16))
backup_info.set_attribute("begin_xlog", start_xlog)
backup_info.set_attribute("begin_wal", start_file_name)
backup_info.set_attribute("begin_offset", start_file_offset)
else:
self.current_action = "starting the backup: PostgreSQL server is already in exclusive backup mode"
raise Exception('concurrent exclusive backups are not allowed')
def backup_copy(self, backup_info):
'''
Perform the copy of the backup.
This function returns the size of the backup (in bytes)
:param backup_info: the backup information structure
'''
# paths to be ignored from rsync
exclude_and_protect = []
# validate the bandwidth rules against the tablespace list
tablespaces_bwlimit = {}
if self.config.tablespace_bandwidth_limit and backup_info.tablespaces:
valid_tablespaces = dict([(tablespace_data[0], tablespace_data[1])
for tablespace_data in backup_info.tablespaces])
for tablespace, bwlimit in self.config.tablespace_bandwidth_limit.items():
if tablespace in valid_tablespaces:
tablespace_dir = "pg_tblspc/%s" % (valid_tablespaces[tablespace],)
tablespaces_bwlimit[tablespace_dir] = bwlimit
exclude_and_protect.append(tablespace_dir)
backup_dest = os.path.join(backup_info.get_basebackup_directory(), 'pgdata')
# find tablespaces which need to be excluded from rsync command
if backup_info.tablespaces is not None:
exclude_and_protect += [
# removes tablespaces that are located within PGDATA
# as they are already being copied along with it
tablespace_data[2][len(backup_info.pgdata):]
for tablespace_data in backup_info.tablespaces
if tablespace_data[2].startswith(backup_info.pgdata)
]
rsync = RsyncPgData(ssh=self.server.ssh_command,
ssh_options=self.server.ssh_options,
bwlimit=self.config.bandwidth_limit,
exclude_and_protect=exclude_and_protect,
network_compression=self.config.network_compression)
retval = rsync(':%s/' % backup_info.pgdata, backup_dest)
if retval not in (0, 24):
msg = "ERROR: data transfer failure"
_logger.exception(msg)
raise Exception(msg)
# deal with tablespaces with a different bwlimit
if len(tablespaces_bwlimit) > 0:
for tablespace_dir, bwlimit in tablespaces_bwlimit.items():
self.current_action = "copying tablespace '%s' with bwlimit %d" % (
tablespace_dir, bwlimit)
_logger.debug(self.current_action)
tb_rsync = RsyncPgData(ssh=self.server.ssh_command,
ssh_options=self.server.ssh_options,
bwlimit=bwlimit,
network_compression=self.config.network_compression)
retval = tb_rsync(
':%s/' % os.path.join(backup_info.pgdata, tablespace_dir),
os.path.join(backup_dest, tablespace_dir))
if retval not in (0, 24):
msg = "ERROR: data transfer failure on directory '%s'" % (
tablespace_dir,)
_logger.exception(msg)
raise Exception(msg)
# Copy configuration files (if not inside PGDATA)
self.current_action = "copying configuration files"
_logger.debug(self.current_action)
cf = self.server.get_pg_configuration_files()
if cf:
for key in sorted(cf.keys()):
# Consider only those that reside outside of the original PGDATA
if cf[key]:
if cf[key].startswith(backup_info.pgdata):
self.current_action = "skipping %s as contained in %s directory" % (key, backup_info.pgdata)
_logger.debug(self.current_action)
continue
else:
self.current_action = "copying %s as outside %s directory" % (key, backup_info.pgdata)
_logger.info(self.current_action)
retval = rsync(':%s' % cf[key], backup_dest)
if retval not in (0, 24):
raise Exception("ERROR: data transfer failure")
self.current_action = "calculating backup size"
_logger.debug(self.current_action)
backup_size = 0
for dirpath, _, filenames in os.walk(backup_dest):
for f in filenames:
fp = os.path.join(dirpath, f)
backup_size += os.path.getsize(fp)
return backup_size
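The backup size calculation at the end of `backup_copy()` is a plain `os.walk()` accumulation. Extracted as a standalone helper (a sketch, not part of Barman's API) it looks like this:

```python
import os

def directory_size(path):
    """Total size in bytes of all regular files under path,
    mirroring the size loop at the end of backup_copy()."""
    total = 0
    for dirpath, _, filenames in os.walk(path):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total
```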
def backup_stop(self, backup_info):
'''
Stop the backup
:param backup_info: the backup information structure
'''
stop_xlog, stop_file_name, stop_file_offset = self.server.pg_stop_backup()
backup_info.set_attribute("end_time", datetime.datetime.now())
backup_info.set_attribute("end_xlog", stop_xlog)
backup_info.set_attribute("end_wal", stop_file_name)
backup_info.set_attribute("end_offset", stop_file_offset)
def recover_basebackup_copy(self, backup, dest, remote_command=None):
'''
Perform the actual copy of the base backup for recovery purposes
:param backup: the backup to recover
:param dest: the destination directory
:param remote_command: default None. The remote command to recover the base backup,
in case of remote backup.
'''
sourcedir = os.path.join(backup.get_basebackup_directory(), 'pgdata')
tablespaces_bwlimit = {}
if remote_command:
dest = ':%s' % dest
# validate the bandwidth rules against the tablespace list
if self.config.tablespace_bandwidth_limit and backup.tablespaces:
valid_tablespaces = dict([(tablespace_data[0], tablespace_data[1])
for tablespace_data in backup.tablespaces])
for tablespace, bwlimit in self.config.tablespace_bandwidth_limit.items():
if tablespace in valid_tablespaces:
tablespace_dir = "pg_tblspc/%s" % (valid_tablespaces[tablespace],)
tablespaces_bwlimit[tablespace_dir] = bwlimit
rsync = RsyncPgData(ssh=remote_command,
bwlimit=self.config.bandwidth_limit,
exclude_and_protect=tablespaces_bwlimit.keys(),
network_compression=self.config.network_compression)
retval = rsync('%s/' % (sourcedir,), dest)
if retval != 0:
raise Exception("ERROR: data transfer failure")
if remote_command and len(tablespaces_bwlimit) > 0:
for tablespace_dir, bwlimit in tablespaces_bwlimit.items():
self.current_action = "copying tablespace '%s' with bwlimit %d" % (
tablespace_dir, bwlimit)
_logger.debug(self.current_action)
tb_rsync = RsyncPgData(ssh=remote_command,
bwlimit=bwlimit,
network_compression=self.config.network_compression)
retval = tb_rsync(
'%s/' % os.path.join(sourcedir, tablespace_dir),
os.path.join(dest, tablespace_dir))
if retval != 0:
msg = "ERROR: data transfer failure on directory '%s'" % (
tablespace_dir,)
_logger.exception(msg)
raise Exception(msg)
# TODO: Manage different location for configuration files that were not within the data directory
def recover_xlog_copy(self, compressor, xlogs, wal_dest, remote_command=None):
"""
Restore WAL segments
:param compressor: the compressor for the file (if any)
:param xlogs: the xlog dictionary to recover
:param wal_dest: the destination directory for xlog recover
:param remote_command: default None. The remote command to recover the xlog,
in case of remote backup.
"""
rsync = RsyncPgData(ssh=remote_command, bwlimit=self.config.bandwidth_limit,
network_compression=self.config.network_compression)
if remote_command:
# If remote recovery tell rsync to copy them remotely
wal_dest = ':%s' % wal_dest
else:
# we will not use rsync: destdir must exist
if not os.path.exists(wal_dest):
os.makedirs(wal_dest)
if compressor and remote_command:
xlog_spool = tempfile.mkdtemp(prefix='barman_xlog-')
for prefix in xlogs:
source_dir = os.path.join(self.config.wals_directory, prefix)
if compressor:
if remote_command:
for segment in xlogs[prefix]:
compressor.decompress(os.path.join(source_dir, segment), os.path.join(xlog_spool, segment))
rsync.from_file_list(xlogs[prefix], xlog_spool, wal_dest)
for segment in xlogs[prefix]:
os.unlink(os.path.join(xlog_spool, segment))
else:
# decompress directly to the right place
for segment in xlogs[prefix]:
compressor.decompress(os.path.join(source_dir, segment), os.path.join(wal_dest, segment))
else:
rsync.from_file_list(xlogs[prefix], "%s/" % os.path.join(self.config.wals_directory, prefix), wal_dest)
if compressor and remote_command:
shutil.rmtree(xlog_spool)
def cron_wal_archival(self, compressor, filename):
"""
Archive a WAL segment from the incoming directory.
This function returns a WalFileInfo object.
:param compressor: the compressor for the file (if any)
:param filename: the name of the WAL file being processed
:return WalFileInfo:
"""
basename = os.path.basename(filename)
destdir = os.path.join(self.config.wals_directory, xlog.hash_dir(basename))
destfile = os.path.join(destdir, basename)
wal_info = WalFileInfo.from_file(filename, compression=None)
# Run the pre_archive_script if present.
script = HookScriptRunner(self, 'archive_script', 'pre')
script.env_from_wal_info(wal_info)
script.run()
if not os.path.isdir(destdir):
os.makedirs(destdir)
if compressor:
compressor.compress(filename, destfile)
shutil.copystat(filename, destfile)
os.unlink(filename)
else:
shutil.move(filename, destfile)
wal_info = WalFileInfo.from_file(
destfile,
compression=compressor and compressor.compression)
# Run the post_archive_script if present.
script = HookScriptRunner(self, 'archive_script', 'post')
script.env_from_wal_info(wal_info)
script.run()
return wal_info
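The compress-or-move branch of `cron_wal_archival()` can be sketched in isolation. `archive_file` below is a hypothetical helper mirroring the same steps: create the hash directory, then either compress and preserve file metadata with `shutil.copystat()`, or simply move the segment:

```python
import os
import shutil

def archive_file(filename, destfile, compressor=None):
    """Move a WAL segment into the archive, compressing it if a
    compressor is given (same branch structure as cron_wal_archival)."""
    destdir = os.path.dirname(destfile)
    if not os.path.isdir(destdir):
        os.makedirs(destdir)
    if compressor:
        compressor.compress(filename, destfile)
        # keep the original timestamps/permissions on the archived copy
        shutil.copystat(filename, destfile)
        os.unlink(filename)
    else:
        shutil.move(filename, destfile)
```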
def check(self):
"""
This function performs some checks on the server.
Returns 0 if all went well, 1 if any of the checks fails
"""
if self.config.compression and not self.compression_manager.check():
output.result('check', self.config.name,
'compression settings', False)
else:
status = True
try:
self.compression_manager.get_compressor()
except CompressionIncompatibility, field:
output.result('check', self.config.name,
'%s setting' % field, False)
status = False
output.result('check', self.config.name,
'compression settings', status)
# Minimum redundancy checks
no_backups = len(self.get_available_backups())
if no_backups < self.config.minimum_redundancy:
status = False
else:
status = True
output.result('check', self.config.name,
'minimum redundancy requirements', status,
'have %s backups, expected at least %s' %
(no_backups, self.config.minimum_redundancy))
def status(self):
"""
This function shows the server status
"""
no_backups = len(self.get_available_backups())
output.result('status', self.config.name,
"backups_number",
"No. of available backups", no_backups)
output.result('status', self.config.name,
"first_backup",
"First available backup",
self.get_first_backup())
output.result('status', self.config.name,
"last_backup",
"Last available backup",
self.get_last_backup())
if no_backups < self.config.minimum_redundancy:
output.result('status', self.config.name,
"minimum_redundancy",
"Minimum redundancy requirements",
"FAILED (%s/%s)" % (no_backups,
self.config.minimum_redundancy))
else:
output.result('status', self.config.name,
"minimum_redundancy",
"Minimum redundancy requirements",
"satisfied (%s/%s)" % (no_backups,
self.config.minimum_redundancy))
def pg_config_mangle(self, filename, settings, backup_filename=None):
'''This method modifies the PostgreSQL configuration file,
commenting out the settings passed as argument and adding the Barman ones.
If backup_filename is set, a copy of the original file is saved there first.
:param filename: the PostgreSQL configuration file
:param settings: dictionary of settings to mangle
:param backup_filename: default None. If set, save a copy of the
original file to this path before rewriting
'''
if backup_filename:
shutil.copy2(filename, backup_filename)
with open(filename) as f:
content = f.readlines()
r = re.compile(r'^\s*([^\s=]+)\s*=\s*(.*)$')
mangled = False
with open(filename, 'w') as f:
for line in content:
rm = r.match(line)
if rm:
key = rm.group(1)
if key in settings:
f.write("#BARMAN# %s" % line)
# TODO is it useful to handle none values?
f.write("%s = %s\n" % (key, settings[key]))
mangled = True
continue
f.write(line)
return mangled
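The rewrite loop in `pg_config_mangle()` hinges on the `key = value` regular expression. A side-effect-free sketch of the same logic, operating on a list of lines instead of a file:

```python
import re

# The same pattern pg_config_mangle() uses to recognise
# "key = value" lines in postgresql.conf.
SETTING = re.compile(r'^\s*([^\s=]+)\s*=\s*(.*)$')

def mangle_lines(lines, settings):
    """Comment out matching settings and append the override,
    mirroring the rewrite loop above."""
    out = []
    for line in lines:
        m = SETTING.match(line)
        if m and m.group(1) in settings:
            out.append("#BARMAN# %s" % line)
            out.append("%s = %s\n" % (m.group(1), settings[m.group(1)]))
        else:
            out.append(line)
    return out
```

This is how the recovery path ends up with `archive_command = false` while keeping the original line visible in the file.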
def pg_config_detect_possible_issues(self, filename):
'''This method looks for any possible issue with PostgreSQL
location options such as data_directory, config_file, etc.
It returns a dictionary with the dangerous options that have been found.
:param filename: the Postgres configuration file
'''
clashes = {}
with open(filename) as f:
content = f.readlines()
r = re.compile(r'^\s*([^\s=]+)\s*=\s*(.*)$')
for line in content:
rm = r.match(line)
if rm:
key = rm.group(1)
if key in self.DANGEROUS_OPTIONS:
clashes[key] = rm.group(2)
return clashes
def rebuild_xlogdb(self):
"""
Rebuild the whole xlog database guessing it from the archive content.
"""
from os.path import isdir, join
yield "Rebuilding xlogdb for server %s" % self.config.name
root = self.config.wals_directory
default_compression = self.config.compression
wal_count = label_count = history_count = 0
# lock the xlogdb as we are about replacing it completely
with self.server.xlogdb('w') as fxlogdb:
for name in sorted(os.listdir(root)):
# ignore the xlogdb and its lockfile
if name.startswith(self.server.XLOG_DB):
continue
fullname = join(root, name)
if isdir(fullname):
# all relevant files are in subdirectories
hash_dir = fullname
for wal_name in sorted(os.listdir(hash_dir)):
fullname = join(hash_dir, wal_name)
if isdir(fullname):
_logger.warning(
'unexpected directory while '
'rebuilding the wal database: %s',
fullname)
else:
if xlog.is_wal_file(fullname):
wal_count += 1
elif xlog.is_backup_file(fullname):
label_count += 1
else:
_logger.warning(
'unexpected file while '
'rebuilding the wal database: %s',
fullname)
continue
wal_info = WalFileInfo.from_file(
fullname,
default_compression=default_compression)
fxlogdb.write(wal_info.to_xlogdb_line())
else:
# only history files are here
if xlog.is_history_file(fullname):
history_count += 1
wal_info = WalFileInfo.from_file(
fullname,
default_compression=default_compression)
fxlogdb.write(wal_info.to_xlogdb_line())
else:
_logger.warning(
'unexpected file while '
'rebuilding the wal database: %s',
fullname)
yield 'Done rebuilding xlogdb for server %s ' \
'(history: %s, backup_labels: %s, wal_file: %s)' % (
self.config.name, history_count, label_count, wal_count)
barman-1.3.0/barman/cli.py
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
"""
This module implements the interface with the command line and the logger.
"""
from argh import ArghParser, named, arg, expects_obj
from argparse import SUPPRESS
from barman import output
from barman.infofile import BackupInfo
from barman import lockfile
from barman.server import Server
import barman.config
import logging
import os
import sys
from barman.utils import drop_privileges, configure_logging, parse_log_level
_logger = logging.getLogger(__name__)
@named('list-server')
@arg('--minimal', help='machine readable output')
def list_server(minimal=False):
"""
List available servers, with useful information
"""
servers = get_server_list()
for name in sorted(servers):
server = servers[name]
output.init('list_server', name, minimal=minimal)
output.result('list_server', name, server.config.description)
output.close_and_exit()
def cron(verbose=True):
"""
Run maintenance tasks
"""
filename = os.path.join(barman.__config__.barman_home, '.cron.lock')
try:
with lockfile.LockFile(filename, raise_if_fail=True):
servers = [Server(conf) for conf in barman.__config__.servers()]
for server in servers:
for lines in server.cron(verbose):
yield lines
except lockfile.LockFileBusy:
raise SystemExit("ERROR: Another cron is running")
except lockfile.LockFilePermissionDenied:
raise SystemExit("ERROR: Permission denied, unable to access '%s'"
% filename)
# noinspection PyUnusedLocal
def server_completer(prefix, parsed_args, **kwargs):
global_config(parsed_args)
for conf in barman.__config__.servers():
if conf.name.startswith(prefix) \
and conf.name not in parsed_args.server_name:
yield conf.name
# noinspection PyUnusedLocal
def server_completer_all(prefix, parsed_args, **kwargs):
global_config(parsed_args)
for conf in barman.__config__.servers():
if conf.name.startswith(prefix) \
and conf.name not in parsed_args.server_name:
yield conf.name
if 'server_name' in parsed_args \
and parsed_args.server_name is None \
and 'all'.startswith(prefix):
yield 'all'
# noinspection PyUnusedLocal
def backup_completer(prefix, parsed_args, **kwargs):
global_config(parsed_args)
server = get_server(parsed_args)
if server and len(parsed_args.backup_id) == 0:
for backup_id in sorted(server.get_available_backups().iterkeys(),
reverse=True):
if backup_id.startswith(prefix):
yield backup_id
for special_id in ('latest', 'last', 'oldest', 'first'):
if len(server.get_available_backups()) > 0 \
and special_id.startswith(prefix):
yield special_id
else:
return
@arg('server_name', nargs='+',
completer=server_completer_all,
help="specifies the server names for the backup command "
"('all' will show all available servers)")
@arg('--immediate-checkpoint',
help='forces the initial checkpoint to be done as quickly as possible',
dest='immediate_checkpoint',
action='store_true',
default=SUPPRESS)
@arg('--no-immediate-checkpoint',
help='forces the initial checkpoint to be spread over time',
dest='immediate_checkpoint',
action='store_false',
default=SUPPRESS)
@expects_obj
def backup(args):
"""
Perform a full backup for the given server
"""
servers = get_server_list(args)
for name in sorted(servers):
server = servers[name]
if server is None:
output.error("Unknown server '%s'" % name)
continue
immediate_checkpoint = getattr(args, 'immediate_checkpoint',
server.config.immediate_checkpoint)
server.backup(immediate_checkpoint)
output.close_and_exit()
@named('list-backup')
@arg('server_name', nargs='+',
completer=server_completer_all,
help="specifies the server name for the command "
"('all' will show all available servers)")
@arg('--minimal', help='machine readable output', action='store_true')
@expects_obj
def list_backup(args):
"""
List available backups for the given server (supports 'all')
"""
servers = get_server_list(args)
for name in sorted(servers):
server = servers[name]
output.init('list_backup', name, minimal=args.minimal)
if server is None:
output.error("Unknown server '%s'" % name)
continue
server.list_backups()
output.close_and_exit()
@arg('server_name', nargs='+',
completer=server_completer_all,
help='specifies the server name for the command')
@expects_obj
def status(args):
"""
Shows live information and status of the PostgreSQL server
"""
servers = get_server_list(args)
for name in sorted(servers):
server = servers[name]
if server is None:
output.error("Unknown server '%s'" % name)
continue
output.init('status', name)
server.status()
output.close_and_exit()
@arg('server_name', nargs='+',
completer=server_completer_all,
help='specifies the server name for the command')
@expects_obj
def rebuild_xlogdb(args):
"""
Rebuild the WAL file database guessing it from the disk content.
"""
servers = get_server_list(args)
ok = True
for name in sorted(servers):
server = servers[name]
if server is None:
yield "Unknown server '%s'" % (name)
ok = False
continue
for line in server.rebuild_xlogdb():
yield line
yield ''
if not ok:
raise SystemExit(1)
@arg('server_name',
completer=server_completer,
help='specifies the server name for the command')
@arg('--target-tli', help='target timeline', type=int)
@arg('--target-time',
help='target time. You can use any valid unambiguous representation. e.g: "YYYY-MM-DD HH:MM:SS.mmm"')
@arg('--target-xid', help='target transaction ID')
@arg('--target-name',
help='target name created previously with pg_create_restore_point() function call')
@arg('--exclusive',
help='set target xid to be non inclusive', action="store_true")
@arg('--tablespace',
help='tablespace relocation rule',
metavar='NAME:LOCATION', action='append')
@arg('--remote-ssh-command',
metavar='SSH_COMMAND',
help='This option activates remote recovery, by specifying the secure shell command '
'to be launched on a remote host. It is the equivalent of the "ssh_command" server '
'option in the configuration file for remote recovery. Example: "ssh postgres@db2"')
@arg('backup_id',
help='specifies the backup ID to recover')
@arg('destination_directory',
help='the directory where the new server is created')
@expects_obj
def recover(args):
""" Recover a server at a given time or xid
"""
server = get_server(args)
if server is None:
raise SystemExit("ERROR: unknown server '%s'" % (args.server_name))
# Retrieves the backup
backup = parse_backup_id(server, args)
if backup is None or backup.status != BackupInfo.DONE:
raise SystemExit("ERROR: unknown backup '%s' for server '%s'" % (
args.backup_id, args.server_name))
# decode the tablespace relocation rules
tablespaces = {}
if args.tablespace:
for rule in args.tablespace:
try:
tablespaces.update([rule.split(':', 1)])
except ValueError:
raise SystemExit(
"ERROR: invalid tablespace relocation rule '%s'\n"
"HINT: the valid syntax for a relocation rule is NAME:LOCATION" % rule)
# validate the rules against the tablespace list
valid_tablespaces = [tablespace_data[0] for tablespace_data in
backup.tablespaces] if backup.tablespaces else []
for tablespace in tablespaces:
if tablespace not in valid_tablespaces:
raise SystemExit("ERROR: invalid tablespace name '%s'\n"
"HINT: please use any of the following tablespaces: %s"
% (tablespace, ', '.join(valid_tablespaces)))
# explicitly disallow the rsync remote syntax (common mistake)
if ':' in args.destination_directory:
raise SystemExit(
"ERROR: the destination directory parameter cannot contain the ':' character\n"
"HINT: if you want to do a remote recovery you have to use the --remote-ssh-command option")
for line in server.recover(backup,
args.destination_directory,
tablespaces=tablespaces,
target_tli=args.target_tli,
target_time=args.target_time,
target_xid=args.target_xid,
target_name=args.target_name,
exclusive=args.exclusive,
remote_command=args.remote_ssh_command
):
yield line
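The `--tablespace NAME:LOCATION` rules above are parsed with `rule.split(':', 1)`. An equivalent standalone sketch (slightly stricter than the original, rejecting empty names or locations) could be:

```python
def parse_relocation_rules(rules):
    """Parse NAME:LOCATION strings into a dict, as recover() does.

    str.partition keeps everything after the first ':' as the
    location, so paths containing ':' still work.
    """
    tablespaces = {}
    for rule in rules:
        name, sep, location = rule.partition(':')
        if not sep or not name or not location:
            raise ValueError(
                "invalid tablespace relocation rule '%s'" % rule)
        tablespaces[name] = location
    return tablespaces
```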
@named('show-server')
@arg('server_name', nargs='+',
completer=server_completer_all,
help="specifies the server names to show ('all' will show all available servers)")
@expects_obj
def show_server(args):
"""
Show all configuration parameters for the specified servers
"""
servers = get_server_list(args)
for name in sorted(servers):
server = servers[name]
if server is None:
output.error("Unknown server '%s'" % name)
continue
output.init('show_server', name)
server.show()
output.close_and_exit()
@arg('server_name', nargs='+',
completer=server_completer_all,
help="specifies the server names to check "
"('all' will check all available servers)")
@arg('--nagios', help='Nagios plugin compatible output', action='store_true')
@expects_obj
def check(args):
"""
Check if the server configuration is working.
This command returns success if all checks pass,
or failure if any of them fails
"""
if args.nagios:
output.set_output_writer(output.NagiosOutputWriter())
servers = get_server_list(args)
for name in sorted(servers):
server = servers[name]
if server is None:
output.error("Unknown server '%s'" % name)
continue
output.init('check', name)
server.check()
output.close_and_exit()
@named('show-backup')
@arg('server_name',
completer=server_completer,
help='specifies the server name for the command')
@arg('backup_id',
completer=backup_completer,
help='specifies the backup ID')
@expects_obj
def show_backup(args):
"""
This method shows the information of a single backup
"""
server = get_server(args)
if server is None:
output.error("Unknown server '%s'" % args.server_name)
else:
# Retrieves the backup
backup_info = parse_backup_id(server, args)
if backup_info is None:
output.error("Unknown backup '%s' for server '%s'" % (
args.backup_id, args.server_name))
else:
server.show_backup(backup_info)
output.close_and_exit()
@named('list-files')
@arg('server_name',
completer=server_completer,
help='specifies the server name for the command')
@arg('backup_id',
completer=backup_completer,
help='specifies the backup ID')
@arg('--target', choices=('standalone', 'data', 'wal', 'full'),
default='standalone',
help='''
Possible values are: data (just the data files), standalone (base backup files, including required WAL files),
wal (just the WAL files between the start of the base backup and the start of the following one, if any, or the end of the log) and
full (same as data + wal). Defaults to %(default)s
'''
)
@expects_obj
def list_files(args):
""" List all the files for a single backup
"""
server = get_server(args)
if server is None:
yield "Unknown server '%s'" % (args.server_name)
return
# Retrieves the backup
backup = parse_backup_id(server, args)
if backup is None:
yield "Unknown backup '%s' for server '%s'" % (
args.backup_id, args.server_name)
return
for line in backup.get_list_of_files(args.target):
yield line
@arg('server_name',
completer=server_completer,
help='specifies the server name for the command')
@arg('backup_id',
completer=backup_completer,
help='specifies the backup ID')
@expects_obj
def delete(args):
""" Delete a backup
"""
server = get_server(args)
if server is None:
yield "Unknown server '%s'" % (args.server_name)
return
# Retrieves the backup
backup = parse_backup_id(server, args)
if backup is None:
yield "Unknown backup '%s' for server '%s'" % (
args.backup_id, args.server_name)
return
for line in server.delete_backup(backup):
yield line
class stream_wrapper(object):
""" This class represents a wrapper for a stream
"""
def __init__(self, stream):
self.stream = stream
def set_stream(self, stream):
''' Replace the wrapped stream with the given one '''
self.stream = stream
def __getattr__(self, attr):
return getattr(self.stream, attr)
_output_stream = stream_wrapper(sys.stdout)
def global_config(args):
""" Set the configuration file """
if hasattr(args, 'config'):
filename = args.config
else:
try:
filename = os.environ['BARMAN_CONFIG_FILE']
except KeyError:
filename = None
config = barman.config.Config(filename)
barman.__config__ = config
# change user if needed
try:
drop_privileges(config.user)
except OSError:
msg = "ERROR: please run barman as %r user" % config.user
raise SystemExit(msg)
except KeyError:
msg = "ERROR: the configured user %r does not exist" % config.user
raise SystemExit(msg)
# configure logging
log_level = parse_log_level(config.log_level)
configure_logging(config.log_file,
log_level or barman.config.DEFAULT_LOG_LEVEL,
config.log_format)
if log_level is None:
_logger.warn('unknown log_level in config file: %s', config.log_level)
# configure output
if args.format != output.DEFAULT_WRITER:
output.set_output_writer(args.format)
# Load additional configuration files
_logger.debug('Loading additional configuration files')
config.load_configuration_files_directory()
_logger.debug('Initialized Barman version %s (config: %s)',
barman.__version__, config.config_file)
if hasattr(args, 'quiet') and args.quiet:
_logger.debug("Replacing output stream")
global _output_stream
_output_stream.set_stream(open(os.devnull, 'w'))
def get_server(args):
"""
Get a single server from the configuration
:param args: an argparse namespace containing a single server_name parameter
"""
config = barman.__config__.get_server(args.server_name)
if not config:
return None
return Server(config)
def get_server_list(args=None):
"""
Get the server list from the configuration
If the args parameter is None or args.server_name[0] is 'all',
all defined servers are returned
:param args: an argparse namespace containing a server_name list
"""
if args is None or args.server_name[0] == 'all':
return dict(
(conf.name, Server(conf)) for conf in barman.__config__.servers())
else:
server_dict = {}
for server in args.server_name:
conf = barman.__config__.get_server(server)
if conf is None:
server_dict[server] = None
else:
server_dict[server] = Server(conf)
return server_dict
def parse_backup_id(server, args):
"""
Parses backup IDs including special words such as latest, oldest, etc.
:param Server server: server object to search for the required backup
:param args: command line arguments namespace
:rtype BackupInfo,None: the decoded backup_info object
"""
if args.backup_id in ('latest', 'last'):
backup_id = server.get_last_backup()
elif args.backup_id in ('oldest', 'first'):
backup_id = server.get_first_backup()
else:
backup_id = args.backup_id
backup_info = server.get_backup(backup_id) if backup_id else None
return backup_info
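The special-word resolution performed by parse_backup_id can be sketched standalone; FakeServer and its backup IDs are hypothetical stand-ins for Barman's Server class, not part of its API:

```python
# Minimal sketch of the alias resolution in parse_backup_id.
# FakeServer is a made-up stand-in, not Barman's Server class.
class FakeServer(object):
    def __init__(self, backup_ids):
        self.backup_ids = sorted(backup_ids)

    def get_first_backup(self):
        return self.backup_ids[0] if self.backup_ids else None

    def get_last_backup(self):
        return self.backup_ids[-1] if self.backup_ids else None


def resolve_backup_id(server, backup_id):
    # 'latest'/'last' and 'oldest'/'first' are special words
    if backup_id in ('latest', 'last'):
        return server.get_last_backup()
    if backup_id in ('oldest', 'first'):
        return server.get_first_backup()
    return backup_id


server = FakeServer(['20140101T000000', '20140201T000000'])
print(resolve_backup_id(server, 'latest'))  # 20140201T000000
print(resolve_backup_id(server, 'first'))   # 20140101T000000
```

Any other string is passed through untouched and later looked up with server.get_backup().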
def main():
"""
The main method of Barman
"""
p = ArghParser()
p.add_argument('-v', '--version', action='version',
version=barman.__version__)
p.add_argument('-c', '--config',
help='uses a configuration file '
'(defaults: %s)'
% ', '.join(barman.config.Config.CONFIG_FILES),
default=SUPPRESS)
p.add_argument('-q', '--quiet', help='be quiet', action='store_true')
p.add_argument('-f', '--format', help='output format',
choices=output.AVAILABLE_WRITERS.keys(),
default=output.DEFAULT_WRITER)
p.add_commands(
[
cron,
list_server,
show_server,
status,
check,
backup,
list_backup,
show_backup,
list_files,
recover,
delete,
rebuild_xlogdb,
]
)
# noinspection PyBroadException
try:
p.dispatch(pre_call=global_config, output_file=_output_stream)
except Exception:
msg = "Unhandled exception. See log file for more details."
output.exception(msg)
output.close_and_exit()
if __name__ == '__main__':
# This code requires the mock module and allows us to test
# bash completion inside the IDE debugger
try:
import mock
sys.stdout = mock.Mock(wraps=sys.stdout)
sys.stdout.isatty.return_value = True
os.dup2(2, 8)
except ImportError:
pass
main()
barman-1.3.0/barman/command_wrappers.py
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
"""
This module contains a wrapper for shell commands
"""
import sys
import signal
import subprocess
import os
import logging
_logger = logging.getLogger(__name__)
class CommandFailedException(Exception):
"""
Exception which represents a failed command
"""
pass
class Command(object):
"""
Simple wrapper for a shell command
"""
def __init__(self, cmd, args=None, env_append=None, shell=False,
check=False, debug=False):
self.cmd = cmd
self.args = args if args is not None else []
self.shell = shell
self.check = check
self.debug = debug
if env_append:
self.env = os.environ.copy()
self.env.update(env_append)
else:
self.env = None
def _restore_sigpipe(self):
"""restore default signal handler (http://bugs.python.org/issue1652)"""
signal.signal(signal.SIGPIPE, signal.SIG_DFL) # pragma: no cover
def _cmd_quote(self, cmd, args):
"""
Quote all cmd's arguments.
This is needed to avoid breaking the command string.
WARNING: this function does not protect against injection.
"""
if args is not None and len(args) > 0:
cmd = "%s '%s'" % (cmd, "' '".join(args))
return cmd
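The quoting rule above can be illustrated with a minimal sketch that mirrors the string building (a re-implementation for illustration, not the Barman class itself):

```python
# Sketch of Command._cmd_quote: each argument is wrapped in single
# quotes and appended to the command string.
def cmd_quote(cmd, args):
    if args is not None and len(args) > 0:
        cmd = "%s '%s'" % (cmd, "' '".join(args))
    return cmd


print(cmd_quote('ssh', ['-p', '22', 'backup@pg']))  # ssh '-p' '22' 'backup@pg'
print(cmd_quote('rsync', None))                     # rsync
```

As the docstring warns, this protects against accidental word splitting but not against injection through embedded single quotes.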
def __call__(self, *args):
self.getoutput(None, *args)
return self.ret
def getoutput(self, stdin=None, *args):
"""
Run the command and return the output and the error (if present)
"""
args = self.args + list(args)
if self.shell:
cmd = self._cmd_quote(self.cmd, args)
else:
cmd = [self.cmd] + args
if self.debug:
print >> sys.stderr, "Command: %r" % cmd
_logger.debug("Command: %r", cmd)
pipe = subprocess.Popen(cmd, shell=self.shell, env=self.env,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_sigpipe)
self.out, self.err = pipe.communicate(stdin)
self.ret = pipe.returncode
if self.debug:
print >> sys.stderr, "Command return code: %s" % self.ret
_logger.debug("Command return code: %s", self.ret)
_logger.debug("Command stdout: %s", self.out)
_logger.debug("Command stderr: %s", self.err)
if self.check and self.ret != 0:
raise CommandFailedException(dict(
ret=self.ret, out=self.out, err=self.err))
return self.out, self.err
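How getoutput drives subprocess can be reduced to a minimal sketch (error checking, environment merging and SIGPIPE restoration omitted):

```python
import subprocess

# Simplified sketch of Command.getoutput: run a command, capture
# stdout/stderr and keep the return code.
def run_command(cmd, stdin=None):
    pipe = subprocess.Popen(cmd,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = pipe.communicate(stdin)
    return pipe.returncode, out, err


ret, out, err = run_command(['echo', 'hello'])
print(ret, out)
```

The real class additionally raises CommandFailedException when check is set and the return code is non-zero.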
class Rsync(Command):
"""
This class is a wrapper for the rsync system command,
which is used extensively by barman
"""
def __init__(self, rsync='rsync', args=None, ssh=None, ssh_options=None,
bwlimit=None, exclude_and_protect=None,
network_compression=None, **kwargs):
options = []
if ssh:
options += ['-e', self._cmd_quote(ssh, ssh_options)]
if network_compression:
options += ['-z']
if exclude_and_protect:
for path in exclude_and_protect:
options += ["--exclude=%s" % (path,), "--filter=P_%s" % (path,)]
if args:
options += args
if bwlimit is not None and bwlimit > 0:
options += ["--bwlimit=%s" % bwlimit]
Command.__init__(self, rsync, args=options, **kwargs)
def from_file_list(self, filelist, src, dst):
"""
This method copies the files listed in filelist from src to dst.
Returns the return code of the rsync command
"""
input_string = ('\n'.join(filelist)).encode('UTF-8')
_logger.debug("from_file_list: %r", filelist)
self.getoutput(input_string, '--files-from=-', src, dst)
return self.ret
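The stdin payload handed to rsync --files-from=- is just a newline-separated list, as a quick sketch shows (the paths are made up for illustration):

```python
# Sketch of the stdin payload built by from_file_list and fed to
# "rsync --files-from=-". File names below are invented examples.
filelist = ['base/1234/backup_label', 'base/1234/PG_VERSION']
input_string = ('\n'.join(filelist)).encode('UTF-8')
print(input_string)
```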
class RsyncPgData(Rsync):
"""
This class is a wrapper for rsync, specialized in Postgres data
directory syncing
"""
def __init__(self, rsync='rsync', args=None, **kwargs):
options = [
'-rLKpts', '--delete-excluded', '--inplace',
'--exclude=/pg_xlog/*',
'--exclude=/pg_log/*',
'--exclude=/postmaster.pid'
]
if args:
options += args
Rsync.__init__(self, rsync, args=options, **kwargs)
barman-1.3.0/barman/compression.py
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
"""
This module is responsible for managing the compression features of Barman
"""
from barman.command_wrappers import Command
import logging
_logger = logging.getLogger(__name__)
class CompressionIncompatibility(Exception):
"""
Exception for compression incompatibility
"""
class CompressionManager(object):
def __init__(self, config):
"""
Compression manager
"""
self.config = config
def check(self, compression=None):
"""
This method returns True if the compression specified in the
configuration file is present in the registry, otherwise False
"""
if not compression:
compression = self.config.compression
if compression not in compression_registry:
return False
return True
def get_compressor(self, remove_origin=False, debug=False,
compression=None):
"""
Returns a new compressor instance
"""
if not compression:
compression = self.config.compression
# Check if the requested compression mechanism is allowed
if self.check(compression):
return compression_registry[compression](
config=self.config, compression=compression,
remove_origin=remove_origin, debug=debug)
else:
return None
def identify_compression(filename):
"""
Try to guess the compression algorithm of a file
:param filename: the path of the file to identify
:rtype: str
"""
with open(filename, 'rb') as f:
file_start = f.read(MAGIC_MAX_LENGTH)
for file_type, cls in compression_registry.iteritems():
if cls.validate(file_start):
return file_type
return None
class Compressor(object):
"""
Abstract base class for all compressors
"""
MAGIC = None
def __init__(self, config, compression, remove_origin=False, debug=False):
self.config = config
self.compression = compression
self.remove_origin = remove_origin
self.debug = debug
self.compress = None
self.decompress = None
def _build_command(self, pipe_command):
"""
Build the command string and create the actual Command object
:param pipe_command: the command used to compress/decompress
:rtype: Command
"""
command = 'command(){ '
command += pipe_command
command += ' > "$2" < "$1"'
if self.remove_origin:
command += ' && rm -f "$1"'
command += ';}; command'
return Command(command, shell=True, check=True, debug=self.debug)
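The shell function string assembled by _build_command can be reproduced standalone to see the exact command generated for, say, gzip -c:

```python
# Re-implementation of the string assembly in _build_command,
# for illustration only.
def build_command(pipe_command, remove_origin=False):
    command = 'command(){ '
    command += pipe_command
    command += ' > "$2" < "$1"'
    if remove_origin:
        command += ' && rm -f "$1"'
    command += ';}; command'
    return command


print(build_command('gzip -c'))
# command(){ gzip -c > "$2" < "$1";}; command
print(build_command('gzip -c', remove_origin=True))
# command(){ gzip -c > "$2" < "$1" && rm -f "$1";}; command
```

The resulting shell function reads from "$1" and writes to "$2", so the same wrapper works for any filter-style compressor.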
@classmethod
def validate(cls, file_start):
"""
Guess if the first bytes of a file are compatible with the compression
implemented by this class
:param file_start: a binary string representing the first few
bytes of a file
:rtype: bool
"""
return cls.MAGIC and file_start.startswith(cls.MAGIC)
class GZipCompressor(Compressor):
"""
Predefined compressor with GZip
"""
MAGIC = b'\x1f\x8b\x08'
def __init__(self, config, compression, remove_origin=False, debug=False):
super(GZipCompressor, self).__init__(
config, compression, remove_origin, debug)
self.compress = self._build_command('gzip -c')
self.decompress = self._build_command('gzip -c -d')
class BZip2Compressor(Compressor):
"""
Predefined compressor with BZip2
"""
MAGIC = b'\x42\x5a\x68'
def __init__(self, config, compression, remove_origin=False, debug=False):
super(BZip2Compressor, self).__init__(
config, compression, remove_origin, debug)
self.compress = self._build_command('bzip2 -c')
self.decompress = self._build_command('bzip2 -c -d')
class CustomCompressor(Compressor):
"""
Custom compressor
"""
def __init__(self, config, compression, remove_origin=False, debug=False):
if not config.custom_compression_filter:
raise CompressionIncompatibility("custom_compression_filter")
if not config.custom_decompression_filter:
raise CompressionIncompatibility("custom_decompression_filter")
super(CustomCompressor, self).__init__(
config, compression, remove_origin, debug)
self.compress = self._build_command(
config.custom_compression_filter)
self.decompress = self._build_command(
config.custom_decompression_filter)
#: a dictionary mapping all supported compression schema
#: to the class implementing it
compression_registry = {
'gzip': GZipCompressor,
'bzip2': BZip2Compressor,
'custom': CustomCompressor,
}
#: The longest string needed to identify a compression schema
MAGIC_MAX_LENGTH = reduce(
max, [len(x.MAGIC or '') for x in compression_registry.values()], 0)
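The magic-byte identification used by identify_compression can be sketched with the standard gzip and bz2 modules producing real headers (a simplified re-implementation, not the Barman function):

```python
import bz2
import gzip
import io

# The same magic prefixes declared by GZipCompressor and BZip2Compressor.
MAGICS = {'gzip': b'\x1f\x8b\x08', 'bzip2': b'\x42\x5a\x68'}


def identify(file_start):
    # Compare the first bytes of a file against each known magic prefix.
    for name, magic in MAGICS.items():
        if file_start.startswith(magic):
            return name
    return None


buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
    f.write(b'some WAL data')
print(identify(buf.getvalue()[:3]))                    # gzip
print(identify(bz2.compress(b'some WAL data')[:3]))    # bzip2
print(identify(b'plain text'))                          # None
```

This is why MAGIC_MAX_LENGTH bytes are enough to read from the file: the longest registered prefix decides.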
barman-1.3.0/barman/config.py
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
"""
This module is responsible for all the things related to
Barman configuration, such as parsing the configuration files.
"""
import os
import re
from ConfigParser import ConfigParser, NoOptionError
import logging.handlers
from glob import iglob
from barman import output
_logger = logging.getLogger(__name__)
FORBIDDEN_SERVER_NAMES = ['all']
DEFAULT_USER = 'barman'
DEFAULT_LOG_LEVEL = logging.INFO
DEFAULT_LOG_FORMAT = "%(asctime)s %(name)s %(levelname)s: %(message)s"
_TRUE_RE = re.compile(r"""^(true|t|yes|1)$""", re.IGNORECASE)
_FALSE_RE = re.compile(r"""^(false|f|no|0)$""", re.IGNORECASE)
def parse_boolean(value):
"""
Parse a string to a boolean value
:param str value: string representing a boolean
:raises ValueError: if the string is an invalid boolean representation
"""
if _TRUE_RE.match(value):
return True
if _FALSE_RE.match(value):
return False
raise ValueError("Invalid boolean representation (use 'true' or 'false')")
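A standalone sketch of the same regex-based parsing (re-implemented here for illustration):

```python
import re

# The same spellings accepted by parse_boolean, case-insensitively.
TRUE_RE = re.compile(r"^(true|t|yes|1)$", re.IGNORECASE)
FALSE_RE = re.compile(r"^(false|f|no|0)$", re.IGNORECASE)


def parse_bool(value):
    if TRUE_RE.match(value):
        return True
    if FALSE_RE.match(value):
        return False
    raise ValueError("Invalid boolean representation (use 'true' or 'false')")


print(parse_bool('Yes'))  # True
print(parse_bool('0'))    # False
```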
class Server(object):
"""
This class represents a server.
"""
KEYS = [
'active', 'description', 'ssh_command', 'conninfo',
'backup_directory', 'basebackups_directory',
'wals_directory', 'incoming_wals_directory', 'lock_file',
'compression', 'custom_compression_filter',
'custom_decompression_filter', 'retention_policy_mode',
'retention_policy',
'wal_retention_policy', 'pre_backup_script', 'post_backup_script',
'minimum_redundancy', 'bandwidth_limit', 'tablespace_bandwidth_limit',
'immediate_checkpoint', 'network_compression',
]
BARMAN_KEYS = [
'compression', 'custom_compression_filter',
'custom_decompression_filter', 'retention_policy_mode',
'retention_policy',
'wal_retention_policy', 'pre_backup_script', 'post_backup_script',
'configuration_files_directory',
'minimum_redundancy', 'bandwidth_limit', 'tablespace_bandwidth_limit',
'immediate_checkpoint', 'network_compression',
]
DEFAULTS = {
'active': 'true',
'backup_directory': r'%(barman_home)s/%(name)s',
'basebackups_directory': r'%(backup_directory)s/base',
'wals_directory': r'%(backup_directory)s/wals',
'incoming_wals_directory': r'%(backup_directory)s/incoming',
'lock_file': r'%(backup_directory)s/%(name)s.lock',
'retention_policy_mode': 'auto',
'wal_retention_policy': 'main',
'minimum_redundancy': '0',
'immediate_checkpoint': 'false',
'network_compression': 'false',
}
PARSERS = {
'active': parse_boolean,
'immediate_checkpoint': parse_boolean,
'network_compression': parse_boolean,
}
def __init__(self, config, name):
self.config = config
self.name = name
self.barman_home = config.get('barman', 'barman_home')
for key in Server.KEYS:
value = config.get(name, key, self.__dict__)
source = '[%s] section' % name
if value is None and key in Server.BARMAN_KEYS:
value = config.get('barman', key)
source = '[barman] section'
if value is None and key in Server.DEFAULTS:
value = Server.DEFAULTS[key] % self.__dict__
source = 'DEFAULTS'
# If we have a parser for the current key use it to obtain the
actual value. If an exception is thrown, output a warning and
# ignore the value.
# noinspection PyBroadException
try:
if key in self.PARSERS:
value = self.PARSERS[key](value)
except Exception, e:
output.warning("Invalid configuration value '%s' for key %s"
" in %s: %s",
value, key, source, e)
setattr(self, key, value)
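The three-level lookup above (server section, then [barman], then DEFAULTS) can be sketched with plain dictionaries; this simplification ignores the BARMAN_KEYS restriction on which keys may fall back:

```python
# Simplified sketch of the fallback chain in Server.__init__:
# server section -> [barman] section -> DEFAULTS.
def resolve(key, server_section, barman_section, defaults):
    value = server_section.get(key)
    if value is None:
        value = barman_section.get(key)
    if value is None:
        value = defaults.get(key)
    return value


# Made-up example values, for illustration only.
server = {'compression': 'bzip2'}
barman = {'compression': 'gzip', 'minimum_redundancy': '2'}
defaults = {'minimum_redundancy': '0', 'retention_policy_mode': 'auto'}

print(resolve('compression', server, barman, defaults))            # bzip2
print(resolve('minimum_redundancy', server, barman, defaults))     # 2
print(resolve('retention_policy_mode', server, barman, defaults))  # auto
```

A per-server setting always wins; [barman] supplies installation-wide defaults; DEFAULTS is the last resort.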
class Config(object):
"""This class represents the barman configuration.
Default configuration files are /etc/barman.conf,
/etc/barman/barman.conf
and ~/.barman.conf for a per-user configuration
"""
CONFIG_FILES = [
'~/.barman.conf',
'/etc/barman.conf',
'/etc/barman/barman.conf',
]
_QUOTE_RE = re.compile(r"""^(["'])(.*)\1$""")
def __init__(self, filename=None):
self._config = ConfigParser()
if filename:
if hasattr(filename, 'read'):
self._config.readfp(filename)
else:
self._config.read(os.path.expanduser(filename))
else:
for path in self.CONFIG_FILES:
full_path = os.path.expanduser(path)
if os.path.exists(full_path) \
and full_path in self._config.read(full_path):
filename = full_path
break
self.config_file = filename
self._servers = None
self._parse_global_config()
def get(self, section, option, defaults=None):
"""Method to get the value of an option from a given
section of the Barman configuration
"""
if not self._config.has_section(section):
return None
try:
value = self._config.get(section, option, raw=False, vars=defaults)
if value == 'None':
value = None
if value is not None:
value = self._QUOTE_RE.sub(lambda m: m.group(2), value)
return value
except NoOptionError:
return None
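The quote stripping applied by get can be seen in isolation (same regex, re-implemented for illustration):

```python
import re

# A value wrapped in a matching pair of single or double quotes
# loses the outer pair; anything else is returned unchanged.
QUOTE_RE = re.compile(r"""^(["'])(.*)\1$""")


def unquote(value):
    return QUOTE_RE.sub(lambda m: m.group(2), value)


print(unquote('"barman home"'))  # barman home
print(unquote("'gzip'"))         # gzip
print(unquote('plain'))          # plain
```

The backreference \1 ensures mismatched quotes (e.g. a value starting with " and ending with ') are left alone.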
def _parse_global_config(self):
"""This method parses the configuration file"""
self.barman_home = self.get('barman', 'barman_home')
self.user = self.get('barman', 'barman_user') \
or DEFAULT_USER
self.log_file = self.get('barman', 'log_file')
self.log_format = self.get('barman', 'log_format') \
or DEFAULT_LOG_FORMAT
self.log_level = self.get('barman', 'log_level') \
or DEFAULT_LOG_LEVEL
self._global_config = set(self._config.items('barman'))
def _is_global_config_changed(self):
"""Return true if something has changed in global configuration"""
return self._global_config != set(self._config.items('barman'))
def load_configuration_files_directory(self):
"""
Read the "configuration_files_directory" option and load all the
configuration files with the .conf suffix that lie in that folder
"""
config_files_directory = self.get('barman',
'configuration_files_directory')
if not config_files_directory:
return
if not os.path.isdir(os.path.expanduser(config_files_directory)):
_logger.warn(
'Ignoring the "configuration_files_directory" option as "%s" '
'is not a directory',
config_files_directory)
return
for cfile in sorted(iglob(
os.path.join(os.path.expanduser(config_files_directory),
'*.conf'))):
filename = os.path.basename(cfile)
if os.path.isfile(cfile):
# Load a file
_logger.debug('Including configuration file: %s', filename)
self._config.read(cfile)
if self._is_global_config_changed():
msg = "the configuration file %s contains a non-empty " \
"[barman] section" % filename
_logger.fatal(msg)
raise SystemExit("FATAL: %s" % msg)
else:
# Add an info that a file has been discarded
_logger.warn('Discarding configuration file: %s (not a file)',
filename)
def _populate_servers(self):
"""Populate server list from configuration file"""
if self._servers is not None:
return
self._servers = {}
for section in self._config.sections():
if section == 'barman':
continue # skip global settings
if section in FORBIDDEN_SERVER_NAMES:
msg = "the reserved word '%s' is not allowed as a server name. " \
"Please rename it." % section
_logger.fatal(msg)
raise SystemExit("FATAL: %s" % msg)
self._servers[section] = Server(self, section)
def server_names(self):
"""This method returns a list of server names"""
self._populate_servers()
return self._servers.keys()
def servers(self):
"""This method returns a list of server parameters"""
self._populate_servers()
return self._servers.values()
def get_server(self, name):
"""Get the server specifying its name"""
self._populate_servers()
return self._servers.get(name, None)
# easy config diagnostic with python -m
if __name__ == "__main__":
print "Active configuration settings:"
r = Config()
r.load_configuration_files_directory()
for section in r._config.sections():
print "Section: %s" % section
for option in r._config.options(section):
print "\t%s = %s " % (option, r.get(section, option))
barman-1.3.0/barman/fs.py
# Copyright (C) 2013-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
import logging
from barman.command_wrappers import Command
from shutil import rmtree
_logger = logging.getLogger(__name__)
class FsOperationFailed(Exception):
"""
Exception which represents a failed execution of a command on FS
"""
pass
class UnixLocalCommand(object):
"""
This class is a wrapper for local calls for file system operations
"""
def __init__(self):
# initialize a shell
self.cmd = Command(cmd='sh -c', shell=True)
def create_dir_if_not_exists(self, dir_path):
"""
This method checks for the existence of a directory.
If the path exists but is not a directory, an exception is raised.
If it is already a directory, nothing needs to be done and
False is returned.
Otherwise the directory is created using mkdir; if mkdir
fails, an error is raised. Returns True on creation.
:param dir_path: full path for the directory
"""
exists = self.cmd('test -e %s' % dir_path)
if exists == 0:
is_dir = self.cmd('test -d %s' % dir_path)
if is_dir != 0:
raise FsOperationFailed(
'A file with the same name already exists')
else:
return False
else:
mkdir_ret = self.cmd('mkdir %s' % dir_path)
if mkdir_ret == 0:
return True
else:
raise FsOperationFailed('mkdir execution failed')
def delete_dir_if_exists(self, dir_path):
"""
This method checks for the existence of a directory.
If the path exists but is not a directory, an exception is raised.
If it is a directory, it is removed using rm -fr and True
is returned; if the command fails, an exception is raised.
If the directory does not exist, False is returned.
:param dir_path: the full path for the directory
"""
exists = self.cmd('test -e %s' % dir_path)
if exists == 0:
is_dir = self.cmd('test -d %s' % dir_path)
if is_dir != 0:
raise FsOperationFailed(
'A file with the same name exists, but is not a '
'directory')
else:
rm_ret = self.cmd('rm -fr %s' % dir_path)
if rm_ret == 0:
return True
else:
raise FsOperationFailed('rm execution failed')
else:
return False
def check_directory_exists(self, dir_path):
"""
Check for the existence of a directory in the given path.
Returns True if the directory exists, False if it does not.
If the path exists but is not a directory, an exception is raised.
:param dir_path: full path for the directory
"""
exists = self.cmd('test -e %s' % dir_path)
if exists == 0:
is_dir = self.cmd('test -d %s' % dir_path)
if is_dir != 0:
raise FsOperationFailed(
'A file with the same name exists, but is not a directory')
else:
return True
else:
return False
def check_write_permission(self, dir_path):
"""
Check that barman has write permission on a given path.
Creates a hidden file using touch, then removes it.
Returns True if the file is written and removed without problems;
an exception is raised if the creation or the removal fails.
:param dir_path: full path for the directory to check
"""
exists = self.cmd('test -e %s' % dir_path)
if exists == 0:
is_dir = self.cmd('test -d %s' % dir_path)
if is_dir == 0:
can_write = self.cmd('touch %s/.barman_write_check' % dir_path)
if can_write == 0:
can_remove = self.cmd(
'rm %s/.barman_write_check' % dir_path)
if can_remove == 0:
return True
else:
raise FsOperationFailed('Unable to remove file')
else:
raise FsOperationFailed('Unable to create write check file')
else:
raise FsOperationFailed('%s is not a directory' % dir_path)
else:
raise FsOperationFailed('%s does not exist' % dir_path)
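The exit-code convention these methods rely on (the shell test builtin returns 0 on success, non-zero otherwise) can be demonstrated directly with subprocess; the temporary directory and the shell helper are illustrative only:

```python
import subprocess
import tempfile

# Run a shell snippet and return its exit code, mimicking how
# UnixLocalCommand interprets the result of "test ..." commands.
def shell(cmd):
    return subprocess.call(['sh', '-c', cmd])


tmp = tempfile.mkdtemp()
print(shell('test -d %s' % tmp))          # 0: it is a directory
print(shell('test -d %s/missing' % tmp))  # non-zero: does not exist
# The write check: create a hidden file, then remove it.
print(shell('touch %s/.write_check && rm %s/.write_check' % (tmp, tmp)))
```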
def create_symbolic_link(self, src, dst):
"""
Create a symlink pointing to src named dst.
Checks that src exists; if so, checks that dst does not exist.
If src is an invalid path, an exception is raised; if dst already
exists, an exception is raised; if the ln -s command fails, an
exception is raised.
:param src: full path to the source of the symlink
:param dst: full path for the destination of the symlink
"""
exists = self.cmd('test -e %s' % src)
if exists == 0:
exists_dst = self.cmd('test -e %s' % dst)
if exists_dst != 0:
link = self.cmd('ln -s %s %s' % (src, dst))
if link == 0:
return True
else:
raise FsOperationFailed('ln command failed')
else:
raise FsOperationFailed('ln destination already exists')
else:
raise FsOperationFailed('ln source does not exist')
class UnixRemoteCommand(UnixLocalCommand):
"""
This class is a wrapper for remote calls for file system operations
"""
# noinspection PyMissingConstructor
def __init__(self, ssh_command):
"""
Uses the same commands as the UnixLocalCommand
but the constructor is overridden and a remote shell is
initialized using the ssh_command provided by the user
:param ssh_command: the ssh command provided by the user
"""
if ssh_command is None:
raise FsOperationFailed('No ssh command provided')
self.cmd = Command(cmd=ssh_command, shell=True)
ret = self.cmd("true")
if ret != 0:
raise FsOperationFailed("Connection failed using the command '%s'" %
ssh_command)
barman-1.3.0/barman/hooks.py
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
"""
This module contains the logic to run hook scripts
"""
import logging
from barman import version
from barman.command_wrappers import Command
from barman.infofile import UnknownBackupIdException
_logger = logging.getLogger(__name__)
class HookScriptRunner(object):
def __init__(self, backup_manager, name, phase=None, error=None,
**extra_env):
"""
Execute a hook script managing its environment
"""
self.backup_manager = backup_manager
self.name = name
self.extra_env = extra_env
self.phase = phase
self.error = error
self.environment = None
self.exit_status = None
self.exception = None
self.script = None
self.reset()
def reset(self):
"""
Reset the status of the class.
"""
self.environment = dict(self.extra_env)
config_file = self.backup_manager.config.config.config_file
self.environment.update({
'BARMAN_VERSION': version.__version__,
'BARMAN_SERVER': self.backup_manager.config.name,
'BARMAN_CONFIGURATION': config_file,
'BARMAN_HOOK': self.name,
})
if self.error:
self.environment['BARMAN_ERROR'] = self.error
if self.phase:
self.environment['BARMAN_PHASE'] = self.phase
script_config_name = "%s_%s" % (self.phase, self.name)
else:
script_config_name = self.name
self.script = getattr(self.backup_manager.config, script_config_name,
None)
self.exit_status = None
self.exception = None
def env_from_backup_info(self, backup_info):
"""
Prepare the environment for executing a script
:param BackupInfo backup_info: the backup metadata
"""
try:
previous_backup = self.backup_manager.get_previous_backup(
backup_info.backup_id)
if previous_backup:
previous_backup_id = previous_backup.backup_id
else:
previous_backup_id = ''
except UnknownBackupIdException:
previous_backup_id = ''
self.environment.update({
'BARMAN_BACKUP_DIR': backup_info.get_basebackup_directory(),
'BARMAN_BACKUP_ID': backup_info.backup_id,
'BARMAN_PREVIOUS_ID': previous_backup_id,
'BARMAN_STATUS': backup_info.status,
'BARMAN_ERROR': backup_info.error or '',
})
def env_from_wal_info(self, wal_info):
"""
Prepare the environment for executing a script
:param WalFileInfo wal_info: the WAL file metadata
"""
self.environment.update({
'BARMAN_SEGMENT': wal_info.name,
'BARMAN_FILE': wal_info.full_path,
'BARMAN_SIZE': str(wal_info.size),
'BARMAN_TIMESTAMP': str(wal_info.time),
'BARMAN_COMPRESSION': wal_info.compression or '',
})
def run(self):
"""
Run a hook script if configured.
This method must never throw any exception
"""
# noinspection PyBroadException
try:
if self.script:
_logger.debug("Attempt to run %s: %s", self.name, self.script)
cmd = Command(
self.script,
env_append=self.environment,
shell=True, check=False)
self.exit_status = cmd()
_logger.debug("%s returned %d", self.script, self.exit_status)
return self.exit_status
except Exception as e:
_logger.exception('Exception running %s', self.name)
self.exception = e
return None
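The way a hook script sees the BARMAN_* variables can be sketched by merging them into a copy of the current environment, as Command does with env_append (the variable values here are made up):

```python
import os
import subprocess

# Sketch of the environment handed to a hook script: the current
# environment plus the BARMAN_* keys prepared by reset().
env = os.environ.copy()
env.update({
    'BARMAN_HOOK': 'pre_backup_script',
    'BARMAN_SERVER': 'main',
})
out = subprocess.check_output(
    ['sh', '-c', 'echo "$BARMAN_HOOK for $BARMAN_SERVER"'],
    env=env)
print(out)
```

Because the base environment is copied rather than replaced, the script keeps access to PATH and the rest of the caller's environment.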
barman-1.3.0/barman/infofile.py
# Copyright (C) 2013-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
import ast
import os
import dateutil.parser
from barman import xlog
from barman.compression import identify_compression
class Field(object):
def __init__(self, name, dump=None, load=None, default=None, doc=None):
"""
Field descriptor to be used with a FieldListFile subclass.
The resulting field is like a normal attribute with
two optional associated functions: to_str and from_str
The Field descriptor can also be used as a decorator
class C(FieldListFile):
x = Field('x')
@x.dump
def x(val): return '0x%x' % val
@x.load
def x(val): return int(val, 16)
:param str name: the name of this attribute
:param callable dump: function used to dump the content to a disk
:param callable load: function used to reload the content from disk
:param default: default value for the field
        :param str doc: docstring of the field
"""
self.name = name
self.to_str = dump
self.from_str = load
self.default = default
self.__doc__ = doc
def __get__(self, obj, objtype=None):
if obj is None:
return self
if not hasattr(obj, '_fields'):
obj._fields = {}
return obj._fields.setdefault(self.name, self.default)
def __set__(self, obj, value):
if not hasattr(obj, '_fields'):
obj._fields = {}
obj._fields[self.name] = value
def __delete__(self, obj):
raise AttributeError("can't delete attribute")
    def dump(self, to_str):
        return type(self)(self.name, to_str, self.from_str,
                          self.default, self.__doc__)
    def load(self, from_str):
        return type(self)(self.name, self.to_str, from_str,
                          self.default, self.__doc__)
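The decorator usage described in the docstring can be exercised with a trimmed, self-contained copy of the descriptor. The copy exists only to make this example runnable; the authoritative implementation is the class above:

```python
class Field(object):
    """Trimmed copy of the descriptor above, kept only for illustration."""
    def __init__(self, name, dump=None, load=None, default=None, doc=None):
        self.name = name
        self.to_str = dump
        self.from_str = load
        self.default = default
        self.__doc__ = doc

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self  # accessed on the class: return the descriptor
        if not hasattr(obj, '_fields'):
            obj._fields = {}
        return obj._fields.setdefault(self.name, self.default)

    def __set__(self, obj, value):
        if not hasattr(obj, '_fields'):
            obj._fields = {}
        obj._fields[self.name] = value

    def dump(self, to_str):
        # note: default must be propagated, or decorated fields lose it
        return type(self)(self.name, to_str, self.from_str,
                          self.default, self.__doc__)

    def load(self, from_str):
        return type(self)(self.name, self.to_str, from_str,
                          self.default, self.__doc__)


class C(object):
    x = Field('x', default=0)

    @x.dump
    def x(val):
        return '0x%x' % val

    @x.load
    def x(val):
        return int(val, 16)
```

Each decorator rebinds the class attribute to a new Field carrying the extra converter, so `C.x.to_str(255)` yields `'0xff'` and `C.x.from_str('0xff')` yields 255, while instance access still behaves like a plain attribute with the default.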
class FieldListFile(object):
__slots__ = ('_fields', 'filename')
def __init__(self, **kwargs):
"""
Represent a predefined set of keys with the associated value.
        The constructor builds the object assigning every keyword argument to
        the corresponding attribute. If a provided keyword argument doesn't
        have a corresponding attribute, an AttributeError exception is raised.
This class is meant to be an abstract base class.
:raises: AttributeError
"""
self._fields = {}
self.filename = None
for name in kwargs:
field = getattr(type(self), name, None)
if isinstance(field, Field):
setattr(self, name, kwargs[name])
else:
raise AttributeError('unknown attribute %s' % name)
@classmethod
def from_meta_file(cls, filename):
"""
        Factory method that reads the specified file and builds
        an object from its content.
:param str filename: the file to read
"""
o = cls()
o.load(filename)
return o
def save(self, filename=None, file_object=None):
"""
Serialize the object to the specified file or file object
If a file_object is specified it will be used.
        If the filename is not specified, the one memorized in the
        filename attribute is used. If neither the filename attribute nor
        the parameter is set, a ValueError exception is raised.
        :param str filename: path of the file to write
        :param file file_object: a file-like object to write to
        :raises: ValueError
"""
if file_object:
info = file_object
else:
filename = filename or self.filename
if filename:
info = open(filename, 'w')
else:
info = None
if not info:
raise ValueError(
'either a valid filename or a file_object must be specified')
with info:
for name in sorted(vars(type(self))):
field = getattr(type(self), name)
value = getattr(self, name, None)
if isinstance(field, Field):
if callable(field.to_str):
value = field.to_str(value)
info.write("%s=%s\n" % (name, value))
def load(self, filename=None, file_object=None):
"""
Replaces the current object content with the one deserialized from
the provided file.
        This method sets the filename attribute.
        A ValueError exception is raised if the provided file contains any
        invalid line.
        :param str filename: path of the file to read
        :param file file_object: a file-like object to read from
        :raises: ValueError
"""
if file_object:
info = file_object
elif filename:
info = open(filename, 'r')
else:
raise ValueError(
'either filename or file_object must be specified')
# detect the filename if a file_object is passed
if not filename and file_object:
if hasattr(file_object, 'name'):
filename = file_object.name
# canonicalize filename
if filename:
self.filename = os.path.abspath(filename)
else:
self.filename = None
filename = '' # This is only for error reporting
with info:
for line in info:
# skip spaces and comments
if line.isspace() or line.rstrip().startswith('#'):
continue
# parse the line of form "key = value"
try:
name, value = [x.strip() for x in line.split('=', 1)]
                except ValueError:
raise ValueError('invalid line %s in file %s' % (
line.strip(), filename))
# use the from_str function to parse the value
field = getattr(type(self), name, None)
if value == 'None':
value = None
elif isinstance(field, Field) and callable(field.from_str):
value = field.from_str(value)
setattr(self, name, value)
def items(self):
"""
Return a generator returning a list of (key, value) pairs.
        If a field has a dump function defined, it will be used.
"""
for name in sorted(vars(type(self))):
field = getattr(type(self), name)
value = getattr(self, name, None)
if isinstance(field, Field):
if callable(field.to_str):
value = field.to_str(value)
yield (name, value)
def __repr__(self):
return "%s(%s)" % (
self.__class__.__name__,
', '.join(['%s=%r' % x for x in self.items()]))
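The on-disk format that save() writes and load() parses is a plain, sorted list of name=value lines, with blank lines and # comments skipped and the literal string None standing for a null value. A standalone sketch of that roundtrip (helper names are hypothetical; the dump/load converters of Field are omitted):

```python
import io

def save_fields(fields, stream):
    # one "name=value" line per field, in sorted order, as save() does
    for name in sorted(fields):
        stream.write("%s=%s\n" % (name, fields[name]))

def load_fields(stream):
    fields = {}
    for line in stream:
        # blank lines and comments are skipped, as in load()
        if line.isspace() or line.rstrip().startswith('#'):
            continue
        name, value = [x.strip() for x in line.split('=', 1)]
        # the literal string 'None' denotes a null value
        fields[name] = None if value == 'None' else value
    return fields
```

Without a from_str converter every value comes back as a string, which is why WalFileInfo declares `load=int` for its numeric fields.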
class WalFileInfo(FieldListFile):
"""
Metadata of a WAL file.
"""
__slots__ = ()
name = Field('name', doc='base name of WAL file')
full_path = Field('full_path', doc='complete path of the file')
size = Field('size', load=int, doc='WAL file size after compression')
time = Field('time', load=int, doc='WAL file modification time')
compression = Field('compression', doc='compression type')
@classmethod
def from_file(cls, filename, default_compression=None, **kwargs):
"""
Factory method to generate a WalFileInfo from a WAL file.
Every keyword argument will override any attribute from the provided
        file. If a keyword argument doesn't have a corresponding attribute,
        an AttributeError exception is raised.
:param str filename: the file to inspect
:param str default_compression: the compression to set if
the current schema is not identifiable.
"""
stat = os.stat(filename)
kwargs.setdefault('name', os.path.basename(filename))
kwargs.setdefault('full_path', os.path.abspath(filename))
kwargs.setdefault('size', stat.st_size)
kwargs.setdefault('time', stat.st_mtime)
if 'compression' not in kwargs:
kwargs['compression'] = identify_compression(filename) \
or default_compression
obj = cls(**kwargs)
obj.filename = "%s.meta" % filename
return obj
def to_xlogdb_line(self):
"""
Format the content of this object as a xlogdb line.
"""
return "%s\t%s\t%s\t%s\n" % (
self.name,
self.size,
self.time,
self.compression)
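Each xlogdb line is a tab-separated record of name, size, time and compression. A sketch of the format together with a matching parse (from_xlogdb_line here is illustrative; the real reader lives elsewhere in Barman):

```python
def to_xlogdb_line(name, size, time, compression):
    # tab-separated, one WAL file per line, exactly as the method above
    return "%s\t%s\t%s\t%s\n" % (name, size, time, compression)

def from_xlogdb_line(line):
    # illustrative inverse of the serialization above
    name, size, time, compression = line.rstrip('\n').split('\t')
    if compression == 'None':
        compression = None
    return name, int(size), float(time), compression
```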
class UnknownBackupIdException(Exception):
"""
    The searched backup_id doesn't exist
"""
class BackupInfoBadInitialisation(Exception):
"""
Exception for a bad initialization error
"""
class BackupInfo(FieldListFile):
    #: Possible statuses of a backup
EMPTY = 'EMPTY'
STARTED = 'STARTED'
FAILED = 'FAILED'
DONE = 'DONE'
STATUS_ALL = (EMPTY, STARTED, DONE, FAILED)
STATUS_NOT_EMPTY = (STARTED, DONE, FAILED)
#: Status according to retention policies
OBSOLETE = 'OBSOLETE'
VALID = 'VALID'
POTENTIALLY_OBSOLETE = 'OBSOLETE*'
NONE = '-'
RETENTION_STATUS = (OBSOLETE, VALID, POTENTIALLY_OBSOLETE, NONE)
version = Field('version', load=int)
pgdata = Field('pgdata')
# Parse the tablespaces as a literal Python list of tuple
# Output the tablespaces as a literal Python list of tuple
tablespaces = Field('tablespaces', load=ast.literal_eval, dump=repr)
# Timeline is an integer
timeline = Field('timeline', load=int)
begin_time = Field('begin_time', load=dateutil.parser.parse)
begin_xlog = Field('begin_xlog')
begin_wal = Field('begin_wal')
begin_offset = Field('begin_offset')
size = Field('size', load=int)
end_time = Field('end_time', load=dateutil.parser.parse)
end_xlog = Field('end_xlog')
end_wal = Field('end_wal')
end_offset = Field('end_offset')
status = Field('status', default=EMPTY)
server_name = Field('server_name')
error = Field('error')
mode = Field('mode')
config_file = Field('config_file')
hba_file = Field('hba_file')
ident_file = Field('ident_file')
__slots__ = ('server', 'config', 'backup_manager', 'backup_id')
def __init__(self, server, info_file=None, backup_id=None, **kwargs):
# Initialises the attributes for the object based on the predefined keys
"""
Stores meta information about a single backup
:param Server server:
:param file,str,None info_file:
:param str,None backup_id:
        :raise BackupInfoBadInitialisation: if the info_file content is
            invalid, if both info_file and backup_id are set, or if
            neither is set
        """
super(BackupInfo, self).__init__(**kwargs)
self.server = server
self.config = server.config
self.backup_manager = self.server.backup_manager
if backup_id:
# Cannot pass both info_file and backup_id
if info_file:
raise BackupInfoBadInitialisation(
'both info_file and backup_id parameters are set')
self.backup_id = backup_id
self.filename = self.get_filename()
# Check if a backup info file for a given server and a given ID
# already exists. If not, create it from scratch
if not os.path.exists(self.filename):
self.server_name = self.config.name
self.mode = self.backup_manager.name
else:
self.load(filename=self.filename)
elif info_file:
if hasattr(info_file, 'read'):
# We have been given a file-like object
self.load(file_object=info_file)
else:
# Just a file name
self.load(filename=info_file)
self.backup_id = self.detect_backup_id()
elif not info_file:
raise BackupInfoBadInitialisation(
'backup_id and info_file parameters are both unset')
def get_required_wal_segments(self):
"""
Get the list of required WAL segments for the current backup
"""
return xlog.enumerate_segments(self.begin_wal, self.end_wal,
self.version)
def get_list_of_files(self, target):
"""
Get the list of files for the current backup
"""
# Walk down the base backup directory
if target in ('data', 'standalone', 'full'):
for root, _, files in os.walk(self.get_basebackup_directory()):
for f in files:
yield os.path.join(root, f)
        if target == 'standalone':
# List all the WAL files for this backup
for x in self.get_required_wal_segments():
hash_dir = os.path.join(self.config.wals_directory,
xlog.hash_dir(x))
yield os.path.join(hash_dir, x)
if target in ('wal', 'full'):
for x, _ in self.server.get_wal_until_next_backup(self):
hash_dir = os.path.join(self.config.wals_directory,
xlog.hash_dir(x))
yield os.path.join(hash_dir, x)
def detect_backup_id(self):
"""
Detect the backup ID from the name of the parent dir of the info file
"""
return os.path.basename(os.path.dirname(self.filename))
def get_basebackup_directory(self):
"""
        Get the directory that contains the base backup, based on the
        backup ID and the server directory for base backups
"""
return os.path.join(self.config.basebackups_directory,
self.backup_id)
def get_filename(self):
"""
Get the default filename for the backup.info file based on
backup ID and server directory for base backups
"""
return os.path.join(self.get_basebackup_directory(), 'backup.info')
def set_attribute(self, key, value):
"""
Set a value for a given key
"""
setattr(self, key, value)
def save(self, filename=None, file_object=None):
if not file_object:
# Make sure the containing directory exists
filename = filename or self.filename
dir_name = os.path.dirname(filename)
if not os.path.exists(dir_name):
os.makedirs(dir_name)
super(BackupInfo, self).save(filename=filename, file_object=file_object)
def to_dict(self):
"""
Return the backup_info content as a simple dictionary
:return dict:
"""
result = dict(self.items())
result.update(backup_id=self.backup_id, server_name=self.server_name,
mode=self.mode, tablespaces=self.tablespaces)
return result
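The backup ID is recovered purely from the layout of the backup catalogue: as detect_backup_id() above shows, it is the name of the directory holding backup.info. A one-line illustration (the path in the test is made up):

```python
import os.path

def detect_backup_id(info_filename):
    # the backup ID is the name of the parent directory of backup.info
    return os.path.basename(os.path.dirname(info_filename))
```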
barman-1.3.0/barman/lockfile.py000644 000765 000024 00000010154 12273464541 017143 0ustar00mnenciastaff000000 000000 # Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman.  If not, see <http://www.gnu.org/licenses/>.
"""
This module is the lock manager for Barman
"""
import errno
import fcntl
import os
class LockFileException(Exception):
"""
LockFile Exception base class
"""
pass
class LockFileBusy(LockFileException):
"""
Raised when a lock file is not free
"""
pass
class LockFilePermissionDenied(LockFileException):
"""
Raised when a lock file is not accessible
"""
pass
class LockFile(object):
"""
Ensures that there is only one process which is running against a
specified LockFile.
It supports the Context Manager interface, allowing the use in with
statements.
    with LockFile('file.lock') as locked:
        if not locked:
            print "failed"
        else:
            # do something
    You can also use exceptions on failures:
    try:
        with LockFile('file.lock', raise_if_fail=True):
            # do something
    except LockFileBusy, e:
        print "failed to lock %s" % e
"""
def __init__(self, filename, raise_if_fail=False, wait=False):
self.filename = os.path.abspath(filename)
self.fd = None
self.raise_if_fail = raise_if_fail
self.wait = wait
def acquire(self, raise_if_fail=None, wait=None):
"""
Creates and holds on to the lock file.
        When raise_if_fail is True, a LockFileBusy exception is raised if
        the lock is held by someone else, and a LockFilePermissionDenied is
        raised when the user executing barman has insufficient rights for
        the creation of a LockFile.
        Returns True if the lock has been successfully acquired, False
        otherwise.
:param bool raise_if_fail: If True raise an exception on failure
:param bool wait: If True issue a blocking request
:returns bool: whether the lock has been acquired
"""
if self.fd:
return True
fd = None
try:
fd = os.open(self.filename, os.O_TRUNC | os.O_CREAT | os.O_RDWR,
0600)
flags = fcntl.LOCK_EX
            if wait is None:
                wait = self.wait
            if not wait:
                flags |= fcntl.LOCK_NB
fcntl.flock(fd, flags)
os.write(fd, ("%s\n" % os.getpid()).encode('ascii'))
self.fd = fd
return True
except (OSError, IOError), e:
if fd:
os.close(fd) # let's not leak file descriptors
if (raise_if_fail or
(raise_if_fail is None and self.raise_if_fail)):
if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
raise LockFileBusy(self.filename)
elif e.errno == errno.EACCES:
raise LockFilePermissionDenied(self.filename)
else:
raise
else:
return False
def release(self):
"""
Releases the lock.
If the lock is not held by the current process it does nothing.
"""
if not self.fd:
return
try:
fcntl.flock(self.fd, fcntl.LOCK_UN)
os.close(self.fd)
except (OSError, IOError):
pass
def __del__(self):
"""
Avoid stale lock files.
"""
self.release()
# Contextmanager interface
def __enter__(self):
return self.acquire()
def __exit__(self, exception_type, value, traceback):
self.release()
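The core of acquire() is an exclusive, non-blocking flock(2) plus writing the owner's PID into the file. A minimal standalone sketch of that step (Unix only; error handling reduced to "busy or not", unlike the full class above):

```python
import fcntl
import os
import tempfile

def try_lock(path):
    # open (creating if needed) the lock file, owner read/write only
    fd = os.open(path, os.O_TRUNC | os.O_CREAT | os.O_RDWR, 0o600)
    try:
        # exclusive, non-blocking: fails at once if someone holds the lock
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except (OSError, IOError):
        os.close(fd)  # don't leak the descriptor on failure
        return None
    # record the owner's PID, as acquire() does
    os.write(fd, ("%s\n" % os.getpid()).encode('ascii'))
    return fd
```

Because flock treats separately opened descriptors independently, even a second open of the same file (including one from the same process) finds the lock busy until the holder releases it.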
barman-1.3.0/barman/output.py000644 000765 000024 00000051374 12273464541 016724 0ustar00mnenciastaff000000 000000 # Copyright (C) 2013-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman.  If not, see <http://www.gnu.org/licenses/>.
"""
This module controls how the output of Barman will be rendered
"""
import inspect
import logging
import sys
from barman.infofile import BackupInfo
from barman.utils import pretty_size
__all__ = [
'error_occurred', 'debug', 'info', 'warning', 'error', 'exception',
'result', 'close_and_exit', 'close', 'set_output_writer',
'AVAILABLE_WRITERS', 'DEFAULT_WRITER', 'ConsoleOutputWriter',
'NagiosOutputWriter',
]
#: True if error or exception methods have been called
error_occurred = False
#: Exit code if error occurred
error_exit_code = 1
def _format_message(message, args):
"""
Format a message using the args list. The result will be equivalent to
message % args
    If the args list contains a dictionary as its only element the result will be
message % args[0]
:param str message: the template string to be formatted
:param tuple args: a list of arguments
:return: the formatted message
:rtype: str
"""
if len(args) == 1 and isinstance(args[0], dict):
return message % args[0]
elif len(args) > 0:
return message % args
else:
return message
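The three branches above can be exercised with a standalone copy (the name format_message is used here only to keep the example self-contained):

```python
def format_message(message, args):
    # mirrors _format_message above
    if len(args) == 1 and isinstance(args[0], dict):
        return message % args[0]  # named placeholders: %(key)s
    elif len(args) > 0:
        return message % args     # positional placeholders: %s, %d
    else:
        return message            # no arguments: return the template as-is
```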
def _put(level, message, *args, **kwargs):
"""
Send the message with all the remaining positional arguments to
the configured output manager with the right output level. The message will
be sent also to the logger unless explicitly disabled with log=False
No checks are performed on level parameter as this method is meant
to be called only by this module.
If level == 'exception' the stack trace will be also logged
:param str level:
:param str message: the template string to be formatted
:param tuple args: all remaining arguments are passed to the log formatter
:key bool log: whether to log the message
:key bool is_error: treat this message as an error
"""
# handle keyword-only parameters
log = kwargs.pop('log', True)
is_error = kwargs.pop('is_error', False)
if len(kwargs):
raise TypeError('%s() got an unexpected keyword argument %r'
% (inspect.stack()[1][3], kwargs.popitem()[0]))
if is_error:
global error_occurred
error_occurred = True
_writer.error_occurred()
# dispatch the call to the output handler
getattr(_writer, level)(message, *args)
# log the message as originating from caller's caller module
if log:
exc_info = False
if level == 'exception':
level = 'error'
exc_info = True
frm = inspect.stack()[2]
mod = inspect.getmodule(frm[0])
logger = logging.getLogger(mod.__name__)
log_level = logging.getLevelName(level.upper())
logger.log(log_level, message, *args, **{'exc_info': exc_info})
def _dispatch(obj, prefix, name, *args, **kwargs):
"""
Dispatch the call to the %(prefix)s_%(name) method of the obj object
:param obj: the target object
:param str prefix: prefix of the method to be called
:param str name: name of the method to be called
:param tuple args: all remaining positional arguments will be sent to target
:param dict kwargs: all remaining keyword arguments will be sent to target
:return: the result of the invoked method
:raise ValueError: if the target method is not present
"""
method_name = "%s_%s" % (prefix, name)
handler = getattr(obj, method_name, None)
if callable(handler):
return handler(*args, **kwargs)
else:
raise ValueError("The object %r does not have the %r method" % (
obj, method_name))
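This naming convention is what lets init() and result() below route each command to a writer: `init('check', ...)` becomes a call to the writer's `init_check` method. A self-contained sketch of the dispatch (DemoWriter is a hypothetical target used only for illustration):

```python
def dispatch(obj, prefix, name, *args, **kwargs):
    # same convention as _dispatch above: invoke obj.<prefix>_<name>(...)
    method_name = "%s_%s" % (prefix, name)
    handler = getattr(obj, method_name, None)
    if callable(handler):
        return handler(*args, **kwargs)
    raise ValueError("The object %r does not have the %r method"
                     % (obj, method_name))


class DemoWriter(object):
    # hypothetical writer used only for this illustration
    def init_check(self, server_name):
        return "Server %s:" % server_name
```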
def debug(message, *args, **kwargs):
"""
Output a message with severity 'DEBUG'
:key bool log: whether to log the message
"""
_put('debug', message, *args, **kwargs)
def info(message, *args, **kwargs):
"""
Output a message with severity 'INFO'
:key bool log: whether to log the message
"""
_put('info', message, *args, **kwargs)
def warning(message, *args, **kwargs):
"""
    Output a message with severity 'WARNING'
:key bool log: whether to log the message
"""
_put('warning', message, *args, **kwargs)
def error(message, *args, **kwargs):
"""
Output a message with severity 'ERROR'.
Also records that an error has occurred unless the ignore parameter is True.
:key bool ignore: avoid setting an error exit status (default False)
:key bool log: whether to log the message
"""
# ignore is a keyword-only parameter
ignore = kwargs.pop('ignore', False)
if not ignore:
kwargs.setdefault('is_error', True)
_put('error', message, *args, **kwargs)
def exception(message, *args, **kwargs):
"""
Output a message with severity 'EXCEPTION'
    If the raise_exception parameter doesn't evaluate to false, raise an exception:
- if raise_exception is callable raise the result of raise_exception()
- if raise_exception is an exception raise it
- else raise the last exception again
:key bool ignore: avoid setting an error exit status
:key raise_exception:
raise an exception after the message has been processed
:key bool log: whether to log the message
"""
# ignore and raise_exception are keyword-only parameters
ignore = kwargs.pop('ignore', False)
#noinspection PyNoneFunctionAssignment
raise_exception = kwargs.pop('raise_exception', None)
if not ignore:
kwargs.setdefault('is_error', True)
_put('exception', message, *args, **kwargs)
if raise_exception:
if callable(raise_exception):
#noinspection PyCallingNonCallable
raise raise_exception(message)
elif isinstance(raise_exception, BaseException):
raise raise_exception
else:
raise
def init(command, *args, **kwargs):
"""
Initialize the output writer for a given command.
    :param str command: name of the command being executed
:param tuple args: all remaining positional arguments will be sent
to the output processor
:param dict kwargs: all keyword arguments will be sent
to the output processor
"""
try:
_dispatch(_writer, 'init', command, *args, **kwargs)
except ValueError:
exception('The %s writer does not support the "%s" command',
_writer.__class__.__name__, command)
close_and_exit()
def result(command, *args, **kwargs):
"""
Output the result of an operation.
    :param str command: name of the command being executed
:param tuple args: all remaining positional arguments will be sent
to the output processor
:param dict kwargs: all keyword arguments will be sent
to the output processor
"""
try:
_dispatch(_writer, 'result', command, *args, **kwargs)
except ValueError:
exception('The %s writer does not support the "%s" command',
_writer.__class__.__name__, command)
close_and_exit()
def close_and_exit():
"""
Close the output writer and terminate the program.
If an error has been emitted the program will report a non zero return
value.
"""
close()
if error_occurred:
sys.exit(error_exit_code)
else:
sys.exit(0)
def close():
"""
Close the output writer.
"""
_writer.close()
def set_output_writer(new_writer, *args, **kwargs):
"""
Replace the current output writer with a new one.
The new_writer parameter can be a symbolic name or an OutputWriter object
:param new_writer: the OutputWriter name or the actual OutputWriter
:type: string or an OutputWriter
:param tuple args: all remaining positional arguments will be passed
to the OutputWriter constructor
:param dict kwargs: all remaining keyword arguments will be passed
to the OutputWriter constructor
"""
global _writer
_writer.close()
if new_writer in AVAILABLE_WRITERS:
_writer = AVAILABLE_WRITERS[new_writer](*args, **kwargs)
else:
_writer = new_writer
class ConsoleOutputWriter(object):
def __init__(self, debug_enabled=False):
"""
Default output writer that output everything on console.
If debug_enabled print the debug information on console.
:param bool debug_enabled:
"""
self._debug = debug_enabled
#: Used in check command to hold the check results
self.result_check_list = []
#: Used in status command to hold the status results
self.result_status_list = []
#: The minimal flag. If set the command must output a single list of
#: values.
self.minimal = False
def _out(self, message, args):
"""
Print a message on standard output
"""
print >> sys.stdout, _format_message(message, args)
def _err(self, message, args):
"""
Print a message on standard error
"""
print >> sys.stderr, _format_message(message, args)
def debug(self, message, *args):
"""
Emit debug.
"""
if self._debug:
self._err('DEBUG: %s' % message, args)
def info(self, message, *args):
"""
Normal messages are sent to standard output
"""
self._out(message, args)
def warning(self, message, *args):
"""
Warning messages are sent to standard error
"""
self._err('WARNING: %s' % message, args)
def error(self, message, *args):
"""
Error messages are sent to standard error
"""
self._err('ERROR: %s' % message, args)
def exception(self, message, *args):
"""
        Exception messages are sent to standard error
"""
self._err('EXCEPTION: %s' % message, args)
def error_occurred(self):
"""
Called immediately before any message method when the originating
call has is_error=True
"""
def close(self):
"""
Close the output channel.
Nothing to do for console.
"""
def result_backup(self, backup_info):
"""
Render the result of a backup.
Nothing to do for console.
"""
# TODO: evaluate to display something useful here
def _record_check(self, server_name, check, status, hint):
"""
Record the check line in result_check_map attribute
This method is for subclass use
        :param str server_name: the server being checked
:param str check: the check name
:param bool status: True if succeeded
:param str,None hint: hint to print if not None
"""
self.result_check_list.append(dict(
server_name=server_name, check=check, status=status, hint=hint))
if not status:
global error_occurred
error_occurred = True
def init_check(self, server_name):
"""
Init the check command
        :param str server_name: the server being checked
"""
self.info("Server %s:" % server_name)
def result_check(self, server_name, check, status, hint=None):
"""
Record a server result of a server check
and output it as INFO
        :param str server_name: the server being checked
:param str check: the check name
:param bool status: True if succeeded
:param str,None hint: hint to print if not None
"""
self._record_check(server_name, check, status, hint)
if hint:
self.info("\t%s: %s (%s)" %
(check, 'OK' if status else 'FAILED', hint))
else:
self.info("\t%s: %s" %
(check, 'OK' if status else 'FAILED'))
def init_list_backup(self, server_name, minimal=False):
"""
Init the list-backup command
        :param str server_name: the server whose backups are being listed
:param bool minimal: if true output only a list of backup id
"""
self.minimal = minimal
def result_list_backup(self, backup_info,
backup_size, wal_size,
retention_status):
"""
Output a single backup in the list-backup command
        :param BackupInfo backup_info: backup we are displaying
:param backup_size: size of base backup (with the required WAL files)
:param wal_size: size of WAL files belonging to this backup
(without the required WAL files)
:param retention_status: retention policy status
"""
# If minimal is set only output the backup id
if self.minimal:
self.info(backup_info.backup_id)
return
out_list = ["%s %s - "
% (backup_info.server_name, backup_info.backup_id)]
if backup_info.status == BackupInfo.DONE:
end_time = backup_info.end_time.ctime()
out_list.append('%s - Size: %s - WAL Size: %s' %
(end_time,
pretty_size(backup_size),
pretty_size(wal_size)))
if backup_info.tablespaces:
tablespaces = [("%s:%s" % (name, location))
for name, _, location in backup_info.tablespaces]
out_list.append(' (tablespaces: %s)' %
', '.join(tablespaces))
if retention_status:
out_list.append(' - %s' % retention_status)
else:
out_list.append(backup_info.status)
self.info(''.join(out_list))
def result_show_backup(self, backup_ext_info):
"""
Output all available information about a backup in show-backup command
The argument has to be the result of a Server.get_backup_ext_info() call
:param dict backup_ext_info: a dictionary containing the info to display
"""
data = dict(backup_ext_info)
self.info("Backup %s:", data['backup_id'])
self.info(" Server Name : %s", data['server_name'])
self.info(" Status : %s", data['status'])
if data['status'] == BackupInfo.DONE:
self.info(" PostgreSQL Version: %s", data['version'])
self.info(" PGDATA directory : %s", data['pgdata'])
if data['tablespaces']:
self.info(" Tablespaces:")
for name, oid, location in data['tablespaces']:
self.info(" %s: %s (oid: %s)", name, location, oid)
self.info("")
self.info(" Base backup information:")
self.info(" Disk usage : %s",
pretty_size(data['size'] + data[
'wal_size']))
self.info(" Timeline : %s", data['timeline'])
self.info(" Begin WAL : %s",
data['begin_wal'])
self.info(" End WAL : %s", data['end_wal'])
self.info(" WAL number : %s", data['wal_num'])
self.info(" Begin time : %s",
data['begin_time'])
self.info(" End time : %s", data['end_time'])
self.info(" Begin Offset : %s",
data['begin_offset'])
self.info(" End Offset : %s",
data['end_offset'])
self.info(" Begin XLOG : %s",
data['begin_xlog'])
self.info(" End XLOG : %s", data['end_xlog'])
self.info("")
self.info(" WAL information:")
self.info(" No of files : %s",
data['wal_until_next_num'])
self.info(" Disk usage : %s",
pretty_size(data['wal_until_next_size']))
self.info(" Last available : %s", data['wal_last'])
self.info("")
self.info(" Catalog information:")
self.info(" Retention Policy: %s",
data['retention_policy_status']
or 'not enforced')
self.info(" Previous Backup : %s",
data.setdefault('previous_backup_id', 'not available')
or '- (this is the oldest base backup)')
self.info(" Next Backup : %s",
data.setdefault('next_backup_id', 'not available')
or '- (this is the latest base backup)')
else:
if data['error']:
self.info(" Error: : %s",
data['error'])
def init_status(self, server_name):
"""
Init the status command
        :param str server_name: the server whose status is being shown
"""
self.info("Server %s:", server_name)
def result_status(self, server_name, status, description, message):
"""
Record a result line of a server status command
and output it as INFO
        :param str server_name: the server being checked
:param str status: the returned status code
:param str description: the returned status description
:param str,object message: status message. It will be converted to str
"""
message = str(message)
self.result_status_list.append(dict(
server_name=server_name, status=status,
description=description, message=message))
self.info("\t%s: %s", description, message)
def init_list_server(self, server_name, minimal=False):
"""
Init the list-server command
        :param str server_name: the server being listed
"""
self.minimal = minimal
def result_list_server(self, server_name, description=None):
"""
Output a result line of a list-server command
        :param str server_name: the server being listed
:param str,None description: server description if applicable
"""
if self.minimal or not description:
self.info("%s", server_name)
else:
self.info("%s - %s", server_name, description)
def init_show_server(self, server_name):
"""
Init the show-server command output method
:param str server_name: the server we are displaying
"""
self.info("Server %s:" % server_name)
def result_show_server(self, server_name, server_info):
"""
Output the results of the show-server command
:param str server_name: the server we are displaying
:param dict server_info: a dictionary containing the info to display
"""
for status, message in server_info.items():
self.info("\t%s: %s", status, message)
class NagiosOutputWriter(ConsoleOutputWriter):
"""
Nagios output writer.
This writer doesn't output anything to console.
On close it writes a nagios-plugin compatible status
"""
def _out(self, message, args):
"""
Do not print anything on standard output
"""
def _err(self, message, args):
"""
Do not print anything on standard error
"""
def close(self):
"""
Display the result of a check run as expected by Nagios.
Also set the exit code as 2 (CRITICAL) in case of errors
"""
issues = []
servers = []
for item in self.result_check_list:
if item['server_name'] not in servers:
servers.append(item['server_name'])
if not item['status'] and item['server_name'] not in issues:
issues.append(item['server_name'])
if len(issues) > 0:
print "BARMAN CRITICAL - %d server out of %d has issues" % \
(len(issues), len(servers))
global error_exit_code
error_exit_code = 2
else:
print "BARMAN OK - Ready to serve the Espresso backup"
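The close() method above folds the recorded check results into a single Nagios plugin status: exit code 0 (OK) when every check passed, 2 (CRITICAL) otherwise. A side-effect-free sketch of that reduction (function name illustrative; the real method prints and mutates the module's exit code instead of returning):

```python
def nagios_summary(result_check_list):
    # fold per-server check results into a Nagios status line and exit
    # code: 0 (OK) when every check passed, 2 (CRITICAL) otherwise
    servers, issues = [], []
    for item in result_check_list:
        if item['server_name'] not in servers:
            servers.append(item['server_name'])
        if not item['status'] and item['server_name'] not in issues:
            issues.append(item['server_name'])
    if issues:
        return ("BARMAN CRITICAL - %d server out of %d has issues"
                % (len(issues), len(servers)), 2)
    return ("BARMAN OK - Ready to serve the Espresso backup", 0)
```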
#: This dictionary acts as a registry of available OutputWriters
AVAILABLE_WRITERS = {
'console': ConsoleOutputWriter,
# nagios is not registered as it isn't a general purpose output writer
# 'nagios': NagiosOutputWriter,
}
#: The default OutputWriter
DEFAULT_WRITER = 'console'
#: the current active writer. Initialized according DEFAULT_WRITER on load
_writer = AVAILABLE_WRITERS[DEFAULT_WRITER]()
barman-1.3.0/barman/retention_policies.py000644 000765 000024 00000033363 12273464541 021260 0ustar00mnenciastaff000000 000000 # Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
'''
This module defines backup retention policies. A backup retention
policy in Barman is a user-defined policy for determining how long
backups and archived logs (WAL segments) need to be retained for media recovery.
You can define a retention policy in terms of backup redundancy
or a recovery window.
Barman retains the periodical backups required to satisfy
the current retention policy, and any archived WAL files required for complete
recovery of those backups.
'''
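The two policy forms described above appear in the configuration as strings such as `REDUNDANCY 2` or `RECOVERY WINDOW OF 4 WEEKS`. A minimal, standalone sketch of how they are parsed (the regexes mirror `RedundancyRetentionPolicy._re` and `RecoveryWindowRetentionPolicy._re` defined below; `parse_policy` is a hypothetical helper, not part of this module):

```python
import re

# Regexes mirroring the ones defined on the policy classes in this module
REDUNDANCY_RE = re.compile(r'^\s*redundancy\s+(\d+)\s*$', re.IGNORECASE)
WINDOW_RE = re.compile(
    r'^\s*recovery\s+window\s+of\s+(\d+)\s+(day|month|week)s?\s*$',
    re.IGNORECASE)

def parse_policy(optval):
    """Return ('redundancy', n) or ('window', n, unit), else None."""
    m = REDUNDANCY_RE.match(optval)
    if m:
        return ('redundancy', int(m.group(1)))
    m = WINDOW_RE.match(optval)
    if m:
        # unit is reduced to its first letter: 'd', 'w' or 'm'
        return ('window', int(m.group(1)), m.group(2)[0].lower())
    return None
```

Both patterns are case-insensitive and tolerate surrounding whitespace, matching how the factory at the bottom of this module tries each class in turn.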
from abc import ABCMeta, abstractmethod
from datetime import datetime, timedelta
import re
import logging
from barman.infofile import BackupInfo
_logger = logging.getLogger(__name__)
class RetentionPolicy(object):
'''Abstract base class for retention policies'''
__metaclass__ = ABCMeta
def __init__(self, mode, unit, value, context, server):
'''Constructor of the retention policy base class'''
self.mode = mode
self.unit = unit
self.value = int(value)
self.context = context
self.server = server
self._first_backup = None
self._first_wal = None
def report(self, context=None):
'''Report obsolete/valid objects according to the retention policy'''
if context is None:
context = self.context
if context == 'BASE':
return self._backup_report()
elif context == 'WAL':
return self._wal_report()
else:
raise ValueError('Invalid context %s' % context)
def backup_status(self, backup_id):
'''Report the status of a backup according to the retention policy'''
if self.context == 'BASE':
return self._backup_report()[backup_id]
else:
return BackupInfo.NONE
def first_backup(self):
'''Returns the first valid backup according to retention policies'''
if not self._first_backup:
self.report(context='BASE')
return self._first_backup
def first_wal(self):
'''Returns the first valid WAL according to retention policies'''
if not self._first_wal:
self.report(context='WAL')
return self._first_wal
@abstractmethod
def __str__(self):
'''String representation'''
pass
@abstractmethod
def debug(self):
'''Debug information'''
pass
@abstractmethod
def _backup_report(self):
'''Report obsolete/valid backups according to the retention policy'''
pass
@abstractmethod
def _wal_report(self):
'''Report obsolete/valid WALs according to the retention policy'''
pass
@classmethod
def create(cls, server, option, value):
'''
If given option and value from the configuration file match,
creates the retention policy object for the given server
'''
# using @abstractclassmethod from python3 would be better here
raise NotImplementedError(
'The class %s must override the create() class method' %
cls.__name__)
class RedundancyRetentionPolicy(RetentionPolicy):
'''
Retention policy based on redundancy, the setting that determines
how many periodic backups to keep. A redundancy-based retention
policy is contrasted with a retention policy that uses a recovery window.
'''
_re = re.compile(r'^\s*redundancy\s+(\d+)\s*$', re.IGNORECASE)
def __init__(self, context, value, server):
super(RedundancyRetentionPolicy, self
).__init__('redundancy', 'r', value, 'BASE', server)
assert (value >= 0)
def __str__(self):
return "REDUNDANCY %s" % (self.value)
def debug(self):
return "Redundancy: %s (%s)" % (self.value, self.context)
def _backup_report(self):
'''Report obsolete/valid backups according to the retention policy'''
report = dict()
backups = self.server.get_available_backups(BackupInfo.STATUS_NOT_EMPTY)
# Normalise the redundancy value (according to minimum redundancy)
redundancy = self.value
if redundancy < self.server.config.minimum_redundancy:
_logger.warning("Retention policy redundancy (%s) is lower than "
"the required minimum redundancy (%s). Enforce %s.",
redundancy, self.server.config.minimum_redundancy,
self.server.config.minimum_redundancy)
redundancy = self.server.config.minimum_redundancy
# Map the latest 'redundancy' DONE backups as VALID
# The remaining DONE backups are classified as OBSOLETE
# Non DONE backups are classified as NONE
# NOTE: reverse key orders (simulate reverse chronology)
i = 0
for bid in sorted(backups.iterkeys(), reverse=True):
if backups[bid].status == BackupInfo.DONE:
if i < redundancy:
report[bid] = BackupInfo.VALID
self._first_backup = bid
else:
report[bid] = BackupInfo.OBSOLETE
i = i + 1
else:
report[bid] = BackupInfo.NONE
return report
def _wal_report(self):
'''Report obsolete/valid WALs according to the retention policy'''
pass
@classmethod
def create(cls, server, context, optval):
# Detect Redundancy retention type
mtch = cls._re.match(optval)
if not mtch:
return None
value = int(mtch.groups()[0])
return cls(context, value, server)
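The classification loop in `_backup_report` above can be exercised on a toy catalog; the plain strings here stand in for the `BackupInfo` status constants:

```python
def classify_by_redundancy(backups, redundancy):
    """backups: {backup_id: status}; newer ids sort last.

    The newest `redundancy` DONE backups are VALID, older DONE
    backups are OBSOLETE, anything not DONE is NONE.
    """
    report = {}
    done_seen = 0
    for bid in sorted(backups, reverse=True):  # reverse chronology
        if backups[bid] == 'DONE':
            if done_seen < redundancy:
                report[bid] = 'VALID'
            else:
                report[bid] = 'OBSOLETE'
            done_seen += 1
        else:
            report[bid] = 'NONE'
    return report
```

Like the method above, the sketch counts only DONE backups against the redundancy quota, so a failed backup never displaces a valid one.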
class RecoveryWindowRetentionPolicy(RetentionPolicy):
'''
Retention policy based on recovery window. The DBA specifies a period of
time and Barman ensures retention of backups and archived WAL files required
for point-in-time recovery to any time during the recovery window.
The interval always ends with the current time and extends back in time
for the number of days specified by the user.
For example, if the retention policy is set for a recovery window of seven days,
and the current time is 9:30 AM on Friday, Barman retains the backups required
to allow point-in-time recovery back to 9:30 AM on the previous Friday.
'''
_re = re.compile(r"""
^\s*
recovery\s+window\s+of\s+ # recovery window of
(\d+)\s+(day|month|week)s? # N (day|month|week) with optional 's'
\s*$
""",
re.IGNORECASE | re.VERBOSE)
_kw = {'d':'DAYS', 'm':'MONTHS', 'w':'WEEKS'}
def __init__(self, context, value, unit, server):
super(RecoveryWindowRetentionPolicy, self
).__init__('window', unit, value, context, server)
assert (value >= 0)
assert (unit == 'd' or unit == 'm' or unit == 'w')
assert (context == 'WAL' or context == 'BASE')
# Calculates the time delta
if (unit == 'd'):
self.timedelta = timedelta(days=(self.value))
elif (unit == 'w'):
self.timedelta = timedelta(weeks=(self.value))
elif (unit == 'm'):
self.timedelta = timedelta(days=(31 * self.value))
def __str__(self):
return "RECOVERY WINDOW OF %s %s" % (self.value, self._kw[self.unit])
def debug(self):
return "Recovery Window: %s %s: %s (%s)" % (
self.value, self.unit, self.context,
self._point_of_recoverability())
def _point_of_recoverability(self):
'''
Based on the current time and the window, calculate the point
of recoverability, which will be then used to define the first
backup or the first WAL
'''
return datetime.now() - self.timedelta
def _backup_report(self):
'''Report obsolete/valid backups according to the retention policy'''
report = dict()
backups = self.server.get_available_backups(BackupInfo.STATUS_NOT_EMPTY)
# Map as VALID all DONE backups having end time lower than
# the point of recoverability. The older ones
# are classified as OBSOLETE.
# Non DONE backups are classified as NONE
found = False
valid = 0
# NOTE: reverse key orders (simulate reverse chronology)
for bid in sorted(backups.iterkeys(), reverse=True):
# We are interested in DONE backups only
if backups[bid].status == BackupInfo.DONE:
if found:
# Check minimum redundancy requirements
if valid < self.server.config.minimum_redundancy:
_logger.warning(
"Keeping obsolete backup %s for server %s "
"(older than %s) "
"due to minimum redundancy requirements (%s)",
bid, self.server.config.name,
self._point_of_recoverability(),
self.server.config.minimum_redundancy)
# We mark the backup as potentially obsolete
# as we must respect minimum redundancy requirements
report[bid] = BackupInfo.POTENTIALLY_OBSOLETE
self._first_backup = bid
valid = valid + 1
else:
# We mark this backup as obsolete
# (older than the first valid one)
_logger.info(
"Reporting backup %s for server %s as OBSOLETE "
"(older than %s)",
bid, self.server.config.name,
self._point_of_recoverability())
report[bid] = BackupInfo.OBSOLETE
else:
_logger.debug(
"Reporting backup %s for server %s as VALID "
"(newer than %s)",
bid, self.server.config.name,
self._point_of_recoverability())
# Backup within the recovery window
report[bid] = BackupInfo.VALID
self._first_backup = bid
valid = valid + 1
# TODO: Currently we use the backup local end time
# We need to make this more accurate
if backups[bid].end_time < self._point_of_recoverability():
found = True
else:
report[bid] = BackupInfo.NONE
return report
def _wal_report(self):
'''Report obsolete/valid WALs according to the retention policy'''
pass
@classmethod
def create(cls, server, context, optval):
# Detect Recovery Window retention type
match = cls._re.match(optval)
if not match:
return None
value = int(match.groups()[0])
unit = match.groups()[1][0].lower()
return cls(context, value, unit, server)
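The window-to-timedelta mapping used by this class (with months approximated as 31 days, as in the constructor above) can be sketched as standalone helpers; `window_timedelta` and `point_of_recoverability` are illustrative names, not part of this module:

```python
from datetime import datetime, timedelta

def window_timedelta(value, unit):
    """Mirror of the unit handling in RecoveryWindowRetentionPolicy:
    'd' -> days, 'w' -> weeks, 'm' -> approximated as 31 days."""
    if unit == 'd':
        return timedelta(days=value)
    if unit == 'w':
        return timedelta(weeks=value)
    if unit == 'm':
        return timedelta(days=31 * value)
    raise ValueError('unknown unit %r' % unit)

def point_of_recoverability(value, unit, now=None):
    # Backups ending before this instant fall outside the window
    now = now or datetime.now()
    return now - window_timedelta(value, unit)
```

With a window of one week and "now" at 9:30 on a Friday, the point of recoverability is 9:30 on the previous Friday, matching the example in the class docstring.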
class SimpleWALRetentionPolicy(RetentionPolicy):
'''Simple retention policy for WAL files (identical to the main one)'''
_re = re.compile(r'^\s*main\s*$', re.IGNORECASE)
def __init__(self, context, policy, server):
super(SimpleWALRetentionPolicy, self
).__init__('simple-wal', policy.unit, policy.value,
context, server)
# The referred policy must be of type 'BASE'
assert (self.context == 'WAL' and policy.context == 'BASE')
self.policy = policy
def __str__(self):
return "MAIN"
def debug(self):
return "Simple WAL Retention Policy (%s)" % (self.policy)
def _backup_report(self):
'''Report obsolete/valid backups according to the retention policy'''
pass
def _wal_report(self):
'''Report obsolete/valid backups according to the retention policy'''
return self.policy.report(context='WAL')
def first_wal(self):
'''Returns the first valid WAL according to retention policies'''
return self.policy.first_wal()
@classmethod
def create(cls, server, context, optval):
# Detect Redundancy retention type
match = cls._re.match(optval)
if not match:
return None
return cls(context, server.config.retention_policy, server)
class RetentionPolicyFactory(object):
'''Factory for retention policy objects'''
# Available retention policy types
policy_classes = [
RedundancyRetentionPolicy,
RecoveryWindowRetentionPolicy,
SimpleWALRetentionPolicy
]
@classmethod
def create(cls, server, option, value):
'''
Based on the given option and value from the configuration
file, creates the appropriate retention policy object
for the given server
'''
if option == 'wal_retention_policy':
context = 'WAL'
elif option == 'retention_policy':
context = 'BASE'
else:
raise ValueError('Unknown option for retention policy: %s' %
(option))
# Look for the matching rule
for policy_class in cls.policy_classes:
policy = policy_class.create(server, context, value)
if policy:
return policy
raise Exception('Cannot parse option %s: %s' % (option, value))
barman-1.3.0/barman/server.py
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
'''This module represents a Server.
Barman is able to manage multiple servers.
'''
from barman import xlog, output
from barman.infofile import BackupInfo, UnknownBackupIdException
from barman.lockfile import LockFile
from barman.backup import BackupManager
from barman.command_wrappers import Command
from barman.retention_policies import RetentionPolicyFactory, SimpleWALRetentionPolicy
import os
import logging
import psycopg2
from contextlib import contextmanager
import itertools
_logger = logging.getLogger(__name__)
class Server(object):
'''This class represents a server to backup'''
XLOG_DB = "xlog.db"
def __init__(self, config):
''' The Constructor.
:param config: the server configuration
'''
self.config = config
self.conn = None
self.server_txt_version = None
self.server_version = None
self.ssh_options = config.ssh_command.split()
self.ssh_command = self.ssh_options.pop(0)
self.ssh_options.extend("-o BatchMode=yes -o StrictHostKeyChecking=no".split())
self.backup_manager = BackupManager(self)
self.configuration_files = None
self.enforce_retention_policies = False
# Set bandwidth_limit
if self.config.bandwidth_limit:
try:
self.config.bandwidth_limit = int(self.config.bandwidth_limit)
except:
_logger.warning('Invalid bandwidth_limit "%s" for server "%s" (fallback to "0")'
% (self.config.bandwidth_limit, self.config.name))
self.config.bandwidth_limit = None
# set tablespace_bandwidth_limit
if self.config.tablespace_bandwidth_limit:
rules = {}
for rule in self.config.tablespace_bandwidth_limit.split():
try:
key, value = rule.split(':', 1)
value = int(value)
if value != self.config.bandwidth_limit:
rules[key] = value
except:
_logger.warning("Invalid tablespace_bandwidth_limit rule '%s'" % (rule,))
if len(rules) > 0:
self.config.tablespace_bandwidth_limit = rules
else:
self.config.tablespace_bandwidth_limit = None
# Set minimum redundancy (default 0)
if self.config.minimum_redundancy.isdigit():
self.config.minimum_redundancy = int(self.config.minimum_redundancy)
if self.config.minimum_redundancy < 0:
_logger.warning('Negative value of minimum_redundancy "%s" for server "%s" (fallback to "0")'
% (self.config.minimum_redundancy, self.config.name))
self.config.minimum_redundancy = 0
else:
_logger.warning('Invalid minimum_redundancy "%s" for server "%s" (fallback to "0")'
% (self.config.minimum_redundancy, self.config.name))
self.config.minimum_redundancy = 0
# Initialise retention policies
self._init_retention_policies()
def _init_retention_policies(self):
# Set retention policy mode
if self.config.retention_policy_mode != 'auto':
_logger.warning('Unsupported retention_policy_mode "%s" for server "%s" (fallback to "auto")'
% (self.config.retention_policy_mode, self.config.name))
self.config.retention_policy_mode = 'auto'
# If retention_policy is present, enforce them
if self.config.retention_policy:
# Check wal_retention_policy
if self.config.wal_retention_policy != 'main':
_logger.warning('Unsupported wal_retention_policy value "%s" for server "%s" (fallback to "main")'
% (self.config.wal_retention_policy, self.config.name))
self.config.wal_retention_policy = 'main'
# Create retention policy objects
try:
rp = RetentionPolicyFactory.create(self,
'retention_policy', self.config.retention_policy)
# Reassign the configuration value (we keep it in one place)
self.config.retention_policy = rp
_logger.debug('Retention policy for server %s: %s' % (
self.config.name, self.config.retention_policy))
try:
rp = RetentionPolicyFactory.create(self,
'wal_retention_policy', self.config.wal_retention_policy)
# Reassign the configuration value (we keep it in one place)
self.wal_retention_policy = rp
_logger.debug('WAL retention policy for server %s: %s' % (
self.config.name, self.config.wal_retention_policy))
except:
_logger.error('Invalid wal_retention_policy setting "%s" for server "%s" (fallback to "main")' % (
self.config.wal_retention_policy, self.config.name))
self.wal_retention_policy = SimpleWALRetentionPolicy(
'WAL', self.config.retention_policy, self)
self.enforce_retention_policies = True
except:
_logger.error('Invalid retention_policy setting "%s" for server "%s"' % (
self.config.retention_policy, self.config.name))
def check(self):
"""
Implements the 'server check' command and makes sure SSH and PostgreSQL
connections work properly. It checks also that backup directories exist
(and if not, it creates them).
"""
self.check_ssh()
self.check_postgres()
self.check_directories()
# Check retention policies
self.check_retention_policy_settings()
# Executes the backup manager set of checks
self.backup_manager.check()
def check_ssh(self):
"""
Checks SSH connection
"""
cmd = Command(self.ssh_command, self.ssh_options)
ret = cmd("true")
if ret == 0:
output.result('check', self.config.name, 'ssh', True)
else:
output.result('check', self.config.name, 'ssh', False,
'return code: %s' % ret)
def check_postgres(self):
"""
Checks PostgreSQL connection
"""
remote_status = self.get_remote_status()
if remote_status['server_txt_version']:
output.result('check', self.config.name, 'PostgreSQL', True)
else:
output.result('check', self.config.name, 'PostgreSQL', False)
return
if remote_status['archive_mode'] == 'on':
output.result('check', self.config.name, 'archive_mode', True)
else:
output.result('check', self.config.name, 'archive_mode', False,
"please set it to 'on'")
if remote_status['archive_command'] and\
remote_status['archive_command'] != '(disabled)':
output.result('check', self.config.name, 'archive_command', True)
else:
output.result('check', self.config.name, 'archive_command', False,
'please set it according to the documentation')
def _make_directories(self):
"""
Make backup directories in case they do not exist
"""
for key in self.config.KEYS:
if key.endswith('_directory') and hasattr(self.config, key):
val = getattr(self.config, key)
if val is not None and not os.path.isdir(val):
#noinspection PyTypeChecker
os.makedirs(val)
def check_directories(self):
"""
Checks backup directories and creates them if they do not exist
"""
try:
self._make_directories()
except OSError, e:
output.result('check', self.config.name, 'directories', False,
"%s: %s" % (e.filename, e.strerror))
else:
output.result('check', self.config.name, 'directories', True)
def check_retention_policy_settings(self):
"""
Checks retention policy setting
"""
if self.config.retention_policy and not self.enforce_retention_policies:
output.result('check', self.config.name,
'retention policy settings', False, 'see log')
else:
output.result('check', self.config.name,
'retention policy settings', True)
def status_postgres(self):
"""
Status of PostgreSQL server
"""
remote_status = self.get_remote_status()
if remote_status['server_txt_version']:
output.result('status', self.config.name,
"pg_version",
"PostgreSQL version",
remote_status['server_txt_version'])
else:
output.result('status', self.config.name,
"pg_version",
"PostgreSQL version",
"FAILED trying to get PostgreSQL version")
return
if remote_status['data_directory']:
output.result('status', self.config.name,
"data_directory",
"PostgreSQL Data directory",
remote_status['data_directory'])
output.result('status', self.config.name,
"archive_command",
"PostgreSQL 'archive_command' setting",
remote_status['archive_command']
or "FAILED (please set it according to the documentation)")
output.result('status', self.config.name,
"last_shipped_wal",
"Archive status",
"last shipped WAL segment %s" %
remote_status['last_shipped_wal']
if remote_status['last_shipped_wal']
else "No WAL segment shipped yet")
if remote_status['current_xlog']:
output.result('status', self.config.name,
"current_xlog",
"Current WAL segment",
remote_status['current_xlog'])
def status_retention_policies(self):
"""
Status of retention policies enforcement
"""
if self.enforce_retention_policies:
output.result('status', self.config.name,
"retention_policies",
"Retention policies",
"enforced "
"(mode: %s, retention: %s, WAL retention: %s)" % (
self.config.retention_policy_mode,
self.config.retention_policy,
self.config.wal_retention_policy))
else:
output.result('status', self.config.name,
"retention_policies",
"Retention policies",
"not enforced")
def status(self):
"""
Implements the 'server-status' command.
"""
if self.config.description:
output.result('status', self.config.name,
"description",
"Description", self.config.description)
self.status_postgres()
self.status_retention_policies()
# Executes the backup manager status info method
self.backup_manager.status()
def get_remote_status(self):
'''Get the status of the remote server'''
pg_settings = ('archive_mode', 'archive_command', 'data_directory')
pg_query_keys = ('server_txt_version', 'current_xlog')
result = dict.fromkeys(pg_settings + pg_query_keys, None)
try:
with self.pg_connect() as conn:
for name in pg_settings:
result[name] = self.get_pg_setting(name)
try:
cur = conn.cursor()
cur.execute("SELECT version()")
result['server_txt_version'] = cur.fetchone()[0].split()[1]
except:
result['server_txt_version'] = None
try:
cur = conn.cursor()
cur.execute('SELECT pg_xlogfile_name(pg_current_xlog_location())')
result['current_xlog'] = cur.fetchone()[0]
except:
result['current_xlog'] = None
result.update(self.get_pg_configuration_files())
except:
pass
cmd = Command(self.ssh_command, self.ssh_options)
result['last_shipped_wal'] = None
if result['data_directory'] and result['archive_command']:
archive_dir = os.path.join(result['data_directory'], 'pg_xlog', 'archive_status')
out = str(cmd.getoutput(None, 'ls', '-tr', archive_dir)[0])
for line in out.splitlines():
if line.endswith('.done'):
name = line[:-5]
if xlog.is_wal_file(name):
result['last_shipped_wal'] = line[:-5]
return result
def show(self):
"""
Shows the server configuration
"""
#Populate result map with all the required keys
result = dict([
(key, getattr(self.config, key))
for key in self.config.KEYS
])
remote_status = self.get_remote_status()
result.update(remote_status)
output.result('show_server', self.config.name, result)
@contextmanager
def pg_connect(self):
'''A generic function to connect to Postgres using Psycopg2'''
myconn = self.conn is None
if myconn:
self.conn = psycopg2.connect(self.config.conninfo)
self.server_version = self.conn.server_version
if (self.server_version >= 90000
and 'application_name=' not in self.config.conninfo):
cur = self.conn.cursor()
cur.execute('SET application_name TO barman')
cur.close()
try:
yield self.conn
finally:
if myconn:
self.conn.close()
self.conn = None
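The reuse pattern in `pg_connect` (open a connection only when none is cached, close it only in the invocation that opened it, so nested `with` blocks share one connection) can be sketched with a pluggable factory; `ConnectionCache` is a hypothetical stand-in, with `connect_fn` playing the role of `psycopg2.connect`:

```python
from contextlib import contextmanager

class ConnectionCache(object):
    """Sketch of the pg_connect reuse pattern with a pluggable factory."""
    def __init__(self, connect_fn):
        self.connect_fn = connect_fn
        self.conn = None

    @contextmanager
    def connection(self):
        opened_here = self.conn is None
        if opened_here:
            self.conn = self.connect_fn()
        try:
            yield self.conn
        finally:
            # Only the outermost user tears the connection down
            if opened_here:
                self.conn.close()
                self.conn = None
```

This is why methods such as `get_pg_setting` can freely call `pg_connect` even when invoked from inside another `pg_connect` block.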
def get_pg_setting(self, name):
'''Get a postgres setting with a given name
:param name: a parameter name
'''
with self.pg_connect() as conn:
try:
cur = conn.cursor()
cur.execute('SHOW "%s"' % name.replace('"', '""'))
return cur.fetchone()[0]
except:
return None
def get_pg_tablespaces(self):
'''Returns a list of tablespaces or None if not present'''
with self.pg_connect() as conn:
try:
cur = conn.cursor()
if self.server_version >= 90200:
cur.execute("SELECT spcname, oid, pg_tablespace_location(oid) AS spclocation FROM pg_tablespace WHERE pg_tablespace_location(oid) != ''")
else:
cur.execute("SELECT spcname, oid, spclocation FROM pg_tablespace WHERE spclocation != ''")
return cur.fetchall()
except:
return None
def get_pg_configuration_files(self):
'''Get postgres configuration files or None in case of error'''
if self.configuration_files:
return self.configuration_files
with self.pg_connect() as conn:
try:
cur = conn.cursor()
cur.execute("SELECT name, setting FROM pg_settings WHERE name IN ('config_file', 'hba_file', 'ident_file')")
self.configuration_files = {}
for cname, cpath in cur.fetchall():
self.configuration_files[cname] = cpath
return self.configuration_files
except:
return None
def pg_start_backup(self, backup_label, immediate_checkpoint):
"""
Execute a pg_start_backup
:param backup_label: label for the backup
:param immediate_checkpoint: Boolean for immediate checkpoint execution
"""
with self.pg_connect() as conn:
try:
cur = conn.cursor()
cur.execute('SELECT xlog_loc, (pg_xlogfile_name_offset(xlog_loc)).* from pg_start_backup(%s,%s) as xlog_loc',
(backup_label, immediate_checkpoint))
return cur.fetchone()
except:
return None
def pg_stop_backup(self):
'''Execute a pg_stop_backup'''
with self.pg_connect() as conn:
try:
cur = conn.cursor()
cur.execute('SELECT xlog_loc, (pg_xlogfile_name_offset(xlog_loc)).* from pg_stop_backup() as xlog_loc')
return cur.fetchone()
except:
return None
def delete_backup(self, backup):
'''Deletes a backup
:param backup: the backup to delete
'''
return self.backup_manager.delete_backup(backup)
def backup(self, immediate_checkpoint):
'''Performs a backup for the server'''
try:
# check required backup directories exist
self._make_directories()
except OSError, e:
output.error('failed to create %s directory: %s',
e.filename, e.strerror)
else:
self.backup_manager.backup(immediate_checkpoint)
def get_available_backups(self, status_filter=BackupManager.DEFAULT_STATUS_FILTER):
'''Get a list of available backups
:param status_filter: the status of backups to return, defaults to BackupManager.DEFAULT_STATUS_FILTER
'''
return self.backup_manager.get_available_backups(status_filter)
def get_last_backup(self, status_filter=BackupManager.DEFAULT_STATUS_FILTER):
'''
Get the last backup (if any) in the catalog
:param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup returned
'''
return self.backup_manager.get_last_backup(status_filter)
def get_first_backup(self, status_filter=BackupManager.DEFAULT_STATUS_FILTER):
'''
Get the first backup (if any) in the catalog
:param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup returned
'''
return self.backup_manager.get_first_backup(status_filter)
def list_backups(self):
"""
Lists all the available backups for the server
"""
status_filter = BackupInfo.STATUS_NOT_EMPTY
retention_status = self.report_backups()
backups = self.get_available_backups(status_filter)
for key in sorted(backups.iterkeys(), reverse=True):
backup = backups[key]
backup_size = 0
wal_size = 0
rstatus = None
if backup.status == BackupInfo.DONE:
wal_info = self.get_wal_info(backup)
backup_size = (backup.size or 0) + wal_info['wal_size']
wal_size = wal_info['wal_until_next_size']
if self.enforce_retention_policies and \
retention_status[backup.backup_id] != BackupInfo.VALID:
rstatus = retention_status[backup.backup_id]
output.result('list_backup', backup, backup_size, wal_size, rstatus)
def get_backup(self, backup_id):
'''Return the backup information for the given backup,
or None if its status is not empty
:param backup_id: the ID of the backup to return
'''
try:
backup = BackupInfo(self, backup_id=backup_id)
if backup.status in BackupInfo.STATUS_NOT_EMPTY:
return backup
return None
except:
return None
def get_previous_backup(self, backup_id):
'''Get the previous backup (if any) from the catalog
:param backup_id: the backup id from which return the previous
'''
return self.backup_manager.get_previous_backup(backup_id)
def get_next_backup(self, backup_id):
'''Get the next backup (if any) from the catalog
:param backup_id: the backup id from which return the next
'''
return self.backup_manager.get_next_backup(backup_id)
def get_required_xlog_files(self, backup, target_tli=None, target_time=None, target_xid=None):
'''Get the xlog files required for a backup'''
begin = backup.begin_wal
end = backup.end_wal
# If timeline isn't specified, assume it is the same timeline as the backup
if not target_tli:
target_tli, _, _ = xlog.decode_segment_name(end)
with self.xlogdb() as fxlogdb:
for line in fxlogdb:
name, _, stamp, _ = self.xlogdb_parse_line(line)
if name < begin: continue
tli, _, _ = xlog.decode_segment_name(name)
if tli > target_tli: continue
yield name
if name > end:
end = name
if target_time and target_time < stamp:
break
# return all the remaining history files
for line in fxlogdb:
name, _, stamp, _ = self.xlogdb_parse_line(line)
if xlog.is_history_file(name):
yield name
# TODO: merge with the previous
def get_wal_until_next_backup(self, backup):
'''Get the xlog files between backup and the next
:param backup: a backup object, the starting point to retrieve wals
'''
begin = backup.begin_wal
next_end = None
if self.get_next_backup(backup.backup_id):
next_end = self.get_next_backup(backup.backup_id).end_wal
backup_tli, _, _ = xlog.decode_segment_name(begin)
with self.xlogdb() as fxlogdb:
for line in fxlogdb:
name, size, _, _ = self.xlogdb_parse_line(line)
if name < begin: continue
tli, _, _ = xlog.decode_segment_name(name)
if tli > backup_tli: continue
if not xlog.is_wal_file(name): continue
if next_end and name > next_end:
break
# count
yield (name, size)
def get_wal_info(self, backup_info):
"""
Returns information about WALs for the given backup
:param BackupInfo backup_info: the target backup
"""
end = backup_info.end_wal
# counters
wal_info = dict.fromkeys(
('wal_num', 'wal_size',
'wal_until_next_num', 'wal_until_next_size'), 0)
wal_info['wal_last'] = None
for name, size in self.get_wal_until_next_backup(backup_info):
if name <= end:
wal_info['wal_num'] += 1
wal_info['wal_size'] += size
else:
wal_info['wal_until_next_num'] += 1
wal_info['wal_until_next_size'] += size
wal_info['wal_last'] = name
return wal_info
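The counter logic in `get_wal_info` can be exercised on a toy list of `(name, size)` pairs, relying on the fact that WAL segment names sort in WAL order; `wal_counters` is an illustrative helper, not part of this class:

```python
def wal_counters(segments, end):
    """Split (name, size) pairs at `end`: segments up to and including
    `end` belong to the backup, later ones to the until-next-backup window."""
    info = {'wal_num': 0, 'wal_size': 0,
            'wal_until_next_num': 0, 'wal_until_next_size': 0,
            'wal_last': None}
    for name, size in segments:
        if name <= end:
            info['wal_num'] += 1
            info['wal_size'] += size
        else:
            info['wal_until_next_num'] += 1
            info['wal_until_next_size'] += size
        info['wal_last'] = name
    return info
```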
def recover(self, backup, dest, tablespaces=[], target_tli=None, target_time=None, target_xid=None, target_name=None, exclusive=False, remote_command=None):
'''Performs a recovery of a backup'''
return self.backup_manager.recover(backup, dest, tablespaces, target_tli, target_time, target_xid, target_name, exclusive, remote_command)
def cron(self, verbose=True):
'''Maintenance operations
:param verbose: turn on verbose mode. default True
'''
return self.backup_manager.cron(verbose)
@contextmanager
def xlogdb(self, mode='r'):
"""
Context manager to access the xlogdb file.
This method uses locking to make sure only one process is accessing
the database at a time. The database file will be created
if it does not exist.
Usage example:
with server.xlogdb('w') as file:
file.write(new_line)
:param str mode: open the file with the required mode
(default read-only)
"""
if not os.path.exists(self.config.wals_directory):
os.makedirs(self.config.wals_directory)
xlogdb = os.path.join(self.config.wals_directory, self.XLOG_DB)
# If the file doesn't exist and it is required to read it,
# we open it in a+ mode, to be sure it will be created
if not os.path.exists(xlogdb) and mode.startswith('r'):
if '+' not in mode:
mode = "a%s+" % mode[1:]
else:
mode = "a%s" % mode[1:]
xlogdb_lock = xlogdb + ".lock"
with LockFile(xlogdb_lock, wait=True):
with open(xlogdb, mode) as f:
# execute the block nested in the with statement
yield f
# we are exiting the context
# make sure the data is written to disk
# http://docs.python.org/2/library/os.html#os.fsync
f.flush()
os.fsync(f.fileno())
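The flush-then-fsync discipline used by `xlogdb()` to protect the file against corruption can be sketched on its own (locking omitted for brevity; `durable_append` is an illustrative name):

```python
import os
from contextlib import contextmanager

@contextmanager
def durable_append(path):
    """After the caller's writes, force the data to disk
    before releasing the file."""
    with open(path, 'a') as f:
        yield f
        f.flush()          # push Python's buffer to the OS
        os.fsync(f.fileno())  # push the OS buffer to stable storage
```

`flush()` alone only empties the userspace buffer; the `fsync()` call is what guarantees the line survives a crash, which is the fix referenced in the ChangeLog entry about xlog.db corruption.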
def xlogdb_parse_line(self, line):
'''Parse a line from xlog catalogue
:param line: a line in the wal database to parse
'''
try:
name, size, stamp, compression = line.split()
except ValueError:
# Old format compatibility (no compression)
compression = None
try:
name, size, stamp = line.split()
except:
raise ValueError("cannot parse line: %r" % (line,))
return name, int(size), float(stamp), compression
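The two line formats accepted by `xlogdb_parse_line` can be demonstrated with a standalone mirror of the method (the sample lines below are made up for illustration):

```python
def parse_xlogdb_line(line):
    """Mirror of xlogdb_parse_line: the new format has four fields
    (name size stamp compression); the old format lacks compression."""
    try:
        name, size, stamp, compression = line.split()
    except ValueError:
        # Old format compatibility (no compression field)
        compression = None
        try:
            name, size, stamp = line.split()
        except ValueError:
            raise ValueError("cannot parse line: %r" % (line,))
    return name, int(size), float(stamp), compression
```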
    def report_backups(self):
        if not self.enforce_retention_policies:
            return dict()
        else:
            return self.config.retention_policy.report()

    def rebuild_xlogdb(self):
        """
        Rebuild the whole xlog database guessing it from the archive content.
        """
        return self.backup_manager.rebuild_xlogdb()

    def get_backup_ext_info(self, backup_info):
        """
        Return a dictionary containing all available information about a backup

        The result is equivalent to the sum of information from

         * BackupInfo object
         * the Server.get_wal_info() return value
         * the context in the catalog (if available)
         * the retention policy status

        :param backup_info: the target backup
        :rtype dict: all information about a backup
        """
        backup_ext_info = backup_info.to_dict()
        if backup_info.status == BackupInfo.DONE:
            try:
                previous_backup = self.backup_manager.get_previous_backup(
                    backup_ext_info['backup_id'])
                next_backup = self.backup_manager.get_next_backup(
                    backup_ext_info['backup_id'])
                if previous_backup:
                    backup_ext_info[
                        'previous_backup_id'] = previous_backup.backup_id
                else:
                    backup_ext_info['previous_backup_id'] = None
                if next_backup:
                    backup_ext_info['next_backup_id'] = next_backup.backup_id
                else:
                    backup_ext_info['next_backup_id'] = None
            except UnknownBackupIdException:
                # no next_backup_id and previous_backup_id items
                # means "Not available"
                pass
            backup_ext_info.update(self.get_wal_info(backup_info))
        if self.enforce_retention_policies:
            policy = self.config.retention_policy
            backup_ext_info['retention_policy_status'] = \
                policy.backup_status(backup_info.backup_id)
        else:
            backup_ext_info['retention_policy_status'] = None
        return backup_ext_info

    def show_backup(self, backup_info):
        """
        Output all available information about a backup

        :param backup_info: the target backup
        """
        backup_ext_info = self.get_backup_ext_info(backup_info)
        output.result('show_backup', backup_ext_info)
barman-1.3.0/barman/testing_helpers.py000644 000765 000024 00000006075 12273464541 020561 0ustar00mnenciastaff000000 000000
# Copyright (C) 2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
from datetime import datetime, timedelta

import mock

from barman.infofile import BackupInfo


def mock_backup_info(backup_id='1234567890',
                     begin_offset=40,
                     begin_time=None,
                     begin_wal='000000010000000000000002',
                     begin_xlog='0/2000028',
                     config_file='/pgdata/location/postgresql.conf',
                     end_offset=184,
                     end_time=None,
                     end_wal='000000010000000000000002',
                     end_xlog='0/20000B8',
                     error=None,
                     hba_file='/pgdata/location/pg_hba.conf',
                     ident_file='/pgdata/location/pg_ident.conf',
                     mode='default',
                     pgdata='/pgdata/location',
                     server_name='test_server',
                     size=12345,
                     status=BackupInfo.DONE,
                     tablespaces=[
                         ['tbs1', 16387, '/fake/location'],
                         ['tbs2', 16405, '/another/location'],
                     ],
                     timeline=1,
                     version=90302):
    if begin_time is None:
        begin_time = datetime.now() - timedelta(minutes=10)
    if end_time is None:
        end_time = datetime.now()

    # make a dictionary with all the arguments
    to_dict = dict(locals())

    # generate a property on the mock for every key in to_dict
    bi_mock = mock.Mock()
    for key in to_dict:
        setattr(bi_mock, key, to_dict[key])

    bi_mock.to_dict.return_value = to_dict

    return bi_mock
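The `dict(locals())` trick above snapshots every keyword argument into one dictionary (at function entry, `locals()` contains exactly the parameters), which is then mirrored onto a Mock. A minimal self-contained illustration of the technique; it uses the stdlib `unittest.mock` rather than the standalone `mock` package imported by this file, and `make_stub` is a hypothetical name:

```python
from unittest import mock


def make_stub(name='test_server', size=12345):
    # Snapshot all arguments: at function entry, locals() holds
    # exactly the parameter bindings.
    to_dict = dict(locals())
    # Expose every argument both as an attribute and via to_dict(),
    # mimicking the mock_backup_info pattern above.
    stub = mock.Mock()
    for key, value in to_dict.items():
        setattr(stub, key, value)
    stub.to_dict.return_value = to_dict
    return stub
```

This keeps the attribute view (`stub.size`) and the dictionary view (`stub.to_dict()`) in sync without listing the arguments twice.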
def mock_backup_ext_info(backup_info=None,
                         previous_backup_id=None,
                         next_backup_id=None,
                         wal_num=1,
                         wal_size=123456,
                         wal_until_next_num=18,
                         wal_until_next_size=2345678,
                         wal_last='000000010000000000000014',
                         retention_policy_status=None,
                         **kwargs):
    # make a dictionary with all the arguments
    ext_info = dict(locals())
    del ext_info['backup_info']

    if backup_info is None:
        backup_info = mock_backup_info(**kwargs)

    # merge the backup_info values
    ext_info.update(backup_info.to_dict())

    return ext_info
barman-1.3.0/barman/utils.py000644 000765 000024 00000010202 12273464541 016505 0ustar00mnenciastaff000000 000000
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
"""
This module contains utility functions used in Barman.
"""
import logging
import logging.handlers
import os
import pwd
import grp
_logger = logging.getLogger(__name__)
def drop_privileges(user):
    """
    Change the system user of the current python process.

    It will only work if called as root or as the target user.

    :param string user: target user
    :raise KeyError: if the target user doesn't exist
    :raise OSError: when the user change fails
    """
    pw = pwd.getpwnam(user)
    if pw.pw_uid == os.getuid():
        return
    groups = [e.gr_gid for e in grp.getgrall() if pw.pw_name in e.gr_mem]
    groups.append(pw.pw_gid)
    os.setgroups(groups)
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)
    os.setegid(pw.pw_gid)
    os.seteuid(pw.pw_uid)
    os.environ['HOME'] = pw.pw_dir
def mkpath(directory):
    """
    Recursively create a target directory.

    If the path already exists it does nothing.

    :param str directory: directory to be created
    """
    if not os.path.isdir(directory):
        os.makedirs(directory)
def configure_logging(
        log_file,
        log_level=logging.INFO,
        log_format="%(asctime)s %(name)s %(levelname)s: %(message)s"):
    """
    Configure the logging module

    :param str,None log_file: target file path. If None use standard error.
    :param int log_level: min log level to be reported in log file.
        Default to INFO
    :param str log_format: format string used for a log line.
        Default to "%(asctime)s %(name)s %(levelname)s: %(message)s"
    """
    warn = None
    handler = logging.StreamHandler()
    if log_file:
        log_file = os.path.abspath(log_file)
        log_dir = os.path.dirname(log_file)
        try:
            mkpath(log_dir)
            handler = logging.handlers.WatchedFileHandler(log_file)
        except (OSError, IOError):
            # fallback to standard error
            warn = "Failed opening the requested log file. " \
                   "Using standard error instead."
    formatter = logging.Formatter(log_format)
    handler.setFormatter(formatter)
    logging.root.addHandler(handler)
    if warn:
        # this will always be displayed because the default level is WARNING
        _logger.warn(warn)
    logging.root.setLevel(log_level)
def parse_log_level(log_level):
    """
    Convert a log level to its int representation as required by the
    logging module.

    :param log_level: an integer or a string
    :return: an integer or None if an invalid argument is provided
    """
    try:
        log_level_int = int(log_level)
    except ValueError:
        log_level_int = logging.getLevelName(str(log_level).upper())
    if isinstance(log_level_int, int):
        return log_level_int
    return None
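The subtlety here is that `logging.getLevelName` resolves in both directions: given a known level name it returns the numeric level, but given an unknown name it returns a string such as `'Level NOPE'`, which is why the `isinstance` check is needed. A self-contained copy to demonstrate:

```python
import logging


def parse_log_level(log_level):
    # Same logic as the function above: accept an int, a numeric
    # string, or a level name such as "DEBUG" (case-insensitive).
    try:
        return int(log_level)
    except ValueError:
        # getLevelName returns an int for known names and a string
        # like "Level NOPE" for unknown ones.
        level = logging.getLevelName(str(log_level).upper())
        return level if isinstance(level, int) else None
```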
def pretty_size(size, unit=1024):
    """
    This function returns a pretty representation of a size value

    :param int,long,float size: the number to prettify
    :param int unit: 1000 or 1024 (the default)
    :rtype: str
    """
    suffixes = ["B"] + [i + {1000: "B", 1024: "iB"}[unit] for i in "KMGTPEZY"]
    if unit == 1000:
        suffixes[1] = 'kB'  # special case kB instead of KB
    # cast to float to avoid losing decimals
    size = float(size)
    for suffix in suffixes:
        if size < unit or suffix == suffixes[-1]:
            if suffix == suffixes[0]:
                return "%d %s" % (size, suffix)
            else:
                return "%.1f %s" % (size, suffix)
        else:
            size /= unit
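A quick check of the suffix scaling, as a standalone copy of the function above: the value is divided by `unit` until it fits, walking the suffix ladder (KiB/MiB/... for 1024, kB/MB/... for 1000).

```python
def pretty_size(size, unit=1024):
    # Same algorithm as the function above: binary (IEC) suffixes for
    # unit=1024, decimal (SI) suffixes for unit=1000.
    suffixes = ["B"] + [i + {1000: "B", 1024: "iB"}[unit] for i in "KMGTPEZY"]
    if unit == 1000:
        suffixes[1] = 'kB'  # SI convention: lowercase k
    size = float(size)
    for suffix in suffixes:
        if size < unit or suffix == suffixes[-1]:
            if suffix == suffixes[0]:
                return "%d %s" % (size, suffix)  # bytes: no decimals
            return "%.1f %s" % (size, suffix)
        size /= unit
```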
barman-1.3.0/barman/version.py000644 000765 000024 00000001434 12273464541 017041 0ustar00mnenciastaff000000 000000
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
''' This module contains the current Barman version.
'''
__version__ = '1.3.0'
barman-1.3.0/barman/xlog.py000644 000765 000024 00000007447 12273464541 016333 0ustar00mnenciastaff000000 000000
# Copyright (C) 2011-2014 2ndQuadrant Italia (Devise.IT S.r.L.)
#
# This file is part of Barman.
#
# Barman is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Barman is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Barman. If not, see <http://www.gnu.org/licenses/>.
""" This module contains functions to retrieve information
about xlog files
"""
import re
# xlog file segment name parser (regular expression)
_xlog_re = re.compile(r'\b([\dA-Fa-f]{8})(?:([\dA-Fa-f]{8})([\dA-Fa-f]{8})(?:\.[\dA-Fa-f]{8}\.backup)?|\.history)\b')
# Taken from xlog_internal.h from PostgreSQL sources
XLOG_SEG_SIZE = 1 << 24
XLOG_SEG_PER_FILE = 0xffffffff // XLOG_SEG_SIZE
XLOG_FILE_SIZE = XLOG_SEG_SIZE * XLOG_SEG_PER_FILE
class BadXlogSegmentName(Exception):
""" Exception for a bad xlog name
"""
pass
def is_history_file(name):
    """
    Return True if the xlog is a .history file, False otherwise

    :param str name: the file name to test
    """
    match = _xlog_re.search(name)
    if match and match.group(0).endswith('.history'):
        return True
    return False


def is_backup_file(name):
    """
    Return True if the xlog is a .backup file, False otherwise

    :param str name: the file name to test
    """
    match = _xlog_re.search(name)
    if match and match.group(0).endswith('.backup'):
        return True
    return False


def is_wal_file(name):
    """
    Return True if the xlog is a regular xlog file, False otherwise

    :param str name: the file name to test
    """
    match = _xlog_re.search(name)
    if match \
            and not match.group(0).endswith('.backup') \
            and not match.group(0).endswith('.history'):
        return True
    return False
def decode_segment_name(name):
    """
    Retrieve the timeline, log ID and segment ID
    from the name of a xlog segment
    """
    match = _xlog_re.match(name)
    if not match:
        raise BadXlogSegmentName("invalid xlog segment name '%s'" % name)
    return [int(x, 16) if x else None for x in match.groups()]


def encode_segment_name(tli, log, seg):
    """
    Build the xlog segment name based on timeline, log ID and segment ID
    """
    return "%08X%08X%08X" % (tli, log, seg)


def encode_history_file_name(tli):
    """
    Build the history file name based on timeline
    """
    return "%08X.history" % (tli,)
def enumerate_segments(begin, end, version):
    """
    Get the list of xlog segments from begin to end (included)
    """
    begin_tli, begin_log, begin_seg = decode_segment_name(begin)
    end_tli, end_log, end_seg = decode_segment_name(end)

    # this method doesn't support timeline changes
    assert begin_tli == end_tli, (
        "Begin segment (%s) and end segment (%s) "
        "must have the same timeline part" % (begin, end))

    # Start from the first xlog and sequentially enumerate the segments
    # to the end
    cur_log, cur_seg = begin_log, begin_seg
    while cur_log < end_log or (cur_log == end_log and cur_seg <= end_seg):
        yield encode_segment_name(begin_tli, cur_log, cur_seg)
        cur_seg += 1
        if cur_seg > XLOG_SEG_PER_FILE or (
                version < 90300 and cur_seg == XLOG_SEG_PER_FILE):
            cur_seg = 0
            cur_log += 1
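The wraparound condition encodes a PostgreSQL behavior change: before 9.3, segment 0xFF of each log file was never written, so the counter wraps one step early; from 9.3 onward that segment is used. A standalone sketch of just that advance step (`next_segment` is an illustrative name, not a Barman function):

```python
XLOG_SEG_PER_FILE = 0xFF  # 0xffffffff // (1 << 24), as defined above


def next_segment(log, seg, version):
    # Advance one segment, wrapping to the next log file when the
    # segment counter overflows; before PostgreSQL 9.3 segment 0xFF
    # did not exist, so the wrap happens one step early.
    seg += 1
    if seg > XLOG_SEG_PER_FILE or (version < 90300 and
                                   seg == XLOG_SEG_PER_FILE):
        return log + 1, 0
    return log, seg
```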
def hash_dir(name):
    """
    Get the directory where the xlog segment will be stored
    """
    _, log, _ = decode_segment_name(name)
    if log is not None:
        return name[0:16]
    else:
        return ''
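The effect of `hash_dir` is that each segment lands in a directory named after its timeline and log fields (the first 16 hex digits), while history files, which have no log part, stay in the archive root. A simplified sketch of that mapping, under the assumption that the name has already been validated (unlike the real function, it does not decode the name):

```python
def hash_dir(name):
    # First 16 hex chars = timeline (8 digits) + log (8 digits).
    # History files have no log part, so they map to the root ('').
    if name.endswith('.history'):
        return ''
    return name[0:16]
```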