targetcli-3.0.pre4.1~ga55d018/COPYING0000664000000000000000000002363712443074002013551 0ustar Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. targetcli-3.0.pre4.1~ga55d018/debian/changelog0000644000000000000000000000030712443074002015575 0ustar targetcli (3.0.pre4.1~ga55d018) unstable; urgency=low * Generated from git commit a55d0180ce19cb9ce4e3276edbf0abc60782e4ea. 
-- Marc Fleischmann Sat, 13 Dec 2014 09:31:46 -0800 targetcli-3.0.pre4.1~ga55d018/debian/compat0000644000000000000000000000000212443074002015121 0ustar 5 targetcli-3.0.pre4.1~ga55d018/debian/control0000644000000000000000000000126112443074002015326 0ustar Source: targetcli Section: net Priority: optional Standards-Version: 3.9.2 Homepage: https://github.com/Datera/targetcli Maintainer: Jerome Martin Build-Depends: debhelper(>=7.0.50~), python(>=2.6.6-3~), python-rtslib(>=3.0~), python-configshell(>=1.5), python-prettytable Package: targetcli Architecture: all Depends: ${python:Depends}, ${misc:Depends}, python-configshell, python-rtslib(>=3.0~), python-prettytable, lsb-base(>=3.2-14) Provides: ${python:Provides} Conflicts: targetcli-frozen, rtsadmin-frozen, rtsadmin, lio-utils Description: CLI and interactive shell to manage Linux SCSI Targets . Part of the Linux Kernel SCSI Target's userspace management tools targetcli-3.0.pre4.1~ga55d018/debian/copyright0000644000000000000000000000106512443074002015660 0ustar Copyright (c) 2011-2014 by Datera, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. targetcli-3.0.pre4.1~ga55d018/debian/rules0000755000000000000000000000070012443074002015000 0ustar #!/usr/bin/make -f build_dir = build install_dir = debian/targetcli %: dh $@ --with python2 override_dh_auto_clean: # manually clean any *.pyc files rm -rf targetcli/*.pyc override_dh_auto_build: python setup.py build --build-base $(build_dir) override_dh_auto_install: python setup.py install --prefix=/usr --no-compile \ --install-layout=deb --root=$(CURDIR)/$(install_dir) override_dh_installinit: dh_installinit --name target targetcli-3.0.pre4.1~ga55d018/debian/targetcli.dirs0000644000000000000000000000007412443074002016565 0ustar /etc/target/ /var/target/ /var/target/pr/ /var/target/alua/ targetcli-3.0.pre4.1~ga55d018/debian/targetcli.docs0000644000000000000000000000002212443074002016545 0ustar README.md COPYING targetcli-3.0.pre4.1~ga55d018/debian/targetcli.manpages0000644000000000000000000000002012443074002017406 0ustar doc/targetcli.8 targetcli-3.0.pre4.1~ga55d018/doc/targetcli.80000664000000000000000000001573012443074002015325 0ustar .TH targetcli 8 .SH NAME .B targetcli .SH DESCRIPTION .B targetcli is a shell for viewing, editing, and saving the configuration of the kernel's target subsystem, also known as TCM/LIO. It enables the administrator to assign local storage resources backed by either files, volumes, local SCSI devices, or ramdisk, and export them to remote systems via network fabrics, such as iSCSI or FCoE. .P The configuration layout is tree-based, similar to a filesystem, and navigated in a similar manner. .SH USAGE .B targetcli .P .B targetcli [cmd] .P Invoke .B targetcli as root to enter the configuration shell, or follow with a command to execute but do not enter the shell. Use .B ls to list nodes below the current path. Moving around the tree is accomplished by the .B cd command, or by entering the new location directly. Objects are created using .BR create , removed using .BR delete . 
Use .B "help " for additional usage information. Tab-completion is available for commands and command arguments. .P Configuration changes in targetcli are made immediately to the underlying kernel target configuration. Settings will not be retained across reboot unless .B saveconfig is either explicitly called, or implicitly by exiting the shell with the global preference .B auto_save_on_exit set to .BR true , the default. .P .SH EXAMPLES To export a storage resource, 1) define a storage object using a backstore, then 2) export the object via a network fabric, such as iSCSI or FCoE. .SS DEFINING A STORAGE OBJECT WITHIN A BACKSTORE .B backstores/fileio create disk1 /disks/disk1.img 140M .br Creates a storage object named .I disk1 with the given path and size. .B targetcli supports common size abbreviations like 'M', 'G', and 'T'. .P In addition to the .I fileio backstore for file-backed volumes, other backstore types include .I iblock for block-device-backed volumes, and .I pscsi for volumes backed by local SCSI devices. .I rd_mcp backstore creates ram-based storage objects. See the built-in help for more details on the required parameters for each backstore type. .SS EXPORTING A STORAGE OBJECT VIA FCOE .B tcm_fc/ create 20:00:00:19:99:a8:34:bc .br Create an FCoE target with the given WWN. .B targetcli can tab-complete the WWN based on registered FCoE interfaces. If none are found, verify that they are properly configured and are shown in the output of .BR "fcoeadm -i" . .P .B tcm_fc/20:00:00:19:99:a8:34:bc/ .br If .B auto_cd_after_create is set to false, change to the configuration node for the given target, equivalent to giving the command prefixed by .BR cd . .P .B luns/ create /backstores/fileio/disk1 .br Create a new LUN for the interface, attached to a previously defined storage object. The storage object now shows up under the /backstores configuration node as .BR activated . .P .B acls/ create 00:99:88:77:66:55:44:33 .br Create an ACL (access control list), for defining the resources each initiator may access. The default behavior is to auto-map existing LUNs to the ACL; see help for more information. .P The LUN should now be accessible via FCoE. .SS EXPORTING A STORAGE OBJECT VIA ISCSI .B iscsi/ create .br Creates an iSCSI target with a default WWN. It will also create an initial target portal group called .IR tpg1 . .P .B iqn.2003-01.org.linux-iscsi.test2.x8664:sn123456789012/tpg1/ .br An example of changing to the configuration node for the given target's first target portal group (TPG). This is equivalent to giving the command prefixed by "cd". (Although more can be useful for certain setups, most configurations have a single TPG per target. In this case, configuring the TPG is equivalent to configuring the overall target.) .P .B portals/ create .br Add a portal, i.e. an IP address and TCP port via which the target can be contacted by initiators. Sane defaults are used if these are not specified. .P .B luns/ create /backstores/fileio/disk1 .br Create a new LUN in the TPG, attached to the storage object that has previously been defined. The storage object now shows up under the /backstores configuration node as activated. .P .B acls/ create iqn.1994-05.com.redhat:4321576890 .br Creates an ACL (access control list) for the given iSCSI initiator. .P .B acls/iqn.1994-05.com.redhat:4321576890 create 2 0 .br Gives the initiator access to the first exported LUN (lun0), which the initiator will see as lun2. 
The default is to give the initiator read/write access; if read-only access was desired, an additional "1" argument would be added to enable write-protect. (Note: if global setting .B auto_add_mapped_luns is true, this step is not necessary.) .P .B acls/iqn.1994-05.com.redhat:4321576890 set authentication=0 .br Purely for example, make the LUNs in the ACL accessible without authentication. See below for more information on configuring authentication. .SH OTHER COMMANDS .B saveconfig .br Save the current configuration settings to a file, from which settings will be restored if the system is rebooted. .P This command must be executed from the configuration root node. .P .B clearconfig .br Clears the entire current local configuration. The parameter .I confirm=true must also be given, as a precaution. .P This command is executed from the configuration root node. .P .B exit .br Leave the configuration shell. .SH SETTINGS GROUPS Settings are broken into groups. Individual settings are accessed by .B "get " and .BR "set =" , and the settings of an entire group may be displayed by .BR "get " . All except for .I global are associated with a particular configuration node. .SS GLOBAL Shell-related user-specific settings are in .IR global , and are visible from all configuration nodes. They are mostly shell display options, but some starting with .B auto_ affect shell behavior and may merit customization. Global settings are saved to ~/.targetcli/ upon exit, unlike other groups. .SS BACKSTORE-SPECIFIC .B attribute .br /backstore// configuration node. Contains values relating to the backstore and storage object. .P .SS ISCSI-SPECIFIC .B discovery_auth .br /iscsi configuration node. Set the normal and mutual authentication userid and password for discovery sessions, as well as enabling or disabling it. By default it is disabled -- no authentication is required for discovery. .P .B parameter .br /iscsi//tpgX configuration node. ISCSI-specific parameters such as .IR AuthMethod , .IR MaxBurstLength , .IR IFMarker , .IR DataDigest , and similar. .P .B attribute .br /iscsi//tpgX configuration node. Contains implementation-specific settings for the TPG, such as .BR authentication , to enforce or disable authentication for the full-feature phase (i.e. non-discovery). .P .B auth .br /iscsi//tpgX/acls/ configuration node. Set the userid and password for full-feature phase for this ACL. .SH FILES .B /etc/target/* .br .B /var/lib/target/* .SH AUTHOR Written by Jerome Martin . .br Man page written by Andy Grover . .SH REPORTING BUGS Report bugs to targetcli-3.0.pre4.1~ga55d018/README.md0000664000000000000000000000740712443074002013772 0ustar # TargetCLI TargetCLI is the LIO commmand-line administration tool for managing the Linux SCSI Target, and its third-party target fabric modules and backend storage objects. Based on RTSLib, it allows direct manipulation of all SCSI Target objects like storage objects, SCSI targets, TPGs, LUNs and ACLs, as well as manage startup system configuration for the SCSI Target subsystem. TargetCLI can be used either as a regular CLI tool, one command at a time, or as an interactive shell based on the python configshell CLI framework, with full auto-complete support and inline documentation. TargetCLI is part of the Linux Kernel's SCSI Target's userspace management tools. ## Installation TargetCLI is currently part of several Linux distributions. In most cases, simply installing the version packaged by your favorite Linux distribution is the best way to get it running. 
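For example, on common distributions installing the packaged version usually comes down to a single package manager command; the exact package name below is an assumption and may differ between distributions and releases:

    apt-get install targetcli    # Debian/Ubuntu and derivatives
    yum install targetcli        # Red Hat-based distributions

If your distribution does not ship a package, see the "Building from source" section below.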
## Migrating away from a targetcli < 3.0 setup Prior to version 3.x, TargetCLI relied on lio-utils for managing the target's startup configuration. Unfortunately, rtslib.Config - now used by targetcli and the `/etc/init.d/target` initscript for startup config save and restore operations - is incompatible with the legacy lio-utils config files. However, the new initscript has a special provision for this case. When attempting to start the target service when there is no `/etc/target/scsi_target.lio` configuration file present, a check is made to see if there is a target configuration currently running on the system. If there is, it is assumed to be a keeper, and the initscript will attempt to dump it to the system startup configuration file `/etc/target/scsi_target.lio`. When migrating from a lio-utils install, the trick is to prevent the old lio-utils package removal from stopping the service. For this, you can simply empty the lio-utils version of `/etc/init.d/target` - or the equivalent location for your Linux distribution. Example on Debian: echo > /etc/init.d/target dpkg --purge lio-utils apt-get install targetcli ## Building from source The packages are very easy to build and install from source as long as you're familiar with your Linux Distribution's package manager: 1. Clone the github repository for TargetCLI using `git clone https://github.com/Datera/targetcli.git`. 2. Make sure build dependencies are installed. To build TargetCLI, you will need: * GNU Make. * python 2.6 or 2.7 * A few python libraries: rtslib, configshell, lio-utils * Your favorite distribution's package developement tools, like rpm for Redhat-based systems or dpkg-dev and debhelper for Debian systems. 3. From the cloned git repository, run `make deb` to generate a Debian package, or `make rpm` for a Redhat package. 4. The newly built packages will be generated in the `dist/` directory. 5. To cleanup the repository, use `make clean` or `make cleanall` which also removes `dist/*` files. ## Documentation A manpage is provided with this packages, simply use `man targetcli` to get more information. An other good source of information is the http://linux-iscsi.org wiki, offering many resources such as a the TargetCLI User's Guide, online at http://linux-iscsi.org/wiki/targetcli. ## Mailing-list All contributions, suggestions and bugfixes are welcome! To report a bug, submit a patch or simply stay up-to-date on the Linux SCSI Target developments, you can subscribe to the Linux Kernel SCSI Target development mailing-list by sending an email message containing only `subscribe target-devel` to The archives of this mailing-list can be found online at http://dir.gmane.org/gmane.linux.scsi.target.devel ## Author LIO was developed by Datera, Inc. http://www.datera.io The original author and current maintainer is Jerome Martin targetcli-3.0.pre4.1~ga55d018/scripts/targetcli0000775000000000000000000000537212443074002016105 0ustar #!/usr/bin/python ''' Starts the targetcli CLI shell. This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. ''' import sys from os import getuid, listdir from targetcli import UIRoot from rtslib import RTSLibError from configshell import ConfigShell from rtslib import __version__ as rtslib_version from targetcli import __version__ as targetcli_version class TargetCLI(ConfigShell): default_prefs = {'color_path': 'magenta', 'color_command': 'cyan', 'color_parameter': 'magenta', 'color_keyword': 'cyan', 'completions_in_columns': True, 'logfile': None, 'loglevel_console': 'info', 'loglevel_file': 'debug9', 'color_mode': True, 'prompt_length': 30, 'tree_max_depth': 0, 'tree_status_mode': True, 'tree_round_nodes': True, 'tree_show_root': True, 'auto_enable_tpgt': True, 'auto_add_mapped_luns': True, 'auto_cd_after_create': False, 'legacy_hba_view': False } def main(): ''' Start the targetcli shell. ''' shell = TargetCLI('~/.targetcli') try: listdir("/sys/kernel/config/target") except: shell.con.display("The target service is not running.") exit() if getuid() == 0: is_root = True else: is_root = False if not is_root: shell.con.display("You are not root, disabling privileged commands.\n") root_node = UIRoot(shell, as_root=is_root) try: root_node.refresh() except RTSLibError, error: shell.con.display(shell.con.render_text(str(error), 'red')) if len(sys.argv) > 1: shell.run_cmdline(" ".join(sys.argv[1:])) sys.exit(0) shell.con.display("targetcli %s (rtslib %s)\n" "Copyright (c) 2011-2014 by Datera, Inc.\n" "All rights reserved." % (targetcli_version, rtslib_version)) shell.con.display('') shell.run_interactive() if __name__ == "__main__": main() targetcli-3.0.pre4.1~ga55d018/scripts/targetcli-ng0000775000000000000000000000374112443074002016505 0ustar #!/usr/bin/env python ''' This file is part of LIO(tm). Copyright (c) 2012-2014 by Datera, Inc. More information on www.datera.io. Original author: Jerome Martin Datera and LIO are trademarks of Datera, Inc., which may be registered in some jurisdictions. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
''' import sys from pyparsing import ParseException from rtslib.config import ConfigError from targetcli.cli_live import CliLive from targetcli.cli_config import CliConfig from targetcli.cli_logger import logger as log # TODO Add tests for non-interactive mode # TODO Add batch mode if stdin is not a terminal if __name__ == '__main__': try: args = sys.argv[1:] if not args: config = 'live' CliLive(interactive=True).cmdloop() elif args[0] == "configure" and len(args) == 1: config = 'candidate' CliConfig(interactive=True).cmdloop() elif args[0] == "configure": config = 'candidate' CliConfig(interactive=False).onecmd(" ".join(args[1:])) else: config = 'live' CliLive(interactive=False).onecmd(" ".join(args)) except IOError, e: log.critical("Failed to read %s configuration: %s" % (config, e)) log.info("Check your user permissions") except ParseException, e: log.critical("Failed to parse %s configuration: %s" % (config, e)) except ParseException, e: log.critical("Failed to validate %s configuration: %s" % (config, e)) targetcli-3.0.pre4.1~ga55d018/scripts/target.init0000775000000000000000000002052412443074002016353 0ustar #! /bin/sh ### BEGIN INIT INFO # Provides: target # Required-Start: $network $remote_fs $syslog # Required-Stop: $network $remote_fs $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: The Linux SCSI Target service ### END INIT INFO # PATH should only include /usr/* if it runs after the mountnfs.sh script PATH=/sbin:/usr/sbin:/bin:/usr/bin DESC="The Linux SCSI Target" NAME=target DAEMON=/usr/bin/targetcli DAEMON_ARGS="" SCRIPTNAME=/etc/init.d/$NAME CFS_BASE="/sys/kernel/config" CFS_TGT="${CFS_BASE}/target" CORE_MODS="target_core_mod target_core_pscsi target_core_iblock target_core_file" STARTUP_CONFIG="/etc/target/scsi_target.lio" # Read configuration variable file if it is present [ -r /etc/default/$NAME ] && . /etc/default/$NAME # How we log messages depends on the system if [ -r /lib/lsb/init-functions ]; then # LSB systems like Debian . /lib/lsb/init-functions elif [ -r /etc/init.d/functions ]; then # RHEL without optional redhat-lsb . /etc/init.d/functions fi is_func() { type $1 2>/dev/null | grep -q 'function' } log_success () { if is_func log_success_msg; then log_success_msg "$*" elif is_func success; then echo -n $*; success "$*"; echo else echo "[ ok ] $*" fi } log_failure () { if is_func log_failure_msg; then log_failure_msg "$*" elif is_func failure; then echo -n $*; failure "$*"; echo else echo "[FAIL] $* ... failed!" fi } log_warning () { if is_func log_warning_msg; then log_warning_msg "$*" elif is_func warning; then echo -n $*; warning "$*"; echo else echo "[warn] $* ... (warning)." fi } log_action () { if is_func log_action_msg; then log_action_msg "$*" elif is_func action; then echo -n $*; passed "$*"; echo else echo "[info] $*." fi } load_specfiles() { FABRIC_MODS=$(python << EOF from rtslib import list_specfiles, parse_specfile print(" ".join(["%(kernel_module)s:%(configfs_group)s" % parse_specfile(spec) for spec in list_specfiles()])) EOF ) } save_running_config() { python << EOF import rtslib config = rtslib.Config() config.load_live() config.save("${STARTUP_CONFIG}") EOF } check_install() { # Check the system installation INSTALL=ok python -c "from rtslib import Config" > /dev/null 2>&1 if [ $? != 0 ]; then log_failure "Cannot load rtslib" INSTALL=nok fi SYSTEM_DIRS="/var/target/pr /var/target/alua /etc/target" for DIR in ${SYSTEM_DIRS}; do if [ ! 
-d ${DIR} ]; then log_warning "Creating missing directory ${DIR}" mkdir -p ${DIR} fi done if [ "${INSTALL}" != ok ]; then exit 0 else log_action "${DESC} looks properly installed" fi } load_configfs() { modprobe configfs > /dev/null 2>&1 if [ "$?" != 0 ]; then log_failure "Failed to load configfs kernel module" return 1 fi mount -t configfs configfs ${CFS_BASE} > /dev/null 2>&1 case "$?" in 0) log_warning "The configfs filesystem was not mounted, consider adding it to fstab";; 32) log_action "The configfs filesystem is already mounted";; *) log_failure "Failed to mount configfs"; return 1;; esac } load_modules() { for MODULE in ${CORE_MODS}; do if [ ! -z "$(cat /proc/modules | grep ^${MODULE}\ )" ]; then log_warning "Core module ${MODULE} already loaded" else modprobe "${MODULE}" > /dev/null 2>&1 if [ "$?" != 0 ]; then log_failure "Failed to load core module ${MODULE}" return 1 else log_action "Loaded core module ${MODULE}" fi fi done for MOD_SPEC in ${FABRIC_MODS}; do MODULE="$(echo ${MOD_SPEC} | awk -F : '{print $1}')" if [ ! -z "$(cat /proc/modules | grep ^${MODULE}\ )" ]; then log_warning "Fabric module ${MODULE} already loaded" else modprobe "${MODULE}" > /dev/null 2>&1 if [ "$?" != 0 ]; then log_warning "Failed to load fabric module ${MODULE}" else log_action "Loaded fabric module ${MODULE}" fi fi done } unload_modules() { RETCODE=0 for MOD_SPEC in ${FABRIC_MODS}; do MODULE="$(echo ${MOD_SPEC} | awk -F : '{print $1}')" CFS_GROUP="${CFS_TGT}/$(echo ${MOD_SPEC} | awk -F : '{print $2}')" if [ ! -z "$(lsmod | grep ^${MODULE}\ )" ]; then rmdir "${CFS_GROUP}" > /dev/null 2>&1 if [ -d "${CFS_GROUP}" ]; then log_failure "Failed to remove ${CFS_GROUP}" RETCODE=1 else rmmod "${MODULE}" > /dev/null 2>&1 if [ "$?" != 0 ]; then log_failure "Failed to unload fabric module ${MODULE}" RETCODE=1 else log_action "Unloaded ${MODULE} fabric module" fi fi else log_warning "Fabric module ${MODULE} is not loaded" fi done MODULES="$(echo ${CORE_MODS} | tac -s ' ')" for MODULE in ${MODULES}; do if [ ! -z "$(lsmod | grep ^${MODULE}\ )" ]; then rmmod "${MODULE}" > /dev/null 2>&1 if [ "$?" != 0 ]; then log_failure "Failed to unload target core module ${MODULE}" RETCODE=1 else log_action "Unloaded ${MODULE} target core module" fi else log_warning "Target core module ${MODULE} is not loaded" fi done return "${RETCODE}" } load_config() { if [ $(cat /etc/target/scsi_target.lio 2>/dev/null | tr -d " \n\t" | wc -c) = 0 ]; then log_warning "Startup config ${STARTUP_CONFIG} is empty, skipping" elif [ -e "${STARTUP_CONFIG}" ]; then export __STARTUP_CONFIG="${STARTUP_CONFIG}" python 2> /dev/null << EOF import os, rtslib config = rtslib.Config() config.load(os.environ['__STARTUP_CONFIG'], allow_new_attrs=True) list(config.apply()) EOF if [ "$?" != 0 ]; then unset __STARTUP_CONFIG log_failure "Failed to load ${STARTUP_CONFIG}" return 1 else unset __STARTUP_CONFIG log_action "Loaded ${STARTUP_CONFIG}" fi else log_warning "No ${STARTUP_CONFIG} to load" fi } clear_config() { python 2> /dev/null << EOF from rtslib import Config config = Config() list(config.apply()) EOF if [ "$?" != 0 ]; then log_failure "Failed to clear configuration" return 1 else log_action "Cleared configuration" fi } do_start() { # If the target is running and we do not have a config file on the # system, dump the running system config to the config file. This # helps migrating away from lio-utils or other legacy/devel systems. if [ ! 
-e "${STARTUP_CONFIG}" ] && [ $(ls /sys/kernel/config/target/core/ 2>/dev/null | wc -l) -gt 1 ]; then log_action "Possible config migration detected, saving the " \ "running target to ${STARTUP_CONFIG}" save_running_config elif [ -d ${CFS_TGT} ]; then log_failure "Not starting: ${CFS_TGT} already exists" return 1 fi load_specfiles # Fill in FABRIC_MODS and CFS_GROUPS check_install && load_configfs && load_modules && load_config if [ "$?" != 0 ]; then log_failure "Could not start ${DESC}" return 1 else log_success "Started ${DESC}" fi } do_stop() { if [ ! -d ${CFS_TGT} ]; then log_success "${DESC} is already stopped" else load_specfiles # Fill in FABRIC_MODS and CFS_GROUPS clear_config && unload_modules if [ "$?" != 0 ]; then log_failure "Could not stop ${DESC}" return 1 else log_success "Stopped ${DESC}" fi fi } do_status() { if [ -d ${CFS_TGT} ]; then log_action "${DESC} is started" return 0 else log_action "${DESC} is stopped" return 1 fi } case "$1" in start) # FIXME This is because stop fails with systemd on debian jessie do_stop do_start ;; stop) do_stop ;; status) do_status ;; restart|force-reload) do_stop && do_start ;; *) echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2 exit 3 ;; esac targetcli-3.0.pre4.1~ga55d018/setup.py0000775000000000000000000000224512443074002014223 0ustar #! /usr/bin/env python ''' This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import re from distutils.core import setup import targetcli PKG = targetcli VERSION = str(PKG.__version__) (AUTHOR, EMAIL) = re.match('^(.*?)\s*<(.*)>$', PKG.__author__).groups() URL = PKG.__url__ LICENSE = PKG.__license__ SCRIPTS = ["scripts/targetcli", "scripts/targetcli-ng"] DESCRIPTION = PKG.__description__ setup(name=PKG.__name__, description=DESCRIPTION, version=VERSION, author=AUTHOR, author_email=EMAIL, license=LICENSE, url=URL, scripts=SCRIPTS, packages=[PKG.__name__], package_data = {'':[]}) targetcli-3.0.pre4.1~ga55d018/targetcli/cli_config.py0000664000000000000000000006335312443074002017141 0ustar ''' This file is part of LIO(tm). Copyright (c) 2012-2014 by Datera, Inc. More information on www.datera.io. Original author: Jerome Martin Datera and LIO are trademarks of Datera, Inc., which may be registered in some jurisdictions. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
''' import pyparsing as pp import prettytable as pt import os, sys, datetime, shutil from rtslib.config_filters import * from targetcli.cli import Cli, CliError from rtslib.config_live import dump_live from targetcli.cli_logger import logger as log from rtslib.config_parser import ConfigParser from rtslib.config import Config, ConfigError # TODO Add path vs pattern documentation # TODO Implement 'configure locked' mode # TODO Implement do_copy # TODO Implement do_comment # TODO Implement do_rollback # TODO When live summary is done, use tables for info # TODO Allow PATH=='top ... ...' to indicate top-level class CliConfig(Cli): ''' The lio target configuration command-line for edit mode. ''' config_path = "/etc/target/scsi_target.lio" history_path = os.path.expanduser("~/.targetcli/history_configure.txt") backup_dir = "/var/target" @classmethod def save_running_config(cls): if os.path.isfile(cls.config_path): # TODO remove/rotate older backups ts = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M:%S") backup_path = "%s/backup-%s.lio" % (cls.backup_dir, ts) log.info("Performing backup of startup configuration: %s" % backup_path) shutil.copyfile(cls.config_path, backup_path) log.info("Saving new startup configuration") # We reload the config from live before saving it, in # case this kernel has new attributes not yet in our # policy files config = Config() config.load_live() config.save(cls.config_path) def __init__(self, interactive=False): Cli.__init__(self, interactive, self.history_path) self.set_prompt() log.info("Syncing policy and configuration...") self.config = Config() self.config.load_live() self.edit_levels = [''] self.needs_save = False if interactive: log.warning("[edit] top-level") @property def needs_commit(self): if self.needs_save: return True keys = ('removed', 'major', 'major_obj', 'minor', 'minor_obj', 'created') diff = self.config.diff_live() for key in keys: if diff[key]: return True return False @property def attrs_missing(self): for attr in self.config.current.walk(filter_only_missing): return True return False def add_edit_level(self, path): self.edit_levels.append(path) log.warning("[edit] %s" % self.edit_levels[-1]) self.set_prompt(self.edit_levels[-1]) def del_edit_level(self): if len(self.edit_levels) == 1: raise CliError("Already at top-level") self.edit_levels.pop() if len(self.edit_levels) == 1: log.warning("[edit] top-level") else: log.warning("[edit] %s" % self.edit_levels[-1]) self.set_prompt(self.edit_levels[-1]) def set_prompt(self, string=''): ''' Sets the prompt from string. 
''' if not string: prompt = "config# " else: max_len = 25 if len(string) <= max_len: prompt = "%s# " % string else: prompt = "..%s# " % string[-max_len+3:] self.prompt = prompt def fmt_data_src(self, src): # TODO Get rid of this one in favor of lst_data_src def ts2str(ts): date = datetime.datetime.fromtimestamp(int(ts)) date = date.strftime('%Y-%m-%d %H:%M:%S') return date try: date = ts2str(src['timestamp']) except: date = "unknown date" if src['operation'] == 'set': fmt = ("(%s) set %s" % (date, src['data'].strip())) elif src['operation'] == 'delete': fmt = ("(%s) delete %s" % (date, src['pattern'].strip())) elif src['operation'] == 'load': mdate = ts2str(src['mtime']) fmt = ("(%s) load %s (modified %s)" % (date, src['filepath'], mdate)) elif src['operation'] == 'update': mdate = ts2str(src['mtime']) fmt = ("(%s) merge %s (modified %s)" % (date, src['filepath'], mdate)) elif src['operation'] == 'clear': fmt = ("(%s) cleared config" % date) elif src['operation'] == 'resync': fmt = ("(%s) Synchronized configuration with live system" % date) elif src['operation'] == 'init': fmt = ("(%s) created new configuration" % date) else: fmt = ("(%s) unknown operation" % date) return fmt def lst_data_src(self, src): def ts2str(ts): date = datetime.datetime.fromtimestamp(int(ts)) date = date.strftime('%Y-%m-%d %H:%M:%S') return date try: date = ts2str(src['timestamp']) except: date = "unknown date" if src['operation'] == 'set': lst = [date, 'set', src['data'].strip()] elif src['operation'] == 'delete': lst = [date, 'delete', src['pattern'].strip()] elif src['operation'] == 'load': mdate = ts2str(src['mtime']) lst = [date, 'load', "%s\nmodified %s" % (src['filepath'], mdate)] elif src['operation'] == 'update': mdate = ts2str(src['mtime']) lst = [date, 'merge', "%s\nmodified %s" % (src['filepath'], mdate)] elif src['operation'] == 'clear': lst = [date, 'clear', 'n/a'] elif src['operation'] == 'resync': lst = [date, 'resync', 'n/a'] elif src['operation'] == 'init': lst = [date, 'init', 'n/a'] else: lst = [date, 'unknown', 'n/a'] return lst def do_exit(self, options): ''' exit [now] Exits the current configuration edit level, and goes back to the previous edit level. If run on the top-level configuration, then exits config mode. If the now option is provided, no confirmation will be asked if there are uncommitted changes in the current candidate configuration when exiting the config mode. ''' options = self.parse(options, 'exit', pp.Optional('now'))[1:] if self.edit_levels[-1]: self.del_edit_level() exit = False elif self.needs_commit: log.warning("[edit] All non-commited changes will be lost!") if 'now' in options: log.warning("[edit] exiting anyway, as requested") exit = True else: exit = self.yes_no("Exit config mode anyway?", False) else: exit = True return exit def complete_exit(self, text, line, begidx, endidx): return self._complete_options(text, line, begidx, endidx, ['now']) def do_commit(self, options): ''' commit [check|interactive] Saves the current configuration to the system startup configuration file, after applying the changes to the running system. If the check option is provided, the current configuration will be checked but not saved or applied. If the interactive option is provided, the user will be able to confirm or skip every modification to the live system. 
''' # TODO Add [as DESCRIPTION] option # TODO Change to commit only current level unless 'all' option syntax = pp.Optional(pp.oneOf("check interactive")) options = self.parse(options, 'commit', syntax)[1:] if self.attrs_missing: self.do_missing('') raise CliError("Cannot validate configuration: " "required attributes not set") if not self.needs_commit: raise CliError("No changes to commit!") log.info("Validating configuration") for msg in self.config.verify(): log.info(msg) if 'check' in options: return do_it = self.yes_no("Apply changes and overwrite system " "configuration ?", False) if do_it is not False: log.info("Applying configuration") for msg in self.config.apply(): if 'interactive' in options: apply = self.yes_no("%s\nPlease confirm" % msg, True) if apply is False: log.warning("Aborted commit on user request: " "please verify system status") return else: log.info(msg) self.save_running_config() self.needs_save = False else: log.info("Cancelled configuration commit") def complete_commit(self, text, line, begidx, endidx): return self._complete_options(text, line, begidx, endidx, ['check', 'interactive']) def do_rollback(self, options): ''' rollback Return to the last committed configuration. Only the current configuration is affected. The commit command can then be used to apply the rolled-back configuration to the running system. ''' # TODO Add more control to directly rollback the n-th version, view # backup infos before rollback, etc. backups = sorted(n for n in os.listdir(self.backup_dir) if n.endswith(".lio")) if not backups: raise ConfigError("No backup found") else: backup_path = "%s/%s" % (self.backup_dir, backups[-1]) self.config.load(backup_path) os.remove(backup_path) log.warning("Rolled-back to %s" % backup_path) def do_edit(self, options): ''' edit PATH Changes the current configuration edit level to PATH, relative to the current configuration edit level. If PATH does not exist currently, it will be created. ''' level = self.edit_levels[-1] nodes = self.config.search("%s %s" % (level, options)) if not nodes: nodes_beyond = self.config.search("%s %s .*" % (level, options)) if nodes_beyond: raise CliError("Incomplete path: [%s]" % options) else: statement = "%s %s" % (self.edit_levels[-1], options) log.debug("Setting statement '%s'" % statement) self.config.set(statement) self.needs_save = True node = self.config.search(statement)[0] log.info("Created configuration level: %s" % node.path_str) self.add_edit_level(node.path_str) self.do_missing('') elif len(nodes) > 1: raise CliError("Ambiguous path: [%s]" % options) else: self.add_edit_level(nodes[0].path_str) self.do_missing('') def complete_edit(self, text, line, begidx, endidx): # TODO Add tips for new path return self._complete_path(text, line, begidx, endidx, self.edit_levels[-1]) def do_live(self, options): ''' live COMMAND Executes a single non-interactive command in live mode. ''' # TODO Add completion from targetcli.cli_live import CliLive CliLive(interactive=False).onecmd(options) def do_set(self, options): ''' set [PATH] OBJECT IDENTIFIER set [PATH] ATTRIBUTE VALUE Sets either an OBJECT IDENTIFIER (i.e. "disk mydisk") or an ATTRIBUTE VALUE (i.e. "enable yes"). 
''' if not options: raise CliError("Missing required options") statement = "%s %s" % (self.edit_levels[-1], options) log.debug("Setting statement '%s'" % statement) created = self.config.set(statement) for node in created: log.info("[%s] has been set" % node.path_str) if not created: log.info("Ignored: Current configuration already match statement") else: self.needs_save = True def complete_set(self, text, line, begidx, endidx): # TODO Add tips for new path return self._complete_path(text, line, begidx, endidx, self.edit_levels[-1]) def do_delete(self, options): ''' delete [PATH] Deletes either all LIO configuration objects at the current edit level, or only those under PATH relative to the current level. ''' path = "%s %s" % (self.edit_levels[-1], options) if not path.strip(): raise CliError("Cannot delete top-level configuration") nodes = self.config.search(path) if not nodes: # TODO Replace all "%s .*" forms with a try_hard arg to search nodes.extend(self.config.search("%s .*" % path)) if not nodes: raise CliError("No configuration objects at path: %s" % path.strip()) # FIXME Use a real tree walk with filter obj_no = 0 for node in nodes: if node.data['type'] == 'obj': obj_no +=1 if obj_no == 0: raise CliError("Can't delete attributes, only objects: %s" % path.strip()) do_it = self.yes_no("Delete %d objects(s) from current configuration?" % len(nodes), False) if do_it is not False: deleted = self.config.delete(path) if not deleted: deleted = self.config.delete("%s .*" % path) self.needs_save = True log.info("Deleted %d configuration object(s)" % obj_no) else: log.info("Cancelled: configuration not modified") def complete_delete(self, text, line, begidx, endidx): # TODO Filter for objects only, skip attributes return self._complete_path(text, line, begidx, endidx, self.edit_levels[-1]) def do_undo(self, options): ''' undo Undo the last configuration change done during this config mode session. The lio cli has unlimited undo levels capabilities within a session. To restore a previously commited configuration, see the rollback command. ''' options = self.parse(options, 'undo', '')[1:] data_src = self.config.current.data['source'] self.config.undo() self.needs_save = True # TODO Implement info option to view all previous ops # TODO Implement last N option for multiple undo log.info("[undo] %s" % self.fmt_data_src(data_src)) def do_info(self, options): ''' info [PATH] Displays edit history information about the current configuration level or all configuration items matching PATH. 
''' # TODO Add node type information path = "%s %s" % (self.edit_levels[-1], options) if not path.strip(): # This is just a test for tables table = pt.PrettyTable() table.hrules = pt.ALL table.field_names = ["change", "date", "type", "data"] table.align['data'] = 'l' changes = [] nb_ver = len(self.config._configs) for idx, cfg in enumerate(reversed(self.config._configs)): lst_src = self.lst_data_src(cfg.data['source']) table.add_row(["%03d" % (idx + 1)] + lst_src) # FIXME Use term width to compute these table.max_width["date"] = 10 table.max_width["data"] = 43 sys.stdout.write("%s\n" % table.get_string()) else: nodes = self.config.search(path) if not nodes: # TODO Replace all "%s .*" forms with a try_hard arg to search nodes.extend(self.config.search("%s .*" % path)) if not nodes: raise CliError("Path does not exist: %s" % path.strip()) infos = [] for node in nodes: if node.data.get('required'): req = "(required attribute) " else: req = "" path = node.path_str infos.append("%s[%s]\nLast change: %s" % (req, path, self.fmt_data_src(node.data['source']))) log.info("\n\n".join(infos)) def complete_info(self, text, line, begidx, endidx): return self._complete_path(text, line, begidx, endidx, self.edit_levels[-1]) def do_clear(self, options): ''' clear Clears the current configuration. This removes all current objects and attributes from the configuration. ''' options = self.parse(options, 'clear', '')[1:] self.config.clear() log.info("Configuration cleared") def do_load(self, options): ''' load live|FILE_PATH Replaces the current configuration with the contents of FILE_PATH. If any error happens while doing so, the current configuration will be fully rolled back. If live is used instead of FILE_PATH, the configuration from the live system will be used instead. ''' # TODO Add completion for filepath # TODO Add a filepath type to policy and also a parser we can use here tok_string = (pp.QuotedString('"') | pp.QuotedString("'") | pp.Word(pp.printables, excludeChars="{}#'\";")) options = self.parse(options, 'load', tok_string)[1:] src = options[0] if src == 'live': if self.yes_no("Replace the current configuration with the " "running configuration?", False) is not False: self.config.load_live() else: log.info("Cancelled: configuration not modified") else: if self.yes_no("Replace the current configuration with %s?" % src, False) is not False: self.config.load(src) else: log.info("Cancelled: configuration not modified") def complete_load(self, text, line, begidx, endidx): # TODO Add filename support return self._complete_options(text, line, begidx, endidx, ['live']) def do_merge(self, options): ''' merge live|FILE_PATH Merges the contents of FILE_PATH with the current configuration. In case of conflict, values from FILE_PATH will be used. If any error happens while doing so, the current configuration will be fully rolled back. If live is used instead of FILE_PATH, the configuration from the live system will be used instead. ''' # TODO Add completion for filepath # TODO Add a filepath type to policy and also a parser we can use here tok_string = (pp.QuotedString('"') | pp.QuotedString("'") | pp.Word(pp.printables, excludeChars="{}#'\";")) options = self.parse(options, 'merge', tok_string)[1:] src = options[0] if src == 'live': if self.yes_no("Merge the running configuration with " "the current configuration?", False) is not False: self.config.set(dump_live()) else: log.info("Cancelled: configuration not modified") else: if self.yes_no("Merge %s with the current configuration?" 
% src, False) is not False: self.config.update(src) else: log.info("Cancelled: configuration not modified") def complete_merge(self, text, line, begidx, endidx): # TODO Add filename support return self._complete_options(text, line, begidx, endidx, ['live']) def do_dump(self, options): ''' dump FILE_PATH [PATH|all] Dumps a copy of either the current configuration level or the configuration at PATH to FILE_PATH. If PATH is 'all', then the top-level configuration will be dumped. ''' options = options.split() if len(options) < 1: raise CliError("Syntax error: expected at least one option") filepath = options.pop(0) if not filepath.startswith('/'): raise CliError("Expected an absolute file path") path = " ".join(options) if path.strip() == 'all': path = '' else: path = ("%s %s" % (self.edit_levels[-1], path)).strip() self.config.save(filepath, path) if not path: path_desc = 'all' else: path_desc = path # FIXME Accept "half-node" path log.info("Dumped [%s] to %s" % (path_desc, filepath)) def complete_dump(self, text, line, begidx, endidx): options = line.split()[1:] if len(options) < 1: return self._complete_filepath(text, options[0], begidx, endidx) else: # FIXME This is broken return self._complete_path(text, " ".join(options[1:]), begidx, endidx, self.edit_levels[-1]) def do_show(self, options): ''' show [all] [PATH] Shows the current candidate configuration for PATH, relative to the current edit level. Note that attributes with default values will be filrered out by default, unless the all option is used. ''' if options and options.split()[0] == 'all': options = " ".join(options.split()[1:]) node_filter = lambda x:x else: node_filter = filter_no_default path = ("%s %s" % (self.edit_levels[-1], options)).strip() config = self.config.dump(path, node_filter) if config is None: config = self.config.dump("%s .*" % path, node_filter) if config is not None: sys.stdout.write("%s\n" % config) else: log.error("No such path in current configuration: %s" % path) def complete_show(self, text, line, begidx, endidx): # TODO add all option return self._complete_path(text, line, begidx, endidx, self.edit_levels[-1]) def do_missing(self, options): ''' missing [PATH] Shows all missing required attribute values in the current candidate configuration for PATH, relative to the current edit level. ''' node_filter = filter_only_missing path = ("%s %s" % (self.edit_levels[-1], options)).strip() if not path: path = '.*' trees = self.config.search(path) if not trees: trees = self.config.search("%s .*" % path) if not trees: raise CliError("No such path: %s" % path) missing = [] for tree in trees: for attr in tree.walk(node_filter): missing.append(attr) if not options: path = "current configuration" if not missing: log.warning("No missing attributes values under %s" % path) else: log.warning("Missing attributes values under %s:" % path) for attr in missing: log.info(" %s" % attr.path_str) sys.stdout.write("\n") def complete_missing(self, text, line, begidx, endidx): return self._complete_path(text, line, begidx, endidx, self.edit_levels[-1]) def do_diff(self, options): ''' diff Shows all differences between the current configuration and the live running configuration. 
''' options = self.parse(options, 'diff', '')[1:] diff = self.config.diff_live() has_diffs = False if diff['removed']: has_diffs = True log.warning("Objects removed in the current configuration:") for node in diff['removed']: log.info(" %s" % node.path_str) if diff['created']: has_diffs = True log.warning("New objects in the current configuration:") for node in diff['created']: log.info(" %s" % node.path_str) if diff['major']: has_diffs = True log.warning("Major attribute changes in the current configuration:") for node in diff['major']: log.info(" %s" % node.path_str) if diff['minor']: has_diffs = True log.warning("Minor attribute changes in the current configuration:") for node in diff['minor']: log.info(" %s" % node.path_str) if not has_diffs: log.warning("Current configuration is in sync with live system") else: sys.stdout.write("\n") targetcli-3.0.pre4.1~ga55d018/targetcli/cli_live.py0000664000000000000000000001137512443074002016630 0ustar ''' This file is part of LIO(tm). Copyright (c) 2012-2014 by Datera, Inc. More information on www.datera.io. Original author: Jerome Martin Datera and LIO are trademarks of Datera, Inc., which may be registered in some jurisdictions. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import os, sys import pyparsing as pp from rtslib.config_filters import * from targetcli.cli import Cli, CliError from targetcli.cli_config import CliConfig from targetcli.cli_logger import logger as log from rtslib.config import Config, ConfigError # TODO Implement do_summary using tables + color # TODO Implement sum for PR # TODO Implement sum for initiator sessions # TODO Implement sum for alua metadata # TODO Implement sum + mgmt for fabric modules # TODO Implement sum for network BW + portals # TODO Implement sum for disk IO class CliLive(Cli): ''' The lio target configuration command-line for live mode. ''' history_path = os.path.expanduser("~/.targetcli/history_live.txt") intro = ("\nWelcome to the lio target interactive shell.\n" "Copyright (c) 2012-2014 by Datera, Inc.\n" "Enter '?' to list available commands.\n") def __init__(self, interactive=False): Cli.__init__(self, interactive, self.history_path) self.prompt = "live> " self.do_resync() def do_exit(self, options): ''' exit Exits the lio target configuration shell. ''' options = self.parse(options, 'exit', '') return True def do_resync(self, options=''): ''' resync Re-synchronizes the cli with the live running configuration. This could be useful in rare cases where manual changes have been made to the underlying configfs structure for debugging purposes. ''' options = self.parse(options, 'resync', '') log.info("Syncing policy and configuration...") # FIXME Investigate bug in ConfigTree code: error if loading live twice # without recreating the Config object. self.config = Config() self.config.load_live() def do_configure(self, options): ''' configure Switch to config mode. In this mode, you can safely edit a candidate configuration for the system, and commit it only when it is ready. 
''' options = self.parse(options, 'configure', '') if not self.interactive: raise CliError("Cannot switch to config mode when running " "non-interactively.") else: self.save_history() self.clear_history() # FIXME Preserve CliConfig session state, notably undo history CliConfig(interactive=True).cmdloop() self.clear_history() self.load_history() self.do_resync() log.warning("[live] Back to live mode") def do_show(self, options): ''' show [all] [PATH] Shows the running live configuration for PATH. Note that attributes with default values will be filrered out by default, unless the all option is used. ''' if options and options.split()[0] == 'all': options = " ".join(options.split()[1:]) node_filter = lambda x:x else: node_filter = filter_no_default config = self.config.dump(options, node_filter) if config is None: config = self.config.dump("%s .*" % options, node_filter) if config is not None: sys.stdout.write("%s\n" % config) else: log.error("No such path in current configuration: %s" % options) def complete_show(self, text, line, begidx, endidx): # TODO add all option return self._complete_path(text, line, begidx, endidx) def do_initialize_system(self, options): ''' initialize_system Loads and commits the system startup configuration if it exists. ''' self.config.load(CliConfig.config_path) do_it = self.yes_no("Load and commit the system startup configuration?" , False) if do_it is not False: log.info("Initializing LIO target...") for msg in self.config.apply(): log.info(msg) self.config.load_live() targetcli-3.0.pre4.1~ga55d018/targetcli/cli_logger.py0000664000000000000000000000277412443074002017153 0ustar ''' This file is part of LIO(tm). Copyright (c) 2012-2014 by Datera, Inc. More information on www.datera.io. Original author: Jerome Martin Datera and LIO are trademarks of Datera, Inc., which may be registered in some jurisdictions. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import sys, logging class LogFormatter(logging.Formatter): default_format = "LOG%(levelno)s: %(msg)s" formats = {10: "DEBUG:%(module)s:%(lineno)s: %(msg)s", 20: "%(msg)s", 30: "\n### %(msg)s\n", 40: "*** %(msg)s", 50: "CRITICAL: %(msg)s"} def __init__(self): logging.Formatter.__init__(self) def format(self, record): self._fmt = self.formats.get(record.levelno, self.default_format) return logging.Formatter.format(self, record) logger = logging.getLogger("LioCli") logger.setLevel(logging.INFO) log_fmt = LogFormatter() log_handler = logging.StreamHandler(sys.stdout) log_handler.setFormatter(log_fmt) logging.root.addHandler(log_handler) targetcli-3.0.pre4.1~ga55d018/targetcli/cli.py0000664000000000000000000002606312443074002015611 0ustar ''' This file is part of LIO(tm). Copyright (c) 2012-2014 by Datera, Inc. More information on www.datera.io. Original author: Jerome Martin Datera and LIO are trademarks of Datera, Inc., which may be registered in some jurisdictions. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import pyparsing as pp import sys, tty, cmd, termios, readline, traceback import rtslib.config, rtslib.config_tree from targetcli.cli_logger import logger as log from rtslib.config import ConfigError # TODO Implement | filters: top N, last N, page, grep # TODO Redo help summary, using 2 columns: cmd, short description class CliError(Exception): pass class Cli(cmd.Cmd): ''' Our base Cli class, common to both CliLive and CliConfig ''' intro = '' log_levels = {'debug': 10, 'info': 20, 'warning': 30, 'error': 40, 'critical': 50} def __init__(self, interactive, history_path): ''' Initializes a new Cli object. interactive is a boolean to run either interactively or in batch mode history_path is the path to the command-line history file ''' cmd.Cmd.__init__(self) self.debug_level = 'off' self.last_traceback = None self.interactive = interactive self.do_save_history = self.interactive if self.interactive: self.load_history() readline.set_completer_delims(' \t\n`~!@#$%^&*()=+[{]}\\|;\'",<>/?') def do_EOF(self, options): sys.stdout.write("exit\n") return self.do_exit(options) def _complete_options(self, text, line, begidx, endidx, options): ''' Helper to autocomplete one or more options out of options, without any ordering considerations. ''' # TODO Add middle-of-line completion prev_options = line.split()[1:] if text: prev_options = prev_options[:-1] return ["%s " % name for name in options if name.startswith(text) if name.strip() not in prev_options] def _complete_one_option(self, text, line, begidx, endidx, options): ''' Helper to autocomplete a single option out of options. ''' # TODO Add middle-of-line completion prev_options = line.split()[1:] if text: prev_options = prev_options[:-1] return ["%s " % name for name in options if name.startswith(text) if not prev_options] def _complete_path(self, text, line, begidx, endidx, prefix=None): ''' Helper to autocomplete a configuration path. ''' # TODO Add middle-of-line completion pattern = line.partition(' ')[2] if prefix is None: prefix = '' # Are we completing an attr/obj value/id or a group? nodes_last_key = self.config.search(("%s %s.*" % (prefix, pattern)).strip()) # Or an attr/obj name/class ? nodes_first_key = [node for node in self.config.search(("%s %s.* .*" % (prefix, pattern)).strip()) if node.data['type'] != 'group'] completions = [] completions.extend(node.key[-1] for node in nodes_last_key) completions.extend(node.key[0] for node in nodes_first_key) return ["%s " % c for c in completions if c.startswith(text)] def _complete_filepath(self, text, line, begidx, endidx): ''' Helper to autocomplete file paths. ''' # TODO Implement this return [] def save_history(self): ''' Saves the command history. ''' if not self.do_save_history: return try: readline.write_history_file(self.history_path) except Exception, e: raise CliError("Failed to save command history, disabling: %s", e) self.do_save_history = False def load_history(self): ''' Loads the command history. ''' try: readline.read_history_file(self.history_path) except IOError, e: log.debug("Error while reading history: %s" % e) def clear_history(self): ''' Clears the command history. 
''' readline.clear_history() def emptyline(self): ''' Just go on with a new prompt line if the user enters an empty line. ''' pass def cmdloop(self): ''' The main REPL loop. ''' intro = self.intro while True: try: cmd.Cmd.cmdloop(self, intro=intro) except KeyboardInterrupt: sys.stdout.write("^C\n") intro = '' else: break def onecmd(self, line): ''' Executes a command line. ''' try: result = cmd.Cmd.onecmd(self, line) except pp.ParseException, e: log.error("Unknown syntax: %s at char %d" % (e.msg, e.loc)) return None except ConfigError, e: self.last_traceback = traceback.format_exc() log.error(str(e)) except CliError, e: self.last_traceback = traceback.format_exc() log.error(str(e)) except Exception, e: self.last_traceback = traceback.format_exc() log.error("%s: %s\n" % (e.__class__.__name__, e)) return None else: self.save_history() return result def completenames(self, text, *ignored): return ["%s " % name[3:] for name in self.get_names() if name.startswith("do_%s" % text) if not name in ['do_EOF']] def getchar(self): ''' Returns the first character read from stdin, without waiting for the user to hit enter. ''' fd = sys.stdin.fileno() tcattr_backup = termios.tcgetattr(fd) try: tty.setraw(sys.stdin.fileno()) char = sys.stdin.read(1) finally: termios.tcsetattr(fd, termios.TCSADRAIN, tcattr_backup) return char def yes_no(self, question, default=None): ''' Asks a yes/no question to be answered by typing a single 'y' or 'n' character. If we do not run in interactive mode, returns None. Else returns True for yes and False for not. default can either be True (yes is the default), False (no is the default) or None (no default). ''' keys = {'\x03': '^C', '\x04': '^D'} if not self.interactive: result = None else: if default is None: choices = "y/n" elif default is True: choices = "Y/n" dfl_key = 'y' elif default is False: choices = "y/N" dfl_key = 'n' key = None replies = ['y', 'n', 'Y', 'N'] if default is not None: replies.append('\r') while key not in replies: log.debug("Got key %r" % key) sys.stdout.write("%s [%s] " % (question, choices)) key = self.getchar() key = keys.get(key, key) if key == '\r' and default is not None: sys.stdout.write("%s\n" % dfl_key) else: sys.stdout.write("%s\n" % key) if key in ['^C', '^D']: raise CliError("Aborted") if key == '\r': result = default elif key.lower() == 'y': result = True else: result = False log.debug("yes_no(%s) -> %r" % (question, result)) return result def parse(self, line, header, grammar): ''' Parses line using a pyparsing grammar. Returns the parse tree as a list. ''' if not grammar: grammar = pp.Empty() grammar = pp.Literal(header) + grammar line = "%s %s" % (header, line) log.debug("Parsing line '%s'" % line) tokens = grammar.parseString(line, parseAll=True).asList() log.debug("Got parse tree %s" % tokens) return tokens def do_trace(self, options): ''' trace Displays the last exception trace for the current mode. This is useful only for debugging the application. Your lio support team might ask you to run this command to help understanding an issue you're experimenting. ''' options = self.parse(options, 'trace', '')[1:] if self.last_traceback is not None: log.error(self.last_traceback) else: log.error("No previous exception traceback.") def do_debug(self, options): ''' debug [off|cli|api|all] Controls the debug messages level: off disables all debug message cli enables only cli debug messages api also enables Config API messages all adds even more details to api debug With no option, displays the current debug level. 
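EXAMPLE
=======
B{debug all}
------------
Enables the most verbose output, turning on both cli and Config API
debug messages. Use B{debug off} to return to normal output, or run
B{debug} with no option to display the current debug level.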
''' syntax = pp.Optional(pp.oneOf(["off", "cli", "api", "all"])) options = self.parse(options, 'debug', syntax)[1:] if not options: log.info("Current debug level: %s" % self.debug_level) else: self.debug_level = options[0] if self.debug_level == 'off': log.setLevel(self.log_levels['info']) rtslib.config.log.setLevel(self.log_levels['info']) rtslib.config_tree.log.setLevel(self.log_levels['info']) elif self.debug_level == 'cli': log.setLevel(self.log_levels['debug']) rtslib.config.log.setLevel(self.log_levels['info']) rtslib.config_tree.log.setLevel(self.log_levels['info']) elif self.debug_level == 'api': log.setLevel(self.log_levels['debug']) rtslib.config.log.setLevel(self.log_levels['debug']) rtslib.config_tree.log.setLevel(self.log_levels['info']) elif self.debug_level == 'all': log.setLevel(self.log_levels['debug']) rtslib.config.log.setLevel(self.log_levels['debug']) rtslib.config_tree.log.setLevel(self.log_levels['debug']) log.info("Debug level is now: %s" % self.debug_level) def complete_debug(self, text, line, begidx, endidx): return self._complete_one_option(text, line, begidx, endidx, ["off", "cli", "api", "all"]) targetcli-3.0.pre4.1~ga55d018/targetcli/__init__.py0000664000000000000000000000153412443074002016575 0ustar ''' This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' from ui_root import UIRoot __version__ = '3.0.pre4.1~ga55d018' __author__ = "Jerome Martin " __url__ = "http://www.risingtidesystems.com" __description__ = "An administration shell for RTS storage targets." __license__ = __doc__ targetcli-3.0.pre4.1~ga55d018/targetcli.spec0000664000000000000000000000305312443074002015336 0ustar %define oname targetcli Name: targetcli License: Apache License 2.0 Group: Applications/System Summary: RisingTide Systems generic SCSI target CLI shell. Version: 3.0.pre4.1~ga55d018 Release: 1%{?dist} URL: http://www.risingtidesystems.com/git/ Source: %{oname}-%{version}.tar.gz BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-rpmroot BuildArch: noarch BuildRequires: python-devel, python-rtslib, python-configshell, python-prettytable Requires: python-rtslib, python-configshell, python-prettytable Conflicts: targetcli-frozen, rtsadmin-frozen, rtsadmin, lio-utils Vendor: Datera, Inc. %description RisingTide Systems generic SCSI target CLI shell. 
%prep %setup -q -n %{oname}-%{version} %build %{__python} setup.py build %install rm -rf %{buildroot} %{__python} setup.py install --skip-build --root=%{buildroot} --prefix=usr mkdir -p %{buildroot}/etc/target mkdir -p %{buildroot}/var/target/pr mkdir -p %{buildroot}/var/target/alua mkdir -p %{buildroot}/etc/init.d/ mkdir -p %{buildroot}/%{_mandir}/man8 cp doc/targetcli.8 %{buildroot}/%{_mandir}/man8 cp scripts/target.init %{buildroot}/etc/init.d/target %clean rm -rf %{buildroot} %files %defattr(-,root,root,-) %{python_sitelib} /etc/target /var/target /etc/init.d/target %{_bindir}/targetcli %{_bindir}/targetcli-ng %{_mandir}/man8/* %doc COPYING README.md %changelog * Sat Dec 13 2014 Marc Fleischmann 3.0.pre4.1~ga55d018-1 - Generated from git commit a55d0180ce19cb9ce4e3276edbf0abc60782e4ea. targetcli-3.0.pre4.1~ga55d018/targetcli/ui_backstore_legacy.py0000664000000000000000000004055312443074002021040 0ustar ''' Implements the targetcli backstores related UI. his file is part of targetcli. Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' from ui_node import UINode, UIRTSLibNode from rtslib import RTSRoot from rtslib import FileIOBackstore, IBlockBackstore from rtslib import PSCSIBackstore, RDMCPBackstore from rtslib import FileIOStorageObject, IBlockStorageObject from rtslib import PSCSIStorageObject, RDMCPStorageObject from rtslib.utils import get_block_type, is_disk_partition class UIBackstoresLegacy(UINode): ''' The backstores container UI. ''' def __init__(self, parent): UINode.__init__(self, 'backstores', parent) self.cfs_cwd = "%s/core" % self.cfs_cwd self.refresh() def refresh(self): self._children = set([]) for backstore in RTSRoot().backstores: backstore_plugin = backstore.plugin if backstore_plugin == 'pscsi': UIPSCSIBackstoreLegacy(backstore, self) elif backstore_plugin == 'rd_mcp': UIRDMCPBackstoreLegacy(backstore, self) elif backstore_plugin == 'fileio': UIFileIOBackstoreLegacy(backstore, self) elif backstore_plugin == 'iblock': UIIBlockBackstoreLegacy(backstore, self) def summary(self): no_backstores = len(self._children) if no_backstores > 1: msg = "%d Backstores (legacy mode)" % no_backstores else: msg = "%d Backstore (legacy mode)" % no_backstores return (msg, None) def ui_command_create(self, backstore_plugin): ''' Creates a new backstore, using the chosen I{backstore_plugin}. More than one backstores using the same I{backstore_plugin} can co-exist. They will be identified by incremental index numbers, starting from 0. AVAILABLE BACKSTORE PLUGINS =========================== B{iblock} --------- This I{backstore_plugin} provides I{SPC-4}, along with I{ALUA} and I{Persistent Reservations} emulation on top of Linux BLOCK devices: B{any block device} that appears in /sys/block. B{pscsi} -------- Provides pass-through for Linux physical SCSI devices. It can be used with any storage object that does B{direct pass-through} of SCSI commands without SCSI emulation. 
This assumes an underlying SCSI device that appears with lsscsi in /proc/scsi/scsi, such as a SAS hard drive, such as any SCSI device. The Linux kernel code for device SCSI drivers resides in linux/drivers/scsi. SCSI-3 and higher is supported with this subsystem, but only for control CDBs capable by the device firmware. B{fileio} --------- This I{backstore_plugin} provides I{SPC-4}, along with I{ALUA} and I{Persistent Reservations} emulation on top of Linux VFS devices: B{any file on a mounted filesystem}. It may be backed by a file or an underlying real block device. FILEIO is using struct file to serve block I/O with various methods (synchronous or asynchronous) and (buffered or direct). B{rd_mcp} -------- This I{backstore_plugin} uses a ramdisk with a separate mapping using memory copy. Typically used for bandwidth testing. EXAMPLE ======= B{create iblock} ---------------- Creates a new backstore, using the B{iblock} I{backstore_plugin}. ''' self.assert_root() self.shell.log.debug("%r" % [(backstore.plugin, backstore.index) for backstore in RTSRoot().backstores]) indexes = [backstore.index for backstore in RTSRoot().backstores if backstore.plugin == backstore_plugin] self.shell.log.debug("Existing %s backstore indexes: %r" % (backstore_plugin, indexes)) for index in range(1048576): if index not in indexes: backstore_index = index break if backstore_index is None: self.shell.log.error("Cannot find an available backstore index.") return else: self.shell.log.info("First available %s backstore index is %d." % (backstore_plugin, backstore_index)) if backstore_plugin == 'pscsi': backstore = PSCSIBackstore(backstore_index, mode='create') return self.new_node(UIPSCSIBackstoreLegacy(backstore, self)) elif backstore_plugin == 'rd_mcp': backstore = RDMCPBackstore(backstore_index, mode='create') return self.new_node(UIRDMCPBackstoreLegacy(backstore, self)) elif backstore_plugin == 'fileio': backstore = FileIOBackstore(backstore_index, mode='create') return self.new_node(UIFileIOBackstoreLegacy(backstore, self)) elif backstore_plugin == 'iblock': backstore = IBlockBackstore(backstore_index, mode='create') return self.new_node(UIIBlockBackstoreLegacy(backstore, self)) else: self.shell.log.error("Invalid backstore plugin %s" % backstore_plugin) return self.shell.log.info("Created new backstore %s" % backstore.name) def ui_complete_create(self, parameters, text, current_param): ''' Parameter auto-completion method for user command create. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'backstore_plugin': plugins = ['pscsi', 'rd_mcp', 'fileio', 'iblock'] completions = [plugin for plugin in plugins if plugin.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions def ui_command_delete(self, backstore): ''' Deletes a I{backstore}, and recursively all defined storage objects hanging under it. If there are existing LUNs making use of those storage objects, they will be deleted too. EXAMPLE ======= B{delete iblock2} ----------------- That would recursively delete the B{iblock} backstore with index 2. ''' self.assert_root() try: child = self.get_child(backstore) except ValueError: self.shell.log.error("No backstore named %s." 
% backstore) else: child.rtsnode.delete() self.remove_child(child) self.shell.log.info("Deleted backstore %s." % backstore) self.parent.refresh() def ui_complete_delete(self, parameters, text, current_param): ''' Parameter auto-completion method for user command delete. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'backstore': backstores = [child.name for child in self.children] completions = [backstore for backstore in backstores if backstore.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions class UIBackstoreLegacy(UIRTSLibNode): ''' A backstore UI. ''' def __init__(self, backstore, parent): UIRTSLibNode.__init__(self, backstore.name, backstore, parent) self.cfs_cwd = backstore.path self.refresh() def refresh(self): self._children = set([]) for storage_object in self.rtsnode.storage_objects: UIStorageObjectLegacy(storage_object, self) def summary(self): no_storage_objects = len(self._children) if no_storage_objects > 1: msg = "%d Storage Objects" % no_storage_objects else: msg = "%d Storage Object" % no_storage_objects return (msg, None) def prm_buffered(self, buffered): buffered = \ self.ui_eval_param(buffered, 'bool', True) if buffered: self.shell.log.info("Using buffered mode.") else: self.shell.log.info("Not using buffered mode.") return buffered def ui_command_version(self): ''' Displays the version of the current backstore's plugin. ''' self.shell.con.display("Backstore plugin %s %s" % (self.rtsnode.plugin, self.rtsnode.version)) def ui_command_delete(self, name): ''' Recursively deletes the storage object having the specified I{name}. If there are LUNs using this storage object, they will be deleted too. EXAMPLE ======= B{delete mystorage} ------------------- Deletes the storage object named mystorage, and all associated LUNs. ''' self.assert_root() try: child = self.get_child(name) except ValueError: self.shell.log.error("No storage object named %s." % name) else: child.rtsnode.delete() self.remove_child(child) self.shell.log.info("Deleted storage object %s." % name) self.parent.parent.refresh() def ui_complete_delete(self, parameters, text, current_param): ''' Parameter auto-completion method for user command delete. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'name': names = [child.name for child in self.children] completions = [name for name in names if name.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions class UIPSCSIBackstoreLegacy(UIBackstoreLegacy): ''' PSCSI backstore UI. ''' def ui_command_create(self, name, dev): ''' Creates a PSCSI storage object, with supplied name and SCSI device. The SCSI device I{dev} can either be a path name to the device, in which case it is recommended to use the /dev/disk/by-id hierarchy to have consistent naming should your physical SCSI system be modified, or an SCSI device ID in the H:C:T:L format, which is not recommended as SCSI IDs may vary in time. 
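EXAMPLE
=======
B{create disk0 /dev/disk/by-id/scsi-MYDEVICE}
---------------------------------------------
Creates a pscsi storage object named disk0 passing through the SCSI
device found at that path. Both disk0 and scsi-MYDEVICE are only
placeholders here: substitute the name you want and the persistent ID
of your own device.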
''' self.assert_root() so = PSCSIStorageObject(self.rtsnode, name, dev) ui_so = UIStorageObjectLegacy(so, self) self.shell.log.info("Created pscsi storage object %s using %s." % (name, dev)) return self.new_node(ui_so) class UIRDMCPBackstoreLegacy(UIBackstoreLegacy): ''' RDMCP backstore UI. ''' def ui_command_create(self, name, size): ''' Creates an RDMCP storage object. I{size} is the size of the ramdisk. SIZE SYNTAX =========== - If size is an int, it represents a number of bytes. - If size is a string, the following units can be used: - B{B} or no unit present for bytes - B{k}, B{K}, B{kB}, B{KB} for kB (kilobytes) - B{m}, B{M}, B{mB}, B{MB} for MB (megabytes) - B{g}, B{G}, B{gB}, B{GB} for GB (gigabytes) - B{t}, B{T}, B{tB}, B{TB} for TB (terabytes) ''' self.assert_root() so = RDMCPStorageObject(self.rtsnode, name, size) ui_so = UIStorageObjectLegacy(so, self) self.shell.log.info("Created rd_mcp ramdisk %s with size %s." % (name, size)) return self.new_node(ui_so) class UIFileIOBackstoreLegacy(UIBackstoreLegacy): ''' FileIO backstore UI. ''' def ui_command_create(self, name, file_or_dev, size=None, buffered=None): ''' Creates a FileIO storage object. If I{file_or_dev} is a path to a regular file to be used as backend, then the I{size} parameter is mandatory. Else, if I{file_or_dev} is a path to a block device, the size parameter B{must} be ommited. If present, I{size} is the size of the file to be used, I{file} the path to the file or I{dev} the path to a block device. The I{buffered} parameter is a boolean stating whether or not to enable buffered mode. It is disabled by default (synchronous mode). SIZE SYNTAX =========== - If size is an int, it represents a number of bytes. - If size is a string, the following units can be used: - B{B} or no unit present for bytes - B{k}, B{K}, B{kB}, B{KB} for kB (kilobytes) - B{m}, B{M}, B{mB}, B{MB} for MB (megabytes) - B{g}, B{G}, B{gB}, B{GB} for GB (gigabytes) - B{t}, B{T}, B{tB}, B{TB} for TB (terabytes) ''' self.assert_root() self.shell.log.debug('Using params size=%s buffered=%s' % (size, buffered)) is_dev = get_block_type(file_or_dev) is not None \ or is_disk_partition(file_or_dev) if size is None and is_dev: so = FileIOStorageObject(self.rtsnode, name, file_or_dev, buffered_mode=self.prm_buffered(buffered)) self.shell.log.info("Created fileio %s with size %s." % (name, size)) ui_so = UIStorageObjectLegacy(so, self) return self.new_node(ui_so) elif size is not None and not is_dev: so = FileIOStorageObject(self.rtsnode, name, file_or_dev, size, buffered_mode=self.prm_buffered(buffered)) self.shell.log.info("Created fileio storage object %s." % name) ui_so = UIStorageObjectLegacy(so, self) return self.new_node(ui_so) else: self.shell.log.error("For fileio, you must either specify both a " + "file and a size, or just a device path.") class UIIBlockBackstoreLegacy(UIBackstoreLegacy): ''' IBlock backstore UI. ''' def ui_command_create(self, name, dev): ''' Creates an IBlock Storage object. I{dev} is the path to the TYPE_DISK block device to use. ''' self.assert_root() so = IBlockStorageObject(self.rtsnode, name, dev) ui_so = UIStorageObjectLegacy(so, self) self.shell.log.info("Created iblock storage object %s using %s." % (name, dev)) return self.new_node(ui_so) class UIStorageObjectLegacy(UIRTSLibNode): ''' A storage object UI. 
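Its summary line shows the backing path of the storage object (or
'ramdisk' for rd backstores) along with its status, and flags a broken
storage link if the backing path is missing.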
''' def __init__(self, storage_object, parent): name = storage_object.name UIRTSLibNode.__init__(self, name, storage_object, parent) self.cfs_cwd = storage_object.path self.refresh() def summary(self): so = self.rtsnode if so.backstore.plugin.startswith("rd"): path = "ramdisk" else: path = so.udev_path if not path: return ("BROKEN STORAGE LINK", False) else: return ("%s %s" % (path, so.status), True) targetcli-3.0.pre4.1~ga55d018/targetcli/ui_backstore.py0000664000000000000000000004041112443074002017505 0ustar ''' Implements the targetcli backstores related UI. This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import os from ui_node import UINode, UIRTSLibNode from rtslib import RTSRoot from rtslib import FileIOBackstore, IBlockBackstore from rtslib import PSCSIBackstore, RDMCPBackstore from rtslib import FileIOStorageObject, IBlockStorageObject from rtslib import PSCSIStorageObject, RDMCPStorageObject from rtslib.utils import get_block_type, is_disk_partition from rtslib.utils import convert_human_to_bytes, convert_bytes_to_human from configshell import ExecutionError def dedup_so_name(storage_object): ''' Useful for migration from ui_backstore_legacy to new style with 1:1 hba:so mapping. If name is a duplicate in a backstore, returns name_X where X is the HBA index. ''' names = [so.name for so in RTSRoot().storage_objects if so.backstore.plugin == storage_object.backstore.plugin] if names.count(storage_object.name) > 1: return "%s_%d" % (storage_object.name, storage_object.backstore.index) else: return storage_object.name class UIBackstores(UINode): ''' The backstores container UI. ''' def __init__(self, parent): UINode.__init__(self, 'backstores', parent) self.cfs_cwd = "%s/core" % self.cfs_cwd self.refresh() def refresh(self): self._children = set([]) UIPSCSIBackstore(self) UIRDMCPBackstore(self) UIFileIOBackstore(self) UIIBlockBackstore(self) class UIBackstore(UINode): ''' A backstore UI. Abstract Base Class, do not instantiate. ''' def __init__(self, plugin, parent): UINode.__init__(self, plugin, parent) self.cfs_cwd = "%s/core" % self.cfs_cwd self.refresh() def refresh(self): self._children = set([]) for so in RTSRoot().storage_objects: if so.backstore.plugin == self.name: ui_so = UIStorageObject(so, self) ui_so.name = dedup_so_name(so) def summary(self): no_storage_objects = len(self._children) if no_storage_objects > 1: msg = "%d Storage Objects" % no_storage_objects else: msg = "%d Storage Object" % no_storage_objects return (msg, None) def prm_buffered(self, buffered): buffered = \ self.ui_eval_param(buffered, 'bool', True) if buffered: self.shell.log.info("Using buffered mode.") else: self.shell.log.info("Not using buffered mode.") return buffered def ui_command_delete(self, name): ''' Recursively deletes the storage object having the specified I{name}. If there are LUNs using this storage object, they will be deleted too. EXAMPLE ======= B{delete mystorage} ------------------- Deletes the storage object named mystorage, and all associated LUNs. 
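Note that if the deleted storage object was the last one on its
underlying HBA, the now empty HBA will be removed as well.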
''' self.assert_root() try: child = self.get_child(name) except ValueError: self.shell.log.error("No storage object named %s." % name) else: hba = child.rtsnode.backstore child.rtsnode.delete() if not list(hba.storage_objects): hba.delete() self.remove_child(child) self.shell.log.info("Deleted storage object %s." % name) self.parent.parent.refresh() def ui_complete_delete(self, parameters, text, current_param): ''' Parameter auto-completion method for user command delete. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'name': names = [child.name for child in self.children] completions = [name for name in names if name.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions def next_hba_index(self): self.shell.log.debug("%r" % [(backstore.plugin, backstore.index) for backstore in RTSRoot().backstores]) indexes = [backstore.index for backstore in RTSRoot().backstores if backstore.plugin == self.name] self.shell.log.debug("Existing %s backstore indexes: %r" % (self.name, indexes)) for index in range(1048576): if index not in indexes: backstore_index = index break if backstore_index is None: raise ExecutionError("Cannot find an available backstore index.") else: self.shell.log.debug("First available %s backstore index is %d." % (self.name, backstore_index)) return backstore_index def assert_available_so_name(self, name): names = [child.name for child in self.children] if name in names: raise ExecutionError("Storage object %s/%s already exist." % (self.name, name)) class UIPSCSIBackstore(UIBackstore): ''' PSCSI backstore UI. ''' def __init__(self, parent): UIBackstore.__init__(self, 'pscsi', parent) def ui_command_create(self, name, dev): ''' Creates a PSCSI storage object, with supplied name and SCSI device. The SCSI device I{dev} can either be a path name to the device, in which case it is recommended to use the /dev/disk/by-id hierarchy to have consistent naming should your physical SCSI system be modified, or an SCSI device ID in the H:C:T:L format, which is not recommended as SCSI IDs may vary in time. ''' self.assert_root() self.assert_available_so_name(name) backstore = PSCSIBackstore(self.next_hba_index(), mode='create') if get_block_type(dev) is not None or is_disk_partition(dev): self.shell.log.info("Note: block backstore recommended for " "SCSI block devices") try: so = PSCSIStorageObject(backstore, name, dev) except Exception, exception: backstore.delete() raise exception ui_so = UIStorageObject(so, self) self.shell.log.info("Created pscsi storage object %s using %s" % (name, dev)) return self.new_node(ui_so) class UIRDMCPBackstore(UIBackstore): ''' RDMCP backstore UI. ''' def __init__(self, parent): UIBackstore.__init__(self, 'rd_mcp', parent) def ui_command_create(self, name, size, nullio=None): ''' Creates an RDMCP storage object. I{size} is the size of the ramdisk, and the optional I{nullio} parameter is a boolean specifying whether or not we should use a stub nullio instead of a real ramdisk. SIZE SYNTAX =========== - If size is an int, it represents a number of bytes. 
- If size is a string, the following units can be used: - B{B} or no unit present for bytes - B{k}, B{K}, B{kB}, B{KB} for kB (kilobytes) - B{m}, B{M}, B{mB}, B{MB} for MB (megabytes) - B{g}, B{G}, B{gB}, B{GB} for GB (gigabytes) - B{t}, B{T}, B{tB}, B{TB} for TB (terabytes) ''' self.assert_root() self.assert_available_so_name(name) backstore = RDMCPBackstore(self.next_hba_index(), mode='create') nullio = self.ui_eval_param(nullio, 'bool', False) try: so = RDMCPStorageObject(backstore, name, size, nullio=nullio) except Exception, exception: backstore.delete() raise exception ui_so = UIStorageObject(so, self) self.shell.log.info("Created rd_mcp ramdisk %s with size %s." % (name, size)) if nullio and not so.nullio: self.shell.log.warning("nullio ramdisk is not supported by this " "kernel version, created with nullio=false") return self.new_node(ui_so) class UIFileIOBackstore(UIBackstore): ''' FileIO backstore UI. ''' def __init__(self, parent): UIBackstore.__init__(self, 'fileio', parent) def _create_file(self, filename, size, sparse=True): f = open(filename, "w+") try: if sparse: os.ftruncate(f.fileno(), size) else: self.shell.log.info("Writing %s bytes" % size) while size > 0: write_size = min(size, 1024) f.write("\0" * write_size) size -= write_size except IOError: f.close() os.remove(filename) raise ExecutionError("Could not expand file to size") f.close() def ui_command_create(self, name, file_or_dev, size=None, buffered=None, sparse=None): ''' Creates a FileIO storage object. If I{file_or_dev} is a path to a regular file to be used as backend, then the I{size} parameter is mandatory. Else, if I{file_or_dev} is a path to a block device, the size parameter B{must} be ommited. If present, I{size} is the size of the file to be used, I{file} the path to the file or I{dev} the path to a block device. The I{buffered} parameter is a boolean stating whether or not to enable buffered mode. It is enabled by default (asynchronous mode). The I{sparse} parameter is only applicable when creating a new backing file. It is a boolean stating if the created file should be created as a sparse file (the default), or fully initialized. SIZE SYNTAX =========== - If size is an int, it represents a number of bytes. - If size is a string, the following units can be used: - B{B} or no unit present for bytes - B{k}, B{K}, B{kB}, B{KB} for kB (kilobytes) - B{m}, B{M}, B{mB}, B{MB} for MB (megabytes) - B{g}, B{G}, B{gB}, B{GB} for GB (gigabytes) - B{t}, B{T}, B{tB}, B{TB} for TB (terabytes) ''' self.assert_root() self.assert_available_so_name(name) self.shell.log.debug("Using params size=%s buffered=%s" " sparse=%s" % (size, buffered, sparse)) sparse = self.ui_eval_param(sparse, 'bool', True) backstore = FileIOBackstore(self.next_hba_index(), mode='create') is_dev = get_block_type(file_or_dev) is not None \ or is_disk_partition(file_or_dev) if size is None and is_dev: backstore = FileIOBackstore(self.next_hba_index(), mode='create') try: so = FileIOStorageObject( backstore, name, file_or_dev, buffered_mode=self.prm_buffered(buffered)) except Exception, exception: backstore.delete() raise exception self.shell.log.info("Created fileio %s with size %s." 
% (name, size)) self.shell.log.info("Note: block backstore preferred for " " best results.") ui_so = UIStorageObject(so, self) return self.new_node(ui_so) elif size is not None and not is_dev: backstore = FileIOBackstore(self.next_hba_index(), mode='create') try: so = FileIOStorageObject( backstore, name, file_or_dev, size, buffered_mode=self.prm_buffered(buffered)) except Exception, exception: backstore.delete() raise exception self.shell.log.info("Created fileio %s." % name) ui_so = UIStorageObject(so, self) return self.new_node(ui_so) else: # use given file size only if backing file does not exist if os.path.isfile(file_or_dev): new_size = str(os.path.getsize(file_or_dev)) if size: self.shell.log.info("%s exists, using its size (%s bytes)" " instead" % (file_or_dev, new_size)) size = new_size elif os.path.exists(file_or_dev): raise ExecutionError("Path %s exists but is not a file" % file_or_dev) else: # create file and extend to given file size if not size: raise ExecutionError("Attempting to create file for new" + " fileio backstore, need a size") self._create_file(file_or_dev, convert_human_to_bytes(size), sparse) class UIIBlockBackstore(UIBackstore): ''' IBlock backstore UI. ''' def __init__(self, parent): UIBackstore.__init__(self, 'iblock', parent) def ui_command_create(self, name, dev): ''' Creates an IBlock Storage object. I{dev} is the path to the TYPE_DISK block device to use. ''' self.assert_root() self.assert_available_so_name(name) backstore = IBlockBackstore(self.next_hba_index(), mode='create') try: so = IBlockStorageObject(backstore, name, dev) except Exception, exception: backstore.delete() raise exception ui_so = UIStorageObject(so, self) self.shell.log.info("Created iblock storage object %s using %s." % (name, dev)) return self.new_node(ui_so) class UIStorageObject(UIRTSLibNode): ''' A storage object UI. Abstract Base Class, do not instantiate. ''' def __init__(self, storage_object, parent): name = storage_object.name UIRTSLibNode.__init__(self, name, storage_object, parent) self.cfs_cwd = storage_object.path self.refresh() def ui_command_version(self): ''' Displays the version of the current backstore's plugin. ''' backstore = self.rtsnode.backstore self.shell.con.display("Backstore plugin %s %s" % (backstore.plugin, backstore.version)) def summary(self): so = self.rtsnode errors = [] if so.backstore.plugin.startswith("rd"): path = "ramdisk" else: path = so.udev_path if not path: errors.append("BROKEN STORAGE LINK") legacy = [] if self.rtsnode.name != self.name: legacy.append("ADDED SUFFIX") if len(list(self.rtsnode.backstore.storage_objects)) > 1: legacy.append("SHARED HBA") if legacy: errors.append("LEGACY: " + ", ".join(legacy)) size = convert_bytes_to_human(getattr(so, "size", 0)) if so.status == "activated": status = "in use" else: status = "not in use" nullio_str = "" try: if so.nullio: nullio_str = "nullio" except AttributeError: pass if errors: info = ", ".join(errors) if path: info += " (%s %s)" % (path, status) return (info, False) else: info = ", ".join(["%s" % str(data) for data in (size, path, status, nullio_str) if data]) return (info, True) targetcli-3.0.pre4.1~ga55d018/targetcli/ui_node.py0000664000000000000000000002446612443074002016471 0ustar ''' Implements the targetcli base UI node. This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' from configshell import ConfigNode, ExecutionError from rtslib import RTSLibError, RTSRoot, Config from subprocess import PIPE, Popen from cli_config import CliConfig from os.path import isfile from os import getuid STARTUP_CONFIG = "/etc/target/scsi_target.lio" def exec3(cmd): ''' Executes a shell command **cmd** and returns **(retcode, stdout, stderr)**. ''' process = Popen(cmd, shell=True, bufsize=1024*1024, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True) (out, err) = process.communicate() retcode = process.returncode return (retcode, out, err) class UINode(ConfigNode): ''' Our targetcli basic UI node. ''' def __init__(self, name, parent=None, shell=None): ConfigNode.__init__(self, name, parent, shell) self.cfs_cwd = RTSRoot.configfs_dir self.define_config_group_param( 'global', 'auto_enable_tpg', 'bool', 'If true, automatically enables TPGs upon creation.') self.define_config_group_param( 'global', 'auto_add_mapped_luns', 'bool', 'If true, automatically create node ACLs mapped LUNs ' + 'after creating a new target LUN or a new node ACL') self.define_config_group_param( 'global', 'legacy_hba_view', 'bool', 'If true, use legacy HBA view, allowing to create more ' + 'than one storage object per HBA.') self.define_config_group_param( 'global', 'auto_cd_after_create', 'bool', 'If true, changes current path to newly created objects.') def assert_root(self): ''' For commands requiring root privileges, disable command if not the root node's as_root attribute is False. ''' root_node = self.get_root() if hasattr(root_node, 'as_root') and not root_node.as_root: raise ExecutionError("This privileged command is disabled: " + "you are not root.") def new_node(self, new_node): ''' Used to honor global 'auto_cd_after_create'. Either returns None if the global is False, or the new_node if the global is True. In both cases, set the @last bookmark to last_node. ''' self.shell.prefs['bookmarks']['last'] = new_node.path self.shell.prefs.save() if self.shell.prefs['auto_cd_after_create']: self.shell.log.info("Entering new node %s" % new_node.path) # Piggy backs on cd instead of just returning new_node, # so we update navigation history. return self.ui_command_cd(new_node.path) else: return None def refresh(self): ''' Refreshes and updates the objects tree from the current path. ''' for child in self.children: child.refresh() def execute_command(self, command, pparams=[], kparams={}): ''' We overload this one in order to handle our own exceptions cleanly, and not just configshell's ExecutionError. ''' try: result = ConfigNode.execute_command(self, command, pparams, kparams) except RTSLibError, msg: self.shell.log.error(str(msg)) else: self.shell.log.debug("Command %s succeeded." % command) return result def ui_command_saveconfig(self): ''' Saves the whole configuration tree to disk so that it will be restored on next boot. Unless you do that, changes are lost accross reboots. ''' self.assert_root() try: input = raw_input("Save configuration? 
[Y/n]: ") except EOFError: input = None self.shell.con.display('') if input in ["y", "Y", ""]: CliConfig.save_running_config() else: self.shell.log.warning("Configuration not saved.") def ui_command_exit(self): ''' Exits the command line interface. ''' if getuid() == 0: self.shell.log.info("Comparing startup and running configs...") try: config = Config() if isfile(STARTUP_CONFIG): config.load(STARTUP_CONFIG, allow_new_attrs=True) saved_config = config.dump() config.load_live() live_config = config.dump() if saved_config != live_config: self.shell.log.info("Some changes need saving.") self.ui_command_saveconfig() else: self.shell.log.info("Startup config is up-to-date.") except Exception, e: self.shell.log.warning(e) return 'EXIT' def ui_command_refresh(self): ''' Refreshes and updates the objects tree from the current path. ''' self.refresh() def ui_command_status(self): ''' Displays the current node's status summary. SEE ALSO ======== B{ls} ''' description, is_healthy = self.summary() self.shell.log.info("Status for %s: %s" % (self.path, description)) def ui_setgroup_global(self, parameter, value): ConfigNode.ui_setgroup_global(self, parameter, value) self.get_root().refresh() class UIRTSLibNode(UINode): ''' A subclass of UINode for nodes with an underlying RTSLib object. ''' def __init__(self, name, rtslib_object, parent): ''' Call from the class that inherits this, with the rtslib object that should be checked upon. ''' UINode.__init__(self, name, parent) self.rtsnode = rtslib_object # If the rtsnode has parameters, use them parameters = self.rtsnode.list_parameters() parameters_ro = self.rtsnode.list_parameters(writable=False) for parameter in parameters: writable = parameter not in parameters_ro description = "The %s parameter." % parameter self.define_config_group_param( 'parameter', parameter, 'string', description, writable) # If the rtsnode has attributes, enable them attributes = self.rtsnode.list_attributes() attributes_ro = self.rtsnode.list_attributes(writable=False) for attribute in attributes: writable = attribute not in attributes_ro description = "The %s attribute." % attribute self.define_config_group_param( 'attribute', attribute, 'string', description, writable) # If the rtsnode has auth_attrs, use them auth_attrs = self.rtsnode.list_auth_attrs() auth_attrs_ro = self.rtsnode.list_auth_attrs(writable=False) for auth_attr in auth_attrs: writable = auth_attr not in auth_attrs_ro description = "The %s auth_attr." % auth_attr self.define_config_group_param( 'auth', auth_attr, 'string', description, writable) def execute_command(self, command, pparams=[], kparams={}): ''' Overrides the parent's execute_command() to check if the underlying RTSLib object still exists before returning. ''' try: self.rtsnode._check_self() except RTSLibError: self.shell.log.error("The underlying rtslib object for " + "%s does not exist." % self.path) root = self.get_root() root.refresh() return root return UINode.execute_command(self, command, pparams, kparams) def ui_getgroup_attribute(self, attribute): ''' This is the backend method for getting attributes. @param attribute: The attribute to get the value of. @type attribute: str @return: The attribute's value @rtype: arbitrary ''' return self.rtsnode.get_attribute(attribute) def ui_setgroup_attribute(self, attribute, value): ''' This is the backend method for setting attributes. @param attribute: The attribute to set the value of. 
@type attribute: str @param value: The attribute's value @type value: arbitrary ''' self.assert_root() self.rtsnode.set_attribute(attribute, value) def ui_getgroup_parameter(self, parameter): ''' This is the backend method for getting parameters. @param parameter: The parameter to get the value of. @type parameter: str @return: The parameter's value @rtype: arbitrary ''' return self.rtsnode.get_parameter(parameter) def ui_setgroup_parameter(self, parameter, value): ''' This is the backend method for setting parameters. @param parameter: The parameter to set the value of. @type parameter: str @param value: The parameter's value @type value: arbitrary ''' self.assert_root() self.rtsnode.set_parameter(parameter, value) def ui_getgroup_auth(self, auth_attr): ''' This is the backend method for getting auth_attrs. @param auth_attr: The auth_attr to get the value of. @type auth_attr: str @return: The auth_attr's value @rtype: arbitrary ''' return self.rtsnode.get_auth_attr(auth_attr) def ui_setgroup_auth(self, auth_attr, value): ''' This is the backend method for setting auth_attrs. @param auth_attr: The auth_attr to set the value of. @type auth_attr: str @param value: The auth_attr's value @type value: arbitrary ''' self.assert_root() self.rtsnode.set_auth_attr(auth_attr, value) targetcli-3.0.pre4.1~ga55d018/targetcli/ui_root.py0000664000000000000000000000672412443074002016524 0ustar ''' Implements the targetcli root UI. This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' from os import system import readline, tempfile from rtslib import RTSRoot, Config from ui_node import UINode, STARTUP_CONFIG from ui_target import UIFabricModule from ui_backstore import UIBackstores from ui_backstore_legacy import UIBackstoresLegacy class UIRoot(UINode): ''' The targetcli hierarchy root node. ''' def __init__(self, shell, as_root=False): UINode.__init__(self, '/', shell=shell) self.as_root = as_root def refresh(self): ''' Refreshes the tree of target fabric modules. ''' self._children = set([]) if self.shell.prefs['legacy_hba_view']: UIBackstoresLegacy(self) else: UIBackstores(self) for fabric_module in RTSRoot().fabric_modules: self.shell.log.debug("Using fabric module %s." % fabric_module.name) UIFabricModule(fabric_module, self) def ui_command_configure(self): ''' Enters the config mode. This mode allows editing a candidate configuration without impacting the running system. This candidate configuration can then either be commited or discarded at will. If commited, it will be applied to the running system and saved as the new startup configuration. Other features include loading a configuration from file, undo support, rollback support, configuration backups and more. This mode is a functionnal but early preview version of the next- generation targetcli environment. 
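Config mode is provided by the separate B{targetcli-ng} shell. When you
exit it, control returns to this shell and the object tree is refreshed
to pick up any changes that were committed.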
''' self.assert_root() self.shell.log.warning("Entering configure mode") self.shell.log.warning("This mode is a functionnal but early " "preview version of the next-generation " "targetcli") system("targetcli-ng configure") self.refresh() def ui_command_version(self): ''' Displays the targetcli and support libraries versions. ''' from rtslib import __version__ as rtslib_version from targetcli import __version__ as targetcli_version from configshell import __version__ as configshell_version for package, version in dict(targetcli=targetcli_version, rtslib=rtslib_version, configshell=configshell_version).items(): if version == 'GIT_VERSION': self.shell.log.error("Cannot find %s version. The %s package " % (package, package) + "has probably not been built properly " + "from either the git repository or a " + "public tarball.") else: self.shell.log.info("Using %s version %s" % (package, version)) targetcli-3.0.pre4.1~ga55d018/targetcli/ui_target.py0000664000000000000000000011075412443074002017026 0ustar ''' Implements the targetcli target related UI. This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' from ui_node import UINode, UIRTSLibNode from ui_backstore import dedup_so_name from rtslib import RTSLibError, RTSLibBrokenLink, utils from rtslib import NodeACL, NetworkPortal, MappedLUN from rtslib import Target, TPG, LUN class UIFabricModule(UIRTSLibNode): ''' A fabric module UI. ''' def __init__(self, fabric_module, parent): UIRTSLibNode.__init__(self, fabric_module.name, fabric_module, parent) self.cfs_cwd = fabric_module.path self.refresh() if self.rtsnode.has_feature('discovery_auth'): for param in ['userid', 'password', 'mutual_userid', 'mutual_password', 'enable']: self.define_config_group_param('discovery_auth', param, 'string') self.refresh() def ui_getgroup_discovery_auth(self, auth_attr): ''' This is the backend method for getting discovery_auth attributes. @param auth_attr: The auth attribute to get the value of. @type auth_attr: str @return: The auth attribute's value @rtype: str ''' value = None if auth_attr == 'password': value = self.rtsnode.discovery_password elif auth_attr == 'userid': value = self.rtsnode.discovery_userid elif auth_attr == 'mutual_password': value = self.rtsnode.discovery_mutual_password elif auth_attr == 'mutual_userid': value = self.rtsnode.discovery_mutual_userid elif auth_attr == 'enable': value = self.rtsnode.discovery_enable_auth return value def ui_setgroup_discovery_auth(self, auth_attr, value): ''' This is the backend method for setting discovery auth attributes. @param auth_attr: The auth attribute to set the value of. 
@type auth_attr: str @param value: The auth's value @type value: str ''' self.assert_root() if value is None: value = '' if auth_attr == 'password': self.rtsnode.discovery_password = value elif auth_attr == 'userid': self.rtsnode.discovery_userid = value elif auth_attr == 'mutual_password': self.rtsnode.discovery_mutual_password = value elif auth_attr == 'mutual_userid': self.rtsnode.discovery_mutual_userid = value elif auth_attr == 'enable': self.rtsnode.discovery_enable_auth = value def refresh(self): self._children = set([]) for target in self.rtsnode.targets: self.shell.log.debug("Found target %s under fabric module %s." % (target.wwn, target.fabric_module)) if target.has_feature('tpgts'): UIMultiTPGTarget(target, self) else: UITarget(target, self) def summary(self): no_targets = len(self._children) if no_targets != 1: msg = "%d Targets" % no_targets else: msg = "%d Target" % no_targets return (msg, None) def ui_command_create(self, wwn=None): ''' Creates a new target. The I{wwn} format depends on the transport(s) supported by the fabric module. If the I{wwn} is ommited, then a target will be created using either a randomly generated WWN of the proper type, or the first unused WWN in the list of possible WWNs if one is available. If WWNs are constrained to a list (i.e. for hardware targets addresses) and all WWNs are in use, the target creation will fail. Use the B{info} command to get more information abour WWN type and possible values. SEE ALSO ======== B{info} ''' self.assert_root() target = Target(self.rtsnode, wwn, mode='create') wwn = target.wwn if target.has_feature('tpgts'): ui_target = UIMultiTPGTarget(target, self) self.shell.log.info("Created target %s." % wwn) return ui_target.ui_command_create() else: ui_target = UITarget(target, self) self.shell.log.info("Created target %s." % wwn) return self.new_node(ui_target) def ui_complete_create(self, parameters, text, current_param): ''' Parameter auto-completion method for user command create. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' spec = self.rtsnode.spec if current_param == 'wwn' and spec['wwn_list'] is not None: existing_wwns = [child.wwn for child in self.rtsnode.targets] completions = [wwn for wwn in spec['wwn_list'] if wwn.startswith(text) if wwn not in existing_wwns] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions def ui_command_delete(self, wwn): ''' Recursively deletes the target with the specified I{wwn}, and all objects hanging under it. SEE ALSO ======== B{create} ''' self.assert_root() target = Target(self.rtsnode, wwn, mode='lookup') target.delete() self.shell.log.info("Deleted Target %s." % wwn) self.refresh() def ui_complete_delete(self, parameters, text, current_param): ''' Parameter auto-completion method for user command delete. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. 
@type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'wwn': wwns = [child.name for child in self.children] completions = [wwn for wwn in wwns if wwn.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions def ui_command_info(self): ''' Displays information about the fabric module, notably the supported transports(s) and accepted B{wwn} format(s), as long as supported features. ''' spec = self.rtsnode.spec self.shell.log.info("Fabric module name: %s" % self.name) self.shell.log.info("ConfigFS path: %s" % self.rtsnode.path) if spec['wwn_list'] is not None: self.shell.log.info("Allowed WWNs list (%s type): %s" % (spec['wwn_type'], ', '.join(spec['wwn_list']))) else: self.shell.log.info("Supported WWN type: %s" % spec['wwn_type']) self.shell.log.info("Fabric module specfile: %s" % self.rtsnode.spec_file) self.shell.log.info("Fabric module features: %s" % ', '.join(spec['features'])) self.shell.log.info("Corresponding kernel module: %s" % spec['kernel_module']) def ui_command_version(self): ''' Displays the target fabric module version. ''' version = "Target fabric module %s: %s" \ % (self.rtsnode.name, self.rtsnode.version) self.shell.con.display(version.strip()) class UIMultiTPGTarget(UIRTSLibNode): ''' A generic target UI that has multiple TPGs. ''' def __init__(self, target, parent): UIRTSLibNode.__init__(self, target.wwn, target, parent) self.cfs_cwd = target.path self.refresh() def refresh(self): self._children = set([]) for tpg in self.rtsnode.tpgs: UITPG(tpg, self) def summary(self): if not self.rtsnode.fabric_module.is_valid_wwn(self.rtsnode.wwn): description = "INVALID WWN" is_healthy = False else: is_healthy = None no_tpgs = len(self._children) if no_tpgs != 1: description = "%d TPGs" % no_tpgs else: description = "%d TPG" % no_tpgs return (description, is_healthy) def ui_command_create(self, tag=None): ''' Creates a new Target Portal Group within the target. The I{tag} must be a strictly positive integer value. If omitted, the next available Target Portal Group Tag (TPG) will be used. SEE ALSO ======== B{delete} ''' self.assert_root() if tag is None: tags = [tpg.tag for tpg in self.rtsnode.tpgs] for index in range(1048576): if index not in tags and index > 0: tag = index break if tag is None: self.shell.log.error("Cannot find an available TPG Tag.") return else: self.shell.log.info("Selected TPG Tag %d." % tag) else: try: tag = int(tag) except ValueError: self.shell.log.error("The TPG Tag must be an integer value.") return else: if tag < 0: self.shell.log.error("The TPG Tag must be 0 or more.") return tpg = TPG(self.rtsnode, tag, mode='create') if self.shell.prefs['auto_enable_tpgt']: tpg.enable = True self.shell.log.info("Created TPG %s." % tpg.tag) ui_tpg = UITPG(tpg, self) return self.new_node(ui_tpg) def ui_command_delete(self, tag): ''' Deletes the Target Portal Group with TPG I{tag} from the target. The I{tag} must be a positive integer matching an existing TPG. SEE ALSO ======== B{create} ''' self.assert_root() if tag.startswith("tpg"): tag = tag[3:] tpg = TPG(self.rtsnode, int(tag), mode='lookup') tpg.delete() self.shell.log.info("Deleted TPG %s." % tag) self.refresh() def ui_complete_delete(self, parameters, text, current_param): ''' Parameter auto-completion method for user command delete. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. 
@type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'tag': tags = [child.name[4:] for child in self.children] completions = [tag for tag in tags if tag.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions class UITPG(UIRTSLibNode): ''' A generic TPG UI. ''' def __init__(self, tpg, parent): name = "tpg%d" % tpg.tag UIRTSLibNode.__init__(self, name, tpg, parent) self.cfs_cwd = tpg.path self.refresh() UILUNs(tpg, self) if tpg.has_feature('acls'): UINodeACLs(self.rtsnode, self) if tpg.has_feature('nps'): UIPortals(self.rtsnode, self) def summary(self): if self.rtsnode.has_feature('nexus'): description = ("nexus WWN %s" % self.rtsnode.nexus_wwn, True) elif self.rtsnode.enable: description = ("enabled", True) else: description = ("disabled", False) return description def ui_command_enable(self): ''' Enables the TPG. SEE ALSO ======== B{disable status} ''' self.assert_root() if self.rtsnode.enable: self.shell.log.info("The TPG is already enabled.") else: self.rtsnode.enable = True self.shell.log.info("The TPG has been enabled.") def ui_command_disable(self): ''' Disables the TPG. SEE ALSO ======== B{enable status} ''' self.assert_root() if self.rtsnode.enable: self.rtsnode.enable = False self.shell.log.info("The TPG has been disabled.") else: self.shell.log.info("The TPG is already disabled.") class UITarget(UITPG): ''' A generic target UI merged with its only TPG. ''' def __init__(self, target, parent): UITPG.__init__(self, TPG(target, 1), parent) self._name = target.wwn self.target = target self.rtsnode.enable = True def summary(self): if not self.target.fabric_module.is_valid_wwn(self.target.wwn): return ("INVALID WWN", False) else: return UITPG.summary(self) class UINodeACLs(UINode): ''' A generic UI for node ACLs. ''' def __init__(self, tpg, parent): UINode.__init__(self, "acls", parent) self.tpg = tpg self.cfs_cwd = "%s/acls" % tpg.path self.refresh() def refresh(self): self._children = set([]) for node_acl in self.tpg.node_acls: UINodeACL(node_acl, self) def summary(self): no_acls = len(self._children) if no_acls != 1: msg = "%d ACLs" % no_acls else: msg = "%d ACL" % no_acls return (msg, None) def ui_command_create(self, wwn, add_mapped_luns=None): ''' Creates a Node ACL for the initiator node with the specified I{wwn}. The node's I{wwn} must match the expected WWN Type of the target's fabric module. If I{add_mapped_luns} is omitted, the global parameter B{auto_add_mapped_luns} will be used, else B{true} or B{false} are accepted. If B{true}, then after creating the ACL, mapped LUNs will be automatically created for all existing LUNs. SEE ALSO ======== B{delete} ''' self.assert_root() spec = self.tpg.parent_target.fabric_module.spec if not utils.is_valid_wwn(spec['wwn_type'], wwn): self.shell.log.error("'%s' is not a valid %s WWN." % (wwn, spec['wwn_type'])) return add_mapped_luns = \ self.ui_eval_param(add_mapped_luns, 'bool', self.shell.prefs['auto_add_mapped_luns']) try: node_acl = NodeACL(self.tpg, wwn, mode="create") except RTSLibError, msg: self.shell.log.error(str(msg)) return else: self.shell.log.info("Created Node ACL for %s" % node_acl.node_wwn) ui_node_acl = UINodeACL(node_acl, self) if add_mapped_luns: for lun in self.tpg.luns: MappedLUN(node_acl, lun.lun, lun.lun, write_protect=False) self.shell.log.info("Created mapped LUN %d." 
% lun.lun) self.refresh() return self.new_node(ui_node_acl) def ui_command_delete(self, wwn): ''' Deletes the Node ACL with the specified I{wwn}. SEE ALSO ======== B{create} ''' self.assert_root() node_acl = NodeACL(self.tpg, wwn, mode='lookup') node_acl.delete() self.shell.log.info("Deleted Node ACL %s." % wwn) self.refresh() def ui_complete_delete(self, parameters, text, current_param): ''' Parameter auto-completion method for user command delete. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'wwn': wwns = [acl.node_wwn for acl in self.tpg.node_acls] completions = [wwn for wwn in wwns if wwn.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions class UINodeACL(UIRTSLibNode): ''' A generic UI for a node ACL. ''' def __init__(self, node_acl, parent): UIRTSLibNode.__init__(self, node_acl.node_wwn, node_acl, parent) if self.rtsnode.has_feature("acls_tcq_depth"): self.define_config_group_param( 'attribute', 'tcq_depth', 'string', "Command queue depth.", True) self.cfs_cwd = node_acl.path self.refresh() def ui_getgroup_attribute(self, attribute): ''' This is the backend method for getting attributes. @param attribute: The attribute to get the value of. @type attribute: str @return: The attribute's value @rtype: arbitrary ''' if attribute == 'tcq_depth' and self.rtsnode.has_feature("acls_tcq_depth"): return self.rtsnode.tcq_depth else: return self.rtsnode.get_attribute(attribute) def ui_setgroup_attribute(self, attribute, value): ''' This is the backend method for setting attributes. @param attribute: The attribute to set the value of. @type attribute: str @param value: The attribute's value @type value: arbitrary ''' self.assert_root() if attribute == 'tcq_depth' and self.rtsnode.has_feature("acls_tcq_depth"): self.rtsnode.tcq_depth = value else: self.rtsnode.set_attribute(attribute, value) def refresh(self): self._children = set([]) for mlun in self.rtsnode.mapped_luns: UIMappedLUN(mlun, self) def summary(self): no_mluns = len(self._children) if no_mluns != 1: msg = "%d Mapped LUNs" % no_mluns else: msg = "%d Mapped LUN" % no_mluns return (msg, None) def ui_command_create(self, mapped_lun, tpg_lun, write_protect=None): ''' Creates a mapping to one of the TPG LUNs for the initiator referenced by the ACL. The provided I{tpg_lun} will appear to that initiator as LUN I{mapped_lun}. If the I{write_protect} flag is set to B{1}, the initiator will not have write access to the Mapped LUN. SEE ALSO ======== B{delete} ''' self.assert_root() try: tpg_lun = int(tpg_lun) mapped_lun = int(mapped_lun) except ValueError: self.shell.log.error("Incorrect LUN value.") return if tpg_lun in (ml.tpg_lun.lun for ml in self.rtsnode.mapped_luns): self.shell.log.warning( "Warning: TPG LUN %d already mapped to this NodeACL" % tpg_lun) mlun = MappedLUN(self.rtsnode, mapped_lun, tpg_lun, write_protect) ui_mlun = UIMappedLUN(mlun, self) self.shell.log.info("Created Mapped LUN %s." % mlun.mapped_lun) return self.new_node(ui_mlun) def ui_command_delete(self, mapped_lun): ''' Deletes the specified I{mapped_lun}. SEE ALSO ======== B{create} ''' self.assert_root() mlun = MappedLUN(self.rtsnode, mapped_lun) mlun.delete() self.shell.log.info("Deleted Mapped LUN %s." 
% mapped_lun) self.refresh() def ui_complete_delete(self, parameters, text, current_param): ''' Parameter auto-completion method for user command delete. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'mapped_lun': mluns = [str(mlun.mapped_lun) for mlun in self.rtsnode.mapped_luns] completions = [mlun for mlun in mluns if mlun.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions class UIMappedLUN(UIRTSLibNode): ''' A generic UI for MappedLUN objects. ''' def __init__(self, mapped_lun, parent): name = "mapped_lun%d" % mapped_lun.mapped_lun UIRTSLibNode.__init__(self, name, mapped_lun, parent) self.cfs_cwd = mapped_lun.path self.refresh() def summary(self): mapped_lun = self.rtsnode is_healthy = True try: tpg_lun = mapped_lun.tpg_lun except RTSLibBrokenLink: description = "BROKEN LUN LINK" is_healthy = False else: if mapped_lun.write_protect: access_mode = 'ro' else: access_mode = 'rw' description = "lun%d (%s)" % (tpg_lun.lun, access_mode) return (description, is_healthy) class UILUNs(UINode): ''' A generic UI for TPG LUNs. ''' def __init__(self, tpg, parent): UINode.__init__(self, "luns", parent) self.cfs_cwd = "%s/lun" % tpg.path self.tpg = tpg self.refresh() def refresh(self): self._children = set([]) for lun in self.tpg.luns: UILUN(lun, self) def summary(self): no_luns = len(self._children) if no_luns != 1: msg = "%d LUNs" % no_luns else: msg = "%d LUN" % no_luns return (msg, None) def ui_command_create(self, storage_object, lun=None, add_mapped_luns=None): ''' Creates a new LUN in the Target Portal Group, attached to a storage object. If the I{lun} parameter is omitted, the first available LUN in the TPG will be used. If present, it must be a number greater than 0. Alternatively, the syntax I{lunX} where I{X} is a positive number is also accepted. The I{storage_object} must be the path of an existing storage object, i.e. B{/backstore/pscsi0/mydisk} to reference the B{mydisk} storage object of the virtual HBA B{pscsi0}. If I{add_mapped_luns} is omitted, the global parameter B{auto_add_mapped_luns} will be used, else B{true} or B{false} are accepted. If B{true}, then after creating the LUN, mapped LUNs will be automatically created for all existing node ACLs, mapping the new LUN. SEE ALSO ======== B{delete} ''' self.assert_root() if lun is None: luns = [lun.lun for lun in self.tpg.luns] for index in range(1048576): if index not in luns: lun = index break if lun is None: self.shell.log.error("Cannot find an available LUN.") return else: self.shell.log.info("Selected LUN %d." % lun) else: try: if lun.startswith('lun'): lun = lun[3:] lun = int(lun) except ValueError: self.shell.log.error("The LUN must be an integer value.") return else: if lun < 0: self.shell.log.error("The LUN cannot be negative.") return add_mapped_luns = \ self.ui_eval_param(add_mapped_luns, 'bool', self.shell.prefs['auto_add_mapped_luns']) try: storage_object = self.get_node(storage_object).rtsnode except ValueError: self.shell.log.error("Invalid storage object %s." % storage_object) return lun_object = LUN(self.tpg, lun, storage_object) self.shell.log.info("Created LUN %s." 
% lun_object.lun) ui_lun = UILUN(lun_object, self) if add_mapped_luns: for acl in self.tpg.node_acls: mapped_lun = lun existing_mluns = [mlun.mapped_lun for mlun in acl.mapped_luns] if mapped_lun in existing_mluns: tentative_mlun = 0 while mapped_lun == lun: if tentative_mlun not in existing_mluns: mapped_lun = tentative_mlun self.shell.log.warning( "Mapped LUN %d already " % lun + "exists in ACL %s, using %d instead." % (acl.node_wwn, mapped_lun)) else: tentative_mlun += 1 mlun = MappedLUN(acl, mapped_lun, lun, write_protect=False) self.shell.log.info("Created mapped LUN %d in node ACL %s" % (mapped_lun, acl.node_wwn)) self.parent.refresh() return self.new_node(ui_lun) def ui_complete_create(self, parameters, text, current_param): ''' Parameter auto-completion method for user command create. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'storage_object': storage_objects = [] for backstore in self.get_node('/backstores').children: for storage_object in backstore.children: storage_objects.append(storage_object.path) completions = [so for so in storage_objects if so.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions def ui_command_delete(self, lun): ''' Deletes the supplied LUN from the Target Portal Group. The I{lun} must be a positive number matching an existing LUN. Alternatively, the syntax I{lunX} where I{X} is a positive number is also accepted. SEE ALSO ======== B{create} ''' self.assert_root() if lun.lower().startswith("lun"): lun = lun[3:] try: lun = int(lun) lun_object = LUN(self.tpg, lun) except: raise RTSLibError("Invalid LUN") lun_object.delete() self.shell.log.info("Deleted LUN %s." % lun) # Refresh the TPG as we need to also refresh acls MappedLUNs self.parent.refresh() def ui_complete_delete(self, parameters, text, current_param): ''' Parameter auto-completion method for user command delete. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'lun': luns = [str(lun.lun) for lun in self.tpg.luns] completions = [lun for lun in luns if lun.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions class UILUN(UIRTSLibNode): ''' A generic UI for LUN objects. ''' def __init__(self, lun, parent): name = "lun%d" % lun.lun UIRTSLibNode.__init__(self, name, lun, parent) self.cfs_cwd = lun.path self.refresh() def summary(self): lun = self.rtsnode is_healthy = True try: storage_object = lun.storage_object except RTSLibBrokenLink: description = "BROKEN STORAGE LINK" is_healthy = False else: backstore = storage_object.backstore if backstore.plugin.startswith("rd"): path = "ramdisk" else: path = storage_object.udev_path if self.shell.prefs['legacy_hba_view']: description = "%s%s/%s (%s)" % (backstore.plugin, backstore.index, storage_object.name, path) else: description = "%s/%s (%s)" % (backstore.plugin, dedup_so_name(storage_object), path) return (description, is_healthy) class UIPortals(UINode): ''' A generic UI for TPG network portals. 
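
    Illustrative shell usage (sketch only; the target WWN, IP address and
    port below are placeholder examples, not values taken from this module):

        /iscsi/iqn.2003-01.org.example:sn.1234/tpg1/portals> create 192.168.0.10 3260
        /iscsi/iqn.2003-01.org.example:sn.1234/tpg1/portals> delete 192.168.0.10 3260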
''' def __init__(self, tpg, parent): UINode.__init__(self, "portals", parent) self.tpg = tpg self.cfs_cwd = "%s/np" % tpg.path self.refresh() def refresh(self): self._children = set([]) for portal in self.tpg.network_portals: UIPortal(portal, self) def summary(self): no_portals = len(self._children) if no_portals != 1: msg = "%d Portals" % no_portals else: msg = "%d Portal" % no_portals return (msg, None) def ui_command_create(self, ip_address=None, ip_port=None): ''' Creates a Network Portal with specified I{ip_address} and I{ip_port}. If I{ip_port} is omitted, the default port for the target fabric will be used. If I{ip_address} is omitted, the first IP address found matching the local hostname will be used. SEE ALSO ======== B{delete} ''' self.assert_root() try: listen_all = int(ip_address.replace(".", "")) == 0 except: listen_all = False if listen_all: ip_address = "0.0.0.0" if ip_port is None: # FIXME: Add a specfile parameter to determine that ip_port = 3260 self.shell.log.info("Using default IP port %d" % ip_port) if ip_address is None: if not ip_address: ip_address = utils.get_main_ip() if ip_address: self.shell.log.info("Automatically selected IP address %s." % ip_address) else: self.shell.log.error("Cannot find a usable IP address to " + "create the Network Portal.") return elif ip_address not in utils.list_eth_ips() and not listen_all: self.shell.log.error("IP address does not exist: %s" % ip_address) return try: ip_port = int(ip_port) except ValueError: self.shell.log.error("The ip_port must be an integer value.") return portal = NetworkPortal(self.tpg, ip_address, ip_port, mode='create') self.shell.log.info("Created network portal %s:%d." % (ip_address, ip_port)) ui_portal = UIPortal(portal, self) return self.new_node(ui_portal) def ui_complete_create(self, parameters, text, current_param): ''' Parameter auto-completion method for user command create. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' if current_param == 'ip_address': completions = [addr for addr in utils.list_eth_ips() if addr.startswith(text)] else: completions = [] if len(completions) == 1: return [completions[0] + ' '] else: return completions def ui_command_delete(self, ip_address, ip_port): ''' Deletes the Network Portal with specified I{ip_address} and I{ip_port}. SEE ALSO ======== B{create} ''' self.assert_root() portal = NetworkPortal(self.tpg, ip_address, ip_port, mode='lookup') portal.delete() self.shell.log.info("Deleted network portal %s:%s" % (ip_address, ip_port)) self.refresh() def ui_complete_delete(self, parameters, text, current_param): ''' Parameter auto-completion method for user command delete. @param parameters: Parameters on the command line. @type parameters: dict @param text: Current text of parameter being typed by the user. @type text: str @param current_param: Name of parameter to complete. @type current_param: str @return: Possible completions @rtype: list of str ''' completions = [] # TODO: Check if a dict comprehension is acceptable here with supported # XXX: python versions. 
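        # One possible shape, as a sketch only (untested here): build the same
        # address -> ports mapping with a single expression. The dict() +
        # generator form works on python 2.6, whereas a {k: v for ...} literal
        # needs 2.7+; it also rescans the portal list once per portal, so the
        # explicit O(n) loop below is kept as-is.
        #
        #   portals = dict((p.ip_address,
        #                   [str(q.port) for q in self.tpg.network_portals
        #                    if q.ip_address == p.ip_address])
        #                  for p in self.tpg.network_portals)
        #   all_ports = set(str(p.port) for p in self.tpg.network_portals)
        #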
        portals = {}
        all_ports = set([])
        for portal in self.tpg.network_portals:
            all_ports.add(str(portal.port))
            if not portal.ip_address in portals:
                portals[portal.ip_address] = []
            portals[portal.ip_address].append(str(portal.port))

        if current_param == 'ip_address':
            if 'ip_port' in parameters:
                port = parameters['ip_port']
                completions = [addr for addr in portals
                               if port in portals[addr]
                               if addr.startswith(text)]
            else:
                completions = [addr for addr in portals
                               if addr.startswith(text)]
        elif current_param == 'ip_port':
            if 'ip_address' in parameters:
                addr = parameters['ip_address']
                if addr in portals:
                    completions = [port for port in portals[addr]
                                   if port.startswith(text)]
            else:
                completions = [port for port in all_ports
                               if port.startswith(text)]

        if len(completions) == 1:
            return [completions[0] + ' ']
        else:
            return completions


class UIPortal(UIRTSLibNode):
    '''
    A generic UI for a network portal.
    '''
    def __init__(self, portal, parent):
        name = "%s:%s" % (portal.ip_address, portal.port)
        UIRTSLibNode.__init__(self, name, portal, parent)
        self.cfs_cwd = portal.path
        self.portal = portal
        self.refresh()

    def summary(self):
        if self.portal._get_iser_attr():
            return ('OK, iser enabled', True)
        else:
            return ('OK, iser disabled', True)

    def ui_command_iser_enable(self):
        '''
        Enables iser operation on a network portal.
        '''
        if self.portal._get_iser_attr() == True:
            self.shell.log.info("iser operation has already been enabled")
        else:
            self.portal._set_iser_attr(True)
            self.shell.log.info("iser operation has been enabled")

    def ui_command_iser_disable(self):
        '''
        Disables iser operation on a network portal.
        '''
        if self.portal._get_iser_attr() == False:
            self.shell.log.info("iser operation has already been disabled")
        else:
            self.portal._set_iser_attr(False)
            self.shell.log.info("iser operation has been disabled")
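
# ----------------------------------------------------------------------------
# Illustrative sketch (comments only, never executed): the rtslib calls that
# the UI commands above wrap, end to end. The fabric_module object, the
# example initiator IQN, the portal address and the storage_object variable
# are assumptions made for the example and are not defined in this module.
#
#   target = Target(fabric_module, None, mode='create')   # random/next WWN
#   tpg = TPG(target, 1, mode='create')
#   tpg.enable = True
#   lun = LUN(tpg, 0, storage_object)
#   acl = NodeACL(tpg, 'iqn.1994-05.com.example:initiator', mode='create')
#   MappedLUN(acl, 0, lun.lun, write_protect=False)
#   NetworkPortal(tpg, '0.0.0.0', 3260, mode='create')
# ----------------------------------------------------------------------------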