swift-2.7.1/0000775000567000056710000000000013024044470014014 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/0000775000567000056710000000000013024044470014773 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/__init__.py0000664000567000056710000000466513024044352017116 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # See http://code.google.com/p/python-nose/issues/detail?id=373 # The code below enables nosetests to work with i18n _() blocks from __future__ import print_function import sys import os try: from unittest.util import safe_repr except ImportError: # Probably py26 _MAX_LENGTH = 80 def safe_repr(obj, short=False): try: result = repr(obj) except Exception: result = object.__repr__(obj) if not short or len(result) < _MAX_LENGTH: return result return result[:_MAX_LENGTH] + ' [truncated]...' # make unittests pass on all locale import swift setattr(swift, 'gettext_', lambda x: x) from swift.common.utils import readconf # Work around what seems to be a Python bug. # c.f. https://bugs.launchpad.net/swift/+bug/820185. import logging logging.raiseExceptions = False def get_config(section_name=None, defaults=None): """ Attempt to get a test config dictionary. :param section_name: the section to read (all sections if not defined) :param defaults: an optional dictionary namespace of defaults """ config = {} if defaults is not None: config.update(defaults) config_file = os.environ.get('SWIFT_TEST_CONFIG_FILE', '/etc/swift/test.conf') try: config = readconf(config_file, section_name) except SystemExit: if not os.path.exists(config_file): print('Unable to read test config %s - file not found' % config_file, file=sys.stderr) elif not os.access(config_file, os.R_OK): print('Unable to read test config %s - permission denied' % config_file, file=sys.stderr) else: print('Unable to read test config %s - section %s not found' % (config_file, section_name), file=sys.stderr) return config swift-2.7.1/test/sample.conf0000664000567000056710000000743713024044352017135 0ustar jenkinsjenkins00000000000000[func_test] # sample config for Swift with tempauth auth_host = 127.0.0.1 auth_port = 8080 auth_ssl = no auth_prefix = /auth/ ## sample config for Swift with Keystone v2 API # For keystone v2 change auth_version to 2 and auth_prefix to /v2.0/ # And "allow_account_management" should not be set "true" #auth_version = 3 #auth_host = localhost #auth_port = 5000 #auth_ssl = no #auth_prefix = /v3/ # Primary functional test account (needs admin access to the account) account = test username = tester password = testing # User on a second account (needs admin access to the account) account2 = test2 username2 = tester2 password2 = testing2 # User on same account as first, but without admin access username3 = tester3 password3 = testing3 # Fourth user is required for keystone v3 specific tests. # Account must be in a non-default domain. 
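# The commented values below are placeholders; point them at credentials that
# actually exist in your Keystone deployment before uncommenting them.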
#account4 = test4 #username4 = tester4 #password4 = testing4 #domain4 = test-domain # Fifth user is required for service token-specific tests. # The account must be different than the primary test account # The user must not have a group (tempauth) or role (keystoneauth) on # the primary test account. The user must have a group/role that is unique # and not given to the primary tester and is specified in the options # _require_group (tempauth) or _service_roles (keystoneauth). #account5 = test5 #username5 = tester5 #password5 = testing5 # The service_prefix option is used for service token-specific tests. # If service_prefix or username5 above is not supplied, the tests are skipped. # To set the value and enable the service token tests, look at the # reseller_prefix option in /etc/swift/proxy-server.conf. There must be at # least two prefixes. If not, add a prefix as follows (where we add SERVICE): # reseller_prefix = AUTH, SERVICE # The service_prefix must match the used in _require_group # (tempauth) or _service_roles (keystoneauth); for example: # SERVICE_require_group = service # SERVICE_service_roles = service # Note: Do not enable service token tests if the first prefix in # reseller_prefix is the empty prefix AND the primary functional test # account contains an underscore. #service_prefix = SERVICE # Sixth user is required for access control tests. # Account must have a role for reseller_admin_role(keystoneauth). #account6 = test #username6 = tester6 #password6 = testing6 collate = C # Only necessary if a pre-existing server uses self-signed certificate insecure = no [unit_test] fake_syslog = False [probe_test] # check_server_timeout = 30 # validate_rsync = false [swift-constraints] # The functional test runner will try to use the constraint values provided in # the swift-constraints section of test.conf. # # If a constraint value does not exist in that section, or because the # swift-constraints section does not exist, the constraints values found in # the /info API call (if successful) will be used. # # If a constraint value cannot be found in the /info results, either because # the /info API call failed, or a value is not present, the constraint value # used will fall back to those loaded by the constraints module at time of # import (which will attempt to load /etc/swift/swift.conf, see the # swift.common.constraints module for more information). # # Note that the cluster must have "sane" values for the test suite to pass # (for some definition of sane). # #max_file_size = 5368709122 #max_meta_name_length = 128 #max_meta_value_length = 256 #max_meta_count = 90 #max_meta_overall_size = 4096 #max_header_size = 8192 #extra_header_count = 0 #max_object_name_length = 1024 #container_listing_limit = 10000 #account_listing_limit = 10000 #max_account_name_length = 256 #max_container_name_length = 256 # Newer swift versions default to strict cors mode, but older ones were the # opposite. #strict_cors_mode = true swift-2.7.1/test/unit/0000775000567000056710000000000013024044470015752 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/__init__.py0000664000567000056710000010361313024044354020070 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Swift tests """ from __future__ import print_function import os import copy import logging import errno from six.moves import range import sys from contextlib import contextmanager, closing from collections import defaultdict, Iterable import itertools from numbers import Number from tempfile import NamedTemporaryFile import time import eventlet from eventlet.green import socket from tempfile import mkdtemp from shutil import rmtree from swift.common.utils import Timestamp, NOTICE from test import get_config from swift.common import utils from swift.common.header_key_dict import HeaderKeyDict from swift.common.ring import Ring, RingData from hashlib import md5 import logging.handlers from six.moves.http_client import HTTPException from swift.common import storage_policy from swift.common.storage_policy import (StoragePolicy, ECStoragePolicy, VALID_EC_TYPES) import functools import six.moves.cPickle as pickle from gzip import GzipFile import mock as mocklib import inspect EMPTY_ETAG = md5().hexdigest() # try not to import this module from swift if not os.path.basename(sys.argv[0]).startswith('swift'): # never patch HASH_PATH_SUFFIX AGAIN! utils.HASH_PATH_SUFFIX = 'endcap' EC_TYPE_PREFERENCE = [ 'liberasurecode_rs_vand', 'jerasure_rs_vand', ] for eclib_name in EC_TYPE_PREFERENCE: if eclib_name in VALID_EC_TYPES: break else: raise SystemExit('ERROR: unable to find suitable PyECLib type' ' (none of %r found in %r)' % ( EC_TYPE_PREFERENCE, VALID_EC_TYPES, )) DEFAULT_TEST_EC_TYPE = eclib_name def patch_policies(thing_or_policies=None, legacy_only=False, with_ec_default=False, fake_ring_args=None): if isinstance(thing_or_policies, ( Iterable, storage_policy.StoragePolicyCollection)): return PatchPolicies(thing_or_policies, fake_ring_args=fake_ring_args) if legacy_only: default_policies = [ StoragePolicy(0, name='legacy', is_default=True), ] default_ring_args = [{}] elif with_ec_default: default_policies = [ ECStoragePolicy(0, name='ec', is_default=True, ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=4, ec_segment_size=4096), StoragePolicy(1, name='unu'), ] default_ring_args = [{'replicas': 14}, {}] else: default_policies = [ StoragePolicy(0, name='nulo', is_default=True), StoragePolicy(1, name='unu'), ] default_ring_args = [{}, {}] fake_ring_args = fake_ring_args or default_ring_args decorator = PatchPolicies(default_policies, fake_ring_args=fake_ring_args) if not thing_or_policies: return decorator else: # it's a thing, we return the wrapped thing instead of the decorator return decorator(thing_or_policies) class PatchPolicies(object): """ Why not mock.patch? In my case, when used as a decorator on the class it seemed to patch setUp at the wrong time (i.e. 
in setup the global wasn't patched yet) """ def __init__(self, policies, fake_ring_args=None): if isinstance(policies, storage_policy.StoragePolicyCollection): self.policies = policies else: self.policies = storage_policy.StoragePolicyCollection(policies) self.fake_ring_args = fake_ring_args or [None] * len(self.policies) def _setup_rings(self): """ Our tests tend to use the policies rings like their own personal playground - which can be a problem in the particular case of a patched TestCase class where the FakeRing objects are scoped in the call to the patch_policies wrapper outside of the TestCase instance which can lead to some bled state. To help tests get better isolation without having to think about it, here we're capturing the args required to *build* a new FakeRing instances so we can ensure each test method gets a clean ring setup. The TestCase can always "tweak" these fresh rings in setUp - or if they'd prefer to get the same "reset" behavior with custom FakeRing's they can pass in their own fake_ring_args to patch_policies instead of setting the object_ring on the policy definitions. """ for policy, fake_ring_arg in zip(self.policies, self.fake_ring_args): if fake_ring_arg is not None: policy.object_ring = FakeRing(**fake_ring_arg) def __call__(self, thing): if isinstance(thing, type): return self._patch_class(thing) else: return self._patch_method(thing) def _patch_class(self, cls): """ Creating a new class that inherits from decorated class is the more common way I've seen class decorators done - but it seems to cause infinite recursion when super is called from inside methods in the decorated class. """ orig_setUp = cls.setUp orig_tearDown = cls.tearDown def setUp(cls_self): self._orig_POLICIES = storage_policy._POLICIES if not getattr(cls_self, '_policies_patched', False): storage_policy._POLICIES = self.policies self._setup_rings() cls_self._policies_patched = True orig_setUp(cls_self) def tearDown(cls_self): orig_tearDown(cls_self) storage_policy._POLICIES = self._orig_POLICIES cls.setUp = setUp cls.tearDown = tearDown return cls def _patch_method(self, f): @functools.wraps(f) def mywrapper(*args, **kwargs): self._orig_POLICIES = storage_policy._POLICIES try: storage_policy._POLICIES = self.policies self._setup_rings() return f(*args, **kwargs) finally: storage_policy._POLICIES = self._orig_POLICIES return mywrapper def __enter__(self): self._orig_POLICIES = storage_policy._POLICIES storage_policy._POLICIES = self.policies def __exit__(self, *args): storage_policy._POLICIES = self._orig_POLICIES class FakeRing(Ring): def __init__(self, replicas=3, max_more_nodes=0, part_power=0, base_port=1000): self._base_port = base_port self.max_more_nodes = max_more_nodes self._part_shift = 32 - part_power # 9 total nodes (6 more past the initial 3) is the cap, no matter if # this is set higher, or R^2 for R replicas self.set_replicas(replicas) self._reload() def _reload(self): self._rtime = time.time() def set_replicas(self, replicas): self.replicas = replicas self._devs = [] for x in range(self.replicas): ip = '10.0.0.%s' % x port = self._base_port + x self._devs.append({ 'ip': ip, 'replication_ip': ip, 'port': port, 'replication_port': port, 'device': 'sd' + (chr(ord('a') + x)), 'zone': x % 3, 'region': x % 2, 'id': x, }) @property def replica_count(self): return self.replicas def _get_part_nodes(self, part): return [dict(node, index=i) for i, node in enumerate(list(self._devs))] def get_more_nodes(self, part): for x in range(self.replicas, (self.replicas + self.max_more_nodes)): 
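            # Handoff nodes just continue the primaries' numbering: the id,
            # last IP octet and port all keep counting up from self.replicas,
            # while the device name is always 'sda'.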
yield {'ip': '10.0.0.%s' % x, 'replication_ip': '10.0.0.%s' % x, 'port': self._base_port + x, 'replication_port': self._base_port + x, 'device': 'sda', 'zone': x % 3, 'region': x % 2, 'id': x} def write_fake_ring(path, *devs): """ Pretty much just a two node, two replica, 2 part power ring... """ dev1 = {'id': 0, 'zone': 0, 'device': 'sda1', 'ip': '127.0.0.1', 'port': 6000} dev2 = {'id': 0, 'zone': 0, 'device': 'sdb1', 'ip': '127.0.0.1', 'port': 6000} dev1_updates, dev2_updates = devs or ({}, {}) dev1.update(dev1_updates) dev2.update(dev2_updates) replica2part2dev_id = [[0, 1, 0, 1], [1, 0, 1, 0]] devs = [dev1, dev2] part_shift = 30 with closing(GzipFile(path, 'wb')) as f: pickle.dump(RingData(replica2part2dev_id, devs, part_shift), f) class FabricatedRing(Ring): """ When a FakeRing just won't do - you can fabricate one to meet your tests needs. """ def __init__(self, replicas=6, devices=8, nodes=4, port=6000, part_power=4): self.devices = devices self.nodes = nodes self.port = port self.replicas = 6 self.part_power = part_power self._part_shift = 32 - self.part_power self._reload() def _reload(self, *args, **kwargs): self._rtime = time.time() * 2 if hasattr(self, '_replica2part2dev_id'): return self._devs = [{ 'region': 1, 'zone': 1, 'weight': 1.0, 'id': i, 'device': 'sda%d' % i, 'ip': '10.0.0.%d' % (i % self.nodes), 'replication_ip': '10.0.0.%d' % (i % self.nodes), 'port': self.port, 'replication_port': self.port, } for i in range(self.devices)] self._replica2part2dev_id = [ [None] * 2 ** self.part_power for i in range(self.replicas) ] dev_ids = itertools.cycle(range(self.devices)) for p in range(2 ** self.part_power): for r in range(self.replicas): self._replica2part2dev_id[r][p] = next(dev_ids) class FakeMemcache(object): def __init__(self): self.store = {} def get(self, key): return self.store.get(key) def keys(self): return self.store.keys() def set(self, key, value, time=0): self.store[key] = value return True def incr(self, key, time=0): self.store[key] = self.store.setdefault(key, 0) + 1 return self.store[key] @contextmanager def soft_lock(self, key, timeout=0, retries=5): yield True def delete(self, key): try: del self.store[key] except Exception: pass return True def readuntil2crlfs(fd): rv = '' lc = '' crlfs = 0 while crlfs < 2: c = fd.read(1) if not c: raise ValueError("didn't get two CRLFs; just got %r" % rv) rv = rv + c if c == '\r' and lc != '\n': crlfs = 0 if lc == '\r' and c == '\n': crlfs += 1 lc = c return rv def connect_tcp(hostport): rv = socket.socket() rv.connect(hostport) return rv @contextmanager def tmpfile(content): with NamedTemporaryFile('w', delete=False) as f: file_name = f.name f.write(str(content)) try: yield file_name finally: os.unlink(file_name) xattr_data = {} def _get_inode(fd): if not isinstance(fd, int): try: fd = fd.fileno() except AttributeError: return os.stat(fd).st_ino return os.fstat(fd).st_ino def _setxattr(fd, k, v): inode = _get_inode(fd) data = xattr_data.get(inode, {}) data[k] = v xattr_data[inode] = data def _getxattr(fd, k): inode = _get_inode(fd) data = xattr_data.get(inode, {}).get(k) if not data: raise IOError(errno.ENODATA, "Fake IOError") return data import xattr xattr.setxattr = _setxattr xattr.getxattr = _getxattr @contextmanager def temptree(files, contents=''): # generate enough contents to fill the files c = len(files) contents = (list(contents) + [''] * c)[:c] tempdir = mkdtemp() for path, content in zip(files, contents): if os.path.isabs(path): path = '.' 
+ path new_path = os.path.join(tempdir, path) subdir = os.path.dirname(new_path) if not os.path.exists(subdir): os.makedirs(subdir) with open(new_path, 'w') as f: f.write(str(content)) try: yield tempdir finally: rmtree(tempdir) def with_tempdir(f): """ Decorator to give a single test a tempdir as argument to test method. """ @functools.wraps(f) def wrapped(*args, **kwargs): tempdir = mkdtemp() args = list(args) args.append(tempdir) try: return f(*args, **kwargs) finally: rmtree(tempdir) return wrapped class NullLoggingHandler(logging.Handler): def emit(self, record): pass class UnmockTimeModule(object): """ Even if a test mocks time.time - you can restore unmolested behavior in a another module who imports time directly by monkey patching it's imported reference to the module with an instance of this class """ _orig_time = time.time def __getattribute__(self, name): if name == 'time': return UnmockTimeModule._orig_time return getattr(time, name) # logging.LogRecord.__init__ calls time.time logging.time = UnmockTimeModule() class WARN_DEPRECATED(Exception): def __init__(self, msg): self.msg = msg print(self.msg) class FakeLogger(logging.Logger, object): # a thread safe fake logger def __init__(self, *args, **kwargs): self._clear() self.name = 'swift.unit.fake_logger' self.level = logging.NOTSET if 'facility' in kwargs: self.facility = kwargs['facility'] self.statsd_client = None self.thread_locals = None self.parent = None store_in = { logging.ERROR: 'error', logging.WARNING: 'warning', logging.INFO: 'info', logging.DEBUG: 'debug', logging.CRITICAL: 'critical', NOTICE: 'notice', } def warn(self, *args, **kwargs): raise WARN_DEPRECATED("Deprecated Method warn use warning instead") def notice(self, msg, *args, **kwargs): """ Convenience function for syslog priority LOG_NOTICE. The python logging lvl is set to 25, just above info. SysLogHandler is monkey patched to map this log lvl to the LOG_NOTICE syslog priority. 
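        The captured line lands in lines_dict['notice'], so tests can
        assert on it via get_lines_for_level('notice').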
""" self.log(NOTICE, msg, *args, **kwargs) def _log(self, level, msg, *args, **kwargs): store_name = self.store_in[level] cargs = [msg] if any(args): cargs.extend(args) captured = dict(kwargs) if 'exc_info' in kwargs and \ not isinstance(kwargs['exc_info'], tuple): captured['exc_info'] = sys.exc_info() self.log_dict[store_name].append((tuple(cargs), captured)) super(FakeLogger, self)._log(level, msg, *args, **kwargs) def _clear(self): self.log_dict = defaultdict(list) self.lines_dict = {'critical': [], 'error': [], 'info': [], 'warning': [], 'debug': [], 'notice': []} clear = _clear # this is a public interface def get_lines_for_level(self, level): if level not in self.lines_dict: raise KeyError( "Invalid log level '%s'; valid levels are %s" % (level, ', '.join("'%s'" % lvl for lvl in sorted(self.lines_dict)))) return self.lines_dict[level] def all_log_lines(self): return dict((level, msgs) for level, msgs in self.lines_dict.items() if len(msgs) > 0) def _store_in(store_name): def stub_fn(self, *args, **kwargs): self.log_dict[store_name].append((args, kwargs)) return stub_fn # mock out the StatsD logging methods: update_stats = _store_in('update_stats') increment = _store_in('increment') decrement = _store_in('decrement') timing = _store_in('timing') timing_since = _store_in('timing_since') transfer_rate = _store_in('transfer_rate') set_statsd_prefix = _store_in('set_statsd_prefix') def get_increments(self): return [call[0][0] for call in self.log_dict['increment']] def get_increment_counts(self): counts = {} for metric in self.get_increments(): if metric not in counts: counts[metric] = 0 counts[metric] += 1 return counts def setFormatter(self, obj): self.formatter = obj def close(self): self._clear() def set_name(self, name): # don't touch _handlers self._name = name def acquire(self): pass def release(self): pass def createLock(self): pass def emit(self, record): pass def _handle(self, record): try: line = record.getMessage() except TypeError: print('WARNING: unable to format log message %r %% %r' % ( record.msg, record.args)) raise self.lines_dict[record.levelname.lower()].append(line) def handle(self, record): self._handle(record) def flush(self): pass def handleError(self, record): pass class DebugSwiftLogFormatter(utils.SwiftLogFormatter): def format(self, record): msg = super(DebugSwiftLogFormatter, self).format(record) return msg.replace('#012', '\n') class DebugLogger(FakeLogger): """A simple stdout logging version of FakeLogger""" def __init__(self, *args, **kwargs): FakeLogger.__init__(self, *args, **kwargs) self.formatter = DebugSwiftLogFormatter( "%(server)s %(levelname)s: %(message)s") def handle(self, record): self._handle(record) print(self.formatter.format(record)) class DebugLogAdapter(utils.LogAdapter): def _send_to_logger(name): def stub_fn(self, *args, **kwargs): return getattr(self.logger, name)(*args, **kwargs) return stub_fn # delegate to FakeLogger's mocks update_stats = _send_to_logger('update_stats') increment = _send_to_logger('increment') decrement = _send_to_logger('decrement') timing = _send_to_logger('timing') timing_since = _send_to_logger('timing_since') transfer_rate = _send_to_logger('transfer_rate') set_statsd_prefix = _send_to_logger('set_statsd_prefix') def __getattribute__(self, name): try: return object.__getattribute__(self, name) except AttributeError: return getattr(self.__dict__['logger'], name) def debug_logger(name='test'): """get a named adapted debug logger""" return DebugLogAdapter(DebugLogger(), name) original_syslog_handler = 
logging.handlers.SysLogHandler def fake_syslog_handler(): for attr in dir(original_syslog_handler): if attr.startswith('LOG'): setattr(FakeLogger, attr, copy.copy(getattr(logging.handlers.SysLogHandler, attr))) FakeLogger.priority_map = \ copy.deepcopy(logging.handlers.SysLogHandler.priority_map) logging.handlers.SysLogHandler = FakeLogger if utils.config_true_value( get_config('unit_test').get('fake_syslog', 'False')): fake_syslog_handler() class MockTrue(object): """ Instances of MockTrue evaluate like True Any attr accessed on an instance of MockTrue will return a MockTrue instance. Any method called on an instance of MockTrue will return a MockTrue instance. >>> thing = MockTrue() >>> thing True >>> thing == True # True == True True >>> thing == False # True == False False >>> thing != True # True != True False >>> thing != False # True != False True >>> thing.attribute True >>> thing.method() True >>> thing.attribute.method() True >>> thing.method().attribute True """ def __getattribute__(self, *args, **kwargs): return self def __call__(self, *args, **kwargs): return self def __repr__(*args, **kwargs): return repr(True) def __eq__(self, other): return other is True def __ne__(self, other): return other is not True @contextmanager def mock(update): returns = [] deletes = [] for key, value in update.items(): imports = key.split('.') attr = imports.pop(-1) module = __import__(imports[0], fromlist=imports[1:]) for modname in imports[1:]: module = getattr(module, modname) if hasattr(module, attr): returns.append((module, attr, getattr(module, attr))) else: deletes.append((module, attr)) setattr(module, attr, value) try: yield True finally: for module, attr, value in returns: setattr(module, attr, value) for module, attr in deletes: delattr(module, attr) class FakeStatus(object): """ This will work with our fake_http_connect, if you hand in one of these instead of a status int or status int tuple to the "codes" iter you can add some eventlet sleep to the expect and response stages of the connection. """ def __init__(self, status, expect_sleep=None, response_sleep=None): """ :param status: the response status int, or a tuple of ([expect_status, ...], response_status) :param expect_sleep: float, time to eventlet sleep during expect, can be a iter of floats :param response_sleep: float, time to eventlet sleep during response """ # connect exception if isinstance(status, (Exception, eventlet.Timeout)): raise status if isinstance(status, tuple): self.expect_status = list(status[:-1]) self.status = status[-1] self.explicit_expect_list = True else: self.expect_status, self.status = ([], status) self.explicit_expect_list = False if not self.expect_status: # when a swift backend service returns a status before reading # from the body (mostly an error response) eventlet.wsgi will # respond with that status line immediately instead of 100 # Continue, even if the client sent the Expect 100 header. # BufferedHttp and the proxy both see these error statuses # when they call getexpect, so our FakeConn tries to act like # our backend services and return certain types of responses # as expect statuses just like a real backend server would do. 
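            # 507, 412 and 409 are mirrored straight back as the expect
            # status; any other code goes through the usual pair of
            # 100-continue responses.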
if self.status in (507, 412, 409): self.expect_status = [status] else: self.expect_status = [100, 100] # setup sleep attributes if not isinstance(expect_sleep, (list, tuple)): expect_sleep = [expect_sleep] * len(self.expect_status) self.expect_sleep_list = list(expect_sleep) while len(self.expect_sleep_list) < len(self.expect_status): self.expect_sleep_list.append(None) self.response_sleep = response_sleep def get_response_status(self): if self.response_sleep is not None: eventlet.sleep(self.response_sleep) if self.expect_status and self.explicit_expect_list: raise Exception('Test did not consume all fake ' 'expect status: %r' % (self.expect_status,)) if isinstance(self.status, (Exception, eventlet.Timeout)): raise self.status return self.status def get_expect_status(self): expect_sleep = self.expect_sleep_list.pop(0) if expect_sleep is not None: eventlet.sleep(expect_sleep) expect_status = self.expect_status.pop(0) if isinstance(expect_status, (Exception, eventlet.Timeout)): raise expect_status return expect_status class SlowBody(object): """ This will work with our fake_http_connect, if you hand in these instead of strings it will make reads take longer by the given amount. It should be a little bit easier to extend than the current slow kwarg - which inserts whitespace in the response. Also it should be easy to detect if you have one of these (or a subclass) for the body inside of FakeConn if we wanted to do something smarter than just duck-type the str/buffer api enough to get by. """ def __init__(self, body, slowness): self.body = body self.slowness = slowness def slowdown(self): eventlet.sleep(self.slowness) def __getitem__(self, s): return SlowBody(self.body[s], self.slowness) def __len__(self): return len(self.body) def __radd__(self, other): self.slowdown() return other + self.body def fake_http_connect(*code_iter, **kwargs): class FakeConn(object): def __init__(self, status, etag=None, body='', timestamp='1', headers=None, expect_headers=None, connection_id=None, give_send=None): if not isinstance(status, FakeStatus): status = FakeStatus(status) self._status = status self.reason = 'Fake' self.host = '1.2.3.4' self.port = '1234' self.sent = 0 self.received = 0 self.etag = etag self.body = body self.headers = headers or {} self.expect_headers = expect_headers or {} self.timestamp = timestamp self.connection_id = connection_id self.give_send = give_send if 'slow' in kwargs and isinstance(kwargs['slow'], list): try: self._next_sleep = kwargs['slow'].pop(0) except IndexError: self._next_sleep = None # be nice to trixy bits with node_iter's eventlet.sleep() def getresponse(self): exc = kwargs.get('raise_exc') if exc: if isinstance(exc, (Exception, eventlet.Timeout)): raise exc raise Exception('test') if kwargs.get('raise_timeout_exc'): raise eventlet.Timeout() self.status = self._status.get_response_status() return self def getexpect(self): expect_status = self._status.get_expect_status() headers = dict(self.expect_headers) if expect_status == 409: headers['X-Backend-Timestamp'] = self.timestamp response = FakeConn(expect_status, timestamp=self.timestamp, headers=headers) response.status = expect_status return response def getheaders(self): etag = self.etag if not etag: if isinstance(self.body, str): etag = '"' + md5(self.body).hexdigest() + '"' else: etag = '"68b329da9893e34099c7d8ad5cb9c940"' headers = HeaderKeyDict({ 'content-length': len(self.body), 'content-type': 'x-application/test', 'x-timestamp': self.timestamp, 'x-backend-timestamp': self.timestamp, 'last-modified': 
self.timestamp, 'x-object-meta-test': 'testing', 'x-delete-at': '9876543210', 'etag': etag, 'x-works': 'yes', }) if self.status // 100 == 2: headers['x-account-container-count'] = \ kwargs.get('count', 12345) if not self.timestamp: # when timestamp is None, HeaderKeyDict raises KeyError headers.pop('x-timestamp', None) try: if next(container_ts_iter) is False: headers['x-container-timestamp'] = '1' except StopIteration: pass am_slow, value = self.get_slow() if am_slow: headers['content-length'] = '4' headers.update(self.headers) return headers.items() def get_slow(self): if 'slow' in kwargs and isinstance(kwargs['slow'], list): if self._next_sleep is not None: return True, self._next_sleep else: return False, 0.01 if kwargs.get('slow') and isinstance(kwargs['slow'], Number): return True, kwargs['slow'] return bool(kwargs.get('slow')), 0.1 def read(self, amt=None): am_slow, value = self.get_slow() if am_slow: if self.sent < 4: self.sent += 1 eventlet.sleep(value) return ' ' rv = self.body[:amt] self.body = self.body[amt:] return rv def send(self, amt=None): if self.give_send: self.give_send(self.connection_id, amt) am_slow, value = self.get_slow() if am_slow: if self.received < 4: self.received += 1 eventlet.sleep(value) def getheader(self, name, default=None): return HeaderKeyDict(self.getheaders()).get(name, default) def close(self): pass timestamps_iter = iter(kwargs.get('timestamps') or ['1'] * len(code_iter)) etag_iter = iter(kwargs.get('etags') or [None] * len(code_iter)) if isinstance(kwargs.get('headers'), (list, tuple)): headers_iter = iter(kwargs['headers']) else: headers_iter = iter([kwargs.get('headers', {})] * len(code_iter)) if isinstance(kwargs.get('expect_headers'), (list, tuple)): expect_headers_iter = iter(kwargs['expect_headers']) else: expect_headers_iter = iter([kwargs.get('expect_headers', {})] * len(code_iter)) x = kwargs.get('missing_container', [False] * len(code_iter)) if not isinstance(x, (tuple, list)): x = [x] * len(code_iter) container_ts_iter = iter(x) code_iter = iter(code_iter) conn_id_and_code_iter = enumerate(code_iter) static_body = kwargs.get('body', None) body_iter = kwargs.get('body_iter', None) if body_iter: body_iter = iter(body_iter) def connect(*args, **ckwargs): if kwargs.get('slow_connect', False): eventlet.sleep(0.1) if 'give_content_type' in kwargs: if len(args) >= 7 and 'Content-Type' in args[6]: kwargs['give_content_type'](args[6]['Content-Type']) else: kwargs['give_content_type']('') i, status = next(conn_id_and_code_iter) if 'give_connect' in kwargs: give_conn_fn = kwargs['give_connect'] argspec = inspect.getargspec(give_conn_fn) if argspec.keywords or 'connection_id' in argspec.args: ckwargs['connection_id'] = i give_conn_fn(*args, **ckwargs) etag = next(etag_iter) headers = next(headers_iter) expect_headers = next(expect_headers_iter) timestamp = next(timestamps_iter) if status <= 0: raise HTTPException() if body_iter is None: body = static_body or '' else: body = next(body_iter) return FakeConn(status, etag, body=body, timestamp=timestamp, headers=headers, expect_headers=expect_headers, connection_id=i, give_send=kwargs.get('give_send')) connect.code_iter = code_iter return connect @contextmanager def mocked_http_conn(*args, **kwargs): requests = [] def capture_requests(ip, port, method, path, headers, qs, ssl): req = { 'ip': ip, 'port': port, 'method': method, 'path': path, 'headers': headers, 'qs': qs, 'ssl': ssl, } requests.append(req) kwargs.setdefault('give_connect', capture_requests) fake_conn = fake_http_connect(*args, **kwargs) 
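    # hang the captured request dicts off the connect function so callers can
    # inspect them after the context manager exits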
fake_conn.requests = requests with mocklib.patch('swift.common.bufferedhttp.http_connect_raw', new=fake_conn): yield fake_conn left_over_status = list(fake_conn.code_iter) if left_over_status: raise AssertionError('left over status %r' % left_over_status) def make_timestamp_iter(): return iter(Timestamp(t) for t in itertools.count(int(time.time()))) def encode_frag_archive_bodies(policy, body): """ Given a stub body produce a list of complete frag_archive bodies as strings in frag_index order. :param policy: a StoragePolicy instance, with policy_type EC_POLICY :param body: a string, the body to encode into frag archives :returns: list of strings, the complete frag_archive bodies for the given plaintext """ segment_size = policy.ec_segment_size # split up the body into buffers chunks = [body[x:x + segment_size] for x in range(0, len(body), segment_size)] # encode the buffers into fragment payloads fragment_payloads = [] for chunk in chunks: fragments = policy.pyeclib_driver.encode(chunk) if not fragments: break fragment_payloads.append(fragments) # join up the fragment payloads per node ec_archive_bodies = [''.join(frags) for frags in zip(*fragment_payloads)] return ec_archive_bodies swift-2.7.1/test/unit/proxy/0000775000567000056710000000000013024044470017133 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/proxy/__init__.py0000664000567000056710000000000013024044352021231 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/proxy/test_mem_server.py0000664000567000056710000000330313024044354022710 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2013 OpenStack, LLC. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from test.unit.proxy import test_server from test.unit.proxy.test_server import teardown from swift.obj import mem_server def setup(): test_server.do_setup(mem_server) class TestController(test_server.TestController): pass class TestProxyServer(test_server.TestProxyServer): pass class TestObjectController(test_server.TestObjectController): def test_PUT_no_etag_fallocate(self): # mem server doesn't call fallocate(), believe it or not pass # these tests all go looking in the filesystem def test_policy_IO(self): pass def test_PUT_ec(self): pass def test_PUT_ec_multiple_segments(self): pass def test_PUT_ec_fragment_archive_etag_mismatch(self): pass class TestContainerController(test_server.TestContainerController): pass class TestAccountController(test_server.TestAccountController): pass class TestAccountControllerFakeGetResponse( test_server.TestAccountControllerFakeGetResponse): pass if __name__ == '__main__': setup() try: unittest.main() finally: teardown() swift-2.7.1/test/unit/proxy/test_server.py0000664000567000056710000150537513024044354022073 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2010-2016 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import print_function import email.parser import logging import json import math import os import pickle import sys import unittest from contextlib import closing, contextmanager from gzip import GzipFile from shutil import rmtree import gc import time from textwrap import dedent from hashlib import md5 from pyeclib.ec_iface import ECDriverError from tempfile import mkdtemp, NamedTemporaryFile import weakref import operator import functools from swift.obj import diskfile import re import random from collections import defaultdict import uuid import mock from eventlet import sleep, spawn, wsgi, listen, Timeout, debug from eventlet.green import httplib from six import BytesIO from six import StringIO from six.moves import range from six.moves.urllib.parse import quote from swift.common.utils import hash_path, storage_directory, \ parse_content_type, parse_mime_headers, \ iter_multipart_mime_documents, public from test.unit import ( connect_tcp, readuntil2crlfs, FakeLogger, fake_http_connect, FakeRing, FakeMemcache, debug_logger, patch_policies, write_fake_ring, mocked_http_conn, DEFAULT_TEST_EC_TYPE) from swift.proxy import server as proxy_server from swift.proxy.controllers.obj import ReplicatedObjectController from swift.account import server as account_server from swift.container import server as container_server from swift.obj import server as object_server from swift.common.middleware import proxy_logging, versioned_writes from swift.common.middleware.acl import parse_acl, format_acl from swift.common.exceptions import ChunkReadTimeout, DiskFileNotExist, \ APIVersionError from swift.common import utils, constraints from swift.common.ring import RingData from swift.common.utils import mkdirs, normalize_timestamp, NullLogger from swift.common.wsgi import monkey_patch_mimetools, loadapp from swift.proxy.controllers import base as proxy_base from swift.proxy.controllers.base import get_container_memcache_key, \ get_account_memcache_key, cors_validation, _get_info_cache import swift.proxy.controllers import swift.proxy.controllers.obj from swift.common.header_key_dict import HeaderKeyDict from swift.common.swob import Request, Response, HTTPUnauthorized, \ HTTPException, HTTPBadRequest from swift.common import storage_policy from swift.common.storage_policy import StoragePolicy, ECStoragePolicy, \ StoragePolicyCollection, POLICIES import swift.common.request_helpers from swift.common.request_helpers import get_sys_meta_prefix # mocks logging.getLogger().addHandler(logging.StreamHandler(sys.stdout)) STATIC_TIME = time.time() _test_coros = _test_servers = _test_sockets = _orig_container_listing_limit = \ _testdir = _orig_SysLogHandler = _orig_POLICIES = _test_POLICIES = None def do_setup(the_object_server): utils.HASH_PATH_SUFFIX = 'endcap' global _testdir, _test_servers, _test_sockets, \ _orig_container_listing_limit, _test_coros, _orig_SysLogHandler, \ _orig_POLICIES, _test_POLICIES _orig_POLICIES = storage_policy._POLICIES _orig_SysLogHandler = utils.SysLogHandler utils.SysLogHandler = mock.MagicMock() monkey_patch_mimetools() # Since we're starting up a lot here, we're 
going to test more than # just chunked puts; we're also going to test parts of # proxy_server.Application we couldn't get to easily otherwise. _testdir = \ os.path.join(mkdtemp(), 'tmp_test_proxy_server_chunked') mkdirs(_testdir) rmtree(_testdir) for drive in ('sda1', 'sdb1', 'sdc1', 'sdd1', 'sde1', 'sdf1', 'sdg1', 'sdh1', 'sdi1'): mkdirs(os.path.join(_testdir, drive, 'tmp')) conf = {'devices': _testdir, 'swift_dir': _testdir, 'mount_check': 'false', 'allowed_headers': 'content-encoding, x-object-manifest, content-disposition, foo', 'allow_versions': 't'} prolis = listen(('localhost', 0)) acc1lis = listen(('localhost', 0)) acc2lis = listen(('localhost', 0)) con1lis = listen(('localhost', 0)) con2lis = listen(('localhost', 0)) obj1lis = listen(('localhost', 0)) obj2lis = listen(('localhost', 0)) obj3lis = listen(('localhost', 0)) objsocks = [obj1lis, obj2lis, obj3lis] _test_sockets = \ (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) account_ring_path = os.path.join(_testdir, 'account.ring.gz') account_devs = [ {'port': acc1lis.getsockname()[1]}, {'port': acc2lis.getsockname()[1]}, ] write_fake_ring(account_ring_path, *account_devs) container_ring_path = os.path.join(_testdir, 'container.ring.gz') container_devs = [ {'port': con1lis.getsockname()[1]}, {'port': con2lis.getsockname()[1]}, ] write_fake_ring(container_ring_path, *container_devs) storage_policy._POLICIES = StoragePolicyCollection([ StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one', False), StoragePolicy(2, 'two', False), ECStoragePolicy(3, 'ec', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=2, ec_nparity=1, ec_segment_size=4096)]) obj_rings = { 0: ('sda1', 'sdb1'), 1: ('sdc1', 'sdd1'), 2: ('sde1', 'sdf1'), # sdg1, sdh1, sdi1 taken by policy 3 (see below) } for policy_index, devices in obj_rings.items(): policy = POLICIES[policy_index] obj_ring_path = os.path.join(_testdir, policy.ring_name + '.ring.gz') obj_devs = [ {'port': objsock.getsockname()[1], 'device': dev} for objsock, dev in zip(objsocks, devices)] write_fake_ring(obj_ring_path, *obj_devs) # write_fake_ring can't handle a 3-element ring, and the EC policy needs # at least 3 devs to work with, so we do it manually devs = [{'id': 0, 'zone': 0, 'device': 'sdg1', 'ip': '127.0.0.1', 'port': obj1lis.getsockname()[1]}, {'id': 1, 'zone': 0, 'device': 'sdh1', 'ip': '127.0.0.1', 'port': obj2lis.getsockname()[1]}, {'id': 2, 'zone': 0, 'device': 'sdi1', 'ip': '127.0.0.1', 'port': obj3lis.getsockname()[1]}] pol3_replica2part2dev_id = [[0, 1, 2, 0], [1, 2, 0, 1], [2, 0, 1, 2]] obj3_ring_path = os.path.join(_testdir, POLICIES[3].ring_name + '.ring.gz') part_shift = 30 with closing(GzipFile(obj3_ring_path, 'wb')) as fh: pickle.dump(RingData(pol3_replica2part2dev_id, devs, part_shift), fh) prosrv = proxy_server.Application(conf, FakeMemcacheReturnsNone(), logger=debug_logger('proxy')) for policy in POLICIES: # make sure all the rings are loaded prosrv.get_object_ring(policy.idx) # don't lose this one! 
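    # (unpatch_policies() below re-installs this collection for tests that
    # want the module-level rings back)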
_test_POLICIES = storage_policy._POLICIES acc1srv = account_server.AccountController( conf, logger=debug_logger('acct1')) acc2srv = account_server.AccountController( conf, logger=debug_logger('acct2')) con1srv = container_server.ContainerController( conf, logger=debug_logger('cont1')) con2srv = container_server.ContainerController( conf, logger=debug_logger('cont2')) obj1srv = the_object_server.ObjectController( conf, logger=debug_logger('obj1')) obj2srv = the_object_server.ObjectController( conf, logger=debug_logger('obj2')) obj3srv = the_object_server.ObjectController( conf, logger=debug_logger('obj3')) _test_servers = \ (prosrv, acc1srv, acc2srv, con1srv, con2srv, obj1srv, obj2srv, obj3srv) nl = NullLogger() logging_prosv = proxy_logging.ProxyLoggingMiddleware(prosrv, conf, logger=prosrv.logger) prospa = spawn(wsgi.server, prolis, logging_prosv, nl) acc1spa = spawn(wsgi.server, acc1lis, acc1srv, nl) acc2spa = spawn(wsgi.server, acc2lis, acc2srv, nl) con1spa = spawn(wsgi.server, con1lis, con1srv, nl) con2spa = spawn(wsgi.server, con2lis, con2srv, nl) obj1spa = spawn(wsgi.server, obj1lis, obj1srv, nl) obj2spa = spawn(wsgi.server, obj2lis, obj2srv, nl) obj3spa = spawn(wsgi.server, obj3lis, obj3srv, nl) _test_coros = \ (prospa, acc1spa, acc2spa, con1spa, con2spa, obj1spa, obj2spa, obj3spa) # Create account ts = normalize_timestamp(time.time()) partition, nodes = prosrv.account_ring.get_nodes('a') for node in nodes: conn = swift.proxy.controllers.obj.http_connect(node['ip'], node['port'], node['device'], partition, 'PUT', '/a', {'X-Timestamp': ts, 'x-trans-id': 'test'}) resp = conn.getresponse() assert(resp.status == 201) # Create another account # used for account-to-account tests ts = normalize_timestamp(time.time()) partition, nodes = prosrv.account_ring.get_nodes('a1') for node in nodes: conn = swift.proxy.controllers.obj.http_connect(node['ip'], node['port'], node['device'], partition, 'PUT', '/a1', {'X-Timestamp': ts, 'x-trans-id': 'test'}) resp = conn.getresponse() assert(resp.status == 201) # Create containers, 1 per test policy sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/c HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' assert headers[:len(exp)] == exp, "Expected '%s', encountered '%s'" % ( exp, headers[:len(exp)]) # Create container in other account # used for account-to-account tests sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a1/c1 HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' assert headers[:len(exp)] == exp, "Expected '%s', encountered '%s'" % ( exp, headers[:len(exp)]) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write( 'PUT /v1/a/c1 HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\nX-Storage-Policy: one\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' assert headers[:len(exp)] == exp, \ "Expected '%s', encountered '%s'" % (exp, headers[:len(exp)]) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write( 'PUT /v1/a/c2 HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\nX-Storage-Policy: two\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 
'HTTP/1.1 201' assert headers[:len(exp)] == exp, \ "Expected '%s', encountered '%s'" % (exp, headers[:len(exp)]) def unpatch_policies(f): """ This will unset a TestCase level patch_policies to use the module level policies setup for the _test_servers instead. N.B. You should NEVER modify the _test_server policies or rings during a test because they persist for the life of the entire module! """ @functools.wraps(f) def wrapper(*args, **kwargs): with patch_policies(_test_POLICIES): return f(*args, **kwargs) return wrapper def setup(): do_setup(object_server) def teardown(): for server in _test_coros: server.kill() rmtree(os.path.dirname(_testdir)) utils.SysLogHandler = _orig_SysLogHandler storage_policy._POLICIES = _orig_POLICIES def sortHeaderNames(headerNames): """ Return the given string of header names sorted. headerName: a comma-delimited list of header names """ headers = [a.strip() for a in headerNames.split(',') if a.strip()] headers.sort() return ', '.join(headers) def parse_headers_string(headers_str): headers_dict = HeaderKeyDict() for line in headers_str.split('\r\n'): if ': ' in line: header, value = line.split(': ', 1) headers_dict[header] = value return headers_dict def node_error_count(proxy_app, ring_node): # Reach into the proxy's internals to get the error count for a # particular node node_key = proxy_app._error_limit_node_key(ring_node) return proxy_app._error_limiting.get(node_key, {}).get('errors', 0) def node_last_error(proxy_app, ring_node): # Reach into the proxy's internals to get the last error for a # particular node node_key = proxy_app._error_limit_node_key(ring_node) return proxy_app._error_limiting.get(node_key, {}).get('last_error') def set_node_errors(proxy_app, ring_node, value, last_error): # Set the node's error count to value node_key = proxy_app._error_limit_node_key(ring_node) stats = proxy_app._error_limiting.setdefault(node_key, {}) stats['errors'] = value stats['last_error'] = last_error class FakeMemcacheReturnsNone(FakeMemcache): def get(self, key): # Returns None as the timestamp of the container; assumes we're only # using the FakeMemcache for container existence checks. 
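        # a None result forces the proxy to fall through to the backend
        # servers instead of trusting cached container info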
return None @contextmanager def save_globals(): orig_http_connect = getattr(swift.proxy.controllers.base, 'http_connect', None) orig_account_info = getattr(swift.proxy.controllers.Controller, 'account_info', None) orig_container_info = getattr(swift.proxy.controllers.Controller, 'container_info', None) try: yield True finally: swift.proxy.controllers.Controller.account_info = orig_account_info swift.proxy.controllers.base.http_connect = orig_http_connect swift.proxy.controllers.obj.http_connect = orig_http_connect swift.proxy.controllers.account.http_connect = orig_http_connect swift.proxy.controllers.container.http_connect = orig_http_connect swift.proxy.controllers.Controller.container_info = orig_container_info def set_http_connect(*args, **kwargs): new_connect = fake_http_connect(*args, **kwargs) swift.proxy.controllers.base.http_connect = new_connect swift.proxy.controllers.obj.http_connect = new_connect swift.proxy.controllers.account.http_connect = new_connect swift.proxy.controllers.container.http_connect = new_connect return new_connect def _make_callback_func(calls): def callback(ipaddr, port, device, partition, method, path, headers=None, query_string=None, ssl=False): context = {} context['method'] = method context['path'] = path context['headers'] = headers or {} calls.append(context) return callback def _limit_max_file_size(f): """ This will limit constraints.MAX_FILE_SIZE for the duration of the wrapped function, based on whether MAX_FILE_SIZE exceeds the sys.maxsize limit on the system running the tests. This allows successful testing on 32 bit systems. """ @functools.wraps(f) def wrapper(*args, **kwargs): test_max_file_size = constraints.MAX_FILE_SIZE if constraints.MAX_FILE_SIZE >= sys.maxsize: test_max_file_size = (2 ** 30 + 2) with mock.patch.object(constraints, 'MAX_FILE_SIZE', test_max_file_size): return f(*args, **kwargs) return wrapper # tests class TestController(unittest.TestCase): def setUp(self): self.account_ring = FakeRing() self.container_ring = FakeRing() self.memcache = FakeMemcache() app = proxy_server.Application(None, self.memcache, account_ring=self.account_ring, container_ring=self.container_ring) self.controller = swift.proxy.controllers.Controller(app) class FakeReq(object): def __init__(self): self.url = "/foo/bar" self.method = "METHOD" def as_referer(self): return self.method + ' ' + self.url self.account = 'some_account' self.container = 'some_container' self.request = FakeReq() self.read_acl = 'read_acl' self.write_acl = 'write_acl' def test_transfer_headers(self): src_headers = {'x-remove-base-meta-owner': 'x', 'x-base-meta-size': '151M', 'new-owner': 'Kun'} dst_headers = {'x-base-meta-owner': 'Gareth', 'x-base-meta-size': '150M'} self.controller.transfer_headers(src_headers, dst_headers) expected_headers = {'x-base-meta-owner': '', 'x-base-meta-size': '151M'} self.assertEqual(dst_headers, expected_headers) def check_account_info_return(self, partition, nodes, is_none=False): if is_none: p, n = None, None else: p, n = self.account_ring.get_nodes(self.account) self.assertEqual(p, partition) self.assertEqual(n, nodes) def test_account_info_container_count(self): with save_globals(): set_http_connect(200, count=123) partition, nodes, count = \ self.controller.account_info(self.account) self.assertEqual(count, 123) with save_globals(): set_http_connect(200, count='123') partition, nodes, count = \ self.controller.account_info(self.account) self.assertEqual(count, 123) with save_globals(): cache_key = get_account_memcache_key(self.account) 
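            # seed the cache directly with an int container_count and check
            # that account_info hands it back unchanged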
account_info = {'status': 200, 'container_count': 1234} self.memcache.set(cache_key, account_info) partition, nodes, count = \ self.controller.account_info(self.account) self.assertEqual(count, 1234) with save_globals(): cache_key = get_account_memcache_key(self.account) account_info = {'status': 200, 'container_count': '1234'} self.memcache.set(cache_key, account_info) partition, nodes, count = \ self.controller.account_info(self.account) self.assertEqual(count, 1234) def test_make_requests(self): with save_globals(): set_http_connect(200) partition, nodes, count = \ self.controller.account_info(self.account, self.request) set_http_connect(201, raise_timeout_exc=True) self.controller._make_request( nodes, partition, 'POST', '/', '', '', self.controller.app.logger.thread_locals) # tests if 200 is cached and used def test_account_info_200(self): with save_globals(): set_http_connect(200) partition, nodes, count = \ self.controller.account_info(self.account, self.request) self.check_account_info_return(partition, nodes) self.assertEqual(count, 12345) # Test the internal representation in memcache # 'container_count' changed from int to str cache_key = get_account_memcache_key(self.account) container_info = {'status': 200, 'container_count': '12345', 'total_object_count': None, 'bytes': None, 'meta': {}, 'sysmeta': {}} self.assertEqual(container_info, self.memcache.get(cache_key)) set_http_connect() partition, nodes, count = \ self.controller.account_info(self.account, self.request) self.check_account_info_return(partition, nodes) self.assertEqual(count, 12345) # tests if 404 is cached and used def test_account_info_404(self): with save_globals(): set_http_connect(404, 404, 404) partition, nodes, count = \ self.controller.account_info(self.account, self.request) self.check_account_info_return(partition, nodes, True) self.assertEqual(count, None) # Test the internal representation in memcache # 'container_count' changed from 0 to None cache_key = get_account_memcache_key(self.account) account_info = {'status': 404, 'container_count': None, # internally keep None 'total_object_count': None, 'bytes': None, 'meta': {}, 'sysmeta': {}} self.assertEqual(account_info, self.memcache.get(cache_key)) set_http_connect() partition, nodes, count = \ self.controller.account_info(self.account, self.request) self.check_account_info_return(partition, nodes, True) self.assertEqual(count, None) # tests if some http status codes are not cached def test_account_info_no_cache(self): def test(*status_list): set_http_connect(*status_list) partition, nodes, count = \ self.controller.account_info(self.account, self.request) self.assertEqual(len(self.memcache.keys()), 0) self.check_account_info_return(partition, nodes, True) self.assertEqual(count, None) with save_globals(): # We cache if we have two 404 responses - fail if only one test(503, 503, 404) test(504, 404, 503) test(404, 507, 503) test(503, 503, 503) def test_account_info_no_account(self): with save_globals(): self.memcache.store = {} set_http_connect(404, 404, 404) partition, nodes, count = \ self.controller.account_info(self.account, self.request) self.check_account_info_return(partition, nodes, is_none=True) self.assertEqual(count, None) def check_container_info_return(self, ret, is_none=False): if is_none: partition, nodes, read_acl, write_acl = None, None, None, None else: partition, nodes = self.container_ring.get_nodes(self.account, self.container) read_acl, write_acl = self.read_acl, self.write_acl self.assertEqual(partition, ret['partition']) 
self.assertEqual(nodes, ret['nodes']) self.assertEqual(read_acl, ret['read_acl']) self.assertEqual(write_acl, ret['write_acl']) def test_container_info_invalid_account(self): def account_info(self, account, request, autocreate=False): return None, None with save_globals(): swift.proxy.controllers.Controller.account_info = account_info ret = self.controller.container_info(self.account, self.container, self.request) self.check_container_info_return(ret, True) # tests if 200 is cached and used def test_container_info_200(self): with save_globals(): headers = {'x-container-read': self.read_acl, 'x-container-write': self.write_acl} set_http_connect(200, # account_info is found 200, headers=headers) # container_info is found ret = self.controller.container_info( self.account, self.container, self.request) self.check_container_info_return(ret) cache_key = get_container_memcache_key(self.account, self.container) cache_value = self.memcache.get(cache_key) self.assertTrue(isinstance(cache_value, dict)) self.assertEqual(200, cache_value.get('status')) set_http_connect() ret = self.controller.container_info( self.account, self.container, self.request) self.check_container_info_return(ret) # tests if 404 is cached and used def test_container_info_404(self): def account_info(self, account, request): return True, True, 0 with save_globals(): set_http_connect(503, 204, # account_info found 504, 404, 404) # container_info 'NotFound' ret = self.controller.container_info( self.account, self.container, self.request) self.check_container_info_return(ret, True) cache_key = get_container_memcache_key(self.account, self.container) cache_value = self.memcache.get(cache_key) self.assertTrue(isinstance(cache_value, dict)) self.assertEqual(404, cache_value.get('status')) set_http_connect() ret = self.controller.container_info( self.account, self.container, self.request) self.check_container_info_return(ret, True) set_http_connect(503, 404, 404) # account_info 'NotFound' ret = self.controller.container_info( self.account, self.container, self.request) self.check_container_info_return(ret, True) cache_key = get_container_memcache_key(self.account, self.container) cache_value = self.memcache.get(cache_key) self.assertTrue(isinstance(cache_value, dict)) self.assertEqual(404, cache_value.get('status')) set_http_connect() ret = self.controller.container_info( self.account, self.container, self.request) self.check_container_info_return(ret, True) # tests if some http status codes are not cached def test_container_info_no_cache(self): def test(*status_list): set_http_connect(*status_list) ret = self.controller.container_info( self.account, self.container, self.request) self.assertEqual(len(self.memcache.keys()), 0) self.check_container_info_return(ret, True) with save_globals(): # We cache if we have two 404 responses - fail if only one test(503, 503, 404) test(504, 404, 503) test(404, 507, 503) test(503, 503, 503) def test_get_info_cache_returns_values_as_strings(self): app = mock.MagicMock() app.memcache = mock.MagicMock() app.memcache.get = mock.MagicMock() app.memcache.get.return_value = { u'foo': u'\u2603', u'meta': {u'bar': u'\u2603'}, u'sysmeta': {u'baz': u'\u2603'}, u'cors': {u'expose_headers': u'\u2603'}} env = {} r = _get_info_cache(app, env, 'account', 'container') # Test info is returned as strings self.assertEqual(r.get('foo'), '\xe2\x98\x83') self.assertTrue(isinstance(r.get('foo'), str)) # Test info['meta'] is returned as strings m = r.get('meta', {}) self.assertEqual(m.get('bar'), '\xe2\x98\x83') 
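        # i.e. the unicode snowman comes back UTF-8 encoded as a plain str,
        # not as unicode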
self.assertTrue(isinstance(m.get('bar'), str)) # Test info['sysmeta'] is returned as strings m = r.get('sysmeta', {}) self.assertEqual(m.get('baz'), '\xe2\x98\x83') self.assertTrue(isinstance(m.get('baz'), str)) # Test info['cors'] is returned as strings m = r.get('cors', {}) self.assertEqual(m.get('expose_headers'), '\xe2\x98\x83') self.assertTrue(isinstance(m.get('expose_headers'), str)) @patch_policies([StoragePolicy(0, 'zero', True, object_ring=FakeRing())]) class TestProxyServer(unittest.TestCase): def test_creation(self): # later config should be extended to assert more config options app = proxy_server.Application({'node_timeout': '3.5', 'recoverable_node_timeout': '1.5'}, FakeMemcache(), container_ring=FakeRing(), account_ring=FakeRing()) self.assertEqual(app.node_timeout, 3.5) self.assertEqual(app.recoverable_node_timeout, 1.5) def test_get_object_ring(self): baseapp = proxy_server.Application({}, FakeMemcache(), container_ring=FakeRing(), account_ring=FakeRing()) with patch_policies([ StoragePolicy(0, 'a', False, object_ring=123), StoragePolicy(1, 'b', True, object_ring=456), StoragePolicy(2, 'd', False, object_ring=789) ]): # None means legacy so always use policy 0 ring = baseapp.get_object_ring(None) self.assertEqual(ring, 123) ring = baseapp.get_object_ring('') self.assertEqual(ring, 123) ring = baseapp.get_object_ring('0') self.assertEqual(ring, 123) ring = baseapp.get_object_ring('1') self.assertEqual(ring, 456) ring = baseapp.get_object_ring('2') self.assertEqual(ring, 789) # illegal values self.assertRaises(ValueError, baseapp.get_object_ring, '99') self.assertRaises(ValueError, baseapp.get_object_ring, 'asdf') def test_unhandled_exception(self): class MyApp(proxy_server.Application): def get_controller(self, path): raise Exception('this shouldn\'t be caught') app = MyApp(None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) req = Request.blank('/v1/account', environ={'REQUEST_METHOD': 'HEAD'}) app.update_request(req) resp = app.handle_request(req) self.assertEqual(resp.status_int, 500) def test_internal_method_request(self): baseapp = proxy_server.Application({}, FakeMemcache(), container_ring=FakeRing(), account_ring=FakeRing()) resp = baseapp.handle_request( Request.blank('/v1/a', environ={'REQUEST_METHOD': '__init__'})) self.assertEqual(resp.status, '405 Method Not Allowed') def test_inexistent_method_request(self): baseapp = proxy_server.Application({}, FakeMemcache(), container_ring=FakeRing(), account_ring=FakeRing()) resp = baseapp.handle_request( Request.blank('/v1/a', environ={'REQUEST_METHOD': '!invalid'})) self.assertEqual(resp.status, '405 Method Not Allowed') def test_calls_authorize_allow(self): called = [False] def authorize(req): called[0] = True with save_globals(): set_http_connect(200) app = proxy_server.Application(None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) req = Request.blank('/v1/a') req.environ['swift.authorize'] = authorize app.update_request(req) app.handle_request(req) self.assertTrue(called[0]) def test_calls_authorize_deny(self): called = [False] def authorize(req): called[0] = True return HTTPUnauthorized(request=req) app = proxy_server.Application(None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) req = Request.blank('/v1/a') req.environ['swift.authorize'] = authorize app.update_request(req) app.handle_request(req) self.assertTrue(called[0]) def test_negative_content_length(self): swift_dir = mkdtemp() try: baseapp = proxy_server.Application({'swift_dir': swift_dir}, 
FakeMemcache(), FakeLogger(), FakeRing(), FakeRing()) resp = baseapp.handle_request( Request.blank('/', environ={'CONTENT_LENGTH': '-1'})) self.assertEqual(resp.status, '400 Bad Request') self.assertEqual(resp.body, 'Invalid Content-Length') resp = baseapp.handle_request( Request.blank('/', environ={'CONTENT_LENGTH': '-123'})) self.assertEqual(resp.status, '400 Bad Request') self.assertEqual(resp.body, 'Invalid Content-Length') finally: rmtree(swift_dir, ignore_errors=True) def test_adds_transaction_id(self): swift_dir = mkdtemp() try: logger = FakeLogger() baseapp = proxy_server.Application({'swift_dir': swift_dir}, FakeMemcache(), logger, container_ring=FakeLogger(), account_ring=FakeRing()) baseapp.handle_request( Request.blank('/info', environ={'HTTP_X_TRANS_ID_EXTRA': 'sardine', 'REQUEST_METHOD': 'GET'})) # This is kind of a hokey way to get the transaction ID; it'd be # better to examine response headers, but the catch_errors # middleware is what sets the X-Trans-Id header, and we don't have # that available here. self.assertTrue(logger.txn_id.endswith('-sardine')) finally: rmtree(swift_dir, ignore_errors=True) def test_adds_transaction_id_length_limit(self): swift_dir = mkdtemp() try: logger = FakeLogger() baseapp = proxy_server.Application({'swift_dir': swift_dir}, FakeMemcache(), logger, container_ring=FakeLogger(), account_ring=FakeRing()) baseapp.handle_request( Request.blank('/info', environ={'HTTP_X_TRANS_ID_EXTRA': 'a' * 1000, 'REQUEST_METHOD': 'GET'})) self.assertTrue(logger.txn_id.endswith( '-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')) finally: rmtree(swift_dir, ignore_errors=True) def test_denied_host_header(self): swift_dir = mkdtemp() try: baseapp = proxy_server.Application({'swift_dir': swift_dir, 'deny_host_headers': 'invalid_host.com'}, FakeMemcache(), container_ring=FakeLogger(), account_ring=FakeRing()) resp = baseapp.handle_request( Request.blank('/v1/a/c/o', environ={'HTTP_HOST': 'invalid_host.com'})) self.assertEqual(resp.status, '403 Forbidden') finally: rmtree(swift_dir, ignore_errors=True) def test_node_timing(self): baseapp = proxy_server.Application({'sorting_method': 'timing'}, FakeMemcache(), container_ring=FakeRing(), account_ring=FakeRing()) self.assertEqual(baseapp.node_timings, {}) req = Request.blank('/v1/account', environ={'REQUEST_METHOD': 'HEAD'}) baseapp.update_request(req) resp = baseapp.handle_request(req) self.assertEqual(resp.status_int, 503) # couldn't connect to anything exp_timings = {} self.assertEqual(baseapp.node_timings, exp_timings) times = [time.time()] exp_timings = {'127.0.0.1': (0.1, times[0] + baseapp.timing_expiry)} with mock.patch('swift.proxy.server.time', lambda: times.pop(0)): baseapp.set_node_timing({'ip': '127.0.0.1'}, 0.1) self.assertEqual(baseapp.node_timings, exp_timings) nodes = [{'ip': '127.0.0.1'}, {'ip': '127.0.0.2'}, {'ip': '127.0.0.3'}] with mock.patch('swift.proxy.server.shuffle', lambda l: l): res = baseapp.sort_nodes(nodes) exp_sorting = [{'ip': '127.0.0.2'}, {'ip': '127.0.0.3'}, {'ip': '127.0.0.1'}] self.assertEqual(res, exp_sorting) def test_node_affinity(self): baseapp = proxy_server.Application({'sorting_method': 'affinity', 'read_affinity': 'r1=1'}, FakeMemcache(), container_ring=FakeRing(), account_ring=FakeRing()) nodes = [{'region': 2, 'zone': 1, 'ip': '127.0.0.1'}, {'region': 1, 'zone': 2, 'ip': '127.0.0.2'}] with mock.patch('swift.proxy.server.shuffle', lambda x: x): app_sorted = baseapp.sort_nodes(nodes) exp_sorted = [{'region': 1, 'zone': 2, 'ip': '127.0.0.2'}, {'region': 2, 'zone': 1, 'ip': '127.0.0.1'}] 
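# NOTE (illustrative sketch, not part of the original test suite):
# with read_affinity = 'r1=1' the affinity sorter gives region 1 the
# best priority, so the assertion that follows expects the region-1
# node first.  A simplified stand-in for that ordering (the real
# parser supports richer priority strings) would be:
#
#     def affinity_sort(nodes):
#         # lower key sorts first; region 1 beats every other region
#         return sorted(nodes,
#                       key=lambda n: 1 if n.get('region') == 1 else 2)
#
# The 'timing' case in test_node_timing above is analogous but keys on
# the response times recorded via set_node_timing(); nodes with no
# recorded timing sort ahead of the one that has been measured, which
# is exactly the exp_sorting list asserted there.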
self.assertEqual(exp_sorted, app_sorted) def test_node_concurrency(self): nodes = [{'region': 1, 'zone': 1, 'ip': '127.0.0.1', 'port': 6010, 'device': 'sda'}, {'region': 2, 'zone': 2, 'ip': '127.0.0.2', 'port': 6010, 'device': 'sda'}, {'region': 3, 'zone': 3, 'ip': '127.0.0.3', 'port': 6010, 'device': 'sda'}] timings = {'127.0.0.1': 2, '127.0.0.2': 1, '127.0.0.3': 0} statuses = {'127.0.0.1': 200, '127.0.0.2': 200, '127.0.0.3': 200} req = Request.blank('/v1/account', environ={'REQUEST_METHOD': 'GET'}) def fake_iter_nodes(*arg, **karg): return iter(nodes) class FakeConn(object): def __init__(self, ip, *args, **kargs): self.ip = ip self.args = args self.kargs = kargs def getresponse(self): def mygetheader(header, *args, **kargs): if header == "Content-Type": return "" else: return 1 resp = mock.Mock() resp.read.side_effect = ['Response from %s' % self.ip, ''] resp.getheader = mygetheader resp.getheaders.return_value = {} resp.reason = '' resp.status = statuses[self.ip] sleep(timings[self.ip]) return resp def myfake_http_connect_raw(ip, *args, **kargs): conn = FakeConn(ip, *args, **kargs) return conn with mock.patch('swift.proxy.server.Application.iter_nodes', fake_iter_nodes): with mock.patch('swift.common.bufferedhttp.http_connect_raw', myfake_http_connect_raw): app_conf = {'concurrent_gets': 'on', 'concurrency_timeout': 0} baseapp = proxy_server.Application(app_conf, FakeMemcache(), container_ring=FakeRing(), account_ring=FakeRing()) self.assertEqual(baseapp.concurrent_gets, True) self.assertEqual(baseapp.concurrency_timeout, 0) baseapp.update_request(req) resp = baseapp.handle_request(req) # Should get 127.0.0.3 as this has a wait of 0 seconds. self.assertEqual(resp.body, 'Response from 127.0.0.3') # lets try again, with 127.0.0.1 with 0 timing but returns an # error. timings['127.0.0.1'] = 0 statuses['127.0.0.1'] = 500 # Should still get 127.0.0.3 as this has a wait of 0 seconds # and a success baseapp.update_request(req) resp = baseapp.handle_request(req) self.assertEqual(resp.body, 'Response from 127.0.0.3') # Now lets set the concurrency_timeout app_conf['concurrency_timeout'] = 2 baseapp = proxy_server.Application(app_conf, FakeMemcache(), container_ring=FakeRing(), account_ring=FakeRing()) self.assertEqual(baseapp.concurrency_timeout, 2) baseapp.update_request(req) resp = baseapp.handle_request(req) # Should get 127.0.0.2 as this has a wait of 1 seconds. 
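# NOTE (illustrative, not part of the original test suite): with
# concurrent_gets on and concurrency_timeout = 0 the proxy queries the
# primaries essentially at once and uses the first successful
# response, so the fast error from 127.0.0.1 above does not win.  With
# concurrency_timeout = 2 the proxy gives the current node up to two
# seconds before spawning a request to the next one: 127.0.0.1 fails
# immediately, 127.0.0.2 succeeds after one second, and 127.0.0.3 is
# never asked, hence the assertion that follows.  A toy model of the
# "first success wins" selection (a simplification, not the real
# implementation):
#
#     def first_success(results):
#         # results as (delay, status, body); earliest 200 wins
#         for delay, status, body in sorted(results):
#             if status == 200:
#                 return body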
self.assertEqual(resp.body, 'Response from 127.0.0.2') def test_info_defaults(self): app = proxy_server.Application({}, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) self.assertTrue(app.expose_info) self.assertTrue(isinstance(app.disallowed_sections, list)) self.assertEqual(1, len(app.disallowed_sections)) self.assertEqual(['swift.valid_api_versions'], app.disallowed_sections) self.assertTrue(app.admin_key is None) def test_get_info_controller(self): req = Request.blank('/info') app = proxy_server.Application({}, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) controller, path_parts = app.get_controller(req) self.assertTrue('version' in path_parts) self.assertTrue(path_parts['version'] is None) self.assertTrue('disallowed_sections' in path_parts) self.assertTrue('expose_info' in path_parts) self.assertTrue('admin_key' in path_parts) self.assertEqual(controller.__name__, 'InfoController') def test_error_limit_methods(self): logger = debug_logger('test') app = proxy_server.Application({}, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing(), logger=logger) node = app.container_ring.get_part_nodes(0)[0] # error occurred app.error_occurred(node, 'test msg') self.assertTrue('test msg' in logger.get_lines_for_level('error')[-1]) self.assertEqual(1, node_error_count(app, node)) # exception occurred try: raise Exception('kaboom1!') except Exception as e1: app.exception_occurred(node, 'test1', 'test1 msg') line = logger.get_lines_for_level('error')[-1] self.assertTrue('test1 server' in line) self.assertTrue('test1 msg' in line) log_args, log_kwargs = logger.log_dict['error'][-1] self.assertTrue(log_kwargs['exc_info']) self.assertEqual(log_kwargs['exc_info'][1], e1) self.assertEqual(2, node_error_count(app, node)) # warning exception occurred try: raise Exception('kaboom2!') except Exception as e2: app.exception_occurred(node, 'test2', 'test2 msg', level=logging.WARNING) line = logger.get_lines_for_level('warning')[-1] self.assertTrue('test2 server' in line) self.assertTrue('test2 msg' in line) log_args, log_kwargs = logger.log_dict['warning'][-1] self.assertTrue(log_kwargs['exc_info']) self.assertEqual(log_kwargs['exc_info'][1], e2) self.assertEqual(3, node_error_count(app, node)) # custom exception occurred try: raise Exception('kaboom3!') except Exception as e3: e3_info = sys.exc_info() try: raise Exception('kaboom4!') except Exception: pass app.exception_occurred(node, 'test3', 'test3 msg', level=logging.WARNING, exc_info=e3_info) line = logger.get_lines_for_level('warning')[-1] self.assertTrue('test3 server' in line) self.assertTrue('test3 msg' in line) log_args, log_kwargs = logger.log_dict['warning'][-1] self.assertTrue(log_kwargs['exc_info']) self.assertEqual(log_kwargs['exc_info'][1], e3) self.assertEqual(4, node_error_count(app, node)) def test_valid_api_version(self): app = proxy_server.Application({}, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) # The version string is only checked for account, container and object # requests; the raised APIVersionError returns a 404 to the client for path in [ '/v2/a', '/v2/a/c', '/v2/a/c/o']: req = Request.blank(path) self.assertRaises(APIVersionError, app.get_controller, req) # Default valid API versions are ok for path in [ '/v1/a', '/v1/a/c', '/v1/a/c/o', '/v1.0/a', '/v1.0/a/c', '/v1.0/a/c/o']: req = Request.blank(path) controller, path_parts = app.get_controller(req) self.assertTrue(controller is not None) # Ensure settings valid API version constraint works for version 
in ["42", 42]: try: with NamedTemporaryFile() as f: f.write('[swift-constraints]\n') f.write('valid_api_versions = %s\n' % version) f.flush() with mock.patch.object(utils, 'SWIFT_CONF_FILE', f.name): constraints.reload_constraints() req = Request.blank('/%s/a' % version) controller, _ = app.get_controller(req) self.assertTrue(controller is not None) # In this case v1 is invalid req = Request.blank('/v1/a') self.assertRaises(APIVersionError, app.get_controller, req) finally: constraints.reload_constraints() # Check that the valid_api_versions is not exposed by default req = Request.blank('/info') controller, path_parts = app.get_controller(req) self.assertTrue('swift.valid_api_versions' in path_parts.get('disallowed_sections')) @patch_policies([ StoragePolicy(0, 'zero', is_default=True), StoragePolicy(1, 'one'), ]) class TestProxyServerLoading(unittest.TestCase): def setUp(self): self._orig_hash_suffix = utils.HASH_PATH_SUFFIX utils.HASH_PATH_SUFFIX = 'endcap' self.tempdir = mkdtemp() def tearDown(self): rmtree(self.tempdir) utils.HASH_PATH_SUFFIX = self._orig_hash_suffix for policy in POLICIES: policy.object_ring = None def test_load_policy_rings(self): for policy in POLICIES: self.assertFalse(policy.object_ring) conf_path = os.path.join(self.tempdir, 'proxy-server.conf') conf_body = """ [DEFAULT] swift_dir = %s [pipeline:main] pipeline = catch_errors cache proxy-server [app:proxy-server] use = egg:swift#proxy [filter:cache] use = egg:swift#memcache [filter:catch_errors] use = egg:swift#catch_errors """ % self.tempdir with open(conf_path, 'w') as f: f.write(dedent(conf_body)) account_ring_path = os.path.join(self.tempdir, 'account.ring.gz') write_fake_ring(account_ring_path) container_ring_path = os.path.join(self.tempdir, 'container.ring.gz') write_fake_ring(container_ring_path) for policy in POLICIES: object_ring_path = os.path.join(self.tempdir, policy.ring_name + '.ring.gz') write_fake_ring(object_ring_path) app = loadapp(conf_path) # find the end of the pipeline while hasattr(app, 'app'): app = app.app # validate loaded rings self.assertEqual(app.account_ring.serialized_path, account_ring_path) self.assertEqual(app.container_ring.serialized_path, container_ring_path) for policy in POLICIES: self.assertEqual(policy.object_ring, app.get_object_ring(int(policy))) def test_missing_rings(self): conf_path = os.path.join(self.tempdir, 'proxy-server.conf') conf_body = """ [DEFAULT] swift_dir = %s [pipeline:main] pipeline = catch_errors cache proxy-server [app:proxy-server] use = egg:swift#proxy [filter:cache] use = egg:swift#memcache [filter:catch_errors] use = egg:swift#catch_errors """ % self.tempdir with open(conf_path, 'w') as f: f.write(dedent(conf_body)) ring_paths = [ os.path.join(self.tempdir, 'account.ring.gz'), os.path.join(self.tempdir, 'container.ring.gz'), ] for policy in POLICIES: self.assertFalse(policy.object_ring) object_ring_path = os.path.join(self.tempdir, policy.ring_name + '.ring.gz') ring_paths.append(object_ring_path) for policy in POLICIES: self.assertFalse(policy.object_ring) for ring_path in ring_paths: self.assertFalse(os.path.exists(ring_path)) self.assertRaises(IOError, loadapp, conf_path) write_fake_ring(ring_path) # all rings exist, app should load loadapp(conf_path) for policy in POLICIES: self.assertTrue(policy.object_ring) @patch_policies([StoragePolicy(0, 'zero', True, object_ring=FakeRing(base_port=3000))]) class TestObjectController(unittest.TestCase): def setUp(self): self.app = proxy_server.Application( None, FakeMemcache(), 
logger=debug_logger('proxy-ut'), account_ring=FakeRing(), container_ring=FakeRing()) # clear proxy logger result for each test _test_servers[0].logger._clear() def tearDown(self): self.app.account_ring.set_replicas(3) self.app.container_ring.set_replicas(3) for policy in POLICIES: policy.object_ring = FakeRing(base_port=3000) def put_container(self, policy_name, container_name): # Note: only works if called with unpatched policies prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: 0\r\n' 'X-Storage-Token: t\r\n' 'X-Storage-Policy: %s\r\n' '\r\n' % (container_name, policy_name)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 2' self.assertEqual(headers[:len(exp)], exp) def assert_status_map(self, method, statuses, expected, raise_exc=False): with save_globals(): kwargs = {} if raise_exc: kwargs['raise_exc'] = raise_exc set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', headers={'Content-Length': '0', 'Content-Type': 'text/plain'}) self.app.update_request(req) try: res = method(req) except HTTPException as res: pass self.assertEqual(res.status_int, expected) # repeat test set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', headers={'Content-Length': '0', 'Content-Type': 'text/plain'}) self.app.update_request(req) try: res = method(req) except HTTPException as res: pass self.assertEqual(res.status_int, expected) def _sleep_enough(self, condition): for sleeptime in (0.1, 1.0): sleep(sleeptime) if condition(): break @unpatch_policies def test_policy_IO(self): def check_file(policy, cont, devs, check_val): partition, nodes = policy.object_ring.get_nodes('a', cont, 'o') conf = {'devices': _testdir, 'mount_check': 'false'} df_mgr = diskfile.DiskFileManager(conf, FakeLogger()) for dev in devs: file = df_mgr.get_diskfile(dev, partition, 'a', cont, 'o', policy=policy) if check_val is True: file.open() prolis = _test_sockets[0] prosrv = _test_servers[0] # check policy 0: put file on c, read it back, check loc on disk sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() obj = 'test_object0' path = '/v1/a/c/o' fd.write('PUT %s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: %s\r\n' 'Content-Type: text/plain\r\n' '\r\n%s' % (path, str(len(obj)), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) req = Request.blank(path, environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'text/plain'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 200) self.assertEqual(res.body, obj) check_file(POLICIES[0], 'c', ['sda1', 'sdb1'], True) check_file(POLICIES[0], 'c', ['sdc1', 'sdd1', 'sde1', 'sdf1'], False) # check policy 1: put file on c1, read it back, check loc on disk sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() path = '/v1/a/c1/o' obj = 'test_object1' fd.write('PUT %s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: %s\r\n' 'Content-Type: text/plain\r\n' '\r\n%s' % (path, str(len(obj)), obj)) fd.flush() headers = readuntil2crlfs(fd) self.assertEqual(headers[:len(exp)], exp) req = Request.blank(path, environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'text/plain'}) res = req.get_response(prosrv) 
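# NOTE (illustrative sketch, not part of the original test suite): the
# functional-style tests in this class drive the in-process proxy over
# a real TCP socket, hand-framing HTTP/1.1 requests.  The recurring
# pattern, using the same connect_tcp()/readuntil2crlfs() helpers and
# _test_sockets listener seen above (the path and body here are
# placeholders), is:
#
#     sock = connect_tcp(('localhost', prolis.getsockname()[1]))
#     fd = sock.makefile()
#     body = 'payload'
#     fd.write('PUT /v1/a/c/o HTTP/1.1\r\n'
#              'Host: localhost\r\n'
#              'Connection: close\r\n'
#              'X-Storage-Token: t\r\n'
#              'Content-Length: %d\r\n'
#              'Content-Type: text/plain\r\n'
#              '\r\n%s' % (len(body), body))
#     fd.flush()
#     status_and_headers = readuntil2crlfs(fd)
#     assert status_and_headers.startswith('HTTP/1.1 201')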
self.assertEqual(res.status_int, 200) self.assertEqual(res.body, obj) check_file(POLICIES[1], 'c1', ['sdc1', 'sdd1'], True) check_file(POLICIES[1], 'c1', ['sda1', 'sdb1', 'sde1', 'sdf1'], False) # check policy 2: put file on c2, read it back, check loc on disk sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() path = '/v1/a/c2/o' obj = 'test_object2' fd.write('PUT %s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: %s\r\n' 'Content-Type: text/plain\r\n' '\r\n%s' % (path, str(len(obj)), obj)) fd.flush() headers = readuntil2crlfs(fd) self.assertEqual(headers[:len(exp)], exp) req = Request.blank(path, environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'text/plain'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 200) self.assertEqual(res.body, obj) check_file(POLICIES[2], 'c2', ['sde1', 'sdf1'], True) check_file(POLICIES[2], 'c2', ['sda1', 'sdb1', 'sdc1', 'sdd1'], False) @unpatch_policies def test_policy_IO_override(self): if hasattr(_test_servers[-1], '_filesystem'): # ironically, the _filesystem attribute on the object server means # the in-memory diskfile is in use, so this test does not apply return prosrv = _test_servers[0] # validate container policy is 1 req = Request.blank('/v1/a/c1', method='HEAD') res = req.get_response(prosrv) self.assertEqual(res.status_int, 204) # sanity check self.assertEqual(POLICIES[1].name, res.headers['x-storage-policy']) # check overrides: put it in policy 2 (not where the container says) req = Request.blank( '/v1/a/c1/wrong-o', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': BytesIO(b"hello")}, headers={'Content-Type': 'text/plain', 'Content-Length': '5', 'X-Backend-Storage-Policy-Index': '2'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 201) # sanity check # go to disk to make sure it's there partition, nodes = prosrv.get_object_ring(2).get_nodes( 'a', 'c1', 'wrong-o') node = nodes[0] conf = {'devices': _testdir, 'mount_check': 'false'} df_mgr = diskfile.DiskFileManager(conf, FakeLogger()) df = df_mgr.get_diskfile(node['device'], partition, 'a', 'c1', 'wrong-o', policy=POLICIES[2]) with df.open(): contents = ''.join(df.reader()) self.assertEqual(contents, "hello") # can't get it from the normal place req = Request.blank('/v1/a/c1/wrong-o', environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'text/plain'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 404) # sanity check # but we can get it from policy 2 req = Request.blank('/v1/a/c1/wrong-o', environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'text/plain', 'X-Backend-Storage-Policy-Index': '2'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 200) self.assertEqual(res.body, 'hello') # and we can delete it the same way req = Request.blank('/v1/a/c1/wrong-o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'Content-Type': 'text/plain', 'X-Backend-Storage-Policy-Index': '2'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 204) df = df_mgr.get_diskfile(node['device'], partition, 'a', 'c1', 'wrong-o', policy=POLICIES[2]) try: df.open() except DiskFileNotExist as e: self.assertTrue(float(e.timestamp) > 0) else: self.fail('did not raise DiskFileNotExist') @unpatch_policies def test_GET_newest_large_file(self): prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() obj = 'a' * (1024 * 1024) path = '/v1/a/c/o.large' fd.write('PUT %s HTTP/1.1\r\n' 
'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: %s\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (path, str(len(obj)), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) req = Request.blank(path, environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'application/octet-stream', 'X-Newest': 'true'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 200) self.assertEqual(res.body, obj) @unpatch_policies def test_GET_ranges(self): prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() obj = (''.join( ('beans lots of beans lots of beans lots of beans yeah %04d ' % i) for i in range(100))) path = '/v1/a/c/o.beans' fd.write('PUT %s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: %s\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (path, str(len(obj)), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # one byte range req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'application/octet-stream', 'Range': 'bytes=10-200'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 206) self.assertEqual(res.body, obj[10:201]) # multiple byte ranges req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'application/octet-stream', 'Range': 'bytes=10-200,1000-1099,4123-4523'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 206) ct, params = parse_content_type(res.headers['Content-Type']) self.assertEqual(ct, 'multipart/byteranges') boundary = dict(params).get('boundary') self.assertTrue(boundary is not None) got_mime_docs = [] for mime_doc_fh in iter_multipart_mime_documents(StringIO(res.body), boundary): headers = parse_mime_headers(mime_doc_fh) body = mime_doc_fh.read() got_mime_docs.append((headers, body)) self.assertEqual(len(got_mime_docs), 3) first_range_headers = got_mime_docs[0][0] first_range_body = got_mime_docs[0][1] self.assertEqual(first_range_headers['Content-Range'], 'bytes 10-200/5800') self.assertEqual(first_range_body, obj[10:201]) second_range_headers = got_mime_docs[1][0] second_range_body = got_mime_docs[1][1] self.assertEqual(second_range_headers['Content-Range'], 'bytes 1000-1099/5800') self.assertEqual(second_range_body, obj[1000:1100]) second_range_headers = got_mime_docs[2][0] second_range_body = got_mime_docs[2][1] self.assertEqual(second_range_headers['Content-Range'], 'bytes 4123-4523/5800') self.assertEqual(second_range_body, obj[4123:4524]) @unpatch_policies def test_GET_bad_range_zero_byte(self): prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() path = '/v1/a/c/o.zerobyte' fd.write('PUT %s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: 0\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n' % (path,)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # bad byte-range req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'application/octet-stream', 'Range': 'bytes=spaghetti-carbonara'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 200) self.assertEqual(res.body, '') # not a byte-range req = Request.blank( path, 
environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'application/octet-stream', 'Range': 'Kotta'}) res = req.get_response(prosrv) self.assertEqual(res.status_int, 200) self.assertEqual(res.body, '') @unpatch_policies def test_GET_ranges_resuming(self): prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() obj = (''.join( ('Smurf! The smurfing smurf is completely smurfed. %03d ' % i) for i in range(1000))) path = '/v1/a/c/o.smurfs' fd.write('PUT %s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: %s\r\n' 'Content-Type: application/smurftet-stream\r\n' '\r\n%s' % (path, str(len(obj)), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) kaboomed = [0] bytes_before_timeout = [None] class FileLikeKaboom(object): def __init__(self, inner_file_like): self.inner_file_like = inner_file_like # close(), etc. def __getattr__(self, attr): return getattr(self.inner_file_like, attr) def readline(self, *a, **kw): if bytes_before_timeout[0] <= 0: kaboomed[0] += 1 raise ChunkReadTimeout(None) result = self.inner_file_like.readline(*a, **kw) if len(result) > bytes_before_timeout[0]: result = result[:bytes_before_timeout[0]] bytes_before_timeout[0] -= len(result) return result def read(self, length=None): result = self.inner_file_like.read(length) if bytes_before_timeout[0] <= 0: kaboomed[0] += 1 raise ChunkReadTimeout(None) if len(result) > bytes_before_timeout[0]: result = result[:bytes_before_timeout[0]] bytes_before_timeout[0] -= len(result) return result orig_hrtdi = swift.common.request_helpers. \ http_response_to_document_iters # Use this to mock out http_response_to_document_iters. On the first # call, the result will be sabotaged to blow up with # ChunkReadTimeout after some number of bytes are read. On # subsequent calls, no sabotage will be added. def sabotaged_hrtdi(*a, **kw): resp_parts = orig_hrtdi(*a, **kw) for sb, eb, l, h, range_file in resp_parts: if bytes_before_timeout[0] <= 0: # simulate being unable to read MIME part of # multipart/byteranges response kaboomed[0] += 1 raise ChunkReadTimeout(None) boomer = FileLikeKaboom(range_file) yield sb, eb, l, h, boomer sabotaged = [False] def single_sabotage_hrtdi(*a, **kw): if not sabotaged[0]: sabotaged[0] = True return sabotaged_hrtdi(*a, **kw) else: return orig_hrtdi(*a, **kw) # We want sort of an end-to-end test of object resuming, so what we # do is mock out stuff so the proxy thinks it only read a certain # number of bytes before it got a timeout. bytes_before_timeout[0] = 300 with mock.patch.object(proxy_base, 'http_response_to_document_iters', single_sabotage_hrtdi): req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}, headers={'Content-Type': 'application/octet-stream', 'Range': 'bytes=0-500'}) res = req.get_response(prosrv) body = res.body # read the whole thing self.assertEqual(kaboomed[0], 1) # sanity check self.assertEqual(res.status_int, 206) self.assertEqual(len(body), 501) self.assertEqual(body, obj[:501]) # Sanity-check for multi-range resume: make sure we actually break # in the middle of the second byterange. This test is partially # about what happens when all the object servers break at once, and # partially about validating all these mocks we do. 
After all, the # point of resuming is that the client can't tell anything went # wrong, so we need a test where we can't resume and something # *does* go wrong so we can observe it. bytes_before_timeout[0] = 700 kaboomed[0] = 0 sabotaged[0] = False prosrv._error_limiting = {} # clear out errors with mock.patch.object(proxy_base, 'http_response_to_document_iters', sabotaged_hrtdi): # perma-broken req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-500,1000-1500,2000-2500'}) res = req.get_response(prosrv) body = '' try: for chunk in res.app_iter: body += chunk except ChunkReadTimeout: pass self.assertEqual(res.status_int, 206) self.assertTrue(kaboomed[0] > 0) # sanity check ct, params = parse_content_type(res.headers['Content-Type']) self.assertEqual(ct, 'multipart/byteranges') # sanity check boundary = dict(params).get('boundary') self.assertTrue(boundary is not None) # sanity check got_byteranges = [] for mime_doc_fh in iter_multipart_mime_documents(StringIO(body), boundary): parse_mime_headers(mime_doc_fh) body = mime_doc_fh.read() got_byteranges.append(body) self.assertEqual(len(got_byteranges), 2) self.assertEqual(len(got_byteranges[0]), 501) self.assertEqual(len(got_byteranges[1]), 199) # partial # Multi-range resume, resuming in the middle of the first byterange bytes_before_timeout[0] = 300 kaboomed[0] = 0 sabotaged[0] = False prosrv._error_limiting = {} # clear out errors with mock.patch.object(proxy_base, 'http_response_to_document_iters', single_sabotage_hrtdi): req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-500,1000-1500,2000-2500'}) res = req.get_response(prosrv) body = ''.join(res.app_iter) self.assertEqual(res.status_int, 206) self.assertEqual(kaboomed[0], 1) # sanity check ct, params = parse_content_type(res.headers['Content-Type']) self.assertEqual(ct, 'multipart/byteranges') # sanity check boundary = dict(params).get('boundary') self.assertTrue(boundary is not None) # sanity check got_byteranges = [] for mime_doc_fh in iter_multipart_mime_documents(StringIO(body), boundary): parse_mime_headers(mime_doc_fh) body = mime_doc_fh.read() got_byteranges.append(body) self.assertEqual(len(got_byteranges), 3) self.assertEqual(len(got_byteranges[0]), 501) self.assertEqual(got_byteranges[0], obj[:501]) self.assertEqual(len(got_byteranges[1]), 501) self.assertEqual(got_byteranges[1], obj[1000:1501]) self.assertEqual(len(got_byteranges[2]), 501) self.assertEqual(got_byteranges[2], obj[2000:2501]) # Multi-range resume, first GET dies in the middle of the second set # of MIME headers bytes_before_timeout[0] = 501 kaboomed[0] = 0 sabotaged[0] = False prosrv._error_limiting = {} # clear out errors with mock.patch.object(proxy_base, 'http_response_to_document_iters', single_sabotage_hrtdi): req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-500,1000-1500,2000-2500'}) res = req.get_response(prosrv) body = ''.join(res.app_iter) self.assertEqual(res.status_int, 206) self.assertTrue(kaboomed[0] >= 1) # sanity check ct, params = parse_content_type(res.headers['Content-Type']) self.assertEqual(ct, 'multipart/byteranges') # sanity check boundary = dict(params).get('boundary') self.assertTrue(boundary is not None) # sanity check got_byteranges = [] for mime_doc_fh in iter_multipart_mime_documents(StringIO(body), boundary): parse_mime_headers(mime_doc_fh) body = mime_doc_fh.read() got_byteranges.append(body) self.assertEqual(len(got_byteranges), 3) 
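# NOTE (illustrative sketch, not part of the original test suite): all
# of the multi-range tests in this method unpack a multipart/byteranges
# response the same way, using the parse_content_type(),
# iter_multipart_mime_documents() and parse_mime_headers() helpers
# imported above:
#
#     ct, params = parse_content_type(res.headers['Content-Type'])
#     boundary = dict(params)['boundary']
#     parts = []
#     for part in iter_multipart_mime_documents(StringIO(body), boundary):
#         hdrs = parse_mime_headers(part)  # e.g. Content-Range: bytes 0-500/...
#         parts.append((hdrs.get('Content-Range'), part.read()))
#
# Each (Content-Range, bytes) pair corresponds to one requested range;
# a resumed GET must still yield byte-identical parts, which is what
# the got_byteranges assertions before and after this note verify.
# Remember that 'bytes=0-500' is an inclusive range, i.e. 501 bytes.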
self.assertEqual(len(got_byteranges[0]), 501) self.assertEqual(got_byteranges[0], obj[:501]) self.assertEqual(len(got_byteranges[1]), 501) self.assertEqual(got_byteranges[1], obj[1000:1501]) self.assertEqual(len(got_byteranges[2]), 501) self.assertEqual(got_byteranges[2], obj[2000:2501]) # Multi-range resume, first GET dies in the middle of the second # byterange bytes_before_timeout[0] = 750 kaboomed[0] = 0 sabotaged[0] = False prosrv._error_limiting = {} # clear out errors with mock.patch.object(proxy_base, 'http_response_to_document_iters', single_sabotage_hrtdi): req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-500,1000-1500,2000-2500'}) res = req.get_response(prosrv) body = ''.join(res.app_iter) self.assertEqual(res.status_int, 206) self.assertTrue(kaboomed[0] >= 1) # sanity check ct, params = parse_content_type(res.headers['Content-Type']) self.assertEqual(ct, 'multipart/byteranges') # sanity check boundary = dict(params).get('boundary') self.assertTrue(boundary is not None) # sanity check got_byteranges = [] for mime_doc_fh in iter_multipart_mime_documents(StringIO(body), boundary): parse_mime_headers(mime_doc_fh) body = mime_doc_fh.read() got_byteranges.append(body) self.assertEqual(len(got_byteranges), 3) self.assertEqual(len(got_byteranges[0]), 501) self.assertEqual(got_byteranges[0], obj[:501]) self.assertEqual(len(got_byteranges[1]), 501) self.assertEqual(got_byteranges[1], obj[1000:1501]) self.assertEqual(len(got_byteranges[2]), 501) self.assertEqual(got_byteranges[2], obj[2000:2501]) @unpatch_policies def test_PUT_ec(self): policy = POLICIES[3] self.put_container("ec", "ec-con") obj = 'abCD' * 10 # small, so we don't get multiple EC stripes prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/o1 HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Etag: "%s"\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (md5(obj).hexdigest(), len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) ecd = policy.pyeclib_driver expected_pieces = set(ecd.encode(obj)) # go to disk to make sure it's there and all erasure-coded partition, nodes = policy.object_ring.get_nodes('a', 'ec-con', 'o1') conf = {'devices': _testdir, 'mount_check': 'false'} df_mgr = diskfile.DiskFileRouter(conf, FakeLogger())[policy] got_pieces = set() got_indices = set() got_durable = [] for node_index, node in enumerate(nodes): df = df_mgr.get_diskfile(node['device'], partition, 'a', 'ec-con', 'o1', policy=policy) with df.open(): meta = df.get_metadata() contents = ''.join(df.reader()) got_pieces.add(contents) # check presence for a .durable file for the timestamp durable_file = os.path.join( _testdir, node['device'], storage_directory( diskfile.get_data_dir(policy), partition, hash_path('a', 'ec-con', 'o1')), utils.Timestamp(df.timestamp).internal + '.durable') if os.path.isfile(durable_file): got_durable.append(True) lmeta = dict((k.lower(), v) for k, v in meta.items()) got_indices.add( lmeta['x-object-sysmeta-ec-frag-index']) self.assertEqual( lmeta['x-object-sysmeta-ec-etag'], md5(obj).hexdigest()) self.assertEqual( lmeta['x-object-sysmeta-ec-content-length'], str(len(obj))) self.assertEqual( lmeta['x-object-sysmeta-ec-segment-size'], '4096') self.assertEqual( lmeta['x-object-sysmeta-ec-scheme'], '%s 2+1' % DEFAULT_TEST_EC_TYPE) self.assertEqual( lmeta['etag'], 
md5(contents).hexdigest()) self.assertEqual(expected_pieces, got_pieces) self.assertEqual(set(('0', '1', '2')), got_indices) # verify at least 2 puts made it all the way to the end of 2nd # phase, ie at least 2 .durable statuses were written num_durable_puts = sum(d is True for d in got_durable) self.assertTrue(num_durable_puts >= 2) @unpatch_policies def test_PUT_ec_multiple_segments(self): ec_policy = POLICIES[3] self.put_container("ec", "ec-con") pyeclib_header_size = len(ec_policy.pyeclib_driver.encode("")[0]) segment_size = ec_policy.ec_segment_size # Big enough to have multiple segments. Also a multiple of the # segment size to get coverage of that path too. obj = 'ABC' * segment_size prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/o2 HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # it's a 2+1 erasure code, so each fragment archive should be half # the length of the object, plus three inline pyeclib metadata # things (one per segment) expected_length = (len(obj) / 2 + pyeclib_header_size * 3) partition, nodes = ec_policy.object_ring.get_nodes( 'a', 'ec-con', 'o2') conf = {'devices': _testdir, 'mount_check': 'false'} df_mgr = diskfile.DiskFileRouter(conf, FakeLogger())[ec_policy] got_durable = [] fragment_archives = [] for node in nodes: df = df_mgr.get_diskfile( node['device'], partition, 'a', 'ec-con', 'o2', policy=ec_policy) with df.open(): contents = ''.join(df.reader()) fragment_archives.append(contents) self.assertEqual(len(contents), expected_length) # check presence for a .durable file for the timestamp durable_file = os.path.join( _testdir, node['device'], storage_directory( diskfile.get_data_dir(ec_policy), partition, hash_path('a', 'ec-con', 'o2')), utils.Timestamp(df.timestamp).internal + '.durable') if os.path.isfile(durable_file): got_durable.append(True) # Verify that we can decode each individual fragment and that they # are all the correct size fragment_size = ec_policy.fragment_size nfragments = int( math.ceil(float(len(fragment_archives[0])) / fragment_size)) for fragment_index in range(nfragments): fragment_start = fragment_index * fragment_size fragment_end = (fragment_index + 1) * fragment_size try: frags = [fa[fragment_start:fragment_end] for fa in fragment_archives] seg = ec_policy.pyeclib_driver.decode(frags) except ECDriverError: self.fail("Failed to decode fragments %d; this probably " "means the fragments are not the sizes they " "should be" % fragment_index) segment_start = fragment_index * segment_size segment_end = (fragment_index + 1) * segment_size self.assertEqual(seg, obj[segment_start:segment_end]) # verify at least 2 puts made it all the way to the end of 2nd # phase, ie at least 2 .durable statuses were written num_durable_puts = sum(d is True for d in got_durable) self.assertTrue(num_durable_puts >= 2) @unpatch_policies def test_PUT_ec_object_etag_mismatch(self): ec_policy = POLICIES[3] self.put_container("ec", "ec-con") obj = '90:6A:02:60:B1:08-96da3e706025537fc42464916427727e' prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/o3 HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Etag: %s\r\n' 'Content-Length: 
%d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (md5('something else').hexdigest(), len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 422' self.assertEqual(headers[:len(exp)], exp) # nothing should have made it to disk on the object servers partition, nodes = prosrv.get_object_ring(3).get_nodes( 'a', 'ec-con', 'o3') conf = {'devices': _testdir, 'mount_check': 'false'} df_mgr = diskfile.DiskFileRouter(conf, FakeLogger())[ec_policy] for node in nodes: df = df_mgr.get_diskfile(node['device'], partition, 'a', 'ec-con', 'o3', policy=POLICIES[3]) self.assertRaises(DiskFileNotExist, df.open) @unpatch_policies def test_PUT_ec_fragment_archive_etag_mismatch(self): ec_policy = POLICIES[3] self.put_container("ec", "ec-con") # Cause a hash mismatch by feeding one particular MD5 hasher some # extra data. The goal here is to get exactly one of the hashers in # an object server. countdown = [1] def busted_md5_constructor(initial_str=""): hasher = md5(initial_str) if countdown[0] == 0: hasher.update('wrong') countdown[0] -= 1 return hasher obj = 'uvarovite-esurience-cerated-symphysic' prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) with mock.patch('swift.obj.server.md5', busted_md5_constructor): fd = sock.makefile() fd.write('PUT /v1/a/ec-con/pimento HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Etag: %s\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (md5(obj).hexdigest(), len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 503' # no quorum self.assertEqual(headers[:len(exp)], exp) # 2/3 of the fragment archives should have landed on disk partition, nodes = prosrv.get_object_ring(3).get_nodes( 'a', 'ec-con', 'pimento') conf = {'devices': _testdir, 'mount_check': 'false'} df_mgr = diskfile.DiskFileRouter(conf, FakeLogger())[ec_policy] found = 0 for node in nodes: df = df_mgr.get_diskfile(node['device'], partition, 'a', 'ec-con', 'pimento', policy=POLICIES[3]) try: # diskfile open won't succeed because no durable was written, # so look under the hood for data files. 
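# NOTE (illustrative, not part of the original test suite): for this
# 2+1 EC policy every object server stores a single fragment archive
# as a .data file, and the companion .durable marker is only written
# once the proxy confirms a quorum in the second commit phase (see the
# durable_file checks in test_PUT_ec above).  That is why this test
# cannot df.open() the diskfiles and instead lists df._datadir,
# expecting one .data file on each of the two uncorrupted nodes, while
# the quorum-failure tests below expect df._datadir not to exist at
# all.  The size arithmetic checked in test_PUT_ec_multiple_segments
# follows from the same 2+1 layout:
#
#     # each fragment archive holds half the object plus one small
#     # pyeclib header per segment (three segments in that test)
#     expected_length = len(obj) / 2 + pyeclib_header_size * 3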
files = os.listdir(df._datadir) num_data_files = len([f for f in files if f.endswith('.data')]) self.assertEqual(1, num_data_files) found += 1 except OSError: pass self.assertEqual(found, 2) @unpatch_policies def test_PUT_ec_fragment_quorum_archive_etag_mismatch(self): ec_policy = POLICIES[3] self.put_container("ec", "ec-con") def busted_md5_constructor(initial_str=""): hasher = md5(initial_str) hasher.update('wrong') return hasher obj = 'uvarovite-esurience-cerated-symphysic' prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) call_count = [0] def mock_committer(self): call_count[0] += 1 commit_confirmation = \ 'swift.proxy.controllers.obj.ECPutter.send_commit_confirmation' with mock.patch('swift.obj.server.md5', busted_md5_constructor), \ mock.patch(commit_confirmation, mock_committer): fd = sock.makefile() fd.write('PUT /v1/a/ec-con/quorum HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Etag: %s\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (md5(obj).hexdigest(), len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 503' # no quorum self.assertEqual(headers[:len(exp)], exp) # Don't send commit to object-server if quorum responses consist of 4xx self.assertEqual(0, call_count[0]) # no fragment archives should have landed on disk partition, nodes = prosrv.get_object_ring(3).get_nodes( 'a', 'ec-con', 'quorum') conf = {'devices': _testdir, 'mount_check': 'false'} df_mgr = diskfile.DiskFileRouter(conf, FakeLogger())[ec_policy] for node in nodes: df = df_mgr.get_diskfile(node['device'], partition, 'a', 'ec-con', 'quorum', policy=POLICIES[3]) self.assertFalse(os.path.exists(df._datadir)) @unpatch_policies def test_PUT_ec_fragment_quorum_bad_request(self): ec_policy = POLICIES[3] self.put_container("ec", "ec-con") obj = 'uvarovite-esurience-cerated-symphysic' prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) call_count = [0] def mock_committer(self): call_count[0] += 1 read_footer = \ 'swift.obj.server.ObjectController._read_metadata_footer' commit_confirmation = \ 'swift.proxy.controllers.obj.ECPutter.send_commit_confirmation' with mock.patch(read_footer) as read_footer_call, \ mock.patch(commit_confirmation, mock_committer): # Emulate missing footer MIME doc in all object-servers read_footer_call.side_effect = HTTPBadRequest( body="couldn't find footer MIME doc") fd = sock.makefile() fd.write('PUT /v1/a/ec-con/quorum HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Etag: %s\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (md5(obj).hexdigest(), len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) # Don't show a result of the bad conversation between proxy-server # and object-server exp = 'HTTP/1.1 503' self.assertEqual(headers[:len(exp)], exp) # Don't send commit to object-server if quorum responses consist of 4xx self.assertEqual(0, call_count[0]) # no fragment archives should have landed on disk partition, nodes = prosrv.get_object_ring(3).get_nodes( 'a', 'ec-con', 'quorum') conf = {'devices': _testdir, 'mount_check': 'false'} df_mgr = diskfile.DiskFileRouter(conf, FakeLogger())[ec_policy] for node in nodes: df = df_mgr.get_diskfile(node['device'], partition, 'a', 'ec-con', 'quorum', policy=POLICIES[3]) self.assertFalse(os.path.exists(df._datadir)) @unpatch_policies def 
test_PUT_ec_if_none_match(self): self.put_container("ec", "ec-con") obj = 'ananepionic-lepidophyllous-ropewalker-neglectful' prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/inm HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Etag: "%s"\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (md5(obj).hexdigest(), len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/inm HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'If-None-Match: *\r\n' 'Etag: "%s"\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (md5(obj).hexdigest(), len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 412' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_GET_ec(self): self.put_container("ec", "ec-con") obj = '0123456' * 11 * 17 prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/go-get-it HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'X-Object-Meta-Color: chartreuse\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/ec-con/go-get-it HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) headers = parse_headers_string(headers) self.assertEqual(str(len(obj)), headers['Content-Length']) self.assertEqual(md5(obj).hexdigest(), headers['Etag']) self.assertEqual('chartreuse', headers['X-Object-Meta-Color']) gotten_obj = '' while True: buf = fd.read(64) if not buf: break gotten_obj += buf self.assertEqual(gotten_obj, obj) error_lines = prosrv.logger.get_lines_for_level('error') warn_lines = prosrv.logger.get_lines_for_level('warning') self.assertEqual(len(error_lines), 0) # sanity self.assertEqual(len(warn_lines), 0) # sanity def _test_conditional_GET(self, policy): container_name = uuid.uuid4().hex object_path = '/v1/a/%s/conditionals' % container_name self.put_container(policy.name, container_name) obj = 'this object has an etag and is otherwise unimportant' etag = md5(obj).hexdigest() not_etag = md5(obj + "blahblah").hexdigest() prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT %s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (object_path, len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) for verb, body in (('GET', obj), ('HEAD', '')): # If-Match req = Request.blank( object_path, environ={'REQUEST_METHOD': verb}, headers={'If-Match': etag}) resp = req.get_response(prosrv) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, body) self.assertEqual(etag, 
resp.headers.get('etag')) self.assertEqual('bytes', resp.headers.get('accept-ranges')) req = Request.blank( object_path, environ={'REQUEST_METHOD': verb}, headers={'If-Match': not_etag}) resp = req.get_response(prosrv) self.assertEqual(resp.status_int, 412) self.assertEqual(etag, resp.headers.get('etag')) req = Request.blank( object_path, environ={'REQUEST_METHOD': verb}, headers={'If-Match': "*"}) resp = req.get_response(prosrv) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, body) self.assertEqual(etag, resp.headers.get('etag')) self.assertEqual('bytes', resp.headers.get('accept-ranges')) # If-None-Match req = Request.blank( object_path, environ={'REQUEST_METHOD': verb}, headers={'If-None-Match': etag}) resp = req.get_response(prosrv) self.assertEqual(resp.status_int, 304) self.assertEqual(etag, resp.headers.get('etag')) self.assertEqual('bytes', resp.headers.get('accept-ranges')) req = Request.blank( object_path, environ={'REQUEST_METHOD': verb}, headers={'If-None-Match': not_etag}) resp = req.get_response(prosrv) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, body) self.assertEqual(etag, resp.headers.get('etag')) self.assertEqual('bytes', resp.headers.get('accept-ranges')) req = Request.blank( object_path, environ={'REQUEST_METHOD': verb}, headers={'If-None-Match': "*"}) resp = req.get_response(prosrv) self.assertEqual(resp.status_int, 304) self.assertEqual(etag, resp.headers.get('etag')) self.assertEqual('bytes', resp.headers.get('accept-ranges')) error_lines = prosrv.logger.get_lines_for_level('error') warn_lines = prosrv.logger.get_lines_for_level('warning') self.assertEqual(len(error_lines), 0) # sanity self.assertEqual(len(warn_lines), 0) # sanity @unpatch_policies def test_conditional_GET_ec(self): policy = POLICIES[3] self.assertEqual('erasure_coding', policy.policy_type) # sanity self._test_conditional_GET(policy) @unpatch_policies def test_conditional_GET_replication(self): policy = POLICIES[0] self.assertEqual('replication', policy.policy_type) # sanity self._test_conditional_GET(policy) @unpatch_policies def test_GET_ec_big(self): self.put_container("ec", "ec-con") # our EC segment size is 4 KiB, so this is multiple (3) segments; # we'll verify that with a sanity check obj = 'a moose once bit my sister' * 400 self.assertTrue( len(obj) > POLICIES.get_by_name("ec").ec_segment_size * 2, "object is too small for proper testing") prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/big-obj-get HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/ec-con/big-obj-get HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) headers = parse_headers_string(headers) self.assertEqual(str(len(obj)), headers['Content-Length']) self.assertEqual(md5(obj).hexdigest(), headers['Etag']) gotten_obj = '' while True: buf = fd.read(64) if not buf: break gotten_obj += buf # This may look like a redundant test, but when things fail, this # has a useful failure message while the subsequent one 
spews piles # of garbage and demolishes your terminal's scrollback buffer. self.assertEqual(len(gotten_obj), len(obj)) self.assertEqual(gotten_obj, obj) error_lines = prosrv.logger.get_lines_for_level('error') warn_lines = prosrv.logger.get_lines_for_level('warning') self.assertEqual(len(error_lines), 0) # sanity self.assertEqual(len(warn_lines), 0) # sanity @unpatch_policies def test_GET_ec_failure_handling(self): self.put_container("ec", "ec-con") obj = 'look at this object; it is simply amazing ' * 500 prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/crash-test-dummy HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) def explodey_iter(inner_iter): yield next(inner_iter) raise Exception("doom ba doom") def explodey_doc_parts_iter(inner_iter_iter): for item in inner_iter_iter: item = item.copy() # paranoia about mutable data item['part_iter'] = explodey_iter(item['part_iter']) yield item real_ec_app_iter = swift.proxy.controllers.obj.ECAppIter def explodey_ec_app_iter(path, policy, iterators, *a, **kw): # Each thing in `iterators` here is a document-parts iterator, # and we want to fail after getting a little into each part. # # That way, we ensure we've started streaming the response to # the client when things go wrong. return real_ec_app_iter( path, policy, [explodey_doc_parts_iter(i) for i in iterators], *a, **kw) with mock.patch("swift.proxy.controllers.obj.ECAppIter", explodey_ec_app_iter): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/ec-con/crash-test-dummy HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) headers = parse_headers_string(headers) self.assertEqual(str(len(obj)), headers['Content-Length']) self.assertEqual(md5(obj).hexdigest(), headers['Etag']) gotten_obj = '' try: with Timeout(300): # don't hang the testrun when this fails while True: buf = fd.read(64) if not buf: break gotten_obj += buf except Timeout: self.fail("GET hung when connection failed") # Ensure we failed partway through, otherwise the mocks could # get out of date without anyone noticing self.assertTrue(0 < len(gotten_obj) < len(obj)) @unpatch_policies def test_HEAD_ec(self): self.put_container("ec", "ec-con") obj = '0123456' * 11 * 17 prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/go-head-it HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'X-Object-Meta-Color: chartreuse\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('HEAD /v1/a/ec-con/go-head-it HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) headers = parse_headers_string(headers) 
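# NOTE (illustrative, not part of the original test suite): although
# each backend only holds an erasure-coded fragment archive, the HEAD
# assertions that follow expect the client-visible values, i.e.
# Content-Length equal to len(obj) and Etag equal to md5(obj).  As the
# sysmeta checks in test_PUT_ec above show, each fragment's own 'etag'
# is the md5 of that fragment archive, while the whole-object etag and
# length are carried in x-object-sysmeta-ec-etag and
# x-object-sysmeta-ec-content-length; the proxy reports the latter:
#
#     whole_object_etag = md5(obj).hexdigest()            # what clients see
#     fragment_etag = md5(fragment_archive).hexdigest()   # on-disk only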
self.assertEqual(str(len(obj)), headers['Content-Length']) self.assertEqual(md5(obj).hexdigest(), headers['Etag']) self.assertEqual('chartreuse', headers['X-Object-Meta-Color']) error_lines = prosrv.logger.get_lines_for_level('error') warn_lines = prosrv.logger.get_lines_for_level('warning') self.assertEqual(len(error_lines), 0) # sanity self.assertEqual(len(warn_lines), 0) # sanity @unpatch_policies def test_GET_ec_404(self): self.put_container("ec", "ec-con") prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/ec-con/yes-we-have-no-bananas HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 404' self.assertEqual(headers[:len(exp)], exp) error_lines = prosrv.logger.get_lines_for_level('error') warn_lines = prosrv.logger.get_lines_for_level('warning') self.assertEqual(len(error_lines), 0) # sanity self.assertEqual(len(warn_lines), 0) # sanity @unpatch_policies def test_HEAD_ec_404(self): self.put_container("ec", "ec-con") prolis = _test_sockets[0] prosrv = _test_servers[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('HEAD /v1/a/ec-con/yes-we-have-no-bananas HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 404' self.assertEqual(headers[:len(exp)], exp) error_lines = prosrv.logger.get_lines_for_level('error') warn_lines = prosrv.logger.get_lines_for_level('warning') self.assertEqual(len(error_lines), 0) # sanity self.assertEqual(len(warn_lines), 0) # sanity def test_PUT_expect_header_zero_content_length(self): test_errors = [] def test_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): if path == '/a/c/o.jpg': if 'expect' in headers or 'Expect' in headers: test_errors.append('Expect was in headers for object ' 'server!') with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') # The (201, Exception('test')) tuples in there have the effect of # changing the status of the initial expect response. The default # expect response from FakeConn for 201 is 100. # But the object server won't send a 100 continue line if the # client doesn't send a expect 100 header (as is the case with # zero byte PUTs as validated by this test), nevertheless the # object controller calls getexpect without prejudice. In this # case the status from the response shows up early in getexpect # instead of having to wait until getresponse. The Exception is # in there to ensure that the object controller also *uses* the # result of getexpect instead of calling getresponse in which case # our FakeConn will blow up. 
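            # Rough sketch of the exchange being modelled (illustration only):
            #
            #   non-zero-length PUT              zero-length PUT
            #   -------------------              ---------------
            #   proxy: Expect: 100-continue      proxy: (no Expect header)
            #   obj:   HTTP/1.1 100 Continue     obj:   HTTP/1.1 201 Created
            #   proxy: <request body>
            #   obj:   HTTP/1.1 201 Created
            #
            # so in the zero-byte case the first status getexpect() sees is
            # already the final one, which is what the (201, Exception('test'))
            # tuples below simulate.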
success_codes = [(201, Exception('test'))] * 3 set_http_connect(200, 200, *success_codes, give_connect=test_connect) req = Request.blank('/v1/a/c/o.jpg', {}) req.content_length = 0 self.app.update_request(req) self.app.memcache.store = {} res = controller.PUT(req) self.assertEqual(test_errors, []) self.assertTrue(res.status.startswith('201 '), res.status) def test_PUT_expect_header_nonzero_content_length(self): test_errors = [] def test_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): if path == '/a/c/o.jpg': if 'Expect' not in headers: test_errors.append('Expect was not in headers for ' 'non-zero byte PUT!') with save_globals(): controller = ReplicatedObjectController( self.app, 'a', 'c', 'o.jpg') # the (100, 201) tuples in there are just being extra explicit # about the FakeConn returning the 100 Continue status when the # object controller calls getexpect. Which is FakeConn's default # for 201 if no expect_status is specified. success_codes = [(100, 201)] * 3 set_http_connect(200, 200, *success_codes, give_connect=test_connect) req = Request.blank('/v1/a/c/o.jpg', {}) req.content_length = 1 req.body = 'a' self.app.update_request(req) self.app.memcache.store = {} res = controller.PUT(req) self.assertEqual(test_errors, []) self.assertTrue(res.status.startswith('201 ')) def test_PUT_respects_write_affinity(self): written_to = [] def test_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): if path == '/a/c/o.jpg': written_to.append((ipaddr, port, device)) with save_globals(): def is_r0(node): return node['region'] == 0 object_ring = self.app.get_object_ring(None) object_ring.max_more_nodes = 100 self.app.write_affinity_is_local_fn = is_r0 self.app.write_affinity_node_count = lambda r: 3 controller = \ ReplicatedObjectController( self.app, 'a', 'c', 'o.jpg') set_http_connect(200, 200, 201, 201, 201, give_connect=test_connect) req = Request.blank('/v1/a/c/o.jpg', {}) req.content_length = 1 req.body = 'a' self.app.memcache.store = {} res = controller.PUT(req) self.assertTrue(res.status.startswith('201 ')) self.assertEqual(3, len(written_to)) for ip, port, device in written_to: # this is kind of a hokey test, but in FakeRing, the port is even # when the region is 0, and odd when the region is 1, so this test # asserts that we only wrote to nodes in region 0. self.assertEqual(0, port % 2) def test_PUT_respects_write_affinity_with_507s(self): written_to = [] def test_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): if path == '/a/c/o.jpg': written_to.append((ipaddr, port, device)) with save_globals(): def is_r0(node): return node['region'] == 0 object_ring = self.app.get_object_ring(None) object_ring.max_more_nodes = 100 self.app.write_affinity_is_local_fn = is_r0 self.app.write_affinity_node_count = lambda r: 3 controller = \ ReplicatedObjectController( self.app, 'a', 'c', 'o.jpg') self.app.error_limit( object_ring.get_part_nodes(1)[0], 'test') set_http_connect(200, 200, # account, container 201, 201, 201, # 3 working backends give_connect=test_connect) req = Request.blank('/v1/a/c/o.jpg', {}) req.content_length = 1 req.body = 'a' self.app.memcache.store = {} res = controller.PUT(req) self.assertTrue(res.status.startswith('201 ')) self.assertEqual(3, len(written_to)) # this is kind of a hokey test, but in FakeRing, the port is even when # the region is 0, and odd when the region is 1, so this test asserts # that we wrote to 2 nodes in region 0, then went to 1 non-r0 node. 
self.assertEqual(0, written_to[0][1] % 2) # it's (ip, port, device) self.assertEqual(0, written_to[1][1] % 2) self.assertNotEqual(0, written_to[2][1] % 2) @unpatch_policies def test_PUT_no_etag_fallocate(self): with mock.patch('swift.obj.diskfile.fallocate') as mock_fallocate: prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() obj = 'hemoleucocytic-surfactant' fd.write('PUT /v1/a/c/o HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # one for each obj server; this test has 2 self.assertEqual(len(mock_fallocate.mock_calls), 2) @unpatch_policies def test_PUT_message_length_using_content_length(self): prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() obj = 'j' * 20 fd.write('PUT /v1/a/c/o.content-length HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: %s\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n%s' % (str(len(obj)), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_PUT_message_length_using_transfer_encoding(self): prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/c/o.chunked HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' 'Transfer-Encoding: chunked\r\n\r\n' '2\r\n' 'oh\r\n' '4\r\n' ' say\r\n' '4\r\n' ' can\r\n' '4\r\n' ' you\r\n' '4\r\n' ' see\r\n' '3\r\n' ' by\r\n' '4\r\n' ' the\r\n' '8\r\n' ' dawns\'\n\r\n' '0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_PUT_message_length_using_both(self): prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/c/o.chunked HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' 'Content-Length: 33\r\n' 'Transfer-Encoding: chunked\r\n\r\n' '2\r\n' 'oh\r\n' '4\r\n' ' say\r\n' '4\r\n' ' can\r\n' '4\r\n' ' you\r\n' '4\r\n' ' see\r\n' '3\r\n' ' by\r\n' '4\r\n' ' the\r\n' '8\r\n' ' dawns\'\n\r\n' '0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_PUT_bad_message_length(self): prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/c/o.chunked HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' 'Content-Length: 33\r\n' 'Transfer-Encoding: gzip\r\n\r\n' '2\r\n' 'oh\r\n' '4\r\n' ' say\r\n' '4\r\n' ' can\r\n' '4\r\n' ' you\r\n' '4\r\n' ' see\r\n' '3\r\n' ' by\r\n' '4\r\n' ' the\r\n' '8\r\n' ' dawns\'\n\r\n' '0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 400' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_PUT_message_length_unsup_xfr_encoding(self): prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/c/o.chunked HTTP/1.1\r\n' 
'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' 'Content-Length: 33\r\n' 'Transfer-Encoding: gzip,chunked\r\n\r\n' '2\r\n' 'oh\r\n' '4\r\n' ' say\r\n' '4\r\n' ' can\r\n' '4\r\n' ' you\r\n' '4\r\n' ' see\r\n' '3\r\n' ' by\r\n' '4\r\n' ' the\r\n' '8\r\n' ' dawns\'\n\r\n' '0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 501' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_PUT_message_length_too_large(self): with mock.patch('swift.common.constraints.MAX_FILE_SIZE', 10): prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/c/o.chunked HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: application/octet-stream\r\n' 'Content-Length: 33\r\n\r\n' 'oh say can you see by the dawns\'\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 413' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_PUT_POST_last_modified(self): prolis = _test_sockets[0] prosrv = _test_servers[0] def _do_HEAD(): # do a HEAD to get reported last modified time sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('HEAD /v1/a/c/o.last_modified HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'X-Storage-Token: t\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) last_modified_head = [line for line in headers.split('\r\n') if lm_hdr in line][0][len(lm_hdr):] return last_modified_head def _do_conditional_GET_checks(last_modified_time): # check If-(Un)Modified-Since GETs sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/c/o.last_modified HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'If-Modified-Since: %s\r\n' 'X-Storage-Token: t\r\n\r\n' % last_modified_time) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 304' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/c/o.last_modified HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'If-Unmodified-Since: %s\r\n' 'X-Storage-Token: t\r\n\r\n' % last_modified_time) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) # PUT the object sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/c/o.last_modified HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'X-Storage-Token: t\r\nContent-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' lm_hdr = 'Last-Modified: ' self.assertEqual(headers[:len(exp)], exp) last_modified_put = [line for line in headers.split('\r\n') if lm_hdr in line][0][len(lm_hdr):] last_modified_head = _do_HEAD() self.assertEqual(last_modified_put, last_modified_head) _do_conditional_GET_checks(last_modified_put) # now POST to the object using default object_post_as_copy setting orig_post_as_copy = prosrv.object_post_as_copy # last-modified rounded in sec so sleep a sec to increment sleep(1) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('POST /v1/a/c/o.last_modified HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'X-Storage-Token: t\r\nContent-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 202' self.assertEqual(headers[:len(exp)], exp) for 
line in headers.split('\r\n'): self.assertFalse(line.startswith(lm_hdr)) # last modified time will have changed due to POST last_modified_head = _do_HEAD() self.assertNotEqual(last_modified_put, last_modified_head) _do_conditional_GET_checks(last_modified_head) # now POST using non-default object_post_as_copy setting try: # last-modified rounded in sec so sleep a sec to increment last_modified_post = last_modified_head sleep(1) prosrv.object_post_as_copy = not orig_post_as_copy sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('POST /v1/a/c/o.last_modified HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'X-Storage-Token: t\r\nContent-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 202' self.assertEqual(headers[:len(exp)], exp) for line in headers.split('\r\n'): self.assertFalse(line.startswith(lm_hdr)) finally: prosrv.object_post_as_copy = orig_post_as_copy # last modified time will have changed due to POST last_modified_head = _do_HEAD() self.assertNotEqual(last_modified_post, last_modified_head) _do_conditional_GET_checks(last_modified_head) def test_PUT_auto_content_type(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') def test_content_type(filename, expected): # The three responses here are for account_info() (HEAD to # account server), container_info() (HEAD to container server) # and three calls to _connect_put_node() (PUT to three object # servers) set_http_connect(201, 201, 201, 201, 201, give_content_type=lambda content_type: self.assertEqual(content_type, next(expected))) # We need into include a transfer-encoding to get past # constraints.check_object_creation() req = Request.blank('/v1/a/c/%s' % filename, {}, headers={'transfer-encoding': 'chunked'}) self.app.update_request(req) self.app.memcache.store = {} res = controller.PUT(req) # If we don't check the response here we could miss problems # in PUT() self.assertEqual(res.status_int, 201) test_content_type('test.jpg', iter(['', '', 'image/jpeg', 'image/jpeg', 'image/jpeg'])) test_content_type('test.html', iter(['', '', 'text/html', 'text/html', 'text/html'])) test_content_type('test.css', iter(['', '', 'text/css', 'text/css', 'text/css'])) def test_custom_mime_types_files(self): swift_dir = mkdtemp() try: with open(os.path.join(swift_dir, 'mime.types'), 'w') as fp: fp.write('foo/bar foo\n') proxy_server.Application({'swift_dir': swift_dir}, FakeMemcache(), FakeLogger(), FakeRing(), FakeRing()) self.assertEqual(proxy_server.mimetypes.guess_type('blah.foo')[0], 'foo/bar') self.assertEqual(proxy_server.mimetypes.guess_type('blah.jpg')[0], 'image/jpeg') finally: rmtree(swift_dir, ignore_errors=True) def test_PUT(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') def test_status_map(statuses, expected): set_http_connect(*statuses) req = Request.blank('/v1/a/c/o.jpg', {}) req.content_length = 0 self.app.update_request(req) self.app.memcache.store = {} res = controller.PUT(req) expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((200, 200, 201, 201, 201), 201) test_status_map((200, 200, 201, 201, 500), 201) test_status_map((200, 200, 204, 404, 404), 404) test_status_map((200, 200, 204, 500, 404), 503) test_status_map((200, 200, 202, 202, 204), 204) def test_PUT_connect_exceptions(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') 
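            # A reading aid for the status maps below (based on how the fake
            # connections are driven in these tests): a plain int is the
            # status a backend responds with, a bare exception or -1 stands
            # for a failure while connecting, and a two-element tuple gives
            # separate results for the expect phase and the final response,
            # e.g. (100, Timeout()) means the expect succeeds but reading the
            # response times out.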
def test_status_map(statuses, expected): set_http_connect(*statuses) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o.jpg', {}) req.content_length = 0 self.app.update_request(req) try: res = controller.PUT(req) except HTTPException as res: pass expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((200, 200, 201, 201, -1), 201) # connect exc # connect errors test_status_map((200, 200, Timeout(), 201, 201, ), 201) test_status_map((200, 200, 201, 201, Exception()), 201) # expect errors test_status_map((200, 200, (Timeout(), None), 201, 201), 201) test_status_map((200, 200, (Exception(), None), 201, 201), 201) # response errors test_status_map((200, 200, (100, Timeout()), 201, 201), 201) test_status_map((200, 200, (100, Exception()), 201, 201), 201) test_status_map((200, 200, 507, 201, 201), 201) # error limited test_status_map((200, 200, -1, 201, -1), 503) test_status_map((200, 200, 503, -1, 503), 503) def test_PUT_send_exceptions(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') def test_status_map(statuses, expected): self.app.memcache.store = {} set_http_connect(*statuses) req = Request.blank('/v1/a/c/o.jpg', environ={'REQUEST_METHOD': 'PUT'}, body='some data') self.app.update_request(req) try: res = controller.PUT(req) except HTTPException as res: pass expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((200, 200, 201, -1, 201), 201) test_status_map((200, 200, 201, -1, -1), 503) test_status_map((200, 200, 503, 503, -1), 503) def test_PUT_max_size(self): with save_globals(): set_http_connect(201, 201, 201) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', {}, headers={ 'Content-Length': str(constraints.MAX_FILE_SIZE + 1), 'Content-Type': 'foo/bar'}) self.app.update_request(req) res = controller.PUT(req) self.assertEqual(res.status_int, 413) def test_PUT_bad_content_type(self): with save_globals(): set_http_connect(201, 201, 201) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', {}, headers={ 'Content-Length': 0, 'Content-Type': 'foo/bar;swift_hey=45'}) self.app.update_request(req) res = controller.PUT(req) self.assertEqual(res.status_int, 400) def test_PUT_getresponse_exceptions(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') def test_status_map(statuses, expected): self.app.memcache.store = {} set_http_connect(*statuses) req = Request.blank('/v1/a/c/o.jpg', {}) req.content_length = 0 self.app.update_request(req) try: res = controller.PUT(req) except HTTPException as res: pass expected = str(expected) self.assertEqual(res.status[:len(str(expected))], str(expected)) test_status_map((200, 200, 201, 201, -1), 201) test_status_map((200, 200, 201, -1, -1), 503) test_status_map((200, 200, 503, 503, -1), 503) def test_POST(self): with save_globals(): self.app.object_post_as_copy = False def test_status_map(statuses, expected): set_http_connect(*statuses) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', {}, method='POST', headers={'Content-Type': 'foo/bar'}) self.app.update_request(req) res = req.get_response(self.app) expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((200, 200, 202, 202, 202), 202) test_status_map((200, 200, 202, 202, 500), 202) test_status_map((200, 200, 202, 500, 
500), 503) test_status_map((200, 200, 202, 404, 500), 503) test_status_map((200, 200, 202, 404, 404), 404) test_status_map((200, 200, 404, 500, 500), 503) test_status_map((200, 200, 404, 404, 404), 404) @patch_policies([ StoragePolicy(0, 'zero', is_default=True, object_ring=FakeRing()), StoragePolicy(1, 'one', object_ring=FakeRing()), ]) def test_POST_backend_headers(self): # reset the router post patch_policies self.app.obj_controller_router = proxy_server.ObjectControllerRouter() self.app.object_post_as_copy = False self.app.sort_nodes = lambda nodes: nodes backend_requests = [] def capture_requests(ip, port, method, path, headers, *args, **kwargs): backend_requests.append((method, path, headers)) req = Request.blank('/v1/a/c/o', {}, method='POST', headers={'X-Object-Meta-Color': 'Blue', 'Content-Type': 'text/plain'}) # we want the container_info response to says a policy index of 1 resp_headers = {'X-Backend-Storage-Policy-Index': 1} with mocked_http_conn( 200, 200, 202, 202, 202, headers=resp_headers, give_connect=capture_requests ) as fake_conn: resp = req.get_response(self.app) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 202) self.assertEqual(len(backend_requests), 5) def check_request(req, method, path, headers=None): req_method, req_path, req_headers = req self.assertEqual(method, req_method) # caller can ignore leading path parts self.assertTrue(req_path.endswith(path), 'expected path to end with %s, it was %s' % ( path, req_path)) headers = headers or {} # caller can ignore some headers for k, v in headers.items(): self.assertEqual(req_headers[k], v) account_request = backend_requests.pop(0) check_request(account_request, method='HEAD', path='/sda/0/a') container_request = backend_requests.pop(0) check_request(container_request, method='HEAD', path='/sda/0/a/c') # make sure backend requests included expected container headers container_headers = {} for request in backend_requests: req_headers = request[2] device = req_headers['x-container-device'] host = req_headers['x-container-host'] container_headers[device] = host expectations = { 'method': 'POST', 'path': '/0/a/c/o', 'headers': { 'X-Container-Partition': '0', 'Connection': 'close', 'User-Agent': 'proxy-server %s' % os.getpid(), 'Host': 'localhost:80', 'Referer': 'POST http://localhost/v1/a/c/o', 'X-Object-Meta-Color': 'Blue', 'X-Backend-Storage-Policy-Index': '1' }, } check_request(request, **expectations) expected = {} for i, device in enumerate(['sda', 'sdb', 'sdc']): expected[device] = '10.0.0.%d:100%d' % (i, i) self.assertEqual(container_headers, expected) # and again with policy override self.app.memcache.store = {} backend_requests = [] req = Request.blank('/v1/a/c/o', {}, method='POST', headers={'X-Object-Meta-Color': 'Blue', 'Content-Type': 'text/plain', 'X-Backend-Storage-Policy-Index': 0}) with mocked_http_conn( 200, 200, 202, 202, 202, headers=resp_headers, give_connect=capture_requests ) as fake_conn: resp = req.get_response(self.app) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 202) self.assertEqual(len(backend_requests), 5) for request in backend_requests[2:]: expectations = { 'method': 'POST', 'path': '/0/a/c/o', # ignore device bit 'headers': { 'X-Object-Meta-Color': 'Blue', 'X-Backend-Storage-Policy-Index': '0', } } check_request(request, **expectations) # and this time with post as copy self.app.object_post_as_copy = True self.app.memcache.store = {} backend_requests = [] req = Request.blank('/v1/a/c/o', {}, 
method='POST', headers={'X-Object-Meta-Color': 'Blue', 'X-Backend-Storage-Policy-Index': 0}) with mocked_http_conn( 200, 200, 200, 200, 200, 201, 201, 201, headers=resp_headers, give_connect=capture_requests ) as fake_conn: resp = req.get_response(self.app) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 202) self.assertEqual(len(backend_requests), 8) policy0 = {'X-Backend-Storage-Policy-Index': '0'} policy1 = {'X-Backend-Storage-Policy-Index': '1'} expected = [ # account info {'method': 'HEAD', 'path': '/0/a'}, # container info {'method': 'HEAD', 'path': '/0/a/c'}, # x-newests {'method': 'GET', 'path': '/0/a/c/o', 'headers': policy1}, {'method': 'GET', 'path': '/0/a/c/o', 'headers': policy1}, {'method': 'GET', 'path': '/0/a/c/o', 'headers': policy1}, # new writes {'method': 'PUT', 'path': '/0/a/c/o', 'headers': policy0}, {'method': 'PUT', 'path': '/0/a/c/o', 'headers': policy0}, {'method': 'PUT', 'path': '/0/a/c/o', 'headers': policy0}, ] for request, expectations in zip(backend_requests, expected): check_request(request, **expectations) def test_POST_as_copy(self): with save_globals(): def test_status_map(statuses, expected): set_http_connect(*statuses) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'foo/bar'}) self.app.update_request(req) res = req.get_response(self.app) expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((200, 200, 200, 200, 200, 202, 202, 202), 202) test_status_map((200, 200, 200, 200, 200, 202, 202, 500), 202) test_status_map((200, 200, 200, 200, 200, 202, 500, 500), 503) test_status_map((200, 200, 200, 200, 200, 202, 404, 500), 503) test_status_map((200, 200, 200, 200, 200, 202, 404, 404), 404) test_status_map((200, 200, 200, 200, 200, 404, 500, 500), 503) test_status_map((200, 200, 200, 200, 200, 404, 404, 404), 404) def test_DELETE(self): with save_globals(): def test_status_map(statuses, expected): set_http_connect(*statuses) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'DELETE'}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status[:len(str(expected))], str(expected)) test_status_map((200, 200, 204, 204, 204), 204) test_status_map((200, 200, 204, 204, 500), 204) test_status_map((200, 200, 204, 404, 404), 404) test_status_map((200, 204, 500, 500, 404), 503) test_status_map((200, 200, 404, 404, 404), 404) test_status_map((200, 200, 400, 400, 400), 400) def test_HEAD(self): with save_globals(): def test_status_map(statuses, expected): set_http_connect(*statuses) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'HEAD'}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status[:len(str(expected))], str(expected)) if expected < 400: self.assertTrue('x-works' in res.headers) self.assertEqual(res.headers['x-works'], 'yes') self.assertTrue('accept-ranges' in res.headers) self.assertEqual(res.headers['accept-ranges'], 'bytes') test_status_map((200, 200, 200, 404, 404), 200) test_status_map((200, 200, 200, 500, 404), 200) test_status_map((200, 200, 304, 500, 404), 304) test_status_map((200, 200, 404, 404, 404), 404) test_status_map((200, 200, 404, 404, 500), 404) test_status_map((200, 200, 500, 500, 500), 503) def test_HEAD_newest(self): with save_globals(): def test_status_map(statuses, expected, timestamps, expected_timestamp): set_http_connect(*statuses, timestamps=timestamps) 
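                # With 'X-Newest: true' the proxy is expected to poll all the
                # primary object nodes and serve the response carrying the
                # newest timestamp, so expected_timestamp below is the max of
                # the three object timestamps (or None when no backend
                # reports one).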
self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'HEAD'}, headers={'x-newest': 'true'}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status[:len(str(expected))], str(expected)) self.assertEqual(res.headers.get('last-modified'), expected_timestamp) # acct cont obj obj obj test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '1', '2', '3'), '3') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '1', '3', '2'), '3') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '1', '3', '1'), '3') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '3', '3', '1'), '3') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', None, None, None), None) test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', None, None, '1'), '1') test_status_map((200, 200, 404, 404, 200), 200, ('0', '0', None, None, '1'), '1') def test_GET_newest(self): with save_globals(): def test_status_map(statuses, expected, timestamps, expected_timestamp): set_http_connect(*statuses, timestamps=timestamps) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'GET'}, headers={'x-newest': 'true'}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status[:len(str(expected))], str(expected)) self.assertEqual(res.headers.get('last-modified'), expected_timestamp) test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '1', '2', '3'), '3') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '1', '3', '2'), '3') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '1', '3', '1'), '3') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '3', '3', '1'), '3') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', None, None, None), None) test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', None, None, '1'), '1') with save_globals(): def test_status_map(statuses, expected, timestamps, expected_timestamp): set_http_connect(*statuses, timestamps=timestamps) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'HEAD'}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status[:len(str(expected))], str(expected)) self.assertEqual(res.headers.get('last-modified'), expected_timestamp) test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '1', '2', '3'), '1') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '1', '3', '2'), '1') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '1', '3', '1'), '1') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', '3', '3', '1'), '3') test_status_map((200, 200, 200, 200, 200), 200, ('0', '0', None, '1', '2'), None) def test_POST_meta_val_len(self): with save_globals(): limit = constraints.MAX_META_VALUE_LENGTH self.app.object_post_as_copy = False ReplicatedObjectController( self.app, 'account', 'container', 'object') set_http_connect(200, 200, 202, 202, 202) # acct cont obj obj obj req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'foo/bar', 'X-Object-Meta-Foo': 'x' * limit}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 202) set_http_connect(202, 202, 202) req = Request.blank( '/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'foo/bar', 'X-Object-Meta-Foo': 'x' * (limit + 1)}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) def test_POST_as_copy_meta_val_len(self): with 
save_globals(): limit = constraints.MAX_META_VALUE_LENGTH set_http_connect(200, 200, 200, 200, 200, 202, 202, 202) # acct cont objc objc objc obj obj obj req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'foo/bar', 'X-Object-Meta-Foo': 'x' * limit}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 202) set_http_connect(202, 202, 202) req = Request.blank( '/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'foo/bar', 'X-Object-Meta-Foo': 'x' * (limit + 1)}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) def test_POST_meta_key_len(self): with save_globals(): limit = constraints.MAX_META_NAME_LENGTH self.app.object_post_as_copy = False set_http_connect(200, 200, 202, 202, 202) # acct cont obj obj obj req = Request.blank( '/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'foo/bar', ('X-Object-Meta-' + 'x' * limit): 'x'}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 202) set_http_connect(202, 202, 202) req = Request.blank( '/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'foo/bar', ('X-Object-Meta-' + 'x' * (limit + 1)): 'x'}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) def test_POST_as_copy_meta_key_len(self): with save_globals(): limit = constraints.MAX_META_NAME_LENGTH set_http_connect(200, 200, 200, 200, 200, 202, 202, 202) # acct cont objc objc objc obj obj obj req = Request.blank( '/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'foo/bar', ('X-Object-Meta-' + 'x' * limit): 'x'}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 202) set_http_connect(202, 202, 202) req = Request.blank( '/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'foo/bar', ('X-Object-Meta-' + 'x' * (limit + 1)): 'x'}) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) def test_POST_meta_count(self): with save_globals(): limit = constraints.MAX_META_COUNT headers = dict( (('X-Object-Meta-' + str(i), 'a') for i in range(limit + 1))) headers.update({'Content-Type': 'foo/bar'}) set_http_connect(202, 202, 202) req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers=headers) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) def test_POST_meta_size(self): with save_globals(): limit = constraints.MAX_META_OVERALL_SIZE count = limit / 256 # enough to cause the limit to be reached headers = dict( (('X-Object-Meta-' + str(i), 'a' * 256) for i in range(count + 1))) headers.update({'Content-Type': 'foo/bar'}) set_http_connect(202, 202, 202) req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'POST'}, headers=headers) self.app.update_request(req) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) def test_PUT_not_autodetect_content_type(self): with save_globals(): headers = {'Content-Type': 'something/right', 'Content-Length': 0} it_worked = [] def verify_content_type(ipaddr, port, device, partition, method, path, headers=None, query_string=None): if path == '/a/c/o.html': it_worked.append( headers['Content-Type'].startswith('something/right')) set_http_connect(204, 204, 201, 201, 201, give_connect=verify_content_type) req = Request.blank('/v1/a/c/o.html', {'REQUEST_METHOD': 'PUT'}, headers=headers) self.app.update_request(req) 
req.get_response(self.app) self.assertNotEqual(it_worked, []) self.assertTrue(all(it_worked)) def test_PUT_autodetect_content_type(self): with save_globals(): headers = {'Content-Type': 'something/wrong', 'Content-Length': 0, 'X-Detect-Content-Type': 'True'} it_worked = [] def verify_content_type(ipaddr, port, device, partition, method, path, headers=None, query_string=None): if path == '/a/c/o.html': it_worked.append( headers['Content-Type'].startswith('text/html')) set_http_connect(204, 204, 201, 201, 201, give_connect=verify_content_type) req = Request.blank('/v1/a/c/o.html', {'REQUEST_METHOD': 'PUT'}, headers=headers) self.app.update_request(req) req.get_response(self.app) self.assertNotEqual(it_worked, []) self.assertTrue(all(it_worked)) def test_client_timeout(self): with save_globals(): self.app.account_ring.get_nodes('account') for dev in self.app.account_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 self.app.container_ring.get_nodes('account') for dev in self.app.container_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 object_ring = self.app.get_object_ring(None) object_ring.get_nodes('account') for dev in object_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 class SlowBody(object): def __init__(self): self.sent = 0 def read(self, size=-1): if self.sent < 4: sleep(0.1) self.sent += 1 return ' ' return '' req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': SlowBody()}, headers={'Content-Length': '4', 'Content-Type': 'text/plain'}) self.app.update_request(req) set_http_connect(200, 200, 201, 201, 201) # acct cont obj obj obj resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) self.app.client_timeout = 0.05 req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': SlowBody()}, headers={'Content-Length': '4', 'Content-Type': 'text/plain'}) self.app.update_request(req) set_http_connect(201, 201, 201) # obj obj obj resp = req.get_response(self.app) self.assertEqual(resp.status_int, 408) def test_client_disconnect(self): with save_globals(): self.app.account_ring.get_nodes('account') for dev in self.app.account_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 self.app.container_ring.get_nodes('account') for dev in self.app.container_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 object_ring = self.app.get_object_ring(None) object_ring.get_nodes('account') for dev in object_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 class DisconnectedBody(object): def __init__(self): self.sent = 0 def read(self, size=-1): return '' req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': DisconnectedBody()}, headers={'Content-Length': '4', 'Content-Type': 'text/plain'}) self.app.update_request(req) set_http_connect(200, 200, 201, 201, 201) # acct cont obj obj obj resp = req.get_response(self.app) self.assertEqual(resp.status_int, 499) def test_node_read_timeout(self): with save_globals(): self.app.account_ring.get_nodes('account') for dev in self.app.account_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 self.app.container_ring.get_nodes('account') for dev in self.app.container_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 object_ring = self.app.get_object_ring(None) object_ring.get_nodes('account') for dev in object_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'GET'}) self.app.update_request(req) set_http_connect(200, 200, 200, slow=0.1) req.sent_size = 0 resp = req.get_response(self.app) got_exc = False try: 
resp.body except ChunkReadTimeout: got_exc = True self.assertTrue(not got_exc) self.app.recoverable_node_timeout = 0.1 set_http_connect(200, 200, 200, slow=1.0) resp = req.get_response(self.app) got_exc = False try: resp.body except ChunkReadTimeout: got_exc = True self.assertTrue(got_exc) def test_node_read_timeout_retry(self): with save_globals(): self.app.account_ring.get_nodes('account') for dev in self.app.account_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 self.app.container_ring.get_nodes('account') for dev in self.app.container_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 object_ring = self.app.get_object_ring(None) object_ring.get_nodes('account') for dev in object_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'GET'}) self.app.update_request(req) self.app.recoverable_node_timeout = 0.1 set_http_connect(200, 200, 200, slow=[1.0, 1.0, 1.0]) resp = req.get_response(self.app) got_exc = False try: self.assertEqual('', resp.body) except ChunkReadTimeout: got_exc = True self.assertTrue(got_exc) set_http_connect(200, 200, 200, body='lalala', slow=[1.0, 1.0]) resp = req.get_response(self.app) got_exc = False try: self.assertEqual(resp.body, 'lalala') except ChunkReadTimeout: got_exc = True self.assertTrue(not got_exc) set_http_connect(200, 200, 200, body='lalala', slow=[1.0, 1.0], etags=['a', 'a', 'a']) resp = req.get_response(self.app) got_exc = False try: self.assertEqual(resp.body, 'lalala') except ChunkReadTimeout: got_exc = True self.assertTrue(not got_exc) set_http_connect(200, 200, 200, body='lalala', slow=[1.0, 1.0], etags=['a', 'b', 'a']) resp = req.get_response(self.app) got_exc = False try: self.assertEqual(resp.body, 'lalala') except ChunkReadTimeout: got_exc = True self.assertTrue(not got_exc) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'GET'}) set_http_connect(200, 200, 200, body='lalala', slow=[1.0, 1.0], etags=['a', 'b', 'b']) resp = req.get_response(self.app) got_exc = False try: resp.body except ChunkReadTimeout: got_exc = True self.assertTrue(got_exc) def test_node_write_timeout(self): with save_globals(): self.app.account_ring.get_nodes('account') for dev in self.app.account_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 self.app.container_ring.get_nodes('account') for dev in self.app.container_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 object_ring = self.app.get_object_ring(None) object_ring.get_nodes('account') for dev in object_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '4', 'Content-Type': 'text/plain'}, body=' ') self.app.update_request(req) set_http_connect(200, 200, 201, 201, 201, slow=0.1) resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) self.app.node_timeout = 0.1 req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '4', 'Content-Type': 'text/plain'}, body=' ') self.app.update_request(req) set_http_connect(201, 201, 201, slow=1.0) resp = req.get_response(self.app) self.assertEqual(resp.status_int, 503) def test_node_request_setting(self): baseapp = proxy_server.Application({'request_node_count': '3'}, FakeMemcache(), container_ring=FakeRing(), account_ring=FakeRing()) self.assertEqual(baseapp.request_node_count(3), 3) def test_iter_nodes(self): with save_globals(): try: object_ring = self.app.get_object_ring(None) object_ring.max_more_nodes = 2 partition, nodes = 
object_ring.get_nodes('account', 'container', 'object') collected_nodes = [] for node in self.app.iter_nodes(object_ring, partition): collected_nodes.append(node) self.assertEqual(len(collected_nodes), 5) object_ring.max_more_nodes = 6 self.app.request_node_count = lambda r: 20 partition, nodes = object_ring.get_nodes('account', 'container', 'object') collected_nodes = [] for node in self.app.iter_nodes(object_ring, partition): collected_nodes.append(node) self.assertEqual(len(collected_nodes), 9) # zero error-limited primary nodes -> no handoff warnings self.app.log_handoffs = True self.app.logger = FakeLogger() self.app.request_node_count = lambda r: 7 object_ring.max_more_nodes = 20 partition, nodes = object_ring.get_nodes('account', 'container', 'object') collected_nodes = [] for node in self.app.iter_nodes(object_ring, partition): collected_nodes.append(node) self.assertEqual(len(collected_nodes), 7) self.assertEqual(self.app.logger.log_dict['warning'], []) self.assertEqual(self.app.logger.get_increments(), []) # one error-limited primary node -> one handoff warning self.app.log_handoffs = True self.app.logger = FakeLogger() self.app.request_node_count = lambda r: 7 self.app._error_limiting = {} # clear out errors set_node_errors(self.app, object_ring._devs[0], 999, last_error=(2 ** 63 - 1)) collected_nodes = [] for node in self.app.iter_nodes(object_ring, partition): collected_nodes.append(node) self.assertEqual(len(collected_nodes), 7) self.assertEqual(self.app.logger.log_dict['warning'], [ (('Handoff requested (5)',), {})]) self.assertEqual(self.app.logger.get_increments(), ['handoff_count']) # two error-limited primary nodes -> two handoff warnings self.app.log_handoffs = True self.app.logger = FakeLogger() self.app.request_node_count = lambda r: 7 self.app._error_limiting = {} # clear out errors for i in range(2): set_node_errors(self.app, object_ring._devs[i], 999, last_error=(2 ** 63 - 1)) collected_nodes = [] for node in self.app.iter_nodes(object_ring, partition): collected_nodes.append(node) self.assertEqual(len(collected_nodes), 7) self.assertEqual(self.app.logger.log_dict['warning'], [ (('Handoff requested (5)',), {}), (('Handoff requested (6)',), {})]) self.assertEqual(self.app.logger.get_increments(), ['handoff_count', 'handoff_count']) # all error-limited primary nodes -> four handoff warnings, # plus a handoff-all metric self.app.log_handoffs = True self.app.logger = FakeLogger() self.app.request_node_count = lambda r: 10 object_ring.set_replicas(4) # otherwise we run out of handoffs self.app._error_limiting = {} # clear out errors for i in range(4): set_node_errors(self.app, object_ring._devs[i], 999, last_error=(2 ** 63 - 1)) collected_nodes = [] for node in self.app.iter_nodes(object_ring, partition): collected_nodes.append(node) self.assertEqual(len(collected_nodes), 10) self.assertEqual(self.app.logger.log_dict['warning'], [ (('Handoff requested (7)',), {}), (('Handoff requested (8)',), {}), (('Handoff requested (9)',), {}), (('Handoff requested (10)',), {})]) self.assertEqual(self.app.logger.get_increments(), ['handoff_count', 'handoff_count', 'handoff_count', 'handoff_count', 'handoff_all_count']) finally: object_ring.max_more_nodes = 0 def test_iter_nodes_calls_sort_nodes(self): with mock.patch.object(self.app, 'sort_nodes') as sort_nodes: object_ring = self.app.get_object_ring(None) for node in self.app.iter_nodes(object_ring, 0): pass sort_nodes.assert_called_once_with( object_ring.get_part_nodes(0)) def test_iter_nodes_skips_error_limited(self): with 
mock.patch.object(self.app, 'sort_nodes', lambda n: n): object_ring = self.app.get_object_ring(None) first_nodes = list(self.app.iter_nodes(object_ring, 0)) second_nodes = list(self.app.iter_nodes(object_ring, 0)) self.assertTrue(first_nodes[0] in second_nodes) self.app.error_limit(first_nodes[0], 'test') second_nodes = list(self.app.iter_nodes(object_ring, 0)) self.assertTrue(first_nodes[0] not in second_nodes) def test_iter_nodes_gives_extra_if_error_limited_inline(self): object_ring = self.app.get_object_ring(None) with mock.patch.object(self.app, 'sort_nodes', lambda n: n), \ mock.patch.object(self.app, 'request_node_count', lambda r: 6), \ mock.patch.object(object_ring, 'max_more_nodes', 99): first_nodes = list(self.app.iter_nodes(object_ring, 0)) second_nodes = [] for node in self.app.iter_nodes(object_ring, 0): if not second_nodes: self.app.error_limit(node, 'test') second_nodes.append(node) self.assertEqual(len(first_nodes), 6) self.assertEqual(len(second_nodes), 7) def test_iter_nodes_with_custom_node_iter(self): object_ring = self.app.get_object_ring(None) node_list = [dict(id=n, ip='1.2.3.4', port=n, device='D') for n in range(10)] with mock.patch.object(self.app, 'sort_nodes', lambda n: n), \ mock.patch.object(self.app, 'request_node_count', lambda r: 3): got_nodes = list(self.app.iter_nodes(object_ring, 0, node_iter=iter(node_list))) self.assertEqual(node_list[:3], got_nodes) with mock.patch.object(self.app, 'sort_nodes', lambda n: n), \ mock.patch.object(self.app, 'request_node_count', lambda r: 1000000): got_nodes = list(self.app.iter_nodes(object_ring, 0, node_iter=iter(node_list))) self.assertEqual(node_list, got_nodes) def test_best_response_sets_headers(self): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = controller.best_response(req, [200] * 3, ['OK'] * 3, [''] * 3, 'Object', headers=[{'X-Test': '1'}, {'X-Test': '2'}, {'X-Test': '3'}]) self.assertEqual(resp.headers['X-Test'], '1') def test_best_response_sets_etag(self): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = controller.best_response(req, [200] * 3, ['OK'] * 3, [''] * 3, 'Object') self.assertEqual(resp.etag, None) resp = controller.best_response(req, [200] * 3, ['OK'] * 3, [''] * 3, 'Object', etag='68b329da9893e34099c7d8ad5cb9c940' ) self.assertEqual(resp.etag, '68b329da9893e34099c7d8ad5cb9c940') def test_proxy_passes_content_type(self): with save_globals(): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'GET'}) self.app.update_request(req) set_http_connect(200, 200, 200) resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'x-application/test') set_http_connect(200, 200, 200) resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_length, 0) set_http_connect(200, 200, 200, slow=True) resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_length, 4) def test_proxy_passes_content_length_on_head(self): with save_globals(): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) self.app.update_request(req) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') set_http_connect(200, 200, 200) resp = controller.HEAD(req) self.assertEqual(resp.status_int, 200) 
self.assertEqual(resp.content_length, 0) set_http_connect(200, 200, 200, slow=True) resp = controller.HEAD(req) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_length, 4) def test_error_limiting(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') controller.app.sort_nodes = lambda l: l object_ring = controller.app.get_object_ring(None) self.assert_status_map(controller.HEAD, (200, 200, 503, 200, 200), 200) self.assertEqual( node_error_count(controller.app, object_ring.devs[0]), 2) self.assertTrue( node_last_error(controller.app, object_ring.devs[0]) is not None) for _junk in range(self.app.error_suppression_limit): self.assert_status_map(controller.HEAD, (200, 200, 503, 503, 503), 503) self.assertEqual( node_error_count(controller.app, object_ring.devs[0]), self.app.error_suppression_limit + 1) self.assert_status_map(controller.HEAD, (200, 200, 200, 200, 200), 503) self.assertTrue( node_last_error(controller.app, object_ring.devs[0]) is not None) self.assert_status_map(controller.PUT, (200, 200, 200, 201, 201, 201), 503) self.assert_status_map(controller.POST, (200, 200, 200, 200, 200, 200, 202, 202, 202), 503) self.assert_status_map(controller.DELETE, (200, 200, 200, 204, 204, 204), 503) self.app.error_suppression_interval = -300 self.assert_status_map(controller.HEAD, (200, 200, 200, 200, 200), 200) self.assertRaises(BaseException, self.assert_status_map, controller.DELETE, (200, 200, 200, 204, 204, 204), 503, raise_exc=True) def test_error_limiting_survives_ring_reload(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') controller.app.sort_nodes = lambda l: l object_ring = controller.app.get_object_ring(None) self.assert_status_map(controller.HEAD, (200, 200, 503, 200, 200), 200) self.assertEqual( node_error_count(controller.app, object_ring.devs[0]), 2) self.assertTrue( node_last_error(controller.app, object_ring.devs[0]) is not None) for _junk in range(self.app.error_suppression_limit): self.assert_status_map(controller.HEAD, (200, 200, 503, 503, 503), 503) self.assertEqual( node_error_count(controller.app, object_ring.devs[0]), self.app.error_suppression_limit + 1) # wipe out any state in the ring for policy in POLICIES: policy.object_ring = FakeRing(base_port=3000) # and we still get an error, which proves that the # error-limiting info survived a ring reload self.assert_status_map(controller.HEAD, (200, 200, 200, 200, 200), 503) def test_PUT_error_limiting(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') controller.app.sort_nodes = lambda l: l object_ring = controller.app.get_object_ring(None) # acc con obj obj obj self.assert_status_map(controller.PUT, (200, 200, 503, 200, 200), 200) # 2, not 1, because assert_status_map() calls the method twice odevs = object_ring.devs self.assertEqual(node_error_count(controller.app, odevs[0]), 2) self.assertEqual(node_error_count(controller.app, odevs[1]), 0) self.assertEqual(node_error_count(controller.app, odevs[2]), 0) self.assertTrue( node_last_error(controller.app, odevs[0]) is not None) self.assertTrue(node_last_error(controller.app, odevs[1]) is None) self.assertTrue(node_last_error(controller.app, odevs[2]) is None) def test_PUT_error_limiting_last_node(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') controller.app.sort_nodes = lambda l: l object_ring = 
controller.app.get_object_ring(None) # acc con obj obj obj self.assert_status_map(controller.PUT, (200, 200, 200, 200, 503), 200) # 2, not 1, because assert_status_map() calls the method twice odevs = object_ring.devs self.assertEqual(node_error_count(controller.app, odevs[0]), 0) self.assertEqual(node_error_count(controller.app, odevs[1]), 0) self.assertEqual(node_error_count(controller.app, odevs[2]), 2) self.assertTrue(node_last_error(controller.app, odevs[0]) is None) self.assertTrue(node_last_error(controller.app, odevs[1]) is None) self.assertTrue( node_last_error(controller.app, odevs[2]) is not None) def test_acc_or_con_missing_returns_404(self): with save_globals(): self.app.memcache = FakeMemcacheReturnsNone() self.app._error_limiting = {} controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') set_http_connect(200, 200, 200, 200, 200, 200) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) self.app.update_request(req) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 200) set_http_connect(404, 404, 404) # acct acct acct # make sure to use a fresh request without cached env req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 404) set_http_connect(503, 404, 404) # acct acct acct # make sure to use a fresh request without cached env req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 404) set_http_connect(503, 503, 404) # acct acct acct # make sure to use a fresh request without cached env req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 404) set_http_connect(503, 503, 503) # acct acct acct # make sure to use a fresh request without cached env req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 404) set_http_connect(200, 200, 204, 204, 204) # acct cont obj obj obj # make sure to use a fresh request without cached env req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 204) set_http_connect(200, 404, 404, 404) # acct cont cont cont # make sure to use a fresh request without cached env req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 404) set_http_connect(200, 503, 503, 503) # acct cont cont cont # make sure to use a fresh request without cached env req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 404) for dev in self.app.account_ring.devs: set_node_errors( self.app, dev, self.app.error_suppression_limit + 1, time.time()) set_http_connect(200) # acct [isn't actually called since everything # is error limited] # make sure to use a fresh request without cached env req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 404) for dev in self.app.account_ring.devs: set_node_errors(self.app, dev, 0, last_error=None) for dev in self.app.container_ring.devs: set_node_errors(self.app, dev, self.app.error_suppression_limit + 1, time.time()) set_http_connect(200, 200) # 
acct cont [isn't actually called since # everything is error limited] # make sure to use a fresh request without cached env req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}) resp = getattr(controller, 'DELETE')(req) self.assertEqual(resp.status_int, 404) def test_PUT_POST_requires_container_exist(self): with save_globals(): self.app.object_post_as_copy = False self.app.memcache = FakeMemcacheReturnsNone() controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') set_http_connect(200, 404, 404, 404, 200, 200, 200) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 404) set_http_connect(200, 404, 404, 404, 200, 200) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'text/plain'}) self.app.update_request(req) resp = controller.POST(req) self.assertEqual(resp.status_int, 404) def test_PUT_POST_as_copy_requires_container_exist(self): with save_globals(): self.app.memcache = FakeMemcacheReturnsNone() controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') set_http_connect(200, 404, 404, 404, 200, 200, 200) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 404) set_http_connect(200, 404, 404, 404, 200, 200, 200, 200, 200, 200) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'text/plain'}) self.app.update_request(req) resp = controller.POST(req) self.assertEqual(resp.status_int, 404) def test_bad_metadata(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') set_http_connect(200, 200, 201, 201, 201) # acct cont obj obj obj req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0'}) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Object-Meta-' + ( 'a' * constraints.MAX_META_NAME_LENGTH): 'v'}) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'Content-Length': '0', 'X-Object-Meta-' + ( 'a' * (constraints.MAX_META_NAME_LENGTH + 1)): 'v'}) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 400) set_http_connect(201, 201, 201) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Object-Meta-Too-Long': 'a' * constraints.MAX_META_VALUE_LENGTH}) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Object-Meta-Too-Long': 'a' * (constraints.MAX_META_VALUE_LENGTH + 1)}) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 400) set_http_connect(201, 201, 201) headers = {'Content-Length': '0'} for x in range(constraints.MAX_META_COUNT): headers['X-Object-Meta-%d' % x] = 'v' req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=headers) self.app.update_request(req) resp = 
controller.PUT(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) headers = {'Content-Length': '0'} for x in range(constraints.MAX_META_COUNT + 1): headers['X-Object-Meta-%d' % x] = 'v' req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=headers) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 400) set_http_connect(201, 201, 201) headers = {'Content-Length': '0'} header_value = 'a' * constraints.MAX_META_VALUE_LENGTH size = 0 x = 0 while size < constraints.MAX_META_OVERALL_SIZE - 4 - \ constraints.MAX_META_VALUE_LENGTH: size += 4 + constraints.MAX_META_VALUE_LENGTH headers['X-Object-Meta-%04d' % x] = header_value x += 1 if constraints.MAX_META_OVERALL_SIZE - size > 1: headers['X-Object-Meta-a'] = \ 'a' * (constraints.MAX_META_OVERALL_SIZE - size - 1) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=headers) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) headers['X-Object-Meta-a'] = \ 'a' * (constraints.MAX_META_OVERALL_SIZE - size) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=headers) self.app.update_request(req) resp = controller.PUT(req) self.assertEqual(resp.status_int, 400) @contextmanager def controller_context(self, req, *args, **kwargs): _v, account, container, obj = utils.split_path(req.path, 4, 4, True) controller = ReplicatedObjectController( self.app, account, container, obj) self.app.update_request(req) self.app.memcache.store = {} with save_globals(): new_connect = set_http_connect(*args, **kwargs) yield controller unused_status_list = [] while True: try: unused_status_list.append(next(new_connect.code_iter)) except StopIteration: break if unused_status_list: raise self.fail('UN-USED STATUS CODES: %r' % unused_status_list) def test_basic_put_with_x_copy_from(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': 'c/o'}) status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o') def test_basic_put_with_x_copy_from_account(self): req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': 'c/o', 'X-Copy-From-Account': 'a'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acc1 con1 objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_basic_put_with_x_copy_from_across_container(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': 'c2/o'}) status_list = (200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont conc objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c2/o') def test_basic_put_with_x_copy_from_across_container_and_account(self): req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 
'X-Copy-From': 'c2/o', 'X-Copy-From-Account': 'a'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acc1 con1 objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c2/o') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_copy_non_zero_content_length(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '5', 'X-Copy-From': 'c/o'}) status_list = (200, 200) # acct cont with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 400) def test_copy_non_zero_content_length_with_account(self): req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '5', 'X-Copy-From': 'c/o', 'X-Copy-From-Account': 'a'}) status_list = (200, 200) # acct cont with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 400) def test_copy_with_slashes_in_x_copy_from(self): # extra source path parsing req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': 'c/o/o2'}) status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o/o2') def test_copy_with_slashes_in_x_copy_from_and_account(self): # extra source path parsing req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': 'c/o/o2', 'X-Copy-From-Account': 'a'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acc1 con1 objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o/o2') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_copy_with_spaces_in_x_copy_from(self): # space in source path req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': 'c/o%20o2'}) status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o%20o2') def test_copy_with_spaces_in_x_copy_from_and_account(self): # space in source path req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': 'c/o%20o2', 'X-Copy-From-Account': 'a'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acc1 con1 objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o%20o2') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_copy_with_leading_slash_in_x_copy_from(self): # repeat tests with leading / req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o'}) status_list = (200, 200, 200, 200, 200, 201, 
201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o') def test_copy_with_leading_slash_in_x_copy_from_and_account(self): # repeat tests with leading / req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o', 'X-Copy-From-Account': 'a'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acc1 con1 objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_copy_with_leading_slash_and_slashes_in_x_copy_from(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o/o2'}) status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o/o2') def test_copy_with_leading_slash_and_slashes_in_x_copy_from_acct(self): req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o/o2', 'X-Copy-From-Account': 'a'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acc1 con1 objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o/o2') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_copy_with_no_object_in_x_copy_from(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c'}) status_list = (200, 200) # acct cont with self.controller_context(req, *status_list) as controller: try: controller.PUT(req) except HTTPException as resp: self.assertEqual(resp.status_int // 100, 4) # client error else: raise self.fail('Invalid X-Copy-From did not raise ' 'client error') def test_copy_with_no_object_in_x_copy_from_and_account(self): req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c', 'X-Copy-From-Account': 'a'}) status_list = (200, 200) # acct cont with self.controller_context(req, *status_list) as controller: try: controller.PUT(req) except HTTPException as resp: self.assertEqual(resp.status_int // 100, 4) # client error else: raise self.fail('Invalid X-Copy-From did not raise ' 'client error') def test_copy_server_error_reading_source(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o'}) status_list = (200, 200, 503, 503, 503) # acct cont objc objc objc with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 503) def test_copy_server_error_reading_source_and_account(self): req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o', 'X-Copy-From-Account': 'a'}) status_list = (200, 200, 200, 200, 503, 503, 503) # acct cont acct cont objc objc objc with 
self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 503) def test_copy_not_found_reading_source(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o'}) # not found status_list = (200, 200, 404, 404, 404) # acct cont objc objc objc with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 404) def test_copy_not_found_reading_source_and_account(self): req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o', 'X-Copy-From-Account': 'a'}) # not found status_list = (200, 200, 200, 200, 404, 404, 404) # acct cont acct cont objc objc objc with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 404) def test_copy_with_some_missing_sources(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o'}) status_list = (200, 200, 404, 404, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) def test_copy_with_some_missing_sources_and_account(self): req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o', 'X-Copy-From-Account': 'a'}) status_list = (200, 200, 200, 200, 404, 404, 200, 201, 201, 201) # acct cont acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) def test_copy_with_object_metadata(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o', 'X-Object-Meta-Ours': 'okay'}) # test object metadata status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers.get('x-object-meta-test'), 'testing') self.assertEqual(resp.headers.get('x-object-meta-ours'), 'okay') self.assertEqual(resp.headers.get('x-delete-at'), '9876543210') def test_copy_with_object_metadata_and_account(self): req = Request.blank('/v1/a1/c1/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o', 'X-Object-Meta-Ours': 'okay', 'X-Copy-From-Account': 'a'}) # test object metadata status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.PUT(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers.get('x-object-meta-test'), 'testing') self.assertEqual(resp.headers.get('x-object-meta-ours'), 'okay') self.assertEqual(resp.headers.get('x-delete-at'), '9876543210') @_limit_max_file_size def test_copy_source_larger_than_max_file_size(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0', 'X-Copy-From': '/c/o'}) # copy-from object is too large to fit in target object class LargeResponseBody(object): def __len__(self): return constraints.MAX_FILE_SIZE + 1 def __getitem__(self, key): return '' copy_from_obj_body = 
LargeResponseBody() status_list = (200, 200, 200, 200, 200) # acct cont objc objc objc kwargs = dict(body=copy_from_obj_body) with self.controller_context(req, *status_list, **kwargs) as controller: self.app.update_request(req) self.app.memcache.store = {} try: resp = controller.PUT(req) except HTTPException as resp: pass self.assertEqual(resp.status_int, 413) def test_basic_COPY(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': 'c/o2'}) status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o') def test_basic_COPY_account(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': 'c1/o2', 'Destination-Account': 'a1'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_COPY_across_containers(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': 'c2/o'}) status_list = (200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont c2 objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o') def test_COPY_source_with_slashes_in_name(self): req = Request.blank('/v1/a/c/o/o2', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': 'c/o'}) status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o/o2') def test_COPY_account_source_with_slashes_in_name(self): req = Request.blank('/v1/a/c/o/o2', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': 'c1/o', 'Destination-Account': 'a1'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o/o2') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_COPY_destination_leading_slash(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c/o'}) status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o') def test_COPY_account_destination_leading_slash(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c1/o', 'Destination-Account': 'a1'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) 
self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_COPY_source_with_slashes_destination_leading_slash(self): req = Request.blank('/v1/a/c/o/o2', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c/o'}) status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o/o2') def test_COPY_account_source_with_slashes_destination_leading_slash(self): req = Request.blank('/v1/a/c/o/o2', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c1/o', 'Destination-Account': 'a1'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from'], 'c/o/o2') self.assertEqual(resp.headers['x-copied-from-account'], 'a') def test_COPY_no_object_in_destination(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': 'c_o'}) status_list = [] # no requests needed with self.controller_context(req, *status_list) as controller: self.assertRaises(HTTPException, controller.COPY, req) def test_COPY_account_no_object_in_destination(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': 'c_o', 'Destination-Account': 'a1'}) status_list = [] # no requests needed with self.controller_context(req, *status_list) as controller: self.assertRaises(HTTPException, controller.COPY, req) def test_COPY_server_error_reading_source(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c/o'}) status_list = (200, 200, 503, 503, 503) # acct cont objc objc objc with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 503) def test_COPY_account_server_error_reading_source(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c1/o', 'Destination-Account': 'a1'}) status_list = (200, 200, 200, 200, 503, 503, 503) # acct cont acct cont objc objc objc with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 503) def test_COPY_not_found_reading_source(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c/o'}) status_list = (200, 200, 404, 404, 404) # acct cont objc objc objc with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 404) def test_COPY_account_not_found_reading_source(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c1/o', 'Destination-Account': 'a1'}) status_list = (200, 200, 200, 200, 404, 404, 404) # acct cont acct cont objc objc objc with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 404) def test_COPY_with_some_missing_sources(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c/o'}) status_list = (200, 200, 404, 404, 200, 201, 201, 201) # acct cont objc objc 
objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) def test_COPY_account_with_some_missing_sources(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c1/o', 'Destination-Account': 'a1'}) status_list = (200, 200, 200, 200, 404, 404, 200, 201, 201, 201) # acct cont acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) def test_COPY_with_metadata(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c/o', 'X-Object-Meta-Ours': 'okay'}) status_list = (200, 200, 200, 200, 200, 201, 201, 201) # acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers.get('x-object-meta-test'), 'testing') self.assertEqual(resp.headers.get('x-object-meta-ours'), 'okay') self.assertEqual(resp.headers.get('x-delete-at'), '9876543210') def test_COPY_account_with_metadata(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c1/o', 'X-Object-Meta-Ours': 'okay', 'Destination-Account': 'a1'}) status_list = (200, 200, 200, 200, 200, 200, 200, 201, 201, 201) # acct cont acct cont objc objc objc obj obj obj with self.controller_context(req, *status_list) as controller: resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers.get('x-object-meta-test'), 'testing') self.assertEqual(resp.headers.get('x-object-meta-ours'), 'okay') self.assertEqual(resp.headers.get('x-delete-at'), '9876543210') @_limit_max_file_size def test_COPY_source_larger_than_max_file_size(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c/o'}) class LargeResponseBody(object): def __len__(self): return constraints.MAX_FILE_SIZE + 1 def __getitem__(self, key): return '' copy_from_obj_body = LargeResponseBody() status_list = (200, 200, 200, 200, 200) # acct cont objc objc objc kwargs = dict(body=copy_from_obj_body) with self.controller_context(req, *status_list, **kwargs) as controller: try: resp = controller.COPY(req) except HTTPException as resp: pass self.assertEqual(resp.status_int, 413) @_limit_max_file_size def test_COPY_account_source_larger_than_max_file_size(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c1/o', 'Destination-Account': 'a1'}) class LargeResponseBody(object): def __len__(self): return constraints.MAX_FILE_SIZE + 1 def __getitem__(self, key): return '' copy_from_obj_body = LargeResponseBody() status_list = (200, 200, 200, 200, 200) # acct cont objc objc objc kwargs = dict(body=copy_from_obj_body) with self.controller_context(req, *status_list, **kwargs) as controller: try: resp = controller.COPY(req) except HTTPException as resp: pass self.assertEqual(resp.status_int, 413) def test_COPY_newest(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c/o'}) req.account = 'a' controller.object_name = 'o' set_http_connect(200, 200, 200, 200, 200, 201, 201, 201, # act cont objc objc objc obj obj obj timestamps=('1', '1', '1', '3', '2', '4', '4', '4')) self.app.memcache.store = {} 
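# The three copy-source object servers report timestamps '1', '3' and '2'; the
# test expects the proxy to read the copy source from the newest replica, so
# '3' should be echoed back in X-Copied-From-Last-Modified below.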
resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from-last-modified'], '3') def test_COPY_account_newest(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c1/o', 'Destination-Account': 'a1'}) req.account = 'a' controller.object_name = 'o' set_http_connect(200, 200, 200, 200, 200, 200, 200, 201, 201, 201, # act cont acct cont objc objc objc obj obj obj timestamps=('1', '1', '1', '1', '3', '2', '1', '4', '4', '4')) self.app.memcache.store = {} resp = controller.COPY(req) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers['x-copied-from-last-modified'], '3') def test_COPY_delete_at(self): with save_globals(): backend_requests = [] def capture_requests(ipaddr, port, device, partition, method, path, headers=None, query_string=None): backend_requests.append((method, path, headers)) controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') set_http_connect(200, 200, 200, 200, 200, 201, 201, 201, give_connect=capture_requests) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c/o'}) self.app.update_request(req) resp = controller.COPY(req) self.assertEqual(201, resp.status_int) # sanity for method, path, given_headers in backend_requests: if method != 'PUT': continue self.assertEqual(given_headers.get('X-Delete-At'), '9876543210') self.assertTrue('X-Delete-At-Host' in given_headers) self.assertTrue('X-Delete-At-Device' in given_headers) self.assertTrue('X-Delete-At-Partition' in given_headers) self.assertTrue('X-Delete-At-Container' in given_headers) def test_COPY_account_delete_at(self): with save_globals(): backend_requests = [] def capture_requests(ipaddr, port, device, partition, method, path, headers=None, query_string=None): backend_requests.append((method, path, headers)) controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') set_http_connect(200, 200, 200, 200, 200, 200, 200, 201, 201, 201, give_connect=capture_requests) self.app.memcache.store = {} req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': '/c1/o', 'Destination-Account': 'a1'}) self.app.update_request(req) resp = controller.COPY(req) self.assertEqual(201, resp.status_int) # sanity for method, path, given_headers in backend_requests: if method != 'PUT': continue self.assertEqual(given_headers.get('X-Delete-At'), '9876543210') self.assertTrue('X-Delete-At-Host' in given_headers) self.assertTrue('X-Delete-At-Device' in given_headers) self.assertTrue('X-Delete-At-Partition' in given_headers) self.assertTrue('X-Delete-At-Container' in given_headers) def test_chunked_put(self): class ChunkedFile(object): def __init__(self, bytes): self.bytes = bytes self.read_bytes = 0 @property def bytes_left(self): return self.bytes - self.read_bytes def read(self, amt=None): if self.read_bytes >= self.bytes: raise StopIteration() if not amt: amt = self.bytes_left data = 'a' * min(amt, self.bytes_left) self.read_bytes += len(data) return data with save_globals(): set_http_connect(201, 201, 201, 201) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Transfer-Encoding': 'chunked', 'Content-Type': 'foo/bar'}) req.body_file = ChunkedFile(10) self.app.memcache.store = {} self.app.update_request(req) 
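# The 10-byte chunked body (no Content-Length) is streamed from ChunkedFile
# until it raises StopIteration and should be accepted; the 11-byte body below
# is expected to fail with 413 once MAX_FILE_SIZE is patched down to 10.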
res = controller.PUT(req) self.assertEqual(res.status_int // 100, 2) # success # test 413 entity too large set_http_connect(201, 201, 201, 201) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Transfer-Encoding': 'chunked', 'Content-Type': 'foo/bar'}) req.body_file = ChunkedFile(11) self.app.memcache.store = {} self.app.update_request(req) with mock.patch('swift.common.constraints.MAX_FILE_SIZE', 10): res = controller.PUT(req) self.assertEqual(res.status_int, 413) @unpatch_policies def test_chunked_put_bad_version(self): # Check bad version (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v0 HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nContent-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 412' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_chunked_put_bad_path(self): # Check bad path (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET invalid HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nContent-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 404' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_chunked_put_bad_utf8(self): # Check invalid utf-8 (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a%80 HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 412' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_chunked_put_bad_path_no_controller(self): # Check bad path, no controller (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1 HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 412' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_chunked_put_bad_method(self): # Check bad method (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('LICK /v1/a HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 405' self.assertEqual(headers[:len(exp)], exp) @unpatch_policies def test_chunked_put_unhandled_exception(self): # Check unhandled exception (prosrv, acc1srv, acc2srv, con1srv, con2srv, obj1srv, obj2srv, obj3srv) = _test_servers (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets orig_update_request = prosrv.update_request def broken_update_request(*args, **kwargs): raise Exception('fake: this should be printed') prosrv.update_request = broken_update_request sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('HEAD /v1/a HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 
'HTTP/1.1 500' self.assertEqual(headers[:len(exp)], exp) prosrv.update_request = orig_update_request @unpatch_policies def test_chunked_put_head_account(self): # Head account, just a double check and really is here to test # the part Application.log_request that 'enforces' a # content_length on the response. (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('HEAD /v1/a HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 204' self.assertEqual(headers[:len(exp)], exp) self.assertTrue('\r\nContent-Length: 0\r\n' in headers) @unpatch_policies def test_chunked_put_utf8_all_the_way_down(self): # Test UTF-8 Unicode all the way through the system ustr = '\xe1\xbc\xb8\xce\xbf\xe1\xbd\xba \xe1\xbc\xb0\xce' \ '\xbf\xe1\xbd\xbb\xce\x87 \xcf\x84\xe1\xbd\xb0 \xcf' \ '\x80\xe1\xbd\xb1\xce\xbd\xcf\x84\xca\xbc \xe1\xbc' \ '\x82\xce\xbd \xe1\xbc\x90\xce\xbe\xe1\xbd\xb5\xce' \ '\xba\xce\xbf\xce\xb9 \xcf\x83\xce\xb1\xcf\x86\xe1' \ '\xbf\x86.Test' ustr_short = '\xe1\xbc\xb8\xce\xbf\xe1\xbd\xbatest' # Create ustr container (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'Content-Length: 0\r\n\r\n' % quote(ustr)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # List account with ustr container (test plain) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) containers = fd.read().split('\n') self.assertTrue(ustr in containers) # List account with ustr container (test json) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a?format=json HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'X-Storage-Token: t\r\nContent-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) listing = json.loads(fd.read()) self.assertTrue(ustr.decode('utf8') in [l['name'] for l in listing]) # List account with ustr container (test xml) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a?format=xml HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'X-Storage-Token: t\r\nContent-Length: 0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) self.assertTrue('%s' % ustr in fd.read()) # Create ustr object with ustr metadata in ustr container sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'X-Object-Meta-%s: %s\r\nContent-Length: 0\r\n\r\n' % (quote(ustr), quote(ustr), quote(ustr_short), quote(ustr))) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # List ustr container with ustr object (test plain) sock = connect_tcp(('localhost', 
prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'Content-Length: 0\r\n\r\n' % quote(ustr)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) objects = fd.read().split('\n') self.assertTrue(ustr in objects) # List ustr container with ustr object (test json) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s?format=json HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'X-Storage-Token: t\r\nContent-Length: 0\r\n\r\n' % quote(ustr)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) listing = json.loads(fd.read()) self.assertEqual(listing[0]['name'], ustr.decode('utf8')) # List ustr container with ustr object (test xml) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s?format=xml HTTP/1.1\r\n' 'Host: localhost\r\nConnection: close\r\n' 'X-Storage-Token: t\r\nContent-Length: 0\r\n\r\n' % quote(ustr)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) self.assertTrue('%s' % ustr in fd.read()) # Retrieve ustr object with ustr metadata sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'Content-Length: 0\r\n\r\n' % (quote(ustr), quote(ustr))) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) self.assertTrue('\r\nX-Object-Meta-%s: %s\r\n' % (quote(ustr_short).lower(), quote(ustr)) in headers) @unpatch_policies def test_chunked_put_chunked_put(self): # Do chunked object put (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() # Also happens to assert that x-storage-token is taken as a # replacement for x-auth-token. 
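# The chunked body below uses hex chunk sizes: 2 ('oh'), 4 (' hai') and
# f (15 bytes, '123456789abcdef'), followed by the zero-length terminating
# chunk, so the stored object should read back as 'oh hai123456789abcdef'.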
fd.write('PUT /v1/a/c/o/chunky HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'Transfer-Encoding: chunked\r\n\r\n' '2\r\noh\r\n4\r\n hai\r\nf\r\n123456789abcdef\r\n' '0\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # Ensure we get what we put sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/c/o/chunky HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Auth-Token: t\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) body = fd.read() self.assertEqual(body, 'oh hai123456789abcdef') @unpatch_policies def test_conditional_range_get(self): (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets sock = connect_tcp(('localhost', prolis.getsockname()[1])) # make a container fd = sock.makefile() fd.write('PUT /v1/a/con HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'Content-Length: 0\r\n\r\n') fd.flush() exp = 'HTTP/1.1 201' headers = readuntil2crlfs(fd) self.assertEqual(headers[:len(exp)], exp) # put an object in it sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/con/o HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: 10\r\n' 'Content-Type: text/plain\r\n' '\r\n' 'abcdefghij\r\n') fd.flush() exp = 'HTTP/1.1 201' headers = readuntil2crlfs(fd) self.assertEqual(headers[:len(exp)], exp) # request with both If-None-Match and Range etag = md5("abcdefghij").hexdigest() sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/con/o HTTP/1.1\r\n' + 'Host: localhost\r\n' + 'Connection: close\r\n' + 'X-Storage-Token: t\r\n' + 'If-None-Match: "' + etag + '"\r\n' + 'Range: bytes=3-8\r\n' + '\r\n') fd.flush() exp = 'HTTP/1.1 304' headers = readuntil2crlfs(fd) self.assertEqual(headers[:len(exp)], exp) def test_mismatched_etags(self): with save_globals(): # no etag supplied, object servers return success w/ diff values controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '0'}) self.app.update_request(req) set_http_connect(200, 201, 201, 201, etags=[None, '68b329da9893e34099c7d8ad5cb9c940', '68b329da9893e34099c7d8ad5cb9c940', '68b329da9893e34099c7d8ad5cb9c941']) resp = controller.PUT(req) self.assertEqual(resp.status_int // 100, 5) # server error # req supplies etag, object servers return 422 - mismatch headers = {'Content-Length': '0', 'ETag': '68b329da9893e34099c7d8ad5cb9c940'} req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=headers) self.app.update_request(req) set_http_connect(200, 422, 422, 503, etags=['68b329da9893e34099c7d8ad5cb9c940', '68b329da9893e34099c7d8ad5cb9c941', None, None]) resp = controller.PUT(req) self.assertEqual(resp.status_int // 100, 4) # client error def test_response_get_accept_ranges_header(self): with save_globals(): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'GET'}) self.app.update_request(req) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') set_http_connect(200, 200, 200) resp = controller.GET(req) self.assertTrue('accept-ranges' in resp.headers) self.assertEqual(resp.headers['accept-ranges'], 'bytes') def test_response_head_accept_ranges_header(self): with 
save_globals(): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) self.app.update_request(req) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') set_http_connect(200, 200, 200) resp = controller.HEAD(req) self.assertTrue('accept-ranges' in resp.headers) self.assertEqual(resp.headers['accept-ranges'], 'bytes') def test_GET_calls_authorize(self): called = [False] def authorize(req): called[0] = True return HTTPUnauthorized(request=req) with save_globals(): set_http_connect(200, 200, 201, 201, 201) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o') req.environ['swift.authorize'] = authorize self.app.update_request(req) controller.GET(req) self.assertTrue(called[0]) def test_HEAD_calls_authorize(self): called = [False] def authorize(req): called[0] = True return HTTPUnauthorized(request=req) with save_globals(): set_http_connect(200, 200, 201, 201, 201) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', {'REQUEST_METHOD': 'HEAD'}) req.environ['swift.authorize'] = authorize self.app.update_request(req) controller.HEAD(req) self.assertTrue(called[0]) def test_POST_calls_authorize(self): called = [False] def authorize(req): called[0] = True return HTTPUnauthorized(request=req) with save_globals(): self.app.object_post_as_copy = False set_http_connect(200, 200, 201, 201, 201) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'Content-Length': '5'}, body='12345') req.environ['swift.authorize'] = authorize self.app.update_request(req) controller.POST(req) self.assertTrue(called[0]) def test_POST_as_copy_calls_authorize(self): called = [False] def authorize(req): called[0] = True return HTTPUnauthorized(request=req) with save_globals(): set_http_connect(200, 200, 200, 200, 200, 201, 201, 201) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'Content-Length': '5'}, body='12345') req.environ['swift.authorize'] = authorize self.app.update_request(req) controller.POST(req) self.assertTrue(called[0]) def test_PUT_calls_authorize(self): called = [False] def authorize(req): called[0] = True return HTTPUnauthorized(request=req) with save_globals(): set_http_connect(200, 200, 201, 201, 201) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '5'}, body='12345') req.environ['swift.authorize'] = authorize self.app.update_request(req) controller.PUT(req) self.assertTrue(called[0]) def test_COPY_calls_authorize(self): called = [False] def authorize(req): called[0] = True return HTTPUnauthorized(request=req) with save_globals(): set_http_connect(200, 200, 200, 200, 200, 201, 201, 201) controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY'}, headers={'Destination': 'c/o'}) req.environ['swift.authorize'] = authorize self.app.update_request(req) controller.COPY(req) self.assertTrue(called[0]) def test_POST_converts_delete_after_to_delete_at(self): with save_globals(): self.app.object_post_as_copy = False controller = ReplicatedObjectController( self.app, 'account', 'container', 'object') 
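# With time.time pinned to a fixed value below, the relative
# X-Delete-After: 60 header on the POST should be converted to an absolute
# X-Delete-At of int(t + 60), which is what the final assertion checks.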
set_http_connect(200, 200, 202, 202, 202) self.app.memcache.store = {} orig_time = time.time try: t = time.time() time.time = lambda: t req = Request.blank('/v1/a/c/o', {}, headers={'Content-Type': 'foo/bar', 'X-Delete-After': '60'}) self.app.update_request(req) res = controller.POST(req) self.assertEqual(res.status, '202 Fake') self.assertEqual(req.headers.get('x-delete-at'), str(int(t + 60))) finally: time.time = orig_time @unpatch_policies def test_ec_client_disconnect(self): prolis = _test_sockets[0] # create connection sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() # create container fd.write('PUT /v1/a/ec-discon HTTP/1.1\r\n' 'Host: localhost\r\n' 'Content-Length: 0\r\n' 'X-Storage-Token: t\r\n' 'X-Storage-Policy: ec\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 2' self.assertEqual(headers[:len(exp)], exp) # create object obj = 'a' * 4 * 64 * 2 ** 10 fd.write('PUT /v1/a/ec-discon/test HTTP/1.1\r\n' 'Host: localhost\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: donuts\r\n' '\r\n%s' % (len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # get object fd.write('GET /v1/a/ec-discon/test HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) # read most of the object, and disconnect fd.read(10) sock.fd._sock.close() condition = \ lambda: _test_servers[0].logger.get_lines_for_level('warning') self._sleep_enough(condition) # check for disconnect message! expected = ['Client disconnected on read'] * 2 self.assertEqual( _test_servers[0].logger.get_lines_for_level('warning'), expected) @unpatch_policies def test_ec_client_put_disconnect(self): prolis = _test_sockets[0] # create connection sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() # create container fd.write('PUT /v1/a/ec-discon HTTP/1.1\r\n' 'Host: localhost\r\n' 'Content-Length: 0\r\n' 'X-Storage-Token: t\r\n' 'X-Storage-Policy: ec\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 2' self.assertEqual(headers[:len(exp)], exp) # create object obj = 'a' * 4 * 64 * 2 ** 10 fd.write('PUT /v1/a/ec-discon/test HTTP/1.1\r\n' 'Host: localhost\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: donuts\r\n' '\r\n%s' % (len(obj), obj[:-10])) fd.flush() fd.close() sock.close() # sleep to trampoline enough condition = \ lambda: _test_servers[0].logger.get_lines_for_level('warning') self._sleep_enough(condition) expected = ['Client disconnected without sending enough data'] warns = _test_servers[0].logger.get_lines_for_level('warning') self.assertEqual(expected, warns) errors = _test_servers[0].logger.get_lines_for_level('error') self.assertEqual([], errors) @unpatch_policies def test_leak_1(self): _request_instances = weakref.WeakKeyDictionary() _orig_init = Request.__init__ def request_init(self, *args, **kwargs): _orig_init(self, *args, **kwargs) _request_instances[self] = None with mock.patch.object(Request, "__init__", request_init): prolis = _test_sockets[0] prosrv = _test_servers[0] obj_len = prosrv.client_chunk_size * 2 # PUT test file sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/c/test_leak_1 HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Auth-Token: t\r\n' 'Content-Length: %s\r\n' 'Content-Type: 
application/octet-stream\r\n' '\r\n%s' % (obj_len, 'a' * obj_len)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # Remember Request instance count, make sure the GC is run for # pythons without reference counting. for i in range(4): sleep(0) # let eventlet do its thing gc.collect() else: sleep(0) before_request_instances = len(_request_instances) # GET test file, but disconnect early sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/c/test_leak_1 HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Auth-Token: t\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) fd.read(1) sock.fd._sock.close() # Make sure the GC is run again for pythons without reference # counting for i in range(4): sleep(0) # let eventlet do its thing gc.collect() else: sleep(0) self.assertEqual( before_request_instances, len(_request_instances)) def test_OPTIONS(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'a', 'c', 'o.jpg') def my_empty_container_info(*args): return {} controller.container_info = my_empty_container_info req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.com', 'Access-Control-Request-Method': 'GET'}) resp = controller.OPTIONS(req) self.assertEqual(401, resp.status_int) def my_empty_origin_container_info(*args): return {'cors': {'allow_origin': None}} controller.container_info = my_empty_origin_container_info req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.com', 'Access-Control-Request-Method': 'GET'}) resp = controller.OPTIONS(req) self.assertEqual(401, resp.status_int) def my_container_info(*args): return { 'cors': { 'allow_origin': 'http://foo.bar:8080 https://foo.bar', 'max_age': '999', } } controller.container_info = my_container_info req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'https://foo.bar', 'Access-Control-Request-Method': 'GET'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) self.assertEqual( 'https://foo.bar', resp.headers['access-control-allow-origin']) for verb in 'OPTIONS COPY GET POST PUT DELETE HEAD'.split(): self.assertTrue( verb in resp.headers['access-control-allow-methods']) self.assertEqual( len(resp.headers['access-control-allow-methods'].split(', ')), 7) self.assertEqual('999', resp.headers['access-control-max-age']) req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'https://foo.bar'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(401, resp.status_int) req = Request.blank('/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) for verb in 'OPTIONS COPY GET POST PUT DELETE HEAD'.split(): self.assertTrue( verb in resp.headers['Allow']) self.assertEqual(len(resp.headers['Allow'].split(', ')), 7) req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.com'}) resp = controller.OPTIONS(req) self.assertEqual(401, resp.status_int) req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.bar', 'Access-Control-Request-Method': 'GET'}) controller.app.cors_allow_origin = ['http://foo.bar', ] resp = controller.OPTIONS(req) self.assertEqual(200, 
resp.status_int) def my_container_info_wildcard(*args): return { 'cors': { 'allow_origin': '*', 'max_age': '999', } } controller.container_info = my_container_info_wildcard req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'https://bar.baz', 'Access-Control-Request-Method': 'GET'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) self.assertEqual('*', resp.headers['access-control-allow-origin']) for verb in 'OPTIONS COPY GET POST PUT DELETE HEAD'.split(): self.assertTrue( verb in resp.headers['access-control-allow-methods']) self.assertEqual( len(resp.headers['access-control-allow-methods'].split(', ')), 7) self.assertEqual('999', resp.headers['access-control-max-age']) def test_CORS_valid(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') def stubContainerInfo(*args): return { 'cors': { 'allow_origin': 'http://not.foo.bar', 'expose_headers': 'X-Object-Meta-Color ' 'X-Object-Meta-Color-Ex' } } controller.container_info = stubContainerInfo controller.app.strict_cors_mode = False def objectGET(controller, req): return Response(headers={ 'X-Object-Meta-Color': 'red', 'X-Super-Secret': 'hush', }) req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'GET'}, headers={'Origin': 'http://foo.bar'}) resp = cors_validation(objectGET)(controller, req) self.assertEqual(200, resp.status_int) self.assertEqual('http://foo.bar', resp.headers['access-control-allow-origin']) self.assertEqual('red', resp.headers['x-object-meta-color']) # X-Super-Secret is in the response, but not "exposed" self.assertEqual('hush', resp.headers['x-super-secret']) self.assertIn('access-control-expose-headers', resp.headers) exposed = set( h.strip() for h in resp.headers['access-control-expose-headers'].split(',')) expected_exposed = set(['cache-control', 'content-language', 'content-type', 'expires', 'last-modified', 'pragma', 'etag', 'x-timestamp', 'x-trans-id', 'x-object-meta-color', 'x-object-meta-color-ex']) self.assertEqual(expected_exposed, exposed) controller.app.strict_cors_mode = True req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'GET'}, headers={'Origin': 'http://foo.bar'}) resp = cors_validation(objectGET)(controller, req) self.assertEqual(200, resp.status_int) self.assertNotIn('access-control-expose-headers', resp.headers) self.assertNotIn('access-control-allow-origin', resp.headers) controller.app.strict_cors_mode = False def stubContainerInfoWithAsteriskAllowOrigin(*args): return { 'cors': { 'allow_origin': '*' } } controller.container_info = \ stubContainerInfoWithAsteriskAllowOrigin req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'GET'}, headers={'Origin': 'http://foo.bar'}) resp = cors_validation(objectGET)(controller, req) self.assertEqual(200, resp.status_int) self.assertEqual('*', resp.headers['access-control-allow-origin']) def stubContainerInfoWithEmptyAllowOrigin(*args): return { 'cors': { 'allow_origin': '' } } controller.container_info = stubContainerInfoWithEmptyAllowOrigin req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'GET'}, headers={'Origin': 'http://foo.bar'}) resp = cors_validation(objectGET)(controller, req) self.assertEqual(200, resp.status_int) self.assertEqual('http://foo.bar', resp.headers['access-control-allow-origin']) def test_CORS_valid_with_obj_headers(self): with save_globals(): controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') def stubContainerInfo(*args): return { 'cors': { 'allow_origin': 'http://foo.bar' } } 
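# The stubbed object GET below supplies its own Access-Control-Allow-Origin
# and Access-Control-Expose-Headers; cors_validation is expected to keep those
# object-supplied values ('http://obj.origin', 'x-trans-id') rather than
# replacing them with values derived from the container's CORS settings.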
controller.container_info = stubContainerInfo def objectGET(controller, req): return Response(headers={ 'X-Object-Meta-Color': 'red', 'X-Super-Secret': 'hush', 'Access-Control-Allow-Origin': 'http://obj.origin', 'Access-Control-Expose-Headers': 'x-trans-id' }) req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'GET'}, headers={'Origin': 'http://foo.bar'}) resp = cors_validation(objectGET)(controller, req) self.assertEqual(200, resp.status_int) self.assertEqual('http://obj.origin', resp.headers['access-control-allow-origin']) self.assertEqual('x-trans-id', resp.headers['access-control-expose-headers']) def _gather_x_container_headers(self, controller_call, req, *connect_args, **kwargs): header_list = kwargs.pop('header_list', ['X-Container-Device', 'X-Container-Host', 'X-Container-Partition']) seen_headers = [] def capture_headers(ipaddr, port, device, partition, method, path, headers=None, query_string=None): captured = {} for header in header_list: captured[header] = headers.get(header) seen_headers.append(captured) with save_globals(): self.app.allow_account_management = True set_http_connect(*connect_args, give_connect=capture_headers, **kwargs) resp = controller_call(req) self.assertEqual(2, resp.status_int // 100) # sanity check # don't care about the account/container HEADs, so chuck # the first two requests return sorted(seen_headers[2:], key=lambda d: d.get(header_list[0]) or 'z') def test_PUT_x_container_headers_with_equal_replicas(self): req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '5'}, body='12345') controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') seen_headers = self._gather_x_container_headers( controller.PUT, req, 200, 200, 201, 201, 201) # HEAD HEAD PUT PUT PUT self.assertEqual( seen_headers, [ {'X-Container-Host': '10.0.0.0:1000', 'X-Container-Partition': '0', 'X-Container-Device': 'sda'}, {'X-Container-Host': '10.0.0.1:1001', 'X-Container-Partition': '0', 'X-Container-Device': 'sdb'}, {'X-Container-Host': '10.0.0.2:1002', 'X-Container-Partition': '0', 'X-Container-Device': 'sdc'}]) def test_PUT_x_container_headers_with_fewer_container_replicas(self): self.app.container_ring.set_replicas(2) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '5'}, body='12345') controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') seen_headers = self._gather_x_container_headers( controller.PUT, req, 200, 200, 201, 201, 201) # HEAD HEAD PUT PUT PUT self.assertEqual( seen_headers, [ {'X-Container-Host': '10.0.0.0:1000', 'X-Container-Partition': '0', 'X-Container-Device': 'sda'}, {'X-Container-Host': '10.0.0.0:1000', 'X-Container-Partition': '0', 'X-Container-Device': 'sda'}, {'X-Container-Host': '10.0.0.1:1001', 'X-Container-Partition': '0', 'X-Container-Device': 'sdb'}]) def test_PUT_x_container_headers_with_more_container_replicas(self): self.app.container_ring.set_replicas(4) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '5'}, body='12345') controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') seen_headers = self._gather_x_container_headers( controller.PUT, req, 200, 200, 201, 201, 201) # HEAD HEAD PUT PUT PUT self.assertEqual( seen_headers, [ {'X-Container-Host': '10.0.0.0:1000,10.0.0.3:1003', 'X-Container-Partition': '0', 'X-Container-Device': 'sda,sdd'}, {'X-Container-Host': '10.0.0.1:1001', 'X-Container-Partition': '0', 'X-Container-Device': 'sdb'}, {'X-Container-Host': '10.0.0.2:1002', 
'X-Container-Partition': '0', 'X-Container-Device': 'sdc'}]) def test_POST_x_container_headers_with_more_container_replicas(self): self.app.container_ring.set_replicas(4) self.app.object_post_as_copy = False req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'application/stuff'}) controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') seen_headers = self._gather_x_container_headers( controller.POST, req, 200, 200, 200, 200, 200) # HEAD HEAD POST POST POST self.assertEqual( seen_headers, [ {'X-Container-Host': '10.0.0.0:1000,10.0.0.3:1003', 'X-Container-Partition': '0', 'X-Container-Device': 'sda,sdd'}, {'X-Container-Host': '10.0.0.1:1001', 'X-Container-Partition': '0', 'X-Container-Device': 'sdb'}, {'X-Container-Host': '10.0.0.2:1002', 'X-Container-Partition': '0', 'X-Container-Device': 'sdc'}]) def test_DELETE_x_container_headers_with_more_container_replicas(self): self.app.container_ring.set_replicas(4) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'Content-Type': 'application/stuff'}) controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') seen_headers = self._gather_x_container_headers( controller.DELETE, req, 200, 200, 200, 200, 200) # HEAD HEAD DELETE DELETE DELETE self.assertEqual(seen_headers, [ {'X-Container-Host': '10.0.0.0:1000,10.0.0.3:1003', 'X-Container-Partition': '0', 'X-Container-Device': 'sda,sdd'}, {'X-Container-Host': '10.0.0.1:1001', 'X-Container-Partition': '0', 'X-Container-Device': 'sdb'}, {'X-Container-Host': '10.0.0.2:1002', 'X-Container-Partition': '0', 'X-Container-Device': 'sdc'} ]) @mock.patch('time.time', new=lambda: STATIC_TIME) def test_PUT_x_delete_at_with_fewer_container_replicas(self): self.app.container_ring.set_replicas(2) delete_at_timestamp = int(time.time()) + 100000 delete_at_container = utils.get_expirer_container( delete_at_timestamp, self.app.expiring_objects_container_divisor, 'a', 'c', 'o') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Type': 'application/stuff', 'Content-Length': '0', 'X-Delete-At': str(delete_at_timestamp)}) controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') seen_headers = self._gather_x_container_headers( controller.PUT, req, 200, 200, 201, 201, 201, # HEAD HEAD PUT PUT PUT header_list=('X-Delete-At-Host', 'X-Delete-At-Device', 'X-Delete-At-Partition', 'X-Delete-At-Container')) self.assertEqual(seen_headers, [ {'X-Delete-At-Host': '10.0.0.0:1000', 'X-Delete-At-Container': delete_at_container, 'X-Delete-At-Partition': '0', 'X-Delete-At-Device': 'sda'}, {'X-Delete-At-Host': '10.0.0.1:1001', 'X-Delete-At-Container': delete_at_container, 'X-Delete-At-Partition': '0', 'X-Delete-At-Device': 'sdb'}, {'X-Delete-At-Host': None, 'X-Delete-At-Container': None, 'X-Delete-At-Partition': None, 'X-Delete-At-Device': None} ]) @mock.patch('time.time', new=lambda: STATIC_TIME) def test_PUT_x_delete_at_with_more_container_replicas(self): self.app.container_ring.set_replicas(4) self.app.expiring_objects_account = 'expires' self.app.expiring_objects_container_divisor = 60 delete_at_timestamp = int(time.time()) + 100000 delete_at_container = utils.get_expirer_container( delete_at_timestamp, self.app.expiring_objects_container_divisor, 'a', 'c', 'o') req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Type': 'application/stuff', 'Content-Length': 0, 'X-Delete-At': str(delete_at_timestamp)}) controller = ReplicatedObjectController( self.app, 'a', 'c', 'o') 
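
# A short illustrative sketch (the sketch_spread_devices helper is made up
# here; it is not Swift code) of the grouping encoded by the seen_headers
# assertions in these X-Container-* / X-Delete-At-* tests: container devices
# are dealt out round-robin across the outgoing object-server requests, so
# with more container replicas than object requests the extras are folded
# onto earlier requests and comma-joined, while with fewer replicas the
# left-over request carries no header at all (the X-Container-* PUT test
# above wraps around instead, which this sketch does not model).
def sketch_spread_devices(container_devices, n_object_requests):
    """Group container devices round-robin onto the outgoing object PUTs."""
    groups = [[] for _ in range(n_object_requests)]
    for i, dev in enumerate(container_devices):
        groups[i % n_object_requests].append(dev)
    return [','.join(group) if group else None for group in groups]

# 4 container devices over 3 object PUTs -> 'sda,sdd' / 'sdb' / 'sdc'
assert sketch_spread_devices(['sda', 'sdb', 'sdc', 'sdd'], 3) == \
    ['sda,sdd', 'sdb', 'sdc']
# 2 container devices over 3 object PUTs -> the third request gets None,
# matching the X-Delete-At-* expectations just below
assert sketch_spread_devices(['sda', 'sdb'], 3) == ['sda', 'sdb', None]
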
seen_headers = self._gather_x_container_headers( controller.PUT, req, 200, 200, 201, 201, 201, # HEAD HEAD PUT PUT PUT header_list=('X-Delete-At-Host', 'X-Delete-At-Device', 'X-Delete-At-Partition', 'X-Delete-At-Container')) self.assertEqual(seen_headers, [ {'X-Delete-At-Host': '10.0.0.0:1000,10.0.0.3:1003', 'X-Delete-At-Container': delete_at_container, 'X-Delete-At-Partition': '0', 'X-Delete-At-Device': 'sda,sdd'}, {'X-Delete-At-Host': '10.0.0.1:1001', 'X-Delete-At-Container': delete_at_container, 'X-Delete-At-Partition': '0', 'X-Delete-At-Device': 'sdb'}, {'X-Delete-At-Host': '10.0.0.2:1002', 'X-Delete-At-Container': delete_at_container, 'X-Delete-At-Partition': '0', 'X-Delete-At-Device': 'sdc'} ]) class TestECMismatchedFA(unittest.TestCase): def tearDown(self): prosrv = _test_servers[0] # don't leak error limits and poison other tests prosrv._error_limiting = {} def test_mixing_different_objects_fragment_archives(self): (prosrv, acc1srv, acc2srv, con1srv, con2srv, obj1srv, obj2srv, obj3srv) = _test_servers ec_policy = POLICIES[3] @public def bad_disk(req): return Response(status=507, body="borken") ensure_container = Request.blank( "/v1/a/ec-crazytown", environ={"REQUEST_METHOD": "PUT"}, headers={"X-Storage-Policy": "ec", "X-Auth-Token": "t"}) resp = ensure_container.get_response(prosrv) self.assertTrue(resp.status_int in (201, 202)) obj1 = "first version..." put_req1 = Request.blank( "/v1/a/ec-crazytown/obj", environ={"REQUEST_METHOD": "PUT"}, headers={"X-Auth-Token": "t"}) put_req1.body = obj1 obj2 = u"versión segundo".encode("utf-8") put_req2 = Request.blank( "/v1/a/ec-crazytown/obj", environ={"REQUEST_METHOD": "PUT"}, headers={"X-Auth-Token": "t"}) put_req2.body = obj2 # pyeclib has checks for unequal-length; we don't want to trip those self.assertEqual(len(obj1), len(obj2)) # Server obj1 will have the first version of the object (obj2 also # gets it, but that gets stepped on later) prosrv._error_limiting = {} with mock.patch.object(obj3srv, 'PUT', bad_disk), \ mock.patch( 'swift.common.storage_policy.ECStoragePolicy.quorum'): type(ec_policy).quorum = mock.PropertyMock(return_value=2) resp = put_req1.get_response(prosrv) self.assertEqual(resp.status_int, 201) # Servers obj2 and obj3 will have the second version of the object. 
prosrv._error_limiting = {} with mock.patch.object(obj1srv, 'PUT', bad_disk), \ mock.patch( 'swift.common.storage_policy.ECStoragePolicy.quorum'): type(ec_policy).quorum = mock.PropertyMock(return_value=2) resp = put_req2.get_response(prosrv) self.assertEqual(resp.status_int, 201) # A GET that only sees 1 fragment archive should fail get_req = Request.blank("/v1/a/ec-crazytown/obj", environ={"REQUEST_METHOD": "GET"}, headers={"X-Auth-Token": "t"}) prosrv._error_limiting = {} with mock.patch.object(obj1srv, 'GET', bad_disk), \ mock.patch.object(obj2srv, 'GET', bad_disk): resp = get_req.get_response(prosrv) self.assertEqual(resp.status_int, 503) # A GET that sees 2 matching FAs will work get_req = Request.blank("/v1/a/ec-crazytown/obj", environ={"REQUEST_METHOD": "GET"}, headers={"X-Auth-Token": "t"}) prosrv._error_limiting = {} with mock.patch.object(obj1srv, 'GET', bad_disk): resp = get_req.get_response(prosrv) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, obj2) # A GET that sees 2 mismatching FAs will fail get_req = Request.blank("/v1/a/ec-crazytown/obj", environ={"REQUEST_METHOD": "GET"}, headers={"X-Auth-Token": "t"}) prosrv._error_limiting = {} with mock.patch.object(obj2srv, 'GET', bad_disk): resp = get_req.get_response(prosrv) self.assertEqual(resp.status_int, 503) class TestObjectDisconnectCleanup(unittest.TestCase): # update this if you need to make more different devices in do_setup device_pattern = re.compile('sd[a-z][0-9]') def _cleanup_devices(self): # make sure all the object data is cleaned up for dev in os.listdir(_testdir): if not self.device_pattern.match(dev): continue device_path = os.path.join(_testdir, dev) for datadir in os.listdir(device_path): if 'object' not in datadir: continue data_path = os.path.join(device_path, datadir) rmtree(data_path, ignore_errors=True) mkdirs(data_path) def setUp(self): debug.hub_exceptions(False) self._cleanup_devices() def tearDown(self): debug.hub_exceptions(True) self._cleanup_devices() def _check_disconnect_cleans_up(self, policy_name, is_chunked=False): proxy_port = _test_sockets[0].getsockname()[1] def put(path, headers=None, body=None): conn = httplib.HTTPConnection('localhost', proxy_port) try: conn.connect() conn.putrequest('PUT', path) for k, v in (headers or {}).items(): conn.putheader(k, v) conn.endheaders() body = body or [''] for chunk in body: if is_chunked: chunk = '%x\r\n%s\r\n' % (len(chunk), chunk) conn.send(chunk) resp = conn.getresponse() body = resp.read() finally: # seriously - shut this mother down if conn.sock: conn.sock.fd._sock.close() return resp, body # ensure container container_path = '/v1/a/%s-disconnect-test' % policy_name resp, _body = put(container_path, headers={ 'Connection': 'close', 'X-Storage-Policy': policy_name, 'Content-Length': '0', }) self.assertIn(resp.status, (201, 202)) def exploding_body(): for i in range(3): yield '\x00' * (64 * 2 ** 10) raise Exception('kaboom!') headers = {} if is_chunked: headers['Transfer-Encoding'] = 'chunked' else: headers['Content-Length'] = 64 * 2 ** 20 obj_path = container_path + '/disconnect-data' try: resp, _body = put(obj_path, headers=headers, body=exploding_body()) except Exception as e: if str(e) != 'kaboom!': raise else: self.fail('obj put connection did not ka-splod') sleep(0.1) def find_files(self): found_files = defaultdict(list) for root, dirs, files in os.walk(_testdir): for fname in files: filename, ext = os.path.splitext(fname) found_files[ext].append(os.path.join(root, fname)) return found_files def 
test_repl_disconnect_cleans_up(self): self._check_disconnect_cleans_up('zero') found_files = self.find_files() self.assertEqual(found_files['.data'], []) def test_ec_disconnect_cleans_up(self): self._check_disconnect_cleans_up('ec') found_files = self.find_files() self.assertEqual(found_files['.durable'], []) self.assertEqual(found_files['.data'], []) def test_repl_chunked_transfer_disconnect_cleans_up(self): self._check_disconnect_cleans_up('zero', is_chunked=True) found_files = self.find_files() self.assertEqual(found_files['.data'], []) def test_ec_chunked_transfer_disconnect_cleans_up(self): self._check_disconnect_cleans_up('ec', is_chunked=True) found_files = self.find_files() self.assertEqual(found_files['.durable'], []) self.assertEqual(found_files['.data'], []) class TestObjectECRangedGET(unittest.TestCase): def setUp(self): _test_servers[0].logger._clear() self.app = proxy_server.Application( None, FakeMemcache(), logger=debug_logger('proxy-ut'), account_ring=FakeRing(), container_ring=FakeRing()) def tearDown(self): prosrv = _test_servers[0] self.assertFalse(prosrv.logger.get_lines_for_level('error')) self.assertFalse(prosrv.logger.get_lines_for_level('warning')) @classmethod def setUpClass(cls): cls.obj_name = 'range-get-test' cls.tiny_obj_name = 'range-get-test-tiny' cls.aligned_obj_name = 'range-get-test-aligned' # Note: only works if called with unpatched policies prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: 0\r\n' 'X-Storage-Token: t\r\n' 'X-Storage-Policy: ec\r\n' '\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 2' assert headers[:len(exp)] == exp, "container PUT failed" seg_size = POLICIES.get_by_name("ec").ec_segment_size cls.seg_size = seg_size # EC segment size is 4 KiB, hence this gives 4 segments, which we # then verify with a quick sanity check cls.obj = ' my hovercraft is full of eels '.join( str(s) for s in range(431)) assert seg_size * 4 > len(cls.obj) > seg_size * 3, \ "object is wrong number of segments" cls.obj_etag = md5(cls.obj).hexdigest() cls.tiny_obj = 'tiny, tiny object' assert len(cls.tiny_obj) < seg_size, "tiny_obj too large" cls.aligned_obj = "".join( "abcdEFGHijkl%04d" % x for x in range(512)) assert len(cls.aligned_obj) % seg_size == 0, "aligned obj not aligned" for obj_name, obj in ((cls.obj_name, cls.obj), (cls.tiny_obj_name, cls.tiny_obj), (cls.aligned_obj_name, cls.aligned_obj)): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/ec-con/%s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'Content-Length: %d\r\n' 'X-Storage-Token: t\r\n' 'Content-Type: donuts\r\n' '\r\n%s' % (obj_name, len(obj), obj)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' assert headers[:len(exp)] == exp, \ "object PUT failed %s" % obj_name def _get_obj(self, range_value, obj_name=None): if obj_name is None: obj_name = self.obj_name prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/ec-con/%s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Range: %s\r\n' '\r\n' % (obj_name, range_value)) fd.flush() headers = readuntil2crlfs(fd) # e.g. "HTTP/1.1 206 Partial Content\r\n..." 
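
# The header block read above always starts "HTTP/1.1 NNN ...", so the three
# status digits sit at string offsets 9-11.  A tiny worked example of the
# slice applied on the next line (illustration only; the '_example_status'
# name is invented here and unused elsewhere):
_example_status = 'HTTP/1.1 206 Partial Content\r\n'
assert int(_example_status[9:12]) == 206
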
status_code = int(headers[9:12]) headers = parse_headers_string(headers) gotten_obj = '' while True: buf = fd.read(64) if not buf: break gotten_obj += buf # if we get this wrong, clients will either get truncated data or # they'll hang waiting for bytes that aren't coming, so it warrants # being asserted for every test case if 'Content-Length' in headers: self.assertEqual(int(headers['Content-Length']), len(gotten_obj)) # likewise, if we say MIME and don't send MIME or vice versa, # clients will be horribly confused if headers.get('Content-Type', '').startswith('multipart/byteranges'): self.assertEqual(gotten_obj[:2], "--") else: # In general, this isn't true, as you can start an object with # "--". However, in this test, we don't start any objects with # "--", or even include "--" in their contents anywhere. self.assertNotEqual(gotten_obj[:2], "--") return (status_code, headers, gotten_obj) def _parse_multipart(self, content_type, body): parser = email.parser.FeedParser() parser.feed("Content-Type: %s\r\n\r\n" % content_type) parser.feed(body) root_message = parser.close() self.assertTrue(root_message.is_multipart()) byteranges = root_message.get_payload() self.assertFalse(root_message.defects) for i, message in enumerate(byteranges): self.assertFalse(message.defects, "Part %d had defects" % i) self.assertFalse(message.is_multipart(), "Nested multipart at %d" % i) return byteranges def test_bogus(self): status, headers, gotten_obj = self._get_obj("tacos=3-5") self.assertEqual(status, 200) self.assertEqual(len(gotten_obj), len(self.obj)) self.assertEqual(gotten_obj, self.obj) def test_unaligned(self): # One segment's worth of data, but straddling two segment boundaries # (so it has data from three segments) status, headers, gotten_obj = self._get_obj("bytes=3783-7878") self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], "4096") self.assertEqual(headers['Content-Range'], "bytes 3783-7878/14513") self.assertEqual(len(gotten_obj), 4096) self.assertEqual(gotten_obj, self.obj[3783:7879]) def test_aligned_left(self): # First byte is aligned to a segment boundary, last byte is not status, headers, gotten_obj = self._get_obj("bytes=0-5500") self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], "5501") self.assertEqual(headers['Content-Range'], "bytes 0-5500/14513") self.assertEqual(len(gotten_obj), 5501) self.assertEqual(gotten_obj, self.obj[:5501]) def test_aligned_range(self): # Ranged GET that wants exactly one segment status, headers, gotten_obj = self._get_obj("bytes=4096-8191") self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], "4096") self.assertEqual(headers['Content-Range'], "bytes 4096-8191/14513") self.assertEqual(len(gotten_obj), 4096) self.assertEqual(gotten_obj, self.obj[4096:8192]) def test_aligned_range_end(self): # Ranged GET that wants exactly the last segment status, headers, gotten_obj = self._get_obj("bytes=12288-14512") self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], "2225") self.assertEqual(headers['Content-Range'], "bytes 12288-14512/14513") self.assertEqual(len(gotten_obj), 2225) self.assertEqual(gotten_obj, self.obj[12288:]) def test_aligned_range_aligned_obj(self): # Ranged GET that wants exactly the last segment, which is full-size status, headers, gotten_obj = self._get_obj("bytes=4096-8191", self.aligned_obj_name) self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], "4096") self.assertEqual(headers['Content-Range'], "bytes 4096-8191/8192") 
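
# A worked sketch of the segment arithmetic behind these range assertions
# (illustration only; sketch_segments_for_range is invented here and is not
# the proxy's real EC range handling).  With the 4 KiB segment size noted in
# setUpClass, a byte range is served from every whole segment it touches,
# i.e. 0-based segments floor(first / seg_size) .. floor(last / seg_size).
def sketch_segments_for_range(first_byte, last_byte, seg_size=4096):
    """Return the 0-based EC segment indices a client byte range touches."""
    return list(range(first_byte // seg_size, last_byte // seg_size + 1))

# "bytes=4096-8191" is exactly one segment (test_aligned_range)
assert sketch_segments_for_range(4096, 8191) == [1]
# "bytes=4095-4096" straddles the first segment boundary (test_boundaries)
assert sketch_segments_for_range(4095, 4096) == [0, 1]
# "bytes=0-5500" starts on a boundary and spills into the next segment
# (test_aligned_left)
assert sketch_segments_for_range(0, 5500) == [0, 1]
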
self.assertEqual(len(gotten_obj), 4096) self.assertEqual(gotten_obj, self.aligned_obj[4096:8192]) def test_byte_0(self): # Just the first byte, but it's index 0, so that's easy to get wrong status, headers, gotten_obj = self._get_obj("bytes=0-0") self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], "1") self.assertEqual(headers['Content-Range'], "bytes 0-0/14513") self.assertEqual(gotten_obj, self.obj[0]) def test_unsatisfiable(self): # Goes just one byte too far off the end of the object, so it's # unsatisfiable status, headers, _junk = self._get_obj( "bytes=%d-%d" % (len(self.obj), len(self.obj) + 100)) self.assertEqual(status, 416) self.assertEqual(self.obj_etag, headers.get('Etag')) self.assertEqual('bytes', headers.get('Accept-Ranges')) def test_off_end(self): # Ranged GET that's mostly off the end of the object, but overlaps # it in just the last byte status, headers, gotten_obj = self._get_obj( "bytes=%d-%d" % (len(self.obj) - 1, len(self.obj) + 100)) self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '1') self.assertEqual(headers['Content-Range'], 'bytes 14512-14512/14513') self.assertEqual(gotten_obj, self.obj[-1]) def test_aligned_off_end(self): # Ranged GET that starts on a segment boundary but asks for a whole lot status, headers, gotten_obj = self._get_obj( "bytes=%d-%d" % (8192, len(self.obj) + 100)) self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '6321') self.assertEqual(headers['Content-Range'], 'bytes 8192-14512/14513') self.assertEqual(gotten_obj, self.obj[8192:]) def test_way_off_end(self): # Ranged GET that's mostly off the end of the object, but overlaps # it in just the last byte, and wants multiple segments' worth off # the end status, headers, gotten_obj = self._get_obj( "bytes=%d-%d" % (len(self.obj) - 1, len(self.obj) * 1000)) self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '1') self.assertEqual(headers['Content-Range'], 'bytes 14512-14512/14513') self.assertEqual(gotten_obj, self.obj[-1]) def test_boundaries(self): # Wants the last byte of segment 1 + the first byte of segment 2 status, headers, gotten_obj = self._get_obj("bytes=4095-4096") self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '2') self.assertEqual(headers['Content-Range'], 'bytes 4095-4096/14513') self.assertEqual(gotten_obj, self.obj[4095:4097]) def test_until_end(self): # Wants the last byte of segment 1 + the rest status, headers, gotten_obj = self._get_obj("bytes=4095-") self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '10418') self.assertEqual(headers['Content-Range'], 'bytes 4095-14512/14513') self.assertEqual(gotten_obj, self.obj[4095:]) def test_small_suffix(self): # Small range-suffix GET: the last 100 bytes (less than one segment) status, headers, gotten_obj = self._get_obj("bytes=-100") self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '100') self.assertEqual(headers['Content-Range'], 'bytes 14413-14512/14513') self.assertEqual(len(gotten_obj), 100) self.assertEqual(gotten_obj, self.obj[-100:]) def test_small_suffix_aligned(self): # Small range-suffix GET: the last 100 bytes, last segment is # full-size status, headers, gotten_obj = self._get_obj("bytes=-100", self.aligned_obj_name) self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '100') self.assertEqual(headers['Content-Range'], 'bytes 8092-8191/8192') self.assertEqual(len(gotten_obj), 100) def test_suffix_two_segs(self): # Ask for enough data 
that we need the last two segments. The last # segment is short, though, so this ensures we compensate for that. # # Note that the total range size is less than one full-size segment. suffix_len = len(self.obj) % self.seg_size + 1 status, headers, gotten_obj = self._get_obj("bytes=-%d" % suffix_len) self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], str(suffix_len)) self.assertEqual(headers['Content-Range'], 'bytes %d-%d/%d' % (len(self.obj) - suffix_len, len(self.obj) - 1, len(self.obj))) self.assertEqual(len(gotten_obj), suffix_len) def test_large_suffix(self): # Large range-suffix GET: the last 5000 bytes (more than one segment) status, headers, gotten_obj = self._get_obj("bytes=-5000") self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '5000') self.assertEqual(headers['Content-Range'], 'bytes 9513-14512/14513') self.assertEqual(len(gotten_obj), 5000) self.assertEqual(gotten_obj, self.obj[-5000:]) def test_overlarge_suffix(self): # The last N+1 bytes of an N-byte object status, headers, gotten_obj = self._get_obj( "bytes=-%d" % (len(self.obj) + 1)) self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '14513') self.assertEqual(headers['Content-Range'], 'bytes 0-14512/14513') self.assertEqual(len(gotten_obj), len(self.obj)) self.assertEqual(gotten_obj, self.obj) def test_small_suffix_tiny_object(self): status, headers, gotten_obj = self._get_obj( "bytes=-5", self.tiny_obj_name) self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '5') self.assertEqual(headers['Content-Range'], 'bytes 12-16/17') self.assertEqual(gotten_obj, self.tiny_obj[12:]) def test_overlarge_suffix_tiny_object(self): status, headers, gotten_obj = self._get_obj( "bytes=-1234567890", self.tiny_obj_name) self.assertEqual(status, 206) self.assertEqual(headers['Content-Length'], '17') self.assertEqual(headers['Content-Range'], 'bytes 0-16/17') self.assertEqual(len(gotten_obj), len(self.tiny_obj)) self.assertEqual(gotten_obj, self.tiny_obj) def test_multiple_ranges(self): status, headers, gotten_obj = self._get_obj( "bytes=0-100,4490-5010", self.obj_name) self.assertEqual(status, 206) self.assertEqual(headers["Content-Length"], str(len(gotten_obj))) content_type, content_type_params = parse_content_type( headers['Content-Type']) content_type_params = dict(content_type_params) self.assertEqual(content_type, 'multipart/byteranges') boundary = content_type_params.get('boundary') self.assertTrue(boundary is not None) got_byteranges = self._parse_multipart(headers['Content-Type'], gotten_obj) self.assertEqual(len(got_byteranges), 2) first_byterange, second_byterange = got_byteranges self.assertEqual(first_byterange['Content-Range'], 'bytes 0-100/14513') self.assertEqual(first_byterange.get_payload(), self.obj[:101]) self.assertEqual(second_byterange['Content-Range'], 'bytes 4490-5010/14513') self.assertEqual(second_byterange.get_payload(), self.obj[4490:5011]) def test_multiple_ranges_overlapping_in_segment(self): status, headers, gotten_obj = self._get_obj( "bytes=0-9,20-29,40-49,60-69,80-89") self.assertEqual(status, 206) got_byteranges = self._parse_multipart(headers['Content-Type'], gotten_obj) self.assertEqual(len(got_byteranges), 5) def test_multiple_ranges_off_end(self): status, headers, gotten_obj = self._get_obj( "bytes=0-10,14500-14513") # there is no byte 14513, only 0-14512 self.assertEqual(status, 206) got_byteranges = self._parse_multipart(headers['Content-Type'], gotten_obj) self.assertEqual(len(got_byteranges), 2) 
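
# A small sketch of how a suffix range ("bytes=-N") resolves against an
# object of known length, matching the Content-Range values asserted by the
# suffix tests above (sketch_resolve_suffix is invented for illustration; it
# is not Swift code).
def sketch_resolve_suffix(obj_len, suffix_len):
    """Return the (start, end) byte offsets that 'bytes=-N' covers."""
    start = max(0, obj_len - suffix_len)   # an over-large suffix clamps to 0
    return start, obj_len - 1

assert sketch_resolve_suffix(14513, 100) == (14413, 14512)  # test_small_suffix
assert sketch_resolve_suffix(17, 5) == (12, 16)             # tiny object
assert sketch_resolve_suffix(14513, 14514) == (0, 14512)    # overlarge suffix
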
self.assertEqual(got_byteranges[0]['Content-Range'], "bytes 0-10/14513") self.assertEqual(got_byteranges[1]['Content-Range'], "bytes 14500-14512/14513") def test_multiple_ranges_suffix_off_end(self): status, headers, gotten_obj = self._get_obj( "bytes=0-10,-13") self.assertEqual(status, 206) got_byteranges = self._parse_multipart(headers['Content-Type'], gotten_obj) self.assertEqual(len(got_byteranges), 2) self.assertEqual(got_byteranges[0]['Content-Range'], "bytes 0-10/14513") self.assertEqual(got_byteranges[1]['Content-Range'], "bytes 14500-14512/14513") def test_multiple_ranges_one_barely_unsatisfiable(self): # The thing about 14515-14520 is that it comes from the last segment # in the object. When we turn this range into a fragment range, # it'll be for the last fragment, so the object servers see # something satisfiable. # # Basically, we'll get 3 byteranges from the object server, but we # have to filter out the unsatisfiable one on our own. status, headers, gotten_obj = self._get_obj( "bytes=0-10,14515-14520,40-50") self.assertEqual(status, 206) got_byteranges = self._parse_multipart(headers['Content-Type'], gotten_obj) self.assertEqual(len(got_byteranges), 2) self.assertEqual(got_byteranges[0]['Content-Range'], "bytes 0-10/14513") self.assertEqual(got_byteranges[0].get_payload(), self.obj[0:11]) self.assertEqual(got_byteranges[1]['Content-Range'], "bytes 40-50/14513") self.assertEqual(got_byteranges[1].get_payload(), self.obj[40:51]) def test_multiple_ranges_some_unsatisfiable(self): status, headers, gotten_obj = self._get_obj( "bytes=0-100,4090-5010,999999-9999999", self.obj_name) self.assertEqual(status, 206) content_type, content_type_params = parse_content_type( headers['Content-Type']) content_type_params = dict(content_type_params) self.assertEqual(content_type, 'multipart/byteranges') boundary = content_type_params.get('boundary') self.assertTrue(boundary is not None) got_byteranges = self._parse_multipart(headers['Content-Type'], gotten_obj) self.assertEqual(len(got_byteranges), 2) first_byterange, second_byterange = got_byteranges self.assertEqual(first_byterange['Content-Range'], 'bytes 0-100/14513') self.assertEqual(first_byterange.get_payload(), self.obj[:101]) self.assertEqual(second_byterange['Content-Range'], 'bytes 4090-5010/14513') self.assertEqual(second_byterange.get_payload(), self.obj[4090:5011]) def test_two_ranges_one_unsatisfiable(self): status, headers, gotten_obj = self._get_obj( "bytes=0-100,999999-9999999", self.obj_name) self.assertEqual(status, 206) content_type, content_type_params = parse_content_type( headers['Content-Type']) # According to RFC 7233, this could be either a multipart/byteranges # response with one part or it could be a single-part response (just # the bytes, no MIME). We're locking it down here: single-part # response. That's what replicated objects do, and we don't want any # client-visible differences between EC objects and replicated ones. self.assertEqual(content_type, 'donuts') self.assertEqual(gotten_obj, self.obj[:101]) def test_two_ranges_one_unsatisfiable_same_segment(self): # Like test_two_ranges_one_unsatisfiable(), but where both ranges # fall within the same EC segment. 
status, headers, gotten_obj = self._get_obj( "bytes=14500-14510,14520-14530") self.assertEqual(status, 206) content_type, content_type_params = parse_content_type( headers['Content-Type']) self.assertEqual(content_type, 'donuts') self.assertEqual(gotten_obj, self.obj[14500:14511]) def test_multiple_ranges_some_unsatisfiable_out_of_order(self): status, headers, gotten_obj = self._get_obj( "bytes=0-100,99999998-99999999,4090-5010", self.obj_name) self.assertEqual(status, 206) content_type, content_type_params = parse_content_type( headers['Content-Type']) content_type_params = dict(content_type_params) self.assertEqual(content_type, 'multipart/byteranges') boundary = content_type_params.get('boundary') self.assertTrue(boundary is not None) got_byteranges = self._parse_multipart(headers['Content-Type'], gotten_obj) self.assertEqual(len(got_byteranges), 2) first_byterange, second_byterange = got_byteranges self.assertEqual(first_byterange['Content-Range'], 'bytes 0-100/14513') self.assertEqual(first_byterange.get_payload(), self.obj[:101]) self.assertEqual(second_byterange['Content-Range'], 'bytes 4090-5010/14513') self.assertEqual(second_byterange.get_payload(), self.obj[4090:5011]) @patch_policies([ StoragePolicy(0, 'zero', True, object_ring=FakeRing(base_port=3000)), StoragePolicy(1, 'one', False, object_ring=FakeRing(base_port=3000)), StoragePolicy(2, 'two', False, True, object_ring=FakeRing(base_port=3000)) ]) class TestContainerController(unittest.TestCase): "Test swift.proxy_server.ContainerController" def setUp(self): self.app = proxy_server.Application( None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing(base_port=2000), logger=debug_logger()) def test_convert_policy_to_index(self): controller = swift.proxy.controllers.ContainerController(self.app, 'a', 'c') expected = { 'zero': 0, 'ZeRo': 0, 'one': 1, 'OnE': 1, } for name, index in expected.items(): req = Request.blank('/a/c', headers={'Content-Length': '0', 'Content-Type': 'text/plain', 'X-Storage-Policy': name}) self.assertEqual(controller._convert_policy_to_index(req), index) # default test req = Request.blank('/a/c', headers={'Content-Length': '0', 'Content-Type': 'text/plain'}) self.assertEqual(controller._convert_policy_to_index(req), None) # negative test req = Request.blank('/a/c', headers={'Content-Length': '0', 'Content-Type': 'text/plain', 'X-Storage-Policy': 'nada'}) self.assertRaises(HTTPException, controller._convert_policy_to_index, req) # storage policy two is deprecated req = Request.blank('/a/c', headers={'Content-Length': '0', 'Content-Type': 'text/plain', 'X-Storage-Policy': 'two'}) self.assertRaises(HTTPException, controller._convert_policy_to_index, req) def test_convert_index_to_name(self): policy = random.choice(list(POLICIES)) req = Request.blank('/v1/a/c') with mocked_http_conn( 200, 200, headers={'X-Backend-Storage-Policy-Index': int(policy)}, ) as fake_conn: resp = req.get_response(self.app) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['X-Storage-Policy'], policy.name) def test_no_convert_index_to_name_when_container_not_found(self): policy = random.choice(list(POLICIES)) req = Request.blank('/v1/a/c') with mocked_http_conn( 200, 404, 404, 404, headers={'X-Backend-Storage-Policy-Index': int(policy)}) as fake_conn: resp = req.get_response(self.app) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Storage-Policy'], None) def 
test_error_convert_index_to_name(self): req = Request.blank('/v1/a/c') with mocked_http_conn( 200, 200, headers={'X-Backend-Storage-Policy-Index': '-1'}) as fake_conn: resp = req.get_response(self.app) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['X-Storage-Policy'], None) error_lines = self.app.logger.get_lines_for_level('error') self.assertEqual(2, len(error_lines)) for msg in error_lines: expected = "Could not translate " \ "X-Backend-Storage-Policy-Index ('-1')" self.assertTrue(expected in msg) def test_transfer_headers(self): src_headers = {'x-remove-versions-location': 'x', 'x-container-read': '*:user', 'x-remove-container-sync-key': 'x'} dst_headers = {'x-versions-location': 'backup'} controller = swift.proxy.controllers.ContainerController(self.app, 'a', 'c') controller.transfer_headers(src_headers, dst_headers) expected_headers = {'x-versions-location': '', 'x-container-read': '*:user', 'x-container-sync-key': ''} self.assertEqual(dst_headers, expected_headers) def assert_status_map(self, method, statuses, expected, raise_exc=False, missing_container=False): with save_globals(): kwargs = {} if raise_exc: kwargs['raise_exc'] = raise_exc kwargs['missing_container'] = missing_container set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a/c', headers={'Content-Length': '0', 'Content-Type': 'text/plain'}) self.app.update_request(req) res = method(req) self.assertEqual(res.status_int, expected) set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a/c/', headers={'Content-Length': '0', 'Content-Type': 'text/plain'}) self.app.update_request(req) res = method(req) self.assertEqual(res.status_int, expected) def test_HEAD_GET(self): with save_globals(): controller = proxy_server.ContainerController(self.app, 'a', 'c') def test_status_map(statuses, expected, c_expected=None, a_expected=None, **kwargs): set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a/c', {}) self.app.update_request(req) res = controller.HEAD(req) self.assertEqual(res.status[:len(str(expected))], str(expected)) if expected < 400: self.assertTrue('x-works' in res.headers) self.assertEqual(res.headers['x-works'], 'yes') if c_expected: self.assertTrue('swift.container/a/c' in res.environ) self.assertEqual( res.environ['swift.container/a/c']['status'], c_expected) else: self.assertTrue('swift.container/a/c' not in res.environ) if a_expected: self.assertTrue('swift.account/a' in res.environ) self.assertEqual(res.environ['swift.account/a']['status'], a_expected) else: self.assertTrue('swift.account/a' not in res.environ) set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a/c', {}) self.app.update_request(req) res = controller.GET(req) self.assertEqual(res.status[:len(str(expected))], str(expected)) if expected < 400: self.assertTrue('x-works' in res.headers) self.assertEqual(res.headers['x-works'], 'yes') if c_expected: self.assertTrue('swift.container/a/c' in res.environ) self.assertEqual( res.environ['swift.container/a/c']['status'], c_expected) else: self.assertTrue('swift.container/a/c' not in res.environ) if a_expected: self.assertTrue('swift.account/a' in res.environ) self.assertEqual(res.environ['swift.account/a']['status'], a_expected) else: self.assertTrue('swift.account/a' not in res.environ) # In all the following tests cache 200 for account # return and cache vary for container
# return 200 and cache 200 for container test_status_map((200, 200, 404, 404), 200, 200, 200) test_status_map((200, 200, 500, 404), 200, 200, 200) # return 304 don't cache container test_status_map((200, 304, 500, 404), 304, None, 200) # return 404 and cache 404 for container test_status_map((200, 404, 404, 404), 404, 404, 200) test_status_map((200, 404, 404, 500), 404, 404, 200) # return 503, don't cache container test_status_map((200, 500, 500, 500), 503, None, 200) self.assertFalse(self.app.account_autocreate) # In all the following tests cache 404 for account # return 404 (as account is not found) and don't cache container test_status_map((404, 404, 404), 404, None, 404) # This should make no difference self.app.account_autocreate = True test_status_map((404, 404, 404), 404, None, 404) def test_PUT_policy_headers(self): backend_requests = [] def capture_requests(ipaddr, port, device, partition, method, path, headers=None, query_string=None): if method == 'PUT': backend_requests.append(headers) def test_policy(requested_policy): with save_globals(): mock_conn = set_http_connect(200, 201, 201, 201, give_connect=capture_requests) self.app.memcache.store = {} req = Request.blank('/v1/a/test', method='PUT', headers={'Content-Length': 0}) if requested_policy: expected_policy = requested_policy req.headers['X-Storage-Policy'] = policy.name else: expected_policy = POLICIES.default res = req.get_response(self.app) if expected_policy.is_deprecated: self.assertEqual(res.status_int, 400) self.assertEqual(0, len(backend_requests)) expected = 'is deprecated' self.assertTrue(expected in res.body, '%r did not include %r' % ( res.body, expected)) return self.assertEqual(res.status_int, 201) self.assertEqual( expected_policy.object_ring.replicas, len(backend_requests)) for headers in backend_requests: if not requested_policy: self.assertFalse('X-Backend-Storage-Policy-Index' in headers) self.assertTrue( 'X-Backend-Storage-Policy-Default' in headers) self.assertEqual( int(expected_policy), int(headers['X-Backend-Storage-Policy-Default'])) else: self.assertTrue('X-Backend-Storage-Policy-Index' in headers) self.assertEqual(int(headers ['X-Backend-Storage-Policy-Index']), int(policy)) # make sure all mocked responses are consumed self.assertRaises(StopIteration, mock_conn.code_iter.next) test_policy(None) # no policy header for policy in POLICIES: backend_requests = [] # reset backend requests test_policy(policy) def test_PUT(self): with save_globals(): controller = proxy_server.ContainerController(self.app, 'account', 'container') def test_status_map(statuses, expected, **kwargs): set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a/c', {}) req.content_length = 0 self.app.update_request(req) res = controller.PUT(req) expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((200, 201, 201, 201), 201, missing_container=True) test_status_map((200, 201, 201, 500), 201, missing_container=True) test_status_map((200, 204, 404, 404), 404, missing_container=True) test_status_map((200, 204, 500, 404), 503, missing_container=True) self.assertFalse(self.app.account_autocreate) test_status_map((404, 404, 404), 404, missing_container=True) self.app.account_autocreate = True # fail to retrieve account info test_status_map( (503, 503, 503), # account_info fails on 503 404, missing_container=True) # account fail after creation test_status_map( (404, 404, 404, # account_info fails on 404 201, 201, 201, # PUT account 404, 404, 404), # 
account_info fail 404, missing_container=True) test_status_map( (503, 503, 404, # account_info fails on 404 503, 503, 503, # PUT account 503, 503, 404), # account_info fail 404, missing_container=True) # put fails test_status_map( (404, 404, 404, # account_info fails on 404 201, 201, 201, # PUT account 200, # account_info success 503, 503, 201), # put container fail 503, missing_container=True) # all goes according to plan test_status_map( (404, 404, 404, # account_info fails on 404 201, 201, 201, # PUT account 200, # account_info success 201, 201, 201), # put container success 201, missing_container=True) test_status_map( (503, 404, 404, # account_info fails on 404 503, 201, 201, # PUT account 503, 200, # account_info success 503, 201, 201), # put container success 201, missing_container=True) def test_PUT_autocreate_account_with_sysmeta(self): # x-account-sysmeta headers in a container PUT request should be # transferred to the account autocreate PUT request with save_globals(): controller = proxy_server.ContainerController(self.app, 'account', 'container') def test_status_map(statuses, expected, headers=None, **kwargs): set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a/c', {}, headers=headers) req.content_length = 0 self.app.update_request(req) res = controller.PUT(req) expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) self.app.account_autocreate = True calls = [] callback = _make_callback_func(calls) key, value = 'X-Account-Sysmeta-Blah', 'something' headers = {key: value} # all goes according to plan test_status_map( (404, 404, 404, # account_info fails on 404 201, 201, 201, # PUT account 200, # account_info success 201, 201, 201), # put container success 201, missing_container=True, headers=headers, give_connect=callback) self.assertEqual(10, len(calls)) for call in calls[3:6]: self.assertEqual('/account', call['path']) self.assertTrue(key in call['headers'], '%s call, key %s missing in headers %s' % (call['method'], key, call['headers'])) self.assertEqual(value, call['headers'][key]) def test_POST(self): with save_globals(): controller = proxy_server.ContainerController(self.app, 'account', 'container') def test_status_map(statuses, expected, **kwargs): set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a/c', {}) req.content_length = 0 self.app.update_request(req) res = controller.POST(req) expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((200, 201, 201, 201), 201, missing_container=True) test_status_map((200, 201, 201, 500), 201, missing_container=True) test_status_map((200, 204, 404, 404), 404, missing_container=True) test_status_map((200, 204, 500, 404), 503, missing_container=True) self.assertFalse(self.app.account_autocreate) test_status_map((404, 404, 404), 404, missing_container=True) self.app.account_autocreate = True test_status_map((404, 404, 404), 404, missing_container=True) def test_PUT_max_containers_per_account(self): with save_globals(): self.app.max_containers_per_account = 12346 controller = proxy_server.ContainerController(self.app, 'account', 'container') self.assert_status_map(controller.PUT, (200, 201, 201, 201), 201, missing_container=True) self.app.max_containers_per_account = 12345 controller = proxy_server.ContainerController(self.app, 'account', 'container') self.assert_status_map(controller.PUT, (200, 200, 201, 201, 201), 201, missing_container=True) controller = 
proxy_server.ContainerController(self.app, 'account', 'container_new') self.assert_status_map(controller.PUT, (200, 404, 404, 404), 403, missing_container=True) self.app.max_containers_per_account = 12345 self.app.max_containers_whitelist = ['account'] controller = proxy_server.ContainerController(self.app, 'account', 'container') self.assert_status_map(controller.PUT, (200, 201, 201, 201), 201, missing_container=True) def test_PUT_max_container_name_length(self): with save_globals(): limit = constraints.MAX_CONTAINER_NAME_LENGTH controller = proxy_server.ContainerController(self.app, 'account', '1' * limit) self.assert_status_map(controller.PUT, (200, 201, 201, 201), 201, missing_container=True) controller = proxy_server.ContainerController(self.app, 'account', '2' * (limit + 1)) self.assert_status_map(controller.PUT, (201, 201, 201), 400, missing_container=True) def test_PUT_connect_exceptions(self): with save_globals(): controller = proxy_server.ContainerController(self.app, 'account', 'container') self.assert_status_map(controller.PUT, (200, 201, 201, -1), 201, missing_container=True) self.assert_status_map(controller.PUT, (200, 201, -1, -1), 503, missing_container=True) self.assert_status_map(controller.PUT, (200, 503, 503, -1), 503, missing_container=True) def test_acc_missing_returns_404(self): for meth in ('DELETE', 'PUT'): with save_globals(): self.app.memcache = FakeMemcacheReturnsNone() self.app._error_limiting = {} controller = proxy_server.ContainerController(self.app, 'account', 'container') if meth == 'PUT': set_http_connect(200, 200, 200, 200, 200, 200, missing_container=True) else: set_http_connect(200, 200, 200, 200) self.app.memcache.store = {} req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': meth}) self.app.update_request(req) resp = getattr(controller, meth)(req) self.assertEqual(resp.status_int, 200) set_http_connect(404, 404, 404, 200, 200, 200) # Make sure it is a blank request without env caching req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': meth}) resp = getattr(controller, meth)(req) self.assertEqual(resp.status_int, 404) set_http_connect(503, 404, 404) # Make sure it is a blank request without env caching req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': meth}) resp = getattr(controller, meth)(req) self.assertEqual(resp.status_int, 404) set_http_connect(503, 404, raise_exc=True) # Make sure it is a blank request without env caching req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': meth}) resp = getattr(controller, meth)(req) self.assertEqual(resp.status_int, 404) for dev in self.app.account_ring.devs: set_node_errors(self.app, dev, self.app.error_suppression_limit + 1, time.time()) set_http_connect(200, 200, 200, 200, 200, 200) # Make sure it is a blank request without env caching req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': meth}) resp = getattr(controller, meth)(req) self.assertEqual(resp.status_int, 404) def test_put_locking(self): class MockMemcache(FakeMemcache): def __init__(self, allow_lock=None): self.allow_lock = allow_lock super(MockMemcache, self).__init__() @contextmanager def soft_lock(self, key, timeout=0, retries=5): if self.allow_lock: yield True else: raise NotImplementedError with save_globals(): controller = proxy_server.ContainerController(self.app, 'account', 'container') self.app.memcache = MockMemcache(allow_lock=True) set_http_connect(200, 201, 201, 201, missing_container=True) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': 'PUT'}) self.app.update_request(req) res = 
controller.PUT(req) self.assertEqual(res.status_int, 201) def test_error_limiting(self): with save_globals(): controller = proxy_server.ContainerController(self.app, 'account', 'container') container_ring = controller.app.container_ring controller.app.sort_nodes = lambda l: l self.assert_status_map(controller.HEAD, (200, 503, 200, 200), 200, missing_container=False) self.assertEqual( node_error_count(controller.app, container_ring.devs[0]), 2) self.assertTrue( node_last_error(controller.app, container_ring.devs[0]) is not None) for _junk in range(self.app.error_suppression_limit): self.assert_status_map(controller.HEAD, (200, 503, 503, 503), 503) self.assertEqual( node_error_count(controller.app, container_ring.devs[0]), self.app.error_suppression_limit + 1) self.assert_status_map(controller.HEAD, (200, 200, 200, 200), 503) self.assertTrue( node_last_error(controller.app, container_ring.devs[0]) is not None) self.assert_status_map(controller.PUT, (200, 201, 201, 201), 503, missing_container=True) self.assert_status_map(controller.DELETE, (200, 204, 204, 204), 503) self.app.error_suppression_interval = -300 self.assert_status_map(controller.HEAD, (200, 200, 200, 200), 200) self.assert_status_map(controller.DELETE, (200, 204, 204, 204), 404, raise_exc=True) def test_DELETE(self): with save_globals(): controller = proxy_server.ContainerController(self.app, 'account', 'container') self.assert_status_map(controller.DELETE, (200, 204, 204, 204), 204) self.assert_status_map(controller.DELETE, (200, 204, 204, 503), 204) self.assert_status_map(controller.DELETE, (200, 204, 503, 503), 503) self.assert_status_map(controller.DELETE, (200, 204, 404, 404), 404) self.assert_status_map(controller.DELETE, (200, 404, 404, 404), 404) self.assert_status_map(controller.DELETE, (200, 204, 503, 404), 503) self.app.memcache = FakeMemcacheReturnsNone() # 200: Account check, 404x3: Container check self.assert_status_map(controller.DELETE, (200, 404, 404, 404), 404) def test_response_get_accept_ranges_header(self): with save_globals(): set_http_connect(200, 200, body='{}') controller = proxy_server.ContainerController(self.app, 'account', 'container') req = Request.blank('/v1/a/c?format=json') self.app.update_request(req) res = controller.GET(req) self.assertTrue('accept-ranges' in res.headers) self.assertEqual(res.headers['accept-ranges'], 'bytes') def test_response_head_accept_ranges_header(self): with save_globals(): set_http_connect(200, 200, body='{}') controller = proxy_server.ContainerController(self.app, 'account', 'container') req = Request.blank('/v1/a/c?format=json') self.app.update_request(req) res = controller.HEAD(req) self.assertTrue('accept-ranges' in res.headers) self.assertEqual(res.headers['accept-ranges'], 'bytes') def test_PUT_metadata(self): self.metadata_helper('PUT') def test_POST_metadata(self): self.metadata_helper('POST') def metadata_helper(self, method): for test_header, test_value in ( ('X-Container-Meta-TestHeader', 'TestValue'), ('X-Container-Meta-TestHeader', ''), ('X-Remove-Container-Meta-TestHeader', 'anything'), ('X-Container-Read', '.r:*'), ('X-Remove-Container-Read', 'anything'), ('X-Container-Write', 'anyone'), ('X-Remove-Container-Write', 'anything')): test_errors = [] def test_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): if path == '/a/c': find_header = test_header find_value = test_value if find_header.lower().startswith('x-remove-'): find_header = \ find_header.lower().replace('-remove', '', 1) find_value = '' for k, v in 
headers.items(): if k.lower() == find_header.lower() and \ v == find_value: break else: test_errors.append('%s: %s not in %s' % (find_header, find_value, headers)) with save_globals(): controller = \ proxy_server.ContainerController(self.app, 'a', 'c') set_http_connect(200, 201, 201, 201, give_connect=test_connect) req = Request.blank( '/v1/a/c', environ={'REQUEST_METHOD': method, 'swift_owner': True}, headers={test_header: test_value}) self.app.update_request(req) getattr(controller, method)(req) self.assertEqual(test_errors, []) def test_PUT_bad_metadata(self): self.bad_metadata_helper('PUT') def test_POST_bad_metadata(self): self.bad_metadata_helper('POST') def bad_metadata_helper(self, method): with save_globals(): controller = proxy_server.ContainerController(self.app, 'a', 'c') set_http_connect(200, 201, 201, 201) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers={'X-Container-Meta-' + ('a' * constraints.MAX_META_NAME_LENGTH): 'v'}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) req = Request.blank( '/v1/a/c', environ={'REQUEST_METHOD': method}, headers={'X-Container-Meta-' + ('a' * (constraints.MAX_META_NAME_LENGTH + 1)): 'v'}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 400) set_http_connect(201, 201, 201) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers={'X-Container-Meta-Too-Long': 'a' * constraints.MAX_META_VALUE_LENGTH}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers={'X-Container-Meta-Too-Long': 'a' * (constraints.MAX_META_VALUE_LENGTH + 1)}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 400) set_http_connect(201, 201, 201) headers = {} for x in range(constraints.MAX_META_COUNT): headers['X-Container-Meta-%d' % x] = 'v' req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers=headers) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) headers = {} for x in range(constraints.MAX_META_COUNT + 1): headers['X-Container-Meta-%d' % x] = 'v' req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers=headers) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 400) set_http_connect(201, 201, 201) headers = {} header_value = 'a' * constraints.MAX_META_VALUE_LENGTH size = 0 x = 0 while size < (constraints.MAX_META_OVERALL_SIZE - 4 - constraints.MAX_META_VALUE_LENGTH): size += 4 + constraints.MAX_META_VALUE_LENGTH headers['X-Container-Meta-%04d' % x] = header_value x += 1 if constraints.MAX_META_OVERALL_SIZE - size > 1: headers['X-Container-Meta-a'] = \ 'a' * (constraints.MAX_META_OVERALL_SIZE - size - 1) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers=headers) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) headers['X-Container-Meta-a'] = \ 'a' * 
(constraints.MAX_META_OVERALL_SIZE - size) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers=headers) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 400) def test_POST_calls_clean_acl(self): called = [False] def clean_acl(header, value): called[0] = True raise ValueError('fake error') with save_globals(): set_http_connect(200, 201, 201, 201) controller = proxy_server.ContainerController(self.app, 'account', 'container') req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Container-Read': '.r:*'}) req.environ['swift.clean_acl'] = clean_acl self.app.update_request(req) controller.POST(req) self.assertTrue(called[0]) called[0] = False with save_globals(): set_http_connect(200, 201, 201, 201) controller = proxy_server.ContainerController(self.app, 'account', 'container') req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Container-Write': '.r:*'}) req.environ['swift.clean_acl'] = clean_acl self.app.update_request(req) controller.POST(req) self.assertTrue(called[0]) def test_PUT_calls_clean_acl(self): called = [False] def clean_acl(header, value): called[0] = True raise ValueError('fake error') with save_globals(): set_http_connect(200, 201, 201, 201) controller = proxy_server.ContainerController(self.app, 'account', 'container') req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Container-Read': '.r:*'}) req.environ['swift.clean_acl'] = clean_acl self.app.update_request(req) controller.PUT(req) self.assertTrue(called[0]) called[0] = False with save_globals(): set_http_connect(200, 201, 201, 201) controller = proxy_server.ContainerController(self.app, 'account', 'container') req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Container-Write': '.r:*'}) req.environ['swift.clean_acl'] = clean_acl self.app.update_request(req) controller.PUT(req) self.assertTrue(called[0]) def test_GET_no_content(self): with save_globals(): set_http_connect(200, 204, 204, 204) controller = proxy_server.ContainerController(self.app, 'account', 'container') req = Request.blank('/v1/a/c') self.app.update_request(req) res = controller.GET(req) self.assertEqual(res.status_int, 204) self.assertEqual( res.environ['swift.container/a/c']['status'], 204) self.assertEqual(res.content_length, 0) self.assertTrue('transfer-encoding' not in res.headers) def test_GET_calls_authorize(self): called = [False] def authorize(req): called[0] = True return HTTPUnauthorized(request=req) with save_globals(): set_http_connect(200, 201, 201, 201) controller = proxy_server.ContainerController(self.app, 'account', 'container') req = Request.blank('/v1/a/c') req.environ['swift.authorize'] = authorize self.app.update_request(req) res = controller.GET(req) self.assertEqual(res.environ['swift.container/a/c']['status'], 201) self.assertTrue(called[0]) def test_HEAD_calls_authorize(self): called = [False] def authorize(req): called[0] = True return HTTPUnauthorized(request=req) with save_globals(): set_http_connect(200, 201, 201, 201) controller = proxy_server.ContainerController(self.app, 'account', 'container') req = Request.blank('/v1/a/c', {'REQUEST_METHOD': 'HEAD'}) req.environ['swift.authorize'] = authorize self.app.update_request(req) controller.HEAD(req) self.assertTrue(called[0]) def test_unauthorized_requests_when_account_not_found(self): # verify unauthorized container requests always return response # from swift.authorize called = [0, 0] def 
authorize(req): called[0] += 1 return HTTPUnauthorized(request=req) def account_info(*args): called[1] += 1 return None, None, None def _do_test(method): with save_globals(): swift.proxy.controllers.Controller.account_info = account_info app = proxy_server.Application(None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) set_http_connect(201, 201, 201) req = Request.blank('/v1/a/c', {'REQUEST_METHOD': method}) req.environ['swift.authorize'] = authorize self.app.update_request(req) res = app.handle_request(req) return res for method in ('PUT', 'POST', 'DELETE'): # no delay_denial on method, expect one call to authorize called = [0, 0] res = _do_test(method) self.assertEqual(401, res.status_int) self.assertEqual([1, 0], called) for method in ('HEAD', 'GET'): # delay_denial on method, expect two calls to authorize called = [0, 0] res = _do_test(method) self.assertEqual(401, res.status_int) self.assertEqual([2, 1], called) def test_authorized_requests_when_account_not_found(self): # verify authorized container requests always return 404 when # account not found called = [0, 0] def authorize(req): called[0] += 1 def account_info(*args): called[1] += 1 return None, None, None def _do_test(method): with save_globals(): swift.proxy.controllers.Controller.account_info = account_info app = proxy_server.Application(None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) set_http_connect(201, 201, 201) req = Request.blank('/v1/a/c', {'REQUEST_METHOD': method}) req.environ['swift.authorize'] = authorize self.app.update_request(req) res = app.handle_request(req) return res for method in ('PUT', 'POST', 'DELETE', 'HEAD', 'GET'): # expect one call to authorize called = [0, 0] res = _do_test(method) self.assertEqual(404, res.status_int) self.assertEqual([1, 1], called) def test_OPTIONS_get_info_drops_origin(self): with save_globals(): controller = proxy_server.ContainerController(self.app, 'a', 'c') count = [0] def my_get_info(app, env, account, container=None, ret_not_found=False, swift_source=None): if count[0] > 11: return {} count[0] += 1 if not container: return {'some': 'stuff'} return proxy_base.was_get_info( app, env, account, container, ret_not_found, swift_source) proxy_base.was_get_info = proxy_base.get_info with mock.patch.object(proxy_base, 'get_info', my_get_info): proxy_base.get_info = my_get_info req = Request.blank( '/v1/a/c', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.com', 'Access-Control-Request-Method': 'GET'}) controller.OPTIONS(req) self.assertTrue(count[0] < 11) def test_OPTIONS(self): with save_globals(): controller = proxy_server.ContainerController(self.app, 'a', 'c') def my_empty_container_info(*args): return {} controller.container_info = my_empty_container_info req = Request.blank( '/v1/a/c', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.com', 'Access-Control-Request-Method': 'GET'}) resp = controller.OPTIONS(req) self.assertEqual(401, resp.status_int) def my_empty_origin_container_info(*args): return {'cors': {'allow_origin': None}} controller.container_info = my_empty_origin_container_info req = Request.blank( '/v1/a/c', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.com', 'Access-Control-Request-Method': 'GET'}) resp = controller.OPTIONS(req) self.assertEqual(401, resp.status_int) def my_container_info(*args): return { 'cors': { 'allow_origin': 'http://foo.bar:8080 https://foo.bar', 'max_age': '999', } } controller.container_info = my_container_info req = Request.blank( '/v1/a/c', 
{'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'https://foo.bar', 'Access-Control-Request-Method': 'GET'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) self.assertEqual( 'https://foo.bar', resp.headers['access-control-allow-origin']) for verb in 'OPTIONS GET POST PUT DELETE HEAD'.split(): self.assertTrue( verb in resp.headers['access-control-allow-methods']) self.assertEqual( len(resp.headers['access-control-allow-methods'].split(', ')), 6) self.assertEqual('999', resp.headers['access-control-max-age']) req = Request.blank( '/v1/a/c', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'https://foo.bar'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(401, resp.status_int) req = Request.blank('/v1/a/c', {'REQUEST_METHOD': 'OPTIONS'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) for verb in 'OPTIONS GET POST PUT DELETE HEAD'.split(): self.assertTrue( verb in resp.headers['Allow']) self.assertEqual(len(resp.headers['Allow'].split(', ')), 6) req = Request.blank( '/v1/a/c', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.bar', 'Access-Control-Request-Method': 'GET'}) resp = controller.OPTIONS(req) self.assertEqual(401, resp.status_int) req = Request.blank( '/v1/a/c', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.bar', 'Access-Control-Request-Method': 'GET'}) controller.app.cors_allow_origin = ['http://foo.bar', ] resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) def my_container_info_wildcard(*args): return { 'cors': { 'allow_origin': '*', 'max_age': '999', } } controller.container_info = my_container_info_wildcard req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'https://bar.baz', 'Access-Control-Request-Method': 'GET'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) self.assertEqual('*', resp.headers['access-control-allow-origin']) for verb in 'OPTIONS GET POST PUT DELETE HEAD'.split(): self.assertTrue( verb in resp.headers['access-control-allow-methods']) self.assertEqual( len(resp.headers['access-control-allow-methods'].split(', ')), 6) self.assertEqual('999', resp.headers['access-control-max-age']) req = Request.blank( '/v1/a/c/o.jpg', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'https://bar.baz', 'Access-Control-Request-Headers': 'x-foo, x-bar, x-auth-token', 'Access-Control-Request-Method': 'GET'} ) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) self.assertEqual( sortHeaderNames('x-foo, x-bar, x-auth-token'), sortHeaderNames(resp.headers['access-control-allow-headers'])) def test_CORS_valid(self): with save_globals(): controller = proxy_server.ContainerController(self.app, 'a', 'c') def stubContainerInfo(*args): return { 'cors': { 'allow_origin': 'http://foo.bar' } } controller.container_info = stubContainerInfo def containerGET(controller, req): return Response(headers={ 'X-Container-Meta-Color': 'red', 'X-Super-Secret': 'hush', }) req = Request.blank( '/v1/a/c', {'REQUEST_METHOD': 'GET'}, headers={'Origin': 'http://foo.bar'}) resp = cors_validation(containerGET)(controller, req) self.assertEqual(200, resp.status_int) self.assertEqual('http://foo.bar', resp.headers['access-control-allow-origin']) self.assertEqual('red', resp.headers['x-container-meta-color']) # X-Super-Secret is in the response, but not "exposed" self.assertEqual('hush', resp.headers['x-super-secret']) 
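            # Illustrative sketch, not asserted by this test: the proxy tells
            # the browser which response headers cross-origin JavaScript may
            # read via Access-Control-Expose-Headers, e.g.
            #
            #   Access-Control-Expose-Headers: etag, x-timestamp,
            #       x-container-meta-color, ...
            #
            # Headers left off that list (such as X-Super-Secret above) still
            # travel in the raw response; it is the browser, not Swift, that
            # hides them from the calling script.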
self.assertTrue('access-control-expose-headers' in resp.headers) exposed = set( h.strip() for h in resp.headers['access-control-expose-headers'].split(',')) expected_exposed = set(['cache-control', 'content-language', 'content-type', 'expires', 'last-modified', 'pragma', 'etag', 'x-timestamp', 'x-trans-id', 'x-container-meta-color']) self.assertEqual(expected_exposed, exposed) def _gather_x_account_headers(self, controller_call, req, *connect_args, **kwargs): seen_headers = [] to_capture = ('X-Account-Partition', 'X-Account-Host', 'X-Account-Device') def capture_headers(ipaddr, port, device, partition, method, path, headers=None, query_string=None): captured = {} for header in to_capture: captured[header] = headers.get(header) seen_headers.append(captured) with save_globals(): self.app.allow_account_management = True set_http_connect(*connect_args, give_connect=capture_headers, **kwargs) resp = controller_call(req) self.assertEqual(2, resp.status_int // 100) # sanity check # don't care about the account HEAD, so throw away the # first element return sorted(seen_headers[1:], key=lambda d: d['X-Account-Host'] or 'Z') def test_PUT_x_account_headers_with_fewer_account_replicas(self): self.app.account_ring.set_replicas(2) req = Request.blank('/v1/a/c', headers={'': ''}) controller = proxy_server.ContainerController(self.app, 'a', 'c') seen_headers = self._gather_x_account_headers( controller.PUT, req, 200, 201, 201, 201) # HEAD PUT PUT PUT self.assertEqual(seen_headers, [ {'X-Account-Host': '10.0.0.0:1000', 'X-Account-Partition': '0', 'X-Account-Device': 'sda'}, {'X-Account-Host': '10.0.0.1:1001', 'X-Account-Partition': '0', 'X-Account-Device': 'sdb'}, {'X-Account-Host': None, 'X-Account-Partition': None, 'X-Account-Device': None} ]) def test_PUT_x_account_headers_with_more_account_replicas(self): self.app.account_ring.set_replicas(4) req = Request.blank('/v1/a/c', headers={'': ''}) controller = proxy_server.ContainerController(self.app, 'a', 'c') seen_headers = self._gather_x_account_headers( controller.PUT, req, 200, 201, 201, 201) # HEAD PUT PUT PUT self.assertEqual(seen_headers, [ {'X-Account-Host': '10.0.0.0:1000,10.0.0.3:1003', 'X-Account-Partition': '0', 'X-Account-Device': 'sda,sdd'}, {'X-Account-Host': '10.0.0.1:1001', 'X-Account-Partition': '0', 'X-Account-Device': 'sdb'}, {'X-Account-Host': '10.0.0.2:1002', 'X-Account-Partition': '0', 'X-Account-Device': 'sdc'} ]) def test_DELETE_x_account_headers_with_fewer_account_replicas(self): self.app.account_ring.set_replicas(2) req = Request.blank('/v1/a/c', headers={'': ''}) controller = proxy_server.ContainerController(self.app, 'a', 'c') seen_headers = self._gather_x_account_headers( controller.DELETE, req, 200, 204, 204, 204) # HEAD DELETE DELETE DELETE self.assertEqual(seen_headers, [ {'X-Account-Host': '10.0.0.0:1000', 'X-Account-Partition': '0', 'X-Account-Device': 'sda'}, {'X-Account-Host': '10.0.0.1:1001', 'X-Account-Partition': '0', 'X-Account-Device': 'sdb'}, {'X-Account-Host': None, 'X-Account-Partition': None, 'X-Account-Device': None} ]) def test_DELETE_x_account_headers_with_more_account_replicas(self): self.app.account_ring.set_replicas(4) req = Request.blank('/v1/a/c', headers={'': ''}) controller = proxy_server.ContainerController(self.app, 'a', 'c') seen_headers = self._gather_x_account_headers( controller.DELETE, req, 200, 204, 204, 204) # HEAD DELETE DELETE DELETE self.assertEqual(seen_headers, [ {'X-Account-Host': '10.0.0.0:1000,10.0.0.3:1003', 'X-Account-Partition': '0', 'X-Account-Device': 'sda,sdd'}, 
{'X-Account-Host': '10.0.0.1:1001', 'X-Account-Partition': '0', 'X-Account-Device': 'sdb'}, {'X-Account-Host': '10.0.0.2:1002', 'X-Account-Partition': '0', 'X-Account-Device': 'sdc'} ]) def test_PUT_backed_x_timestamp_header(self): timestamps = [] def capture_timestamps(*args, **kwargs): headers = kwargs['headers'] timestamps.append(headers.get('X-Timestamp')) req = Request.blank('/v1/a/c', method='PUT', headers={'': ''}) with save_globals(): new_connect = set_http_connect(200, # account existence check 201, 201, 201, give_connect=capture_timestamps) resp = self.app.handle_request(req) # sanity self.assertRaises(StopIteration, new_connect.code_iter.next) self.assertEqual(2, resp.status_int // 100) timestamps.pop(0) # account existence check self.assertEqual(3, len(timestamps)) for timestamp in timestamps: self.assertEqual(timestamp, timestamps[0]) self.assertTrue(re.match('[0-9]{10}\.[0-9]{5}', timestamp)) def test_DELETE_backed_x_timestamp_header(self): timestamps = [] def capture_timestamps(*args, **kwargs): headers = kwargs['headers'] timestamps.append(headers.get('X-Timestamp')) req = Request.blank('/v1/a/c', method='DELETE', headers={'': ''}) self.app.update_request(req) with save_globals(): new_connect = set_http_connect(200, # account existence check 201, 201, 201, give_connect=capture_timestamps) resp = self.app.handle_request(req) # sanity self.assertRaises(StopIteration, new_connect.code_iter.next) self.assertEqual(2, resp.status_int // 100) timestamps.pop(0) # account existence check self.assertEqual(3, len(timestamps)) for timestamp in timestamps: self.assertEqual(timestamp, timestamps[0]) self.assertTrue(re.match('[0-9]{10}\.[0-9]{5}', timestamp)) def test_node_read_timeout_retry_to_container(self): with save_globals(): req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': 'GET'}) self.app.node_timeout = 0.1 set_http_connect(200, 200, 200, body='abcdef', slow=[1.0, 1.0]) resp = req.get_response(self.app) got_exc = False try: resp.body except ChunkReadTimeout: got_exc = True self.assertTrue(got_exc) @patch_policies([StoragePolicy(0, 'zero', True, object_ring=FakeRing())]) class TestAccountController(unittest.TestCase): def setUp(self): self.app = proxy_server.Application(None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) def assert_status_map(self, method, statuses, expected, env_expected=None, headers=None, **kwargs): headers = headers or {} with save_globals(): set_http_connect(*statuses, **kwargs) req = Request.blank('/v1/a', {}, headers=headers) self.app.update_request(req) res = method(req) self.assertEqual(res.status_int, expected) if env_expected: self.assertEqual(res.environ['swift.account/a']['status'], env_expected) set_http_connect(*statuses) req = Request.blank('/v1/a/', {}) self.app.update_request(req) res = method(req) self.assertEqual(res.status_int, expected) if env_expected: self.assertEqual(res.environ['swift.account/a']['status'], env_expected) def test_OPTIONS(self): with save_globals(): self.app.allow_account_management = False controller = proxy_server.AccountController(self.app, 'account') req = Request.blank('/v1/account', {'REQUEST_METHOD': 'OPTIONS'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) for verb in 'OPTIONS GET POST HEAD'.split(): self.assertTrue( verb in resp.headers['Allow']) self.assertEqual(len(resp.headers['Allow'].split(', ')), 4) # Test a CORS OPTIONS request (i.e. 
including Origin and # Access-Control-Request-Method headers) self.app.allow_account_management = False controller = proxy_server.AccountController(self.app, 'account') req = Request.blank( '/v1/account', {'REQUEST_METHOD': 'OPTIONS'}, headers={'Origin': 'http://foo.com', 'Access-Control-Request-Method': 'GET'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) for verb in 'OPTIONS GET POST HEAD'.split(): self.assertTrue( verb in resp.headers['Allow']) self.assertEqual(len(resp.headers['Allow'].split(', ')), 4) self.app.allow_account_management = True controller = proxy_server.AccountController(self.app, 'account') req = Request.blank('/v1/account', {'REQUEST_METHOD': 'OPTIONS'}) req.content_length = 0 resp = controller.OPTIONS(req) self.assertEqual(200, resp.status_int) for verb in 'OPTIONS GET POST PUT DELETE HEAD'.split(): self.assertTrue( verb in resp.headers['Allow']) self.assertEqual(len(resp.headers['Allow'].split(', ')), 6) def test_GET(self): with save_globals(): controller = proxy_server.AccountController(self.app, 'account') # GET returns after the first successful call to an Account Server self.assert_status_map(controller.GET, (200,), 200, 200) self.assert_status_map(controller.GET, (503, 200), 200, 200) self.assert_status_map(controller.GET, (503, 503, 200), 200, 200) self.assert_status_map(controller.GET, (204,), 204, 204) self.assert_status_map(controller.GET, (503, 204), 204, 204) self.assert_status_map(controller.GET, (503, 503, 204), 204, 204) self.assert_status_map(controller.GET, (404, 200), 200, 200) self.assert_status_map(controller.GET, (404, 404, 200), 200, 200) self.assert_status_map(controller.GET, (404, 503, 204), 204, 204) # If Account servers fail, if autocreate = False, return majority # response self.assert_status_map(controller.GET, (404, 404, 404), 404, 404) self.assert_status_map(controller.GET, (404, 404, 503), 404, 404) self.assert_status_map(controller.GET, (404, 503, 503), 503) self.app.memcache = FakeMemcacheReturnsNone() self.assert_status_map(controller.GET, (404, 404, 404), 404, 404) def test_GET_autocreate(self): with save_globals(): controller = proxy_server.AccountController(self.app, 'account') self.app.memcache = FakeMemcacheReturnsNone() self.assertFalse(self.app.account_autocreate) # Repeat the test for autocreate = False and 404 by all self.assert_status_map(controller.GET, (404, 404, 404), 404) self.assert_status_map(controller.GET, (404, 503, 404), 404) # When autocreate is True, if none of the nodes respond 2xx # And quorum of the nodes responded 404, # ALL nodes are asked to create the account # If successful, the GET request is repeated. 
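            # A rough sketch of the flow exercised below (the helper names are
            # illustrative, not the proxy's actual internals):
            #
            #   statuses = [404, 404, 404]      # every account server misses
            #   if enough_404s(statuses) and app.account_autocreate:
            #       put_account_to_all_nodes()  # autocreate the account
            #       redo_original_get()         # now answered as empty (204)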
controller.app.account_autocreate = True self.assert_status_map(controller.GET, (404, 404, 404), 204) self.assert_status_map(controller.GET, (404, 503, 404), 204) # We always return 503 if no majority between 4xx, 3xx or 2xx found self.assert_status_map(controller.GET, (500, 500, 400), 503) def test_HEAD(self): # Same behaviour as GET with save_globals(): controller = proxy_server.AccountController(self.app, 'account') self.assert_status_map(controller.HEAD, (200,), 200, 200) self.assert_status_map(controller.HEAD, (503, 200), 200, 200) self.assert_status_map(controller.HEAD, (503, 503, 200), 200, 200) self.assert_status_map(controller.HEAD, (204,), 204, 204) self.assert_status_map(controller.HEAD, (503, 204), 204, 204) self.assert_status_map(controller.HEAD, (204, 503, 503), 204, 204) self.assert_status_map(controller.HEAD, (204,), 204, 204) self.assert_status_map(controller.HEAD, (404, 404, 404), 404, 404) self.assert_status_map(controller.HEAD, (404, 404, 200), 200, 200) self.assert_status_map(controller.HEAD, (404, 200), 200, 200) self.assert_status_map(controller.HEAD, (404, 404, 503), 404, 404) self.assert_status_map(controller.HEAD, (404, 503, 503), 503) self.assert_status_map(controller.HEAD, (404, 503, 204), 204, 204) def test_HEAD_autocreate(self): # Same behaviour as GET with save_globals(): controller = proxy_server.AccountController(self.app, 'account') self.app.memcache = FakeMemcacheReturnsNone() self.assertFalse(self.app.account_autocreate) self.assert_status_map(controller.HEAD, (404, 404, 404), 404) controller.app.account_autocreate = True self.assert_status_map(controller.HEAD, (404, 404, 404), 204) self.assert_status_map(controller.HEAD, (500, 404, 404), 204) # We always return 503 if no majority between 4xx, 3xx or 2xx found self.assert_status_map(controller.HEAD, (500, 500, 400), 503) def test_POST_autocreate(self): with save_globals(): controller = proxy_server.AccountController(self.app, 'account') self.app.memcache = FakeMemcacheReturnsNone() # first test with autocreate being False self.assertFalse(self.app.account_autocreate) self.assert_status_map(controller.POST, (404, 404, 404), 404) # next turn it on and test account being created than updated controller.app.account_autocreate = True self.assert_status_map( controller.POST, (404, 404, 404, 202, 202, 202, 201, 201, 201), 201) # account_info PUT account POST account self.assert_status_map( controller.POST, (404, 404, 503, 201, 201, 503, 204, 204, 504), 204) # what if create fails self.assert_status_map( controller.POST, (404, 404, 404, 403, 403, 403, 400, 400, 400), 400) def test_POST_autocreate_with_sysmeta(self): with save_globals(): controller = proxy_server.AccountController(self.app, 'account') self.app.memcache = FakeMemcacheReturnsNone() # first test with autocreate being False self.assertFalse(self.app.account_autocreate) self.assert_status_map(controller.POST, (404, 404, 404), 404) # next turn it on and test account being created than updated controller.app.account_autocreate = True calls = [] callback = _make_callback_func(calls) key, value = 'X-Account-Sysmeta-Blah', 'something' headers = {key: value} self.assert_status_map( controller.POST, (404, 404, 404, 202, 202, 202, 201, 201, 201), 201, # POST , autocreate PUT, POST again headers=headers, give_connect=callback) self.assertEqual(9, len(calls)) for call in calls: self.assertTrue(key in call['headers'], '%s call, key %s missing in headers %s' % (call['method'], key, call['headers'])) self.assertEqual(value, call['headers'][key]) def 
test_connection_refused(self): self.app.account_ring.get_nodes('account') for dev in self.app.account_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = 1 # can't connect on this port controller = proxy_server.AccountController(self.app, 'account') req = Request.blank('/v1/account', environ={'REQUEST_METHOD': 'HEAD'}) self.app.update_request(req) resp = controller.HEAD(req) self.assertEqual(resp.status_int, 503) def test_other_socket_error(self): self.app.account_ring.get_nodes('account') for dev in self.app.account_ring.devs: dev['ip'] = '127.0.0.1' dev['port'] = -1 # invalid port number controller = proxy_server.AccountController(self.app, 'account') req = Request.blank('/v1/account', environ={'REQUEST_METHOD': 'HEAD'}) self.app.update_request(req) resp = controller.HEAD(req) self.assertEqual(resp.status_int, 503) def test_response_get_accept_ranges_header(self): with save_globals(): set_http_connect(200, 200, body='{}') controller = proxy_server.AccountController(self.app, 'account') req = Request.blank('/v1/a?format=json') self.app.update_request(req) res = controller.GET(req) self.assertTrue('accept-ranges' in res.headers) self.assertEqual(res.headers['accept-ranges'], 'bytes') def test_response_head_accept_ranges_header(self): with save_globals(): set_http_connect(200, 200, body='{}') controller = proxy_server.AccountController(self.app, 'account') req = Request.blank('/v1/a?format=json') self.app.update_request(req) res = controller.HEAD(req) res.body self.assertTrue('accept-ranges' in res.headers) self.assertEqual(res.headers['accept-ranges'], 'bytes') def test_PUT(self): with save_globals(): controller = proxy_server.AccountController(self.app, 'account') def test_status_map(statuses, expected, **kwargs): set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a', {}) req.content_length = 0 self.app.update_request(req) res = controller.PUT(req) expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((201, 201, 201), 405) self.app.allow_account_management = True test_status_map((201, 201, 201), 201) test_status_map((201, 201, 500), 201) test_status_map((201, 500, 500), 503) test_status_map((204, 500, 404), 503) def test_PUT_max_account_name_length(self): with save_globals(): self.app.allow_account_management = True limit = constraints.MAX_ACCOUNT_NAME_LENGTH controller = proxy_server.AccountController(self.app, '1' * limit) self.assert_status_map(controller.PUT, (201, 201, 201), 201) controller = proxy_server.AccountController( self.app, '2' * (limit + 1)) self.assert_status_map(controller.PUT, (201, 201, 201), 400) def test_PUT_connect_exceptions(self): with save_globals(): self.app.allow_account_management = True controller = proxy_server.AccountController(self.app, 'account') self.assert_status_map(controller.PUT, (201, 201, -1), 201) self.assert_status_map(controller.PUT, (201, -1, -1), 503) self.assert_status_map(controller.PUT, (503, 503, -1), 503) def test_PUT_status(self): with save_globals(): self.app.allow_account_management = True controller = proxy_server.AccountController(self.app, 'account') self.assert_status_map(controller.PUT, (201, 201, 202), 202) def test_PUT_metadata(self): self.metadata_helper('PUT') def test_POST_metadata(self): self.metadata_helper('POST') def metadata_helper(self, method): for test_header, test_value in ( ('X-Account-Meta-TestHeader', 'TestValue'), ('X-Account-Meta-TestHeader', ''), ('X-Remove-Account-Meta-TestHeader', 'anything')): test_errors = [] def 
test_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): if path == '/a': find_header = test_header find_value = test_value if find_header.lower().startswith('x-remove-'): find_header = \ find_header.lower().replace('-remove', '', 1) find_value = '' for k, v in headers.items(): if k.lower() == find_header.lower() and \ v == find_value: break else: test_errors.append('%s: %s not in %s' % (find_header, find_value, headers)) with save_globals(): self.app.allow_account_management = True controller = \ proxy_server.AccountController(self.app, 'a') set_http_connect(201, 201, 201, give_connect=test_connect) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers={test_header: test_value}) self.app.update_request(req) getattr(controller, method)(req) self.assertEqual(test_errors, []) def test_PUT_bad_metadata(self): self.bad_metadata_helper('PUT') def test_POST_bad_metadata(self): self.bad_metadata_helper('POST') def bad_metadata_helper(self, method): with save_globals(): self.app.allow_account_management = True controller = proxy_server.AccountController(self.app, 'a') set_http_connect(200, 201, 201, 201) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers={'X-Account-Meta-' + ('a' * constraints.MAX_META_NAME_LENGTH): 'v'}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) req = Request.blank( '/v1/a/c', environ={'REQUEST_METHOD': method}, headers={'X-Account-Meta-' + ('a' * (constraints.MAX_META_NAME_LENGTH + 1)): 'v'}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 400) set_http_connect(201, 201, 201) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers={'X-Account-Meta-Too-Long': 'a' * constraints.MAX_META_VALUE_LENGTH}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers={'X-Account-Meta-Too-Long': 'a' * (constraints.MAX_META_VALUE_LENGTH + 1)}) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 400) set_http_connect(201, 201, 201) headers = {} for x in range(constraints.MAX_META_COUNT): headers['X-Account-Meta-%d' % x] = 'v' req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers=headers) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) headers = {} for x in range(constraints.MAX_META_COUNT + 1): headers['X-Account-Meta-%d' % x] = 'v' req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers=headers) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 400) set_http_connect(201, 201, 201) headers = {} header_value = 'a' * constraints.MAX_META_VALUE_LENGTH size = 0 x = 0 while size < (constraints.MAX_META_OVERALL_SIZE - 4 - constraints.MAX_META_VALUE_LENGTH): size += 4 + constraints.MAX_META_VALUE_LENGTH headers['X-Account-Meta-%04d' % x] = header_value x += 1 if constraints.MAX_META_OVERALL_SIZE - size > 1: headers['X-Account-Meta-a'] = \ 'a' * 
(constraints.MAX_META_OVERALL_SIZE - size - 1) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers=headers) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 201) set_http_connect(201, 201, 201) headers['X-Account-Meta-a'] = \ 'a' * (constraints.MAX_META_OVERALL_SIZE - size) req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}, headers=headers) self.app.update_request(req) resp = getattr(controller, method)(req) self.assertEqual(resp.status_int, 400) def test_DELETE(self): with save_globals(): controller = proxy_server.AccountController(self.app, 'account') def test_status_map(statuses, expected, **kwargs): set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a', {'REQUEST_METHOD': 'DELETE'}) req.content_length = 0 self.app.update_request(req) res = controller.DELETE(req) expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((201, 201, 201), 405) self.app.allow_account_management = True test_status_map((201, 201, 201), 201) test_status_map((201, 201, 500), 201) test_status_map((201, 500, 500), 503) test_status_map((204, 500, 404), 503) def test_DELETE_with_query_string(self): # Extra safety in case someone typos a query string for an # account-level DELETE request that was really meant to be caught by # some middleware. with save_globals(): controller = proxy_server.AccountController(self.app, 'account') def test_status_map(statuses, expected, **kwargs): set_http_connect(*statuses, **kwargs) self.app.memcache.store = {} req = Request.blank('/v1/a?whoops', environ={'REQUEST_METHOD': 'DELETE'}) req.content_length = 0 self.app.update_request(req) res = controller.DELETE(req) expected = str(expected) self.assertEqual(res.status[:len(expected)], expected) test_status_map((201, 201, 201), 400) self.app.allow_account_management = True test_status_map((201, 201, 201), 400) test_status_map((201, 201, 500), 400) test_status_map((201, 500, 500), 400) test_status_map((204, 500, 404), 400) @patch_policies([StoragePolicy(0, 'zero', True, object_ring=FakeRing())]) class TestAccountControllerFakeGetResponse(unittest.TestCase): """ Test all the faked-out GET responses for accounts that don't exist. They have to match the responses for empty accounts that really exist. 
""" def setUp(self): conf = {'account_autocreate': 'yes'} self.app = proxy_server.Application(conf, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) self.app.memcache = FakeMemcacheReturnsNone() def test_GET_autocreate_accept_json(self): with save_globals(): set_http_connect(*([404] * 100)) # nonexistent: all backends 404 req = Request.blank( '/v1/a', headers={'Accept': 'application/json'}, environ={'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a'}) resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) self.assertEqual('application/json; charset=utf-8', resp.headers['Content-Type']) self.assertEqual("[]", resp.body) def test_GET_autocreate_format_json(self): with save_globals(): set_http_connect(*([404] * 100)) # nonexistent: all backends 404 req = Request.blank('/v1/a?format=json', environ={'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a', 'QUERY_STRING': 'format=json'}) resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) self.assertEqual('application/json; charset=utf-8', resp.headers['Content-Type']) self.assertEqual("[]", resp.body) def test_GET_autocreate_accept_xml(self): with save_globals(): set_http_connect(*([404] * 100)) # nonexistent: all backends 404 req = Request.blank('/v1/a', headers={"Accept": "text/xml"}, environ={'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a'}) resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) self.assertEqual('text/xml; charset=utf-8', resp.headers['Content-Type']) empty_xml_listing = ('\n' '\n') self.assertEqual(empty_xml_listing, resp.body) def test_GET_autocreate_format_xml(self): with save_globals(): set_http_connect(*([404] * 100)) # nonexistent: all backends 404 req = Request.blank('/v1/a?format=xml', environ={'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a', 'QUERY_STRING': 'format=xml'}) resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) self.assertEqual('application/xml; charset=utf-8', resp.headers['Content-Type']) empty_xml_listing = ('\n' '\n') self.assertEqual(empty_xml_listing, resp.body) def test_GET_autocreate_accept_unknown(self): with save_globals(): set_http_connect(*([404] * 100)) # nonexistent: all backends 404 req = Request.blank('/v1/a', headers={"Accept": "mystery/meat"}, environ={'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a'}) resp = req.get_response(self.app) self.assertEqual(406, resp.status_int) def test_GET_autocreate_format_invalid_utf8(self): with save_globals(): set_http_connect(*([404] * 100)) # nonexistent: all backends 404 req = Request.blank('/v1/a?format=\xff\xfe', environ={'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a', 'QUERY_STRING': 'format=\xff\xfe'}) resp = req.get_response(self.app) self.assertEqual(400, resp.status_int) def test_account_acl_header_access(self): acl = { 'admin': ['AUTH_alice'], 'read-write': ['AUTH_bob'], 'read-only': ['AUTH_carol'], } prefix = get_sys_meta_prefix('account') privileged_headers = {(prefix + 'core-access-control'): format_acl( version=2, acl_dict=acl)} app = proxy_server.Application( None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) with save_globals(): # Mock account server will provide privileged information (ACLs) set_http_connect(200, 200, 200, headers=privileged_headers) req = Request.blank('/v1/a', environ={'REQUEST_METHOD': 'GET'}) resp = app.handle_request(req) # Not a swift_owner -- ACLs should NOT be in response header = 'X-Account-Access-Control' self.assertTrue(header not in resp.headers, '%r was in %r' % ( header, resp.headers)) # Same setup -- mock 
acct server will provide ACLs set_http_connect(200, 200, 200, headers=privileged_headers) req = Request.blank('/v1/a', environ={'REQUEST_METHOD': 'GET', 'swift_owner': True}) resp = app.handle_request(req) # For a swift_owner, the ACLs *should* be in response self.assertTrue(header in resp.headers, '%r not in %r' % ( header, resp.headers)) def test_account_acls_through_delegation(self): # Define a way to grab the requests sent out from the AccountController # to the Account Server, and a way to inject responses we'd like the # Account Server to return. resps_to_send = [] @contextmanager def patch_account_controller_method(verb): old_method = getattr(proxy_server.AccountController, verb) new_method = lambda self, req, *_, **__: resps_to_send.pop(0) try: setattr(proxy_server.AccountController, verb, new_method) yield finally: setattr(proxy_server.AccountController, verb, old_method) def make_test_request(http_method, swift_owner=True): env = { 'REQUEST_METHOD': http_method, 'swift_owner': swift_owner, } acl = { 'admin': ['foo'], 'read-write': ['bar'], 'read-only': ['bas'], } headers = {} if http_method in ('GET', 'HEAD') else { 'x-account-access-control': format_acl(version=2, acl_dict=acl) } return Request.blank('/v1/a', environ=env, headers=headers) # Our AccountController will invoke methods to communicate with the # Account Server, and they will return responses like these: def make_canned_response(http_method): acl = { 'admin': ['foo'], 'read-write': ['bar'], 'read-only': ['bas'], } headers = {'x-account-sysmeta-core-access-control': format_acl( version=2, acl_dict=acl)} canned_resp = Response(headers=headers) canned_resp.environ = { 'PATH_INFO': '/acct', 'REQUEST_METHOD': http_method, } resps_to_send.append(canned_resp) app = proxy_server.Application( None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) app.allow_account_management = True ext_header = 'x-account-access-control' with patch_account_controller_method('GETorHEAD_base'): # GET/HEAD requests should remap sysmeta headers from acct server for verb in ('GET', 'HEAD'): make_canned_response(verb) req = make_test_request(verb) resp = app.handle_request(req) h = parse_acl(version=2, data=resp.headers.get(ext_header)) self.assertEqual(h['admin'], ['foo']) self.assertEqual(h['read-write'], ['bar']) self.assertEqual(h['read-only'], ['bas']) # swift_owner = False: GET/HEAD shouldn't return sensitive info make_canned_response(verb) req = make_test_request(verb, swift_owner=False) resp = app.handle_request(req) h = resp.headers self.assertIsNone(h.get(ext_header)) # swift_owner unset: GET/HEAD shouldn't return sensitive info make_canned_response(verb) req = make_test_request(verb, swift_owner=False) del req.environ['swift_owner'] resp = app.handle_request(req) h = resp.headers self.assertIsNone(h.get(ext_header)) # Verify that PUT/POST requests remap sysmeta headers from acct server with patch_account_controller_method('make_requests'): make_canned_response('PUT') req = make_test_request('PUT') resp = app.handle_request(req) h = parse_acl(version=2, data=resp.headers.get(ext_header)) self.assertEqual(h['admin'], ['foo']) self.assertEqual(h['read-write'], ['bar']) self.assertEqual(h['read-only'], ['bas']) make_canned_response('POST') req = make_test_request('POST') resp = app.handle_request(req) h = parse_acl(version=2, data=resp.headers.get(ext_header)) self.assertEqual(h['admin'], ['foo']) self.assertEqual(h['read-write'], ['bar']) self.assertEqual(h['read-only'], ['bas']) class FakeObjectController(object): def 
__init__(self): self.app = self self.logger = self self.account_name = 'a' self.container_name = 'c' self.object_name = 'o' self.trans_id = 'tx1' self.object_ring = FakeRing() self.node_timeout = 1 self.rate_limit_after_segment = 3 self.rate_limit_segments_per_sec = 2 self.GETorHEAD_base_args = [] def exception(self, *args): self.exception_args = args self.exception_info = sys.exc_info() def GETorHEAD_base(self, *args): self.GETorHEAD_base_args.append(args) req = args[0] path = args[4] body = data = path[-1] * int(path[-1]) if req.range: r = req.range.ranges_for_length(len(data)) if r: (start, stop) = r[0] body = data[start:stop] resp = Response(app_iter=iter(body)) return resp def iter_nodes(self, ring, partition): for node in ring.get_part_nodes(partition): yield node for node in ring.get_more_nodes(partition): yield node def sort_nodes(self, nodes): return nodes def set_node_timing(self, node, timing): return class TestProxyObjectPerformance(unittest.TestCase): def setUp(self): # This is just a simple test that can be used to verify and debug the # various data paths between the proxy server and the object # server. Used as a play ground to debug buffer sizes for sockets. prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) # Client is transmitting in 2 MB chunks fd = sock.makefile('wb', 2 * 1024 * 1024) # Small, fast for testing obj_len = 2 * 64 * 1024 # Use 1 GB or more for measurements # obj_len = 2 * 512 * 1024 * 1024 self.path = '/v1/a/c/o.large' fd.write('PUT %s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' 'Content-Length: %s\r\n' 'Content-Type: application/octet-stream\r\n' '\r\n' % (self.path, str(obj_len))) fd.write('a' * obj_len) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) self.obj_len = obj_len def test_GET_debug_large_file(self): for i in range(10): start = time.time() prolis = _test_sockets[0] sock = connect_tcp(('localhost', prolis.getsockname()[1])) # Client is reading in 2 MB chunks fd = sock.makefile('wb', 2 * 1024 * 1024) fd.write('GET %s HTTP/1.1\r\n' 'Host: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n' '\r\n' % self.path) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) total = 0 while True: buf = fd.read(100000) if not buf: break total += len(buf) self.assertEqual(total, self.obj_len) end = time.time() print("Run %02d took %07.03f" % (i, end - start)) @patch_policies([StoragePolicy(0, 'migrated', object_ring=FakeRing()), StoragePolicy(1, 'ernie', True, object_ring=FakeRing()), StoragePolicy(2, 'deprecated', is_deprecated=True, object_ring=FakeRing()), StoragePolicy(3, 'bert', object_ring=FakeRing())]) class TestSwiftInfo(unittest.TestCase): def setUp(self): utils._swift_info = {} utils._swift_admin_info = {} def test_registered_defaults(self): proxy_server.Application({}, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) si = utils.get_swift_info()['swift'] self.assertTrue('version' in si) self.assertEqual(si['max_file_size'], constraints.MAX_FILE_SIZE) self.assertEqual(si['max_meta_name_length'], constraints.MAX_META_NAME_LENGTH) self.assertEqual(si['max_meta_value_length'], constraints.MAX_META_VALUE_LENGTH) self.assertEqual(si['max_meta_count'], constraints.MAX_META_COUNT) self.assertEqual(si['max_header_size'], constraints.MAX_HEADER_SIZE) self.assertEqual(si['max_meta_overall_size'], constraints.MAX_META_OVERALL_SIZE) 
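        # Illustrative only; the URL below is an assumption about the
        # deployment, not something this test asserts: these are the same
        # limits a client sees in the proxy's /info document, e.g.
        #
        #   GET /info
        #   {"swift": {"max_file_size": ..., "max_meta_count": ..., ...}}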
self.assertEqual(si['account_listing_limit'], constraints.ACCOUNT_LISTING_LIMIT) self.assertEqual(si['container_listing_limit'], constraints.CONTAINER_LISTING_LIMIT) self.assertEqual(si['max_account_name_length'], constraints.MAX_ACCOUNT_NAME_LENGTH) self.assertEqual(si['max_container_name_length'], constraints.MAX_CONTAINER_NAME_LENGTH) self.assertEqual(si['max_object_name_length'], constraints.MAX_OBJECT_NAME_LENGTH) self.assertTrue('strict_cors_mode' in si) self.assertEqual(si['allow_account_management'], False) self.assertEqual(si['account_autocreate'], False) # This setting is by default excluded by disallowed_sections self.assertEqual(si['valid_api_versions'], constraints.VALID_API_VERSIONS) # this next test is deliberately brittle in order to alert if # other items are added to swift info self.assertEqual(len(si), 18) self.assertTrue('policies' in si) sorted_pols = sorted(si['policies'], key=operator.itemgetter('name')) self.assertEqual(len(sorted_pols), 3) for policy in sorted_pols: self.assertNotEqual(policy['name'], 'deprecated') self.assertEqual(sorted_pols[0]['name'], 'bert') self.assertEqual(sorted_pols[1]['name'], 'ernie') self.assertEqual(sorted_pols[2]['name'], 'migrated') class TestSocketObjectVersions(unittest.TestCase): def setUp(self): global _test_sockets self.prolis = prolis = listen(('localhost', 0)) self._orig_prolis = _test_sockets[0] allowed_headers = ', '.join([ 'content-encoding', 'x-object-manifest', 'content-disposition', 'foo' ]) conf = {'devices': _testdir, 'swift_dir': _testdir, 'mount_check': 'false', 'allowed_headers': allowed_headers} prosrv = versioned_writes.VersionedWritesMiddleware( proxy_logging.ProxyLoggingMiddleware( _test_servers[0], conf, logger=_test_servers[0].logger), {}) self.coro = spawn(wsgi.server, prolis, prosrv, NullLogger()) # replace global prosrv with one that's filtered with version # middleware self.sockets = list(_test_sockets) self.sockets[0] = prolis _test_sockets = tuple(self.sockets) def tearDown(self): self.coro.kill() # put the global state back global _test_sockets self.sockets[0] = self._orig_prolis _test_sockets = tuple(self.sockets) def test_version_manifest(self, oc='versions', vc='vers', o='name'): versions_to_create = 3 # Create a container for our versioned object testing (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis) = _test_sockets pre = quote('%03x' % len(o)) osub = '%s/sub' % o presub = quote('%03x' % len(osub)) osub = quote(osub) presub = quote(presub) oc = quote(oc) vc = quote(vc) def put_container(): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'Content-Length: 0\r\nX-Versions-Location: %s\r\n\r\n' % (oc, vc)) fd.flush() headers = readuntil2crlfs(fd) fd.read() return headers headers = put_container() exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) def get_container(): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n\r\n\r\n' % oc) fd.flush() headers = readuntil2crlfs(fd) body = fd.read() return headers, body # check that the header was set headers, body = get_container() exp = 'HTTP/1.1 2' # 2xx series response self.assertEqual(headers[:len(exp)], exp) self.assertIn('X-Versions-Location: %s' % vc, headers) def put_version_container(): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = 
sock.makefile() fd.write('PUT /v1/a/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'Content-Length: 0\r\n\r\n' % vc) fd.flush() headers = readuntil2crlfs(fd) fd.read() return headers # make the container for the object versions headers = put_version_container() exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) def put(version): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Storage-Token: ' 't\r\nContent-Length: 5\r\nContent-Type: text/jibberish%s' '\r\n\r\n%05d\r\n' % (oc, o, version, version)) fd.flush() headers = readuntil2crlfs(fd) fd.read() return headers def get(container=oc, obj=o): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Auth-Token: t\r\n' '\r\n' % (container, obj)) fd.flush() headers = readuntil2crlfs(fd) body = fd.read() return headers, body # Create the versioned file headers = put(0) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # Create the object versions for version in range(1, versions_to_create): sleep(.01) # guarantee that the timestamp changes headers = put(version) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # Ensure retrieving the manifest file gets the latest version headers, body = get() exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) self.assertIn('Content-Type: text/jibberish%s' % version, headers) self.assertNotIn('X-Object-Meta-Foo: barbaz', headers) self.assertEqual(body, '%05d' % version) def get_version_container(): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\n' 'X-Storage-Token: t\r\n\r\n' % vc) fd.flush() headers = readuntil2crlfs(fd) body = fd.read() return headers, body # Ensure we have the right number of versions saved headers, body = get_version_container() exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) versions = [x for x in body.split('\n') if x] self.assertEqual(len(versions), versions_to_create - 1) def delete(): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('DELETE /v1/a/%s/%s HTTP/1.1\r\nHost: localhost\r' '\nConnection: close\r\nX-Storage-Token: t\r\n\r\n' % (oc, o)) fd.flush() headers = readuntil2crlfs(fd) fd.read() return headers def copy(): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('COPY /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Auth-Token: ' 't\r\nDestination: %s/copied_name\r\n' 'Content-Length: 0\r\n\r\n' % (oc, o, oc)) fd.flush() headers = readuntil2crlfs(fd) fd.read() return headers # copy a version and make sure the version info is stripped headers = copy() exp = 'HTTP/1.1 2' # 2xx series response to the COPY self.assertEqual(headers[:len(exp)], exp) def get_copy(): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s/copied_name HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\n' 'X-Auth-Token: t\r\n\r\n' % oc) fd.flush() headers = readuntil2crlfs(fd) body = fd.read() return headers, body headers, body = get_copy() exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) self.assertEqual(body, '%05d' % version) def post(): sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() 
fd.write('POST /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Auth-Token: ' 't\r\nContent-Type: foo/bar\r\nContent-Length: 0\r\n' 'X-Object-Meta-Bar: foo\r\n\r\n' % (oc, o)) fd.flush() headers = readuntil2crlfs(fd) fd.read() return headers # post and make sure it's updated headers = post() exp = 'HTTP/1.1 2' # 2xx series response to the POST self.assertEqual(headers[:len(exp)], exp) headers, body = get() self.assertIn('Content-Type: foo/bar', headers) self.assertIn('X-Object-Meta-Bar: foo', headers) self.assertEqual(body, '%05d' % version) # check container listing headers, body = get_container() exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) # Delete the object versions for segment in range(versions_to_create - 1, 0, -1): headers = delete() exp = 'HTTP/1.1 2' # 2xx series response self.assertEqual(headers[:len(exp)], exp) # Ensure retrieving the manifest file gets the latest version headers, body = get() exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) self.assertIn('Content-Type: text/jibberish%s' % (segment - 1), headers) self.assertEqual(body, '%05d' % (segment - 1)) # Ensure we have the right number of versions saved sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s?prefix=%s%s/ HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Auth-Token: t\r\n\r' '\n' % (vc, pre, o)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 2' # 2xx series response self.assertEqual(headers[:len(exp)], exp) body = fd.read() versions = [x for x in body.split('\n') if x] self.assertEqual(len(versions), segment - 1) # there is now one version left (in the manifest) # Ensure we have no saved versions sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s?prefix=%s%s/ HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Auth-Token: t\r\n\r\n' % (vc, pre, o)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 204 No Content' self.assertEqual(headers[:len(exp)], exp) # delete the last version sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('DELETE /v1/a/%s/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n\r\n' % (oc, o)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 2' # 2xx series response self.assertEqual(headers[:len(exp)], exp) # Ensure it's all gone sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Auth-Token: t\r\n\r\n' % (oc, o)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 404' self.assertEqual(headers[:len(exp)], exp) # make sure manifest files will be ignored for _junk in range(1, versions_to_create): sleep(.01) # guarantee that the timestamp changes sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Storage-Token: ' 't\r\nContent-Length: 0\r\n' 'Content-Type: text/jibberish0\r\n' 'Foo: barbaz\r\nX-Object-Manifest: %s/%s/\r\n\r\n' % (oc, o, oc, o)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s?prefix=%s%s/ HTTP/1.1\r\nhost: ' 'localhost\r\nconnection: close\r\nx-auth-token: t\r\n\r\n' % (vc, pre, o)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 204 
No Content' self.assertEqual(headers[:len(exp)], exp) # DELETE v1/a/c/obj shouldn't delete v1/a/c/obj/sub versions sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Storage-Token: ' 't\r\nContent-Length: 5\r\nContent-Type: text/jibberish0\r\n' 'Foo: barbaz\r\n\r\n00000\r\n' % (oc, o)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Storage-Token: ' 't\r\nContent-Length: 5\r\nContent-Type: text/jibberish0\r\n' 'Foo: barbaz\r\n\r\n00001\r\n' % (oc, o)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Storage-Token: ' 't\r\nContent-Length: 4\r\nContent-Type: text/jibberish0\r\n' 'Foo: barbaz\r\n\r\nsub1\r\n' % (oc, osub)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%s/%s HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Storage-Token: ' 't\r\nContent-Length: 4\r\nContent-Type: text/jibberish0\r\n' 'Foo: barbaz\r\n\r\nsub2\r\n' % (oc, osub)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('DELETE /v1/a/%s/%s HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n\r\n' % (oc, o)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 2' # 2xx series response self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('GET /v1/a/%s?prefix=%s%s/ HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Auth-Token: t\r\n\r\n' % (vc, presub, osub)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 2' # 2xx series response self.assertEqual(headers[:len(exp)], exp) body = fd.read() versions = [x for x in body.split('\n') if x] self.assertEqual(len(versions), 1) # Check for when the versions target container doesn't exist sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%swhoops HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n' 'Content-Length: 0\r\nX-Versions-Location: none\r\n\r\n' % oc) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # Create the versioned file sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%swhoops/foo HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Storage-Token: ' 't\r\nContent-Length: 5\r\n\r\n00000\r\n' % oc) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) # Create another version sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('PUT /v1/a/%swhoops/foo HTTP/1.1\r\nHost: ' 'localhost\r\nConnection: close\r\nX-Storage-Token: ' 't\r\nContent-Length: 5\r\n\r\n00001\r\n' % oc) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 412' 
self.assertEqual(headers[:len(exp)], exp) # Delete the object sock = connect_tcp(('localhost', prolis.getsockname()[1])) fd = sock.makefile() fd.write('DELETE /v1/a/%swhoops/foo HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\nX-Storage-Token: t\r\n\r\n' % oc) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 2' # 2xx response self.assertEqual(headers[:len(exp)], exp) def test_version_manifest_utf8(self): oc = '0_oc_non_ascii\xc2\xa3' vc = '0_vc_non_ascii\xc2\xa3' o = '0_o_non_ascii\xc2\xa3' self.test_version_manifest(oc, vc, o) def test_version_manifest_utf8_container(self): oc = '1_oc_non_ascii\xc2\xa3' vc = '1_vc_ascii' o = '1_o_ascii' self.test_version_manifest(oc, vc, o) def test_version_manifest_utf8_version_container(self): oc = '2_oc_ascii' vc = '2_vc_non_ascii\xc2\xa3' o = '2_o_ascii' self.test_version_manifest(oc, vc, o) def test_version_manifest_utf8_containers(self): oc = '3_oc_non_ascii\xc2\xa3' vc = '3_vc_non_ascii\xc2\xa3' o = '3_o_ascii' self.test_version_manifest(oc, vc, o) def test_version_manifest_utf8_object(self): oc = '4_oc_ascii' vc = '4_vc_ascii' o = '4_o_non_ascii\xc2\xa3' self.test_version_manifest(oc, vc, o) def test_version_manifest_utf8_version_container_utf_object(self): oc = '5_oc_ascii' vc = '5_vc_non_ascii\xc2\xa3' o = '5_o_non_ascii\xc2\xa3' self.test_version_manifest(oc, vc, o) def test_version_manifest_utf8_container_utf_object(self): oc = '6_oc_non_ascii\xc2\xa3' vc = '6_vc_ascii' o = '6_o_non_ascii\xc2\xa3' self.test_version_manifest(oc, vc, o) if __name__ == '__main__': setup() try: unittest.main() finally: teardown() swift-2.7.1/test/unit/proxy/test_sysmeta.py0000664000567000056710000003724713024044354022247 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from six.moves.urllib.parse import quote import unittest import os from tempfile import mkdtemp import shutil from swift.common.storage_policy import StoragePolicy from swift.common.swob import Request from swift.common.utils import mkdirs, split_path from swift.common.wsgi import monkey_patch_mimetools, WSGIContext from swift.obj import server as object_server from swift.proxy import server as proxy import swift.proxy.controllers from test.unit import FakeMemcache, debug_logger, FakeRing, \ fake_http_connect, patch_policies class FakeServerConnection(WSGIContext): '''Fakes an HTTPConnection to a server instance.''' def __init__(self, app): super(FakeServerConnection, self).__init__(app) self.data = '' def getheaders(self): return self._response_headers def read(self, amt=None): try: result = next(self.resp_iter) return result except StopIteration: return '' def getheader(self, name, default=None): result = self._response_header_value(name) return result if result else default def getresponse(self): environ = {'REQUEST_METHOD': self.method} req = Request.blank(self.path, environ, headers=self.req_headers, body=self.data) self.data = '' self.resp = self._app_call(req.environ) self.resp_iter = iter(self.resp) if self._response_headers is None: self._response_headers = [] status_parts = self._response_status.split(' ', 1) self.status = int(status_parts[0]) self.reason = status_parts[1] if len(status_parts) == 2 else '' return self def getexpect(self): class ContinueResponse(object): status = 100 return ContinueResponse() def send(self, data): self.data += data def close(self): pass def __call__(self, ipaddr, port, device, partition, method, path, headers=None, query_string=None): self.path = quote('/' + device + '/' + str(partition) + path) self.method = method self.req_headers = headers return self def get_http_connect(account_func, container_func, object_func): '''Returns a http_connect function that delegates to entity-specific http_connect methods based on request path. ''' def http_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): a, c, o = split_path(path, 1, 3, True) if o: func = object_func elif c: func = container_func else: func = account_func resp = func(ipaddr, port, device, partition, method, path, headers=headers, query_string=query_string) return resp return http_connect @patch_policies([StoragePolicy(0, 'zero', True, object_ring=FakeRing(replicas=1))]) class TestObjectSysmeta(unittest.TestCase): '''Tests object sysmeta is correctly handled by combination of proxy server and object server. 
''' def _assertStatus(self, resp, expected): self.assertEqual(resp.status_int, expected, 'Expected %d, got %s' % (expected, resp.status)) def _assertInHeaders(self, resp, expected): for key, val in expected.items(): self.assertTrue(key in resp.headers, 'Header %s missing from %s' % (key, resp.headers)) self.assertEqual(val, resp.headers[key], 'Expected header %s:%s, got %s:%s' % (key, val, key, resp.headers[key])) def _assertNotInHeaders(self, resp, unexpected): for key, val in unexpected.items(): self.assertFalse(key in resp.headers, 'Header %s not expected in %s' % (key, resp.headers)) def setUp(self): self.app = proxy.Application(None, FakeMemcache(), logger=debug_logger('proxy-ut'), account_ring=FakeRing(replicas=1), container_ring=FakeRing(replicas=1)) monkey_patch_mimetools() self.tmpdir = mkdtemp() self.testdir = os.path.join(self.tmpdir, 'tmp_test_object_server_ObjectController') mkdirs(os.path.join(self.testdir, 'sda', 'tmp')) conf = {'devices': self.testdir, 'mount_check': 'false'} self.obj_ctlr = object_server.ObjectController( conf, logger=debug_logger('obj-ut')) http_connect = get_http_connect(fake_http_connect(200), fake_http_connect(200), FakeServerConnection(self.obj_ctlr)) self.orig_base_http_connect = swift.proxy.controllers.base.http_connect self.orig_obj_http_connect = swift.proxy.controllers.obj.http_connect swift.proxy.controllers.base.http_connect = http_connect swift.proxy.controllers.obj.http_connect = http_connect def tearDown(self): shutil.rmtree(self.tmpdir) swift.proxy.controllers.base.http_connect = self.orig_base_http_connect swift.proxy.controllers.obj.http_connect = self.orig_obj_http_connect original_sysmeta_headers_1 = {'x-object-sysmeta-test0': 'val0', 'x-object-sysmeta-test1': 'val1'} original_sysmeta_headers_2 = {'x-object-sysmeta-test2': 'val2'} changed_sysmeta_headers = {'x-object-sysmeta-test0': '', 'x-object-sysmeta-test1': 'val1 changed'} new_sysmeta_headers = {'x-object-sysmeta-test3': 'val3'} original_meta_headers_1 = {'x-object-meta-test0': 'meta0', 'x-object-meta-test1': 'meta1'} original_meta_headers_2 = {'x-object-meta-test2': 'meta2'} changed_meta_headers = {'x-object-meta-test0': '', 'x-object-meta-test1': 'meta1 changed'} new_meta_headers = {'x-object-meta-test3': 'meta3'} bad_headers = {'x-account-sysmeta-test1': 'bad1'} def test_PUT_sysmeta_then_GET(self): path = '/v1/a/c/o' env = {'REQUEST_METHOD': 'PUT'} hdrs = dict(self.original_sysmeta_headers_1) hdrs.update(self.original_meta_headers_1) hdrs.update(self.bad_headers) req = Request.blank(path, environ=env, headers=hdrs, body='x') resp = req.get_response(self.app) self._assertStatus(resp, 201) req = Request.blank(path, environ={}) resp = req.get_response(self.app) self._assertStatus(resp, 200) self._assertInHeaders(resp, self.original_sysmeta_headers_1) self._assertInHeaders(resp, self.original_meta_headers_1) self._assertNotInHeaders(resp, self.bad_headers) def test_PUT_sysmeta_then_HEAD(self): path = '/v1/a/c/o' env = {'REQUEST_METHOD': 'PUT'} hdrs = dict(self.original_sysmeta_headers_1) hdrs.update(self.original_meta_headers_1) hdrs.update(self.bad_headers) req = Request.blank(path, environ=env, headers=hdrs, body='x') resp = req.get_response(self.app) self._assertStatus(resp, 201) env = {'REQUEST_METHOD': 'HEAD'} req = Request.blank(path, environ=env) resp = req.get_response(self.app) self._assertStatus(resp, 200) self._assertInHeaders(resp, self.original_sysmeta_headers_1) self._assertInHeaders(resp, self.original_meta_headers_1) self._assertNotInHeaders(resp, self.bad_headers) 
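    # The contract the next few tests exercise, stated in header terms (the
    # header names and values here are only examples):
    #
    #   X-Object-Meta-Color: red       # user metadata: replaced as a whole
    #                                  # by every POST
    #   X-Object-Sysmeta-Foo: bar      # system metadata: set on PUT/COPY,
    #                                  # carried through unchanged by POST
    #
    # so a later PUT may overwrite sysmeta, but a POST must leave it intact.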
def test_sysmeta_replaced_by_PUT(self): path = '/v1/a/c/o' env = {'REQUEST_METHOD': 'PUT'} hdrs = dict(self.original_sysmeta_headers_1) hdrs.update(self.original_sysmeta_headers_2) hdrs.update(self.original_meta_headers_1) hdrs.update(self.original_meta_headers_2) req = Request.blank(path, environ=env, headers=hdrs, body='x') resp = req.get_response(self.app) self._assertStatus(resp, 201) env = {'REQUEST_METHOD': 'PUT'} hdrs = dict(self.changed_sysmeta_headers) hdrs.update(self.new_sysmeta_headers) hdrs.update(self.changed_meta_headers) hdrs.update(self.new_meta_headers) hdrs.update(self.bad_headers) req = Request.blank(path, environ=env, headers=hdrs, body='x') resp = req.get_response(self.app) self._assertStatus(resp, 201) req = Request.blank(path, environ={}) resp = req.get_response(self.app) self._assertStatus(resp, 200) self._assertInHeaders(resp, self.changed_sysmeta_headers) self._assertInHeaders(resp, self.new_sysmeta_headers) self._assertNotInHeaders(resp, self.original_sysmeta_headers_2) self._assertInHeaders(resp, self.changed_meta_headers) self._assertInHeaders(resp, self.new_meta_headers) self._assertNotInHeaders(resp, self.original_meta_headers_2) def _test_sysmeta_not_updated_by_POST(self): # check sysmeta is not changed by a POST but user meta is replaced path = '/v1/a/c/o' env = {'REQUEST_METHOD': 'PUT'} hdrs = dict(self.original_sysmeta_headers_1) hdrs.update(self.original_meta_headers_1) req = Request.blank(path, environ=env, headers=hdrs, body='x') resp = req.get_response(self.app) self._assertStatus(resp, 201) env = {'REQUEST_METHOD': 'POST'} hdrs = dict(self.changed_sysmeta_headers) hdrs.update(self.new_sysmeta_headers) hdrs.update(self.changed_meta_headers) hdrs.update(self.new_meta_headers) hdrs.update(self.bad_headers) req = Request.blank(path, environ=env, headers=hdrs) resp = req.get_response(self.app) self._assertStatus(resp, 202) req = Request.blank(path, environ={}) resp = req.get_response(self.app) self._assertStatus(resp, 200) self._assertInHeaders(resp, self.original_sysmeta_headers_1) self._assertNotInHeaders(resp, self.new_sysmeta_headers) self._assertInHeaders(resp, self.changed_meta_headers) self._assertInHeaders(resp, self.new_meta_headers) self._assertNotInHeaders(resp, self.bad_headers) env = {'REQUEST_METHOD': 'PUT'} hdrs = dict(self.changed_sysmeta_headers) hdrs.update(self.new_sysmeta_headers) hdrs.update(self.bad_headers) req = Request.blank(path, environ=env, headers=hdrs, body='x') resp = req.get_response(self.app) self._assertStatus(resp, 201) req = Request.blank(path, environ={}) resp = req.get_response(self.app) self._assertStatus(resp, 200) self._assertInHeaders(resp, self.changed_sysmeta_headers) self._assertInHeaders(resp, self.new_sysmeta_headers) self._assertNotInHeaders(resp, self.original_sysmeta_headers_2) def test_sysmeta_not_updated_by_POST(self): self.app.object_post_as_copy = False self._test_sysmeta_not_updated_by_POST() def test_sysmeta_not_updated_by_POST_as_copy(self): self.app.object_post_as_copy = True self._test_sysmeta_not_updated_by_POST() def test_sysmeta_updated_by_COPY(self): # check sysmeta is updated by a COPY in same way as user meta path = '/v1/a/c/o' dest = '/c/o2' env = {'REQUEST_METHOD': 'PUT'} hdrs = dict(self.original_sysmeta_headers_1) hdrs.update(self.original_sysmeta_headers_2) hdrs.update(self.original_meta_headers_1) hdrs.update(self.original_meta_headers_2) req = Request.blank(path, environ=env, headers=hdrs, body='x') resp = req.get_response(self.app) self._assertStatus(resp, 201) env = 
{'REQUEST_METHOD': 'COPY'} hdrs = dict(self.changed_sysmeta_headers) hdrs.update(self.new_sysmeta_headers) hdrs.update(self.changed_meta_headers) hdrs.update(self.new_meta_headers) hdrs.update(self.bad_headers) hdrs.update({'Destination': dest}) req = Request.blank(path, environ=env, headers=hdrs) resp = req.get_response(self.app) self._assertStatus(resp, 201) self._assertInHeaders(resp, self.changed_sysmeta_headers) self._assertInHeaders(resp, self.new_sysmeta_headers) self._assertInHeaders(resp, self.original_sysmeta_headers_2) self._assertInHeaders(resp, self.changed_meta_headers) self._assertInHeaders(resp, self.new_meta_headers) self._assertInHeaders(resp, self.original_meta_headers_2) self._assertNotInHeaders(resp, self.bad_headers) req = Request.blank('/v1/a/c/o2', environ={}) resp = req.get_response(self.app) self._assertStatus(resp, 200) self._assertInHeaders(resp, self.changed_sysmeta_headers) self._assertInHeaders(resp, self.new_sysmeta_headers) self._assertInHeaders(resp, self.original_sysmeta_headers_2) self._assertInHeaders(resp, self.changed_meta_headers) self._assertInHeaders(resp, self.new_meta_headers) self._assertInHeaders(resp, self.original_meta_headers_2) self._assertNotInHeaders(resp, self.bad_headers) def test_sysmeta_updated_by_COPY_from(self): # check sysmeta is updated by a COPY in same way as user meta path = '/v1/a/c/o' env = {'REQUEST_METHOD': 'PUT'} hdrs = dict(self.original_sysmeta_headers_1) hdrs.update(self.original_sysmeta_headers_2) hdrs.update(self.original_meta_headers_1) hdrs.update(self.original_meta_headers_2) req = Request.blank(path, environ=env, headers=hdrs, body='x') resp = req.get_response(self.app) self._assertStatus(resp, 201) env = {'REQUEST_METHOD': 'PUT'} hdrs = dict(self.changed_sysmeta_headers) hdrs.update(self.new_sysmeta_headers) hdrs.update(self.changed_meta_headers) hdrs.update(self.new_meta_headers) hdrs.update(self.bad_headers) hdrs.update({'X-Copy-From': '/c/o'}) req = Request.blank('/v1/a/c/o2', environ=env, headers=hdrs, body='') resp = req.get_response(self.app) self._assertStatus(resp, 201) self._assertInHeaders(resp, self.changed_sysmeta_headers) self._assertInHeaders(resp, self.new_sysmeta_headers) self._assertInHeaders(resp, self.original_sysmeta_headers_2) self._assertInHeaders(resp, self.changed_meta_headers) self._assertInHeaders(resp, self.new_meta_headers) self._assertInHeaders(resp, self.original_meta_headers_2) self._assertNotInHeaders(resp, self.bad_headers) req = Request.blank('/v1/a/c/o2', environ={}) resp = req.get_response(self.app) self._assertStatus(resp, 200) self._assertInHeaders(resp, self.changed_sysmeta_headers) self._assertInHeaders(resp, self.new_sysmeta_headers) self._assertInHeaders(resp, self.original_sysmeta_headers_2) self._assertInHeaders(resp, self.changed_meta_headers) self._assertInHeaders(resp, self.new_meta_headers) self._assertInHeaders(resp, self.original_meta_headers_2) self._assertNotInHeaders(resp, self.bad_headers) swift-2.7.1/test/unit/proxy/controllers/0000775000567000056710000000000013024044470021501 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/proxy/controllers/__init__.py0000664000567000056710000000000013024044352023577 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/proxy/controllers/test_obj.py0000775000567000056710000031422613024044354023700 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in 
compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import email.parser import itertools import random import time import unittest from collections import defaultdict from contextlib import contextmanager import json from hashlib import md5 import mock from eventlet import Timeout from six import BytesIO from six.moves import range import swift from swift.common import utils, swob, exceptions from swift.common.header_key_dict import HeaderKeyDict from swift.proxy import server as proxy_server from swift.proxy.controllers import obj from swift.proxy.controllers.base import get_info as _real_get_info from swift.common.storage_policy import POLICIES, ECDriverError, StoragePolicy from test.unit import FakeRing, FakeMemcache, fake_http_connect, \ debug_logger, patch_policies, SlowBody, FakeStatus, \ encode_frag_archive_bodies from test.unit.proxy.test_server import node_error_count def unchunk_body(chunked_body): body = '' remaining = chunked_body while remaining: hex_length, remaining = remaining.split('\r\n', 1) length = int(hex_length, 16) body += remaining[:length] remaining = remaining[length + 2:] return body @contextmanager def set_http_connect(*args, **kwargs): old_connect = swift.proxy.controllers.base.http_connect new_connect = fake_http_connect(*args, **kwargs) try: swift.proxy.controllers.base.http_connect = new_connect swift.proxy.controllers.obj.http_connect = new_connect swift.proxy.controllers.account.http_connect = new_connect swift.proxy.controllers.container.http_connect = new_connect yield new_connect left_over_status = list(new_connect.code_iter) if left_over_status: raise AssertionError('left over status %r' % left_over_status) finally: swift.proxy.controllers.base.http_connect = old_connect swift.proxy.controllers.obj.http_connect = old_connect swift.proxy.controllers.account.http_connect = old_connect swift.proxy.controllers.container.http_connect = old_connect class PatchedObjControllerApp(proxy_server.Application): """ This patch is just a hook over the proxy server's __call__ to ensure that calls to get_info will return the stubbed value for container_info if it's a container info call. 
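    Tests poke the stubbed data directly onto the application instance,
    for example::

        self.app.container_info = {'storage_policy': '0', ...}
        self.app.per_container_info = {'c1': {...}, 'c2': {...}}

    per_container_info is consulted first for the named containers, any
    other container falls back to container_info, and account-level calls
    still go through the real get_info.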
""" container_info = {} per_container_info = {} def __call__(self, *args, **kwargs): def _fake_get_info(app, env, account, container=None, **kwargs): if container: if container in self.per_container_info: return self.per_container_info[container] return self.container_info else: return _real_get_info(app, env, account, container, **kwargs) mock_path = 'swift.proxy.controllers.base.get_info' with mock.patch(mock_path, new=_fake_get_info): return super( PatchedObjControllerApp, self).__call__(*args, **kwargs) class BaseObjectControllerMixin(object): container_info = { 'write_acl': None, 'read_acl': None, 'storage_policy': None, 'sync_key': None, 'versions': None, } # this needs to be set on the test case controller_cls = None def setUp(self): # setup fake rings with handoffs for policy in POLICIES: policy.object_ring.max_more_nodes = policy.object_ring.replicas self.logger = debug_logger('proxy-server') self.logger.thread_locals = ('txn1', '127.0.0.2') self.app = PatchedObjControllerApp( None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing(), logger=self.logger) # you can over-ride the container_info just by setting it on the app self.app.container_info = dict(self.container_info) # default policy and ring references self.policy = POLICIES.default self.obj_ring = self.policy.object_ring self._ts_iter = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) def ts(self): return next(self._ts_iter) def replicas(self, policy=None): policy = policy or POLICIES.default return policy.object_ring.replicas def quorum(self, policy=None): policy = policy or POLICIES.default return policy.quorum def test_iter_nodes_local_first_noops_when_no_affinity(self): # this test needs a stable node order - most don't self.app.sort_nodes = lambda l: l controller = self.controller_cls( self.app, 'a', 'c', 'o') self.app.write_affinity_is_local_fn = None object_ring = self.app.get_object_ring(None) all_nodes = object_ring.get_part_nodes(1) all_nodes.extend(object_ring.get_more_nodes(1)) local_first_nodes = list(controller.iter_nodes_local_first( object_ring, 1)) self.maxDiff = None self.assertEqual(all_nodes, local_first_nodes) def test_iter_nodes_local_first_moves_locals_first(self): controller = self.controller_cls( self.app, 'a', 'c', 'o') self.app.write_affinity_is_local_fn = ( lambda node: node['region'] == 1) # we'll write to one more than replica count local nodes self.app.write_affinity_node_count = lambda r: r + 1 object_ring = self.app.get_object_ring(None) # make our fake ring have plenty of nodes, and not get limited # artificially by the proxy max request node count object_ring.max_more_nodes = 100000 # nothing magic about * 2 + 3, just a way to make it bigger self.app.request_node_count = lambda r: r * 2 + 3 all_nodes = object_ring.get_part_nodes(1) all_nodes.extend(object_ring.get_more_nodes(1)) # limit to the number we're going to look at in this request nodes_requested = self.app.request_node_count(object_ring.replicas) all_nodes = all_nodes[:nodes_requested] # make sure we have enough local nodes (sanity) all_local_nodes = [n for n in all_nodes if self.app.write_affinity_is_local_fn(n)] self.assertTrue(len(all_local_nodes) >= self.replicas() + 1) # finally, create the local_first_nodes iter and flatten it out local_first_nodes = list(controller.iter_nodes_local_first( object_ring, 1)) # the local nodes move up in the ordering self.assertEqual([1] * (self.replicas() + 1), [ node['region'] for node in local_first_nodes[ :self.replicas() + 1]]) # we don't skip any nodes 
self.assertEqual(len(all_nodes), len(local_first_nodes)) self.assertEqual(sorted(all_nodes), sorted(local_first_nodes)) def test_iter_nodes_local_first_best_effort(self): controller = self.controller_cls( self.app, 'a', 'c', 'o') self.app.write_affinity_is_local_fn = ( lambda node: node['region'] == 1) object_ring = self.app.get_object_ring(None) all_nodes = object_ring.get_part_nodes(1) all_nodes.extend(object_ring.get_more_nodes(1)) local_first_nodes = list(controller.iter_nodes_local_first( object_ring, 1)) # we won't have quite enough local nodes... self.assertEqual(len(all_nodes), self.replicas() + POLICIES.default.object_ring.max_more_nodes) all_local_nodes = [n for n in all_nodes if self.app.write_affinity_is_local_fn(n)] self.assertEqual(len(all_local_nodes), self.replicas()) # but the local nodes we do have are at the front of the local iter first_n_local_first_nodes = local_first_nodes[:len(all_local_nodes)] self.assertEqual(sorted(all_local_nodes), sorted(first_n_local_first_nodes)) # but we *still* don't *skip* any nodes self.assertEqual(len(all_nodes), len(local_first_nodes)) self.assertEqual(sorted(all_nodes), sorted(local_first_nodes)) def test_connect_put_node_timeout(self): controller = self.controller_cls( self.app, 'a', 'c', 'o') self.app.conn_timeout = 0.05 with set_http_connect(slow_connect=True): nodes = [dict(ip='', port='', device='')] res = controller._connect_put_node(nodes, '', '', {}, ('', '')) self.assertTrue(res is None) def test_DELETE_simple(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE') codes = [204] * self.replicas() with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 204) def test_DELETE_missing_one(self): # Obviously this test doesn't work if we're testing 1 replica. # In that case, we don't have any failovers to check. if self.replicas() == 1: return req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE') codes = [404] + [204] * (self.replicas() - 1) random.shuffle(codes) with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 204) def test_DELETE_not_found(self): # Obviously this test doesn't work if we're testing 1 replica. # In that case, we don't have any failovers to check. 
if self.replicas() == 1: return req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE') codes = [404] * (self.replicas() - 1) + [204] with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 404) def test_DELETE_mostly_found(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE') mostly_204s = [204] * self.quorum() codes = mostly_204s + [404] * (self.replicas() - len(mostly_204s)) self.assertEqual(len(codes), self.replicas()) with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 204) def test_DELETE_mostly_not_found(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE') mostly_404s = [404] * self.quorum() codes = mostly_404s + [204] * (self.replicas() - len(mostly_404s)) self.assertEqual(len(codes), self.replicas()) with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 404) def test_DELETE_half_not_found_statuses(self): self.obj_ring.set_replicas(4) req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE') with set_http_connect(404, 204, 404, 204): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 204) def test_DELETE_half_not_found_headers_and_body(self): # Transformed responses have bogus bodies and headers, so make sure we # send the client headers and body from a real node's response. self.obj_ring.set_replicas(4) status_codes = (404, 404, 204, 204) bodies = ('not found', 'not found', '', '') headers = [{}, {}, {'Pick-Me': 'yes'}, {'Pick-Me': 'yes'}] req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE') with set_http_connect(*status_codes, body_iter=bodies, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('Pick-Me'), 'yes') self.assertEqual(resp.body, '') def test_DELETE_handoff(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE') codes = [204] * self.replicas() with set_http_connect(507, *codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 204) def test_POST_non_int_delete_after(self): t = str(int(time.time() + 100)) + '.1' req = swob.Request.blank('/v1/a/c/o', method='POST', headers={'Content-Type': 'foo/bar', 'X-Delete-After': t}) resp = req.get_response(self.app) self.assertEqual(resp.status_int, 400) self.assertEqual('Non-integer X-Delete-After', resp.body) def test_PUT_non_int_delete_after(self): t = str(int(time.time() + 100)) + '.1' req = swob.Request.blank('/v1/a/c/o', method='PUT', body='', headers={'Content-Type': 'foo/bar', 'X-Delete-After': t}) with set_http_connect(): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 400) self.assertEqual('Non-integer X-Delete-After', resp.body) def test_POST_negative_delete_after(self): req = swob.Request.blank('/v1/a/c/o', method='POST', headers={'Content-Type': 'foo/bar', 'X-Delete-After': '-60'}) resp = req.get_response(self.app) self.assertEqual(resp.status_int, 400) self.assertEqual('X-Delete-After in past', resp.body) def test_PUT_negative_delete_after(self): req = swob.Request.blank('/v1/a/c/o', method='PUT', body='', headers={'Content-Type': 'foo/bar', 'X-Delete-After': '-60'}) with set_http_connect(): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 400) self.assertEqual('X-Delete-After in past', resp.body) def test_POST_delete_at_non_integer(self): t = str(int(time.time() + 100)) + '.1' req = swob.Request.blank('/v1/a/c/o', method='POST', 
headers={'Content-Type': 'foo/bar', 'X-Delete-At': t}) resp = req.get_response(self.app) self.assertEqual(resp.status_int, 400) self.assertEqual('Non-integer X-Delete-At', resp.body) def test_PUT_delete_at_non_integer(self): t = str(int(time.time() - 100)) + '.1' req = swob.Request.blank('/v1/a/c/o', method='PUT', body='', headers={'Content-Type': 'foo/bar', 'X-Delete-At': t}) with set_http_connect(): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 400) self.assertEqual('Non-integer X-Delete-At', resp.body) def test_POST_delete_at_in_past(self): t = str(int(time.time() - 100)) req = swob.Request.blank('/v1/a/c/o', method='POST', headers={'Content-Type': 'foo/bar', 'X-Delete-At': t}) resp = req.get_response(self.app) self.assertEqual(resp.status_int, 400) self.assertEqual('X-Delete-At in past', resp.body) def test_PUT_delete_at_in_past(self): t = str(int(time.time() - 100)) req = swob.Request.blank('/v1/a/c/o', method='PUT', body='', headers={'Content-Type': 'foo/bar', 'X-Delete-At': t}) with set_http_connect(): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 400) self.assertEqual('X-Delete-At in past', resp.body) def test_HEAD_simple(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='HEAD') with set_http_connect(200): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertIn('Accept-Ranges', resp.headers) def test_HEAD_x_newest(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='HEAD', headers={'X-Newest': 'true'}) with set_http_connect(*([200] * self.replicas())): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) def test_HEAD_x_newest_different_timestamps(self): req = swob.Request.blank('/v1/a/c/o', method='HEAD', headers={'X-Newest': 'true'}) ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) timestamps = [next(ts) for i in range(self.replicas())] newest_timestamp = timestamps[-1] random.shuffle(timestamps) backend_response_headers = [{ 'X-Backend-Timestamp': t.internal, 'X-Timestamp': t.normal } for t in timestamps] with set_http_connect(*([200] * self.replicas()), headers=backend_response_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['x-timestamp'], newest_timestamp.normal) def test_HEAD_x_newest_with_two_vector_timestamps(self): req = swob.Request.blank('/v1/a/c/o', method='HEAD', headers={'X-Newest': 'true'}) ts = (utils.Timestamp(time.time(), offset=offset) for offset in itertools.count()) timestamps = [next(ts) for i in range(self.replicas())] newest_timestamp = timestamps[-1] random.shuffle(timestamps) backend_response_headers = [{ 'X-Backend-Timestamp': t.internal, 'X-Timestamp': t.normal } for t in timestamps] with set_http_connect(*([200] * self.replicas()), headers=backend_response_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['x-backend-timestamp'], newest_timestamp.internal) def test_HEAD_x_newest_with_some_missing(self): req = swob.Request.blank('/v1/a/c/o', method='HEAD', headers={'X-Newest': 'true'}) ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) request_count = self.app.request_node_count(self.obj_ring.replicas) backend_response_headers = [{ 'x-timestamp': next(ts).normal, } for i in range(request_count)] responses = [404] * (request_count - 1) responses.append(200) request_log = [] def capture_requests(ip, port, device, part, method, path, headers=None, **kwargs): req = { 'ip': 
ip, 'port': port, 'device': device, 'part': part, 'method': method, 'path': path, 'headers': headers, } request_log.append(req) with set_http_connect(*responses, headers=backend_response_headers, give_connect=capture_requests): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) for req in request_log: self.assertEqual(req['method'], 'HEAD') self.assertEqual(req['path'], '/a/c/o') def test_container_sync_delete(self): ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) test_indexes = [None] + [int(p) for p in POLICIES] for policy_index in test_indexes: req = swob.Request.blank( '/v1/a/c/o', method='DELETE', headers={ 'X-Timestamp': next(ts).internal}) codes = [409] * self.obj_ring.replicas ts_iter = itertools.repeat(next(ts).internal) with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 409) def test_PUT_requires_length(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT') resp = req.get_response(self.app) self.assertEqual(resp.status_int, 411) def test_container_update_backend_requests(self): for policy in POLICIES: req = swift.common.swob.Request.blank( '/v1/a/c/o', method='PUT', headers={'Content-Length': '0', 'X-Backend-Storage-Policy-Index': int(policy)}) controller = self.controller_cls(self.app, 'a', 'c', 'o') # This is the number of container updates we're doing, simulating # 1 to 15 container replicas. for num_containers in range(1, 16): containers = [{'ip': '1.0.0.%s' % i, 'port': '60%s' % str(i).zfill(2), 'device': 'sdb'} for i in range(num_containers)] backend_headers = controller._backend_requests( req, self.replicas(policy), 1, containers) # how many of the backend headers have a container update container_updates = len( [headers for headers in backend_headers if 'X-Container-Partition' in headers]) if num_containers <= self.quorum(policy): # filling case expected = min(self.quorum(policy) + 1, self.replicas(policy)) else: # container updates >= object replicas expected = min(num_containers, self.replicas(policy)) self.assertEqual(container_updates, expected) # end of BaseObjectControllerMixin @patch_policies() class TestReplicatedObjController(BaseObjectControllerMixin, unittest.TestCase): controller_cls = obj.ReplicatedObjectController def test_PUT_simple(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT') req.headers['content-length'] = '0' with set_http_connect(201, 201, 201): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_txn_id_logging_on_PUT(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT') self.app.logger.txn_id = req.environ['swift.trans_id'] = 'test-txn-id' req.headers['content-length'] = '0' # we capture stdout since the debug log formatter prints the formatted # message to stdout stdout = BytesIO() with set_http_connect((100, Timeout()), 503, 503), \ mock.patch('sys.stdout', stdout): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 503) for line in stdout.getvalue().splitlines(): self.assertIn('test-txn-id', line) self.assertIn('Trying to get final status of PUT to', stdout.getvalue()) def test_PUT_empty_bad_etag(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT') req.headers['Content-Length'] = '0' req.headers['Etag'] = '"catbus"' # The 2-tuple here makes getexpect() return 422, not 100. For # objects that are >0 bytes, you get a 100 Continue and then a 422 # Unprocessable Entity after sending the body. 
For zero-byte # objects, though, you get the 422 right away. codes = [FakeStatus((422, 422)) for _junk in range(self.replicas())] with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 422) def test_PUT_if_none_match(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT') req.headers['if-none-match'] = '*' req.headers['content-length'] = '0' with set_http_connect(201, 201, 201): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_PUT_if_none_match_denied(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT') req.headers['if-none-match'] = '*' req.headers['content-length'] = '0' with set_http_connect(201, 412, 201): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 412) def test_PUT_if_none_match_not_star(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT') req.headers['if-none-match'] = 'somethingelse' req.headers['content-length'] = '0' with set_http_connect(): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 400) def test_PUT_connect_exceptions(self): object_ring = self.app.get_object_ring(None) self.app.sort_nodes = lambda n: n # disable shuffle def test_status_map(statuses, expected): self.app._error_limiting = {} req = swob.Request.blank('/v1/a/c/o.jpg', method='PUT', body='test body') with set_http_connect(*statuses): resp = req.get_response(self.app) self.assertEqual(resp.status_int, expected) base_status = [201] * 3 # test happy path test_status_map(list(base_status), 201) for i in range(3): self.assertEqual(node_error_count( self.app, object_ring.devs[i]), 0) # single node errors and test isolation for i in range(3): status_list = list(base_status) status_list[i] = 503 test_status_map(status_list, 201) for j in range(3): self.assertEqual(node_error_count( self.app, object_ring.devs[j]), 1 if j == i else 0) # connect errors test_status_map((201, Timeout(), 201, 201), 201) self.assertEqual(node_error_count( self.app, object_ring.devs[1]), 1) test_status_map((Exception('kaboom!'), 201, 201, 201), 201) self.assertEqual(node_error_count( self.app, object_ring.devs[0]), 1) # expect errors test_status_map((201, 201, (503, None), 201), 201) self.assertEqual(node_error_count( self.app, object_ring.devs[2]), 1) test_status_map(((507, None), 201, 201, 201), 201) self.assertEqual( node_error_count(self.app, object_ring.devs[0]), self.app.error_suppression_limit + 1) # response errors test_status_map(((100, Timeout()), 201, 201), 201) self.assertEqual( node_error_count(self.app, object_ring.devs[0]), 1) test_status_map((201, 201, (100, Exception())), 201) self.assertEqual( node_error_count(self.app, object_ring.devs[2]), 1) test_status_map((201, (100, 507), 201), 201) self.assertEqual( node_error_count(self.app, object_ring.devs[1]), self.app.error_suppression_limit + 1) def test_PUT_error_during_transfer_data(self): class FakeReader(object): def read(self, size): raise exceptions.ChunkReadError('exception message') req = swob.Request.blank('/v1/a/c/o.jpg', method='PUT', body='test body') req.environ['wsgi.input'] = FakeReader() req.headers['content-length'] = '6' with set_http_connect(201, 201, 201): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 499) def test_PUT_chunkreadtimeout_during_transfer_data(self): class FakeReader(object): def read(self, size): raise exceptions.ChunkReadTimeout() req = swob.Request.blank('/v1/a/c/o.jpg', method='PUT', body='test body') req.environ['wsgi.input'] = FakeReader() 
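        # with wsgi.input swapped for FakeReader, the proxy's read of the
        # advertised body raises ChunkReadTimeout, which should surface to
        # the client as a 408 rather than a generic 5xx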
req.headers['content-length'] = '6' with set_http_connect(201, 201, 201): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 408) def test_PUT_timeout_during_transfer_data(self): class FakeReader(object): def read(self, size): raise Timeout() req = swob.Request.blank('/v1/a/c/o.jpg', method='PUT', body='test body') req.environ['wsgi.input'] = FakeReader() req.headers['content-length'] = '6' with set_http_connect(201, 201, 201): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 499) def test_PUT_exception_during_transfer_data(self): class FakeReader(object): def read(self, size): raise Exception('exception message') req = swob.Request.blank('/v1/a/c/o.jpg', method='PUT', body='test body') req.environ['wsgi.input'] = FakeReader() req.headers['content-length'] = '6' with set_http_connect(201, 201, 201): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 500) def test_GET_simple(self): req = swift.common.swob.Request.blank('/v1/a/c/o') with set_http_connect(200): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertIn('Accept-Ranges', resp.headers) def test_GET_transfer_encoding_chunked(self): req = swift.common.swob.Request.blank('/v1/a/c/o') with set_http_connect(200, headers={'transfer-encoding': 'chunked'}): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['Transfer-Encoding'], 'chunked') def test_GET_error(self): req = swift.common.swob.Request.blank('/v1/a/c/o') self.app.logger.txn_id = req.environ['swift.trans_id'] = 'my-txn-id' stdout = BytesIO() with set_http_connect(503, 200), \ mock.patch('sys.stdout', stdout): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) for line in stdout.getvalue().splitlines(): self.assertIn('my-txn-id', line) self.assertIn('From Object Server', stdout.getvalue()) def test_GET_handoff(self): req = swift.common.swob.Request.blank('/v1/a/c/o') codes = [503] * self.obj_ring.replicas + [200] with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) def test_GET_not_found(self): req = swift.common.swob.Request.blank('/v1/a/c/o') codes = [404] * (self.obj_ring.replicas + self.obj_ring.max_more_nodes) with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 404) def test_POST_as_COPY_simple(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='POST') get_resp = [200] * self.obj_ring.replicas + \ [404] * self.obj_ring.max_more_nodes put_resp = [201] * self.obj_ring.replicas codes = get_resp + put_resp with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) self.assertEqual(req.environ['QUERY_STRING'], '') self.assertTrue('swift.post_as_copy' in req.environ) def test_POST_as_COPY_static_large_object(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='POST') get_resp = [200] * self.obj_ring.replicas + \ [404] * self.obj_ring.max_more_nodes put_resp = [201] * self.obj_ring.replicas codes = get_resp + put_resp slo_headers = \ [{'X-Static-Large-Object': True}] * self.obj_ring.replicas get_headers = slo_headers + [{}] * (len(codes) - len(slo_headers)) headers = {'headers': get_headers} with set_http_connect(*codes, **headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) self.assertEqual(req.environ['QUERY_STRING'], '') self.assertTrue('swift.post_as_copy' in req.environ) def test_POST_delete_at(self): t = str(int(time.time() + 
100)) req = swob.Request.blank('/v1/a/c/o', method='POST', headers={'Content-Type': 'foo/bar', 'X-Delete-At': t}) post_headers = [] def capture_headers(ip, port, device, part, method, path, headers, **kwargs): if method == 'POST': post_headers.append(headers) x_newest_responses = [200] * self.obj_ring.replicas + \ [404] * self.obj_ring.max_more_nodes post_resp = [200] * self.obj_ring.replicas codes = x_newest_responses + post_resp with set_http_connect(*codes, give_connect=capture_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(req.environ['QUERY_STRING'], '') # sanity self.assertTrue('swift.post_as_copy' in req.environ) for given_headers in post_headers: self.assertEqual(given_headers.get('X-Delete-At'), t) self.assertTrue('X-Delete-At-Host' in given_headers) self.assertTrue('X-Delete-At-Device' in given_headers) self.assertTrue('X-Delete-At-Partition' in given_headers) self.assertTrue('X-Delete-At-Container' in given_headers) def test_PUT_delete_at(self): t = str(int(time.time() + 100)) req = swob.Request.blank('/v1/a/c/o', method='PUT', body='', headers={'Content-Type': 'foo/bar', 'X-Delete-At': t}) put_headers = [] def capture_headers(ip, port, device, part, method, path, headers, **kwargs): if method == 'PUT': put_headers.append(headers) codes = [201] * self.obj_ring.replicas with set_http_connect(*codes, give_connect=capture_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) for given_headers in put_headers: self.assertEqual(given_headers.get('X-Delete-At'), t) self.assertTrue('X-Delete-At-Host' in given_headers) self.assertTrue('X-Delete-At-Device' in given_headers) self.assertTrue('X-Delete-At-Partition' in given_headers) self.assertTrue('X-Delete-At-Container' in given_headers) def test_PUT_converts_delete_after_to_delete_at(self): req = swob.Request.blank('/v1/a/c/o', method='PUT', body='', headers={'Content-Type': 'foo/bar', 'X-Delete-After': '60'}) put_headers = [] def capture_headers(ip, port, device, part, method, path, headers, **kwargs): if method == 'PUT': put_headers.append(headers) codes = [201] * self.obj_ring.replicas t = time.time() with set_http_connect(*codes, give_connect=capture_headers): with mock.patch('time.time', lambda: t): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) expected_delete_at = str(int(t) + 60) for given_headers in put_headers: self.assertEqual(given_headers.get('X-Delete-At'), expected_delete_at) self.assertTrue('X-Delete-At-Host' in given_headers) self.assertTrue('X-Delete-At-Device' in given_headers) self.assertTrue('X-Delete-At-Partition' in given_headers) self.assertTrue('X-Delete-At-Container' in given_headers) def test_container_sync_put_x_timestamp_not_found(self): test_indexes = [None] + [int(p) for p in POLICIES] for policy_index in test_indexes: self.app.container_info['storage_policy'] = policy_index put_timestamp = utils.Timestamp(time.time()).normal req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 0, 'X-Timestamp': put_timestamp}) codes = [201] * self.obj_ring.replicas with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_container_sync_put_x_timestamp_match(self): test_indexes = [None] + [int(p) for p in POLICIES] for policy_index in test_indexes: self.app.container_info['storage_policy'] = policy_index put_timestamp = utils.Timestamp(time.time()).normal req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 
0, 'X-Timestamp': put_timestamp}) ts_iter = itertools.repeat(put_timestamp) codes = [409] * self.obj_ring.replicas with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) def test_container_sync_put_x_timestamp_older(self): ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) test_indexes = [None] + [int(p) for p in POLICIES] for policy_index in test_indexes: self.app.container_info['storage_policy'] = policy_index req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 0, 'X-Timestamp': next(ts).internal}) ts_iter = itertools.repeat(next(ts).internal) codes = [409] * self.obj_ring.replicas with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) def test_container_sync_put_x_timestamp_newer(self): ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) test_indexes = [None] + [int(p) for p in POLICIES] for policy_index in test_indexes: orig_timestamp = next(ts).internal req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 0, 'X-Timestamp': next(ts).internal}) ts_iter = itertools.repeat(orig_timestamp) codes = [201] * self.obj_ring.replicas with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_put_x_timestamp_conflict(self): ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 0, 'X-Timestamp': next(ts).internal}) ts_iter = iter([next(ts).internal, None, None]) codes = [409] + [201] * (self.obj_ring.replicas - 1) with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) def test_put_x_timestamp_conflict_with_missing_backend_timestamp(self): ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 0, 'X-Timestamp': next(ts).internal}) ts_iter = iter([None, None, None]) codes = [409] * self.obj_ring.replicas with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) def test_put_x_timestamp_conflict_with_other_weird_success_response(self): ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 0, 'X-Timestamp': next(ts).internal}) ts_iter = iter([next(ts).internal, None, None]) codes = [409] + [(201, 'notused')] * (self.obj_ring.replicas - 1) with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) def test_put_x_timestamp_conflict_with_if_none_match(self): ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 0, 'If-None-Match': '*', 'X-Timestamp': next(ts).internal}) ts_iter = iter([next(ts).internal, None, None]) codes = [409] + [(412, 'notused')] * (self.obj_ring.replicas - 1) with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 412) def test_container_sync_put_x_timestamp_race(self): ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) test_indexes = [None] + [int(p) for p in POLICIES] for policy_index in test_indexes: put_timestamp = 
next(ts).internal req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 0, 'X-Timestamp': put_timestamp}) # object nodes they respond 409 because another in-flight request # finished and now the on disk timestamp is equal to the request. put_ts = [put_timestamp] * self.obj_ring.replicas codes = [409] * self.obj_ring.replicas ts_iter = iter(put_ts) with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) def test_container_sync_put_x_timestamp_unsynced_race(self): ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) test_indexes = [None] + [int(p) for p in POLICIES] for policy_index in test_indexes: put_timestamp = next(ts).internal req = swob.Request.blank( '/v1/a/c/o', method='PUT', headers={ 'Content-Length': 0, 'X-Timestamp': put_timestamp}) # only one in-flight request finished put_ts = [None] * (self.obj_ring.replicas - 1) put_resp = [201] * (self.obj_ring.replicas - 1) put_ts += [put_timestamp] put_resp += [409] ts_iter = iter(put_ts) codes = put_resp with set_http_connect(*codes, timestamps=ts_iter): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) def test_COPY_simple(self): req = swift.common.swob.Request.blank( '/v1/a/c/o', method='COPY', headers={'Content-Length': 0, 'Destination': 'c/o-copy'}) head_resp = [200] * self.obj_ring.replicas + \ [404] * self.obj_ring.max_more_nodes put_resp = [201] * self.obj_ring.replicas codes = head_resp + put_resp with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_PUT_log_info(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT') req.headers['x-copy-from'] = 'some/where' req.headers['Content-Length'] = 0 # override FakeConn default resp headers to keep log_info clean resp_headers = {'x-delete-at': None} head_resp = [200] * self.obj_ring.replicas + \ [404] * self.obj_ring.max_more_nodes put_resp = [201] * self.obj_ring.replicas codes = head_resp + put_resp with set_http_connect(*codes, headers=resp_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) self.assertEqual( req.environ.get('swift.log_info'), ['x-copy-from:some/where']) # and then check that we don't do that for originating POSTs req = swift.common.swob.Request.blank('/v1/a/c/o') req.method = 'POST' req.headers['x-copy-from'] = 'else/where' with set_http_connect(*codes, headers=resp_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 202) self.assertEqual(req.environ.get('swift.log_info'), None) @patch_policies( [StoragePolicy(0, '1-replica', True), StoragePolicy(1, '5-replica', False), StoragePolicy(2, '8-replica', False), StoragePolicy(3, '15-replica', False)], fake_ring_args=[ {'replicas': 1}, {'replicas': 5}, {'replicas': 8}, {'replicas': 15}]) class TestReplicatedObjControllerVariousReplicas(BaseObjectControllerMixin, unittest.TestCase): controller_cls = obj.ReplicatedObjectController @patch_policies(legacy_only=True) class TestObjControllerLegacyCache(TestReplicatedObjController): """ This test pretends like memcache returned a stored value that should resemble whatever "old" format. It catches KeyErrors you'd get if your code was expecting some new format during a rolling upgrade. 
""" # in this case policy_index is missing container_info = { 'read_acl': None, 'write_acl': None, 'sync_key': None, 'versions': None, } def test_invalid_storage_policy_cache(self): self.app.container_info['storage_policy'] = 1 for method in ('GET', 'HEAD', 'POST', 'PUT', 'COPY'): req = swob.Request.blank('/v1/a/c/o', method=method) with set_http_connect(): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 503) class StubResponse(object): def __init__(self, status, body='', headers=None): self.status = status self.body = body self.readable = BytesIO(body) self.headers = HeaderKeyDict(headers) fake_reason = ('Fake', 'This response is a lie.') self.reason = swob.RESPONSE_REASONS.get(status, fake_reason)[0] def getheader(self, header_name, default=None): return self.headers.get(header_name, default) def getheaders(self): if 'Content-Length' not in self.headers: self.headers['Content-Length'] = len(self.body) return self.headers.items() def read(self, amt=0): return self.readable.read(amt) @contextmanager def capture_http_requests(get_response): class FakeConn(object): def __init__(self, req): self.req = req self.resp = None def getresponse(self): self.resp = get_response(self.req) return self.resp class ConnectionLog(object): def __init__(self): self.connections = [] def __len__(self): return len(self.connections) def __getitem__(self, i): return self.connections[i] def __iter__(self): return iter(self.connections) def __call__(self, ip, port, method, path, headers, qs, ssl): req = { 'ip': ip, 'port': port, 'method': method, 'path': path, 'headers': headers, 'qs': qs, 'ssl': ssl, } conn = FakeConn(req) self.connections.append(conn) return conn fake_conn = ConnectionLog() with mock.patch('swift.common.bufferedhttp.http_connect_raw', new=fake_conn): yield fake_conn @patch_policies(with_ec_default=True) class TestECObjController(BaseObjectControllerMixin, unittest.TestCase): container_info = { 'read_acl': None, 'write_acl': None, 'sync_key': None, 'versions': None, 'storage_policy': '0', } controller_cls = obj.ECObjectController def test_determine_chunk_destinations(self): class FakePutter(object): def __init__(self, index): self.node_index = index controller = self.controller_cls( self.app, 'a', 'c', 'o') # create a dummy list of putters, check no handoffs putters = [] for index in range(0, 4): putters.append(FakePutter(index)) got = controller._determine_chunk_destinations(putters) expected = {} for i, p in enumerate(putters): expected[p] = i self.assertEqual(got, expected) # now lets make a handoff at the end putters[3].node_index = None got = controller._determine_chunk_destinations(putters) self.assertEqual(got, expected) putters[3].node_index = 3 # now lets make a handoff at the start putters[0].node_index = None got = controller._determine_chunk_destinations(putters) self.assertEqual(got, expected) putters[0].node_index = 0 # now lets make a handoff in the middle putters[2].node_index = None got = controller._determine_chunk_destinations(putters) self.assertEqual(got, expected) putters[2].node_index = 0 # now lets make all of them handoffs for index in range(0, 4): putters[index].node_index = None got = controller._determine_chunk_destinations(putters) self.assertEqual(got, expected) def test_GET_simple(self): req = swift.common.swob.Request.blank('/v1/a/c/o') get_resp = [200] * self.policy.ec_ndata with set_http_connect(*get_resp): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertIn('Accept-Ranges', resp.headers) def 
test_GET_simple_x_newest(self): req = swift.common.swob.Request.blank('/v1/a/c/o', headers={'X-Newest': 'true'}) codes = [200] * self.policy.ec_ndata with set_http_connect(*codes): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) def test_GET_error(self): req = swift.common.swob.Request.blank('/v1/a/c/o') get_resp = [503] + [200] * self.policy.ec_ndata with set_http_connect(*get_resp): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) def test_GET_with_body(self): req = swift.common.swob.Request.blank('/v1/a/c/o') # turn a real body into fragments segment_size = self.policy.ec_segment_size real_body = ('asdf' * segment_size)[:-10] # split it up into chunks chunks = [real_body[x:x + segment_size] for x in range(0, len(real_body), segment_size)] fragment_payloads = [] for chunk in chunks: fragments = self.policy.pyeclib_driver.encode(chunk) if not fragments: break fragment_payloads.append(fragments) # sanity sanity_body = '' for fragment_payload in fragment_payloads: sanity_body += self.policy.pyeclib_driver.decode( fragment_payload) self.assertEqual(len(real_body), len(sanity_body)) self.assertEqual(real_body, sanity_body) # list(zip(...)) for py3 compatibility (zip is lazy there) node_fragments = list(zip(*fragment_payloads)) self.assertEqual(len(node_fragments), self.replicas()) # sanity headers = {'X-Object-Sysmeta-Ec-Content-Length': str(len(real_body))} responses = [(200, ''.join(node_fragments[i]), headers) for i in range(POLICIES.default.ec_ndata)] status_codes, body_iter, headers = zip(*responses) with set_http_connect(*status_codes, body_iter=body_iter, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(len(real_body), len(resp.body)) self.assertEqual(real_body, resp.body) def test_PUT_simple(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [201] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_txn_id_logging_ECPUT(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') self.app.logger.txn_id = req.environ['swift.trans_id'] = 'test-txn-id' codes = [(100, Timeout(), 503, 503)] * self.replicas() stdout = BytesIO() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers), \ mock.patch('sys.stdout', stdout): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 503) for line in stdout.getvalue().splitlines(): self.assertIn('test-txn-id', line) self.assertIn('Trying to get ', stdout.getvalue()) def test_PUT_with_explicit_commit_status(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [(100, 100, 201)] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_PUT_error(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [503] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 503) def 
test_PUT_mostly_success(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [201] * self.quorum() codes += [503] * (self.replicas() - len(codes)) random.shuffle(codes) expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_PUT_error_commit(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [(100, 503, Exception('not used'))] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 503) def test_PUT_mostly_success_commit(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [201] * self.quorum() codes += [(100, 503, Exception('not used'))] * ( self.replicas() - len(codes)) random.shuffle(codes) expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_PUT_mostly_error_commit(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [(100, 503, Exception('not used'))] * self.quorum() codes += [201] * (self.replicas() - len(codes)) random.shuffle(codes) expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 503) def test_PUT_commit_timeout(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [201] * (self.replicas() - 1) codes.append((100, Timeout(), Exception('not used'))) expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_PUT_commit_exception(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [201] * (self.replicas() - 1) codes.append((100, Exception('kaboom!'), Exception('not used'))) expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_PUT_ec_error_during_transfer_data(self): class FakeReader(object): def read(self, size): raise exceptions.ChunkReadError('exception message') req = swob.Request.blank('/v1/a/c/o.jpg', method='PUT', body='test body') req.environ['wsgi.input'] = FakeReader() req.headers['content-length'] = '6' codes = [201] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 499) def test_PUT_ec_chunkreadtimeout_during_transfer_data(self): class FakeReader(object): def read(self, size): raise exceptions.ChunkReadTimeout() req = swob.Request.blank('/v1/a/c/o.jpg', method='PUT', body='test body') req.environ['wsgi.input'] = FakeReader() req.headers['content-length'] = '6' codes = [201] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 
'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 408) def test_PUT_ec_timeout_during_transfer_data(self): class FakeReader(object): def read(self, size): raise exceptions.Timeout() req = swob.Request.blank('/v1/a/c/o.jpg', method='PUT', body='test body') req.environ['wsgi.input'] = FakeReader() req.headers['content-length'] = '6' codes = [201] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 499) def test_PUT_ec_exception_during_transfer_data(self): class FakeReader(object): def read(self, size): raise Exception('exception message') req = swob.Request.blank('/v1/a/c/o.jpg', method='PUT', body='test body') req.environ['wsgi.input'] = FakeReader() req.headers['content-length'] = '6' codes = [201] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 500) def test_PUT_with_body(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT') segment_size = self.policy.ec_segment_size test_body = ('asdf' * segment_size)[:-10] etag = md5(test_body).hexdigest() size = len(test_body) req.body = test_body codes = [201] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } put_requests = defaultdict(lambda: {'boundary': None, 'chunks': []}) def capture_body(conn_id, chunk): put_requests[conn_id]['chunks'].append(chunk) def capture_headers(ip, port, device, part, method, path, headers, **kwargs): conn_id = kwargs['connection_id'] put_requests[conn_id]['boundary'] = headers[ 'X-Backend-Obj-Multipart-Mime-Boundary'] put_requests[conn_id]['backend-content-length'] = headers[ 'X-Backend-Obj-Content-Length'] with set_http_connect(*codes, expect_headers=expect_headers, give_send=capture_body, give_connect=capture_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) frag_archives = [] for connection_id, info in put_requests.items(): body = unchunk_body(''.join(info['chunks'])) self.assertTrue(info['boundary'] is not None, "didn't get boundary for conn %r" % ( connection_id,)) self.assertTrue(size > int(info['backend-content-length']) > 0, "invalid backend-content-length for conn %r" % ( connection_id,)) # email.parser.FeedParser doesn't know how to take a multipart # message and boundary together and parse it; it only knows how # to take a string, parse the headers, and figure out the # boundary on its own. 
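            # so fabricate a Content-Type header carrying the captured
            # boundary, feed that to the parser first, and then feed the
            # captured body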
parser = email.parser.FeedParser() parser.feed( "Content-Type: multipart/nobodycares; boundary=%s\r\n\r\n" % info['boundary']) parser.feed(body) message = parser.close() self.assertTrue(message.is_multipart()) # sanity check mime_parts = message.get_payload() self.assertEqual(len(mime_parts), 3) obj_part, footer_part, commit_part = mime_parts # attach the body to frag_archives list self.assertEqual(obj_part['X-Document'], 'object body') frag_archives.append(obj_part.get_payload()) # assert length was correct for this connection self.assertEqual(int(info['backend-content-length']), len(frag_archives[-1])) # assert length was the same for all connections self.assertEqual(int(info['backend-content-length']), len(frag_archives[0])) # validate some footer metadata self.assertEqual(footer_part['X-Document'], 'object metadata') footer_metadata = json.loads(footer_part.get_payload()) self.assertTrue(footer_metadata) expected = { 'X-Object-Sysmeta-EC-Content-Length': str(size), 'X-Backend-Container-Update-Override-Size': str(size), 'X-Object-Sysmeta-EC-Etag': etag, 'X-Backend-Container-Update-Override-Etag': etag, 'X-Object-Sysmeta-EC-Segment-Size': str(segment_size), } for header, value in expected.items(): self.assertEqual(footer_metadata[header], value) # sanity on commit message self.assertEqual(commit_part['X-Document'], 'put commit') self.assertEqual(len(frag_archives), self.replicas()) fragment_size = self.policy.fragment_size node_payloads = [] for fa in frag_archives: payload = [fa[x:x + fragment_size] for x in range(0, len(fa), fragment_size)] node_payloads.append(payload) fragment_payloads = zip(*node_payloads) expected_body = '' for fragment_payload in fragment_payloads: self.assertEqual(len(fragment_payload), self.replicas()) if True: fragment_payload = list(fragment_payload) expected_body += self.policy.pyeclib_driver.decode( fragment_payload) self.assertEqual(len(test_body), len(expected_body)) self.assertEqual(test_body, expected_body) def test_PUT_old_obj_server(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') responses = [ # one server will respond with 100-continue but not include the # needful expect headers, and the connection will be dropped ((100, Exception('not used')), {}), ] + [ # and plenty of successful responses too (201, { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes', }), ] * self.replicas() random.shuffle(responses) if responses[-1][0] != 201: # whoops, stupid random responses = responses[1:] + [responses[0]] codes, expect_headers = zip(*responses) with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_COPY_cross_policy_type_from_replicated(self): self.app.per_container_info = { 'c1': self.app.container_info.copy(), 'c2': self.app.container_info.copy(), } # make c2 use replicated storage policy 1 self.app.per_container_info['c2']['storage_policy'] = '1' # a put request with copy from source c2 req = swift.common.swob.Request.blank('/v1/a/c1/o', method='PUT', body='', headers={ 'X-Copy-From': 'c2/o'}) # c2 get codes = [200] * self.replicas(POLICIES[1]) codes += [404] * POLICIES[1].object_ring.max_more_nodes # c1 put codes += [201] * self.replicas() expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_COPY_cross_policy_type_to_replicated(self): 
self.app.per_container_info = { 'c1': self.app.container_info.copy(), 'c2': self.app.container_info.copy(), } # make c1 use replicated storage policy 1 self.app.per_container_info['c1']['storage_policy'] = '1' # a put request with copy from source c2 req = swift.common.swob.Request.blank('/v1/a/c1/o', method='PUT', body='', headers={ 'X-Copy-From': 'c2/o'}) # c2 get codes = [404, 200] * self.policy.ec_ndata headers = { 'X-Object-Sysmeta-Ec-Content-Length': 0, } # c1 put codes += [201] * self.replicas(POLICIES[1]) with set_http_connect(*codes, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_COPY_cross_policy_type_unknown(self): self.app.per_container_info = { 'c1': self.app.container_info.copy(), 'c2': self.app.container_info.copy(), } # make c1 use some made up storage policy index self.app.per_container_info['c1']['storage_policy'] = '13' # a COPY request of c2 with destination in c1 req = swift.common.swob.Request.blank('/v1/a/c2/o', method='COPY', body='', headers={ 'Destination': 'c1/o'}) with set_http_connect(): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 503) def _make_ec_archive_bodies(self, test_body, policy=None): policy = policy or self.policy return encode_frag_archive_bodies(policy, test_body) def _make_ec_object_stub(self, test_body=None, policy=None): policy = policy or self.policy segment_size = policy.ec_segment_size test_body = test_body or ( 'test' * segment_size)[:-random.randint(0, 1000)] etag = md5(test_body).hexdigest() ec_archive_bodies = self._make_ec_archive_bodies(test_body, policy=policy) return { 'body': test_body, 'etag': etag, 'frags': ec_archive_bodies, } def _fake_ec_node_response(self, node_frags): """ Given a list of entries for each node in ring order, where the entries are a dict (or list of dicts) which describe all of the fragment(s); create a function suitable for use with capture_http_requests that will accept a req object and return a response that will suitably fake the behavior of an object server who had the given fragments on disk at the time. """ node_map = {} all_nodes = [] def _build_node_map(req): node_key = lambda n: (n['ip'], n['port']) part = utils.split_path(req['path'], 5, 5, True)[1] policy = POLICIES[int( req['headers']['X-Backend-Storage-Policy-Index'])] all_nodes.extend(policy.object_ring.get_part_nodes(part)) all_nodes.extend(policy.object_ring.get_more_nodes(part)) for i, node in enumerate(all_nodes): node_map[node_key(node)] = i # normalize node_frags to a list of fragments for each node even # if there's only one fragment in the dataset provided. 
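# For instance, a hypothetical stub list like [frag_a, [frag_b, frag_c]]
# (each frag being an {'obj': ..., 'frag': ...} dict) becomes
# [[frag_a], [frag_b, frag_c]], so get_response() below can always pick a
# stub with random.choice() from a per-node list.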
for i, frags in enumerate(node_frags): if isinstance(frags, dict): node_frags[i] = [frags] def get_response(req): if not node_map: _build_node_map(req) try: node_index = node_map[(req['ip'], req['port'])] except KeyError: raise Exception("Couldn't find node %s:%s in %r" % ( req['ip'], req['port'], all_nodes)) try: frags = node_frags[node_index] except KeyError: raise Exception('Found node %r:%r at index %s - ' 'but only got %s stub response nodes' % ( req['ip'], req['port'], node_index, len(node_frags))) try: stub = random.choice(frags) except IndexError: stub = None if stub: body = stub['obj']['frags'][stub['frag']] headers = { 'X-Object-Sysmeta-Ec-Content-Length': len( stub['obj']['body']), 'X-Object-Sysmeta-Ec-Etag': stub['obj']['etag'], 'X-Object-Sysmeta-Ec-Frag-Index': stub['frag'], } resp = StubResponse(200, body, headers) else: resp = StubResponse(404) return resp return get_response def test_GET_with_frags_swapped_around(self): segment_size = self.policy.ec_segment_size test_data = ('test' * segment_size)[:-657] etag = md5(test_data).hexdigest() ec_archive_bodies = self._make_ec_archive_bodies(test_data) _part, primary_nodes = self.obj_ring.get_nodes('a', 'c', 'o') node_key = lambda n: (n['ip'], n['port']) response_map = { node_key(n): StubResponse(200, ec_archive_bodies[i], { 'X-Object-Sysmeta-Ec-Content-Length': len(test_data), 'X-Object-Sysmeta-Ec-Etag': etag, 'X-Object-Sysmeta-Ec-Frag-Index': i, }) for i, n in enumerate(primary_nodes) } # swap a parity response into a data node data_node = random.choice(primary_nodes[:self.policy.ec_ndata]) parity_node = random.choice(primary_nodes[self.policy.ec_ndata:]) (response_map[node_key(data_node)], response_map[node_key(parity_node)]) = \ (response_map[node_key(parity_node)], response_map[node_key(data_node)]) def get_response(req): req_key = (req['ip'], req['port']) return response_map.pop(req_key) req = swob.Request.blank('/v1/a/c/o') with capture_http_requests(get_response) as log: resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(len(log), self.policy.ec_ndata) self.assertEqual(len(response_map), len(primary_nodes) - self.policy.ec_ndata) def test_GET_with_single_missed_overwrite_does_not_need_handoff(self): obj1 = self._make_ec_object_stub() obj2 = self._make_ec_object_stub() node_frags = [ {'obj': obj2, 'frag': 0}, {'obj': obj2, 'frag': 1}, {'obj': obj1, 'frag': 2}, # missed over write {'obj': obj2, 'frag': 3}, {'obj': obj2, 'frag': 4}, {'obj': obj2, 'frag': 5}, {'obj': obj2, 'frag': 6}, {'obj': obj2, 'frag': 7}, {'obj': obj2, 'frag': 8}, {'obj': obj2, 'frag': 9}, {'obj': obj2, 'frag': 10}, # parity {'obj': obj2, 'frag': 11}, # parity {'obj': obj2, 'frag': 12}, # parity {'obj': obj2, 'frag': 13}, # parity # {'obj': obj2, 'frag': 2}, # handoff (not used in this test) ] fake_response = self._fake_ec_node_response(node_frags) req = swob.Request.blank('/v1/a/c/o') with capture_http_requests(fake_response) as log: resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['etag'], obj2['etag']) self.assertEqual(md5(resp.body).hexdigest(), obj2['etag']) collected_responses = defaultdict(set) for conn in log: etag = conn.resp.headers['X-Object-Sysmeta-Ec-Etag'] index = conn.resp.headers['X-Object-Sysmeta-Ec-Frag-Index'] collected_responses[etag].add(index) # because the primary nodes are shuffled, it's possible the proxy # didn't even notice the missed overwrite frag - but it might have self.assertLessEqual(len(log), self.policy.ec_ndata + 1) 
self.assertLessEqual(len(collected_responses), 2) # ... regardless we should never need to fetch more than ec_ndata # frags for any given etag for etag, frags in collected_responses.items(): self.assertTrue(len(frags) <= self.policy.ec_ndata, 'collected %s frags for etag %s' % ( len(frags), etag)) def test_GET_with_many_missed_overwrite_will_need_handoff(self): obj1 = self._make_ec_object_stub() obj2 = self._make_ec_object_stub() node_frags = [ {'obj': obj2, 'frag': 0}, {'obj': obj2, 'frag': 1}, {'obj': obj1, 'frag': 2}, # missed {'obj': obj2, 'frag': 3}, {'obj': obj2, 'frag': 4}, {'obj': obj2, 'frag': 5}, {'obj': obj1, 'frag': 6}, # missed {'obj': obj2, 'frag': 7}, {'obj': obj2, 'frag': 8}, {'obj': obj1, 'frag': 9}, # missed {'obj': obj1, 'frag': 10}, # missed {'obj': obj1, 'frag': 11}, # missed {'obj': obj2, 'frag': 12}, {'obj': obj2, 'frag': 13}, {'obj': obj2, 'frag': 6}, # handoff ] fake_response = self._fake_ec_node_response(node_frags) req = swob.Request.blank('/v1/a/c/o') with capture_http_requests(fake_response) as log: resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['etag'], obj2['etag']) self.assertEqual(md5(resp.body).hexdigest(), obj2['etag']) collected_responses = defaultdict(set) for conn in log: etag = conn.resp.headers['X-Object-Sysmeta-Ec-Etag'] index = conn.resp.headers['X-Object-Sysmeta-Ec-Frag-Index'] collected_responses[etag].add(index) # there's not enough of the obj2 etag on the primaries, we would # have collected responses for both etags, and would have made # one more request to the handoff node self.assertEqual(len(log), self.replicas() + 1) self.assertEqual(len(collected_responses), 2) # ... regardless we should never need to fetch more than ec_ndata # frags for any given etag for etag, frags in collected_responses.items(): self.assertTrue(len(frags) <= self.policy.ec_ndata, 'collected %s frags for etag %s' % ( len(frags), etag)) def test_GET_with_missing_and_mixed_frags_will_dig_deep_but_succeed(self): obj1 = self._make_ec_object_stub() obj2 = self._make_ec_object_stub() node_frags = [ {'obj': obj1, 'frag': 0}, {'obj': obj2, 'frag': 0}, {}, {'obj': obj1, 'frag': 1}, {'obj': obj2, 'frag': 1}, {}, {'obj': obj1, 'frag': 2}, {'obj': obj2, 'frag': 2}, {}, {'obj': obj1, 'frag': 3}, {'obj': obj2, 'frag': 3}, {}, {'obj': obj1, 'frag': 4}, {'obj': obj2, 'frag': 4}, {}, {'obj': obj1, 'frag': 5}, {'obj': obj2, 'frag': 5}, {}, {'obj': obj1, 'frag': 6}, {'obj': obj2, 'frag': 6}, {}, {'obj': obj1, 'frag': 7}, {'obj': obj2, 'frag': 7}, {}, {'obj': obj1, 'frag': 8}, {'obj': obj2, 'frag': 8}, {}, {'obj': obj2, 'frag': 9}, ] fake_response = self._fake_ec_node_response(node_frags) req = swob.Request.blank('/v1/a/c/o') with capture_http_requests(fake_response) as log: resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['etag'], obj2['etag']) self.assertEqual(md5(resp.body).hexdigest(), obj2['etag']) collected_responses = defaultdict(set) for conn in log: etag = conn.resp.headers['X-Object-Sysmeta-Ec-Etag'] index = conn.resp.headers['X-Object-Sysmeta-Ec-Frag-Index'] collected_responses[etag].add(index) # we go exactly as long as we have to, finding two different # etags and some 404's (i.e. collected_responses[None]) self.assertEqual(len(log), len(node_frags)) self.assertEqual(len(collected_responses), 3) # ... 
regardless we should never need to fetch more than ec_ndata # frags for any given etag for etag, frags in collected_responses.items(): self.assertTrue(len(frags) <= self.policy.ec_ndata, 'collected %s frags for etag %s' % ( len(frags), etag)) def test_GET_with_missing_and_mixed_frags_will_dig_deep_but_stop(self): obj1 = self._make_ec_object_stub() obj2 = self._make_ec_object_stub() node_frags = [ {'obj': obj1, 'frag': 0}, {'obj': obj2, 'frag': 0}, {}, {'obj': obj1, 'frag': 1}, {'obj': obj2, 'frag': 1}, {}, {'obj': obj1, 'frag': 2}, {'obj': obj2, 'frag': 2}, {}, {'obj': obj1, 'frag': 3}, {'obj': obj2, 'frag': 3}, {}, {'obj': obj1, 'frag': 4}, {'obj': obj2, 'frag': 4}, {}, {'obj': obj1, 'frag': 5}, {'obj': obj2, 'frag': 5}, {}, {'obj': obj1, 'frag': 6}, {'obj': obj2, 'frag': 6}, {}, {'obj': obj1, 'frag': 7}, {'obj': obj2, 'frag': 7}, {}, {'obj': obj1, 'frag': 8}, {'obj': obj2, 'frag': 8}, {}, {}, ] fake_response = self._fake_ec_node_response(node_frags) req = swob.Request.blank('/v1/a/c/o') with capture_http_requests(fake_response) as log: resp = req.get_response(self.app) self.assertEqual(resp.status_int, 404) collected_responses = defaultdict(set) for conn in log: etag = conn.resp.headers['X-Object-Sysmeta-Ec-Etag'] index = conn.resp.headers['X-Object-Sysmeta-Ec-Frag-Index'] collected_responses[etag].add(index) # default node_iter will exhaust at 2 * replicas self.assertEqual(len(log), 2 * self.replicas()) self.assertEqual(len(collected_responses), 3) # ... regardless we should never need to fetch more than ec_ndata # frags for any given etag for etag, frags in collected_responses.items(): self.assertTrue(len(frags) <= self.policy.ec_ndata, 'collected %s frags for etag %s' % ( len(frags), etag)) def test_GET_mixed_success_with_range(self): fragment_size = self.policy.fragment_size ec_stub = self._make_ec_object_stub() frag_archives = ec_stub['frags'] frag_archive_size = len(ec_stub['frags'][0]) headers = { 'Content-Type': 'text/plain', 'Content-Length': fragment_size, 'Content-Range': 'bytes 0-%s/%s' % (fragment_size - 1, frag_archive_size), 'X-Object-Sysmeta-Ec-Content-Length': len(ec_stub['body']), 'X-Object-Sysmeta-Ec-Etag': ec_stub['etag'], } responses = [ StubResponse(206, frag_archives[0][:fragment_size], headers), StubResponse(206, frag_archives[1][:fragment_size], headers), StubResponse(206, frag_archives[2][:fragment_size], headers), StubResponse(206, frag_archives[3][:fragment_size], headers), StubResponse(206, frag_archives[4][:fragment_size], headers), # data nodes with old frag StubResponse(416), StubResponse(416), StubResponse(206, frag_archives[7][:fragment_size], headers), StubResponse(206, frag_archives[8][:fragment_size], headers), StubResponse(206, frag_archives[9][:fragment_size], headers), # hopefully we ask for two more StubResponse(206, frag_archives[10][:fragment_size], headers), StubResponse(206, frag_archives[11][:fragment_size], headers), ] def get_response(req): return responses.pop(0) if responses else StubResponse(404) req = swob.Request.blank('/v1/a/c/o', headers={'Range': 'bytes=0-3'}) with capture_http_requests(get_response) as log: resp = req.get_response(self.app) self.assertEqual(resp.status_int, 206) self.assertEqual(resp.body, 'test') self.assertEqual(len(log), self.policy.ec_ndata + 2) def test_GET_with_range_unsatisfiable_mixed_success(self): responses = [ StubResponse(416), StubResponse(416), StubResponse(416), StubResponse(416), StubResponse(416), StubResponse(416), StubResponse(416), # sneak in bogus extra responses StubResponse(404), 
StubResponse(206), # and then just "enough" more 416's StubResponse(416), StubResponse(416), StubResponse(416), ] def get_response(req): return responses.pop(0) if responses else StubResponse(404) req = swob.Request.blank('/v1/a/c/o', headers={ 'Range': 'bytes=%s-' % 100000000000000}) with capture_http_requests(get_response) as log: resp = req.get_response(self.app) self.assertEqual(resp.status_int, 416) # ec_ndata responses that must agree, plus the bogus extras self.assertEqual(len(log), self.policy.ec_ndata + 2) def test_GET_mixed_ranged_responses_success(self): segment_size = self.policy.ec_segment_size fragment_size = self.policy.fragment_size new_data = ('test' * segment_size)[:-492] new_etag = md5(new_data).hexdigest() new_archives = self._make_ec_archive_bodies(new_data) old_data = ('junk' * segment_size)[:-492] old_etag = md5(old_data).hexdigest() old_archives = self._make_ec_archive_bodies(old_data) frag_archive_size = len(new_archives[0]) new_headers = { 'Content-Type': 'text/plain', 'Content-Length': fragment_size, 'Content-Range': 'bytes 0-%s/%s' % (fragment_size - 1, frag_archive_size), 'X-Object-Sysmeta-Ec-Content-Length': len(new_data), 'X-Object-Sysmeta-Ec-Etag': new_etag, } old_headers = { 'Content-Type': 'text/plain', 'Content-Length': fragment_size, 'Content-Range': 'bytes 0-%s/%s' % (fragment_size - 1, frag_archive_size), 'X-Object-Sysmeta-Ec-Content-Length': len(old_data), 'X-Object-Sysmeta-Ec-Etag': old_etag, } # 7 primaries with stale frags, 3 handoffs failed to get new frags responses = [ StubResponse(206, old_archives[0][:fragment_size], old_headers), StubResponse(206, new_archives[1][:fragment_size], new_headers), StubResponse(206, old_archives[2][:fragment_size], old_headers), StubResponse(206, new_archives[3][:fragment_size], new_headers), StubResponse(206, old_archives[4][:fragment_size], old_headers), StubResponse(206, new_archives[5][:fragment_size], new_headers), StubResponse(206, old_archives[6][:fragment_size], old_headers), StubResponse(206, new_archives[7][:fragment_size], new_headers), StubResponse(206, old_archives[8][:fragment_size], old_headers), StubResponse(206, new_archives[9][:fragment_size], new_headers), StubResponse(206, old_archives[10][:fragment_size], old_headers), StubResponse(206, new_archives[11][:fragment_size], new_headers), StubResponse(206, old_archives[12][:fragment_size], old_headers), StubResponse(206, new_archives[13][:fragment_size], new_headers), StubResponse(206, new_archives[0][:fragment_size], new_headers), StubResponse(404), StubResponse(404), StubResponse(206, new_archives[6][:fragment_size], new_headers), StubResponse(404), StubResponse(206, new_archives[10][:fragment_size], new_headers), StubResponse(206, new_archives[12][:fragment_size], new_headers), ] def get_response(req): return responses.pop(0) if responses else StubResponse(404) req = swob.Request.blank('/v1/a/c/o') with capture_http_requests(get_response) as log: resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, new_data[:segment_size]) self.assertEqual(len(log), self.policy.ec_ndata + 10) def test_GET_mismatched_fragment_archives(self): segment_size = self.policy.ec_segment_size test_data1 = ('test' * segment_size)[:-333] # N.B. 
the object data *length* here is different test_data2 = ('blah1' * segment_size)[:-333] etag1 = md5(test_data1).hexdigest() etag2 = md5(test_data2).hexdigest() ec_archive_bodies1 = self._make_ec_archive_bodies(test_data1) ec_archive_bodies2 = self._make_ec_archive_bodies(test_data2) headers1 = {'X-Object-Sysmeta-Ec-Etag': etag1, 'X-Object-Sysmeta-Ec-Content-Length': '333'} # here we're going to *lie* and say the etag here matches headers2 = {'X-Object-Sysmeta-Ec-Etag': etag1, 'X-Object-Sysmeta-Ec-Content-Length': '333'} responses1 = [(200, body, headers1) for body in ec_archive_bodies1] responses2 = [(200, body, headers2) for body in ec_archive_bodies2] req = swob.Request.blank('/v1/a/c/o') # sanity check responses1 responses = responses1[:self.policy.ec_ndata] status_codes, body_iter, headers = zip(*responses) with set_http_connect(*status_codes, body_iter=body_iter, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(md5(resp.body).hexdigest(), etag1) # sanity check responses2 responses = responses2[:self.policy.ec_ndata] status_codes, body_iter, headers = zip(*responses) with set_http_connect(*status_codes, body_iter=body_iter, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(md5(resp.body).hexdigest(), etag2) # now mix the responses a bit mix_index = random.randint(0, self.policy.ec_ndata - 1) mixed_responses = responses1[:self.policy.ec_ndata] mixed_responses[mix_index] = responses2[mix_index] status_codes, body_iter, headers = zip(*mixed_responses) with set_http_connect(*status_codes, body_iter=body_iter, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) try: resp.body except ECDriverError: resp._app_iter.close() else: self.fail('invalid ec fragment response body did not blow up!') error_lines = self.logger.get_lines_for_level('error') self.assertEqual(1, len(error_lines)) msg = error_lines[0] self.assertTrue('Error decoding fragments' in msg) self.assertTrue('/a/c/o' in msg) log_msg_args, log_msg_kwargs = self.logger.log_dict['error'][0] self.assertEqual(log_msg_kwargs['exc_info'][0], ECDriverError) def test_GET_read_timeout(self): segment_size = self.policy.ec_segment_size test_data = ('test' * segment_size)[:-333] etag = md5(test_data).hexdigest() ec_archive_bodies = self._make_ec_archive_bodies(test_data) headers = {'X-Object-Sysmeta-Ec-Etag': etag} self.app.recoverable_node_timeout = 0.01 responses = [(200, SlowBody(body, 0.1), headers) for body in ec_archive_bodies] req = swob.Request.blank('/v1/a/c/o') status_codes, body_iter, headers = zip(*responses + [ (404, '', {}) for i in range( self.policy.object_ring.max_more_nodes)]) with set_http_connect(*status_codes, body_iter=body_iter, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) # do this inside the fake http context manager, it'll try to # resume but won't be able to give us all the right bytes self.assertNotEqual(md5(resp.body).hexdigest(), etag) error_lines = self.logger.get_lines_for_level('error') self.assertEqual(self.replicas(), len(error_lines)) nparity = self.policy.ec_nparity for line in error_lines[:nparity]: self.assertTrue('retrying' in line) for line in error_lines[nparity:]: self.assertTrue('ChunkReadTimeout (0.01s)' in line) def test_GET_read_timeout_resume(self): segment_size = self.policy.ec_segment_size test_data = ('test' * segment_size)[:-333] etag = md5(test_data).hexdigest() ec_archive_bodies = 
self._make_ec_archive_bodies(test_data) headers = {'X-Object-Sysmeta-Ec-Etag': etag} self.app.recoverable_node_timeout = 0.05 # first one is slow responses = [(200, SlowBody(ec_archive_bodies[0], 0.1), headers)] # ... the rest are fine responses += [(200, body, headers) for body in ec_archive_bodies[1:]] req = swob.Request.blank('/v1/a/c/o') status_codes, body_iter, headers = zip( *responses[:self.policy.ec_ndata + 1]) with set_http_connect(*status_codes, body_iter=body_iter, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(md5(resp.body).hexdigest(), etag) error_lines = self.logger.get_lines_for_level('error') self.assertEqual(1, len(error_lines)) self.assertTrue('retrying' in error_lines[0]) def test_fix_response_HEAD(self): headers = {'X-Object-Sysmeta-Ec-Content-Length': '10', 'X-Object-Sysmeta-Ec-Etag': 'foo'} # successful HEAD responses = [(200, '', headers)] status_codes, body_iter, headers = zip(*responses) req = swift.common.swob.Request.blank('/v1/a/c/o', method='HEAD') with set_http_connect(*status_codes, body_iter=body_iter, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, '') # 200 OK shows original object content length self.assertEqual(resp.headers['Content-Length'], '10') self.assertEqual(resp.headers['Etag'], 'foo') # not found HEAD responses = [(404, '', {})] * self.replicas() * 2 status_codes, body_iter, headers = zip(*responses) req = swift.common.swob.Request.blank('/v1/a/c/o', method='HEAD') with set_http_connect(*status_codes, body_iter=body_iter, headers=headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 404) # 404 shows actual response body size (i.e. 0 for HEAD) self.assertEqual(resp.headers['Content-Length'], '0') def test_PUT_with_slow_commits(self): # It's important that this timeout be much less than the delay in # the slow commit responses so that the slow commits are not waited # for. 
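# Concretely: post_quorum_timeout is set to 0.01s just below, while the
# slow FakeStatus commit responses sleep for response_sleep (5.0s); once a
# quorum of fast 201s has arrived, the proxy should return in well under
# response_sleep, which the elapsed-time assertion at the end of this test
# verifies.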
self.app.post_quorum_timeout = 0.01 req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') # plenty of slow commits response_sleep = 5.0 codes = [FakeStatus(201, response_sleep=response_sleep) for i in range(self.replicas())] # swap out some with regular fast responses number_of_fast_responses_needed_to_be_quick_enough = \ self.policy.quorum fast_indexes = random.sample( range(self.replicas()), number_of_fast_responses_needed_to_be_quick_enough) for i in fast_indexes: codes[i] = 201 expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): start = time.time() resp = req.get_response(self.app) response_time = time.time() - start self.assertEqual(resp.status_int, 201) self.assertTrue(response_time < response_sleep) def test_PUT_with_just_enough_durable_responses(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [201] * (self.policy.ec_ndata + 1) codes += [503] * (self.policy.ec_nparity - 1) self.assertEqual(len(codes), self.replicas()) random.shuffle(codes) expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_PUT_with_less_durable_responses(self): req = swift.common.swob.Request.blank('/v1/a/c/o', method='PUT', body='') codes = [201] * (self.policy.ec_ndata) codes += [503] * (self.policy.ec_nparity) self.assertEqual(len(codes), self.replicas()) random.shuffle(codes) expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*codes, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 503) def test_COPY_with_ranges(self): req = swift.common.swob.Request.blank( '/v1/a/c/o', method='COPY', headers={'Destination': 'c1/o', 'Range': 'bytes=5-10'}) # turn a real body into fragments segment_size = self.policy.ec_segment_size real_body = ('asdf' * segment_size)[:-10] # split it up into chunks chunks = [real_body[x:x + segment_size] for x in range(0, len(real_body), segment_size)] # we need only first chunk to rebuild 5-10 range fragments = self.policy.pyeclib_driver.encode(chunks[0]) fragment_payloads = [] fragment_payloads.append(fragments) node_fragments = zip(*fragment_payloads) self.assertEqual(len(node_fragments), self.replicas()) # sanity headers = {'X-Object-Sysmeta-Ec-Content-Length': str(len(real_body))} responses = [(200, ''.join(node_fragments[i]), headers) for i in range(POLICIES.default.ec_ndata)] responses += [(201, '', {})] * self.obj_ring.replicas status_codes, body_iter, headers = zip(*responses) expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*status_codes, body_iter=body_iter, headers=headers, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 201) def test_GET_with_invalid_ranges(self): # real body size is segment_size - 10 (just 1 segment) segment_size = self.policy.ec_segment_size real_body = ('a' * segment_size)[:-10] # range is out of real body but in segment size self._test_invalid_ranges('GET', real_body, segment_size, '%s-' % (segment_size - 10)) # range is out of both real body and segment size self._test_invalid_ranges('GET', real_body, segment_size, '%s-' % (segment_size + 10)) def test_COPY_with_invalid_ranges(self): # real body size is segment_size 
- 10 (just 1 segment) segment_size = self.policy.ec_segment_size real_body = ('a' * segment_size)[:-10] # range is out of real body but in segment size self._test_invalid_ranges('COPY', real_body, segment_size, '%s-' % (segment_size - 10)) # range is out of both real body and segment size self._test_invalid_ranges('COPY', real_body, segment_size, '%s-' % (segment_size + 10)) def _test_invalid_ranges(self, method, real_body, segment_size, req_range): # make a request with range starts from more than real size. body_etag = md5(real_body).hexdigest() req = swift.common.swob.Request.blank( '/v1/a/c/o', method=method, headers={'Destination': 'c1/o', 'Range': 'bytes=%s' % (req_range)}) fragments = self.policy.pyeclib_driver.encode(real_body) fragment_payloads = [fragments] node_fragments = zip(*fragment_payloads) self.assertEqual(len(node_fragments), self.replicas()) # sanity headers = {'X-Object-Sysmeta-Ec-Content-Length': str(len(real_body)), 'X-Object-Sysmeta-Ec-Etag': body_etag} start = int(req_range.split('-')[0]) self.assertTrue(start >= 0) # sanity title, exp = swob.RESPONSE_REASONS[416] range_not_satisfiable_body = \ '
<html><h1>%s</h1><p>%s</p></html>
' % (title, exp) if start >= segment_size: responses = [(416, range_not_satisfiable_body, headers) for i in range(POLICIES.default.ec_ndata)] else: responses = [(200, ''.join(node_fragments[i]), headers) for i in range(POLICIES.default.ec_ndata)] status_codes, body_iter, headers = zip(*responses) expect_headers = { 'X-Obj-Metadata-Footer': 'yes', 'X-Obj-Multiphase-Commit': 'yes' } with set_http_connect(*status_codes, body_iter=body_iter, headers=headers, expect_headers=expect_headers): resp = req.get_response(self.app) self.assertEqual(resp.status_int, 416) self.assertEqual(resp.content_length, len(range_not_satisfiable_body)) self.assertEqual(resp.body, range_not_satisfiable_body) self.assertEqual(resp.etag, body_etag) self.assertEqual(resp.headers['Accept-Ranges'], 'bytes') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/proxy/controllers/test_account.py0000664000567000056710000003670713024044354024564 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import unittest from swift.common.swob import Request, Response from swift.common.middleware.acl import format_acl from swift.proxy import server as proxy_server from swift.proxy.controllers.base import headers_to_account_info from swift.common import constraints from test.unit import fake_http_connect, FakeRing, FakeMemcache from swift.common.storage_policy import StoragePolicy from swift.common.request_helpers import get_sys_meta_prefix import swift.proxy.controllers.base from test.unit import patch_policies @patch_policies([StoragePolicy(0, 'zero', True, object_ring=FakeRing())]) class TestAccountController(unittest.TestCase): def setUp(self): self.app = proxy_server.Application( None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) def _make_callback_func(self, context): def callback(ipaddr, port, device, partition, method, path, headers=None, query_string=None, ssl=False): context['method'] = method context['path'] = path context['headers'] = headers or {} return callback def _assert_responses(self, method, test_cases): if method in ('PUT', 'DELETE'): self.app.allow_account_management = True controller = proxy_server.AccountController(self.app, 'AUTH_bob') for responses, expected in test_cases: with mock.patch( 'swift.proxy.controllers.base.http_connect', fake_http_connect(*responses)): req = Request.blank('/v1/AUTH_bob') resp = getattr(controller, method)(req) self.assertEqual(expected, resp.status_int, 'Expected %s but got %s. 
Failed case: %s' % (expected, resp.status_int, str(responses))) def test_account_info_in_response_env(self): controller = proxy_server.AccountController(self.app, 'AUTH_bob') with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, body='')): req = Request.blank('/v1/AUTH_bob', {'PATH_INFO': '/v1/AUTH_bob'}) resp = controller.HEAD(req) self.assertEqual(2, resp.status_int // 100) self.assertTrue('swift.account/AUTH_bob' in resp.environ) self.assertEqual(headers_to_account_info(resp.headers), resp.environ['swift.account/AUTH_bob']) def test_swift_owner(self): owner_headers = { 'x-account-meta-temp-url-key': 'value', 'x-account-meta-temp-url-key-2': 'value'} controller = proxy_server.AccountController(self.app, 'a') req = Request.blank('/v1/a') with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, headers=owner_headers)): resp = controller.HEAD(req) self.assertEqual(2, resp.status_int // 100) for key in owner_headers: self.assertTrue(key not in resp.headers) req = Request.blank('/v1/a', environ={'swift_owner': True}) with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, headers=owner_headers)): resp = controller.HEAD(req) self.assertEqual(2, resp.status_int // 100) for key in owner_headers: self.assertTrue(key in resp.headers) def test_get_deleted_account(self): resp_headers = { 'x-account-status': 'deleted', } controller = proxy_server.AccountController(self.app, 'a') req = Request.blank('/v1/a') with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(404, headers=resp_headers)): resp = controller.HEAD(req) self.assertEqual(410, resp.status_int) def test_long_acct_names(self): long_acct_name = '%sLongAccountName' % ( 'Very' * (constraints.MAX_ACCOUNT_NAME_LENGTH // 4)) controller = proxy_server.AccountController(self.app, long_acct_name) req = Request.blank('/v1/%s' % long_acct_name) with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200)): resp = controller.HEAD(req) self.assertEqual(400, resp.status_int) with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200)): resp = controller.GET(req) self.assertEqual(400, resp.status_int) with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200)): resp = controller.POST(req) self.assertEqual(400, resp.status_int) def test_sys_meta_headers_PUT(self): # check that headers in sys meta namespace make it through # the proxy controller sys_meta_key = '%stest' % get_sys_meta_prefix('account') sys_meta_key = sys_meta_key.title() user_meta_key = 'X-Account-Meta-Test' # allow PUTs to account... 
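# (the proxy's account controller refuses account PUT/DELETE outright when
# allow_account_management is disabled, so without this flag the sysmeta
# header check below would never reach the backend)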
self.app.allow_account_management = True controller = proxy_server.AccountController(self.app, 'a') context = {} callback = self._make_callback_func(context) hdrs_in = {sys_meta_key: 'foo', user_meta_key: 'bar', 'x-timestamp': '1.0'} req = Request.blank('/v1/a', headers=hdrs_in) with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, 200, give_connect=callback)): controller.PUT(req) self.assertEqual(context['method'], 'PUT') self.assertTrue(sys_meta_key in context['headers']) self.assertEqual(context['headers'][sys_meta_key], 'foo') self.assertTrue(user_meta_key in context['headers']) self.assertEqual(context['headers'][user_meta_key], 'bar') self.assertNotEqual(context['headers']['x-timestamp'], '1.0') def test_sys_meta_headers_POST(self): # check that headers in sys meta namespace make it through # the proxy controller sys_meta_key = '%stest' % get_sys_meta_prefix('account') sys_meta_key = sys_meta_key.title() user_meta_key = 'X-Account-Meta-Test' controller = proxy_server.AccountController(self.app, 'a') context = {} callback = self._make_callback_func(context) hdrs_in = {sys_meta_key: 'foo', user_meta_key: 'bar', 'x-timestamp': '1.0'} req = Request.blank('/v1/a', headers=hdrs_in) with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, 200, give_connect=callback)): controller.POST(req) self.assertEqual(context['method'], 'POST') self.assertTrue(sys_meta_key in context['headers']) self.assertEqual(context['headers'][sys_meta_key], 'foo') self.assertTrue(user_meta_key in context['headers']) self.assertEqual(context['headers'][user_meta_key], 'bar') self.assertNotEqual(context['headers']['x-timestamp'], '1.0') def _make_user_and_sys_acl_headers_data(self): acl = { 'admin': ['AUTH_alice', 'AUTH_bob'], 'read-write': ['AUTH_carol'], 'read-only': [], } user_prefix = 'x-account-' # external, user-facing user_headers = {(user_prefix + 'access-control'): format_acl( version=2, acl_dict=acl)} sys_prefix = get_sys_meta_prefix('account') # internal, system-facing sys_headers = {(sys_prefix + 'core-access-control'): format_acl( version=2, acl_dict=acl)} return user_headers, sys_headers def test_account_acl_headers_translated_for_GET_HEAD(self): # Verify that a GET/HEAD which receives X-Account-Sysmeta-Acl-* headers # from the account server will remap those headers to X-Account-Acl-* hdrs_ext, hdrs_int = self._make_user_and_sys_acl_headers_data() controller = proxy_server.AccountController(self.app, 'acct') for verb in ('GET', 'HEAD'): req = Request.blank('/v1/acct', environ={'swift_owner': True}) controller.GETorHEAD_base = lambda *_: Response( headers=hdrs_int, environ={ 'PATH_INFO': '/acct', 'REQUEST_METHOD': verb, }) method = getattr(controller, verb) resp = method(req) for header, value in hdrs_ext.items(): if value: self.assertEqual(resp.headers.get(header), value) else: # blank ACLs should result in no header self.assertTrue(header not in resp.headers) def test_add_acls_impossible_cases(self): # For test coverage: verify that defensive coding does defend, in cases # that shouldn't arise naturally # add_acls should do nothing if REQUEST_METHOD isn't HEAD/GET/PUT/POST resp = Response() controller = proxy_server.AccountController(self.app, 'a') resp.environ['PATH_INFO'] = '/a' resp.environ['REQUEST_METHOD'] = 'OPTIONS' controller.add_acls_from_sys_metadata(resp) self.assertEqual(1, len(resp.headers)) # we always get Content-Type self.assertEqual(2, len(resp.environ)) def test_memcache_key_impossible_cases(self): # For test coverage: verify 
that defensive coding does defend, in cases # that shouldn't arise naturally self.assertRaises( ValueError, lambda: swift.proxy.controllers.base.get_container_memcache_key( '/a', None)) def test_stripping_swift_admin_headers(self): # Verify that a GET/HEAD which receives privileged headers from the # account server will strip those headers for non-swift_owners headers = { 'x-account-meta-harmless': 'hi mom', 'x-account-meta-temp-url-key': 's3kr1t', } controller = proxy_server.AccountController(self.app, 'acct') for verb in ('GET', 'HEAD'): for env in ({'swift_owner': True}, {'swift_owner': False}): req = Request.blank('/v1/acct', environ=env) controller.GETorHEAD_base = lambda *_: Response( headers=headers, environ={ 'PATH_INFO': '/acct', 'REQUEST_METHOD': verb, }) method = getattr(controller, verb) resp = method(req) self.assertEqual(resp.headers.get('x-account-meta-harmless'), 'hi mom') privileged_header_present = ( 'x-account-meta-temp-url-key' in resp.headers) self.assertEqual(privileged_header_present, env['swift_owner']) def test_response_code_for_PUT(self): PUT_TEST_CASES = [ ((201, 201, 201), 201), ((201, 201, 404), 201), ((201, 201, 503), 201), ((201, 404, 404), 404), ((201, 404, 503), 503), ((201, 503, 503), 503), ((404, 404, 404), 404), ((404, 404, 503), 404), ((404, 503, 503), 503), ((503, 503, 503), 503) ] self._assert_responses('PUT', PUT_TEST_CASES) def test_response_code_for_DELETE(self): DELETE_TEST_CASES = [ ((204, 204, 204), 204), ((204, 204, 404), 204), ((204, 204, 503), 204), ((204, 404, 404), 404), ((204, 404, 503), 503), ((204, 503, 503), 503), ((404, 404, 404), 404), ((404, 404, 503), 404), ((404, 503, 503), 503), ((503, 503, 503), 503) ] self._assert_responses('DELETE', DELETE_TEST_CASES) def test_response_code_for_POST(self): POST_TEST_CASES = [ ((204, 204, 204), 204), ((204, 204, 404), 204), ((204, 204, 503), 204), ((204, 404, 404), 404), ((204, 404, 503), 503), ((204, 503, 503), 503), ((404, 404, 404), 404), ((404, 404, 503), 404), ((404, 503, 503), 503), ((503, 503, 503), 503) ] self._assert_responses('POST', POST_TEST_CASES) @patch_policies( [StoragePolicy(0, 'zero', True, object_ring=FakeRing(replicas=4))]) class TestAccountController4Replicas(TestAccountController): def setUp(self): self.app = proxy_server.Application( None, FakeMemcache(), account_ring=FakeRing(replicas=4), container_ring=FakeRing(replicas=4)) def test_response_code_for_PUT(self): PUT_TEST_CASES = [ ((201, 201, 201, 201), 201), ((201, 201, 201, 404), 201), ((201, 201, 201, 503), 201), ((201, 201, 404, 404), 503), ((201, 201, 404, 503), 503), ((201, 201, 503, 503), 503), ((201, 404, 404, 404), 404), ((201, 404, 404, 503), 503), ((201, 404, 503, 503), 503), ((201, 503, 503, 503), 503), ((404, 404, 404, 404), 404), ((404, 404, 404, 503), 404), ((404, 404, 503, 503), 503), ((404, 503, 503, 503), 503), ((503, 503, 503, 503), 503) ] self._assert_responses('PUT', PUT_TEST_CASES) def test_response_code_for_DELETE(self): DELETE_TEST_CASES = [ ((204, 204, 204, 204), 204), ((204, 204, 204, 404), 204), ((204, 204, 204, 503), 204), ((204, 204, 404, 404), 503), ((204, 204, 404, 503), 503), ((204, 204, 503, 503), 503), ((204, 404, 404, 404), 404), ((204, 404, 404, 503), 503), ((204, 404, 503, 503), 503), ((204, 503, 503, 503), 503), ((404, 404, 404, 404), 404), ((404, 404, 404, 503), 404), ((404, 404, 503, 503), 503), ((404, 503, 503, 503), 503), ((503, 503, 503, 503), 503) ] self._assert_responses('DELETE', DELETE_TEST_CASES) def test_response_code_for_POST(self): POST_TEST_CASES = [ ((204, 204, 204, 
204), 204), ((204, 204, 204, 404), 204), ((204, 204, 204, 503), 204), ((204, 204, 404, 404), 503), ((204, 204, 404, 503), 503), ((204, 204, 503, 503), 503), ((204, 404, 404, 404), 404), ((204, 404, 404, 503), 503), ((204, 404, 503, 503), 503), ((204, 503, 503, 503), 503), ((404, 404, 404, 404), 404), ((404, 404, 404, 503), 404), ((404, 404, 503, 503), 503), ((404, 503, 503, 503), 503), ((503, 503, 503, 503), 503) ] self._assert_responses('POST', POST_TEST_CASES) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/proxy/controllers/test_base.py0000664000567000056710000011326613024044354024036 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools from collections import defaultdict import unittest from mock import patch from swift.proxy.controllers.base import headers_to_container_info, \ headers_to_account_info, headers_to_object_info, get_container_info, \ get_container_memcache_key, get_account_info, get_account_memcache_key, \ get_object_env_key, get_info, get_object_info, \ Controller, GetOrHeadHandler, _set_info_cache, _set_object_info_cache, \ bytes_to_skip from swift.common.swob import Request, HTTPException, RESPONSE_REASONS from swift.common import exceptions from swift.common.utils import split_path from swift.common.header_key_dict import HeaderKeyDict from swift.common.http import is_success from swift.common.storage_policy import StoragePolicy, POLICIES from test.unit import fake_http_connect, FakeRing, FakeMemcache from swift.proxy import server as proxy_server from swift.common.request_helpers import get_sys_meta_prefix from test.unit import patch_policies class FakeResponse(object): base_headers = {} def __init__(self, status_int=200, headers=None, body=''): self.status_int = status_int self._headers = headers or {} self.body = body @property def headers(self): if is_success(self.status_int): self._headers.update(self.base_headers) return self._headers class AccountResponse(FakeResponse): base_headers = { 'x-account-container-count': 333, 'x-account-object-count': 1000, 'x-account-bytes-used': 6666, } class ContainerResponse(FakeResponse): base_headers = { 'x-container-object-count': 1000, 'x-container-bytes-used': 6666, } class ObjectResponse(FakeResponse): base_headers = { 'content-length': 5555, 'content-type': 'text/plain' } class DynamicResponseFactory(object): def __init__(self, *statuses): if statuses: self.statuses = iter(statuses) else: self.statuses = itertools.repeat(200) self.stats = defaultdict(int) response_type = { 'obj': ObjectResponse, 'container': ContainerResponse, 'account': AccountResponse, } def _get_response(self, type_): self.stats[type_] += 1 class_ = self.response_type[type_] return class_(next(self.statuses)) def get_response(self, environ): (version, account, container, obj) = split_path( environ['PATH_INFO'], 2, 4, True) if obj: resp = self._get_response('obj') elif container: resp = self._get_response('container') else: resp = 
self._get_response('account') resp.account = account resp.container = container resp.obj = obj return resp class FakeApp(object): recheck_container_existence = 30 recheck_account_existence = 30 def __init__(self, response_factory=None, statuses=None): self.responses = response_factory or \ DynamicResponseFactory(*statuses or []) self.sources = [] def __call__(self, environ, start_response): self.sources.append(environ.get('swift.source')) response = self.responses.get_response(environ) reason = RESPONSE_REASONS[response.status_int][0] start_response('%d %s' % (response.status_int, reason), [(k, v) for k, v in response.headers.items()]) # It's a bit strange, but the get_info cache stuff relies on the # app setting some keys in the environment as it makes requests # (in particular GETorHEAD_base) - so our fake does the same _set_info_cache(self, environ, response.account, response.container, response) if response.obj: _set_object_info_cache(self, environ, response.account, response.container, response.obj, response) return iter(response.body) class FakeCache(FakeMemcache): def __init__(self, stub=None, **pre_cached): super(FakeCache, self).__init__() if pre_cached: self.store.update(pre_cached) self.stub = stub def get(self, key): return self.stub or self.store.get(key) @patch_policies([StoragePolicy(0, 'zero', True, object_ring=FakeRing())]) class TestFuncs(unittest.TestCase): def setUp(self): self.app = proxy_server.Application(None, FakeMemcache(), account_ring=FakeRing(), container_ring=FakeRing()) def test_GETorHEAD_base(self): base = Controller(self.app) req = Request.blank('/v1/a/c/o/with/slashes') ring = FakeRing() nodes = list(ring.get_part_nodes(0)) + list(ring.get_more_nodes(0)) with patch('swift.proxy.controllers.base.' 'http_connect', fake_http_connect(200)): resp = base.GETorHEAD_base(req, 'object', iter(nodes), 'part', '/a/c/o/with/slashes') self.assertTrue('swift.object/a/c/o/with/slashes' in resp.environ) self.assertEqual( resp.environ['swift.object/a/c/o/with/slashes']['status'], 200) req = Request.blank('/v1/a/c/o') with patch('swift.proxy.controllers.base.' 'http_connect', fake_http_connect(200)): resp = base.GETorHEAD_base(req, 'object', iter(nodes), 'part', '/a/c/o') self.assertTrue('swift.object/a/c/o' in resp.environ) self.assertEqual(resp.environ['swift.object/a/c/o']['status'], 200) req = Request.blank('/v1/a/c') with patch('swift.proxy.controllers.base.' 'http_connect', fake_http_connect(200)): resp = base.GETorHEAD_base(req, 'container', iter(nodes), 'part', '/a/c') self.assertTrue('swift.container/a/c' in resp.environ) self.assertEqual(resp.environ['swift.container/a/c']['status'], 200) req = Request.blank('/v1/a') with patch('swift.proxy.controllers.base.' 'http_connect', fake_http_connect(200)): resp = base.GETorHEAD_base(req, 'account', iter(nodes), 'part', '/a') self.assertTrue('swift.account/a' in resp.environ) self.assertEqual(resp.environ['swift.account/a']['status'], 200) # Run the above tests again, but this time with concurrent_reads # turned on policy = next(iter(POLICIES)) concurrent_get_threads = policy.object_ring.replica_count for concurrency_timeout in (0, 2): self.app.concurrency_timeout = concurrency_timeout req = Request.blank('/v1/a/c/o/with/slashes') # NOTE: We are using slow_connect of fake_http_connect as using # a concurrency of 0 when mocking the connection is a little too # fast for eventlet. Network i/o will make this fine, but mocking # it seems is too instantaneous. 
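# i.e. each mocked backend pauses briefly while connecting (the
# slow_connect=True behaviour of fake_http_connect), which gives eventlet a
# chance to actually interleave the concurrent getters instead of the first
# mocked response completing before the others even start.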
with patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, slow_connect=True)): resp = base.GETorHEAD_base( req, 'object', iter(nodes), 'part', '/a/c/o/with/slashes', concurrency=concurrent_get_threads) self.assertTrue('swift.object/a/c/o/with/slashes' in resp.environ) self.assertEqual( resp.environ['swift.object/a/c/o/with/slashes']['status'], 200) req = Request.blank('/v1/a/c/o') with patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, slow_connect=True)): resp = base.GETorHEAD_base( req, 'object', iter(nodes), 'part', '/a/c/o', concurrency=concurrent_get_threads) self.assertTrue('swift.object/a/c/o' in resp.environ) self.assertEqual(resp.environ['swift.object/a/c/o']['status'], 200) req = Request.blank('/v1/a/c') with patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, slow_connect=True)): resp = base.GETorHEAD_base( req, 'container', iter(nodes), 'part', '/a/c', concurrency=concurrent_get_threads) self.assertTrue('swift.container/a/c' in resp.environ) self.assertEqual(resp.environ['swift.container/a/c']['status'], 200) req = Request.blank('/v1/a') with patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, slow_connect=True)): resp = base.GETorHEAD_base( req, 'account', iter(nodes), 'part', '/a', concurrency=concurrent_get_threads) self.assertTrue('swift.account/a' in resp.environ) self.assertEqual(resp.environ['swift.account/a']['status'], 200) def test_get_info(self): app = FakeApp() # Do a non cached call to account env = {} info_a = get_info(app, env, 'a') # Check that you got proper info self.assertEqual(info_a['status'], 200) self.assertEqual(info_a['bytes'], 6666) self.assertEqual(info_a['total_object_count'], 1000) # Make sure the env cache is set self.assertEqual(env.get('swift.account/a'), info_a) # Make sure the app was called self.assertEqual(app.responses.stats['account'], 1) # Do an env cached call to account info_a = get_info(app, env, 'a') # Check that you got proper info self.assertEqual(info_a['status'], 200) self.assertEqual(info_a['bytes'], 6666) self.assertEqual(info_a['total_object_count'], 1000) # Make sure the env cache is set self.assertEqual(env.get('swift.account/a'), info_a) # Make sure the app was NOT called AGAIN self.assertEqual(app.responses.stats['account'], 1) # This time do env cached call to account and non cached to container info_c = get_info(app, env, 'a', 'c') # Check that you got proper info self.assertEqual(info_c['status'], 200) self.assertEqual(info_c['bytes'], 6666) self.assertEqual(info_c['object_count'], 1000) # Make sure the env cache is set self.assertEqual(env.get('swift.account/a'), info_a) self.assertEqual(env.get('swift.container/a/c'), info_c) # Make sure the app was called for container self.assertEqual(app.responses.stats['container'], 1) # This time do a non cached call to account than non cached to # container app = FakeApp() env = {} # abandon previous call to env info_c = get_info(app, env, 'a', 'c') # Check that you got proper info self.assertEqual(info_c['status'], 200) self.assertEqual(info_c['bytes'], 6666) self.assertEqual(info_c['object_count'], 1000) # Make sure the env cache is set self.assertEqual(env.get('swift.account/a'), info_a) self.assertEqual(env.get('swift.container/a/c'), info_c) # check app calls both account and container self.assertEqual(app.responses.stats['account'], 1) self.assertEqual(app.responses.stats['container'], 1) # This time do an env cached call to container while account is not # cached 
del(env['swift.account/a']) info_c = get_info(app, env, 'a', 'c') # Check that you got proper info self.assertEqual(info_a['status'], 200) self.assertEqual(info_c['bytes'], 6666) self.assertEqual(info_c['object_count'], 1000) # Make sure the env cache is set and account still not cached self.assertEqual(env.get('swift.container/a/c'), info_c) # no additional calls were made self.assertEqual(app.responses.stats['account'], 1) self.assertEqual(app.responses.stats['container'], 1) # Do a non cached call to account not found with ret_not_found app = FakeApp(statuses=(404,)) env = {} info_a = get_info(app, env, 'a', ret_not_found=True) # Check that you got proper info self.assertEqual(info_a['status'], 404) self.assertEqual(info_a['bytes'], None) self.assertEqual(info_a['total_object_count'], None) # Make sure the env cache is set self.assertEqual(env.get('swift.account/a'), info_a) # and account was called self.assertEqual(app.responses.stats['account'], 1) # Do a cached call to account not found with ret_not_found info_a = get_info(app, env, 'a', ret_not_found=True) # Check that you got proper info self.assertEqual(info_a['status'], 404) self.assertEqual(info_a['bytes'], None) self.assertEqual(info_a['total_object_count'], None) # Make sure the env cache is set self.assertEqual(env.get('swift.account/a'), info_a) # add account was NOT called AGAIN self.assertEqual(app.responses.stats['account'], 1) # Do a non cached call to account not found without ret_not_found app = FakeApp(statuses=(404,)) env = {} info_a = get_info(app, env, 'a') # Check that you got proper info self.assertEqual(info_a, None) self.assertEqual(env['swift.account/a']['status'], 404) # and account was called self.assertEqual(app.responses.stats['account'], 1) # Do a cached call to account not found without ret_not_found info_a = get_info(None, env, 'a') # Check that you got proper info self.assertEqual(info_a, None) self.assertEqual(env['swift.account/a']['status'], 404) # add account was NOT called AGAIN self.assertEqual(app.responses.stats['account'], 1) def test_get_container_info_swift_source(self): app = FakeApp() req = Request.blank("/v1/a/c", environ={'swift.cache': FakeCache()}) get_container_info(req.environ, app, swift_source='MC') self.assertEqual(app.sources, ['GET_INFO', 'MC']) def test_get_object_info_swift_source(self): app = FakeApp() req = Request.blank("/v1/a/c/o", environ={'swift.cache': FakeCache()}) get_object_info(req.environ, app, swift_source='LU') self.assertEqual(app.sources, ['LU']) def test_get_container_info_no_cache(self): req = Request.blank("/v1/AUTH_account/cont", environ={'swift.cache': FakeCache({})}) resp = get_container_info(req.environ, FakeApp()) self.assertEqual(resp['storage_policy'], '0') self.assertEqual(resp['bytes'], 6666) self.assertEqual(resp['object_count'], 1000) def test_get_container_info_no_account(self): responses = DynamicResponseFactory(404, 200) app = FakeApp(responses) req = Request.blank("/v1/AUTH_does_not_exist/cont") info = get_container_info(req.environ, app) self.assertEqual(info['status'], 0) def test_get_container_info_no_auto_account(self): responses = DynamicResponseFactory(404, 200) app = FakeApp(responses) req = Request.blank("/v1/.system_account/cont") info = get_container_info(req.environ, app) self.assertEqual(info['status'], 200) self.assertEqual(info['bytes'], 6666) self.assertEqual(info['object_count'], 1000) def test_get_container_info_cache(self): cache_stub = { 'status': 404, 'bytes': 3333, 'object_count': 10, 'versions': u"\u1F4A9"} req = 
Request.blank("/v1/account/cont", environ={'swift.cache': FakeCache(cache_stub)}) resp = get_container_info(req.environ, FakeApp()) self.assertEqual(resp['storage_policy'], '0') self.assertEqual(resp['bytes'], 3333) self.assertEqual(resp['object_count'], 10) self.assertEqual(resp['status'], 404) self.assertEqual(resp['versions'], "\xe1\xbd\x8a\x39") def test_get_container_info_env(self): cache_key = get_container_memcache_key("account", "cont") env_key = 'swift.%s' % cache_key req = Request.blank("/v1/account/cont", environ={env_key: {'bytes': 3867}, 'swift.cache': FakeCache({})}) resp = get_container_info(req.environ, 'xxx') self.assertEqual(resp['bytes'], 3867) def test_get_account_info_swift_source(self): app = FakeApp() req = Request.blank("/v1/a", environ={'swift.cache': FakeCache()}) get_account_info(req.environ, app, swift_source='MC') self.assertEqual(app.sources, ['MC']) def test_get_account_info_no_cache(self): app = FakeApp() req = Request.blank("/v1/AUTH_account", environ={'swift.cache': FakeCache({})}) resp = get_account_info(req.environ, app) self.assertEqual(resp['bytes'], 6666) self.assertEqual(resp['total_object_count'], 1000) def test_get_account_info_cache(self): # The original test that we prefer to preserve cached = {'status': 404, 'bytes': 3333, 'total_object_count': 10} req = Request.blank("/v1/account/cont", environ={'swift.cache': FakeCache(cached)}) resp = get_account_info(req.environ, FakeApp()) self.assertEqual(resp['bytes'], 3333) self.assertEqual(resp['total_object_count'], 10) self.assertEqual(resp['status'], 404) # Here is a more realistic test cached = {'status': 404, 'bytes': '3333', 'container_count': '234', 'total_object_count': '10', 'meta': {}} req = Request.blank("/v1/account/cont", environ={'swift.cache': FakeCache(cached)}) resp = get_account_info(req.environ, FakeApp()) self.assertEqual(resp['status'], 404) self.assertEqual(resp['bytes'], '3333') self.assertEqual(resp['container_count'], 234) self.assertEqual(resp['meta'], {}) self.assertEqual(resp['total_object_count'], '10') def test_get_account_info_env(self): cache_key = get_account_memcache_key("account") env_key = 'swift.%s' % cache_key req = Request.blank("/v1/account", environ={env_key: {'bytes': 3867}, 'swift.cache': FakeCache({})}) resp = get_account_info(req.environ, 'xxx') self.assertEqual(resp['bytes'], 3867) def test_get_object_info_env(self): cached = {'status': 200, 'length': 3333, 'type': 'application/json', 'meta': {}} env_key = get_object_env_key("account", "cont", "obj") req = Request.blank("/v1/account/cont/obj", environ={env_key: cached, 'swift.cache': FakeCache({})}) resp = get_object_info(req.environ, 'xxx') self.assertEqual(resp['length'], 3333) self.assertEqual(resp['type'], 'application/json') def test_get_object_info_no_env(self): app = FakeApp() req = Request.blank("/v1/account/cont/obj", environ={'swift.cache': FakeCache({})}) resp = get_object_info(req.environ, app) self.assertEqual(app.responses.stats['account'], 0) self.assertEqual(app.responses.stats['container'], 0) self.assertEqual(app.responses.stats['obj'], 1) self.assertEqual(resp['length'], 5555) self.assertEqual(resp['type'], 'text/plain') def test_options(self): base = Controller(self.app) base.account_name = 'a' base.container_name = 'c' origin = 'http://m.com' self.app.cors_allow_origin = [origin] req = Request.blank('/v1/a/c/o', environ={'swift.cache': FakeCache()}, headers={'Origin': origin, 'Access-Control-Request-Method': 'GET'}) with patch('swift.proxy.controllers.base.' 
'http_connect', fake_http_connect(200)): resp = base.OPTIONS(req) self.assertEqual(resp.status_int, 200) def test_options_with_null_allow_origin(self): base = Controller(self.app) base.account_name = 'a' base.container_name = 'c' def my_container_info(*args): return { 'cors': { 'allow_origin': '*', } } base.container_info = my_container_info req = Request.blank('/v1/a/c/o', environ={'swift.cache': FakeCache()}, headers={'Origin': '*', 'Access-Control-Request-Method': 'GET'}) with patch('swift.proxy.controllers.base.' 'http_connect', fake_http_connect(200)): resp = base.OPTIONS(req) self.assertEqual(resp.status_int, 200) def test_options_unauthorized(self): base = Controller(self.app) base.account_name = 'a' base.container_name = 'c' self.app.cors_allow_origin = ['http://NOT_IT'] req = Request.blank('/v1/a/c/o', environ={'swift.cache': FakeCache()}, headers={'Origin': 'http://m.com', 'Access-Control-Request-Method': 'GET'}) with patch('swift.proxy.controllers.base.' 'http_connect', fake_http_connect(200)): resp = base.OPTIONS(req) self.assertEqual(resp.status_int, 401) def test_headers_to_container_info_missing(self): resp = headers_to_container_info({}, 404) self.assertEqual(resp['status'], 404) self.assertEqual(resp['read_acl'], None) self.assertEqual(resp['write_acl'], None) def test_headers_to_container_info_meta(self): headers = {'X-Container-Meta-Whatevs': 14, 'x-container-meta-somethingelse': 0} resp = headers_to_container_info(headers.items(), 200) self.assertEqual(len(resp['meta']), 2) self.assertEqual(resp['meta']['whatevs'], 14) self.assertEqual(resp['meta']['somethingelse'], 0) def test_headers_to_container_info_sys_meta(self): prefix = get_sys_meta_prefix('container') headers = {'%sWhatevs' % prefix: 14, '%ssomethingelse' % prefix: 0} resp = headers_to_container_info(headers.items(), 200) self.assertEqual(len(resp['sysmeta']), 2) self.assertEqual(resp['sysmeta']['whatevs'], 14) self.assertEqual(resp['sysmeta']['somethingelse'], 0) def test_headers_to_container_info_values(self): headers = { 'x-container-read': 'readvalue', 'x-container-write': 'writevalue', 'x-container-sync-key': 'keyvalue', 'x-container-meta-access-control-allow-origin': 'here', } resp = headers_to_container_info(headers.items(), 200) self.assertEqual(resp['read_acl'], 'readvalue') self.assertEqual(resp['write_acl'], 'writevalue') self.assertEqual(resp['cors']['allow_origin'], 'here') headers['x-unused-header'] = 'blahblahblah' self.assertEqual( resp, headers_to_container_info(headers.items(), 200)) def test_container_info_without_req(self): base = Controller(self.app) base.account_name = 'a' base.container_name = 'c' container_info = \ base.container_info(base.account_name, base.container_name) self.assertEqual(container_info['status'], 0) def test_headers_to_account_info_missing(self): resp = headers_to_account_info({}, 404) self.assertEqual(resp['status'], 404) self.assertEqual(resp['bytes'], None) self.assertEqual(resp['container_count'], None) def test_headers_to_account_info_meta(self): headers = {'X-Account-Meta-Whatevs': 14, 'x-account-meta-somethingelse': 0} resp = headers_to_account_info(headers.items(), 200) self.assertEqual(len(resp['meta']), 2) self.assertEqual(resp['meta']['whatevs'], 14) self.assertEqual(resp['meta']['somethingelse'], 0) def test_headers_to_account_info_sys_meta(self): prefix = get_sys_meta_prefix('account') headers = {'%sWhatevs' % prefix: 14, '%ssomethingelse' % prefix: 0} resp = headers_to_account_info(headers.items(), 200) self.assertEqual(len(resp['sysmeta']), 2) 
self.assertEqual(resp['sysmeta']['whatevs'], 14) self.assertEqual(resp['sysmeta']['somethingelse'], 0) def test_headers_to_account_info_values(self): headers = { 'x-account-object-count': '10', 'x-account-container-count': '20', } resp = headers_to_account_info(headers.items(), 200) self.assertEqual(resp['total_object_count'], '10') self.assertEqual(resp['container_count'], '20') headers['x-unused-header'] = 'blahblahblah' self.assertEqual( resp, headers_to_account_info(headers.items(), 200)) def test_headers_to_object_info_missing(self): resp = headers_to_object_info({}, 404) self.assertEqual(resp['status'], 404) self.assertEqual(resp['length'], None) self.assertEqual(resp['etag'], None) def test_headers_to_object_info_meta(self): headers = {'X-Object-Meta-Whatevs': 14, 'x-object-meta-somethingelse': 0} resp = headers_to_object_info(headers.items(), 200) self.assertEqual(len(resp['meta']), 2) self.assertEqual(resp['meta']['whatevs'], 14) self.assertEqual(resp['meta']['somethingelse'], 0) def test_headers_to_object_info_sys_meta(self): prefix = get_sys_meta_prefix('object') headers = {'%sWhatevs' % prefix: 14, '%ssomethingelse' % prefix: 0} resp = headers_to_object_info(headers.items(), 200) self.assertEqual(len(resp['sysmeta']), 2) self.assertEqual(resp['sysmeta']['whatevs'], 14) self.assertEqual(resp['sysmeta']['somethingelse'], 0) def test_headers_to_object_info_values(self): headers = { 'content-length': '1024', 'content-type': 'application/json', } resp = headers_to_object_info(headers.items(), 200) self.assertEqual(resp['length'], '1024') self.assertEqual(resp['type'], 'application/json') headers['x-unused-header'] = 'blahblahblah' self.assertEqual( resp, headers_to_object_info(headers.items(), 200)) def test_base_have_quorum(self): base = Controller(self.app) # just throw a bunch of test cases at it self.assertEqual(base.have_quorum([201, 404], 3), False) self.assertEqual(base.have_quorum([201, 201], 4), False) self.assertEqual(base.have_quorum([201, 201, 404, 404], 4), False) self.assertEqual(base.have_quorum([201, 503, 503, 201], 4), False) self.assertEqual(base.have_quorum([201, 201], 3), True) self.assertEqual(base.have_quorum([404, 404], 3), True) self.assertEqual(base.have_quorum([201, 201], 2), True) self.assertEqual(base.have_quorum([404, 404], 2), True) self.assertEqual(base.have_quorum([201, 404, 201, 201], 4), True) def test_best_response_overrides(self): base = Controller(self.app) responses = [ (302, 'Found', '', 'The resource has moved temporarily.'), (100, 'Continue', '', ''), (404, 'Not Found', '', 'Custom body'), ] server_type = "Base DELETE" req = Request.blank('/v1/a/c/o', method='DELETE') statuses, reasons, headers, bodies = zip(*responses) # First test that you can't make a quorum with only overridden # responses overrides = {302: 204, 100: 204} resp = base.best_response(req, statuses, reasons, bodies, server_type, headers=headers, overrides=overrides) self.assertEqual(resp.status, '503 Service Unavailable') # next make a 404 quorum and make sure the last delete (real) 404 # status is the one returned. 
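# With only the 100 Continue overridden to 404, the effective statuses are
# (302, 404, 404): two matching responses out of three form a quorum, so the
# reason and body of the real (non-overridden) 404 are the ones returned.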
overrides = {100: 404} resp = base.best_response(req, statuses, reasons, bodies, server_type, headers=headers, overrides=overrides) self.assertEqual(resp.status, '404 Not Found') self.assertEqual(resp.body, 'Custom body') def test_range_fast_forward(self): req = Request.blank('/') handler = GetOrHeadHandler(None, req, None, None, None, None, {}) handler.fast_forward(50) self.assertEqual(handler.backend_headers['Range'], 'bytes=50-') handler = GetOrHeadHandler(None, req, None, None, None, None, {'Range': 'bytes=23-50'}) handler.fast_forward(20) self.assertEqual(handler.backend_headers['Range'], 'bytes=43-50') self.assertRaises(HTTPException, handler.fast_forward, 80) handler = GetOrHeadHandler(None, req, None, None, None, None, {'Range': 'bytes=23-'}) handler.fast_forward(20) self.assertEqual(handler.backend_headers['Range'], 'bytes=43-') handler = GetOrHeadHandler(None, req, None, None, None, None, {'Range': 'bytes=-100'}) handler.fast_forward(20) self.assertEqual(handler.backend_headers['Range'], 'bytes=-80') def test_transfer_headers_with_sysmeta(self): base = Controller(self.app) good_hdrs = {'x-base-sysmeta-foo': 'ok', 'X-Base-sysmeta-Bar': 'also ok'} bad_hdrs = {'x-base-sysmeta-': 'too short'} hdrs = dict(good_hdrs) hdrs.update(bad_hdrs) dst_hdrs = HeaderKeyDict() base.transfer_headers(hdrs, dst_hdrs) self.assertEqual(HeaderKeyDict(good_hdrs), dst_hdrs) def test_generate_request_headers(self): base = Controller(self.app) src_headers = {'x-remove-base-meta-owner': 'x', 'x-base-meta-size': '151M', 'new-owner': 'Kun'} req = Request.blank('/v1/a/c/o', headers=src_headers) dst_headers = base.generate_request_headers(req, transfer=True) expected_headers = {'x-base-meta-owner': '', 'x-base-meta-size': '151M', 'connection': 'close'} for k, v in expected_headers.items(): self.assertTrue(k in dst_headers) self.assertEqual(v, dst_headers[k]) self.assertFalse('new-owner' in dst_headers) def test_generate_request_headers_with_sysmeta(self): base = Controller(self.app) good_hdrs = {'x-base-sysmeta-foo': 'ok', 'X-Base-sysmeta-Bar': 'also ok'} bad_hdrs = {'x-base-sysmeta-': 'too short'} hdrs = dict(good_hdrs) hdrs.update(bad_hdrs) req = Request.blank('/v1/a/c/o', headers=hdrs) dst_headers = base.generate_request_headers(req, transfer=True) for k, v in good_hdrs.items(): self.assertTrue(k.lower() in dst_headers) self.assertEqual(v, dst_headers[k.lower()]) for k, v in bad_hdrs.items(): self.assertFalse(k.lower() in dst_headers) def test_generate_request_headers_with_no_orig_req(self): base = Controller(self.app) src_headers = {'x-remove-base-meta-owner': 'x', 'x-base-meta-size': '151M', 'new-owner': 'Kun'} dst_headers = base.generate_request_headers(None, additional=src_headers) expected_headers = {'x-base-meta-size': '151M', 'connection': 'close'} for k, v in expected_headers.items(): self.assertIn(k, dst_headers) self.assertEqual(v, dst_headers[k]) self.assertEqual('', dst_headers['Referer']) def test_client_chunk_size(self): class TestSource(object): def __init__(self, chunks): self.chunks = list(chunks) self.status = 200 def read(self, _read_size): if self.chunks: return self.chunks.pop(0) else: return '' def getheader(self, header): if header.lower() == "content-length": return str(sum(len(c) for c in self.chunks)) def getheaders(self): return [('content-length', self.getheader('content-length'))] source = TestSource(( 'abcd', '1234', 'abc', 'd1', '234abcd1234abcd1', '2')) req = Request.blank('/v1/a/c/o') node = {} handler = GetOrHeadHandler(self.app, req, None, None, None, None, {}, 
client_chunk_size=8) app_iter = handler._make_app_iter(req, node, source) client_chunks = list(app_iter) self.assertEqual(client_chunks, [ 'abcd1234', 'abcd1234', 'abcd1234', 'abcd12']) def test_client_chunk_size_resuming(self): class TestSource(object): def __init__(self, chunks): self.chunks = list(chunks) self.status = 200 def read(self, _read_size): if self.chunks: chunk = self.chunks.pop(0) if chunk is None: raise exceptions.ChunkReadTimeout() else: return chunk else: return '' def getheader(self, header): if header.lower() == "content-length": return str(sum(len(c) for c in self.chunks if c is not None)) def getheaders(self): return [('content-length', self.getheader('content-length'))] node = {'ip': '1.2.3.4', 'port': 6000, 'device': 'sda'} source1 = TestSource(['abcd', '1234', 'abc', None]) source2 = TestSource(['efgh5678']) req = Request.blank('/v1/a/c/o') handler = GetOrHeadHandler( self.app, req, 'Object', None, None, None, {}, client_chunk_size=8) app_iter = handler._make_app_iter(req, node, source1) with patch.object(handler, '_get_source_and_node', lambda: (source2, node)): client_chunks = list(app_iter) self.assertEqual(client_chunks, ['abcd1234', 'efgh5678']) def test_client_chunk_size_resuming_chunked(self): class TestChunkedSource(object): def __init__(self, chunks): self.chunks = list(chunks) self.status = 200 self.headers = {'transfer-encoding': 'chunked', 'content-type': 'text/plain'} def read(self, _read_size): if self.chunks: chunk = self.chunks.pop(0) if chunk is None: raise exceptions.ChunkReadTimeout() else: return chunk else: return '' def getheader(self, header): return self.headers.get(header.lower()) def getheaders(self): return self.headers node = {'ip': '1.2.3.4', 'port': 6000, 'device': 'sda'} source1 = TestChunkedSource(['abcd', '1234', 'abc', None]) source2 = TestChunkedSource(['efgh5678']) req = Request.blank('/v1/a/c/o') handler = GetOrHeadHandler( self.app, req, 'Object', None, None, None, {}, client_chunk_size=8) app_iter = handler._make_app_iter(req, node, source1) with patch.object(handler, '_get_source_and_node', lambda: (source2, node)): client_chunks = list(app_iter) self.assertEqual(client_chunks, ['abcd1234', 'efgh5678']) def test_bytes_to_skip(self): # if you start at the beginning, skip nothing self.assertEqual(bytes_to_skip(1024, 0), 0) # missed the first 10 bytes, so we've got 1014 bytes of partial # record self.assertEqual(bytes_to_skip(1024, 10), 1014) # skipped some whole records first self.assertEqual(bytes_to_skip(1024, 4106), 1014) # landed on a record boundary self.assertEqual(bytes_to_skip(1024, 1024), 0) self.assertEqual(bytes_to_skip(1024, 2048), 0) # big numbers self.assertEqual(bytes_to_skip(2 ** 20, 2 ** 32), 0) self.assertEqual(bytes_to_skip(2 ** 20, 2 ** 32 + 1), 2 ** 20 - 1) self.assertEqual(bytes_to_skip(2 ** 20, 2 ** 32 + 2 ** 19), 2 ** 19) # odd numbers self.assertEqual(bytes_to_skip(123, 0), 0) self.assertEqual(bytes_to_skip(123, 23), 100) self.assertEqual(bytes_to_skip(123, 247), 122) # prime numbers self.assertEqual(bytes_to_skip(11, 7), 4) self.assertEqual(bytes_to_skip(97, 7873823), 55) swift-2.7.1/test/unit/proxy/controllers/test_container.py0000664000567000056710000003320513024044354025100 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import unittest from eventlet import Timeout from swift.common.swob import Request from swift.proxy import server as proxy_server from swift.proxy.controllers.base import headers_to_container_info from test.unit import fake_http_connect, FakeRing, FakeMemcache from swift.common.storage_policy import StoragePolicy from swift.common.request_helpers import get_sys_meta_prefix from test.unit import patch_policies, mocked_http_conn, debug_logger from test.unit.common.ring.test_ring import TestRingBase from test.unit.proxy.test_server import node_error_count @patch_policies([StoragePolicy(0, 'zero', True, object_ring=FakeRing())]) class TestContainerController(TestRingBase): CONTAINER_REPLICAS = 3 def setUp(self): TestRingBase.setUp(self) self.logger = debug_logger() self.container_ring = FakeRing(replicas=self.CONTAINER_REPLICAS, max_more_nodes=9) self.app = proxy_server.Application(None, FakeMemcache(), logger=self.logger, account_ring=FakeRing(), container_ring=self.container_ring) self.account_info = { 'status': 200, 'container_count': '10', 'total_object_count': '100', 'bytes': '1000', 'meta': {}, 'sysmeta': {}, } class FakeAccountInfoContainerController( proxy_server.ContainerController): def account_info(controller, *args, **kwargs): patch_path = 'swift.proxy.controllers.base.get_info' with mock.patch(patch_path) as mock_get_info: mock_get_info.return_value = dict(self.account_info) return super(FakeAccountInfoContainerController, controller).account_info( *args, **kwargs) _orig_get_controller = self.app.get_controller def wrapped_get_controller(*args, **kwargs): with mock.patch('swift.proxy.server.ContainerController', new=FakeAccountInfoContainerController): return _orig_get_controller(*args, **kwargs) self.app.get_controller = wrapped_get_controller def _make_callback_func(self, context): def callback(ipaddr, port, device, partition, method, path, headers=None, query_string=None, ssl=False): context['method'] = method context['path'] = path context['headers'] = headers or {} return callback def _assert_responses(self, method, test_cases): controller = proxy_server.ContainerController(self.app, 'a', 'c') for responses, expected in test_cases: with mock.patch( 'swift.proxy.controllers.base.http_connect', fake_http_connect(*responses)): req = Request.blank('/v1/a/c') resp = getattr(controller, method)(req) self.assertEqual(expected, resp.status_int, 'Expected %s but got %s. 
Failed case: %s' % (expected, resp.status_int, str(responses))) def test_container_info_in_response_env(self): controller = proxy_server.ContainerController(self.app, 'a', 'c') with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, 200, body='')): req = Request.blank('/v1/a/c', {'PATH_INFO': '/v1/a/c'}) resp = controller.HEAD(req) self.assertEqual(2, resp.status_int // 100) self.assertTrue("swift.container/a/c" in resp.environ) self.assertEqual(headers_to_container_info(resp.headers), resp.environ['swift.container/a/c']) def test_swift_owner(self): owner_headers = { 'x-container-read': 'value', 'x-container-write': 'value', 'x-container-sync-key': 'value', 'x-container-sync-to': 'value'} controller = proxy_server.ContainerController(self.app, 'a', 'c') req = Request.blank('/v1/a/c') with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, 200, headers=owner_headers)): resp = controller.HEAD(req) self.assertEqual(2, resp.status_int // 100) for key in owner_headers: self.assertTrue(key not in resp.headers) req = Request.blank('/v1/a/c', environ={'swift_owner': True}) with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, 200, headers=owner_headers)): resp = controller.HEAD(req) self.assertEqual(2, resp.status_int // 100) for key in owner_headers: self.assertTrue(key in resp.headers) def test_sys_meta_headers_PUT(self): # check that headers in sys meta namespace make it through # the container controller sys_meta_key = '%stest' % get_sys_meta_prefix('container') sys_meta_key = sys_meta_key.title() user_meta_key = 'X-Container-Meta-Test' controller = proxy_server.ContainerController(self.app, 'a', 'c') context = {} callback = self._make_callback_func(context) hdrs_in = {sys_meta_key: 'foo', user_meta_key: 'bar', 'x-timestamp': '1.0'} req = Request.blank('/v1/a/c', headers=hdrs_in) with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, 200, give_connect=callback)): controller.PUT(req) self.assertEqual(context['method'], 'PUT') self.assertTrue(sys_meta_key in context['headers']) self.assertEqual(context['headers'][sys_meta_key], 'foo') self.assertTrue(user_meta_key in context['headers']) self.assertEqual(context['headers'][user_meta_key], 'bar') self.assertNotEqual(context['headers']['x-timestamp'], '1.0') def test_sys_meta_headers_POST(self): # check that headers in sys meta namespace make it through # the container controller sys_meta_key = '%stest' % get_sys_meta_prefix('container') sys_meta_key = sys_meta_key.title() user_meta_key = 'X-Container-Meta-Test' controller = proxy_server.ContainerController(self.app, 'a', 'c') context = {} callback = self._make_callback_func(context) hdrs_in = {sys_meta_key: 'foo', user_meta_key: 'bar', 'x-timestamp': '1.0'} req = Request.blank('/v1/a/c', headers=hdrs_in) with mock.patch('swift.proxy.controllers.base.http_connect', fake_http_connect(200, 200, give_connect=callback)): controller.POST(req) self.assertEqual(context['method'], 'POST') self.assertTrue(sys_meta_key in context['headers']) self.assertEqual(context['headers'][sys_meta_key], 'foo') self.assertTrue(user_meta_key in context['headers']) self.assertEqual(context['headers'][user_meta_key], 'bar') self.assertNotEqual(context['headers']['x-timestamp'], '1.0') def test_node_errors(self): self.app.sort_nodes = lambda n: n for method in ('PUT', 'DELETE', 'POST'): def test_status_map(statuses, expected): self.app._error_limiting = {} req = Request.blank('/v1/a/c', method=method) with 
mocked_http_conn(*statuses) as fake_conn: resp = req.get_response(self.app) self.assertEqual(resp.status_int, expected) for req in fake_conn.requests: self.assertEqual(req['method'], method) self.assertTrue(req['path'].endswith('/a/c')) base_status = [201] * 3 # test happy path test_status_map(list(base_status), 201) for i in range(3): self.assertEqual(node_error_count( self.app, self.container_ring.devs[i]), 0) # single node errors and test isolation for i in range(3): status_list = list(base_status) status_list[i] = 503 status_list.append(201) test_status_map(status_list, 201) for j in range(3): expected = 1 if j == i else 0 self.assertEqual(node_error_count( self.app, self.container_ring.devs[j]), expected) # timeout test_status_map((201, Timeout(), 201, 201), 201) self.assertEqual(node_error_count( self.app, self.container_ring.devs[1]), 1) # exception test_status_map((Exception('kaboom!'), 201, 201, 201), 201) self.assertEqual(node_error_count( self.app, self.container_ring.devs[0]), 1) # insufficient storage test_status_map((201, 201, 507, 201), 201) self.assertEqual(node_error_count( self.app, self.container_ring.devs[2]), self.app.error_suppression_limit + 1) def test_response_code_for_PUT(self): PUT_TEST_CASES = [ ((201, 201, 201), 201), ((201, 201, 404), 201), ((201, 201, 503), 201), ((201, 404, 404), 404), ((201, 404, 503), 503), ((201, 503, 503), 503), ((404, 404, 404), 404), ((404, 404, 503), 404), ((404, 503, 503), 503), ((503, 503, 503), 503) ] self._assert_responses('PUT', PUT_TEST_CASES) def test_response_code_for_DELETE(self): DELETE_TEST_CASES = [ ((204, 204, 204), 204), ((204, 204, 404), 204), ((204, 204, 503), 204), ((204, 404, 404), 404), ((204, 404, 503), 503), ((204, 503, 503), 503), ((404, 404, 404), 404), ((404, 404, 503), 404), ((404, 503, 503), 503), ((503, 503, 503), 503) ] self._assert_responses('DELETE', DELETE_TEST_CASES) def test_response_code_for_POST(self): POST_TEST_CASES = [ ((204, 204, 204), 204), ((204, 204, 404), 204), ((204, 204, 503), 204), ((204, 404, 404), 404), ((204, 404, 503), 503), ((204, 503, 503), 503), ((404, 404, 404), 404), ((404, 404, 503), 404), ((404, 503, 503), 503), ((503, 503, 503), 503) ] self._assert_responses('POST', POST_TEST_CASES) @patch_policies( [StoragePolicy(0, 'zero', True, object_ring=FakeRing(replicas=4))]) class TestContainerController4Replicas(TestContainerController): CONTAINER_REPLICAS = 4 def test_response_code_for_PUT(self): PUT_TEST_CASES = [ ((201, 201, 201, 201), 201), ((201, 201, 201, 404), 201), ((201, 201, 201, 503), 201), ((201, 201, 404, 404), 503), ((201, 201, 404, 503), 503), ((201, 201, 503, 503), 503), ((201, 404, 404, 404), 404), ((201, 404, 404, 503), 503), ((201, 404, 503, 503), 503), ((201, 503, 503, 503), 503), ((404, 404, 404, 404), 404), ((404, 404, 404, 503), 404), ((404, 404, 503, 503), 503), ((404, 503, 503, 503), 503), ((503, 503, 503, 503), 503) ] self._assert_responses('PUT', PUT_TEST_CASES) def test_response_code_for_DELETE(self): DELETE_TEST_CASES = [ ((204, 204, 204, 204), 204), ((204, 204, 204, 404), 204), ((204, 204, 204, 503), 204), ((204, 204, 404, 404), 503), ((204, 204, 404, 503), 503), ((204, 204, 503, 503), 503), ((204, 404, 404, 404), 404), ((204, 404, 404, 503), 503), ((204, 404, 503, 503), 503), ((204, 503, 503, 503), 503), ((404, 404, 404, 404), 404), ((404, 404, 404, 503), 404), ((404, 404, 503, 503), 503), ((404, 503, 503, 503), 503), ((503, 503, 503, 503), 503) ] self._assert_responses('DELETE', DELETE_TEST_CASES) def test_response_code_for_POST(self): POST_TEST_CASES = 
[ ((204, 204, 204, 204), 204), ((204, 204, 204, 404), 204), ((204, 204, 204, 503), 204), ((204, 204, 404, 404), 503), ((204, 204, 404, 503), 503), ((204, 204, 503, 503), 503), ((204, 404, 404, 404), 404), ((204, 404, 404, 503), 503), ((204, 404, 503, 503), 503), ((204, 503, 503, 503), 503), ((404, 404, 404, 404), 404), ((404, 404, 404, 503), 404), ((404, 404, 503, 503), 503), ((404, 503, 503, 503), 503), ((503, 503, 503, 503), 503) ] self._assert_responses('POST', POST_TEST_CASES) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/proxy/controllers/test_info.py0000664000567000056710000003033113024044354024046 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import json import unittest import time from mock import Mock from swift.proxy.controllers import InfoController from swift.proxy.server import Application as ProxyApp from swift.common import utils from swift.common.swob import Request, HTTPException class TestInfoController(unittest.TestCase): def setUp(self): utils._swift_info = {} utils._swift_admin_info = {} def get_controller(self, expose_info=None, disallowed_sections=None, admin_key=None): disallowed_sections = disallowed_sections or [] app = Mock(spec=ProxyApp) return InfoController(app, None, expose_info, disallowed_sections, admin_key) def start_response(self, status, headers): self.got_statuses.append(status) for h in headers: self.got_headers.append({h[0]: h[1]}) def test_disabled_info(self): controller = self.get_controller(expose_info=False) req = Request.blank( '/info', environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('403 Forbidden', str(resp)) def test_get_info(self): controller = self.get_controller(expose_info=True) utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} req = Request.blank( '/info', environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('200 OK', str(resp)) info = json.loads(resp.body) self.assertTrue('admin' not in info) self.assertTrue('foo' in info) self.assertTrue('bar' in info['foo']) self.assertEqual(info['foo']['bar'], 'baz') def test_options_info(self): controller = self.get_controller(expose_info=True) req = Request.blank( '/info', environ={'REQUEST_METHOD': 'GET'}) resp = controller.OPTIONS(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('200 OK', str(resp)) self.assertTrue('Allow' in resp.headers) def test_get_info_cors(self): controller = self.get_controller(expose_info=True) utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} req = Request.blank( '/info', environ={'REQUEST_METHOD': 'GET'}, headers={'Origin': 'http://example.com'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('200 OK', str(resp)) info = json.loads(resp.body) self.assertTrue('admin' not in 
info) self.assertTrue('foo' in info) self.assertTrue('bar' in info['foo']) self.assertEqual(info['foo']['bar'], 'baz') self.assertTrue('Access-Control-Allow-Origin' in resp.headers) self.assertTrue('Access-Control-Expose-Headers' in resp.headers) def test_head_info(self): controller = self.get_controller(expose_info=True) utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} req = Request.blank( '/info', environ={'REQUEST_METHOD': 'HEAD'}) resp = controller.HEAD(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('200 OK', str(resp)) def test_disallow_info(self): controller = self.get_controller(expose_info=True, disallowed_sections=['foo2']) utils._swift_info = {'foo': {'bar': 'baz'}, 'foo2': {'bar2': 'baz2'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} req = Request.blank( '/info', environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('200 OK', str(resp)) info = json.loads(resp.body) self.assertTrue('foo' in info) self.assertTrue('bar' in info['foo']) self.assertEqual(info['foo']['bar'], 'baz') self.assertTrue('foo2' not in info) def test_disabled_admin_info(self): controller = self.get_controller(expose_info=True, admin_key='') utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} expires = int(time.time() + 86400) sig = utils.get_hmac('GET', '/info', expires, '') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('403 Forbidden', str(resp)) def test_get_admin_info(self): controller = self.get_controller(expose_info=True, admin_key='secret-admin-key') utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} expires = int(time.time() + 86400) sig = utils.get_hmac('GET', '/info', expires, 'secret-admin-key') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('200 OK', str(resp)) info = json.loads(resp.body) self.assertTrue('admin' in info) self.assertTrue('qux' in info['admin']) self.assertTrue('quux' in info['admin']['qux']) self.assertEqual(info['admin']['qux']['quux'], 'corge') def test_head_admin_info(self): controller = self.get_controller(expose_info=True, admin_key='secret-admin-key') utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} expires = int(time.time() + 86400) sig = utils.get_hmac('GET', '/info', expires, 'secret-admin-key') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'HEAD'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('200 OK', str(resp)) expires = int(time.time() + 86400) sig = utils.get_hmac('HEAD', '/info', expires, 'secret-admin-key') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'HEAD'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('200 OK', str(resp)) def test_get_admin_info_invalid_method(self): 
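# The signature below is deliberately computed for the HEAD method while the
# request is issued with GET, so the HMAC check cannot match and the
# controller answers 401 Unauthorized. Roughly:
#   utils.get_hmac('HEAD', '/info', expires, key)
#       != utils.get_hmac('GET', '/info', expires, key)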
controller = self.get_controller(expose_info=True, admin_key='secret-admin-key') utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} expires = int(time.time() + 86400) sig = utils.get_hmac('HEAD', '/info', expires, 'secret-admin-key') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('401 Unauthorized', str(resp)) def test_get_admin_info_invalid_expires(self): controller = self.get_controller(expose_info=True, admin_key='secret-admin-key') utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} expires = 1 sig = utils.get_hmac('GET', '/info', expires, 'secret-admin-key') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('401 Unauthorized', str(resp)) expires = 'abc' sig = utils.get_hmac('GET', '/info', expires, 'secret-admin-key') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('401 Unauthorized', str(resp)) def test_get_admin_info_invalid_path(self): controller = self.get_controller(expose_info=True, admin_key='secret-admin-key') utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} expires = int(time.time() + 86400) sig = utils.get_hmac('GET', '/foo', expires, 'secret-admin-key') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('401 Unauthorized', str(resp)) def test_get_admin_info_invalid_key(self): controller = self.get_controller(expose_info=True, admin_key='secret-admin-key') utils._swift_info = {'foo': {'bar': 'baz'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} expires = int(time.time() + 86400) sig = utils.get_hmac('GET', '/foo', expires, 'invalid-admin-key') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('401 Unauthorized', str(resp)) def test_admin_disallow_info(self): controller = self.get_controller(expose_info=True, disallowed_sections=['foo2'], admin_key='secret-admin-key') utils._swift_info = {'foo': {'bar': 'baz'}, 'foo2': {'bar2': 'baz2'}} utils._swift_admin_info = {'qux': {'quux': 'corge'}} expires = int(time.time() + 86400) sig = utils.get_hmac('GET', '/info', expires, 'secret-admin-key') path = '/info?swiftinfo_sig={sig}&swiftinfo_expires={expires}'.format( sig=sig, expires=expires) req = Request.blank( path, environ={'REQUEST_METHOD': 'GET'}) resp = controller.GET(req) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual('200 OK', str(resp)) info = json.loads(resp.body) self.assertTrue('foo2' not in info) self.assertTrue('admin' in info) self.assertTrue('disallowed_sections' in info['admin']) self.assertTrue('foo2' in 
info['admin']['disallowed_sections']) self.assertTrue('qux' in info['admin']) self.assertTrue('quux' in info['admin']['qux']) self.assertEqual(info['admin']['qux']['quux'], 'corge') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/0000775000567000056710000000000013024044470017242 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/common/test_wsgi.py0000664000567000056710000020113613024044354021630 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for swift.common.wsgi""" import errno import logging import mimetools import socket import unittest import os from textwrap import dedent from collections import defaultdict from eventlet import listen from six import BytesIO from six import StringIO from six.moves.urllib.parse import quote import mock import swift.common.middleware.catch_errors import swift.common.middleware.gatekeeper import swift.proxy.server import swift.obj.server as obj_server import swift.container.server as container_server import swift.account.server as account_server from swift.common.swob import Request from swift.common import wsgi, utils from swift.common.storage_policy import POLICIES from test.unit import ( temptree, with_tempdir, write_fake_ring, patch_policies, FakeLogger) from paste.deploy import loadwsgi def _fake_rings(tmpdir): write_fake_ring(os.path.join(tmpdir, 'account.ring.gz')) write_fake_ring(os.path.join(tmpdir, 'container.ring.gz')) for policy in POLICIES: obj_ring_path = \ os.path.join(tmpdir, policy.ring_name + '.ring.gz') write_fake_ring(obj_ring_path) # make sure there's no other ring cached on this policy policy.object_ring = None @patch_policies class TestWSGI(unittest.TestCase): """Tests for swift.common.wsgi""" def setUp(self): utils.HASH_PATH_PREFIX = 'startcap' self._orig_parsetype = mimetools.Message.parsetype def tearDown(self): mimetools.Message.parsetype = self._orig_parsetype def test_monkey_patch_mimetools(self): sio = StringIO('blah') self.assertEqual(mimetools.Message(sio).type, 'text/plain') sio = StringIO('blah') self.assertEqual(mimetools.Message(sio).plisttext, '') sio = StringIO('blah') self.assertEqual(mimetools.Message(sio).maintype, 'text') sio = StringIO('blah') self.assertEqual(mimetools.Message(sio).subtype, 'plain') sio = StringIO('Content-Type: text/html; charset=ISO-8859-4') self.assertEqual(mimetools.Message(sio).type, 'text/html') sio = StringIO('Content-Type: text/html; charset=ISO-8859-4') self.assertEqual(mimetools.Message(sio).plisttext, '; charset=ISO-8859-4') sio = StringIO('Content-Type: text/html; charset=ISO-8859-4') self.assertEqual(mimetools.Message(sio).maintype, 'text') sio = StringIO('Content-Type: text/html; charset=ISO-8859-4') self.assertEqual(mimetools.Message(sio).subtype, 'html') wsgi.monkey_patch_mimetools() sio = StringIO('blah') self.assertEqual(mimetools.Message(sio).type, None) sio = StringIO('blah') self.assertEqual(mimetools.Message(sio).plisttext, '') sio = StringIO('blah') 
self.assertEqual(mimetools.Message(sio).maintype, None) sio = StringIO('blah') self.assertEqual(mimetools.Message(sio).subtype, None) sio = StringIO('Content-Type: text/html; charset=ISO-8859-4') self.assertEqual(mimetools.Message(sio).type, 'text/html') sio = StringIO('Content-Type: text/html; charset=ISO-8859-4') self.assertEqual(mimetools.Message(sio).plisttext, '; charset=ISO-8859-4') sio = StringIO('Content-Type: text/html; charset=ISO-8859-4') self.assertEqual(mimetools.Message(sio).maintype, 'text') sio = StringIO('Content-Type: text/html; charset=ISO-8859-4') self.assertEqual(mimetools.Message(sio).subtype, 'html') def test_init_request_processor(self): config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = proxy-server [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 """ contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) _fake_rings(t) app, conf, logger, log_name = wsgi.init_request_processor( conf_file, 'proxy-server') # verify pipeline is catch_errors -> dlo -> proxy-server expected = swift.common.middleware.catch_errors.CatchErrorMiddleware self.assertTrue(isinstance(app, expected)) app = app.app expected = swift.common.middleware.gatekeeper.GatekeeperMiddleware self.assertTrue(isinstance(app, expected)) app = app.app expected = swift.common.middleware.dlo.DynamicLargeObject self.assertTrue(isinstance(app, expected)) app = app.app expected = \ swift.common.middleware.versioned_writes.VersionedWritesMiddleware self.assertIsInstance(app, expected) app = app.app expected = swift.proxy.server.Application self.assertTrue(isinstance(app, expected)) # config settings applied to app instance self.assertEqual(0.2, app.conn_timeout) # appconfig returns values from 'proxy-server' section expected = { '__file__': conf_file, 'here': os.path.dirname(conf_file), 'conn_timeout': '0.2', 'swift_dir': t, } self.assertEqual(expected, conf) # logger works logger.info('testing') self.assertEqual('proxy-server', log_name) @with_tempdir def test_loadapp_from_file(self, tempdir): conf_path = os.path.join(tempdir, 'object-server.conf') conf_body = """ [app:main] use = egg:swift#object """ contents = dedent(conf_body) with open(conf_path, 'w') as f: f.write(contents) app = wsgi.loadapp(conf_path) self.assertTrue(isinstance(app, obj_server.ObjectController)) def test_loadapp_from_string(self): conf_body = """ [app:main] use = egg:swift#object """ app = wsgi.loadapp(wsgi.ConfigString(conf_body)) self.assertTrue(isinstance(app, obj_server.ObjectController)) def test_init_request_processor_from_conf_dir(self): config_dir = { 'proxy-server.conf.d/pipeline.conf': """ [pipeline:main] pipeline = catch_errors proxy-server """, 'proxy-server.conf.d/app.conf': """ [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 """, 'proxy-server.conf.d/catch-errors.conf': """ [filter:catch_errors] use = egg:swift#catch_errors """ } # strip indent from test config contents config_dir = dict((f, dedent(c)) for (f, c) in config_dir.items()) with mock.patch('swift.proxy.server.Application.modify_wsgi_pipeline'): with temptree(*zip(*config_dir.items())) as conf_root: conf_dir = os.path.join(conf_root, 'proxy-server.conf.d') with open(os.path.join(conf_dir, 'swift.conf'), 'w') as f: f.write('[DEFAULT]\nswift_dir = %s' % conf_root) _fake_rings(conf_root) app, conf, logger, log_name = wsgi.init_request_processor( conf_dir, 'proxy-server') # verify pipeline is 
catch_errors -> proxy-server expected = swift.common.middleware.catch_errors.CatchErrorMiddleware self.assertTrue(isinstance(app, expected)) self.assertTrue(isinstance(app.app, swift.proxy.server.Application)) # config settings applied to app instance self.assertEqual(0.2, app.app.conn_timeout) # appconfig returns values from 'proxy-server' section expected = { '__file__': conf_dir, 'here': conf_dir, 'conn_timeout': '0.2', 'swift_dir': conf_root, } self.assertEqual(expected, conf) # logger works logger.info('testing') self.assertEqual('proxy-server', log_name) def test_get_socket_bad_values(self): # first try with no port set self.assertRaises(wsgi.ConfigFilePortError, wsgi.get_socket, {}) # next try with a bad port value set self.assertRaises(wsgi.ConfigFilePortError, wsgi.get_socket, {'bind_port': 'abc'}) self.assertRaises(wsgi.ConfigFilePortError, wsgi.get_socket, {'bind_port': None}) def test_get_socket(self): # stubs conf = {'bind_port': 54321} ssl_conf = conf.copy() ssl_conf.update({ 'cert_file': '', 'key_file': '', }) # mocks class MockSocket(object): def __init__(self): self.opts = defaultdict(dict) def setsockopt(self, level, optname, value): self.opts[level][optname] = value def mock_listen(*args, **kwargs): return MockSocket() class MockSsl(object): def __init__(self): self.wrap_socket_called = [] def wrap_socket(self, sock, **kwargs): self.wrap_socket_called.append(kwargs) return sock # patch old_listen = wsgi.listen old_ssl = wsgi.ssl try: wsgi.listen = mock_listen wsgi.ssl = MockSsl() # test sock = wsgi.get_socket(conf) # assert self.assertTrue(isinstance(sock, MockSocket)) expected_socket_opts = { socket.SOL_SOCKET: { socket.SO_REUSEADDR: 1, socket.SO_KEEPALIVE: 1, }, socket.IPPROTO_TCP: { socket.TCP_NODELAY: 1, } } if hasattr(socket, 'TCP_KEEPIDLE'): expected_socket_opts[socket.IPPROTO_TCP][ socket.TCP_KEEPIDLE] = 600 self.assertEqual(sock.opts, expected_socket_opts) # test ssl sock = wsgi.get_socket(ssl_conf) expected_kwargs = { 'certfile': '', 'keyfile': '', } self.assertEqual(wsgi.ssl.wrap_socket_called, [expected_kwargs]) finally: wsgi.listen = old_listen wsgi.ssl = old_ssl def test_address_in_use(self): # stubs conf = {'bind_port': 54321} # mocks def mock_listen(*args, **kwargs): raise socket.error(errno.EADDRINUSE) def value_error_listen(*args, **kwargs): raise ValueError('fake') def mock_sleep(*args): pass class MockTime(object): """Fast clock advances 10 seconds after every call to time """ def __init__(self): self.current_time = old_time.time() def time(self, *args, **kwargs): rv = self.current_time # advance for next call self.current_time += 10 return rv old_listen = wsgi.listen old_sleep = wsgi.sleep old_time = wsgi.time try: wsgi.listen = mock_listen wsgi.sleep = mock_sleep wsgi.time = MockTime() # test error self.assertRaises(Exception, wsgi.get_socket, conf) # different error wsgi.listen = value_error_listen self.assertRaises(ValueError, wsgi.get_socket, conf) finally: wsgi.listen = old_listen wsgi.sleep = old_sleep wsgi.time = old_time def test_run_server(self): config = """ [DEFAULT] client_timeout = 30 max_clients = 1000 swift_dir = TEMPDIR [pipeline:main] pipeline = proxy-server [app:proxy-server] use = egg:swift#proxy # while "set" values normally override default set client_timeout = 20 # this section is not in conf during run_server set max_clients = 10 """ contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) 
_fake_rings(t) with mock.patch('swift.proxy.server.Application.' 'modify_wsgi_pipeline'): with mock.patch('swift.common.wsgi.wsgi') as _wsgi: with mock.patch('swift.common.wsgi.eventlet') as _eventlet: with mock.patch('swift.common.wsgi.inspect'): conf = wsgi.appconfig(conf_file) logger = logging.getLogger('test') sock = listen(('localhost', 0)) wsgi.run_server(conf, logger, sock) self.assertEqual('HTTP/1.0', _wsgi.HttpProtocol.default_request_version) self.assertEqual(30, _wsgi.WRITE_TIMEOUT) _eventlet.hubs.use_hub.assert_called_with(utils.get_hub()) _eventlet.patcher.monkey_patch.assert_called_with(all=False, socket=True, thread=True) _eventlet.debug.hub_exceptions.assert_called_with(False) self.assertTrue(_wsgi.server.called) args, kwargs = _wsgi.server.call_args server_sock, server_app, server_logger = args self.assertEqual(sock, server_sock) self.assertTrue(isinstance(server_app, swift.proxy.server.Application)) self.assertEqual(20, server_app.client_timeout) self.assertTrue(isinstance(server_logger, wsgi.NullLogger)) self.assertTrue('custom_pool' in kwargs) self.assertEqual(1000, kwargs['custom_pool'].size) def test_run_server_with_latest_eventlet(self): config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = proxy-server [app:proxy-server] use = egg:swift#proxy """ def argspec_stub(server): return mock.MagicMock(args=['capitalize_response_headers']) contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) _fake_rings(t) with mock.patch('swift.proxy.server.Application.' 'modify_wsgi_pipeline'), \ mock.patch('swift.common.wsgi.wsgi') as _wsgi, \ mock.patch('swift.common.wsgi.eventlet'), \ mock.patch('swift.common.wsgi.inspect', getargspec=argspec_stub): conf = wsgi.appconfig(conf_file) logger = logging.getLogger('test') sock = listen(('localhost', 0)) wsgi.run_server(conf, logger, sock) self.assertTrue(_wsgi.server.called) args, kwargs = _wsgi.server.call_args self.assertEqual(kwargs.get('capitalize_response_headers'), False) def test_run_server_conf_dir(self): config_dir = { 'proxy-server.conf.d/pipeline.conf': """ [pipeline:main] pipeline = proxy-server """, 'proxy-server.conf.d/app.conf': """ [app:proxy-server] use = egg:swift#proxy """, 'proxy-server.conf.d/default.conf': """ [DEFAULT] client_timeout = 30 """ } # strip indent from test config contents config_dir = dict((f, dedent(c)) for (f, c) in config_dir.items()) with temptree(*zip(*config_dir.items())) as conf_root: conf_dir = os.path.join(conf_root, 'proxy-server.conf.d') with open(os.path.join(conf_dir, 'swift.conf'), 'w') as f: f.write('[DEFAULT]\nswift_dir = %s' % conf_root) _fake_rings(conf_root) with mock.patch('swift.proxy.server.Application.' 
'modify_wsgi_pipeline'): with mock.patch('swift.common.wsgi.wsgi') as _wsgi: with mock.patch('swift.common.wsgi.eventlet') as _eventlet: with mock.patch.dict('os.environ', {'TZ': ''}): with mock.patch('swift.common.wsgi.inspect'): conf = wsgi.appconfig(conf_dir) logger = logging.getLogger('test') sock = listen(('localhost', 0)) wsgi.run_server(conf, logger, sock) self.assertTrue(os.environ['TZ'] is not '') self.assertEqual('HTTP/1.0', _wsgi.HttpProtocol.default_request_version) self.assertEqual(30, _wsgi.WRITE_TIMEOUT) _eventlet.hubs.use_hub.assert_called_with(utils.get_hub()) _eventlet.patcher.monkey_patch.assert_called_with(all=False, socket=True, thread=True) _eventlet.debug.hub_exceptions.assert_called_with(False) self.assertTrue(_wsgi.server.called) args, kwargs = _wsgi.server.call_args server_sock, server_app, server_logger = args self.assertEqual(sock, server_sock) self.assertTrue(isinstance(server_app, swift.proxy.server.Application)) self.assertTrue(isinstance(server_logger, wsgi.NullLogger)) self.assertTrue('custom_pool' in kwargs) def test_run_server_debug(self): config = """ [DEFAULT] eventlet_debug = yes client_timeout = 30 max_clients = 1000 swift_dir = TEMPDIR [pipeline:main] pipeline = proxy-server [app:proxy-server] use = egg:swift#proxy # while "set" values normally override default set client_timeout = 20 # this section is not in conf during run_server set max_clients = 10 """ contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) _fake_rings(t) with mock.patch('swift.proxy.server.Application.' 'modify_wsgi_pipeline'): with mock.patch('swift.common.wsgi.wsgi') as _wsgi: mock_server = _wsgi.server _wsgi.server = lambda *args, **kwargs: mock_server( *args, **kwargs) with mock.patch('swift.common.wsgi.eventlet') as _eventlet: conf = wsgi.appconfig(conf_file) logger = logging.getLogger('test') sock = listen(('localhost', 0)) wsgi.run_server(conf, logger, sock) self.assertEqual('HTTP/1.0', _wsgi.HttpProtocol.default_request_version) self.assertEqual(30, _wsgi.WRITE_TIMEOUT) _eventlet.hubs.use_hub.assert_called_with(utils.get_hub()) _eventlet.patcher.monkey_patch.assert_called_with(all=False, socket=True, thread=True) _eventlet.debug.hub_exceptions.assert_called_with(True) self.assertTrue(mock_server.called) args, kwargs = mock_server.call_args server_sock, server_app, server_logger = args self.assertEqual(sock, server_sock) self.assertTrue(isinstance(server_app, swift.proxy.server.Application)) self.assertEqual(20, server_app.client_timeout) self.assertEqual(server_logger, None) self.assertTrue('custom_pool' in kwargs) self.assertEqual(1000, kwargs['custom_pool'].size) def test_appconfig_dir_ignores_hidden_files(self): config_dir = { 'server.conf.d/01.conf': """ [app:main] use = egg:swift#proxy port = 8080 """, 'server.conf.d/.01.conf.swp': """ [app:main] use = egg:swift#proxy port = 8081 """, } # strip indent from test config contents config_dir = dict((f, dedent(c)) for (f, c) in config_dir.items()) with temptree(*zip(*config_dir.items())) as path: conf_dir = os.path.join(path, 'server.conf.d') conf = wsgi.appconfig(conf_dir) expected = { '__file__': os.path.join(path, 'server.conf.d'), 'here': os.path.join(path, 'server.conf.d'), 'port': '8080', } self.assertEqual(conf, expected) def test_pre_auth_wsgi_input(self): oldenv = {} newenv = wsgi.make_pre_authed_env(oldenv) self.assertTrue('wsgi.input' in newenv) 
self.assertEqual(newenv['wsgi.input'].read(), '') oldenv = {'wsgi.input': BytesIO(b'original wsgi.input')} newenv = wsgi.make_pre_authed_env(oldenv) self.assertTrue('wsgi.input' in newenv) self.assertEqual(newenv['wsgi.input'].read(), '') oldenv = {'swift.source': 'UT'} newenv = wsgi.make_pre_authed_env(oldenv) self.assertEqual(newenv['swift.source'], 'UT') oldenv = {'swift.source': 'UT'} newenv = wsgi.make_pre_authed_env(oldenv, swift_source='SA') self.assertEqual(newenv['swift.source'], 'SA') def test_pre_auth_req(self): class FakeReq(object): @classmethod def fake_blank(cls, path, environ=None, body='', headers=None): if environ is None: environ = {} if headers is None: headers = {} self.assertEqual(environ['swift.authorize']('test'), None) self.assertFalse('HTTP_X_TRANS_ID' in environ) was_blank = Request.blank Request.blank = FakeReq.fake_blank wsgi.make_pre_authed_request({'HTTP_X_TRANS_ID': '1234'}, 'PUT', '/', body='tester', headers={}) wsgi.make_pre_authed_request({'HTTP_X_TRANS_ID': '1234'}, 'PUT', '/', headers={}) Request.blank = was_blank def test_pre_auth_req_with_quoted_path(self): r = wsgi.make_pre_authed_request( {'HTTP_X_TRANS_ID': '1234'}, 'PUT', path=quote('/a space'), body='tester', headers={}) self.assertEqual(r.path, quote('/a space')) def test_pre_auth_req_drops_query(self): r = wsgi.make_pre_authed_request( {'QUERY_STRING': 'original'}, 'GET', 'path') self.assertEqual(r.query_string, 'original') r = wsgi.make_pre_authed_request( {'QUERY_STRING': 'original'}, 'GET', 'path?replacement') self.assertEqual(r.query_string, 'replacement') r = wsgi.make_pre_authed_request( {'QUERY_STRING': 'original'}, 'GET', 'path?') self.assertEqual(r.query_string, '') def test_pre_auth_req_with_body(self): r = wsgi.make_pre_authed_request( {'QUERY_STRING': 'original'}, 'GET', 'path', 'the body') self.assertEqual(r.body, 'the body') def test_pre_auth_creates_script_name(self): e = wsgi.make_pre_authed_env({}) self.assertTrue('SCRIPT_NAME' in e) def test_pre_auth_copies_script_name(self): e = wsgi.make_pre_authed_env({'SCRIPT_NAME': '/script_name'}) self.assertEqual(e['SCRIPT_NAME'], '/script_name') def test_pre_auth_copies_script_name_unless_path_overridden(self): e = wsgi.make_pre_authed_env({'SCRIPT_NAME': '/script_name'}, path='/override') self.assertEqual(e['SCRIPT_NAME'], '') self.assertEqual(e['PATH_INFO'], '/override') def test_pre_auth_req_swift_source(self): r = wsgi.make_pre_authed_request( {'QUERY_STRING': 'original'}, 'GET', 'path', 'the body', swift_source='UT') self.assertEqual(r.body, 'the body') self.assertEqual(r.environ['swift.source'], 'UT') def test_run_server_global_conf_callback(self): calls = defaultdict(lambda: 0) def _initrp(conf_file, app_section, *args, **kwargs): return ( {'__file__': 'test', 'workers': 0}, 'logger', 'log_name') def _global_conf_callback(preloaded_app_conf, global_conf): calls['_global_conf_callback'] += 1 self.assertEqual( preloaded_app_conf, {'__file__': 'test', 'workers': 0}) self.assertEqual(global_conf, {'log_name': 'log_name'}) global_conf['test1'] = 'one' def _loadapp(uri, name=None, **kwargs): calls['_loadapp'] += 1 self.assertTrue('global_conf' in kwargs) self.assertEqual(kwargs['global_conf'], {'log_name': 'log_name', 'test1': 'one'}) with mock.patch.object(wsgi, '_initrp', _initrp), \ mock.patch.object(wsgi, 'get_socket'), \ mock.patch.object(wsgi, 'drop_privileges'), \ mock.patch.object(wsgi, 'loadapp', _loadapp), \ mock.patch.object(wsgi, 'capture_stdio'), \ mock.patch.object(wsgi, 'run_server'): wsgi.run_wsgi('conf_file', 
'app_section', global_conf_callback=_global_conf_callback) self.assertEqual(calls['_global_conf_callback'], 1) self.assertEqual(calls['_loadapp'], 1) def test_run_server_success(self): calls = defaultdict(lambda: 0) def _initrp(conf_file, app_section, *args, **kwargs): calls['_initrp'] += 1 return ( {'__file__': 'test', 'workers': 0}, 'logger', 'log_name') def _loadapp(uri, name=None, **kwargs): calls['_loadapp'] += 1 with mock.patch.object(wsgi, '_initrp', _initrp), \ mock.patch.object(wsgi, 'get_socket'), \ mock.patch.object(wsgi, 'drop_privileges'), \ mock.patch.object(wsgi, 'loadapp', _loadapp), \ mock.patch.object(wsgi, 'capture_stdio'), \ mock.patch.object(wsgi, 'run_server'): rc = wsgi.run_wsgi('conf_file', 'app_section') self.assertEqual(calls['_initrp'], 1) self.assertEqual(calls['_loadapp'], 1) self.assertEqual(rc, 0) @mock.patch('swift.common.wsgi.run_server') @mock.patch('swift.common.wsgi.WorkersStrategy') @mock.patch('swift.common.wsgi.ServersPerPortStrategy') def test_run_server_strategy_plumbing(self, mock_per_port, mock_workers, mock_run_server): # Make sure the right strategy gets used in a number of different # config cases. mock_per_port().bind_ports.return_value = 'stop early' mock_workers().bind_ports.return_value = 'stop early' logger = FakeLogger() stub__initrp = [ {'__file__': 'test', 'workers': 2}, # conf logger, 'log_name', ] with mock.patch.object(wsgi, '_initrp', return_value=stub__initrp): for server_type in ('account-server', 'container-server', 'object-server'): mock_per_port.reset_mock() mock_workers.reset_mock() logger._clear() self.assertEqual(1, wsgi.run_wsgi('conf_file', server_type)) self.assertEqual([ 'stop early', ], logger.get_lines_for_level('error')) self.assertEqual([], mock_per_port.mock_calls) self.assertEqual([ mock.call(stub__initrp[0], logger), mock.call().bind_ports(), ], mock_workers.mock_calls) stub__initrp[0]['servers_per_port'] = 3 for server_type in ('account-server', 'container-server'): mock_per_port.reset_mock() mock_workers.reset_mock() logger._clear() self.assertEqual(1, wsgi.run_wsgi('conf_file', server_type)) self.assertEqual([ 'stop early', ], logger.get_lines_for_level('error')) self.assertEqual([], mock_per_port.mock_calls) self.assertEqual([ mock.call(stub__initrp[0], logger), mock.call().bind_ports(), ], mock_workers.mock_calls) mock_per_port.reset_mock() mock_workers.reset_mock() logger._clear() self.assertEqual(1, wsgi.run_wsgi('conf_file', 'object-server')) self.assertEqual([ 'stop early', ], logger.get_lines_for_level('error')) self.assertEqual([ mock.call(stub__initrp[0], logger, servers_per_port=3), mock.call().bind_ports(), ], mock_per_port.mock_calls) self.assertEqual([], mock_workers.mock_calls) def test_run_server_failure1(self): calls = defaultdict(lambda: 0) def _initrp(conf_file, app_section, *args, **kwargs): calls['_initrp'] += 1 raise wsgi.ConfigFileError('test exception') def _loadapp(uri, name=None, **kwargs): calls['_loadapp'] += 1 with mock.patch.object(wsgi, '_initrp', _initrp), \ mock.patch.object(wsgi, 'get_socket'), \ mock.patch.object(wsgi, 'drop_privileges'), \ mock.patch.object(wsgi, 'loadapp', _loadapp), \ mock.patch.object(wsgi, 'capture_stdio'), \ mock.patch.object(wsgi, 'run_server'): rc = wsgi.run_wsgi('conf_file', 'app_section') self.assertEqual(calls['_initrp'], 1) self.assertEqual(calls['_loadapp'], 0) self.assertEqual(rc, 1) def test_pre_auth_req_with_empty_env_no_path(self): r = wsgi.make_pre_authed_request( {}, 'GET') self.assertEqual(r.path, quote('')) self.assertTrue('SCRIPT_NAME' in 
r.environ) self.assertTrue('PATH_INFO' in r.environ) def test_pre_auth_req_with_env_path(self): r = wsgi.make_pre_authed_request( {'PATH_INFO': '/unquoted path with %20'}, 'GET') self.assertEqual(r.path, quote('/unquoted path with %20')) self.assertEqual(r.environ['SCRIPT_NAME'], '') def test_pre_auth_req_with_env_script(self): r = wsgi.make_pre_authed_request({'SCRIPT_NAME': '/hello'}, 'GET') self.assertEqual(r.path, quote('/hello')) def test_pre_auth_req_with_env_path_and_script(self): env = {'PATH_INFO': '/unquoted path with %20', 'SCRIPT_NAME': '/script'} r = wsgi.make_pre_authed_request(env, 'GET') expected_path = quote(env['SCRIPT_NAME'] + env['PATH_INFO']) self.assertEqual(r.path, expected_path) env = {'PATH_INFO': '', 'SCRIPT_NAME': '/script'} r = wsgi.make_pre_authed_request(env, 'GET') self.assertEqual(r.path, '/script') env = {'PATH_INFO': '/path', 'SCRIPT_NAME': ''} r = wsgi.make_pre_authed_request(env, 'GET') self.assertEqual(r.path, '/path') env = {'PATH_INFO': '', 'SCRIPT_NAME': ''} r = wsgi.make_pre_authed_request(env, 'GET') self.assertEqual(r.path, '') def test_pre_auth_req_path_overrides_env(self): env = {'PATH_INFO': '/path', 'SCRIPT_NAME': '/script'} r = wsgi.make_pre_authed_request(env, 'GET', '/override') self.assertEqual(r.path, '/override') self.assertEqual(r.environ['SCRIPT_NAME'], '') self.assertEqual(r.environ['PATH_INFO'], '/override') def test_make_env_keep_user_project_id(self): oldenv = {'HTTP_X_USER_ID': '1234', 'HTTP_X_PROJECT_ID': '5678'} newenv = wsgi.make_env(oldenv) self.assertTrue('HTTP_X_USER_ID' in newenv) self.assertEqual(newenv['HTTP_X_USER_ID'], '1234') self.assertTrue('HTTP_X_PROJECT_ID' in newenv) self.assertEqual(newenv['HTTP_X_PROJECT_ID'], '5678') def test_make_env_keeps_referer(self): oldenv = {'HTTP_REFERER': 'http://blah.example.com'} newenv = wsgi.make_env(oldenv) self.assertTrue('HTTP_REFERER' in newenv) self.assertEqual(newenv['HTTP_REFERER'], 'http://blah.example.com') class TestServersPerPortStrategy(unittest.TestCase): def setUp(self): self.logger = FakeLogger() self.conf = { 'workers': 100, # ignored 'user': 'bob', 'swift_dir': '/jim/cricket', 'ring_check_interval': '76', 'bind_ip': '2.3.4.5', } self.servers_per_port = 3 self.s1, self.s2 = mock.MagicMock(), mock.MagicMock() patcher = mock.patch('swift.common.wsgi.get_socket', side_effect=[self.s1, self.s2]) self.mock_get_socket = patcher.start() self.addCleanup(patcher.stop) patcher = mock.patch('swift.common.wsgi.drop_privileges') self.mock_drop_privileges = patcher.start() self.addCleanup(patcher.stop) patcher = mock.patch('swift.common.wsgi.BindPortsCache') self.mock_cache_class = patcher.start() self.addCleanup(patcher.stop) patcher = mock.patch('swift.common.wsgi.os.setsid') self.mock_setsid = patcher.start() self.addCleanup(patcher.stop) patcher = mock.patch('swift.common.wsgi.os.chdir') self.mock_chdir = patcher.start() self.addCleanup(patcher.stop) patcher = mock.patch('swift.common.wsgi.os.umask') self.mock_umask = patcher.start() self.addCleanup(patcher.stop) self.all_bind_ports_for_node = \ self.mock_cache_class().all_bind_ports_for_node self.ports = (6006, 6007) self.all_bind_ports_for_node.return_value = set(self.ports) self.strategy = wsgi.ServersPerPortStrategy(self.conf, self.logger, self.servers_per_port) def test_loop_timeout(self): # This strategy should loop every ring_check_interval seconds, even if # no workers exit. 
self.assertEqual(76, self.strategy.loop_timeout()) # Check the default del self.conf['ring_check_interval'] self.strategy = wsgi.ServersPerPortStrategy(self.conf, self.logger, self.servers_per_port) self.assertEqual(15, self.strategy.loop_timeout()) def test_bind_ports(self): self.strategy.bind_ports() self.assertEqual(set((6006, 6007)), self.strategy.bind_ports) self.assertEqual([ mock.call({'workers': 100, # ignored 'user': 'bob', 'swift_dir': '/jim/cricket', 'ring_check_interval': '76', 'bind_ip': '2.3.4.5', 'bind_port': 6006}), mock.call({'workers': 100, # ignored 'user': 'bob', 'swift_dir': '/jim/cricket', 'ring_check_interval': '76', 'bind_ip': '2.3.4.5', 'bind_port': 6007}), ], self.mock_get_socket.mock_calls) self.assertEqual( 6006, self.strategy.port_pid_state.port_for_sock(self.s1)) self.assertEqual( 6007, self.strategy.port_pid_state.port_for_sock(self.s2)) self.assertEqual([mock.call()], self.mock_setsid.mock_calls) self.assertEqual([mock.call('/')], self.mock_chdir.mock_calls) self.assertEqual([mock.call(0o22)], self.mock_umask.mock_calls) def test_bind_ports_ignores_setsid_errors(self): self.mock_setsid.side_effect = OSError() self.strategy.bind_ports() self.assertEqual(set((6006, 6007)), self.strategy.bind_ports) self.assertEqual([ mock.call({'workers': 100, # ignored 'user': 'bob', 'swift_dir': '/jim/cricket', 'ring_check_interval': '76', 'bind_ip': '2.3.4.5', 'bind_port': 6006}), mock.call({'workers': 100, # ignored 'user': 'bob', 'swift_dir': '/jim/cricket', 'ring_check_interval': '76', 'bind_ip': '2.3.4.5', 'bind_port': 6007}), ], self.mock_get_socket.mock_calls) self.assertEqual( 6006, self.strategy.port_pid_state.port_for_sock(self.s1)) self.assertEqual( 6007, self.strategy.port_pid_state.port_for_sock(self.s2)) self.assertEqual([mock.call()], self.mock_setsid.mock_calls) self.assertEqual([mock.call('/')], self.mock_chdir.mock_calls) self.assertEqual([mock.call(0o22)], self.mock_umask.mock_calls) def test_no_fork_sock(self): self.assertIsNone(self.strategy.no_fork_sock()) def test_new_worker_socks(self): self.strategy.bind_ports() self.all_bind_ports_for_node.reset_mock() pid = 88 got_si = [] for s, i in self.strategy.new_worker_socks(): got_si.append((s, i)) self.strategy.register_worker_start(s, i, pid) pid += 1 self.assertEqual([ (self.s1, 0), (self.s1, 1), (self.s1, 2), (self.s2, 0), (self.s2, 1), (self.s2, 2), ], got_si) self.assertEqual([ 'Started child %d (PID %d) for port %d' % (0, 88, 6006), 'Started child %d (PID %d) for port %d' % (1, 89, 6006), 'Started child %d (PID %d) for port %d' % (2, 90, 6006), 'Started child %d (PID %d) for port %d' % (0, 91, 6007), 'Started child %d (PID %d) for port %d' % (1, 92, 6007), 'Started child %d (PID %d) for port %d' % (2, 93, 6007), ], self.logger.get_lines_for_level('notice')) self.logger._clear() # Steady-state... self.assertEqual([], list(self.strategy.new_worker_socks())) self.all_bind_ports_for_node.reset_mock() # Get rid of servers for ports which disappear from the ring self.ports = (6007,) self.all_bind_ports_for_node.return_value = set(self.ports) self.s1.reset_mock() self.s2.reset_mock() with mock.patch('swift.common.wsgi.greenio') as mock_greenio: self.assertEqual([], list(self.strategy.new_worker_socks())) self.assertEqual([ mock.call(), # ring_check_interval has passed... 
], self.all_bind_ports_for_node.mock_calls) self.assertEqual([ mock.call.shutdown_safe(self.s1), ], mock_greenio.mock_calls) self.assertEqual([ mock.call.close(), ], self.s1.mock_calls) self.assertEqual([], self.s2.mock_calls) # not closed self.assertEqual([ 'Closing unnecessary sock for port %d' % 6006, ], self.logger.get_lines_for_level('notice')) self.logger._clear() # Create new socket & workers for new ports that appear in ring self.ports = (6007, 6009) self.all_bind_ports_for_node.return_value = set(self.ports) self.s1.reset_mock() self.s2.reset_mock() s3 = mock.MagicMock() self.mock_get_socket.side_effect = Exception('ack') # But first make sure we handle failure to bind to the requested port! got_si = [] for s, i in self.strategy.new_worker_socks(): got_si.append((s, i)) self.strategy.register_worker_start(s, i, pid) pid += 1 self.assertEqual([], got_si) self.assertEqual([ 'Unable to bind to port %d: %s' % (6009, Exception('ack')), 'Unable to bind to port %d: %s' % (6009, Exception('ack')), 'Unable to bind to port %d: %s' % (6009, Exception('ack')), ], self.logger.get_lines_for_level('critical')) self.logger._clear() # Will keep trying, so let it succeed again self.mock_get_socket.side_effect = [s3] got_si = [] for s, i in self.strategy.new_worker_socks(): got_si.append((s, i)) self.strategy.register_worker_start(s, i, pid) pid += 1 self.assertEqual([ (s3, 0), (s3, 1), (s3, 2), ], got_si) self.assertEqual([ 'Started child %d (PID %d) for port %d' % (0, 94, 6009), 'Started child %d (PID %d) for port %d' % (1, 95, 6009), 'Started child %d (PID %d) for port %d' % (2, 96, 6009), ], self.logger.get_lines_for_level('notice')) self.logger._clear() # Steady-state... self.assertEqual([], list(self.strategy.new_worker_socks())) self.all_bind_ports_for_node.reset_mock() # Restart a guy who died on us self.strategy.register_worker_exit(95) # server_idx == 1 got_si = [] for s, i in self.strategy.new_worker_socks(): got_si.append((s, i)) self.strategy.register_worker_start(s, i, pid) pid += 1 self.assertEqual([ (s3, 1), ], got_si) self.assertEqual([ 'Started child %d (PID %d) for port %d' % (1, 97, 6009), ], self.logger.get_lines_for_level('notice')) self.logger._clear() # Check log_sock_exit self.strategy.log_sock_exit(self.s2, 2) self.assertEqual([ 'Child %d (PID %d, port %d) exiting normally' % ( 2, os.getpid(), 6007), ], self.logger.get_lines_for_level('notice')) # It's ok to register_worker_exit for a PID that's already had its # socket closed due to orphaning. # This is one of the workers for port 6006 that already got reaped. 
self.assertIsNone(self.strategy.register_worker_exit(89)) def test_post_fork_hook(self): self.strategy.post_fork_hook() self.assertEqual([ mock.call('bob', call_setsid=False), ], self.mock_drop_privileges.mock_calls) def test_shutdown_sockets(self): self.strategy.bind_ports() with mock.patch('swift.common.wsgi.greenio') as mock_greenio: self.strategy.shutdown_sockets() self.assertEqual([ mock.call.shutdown_safe(self.s1), mock.call.shutdown_safe(self.s2), ], mock_greenio.mock_calls) self.assertEqual([ mock.call.close(), ], self.s1.mock_calls) self.assertEqual([ mock.call.close(), ], self.s2.mock_calls) class TestWorkersStrategy(unittest.TestCase): def setUp(self): self.logger = FakeLogger() self.conf = { 'workers': 2, 'user': 'bob', } self.strategy = wsgi.WorkersStrategy(self.conf, self.logger) patcher = mock.patch('swift.common.wsgi.get_socket', return_value='abc') self.mock_get_socket = patcher.start() self.addCleanup(patcher.stop) patcher = mock.patch('swift.common.wsgi.drop_privileges') self.mock_drop_privileges = patcher.start() self.addCleanup(patcher.stop) def test_loop_timeout(self): # This strategy should sit in the green.os.wait() for a bit (to avoid # busy-waiting) but not forever (so the keep-running flag actually # gets checked). self.assertEqual(0.5, self.strategy.loop_timeout()) def test_binding(self): self.assertIsNone(self.strategy.bind_ports()) self.assertEqual('abc', self.strategy.sock) self.assertEqual([ mock.call(self.conf), ], self.mock_get_socket.mock_calls) self.assertEqual([ mock.call('bob'), ], self.mock_drop_privileges.mock_calls) self.mock_get_socket.side_effect = wsgi.ConfigFilePortError() self.assertEqual( 'bind_port wasn\'t properly set in the config file. ' 'It must be explicitly set to a valid port number.', self.strategy.bind_ports()) def test_no_fork_sock(self): self.strategy.bind_ports() self.assertIsNone(self.strategy.no_fork_sock()) self.conf['workers'] = 0 self.strategy = wsgi.WorkersStrategy(self.conf, self.logger) self.strategy.bind_ports() self.assertEqual('abc', self.strategy.no_fork_sock()) def test_new_worker_socks(self): self.strategy.bind_ports() pid = 88 sock_count = 0 for s, i in self.strategy.new_worker_socks(): self.assertEqual('abc', s) self.assertIsNone(i) # unused for this strategy self.strategy.register_worker_start(s, 'unused', pid) pid += 1 sock_count += 1 self.assertEqual([ 'Started child %s' % 88, 'Started child %s' % 89, ], self.logger.get_lines_for_level('notice')) self.assertEqual(2, sock_count) self.assertEqual([], list(self.strategy.new_worker_socks())) sock_count = 0 self.strategy.register_worker_exit(88) self.assertEqual([ 'Removing dead child %s' % 88, ], self.logger.get_lines_for_level('error')) for s, i in self.strategy.new_worker_socks(): self.assertEqual('abc', s) self.assertIsNone(i) # unused for this strategy self.strategy.register_worker_start(s, 'unused', pid) pid += 1 sock_count += 1 self.assertEqual(1, sock_count) self.assertEqual([ 'Started child %s' % 88, 'Started child %s' % 89, 'Started child %s' % 90, ], self.logger.get_lines_for_level('notice')) def test_post_fork_hook(self): # Just don't crash or do something stupid self.assertIsNone(self.strategy.post_fork_hook()) def test_shutdown_sockets(self): self.mock_get_socket.return_value = mock.MagicMock() self.strategy.bind_ports() with mock.patch('swift.common.wsgi.greenio') as mock_greenio: self.strategy.shutdown_sockets() self.assertEqual([ mock.call.shutdown_safe(self.mock_get_socket.return_value), ], mock_greenio.mock_calls) self.assertEqual([ 
mock.call.close(), ], self.mock_get_socket.return_value.mock_calls) def test_log_sock_exit(self): self.strategy.log_sock_exit('blahblah', 'blahblah') my_pid = os.getpid() self.assertEqual([ 'Child %d exiting normally' % my_pid, ], self.logger.get_lines_for_level('notice')) class TestWSGIContext(unittest.TestCase): def test_app_call(self): statuses = ['200 Ok', '404 Not Found'] def app(env, start_response): start_response(statuses.pop(0), [('Content-Length', '3')]) yield 'Ok\n' wc = wsgi.WSGIContext(app) r = Request.blank('/') it = wc._app_call(r.environ) self.assertEqual(wc._response_status, '200 Ok') self.assertEqual(''.join(it), 'Ok\n') r = Request.blank('/') it = wc._app_call(r.environ) self.assertEqual(wc._response_status, '404 Not Found') self.assertEqual(''.join(it), 'Ok\n') def test_app_iter_is_closable(self): def app(env, start_response): start_response('200 OK', [('Content-Length', '25')]) yield 'aaaaa' yield 'bbbbb' yield 'ccccc' yield 'ddddd' yield 'eeeee' wc = wsgi.WSGIContext(app) r = Request.blank('/') iterable = wc._app_call(r.environ) self.assertEqual(wc._response_status, '200 OK') iterator = iter(iterable) self.assertEqual('aaaaa', next(iterator)) self.assertEqual('bbbbb', next(iterator)) iterable.close() self.assertRaises(StopIteration, iterator.next) class TestPipelineWrapper(unittest.TestCase): def setUp(self): config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = healthcheck catch_errors tempurl proxy-server [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 [filter:catch_errors] use = egg:swift#catch_errors [filter:healthcheck] use = egg:swift#healthcheck [filter:tempurl] paste.filter_factory = swift.common.middleware.tempurl:filter_factory """ contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) ctx = wsgi.loadcontext(loadwsgi.APP, conf_file, global_conf={}) self.pipe = wsgi.PipelineWrapper(ctx) def _entry_point_names(self): # Helper method to return a list of the entry point names for the # filters in the pipeline. 
return [c.entry_point_name for c in self.pipe.context.filter_contexts] def test_startswith(self): self.assertTrue(self.pipe.startswith("healthcheck")) self.assertFalse(self.pipe.startswith("tempurl")) def test_startswith_no_filters(self): config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = proxy-server [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 """ contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) ctx = wsgi.loadcontext(loadwsgi.APP, conf_file, global_conf={}) pipe = wsgi.PipelineWrapper(ctx) self.assertTrue(pipe.startswith('proxy')) def test_insert_filter(self): original_modules = ['healthcheck', 'catch_errors', None] self.assertEqual(self._entry_point_names(), original_modules) self.pipe.insert_filter(self.pipe.create_filter('catch_errors')) expected_modules = ['catch_errors', 'healthcheck', 'catch_errors', None] self.assertEqual(self._entry_point_names(), expected_modules) def test_str(self): self.assertEqual( str(self.pipe), "healthcheck catch_errors tempurl proxy-server") def test_str_unknown_filter(self): del self.pipe.context.filter_contexts[0].__dict__['name'] self.pipe.context.filter_contexts[0].object = 'mysterious' self.assertEqual( str(self.pipe), " catch_errors tempurl proxy-server") @patch_policies @mock.patch('swift.common.utils.HASH_PATH_SUFFIX', new='endcap') class TestPipelineModification(unittest.TestCase): def pipeline_modules(self, app): # This is rather brittle; it'll break if a middleware stores its app # anywhere other than an attribute named "app", but it works for now. pipe = [] for _ in range(1000): pipe.append(app.__class__.__module__) if not hasattr(app, 'app'): break app = app.app return pipe def test_load_app(self): config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = healthcheck proxy-server [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 [filter:catch_errors] use = egg:swift#catch_errors [filter:healthcheck] use = egg:swift#healthcheck """ def modify_func(app, pipe): new = pipe.create_filter('catch_errors') pipe.insert_filter(new) contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) _fake_rings(t) with mock.patch( 'swift.proxy.server.Application.modify_wsgi_pipeline', modify_func): app = wsgi.loadapp(conf_file, global_conf={}) exp = swift.common.middleware.catch_errors.CatchErrorMiddleware self.assertTrue(isinstance(app, exp), app) exp = swift.common.middleware.healthcheck.HealthCheckMiddleware self.assertTrue(isinstance(app.app, exp), app.app) exp = swift.proxy.server.Application self.assertTrue(isinstance(app.app.app, exp), app.app.app) # make sure you can turn off the pipeline modification if you want def blow_up(*_, **__): raise self.fail("needs more struts") with mock.patch( 'swift.proxy.server.Application.modify_wsgi_pipeline', blow_up): app = wsgi.loadapp(conf_file, global_conf={}, allow_modify_pipeline=False) # the pipeline was untouched exp = swift.common.middleware.healthcheck.HealthCheckMiddleware self.assertTrue(isinstance(app, exp), app) exp = swift.proxy.server.Application self.assertTrue(isinstance(app.app, exp), app.app) def test_proxy_unmodified_wsgi_pipeline(self): # Make sure things are sane even when we modify nothing config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = catch_errors 
gatekeeper proxy-server [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 [filter:catch_errors] use = egg:swift#catch_errors [filter:gatekeeper] use = egg:swift#gatekeeper """ contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) _fake_rings(t) app = wsgi.loadapp(conf_file, global_conf={}) self.assertEqual(self.pipeline_modules(app), ['swift.common.middleware.catch_errors', 'swift.common.middleware.gatekeeper', 'swift.common.middleware.dlo', 'swift.common.middleware.versioned_writes', 'swift.proxy.server']) def test_proxy_modify_wsgi_pipeline(self): config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = healthcheck proxy-server [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 [filter:healthcheck] use = egg:swift#healthcheck """ contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) _fake_rings(t) app = wsgi.loadapp(conf_file, global_conf={}) self.assertEqual(self.pipeline_modules(app), ['swift.common.middleware.catch_errors', 'swift.common.middleware.gatekeeper', 'swift.common.middleware.dlo', 'swift.common.middleware.versioned_writes', 'swift.common.middleware.healthcheck', 'swift.proxy.server']) def test_proxy_modify_wsgi_pipeline_inserts_versioned_writes(self): config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = slo dlo healthcheck proxy-server [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 [filter:healthcheck] use = egg:swift#healthcheck [filter:dlo] use = egg:swift#dlo [filter:slo] use = egg:swift#slo """ contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) _fake_rings(t) app = wsgi.loadapp(conf_file, global_conf={}) self.assertEqual(self.pipeline_modules(app), ['swift.common.middleware.catch_errors', 'swift.common.middleware.gatekeeper', 'swift.common.middleware.slo', 'swift.common.middleware.dlo', 'swift.common.middleware.versioned_writes', 'swift.common.middleware.healthcheck', 'swift.proxy.server']) def test_proxy_modify_wsgi_pipeline_ordering(self): config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = healthcheck proxy-logging bulk tempurl proxy-server [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 [filter:healthcheck] use = egg:swift#healthcheck [filter:proxy-logging] use = egg:swift#proxy_logging [filter:bulk] use = egg:swift#bulk [filter:tempurl] use = egg:swift#tempurl """ new_req_filters = [ # not in pipeline, no afters {'name': 'catch_errors'}, # already in pipeline {'name': 'proxy_logging', 'after_fn': lambda _: ['catch_errors']}, # not in pipeline, comes after more than one thing {'name': 'container_quotas', 'after_fn': lambda _: ['catch_errors', 'bulk']}] contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) _fake_rings(t) with mock.patch.object(swift.proxy.server, 'required_filters', new_req_filters): app = wsgi.loadapp(conf_file, global_conf={}) self.assertEqual(self.pipeline_modules(app), [ 'swift.common.middleware.catch_errors', 'swift.common.middleware.healthcheck', 'swift.common.middleware.proxy_logging', 'swift.common.middleware.bulk', 
'swift.common.middleware.container_quotas', 'swift.common.middleware.tempurl', 'swift.proxy.server']) def _proxy_modify_wsgi_pipeline(self, pipe): config = """ [DEFAULT] swift_dir = TEMPDIR [pipeline:main] pipeline = %s [app:proxy-server] use = egg:swift#proxy conn_timeout = 0.2 [filter:healthcheck] use = egg:swift#healthcheck [filter:catch_errors] use = egg:swift#catch_errors [filter:gatekeeper] use = egg:swift#gatekeeper """ config = config % (pipe,) contents = dedent(config) with temptree(['proxy-server.conf']) as t: conf_file = os.path.join(t, 'proxy-server.conf') with open(conf_file, 'w') as f: f.write(contents.replace('TEMPDIR', t)) _fake_rings(t) app = wsgi.loadapp(conf_file, global_conf={}) return app def test_gatekeeper_insertion_catch_errors_configured_at_start(self): # catch_errors is configured at start, gatekeeper is not configured, # so gatekeeper should be inserted just after catch_errors pipe = 'catch_errors healthcheck proxy-server' app = self._proxy_modify_wsgi_pipeline(pipe) self.assertEqual(self.pipeline_modules(app), [ 'swift.common.middleware.catch_errors', 'swift.common.middleware.gatekeeper', 'swift.common.middleware.dlo', 'swift.common.middleware.versioned_writes', 'swift.common.middleware.healthcheck', 'swift.proxy.server']) def test_gatekeeper_insertion_catch_errors_configured_not_at_start(self): # catch_errors is configured, gatekeeper is not configured, so # gatekeeper should be inserted at start of pipeline pipe = 'healthcheck catch_errors proxy-server' app = self._proxy_modify_wsgi_pipeline(pipe) self.assertEqual(self.pipeline_modules(app), [ 'swift.common.middleware.gatekeeper', 'swift.common.middleware.healthcheck', 'swift.common.middleware.catch_errors', 'swift.common.middleware.dlo', 'swift.common.middleware.versioned_writes', 'swift.proxy.server']) def test_catch_errors_gatekeeper_configured_not_at_start(self): # catch_errors is configured, gatekeeper is configured, so # no change should be made to pipeline pipe = 'healthcheck catch_errors gatekeeper proxy-server' app = self._proxy_modify_wsgi_pipeline(pipe) self.assertEqual(self.pipeline_modules(app), [ 'swift.common.middleware.healthcheck', 'swift.common.middleware.catch_errors', 'swift.common.middleware.gatekeeper', 'swift.common.middleware.dlo', 'swift.common.middleware.versioned_writes', 'swift.proxy.server']) @with_tempdir def test_loadapp_proxy(self, tempdir): conf_path = os.path.join(tempdir, 'proxy-server.conf') conf_body = """ [DEFAULT] swift_dir = %s [pipeline:main] pipeline = catch_errors cache proxy-server [app:proxy-server] use = egg:swift#proxy [filter:cache] use = egg:swift#memcache [filter:catch_errors] use = egg:swift#catch_errors """ % tempdir with open(conf_path, 'w') as f: f.write(dedent(conf_body)) _fake_rings(tempdir) account_ring_path = os.path.join(tempdir, 'account.ring.gz') container_ring_path = os.path.join(tempdir, 'container.ring.gz') object_ring_paths = {} for policy in POLICIES: object_ring_paths[int(policy)] = os.path.join( tempdir, policy.ring_name + '.ring.gz') app = wsgi.loadapp(conf_path) proxy_app = app.app.app.app.app.app self.assertEqual(proxy_app.account_ring.serialized_path, account_ring_path) self.assertEqual(proxy_app.container_ring.serialized_path, container_ring_path) for policy_index, expected_path in object_ring_paths.items(): object_ring = proxy_app.get_object_ring(policy_index) self.assertEqual(expected_path, object_ring.serialized_path) @with_tempdir def test_loadapp_storage(self, tempdir): expectations = { 'object': obj_server.ObjectController, 
'container': container_server.ContainerController, 'account': account_server.AccountController, } for server_type, controller in expectations.items(): conf_path = os.path.join( tempdir, '%s-server.conf' % server_type) conf_body = """ [DEFAULT] swift_dir = %s [app:main] use = egg:swift#%s """ % (tempdir, server_type) with open(conf_path, 'w') as f: f.write(dedent(conf_body)) app = wsgi.loadapp(conf_path) self.assertTrue(isinstance(app, controller)) def test_pipeline_property(self): depth = 3 class FakeApp(object): pass class AppFilter(object): def __init__(self, app): self.app = app # make a pipeline app = FakeApp() filtered_app = app for i in range(depth): filtered_app = AppFilter(filtered_app) # AttributeError if no apps in the pipeline have attribute wsgi._add_pipeline_properties(filtered_app, 'foo') self.assertRaises(AttributeError, getattr, filtered_app, 'foo') # set the attribute self.assertTrue(isinstance(app, FakeApp)) app.foo = 'bar' self.assertEqual(filtered_app.foo, 'bar') # attribute is cached app.foo = 'baz' self.assertEqual(filtered_app.foo, 'bar') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_exceptions.py0000664000567000056710000000364713024044352023045 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # TODO(creiht): Tests import unittest from swift.common import exceptions class TestExceptions(unittest.TestCase): def test_replication_exception(self): self.assertEqual(str(exceptions.ReplicationException()), '') self.assertEqual(str(exceptions.ReplicationException('test')), 'test') def test_replication_lock_timeout(self): exc = exceptions.ReplicationLockTimeout(15, 'test') try: self.assertTrue(isinstance(exc, exceptions.MessageTimeout)) finally: exc.cancel() def test_client_exception(self): strerror = 'test: HTTP://random:888/randompath?foo=1 666 reason: ' \ 'device /sdb1 content' exc = exceptions.ClientException('test', http_scheme='HTTP', http_host='random', http_port=888, http_path='/randompath', http_query='foo=1', http_status=666, http_reason='reason', http_device='/sdb1', http_response_content='content') self.assertEqual(str(exc), strerror) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/0000775000567000056710000000000013024044470021357 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/common/middleware/test_proxy_logging.py0000664000567000056710000013124513024044354025666 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and # limitations under the License. import unittest from logging.handlers import SysLogHandler import mock from six import BytesIO from six.moves.urllib.parse import unquote from test.unit import FakeLogger from swift.common.utils import get_logger, split_path from swift.common.middleware import proxy_logging from swift.common.swob import Request, Response from swift.common import constraints from swift.common.storage_policy import StoragePolicy from test.unit import patch_policies class FakeApp(object): def __init__(self, body=None, response_str='200 OK', policy_idx='0'): if body is None: body = ['FAKE APP'] self.body = body self.response_str = response_str self.policy_idx = policy_idx def __call__(self, env, start_response): try: # /v1/a/c or /v1/a/c/o split_path(env['PATH_INFO'], 3, 4, True) is_container_or_object_req = True except ValueError: is_container_or_object_req = False headers = [('Content-Type', 'text/plain'), ('Content-Length', str(sum(map(len, self.body))))] if is_container_or_object_req and self.policy_idx is not None: headers.append(('X-Backend-Storage-Policy-Index', str(self.policy_idx))) start_response(self.response_str, headers) while env['wsgi.input'].read(5): pass return self.body class FakeAppThatExcepts(object): def __call__(self, env, start_response): raise Exception("We take exception to that!") class FakeAppNoContentLengthNoTransferEncoding(object): def __init__(self, body=None): if body is None: body = ['FAKE APP'] self.body = body def __call__(self, env, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) while env['wsgi.input'].read(5): pass return self.body class FileLikeExceptor(object): def __init__(self): pass def read(self, len): raise IOError('of some sort') def readline(self, len=1024): raise IOError('of some sort') class FakeAppReadline(object): def __call__(self, env, start_response): start_response('200 OK', [('Content-Type', 'text/plain'), ('Content-Length', '8')]) env['wsgi.input'].readline() return ["FAKE APP"] def start_response(*args): pass @patch_policies([StoragePolicy(0, 'zero', False)]) class TestProxyLogging(unittest.TestCase): def setUp(self): pass def _log_parts(self, app, should_be_empty=False): info_calls = app.access_logger.log_dict['info'] if should_be_empty: self.assertEqual([], info_calls) else: self.assertEqual(1, len(info_calls)) return info_calls[0][0][0].split(' ') def assertTiming(self, exp_metric, app, exp_timing=None): timing_calls = app.access_logger.log_dict['timing'] found = False for timing_call in timing_calls: self.assertEqual({}, timing_call[1]) self.assertEqual(2, len(timing_call[0])) if timing_call[0][0] == exp_metric: found = True if exp_timing is not None: self.assertAlmostEqual(exp_timing, timing_call[0][1], places=4) if not found: self.assertTrue(False, 'assertTiming: %s not found in %r' % ( exp_metric, timing_calls)) def assertTimingSince(self, exp_metric, app, exp_start=None): timing_calls = app.access_logger.log_dict['timing_since'] found = False for timing_call in timing_calls: self.assertEqual({}, timing_call[1]) self.assertEqual(2, len(timing_call[0])) if timing_call[0][0] == exp_metric: found = True if exp_start is not None: self.assertAlmostEqual(exp_start, timing_call[0][1], places=4) if not found: self.assertTrue(False, 'assertTimingSince: %s not found in %r' % ( exp_metric, timing_calls)) def assertNotTiming(self, not_exp_metric, app): timing_calls = app.access_logger.log_dict['timing'] for timing_call in 
timing_calls: self.assertNotEqual(not_exp_metric, timing_call[0][0]) def assertUpdateStats(self, exp_metrics_and_values, app): update_stats_calls = sorted(app.access_logger.log_dict['update_stats']) got_metrics_values_and_kwargs = [(usc[0][0], usc[0][1], usc[1]) for usc in update_stats_calls] exp_metrics_values_and_kwargs = [(emv[0], emv[1], {}) for emv in exp_metrics_and_values] self.assertEqual(got_metrics_values_and_kwargs, exp_metrics_values_and_kwargs) def test_log_request_statsd_invalid_stats_types(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() for url in ['/', '/foo', '/foo/bar', '/v1']: req = Request.blank(url, environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) # get body ''.join(resp) self.assertEqual([], app.access_logger.log_dict['timing']) self.assertEqual([], app.access_logger.log_dict['update_stats']) def test_log_request_stat_type_bad(self): for bad_path in ['', '/', '/bad', '/baddy/mc_badderson', '/v1', '/v1/']: app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank(bad_path, environ={'REQUEST_METHOD': 'GET'}) now = 10000.0 app.log_request(req, 123, 7, 13, now, now + 2.71828182846) self.assertEqual([], app.access_logger.log_dict['timing']) self.assertEqual([], app.access_logger.log_dict['update_stats']) def test_log_request_stat_type_good(self): """ log_request() should send timing and byte-count counters for GET requests. Also, __call__()'s iter_response() function should statsd-log time to first byte (calling the passed-in start_response function), but only for GET requests. """ stub_times = [] def stub_time(): return stub_times.pop(0) path_types = { '/v1/a': 'account', '/v1/a/': 'account', '/v1/a/c': 'container', '/v1/a/c/': 'container', '/v1/a/c/o': 'object', '/v1/a/c/o/': 'object', '/v1/a/c/o/p': 'object', '/v1/a/c/o/p/': 'object', '/v1/a/c/o/p/p2': 'object', } with mock.patch("time.time", stub_time): for path, exp_type in path_types.items(): # GET app = proxy_logging.ProxyLoggingMiddleware( FakeApp(body='7654321', response_str='321 Fubar'), {}) app.access_logger = FakeLogger() req = Request.blank(path, environ={ 'REQUEST_METHOD': 'GET', 'wsgi.input': BytesIO(b'4321')}) stub_times = [18.0, 20.71828182846] iter_response = app(req.environ, lambda *_: None) self.assertEqual('7654321', ''.join(iter_response)) self.assertTiming('%s.GET.321.timing' % exp_type, app, exp_timing=2.71828182846 * 1000) self.assertTimingSince( '%s.GET.321.first-byte.timing' % exp_type, app, exp_start=18.0) if exp_type == 'object': # Object operations also return stats by policy # In this case, the value needs to match the timing for GET self.assertTiming('%s.policy.0.GET.321.timing' % exp_type, app, exp_timing=2.71828182846 * 1000) self.assertUpdateStats([('%s.GET.321.xfer' % exp_type, 4 + 7), ('object.policy.0.GET.321.xfer', 4 + 7)], app) else: self.assertUpdateStats([('%s.GET.321.xfer' % exp_type, 4 + 7)], app) # GET Repeat the test above, but with a non-existent policy # Do this only for object types if exp_type == 'object': app = proxy_logging.ProxyLoggingMiddleware( FakeApp(body='7654321', response_str='321 Fubar', policy_idx='-1'), {}) app.access_logger = FakeLogger() req = Request.blank(path, environ={ 'REQUEST_METHOD': 'GET', 'wsgi.input': BytesIO(b'4321')}) stub_times = [18.0, 20.71828182846] iter_response = app(req.environ, lambda *_: None) self.assertEqual('7654321', ''.join(iter_response)) self.assertTiming('%s.GET.321.timing' % exp_type, app, 
exp_timing=2.71828182846 * 1000) self.assertTimingSince( '%s.GET.321.first-byte.timing' % exp_type, app, exp_start=18.0) # No results returned for the non-existent policy self.assertUpdateStats([('%s.GET.321.xfer' % exp_type, 4 + 7)], app) # GET with swift.proxy_access_log_made already set app = proxy_logging.ProxyLoggingMiddleware( FakeApp(body='7654321', response_str='321 Fubar'), {}) app.access_logger = FakeLogger() req = Request.blank(path, environ={ 'REQUEST_METHOD': 'GET', 'swift.proxy_access_log_made': True, 'wsgi.input': BytesIO(b'4321')}) stub_times = [18.0, 20.71828182846] iter_response = app(req.environ, lambda *_: None) self.assertEqual('7654321', ''.join(iter_response)) self.assertEqual([], app.access_logger.log_dict['timing']) self.assertEqual([], app.access_logger.log_dict['timing_since']) self.assertEqual([], app.access_logger.log_dict['update_stats']) # PUT (no first-byte timing!) app = proxy_logging.ProxyLoggingMiddleware( FakeApp(body='87654321', response_str='314 PiTown'), {}) app.access_logger = FakeLogger() req = Request.blank(path, environ={ 'REQUEST_METHOD': 'PUT', 'wsgi.input': BytesIO(b'654321')}) # (it's not a GET, so time() doesn't have a 2nd call) stub_times = [58.2, 58.2 + 7.3321] iter_response = app(req.environ, lambda *_: None) self.assertEqual('87654321', ''.join(iter_response)) self.assertTiming('%s.PUT.314.timing' % exp_type, app, exp_timing=7.3321 * 1000) self.assertNotTiming( '%s.GET.314.first-byte.timing' % exp_type, app) self.assertNotTiming( '%s.PUT.314.first-byte.timing' % exp_type, app) if exp_type == 'object': # Object operations also return stats by policy In this # case, the value needs to match the timing for PUT. self.assertTiming('%s.policy.0.PUT.314.timing' % exp_type, app, exp_timing=7.3321 * 1000) self.assertUpdateStats( [('object.PUT.314.xfer', 6 + 8), ('object.policy.0.PUT.314.xfer', 6 + 8)], app) else: self.assertUpdateStats( [('%s.PUT.314.xfer' % exp_type, 6 + 8)], app) # PUT Repeat the test above, but with a non-existent policy # Do this only for object types if exp_type == 'object': app = proxy_logging.ProxyLoggingMiddleware( FakeApp(body='87654321', response_str='314 PiTown', policy_idx='-1'), {}) app.access_logger = FakeLogger() req = Request.blank(path, environ={ 'REQUEST_METHOD': 'PUT', 'wsgi.input': BytesIO(b'654321')}) # (it's not a GET, so time() doesn't have a 2nd call) stub_times = [58.2, 58.2 + 7.3321] iter_response = app(req.environ, lambda *_: None) self.assertEqual('87654321', ''.join(iter_response)) self.assertTiming('%s.PUT.314.timing' % exp_type, app, exp_timing=7.3321 * 1000) self.assertNotTiming( '%s.GET.314.first-byte.timing' % exp_type, app) self.assertNotTiming( '%s.PUT.314.first-byte.timing' % exp_type, app) # No results returned for the non-existent policy self.assertUpdateStats([('object.PUT.314.xfer', 6 + 8)], app) def test_log_request_stat_method_filtering_default(self): method_map = { 'foo': 'BAD_METHOD', '': 'BAD_METHOD', 'PUTT': 'BAD_METHOD', 'SPECIAL': 'BAD_METHOD', 'GET': 'GET', 'PUT': 'PUT', 'COPY': 'COPY', 'HEAD': 'HEAD', 'POST': 'POST', 'DELETE': 'DELETE', 'OPTIONS': 'OPTIONS', } for method, exp_method in method_map.items(): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/v1/a/', environ={'REQUEST_METHOD': method}) now = 10000.0 app.log_request(req, 299, 11, 3, now, now + 1.17) self.assertTiming('account.%s.299.timing' % exp_method, app, exp_timing=1.17 * 1000) self.assertUpdateStats([('account.%s.299.xfer' % exp_method, 11 + 3)], app) 
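# --- Illustrative sketch (not part of this test module) ---
# The assertions above and below pin down the statsd metric names that the
# proxy_logging middleware emits per request: "<stat_type>.<method>.<status>.timing"
# and "<stat_type>.<method>.<status>.xfer", where stat_type is account,
# container or object and unrecognised HTTP verbs collapse into the
# BAD_METHOD bucket. The helper below is only an assumed, hand-written
# demonstration of that naming scheme; it is not the middleware's code, and
# its name and default method list are chosen here purely for illustration.
def _example_metric_names(stat_type, method, status_int,
                          valid_methods=('GET', 'PUT', 'POST', 'HEAD',
                                         'DELETE', 'COPY', 'OPTIONS')):
    # Unknown verbs are reported under the catch-all BAD_METHOD bucket,
    # mirroring test_log_request_stat_method_filtering_default above.
    if method not in valid_methods:
        method = 'BAD_METHOD'
    timing = '%s.%s.%s.timing' % (stat_type, method, status_int)
    xfer = '%s.%s.%s.xfer' % (stat_type, method, status_int)
    return timing, xfer

# Example usage:
#   _example_metric_names('account', 'GET', 299)
#   -> ('account.GET.299.timing', 'account.GET.299.xfer')
#   _example_metric_names('container', 'PUTT', 911)
#   -> ('container.BAD_METHOD.911.timing', 'container.BAD_METHOD.911.xfer')
# --- end sketch ---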
def test_log_request_stat_method_filtering_custom(self): method_map = { 'foo': 'BAD_METHOD', '': 'BAD_METHOD', 'PUTT': 'BAD_METHOD', 'SPECIAL': 'SPECIAL', # will be configured 'GET': 'GET', 'PUT': 'PUT', 'COPY': 'BAD_METHOD', # prove no one's special } # this conf var supports optional leading access_ for conf_key in ['access_log_statsd_valid_http_methods', 'log_statsd_valid_http_methods']: for method, exp_method in method_map.items(): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), { conf_key: 'SPECIAL, GET,PUT ', # crazy spaces ok }) app.access_logger = FakeLogger() req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': method}) now = 10000.0 app.log_request(req, 911, 4, 43, now, now + 1.01) self.assertTiming('container.%s.911.timing' % exp_method, app, exp_timing=1.01 * 1000) self.assertUpdateStats([('container.%s.911.xfer' % exp_method, 4 + 43)], app) def test_basic_req(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(resp_body, 'FAKE APP') self.assertEqual(log_parts[11], str(len(resp_body))) def test_basic_req_second_time(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={ 'swift.proxy_access_log_made': True, 'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) resp_body = ''.join(resp) self._log_parts(app, should_be_empty=True) self.assertEqual(resp_body, 'FAKE APP') def test_multi_segment_resp(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp( ['some', 'chunks', 'of data']), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'swift.source': 'SOS'}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(resp_body, 'somechunksof data') self.assertEqual(log_parts[11], str(len(resp_body))) self.assertUpdateStats([('SOS.GET.200.xfer', len(resp_body))], app) def test_log_headers(self): for conf_key in ['access_log_headers', 'log_headers']: app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {conf_key: 'yes'}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) headers = unquote(log_parts[14]).split('\n') self.assertTrue('Host: localhost:80' in headers) def test_access_log_headers_only(self): app = proxy_logging.ProxyLoggingMiddleware( FakeApp(), {'log_headers': 'yes', 'access_log_headers_only': 'FIRST, seCond'}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'First': '1', 'Second': '2', 'Third': '3'}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) headers = unquote(log_parts[14]).split('\n') self.assertTrue('First: 1' in headers) self.assertTrue('Second: 2' in headers) self.assertTrue('Third: 3' not in headers) self.assertTrue('Host: localhost:80' not in headers) def 
test_upload_size(self): # Using default policy app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {'log_headers': 'yes'}) app.access_logger = FakeLogger() req = Request.blank( '/v1/a/c/o/foo', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': BytesIO(b'some stuff')}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) self.assertEqual(log_parts[11], str(len('FAKE APP'))) self.assertEqual(log_parts[10], str(len('some stuff'))) self.assertUpdateStats([('object.PUT.200.xfer', len('some stuff') + len('FAKE APP')), ('object.policy.0.PUT.200.xfer', len('some stuff') + len('FAKE APP'))], app) # Using a non-existent policy app = proxy_logging.ProxyLoggingMiddleware(FakeApp(policy_idx='-1'), {'log_headers': 'yes'}) app.access_logger = FakeLogger() req = Request.blank( '/v1/a/c/o/foo', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': BytesIO(b'some stuff')}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) self.assertEqual(log_parts[11], str(len('FAKE APP'))) self.assertEqual(log_parts[10], str(len('some stuff'))) self.assertUpdateStats([('object.PUT.200.xfer', len('some stuff') + len('FAKE APP'))], app) def test_upload_size_no_policy(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(policy_idx=None), {'log_headers': 'yes'}) app.access_logger = FakeLogger() req = Request.blank( '/v1/a/c/o/foo', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': BytesIO(b'some stuff')}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) self.assertEqual(log_parts[11], str(len('FAKE APP'))) self.assertEqual(log_parts[10], str(len('some stuff'))) self.assertUpdateStats([('object.PUT.200.xfer', len('some stuff') + len('FAKE APP'))], app) def test_upload_line(self): app = proxy_logging.ProxyLoggingMiddleware(FakeAppReadline(), {'log_headers': 'yes'}) app.access_logger = FakeLogger() req = Request.blank( '/v1/a/c', environ={'REQUEST_METHOD': 'POST', 'wsgi.input': BytesIO(b'some stuff\nsome other stuff\n')}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) self.assertEqual(log_parts[11], str(len('FAKE APP'))) self.assertEqual(log_parts[10], str(len('some stuff\n'))) self.assertUpdateStats([('container.POST.200.xfer', len('some stuff\n') + len('FAKE APP'))], app) def test_log_query_string(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'QUERY_STRING': 'x=3'}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) self.assertEqual(unquote(log_parts[4]), '/?x=3') def test_client_logging(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'REMOTE_ADDR': '1.2.3.4'}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) self.assertEqual(log_parts[0], '1.2.3.4') # client ip self.assertEqual(log_parts[1], '1.2.3.4') # remote addr def test_iterator_closing(self): class CloseableBody(object): def __init__(self): self.closed = False def close(self): self.closed = True def __iter__(self): return iter(["CloseableBody"]) body = CloseableBody() app = proxy_logging.ProxyLoggingMiddleware(FakeApp(body), {}) req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 
'REMOTE_ADDR': '1.2.3.4'}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] self.assertTrue(body.closed) def test_proxy_client_logging(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={ 'REQUEST_METHOD': 'GET', 'REMOTE_ADDR': '1.2.3.4', 'HTTP_X_FORWARDED_FOR': '4.5.6.7,8.9.10.11'}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) self.assertEqual(log_parts[0], '4.5.6.7') # client ip self.assertEqual(log_parts[1], '1.2.3.4') # remote addr app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={ 'REQUEST_METHOD': 'GET', 'REMOTE_ADDR': '1.2.3.4', 'HTTP_X_CLUSTER_CLIENT_IP': '4.5.6.7'}) resp = app(req.environ, start_response) # exhaust generator [x for x in resp] log_parts = self._log_parts(app) self.assertEqual(log_parts[0], '4.5.6.7') # client ip self.assertEqual(log_parts[1], '1.2.3.4') # remote addr def test_facility(self): app = proxy_logging.ProxyLoggingMiddleware( FakeApp(), {'log_headers': 'yes', 'access_log_facility': 'LOG_LOCAL7'}) handler = get_logger.handler4logger[app.access_logger.logger] self.assertEqual(SysLogHandler.LOG_LOCAL7, handler.facility) def test_filter(self): factory = proxy_logging.filter_factory({}) self.assertTrue(callable(factory)) self.assertTrue(callable(factory(FakeApp()))) def test_unread_body(self): app = proxy_logging.ProxyLoggingMiddleware( FakeApp(['some', 'stuff']), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) # read first chunk next(resp) resp.close() # raise a GeneratorExit in middleware app_iter loop log_parts = self._log_parts(app) self.assertEqual(log_parts[6], '499') self.assertEqual(log_parts[11], '4') # write length def test_disconnect_on_readline(self): app = proxy_logging.ProxyLoggingMiddleware(FakeAppReadline(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'wsgi.input': FileLikeExceptor()}) try: resp = app(req.environ, start_response) # read body ''.join(resp) except IOError: pass log_parts = self._log_parts(app) self.assertEqual(log_parts[6], '499') self.assertEqual(log_parts[10], '-') # read length def test_disconnect_on_read(self): app = proxy_logging.ProxyLoggingMiddleware( FakeApp(['some', 'stuff']), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'wsgi.input': FileLikeExceptor()}) try: resp = app(req.environ, start_response) # read body ''.join(resp) except IOError: pass log_parts = self._log_parts(app) self.assertEqual(log_parts[6], '499') self.assertEqual(log_parts[10], '-') # read length def test_app_exception(self): app = proxy_logging.ProxyLoggingMiddleware( FakeAppThatExcepts(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) try: app(req.environ, start_response) except Exception: pass log_parts = self._log_parts(app) self.assertEqual(log_parts[6], '500') self.assertEqual(log_parts[10], '-') # read length def test_no_content_length_no_transfer_encoding_with_list_body(self): app = proxy_logging.ProxyLoggingMiddleware( FakeAppNoContentLengthNoTransferEncoding( # test the "while not chunk: chunk = next(iterator)" body=['', '', 'line1\n', 'line2\n'], ), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = 
app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(resp_body, 'line1\nline2\n') self.assertEqual(log_parts[11], str(len(resp_body))) def test_no_content_length_no_transfer_encoding_with_empty_strings(self): app = proxy_logging.ProxyLoggingMiddleware( FakeAppNoContentLengthNoTransferEncoding( # test the "while not chunk: chunk = next(iterator)" body=['', '', ''], ), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(resp_body, '') self.assertEqual(log_parts[11], '-') def test_no_content_length_no_transfer_encoding_with_generator(self): class BodyGen(object): def __init__(self, data): self.data = data def __iter__(self): yield self.data app = proxy_logging.ProxyLoggingMiddleware( FakeAppNoContentLengthNoTransferEncoding( body=BodyGen('abc'), ), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(resp_body, 'abc') self.assertEqual(log_parts[11], '3') def test_req_path_info_popping(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/v1/something', environ={'REQUEST_METHOD': 'GET'}) req.path_info_pop() self.assertEqual(req.environ['PATH_INFO'], '/something') resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/v1/something') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(resp_body, 'FAKE APP') self.assertEqual(log_parts[11], str(len(resp_body))) def test_ipv6(self): ipv6addr = '2001:db8:85a3:8d3:1319:8a2e:370:7348' app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) req.remote_addr = ipv6addr resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[0], ipv6addr) self.assertEqual(log_parts[1], ipv6addr) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(resp_body, 'FAKE APP') self.assertEqual(log_parts[11], str(len(resp_body))) def test_log_info_none(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) list(app(req.environ, start_response)) log_parts = self._log_parts(app) self.assertEqual(log_parts[17], '-') app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) req.environ['swift.log_info'] = [] list(app(req.environ, start_response)) log_parts = 
self._log_parts(app) self.assertEqual(log_parts[17], '-') def test_log_info_single(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) req.environ['swift.log_info'] = ['one'] list(app(req.environ, start_response)) log_parts = self._log_parts(app) self.assertEqual(log_parts[17], 'one') def test_log_info_multiple(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) req.environ['swift.log_info'] = ['one', 'and two'] list(app(req.environ, start_response)) log_parts = self._log_parts(app) self.assertEqual(log_parts[17], 'one%2Cand%20two') def test_log_auth_token(self): auth_token = 'b05bf940-0464-4c0e-8c70-87717d2d73e8' # Default - reveal_sensitive_prefix is 16 # No x-auth-token header app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[9], '-') # Has x-auth-token header app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'HTTP_X_AUTH_TOKEN': auth_token}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[9], 'b05bf940-0464-4c...') # Truncate to first 8 characters app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), { 'reveal_sensitive_prefix': '8'}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[9], '-') app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), { 'reveal_sensitive_prefix': '8'}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'HTTP_X_AUTH_TOKEN': auth_token}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[9], 'b05bf940...') # Token length and reveal_sensitive_prefix are same (no truncate) app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), { 'reveal_sensitive_prefix': str(len(auth_token))}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'HTTP_X_AUTH_TOKEN': auth_token}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[9], auth_token) # No effective limit on auth token app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), { 'reveal_sensitive_prefix': constraints.MAX_HEADER_SIZE}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'HTTP_X_AUTH_TOKEN': auth_token}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[9], auth_token) # Don't log x-auth-token app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), { 'reveal_sensitive_prefix': '0'}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[9], '-') app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), { 'reveal_sensitive_prefix': '0'}) 
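# With reveal_sensitive_prefix set to 0 the token value is fully redacted:
# only the '...' truncation marker should appear in the access line below.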
app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'HTTP_X_AUTH_TOKEN': auth_token}) resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[9], '...') # Avoids pyflakes error, "local variable 'resp_body' is assigned to # but never used self.assertTrue(resp_body is not None) def test_ensure_fields(self): app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) with mock.patch('time.time', mock.MagicMock( side_effect=[10000000.0, 10000001.0])): resp = app(req.environ, start_response) resp_body = ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(len(log_parts), 21) self.assertEqual(log_parts[0], '-') self.assertEqual(log_parts[1], '-') self.assertEqual(log_parts[2], '26/Apr/1970/17/46/41') self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(log_parts[7], '-') self.assertEqual(log_parts[8], '-') self.assertEqual(log_parts[9], '-') self.assertEqual(log_parts[10], '-') self.assertEqual(resp_body, 'FAKE APP') self.assertEqual(log_parts[11], str(len(resp_body))) self.assertEqual(log_parts[12], '-') self.assertEqual(log_parts[13], '-') self.assertEqual(log_parts[14], '-') self.assertEqual(log_parts[15], '1.0000') self.assertEqual(log_parts[16], '-') self.assertEqual(log_parts[17], '-') self.assertEqual(log_parts[18], '10000000.000000000') self.assertEqual(log_parts[19], '10000001.000000000') self.assertEqual(log_parts[20], '-') def test_dual_logging_middlewares(self): # Since no internal request is being made, outer most proxy logging # middleware, log1, should have performed the logging. app = FakeApp() flg0 = FakeLogger() env = {} log0 = proxy_logging.ProxyLoggingMiddleware(app, env, logger=flg0) flg1 = FakeLogger() log1 = proxy_logging.ProxyLoggingMiddleware(log0, env, logger=flg1) req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = log1(req.environ, start_response) resp_body = ''.join(resp) self._log_parts(log0, should_be_empty=True) log_parts = self._log_parts(log1) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(resp_body, 'FAKE APP') self.assertEqual(log_parts[11], str(len(resp_body))) def test_dual_logging_middlewares_w_inner(self): class FakeMiddleware(object): """ Fake middleware to make a separate internal request, but construct the response with different data. """ def __init__(self, app, conf): self.app = app self.conf = conf def GET(self, req): # Make the internal request ireq = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(ireq.environ, start_response) resp_body = ''.join(resp) if resp_body != 'FAKE APP': return Response(request=req, body="FAKE APP WAS NOT RETURNED", content_type="text/plain") # But our response is different return Response(request=req, body="FAKE MIDDLEWARE", content_type="text/plain") def __call__(self, env, start_response): req = Request(env) return self.GET(req)(env, start_response) # Since an internal request is being made, inner most proxy logging # middleware, log0, should have performed the logging. 
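# Each logger ends up with a different WSGI environ here: log1 wraps the
# client request, while FakeMiddleware issues its internal GET with a fresh
# environ that only log0 sees, so each middleware logs exactly one request.
# (When nested loggers share a single environ, as in the previous test, the
# outermost instance claims it first -- the middleware records that it has
# handled an environ, via the swift.proxy_access_log_made key in recent
# Swift -- and the inner instance skips it.)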
app = FakeApp() flg0 = FakeLogger() env = {} log0 = proxy_logging.ProxyLoggingMiddleware(app, env, logger=flg0) fake = FakeMiddleware(log0, env) flg1 = FakeLogger() log1 = proxy_logging.ProxyLoggingMiddleware(fake, env, logger=flg1) req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = log1(req.environ, start_response) resp_body = ''.join(resp) # Inner most logger should have logged the app's response log_parts = self._log_parts(log0) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(log_parts[11], str(len('FAKE APP'))) # Outer most logger should have logged the other middleware's response log_parts = self._log_parts(log1) self.assertEqual(log_parts[3], 'GET') self.assertEqual(log_parts[4], '/') self.assertEqual(log_parts[5], 'HTTP/1.0') self.assertEqual(log_parts[6], '200') self.assertEqual(resp_body, 'FAKE MIDDLEWARE') self.assertEqual(log_parts[11], str(len(resp_body))) def test_policy_index(self): # Policy index can be specified by X-Backend-Storage-Policy-Index # in the request header for object API app = proxy_logging.ProxyLoggingMiddleware(FakeApp(policy_idx='1'), {}) app.access_logger = FakeLogger() req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}) resp = app(req.environ, start_response) ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[20], '1') # Policy index can be specified by X-Backend-Storage-Policy-Index # in the response header for container API app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {}) app.access_logger = FakeLogger() req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': 'GET'}) def fake_call(app, env, start_response): start_response(app.response_str, [('Content-Type', 'text/plain'), ('Content-Length', str(sum(map(len, app.body)))), ('X-Backend-Storage-Policy-Index', '1')]) while env['wsgi.input'].read(5): pass return app.body with mock.patch.object(FakeApp, '__call__', fake_call): resp = app(req.environ, start_response) ''.join(resp) log_parts = self._log_parts(app) self.assertEqual(log_parts[20], '1') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_versioned_writes.py0000664000567000056710000011336213024044354026372 0ustar jenkinsjenkins00000000000000# Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import functools import json import os import time import unittest from swift.common import swob from swift.common.middleware import versioned_writes from swift.common.swob import Request from test.unit.common.middleware.helpers import FakeSwift class FakeCache(object): def __init__(self, val): if 'status' not in val: val['status'] = 200 self.val = val def get(self, *args): return self.val def local_tz(func): ''' Decorator to change the timezone when running a test. This uses the Eastern Time Zone definition from the time module's docs. Note that the timezone affects things like time.time() and time.mktime(). 
''' @functools.wraps(func) def wrapper(*args, **kwargs): tz = os.environ.get('TZ', '') try: os.environ['TZ'] = 'EST+05EDT,M4.1.0,M10.5.0' time.tzset() return func(*args, **kwargs) finally: os.environ['TZ'] = tz time.tzset() return wrapper class VersionedWritesBaseTestCase(unittest.TestCase): def setUp(self): self.app = FakeSwift() conf = {'allow_versioned_writes': 'true'} self.vw = versioned_writes.filter_factory(conf)(self.app) def call_app(self, req, app=None, expect_exception=False): if app is None: app = self.app self.authorized = [] def authorize(req): self.authorized.append(req) if 'swift.authorize' not in req.environ: req.environ['swift.authorize'] = authorize req.headers.setdefault("User-Agent", "Marula Kruger") status = [None] headers = [None] def start_response(s, h, ei=None): status[0] = s headers[0] = h body_iter = app(req.environ, start_response) body = '' caught_exc = None try: for chunk in body_iter: body += chunk except Exception as exc: if expect_exception: caught_exc = exc else: raise if expect_exception: return status[0], headers[0], body, caught_exc else: return status[0], headers[0], body def call_vw(self, req, **kwargs): return self.call_app(req, app=self.vw, **kwargs) def assertRequestEqual(self, req, other): self.assertEqual(req.method, other.method) self.assertEqual(req.path, other.path) class VersionedWritesTestCase(VersionedWritesBaseTestCase): def test_put_container(self): self.app.register('PUT', '/v1/a/c', swob.HTTPOk, {}, 'passed') req = Request.blank('/v1/a/c', headers={'X-Versions-Location': 'ver_cont'}, environ={'REQUEST_METHOD': 'PUT'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') # check for sysmeta header calls = self.app.calls_with_headers method, path, req_headers = calls[0] self.assertEqual('PUT', method) self.assertEqual('/v1/a/c', path) self.assertTrue('x-container-sysmeta-versions-location' in req_headers) self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_container_allow_versioned_writes_false(self): self.vw.conf = {'allow_versioned_writes': 'false'} # PUT/POST container must fail as 412 when allow_versioned_writes # set to false for method in ('PUT', 'POST'): req = Request.blank('/v1/a/c', headers={'X-Versions-Location': 'ver_cont'}, environ={'REQUEST_METHOD': method}) status, headers, body = self.call_vw(req) self.assertEqual(status, "412 Precondition Failed") # GET/HEAD performs as normal self.app.register('GET', '/v1/a/c', swob.HTTPOk, {}, 'passed') self.app.register('HEAD', '/v1/a/c', swob.HTTPOk, {}, 'passed') for method in ('GET', 'HEAD'): req = Request.blank('/v1/a/c', headers={'X-Versions-Location': 'ver_cont'}, environ={'REQUEST_METHOD': method}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') def test_remove_versions_location(self): self.app.register('POST', '/v1/a/c', swob.HTTPOk, {}, 'passed') req = Request.blank('/v1/a/c', headers={'X-Remove-Versions-Location': 'x'}, environ={'REQUEST_METHOD': 'POST'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') # check for sysmeta header calls = self.app.calls_with_headers method, path, req_headers = calls[0] self.assertEqual('POST', method) self.assertEqual('/v1/a/c', path) self.assertTrue('x-container-sysmeta-versions-location' in req_headers) self.assertTrue('x-versions-location' in req_headers) self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_remove_add_versions_precedence(self): self.app.register( 
'POST', '/v1/a/c', swob.HTTPOk, {'x-container-sysmeta-versions-location': 'ver_cont'}, 'passed') req = Request.blank('/v1/a/c', headers={'X-Remove-Versions-Location': 'x', 'X-Versions-Location': 'ver_cont'}, environ={'REQUEST_METHOD': 'POST'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertTrue(('X-Versions-Location', 'ver_cont') in headers) # check for sysmeta header calls = self.app.calls_with_headers method, path, req_headers = calls[0] self.assertEqual('POST', method) self.assertEqual('/v1/a/c', path) self.assertTrue('x-container-sysmeta-versions-location' in req_headers) self.assertTrue('x-remove-versions-location' not in req_headers) self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_get_container(self): self.app.register( 'GET', '/v1/a/c', swob.HTTPOk, {'x-container-sysmeta-versions-location': 'ver_cont'}, None) req = Request.blank( '/v1/a/c', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertTrue(('X-Versions-Location', 'ver_cont') in headers) self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_get_head(self): self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, None) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) self.app.register('HEAD', '/v1/a/c/o', swob.HTTPOk, {}, None) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_put_object_no_versioning(self): self.app.register( 'PUT', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') cache = FakeCache({}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_put_first_object_success(self): self.app.register( 'PUT', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'HEAD', '/v1/a/c/o', swob.HTTPNotFound, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_PUT_versioning_with_nonzero_default_policy(self): self.app.register( 'PUT', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'HEAD', '/v1/a/c/o', swob.HTTPNotFound, {}, None) cache = FakeCache({'versions': 'ver_cont', 'storage_policy': '2'}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') # check for 'X-Backend-Storage-Policy-Index' in HEAD request calls = self.app.calls_with_headers method, path, req_headers = calls[0] self.assertEqual('HEAD', method) self.assertEqual('/v1/a/c/o', path) self.assertTrue('X-Backend-Storage-Policy-Index' in req_headers) self.assertEqual('2', 
req_headers.get('X-Backend-Storage-Policy-Index')) self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_put_object_no_versioning_with_container_config_true(self): # explicitly set allow_versioned_writes to false and expect no COPY to occur self.vw.conf = {'allow_versioned_writes': 'false'} self.app.register( 'PUT', '/v1/a/c/o', swob.HTTPCreated, {}, 'passed') self.app.register( 'HEAD', '/v1/a/c/o', swob.HTTPOk, {'last-modified': 'Wed, 19 Nov 2014 18:19:02 GMT'}, 'passed') cache = FakeCache({'versions': 'ver_cont'}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '201 Created') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) called_method = [method for (method, path, hdrs) in self.app._calls] self.assertTrue('COPY' not in called_method) def test_delete_object_no_versioning_with_container_config_true(self): # explicitly set allow_versioned_writes to false and expect no GET of the # versions container and no COPY (just delete the object as normal) self.vw.conf = {'allow_versioned_writes': 'false'} self.app.register( 'DELETE', '/v1/a/c/o', swob.HTTPNoContent, {}, 'passed') cache = FakeCache({'versions': 'ver_cont'}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache}) status, headers, body = self.call_vw(req) self.assertEqual(status, '204 No Content') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) called_method = \ [method for (method, path, rheaders) in self.app._calls] self.assertTrue('COPY' not in called_method) self.assertTrue('GET' not in called_method) def test_copy_object_no_versioning_with_container_config_true(self): # explicitly set allow_versioned_writes to false and expect no extra # COPY (just copy the object as normal) self.vw.conf = {'allow_versioned_writes': 'false'} self.app.register( 'COPY', '/v1/a/c/o', swob.HTTPCreated, {}, None) cache = FakeCache({'versions': 'ver_cont'}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'COPY', 'swift.cache': cache}) status, headers, body = self.call_vw(req) self.assertEqual(status, '201 Created') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) called_method = \ [method for (method, path, rheaders) in self.app._calls] self.assertTrue('COPY' in called_method) self.assertEqual(called_method.count('COPY'), 1) def test_new_version_success(self): self.app.register( 'PUT', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'HEAD', '/v1/a/c/o', swob.HTTPOk, {'last-modified': 'Wed, 19 Nov 2014 18:19:02 GMT'}, 'passed') self.app.register( 'COPY', '/v1/a/c/o', swob.HTTPCreated, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) @local_tz def test_new_version_sysmeta_precedence(self): self.app.register( 'PUT', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'HEAD', '/v1/a/c/o', swob.HTTPOk, {'last-modified': 'Thu, 1 Jan 1970 00:00:00 GMT'}, 'passed') self.app.register( 'COPY', '/v1/a/c/o', swob.HTTPCreated, {}, None) # fill cache with two different values for versions location # new middleware should 
use sysmeta first cache = FakeCache({'versions': 'old_ver_cont', 'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) # check that sysmeta header was used calls = self.app.calls_with_headers method, path, req_headers = calls[1] self.assertEqual('COPY', method) self.assertEqual('/v1/a/c/o', path) self.assertEqual('ver_cont/001o/0000000000.00000', req_headers['Destination']) def test_copy_first_version(self): self.app.register( 'COPY', '/v1/a/src_cont/src_obj', swob.HTTPOk, {}, 'passed') self.app.register( 'HEAD', '/v1/a/tgt_cont/tgt_obj', swob.HTTPNotFound, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/src_cont/src_obj', environ={'REQUEST_METHOD': 'COPY', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}, headers={'Destination': 'tgt_cont/tgt_obj'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_copy_new_version(self): self.app.register( 'COPY', '/v1/a/src_cont/src_obj', swob.HTTPOk, {}, 'passed') self.app.register( 'HEAD', '/v1/a/tgt_cont/tgt_obj', swob.HTTPOk, {'last-modified': 'Wed, 19 Nov 2014 18:19:02 GMT'}, 'passed') self.app.register( 'COPY', '/v1/a/tgt_cont/tgt_obj', swob.HTTPCreated, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/src_cont/src_obj', environ={'REQUEST_METHOD': 'COPY', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}, headers={'Destination': 'tgt_cont/tgt_obj'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_copy_new_version_different_account(self): self.app.register( 'COPY', '/v1/src_a/src_cont/src_obj', swob.HTTPOk, {}, 'passed') self.app.register( 'HEAD', '/v1/tgt_a/tgt_cont/tgt_obj', swob.HTTPOk, {'last-modified': 'Wed, 19 Nov 2014 18:19:02 GMT'}, 'passed') self.app.register( 'COPY', '/v1/tgt_a/tgt_cont/tgt_obj', swob.HTTPCreated, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/src_a/src_cont/src_obj', environ={'REQUEST_METHOD': 'COPY', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}, headers={'Destination': 'tgt_cont/tgt_obj', 'Destination-Account': 'tgt_a'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) def test_copy_new_version_bogus_account(self): cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/src_a/src_cont/src_obj', environ={'REQUEST_METHOD': 'COPY', 'swift.cache': cache, 'CONTENT_LENGTH': '100'}, headers={'Destination': 'tgt_cont/tgt_obj', 'Destination-Account': '/im/on/a/boat'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '412 Precondition Failed') def test_delete_first_object_success(self): self.app.register( 'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on', swob.HTTPNotFound, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', 
environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'CONTENT_LENGTH': '0'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ('DELETE', '/v1/a/c/o'), ]) def test_delete_latest_version_success(self): self.app.register( 'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on', swob.HTTPOk, {}, '[{"hash": "y", ' '"last_modified": "2014-11-21T14:23:02.206740", ' '"bytes": 3, ' '"name": "001o/2", ' '"content_type": "text/plain"}, ' '{"hash": "x", ' '"last_modified": "2014-11-21T14:14:27.409100", ' '"bytes": 3, ' '"name": "001o/1", ' '"content_type": "text/plain"}]') self.app.register( 'COPY', '/v1/a/ver_cont/001o/2', swob.HTTPCreated, {}, None) self.app.register( 'DELETE', '/v1/a/ver_cont/001o/2', swob.HTTPOk, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', headers={'X-If-Delete-At': 1}, environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'CONTENT_LENGTH': '0'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) # check that X-If-Delete-At was removed from DELETE request req_headers = self.app.headers[-1] self.assertNotIn('x-if-delete-at', [h.lower() for h in req_headers]) prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ('COPY', '/v1/a/ver_cont/001o/2'), ('DELETE', '/v1/a/ver_cont/001o/2'), ]) def test_delete_single_version_success(self): # check that if the first listing page has just a single item then # it is not erroneously inferred to be a non-reversed listing self.app.register( 'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on', swob.HTTPOk, {}, '[{"hash": "y", ' '"last_modified": "2014-11-21T14:23:02.206740", ' '"bytes": 3, ' '"name": "001o/1", ' '"content_type": "text/plain"}]') self.app.register( 'COPY', '/v1/a/ver_cont/001o/1', swob.HTTPCreated, {}, None) self.app.register( 'DELETE', '/v1/a/ver_cont/001o/1', swob.HTTPOk, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'CONTENT_LENGTH': '0'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ('COPY', '/v1/a/ver_cont/001o/1'), ('DELETE', '/v1/a/ver_cont/001o/1'), ]) def test_DELETE_on_expired_versioned_object(self): self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on', swob.HTTPOk, {}, '[{"hash": "y", ' '"last_modified": "2014-11-21T14:23:02.206740", ' '"bytes": 3, ' '"name": "001o/2", ' '"content_type": "text/plain"}, ' '{"hash": "x", ' '"last_modified": "2014-11-21T14:14:27.409100", ' '"bytes": 3, ' '"name": "001o/1", ' '"content_type": "text/plain"}]') # expired object 
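# i.e. the newest listed version (001o/2) has already expired and been
# reaped, so restoring it will 404 and the middleware is expected to fall
# back to the next-newest version (001o/1), as the registrations below and
# the expected call sequence both show.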
self.app.register( 'COPY', '/v1/a/ver_cont/001o/2', swob.HTTPNotFound, {}, None) self.app.register( 'COPY', '/v1/a/ver_cont/001o/1', swob.HTTPCreated, {}, None) self.app.register( 'DELETE', '/v1/a/ver_cont/001o/1', swob.HTTPOk, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'CONTENT_LENGTH': '0'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ('COPY', '/v1/a/ver_cont/001o/2'), ('COPY', '/v1/a/ver_cont/001o/1'), ('DELETE', '/v1/a/ver_cont/001o/1'), ]) def test_denied_DELETE_of_versioned_object(self): authorize_call = [] self.app.register( 'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on', swob.HTTPOk, {}, '[{"hash": "y", ' '"last_modified": "2014-11-21T14:23:02.206740", ' '"bytes": 3, ' '"name": "001o/2", ' '"content_type": "text/plain"}, ' '{"hash": "x", ' '"last_modified": "2014-11-21T14:14:27.409100", ' '"bytes": 3, ' '"name": "001o/1", ' '"content_type": "text/plain"}]') self.app.register( 'DELETE', '/v1/a/c/o', swob.HTTPForbidden, {}, None) def fake_authorize(req): authorize_call.append(req) return swob.HTTPForbidden() cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'swift.authorize': fake_authorize, 'CONTENT_LENGTH': '0'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '403 Forbidden') self.assertEqual(len(authorize_call), 1) self.assertRequestEqual(req, authorize_call[0]) prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ]) class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase): def test_delete_latest_version_success(self): self.app.register( 'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=&reverse=on', swob.HTTPOk, {}, '[{"hash": "x", ' '"last_modified": "2014-11-21T14:14:27.409100", ' '"bytes": 3, ' '"name": "001o/1", ' '"content_type": "text/plain"}, ' '{"hash": "y", ' '"last_modified": "2014-11-21T14:23:02.206740", ' '"bytes": 3, ' '"name": "001o/2", ' '"content_type": "text/plain"}]') self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/' '&marker=001o/2', swob.HTTPNotFound, {}, None) self.app.register( 'COPY', '/v1/a/ver_cont/001o/2', swob.HTTPCreated, {}, None) self.app.register( 'DELETE', '/v1/a/ver_cont/001o/2', swob.HTTPOk, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', headers={'X-If-Delete-At': 1}, environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'CONTENT_LENGTH': '0'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) # check that X-If-Delete-At was removed from DELETE request req_headers = self.app.headers[-1] self.assertNotIn('x-if-delete-at', [h.lower() for h in req_headers]) prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' 
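# Two listing GETs are expected in this test: the old-style container
# ignored reverse=on and returned the page in ascending order, so the
# middleware pages forward (marker=001o/2, no reverse) to confirm it
# already holds the newest version before copying it back and deleting it.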
self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ('GET', prefix_listing_prefix + 'marker=001o/2'), ('COPY', '/v1/a/ver_cont/001o/2'), ('DELETE', '/v1/a/ver_cont/001o/2'), ]) def test_DELETE_on_expired_versioned_object(self): self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=&reverse=on', swob.HTTPOk, {}, '[{"hash": "x", ' '"last_modified": "2014-11-21T14:14:27.409100", ' '"bytes": 3, ' '"name": "001o/1", ' '"content_type": "text/plain"}, ' '{"hash": "y", ' '"last_modified": "2014-11-21T14:23:02.206740", ' '"bytes": 3, ' '"name": "001o/2", ' '"content_type": "text/plain"}]') self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/' '&marker=001o/2', swob.HTTPNotFound, {}, None) # expired object self.app.register( 'COPY', '/v1/a/ver_cont/001o/2', swob.HTTPNotFound, {}, None) self.app.register( 'COPY', '/v1/a/ver_cont/001o/1', swob.HTTPCreated, {}, None) self.app.register( 'DELETE', '/v1/a/ver_cont/001o/1', swob.HTTPOk, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'CONTENT_LENGTH': '0'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '200 OK') self.assertEqual(len(self.authorized), 1) self.assertRequestEqual(req, self.authorized[0]) prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ('GET', prefix_listing_prefix + 'marker=001o/2'), ('COPY', '/v1/a/ver_cont/001o/2'), ('COPY', '/v1/a/ver_cont/001o/1'), ('DELETE', '/v1/a/ver_cont/001o/1'), ]) def test_denied_DELETE_of_versioned_object(self): authorize_call = [] self.app.register( 'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed') self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=&reverse=on', swob.HTTPOk, {}, '[{"hash": "x", ' '"last_modified": "2014-11-21T14:14:27.409100", ' '"bytes": 3, ' '"name": "001o/1", ' '"content_type": "text/plain"}, ' '{"hash": "y", ' '"last_modified": "2014-11-21T14:23:02.206740", ' '"bytes": 3, ' '"name": "001o/2", ' '"content_type": "text/plain"}]') self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/' '&marker=001o/2', swob.HTTPNotFound, {}, None) self.app.register( 'DELETE', '/v1/a/c/o', swob.HTTPForbidden, {}, None) def fake_authorize(req): authorize_call.append(req) return swob.HTTPForbidden() cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'swift.authorize': fake_authorize, 'CONTENT_LENGTH': '0'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '403 Forbidden') self.assertEqual(len(authorize_call), 1) self.assertRequestEqual(req, authorize_call[0]) prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ('GET', prefix_listing_prefix + 'marker=001o/2'), ]) def test_partially_upgraded_cluster(self): old_versions = [ {'hash': 'etag%d' % x, 'last_modified': "2014-11-21T14:14:%02d.409100" % x, 'bytes': 3, 'name': '001o/%d' % x, 'content_type': 'text/plain'} for x in range(5)] # first container server can reverse self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=&reverse=on', swob.HTTPOk, {}, json.dumps(list(reversed(old_versions[2:])))) # but all objects are already gone self.app.register( 'COPY', 
'/v1/a/ver_cont/001o/4', swob.HTTPNotFound, {}, None) self.app.register( 'COPY', '/v1/a/ver_cont/001o/3', swob.HTTPNotFound, {}, None) self.app.register( 'COPY', '/v1/a/ver_cont/001o/2', swob.HTTPNotFound, {}, None) # second container server can't reverse self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=001o/2&reverse=on', swob.HTTPOk, {}, json.dumps(old_versions[3:])) # subsequent requests shouldn't reverse self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=&end_marker=001o/2', swob.HTTPOk, {}, json.dumps(old_versions[:1])) self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=001o/0&end_marker=001o/2', swob.HTTPOk, {}, json.dumps(old_versions[1:2])) self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=001o/1&end_marker=001o/2', swob.HTTPOk, {}, '[]') self.app.register( 'COPY', '/v1/a/ver_cont/001o/1', swob.HTTPOk, {}, None) self.app.register( 'DELETE', '/v1/a/ver_cont/001o/1', swob.HTTPNoContent, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'CONTENT_LENGTH': '0'}) status, headers, body = self.call_vw(req) self.assertEqual(status, '204 No Content') prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ('COPY', '/v1/a/ver_cont/001o/4'), ('COPY', '/v1/a/ver_cont/001o/3'), ('COPY', '/v1/a/ver_cont/001o/2'), ('GET', prefix_listing_prefix + 'marker=001o/2&reverse=on'), ('GET', prefix_listing_prefix + 'marker=&end_marker=001o/2'), ('GET', prefix_listing_prefix + 'marker=001o/0&end_marker=001o/2'), ('GET', prefix_listing_prefix + 'marker=001o/1&end_marker=001o/2'), ('COPY', '/v1/a/ver_cont/001o/1'), ('DELETE', '/v1/a/ver_cont/001o/1'), ]) def test_partially_upgraded_cluster_single_result_on_second_page(self): old_versions = [ {'hash': 'etag%d' % x, 'last_modified': "2014-11-21T14:14:%02d.409100" % x, 'bytes': 3, 'name': '001o/%d' % x, 'content_type': 'text/plain'} for x in range(5)] # first container server can reverse self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=&reverse=on', swob.HTTPOk, {}, json.dumps(list(reversed(old_versions[-2:])))) # but both objects are already gone self.app.register( 'COPY', '/v1/a/ver_cont/001o/4', swob.HTTPNotFound, {}, None) self.app.register( 'COPY', '/v1/a/ver_cont/001o/3', swob.HTTPNotFound, {}, None) # second container server can't reverse self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=001o/3&reverse=on', swob.HTTPOk, {}, json.dumps(old_versions[4:])) # subsequent requests shouldn't reverse self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=&end_marker=001o/3', swob.HTTPOk, {}, json.dumps(old_versions[:2])) self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=001o/1&end_marker=001o/3', swob.HTTPOk, {}, json.dumps(old_versions[2:3])) self.app.register( 'GET', '/v1/a/ver_cont?format=json&prefix=001o/&' 'marker=001o/2&end_marker=001o/3', swob.HTTPOk, {}, '[]') self.app.register( 'COPY', '/v1/a/ver_cont/001o/2', swob.HTTPOk, {}, None) self.app.register( 'DELETE', '/v1/a/ver_cont/001o/2', swob.HTTPNoContent, {}, None) cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}}) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE', 'swift.cache': cache, 'CONTENT_LENGTH': '0'}) status, headers, body = 
self.call_vw(req) self.assertEqual(status, '204 No Content') prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&' self.assertEqual(self.app.calls, [ ('GET', prefix_listing_prefix + 'marker=&reverse=on'), ('COPY', '/v1/a/ver_cont/001o/4'), ('COPY', '/v1/a/ver_cont/001o/3'), ('GET', prefix_listing_prefix + 'marker=001o/3&reverse=on'), ('GET', prefix_listing_prefix + 'marker=&end_marker=001o/3'), ('GET', prefix_listing_prefix + 'marker=001o/1&end_marker=001o/3'), ('GET', prefix_listing_prefix + 'marker=001o/2&end_marker=001o/3'), ('COPY', '/v1/a/ver_cont/001o/2'), ('DELETE', '/v1/a/ver_cont/001o/2'), ]) swift-2.7.1/test/unit/common/middleware/test_healthcheck.py0000664000567000056710000000551513024044352025240 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import shutil import tempfile import unittest from swift.common.swob import Request, Response from swift.common.middleware import healthcheck class FakeApp(object): def __call__(self, env, start_response): req = Request(env) return Response(request=req, body='FAKE APP')( env, start_response) class TestHealthCheck(unittest.TestCase): def setUp(self): self.tempdir = tempfile.mkdtemp() self.disable_path = os.path.join(self.tempdir, 'dont-taze-me-bro') self.got_statuses = [] def tearDown(self): shutil.rmtree(self.tempdir, ignore_errors=True) def get_app(self, app, global_conf, **local_conf): factory = healthcheck.filter_factory(global_conf, **local_conf) return factory(app) def start_response(self, status, headers): self.got_statuses.append(status) def test_healthcheck(self): req = Request.blank('/healthcheck', environ={'REQUEST_METHOD': 'GET'}) app = self.get_app(FakeApp(), {}) resp = app(req.environ, self.start_response) self.assertEqual(['200 OK'], self.got_statuses) self.assertEqual(resp, ['OK']) def test_healtcheck_pass(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) app = self.get_app(FakeApp(), {}) resp = app(req.environ, self.start_response) self.assertEqual(['200 OK'], self.got_statuses) self.assertEqual(resp, ['FAKE APP']) def test_healthcheck_pass_not_disabled(self): req = Request.blank('/healthcheck', environ={'REQUEST_METHOD': 'GET'}) app = self.get_app(FakeApp(), {}, disable_path=self.disable_path) resp = app(req.environ, self.start_response) self.assertEqual(['200 OK'], self.got_statuses) self.assertEqual(resp, ['OK']) def test_healthcheck_pass_disabled(self): open(self.disable_path, 'w') req = Request.blank('/healthcheck', environ={'REQUEST_METHOD': 'GET'}) app = self.get_app(FakeApp(), {}, disable_path=self.disable_path) resp = app(req.environ, self.start_response) self.assertEqual(['503 Service Unavailable'], self.got_statuses) self.assertEqual(resp, ['DISABLED BY FILE']) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_bulk.py0000664000567000056710000012314413024044354023733 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2012 OpenStack Foundation # # 
Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import numbers from six.moves import urllib import unittest import os import tarfile import zlib import mock import six from six import BytesIO from shutil import rmtree from tempfile import mkdtemp from eventlet import sleep from mock import patch, call from test.unit.common.middleware.helpers import FakeSwift from swift.common import utils, constraints from swift.common.header_key_dict import HeaderKeyDict from swift.common.middleware import bulk from swift.common.swob import Request, Response, HTTPException, \ HTTPNoContent, HTTPCreated from swift.common.http import HTTP_NOT_FOUND, HTTP_UNAUTHORIZED class FakeApp(object): def __init__(self): self.calls = 0 self.delete_paths = [] self.max_pathlen = 100 self.del_cont_total_calls = 2 self.del_cont_cur_call = 0 def __call__(self, env, start_response): self.calls += 1 if env['PATH_INFO'].startswith('/unauth/'): if env['PATH_INFO'].endswith('/c/f_ok'): return Response(status='204 No Content')(env, start_response) return Response(status=401)(env, start_response) if env['PATH_INFO'].startswith('/create_cont/'): if env['REQUEST_METHOD'] == 'HEAD': return Response(status='404 Not Found')(env, start_response) return Response(status='201 Created')(env, start_response) if env['PATH_INFO'].startswith('/create_cont_fail/'): if env['REQUEST_METHOD'] == 'HEAD': return Response(status='403 Forbidden')(env, start_response) return Response(status='404 Not Found')(env, start_response) if env['PATH_INFO'].startswith('/create_obj_unauth/'): if env['PATH_INFO'].endswith('/cont'): return Response(status='201 Created')(env, start_response) return Response(status=401)(env, start_response) if env['PATH_INFO'].startswith('/tar_works/'): if len(env['PATH_INFO']) > self.max_pathlen: return Response(status='400 Bad Request')(env, start_response) return Response(status='201 Created')(env, start_response) if env['PATH_INFO'].startswith('/tar_works_cont_head_fail/'): if env['REQUEST_METHOD'] == 'HEAD': return Response(status='404 Not Found')(env, start_response) if len(env['PATH_INFO']) > 100: return Response(status='400 Bad Request')(env, start_response) return Response(status='201 Created')(env, start_response) if (env['PATH_INFO'].startswith('/delete_works/') and env['REQUEST_METHOD'] == 'DELETE'): self.delete_paths.append(env['PATH_INFO']) if len(env['PATH_INFO']) > self.max_pathlen: return Response(status='400 Bad Request')(env, start_response) if env['PATH_INFO'].endswith('404'): return Response(status='404 Not Found')(env, start_response) if env['PATH_INFO'].endswith('badutf8'): return Response( status='412 Precondition Failed')(env, start_response) return Response(status='204 No Content')(env, start_response) if env['PATH_INFO'].startswith('/delete_cont_fail/'): return Response(status='409 Conflict')(env, start_response) if env['PATH_INFO'].startswith('/broke/'): return Response(status='500 Internal Error')(env, start_response) if env['PATH_INFO'].startswith('/delete_cont_success_after_attempts/'): if self.del_cont_cur_call < 
self.del_cont_total_calls: self.del_cont_cur_call += 1 return Response(status='409 Conflict')(env, start_response) else: return Response(status='204 No Content')(env, start_response) def build_dir_tree(start_path, tree_obj): if isinstance(tree_obj, list): for obj in tree_obj: build_dir_tree(start_path, obj) if isinstance(tree_obj, dict): for dir_name, obj in tree_obj.items(): dir_path = os.path.join(start_path, dir_name) os.mkdir(dir_path) build_dir_tree(dir_path, obj) if isinstance(tree_obj, six.text_type): tree_obj = tree_obj.encode('utf8') if isinstance(tree_obj, str): obj_path = os.path.join(start_path, tree_obj) with open(obj_path, 'w+') as tree_file: tree_file.write('testing') def build_tar_tree(tar, start_path, tree_obj, base_path=''): if isinstance(tree_obj, list): for obj in tree_obj: build_tar_tree(tar, start_path, obj, base_path=base_path) if isinstance(tree_obj, dict): for dir_name, obj in tree_obj.items(): dir_path = os.path.join(start_path, dir_name) tar_info = tarfile.TarInfo(dir_path[len(base_path):]) tar_info.type = tarfile.DIRTYPE tar.addfile(tar_info) build_tar_tree(tar, dir_path, obj, base_path=base_path) if isinstance(tree_obj, six.text_type): tree_obj = tree_obj.encode('utf8') if isinstance(tree_obj, str): obj_path = os.path.join(start_path, tree_obj) tar_info = tarfile.TarInfo('./' + obj_path[len(base_path):]) tar.addfile(tar_info) class TestUntarMetadata(unittest.TestCase): def setUp(self): self.app = FakeSwift() self.bulk = bulk.filter_factory({})(self.app) self.testdir = mkdtemp(suffix='tmp_test_bulk') def tearDown(self): rmtree(self.testdir, ignore_errors=1) def test_extract_metadata(self): self.app.register('HEAD', '/v1/a/c?extract-archive=tar', HTTPNoContent, {}, None) self.app.register('PUT', '/v1/a/c/obj1?extract-archive=tar', HTTPCreated, {}, None) self.app.register('PUT', '/v1/a/c/obj2?extract-archive=tar', HTTPCreated, {}, None) # It's a real pain to instantiate TarInfo objects directly; they # really want to come from a file on disk or a tarball. So, we write # out some files and add pax headers to them as they get placed into # the tarball. with open(os.path.join(self.testdir, "obj1"), "w") as fh1: fh1.write("obj1 contents\n") with open(os.path.join(self.testdir, "obj2"), "w") as fh2: fh2.write("obj2 contents\n") tar_ball = BytesIO() tar_file = tarfile.TarFile.open(fileobj=tar_ball, mode="w", format=tarfile.PAX_FORMAT) # With GNU tar 1.27.1 or later (possibly 1.27 as well), a file with # extended attribute user.thingy = dingy gets put into the tarfile # with pax_headers containing key/value pair # (SCHILY.xattr.user.thingy, dingy), both unicode strings (py2: type # unicode, not type str). # # With BSD tar (libarchive), you get key/value pair # (LIBARCHIVE.xattr.user.thingy, dingy), which strikes me as # gratuitous incompatibility. # # Still, we'll support uploads with both. Just heap more code on the # problem until you can forget it's under there. 
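# Illustrative only: on such a system a file prepared with, say,
# `setfattr -n user.meta.lunch -v soup obj1` and archived with GNU
# `tar --xattrs -cf objs.tar obj1` would surface here with pax_headers
# like {u'SCHILY.xattr.user.meta.lunch': u'soup'} (or the
# LIBARCHIVE.xattr.* spelling under BSD tar).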
with open(os.path.join(self.testdir, "obj1")) as fh1: tar_info1 = tar_file.gettarinfo(fileobj=fh1, arcname="obj1") tar_info1.pax_headers[u'SCHILY.xattr.user.mime_type'] = \ u'application/food-diary' tar_info1.pax_headers[u'SCHILY.xattr.user.meta.lunch'] = \ u'sopa de albóndigas' tar_info1.pax_headers[ u'SCHILY.xattr.user.meta.afternoon-snack'] = \ u'gigantic bucket of coffee' tar_file.addfile(tar_info1, fh1) with open(os.path.join(self.testdir, "obj2")) as fh2: tar_info2 = tar_file.gettarinfo(fileobj=fh2, arcname="obj2") tar_info2.pax_headers[ u'LIBARCHIVE.xattr.user.meta.muppet'] = u'bert' tar_info2.pax_headers[ u'LIBARCHIVE.xattr.user.meta.cat'] = u'fluffy' tar_info2.pax_headers[ u'LIBARCHIVE.xattr.user.notmeta'] = u'skipped' tar_file.addfile(tar_info2, fh2) tar_ball.seek(0) req = Request.blank('/v1/a/c?extract-archive=tar') req.environ['REQUEST_METHOD'] = 'PUT' req.environ['wsgi.input'] = tar_ball req.headers['transfer-encoding'] = 'chunked' req.headers['accept'] = 'application/json;q=1.0' resp = req.get_response(self.bulk) self.assertEqual(resp.status_int, 200) # sanity check to make sure the upload worked upload_status = utils.json.loads(resp.body) self.assertEqual(upload_status['Number Files Created'], 2) put1_headers = HeaderKeyDict(self.app.calls_with_headers[1][2]) self.assertEqual( put1_headers.get('Content-Type'), 'application/food-diary') self.assertEqual( put1_headers.get('X-Object-Meta-Lunch'), 'sopa de alb\xc3\xb3ndigas') self.assertEqual( put1_headers.get('X-Object-Meta-Afternoon-Snack'), 'gigantic bucket of coffee') put2_headers = HeaderKeyDict(self.app.calls_with_headers[2][2]) self.assertEqual(put2_headers.get('X-Object-Meta-Muppet'), 'bert') self.assertEqual(put2_headers.get('X-Object-Meta-Cat'), 'fluffy') self.assertEqual(put2_headers.get('Content-Type'), None) self.assertEqual(put2_headers.get('X-Object-Meta-Blah'), None) class TestUntar(unittest.TestCase): def setUp(self): self.app = FakeApp() self.bulk = bulk.filter_factory({})(self.app) self.testdir = mkdtemp(suffix='tmp_test_bulk') def tearDown(self): self.app.calls = 0 rmtree(self.testdir, ignore_errors=1) def handle_extract_and_iter(self, req, compress_format, out_content_type='application/json'): resp_body = ''.join( self.bulk.handle_extract_iter(req, compress_format, out_content_type=out_content_type)) return resp_body def test_create_container_for_path(self): req = Request.blank('/') self.assertEqual( self.bulk.create_container(req, '/create_cont/acc/cont'), True) self.assertEqual(self.app.calls, 2) self.assertRaises( bulk.CreateContainerError, self.bulk.create_container, req, '/create_cont_fail/acc/cont') self.assertEqual(self.app.calls, 3) def test_extract_tar_works(self): # On systems where $TMPDIR is long (like OS X), we need to do this # or else every upload will fail due to the path being too long. self.app.max_pathlen += len(self.testdir) for compress_format in ['', 'gz', 'bz2']: base_name = 'base_works_%s' % compress_format dir_tree = [ {base_name: [{'sub_dir1': ['sub1_file1', 'sub1_file2']}, {'sub_dir2': ['sub2_file1', u'test obj \u2661']}, 'sub_file1', {'sub_dir3': [{'sub4_dir1': '../sub4 file1'}]}, {'sub_dir4': None}, ]}] build_dir_tree(self.testdir, dir_tree) mode = 'w' extension = '' if compress_format: mode += ':' + compress_format extension += '.' 
+ compress_format tar = tarfile.open(name=os.path.join(self.testdir, 'tar_works.tar' + extension), mode=mode) tar.add(os.path.join(self.testdir, base_name)) tar.close() req = Request.blank('/tar_works/acc/cont/') req.environ['wsgi.input'] = open( os.path.join(self.testdir, 'tar_works.tar' + extension)) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, compress_format) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Files Created'], 6) # test out xml req = Request.blank('/tar_works/acc/cont/') req.environ['wsgi.input'] = open( os.path.join(self.testdir, 'tar_works.tar' + extension)) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter( req, compress_format, 'application/xml') self.assertTrue( '201 Created' in resp_body) self.assertTrue( '6' in resp_body) # test out nonexistent format req = Request.blank('/tar_works/acc/cont/?extract-archive=tar', headers={'Accept': 'good_xml'}) req.environ['REQUEST_METHOD'] = 'PUT' req.environ['wsgi.input'] = open( os.path.join(self.testdir, 'tar_works.tar' + extension)) req.headers['transfer-encoding'] = 'chunked' def fake_start_response(*args, **kwargs): pass app_iter = self.bulk(req.environ, fake_start_response) resp_body = ''.join([i for i in app_iter]) self.assertTrue('Response Status: 406' in resp_body) def test_extract_call(self): base_name = 'base_works_gz' dir_tree = [ {base_name: [{'sub_dir1': ['sub1_file1', 'sub1_file2']}, {'sub_dir2': ['sub2_file1', 'sub2_file2']}, 'sub_file1', {'sub_dir3': [{'sub4_dir1': 'sub4_file1'}]}]}] build_dir_tree(self.testdir, dir_tree) tar = tarfile.open(name=os.path.join(self.testdir, 'tar_works.tar.gz'), mode='w:gz') tar.add(os.path.join(self.testdir, base_name)) tar.close() def fake_start_response(*args, **kwargs): pass req = Request.blank('/tar_works/acc/cont/?extract-archive=tar.gz') req.environ['wsgi.input'] = open( os.path.join(self.testdir, 'tar_works.tar.gz')) self.bulk(req.environ, fake_start_response) self.assertEqual(self.app.calls, 1) self.app.calls = 0 req.environ['wsgi.input'] = open( os.path.join(self.testdir, 'tar_works.tar.gz')) req.headers['transfer-encoding'] = 'Chunked' req.method = 'PUT' app_iter = self.bulk(req.environ, fake_start_response) list(app_iter) # iter over resp self.assertEqual(self.app.calls, 7) self.app.calls = 0 req = Request.blank('/tar_works/acc/cont/?extract-archive=bad') req.method = 'PUT' req.headers['transfer-encoding'] = 'Chunked' req.environ['wsgi.input'] = open( os.path.join(self.testdir, 'tar_works.tar.gz')) t = self.bulk(req.environ, fake_start_response) self.assertEqual(t[0], "Unsupported archive format") tar = tarfile.open(name=os.path.join(self.testdir, 'tar_works.tar'), mode='w') tar.add(os.path.join(self.testdir, base_name)) tar.close() self.app.calls = 0 req = Request.blank('/tar_works/acc/cont/?extract-archive=tar') req.method = 'PUT' req.headers['transfer-encoding'] = 'Chunked' req.environ['wsgi.input'] = open( os.path.join(self.testdir, 'tar_works.tar')) app_iter = self.bulk(req.environ, fake_start_response) list(app_iter) # iter over resp self.assertEqual(self.app.calls, 7) def test_bad_container(self): req = Request.blank('/invalid/', body='') resp_body = self.handle_extract_and_iter(req, '') self.assertTrue('404 Not Found' in resp_body) def test_content_length_required(self): req = Request.blank('/create_cont_fail/acc/cont') resp_body = self.handle_extract_and_iter(req, '') self.assertTrue('411 Length Required' in resp_body) def test_bad_tar(self): req = 
Request.blank('/create_cont_fail/acc/cont', body='') def bad_open(*args, **kwargs): raise zlib.error('bad tar') with patch.object(tarfile, 'open', bad_open): resp_body = self.handle_extract_and_iter(req, '') self.assertTrue('400 Bad Request' in resp_body) def build_tar(self, dir_tree=None): if not dir_tree: dir_tree = [ {'base_fails1': [{'sub_dir1': ['sub1_file1']}, {'sub_dir2': ['sub2_file1', 'sub2_file2']}, 'f' * 101, {'sub_dir3': [{'sub4_dir1': 'sub4_file1'}]}]}] tar = tarfile.open(name=os.path.join(self.testdir, 'tar_fails.tar'), mode='w') build_tar_tree(tar, self.testdir, dir_tree, base_path=self.testdir + '/') tar.close() return tar def test_extract_tar_with_basefile(self): dir_tree = [ 'base_lvl_file', 'another_base_file', {'base_fails1': [{'sub_dir1': ['sub1_file1']}, {'sub_dir2': ['sub2_file1', 'sub2_file2']}, {'sub_dir3': [{'sub4_dir1': 'sub4_file1'}]}]}] self.build_tar(dir_tree) req = Request.blank('/tar_works/acc/') req.environ['wsgi.input'] = open(os.path.join(self.testdir, 'tar_fails.tar')) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, '') resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Files Created'], 4) def test_extract_tar_fail_cont_401(self): self.build_tar() req = Request.blank('/unauth/acc/', headers={'Accept': 'application/json'}) req.environ['wsgi.input'] = open(os.path.join(self.testdir, 'tar_fails.tar')) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, '') self.assertEqual(self.app.calls, 1) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Response Status'], '401 Unauthorized') self.assertEqual(resp_data['Errors'], []) def test_extract_tar_fail_obj_401(self): self.build_tar() req = Request.blank('/create_obj_unauth/acc/cont/', headers={'Accept': 'application/json'}) req.environ['wsgi.input'] = open(os.path.join(self.testdir, 'tar_fails.tar')) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, '') self.assertEqual(self.app.calls, 2) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Response Status'], '401 Unauthorized') self.assertEqual( resp_data['Errors'], [['cont/base_fails1/sub_dir1/sub1_file1', '401 Unauthorized']]) def test_extract_tar_fail_obj_name_len(self): self.build_tar() req = Request.blank('/tar_works/acc/cont/', headers={'Accept': 'application/json'}) req.environ['wsgi.input'] = open(os.path.join(self.testdir, 'tar_fails.tar')) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, '') self.assertEqual(self.app.calls, 6) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Files Created'], 4) self.assertEqual( resp_data['Errors'], [['cont/base_fails1/' + ('f' * 101), '400 Bad Request']]) def test_extract_tar_fail_compress_type(self): self.build_tar() req = Request.blank('/tar_works/acc/cont/', headers={'Accept': 'application/json'}) req.environ['wsgi.input'] = open(os.path.join(self.testdir, 'tar_fails.tar')) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, 'gz') self.assertEqual(self.app.calls, 0) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual( resp_data['Response Body'].lower(), 'invalid tar file: not a gzip file') def test_extract_tar_fail_max_failed_extractions(self): self.build_tar() with patch.object(self.bulk, 'max_failed_extractions', 1): self.app.calls = 0 req = 
Request.blank('/tar_works/acc/cont/', headers={'Accept': 'application/json'}) req.environ['wsgi.input'] = open(os.path.join(self.testdir, 'tar_fails.tar')) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, '') self.assertEqual(self.app.calls, 5) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Files Created'], 3) self.assertEqual( resp_data['Errors'], [['cont/base_fails1/' + ('f' * 101), '400 Bad Request']]) @patch.object(constraints, 'MAX_FILE_SIZE', 4) def test_extract_tar_fail_max_file_size(self): tar = self.build_tar() dir_tree = [{'test': [{'sub_dir1': ['sub1_file1']}]}] build_dir_tree(self.testdir, dir_tree) tar = tarfile.open(name=os.path.join(self.testdir, 'tar_works.tar'), mode='w') tar.add(os.path.join(self.testdir, 'test')) tar.close() self.app.calls = 0 req = Request.blank('/tar_works/acc/cont/', headers={'Accept': 'application/json'}) req.environ['wsgi.input'] = open( os.path.join(self.testdir, 'tar_works.tar')) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, '') resp_data = utils.json.loads(resp_body) self.assertEqual( resp_data['Errors'], [['cont' + self.testdir + '/test/sub_dir1/sub1_file1', '413 Request Entity Too Large']]) def test_extract_tar_fail_max_cont(self): dir_tree = [{'sub_dir1': ['sub1_file1']}, {'sub_dir2': ['sub2_file1', 'sub2_file2']}, 'f' * 101, {'sub_dir3': [{'sub4_dir1': 'sub4_file1'}]}] self.build_tar(dir_tree) with patch.object(self.bulk, 'max_containers', 1): self.app.calls = 0 body = open(os.path.join(self.testdir, 'tar_fails.tar')).read() req = Request.blank('/tar_works_cont_head_fail/acc/', body=body, headers={'Accept': 'application/json'}) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, '') self.assertEqual(self.app.calls, 5) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual( resp_data['Response Body'], 'More than 1 containers to create from tar.') def test_extract_tar_fail_create_cont(self): dir_tree = [{'base_fails1': [ {'sub_dir1': ['sub1_file1']}, {'sub_dir2': ['sub2_file1', 'sub2_file2']}, {'./sub_dir3': [{'sub4_dir1': 'sub4_file1'}]}]}] self.build_tar(dir_tree) req = Request.blank('/create_cont_fail/acc/cont/', headers={'Accept': 'application/json'}) req.environ['wsgi.input'] = open(os.path.join(self.testdir, 'tar_fails.tar')) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, '') resp_data = utils.json.loads(resp_body) self.assertEqual(self.app.calls, 5) self.assertEqual(len(resp_data['Errors']), 5) def test_extract_tar_fail_create_cont_value_err(self): self.build_tar() req = Request.blank('/create_cont_fail/acc/cont/', headers={'Accept': 'application/json'}) req.environ['wsgi.input'] = open(os.path.join(self.testdir, 'tar_fails.tar')) req.headers['transfer-encoding'] = 'chunked' def bad_create(req, path): raise ValueError('Test') with patch.object(self.bulk, 'create_container', bad_create): resp_body = self.handle_extract_and_iter(req, '') resp_data = utils.json.loads(resp_body) self.assertEqual(self.app.calls, 0) self.assertEqual(len(resp_data['Errors']), 5) self.assertEqual( resp_data['Errors'][0], ['cont/base_fails1/sub_dir1/sub1_file1', '400 Bad Request']) def test_extract_tar_fail_unicode(self): dir_tree = [{'sub_dir1': ['sub1_file1']}, {'sub_dir2': ['sub2\xdefile1', 'sub2_file2']}, {'sub_\xdedir3': [{'sub4_dir1': 'sub4_file1'}]}] self.build_tar(dir_tree) req = 
Request.blank('/tar_works/acc/', headers={'Accept': 'application/json'}) req.environ['wsgi.input'] = open(os.path.join(self.testdir, 'tar_fails.tar')) req.headers['transfer-encoding'] = 'chunked' resp_body = self.handle_extract_and_iter(req, '') resp_data = utils.json.loads(resp_body) self.assertEqual(self.app.calls, 4) self.assertEqual(resp_data['Number Files Created'], 2) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual( resp_data['Errors'], [['sub_dir2/sub2%DEfile1', '412 Precondition Failed'], ['sub_%DEdir3/sub4_dir1/sub4_file1', '412 Precondition Failed']]) def test_get_response_body(self): txt_body = bulk.get_response_body( 'bad_formay', {'hey': 'there'}, [['json > xml', '202 Accepted']]) self.assertTrue('hey: there' in txt_body) xml_body = bulk.get_response_body( 'text/xml', {'hey': 'there'}, [['json > xml', '202 Accepted']]) self.assertTrue('>' in xml_body) class TestDelete(unittest.TestCase): def setUp(self): self.app = FakeApp() self.bulk = bulk.filter_factory({})(self.app) def tearDown(self): self.app.calls = 0 self.app.delete_paths = [] def handle_delete_and_iter(self, req, out_content_type='application/json'): resp_body = ''.join(self.bulk.handle_delete_iter( req, out_content_type=out_content_type)) return resp_body def test_bulk_delete_uses_predefined_object_errors(self): req = Request.blank('/delete_works/AUTH_Acc') objs_to_delete = [ {'name': '/c/file_a'}, {'name': '/c/file_b', 'error': {'code': HTTP_NOT_FOUND, 'message': 'not found'}}, {'name': '/c/file_c', 'error': {'code': HTTP_UNAUTHORIZED, 'message': 'unauthorized'}}, {'name': '/c/file_d'}] resp_body = ''.join(self.bulk.handle_delete_iter( req, objs_to_delete=objs_to_delete, out_content_type='application/json')) self.assertEqual( self.app.delete_paths, ['/delete_works/AUTH_Acc/c/file_a', '/delete_works/AUTH_Acc/c/file_d']) self.assertEqual(self.app.calls, 2) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual(resp_data['Number Deleted'], 2) self.assertEqual(resp_data['Number Not Found'], 1) self.assertEqual(resp_data['Errors'], [['/c/file_c', 'unauthorized']]) def test_bulk_delete_works_with_POST_verb(self): req = Request.blank('/delete_works/AUTH_Acc', body='/c/f\n/c/f404', headers={'Accept': 'application/json'}) req.method = 'POST' resp_body = self.handle_delete_and_iter(req) self.assertEqual( self.app.delete_paths, ['/delete_works/AUTH_Acc/c/f', '/delete_works/AUTH_Acc/c/f404']) self.assertEqual(self.app.calls, 2) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Deleted'], 1) self.assertEqual(resp_data['Number Not Found'], 1) def test_bulk_delete_works_with_DELETE_verb(self): req = Request.blank('/delete_works/AUTH_Acc', body='/c/f\n/c/f404', headers={'Accept': 'application/json'}) req.method = 'DELETE' resp_body = self.handle_delete_and_iter(req) self.assertEqual( self.app.delete_paths, ['/delete_works/AUTH_Acc/c/f', '/delete_works/AUTH_Acc/c/f404']) self.assertEqual(self.app.calls, 2) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Deleted'], 1) self.assertEqual(resp_data['Number Not Found'], 1) def test_bulk_delete_bad_content_type(self): req = Request.blank('/delete_works/AUTH_Acc', headers={'Accept': 'badformat'}) req = Request.blank('/delete_works/AUTH_Acc', headers={'Accept': 'application/json', 'Content-Type': 'text/xml'}) req.method = 'POST' req.environ['wsgi.input'] = BytesIO(b'/c/f\n/c/f404') resp_body = self.handle_delete_and_iter(req) resp_data = 
utils.json.loads(resp_body) self.assertEqual(resp_data['Response Status'], '406 Not Acceptable') def test_bulk_delete_call_and_content_type(self): def fake_start_response(*args, **kwargs): self.assertEqual(args[1][0], ('Content-Type', 'application/json')) req = Request.blank('/delete_works/AUTH_Acc?bulk-delete') req.method = 'POST' req.headers['Transfer-Encoding'] = 'chunked' req.headers['Accept'] = 'application/json' req.environ['wsgi.input'] = BytesIO(b'/c/f%20') list(self.bulk(req.environ, fake_start_response)) # iterate over resp self.assertEqual( self.app.delete_paths, ['/delete_works/AUTH_Acc/c/f ']) self.assertEqual(self.app.calls, 1) def test_bulk_delete_get_objs(self): req = Request.blank('/delete_works/AUTH_Acc', body='1%20\r\n2\r\n') req.method = 'POST' with patch.object(self.bulk, 'max_deletes_per_request', 2): results = self.bulk.get_objs_to_delete(req) self.assertEqual(results, [{'name': '1 '}, {'name': '2'}]) with patch.object(self.bulk, 'max_path_length', 2): results = [] req.environ['wsgi.input'] = BytesIO(b'1\n2\n3') results = self.bulk.get_objs_to_delete(req) self.assertEqual(results, [{'name': '1'}, {'name': '2'}, {'name': '3'}]) with patch.object(self.bulk, 'max_deletes_per_request', 9): with patch.object(self.bulk, 'max_path_length', 1): req_body = '\n'.join([str(i) for i in range(10)]) req = Request.blank('/delete_works/AUTH_Acc', body=req_body) self.assertRaises( HTTPException, self.bulk.get_objs_to_delete, req) def test_bulk_delete_works_extra_newlines_extra_quoting(self): req = Request.blank('/delete_works/AUTH_Acc', body='/c/f\n\n\n/c/f404\n\n\n/c/%2525', headers={'Accept': 'application/json'}) req.method = 'POST' resp_body = self.handle_delete_and_iter(req) self.assertEqual( self.app.delete_paths, ['/delete_works/AUTH_Acc/c/f', '/delete_works/AUTH_Acc/c/f404', '/delete_works/AUTH_Acc/c/%25']) self.assertEqual(self.app.calls, 3) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Deleted'], 2) self.assertEqual(resp_data['Number Not Found'], 1) def test_bulk_delete_too_many_newlines(self): req = Request.blank('/delete_works/AUTH_Acc') req.method = 'POST' data = b'\n\n' * self.bulk.max_deletes_per_request req.environ['wsgi.input'] = BytesIO(data) req.content_length = len(data) resp_body = self.handle_delete_and_iter(req) self.assertTrue('413 Request Entity Too Large' in resp_body) def test_bulk_delete_works_unicode(self): body = (u'/c/ obj \u2661\r\n'.encode('utf8') + 'c/ objbadutf8\r\n' + '/c/f\xdebadutf8\n') req = Request.blank('/delete_works/AUTH_Acc', body=body, headers={'Accept': 'application/json'}) req.method = 'POST' resp_body = self.handle_delete_and_iter(req) self.assertEqual( self.app.delete_paths, ['/delete_works/AUTH_Acc/c/ obj \xe2\x99\xa1', '/delete_works/AUTH_Acc/c/ objbadutf8']) self.assertEqual(self.app.calls, 2) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Deleted'], 1) self.assertEqual(len(resp_data['Errors']), 2) self.assertEqual(resp_data['Errors'], [[urllib.parse.quote('c/ objbadutf8'), '412 Precondition Failed'], [urllib.parse.quote('/c/f\xdebadutf8'), '412 Precondition Failed']]) def test_bulk_delete_no_body(self): req = Request.blank('/unauth/AUTH_acc/') resp_body = self.handle_delete_and_iter(req) self.assertTrue('411 Length Required' in resp_body) def test_bulk_delete_no_files_in_body(self): req = Request.blank('/unauth/AUTH_acc/', body=' ') resp_body = self.handle_delete_and_iter(req) self.assertTrue('400 Bad Request' in resp_body) def test_bulk_delete_unauth(self): req = 
Request.blank('/unauth/AUTH_acc/', body='/c/f\n/c/f_ok\n', headers={'Accept': 'application/json'}) req.method = 'POST' resp_body = self.handle_delete_and_iter(req) self.assertEqual(self.app.calls, 2) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Errors'], [['/c/f', '401 Unauthorized']]) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual(resp_data['Number Deleted'], 1) def test_bulk_delete_500_resp(self): req = Request.blank('/broke/AUTH_acc/', body='/c/f\nc/f2\n', headers={'Accept': 'application/json'}) req.method = 'POST' resp_body = self.handle_delete_and_iter(req) resp_data = utils.json.loads(resp_body) self.assertEqual( resp_data['Errors'], [['/c/f', '500 Internal Error'], ['c/f2', '500 Internal Error']]) self.assertEqual(resp_data['Response Status'], '502 Bad Gateway') def test_bulk_delete_bad_path(self): req = Request.blank('/delete_cont_fail/') resp_body = self.handle_delete_and_iter(req) self.assertTrue('404 Not Found' in resp_body) def test_bulk_delete_container_delete(self): req = Request.blank('/delete_cont_fail/AUTH_Acc', body='c\n', headers={'Accept': 'application/json'}) req.method = 'POST' with patch('swift.common.middleware.bulk.sleep', new=mock.MagicMock(wraps=sleep, return_value=None)) as mock_sleep: resp_body = self.handle_delete_and_iter(req) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Deleted'], 0) self.assertEqual(resp_data['Errors'], [['c', '409 Conflict']]) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual([], mock_sleep.call_args_list) def test_bulk_delete_container_delete_retry_and_fails(self): self.bulk.retry_count = 3 req = Request.blank('/delete_cont_fail/AUTH_Acc', body='c\n', headers={'Accept': 'application/json'}) req.method = 'POST' with patch('swift.common.middleware.bulk.sleep', new=mock.MagicMock(wraps=sleep, return_value=None)) as mock_sleep: resp_body = self.handle_delete_and_iter(req) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Deleted'], 0) self.assertEqual(resp_data['Errors'], [['c', '409 Conflict']]) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual([call(self.bulk.retry_interval), call(self.bulk.retry_interval ** 2), call(self.bulk.retry_interval ** 3)], mock_sleep.call_args_list) def test_bulk_delete_container_delete_retry_and_success(self): self.bulk.retry_count = 3 self.app.del_container_total = 2 req = Request.blank('/delete_cont_success_after_attempts/AUTH_Acc', body='c\n', headers={'Accept': 'application/json'}) req.method = 'DELETE' with patch('swift.common.middleware.bulk.sleep', new=mock.MagicMock(wraps=sleep, return_value=None)) as mock_sleep: resp_body = self.handle_delete_and_iter(req) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Deleted'], 1) self.assertEqual(resp_data['Errors'], []) self.assertEqual(resp_data['Response Status'], '200 OK') self.assertEqual([call(self.bulk.retry_interval), call(self.bulk.retry_interval ** 2)], mock_sleep.call_args_list) def test_bulk_delete_bad_file_too_long(self): req = Request.blank('/delete_works/AUTH_Acc', headers={'Accept': 'application/json'}) req.method = 'POST' bad_file = 'c/' + ('1' * self.bulk.max_path_length) data = b'/c/f\n' + bad_file.encode('ascii') + b'\n/c/f' req.environ['wsgi.input'] = BytesIO(data) req.headers['Transfer-Encoding'] = 'chunked' resp_body = self.handle_delete_and_iter(req) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Number Deleted'], 2) 
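# Only the two '/c/f' entries should reach the backend here; assuming the
# usual bulk-delete flow, the over-long name is rejected while the delete
# list is parsed and shows up below in Errors with a 400 instead of being
# attempted.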
self.assertEqual(resp_data['Errors'], [[bad_file, '400 Bad Request']]) self.assertEqual(resp_data['Response Status'], '400 Bad Request') def test_bulk_delete_bad_file_over_twice_max_length(self): body = '/c/f\nc/' + ('123456' * self.bulk.max_path_length) + '\n' req = Request.blank('/delete_works/AUTH_Acc', body=body) req.method = 'POST' resp_body = self.handle_delete_and_iter(req) self.assertTrue('400 Bad Request' in resp_body) def test_bulk_delete_max_failures(self): req = Request.blank('/unauth/AUTH_Acc', body='/c/f1\n/c/f2\n/c/f3', headers={'Accept': 'application/json'}) req.method = 'POST' with patch.object(self.bulk, 'max_failed_deletes', 2): resp_body = self.handle_delete_and_iter(req) self.assertEqual(self.app.calls, 2) resp_data = utils.json.loads(resp_body) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual(resp_data['Response Body'], 'Max delete failures exceeded') self.assertEqual(resp_data['Errors'], [['/c/f1', '401 Unauthorized'], ['/c/f2', '401 Unauthorized']]) class TestSwiftInfo(unittest.TestCase): def setUp(self): utils._swift_info = {} utils._swift_admin_info = {} def test_registered_defaults(self): bulk.filter_factory({}) swift_info = utils.get_swift_info() self.assertTrue('bulk_upload' in swift_info) self.assertTrue(isinstance( swift_info['bulk_upload'].get('max_containers_per_extraction'), numbers.Integral)) self.assertTrue(isinstance( swift_info['bulk_upload'].get('max_failed_extractions'), numbers.Integral)) self.assertTrue('bulk_delete' in swift_info) self.assertTrue(isinstance( swift_info['bulk_delete'].get('max_deletes_per_request'), numbers.Integral)) self.assertTrue(isinstance( swift_info['bulk_delete'].get('max_failed_deletes'), numbers.Integral)) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_cname_lookup.py0000664000567000056710000002167213024044352025453 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
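# These tests poke the middleware class directly, but for reference it is
# normally wired into the proxy pipeline roughly like this (the section
# name and values are illustrative, not copied from this repo's sample
# configs):
#
#   [filter:cname_lookup]
#   use = egg:swift#cname_lookup
#   storage_domain = example.com
#   lookup_depth = 2
#
# Hosts that do not already end in storage_domain are resolved through
# CNAME records, following at most lookup_depth hops, before the request
# is handed to the rest of the pipeline.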
import unittest import mock from nose import SkipTest try: # this test requires the dnspython package to be installed import dns.resolver # noqa except ImportError: skip = True else: # executed if the try has no errors skip = False from swift.common.middleware import cname_lookup from swift.common.swob import Request class FakeApp(object): def __call__(self, env, start_response): return "FAKE APP" def start_response(*args): pass original_lookup = cname_lookup.lookup_cname class TestCNAMELookup(unittest.TestCase): def setUp(self): if skip: raise SkipTest self.app = cname_lookup.CNAMELookupMiddleware(FakeApp(), {'lookup_depth': 2}) def test_pass_ip_addresses(self): cname_lookup.lookup_cname = original_lookup req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': '10.134.23.198'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'fc00:7ea1:f155::6321:8841'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') def test_passthrough(self): def my_lookup(d): return 0, d cname_lookup.lookup_cname = my_lookup req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'foo.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'foo.example.com:8080'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'SERVER_NAME': 'foo.example.com'}, headers={'Host': None}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') def test_good_lookup(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'mysite.com'}) def my_lookup(d): return 0, '%s.example.com' % d cname_lookup.lookup_cname = my_lookup resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'mysite.com:8080'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'SERVER_NAME': 'mysite.com'}, headers={'Host': None}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') def test_lookup_chain_too_long(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'mysite.com'}) def my_lookup(d): if d == 'mysite.com': site = 'level1.foo.com' elif d == 'level1.foo.com': site = 'level2.foo.com' elif d == 'level2.foo.com': site = 'bar.example.com' return 0, site cname_lookup.lookup_cname = my_lookup resp = self.app(req.environ, start_response) self.assertEqual(resp, ['CNAME lookup failed after 2 tries']) def test_lookup_chain_bad_target(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'mysite.com'}) def my_lookup(d): return 0, 'some.invalid.site.com' cname_lookup.lookup_cname = my_lookup resp = self.app(req.environ, start_response) self.assertEqual(resp, ['CNAME lookup failed to resolve to a valid domain']) def test_something_weird(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'mysite.com'}) def my_lookup(d): return 0, None cname_lookup.lookup_cname = my_lookup resp = self.app(req.environ, start_response) self.assertEqual(resp, ['CNAME lookup failed to resolve to a valid domain']) def test_with_memcache(self): def 
my_lookup(d): return 0, '%s.example.com' % d cname_lookup.lookup_cname = my_lookup class memcache_stub(object): def __init__(self): self.cache = {} def get(self, key): return self.cache.get(key, None) def set(self, key, value, *a, **kw): self.cache[key] = value memcache = memcache_stub() req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'swift.cache': memcache}, headers={'Host': 'mysite.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'swift.cache': memcache}, headers={'Host': 'mysite.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') def test_cname_matching_ending_not_domain(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'foo.com'}) def my_lookup(d): return 0, 'c.aexample.com' cname_lookup.lookup_cname = my_lookup resp = self.app(req.environ, start_response) self.assertEqual(resp, ['CNAME lookup failed to resolve to a valid domain']) def test_cname_configured_with_empty_storage_domain(self): app = cname_lookup.CNAMELookupMiddleware(FakeApp(), {'storage_domain': '', 'lookup_depth': 2}) req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.a.example.com'}) def my_lookup(d): return 0, None cname_lookup.lookup_cname = my_lookup resp = app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') def test_storage_domains_conf_format(self): conf = {'storage_domain': 'foo.com'} app = cname_lookup.filter_factory(conf)(FakeApp()) self.assertEqual(app.storage_domain, ['.foo.com']) conf = {'storage_domain': 'foo.com, '} app = cname_lookup.filter_factory(conf)(FakeApp()) self.assertEqual(app.storage_domain, ['.foo.com']) conf = {'storage_domain': 'foo.com, bar.com'} app = cname_lookup.filter_factory(conf)(FakeApp()) self.assertEqual(app.storage_domain, ['.foo.com', '.bar.com']) conf = {'storage_domain': 'foo.com, .bar.com'} app = cname_lookup.filter_factory(conf)(FakeApp()) self.assertEqual(app.storage_domain, ['.foo.com', '.bar.com']) conf = {'storage_domain': '.foo.com, .bar.com'} app = cname_lookup.filter_factory(conf)(FakeApp()) self.assertEqual(app.storage_domain, ['.foo.com', '.bar.com']) def test_multiple_storage_domains(self): conf = {'storage_domain': 'storage1.com, storage2.com', 'lookup_depth': 2} app = cname_lookup.CNAMELookupMiddleware(FakeApp(), conf) def do_test(lookup_back): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.a.example.com'}) module = 'swift.common.middleware.cname_lookup.lookup_cname' with mock.patch(module, lambda x: (0, lookup_back)): return app(req.environ, start_response) resp = do_test('c.storage1.com') self.assertEqual(resp, 'FAKE APP') resp = do_test('c.storage2.com') self.assertEqual(resp, 'FAKE APP') bad_domain = ['CNAME lookup failed to resolve to a valid domain'] resp = do_test('c.badtest.com') self.assertEqual(resp, bad_domain) swift-2.7.1/test/unit/common/middleware/__init__.py0000664000567000056710000000000013024044352023455 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/common/middleware/test_dlo.py0000664000567000056710000012415113024044354023553 0ustar jenkinsjenkins00000000000000# coding: utf-8 # Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import hashlib import json import mock import shutil import tempfile from textwrap import dedent import time import unittest from swift.common import exceptions, swob from swift.common.header_key_dict import HeaderKeyDict from swift.common.middleware import dlo from swift.common.utils import closing_if_possible from test.unit.common.middleware.helpers import FakeSwift LIMIT = 'swift.common.constraints.CONTAINER_LISTING_LIMIT' def md5hex(s): return hashlib.md5(s).hexdigest() class DloTestCase(unittest.TestCase): def call_dlo(self, req, app=None, expect_exception=False): if app is None: app = self.dlo req.headers.setdefault("User-Agent", "Soap Opera") status = [None] headers = [None] def start_response(s, h, ei=None): status[0] = s headers[0] = h body_iter = app(req.environ, start_response) body = '' caught_exc = None try: # appease the close-checker with closing_if_possible(body_iter): for chunk in body_iter: body += chunk except Exception as exc: if expect_exception: caught_exc = exc else: raise if expect_exception: return status[0], headers[0], body, caught_exc else: return status[0], headers[0], body def setUp(self): self.app = FakeSwift() self.dlo = dlo.filter_factory({ # don't slow down tests with rate limiting 'rate_limit_after_segment': '1000000', })(self.app) self.dlo.logger = self.app.logger self.app.register( 'GET', '/v1/AUTH_test/c/seg_01', swob.HTTPOk, {'Content-Length': '5', 'Etag': md5hex("aaaaa")}, 'aaaaa') self.app.register( 'GET', '/v1/AUTH_test/c/seg_02', swob.HTTPOk, {'Content-Length': '5', 'Etag': md5hex("bbbbb")}, 'bbbbb') self.app.register( 'GET', '/v1/AUTH_test/c/seg_03', swob.HTTPOk, {'Content-Length': '5', 'Etag': md5hex("ccccc")}, 'ccccc') self.app.register( 'GET', '/v1/AUTH_test/c/seg_04', swob.HTTPOk, {'Content-Length': '5', 'Etag': md5hex("ddddd")}, 'ddddd') self.app.register( 'GET', '/v1/AUTH_test/c/seg_05', swob.HTTPOk, {'Content-Length': '5', 'Etag': md5hex("eeeee")}, 'eeeee') # an unrelated object (not seg*) to test the prefix matching self.app.register( 'GET', '/v1/AUTH_test/c/catpicture.jpg', swob.HTTPOk, {'Content-Length': '9', 'Etag': md5hex("meow meow meow meow")}, 'meow meow meow meow') self.app.register( 'GET', '/v1/AUTH_test/mancon/manifest', swob.HTTPOk, {'Content-Length': '17', 'Etag': 'manifest-etag', 'X-Object-Manifest': 'c/seg'}, 'manifest-contents') lm = '2013-11-22T02:42:13.781760' ct = 'application/octet-stream' segs = [{"hash": md5hex("aaaaa"), "bytes": 5, "name": "seg_01", "last_modified": lm, "content_type": ct}, {"hash": md5hex("bbbbb"), "bytes": 5, "name": "seg_02", "last_modified": lm, "content_type": ct}, {"hash": md5hex("ccccc"), "bytes": 5, "name": "seg_03", "last_modified": lm, "content_type": ct}, {"hash": md5hex("ddddd"), "bytes": 5, "name": "seg_04", "last_modified": lm, "content_type": ct}, {"hash": md5hex("eeeee"), "bytes": 5, "name": "seg_05", "last_modified": lm, "content_type": ct}] full_container_listing = segs + [{"hash": "cats-etag", "bytes": 9, "name": "catpicture.jpg", "last_modified": lm, "content_type": "application/png"}] self.app.register( 'GET', '/v1/AUTH_test/c?format=json', swob.HTTPOk, {'Content-Type': 
'application/json; charset=utf-8'}, json.dumps(full_container_listing)) self.app.register( 'GET', '/v1/AUTH_test/c?format=json&prefix=seg', swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'}, json.dumps(segs)) # This is to let us test multi-page container listings; we use the # trailing underscore to send small (pagesize=3) listings. # # If you're testing against this, be sure to mock out # CONTAINER_LISTING_LIMIT to 3 in your test. self.app.register( 'GET', '/v1/AUTH_test/mancon/manifest-many-segments', swob.HTTPOk, {'Content-Length': '7', 'Etag': 'etag-manyseg', 'X-Object-Manifest': 'c/seg_'}, 'manyseg') self.app.register( 'GET', '/v1/AUTH_test/c?format=json&prefix=seg_', swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'}, json.dumps(segs[:3])) self.app.register( 'GET', '/v1/AUTH_test/c?format=json&prefix=seg_&marker=seg_03', swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'}, json.dumps(segs[3:])) # Here's a manifest with 0 segments self.app.register( 'GET', '/v1/AUTH_test/mancon/manifest-no-segments', swob.HTTPOk, {'Content-Length': '7', 'Etag': 'noseg', 'X-Object-Manifest': 'c/noseg_'}, 'noseg') self.app.register( 'GET', '/v1/AUTH_test/c?format=json&prefix=noseg_', swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'}, json.dumps([])) class TestDloPutManifest(DloTestCase): def setUp(self): super(TestDloPutManifest, self).setUp() self.app.register( 'PUT', '/v1/AUTH_test/c/m', swob.HTTPCreated, {}, None) def test_validating_x_object_manifest(self): exp_okay = ["c/o", "c/obj/with/slashes", "c/obj/with/trailing/slash/", "c/obj/with//multiple///slashes////adjacent"] exp_bad = ["", "/leading/slash", "double//slash", "container-only", "whole-container/", "c/o?short=querystring", "c/o?has=a&long-query=string"] got_okay = [] got_bad = [] for val in (exp_okay + exp_bad): req = swob.Request.blank("/v1/AUTH_test/c/m", environ={'REQUEST_METHOD': 'PUT'}, headers={"X-Object-Manifest": val}) status, _, _ = self.call_dlo(req) if status.startswith("201"): got_okay.append(val) else: got_bad.append(val) self.assertEqual(exp_okay, got_okay) self.assertEqual(exp_bad, got_bad) def test_validation_watches_manifests_with_slashes(self): self.app.register( 'PUT', '/v1/AUTH_test/con/w/x/y/z', swob.HTTPCreated, {}, None) req = swob.Request.blank( "/v1/AUTH_test/con/w/x/y/z", environ={'REQUEST_METHOD': 'PUT'}, headers={"X-Object-Manifest": 'good/value'}) status, _, _ = self.call_dlo(req) self.assertEqual(status, "201 Created") req = swob.Request.blank( "/v1/AUTH_test/con/w/x/y/z", environ={'REQUEST_METHOD': 'PUT'}, headers={"X-Object-Manifest": '/badvalue'}) status, _, _ = self.call_dlo(req) self.assertEqual(status, "400 Bad Request") def test_validation_ignores_containers(self): self.app.register( 'PUT', '/v1/a/c', swob.HTTPAccepted, {}, None) req = swob.Request.blank( "/v1/a/c", environ={'REQUEST_METHOD': 'PUT'}, headers={"X-Object-Manifest": "/superbogus/?wrong=in&every=way"}) status, _, _ = self.call_dlo(req) self.assertEqual(status, "202 Accepted") def test_validation_ignores_accounts(self): self.app.register( 'PUT', '/v1/a', swob.HTTPAccepted, {}, None) req = swob.Request.blank( "/v1/a", environ={'REQUEST_METHOD': 'PUT'}, headers={"X-Object-Manifest": "/superbogus/?wrong=in&every=way"}) status, _, _ = self.call_dlo(req) self.assertEqual(status, "202 Accepted") class TestDloHeadManifest(DloTestCase): def test_head_large_object(self): expected_etag = '"%s"' % md5hex( md5hex("aaaaa") + md5hex("bbbbb") + md5hex("ccccc") + md5hex("ddddd") + md5hex("eeeee")) req 
= swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'HEAD'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(headers["Etag"], expected_etag) self.assertEqual(headers["Content-Length"], "25") def test_head_large_object_too_many_segments(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments', environ={'REQUEST_METHOD': 'HEAD'}) with mock.patch(LIMIT, 3): status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) # etag is manifest's etag self.assertEqual(headers["Etag"], "etag-manyseg") self.assertEqual(headers.get("Content-Length"), None) def test_head_large_object_no_segments(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-no-segments', environ={'REQUEST_METHOD': 'HEAD'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(headers["Etag"], '"%s"' % md5hex("")) self.assertEqual(headers["Content-Length"], "0") # one request to HEAD the manifest # one request for the first page of listings # *zero* requests for the second page of listings self.assertEqual( self.app.calls, [('HEAD', '/v1/AUTH_test/mancon/manifest-no-segments'), ('GET', '/v1/AUTH_test/c?format=json&prefix=noseg_')]) class TestDloGetManifest(DloTestCase): def tearDown(self): self.assertEqual(self.app.unclosed_requests, {}) def test_get_manifest(self): expected_etag = '"%s"' % md5hex( md5hex("aaaaa") + md5hex("bbbbb") + md5hex("ccccc") + md5hex("ddddd") + md5hex("eeeee")) req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(headers["Etag"], expected_etag) self.assertEqual(headers["Content-Length"], "25") self.assertEqual(body, 'aaaaabbbbbcccccdddddeeeee') for _, _, hdrs in self.app.calls_with_headers[1:]: ua = hdrs.get("User-Agent", "") self.assertTrue("DLO MultipartGET" in ua) self.assertFalse("DLO MultipartGET DLO MultipartGET" in ua) # the first request goes through unaltered self.assertFalse( "DLO MultipartGET" in self.app.calls_with_headers[0][2]) # we set swift.source for everything but the first request self.assertEqual(self.app.swift_sources, [None, 'DLO', 'DLO', 'DLO', 'DLO', 'DLO', 'DLO']) def test_get_non_manifest_passthrough(self): req = swob.Request.blank('/v1/AUTH_test/c/catpicture.jpg', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_dlo(req) self.assertEqual(body, "meow meow meow meow") def test_get_non_object_passthrough(self): self.app.register('GET', '/info', swob.HTTPOk, {}, 'useful stuff here') req = swob.Request.blank('/info', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_dlo(req) self.assertEqual(status, '200 OK') self.assertEqual(body, 'useful stuff here') self.assertEqual(self.app.call_count, 1) def test_get_manifest_passthrough(self): # reregister it with the query param self.app.register( 'GET', '/v1/AUTH_test/mancon/manifest?multipart-manifest=get', swob.HTTPOk, {'Content-Length': '17', 'Etag': 'manifest-etag', 'X-Object-Manifest': 'c/seg'}, 'manifest-contents') req = swob.Request.blank( '/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET', 'QUERY_STRING': 'multipart-manifest=get'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(headers["Etag"], "manifest-etag") self.assertEqual(body, "manifest-contents") def test_error_passthrough(self): self.app.register( 'GET', '/v1/AUTH_test/gone/404ed', 
swob.HTTPNotFound, {}, None) req = swob.Request.blank('/v1/AUTH_test/gone/404ed', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_dlo(req) self.assertEqual(status, '404 Not Found') def test_get_range(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=8-17'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "206 Partial Content") self.assertEqual(headers["Content-Length"], "10") self.assertEqual(body, "bbcccccddd") expected_etag = '"%s"' % md5hex( md5hex("aaaaa") + md5hex("bbbbb") + md5hex("ccccc") + md5hex("ddddd") + md5hex("eeeee")) self.assertEqual(headers.get("Etag"), expected_etag) def test_get_range_on_segment_boundaries(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=10-19'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "206 Partial Content") self.assertEqual(headers["Content-Length"], "10") self.assertEqual(body, "cccccddddd") def test_get_range_first_byte(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-0'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "206 Partial Content") self.assertEqual(headers["Content-Length"], "1") self.assertEqual(body, "a") def test_get_range_last_byte(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=24-24'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "206 Partial Content") self.assertEqual(headers["Content-Length"], "1") self.assertEqual(body, "e") def test_get_range_overlapping_end(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=18-30'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "206 Partial Content") self.assertEqual(headers["Content-Length"], "7") self.assertEqual(headers["Content-Range"], "bytes 18-24/25") self.assertEqual(body, "ddeeeee") def test_get_range_unsatisfiable(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=25-30'}) status, headers, body = self.call_dlo(req) self.assertEqual(status, "416 Requested Range Not Satisfiable") def test_get_range_many_segments_satisfiable(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=3-12'}) with mock.patch(LIMIT, 3): status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "206 Partial Content") self.assertEqual(headers["Content-Length"], "10") # The /15 here indicates that this is a 15-byte object. DLO can't tell # if there are more segments or not without fetching more container # listings, though, so we just go with the sum of the lengths of the # segments we can see. In an ideal world, this would be "bytes 3-12/*" # to indicate that we don't know the full object length. However, RFC # 2616 section 14.16 explicitly forbids us from doing that: # # A response with status code 206 (Partial Content) MUST NOT include # a Content-Range field with a byte-range-resp-spec of "*". # # Since the truth is forbidden, we lie. 
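# Worked out from the fixtures: the single listing page fetched here shows
# three 5-byte segments ("aaaaa", "bbbbb", "ccccc"), so the advertised
# total is 3 * 5 = 15, and bytes 3-12 of "aaaaabbbbbccccc" are
# "aabbbbbccc".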
self.assertEqual(headers["Content-Range"], "bytes 3-12/15") self.assertEqual(body, "aabbbbbccc") self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/mancon/manifest-many-segments'), ('GET', '/v1/AUTH_test/c?format=json&prefix=seg_'), ('GET', '/v1/AUTH_test/c/seg_01?multipart-manifest=get'), ('GET', '/v1/AUTH_test/c/seg_02?multipart-manifest=get'), ('GET', '/v1/AUTH_test/c/seg_03?multipart-manifest=get')]) def test_get_range_many_segments_satisfiability_unknown(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=10-22'}) with mock.patch(LIMIT, 3): status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "200 OK") # this requires multiple pages of container listing, so we can't send # a Content-Length header self.assertEqual(headers.get("Content-Length"), None) self.assertEqual(body, "aaaaabbbbbcccccdddddeeeee") def test_get_suffix_range(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=-40'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "206 Partial Content") self.assertEqual(headers["Content-Length"], "25") self.assertEqual(body, "aaaaabbbbbcccccdddddeeeee") def test_get_suffix_range_many_segments(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=-5'}) with mock.patch(LIMIT, 3): status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "200 OK") self.assertEqual(headers.get("Content-Length"), None) self.assertEqual(headers.get("Content-Range"), None) self.assertEqual(body, "aaaaabbbbbcccccdddddeeeee") def test_get_multi_range(self): # DLO doesn't support multi-range GETs. The way that you express that # in HTTP is to return a 200 response containing the whole entity. 
req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=5-9,15-19'}) with mock.patch(LIMIT, 3): status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, "200 OK") self.assertEqual(headers.get("Content-Length"), None) self.assertEqual(headers.get("Content-Range"), None) self.assertEqual(body, "aaaaabbbbbcccccdddddeeeee") def test_if_match_matches(self): manifest_etag = '"%s"' % md5hex( md5hex("aaaaa") + md5hex("bbbbb") + md5hex("ccccc") + md5hex("ddddd") + md5hex("eeeee")) req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Match': manifest_etag}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Length'], '25') self.assertEqual(body, 'aaaaabbbbbcccccdddddeeeee') def test_if_match_does_not_match(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Match': 'not it'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '412 Precondition Failed') self.assertEqual(headers['Content-Length'], '0') self.assertEqual(body, '') def test_if_none_match_matches(self): manifest_etag = '"%s"' % md5hex( md5hex("aaaaa") + md5hex("bbbbb") + md5hex("ccccc") + md5hex("ddddd") + md5hex("eeeee")) req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'If-None-Match': manifest_etag}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '304 Not Modified') self.assertEqual(headers['Content-Length'], '0') self.assertEqual(body, '') def test_if_none_match_does_not_match(self): req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'If-None-Match': 'not it'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Length'], '25') self.assertEqual(body, 'aaaaabbbbbcccccdddddeeeee') def test_get_with_if_modified_since(self): # It's important not to pass the If-[Un]Modified-Since header to the # proxy for segment GET requests, as it may result in 304 Not Modified # responses, and those don't contain segment data. 
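# The loop over self.app.calls_with_headers[1:] below checks exactly that:
# neither conditional header is forwarded on any of the segment
# subrequests, only on the initial manifest GET.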
req = swob.Request.blank( '/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Modified-Since': 'Wed, 12 Feb 2014 22:24:52 GMT', 'If-Unmodified-Since': 'Thu, 13 Feb 2014 23:25:53 GMT'}) status, headers, body, exc = self.call_dlo(req, expect_exception=True) for _, _, hdrs in self.app.calls_with_headers[1:]: self.assertFalse('If-Modified-Since' in hdrs) self.assertFalse('If-Unmodified-Since' in hdrs) def test_error_fetching_first_segment(self): self.app.register( 'GET', '/v1/AUTH_test/c/seg_01', swob.HTTPForbidden, {}, None) req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_dlo(req) self.assertEqual(status, "409 Conflict") err_lines = self.dlo.logger.get_lines_for_level('error') self.assertEqual(len(err_lines), 1) self.assertTrue(err_lines[0].startswith( 'ERROR: An error occurred while retrieving segments')) def test_error_fetching_second_segment(self): self.app.register( 'GET', '/v1/AUTH_test/c/seg_02', swob.HTTPForbidden, {}, None) req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_dlo(req, expect_exception=True) headers = HeaderKeyDict(headers) self.assertTrue(isinstance(exc, exceptions.SegmentError)) self.assertEqual(status, "200 OK") self.assertEqual(''.join(body), "aaaaa") # first segment made it out err_lines = self.dlo.logger.get_lines_for_level('error') self.assertEqual(len(err_lines), 1) self.assertTrue(err_lines[0].startswith( 'ERROR: An error occurred while retrieving segments')) def test_error_listing_container_first_listing_request(self): self.app.register( 'GET', '/v1/AUTH_test/c?format=json&prefix=seg_', swob.HTTPNotFound, {}, None) req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=-5'}) with mock.patch(LIMIT, 3): status, headers, body = self.call_dlo(req) self.assertEqual(status, "404 Not Found") def test_error_listing_container_second_listing_request(self): self.app.register( 'GET', '/v1/AUTH_test/c?format=json&prefix=seg_&marker=seg_03', swob.HTTPNotFound, {}, None) req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=-5'}) with mock.patch(LIMIT, 3): status, headers, body, exc = self.call_dlo( req, expect_exception=True) self.assertTrue(isinstance(exc, exceptions.ListingIterError)) self.assertEqual(status, "200 OK") self.assertEqual(body, "aaaaabbbbbccccc") def test_mismatched_etag_fetching_second_segment(self): self.app.register( 'GET', '/v1/AUTH_test/c/seg_02', swob.HTTPOk, {'Content-Length': '5', 'Etag': md5hex("bbbbb")}, 'bbWRONGbb') req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_dlo(req, expect_exception=True) headers = HeaderKeyDict(headers) self.assertTrue(isinstance(exc, exceptions.SegmentError)) self.assertEqual(status, "200 OK") self.assertEqual(''.join(body), "aaaaabbWRONGbb") # stop after error def test_etag_comparison_ignores_quotes(self): # a little future-proofing here in case we ever fix this in swob self.app.register( 'HEAD', '/v1/AUTH_test/mani/festo', swob.HTTPOk, {'Content-Length': '0', 'Etag': 'blah', 'X-Object-Manifest': 'c/quotetags'}, None) self.app.register( 'GET', '/v1/AUTH_test/c?format=json&prefix=quotetags', swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'}, json.dumps([{"hash": "\"abc\"", "bytes": 5, 
"name": "quotetags1", "last_modified": "2013-11-22T02:42:14.261620", "content-type": "application/octet-stream"}, {"hash": "def", "bytes": 5, "name": "quotetags2", "last_modified": "2013-11-22T02:42:14.261620", "content-type": "application/octet-stream"}])) req = swob.Request.blank('/v1/AUTH_test/mani/festo', environ={'REQUEST_METHOD': 'HEAD'}) status, headers, body = self.call_dlo(req) headers = HeaderKeyDict(headers) self.assertEqual(headers["Etag"], '"' + hashlib.md5("abcdef").hexdigest() + '"') def test_object_prefix_quoting(self): self.app.register( 'GET', '/v1/AUTH_test/man/accent', swob.HTTPOk, {'Content-Length': '0', 'Etag': 'blah', 'X-Object-Manifest': u'c/é'.encode('utf-8')}, None) segs = [{"hash": md5hex("AAAAA"), "bytes": 5, "name": u"é1"}, {"hash": md5hex("AAAAA"), "bytes": 5, "name": u"é2"}] self.app.register( 'GET', '/v1/AUTH_test/c?format=json&prefix=%C3%A9', swob.HTTPOk, {'Content-Type': 'application/json'}, json.dumps(segs)) self.app.register( 'GET', '/v1/AUTH_test/c/\xC3\xa91', swob.HTTPOk, {'Content-Length': '5', 'Etag': md5hex("AAAAA")}, "AAAAA") self.app.register( 'GET', '/v1/AUTH_test/c/\xC3\xA92', swob.HTTPOk, {'Content-Length': '5', 'Etag': md5hex("BBBBB")}, "BBBBB") req = swob.Request.blank('/v1/AUTH_test/man/accent', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_dlo(req) self.assertEqual(status, "200 OK") self.assertEqual(body, "AAAAABBBBB") def test_get_taking_too_long(self): the_time = [time.time()] def mock_time(): return the_time[0] # this is just a convenient place to hang a time jump def mock_is_success(status_int): the_time[0] += 9 * 3600 return status_int // 100 == 2 req = swob.Request.blank( '/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}) with mock.patch('swift.common.request_helpers.time.time', mock_time), \ mock.patch('swift.common.request_helpers.is_success', mock_is_success), \ mock.patch.object(dlo, 'is_success', mock_is_success): status, headers, body, exc = self.call_dlo( req, expect_exception=True) self.assertEqual(status, '200 OK') self.assertEqual(body, 'aaaaabbbbbccccc') self.assertTrue(isinstance(exc, exceptions.SegmentError)) def test_get_oversize_segment(self): # If we send a Content-Length header to the client, it's based on the # container listing. If a segment gets bigger by the time we get to it # (like if a client uploads a bigger segment w/the same name), we need # to not send anything beyond the length we promised. Also, we should # probably raise an exception. 
# This is now longer than the original seg_03+seg_04+seg_05 combined self.app.register( 'GET', '/v1/AUTH_test/c/seg_03', swob.HTTPOk, {'Content-Length': '20', 'Etag': 'seg03-etag'}, 'cccccccccccccccccccc') req = swob.Request.blank( '/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_dlo(req, expect_exception=True) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') # sanity check self.assertEqual(headers.get('Content-Length'), '25') # sanity check self.assertEqual(body, 'aaaaabbbbbccccccccccccccc') self.assertTrue(isinstance(exc, exceptions.SegmentError)) self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/mancon/manifest'), ('GET', '/v1/AUTH_test/c?format=json&prefix=seg'), ('GET', '/v1/AUTH_test/c/seg_01?multipart-manifest=get'), ('GET', '/v1/AUTH_test/c/seg_02?multipart-manifest=get'), ('GET', '/v1/AUTH_test/c/seg_03?multipart-manifest=get')]) def test_get_undersize_segment(self): # If we send a Content-Length header to the client, it's based on the # container listing. If a segment gets smaller by the time we get to # it (like if a client uploads a smaller segment w/the same name), we # need to raise an exception so that the connection will be closed by # the WSGI server. Otherwise, the WSGI server will be waiting for the # next request, the client will still be waiting for the rest of the # response, and nobody will be happy. # Shrink it by a single byte self.app.register( 'GET', '/v1/AUTH_test/c/seg_03', swob.HTTPOk, {'Content-Length': '4', 'Etag': md5hex("cccc")}, 'cccc') req = swob.Request.blank( '/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_dlo(req, expect_exception=True) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') # sanity check self.assertEqual(headers.get('Content-Length'), '25') # sanity check self.assertEqual(body, 'aaaaabbbbbccccdddddeeeee') self.assertTrue(isinstance(exc, exceptions.SegmentError)) def test_get_undersize_segment_range(self): # Shrink it by a single byte self.app.register( 'GET', '/v1/AUTH_test/c/seg_03', swob.HTTPOk, {'Content-Length': '4', 'Etag': md5hex("cccc")}, 'cccc') req = swob.Request.blank( '/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-14'}) status, headers, body, exc = self.call_dlo(req, expect_exception=True) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') # sanity check self.assertEqual(headers.get('Content-Length'), '15') # sanity check self.assertEqual(body, 'aaaaabbbbbcccc') self.assertTrue(isinstance(exc, exceptions.SegmentError)) def test_get_with_auth_overridden(self): auth_got_called = [0] def my_auth(req): auth_got_called[0] += 1 return None req = swob.Request.blank('/v1/AUTH_test/mancon/manifest', environ={'REQUEST_METHOD': 'GET', 'swift.authorize': my_auth}) status, headers, body = self.call_dlo(req) self.assertTrue(auth_got_called[0] > 1) def fake_start_response(*args, **kwargs): pass class TestDloCopyHook(DloTestCase): def setUp(self): super(TestDloCopyHook, self).setUp() self.app.register( 'GET', '/v1/AUTH_test/c/o1', swob.HTTPOk, {'Content-Length': '10', 'Etag': 'o1-etag'}, "aaaaaaaaaa") self.app.register( 'GET', '/v1/AUTH_test/c/o2', swob.HTTPOk, {'Content-Length': '10', 'Etag': 'o2-etag'}, "bbbbbbbbbb") self.app.register( 'GET', '/v1/AUTH_test/c/man', swob.HTTPOk, {'X-Object-Manifest': 'c/o'}, "manifest-contents") lm = '2013-11-22T02:42:13.781760' ct = 'application/octet-stream' segs = 
[{"hash": "o1-etag", "bytes": 10, "name": "o1", "last_modified": lm, "content_type": ct}, {"hash": "o2-etag", "bytes": 5, "name": "o2", "last_modified": lm, "content_type": ct}] self.app.register( 'GET', '/v1/AUTH_test/c?format=json&prefix=o', swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'}, json.dumps(segs)) copy_hook = [None] # slip this guy in there to pull out the hook def extract_copy_hook(env, sr): copy_hook[0] = env.get('swift.copy_hook') return self.app(env, sr) self.dlo = dlo.filter_factory({})(extract_copy_hook) req = swob.Request.blank('/v1/AUTH_test/c/o1', environ={'REQUEST_METHOD': 'GET'}) self.dlo(req.environ, fake_start_response) self.copy_hook = copy_hook[0] self.assertTrue(self.copy_hook is not None) # sanity check def test_copy_hook_passthrough(self): source_req = swob.Request.blank( '/v1/AUTH_test/c/man', environ={'REQUEST_METHOD': 'GET'}) sink_req = swob.Request.blank( '/v1/AUTH_test/c/man', environ={'REQUEST_METHOD': 'PUT'}) source_resp = swob.Response(request=source_req, status=200) # no X-Object-Manifest header, so do nothing modified_resp = self.copy_hook(source_req, source_resp, sink_req) self.assertTrue(modified_resp is source_resp) def test_copy_hook_manifest(self): source_req = swob.Request.blank( '/v1/AUTH_test/c/man', environ={'REQUEST_METHOD': 'GET'}) sink_req = swob.Request.blank( '/v1/AUTH_test/c/man', environ={'REQUEST_METHOD': 'PUT'}) source_resp = swob.Response( request=source_req, status=200, headers={"X-Object-Manifest": "c/o"}, app_iter=["manifest"]) # it's a manifest, so copy the segments to make a normal object modified_resp = self.copy_hook(source_req, source_resp, sink_req) self.assertTrue(modified_resp is not source_resp) self.assertEqual(modified_resp.etag, hashlib.md5("o1-etago2-etag").hexdigest()) self.assertEqual(sink_req.headers.get('X-Object-Manifest'), None) def test_copy_hook_manifest_with_multipart_manifest_get(self): source_req = swob.Request.blank( '/v1/AUTH_test/c/man', environ={'REQUEST_METHOD': 'GET', 'QUERY_STRING': 'multipart-manifest=get'}) sink_req = swob.Request.blank( '/v1/AUTH_test/c/man', environ={'REQUEST_METHOD': 'PUT'}) source_resp = swob.Response( request=source_req, status=200, headers={"X-Object-Manifest": "c/o"}, app_iter=["manifest"]) # make sure the sink request (the backend PUT) gets X-Object-Manifest # on it, but that's all modified_resp = self.copy_hook(source_req, source_resp, sink_req) self.assertTrue(modified_resp is source_resp) self.assertEqual(sink_req.headers.get('X-Object-Manifest'), 'c/o') class TestDloConfiguration(unittest.TestCase): """ For backwards compatibility, we will read a couple of values out of the proxy's config section if we don't have any config values. """ def setUp(self): self.tmpdir = tempfile.mkdtemp() def tearDown(self): shutil.rmtree(self.tmpdir) def test_skip_defaults_if_configured(self): # The presence of even one config value in our config section means we # won't go looking for the proxy config at all. 
proxy_conf = dedent(""" [DEFAULT] bind_ip = 10.4.5.6 [pipeline:main] pipeline = catch_errors dlo ye-olde-proxy-server [filter:dlo] use = egg:swift#dlo max_get_time = 3600 [app:ye-olde-proxy-server] use = egg:swift#proxy rate_limit_segments_per_sec = 7 rate_limit_after_segment = 13 max_get_time = 2900 """) conffile = tempfile.NamedTemporaryFile() conffile.write(proxy_conf) conffile.flush() mware = dlo.filter_factory({ 'max_get_time': '3600', '__file__': conffile.name })("no app here") self.assertEqual(1, mware.rate_limit_segments_per_sec) self.assertEqual(10, mware.rate_limit_after_segment) self.assertEqual(3600, mware.max_get_time) def test_finding_defaults_from_file(self): # If DLO has no config vars, go pull them from the proxy server's # config section proxy_conf = dedent(""" [DEFAULT] bind_ip = 10.4.5.6 [pipeline:main] pipeline = catch_errors dlo ye-olde-proxy-server [filter:dlo] use = egg:swift#dlo [app:ye-olde-proxy-server] use = egg:swift#proxy rate_limit_after_segment = 13 max_get_time = 2900 """) conffile = tempfile.NamedTemporaryFile() conffile.write(proxy_conf) conffile.flush() mware = dlo.filter_factory({ '__file__': conffile.name })("no app here") self.assertEqual(1, mware.rate_limit_segments_per_sec) self.assertEqual(13, mware.rate_limit_after_segment) self.assertEqual(2900, mware.max_get_time) def test_finding_defaults_from_dir(self): # If DLO has no config vars, go pull them from the proxy server's # config section proxy_conf1 = dedent(""" [DEFAULT] bind_ip = 10.4.5.6 [pipeline:main] pipeline = catch_errors dlo ye-olde-proxy-server """) proxy_conf2 = dedent(""" [filter:dlo] use = egg:swift#dlo [app:ye-olde-proxy-server] use = egg:swift#proxy rate_limit_after_segment = 13 max_get_time = 2900 """) conf_dir = self.tmpdir conffile1 = tempfile.NamedTemporaryFile(dir=conf_dir, suffix='.conf') conffile1.write(proxy_conf1) conffile1.flush() conffile2 = tempfile.NamedTemporaryFile(dir=conf_dir, suffix='.conf') conffile2.write(proxy_conf2) conffile2.flush() mware = dlo.filter_factory({ '__file__': conf_dir })("no app here") self.assertEqual(1, mware.rate_limit_segments_per_sec) self.assertEqual(13, mware.rate_limit_after_segment) self.assertEqual(2900, mware.max_get_time) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_list_endpoints.py0000664000567000056710000004361013024044354026033 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import array import json import unittest from tempfile import mkdtemp from shutil import rmtree import os import mock from swift.common import ring, utils from swift.common.utils import split_path from swift.common.swob import Request, Response from swift.common.middleware import list_endpoints from swift.common.storage_policy import StoragePolicy, POLICIES from test.unit import patch_policies class FakeApp(object): def __call__(self, env, start_response): return Response(body="FakeApp")(env, start_response) def start_response(*args): pass @patch_policies([StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', True)]) class TestListEndpoints(unittest.TestCase): def setUp(self): utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = '' self.testdir = mkdtemp() accountgz = os.path.join(self.testdir, 'account.ring.gz') containergz = os.path.join(self.testdir, 'container.ring.gz') objectgz = os.path.join(self.testdir, 'object.ring.gz') objectgz_1 = os.path.join(self.testdir, 'object-1.ring.gz') self.policy_to_test = 0 self.expected_path = ('v1', 'a', 'c', 'o1') # Let's make the rings slightly different so we can test # that the correct ring is consulted (e.g. we don't consult # the object ring to get nodes for a container) intended_replica2part2dev_id_a = [ array.array('H', [3, 1, 3, 1]), array.array('H', [0, 3, 1, 4]), array.array('H', [1, 4, 0, 3])] intended_replica2part2dev_id_c = [ array.array('H', [4, 3, 0, 1]), array.array('H', [0, 1, 3, 4]), array.array('H', [3, 4, 0, 1])] intended_replica2part2dev_id_o = [ array.array('H', [0, 1, 0, 1]), array.array('H', [0, 1, 0, 1]), array.array('H', [3, 4, 3, 4])] intended_replica2part2dev_id_o_1 = [ array.array('H', [1, 0, 1, 0]), array.array('H', [1, 0, 1, 0]), array.array('H', [4, 3, 4, 3])] intended_devs = [{'id': 0, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'device': 'sda1'}, {'id': 1, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'device': 'sdb1'}, None, {'id': 3, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.1', 'port': 6000, 'device': 'sdc1'}, {'id': 4, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.2', 'port': 6000, 'device': 'sdd1'}] intended_part_shift = 30 ring.RingData(intended_replica2part2dev_id_a, intended_devs, intended_part_shift).save(accountgz) ring.RingData(intended_replica2part2dev_id_c, intended_devs, intended_part_shift).save(containergz) ring.RingData(intended_replica2part2dev_id_o, intended_devs, intended_part_shift).save(objectgz) ring.RingData(intended_replica2part2dev_id_o_1, intended_devs, intended_part_shift).save(objectgz_1) self.app = FakeApp() self.list_endpoints = list_endpoints.filter_factory( {'swift_dir': self.testdir})(self.app) def tearDown(self): rmtree(self.testdir, ignore_errors=1) def FakeGetInfo(self, env, app, swift_source=None): info = {'status': 0, 'sync_key': None, 'meta': {}, 'cors': {'allow_origin': None, 'expose_headers': None, 'max_age': None}, 'sysmeta': {}, 'read_acl': None, 'object_count': None, 'write_acl': None, 'versions': None, 'bytes': None} info['storage_policy'] = self.policy_to_test (version, account, container, unused) = \ split_path(env['PATH_INFO'], 3, 4, True) self.assertEqual((version, account, container), self.expected_path[:3]) return info def test_parse_response_version(self): expectations = { '': 1.0, # legacy compat '/1': 1.0, '/v1': 1.0, '/1.0': 1.0, '/v1.0': 1.0, '/2': 2.0, '/v2': 2.0, '/2.0': 2.0, '/v2.0': 2.0, } accounts = ( 'AUTH_test', 'test', 'verybadreseller_prefix' 'verybadaccount' ) for expected_account in accounts: for version, expected in 
expectations.items(): path = '/endpoints%s/%s/c/o' % (version, expected_account) req = Request.blank(path) version, account, container, obj = \ self.list_endpoints._parse_path(req) try: self.assertEqual(version, expected) self.assertEqual(account, expected_account) except AssertionError: self.fail('Unexpected result from parse path %r: %r != %r' % (path, (version, account), (expected, expected_account))) def test_parse_version_that_looks_like_account(self): """ Demonstrate the failure mode for versions that look like accounts, if you can make _parse_path better and this is the *only* test that fails you can delete it ;) """ bad_versions = ( 'v_3', 'verybadreseller_prefix', ) for bad_version in bad_versions: req = Request.blank('/endpoints/%s/a/c/o' % bad_version) version, account, container, obj = \ self.list_endpoints._parse_path(req) self.assertEqual(version, 1.0) self.assertEqual(account, bad_version) self.assertEqual(container, 'a') self.assertEqual(obj, 'c/o') def test_parse_account_that_looks_like_version(self): """ Demonstrate the failure mode for accounts that looks like versions, if you can make _parse_path better and this is the *only* test that fails you can delete it ;) """ bad_accounts = ( 'v3.0', 'verybaddaccountwithnoprefix', ) for bad_account in bad_accounts: req = Request.blank('/endpoints/%s/c/o' % bad_account) self.assertRaises(ValueError, self.list_endpoints._parse_path, req) even_worse_accounts = { 'v1': 1.0, 'v2.0': 2.0, } for bad_account, guessed_version in even_worse_accounts.items(): req = Request.blank('/endpoints/%s/c/o' % bad_account) version, account, container, obj = \ self.list_endpoints._parse_path(req) self.assertEqual(version, guessed_version) self.assertEqual(account, 'c') self.assertEqual(container, 'o') self.assertEqual(obj, None) def test_get_object_ring(self): self.assertEqual(isinstance(self.list_endpoints.get_object_ring(0), ring.Ring), True) self.assertEqual(isinstance(self.list_endpoints.get_object_ring(1), ring.Ring), True) self.assertRaises(ValueError, self.list_endpoints.get_object_ring, 99) def test_parse_path_no_version_specified(self): req = Request.blank('/endpoints/a/c/o1') version, account, container, obj = \ self.list_endpoints._parse_path(req) self.assertEqual(account, 'a') self.assertEqual(container, 'c') self.assertEqual(obj, 'o1') def test_parse_path_with_valid_version(self): req = Request.blank('/endpoints/v2/a/c/o1') version, account, container, obj = \ self.list_endpoints._parse_path(req) self.assertEqual(version, 2.0) self.assertEqual(account, 'a') self.assertEqual(container, 'c') self.assertEqual(obj, 'o1') def test_parse_path_with_invalid_version(self): req = Request.blank('/endpoints/v3/a/c/o1') self.assertRaises(ValueError, self.list_endpoints._parse_path, req) def test_parse_path_with_no_account(self): bad_paths = ('v1', 'v2', '') for path in bad_paths: req = Request.blank('/endpoints/%s' % path) try: self.list_endpoints._parse_path(req) self.fail('Expected ValueError to be raised') except ValueError as err: self.assertEqual(str(err), 'No account specified') def test_get_endpoint(self): # Expected results for objects taken from test_ring # Expected results for others computed by manually invoking # ring.get_nodes(). 
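# A rough sketch (assumed, not executed by the tests) of that manual
# invocation, using the rings written out in setUp() above:
#
#   account_ring = ring.Ring(self.testdir, ring_name='account')
#   part, nodes = account_ring.get_nodes('a')
#   urls = ['http://%(ip)s:%(port)s/%(device)s' % node + '/%d/a' % part
#           for node in nodes]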
resp = Request.blank('/endpoints/a/c/o1').get_response( self.list_endpoints) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(json.loads(resp.body), [ "http://10.1.1.1:6000/sdb1/1/a/c/o1", "http://10.1.2.2:6000/sdd1/1/a/c/o1" ]) # test policies with no version endpoint name expected = [[ "http://10.1.1.1:6000/sdb1/1/a/c/o1", "http://10.1.2.2:6000/sdd1/1/a/c/o1"], [ "http://10.1.1.1:6000/sda1/1/a/c/o1", "http://10.1.2.1:6000/sdc1/1/a/c/o1" ]] PATCHGI = 'swift.common.middleware.list_endpoints.get_container_info' for pol in POLICIES: self.policy_to_test = pol.idx with mock.patch(PATCHGI, self.FakeGetInfo): resp = Request.blank('/endpoints/a/c/o1').get_response( self.list_endpoints) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(json.loads(resp.body), expected[pol.idx]) # Here, 'o1/' is the object name. resp = Request.blank('/endpoints/a/c/o1/').get_response( self.list_endpoints) self.assertEqual(resp.status_int, 200) self.assertEqual(json.loads(resp.body), [ "http://10.1.1.1:6000/sdb1/3/a/c/o1/", "http://10.1.2.2:6000/sdd1/3/a/c/o1/" ]) resp = Request.blank('/endpoints/a/c2').get_response( self.list_endpoints) self.assertEqual(resp.status_int, 200) self.assertEqual(json.loads(resp.body), [ "http://10.1.1.1:6000/sda1/2/a/c2", "http://10.1.2.1:6000/sdc1/2/a/c2" ]) resp = Request.blank('/endpoints/a1').get_response( self.list_endpoints) self.assertEqual(resp.status_int, 200) self.assertEqual(json.loads(resp.body), [ "http://10.1.2.1:6000/sdc1/0/a1", "http://10.1.1.1:6000/sda1/0/a1", "http://10.1.1.1:6000/sdb1/0/a1" ]) resp = Request.blank('/endpoints/').get_response( self.list_endpoints) self.assertEqual(resp.status_int, 400) resp = Request.blank('/endpoints/a/c 2').get_response( self.list_endpoints) self.assertEqual(resp.status_int, 200) self.assertEqual(json.loads(resp.body), [ "http://10.1.1.1:6000/sdb1/3/a/c%202", "http://10.1.2.2:6000/sdd1/3/a/c%202" ]) resp = Request.blank('/endpoints/a/c%202').get_response( self.list_endpoints) self.assertEqual(resp.status_int, 200) self.assertEqual(json.loads(resp.body), [ "http://10.1.1.1:6000/sdb1/3/a/c%202", "http://10.1.2.2:6000/sdd1/3/a/c%202" ]) resp = Request.blank('/endpoints/ac%20count/con%20tainer/ob%20ject') \ .get_response(self.list_endpoints) self.assertEqual(resp.status_int, 200) self.assertEqual(json.loads(resp.body), [ "http://10.1.1.1:6000/sdb1/3/ac%20count/con%20tainer/ob%20ject", "http://10.1.2.2:6000/sdd1/3/ac%20count/con%20tainer/ob%20ject" ]) resp = Request.blank('/endpoints/a/c/o1', {'REQUEST_METHOD': 'POST'}) \ .get_response(self.list_endpoints) self.assertEqual(resp.status_int, 405) self.assertEqual(resp.status, '405 Method Not Allowed') self.assertEqual(resp.headers['allow'], 'GET') resp = Request.blank('/not-endpoints').get_response( self.list_endpoints) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.status, '200 OK') self.assertEqual(resp.body, 'FakeApp') # test policies with custom endpoint name for pol in POLICIES: # test custom path with trailing slash custom_path_le = list_endpoints.filter_factory({ 'swift_dir': self.testdir, 'list_endpoints_path': '/some/another/path/' })(self.app) self.policy_to_test = pol.idx with mock.patch(PATCHGI, self.FakeGetInfo): resp = Request.blank('/some/another/path/a/c/o1') \ .get_response(custom_path_le) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(json.loads(resp.body), 
expected[pol.idx]) # test custom path without trailing slash custom_path_le = list_endpoints.filter_factory({ 'swift_dir': self.testdir, 'list_endpoints_path': '/some/another/path' })(self.app) self.policy_to_test = pol.idx with mock.patch(PATCHGI, self.FakeGetInfo): resp = Request.blank('/some/another/path/a/c/o1') \ .get_response(custom_path_le) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(json.loads(resp.body), expected[pol.idx]) def test_v1_response(self): req = Request.blank('/endpoints/v1/a/c/o1') resp = req.get_response(self.list_endpoints) expected = ["http://10.1.1.1:6000/sdb1/1/a/c/o1", "http://10.1.2.2:6000/sdd1/1/a/c/o1"] self.assertEqual(resp.body, json.dumps(expected)) def test_v2_obj_response(self): req = Request.blank('/endpoints/v2/a/c/o1') resp = req.get_response(self.list_endpoints) expected = { 'endpoints': ["http://10.1.1.1:6000/sdb1/1/a/c/o1", "http://10.1.2.2:6000/sdd1/1/a/c/o1"], 'headers': {'X-Backend-Storage-Policy-Index': "0"}, } self.assertEqual(resp.body, json.dumps(expected)) for policy in POLICIES: patch_path = 'swift.common.middleware.list_endpoints' \ '.get_container_info' mock_get_container_info = lambda *args, **kwargs: \ {'storage_policy': int(policy)} with mock.patch(patch_path, mock_get_container_info): resp = req.get_response(self.list_endpoints) part, nodes = policy.object_ring.get_nodes('a', 'c', 'o1') [node.update({'part': part}) for node in nodes] path = 'http://%(ip)s:%(port)s/%(device)s/%(part)s/a/c/o1' expected = { 'headers': { 'X-Backend-Storage-Policy-Index': str(int(policy))}, 'endpoints': [path % node for node in nodes], } self.assertEqual(resp.body, json.dumps(expected)) def test_v2_non_obj_response(self): # account req = Request.blank('/endpoints/v2/a') resp = req.get_response(self.list_endpoints) expected = { 'endpoints': ["http://10.1.2.1:6000/sdc1/0/a", "http://10.1.1.1:6000/sda1/0/a", "http://10.1.1.1:6000/sdb1/0/a"], 'headers': {}, } # container self.assertEqual(resp.body, json.dumps(expected)) req = Request.blank('/endpoints/v2/a/c') resp = req.get_response(self.list_endpoints) expected = { 'endpoints': ["http://10.1.2.2:6000/sdd1/0/a/c", "http://10.1.1.1:6000/sda1/0/a/c", "http://10.1.2.1:6000/sdc1/0/a/c"], 'headers': {}, } self.assertEqual(resp.body, json.dumps(expected)) def test_version_account_response(self): req = Request.blank('/endpoints/a') resp = req.get_response(self.list_endpoints) expected = ["http://10.1.2.1:6000/sdc1/0/a", "http://10.1.1.1:6000/sda1/0/a", "http://10.1.1.1:6000/sdb1/0/a"] self.assertEqual(resp.body, json.dumps(expected)) req = Request.blank('/endpoints/v1.0/a') resp = req.get_response(self.list_endpoints) self.assertEqual(resp.body, json.dumps(expected)) req = Request.blank('/endpoints/v2/a') resp = req.get_response(self.list_endpoints) expected = { 'endpoints': ["http://10.1.2.1:6000/sdc1/0/a", "http://10.1.1.1:6000/sda1/0/a", "http://10.1.1.1:6000/sdb1/0/a"], 'headers': {}, } self.assertEqual(resp.body, json.dumps(expected)) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_acl.py0000664000567000056710000002266213024044352023536 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from swift.common.middleware import acl class TestACL(unittest.TestCase): def test_clean_acl(self): value = acl.clean_acl('header', '.r:*') self.assertEqual(value, '.r:*') value = acl.clean_acl('header', '.r:specific.host') self.assertEqual(value, '.r:specific.host') value = acl.clean_acl('header', '.r:.ending.with') self.assertEqual(value, '.r:.ending.with') value = acl.clean_acl('header', '.r:*.ending.with') self.assertEqual(value, '.r:.ending.with') value = acl.clean_acl('header', '.r:-*.ending.with') self.assertEqual(value, '.r:-.ending.with') value = acl.clean_acl('header', '.r:one,.r:two') self.assertEqual(value, '.r:one,.r:two') value = acl.clean_acl('header', '.r:*,.r:-specific.host') self.assertEqual(value, '.r:*,.r:-specific.host') value = acl.clean_acl('header', '.r:*,.r:-.ending.with') self.assertEqual(value, '.r:*,.r:-.ending.with') value = acl.clean_acl('header', '.r:one,.r:-two') self.assertEqual(value, '.r:one,.r:-two') value = acl.clean_acl('header', '.r:one,.r:-two,account,account:user') self.assertEqual(value, '.r:one,.r:-two,account,account:user') value = acl.clean_acl('header', 'TEST_account') self.assertEqual(value, 'TEST_account') value = acl.clean_acl('header', '.ref:*') self.assertEqual(value, '.r:*') value = acl.clean_acl('header', '.referer:*') self.assertEqual(value, '.r:*') value = acl.clean_acl('header', '.referrer:*') self.assertEqual(value, '.r:*') value = acl.clean_acl('header', ' .r : one , ,, .r:two , .r : - three ') self.assertEqual(value, '.r:one,.r:two,.r:-three') self.assertRaises(ValueError, acl.clean_acl, 'header', '.unknown:test') self.assertRaises(ValueError, acl.clean_acl, 'header', '.r:') self.assertRaises(ValueError, acl.clean_acl, 'header', '.r:*.') self.assertRaises(ValueError, acl.clean_acl, 'header', '.r : * . ') self.assertRaises(ValueError, acl.clean_acl, 'header', '.r:-*.') self.assertRaises(ValueError, acl.clean_acl, 'header', '.r : - * . ') self.assertRaises(ValueError, acl.clean_acl, 'header', ' .r : ') self.assertRaises(ValueError, acl.clean_acl, 'header', 'user , .r : ') self.assertRaises(ValueError, acl.clean_acl, 'header', '.r:-') self.assertRaises(ValueError, acl.clean_acl, 'header', ' .r : - ') self.assertRaises(ValueError, acl.clean_acl, 'header', 'user , .r : - ') self.assertRaises(ValueError, acl.clean_acl, 'write-header', '.r:r') def test_parse_acl(self): self.assertEqual(acl.parse_acl(None), ([], [])) self.assertEqual(acl.parse_acl(''), ([], [])) self.assertEqual(acl.parse_acl('.r:ref1'), (['ref1'], [])) self.assertEqual(acl.parse_acl('.r:-ref1'), (['-ref1'], [])) self.assertEqual(acl.parse_acl('account:user'), ([], ['account:user'])) self.assertEqual(acl.parse_acl('account'), ([], ['account'])) self.assertEqual(acl.parse_acl('acc1,acc2:usr2,.r:ref3,.r:-ref4'), (['ref3', '-ref4'], ['acc1', 'acc2:usr2'])) self.assertEqual(acl.parse_acl( 'acc1,acc2:usr2,.r:ref3,acc3,acc4:usr4,.r:ref5,.r:-ref6'), (['ref3', 'ref5', '-ref6'], ['acc1', 'acc2:usr2', 'acc3', 'acc4:usr4'])) def test_parse_v2_acl(self): # For all these tests, the header name will be "hdr". 
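# A version-2 ACL is carried as a JSON object in a single header value;
# parse_acl(version=2, data=...) should hand back that dict, an empty dict
# for an empty value, and None when the value is missing or is valid JSON
# but not an object -- exactly the cases enumerated in the table below.
# For example (sketch): parse_acl(version=2, data='{"a": 1}') -> {'a': 1}.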
tests = [ # Simple case: all ACL data in one header line ({'hdr': '{"a":1,"b":"foo"}'}, {'a': 1, 'b': 'foo'}), # No header "hdr" exists -- should return None ({}, None), ({'junk': 'junk'}, None), # Empty ACLs should return empty dict ({'hdr': ''}, {}), ({'hdr': '{}'}, {}), ({'hdr': '{ }'}, {}), # Bad input -- should return None ({'hdr': '["array"]'}, None), ({'hdr': 'null'}, None), ({'hdr': '"some_string"'}, None), ({'hdr': '123'}, None), ] for hdrs_in, expected in tests: result = acl.parse_acl(version=2, data=hdrs_in.get('hdr')) self.assertEqual(expected, result, '%r: %r != %r' % (hdrs_in, result, expected)) def test_format_v1_acl(self): tests = [ ((['a', 'b'], ['c.com']), 'a,b,.r:c.com'), ((['a', 'b'], ['c.com', '-x.c.com']), 'a,b,.r:c.com,.r:-x.c.com'), ((['a', 'b'], None), 'a,b'), ((None, ['c.com']), '.r:c.com'), ((None, None), ''), ] for (groups, refs), expected in tests: result = acl.format_acl( version=1, groups=groups, referrers=refs, header_name='hdr') self.assertEqual(expected, result, 'groups=%r, refs=%r: %r != %r' % (groups, refs, result, expected)) def test_format_v2_acl(self): tests = [ ({}, '{}'), ({'foo': 'bar'}, '{"foo":"bar"}'), ({'groups': ['a', 'b'], 'referrers': ['c.com', '-x.c.com']}, '{"groups":["a","b"],"referrers":["c.com","-x.c.com"]}'), ] for data, expected in tests: result = acl.format_acl(version=2, acl_dict=data) self.assertEqual(expected, result, 'data=%r: %r *!=* %r' % (data, result, expected)) def test_acls_from_account_info(self): test_data = [ ({}, None), ({'sysmeta': {}}, None), ({'sysmeta': {'core-access-control': '{"VERSION":1,"admin":["a","b"]}'}}, {'admin': ['a', 'b'], 'read-write': [], 'read-only': []}), ({ 'some-key': 'some-value', 'other-key': 'other-value', 'sysmeta': { 'core-access-control': '{"VERSION":1,"admin":["a","b"],"r' 'ead-write":["c"],"read-only":[]}', }}, {'admin': ['a', 'b'], 'read-write': ['c'], 'read-only': []}), ] for args, expected in test_data: result = acl.acls_from_account_info(args) self.assertEqual(expected, result, "%r: Got %r, expected %r" % (args, result, expected)) def test_referrer_allowed(self): self.assertTrue(not acl.referrer_allowed('host', None)) self.assertTrue(not acl.referrer_allowed('host', [])) self.assertTrue(acl.referrer_allowed(None, ['*'])) self.assertTrue(acl.referrer_allowed('', ['*'])) self.assertTrue(not acl.referrer_allowed(None, ['specific.host'])) self.assertTrue(not acl.referrer_allowed('', ['specific.host'])) self.assertTrue( acl.referrer_allowed('http://www.example.com/index.html', ['.example.com'])) self.assertTrue(acl.referrer_allowed( 'http://user@www.example.com/index.html', ['.example.com'])) self.assertTrue(acl.referrer_allowed( 'http://user:pass@www.example.com/index.html', ['.example.com'])) self.assertTrue(acl.referrer_allowed( 'http://www.example.com:8080/index.html', ['.example.com'])) self.assertTrue(acl.referrer_allowed( 'http://user@www.example.com:8080/index.html', ['.example.com'])) self.assertTrue(acl.referrer_allowed( 'http://user:pass@www.example.com:8080/index.html', ['.example.com'])) self.assertTrue(acl.referrer_allowed( 'http://user:pass@www.example.com:8080', ['.example.com'])) self.assertTrue(acl.referrer_allowed('http://www.example.com', ['.example.com'])) self.assertTrue(not acl.referrer_allowed( 'http://thief.example.com', ['.example.com', '-thief.example.com'])) self.assertTrue(not acl.referrer_allowed( 'http://thief.example.com', ['*', '-thief.example.com'])) self.assertTrue(acl.referrer_allowed( 'http://www.example.com', ['.other.com', 'www.example.com'])) 
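# A leading dot in a referrer ACL entry matches that domain and all of its
# subdomains, a '-' prefix denies the match, and later entries win over
# earlier ones -- so the next assertion expects the exact host entry to
# override the preceding '-.example.com' deny.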
self.assertTrue(acl.referrer_allowed( 'http://www.example.com', ['-.example.com', 'www.example.com'])) # This is considered a relative uri to the request uri, a mode not # currently supported. self.assertTrue(not acl.referrer_allowed('www.example.com', ['.example.com'])) self.assertTrue(not acl.referrer_allowed('../index.html', ['.example.com'])) self.assertTrue(acl.referrer_allowed('www.example.com', ['*'])) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_xprofile.py0000664000567000056710000005544613024044354024637 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack, LLC. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import json import shutil import tempfile import unittest from nose import SkipTest from six import BytesIO from swift import gettext_ as _ from swift.common.swob import Request, Response try: from swift.common.middleware import xprofile from swift.common.middleware.xprofile import ProfileMiddleware from swift.common.middleware.x_profile.exceptions import ( MethodNotAllowed, NotFoundException, ODFLIBNotInstalled, PLOTLIBNotInstalled) from swift.common.middleware.x_profile.html_viewer import ( HTMLViewer, PLOTLIB_INSTALLED) from swift.common.middleware.x_profile.profile_model import ( ODFLIB_INSTALLED, ProfileLog, Stats2) except ImportError: xprofile = None class FakeApp(object): def __call__(self, env, start_response): req = Request(env) return Response(request=req, body='FAKE APP')( env, start_response) class TestXProfile(unittest.TestCase): def test_get_profiler(self): if xprofile is None: raise SkipTest self.assertTrue(xprofile.get_profiler('cProfile') is not None) self.assertTrue(xprofile.get_profiler('eventlet.green.profile') is not None) class TestProfilers(unittest.TestCase): def setUp(self): if xprofile is None: raise SkipTest self.profilers = [xprofile.get_profiler('cProfile'), xprofile.get_profiler('eventlet.green.profile')] def fake_func(self, *args, **kw): return len(args) + len(kw) def test_runcall(self): for p in self.profilers: v = p.runcall(self.fake_func, 'one', 'two', {'key1': 'value1'}) self.assertEqual(v, 3) def test_runctx(self): for p in self.profilers: p.runctx('import os;os.getcwd();', globals(), locals()) p.snapshot_stats() self.assertTrue(p.stats is not None) self.assertTrue(len(p.stats.keys()) > 0) class TestProfileMiddleware(unittest.TestCase): def setUp(self): if xprofile is None: raise SkipTest self.got_statuses = [] self.app = ProfileMiddleware(FakeApp, {}) self.tempdir = os.path.dirname(self.app.log_filename_prefix) self.pids = ['123', '456', str(os.getpid())] profiler = xprofile.get_profiler('eventlet.green.profile') for pid in self.pids: path = self.app.log_filename_prefix + pid profiler.runctx('import os;os.getcwd();', globals(), locals()) profiler.dump_stats(path) profiler.runctx('import os;os.getcwd();', globals(), locals()) profiler.dump_stats(path + '.tmp') def tearDown(self): shutil.rmtree(self.tempdir, ignore_errors=True) def get_app(self, app, global_conf, **local_conf): factory 
= xprofile.filter_factory(global_conf, **local_conf) return factory(app) def start_response(self, status, headers): self.got_statuses = [status] self.headers = headers def test_combine_body_qs(self): body = (b"profile=all&sort=time&limit=-1&fulldirs=1" b"&nfl_filter=__call__&query=query&metric=nc&format=default") wsgi_input = BytesIO(body) environ = {'REQUEST_METHOD': 'GET', 'QUERY_STRING': 'profile=all&format=json', 'wsgi.input': wsgi_input} req = Request.blank('/__profile__/', environ=environ) query_dict = self.app._combine_body_qs(req) self.assertEqual(query_dict['profile'], ['all']) self.assertEqual(query_dict['sort'], ['time']) self.assertEqual(query_dict['limit'], ['-1']) self.assertEqual(query_dict['fulldirs'], ['1']) self.assertEqual(query_dict['nfl_filter'], ['__call__']) self.assertEqual(query_dict['query'], ['query']) self.assertEqual(query_dict['metric'], ['nc']) self.assertEqual(query_dict['format'], ['default']) def test_call(self): body = b"sort=time&limit=-1&fulldirs=1&nfl_filter=&metric=nc" wsgi_input = BytesIO(body + b'&query=query') environ = {'HTTP_HOST': 'localhost:8080', 'PATH_INFO': '/__profile__', 'REQUEST_METHOD': 'GET', 'QUERY_STRING': 'profile=all&format=json', 'wsgi.input': wsgi_input} resp = self.app(environ, self.start_response) self.assertTrue(resp[0].find('') > 0, resp) self.assertEqual(self.got_statuses, ['200 OK']) self.assertEqual(self.headers, [('content-type', 'text/html')]) wsgi_input = BytesIO(body + b'&plot=plot') environ['wsgi.input'] = wsgi_input if PLOTLIB_INSTALLED: resp = self.app(environ, self.start_response) self.assertEqual(self.got_statuses, ['200 OK']) self.assertEqual(self.headers, [('content-type', 'image/jpg')]) else: resp = self.app(environ, self.start_response) self.assertEqual(self.got_statuses, ['500 Internal Server Error']) wsgi_input = BytesIO(body + '&download=download&format=default') environ['wsgi.input'] = wsgi_input resp = self.app(environ, self.start_response) self.assertEqual(self.headers, [('content-type', HTMLViewer.format_dict['default'])]) wsgi_input = BytesIO(body + '&download=download&format=json') environ['wsgi.input'] = wsgi_input resp = self.app(environ, self.start_response) self.assertTrue(self.headers == [('content-type', HTMLViewer.format_dict['json'])]) env2 = environ.copy() env2['REQUEST_METHOD'] = 'DELETE' resp = self.app(env2, self.start_response) self.assertEqual(self.got_statuses, ['405 Method Not Allowed'], resp) # use a totally bogus profile identifier wsgi_input = BytesIO(body + b'&profile=ABC&download=download') environ['wsgi.input'] = wsgi_input resp = self.app(environ, self.start_response) self.assertEqual(self.got_statuses, ['404 Not Found'], resp) wsgi_input = BytesIO(body + b'&download=download&format=ods') environ['wsgi.input'] = wsgi_input resp = self.app(environ, self.start_response) if ODFLIB_INSTALLED: self.assertEqual(self.headers, [('content-type', HTMLViewer.format_dict['ods'])]) else: self.assertEqual(self.got_statuses, ['500 Internal Server Error']) def test_dump_checkpoint(self): self.app.dump_checkpoint() self.assertTrue(self.app.last_dump_at is not None) def test_renew_profile(self): old_profiler = self.app.profiler self.app.renew_profile() new_profiler = self.app.profiler self.assertTrue(old_profiler != new_profiler) class Test_profile_log(unittest.TestCase): def setUp(self): if xprofile is None: raise SkipTest self.dir1 = tempfile.mkdtemp() self.log_filename_prefix1 = self.dir1 + '/unittest.profile' self.profile_log1 = ProfileLog(self.log_filename_prefix1, False) self.pids1 = ['123', 
'456', str(os.getpid())] profiler1 = xprofile.get_profiler('eventlet.green.profile') for pid in self.pids1: profiler1.runctx('import os;os.getcwd();', globals(), locals()) self.profile_log1.dump_profile(profiler1, pid) self.dir2 = tempfile.mkdtemp() self.log_filename_prefix2 = self.dir2 + '/unittest.profile' self.profile_log2 = ProfileLog(self.log_filename_prefix2, True) self.pids2 = ['321', '654', str(os.getpid())] profiler2 = xprofile.get_profiler('eventlet.green.profile') for pid in self.pids2: profiler2.runctx('import os;os.getcwd();', globals(), locals()) self.profile_log2.dump_profile(profiler2, pid) def tearDown(self): self.profile_log1.clear('all') self.profile_log2.clear('all') shutil.rmtree(self.dir1, ignore_errors=True) shutil.rmtree(self.dir2, ignore_errors=True) def test_get_all_pids(self): self.assertEqual(self.profile_log1.get_all_pids(), sorted(self.pids1, reverse=True)) for pid in self.profile_log2.get_all_pids(): self.assertTrue(pid.split('-')[0] in self.pids2) def test_clear(self): self.profile_log1.clear('123') self.assertFalse(os.path.exists(self.log_filename_prefix1 + '123')) self.profile_log1.clear('current') self.assertFalse(os.path.exists(self.log_filename_prefix1 + str(os.getpid()))) self.profile_log1.clear('all') for pid in self.pids1: self.assertFalse(os.path.exists(self.log_filename_prefix1 + pid)) self.profile_log2.clear('321') self.assertFalse(os.path.exists(self.log_filename_prefix2 + '321')) self.profile_log2.clear('current') self.assertFalse(os.path.exists(self.log_filename_prefix2 + str(os.getpid()))) self.profile_log2.clear('all') for pid in self.pids2: self.assertFalse(os.path.exists(self.log_filename_prefix2 + pid)) def test_get_logfiles(self): log_files = self.profile_log1.get_logfiles('all') self.assertEqual(len(log_files), 3) self.assertEqual(len(log_files), len(self.pids1)) log_files = self.profile_log1.get_logfiles('current') self.assertEqual(len(log_files), 1) self.assertEqual(log_files, [self.log_filename_prefix1 + str(os.getpid())]) log_files = self.profile_log1.get_logfiles(self.pids1[0]) self.assertEqual(len(log_files), 1) self.assertEqual(log_files, [self.log_filename_prefix1 + self.pids1[0]]) log_files = self.profile_log2.get_logfiles('all') self.assertEqual(len(log_files), 3) self.assertEqual(len(log_files), len(self.pids2)) log_files = self.profile_log2.get_logfiles('current') self.assertEqual(len(log_files), 1) self.assertTrue(log_files[0].find(self.log_filename_prefix2 + str(os.getpid())) > -1) log_files = self.profile_log2.get_logfiles(self.pids2[0]) self.assertEqual(len(log_files), 1) self.assertTrue(log_files[0].find(self.log_filename_prefix2 + self.pids2[0]) > -1) def test_dump_profile(self): prof = xprofile.get_profiler('eventlet.green.profile') prof.runctx('import os;os.getcwd();', globals(), locals()) prof.create_stats() pfn = self.profile_log1.dump_profile(prof, os.getpid()) self.assertTrue(os.path.exists(pfn)) os.remove(pfn) pfn = self.profile_log2.dump_profile(prof, os.getpid()) self.assertTrue(os.path.exists(pfn)) os.remove(pfn) class Test_html_viewer(unittest.TestCase): def setUp(self): if xprofile is None: raise SkipTest self.app = ProfileMiddleware(FakeApp, {}) self.log_files = [] self.tempdir = tempfile.mkdtemp() self.log_filename_prefix = self.tempdir + '/unittest.profile' self.profile_log = ProfileLog(self.log_filename_prefix, False) self.pids = ['123', '456', str(os.getpid())] profiler = xprofile.get_profiler('eventlet.green.profile') for pid in self.pids: profiler.runctx('import os;os.getcwd();', globals(), locals()) 
self.log_files.append(self.profile_log.dump_profile(profiler, pid)) self.viewer = HTMLViewer('__profile__', 'eventlet.green.profile', self.profile_log) body = (b"profile=123&profile=456&sort=time&sort=nc&limit=10" b"&fulldirs=1&nfl_filter=getcwd&query=query&metric=nc") wsgi_input = BytesIO(body) environ = {'REQUEST_METHOD': 'GET', 'QUERY_STRING': 'profile=all', 'wsgi.input': wsgi_input} req = Request.blank('/__profile__/', environ=environ) self.query_dict = self.app._combine_body_qs(req) def tearDown(self): shutil.rmtree(self.tempdir, ignore_errors=True) def fake_call_back(self): pass def test_get_param(self): query_dict = self.query_dict get_param = self.viewer._get_param self.assertEqual(get_param(query_dict, 'profile', 'current', True), ['123', '456']) self.assertEqual(get_param(query_dict, 'profile', 'current'), '123') self.assertEqual(get_param(query_dict, 'sort', 'time'), 'time') self.assertEqual(get_param(query_dict, 'sort', 'time', True), ['time', 'nc']) self.assertEqual(get_param(query_dict, 'limit', -1), 10) self.assertEqual(get_param(query_dict, 'fulldirs', '0'), '1') self.assertEqual(get_param(query_dict, 'nfl_filter', ''), 'getcwd') self.assertEqual(get_param(query_dict, 'query', ''), 'query') self.assertEqual(get_param(query_dict, 'metric', 'time'), 'nc') self.assertEqual(get_param(query_dict, 'format', 'default'), 'default') def test_render(self): url = 'http://localhost:8080/__profile__' path_entries = ['/__profile__'.split('/'), '/__profile__/'.split('/'), '/__profile__/123'.split('/'), '/__profile__/123/'.split('/'), '/__profile__/123/:0(getcwd)'.split('/'), '/__profile__/all'.split('/'), '/__profile__/all/'.split('/'), '/__profile__/all/:0(getcwd)'.split('/'), '/__profile__/current'.split('/'), '/__profile__/current/'.split('/'), '/__profile__/current/:0(getcwd)'.split('/')] content, headers = self.viewer.render(url, 'GET', path_entries[0], self.query_dict, None) self.assertTrue(content is not None) self.assertEqual(headers, [('content-type', 'text/html')]) content, headers = self.viewer.render(url, 'POST', path_entries[0], self.query_dict, None) self.assertTrue(content is not None) self.assertEqual(headers, [('content-type', 'text/html')]) plot_dict = self.query_dict.copy() plot_dict['plot'] = ['plot'] if PLOTLIB_INSTALLED: content, headers = self.viewer.render(url, 'POST', path_entries[0], plot_dict, None) self.assertEqual(headers, [('content-type', 'image/jpg')]) else: self.assertRaises(PLOTLIBNotInstalled, self.viewer.render, url, 'POST', path_entries[0], plot_dict, None) clear_dict = self.query_dict.copy() clear_dict['clear'] = ['clear'] del clear_dict['query'] clear_dict['profile'] = ['xxx'] content, headers = self.viewer.render(url, 'POST', path_entries[0], clear_dict, None) self.assertEqual(headers, [('content-type', 'text/html')]) download_dict = self.query_dict.copy() download_dict['download'] = ['download'] content, headers = self.viewer.render(url, 'POST', path_entries[0], download_dict, None) self.assertTrue(headers == [('content-type', self.viewer.format_dict['default'])]) content, headers = self.viewer.render(url, 'GET', path_entries[1], self.query_dict, None) self.assertTrue(isinstance(json.loads(content), dict)) for method in ['HEAD', 'PUT', 'DELETE', 'XYZMethod']: self.assertRaises(MethodNotAllowed, self.viewer.render, url, method, path_entries[10], self.query_dict, None) for entry in path_entries[2:]: download_dict['format'] = 'default' content, headers = self.viewer.render(url, 'GET', entry, download_dict, None) self.assertTrue( ('content-type', 
self.viewer.format_dict['default']) in headers, entry) download_dict['format'] = 'json' content, headers = self.viewer.render(url, 'GET', entry, download_dict, None) self.assertTrue(isinstance(json.loads(content), dict)) def test_index(self): content, headers = self.viewer.index_page(self.log_files[0:1], profile_id='current') self.assertTrue(content.find('') > -1) self.assertTrue(headers == [('content-type', 'text/html')]) def test_index_all(self): content, headers = self.viewer.index_page(self.log_files, profile_id='all') for f in self.log_files: self.assertTrue(content.find(f) > 0, content) self.assertTrue(headers == [('content-type', 'text/html')]) def test_download(self): content, headers = self.viewer.download(self.log_files) self.assertTrue(content is not None) self.assertEqual(headers, [('content-type', self.viewer.format_dict['default'])]) content, headers = self.viewer.download(self.log_files, sort='calls', limit=10, nfl_filter='os') self.assertTrue(content is not None) self.assertEqual(headers, [('content-type', self.viewer.format_dict['default'])]) content, headers = self.viewer.download(self.log_files, output_format='default') self.assertEqual(headers, [('content-type', self.viewer.format_dict['default'])]) content, headers = self.viewer.download(self.log_files, output_format='json') self.assertTrue(isinstance(json.loads(content), dict)) self.assertEqual(headers, [('content-type', self.viewer.format_dict['json'])]) content, headers = self.viewer.download(self.log_files, output_format='csv') self.assertEqual(headers, [('content-type', self.viewer.format_dict['csv'])]) if ODFLIB_INSTALLED: content, headers = self.viewer.download(self.log_files, output_format='ods') self.assertEqual(headers, [('content-type', self.viewer.format_dict['ods'])]) else: self.assertRaises(ODFLIBNotInstalled, self.viewer.download, self.log_files, output_format='ods') content, headers = self.viewer.download(self.log_files, nfl_filter=__file__, output_format='python') self.assertEqual(headers, [('content-type', self.viewer.format_dict['python'])]) def test_plot(self): if PLOTLIB_INSTALLED: content, headers = self.viewer.plot(self.log_files) self.assertTrue(content is not None) self.assertEqual(headers, [('content-type', 'image/jpg')]) self.assertRaises(NotFoundException, self.viewer.plot, []) else: self.assertRaises(PLOTLIBNotInstalled, self.viewer.plot, self.log_files) def test_format_source_code(self): osfile = os.__file__.rstrip('c') nfl_os = '%s:%d(%s)' % (osfile, 136, 'makedirs') self.assertIn('makedirs', self.viewer.format_source_code(nfl_os)) self.assertNotIn('makedirsXYZ', self.viewer.format_source_code(nfl_os)) nfl_illegal = '%sc:136(makedirs)' % osfile self.assertIn(_('The file type are forbidden to access!'), self.viewer.format_source_code(nfl_illegal)) nfl_not_exist = '%s.py:136(makedirs)' % osfile expected_msg = _('Can not access the file %s.py.') % osfile self.assertIn(expected_msg, self.viewer.format_source_code(nfl_not_exist)) class TestStats2(unittest.TestCase): def setUp(self): if xprofile is None: raise SkipTest self.profile_file = tempfile.mktemp('profile', 'unittest') self.profilers = [xprofile.get_profiler('cProfile'), xprofile.get_profiler('eventlet.green.profile')] for p in self.profilers: p.runctx('import os;os.getcwd();', globals(), locals()) p.dump_stats(self.profile_file) self.stats2 = Stats2(self.profile_file) self.selections = [['getcwd'], ['getcwd', -1], ['getcwd', -10], ['getcwd', 0.1]] def tearDown(self): os.remove(self.profile_file) def test_func_to_dict(self): func = 
['profile.py', 100, '__call__'] self.assertEqual({'module': 'profile.py', 'line': 100, 'function': '__call__'}, self.stats2.func_to_dict(func)) func = ['', 0, '__call__'] self.assertEqual({'module': '', 'line': 0, 'function': '__call__'}, self.stats2.func_to_dict(func)) def test_to_json(self): for selection in self.selections: js = self.stats2.to_json(selection) self.assertTrue(isinstance(json.loads(js), dict)) self.assertTrue(json.loads(js)['stats'] is not None) self.assertTrue(json.loads(js)['stats'][0] is not None) def test_to_ods(self): if ODFLIB_INSTALLED: for selection in self.selections: self.assertTrue(self.stats2.to_ods(selection) is not None) def test_to_csv(self): for selection in self.selections: self.assertTrue(self.stats2.to_csv(selection) is not None) self.assertTrue('function calls' in self.stats2.to_csv(selection)) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_staticweb.py0000664000567000056710000011175013024044354024763 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import json import unittest import mock from swift.common.swob import Request, Response, HTTPUnauthorized from swift.common.middleware import staticweb meta_map = { 'c1': {'status': 401}, 'c2': {}, 'c3': {'meta': {'web-index': 'index.html', 'web-listings': 't'}}, 'c3b': {'meta': {'web-index': 'index.html', 'web-listings': 't'}}, 'c4': {'meta': {'web-index': 'index.html', 'web-error': 'error.html', 'web-listings': 't', 'web-listings-css': 'listing.css', 'web-directory-type': 'text/dir'}}, 'c5': {'meta': {'web-index': 'index.html', 'web-error': 'error.html', 'web-listings': 't', 'web-listings-css': 'listing.css'}}, 'c6': {'meta': {'web-listings': 't', 'web-error': 'error.html'}}, 'c6b': {'meta': {'web-listings': 't', 'web-listings-label': 'foo'}}, 'c7': {'meta': {'web-listings': 'f'}}, 'c8': {'meta': {'web-error': 'error.html', 'web-listings': 't', 'web-listings-css': 'http://localhost/stylesheets/listing.css'}}, 'c9': {'meta': {'web-error': 'error.html', 'web-listings': 't', 'web-listings-css': '/absolute/listing.css'}}, 'c10': {'meta': {'web-listings': 't'}}, 'c11': {'meta': {'web-index': 'index.html'}}, 'c11a': {'meta': {'web-index': 'index.html', 'web-directory-type': 'text/directory'}}, 'c12': {'meta': {'web-index': 'index.html', 'web-error': 'error.html'}}, 'c13': {'meta': {'web-listings': 'f', 'web-listings-css': 'listing.css'}}, } def mock_get_container_info(env, app, swift_source='SW'): container = env['PATH_INFO'].rstrip('/').split('/')[3] container_info = meta_map[container] container_info.setdefault('status', 200) container_info.setdefault('read_acl', '.r:*') return container_info class FakeApp(object): def __init__(self, status_headers_body_iter=None): self.calls = 0 self.get_c4_called = False def __call__(self, env, start_response): self.calls += 1 if 'swift.authorize' in env: resp = env['swift.authorize'](Request(env)) if resp: return resp(env, start_response) if env['PATH_INFO'] == '/': 
return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1': return Response( status='412 Precondition Failed')(env, start_response) elif env['PATH_INFO'] == '/v1/a': return Response(status='401 Unauthorized')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c1': return Response(status='401 Unauthorized')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c2': return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c2/one.txt': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3': return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/index.html': return Response(status='200 Ok', body='''
<html>
    <body>
        <h1>Test main index.html file.</h1>
        <p>Visit <a href="subdir">subdir</a>.</p>
        <p>Don't visit <a href="subdir2">subdir2</a> because it doesn't really exist.</p>
        <p>Visit <a href="subdir3">subdir3</a>.</p>
        <p>Visit <a href="subdir3/subsubdir">subdir3/subsubdir</a>.</p>
    </body>
</html>
''')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3b': return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3b/index.html': resp = Response(status='204 No Content') resp.app_iter = iter([]) return resp(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdir': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdir/': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdir/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdir3/subsubdir': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdir3/subsubdir/': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdir3/subsubdir/index.html': return Response(status='200 Ok', body='index file')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdirx/': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdirx/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdiry/': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdiry/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdirz': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/subdirz/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/unknown': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c3/unknown/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c4': self.get_c4_called = True return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c4/one.txt': return Response( status='200 Ok', headers={'x-object-meta-test': 'value'}, body='1')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c4/two.txt': return Response(status='503 Service Unavailable')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c4/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c4/subdir/': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c4/subdir/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c4/unknown': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c4/unknown/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c4/404error.html': return Response(status='200 Ok', body='''
<html>
    <body>
        <h1>Chrome's 404 fancy-page sucks.</h1>
    </body>
</html>
'''.strip())(env, start_response) elif env['PATH_INFO'] == '/v1/a/c5': return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c5/index.html': return Response(status='503 Service Unavailable')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c5/503error.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c5/unknown': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c5/unknown/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c5/404error.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c6': return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c6b': return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c6/subdir': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c6/401error.html': return Response(status='200 Ok', body='''
<html>
    <body>
        <h1>Hey, you're not authorized to see this!</h1>
    </body>
</html>
'''.strip())(env, start_response) elif env['PATH_INFO'] in ('/v1/a/c7', '/v1/a/c7/'): return self.listing(env, start_response) elif env['PATH_INFO'] in ('/v1/a/c8', '/v1/a/c8/'): return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c8/subdir/': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] in ('/v1/a/c9', '/v1/a/c9/'): return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c9/subdir/': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] in ('/v1/a/c10', '/v1/a/c10/'): return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c10/\xe2\x98\x83/': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c10/\xe2\x98\x83/\xe2\x98\x83/': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] in ('/v1/a/c11', '/v1/a/c11/'): return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c11/subdir/': return Response(status='200 Ok', headers={ 'Content-Type': 'application/directory'})( env, start_response) elif env['PATH_INFO'] == '/v1/a/c11/subdir/index.html': return Response(status='200 Ok', body='''
<html>
    <body>
        <h2>c11 subdir index</h2>
    </body>
</html>
'''.strip())(env, start_response) elif env['PATH_INFO'] == '/v1/a/c11/subdir2/': return Response(status='200 Ok', headers={'Content-Type': 'application/directory'})(env, start_response) elif env['PATH_INFO'] == '/v1/a/c11/subdir2/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] in ('/v1/a/c11a', '/v1/a/c11a/'): return self.listing(env, start_response) elif env['PATH_INFO'] == '/v1/a/c11a/subdir/': return Response(status='200 Ok', headers={'Content-Type': 'text/directory'})(env, start_response) elif env['PATH_INFO'] == '/v1/a/c11a/subdir/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c11a/subdir2/': return Response(status='200 Ok', headers={'Content-Type': 'application/directory'})(env, start_response) elif env['PATH_INFO'] == '/v1/a/c11a/subdir2/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c11a/subdir3/': return Response(status='200 Ok', headers={'Content-Type': 'not_a/directory'})(env, start_response) elif env['PATH_INFO'] == '/v1/a/c11a/subdir3/index.html': return Response(status='404 Not Found')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c12/index.html': return Response(status='200 Ok', body='index file')(env, start_response) elif env['PATH_INFO'] == '/v1/a/c12/200error.html': return Response(status='200 Ok', body='error file')(env, start_response) else: raise Exception('Unknown path %r' % env['PATH_INFO']) def listing(self, env, start_response): headers = {'x-container-read': '.r:*'} if ((env['PATH_INFO'] in ( '/v1/a/c3', '/v1/a/c4', '/v1/a/c8', '/v1/a/c9')) and (env['QUERY_STRING'] == 'delimiter=/&format=json&prefix=subdir/')): headers.update({'X-Container-Object-Count': '12', 'X-Container-Bytes-Used': '73763', 'X-Container-Read': '.r:*', 'Content-Type': 'application/json; charset=utf-8'}) body = ''' [{"name":"subdir/1.txt", "hash":"5f595114a4b3077edfac792c61ca4fe4", "bytes":20, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.709100"}, {"name":"subdir/2.txt", "hash":"c85c1dcd19cf5cbac84e6043c31bb63e", "bytes":20, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.734140"}, {"subdir":"subdir3/subsubdir/"}] '''.strip() elif env['PATH_INFO'] == '/v1/a/c3' and env['QUERY_STRING'] == \ 'delimiter=/&format=json&prefix=subdiry/': headers.update({'X-Container-Object-Count': '12', 'X-Container-Bytes-Used': '73763', 'X-Container-Read': '.r:*', 'Content-Type': 'application/json; charset=utf-8'}) body = '[]' elif env['PATH_INFO'] == '/v1/a/c3' and env['QUERY_STRING'] == \ 'limit=1&format=json&delimiter=/&limit=1&prefix=subdirz/': headers.update({'X-Container-Object-Count': '12', 'X-Container-Bytes-Used': '73763', 'X-Container-Read': '.r:*', 'Content-Type': 'application/json; charset=utf-8'}) body = ''' [{"name":"subdirz/1.txt", "hash":"5f595114a4b3077edfac792c61ca4fe4", "bytes":20, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.709100"}] '''.strip() elif env['PATH_INFO'] == '/v1/a/c6' and env['QUERY_STRING'] == \ 'limit=1&format=json&delimiter=/&limit=1&prefix=subdir/': headers.update({'X-Container-Object-Count': '12', 'X-Container-Bytes-Used': '73763', 'X-Container-Read': '.r:*', 'X-Container-Web-Listings': 't', 'Content-Type': 'application/json; charset=utf-8'}) body = ''' [{"name":"subdir/1.txt", "hash":"5f595114a4b3077edfac792c61ca4fe4", "bytes":20, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.709100"}] '''.strip() elif env['PATH_INFO'] == 
'/v1/a/c10' and ( env['QUERY_STRING'] == 'delimiter=/&format=json&prefix=%E2%98%83/' or env['QUERY_STRING'] == 'delimiter=/&format=json&prefix=%E2%98%83/%E2%98%83/'): headers.update({'X-Container-Object-Count': '12', 'X-Container-Bytes-Used': '73763', 'X-Container-Read': '.r:*', 'X-Container-Web-Listings': 't', 'Content-Type': 'application/json; charset=utf-8'}) body = ''' [{"name":"\u2603/\u2603/one.txt", "hash":"73f1dd69bacbf0847cc9cffa3c6b23a1", "bytes":22, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.709100"}, {"subdir":"\u2603/\u2603/"}] '''.strip() elif 'prefix=' in env['QUERY_STRING']: return Response(status='204 No Content')(env, start_response) elif 'format=json' in env['QUERY_STRING']: headers.update({'X-Container-Object-Count': '12', 'X-Container-Bytes-Used': '73763', 'Content-Type': 'application/json; charset=utf-8'}) body = ''' [{"name":"401error.html", "hash":"893f8d80692a4d3875b45be8f152ad18", "bytes":110, "content_type":"text/html", "last_modified":"2011-03-24T04:27:52.713710"}, {"name":"404error.html", "hash":"62dcec9c34ed2b347d94e6ca707aff8c", "bytes":130, "content_type":"text/html", "last_modified":"2011-03-24T04:27:52.720850"}, {"name":"index.html", "hash":"8b469f2ca117668a5131fe9ee0815421", "bytes":347, "content_type":"text/html", "last_modified":"2011-03-24T04:27:52.683590"}, {"name":"listing.css", "hash":"7eab5d169f3fcd06a08c130fa10c5236", "bytes":17, "content_type":"text/css", "last_modified":"2011-03-24T04:27:52.721610"}, {"name":"one.txt", "hash":"73f1dd69bacbf0847cc9cffa3c6b23a1", "bytes":22, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.722270"}, {"name":"subdir/1.txt", "hash":"5f595114a4b3077edfac792c61ca4fe4", "bytes":20, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.709100"}, {"name":"subdir/2.txt", "hash":"c85c1dcd19cf5cbac84e6043c31bb63e", "bytes":20, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.734140"}, {"name":"subdir/\u2603.txt", "hash":"7337d028c093130898d937c319cc9865", "bytes":72981, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.735460"}, {"name":"subdir2", "hash":"d41d8cd98f00b204e9800998ecf8427e", "bytes":0, "content_type":"text/directory", "last_modified":"2011-03-24T04:27:52.676690"}, {"name":"subdir3/subsubdir/index.html", "hash":"04eea67110f883b1a5c97eb44ccad08c", "bytes":72, "content_type":"text/html", "last_modified":"2011-03-24T04:27:52.751260"}, {"name":"two.txt", "hash":"10abb84c63a5cff379fdfd6385918833", "bytes":22, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.825110"}, {"name":"\u2603/\u2603/one.txt", "hash":"73f1dd69bacbf0847cc9cffa3c6b23a1", "bytes":22, "content_type":"text/plain", "last_modified":"2011-03-24T04:27:52.935560"}] '''.strip() else: headers.update({'X-Container-Object-Count': '12', 'X-Container-Bytes-Used': '73763', 'Content-Type': 'text/plain; charset=utf-8'}) body = '\n'.join(['401error.html', '404error.html', 'index.html', 'listing.css', 'one.txt', 'subdir/1.txt', 'subdir/2.txt', u'subdir/\u2603.txt', 'subdir2', 'subdir3/subsubdir/index.html', 'two.txt', u'\u2603/\u2603/one.txt']) return Response(status='200 Ok', headers=headers, body=body)(env, start_response) class FakeAuthFilter(object): def __init__(self, app, deny_objects=False, deny_listing=False): self.app = app self.deny_objects = deny_objects self.deny_listing = deny_listing def authorize(self, req): path_parts = req.path.strip('/').split('/') if ((self.deny_objects and len(path_parts) > 3) or (self.deny_listing and len(path_parts) == 3)): 
return HTTPUnauthorized() def __call__(self, env, start_response): env['swift.authorize'] = self.authorize return self.app(env, start_response) class TestStaticWeb(unittest.TestCase): def setUp(self): self.app = FakeApp() self.test_staticweb = FakeAuthFilter( staticweb.filter_factory({})(self.app)) self._orig_get_container_info = staticweb.get_container_info staticweb.get_container_info = mock_get_container_info def tearDown(self): staticweb.get_container_info = self._orig_get_container_info def test_app_set(self): app = FakeApp() sw = staticweb.filter_factory({})(app) self.assertEqual(sw.app, app) def test_conf_set(self): conf = {'blah': 1} sw = staticweb.filter_factory(conf)(FakeApp()) self.assertEqual(sw.conf, conf) def test_root(self): resp = Request.blank('/').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 404) def test_version(self): resp = Request.blank('/v1').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 412) def test_account(self): resp = Request.blank('/v1/a').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 401) def test_container1(self): resp = Request.blank('/v1/a/c1').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 401) def test_container1_web_mode_explicitly_off(self): resp = Request.blank('/v1/a/c1', headers={'x-web-mode': 'false'}).get_response( self.test_staticweb) self.assertEqual(resp.status_int, 401) def test_container1_web_mode_explicitly_on(self): resp = Request.blank('/v1/a/c1', headers={'x-web-mode': 'true'}).get_response( self.test_staticweb) self.assertEqual(resp.status_int, 404) def test_container2(self): resp = Request.blank('/v1/a/c2').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(len(resp.body.split('\n')), int(resp.headers['x-container-object-count'])) def test_container2_web_mode_explicitly_off(self): resp = Request.blank( '/v1/a/c2', headers={'x-web-mode': 'false'}).get_response(self.test_staticweb) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(len(resp.body.split('\n')), int(resp.headers['x-container-object-count'])) def test_container2_web_mode_explicitly_on(self): resp = Request.blank( '/v1/a/c2', headers={'x-web-mode': 'true'}).get_response(self.test_staticweb) self.assertEqual(resp.status_int, 404) def test_container2onetxt(self): resp = Request.blank( '/v1/a/c2/one.txt').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 404) def test_container2json(self): resp = Request.blank( '/v1/a/c2?format=json').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(len(json.loads(resp.body)), int(resp.headers['x-container-object-count'])) def test_container2json_web_mode_explicitly_off(self): resp = Request.blank( '/v1/a/c2?format=json', headers={'x-web-mode': 'false'}).get_response(self.test_staticweb) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(len(json.loads(resp.body)), int(resp.headers['x-container-object-count'])) def test_container2json_web_mode_explicitly_on(self): resp = Request.blank( '/v1/a/c2?format=json', headers={'x-web-mode': 'true'}).get_response(self.test_staticweb) self.assertEqual(resp.status_int, 404) def test_container3(self): resp = Request.blank('/v1/a/c3').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 301) 
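# staticweb redirects the slash-less container URL to its slash-terminated
# form (301) so that relative links in the served index page resolve under
# the container; the Location header checked next reflects that.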
self.assertEqual(resp.headers['location'], 'http://localhost/v1/a/c3/') def test_container3indexhtml(self): resp = Request.blank('/v1/a/c3/').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 200) self.assertTrue('Test main index.html file.' in resp.body) def test_container3subsubdir(self): resp = Request.blank( '/v1/a/c3/subdir3/subsubdir').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 301) def test_container3subsubdircontents(self): resp = Request.blank( '/v1/a/c3/subdir3/subsubdir/').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, 'index file') def test_container3subdir(self): resp = Request.blank( '/v1/a/c3/subdir/').get_response(self.test_staticweb) self.assertEqual(resp.status_int, 200) self.assertTrue('Listing of /v1/a/c3/subdir/' in resp.body) self.assertTrue('' in resp.body) self.assertTrue('' not in resp.body) self.assertTrue('c11 subdir index' in resp.body) def test_container11subdirmarkermatchdirtype(self): resp = Request.blank('/v1/a/c11a/subdir/').get_response( self.test_staticweb) self.assertEqual(resp.status_int, 404) self.assertTrue('Index File Not Found' in resp.body) def test_container11subdirmarkeraltdirtype(self): resp = Request.blank('/v1/a/c11a/subdir2/').get_response( self.test_staticweb) self.assertEqual(resp.status_int, 200) def test_container11subdirmarkerinvaliddirtype(self): resp = Request.blank('/v1/a/c11a/subdir3/').get_response( self.test_staticweb) self.assertEqual(resp.status_int, 200) def test_container12unredirectedrequest(self): resp = Request.blank('/v1/a/c12/').get_response( self.test_staticweb) self.assertEqual(resp.status_int, 200) self.assertTrue('index file' in resp.body) def test_container_404_has_css(self): resp = Request.blank('/v1/a/c13/').get_response( self.test_staticweb) self.assertEqual(resp.status_int, 404) self.assertTrue('listing.css' in resp.body) def test_container_404_has_no_css(self): resp = Request.blank('/v1/a/c7/').get_response( self.test_staticweb) self.assertEqual(resp.status_int, 404) self.assertTrue('listing.css' not in resp.body) self.assertTrue('\n' \ '\n' \ '\n' \ '' req = Request.blank('/crossdomain.xml', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, [expectedResponse]) # GET of /crossdomain.xml (custom) def test_crossdomain_custom(self): conf = {'cross_domain_policy': '\n'} self.app = crossdomain.CrossDomainMiddleware(FakeApp(), conf) expectedResponse = '\n' \ '\n' \ '\n' \ '\n' \ '\n' \ '' req = Request.blank('/crossdomain.xml', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, [expectedResponse]) # GET to a different resource should be passed on def test_crossdomain_pass(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') # Only GET is allowed on the /crossdomain.xml resource def test_crossdomain_get_only(self): for method in ['HEAD', 'PUT', 'POST', 'COPY', 'OPTIONS']: req = Request.blank('/crossdomain.xml', environ={'REQUEST_METHOD': method}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_tempurl.py0000664000567000056710000014574513024044354024501 0ustar jenkinsjenkins00000000000000# Copyright (c) 2011-2014 Greg Holt # Copyright (c) 2012-2013 Peter Portante # Copyright (c) 2012 Iryoung Jeong # Copyright (c) 
2012 Michael Barton # Copyright (c) 2013 Alex Gaynor # Copyright (c) 2013 Chuck Thier # Copyright (c) 2013 David Goetz # Copyright (c) 2015 Donagh McCabe # Copyright (c) 2013 Greg Lange # Copyright (c) 2013 John Dickinson # Copyright (c) 2013 Kun Huang # Copyright (c) 2013 Richard Hawkins # Copyright (c) 2013 Samuel Merritt # Copyright (c) 2013 Shri Javadekar # Copyright (c) 2013 Tong Li # Copyright (c) 2013 ZhiQiang Fan # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import hmac import itertools import unittest from hashlib import sha1 from time import time from swift.common.middleware import tempauth, tempurl from swift.common.header_key_dict import HeaderKeyDict from swift.common.swob import Request, Response from swift.common import utils class FakeApp(object): def __init__(self, status_headers_body_iter=None): self.calls = 0 self.status_headers_body_iter = status_headers_body_iter if not self.status_headers_body_iter: self.status_headers_body_iter = iter( itertools.repeat(( '404 Not Found', { 'x-test-header-one-a': 'value1', 'x-test-header-two-a': 'value2', 'x-test-header-two-b': 'value3'}, ''))) self.request = None def __call__(self, env, start_response): self.calls += 1 self.request = Request.blank('', environ=env) if 'swift.authorize' in env: resp = env['swift.authorize'](self.request) if resp: return resp(env, start_response) status, headers, body = next(self.status_headers_body_iter) return Response(status=status, headers=headers, body=body)(env, start_response) class TestTempURL(unittest.TestCase): def setUp(self): self.app = FakeApp() self.auth = tempauth.filter_factory({'reseller_prefix': ''})(self.app) self.tempurl = tempurl.filter_factory({})(self.auth) def _make_request(self, path, environ=None, keys=(), container_keys=None, **kwargs): if environ is None: environ = {} _junk, account, _junk, _junk = utils.split_path(path, 2, 4) self._fake_cache_environ(environ, account, keys, container_keys=container_keys) req = Request.blank(path, environ=environ, **kwargs) return req def _fake_cache_environ(self, environ, account, keys, container_keys=None): """ Fake out the caching layer for get_account_info(). Injects account data into environ such that keys are the tempurl keys, if set. 
""" meta = {'swash': 'buckle'} for idx, key in enumerate(keys): meta_name = 'Temp-URL-key' + (("-%d" % (idx + 1) if idx else "")) if key: meta[meta_name] = key environ['swift.account/' + account] = { 'status': 204, 'container_count': '0', 'total_object_count': '0', 'bytes': '0', 'meta': meta} meta = {} for i, key in enumerate(container_keys or []): meta_name = 'Temp-URL-key' + (("-%d" % (i + 1) if i else "")) meta[meta_name] = key container_cache_key = 'swift.container/' + account + '/c' environ.setdefault(container_cache_key, {'meta': meta}) def test_passthrough(self): resp = self._make_request('/v1/a/c/o').get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' not in resp.body) def test_allow_options(self): self.app.status_headers_body_iter = iter([('200 Ok', {}, '')]) resp = self._make_request( '/v1/a/c/o?temp_url_sig=abcde&temp_url_expires=12345', environ={'REQUEST_METHOD': 'OPTIONS'}).get_response(self.tempurl) self.assertEqual(resp.status_int, 200) def assert_valid_sig(self, expires, path, keys, sig, environ=None): if not environ: environ = {} environ['QUERY_STRING'] = 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires) req = self._make_request(path, keys=keys, environ=environ) self.tempurl.app = FakeApp(iter([('200 Ok', (), '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['content-disposition'], 'attachment; filename="o"; ' + "filename*=UTF-8''o") self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_get_valid(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() self.assert_valid_sig(expires, path, [key], sig) def test_get_valid_key2(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key1 = 'abc123' key2 = 'def456' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig1 = hmac.new(key1, hmac_body, sha1).hexdigest() sig2 = hmac.new(key2, hmac_body, sha1).hexdigest() for sig in (sig1, sig2): self.assert_valid_sig(expires, path, [key1, key2], sig) def test_get_valid_container_keys(self): environ = {} # Add two static container keys container_keys = ['me', 'other'] meta = {} for idx, key in enumerate(container_keys): meta_name = 'Temp-URL-key' + (("-%d" % (idx + 1) if idx else "")) if key: meta[meta_name] = key environ['swift.container/a/c'] = {'meta': meta} method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key1 = 'me' key2 = 'other' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig1 = hmac.new(key1, hmac_body, sha1).hexdigest() sig2 = hmac.new(key2, hmac_body, sha1).hexdigest() account_keys = [] for sig in (sig1, sig2): self.assert_valid_sig(expires, path, account_keys, sig, environ) def test_get_valid_with_filename(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request(path, keys=[key], environ={ 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s&' 'filename=bob%%20%%22killer%%22.txt' % (sig, expires)}) self.tempurl.app = FakeApp(iter([('200 Ok', (), '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['content-disposition'], 'attachment; filename="bob %22killer%22.txt"; ' + 
"filename*=UTF-8''bob%20%22killer%22.txt") self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_head_valid(self): method = 'HEAD' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request(path, keys=[key], environ={ 'REQUEST_METHOD': 'HEAD', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % (sig, expires)}) self.tempurl.app = FakeApp(iter([('200 Ok', (), '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) def test_get_valid_with_filename_and_inline(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request(path, keys=[key], environ={ 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s&' 'filename=bob%%20%%22killer%%22.txt&inline=' % (sig, expires)}) self.tempurl.app = FakeApp(iter([('200 Ok', (), '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['content-disposition'], 'inline') self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_get_valid_with_inline(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request(path, keys=[key], environ={ 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s&' 'inline=' % (sig, expires)}) self.tempurl.app = FakeApp(iter([('200 Ok', (), '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['content-disposition'], 'inline') self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_obj_odd_chars(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/a\r\nb' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request(path, keys=[key], environ={ 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) self.tempurl.app = FakeApp(iter([('200 Ok', (), '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['content-disposition'], 'attachment; filename="a%0D%0Ab"; ' + "filename*=UTF-8''a%0D%0Ab") self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_obj_odd_chars_in_content_disposition_metadata(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request(path, keys=[key], environ={ 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) headers = [('Content-Disposition', 'attachment; filename="fu\nbar"')] self.tempurl.app = FakeApp(iter([('200 Ok', headers, '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['content-disposition'], 'attachment; filename="fu%0Abar"') self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], 
'.wsgi.tempurl') def test_obj_trailing_slash(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o/' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request(path, keys=[key], environ={ 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) self.tempurl.app = FakeApp(iter([('200 Ok', (), '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['content-disposition'], 'attachment; filename="o"; ' + "filename*=UTF-8''o") self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_filename_trailing_slash(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request(path, keys=[key], environ={ 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s&' 'filename=/i/want/this/just/as/it/is/' % (sig, expires)}) self.tempurl.app = FakeApp(iter([('200 Ok', (), '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertEqual( resp.headers['content-disposition'], 'attachment; filename="/i/want/this/just/as/it/is/"; ' + "filename*=UTF-8''/i/want/this/just/as/it/is/") self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_get_valid_but_404(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertFalse('content-disposition' in resp.headers) self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_put_not_allowed_by_get(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'REQUEST_METHOD': 'PUT', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_put_valid(self): method = 'PUT' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'REQUEST_METHOD': 'PUT', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_get_not_allowed_by_put(self): method = 'PUT' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( 
sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_missing_sig(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'QUERY_STRING': 'temp_url_expires=%s' % expires}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_missing_expires(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'QUERY_STRING': 'temp_url_sig=%s' % sig}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_bad_path(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_no_key(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_head_allowed_by_get(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'REQUEST_METHOD': 'HEAD', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_head_allowed_by_put(self): method = 'PUT' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'REQUEST_METHOD': 'HEAD', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_head_allowed_by_post(self): method = 'POST' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], 
environ={'REQUEST_METHOD': 'HEAD', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertEqual(req.environ['swift.authorize_override'], True) self.assertEqual(req.environ['REMOTE_USER'], '.wsgi.tempurl') def test_head_otherwise_not_allowed(self): method = 'PUT' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() # Deliberately fudge expires to show HEADs aren't just automatically # allowed. expires += 1 req = self._make_request( path, keys=[key], environ={'REQUEST_METHOD': 'HEAD', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) def test_post_when_forbidden_by_config(self): self.tempurl.conf['methods'].remove('POST') method = 'POST' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'REQUEST_METHOD': 'POST', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_delete_when_forbidden_by_config(self): self.tempurl.conf['methods'].remove('DELETE') method = 'DELETE' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'REQUEST_METHOD': 'DELETE', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_delete_allowed(self): method = 'DELETE' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'REQUEST_METHOD': 'DELETE', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) def test_unknown_not_allowed(self): method = 'UNKNOWN' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'REQUEST_METHOD': 'UNKNOWN', 'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_authorize_limits_scope(self): req_other_object = Request.blank("/v1/a/c/o2") req_other_container = Request.blank("/v1/a/c2/o2") req_other_account = Request.blank("/v1/a2/c2/o2") key_kwargs = { 'keys': ['account-key', 'shared-key'], 'container_keys': ['container-key', 'shared-key'], } # A request with the account key limits the pre-authed scope to the # account level. 
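# --- Illustrative sketch (editor's addition, not part of the original test
# suite): how a client would compute the account-scoped signature that the
# request built just below carries.  The signature is an HMAC-SHA1 over
# "METHOD\nexpires\npath" keyed with one of the account's Temp-URL keys,
# exactly as the surrounding tests do; the helper name make_temp_url_sig is
# hypothetical.
#
#   import hmac
#   from hashlib import sha1
#   from time import time
#
#   def make_temp_url_sig(key, method, path, expires):
#       # hmac_body mirrors the tests above: method, expiry and object path,
#       # newline-separated.
#       hmac_body = '%s\n%s\n%s' % (method, expires, path)
#       return hmac.new(key, hmac_body, sha1).hexdigest()
#
#   expires = int(time() + 300)
#   sig = make_temp_url_sig('account-key', 'GET', '/v1/a/c/o', expires)
#   url = '/v1/a/c/o?temp_url_sig=%s&temp_url_expires=%s' % (sig, expires)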
method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new('account-key', hmac_body, sha1).hexdigest() qs = '?temp_url_sig=%s&temp_url_expires=%s' % (sig, expires) # make request will setup the environ cache for us req = self._make_request(path + qs, **key_kwargs) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) # sanity check authorize = req.environ['swift.authorize'] # Requests for other objects happen if, for example, you're # downloading a large object or creating a large-object manifest. oo_resp = authorize(req_other_object) self.assertEqual(oo_resp, None) oc_resp = authorize(req_other_container) self.assertEqual(oc_resp, None) oa_resp = authorize(req_other_account) self.assertEqual(oa_resp.status_int, 401) # A request with the container key limits the pre-authed scope to # the container level; a different container in the same account is # out of scope and thus forbidden. hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new('container-key', hmac_body, sha1).hexdigest() qs = '?temp_url_sig=%s&temp_url_expires=%s' % (sig, expires) req = self._make_request(path + qs, **key_kwargs) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) # sanity check authorize = req.environ['swift.authorize'] oo_resp = authorize(req_other_object) self.assertEqual(oo_resp, None) oc_resp = authorize(req_other_container) self.assertEqual(oc_resp.status_int, 401) oa_resp = authorize(req_other_account) self.assertEqual(oa_resp.status_int, 401) # If account and container share a key (users set these, so this can # happen by accident, stupidity, *or* malice!), limit the scope to # account level. This prevents someone from shrinking the scope of # account-level tempurls by reusing one of the account's keys on a # container. 
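# --- Editor's sketch (not the middleware's actual code): the scoping rule
# that the shared-key assertions below verify, written out as plain Python.
# Account keys are considered first, so a key present in both lists always
# yields the wider account scope; names here are illustrative only.
#
#   def scope_for_matching_key(sig_key, account_keys, container_keys):
#       if sig_key in account_keys:
#           return 'account'      # wider scope wins, even for a shared key
#       if sig_key in container_keys:
#           return 'container'
#       return None               # unknown key -> request rejected
#
#   assert scope_for_matching_key(
#       'shared-key',
#       ['account-key', 'shared-key'],
#       ['container-key', 'shared-key']) == 'account'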
hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new('shared-key', hmac_body, sha1).hexdigest() qs = '?temp_url_sig=%s&temp_url_expires=%s' % (sig, expires) req = self._make_request(path + qs, **key_kwargs) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) # sanity check authorize = req.environ['swift.authorize'] oo_resp = authorize(req_other_object) self.assertEqual(oo_resp, None) oc_resp = authorize(req_other_container) self.assertEqual(oc_resp, None) oa_resp = authorize(req_other_account) self.assertEqual(oa_resp.status_int, 401) def test_changed_path_invalid(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path + '2', keys=[key], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_changed_sig_invalid(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() if sig[-1] != '0': sig = sig[:-1] + '0' else: sig = sig[:-1] + '1' req = self._make_request( path, keys=[key], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_changed_expires_invalid(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires + 1)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_different_key_invalid(self): method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key + '2'], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_disallowed_header_object_manifest(self): self.tempurl = tempurl.filter_factory({})(self.auth) expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' for method in ('PUT', 'POST'): hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, method=method, keys=[key], headers={'x-object-manifest': 'private/secret'}, environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % (sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 400) self.assertTrue('header' in resp.body) self.assertTrue('not allowed' in resp.body) self.assertTrue('X-Object-Manifest' in resp.body) def test_removed_incoming_header(self): self.tempurl = tempurl.filter_factory({ 'incoming_remove_headers': 
'x-remove-this'})(self.auth) method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], headers={'x-remove-this': 'value'}, environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertTrue('x-remove-this' not in self.app.request.headers) def test_removed_incoming_headers_match(self): self.tempurl = tempurl.filter_factory({ 'incoming_remove_headers': 'x-remove-this-*', 'incoming_allow_headers': 'x-remove-this-except-this'})(self.auth) method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], headers={'x-remove-this-one': 'value1', 'x-remove-this-except-this': 'value2'}, environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertTrue('x-remove-this-one' not in self.app.request.headers) self.assertEqual( self.app.request.headers['x-remove-this-except-this'], 'value2') def test_allow_trumps_incoming_header_conflict(self): self.tempurl = tempurl.filter_factory({ 'incoming_remove_headers': 'x-conflict-header', 'incoming_allow_headers': 'x-conflict-header'})(self.auth) method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], headers={'x-conflict-header': 'value'}, environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertTrue('x-conflict-header' in self.app.request.headers) def test_allow_trumps_incoming_header_startswith_conflict(self): self.tempurl = tempurl.filter_factory({ 'incoming_remove_headers': 'x-conflict-header-*', 'incoming_allow_headers': 'x-conflict-header-*'})(self.auth) method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], headers={'x-conflict-header-test': 'value'}, environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertTrue('x-conflict-header-test' in self.app.request.headers) def test_removed_outgoing_header(self): self.tempurl = tempurl.filter_factory({ 'outgoing_remove_headers': 'x-test-header-one-a'})(self.auth) method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertTrue('x-test-header-one-a' not in resp.headers) self.assertEqual(resp.headers['x-test-header-two-a'], 'value2') def test_removed_outgoing_headers_match(self): self.tempurl = tempurl.filter_factory({ 'outgoing_remove_headers': 'x-test-header-two-*', 'outgoing_allow_headers': 
'x-test-header-two-b'})(self.auth) method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['x-test-header-one-a'], 'value1') self.assertTrue('x-test-header-two-a' not in resp.headers) self.assertEqual(resp.headers['x-test-header-two-b'], 'value3') def test_allow_trumps_outgoing_header_conflict(self): self.tempurl = tempurl.filter_factory({ 'outgoing_remove_headers': 'x-conflict-header', 'outgoing_allow_headers': 'x-conflict-header'})(self.auth) method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], headers={}, environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) self.tempurl.app = FakeApp(iter([('200 Ok', { 'X-Conflict-Header': 'value'}, '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertTrue('x-conflict-header' in resp.headers) self.assertEqual(resp.headers['x-conflict-header'], 'value') def test_allow_trumps_outgoing_header_startswith_conflict(self): self.tempurl = tempurl.filter_factory({ 'outgoing_remove_headers': 'x-conflict-header-*', 'outgoing_allow_headers': 'x-conflict-header-*'})(self.auth) method = 'GET' expires = int(time() + 86400) path = '/v1/a/c/o' key = 'abc' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() req = self._make_request( path, keys=[key], headers={}, environ={'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( sig, expires)}) self.tempurl.app = FakeApp(iter([('200 Ok', { 'X-Conflict-Header-Test': 'value'}, '123')])) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 200) self.assertTrue('x-conflict-header-test' in resp.headers) self.assertEqual(resp.headers['x-conflict-header-test'], 'value') def test_get_account_and_container(self): self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'HEAD', 'PATH_INFO': '/v1/a/c/o'}), ('a', 'c')) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a/c/o'}), ('a', 'c')) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'PUT', 'PATH_INFO': '/v1/a/c/o'}), ('a', 'c')) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'POST', 'PATH_INFO': '/v1/a/c/o'}), ('a', 'c')) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'DELETE', 'PATH_INFO': '/v1/a/c/o'}), ('a', 'c')) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'UNKNOWN', 'PATH_INFO': '/v1/a/c/o'}), (None, None)) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a/c/'}), (None, None)) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a/c//////'}), (None, None)) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a/c///o///'}), ('a', 'c')) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a/c'}), (None, None)) 
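# --- Editor's sketch (hypothetical helper, not the middleware's own
# implementation): the path-parsing behaviour pinned down by the
# _get_account_and_container assertions around here.  Ignoring the request
# method, only paths of the form /v1/<account>/<container>/<object> with a
# non-empty object part yield an (account, container) pair.
#
#   def account_and_container(path):
#       parts = path.split('/', 4)   # '', 'v1', account, container, object
#       if len(parts) == 5 and parts[1] == 'v1' and all(parts[2:4]) \
#               and parts[4].strip('/'):
#           return parts[2], parts[3]
#       return None, None
#
#   assert account_and_container('/v1/a/c/o') == ('a', 'c')
#   assert account_and_container('/v1/a/c/') == (None, None)
#   assert account_and_container('/v1/a/c///o///') == ('a', 'c')
#   assert account_and_container('/v2/a/c/o') == (None, None)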
self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a//o'}), (None, None)) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1//c/o'}), (None, None)) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'GET', 'PATH_INFO': '//a/c/o'}), (None, None)) self.assertEqual(self.tempurl._get_account_and_container({ 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v2/a/c/o'}), (None, None)) def test_get_temp_url_info(self): s = 'f5d5051bddf5df7e27c628818738334f' e = int(time() + 86400) self.assertEqual( self.tempurl._get_temp_url_info( {'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( s, e)}), (s, e, None, None)) self.assertEqual( self.tempurl._get_temp_url_info( {'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s&' 'filename=bobisyouruncle' % (s, e)}), (s, e, 'bobisyouruncle', None)) self.assertEqual( self.tempurl._get_temp_url_info({}), (None, None, None, None)) self.assertEqual( self.tempurl._get_temp_url_info( {'QUERY_STRING': 'temp_url_expires=%s' % e}), (None, e, None, None)) self.assertEqual( self.tempurl._get_temp_url_info( {'QUERY_STRING': 'temp_url_sig=%s' % s}), (s, None, None, None)) self.assertEqual( self.tempurl._get_temp_url_info( {'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=bad' % ( s)}), (s, 0, None, None)) self.assertEqual( self.tempurl._get_temp_url_info( {'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s&' 'inline=' % (s, e)}), (s, e, None, True)) self.assertEqual( self.tempurl._get_temp_url_info( {'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s&' 'filename=bobisyouruncle&inline=' % (s, e)}), (s, e, 'bobisyouruncle', True)) e = int(time() - 1) self.assertEqual( self.tempurl._get_temp_url_info( {'QUERY_STRING': 'temp_url_sig=%s&temp_url_expires=%s' % ( s, e)}), (s, 0, None, None)) def test_get_hmacs(self): self.assertEqual( self.tempurl._get_hmacs( {'REQUEST_METHOD': 'GET', 'PATH_INFO': '/v1/a/c/o'}, 1, [('abc', 'account')]), [('026d7f7cc25256450423c7ad03fc9f5ffc1dab6d', 'account')]) self.assertEqual( self.tempurl._get_hmacs( {'REQUEST_METHOD': 'HEAD', 'PATH_INFO': '/v1/a/c/o'}, 1, [('abc', 'account')], request_method='GET'), [('026d7f7cc25256450423c7ad03fc9f5ffc1dab6d', 'account')]) def test_invalid(self): def _start_response(status, headers, exc_info=None): self.assertTrue(status, '401 Unauthorized') self.assertTrue('Temp URL invalid' in ''.join( self.tempurl._invalid({'REQUEST_METHOD': 'GET'}, _start_response))) self.assertEqual('', ''.join( self.tempurl._invalid({'REQUEST_METHOD': 'HEAD'}, _start_response))) def test_auth_scheme_value(self): # Passthrough environ = {} resp = self._make_request('/v1/a/c/o', environ=environ).get_response( self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' not in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) self.assertTrue('swift.auth_scheme' not in environ) # Rejected by TempURL environ = {'REQUEST_METHOD': 'PUT', 'QUERY_STRING': 'temp_url_sig=dummy&temp_url_expires=1234'} req = self._make_request('/v1/a/c/o', keys=['abc'], environ=environ) resp = req.get_response(self.tempurl) self.assertEqual(resp.status_int, 401) self.assertTrue('Temp URL invalid' in resp.body) self.assertTrue('Www-Authenticate' in resp.headers) def test_clean_incoming_headers(self): irh = [] iah = [] env = {'HTTP_TEST_HEADER': 'value'} tempurl.TempURL( None, {'incoming_remove_headers': irh, 'incoming_allow_headers': iah} )._clean_incoming_headers(env) 
self.assertTrue('HTTP_TEST_HEADER' in env) irh = ['test-header'] iah = [] env = {'HTTP_TEST_HEADER': 'value'} tempurl.TempURL( None, {'incoming_remove_headers': irh, 'incoming_allow_headers': iah} )._clean_incoming_headers(env) self.assertTrue('HTTP_TEST_HEADER' not in env) irh = ['test-header-*'] iah = [] env = {'HTTP_TEST_HEADER_ONE': 'value', 'HTTP_TEST_HEADER_TWO': 'value'} tempurl.TempURL( None, {'incoming_remove_headers': irh, 'incoming_allow_headers': iah} )._clean_incoming_headers(env) self.assertTrue('HTTP_TEST_HEADER_ONE' not in env) self.assertTrue('HTTP_TEST_HEADER_TWO' not in env) irh = ['test-header-*'] iah = ['test-header-two'] env = {'HTTP_TEST_HEADER_ONE': 'value', 'HTTP_TEST_HEADER_TWO': 'value'} tempurl.TempURL( None, {'incoming_remove_headers': irh, 'incoming_allow_headers': iah} )._clean_incoming_headers(env) self.assertTrue('HTTP_TEST_HEADER_ONE' not in env) self.assertTrue('HTTP_TEST_HEADER_TWO' in env) irh = ['test-header-*', 'test-other-header'] iah = ['test-header-two', 'test-header-yes-*'] env = {'HTTP_TEST_HEADER_ONE': 'value', 'HTTP_TEST_HEADER_TWO': 'value', 'HTTP_TEST_OTHER_HEADER': 'value', 'HTTP_TEST_HEADER_YES': 'value', 'HTTP_TEST_HEADER_YES_THIS': 'value'} tempurl.TempURL( None, {'incoming_remove_headers': irh, 'incoming_allow_headers': iah} )._clean_incoming_headers(env) self.assertTrue('HTTP_TEST_HEADER_ONE' not in env) self.assertTrue('HTTP_TEST_HEADER_TWO' in env) self.assertTrue('HTTP_TEST_OTHER_HEADER' not in env) self.assertTrue('HTTP_TEST_HEADER_YES' not in env) self.assertTrue('HTTP_TEST_HEADER_YES_THIS' in env) def test_clean_outgoing_headers(self): orh = [] oah = [] hdrs = {'test-header': 'value'} hdrs = HeaderKeyDict(tempurl.TempURL( None, {'outgoing_remove_headers': orh, 'outgoing_allow_headers': oah} )._clean_outgoing_headers(hdrs.items())) self.assertTrue('test-header' in hdrs) orh = ['test-header'] oah = [] hdrs = {'test-header': 'value'} hdrs = HeaderKeyDict(tempurl.TempURL( None, {'outgoing_remove_headers': orh, 'outgoing_allow_headers': oah} )._clean_outgoing_headers(hdrs.items())) self.assertTrue('test-header' not in hdrs) orh = ['test-header-*'] oah = [] hdrs = {'test-header-one': 'value', 'test-header-two': 'value'} hdrs = HeaderKeyDict(tempurl.TempURL( None, {'outgoing_remove_headers': orh, 'outgoing_allow_headers': oah} )._clean_outgoing_headers(hdrs.items())) self.assertTrue('test-header-one' not in hdrs) self.assertTrue('test-header-two' not in hdrs) orh = ['test-header-*'] oah = ['test-header-two'] hdrs = {'test-header-one': 'value', 'test-header-two': 'value'} hdrs = HeaderKeyDict(tempurl.TempURL( None, {'outgoing_remove_headers': orh, 'outgoing_allow_headers': oah} )._clean_outgoing_headers(hdrs.items())) self.assertTrue('test-header-one' not in hdrs) self.assertTrue('test-header-two' in hdrs) orh = ['test-header-*', 'test-other-header'] oah = ['test-header-two', 'test-header-yes-*'] hdrs = {'test-header-one': 'value', 'test-header-two': 'value', 'test-other-header': 'value', 'test-header-yes': 'value', 'test-header-yes-this': 'value'} hdrs = HeaderKeyDict(tempurl.TempURL( None, {'outgoing_remove_headers': orh, 'outgoing_allow_headers': oah} )._clean_outgoing_headers(hdrs.items())) self.assertTrue('test-header-one' not in hdrs) self.assertTrue('test-header-two' in hdrs) self.assertTrue('test-other-header' not in hdrs) self.assertTrue('test-header-yes' not in hdrs) self.assertTrue('test-header-yes-this' in hdrs) def test_unicode_metadata_value(self): meta = {"temp-url-key": "test", "temp-url-key-2": u"test2"} results = 
tempurl.get_tempurl_keys_from_metadata(meta) for str_value in results: self.assertTrue(isinstance(str_value, str)) class TestSwiftInfo(unittest.TestCase): def setUp(self): utils._swift_info = {} utils._swift_admin_info = {} def test_registered_defaults(self): tempurl.filter_factory({}) swift_info = utils.get_swift_info() self.assertTrue('tempurl' in swift_info) info = swift_info['tempurl'] self.assertEqual(set(info['methods']), set(('GET', 'HEAD', 'PUT', 'POST', 'DELETE'))) self.assertEqual(set(info['incoming_remove_headers']), set(('x-timestamp',))) self.assertEqual(set(info['incoming_allow_headers']), set()) self.assertEqual(set(info['outgoing_remove_headers']), set(('x-object-meta-*',))) self.assertEqual(set(info['outgoing_allow_headers']), set(('x-object-meta-public-*',))) def test_non_default_methods(self): tempurl.filter_factory({ 'methods': 'GET HEAD PUT DELETE BREW', 'incoming_remove_headers': '', 'incoming_allow_headers': 'x-timestamp x-versions-location', 'outgoing_remove_headers': 'x-*', 'outgoing_allow_headers': 'x-object-meta-* content-type', }) swift_info = utils.get_swift_info() self.assertTrue('tempurl' in swift_info) info = swift_info['tempurl'] self.assertEqual(set(info['methods']), set(('GET', 'HEAD', 'PUT', 'DELETE', 'BREW'))) self.assertEqual(set(info['incoming_remove_headers']), set()) self.assertEqual(set(info['incoming_allow_headers']), set(('x-timestamp', 'x-versions-location'))) self.assertEqual(set(info['outgoing_remove_headers']), set(('x-*', ))) self.assertEqual(set(info['outgoing_allow_headers']), set(('x-object-meta-*', 'content-type'))) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_formpost.py0000664000567000056710000021125013024044354024643 0ustar jenkinsjenkins00000000000000# Copyright (c) 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import hmac import unittest from hashlib import sha1 from time import time import six from six import BytesIO from swift.common.swob import Request, Response from swift.common.middleware import tempauth, formpost from swift.common.utils import split_path class FakeApp(object): def __init__(self, status_headers_body_iter=None, check_no_query_string=True): self.status_headers_body_iter = status_headers_body_iter if not self.status_headers_body_iter: self.status_headers_body_iter = iter([('404 Not Found', { 'x-test-header-one-a': 'value1', 'x-test-header-two-a': 'value2', 'x-test-header-two-b': 'value3'}, '')]) self.requests = [] self.check_no_query_string = check_no_query_string def __call__(self, env, start_response): try: if self.check_no_query_string and env.get('QUERY_STRING'): raise Exception('Query string %s should have been discarded!' 
% env['QUERY_STRING']) body = b'' while True: chunk = env['wsgi.input'].read() if not chunk: break body += chunk env['wsgi.input'] = BytesIO(body) self.requests.append(Request.blank('', environ=env)) if env.get('swift.authorize_override') and \ env.get('REMOTE_USER') != '.wsgi.pre_authed': raise Exception( 'Invalid REMOTE_USER %r with swift.authorize_override' % ( env.get('REMOTE_USER'),)) if 'swift.authorize' in env: resp = env['swift.authorize'](self.requests[-1]) if resp: return resp(env, start_response) status, headers, body = next(self.status_headers_body_iter) return Response(status=status, headers=headers, body=body)(env, start_response) except EOFError: start_response('499 Client Disconnect', [('Content-Type', 'text/plain')]) return ['Client Disconnect\n'] class TestCappedFileLikeObject(unittest.TestCase): def test_whole(self): self.assertEqual( formpost._CappedFileLikeObject(BytesIO(b'abc'), 10).read(), b'abc') def test_exceeded(self): exc = None try: formpost._CappedFileLikeObject(BytesIO(b'abc'), 2).read() except EOFError as err: exc = err self.assertEqual(str(exc), 'max_file_size exceeded') def test_whole_readline(self): fp = formpost._CappedFileLikeObject(BytesIO(b'abc\ndef'), 10) self.assertEqual(fp.readline(), b'abc\n') self.assertEqual(fp.readline(), b'def') self.assertEqual(fp.readline(), b'') def test_exceeded_readline(self): fp = formpost._CappedFileLikeObject(BytesIO(b'abc\ndef'), 5) self.assertEqual(fp.readline(), b'abc\n') exc = None try: self.assertEqual(fp.readline(), b'def') except EOFError as err: exc = err self.assertEqual(str(exc), 'max_file_size exceeded') def test_read_sized(self): fp = formpost._CappedFileLikeObject(BytesIO(b'abcdefg'), 10) self.assertEqual(fp.read(2), b'ab') self.assertEqual(fp.read(2), b'cd') self.assertEqual(fp.read(2), b'ef') self.assertEqual(fp.read(2), b'g') self.assertEqual(fp.read(2), b'') class TestFormPost(unittest.TestCase): def setUp(self): self.app = FakeApp() self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) def _make_request(self, path, tempurl_keys=(), **kwargs): req = Request.blank(path, **kwargs) # Fake out the caching layer so that get_account_info() finds its # data. Include something that isn't tempurl keys to prove we skip it. meta = {'user-job-title': 'Personal Trainer', 'user-real-name': 'Jim Shortz'} for idx, key in enumerate(tempurl_keys): meta_name = 'temp-url-key' + (("-%d" % (idx + 1) if idx else "")) if key: meta[meta_name] = key _junk, account, _junk, _junk = split_path(path, 2, 4) req.environ['swift.account/' + account] = self._fake_cache_env( account, tempurl_keys) return req def _fake_cache_env(self, account, tempurl_keys=()): # Fake out the caching layer so that get_account_info() finds its # data. Include something that isn't tempurl keys to prove we skip it. 
meta = {'user-job-title': 'Personal Trainer', 'user-real-name': 'Jim Shortz'} for idx, key in enumerate(tempurl_keys): meta_name = 'temp-url-key' + ("-%d" % (idx + 1) if idx else "") if key: meta[meta_name] = key return {'status': 204, 'container_count': '0', 'total_object_count': '0', 'bytes': '0', 'meta': meta} def _make_sig_env_body(self, path, redirect, max_file_size, max_file_count, expires, key, user_agent=True): sig = hmac.new( key, '%s\n%s\n%s\n%s\n%s' % ( path, redirect, max_file_size, max_file_count, expires), sha1).hexdigest() body = [ '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="redirect"', '', redirect, '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="max_file_size"', '', str(max_file_size), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="max_file_count"', '', str(max_file_count), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="expires"', '', str(expires), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="signature"', '', sig, '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="file1"; ' 'filename="testfile1.txt"', 'Content-Type: text/plain', '', 'Test File\nOne\n', '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="file2"; ' 'filename="testfile2.txt"', 'Content-Type: text/plain', '', 'Test\nFile\nTwo\n', '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="file3"; filename=""', 'Content-Type: application/octet-stream', '', '', '------WebKitFormBoundaryNcxTqxSlX7t4TDkR--', '', ] if six.PY3: body = [line.encode('utf-8') for line in body] wsgi_errors = six.StringIO() env = { 'CONTENT_TYPE': 'multipart/form-data; ' 'boundary=----WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_ACCEPT_LANGUAGE': 'en-us', 'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml;' 'q=0.9,*/*;q=0.8', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_HOST': 'ubuntu:8080', 'HTTP_ORIGIN': 'file://', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X ' '10_7_2) AppleWebKit/534.52.7 (KHTML, like Gecko) ' 'Version/5.1.2 Safari/534.52.7', 'PATH_INFO': path, 'REMOTE_ADDR': '172.16.83.1', 'REQUEST_METHOD': 'POST', 'SCRIPT_NAME': '', 'SERVER_NAME': '172.16.83.128', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'wsgi.errors': wsgi_errors, 'wsgi.multiprocess': False, 'wsgi.multithread': True, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0), } if user_agent is False: del env['HTTP_USER_AGENT'] return sig, env, body def test_passthrough(self): for method in ('HEAD', 'GET', 'PUT', 'POST', 'DELETE'): resp = self._make_request( '/v1/a/c/o', environ={'REQUEST_METHOD': method}).get_response(self.formpost) self.assertEqual(resp.status_int, 401) self.assertTrue('FormPost' not in resp.body) def test_auth_scheme(self): # FormPost rejects key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, int(time() - 10), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = 
''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '401 Unauthorized') authenticate_v = None for h, v in headers: if h.lower() == 'www-authenticate': authenticate_v = v self.assertTrue('FormPost: Form Expired' in body) self.assertEqual('Swift realm="unknown"', authenticate_v) def test_safari(self): key = 'abc' path = '/v1/AUTH_test/container' redirect = 'http://brim.net' max_file_size = 1024 max_file_count = 10 expires = int(time() + 86400) sig = hmac.new( key, '%s\n%s\n%s\n%s\n%s' % ( path, redirect, max_file_size, max_file_count, expires), sha1).hexdigest() wsgi_input = '\r\n'.join([ '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="redirect"', '', redirect, '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="max_file_size"', '', str(max_file_size), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="max_file_count"', '', str(max_file_count), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="expires"', '', str(expires), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="signature"', '', sig, '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="file1"; ' 'filename="testfile1.txt"', 'Content-Type: text/plain', '', 'Test File\nOne\n', '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="file2"; ' 'filename="testfile2.txt"', 'Content-Type: text/plain', '', 'Test\nFile\nTwo\n', '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="file3"; filename=""', 'Content-Type: application/octet-stream', '', '', '------WebKitFormBoundaryNcxTqxSlX7t4TDkR--', '', ]) if six.PY3: wsgi_input = wsgi_input.encode('utf-8') wsgi_input = BytesIO(wsgi_input) wsgi_errors = six.StringIO() env = { 'CONTENT_TYPE': 'multipart/form-data; ' 'boundary=----WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_ACCEPT_LANGUAGE': 'en-us', 'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml;' 'q=0.9,*/*;q=0.8', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_HOST': 'ubuntu:8080', 'HTTP_ORIGIN': 'file://', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X ' '10_7_2) AppleWebKit/534.52.7 (KHTML, like Gecko) ' 'Version/5.1.2 Safari/534.52.7', 'PATH_INFO': path, 'REMOTE_ADDR': '172.16.83.1', 'REQUEST_METHOD': 'POST', 'SCRIPT_NAME': '', 'SERVER_NAME': '172.16.83.128', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'swift.account/AUTH_test': self._fake_cache_env( 'AUTH_test', [key]), 'swift.container/AUTH_test/container': {'meta': {}}, 'wsgi.errors': wsgi_errors, 'wsgi.input': wsgi_input, 'wsgi.multiprocess': False, 'wsgi.multithread': True, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0), } self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '303 See Other') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, 'http://brim.net?status=201&message=') self.assertEqual(exc_info, None) 
self.assertTrue('http://brim.net?status=201&message=' in body) self.assertEqual(len(self.app.requests), 2) self.assertEqual(self.app.requests[0].body, 'Test File\nOne\n') self.assertEqual(self.app.requests[1].body, 'Test\nFile\nTwo\n') def test_firefox(self): key = 'abc' path = '/v1/AUTH_test/container' redirect = 'http://brim.net' max_file_size = 1024 max_file_count = 10 expires = int(time() + 86400) sig = hmac.new( key, '%s\n%s\n%s\n%s\n%s' % ( path, redirect, max_file_size, max_file_count, expires), sha1).hexdigest() wsgi_input = '\r\n'.join([ '-----------------------------168072824752491622650073', 'Content-Disposition: form-data; name="redirect"', '', redirect, '-----------------------------168072824752491622650073', 'Content-Disposition: form-data; name="max_file_size"', '', str(max_file_size), '-----------------------------168072824752491622650073', 'Content-Disposition: form-data; name="max_file_count"', '', str(max_file_count), '-----------------------------168072824752491622650073', 'Content-Disposition: form-data; name="expires"', '', str(expires), '-----------------------------168072824752491622650073', 'Content-Disposition: form-data; name="signature"', '', sig, '-----------------------------168072824752491622650073', 'Content-Disposition: form-data; name="file1"; ' 'filename="testfile1.txt"', 'Content-Type: text/plain', '', 'Test File\nOne\n', '-----------------------------168072824752491622650073', 'Content-Disposition: form-data; name="file2"; ' 'filename="testfile2.txt"', 'Content-Type: text/plain', '', 'Test\nFile\nTwo\n', '-----------------------------168072824752491622650073', 'Content-Disposition: form-data; name="file3"; filename=""', 'Content-Type: application/octet-stream', '', '', '-----------------------------168072824752491622650073--', '' ]) if six.PY3: wsgi_input = wsgi_input.encode('utf-8') wsgi_input = BytesIO(wsgi_input) wsgi_errors = six.StringIO() env = { 'CONTENT_TYPE': 'multipart/form-data; ' 'boundary=---------------------------168072824752491622650073', 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_ACCEPT_LANGUAGE': 'en-us,en;q=0.5', 'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml;' 'q=0.9,*/*;q=0.8', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_HOST': 'ubuntu:8080', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; ' 'rv:8.0.1) Gecko/20100101 Firefox/8.0.1', 'PATH_INFO': '/v1/AUTH_test/container', 'REMOTE_ADDR': '172.16.83.1', 'REQUEST_METHOD': 'POST', 'SCRIPT_NAME': '', 'SERVER_NAME': '172.16.83.128', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'swift.account/AUTH_test': self._fake_cache_env( 'AUTH_test', [key]), 'swift.container/AUTH_test/container': {'meta': {}}, 'wsgi.errors': wsgi_errors, 'wsgi.input': wsgi_input, 'wsgi.multiprocess': False, 'wsgi.multithread': True, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0), } self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '303 See Other') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, 'http://brim.net?status=201&message=') 
self.assertEqual(exc_info, None) self.assertTrue('http://brim.net?status=201&message=' in body) self.assertEqual(len(self.app.requests), 2) self.assertEqual(self.app.requests[0].body, 'Test File\nOne\n') self.assertEqual(self.app.requests[1].body, 'Test\nFile\nTwo\n') def test_chrome(self): key = 'abc' path = '/v1/AUTH_test/container' redirect = 'http://brim.net' max_file_size = 1024 max_file_count = 10 expires = int(time() + 86400) sig = hmac.new( key, '%s\n%s\n%s\n%s\n%s' % ( path, redirect, max_file_size, max_file_count, expires), sha1).hexdigest() wsgi_input = '\r\n'.join([ '------WebKitFormBoundaryq3CFxUjfsDMu8XsA', 'Content-Disposition: form-data; name="redirect"', '', redirect, '------WebKitFormBoundaryq3CFxUjfsDMu8XsA', 'Content-Disposition: form-data; name="max_file_size"', '', str(max_file_size), '------WebKitFormBoundaryq3CFxUjfsDMu8XsA', 'Content-Disposition: form-data; name="max_file_count"', '', str(max_file_count), '------WebKitFormBoundaryq3CFxUjfsDMu8XsA', 'Content-Disposition: form-data; name="expires"', '', str(expires), '------WebKitFormBoundaryq3CFxUjfsDMu8XsA', 'Content-Disposition: form-data; name="signature"', '', sig, '------WebKitFormBoundaryq3CFxUjfsDMu8XsA', 'Content-Disposition: form-data; name="file1"; ' 'filename="testfile1.txt"', 'Content-Type: text/plain', '', 'Test File\nOne\n', '------WebKitFormBoundaryq3CFxUjfsDMu8XsA', 'Content-Disposition: form-data; name="file2"; ' 'filename="testfile2.txt"', 'Content-Type: text/plain', '', 'Test\nFile\nTwo\n', '------WebKitFormBoundaryq3CFxUjfsDMu8XsA', 'Content-Disposition: form-data; name="file3"; filename=""', 'Content-Type: application/octet-stream', '', '', '------WebKitFormBoundaryq3CFxUjfsDMu8XsA--', '' ]) if six.PY3: wsgi_input = wsgi_input.encode('utf-8') wsgi_input = BytesIO(wsgi_input) wsgi_errors = six.StringIO() env = { 'CONTENT_TYPE': 'multipart/form-data; ' 'boundary=----WebKitFormBoundaryq3CFxUjfsDMu8XsA', 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3', 'HTTP_ACCEPT_ENCODING': 'gzip,deflate,sdch', 'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8', 'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml;' 'q=0.9,*/*;q=0.8', 'HTTP_CACHE_CONTROL': 'max-age=0', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_HOST': 'ubuntu:8080', 'HTTP_ORIGIN': 'null', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X ' '10_7_2) AppleWebKit/535.7 (KHTML, like Gecko) ' 'Chrome/16.0.912.63 Safari/535.7', 'PATH_INFO': '/v1/AUTH_test/container', 'REMOTE_ADDR': '172.16.83.1', 'REQUEST_METHOD': 'POST', 'SCRIPT_NAME': '', 'SERVER_NAME': '172.16.83.128', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'swift.account/AUTH_test': self._fake_cache_env( 'AUTH_test', [key]), 'swift.container/AUTH_test/container': {'meta': {}}, 'wsgi.errors': wsgi_errors, 'wsgi.input': wsgi_input, 'wsgi.multiprocess': False, 'wsgi.multithread': True, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0), } self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '303 See Other') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, 'http://brim.net?status=201&message=') 
self.assertEqual(exc_info, None) self.assertTrue('http://brim.net?status=201&message=' in body) self.assertEqual(len(self.app.requests), 2) self.assertEqual(self.app.requests[0].body, 'Test File\nOne\n') self.assertEqual(self.app.requests[1].body, 'Test\nFile\nTwo\n') def test_explorer(self): key = 'abc' path = '/v1/AUTH_test/container' redirect = 'http://brim.net' max_file_size = 1024 max_file_count = 10 expires = int(time() + 86400) sig = hmac.new( key, '%s\n%s\n%s\n%s\n%s' % ( path, redirect, max_file_size, max_file_count, expires), sha1).hexdigest() wsgi_input = '\r\n'.join([ '-----------------------------7db20d93017c', 'Content-Disposition: form-data; name="redirect"', '', redirect, '-----------------------------7db20d93017c', 'Content-Disposition: form-data; name="max_file_size"', '', str(max_file_size), '-----------------------------7db20d93017c', 'Content-Disposition: form-data; name="max_file_count"', '', str(max_file_count), '-----------------------------7db20d93017c', 'Content-Disposition: form-data; name="expires"', '', str(expires), '-----------------------------7db20d93017c', 'Content-Disposition: form-data; name="signature"', '', sig, '-----------------------------7db20d93017c', 'Content-Disposition: form-data; name="file1"; ' 'filename="C:\\testfile1.txt"', 'Content-Type: text/plain', '', 'Test File\nOne\n', '-----------------------------7db20d93017c', 'Content-Disposition: form-data; name="file2"; ' 'filename="C:\\testfile2.txt"', 'Content-Type: text/plain', '', 'Test\nFile\nTwo\n', '-----------------------------7db20d93017c', 'Content-Disposition: form-data; name="file3"; filename=""', 'Content-Type: application/octet-stream', '', '', '-----------------------------7db20d93017c--', '' ]) if six.PY3: wsgi_input = wsgi_input.encode('utf-8') wsgi_input = BytesIO(wsgi_input) wsgi_errors = six.StringIO() env = { 'CONTENT_TYPE': 'multipart/form-data; ' 'boundary=---------------------------7db20d93017c', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_ACCEPT_LANGUAGE': 'en-US', 'HTTP_ACCEPT': 'text/html, application/xhtml+xml, */*', 'HTTP_CACHE_CONTROL': 'no-cache', 'HTTP_CONNECTION': 'Keep-Alive', 'HTTP_HOST': '172.16.83.128:8080', 'HTTP_USER_AGENT': 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT ' '6.1; WOW64; Trident/5.0)', 'PATH_INFO': '/v1/AUTH_test/container', 'REMOTE_ADDR': '172.16.83.129', 'REQUEST_METHOD': 'POST', 'SCRIPT_NAME': '', 'SERVER_NAME': '172.16.83.128', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'swift.account/AUTH_test': self._fake_cache_env( 'AUTH_test', [key]), 'swift.container/AUTH_test/container': {'meta': {}}, 'wsgi.errors': wsgi_errors, 'wsgi.input': wsgi_input, 'wsgi.multiprocess': False, 'wsgi.multithread': True, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0), } self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '303 See Other') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, 'http://brim.net?status=201&message=') self.assertEqual(exc_info, None) self.assertTrue('http://brim.net?status=201&message=' in body) self.assertEqual(len(self.app.requests), 2) 
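        # Descriptive note: only two subrequests are expected here. The
        # bodies checked below correspond to file1 and file2; the third form
        # part (empty filename and empty content) never produces an upload.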
self.assertEqual(self.app.requests[0].body, 'Test File\nOne\n') self.assertEqual(self.app.requests[1].body, 'Test\nFile\nTwo\n') def test_messed_up_start(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', 'http://brim.net', 5, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'XX' + b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) def log_assert_int_status(env, response_status_int): self.assertTrue(isinstance(response_status_int, int)) self.formpost._log_request = log_assert_int_status status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '400 Bad Request') self.assertEqual(exc_info, None) self.assertTrue('FormPost: invalid starting boundary' in body) self.assertEqual(len(self.app.requests), 0) def test_max_file_size_exceeded(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', 'http://brim.net', 5, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '400 Bad Request') self.assertEqual(exc_info, None) self.assertTrue('FormPost: max_file_size exceeded' in body) self.assertEqual(len(self.app.requests), 0) def test_max_file_count_exceeded(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', 'http://brim.net', 1024, 1, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '303 See Other') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual( location, 'http://brim.net?status=400&message=max%20file%20count%20exceeded') self.assertEqual(exc_info, None) self.assertTrue( 'http://brim.net?status=400&message=max%20file%20count%20exceeded' in body) self.assertEqual(len(self.app.requests), 1) self.assertEqual(self.app.requests[0].body, 'Test File\nOne\n') def test_subrequest_does_not_pass_query(self): key = 'abc' sig, env, body = self._make_sig_env_body( 
            '/v1/AUTH_test/container', '', 1024, 10, int(time() + 86400),
            key)
        env['QUERY_STRING'] = 'this=should&not=get&passed'
        env['wsgi.input'] = BytesIO(b'\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(
            iter([('201 Created', {}, ''),
                  ('201 Created', {}, '')]),
            check_no_query_string=True)
        self.auth = tempauth.filter_factory({})(self.app)
        self.formpost = formpost.filter_factory({})(self.auth)
        status = [None]
        headers = [None]
        exc_info = [None]

        def start_response(s, h, e=None):
            status[0] = s
            headers[0] = h
            exc_info[0] = e

        body = ''.join(self.formpost(env, start_response))
        status = status[0]
        headers = headers[0]
        exc_info = exc_info[0]
        # Make sure we 201 Created, which means we made the final subrequest
        # (and FakeApp verifies that no QUERY_STRING got passed).
        self.assertEqual(status, '201 Created')
        self.assertEqual(exc_info, None)
        self.assertTrue('201 Created' in body)
        self.assertEqual(len(self.app.requests), 2)

    def test_subrequest_fails(self):
        key = 'abc'
        sig, env, body = self._make_sig_env_body(
            '/v1/AUTH_test/container', 'http://brim.net', 1024, 10,
            int(time() + 86400), key)
        env['wsgi.input'] = BytesIO(b'\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('404 Not Found', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)
        self.formpost = formpost.filter_factory({})(self.auth)
        status = [None]
        headers = [None]
        exc_info = [None]

        def start_response(s, h, e=None):
            status[0] = s
            headers[0] = h
            exc_info[0] = e

        body = ''.join(self.formpost(env, start_response))
        status = status[0]
        headers = headers[0]
        exc_info = exc_info[0]
        self.assertEqual(status, '303 See Other')
        location = None
        for h, v in headers:
            if h.lower() == 'location':
                location = v
        self.assertEqual(location, 'http://brim.net?status=404&message=')
        self.assertEqual(exc_info, None)
        self.assertTrue('http://brim.net?status=404&message=' in body)
        self.assertEqual(len(self.app.requests), 1)

    def test_truncated_attr_value(self):
        key = 'abc'
        redirect = 'a' * formpost.MAX_VALUE_LENGTH
        max_file_size = 1024
        max_file_count = 10
        expires = int(time() + 86400)
        sig, env, body = self._make_sig_env_body(
            '/v1/AUTH_test/container', redirect, max_file_size,
            max_file_count, expires, key)
        # Tack on an extra char to redirect, but shouldn't matter since it
        # should get truncated off on read.
redirect += 'b' wsgi_input = '\r\n'.join([ '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="redirect"', '', redirect, '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="max_file_size"', '', str(max_file_size), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="max_file_count"', '', str(max_file_count), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="expires"', '', str(expires), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="signature"', '', sig, '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="file1"; ' 'filename="testfile1.txt"', 'Content-Type: text/plain', '', 'Test File\nOne\n', '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="file2"; ' 'filename="testfile2.txt"', 'Content-Type: text/plain', '', 'Test\nFile\nTwo\n', '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="file3"; filename=""', 'Content-Type: application/octet-stream', '', '', '------WebKitFormBoundaryNcxTqxSlX7t4TDkR--', '', ]) if six.PY3: wsgi_input = wsgi_input.encode('utf-8') env['wsgi.input'] = BytesIO(wsgi_input) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '303 See Other') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual( location, ('a' * formpost.MAX_VALUE_LENGTH) + '?status=201&message=') self.assertEqual(exc_info, None) self.assertTrue( ('a' * formpost.MAX_VALUE_LENGTH) + '?status=201&message=' in body) self.assertEqual(len(self.app.requests), 2) self.assertEqual(self.app.requests[0].body, 'Test File\nOne\n') self.assertEqual(self.app.requests[1].body, 'Test\nFile\nTwo\n') def test_no_file_to_process(self): key = 'abc' redirect = 'http://brim.net' max_file_size = 1024 max_file_count = 10 expires = int(time() + 86400) sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', redirect, max_file_size, max_file_count, expires, key) wsgi_input = '\r\n'.join([ '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="redirect"', '', redirect, '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="max_file_size"', '', str(max_file_size), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="max_file_count"', '', str(max_file_count), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="expires"', '', str(expires), '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="signature"', '', sig, '------WebKitFormBoundaryNcxTqxSlX7t4TDkR--', '', ]) if six.PY3: wsgi_input = wsgi_input.encode('utf-8') env['wsgi.input'] = BytesIO(wsgi_input) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) 
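        # Descriptive note: the form built above carries only the control
        # fields and no file parts, so FormPost is expected to redirect with
        # a status=400 "no files to process" message and the two queued
        # 201 responses should go unused (no subrequests are made).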
self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '303 See Other') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual( location, 'http://brim.net?status=400&message=no%20files%20to%20process') self.assertEqual(exc_info, None) self.assertTrue( 'http://brim.net?status=400&message=no%20files%20to%20process' in body) self.assertEqual(len(self.app.requests), 0) def test_formpost_without_useragent(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', 'http://redirect', 1024, 10, int(time() + 86400), key, user_agent=False) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) def start_response(s, h, e=None): pass body = ''.join(self.formpost(env, start_response)) self.assertTrue('User-Agent' in self.app.requests[0].headers) self.assertEqual(self.app.requests[0].headers['User-Agent'], 'FormPost') def test_formpost_with_origin(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', 'http://redirect', 1024, 10, int(time() + 86400), key, user_agent=False) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} env['HTTP_ORIGIN'] = 'http://localhost:5000' self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {'Access-Control-Allow-Origin': 'http://localhost:5000'}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) headers = {} def start_response(s, h, e=None): for k, v in h: headers[k] = v pass body = ''.join(self.formpost(env, start_response)) self.assertEqual(headers['Access-Control-Allow-Origin'], 'http://localhost:5000') def test_formpost_with_multiple_keys(self): key = 'ernie' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', 'http://redirect', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) # Stick it in X-Account-Meta-Temp-URL-Key-2 and make sure we get it env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', ['bert', key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h body = ''.join(self.formpost(env, start_response)) self.assertEqual('303 See Other', status[0]) self.assertEqual( 'http://redirect?status=201&message=', dict(headers[0]).get('Location')) def test_formpost_with_multiple_container_keys(self): first_key = 'ernie' second_key = 'bert' keys = [first_key, second_key] meta = {} for idx, key in enumerate(keys): meta_name = 'temp-url-key' + ("-%d" % (idx + 1) if idx else "") if key: meta[meta_name] = key for key 
in keys: sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', 'http://redirect', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env('AUTH_test') # Stick it in X-Container-Meta-Temp-URL-Key-2 and ensure we get it env['swift.container/AUTH_test/container'] = {'meta': meta} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h body = ''.join(self.formpost(env, start_response)) self.assertEqual('303 See Other', status[0]) self.assertEqual( 'http://redirect?status=201&message=', dict(headers[0]).get('Location')) def test_redirect(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', 'http://redirect', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '303 See Other') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, 'http://redirect?status=201&message=') self.assertEqual(exc_info, None) self.assertTrue(location in body) self.assertEqual(len(self.app.requests), 2) self.assertEqual(self.app.requests[0].body, 'Test File\nOne\n') self.assertEqual(self.app.requests[1].body, 'Test\nFile\nTwo\n') def test_redirect_with_query(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', 'http://redirect?one=two', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '303 See Other') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, 'http://redirect?one=two&status=201&message=') self.assertEqual(exc_info, None) self.assertTrue(location in body) self.assertEqual(len(self.app.requests), 2) self.assertEqual(self.app.requests[0].body, 'Test File\nOne\n') self.assertEqual(self.app.requests[1].body, 'Test\nFile\nTwo\n') def test_no_redirect(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) 
env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '201 Created') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('201 Created' in body) self.assertEqual(len(self.app.requests), 2) self.assertEqual(self.app.requests[0].body, 'Test File\nOne\n') self.assertEqual(self.app.requests[1].body, 'Test\nFile\nTwo\n') def test_no_redirect_expired(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, int(time() - 10), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '401 Unauthorized') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('FormPost: Form Expired' in body) def test_no_redirect_invalid_sig(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) # Change key to invalidate sig env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key + ' is bogus now']) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '401 Unauthorized') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('FormPost: Invalid Signature' in body) def test_no_redirect_with_error(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'XX' + b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] 
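        # Descriptive note: the b'XX' prefix prepended to wsgi.input above
        # corrupts the first multipart boundary, so FormPost is expected to
        # reject the request with 400 "invalid starting boundary" before any
        # subrequest reaches the proxied app.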
self.assertEqual(status, '400 Bad Request') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('FormPost: invalid starting boundary' in body) def test_no_v1(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v2/AUTH_test/container', '', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '401 Unauthorized') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('FormPost: Invalid Signature' in body) def test_empty_v1(self): key = 'abc' sig, env, body = self._make_sig_env_body( '//AUTH_test/container', '', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '401 Unauthorized') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('FormPost: Invalid Signature' in body) def test_empty_account(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1//container', '', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '401 Unauthorized') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('FormPost: Invalid Signature' in body) def test_wrong_account(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_tst/container', '', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([ ('200 Ok', {'x-account-meta-temp-url-key': 'def'}, ''), ('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = 
formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '401 Unauthorized') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('FormPost: Invalid Signature' in body) def test_no_container(self): key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test', '', 1024, 10, int(time() + 86400), key) env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '401 Unauthorized') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('FormPost: Invalid Signature' in body) def test_completely_non_int_expires(self): key = 'abc' expires = int(time() + 86400) sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, expires, key) for i, v in enumerate(body): if v == str(expires): body[i] = 'badvalue' break env['wsgi.input'] = BytesIO(b'\r\n'.join(body)) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '400 Bad Request') location = None for h, v in headers: if h.lower() == 'location': location = v self.assertEqual(location, None) self.assertEqual(exc_info, None) self.assertTrue('FormPost: expired not an integer' in body) def test_x_delete_at(self): delete_at = int(time() + 100) x_delete_body_part = [ '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="x_delete_at"', '', str(delete_at), ] key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, int(time() + 86400), key) wsgi_input = b'\r\n'.join(x_delete_body_part + body) env['wsgi.input'] = BytesIO(wsgi_input) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '201 Created') self.assertTrue('201 
Created' in body) self.assertEqual(len(self.app.requests), 2) self.assertTrue("X-Delete-At" in self.app.requests[0].headers) self.assertTrue("X-Delete-At" in self.app.requests[1].headers) self.assertEqual(delete_at, self.app.requests[0].headers["X-Delete-At"]) self.assertEqual(delete_at, self.app.requests[1].headers["X-Delete-At"]) def test_x_delete_at_not_int(self): delete_at = "2014-07-16" x_delete_body_part = [ '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="x_delete_at"', '', str(delete_at), ] key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, int(time() + 86400), key) wsgi_input = b'\r\n'.join(x_delete_body_part + body) env['wsgi.input'] = BytesIO(wsgi_input) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '400 Bad Request') self.assertTrue('FormPost: x_delete_at not an integer' in body) def test_x_delete_after(self): delete_after = 100 x_delete_body_part = [ '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="x_delete_after"', '', str(delete_after), ] key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, int(time() + 86400), key) wsgi_input = b'\r\n'.join(x_delete_body_part + body) env['wsgi.input'] = BytesIO(wsgi_input) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) env['swift.container/AUTH_test/container'] = {'meta': {}} self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] = e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '201 Created') self.assertTrue('201 Created' in body) self.assertEqual(len(self.app.requests), 2) self.assertTrue("X-Delete-After" in self.app.requests[0].headers) self.assertTrue("X-Delete-After" in self.app.requests[1].headers) self.assertEqual(delete_after, self.app.requests[0].headers["X-Delete-After"]) self.assertEqual(delete_after, self.app.requests[1].headers["X-Delete-After"]) def test_x_delete_after_not_int(self): delete_after = "2 days" x_delete_body_part = [ '------WebKitFormBoundaryNcxTqxSlX7t4TDkR', 'Content-Disposition: form-data; name="x_delete_after"', '', str(delete_after), ] key = 'abc' sig, env, body = self._make_sig_env_body( '/v1/AUTH_test/container', '', 1024, 10, int(time() + 86400), key) wsgi_input = b'\r\n'.join(x_delete_body_part + body) env['wsgi.input'] = BytesIO(wsgi_input) env['swift.account/AUTH_test'] = self._fake_cache_env( 'AUTH_test', [key]) self.app = FakeApp(iter([('201 Created', {}, ''), ('201 Created', {}, '')])) self.auth = tempauth.filter_factory({})(self.app) self.formpost = formpost.filter_factory({})(self.auth) status = [None] headers = [None] exc_info = [None] def start_response(s, h, e=None): status[0] = s headers[0] = h exc_info[0] 
= e body = ''.join(self.formpost(env, start_response)) status = status[0] headers = headers[0] exc_info = exc_info[0] self.assertEqual(status, '400 Bad Request') self.assertTrue('FormPost: x_delete_after not an integer' in body) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_keystoneauth.py0000664000567000056710000021550313024044354025522 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from swift.common.middleware import keystoneauth from swift.common.swob import Request, Response from swift.common.http import HTTP_FORBIDDEN from swift.common.utils import split_path from swift.proxy.controllers.base import _get_cache_key from test.unit import FakeLogger UNKNOWN_ID = keystoneauth.UNKNOWN_ID def _fake_token_info(version='2'): if version == '2': return {'access': 'fake_value'} if version == '3': return {'token': 'fake_value'} def operator_roles(test_auth): # Return copy -- not a reference return list(test_auth.account_rules[test_auth.reseller_prefixes[0]].get( 'operator_roles')) def get_account_for_tenant(test_auth, tenant_id): """Convenience function reduces unit test churn""" return '%s%s' % (test_auth.reseller_prefixes[0], tenant_id) def get_identity_headers(status='Confirmed', tenant_id='1', tenant_name='acct', project_domain_name='domA', project_domain_id='99', user_name='usr', user_id='42', user_domain_name='domA', user_domain_id='99', role='admin', service_role=None): if role is None: role = [] if isinstance(role, list): role = ','.join(role) res = dict(X_IDENTITY_STATUS=status, X_TENANT_ID=tenant_id, X_TENANT_NAME=tenant_name, X_PROJECT_ID=tenant_id, X_PROJECT_NAME=tenant_name, X_PROJECT_DOMAIN_ID=project_domain_id, X_PROJECT_DOMAIN_NAME=project_domain_name, X_ROLES=role, X_USER_NAME=user_name, X_USER_ID=user_id, X_USER_DOMAIN_NAME=user_domain_name, X_USER_DOMAIN_ID=user_domain_id) if service_role: res.update(X_SERVICE_ROLES=service_role) return res class FakeApp(object): def __init__(self, status_headers_body_iter=None): self.calls = 0 self.call_contexts = [] self.status_headers_body_iter = status_headers_body_iter if not self.status_headers_body_iter: self.status_headers_body_iter = iter([('404 Not Found', {}, '')]) def __call__(self, env, start_response): self.calls += 1 self.request = Request.blank('', environ=env) if 'swift.authorize' in env: resp = env['swift.authorize'](self.request) if resp: return resp(env, start_response) context = {'method': self.request.method, 'headers': self.request.headers} self.call_contexts.append(context) status, headers, body = next(self.status_headers_body_iter) return Response(status=status, headers=headers, body=body)(env, start_response) class SwiftAuth(unittest.TestCase): def setUp(self): self.test_auth = keystoneauth.filter_factory({})(FakeApp()) self.test_auth.logger = FakeLogger() def _make_request(self, path=None, headers=None, **kwargs): if not path: path = '/v1/%s/c/o' % get_account_for_tenant(self.test_auth, 'foo') 
return Request.blank(path, headers=headers, **kwargs) def _get_successful_middleware(self): response_iter = iter([('200 OK', {}, '')]) return keystoneauth.filter_factory({})(FakeApp(response_iter)) def test_invalid_request_authorized(self): role = self.test_auth.reseller_admin_role headers = get_identity_headers(role=role) req = self._make_request('/', headers=headers) resp = req.get_response(self._get_successful_middleware()) self.assertEqual(resp.status_int, 404) def test_invalid_request_non_authorized(self): req = self._make_request('/') resp = req.get_response(self._get_successful_middleware()) self.assertEqual(resp.status_int, 404) def test_confirmed_identity_is_authorized(self): role = self.test_auth.reseller_admin_role headers = get_identity_headers(role=role) req = self._make_request('/v1/AUTH_acct/c', headers) resp = req.get_response(self._get_successful_middleware()) self.assertEqual(resp.status_int, 200) def test_detect_reseller_request(self): role = self.test_auth.reseller_admin_role headers = get_identity_headers(role=role) req = self._make_request('/v1/AUTH_acct/c', headers) req.get_response(self._get_successful_middleware()) self.assertTrue(req.environ.get('reseller_request')) def test_confirmed_identity_is_not_authorized(self): headers = get_identity_headers() req = self._make_request('/v1/AUTH_acct/c', headers) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 403) def test_anonymous_is_authorized_for_permitted_referrer(self): req = self._make_request(headers={'X_IDENTITY_STATUS': 'Invalid'}) req.acl = '.r:*' resp = req.get_response(self._get_successful_middleware()) self.assertEqual(resp.status_int, 200) def test_anonymous_with_validtoken_authorized_for_permitted_referrer(self): req = self._make_request(headers={'X_IDENTITY_STATUS': 'Confirmed'}) req.acl = '.r:*' resp = req.get_response(self._get_successful_middleware()) self.assertEqual(resp.status_int, 200) def test_anonymous_is_not_authorized_for_unknown_reseller_prefix(self): req = self._make_request(path='/v1/BLAH_foo/c/o', headers={'X_IDENTITY_STATUS': 'Invalid'}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) def test_denied_responses(self): def get_resp_status(headers): req = self._make_request(headers=headers) resp = req.get_response(self.test_auth) return resp.status_int self.assertEqual(get_resp_status({'X_IDENTITY_STATUS': 'Confirmed'}), 403) self.assertEqual(get_resp_status( {'X_IDENTITY_STATUS': 'Confirmed', 'X_SERVICE_IDENTITY_STATUS': 'Confirmed'}), 403) self.assertEqual(get_resp_status({}), 401) self.assertEqual(get_resp_status( {'X_IDENTITY_STATUS': 'Invalid'}), 401) self.assertEqual(get_resp_status( {'X_IDENTITY_STATUS': 'Invalid', 'X_SERVICE_IDENTITY_STATUS': 'Confirmed'}), 401) self.assertEqual(get_resp_status( {'X_IDENTITY_STATUS': 'Confirmed', 'X_SERVICE_IDENTITY_STATUS': 'Invalid'}), 401) self.assertEqual(get_resp_status( {'X_IDENTITY_STATUS': 'Invalid', 'X_SERVICE_IDENTITY_STATUS': 'Invalid'}), 401) def test_blank_reseller_prefix(self): conf = {'reseller_prefix': ''} test_auth = keystoneauth.filter_factory(conf)(FakeApp()) account = tenant_id = 'foo' self.assertTrue(test_auth._account_matches_tenant(account, tenant_id)) def test_reseller_prefix_added_underscore(self): conf = {'reseller_prefix': 'AUTH'} test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.assertEqual(test_auth.reseller_prefixes[0], "AUTH_") def test_reseller_prefix_not_added_double_underscores(self): conf = {'reseller_prefix': 'AUTH_'} test_auth = 
keystoneauth.filter_factory(conf)(FakeApp()) self.assertEqual(test_auth.reseller_prefixes[0], "AUTH_") def test_override_asked_for_but_not_allowed(self): conf = {'allow_overrides': 'false'} self.test_auth = keystoneauth.filter_factory(conf)(FakeApp()) req = self._make_request('/v1/AUTH_account', environ={'swift.authorize_override': True}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) def test_override_asked_for_and_allowed(self): conf = {'allow_overrides': 'true'} self.test_auth = keystoneauth.filter_factory(conf)(FakeApp()) req = self._make_request('/v1/AUTH_account', environ={'swift.authorize_override': True}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 404) def test_override_default_allowed(self): req = self._make_request('/v1/AUTH_account', environ={'swift.authorize_override': True}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 404) def test_anonymous_options_allowed(self): req = self._make_request('/v1/AUTH_account', environ={'REQUEST_METHOD': 'OPTIONS'}) resp = req.get_response(self._get_successful_middleware()) self.assertEqual(resp.status_int, 200) def test_identified_options_allowed(self): headers = get_identity_headers() headers['REQUEST_METHOD'] = 'OPTIONS' req = self._make_request('/v1/AUTH_account', headers=get_identity_headers(), environ={'REQUEST_METHOD': 'OPTIONS'}) resp = req.get_response(self._get_successful_middleware()) self.assertEqual(resp.status_int, 200) def test_auth_scheme(self): req = self._make_request(path='/v1/BLAH_foo/c/o', headers={'X_IDENTITY_STATUS': 'Invalid'}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) def test_project_domain_id_sysmeta_set(self): proj_id = '12345678' proj_domain_id = '13' headers = get_identity_headers(tenant_id=proj_id, project_domain_id=proj_domain_id) account = get_account_for_tenant(self.test_auth, proj_id) path = '/v1/' + account # fake cached account info _, info_key = _get_cache_key(account, None) env = {info_key: {'status': 0, 'sysmeta': {}}, 'keystone.token_info': _fake_token_info(version='3')} req = Request.blank(path, environ=env, headers=headers) req.method = 'POST' headers_out = {'X-Account-Sysmeta-Project-Domain-Id': proj_domain_id} fake_app = FakeApp(iter([('200 OK', headers_out, '')])) test_auth = keystoneauth.filter_factory({})(fake_app) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 200) self.assertEqual(len(fake_app.call_contexts), 1) headers_sent = fake_app.call_contexts[0]['headers'] self.assertTrue('X-Account-Sysmeta-Project-Domain-Id' in headers_sent, headers_sent) self.assertEqual(headers_sent['X-Account-Sysmeta-Project-Domain-Id'], proj_domain_id) self.assertTrue('X-Account-Project-Domain-Id' in resp.headers) self.assertEqual(resp.headers['X-Account-Project-Domain-Id'], proj_domain_id) def test_project_domain_id_sysmeta_set_to_unknown(self): proj_id = '12345678' # token scoped to a different project headers = get_identity_headers(tenant_id='87654321', project_domain_id='default', role='reselleradmin') account = get_account_for_tenant(self.test_auth, proj_id) path = '/v1/' + account # fake cached account info _, info_key = _get_cache_key(account, None) env = {info_key: {'status': 0, 'sysmeta': {}}, 'keystone.token_info': _fake_token_info(version='3')} req = Request.blank(path, environ=env, headers=headers) req.method = 'POST' fake_app = FakeApp(iter([('200 OK', {}, '')])) test_auth = 
keystoneauth.filter_factory({})(fake_app) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 200) self.assertEqual(len(fake_app.call_contexts), 1) headers_sent = fake_app.call_contexts[0]['headers'] self.assertTrue('X-Account-Sysmeta-Project-Domain-Id' in headers_sent, headers_sent) self.assertEqual(headers_sent['X-Account-Sysmeta-Project-Domain-Id'], UNKNOWN_ID) def test_project_domain_id_sysmeta_not_set(self): proj_id = '12345678' headers = get_identity_headers(tenant_id=proj_id, role='admin') account = get_account_for_tenant(self.test_auth, proj_id) path = '/v1/' + account _, info_key = _get_cache_key(account, None) # v2 token env = {info_key: {'status': 0, 'sysmeta': {}}, 'keystone.token_info': _fake_token_info(version='2')} req = Request.blank(path, environ=env, headers=headers) req.method = 'POST' fake_app = FakeApp(iter([('200 OK', {}, '')])) test_auth = keystoneauth.filter_factory({})(fake_app) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 200) self.assertEqual(len(fake_app.call_contexts), 1) headers_sent = fake_app.call_contexts[0]['headers'] self.assertFalse('X-Account-Sysmeta-Project-Domain-Id' in headers_sent, headers_sent) def test_project_domain_id_sysmeta_set_unknown_with_v2(self): proj_id = '12345678' # token scoped to a different project headers = get_identity_headers(tenant_id='87654321', role='reselleradmin') account = get_account_for_tenant(self.test_auth, proj_id) path = '/v1/' + account _, info_key = _get_cache_key(account, None) # v2 token env = {info_key: {'status': 0, 'sysmeta': {}}, 'keystone.token_info': _fake_token_info(version='2')} req = Request.blank(path, environ=env, headers=headers) req.method = 'POST' fake_app = FakeApp(iter([('200 OK', {}, '')])) test_auth = keystoneauth.filter_factory({})(fake_app) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 200) self.assertEqual(len(fake_app.call_contexts), 1) headers_sent = fake_app.call_contexts[0]['headers'] self.assertTrue('X-Account-Sysmeta-Project-Domain-Id' in headers_sent, headers_sent) self.assertEqual(headers_sent['X-Account-Sysmeta-Project-Domain-Id'], UNKNOWN_ID) class SwiftAuthMultiple(SwiftAuth): """Runs same tests as SwiftAuth with multiple reseller prefixes Runs SwiftAuth tests while a second reseller prefix item exists. Validates that there is no regression against the original single prefix configuration. """ def setUp(self): self.test_auth = keystoneauth.filter_factory( {'reseller_prefix': 'AUTH, PRE2'})(FakeApp()) self.test_auth.logger = FakeLogger() class ServiceTokenFunctionality(unittest.TestCase): def _make_authed_request(self, conf, project_id, path, method='GET', user_role='admin', service_role=None, environ=None): """Make a request with keystoneauth as auth By default, acts as though the user had presented a token containing the 'admin' role in X-Auth-Token scoped to the specified project_id. 
:param conf: configuration for keystoneauth :param project_id: the project_id of the token :param path: the path of the request :param method: the method (defaults to GET) :param user_role: the role of X-Auth-Token (defaults to 'admin') :param service_role: the role in X-Service-Token (defaults to none) :param environ: a dict of items to be added to the request environ (defaults to none) :returns: response object """ headers = get_identity_headers(tenant_id=project_id, role=user_role, service_role=service_role) (version, account, _junk, _junk) = split_path(path, 2, 4, True) _, info_key = _get_cache_key(account, None) env = {info_key: {'status': 0, 'sysmeta': {}}, 'keystone.token_info': _fake_token_info(version='2')} if environ: env.update(environ) req = Request.blank(path, environ=env, headers=headers) req.method = method fake_app = FakeApp(iter([('200 OK', {}, '')])) test_auth = keystoneauth.filter_factory(conf)(fake_app) resp = req.get_response(test_auth) return resp def test_existing_swift_owner_ignored(self): # a request without admin role is denied resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/AUTH_12345678', environ={'swift_owner': False}, user_role='something_else') self.assertEqual(resp.status_int, 403) # ... even when swift_owner has previously been set True in request env resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/AUTH_12345678', environ={'swift_owner': True}, user_role='something_else') self.assertEqual(resp.status_int, 403) # a request with admin role but to different account prefix is denied resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/SERVICE_12345678', environ={'swift_owner': False}) self.assertEqual(resp.status_int, 403) # ... even when swift_owner has previously been set True in request env resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/SERVICE_12345678', environ={'swift_owner': True}) self.assertEqual(resp.status_int, 403) def test_unknown_prefix(self): resp = self._make_authed_request({}, '12345678', '/v1/BLAH_12345678') self.assertEqual(resp.status_int, 403) resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2'}, '12345678', '/v1/BLAH_12345678') self.assertEqual(resp.status_int, 403) def test_authed_for_path_single(self): resp = self._make_authed_request({}, '12345678', '/v1/AUTH_12345678') self.assertEqual(resp.status_int, 200) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/AUTH_12345678') self.assertEqual(resp.status_int, 200) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/AUTH_12345678/c') self.assertEqual(resp.status_int, 200) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/AUTH_12345678', user_role='ResellerAdmin') self.assertEqual(resp.status_int, 200) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/AUTH_anything', user_role='ResellerAdmin') self.assertEqual(resp.status_int, 200) def test_denied_for_path_single(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/AUTH_789') self.assertEqual(resp.status_int, 403) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/AUTH_12345678', user_role='something_else') self.assertEqual(resp.status_int, 403) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, '12345678', '/v1/AUTH_12345678', method='DELETE') self.assertEqual(resp.status_int, 403) def 
test_authed_for_primary_path_multiple(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/AUTH_12345678') self.assertEqual(resp.status_int, 200) def test_denied_for_second_path_with_only_operator_role(self): # User only presents X-Auth-Token resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/PRE2_12345678') self.assertEqual(resp.status_int, 403) # User puts token in X-Service-Token resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/PRE2_12345678', user_role='', service_role='admin') self.assertEqual(resp.status_int, 403) # User puts token in both X-Auth-Token and X-Service-Token resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/PRE2_12345678', user_role='admin', service_role='admin') self.assertEqual(resp.status_int, 403) def test_authed_for_second_path_with_operator_role_and_service(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/PRE2_12345678', service_role='service') self.assertEqual(resp.status_int, 200) def test_denied_for_second_path_with_only_service(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/PRE2_12345678', user_role='something_else', service_role='service') self.assertEqual(resp.status_int, 403) def test_denied_for_second_path_for_service_user(self): # User presents token with 'service' role in X-Auth-Token resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/PRE2_12345678', user_role='service') self.assertEqual(resp.status_int, 403) # User presents token with 'service' role in X-Auth-Token # and also in X-Service-Token resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/PRE2_12345678', user_role='service', service_role='service') self.assertEqual(resp.status_int, 403) def test_delete_denied_for_second_path(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/PRE2_12345678', service_role='service', method='DELETE') self.assertEqual(resp.status_int, 403) def test_delete_of_second_path_by_reseller_admin(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_service_roles': 'service'}, '12345678', '/v1/PRE2_12345678', user_role='ResellerAdmin', method='DELETE') self.assertEqual(resp.status_int, 200) class BaseTestAuthorize(unittest.TestCase): def setUp(self): self.test_auth = keystoneauth.filter_factory({})(FakeApp()) self.test_auth.logger = FakeLogger() def _make_request(self, path, **kwargs): return Request.blank(path, **kwargs) def _get_account(self, identity=None): if not identity: identity = self._get_identity() return get_account_for_tenant(self.test_auth, identity['HTTP_X_TENANT_ID']) def _get_identity(self, tenant_id='tenant_id', tenant_name='tenant_name', user_id='user_id', user_name='user_name', roles=None, project_domain_name='domA', project_domain_id='foo', user_domain_name='domA', user_domain_id='foo'): if roles is None: roles = [] if isinstance(roles, list): roles = ','.join(roles) return {'HTTP_X_USER_ID': user_id, 'HTTP_X_USER_NAME': user_name, 'HTTP_X_USER_DOMAIN_NAME': user_domain_name, 
'HTTP_X_USER_DOMAIN_ID': user_domain_id, 'HTTP_X_TENANT_ID': tenant_id, 'HTTP_X_TENANT_NAME': tenant_name, 'HTTP_X_PROJECT_DOMAIN_ID': project_domain_id, 'HTTP_X_PROJECT_DOMAIN_NAME': project_domain_name, 'HTTP_X_ROLES': roles, 'HTTP_X_IDENTITY_STATUS': 'Confirmed'} def _get_env_id(self, tenant_id='tenant_id', tenant_name='tenant_name', user_id='user_id', user_name='user_name', roles=[], project_domain_name='domA', project_domain_id='99', user_domain_name='domA', user_domain_id='99', auth_version='3'): env = self._get_identity(tenant_id, tenant_name, user_id, user_name, roles, project_domain_name, project_domain_id, user_domain_name, user_domain_id) token_info = _fake_token_info(version=auth_version) env.update({'keystone.token_info': token_info}) return self.test_auth._keystone_identity(env) class TestAuthorize(BaseTestAuthorize): def _check_authenticate(self, account=None, identity=None, headers=None, exception=None, acl=None, env=None, path=None): if not identity: identity = self._get_identity() if not account: account = self._get_account(identity) if not path: path = '/v1/%s/c' % account # fake cached account info _, info_key = _get_cache_key(account, None) default_env = {'REMOTE_USER': identity['HTTP_X_TENANT_ID'], info_key: {'status': 200, 'sysmeta': {}}} default_env.update(identity) if env: default_env.update(env) req = self._make_request(path, headers=headers, environ=default_env) req.acl = acl env_identity = self.test_auth._keystone_identity(req.environ) result = self.test_auth.authorize(env_identity, req) # if we have requested an exception but nothing came back then if exception and not result: self.fail("error %s was not returned" % (str(exception))) elif exception: self.assertEqual(result.status_int, exception) else: self.assertTrue(result is None) return req def test_authorize_fails_for_unauthorized_user(self): self._check_authenticate(exception=HTTP_FORBIDDEN) def test_authorize_fails_for_invalid_reseller_prefix(self): self._check_authenticate(account='BLAN_a', exception=HTTP_FORBIDDEN) def test_authorize_succeeds_for_reseller_admin(self): roles = [self.test_auth.reseller_admin_role] identity = self._get_identity(roles=roles) req = self._check_authenticate(identity=identity) self.assertTrue(req.environ.get('swift_owner')) def test_authorize_succeeds_for_insensitive_reseller_admin(self): roles = [self.test_auth.reseller_admin_role.upper()] identity = self._get_identity(roles=roles) req = self._check_authenticate(identity=identity) self.assertTrue(req.environ.get('swift_owner')) def test_authorize_succeeds_as_owner_for_operator_role(self): roles = operator_roles(self.test_auth) identity = self._get_identity(roles=roles) req = self._check_authenticate(identity=identity) self.assertTrue(req.environ.get('swift_owner')) def test_authorize_succeeds_as_owner_for_insensitive_operator_role(self): roles = [r.upper() for r in operator_roles(self.test_auth)] identity = self._get_identity(roles=roles) req = self._check_authenticate(identity=identity) self.assertTrue(req.environ.get('swift_owner')) def test_authorize_fails_same_user_and_tenant(self): # Historically the is_admin option allowed access when user_name # matched tenant_name, but it is no longer supported. This test is a # sanity check that the option no longer works. 
self.test_auth.is_admin = True identity = self._get_identity(user_name='same_name', tenant_name='same_name') req = self._check_authenticate(identity=identity, exception=HTTP_FORBIDDEN) self.assertFalse(bool(req.environ.get('swift_owner'))) def test_authorize_succeeds_for_container_sync(self): env = {'swift_sync_key': 'foo', 'REMOTE_ADDR': '127.0.0.1'} headers = {'x-container-sync-key': 'foo', 'x-timestamp': '1'} self._check_authenticate(env=env, headers=headers) def test_authorize_fails_for_invalid_referrer(self): env = {'HTTP_REFERER': 'http://invalid.com/index.html'} self._check_authenticate(acl='.r:example.com', env=env, exception=HTTP_FORBIDDEN) def test_authorize_fails_for_referrer_without_rlistings(self): env = {'HTTP_REFERER': 'http://example.com/index.html'} self._check_authenticate(acl='.r:example.com', env=env, exception=HTTP_FORBIDDEN) def test_authorize_succeeds_for_referrer_with_rlistings(self): env = {'HTTP_REFERER': 'http://example.com/index.html'} self._check_authenticate(acl='.r:example.com,.rlistings', env=env) def test_authorize_succeeds_for_referrer_with_obj(self): path = '/v1/%s/c/o' % self._get_account() env = {'HTTP_REFERER': 'http://example.com/index.html'} self._check_authenticate(acl='.r:example.com', env=env, path=path) def test_authorize_succeeds_for_user_role_in_roles(self): acl = 'allowme' identity = self._get_identity(roles=[acl]) self._check_authenticate(identity=identity, acl=acl) def test_authorize_succeeds_for_tenant_name_user_in_roles(self): identity = self._get_identity() user_name = identity['HTTP_X_USER_NAME'] user_id = identity['HTTP_X_USER_ID'] tenant_name = identity['HTTP_X_TENANT_NAME'] for user in [user_id, user_name, '*']: acl = '%s:%s' % (tenant_name, user) self._check_authenticate(identity=identity, acl=acl) def test_authorize_succeeds_for_tenant_id_user_in_roles(self): identity = self._get_identity() user_name = identity['HTTP_X_USER_NAME'] user_id = identity['HTTP_X_USER_ID'] tenant_id = identity['HTTP_X_TENANT_ID'] for user in [user_id, user_name, '*']: acl = '%s:%s' % (tenant_id, user) self._check_authenticate(identity=identity, acl=acl) def test_authorize_succeeds_for_wildcard_tenant_user_in_roles(self): identity = self._get_identity() user_name = identity['HTTP_X_USER_NAME'] user_id = identity['HTTP_X_USER_ID'] for user in [user_id, user_name, '*']: acl = '*:%s' % user self._check_authenticate(identity=identity, acl=acl) def test_cross_tenant_authorization_success(self): self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantID:userA']), 'tenantID:userA') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantNAME:userA']), 'tenantNAME:userA') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['*:userA']), '*:userA') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantID:userID']), 'tenantID:userID') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantNAME:userID']), 'tenantNAME:userID') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['*:userID']), '*:userID') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantID:*']), 'tenantID:*') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', 
['tenantNAME:*']), 'tenantNAME:*') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['*:*']), '*:*') def test_cross_tenant_authorization_failure(self): self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantXYZ:userA']), None) def test_cross_tenant_authorization_allow_names(self): # tests that the allow_names arg does the right thing self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantNAME:userA'], allow_names=True), 'tenantNAME:userA') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantNAME:userID'], allow_names=True), 'tenantNAME:userID') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantID:userA'], allow_names=True), 'tenantID:userA') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantID:userID'], allow_names=True), 'tenantID:userID') self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantNAME:userA'], allow_names=False), None) self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantID:userA'], allow_names=False), None) self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantNAME:userID'], allow_names=False), None) self.assertEqual( self.test_auth._authorize_cross_tenant( 'userID', 'userA', 'tenantID', 'tenantNAME', ['tenantID:userID'], allow_names=False), 'tenantID:userID') def test_delete_own_account_not_allowed(self): roles = operator_roles(self.test_auth) identity = self._get_identity(roles=roles) account = self._get_account(identity) self._check_authenticate(account=account, identity=identity, exception=HTTP_FORBIDDEN, path='/v1/' + account, env={'REQUEST_METHOD': 'DELETE'}) def test_delete_own_account_when_reseller_allowed(self): roles = [self.test_auth.reseller_admin_role] identity = self._get_identity(roles=roles) account = self._get_account(identity) req = self._check_authenticate(account=account, identity=identity, path='/v1/' + account, env={'REQUEST_METHOD': 'DELETE'}) self.assertEqual(bool(req.environ.get('swift_owner')), True) def test_identity_set_up_at_call(self): def fake_start_response(*args, **kwargs): pass the_env = self._get_identity( tenant_id='test', roles=['reselleradmin']) self.test_auth(the_env, fake_start_response) subreq = Request.blank( '/v1/%s/c/o' % get_account_for_tenant(self.test_auth, 'test')) subreq.environ.update( self._get_identity(tenant_id='test', roles=['got_erased'])) authorize_resp = the_env['swift.authorize'](subreq) self.assertEqual(authorize_resp, None) def test_names_disallowed_in_acls_outside_default_domain(self): id = self._get_identity(user_domain_id='non-default', project_domain_id='non-default') env = {'keystone.token_info': _fake_token_info(version='3')} acl = '%s:%s' % (id['HTTP_X_TENANT_NAME'], id['HTTP_X_USER_NAME']) self._check_authenticate(acl=acl, identity=id, env=env, exception=HTTP_FORBIDDEN) acl = '%s:%s' % (id['HTTP_X_TENANT_NAME'], id['HTTP_X_USER_ID']) self._check_authenticate(acl=acl, identity=id, env=env, exception=HTTP_FORBIDDEN) acl = '%s:%s' % (id['HTTP_X_TENANT_ID'], id['HTTP_X_USER_NAME']) self._check_authenticate(acl=acl, identity=id, env=env, exception=HTTP_FORBIDDEN) acl = '%s:%s' % (id['HTTP_X_TENANT_ID'], 
id['HTTP_X_USER_ID']) self._check_authenticate(acl=acl, identity=id, env=env) def test_names_allowed_in_acls_inside_default_domain(self): id = self._get_identity(user_domain_id='default', project_domain_id='default') env = {'keystone.token_info': _fake_token_info(version='3')} acl = '%s:%s' % (id['HTTP_X_TENANT_NAME'], id['HTTP_X_USER_NAME']) self._check_authenticate(acl=acl, identity=id, env=env) acl = '%s:%s' % (id['HTTP_X_TENANT_NAME'], id['HTTP_X_USER_ID']) self._check_authenticate(acl=acl, identity=id, env=env) acl = '%s:%s' % (id['HTTP_X_TENANT_ID'], id['HTTP_X_USER_NAME']) self._check_authenticate(acl=acl, identity=id, env=env) acl = '%s:%s' % (id['HTTP_X_TENANT_ID'], id['HTTP_X_USER_ID']) self._check_authenticate(acl=acl, identity=id, env=env) def test_names_allowed_in_acls_inside_default_domain_with_config(self): conf = {'allow_names_in_acls': 'yes'} self.test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.test_auth.logger = FakeLogger() id = self._get_identity(user_domain_id='default', project_domain_id='default') env = {'keystone.token_info': _fake_token_info(version='3')} acl = '%s:%s' % (id['HTTP_X_TENANT_NAME'], id['HTTP_X_USER_NAME']) self._check_authenticate(acl=acl, identity=id, env=env) acl = '%s:%s' % (id['HTTP_X_TENANT_NAME'], id['HTTP_X_USER_ID']) self._check_authenticate(acl=acl, identity=id, env=env) acl = '%s:%s' % (id['HTTP_X_TENANT_ID'], id['HTTP_X_USER_NAME']) self._check_authenticate(acl=acl, identity=id, env=env) acl = '%s:%s' % (id['HTTP_X_TENANT_ID'], id['HTTP_X_USER_ID']) self._check_authenticate(acl=acl, identity=id, env=env) def test_names_disallowed_in_acls_inside_default_domain(self): conf = {'allow_names_in_acls': 'false'} self.test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.test_auth.logger = FakeLogger() id = self._get_identity(user_domain_id='default', project_domain_id='default') env = {'keystone.token_info': _fake_token_info(version='3')} acl = '%s:%s' % (id['HTTP_X_TENANT_NAME'], id['HTTP_X_USER_NAME']) self._check_authenticate(acl=acl, identity=id, env=env, exception=HTTP_FORBIDDEN) acl = '%s:%s' % (id['HTTP_X_TENANT_NAME'], id['HTTP_X_USER_ID']) self._check_authenticate(acl=acl, identity=id, env=env, exception=HTTP_FORBIDDEN) acl = '%s:%s' % (id['HTTP_X_TENANT_ID'], id['HTTP_X_USER_NAME']) self._check_authenticate(acl=acl, identity=id, env=env, exception=HTTP_FORBIDDEN) acl = '%s:%s' % (id['HTTP_X_TENANT_ID'], id['HTTP_X_USER_ID']) self._check_authenticate(acl=acl, identity=id, env=env) def test_keystone_identity(self): user = ('U_ID', 'U_NAME') roles = ('ROLE1', 'ROLE2') service_roles = ('ROLE3', 'ROLE4') project = ('P_ID', 'P_NAME') user_domain = ('UD_ID', 'UD_NAME') project_domain = ('PD_ID', 'PD_NAME') # no valid identity info in headers req = Request.blank('/v/a/c/o') data = self.test_auth._keystone_identity(req.environ) self.assertIsNone(data) # valid identity info in headers, but status unconfirmed req.headers.update({'X-Identity-Status': 'Blah', 'X-Roles': '%s,%s' % roles, 'X-User-Id': user[0], 'X-User-Name': user[1], 'X-Tenant-Id': project[0], 'X-Tenant-Name': project[1], 'X-User-Domain-Id': user_domain[0], 'X-User-Domain-Name': user_domain[1], 'X-Project-Domain-Id': project_domain[0], 'X-Project-Domain-Name': project_domain[1]}) data = self.test_auth._keystone_identity(req.environ) self.assertIsNone(data) # valid identity info in headers, no token info in environ req.headers.update({'X-Identity-Status': 'Confirmed'}) expected = {'user': user, 'tenant': project, 'roles': list(roles), 'service_roles': [], 
'user_domain': (None, None), 'project_domain': (None, None), 'auth_version': 0} data = self.test_auth._keystone_identity(req.environ) self.assertEqual(expected, data) # v2 token info in environ req.environ['keystone.token_info'] = _fake_token_info(version='2') expected = {'user': user, 'tenant': project, 'roles': list(roles), 'service_roles': [], 'user_domain': (None, None), 'project_domain': (None, None), 'auth_version': 2} data = self.test_auth._keystone_identity(req.environ) self.assertEqual(expected, data) # v3 token info in environ req.environ['keystone.token_info'] = _fake_token_info(version='3') expected = {'user': user, 'tenant': project, 'roles': list(roles), 'service_roles': [], 'user_domain': user_domain, 'project_domain': project_domain, 'auth_version': 3} data = self.test_auth._keystone_identity(req.environ) self.assertEqual(expected, data) # service token in environ req.headers.update({'X-Service-Roles': '%s,%s' % service_roles}) expected = {'user': user, 'tenant': project, 'roles': list(roles), 'service_roles': list(service_roles), 'user_domain': user_domain, 'project_domain': project_domain, 'auth_version': 3} data = self.test_auth._keystone_identity(req.environ) self.assertEqual(expected, data) def test_get_project_domain_id(self): sysmeta = {} info = {'sysmeta': sysmeta} _, info_key = _get_cache_key('AUTH_1234', None) env = {'PATH_INFO': '/v1/AUTH_1234', info_key: info} # account does not exist info['status'] = 404 self.assertEqual(self.test_auth._get_project_domain_id(env), (False, None)) info['status'] = 0 self.assertEqual(self.test_auth._get_project_domain_id(env), (False, None)) # account exists, no project domain id in sysmeta info['status'] = 200 self.assertEqual(self.test_auth._get_project_domain_id(env), (True, None)) # account exists with project domain id in sysmeta sysmeta['project-domain-id'] = 'default' self.assertEqual(self.test_auth._get_project_domain_id(env), (True, 'default')) class TestIsNameAllowedInACL(BaseTestAuthorize): def setUp(self): super(TestIsNameAllowedInACL, self).setUp() self.default_id = 'default' def _assert_names_allowed(self, expected, user_domain_id=None, req_project_domain_id=None, sysmeta_project_domain_id=None, scoped='account'): project_name = 'foo' account_id = '12345678' account = get_account_for_tenant(self.test_auth, account_id) parts = ('v1', account, None, None) path = '/%s/%s' % parts[0:2] sysmeta = {} if sysmeta_project_domain_id: sysmeta = {'project-domain-id': sysmeta_project_domain_id} # pretend account exists info = {'status': 200, 'sysmeta': sysmeta} _, info_key = _get_cache_key(account, None) req = Request.blank(path, environ={info_key: info}) if scoped == 'account': project_name = 'account_name' project_id = account_id elif scoped == 'other': project_name = 'other_name' project_id = '87654321' else: # unscoped token project_name, project_id, req_project_domain_id = None, None, None if user_domain_id: id = self._get_env_id(tenant_name=project_name, tenant_id=project_id, user_domain_id=user_domain_id, project_domain_id=req_project_domain_id) else: # must be v2 token info id = self._get_env_id(tenant_name=project_name, tenant_id=project_id, auth_version='2') actual = self.test_auth._is_name_allowed_in_acl(req, parts, id) self.assertEqual(actual, expected, '%s, %s, %s, %s' % (user_domain_id, req_project_domain_id, sysmeta_project_domain_id, scoped)) def test_is_name_allowed_in_acl_with_token_scoped_to_tenant(self): # no user or project domain ids in request token so must be v2, # user and project should be assumed to be 
in default domain self._assert_names_allowed(True, user_domain_id=None, req_project_domain_id=None, sysmeta_project_domain_id=None) self._assert_names_allowed(True, user_domain_id=None, req_project_domain_id=None, sysmeta_project_domain_id=self.default_id) self._assert_names_allowed(True, user_domain_id=None, req_project_domain_id=None, sysmeta_project_domain_id=UNKNOWN_ID) self._assert_names_allowed(True, user_domain_id=None, req_project_domain_id=None, sysmeta_project_domain_id='foo') # user in default domain, project domain in token info takes precedence self._assert_names_allowed(True, user_domain_id=self.default_id, req_project_domain_id=self.default_id, sysmeta_project_domain_id=None) self._assert_names_allowed(True, user_domain_id=self.default_id, req_project_domain_id=self.default_id, sysmeta_project_domain_id=UNKNOWN_ID) self._assert_names_allowed(True, user_domain_id=self.default_id, req_project_domain_id=self.default_id, sysmeta_project_domain_id='bar') self._assert_names_allowed(False, user_domain_id=self.default_id, req_project_domain_id='foo', sysmeta_project_domain_id=None) self._assert_names_allowed(False, user_domain_id=self.default_id, req_project_domain_id='foo', sysmeta_project_domain_id=self.default_id) self._assert_names_allowed(False, user_domain_id=self.default_id, req_project_domain_id='foo', sysmeta_project_domain_id='foo') # user in non-default domain so names should never be allowed self._assert_names_allowed(False, user_domain_id='foo', req_project_domain_id=self.default_id, sysmeta_project_domain_id=None) self._assert_names_allowed(False, user_domain_id='foo', req_project_domain_id=self.default_id, sysmeta_project_domain_id=self.default_id) self._assert_names_allowed(False, user_domain_id='foo', req_project_domain_id=self.default_id, sysmeta_project_domain_id=UNKNOWN_ID) self._assert_names_allowed(False, user_domain_id='foo', req_project_domain_id=self.default_id, sysmeta_project_domain_id='foo') def test_is_name_allowed_in_acl_with_unscoped_token(self): # user in default domain self._assert_names_allowed(True, user_domain_id=self.default_id, sysmeta_project_domain_id=None, scoped=False) self._assert_names_allowed(True, user_domain_id=self.default_id, sysmeta_project_domain_id=self.default_id, scoped=False) self._assert_names_allowed(False, user_domain_id=self.default_id, sysmeta_project_domain_id=UNKNOWN_ID, scoped=False) self._assert_names_allowed(False, user_domain_id=self.default_id, sysmeta_project_domain_id='foo', scoped=False) # user in non-default domain so names should never be allowed self._assert_names_allowed(False, user_domain_id='foo', sysmeta_project_domain_id=None, scoped=False) self._assert_names_allowed(False, user_domain_id='foo', sysmeta_project_domain_id=self.default_id, scoped=False) self._assert_names_allowed(False, user_domain_id='foo', sysmeta_project_domain_id=UNKNOWN_ID, scoped=False) self._assert_names_allowed(False, user_domain_id='foo', sysmeta_project_domain_id='foo', scoped=False) def test_is_name_allowed_in_acl_with_token_scoped_to_other_tenant(self): # user and scoped tenant in default domain self._assert_names_allowed(True, user_domain_id=self.default_id, req_project_domain_id=self.default_id, sysmeta_project_domain_id=None, scoped='other') self._assert_names_allowed(True, user_domain_id=self.default_id, req_project_domain_id=self.default_id, sysmeta_project_domain_id=self.default_id, scoped='other') self._assert_names_allowed(False, user_domain_id=self.default_id, req_project_domain_id=self.default_id, 
sysmeta_project_domain_id=UNKNOWN_ID, scoped='other') self._assert_names_allowed(False, user_domain_id=self.default_id, req_project_domain_id=self.default_id, sysmeta_project_domain_id='foo', scoped='other') # user in default domain, but scoped tenant in non-default domain self._assert_names_allowed(False, user_domain_id=self.default_id, req_project_domain_id='foo', sysmeta_project_domain_id=None, scoped='other') self._assert_names_allowed(False, user_domain_id=self.default_id, req_project_domain_id='foo', sysmeta_project_domain_id=self.default_id, scoped='other') self._assert_names_allowed(False, user_domain_id=self.default_id, req_project_domain_id='foo', sysmeta_project_domain_id=UNKNOWN_ID, scoped='other') self._assert_names_allowed(False, user_domain_id=self.default_id, req_project_domain_id='foo', sysmeta_project_domain_id='foo', scoped='other') # user in non-default domain, scoped tenant in default domain self._assert_names_allowed(False, user_domain_id='foo', req_project_domain_id=self.default_id, sysmeta_project_domain_id=None, scoped='other') self._assert_names_allowed(False, user_domain_id='foo', req_project_domain_id=self.default_id, sysmeta_project_domain_id=self.default_id, scoped='other') self._assert_names_allowed(False, user_domain_id='foo', req_project_domain_id=self.default_id, sysmeta_project_domain_id=UNKNOWN_ID, scoped='other') self._assert_names_allowed(False, user_domain_id='foo', req_project_domain_id=self.default_id, sysmeta_project_domain_id='foo', scoped='other') class TestIsNameAllowedInACLWithConfiguredDomain(TestIsNameAllowedInACL): def setUp(self): super(TestIsNameAllowedInACLWithConfiguredDomain, self).setUp() conf = {'default_domain_id': 'mydefault'} self.test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.test_auth.logger = FakeLogger() self.default_id = 'mydefault' class TestSetProjectDomain(BaseTestAuthorize): def _assert_set_project_domain(self, expected, account, req_project_id, req_project_domain_id, sysmeta_project_domain_id, warning=False): hdr = 'X-Account-Sysmeta-Project-Domain-Id' # set up fake account info in req env status = 0 if sysmeta_project_domain_id is None else 200 sysmeta = {} if sysmeta_project_domain_id: sysmeta['project-domain-id'] = sysmeta_project_domain_id info = {'status': status, 'sysmeta': sysmeta} _, info_key = _get_cache_key(account, None) env = {info_key: info} # create fake env identity env_id = self._get_env_id(tenant_id=req_project_id, project_domain_id=req_project_domain_id) # reset fake logger self.test_auth.logger = FakeLogger() num_warnings = 0 # check account requests path = '/v1/%s' % account for method in ['PUT', 'POST']: req = Request.blank(path, environ=env) req.method = method path_parts = req.split_path(1, 4, True) self.test_auth._set_project_domain_id(req, path_parts, env_id) if warning: num_warnings += 1 warnings = self.test_auth.logger.get_lines_for_level('warning') self.assertEqual(len(warnings), num_warnings) self.assertTrue(warnings[-1].startswith('Inconsistent proj')) if expected is not None: self.assertTrue(hdr in req.headers) self.assertEqual(req.headers[hdr], expected) else: self.assertFalse(hdr in req.headers, req.headers) for method in ['GET', 'HEAD', 'DELETE', 'OPTIONS']: req = Request.blank(path, environ=env) req.method = method self.test_auth._set_project_domain_id(req, path_parts, env_id) self.assertFalse(hdr in req.headers) # check container requests path = '/v1/%s/c' % account for method in ['PUT']: req = Request.blank(path, environ=env) req.method = method path_parts = 
req.split_path(1, 4, True) self.test_auth._set_project_domain_id(req, path_parts, env_id) if warning: num_warnings += 1 warnings = self.test_auth.logger.get_lines_for_level('warning') self.assertEqual(len(warnings), num_warnings) self.assertTrue(warnings[-1].startswith('Inconsistent proj')) if expected is not None: self.assertTrue(hdr in req.headers) self.assertEqual(req.headers[hdr], expected) else: self.assertFalse(hdr in req.headers) for method in ['POST', 'GET', 'HEAD', 'DELETE', 'OPTIONS']: req = Request.blank(path, environ=env) req.method = method self.test_auth._set_project_domain_id(req, path_parts, env_id) self.assertFalse(hdr in req.headers) # never set for object requests path = '/v1/%s/c/o' % account for method in ['PUT', 'COPY', 'POST', 'GET', 'HEAD', 'DELETE', 'OPTIONS']: req = Request.blank(path, environ=env) req.method = method path_parts = req.split_path(1, 4, True) self.test_auth._set_project_domain_id(req, path_parts, env_id) self.assertFalse(hdr in req.headers) def test_set_project_domain_id_new_account(self): # scoped token with project domain info self._assert_set_project_domain('test_id', account='AUTH_1234', req_project_id='1234', req_project_domain_id='test_id', sysmeta_project_domain_id=None) # scoped v2 token without project domain id self._assert_set_project_domain(None, account='AUTH_1234', req_project_id='1234', req_project_domain_id=None, sysmeta_project_domain_id=None) # unscoped v2 token without project domain id self._assert_set_project_domain(UNKNOWN_ID, account='AUTH_1234', req_project_id=None, req_project_domain_id=None, sysmeta_project_domain_id=None) # token scoped on another project self._assert_set_project_domain(UNKNOWN_ID, account='AUTH_1234', req_project_id='4321', req_project_domain_id='default', sysmeta_project_domain_id=None) def test_set_project_domain_id_existing_v2_account(self): # project domain id provided in scoped request token, # update empty value self._assert_set_project_domain('default', account='AUTH_1234', req_project_id='1234', req_project_domain_id='default', sysmeta_project_domain_id='') # inconsistent project domain id provided in scoped request token, # leave known value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id='1234', req_project_domain_id='unexpected_id', sysmeta_project_domain_id='', warning=True) # project domain id not provided, scoped request token, # no change to empty value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id='1234', req_project_domain_id=None, sysmeta_project_domain_id='') # unscoped request token, no change to empty value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id=None, req_project_domain_id=None, sysmeta_project_domain_id='') # token scoped on another project, # update empty value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id='4321', req_project_domain_id=None, sysmeta_project_domain_id='') def test_set_project_domain_id_existing_account_unknown_domain(self): # project domain id provided in scoped request token, # set known value self._assert_set_project_domain('test_id', account='AUTH_1234', req_project_id='1234', req_project_domain_id='test_id', sysmeta_project_domain_id=UNKNOWN_ID) # project domain id not provided, scoped request token, # set empty value self._assert_set_project_domain('', account='AUTH_1234', req_project_id='1234', req_project_domain_id=None, sysmeta_project_domain_id=UNKNOWN_ID) # project domain id not provided, unscoped request token, # leave unknown value 
self._assert_set_project_domain(None, account='AUTH_1234', req_project_id=None, req_project_domain_id=None, sysmeta_project_domain_id=UNKNOWN_ID) # token scoped on another project, leave unknown value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id='4321', req_project_domain_id='default', sysmeta_project_domain_id=UNKNOWN_ID) def test_set_project_domain_id_existing_known_domain(self): # project domain id provided in scoped request token, # leave known value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id='1234', req_project_domain_id='test_id', sysmeta_project_domain_id='test_id') # inconsistent project domain id provided in scoped request token, # leave known value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id='1234', req_project_domain_id='unexpected_id', sysmeta_project_domain_id='test_id', warning=True) # project domain id not provided, scoped request token, # leave known value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id='1234', req_project_domain_id=None, sysmeta_project_domain_id='test_id') # project domain id not provided, unscoped request token, # leave known value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id=None, req_project_domain_id=None, sysmeta_project_domain_id='test_id') # project domain id not provided, token scoped on another project, # leave known value self._assert_set_project_domain(None, account='AUTH_1234', req_project_id='4321', req_project_domain_id='default', sysmeta_project_domain_id='test_id') class ResellerInInfo(unittest.TestCase): def setUp(self): self.default_rules = {'operator_roles': ['admin', 'swiftoperator'], 'service_roles': []} def test_defaults(self): test_auth = keystoneauth.filter_factory({})(FakeApp()) self.assertEqual(test_auth.account_rules['AUTH_'], self.default_rules) def test_multiple(self): conf = {"reseller_prefix": "AUTH, '', PRE2"} test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.assertEqual(test_auth.account_rules['AUTH_'], self.default_rules) self.assertEqual(test_auth.account_rules[''], self.default_rules) self.assertEqual(test_auth.account_rules['PRE2_'], self.default_rules) class PrefixAccount(unittest.TestCase): def test_default(self): conf = {} test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.assertEqual(get_account_for_tenant(test_auth, '1234'), 'AUTH_1234') self.assertEqual(test_auth._get_account_prefix( 'AUTH_1234'), 'AUTH_') self.assertEqual(test_auth._get_account_prefix( 'JUNK_1234'), None) self.assertTrue(test_auth._account_matches_tenant( 'AUTH_1234', '1234')) self.assertFalse(test_auth._account_matches_tenant( 'AUTH_1234', '5678')) self.assertFalse(test_auth._account_matches_tenant( 'JUNK_1234', '1234')) def test_same_as_default(self): conf = {'reseller_prefix': 'AUTH'} test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.assertEqual(get_account_for_tenant(test_auth, '1234'), 'AUTH_1234') self.assertEqual(test_auth._get_account_prefix( 'AUTH_1234'), 'AUTH_') self.assertEqual(test_auth._get_account_prefix( 'JUNK_1234'), None) self.assertTrue(test_auth._account_matches_tenant( 'AUTH_1234', '1234')) self.assertFalse(test_auth._account_matches_tenant( 'AUTH_1234', '5678')) def test_blank_reseller(self): conf = {'reseller_prefix': ''} test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.assertEqual(get_account_for_tenant(test_auth, '1234'), '1234') self.assertEqual(test_auth._get_account_prefix( '1234'), '') 
self.assertEqual(test_auth._get_account_prefix( 'JUNK_1234'), '') # yes, it should return '' self.assertTrue(test_auth._account_matches_tenant( '1234', '1234')) self.assertFalse(test_auth._account_matches_tenant( '1234', '5678')) self.assertFalse(test_auth._account_matches_tenant( 'JUNK_1234', '1234')) def test_multiple_resellers(self): conf = {'reseller_prefix': 'AUTH, PRE2'} test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.assertEqual(get_account_for_tenant(test_auth, '1234'), 'AUTH_1234') self.assertEqual(test_auth._get_account_prefix( 'AUTH_1234'), 'AUTH_') self.assertEqual(test_auth._get_account_prefix( 'JUNK_1234'), None) self.assertTrue(test_auth._account_matches_tenant( 'AUTH_1234', '1234')) self.assertTrue(test_auth._account_matches_tenant( 'PRE2_1234', '1234')) self.assertFalse(test_auth._account_matches_tenant( 'AUTH_1234', '5678')) self.assertFalse(test_auth._account_matches_tenant( 'PRE2_1234', '5678')) def test_blank_plus_other_reseller(self): conf = {'reseller_prefix': " '', PRE2"} test_auth = keystoneauth.filter_factory(conf)(FakeApp()) self.assertEqual(get_account_for_tenant(test_auth, '1234'), '1234') self.assertEqual(test_auth._get_account_prefix( 'PRE2_1234'), 'PRE2_') self.assertEqual(test_auth._get_account_prefix('JUNK_1234'), '') self.assertTrue(test_auth._account_matches_tenant( '1234', '1234')) self.assertTrue(test_auth._account_matches_tenant( 'PRE2_1234', '1234')) self.assertFalse(test_auth._account_matches_tenant( '1234', '5678')) self.assertFalse(test_auth._account_matches_tenant( 'PRE2_1234', '5678')) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_except.py0000664000567000056710000001322613024044354024265 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
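# A minimal usage sketch (illustrative only; it mirrors the pattern the tests
# below rely on and claims nothing beyond what those tests exercise):
# catch_errors wraps any WSGI app, turns unhandled exceptions into a generic
# "An error occurred" response, and stamps each request with a transaction id.
#
#     from swift.common.middleware import catch_errors
#
#     def exploding_app(env, start_response):
#         # env['swift.trans_id'] has already been set by the middleware
#         raise Exception('boom')
#
#     app = catch_errors.CatchErrorMiddleware(exploding_app, {})
#     # Invoking app(env, start_response) now yields the generic error body
#     # instead of propagating the exception, and the response headers
#     # include X-Trans-Id.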
import unittest from swift.common.swob import Request from swift.common.middleware import catch_errors from swift.common.utils import get_logger class StrangeException(BaseException): pass class FakeApp(object): def __init__(self, error=False, body_iter=None): self.error = error self.body_iter = body_iter def __call__(self, env, start_response): if 'swift.trans_id' not in env: raise Exception('Trans id should always be in env') if self.error: if self.error == 'strange': raise StrangeException('whoa') raise Exception('An error occurred') if self.body_iter is None: return ["FAKE APP"] else: return self.body_iter def start_response(*args): pass class TestCatchErrors(unittest.TestCase): def setUp(self): self.logger = get_logger({}) self.logger.txn_id = None def test_catcherrors_passthrough(self): app = catch_errors.CatchErrorMiddleware(FakeApp(), {}) req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) self.assertEqual(list(resp), ['FAKE APP']) def test_catcherrors(self): app = catch_errors.CatchErrorMiddleware(FakeApp(True), {}) req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) self.assertEqual(list(resp), ['An error occurred']) def test_trans_id_header_pass(self): self.assertEqual(self.logger.txn_id, None) def start_response(status, headers, exc_info=None): self.assertTrue('X-Trans-Id' in (x[0] for x in headers)) app = catch_errors.CatchErrorMiddleware(FakeApp(), {}) req = Request.blank('/v1/a/c/o') app(req.environ, start_response) self.assertEqual(len(self.logger.txn_id), 34) # 32 hex + 'tx' def test_trans_id_header_fail(self): self.assertEqual(self.logger.txn_id, None) def start_response(status, headers, exc_info=None): self.assertTrue('X-Trans-Id' in (x[0] for x in headers)) app = catch_errors.CatchErrorMiddleware(FakeApp(True), {}) req = Request.blank('/v1/a/c/o') app(req.environ, start_response) self.assertEqual(len(self.logger.txn_id), 34) def test_error_in_iterator(self): app = catch_errors.CatchErrorMiddleware( FakeApp(body_iter=(int(x) for x in 'abcd')), {}) req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) self.assertEqual(list(resp), ['An error occurred']) def test_trans_id_header_suffix(self): self.assertEqual(self.logger.txn_id, None) def start_response(status, headers, exc_info=None): self.assertTrue('X-Trans-Id' in (x[0] for x in headers)) app = catch_errors.CatchErrorMiddleware( FakeApp(), {'trans_id_suffix': '-stuff'}) req = Request.blank('/v1/a/c/o') app(req.environ, start_response) self.assertTrue(self.logger.txn_id.endswith('-stuff')) def test_trans_id_header_extra(self): self.assertEqual(self.logger.txn_id, None) def start_response(status, headers, exc_info=None): self.assertTrue('X-Trans-Id' in (x[0] for x in headers)) app = catch_errors.CatchErrorMiddleware( FakeApp(), {'trans_id_suffix': '-fromconf'}) req = Request.blank('/v1/a/c/o', headers={'X-Trans-Id-Extra': 'fromuser'}) app(req.environ, start_response) self.assertTrue(self.logger.txn_id.endswith('-fromconf-fromuser')) def test_trans_id_header_extra_length_limit(self): self.assertEqual(self.logger.txn_id, None) def start_response(status, headers, exc_info=None): self.assertTrue('X-Trans-Id' in (x[0] for x in headers)) app = catch_errors.CatchErrorMiddleware( FakeApp(), {'trans_id_suffix': '-fromconf'}) req = Request.blank('/v1/a/c/o', headers={'X-Trans-Id-Extra': 'a' * 1000}) app(req.environ, start_response) self.assertTrue(self.logger.txn_id.endswith( 
'-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')) def test_trans_id_header_extra_quoted(self): self.assertEqual(self.logger.txn_id, None) def start_response(status, headers, exc_info=None): self.assertTrue('X-Trans-Id' in (x[0] for x in headers)) app = catch_errors.CatchErrorMiddleware(FakeApp(), {}) req = Request.blank('/v1/a/c/o', headers={'X-Trans-Id-Extra': 'xan than"gum'}) app(req.environ, start_response) self.assertTrue(self.logger.txn_id.endswith('-xan%20than%22gum')) def test_catcherrors_with_unexpected_error(self): app = catch_errors.CatchErrorMiddleware(FakeApp(error='strange'), {}) req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = app(req.environ, start_response) self.assertEqual(list(resp), ['An error occurred']) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_account_quotas.py0000664000567000056710000005173213024044354026031 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from swift.common.swob import Request, wsgify, HTTPForbidden from swift.common.middleware import account_quotas from swift.proxy.controllers.base import _get_cache_key, \ headers_to_account_info, get_object_env_key, \ headers_to_object_info class FakeCache(object): def __init__(self, val): self.val = val def get(self, *args): return self.val def set(self, *args, **kwargs): pass class FakeBadApp(object): def __init__(self, headers=None): if headers is None: headers = [] self.headers = headers def __call__(self, env, start_response): start_response('404 NotFound', self.headers) return [] class FakeApp(object): def __init__(self, headers=None): if headers is None: headers = [] self.headers = headers def __call__(self, env, start_response): if 'swift.authorize' in env: aresp = env['swift.authorize'](Request(env)) if aresp: return aresp(env, start_response) if env['REQUEST_METHOD'] == "HEAD" and \ env['PATH_INFO'] == '/v1/a/c2/o2': env_key = get_object_env_key('a', 'c2', 'o2') env[env_key] = headers_to_object_info(self.headers, 200) start_response('200 OK', self.headers) elif env['REQUEST_METHOD'] == "HEAD" and \ env['PATH_INFO'] == '/v1/a/c2/o3': start_response('404 Not Found', []) else: # Cache the account_info (same as a real application) cache_key, env_key = _get_cache_key('a', None) env[env_key] = headers_to_account_info(self.headers, 200) start_response('200 OK', self.headers) return [] class FakeAuthFilter(object): def __init__(self, app): self.app = app @wsgify def __call__(self, req): def authorize(req): if req.headers['x-auth-token'] == 'secret': return return HTTPForbidden(request=req) req.environ['swift.authorize'] = authorize return req.get_response(self.app) class TestAccountQuota(unittest.TestCase): def test_unauthorized(self): headers = [('x-account-bytes-used', '1000'), ] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) # Response code of 200 because authentication 
itself is not done here self.assertEqual(res.status_int, 200) def test_no_quotas(self): headers = [('x-account-bytes-used', '1000'), ] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_obj_request_ignores_attempt_to_set_quotas(self): # If you try to set X-Account-Meta-* on an object, it's ignored, so # the quota middleware shouldn't complain about it even if we're not a # reseller admin. headers = [('x-account-bytes-used', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', headers={'X-Account-Meta-Quota-Bytes': '99999'}, environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_container_request_ignores_attempt_to_set_quotas(self): # As with an object, if you try to set X-Account-Meta-* on a # container, it's ignored. headers = [('x-account-bytes-used', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c', headers={'X-Account-Meta-Quota-Bytes': '99999'}, environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_bogus_quota_is_ignored(self): # This can happen if the metadata was set by a user prior to the # activation of the account-quota middleware headers = [('x-account-bytes-used', '1000'), ('x-account-meta-quota-bytes', 'pasty-plastogene')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_exceed_bytes_quota(self): headers = [('x-account-bytes-used', '1000'), ('x-account-meta-quota-bytes', '0')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 413) self.assertEqual(res.body, 'Upload exceeds quota.') def test_exceed_quota_not_authorized(self): headers = [('x-account-bytes-used', '1000'), ('x-account-meta-quota-bytes', '0')] app = FakeAuthFilter( account_quotas.AccountQuotaMiddleware(FakeApp(headers))) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', method='PUT', headers={'x-auth-token': 'bad-secret'}, environ={'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 403) def test_exceed_quota_authorized(self): headers = [('x-account-bytes-used', '1000'), ('x-account-meta-quota-bytes', '0')] app = FakeAuthFilter( account_quotas.AccountQuotaMiddleware(FakeApp(headers))) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', method='PUT', headers={'x-auth-token': 'secret'}, environ={'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 413) def test_under_quota_not_authorized(self): headers = [('x-account-bytes-used', '0'), ('x-account-meta-quota-bytes', '1000')] app = FakeAuthFilter( account_quotas.AccountQuotaMiddleware(FakeApp(headers))) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', method='PUT', headers={'x-auth-token': 'bad-secret'}, environ={'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 403) def 
test_under_quota_authorized(self): headers = [('x-account-bytes-used', '0'), ('x-account-meta-quota-bytes', '1000')] app = FakeAuthFilter( account_quotas.AccountQuotaMiddleware(FakeApp(headers))) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', method='PUT', headers={'x-auth-token': 'secret'}, environ={'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_over_quota_container_create_still_works(self): headers = [('x-account-bytes-used', '1001'), ('x-account-meta-quota-bytes', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/new_container', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_CONTAINER_META_BERT': 'ernie', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_over_quota_container_post_still_works(self): headers = [('x-account-bytes-used', '1001'), ('x-account-meta-quota-bytes', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/new_container', environ={'REQUEST_METHOD': 'POST', 'HTTP_X_CONTAINER_META_BERT': 'ernie', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_over_quota_obj_post_still_works(self): headers = [('x-account-bytes-used', '1001'), ('x-account-meta-quota-bytes', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'POST', 'HTTP_X_OBJECT_META_BERT': 'ernie', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_exceed_bytes_quota_copy_from(self): headers = [('x-account-bytes-used', '500'), ('x-account-meta-quota-bytes', '1000'), ('content-length', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}, headers={'x-copy-from': '/c2/o2'}) res = req.get_response(app) self.assertEqual(res.status_int, 413) self.assertEqual(res.body, 'Upload exceeds quota.') def test_exceed_bytes_quota_copy_verb(self): headers = [('x-account-bytes-used', '500'), ('x-account-meta-quota-bytes', '1000'), ('content-length', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c2/o2', environ={'REQUEST_METHOD': 'COPY', 'swift.cache': cache}, headers={'Destination': '/c/o'}) res = req.get_response(app) self.assertEqual(res.status_int, 413) self.assertEqual(res.body, 'Upload exceeds quota.') def test_not_exceed_bytes_quota_copy_from(self): headers = [('x-account-bytes-used', '0'), ('x-account-meta-quota-bytes', '1000'), ('content-length', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}, headers={'x-copy-from': '/c2/o2'}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_not_exceed_bytes_quota_copy_verb(self): headers = [('x-account-bytes-used', '0'), ('x-account-meta-quota-bytes', '1000'), ('content-length', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c2/o2', environ={'REQUEST_METHOD': 'COPY', 'swift.cache': cache}, headers={'Destination': '/c/o'}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def 
test_quota_copy_from_no_src(self): headers = [('x-account-bytes-used', '0'), ('x-account-meta-quota-bytes', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}, headers={'x-copy-from': '/c2/o3'}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_quota_copy_from_bad_src(self): headers = [('x-account-bytes-used', '0'), ('x-account-meta-quota-bytes', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}, headers={'x-copy-from': 'bad_path'}) res = req.get_response(app) self.assertEqual(res.status_int, 412) def test_exceed_bytes_quota_reseller(self): headers = [('x-account-bytes-used', '1000'), ('x-account-meta-quota-bytes', '0')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache, 'reseller_request': True}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_exceed_bytes_quota_reseller_copy_from(self): headers = [('x-account-bytes-used', '500'), ('x-account-meta-quota-bytes', '1000'), ('content-length', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache, 'reseller_request': True}, headers={'x-copy-from': 'c2/o2'}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_exceed_bytes_quota_reseller_copy_verb(self): headers = [('x-account-bytes-used', '500'), ('x-account-meta-quota-bytes', '1000'), ('content-length', '1000')] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c2/o2', environ={'REQUEST_METHOD': 'COPY', 'swift.cache': cache, 'reseller_request': True}, headers={'Destination': 'c/o'}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_bad_application_quota(self): headers = [] app = account_quotas.AccountQuotaMiddleware(FakeBadApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 404) def test_no_info_quota(self): headers = [] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_not_exceed_bytes_quota(self): headers = [('x-account-bytes-used', '1000'), ('x-account-meta-quota-bytes', 2000)] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_invalid_quotas(self): headers = [('x-account-bytes-used', '0'), ] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a', environ={'REQUEST_METHOD': 'POST', 'swift.cache': cache, 'HTTP_X_ACCOUNT_META_QUOTA_BYTES': 'abc', 'reseller_request': True}) res = req.get_response(app) self.assertEqual(res.status_int, 400) def test_valid_quotas_admin(self): headers = [('x-account-bytes-used', '0'), ] app = 
account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a', environ={'REQUEST_METHOD': 'POST', 'swift.cache': cache, 'HTTP_X_ACCOUNT_META_QUOTA_BYTES': '100'}) res = req.get_response(app) self.assertEqual(res.status_int, 403) def test_valid_quotas_reseller(self): headers = [('x-account-bytes-used', '0'), ] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a', environ={'REQUEST_METHOD': 'POST', 'swift.cache': cache, 'HTTP_X_ACCOUNT_META_QUOTA_BYTES': '100', 'reseller_request': True}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_delete_quotas(self): headers = [('x-account-bytes-used', '0'), ] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a', environ={'REQUEST_METHOD': 'POST', 'swift.cache': cache, 'HTTP_X_ACCOUNT_META_QUOTA_BYTES': ''}) res = req.get_response(app) self.assertEqual(res.status_int, 403) def test_delete_quotas_with_remove_header(self): headers = [('x-account-bytes-used', '0'), ] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a', environ={ 'REQUEST_METHOD': 'POST', 'swift.cache': cache, 'HTTP_X_REMOVE_ACCOUNT_META_QUOTA_BYTES': 'True'}) res = req.get_response(app) self.assertEqual(res.status_int, 403) def test_delete_quotas_reseller(self): headers = [('x-account-bytes-used', '0'), ] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) req = Request.blank('/v1/a', environ={'REQUEST_METHOD': 'POST', 'HTTP_X_ACCOUNT_META_QUOTA_BYTES': '', 'reseller_request': True}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_delete_quotas_with_remove_header_reseller(self): headers = [('x-account-bytes-used', '0'), ] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1/a', environ={ 'REQUEST_METHOD': 'POST', 'swift.cache': cache, 'HTTP_X_REMOVE_ACCOUNT_META_QUOTA_BYTES': 'True', 'reseller_request': True}) res = req.get_response(app) self.assertEqual(res.status_int, 200) def test_invalid_request_exception(self): headers = [('x-account-bytes-used', '1000'), ] app = account_quotas.AccountQuotaMiddleware(FakeApp(headers)) cache = FakeCache(None) req = Request.blank('/v1', environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache}) res = req.get_response(app) # Response code of 200 because authentication itself is not done here self.assertEqual(res.status_int, 200) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/helpers.py0000664000567000056710000001312513024044354023376 0ustar jenkinsjenkins00000000000000# Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # This stuff can't live in test/unit/__init__.py due to its swob dependency. 
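# A minimal usage sketch (illustrative; SomeMiddleware is a hypothetical
# stand-in for whatever middleware a test exercises): register a canned
# response with the FakeSwift defined below, wrap it, and drive a request
# through it.
#
#     from swift.common import swob
#
#     fake_swift = FakeSwift()
#     fake_swift.register('GET', '/v1/a/c/o', swob.HTTPOk,
#                         {'Content-Length': '3'}, 'abc')
#     app = SomeMiddleware(fake_swift, {})
#     resp = swob.Request.blank('/v1/a/c/o').get_response(app)
#     # fake_swift.calls == [('GET', '/v1/a/c/o')]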
from collections import defaultdict from copy import deepcopy from hashlib import md5 from swift.common import swob from swift.common.header_key_dict import HeaderKeyDict from swift.common.utils import split_path from test.unit import FakeLogger, FakeRing class LeakTrackingIter(object): def __init__(self, inner_iter, fake_swift, path): self.inner_iter = inner_iter self.fake_swift = fake_swift self.path = path def __iter__(self): for x in self.inner_iter: yield x def close(self): self.fake_swift.mark_closed(self.path) class FakeSwift(object): """ A good-enough fake Swift proxy server to use in testing middleware. """ def __init__(self): self._calls = [] self._unclosed_req_paths = defaultdict(int) self.req_method_paths = [] self.swift_sources = [] self.uploaded = {} # mapping of (method, path) --> (response class, headers, body) self._responses = {} self.logger = FakeLogger('fake-swift') self.account_ring = FakeRing() self.container_ring = FakeRing() self.get_object_ring = lambda policy_index: FakeRing() def _find_response(self, method, path): resp = self._responses[(method, path)] if isinstance(resp, list): try: resp = resp.pop(0) except IndexError: raise IndexError("Didn't find any more %r " "in allowed responses" % ( (method, path),)) return resp def __call__(self, env, start_response): method = env['REQUEST_METHOD'] path = env['PATH_INFO'] _, acc, cont, obj = split_path(env['PATH_INFO'], 0, 4, rest_with_last=True) if env.get('QUERY_STRING'): path += '?' + env['QUERY_STRING'] if 'swift.authorize' in env: resp = env['swift.authorize'](swob.Request(env)) if resp: return resp(env, start_response) req_headers = swob.Request(env).headers self.swift_sources.append(env.get('swift.source')) try: resp_class, raw_headers, body = self._find_response(method, path) headers = HeaderKeyDict(raw_headers) except KeyError: if (env.get('QUERY_STRING') and (method, env['PATH_INFO']) in self._responses): resp_class, raw_headers, body = self._find_response( method, env['PATH_INFO']) headers = HeaderKeyDict(raw_headers) elif method == 'HEAD' and ('GET', path) in self._responses: resp_class, raw_headers, body = self._find_response( 'GET', path) body = None headers = HeaderKeyDict(raw_headers) elif method == 'GET' and obj and path in self.uploaded: resp_class = swob.HTTPOk headers, body = self.uploaded[path] else: raise KeyError("Didn't find %r in allowed responses" % ( (method, path),)) self._calls.append((method, path, req_headers)) # simulate object PUT if method == 'PUT' and obj: input = env['wsgi.input'].read() etag = md5(input).hexdigest() headers.setdefault('Etag', etag) headers.setdefault('Content-Length', len(input)) # keep it for subsequent GET requests later self.uploaded[path] = (deepcopy(headers), input) if "CONTENT_TYPE" in env: self.uploaded[path][0]['Content-Type'] = env["CONTENT_TYPE"] # range requests ought to work, hence conditional_response=True req = swob.Request(env) resp = resp_class(req=req, headers=headers, body=body, conditional_response=True) wsgi_iter = resp(env, start_response) self.mark_opened(path) return LeakTrackingIter(wsgi_iter, self, path) def mark_opened(self, path): self._unclosed_req_paths[path] += 1 def mark_closed(self, path): self._unclosed_req_paths[path] -= 1 @property def unclosed_requests(self): return {path: count for path, count in self._unclosed_req_paths.items() if count > 0} @property def calls(self): return [(method, path) for method, path, headers in self._calls] @property def headers(self): return [headers for method, path, headers in self._calls] @property 
def calls_with_headers(self): return self._calls @property def call_count(self): return len(self._calls) def register(self, method, path, response_class, headers, body=''): self._responses[(method, path)] = (response_class, headers, body) def register_responses(self, method, path, responses): self._responses[(method, path)] = list(responses) swift-2.7.1/test/unit/common/middleware/test_gatekeeper.py0000664000567000056710000002163213024044354025111 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from swift.common.swob import Request, Response from swift.common.middleware import gatekeeper class FakeApp(object): def __init__(self, headers=None): if headers is None: headers = {} self.headers = headers self.req = None def __call__(self, env, start_response): self.req = Request(env) return Response(request=self.req, body='FAKE APP', headers=self.headers)(env, start_response) class FakeMiddleware(object): def __init__(self, app, conf, header_list=None): self.app = app self.conf = conf self.header_list = header_list def __call__(self, env, start_response): def fake_resp(status, response_headers, exc_info=None): for i in self.header_list: response_headers.append(i) return start_response(status, response_headers, exc_info) return self.app(env, fake_resp) class TestGatekeeper(unittest.TestCase): methods = ['PUT', 'POST', 'GET', 'DELETE', 'HEAD', 'COPY', 'OPTIONS'] allowed_headers = {'xx-account-sysmeta-foo': 'value', 'xx-container-sysmeta-foo': 'value', 'xx-object-sysmeta-foo': 'value', 'x-account-meta-foo': 'value', 'x-container-meta-foo': 'value', 'x-object-meta-foo': 'value', 'x-timestamp-foo': 'value'} sysmeta_headers = {'x-account-sysmeta-': 'value', 'x-container-sysmeta-': 'value', 'x-object-sysmeta-': 'value', 'x-account-sysmeta-foo': 'value', 'x-container-sysmeta-foo': 'value', 'x-object-sysmeta-foo': 'value', 'X-Account-Sysmeta-BAR': 'value', 'X-Container-Sysmeta-BAR': 'value', 'X-Object-Sysmeta-BAR': 'value'} x_backend_headers = {'X-Backend-Replication': 'true', 'X-Backend-Replication-Headers': 'stuff'} x_timestamp_headers = {'X-Timestamp': '1455952805.719739'} forbidden_headers_out = dict(sysmeta_headers.items() + x_backend_headers.items()) forbidden_headers_in = dict(sysmeta_headers.items() + x_backend_headers.items()) shunted_headers_in = dict(x_timestamp_headers.items()) def _assertHeadersEqual(self, expected, actual): for key in expected: self.assertTrue(key.lower() in actual, '%s missing from %s' % (key, actual)) def _assertHeadersAbsent(self, unexpected, actual): for key in unexpected: self.assertTrue(key.lower() not in actual, '%s is in %s' % (key, actual)) def get_app(self, app, global_conf, **local_conf): factory = gatekeeper.filter_factory(global_conf, **local_conf) return factory(app) def test_ok_header(self): req = Request.blank('/v/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers=self.allowed_headers) fake_app = FakeApp() app = self.get_app(fake_app, {}) resp = req.get_response(app) 
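        # the allowed (non-reserved) headers must reach the wrapped app unchanged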
self.assertEqual('200 OK', resp.status) self.assertEqual(resp.body, 'FAKE APP') self._assertHeadersEqual(self.allowed_headers, fake_app.req.headers) def _test_reserved_header_removed_inbound(self, method): headers = dict(self.forbidden_headers_in) headers.update(self.allowed_headers) headers.update(self.shunted_headers_in) req = Request.blank('/v/a/c', environ={'REQUEST_METHOD': method}, headers=headers) fake_app = FakeApp() app = self.get_app(fake_app, {}) resp = req.get_response(app) self.assertEqual('200 OK', resp.status) expected_headers = dict(self.allowed_headers) # shunt_inbound_x_timestamp should be enabled by default expected_headers.update({'X-Backend-Inbound-' + k: v for k, v in self.shunted_headers_in.items()}) self._assertHeadersEqual(expected_headers, fake_app.req.headers) unexpected_headers = dict(self.forbidden_headers_in.items() + self.shunted_headers_in.items()) self._assertHeadersAbsent(unexpected_headers, fake_app.req.headers) def test_reserved_header_removed_inbound(self): for method in self.methods: self._test_reserved_header_removed_inbound(method) def _test_reserved_header_shunted_inbound(self, method): headers = dict(self.shunted_headers_in) headers.update(self.allowed_headers) req = Request.blank('/v/a/c', environ={'REQUEST_METHOD': method}, headers=headers) fake_app = FakeApp() app = self.get_app(fake_app, {}, shunt_inbound_x_timestamp='true') resp = req.get_response(app) self.assertEqual('200 OK', resp.status) expected_headers = dict(self.allowed_headers) expected_headers.update({'X-Backend-Inbound-' + k: v for k, v in self.shunted_headers_in.items()}) self._assertHeadersEqual(expected_headers, fake_app.req.headers) self._assertHeadersAbsent(self.shunted_headers_in, fake_app.req.headers) def test_reserved_header_shunted_inbound(self): for method in self.methods: self._test_reserved_header_shunted_inbound(method) def _test_reserved_header_shunt_bypassed_inbound(self, method): headers = dict(self.shunted_headers_in) headers.update(self.allowed_headers) req = Request.blank('/v/a/c', environ={'REQUEST_METHOD': method}, headers=headers) fake_app = FakeApp() app = self.get_app(fake_app, {}, shunt_inbound_x_timestamp='false') resp = req.get_response(app) self.assertEqual('200 OK', resp.status) expected_headers = dict(self.allowed_headers.items() + self.shunted_headers_in.items()) self._assertHeadersEqual(expected_headers, fake_app.req.headers) def test_reserved_header_shunt_bypassed_inbound(self): for method in self.methods: self._test_reserved_header_shunt_bypassed_inbound(method) def _test_reserved_header_removed_outbound(self, method): headers = dict(self.forbidden_headers_out) headers.update(self.allowed_headers) req = Request.blank('/v/a/c', environ={'REQUEST_METHOD': method}) fake_app = FakeApp(headers=headers) app = self.get_app(fake_app, {}) resp = req.get_response(app) self.assertEqual('200 OK', resp.status) self._assertHeadersEqual(self.allowed_headers, resp.headers) self._assertHeadersAbsent(self.forbidden_headers_out, resp.headers) def test_reserved_header_removed_outbound(self): for method in self.methods: self._test_reserved_header_removed_outbound(method) def _test_duplicate_headers_not_removed(self, method, app_hdrs): def fake_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) headers = [('X-Header', 'xxx'), ('X-Header', 'yyy')] def fake_filter(app): return FakeMiddleware(app, conf, headers) return fake_filter def fake_start_response(status, response_headers, exc_info=None): hdr_list = [] for k, v in response_headers: 
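            # collect every X-Header value; the gatekeeper must not drop duplicate headers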
if k == 'X-Header': hdr_list.append(v) self.assertTrue('xxx' in hdr_list) self.assertTrue('yyy' in hdr_list) self.assertEqual(len(hdr_list), 2) req = Request.blank('/v/a/c', environ={'REQUEST_METHOD': method}) fake_app = FakeApp(headers=app_hdrs) factory = gatekeeper.filter_factory({}) factory_wrap = fake_factory({}) app = factory(factory_wrap(fake_app)) app(req.environ, fake_start_response) def test_duplicate_headers_not_removed(self): for method in self.methods: for app_hdrs in ({}, self.forbidden_headers_out): self._test_duplicate_headers_not_removed(method, app_hdrs) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_tempauth.py0000664000567000056710000021451513024044354024630 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2011-2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import json import unittest from contextlib import contextmanager from base64 import b64encode from time import time import mock from swift.common.middleware import tempauth as auth from swift.common.middleware.acl import format_acl from swift.common.swob import Request, Response from swift.common.utils import split_path NO_CONTENT_RESP = (('204 No Content', {}, ''),) # mock server response class FakeMemcache(object): def __init__(self): self.store = {} def get(self, key): return self.store.get(key) def set(self, key, value, time=0): self.store[key] = value return True def incr(self, key, time=0): self.store[key] = self.store.setdefault(key, 0) + 1 return self.store[key] @contextmanager def soft_lock(self, key, timeout=0, retries=5): yield True def delete(self, key): try: del self.store[key] except Exception: pass return True class FakeApp(object): def __init__(self, status_headers_body_iter=None, acl=None, sync_key=None): self.calls = 0 self.status_headers_body_iter = status_headers_body_iter if not self.status_headers_body_iter: self.status_headers_body_iter = iter([('404 Not Found', {}, '')]) self.acl = acl self.sync_key = sync_key def __call__(self, env, start_response): self.calls += 1 self.request = Request(env) if self.acl: self.request.acl = self.acl if self.sync_key: self.request.environ['swift_sync_key'] = self.sync_key if 'swift.authorize' in env: resp = env['swift.authorize'](self.request) if resp: return resp(env, start_response) status, headers, body = next(self.status_headers_body_iter) return Response(status=status, headers=headers, body=body)(env, start_response) class FakeConn(object): def __init__(self, status_headers_body_iter=None): self.calls = 0 self.status_headers_body_iter = status_headers_body_iter if not self.status_headers_body_iter: self.status_headers_body_iter = iter([('404 Not Found', {}, '')]) def request(self, method, path, headers): self.calls += 1 self.request_path = path self.status, self.headers, self.body = \ next(self.status_headers_body_iter) self.status, self.reason = self.status.split(' ', 1) self.status = int(self.status) def getresponse(self): return self def read(self): body = self.body 
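        # the body is consumed by the first read(); subsequent reads return ''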
self.body = '' return body class TestAuth(unittest.TestCase): def setUp(self): self.test_auth = auth.filter_factory({})(FakeApp()) def _make_request(self, path, **kwargs): req = Request.blank(path, **kwargs) req.environ['swift.cache'] = FakeMemcache() return req def test_reseller_prefix_init(self): app = FakeApp() ath = auth.filter_factory({})(app) self.assertEqual(ath.reseller_prefix, 'AUTH_') self.assertEqual(ath.reseller_prefixes, ['AUTH_']) ath = auth.filter_factory({'reseller_prefix': 'TEST'})(app) self.assertEqual(ath.reseller_prefix, 'TEST_') self.assertEqual(ath.reseller_prefixes, ['TEST_']) ath = auth.filter_factory({'reseller_prefix': 'TEST_'})(app) self.assertEqual(ath.reseller_prefix, 'TEST_') self.assertEqual(ath.reseller_prefixes, ['TEST_']) ath = auth.filter_factory({'reseller_prefix': ''})(app) self.assertEqual(ath.reseller_prefix, '') self.assertEqual(ath.reseller_prefixes, ['']) ath = auth.filter_factory({'reseller_prefix': ' '})(app) self.assertEqual(ath.reseller_prefix, '') self.assertEqual(ath.reseller_prefixes, ['']) ath = auth.filter_factory({'reseller_prefix': ' '' '})(app) self.assertEqual(ath.reseller_prefix, '') self.assertEqual(ath.reseller_prefixes, ['']) ath = auth.filter_factory({'reseller_prefix': " '', TEST"})(app) self.assertEqual(ath.reseller_prefix, '') self.assertTrue('' in ath.reseller_prefixes) self.assertTrue('TEST_' in ath.reseller_prefixes) def test_auth_prefix_init(self): app = FakeApp() ath = auth.filter_factory({})(app) self.assertEqual(ath.auth_prefix, '/auth/') ath = auth.filter_factory({'auth_prefix': ''})(app) self.assertEqual(ath.auth_prefix, '/auth/') ath = auth.filter_factory({'auth_prefix': '/'})(app) self.assertEqual(ath.auth_prefix, '/auth/') ath = auth.filter_factory({'auth_prefix': '/test/'})(app) self.assertEqual(ath.auth_prefix, '/test/') ath = auth.filter_factory({'auth_prefix': '/test'})(app) self.assertEqual(ath.auth_prefix, '/test/') ath = auth.filter_factory({'auth_prefix': 'test/'})(app) self.assertEqual(ath.auth_prefix, '/test/') ath = auth.filter_factory({'auth_prefix': 'test'})(app) self.assertEqual(ath.auth_prefix, '/test/') def test_top_level_deny(self): req = self._make_request('/') resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(req.environ['swift.authorize'], self.test_auth.denied_response) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="unknown"') def test_anon(self): req = self._make_request('/v1/AUTH_account') resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(req.environ['swift.authorize'], self.test_auth.authorize) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_account"') def test_anon_badpath(self): req = self._make_request('/v1') resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="unknown"') def test_override_asked_for_but_not_allowed(self): self.test_auth = \ auth.filter_factory({'allow_overrides': 'false'})(FakeApp()) req = self._make_request('/v1/AUTH_account', environ={'swift.authorize_override': True}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_account"') self.assertEqual(req.environ['swift.authorize'], self.test_auth.authorize) def test_override_asked_for_and_allowed(self): self.test_auth = \ auth.filter_factory({'allow_overrides': 'true'})(FakeApp()) req = 
self._make_request('/v1/AUTH_account', environ={'swift.authorize_override': True}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 404) self.assertTrue('swift.authorize' not in req.environ) def test_override_default_allowed(self): req = self._make_request('/v1/AUTH_account', environ={'swift.authorize_override': True}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 404) self.assertTrue('swift.authorize' not in req.environ) def test_auth_deny_non_reseller_prefix(self): req = self._make_request('/v1/BLAH_account', headers={'X-Auth-Token': 'BLAH_t'}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="BLAH_account"') self.assertEqual(req.environ['swift.authorize'], self.test_auth.denied_response) def test_auth_deny_non_reseller_prefix_no_override(self): fake_authorize = lambda x: Response(status='500 Fake') req = self._make_request('/v1/BLAH_account', headers={'X-Auth-Token': 'BLAH_t'}, environ={'swift.authorize': fake_authorize} ) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 500) self.assertEqual(req.environ['swift.authorize'], fake_authorize) def test_auth_no_reseller_prefix_deny(self): # Ensures that when we have no reseller prefix, we don't deny a request # outright but set up a denial swift.authorize and pass the request on # down the chain. local_app = FakeApp() local_auth = auth.filter_factory({'reseller_prefix': ''})(local_app) req = self._make_request('/v1/account', headers={'X-Auth-Token': 't'}) resp = req.get_response(local_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="account"') self.assertEqual(local_app.calls, 1) self.assertEqual(req.environ['swift.authorize'], local_auth.denied_response) def test_auth_reseller_prefix_with_s3_deny(self): # Ensures that when we have a reseller prefix and using a middleware # relying on Http-Authorization (for example swift3), we don't deny a # request outright but set up a denial swift.authorize and pass the # request on down the chain. local_app = FakeApp() local_auth = auth.filter_factory({'reseller_prefix': 'PRE'})(local_app) req = self._make_request('/v1/account', headers={'X-Auth-Token': 't', 'Authorization': 'AWS user:pw'}) resp = req.get_response(local_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(local_app.calls, 1) self.assertEqual(req.environ['swift.authorize'], local_auth.denied_response) def test_auth_with_s3_authorization(self): local_app = FakeApp() local_auth = auth.filter_factory( {'user_s3_s3': 's3 .admin'})(local_app) req = self._make_request('/v1/AUTH_s3', headers={'X-Auth-Token': 't', 'AUTHORIZATION': 'AWS s3:s3:pass'}) with mock.patch('base64.urlsafe_b64decode') as msg, \ mock.patch('base64.encodestring') as sign: msg.return_value = '' sign.return_value = 'pass' resp = req.get_response(local_auth) self.assertEqual(resp.status_int, 404) self.assertEqual(local_app.calls, 1) self.assertEqual(req.environ['swift.authorize'], local_auth.authorize) def test_auth_no_reseller_prefix_no_token(self): # Check that normally we set up a call back to our authorize. 
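        # (blank reseller prefix configured, and no X-Auth-Token supplied)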
local_auth = auth.filter_factory({'reseller_prefix': ''})(FakeApp()) req = self._make_request('/v1/account') resp = req.get_response(local_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="account"') self.assertEqual(req.environ['swift.authorize'], local_auth.authorize) # Now make sure we don't override an existing swift.authorize when we # have no reseller prefix. local_auth = \ auth.filter_factory({'reseller_prefix': ''})(FakeApp()) local_authorize = lambda req: Response('test') req = self._make_request('/v1/account', environ={'swift.authorize': local_authorize}) resp = req.get_response(local_auth) self.assertEqual(req.environ['swift.authorize'], local_authorize) self.assertEqual(resp.status_int, 200) def test_auth_fail(self): resp = self._make_request( '/v1/AUTH_cfa', headers={'X-Auth-Token': 'AUTH_t'}).get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_cfa"') def test_authorize_bad_path(self): req = self._make_request('/badpath') resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="unknown"') req = self._make_request('/badpath') req.remote_user = 'act:usr,act,AUTH_cfa' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) def test_authorize_account_access(self): req = self._make_request('/v1/AUTH_cfa') req.remote_user = 'act:usr,act,AUTH_cfa' self.assertEqual(self.test_auth.authorize(req), None) req = self._make_request('/v1/AUTH_cfa') req.remote_user = 'act:usr,act' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) def test_authorize_acl_group_access(self): self.test_auth = auth.filter_factory({})( FakeApp(iter(NO_CONTENT_RESP * 3))) req = self._make_request('/v1/AUTH_cfa') req.remote_user = 'act:usr,act' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) req = self._make_request('/v1/AUTH_cfa') req.remote_user = 'act:usr,act' req.acl = 'act' self.assertEqual(self.test_auth.authorize(req), None) req = self._make_request('/v1/AUTH_cfa') req.remote_user = 'act:usr,act' req.acl = 'act:usr' self.assertEqual(self.test_auth.authorize(req), None) req = self._make_request('/v1/AUTH_cfa') req.remote_user = 'act:usr,act' req.acl = 'act2' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) req = self._make_request('/v1/AUTH_cfa') req.remote_user = 'act:usr,act' req.acl = 'act:usr2' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) def test_deny_cross_reseller(self): # Tests that cross-reseller is denied, even if ACLs/group names match req = self._make_request('/v1/OTHER_cfa') req.remote_user = 'act:usr,act,AUTH_cfa' req.acl = 'act' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) def test_authorize_acl_referer_after_user_groups(self): req = self._make_request('/v1/AUTH_cfa/c') req.remote_user = 'act:usr' req.acl = '.r:*,act:usr' self.assertEqual(self.test_auth.authorize(req), None) def test_authorize_acl_referrer_access(self): self.test_auth = auth.filter_factory({})( FakeApp(iter(NO_CONTENT_RESP * 6))) req = self._make_request('/v1/AUTH_cfa/c') req.remote_user = 'act:usr,act' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) req = self._make_request('/v1/AUTH_cfa/c') req.remote_user = 'act:usr,act' req.acl = '.r:*,.rlistings' self.assertEqual(self.test_auth.authorize(req), None) 
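        # without '.rlistings' in the ACL, a bare '.r:*' referrer ACL does not permit container listings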
req = self._make_request('/v1/AUTH_cfa/c') req.remote_user = 'act:usr,act' req.acl = '.r:*' # No listings allowed resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) req = self._make_request('/v1/AUTH_cfa/c') req.remote_user = 'act:usr,act' req.acl = '.r:.example.com,.rlistings' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) req = self._make_request('/v1/AUTH_cfa/c') req.remote_user = 'act:usr,act' req.referer = 'http://www.example.com/index.html' req.acl = '.r:.example.com,.rlistings' self.assertEqual(self.test_auth.authorize(req), None) req = self._make_request('/v1/AUTH_cfa/c') resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_cfa"') req = self._make_request('/v1/AUTH_cfa/c') req.acl = '.r:*,.rlistings' self.assertEqual(self.test_auth.authorize(req), None) req = self._make_request('/v1/AUTH_cfa/c') req.acl = '.r:*' # No listings allowed resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_cfa"') req = self._make_request('/v1/AUTH_cfa/c') req.acl = '.r:.example.com,.rlistings' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_cfa"') req = self._make_request('/v1/AUTH_cfa/c') req.referer = 'http://www.example.com/index.html' req.acl = '.r:.example.com,.rlistings' self.assertEqual(self.test_auth.authorize(req), None) def test_detect_reseller_request(self): req = self._make_request('/v1/AUTH_admin', headers={'X-Auth-Token': 'AUTH_t'}) cache_key = 'AUTH_/token/AUTH_t' cache_entry = (time() + 3600, '.reseller_admin') req.environ['swift.cache'].set(cache_key, cache_entry) req.get_response(self.test_auth) self.assertTrue(req.environ.get('reseller_request', False)) def test_account_put_permissions(self): self.test_auth = auth.filter_factory({})( FakeApp(iter(NO_CONTENT_RESP * 4))) req = self._make_request('/v1/AUTH_new', environ={'REQUEST_METHOD': 'PUT'}) req.remote_user = 'act:usr,act' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) req = self._make_request('/v1/AUTH_new', environ={'REQUEST_METHOD': 'PUT'}) req.remote_user = 'act:usr,act,AUTH_other' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) # Even PUTs to your own account as account admin should fail req = self._make_request('/v1/AUTH_old', environ={'REQUEST_METHOD': 'PUT'}) req.remote_user = 'act:usr,act,AUTH_old' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) req = self._make_request('/v1/AUTH_new', environ={'REQUEST_METHOD': 'PUT'}) req.remote_user = 'act:usr,act,.reseller_admin' resp = self.test_auth.authorize(req) self.assertEqual(resp, None) # .super_admin is not something the middleware should ever see or care # about req = self._make_request('/v1/AUTH_new', environ={'REQUEST_METHOD': 'PUT'}) req.remote_user = 'act:usr,act,.super_admin' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) def test_account_delete_permissions(self): self.test_auth = auth.filter_factory({})( FakeApp(iter(NO_CONTENT_RESP * 4))) req = self._make_request('/v1/AUTH_new', environ={'REQUEST_METHOD': 'DELETE'}) req.remote_user = 'act:usr,act' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) req = self._make_request('/v1/AUTH_new', environ={'REQUEST_METHOD': 'DELETE'}) 
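        # admin rights on a different account (AUTH_other) don't allow deleting this one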
req.remote_user = 'act:usr,act,AUTH_other' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) # Even DELETEs to your own account as account admin should fail req = self._make_request('/v1/AUTH_old', environ={'REQUEST_METHOD': 'DELETE'}) req.remote_user = 'act:usr,act,AUTH_old' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) req = self._make_request('/v1/AUTH_new', environ={'REQUEST_METHOD': 'DELETE'}) req.remote_user = 'act:usr,act,.reseller_admin' resp = self.test_auth.authorize(req) self.assertEqual(resp, None) # .super_admin is not something the middleware should ever see or care # about req = self._make_request('/v1/AUTH_new', environ={'REQUEST_METHOD': 'DELETE'}) req.remote_user = 'act:usr,act,.super_admin' resp = self.test_auth.authorize(req) self.assertEqual(resp.status_int, 403) def test_get_token_success(self): # Example of how to simulate the auth transaction test_auth = auth.filter_factory({'user_ac_user': 'testing'})(FakeApp()) req = self._make_request( '/auth/v1.0', headers={'X-Auth-User': 'ac:user', 'X-Auth-Key': 'testing'}) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 200) self.assertTrue(resp.headers['x-storage-url'].endswith('/v1/AUTH_ac')) self.assertTrue(resp.headers['x-auth-token'].startswith('AUTH_')) self.assertEqual(resp.headers['x-auth-token'], resp.headers['x-storage-token']) self.assertAlmostEqual(int(resp.headers['x-auth-token-expires']), auth.DEFAULT_TOKEN_LIFE - 0.5, delta=0.5) self.assertGreater(len(resp.headers['x-auth-token']), 10) def test_get_token_success_other_auth_prefix(self): test_auth = auth.filter_factory({'user_ac_user': 'testing', 'auth_prefix': '/other/'})(FakeApp()) req = self._make_request( '/other/v1.0', headers={'X-Auth-User': 'ac:user', 'X-Auth-Key': 'testing'}) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 200) self.assertTrue(resp.headers['x-storage-url'].endswith('/v1/AUTH_ac')) self.assertTrue(resp.headers['x-auth-token'].startswith('AUTH_')) self.assertTrue(len(resp.headers['x-auth-token']) > 10) def test_use_token_success(self): # Example of how to simulate an authorized request test_auth = auth.filter_factory({'user_acct_user': 'testing'})( FakeApp(iter(NO_CONTENT_RESP * 1))) req = self._make_request('/v1/AUTH_acct', headers={'X-Auth-Token': 'AUTH_t'}) cache_key = 'AUTH_/token/AUTH_t' cache_entry = (time() + 3600, 'AUTH_acct') req.environ['swift.cache'].set(cache_key, cache_entry) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) def test_get_token_fail(self): resp = self._make_request('/auth/v1.0').get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="unknown"') resp = self._make_request( '/auth/v1.0', headers={'X-Auth-User': 'act:usr', 'X-Auth-Key': 'key'}).get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="act"') def test_get_token_fail_invalid_x_auth_user_format(self): resp = self._make_request( '/auth/v1/act/auth', headers={'X-Auth-User': 'usr', 'X-Auth-Key': 'key'}).get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="act"') def test_get_token_fail_non_matching_account_in_request(self): resp = self._make_request( '/auth/v1/act/auth', headers={'X-Auth-User': 'act2:usr', 'X-Auth-Key': 
'key'}).get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="act"') def test_get_token_fail_bad_path(self): resp = self._make_request( '/auth/v1/act/auth/invalid', headers={'X-Auth-User': 'act:usr', 'X-Auth-Key': 'key'}).get_response(self.test_auth) self.assertEqual(resp.status_int, 400) def test_get_token_fail_missing_key(self): resp = self._make_request( '/auth/v1/act/auth', headers={'X-Auth-User': 'act:usr'}).get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="act"') def test_object_name_containing_slash(self): test_auth = auth.filter_factory({'user_acct_user': 'testing'})( FakeApp(iter(NO_CONTENT_RESP * 1))) req = self._make_request('/v1/AUTH_acct/cont/obj/name/with/slash', headers={'X-Auth-Token': 'AUTH_t'}) cache_key = 'AUTH_/token/AUTH_t' cache_entry = (time() + 3600, 'AUTH_acct') req.environ['swift.cache'].set(cache_key, cache_entry) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) def test_storage_url_default(self): self.test_auth = \ auth.filter_factory({'user_test_tester': 'testing'})(FakeApp()) req = self._make_request( '/auth/v1.0', headers={'X-Auth-User': 'test:tester', 'X-Auth-Key': 'testing'}) del req.environ['HTTP_HOST'] req.environ['SERVER_NAME'] = 'bob' req.environ['SERVER_PORT'] = '1234' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['x-storage-url'], 'http://bob:1234/v1/AUTH_test') def test_storage_url_based_on_host(self): self.test_auth = \ auth.filter_factory({'user_test_tester': 'testing'})(FakeApp()) req = self._make_request( '/auth/v1.0', headers={'X-Auth-User': 'test:tester', 'X-Auth-Key': 'testing'}) req.environ['HTTP_HOST'] = 'somehost:5678' req.environ['SERVER_NAME'] = 'bob' req.environ['SERVER_PORT'] = '1234' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['x-storage-url'], 'http://somehost:5678/v1/AUTH_test') def test_storage_url_overridden_scheme(self): self.test_auth = \ auth.filter_factory({'user_test_tester': 'testing', 'storage_url_scheme': 'fake'})(FakeApp()) req = self._make_request( '/auth/v1.0', headers={'X-Auth-User': 'test:tester', 'X-Auth-Key': 'testing'}) req.environ['HTTP_HOST'] = 'somehost:5678' req.environ['SERVER_NAME'] = 'bob' req.environ['SERVER_PORT'] = '1234' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['x-storage-url'], 'fake://somehost:5678/v1/AUTH_test') def test_use_old_token_from_memcached(self): self.test_auth = \ auth.filter_factory({'user_test_tester': 'testing', 'storage_url_scheme': 'fake'})(FakeApp()) req = self._make_request( '/auth/v1.0', headers={'X-Auth-User': 'test:tester', 'X-Auth-Key': 'testing'}) req.environ['HTTP_HOST'] = 'somehost:5678' req.environ['SERVER_NAME'] = 'bob' req.environ['SERVER_PORT'] = '1234' req.environ['swift.cache'].set('AUTH_/user/test:tester', 'uuid_token') expires = time() + 180 req.environ['swift.cache'].set('AUTH_/token/uuid_token', (expires, 'test,test:tester')) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['x-auth-token'], 'uuid_token') self.assertEqual(resp.headers['x-auth-token'], resp.headers['x-storage-token']) self.assertAlmostEqual(int(resp.headers['x-auth-token-expires']), 179.5, delta=0.5) def test_old_token_overdate(self): self.test_auth = \ 
auth.filter_factory({'user_test_tester': 'testing', 'storage_url_scheme': 'fake'})(FakeApp()) req = self._make_request( '/auth/v1.0', headers={'X-Auth-User': 'test:tester', 'X-Auth-Key': 'testing'}) req.environ['HTTP_HOST'] = 'somehost:5678' req.environ['SERVER_NAME'] = 'bob' req.environ['SERVER_PORT'] = '1234' req.environ['swift.cache'].set('AUTH_/user/test:tester', 'uuid_token') req.environ['swift.cache'].set('AUTH_/token/uuid_token', (0, 'test,test:tester')) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 200) self.assertNotEqual(resp.headers['x-auth-token'], 'uuid_token') self.assertEqual(resp.headers['x-auth-token'][:7], 'AUTH_tk') self.assertAlmostEqual(int(resp.headers['x-auth-token-expires']), auth.DEFAULT_TOKEN_LIFE - 0.5, delta=0.5) def test_old_token_with_old_data(self): self.test_auth = \ auth.filter_factory({'user_test_tester': 'testing', 'storage_url_scheme': 'fake'})(FakeApp()) req = self._make_request( '/auth/v1.0', headers={'X-Auth-User': 'test:tester', 'X-Auth-Key': 'testing'}) req.environ['HTTP_HOST'] = 'somehost:5678' req.environ['SERVER_NAME'] = 'bob' req.environ['SERVER_PORT'] = '1234' req.environ['swift.cache'].set('AUTH_/user/test:tester', 'uuid_token') req.environ['swift.cache'].set('AUTH_/token/uuid_token', (time() + 99, 'test,test:tester,.role')) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 200) self.assertNotEqual(resp.headers['x-auth-token'], 'uuid_token') self.assertEqual(resp.headers['x-auth-token'][:7], 'AUTH_tk') self.assertAlmostEqual(int(resp.headers['x-auth-token-expires']), auth.DEFAULT_TOKEN_LIFE - 0.5, delta=0.5) def test_reseller_admin_is_owner(self): orig_authorize = self.test_auth.authorize owner_values = [] def mitm_authorize(req): rv = orig_authorize(req) owner_values.append(req.environ.get('swift_owner', False)) return rv self.test_auth.authorize = mitm_authorize req = self._make_request('/v1/AUTH_cfa', headers={'X-Auth-Token': 'AUTH_t'}) req.remote_user = '.reseller_admin' self.test_auth.authorize(req) self.assertEqual(owner_values, [True]) def test_admin_is_owner(self): orig_authorize = self.test_auth.authorize owner_values = [] def mitm_authorize(req): rv = orig_authorize(req) owner_values.append(req.environ.get('swift_owner', False)) return rv self.test_auth.authorize = mitm_authorize req = self._make_request( '/v1/AUTH_cfa', headers={'X-Auth-Token': 'AUTH_t'}) req.remote_user = 'AUTH_cfa' self.test_auth.authorize(req) self.assertEqual(owner_values, [True]) def test_regular_is_not_owner(self): orig_authorize = self.test_auth.authorize owner_values = [] def mitm_authorize(req): rv = orig_authorize(req) owner_values.append(req.environ.get('swift_owner', False)) return rv self.test_auth.authorize = mitm_authorize req = self._make_request( '/v1/AUTH_cfa/c', headers={'X-Auth-Token': 'AUTH_t'}) req.remote_user = 'act:usr' self.test_auth.authorize(req) self.assertEqual(owner_values, [False]) def test_sync_request_success(self): self.test_auth.app = FakeApp(iter(NO_CONTENT_RESP * 1), sync_key='secret') req = self._make_request( '/v1/AUTH_cfa/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'x-container-sync-key': 'secret', 'x-timestamp': '123.456'}) req.remote_addr = '127.0.0.1' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 204) def test_sync_request_fail_key(self): self.test_auth.app = FakeApp(sync_key='secret') req = self._make_request( '/v1/AUTH_cfa/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'x-container-sync-key': 'wrongsecret', 'x-timestamp': 
'123.456'}) req.remote_addr = '127.0.0.1' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_cfa"') self.test_auth.app = FakeApp(sync_key='othersecret') req = self._make_request( '/v1/AUTH_cfa/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'x-container-sync-key': 'secret', 'x-timestamp': '123.456'}) req.remote_addr = '127.0.0.1' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_cfa"') self.test_auth.app = FakeApp(sync_key=None) req = self._make_request( '/v1/AUTH_cfa/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'x-container-sync-key': 'secret', 'x-timestamp': '123.456'}) req.remote_addr = '127.0.0.1' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_cfa"') def test_sync_request_fail_no_timestamp(self): self.test_auth.app = FakeApp(sync_key='secret') req = self._make_request( '/v1/AUTH_cfa/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'x-container-sync-key': 'secret'}) req.remote_addr = '127.0.0.1' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="AUTH_cfa"') def test_sync_request_success_lb_sync_host(self): self.test_auth.app = FakeApp(iter(NO_CONTENT_RESP * 1), sync_key='secret') req = self._make_request( '/v1/AUTH_cfa/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'x-container-sync-key': 'secret', 'x-timestamp': '123.456', 'x-forwarded-for': '127.0.0.1'}) req.remote_addr = '127.0.0.2' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 204) self.test_auth.app = FakeApp(iter(NO_CONTENT_RESP * 1), sync_key='secret') req = self._make_request( '/v1/AUTH_cfa/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'x-container-sync-key': 'secret', 'x-timestamp': '123.456', 'x-cluster-client-ip': '127.0.0.1'}) req.remote_addr = '127.0.0.2' resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 204) def test_options_call(self): req = self._make_request('/v1/AUTH_cfa/c/o', environ={'REQUEST_METHOD': 'OPTIONS'}) resp = self.test_auth.authorize(req) self.assertEqual(resp, None) def test_get_user_group(self): # More tests in TestGetUserGroups class app = FakeApp() ath = auth.filter_factory({})(app) ath.users = {'test:tester': {'groups': ['.admin']}} groups = ath._get_user_groups('test', 'test:tester', 'AUTH_test') self.assertEqual(groups, 'test,test:tester,AUTH_test') ath.users = {'test:tester': {'groups': []}} groups = ath._get_user_groups('test', 'test:tester', 'AUTH_test') self.assertEqual(groups, 'test,test:tester') def test_auth_scheme(self): req = self._make_request('/v1/BLAH_account', headers={'X-Auth-Token': 'BLAH_t'}) resp = req.get_response(self.test_auth) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual(resp.headers.get('Www-Authenticate'), 'Swift realm="BLAH_account"') class TestAuthWithMultiplePrefixes(TestAuth): """ Repeats all tests in TestAuth except adds multiple reseller_prefix items """ def setUp(self): self.test_auth = auth.filter_factory( {'reseller_prefix': 'AUTH_, SOMEOTHER_, YETANOTHER_'})(FakeApp()) class TestGetUserGroups(unittest.TestCase): def test_custom_url_config(self): app = FakeApp() ath = auth.filter_factory({ 'user_test_tester': 'testing .admin 
http://saio:8080/v1/AUTH_monkey'})(app) groups = ath._get_user_groups('test', 'test:tester', 'AUTH_monkey') self.assertEqual(groups, 'test,test:tester,AUTH_test,AUTH_monkey') def test_no_prefix_reseller(self): app = FakeApp() ath = auth.filter_factory({'reseller_prefix': ''})(app) ath.users = {'test:tester': {'groups': ['.admin']}} groups = ath._get_user_groups('test', 'test:tester', 'test') self.assertEqual(groups, 'test,test:tester') ath.users = {'test:tester': {'groups': []}} groups = ath._get_user_groups('test', 'test:tester', 'test') self.assertEqual(groups, 'test,test:tester') def test_single_reseller(self): app = FakeApp() ath = auth.filter_factory({})(app) ath.users = {'test:tester': {'groups': ['.admin']}} groups = ath._get_user_groups('test', 'test:tester', 'AUTH_test') self.assertEqual(groups, 'test,test:tester,AUTH_test') ath.users = {'test:tester': {'groups': []}} groups = ath._get_user_groups('test', 'test:tester', 'AUTH_test') self.assertEqual(groups, 'test,test:tester') def test_multiple_reseller(self): app = FakeApp() ath = auth.filter_factory( {'reseller_prefix': 'AUTH_, SOMEOTHER_, YETANOTHER_'})(app) self.assertEqual(ath.reseller_prefixes, ['AUTH_', 'SOMEOTHER_', 'YETANOTHER_']) ath.users = {'test:tester': {'groups': ['.admin']}} groups = ath._get_user_groups('test', 'test:tester', 'AUTH_test') self.assertEqual(groups, 'test,test:tester,AUTH_test,' 'SOMEOTHER_test,YETANOTHER_test') ath.users = {'test:tester': {'groups': []}} groups = ath._get_user_groups('test', 'test:tester', 'AUTH_test') self.assertEqual(groups, 'test,test:tester') class TestDefinitiveAuth(unittest.TestCase): def setUp(self): self.test_auth = auth.filter_factory( {'reseller_prefix': 'AUTH_, SOMEOTHER_'})(FakeApp()) def test_noreseller_prefix(self): ath = auth.filter_factory({'reseller_prefix': ''})(FakeApp()) result = ath._is_definitive_auth(path='/v1/test') self.assertEqual(result, False) result = ath._is_definitive_auth(path='/v1/AUTH_test') self.assertEqual(result, False) result = ath._is_definitive_auth(path='/v1/BLAH_test') self.assertEqual(result, False) def test_blank_prefix(self): ath = auth.filter_factory({'reseller_prefix': " '', SOMEOTHER"})(FakeApp()) result = ath._is_definitive_auth(path='/v1/test') self.assertEqual(result, False) result = ath._is_definitive_auth(path='/v1/SOMEOTHER_test') self.assertEqual(result, True) result = ath._is_definitive_auth(path='/v1/SOMEOTHERtest') self.assertEqual(result, False) def test_default_prefix(self): ath = auth.filter_factory({})(FakeApp()) result = ath._is_definitive_auth(path='/v1/AUTH_test') self.assertEqual(result, True) result = ath._is_definitive_auth(path='/v1/BLAH_test') self.assertEqual(result, False) ath = auth.filter_factory({'reseller_prefix': 'AUTH'})(FakeApp()) result = ath._is_definitive_auth(path='/v1/AUTH_test') self.assertEqual(result, True) result = ath._is_definitive_auth(path='/v1/BLAH_test') self.assertEqual(result, False) def test_multiple_prefixes(self): ath = auth.filter_factory({'reseller_prefix': 'AUTH, SOMEOTHER'})(FakeApp()) result = ath._is_definitive_auth(path='/v1/AUTH_test') self.assertEqual(result, True) result = ath._is_definitive_auth(path='/v1/SOMEOTHER_test') self.assertEqual(result, True) result = ath._is_definitive_auth(path='/v1/BLAH_test') self.assertEqual(result, False) class TestParseUserCreation(unittest.TestCase): def test_parse_user_creation(self): auth_filter = auth.filter_factory({ 'reseller_prefix': 'ABC', 'user_test_tester3': 'testing', 'user_has_url': 'urlly .admin http://a.b/v1/DEF_has', 
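            # as the assertions below show, tempauth user values parse as: <key> [group ...] [optional storage URL]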
'user_admin_admin': 'admin .admin .reseller_admin', })(FakeApp()) self.assertEqual(auth_filter.users, { 'admin:admin': { 'url': '$HOST/v1/ABC_admin', 'groups': ['.admin', '.reseller_admin'], 'key': 'admin' }, 'test:tester3': { 'url': '$HOST/v1/ABC_test', 'groups': [], 'key': 'testing' }, 'has:url': { 'url': 'http://a.b/v1/DEF_has', 'groups': ['.admin'], 'key': 'urlly' }, }) def test_base64_encoding(self): auth_filter = auth.filter_factory({ 'reseller_prefix': 'ABC', 'user64_%s_%s' % ( b64encode('test').rstrip('='), b64encode('tester3').rstrip('=')): 'testing .reseller_admin', 'user64_%s_%s' % ( b64encode('user_foo').rstrip('='), b64encode('ab').rstrip('=')): 'urlly .admin http://a.b/v1/DEF_has', })(FakeApp()) self.assertEqual(auth_filter.users, { 'test:tester3': { 'url': '$HOST/v1/ABC_test', 'groups': ['.reseller_admin'], 'key': 'testing' }, 'user_foo:ab': { 'url': 'http://a.b/v1/DEF_has', 'groups': ['.admin'], 'key': 'urlly' }, }) def test_key_with_no_value(self): self.assertRaises(ValueError, auth.filter_factory({ 'user_test_tester3': 'testing', 'user_bob_bobby': '', 'user_admin_admin': 'admin .admin .reseller_admin', }), FakeApp()) class TestAccountAcls(unittest.TestCase): """ These tests use a single reseller prefix (AUTH_) and the target paths are /v1/AUTH_ """ def setUp(self): self.reseller_prefix = {} self.accpre = 'AUTH' def _make_request(self, path, **kwargs): # Our TestAccountAcls default request will have a valid auth token version, acct, _ = split_path(path, 1, 3, True) headers = kwargs.pop('headers', {'X-Auth-Token': 'AUTH_t'}) user_groups = kwargs.pop('user_groups', 'AUTH_firstacct') # The account being accessed will have account ACLs acl = {'admin': ['AUTH_admin'], 'read-write': ['AUTH_rw'], 'read-only': ['AUTH_ro']} header_data = {'core-access-control': format_acl(version=2, acl_dict=acl)} acls = kwargs.pop('acls', header_data) req = Request.blank(path, headers=headers, **kwargs) # Authorize the token by populating the request's cache req.environ['swift.cache'] = FakeMemcache() cache_key = 'AUTH_/token/AUTH_t' cache_entry = (time() + 3600, user_groups) req.environ['swift.cache'].set(cache_key, cache_entry) # Pretend get_account_info returned ACLs in sysmeta, and we cached that cache_key = 'account/%s' % acct cache_entry = {'sysmeta': acls} req.environ['swift.cache'].set(cache_key, cache_entry) return req def _conf(self, moreconf): conf = self.reseller_prefix conf.update(moreconf) return conf def test_account_acl_success(self): test_auth = auth.filter_factory( self._conf({'user_admin_user': 'testing'}))( FakeApp(iter(NO_CONTENT_RESP * 1))) # admin (not a swift admin) wants to read from otheracct req = self._make_request('/v1/%s_otheract' % self.accpre, user_groups="AUTH_admin") # The request returned by _make_request should be allowed resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) def test_account_acl_failures(self): test_auth = auth.filter_factory( self._conf({'user_admin_user': 'testing'}))( FakeApp()) # If I'm not authed as anyone on the ACLs, I shouldn't get in req = self._make_request('/v1/%s_otheract' % self.accpre, user_groups="AUTH_bob") resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 403) # If the target account has no ACLs, a non-owner shouldn't get in req = self._make_request('/v1/%s_otheract' % self.accpre, user_groups="AUTH_admin", acls={}) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 403) def test_admin_privileges(self): test_auth = auth.filter_factory( self._conf({'user_admin_user': 
'testing'}))( FakeApp(iter(NO_CONTENT_RESP * 18))) for target in ( '/v1/%s_otheracct' % self.accpre, '/v1/%s_otheracct/container' % self.accpre, '/v1/%s_otheracct/container/obj' % self.accpre): for method in ('GET', 'HEAD', 'OPTIONS', 'PUT', 'POST', 'DELETE'): # Admin ACL user can do anything req = self._make_request(target, user_groups="AUTH_admin", environ={'REQUEST_METHOD': method}) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) # swift_owner should be set to True if method != 'OPTIONS': self.assertTrue(req.environ.get('swift_owner')) def test_readwrite_privileges(self): test_auth = auth.filter_factory( self._conf({'user_rw_user': 'testing'}))( FakeApp(iter(NO_CONTENT_RESP * 15))) for target in ('/v1/%s_otheracct' % self.accpre,): for method in ('GET', 'HEAD', 'OPTIONS'): # Read-Write user can read account data req = self._make_request(target, user_groups="AUTH_rw", environ={'REQUEST_METHOD': method}) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) # swift_owner should NOT be set to True self.assertFalse(req.environ.get('swift_owner')) # RW user should NOT be able to PUT, POST, or DELETE to the account for method in ('PUT', 'POST', 'DELETE'): req = self._make_request(target, user_groups="AUTH_rw", environ={'REQUEST_METHOD': method}) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 403) # RW user should be able to GET, PUT, POST, or DELETE to containers # and objects for target in ('/v1/%s_otheracct/c' % self.accpre, '/v1/%s_otheracct/c/o' % self.accpre): for method in ('GET', 'HEAD', 'OPTIONS', 'PUT', 'POST', 'DELETE'): req = self._make_request(target, user_groups="AUTH_rw", environ={'REQUEST_METHOD': method}) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) def test_readonly_privileges(self): test_auth = auth.filter_factory( self._conf({'user_ro_user': 'testing'}))( FakeApp(iter(NO_CONTENT_RESP * 9))) # ReadOnly user should NOT be able to PUT, POST, or DELETE to account, # container, or object for target in ('/v1/%s_otheracct' % self.accpre, '/v1/%s_otheracct/cont' % self.accpre, '/v1/%s_otheracct/cont/obj' % self.accpre): for method in ('GET', 'HEAD', 'OPTIONS'): req = self._make_request(target, user_groups="AUTH_ro", environ={'REQUEST_METHOD': method}) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) # swift_owner should NOT be set to True for the ReadOnly ACL self.assertFalse(req.environ.get('swift_owner')) for method in ('PUT', 'POST', 'DELETE'): req = self._make_request(target, user_groups="AUTH_ro", environ={'REQUEST_METHOD': method}) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 403) # swift_owner should NOT be set to True for the ReadOnly ACL self.assertFalse(req.environ.get('swift_owner')) def test_user_gets_best_acl(self): test_auth = auth.filter_factory( self._conf({'user_acct_username': 'testing'}))( FakeApp(iter(NO_CONTENT_RESP * 18))) mygroups = "AUTH_acct,AUTH_ro,AUTH_something,AUTH_admin" for target in ('/v1/%s_otheracct' % self.accpre, '/v1/%s_otheracct/container' % self.accpre, '/v1/%s_otheracct/container/obj' % self.accpre): for method in ('GET', 'HEAD', 'OPTIONS', 'PUT', 'POST', 'DELETE'): # Admin ACL user can do anything req = self._make_request(target, user_groups=mygroups, environ={'REQUEST_METHOD': method}) resp = req.get_response(test_auth) self.assertEqual( resp.status_int, 204, "%s (%s) - expected 204, got %d" % (target, method, resp.status_int)) # swift_owner should be set to True if method != 'OPTIONS': 
self.assertTrue(req.environ.get('swift_owner')) def test_acl_syntax_verification(self): test_auth = auth.filter_factory( self._conf({'user_admin_user': 'testing .admin'}))( FakeApp(iter(NO_CONTENT_RESP * 5))) user_groups = test_auth._get_user_groups('admin', 'admin:user', 'AUTH_admin') good_headers = {'X-Auth-Token': 'AUTH_t'} good_acl = json.dumps({"read-only": [u"á", "b"]}) bad_list_types = '{"read-only": ["a", 99]}' bad_acl = 'syntactically invalid acl -- this does not parse as JSON' wrong_acl = '{"other-auth-system":["valid","json","but","wrong"]}' bad_value_acl = '{"read-write":["fine"],"admin":"should be a list"}' not_dict_acl = '["read-only"]' not_dict_acl2 = 1 empty_acls = ['{}', '', '{ }'] target = '/v1/%s_firstacct' % self.accpre # no acls -- no problem! req = self._make_request(target, headers=good_headers, user_groups=user_groups) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) # syntactically valid acls should go through update = {'x-account-access-control': good_acl} req = self._make_request(target, user_groups=user_groups, headers=dict(good_headers, **update)) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204, 'Expected 204, got %s, response body: %s' % (resp.status_int, resp.body)) # syntactically valid empty acls should go through for acl in empty_acls: update = {'x-account-access-control': acl} req = self._make_request(target, user_groups=user_groups, headers=dict(good_headers, **update)) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) errmsg = 'X-Account-Access-Control invalid: %s' # syntactically invalid acls get a 400 update = {'x-account-access-control': bad_acl} req = self._make_request(target, headers=dict(good_headers, **update)) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 400) self.assertEqual(errmsg % "Syntax error", resp.body[:46]) # syntactically valid acls with bad keys also get a 400 update = {'x-account-access-control': wrong_acl} req = self._make_request(target, headers=dict(good_headers, **update)) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 400) self.assertTrue(resp.body.startswith( errmsg % "Key 'other-auth-system' not recognized"), resp.body) # acls with good keys but bad values also get a 400 update = {'x-account-access-control': bad_value_acl} req = self._make_request(target, headers=dict(good_headers, **update)) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 400) self.assertTrue(resp.body.startswith( errmsg % "Value for key 'admin' must be a list"), resp.body) # acls with non-string-types in list also get a 400 update = {'x-account-access-control': bad_list_types} req = self._make_request(target, headers=dict(good_headers, **update)) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 400) self.assertTrue(resp.body.startswith( errmsg % "Elements of 'read-only' list must be strings"), resp.body) # acls with wrong json structure also get a 400 update = {'x-account-access-control': not_dict_acl} req = self._make_request(target, headers=dict(good_headers, **update)) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 400) self.assertEqual(errmsg % "Syntax error", resp.body[:46]) # acls with wrong json structure also get a 400 update = {'x-account-access-control': not_dict_acl2} req = self._make_request(target, headers=dict(good_headers, **update)) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 400) self.assertEqual(errmsg % "Syntax error", resp.body[:46]) 
def test_acls_propagate_to_sysmeta(self): test_auth = auth.filter_factory({'user_admin_user': 'testing'})( FakeApp(iter(NO_CONTENT_RESP * 3))) sysmeta_hdr = 'x-account-sysmeta-core-access-control' target = '/v1/AUTH_firstacct' good_headers = {'X-Auth-Token': 'AUTH_t'} good_acl = '{"read-only":["a","b"]}' # no acls -- no problem! req = self._make_request(target, headers=good_headers) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) self.assertIsNone(req.headers.get(sysmeta_hdr)) # syntactically valid acls should go through update = {'x-account-access-control': good_acl} req = self._make_request(target, headers=dict(good_headers, **update)) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 204) self.assertEqual(good_acl, req.headers.get(sysmeta_hdr)) def test_bad_acls_get_denied(self): test_auth = auth.filter_factory({'user_admin_user': 'testing'})( FakeApp(iter(NO_CONTENT_RESP * 3))) target = '/v1/AUTH_firstacct' good_headers = {'X-Auth-Token': 'AUTH_t'} bad_acls = ( 'syntax error', '{"bad_key":"should_fail"}', '{"admin":"not a list, should fail"}', '{"admin":["valid"],"read-write":"not a list, should fail"}', ) for bad_acl in bad_acls: hdrs = dict(good_headers, **{'x-account-access-control': bad_acl}) req = self._make_request(target, headers=hdrs) resp = req.get_response(test_auth) self.assertEqual(resp.status_int, 400) class TestAuthMultiplePrefixes(TestAccountAcls): """ These tests repeat the same tests as TestAccountACLs, but use multiple reseller prefix items (AUTH_ and SOMEOTHER_). The target paths are /v1/SOMEOTHER_ """ def setUp(self): self.reseller_prefix = {'reseller_prefix': 'AUTH_, SOMEOTHER_'} self.accpre = 'SOMEOTHER' class PrefixAccount(unittest.TestCase): def test_default(self): conf = {} test_auth = auth.filter_factory(conf)(FakeApp()) self.assertEqual(test_auth._get_account_prefix( 'AUTH_1234'), 'AUTH_') self.assertEqual(test_auth._get_account_prefix( 'JUNK_1234'), None) def test_same_as_default(self): conf = {'reseller_prefix': 'AUTH'} test_auth = auth.filter_factory(conf)(FakeApp()) self.assertEqual(test_auth._get_account_prefix( 'AUTH_1234'), 'AUTH_') self.assertEqual(test_auth._get_account_prefix( 'JUNK_1234'), None) def test_blank_reseller(self): conf = {'reseller_prefix': ''} test_auth = auth.filter_factory(conf)(FakeApp()) self.assertEqual(test_auth._get_account_prefix( '1234'), '') self.assertEqual(test_auth._get_account_prefix( 'JUNK_1234'), '') # yes, it should return '' def test_multiple_resellers(self): conf = {'reseller_prefix': 'AUTH, PRE2'} test_auth = auth.filter_factory(conf)(FakeApp()) self.assertEqual(test_auth._get_account_prefix( 'AUTH_1234'), 'AUTH_') self.assertEqual(test_auth._get_account_prefix( 'JUNK_1234'), None) class ServiceTokenFunctionality(unittest.TestCase): def _make_authed_request(self, conf, remote_user, path, method='GET'): """Make a request with tempauth as auth Acts as though the user had presented a token granting groups as described in remote_user. If remote_user contains the .service group, it emulates presenting X-Service-Token containing a .service group. :param conf: configuration for tempauth :param remote_user: the groups the user belongs to. 
Examples: acct:joe,acct user joe, no .admin acct:joe,acct,AUTH_joeacct user joe, jas .admin group acct:joe,acct,AUTH_joeacct,.service adds .service group :param path: the path of the request :param method: the method (defaults to GET) :returns: response object """ self.req = Request.blank(path) self.req.method = method self.req.remote_user = remote_user fake_app = FakeApp(iter([('200 OK', {}, '')])) test_auth = auth.filter_factory(conf)(fake_app) resp = self.req.get_response(test_auth) return resp def test_authed_for_path_single(self): resp = self._make_authed_request({}, 'acct:joe,acct,AUTH_acct', '/v1/AUTH_acct') self.assertEqual(resp.status_int, 200) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, 'acct:joe,acct,AUTH_acct', '/v1/AUTH_acct/c', method='PUT') self.assertEqual(resp.status_int, 200) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, 'admin:mary,admin,AUTH_admin,.reseller_admin', '/v1/AUTH_acct', method='GET') self.assertEqual(resp.status_int, 200) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, 'admin:mary,admin,AUTH_admin,.reseller_admin', '/v1/AUTH_acct', method='DELETE') self.assertEqual(resp.status_int, 200) def test_denied_for_path_single(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, 'fredacc:fred,fredacct,AUTH_fredacc', '/v1/AUTH_acct') self.assertEqual(resp.status_int, 403) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, 'acct:joe,acct', '/v1/AUTH_acct', method='PUT') self.assertEqual(resp.status_int, 403) resp = self._make_authed_request( {'reseller_prefix': 'AUTH'}, 'acct:joe,acct,AUTH_acct', '/v1/AUTH_acct', method='DELETE') self.assertEqual(resp.status_int, 403) def test_authed_for_primary_path_multiple(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2'}, 'acct:joe,acct,AUTH_acct,PRE2_acct', '/v1/PRE2_acct') self.assertEqual(resp.status_int, 200) def test_denied_for_second_path_with_only_operator_role(self): # User only presents a token in X-Auth-Token (or in X-Service-Token) resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'}, 'acct:joe,acct,AUTH_acct,PRE2_acct', '/v1/PRE2_acct') self.assertEqual(resp.status_int, 403) # User puts token in both X-Auth-Token and X-Service-Token resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'}, 'acct:joe,acct,AUTH_acct,PRE2_acct,AUTH_acct,PRE2_acct', '/v1/PRE2_acct') self.assertEqual(resp.status_int, 403) def test_authed_for_second_path_with_operator_role_and_service(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'}, 'acct:joe,acct,AUTH_acct,PRE2_acct,' 'admin:mary,admin,AUTH_admin,PRE2_admin,.service', '/v1/PRE2_acct') self.assertEqual(resp.status_int, 200) def test_denied_for_second_path_with_only_service(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'}, 'admin:mary,admin,AUTH_admin,PRE2_admin,.service', '/v1/PRE2_acct') self.assertEqual(resp.status_int, 403) def test_denied_for_second_path_for_service_user(self): # User presents token with 'service' role in X-Auth-Token resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'}, 'admin:mary,admin,AUTH_admin,PRE2_admin,.service', '/v1/PRE2_acct') self.assertEqual(resp.status_int, 403) # User presents token with 'service' role in X-Auth-Token # and also in X-Service-Token resp = self._make_authed_request( 
{'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'}, 'admin:mary,admin,AUTH_admin,PRE2_admin,.service,' 'admin:mary,admin,AUTH_admin,PRE2_admin,.service', '/v1/PRE2_acct') self.assertEqual(resp.status_int, 403) def test_delete_denied_for_second_path(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'}, 'acct:joe,acct,AUTH_acct,PRE2_acct,' 'admin:mary,admin,AUTH_admin,PRE2_admin,.service', '/v1/PRE2_acct', method='DELETE') self.assertEqual(resp.status_int, 403) def test_delete_of_second_path_by_reseller_admin(self): resp = self._make_authed_request( {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'}, 'acct:joe,acct,AUTH_acct,PRE2_acct,' 'admin:mary,admin,AUTH_admin,PRE2_admin,.reseller_admin', '/v1/PRE2_acct', method='DELETE') self.assertEqual(resp.status_int, 200) class TestTokenHandling(unittest.TestCase): def _make_request(self, conf, path, headers, method='GET'): """Make a request with tempauth as auth It sets up AUTH_t and AUTH_s as tokens in memcache, where "joe" has .admin role on /v1/AUTH_acct and user "glance" has .service role on /v1/AUTH_admin. :param conf: configuration for tempauth :param path: the path of the request :param headers: allows you to pass X-Auth-Token, etc. :param method: the method (defaults to GET) :returns: response object """ fake_app = FakeApp(iter([('200 OK', {}, '')])) self.test_auth = auth.filter_factory(conf)(fake_app) self.req = Request.blank(path, headers=headers) self.req.method = method self.req.environ['swift.cache'] = FakeMemcache() self._setup_user_and_token('AUTH_t', 'acct', 'acct:joe', '.admin') self._setup_user_and_token('AUTH_s', 'admin', 'admin:glance', '.service') resp = self.req.get_response(self.test_auth) return resp def _setup_user_and_token(self, token_name, account, account_user, groups): """Setup named token in memcache :param token_name: name of token :param account: example: acct :param account_user: example: acct_joe :param groups: example: .admin """ self.test_auth.users[account_user] = dict(groups=[groups]) account_id = 'AUTH_%s' % account cache_key = 'AUTH_/token/%s' % token_name cache_entry = (time() + 3600, self.test_auth._get_user_groups(account, account_user, account_id)) self.req.environ['swift.cache'].set(cache_key, cache_entry) def test_tokens_set_remote_user(self): conf = {} # Default conf resp = self._make_request(conf, '/v1/AUTH_acct', {'x-auth-token': 'AUTH_t'}) self.assertEqual(self.req.environ['REMOTE_USER'], 'acct,acct:joe,AUTH_acct') self.assertEqual(resp.status_int, 200) # Add x-service-token resp = self._make_request(conf, '/v1/AUTH_acct', {'x-auth-token': 'AUTH_t', 'x-service-token': 'AUTH_s'}) self.assertEqual(self.req.environ['REMOTE_USER'], 'acct,acct:joe,AUTH_acct,admin,admin:glance,.service') self.assertEqual(resp.status_int, 200) # Put x-auth-token value into x-service-token resp = self._make_request(conf, '/v1/AUTH_acct', {'x-auth-token': 'AUTH_t', 'x-service-token': 'AUTH_t'}) self.assertEqual(self.req.environ['REMOTE_USER'], 'acct,acct:joe,AUTH_acct,acct,acct:joe,AUTH_acct') self.assertEqual(resp.status_int, 200) def test_service_token_given_and_needed(self): conf = {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'} resp = self._make_request(conf, '/v1/PRE2_acct', {'x-auth-token': 'AUTH_t', 'x-service-token': 'AUTH_s'}) self.assertEqual(resp.status_int, 200) def test_service_token_omitted(self): conf = {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'} resp = 
self._make_request(conf, '/v1/PRE2_acct', {'x-auth-token': 'AUTH_t'}) self.assertEqual(resp.status_int, 403) def test_invalid_tokens(self): conf = {'reseller_prefix': 'AUTH, PRE2', 'PRE2_require_group': '.service'} resp = self._make_request(conf, '/v1/PRE2_acct', {'x-auth-token': 'AUTH_junk'}) self.assertEqual(resp.status_int, 401) resp = self._make_request(conf, '/v1/PRE2_acct', {'x-auth-token': 'AUTH_t', 'x-service-token': 'AUTH_junk'}) self.assertEqual(resp.status_int, 403) resp = self._make_request(conf, '/v1/PRE2_acct', {'x-auth-token': 'AUTH_junk', 'x-service-token': 'AUTH_s'}) self.assertEqual(resp.status_int, 401) class TestUtilityMethods(unittest.TestCase): def test_account_acls_bad_path_raises_exception(self): auth_inst = auth.filter_factory({})(FakeApp()) req = Request({'PATH_INFO': '/'}) self.assertRaises(ValueError, auth_inst.account_acls, req) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_slo.py0000664000567000056710000035365713024044354023611 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from six.moves import range import hashlib import json import time import unittest from mock import patch from hashlib import md5 from swift.common import swob, utils from swift.common.exceptions import ListingIterError, SegmentError from swift.common.header_key_dict import HeaderKeyDict from swift.common.middleware import slo from swift.common.swob import Request, Response, HTTPException from swift.common.utils import quote, closing_if_possible, close_if_possible from test.unit.common.middleware.helpers import FakeSwift test_xml_data = ''' /cont/object etagoftheobjectsegment 100 ''' test_json_data = json.dumps([{'path': '/cont/object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100}]) def fake_start_response(*args, **kwargs): pass def md5hex(s): return hashlib.md5(s).hexdigest() class SloTestCase(unittest.TestCase): def setUp(self): self.app = FakeSwift() slo_conf = {'rate_limit_under_size': '0'} self.slo = slo.filter_factory(slo_conf)(self.app) self.slo.logger = self.app.logger def call_app(self, req, app=None, expect_exception=False): if app is None: app = self.app req.headers.setdefault("User-Agent", "Mozzarella Foxfire") status = [None] headers = [None] def start_response(s, h, ei=None): status[0] = s headers[0] = h body_iter = app(req.environ, start_response) body = '' caught_exc = None try: # appease the close-checker with closing_if_possible(body_iter): for chunk in body_iter: body += chunk except Exception as exc: if expect_exception: caught_exc = exc else: raise if expect_exception: return status[0], headers[0], body, caught_exc else: return status[0], headers[0], body def call_slo(self, req, **kwargs): return self.call_app(req, app=self.slo, **kwargs) class TestSloMiddleware(SloTestCase): def setUp(self): super(TestSloMiddleware, self).setUp() self.app.register( 'GET', '/', swob.HTTPOk, {}, 'passed') self.app.register( 'PUT', '/', swob.HTTPOk, {}, 
'passed') def test_handle_multipart_no_obj(self): req = Request.blank('/') resp_iter = self.slo(req.environ, fake_start_response) self.assertEqual(self.app.calls, [('GET', '/')]) self.assertEqual(''.join(resp_iter), 'passed') def test_slo_header_assigned(self): req = Request.blank( '/v1/a/c/o', headers={'x-static-large-object': "true"}, environ={'REQUEST_METHOD': 'PUT'}) resp = ''.join(self.slo(req.environ, fake_start_response)) self.assertTrue( resp.startswith('X-Static-Large-Object is a reserved header')) def _put_bogus_slo(self, manifest_text, manifest_path='/v1/a/c/the-manifest'): with self.assertRaises(HTTPException) as catcher: slo.parse_and_validate_input(manifest_text, manifest_path) self.assertEqual(400, catcher.exception.status_int) return catcher.exception.body def _put_slo(self, manifest_text, manifest_path='/v1/a/c/the-manifest'): return slo.parse_and_validate_input(manifest_text, manifest_path) def test_bogus_input(self): self.assertEqual('Manifest must be valid JSON.\n', self._put_bogus_slo('some non json')) self.assertEqual('Manifest must be a list.\n', self._put_bogus_slo('{}')) self.assertEqual('Index 0: not a JSON object\n', self._put_bogus_slo('["zombocom"]')) def test_bogus_input_bad_keys(self): self.assertEqual( "Index 0: extraneous keys \"baz\", \"foo\"\n", self._put_bogus_slo(json.dumps( [{'path': '/cont/object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100, 'foo': 'bar', 'baz': 'quux'}]))) def test_bogus_input_ranges(self): self.assertEqual( "Index 0: invalid range\n", self._put_bogus_slo(json.dumps( [{'path': '/cont/object', 'etag': 'blah', 'size_bytes': 100, 'range': 'non-range value'}]))) self.assertEqual( "Index 0: multiple ranges (only one allowed)\n", self._put_bogus_slo(json.dumps( [{'path': '/cont/object', 'etag': 'blah', 'size_bytes': 100, 'range': '1-20,30-40'}]))) def test_bogus_input_unsatisfiable_range(self): self.assertEqual( "Index 0: unsatisfiable range\n", self._put_bogus_slo(json.dumps( [{'path': '/cont/object', 'etag': 'blah', 'size_bytes': 100, 'range': '8888-9999'}]))) # since size is optional, we have to be able to defer this check segs = self._put_slo(json.dumps( [{'path': '/cont/object', 'etag': 'blah', 'size_bytes': None, 'range': '8888-9999'}])) self.assertEqual(1, len(segs)) def test_bogus_input_path(self): self.assertEqual( "Index 0: path does not refer to an object. Path must be of the " "form /container/object.\n" "Index 1: path does not refer to an object. 
Path must be of the " "form /container/object.\n", self._put_bogus_slo(json.dumps( [{'path': '/cont', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100}, {'path': '/c-trailing-slash/', 'etag': 'e', 'size_bytes': 100}, {'path': '/con/obj', 'etag': 'e', 'size_bytes': 100}, {'path': '/con/obj-trailing-slash/', 'etag': 'e', 'size_bytes': 100}, {'path': '/con/obj/with/slashes', 'etag': 'e', 'size_bytes': 100}]))) def test_bogus_input_multiple(self): self.assertEqual( "Index 0: invalid range\nIndex 1: not a JSON object\n", self._put_bogus_slo(json.dumps( [{'path': '/cont/object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100, 'range': 'non-range value'}, None]))) def test_bogus_input_size_bytes(self): self.assertEqual( "Index 0: invalid size_bytes\n", self._put_bogus_slo(json.dumps( [{'path': '/cont/object', 'etag': 'blah', 'size_bytes': "fht"}, {'path': '/cont/object', 'etag': 'blah', 'size_bytes': None}, {'path': '/cont/object', 'etag': 'blah', 'size_bytes': 100}], ))) self.assertEqual( "Index 0: invalid size_bytes\n", self._put_bogus_slo(json.dumps( [{'path': '/cont/object', 'etag': 'blah', 'size_bytes': []}], ))) def test_bogus_input_self_referential(self): self.assertEqual( "Index 0: manifest must not include itself as a segment\n", self._put_bogus_slo(json.dumps( [{'path': '/c/the-manifest', 'etag': 'gate', 'size_bytes': 100, 'range': 'non-range value'}]))) def test_bogus_input_self_referential_non_ascii(self): self.assertEqual( "Index 0: manifest must not include itself as a segment\n", self._put_bogus_slo( json.dumps([{'path': u'/c/あ_1', 'etag': 'a', 'size_bytes': 1}]), manifest_path=quote(u'/v1/a/c/あ_1'))) def test_bogus_input_self_referential_last_segment(self): test_json_data = json.dumps([ {'path': '/c/seg_1', 'etag': 'a', 'size_bytes': 1}, {'path': '/c/seg_2', 'etag': 'a', 'size_bytes': 1}, {'path': '/c/seg_3', 'etag': 'a', 'size_bytes': 1}, {'path': '/c/the-manifest', 'etag': 'a', 'size_bytes': 1}, ]) self.assertEqual( "Index 3: manifest must not include itself as a segment\n", self._put_bogus_slo( test_json_data, manifest_path=quote('/v1/a/c/the-manifest'))) def test_bogus_input_undersize_segment(self): self.assertEqual( "Index 1: too small; each segment " "must be at least 1 byte.\n" "Index 2: too small; each segment " "must be at least 1 byte.\n", self._put_bogus_slo( json.dumps([ {'path': u'/c/s1', 'etag': 'a', 'size_bytes': 1}, {'path': u'/c/s2', 'etag': 'b', 'size_bytes': 0}, {'path': u'/c/s3', 'etag': 'c', 'size_bytes': 0}, # No error for this one since size_bytes is unspecified {'path': u'/c/s4', 'etag': 'd', 'size_bytes': None}, {'path': u'/c/s5', 'etag': 'e', 'size_bytes': 1000}]))) def test_valid_input(self): data = json.dumps( [{'path': '/cont/object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100}]) self.assertEqual( '/cont/object', slo.parse_and_validate_input(data, '/v1/a/cont/man')[0]['path']) data = json.dumps( [{'path': '/cont/object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100, 'range': '0-40'}]) parsed = slo.parse_and_validate_input(data, '/v1/a/cont/man') self.assertEqual('/cont/object', parsed[0]['path']) self.assertEqual([(0, 40)], parsed[0]['range'].ranges) data = json.dumps( [{'path': '/cont/object', 'etag': 'etagoftheobjectsegment', 'size_bytes': None, 'range': '0-40'}]) parsed = slo.parse_and_validate_input(data, '/v1/a/cont/man') self.assertEqual('/cont/object', parsed[0]['path']) self.assertIsNone(parsed[0]['size_bytes']) self.assertEqual([(0, 40)], parsed[0]['range'].ranges) class TestSloPutManifest(SloTestCase): def setUp(self): 
super(TestSloPutManifest, self).setUp() self.app.register( 'GET', '/', swob.HTTPOk, {}, 'passed') self.app.register( 'PUT', '/', swob.HTTPOk, {}, 'passed') self.app.register( 'HEAD', '/v1/AUTH_test/cont/object', swob.HTTPOk, {'Content-Length': '100', 'Etag': 'etagoftheobjectsegment'}, None) self.app.register( 'HEAD', '/v1/AUTH_test/cont/object2', swob.HTTPOk, {'Content-Length': '100', 'Etag': 'etagoftheobjectsegment'}, None) self.app.register( 'HEAD', '/v1/AUTH_test/cont/object\xe2\x99\xa1', swob.HTTPOk, {'Content-Length': '100', 'Etag': 'etagoftheobjectsegment'}, None) self.app.register( 'HEAD', '/v1/AUTH_test/cont/small_object', swob.HTTPOk, {'Content-Length': '10', 'Etag': 'etagoftheobjectsegment'}, None) self.app.register( 'HEAD', '/v1/AUTH_test/cont/empty_object', swob.HTTPOk, {'Content-Length': '0', 'Etag': 'etagoftheobjectsegment'}, None) self.app.register( 'HEAD', u'/v1/AUTH_test/cont/あ_1', swob.HTTPOk, {'Content-Length': '1', 'Etag': 'a'}, None) self.app.register( 'PUT', '/v1/AUTH_test/c/man', swob.HTTPCreated, {}, None) self.app.register( 'DELETE', '/v1/AUTH_test/c/man', swob.HTTPNoContent, {}, None) self.app.register( 'HEAD', '/v1/AUTH_test/checktest/a_1', swob.HTTPOk, {'Content-Length': '1', 'Etag': 'a'}, None) self.app.register( 'HEAD', '/v1/AUTH_test/checktest/badreq', swob.HTTPBadRequest, {}, None) self.app.register( 'HEAD', '/v1/AUTH_test/checktest/b_2', swob.HTTPOk, {'Content-Length': '2', 'Etag': 'b', 'Last-Modified': 'Fri, 01 Feb 2012 20:38:36 GMT'}, None) _manifest_json = json.dumps( [{'name': '/checktest/a_5', 'hash': md5hex("a" * 5), 'content_type': 'text/plain', 'bytes': '5'}]) self.app.register( 'GET', '/v1/AUTH_test/checktest/slob', swob.HTTPOk, {'X-Static-Large-Object': 'true', 'Etag': 'slob-etag', 'Content-Type': 'cat/picture;swift_bytes=12345', 'Content-Length': len(_manifest_json)}, _manifest_json) self.app.register( 'PUT', '/v1/AUTH_test/checktest/man_3', swob.HTTPCreated, {}, None) def test_put_manifest_too_quick_fail(self): req = Request.blank('/v1/a/c/o') req.content_length = self.slo.max_manifest_size + 1 try: self.slo.handle_multipart_put(req, fake_start_response) except HTTPException as e: pass self.assertEqual(e.status_int, 413) with patch.object(self.slo, 'max_manifest_segments', 0): req = Request.blank('/v1/a/c/o', body=test_json_data) e = None try: self.slo.handle_multipart_put(req, fake_start_response) except HTTPException as e: pass self.assertEqual(e.status_int, 413) req = Request.blank('/v1/a/c/o', headers={'X-Copy-From': 'lala'}) try: self.slo.handle_multipart_put(req, fake_start_response) except HTTPException as e: pass self.assertEqual(e.status_int, 405) # ignores requests to / req = Request.blank( '/?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, body=test_json_data) self.assertEqual( list(self.slo.handle_multipart_put(req, fake_start_response)), ['passed']) def test_handle_multipart_put_success(self): req = Request.blank( '/v1/AUTH_test/c/man?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, headers={'Accept': 'test'}, body=test_json_data) self.assertTrue('X-Static-Large-Object' not in req.headers) def my_fake_start_response(*args, **kwargs): gen_etag = '"' + md5('etagoftheobjectsegment').hexdigest() + '"' self.assertTrue(('Etag', gen_etag) in args[1]) self.slo(req.environ, my_fake_start_response) self.assertTrue('X-Static-Large-Object' in req.headers) def test_handle_multipart_put_disallow_empty_first_segment(self): test_json_data = json.dumps([{'path': '/cont/object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 
0}, {'path': '/cont/small_object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100}]) req = Request.blank('/v1/a/c/o', body=test_json_data) with self.assertRaises(HTTPException) as catcher: self.slo.handle_multipart_put(req, fake_start_response) self.assertEqual(catcher.exception.status_int, 400) def test_handle_multipart_put_disallow_empty_last_segment(self): test_json_data = json.dumps([{'path': '/cont/object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100}, {'path': '/cont/small_object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 0}]) req = Request.blank('/v1/a/c/o', body=test_json_data) with self.assertRaises(HTTPException) as catcher: self.slo.handle_multipart_put(req, fake_start_response) self.assertEqual(catcher.exception.status_int, 400) def test_handle_multipart_put_success_unicode(self): test_json_data = json.dumps([{'path': u'/cont/object\u2661', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100}]) req = Request.blank( '/v1/AUTH_test/c/man?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, headers={'Accept': 'test'}, body=test_json_data) self.assertTrue('X-Static-Large-Object' not in req.headers) self.slo(req.environ, fake_start_response) self.assertTrue('X-Static-Large-Object' in req.headers) self.assertTrue(req.environ['PATH_INFO'], '/cont/object\xe2\x99\xa1') def test_handle_multipart_put_no_xml(self): req = Request.blank( '/test_good/AUTH_test/c/man?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, headers={'Accept': 'test'}, body=test_xml_data) no_xml = self.slo(req.environ, fake_start_response) self.assertEqual(no_xml, ['Manifest must be valid JSON.\n']) def test_handle_multipart_put_bad_data(self): bad_data = json.dumps([{'path': '/cont/object', 'etag': 'etagoftheobj', 'size_bytes': 'lala'}]) req = Request.blank( '/test_good/AUTH_test/c/man?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, body=bad_data) self.assertRaises(HTTPException, self.slo.handle_multipart_put, req, fake_start_response) for bad_data in [ json.dumps([{'path': '/cont', 'etag': 'etagoftheobj', 'size_bytes': 100}]), json.dumps('asdf'), json.dumps(None), json.dumps(5), 'not json', '1234', None, '', json.dumps({'path': None}), json.dumps([{'path': '/cont/object', 'etag': None, 'size_bytes': 12}]), json.dumps([{'path': '/cont/object', 'etag': 'asdf', 'size_bytes': 'sd'}]), json.dumps([{'path': 12, 'etag': 'etagoftheobj', 'size_bytes': 100}]), json.dumps([{'path': u'/cont/object\u2661', 'etag': 'etagoftheobj', 'size_bytes': 100}]), json.dumps([{'path': 12, 'size_bytes': 100}]), json.dumps([{'path': 12, 'size_bytes': 100}]), json.dumps([{'path': '/c/o', 'etag': 123, 'size_bytes': 100}]), json.dumps([{'path': None, 'etag': 'etagoftheobj', 'size_bytes': 100}])]: req = Request.blank( '/v1/AUTH_test/c/man?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, body=bad_data) self.assertRaises(HTTPException, self.slo.handle_multipart_put, req, fake_start_response) def test_handle_multipart_put_check_data(self): good_data = json.dumps( [{'path': '/checktest/a_1', 'etag': 'a', 'size_bytes': '1'}, {'path': '/checktest/b_2', 'etag': 'b', 'size_bytes': '2'}]) req = Request.blank( '/v1/AUTH_test/checktest/man_3?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, body=good_data) status, headers, body = self.call_slo(req) self.assertEqual(self.app.call_count, 3) # go behind SLO's back and see what actually got stored req = Request.blank( # this string looks weird, but it's just an artifact # of FakeSwift '/v1/AUTH_test/checktest/man_3?multipart-manifest=put', 
environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_app(req) headers = dict(headers) manifest_data = json.loads(body) self.assertTrue(headers['Content-Type'].endswith(';swift_bytes=3')) self.assertEqual(len(manifest_data), 2) self.assertEqual(manifest_data[0]['hash'], 'a') self.assertEqual(manifest_data[0]['bytes'], 1) self.assertTrue( not manifest_data[0]['last_modified'].startswith('2012')) self.assertTrue(manifest_data[1]['last_modified'].startswith('2012')) def test_handle_multipart_put_check_data_bad(self): bad_data = json.dumps( [{'path': '/checktest/a_1', 'etag': 'a', 'size_bytes': '2'}, {'path': '/checktest/badreq', 'etag': 'a', 'size_bytes': '1'}, {'path': '/checktest/b_2', 'etag': 'not-b', 'size_bytes': '2'}, {'path': '/checktest/slob', 'etag': 'not-slob', 'size_bytes': '12345'}]) req = Request.blank( '/v1/AUTH_test/checktest/man?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, headers={'Accept': 'application/json'}, body=bad_data) status, headers, body = self.call_slo(req) self.assertEqual(self.app.call_count, 5) errors = json.loads(body)['Errors'] self.assertEqual(len(errors), 5) self.assertEqual(errors[0][0], '/checktest/a_1') self.assertEqual(errors[0][1], 'Size Mismatch') self.assertEqual(errors[1][0], '/checktest/badreq') self.assertEqual(errors[1][1], '400 Bad Request') self.assertEqual(errors[2][0], '/checktest/b_2') self.assertEqual(errors[2][1], 'Etag Mismatch') self.assertEqual(errors[3][0], '/checktest/slob') self.assertEqual(errors[3][1], 'Size Mismatch') self.assertEqual(errors[4][0], '/checktest/slob') self.assertEqual(errors[4][1], 'Etag Mismatch') def test_handle_multipart_put_skip_size_check(self): good_data = json.dumps( [{'path': '/checktest/a_1', 'etag': 'a', 'size_bytes': None}, {'path': '/checktest/b_2', 'etag': 'b', 'size_bytes': None}]) req = Request.blank( '/v1/AUTH_test/checktest/man_3?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, body=good_data) status, headers, body = self.call_slo(req) self.assertEqual(self.app.call_count, 3) # Check that we still populated the manifest properly from our HEADs req = Request.blank( # this string looks weird, but it's just an artifact # of FakeSwift '/v1/AUTH_test/checktest/man_3?multipart-manifest=put', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_app(req) manifest_data = json.loads(body) self.assertEqual(1, manifest_data[0]['bytes']) self.assertEqual(2, manifest_data[1]['bytes']) def test_handle_multipart_put_skip_size_check_still_uses_min_size(self): test_json_data = json.dumps([{'path': '/cont/empty_object', 'etag': 'etagoftheobjectsegment', 'size_bytes': None}, {'path': '/cont/small_object', 'etag': 'etagoftheobjectsegment', 'size_bytes': 100}]) req = Request.blank('/v1/AUTH_test/c/o', body=test_json_data) with self.assertRaises(HTTPException) as cm: self.slo.handle_multipart_put(req, fake_start_response) self.assertEqual(cm.exception.status_int, 400) def test_handle_multipart_put_skip_size_check_no_early_bailout(self): # The first is too small (it's 0 bytes), and # the second has a bad etag. Make sure both errors show up in # the response. 
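        # --- Illustrative aside (added for clarity, not part of the original
        # test): a rough sketch of the per-segment validation rule these PUT
        # tests exercise.  A manifest entry may leave 'etag' or 'size_bytes'
        # as None to defer that check until the segment is HEADed, but a
        # too-small segment is still rejected once its real size is known.
        # The helper name, signature, and exact message text below are
        # hypothetical.
        def _sketch_check_segment(entry, head_etag, head_size, min_size=1):
            errors = []
            if entry.get('etag') is not None and entry['etag'] != head_etag:
                errors.append('Etag Mismatch')
            if (entry.get('size_bytes') is not None
                    and int(entry['size_bytes']) != head_size):
                errors.append('Size Mismatch')
            if head_size < min_size:
                errors.append('too small; each segment must be at least '
                              '%d byte(s)' % min_size)
            return errors
        # e.g. _sketch_check_segment({'etag': None, 'size_bytes': None},
        #                            'abc', 0) -> ['too small; ...']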
test_json_data = json.dumps([{'path': '/cont/empty_object', 'etag': 'etagoftheobjectsegment', 'size_bytes': None}, {'path': '/cont/object2', 'etag': 'wrong wrong wrong', 'size_bytes': 100}]) req = Request.blank('/v1/AUTH_test/c/o', body=test_json_data) with self.assertRaises(HTTPException) as cm: self.slo.handle_multipart_put(req, fake_start_response) self.assertEqual(cm.exception.status_int, 400) self.assertIn('at least 1 byte', cm.exception.body) self.assertIn('Etag Mismatch', cm.exception.body) def test_handle_multipart_put_skip_etag_check(self): good_data = json.dumps( [{'path': '/checktest/a_1', 'etag': None, 'size_bytes': 1}, {'path': '/checktest/b_2', 'etag': None, 'size_bytes': 2}]) req = Request.blank( '/v1/AUTH_test/checktest/man_3?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, body=good_data) status, headers, body = self.call_slo(req) self.assertEqual(self.app.call_count, 3) # Check that we still populated the manifest properly from our HEADs req = Request.blank( # this string looks weird, but it's just an artifact # of FakeSwift '/v1/AUTH_test/checktest/man_3?multipart-manifest=put', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_app(req) manifest_data = json.loads(body) self.assertEqual('a', manifest_data[0]['hash']) self.assertEqual('b', manifest_data[1]['hash']) def test_handle_unsatisfiable_ranges(self): bad_data = json.dumps( [{'path': '/checktest/a_1', 'etag': None, 'size_bytes': None, 'range': '1-'}]) req = Request.blank( '/v1/AUTH_test/checktest/man_3?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, body=bad_data) with self.assertRaises(HTTPException) as catcher: self.slo.handle_multipart_put(req, fake_start_response) self.assertEqual(400, catcher.exception.status_int) self.assertIn("Unsatisfiable Range", catcher.exception.body) def test_handle_single_ranges(self): good_data = json.dumps( [{'path': '/checktest/a_1', 'etag': None, 'size_bytes': None, 'range': '0-0'}, {'path': '/checktest/b_2', 'etag': None, 'size_bytes': 2, 'range': '-1'}, {'path': '/checktest/b_2', 'etag': None, 'size_bytes': 2, 'range': '0-0'}, {'path': '/cont/object', 'etag': None, 'size_bytes': None, 'range': '10-40'}]) req = Request.blank( '/v1/AUTH_test/checktest/man_3?multipart-manifest=put', environ={'REQUEST_METHOD': 'PUT'}, body=good_data) status, headers, body = self.call_slo(req) expected_etag = '"%s"' % md5('ab:1-1;b:0-0;etagoftheobjectsegment:' '10-40;').hexdigest() self.assertEqual(expected_etag, dict(headers)['Etag']) self.assertEqual([ ('HEAD', '/v1/AUTH_test/checktest/a_1'), ('HEAD', '/v1/AUTH_test/checktest/b_2'), # Only once! 
('HEAD', '/v1/AUTH_test/cont/object'), ('PUT', '/v1/AUTH_test/checktest/man_3?multipart-manifest=put'), ], self.app.calls) # Check that we still populated the manifest properly from our HEADs req = Request.blank( # this string looks weird, but it's just an artifact # of FakeSwift '/v1/AUTH_test/checktest/man_3?multipart-manifest=put', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_app(req) manifest_data = json.loads(body) self.assertEqual('a', manifest_data[0]['hash']) self.assertNotIn('range', manifest_data[0]) self.assertNotIn('segment_bytes', manifest_data[0]) self.assertEqual('b', manifest_data[1]['hash']) self.assertEqual('1-1', manifest_data[1]['range']) self.assertEqual('b', manifest_data[2]['hash']) self.assertEqual('0-0', manifest_data[2]['range']) self.assertEqual('etagoftheobjectsegment', manifest_data[3]['hash']) self.assertEqual('10-40', manifest_data[3]['range']) class TestSloDeleteManifest(SloTestCase): def setUp(self): super(TestSloDeleteManifest, self).setUp() _submanifest_data = json.dumps( [{'name': '/deltest/b_2', 'hash': 'a', 'bytes': '1'}, {'name': '/deltest/c_3', 'hash': 'b', 'bytes': '2'}]) self.app.register( 'GET', '/v1/AUTH_test/deltest/man_404', swob.HTTPNotFound, {}, None) self.app.register( 'GET', '/v1/AUTH_test/deltest/man', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/deltest/gone', 'hash': 'a', 'bytes': '1'}, {'name': '/deltest/b_2', 'hash': 'b', 'bytes': '2'}])) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/man', swob.HTTPNoContent, {}, None) self.app.register( 'GET', '/v1/AUTH_test/deltest/man-all-there', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/deltest/b_2', 'hash': 'a', 'bytes': '1'}, {'name': '/deltest/c_3', 'hash': 'b', 'bytes': '2'}])) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/man-all-there', swob.HTTPNoContent, {}, None) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/gone', swob.HTTPNotFound, {}, None) self.app.register( 'GET', '/v1/AUTH_test/deltest/a_1', swob.HTTPOk, {'Content-Length': '1'}, 'a') self.app.register( 'DELETE', '/v1/AUTH_test/deltest/a_1', swob.HTTPNoContent, {}, None) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/b_2', swob.HTTPNoContent, {}, None) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/c_3', swob.HTTPNoContent, {}, None) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/d_3', swob.HTTPNoContent, {}, None) self.app.register( 'GET', '/v1/AUTH_test/deltest/manifest-with-submanifest', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/deltest/a_1', 'hash': 'a', 'bytes': '1'}, {'name': '/deltest/submanifest', 'sub_slo': True, 'hash': 'submanifest-etag', 'bytes': len(_submanifest_data)}, {'name': '/deltest/d_3', 'hash': 'd', 'bytes': '3'}])) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/manifest-with-submanifest', swob.HTTPNoContent, {}, None) self.app.register( 'GET', '/v1/AUTH_test/deltest/submanifest', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, _submanifest_data) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/submanifest', swob.HTTPNoContent, {}, None) self.app.register( 'GET', '/v1/AUTH_test/deltest/manifest-missing-submanifest', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/deltest/a_1', 'hash': 'a', 'bytes': '1'}, {'name': '/deltest/missing-submanifest', 'hash': 'a', 'bytes': 
'2', 'sub_slo': True}, {'name': '/deltest/d_3', 'hash': 'd', 'bytes': '3'}])) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/manifest-missing-submanifest', swob.HTTPNoContent, {}, None) self.app.register( 'GET', '/v1/AUTH_test/deltest/missing-submanifest', swob.HTTPNotFound, {}, None) self.app.register( 'GET', '/v1/AUTH_test/deltest/manifest-badjson', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, "[not {json (at ++++all") self.app.register( 'GET', '/v1/AUTH_test/deltest/manifest-with-unauth-segment', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/deltest/a_1', 'hash': 'a', 'bytes': '1'}, {'name': '/deltest-unauth/q_17', 'hash': '11', 'bytes': '17'}])) self.app.register( 'DELETE', '/v1/AUTH_test/deltest/manifest-with-unauth-segment', swob.HTTPNoContent, {}, None) self.app.register( 'DELETE', '/v1/AUTH_test/deltest-unauth/q_17', swob.HTTPUnauthorized, {}, None) def test_handle_multipart_delete_man(self): req = Request.blank( '/v1/AUTH_test/deltest/man', environ={'REQUEST_METHOD': 'DELETE'}) self.slo(req.environ, fake_start_response) self.assertEqual(self.app.call_count, 1) def test_handle_multipart_delete_bad_utf8(self): req = Request.blank( '/v1/AUTH_test/deltest/man\xff\xfe?multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') resp_data = json.loads(body) self.assertEqual(resp_data['Response Status'], '412 Precondition Failed') def test_handle_multipart_delete_whole_404(self): req = Request.blank( '/v1/AUTH_test/deltest/man_404?multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) resp_data = json.loads(body) self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/deltest/man_404?multipart-manifest=get')]) self.assertEqual(resp_data['Response Status'], '200 OK') self.assertEqual(resp_data['Response Body'], '') self.assertEqual(resp_data['Number Deleted'], 0) self.assertEqual(resp_data['Number Not Found'], 1) self.assertEqual(resp_data['Errors'], []) def test_handle_multipart_delete_segment_404(self): req = Request.blank( '/v1/AUTH_test/deltest/man?multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) resp_data = json.loads(body) self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/deltest/man?multipart-manifest=get'), ('DELETE', '/v1/AUTH_test/deltest/gone?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/b_2?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/man?multipart-manifest=delete')]) self.assertEqual(resp_data['Response Status'], '200 OK') self.assertEqual(resp_data['Number Deleted'], 2) self.assertEqual(resp_data['Number Not Found'], 1) def test_handle_multipart_delete_whole(self): req = Request.blank( '/v1/AUTH_test/deltest/man-all-there?multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE'}) self.call_slo(req) self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/deltest/man-all-there?multipart-manifest=get'), ('DELETE', '/v1/AUTH_test/deltest/b_2?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/c_3?multipart-manifest=delete'), ('DELETE', ('/v1/AUTH_test/deltest/' + 'man-all-there?multipart-manifest=delete'))]) def test_handle_multipart_delete_nested(self): req = Request.blank( 
'/v1/AUTH_test/deltest/manifest-with-submanifest?' + 'multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE'}) self.call_slo(req) self.assertEqual( set(self.app.calls), set([('GET', '/v1/AUTH_test/deltest/' + 'manifest-with-submanifest?multipart-manifest=get'), ('GET', '/v1/AUTH_test/deltest/' + 'submanifest?multipart-manifest=get'), ('DELETE', '/v1/AUTH_test/deltest/a_1?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/b_2?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/c_3?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/' + 'submanifest?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/d_3?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/' + 'manifest-with-submanifest?multipart-manifest=delete')])) def test_handle_multipart_delete_nested_too_many_segments(self): req = Request.blank( '/v1/AUTH_test/deltest/manifest-with-submanifest?' + 'multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) with patch.object(slo, 'MAX_BUFFERED_SLO_SEGMENTS', 1): status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') resp_data = json.loads(body) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual(resp_data['Response Body'], 'Too many buffered slo segments to delete.') def test_handle_multipart_delete_nested_404(self): req = Request.blank( '/v1/AUTH_test/deltest/manifest-missing-submanifest' + '?multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) resp_data = json.loads(body) self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/deltest/' + 'manifest-missing-submanifest?multipart-manifest=get'), ('DELETE', '/v1/AUTH_test/deltest/a_1?multipart-manifest=delete'), ('GET', '/v1/AUTH_test/deltest/' + 'missing-submanifest?multipart-manifest=get'), ('DELETE', '/v1/AUTH_test/deltest/d_3?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/' + 'manifest-missing-submanifest?multipart-manifest=delete')]) self.assertEqual(resp_data['Response Status'], '200 OK') self.assertEqual(resp_data['Response Body'], '') self.assertEqual(resp_data['Number Deleted'], 3) self.assertEqual(resp_data['Number Not Found'], 1) self.assertEqual(resp_data['Errors'], []) def test_handle_multipart_delete_nested_401(self): self.app.register( 'GET', '/v1/AUTH_test/deltest/submanifest', swob.HTTPUnauthorized, {}, None) req = Request.blank( ('/v1/AUTH_test/deltest/manifest-with-submanifest' + '?multipart-manifest=delete'), environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') resp_data = json.loads(body) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual(resp_data['Errors'], [['/deltest/submanifest', '401 Unauthorized']]) def test_handle_multipart_delete_nested_500(self): self.app.register( 'GET', '/v1/AUTH_test/deltest/submanifest', swob.HTTPServerError, {}, None) req = Request.blank( ('/v1/AUTH_test/deltest/manifest-with-submanifest' + '?multipart-manifest=delete'), environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') resp_data = json.loads(body) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual(resp_data['Errors'], [['/deltest/submanifest', 'Unable to load SLO manifest or segment.']]) def 
test_handle_multipart_delete_not_a_manifest(self): req = Request.blank( '/v1/AUTH_test/deltest/a_1?multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) resp_data = json.loads(body) self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/deltest/a_1?multipart-manifest=get')]) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual(resp_data['Response Body'], '') self.assertEqual(resp_data['Number Deleted'], 0) self.assertEqual(resp_data['Number Not Found'], 0) self.assertEqual(resp_data['Errors'], [['/deltest/a_1', 'Not an SLO manifest']]) def test_handle_multipart_delete_bad_json(self): req = Request.blank( '/v1/AUTH_test/deltest/manifest-badjson?multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) resp_data = json.loads(body) self.assertEqual(self.app.calls, [('GET', '/v1/AUTH_test/deltest/' + 'manifest-badjson?multipart-manifest=get')]) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual(resp_data['Response Body'], '') self.assertEqual(resp_data['Number Deleted'], 0) self.assertEqual(resp_data['Number Not Found'], 0) self.assertEqual(resp_data['Errors'], [['/deltest/manifest-badjson', 'Unable to load SLO manifest']]) def test_handle_multipart_delete_401(self): req = Request.blank( '/v1/AUTH_test/deltest/manifest-with-unauth-segment' + '?multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) resp_data = json.loads(body) self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/deltest/' + 'manifest-with-unauth-segment?multipart-manifest=get'), ('DELETE', '/v1/AUTH_test/deltest/a_1?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest-unauth/' + 'q_17?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/' + 'manifest-with-unauth-segment?multipart-manifest=delete')]) self.assertEqual(resp_data['Response Status'], '400 Bad Request') self.assertEqual(resp_data['Response Body'], '') self.assertEqual(resp_data['Number Deleted'], 2) self.assertEqual(resp_data['Number Not Found'], 0) self.assertEqual(resp_data['Errors'], [['/deltest-unauth/q_17', '401 Unauthorized']]) def test_handle_multipart_delete_client_content_type(self): req = Request.blank( '/v1/AUTH_test/deltest/man-all-there?multipart-manifest=delete', environ={'REQUEST_METHOD': 'DELETE', 'CONTENT_TYPE': 'foo/bar'}, headers={'Accept': 'application/json'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') resp_data = json.loads(body) self.assertEqual(resp_data["Number Deleted"], 3) self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/deltest/man-all-there?multipart-manifest=get'), ('DELETE', '/v1/AUTH_test/deltest/b_2?multipart-manifest=delete'), ('DELETE', '/v1/AUTH_test/deltest/c_3?multipart-manifest=delete'), ('DELETE', ('/v1/AUTH_test/deltest/' + 'man-all-there?multipart-manifest=delete'))]) class TestSloHeadManifest(SloTestCase): def setUp(self): super(TestSloHeadManifest, self).setUp() self._manifest_json = json.dumps([ {'name': '/gettest/seg01', 'bytes': '100', 'hash': 'seg01-hash', 'content_type': 'text/plain', 'last_modified': '2013-11-19T11:33:45.137446'}, {'name': '/gettest/seg02', 'bytes': '200', 'hash': 'seg02-hash', 'content_type': 'text/plain', 'last_modified': '2013-11-19T11:33:45.137447'}]) self.app.register( 'GET', '/v1/AUTH_test/headtest/man', 
swob.HTTPOk, {'Content-Length': str(len(self._manifest_json)), 'X-Static-Large-Object': 'true', 'Etag': md5(self._manifest_json).hexdigest()}, self._manifest_json) def test_etag_is_hash_of_segment_etags(self): req = Request.blank( '/v1/AUTH_test/headtest/man', environ={'REQUEST_METHOD': 'HEAD'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers.get('Etag', '').strip("'\""), md5("seg01-hashseg02-hash").hexdigest()) self.assertEqual(body, '') # it's a HEAD request, after all def test_etag_matching(self): etag = md5("seg01-hashseg02-hash").hexdigest() req = Request.blank( '/v1/AUTH_test/headtest/man', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-None-Match': etag}) status, headers, body = self.call_slo(req) self.assertEqual(status, '304 Not Modified') class TestSloGetRawManifest(SloTestCase): def setUp(self): super(TestSloGetRawManifest, self).setUp() _bc_manifest_json = json.dumps( [{'name': '/gettest/b_10', 'hash': md5hex('b' * 10), 'bytes': '10', 'content_type': 'text/plain', 'last_modified': '1970-01-01T00:00:00.000000'}, {'name': '/gettest/c_15', 'hash': md5hex('c' * 15), 'bytes': '15', 'content_type': 'text/plain', 'last_modified': '1970-01-01T00:00:00.000000'}, {'name': '/gettest/d_10', 'hash': md5hex(md5hex("e" * 5) + md5hex("f" * 5)), 'bytes': '10', 'content_type': 'application/json;swift_bytes=10', 'sub_slo': True, 'last_modified': '1970-01-01T00:00:00.000000'}]) self.bc_etag = md5hex(_bc_manifest_json) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-bc', swob.HTTPOk, {'Content-Type': 'application/json;swift_bytes=35', 'X-Static-Large-Object': 'true', 'X-Object-Meta-Plant': 'Ficus', 'Etag': md5hex(_bc_manifest_json)}, _bc_manifest_json) _bc_manifest_json_ranges = json.dumps( [{'name': '/gettest/b_10', 'hash': md5hex('b' * 10), 'bytes': '10', 'last_modified': '1970-01-01T00:00:00.000000', 'content_type': 'text/plain', 'range': '1-99'}, {'name': '/gettest/c_15', 'hash': md5hex('c' * 15), 'bytes': '15', 'last_modified': '1970-01-01T00:00:00.000000', 'content_type': 'text/plain', 'range': '100-200'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-bc-r', swob.HTTPOk, {'Content-Type': 'application/json;swift_bytes=25', 'X-Static-Large-Object': 'true', 'X-Object-Meta-Plant': 'Ficus', 'Etag': md5hex(_bc_manifest_json_ranges)}, _bc_manifest_json_ranges) def test_get_raw_manifest(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-bc' '?multipart-manifest=get&format=raw', environ={'REQUEST_METHOD': 'GET', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') self.assertTrue(('Etag', self.bc_etag) in headers, headers) self.assertTrue(('X-Static-Large-Object', 'true') in headers, headers) self.assertTrue( ('Content-Type', 'application/json; charset=utf-8') in headers, headers) try: resp_data = json.loads(body) except ValueError: self.fail("Invalid JSON in manifest GET: %r" % body) self.assertEqual( resp_data, [{'etag': md5hex('b' * 10), 'size_bytes': '10', 'path': '/gettest/b_10'}, {'etag': md5hex('c' * 15), 'size_bytes': '15', 'path': '/gettest/c_15'}, {'etag': md5hex(md5hex("e" * 5) + md5hex("f" * 5)), 'size_bytes': '10', 'path': '/gettest/d_10'}]) def test_get_raw_manifest_passthrough_with_ranges(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-bc-r' '?multipart-manifest=get&format=raw', environ={'REQUEST_METHOD': 'GET', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) 
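        # --- Illustrative aside (added for clarity, not part of the original
        # test): with ?multipart-manifest=get&format=raw the stored manifest
        # entries (keyed 'name'/'hash'/'bytes') come back in the same shape a
        # client would PUT them ('path'/'etag'/'size_bytes'), with any
        # 'range' carried through -- which is what the assertions below rely
        # on.  A minimal sketch of that key mapping; the helper name is
        # hypothetical.
        def _sketch_raw_entry(stored):
            raw = {'path': stored['name'],
                   'etag': stored['hash'],
                   'size_bytes': stored['bytes']}
            if 'range' in stored:
                raw['range'] = stored['range']
            return raw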
self.assertEqual(status, '200 OK') self.assertTrue( ('Content-Type', 'application/json; charset=utf-8') in headers, headers) try: resp_data = json.loads(body) except ValueError: self.fail("Invalid JSON in manifest GET: %r" % body) self.assertEqual( resp_data, [{'etag': md5hex('b' * 10), 'size_bytes': '10', 'path': '/gettest/b_10', 'range': '1-99'}, {'etag': md5hex('c' * 15), 'size_bytes': '15', 'path': '/gettest/c_15', 'range': '100-200'}], body) class TestSloGetManifest(SloTestCase): def setUp(self): super(TestSloGetManifest, self).setUp() # some plain old objects self.app.register( 'GET', '/v1/AUTH_test/gettest/a_5', swob.HTTPOk, {'Content-Length': '5', 'Etag': md5hex('a' * 5)}, 'a' * 5) self.app.register( 'GET', '/v1/AUTH_test/gettest/b_10', swob.HTTPOk, {'Content-Length': '10', 'Etag': md5hex('b' * 10)}, 'b' * 10) self.app.register( 'GET', '/v1/AUTH_test/gettest/c_15', swob.HTTPOk, {'Content-Length': '15', 'Etag': md5hex('c' * 15)}, 'c' * 15) self.app.register( 'GET', '/v1/AUTH_test/gettest/d_20', swob.HTTPOk, {'Content-Length': '20', 'Etag': md5hex('d' * 20)}, 'd' * 20) self.app.register( 'GET', '/v1/AUTH_test/gettest/e_25', swob.HTTPOk, {'Content-Length': '25', 'Etag': md5hex('e' * 25)}, 'e' * 25) self.app.register( 'GET', '/v1/AUTH_test/gettest/f_30', swob.HTTPOk, {'Content-Length': '30', 'Etag': md5hex('f' * 30)}, 'f' * 30) self.app.register( 'GET', '/v1/AUTH_test/gettest/g_35', swob.HTTPOk, {'Content-Length': '35', 'Etag': md5hex('g' * 35)}, 'g' * 35) self.app.register( 'GET', '/v1/AUTH_test/gettest/h_40', swob.HTTPOk, {'Content-Length': '40', 'Etag': md5hex('h' * 40)}, 'h' * 40) self.app.register( 'GET', '/v1/AUTH_test/gettest/i_45', swob.HTTPOk, {'Content-Length': '45', 'Etag': md5hex('i' * 45)}, 'i' * 45) self.app.register( 'GET', '/v1/AUTH_test/gettest/j_50', swob.HTTPOk, {'Content-Length': '50', 'Etag': md5hex('j' * 50)}, 'j' * 50) self.app.register( 'GET', '/v1/AUTH_test/gettest/k_55', swob.HTTPOk, {'Content-Length': '55', 'Etag': md5hex('k' * 55)}, 'k' * 55) self.app.register( 'GET', '/v1/AUTH_test/gettest/l_60', swob.HTTPOk, {'Content-Length': '60', 'Etag': md5hex('l' * 60)}, 'l' * 60) _bc_manifest_json = json.dumps( [{'name': '/gettest/b_10', 'hash': md5hex('b' * 10), 'bytes': '10', 'content_type': 'text/plain'}, {'name': '/gettest/c_15', 'hash': md5hex('c' * 15), 'bytes': '15', 'content_type': 'text/plain'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-bc', swob.HTTPOk, {'Content-Type': 'application/json;swift_bytes=25', 'X-Static-Large-Object': 'true', 'X-Object-Meta-Plant': 'Ficus', 'Etag': md5hex(_bc_manifest_json)}, _bc_manifest_json) _abcd_manifest_json = json.dumps( [{'name': '/gettest/a_5', 'hash': md5hex("a" * 5), 'content_type': 'text/plain', 'bytes': '5'}, {'name': '/gettest/manifest-bc', 'sub_slo': True, 'content_type': 'application/json;swift_bytes=25', 'hash': md5hex(md5hex("b" * 10) + md5hex("c" * 15)), 'bytes': len(_bc_manifest_json)}, {'name': '/gettest/d_20', 'hash': md5hex("d" * 20), 'content_type': 'text/plain', 'bytes': '20'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-abcd', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': md5(_abcd_manifest_json).hexdigest()}, _abcd_manifest_json) _abcdefghijkl_manifest_json = json.dumps( [{'name': '/gettest/a_5', 'hash': md5hex("a" * 5), 'content_type': 'text/plain', 'bytes': '5'}, {'name': '/gettest/b_10', 'hash': md5hex("b" * 10), 'content_type': 'text/plain', 'bytes': '10'}, {'name': '/gettest/c_15', 'hash': md5hex("c" * 15), 'content_type': 
'text/plain', 'bytes': '15'}, {'name': '/gettest/d_20', 'hash': md5hex("d" * 20), 'content_type': 'text/plain', 'bytes': '20'}, {'name': '/gettest/e_25', 'hash': md5hex("e" * 25), 'content_type': 'text/plain', 'bytes': '25'}, {'name': '/gettest/f_30', 'hash': md5hex("f" * 30), 'content_type': 'text/plain', 'bytes': '30'}, {'name': '/gettest/g_35', 'hash': md5hex("g" * 35), 'content_type': 'text/plain', 'bytes': '35'}, {'name': '/gettest/h_40', 'hash': md5hex("h" * 40), 'content_type': 'text/plain', 'bytes': '40'}, {'name': '/gettest/i_45', 'hash': md5hex("i" * 45), 'content_type': 'text/plain', 'bytes': '45'}, {'name': '/gettest/j_50', 'hash': md5hex("j" * 50), 'content_type': 'text/plain', 'bytes': '50'}, {'name': '/gettest/k_55', 'hash': md5hex("k" * 55), 'content_type': 'text/plain', 'bytes': '55'}, {'name': '/gettest/l_60', 'hash': md5hex("l" * 60), 'content_type': 'text/plain', 'bytes': '60'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-abcdefghijkl', swob.HTTPOk, { 'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': md5(_abcdefghijkl_manifest_json).hexdigest()}, _abcdefghijkl_manifest_json) self.manifest_abcd_etag = md5hex( md5hex("a" * 5) + md5hex(md5hex("b" * 10) + md5hex("c" * 15)) + md5hex("d" * 20)) _bc_ranges_manifest_json = json.dumps( [{'name': '/gettest/b_10', 'hash': md5hex('b' * 10), 'content_type': 'text/plain', 'bytes': '10', 'range': '4-7'}, {'name': '/gettest/b_10', 'hash': md5hex('b' * 10), 'content_type': 'text/plain', 'bytes': '10', 'range': '2-5'}, {'name': '/gettest/c_15', 'hash': md5hex('c' * 15), 'content_type': 'text/plain', 'bytes': '15', 'range': '0-3'}, {'name': '/gettest/c_15', 'hash': md5hex('c' * 15), 'content_type': 'text/plain', 'bytes': '15', 'range': '11-14'}]) self.bc_ranges_etag = md5hex(_bc_ranges_manifest_json) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-bc-ranges', swob.HTTPOk, {'Content-Type': 'application/json;swift_bytes=16', 'X-Static-Large-Object': 'true', 'X-Object-Meta-Plant': 'Ficus', 'Etag': self.bc_ranges_etag}, _bc_ranges_manifest_json) _abcd_ranges_manifest_json = json.dumps( [{'name': '/gettest/a_5', 'hash': md5hex("a" * 5), 'content_type': 'text/plain', 'bytes': '5', 'range': '0-3'}, {'name': '/gettest/a_5', 'hash': md5hex("a" * 5), 'content_type': 'text/plain', 'bytes': '5', 'range': '1-4'}, {'name': '/gettest/manifest-bc-ranges', 'sub_slo': True, 'content_type': 'application/json;swift_bytes=16', 'hash': self.bc_ranges_etag, 'bytes': len(_bc_ranges_manifest_json), 'range': '8-15'}, {'name': '/gettest/manifest-bc-ranges', 'sub_slo': True, 'content_type': 'application/json;swift_bytes=16', 'hash': self.bc_ranges_etag, 'bytes': len(_bc_ranges_manifest_json), 'range': '0-7'}, {'name': '/gettest/d_20', 'hash': md5hex("d" * 20), 'content_type': 'text/plain', 'bytes': '20', 'range': '0-3'}, {'name': '/gettest/d_20', 'hash': md5hex("d" * 20), 'content_type': 'text/plain', 'bytes': '20', 'range': '8-11'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-abcd-ranges', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': md5hex(_abcd_ranges_manifest_json)}, _abcd_ranges_manifest_json) _abcd_subranges_manifest_json = json.dumps( [{'name': '/gettest/manifest-abcd-ranges', 'sub_slo': True, 'hash': md5hex("a" * 8), 'content_type': 'text/plain', 'bytes': '32', 'range': '6-10'}, {'name': '/gettest/manifest-abcd-ranges', 'sub_slo': True, 'hash': md5hex("a" * 8), 'content_type': 'text/plain', 'bytes': '32', 'range': '31-31'}, {'name': 
'/gettest/manifest-abcd-ranges', 'sub_slo': True, 'hash': md5hex("a" * 8), 'content_type': 'text/plain', 'bytes': '32', 'range': '14-18'}, {'name': '/gettest/manifest-abcd-ranges', 'sub_slo': True, 'hash': md5hex("a" * 8), 'content_type': 'text/plain', 'bytes': '32', 'range': '0-0'}, {'name': '/gettest/manifest-abcd-ranges', 'sub_slo': True, 'hash': md5hex("a" * 8), 'content_type': 'text/plain', 'bytes': '32', 'range': '22-26'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-abcd-subranges', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': md5hex(_abcd_subranges_manifest_json)}, _abcd_subranges_manifest_json) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-badjson', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'X-Object-Meta-Fish': 'Bass'}, "[not {json (at ++++all") def tearDown(self): self.assertEqual(self.app.unclosed_requests, {}) def test_get_manifest_passthrough(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-bc?multipart-manifest=get', environ={'REQUEST_METHOD': 'GET', 'HTTP_ACCEPT': 'application/json'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') self.assertTrue( ('Content-Type', 'application/json; charset=utf-8') in headers, headers) try: resp_data = json.loads(body) except ValueError: self.fail("Invalid JSON in manifest GET: %r" % body) self.assertEqual( resp_data, [{'hash': md5hex('b' * 10), 'bytes': '10', 'name': '/gettest/b_10', 'content_type': 'text/plain'}, {'hash': md5hex('c' * 15), 'bytes': '15', 'name': '/gettest/c_15', 'content_type': 'text/plain'}], body) def test_get_nonmanifest_passthrough(self): req = Request.blank( '/v1/AUTH_test/gettest/a_5', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') self.assertEqual(body, 'aaaaa') def test_get_manifest(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-bc', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) manifest_etag = md5hex(md5hex("b" * 10) + md5hex("c" * 15)) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Length'], '25') self.assertEqual(headers['Etag'], '"%s"' % manifest_etag) self.assertEqual(headers['X-Object-Meta-Plant'], 'Ficus') self.assertEqual(body, 'bbbbbbbbbbccccccccccccccc') for _, _, hdrs in self.app.calls_with_headers[1:]: ua = hdrs.get("User-Agent", "") self.assertTrue("SLO MultipartGET" in ua) self.assertFalse("SLO MultipartGET SLO MultipartGET" in ua) # the first request goes through unaltered first_ua = self.app.calls_with_headers[0][2].get("User-Agent") self.assertFalse( "SLO MultipartGET" in first_ua) def test_get_manifest_repeated_segments(self): _aabbccdd_manifest_json = json.dumps( [{'name': '/gettest/a_5', 'hash': md5hex("a" * 5), 'content_type': 'text/plain', 'bytes': '5'}, {'name': '/gettest/a_5', 'hash': md5hex("a" * 5), 'content_type': 'text/plain', 'bytes': '5'}, {'name': '/gettest/b_10', 'hash': md5hex("b" * 10), 'content_type': 'text/plain', 'bytes': '10'}, {'name': '/gettest/b_10', 'hash': md5hex("b" * 10), 'content_type': 'text/plain', 'bytes': '10'}, {'name': '/gettest/c_15', 'hash': md5hex("c" * 15), 'content_type': 'text/plain', 'bytes': '15'}, {'name': '/gettest/c_15', 'hash': md5hex("c" * 15), 'content_type': 'text/plain', 'bytes': '15'}, {'name': '/gettest/d_20', 'hash': md5hex("d" * 20), 'content_type': 'text/plain', 'bytes': '20'}, {'name': '/gettest/d_20', 'hash': md5hex("d" * 
20), 'content_type': 'text/plain', 'bytes': '20'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-aabbccdd', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': md5(_aabbccdd_manifest_json).hexdigest()}, _aabbccdd_manifest_json) req = Request.blank( '/v1/AUTH_test/gettest/manifest-aabbccdd', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(body, ( 'aaaaaaaaaabbbbbbbbbbbbbbbbbbbbcccccccccccccccccccccccccccccc' 'dddddddddddddddddddddddddddddddddddddddd')) self.assertEqual(self.app.calls, [ ('GET', '/v1/AUTH_test/gettest/manifest-aabbccdd'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get')]) ranges = [c[2].get('Range') for c in self.app.calls_with_headers] self.assertEqual(ranges, [ None, 'bytes=0-4,0-4', 'bytes=0-9,0-9', 'bytes=0-14,0-14', 'bytes=0-19,0-19']) def test_get_manifest_ratelimiting(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcdefghijkl', environ={'REQUEST_METHOD': 'GET'}) the_time = [time.time()] sleeps = [] def mock_time(): return the_time[0] def mock_sleep(duration): sleeps.append(duration) the_time[0] += duration with patch('time.time', mock_time), \ patch('eventlet.sleep', mock_sleep), \ patch.object(self.slo, 'rate_limit_under_size', 999999999), \ patch.object(self.slo, 'rate_limit_after_segment', 0): status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') # sanity check self.assertEqual(sleeps, [2.0, 2.0, 2.0, 2.0, 2.0]) # give the client the first 4 segments without ratelimiting; we'll # sleep less del sleeps[:] with patch('time.time', mock_time), \ patch('eventlet.sleep', mock_sleep), \ patch.object(self.slo, 'rate_limit_under_size', 999999999), \ patch.object(self.slo, 'rate_limit_after_segment', 4): status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') # sanity check self.assertEqual(sleeps, [2.0, 2.0, 2.0]) # ratelimit segments under 35 bytes; this affects a-f del sleeps[:] with patch('time.time', mock_time), \ patch('eventlet.sleep', mock_sleep), \ patch.object(self.slo, 'rate_limit_under_size', 35), \ patch.object(self.slo, 'rate_limit_after_segment', 0): status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') # sanity check self.assertEqual(sleeps, [2.0, 2.0]) # ratelimit segments under 36 bytes; this now affects a-g, netting # us one more sleep than before del sleeps[:] with patch('time.time', mock_time), \ patch('eventlet.sleep', mock_sleep), \ patch.object(self.slo, 'rate_limit_under_size', 36), \ patch.object(self.slo, 'rate_limit_after_segment', 0): status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') # sanity check self.assertEqual(sleeps, [2.0, 2.0, 2.0]) def test_if_none_match_matches(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'If-None-Match': self.manifest_abcd_etag}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '304 Not Modified') self.assertEqual(headers['Content-Length'], '0') self.assertEqual(body, '') def test_if_none_match_does_not_match(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'If-None-Match': 
"not-%s" % self.manifest_abcd_etag}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual( body, 'aaaaabbbbbbbbbbcccccccccccccccdddddddddddddddddddd') def test_if_match_matches(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Match': self.manifest_abcd_etag}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual( body, 'aaaaabbbbbbbbbbcccccccccccccccdddddddddddddddddddd') def test_if_match_does_not_match(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Match': "not-%s" % self.manifest_abcd_etag}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '412 Precondition Failed') self.assertEqual(headers['Content-Length'], '0') self.assertEqual(body, '') def test_if_match_matches_and_range(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Match': self.manifest_abcd_etag, 'Range': 'bytes=3-6'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual(headers['Content-Length'], '4') self.assertEqual(body, 'aabb') def test_get_manifest_with_submanifest(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Length'], '50') self.assertEqual(headers['Etag'], '"%s"' % self.manifest_abcd_etag) self.assertEqual( body, 'aaaaabbbbbbbbbbcccccccccccccccdddddddddddddddddddd') def test_range_get_manifest(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=3-17'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual(headers['Content-Length'], '15') self.assertTrue('Etag' not in headers) self.assertEqual(body, 'aabbbbbbbbbbccc') self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-bc'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get')]) ranges = [c[2].get('Range') for c in self.app.calls_with_headers] self.assertEqual(ranges, [ 'bytes=3-17', None, None, 'bytes=3-', None, 'bytes=0-2']) # we set swift.source for everything but the first request self.assertIsNone(self.app.swift_sources[0]) self.assertEqual(self.app.swift_sources[1:], ['SLO'] * (len(self.app.swift_sources) - 1)) def test_range_get_includes_whole_manifest(self): # If the first range GET results in retrieval of the entire manifest # body (which we can detect by looking at Content-Range), then we # should not go make a second, non-ranged request just to retrieve the # same bytes again. 
req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-999999999'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual( body, 'aaaaabbbbbbbbbbcccccccccccccccdddddddddddddddddddd') self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-bc'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get')]) def test_range_get_beyond_manifest(self): big = 'e' * 1024 * 1024 big_etag = md5hex(big) self.app.register( 'GET', '/v1/AUTH_test/gettest/big_seg', swob.HTTPOk, {'Content-Type': 'application/foo', 'Etag': big_etag}, big) big_manifest = json.dumps( [{'name': '/gettest/big_seg', 'hash': big_etag, 'bytes': 1024 * 1024, 'content_type': 'application/foo'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/big_manifest', swob.HTTPOk, {'Content-Type': 'application/octet-stream', 'X-Static-Large-Object': 'true', 'Etag': md5(big_manifest).hexdigest()}, big_manifest) req = Request.blank( '/v1/AUTH_test/gettest/big_manifest', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=100000-199999'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') count_e = sum(1 if x == 'e' else 0 for x in body) self.assertEqual(count_e, 100000) self.assertEqual(len(body) - count_e, 0) self.assertEqual( self.app.calls, [ # has Range header, gets 416 ('GET', '/v1/AUTH_test/gettest/big_manifest'), # retry the first one ('GET', '/v1/AUTH_test/gettest/big_manifest'), ('GET', '/v1/AUTH_test/gettest/big_seg?multipart-manifest=get')]) def test_range_get_bogus_content_range(self): # Just a little paranoia; Swift currently sends back valid # Content-Range headers, but if somehow someone sneaks an invalid one # in there, we'll ignore it. 
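        # The wrapper defined below stamps the unparseable value
        # "Content-Range: triscuits" onto every backend response, so the
        # middleware cannot tell whether its ranged manifest GET returned
        # the whole listing.  It should fall back to re-fetching the
        # manifest without a Range header and still assemble the full
        # 50-byte body, as the assertions further down verify.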
def content_range_breaker_factory(app): def content_range_breaker(env, start_response): req = swob.Request(env) resp = req.get_response(app) resp.headers['Content-Range'] = 'triscuits' return resp(env, start_response) return content_range_breaker self.slo = slo.filter_factory({})( content_range_breaker_factory(self.app)) req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-999999999'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual( body, 'aaaaabbbbbbbbbbcccccccccccccccdddddddddddddddddddd') self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-bc'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get')]) def test_range_get_manifest_on_segment_boundaries(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=5-29'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual(headers['Content-Length'], '25') self.assertTrue('Etag' not in headers) self.assertEqual(body, 'bbbbbbbbbbccccccccccccccc') self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-bc'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get')]) headers = [c[2] for c in self.app.calls_with_headers] self.assertEqual(headers[0].get('Range'), 'bytes=5-29') self.assertEqual(headers[1].get('Range'), None) self.assertEqual(headers[2].get('Range'), None) self.assertEqual(headers[3].get('Range'), None) self.assertEqual(headers[4].get('Range'), None) def test_range_get_manifest_first_byte(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-0'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual(headers['Content-Length'], '1') self.assertEqual(body, 'a') # Make sure we don't get any objects we don't need, including # submanifests. self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get')]) def test_range_get_manifest_sub_slo(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=25-30'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual(headers['Content-Length'], '6') self.assertEqual(body, 'cccccd') # Make sure we don't get any objects we don't need, including # submanifests. 
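        # Bytes 25-30 of the 50-byte concatenation are the last five bytes
        # of c_15 (offsets 25-29) plus the first byte of d_20 (offset 30),
        # which is where 'cccccd' comes from.  Only the bc sub-manifest and
        # those two segments should be requested; a_5 and b_10 never appear
        # in the call list below.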
self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-bc'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get')]) def test_range_get_manifest_overlapping_end(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=45-55'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual(headers['Content-Length'], '5') self.assertEqual(body, 'ddddd') def test_range_get_manifest_unsatisfiable(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=100-200'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '416 Requested Range Not Satisfiable') def test_multi_range_get_manifest(self): # SLO doesn't support multi-range GETs. The way that you express # "unsupported" in HTTP is to return a 200 and the whole entity. req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-0,2-2'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Length'], '50') self.assertEqual( body, 'aaaaabbbbbbbbbbcccccccccccccccdddddddddddddddddddd') def test_get_segment_with_non_ascii_path(self): segment_body = u"a møøse once bit my sister".encode("utf-8") self.app.register( 'GET', u'/v1/AUTH_test/ünicode/öbject-segment'.encode('utf-8'), swob.HTTPOk, {'Content-Length': str(len(segment_body)), 'Etag': md5hex(segment_body)}, segment_body) manifest_json = json.dumps([{'name': u'/ünicode/öbject-segment', 'hash': md5hex(segment_body), 'content_type': 'text/plain', 'bytes': len(segment_body)}]) self.app.register( 'GET', u'/v1/AUTH_test/ünicode/manifest'.encode('utf-8'), swob.HTTPOk, {'Content-Type': 'application/json', 'Content-Length': str(len(manifest_json)), 'X-Static-Large-Object': 'true'}, manifest_json) req = Request.blank( '/v1/AUTH_test/ünicode/manifest', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(body, segment_body) def test_get_range_manifest(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd-ranges', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Length'], '32') self.assertEqual(headers['Content-Type'], 'application/json') self.assertEqual(body, 'aaaaaaaaccccccccbbbbbbbbdddddddd') self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd-ranges'), ('GET', '/v1/AUTH_test/gettest/manifest-bc-ranges'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get')]) ranges = [c[2].get('Range') for c in self.app.calls_with_headers] self.assertEqual(ranges, [ None, None, 'bytes=0-3,1-', 'bytes=0-3,11-', 'bytes=4-7,2-5', 'bytes=0-3,8-11']) # we set swift.source for everything but the first request self.assertIsNone(self.app.swift_sources[0]) self.assertEqual(self.app.swift_sources[1:], 
['SLO'] * (len(self.app.swift_sources) - 1)) self.assertEqual(md5hex(''.join([ md5hex('a' * 5), ':0-3;', md5hex('a' * 5), ':1-4;', self.bc_ranges_etag, ':8-15;', self.bc_ranges_etag, ':0-7;', md5hex('d' * 20), ':0-3;', md5hex('d' * 20), ':8-11;', ])), headers['Etag'].strip('"')) def test_get_subrange_manifest(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd-subranges', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Length'], '17') self.assertEqual(headers['Content-Type'], 'application/json') self.assertEqual(body, 'aacccdccbbbabbddd') self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd-subranges'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd-ranges'), ('GET', '/v1/AUTH_test/gettest/manifest-bc-ranges'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/manifest-bc-ranges'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/manifest-bc-ranges'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get')]) ranges = [c[2].get('Range') for c in self.app.calls_with_headers] self.assertEqual(ranges, [ None, None, None, 'bytes=3-', 'bytes=0-2', None, 'bytes=11-11', 'bytes=13-', 'bytes=4-6', None, 'bytes=0-0', 'bytes=4-5', 'bytes=0-2']) # we set swift.source for everything but the first request self.assertIsNone(self.app.swift_sources[0]) self.assertEqual(self.app.swift_sources[1:], ['SLO'] * (len(self.app.swift_sources) - 1)) def test_range_get_range_manifest(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd-ranges', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=7-26'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual(headers['Content-Length'], '20') self.assertEqual(headers['Content-Type'], 'application/json') self.assertNotIn('Etag', headers) self.assertEqual(body, 'accccccccbbbbbbbbddd') self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd-ranges'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd-ranges'), ('GET', '/v1/AUTH_test/gettest/manifest-bc-ranges'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get')]) ranges = [c[2].get('Range') for c in self.app.calls_with_headers] self.assertEqual(ranges, [ 'bytes=7-26', None, None, 'bytes=4-', 'bytes=0-3,11-', 'bytes=4-7,2-5', 'bytes=0-2']) # we set swift.source for everything but the first request self.assertIsNone(self.app.swift_sources[0]) self.assertEqual(self.app.swift_sources[1:], ['SLO'] * (len(self.app.swift_sources) - 1)) def test_range_get_subrange_manifest(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd-subranges', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=4-12'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') 
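        # Illustrative cross-check (not one of the original assertions):
        # the nine bytes requested here are just a slice of the 17-byte
        # body that test_get_subrange_manifest above reads in full, so
        # slicing that literal reproduces the expected payload.
        self.assertEqual('aacccdccbbbabbddd'[4:13], 'cdccbbbab')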
self.assertEqual(headers['Content-Length'], '9') self.assertEqual(headers['Content-Type'], 'application/json') self.assertEqual(body, 'cdccbbbab') self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd-subranges'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd-subranges'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd-ranges'), ('GET', '/v1/AUTH_test/gettest/manifest-bc-ranges'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/manifest-bc-ranges'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/manifest-bc-ranges'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get')]) ranges = [c[2].get('Range') for c in self.app.calls_with_headers] self.assertEqual(ranges, [ 'bytes=4-12', None, None, None, 'bytes=2-2', None, 'bytes=11-11', 'bytes=13-', 'bytes=4-6', None, 'bytes=0-0', 'bytes=4-4']) # we set swift.source for everything but the first request self.assertIsNone(self.app.swift_sources[0]) self.assertEqual(self.app.swift_sources[1:], ['SLO'] * (len(self.app.swift_sources) - 1)) def test_range_get_includes_whole_range_manifest(self): # If the first range GET results in retrieval of the entire manifest # body (which we can detect by looking at Content-Range), then we # should not go make a second, non-ranged request just to retrieve the # same bytes again. req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd-ranges', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-999999999'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '206 Partial Content') self.assertEqual(headers['Content-Length'], '32') self.assertEqual(headers['Content-Type'], 'application/json') self.assertEqual(body, 'aaaaaaaaccccccccbbbbbbbbdddddddd') self.assertEqual( self.app.calls, [('GET', '/v1/AUTH_test/gettest/manifest-abcd-ranges'), ('GET', '/v1/AUTH_test/gettest/manifest-bc-ranges'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/d_20?multipart-manifest=get')]) ranges = [c[2].get('Range') for c in self.app.calls_with_headers] self.assertEqual(ranges, [ 'bytes=0-999999999', None, 'bytes=0-3,1-', 'bytes=0-3,11-', 'bytes=4-7,2-5', 'bytes=0-3,8-11']) # we set swift.source for everything but the first request self.assertIsNone(self.app.swift_sources[0]) self.assertEqual(self.app.swift_sources[1:], ['SLO'] * (len(self.app.swift_sources) - 1)) def test_multi_range_get_range_manifest(self): # SLO doesn't support multi-range GETs. The way that you express # "unsupported" in HTTP is to return a 200 and the whole entity. 
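        # Rather than trying to synthesize a multipart/byteranges response
        # out of separate segment fetches, the middleware simply ignores a
        # Range header carrying more than one byte-range.  The assertions
        # below confirm the response is indistinguishable from an un-ranged
        # GET: 200, the full 32-byte body, and neither Content-Range nor
        # chunked Transfer-Encoding.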
req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd-ranges', environ={'REQUEST_METHOD': 'GET'}, headers={'Range': 'bytes=0-0,2-2'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Type'], 'application/json') self.assertEqual(body, 'aaaaaaaaccccccccbbbbbbbbdddddddd') self.assertNotIn('Transfer-Encoding', headers) self.assertNotIn('Content-Range', headers) self.assertEqual(headers['Content-Length'], '32') def test_get_bogus_manifest(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-badjson', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Length'], '0') self.assertEqual(headers['X-Object-Meta-Fish'], 'Bass') self.assertEqual(body, '') def test_generator_closure(self): # Test that the SLO WSGI iterable closes its internal .app_iter when # it receives a close() message. # # This is sufficient to fix a memory leak. The memory leak arises # due to cyclic references involving a running generator; a running # generator sometimes preventes the GC from collecting it in the # same way that an object with a defined __del__ does. # # There are other ways to break the cycle and fix the memory leak as # well; calling .close() on the generator is sufficient, but not # necessary. However, having this test is better than nothing for # preventing regressions. leaks = [0] class LeakTracker(object): def __init__(self, inner_iter): leaks[0] += 1 self.inner_iter = iter(inner_iter) def __iter__(self): return self def next(self): return next(self.inner_iter) def close(self): leaks[0] -= 1 close_if_possible(self.inner_iter) class LeakTrackingSegmentedIterable(slo.SegmentedIterable): def _internal_iter(self, *a, **kw): it = super( LeakTrackingSegmentedIterable, self)._internal_iter( *a, **kw) return LeakTracker(it) status = [None] headers = [None] def start_response(s, h, ei=None): status[0] = s headers[0] = h req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET', 'HTTP_ACCEPT': 'application/json'}) # can't self.call_slo() here since we don't want to consume the # whole body with patch.object(slo, 'SegmentedIterable', LeakTrackingSegmentedIterable): app_resp = self.slo(req.environ, start_response) self.assertEqual(status[0], '200 OK') # sanity check body_iter = iter(app_resp) chunk = next(body_iter) self.assertEqual(chunk, 'aaaaa') # sanity check app_resp.close() self.assertEqual(0, leaks[0]) def test_head_manifest_is_efficient(self): req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'HEAD'}) status, headers, body = self.call_slo(req) headers = HeaderKeyDict(headers) self.assertEqual(status, '200 OK') self.assertEqual(headers['Content-Length'], '50') self.assertEqual(headers['Etag'], '"%s"' % self.manifest_abcd_etag) self.assertEqual(body, '') # Note the lack of recursive descent into manifest-bc. We know the # content-length from the outer manifest, so there's no need for any # submanifest fetching here, but a naïve implementation might do it # anyway. self.assertEqual(self.app.calls, [ ('HEAD', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-abcd')]) def test_recursion_limit(self): # man1 points to obj1 and man2, man2 points to obj2 and man3... 
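        # The loops below build a 20-deep chain of sub-manifests:
        # man1 -> [obj1, man2], man2 -> [obj2, man3], and so on.  The
        # middleware evidently stops descending after about ten levels of
        # nesting and raises ListingIterError; since the object listed at
        # each level has already been streamed, the client ends up with a
        # 200 and a body truncated after body10.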
for i in range(20): self.app.register('GET', '/v1/AUTH_test/gettest/obj%d' % i, swob.HTTPOk, {'Content-Type': 'text/plain', 'Etag': md5hex('body%02d' % i)}, 'body%02d' % i) manifest_json = json.dumps([{'name': '/gettest/obj20', 'hash': md5hex('body20'), 'content_type': 'text/plain', 'bytes': '6'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/man%d' % i, swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': 'man%d' % i}, manifest_json) for i in range(19, 0, -1): manifest_data = [ {'name': '/gettest/obj%d' % i, 'hash': md5hex('body%02d' % i), 'bytes': '6', 'content_type': 'text/plain'}, {'name': '/gettest/man%d' % (i + 1), 'hash': 'man%d' % (i + 1), 'sub_slo': True, 'bytes': len(manifest_json), 'content_type': 'application/json;swift_bytes=%d' % ((21 - i) * 6)}] manifest_json = json.dumps(manifest_data) self.app.register( 'GET', '/v1/AUTH_test/gettest/man%d' % i, swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': 'man%d' % i}, manifest_json) req = Request.blank( '/v1/AUTH_test/gettest/man1', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_slo(req, expect_exception=True) headers = HeaderKeyDict(headers) self.assertIsInstance(exc, ListingIterError) # we don't know at header-sending time that things are going to go # wrong, so we end up with a 200 and a truncated body self.assertEqual(status, '200 OK') self.assertEqual(body, ('body01body02body03body04body05' + 'body06body07body08body09body10')) # make sure we didn't keep asking for segments self.assertEqual(self.app.call_count, 20) def test_sub_slo_recursion(self): # man1 points to man2 and obj1, man2 points to man3 and obj2... for i in range(11): self.app.register('GET', '/v1/AUTH_test/gettest/obj%d' % i, swob.HTTPOk, {'Content-Type': 'text/plain', 'Content-Length': '6', 'Etag': md5hex('body%02d' % i)}, 'body%02d' % i) manifest_json = json.dumps([{'name': '/gettest/obj%d' % i, 'hash': md5hex('body%2d' % i), 'content_type': 'text/plain', 'bytes': '6'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/man%d' % i, swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': 'man%d' % i}, manifest_json) self.app.register( 'HEAD', '/v1/AUTH_test/gettest/obj%d' % i, swob.HTTPOk, {'Content-Length': '6', 'Etag': md5hex('body%2d' % i)}, None) for i in range(9, 0, -1): manifest_data = [ {'name': '/gettest/man%d' % (i + 1), 'hash': 'man%d' % (i + 1), 'sub_slo': True, 'bytes': len(manifest_json), 'content_type': 'application/json;swift_bytes=%d' % ((10 - i) * 6)}, {'name': '/gettest/obj%d' % i, 'hash': md5hex('body%02d' % i), 'bytes': '6', 'content_type': 'text/plain'}] manifest_json = json.dumps(manifest_data) self.app.register( 'GET', '/v1/AUTH_test/gettest/man%d' % i, swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': 'man%d' % i}, manifest_json) req = Request.blank( '/v1/AUTH_test/gettest/man1', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '200 OK') self.assertEqual(body, ('body10body09body08body07body06' + 'body05body04body03body02body01')) self.assertEqual(self.app.call_count, 20) def test_sub_slo_recursion_limit(self): # man1 points to man2 and obj1, man2 points to man3 and obj2... 
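        # Same shape as test_sub_slo_recursion above but one level deeper.
        # Because each manifest lists its sub-SLO before its plain object,
        # the depth limit is hit while the middleware is still expanding
        # the listing, before any body bytes have been sent, so it can
        # return a clean 409 Conflict and log the segment-retrieval error
        # instead of truncating a 200.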
for i in range(12): self.app.register('GET', '/v1/AUTH_test/gettest/obj%d' % i, swob.HTTPOk, {'Content-Type': 'text/plain', 'Content-Length': '6', 'Etag': md5hex('body%02d' % i)}, 'body%02d' % i) manifest_json = json.dumps([{'name': '/gettest/obj%d' % i, 'hash': md5hex('body%2d' % i), 'content_type': 'text/plain', 'bytes': '6'}]) self.app.register( 'GET', '/v1/AUTH_test/gettest/man%d' % i, swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': 'man%d' % i}, manifest_json) self.app.register( 'HEAD', '/v1/AUTH_test/gettest/obj%d' % i, swob.HTTPOk, {'Content-Length': '6', 'Etag': md5hex('body%2d' % i)}, None) for i in range(11, 0, -1): manifest_data = [ {'name': '/gettest/man%d' % (i + 1), 'hash': 'man%d' % (i + 1), 'sub_slo': True, 'bytes': len(manifest_json), 'content_type': 'application/json;swift_bytes=%d' % ((12 - i) * 6)}, {'name': '/gettest/obj%d' % i, 'hash': md5hex('body%02d' % i), 'bytes': '6', 'content_type': 'text/plain'}] manifest_json = json.dumps(manifest_data) self.app.register('GET', '/v1/AUTH_test/gettest/man%d' % i, swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true', 'Etag': 'man%d' % i}, manifest_json) req = Request.blank( '/v1/AUTH_test/gettest/man1', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) self.assertEqual(status, '409 Conflict') self.assertEqual(self.app.call_count, 10) error_lines = self.slo.logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) self.assertTrue(error_lines[0].startswith( 'ERROR: An error occurred while retrieving segments')) def test_get_with_if_modified_since(self): # It's important not to pass the If-[Un]Modified-Since header to the # proxy for segment or submanifest GET requests, as it may result in # 304 Not Modified responses, and those don't contain any useful data. req = swob.Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Modified-Since': 'Wed, 12 Feb 2014 22:24:52 GMT', 'If-Unmodified-Since': 'Thu, 13 Feb 2014 23:25:53 GMT'}) status, headers, body, exc = self.call_slo(req, expect_exception=True) for _, _, hdrs in self.app.calls_with_headers[1:]: self.assertFalse('If-Modified-Since' in hdrs) self.assertFalse('If-Unmodified-Since' in hdrs) def test_error_fetching_segment(self): self.app.register('GET', '/v1/AUTH_test/gettest/c_15', swob.HTTPUnauthorized, {}, None) req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_slo(req, expect_exception=True) headers = HeaderKeyDict(headers) self.assertIsInstance(exc, SegmentError) self.assertEqual(status, '200 OK') self.assertEqual(self.app.calls, [ ('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-bc'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), # This one has the error, and so is the last one we fetch. 
('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get')]) def test_error_fetching_submanifest(self): self.app.register('GET', '/v1/AUTH_test/gettest/manifest-bc', swob.HTTPUnauthorized, {}, None) req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_slo(req, expect_exception=True) self.assertIsInstance(exc, ListingIterError) self.assertEqual("200 OK", status) self.assertEqual("aaaaa", body) self.assertEqual(self.app.calls, [ ('GET', '/v1/AUTH_test/gettest/manifest-abcd'), # This one has the error, and so is the last one we fetch. ('GET', '/v1/AUTH_test/gettest/manifest-bc'), # But we were looking ahead to see if we could combine ranges, # so we still get the first segment out ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get')]) def test_error_fetching_first_segment_submanifest(self): # This differs from the normal submanifest error because this one # happens before we've actually sent any response body. self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-a', swob.HTTPForbidden, {}, None) self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-manifest-a', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/gettest/manifest-a', 'sub_slo': True, 'content_type': 'application/json;swift_bytes=5', 'hash': 'manifest-a', 'bytes': '12345'}])) req = Request.blank( '/v1/AUTH_test/gettest/manifest-manifest-a', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) self.assertEqual('409 Conflict', status) error_lines = self.slo.logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) self.assertTrue(error_lines[0].startswith( 'ERROR: An error occurred while retrieving segments')) def test_invalid_json_submanifest(self): self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-bc', swob.HTTPOk, {'Content-Type': 'application/json;swift_bytes=25', 'X-Static-Large-Object': 'true', 'X-Object-Meta-Plant': 'Ficus'}, "[this {isn't (JSON") req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_slo(req, expect_exception=True) self.assertIsInstance(exc, ListingIterError) self.assertEqual('200 OK', status) self.assertEqual(body, 'aaaaa') def test_mismatched_etag(self): self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-a-b-badetag-c', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/gettest/a_5', 'hash': md5hex('a' * 5), 'content_type': 'text/plain', 'bytes': '5'}, {'name': '/gettest/b_10', 'hash': 'wrong!', 'content_type': 'text/plain', 'bytes': '10'}, {'name': '/gettest/c_15', 'hash': md5hex('c' * 15), 'content_type': 'text/plain', 'bytes': '15'}])) req = Request.blank( '/v1/AUTH_test/gettest/manifest-a-b-badetag-c', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_slo(req, expect_exception=True) self.assertIsInstance(exc, SegmentError) self.assertEqual('200 OK', status) self.assertEqual(body, 'aaaaa') def test_mismatched_size(self): self.app.register( 'GET', '/v1/AUTH_test/gettest/manifest-a-b-badsize-c', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/gettest/a_5', 'hash': md5hex('a' * 5), 'content_type': 'text/plain', 'bytes': '5'}, {'name': '/gettest/b_10', 'hash': md5hex('b' * 10), 'content_type': 'text/plain', 'bytes': '999999'}, {'name': '/gettest/c_15', 'hash': md5hex('c' * 15), 
'content_type': 'text/plain', 'bytes': '15'}])) req = Request.blank( '/v1/AUTH_test/gettest/manifest-a-b-badsize-c', environ={'REQUEST_METHOD': 'GET'}) status, headers, body, exc = self.call_slo(req, expect_exception=True) self.assertIsInstance(exc, SegmentError) self.assertEqual('200 OK', status) self.assertEqual(body, 'aaaaa') def test_first_segment_mismatched_etag(self): self.app.register('GET', '/v1/AUTH_test/gettest/manifest-badetag', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/gettest/a_5', 'hash': 'wrong!', 'content_type': 'text/plain', 'bytes': '5'}])) req = Request.blank('/v1/AUTH_test/gettest/manifest-badetag', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) self.assertEqual('409 Conflict', status) error_lines = self.slo.logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) self.assertTrue(error_lines[0].startswith( 'ERROR: An error occurred while retrieving segments')) def test_first_segment_mismatched_size(self): self.app.register('GET', '/v1/AUTH_test/gettest/manifest-badsize', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/gettest/a_5', 'hash': md5hex('a' * 5), 'content_type': 'text/plain', 'bytes': '999999'}])) req = Request.blank('/v1/AUTH_test/gettest/manifest-badsize', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) self.assertEqual('409 Conflict', status) error_lines = self.slo.logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) self.assertTrue(error_lines[0].startswith( 'ERROR: An error occurred while retrieving segments')) def test_download_takes_too_long(self): the_time = [time.time()] def mock_time(): return the_time[0] # this is just a convenient place to hang a time jump; there's nothing # special about the choice of is_success(). 
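        # Every call to the patched is_success() below advances the fake
        # clock by seven hours, so after only a few segment fetches the
        # download appears to have been running for more than a day, which
        # (judging by this test) is past the maximum time the middleware
        # will spend serving one large object.  It should abort mid-stream
        # with SegmentError, which is why only a_5, b_10 and c_15 show up
        # in the expected call list.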
def mock_is_success(status_int): the_time[0] += 7 * 3600 return status_int // 100 == 2 req = Request.blank( '/v1/AUTH_test/gettest/manifest-abcd', environ={'REQUEST_METHOD': 'GET'}) with patch.object(slo, 'is_success', mock_is_success), \ patch('swift.common.request_helpers.time.time', mock_time), \ patch('swift.common.request_helpers.is_success', mock_is_success): status, headers, body, exc = self.call_slo( req, expect_exception=True) self.assertIsInstance(exc, SegmentError) self.assertEqual(status, '200 OK') self.assertEqual(self.app.calls, [ ('GET', '/v1/AUTH_test/gettest/manifest-abcd'), ('GET', '/v1/AUTH_test/gettest/manifest-bc'), ('GET', '/v1/AUTH_test/gettest/a_5?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/b_10?multipart-manifest=get'), ('GET', '/v1/AUTH_test/gettest/c_15?multipart-manifest=get')]) def test_first_segment_not_exists(self): self.app.register('GET', '/v1/AUTH_test/gettest/not_exists_obj', swob.HTTPNotFound, {}, None) self.app.register('GET', '/v1/AUTH_test/gettest/manifest-not-exists', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/gettest/not_exists_obj', 'hash': md5hex('not_exists_obj'), 'content_type': 'text/plain', 'bytes': '%d' % len('not_exists_obj') }])) req = Request.blank('/v1/AUTH_test/gettest/manifest-not-exists', environ={'REQUEST_METHOD': 'GET'}) status, headers, body = self.call_slo(req) self.assertEqual('409 Conflict', status) error_lines = self.slo.logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) self.assertTrue(error_lines[0].startswith( 'ERROR: An error occurred while retrieving segments')) class TestSloBulkLogger(unittest.TestCase): def test_reused_logger(self): slo_mware = slo.filter_factory({})('fake app') self.assertTrue(slo_mware.logger is slo_mware.bulk_deleter.logger) class TestSloCopyHook(SloTestCase): def setUp(self): super(TestSloCopyHook, self).setUp() self.app.register( 'GET', '/v1/AUTH_test/c/o', swob.HTTPOk, {'Content-Length': '3', 'Etag': md5hex("obj")}, "obj") self.app.register( 'GET', '/v1/AUTH_test/c/man', swob.HTTPOk, {'Content-Type': 'application/json', 'X-Static-Large-Object': 'true'}, json.dumps([{'name': '/c/o', 'hash': md5hex("obj"), 'bytes': '3'}])) self.app.register( 'COPY', '/v1/AUTH_test/c/o', swob.HTTPCreated, {}) copy_hook = [None] # slip this guy in there to pull out the hook def extract_copy_hook(env, sr): if env['REQUEST_METHOD'] == 'COPY': copy_hook[0] = env['swift.copy_hook'] return self.app(env, sr) self.slo = slo.filter_factory({})(extract_copy_hook) req = Request.blank('/v1/AUTH_test/c/o', environ={'REQUEST_METHOD': 'COPY'}) self.slo(req.environ, fake_start_response) self.copy_hook = copy_hook[0] self.assertTrue(self.copy_hook is not None) # sanity check def test_copy_hook_passthrough(self): source_req = Request.blank( '/v1/AUTH_test/c/o', environ={'REQUEST_METHOD': 'GET'}) sink_req = Request.blank( '/v1/AUTH_test/c/o', environ={'REQUEST_METHOD': 'PUT'}) # no X-Static-Large-Object header, so do nothing source_resp = Response(request=source_req, status=200) modified_resp = self.copy_hook(source_req, source_resp, sink_req) self.assertTrue(modified_resp is source_resp) def test_copy_hook_manifest(self): source_req = Request.blank( '/v1/AUTH_test/c/o', environ={'REQUEST_METHOD': 'GET'}) sink_req = Request.blank( '/v1/AUTH_test/c/o', environ={'REQUEST_METHOD': 'PUT'}) source_resp = Response(request=source_req, status=200, headers={"X-Static-Large-Object": "true"}, app_iter=[json.dumps([{'name': '/c/o', 'hash': 
md5hex("obj"), 'bytes': '3'}])]) modified_resp = self.copy_hook(source_req, source_resp, sink_req) self.assertTrue(modified_resp is not source_resp) self.assertEqual(modified_resp.etag, md5hex(md5hex("obj"))) class TestSwiftInfo(unittest.TestCase): def setUp(self): utils._swift_info = {} utils._swift_admin_info = {} def test_registered_defaults(self): mware = slo.filter_factory({})('have to pass in an app') swift_info = utils.get_swift_info() self.assertTrue('slo' in swift_info) self.assertEqual(swift_info['slo'].get('max_manifest_segments'), mware.max_manifest_segments) self.assertEqual(swift_info['slo'].get('min_segment_size'), 1) self.assertEqual(swift_info['slo'].get('max_manifest_size'), mware.max_manifest_size) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_memcache.py0000664000567000056710000003753613024044352024547 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os from textwrap import dedent import unittest import mock from six.moves.configparser import NoSectionError, NoOptionError from swift.common.middleware import memcache from swift.common.memcached import MemcacheRing from swift.common.swob import Request from swift.common.wsgi import loadapp from test.unit import with_tempdir, patch_policies class FakeApp(object): def __call__(self, env, start_response): return env class ExcConfigParser(object): def read(self, path): raise Exception('read called with %r' % path) class EmptyConfigParser(object): def read(self, path): return False def get_config_parser(memcache_servers='1.2.3.4:5', memcache_serialization_support='1', memcache_max_connections='4', section='memcache'): _srvs = memcache_servers _sers = memcache_serialization_support _maxc = memcache_max_connections _section = section class SetConfigParser(object): def items(self, section_name): if section_name != section: raise NoSectionError(section_name) return { 'memcache_servers': memcache_servers, 'memcache_serialization_support': memcache_serialization_support, 'memcache_max_connections': memcache_max_connections, } def read(self, path): return True def get(self, section, option): if _section == section: if option == 'memcache_servers': if _srvs == 'error': raise NoOptionError(option, section) return _srvs elif option == 'memcache_serialization_support': if _sers == 'error': raise NoOptionError(option, section) return _sers elif option in ('memcache_max_connections', 'max_connections'): if _maxc == 'error': raise NoOptionError(option, section) return _maxc else: raise NoOptionError(option, section) else: raise NoSectionError(option) return SetConfigParser def start_response(*args): pass class TestCacheMiddleware(unittest.TestCase): def setUp(self): self.app = memcache.MemcacheMiddleware(FakeApp(), {}) def test_cache_middleware(self): req = Request.blank('/something', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertTrue('swift.cache' in resp) 
self.assertTrue(isinstance(resp['swift.cache'], MemcacheRing)) def test_conf_default_read(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = ExcConfigParser count = 0 try: for d in ({}, {'memcache_servers': '6.7.8.9:10'}, {'memcache_serialization_support': '0'}, {'memcache_max_connections': '30'}, {'memcache_servers': '6.7.8.9:10', 'memcache_serialization_support': '0'}, {'memcache_servers': '6.7.8.9:10', 'memcache_max_connections': '30'}, {'memcache_serialization_support': '0', 'memcache_max_connections': '30'} ): try: memcache.MemcacheMiddleware(FakeApp(), d) except Exception as err: self.assertEqual( str(err), "read called with '/etc/swift/memcache.conf'") count += 1 finally: memcache.ConfigParser = orig_parser self.assertEqual(count, 7) def test_conf_set_no_read(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = ExcConfigParser exc = None try: memcache.MemcacheMiddleware( FakeApp(), {'memcache_servers': '1.2.3.4:5', 'memcache_serialization_support': '2', 'memcache_max_connections': '30'}) except Exception as err: exc = err finally: memcache.ConfigParser = orig_parser self.assertEqual(exc, None) def test_conf_default(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = EmptyConfigParser try: app = memcache.MemcacheMiddleware(FakeApp(), {}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '127.0.0.1:11211') self.assertEqual(app.memcache._allow_pickle, False) self.assertEqual(app.memcache._allow_unpickle, False) self.assertEqual( app.memcache._client_cache['127.0.0.1:11211'].max_size, 2) def test_conf_inline(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = get_config_parser() try: app = memcache.MemcacheMiddleware( FakeApp(), {'memcache_servers': '6.7.8.9:10', 'memcache_serialization_support': '0', 'memcache_max_connections': '5'}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '6.7.8.9:10') self.assertEqual(app.memcache._allow_pickle, True) self.assertEqual(app.memcache._allow_unpickle, True) self.assertEqual( app.memcache._client_cache['6.7.8.9:10'].max_size, 5) def test_conf_extra_no_section(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = get_config_parser(section='foobar') try: app = memcache.MemcacheMiddleware(FakeApp(), {}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '127.0.0.1:11211') self.assertEqual(app.memcache._allow_pickle, False) self.assertEqual(app.memcache._allow_unpickle, False) self.assertEqual( app.memcache._client_cache['127.0.0.1:11211'].max_size, 2) def test_conf_extra_no_option(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = get_config_parser( memcache_servers='error', memcache_serialization_support='error', memcache_max_connections='error') try: app = memcache.MemcacheMiddleware(FakeApp(), {}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '127.0.0.1:11211') self.assertEqual(app.memcache._allow_pickle, False) self.assertEqual(app.memcache._allow_unpickle, False) self.assertEqual( app.memcache._client_cache['127.0.0.1:11211'].max_size, 2) def test_conf_inline_other_max_conn(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = get_config_parser() try: app = memcache.MemcacheMiddleware( FakeApp(), {'memcache_servers': '6.7.8.9:10', 'memcache_serialization_support': '0', 'max_connections': '5'}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '6.7.8.9:10') 
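        # Same as test_conf_inline, except the pool size is given via the
        # short spelling "max_connections" rather than
        # "memcache_max_connections"; judging by this test the filter
        # section accepts either, e.g. (illustrative snippet, not a real
        # deployment config):
        #
        #   [filter:cache]
        #   use = egg:swift#memcache
        #   memcache_servers = 6.7.8.9:10
        #   max_connections = 5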
self.assertEqual(app.memcache._allow_pickle, True) self.assertEqual(app.memcache._allow_unpickle, True) self.assertEqual( app.memcache._client_cache['6.7.8.9:10'].max_size, 5) def test_conf_inline_bad_max_conn(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = get_config_parser() try: app = memcache.MemcacheMiddleware( FakeApp(), {'memcache_servers': '6.7.8.9:10', 'memcache_serialization_support': '0', 'max_connections': 'bad42'}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '6.7.8.9:10') self.assertEqual(app.memcache._allow_pickle, True) self.assertEqual(app.memcache._allow_unpickle, True) self.assertEqual( app.memcache._client_cache['6.7.8.9:10'].max_size, 4) def test_conf_from_extra_conf(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = get_config_parser() try: app = memcache.MemcacheMiddleware(FakeApp(), {}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '1.2.3.4:5') self.assertEqual(app.memcache._allow_pickle, False) self.assertEqual(app.memcache._allow_unpickle, True) self.assertEqual( app.memcache._client_cache['1.2.3.4:5'].max_size, 4) def test_conf_from_extra_conf_bad_max_conn(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = get_config_parser( memcache_max_connections='bad42') try: app = memcache.MemcacheMiddleware(FakeApp(), {}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '1.2.3.4:5') self.assertEqual(app.memcache._allow_pickle, False) self.assertEqual(app.memcache._allow_unpickle, True) self.assertEqual( app.memcache._client_cache['1.2.3.4:5'].max_size, 2) def test_conf_from_inline_and_maxc_from_extra_conf(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = get_config_parser() try: app = memcache.MemcacheMiddleware( FakeApp(), {'memcache_servers': '6.7.8.9:10', 'memcache_serialization_support': '0'}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '6.7.8.9:10') self.assertEqual(app.memcache._allow_pickle, True) self.assertEqual(app.memcache._allow_unpickle, True) self.assertEqual( app.memcache._client_cache['6.7.8.9:10'].max_size, 4) def test_conf_from_inline_and_sers_from_extra_conf(self): orig_parser = memcache.ConfigParser memcache.ConfigParser = get_config_parser() try: app = memcache.MemcacheMiddleware( FakeApp(), {'memcache_servers': '6.7.8.9:10', 'memcache_max_connections': '42'}) finally: memcache.ConfigParser = orig_parser self.assertEqual(app.memcache_servers, '6.7.8.9:10') self.assertEqual(app.memcache._allow_pickle, False) self.assertEqual(app.memcache._allow_unpickle, True) self.assertEqual( app.memcache._client_cache['6.7.8.9:10'].max_size, 42) def test_filter_factory(self): factory = memcache.filter_factory({'max_connections': '3'}, memcache_servers='10.10.10.10:10', memcache_serialization_support='1') thefilter = factory('myapp') self.assertEqual(thefilter.app, 'myapp') self.assertEqual(thefilter.memcache_servers, '10.10.10.10:10') self.assertEqual(thefilter.memcache._allow_pickle, False) self.assertEqual(thefilter.memcache._allow_unpickle, True) self.assertEqual( thefilter.memcache._client_cache['10.10.10.10:10'].max_size, 3) @patch_policies def _loadapp(self, proxy_config_path): """ Load a proxy from an app.conf to get the memcache_ring :returns: the memcache_ring of the memcache middleware filter """ with mock.patch('swift.proxy.server.Ring'): app = loadapp(proxy_config_path) memcache_ring = None while True: memcache_ring = getattr(app, 
'memcache', None) if memcache_ring: break app = app.app return memcache_ring @with_tempdir def test_real_config(self, tempdir): config = """ [pipeline:main] pipeline = cache proxy-server [app:proxy-server] use = egg:swift#proxy [filter:cache] use = egg:swift#memcache """ config_path = os.path.join(tempdir, 'test.conf') with open(config_path, 'w') as f: f.write(dedent(config)) memcache_ring = self._loadapp(config_path) # only one server by default self.assertEqual(memcache_ring._client_cache.keys(), ['127.0.0.1:11211']) # extra options self.assertEqual(memcache_ring._connect_timeout, 0.3) self.assertEqual(memcache_ring._pool_timeout, 1.0) # tries is limited to server count self.assertEqual(memcache_ring._tries, 1) self.assertEqual(memcache_ring._io_timeout, 2.0) @with_tempdir def test_real_config_with_options(self, tempdir): config = """ [pipeline:main] pipeline = cache proxy-server [app:proxy-server] use = egg:swift#proxy [filter:cache] use = egg:swift#memcache memcache_servers = 10.0.0.1:11211,10.0.0.2:11211,10.0.0.3:11211, 10.0.0.4:11211 connect_timeout = 1.0 pool_timeout = 0.5 tries = 4 io_timeout = 1.0 """ config_path = os.path.join(tempdir, 'test.conf') with open(config_path, 'w') as f: f.write(dedent(config)) memcache_ring = self._loadapp(config_path) self.assertEqual(sorted(memcache_ring._client_cache.keys()), ['10.0.0.%d:11211' % i for i in range(1, 5)]) # extra options self.assertEqual(memcache_ring._connect_timeout, 1.0) self.assertEqual(memcache_ring._pool_timeout, 0.5) # tries is limited to server count self.assertEqual(memcache_ring._tries, 4) self.assertEqual(memcache_ring._io_timeout, 1.0) @with_tempdir def test_real_memcache_config(self, tempdir): proxy_config = """ [DEFAULT] swift_dir = %s [pipeline:main] pipeline = cache proxy-server [app:proxy-server] use = egg:swift#proxy [filter:cache] use = egg:swift#memcache connect_timeout = 1.0 """ % tempdir proxy_config_path = os.path.join(tempdir, 'test.conf') with open(proxy_config_path, 'w') as f: f.write(dedent(proxy_config)) memcache_config = """ [memcache] memcache_servers = 10.0.0.1:11211,10.0.0.2:11211,10.0.0.3:11211, 10.0.0.4:11211 connect_timeout = 0.5 io_timeout = 1.0 """ memcache_config_path = os.path.join(tempdir, 'memcache.conf') with open(memcache_config_path, 'w') as f: f.write(dedent(memcache_config)) memcache_ring = self._loadapp(proxy_config_path) self.assertEqual(sorted(memcache_ring._client_cache.keys()), ['10.0.0.%d:11211' % i for i in range(1, 5)]) # proxy option takes precedence self.assertEqual(memcache_ring._connect_timeout, 1.0) # default tries are not limited by servers self.assertEqual(memcache_ring._tries, 3) # memcache conf options are defaults self.assertEqual(memcache_ring._io_timeout, 1.0) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_recon.py0000664000567000056710000017112713024044354024110 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
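# Unit tests for the recon middleware, which exposes node telemetry
# (ring md5sums, mount status, async pendings, load, socket stats,
# drive-audit errors and so on) for consumption by the swift-recon tool.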
import array from contextlib import contextmanager import mock import os from posix import stat_result, statvfs_result from shutil import rmtree import unittest from unittest import TestCase from swift import __version__ as swiftver from swift.common import ring, utils from swift.common.swob import Request from swift.common.middleware import recon from swift.common.storage_policy import StoragePolicy from test.unit import patch_policies def fake_check_mount(a, b): raise OSError('Input/Output Error') def fail_os_listdir(): raise OSError('No such file or directory') def fail_io_open(file_path, open_mode): raise IOError('No such file or directory') class FakeApp(object): def __call__(self, env, start_response): return "FAKE APP" def start_response(*args): pass class FakeFromCache(object): def __init__(self, out=None): self.fakeout = out self.fakeout_calls = [] def fake_from_recon_cache(self, *args, **kwargs): self.fakeout_calls.append((args, kwargs)) return self.fakeout class OpenAndReadTester(object): def __init__(self, output_iter): self.index = 0 self.out_len = len(output_iter) - 1 self.data = output_iter self.output_iter = iter(output_iter) self.read_calls = [] self.open_calls = [] def __iter__(self): return self def next(self): if self.index == self.out_len: raise StopIteration else: line = self.data[self.index] self.index += 1 return line def read(self, *args, **kwargs): self.read_calls.append((args, kwargs)) try: return next(self.output_iter) except StopIteration: return '' @contextmanager def open(self, *args, **kwargs): self.open_calls.append((args, kwargs)) yield self class MockOS(object): def __init__(self, ls_out=None, isdir_out=None, ismount_out=False, statvfs_out=None): self.ls_output = ls_out self.isdir_output = isdir_out self.ismount_output = ismount_out self.statvfs_output = statvfs_out self.listdir_calls = [] self.isdir_calls = [] self.ismount_calls = [] self.statvfs_calls = [] def fake_listdir(self, *args, **kwargs): self.listdir_calls.append((args, kwargs)) return self.ls_output def fake_isdir(self, *args, **kwargs): self.isdir_calls.append((args, kwargs)) return self.isdir_output def fake_ismount(self, *args, **kwargs): self.ismount_calls.append((args, kwargs)) if isinstance(self.ismount_output, Exception): raise self.ismount_output else: return self.ismount_output def fake_statvfs(self, *args, **kwargs): self.statvfs_calls.append((args, kwargs)) return statvfs_result(self.statvfs_output) class FakeRecon(object): def __init__(self): self.fake_replication_rtype = None self.fake_updater_rtype = None self.fake_auditor_rtype = None self.fake_expirer_rtype = None def fake_mem(self): return {'memtest': "1"} def fake_load(self): return {'loadtest': "1"} def fake_async(self): return {'asynctest': "1"} def fake_get_device_info(self): return {"/srv/1/node": ["sdb1"]} def fake_replication(self, recon_type): self.fake_replication_rtype = recon_type return {'replicationtest': "1"} def fake_updater(self, recon_type): self.fake_updater_rtype = recon_type return {'updatertest': "1"} def fake_auditor(self, recon_type): self.fake_auditor_rtype = recon_type return {'auditortest': "1"} def fake_expirer(self, recon_type): self.fake_expirer_rtype = recon_type return {'expirertest': "1"} def fake_mounted(self): return {'mountedtest': "1"} def fake_unmounted(self): return {'unmountedtest': "1"} def fake_unmounted_empty(self): return [] def fake_diskusage(self): return {'diskusagetest': "1"} def fake_ringmd5(self): return {'ringmd5test': "1"} def fake_swiftconfmd5(self): return 
{'/etc/swift/swift.conf': "abcdef"} def fake_quarantined(self): return {'quarantinedtest': "1"} def fake_sockstat(self): return {'sockstattest': "1"} def fake_driveaudit(self): return {'driveaudittest': "1"} def fake_time(self): return {'timetest': "1"} def nocontent(self): return None def raise_IOError(self, *args, **kwargs): raise IOError def raise_ValueError(self, *args, **kwargs): raise ValueError def raise_Exception(self, *args, **kwargs): raise Exception @patch_policies(legacy_only=True) class TestReconSuccess(TestCase): def setUp(self): # can't use mkdtemp here as 2.6 gzip puts the filename in the header # which will cause ring md5 checks to fail self.tempdir = '/tmp/swift_recon_md5_test' utils.mkdirs(self.tempdir) self.app = recon.ReconMiddleware(FakeApp(), {'swift_dir': self.tempdir}) self.mockos = MockOS() self.fakecache = FakeFromCache() self.real_listdir = os.listdir self.real_isdir = os.path.isdir self.real_ismount = utils.ismount self.real_statvfs = os.statvfs os.listdir = self.mockos.fake_listdir os.path.isdir = self.mockos.fake_isdir utils.ismount = self.mockos.fake_ismount os.statvfs = self.mockos.fake_statvfs self.real_from_cache = self.app._from_recon_cache self.app._from_recon_cache = self.fakecache.fake_from_recon_cache self.frecon = FakeRecon() self.ring_part_shift = 5 self.ring_devs = [{'id': 0, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'device': 'sda1'}, {'id': 1, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'device': 'sdb1'}, None, {'id': 3, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.1', 'port': 6000, 'device': 'sdc1'}, {'id': 4, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.2', 'port': 6000, 'device': 'sdd1'}] self._create_rings() def tearDown(self): os.listdir = self.real_listdir os.path.isdir = self.real_isdir utils.ismount = self.real_ismount os.statvfs = self.real_statvfs del self.mockos self.app._from_recon_cache = self.real_from_cache del self.fakecache rmtree(self.tempdir) def _create_ring(self, ringpath, replica_map, devs, part_shift): def fake_time(): return 0 def fake_base(fname): # least common denominator with gzip versions is to # not use the .gz extension in the gzip header return fname[:-3] # eliminate time from the equation as gzip 2.6 includes # it in the header resulting in md5 file mismatch, also # have to mock basename as one version uses it, one doesn't with mock.patch("time.time", fake_time): with mock.patch("os.path.basename", fake_base): ring.RingData(replica_map, devs, part_shift).save(ringpath, mtime=None) def _create_rings(self): # make the rings unique so they have different md5 sums rings = { 'account.ring.gz': [ array.array('H', [3, 1, 3, 1]), array.array('H', [0, 3, 1, 4]), array.array('H', [1, 4, 0, 3])], 'container.ring.gz': [ array.array('H', [4, 3, 0, 1]), array.array('H', [0, 1, 3, 4]), array.array('H', [3, 4, 0, 1])], 'object.ring.gz': [ array.array('H', [0, 1, 0, 1]), array.array('H', [0, 1, 0, 1]), array.array('H', [3, 4, 3, 4])], 'object-1.ring.gz': [ array.array('H', [1, 0, 1, 0]), array.array('H', [1, 0, 1, 0]), array.array('H', [4, 3, 4, 3])], 'object-2.ring.gz': [ array.array('H', [1, 1, 1, 0]), array.array('H', [1, 0, 1, 3]), array.array('H', [4, 2, 4, 3])] } for ringfn, replica_map in rings.iteritems(): ringpath = os.path.join(self.tempdir, ringfn) self._create_ring(ringpath, replica_map, self.ring_devs, self.ring_part_shift) @patch_policies([ StoragePolicy(0, 'stagecoach'), StoragePolicy(1, 'pinto', is_deprecated=True), StoragePolicy(2, 'toyota', is_default=True), ]) def test_get_ring_md5(self): # 
We should only see configured and present rings, so to handle the # "normal" case just patch the policies to match the existing rings. expt_out = {'%s/account.ring.gz' % self.tempdir: 'd288bdf39610e90d4f0b67fa00eeec4f', '%s/container.ring.gz' % self.tempdir: '9a5a05a8a4fbbc61123de792dbe4592d', '%s/object.ring.gz' % self.tempdir: 'da02bfbd0bf1e7d56faea15b6fe5ab1e', '%s/object-1.ring.gz' % self.tempdir: '3f1899b27abf5f2efcc67d6fae1e1c64', '%s/object-2.ring.gz' % self.tempdir: '8f0e57079b3c245d9b3d5a428e9312ee'} # We need to instantiate app after overriding the configured policies. # object-{1,2}.ring.gz should both appear as they are present on disk # and were configured as policies. app = recon.ReconMiddleware(FakeApp(), {'swift_dir': self.tempdir}) self.assertEqual(sorted(app.get_ring_md5().items()), sorted(expt_out.items())) def test_get_ring_md5_ioerror_produces_none_hash(self): # Ring files that are present but produce an IOError on read should # still produce a ringmd5 entry with a None for the hash. Note that # this is different than if an expected ring file simply doesn't exist, # in which case it is excluded altogether from the ringmd5 response. def fake_open(fn, fmode): raise IOError expt_out = {'%s/account.ring.gz' % self.tempdir: None, '%s/container.ring.gz' % self.tempdir: None, '%s/object.ring.gz' % self.tempdir: None} ringmd5 = self.app.get_ring_md5(openr=fake_open) self.assertEqual(sorted(ringmd5.items()), sorted(expt_out.items())) def test_get_ring_md5_failed_ring_hash_recovers_without_restart(self): # Ring files that are present but produce an IOError on read will # show a None hash, but if they can be read later their hash # should become available in the ringmd5 response. def fake_open(fn, fmode): raise IOError expt_out = {'%s/account.ring.gz' % self.tempdir: None, '%s/container.ring.gz' % self.tempdir: None, '%s/object.ring.gz' % self.tempdir: None} ringmd5 = self.app.get_ring_md5(openr=fake_open) self.assertEqual(sorted(ringmd5.items()), sorted(expt_out.items())) # If we fix a ring and it can be read again, its hash should then # appear using the same app instance def fake_open_objonly(fn, fmode): if 'object' not in fn: raise IOError return open(fn, fmode) expt_out = {'%s/account.ring.gz' % self.tempdir: None, '%s/container.ring.gz' % self.tempdir: None, '%s/object.ring.gz' % self.tempdir: 'da02bfbd0bf1e7d56faea15b6fe5ab1e'} ringmd5 = self.app.get_ring_md5(openr=fake_open_objonly) self.assertEqual(sorted(ringmd5.items()), sorted(expt_out.items())) @patch_policies([ StoragePolicy(0, 'stagecoach'), StoragePolicy(2, 'bike', is_default=True), StoragePolicy(3502, 'train') ]) def test_get_ring_md5_missing_ring_recovers_without_restart(self): # If a configured ring is missing when the app is instantiated, but is # later moved into place, we shouldn't need to restart object-server # for it to appear in recon. expt_out = {'%s/account.ring.gz' % self.tempdir: 'd288bdf39610e90d4f0b67fa00eeec4f', '%s/container.ring.gz' % self.tempdir: '9a5a05a8a4fbbc61123de792dbe4592d', '%s/object.ring.gz' % self.tempdir: 'da02bfbd0bf1e7d56faea15b6fe5ab1e', '%s/object-2.ring.gz' % self.tempdir: '8f0e57079b3c245d9b3d5a428e9312ee'} # We need to instantiate app after overriding the configured policies. # object-1.ring.gz should not appear as it's present but unconfigured. # object-3502.ring.gz should not appear as it's configured but not # present. 
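# Note (added clarification): the middleware computes the ring hashes when
# get_ring_md5() is called rather than once at construction time, so a ring
# file that shows up later should be reflected by the same app instance;
# that is the behaviour exercised below.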
app = recon.ReconMiddleware(FakeApp(), {'swift_dir': self.tempdir}) self.assertEqual(sorted(app.get_ring_md5().items()), sorted(expt_out.items())) # Simulate the configured policy's missing ringfile being moved into # place during runtime ringfn = 'object-3502.ring.gz' ringpath = os.path.join(self.tempdir, ringfn) ringmap = [array.array('H', [1, 2, 1, 4]), array.array('H', [4, 0, 1, 3]), array.array('H', [1, 1, 0, 3])] self._create_ring(os.path.join(self.tempdir, ringfn), ringmap, self.ring_devs, self.ring_part_shift) expt_out[ringpath] = 'acfa4b85396d2a33f361ebc07d23031d' # We should now see it in the ringmd5 response, without a restart # (using the same app instance) self.assertEqual(sorted(app.get_ring_md5().items()), sorted(expt_out.items())) @patch_policies([ StoragePolicy(0, 'stagecoach', is_default=True), StoragePolicy(2, 'bike'), StoragePolicy(2305, 'taxi') ]) def test_get_ring_md5_excludes_configured_missing_obj_rings(self): # Object rings that are configured but missing aren't meant to appear # in the ringmd5 response. expt_out = {'%s/account.ring.gz' % self.tempdir: 'd288bdf39610e90d4f0b67fa00eeec4f', '%s/container.ring.gz' % self.tempdir: '9a5a05a8a4fbbc61123de792dbe4592d', '%s/object.ring.gz' % self.tempdir: 'da02bfbd0bf1e7d56faea15b6fe5ab1e', '%s/object-2.ring.gz' % self.tempdir: '8f0e57079b3c245d9b3d5a428e9312ee'} # We need to instantiate app after overriding the configured policies. # object-1.ring.gz should not appear as it's present but unconfigured. # object-2305.ring.gz should not appear as it's configured but not # present. app = recon.ReconMiddleware(FakeApp(), {'swift_dir': self.tempdir}) self.assertEqual(sorted(app.get_ring_md5().items()), sorted(expt_out.items())) @patch_policies([ StoragePolicy(0, 'zero', is_default=True), ]) def test_get_ring_md5_excludes_unconfigured_present_obj_rings(self): # Object rings that are present but not configured in swift.conf # aren't meant to appear in the ringmd5 response. expt_out = {'%s/account.ring.gz' % self.tempdir: 'd288bdf39610e90d4f0b67fa00eeec4f', '%s/container.ring.gz' % self.tempdir: '9a5a05a8a4fbbc61123de792dbe4592d', '%s/object.ring.gz' % self.tempdir: 'da02bfbd0bf1e7d56faea15b6fe5ab1e'} # We need to instantiate app after overriding the configured policies. # object-{1,2}.ring.gz should not appear as they are present on disk # but were not configured as policies. 
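# (Object ring filenames follow the policy index: policy 0 uses the plain
# object.ring.gz while policy N uses object-N.ring.gz, which is why
# _create_rings writes object-1.ring.gz and object-2.ring.gz above.)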
app = recon.ReconMiddleware(FakeApp(), {'swift_dir': self.tempdir}) self.assertEqual(sorted(app.get_ring_md5().items()), sorted(expt_out.items())) def test_from_recon_cache(self): oart = OpenAndReadTester(['{"notneeded": 5, "testkey1": "canhazio"}']) self.app._from_recon_cache = self.real_from_cache rv = self.app._from_recon_cache(['testkey1', 'notpresentkey'], 'test.cache', openr=oart.open) self.assertEqual(oart.read_calls, [((), {})]) self.assertEqual(oart.open_calls, [(('test.cache', 'r'), {})]) self.assertEqual(rv, {'notpresentkey': None, 'testkey1': 'canhazio'}) self.app._from_recon_cache = self.fakecache.fake_from_recon_cache def test_from_recon_cache_ioerror(self): oart = self.frecon.raise_IOError self.app._from_recon_cache = self.real_from_cache rv = self.app._from_recon_cache(['testkey1', 'notpresentkey'], 'test.cache', openr=oart) self.assertEqual(rv, {'notpresentkey': None, 'testkey1': None}) self.app._from_recon_cache = self.fakecache.fake_from_recon_cache def test_from_recon_cache_valueerror(self): oart = self.frecon.raise_ValueError self.app._from_recon_cache = self.real_from_cache rv = self.app._from_recon_cache(['testkey1', 'notpresentkey'], 'test.cache', openr=oart) self.assertEqual(rv, {'notpresentkey': None, 'testkey1': None}) self.app._from_recon_cache = self.fakecache.fake_from_recon_cache def test_from_recon_cache_exception(self): oart = self.frecon.raise_Exception self.app._from_recon_cache = self.real_from_cache rv = self.app._from_recon_cache(['testkey1', 'notpresentkey'], 'test.cache', openr=oart) self.assertEqual(rv, {'notpresentkey': None, 'testkey1': None}) self.app._from_recon_cache = self.fakecache.fake_from_recon_cache def test_get_mounted(self): mounts_content = [ 'rootfs / rootfs rw 0 0', 'none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0', 'none /proc proc rw,nosuid,nodev,noexec,relatime 0 0', 'none /dev devtmpfs rw,relatime,size=248404k,nr_inodes=62101,' 'mode=755 0 0', 'none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,' 'ptmxmode=000 0 0', '/dev/disk/by-uuid/e5b143bd-9f31-49a7-b018-5e037dc59252 / ext4' ' rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0', 'none /sys/fs/fuse/connections fusectl rw,relatime 0 0', 'none /sys/kernel/debug debugfs rw,relatime 0 0', 'none /sys/kernel/security securityfs rw,relatime 0 0', 'none /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0', 'none /var/run tmpfs rw,nosuid,relatime,mode=755 0 0', 'none /var/lock tmpfs rw,nosuid,nodev,noexec,relatime 0 0', 'none /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0', '/dev/loop0 /mnt/sdb1 xfs rw,noatime,nodiratime,attr2,nobarrier,' 'logbufs=8,noquota 0 0', 'rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0', 'nfsd /proc/fs/nfsd nfsd rw,relatime 0 0', 'none /proc/fs/vmblock/mountPoint vmblock rw,relatime 0 0', ''] mounted_resp = [ {'device': 'rootfs', 'path': '/'}, {'device': 'none', 'path': '/sys'}, {'device': 'none', 'path': '/proc'}, {'device': 'none', 'path': '/dev'}, {'device': 'none', 'path': '/dev/pts'}, {'device': '/dev/disk/by-uuid/' 'e5b143bd-9f31-49a7-b018-5e037dc59252', 'path': '/'}, {'device': 'none', 'path': '/sys/fs/fuse/connections'}, {'device': 'none', 'path': '/sys/kernel/debug'}, {'device': 'none', 'path': '/sys/kernel/security'}, {'device': 'none', 'path': '/dev/shm'}, {'device': 'none', 'path': '/var/run'}, {'device': 'none', 'path': '/var/lock'}, {'device': 'none', 'path': '/lib/init/rw'}, {'device': '/dev/loop0', 'path': '/mnt/sdb1'}, {'device': 'rpc_pipefs', 'path': '/var/lib/nfs/rpc_pipefs'}, {'device': 'nfsd', 'path': 
'/proc/fs/nfsd'}, {'device': 'none', 'path': '/proc/fs/vmblock/mountPoint'}] oart = OpenAndReadTester(mounts_content) rv = self.app.get_mounted(openr=oart.open) self.assertEqual(oart.open_calls, [(('/proc/mounts', 'r'), {})]) self.assertEqual(rv, mounted_resp) def test_get_load(self): oart = OpenAndReadTester(['0.03 0.03 0.00 1/220 16306']) rv = self.app.get_load(openr=oart.open) self.assertEqual(oart.read_calls, [((), {})]) self.assertEqual(oart.open_calls, [(('/proc/loadavg', 'r'), {})]) self.assertEqual(rv, {'5m': 0.029999999999999999, '15m': 0.0, 'processes': 16306, 'tasks': '1/220', '1m': 0.029999999999999999}) def test_get_mem(self): meminfo_content = ['MemTotal: 505840 kB', 'MemFree: 26588 kB', 'Buffers: 44948 kB', 'Cached: 146376 kB', 'SwapCached: 14736 kB', 'Active: 194900 kB', 'Inactive: 193412 kB', 'Active(anon): 94208 kB', 'Inactive(anon): 102848 kB', 'Active(file): 100692 kB', 'Inactive(file): 90564 kB', 'Unevictable: 0 kB', 'Mlocked: 0 kB', 'SwapTotal: 407544 kB', 'SwapFree: 313436 kB', 'Dirty: 104 kB', 'Writeback: 0 kB', 'AnonPages: 185268 kB', 'Mapped: 9592 kB', 'Shmem: 68 kB', 'Slab: 61716 kB', 'SReclaimable: 46620 kB', 'SUnreclaim: 15096 kB', 'KernelStack: 1760 kB', 'PageTables: 8832 kB', 'NFS_Unstable: 0 kB', 'Bounce: 0 kB', 'WritebackTmp: 0 kB', 'CommitLimit: 660464 kB', 'Committed_AS: 565608 kB', 'VmallocTotal: 34359738367 kB', 'VmallocUsed: 266724 kB', 'VmallocChunk: 34359467156 kB', 'HardwareCorrupted: 0 kB', 'HugePages_Total: 0', 'HugePages_Free: 0', 'HugePages_Rsvd: 0', 'HugePages_Surp: 0', 'Hugepagesize: 2048 kB', 'DirectMap4k: 10240 kB', 'DirectMap2M: 514048 kB', ''] meminfo_resp = {'WritebackTmp': '0 kB', 'SwapTotal': '407544 kB', 'Active(anon)': '94208 kB', 'SwapFree': '313436 kB', 'DirectMap4k': '10240 kB', 'KernelStack': '1760 kB', 'MemFree': '26588 kB', 'HugePages_Rsvd': '0', 'Committed_AS': '565608 kB', 'Active(file)': '100692 kB', 'NFS_Unstable': '0 kB', 'VmallocChunk': '34359467156 kB', 'Writeback': '0 kB', 'Inactive(file)': '90564 kB', 'MemTotal': '505840 kB', 'VmallocUsed': '266724 kB', 'HugePages_Free': '0', 'AnonPages': '185268 kB', 'Active': '194900 kB', 'Inactive(anon)': '102848 kB', 'CommitLimit': '660464 kB', 'Hugepagesize': '2048 kB', 'Cached': '146376 kB', 'SwapCached': '14736 kB', 'VmallocTotal': '34359738367 kB', 'Shmem': '68 kB', 'Mapped': '9592 kB', 'SUnreclaim': '15096 kB', 'Unevictable': '0 kB', 'SReclaimable': '46620 kB', 'Mlocked': '0 kB', 'DirectMap2M': '514048 kB', 'HugePages_Surp': '0', 'Bounce': '0 kB', 'Inactive': '193412 kB', 'PageTables': '8832 kB', 'HardwareCorrupted': '0 kB', 'HugePages_Total': '0', 'Slab': '61716 kB', 'Buffers': '44948 kB', 'Dirty': '104 kB'} oart = OpenAndReadTester(meminfo_content) rv = self.app.get_mem(openr=oart.open) self.assertEqual(oart.open_calls, [(('/proc/meminfo', 'r'), {})]) self.assertEqual(rv, meminfo_resp) def test_get_async_info(self): from_cache_response = {'async_pending': 5} self.fakecache.fakeout = from_cache_response rv = self.app.get_async_info() self.assertEqual(self.fakecache.fakeout_calls, [((['async_pending'], '/var/cache/swift/object.recon'), {})]) self.assertEqual(rv, {'async_pending': 5}) def test_get_replication_info_account(self): from_cache_response = { "replication_stats": { "attempted": 1, "diff": 0, "diff_capped": 0, "empty": 0, "failure": 0, "hashmatch": 0, "failure_nodes": { "192.168.0.1": 0, "192.168.0.2": 0}, "no_change": 2, "remote_merge": 0, "remove": 0, "rsync": 0, "start": 1333044050.855202, "success": 2, "ts_repl": 0}, "replication_time": 0.2615511417388916, 
"replication_last": 1357969645.25} self.fakecache.fakeout = from_cache_response rv = self.app.get_replication_info('account') self.assertEqual(self.fakecache.fakeout_calls, [((['replication_time', 'replication_stats', 'replication_last'], '/var/cache/swift/account.recon'), {})]) self.assertEqual(rv, { "replication_stats": { "attempted": 1, "diff": 0, "diff_capped": 0, "empty": 0, "failure": 0, "hashmatch": 0, "failure_nodes": { "192.168.0.1": 0, "192.168.0.2": 0}, "no_change": 2, "remote_merge": 0, "remove": 0, "rsync": 0, "start": 1333044050.855202, "success": 2, "ts_repl": 0}, "replication_time": 0.2615511417388916, "replication_last": 1357969645.25}) def test_get_replication_info_container(self): from_cache_response = { "replication_time": 200.0, "replication_stats": { "attempted": 179, "diff": 0, "diff_capped": 0, "empty": 0, "failure": 0, "hashmatch": 0, "failure_nodes": { "192.168.0.1": 0, "192.168.0.2": 0}, "no_change": 358, "remote_merge": 0, "remove": 0, "rsync": 0, "start": 5.5, "success": 358, "ts_repl": 0}, "replication_last": 1357969645.25} self.fakecache.fakeout_calls = [] self.fakecache.fakeout = from_cache_response rv = self.app.get_replication_info('container') self.assertEqual(self.fakecache.fakeout_calls, [((['replication_time', 'replication_stats', 'replication_last'], '/var/cache/swift/container.recon'), {})]) self.assertEqual(rv, { "replication_time": 200.0, "replication_stats": { "attempted": 179, "diff": 0, "diff_capped": 0, "empty": 0, "failure": 0, "hashmatch": 0, "failure_nodes": { "192.168.0.1": 0, "192.168.0.2": 0}, "no_change": 358, "remote_merge": 0, "remove": 0, "rsync": 0, "start": 5.5, "success": 358, "ts_repl": 0}, "replication_last": 1357969645.25}) def test_get_replication_object(self): from_cache_response = { "replication_time": 0.2615511417388916, "replication_stats": { "attempted": 179, "failure": 0, "hashmatch": 0, "failure_nodes": { "192.168.0.1": 0, "192.168.0.2": 0}, "remove": 0, "rsync": 0, "start": 1333044050.855202, "success": 358}, "replication_last": 1357969645.25, "object_replication_time": 0.2615511417388916, "object_replication_last": 1357969645.25} self.fakecache.fakeout_calls = [] self.fakecache.fakeout = from_cache_response rv = self.app.get_replication_info('object') self.assertEqual(self.fakecache.fakeout_calls, [((['replication_time', 'replication_stats', 'replication_last', 'object_replication_time', 'object_replication_last'], '/var/cache/swift/object.recon'), {})]) self.assertEqual(rv, { "replication_time": 0.2615511417388916, "replication_stats": { "attempted": 179, "failure": 0, "hashmatch": 0, "failure_nodes": { "192.168.0.1": 0, "192.168.0.2": 0}, "remove": 0, "rsync": 0, "start": 1333044050.855202, "success": 358}, "replication_last": 1357969645.25, "object_replication_time": 0.2615511417388916, "object_replication_last": 1357969645.25}) def test_get_replication_info_unrecognized(self): rv = self.app.get_replication_info('unrecognized_recon_type') self.assertIsNone(rv) def test_get_updater_info_container(self): from_cache_response = {"container_updater_sweep": 18.476239919662476} self.fakecache.fakeout_calls = [] self.fakecache.fakeout = from_cache_response rv = self.app.get_updater_info('container') self.assertEqual(self.fakecache.fakeout_calls, [((['container_updater_sweep'], '/var/cache/swift/container.recon'), {})]) self.assertEqual(rv, {"container_updater_sweep": 18.476239919662476}) def test_get_updater_info_object(self): from_cache_response = {"object_updater_sweep": 0.79848217964172363} self.fakecache.fakeout_calls = 
[] self.fakecache.fakeout = from_cache_response rv = self.app.get_updater_info('object') self.assertEqual(self.fakecache.fakeout_calls, [((['object_updater_sweep'], '/var/cache/swift/object.recon'), {})]) self.assertEqual(rv, {"object_updater_sweep": 0.79848217964172363}) def test_get_updater_info_unrecognized(self): rv = self.app.get_updater_info('unrecognized_recon_type') self.assertIsNone(rv) def test_get_expirer_info_object(self): from_cache_response = {'object_expiration_pass': 0.79848217964172363, 'expired_last_pass': 99} self.fakecache.fakeout_calls = [] self.fakecache.fakeout = from_cache_response rv = self.app.get_expirer_info('object') self.assertEqual(self.fakecache.fakeout_calls, [((['object_expiration_pass', 'expired_last_pass'], '/var/cache/swift/object.recon'), {})]) self.assertEqual(rv, from_cache_response) def test_get_auditor_info_account(self): from_cache_response = {"account_auditor_pass_completed": 0.24, "account_audits_failed": 0, "account_audits_passed": 6, "account_audits_since": "1333145374.1373529"} self.fakecache.fakeout_calls = [] self.fakecache.fakeout = from_cache_response rv = self.app.get_auditor_info('account') self.assertEqual(self.fakecache.fakeout_calls, [((['account_audits_passed', 'account_auditor_pass_completed', 'account_audits_since', 'account_audits_failed'], '/var/cache/swift/account.recon'), {})]) self.assertEqual(rv, {"account_auditor_pass_completed": 0.24, "account_audits_failed": 0, "account_audits_passed": 6, "account_audits_since": "1333145374.1373529"}) def test_get_auditor_info_container(self): from_cache_response = {"container_auditor_pass_completed": 0.24, "container_audits_failed": 0, "container_audits_passed": 6, "container_audits_since": "1333145374.1373529"} self.fakecache.fakeout_calls = [] self.fakecache.fakeout = from_cache_response rv = self.app.get_auditor_info('container') self.assertEqual(self.fakecache.fakeout_calls, [((['container_audits_passed', 'container_auditor_pass_completed', 'container_audits_since', 'container_audits_failed'], '/var/cache/swift/container.recon'), {})]) self.assertEqual(rv, {"container_auditor_pass_completed": 0.24, "container_audits_failed": 0, "container_audits_passed": 6, "container_audits_since": "1333145374.1373529"}) def test_get_auditor_info_object(self): from_cache_response = { "object_auditor_stats_ALL": { "audit_time": 115.14418768882751, "bytes_processed": 234660, "completed": 115.4512460231781, "errors": 0, "files_processed": 2310, "quarantined": 0}, "object_auditor_stats_ZBF": { "audit_time": 45.877294063568115, "bytes_processed": 0, "completed": 46.181446075439453, "errors": 0, "files_processed": 2310, "quarantined": 0}} self.fakecache.fakeout_calls = [] self.fakecache.fakeout = from_cache_response rv = self.app.get_auditor_info('object') self.assertEqual(self.fakecache.fakeout_calls, [((['object_auditor_stats_ALL', 'object_auditor_stats_ZBF'], '/var/cache/swift/object.recon'), {})]) self.assertEqual(rv, { "object_auditor_stats_ALL": { "audit_time": 115.14418768882751, "bytes_processed": 234660, "completed": 115.4512460231781, "errors": 0, "files_processed": 2310, "quarantined": 0}, "object_auditor_stats_ZBF": { "audit_time": 45.877294063568115, "bytes_processed": 0, "completed": 46.181446075439453, "errors": 0, "files_processed": 2310, "quarantined": 0}}) def test_get_auditor_info_object_parallel_once(self): from_cache_response = { "object_auditor_stats_ALL": { 'disk1': { "audit_time": 115.14418768882751, "bytes_processed": 234660, "completed": 115.4512460231781, "errors": 0, 
"files_processed": 2310, "quarantined": 0}, 'disk2': { "audit_time": 115, "bytes_processed": 234660, "completed": 115, "errors": 0, "files_processed": 2310, "quarantined": 0}}, "object_auditor_stats_ZBF": {'disk1disk2': { "audit_time": 45.877294063568115, "bytes_processed": 0, "completed": 46.181446075439453, "errors": 0, "files_processed": 2310, "quarantined": 0}}} self.fakecache.fakeout_calls = [] self.fakecache.fakeout = from_cache_response rv = self.app.get_auditor_info('object') self.assertEqual(self.fakecache.fakeout_calls, [((['object_auditor_stats_ALL', 'object_auditor_stats_ZBF'], '/var/cache/swift/object.recon'), {})]) self.assertEqual(rv, { "object_auditor_stats_ALL": { 'disk1': { "audit_time": 115.14418768882751, "bytes_processed": 234660, "completed": 115.4512460231781, "errors": 0, "files_processed": 2310, "quarantined": 0}, 'disk2': { "audit_time": 115, "bytes_processed": 234660, "completed": 115, "errors": 0, "files_processed": 2310, "quarantined": 0}}, "object_auditor_stats_ZBF": {'disk1disk2': { "audit_time": 45.877294063568115, "bytes_processed": 0, "completed": 46.181446075439453, "errors": 0, "files_processed": 2310, "quarantined": 0}}}) def test_get_auditor_info_unrecognized(self): rv = self.app.get_auditor_info('unrecognized_recon_type') self.assertIsNone(rv) def test_get_unmounted(self): unmounted_resp = [{'device': 'fakeone', 'mounted': False}, {'device': 'faketwo', 'mounted': False}] self.mockos.ls_output = ['fakeone', 'faketwo'] self.mockos.isdir_output = True self.mockos.ismount_output = False rv = self.app.get_unmounted() self.assertEqual(self.mockos.listdir_calls, [(('/srv/node',), {})]) self.assertEqual(self.mockos.isdir_calls, [(('/srv/node/fakeone',), {}), (('/srv/node/faketwo',), {})]) self.assertEqual(rv, unmounted_resp) def test_get_unmounted_excludes_files(self): unmounted_resp = [] self.mockos.ls_output = ['somerando.log'] self.mockos.isdir_output = False self.mockos.ismount_output = False rv = self.app.get_unmounted() self.assertEqual(self.mockos.listdir_calls, [(('/srv/node',), {})]) self.assertEqual(self.mockos.isdir_calls, [(('/srv/node/somerando.log',), {})]) self.assertEqual(rv, unmounted_resp) def test_get_unmounted_all_mounted(self): unmounted_resp = [] self.mockos.ls_output = ['fakeone', 'faketwo'] self.mockos.isdir_output = True self.mockos.ismount_output = True rv = self.app.get_unmounted() self.assertEqual(self.mockos.listdir_calls, [(('/srv/node',), {})]) self.assertEqual(self.mockos.isdir_calls, [(('/srv/node/fakeone',), {}), (('/srv/node/faketwo',), {})]) self.assertEqual(rv, unmounted_resp) def test_get_unmounted_checkmount_fail(self): unmounted_resp = [{'device': 'fakeone', 'mounted': 'brokendrive'}] self.mockos.ls_output = ['fakeone'] self.mockos.isdir_output = True self.mockos.ismount_output = OSError('brokendrive') rv = self.app.get_unmounted() self.assertEqual(self.mockos.listdir_calls, [(('/srv/node',), {})]) self.assertEqual(self.mockos.isdir_calls, [(('/srv/node/fakeone',), {})]) self.assertEqual(self.mockos.ismount_calls, [(('/srv/node/fakeone',), {})]) self.assertEqual(rv, unmounted_resp) def test_get_unmounted_no_mounts(self): def fake_checkmount_true(*args): return True unmounted_resp = [] self.mockos.ls_output = [] self.mockos.isdir_output = False self.mockos.ismount_output = False rv = self.app.get_unmounted() self.assertEqual(self.mockos.listdir_calls, [(('/srv/node',), {})]) self.assertEqual(self.mockos.isdir_calls, []) self.assertEqual(rv, unmounted_resp) def test_get_diskusage(self): # 
posix.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=1963185, # f_bfree=1113075, f_bavail=1013351, # f_files=498736, # f_ffree=397839, f_favail=397839, f_flag=0, # f_namemax=255) statvfs_content = (4096, 4096, 1963185, 1113075, 1013351, 498736, 397839, 397839, 0, 255) du_resp = [{'device': 'canhazdrive1', 'avail': 4150685696, 'mounted': True, 'used': 3890520064, 'size': 8041205760}] self.mockos.ls_output = ['canhazdrive1'] self.mockos.isdir_output = True self.mockos.statvfs_output = statvfs_content self.mockos.ismount_output = True rv = self.app.get_diskusage() self.assertEqual(self.mockos.listdir_calls, [(('/srv/node',), {})]) self.assertEqual(self.mockos.isdir_calls, [(('/srv/node/canhazdrive1',), {})]) self.assertEqual(self.mockos.statvfs_calls, [(('/srv/node/canhazdrive1',), {})]) self.assertEqual(rv, du_resp) def test_get_diskusage_excludes_files(self): du_resp = [] self.mockos.ls_output = ['somerando.log'] self.mockos.isdir_output = False rv = self.app.get_diskusage() self.assertEqual(self.mockos.isdir_calls, [(('/srv/node/somerando.log',), {})]) self.assertEqual(self.mockos.statvfs_calls, []) self.assertEqual(rv, du_resp) def test_get_diskusage_checkmount_fail(self): du_resp = [{'device': 'canhazdrive1', 'avail': '', 'mounted': 'brokendrive', 'used': '', 'size': ''}] self.mockos.ls_output = ['canhazdrive1'] self.mockos.isdir_output = True self.mockos.ismount_output = OSError('brokendrive') rv = self.app.get_diskusage() self.assertEqual(self.mockos.listdir_calls, [(('/srv/node',), {})]) self.assertEqual(self.mockos.isdir_calls, [(('/srv/node/canhazdrive1',), {})]) self.assertEqual(self.mockos.ismount_calls, [(('/srv/node/canhazdrive1',), {})]) self.assertEqual(rv, du_resp) @mock.patch("swift.common.middleware.recon.check_mount", fake_check_mount) def test_get_diskusage_oserror(self): du_resp = [{'device': 'canhazdrive1', 'avail': '', 'mounted': 'Input/Output Error', 'used': '', 'size': ''}] self.mockos.ls_output = ['canhazdrive1'] self.mockos.isdir_output = True rv = self.app.get_diskusage() self.assertEqual(rv, du_resp) def test_get_quarantine_count(self): dirs = [['sda'], ['accounts', 'containers', 'objects', 'objects-1']] self.mockos.ismount_output = True def fake_lstat(*args, **kwargs): # posix.lstat_result(st_mode=1, st_ino=2, st_dev=3, st_nlink=4, # st_uid=5, st_gid=6, st_size=7, st_atime=8, # st_mtime=9, st_ctime=10) return stat_result((1, 2, 3, 4, 5, 6, 7, 8, 9, 10)) def fake_exists(*args, **kwargs): return True def fake_listdir(*args, **kwargs): return dirs.pop(0) with mock.patch("os.lstat", fake_lstat): with mock.patch("os.path.exists", fake_exists): with mock.patch("os.listdir", fake_listdir): rv = self.app.get_quarantine_count() self.assertEqual(rv, {'objects': 4, 'accounts': 2, 'policies': {'1': {'objects': 2}, '0': {'objects': 2}}, 'containers': 2}) def test_get_socket_info(self): sockstat_content = ['sockets: used 271', 'TCP: inuse 30 orphan 0 tw 0 alloc 31 mem 0', 'UDP: inuse 16 mem 4', 'UDPLITE: inuse 0', 'RAW: inuse 0', 'FRAG: inuse 0 memory 0', ''] oart = OpenAndReadTester(sockstat_content) self.app.get_socket_info(openr=oart.open) self.assertEqual(oart.open_calls, [ (('/proc/net/sockstat', 'r'), {}), (('/proc/net/sockstat6', 'r'), {})]) def test_get_driveaudit_info(self): from_cache_response = {'drive_audit_errors': 7} self.fakecache.fakeout = from_cache_response rv = self.app.get_driveaudit_error() self.assertEqual(self.fakecache.fakeout_calls, [((['drive_audit_errors'], '/var/cache/swift/drive.recon'), {})]) self.assertEqual(rv, {'drive_audit_errors': 7}) 
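# For reference, the figures asserted in test_get_diskusage above follow
# directly from the fake statvfs tuple:
#   size  = f_blocks * f_frsize = 1963185 * 4096 = 8041205760
#   avail = f_bavail * f_frsize = 1013351 * 4096 = 4150685696
#   used  = size - avail        =                  3890520064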
def test_get_time(self): def fake_time(): return 1430000000.0 with mock.patch("time.time", fake_time): now = fake_time() rv = self.app.get_time() self.assertEqual(rv, now) class TestReconMiddleware(unittest.TestCase): def fake_list(self, path): return ['a', 'b'] def setUp(self): self.frecon = FakeRecon() self.real_listdir = os.listdir os.listdir = self.fake_list self.app = recon.ReconMiddleware(FakeApp(), {'object_recon': "true"}) self.real_app_get_device_info = self.app.get_device_info self.real_app_get_swift_conf_md5 = self.app.get_swift_conf_md5 os.listdir = self.real_listdir # self.app.object_recon = True self.app.get_mem = self.frecon.fake_mem self.app.get_load = self.frecon.fake_load self.app.get_async_info = self.frecon.fake_async self.app.get_device_info = self.frecon.fake_get_device_info self.app.get_replication_info = self.frecon.fake_replication self.app.get_auditor_info = self.frecon.fake_auditor self.app.get_updater_info = self.frecon.fake_updater self.app.get_expirer_info = self.frecon.fake_expirer self.app.get_mounted = self.frecon.fake_mounted self.app.get_unmounted = self.frecon.fake_unmounted self.app.get_diskusage = self.frecon.fake_diskusage self.app.get_ring_md5 = self.frecon.fake_ringmd5 self.app.get_swift_conf_md5 = self.frecon.fake_swiftconfmd5 self.app.get_quarantine_count = self.frecon.fake_quarantined self.app.get_socket_info = self.frecon.fake_sockstat self.app.get_driveaudit_error = self.frecon.fake_driveaudit self.app.get_time = self.frecon.fake_time def test_recon_get_mem(self): get_mem_resp = ['{"memtest": "1"}'] req = Request.blank('/recon/mem', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_mem_resp) def test_recon_get_version(self): req = Request.blank('/recon/version', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, [utils.json.dumps({'version': swiftver})]) def test_recon_get_load(self): get_load_resp = ['{"loadtest": "1"}'] req = Request.blank('/recon/load', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_load_resp) def test_recon_get_async(self): get_async_resp = ['{"asynctest": "1"}'] req = Request.blank('/recon/async', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_async_resp) def test_get_device_info(self): get_device_resp = ['{"/srv/1/node": ["sdb1"]}'] req = Request.blank('/recon/devices', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_device_resp) def test_recon_get_replication_notype(self): get_replication_resp = ['{"replicationtest": "1"}'] req = Request.blank('/recon/replication', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_replication_resp) self.assertEqual(self.frecon.fake_replication_rtype, 'object') self.frecon.fake_replication_rtype = None def test_recon_get_replication_all(self): get_replication_resp = ['{"replicationtest": "1"}'] # test account req = Request.blank('/recon/replication/account', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_replication_resp) self.assertEqual(self.frecon.fake_replication_rtype, 'account') self.frecon.fake_replication_rtype = None # test container req = Request.blank('/recon/replication/container', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) 
self.assertEqual(resp, get_replication_resp) self.assertEqual(self.frecon.fake_replication_rtype, 'container') self.frecon.fake_replication_rtype = None # test object req = Request.blank('/recon/replication/object', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_replication_resp) self.assertEqual(self.frecon.fake_replication_rtype, 'object') self.frecon.fake_replication_rtype = None def test_recon_get_auditor_invalid(self): get_auditor_resp = ['Invalid path: /recon/auditor/invalid'] req = Request.blank('/recon/auditor/invalid', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_auditor_resp) def test_recon_get_auditor_notype(self): get_auditor_resp = ['Invalid path: /recon/auditor'] req = Request.blank('/recon/auditor', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_auditor_resp) def test_recon_get_auditor_all(self): get_auditor_resp = ['{"auditortest": "1"}'] req = Request.blank('/recon/auditor/account', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_auditor_resp) self.assertEqual(self.frecon.fake_auditor_rtype, 'account') self.frecon.fake_auditor_rtype = None req = Request.blank('/recon/auditor/container', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_auditor_resp) self.assertEqual(self.frecon.fake_auditor_rtype, 'container') self.frecon.fake_auditor_rtype = None req = Request.blank('/recon/auditor/object', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_auditor_resp) self.assertEqual(self.frecon.fake_auditor_rtype, 'object') self.frecon.fake_auditor_rtype = None def test_recon_get_updater_invalid(self): get_updater_resp = ['Invalid path: /recon/updater/invalid'] req = Request.blank('/recon/updater/invalid', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_updater_resp) def test_recon_get_updater_notype(self): get_updater_resp = ['Invalid path: /recon/updater'] req = Request.blank('/recon/updater', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_updater_resp) def test_recon_get_updater(self): get_updater_resp = ['{"updatertest": "1"}'] req = Request.blank('/recon/updater/container', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(self.frecon.fake_updater_rtype, 'container') self.frecon.fake_updater_rtype = None self.assertEqual(resp, get_updater_resp) req = Request.blank('/recon/updater/object', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_updater_resp) self.assertEqual(self.frecon.fake_updater_rtype, 'object') self.frecon.fake_updater_rtype = None def test_recon_get_expirer_invalid(self): get_updater_resp = ['Invalid path: /recon/expirer/invalid'] req = Request.blank('/recon/expirer/invalid', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_updater_resp) def test_recon_get_expirer_notype(self): get_updater_resp = ['Invalid path: /recon/expirer'] req = Request.blank('/recon/expirer', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_updater_resp) def test_recon_get_expirer_object(self): 
get_expirer_resp = ['{"expirertest": "1"}'] req = Request.blank('/recon/expirer/object', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_expirer_resp) self.assertEqual(self.frecon.fake_expirer_rtype, 'object') self.frecon.fake_updater_rtype = None def test_recon_get_mounted(self): get_mounted_resp = ['{"mountedtest": "1"}'] req = Request.blank('/recon/mounted', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_mounted_resp) def test_recon_get_unmounted(self): get_unmounted_resp = ['{"unmountedtest": "1"}'] self.app.get_unmounted = self.frecon.fake_unmounted req = Request.blank('/recon/unmounted', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_unmounted_resp) def test_recon_get_unmounted_empty(self): get_unmounted_resp = '[]' self.app.get_unmounted = self.frecon.fake_unmounted_empty req = Request.blank('/recon/unmounted', environ={'REQUEST_METHOD': 'GET'}) resp = ''.join(self.app(req.environ, start_response)) self.assertEqual(resp, get_unmounted_resp) def test_recon_get_diskusage(self): get_diskusage_resp = ['{"diskusagetest": "1"}'] req = Request.blank('/recon/diskusage', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_diskusage_resp) def test_recon_get_ringmd5(self): get_ringmd5_resp = ['{"ringmd5test": "1"}'] req = Request.blank('/recon/ringmd5', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_ringmd5_resp) def test_recon_get_swiftconfmd5(self): get_swiftconfmd5_resp = ['{"/etc/swift/swift.conf": "abcdef"}'] req = Request.blank('/recon/swiftconfmd5', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_swiftconfmd5_resp) def test_recon_get_quarantined(self): get_quarantined_resp = ['{"quarantinedtest": "1"}'] req = Request.blank('/recon/quarantined', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_quarantined_resp) def test_recon_get_sockstat(self): get_sockstat_resp = ['{"sockstattest": "1"}'] req = Request.blank('/recon/sockstat', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_sockstat_resp) def test_recon_invalid_path(self): req = Request.blank('/recon/invalid', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, ['Invalid path: /recon/invalid']) def test_no_content(self): self.app.get_load = self.frecon.nocontent req = Request.blank('/recon/load', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, ['Internal server error.']) def test_recon_pass(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, 'FAKE APP') def test_recon_get_driveaudit(self): get_driveaudit_resp = ['{"driveaudittest": "1"}'] req = Request.blank('/recon/driveaudit', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_driveaudit_resp) def test_recon_get_time(self): get_time_resp = ['{"timetest": "1"}'] req = Request.blank('/recon/time', environ={'REQUEST_METHOD': 'GET'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, get_time_resp) def test_get_device_info_function(self): """Test get_device_info 
function call success""" resp = self.app.get_device_info() self.assertEqual(['sdb1'], resp['/srv/1/node']) def test_get_device_info_fail(self): """Test get_device_info failure by failing os.listdir""" os.listdir = fail_os_listdir resp = self.real_app_get_device_info() os.listdir = self.real_listdir device_path = resp.keys()[0] self.assertIsNone(resp[device_path]) def test_get_swift_conf_md5(self): """Test get_swift_conf_md5 success""" resp = self.app.get_swift_conf_md5() self.assertEqual('abcdef', resp['/etc/swift/swift.conf']) def test_get_swift_conf_md5_fail(self): """Test get_swift_conf_md5 failure by failing file open""" resp = self.real_app_get_swift_conf_md5(fail_io_open) self.assertIsNone(resp['/etc/swift/swift.conf']) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_domain_remap.py0000664000567000056710000001763013024044352025431 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from swift.common.swob import Request from swift.common.middleware import domain_remap from swift.common import utils class FakeApp(object): def __call__(self, env, start_response): return env['PATH_INFO'] def start_response(*args): pass class TestDomainRemap(unittest.TestCase): def setUp(self): self.app = domain_remap.DomainRemapMiddleware(FakeApp(), {}) def test_domain_remap_passthrough(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'SERVER_NAME': 'example.com'}, headers={'Host': None}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'example.com:8080'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/') def test_domain_remap_account(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET', 'SERVER_NAME': 'AUTH_a.example.com'}, headers={'Host': None}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/AUTH_a') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'AUTH_a.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/AUTH_a') req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'AUTH-uuid.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/AUTH_uuid') def test_domain_remap_account_container(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.AUTH_a.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/AUTH_a/c') def test_domain_remap_extra_subdomains(self): req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'x.y.c.AUTH_a.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, ['Bad domain in host header']) def 
test_domain_remap_account_with_path_root(self): req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'AUTH_a.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/AUTH_a') def test_domain_remap_account_container_with_path_root(self): req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.AUTH_a.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/AUTH_a/c') def test_domain_remap_account_container_with_path(self): req = Request.blank('/obj', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.AUTH_a.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/AUTH_a/c/obj') def test_domain_remap_account_container_with_path_root_and_path(self): req = Request.blank('/v1/obj', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.AUTH_a.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/AUTH_a/c/obj') def test_domain_remap_account_matching_ending_not_domain(self): req = Request.blank('/dontchange', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.aexample.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/dontchange') def test_domain_remap_configured_with_empty_storage_domain(self): self.app = domain_remap.DomainRemapMiddleware(FakeApp(), {'storage_domain': ''}) req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.AUTH_a.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/test') def test_domain_remap_configured_with_prefixes(self): conf = {'reseller_prefixes': 'PREFIX'} self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf) req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.prefix_uuid.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/PREFIX_uuid/c/test') def test_domain_remap_configured_with_bad_prefixes(self): conf = {'reseller_prefixes': 'UNKNOWN'} self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf) req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.prefix_uuid.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/test') def test_domain_remap_configured_with_no_prefixes(self): conf = {'reseller_prefixes': ''} self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf) req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'c.uuid.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/uuid/c/test') def test_domain_remap_add_prefix(self): conf = {'default_reseller_prefix': 'FOO'} self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf) req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'uuid.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/FOO_uuid/test') def test_domain_remap_add_prefix_already_there(self): conf = {'default_reseller_prefix': 'AUTH'} self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf) req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'}, headers={'Host': 'auth-uuid.example.com'}) resp = self.app(req.environ, start_response) self.assertEqual(resp, '/v1/AUTH_uuid/test') class TestSwiftInfo(unittest.TestCase): def setUp(self): utils._swift_info = {} utils._swift_admin_info = {} def test_registered_defaults(self): domain_remap.filter_factory({}) swift_info = utils.get_swift_info() 
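# filter_factory() registers the middleware in the swift info mapping used
# by the /info endpoint, so the 'domain_remap' key is expected here even
# though the filter was built with an empty config.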
self.assertTrue('domain_remap' in swift_info) self.assertTrue( swift_info['domain_remap'].get('default_reseller_prefix') is None) def test_registered_nondefaults(self): domain_remap.filter_factory({'default_reseller_prefix': 'cupcake'}) swift_info = utils.get_swift_info() self.assertTrue('domain_remap' in swift_info) self.assertEqual( swift_info['domain_remap'].get('default_reseller_prefix'), 'cupcake') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/middleware/test_name_check.py0000664000567000056710000001041613024044352025046 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. ''' Unit tests for Name_check filter Created on February 29, 2012 @author: eamonn-otoole ''' import unittest from swift.common.swob import Request, Response from swift.common.middleware import name_check MAX_LENGTH = 255 FORBIDDEN_CHARS = '\'\"<>`' FORBIDDEN_REGEXP = "/\./|/\.\./|/\.$|/\.\.$" class FakeApp(object): def __call__(self, env, start_response): return Response(body="OK")(env, start_response) class TestNameCheckMiddleware(unittest.TestCase): def setUp(self): self.conf = {'maximum_length': MAX_LENGTH, 'forbidden_chars': FORBIDDEN_CHARS, 'forbidden_regexp': FORBIDDEN_REGEXP} self.test_check = name_check.filter_factory(self.conf)(FakeApp()) def test_valid_length_and_character(self): path = '/V1.0/' + 'c' * (MAX_LENGTH - 6) resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'} ).get_response(self.test_check) self.assertEqual(resp.body, 'OK') def test_invalid_character(self): for c in self.conf['forbidden_chars']: path = '/V1.0/1234' + c + '5' resp = Request.blank( path, environ={'REQUEST_METHOD': 'PUT'}).get_response( self.test_check) self.assertEqual( resp.body, ("Object/Container/Account name contains forbidden chars " "from %s" % self.conf['forbidden_chars'])) self.assertEqual(resp.status_int, 400) def test_maximum_length_from_config(self): # test invalid length orig_test_check = self.test_check conf = {'maximum_length': "500"} self.test_check = name_check.filter_factory(conf)(FakeApp()) path = '/V1.0/a/c' + 'o' * (500 - 8) resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'} ).get_response(self.test_check) self.assertEqual( resp.body, ("Object/Container/Account name longer than the allowed " "maximum 500")) self.assertEqual(resp.status_int, 400) # test valid length path = '/V1.0/a/c' + 'o' * (MAX_LENGTH - 10) resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'} ).get_response(self.test_check) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, 'OK') self.test_check = orig_test_check def test_invalid_length(self): path = '/V1.0/' + 'c' * (MAX_LENGTH - 5) resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'} ).get_response(self.test_check) self.assertEqual( resp.body, ("Object/Container/Account name longer than the allowed maximum %s" % self.conf['maximum_length'])) self.assertEqual(resp.status_int, 400) def test_invalid_regexp(self): for s in ['/.', '/..', '/./foo', '/../foo']: 
path = '/V1.0/' + s resp = Request.blank( path, environ={'REQUEST_METHOD': 'PUT'}).get_response( self.test_check) self.assertEqual( resp.body, ("Object/Container/Account name contains a forbidden " "substring from regular expression %s" % self.conf['forbidden_regexp'])) self.assertEqual(resp.status_int, 400) def test_valid_regexp(self): for s in ['/...', '/.\.', '/foo']: path = '/V1.0/' + s resp = Request.blank( path, environ={'REQUEST_METHOD': 'PUT'}).get_response( self.test_check) self.assertEqual(resp.body, 'OK') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/ring/0000775000567000056710000000000013024044470020201 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/common/ring/__init__.py0000664000567000056710000000000013024044352022277 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/common/ring/test_ring.py0000664000567000056710000011520713024044354022560 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import array import six.moves.cPickle as pickle import os import unittest import stat from contextlib import closing from gzip import GzipFile from tempfile import mkdtemp from shutil import rmtree from time import sleep, time import random from six.moves import range from swift.common import ring, utils from swift.common.ring import utils as ring_utils class TestRingBase(unittest.TestCase): def setUp(self): self._orig_hash_suffix = utils.HASH_PATH_SUFFIX self._orig_hash_prefix = utils.HASH_PATH_PREFIX utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = '' def tearDown(self): utils.HASH_PATH_SUFFIX = self._orig_hash_suffix utils.HASH_PATH_PREFIX = self._orig_hash_prefix class TestRingData(unittest.TestCase): def setUp(self): self.testdir = os.path.join(os.path.dirname(__file__), 'ring_data') rmtree(self.testdir, ignore_errors=1) os.mkdir(self.testdir) def tearDown(self): rmtree(self.testdir, ignore_errors=1) def assert_ring_data_equal(self, rd_expected, rd_got): self.assertEqual(rd_expected._replica2part2dev_id, rd_got._replica2part2dev_id) self.assertEqual(rd_expected.devs, rd_got.devs) self.assertEqual(rd_expected._part_shift, rd_got._part_shift) def test_attrs(self): r2p2d = [[0, 1, 0, 1], [0, 1, 0, 1]] d = [{'id': 0, 'zone': 0, 'region': 0, 'ip': '10.1.1.0', 'port': 7000}, {'id': 1, 'zone': 1, 'region': 1, 'ip': '10.1.1.1', 'port': 7000}] s = 30 rd = ring.RingData(r2p2d, d, s) self.assertEqual(rd._replica2part2dev_id, r2p2d) self.assertEqual(rd.devs, d) self.assertEqual(rd._part_shift, s) def test_can_load_pickled_ring_data(self): rd = ring.RingData( [[0, 1, 0, 1], [0, 1, 0, 1]], [{'id': 0, 'zone': 0, 'ip': '10.1.1.0', 'port': 7000}, {'id': 1, 'zone': 1, 'ip': '10.1.1.1', 'port': 7000}], 30) ring_fname = os.path.join(self.testdir, 'foo.ring.gz') for p in range(pickle.HIGHEST_PROTOCOL): with closing(GzipFile(ring_fname, 'wb')) as f: pickle.dump(rd, f, protocol=p) meta_only = ring.RingData.load(ring_fname, metadata_only=True) self.assertEqual([ {'id': 0, 'zone': 0, 'region': 
1, 'ip': '10.1.1.0', 'port': 7000}, {'id': 1, 'zone': 1, 'region': 1, 'ip': '10.1.1.1', 'port': 7000}, ], meta_only.devs) # Pickled rings can't load only metadata, so you get it all self.assert_ring_data_equal(rd, meta_only) ring_data = ring.RingData.load(ring_fname) self.assert_ring_data_equal(rd, ring_data) def test_roundtrip_serialization(self): ring_fname = os.path.join(self.testdir, 'foo.ring.gz') rd = ring.RingData( [array.array('H', [0, 1, 0, 1]), array.array('H', [0, 1, 0, 1])], [{'id': 0, 'zone': 0}, {'id': 1, 'zone': 1}], 30) rd.save(ring_fname) meta_only = ring.RingData.load(ring_fname, metadata_only=True) self.assertEqual([ {'id': 0, 'zone': 0, 'region': 1}, {'id': 1, 'zone': 1, 'region': 1}, ], meta_only.devs) self.assertEqual([], meta_only._replica2part2dev_id) rd2 = ring.RingData.load(ring_fname) self.assert_ring_data_equal(rd, rd2) def test_deterministic_serialization(self): """ Two identical rings should produce identical .gz files on disk. """ os.mkdir(os.path.join(self.testdir, '1')) os.mkdir(os.path.join(self.testdir, '2')) # These have to have the same filename (not full path, # obviously) since the filename gets encoded in the gzip data. ring_fname1 = os.path.join(self.testdir, '1', 'the.ring.gz') ring_fname2 = os.path.join(self.testdir, '2', 'the.ring.gz') rd = ring.RingData( [array.array('H', [0, 1, 0, 1]), array.array('H', [0, 1, 0, 1])], [{'id': 0, 'zone': 0}, {'id': 1, 'zone': 1}], 30) rd.save(ring_fname1) rd.save(ring_fname2) with open(ring_fname1) as ring1: with open(ring_fname2) as ring2: self.assertEqual(ring1.read(), ring2.read()) def test_permissions(self): ring_fname = os.path.join(self.testdir, 'stat.ring.gz') rd = ring.RingData( [array.array('H', [0, 1, 0, 1]), array.array('H', [0, 1, 0, 1])], [{'id': 0, 'zone': 0}, {'id': 1, 'zone': 1}], 30) rd.save(ring_fname) self.assertEqual(oct(stat.S_IMODE(os.stat(ring_fname).st_mode)), '0644') class TestRing(TestRingBase): def setUp(self): super(TestRing, self).setUp() self.testdir = mkdtemp() self.testgz = os.path.join(self.testdir, 'whatever.ring.gz') self.intended_replica2part2dev_id = [ array.array('H', [0, 1, 0, 1]), array.array('H', [0, 1, 0, 1]), array.array('H', [3, 4, 3, 4])] self.intended_devs = [{'id': 0, 'region': 0, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'replication_ip': '10.1.0.1', 'replication_port': 6066}, {'id': 1, 'region': 0, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'replication_ip': '10.1.0.2', 'replication_port': 6066}, None, {'id': 3, 'region': 0, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.1', 'port': 6000, 'replication_ip': '10.2.0.1', 'replication_port': 6066}, {'id': 4, 'region': 0, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.2', 'port': 6000, 'replication_ip': '10.2.0.1', 'replication_port': 6066}] self.intended_part_shift = 30 self.intended_reload_time = 15 ring.RingData( self.intended_replica2part2dev_id, self.intended_devs, self.intended_part_shift).save(self.testgz) self.ring = ring.Ring( self.testdir, reload_time=self.intended_reload_time, ring_name='whatever') def tearDown(self): super(TestRing, self).tearDown() rmtree(self.testdir, ignore_errors=1) def test_creation(self): self.assertEqual(self.ring._replica2part2dev_id, self.intended_replica2part2dev_id) self.assertEqual(self.ring._part_shift, self.intended_part_shift) self.assertEqual(self.ring.devs, self.intended_devs) self.assertEqual(self.ring.reload_time, self.intended_reload_time) self.assertEqual(self.ring.serialized_path, self.testgz) # test invalid endcap _orig_hash_path_suffix = 
utils.HASH_PATH_SUFFIX _orig_hash_path_prefix = utils.HASH_PATH_PREFIX _orig_swift_conf_file = utils.SWIFT_CONF_FILE try: utils.HASH_PATH_SUFFIX = '' utils.HASH_PATH_PREFIX = '' utils.SWIFT_CONF_FILE = '' self.assertRaises(SystemExit, ring.Ring, self.testdir, 'whatever') finally: utils.HASH_PATH_SUFFIX = _orig_hash_path_suffix utils.HASH_PATH_PREFIX = _orig_hash_path_prefix utils.SWIFT_CONF_FILE = _orig_swift_conf_file def test_has_changed(self): self.assertEqual(self.ring.has_changed(), False) os.utime(self.testgz, (time() + 60, time() + 60)) self.assertEqual(self.ring.has_changed(), True) def test_reload(self): os.utime(self.testgz, (time() - 300, time() - 300)) self.ring = ring.Ring(self.testdir, reload_time=0.001, ring_name='whatever') orig_mtime = self.ring._mtime self.assertEqual(len(self.ring.devs), 5) self.intended_devs.append( {'id': 3, 'region': 0, 'zone': 3, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 9876}) ring.RingData( self.intended_replica2part2dev_id, self.intended_devs, self.intended_part_shift).save(self.testgz) sleep(0.1) self.ring.get_nodes('a') self.assertEqual(len(self.ring.devs), 6) self.assertNotEqual(self.ring._mtime, orig_mtime) os.utime(self.testgz, (time() - 300, time() - 300)) self.ring = ring.Ring(self.testdir, reload_time=0.001, ring_name='whatever') orig_mtime = self.ring._mtime self.assertEqual(len(self.ring.devs), 6) self.intended_devs.append( {'id': 5, 'region': 0, 'zone': 4, 'weight': 1.0, 'ip': '10.5.5.5', 'port': 9876}) ring.RingData( self.intended_replica2part2dev_id, self.intended_devs, self.intended_part_shift).save(self.testgz) sleep(0.1) self.ring.get_part_nodes(0) self.assertEqual(len(self.ring.devs), 7) self.assertNotEqual(self.ring._mtime, orig_mtime) os.utime(self.testgz, (time() - 300, time() - 300)) self.ring = ring.Ring(self.testdir, reload_time=0.001, ring_name='whatever') orig_mtime = self.ring._mtime part, nodes = self.ring.get_nodes('a') self.assertEqual(len(self.ring.devs), 7) self.intended_devs.append( {'id': 6, 'region': 0, 'zone': 5, 'weight': 1.0, 'ip': '10.6.6.6', 'port': 6000}) ring.RingData( self.intended_replica2part2dev_id, self.intended_devs, self.intended_part_shift).save(self.testgz) sleep(0.1) next(self.ring.get_more_nodes(part)) self.assertEqual(len(self.ring.devs), 8) self.assertNotEqual(self.ring._mtime, orig_mtime) os.utime(self.testgz, (time() - 300, time() - 300)) self.ring = ring.Ring(self.testdir, reload_time=0.001, ring_name='whatever') orig_mtime = self.ring._mtime self.assertEqual(len(self.ring.devs), 8) self.intended_devs.append( {'id': 5, 'region': 0, 'zone': 4, 'weight': 1.0, 'ip': '10.5.5.5', 'port': 6000}) ring.RingData( self.intended_replica2part2dev_id, self.intended_devs, self.intended_part_shift).save(self.testgz) sleep(0.1) self.assertEqual(len(self.ring.devs), 9) self.assertNotEqual(self.ring._mtime, orig_mtime) def test_reload_without_replication(self): replication_less_devs = [{'id': 0, 'region': 0, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000}, {'id': 1, 'region': 0, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000}, None, {'id': 3, 'region': 0, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.1', 'port': 6000}, {'id': 4, 'region': 0, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.2', 'port': 6000}] intended_devs = [{'id': 0, 'region': 0, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'replication_ip': '10.1.1.1', 'replication_port': 6000}, {'id': 1, 'region': 0, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'replication_ip': '10.1.1.1', 'replication_port': 6000}, None, 
{'id': 3, 'region': 0, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.1', 'port': 6000, 'replication_ip': '10.1.2.1', 'replication_port': 6000}, {'id': 4, 'region': 0, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.2', 'port': 6000, 'replication_ip': '10.1.2.2', 'replication_port': 6000}] testgz = os.path.join(self.testdir, 'without_replication.ring.gz') ring.RingData( self.intended_replica2part2dev_id, replication_less_devs, self.intended_part_shift).save(testgz) self.ring = ring.Ring( self.testdir, reload_time=self.intended_reload_time, ring_name='without_replication') self.assertEqual(self.ring.devs, intended_devs) def test_reload_old_style_pickled_ring(self): devs = [{'id': 0, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000}, {'id': 1, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000}, None, {'id': 3, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.1', 'port': 6000}, {'id': 4, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.2', 'port': 6000}] intended_devs = [{'id': 0, 'region': 1, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'replication_ip': '10.1.1.1', 'replication_port': 6000}, {'id': 1, 'region': 1, 'zone': 0, 'weight': 1.0, 'ip': '10.1.1.1', 'port': 6000, 'replication_ip': '10.1.1.1', 'replication_port': 6000}, None, {'id': 3, 'region': 1, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.1', 'port': 6000, 'replication_ip': '10.1.2.1', 'replication_port': 6000}, {'id': 4, 'region': 1, 'zone': 2, 'weight': 1.0, 'ip': '10.1.2.2', 'port': 6000, 'replication_ip': '10.1.2.2', 'replication_port': 6000}] # simulate an old-style pickled ring testgz = os.path.join(self.testdir, 'without_replication_or_region.ring.gz') ring_data = ring.RingData(self.intended_replica2part2dev_id, devs, self.intended_part_shift) # an old-style pickled ring won't have region data for dev in ring_data.devs: if dev: del dev["region"] gz_file = GzipFile(testgz, 'wb') pickle.dump(ring_data, gz_file, protocol=2) gz_file.close() self.ring = ring.Ring( self.testdir, reload_time=self.intended_reload_time, ring_name='without_replication_or_region') self.assertEqual(self.ring.devs, intended_devs) def test_get_part(self): part1 = self.ring.get_part('a') nodes1 = self.ring.get_part_nodes(part1) part2, nodes2 = self.ring.get_nodes('a') self.assertEqual(part1, part2) self.assertEqual(nodes1, nodes2) def test_get_part_nodes(self): part, nodes = self.ring.get_nodes('a') self.assertEqual(nodes, self.ring.get_part_nodes(part)) def test_get_nodes(self): # Yes, these tests are deliberately very fragile. We want to make sure # that if someones changes the results the ring produces, they know it. 
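        # For reference: the ring saved in setUp() uses part_shift = 30, so a
        # partition here is just the top 32 - 30 = 2 bits of the 32-bit path
        # hash, which is why only partitions 0 through 3 ever show up below.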
self.assertRaises(TypeError, self.ring.get_nodes) part, nodes = self.ring.get_nodes('a') self.assertEqual(part, 0) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[0], self.intended_devs[3]])]) part, nodes = self.ring.get_nodes('a1') self.assertEqual(part, 0) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[0], self.intended_devs[3]])]) part, nodes = self.ring.get_nodes('a4') self.assertEqual(part, 1) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[1], self.intended_devs[4]])]) part, nodes = self.ring.get_nodes('aa') self.assertEqual(part, 1) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[1], self.intended_devs[4]])]) part, nodes = self.ring.get_nodes('a', 'c1') self.assertEqual(part, 0) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[0], self.intended_devs[3]])]) part, nodes = self.ring.get_nodes('a', 'c0') self.assertEqual(part, 3) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[1], self.intended_devs[4]])]) part, nodes = self.ring.get_nodes('a', 'c3') self.assertEqual(part, 2) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[0], self.intended_devs[3]])]) part, nodes = self.ring.get_nodes('a', 'c2') self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[0], self.intended_devs[3]])]) part, nodes = self.ring.get_nodes('a', 'c', 'o1') self.assertEqual(part, 1) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[1], self.intended_devs[4]])]) part, nodes = self.ring.get_nodes('a', 'c', 'o5') self.assertEqual(part, 0) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[0], self.intended_devs[3]])]) part, nodes = self.ring.get_nodes('a', 'c', 'o0') self.assertEqual(part, 0) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[0], self.intended_devs[3]])]) part, nodes = self.ring.get_nodes('a', 'c', 'o2') self.assertEqual(part, 2) self.assertEqual(nodes, [dict(node, index=i) for i, node in enumerate([self.intended_devs[0], self.intended_devs[3]])]) def add_dev_to_ring(self, new_dev): self.ring.devs.append(new_dev) self.ring._rebuild_tier_data() def test_get_more_nodes(self): # Yes, these tests are deliberately very fragile. We want to make sure # that if someone changes the results the ring produces, they know it. 
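        # For context, get_more_nodes(part) yields the handoff devices for a
        # partition - the fallback locations a ring consumer tries once the
        # primary nodes from get_nodes() are exhausted.  An illustrative
        # (hypothetical) caller looks roughly like:
        #
        #   part, primaries = a_ring.get_nodes(account, container, obj)
        #   for node in itertools.chain(primaries,
        #                               a_ring.get_more_nodes(part)):
        #       ...  # try the node; stop once enough requests succeed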
exp_part = 6 exp_devs = [71, 77, 30] exp_zones = set([6, 3, 7]) exp_handoffs = [99, 43, 94, 13, 1, 49, 60, 72, 27, 68, 78, 26, 21, 9, 51, 105, 47, 89, 65, 82, 34, 98, 38, 85, 16, 4, 59, 102, 40, 90, 20, 8, 54, 66, 80, 25, 14, 2, 50, 12, 0, 48, 70, 76, 32, 107, 45, 87, 101, 44, 93, 100, 42, 95, 106, 46, 88, 97, 37, 86, 96, 36, 84, 17, 5, 57, 63, 81, 33, 67, 79, 24, 15, 3, 58, 69, 75, 31, 61, 74, 29, 23, 10, 52, 22, 11, 53, 64, 83, 35, 62, 73, 28, 18, 6, 56, 104, 39, 91, 103, 41, 92, 19, 7, 55] exp_first_handoffs = [23, 64, 105, 102, 67, 17, 99, 65, 69, 97, 15, 17, 24, 98, 66, 65, 69, 18, 104, 105, 16, 107, 100, 15, 14, 19, 102, 105, 63, 104, 99, 12, 107, 99, 16, 105, 71, 15, 15, 63, 63, 99, 21, 68, 20, 64, 96, 21, 98, 19, 68, 99, 15, 69, 62, 100, 96, 102, 17, 62, 13, 61, 102, 105, 22, 16, 21, 18, 21, 100, 20, 16, 21, 106, 66, 106, 16, 99, 16, 22, 62, 60, 99, 69, 18, 23, 104, 98, 106, 61, 21, 23, 23, 16, 67, 71, 101, 16, 64, 66, 70, 15, 102, 63, 19, 98, 18, 106, 101, 100, 62, 63, 98, 18, 13, 97, 23, 22, 100, 13, 14, 67, 96, 14, 105, 97, 71, 64, 96, 22, 65, 66, 98, 19, 105, 98, 97, 21, 15, 69, 100, 98, 106, 65, 66, 97, 62, 22, 68, 63, 61, 67, 67, 20, 105, 106, 105, 18, 71, 100, 17, 62, 60, 13, 103, 99, 101, 96, 97, 16, 60, 21, 14, 20, 12, 60, 69, 104, 65, 65, 17, 16, 67, 13, 64, 15, 16, 68, 96, 21, 104, 66, 96, 105, 58, 105, 103, 21, 96, 60, 16, 96, 21, 71, 16, 99, 101, 63, 62, 103, 18, 102, 60, 17, 19, 106, 97, 14, 99, 68, 102, 13, 70, 103, 21, 22, 19, 61, 103, 23, 104, 65, 62, 68, 16, 65, 15, 102, 102, 71, 99, 63, 67, 19, 23, 15, 69, 107, 14, 13, 64, 13, 105, 15, 98, 69] rb = ring.RingBuilder(8, 3, 1) next_dev_id = 0 for zone in range(1, 10): for server in range(1, 5): for device in range(1, 4): rb.add_dev({'id': next_dev_id, 'ip': '1.2.%d.%d' % (zone, server), 'port': 1234 + device, 'zone': zone, 'region': 0, 'weight': 1.0}) next_dev_id += 1 rb.rebalance(seed=2) rb.get_ring().save(self.testgz) r = ring.Ring(self.testdir, ring_name='whatever') # every part has the same number of handoffs part_handoff_counts = set() for part in range(r.partition_count): part_handoff_counts.add(len(list(r.get_more_nodes(part)))) self.assertEqual(part_handoff_counts, {105}) # which less the primaries - is every device in the ring self.assertEqual(len(list(rb._iter_devs())) - rb.replicas, 105) part, devs = r.get_nodes('a', 'c', 'o') primary_zones = set([d['zone'] for d in devs]) self.assertEqual(part, exp_part) self.assertEqual([d['id'] for d in devs], exp_devs) self.assertEqual(primary_zones, exp_zones) devs = list(r.get_more_nodes(part)) self.assertEqual(len(devs), len(exp_handoffs)) dev_ids = [d['id'] for d in devs] self.assertEqual(dev_ids, exp_handoffs) # The first 6 replicas plus the 3 primary nodes should cover all 9 # zones in this test seen_zones = set(primary_zones) seen_zones.update([d['zone'] for d in devs[:6]]) self.assertEqual(seen_zones, set(range(1, 10))) # The first handoff nodes for each partition in the ring devs = [] for part in range(r.partition_count): devs.append(next(r.get_more_nodes(part))['id']) self.assertEqual(devs, exp_first_handoffs) # Add a new device we can handoff to. 
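        # (The builder was created with min_part_hours=1, so the
        # pretend_min_part_hours_passed() call below is what allows the
        # follow-up rebalance to move partitions again so soon after the
        # previous one.)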
zone = 5 server = 0 rb.add_dev({'id': next_dev_id, 'ip': '1.2.%d.%d' % (zone, server), 'port': 1234, 'zone': zone, 'region': 0, 'weight': 1.0}) next_dev_id += 1 rb.pretend_min_part_hours_passed() num_parts_changed, _balance, _removed_dev = rb.rebalance(seed=2) rb.get_ring().save(self.testgz) r = ring.Ring(self.testdir, ring_name='whatever') # so now we expect the device list to be longer by one device part_handoff_counts = set() for part in range(r.partition_count): part_handoff_counts.add(len(list(r.get_more_nodes(part)))) self.assertEqual(part_handoff_counts, {106}) self.assertEqual(len(list(rb._iter_devs())) - rb.replicas, 106) # I don't think there's any special reason this dev goes at this index exp_handoffs.insert(27, rb.devs[-1]['id']) # We would change expectations here, but in this part only the added # device changed at all. part, devs = r.get_nodes('a', 'c', 'o') primary_zones = set([d['zone'] for d in devs]) self.assertEqual(part, exp_part) self.assertEqual([d['id'] for d in devs], exp_devs) self.assertEqual(primary_zones, exp_zones) devs = list(r.get_more_nodes(part)) dev_ids = [d['id'] for d in devs] self.assertEqual(len(dev_ids), len(exp_handoffs)) for index, dev in enumerate(dev_ids): self.assertEqual( dev, exp_handoffs[index], 'handoff differs at position %d\n%s\n%s' % ( index, dev_ids[index:], exp_handoffs[index:])) # The handoffs still cover all the non-primary zones first seen_zones = set(primary_zones) seen_zones.update([d['zone'] for d in devs[:6]]) self.assertEqual(seen_zones, set(range(1, 10))) # Change expectations for the rest of the parts devs = [] for part in range(r.partition_count): devs.append(next(r.get_more_nodes(part))['id']) changed_first_handoff = 0 for part in range(r.partition_count): if devs[part] != exp_first_handoffs[part]: changed_first_handoff += 1 exp_first_handoffs[part] = devs[part] self.assertEqual(devs, exp_first_handoffs) self.assertEqual(changed_first_handoff, num_parts_changed) # Remove a device - no need to fluff min_part_hours. 
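        # (Partitions stranded on a removed device must be reassigned no
        # matter how recently they last moved, which is presumably why no
        # pretend_min_part_hours_passed() is needed before this rebalance.)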
rb.remove_dev(0) num_parts_changed, _balance, _removed_dev = rb.rebalance(seed=1) rb.get_ring().save(self.testgz) r = ring.Ring(self.testdir, ring_name='whatever') # so now we expect the device list to be shorter by one device part_handoff_counts = set() for part in range(r.partition_count): part_handoff_counts.add(len(list(r.get_more_nodes(part)))) self.assertEqual(part_handoff_counts, {105}) self.assertEqual(len(list(rb._iter_devs())) - rb.replicas, 105) # Change expectations for our part exp_handoffs.remove(0) first_matches = 0 total_changed = 0 devs = list(d['id'] for d in r.get_more_nodes(exp_part)) for i, part in enumerate(devs): if exp_handoffs[i] != devs[i]: total_changed += 1 exp_handoffs[i] = devs[i] if not total_changed: first_matches += 1 self.assertEqual(devs, exp_handoffs) # the first 21 handoffs were the same across the rebalance self.assertEqual(first_matches, 21) # but as you dig deeper some of the differences show up self.assertEqual(total_changed, 41) # Change expectations for the rest of the parts devs = [] for part in range(r.partition_count): devs.append(next(r.get_more_nodes(part))['id']) changed_first_handoff = 0 for part in range(r.partition_count): if devs[part] != exp_first_handoffs[part]: changed_first_handoff += 1 exp_first_handoffs[part] = devs[part] self.assertEqual(devs, exp_first_handoffs) self.assertEqual(changed_first_handoff, num_parts_changed) # Test part, devs = r.get_nodes('a', 'c', 'o') primary_zones = set([d['zone'] for d in devs]) self.assertEqual(part, exp_part) self.assertEqual([d['id'] for d in devs], exp_devs) self.assertEqual(primary_zones, exp_zones) devs = list(r.get_more_nodes(part)) dev_ids = [d['id'] for d in devs] self.assertEqual(len(dev_ids), len(exp_handoffs)) for index, dev in enumerate(dev_ids): self.assertEqual( dev, exp_handoffs[index], 'handoff differs at position %d\n%s\n%s' % ( index, dev_ids[index:], exp_handoffs[index:])) seen_zones = set(primary_zones) seen_zones.update([d['zone'] for d in devs[:6]]) self.assertEqual(seen_zones, set(range(1, 10))) devs = [] for part in range(r.partition_count): devs.append(next(r.get_more_nodes(part))['id']) for part in range(r.partition_count): self.assertEqual( devs[part], exp_first_handoffs[part], 'handoff for partitition %d is now device id %d' % ( part, devs[part])) # Add a partial replica rb.set_replicas(3.5) num_parts_changed, _balance, _removed_dev = rb.rebalance(seed=164) rb.get_ring().save(self.testgz) r = ring.Ring(self.testdir, ring_name='whatever') # Change expectations # We have another replica now exp_devs.append(90) exp_zones.add(8) # and therefore one less handoff exp_handoffs = exp_handoffs[:-1] # Caused some major changes in the sequence of handoffs for our test # partition, but at least the first stayed the same. devs = list(d['id'] for d in r.get_more_nodes(exp_part)) first_matches = 0 total_changed = 0 for i, part in enumerate(devs): if exp_handoffs[i] != devs[i]: total_changed += 1 exp_handoffs[i] = devs[i] if not total_changed: first_matches += 1 # most seeds seem to throw out first handoff stabilization with # replica_count change self.assertEqual(first_matches, 2) # and lots of other handoff changes... 
self.assertEqual(total_changed, 95) self.assertEqual(devs, exp_handoffs) # Change expectations for the rest of the parts devs = [] for part in range(r.partition_count): devs.append(next(r.get_more_nodes(part))['id']) changed_first_handoff = 0 for part in range(r.partition_count): if devs[part] != exp_first_handoffs[part]: changed_first_handoff += 1 exp_first_handoffs[part] = devs[part] self.assertEqual(devs, exp_first_handoffs) self.assertLessEqual(changed_first_handoff, num_parts_changed) # Test part, devs = r.get_nodes('a', 'c', 'o') primary_zones = set([d['zone'] for d in devs]) self.assertEqual(part, exp_part) self.assertEqual([d['id'] for d in devs], exp_devs) self.assertEqual(primary_zones, exp_zones) devs = list(r.get_more_nodes(part)) dev_ids = [d['id'] for d in devs] self.assertEqual(len(dev_ids), len(exp_handoffs)) for index, dev in enumerate(dev_ids): self.assertEqual( dev, exp_handoffs[index], 'handoff differs at position %d\n%s\n%s' % ( index, dev_ids[index:], exp_handoffs[index:])) seen_zones = set(primary_zones) seen_zones.update([d['zone'] for d in devs[:6]]) self.assertEqual(seen_zones, set(range(1, 10))) devs = [] for part in range(r.partition_count): devs.append(next(r.get_more_nodes(part))['id']) for part in range(r.partition_count): self.assertEqual( devs[part], exp_first_handoffs[part], 'handoff for partitition %d is now device id %d' % ( part, devs[part])) # One last test of a partial replica partition exp_part2 = 136 exp_devs2 = [70, 76, 32] exp_zones2 = set([3, 6, 7]) exp_handoffs2 = [89, 97, 37, 53, 20, 1, 86, 64, 102, 40, 90, 60, 72, 27, 99, 68, 78, 26, 105, 45, 42, 95, 22, 13, 49, 55, 11, 8, 83, 16, 4, 59, 33, 108, 61, 74, 29, 88, 66, 80, 25, 100, 39, 67, 79, 24, 65, 96, 36, 84, 54, 21, 63, 81, 56, 71, 77, 30, 48, 23, 10, 52, 82, 34, 17, 107, 87, 104, 5, 35, 2, 50, 43, 62, 73, 28, 18, 14, 98, 38, 85, 15, 57, 9, 51, 12, 6, 91, 3, 103, 41, 92, 47, 75, 44, 69, 101, 93, 106, 46, 94, 31, 19, 7, 58] part2, devs2 = r.get_nodes('a', 'c', 'o2') primary_zones2 = set([d['zone'] for d in devs2]) self.assertEqual(part2, exp_part2) self.assertEqual([d['id'] for d in devs2], exp_devs2) self.assertEqual(primary_zones2, exp_zones2) devs2 = list(r.get_more_nodes(part2)) dev_ids2 = [d['id'] for d in devs2] self.assertEqual(len(dev_ids2), len(exp_handoffs2)) for index, dev in enumerate(dev_ids2): self.assertEqual( dev, exp_handoffs2[index], 'handoff differs at position %d\n%s\n%s' % ( index, dev_ids2[index:], exp_handoffs2[index:])) seen_zones = set(primary_zones2) seen_zones.update([d['zone'] for d in devs2[:6]]) self.assertEqual(seen_zones, set(range(1, 10))) # Test distribution across regions rb.set_replicas(3) for region in range(1, 5): rb.add_dev({'id': next_dev_id, 'ip': '1.%d.1.%d' % (region, server), 'port': 1234, # 108.0 is the weight of all devices created prior to # this test in region 0; this way all regions have # equal combined weight 'zone': 1, 'region': region, 'weight': 108.0}) next_dev_id += 1 rb.pretend_min_part_hours_passed() rb.rebalance(seed=1) rb.pretend_min_part_hours_passed() rb.rebalance(seed=1) rb.get_ring().save(self.testgz) r = ring.Ring(self.testdir, ring_name='whatever') # There's 5 regions now, so the primary nodes + first 2 handoffs # should span all 5 regions part, devs = r.get_nodes('a1', 'c1', 'o1') primary_regions = set([d['region'] for d in devs]) primary_zones = set([(d['region'], d['zone']) for d in devs]) more_devs = list(r.get_more_nodes(part)) seen_regions = set(primary_regions) seen_regions.update([d['region'] for d in more_devs[:2]]) 
self.assertEqual(seen_regions, set(range(0, 5))) # There are 13 zones now, so the first 13 nodes should all have # distinct zones (that's r0z0, r0z1, ..., r0z8, r1z1, r2z1, r3z1, and # r4z1). seen_zones = set(primary_zones) seen_zones.update([(d['region'], d['zone']) for d in more_devs[:10]]) self.assertEqual(13, len(seen_zones)) # Here's a brittle canary-in-the-coalmine test to make sure the region # handoff computation didn't change accidentally exp_handoffs = [111, 112, 35, 58, 62, 74, 20, 105, 41, 90, 53, 6, 3, 67, 55, 76, 108, 32, 12, 80, 38, 85, 94, 42, 27, 99, 50, 47, 70, 87, 26, 9, 15, 97, 102, 81, 23, 65, 33, 77, 34, 4, 75, 8, 5, 30, 13, 73, 36, 92, 54, 51, 72, 78, 66, 1, 48, 14, 93, 95, 88, 86, 84, 106, 60, 101, 57, 43, 89, 59, 79, 46, 61, 52, 44, 45, 37, 68, 25, 100, 49, 24, 16, 71, 96, 21, 107, 98, 64, 39, 18, 29, 103, 91, 22, 63, 69, 28, 56, 11, 82, 10, 17, 19, 7, 40, 83, 104, 31] dev_ids = [d['id'] for d in more_devs] self.assertEqual(len(dev_ids), len(exp_handoffs)) for index, dev_id in enumerate(dev_ids): self.assertEqual( dev_id, exp_handoffs[index], 'handoff differs at position %d\n%s\n%s' % ( index, dev_ids[index:], exp_handoffs[index:])) def test_get_more_nodes_with_zero_weight_region(self): rb = ring.RingBuilder(8, 3, 1) devs = [ ring_utils.parse_add_value(v) for v in [ 'r1z1-127.0.0.1:6000/d1', 'r1z1-127.0.0.1:6001/d2', 'r1z1-127.0.0.1:6002/d3', 'r1z1-127.0.0.1:6003/d4', 'r1z2-127.0.0.2:6000/d1', 'r1z2-127.0.0.2:6001/d2', 'r1z2-127.0.0.2:6002/d3', 'r1z2-127.0.0.2:6003/d4', 'r2z1-127.0.1.1:6000/d1', 'r2z1-127.0.1.1:6001/d2', 'r2z1-127.0.1.1:6002/d3', 'r2z1-127.0.1.1:6003/d4', 'r2z2-127.0.1.2:6000/d1', 'r2z2-127.0.1.2:6001/d2', 'r2z2-127.0.1.2:6002/d3', 'r2z2-127.0.1.2:6003/d4', ] ] for dev in devs: if dev['region'] == 2: dev['weight'] = 0.0 else: dev['weight'] = 1.0 rb.add_dev(dev) rb.rebalance(seed=1) rb.get_ring().save(self.testgz) r = ring.Ring(self.testdir, ring_name='whatever') class CountingRingTable(object): def __init__(self, table): self.table = table self.count = 0 def __iter__(self): self._iter = iter(self.table) return self def __next__(self): self.count += 1 return next(self._iter) # complete the api next = __next__ def __getitem__(self, key): return self.table[key] counting_table = CountingRingTable(r._replica2part2dev_id) r._replica2part2dev_id = counting_table part = random.randint(0, r.partition_count) node_iter = r.get_more_nodes(part) next(node_iter) self.assertEqual(5, counting_table.count) # sanity self.assertEqual(1, r._num_regions) self.assertEqual(2, r._num_zones) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/ring/test_utils.py0000664000567000056710000007742713024044354022774 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import unittest from swift.common import ring from swift.common.ring.utils import (tiers_for_dev, build_tier_tree, validate_and_normalize_ip, validate_and_normalize_address, is_valid_ip, is_valid_ipv4, is_valid_ipv6, is_valid_hostname, is_local_device, parse_search_value, parse_search_values_from_opts, parse_change_values_from_opts, validate_args, parse_args, parse_builder_ring_filename_args, build_dev_from_opts, dispersion_report, parse_address) class TestUtils(unittest.TestCase): def setUp(self): self.test_dev = {'region': 1, 'zone': 1, 'ip': '192.168.1.1', 'port': '6000', 'id': 0} def get_test_devs(): dev0 = {'region': 1, 'zone': 1, 'ip': '192.168.1.1', 'port': '6000', 'id': 0} dev1 = {'region': 1, 'zone': 1, 'ip': '192.168.1.1', 'port': '6000', 'id': 1} dev2 = {'region': 1, 'zone': 1, 'ip': '192.168.1.1', 'port': '6000', 'id': 2} dev3 = {'region': 1, 'zone': 1, 'ip': '192.168.1.2', 'port': '6000', 'id': 3} dev4 = {'region': 1, 'zone': 1, 'ip': '192.168.1.2', 'port': '6000', 'id': 4} dev5 = {'region': 1, 'zone': 1, 'ip': '192.168.1.2', 'port': '6000', 'id': 5} dev6 = {'region': 1, 'zone': 2, 'ip': '192.168.2.1', 'port': '6000', 'id': 6} dev7 = {'region': 1, 'zone': 2, 'ip': '192.168.2.1', 'port': '6000', 'id': 7} dev8 = {'region': 1, 'zone': 2, 'ip': '192.168.2.1', 'port': '6000', 'id': 8} dev9 = {'region': 1, 'zone': 2, 'ip': '192.168.2.2', 'port': '6000', 'id': 9} dev10 = {'region': 1, 'zone': 2, 'ip': '192.168.2.2', 'port': '6000', 'id': 10} dev11 = {'region': 1, 'zone': 2, 'ip': '192.168.2.2', 'port': '6000', 'id': 11} return [dev0, dev1, dev2, dev3, dev4, dev5, dev6, dev7, dev8, dev9, dev10, dev11] self.test_devs = get_test_devs() def test_tiers_for_dev(self): self.assertEqual( tiers_for_dev(self.test_dev), ((1,), (1, 1), (1, 1, '192.168.1.1'), (1, 1, '192.168.1.1', 0))) def test_build_tier_tree(self): ret = build_tier_tree(self.test_devs) self.assertEqual(len(ret), 8) self.assertEqual(ret[()], set([(1,)])) self.assertEqual(ret[(1,)], set([(1, 1), (1, 2)])) self.assertEqual(ret[(1, 1)], set([(1, 1, '192.168.1.2'), (1, 1, '192.168.1.1')])) self.assertEqual(ret[(1, 2)], set([(1, 2, '192.168.2.2'), (1, 2, '192.168.2.1')])) self.assertEqual(ret[(1, 1, '192.168.1.1')], set([(1, 1, '192.168.1.1', 0), (1, 1, '192.168.1.1', 1), (1, 1, '192.168.1.1', 2)])) self.assertEqual(ret[(1, 1, '192.168.1.2')], set([(1, 1, '192.168.1.2', 3), (1, 1, '192.168.1.2', 4), (1, 1, '192.168.1.2', 5)])) self.assertEqual(ret[(1, 2, '192.168.2.1')], set([(1, 2, '192.168.2.1', 6), (1, 2, '192.168.2.1', 7), (1, 2, '192.168.2.1', 8)])) self.assertEqual(ret[(1, 2, '192.168.2.2')], set([(1, 2, '192.168.2.2', 9), (1, 2, '192.168.2.2', 10), (1, 2, '192.168.2.2', 11)])) def test_is_valid_ip(self): self.assertTrue(is_valid_ip("127.0.0.1")) self.assertTrue(is_valid_ip("10.0.0.1")) ipv6 = "fe80:0000:0000:0000:0204:61ff:fe9d:f156" self.assertTrue(is_valid_ip(ipv6)) ipv6 = "fe80:0:0:0:204:61ff:fe9d:f156" self.assertTrue(is_valid_ip(ipv6)) ipv6 = "fe80::204:61ff:fe9d:f156" self.assertTrue(is_valid_ip(ipv6)) ipv6 = "fe80:0000:0000:0000:0204:61ff:254.157.241.86" self.assertTrue(is_valid_ip(ipv6)) ipv6 = "fe80:0:0:0:0204:61ff:254.157.241.86" self.assertTrue(is_valid_ip(ipv6)) ipv6 = "fe80::204:61ff:254.157.241.86" self.assertTrue(is_valid_ip(ipv6)) ipv6 = "fe80::" self.assertTrue(is_valid_ip(ipv6)) ipv6 = "::1" self.assertTrue(is_valid_ip(ipv6)) not_ipv6 = "3ffe:0b00:0000:0001:0000:0000:000a" self.assertFalse(is_valid_ip(not_ipv6)) not_ipv6 = "1:2:3:4:5:6::7:8" self.assertFalse(is_valid_ip(not_ipv6)) def 
test_is_valid_ipv4(self): self.assertTrue(is_valid_ipv4("127.0.0.1")) self.assertTrue(is_valid_ipv4("10.0.0.1")) ipv6 = "fe80:0000:0000:0000:0204:61ff:fe9d:f156" self.assertFalse(is_valid_ipv4(ipv6)) ipv6 = "fe80:0:0:0:204:61ff:fe9d:f156" self.assertFalse(is_valid_ipv4(ipv6)) ipv6 = "fe80::204:61ff:fe9d:f156" self.assertFalse(is_valid_ipv4(ipv6)) ipv6 = "fe80:0000:0000:0000:0204:61ff:254.157.241.86" self.assertFalse(is_valid_ipv4(ipv6)) ipv6 = "fe80:0:0:0:0204:61ff:254.157.241.86" self.assertFalse(is_valid_ipv4(ipv6)) ipv6 = "fe80::204:61ff:254.157.241.86" self.assertFalse(is_valid_ipv4(ipv6)) ipv6 = "fe80::" self.assertFalse(is_valid_ipv4(ipv6)) ipv6 = "::1" self.assertFalse(is_valid_ipv4(ipv6)) not_ipv6 = "3ffe:0b00:0000:0001:0000:0000:000a" self.assertFalse(is_valid_ipv4(not_ipv6)) not_ipv6 = "1:2:3:4:5:6::7:8" self.assertFalse(is_valid_ipv4(not_ipv6)) def test_is_valid_ipv6(self): self.assertFalse(is_valid_ipv6("127.0.0.1")) self.assertFalse(is_valid_ipv6("10.0.0.1")) ipv6 = "fe80:0000:0000:0000:0204:61ff:fe9d:f156" self.assertTrue(is_valid_ipv6(ipv6)) ipv6 = "fe80:0:0:0:204:61ff:fe9d:f156" self.assertTrue(is_valid_ipv6(ipv6)) ipv6 = "fe80::204:61ff:fe9d:f156" self.assertTrue(is_valid_ipv6(ipv6)) ipv6 = "fe80:0000:0000:0000:0204:61ff:254.157.241.86" self.assertTrue(is_valid_ipv6(ipv6)) ipv6 = "fe80:0:0:0:0204:61ff:254.157.241.86" self.assertTrue(is_valid_ipv6(ipv6)) ipv6 = "fe80::204:61ff:254.157.241.86" self.assertTrue(is_valid_ipv6(ipv6)) ipv6 = "fe80::" self.assertTrue(is_valid_ipv6(ipv6)) ipv6 = "::1" self.assertTrue(is_valid_ipv6(ipv6)) not_ipv6 = "3ffe:0b00:0000:0001:0000:0000:000a" self.assertFalse(is_valid_ipv6(not_ipv6)) not_ipv6 = "1:2:3:4:5:6::7:8" self.assertFalse(is_valid_ipv6(not_ipv6)) def test_is_valid_hostname(self): self.assertTrue(is_valid_hostname("local")) self.assertTrue(is_valid_hostname("test.test.com")) hostname = "test." 
* 51 self.assertTrue(is_valid_hostname(hostname)) hostname = hostname.rstrip('.') self.assertTrue(is_valid_hostname(hostname)) hostname = hostname + "00" self.assertFalse(is_valid_hostname(hostname)) self.assertFalse(is_valid_hostname("$blah#")) def test_is_local_device(self): # localhost shows up in whataremyips() output as "::1" for IPv6 my_ips = ["127.0.0.1", "::1"] my_port = 6000 self.assertTrue(is_local_device(my_ips, my_port, "127.0.0.1", my_port)) self.assertTrue(is_local_device(my_ips, my_port, "::1", my_port)) self.assertTrue(is_local_device( my_ips, my_port, "0000:0000:0000:0000:0000:0000:0000:0001", my_port)) self.assertTrue(is_local_device(my_ips, my_port, "localhost", my_port)) self.assertFalse(is_local_device(my_ips, my_port, "localhost", my_port + 1)) self.assertFalse(is_local_device(my_ips, my_port, "127.0.0.2", my_port)) # for those that don't have a local port self.assertTrue(is_local_device(my_ips, None, my_ips[0], None)) # When servers_per_port is active, the "my_port" passed in is None # which means "don't include port in the determination of locality # because it's not reliable in this deployment scenario" self.assertTrue(is_local_device(my_ips, None, "127.0.0.1", 6666)) self.assertTrue(is_local_device(my_ips, None, "::1", 6666)) self.assertTrue(is_local_device( my_ips, None, "0000:0000:0000:0000:0000:0000:0000:0001", 6666)) self.assertTrue(is_local_device(my_ips, None, "localhost", 6666)) self.assertFalse(is_local_device(my_ips, None, "127.0.0.2", my_port)) def test_validate_and_normalize_ip(self): ipv4 = "10.0.0.1" self.assertEqual(ipv4, validate_and_normalize_ip(ipv4)) ipv6 = "fe80::204:61ff:fe9d:f156" self.assertEqual(ipv6, validate_and_normalize_ip(ipv6.upper())) hostname = "test.test.com" self.assertRaises(ValueError, validate_and_normalize_ip, hostname) hostname = "$blah#" self.assertRaises(ValueError, validate_and_normalize_ip, hostname) def test_validate_and_normalize_address(self): ipv4 = "10.0.0.1" self.assertEqual(ipv4, validate_and_normalize_address(ipv4)) ipv6 = "fe80::204:61ff:fe9d:f156" self.assertEqual(ipv6, validate_and_normalize_address(ipv6.upper())) hostname = "test.test.com" self.assertEqual(hostname, validate_and_normalize_address(hostname.upper())) hostname = "$blah#" self.assertRaises(ValueError, validate_and_normalize_address, hostname) def test_parse_search_value(self): res = parse_search_value('r0') self.assertEqual(res, {'region': 0}) res = parse_search_value('r1') self.assertEqual(res, {'region': 1}) res = parse_search_value('r1z2') self.assertEqual(res, {'region': 1, 'zone': 2}) res = parse_search_value('d1') self.assertEqual(res, {'id': 1}) res = parse_search_value('z1') self.assertEqual(res, {'zone': 1}) res = parse_search_value('-127.0.0.1') self.assertEqual(res, {'ip': '127.0.0.1'}) res = parse_search_value('127.0.0.1') self.assertEqual(res, {'ip': '127.0.0.1'}) res = parse_search_value('-[127.0.0.1]:10001') self.assertEqual(res, {'ip': '127.0.0.1', 'port': 10001}) res = parse_search_value(':10001') self.assertEqual(res, {'port': 10001}) res = parse_search_value('R127.0.0.10') self.assertEqual(res, {'replication_ip': '127.0.0.10'}) res = parse_search_value('R[127.0.0.10]:20000') self.assertEqual(res, {'replication_ip': '127.0.0.10', 'replication_port': 20000}) res = parse_search_value('R:20000') self.assertEqual(res, {'replication_port': 20000}) res = parse_search_value('/sdb1') self.assertEqual(res, {'device': 'sdb1'}) res = parse_search_value('_meta1') self.assertEqual(res, {'meta': 'meta1'}) self.assertRaises(ValueError, 
parse_search_value, 'OMGPONIES') def test_parse_search_values_from_opts(self): argv = \ ["--id", "1", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359", "--change-ip", "change.test.test.com", "--change-port", "6001", "--change-replication-ip", "change.r.test.com", "--change-replication-port", "7001", "--change-device", "sdb3", "--change-meta", "some meta data for change"] expected = { 'id': 1, 'region': 2, 'zone': 3, 'ip': "test.test.com", 'port': 6000, 'replication_ip': "r.test.com", 'replication_port': 7000, 'device': "sda3", 'meta': "some meta data", 'weight': 3.14159265359, } new_cmd_format, opts, args = validate_args(argv) search_values = parse_search_values_from_opts(opts) self.assertEqual(search_values, expected) argv = \ ["--id", "1", "--region", "2", "--zone", "3", "--ip", "127.0.0.1", "--port", "6000", "--replication-ip", "127.0.0.10", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359", "--change-ip", "127.0.0.2", "--change-port", "6001", "--change-replication-ip", "127.0.0.20", "--change-replication-port", "7001", "--change-device", "sdb3", "--change-meta", "some meta data for change"] expected = { 'id': 1, 'region': 2, 'zone': 3, 'ip': "127.0.0.1", 'port': 6000, 'replication_ip': "127.0.0.10", 'replication_port': 7000, 'device': "sda3", 'meta': "some meta data", 'weight': 3.14159265359, } new_cmd_format, opts, args = validate_args(argv) search_values = parse_search_values_from_opts(opts) self.assertEqual(search_values, expected) argv = \ ["--id", "1", "--region", "2", "--zone", "3", "--ip", "[127.0.0.1]", "--port", "6000", "--replication-ip", "[127.0.0.10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359", "--change-ip", "[127.0.0.2]", "--change-port", "6001", "--change-replication-ip", "[127.0.0.20]", "--change-replication-port", "7001", "--change-device", "sdb3", "--change-meta", "some meta data for change"] new_cmd_format, opts, args = validate_args(argv) search_values = parse_search_values_from_opts(opts) self.assertEqual(search_values, expected) def test_parse_change_values_from_opts(self): argv = \ ["--id", "1", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359", "--change-ip", "change.test.test.com", "--change-port", "6001", "--change-replication-ip", "change.r.test.com", "--change-replication-port", "7001", "--change-device", "sdb3", "--change-meta", "some meta data for change"] expected = { 'ip': "change.test.test.com", 'port': 6001, 'replication_ip': "change.r.test.com", 'replication_port': 7001, 'device': "sdb3", 'meta': "some meta data for change", } new_cmd_format, opts, args = validate_args(argv) search_values = parse_change_values_from_opts(opts) self.assertEqual(search_values, expected) argv = \ ["--id", "1", "--region", "2", "--zone", "3", "--ip", "127.0.0.1", "--port", "6000", "--replication-ip", "127.0.0.10", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359", "--change-ip", "127.0.0.2", "--change-port", "6001", "--change-replication-ip", "127.0.0.20", "--change-replication-port", "7001", "--change-device", "sdb3", "--change-meta", "some meta data for change"] expected = 
{ 'ip': "127.0.0.2", 'port': 6001, 'replication_ip': "127.0.0.20", 'replication_port': 7001, 'device': "sdb3", 'meta': "some meta data for change", } new_cmd_format, opts, args = validate_args(argv) search_values = parse_change_values_from_opts(opts) self.assertEqual(search_values, expected) argv = \ ["--id", "1", "--region", "2", "--zone", "3", "--ip", "[127.0.0.1]", "--port", "6000", "--replication-ip", "[127.0.0.10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359", "--change-ip", "[127.0.0.2]", "--change-port", "6001", "--change-replication-ip", "[127.0.0.20]", "--change-replication-port", "7001", "--change-device", "sdb3", "--change-meta", "some meta data for change"] new_cmd_format, opts, args = validate_args(argv) search_values = parse_change_values_from_opts(opts) self.assertEqual(search_values, expected) def test_validate_args(self): argv = \ ["--id", "1", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359", "--change-ip", "change.test.test.com", "--change-port", "6001", "--change-replication-ip", "change.r.test.com", "--change-replication-port", "7001", "--change-device", "sdb3", "--change-meta", "some meta data for change"] new_cmd_format, opts, args = validate_args(argv) self.assertTrue(new_cmd_format) self.assertEqual(opts.id, 1) self.assertEqual(opts.region, 2) self.assertEqual(opts.zone, 3) self.assertEqual(opts.ip, "test.test.com") self.assertEqual(opts.port, 6000) self.assertEqual(opts.replication_ip, "r.test.com") self.assertEqual(opts.replication_port, 7000) self.assertEqual(opts.device, "sda3") self.assertEqual(opts.meta, "some meta data") self.assertEqual(opts.weight, 3.14159265359) self.assertEqual(opts.change_ip, "change.test.test.com") self.assertEqual(opts.change_port, 6001) self.assertEqual(opts.change_replication_ip, "change.r.test.com") self.assertEqual(opts.change_replication_port, 7001) self.assertEqual(opts.change_device, "sdb3") self.assertEqual(opts.change_meta, "some meta data for change") def test_validate_args_new_cmd_format(self): argv = \ ["--id", "0", "--region", "0", "--zone", "0", "--ip", "", "--port", "0", "--replication-ip", "", "--replication-port", "0", "--device", "", "--meta", "", "--weight", "0", "--change-ip", "", "--change-port", "0", "--change-replication-ip", "", "--change-replication-port", "0", "--change-device", "", "--change-meta", ""] new_cmd_format, opts, args = validate_args(argv) self.assertTrue(new_cmd_format) argv = \ ["--id", None, "--region", None, "--zone", None, "--ip", "", "--port", "0", "--replication-ip", "", "--replication-port", "0", "--device", "", "--meta", "", "--weight", None, "--change-ip", "change.test.test.com", "--change-port", "6001", "--change-replication-ip", "change.r.test.com", "--change-replication-port", "7001", "--change-device", "sdb3", "--change-meta", "some meta data for change"] new_cmd_format, opts, args = validate_args(argv) self.assertFalse(new_cmd_format) argv = \ ["--id", "0"] new_cmd_format, opts, args = validate_args(argv) self.assertTrue(new_cmd_format) argv = \ ["--region", "0"] new_cmd_format, opts, args = validate_args(argv) self.assertTrue(new_cmd_format) argv = \ ["--zone", "0"] new_cmd_format, opts, args = validate_args(argv) self.assertTrue(new_cmd_format) argv = \ ["--weight", "0"] new_cmd_format, opts, args = validate_args(argv) self.assertTrue(new_cmd_format) def 
test_parse_args(self): argv = \ ["--id", "1", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359", "--change-ip", "change.test.test.com", "--change-port", "6001", "--change-replication-ip", "change.r.test.com", "--change-replication-port", "7001", "--change-device", "sdb3", "--change-meta", "some meta data for change"] opts, args = parse_args(argv) self.assertEqual(opts.id, 1) self.assertEqual(opts.region, 2) self.assertEqual(opts.zone, 3) self.assertEqual(opts.ip, "test.test.com") self.assertEqual(opts.port, 6000) self.assertEqual(opts.replication_ip, "r.test.com") self.assertEqual(opts.replication_port, 7000) self.assertEqual(opts.device, "sda3") self.assertEqual(opts.meta, "some meta data") self.assertEqual(opts.weight, 3.14159265359) self.assertEqual(opts.change_ip, "change.test.test.com") self.assertEqual(opts.change_port, 6001) self.assertEqual(opts.change_replication_ip, "change.r.test.com") self.assertEqual(opts.change_replication_port, 7001) self.assertEqual(opts.change_device, "sdb3") self.assertEqual(opts.change_meta, "some meta data for change") self.assertEqual(len(args), 0) def test_parse_builder_ring_filename_args(self): args = 'swift-ring-builder object.builder write_ring' self.assertEqual(( 'object.builder', 'object.ring.gz' ), parse_builder_ring_filename_args(args.split())) args = 'swift-ring-builder container.ring.gz write_builder' self.assertEqual(( 'container.builder', 'container.ring.gz' ), parse_builder_ring_filename_args(args.split())) # builder name arg should always fall through args = 'swift-ring-builder test create' self.assertEqual(( 'test', 'test.ring.gz' ), parse_builder_ring_filename_args(args.split())) args = 'swift-ring-builder my.file.name create' self.assertEqual(( 'my.file.name', 'my.file.name.ring.gz' ), parse_builder_ring_filename_args(args.split())) def test_build_dev_from_opts(self): argv = \ ["--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] expected = { 'region': 2, 'zone': 3, 'ip': "test.test.com", 'port': 6000, 'replication_ip': "r.test.com", 'replication_port': 7000, 'device': "sda3", 'meta': "some meta data", 'weight': 3.14159265359, } opts, args = parse_args(argv) device = build_dev_from_opts(opts) self.assertEqual(device, expected) argv = \ ["--region", "2", "--zone", "3", "--ip", "[test.test.com]", "--port", "6000", "--replication-ip", "[r.test.com]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] opts, args = parse_args(argv) self.assertRaises(ValueError, build_dev_from_opts, opts) argv = \ ["--region", "2", "--zone", "3", "--ip", "[test.test.com]", "--port", "6000", "--replication-ip", "[r.test.com]", "--replication-port", "7000", "--meta", "some meta data", "--weight", "3.14159265359"] opts, args = parse_args(argv) self.assertRaises(ValueError, build_dev_from_opts, opts) def test_replication_defaults(self): args = '-r 1 -z 1 -i 127.0.0.1 -p 6010 -d d1 -w 100'.split() opts, _ = parse_args(args) device = build_dev_from_opts(opts) expected = { 'device': 'd1', 'ip': '127.0.0.1', 'meta': '', 'port': 6010, 'region': 1, 'replication_ip': '127.0.0.1', 'replication_port': 6010, 'weight': 100.0, 'zone': 1, } self.assertEqual(device, expected) args = '-r 1 -z 1 -i 
test.com -p 6010 -d d1 -w 100'.split() opts, _ = parse_args(args) device = build_dev_from_opts(opts) expected = { 'device': 'd1', 'ip': 'test.com', 'meta': '', 'port': 6010, 'region': 1, 'replication_ip': 'test.com', 'replication_port': 6010, 'weight': 100.0, 'zone': 1, } self.assertEqual(device, expected) def test_dispersion_report(self): rb = ring.RingBuilder(8, 3, 0) rb.add_dev({'id': 0, 'region': 1, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 1, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdb1'}) rb.add_dev({'id': 4, 'region': 1, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdc1'}) rb.add_dev({'id': 5, 'region': 1, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdd1'}) rb.add_dev({'id': 1, 'region': 1, 'zone': 1, 'weight': 200, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 6, 'region': 1, 'zone': 1, 'weight': 200, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb1'}) rb.add_dev({'id': 7, 'region': 1, 'zone': 1, 'weight': 200, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdc1'}) rb.add_dev({'id': 8, 'region': 1, 'zone': 1, 'weight': 200, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdd1'}) rb.add_dev({'id': 2, 'region': 1, 'zone': 1, 'weight': 200, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 9, 'region': 1, 'zone': 1, 'weight': 200, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sdb1'}) rb.add_dev({'id': 10, 'region': 1, 'zone': 1, 'weight': 200, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sdc1'}) rb.add_dev({'id': 11, 'region': 1, 'zone': 1, 'weight': 200, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sdd1'}) # this ring is pretty volatile and the assertions are pretty brittle # so we use a specific seed rb.rebalance(seed=100) rb.validate() self.assertEqual(rb.dispersion, 39.84375) report = dispersion_report(rb) self.assertEqual(report['worst_tier'], 'r1z1') self.assertEqual(report['max_dispersion'], 39.84375) def build_tier_report(max_replicas, placed_parts, dispersion, replicas): return { 'max_replicas': max_replicas, 'placed_parts': placed_parts, 'dispersion': dispersion, 'replicas': replicas, } # Each node should store 256 partitions to avoid multiple replicas # 2/5 of total weight * 768 ~= 307 -> 51 partitions on each node in # zone 1 are stored at least twice on the nodes expected = [ ['r1z1', build_tier_report( 2, 256, 39.84375, [0, 0, 154, 102])], ['r1z1-127.0.0.1', build_tier_report( 1, 256, 19.921875, [0, 205, 51, 0])], ['r1z1-127.0.0.2', build_tier_report( 1, 256, 19.921875, [0, 205, 51, 0])], ] report = dispersion_report(rb, 'r1z1[^/]*$', verbose=True) graph = report['graph'] for i, (expected_key, expected_report) in enumerate(expected): key, report = graph[i] self.assertEqual( (key, report), (expected_key, expected_report) ) # overcompensate in r1z0 rb.add_dev({'id': 12, 'region': 1, 'zone': 0, 'weight': 500, 'ip': '127.0.0.3', 'port': 10003, 'device': 'sda1'}) rb.add_dev({'id': 13, 'region': 1, 'zone': 0, 'weight': 500, 'ip': '127.0.0.3', 'port': 10003, 'device': 'sdb1'}) rb.add_dev({'id': 14, 'region': 1, 'zone': 0, 'weight': 500, 'ip': '127.0.0.3', 'port': 10003, 'device': 'sdc1'}) rb.add_dev({'id': 15, 'region': 1, 'zone': 0, 'weight': 500, 'ip': '127.0.0.3', 'port': 10003, 'device': 'sdd1'}) # when the biggest tier has the smallest devices things get ugly rb.rebalance(seed=100) report = dispersion_report(rb, verbose=True) self.assertEqual(rb.dispersion, 70.3125) 
self.assertEqual(report['worst_tier'], 'r1z0-127.0.0.3') self.assertEqual(report['max_dispersion'], 88.23529411764706) # ... but overload can square it rb.set_overload(rb.get_required_overload()) rb.rebalance() self.assertEqual(rb.dispersion, 0.0) def test_parse_address_old_format(self): # Test old format argv = "127.0.0.1:6000R127.0.0.1:6000/sda1_some meta data" ip, port, rest = parse_address(argv) self.assertEqual(ip, '127.0.0.1') self.assertEqual(port, 6000) self.assertEqual(rest, 'R127.0.0.1:6000/sda1_some meta data') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/ring/test_builder.py0000664000567000056710000051764313024044354023261 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import errno import mock import operator import os import unittest import six.moves.cPickle as pickle from array import array from collections import Counter, defaultdict from math import ceil from tempfile import mkdtemp from shutil import rmtree import random import uuid from six.moves import range from swift.common import exceptions from swift.common import ring from swift.common.ring import utils from swift.common.ring.builder import MAX_BALANCE class TestRingBuilder(unittest.TestCase): def setUp(self): self.testdir = mkdtemp() def tearDown(self): rmtree(self.testdir, ignore_errors=1) def _partition_counts(self, builder, key='id'): """ Returns a dictionary mapping the given device key to (number of partitions assigned to to that key). """ return Counter(builder.devs[dev_id][key] for part2dev_id in builder._replica2part2dev for dev_id in part2dev_id) def _get_population_by_region(self, builder): """ Returns a dictionary mapping region to number of partitions in that region. 
""" return self._partition_counts(builder, key='region') def test_init(self): rb = ring.RingBuilder(8, 3, 1) self.assertEqual(rb.part_power, 8) self.assertEqual(rb.replicas, 3) self.assertEqual(rb.min_part_hours, 1) self.assertEqual(rb.parts, 2 ** 8) self.assertEqual(rb.devs, []) self.assertEqual(rb.devs_changed, False) self.assertEqual(rb.version, 0) def test_overlarge_part_powers(self): ring.RingBuilder(32, 3, 1) # passes by not crashing self.assertRaises(ValueError, ring.RingBuilder, 33, 3, 1) def test_insufficient_replicas(self): ring.RingBuilder(8, 1.0, 1) # passes by not crashing self.assertRaises(ValueError, ring.RingBuilder, 8, 0.999, 1) def test_negative_min_part_hours(self): ring.RingBuilder(8, 3, 0) # passes by not crashing self.assertRaises(ValueError, ring.RingBuilder, 8, 3, -1) def test_deepcopy(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sdb1'}) # more devices in zone #1 rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sdc1'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sdd1'}) rb.rebalance() rb_copy = copy.deepcopy(rb) self.assertEqual(rb.to_dict(), rb_copy.to_dict()) self.assertTrue(rb.devs is not rb_copy.devs) self.assertTrue(rb._replica2part2dev is not rb_copy._replica2part2dev) self.assertTrue(rb._last_part_moves is not rb_copy._last_part_moves) self.assertTrue(rb._remove_devs is not rb_copy._remove_devs) self.assertTrue(rb._dispersion_graph is not rb_copy._dispersion_graph) def test_get_ring(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sda1'}) rb.remove_dev(1) rb.rebalance() r = rb.get_ring() self.assertTrue(isinstance(r, ring.RingData)) r2 = rb.get_ring() self.assertTrue(r is r2) rb.rebalance() r3 = rb.get_ring() self.assertTrue(r3 is not r2) r4 = rb.get_ring() self.assertTrue(r3 is r4) def test_rebalance_with_seed(self): devs = [(0, 10000), (1, 10001), (2, 10002), (1, 10003)] ring_builders = [] for n in range(3): rb = ring.RingBuilder(8, 3, 1) idx = 0 for zone, port in devs: for d in ('sda1', 'sdb1'): rb.add_dev({'id': idx, 'region': 0, 'zone': zone, 'ip': '127.0.0.1', 'port': port, 'device': d, 'weight': 1}) idx += 1 ring_builders.append(rb) rb0 = ring_builders[0] rb1 = ring_builders[1] rb2 = ring_builders[2] r0 = rb0.get_ring() self.assertTrue(rb0.get_ring() is r0) rb0.rebalance() # NO SEED rb1.rebalance(seed=10) rb2.rebalance(seed=10) r1 = rb1.get_ring() r2 = rb2.get_ring() 
self.assertFalse(rb0.get_ring() is r0) self.assertNotEqual(r0.to_dict(), r1.to_dict()) self.assertEqual(r1.to_dict(), r2.to_dict()) def test_rebalance_part_on_deleted_other_part_on_drained(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 1, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 1, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 1, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 1, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.add_dev({'id': 4, 'region': 1, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sda1'}) rb.add_dev({'id': 5, 'region': 1, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10005, 'device': 'sda1'}) rb.rebalance(seed=1) # We want a partition where 1 replica is on a removed device, 1 # replica is on a 0-weight device, and 1 on a normal device. To # guarantee we have one, we see where partition 123 is, then # manipulate its devices accordingly. zero_weight_dev_id = rb._replica2part2dev[1][123] delete_dev_id = rb._replica2part2dev[2][123] rb.set_dev_weight(zero_weight_dev_id, 0.0) rb.remove_dev(delete_dev_id) rb.rebalance() def test_set_replicas(self): rb = ring.RingBuilder(8, 3.2, 1) rb.devs_changed = False rb.set_replicas(3.25) self.assertTrue(rb.devs_changed) rb.devs_changed = False rb.set_replicas(3.2500001) self.assertFalse(rb.devs_changed) def test_add_dev(self): rb = ring.RingBuilder(8, 3, 1) dev = {'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000} dev_id = rb.add_dev(dev) self.assertRaises(exceptions.DuplicateDeviceError, rb.add_dev, dev) self.assertEqual(dev_id, 0) rb = ring.RingBuilder(8, 3, 1) # test add new dev with no id dev_id = rb.add_dev({'zone': 0, 'region': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 6000}) self.assertEqual(rb.devs[0]['id'], 0) self.assertEqual(dev_id, 0) # test add another dev with no id dev_id = rb.add_dev({'zone': 3, 'region': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 6000}) self.assertEqual(rb.devs[1]['id'], 1) self.assertEqual(dev_id, 1) def test_set_dev_weight(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.rebalance() r = rb.get_ring() counts = {} for part2dev_id in r._replica2part2dev_id: for dev_id in part2dev_id: counts[dev_id] = counts.get(dev_id, 0) + 1 self.assertEqual(counts, {0: 128, 1: 128, 2: 256, 3: 256}) rb.set_dev_weight(0, 0.75) rb.set_dev_weight(1, 0.25) rb.pretend_min_part_hours_passed() rb.rebalance() r = rb.get_ring() counts = {} for part2dev_id in r._replica2part2dev_id: for dev_id in part2dev_id: counts[dev_id] = counts.get(dev_id, 0) + 1 self.assertEqual(counts, {0: 192, 1: 64, 2: 256, 3: 256}) def test_remove_dev(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 
2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 3, 'weight': 1, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.rebalance() r = rb.get_ring() counts = {} for part2dev_id in r._replica2part2dev_id: for dev_id in part2dev_id: counts[dev_id] = counts.get(dev_id, 0) + 1 self.assertEqual(counts, {0: 192, 1: 192, 2: 192, 3: 192}) rb.remove_dev(1) rb.pretend_min_part_hours_passed() rb.rebalance() r = rb.get_ring() counts = {} for part2dev_id in r._replica2part2dev_id: for dev_id in part2dev_id: counts[dev_id] = counts.get(dev_id, 0) + 1 self.assertEqual(counts, {0: 256, 2: 256, 3: 256}) def test_round_off_error(self): # 3 nodes with 11 disks each is particularly problematic. Probably has # to do with the binary repr. of 1/33? Those ones look suspicious... # # >>> bin(int(struct.pack('!f', 1.0/(33)).encode('hex'), 16)) # '0b111100111110000011111000010000' rb = ring.RingBuilder(8, 3, 1) for id, (region, zone) in enumerate(11 * [(0, 0), (1, 10), (1, 11)]): rb.add_dev({'id': id, 'region': region, 'zone': zone, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000 + region * 100 + zone, 'device': 'sda%d' % id}) rb.rebalance() self.assertEqual(self._partition_counts(rb, 'zone'), {0: 256, 10: 256, 11: 256}) wanted_by_zone = defaultdict(lambda: defaultdict(int)) for dev in rb._iter_devs(): wanted_by_zone[dev['zone']][dev['parts_wanted']] += 1 # We're nicely balanced, but parts_wanted is slightly lumpy # because reasons. self.assertEqual(wanted_by_zone, { 0: {0: 10, 1: 1}, 10: {0: 11}, 11: {0: 10, -1: 1}}) def test_remove_a_lot(self): rb = ring.RingBuilder(3, 3, 1) rb.add_dev({'id': 0, 'device': 'd0', 'ip': '10.0.0.1', 'port': 6002, 'weight': 1000.0, 'region': 0, 'zone': 1}) rb.add_dev({'id': 1, 'device': 'd1', 'ip': '10.0.0.2', 'port': 6002, 'weight': 1000.0, 'region': 0, 'zone': 2}) rb.add_dev({'id': 2, 'device': 'd2', 'ip': '10.0.0.3', 'port': 6002, 'weight': 1000.0, 'region': 0, 'zone': 3}) rb.add_dev({'id': 3, 'device': 'd3', 'ip': '10.0.0.1', 'port': 6002, 'weight': 1000.0, 'region': 0, 'zone': 1}) rb.add_dev({'id': 4, 'device': 'd4', 'ip': '10.0.0.2', 'port': 6002, 'weight': 1000.0, 'region': 0, 'zone': 2}) rb.add_dev({'id': 5, 'device': 'd5', 'ip': '10.0.0.3', 'port': 6002, 'weight': 1000.0, 'region': 0, 'zone': 3}) rb.rebalance() rb.validate() # this has to put more than 1/3 of the partitions in the # cluster on removed devices in order to ensure that at least # one partition has multiple replicas that need to move. 
# # (for an N-replica ring, it's more than 1/N of the # partitions, of course) rb.remove_dev(3) rb.remove_dev(4) rb.remove_dev(5) rb.rebalance() rb.validate() def test_remove_zero_weighted(self): rb = ring.RingBuilder(8, 3, 0) rb.add_dev({'id': 0, 'device': 'd0', 'ip': '10.0.0.1', 'port': 6002, 'weight': 1000.0, 'region': 0, 'zone': 1}) rb.add_dev({'id': 1, 'device': 'd1', 'ip': '10.0.0.2', 'port': 6002, 'weight': 0.0, 'region': 0, 'zone': 2}) rb.add_dev({'id': 2, 'device': 'd2', 'ip': '10.0.0.3', 'port': 6002, 'weight': 1000.0, 'region': 0, 'zone': 3}) rb.add_dev({'id': 3, 'device': 'd3', 'ip': '10.0.0.1', 'port': 6002, 'weight': 1000.0, 'region': 0, 'zone': 1}) rb.rebalance() rb.remove_dev(1) parts, balance, removed = rb.rebalance() self.assertEqual(removed, 1) def test_shuffled_gather(self): if self._shuffled_gather_helper() and \ self._shuffled_gather_helper(): raise AssertionError('It is highly likely the ring is no ' 'longer shuffling the set of partitions ' 'to reassign on a rebalance.') def _shuffled_gather_helper(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.rebalance() rb.add_dev({'id': 3, 'region': 0, 'zone': 3, 'weight': 1, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) replica_plan = rb._build_replica_plan() rb._set_parts_wanted(replica_plan) for dev in rb._iter_devs(): dev['tiers'] = utils.tiers_for_dev(dev) assign_parts = defaultdict(list) rb._gather_parts_for_balance(assign_parts, replica_plan) max_run = 0 run = 0 last_part = 0 for part, _ in assign_parts.items(): if part > last_part: run += 1 else: if run > max_run: max_run = run run = 0 last_part = part if run > max_run: max_run = run return max_run > len(assign_parts) / 2 def test_initial_balance(self): # 2 boxes, 2 drives each in zone 1 # 1 box, 2 drives in zone 2 # # This is balanceable, but there used to be some nondeterminism in # rebalance() that would sometimes give you an imbalanced ring. 
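        # (Arithmetic behind "balanceable": all six drives weigh 4000.0, so
        # zone 2 holds exactly one third of the total weight.  With 2**8 =
        # 256 partitions and 3 replicas that is 768 assignments, and an even
        # split of 128 partitions per drive is achievable.)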
rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'region': 1, 'zone': 1, 'weight': 4000.0, 'ip': '10.1.1.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'region': 1, 'zone': 1, 'weight': 4000.0, 'ip': '10.1.1.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'region': 1, 'zone': 1, 'weight': 4000.0, 'ip': '10.1.1.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'region': 1, 'zone': 1, 'weight': 4000.0, 'ip': '10.1.1.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'region': 1, 'zone': 2, 'weight': 4000.0, 'ip': '10.1.1.3', 'port': 10000, 'device': 'sda'}) rb.add_dev({'region': 1, 'zone': 2, 'weight': 4000.0, 'ip': '10.1.1.3', 'port': 10000, 'device': 'sdb'}) _, balance, _ = rb.rebalance(seed=2) # maybe not *perfect*, but should be close self.assertTrue(balance <= 1) def test_multitier_partial(self): # Multitier test, nothing full rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 1, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 2, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 3, 'zone': 3, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.rebalance() rb.validate() for part in range(rb.parts): counts = defaultdict(lambda: defaultdict(int)) for replica in range(rb.replicas): dev = rb.devs[rb._replica2part2dev[replica][part]] counts['region'][dev['region']] += 1 counts['zone'][dev['zone']] += 1 if any(c > 1 for c in counts['region'].values()): raise AssertionError( "Partition %d not evenly region-distributed (got %r)" % (part, counts['region'])) if any(c > 1 for c in counts['zone'].values()): raise AssertionError( "Partition %d not evenly zone-distributed (got %r)" % (part, counts['zone'])) # Multitier test, zones full, nodes not full rb = ring.RingBuilder(8, 6, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdd'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sde'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdf'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sdg'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sdh'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sdi'}) rb.rebalance() rb.validate() for part in range(rb.parts): counts = defaultdict(lambda: defaultdict(int)) for replica in range(rb.replicas): dev = rb.devs[rb._replica2part2dev[replica][part]] counts['zone'][dev['zone']] += 1 counts['dev_id'][dev['id']] += 1 if counts['zone'] != {0: 2, 1: 2, 2: 2}: raise AssertionError( "Partition %d not evenly distributed (got %r)" % (part, counts['zone'])) for dev_id, replica_count in counts['dev_id'].items(): if replica_count > 1: raise AssertionError( "Partition %d is on device %d more than once (%r)" % (part, dev_id, counts['dev_id'])) def test_multitier_full(self): # 
Multitier test, #replicas == #devs rb = ring.RingBuilder(8, 6, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdd'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sde'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdf'}) rb.rebalance() rb.validate() for part in range(rb.parts): counts = defaultdict(lambda: defaultdict(int)) for replica in range(rb.replicas): dev = rb.devs[rb._replica2part2dev[replica][part]] counts['zone'][dev['zone']] += 1 counts['dev_id'][dev['id']] += 1 if counts['zone'] != {0: 2, 1: 2, 2: 2}: raise AssertionError( "Partition %d not evenly distributed (got %r)" % (part, counts['zone'])) for dev_id, replica_count in counts['dev_id'].items(): if replica_count != 1: raise AssertionError( "Partition %d is on device %d %d times, not 1 (%r)" % (part, dev_id, replica_count, counts['dev_id'])) def test_multitier_overfull(self): # Multitier test, #replicas > #zones (to prove even distribution) rb = ring.RingBuilder(8, 8, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdg'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdd'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdh'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sde'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdf'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdi'}) rb.rebalance() rb.validate() for part in range(rb.parts): counts = defaultdict(lambda: defaultdict(int)) for replica in range(rb.replicas): dev = rb.devs[rb._replica2part2dev[replica][part]] counts['zone'][dev['zone']] += 1 counts['dev_id'][dev['id']] += 1 self.assertEqual(8, sum(counts['zone'].values())) for zone, replica_count in counts['zone'].items(): if replica_count not in (2, 3): raise AssertionError( "Partition %d not evenly distributed (got %r)" % (part, counts['zone'])) for dev_id, replica_count in counts['dev_id'].items(): if replica_count not in (1, 2): raise AssertionError( "Partition %d is on device %d %d times, " "not 1 or 2 (%r)" % (part, dev_id, replica_count, counts['dev_id'])) def test_multitier_expansion_more_devices(self): rb = ring.RingBuilder(8, 6, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 
'device': 'sdc'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 2, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.rebalance() rb.validate() rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sde'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdf'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 10, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sde'}) rb.add_dev({'id': 11, 'region': 0, 'zone': 2, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdf'}) for _ in range(5): rb.pretend_min_part_hours_passed() rb.rebalance() rb.validate() for part in range(rb.parts): counts = dict(zone=defaultdict(int), dev_id=defaultdict(int)) for replica in range(rb.replicas): dev = rb.devs[rb._replica2part2dev[replica][part]] counts['zone'][dev['zone']] += 1 counts['dev_id'][dev['id']] += 1 self.assertEqual({0: 2, 1: 2, 2: 2}, dict(counts['zone'])) # each part is assigned once to six unique devices self.assertEqual((counts['dev_id'].values()), [1] * 6) self.assertEqual(len(set(counts['dev_id'].keys())), 6) def test_multitier_part_moves_with_0_min_part_hours(self): rb = ring.RingBuilder(8, 3, 0) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd1'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sde1'}) rb.rebalance() rb.validate() # min_part_hours is 0, so we're clear to move 2 replicas to # new devs rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc1'}) rb.rebalance() rb.validate() for part in range(rb.parts): devs = set() for replica in range(rb.replicas): devs.add(rb._replica2part2dev[replica][part]) if len(devs) != 3: raise AssertionError( "Partition %d not on 3 devs (got %r)" % (part, devs)) def test_multitier_part_moves_with_positive_min_part_hours(self): rb = ring.RingBuilder(8, 3, 99) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd1'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sde1'}) rb.rebalance() rb.validate() # min_part_hours is >0, so we'll only be able to move 1 # replica to a new home rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc1'}) rb.pretend_min_part_hours_passed() rb.rebalance() rb.validate() for part in range(rb.parts): devs = set() for replica in range(rb.replicas): devs.add(rb._replica2part2dev[replica][part]) if not 
any(rb.devs[dev_id]['zone'] == 1 for dev_id in devs): raise AssertionError( "Partition %d did not move (got %r)" % (part, devs)) def test_multitier_dont_move_too_many_replicas(self): rb = ring.RingBuilder(8, 3, 1) # there'll be at least one replica in z0 and z1 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb1'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb1'}) rb.rebalance() rb.validate() # only 1 replica should move rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd1'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 3, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sde1'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 4, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdf1'}) rb.pretend_min_part_hours_passed() rb.rebalance() rb.validate() for part in range(rb.parts): zones = set() for replica in range(rb.replicas): zones.add(rb.devs[rb._replica2part2dev[replica][part]]['zone']) if len(zones) != 3: raise AssertionError( "Partition %d not in 3 zones (got %r)" % (part, zones)) if 0 not in zones or 1 not in zones: raise AssertionError( "Partition %d not in zones 0 and 1 (got %r)" % (part, zones)) def test_min_part_hours_zero_will_move_whatever_it_takes(self): rb = ring.RingBuilder(8, 3, 0) # there'll be at least one replica in z0 and z1 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb1'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb1'}) rb.rebalance(seed=1) rb.validate() rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd1'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 3, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sde1'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 4, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdf1'}) rb.rebalance(seed=3) rb.validate() self.assertEqual(0, rb.dispersion) # a balance of w/i a 1% isn't too bad for 3 replicas on 7 # devices when part power is only 8 self.assertAlmostEqual(rb.get_balance(), 0, delta=0.5) # every zone has either 153 or 154 parts for zone, count in self._partition_counts( rb, key='zone').items(): self.assertAlmostEqual(153.5, count, delta=1) parts_with_moved_count = defaultdict(int) for part in range(rb.parts): zones = set() for replica in range(rb.replicas): zones.add(rb.devs[rb._replica2part2dev[replica][part]]['zone']) moved_replicas = len(zones - {0, 1}) parts_with_moved_count[moved_replicas] += 1 # as usual, the real numbers depend on the seed, but we want to # validate a few things here: # # 1) every part had to move one replica to hit dispersion (so no # one can have a moved count 0) # # 2) it's quite reasonable that some small percent of parts will # have a replica in {0, 1, X} (meaning only one replica of the # part moved) # # 3) when min_part_hours is 0, more than one replica of a part # can move in a 
rebalance, and since that movement would get to # better dispersion faster we expect to observe most parts in # {[0,1], X, X} (meaning *two* replicas of the part moved) # # 4) there's plenty of weight in z0 & z1 to hold a whole # replicanth, so there is no reason for any part to have to move # all three replicas out of those zones (meaning no one can have # a moved count 3) # expected = { 1: 52, 2: 204, } self.assertEqual(parts_with_moved_count, expected) def test_rerebalance(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.rebalance() counts = self._partition_counts(rb) self.assertEqual(counts, {0: 256, 1: 256, 2: 256}) rb.add_dev({'id': 3, 'region': 0, 'zone': 3, 'weight': 1, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.pretend_min_part_hours_passed() rb.rebalance() counts = self._partition_counts(rb) self.assertEqual(counts, {0: 192, 1: 192, 2: 192, 3: 192}) rb.set_dev_weight(3, 100) rb.rebalance() counts = self._partition_counts(rb) self.assertEqual(counts[3], 256) def test_add_rebalance_add_rebalance_delete_rebalance(self): # Test for https://bugs.launchpad.net/swift/+bug/845952 # min_part of 0 to allow for rapid rebalancing rb = ring.RingBuilder(8, 3, 0) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.rebalance() rb.validate() rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sda1'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10005, 'device': 'sda1'}) rb.rebalance() rb.validate() rb.remove_dev(1) # well now we have only one device in z0 rb.set_overload(0.5) rb.rebalance() rb.validate() def test_remove_last_partition_from_zero_weight(self): rb = ring.RingBuilder(4, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 1, 'weight': 1.0, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 2, 'weight': 1.0, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'weight': 1.0, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 3, 'weight': 1.0, 'ip': '127.0.0.3', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 3, 'weight': 1.0, 'ip': '127.0.0.3', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 3, 'weight': 1.0, 'ip': '127.0.0.3', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 3, 'weight': 0.4, 'ip': '127.0.0.3', 'port': 10001, 'device': 'zero'}) zero_weight_dev = 3 rb.rebalance(seed=1) # We want at least one partition with replicas only in zone 2 and 3 # due to device weights. It would *like* to spread out into zone 1, # but can't, due to device weight. # # Also, we want such a partition to have a replica on device 3, # which we will then reduce to zero weight. 
This should cause the # removal of the replica from device 3. # # Getting this to happen by chance is hard, so let's just set up a # builder so that it's in the state we want. This is a synthetic # example; while the bug has happened on a real cluster, that # builder file had a part_power of 16, so its contents are much too # big to include here. rb._replica2part2dev = [ # these are the relevant ones # | | | # v v v array('H', [2, 5, 6, 2, 5, 6, 2, 5, 6, 2, 5, 6, 2, 5, 6, 2]), array('H', [1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4]), array('H', [0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 5, 6, 2, 5, 6])] # fix up bookkeeping new_dev_parts = defaultdict(int) for part2dev_id in rb._replica2part2dev: for dev_id in part2dev_id: new_dev_parts[dev_id] += 1 for dev in rb._iter_devs(): dev['parts'] = new_dev_parts[dev['id']] rb.set_dev_weight(zero_weight_dev, 0.0) rb.pretend_min_part_hours_passed() rb.rebalance(seed=1) node_counts = defaultdict(int) for part2dev_id in rb._replica2part2dev: for dev_id in part2dev_id: node_counts[dev_id] += 1 self.assertEqual(node_counts[zero_weight_dev], 0) # it's as balanced as it gets, so nothing moves anymore rb.pretend_min_part_hours_passed() parts_moved, _balance, _removed = rb.rebalance(seed=1) new_node_counts = defaultdict(int) for part2dev_id in rb._replica2part2dev: for dev_id in part2dev_id: new_node_counts[dev_id] += 1 del node_counts[zero_weight_dev] self.assertEqual(node_counts, new_node_counts) self.assertEqual(parts_moved, 0) def test_part_swapping_problem(self): rb = ring.RingBuilder(4, 3, 1) # 127.0.0.1 (2 devs) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) # 127.0.0.2 (3 devs) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdc'}) expected = { '127.0.0.1': 1.2, '127.0.0.2': 1.7999999999999998, } for wr in (rb._build_weighted_replicas_by_tier(), rb._build_wanted_replicas_by_tier(), rb._build_target_replicas_by_tier()): self.assertEqual(expected, {t[-1]: r for (t, r) in wr.items() if len(t) == 3}) self.assertEqual(rb.get_required_overload(), 0) rb.rebalance(seed=3) # so 127.0.0.1 ended up with... tier = (0, 0, '127.0.0.1') # ... 6 parts with 1 replicas self.assertEqual(rb._dispersion_graph[tier][1], 12) # ... 4 parts with 2 replicas self.assertEqual(rb._dispersion_graph[tier][2], 4) # but since we only have two tiers, this is *totally* dispersed self.assertEqual(0, rb.dispersion) # small rings are hard to balance... 
expected = {0: 10, 1: 10, 2: 10, 3: 9, 4: 9} self.assertEqual(expected, {d['id']: d['parts'] for d in rb._iter_devs()}) # everyone wants 9.6 parts expected = { 0: 4.166666666666671, 1: 4.166666666666671, 2: 4.166666666666671, 3: -6.25, 4: -6.25, } self.assertEqual(expected, rb._build_balance_per_dev()) # original sorted _replica2part2dev """ rb._replica2part2dev = [ array('H', [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]), array('H', [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 2, 2, 2, 3, 3, 3]), array('H', [2, 2, 2, 2, 3, 3, 4, 4, 4, 4, 3, 4, 4, 4, 4, 4])] """ # now imagine if we came along this _replica2part2dev through no # fault of our own; if instead of the 12 parts with only one # replica on 127.0.0.1 being split evenly (6 and 6) on device's # 0 and 1 - device 1 inexplicitly had 3 extra parts rb._replica2part2dev = [ # these are the relevant one's here # | | | # v v v array('H', [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]), array('H', [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 2, 2, 2, 3, 3, 3]), array('H', [2, 2, 2, 2, 3, 3, 4, 4, 4, 4, 3, 4, 4, 4, 4, 4])] # fix up bookkeeping new_dev_parts = defaultdict(int) for part2dev_id in rb._replica2part2dev: for dev_id in part2dev_id: new_dev_parts[dev_id] += 1 for dev in rb._iter_devs(): dev['parts'] = new_dev_parts[dev['id']] rb.pretend_min_part_hours_passed() rb.rebalance() expected = { 0: 4.166666666666671, 1: 4.166666666666671, 2: 4.166666666666671, 3: -6.25, 4: -6.25, } self.assertEqual(expected, rb._build_balance_per_dev()) self.assertEqual(rb.get_balance(), 6.25) def test_wrong_tier_with_no_where_to_go(self): rb = ring.RingBuilder(4, 3, 1) # 127.0.0.1 (even devices) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 900, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 900, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 0, 'weight': 900, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) # 127.0.0.2 (odd devices) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdd'}) expected = { '127.0.0.1': 1.75, '127.0.0.2': 1.25, } for wr in (rb._build_weighted_replicas_by_tier(), rb._build_wanted_replicas_by_tier(), rb._build_target_replicas_by_tier()): self.assertEqual(expected, {t[-1]: r for (t, r) in wr.items() if len(t) == 3}) self.assertEqual(rb.get_required_overload(), 0) rb.rebalance(seed=3) # so 127.0.0.1 ended up with... tier = (0, 0, '127.0.0.1') # ... 4 parts with 1 replicas self.assertEqual(rb._dispersion_graph[tier][1], 4) # ... 12 parts with 2 replicas self.assertEqual(rb._dispersion_graph[tier][2], 12) # ... 
and of course 0 parts with 3 replicas self.assertEqual(rb._dispersion_graph[tier][3], 0) # but since we only have two tiers, this is *totally* dispersed self.assertEqual(0, rb.dispersion) # small rings are hard to balance, but it's possible when # part-replicas (3 * 2 ** 4) can go evenly into device weights # (4800) like we've done here expected = { 0: 1, 2: 9, 4: 9, 6: 9, 1: 5, 3: 5, 5: 5, 7: 5, } self.assertEqual(expected, {d['id']: d['parts'] for d in rb._iter_devs()}) expected = { 0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, } self.assertEqual(expected, rb._build_balance_per_dev()) # all devices have exactly the # of parts they want expected = { 0: 0, 2: 0, 4: 0, 6: 0, 1: 0, 3: 0, 5: 0, 7: 0, } self.assertEqual(expected, {d['id']: d['parts_wanted'] for d in rb._iter_devs()}) # original sorted _replica2part2dev """ rb._replica2part2dev = [ array('H', [0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 4, ]), array('H', [4, 4, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, 1, 1, 1, 1, ]), array('H', [1, 3, 3, 3, 3, 3, 5, 5, 5, 5, 5, 7, 7, 7, 7, 7, ])] """ # now imagine if we came along this _replica2part2dev through no # fault of our own; and device 0 had extra parts, but both # copies of the other replicas were already in the other tier! rb._replica2part2dev = [ # these are the relevant one's here # | | # v v array('H', [2, 2, 2, 2, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 0, 0]), array('H', [4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, 1, 1, 1]), array('H', [1, 1, 3, 3, 3, 3, 5, 5, 5, 5, 5, 7, 7, 7, 7, 7])] # fix up bookkeeping new_dev_parts = defaultdict(int) for part2dev_id in rb._replica2part2dev: for dev_id in part2dev_id: new_dev_parts[dev_id] += 1 for dev in rb._iter_devs(): dev['parts'] = new_dev_parts[dev['id']] replica_plan = rb._build_replica_plan() rb._set_parts_wanted(replica_plan) expected = { 0: -1, # this device wants to shed 2: 0, 4: 0, 6: 0, 1: 0, 3: 1, # there's devices with room on the other server 5: 0, 7: 0, } self.assertEqual(expected, {d['id']: d['parts_wanted'] for d in rb._iter_devs()}) rb.pretend_min_part_hours_passed() rb.rebalance() self.assertEqual(rb.get_balance(), 0) def test_multiple_duplicate_device_assignment(self): rb = ring.RingBuilder(4, 4, 1) devs = [ 'r1z1-127.0.0.1:33440/d1', 'r1z1-127.0.0.1:33441/d2', 'r1z1-127.0.0.1:33442/d3', 'r1z1-127.0.0.1:33443/d4', 'r1z1-127.0.0.2:33440/d5', 'r1z1-127.0.0.2:33441/d6', 'r1z1-127.0.0.2:33442/d7', 'r1z1-127.0.0.2:33442/d8', ] for add_value in devs: dev = utils.parse_add_value(add_value) dev['weight'] = 1.0 rb.add_dev(dev) rb.rebalance() rb._replica2part2dev = [ # these are the relevant one's here # | | | | | # v v v v v array('H', [0, 1, 2, 3, 3, 0, 0, 0, 4, 6, 4, 4, 4, 4, 4, 4]), array('H', [0, 1, 3, 1, 1, 1, 1, 1, 5, 7, 5, 5, 5, 5, 5, 5]), array('H', [0, 1, 2, 2, 2, 2, 2, 2, 4, 6, 6, 6, 6, 6, 6, 6]), array('H', [0, 3, 2, 3, 3, 3, 3, 3, 5, 7, 7, 7, 7, 7, 7, 7]) # ^ # | # this sort of thing worked already ] # fix up bookkeeping new_dev_parts = defaultdict(int) for part2dev_id in rb._replica2part2dev: for dev_id in part2dev_id: new_dev_parts[dev_id] += 1 for dev in rb._iter_devs(): dev['parts'] = new_dev_parts[dev['id']] rb.pretend_min_part_hours_passed() rb.rebalance() rb.validate() def test_region_fullness_with_balanceable_ring(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 1, 'zone': 0, 
'weight': 1, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 1, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sda1'}) rb.add_dev({'id': 4, 'region': 2, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10005, 'device': 'sda1'}) rb.add_dev({'id': 5, 'region': 2, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10006, 'device': 'sda1'}) rb.add_dev({'id': 6, 'region': 3, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10007, 'device': 'sda1'}) rb.add_dev({'id': 7, 'region': 3, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10008, 'device': 'sda1'}) rb.rebalance(seed=2) population_by_region = self._get_population_by_region(rb) self.assertEqual(population_by_region, {0: 192, 1: 192, 2: 192, 3: 192}) def test_region_fullness_with_unbalanceable_ring(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 2, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 2, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 1, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 1, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sda1'}) rb.rebalance(seed=2) population_by_region = self._get_population_by_region(rb) self.assertEqual(population_by_region, {0: 512, 1: 256}) def test_adding_region_slowly_with_unbalanceable_ring(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb1'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc1'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 0, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb1'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdc1'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 1, 'weight': 0.5, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdd1'}) rb.rebalance(seed=2) rb.add_dev({'id': 2, 'region': 1, 'zone': 0, 'weight': 0.25, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 1, 'zone': 1, 'weight': 0.25, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sda1'}) rb.pretend_min_part_hours_passed() changed_parts, _balance, _removed = rb.rebalance(seed=2) # there's not enough room in r1 for every partition to have a replica # in it, so only 86 assignments occur in r1 (that's ~1/5 of the total, # since r1 has 1/5 of the weight). population_by_region = self._get_population_by_region(rb) self.assertEqual(population_by_region, {0: 682, 1: 86}) # really 86 parts *should* move (to the new region) but to avoid # accidentally picking up too many and causing some parts to randomly # flop around devices in the original region - our gather algorithm # is conservative when picking up only from devices that are for sure # holding more parts than they want (math.ceil() of the replica_plan) # which guarantees any parts picked up will have new homes in a better # tier or failure_domain. 
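# For reference, the arithmetic behind the 86: region 1 carries 2 * 0.25 = 0.5
# of the 4.5 total weight, i.e. 1/9 of it (the comment above says 1/5, but the
# asserted numbers match 1/9), and 1/9 of the 3 * 2**8 = 768 replica-assignments
# is ~85.3, so the builder ends up placing 86 parts there.
#
# >>> r1_weight = 2 * 0.25
# >>> total_weight = 8 * 0.5 + 2 * 0.25
# >>> r1_weight / total_weight           # fraction of the cluster weight in r1
# 0.1111111111111111
# >>> 3 * 2 ** 8 // 9                    # its share of the 768 assignments
# 85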
self.assertEqual(86, changed_parts) # and since there's not enough room, subsequent rebalances will not # cause additional assignments to r1 rb.pretend_min_part_hours_passed() rb.rebalance(seed=2) rb.validate() population_by_region = self._get_population_by_region(rb) self.assertEqual(population_by_region, {0: 682, 1: 86}) # after you add more weight, more partition assignments move rb.set_dev_weight(2, 0.5) rb.set_dev_weight(3, 0.5) rb.pretend_min_part_hours_passed() rb.rebalance(seed=2) rb.validate() population_by_region = self._get_population_by_region(rb) self.assertEqual(population_by_region, {0: 614, 1: 154}) rb.set_dev_weight(2, 1.0) rb.set_dev_weight(3, 1.0) rb.pretend_min_part_hours_passed() rb.rebalance(seed=2) rb.validate() population_by_region = self._get_population_by_region(rb) self.assertEqual(population_by_region, {0: 512, 1: 256}) def test_avoid_tier_change_new_region(self): rb = ring.RingBuilder(8, 3, 1) for i in range(5): rb.add_dev({'id': i, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.1', 'port': i, 'device': 'sda1'}) rb.rebalance(seed=2) # Add a new device in new region to a balanced ring rb.add_dev({'id': 5, 'region': 1, 'zone': 0, 'weight': 0, 'ip': '127.0.0.5', 'port': 10000, 'device': 'sda1'}) # Increase the weight of region 1 slowly moved_partitions = [] errors = [] for weight in range(0, 101, 10): rb.set_dev_weight(5, weight) rb.pretend_min_part_hours_passed() changed_parts, _balance, _removed = rb.rebalance(seed=2) rb.validate() moved_partitions.append(changed_parts) # Ensure that the second region has enough partitions # Otherwise there will be replicas at risk min_parts_for_r1 = ceil(weight / (500.0 + weight) * 768) parts_for_r1 = self._get_population_by_region(rb).get(1, 0) try: self.assertEqual(min_parts_for_r1, parts_for_r1) except AssertionError: errors.append('weight %s got %s parts but expected %s' % ( weight, parts_for_r1, min_parts_for_r1)) self.assertFalse(errors) # Number of partitions moved on each rebalance # 10/510 * 768 ~ 15.06 -> move at least 15 partitions in first step ref = [0, 16, 14, 14, 13, 13, 13, 12, 11, 12, 10] self.assertEqual(ref, moved_partitions) def test_set_replicas_increase(self): rb = ring.RingBuilder(8, 2, 0) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.rebalance() rb.validate() rb.replicas = 2.1 rb.rebalance() rb.validate() self.assertEqual([len(p2d) for p2d in rb._replica2part2dev], [256, 256, 25]) rb.replicas = 2.2 rb.rebalance() rb.validate() self.assertEqual([len(p2d) for p2d in rb._replica2part2dev], [256, 256, 51]) def test_set_replicas_decrease(self): rb = ring.RingBuilder(4, 5, 0) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 
'device': 'sda1'}) rb.rebalance() rb.validate() rb.replicas = 4.9 rb.rebalance() rb.validate() self.assertEqual([len(p2d) for p2d in rb._replica2part2dev], [16, 16, 16, 16, 14]) # cross a couple of integer thresholds (4 and 3) rb.replicas = 2.5 rb.rebalance() rb.validate() self.assertEqual([len(p2d) for p2d in rb._replica2part2dev], [16, 16, 8]) def test_fractional_replicas_rebalance(self): rb = ring.RingBuilder(8, 2.5, 0) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.rebalance() # passes by not crashing rb.validate() # also passes by not crashing self.assertEqual([len(p2d) for p2d in rb._replica2part2dev], [256, 256, 128]) def test_create_add_dev_add_replica_rebalance(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.set_replicas(4) rb.rebalance() # this would crash since parts_wanted was not set rb.validate() def test_reduce_replicas_after_remove_device(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.rebalance() rb.remove_dev(0) self.assertRaises(exceptions.RingValidationError, rb.rebalance) rb.set_replicas(2) rb.rebalance() rb.validate() def test_rebalance_post_upgrade(self): rb = ring.RingBuilder(8, 3, 1) # 5 devices: 5 is the smallest number that does not divide 3 * 2^8, # which forces some rounding to happen. rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sde'}) rb.rebalance() rb.validate() # Older versions of the ring builder code would round down when # computing parts_wanted, while the new code rounds up. Make sure we # can handle a ring built by the old method. # # This code mimics the old _set_parts_wanted. 
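# For reference, the rounding difference being recreated here: with five
# equal-weight devices the fair share is 3 * 2**8 / 5 = 153.6 parts each.
# The old truncating code budgeted int(153.6) = 153 parts per device, i.e.
# 765 in total, leaving 3 of the 768 replica-assignments that no device
# 'wants'; the loop below rebuilds exactly that state so the subsequent
# rebalance has to recover from it.
#
# >>> 3 * 2 ** 8 / 5.0
# 153.6
# >>> int(3 * 2 ** 8 / 5.0) * 5
# 765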
weight_of_one_part = rb.weight_of_one_part() for dev in rb._iter_devs(): if not dev['weight']: dev['parts_wanted'] = -rb.parts * rb.replicas else: dev['parts_wanted'] = ( int(weight_of_one_part * dev['weight']) - dev['parts']) rb.pretend_min_part_hours_passed() rb.rebalance() # this crashes unless rebalance resets parts_wanted rb.validate() def test_add_replicas_then_rebalance_respects_weight(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sde'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 1, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdf'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdg'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdh'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 2, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdi'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 2, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdj'}) rb.add_dev({'id': 10, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdk'}) rb.add_dev({'id': 11, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdl'}) rb.rebalance(seed=1) r = rb.get_ring() counts = {} for part2dev_id in r._replica2part2dev_id: for dev_id in part2dev_id: counts[dev_id] = counts.get(dev_id, 0) + 1 self.assertEqual(counts, {0: 96, 1: 96, 2: 32, 3: 32, 4: 96, 5: 96, 6: 32, 7: 32, 8: 96, 9: 96, 10: 32, 11: 32}) rb.replicas *= 2 rb.rebalance(seed=1) r = rb.get_ring() counts = {} for part2dev_id in r._replica2part2dev_id: for dev_id in part2dev_id: counts[dev_id] = counts.get(dev_id, 0) + 1 self.assertEqual(counts, {0: 192, 1: 192, 2: 64, 3: 64, 4: 192, 5: 192, 6: 64, 7: 64, 8: 192, 9: 192, 10: 64, 11: 64}) def test_overload(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sde'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdf'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdg'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdh'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdi'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sdc'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sdj'}) rb.add_dev({'id': 10, 'region': 0, 'zone': 2, 'weight': 2, 'ip': 
'127.0.0.2', 'port': 10002, 'device': 'sdk'}) rb.add_dev({'id': 11, 'region': 0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sdl'}) rb.rebalance(seed=12345) rb.validate() # sanity check: balance respects weights, so default part_counts = self._partition_counts(rb, key='zone') self.assertEqual(part_counts[0], 192) self.assertEqual(part_counts[1], 192) self.assertEqual(part_counts[2], 384) # Devices 0 and 1 take 10% more than their fair shares by weight since # overload is 10% (0.1). rb.set_overload(0.1) rb.pretend_min_part_hours_passed() rb.rebalance() part_counts = self._partition_counts(rb, key='zone') self.assertEqual(part_counts[0], 212) self.assertEqual(part_counts[1], 211) self.assertEqual(part_counts[2], 345) # Now, devices 0 and 1 take 50% more than their fair shares by # weight. rb.set_overload(0.5) for _ in range(3): rb.pretend_min_part_hours_passed() rb.rebalance(seed=12345) part_counts = self._partition_counts(rb, key='zone') self.assertEqual(part_counts[0], 256) self.assertEqual(part_counts[1], 256) self.assertEqual(part_counts[2], 256) # Devices 0 and 1 may take up to 75% over their fair share, but the # placement algorithm only wants to spread things out evenly between # all drives, so the devices stay at 50% more. rb.set_overload(0.75) for _ in range(3): rb.pretend_min_part_hours_passed() rb.rebalance(seed=12345) part_counts = self._partition_counts(rb, key='zone') self.assertEqual(part_counts[0], 256) self.assertEqual(part_counts[1], 256) self.assertEqual(part_counts[2], 256) def test_unoverload(self): # Start off needing overload to balance, then add capacity until we # don't need overload any more and see that things still balance. # Overload doesn't prevent optimal balancing. rb = ring.RingBuilder(8, 3, 1) rb.set_overload(0.125) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 2, 'ip': '127.0.0.3', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 0, 'weight': 2, 'ip': '127.0.0.3', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 10, 'region': 0, 'zone': 0, 'weight': 2, 'ip': '127.0.0.3', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 11, 'region': 0, 'zone': 0, 'weight': 2, 'ip': '127.0.0.3', 'port': 10000, 'device': 'sdc'}) rb.rebalance(seed=12345) # sanity check: our overload is big enough to balance things part_counts = self._partition_counts(rb, key='ip') self.assertEqual(part_counts['127.0.0.1'], 216) self.assertEqual(part_counts['127.0.0.2'], 216) self.assertEqual(part_counts['127.0.0.3'], 336) # Add some weight: balance improves for dev in rb.devs: if dev['ip'] in ('127.0.0.1', '127.0.0.2'): rb.set_dev_weight(dev['id'], 1.22) 
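# For reference, the sanity-check counts a few lines up follow directly from
# the weights and the 0.125 overload: total weight is 16, so 127.0.0.1 and
# 127.0.0.2 would each get 4/16 of the 768 assignments (192) on weight alone,
# may take 12.5% more (216) under overload, and 127.0.0.3 keeps the remaining
# 336.
#
# >>> fair = 4 * 3 * 2 ** 8 // 16
# >>> fair, int(fair * 1.125), 3 * 2 ** 8 - 2 * int(fair * 1.125)
# (192, 216, 336)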
rb.pretend_min_part_hours_passed() rb.rebalance(seed=12345) part_counts = self._partition_counts(rb, key='ip') self.assertEqual(part_counts['127.0.0.1'], 238) self.assertEqual(part_counts['127.0.0.2'], 237) self.assertEqual(part_counts['127.0.0.3'], 293) # Even out the weights: balance becomes perfect for dev in rb.devs: if dev['ip'] in ('127.0.0.1', '127.0.0.2'): rb.set_dev_weight(dev['id'], 2) rb.pretend_min_part_hours_passed() rb.rebalance(seed=12345) part_counts = self._partition_counts(rb, key='ip') self.assertEqual(part_counts['127.0.0.1'], 256) self.assertEqual(part_counts['127.0.0.2'], 256) self.assertEqual(part_counts['127.0.0.3'], 256) # Add a new server: balance stays optimal rb.add_dev({'id': 12, 'region': 0, 'zone': 0, 'weight': 2, 'ip': '127.0.0.4', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 13, 'region': 0, 'zone': 0, 'weight': 2, 'ip': '127.0.0.4', 'port': 10000, 'device': 'sde'}) rb.add_dev({'id': 14, 'region': 0, 'zone': 0, 'weight': 2, 'ip': '127.0.0.4', 'port': 10000, 'device': 'sdf'}) rb.add_dev({'id': 15, 'region': 0, 'zone': 0, 'weight': 2, 'ip': '127.0.0.4', 'port': 10000, 'device': 'sdf'}) # we're moving more than 1/3 of the replicas but fewer than 2/3, so # we have to do this twice rb.pretend_min_part_hours_passed() rb.rebalance(seed=12345) rb.pretend_min_part_hours_passed() rb.rebalance(seed=12345) expected = { '127.0.0.1': 192, '127.0.0.2': 192, '127.0.0.3': 192, '127.0.0.4': 192, } part_counts = self._partition_counts(rb, key='ip') self.assertEqual(part_counts, expected) def test_overload_keeps_balanceable_things_balanced_initially(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 8, 'ip': '10.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 8, 'ip': '10.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.3', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.3', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.4', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.4', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.5', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.5', 'port': 10000, 'device': 'sdb'}) rb.set_overload(99999) rb.rebalance(seed=12345) part_counts = self._partition_counts(rb) self.assertEqual(part_counts, { 0: 128, 1: 128, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64, }) def test_overload_keeps_balanceable_things_balanced_on_rebalance(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 8, 'ip': '10.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 8, 'ip': '10.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.3', 'port': 10000, 
'device': 'sda'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.3', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.4', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.4', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.5', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '10.0.0.5', 'port': 10000, 'device': 'sdb'}) rb.set_overload(99999) rb.rebalance(seed=123) part_counts = self._partition_counts(rb) self.assertEqual(part_counts, { 0: 128, 1: 128, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64, }) # swap weights between 10.0.0.1 and 10.0.0.2 rb.set_dev_weight(0, 4) rb.set_dev_weight(1, 4) rb.set_dev_weight(2, 8) rb.set_dev_weight(1, 8) rb.rebalance(seed=456) part_counts = self._partition_counts(rb) self.assertEqual(part_counts, { 0: 128, 1: 128, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64, }) def test_server_per_port(self): # 3 servers, 3 disks each, with each disk on its own port rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '10.0.0.1', 'port': 10000, 'device': 'sdx'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '10.0.0.1', 'port': 10001, 'device': 'sdy'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '10.0.0.2', 'port': 10000, 'device': 'sdx'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '10.0.0.2', 'port': 10001, 'device': 'sdy'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '10.0.0.3', 'port': 10000, 'device': 'sdx'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '10.0.0.3', 'port': 10001, 'device': 'sdy'}) rb.rebalance(seed=1) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '10.0.0.1', 'port': 10002, 'device': 'sdz'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '10.0.0.2', 'port': 10002, 'device': 'sdz'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '10.0.0.3', 'port': 10002, 'device': 'sdz'}) rb.pretend_min_part_hours_passed() rb.rebalance(seed=1) poorly_dispersed = [] for part in range(rb.parts): on_nodes = set() for replica in range(rb.replicas): dev_id = rb._replica2part2dev[replica][part] on_nodes.add(rb.devs[dev_id]['ip']) if len(on_nodes) < rb.replicas: poorly_dispersed.append(part) self.assertEqual(poorly_dispersed, []) def test_load(self): rb = ring.RingBuilder(8, 3, 1) devs = [{'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda1', 'meta': 'meta0'}, {'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb1', 'meta': 'meta1'}, {'id': 2, 'region': 0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sdc1', 'meta': 'meta2'}, {'id': 3, 'region': 0, 'zone': 3, 'weight': 2, 'ip': '127.0.0.3', 'port': 10003, 'device': 'sdd1'}] for d in devs: rb.add_dev(d) rb.rebalance() real_pickle = pickle.load fake_open = mock.mock_open() io_error_not_found = IOError() io_error_not_found.errno = errno.ENOENT io_error_no_perm = IOError() io_error_no_perm.errno = errno.EPERM io_error_generic = IOError() io_error_generic.errno = errno.EOPNOTSUPP try: # test a legit builder fake_pickle = mock.Mock(return_value=rb) pickle.load = fake_pickle builder = ring.RingBuilder.load('fake.builder', open=fake_open) self.assertEqual(fake_pickle.call_count, 1) 
fake_open.assert_has_calls([mock.call('fake.builder', 'rb')]) self.assertEqual(builder, rb) fake_pickle.reset_mock() # test old style builder fake_pickle.return_value = rb.to_dict() pickle.load = fake_pickle builder = ring.RingBuilder.load('fake.builder', open=fake_open) fake_open.assert_has_calls([mock.call('fake.builder', 'rb')]) self.assertEqual(builder.devs, rb.devs) fake_pickle.reset_mock() # test old devs but no meta no_meta_builder = rb for dev in no_meta_builder.devs: del(dev['meta']) fake_pickle.return_value = no_meta_builder pickle.load = fake_pickle builder = ring.RingBuilder.load('fake.builder', open=fake_open) fake_open.assert_has_calls([mock.call('fake.builder', 'rb')]) self.assertEqual(builder.devs, rb.devs) # test an empty builder fake_pickle.side_effect = EOFError pickle.load = fake_pickle self.assertRaises(exceptions.UnPicklingError, ring.RingBuilder.load, 'fake.builder', open=fake_open) # test a corrupted builder fake_pickle.side_effect = pickle.UnpicklingError pickle.load = fake_pickle self.assertRaises(exceptions.UnPicklingError, ring.RingBuilder.load, 'fake.builder', open=fake_open) # test some error fake_pickle.side_effect = AttributeError pickle.load = fake_pickle self.assertRaises(exceptions.UnPicklingError, ring.RingBuilder.load, 'fake.builder', open=fake_open) finally: pickle.load = real_pickle # test non existent builder file fake_open.side_effect = io_error_not_found self.assertRaises(exceptions.FileNotFoundError, ring.RingBuilder.load, 'fake.builder', open=fake_open) # test non accessible builder file fake_open.side_effect = io_error_no_perm self.assertRaises(exceptions.PermissionError, ring.RingBuilder.load, 'fake.builder', open=fake_open) # test an error other then ENOENT and ENOPERM fake_open.side_effect = io_error_generic self.assertRaises(IOError, ring.RingBuilder.load, 'fake.builder', open=fake_open) def test_save_load(self): rb = ring.RingBuilder(8, 3, 1) devs = [{'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.0', 'port': 10000, 'replication_ip': '127.0.0.0', 'replication_port': 10000, 'device': 'sda1', 'meta': 'meta0'}, {'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'replication_ip': '127.0.0.1', 'replication_port': 10001, 'device': 'sdb1', 'meta': 'meta1'}, {'id': 2, 'region': 0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.2', 'port': 10002, 'replication_ip': '127.0.0.2', 'replication_port': 10002, 'device': 'sdc1', 'meta': 'meta2'}, {'id': 3, 'region': 0, 'zone': 3, 'weight': 2, 'ip': '127.0.0.3', 'port': 10003, 'replication_ip': '127.0.0.3', 'replication_port': 10003, 'device': 'sdd1', 'meta': ''}] rb.set_overload(3.14159) for d in devs: rb.add_dev(d) rb.rebalance() builder_file = os.path.join(self.testdir, 'test_save.builder') rb.save(builder_file) loaded_rb = ring.RingBuilder.load(builder_file) self.maxDiff = None self.assertEqual(loaded_rb.to_dict(), rb.to_dict()) self.assertEqual(loaded_rb.overload, 3.14159) @mock.patch('six.moves.builtins.open', autospec=True) @mock.patch('swift.common.ring.builder.pickle.dump', autospec=True) def test_save(self, mock_pickle_dump, mock_open): mock_open.return_value = mock_fh = mock.MagicMock() rb = ring.RingBuilder(8, 3, 1) devs = [{'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda1', 'meta': 'meta0'}, {'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb1', 'meta': 'meta1'}, {'id': 2, 'region': 0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sdc1', 'meta': 
'meta2'}, {'id': 3, 'region': 0, 'zone': 3, 'weight': 2, 'ip': '127.0.0.3', 'port': 10003, 'device': 'sdd1'}] for d in devs: rb.add_dev(d) rb.rebalance() rb.save('some.builder') mock_open.assert_called_once_with('some.builder', 'wb') mock_pickle_dump.assert_called_once_with(rb.to_dict(), mock_fh.__enter__(), protocol=2) def test_search_devs(self): rb = ring.RingBuilder(8, 3, 1) devs = [{'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda1', 'meta': 'meta0'}, {'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb1', 'meta': 'meta1'}, {'id': 2, 'region': 1, 'zone': 2, 'weight': 2, 'ip': '127.0.0.2', 'port': 10002, 'device': 'sdc1', 'meta': 'meta2'}, {'id': 3, 'region': 1, 'zone': 3, 'weight': 2, 'ip': '127.0.0.3', 'port': 10003, 'device': 'sdd1', 'meta': 'meta3'}, {'id': 4, 'region': 2, 'zone': 4, 'weight': 1, 'ip': '127.0.0.4', 'port': 10004, 'device': 'sde1', 'meta': 'meta4', 'replication_ip': '127.0.0.10', 'replication_port': 20000}, {'id': 5, 'region': 2, 'zone': 5, 'weight': 2, 'ip': '127.0.0.5', 'port': 10005, 'device': 'sdf1', 'meta': 'meta5', 'replication_ip': '127.0.0.11', 'replication_port': 20001}, {'id': 6, 'region': 2, 'zone': 6, 'weight': 2, 'ip': '127.0.0.6', 'port': 10006, 'device': 'sdg1', 'meta': 'meta6', 'replication_ip': '127.0.0.12', 'replication_port': 20002}] for d in devs: rb.add_dev(d) rb.rebalance() res = rb.search_devs({'region': 0}) self.assertEqual(res, [devs[0], devs[1]]) res = rb.search_devs({'region': 1}) self.assertEqual(res, [devs[2], devs[3]]) res = rb.search_devs({'region': 1, 'zone': 2}) self.assertEqual(res, [devs[2]]) res = rb.search_devs({'id': 1}) self.assertEqual(res, [devs[1]]) res = rb.search_devs({'zone': 1}) self.assertEqual(res, [devs[1]]) res = rb.search_devs({'ip': '127.0.0.1'}) self.assertEqual(res, [devs[1]]) res = rb.search_devs({'ip': '127.0.0.1', 'port': 10001}) self.assertEqual(res, [devs[1]]) res = rb.search_devs({'port': 10001}) self.assertEqual(res, [devs[1]]) res = rb.search_devs({'replication_ip': '127.0.0.10'}) self.assertEqual(res, [devs[4]]) res = rb.search_devs({'replication_ip': '127.0.0.10', 'replication_port': 20000}) self.assertEqual(res, [devs[4]]) res = rb.search_devs({'replication_port': 20000}) self.assertEqual(res, [devs[4]]) res = rb.search_devs({'device': 'sdb1'}) self.assertEqual(res, [devs[1]]) res = rb.search_devs({'meta': 'meta1'}) self.assertEqual(res, [devs[1]]) def test_validate(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 10, 'region': 
0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 11, 'region': 0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 12, 'region': 0, 'zone': 2, 'weight': 2, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 3, 'weight': 2, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.add_dev({'id': 13, 'region': 0, 'zone': 3, 'weight': 2, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.add_dev({'id': 14, 'region': 0, 'zone': 3, 'weight': 2, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) rb.add_dev({'id': 15, 'region': 0, 'zone': 3, 'weight': 2, 'ip': '127.0.0.1', 'port': 10003, 'device': 'sda1'}) # Degenerate case: devices added but not rebalanced yet self.assertRaises(exceptions.RingValidationError, rb.validate) rb.rebalance() counts = self._partition_counts(rb, key='zone') self.assertEqual(counts, {0: 128, 1: 128, 2: 256, 3: 256}) dev_usage, worst = rb.validate() self.assertTrue(dev_usage is None) self.assertTrue(worst is None) dev_usage, worst = rb.validate(stats=True) self.assertEqual(list(dev_usage), [32, 32, 64, 64, 32, 32, 32, # added zone0 32, 32, 32, # added zone1 64, 64, 64, # added zone2 64, 64, 64, # added zone3 ]) self.assertEqual(int(worst), 0) # min part hours should pin all the parts assigned to this zero # weight device onto it such that the balance will look horrible rb.set_dev_weight(2, 0) rb.rebalance() self.assertEqual(rb.validate(stats=True)[1], MAX_BALANCE) # Test not all partitions doubly accounted for rb.devs[1]['parts'] -= 1 self.assertRaises(exceptions.RingValidationError, rb.validate) rb.devs[1]['parts'] += 1 # Test non-numeric port rb.devs[1]['port'] = '10001' self.assertRaises(exceptions.RingValidationError, rb.validate) rb.devs[1]['port'] = 10001 # Test partition on nonexistent device rb.pretend_min_part_hours_passed() orig_dev_id = rb._replica2part2dev[0][0] rb._replica2part2dev[0][0] = len(rb.devs) self.assertRaises(exceptions.RingValidationError, rb.validate) rb._replica2part2dev[0][0] = orig_dev_id # Tests that validate can handle 'holes' in .devs rb.remove_dev(2) rb.pretend_min_part_hours_passed() rb.rebalance() rb.validate(stats=True) # Test partition assigned to a hole if rb.devs[2]: rb.remove_dev(2) rb.pretend_min_part_hours_passed() orig_dev_id = rb._replica2part2dev[0][0] rb._replica2part2dev[0][0] = 2 self.assertRaises(exceptions.RingValidationError, rb.validate) rb._replica2part2dev[0][0] = orig_dev_id # Validate that zero weight devices with no partitions don't count on # the 'worst' value. 
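# (Illustrative note, not part of the original test.) validate(stats=True)
# returns (dev_usage, worst), where worst is the largest per-device balance,
# i.e. roughly 100.0 * (parts_held / parts_desired_by_weight - 1). A device
# that still holds parts after its weight drops to zero has no desired parts
# at all, so its balance is reported as the MAX_BALANCE sentinel from the
# builder module -- but a zero-weight device holding *no* parts (like the
# one added just below) is skipped, so worst stays under MAX_BALANCE.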
self.assertNotEqual(rb.validate(stats=True)[1], MAX_BALANCE) rb.add_dev({'id': 16, 'region': 0, 'zone': 0, 'weight': 0, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sda1'}) rb.pretend_min_part_hours_passed() rb.rebalance() self.assertNotEqual(rb.validate(stats=True)[1], MAX_BALANCE) def test_validate_partial_replica(self): rb = ring.RingBuilder(8, 2.5, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdc'}) rb.rebalance() rb.validate() # sanity self.assertEqual(len(rb._replica2part2dev[0]), 256) self.assertEqual(len(rb._replica2part2dev[1]), 256) self.assertEqual(len(rb._replica2part2dev[2]), 128) # now swap partial replica part maps rb._replica2part2dev[1], rb._replica2part2dev[2] = \ rb._replica2part2dev[2], rb._replica2part2dev[1] self.assertRaises(exceptions.RingValidationError, rb.validate) def test_validate_duplicate_part_assignment(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdc'}) rb.rebalance() rb.validate() # sanity # now double up a device assignment rb._replica2part2dev[1][200] = rb._replica2part2dev[2][200] class SubStringMatcher(object): def __init__(self, substr): self.substr = substr def __eq__(self, other): return self.substr in other with self.assertRaises(exceptions.RingValidationError) as e: rb.validate() expected = 'The partition 200 has been assigned to duplicate devices' self.assertIn(expected, str(e.exception)) def test_get_part_devices(self): rb = ring.RingBuilder(8, 3, 1) self.assertEqual(rb.get_part_devices(0), []) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.rebalance() part_devs = sorted(rb.get_part_devices(0), key=operator.itemgetter('id')) self.assertEqual(part_devs, [rb.devs[0], rb.devs[1], rb.devs[2]]) def test_get_part_devices_partial_replicas(self): rb = ring.RingBuilder(8, 2.5, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 1, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.rebalance(seed=4) # note: partition 255 will only have 2 replicas part_devs = sorted(rb.get_part_devices(255), key=operator.itemgetter('id')) self.assertEqual(part_devs, [rb.devs[1], rb.devs[2]]) def test_dispersion_with_zero_weight_devices(self): rb = ring.RingBuilder(8, 3.0, 0) # add two devices to a single server in a single zone rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 
0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) # and a zero weight device rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 0, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.rebalance() self.assertEqual(rb.dispersion, 0.0) self.assertEqual(rb._dispersion_graph, { (0,): [0, 0, 0, 256], (0, 0): [0, 0, 0, 256], (0, 0, '127.0.0.1'): [0, 0, 0, 256], (0, 0, '127.0.0.1', 0): [0, 256, 0, 0], (0, 0, '127.0.0.1', 1): [0, 256, 0, 0], (0, 0, '127.0.0.1', 2): [0, 256, 0, 0], }) def test_dispersion_with_zero_weight_devices_with_parts(self): rb = ring.RingBuilder(8, 3.0, 1) # add four devices to a single server in a single zone rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.rebalance(seed=1) self.assertEqual(rb.dispersion, 0.0) self.assertEqual(rb._dispersion_graph, { (0,): [0, 0, 0, 256], (0, 0): [0, 0, 0, 256], (0, 0, '127.0.0.1'): [0, 0, 0, 256], (0, 0, '127.0.0.1', 0): [64, 192, 0, 0], (0, 0, '127.0.0.1', 1): [64, 192, 0, 0], (0, 0, '127.0.0.1', 2): [64, 192, 0, 0], (0, 0, '127.0.0.1', 3): [64, 192, 0, 0], }) # now mark a device 2 for decom rb.set_dev_weight(2, 0.0) # we'll rebalance but can't move any parts rb.rebalance(seed=1) # zero weight tier has one copy of 1/4 part-replica self.assertEqual(rb.dispersion, 75.0) self.assertEqual(rb._dispersion_graph, { (0,): [0, 0, 0, 256], (0, 0): [0, 0, 0, 256], (0, 0, '127.0.0.1'): [0, 0, 0, 256], (0, 0, '127.0.0.1', 0): [64, 192, 0, 0], (0, 0, '127.0.0.1', 1): [64, 192, 0, 0], (0, 0, '127.0.0.1', 2): [64, 192, 0, 0], (0, 0, '127.0.0.1', 3): [64, 192, 0, 0], }) # unlock the stuck parts rb.pretend_min_part_hours_passed() rb.rebalance(seed=3) self.assertEqual(rb.dispersion, 0.0) self.assertEqual(rb._dispersion_graph, { (0,): [0, 0, 0, 256], (0, 0): [0, 0, 0, 256], (0, 0, '127.0.0.1'): [0, 0, 0, 256], (0, 0, '127.0.0.1', 0): [0, 256, 0, 0], (0, 0, '127.0.0.1', 1): [0, 256, 0, 0], (0, 0, '127.0.0.1', 3): [0, 256, 0, 0], }) def test_effective_overload(self): rb = ring.RingBuilder(8, 3, 1) # z0 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdb'}) # z1 rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 1, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) # z2 rb.add_dev({'id': 6, 'region': 0, 'zone': 2, 'weight': 100, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 2, 'weight': 100, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) # this ring requires overload required = rb.get_required_overload() self.assertGreater(required, 0.1) # and we'll use a little bit rb.set_overload(0.1) rb.rebalance(seed=7) rb.validate() # but with-out enough overload we're not dispersed 
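# (Illustrative note, not part of the original test.) dispersion is 0.0 only
# when every partition's replicas land where the replica plan wants them; any
# positive value means some tier holds more replicas of a partition than
# planned. Here z2 has only 200 of the 800 units of weight, so by weight it
# is entitled to about 3 * 200 / 800 = 0.75 replicanths while dispersion
# wants a full 1.0 per zone -- by the same wanted-vs-weighted arithmetic
# exercised in TestGetRequiredOverload below, that needs roughly
# (1.0 - 0.75) / 0.75 = 1/3 overload, well above the 0.1 allowed here.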
self.assertGreater(rb.dispersion, 0) # add the other dev to z2 rb.add_dev({'id': 8, 'region': 0, 'zone': 2, 'weight': 100, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdc'}) # but also fail another device in the same! rb.remove_dev(6) # we still require overload required = rb.get_required_overload() self.assertGreater(required, 0.1) rb.pretend_min_part_hours_passed() rb.rebalance(seed=7) rb.validate() # ... and without enough we're full dispersed self.assertGreater(rb.dispersion, 0) # ok, let's fix z2's weight for real rb.add_dev({'id': 6, 'region': 0, 'zone': 2, 'weight': 100, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) # ... technically, we no longer require overload self.assertEqual(rb.get_required_overload(), 0.0) # so let's rebalance w/o resetting min_part_hours rb.rebalance(seed=7) rb.validate() # ... and that got it in one pass boo-yah! self.assertEqual(rb.dispersion, 0) def zone_weights_over_device_count(self): rb = ring.RingBuilder(8, 3, 1) # z0 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda'}) # z1 rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) # z2 rb.add_dev({'id': 2, 'region': 0, 'zone': 2, 'weight': 200, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) rb.rebalance(seed=7) rb.validate() self.assertEqual(rb.dispersion, 0) self.assertAlmostEqual(rb.get_balance(), (1.0 / 3.0) * 100) def test_more_devices_than_replicas_validation_when_removed_dev(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'weight': 1.0, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'weight': 1.0, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'weight': 1.0, 'device': 'sdc'}) rb.rebalance() rb.remove_dev(2) with self.assertRaises(ValueError) as e: rb.set_dev_weight(2, 1) msg = "Can not set weight of dev_id 2 because it is marked " \ "for removal" self.assertIn(msg, str(e.exception)) with self.assertRaises(exceptions.RingValidationError) as e: rb.rebalance() msg = 'Replica count of 3 requires more than 2 devices' self.assertIn(msg, str(e.exception)) def _add_dev_delete_first_n(self, add_dev_count, n): rb = ring.RingBuilder(8, 3, 1) dev_names = ['sda', 'sdb', 'sdc', 'sdd', 'sde', 'sdf'] for i in range(add_dev_count): if i < len(dev_names): dev_name = dev_names[i] else: dev_name = 'sda' rb.add_dev({'id': i, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'weight': 1.0, 'device': dev_name}) rb.rebalance() if (n > 0): rb.pretend_min_part_hours_passed() # remove first n for i in range(n): rb.remove_dev(i) rb.pretend_min_part_hours_passed() rb.rebalance() return rb def test_reuse_of_dev_holes_without_id(self): # try with contiguous holes at beginning add_dev_count = 6 rb = self._add_dev_delete_first_n(add_dev_count, add_dev_count - 3) new_dev_id = rb.add_dev({'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'weight': 1.0, 'device': 'sda'}) self.assertTrue(new_dev_id < add_dev_count) # try with non-contiguous holes # [0, 1, None, 3, 4, None] rb2 = ring.RingBuilder(8, 3, 1) for i in range(6): rb2.add_dev({'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'weight': 1.0, 'device': 'sda'}) rb2.rebalance() rb2.pretend_min_part_hours_passed() rb2.remove_dev(2) rb2.remove_dev(5) rb2.pretend_min_part_hours_passed() rb2.rebalance() first = rb2.add_dev({'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 
6000, 'weight': 1.0, 'device': 'sda'}) second = rb2.add_dev({'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'weight': 1.0, 'device': 'sda'}) # add a new one (without reusing a hole) third = rb2.add_dev({'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'weight': 1.0, 'device': 'sda'}) self.assertEqual(first, 2) self.assertEqual(second, 5) self.assertEqual(third, 6) def test_reuse_of_dev_holes_with_id(self): add_dev_count = 6 rb = self._add_dev_delete_first_n(add_dev_count, add_dev_count - 3) # add specifying id exp_new_dev_id = 2 # [dev, dev, None, dev, dev, None] try: new_dev_id = rb.add_dev({'id': exp_new_dev_id, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'weight': 1.0, 'device': 'sda'}) self.assertEqual(new_dev_id, exp_new_dev_id) except exceptions.DuplicateDeviceError: self.fail("device hole not reused") def test_increase_partition_power(self): rb = ring.RingBuilder(8, 3.0, 1) self.assertEqual(rb.part_power, 8) # add more devices than replicas to the ring for i in range(10): dev = "sdx%s" % i rb.add_dev({'id': i, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': dev}) rb.rebalance(seed=1) # Let's save the ring, and get the nodes for an object ring_file = os.path.join(self.testdir, 'test_partpower.ring.gz') rd = rb.get_ring() rd.save(ring_file) r = ring.Ring(ring_file) old_part, old_nodes = r.get_nodes("acc", "cont", "obj") old_version = rb.version rb.increase_partition_power() rb.validate() changed_parts, _balance, removed_devs = rb.rebalance() self.assertEqual(changed_parts, 0) self.assertEqual(removed_devs, 0) old_ring = r rd = rb.get_ring() rd.save(ring_file) r = ring.Ring(ring_file) new_part, new_nodes = r.get_nodes("acc", "cont", "obj") # sanity checks self.assertEqual(rb.part_power, 9) self.assertEqual(rb.version, old_version + 2) # make sure there is always the same device assigned to every pair of # partitions for replica in rb._replica2part2dev: for part in range(0, len(replica), 2): dev = replica[part] next_dev = replica[part + 1] self.assertEqual(dev, next_dev) # same for last_part moves for part in range(0, rb.parts, 2): this_last_moved = rb._last_part_moves[part] next_last_moved = rb._last_part_moves[part + 1] self.assertEqual(this_last_moved, next_last_moved) for i in range(100): suffix = uuid.uuid4() account = 'account_%s' % suffix container = 'container_%s' % suffix obj = 'obj_%s' % suffix old_part, old_nodes = old_ring.get_nodes(account, container, obj) new_part, new_nodes = r.get_nodes(account, container, obj) # Due to the increased partition power, the partition each object # is assigned to has changed. 
If the old partition was X, it will # now be either located in 2*X or 2*X+1 self.assertTrue(new_part in [old_part * 2, old_part * 2 + 1]) # Importantly, we expect the objects to be placed on the same # nodes after increasing the partition power self.assertEqual(old_nodes, new_nodes) class TestGetRequiredOverload(unittest.TestCase): maxDiff = None def test_none_needed(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) # 4 equal-weight devs and 3 replicas: this can be balanced without # resorting to overload at all self.assertAlmostEqual(rb.get_required_overload(), 0) expected = { (0, 0, '127.0.0.1', 0): 0.75, (0, 0, '127.0.0.1', 1): 0.75, (0, 0, '127.0.0.1', 2): 0.75, (0, 0, '127.0.0.1', 3): 0.75, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, { tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 4}) wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 4}) # since no overload is needed, target_replicas is the same rb.set_overload(0.10) target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 4}) # ... no matter how high you go! rb.set_overload(100.0) target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 4}) # 3 equal-weight devs and 3 replicas: this can also be balanced rb.remove_dev(3) self.assertAlmostEqual(rb.get_required_overload(), 0) expected = { (0, 0, '127.0.0.1', 0): 1.0, (0, 0, '127.0.0.1', 1): 1.0, (0, 0, '127.0.0.1', 2): 1.0, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 4}) wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 4}) # ... 
still no overload rb.set_overload(100.0) target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 4}) def test_equal_replica_and_devices_count_ignore_weights(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 7.47, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 5.91, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 6.44, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) expected = { 0: 1.0, 1: 1.0, 2: 1.0, } # simplicity itself self.assertEqual(expected, { t[-1]: r for (t, r) in rb._build_weighted_replicas_by_tier().items() if len(t) == 4}) self.assertEqual(expected, { t[-1]: r for (t, r) in rb._build_wanted_replicas_by_tier().items() if len(t) == 4}) self.assertEqual(expected, { t[-1]: r for (t, r) in rb._build_target_replicas_by_tier().items() if len(t) == 4}) # ... no overload required! self.assertEqual(0, rb.get_required_overload()) rb.rebalance() expected = { 0: 256, 1: 256, 2: 256, } self.assertEqual(expected, {d['id']: d['parts'] for d in rb._iter_devs()}) def test_small_zone(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 4, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'weight': 4, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 4, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'weight': 4, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'weight': 3, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) expected = { (0, 0): 1.0434782608695652, (0, 1): 1.0434782608695652, (0, 2): 0.9130434782608695, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 2}) expected = { (0, 0): 1.0, (0, 1): 1.0, (0, 2): 1.0, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 2}) # the device tier is interesting because one of the devices in zone # two has a different weight expected = { 0: 0.5217391304347826, 1: 0.5217391304347826, 2: 0.5217391304347826, 3: 0.5217391304347826, 4: 0.5217391304347826, 5: 0.3913043478260869, } self.assertEqual(expected, {tier[3]: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 4}) # ... but, each pair of devices still needs to hold a whole # replicanth; which we'll try distribute fairly among devices in # zone 2, so that they can share the burden and ultimately the # required overload will be as small as possible. expected = { 0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5714285714285715, 5: 0.42857142857142855, } self.assertEqual(expected, {tier[3]: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 4}) # full dispersion requires zone two's devices to eat more than # they're weighted for self.assertAlmostEqual(rb.get_required_overload(), 0.095238, delta=1e-5) # so... 
if we give it enough overload it we should get full dispersion rb.set_overload(0.1) target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {tier[3]: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 4}) def test_multiple_small_zones(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'weight': 150, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 150, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 10, 'region': 0, 'zone': 1, 'weight': 150, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 3, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 3, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) expected = { (0, 0): 2.1052631578947367, (0, 1): 0.47368421052631576, (0, 2): 0.21052631578947367, (0, 3): 0.21052631578947367, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 2}) # without any overload, we get weight target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {tier: r for (tier, r) in target_replicas.items() if len(tier) == 2}) expected = { (0, 0): 1.0, (0, 1): 1.0, (0, 2): 0.49999999999999994, (0, 3): 0.49999999999999994, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {t: r for (t, r) in wanted_replicas.items() if len(t) == 2}) self.assertEqual(1.3750000000000002, rb.get_required_overload()) # with enough overload we get the full dispersion rb.set_overload(1.5) target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {tier: r for (tier, r) in target_replicas.items() if len(tier) == 2}) # with not enough overload, we get somewhere in the middle rb.set_overload(1.0) expected = { (0, 0): 1.3014354066985647, (0, 1): 0.8564593301435406, (0, 2): 0.4210526315789473, (0, 3): 0.4210526315789473, } target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {tier: r for (tier, r) in target_replicas.items() if len(tier) == 2}) def test_big_zone(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'weight': 60, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 60, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'weight': 60, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) 
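# (Illustrative arithmetic, not part of the original test.) Once the
# remaining z2/z3 devices are added below, the zone weights are 200, 120,
# 120 and 120 (560 total), so by weight alone z0 would carry
# 3 * 200 / 560 ~= 1.071 replicanths and each small zone
# 3 * 120 / 560 ~= 0.643 -- exactly the weighted_replicas expectation
# that follows.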
rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'weight': 60, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 3, 'weight': 60, 'ip': '127.0.0.3', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 3, 'weight': 60, 'ip': '127.0.0.3', 'port': 10000, 'device': 'sdb'}) expected = { (0, 0): 1.0714285714285714, (0, 1): 0.6428571428571429, (0, 2): 0.6428571428571429, (0, 3): 0.6428571428571429, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 2}) expected = { (0, 0): 1.0, (0, 1): 0.6666666666666667, (0, 2): 0.6666666666666667, (0, 3): 0.6666666666666667, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 2}) # when all the devices and servers in a zone are evenly weighted # it will accurately proxy their required overload, all the # zones besides 0 require the same overload t = random.choice([t for t in weighted_replicas if len(t) == 2 and t[1] != 0]) expected_overload = ((wanted_replicas[t] - weighted_replicas[t]) / weighted_replicas[t]) self.assertAlmostEqual(rb.get_required_overload(), expected_overload) # but if you only give it out half of that rb.set_overload(expected_overload / 2.0) # ... you can expect it's not going to full disperse expected = { (0, 0): 1.0357142857142856, (0, 1): 0.6547619047619049, (0, 2): 0.6547619047619049, (0, 3): 0.6547619047619049, } target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 2}) def test_enormous_zone(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 500, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'weight': 60, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 1, 'weight': 60, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 2, 'weight': 60, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 2, 'weight': 60, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 3, 'weight': 60, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 9, 'region': 0, 'zone': 3, 'weight': 60, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) expected = { (0, 0): 2.542372881355932, (0, 1): 0.15254237288135591, (0, 2): 0.15254237288135591, (0, 3): 0.15254237288135591, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 2}) expected = { (0, 0): 1.0, (0, 1): 0.6666666666666667, (0, 2): 0.6666666666666667, (0, 3): 0.6666666666666667, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 2}) # ouch, those "tiny" devices need to hold 3x more than 
their # weighted for! self.assertAlmostEqual(rb.get_required_overload(), 3.370370, delta=1e-5) # let's get a little crazy, and let devices eat up to 1x more than # their capacity is weighted for - see how far that gets us... rb.set_overload(1) target_replicas = rb._build_target_replicas_by_tier() expected = { (0, 0): 2.084745762711864, (0, 1): 0.30508474576271183, (0, 2): 0.30508474576271183, (0, 3): 0.30508474576271183, } self.assertEqual(expected, {tier: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 2}) def test_two_big_two_small(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 100, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'weight': 45, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'weight': 45, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 3, 'weight': 35, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 3, 'weight': 35, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) expected = { (0, 0): 1.0714285714285714, (0, 1): 1.0714285714285714, (0, 2): 0.48214285714285715, (0, 3): 0.375, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 2}) expected = { (0, 0): 1.0, (0, 1): 1.0, (0, 2): 0.5625, (0, 3): 0.43749999999999994, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 2}) # I'm not sure it's significant or coincidental that the devices # in zone 2 & 3 who end up splitting the 3rd replica turn out to # need to eat ~1/6th extra replicanths self.assertAlmostEqual(rb.get_required_overload(), 1.0 / 6.0) # ... *so* 10% isn't *quite* enough rb.set_overload(0.1) target_replicas = rb._build_target_replicas_by_tier() expected = { (0, 0): 1.0285714285714285, (0, 1): 1.0285714285714285, (0, 2): 0.5303571428571429, (0, 3): 0.4125, } self.assertEqual(expected, {tier: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 2}) # ... but 20% will do the trick! 
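# (Illustrative arithmetic, not part of the original test.) Zone 3 is
# weighted for 0.375 replicanths but dispersion wants it to hold 0.4375,
# and (0.4375 - 0.375) / 0.375 = 1/6 ~= 0.167 -- which is why the 0.1
# overload above fell short and 0.2 is comfortably enough.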
rb.set_overload(0.2) target_replicas = rb._build_target_replicas_by_tier() expected = { (0, 0): 1.0, (0, 1): 1.0, (0, 2): 0.5625, (0, 3): 0.43749999999999994, } self.assertEqual(expected, {tier: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 2}) def test_multiple_replicas_each(self): rb = ring.RingBuilder(8, 7, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 80, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 80, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 80, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'weight': 80, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sdd'}) rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'weight': 80, 'ip': '127.0.0.0', 'port': 10000, 'device': 'sde'}) rb.add_dev({'id': 5, 'region': 0, 'zone': 1, 'weight': 70, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 6, 'region': 0, 'zone': 1, 'weight': 70, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb'}) rb.add_dev({'id': 7, 'region': 0, 'zone': 1, 'weight': 70, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdc'}) rb.add_dev({'id': 8, 'region': 0, 'zone': 1, 'weight': 70, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdd'}) expected = { (0, 0): 4.117647058823529, (0, 1): 2.8823529411764706, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 2}) expected = { (0, 0): 4.0, (0, 1): 3.0, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 2}) # I guess 2.88 => 3.0 is about a 4% increase self.assertAlmostEqual(rb.get_required_overload(), 0.040816326530612256) # ... 10% is plenty enough here rb.set_overload(0.1) target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 2}) def test_small_extra_server_in_zone_with_multiple_replicas(self): rb = ring.RingBuilder(8, 5, 1) # z0 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 1000}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sdb', 'weight': 1000}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sdc', 'weight': 1000}) # z1 rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sda', 'weight': 1000}) rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sdb', 'weight': 1000}) rb.add_dev({'id': 5, 'region': 0, 'zone': 1, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sdc', 'weight': 1000}) # z1 - extra small server rb.add_dev({'id': 6, 'region': 0, 'zone': 1, 'ip': '127.0.0.3', 'port': 6000, 'device': 'sda', 'weight': 50}) expected = { (0, 0): 2.479338842975207, (0, 1): 2.5206611570247937, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {t: r for (t, r) in weighted_replicas.items() if len(t) == 2}) # dispersion is fine with this at the zone tier wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {t: r for (t, r) in wanted_replicas.items() if len(t) == 2}) # ... 
but not ok with that tiny server expected = { '127.0.0.1': 2.479338842975207, '127.0.0.2': 1.5206611570247937, '127.0.0.3': 1.0, } self.assertEqual(expected, {t[-1]: r for (t, r) in wanted_replicas.items() if len(t) == 3}) self.assertAlmostEqual(23.2, rb.get_required_overload()) def test_multiple_replicas_in_zone_with_single_device(self): rb = ring.RingBuilder(8, 5, 0) # z0 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 100}) # z1 rb.add_dev({'id': 1, 'region': 0, 'zone': 1, 'ip': '127.0.1.1', 'port': 6000, 'device': 'sda', 'weight': 100}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'ip': '127.0.1.1', 'port': 6000, 'device': 'sdb', 'weight': 100}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'ip': '127.0.1.2', 'port': 6000, 'device': 'sdc', 'weight': 100}) rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'ip': '127.0.1.2', 'port': 6000, 'device': 'sdd', 'weight': 100}) # first things first, make sure we do this right rb.rebalance() # each device get's a sing replica of every part expected = { 0: 256, 1: 256, 2: 256, 3: 256, 4: 256, } self.assertEqual(expected, {d['id']: d['parts'] for d in rb._iter_devs()}) # but let's make sure we're thinking about it right too expected = { 0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0, } # by weight everyone is equal weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {t[-1]: r for (t, r) in weighted_replicas.items() if len(t) == 4}) # wanted might have liked to have fewer replicas in z1, but the # single device in z0 limits us one replica per device with rb.debug(): wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {t[-1]: r for (t, r) in wanted_replicas.items() if len(t) == 4}) # even with some overload - still one replica per device rb.set_overload(1.0) target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {t[-1]: r for (t, r) in target_replicas.items() if len(t) == 4}) # when overload can not change the outcome none is required self.assertEqual(0.0, rb.get_required_overload()) # even though dispersion is terrible (in z1 particularly) self.assertEqual(100.0, rb.dispersion) def test_one_big_guy_does_not_spoil_his_buddy(self): rb = ring.RingBuilder(8, 3, 0) # z0 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 100}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sda', 'weight': 100}) # z1 rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'ip': '127.0.1.1', 'port': 6000, 'device': 'sda', 'weight': 100}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'ip': '127.0.1.2', 'port': 6000, 'device': 'sda', 'weight': 100}) # z2 rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'ip': '127.0.2.1', 'port': 6000, 'device': 'sda', 'weight': 100}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'ip': '127.0.2.2', 'port': 6000, 'device': 'sda', 'weight': 10000}) # obviously d5 gets one whole replica; the other two replicas # are split evenly among the five other devices # (i.e. 
~0.4 replicanths for each 100 units of weight) expected = { 0: 0.39999999999999997, 1: 0.39999999999999997, 2: 0.39999999999999997, 3: 0.39999999999999997, 4: 0.39999999999999997, 5: 1.0, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {t[-1]: r for (t, r) in weighted_replicas.items() if len(t) == 4}) # with no overload we get the "balanced" placement target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {t[-1]: r for (t, r) in target_replicas.items() if len(t) == 4}) # but in reality, these devices having such disparate weights # leads to a *terrible* balance even w/o overload! rb.rebalance(seed=9) self.assertEqual(rb.get_balance(), 1308.2031249999998) # even though part assignment is pretty reasonable expected = { 0: 103, 1: 102, 2: 103, 3: 102, 4: 102, 5: 256, } self.assertEqual(expected, { d['id']: d['parts'] for d in rb._iter_devs()}) # so whats happening is the small devices are holding *way* more # *real* parts than their *relative* portion of the weight would # like them too! expected = { 0: 1308.2031249999998, 1: 1294.5312499999998, 2: 1308.2031249999998, 3: 1294.5312499999998, 4: 1294.5312499999998, 5: -65.0, } self.assertEqual(expected, rb._build_balance_per_dev()) # increasing overload moves towards one replica in each tier rb.set_overload(0.20) expected = { 0: 0.48, 1: 0.48, 2: 0.48, 3: 0.48, 4: 0.30857142857142855, 5: 0.7714285714285714, } target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {t[-1]: r for (t, r) in target_replicas.items() if len(t) == 4}) # ... and as always increasing overload makes balance *worse* rb.rebalance(seed=17) self.assertEqual(rb.get_balance(), 1581.6406249999998) # but despite the overall trend toward imbalance, in the tier # with the huge device, the small device is trying to shed parts # as effectively as it can (which would be useful if it was the # only small device isolated in a tier with other huge devices # trying to gobble up all the replicanths in the tier - see # `test_one_small_guy_does_not_spoil_his_buddy`!) expected = { 0: 123, 1: 123, 2: 123, 3: 123, 4: 79, 5: 197, } self.assertEqual(expected, { d['id']: d['parts'] for d in rb._iter_devs()}) # *see*, at least *someones* balance is getting better! 
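# (Illustrative arithmetic, not part of the original test.) The balance
# figures below compare each device's actual part count with the count its
# weight entitles it to:
#   desired = parts * replicas * weight / total_weight
#   balance = 100.0 * (parts_held / desired - 1)
# e.g. dev 4: desired = 256 * 3 * 100 / 10500 ~= 7.314 and it holds 79
# parts, so balance ~= 100 * (79 / 7.314 - 1) ~= 980.08, matching the
# expectation that follows.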
expected = { 0: 1581.6406249999998, 1: 1581.6406249999998, 2: 1581.6406249999998, 3: 1581.6406249999998, 4: 980.078125, 5: -73.06640625, } self.assertEqual(expected, rb._build_balance_per_dev()) def test_one_small_guy_does_not_spoil_his_buddy(self): rb = ring.RingBuilder(8, 3, 0) # z0 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 10000}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sda', 'weight': 10000}) # z1 rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'ip': '127.0.1.1', 'port': 6000, 'device': 'sda', 'weight': 10000}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'ip': '127.0.1.2', 'port': 6000, 'device': 'sda', 'weight': 10000}) # z2 rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'ip': '127.0.2.1', 'port': 6000, 'device': 'sda', 'weight': 10000}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'ip': '127.0.2.2', 'port': 6000, 'device': 'sda', 'weight': 100}) # it's almost like 3.0 / 5 ~= 0.6, but that one little guy get's # his fair share expected = { 0: 0.5988023952095808, 1: 0.5988023952095808, 2: 0.5988023952095808, 3: 0.5988023952095808, 4: 0.5988023952095808, 5: 0.005988023952095809, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {t[-1]: r for (t, r) in weighted_replicas.items() if len(t) == 4}) # with no overload we get a nice balanced placement target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {t[-1]: r for (t, r) in target_replicas.items() if len(t) == 4}) rb.rebalance(seed=9) # part placement looks goods expected = { 0: 154, 1: 153, 2: 153, 3: 153, 4: 153, 5: 2, } self.assertEqual(expected, { d['id']: d['parts'] for d in rb._iter_devs()}) # ... balance is a little lumpy on the small guy since he wants # one and a half parts :\ expected = { 0: 0.4609375000000142, 1: -0.1914062499999858, 2: -0.1914062499999858, 3: -0.1914062499999858, 4: -0.1914062499999858, 5: 30.46875, } self.assertEqual(expected, rb._build_balance_per_dev()) self.assertEqual(rb.get_balance(), 30.46875) # increasing overload moves towards one replica in each tier rb.set_overload(0.5) expected = { 0: 0.5232035928143712, 1: 0.5232035928143712, 2: 0.5232035928143712, 3: 0.5232035928143712, 4: 0.8982035928143712, 5: 0.008982035928143714, } target_replicas = rb._build_target_replicas_by_tier() self.assertEqual(expected, {t[-1]: r for (t, r) in target_replicas.items() if len(t) == 4}) # ... and as always increasing overload makes balance *worse* rb.rebalance(seed=17) self.assertEqual(rb.get_balance(), 95.703125) # but despite the overall trend toward imbalance, the little guy # isn't really taking on many new parts! expected = { 0: 134, 1: 134, 2: 134, 3: 133, 4: 230, 5: 3, } self.assertEqual(expected, { d['id']: d['parts'] for d in rb._iter_devs()}) # *see*, at everyone's balance is getting worse *together*! 
expected = { 0: -12.585937499999986, 1: -12.585937499999986, 2: -12.585937499999986, 3: -13.238281249999986, 4: 50.0390625, 5: 95.703125, } self.assertEqual(expected, rb._build_balance_per_dev()) def test_two_servers_with_more_than_one_replica(self): rb = ring.RingBuilder(8, 3, 0) # z0 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 60}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sda', 'weight': 60}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'ip': '127.0.0.3', 'port': 6000, 'device': 'sda', 'weight': 60}) # z1 rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'ip': '127.0.1.1', 'port': 6000, 'device': 'sda', 'weight': 80}) rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'ip': '127.0.1.2', 'port': 6000, 'device': 'sda', 'weight': 128}) # z2 rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'ip': '127.0.2.1', 'port': 6000, 'device': 'sda', 'weight': 80}) rb.add_dev({'id': 6, 'region': 0, 'zone': 2, 'ip': '127.0.2.2', 'port': 6000, 'device': 'sda', 'weight': 240}) rb.set_overload(0.1) rb.rebalance() self.assertEqual(12.161458333333343, rb.get_balance()) replica_plan = rb._build_target_replicas_by_tier() for dev in rb._iter_devs(): tier = (dev['region'], dev['zone'], dev['ip'], dev['id']) expected_parts = replica_plan[tier] * rb.parts self.assertAlmostEqual(dev['parts'], expected_parts, delta=1) def test_multi_zone_with_failed_device(self): rb = ring.RingBuilder(8, 3, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 2000}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sdb', 'weight': 2000}) rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sda', 'weight': 2000}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sdb', 'weight': 2000}) rb.add_dev({'id': 4, 'region': 0, 'zone': 2, 'ip': '127.0.0.3', 'port': 6000, 'device': 'sda', 'weight': 2000}) rb.add_dev({'id': 5, 'region': 0, 'zone': 2, 'ip': '127.0.0.3', 'port': 6000, 'device': 'sdb', 'weight': 2000}) # sanity, balanced and dispersed expected = { (0, 0): 1.0, (0, 1): 1.0, (0, 2): 1.0, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 2}) wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 2}) self.assertEqual(rb.get_required_overload(), 0.0) # fail a device in zone 2 rb.remove_dev(4) expected = { 0: 0.6, 1: 0.6, 2: 0.6, 3: 0.6, 5: 0.6, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier[3]: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 4}) expected = { 0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5, 5: 1.0, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier[3]: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 4}) # does this make sense? every zone was holding 1/3rd of the # replicas, so each device was 1/6th, remove a device and # suddenly it's holding *both* sixths which is 2/3rds? 
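# (Illustrative arithmetic, not part of the original test.) Each of the five
# remaining devices is weighted for 3 * 2000 / 10000 = 0.6 replicanths, but
# dispersion asks the lone surviving zone-2 device to hold a whole replica,
# so the required overload is (1.0 - 0.6) / 0.6 = 2/3, which the assertion
# below checks.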
self.assertAlmostEqual(rb.get_required_overload(), 2.0 / 3.0) # 10% isn't nearly enough rb.set_overload(0.1) target_replicas = rb._build_target_replicas_by_tier() expected = { 0: 0.585, 1: 0.585, 2: 0.585, 3: 0.585, 5: 0.6599999999999999, } self.assertEqual(expected, {tier[3]: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 4}) # 50% isn't even enough rb.set_overload(0.5) target_replicas = rb._build_target_replicas_by_tier() expected = { 0: 0.525, 1: 0.525, 2: 0.525, 3: 0.525, 5: 0.8999999999999999, } self.assertEqual(expected, {tier[3]: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 4}) # even 65% isn't enough (but it's getting closer) rb.set_overload(0.65) target_replicas = rb._build_target_replicas_by_tier() expected = { 0: 0.5025000000000001, 1: 0.5025000000000001, 2: 0.5025000000000001, 3: 0.5025000000000001, 5: 0.99, } self.assertEqual(expected, {tier[3]: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 4}) def test_balanced_zones_unbalanced_servers(self): rb = ring.RingBuilder(8, 3, 1) # zone 0 server 127.0.0.1 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 3000}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sdb', 'weight': 3000}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 3000}) # zone 1 server 127.0.0.2 rb.add_dev({'id': 4, 'region': 0, 'zone': 1, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sda', 'weight': 4000}) rb.add_dev({'id': 5, 'region': 0, 'zone': 1, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sdb', 'weight': 4000}) # zone 1 (again) server 127.0.0.3 rb.add_dev({'id': 6, 'region': 0, 'zone': 1, 'ip': '127.0.0.3', 'port': 6000, 'device': 'sda', 'weight': 1000}) weighted_replicas = rb._build_weighted_replicas_by_tier() # zones are evenly weighted expected = { (0, 0): 1.5, (0, 1): 1.5, } self.assertEqual(expected, {tier: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 2}) # ... 
but servers are not expected = { '127.0.0.1': 1.5, '127.0.0.2': 1.3333333333333333, '127.0.0.3': 0.16666666666666666, } self.assertEqual(expected, {tier[2]: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 3}) # make sure wanted will even it out expected = { '127.0.0.1': 1.5, '127.0.0.2': 1.0, '127.0.0.3': 0.4999999999999999, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier[2]: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 3}) # so it wants 1/6th and eats 1/2 - that's 2/6ths more than it # wants which is a 200% increase self.assertAlmostEqual(rb.get_required_overload(), 2.0) # the overload doesn't effect the tiers that are already dispersed rb.set_overload(1) target_replicas = rb._build_target_replicas_by_tier() expected = { '127.0.0.1': 1.5, # notice with half the overload 1/6th replicanth swapped servers '127.0.0.2': 1.1666666666666665, '127.0.0.3': 0.3333333333333333, } self.assertEqual(expected, {tier[2]: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 3}) def test_adding_second_zone(self): rb = ring.RingBuilder(3, 3, 1) # zone 0 server 127.0.0.1 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 2000}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sdb', 'weight': 2000}) # zone 0 server 127.0.0.2 rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sda', 'weight': 2000}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sdb', 'weight': 2000}) # zone 0 server 127.0.0.3 rb.add_dev({'id': 4, 'region': 0, 'zone': 0, 'ip': '127.0.0.3', 'port': 6000, 'device': 'sda', 'weight': 2000}) rb.add_dev({'id': 5, 'region': 0, 'zone': 0, 'ip': '127.0.0.3', 'port': 6000, 'device': 'sdb', 'weight': 2000}) # sanity, balanced and dispersed expected = { '127.0.0.1': 1.0, '127.0.0.2': 1.0, '127.0.0.3': 1.0, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier[2]: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 3}) wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier[2]: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 3}) self.assertEqual(rb.get_required_overload(), 0) # start adding a second zone # zone 1 server 127.0.1.1 rb.add_dev({'id': 6, 'region': 0, 'zone': 1, 'ip': '127.0.1.1', 'port': 6000, 'device': 'sda', 'weight': 100}) rb.add_dev({'id': 7, 'region': 0, 'zone': 1, 'ip': '127.0.1.1', 'port': 6000, 'device': 'sdb', 'weight': 100}) # zone 1 server 127.0.1.2 rb.add_dev({'id': 8, 'region': 0, 'zone': 1, 'ip': '127.0.1.2', 'port': 6000, 'device': 'sda', 'weight': 100}) rb.add_dev({'id': 9, 'region': 0, 'zone': 1, 'ip': '127.0.1.2', 'port': 6000, 'device': 'sdb', 'weight': 100}) # zone 1 server 127.0.1.3 rb.add_dev({'id': 10, 'region': 0, 'zone': 1, 'ip': '127.0.1.3', 'port': 6000, 'device': 'sda', 'weight': 100}) rb.add_dev({'id': 11, 'region': 0, 'zone': 1, 'ip': '127.0.1.3', 'port': 6000, 'device': 'sdb', 'weight': 100}) # this messes things up pretty royally expected = { '127.0.0.1': 0.9523809523809523, '127.0.0.2': 0.9523809523809523, '127.0.0.3': 0.9523809523809523, '127.0.1.1': 0.047619047619047616, '127.0.1.2': 0.047619047619047616, '127.0.1.3': 0.047619047619047616, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, {tier[2]: weighted for (tier, 
weighted) in weighted_replicas.items() if len(tier) == 3}) expected = { '127.0.0.1': 0.6666666666666667, '127.0.0.2': 0.6666666666666667, '127.0.0.3': 0.6666666666666667, '127.0.1.1': 0.3333333333333333, '127.0.1.2': 0.3333333333333333, '127.0.1.3': 0.3333333333333333, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, {tier[2]: weighted for (tier, weighted) in wanted_replicas.items() if len(tier) == 3}) # so dispersion would require these devices hold 6x more than # prescribed by weight, defeating any attempt at gradually # anything self.assertAlmostEqual(rb.get_required_overload(), 6.0) # so let's suppose we only allow for 10% overload rb.set_overload(0.10) target_replicas = rb._build_target_replicas_by_tier() expected = { # we expect servers in zone 0 to be between 0.952 and 0.666 '127.0.0.1': 0.9476190476190476, '127.0.0.2': 0.9476190476190476, '127.0.0.3': 0.9476190476190476, # we expect servers in zone 1 to be between 0.0476 and 0.333 # and in fact its ~10% increase (very little compared to 6x!) '127.0.1.1': 0.052380952380952375, '127.0.1.2': 0.052380952380952375, '127.0.1.3': 0.052380952380952375, } self.assertEqual(expected, {tier[2]: weighted for (tier, weighted) in target_replicas.items() if len(tier) == 3}) def test_gradual_replica_count(self): rb = ring.RingBuilder(3, 2.5, 1) rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda', 'weight': 2000}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sdb', 'weight': 2000}) rb.add_dev({'id': 2, 'region': 0, 'zone': 0, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sda', 'weight': 2000}) rb.add_dev({'id': 3, 'region': 0, 'zone': 0, 'ip': '127.0.0.2', 'port': 6000, 'device': 'sdb', 'weight': 2000}) expected = { 0: 0.625, 1: 0.625, 2: 0.625, 3: 0.625, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, { tier[3]: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 4}) wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, { tier[3]: wanted for (tier, wanted) in wanted_replicas.items() if len(tier) == 4}) self.assertEqual(rb.get_required_overload(), 0) # server 127.0.0.2 will have only one device rb.remove_dev(2) # server 127.0.0.1 has twice the capacity of 127.0.0.2 expected = { '127.0.0.1': 1.6666666666666667, '127.0.0.2': 0.8333333333333334, } weighted_replicas = rb._build_weighted_replicas_by_tier() self.assertEqual(expected, { tier[2]: weighted for (tier, weighted) in weighted_replicas.items() if len(tier) == 3}) # dispersion requirements extend only to whole replicas expected = { '127.0.0.1': 1.4999999999999998, '127.0.0.2': 1.0, } wanted_replicas = rb._build_wanted_replicas_by_tier() self.assertEqual(expected, { tier[2]: wanted for (tier, wanted) in wanted_replicas.items() if len(tier) == 3}) # 5/6ths to a whole replicanth is a 20% increase self.assertAlmostEqual(rb.get_required_overload(), 0.2) # so let's suppose we only allow for 10% overload rb.set_overload(0.1) target_replicas = rb._build_target_replicas_by_tier() expected = { '127.0.0.1': 1.5833333333333333, '127.0.0.2': 0.9166666666666667, } self.assertEqual(expected, { tier[2]: wanted for (tier, wanted) in target_replicas.items() if len(tier) == 3}) def test_perfect_four_zone_four_replica_bad_placement(self): rb = ring.RingBuilder(4, 4, 1) # this weight is sorta nuts, but it's really just to help the # weight_of_one_part hit a magic number where floats mess up # like they would on ring 
with a part power of 19 and 100's of # 1000's of units of weight. weight = 21739130434795e-11 # r0z0 rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': weight, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': weight, 'ip': '127.0.0.2', 'port': 10000, 'device': 'sdb'}) # r0z1 rb.add_dev({'id': 2, 'region': 0, 'zone': 1, 'weight': weight, 'ip': '127.0.1.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 3, 'region': 0, 'zone': 1, 'weight': weight, 'ip': '127.0.1.2', 'port': 10000, 'device': 'sdb'}) # r1z0 rb.add_dev({'id': 4, 'region': 1, 'zone': 0, 'weight': weight, 'ip': '127.1.0.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 5, 'region': 1, 'zone': 0, 'weight': weight, 'ip': '127.1.0.2', 'port': 10000, 'device': 'sdb'}) # r1z1 rb.add_dev({'id': 6, 'region': 1, 'zone': 1, 'weight': weight, 'ip': '127.1.1.1', 'port': 10000, 'device': 'sda'}) rb.add_dev({'id': 7, 'region': 1, 'zone': 1, 'weight': weight, 'ip': '127.1.1.2', 'port': 10000, 'device': 'sdb'}) # the replica plan is sound expectations = { # tier_len => expected replicas 1: { (0,): 2.0, (1,): 2.0, }, 2: { (0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 1.0, } } wr = rb._build_replica_plan() for tier_len, expected in expectations.items(): self.assertEqual(expected, {t: r['max'] for (t, r) in wr.items() if len(t) == tier_len}) # even thought a naive ceil of weights is surprisingly wrong expectations = { # tier_len => expected replicas 1: { (0,): 3.0, (1,): 3.0, }, 2: { (0, 0): 2.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 2.0, } } wr = rb._build_weighted_replicas_by_tier() for tier_len, expected in expectations.items(): self.assertEqual(expected, {t: ceil(r) for (t, r) in wr.items() if len(t) == tier_len}) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_daemon.py0000664000567000056710000000663713024044354022133 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
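# (Descriptive note added for orientation; not part of the original module.)
# These tests cover swift.common.daemon: Daemon.run() dispatching to
# run_once()/run_forever(), and run_daemon() reading a [section] from a
# config file, setting up logging and privileges, and turning a
# KeyboardInterrupt into a logged "user quit". The usage pattern under test
# is the one shown below, e.g.:
#   daemon.run_daemon(MyDaemon, conf_file)             # run forever
#   daemon.run_daemon(MyDaemon, conf_file, once=True)  # single pass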
# TODO(clayg): Test kill_children signal handlers

import os
from six import StringIO
import unittest
from getpass import getuser
import logging
from test.unit import tmpfile
from mock import patch

from swift.common import daemon, utils


class MyDaemon(daemon.Daemon):

    def __init__(self, conf):
        self.conf = conf
        self.logger = utils.get_logger(None, 'server', log_route='server')
        MyDaemon.forever_called = False
        MyDaemon.once_called = False

    def run_forever(self):
        MyDaemon.forever_called = True

    def run_once(self):
        MyDaemon.once_called = True

    def run_raise(self):
        raise OSError

    def run_quit(self):
        raise KeyboardInterrupt


class TestDaemon(unittest.TestCase):

    def test_create(self):
        d = daemon.Daemon({})
        self.assertEqual(d.conf, {})
        self.assertTrue(isinstance(d.logger, utils.LogAdapter))

    def test_stubs(self):
        d = daemon.Daemon({})
        self.assertRaises(NotImplementedError, d.run_once)
        self.assertRaises(NotImplementedError, d.run_forever)


class TestRunDaemon(unittest.TestCase):

    def setUp(self):
        utils.HASH_PATH_SUFFIX = 'endcap'
        utils.HASH_PATH_PREFIX = 'startcap'
        utils.drop_privileges = lambda *args: None
        utils.capture_stdio = lambda *args: None

    def tearDown(self):
        reload(utils)

    def test_run(self):
        d = MyDaemon({})
        self.assertFalse(MyDaemon.forever_called)
        self.assertFalse(MyDaemon.once_called)
        # test default
        d.run()
        self.assertEqual(d.forever_called, True)
        # test once
        d.run(once=True)
        self.assertEqual(d.once_called, True)

    def test_run_daemon(self):
        sample_conf = "[my-daemon]\nuser = %s\n" % getuser()
        with tmpfile(sample_conf) as conf_file:
            with patch.dict('os.environ', {'TZ': ''}):
                daemon.run_daemon(MyDaemon, conf_file)
                self.assertEqual(MyDaemon.forever_called, True)
                self.assertTrue(os.environ['TZ'] is not '')
            daemon.run_daemon(MyDaemon, conf_file, once=True)
            self.assertEqual(MyDaemon.once_called, True)

            # test raise in daemon code
            MyDaemon.run_once = MyDaemon.run_raise
            self.assertRaises(OSError, daemon.run_daemon, MyDaemon,
                              conf_file, once=True)

            # test user quit
            MyDaemon.run_forever = MyDaemon.run_quit
            sio = StringIO()
            logger = logging.getLogger('server')
            logger.addHandler(logging.StreamHandler(sio))
            logger = utils.get_logger(None, 'server', log_route='server')
            daemon.run_daemon(MyDaemon, conf_file, logger=logger)
            self.assertTrue('user quit' in sio.getvalue().lower())


if __name__ == '__main__':
    unittest.main()
swift-2.7.1/test/unit/common/test_db.py0000664000567000056710000014624613024044354021256 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
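
# Illustrative note, not part of the original module: the TestChexor cases
# below exercise swift.common.db.chexor(), a rolling checksum that must be
# order-independent; folding the same set of (name, timestamp) records into
# the hash in any order has to yield the same value, which is why
# test_chexor shuffles the records twice and compares the results.  A
# minimal sketch of that idea, assuming an MD5 digest of "name-timestamp"
# XORed into the running value (the real chexor() may differ in detail):
#
#     import hashlib
#
#     def chexor_sketch(old_hex, name, timestamp):
#         new_hex = hashlib.md5(
#             ('%s-%s' % (name, timestamp)).encode('utf8')).hexdigest()
#         return '%032x' % (int(old_hex, 16) ^ int(new_hex, 16))
#
# Because XOR is commutative and associative, insertion order cannot change
# the final hash.
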
"""Tests for swift.common.db""" import os import sys import unittest from tempfile import mkdtemp from shutil import rmtree, copy from uuid import uuid4 import six.moves.cPickle as pickle import json import sqlite3 import itertools import time import random from mock import patch, MagicMock from eventlet.timeout import Timeout from six.moves import range import swift.common.db from swift.common.constraints import \ MAX_META_VALUE_LENGTH, MAX_META_COUNT, MAX_META_OVERALL_SIZE from swift.common.db import chexor, dict_factory, get_db_connection, \ DatabaseBroker, DatabaseConnectionError, DatabaseAlreadyExists, \ GreenDBConnection, PICKLE_PROTOCOL from swift.common.utils import normalize_timestamp, mkdirs, Timestamp from swift.common.exceptions import LockTimeout from swift.common.swob import HTTPException from test.unit import with_tempdir class TestDatabaseConnectionError(unittest.TestCase): def test_str(self): err = \ DatabaseConnectionError(':memory:', 'No valid database connection') self.assertTrue(':memory:' in str(err)) self.assertTrue('No valid database connection' in str(err)) err = DatabaseConnectionError(':memory:', 'No valid database connection', timeout=1357) self.assertTrue(':memory:' in str(err)) self.assertTrue('No valid database connection' in str(err)) self.assertTrue('1357' in str(err)) class TestDictFactory(unittest.TestCase): def test_normal_case(self): conn = sqlite3.connect(':memory:') conn.execute('CREATE TABLE test (one TEXT, two INTEGER)') conn.execute('INSERT INTO test (one, two) VALUES ("abc", 123)') conn.execute('INSERT INTO test (one, two) VALUES ("def", 456)') conn.commit() curs = conn.execute('SELECT one, two FROM test') self.assertEqual(dict_factory(curs, next(curs)), {'one': 'abc', 'two': 123}) self.assertEqual(dict_factory(curs, next(curs)), {'one': 'def', 'two': 456}) class TestChexor(unittest.TestCase): def test_normal_case(self): self.assertEqual( chexor('d41d8cd98f00b204e9800998ecf8427e', 'new name', normalize_timestamp(1)), '4f2ea31ac14d4273fe32ba08062b21de') def test_invalid_old_hash(self): self.assertRaises(ValueError, chexor, 'oldhash', 'name', normalize_timestamp(1)) def test_no_name(self): self.assertRaises(Exception, chexor, 'd41d8cd98f00b204e9800998ecf8427e', None, normalize_timestamp(1)) def test_chexor(self): ts = (normalize_timestamp(ts) for ts in itertools.count(int(time.time()))) objects = [ ('frank', next(ts)), ('bob', next(ts)), ('tom', next(ts)), ('frank', next(ts)), ('tom', next(ts)), ('bob', next(ts)), ] hash_ = '0' random.shuffle(objects) for obj in objects: hash_ = chexor(hash_, *obj) other_hash = '0' random.shuffle(objects) for obj in objects: other_hash = chexor(other_hash, *obj) self.assertEqual(hash_, other_hash) class TestGreenDBConnection(unittest.TestCase): def test_execute_when_locked(self): # This test is dependent on the code under test calling execute and # commit as sqlite3.Cursor.execute in a subclass. 
        # The assertions below check the retry behaviour, not just the error:
        # with factory=GreenDBConnection the "database is locked" error is
        # not surfaced directly; the connection keeps re-issuing the same
        # execute() call (hence the identical entries in call_args_list)
        # until the 0.1 second timeout expires and a Timeout is raised.
        class InterceptCursor(sqlite3.Cursor):
            pass

        db_error = sqlite3.OperationalError('database is locked')
        InterceptCursor.execute = MagicMock(side_effect=db_error)
        with patch('sqlite3.Cursor', new=InterceptCursor):
            conn = sqlite3.connect(':memory:', check_same_thread=False,
                                   factory=GreenDBConnection, timeout=0.1)
            self.assertRaises(Timeout, conn.execute, 'select 1')
            self.assertTrue(InterceptCursor.execute.called)
            self.assertEqual(InterceptCursor.execute.call_args_list,
                             list((InterceptCursor.execute.call_args,) *
                                  InterceptCursor.execute.call_count))

    def test_commit_when_locked(self):
        # This test is dependent on the code under test calling commit as
        # sqlite3.Connection.commit in a subclass.
        class InterceptConnection(sqlite3.Connection):
            pass

        db_error = sqlite3.OperationalError('database is locked')
        InterceptConnection.commit = MagicMock(side_effect=db_error)
        with patch('sqlite3.Connection', new=InterceptConnection):
            conn = sqlite3.connect(':memory:', check_same_thread=False,
                                   factory=GreenDBConnection, timeout=0.1)
            self.assertRaises(Timeout, conn.commit)
            self.assertTrue(InterceptConnection.commit.called)
            self.assertEqual(InterceptConnection.commit.call_args_list,
                             list((InterceptConnection.commit.call_args,) *
                                  InterceptConnection.commit.call_count))


class TestGetDBConnection(unittest.TestCase):

    def test_normal_case(self):
        conn = get_db_connection(':memory:')
        self.assertTrue(hasattr(conn, 'execute'))

    def test_invalid_path(self):
        self.assertRaises(DatabaseConnectionError, get_db_connection,
                          'invalid database path / name')

    def test_locked_db(self):
        # This test is dependent on the code under test calling execute and
        # commit as sqlite3.Cursor.execute in a subclass.
        class InterceptCursor(sqlite3.Cursor):
            pass

        db_error = sqlite3.OperationalError('database is locked')
        mock_db_cmd = MagicMock(side_effect=db_error)
        InterceptCursor.execute = mock_db_cmd

        with patch('sqlite3.Cursor', new=InterceptCursor):
            self.assertRaises(Timeout, get_db_connection, ':memory:',
                              timeout=0.1)
            self.assertTrue(mock_db_cmd.called)
            self.assertEqual(mock_db_cmd.call_args_list,
                             list((mock_db_cmd.call_args,) *
                                  mock_db_cmd.call_count))


class ExampleBroker(DatabaseBroker):
    """
    Concrete enough implementation of a DatabaseBroker.
    """

    db_type = 'test'
    db_contains_type = 'test'
    db_reclaim_timestamp = 'created_at'

    def _initialize(self, conn, put_timestamp, **kwargs):
        if not self.account:
            raise ValueError(
                'Attempting to create a new database with no account set')
        conn.executescript('''
            CREATE TABLE test_stat (
                account TEXT,
                test_count INTEGER DEFAULT 0,
                created_at TEXT,
                put_timestamp TEXT DEFAULT '0',
                delete_timestamp TEXT DEFAULT '0',
                hash TEXT default '00000000000000000000000000000000',
                id TEXT,
                status_changed_at TEXT DEFAULT '0',
                metadata TEXT DEFAULT ''
            );
            CREATE TABLE test (
                ROWID INTEGER PRIMARY KEY AUTOINCREMENT,
                name TEXT,
                created_at TEXT,
                deleted INTEGER DEFAULT 0
            );
            CREATE TRIGGER test_insert AFTER INSERT ON test
            BEGIN
                UPDATE test_stat
                SET test_count = test_count + (1 - new.deleted);
            END;
            CREATE TRIGGER test_delete AFTER DELETE ON test
            BEGIN
                UPDATE test_stat
                SET test_count = test_count - (1 - old.deleted);
            END;
        ''')
        conn.execute("""
        INSERT INTO test_stat (
            account, created_at, id, put_timestamp, status_changed_at)
        VALUES (?, ?, ?, ?, ?);
        """, (self.account, Timestamp(time.time()).internal, str(uuid4()),
              put_timestamp, put_timestamp))

    def merge_items(self, item_list):
        with self.get() as conn:
            for rec in item_list:
                conn.execute(
                    'DELETE FROM test WHERE name = ?
and created_at < ?', ( rec['name'], rec['created_at'])) if not conn.execute( 'SELECT 1 FROM test WHERE name = ?', (rec['name'],)).fetchall(): conn.execute(''' INSERT INTO test (name, created_at, deleted) VALUES (?, ?, ?)''', ( rec['name'], rec['created_at'], rec['deleted'])) conn.commit() def _commit_puts_load(self, item_list, entry): (name, timestamp, deleted) = pickle.loads(entry.decode('base64')) item_list.append({ 'name': name, 'created_at': timestamp, 'deleted': deleted, }) def _load_item(self, name, timestamp, deleted): if self.db_file == ':memory:': record = { 'name': name, 'created_at': timestamp, 'deleted': deleted, } self.merge_items([record]) return with open(self.pending_file, 'a+b') as fp: fp.write(':') fp.write(pickle.dumps( (name, timestamp, deleted), protocol=PICKLE_PROTOCOL).encode('base64')) fp.flush() def put_test(self, name, timestamp): self._load_item(name, timestamp, 0) def delete_test(self, name, timestamp): self._load_item(name, timestamp, 1) def _delete_db(self, conn, timestamp): conn.execute(""" UPDATE test_stat SET delete_timestamp = ?, status_changed_at = ? WHERE delete_timestamp < ? """, (timestamp, timestamp, timestamp)) def _is_deleted(self, conn): info = conn.execute('SELECT * FROM test_stat').fetchone() return (info['test_count'] in (None, '', 0, '0')) and \ (Timestamp(info['delete_timestamp']) > Timestamp(info['put_timestamp'])) class TestExampleBroker(unittest.TestCase): """ Tests that use the mostly Concrete enough ExampleBroker to exercise some of the abstract methods on DatabaseBroker. """ broker_class = ExampleBroker policy = 0 def setUp(self): self.ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) def test_delete_db(self): broker = self.broker_class(':memory:', account='a', container='c') broker.initialize(next(self.ts)) broker.delete_db(next(self.ts)) self.assertTrue(broker.is_deleted()) def test_merge_timestamps_simple_delete(self): put_timestamp = next(self.ts) broker = self.broker_class(':memory:', account='a', container='c') broker.initialize(put_timestamp) created_at = broker.get_info()['created_at'] broker.merge_timestamps(created_at, put_timestamp, '0') info = broker.get_info() self.assertEqual(info['created_at'], created_at) self.assertEqual(info['put_timestamp'], put_timestamp) self.assertEqual(info['delete_timestamp'], '0') self.assertEqual(info['status_changed_at'], put_timestamp) # delete delete_timestamp = next(self.ts) broker.merge_timestamps(created_at, put_timestamp, delete_timestamp) self.assertTrue(broker.is_deleted()) info = broker.get_info() self.assertEqual(info['created_at'], created_at) self.assertEqual(info['put_timestamp'], put_timestamp) self.assertEqual(info['delete_timestamp'], delete_timestamp) self.assertTrue(info['status_changed_at'] > Timestamp(put_timestamp)) def put_item(self, broker, timestamp): broker.put_test('test', timestamp) def delete_item(self, broker, timestamp): broker.delete_test('test', timestamp) def test_merge_timestamps_delete_with_objects(self): put_timestamp = next(self.ts) broker = self.broker_class(':memory:', account='a', container='c') broker.initialize(put_timestamp, storage_policy_index=int(self.policy)) created_at = broker.get_info()['created_at'] broker.merge_timestamps(created_at, put_timestamp, '0') info = broker.get_info() self.assertEqual(info['created_at'], created_at) self.assertEqual(info['put_timestamp'], put_timestamp) self.assertEqual(info['delete_timestamp'], '0') self.assertEqual(info['status_changed_at'], put_timestamp) # add object self.put_item(broker, 
next(self.ts)) self.assertEqual(broker.get_info()[ '%s_count' % broker.db_contains_type], 1) # delete delete_timestamp = next(self.ts) broker.merge_timestamps(created_at, put_timestamp, delete_timestamp) self.assertFalse(broker.is_deleted()) info = broker.get_info() self.assertEqual(info['created_at'], created_at) self.assertEqual(info['put_timestamp'], put_timestamp) self.assertEqual(info['delete_timestamp'], delete_timestamp) # status is unchanged self.assertEqual(info['status_changed_at'], put_timestamp) # count is causing status to hold on self.delete_item(broker, next(self.ts)) self.assertEqual(broker.get_info()[ '%s_count' % broker.db_contains_type], 0) self.assertTrue(broker.is_deleted()) def test_merge_timestamps_simple_recreate(self): put_timestamp = next(self.ts) broker = self.broker_class(':memory:', account='a', container='c') broker.initialize(put_timestamp, storage_policy_index=int(self.policy)) virgin_status_changed_at = broker.get_info()['status_changed_at'] created_at = broker.get_info()['created_at'] delete_timestamp = next(self.ts) broker.merge_timestamps(created_at, put_timestamp, delete_timestamp) self.assertTrue(broker.is_deleted()) info = broker.get_info() self.assertEqual(info['created_at'], created_at) self.assertEqual(info['put_timestamp'], put_timestamp) self.assertEqual(info['delete_timestamp'], delete_timestamp) orig_status_changed_at = info['status_changed_at'] self.assertTrue(orig_status_changed_at > Timestamp(virgin_status_changed_at)) # recreate recreate_timestamp = next(self.ts) status_changed_at = time.time() with patch('swift.common.db.time.time', new=lambda: status_changed_at): broker.merge_timestamps(created_at, recreate_timestamp, '0') self.assertFalse(broker.is_deleted()) info = broker.get_info() self.assertEqual(info['created_at'], created_at) self.assertEqual(info['put_timestamp'], recreate_timestamp) self.assertEqual(info['delete_timestamp'], delete_timestamp) self.assertTrue(info['status_changed_at'], status_changed_at) def test_merge_timestamps_recreate_with_objects(self): put_timestamp = next(self.ts) broker = self.broker_class(':memory:', account='a', container='c') broker.initialize(put_timestamp, storage_policy_index=int(self.policy)) created_at = broker.get_info()['created_at'] # delete delete_timestamp = next(self.ts) broker.merge_timestamps(created_at, put_timestamp, delete_timestamp) self.assertTrue(broker.is_deleted()) info = broker.get_info() self.assertEqual(info['created_at'], created_at) self.assertEqual(info['put_timestamp'], put_timestamp) self.assertEqual(info['delete_timestamp'], delete_timestamp) orig_status_changed_at = info['status_changed_at'] self.assertTrue(Timestamp(orig_status_changed_at) >= Timestamp(put_timestamp)) # add object self.put_item(broker, next(self.ts)) count_key = '%s_count' % broker.db_contains_type self.assertEqual(broker.get_info()[count_key], 1) self.assertFalse(broker.is_deleted()) # recreate recreate_timestamp = next(self.ts) broker.merge_timestamps(created_at, recreate_timestamp, '0') self.assertFalse(broker.is_deleted()) info = broker.get_info() self.assertEqual(info['created_at'], created_at) self.assertEqual(info['put_timestamp'], recreate_timestamp) self.assertEqual(info['delete_timestamp'], delete_timestamp) self.assertEqual(info['status_changed_at'], orig_status_changed_at) # count is not causing status to hold on self.delete_item(broker, next(self.ts)) self.assertFalse(broker.is_deleted()) def test_merge_timestamps_update_put_no_status_change(self): put_timestamp = next(self.ts) broker = 
self.broker_class(':memory:', account='a', container='c') broker.initialize(put_timestamp, storage_policy_index=int(self.policy)) info = broker.get_info() orig_status_changed_at = info['status_changed_at'] created_at = info['created_at'] new_put_timestamp = next(self.ts) broker.merge_timestamps(created_at, new_put_timestamp, '0') info = broker.get_info() self.assertEqual(new_put_timestamp, info['put_timestamp']) self.assertEqual(orig_status_changed_at, info['status_changed_at']) def test_merge_timestamps_update_delete_no_status_change(self): put_timestamp = next(self.ts) broker = self.broker_class(':memory:', account='a', container='c') broker.initialize(put_timestamp, storage_policy_index=int(self.policy)) created_at = broker.get_info()['created_at'] broker.merge_timestamps(created_at, put_timestamp, next(self.ts)) orig_status_changed_at = broker.get_info()['status_changed_at'] new_delete_timestamp = next(self.ts) broker.merge_timestamps(created_at, put_timestamp, new_delete_timestamp) info = broker.get_info() self.assertEqual(new_delete_timestamp, info['delete_timestamp']) self.assertEqual(orig_status_changed_at, info['status_changed_at']) def test_get_max_row(self): broker = self.broker_class(':memory:', account='a', container='c') broker.initialize(next(self.ts), storage_policy_index=int(self.policy)) self.assertEqual(-1, broker.get_max_row()) self.put_item(broker, next(self.ts)) self.assertEqual(1, broker.get_max_row()) self.delete_item(broker, next(self.ts)) self.assertEqual(2, broker.get_max_row()) self.put_item(broker, next(self.ts)) self.assertEqual(3, broker.get_max_row()) def test_get_info(self): broker = self.broker_class(':memory:', account='test', container='c') created_at = time.time() with patch('swift.common.db.time.time', new=lambda: created_at): broker.initialize(Timestamp(1).internal, storage_policy_index=int(self.policy)) info = broker.get_info() count_key = '%s_count' % broker.db_contains_type expected = { count_key: 0, 'created_at': Timestamp(created_at).internal, 'put_timestamp': Timestamp(1).internal, 'status_changed_at': Timestamp(1).internal, 'delete_timestamp': '0', } for k, v in expected.items(): self.assertEqual(info[k], v, 'mismatch for %s, %s != %s' % ( k, info[k], v)) def test_get_raw_metadata(self): broker = self.broker_class(':memory:', account='test', container='c') broker.initialize(Timestamp(0).internal, storage_policy_index=int(self.policy)) self.assertEqual(broker.metadata, {}) self.assertEqual(broker.get_raw_metadata(), '') key = u'test\u062a'.encode('utf-8') value = u'value\u062a' metadata = { key: [value, Timestamp(1).internal] } broker.update_metadata(metadata) self.assertEqual(broker.metadata, metadata) self.assertEqual(broker.get_raw_metadata(), json.dumps(metadata)) def test_put_timestamp(self): broker = self.broker_class(':memory:', account='a', container='c') orig_put_timestamp = next(self.ts) broker.initialize(orig_put_timestamp, storage_policy_index=int(self.policy)) self.assertEqual(broker.get_info()['put_timestamp'], orig_put_timestamp) # put_timestamp equal - no change broker.update_put_timestamp(orig_put_timestamp) self.assertEqual(broker.get_info()['put_timestamp'], orig_put_timestamp) # put_timestamp newer - gets newer newer_put_timestamp = next(self.ts) broker.update_put_timestamp(newer_put_timestamp) self.assertEqual(broker.get_info()['put_timestamp'], newer_put_timestamp) # put_timestamp older - no change broker.update_put_timestamp(orig_put_timestamp) self.assertEqual(broker.get_info()['put_timestamp'], newer_put_timestamp) def 
test_status_changed_at(self): broker = self.broker_class(':memory:', account='test', container='c') put_timestamp = next(self.ts) created_at = time.time() with patch('swift.common.db.time.time', new=lambda: created_at): broker.initialize(put_timestamp, storage_policy_index=int(self.policy)) self.assertEqual(broker.get_info()['status_changed_at'], put_timestamp) self.assertEqual(broker.get_info()['created_at'], Timestamp(created_at).internal) status_changed_at = next(self.ts) broker.update_status_changed_at(status_changed_at) self.assertEqual(broker.get_info()['status_changed_at'], status_changed_at) # save the old and get a new status_changed_at old_status_changed_at, status_changed_at = \ status_changed_at, next(self.ts) broker.update_status_changed_at(status_changed_at) self.assertEqual(broker.get_info()['status_changed_at'], status_changed_at) # status changed at won't go backwards... broker.update_status_changed_at(old_status_changed_at) self.assertEqual(broker.get_info()['status_changed_at'], status_changed_at) def test_get_syncs(self): broker = self.broker_class(':memory:', account='a', container='c') broker.initialize(Timestamp(time.time()).internal, storage_policy_index=int(self.policy)) self.assertEqual([], broker.get_syncs()) broker.merge_syncs([{'sync_point': 1, 'remote_id': 'remote1'}]) self.assertEqual([{'sync_point': 1, 'remote_id': 'remote1'}], broker.get_syncs()) self.assertEqual([], broker.get_syncs(incoming=False)) broker.merge_syncs([{'sync_point': 2, 'remote_id': 'remote2'}], incoming=False) self.assertEqual([{'sync_point': 2, 'remote_id': 'remote2'}], broker.get_syncs(incoming=False)) @with_tempdir def test_commit_pending(self, tempdir): broker = self.broker_class(os.path.join(tempdir, 'test.db'), account='a', container='c') broker.initialize(next(self.ts), storage_policy_index=int(self.policy)) self.put_item(broker, next(self.ts)) qry = 'select * from %s_stat' % broker.db_type with broker.get() as conn: rows = [dict(x) for x in conn.execute(qry)] info = rows[0] count_key = '%s_count' % broker.db_contains_type self.assertEqual(0, info[count_key]) broker.get_info() self.assertEqual(1, broker.get_info()[count_key]) class TestDatabaseBroker(unittest.TestCase): def setUp(self): self.testdir = mkdtemp() def tearDown(self): rmtree(self.testdir, ignore_errors=1) def test_DB_PREALLOCATION_setting(self): u = uuid4().hex b = DatabaseBroker(u) swift.common.db.DB_PREALLOCATION = False b._preallocate() swift.common.db.DB_PREALLOCATION = True self.assertRaises(OSError, b._preallocate) def test_memory_db_init(self): broker = DatabaseBroker(':memory:') self.assertEqual(broker.db_file, ':memory:') self.assertRaises(AttributeError, broker.initialize, normalize_timestamp('0')) def test_disk_db_init(self): db_file = os.path.join(self.testdir, '1.db') broker = DatabaseBroker(db_file) self.assertEqual(broker.db_file, db_file) self.assertTrue(broker.conn is None) def test_disk_preallocate(self): test_size = [-1] def fallocate_stub(fd, size): test_size[0] = size with patch('swift.common.db.fallocate', fallocate_stub): db_file = os.path.join(self.testdir, 'pre.db') # Write 1 byte and hope that the fs will allocate less than 1 MB. f = open(db_file, "w") f.write('@') f.close() b = DatabaseBroker(db_file) b._preallocate() # We only wrote 1 byte, so we should end with the 1st step or 1 MB. 
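            # fallocate_stub above only records the size that _preallocate()
            # asked for, so the assertion below verifies the first
            # preallocation step (1 MiB for a nearly empty database file)
            # without reserving any real disk space.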
self.assertEqual(test_size[0], 1024 * 1024) def test_initialize(self): self.assertRaises(AttributeError, DatabaseBroker(':memory:').initialize, normalize_timestamp('1')) stub_dict = {} def stub(*args, **kwargs): for key in stub_dict.keys(): del stub_dict[key] stub_dict['args'] = args for key, value in kwargs.items(): stub_dict[key] = value broker = DatabaseBroker(':memory:') broker._initialize = stub broker.initialize(normalize_timestamp('1')) self.assertTrue(hasattr(stub_dict['args'][0], 'execute')) self.assertEqual(stub_dict['args'][1], '0000000001.00000') with broker.get() as conn: conn.execute('SELECT * FROM outgoing_sync') conn.execute('SELECT * FROM incoming_sync') broker = DatabaseBroker(os.path.join(self.testdir, '1.db')) broker._initialize = stub broker.initialize(normalize_timestamp('1')) self.assertTrue(hasattr(stub_dict['args'][0], 'execute')) self.assertEqual(stub_dict['args'][1], '0000000001.00000') with broker.get() as conn: conn.execute('SELECT * FROM outgoing_sync') conn.execute('SELECT * FROM incoming_sync') broker = DatabaseBroker(os.path.join(self.testdir, '1.db')) broker._initialize = stub self.assertRaises(DatabaseAlreadyExists, broker.initialize, normalize_timestamp('1')) def test_delete_db(self): def init_stub(conn, put_timestamp, **kwargs): conn.execute('CREATE TABLE test (one TEXT)') conn.execute('CREATE TABLE test_stat (id TEXT)') conn.execute('INSERT INTO test_stat (id) VALUES (?)', (str(uuid4),)) conn.execute('INSERT INTO test (one) VALUES ("1")') conn.commit() stub_called = [False] def delete_stub(*a, **kw): stub_called[0] = True broker = DatabaseBroker(':memory:') broker.db_type = 'test' broker._initialize = init_stub # Initializes a good broker for us broker.initialize(normalize_timestamp('1')) self.assertTrue(broker.conn is not None) broker._delete_db = delete_stub stub_called[0] = False broker.delete_db('2') self.assertTrue(stub_called[0]) broker = DatabaseBroker(os.path.join(self.testdir, '1.db')) broker.db_type = 'test' broker._initialize = init_stub broker.initialize(normalize_timestamp('1')) broker._delete_db = delete_stub stub_called[0] = False broker.delete_db('2') self.assertTrue(stub_called[0]) # ensure that metadata was cleared m2 = broker.metadata self.assertTrue(not any(v[0] for v in m2.itervalues())) self.assertTrue(all(v[1] == normalize_timestamp('2') for v in m2.itervalues())) def test_get(self): broker = DatabaseBroker(':memory:') got_exc = False try: with broker.get() as conn: conn.execute('SELECT 1') except Exception: got_exc = True broker = DatabaseBroker(os.path.join(self.testdir, '1.db')) got_exc = False try: with broker.get() as conn: conn.execute('SELECT 1') except Exception: got_exc = True self.assertTrue(got_exc) def stub(*args, **kwargs): pass broker._initialize = stub broker.initialize(normalize_timestamp('1')) with broker.get() as conn: conn.execute('CREATE TABLE test (one TEXT)') try: with broker.get() as conn: conn.execute('INSERT INTO test (one) VALUES ("1")') raise Exception('test') conn.commit() except Exception: pass broker = DatabaseBroker(os.path.join(self.testdir, '1.db')) with broker.get() as conn: self.assertEqual( [r[0] for r in conn.execute('SELECT * FROM test')], []) with broker.get() as conn: conn.execute('INSERT INTO test (one) VALUES ("1")') conn.commit() broker = DatabaseBroker(os.path.join(self.testdir, '1.db')) with broker.get() as conn: self.assertEqual( [r[0] for r in conn.execute('SELECT * FROM test')], ['1']) dbpath = os.path.join(self.testdir, 'dev', 'dbs', 'par', 'pre', 'db') mkdirs(dbpath) qpath = 
os.path.join(self.testdir, 'dev', 'quarantined', 'tests', 'db') with patch('swift.common.db.renamer', lambda a, b, fsync: b): # Test malformed database copy(os.path.join(os.path.dirname(__file__), 'malformed_example.db'), os.path.join(dbpath, '1.db')) broker = DatabaseBroker(os.path.join(dbpath, '1.db')) broker.db_type = 'test' exc = None try: with broker.get() as conn: conn.execute('SELECT * FROM test') except Exception as err: exc = err self.assertEqual( str(exc), 'Quarantined %s to %s due to malformed database' % (dbpath, qpath)) # Test corrupted database copy(os.path.join(os.path.dirname(__file__), 'corrupted_example.db'), os.path.join(dbpath, '1.db')) broker = DatabaseBroker(os.path.join(dbpath, '1.db')) broker.db_type = 'test' exc = None try: with broker.get() as conn: conn.execute('SELECT * FROM test') except Exception as err: exc = err self.assertEqual( str(exc), 'Quarantined %s to %s due to corrupted database' % (dbpath, qpath)) def test_lock(self): broker = DatabaseBroker(os.path.join(self.testdir, '1.db'), timeout=.1) got_exc = False try: with broker.lock(): pass except Exception: got_exc = True self.assertTrue(got_exc) def stub(*args, **kwargs): pass broker._initialize = stub broker.initialize(normalize_timestamp('1')) with broker.lock(): pass with broker.lock(): pass broker2 = DatabaseBroker(os.path.join(self.testdir, '1.db'), timeout=.1) broker2._initialize = stub with broker.lock(): got_exc = False try: with broker2.lock(): pass except LockTimeout: got_exc = True self.assertTrue(got_exc) try: with broker.lock(): raise Exception('test') except Exception: pass with broker.lock(): pass def test_newid(self): broker = DatabaseBroker(':memory:') broker.db_type = 'test' broker.db_contains_type = 'test' uuid1 = str(uuid4()) def _initialize(conn, timestamp, **kwargs): conn.execute('CREATE TABLE test (one TEXT)') conn.execute('CREATE TABLE test_stat (id TEXT)') conn.execute('INSERT INTO test_stat (id) VALUES (?)', (uuid1,)) conn.commit() broker._initialize = _initialize broker.initialize(normalize_timestamp('1')) uuid2 = str(uuid4()) broker.newid(uuid2) with broker.get() as conn: uuids = [r[0] for r in conn.execute('SELECT * FROM test_stat')] self.assertEqual(len(uuids), 1) self.assertNotEqual(uuids[0], uuid1) uuid1 = uuids[0] points = [(r[0], r[1]) for r in conn.execute( 'SELECT sync_point, ' 'remote_id FROM incoming_sync WHERE remote_id = ?', (uuid2,))] self.assertEqual(len(points), 1) self.assertEqual(points[0][0], -1) self.assertEqual(points[0][1], uuid2) conn.execute('INSERT INTO test (one) VALUES ("1")') conn.commit() uuid3 = str(uuid4()) broker.newid(uuid3) with broker.get() as conn: uuids = [r[0] for r in conn.execute('SELECT * FROM test_stat')] self.assertEqual(len(uuids), 1) self.assertNotEqual(uuids[0], uuid1) uuid1 = uuids[0] points = [(r[0], r[1]) for r in conn.execute( 'SELECT sync_point, ' 'remote_id FROM incoming_sync WHERE remote_id = ?', (uuid3,))] self.assertEqual(len(points), 1) self.assertEqual(points[0][1], uuid3) broker.newid(uuid2) with broker.get() as conn: uuids = [r[0] for r in conn.execute('SELECT * FROM test_stat')] self.assertEqual(len(uuids), 1) self.assertNotEqual(uuids[0], uuid1) points = [(r[0], r[1]) for r in conn.execute( 'SELECT sync_point, ' 'remote_id FROM incoming_sync WHERE remote_id = ?', (uuid2,))] self.assertEqual(len(points), 1) self.assertEqual(points[0][1], uuid2) def test_get_items_since(self): broker = DatabaseBroker(':memory:') broker.db_type = 'test' broker.db_contains_type = 'test' def _initialize(conn, timestamp, **kwargs): 
conn.execute('CREATE TABLE test (one TEXT)') conn.execute('INSERT INTO test (one) VALUES ("1")') conn.execute('INSERT INTO test (one) VALUES ("2")') conn.execute('INSERT INTO test (one) VALUES ("3")') conn.commit() broker._initialize = _initialize broker.initialize(normalize_timestamp('1')) self.assertEqual(broker.get_items_since(-1, 10), [{'one': '1'}, {'one': '2'}, {'one': '3'}]) self.assertEqual(broker.get_items_since(-1, 2), [{'one': '1'}, {'one': '2'}]) self.assertEqual(broker.get_items_since(1, 2), [{'one': '2'}, {'one': '3'}]) self.assertEqual(broker.get_items_since(3, 2), []) self.assertEqual(broker.get_items_since(999, 2), []) def test_get_sync(self): broker = DatabaseBroker(':memory:') broker.db_type = 'test' broker.db_contains_type = 'test' uuid1 = str(uuid4()) def _initialize(conn, timestamp, **kwargs): conn.execute('CREATE TABLE test (one TEXT)') conn.execute('CREATE TABLE test_stat (id TEXT)') conn.execute('INSERT INTO test_stat (id) VALUES (?)', (uuid1,)) conn.execute('INSERT INTO test (one) VALUES ("1")') conn.commit() pass broker._initialize = _initialize broker.initialize(normalize_timestamp('1')) uuid2 = str(uuid4()) self.assertEqual(broker.get_sync(uuid2), -1) broker.newid(uuid2) self.assertEqual(broker.get_sync(uuid2), 1) uuid3 = str(uuid4()) self.assertEqual(broker.get_sync(uuid3), -1) with broker.get() as conn: conn.execute('INSERT INTO test (one) VALUES ("2")') conn.commit() broker.newid(uuid3) self.assertEqual(broker.get_sync(uuid2), 1) self.assertEqual(broker.get_sync(uuid3), 2) self.assertEqual(broker.get_sync(uuid2, incoming=False), -1) self.assertEqual(broker.get_sync(uuid3, incoming=False), -1) broker.merge_syncs([{'sync_point': 1, 'remote_id': uuid2}], incoming=False) self.assertEqual(broker.get_sync(uuid2), 1) self.assertEqual(broker.get_sync(uuid3), 2) self.assertEqual(broker.get_sync(uuid2, incoming=False), 1) self.assertEqual(broker.get_sync(uuid3, incoming=False), -1) broker.merge_syncs([{'sync_point': 2, 'remote_id': uuid3}], incoming=False) self.assertEqual(broker.get_sync(uuid2, incoming=False), 1) self.assertEqual(broker.get_sync(uuid3, incoming=False), 2) def test_merge_syncs(self): broker = DatabaseBroker(':memory:') def stub(*args, **kwargs): pass broker._initialize = stub broker.initialize(normalize_timestamp('1')) uuid2 = str(uuid4()) broker.merge_syncs([{'sync_point': 1, 'remote_id': uuid2}]) self.assertEqual(broker.get_sync(uuid2), 1) uuid3 = str(uuid4()) broker.merge_syncs([{'sync_point': 2, 'remote_id': uuid3}]) self.assertEqual(broker.get_sync(uuid2), 1) self.assertEqual(broker.get_sync(uuid3), 2) self.assertEqual(broker.get_sync(uuid2, incoming=False), -1) self.assertEqual(broker.get_sync(uuid3, incoming=False), -1) broker.merge_syncs([{'sync_point': 3, 'remote_id': uuid2}, {'sync_point': 4, 'remote_id': uuid3}], incoming=False) self.assertEqual(broker.get_sync(uuid2, incoming=False), 3) self.assertEqual(broker.get_sync(uuid3, incoming=False), 4) self.assertEqual(broker.get_sync(uuid2), 1) self.assertEqual(broker.get_sync(uuid3), 2) broker.merge_syncs([{'sync_point': 5, 'remote_id': uuid2}]) self.assertEqual(broker.get_sync(uuid2), 5) def test_get_replication_info(self): self.get_replication_info_tester(metadata=False) def test_get_replication_info_with_metadata(self): self.get_replication_info_tester(metadata=True) def get_replication_info_tester(self, metadata=False): broker = DatabaseBroker(':memory:', account='a') broker.db_type = 'test' broker.db_contains_type = 'test' broker_creation = normalize_timestamp(1) broker_uuid = 
str(uuid4()) broker_metadata = metadata and json.dumps( {'Test': ('Value', normalize_timestamp(1))}) or '' def _initialize(conn, put_timestamp, **kwargs): if put_timestamp is None: put_timestamp = normalize_timestamp(0) conn.executescript(''' CREATE TABLE test ( ROWID INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT UNIQUE, created_at TEXT ); CREATE TRIGGER test_insert AFTER INSERT ON test BEGIN UPDATE test_stat SET test_count = test_count + 1, hash = chexor(hash, new.name, new.created_at); END; CREATE TRIGGER test_update BEFORE UPDATE ON test BEGIN SELECT RAISE(FAIL, 'UPDATE not allowed; DELETE and INSERT'); END; CREATE TRIGGER test_delete AFTER DELETE ON test BEGIN UPDATE test_stat SET test_count = test_count - 1, hash = chexor(hash, old.name, old.created_at); END; CREATE TABLE test_stat ( account TEXT, created_at TEXT, put_timestamp TEXT DEFAULT '0', delete_timestamp TEXT DEFAULT '0', status_changed_at TEXT DEFAULT '0', test_count INTEGER, hash TEXT default '00000000000000000000000000000000', id TEXT %s ); INSERT INTO test_stat (test_count) VALUES (0); ''' % (metadata and ", metadata TEXT DEFAULT ''" or "")) conn.execute(''' UPDATE test_stat SET account = ?, created_at = ?, id = ?, put_timestamp = ?, status_changed_at = ? ''', (broker.account, broker_creation, broker_uuid, put_timestamp, put_timestamp)) if metadata: conn.execute('UPDATE test_stat SET metadata = ?', (broker_metadata,)) conn.commit() broker._initialize = _initialize put_timestamp = normalize_timestamp(2) broker.initialize(put_timestamp) info = broker.get_replication_info() self.assertEqual(info, { 'account': broker.account, 'count': 0, 'hash': '00000000000000000000000000000000', 'created_at': broker_creation, 'put_timestamp': put_timestamp, 'delete_timestamp': '0', 'status_changed_at': put_timestamp, 'max_row': -1, 'id': broker_uuid, 'metadata': broker_metadata}) insert_timestamp = normalize_timestamp(3) with broker.get() as conn: conn.execute(''' INSERT INTO test (name, created_at) VALUES ('test', ?) 
''', (insert_timestamp,)) conn.commit() info = broker.get_replication_info() self.assertEqual(info, { 'account': broker.account, 'count': 1, 'hash': 'bdc4c93f574b0d8c2911a27ce9dd38ba', 'created_at': broker_creation, 'put_timestamp': put_timestamp, 'delete_timestamp': '0', 'status_changed_at': put_timestamp, 'max_row': 1, 'id': broker_uuid, 'metadata': broker_metadata}) with broker.get() as conn: conn.execute('DELETE FROM test') conn.commit() info = broker.get_replication_info() self.assertEqual(info, { 'account': broker.account, 'count': 0, 'hash': '00000000000000000000000000000000', 'created_at': broker_creation, 'put_timestamp': put_timestamp, 'delete_timestamp': '0', 'status_changed_at': put_timestamp, 'max_row': 1, 'id': broker_uuid, 'metadata': broker_metadata}) return broker def test_metadata(self): def reclaim(broker, timestamp): with broker.get() as conn: broker._reclaim(conn, timestamp) conn.commit() # Initializes a good broker for us broker = self.get_replication_info_tester(metadata=True) # Add our first item first_timestamp = normalize_timestamp(1) first_value = '1' broker.update_metadata({'First': [first_value, first_timestamp]}) self.assertTrue('First' in broker.metadata) self.assertEqual(broker.metadata['First'], [first_value, first_timestamp]) # Add our second item second_timestamp = normalize_timestamp(2) second_value = '2' broker.update_metadata({'Second': [second_value, second_timestamp]}) self.assertTrue('First' in broker.metadata) self.assertEqual(broker.metadata['First'], [first_value, first_timestamp]) self.assertTrue('Second' in broker.metadata) self.assertEqual(broker.metadata['Second'], [second_value, second_timestamp]) # Update our first item first_timestamp = normalize_timestamp(3) first_value = '1b' broker.update_metadata({'First': [first_value, first_timestamp]}) self.assertTrue('First' in broker.metadata) self.assertEqual(broker.metadata['First'], [first_value, first_timestamp]) self.assertTrue('Second' in broker.metadata) self.assertEqual(broker.metadata['Second'], [second_value, second_timestamp]) # Delete our second item (by setting to empty string) second_timestamp = normalize_timestamp(4) second_value = '' broker.update_metadata({'Second': [second_value, second_timestamp]}) self.assertTrue('First' in broker.metadata) self.assertEqual(broker.metadata['First'], [first_value, first_timestamp]) self.assertTrue('Second' in broker.metadata) self.assertEqual(broker.metadata['Second'], [second_value, second_timestamp]) # Reclaim at point before second item was deleted reclaim(broker, normalize_timestamp(3)) self.assertTrue('First' in broker.metadata) self.assertEqual(broker.metadata['First'], [first_value, first_timestamp]) self.assertTrue('Second' in broker.metadata) self.assertEqual(broker.metadata['Second'], [second_value, second_timestamp]) # Reclaim at point second item was deleted reclaim(broker, normalize_timestamp(4)) self.assertTrue('First' in broker.metadata) self.assertEqual(broker.metadata['First'], [first_value, first_timestamp]) self.assertTrue('Second' in broker.metadata) self.assertEqual(broker.metadata['Second'], [second_value, second_timestamp]) # Reclaim after point second item was deleted reclaim(broker, normalize_timestamp(5)) self.assertTrue('First' in broker.metadata) self.assertEqual(broker.metadata['First'], [first_value, first_timestamp]) self.assertTrue('Second' not in broker.metadata) @patch.object(DatabaseBroker, 'validate_metadata') def test_validate_metadata_is_called_from_update_metadata(self, mock): broker = 
self.get_replication_info_tester(metadata=True) first_timestamp = normalize_timestamp(1) first_value = '1' metadata = {'First': [first_value, first_timestamp]} broker.update_metadata(metadata, validate_metadata=True) self.assertTrue(mock.called) @patch.object(DatabaseBroker, 'validate_metadata') def test_validate_metadata_is_not_called_from_update_metadata(self, mock): broker = self.get_replication_info_tester(metadata=True) first_timestamp = normalize_timestamp(1) first_value = '1' metadata = {'First': [first_value, first_timestamp]} broker.update_metadata(metadata) self.assertFalse(mock.called) def test_metadata_with_max_count(self): metadata = {} for c in range(MAX_META_COUNT): key = 'X-Account-Meta-F{0}'.format(c) metadata[key] = ('B', normalize_timestamp(1)) key = 'X-Account-Meta-Foo'.format(c) metadata[key] = ('', normalize_timestamp(1)) try: DatabaseBroker.validate_metadata(metadata) except HTTPException: self.fail('Unexpected HTTPException') def test_metadata_raises_exception_over_max_count(self): metadata = {} for c in range(MAX_META_COUNT + 1): key = 'X-Account-Meta-F{0}'.format(c) metadata[key] = ('B', normalize_timestamp(1)) message = '' try: DatabaseBroker.validate_metadata(metadata) except HTTPException as e: message = str(e) self.assertEqual(message, '400 Bad Request') def test_metadata_with_max_overall_size(self): metadata = {} metadata_value = 'v' * MAX_META_VALUE_LENGTH size = 0 x = 0 while size < (MAX_META_OVERALL_SIZE - 4 - MAX_META_VALUE_LENGTH): size += 4 + MAX_META_VALUE_LENGTH metadata['X-Account-Meta-%04d' % x] = (metadata_value, normalize_timestamp(1)) x += 1 if MAX_META_OVERALL_SIZE - size > 1: metadata['X-Account-Meta-k'] = ( 'v' * (MAX_META_OVERALL_SIZE - size - 1), normalize_timestamp(1)) try: DatabaseBroker.validate_metadata(metadata) except HTTPException: self.fail('Unexpected HTTPException') def test_metadata_raises_exception_over_max_overall_size(self): metadata = {} metadata_value = 'k' * MAX_META_VALUE_LENGTH size = 0 x = 0 while size < (MAX_META_OVERALL_SIZE - 4 - MAX_META_VALUE_LENGTH): size += 4 + MAX_META_VALUE_LENGTH metadata['X-Account-Meta-%04d' % x] = (metadata_value, normalize_timestamp(1)) x += 1 if MAX_META_OVERALL_SIZE - size > 1: metadata['X-Account-Meta-k'] = ( 'v' * (MAX_META_OVERALL_SIZE - size - 1), normalize_timestamp(1)) metadata['X-Account-Meta-k2'] = ('v', normalize_timestamp(1)) message = '' try: DatabaseBroker.validate_metadata(metadata) except HTTPException as e: message = str(e) self.assertEqual(message, '400 Bad Request') def test_possibly_quarantine_disk_error(self): dbpath = os.path.join(self.testdir, 'dev', 'dbs', 'par', 'pre', 'db') mkdirs(dbpath) qpath = os.path.join(self.testdir, 'dev', 'quarantined', 'tests', 'db') broker = DatabaseBroker(os.path.join(dbpath, '1.db')) broker.db_type = 'test' def stub(): raise sqlite3.OperationalError('disk I/O error') try: stub() except Exception: try: broker.possibly_quarantine(*sys.exc_info()) except Exception as exc: self.assertEqual( str(exc), 'Quarantined %s to %s due to disk error ' 'while accessing database' % (dbpath, qpath)) else: self.fail('Expected an exception to be raised') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/__init__.py0000664000567000056710000000000013024044352021340 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/common/test_base_storage_server.py0000664000567000056710000001050413024044352024676 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 
(the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import os from swift.common.base_storage_server import BaseStorageServer from tempfile import mkdtemp from swift import __version__ as swift_version from swift.common.swob import Request from swift.common.utils import get_logger, public from shutil import rmtree class FakeOPTIONS(BaseStorageServer): server_type = 'test-server' def __init__(self, conf, logger=None): super(FakeOPTIONS, self).__init__(conf) self.logger = logger or get_logger(conf, log_route='test-server') class FakeANOTHER(FakeOPTIONS): @public def ANOTHER(self): """this is to test adding to allowed_methods""" pass class TestBaseStorageServer(unittest.TestCase): """Test swift.common.base_storage_server""" def setUp(self): self.tmpdir = mkdtemp() self.testdir = os.path.join(self.tmpdir, 'tmp_test_base_storage_server') def tearDown(self): """Tear down for testing swift.common.base_storage_server""" rmtree(self.tmpdir) def test_server_type(self): conf = {'devices': self.testdir, 'mount_check': 'false'} baseserver = BaseStorageServer(conf) msg = 'Storage nodes have not implemented the Server type.' try: baseserver.server_type except NotImplementedError as e: self.assertEqual(str(e), msg) def test_allowed_methods(self): conf = {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false'} # test what's available in the base class allowed_methods_test = FakeOPTIONS(conf).allowed_methods self.assertEqual(allowed_methods_test, ['OPTIONS']) # test that a subclass can add allowed methods allowed_methods_test = FakeANOTHER(conf).allowed_methods allowed_methods_test.sort() self.assertEqual(allowed_methods_test, ['ANOTHER', 'OPTIONS']) conf = {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'true'} # test what's available in the base class allowed_methods_test = FakeOPTIONS(conf).allowed_methods self.assertEqual(allowed_methods_test, []) # test that a subclass can add allowed methods allowed_methods_test = FakeANOTHER(conf).allowed_methods self.assertEqual(allowed_methods_test, []) conf = {'devices': self.testdir, 'mount_check': 'false'} # test what's available in the base class allowed_methods_test = FakeOPTIONS(conf).allowed_methods self.assertEqual(allowed_methods_test, ['OPTIONS']) # test that a subclass can add allowed methods allowed_methods_test = FakeANOTHER(conf).allowed_methods allowed_methods_test.sort() self.assertEqual(allowed_methods_test, ['ANOTHER', 'OPTIONS']) def test_OPTIONS_error(self): msg = 'Storage nodes have not implemented the Server type.' 
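        # BaseStorageServer leaves server_type unimplemented, so calling
        # OPTIONS on the base class (unlike the FakeOPTIONS subclass above,
        # which sets server_type = 'test-server') should raise
        # NotImplementedError with this message; test_OPTIONS below shows the
        # subclass answering with an Allow header and a Server header built
        # from server_type plus the Swift version.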
conf = {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false'} baseserver = BaseStorageServer(conf) req = Request.blank('/sda1/p/a/c/o', {'REQUEST_METHOD': 'OPTIONS'}) req.content_length = 0 try: baseserver.OPTIONS(req) except NotImplementedError as e: self.assertEqual(str(e), msg) def test_OPTIONS(self): conf = {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false'} req = Request.blank('/sda1/p/a/c/o', {'REQUEST_METHOD': 'OPTIONS'}) req.content_length = 0 resp = FakeOPTIONS(conf).OPTIONS(req) self.assertEqual(resp.headers['Allow'], 'OPTIONS') self.assertEqual(resp.headers['Server'], 'test-server/' + swift_version) swift-2.7.1/test/unit/common/test_utils.py0000664000567000056710000067661113024044354022035 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for swift.common.utils""" from __future__ import print_function from test.unit import temptree import ctypes import contextlib import errno import eventlet import eventlet.event import functools import grp import logging import os import mock import random import re import socket import stat import sys import json import math import six from six import BytesIO, StringIO from six.moves.queue import Queue, Empty from six.moves import range from textwrap import dedent import tempfile import time import traceback import unittest import fcntl import shutil from getpass import getuser from shutil import rmtree from functools import partial from tempfile import TemporaryFile, NamedTemporaryFile, mkdtemp from netifaces import AF_INET6 from mock import MagicMock, patch from six.moves.configparser import NoSectionError, NoOptionError from swift.common.exceptions import Timeout, MessageTimeout, \ ConnectionTimeout, LockTimeout, ReplicationLockTimeout, \ MimeInvalid, ThreadPoolDead from swift.common import utils from swift.common.container_sync_realms import ContainerSyncRealms from swift.common.header_key_dict import HeaderKeyDict from swift.common.swob import Request, Response from test.unit import FakeLogger threading = eventlet.patcher.original('threading') class MockOs(object): def __init__(self, pass_funcs=None, called_funcs=None, raise_funcs=None): if pass_funcs is None: pass_funcs = [] if called_funcs is None: called_funcs = [] if raise_funcs is None: raise_funcs = [] self.closed_fds = [] for func in pass_funcs: setattr(self, func, self.pass_func) self.called_funcs = {} for func in called_funcs: c_func = partial(self.called_func, func) setattr(self, func, c_func) for func in raise_funcs: r_func = partial(self.raise_func, func) setattr(self, func, r_func) def pass_func(self, *args, **kwargs): pass setgroups = chdir = setsid = setgid = setuid = umask = pass_func def called_func(self, name, *args, **kwargs): self.called_funcs[name] = True def raise_func(self, name, *args, **kwargs): self.called_funcs[name] = True raise OSError() def dup2(self, source, target): self.closed_fds.append(target) def geteuid(self): '''Pretend we 
are running as root.''' return 0 def __getattr__(self, name): # I only over-ride portions of the os module try: return object.__getattr__(self, name) except AttributeError: return getattr(os, name) class MockUdpSocket(object): def __init__(self, sendto_errno=None): self.sent = [] self.sendto_errno = sendto_errno def sendto(self, data, target): if self.sendto_errno: raise socket.error(self.sendto_errno, 'test errno %s' % self.sendto_errno) self.sent.append((data, target)) def close(self): pass class MockSys(object): def __init__(self): self.stdin = TemporaryFile('w') self.stdout = TemporaryFile('r') self.stderr = TemporaryFile('r') self.__stderr__ = self.stderr self.stdio_fds = [self.stdin.fileno(), self.stdout.fileno(), self.stderr.fileno()] def reset_loggers(): if hasattr(utils.get_logger, 'handler4logger'): for logger, handler in utils.get_logger.handler4logger.items(): logger.removeHandler(handler) delattr(utils.get_logger, 'handler4logger') if hasattr(utils.get_logger, 'console_handler4logger'): for logger, h in utils.get_logger.console_handler4logger.items(): logger.removeHandler(h) delattr(utils.get_logger, 'console_handler4logger') # Reset the LogAdapter class thread local state. Use get_logger() here # to fetch a LogAdapter instance because the items from # get_logger.handler4logger above are the underlying logger instances, # not the LogAdapter. utils.get_logger(None).thread_locals = (None, None) def reset_logger_state(f): @functools.wraps(f) def wrapper(self, *args, **kwargs): reset_loggers() try: return f(self, *args, **kwargs) finally: reset_loggers() return wrapper class TestTimestamp(unittest.TestCase): """Tests for swift.common.utils.Timestamp""" def test_invalid_input(self): self.assertRaises(ValueError, utils.Timestamp, time.time(), offset=-1) def test_invalid_string_conversion(self): t = utils.Timestamp(time.time()) self.assertRaises(TypeError, str, t) def test_offset_limit(self): t = 1417462430.78693 # can't have a offset above MAX_OFFSET self.assertRaises(ValueError, utils.Timestamp, t, offset=utils.MAX_OFFSET + 1) # exactly max offset is fine ts = utils.Timestamp(t, offset=utils.MAX_OFFSET) self.assertEqual(ts.internal, '1417462430.78693_ffffffffffffffff') # but you can't offset it further self.assertRaises(ValueError, utils.Timestamp, ts.internal, offset=1) # unless you start below it ts = utils.Timestamp(t, offset=utils.MAX_OFFSET - 1) self.assertEqual(utils.Timestamp(ts.internal, offset=1), '1417462430.78693_ffffffffffffffff') def test_normal_format_no_offset(self): expected = '1402436408.91203' test_values = ( '1402436408.91203', '1402436408.91203_00000000', '1402436408.912030000', '1402436408.912030000_0000000000000', '000001402436408.912030000', '000001402436408.912030000_0000000000', 1402436408.91203, 1402436408.912029, 1402436408.9120300000000000, 1402436408.91202999999999999, utils.Timestamp(1402436408.91203), utils.Timestamp(1402436408.91203, offset=0), utils.Timestamp(1402436408.912029), utils.Timestamp(1402436408.912029, offset=0), utils.Timestamp('1402436408.91203'), utils.Timestamp('1402436408.91203', offset=0), utils.Timestamp('1402436408.91203_00000000'), utils.Timestamp('1402436408.91203_00000000', offset=0), ) for value in test_values: timestamp = utils.Timestamp(value) self.assertEqual(timestamp.normal, expected) # timestamp instance can also compare to string or float self.assertEqual(timestamp, expected) self.assertEqual(timestamp, float(expected)) self.assertEqual(timestamp, utils.normalize_timestamp(expected)) def test_isoformat(self): expected = 
'2014-06-10T22:47:32.054580' test_values = ( '1402440452.05458', '1402440452.054579', '1402440452.05458_00000000', '1402440452.054579_00000000', '1402440452.054580000', '1402440452.054579999', '1402440452.054580000_0000000000000', '1402440452.054579999_0000ff00', '000001402440452.054580000', '000001402440452.0545799', '000001402440452.054580000_0000000000', '000001402440452.054579999999_00000fffff', 1402440452.05458, 1402440452.054579, 1402440452.0545800000000000, 1402440452.054579999, utils.Timestamp(1402440452.05458), utils.Timestamp(1402440452.0545799), utils.Timestamp(1402440452.05458, offset=0), utils.Timestamp(1402440452.05457999999, offset=0), utils.Timestamp(1402440452.05458, offset=100), utils.Timestamp(1402440452.054579, offset=100), utils.Timestamp('1402440452.05458'), utils.Timestamp('1402440452.054579999'), utils.Timestamp('1402440452.05458', offset=0), utils.Timestamp('1402440452.054579', offset=0), utils.Timestamp('1402440452.05458', offset=300), utils.Timestamp('1402440452.05457999', offset=300), utils.Timestamp('1402440452.05458_00000000'), utils.Timestamp('1402440452.05457999_00000000'), utils.Timestamp('1402440452.05458_00000000', offset=0), utils.Timestamp('1402440452.05457999_00000aaa', offset=0), utils.Timestamp('1402440452.05458_00000000', offset=400), utils.Timestamp('1402440452.054579_0a', offset=400), ) for value in test_values: self.assertEqual(utils.Timestamp(value).isoformat, expected) expected = '1970-01-01T00:00:00.000000' test_values = ( '0', '0000000000.00000', '0000000000.00000_ffffffffffff', 0, 0.0, ) for value in test_values: self.assertEqual(utils.Timestamp(value).isoformat, expected) def test_not_equal(self): ts = '1402436408.91203_0000000000000001' test_values = ( utils.Timestamp('1402436408.91203_0000000000000002'), utils.Timestamp('1402436408.91203'), utils.Timestamp(1402436408.91203), utils.Timestamp(1402436408.91204), utils.Timestamp(1402436408.91203, offset=0), utils.Timestamp(1402436408.91203, offset=2), ) for value in test_values: self.assertTrue(value != ts) self.assertIs(True, utils.Timestamp(ts) == ts) # sanity self.assertIs(False, utils.Timestamp(ts) != utils.Timestamp(ts)) self.assertIs(False, utils.Timestamp(ts) != ts) self.assertIs(False, utils.Timestamp(ts) is None) self.assertIs(True, utils.Timestamp(ts) is not None) def test_no_force_internal_no_offset(self): """Test that internal is the same as normal with no offset""" with mock.patch('swift.common.utils.FORCE_INTERNAL', new=False): self.assertEqual(utils.Timestamp(0).internal, '0000000000.00000') self.assertEqual(utils.Timestamp(1402437380.58186).internal, '1402437380.58186') self.assertEqual(utils.Timestamp(1402437380.581859).internal, '1402437380.58186') self.assertEqual(utils.Timestamp(0).internal, utils.normalize_timestamp(0)) def test_no_force_internal_with_offset(self): """Test that internal always includes the offset if significant""" with mock.patch('swift.common.utils.FORCE_INTERNAL', new=False): self.assertEqual(utils.Timestamp(0, offset=1).internal, '0000000000.00000_0000000000000001') self.assertEqual( utils.Timestamp(1402437380.58186, offset=16).internal, '1402437380.58186_0000000000000010') self.assertEqual( utils.Timestamp(1402437380.581859, offset=240).internal, '1402437380.58186_00000000000000f0') self.assertEqual( utils.Timestamp('1402437380.581859_00000001', offset=240).internal, '1402437380.58186_00000000000000f1') def test_force_internal(self): """Test that internal always includes the offset if forced""" with mock.patch('swift.common.utils.FORCE_INTERNAL', 
new=True): self.assertEqual(utils.Timestamp(0).internal, '0000000000.00000_0000000000000000') self.assertEqual(utils.Timestamp(1402437380.58186).internal, '1402437380.58186_0000000000000000') self.assertEqual(utils.Timestamp(1402437380.581859).internal, '1402437380.58186_0000000000000000') self.assertEqual(utils.Timestamp(0, offset=1).internal, '0000000000.00000_0000000000000001') self.assertEqual( utils.Timestamp(1402437380.58186, offset=16).internal, '1402437380.58186_0000000000000010') self.assertEqual( utils.Timestamp(1402437380.581859, offset=16).internal, '1402437380.58186_0000000000000010') def test_internal_format_no_offset(self): expected = '1402436408.91203_0000000000000000' test_values = ( '1402436408.91203', '1402436408.91203_00000000', '1402436408.912030000', '1402436408.912030000_0000000000000', '000001402436408.912030000', '000001402436408.912030000_0000000000', 1402436408.91203, 1402436408.9120300000000000, 1402436408.912029, 1402436408.912029999999999999, utils.Timestamp(1402436408.91203), utils.Timestamp(1402436408.91203, offset=0), utils.Timestamp(1402436408.912029), utils.Timestamp(1402436408.91202999999999999, offset=0), utils.Timestamp('1402436408.91203'), utils.Timestamp('1402436408.91203', offset=0), utils.Timestamp('1402436408.912029'), utils.Timestamp('1402436408.912029', offset=0), utils.Timestamp('1402436408.912029999999999'), utils.Timestamp('1402436408.912029999999999', offset=0), ) for value in test_values: # timestamp instance is always equivalent self.assertEqual(utils.Timestamp(value), expected) if utils.FORCE_INTERNAL: # the FORCE_INTERNAL flag makes the internal format always # include the offset portion of the timestamp even when it's # not significant and would be bad during upgrades self.assertEqual(utils.Timestamp(value).internal, expected) else: # unless we FORCE_INTERNAL, when there's no offset the # internal format is equivalent to the normalized format self.assertEqual(utils.Timestamp(value).internal, '1402436408.91203') def test_internal_format_with_offset(self): expected = '1402436408.91203_00000000000000f0' test_values = ( '1402436408.91203_000000f0', '1402436408.912030000_0000000000f0', '1402436408.912029_000000f0', '1402436408.91202999999_0000000000f0', '000001402436408.912030000_000000000f0', '000001402436408.9120299999_000000000f0', utils.Timestamp(1402436408.91203, offset=240), utils.Timestamp(1402436408.912029, offset=240), utils.Timestamp('1402436408.91203', offset=240), utils.Timestamp('1402436408.91203_00000000', offset=240), utils.Timestamp('1402436408.91203_0000000f', offset=225), utils.Timestamp('1402436408.9120299999', offset=240), utils.Timestamp('1402436408.9120299999_00000000', offset=240), utils.Timestamp('1402436408.9120299999_00000010', offset=224), ) for value in test_values: timestamp = utils.Timestamp(value) self.assertEqual(timestamp.internal, expected) # can compare with offset if the string is internalized self.assertEqual(timestamp, expected) # if comparison value only includes the normalized portion and the # timestamp includes an offset, it is considered greater normal = utils.Timestamp(expected).normal self.assertTrue(timestamp > normal, '%r is not bigger than %r given %r' % ( timestamp, normal, value)) self.assertTrue(timestamp > float(normal), '%r is not bigger than %f given %r' % ( timestamp, float(normal), value)) def test_short_format_with_offset(self): expected = '1402436408.91203_f0' timestamp = utils.Timestamp(1402436408.91203, 0xf0) self.assertEqual(expected, timestamp.short) expected = '1402436408.91203' 
timestamp = utils.Timestamp(1402436408.91203) self.assertEqual(expected, timestamp.short) def test_raw(self): expected = 140243640891203 timestamp = utils.Timestamp(1402436408.91203) self.assertEqual(expected, timestamp.raw) # 'raw' does not include offset timestamp = utils.Timestamp(1402436408.91203, 0xf0) self.assertEqual(expected, timestamp.raw) def test_delta(self): def _assertWithinBounds(expected, timestamp): tolerance = 0.00001 minimum = expected - tolerance maximum = expected + tolerance self.assertTrue(float(timestamp) > minimum) self.assertTrue(float(timestamp) < maximum) timestamp = utils.Timestamp(1402436408.91203, delta=100) _assertWithinBounds(1402436408.91303, timestamp) self.assertEqual(140243640891303, timestamp.raw) timestamp = utils.Timestamp(1402436408.91203, delta=-100) _assertWithinBounds(1402436408.91103, timestamp) self.assertEqual(140243640891103, timestamp.raw) timestamp = utils.Timestamp(1402436408.91203, delta=0) _assertWithinBounds(1402436408.91203, timestamp) self.assertEqual(140243640891203, timestamp.raw) # delta is independent of offset timestamp = utils.Timestamp(1402436408.91203, offset=42, delta=100) self.assertEqual(140243640891303, timestamp.raw) self.assertEqual(42, timestamp.offset) # cannot go negative self.assertRaises(ValueError, utils.Timestamp, 1402436408.91203, delta=-140243640891203) def test_int(self): expected = 1402437965 test_values = ( '1402437965.91203', '1402437965.91203_00000000', '1402437965.912030000', '1402437965.912030000_0000000000000', '000001402437965.912030000', '000001402437965.912030000_0000000000', 1402437965.91203, 1402437965.9120300000000000, 1402437965.912029, 1402437965.912029999999999999, utils.Timestamp(1402437965.91203), utils.Timestamp(1402437965.91203, offset=0), utils.Timestamp(1402437965.91203, offset=500), utils.Timestamp(1402437965.912029), utils.Timestamp(1402437965.91202999999999999, offset=0), utils.Timestamp(1402437965.91202999999999999, offset=300), utils.Timestamp('1402437965.91203'), utils.Timestamp('1402437965.91203', offset=0), utils.Timestamp('1402437965.91203', offset=400), utils.Timestamp('1402437965.912029'), utils.Timestamp('1402437965.912029', offset=0), utils.Timestamp('1402437965.912029', offset=200), utils.Timestamp('1402437965.912029999999999'), utils.Timestamp('1402437965.912029999999999', offset=0), utils.Timestamp('1402437965.912029999999999', offset=100), ) for value in test_values: timestamp = utils.Timestamp(value) self.assertEqual(int(timestamp), expected) self.assertTrue(timestamp > expected) def test_float(self): expected = 1402438115.91203 test_values = ( '1402438115.91203', '1402438115.91203_00000000', '1402438115.912030000', '1402438115.912030000_0000000000000', '000001402438115.912030000', '000001402438115.912030000_0000000000', 1402438115.91203, 1402438115.9120300000000000, 1402438115.912029, 1402438115.912029999999999999, utils.Timestamp(1402438115.91203), utils.Timestamp(1402438115.91203, offset=0), utils.Timestamp(1402438115.91203, offset=500), utils.Timestamp(1402438115.912029), utils.Timestamp(1402438115.91202999999999999, offset=0), utils.Timestamp(1402438115.91202999999999999, offset=300), utils.Timestamp('1402438115.91203'), utils.Timestamp('1402438115.91203', offset=0), utils.Timestamp('1402438115.91203', offset=400), utils.Timestamp('1402438115.912029'), utils.Timestamp('1402438115.912029', offset=0), utils.Timestamp('1402438115.912029', offset=200), utils.Timestamp('1402438115.912029999999999'), utils.Timestamp('1402438115.912029999999999', offset=0), 
utils.Timestamp('1402438115.912029999999999', offset=100), ) tolerance = 0.00001 minimum = expected - tolerance maximum = expected + tolerance for value in test_values: timestamp = utils.Timestamp(value) self.assertTrue(float(timestamp) > minimum, '%f is not bigger than %f given %r' % ( timestamp, minimum, value)) self.assertTrue(float(timestamp) < maximum, '%f is not smaller than %f given %r' % ( timestamp, maximum, value)) # direct comparison of timestamp works too self.assertTrue(timestamp > minimum, '%s is not bigger than %f given %r' % ( timestamp.normal, minimum, value)) self.assertTrue(timestamp < maximum, '%s is not smaller than %f given %r' % ( timestamp.normal, maximum, value)) # ... even against strings self.assertTrue(timestamp > '%f' % minimum, '%s is not bigger than %s given %r' % ( timestamp.normal, minimum, value)) self.assertTrue(timestamp < '%f' % maximum, '%s is not smaller than %s given %r' % ( timestamp.normal, maximum, value)) def test_false(self): self.assertFalse(utils.Timestamp(0)) self.assertFalse(utils.Timestamp(0, offset=0)) self.assertFalse(utils.Timestamp('0')) self.assertFalse(utils.Timestamp('0', offset=0)) self.assertFalse(utils.Timestamp(0.0)) self.assertFalse(utils.Timestamp(0.0, offset=0)) self.assertFalse(utils.Timestamp('0.0')) self.assertFalse(utils.Timestamp('0.0', offset=0)) self.assertFalse(utils.Timestamp(00000000.00000000)) self.assertFalse(utils.Timestamp(00000000.00000000, offset=0)) self.assertFalse(utils.Timestamp('00000000.00000000')) self.assertFalse(utils.Timestamp('00000000.00000000', offset=0)) def test_true(self): self.assertTrue(utils.Timestamp(1)) self.assertTrue(utils.Timestamp(1, offset=1)) self.assertTrue(utils.Timestamp(0, offset=1)) self.assertTrue(utils.Timestamp('1')) self.assertTrue(utils.Timestamp('1', offset=1)) self.assertTrue(utils.Timestamp('0', offset=1)) self.assertTrue(utils.Timestamp(1.1)) self.assertTrue(utils.Timestamp(1.1, offset=1)) self.assertTrue(utils.Timestamp(0.0, offset=1)) self.assertTrue(utils.Timestamp('1.1')) self.assertTrue(utils.Timestamp('1.1', offset=1)) self.assertTrue(utils.Timestamp('0.0', offset=1)) self.assertTrue(utils.Timestamp(11111111.11111111)) self.assertTrue(utils.Timestamp(11111111.11111111, offset=1)) self.assertTrue(utils.Timestamp(00000000.00000000, offset=1)) self.assertTrue(utils.Timestamp('11111111.11111111')) self.assertTrue(utils.Timestamp('11111111.11111111', offset=1)) self.assertTrue(utils.Timestamp('00000000.00000000', offset=1)) def test_greater_no_offset(self): now = time.time() older = now - 1 timestamp = utils.Timestamp(now) test_values = ( 0, '0', 0.0, '0.0', '0000.0000', '000.000_000', 1, '1', 1.1, '1.1', '1111.1111', '111.111_111', 1402443112.213252, '1402443112.213252', '1402443112.213252_ffff', older, '%f' % older, '%f_0000ffff' % older, ) for value in test_values: other = utils.Timestamp(value) self.assertNotEqual(timestamp, other) # sanity self.assertTrue(timestamp > value, '%r is not greater than %r given %r' % ( timestamp, value, value)) self.assertTrue(timestamp > other, '%r is not greater than %r given %r' % ( timestamp, other, value)) self.assertTrue(timestamp > other.normal, '%r is not greater than %r given %r' % ( timestamp, other.normal, value)) self.assertTrue(timestamp > other.internal, '%r is not greater than %r given %r' % ( timestamp, other.internal, value)) self.assertTrue(timestamp > float(other), '%r is not greater than %r given %r' % ( timestamp, float(other), value)) self.assertTrue(timestamp > int(other), '%r is not greater than %r given %r' % ( 
timestamp, int(other), value)) def test_greater_with_offset(self): now = time.time() older = now - 1 test_values = ( 0, '0', 0.0, '0.0', '0000.0000', '000.000_000', 1, '1', 1.1, '1.1', '1111.1111', '111.111_111', 1402443346.935174, '1402443346.93517', '1402443346.935169_ffff', older, '%f' % older, '%f_0000ffff' % older, now, '%f' % now, '%f_00000000' % now, ) for offset in range(1, 1000, 100): timestamp = utils.Timestamp(now, offset=offset) for value in test_values: other = utils.Timestamp(value) self.assertNotEqual(timestamp, other) # sanity self.assertTrue(timestamp > value, '%r is not greater than %r given %r' % ( timestamp, value, value)) self.assertTrue(timestamp > other, '%r is not greater than %r given %r' % ( timestamp, other, value)) self.assertTrue(timestamp > other.normal, '%r is not greater than %r given %r' % ( timestamp, other.normal, value)) self.assertTrue(timestamp > other.internal, '%r is not greater than %r given %r' % ( timestamp, other.internal, value)) self.assertTrue(timestamp > float(other), '%r is not greater than %r given %r' % ( timestamp, float(other), value)) self.assertTrue(timestamp > int(other), '%r is not greater than %r given %r' % ( timestamp, int(other), value)) def test_smaller_no_offset(self): now = time.time() newer = now + 1 timestamp = utils.Timestamp(now) test_values = ( 9999999999.99999, '9999999999.99999', '9999999999.99999_ffff', newer, '%f' % newer, '%f_0000ffff' % newer, ) for value in test_values: other = utils.Timestamp(value) self.assertNotEqual(timestamp, other) # sanity self.assertTrue(timestamp < value, '%r is not smaller than %r given %r' % ( timestamp, value, value)) self.assertTrue(timestamp < other, '%r is not smaller than %r given %r' % ( timestamp, other, value)) self.assertTrue(timestamp < other.normal, '%r is not smaller than %r given %r' % ( timestamp, other.normal, value)) self.assertTrue(timestamp < other.internal, '%r is not smaller than %r given %r' % ( timestamp, other.internal, value)) self.assertTrue(timestamp < float(other), '%r is not smaller than %r given %r' % ( timestamp, float(other), value)) self.assertTrue(timestamp < int(other), '%r is not smaller than %r given %r' % ( timestamp, int(other), value)) def test_smaller_with_offset(self): now = time.time() newer = now + 1 test_values = ( 9999999999.99999, '9999999999.99999', '9999999999.99999_ffff', newer, '%f' % newer, '%f_0000ffff' % newer, ) for offset in range(1, 1000, 100): timestamp = utils.Timestamp(now, offset=offset) for value in test_values: other = utils.Timestamp(value) self.assertNotEqual(timestamp, other) # sanity self.assertTrue(timestamp < value, '%r is not smaller than %r given %r' % ( timestamp, value, value)) self.assertTrue(timestamp < other, '%r is not smaller than %r given %r' % ( timestamp, other, value)) self.assertTrue(timestamp < other.normal, '%r is not smaller than %r given %r' % ( timestamp, other.normal, value)) self.assertTrue(timestamp < other.internal, '%r is not smaller than %r given %r' % ( timestamp, other.internal, value)) self.assertTrue(timestamp < float(other), '%r is not smaller than %r given %r' % ( timestamp, float(other), value)) self.assertTrue(timestamp < int(other), '%r is not smaller than %r given %r' % ( timestamp, int(other), value)) def test_cmp_with_none(self): self.assertGreater(utils.Timestamp(0), None) self.assertGreater(utils.Timestamp(1.0), None) self.assertGreater(utils.Timestamp(1.0, 42), None) def test_ordering(self): given = [ '1402444820.62590_000000000000000a', '1402444820.62589_0000000000000001', 
'1402444821.52589_0000000000000004', '1402444920.62589_0000000000000004', '1402444821.62589_000000000000000a', '1402444821.72589_000000000000000a', '1402444920.62589_0000000000000002', '1402444820.62589_0000000000000002', '1402444820.62589_000000000000000a', '1402444820.62590_0000000000000004', '1402444920.62589_000000000000000a', '1402444820.62590_0000000000000002', '1402444821.52589_0000000000000002', '1402444821.52589_0000000000000000', '1402444920.62589', '1402444821.62589_0000000000000004', '1402444821.72589_0000000000000001', '1402444820.62590', '1402444820.62590_0000000000000001', '1402444820.62589_0000000000000004', '1402444821.72589_0000000000000000', '1402444821.52589_000000000000000a', '1402444821.72589_0000000000000004', '1402444821.62589', '1402444821.52589_0000000000000001', '1402444821.62589_0000000000000001', '1402444821.62589_0000000000000002', '1402444821.72589_0000000000000002', '1402444820.62589', '1402444920.62589_0000000000000001'] expected = [ '1402444820.62589', '1402444820.62589_0000000000000001', '1402444820.62589_0000000000000002', '1402444820.62589_0000000000000004', '1402444820.62589_000000000000000a', '1402444820.62590', '1402444820.62590_0000000000000001', '1402444820.62590_0000000000000002', '1402444820.62590_0000000000000004', '1402444820.62590_000000000000000a', '1402444821.52589', '1402444821.52589_0000000000000001', '1402444821.52589_0000000000000002', '1402444821.52589_0000000000000004', '1402444821.52589_000000000000000a', '1402444821.62589', '1402444821.62589_0000000000000001', '1402444821.62589_0000000000000002', '1402444821.62589_0000000000000004', '1402444821.62589_000000000000000a', '1402444821.72589', '1402444821.72589_0000000000000001', '1402444821.72589_0000000000000002', '1402444821.72589_0000000000000004', '1402444821.72589_000000000000000a', '1402444920.62589', '1402444920.62589_0000000000000001', '1402444920.62589_0000000000000002', '1402444920.62589_0000000000000004', '1402444920.62589_000000000000000a', ] # less visual version """ now = time.time() given = [ utils.Timestamp(now + i, offset=offset).internal for i in (0, 0.00001, 0.9, 1.0, 1.1, 100.0) for offset in (0, 1, 2, 4, 10) ] expected = [t for t in given] random.shuffle(given) """ self.assertEqual(len(given), len(expected)) # sanity timestamps = [utils.Timestamp(t) for t in given] # our expected values don't include insignificant offsets with mock.patch('swift.common.utils.FORCE_INTERNAL', new=False): self.assertEqual( [t.internal for t in sorted(timestamps)], expected) # string sorting works as well self.assertEqual( sorted([t.internal for t in timestamps]), expected) def test_hashable(self): ts_0 = utils.Timestamp('1402444821.72589') ts_0_also = utils.Timestamp('1402444821.72589') self.assertEqual(ts_0, ts_0_also) # sanity self.assertEqual(hash(ts_0), hash(ts_0_also)) d = {ts_0: 'whatever'} self.assertIn(ts_0, d) # sanity self.assertIn(ts_0_also, d) class TestTimestampEncoding(unittest.TestCase): def setUp(self): t0 = utils.Timestamp(0.0) t1 = utils.Timestamp(997.9996) t2 = utils.Timestamp(999) t3 = utils.Timestamp(1000, 24) t4 = utils.Timestamp(1001) t5 = utils.Timestamp(1002.00040) # encodings that are expected when explicit = False self.non_explicit_encodings = ( ('0000001000.00000_18', (t3, t3, t3)), ('0000001000.00000_18', (t3, t3, None)), ) # mappings that are expected when explicit = True self.explicit_encodings = ( ('0000001000.00000_18+0+0', (t3, t3, t3)), ('0000001000.00000_18+0', (t3, t3, None)), ) # mappings that are expected when explicit = True or False 
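        # A note on the encoded form used in these fixtures (inferred from
        # them, not a definitive spec): the leading field is t3 with its
        # offset rendered as '_18' (0x18 == 24), and each '+'/'-' part is a
        # signed hex delta at 0.00001s resolution relative to the previous
        # timestamp; e.g. '186a0' hex == 100000 units == 1.0 second, so
        # '0000001000.00000_18+186a0+186c8' expands to (t3, t4, t5).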
self.encodings = ( ('0000001000.00000_18+0+186a0', (t3, t3, t4)), ('0000001000.00000_18+186a0+186c8', (t3, t4, t5)), ('0000001000.00000_18-186a0+0', (t3, t2, t2)), ('0000001000.00000_18+0-186a0', (t3, t3, t2)), ('0000001000.00000_18-186a0-186c8', (t3, t2, t1)), ('0000001000.00000_18', (t3, None, None)), ('0000001000.00000_18+186a0', (t3, t4, None)), ('0000001000.00000_18-186a0', (t3, t2, None)), ('0000001000.00000_18', (t3, None, t1)), ('0000001000.00000_18-5f5e100', (t3, t0, None)), ('0000001000.00000_18+0-5f5e100', (t3, t3, t0)), ('0000001000.00000_18-5f5e100+5f45a60', (t3, t0, t2)), ) # decodings that are expected when explicit = False self.non_explicit_decodings = ( ('0000001000.00000_18', (t3, t3, t3)), ('0000001000.00000_18+186a0', (t3, t4, t4)), ('0000001000.00000_18-186a0', (t3, t2, t2)), ('0000001000.00000_18+186a0', (t3, t4, t4)), ('0000001000.00000_18-186a0', (t3, t2, t2)), ('0000001000.00000_18-5f5e100', (t3, t0, t0)), ) # decodings that are expected when explicit = True self.explicit_decodings = ( ('0000001000.00000_18+0+0', (t3, t3, t3)), ('0000001000.00000_18+0', (t3, t3, None)), ('0000001000.00000_18', (t3, None, None)), ('0000001000.00000_18+186a0', (t3, t4, None)), ('0000001000.00000_18-186a0', (t3, t2, None)), ('0000001000.00000_18-5f5e100', (t3, t0, None)), ) # decodings that are expected when explicit = True or False self.decodings = ( ('0000001000.00000_18+0+186a0', (t3, t3, t4)), ('0000001000.00000_18+186a0+186c8', (t3, t4, t5)), ('0000001000.00000_18-186a0+0', (t3, t2, t2)), ('0000001000.00000_18+0-186a0', (t3, t3, t2)), ('0000001000.00000_18-186a0-186c8', (t3, t2, t1)), ('0000001000.00000_18-5f5e100+5f45a60', (t3, t0, t2)), ) def _assertEqual(self, expected, actual, test): self.assertEqual(expected, actual, 'Got %s but expected %s for parameters %s' % (actual, expected, test)) def test_encoding(self): for test in self.explicit_encodings: actual = utils.encode_timestamps(test[1][0], test[1][1], test[1][2], True) self._assertEqual(test[0], actual, test[1]) for test in self.non_explicit_encodings: actual = utils.encode_timestamps(test[1][0], test[1][1], test[1][2], False) self._assertEqual(test[0], actual, test[1]) for explicit in (True, False): for test in self.encodings: actual = utils.encode_timestamps(test[1][0], test[1][1], test[1][2], explicit) self._assertEqual(test[0], actual, test[1]) def test_decoding(self): for test in self.explicit_decodings: actual = utils.decode_timestamps(test[0], True) self._assertEqual(test[1], actual, test[0]) for test in self.non_explicit_decodings: actual = utils.decode_timestamps(test[0], False) self._assertEqual(test[1], actual, test[0]) for explicit in (True, False): for test in self.decodings: actual = utils.decode_timestamps(test[0], explicit) self._assertEqual(test[1], actual, test[0]) class TestUtils(unittest.TestCase): """Tests for swift.common.utils """ def setUp(self): utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = 'startcap' def test_lock_path(self): tmpdir = mkdtemp() try: with utils.lock_path(tmpdir, 0.1): exc = None success = False try: with utils.lock_path(tmpdir, 0.1): success = True except LockTimeout as err: exc = err self.assertTrue(exc is not None) self.assertTrue(not success) finally: shutil.rmtree(tmpdir) def test_lock_path_num_sleeps(self): tmpdir = mkdtemp() num_short_calls = [0] exception_raised = [False] def my_sleep(to_sleep): if to_sleep == 0.01: num_short_calls[0] += 1 else: raise Exception('sleep time changed: %s' % to_sleep) try: with mock.patch('swift.common.utils.sleep', my_sleep): with 
utils.lock_path(tmpdir): with utils.lock_path(tmpdir): pass except Exception as e: exception_raised[0] = True self.assertTrue('sleep time changed' in str(e)) finally: shutil.rmtree(tmpdir) self.assertEqual(num_short_calls[0], 11) self.assertTrue(exception_raised[0]) def test_lock_path_class(self): tmpdir = mkdtemp() try: with utils.lock_path(tmpdir, 0.1, ReplicationLockTimeout): exc = None exc2 = None success = False try: with utils.lock_path(tmpdir, 0.1, ReplicationLockTimeout): success = True except ReplicationLockTimeout as err: exc = err except LockTimeout as err: exc2 = err self.assertTrue(exc is not None) self.assertTrue(exc2 is None) self.assertTrue(not success) exc = None exc2 = None success = False try: with utils.lock_path(tmpdir, 0.1): success = True except ReplicationLockTimeout as err: exc = err except LockTimeout as err: exc2 = err self.assertTrue(exc is None) self.assertTrue(exc2 is not None) self.assertTrue(not success) finally: shutil.rmtree(tmpdir) def test_normalize_timestamp(self): # Test swift.common.utils.normalize_timestamp self.assertEqual(utils.normalize_timestamp('1253327593.48174'), "1253327593.48174") self.assertEqual(utils.normalize_timestamp(1253327593.48174), "1253327593.48174") self.assertEqual(utils.normalize_timestamp('1253327593.48'), "1253327593.48000") self.assertEqual(utils.normalize_timestamp(1253327593.48), "1253327593.48000") self.assertEqual(utils.normalize_timestamp('253327593.48'), "0253327593.48000") self.assertEqual(utils.normalize_timestamp(253327593.48), "0253327593.48000") self.assertEqual(utils.normalize_timestamp('1253327593'), "1253327593.00000") self.assertEqual(utils.normalize_timestamp(1253327593), "1253327593.00000") self.assertRaises(ValueError, utils.normalize_timestamp, '') self.assertRaises(ValueError, utils.normalize_timestamp, 'abc') def test_normalize_delete_at_timestamp(self): self.assertEqual( utils.normalize_delete_at_timestamp(1253327593), '1253327593') self.assertEqual( utils.normalize_delete_at_timestamp(1253327593.67890), '1253327593') self.assertEqual( utils.normalize_delete_at_timestamp('1253327593'), '1253327593') self.assertEqual( utils.normalize_delete_at_timestamp('1253327593.67890'), '1253327593') self.assertEqual( utils.normalize_delete_at_timestamp(-1253327593), '0000000000') self.assertEqual( utils.normalize_delete_at_timestamp(-1253327593.67890), '0000000000') self.assertEqual( utils.normalize_delete_at_timestamp('-1253327593'), '0000000000') self.assertEqual( utils.normalize_delete_at_timestamp('-1253327593.67890'), '0000000000') self.assertEqual( utils.normalize_delete_at_timestamp(71253327593), '9999999999') self.assertEqual( utils.normalize_delete_at_timestamp(71253327593.67890), '9999999999') self.assertEqual( utils.normalize_delete_at_timestamp('71253327593'), '9999999999') self.assertEqual( utils.normalize_delete_at_timestamp('71253327593.67890'), '9999999999') self.assertRaises(ValueError, utils.normalize_timestamp, '') self.assertRaises(ValueError, utils.normalize_timestamp, 'abc') def test_last_modified_date_to_timestamp(self): expectations = { '1970-01-01T00:00:00.000000': 0.0, '2014-02-28T23:22:36.698390': 1393629756.698390, '2011-03-19T04:03:00.604554': 1300507380.604554, } for last_modified, ts in expectations.items(): real = utils.last_modified_date_to_timestamp(last_modified) self.assertEqual(real, ts, "failed for %s" % last_modified) def test_last_modified_date_to_timestamp_when_system_not_UTC(self): try: old_tz = os.environ.get('TZ') # Western Argentina Summer Time. 
Found in glibc manual; this # timezone always has a non-zero offset from UTC, so this test is # always meaningful. os.environ['TZ'] = 'WART4WARST,J1/0,J365/25' self.assertEqual(utils.last_modified_date_to_timestamp( '1970-01-01T00:00:00.000000'), 0.0) finally: if old_tz is not None: os.environ['TZ'] = old_tz else: os.environ.pop('TZ') def test_backwards(self): # Test swift.common.utils.backward # The lines are designed so that the function would encounter # all of the boundary conditions and typical conditions. # Block boundaries are marked with '<>' characters blocksize = 25 lines = [b'123456789x12345678><123456789\n', # block larger than rest b'123456789x123>\n', # block ends just before \n character b'123423456789\n', b'123456789x\n', # block ends at the end of line b'<123456789x123456789x123\n', b'<6789x123\n', # block ends at the beginning of the line b'6789x1234\n', b'1234><234\n', # block ends typically in the middle of line b'123456789x123456789\n'] with TemporaryFile() as f: for line in lines: f.write(line) count = len(lines) - 1 for line in utils.backward(f, blocksize): self.assertEqual(line, lines[count].split(b'\n')[0]) count -= 1 # Empty file case with TemporaryFile('r') as f: self.assertEqual([], list(utils.backward(f))) def test_mkdirs(self): testdir_base = mkdtemp() testroot = os.path.join(testdir_base, 'mkdirs') try: self.assertTrue(not os.path.exists(testroot)) utils.mkdirs(testroot) self.assertTrue(os.path.exists(testroot)) utils.mkdirs(testroot) self.assertTrue(os.path.exists(testroot)) rmtree(testroot, ignore_errors=1) testdir = os.path.join(testroot, 'one/two/three') self.assertTrue(not os.path.exists(testdir)) utils.mkdirs(testdir) self.assertTrue(os.path.exists(testdir)) utils.mkdirs(testdir) self.assertTrue(os.path.exists(testdir)) rmtree(testroot, ignore_errors=1) open(testroot, 'wb').close() self.assertTrue(not os.path.exists(testdir)) self.assertRaises(OSError, utils.mkdirs, testdir) os.unlink(testroot) finally: rmtree(testdir_base) def test_split_path(self): # Test swift.common.utils.split_account_path self.assertRaises(ValueError, utils.split_path, '') self.assertRaises(ValueError, utils.split_path, '/') self.assertRaises(ValueError, utils.split_path, '//') self.assertEqual(utils.split_path('/a'), ['a']) self.assertRaises(ValueError, utils.split_path, '//a') self.assertEqual(utils.split_path('/a/'), ['a']) self.assertRaises(ValueError, utils.split_path, '/a/c') self.assertRaises(ValueError, utils.split_path, '//c') self.assertRaises(ValueError, utils.split_path, '/a/c/') self.assertRaises(ValueError, utils.split_path, '/a//') self.assertRaises(ValueError, utils.split_path, '/a', 2) self.assertRaises(ValueError, utils.split_path, '/a', 2, 3) self.assertRaises(ValueError, utils.split_path, '/a', 2, 3, True) self.assertEqual(utils.split_path('/a/c', 2), ['a', 'c']) self.assertEqual(utils.split_path('/a/c/o', 3), ['a', 'c', 'o']) self.assertRaises(ValueError, utils.split_path, '/a/c/o/r', 3, 3) self.assertEqual(utils.split_path('/a/c/o/r', 3, 3, True), ['a', 'c', 'o/r']) self.assertEqual(utils.split_path('/a/c', 2, 3, True), ['a', 'c', None]) self.assertRaises(ValueError, utils.split_path, '/a', 5, 4) self.assertEqual(utils.split_path('/a/c/', 2), ['a', 'c']) self.assertEqual(utils.split_path('/a/c/', 2, 3), ['a', 'c', '']) try: utils.split_path('o\nn e', 2) except ValueError as err: self.assertEqual(str(err), 'Invalid path: o%0An%20e') try: utils.split_path('o\nn e', 2, 3, True) except ValueError as err: self.assertEqual(str(err), 'Invalid path: o%0An%20e') def 
test_validate_device_partition(self): # Test swift.common.utils.validate_device_partition utils.validate_device_partition('foo', 'bar') self.assertRaises(ValueError, utils.validate_device_partition, '', '') self.assertRaises(ValueError, utils.validate_device_partition, '', 'foo') self.assertRaises(ValueError, utils.validate_device_partition, 'foo', '') self.assertRaises(ValueError, utils.validate_device_partition, 'foo/bar', 'foo') self.assertRaises(ValueError, utils.validate_device_partition, 'foo', 'foo/bar') self.assertRaises(ValueError, utils.validate_device_partition, '.', 'foo') self.assertRaises(ValueError, utils.validate_device_partition, '..', 'foo') self.assertRaises(ValueError, utils.validate_device_partition, 'foo', '.') self.assertRaises(ValueError, utils.validate_device_partition, 'foo', '..') try: utils.validate_device_partition('o\nn e', 'foo') except ValueError as err: self.assertEqual(str(err), 'Invalid device: o%0An%20e') try: utils.validate_device_partition('foo', 'o\nn e') except ValueError as err: self.assertEqual(str(err), 'Invalid partition: o%0An%20e') def test_NullLogger(self): # Test swift.common.utils.NullLogger sio = StringIO() nl = utils.NullLogger() nl.write('test') self.assertEqual(sio.getvalue(), '') def test_LoggerFileObject(self): orig_stdout = sys.stdout orig_stderr = sys.stderr sio = StringIO() handler = logging.StreamHandler(sio) logger = logging.getLogger() logger.addHandler(handler) lfo_stdout = utils.LoggerFileObject(logger) lfo_stderr = utils.LoggerFileObject(logger) lfo_stderr = utils.LoggerFileObject(logger, 'STDERR') print('test1') self.assertEqual(sio.getvalue(), '') sys.stdout = lfo_stdout print('test2') self.assertEqual(sio.getvalue(), 'STDOUT: test2\n') sys.stderr = lfo_stderr print('test4', file=sys.stderr) self.assertEqual(sio.getvalue(), 'STDOUT: test2\nSTDERR: test4\n') sys.stdout = orig_stdout print('test5') self.assertEqual(sio.getvalue(), 'STDOUT: test2\nSTDERR: test4\n') print('test6', file=sys.stderr) self.assertEqual(sio.getvalue(), 'STDOUT: test2\nSTDERR: test4\n' 'STDERR: test6\n') sys.stderr = orig_stderr print('test8') self.assertEqual(sio.getvalue(), 'STDOUT: test2\nSTDERR: test4\n' 'STDERR: test6\n') lfo_stdout.writelines(['a', 'b', 'c']) self.assertEqual(sio.getvalue(), 'STDOUT: test2\nSTDERR: test4\n' 'STDERR: test6\nSTDOUT: a#012b#012c\n') lfo_stdout.close() lfo_stderr.close() lfo_stdout.write('d') self.assertEqual(sio.getvalue(), 'STDOUT: test2\nSTDERR: test4\n' 'STDERR: test6\nSTDOUT: a#012b#012c\nSTDOUT: d\n') lfo_stdout.flush() self.assertEqual(sio.getvalue(), 'STDOUT: test2\nSTDERR: test4\n' 'STDERR: test6\nSTDOUT: a#012b#012c\nSTDOUT: d\n') for lfo in (lfo_stdout, lfo_stderr): got_exc = False try: for line in lfo: pass except Exception: got_exc = True self.assertTrue(got_exc) got_exc = False try: for line in lfo: pass except Exception: got_exc = True self.assertTrue(got_exc) self.assertRaises(IOError, lfo.read) self.assertRaises(IOError, lfo.read, 1024) self.assertRaises(IOError, lfo.readline) self.assertRaises(IOError, lfo.readline, 1024) lfo.tell() def test_parse_options(self): # Get a file that is definitely on disk with NamedTemporaryFile() as f: conf_file = f.name conf, options = utils.parse_options(test_args=[conf_file]) self.assertEqual(conf, conf_file) # assert defaults self.assertEqual(options['verbose'], False) self.assertTrue('once' not in options) # assert verbose as option conf, options = utils.parse_options(test_args=[conf_file, '-v']) self.assertEqual(options['verbose'], True) # check once option conf, 
options = utils.parse_options(test_args=[conf_file], once=True) self.assertEqual(options['once'], False) test_args = [conf_file, '--once'] conf, options = utils.parse_options(test_args=test_args, once=True) self.assertEqual(options['once'], True) # check options as arg parsing test_args = [conf_file, 'once', 'plugin_name', 'verbose'] conf, options = utils.parse_options(test_args=test_args, once=True) self.assertEqual(options['verbose'], True) self.assertEqual(options['once'], True) self.assertEqual(options['extra_args'], ['plugin_name']) def test_parse_options_errors(self): orig_stdout = sys.stdout orig_stderr = sys.stderr stdo = StringIO() stde = StringIO() utils.sys.stdout = stdo utils.sys.stderr = stde self.assertRaises(SystemExit, utils.parse_options, once=True, test_args=[]) self.assertTrue('missing config' in stdo.getvalue()) # verify conf file must exist, context manager will delete temp file with NamedTemporaryFile() as f: conf_file = f.name self.assertRaises(SystemExit, utils.parse_options, once=True, test_args=[conf_file]) self.assertTrue('unable to locate' in stdo.getvalue()) # reset stdio utils.sys.stdout = orig_stdout utils.sys.stderr = orig_stderr def test_dump_recon_cache(self): testdir_base = mkdtemp() testcache_file = os.path.join(testdir_base, 'cache.recon') logger = utils.get_logger(None, 'server', log_route='server') try: submit_dict = {'key1': {'value1': 1, 'value2': 2}} utils.dump_recon_cache(submit_dict, testcache_file, logger) fd = open(testcache_file) file_dict = json.loads(fd.readline()) fd.close() self.assertEqual(submit_dict, file_dict) # Use a nested entry submit_dict = {'key1': {'key2': {'value1': 1, 'value2': 2}}} result_dict = {'key1': {'key2': {'value1': 1, 'value2': 2}, 'value1': 1, 'value2': 2}} utils.dump_recon_cache(submit_dict, testcache_file, logger) fd = open(testcache_file) file_dict = json.loads(fd.readline()) fd.close() self.assertEqual(result_dict, file_dict) finally: rmtree(testdir_base) def test_dump_recon_cache_permission_denied(self): testdir_base = mkdtemp() testcache_file = os.path.join(testdir_base, 'cache.recon') class MockLogger(object): def __init__(self): self._excs = [] def exception(self, message): _junk, exc, _junk = sys.exc_info() self._excs.append(exc) logger = MockLogger() try: submit_dict = {'key1': {'value1': 1, 'value2': 2}} with mock.patch( 'swift.common.utils.NamedTemporaryFile', side_effect=IOError(13, 'Permission Denied')): utils.dump_recon_cache(submit_dict, testcache_file, logger) self.assertIsInstance(logger._excs[0], IOError) finally: rmtree(testdir_base) def test_get_logger(self): sio = StringIO() logger = logging.getLogger('server') logger.addHandler(logging.StreamHandler(sio)) logger = utils.get_logger(None, 'server', log_route='server') logger.warning('test1') self.assertEqual(sio.getvalue(), 'test1\n') logger.debug('test2') self.assertEqual(sio.getvalue(), 'test1\n') logger = utils.get_logger({'log_level': 'DEBUG'}, 'server', log_route='server') logger.debug('test3') self.assertEqual(sio.getvalue(), 'test1\ntest3\n') # Doesn't really test that the log facility is truly being used all the # way to syslog; but exercises the code. 
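        # The assertions that follow only verify behaviour visible through
        # the stream handler attached above: the facility-configured logger
        # still emits warning/notice output and suppresses debug at the
        # default log level.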
logger = utils.get_logger({'log_facility': 'LOG_LOCAL3'}, 'server', log_route='server') logger.warning('test4') self.assertEqual(sio.getvalue(), 'test1\ntest3\ntest4\n') # make sure debug doesn't log by default logger.debug('test5') self.assertEqual(sio.getvalue(), 'test1\ntest3\ntest4\n') # make sure notice lvl logs by default logger.notice('test6') self.assertEqual(sio.getvalue(), 'test1\ntest3\ntest4\ntest6\n') def test_get_logger_sysloghandler_plumbing(self): orig_sysloghandler = utils.SysLogHandler syslog_handler_args = [] def syslog_handler_catcher(*args, **kwargs): syslog_handler_args.append((args, kwargs)) return orig_sysloghandler(*args, **kwargs) syslog_handler_catcher.LOG_LOCAL0 = orig_sysloghandler.LOG_LOCAL0 syslog_handler_catcher.LOG_LOCAL3 = orig_sysloghandler.LOG_LOCAL3 try: utils.SysLogHandler = syslog_handler_catcher utils.get_logger({ 'log_facility': 'LOG_LOCAL3', }, 'server', log_route='server') expected_args = [((), {'address': '/dev/log', 'facility': orig_sysloghandler.LOG_LOCAL3})] if not os.path.exists('/dev/log') or \ os.path.isfile('/dev/log') or \ os.path.isdir('/dev/log'): # Since socket on OSX is in /var/run/syslog, there will be # a fallback to UDP. expected_args.append( ((), {'facility': orig_sysloghandler.LOG_LOCAL3})) self.assertEqual(expected_args, syslog_handler_args) syslog_handler_args = [] utils.get_logger({ 'log_facility': 'LOG_LOCAL3', 'log_address': '/foo/bar', }, 'server', log_route='server') self.assertEqual([ ((), {'address': '/foo/bar', 'facility': orig_sysloghandler.LOG_LOCAL3}), # Second call is because /foo/bar didn't exist (and wasn't a # UNIX domain socket). ((), {'facility': orig_sysloghandler.LOG_LOCAL3})], syslog_handler_args) # Using UDP with default port syslog_handler_args = [] utils.get_logger({ 'log_udp_host': 'syslog.funtimes.com', }, 'server', log_route='server') self.assertEqual([ ((), {'address': ('syslog.funtimes.com', logging.handlers.SYSLOG_UDP_PORT), 'facility': orig_sysloghandler.LOG_LOCAL0})], syslog_handler_args) # Using UDP with non-default port syslog_handler_args = [] utils.get_logger({ 'log_udp_host': 'syslog.funtimes.com', 'log_udp_port': '2123', }, 'server', log_route='server') self.assertEqual([ ((), {'address': ('syslog.funtimes.com', 2123), 'facility': orig_sysloghandler.LOG_LOCAL0})], syslog_handler_args) finally: utils.SysLogHandler = orig_sysloghandler @reset_logger_state def test_clean_logger_exception(self): # setup stream logging sio = StringIO() logger = utils.get_logger(None) handler = logging.StreamHandler(sio) logger.logger.addHandler(handler) def strip_value(sio): sio.seek(0) v = sio.getvalue() sio.truncate(0) return v def log_exception(exc): try: raise exc except (Exception, Timeout): logger.exception('blah') try: # establish base case self.assertEqual(strip_value(sio), '') logger.info('test') self.assertEqual(strip_value(sio), 'test\n') self.assertEqual(strip_value(sio), '') logger.info('test') logger.info('test') self.assertEqual(strip_value(sio), 'test\ntest\n') self.assertEqual(strip_value(sio), '') # test OSError for en in (errno.EIO, errno.ENOSPC): log_exception(OSError(en, 'my %s error message' % en)) log_msg = strip_value(sio) self.assertTrue('Traceback' not in log_msg) self.assertTrue('my %s error message' % en in log_msg) # unfiltered log_exception(OSError()) self.assertTrue('Traceback' in strip_value(sio)) # test socket.error log_exception(socket.error(errno.ECONNREFUSED, 'my error message')) log_msg = strip_value(sio) self.assertTrue('Traceback' not in log_msg) 
self.assertTrue('errno.ECONNREFUSED message test' not in log_msg) self.assertTrue('Connection refused' in log_msg) log_exception(socket.error(errno.EHOSTUNREACH, 'my error message')) log_msg = strip_value(sio) self.assertTrue('Traceback' not in log_msg) self.assertTrue('my error message' not in log_msg) self.assertTrue('Host unreachable' in log_msg) log_exception(socket.error(errno.ETIMEDOUT, 'my error message')) log_msg = strip_value(sio) self.assertTrue('Traceback' not in log_msg) self.assertTrue('my error message' not in log_msg) self.assertTrue('Connection timeout' in log_msg) # unfiltered log_exception(socket.error(0, 'my error message')) log_msg = strip_value(sio) self.assertTrue('Traceback' in log_msg) self.assertTrue('my error message' in log_msg) # test eventlet.Timeout connection_timeout = ConnectionTimeout(42, 'my error message') log_exception(connection_timeout) log_msg = strip_value(sio) self.assertTrue('Traceback' not in log_msg) self.assertTrue('ConnectionTimeout' in log_msg) self.assertTrue('(42s)' in log_msg) self.assertTrue('my error message' not in log_msg) connection_timeout.cancel() message_timeout = MessageTimeout(42, 'my error message') log_exception(message_timeout) log_msg = strip_value(sio) self.assertTrue('Traceback' not in log_msg) self.assertTrue('MessageTimeout' in log_msg) self.assertTrue('(42s)' in log_msg) self.assertTrue('my error message' in log_msg) message_timeout.cancel() # test unhandled log_exception(Exception('my error message')) log_msg = strip_value(sio) self.assertTrue('Traceback' in log_msg) self.assertTrue('my error message' in log_msg) finally: logger.logger.removeHandler(handler) @reset_logger_state def test_swift_log_formatter_max_line_length(self): # setup stream logging sio = StringIO() logger = utils.get_logger(None) handler = logging.StreamHandler(sio) formatter = utils.SwiftLogFormatter(max_line_length=10) handler.setFormatter(formatter) logger.logger.addHandler(handler) def strip_value(sio): sio.seek(0) v = sio.getvalue() sio.truncate(0) return v try: logger.info('12345') self.assertEqual(strip_value(sio), '12345\n') logger.info('1234567890') self.assertEqual(strip_value(sio), '1234567890\n') logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '12 ... de\n') formatter.max_line_length = 11 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '123 ... cde\n') formatter.max_line_length = 0 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '1234567890abcde\n') formatter.max_line_length = 1 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '1\n') formatter.max_line_length = 2 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '12\n') formatter.max_line_length = 3 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '123\n') formatter.max_line_length = 4 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '1234\n') formatter.max_line_length = 5 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '12345\n') formatter.max_line_length = 6 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '123456\n') formatter.max_line_length = 7 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '1 ... 
e\n') formatter.max_line_length = -10 logger.info('1234567890abcde') self.assertEqual(strip_value(sio), '1234567890abcde\n') finally: logger.logger.removeHandler(handler) @reset_logger_state def test_swift_log_formatter(self): # setup stream logging sio = StringIO() logger = utils.get_logger(None) handler = logging.StreamHandler(sio) handler.setFormatter(utils.SwiftLogFormatter()) logger.logger.addHandler(handler) def strip_value(sio): sio.seek(0) v = sio.getvalue() sio.truncate(0) return v try: self.assertFalse(logger.txn_id) logger.error('my error message') log_msg = strip_value(sio) self.assertTrue('my error message' in log_msg) self.assertTrue('txn' not in log_msg) logger.txn_id = '12345' logger.error('test') log_msg = strip_value(sio) self.assertTrue('txn' in log_msg) self.assertTrue('12345' in log_msg) # test no txn on info message self.assertEqual(logger.txn_id, '12345') logger.info('test') log_msg = strip_value(sio) self.assertTrue('txn' not in log_msg) self.assertTrue('12345' not in log_msg) # test txn already in message self.assertEqual(logger.txn_id, '12345') logger.warning('test 12345 test') self.assertEqual(strip_value(sio), 'test 12345 test\n') # Test multi line collapsing logger.error('my\nerror\nmessage') log_msg = strip_value(sio) self.assertTrue('my#012error#012message' in log_msg) # test client_ip self.assertFalse(logger.client_ip) logger.error('my error message') log_msg = strip_value(sio) self.assertTrue('my error message' in log_msg) self.assertTrue('client_ip' not in log_msg) logger.client_ip = '1.2.3.4' logger.error('test') log_msg = strip_value(sio) self.assertTrue('client_ip' in log_msg) self.assertTrue('1.2.3.4' in log_msg) # test no client_ip on info message self.assertEqual(logger.client_ip, '1.2.3.4') logger.info('test') log_msg = strip_value(sio) self.assertTrue('client_ip' not in log_msg) self.assertTrue('1.2.3.4' not in log_msg) # test client_ip (and txn) already in message self.assertEqual(logger.client_ip, '1.2.3.4') logger.warning('test 1.2.3.4 test 12345') self.assertEqual(strip_value(sio), 'test 1.2.3.4 test 12345\n') finally: logger.logger.removeHandler(handler) def test_storage_directory(self): self.assertEqual(utils.storage_directory('objects', '1', 'ABCDEF'), 'objects/1/DEF/ABCDEF') def test_expand_ipv6(self): expanded_ipv6 = "fe80::204:61ff:fe9d:f156" upper_ipv6 = "fe80:0000:0000:0000:0204:61ff:fe9d:f156" self.assertEqual(expanded_ipv6, utils.expand_ipv6(upper_ipv6)) omit_ipv6 = "fe80:0000:0000::0204:61ff:fe9d:f156" self.assertEqual(expanded_ipv6, utils.expand_ipv6(omit_ipv6)) less_num_ipv6 = "fe80:0:00:000:0204:61ff:fe9d:f156" self.assertEqual(expanded_ipv6, utils.expand_ipv6(less_num_ipv6)) def test_whataremyips(self): myips = utils.whataremyips() self.assertTrue(len(myips) > 1) self.assertTrue('127.0.0.1' in myips) def test_whataremyips_bind_to_all(self): for any_addr in ('0.0.0.0', '0000:0000:0000:0000:0000:0000:0000:0000', '::0', '::0000', '::', # Wacky parse-error input produces all IPs 'I am a bear'): myips = utils.whataremyips(any_addr) self.assertTrue(len(myips) > 1) self.assertTrue('127.0.0.1' in myips) def test_whataremyips_bind_ip_specific(self): self.assertEqual(['1.2.3.4'], utils.whataremyips('1.2.3.4')) def test_whataremyips_error(self): def my_interfaces(): return ['eth0'] def my_ifaddress_error(interface): raise ValueError with patch('netifaces.interfaces', my_interfaces), \ patch('netifaces.ifaddresses', my_ifaddress_error): self.assertEqual(utils.whataremyips(), []) def test_whataremyips_ipv6(self): test_ipv6_address = 
'2001:6b0:dead:beef:2::32' test_interface = 'eth0' def my_ipv6_interfaces(): return ['eth0'] def my_ipv6_ifaddresses(interface): return {AF_INET6: [{'netmask': 'ffff:ffff:ffff:ffff::', 'addr': '%s%%%s' % (test_ipv6_address, test_interface)}]} with patch('netifaces.interfaces', my_ipv6_interfaces), \ patch('netifaces.ifaddresses', my_ipv6_ifaddresses): myips = utils.whataremyips() self.assertEqual(len(myips), 1) self.assertEqual(myips[0], test_ipv6_address) def test_hash_path(self): # Yes, these tests are deliberately very fragile. We want to make sure # that if someones changes the results hash_path produces, they know it with mock.patch('swift.common.utils.HASH_PATH_PREFIX', ''): self.assertEqual(utils.hash_path('a'), '1c84525acb02107ea475dcd3d09c2c58') self.assertEqual(utils.hash_path('a', 'c'), '33379ecb053aa5c9e356c68997cbb59e') self.assertEqual(utils.hash_path('a', 'c', 'o'), '06fbf0b514e5199dfc4e00f42eb5ea83') self.assertEqual(utils.hash_path('a', 'c', 'o', raw_digest=False), '06fbf0b514e5199dfc4e00f42eb5ea83') self.assertEqual(utils.hash_path('a', 'c', 'o', raw_digest=True), '\x06\xfb\xf0\xb5\x14\xe5\x19\x9d\xfcN' '\x00\xf4.\xb5\xea\x83') self.assertRaises(ValueError, utils.hash_path, 'a', object='o') utils.HASH_PATH_PREFIX = 'abcdef' self.assertEqual(utils.hash_path('a', 'c', 'o', raw_digest=False), '363f9b535bfb7d17a43a46a358afca0e') def test_validate_hash_conf(self): # no section causes InvalidHashPathConfigError self._test_validate_hash_conf([], [], True) # 'swift-hash' section is there but no options causes # InvalidHashPathConfigError self._test_validate_hash_conf(['swift-hash'], [], True) # if we have the section and either of prefix or suffix, # InvalidHashPathConfigError doesn't occur self._test_validate_hash_conf( ['swift-hash'], ['swift_hash_path_prefix'], False) self._test_validate_hash_conf( ['swift-hash'], ['swift_hash_path_suffix'], False) # definitely, we have the section and both of them, # InvalidHashPathConfigError doesn't occur self._test_validate_hash_conf( ['swift-hash'], ['swift_hash_path_suffix', 'swift_hash_path_prefix'], False) # But invalid section name should make an error even if valid # options are there self._test_validate_hash_conf( ['swift-hash-xxx'], ['swift_hash_path_suffix', 'swift_hash_path_prefix'], True) def _test_validate_hash_conf(self, sections, options, should_raise_error): class FakeConfigParser(object): def read(self, conf_path): return True def get(self, section, option): if section not in sections: raise NoSectionError('section error') elif option not in options: raise NoOptionError('option error', 'this option') else: return 'some_option_value' with mock.patch('swift.common.utils.HASH_PATH_PREFIX', ''), \ mock.patch('swift.common.utils.HASH_PATH_SUFFIX', ''), \ mock.patch('swift.common.utils.ConfigParser', FakeConfigParser): try: utils.validate_hash_conf() except utils.InvalidHashPathConfigError: if not should_raise_error: self.fail('validate_hash_conf should not raise an error') else: if should_raise_error: self.fail('validate_hash_conf should raise an error') def test_load_libc_function(self): self.assertTrue(callable( utils.load_libc_function('printf'))) self.assertTrue(callable( utils.load_libc_function('some_not_real_function'))) self.assertRaises(AttributeError, utils.load_libc_function, 'some_not_real_function', fail_if_missing=True) def test_readconf(self): conf = '''[section1] foo = bar [section2] log_name = yarr''' # setup a real file fd, temppath = tempfile.mkstemp(dir='/tmp') with os.fdopen(fd, 'wb') as f: f.write(conf) 
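        # readconf() is expected to parse this sample into a dict keyed by
        # section (plus '__file__' and 'log_name'), whether it is handed a
        # filename or a file-like object; the 'expected' dicts below spell
        # this out for both cases.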
make_filename = lambda: temppath # setup a file stream make_fp = lambda: StringIO(conf) for conf_object_maker in (make_filename, make_fp): conffile = conf_object_maker() result = utils.readconf(conffile) expected = {'__file__': conffile, 'log_name': None, 'section1': {'foo': 'bar'}, 'section2': {'log_name': 'yarr'}} self.assertEqual(result, expected) conffile = conf_object_maker() result = utils.readconf(conffile, 'section1') expected = {'__file__': conffile, 'log_name': 'section1', 'foo': 'bar'} self.assertEqual(result, expected) conffile = conf_object_maker() result = utils.readconf(conffile, 'section2').get('log_name') expected = 'yarr' self.assertEqual(result, expected) conffile = conf_object_maker() result = utils.readconf(conffile, 'section1', log_name='foo').get('log_name') expected = 'foo' self.assertEqual(result, expected) conffile = conf_object_maker() result = utils.readconf(conffile, 'section1', defaults={'bar': 'baz'}) expected = {'__file__': conffile, 'log_name': 'section1', 'foo': 'bar', 'bar': 'baz'} self.assertEqual(result, expected) self.assertRaises(SystemExit, utils.readconf, temppath, 'section3') os.unlink(temppath) self.assertRaises(SystemExit, utils.readconf, temppath) def test_readconf_raw(self): conf = '''[section1] foo = bar [section2] log_name = %(yarr)s''' # setup a real file fd, temppath = tempfile.mkstemp(dir='/tmp') with os.fdopen(fd, 'wb') as f: f.write(conf) make_filename = lambda: temppath # setup a file stream make_fp = lambda: StringIO(conf) for conf_object_maker in (make_filename, make_fp): conffile = conf_object_maker() result = utils.readconf(conffile, raw=True) expected = {'__file__': conffile, 'log_name': None, 'section1': {'foo': 'bar'}, 'section2': {'log_name': '%(yarr)s'}} self.assertEqual(result, expected) os.unlink(temppath) self.assertRaises(SystemExit, utils.readconf, temppath) def test_readconf_dir(self): config_dir = { 'server.conf.d/01.conf': """ [DEFAULT] port = 8080 foo = bar [section1] name=section1 """, 'server.conf.d/section2.conf': """ [DEFAULT] port = 8081 bar = baz [section2] name=section2 """, 'other-server.conf.d/01.conf': """ [DEFAULT] port = 8082 [section3] name=section3 """ } # strip indent from test config contents config_dir = dict((f, dedent(c)) for (f, c) in config_dir.items()) with temptree(*zip(*config_dir.items())) as path: conf_dir = os.path.join(path, 'server.conf.d') conf = utils.readconf(conf_dir) expected = { '__file__': os.path.join(path, 'server.conf.d'), 'log_name': None, 'section1': { 'port': '8081', 'foo': 'bar', 'bar': 'baz', 'name': 'section1', }, 'section2': { 'port': '8081', 'foo': 'bar', 'bar': 'baz', 'name': 'section2', }, } self.assertEqual(conf, expected) def test_readconf_dir_ignores_hidden_and_nondotconf_files(self): config_dir = { 'server.conf.d/01.conf': """ [section1] port = 8080 """, 'server.conf.d/.01.conf.swp': """ [section] port = 8081 """, 'server.conf.d/01.conf-bak': """ [section] port = 8082 """, } # strip indent from test config contents config_dir = dict((f, dedent(c)) for (f, c) in config_dir.items()) with temptree(*zip(*config_dir.items())) as path: conf_dir = os.path.join(path, 'server.conf.d') conf = utils.readconf(conf_dir) expected = { '__file__': os.path.join(path, 'server.conf.d'), 'log_name': None, 'section1': { 'port': '8080', }, } self.assertEqual(conf, expected) def test_drop_privileges(self): user = getuser() # over-ride os with mock required_func_calls = ('setgroups', 'setgid', 'setuid', 'setsid', 'chdir', 'umask') utils.os = MockOs(called_funcs=required_func_calls) # 
exercise the code utils.drop_privileges(user) for func in required_func_calls: self.assertTrue(utils.os.called_funcs[func]) import pwd self.assertEqual(pwd.getpwnam(user)[5], utils.os.environ['HOME']) groups = [g.gr_gid for g in grp.getgrall() if user in g.gr_mem] groups.append(pwd.getpwnam(user).pw_gid) self.assertEqual(set(groups), set(os.getgroups())) # reset; test same args, OSError trying to get session leader utils.os = MockOs(called_funcs=required_func_calls, raise_funcs=('setsid',)) for func in required_func_calls: self.assertFalse(utils.os.called_funcs.get(func, False)) utils.drop_privileges(user) for func in required_func_calls: self.assertTrue(utils.os.called_funcs[func]) def test_drop_privileges_no_call_setsid(self): user = getuser() # over-ride os with mock required_func_calls = ('setgroups', 'setgid', 'setuid', 'chdir', 'umask') bad_func_calls = ('setsid',) utils.os = MockOs(called_funcs=required_func_calls, raise_funcs=bad_func_calls) # exercise the code utils.drop_privileges(user, call_setsid=False) for func in required_func_calls: self.assertTrue(utils.os.called_funcs[func]) for func in bad_func_calls: self.assertTrue(func not in utils.os.called_funcs) @reset_logger_state def test_capture_stdio(self): # stubs logger = utils.get_logger(None, 'dummy') # mock utils system modules _orig_sys = utils.sys _orig_os = utils.os try: utils.sys = MockSys() utils.os = MockOs() # basic test utils.capture_stdio(logger) self.assertTrue(utils.sys.excepthook is not None) self.assertEqual(utils.os.closed_fds, utils.sys.stdio_fds) self.assertTrue( isinstance(utils.sys.stdout, utils.LoggerFileObject)) self.assertTrue( isinstance(utils.sys.stderr, utils.LoggerFileObject)) # reset; test same args, but exc when trying to close stdio utils.os = MockOs(raise_funcs=('dup2',)) utils.sys = MockSys() # test unable to close stdio utils.capture_stdio(logger) self.assertTrue(utils.sys.excepthook is not None) self.assertEqual(utils.os.closed_fds, []) self.assertTrue( isinstance(utils.sys.stdout, utils.LoggerFileObject)) self.assertTrue( isinstance(utils.sys.stderr, utils.LoggerFileObject)) # reset; test some other args utils.os = MockOs() utils.sys = MockSys() logger = utils.get_logger(None, log_to_console=True) # test console log utils.capture_stdio(logger, capture_stdout=False, capture_stderr=False) self.assertTrue(utils.sys.excepthook is not None) # when logging to console, stderr remains open self.assertEqual(utils.os.closed_fds, utils.sys.stdio_fds[:2]) reset_loggers() # stdio not captured self.assertFalse(isinstance(utils.sys.stdout, utils.LoggerFileObject)) self.assertFalse(isinstance(utils.sys.stderr, utils.LoggerFileObject)) finally: utils.sys = _orig_sys utils.os = _orig_os @reset_logger_state def test_get_logger_console(self): logger = utils.get_logger(None) console_handlers = [h for h in logger.logger.handlers if isinstance(h, logging.StreamHandler)] self.assertFalse(console_handlers) logger = utils.get_logger(None, log_to_console=True) console_handlers = [h for h in logger.logger.handlers if isinstance(h, logging.StreamHandler)] self.assertTrue(console_handlers) # make sure you can't have two console handlers self.assertEqual(len(console_handlers), 1) old_handler = console_handlers[0] logger = utils.get_logger(None, log_to_console=True) console_handlers = [h for h in logger.logger.handlers if isinstance(h, logging.StreamHandler)] self.assertEqual(len(console_handlers), 1) new_handler = console_handlers[0] self.assertNotEqual(new_handler, old_handler) def verify_under_pseudo_time( self, func, 
target_runtime_ms=1, *args, **kwargs): curr_time = [42.0] def my_time(): curr_time[0] += 0.001 return curr_time[0] def my_sleep(duration): curr_time[0] += 0.001 curr_time[0] += duration with patch('time.time', my_time), \ patch('time.sleep', my_sleep), \ patch('eventlet.sleep', my_sleep): start = time.time() func(*args, **kwargs) # make sure it's accurate to 10th of a second, converting the time # difference to milliseconds, 100 milliseconds is 1/10 of a second diff_from_target_ms = abs( target_runtime_ms - ((time.time() - start) * 1000)) self.assertTrue(diff_from_target_ms < 100, "Expected %d < 100" % diff_from_target_ms) def test_ratelimit_sleep(self): def testfunc(): running_time = 0 for i in range(100): running_time = utils.ratelimit_sleep(running_time, -5) self.verify_under_pseudo_time(testfunc, target_runtime_ms=1) def testfunc(): running_time = 0 for i in range(100): running_time = utils.ratelimit_sleep(running_time, 0) self.verify_under_pseudo_time(testfunc, target_runtime_ms=1) def testfunc(): running_time = 0 for i in range(50): running_time = utils.ratelimit_sleep(running_time, 200) self.verify_under_pseudo_time(testfunc, target_runtime_ms=250) def test_ratelimit_sleep_with_incr(self): def testfunc(): running_time = 0 vals = [5, 17, 0, 3, 11, 30, 40, 4, 13, 2, -1] * 2 # adds up to 248 total = 0 for i in vals: running_time = utils.ratelimit_sleep(running_time, 500, incr_by=i) total += i self.assertEqual(248, total) self.verify_under_pseudo_time(testfunc, target_runtime_ms=500) def test_ratelimit_sleep_with_sleep(self): def testfunc(): running_time = 0 sleeps = [0] * 7 + [.2] * 3 + [0] * 30 for i in sleeps: running_time = utils.ratelimit_sleep(running_time, 40, rate_buffer=1) time.sleep(i) self.verify_under_pseudo_time(testfunc, target_runtime_ms=900) def test_urlparse(self): parsed = utils.urlparse('http://127.0.0.1/') self.assertEqual(parsed.scheme, 'http') self.assertEqual(parsed.hostname, '127.0.0.1') self.assertEqual(parsed.path, '/') parsed = utils.urlparse('http://127.0.0.1:8080/') self.assertEqual(parsed.port, 8080) parsed = utils.urlparse('https://127.0.0.1/') self.assertEqual(parsed.scheme, 'https') parsed = utils.urlparse('http://[::1]/') self.assertEqual(parsed.hostname, '::1') parsed = utils.urlparse('http://[::1]:8080/') self.assertEqual(parsed.hostname, '::1') self.assertEqual(parsed.port, 8080) parsed = utils.urlparse('www.example.com') self.assertEqual(parsed.hostname, '') def test_search_tree(self): # file match & ext miss with temptree(['asdf.conf', 'blarg.conf', 'asdf.cfg']) as t: asdf = utils.search_tree(t, 'a*', '.conf') self.assertEqual(len(asdf), 1) self.assertEqual(asdf[0], os.path.join(t, 'asdf.conf')) # multi-file match & glob miss & sort with temptree(['application.bin', 'apple.bin', 'apropos.bin']) as t: app_bins = utils.search_tree(t, 'app*', 'bin') self.assertEqual(len(app_bins), 2) self.assertEqual(app_bins[0], os.path.join(t, 'apple.bin')) self.assertEqual(app_bins[1], os.path.join(t, 'application.bin')) # test file in folder & ext miss & glob miss files = ( 'sub/file1.ini', 'sub/file2.conf', 'sub.bin', 'bus.ini', 'bus/file3.ini', ) with temptree(files) as t: sub_ini = utils.search_tree(t, 'sub*', '.ini') self.assertEqual(len(sub_ini), 1) self.assertEqual(sub_ini[0], os.path.join(t, 'sub/file1.ini')) # test multi-file in folder & sub-folder & ext miss & glob miss files = ( 'folder_file.txt', 'folder/1.txt', 'folder/sub/2.txt', 'folder2/3.txt', 'Folder3/4.txt' 'folder.rc', ) with temptree(files) as t: folder_texts = utils.search_tree(t, 'folder*', 
'.txt') self.assertEqual(len(folder_texts), 4) f1 = os.path.join(t, 'folder_file.txt') f2 = os.path.join(t, 'folder/1.txt') f3 = os.path.join(t, 'folder/sub/2.txt') f4 = os.path.join(t, 'folder2/3.txt') for f in [f1, f2, f3, f4]: self.assertTrue(f in folder_texts) def test_search_tree_with_directory_ext_match(self): files = ( 'object-server/object-server.conf-base', 'object-server/1.conf.d/base.conf', 'object-server/1.conf.d/1.conf', 'object-server/2.conf.d/base.conf', 'object-server/2.conf.d/2.conf', 'object-server/3.conf.d/base.conf', 'object-server/3.conf.d/3.conf', 'object-server/4.conf.d/base.conf', 'object-server/4.conf.d/4.conf', ) with temptree(files) as t: conf_dirs = utils.search_tree(t, 'object-server', '.conf', dir_ext='conf.d') self.assertEqual(len(conf_dirs), 4) for i in range(4): conf_dir = os.path.join(t, 'object-server/%d.conf.d' % (i + 1)) self.assertTrue(conf_dir in conf_dirs) def test_search_tree_conf_dir_with_named_conf_match(self): files = ( 'proxy-server/proxy-server.conf.d/base.conf', 'proxy-server/proxy-server.conf.d/pipeline.conf', 'proxy-server/proxy-noauth.conf.d/base.conf', 'proxy-server/proxy-noauth.conf.d/pipeline.conf', ) with temptree(files) as t: conf_dirs = utils.search_tree(t, 'proxy-server', 'noauth.conf', dir_ext='noauth.conf.d') self.assertEqual(len(conf_dirs), 1) conf_dir = conf_dirs[0] expected = os.path.join(t, 'proxy-server/proxy-noauth.conf.d') self.assertEqual(conf_dir, expected) def test_search_tree_conf_dir_pid_with_named_conf_match(self): files = ( 'proxy-server/proxy-server.pid.d', 'proxy-server/proxy-noauth.pid.d', ) with temptree(files) as t: pid_files = utils.search_tree(t, 'proxy-server', exts=['noauth.pid', 'noauth.pid.d']) self.assertEqual(len(pid_files), 1) pid_file = pid_files[0] expected = os.path.join(t, 'proxy-server/proxy-noauth.pid.d') self.assertEqual(pid_file, expected) def test_write_file(self): with temptree([]) as t: file_name = os.path.join(t, 'test') utils.write_file(file_name, 'test') with open(file_name, 'r') as f: contents = f.read() self.assertEqual(contents, 'test') # and also subdirs file_name = os.path.join(t, 'subdir/test2') utils.write_file(file_name, 'test2') with open(file_name, 'r') as f: contents = f.read() self.assertEqual(contents, 'test2') # but can't over-write files file_name = os.path.join(t, 'subdir/test2/test3') self.assertRaises(IOError, utils.write_file, file_name, 'test3') def test_remove_file(self): with temptree([]) as t: file_name = os.path.join(t, 'blah.pid') # assert no raise self.assertEqual(os.path.exists(file_name), False) self.assertEqual(utils.remove_file(file_name), None) with open(file_name, 'w') as f: f.write('1') self.assertTrue(os.path.exists(file_name)) self.assertEqual(utils.remove_file(file_name), None) self.assertFalse(os.path.exists(file_name)) def test_human_readable(self): self.assertEqual(utils.human_readable(0), '0') self.assertEqual(utils.human_readable(1), '1') self.assertEqual(utils.human_readable(10), '10') self.assertEqual(utils.human_readable(100), '100') self.assertEqual(utils.human_readable(999), '999') self.assertEqual(utils.human_readable(1024), '1Ki') self.assertEqual(utils.human_readable(1535), '1Ki') self.assertEqual(utils.human_readable(1536), '2Ki') self.assertEqual(utils.human_readable(1047552), '1023Ki') self.assertEqual(utils.human_readable(1048063), '1023Ki') self.assertEqual(utils.human_readable(1048064), '1Mi') self.assertEqual(utils.human_readable(1048576), '1Mi') self.assertEqual(utils.human_readable(1073741824), '1Gi') 
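        # human_readable() scales by powers of 1024 and rounds to the
        # nearest whole unit (hence 1535 -> '1Ki' but 1536 -> '2Ki' above);
        # the remaining assertions walk the suffixes up through 'Yi'.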
self.assertEqual(utils.human_readable(1099511627776), '1Ti') self.assertEqual(utils.human_readable(1125899906842624), '1Pi') self.assertEqual(utils.human_readable(1152921504606846976), '1Ei') self.assertEqual(utils.human_readable(1180591620717411303424), '1Zi') self.assertEqual(utils.human_readable(1208925819614629174706176), '1Yi') self.assertEqual(utils.human_readable(1237940039285380274899124224), '1024Yi') def test_validate_sync_to(self): fname = 'container-sync-realms.conf' fcontents = ''' [US] key = 9ff3b71c849749dbaec4ccdd3cbab62b cluster_dfw1 = http://dfw1.host/v1/ ''' with temptree([fname], [fcontents]) as tempdir: logger = FakeLogger() fpath = os.path.join(tempdir, fname) csr = ContainerSyncRealms(fpath, logger) for realms_conf in (None, csr): for goodurl, result in ( ('http://1.1.1.1/v1/a/c', (None, 'http://1.1.1.1/v1/a/c', None, None)), ('http://1.1.1.1:8080/a/c', (None, 'http://1.1.1.1:8080/a/c', None, None)), ('http://2.2.2.2/a/c', (None, 'http://2.2.2.2/a/c', None, None)), ('https://1.1.1.1/v1/a/c', (None, 'https://1.1.1.1/v1/a/c', None, None)), ('//US/DFW1/a/c', (None, 'http://dfw1.host/v1/a/c', 'US', '9ff3b71c849749dbaec4ccdd3cbab62b')), ('//us/DFW1/a/c', (None, 'http://dfw1.host/v1/a/c', 'US', '9ff3b71c849749dbaec4ccdd3cbab62b')), ('//us/dfw1/a/c', (None, 'http://dfw1.host/v1/a/c', 'US', '9ff3b71c849749dbaec4ccdd3cbab62b')), ('//', (None, None, None, None)), ('', (None, None, None, None))): if goodurl.startswith('//') and not realms_conf: self.assertEqual( utils.validate_sync_to( goodurl, ['1.1.1.1', '2.2.2.2'], realms_conf), (None, None, None, None)) else: self.assertEqual( utils.validate_sync_to( goodurl, ['1.1.1.1', '2.2.2.2'], realms_conf), result) for badurl, result in ( ('http://1.1.1.1', ('Path required in X-Container-Sync-To', None, None, None)), ('httpq://1.1.1.1/v1/a/c', ('Invalid scheme \'httpq\' in X-Container-Sync-To, ' 'must be "//", "http", or "https".', None, None, None)), ('http://1.1.1.1/v1/a/c?query', ('Params, queries, and fragments not allowed in ' 'X-Container-Sync-To', None, None, None)), ('http://1.1.1.1/v1/a/c#frag', ('Params, queries, and fragments not allowed in ' 'X-Container-Sync-To', None, None, None)), ('http://1.1.1.1/v1/a/c?query#frag', ('Params, queries, and fragments not allowed in ' 'X-Container-Sync-To', None, None, None)), ('http://1.1.1.1/v1/a/c?query=param', ('Params, queries, and fragments not allowed in ' 'X-Container-Sync-To', None, None, None)), ('http://1.1.1.1/v1/a/c?query=param#frag', ('Params, queries, and fragments not allowed in ' 'X-Container-Sync-To', None, None, None)), ('http://1.1.1.2/v1/a/c', ("Invalid host '1.1.1.2' in X-Container-Sync-To", None, None, None)), ('//us/invalid/a/c', ("No cluster endpoint for 'us' 'invalid'", None, None, None)), ('//invalid/dfw1/a/c', ("No realm key for 'invalid'", None, None, None)), ('//us/invalid1/a/', ("Invalid X-Container-Sync-To format " "'//us/invalid1/a/'", None, None, None)), ('//us/invalid1/a', ("Invalid X-Container-Sync-To format " "'//us/invalid1/a'", None, None, None)), ('//us/invalid1/', ("Invalid X-Container-Sync-To format " "'//us/invalid1/'", None, None, None)), ('//us/invalid1', ("Invalid X-Container-Sync-To format " "'//us/invalid1'", None, None, None)), ('//us/', ("Invalid X-Container-Sync-To format " "'//us/'", None, None, None)), ('//us', ("Invalid X-Container-Sync-To format " "'//us'", None, None, None))): if badurl.startswith('//') and not realms_conf: self.assertEqual( utils.validate_sync_to( badurl, ['1.1.1.1', '2.2.2.2'], realms_conf), (None, None, None, 
None)) else: self.assertEqual( utils.validate_sync_to( badurl, ['1.1.1.1', '2.2.2.2'], realms_conf), result) def test_TRUE_VALUES(self): for v in utils.TRUE_VALUES: self.assertEqual(v, v.lower()) def test_config_true_value(self): orig_trues = utils.TRUE_VALUES try: utils.TRUE_VALUES = 'hello world'.split() for val in 'hello world HELLO WORLD'.split(): self.assertTrue(utils.config_true_value(val) is True) self.assertTrue(utils.config_true_value(True) is True) self.assertTrue(utils.config_true_value('foo') is False) self.assertTrue(utils.config_true_value(False) is False) finally: utils.TRUE_VALUES = orig_trues def test_config_auto_int_value(self): expectations = { # (value, default) : expected, ('1', 0): 1, (1, 0): 1, ('asdf', 0): ValueError, ('auto', 1): 1, ('AutO', 1): 1, ('Aut0', 1): ValueError, (None, 1): 1, } for (value, default), expected in expectations.items(): try: rv = utils.config_auto_int_value(value, default) except Exception as e: if e.__class__ is not expected: raise else: self.assertEqual(expected, rv) def test_streq_const_time(self): self.assertTrue(utils.streq_const_time('abc123', 'abc123')) self.assertFalse(utils.streq_const_time('a', 'aaaaa')) self.assertFalse(utils.streq_const_time('ABC123', 'abc123')) def test_replication_quorum_size(self): expected_sizes = {1: 1, 2: 2, 3: 2, 4: 3, 5: 3} got_sizes = dict([(n, utils.quorum_size(n)) for n in expected_sizes]) self.assertEqual(expected_sizes, got_sizes) def test_rsync_ip_ipv4_localhost(self): self.assertEqual(utils.rsync_ip('127.0.0.1'), '127.0.0.1') def test_rsync_ip_ipv6_random_ip(self): self.assertEqual( utils.rsync_ip('fe80:0000:0000:0000:0202:b3ff:fe1e:8329'), '[fe80:0000:0000:0000:0202:b3ff:fe1e:8329]') def test_rsync_ip_ipv6_ipv4_compatible(self): self.assertEqual( utils.rsync_ip('::ffff:192.0.2.128'), '[::ffff:192.0.2.128]') def test_rsync_module_interpolation(self): fake_device = {'ip': '127.0.0.1', 'port': 11, 'replication_ip': '127.0.0.2', 'replication_port': 12, 'region': '1', 'zone': '2', 'device': 'sda1', 'meta': 'just_a_string'} self.assertEqual( utils.rsync_module_interpolation('{ip}', fake_device), '127.0.0.1') self.assertEqual( utils.rsync_module_interpolation('{port}', fake_device), '11') self.assertEqual( utils.rsync_module_interpolation('{replication_ip}', fake_device), '127.0.0.2') self.assertEqual( utils.rsync_module_interpolation('{replication_port}', fake_device), '12') self.assertEqual( utils.rsync_module_interpolation('{region}', fake_device), '1') self.assertEqual( utils.rsync_module_interpolation('{zone}', fake_device), '2') self.assertEqual( utils.rsync_module_interpolation('{device}', fake_device), 'sda1') self.assertEqual( utils.rsync_module_interpolation('{meta}', fake_device), 'just_a_string') self.assertEqual( utils.rsync_module_interpolation('{replication_ip}::object', fake_device), '127.0.0.2::object') self.assertEqual( utils.rsync_module_interpolation('{ip}::container{port}', fake_device), '127.0.0.1::container11') self.assertEqual( utils.rsync_module_interpolation( '{replication_ip}::object_{device}', fake_device), '127.0.0.2::object_sda1') self.assertEqual( utils.rsync_module_interpolation( '127.0.0.3::object_{replication_port}', fake_device), '127.0.0.3::object_12') self.assertRaises(ValueError, utils.rsync_module_interpolation, '{replication_ip}::object_{deivce}', fake_device) def test_fallocate_reserve(self): class StatVFS(object): f_frsize = 1024 f_bavail = 1 def fstatvfs(fd): return StatVFS() orig_FALLOCATE_RESERVE = utils.FALLOCATE_RESERVE orig_fstatvfs = utils.os.fstatvfs try: 
fallocate = utils.FallocateWrapper(noop=True) utils.os.fstatvfs = fstatvfs # Want 1023 reserved, have 1024 * 1 free, so succeeds utils.FALLOCATE_RESERVE = 1023 StatVFS.f_frsize = 1024 StatVFS.f_bavail = 1 self.assertEqual(fallocate(0, 1, 0, ctypes.c_uint64(0)), 0) # Want 1023 reserved, have 512 * 2 free, so succeeds utils.FALLOCATE_RESERVE = 1023 StatVFS.f_frsize = 512 StatVFS.f_bavail = 2 self.assertEqual(fallocate(0, 1, 0, ctypes.c_uint64(0)), 0) # Want 1024 reserved, have 1024 * 1 free, so fails utils.FALLOCATE_RESERVE = 1024 StatVFS.f_frsize = 1024 StatVFS.f_bavail = 1 exc = None try: fallocate(0, 1, 0, ctypes.c_uint64(0)) except OSError as err: exc = err self.assertEqual(str(exc), 'FALLOCATE_RESERVE fail 1024 <= 1024') # Want 1024 reserved, have 512 * 2 free, so fails utils.FALLOCATE_RESERVE = 1024 StatVFS.f_frsize = 512 StatVFS.f_bavail = 2 exc = None try: fallocate(0, 1, 0, ctypes.c_uint64(0)) except OSError as err: exc = err self.assertEqual(str(exc), 'FALLOCATE_RESERVE fail 1024 <= 1024') # Want 2048 reserved, have 1024 * 1 free, so fails utils.FALLOCATE_RESERVE = 2048 StatVFS.f_frsize = 1024 StatVFS.f_bavail = 1 exc = None try: fallocate(0, 1, 0, ctypes.c_uint64(0)) except OSError as err: exc = err self.assertEqual(str(exc), 'FALLOCATE_RESERVE fail 1024 <= 2048') # Want 2048 reserved, have 512 * 2 free, so fails utils.FALLOCATE_RESERVE = 2048 StatVFS.f_frsize = 512 StatVFS.f_bavail = 2 exc = None try: fallocate(0, 1, 0, ctypes.c_uint64(0)) except OSError as err: exc = err self.assertEqual(str(exc), 'FALLOCATE_RESERVE fail 1024 <= 2048') # Want 1023 reserved, have 1024 * 1 free, but file size is 1, so # fails utils.FALLOCATE_RESERVE = 1023 StatVFS.f_frsize = 1024 StatVFS.f_bavail = 1 exc = None try: fallocate(0, 1, 0, ctypes.c_uint64(1)) except OSError as err: exc = err self.assertEqual(str(exc), 'FALLOCATE_RESERVE fail 1023 <= 1023') # Want 1022 reserved, have 1024 * 1 free, and file size is 1, so # succeeds utils.FALLOCATE_RESERVE = 1022 StatVFS.f_frsize = 1024 StatVFS.f_bavail = 1 self.assertEqual(fallocate(0, 1, 0, ctypes.c_uint64(1)), 0) # Want 1023 reserved, have 1024 * 1 free, and file size is 0, so # succeeds utils.FALLOCATE_RESERVE = 1023 StatVFS.f_frsize = 1024 StatVFS.f_bavail = 1 self.assertEqual(fallocate(0, 1, 0, ctypes.c_uint64(0)), 0) # Want 1024 reserved, have 1024 * 1 free, and even though # file size is 0, since we're under the reserve, fails utils.FALLOCATE_RESERVE = 1024 StatVFS.f_frsize = 1024 StatVFS.f_bavail = 1 exc = None try: fallocate(0, 1, 0, ctypes.c_uint64(0)) except OSError as err: exc = err self.assertEqual(str(exc), 'FALLOCATE_RESERVE fail 1024 <= 1024') finally: utils.FALLOCATE_RESERVE = orig_FALLOCATE_RESERVE utils.os.fstatvfs = orig_fstatvfs def test_fallocate_func(self): class FallocateWrapper(object): def __init__(self): self.last_call = None def __call__(self, *args): self.last_call = list(args) self.last_call[-1] = self.last_call[-1].value return 0 orig__sys_fallocate = utils._sys_fallocate try: utils._sys_fallocate = FallocateWrapper() # Ensure fallocate calls _sys_fallocate even with 0 bytes utils._sys_fallocate.last_call = None utils.fallocate(1234, 0) self.assertEqual(utils._sys_fallocate.last_call, [1234, 1, 0, 0]) # Ensure fallocate calls _sys_fallocate even with negative bytes utils._sys_fallocate.last_call = None utils.fallocate(1234, -5678) self.assertEqual(utils._sys_fallocate.last_call, [1234, 1, 0, 0]) # Ensure fallocate calls _sys_fallocate properly with positive # bytes utils._sys_fallocate.last_call = None 
utils.fallocate(1234, 1) self.assertEqual(utils._sys_fallocate.last_call, [1234, 1, 0, 1]) utils._sys_fallocate.last_call = None utils.fallocate(1234, 10 * 1024 * 1024 * 1024) self.assertEqual(utils._sys_fallocate.last_call, [1234, 1, 0, 10 * 1024 * 1024 * 1024]) finally: utils._sys_fallocate = orig__sys_fallocate def test_generate_trans_id(self): fake_time = 1366428370.5163341 with patch.object(utils.time, 'time', return_value=fake_time): trans_id = utils.generate_trans_id('') self.assertEqual(len(trans_id), 34) self.assertEqual(trans_id[:2], 'tx') self.assertEqual(trans_id[23], '-') self.assertEqual(int(trans_id[24:], 16), int(fake_time)) with patch.object(utils.time, 'time', return_value=fake_time): trans_id = utils.generate_trans_id('-suffix') self.assertEqual(len(trans_id), 41) self.assertEqual(trans_id[:2], 'tx') self.assertEqual(trans_id[34:], '-suffix') self.assertEqual(trans_id[23], '-') self.assertEqual(int(trans_id[24:34], 16), int(fake_time)) def test_get_trans_id_time(self): ts = utils.get_trans_id_time('tx8c8bc884cdaf499bb29429aa9c46946e') self.assertEqual(ts, None) ts = utils.get_trans_id_time('tx1df4ff4f55ea45f7b2ec2-0051720c06') self.assertEqual(ts, 1366428678) self.assertEqual( time.asctime(time.gmtime(ts)) + ' UTC', 'Sat Apr 20 03:31:18 2013 UTC') ts = utils.get_trans_id_time( 'tx1df4ff4f55ea45f7b2ec2-0051720c06-suffix') self.assertEqual(ts, 1366428678) self.assertEqual( time.asctime(time.gmtime(ts)) + ' UTC', 'Sat Apr 20 03:31:18 2013 UTC') ts = utils.get_trans_id_time('') self.assertEqual(ts, None) ts = utils.get_trans_id_time('garbage') self.assertEqual(ts, None) ts = utils.get_trans_id_time('tx1df4ff4f55ea45f7b2ec2-almostright') self.assertEqual(ts, None) def test_tpool_reraise(self): with patch.object(utils.tpool, 'execute', lambda f: f()): self.assertTrue( utils.tpool_reraise(MagicMock(return_value='test1')), 'test1') self.assertRaises( Exception, utils.tpool_reraise, MagicMock(side_effect=Exception('test2'))) self.assertRaises( BaseException, utils.tpool_reraise, MagicMock(side_effect=BaseException('test3'))) def test_lock_file(self): flags = os.O_CREAT | os.O_RDWR with NamedTemporaryFile(delete=False) as nt: nt.write("test string") nt.flush() nt.close() with utils.lock_file(nt.name, unlink=False) as f: self.assertEqual(f.read(), "test string") # we have a lock, now let's try to get a newer one fd = os.open(nt.name, flags) self.assertRaises(IOError, fcntl.flock, fd, fcntl.LOCK_EX | fcntl.LOCK_NB) with utils.lock_file(nt.name, unlink=False, append=True) as f: f.seek(0) self.assertEqual(f.read(), "test string") f.seek(0) f.write("\nanother string") f.flush() f.seek(0) self.assertEqual(f.read(), "test string\nanother string") # we have a lock, now let's try to get a newer one fd = os.open(nt.name, flags) self.assertRaises(IOError, fcntl.flock, fd, fcntl.LOCK_EX | fcntl.LOCK_NB) with utils.lock_file(nt.name, timeout=3, unlink=False) as f: try: with utils.lock_file( nt.name, timeout=1, unlink=False) as f: self.assertTrue( False, "Expected LockTimeout exception") except LockTimeout: pass with utils.lock_file(nt.name, unlink=True) as f: self.assertEqual(f.read(), "test string\nanother string") # we have a lock, now let's try to get a newer one fd = os.open(nt.name, flags) self.assertRaises( IOError, fcntl.flock, fd, fcntl.LOCK_EX | fcntl.LOCK_NB) self.assertRaises(OSError, os.remove, nt.name) def test_lock_file_unlinked_after_open(self): os_open = os.open first_pass = [True] def deleting_open(filename, flags): # unlink the file after it's opened. once. 
fd = os_open(filename, flags) if first_pass[0]: os.unlink(filename) first_pass[0] = False return fd with NamedTemporaryFile(delete=False) as nt: with mock.patch('os.open', deleting_open): with utils.lock_file(nt.name, unlink=True) as f: self.assertNotEqual(os.fstat(nt.fileno()).st_ino, os.fstat(f.fileno()).st_ino) first_pass = [True] def recreating_open(filename, flags): # unlink and recreate the file after it's opened fd = os_open(filename, flags) if first_pass[0]: os.unlink(filename) os.close(os_open(filename, os.O_CREAT | os.O_RDWR)) first_pass[0] = False return fd with NamedTemporaryFile(delete=False) as nt: with mock.patch('os.open', recreating_open): with utils.lock_file(nt.name, unlink=True) as f: self.assertNotEqual(os.fstat(nt.fileno()).st_ino, os.fstat(f.fileno()).st_ino) def test_lock_file_held_on_unlink(self): os_unlink = os.unlink def flocking_unlink(filename): # make sure the lock is held when we unlink fd = os.open(filename, os.O_RDWR) self.assertRaises( IOError, fcntl.flock, fd, fcntl.LOCK_EX | fcntl.LOCK_NB) os.close(fd) os_unlink(filename) with NamedTemporaryFile(delete=False) as nt: with mock.patch('os.unlink', flocking_unlink): with utils.lock_file(nt.name, unlink=True): pass def test_lock_file_no_unlink_if_fail(self): os_open = os.open with NamedTemporaryFile(delete=True) as nt: def lock_on_open(filename, flags): # lock the file on another fd after it's opened. fd = os_open(filename, flags) fd2 = os_open(filename, flags) fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB) return fd try: timedout = False with mock.patch('os.open', lock_on_open): with utils.lock_file(nt.name, unlink=False, timeout=0.01): pass except LockTimeout: timedout = True self.assertTrue(timedout) self.assertTrue(os.path.exists(nt.name)) def test_ismount_path_does_not_exist(self): tmpdir = mkdtemp() try: self.assertFalse(utils.ismount(os.path.join(tmpdir, 'bar'))) finally: shutil.rmtree(tmpdir) def test_ismount_path_not_mount(self): tmpdir = mkdtemp() try: self.assertFalse(utils.ismount(tmpdir)) finally: shutil.rmtree(tmpdir) def test_ismount_path_error(self): def _mock_os_lstat(path): raise OSError(13, "foo") tmpdir = mkdtemp() try: with patch("os.lstat", _mock_os_lstat): # Raises exception with _raw -- see next test. utils.ismount(tmpdir) finally: shutil.rmtree(tmpdir) def test_ismount_raw_path_error(self): def _mock_os_lstat(path): raise OSError(13, "foo") tmpdir = mkdtemp() try: with patch("os.lstat", _mock_os_lstat): self.assertRaises(OSError, utils.ismount_raw, tmpdir) finally: shutil.rmtree(tmpdir) def test_ismount_path_is_symlink(self): tmpdir = mkdtemp() try: link = os.path.join(tmpdir, "tmp") os.symlink("/tmp", link) self.assertFalse(utils.ismount(link)) finally: shutil.rmtree(tmpdir) def test_ismount_path_is_root(self): self.assertTrue(utils.ismount('/')) def test_ismount_parent_path_error(self): _os_lstat = os.lstat def _mock_os_lstat(path): if path.endswith(".."): raise OSError(13, "foo") else: return _os_lstat(path) tmpdir = mkdtemp() try: with patch("os.lstat", _mock_os_lstat): # Raises exception with _raw -- see next test. 
utils.ismount(tmpdir) finally: shutil.rmtree(tmpdir) def test_ismount_raw_parent_path_error(self): _os_lstat = os.lstat def _mock_os_lstat(path): if path.endswith(".."): raise OSError(13, "foo") else: return _os_lstat(path) tmpdir = mkdtemp() try: with patch("os.lstat", _mock_os_lstat): self.assertRaises(OSError, utils.ismount_raw, tmpdir) finally: shutil.rmtree(tmpdir) def test_ismount_successes_dev(self): _os_lstat = os.lstat class MockStat(object): def __init__(self, mode, dev, ino): self.st_mode = mode self.st_dev = dev self.st_ino = ino def _mock_os_lstat(path): if path.endswith(".."): parent = _os_lstat(path) return MockStat(parent.st_mode, parent.st_dev + 1, parent.st_ino) else: return _os_lstat(path) tmpdir = mkdtemp() try: with patch("os.lstat", _mock_os_lstat): self.assertTrue(utils.ismount(tmpdir)) finally: shutil.rmtree(tmpdir) def test_ismount_successes_ino(self): _os_lstat = os.lstat class MockStat(object): def __init__(self, mode, dev, ino): self.st_mode = mode self.st_dev = dev self.st_ino = ino def _mock_os_lstat(path): if path.endswith(".."): return _os_lstat(path) else: parent_path = os.path.join(path, "..") child = _os_lstat(path) parent = _os_lstat(parent_path) return MockStat(child.st_mode, parent.st_ino, child.st_dev) tmpdir = mkdtemp() try: with patch("os.lstat", _mock_os_lstat): self.assertTrue(utils.ismount(tmpdir)) finally: shutil.rmtree(tmpdir) def test_parse_content_type(self): self.assertEqual(utils.parse_content_type('text/plain'), ('text/plain', [])) self.assertEqual(utils.parse_content_type('text/plain;charset=utf-8'), ('text/plain', [('charset', 'utf-8')])) self.assertEqual( utils.parse_content_type('text/plain;hello="world";charset=utf-8'), ('text/plain', [('hello', '"world"'), ('charset', 'utf-8')])) self.assertEqual( utils.parse_content_type('text/plain; hello="world"; a=b'), ('text/plain', [('hello', '"world"'), ('a', 'b')])) self.assertEqual( utils.parse_content_type(r'text/plain; x="\""; a=b'), ('text/plain', [('x', r'"\""'), ('a', 'b')])) self.assertEqual( utils.parse_content_type(r'text/plain; x; a=b'), ('text/plain', [('x', ''), ('a', 'b')])) self.assertEqual( utils.parse_content_type(r'text/plain; x="\""; a'), ('text/plain', [('x', r'"\""'), ('a', '')])) def test_override_bytes_from_content_type(self): listing_dict = { 'bytes': 1234, 'hash': 'asdf', 'name': 'zxcv', 'content_type': 'text/plain; hello="world"; swift_bytes=15'} utils.override_bytes_from_content_type(listing_dict, logger=FakeLogger()) self.assertEqual(listing_dict['bytes'], 15) self.assertEqual(listing_dict['content_type'], 'text/plain;hello="world"') listing_dict = { 'bytes': 1234, 'hash': 'asdf', 'name': 'zxcv', 'content_type': 'text/plain; hello="world"; swift_bytes=hey'} utils.override_bytes_from_content_type(listing_dict, logger=FakeLogger()) self.assertEqual(listing_dict['bytes'], 1234) self.assertEqual(listing_dict['content_type'], 'text/plain;hello="world"') def test_clean_content_type(self): subtests = { '': '', 'text/plain': 'text/plain', 'text/plain; someother=thing': 'text/plain; someother=thing', 'text/plain; swift_bytes=123': 'text/plain', 'text/plain; someother=thing; swift_bytes=123': 'text/plain; someother=thing', # Since Swift always tacks on the swift_bytes, clean_content_type() # only strips swift_bytes if it's last. The next item simply shows # that if for some other odd reason it's not last, # clean_content_type() will not remove it from the header. 
'text/plain; swift_bytes=123; someother=thing': 'text/plain; swift_bytes=123; someother=thing'} for before, after in subtests.items(): self.assertEqual(utils.clean_content_type(before), after) def test_quote(self): res = utils.quote('/v1/a/c3/subdirx/') assert res == '/v1/a/c3/subdirx/' res = utils.quote('/v1/a&b/c3/subdirx/') assert res == '/v1/a%26b/c3/subdirx/' res = utils.quote('/v1/a&b/c3/subdirx/', safe='&') assert res == '%2Fv1%2Fa&b%2Fc3%2Fsubdirx%2F' unicode_sample = u'\uc77c\uc601' account = 'abc_' + unicode_sample valid_utf8_str = utils.get_valid_utf8_str(account) account = 'abc_' + unicode_sample.encode('utf-8')[::-1] invalid_utf8_str = utils.get_valid_utf8_str(account) self.assertEqual('abc_%EC%9D%BC%EC%98%81', utils.quote(valid_utf8_str)) self.assertEqual('abc_%EF%BF%BD%EF%BF%BD%EC%BC%9D%EF%BF%BD', utils.quote(invalid_utf8_str)) def test_get_hmac(self): self.assertEqual( utils.get_hmac('GET', '/path', 1, 'abc'), 'b17f6ff8da0e251737aa9e3ee69a881e3e092e2f') def test_get_policy_index(self): # Account has no information about a policy req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) res = Response() self.assertIsNone(utils.get_policy_index(req.headers, res.headers)) # The policy of a container can be specified by the response header req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) res = Response(headers={'X-Backend-Storage-Policy-Index': '1'}) self.assertEqual('1', utils.get_policy_index(req.headers, res.headers)) # The policy of an object to be created can be specified by the request # header req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Backend-Storage-Policy-Index': '2'}) res = Response() self.assertEqual('2', utils.get_policy_index(req.headers, res.headers)) def test_get_log_line(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD', 'REMOTE_ADDR': '1.2.3.4'}) res = Response() trans_time = 1.2 additional_info = 'some information' server_pid = 1234 exp_line = '1.2.3.4 - - [01/Jan/1970:02:46:41 +0000] "HEAD ' \ '/sda1/p/a/c/o" 200 - "-" "-" "-" 1.2000 "some information" 1234 -' with mock.patch( 'time.gmtime', mock.MagicMock(side_effect=[time.gmtime(10001.0)])): with mock.patch( 'os.getpid', mock.MagicMock(return_value=server_pid)): self.assertEqual( exp_line, utils.get_log_line(req, res, trans_time, additional_info)) def test_cache_from_env(self): # should never get logging when swift.cache is found env = {'swift.cache': 42} logger = FakeLogger() with mock.patch('swift.common.utils.logging', logger): self.assertEqual(42, utils.cache_from_env(env)) self.assertEqual(0, len(logger.get_lines_for_level('error'))) logger = FakeLogger() with mock.patch('swift.common.utils.logging', logger): self.assertEqual(42, utils.cache_from_env(env, False)) self.assertEqual(0, len(logger.get_lines_for_level('error'))) logger = FakeLogger() with mock.patch('swift.common.utils.logging', logger): self.assertEqual(42, utils.cache_from_env(env, True)) self.assertEqual(0, len(logger.get_lines_for_level('error'))) # check allow_none controls logging when swift.cache is not found err_msg = 'ERROR: swift.cache could not be found in env!' 
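        # The three cases below mirror the three above: when swift.cache is
        # missing from the environment the lookup returns None and, per the
        # allow_none comment above, the error is logged unless the flag is
        # passed as True.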
env = {} logger = FakeLogger() with mock.patch('swift.common.utils.logging', logger): self.assertIsNone(utils.cache_from_env(env)) self.assertTrue(err_msg in logger.get_lines_for_level('error')) logger = FakeLogger() with mock.patch('swift.common.utils.logging', logger): self.assertIsNone(utils.cache_from_env(env, False)) self.assertTrue(err_msg in logger.get_lines_for_level('error')) logger = FakeLogger() with mock.patch('swift.common.utils.logging', logger): self.assertIsNone(utils.cache_from_env(env, True)) self.assertEqual(0, len(logger.get_lines_for_level('error'))) def test_fsync_dir(self): tempdir = None fd = None try: tempdir = mkdtemp(dir='/tmp') fd, temppath = tempfile.mkstemp(dir=tempdir) _mock_fsync = mock.Mock() _mock_close = mock.Mock() with patch('swift.common.utils.fsync', _mock_fsync): with patch('os.close', _mock_close): utils.fsync_dir(tempdir) self.assertTrue(_mock_fsync.called) self.assertTrue(_mock_close.called) self.assertTrue(isinstance(_mock_fsync.call_args[0][0], int)) self.assertEqual(_mock_fsync.call_args[0][0], _mock_close.call_args[0][0]) # Not a directory - arg is file path self.assertRaises(OSError, utils.fsync_dir, temppath) logger = FakeLogger() def _mock_fsync(fd): raise OSError(errno.EBADF, os.strerror(errno.EBADF)) with patch('swift.common.utils.fsync', _mock_fsync): with mock.patch('swift.common.utils.logging', logger): utils.fsync_dir(tempdir) self.assertEqual(1, len(logger.get_lines_for_level('warning'))) finally: if fd is not None: os.close(fd) os.unlink(temppath) if tempdir: os.rmdir(tempdir) def test_renamer_with_fsync_dir(self): tempdir = None try: tempdir = mkdtemp(dir='/tmp') # Simulate part of object path already existing part_dir = os.path.join(tempdir, 'objects/1234/') os.makedirs(part_dir) obj_dir = os.path.join(part_dir, 'aaa', 'a' * 32) obj_path = os.path.join(obj_dir, '1425276031.12345.data') # Object dir had to be created _m_os_rename = mock.Mock() _m_fsync_dir = mock.Mock() with patch('os.rename', _m_os_rename): with patch('swift.common.utils.fsync_dir', _m_fsync_dir): utils.renamer("fake_path", obj_path) _m_os_rename.assert_called_once_with('fake_path', obj_path) # fsync_dir on parents of all newly create dirs self.assertEqual(_m_fsync_dir.call_count, 3) # Object dir existed _m_os_rename.reset_mock() _m_fsync_dir.reset_mock() with patch('os.rename', _m_os_rename): with patch('swift.common.utils.fsync_dir', _m_fsync_dir): utils.renamer("fake_path", obj_path) _m_os_rename.assert_called_once_with('fake_path', obj_path) # fsync_dir only on the leaf dir self.assertEqual(_m_fsync_dir.call_count, 1) finally: if tempdir: shutil.rmtree(tempdir) def test_renamer_when_fsync_is_false(self): _m_os_rename = mock.Mock() _m_fsync_dir = mock.Mock() _m_makedirs_count = mock.Mock(return_value=2) with patch('os.rename', _m_os_rename): with patch('swift.common.utils.fsync_dir', _m_fsync_dir): with patch('swift.common.utils.makedirs_count', _m_makedirs_count): utils.renamer("fake_path", "/a/b/c.data", fsync=False) _m_makedirs_count.assert_called_once_with("/a/b") _m_os_rename.assert_called_once_with('fake_path', "/a/b/c.data") self.assertFalse(_m_fsync_dir.called) def test_makedirs_count(self): tempdir = None fd = None try: tempdir = mkdtemp(dir='/tmp') os.makedirs(os.path.join(tempdir, 'a/b')) # 4 new dirs created dirpath = os.path.join(tempdir, 'a/b/1/2/3/4') ret = utils.makedirs_count(dirpath) self.assertEqual(ret, 4) # no new dirs created - dir already exists ret = utils.makedirs_count(dirpath) self.assertEqual(ret, 0) # path exists and is a file fd, 
temppath = tempfile.mkstemp(dir=dirpath) os.close(fd) self.assertRaises(OSError, utils.makedirs_count, temppath) finally: if tempdir: shutil.rmtree(tempdir) class ResellerConfReader(unittest.TestCase): def setUp(self): self.default_rules = {'operator_roles': ['admin', 'swiftoperator'], 'service_roles': [], 'require_group': ''} def test_defaults(self): conf = {} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['AUTH_']) self.assertEqual(options['AUTH_'], self.default_rules) def test_same_as_default(self): conf = {'reseller_prefix': 'AUTH', 'operator_roles': 'admin, swiftoperator'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['AUTH_']) self.assertEqual(options['AUTH_'], self.default_rules) def test_single_blank_reseller(self): conf = {'reseller_prefix': ''} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['']) self.assertEqual(options[''], self.default_rules) def test_single_blank_reseller_with_conf(self): conf = {'reseller_prefix': '', "''operator_roles": 'role1, role2'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['']) self.assertEqual(options[''].get('operator_roles'), ['role1', 'role2']) self.assertEqual(options[''].get('service_roles'), self.default_rules.get('service_roles')) self.assertEqual(options[''].get('require_group'), self.default_rules.get('require_group')) def test_multiple_same_resellers(self): conf = {'reseller_prefix': " '' , '' "} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['']) conf = {'reseller_prefix': '_, _'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['_']) conf = {'reseller_prefix': 'AUTH, PRE2, AUTH, PRE2'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['AUTH_', 'PRE2_']) def test_several_resellers_with_conf(self): conf = {'reseller_prefix': 'PRE1, PRE2', 'PRE1_operator_roles': 'role1, role2', 'PRE1_service_roles': 'role3, role4', 'PRE2_operator_roles': 'role5', 'PRE2_service_roles': 'role6', 'PRE2_require_group': 'pre2_group'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['PRE1_', 'PRE2_']) self.assertEqual(set(['role1', 'role2']), set(options['PRE1_'].get('operator_roles'))) self.assertEqual(['role5'], options['PRE2_'].get('operator_roles')) self.assertEqual(set(['role3', 'role4']), set(options['PRE1_'].get('service_roles'))) self.assertEqual(['role6'], options['PRE2_'].get('service_roles')) self.assertEqual('', options['PRE1_'].get('require_group')) self.assertEqual('pre2_group', options['PRE2_'].get('require_group')) def test_several_resellers_first_blank(self): conf = {'reseller_prefix': " '' , PRE2", "''operator_roles": 'role1, role2', "''service_roles": 'role3, role4', 'PRE2_operator_roles': 'role5', 'PRE2_service_roles': 'role6', 'PRE2_require_group': 'pre2_group'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['', 'PRE2_']) self.assertEqual(set(['role1', 'role2']), set(options[''].get('operator_roles'))) self.assertEqual(['role5'], options['PRE2_'].get('operator_roles')) self.assertEqual(set(['role3', 'role4']), set(options[''].get('service_roles'))) self.assertEqual(['role6'], 
options['PRE2_'].get('service_roles')) self.assertEqual('', options[''].get('require_group')) self.assertEqual('pre2_group', options['PRE2_'].get('require_group')) def test_several_resellers_with_blank_comma(self): conf = {'reseller_prefix': "AUTH , '', PRE2", "''operator_roles": 'role1, role2', "''service_roles": 'role3, role4', 'PRE2_operator_roles': 'role5', 'PRE2_service_roles': 'role6', 'PRE2_require_group': 'pre2_group'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['AUTH_', '', 'PRE2_']) self.assertEqual(set(['admin', 'swiftoperator']), set(options['AUTH_'].get('operator_roles'))) self.assertEqual(set(['role1', 'role2']), set(options[''].get('operator_roles'))) self.assertEqual(['role5'], options['PRE2_'].get('operator_roles')) self.assertEqual([], options['AUTH_'].get('service_roles')) self.assertEqual(set(['role3', 'role4']), set(options[''].get('service_roles'))) self.assertEqual(['role6'], options['PRE2_'].get('service_roles')) self.assertEqual('', options['AUTH_'].get('require_group')) self.assertEqual('', options[''].get('require_group')) self.assertEqual('pre2_group', options['PRE2_'].get('require_group')) def test_stray_comma(self): conf = {'reseller_prefix': "AUTH ,, PRE2", "''operator_roles": 'role1, role2', "''service_roles": 'role3, role4', 'PRE2_operator_roles': 'role5', 'PRE2_service_roles': 'role6', 'PRE2_require_group': 'pre2_group'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['AUTH_', 'PRE2_']) self.assertEqual(set(['admin', 'swiftoperator']), set(options['AUTH_'].get('operator_roles'))) self.assertEqual(['role5'], options['PRE2_'].get('operator_roles')) self.assertEqual([], options['AUTH_'].get('service_roles')) self.assertEqual(['role6'], options['PRE2_'].get('service_roles')) self.assertEqual('', options['AUTH_'].get('require_group')) self.assertEqual('pre2_group', options['PRE2_'].get('require_group')) def test_multiple_stray_commas_resellers(self): conf = {'reseller_prefix': ' , , ,'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['']) self.assertEqual(options[''], self.default_rules) def test_unprefixed_options(self): conf = {'reseller_prefix': "AUTH , '', PRE2", "operator_roles": 'role1, role2', "service_roles": 'role3, role4', 'require_group': 'auth_blank_group', 'PRE2_operator_roles': 'role5', 'PRE2_service_roles': 'role6', 'PRE2_require_group': 'pre2_group'} prefixes, options = utils.config_read_reseller_options( conf, self.default_rules) self.assertEqual(prefixes, ['AUTH_', '', 'PRE2_']) self.assertEqual(set(['role1', 'role2']), set(options['AUTH_'].get('operator_roles'))) self.assertEqual(set(['role1', 'role2']), set(options[''].get('operator_roles'))) self.assertEqual(['role5'], options['PRE2_'].get('operator_roles')) self.assertEqual(set(['role3', 'role4']), set(options['AUTH_'].get('service_roles'))) self.assertEqual(set(['role3', 'role4']), set(options[''].get('service_roles'))) self.assertEqual(['role6'], options['PRE2_'].get('service_roles')) self.assertEqual('auth_blank_group', options['AUTH_'].get('require_group')) self.assertEqual('auth_blank_group', options[''].get('require_group')) self.assertEqual('pre2_group', options['PRE2_'].get('require_group')) class TestUnlinkOlder(unittest.TestCase): def setUp(self): self.tempdir = mkdtemp() self.mtime = {} def tearDown(self): rmtree(self.tempdir, ignore_errors=True) def touch(self, fpath, mtime=None): 
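        # Records the intended mtime in a dict; high_resolution_getmtime()
        # below patches os.path.getmtime to return it, so the tests can rely
        # on sub-second mtimes regardless of filesystem timestamp
        # granularity.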
self.mtime[fpath] = mtime or time.time() open(fpath, 'w') @contextlib.contextmanager def high_resolution_getmtime(self): orig_getmtime = os.path.getmtime def mock_getmtime(fpath): mtime = self.mtime.get(fpath) if mtime is None: mtime = orig_getmtime(fpath) return mtime with mock.patch('os.path.getmtime', mock_getmtime): yield def test_unlink_older_than_path_not_exists(self): path = os.path.join(self.tempdir, 'does-not-exist') # just make sure it doesn't blow up utils.unlink_older_than(path, time.time()) def test_unlink_older_than_file(self): path = os.path.join(self.tempdir, 'some-file') self.touch(path) with self.assertRaises(OSError) as ctx: utils.unlink_older_than(path, time.time()) self.assertEqual(ctx.exception.errno, errno.ENOTDIR) def test_unlink_older_than_now(self): self.touch(os.path.join(self.tempdir, 'test')) with self.high_resolution_getmtime(): utils.unlink_older_than(self.tempdir, time.time()) self.assertEqual([], os.listdir(self.tempdir)) def test_unlink_not_old_enough(self): start = time.time() self.touch(os.path.join(self.tempdir, 'test')) with self.high_resolution_getmtime(): utils.unlink_older_than(self.tempdir, start) self.assertEqual(['test'], os.listdir(self.tempdir)) def test_unlink_mixed(self): self.touch(os.path.join(self.tempdir, 'first')) cutoff = time.time() self.touch(os.path.join(self.tempdir, 'second')) with self.high_resolution_getmtime(): utils.unlink_older_than(self.tempdir, cutoff) self.assertEqual(['second'], os.listdir(self.tempdir)) def test_unlink_paths(self): paths = [] for item in ('first', 'second', 'third'): path = os.path.join(self.tempdir, item) self.touch(path) paths.append(path) # don't unlink everyone with self.high_resolution_getmtime(): utils.unlink_paths_older_than(paths[:2], time.time()) self.assertEqual(['third'], os.listdir(self.tempdir)) def test_unlink_empty_paths(self): # just make sure it doesn't blow up utils.unlink_paths_older_than([], time.time()) def test_unlink_not_exists_paths(self): path = os.path.join(self.tempdir, 'does-not-exist') # just make sure it doesn't blow up utils.unlink_paths_older_than([path], time.time()) class TestSwiftInfo(unittest.TestCase): def tearDown(self): utils._swift_info = {} utils._swift_admin_info = {} def test_register_swift_info(self): utils.register_swift_info(foo='bar') utils.register_swift_info(lorem='ipsum') utils.register_swift_info('cap1', cap1_foo='cap1_bar') utils.register_swift_info('cap1', cap1_lorem='cap1_ipsum') self.assertTrue('swift' in utils._swift_info) self.assertTrue('foo' in utils._swift_info['swift']) self.assertEqual(utils._swift_info['swift']['foo'], 'bar') self.assertTrue('lorem' in utils._swift_info['swift']) self.assertEqual(utils._swift_info['swift']['lorem'], 'ipsum') self.assertTrue('cap1' in utils._swift_info) self.assertTrue('cap1_foo' in utils._swift_info['cap1']) self.assertEqual(utils._swift_info['cap1']['cap1_foo'], 'cap1_bar') self.assertTrue('cap1_lorem' in utils._swift_info['cap1']) self.assertEqual(utils._swift_info['cap1']['cap1_lorem'], 'cap1_ipsum') self.assertRaises(ValueError, utils.register_swift_info, 'admin', foo='bar') self.assertRaises(ValueError, utils.register_swift_info, 'disallowed_sections', disallowed_sections=None) utils.register_swift_info('goodkey', foo='5.6') self.assertRaises(ValueError, utils.register_swift_info, 'bad.key', foo='5.6') data = {'bad.key': '5.6'} self.assertRaises(ValueError, utils.register_swift_info, 'goodkey', **data) def test_get_swift_info(self): utils._swift_info = {'swift': {'foo': 'bar'}, 'cap1': {'cap1_foo': 
'cap1_bar'}} utils._swift_admin_info = {'admin_cap1': {'ac1_foo': 'ac1_bar'}} info = utils.get_swift_info() self.assertTrue('admin' not in info) self.assertTrue('swift' in info) self.assertTrue('foo' in info['swift']) self.assertEqual(utils._swift_info['swift']['foo'], 'bar') self.assertTrue('cap1' in info) self.assertTrue('cap1_foo' in info['cap1']) self.assertEqual(utils._swift_info['cap1']['cap1_foo'], 'cap1_bar') def test_get_swift_info_with_disallowed_sections(self): utils._swift_info = {'swift': {'foo': 'bar'}, 'cap1': {'cap1_foo': 'cap1_bar'}, 'cap2': {'cap2_foo': 'cap2_bar'}, 'cap3': {'cap3_foo': 'cap3_bar'}} utils._swift_admin_info = {'admin_cap1': {'ac1_foo': 'ac1_bar'}} info = utils.get_swift_info(disallowed_sections=['cap1', 'cap3']) self.assertTrue('admin' not in info) self.assertTrue('swift' in info) self.assertTrue('foo' in info['swift']) self.assertEqual(info['swift']['foo'], 'bar') self.assertTrue('cap1' not in info) self.assertTrue('cap2' in info) self.assertTrue('cap2_foo' in info['cap2']) self.assertEqual(info['cap2']['cap2_foo'], 'cap2_bar') self.assertTrue('cap3' not in info) def test_register_swift_admin_info(self): utils.register_swift_info(admin=True, admin_foo='admin_bar') utils.register_swift_info(admin=True, admin_lorem='admin_ipsum') utils.register_swift_info('cap1', admin=True, ac1_foo='ac1_bar') utils.register_swift_info('cap1', admin=True, ac1_lorem='ac1_ipsum') self.assertTrue('swift' in utils._swift_admin_info) self.assertTrue('admin_foo' in utils._swift_admin_info['swift']) self.assertEqual( utils._swift_admin_info['swift']['admin_foo'], 'admin_bar') self.assertTrue('admin_lorem' in utils._swift_admin_info['swift']) self.assertEqual( utils._swift_admin_info['swift']['admin_lorem'], 'admin_ipsum') self.assertTrue('cap1' in utils._swift_admin_info) self.assertTrue('ac1_foo' in utils._swift_admin_info['cap1']) self.assertEqual( utils._swift_admin_info['cap1']['ac1_foo'], 'ac1_bar') self.assertTrue('ac1_lorem' in utils._swift_admin_info['cap1']) self.assertEqual( utils._swift_admin_info['cap1']['ac1_lorem'], 'ac1_ipsum') self.assertTrue('swift' not in utils._swift_info) self.assertTrue('cap1' not in utils._swift_info) def test_get_swift_admin_info(self): utils._swift_info = {'swift': {'foo': 'bar'}, 'cap1': {'cap1_foo': 'cap1_bar'}} utils._swift_admin_info = {'admin_cap1': {'ac1_foo': 'ac1_bar'}} info = utils.get_swift_info(admin=True) self.assertTrue('admin' in info) self.assertTrue('admin_cap1' in info['admin']) self.assertTrue('ac1_foo' in info['admin']['admin_cap1']) self.assertEqual(info['admin']['admin_cap1']['ac1_foo'], 'ac1_bar') self.assertTrue('swift' in info) self.assertTrue('foo' in info['swift']) self.assertEqual(utils._swift_info['swift']['foo'], 'bar') self.assertTrue('cap1' in info) self.assertTrue('cap1_foo' in info['cap1']) self.assertEqual(utils._swift_info['cap1']['cap1_foo'], 'cap1_bar') def test_get_swift_admin_info_with_disallowed_sections(self): utils._swift_info = {'swift': {'foo': 'bar'}, 'cap1': {'cap1_foo': 'cap1_bar'}, 'cap2': {'cap2_foo': 'cap2_bar'}, 'cap3': {'cap3_foo': 'cap3_bar'}} utils._swift_admin_info = {'admin_cap1': {'ac1_foo': 'ac1_bar'}} info = utils.get_swift_info( admin=True, disallowed_sections=['cap1', 'cap3']) self.assertTrue('admin' in info) self.assertTrue('admin_cap1' in info['admin']) self.assertTrue('ac1_foo' in info['admin']['admin_cap1']) self.assertEqual(info['admin']['admin_cap1']['ac1_foo'], 'ac1_bar') self.assertTrue('disallowed_sections' in info['admin']) self.assertTrue('cap1' in 
info['admin']['disallowed_sections']) self.assertTrue('cap2' not in info['admin']['disallowed_sections']) self.assertTrue('cap3' in info['admin']['disallowed_sections']) self.assertTrue('swift' in info) self.assertTrue('foo' in info['swift']) self.assertEqual(info['swift']['foo'], 'bar') self.assertTrue('cap1' not in info) self.assertTrue('cap2' in info) self.assertTrue('cap2_foo' in info['cap2']) self.assertEqual(info['cap2']['cap2_foo'], 'cap2_bar') self.assertTrue('cap3' not in info) def test_get_swift_admin_info_with_disallowed_sub_sections(self): utils._swift_info = {'swift': {'foo': 'bar'}, 'cap1': {'cap1_foo': 'cap1_bar', 'cap1_moo': 'cap1_baa'}, 'cap2': {'cap2_foo': 'cap2_bar'}, 'cap3': {'cap2_foo': 'cap2_bar'}, 'cap4': {'a': {'b': {'c': 'c'}, 'b.c': 'b.c'}}} utils._swift_admin_info = {'admin_cap1': {'ac1_foo': 'ac1_bar'}} info = utils.get_swift_info( admin=True, disallowed_sections=['cap1.cap1_foo', 'cap3', 'cap4.a.b.c']) self.assertTrue('cap3' not in info) self.assertEqual(info['cap1']['cap1_moo'], 'cap1_baa') self.assertTrue('cap1_foo' not in info['cap1']) self.assertTrue('c' not in info['cap4']['a']['b']) self.assertEqual(info['cap4']['a']['b.c'], 'b.c') def test_get_swift_info_with_unmatched_disallowed_sections(self): cap1 = {'cap1_foo': 'cap1_bar', 'cap1_moo': 'cap1_baa'} utils._swift_info = {'swift': {'foo': 'bar'}, 'cap1': cap1} # expect no exceptions info = utils.get_swift_info( disallowed_sections=['cap2.cap1_foo', 'cap1.no_match', 'cap1.cap1_foo.no_match.no_match']) self.assertEqual(info['cap1'], cap1) class TestFileLikeIter(unittest.TestCase): def test_iter_file_iter(self): in_iter = [b'abc', b'de', b'fghijk', b'l'] chunks = [] for chunk in utils.FileLikeIter(in_iter): chunks.append(chunk) self.assertEqual(chunks, in_iter) def test_next(self): in_iter = [b'abc', b'de', b'fghijk', b'l'] chunks = [] iter_file = utils.FileLikeIter(in_iter) while True: try: chunk = next(iter_file) except StopIteration: break chunks.append(chunk) self.assertEqual(chunks, in_iter) def test_read(self): in_iter = [b'abc', b'de', b'fghijk', b'l'] iter_file = utils.FileLikeIter(in_iter) self.assertEqual(iter_file.read(), b''.join(in_iter)) def test_read_with_size(self): in_iter = [b'abc', b'de', b'fghijk', b'l'] chunks = [] iter_file = utils.FileLikeIter(in_iter) while True: chunk = iter_file.read(2) if not chunk: break self.assertTrue(len(chunk) <= 2) chunks.append(chunk) self.assertEqual(b''.join(chunks), b''.join(in_iter)) def test_read_with_size_zero(self): # makes little sense, but file supports it, so... self.assertEqual(utils.FileLikeIter(b'abc').read(0), b'') def test_readline(self): in_iter = [b'abc\n', b'd', b'\nef', b'g\nh', b'\nij\n\nk\n', b'trailing.'] lines = [] iter_file = utils.FileLikeIter(in_iter) while True: line = iter_file.readline() if not line: break lines.append(line) self.assertEqual( lines, [v if v == b'trailing.' 
else v + b'\n' for v in b''.join(in_iter).split(b'\n')]) def test_readline2(self): self.assertEqual( utils.FileLikeIter([b'abc', b'def\n']).readline(4), b'abcd') def test_readline3(self): self.assertEqual( utils.FileLikeIter([b'a' * 1111, b'bc\ndef']).readline(), (b'a' * 1111) + b'bc\n') def test_readline_with_size(self): in_iter = [b'abc\n', b'd', b'\nef', b'g\nh', b'\nij\n\nk\n', b'trailing.'] lines = [] iter_file = utils.FileLikeIter(in_iter) while True: line = iter_file.readline(2) if not line: break lines.append(line) self.assertEqual( lines, [b'ab', b'c\n', b'd\n', b'ef', b'g\n', b'h\n', b'ij', b'\n', b'\n', b'k\n', b'tr', b'ai', b'li', b'ng', b'.']) def test_readlines(self): in_iter = [b'abc\n', b'd', b'\nef', b'g\nh', b'\nij\n\nk\n', b'trailing.'] lines = utils.FileLikeIter(in_iter).readlines() self.assertEqual( lines, [v if v == b'trailing.' else v + b'\n' for v in b''.join(in_iter).split(b'\n')]) def test_readlines_with_size(self): in_iter = [b'abc\n', b'd', b'\nef', b'g\nh', b'\nij\n\nk\n', b'trailing.'] iter_file = utils.FileLikeIter(in_iter) lists_of_lines = [] while True: lines = iter_file.readlines(2) if not lines: break lists_of_lines.append(lines) self.assertEqual( lists_of_lines, [[b'ab'], [b'c\n'], [b'd\n'], [b'ef'], [b'g\n'], [b'h\n'], [b'ij'], [b'\n', b'\n'], [b'k\n'], [b'tr'], [b'ai'], [b'li'], [b'ng'], [b'.']]) def test_close(self): iter_file = utils.FileLikeIter([b'a', b'b', b'c']) self.assertEqual(next(iter_file), b'a') iter_file.close() self.assertTrue(iter_file.closed) self.assertRaises(ValueError, iter_file.next) self.assertRaises(ValueError, iter_file.read) self.assertRaises(ValueError, iter_file.readline) self.assertRaises(ValueError, iter_file.readlines) # Just make sure repeated close calls don't raise an Exception iter_file.close() self.assertTrue(iter_file.closed) class TestStatsdLogging(unittest.TestCase): def setUp(self): def fake_getaddrinfo(host, port, *args): # this is what a real getaddrinfo('localhost', port, # socket.AF_INET) returned once return [(socket.AF_INET, # address family socket.SOCK_STREAM, # socket type socket.IPPROTO_TCP, # socket protocol '', # canonical name, ('127.0.0.1', port)), # socket address (socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP, '', ('127.0.0.1', port))] self.real_getaddrinfo = utils.socket.getaddrinfo self.getaddrinfo_patcher = mock.patch.object( utils.socket, 'getaddrinfo', fake_getaddrinfo) self.mock_getaddrinfo = self.getaddrinfo_patcher.start() self.addCleanup(self.getaddrinfo_patcher.stop) def test_get_logger_statsd_client_not_specified(self): logger = utils.get_logger({}, 'some-name', log_route='some-route') # white-box construction validation self.assertIsNone(logger.logger.statsd_client) def test_get_logger_statsd_client_defaults(self): logger = utils.get_logger({'log_statsd_host': 'some.host.com'}, 'some-name', log_route='some-route') # white-box construction validation self.assertTrue(isinstance(logger.logger.statsd_client, utils.StatsdClient)) self.assertEqual(logger.logger.statsd_client._host, 'some.host.com') self.assertEqual(logger.logger.statsd_client._port, 8125) self.assertEqual(logger.logger.statsd_client._prefix, 'some-name.') self.assertEqual(logger.logger.statsd_client._default_sample_rate, 1) logger.set_statsd_prefix('some-name.more-specific') self.assertEqual(logger.logger.statsd_client._prefix, 'some-name.more-specific.') logger.set_statsd_prefix('') self.assertEqual(logger.logger.statsd_client._prefix, '') def test_get_logger_statsd_client_non_defaults(self): logger = 
utils.get_logger({ 'log_statsd_host': 'another.host.com', 'log_statsd_port': '9876', 'log_statsd_default_sample_rate': '0.75', 'log_statsd_sample_rate_factor': '0.81', 'log_statsd_metric_prefix': 'tomato.sauce', }, 'some-name', log_route='some-route') self.assertEqual(logger.logger.statsd_client._prefix, 'tomato.sauce.some-name.') logger.set_statsd_prefix('some-name.more-specific') self.assertEqual(logger.logger.statsd_client._prefix, 'tomato.sauce.some-name.more-specific.') logger.set_statsd_prefix('') self.assertEqual(logger.logger.statsd_client._prefix, 'tomato.sauce.') self.assertEqual(logger.logger.statsd_client._host, 'another.host.com') self.assertEqual(logger.logger.statsd_client._port, 9876) self.assertEqual(logger.logger.statsd_client._default_sample_rate, 0.75) self.assertEqual(logger.logger.statsd_client._sample_rate_factor, 0.81) def test_ipv4_or_ipv6_hostname_defaults_to_ipv4(self): def stub_getaddrinfo_both_ipv4_and_ipv6(host, port, family, *rest): if family == socket.AF_INET: return [(socket.AF_INET, 'blah', 'blah', 'blah', ('127.0.0.1', int(port)))] elif family == socket.AF_INET6: # Implemented so an incorrectly ordered implementation (IPv6 # then IPv4) would realistically fail. return [(socket.AF_INET6, 'blah', 'blah', 'blah', ('::1', int(port), 0, 0))] with mock.patch.object(utils.socket, 'getaddrinfo', new=stub_getaddrinfo_both_ipv4_and_ipv6): logger = utils.get_logger({ 'log_statsd_host': 'localhost', 'log_statsd_port': '9876', }, 'some-name', log_route='some-route') statsd_client = logger.logger.statsd_client self.assertEqual(statsd_client._sock_family, socket.AF_INET) self.assertEqual(statsd_client._target, ('localhost', 9876)) got_sock = statsd_client._open_socket() self.assertEqual(got_sock.family, socket.AF_INET) def test_ipv4_instantiation_and_socket_creation(self): logger = utils.get_logger({ 'log_statsd_host': '127.0.0.1', 'log_statsd_port': '9876', }, 'some-name', log_route='some-route') statsd_client = logger.logger.statsd_client self.assertEqual(statsd_client._sock_family, socket.AF_INET) self.assertEqual(statsd_client._target, ('127.0.0.1', 9876)) got_sock = statsd_client._open_socket() self.assertEqual(got_sock.family, socket.AF_INET) def test_ipv6_instantiation_and_socket_creation(self): # We have to check the given hostname or IP for IPv4/IPv6 on logger # instantiation so we don't call getaddrinfo() too often and don't have # to call bind() on our socket to detect IPv4/IPv6 on every send. # # This test uses the real getaddrinfo, so we patch over the mock to # put the real one back. If we just stop the mock, then # unittest.exit() blows up, but stacking real-fake-real works okay. 
with mock.patch.object(utils.socket, 'getaddrinfo', self.real_getaddrinfo): logger = utils.get_logger({ 'log_statsd_host': '::1', 'log_statsd_port': '9876', }, 'some-name', log_route='some-route') statsd_client = logger.logger.statsd_client self.assertEqual(statsd_client._sock_family, socket.AF_INET6) self.assertEqual(statsd_client._target, ('::1', 9876, 0, 0)) got_sock = statsd_client._open_socket() self.assertEqual(got_sock.family, socket.AF_INET6) def test_bad_hostname_instantiation(self): with mock.patch.object(utils.socket, 'getaddrinfo', side_effect=utils.socket.gaierror("whoops")): logger = utils.get_logger({ 'log_statsd_host': 'i-am-not-a-hostname-or-ip', 'log_statsd_port': '9876', }, 'some-name', log_route='some-route') statsd_client = logger.logger.statsd_client self.assertEqual(statsd_client._sock_family, socket.AF_INET) self.assertEqual(statsd_client._target, ('i-am-not-a-hostname-or-ip', 9876)) got_sock = statsd_client._open_socket() self.assertEqual(got_sock.family, socket.AF_INET) # Maybe the DNS server gets fixed in a bit and it starts working... or # maybe the DNS record hadn't propagated yet. In any case, failed # statsd sends will warn in the logs until the DNS failure or invalid # IP address in the configuration is fixed. def test_sending_ipv6(self): def fake_getaddrinfo(host, port, *args): # this is what a real getaddrinfo('::1', port, # socket.AF_INET6) returned once return [(socket.AF_INET6, socket.SOCK_STREAM, socket.IPPROTO_TCP, '', ('::1', port, 0, 0)), (socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP, '', ('::1', port, 0, 0))] with mock.patch.object(utils.socket, 'getaddrinfo', fake_getaddrinfo): logger = utils.get_logger({ 'log_statsd_host': '::1', 'log_statsd_port': '9876', }, 'some-name', log_route='some-route') statsd_client = logger.logger.statsd_client fl = FakeLogger() statsd_client.logger = fl mock_socket = MockUdpSocket() statsd_client._open_socket = lambda *_: mock_socket logger.increment('tunafish') self.assertEqual(fl.get_lines_for_level('warning'), []) self.assertEqual(mock_socket.sent, [(b'some-name.tunafish:1|c', ('::1', 9876, 0, 0))]) def test_no_exception_when_cant_send_udp_packet(self): logger = utils.get_logger({'log_statsd_host': 'some.host.com'}) statsd_client = logger.logger.statsd_client fl = FakeLogger() statsd_client.logger = fl mock_socket = MockUdpSocket(sendto_errno=errno.EPERM) statsd_client._open_socket = lambda *_: mock_socket logger.increment('tunafish') expected = ["Error sending UDP message to ('some.host.com', 8125): " "[Errno 1] test errno 1"] self.assertEqual(fl.get_lines_for_level('warning'), expected) def test_sample_rates(self): logger = utils.get_logger({'log_statsd_host': 'some.host.com'}) mock_socket = MockUdpSocket() # encapsulation? what's that? statsd_client = logger.logger.statsd_client self.assertTrue(statsd_client.random is random.random) statsd_client._open_socket = lambda *_: mock_socket statsd_client.random = lambda: 0.50001 logger.increment('tribbles', sample_rate=0.5) self.assertEqual(len(mock_socket.sent), 0) statsd_client.random = lambda: 0.49999 logger.increment('tribbles', sample_rate=0.5) self.assertEqual(len(mock_socket.sent), 1) payload = mock_socket.sent[0][0] self.assertTrue(payload.endswith(b"|@0.5")) def test_sample_rates_with_sample_rate_factor(self): logger = utils.get_logger({ 'log_statsd_host': 'some.host.com', 'log_statsd_default_sample_rate': '0.82', 'log_statsd_sample_rate_factor': '0.91', }) effective_sample_rate = 0.82 * 0.91 mock_socket = MockUdpSocket() # encapsulation? 
what's that? statsd_client = logger.logger.statsd_client self.assertTrue(statsd_client.random is random.random) statsd_client._open_socket = lambda *_: mock_socket statsd_client.random = lambda: effective_sample_rate + 0.001 logger.increment('tribbles') self.assertEqual(len(mock_socket.sent), 0) statsd_client.random = lambda: effective_sample_rate - 0.001 logger.increment('tribbles') self.assertEqual(len(mock_socket.sent), 1) payload = mock_socket.sent[0][0] suffix = "|@%s" % effective_sample_rate if six.PY3: suffix = suffix.encode('utf-8') self.assertTrue(payload.endswith(suffix), payload) effective_sample_rate = 0.587 * 0.91 statsd_client.random = lambda: effective_sample_rate - 0.001 logger.increment('tribbles', sample_rate=0.587) self.assertEqual(len(mock_socket.sent), 2) payload = mock_socket.sent[1][0] suffix = "|@%s" % effective_sample_rate if six.PY3: suffix = suffix.encode('utf-8') self.assertTrue(payload.endswith(suffix), payload) def test_timing_stats(self): class MockController(object): def __init__(self, status): self.status = status self.logger = self self.args = () self.called = 'UNKNOWN' def timing_since(self, *args): self.called = 'timing' self.args = args @utils.timing_stats() def METHOD(controller): return Response(status=controller.status) mock_controller = MockController(200) METHOD(mock_controller) self.assertEqual(mock_controller.called, 'timing') self.assertEqual(len(mock_controller.args), 2) self.assertEqual(mock_controller.args[0], 'METHOD.timing') self.assertTrue(mock_controller.args[1] > 0) mock_controller = MockController(404) METHOD(mock_controller) self.assertEqual(len(mock_controller.args), 2) self.assertEqual(mock_controller.called, 'timing') self.assertEqual(mock_controller.args[0], 'METHOD.timing') self.assertTrue(mock_controller.args[1] > 0) mock_controller = MockController(412) METHOD(mock_controller) self.assertEqual(len(mock_controller.args), 2) self.assertEqual(mock_controller.called, 'timing') self.assertEqual(mock_controller.args[0], 'METHOD.timing') self.assertTrue(mock_controller.args[1] > 0) mock_controller = MockController(416) METHOD(mock_controller) self.assertEqual(len(mock_controller.args), 2) self.assertEqual(mock_controller.called, 'timing') self.assertEqual(mock_controller.args[0], 'METHOD.timing') self.assertTrue(mock_controller.args[1] > 0) mock_controller = MockController(401) METHOD(mock_controller) self.assertEqual(len(mock_controller.args), 2) self.assertEqual(mock_controller.called, 'timing') self.assertEqual(mock_controller.args[0], 'METHOD.errors.timing') self.assertTrue(mock_controller.args[1] > 0) class UnsafeXrange(object): """ Like xrange(limit), but with extra context switching to screw things up. 
""" def __init__(self, upper_bound): self.current = 0 self.concurrent_calls = 0 self.upper_bound = upper_bound self.concurrent_call = False def __iter__(self): return self def next(self): if self.concurrent_calls > 0: self.concurrent_call = True self.concurrent_calls += 1 try: if self.current >= self.upper_bound: raise StopIteration else: val = self.current self.current += 1 eventlet.sleep() # yield control return val finally: self.concurrent_calls -= 1 __next__ = next class TestAffinityKeyFunction(unittest.TestCase): def setUp(self): self.nodes = [dict(id=0, region=1, zone=1), dict(id=1, region=1, zone=2), dict(id=2, region=2, zone=1), dict(id=3, region=2, zone=2), dict(id=4, region=3, zone=1), dict(id=5, region=3, zone=2), dict(id=6, region=4, zone=0), dict(id=7, region=4, zone=1)] def test_single_region(self): keyfn = utils.affinity_key_function("r3=1") ids = [n['id'] for n in sorted(self.nodes, key=keyfn)] self.assertEqual([4, 5, 0, 1, 2, 3, 6, 7], ids) def test_bogus_value(self): self.assertRaises(ValueError, utils.affinity_key_function, "r3") self.assertRaises(ValueError, utils.affinity_key_function, "r3=elephant") def test_empty_value(self): # Empty's okay, it just means no preference keyfn = utils.affinity_key_function("") self.assertTrue(callable(keyfn)) ids = [n['id'] for n in sorted(self.nodes, key=keyfn)] self.assertEqual([0, 1, 2, 3, 4, 5, 6, 7], ids) def test_all_whitespace_value(self): # Empty's okay, it just means no preference keyfn = utils.affinity_key_function(" \n") self.assertTrue(callable(keyfn)) ids = [n['id'] for n in sorted(self.nodes, key=keyfn)] self.assertEqual([0, 1, 2, 3, 4, 5, 6, 7], ids) def test_with_zone_zero(self): keyfn = utils.affinity_key_function("r4z0=1") ids = [n['id'] for n in sorted(self.nodes, key=keyfn)] self.assertEqual([6, 0, 1, 2, 3, 4, 5, 7], ids) def test_multiple(self): keyfn = utils.affinity_key_function("r1=100, r4=200, r3z1=1") ids = [n['id'] for n in sorted(self.nodes, key=keyfn)] self.assertEqual([4, 0, 1, 6, 7, 2, 3, 5], ids) def test_more_specific_after_less_specific(self): keyfn = utils.affinity_key_function("r2=100, r2z2=50") ids = [n['id'] for n in sorted(self.nodes, key=keyfn)] self.assertEqual([3, 2, 0, 1, 4, 5, 6, 7], ids) class TestAffinityLocalityPredicate(unittest.TestCase): def setUp(self): self.nodes = [dict(id=0, region=1, zone=1), dict(id=1, region=1, zone=2), dict(id=2, region=2, zone=1), dict(id=3, region=2, zone=2), dict(id=4, region=3, zone=1), dict(id=5, region=3, zone=2), dict(id=6, region=4, zone=0), dict(id=7, region=4, zone=1)] def test_empty(self): pred = utils.affinity_locality_predicate('') self.assertTrue(pred is None) def test_region(self): pred = utils.affinity_locality_predicate('r1') self.assertTrue(callable(pred)) ids = [n['id'] for n in self.nodes if pred(n)] self.assertEqual([0, 1], ids) def test_zone(self): pred = utils.affinity_locality_predicate('r1z1') self.assertTrue(callable(pred)) ids = [n['id'] for n in self.nodes if pred(n)] self.assertEqual([0], ids) def test_multiple(self): pred = utils.affinity_locality_predicate('r1, r3, r4z0') self.assertTrue(callable(pred)) ids = [n['id'] for n in self.nodes if pred(n)] self.assertEqual([0, 1, 4, 5, 6], ids) def test_invalid(self): self.assertRaises(ValueError, utils.affinity_locality_predicate, 'falafel') self.assertRaises(ValueError, utils.affinity_locality_predicate, 'r8zQ') self.assertRaises(ValueError, utils.affinity_locality_predicate, 'r2d2') self.assertRaises(ValueError, utils.affinity_locality_predicate, 'r1z1=1') class 
TestRateLimitedIterator(unittest.TestCase): def run_under_pseudo_time( self, func, *args, **kwargs): curr_time = [42.0] def my_time(): curr_time[0] += 0.001 return curr_time[0] def my_sleep(duration): curr_time[0] += 0.001 curr_time[0] += duration with patch('time.time', my_time), \ patch('eventlet.sleep', my_sleep): return func(*args, **kwargs) def test_rate_limiting(self): def testfunc(): limited_iterator = utils.RateLimitedIterator(range(9999), 100) got = [] started_at = time.time() try: while time.time() - started_at < 0.1: got.append(next(limited_iterator)) except StopIteration: pass return got got = self.run_under_pseudo_time(testfunc) # it's 11, not 10, because ratelimiting doesn't apply to the very # first element. self.assertEqual(len(got), 11) def test_rate_limiting_sometimes(self): def testfunc(): limited_iterator = utils.RateLimitedIterator( range(9999), 100, ratelimit_if=lambda item: item % 23 != 0) got = [] started_at = time.time() try: while time.time() - started_at < 0.5: got.append(next(limited_iterator)) except StopIteration: pass return got got = self.run_under_pseudo_time(testfunc) # we'd get 51 without the ratelimit_if, but because 0, 23 and 46 # weren't subject to ratelimiting, we get 54 instead self.assertEqual(len(got), 54) def test_limit_after(self): def testfunc(): limited_iterator = utils.RateLimitedIterator( range(9999), 100, limit_after=5) got = [] started_at = time.time() try: while time.time() - started_at < 0.1: got.append(next(limited_iterator)) except StopIteration: pass return got got = self.run_under_pseudo_time(testfunc) # it's 16, not 15, because ratelimiting doesn't apply to the very # first element. self.assertEqual(len(got), 16) class TestGreenthreadSafeIterator(unittest.TestCase): def increment(self, iterable): plus_ones = [] for n in iterable: plus_ones.append(n + 1) return plus_ones def test_setup_works(self): # it should work without concurrent access self.assertEqual([0, 1, 2, 3], list(UnsafeXrange(4))) iterable = UnsafeXrange(10) pile = eventlet.GreenPile(2) for _ in range(2): pile.spawn(self.increment, iterable) sorted([resp for resp in pile]) self.assertTrue( iterable.concurrent_call, 'test setup is insufficiently crazy') def test_access_is_serialized(self): pile = eventlet.GreenPile(2) unsafe_iterable = UnsafeXrange(10) iterable = utils.GreenthreadSafeIterator(unsafe_iterable) for _ in range(2): pile.spawn(self.increment, iterable) response = sorted(sum([resp for resp in pile], [])) self.assertEqual(list(range(1, 11)), response) self.assertTrue( not unsafe_iterable.concurrent_call, 'concurrent call occurred') class TestStatsdLoggingDelegation(unittest.TestCase): def setUp(self): self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.sock.bind(('localhost', 0)) self.port = self.sock.getsockname()[1] self.queue = Queue() self.reader_thread = threading.Thread(target=self.statsd_reader) self.reader_thread.setDaemon(1) self.reader_thread.start() def tearDown(self): # The "no-op when disabled" test doesn't set up a real logger, so # create one here so we can tell the reader thread to stop. 
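# (Illustrative note added by the editor, not original: the reader thread
# started in setUp() blocks in statsd_reader() below; sending a counter whose
# payload contains b'STOP' is what makes that loop return, so tearDown() must
# always have a working logger pointed at self.port to shut it down cleanly.)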
if not getattr(self, 'logger', None): self.logger = utils.get_logger({ 'log_statsd_host': 'localhost', 'log_statsd_port': str(self.port), }, 'some-name') self.logger.increment('STOP') self.reader_thread.join(timeout=4) self.sock.close() del self.logger def statsd_reader(self): while True: try: payload = self.sock.recv(4096) if payload and b'STOP' in payload: return 42 self.queue.put(payload) except Exception as e: sys.stderr.write('statsd_reader thread: %r' % (e,)) break def _send_and_get(self, sender_fn, *args, **kwargs): """ Because the client library may not actually send a packet with sample_rate < 1, we keep trying until we get one through. """ got = None while not got: sender_fn(*args, **kwargs) try: got = self.queue.get(timeout=0.5) except Empty: pass return got def assertStat(self, expected, sender_fn, *args, **kwargs): got = self._send_and_get(sender_fn, *args, **kwargs) if six.PY3: got = got.decode('utf-8') return self.assertEqual(expected, got) def assertStatMatches(self, expected_regexp, sender_fn, *args, **kwargs): got = self._send_and_get(sender_fn, *args, **kwargs) if six.PY3: got = got.decode('utf-8') return self.assertTrue(re.search(expected_regexp, got), [got, expected_regexp]) def test_methods_are_no_ops_when_not_enabled(self): logger = utils.get_logger({ # No "log_statsd_host" means "disabled" 'log_statsd_port': str(self.port), }, 'some-name') # Delegate methods are no-ops self.assertIsNone(logger.update_stats('foo', 88)) self.assertIsNone(logger.update_stats('foo', 88, 0.57)) self.assertIsNone(logger.update_stats('foo', 88, sample_rate=0.61)) self.assertIsNone(logger.increment('foo')) self.assertIsNone(logger.increment('foo', 0.57)) self.assertIsNone(logger.increment('foo', sample_rate=0.61)) self.assertIsNone(logger.decrement('foo')) self.assertIsNone(logger.decrement('foo', 0.57)) self.assertIsNone(logger.decrement('foo', sample_rate=0.61)) self.assertIsNone(logger.timing('foo', 88.048)) self.assertIsNone(logger.timing('foo', 88.57, 0.34)) self.assertIsNone(logger.timing('foo', 88.998, sample_rate=0.82)) self.assertIsNone(logger.timing_since('foo', 8938)) self.assertIsNone(logger.timing_since('foo', 8948, 0.57)) self.assertIsNone(logger.timing_since('foo', 849398, sample_rate=0.61)) # Now, the queue should be empty (no UDP packets sent) self.assertRaises(Empty, self.queue.get_nowait) def test_delegate_methods_with_no_default_sample_rate(self): self.logger = utils.get_logger({ 'log_statsd_host': 'localhost', 'log_statsd_port': str(self.port), }, 'some-name') self.assertStat('some-name.some.counter:1|c', self.logger.increment, 'some.counter') self.assertStat('some-name.some.counter:-1|c', self.logger.decrement, 'some.counter') self.assertStat('some-name.some.operation:4900.0|ms', self.logger.timing, 'some.operation', 4.9 * 1000) self.assertStatMatches('some-name\.another\.operation:\d+\.\d+\|ms', self.logger.timing_since, 'another.operation', time.time()) self.assertStat('some-name.another.counter:42|c', self.logger.update_stats, 'another.counter', 42) # Each call can override the sample_rate (also, bonus prefix test) self.logger.set_statsd_prefix('pfx') self.assertStat('pfx.some.counter:1|c|@0.972', self.logger.increment, 'some.counter', sample_rate=0.972) self.assertStat('pfx.some.counter:-1|c|@0.972', self.logger.decrement, 'some.counter', sample_rate=0.972) self.assertStat('pfx.some.operation:4900.0|ms|@0.972', self.logger.timing, 'some.operation', 4.9 * 1000, sample_rate=0.972) self.assertStatMatches('pfx\.another\.op:\d+\.\d+\|ms|@0.972', 
self.logger.timing_since, 'another.op', time.time(), sample_rate=0.972) self.assertStat('pfx.another.counter:3|c|@0.972', self.logger.update_stats, 'another.counter', 3, sample_rate=0.972) # Can override sample_rate with non-keyword arg self.logger.set_statsd_prefix('') self.assertStat('some.counter:1|c|@0.939', self.logger.increment, 'some.counter', 0.939) self.assertStat('some.counter:-1|c|@0.939', self.logger.decrement, 'some.counter', 0.939) self.assertStat('some.operation:4900.0|ms|@0.939', self.logger.timing, 'some.operation', 4.9 * 1000, 0.939) self.assertStatMatches('another\.op:\d+\.\d+\|ms|@0.939', self.logger.timing_since, 'another.op', time.time(), 0.939) self.assertStat('another.counter:3|c|@0.939', self.logger.update_stats, 'another.counter', 3, 0.939) def test_delegate_methods_with_default_sample_rate(self): self.logger = utils.get_logger({ 'log_statsd_host': 'localhost', 'log_statsd_port': str(self.port), 'log_statsd_default_sample_rate': '0.93', }, 'pfx') self.assertStat('pfx.some.counter:1|c|@0.93', self.logger.increment, 'some.counter') self.assertStat('pfx.some.counter:-1|c|@0.93', self.logger.decrement, 'some.counter') self.assertStat('pfx.some.operation:4760.0|ms|@0.93', self.logger.timing, 'some.operation', 4.76 * 1000) self.assertStatMatches('pfx\.another\.op:\d+\.\d+\|ms|@0.93', self.logger.timing_since, 'another.op', time.time()) self.assertStat('pfx.another.counter:3|c|@0.93', self.logger.update_stats, 'another.counter', 3) # Each call can override the sample_rate self.assertStat('pfx.some.counter:1|c|@0.9912', self.logger.increment, 'some.counter', sample_rate=0.9912) self.assertStat('pfx.some.counter:-1|c|@0.9912', self.logger.decrement, 'some.counter', sample_rate=0.9912) self.assertStat('pfx.some.operation:4900.0|ms|@0.9912', self.logger.timing, 'some.operation', 4.9 * 1000, sample_rate=0.9912) self.assertStatMatches('pfx\.another\.op:\d+\.\d+\|ms|@0.9912', self.logger.timing_since, 'another.op', time.time(), sample_rate=0.9912) self.assertStat('pfx.another.counter:3|c|@0.9912', self.logger.update_stats, 'another.counter', 3, sample_rate=0.9912) # Can override sample_rate with non-keyword arg self.logger.set_statsd_prefix('') self.assertStat('some.counter:1|c|@0.987654', self.logger.increment, 'some.counter', 0.987654) self.assertStat('some.counter:-1|c|@0.987654', self.logger.decrement, 'some.counter', 0.987654) self.assertStat('some.operation:4900.0|ms|@0.987654', self.logger.timing, 'some.operation', 4.9 * 1000, 0.987654) self.assertStatMatches('another\.op:\d+\.\d+\|ms|@0.987654', self.logger.timing_since, 'another.op', time.time(), 0.987654) self.assertStat('another.counter:3|c|@0.987654', self.logger.update_stats, 'another.counter', 3, 0.987654) def test_delegate_methods_with_metric_prefix(self): self.logger = utils.get_logger({ 'log_statsd_host': 'localhost', 'log_statsd_port': str(self.port), 'log_statsd_metric_prefix': 'alpha.beta', }, 'pfx') self.assertStat('alpha.beta.pfx.some.counter:1|c', self.logger.increment, 'some.counter') self.assertStat('alpha.beta.pfx.some.counter:-1|c', self.logger.decrement, 'some.counter') self.assertStat('alpha.beta.pfx.some.operation:4760.0|ms', self.logger.timing, 'some.operation', 4.76 * 1000) self.assertStatMatches( 'alpha\.beta\.pfx\.another\.op:\d+\.\d+\|ms', self.logger.timing_since, 'another.op', time.time()) self.assertStat('alpha.beta.pfx.another.counter:3|c', self.logger.update_stats, 'another.counter', 3) self.logger.set_statsd_prefix('') self.assertStat('alpha.beta.some.counter:1|c|@0.9912', 
self.logger.increment, 'some.counter', sample_rate=0.9912) self.assertStat('alpha.beta.some.counter:-1|c|@0.9912', self.logger.decrement, 'some.counter', 0.9912) self.assertStat('alpha.beta.some.operation:4900.0|ms|@0.9912', self.logger.timing, 'some.operation', 4.9 * 1000, sample_rate=0.9912) self.assertStatMatches('alpha\.beta\.another\.op:\d+\.\d+\|ms|@0.9912', self.logger.timing_since, 'another.op', time.time(), sample_rate=0.9912) self.assertStat('alpha.beta.another.counter:3|c|@0.9912', self.logger.update_stats, 'another.counter', 3, sample_rate=0.9912) def test_get_valid_utf8_str(self): unicode_sample = u'\uc77c\uc601' valid_utf8_str = unicode_sample.encode('utf-8') invalid_utf8_str = unicode_sample.encode('utf-8')[::-1] self.assertEqual(valid_utf8_str, utils.get_valid_utf8_str(valid_utf8_str)) self.assertEqual(valid_utf8_str, utils.get_valid_utf8_str(unicode_sample)) self.assertEqual(b'\xef\xbf\xbd\xef\xbf\xbd\xec\xbc\x9d\xef\xbf\xbd', utils.get_valid_utf8_str(invalid_utf8_str)) @reset_logger_state def test_thread_locals(self): logger = utils.get_logger(None) # test the setter logger.thread_locals = ('id', 'ip') self.assertEqual(logger.thread_locals, ('id', 'ip')) # reset logger.thread_locals = (None, None) self.assertEqual(logger.thread_locals, (None, None)) logger.txn_id = '1234' logger.client_ip = '1.2.3.4' self.assertEqual(logger.thread_locals, ('1234', '1.2.3.4')) logger.txn_id = '5678' logger.client_ip = '5.6.7.8' self.assertEqual(logger.thread_locals, ('5678', '5.6.7.8')) def test_no_fdatasync(self): called = [] class NoFdatasync(object): pass def fsync(fd): called.append(fd) with patch('swift.common.utils.os', NoFdatasync()): with patch('swift.common.utils.fsync', fsync): utils.fdatasync(12345) self.assertEqual(called, [12345]) def test_yes_fdatasync(self): called = [] class YesFdatasync(object): def fdatasync(self, fd): called.append(fd) with patch('swift.common.utils.os', YesFdatasync()): utils.fdatasync(12345) self.assertEqual(called, [12345]) def test_fsync_bad_fullsync(self): class FCNTL(object): F_FULLSYNC = 123 def fcntl(self, fd, op): raise IOError(18) with patch('swift.common.utils.fcntl', FCNTL()): self.assertRaises(OSError, lambda: utils.fsync(12345)) def test_fsync_f_fullsync(self): called = [] class FCNTL(object): F_FULLSYNC = 123 def fcntl(self, fd, op): called[:] = [fd, op] return 0 with patch('swift.common.utils.fcntl', FCNTL()): utils.fsync(12345) self.assertEqual(called, [12345, 123]) def test_fsync_no_fullsync(self): called = [] class FCNTL(object): pass def fsync(fd): called.append(fd) with patch('swift.common.utils.fcntl', FCNTL()): with patch('os.fsync', fsync): utils.fsync(12345) self.assertEqual(called, [12345]) class TestThreadPool(unittest.TestCase): def setUp(self): self.tp = None def tearDown(self): if self.tp: self.tp.terminate() def _pipe_count(self): # Counts the number of pipes that this process owns. 
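# (Illustrative note added by the editor, not original: this helper is
# Linux-specific -- it lists /proc/<pid>/fd, stat()s each entry, and counts
# the ones whose mode satisfies stat.S_ISFIFO(), i.e. the pipes this process
# currently owns.)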
fd_dir = "/proc/%d/fd" % os.getpid() def is_pipe(path): try: stat_result = os.stat(path) return stat.S_ISFIFO(stat_result.st_mode) except OSError: return False return len([fd for fd in os.listdir(fd_dir) if is_pipe(os.path.join(fd_dir, fd))]) def _thread_id(self): return threading.current_thread().ident def _capture_args(self, *args, **kwargs): return {'args': args, 'kwargs': kwargs} def _raise_valueerror(self): return int('fishcakes') def test_run_in_thread_with_threads(self): tp = self.tp = utils.ThreadPool(1) my_id = self._thread_id() other_id = tp.run_in_thread(self._thread_id) self.assertNotEqual(my_id, other_id) result = tp.run_in_thread(self._capture_args, 1, 2, bert='ernie') self.assertEqual(result, {'args': (1, 2), 'kwargs': {'bert': 'ernie'}}) caught = False try: tp.run_in_thread(self._raise_valueerror) except ValueError: caught = True self.assertTrue(caught) def test_force_run_in_thread_with_threads(self): # with nthreads > 0, force_run_in_thread looks just like run_in_thread tp = self.tp = utils.ThreadPool(1) my_id = self._thread_id() other_id = tp.force_run_in_thread(self._thread_id) self.assertNotEqual(my_id, other_id) result = tp.force_run_in_thread(self._capture_args, 1, 2, bert='ernie') self.assertEqual(result, {'args': (1, 2), 'kwargs': {'bert': 'ernie'}}) self.assertRaises(ValueError, tp.force_run_in_thread, self._raise_valueerror) def test_run_in_thread_without_threads(self): # with zero threads, run_in_thread doesn't actually do so tp = utils.ThreadPool(0) my_id = self._thread_id() other_id = tp.run_in_thread(self._thread_id) self.assertEqual(my_id, other_id) result = tp.run_in_thread(self._capture_args, 1, 2, bert='ernie') self.assertEqual(result, {'args': (1, 2), 'kwargs': {'bert': 'ernie'}}) self.assertRaises(ValueError, tp.run_in_thread, self._raise_valueerror) def test_force_run_in_thread_without_threads(self): # with zero threads, force_run_in_thread uses eventlet.tpool tp = utils.ThreadPool(0) my_id = self._thread_id() other_id = tp.force_run_in_thread(self._thread_id) self.assertNotEqual(my_id, other_id) result = tp.force_run_in_thread(self._capture_args, 1, 2, bert='ernie') self.assertEqual(result, {'args': (1, 2), 'kwargs': {'bert': 'ernie'}}) self.assertRaises(ValueError, tp.force_run_in_thread, self._raise_valueerror) def test_preserving_stack_trace_from_thread(self): def gamma(): return 1 / 0 # ZeroDivisionError def beta(): return gamma() def alpha(): return beta() tp = self.tp = utils.ThreadPool(1) try: tp.run_in_thread(alpha) except ZeroDivisionError: # NB: format is (filename, line number, function name, text) tb_func = [elem[2] for elem in traceback.extract_tb(sys.exc_info()[2])] else: self.fail("Expected ZeroDivisionError") self.assertEqual(tb_func[-1], "gamma") self.assertEqual(tb_func[-2], "beta") self.assertEqual(tb_func[-3], "alpha") # omit the middle; what's important is that the start and end are # included, not the exact names of helper methods self.assertEqual(tb_func[1], "run_in_thread") self.assertEqual(tb_func[0], "test_preserving_stack_trace_from_thread") def test_terminate(self): initial_thread_count = threading.activeCount() initial_pipe_count = self._pipe_count() tp = utils.ThreadPool(4) # do some work to ensure any lazy initialization happens tp.run_in_thread(os.path.join, 'foo', 'bar') tp.run_in_thread(os.path.join, 'baz', 'quux') # 4 threads in the ThreadPool, plus one pipe for IPC; this also # serves as a sanity check that we're actually allocating some # resources to free later self.assertEqual(initial_thread_count, 
threading.activeCount() - 4) self.assertEqual(initial_pipe_count, self._pipe_count() - 2) tp.terminate() self.assertEqual(initial_thread_count, threading.activeCount()) self.assertEqual(initial_pipe_count, self._pipe_count()) def test_cant_run_after_terminate(self): tp = utils.ThreadPool(0) tp.terminate() self.assertRaises(ThreadPoolDead, tp.run_in_thread, lambda: 1) self.assertRaises(ThreadPoolDead, tp.force_run_in_thread, lambda: 1) def test_double_terminate_doesnt_crash(self): tp = utils.ThreadPool(0) tp.terminate() tp.terminate() tp = utils.ThreadPool(1) tp.terminate() tp.terminate() def test_terminate_no_threads_doesnt_crash(self): tp = utils.ThreadPool(0) tp.terminate() class TestAuditLocationGenerator(unittest.TestCase): def test_drive_tree_access(self): orig_listdir = utils.listdir def _mock_utils_listdir(path): if 'bad_part' in path: raise OSError(errno.EACCES) elif 'bad_suffix' in path: raise OSError(errno.EACCES) elif 'bad_hash' in path: raise OSError(errno.EACCES) else: return orig_listdir(path) # Check Raise on Bad partition tmpdir = mkdtemp() data = os.path.join(tmpdir, "drive", "data") os.makedirs(data) obj_path = os.path.join(data, "bad_part") with open(obj_path, "w"): pass part1 = os.path.join(data, "partition1") os.makedirs(part1) part2 = os.path.join(data, "partition2") os.makedirs(part2) with patch('swift.common.utils.listdir', _mock_utils_listdir): audit = lambda: list(utils.audit_location_generator( tmpdir, "data", mount_check=False)) self.assertRaises(OSError, audit) rmtree(tmpdir) # Check Raise on Bad Suffix tmpdir = mkdtemp() data = os.path.join(tmpdir, "drive", "data") os.makedirs(data) part1 = os.path.join(data, "partition1") os.makedirs(part1) part2 = os.path.join(data, "partition2") os.makedirs(part2) obj_path = os.path.join(part1, "bad_suffix") with open(obj_path, 'w'): pass suffix = os.path.join(part2, "suffix") os.makedirs(suffix) with patch('swift.common.utils.listdir', _mock_utils_listdir): audit = lambda: list(utils.audit_location_generator( tmpdir, "data", mount_check=False)) self.assertRaises(OSError, audit) rmtree(tmpdir) # Check Raise on Bad Hash tmpdir = mkdtemp() data = os.path.join(tmpdir, "drive", "data") os.makedirs(data) part1 = os.path.join(data, "partition1") os.makedirs(part1) suffix = os.path.join(part1, "suffix") os.makedirs(suffix) hash1 = os.path.join(suffix, "hash1") os.makedirs(hash1) obj_path = os.path.join(suffix, "bad_hash") with open(obj_path, 'w'): pass with patch('swift.common.utils.listdir', _mock_utils_listdir): audit = lambda: list(utils.audit_location_generator( tmpdir, "data", mount_check=False)) self.assertRaises(OSError, audit) rmtree(tmpdir) def test_non_dir_drive(self): with temptree([]) as tmpdir: logger = FakeLogger() data = os.path.join(tmpdir, "drive", "data") os.makedirs(data) # Create a file, that represents a non-dir drive open(os.path.join(tmpdir, 'asdf'), 'w') locations = utils.audit_location_generator( tmpdir, "data", mount_check=False, logger=logger ) self.assertEqual(list(locations), []) self.assertEqual(1, len(logger.get_lines_for_level('warning'))) # Test without the logger locations = utils.audit_location_generator( tmpdir, "data", mount_check=False ) self.assertEqual(list(locations), []) def test_mount_check_drive(self): with temptree([]) as tmpdir: logger = FakeLogger() data = os.path.join(tmpdir, "drive", "data") os.makedirs(data) # Create a file, that represents a non-dir drive open(os.path.join(tmpdir, 'asdf'), 'w') locations = utils.audit_location_generator( tmpdir, "data", mount_check=True, 
logger=logger ) self.assertEqual(list(locations), []) self.assertEqual(2, len(logger.get_lines_for_level('warning'))) # Test without the logger locations = utils.audit_location_generator( tmpdir, "data", mount_check=True ) self.assertEqual(list(locations), []) def test_non_dir_contents(self): with temptree([]) as tmpdir: logger = FakeLogger() data = os.path.join(tmpdir, "drive", "data") os.makedirs(data) with open(os.path.join(data, "partition1"), "w"): pass partition = os.path.join(data, "partition2") os.makedirs(partition) with open(os.path.join(partition, "suffix1"), "w"): pass suffix = os.path.join(partition, "suffix2") os.makedirs(suffix) with open(os.path.join(suffix, "hash1"), "w"): pass locations = utils.audit_location_generator( tmpdir, "data", mount_check=False, logger=logger ) self.assertEqual(list(locations), []) def test_find_objects(self): with temptree([]) as tmpdir: expected_objs = list() logger = FakeLogger() data = os.path.join(tmpdir, "drive", "data") os.makedirs(data) # Create a file, that represents a non-dir drive open(os.path.join(tmpdir, 'asdf'), 'w') partition = os.path.join(data, "partition1") os.makedirs(partition) suffix = os.path.join(partition, "suffix") os.makedirs(suffix) hash_path = os.path.join(suffix, "hash") os.makedirs(hash_path) obj_path = os.path.join(hash_path, "obj1.db") with open(obj_path, "w"): pass expected_objs.append((obj_path, 'drive', 'partition1')) partition = os.path.join(data, "partition2") os.makedirs(partition) suffix = os.path.join(partition, "suffix2") os.makedirs(suffix) hash_path = os.path.join(suffix, "hash2") os.makedirs(hash_path) obj_path = os.path.join(hash_path, "obj2.db") with open(obj_path, "w"): pass expected_objs.append((obj_path, 'drive', 'partition2')) locations = utils.audit_location_generator( tmpdir, "data", mount_check=False, logger=logger ) got_objs = list(locations) self.assertEqual(len(got_objs), len(expected_objs)) self.assertEqual(sorted(got_objs), sorted(expected_objs)) self.assertEqual(1, len(logger.get_lines_for_level('warning'))) def test_ignore_metadata(self): with temptree([]) as tmpdir: logger = FakeLogger() data = os.path.join(tmpdir, "drive", "data") os.makedirs(data) partition = os.path.join(data, "partition2") os.makedirs(partition) suffix = os.path.join(partition, "suffix2") os.makedirs(suffix) hash_path = os.path.join(suffix, "hash2") os.makedirs(hash_path) obj_path = os.path.join(hash_path, "obj1.dat") with open(obj_path, "w"): pass meta_path = os.path.join(hash_path, "obj1.meta") with open(meta_path, "w"): pass locations = utils.audit_location_generator( tmpdir, "data", ".dat", mount_check=False, logger=logger ) self.assertEqual(list(locations), [(obj_path, "drive", "partition2")]) class TestGreenAsyncPile(unittest.TestCase): def test_runs_everything(self): def run_test(): tests_ran[0] += 1 return tests_ran[0] tests_ran = [0] pile = utils.GreenAsyncPile(3) for x in range(3): pile.spawn(run_test) self.assertEqual(sorted(x for x in pile), [1, 2, 3]) def test_is_asynchronous(self): def run_test(index): events[index].wait() return index pile = utils.GreenAsyncPile(3) for order in ((1, 2, 0), (0, 1, 2), (2, 1, 0), (0, 2, 1)): events = [eventlet.event.Event(), eventlet.event.Event(), eventlet.event.Event()] for x in range(3): pile.spawn(run_test, x) for x in order: events[x].send() self.assertEqual(next(pile), x) def test_next_when_empty(self): def run_test(): pass pile = utils.GreenAsyncPile(3) pile.spawn(run_test) self.assertEqual(next(pile), None) self.assertRaises(StopIteration, lambda: next(pile)) 
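# Editor's illustrative sketch, not part of the original suite: the typical
# GreenAsyncPile driving pattern the surrounding tests exercise -- spawn a
# bounded number of concurrent greenthreads, then drain the results.  Only the
# GreenAsyncPile(), spawn(), and iteration calls already shown above are used;
# the worker function is hypothetical, and the leading underscore keeps the
# test runner from collecting it as a test.
def _sketch_basic_usage(self):
    def double(x):
        return x * 2
    pile = utils.GreenAsyncPile(2)   # at most two greenthreads run at once
    for x in range(4):
        pile.spawn(double, x)        # schedule work without blocking
    # iterating the pile yields results as they complete (order not guaranteed)
    self.assertEqual(sorted(pile), [0, 2, 4, 6])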
def test_waitall_timeout_timesout(self): def run_test(sleep_duration): eventlet.sleep(sleep_duration) completed[0] += 1 return sleep_duration completed = [0] pile = utils.GreenAsyncPile(3) pile.spawn(run_test, 0.1) pile.spawn(run_test, 1.0) self.assertEqual(pile.waitall(0.5), [0.1]) self.assertEqual(completed[0], 1) def test_waitall_timeout_completes(self): def run_test(sleep_duration): eventlet.sleep(sleep_duration) completed[0] += 1 return sleep_duration completed = [0] pile = utils.GreenAsyncPile(3) pile.spawn(run_test, 0.1) pile.spawn(run_test, 0.1) self.assertEqual(pile.waitall(0.5), [0.1, 0.1]) self.assertEqual(completed[0], 2) def test_waitfirst_only_returns_first(self): def run_test(name): eventlet.sleep(0) completed.append(name) return name completed = [] pile = utils.GreenAsyncPile(3) pile.spawn(run_test, 'first') pile.spawn(run_test, 'second') pile.spawn(run_test, 'third') self.assertEqual(pile.waitfirst(0.5), completed[0]) # 3 still completed, but only the first was returned. self.assertEqual(3, len(completed)) def test_wait_with_firstn(self): def run_test(name): eventlet.sleep(0) completed.append(name) return name for first_n in [None] + list(range(6)): completed = [] pile = utils.GreenAsyncPile(10) for i in range(10): pile.spawn(run_test, i) actual = pile._wait(1, first_n) expected_n = first_n if first_n else 10 self.assertEqual(completed[:expected_n], actual) self.assertEqual(10, len(completed)) def test_pending(self): pile = utils.GreenAsyncPile(3) self.assertEqual(0, pile._pending) for repeats in range(2): # repeat to verify that pending will go again up after going down for i in range(4): pile.spawn(lambda: i) self.assertEqual(4, pile._pending) for i in range(3, -1, -1): next(pile) self.assertEqual(i, pile._pending) # sanity check - the pile is empty self.assertRaises(StopIteration, pile.next) # pending remains 0 self.assertEqual(0, pile._pending) class TestLRUCache(unittest.TestCase): def test_maxsize(self): @utils.LRUCache(maxsize=10) def f(*args): return math.sqrt(*args) _orig_math_sqrt = math.sqrt # setup cache [0-10) for i in range(10): self.assertEqual(math.sqrt(i), f(i)) self.assertEqual(f.size(), 10) # validate cache [0-10) with patch('math.sqrt'): for i in range(10): self.assertEqual(_orig_math_sqrt(i), f(i)) self.assertEqual(f.size(), 10) # update cache [10-20) for i in range(10, 20): self.assertEqual(math.sqrt(i), f(i)) # cache size is fixed self.assertEqual(f.size(), 10) # validate cache [10-20) with patch('math.sqrt'): for i in range(10, 20): self.assertEqual(_orig_math_sqrt(i), f(i)) # validate un-cached [0-10) with patch('math.sqrt', new=None): for i in range(10): self.assertRaises(TypeError, f, i) # cache unchanged self.assertEqual(f.size(), 10) with patch('math.sqrt'): for i in range(10, 20): self.assertEqual(_orig_math_sqrt(i), f(i)) self.assertEqual(f.size(), 10) def test_maxtime(self): @utils.LRUCache(maxtime=30) def f(*args): return math.sqrt(*args) self.assertEqual(30, f.maxtime) _orig_math_sqrt = math.sqrt now = time.time() the_future = now + 31 # setup cache [0-10) with patch('time.time', lambda: now): for i in range(10): self.assertEqual(math.sqrt(i), f(i)) self.assertEqual(f.size(), 10) # validate cache [0-10) with patch('math.sqrt'): for i in range(10): self.assertEqual(_orig_math_sqrt(i), f(i)) self.assertEqual(f.size(), 10) # validate expired [0-10) with patch('math.sqrt', new=None): with patch('time.time', lambda: the_future): for i in range(10): self.assertRaises(TypeError, f, i) # validate repopulates [0-10) with patch('time.time', lambda: 
the_future): for i in range(10): self.assertEqual(math.sqrt(i), f(i)) # reuses cache space self.assertEqual(f.size(), 10) def test_set_maxtime(self): @utils.LRUCache(maxtime=30) def f(*args): return math.sqrt(*args) self.assertEqual(30, f.maxtime) self.assertEqual(2, f(4)) self.assertEqual(1, f.size()) # expire everything f.maxtime = -1 # validate un-cached [0-10) with patch('math.sqrt', new=None): self.assertRaises(TypeError, f, 4) def test_set_maxsize(self): @utils.LRUCache(maxsize=10) def f(*args): return math.sqrt(*args) for i in range(12): f(i) self.assertEqual(f.size(), 10) f.maxsize = 4 for i in range(12): f(i) self.assertEqual(f.size(), 4) class TestParseContentRange(unittest.TestCase): def test_good(self): start, end, total = utils.parse_content_range("bytes 100-200/300") self.assertEqual(start, 100) self.assertEqual(end, 200) self.assertEqual(total, 300) def test_bad(self): self.assertRaises(ValueError, utils.parse_content_range, "100-300/500") self.assertRaises(ValueError, utils.parse_content_range, "bytes 100-200/aardvark") self.assertRaises(ValueError, utils.parse_content_range, "bytes bulbous-bouffant/4994801") class TestParseContentDisposition(unittest.TestCase): def test_basic_content_type(self): name, attrs = utils.parse_content_disposition('text/plain') self.assertEqual(name, 'text/plain') self.assertEqual(attrs, {}) def test_content_type_with_charset(self): name, attrs = utils.parse_content_disposition( 'text/plain; charset=UTF8') self.assertEqual(name, 'text/plain') self.assertEqual(attrs, {'charset': 'UTF8'}) def test_content_disposition(self): name, attrs = utils.parse_content_disposition( 'form-data; name="somefile"; filename="test.html"') self.assertEqual(name, 'form-data') self.assertEqual(attrs, {'name': 'somefile', 'filename': 'test.html'}) def test_content_disposition_without_white_space(self): name, attrs = utils.parse_content_disposition( 'form-data;name="somefile";filename="test.html"') self.assertEqual(name, 'form-data') self.assertEqual(attrs, {'name': 'somefile', 'filename': 'test.html'}) class TestIterMultipartMimeDocuments(unittest.TestCase): def test_bad_start(self): it = utils.iter_multipart_mime_documents(StringIO('blah'), 'unique') exc = None try: next(it) except MimeInvalid as err: exc = err self.assertTrue('invalid starting boundary' in str(exc)) self.assertTrue('--unique' in str(exc)) def test_empty(self): it = utils.iter_multipart_mime_documents(StringIO('--unique'), 'unique') fp = next(it) self.assertEqual(fp.read(), '') exc = None try: next(it) except StopIteration as err: exc = err self.assertTrue(exc is not None) def test_basic(self): it = utils.iter_multipart_mime_documents( StringIO('--unique\r\nabcdefg\r\n--unique--'), 'unique') fp = next(it) self.assertEqual(fp.read(), 'abcdefg') exc = None try: next(it) except StopIteration as err: exc = err self.assertTrue(exc is not None) def test_basic2(self): it = utils.iter_multipart_mime_documents( StringIO('--unique\r\nabcdefg\r\n--unique\r\nhijkl\r\n--unique--'), 'unique') fp = next(it) self.assertEqual(fp.read(), 'abcdefg') fp = next(it) self.assertEqual(fp.read(), 'hijkl') exc = None try: next(it) except StopIteration as err: exc = err self.assertTrue(exc is not None) def test_tiny_reads(self): it = utils.iter_multipart_mime_documents( StringIO('--unique\r\nabcdefg\r\n--unique\r\nhijkl\r\n--unique--'), 'unique') fp = next(it) self.assertEqual(fp.read(2), 'ab') self.assertEqual(fp.read(2), 'cd') self.assertEqual(fp.read(2), 'ef') self.assertEqual(fp.read(2), 'g') self.assertEqual(fp.read(2), '') 
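# (Illustrative note added by the editor, not original: each item yielded by
# iter_multipart_mime_documents is a file-like wrapper over the shared input;
# read(n) returns at most n bytes, stops short at the part boundary, and
# returns '' once the part is exhausted -- which is what the asserts above
# check before moving on to the next document.)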
fp = next(it) self.assertEqual(fp.read(), 'hijkl') exc = None try: next(it) except StopIteration as err: exc = err self.assertTrue(exc is not None) def test_big_reads(self): it = utils.iter_multipart_mime_documents( StringIO('--unique\r\nabcdefg\r\n--unique\r\nhijkl\r\n--unique--'), 'unique') fp = next(it) self.assertEqual(fp.read(65536), 'abcdefg') self.assertEqual(fp.read(), '') fp = next(it) self.assertEqual(fp.read(), 'hijkl') exc = None try: next(it) except StopIteration as err: exc = err self.assertTrue(exc is not None) def test_leading_crlfs(self): it = utils.iter_multipart_mime_documents( StringIO('\r\n\r\n\r\n--unique\r\nabcdefg\r\n' '--unique\r\nhijkl\r\n--unique--'), 'unique') fp = next(it) self.assertEqual(fp.read(65536), 'abcdefg') self.assertEqual(fp.read(), '') fp = next(it) self.assertEqual(fp.read(), 'hijkl') self.assertRaises(StopIteration, it.next) def test_broken_mid_stream(self): # We go ahead and accept whatever is sent instead of rejecting the # whole request, in case the partial form is still useful. it = utils.iter_multipart_mime_documents( StringIO('--unique\r\nabc'), 'unique') fp = next(it) self.assertEqual(fp.read(), 'abc') exc = None try: next(it) except StopIteration as err: exc = err self.assertTrue(exc is not None) def test_readline(self): it = utils.iter_multipart_mime_documents( StringIO('--unique\r\nab\r\ncd\ref\ng\r\n--unique\r\nhi\r\n\r\n' 'jkl\r\n\r\n--unique--'), 'unique') fp = next(it) self.assertEqual(fp.readline(), 'ab\r\n') self.assertEqual(fp.readline(), 'cd\ref\ng') self.assertEqual(fp.readline(), '') fp = next(it) self.assertEqual(fp.readline(), 'hi\r\n') self.assertEqual(fp.readline(), '\r\n') self.assertEqual(fp.readline(), 'jkl\r\n') exc = None try: next(it) except StopIteration as err: exc = err self.assertTrue(exc is not None) def test_readline_with_tiny_chunks(self): it = utils.iter_multipart_mime_documents( StringIO('--unique\r\nab\r\ncd\ref\ng\r\n--unique\r\nhi\r\n' '\r\njkl\r\n\r\n--unique--'), 'unique', read_chunk_size=2) fp = next(it) self.assertEqual(fp.readline(), 'ab\r\n') self.assertEqual(fp.readline(), 'cd\ref\ng') self.assertEqual(fp.readline(), '') fp = next(it) self.assertEqual(fp.readline(), 'hi\r\n') self.assertEqual(fp.readline(), '\r\n') self.assertEqual(fp.readline(), 'jkl\r\n') exc = None try: next(it) except StopIteration as err: exc = err self.assertTrue(exc is not None) class TestParseMimeHeaders(unittest.TestCase): def test_parse_mime_headers(self): doc_file = BytesIO(b"""Content-Disposition: form-data; name="file_size" Foo: Bar NOT-title-cAsED: quux Connexion: =?iso8859-1?q?r=E9initialis=E9e_par_l=27homologue?= Status: =?utf-8?b?5byA5aeL6YCa6L+H5a+56LGh5aSN5Yi2?= Latin-1: Resincronizaci\xf3n realizada con \xe9xito Utf-8: \xd0\xba\xd0\xbe\xd0\xbd\xd1\x82\xd0\xb5\xd0\xb9\xd0\xbd\xd0\xb5\xd1\x80 This is the body """) headers = utils.parse_mime_headers(doc_file) utf8 = u'\u043a\u043e\u043d\u0442\u0435\u0439\u043d\u0435\u0440' if six.PY2: utf8 = utf8.encode('utf-8') expected_headers = { 'Content-Disposition': 'form-data; name="file_size"', 'Foo': "Bar", 'Not-Title-Cased': "quux", # Encoded-word or non-ASCII values are treated just like any other # bytestring (at least for now) 'Connexion': "=?iso8859-1?q?r=E9initialis=E9e_par_l=27homologue?=", 'Status': "=?utf-8?b?5byA5aeL6YCa6L+H5a+56LGh5aSN5Yi2?=", 'Latin-1': "Resincronizaci\xf3n realizada con \xe9xito", 'Utf-8': utf8, } self.assertEqual(expected_headers, headers) self.assertEqual(b"This is the body\n", doc_file.read()) class FakeResponse(object): def __init__(self, 
status, headers, body): self.status = status self.headers = HeaderKeyDict(headers) self.body = StringIO(body) def getheader(self, header_name): return str(self.headers.get(header_name, '')) def getheaders(self): return self.headers.items() def read(self, length=None): return self.body.read(length) def readline(self, length=None): return self.body.readline(length) class TestDocumentItersToHTTPResponseBody(unittest.TestCase): def test_no_parts(self): body = utils.document_iters_to_http_response_body( iter([]), 'dontcare', multipart=False, logger=FakeLogger()) self.assertEqual(body, '') def test_single_part(self): body = "time flies like an arrow; fruit flies like a banana" doc_iters = [{'part_iter': iter(StringIO(body).read, '')}] resp_body = ''.join( utils.document_iters_to_http_response_body( iter(doc_iters), 'dontcare', multipart=False, logger=FakeLogger())) self.assertEqual(resp_body, body) def test_multiple_parts(self): part1 = "two peanuts were walking down a railroad track" part2 = "and one was a salted. ... peanut." doc_iters = [{ 'start_byte': 88, 'end_byte': 133, 'content_type': 'application/peanut', 'entity_length': 1024, 'part_iter': iter(StringIO(part1).read, ''), }, { 'start_byte': 500, 'end_byte': 532, 'content_type': 'application/salted', 'entity_length': 1024, 'part_iter': iter(StringIO(part2).read, ''), }] resp_body = ''.join( utils.document_iters_to_http_response_body( iter(doc_iters), 'boundaryboundary', multipart=True, logger=FakeLogger())) self.assertEqual(resp_body, ( "--boundaryboundary\r\n" + # This is a little too strict; we don't actually care that the # headers are in this order, but the test is much more legible # this way. "Content-Type: application/peanut\r\n" + "Content-Range: bytes 88-133/1024\r\n" + "\r\n" + part1 + "\r\n" + "--boundaryboundary\r\n" "Content-Type: application/salted\r\n" + "Content-Range: bytes 500-532/1024\r\n" + "\r\n" + part2 + "\r\n" + "--boundaryboundary--")) class TestPairs(unittest.TestCase): def test_pairs(self): items = [10, 20, 30, 40, 50, 60] got_pairs = set(utils.pairs(items)) self.assertEqual(got_pairs, set([(10, 20), (10, 30), (10, 40), (10, 50), (10, 60), (20, 30), (20, 40), (20, 50), (20, 60), (30, 40), (30, 50), (30, 60), (40, 50), (40, 60), (50, 60)])) class TestSocketStringParser(unittest.TestCase): def test_socket_string_parser(self): default = 1337 addrs = [('1.2.3.4', '1.2.3.4', default), ('1.2.3.4:5000', '1.2.3.4', 5000), ('[dead:beef::1]', 'dead:beef::1', default), ('[dead:beef::1]:5000', 'dead:beef::1', 5000), ('example.com', 'example.com', default), ('example.com:5000', 'example.com', 5000), ('foo.1-2-3.bar.com:5000', 'foo.1-2-3.bar.com', 5000), ('1.2.3.4:10:20', None, None), ('dead:beef::1:5000', None, None)] for addr, expected_host, expected_port in addrs: if expected_host: host, port = utils.parse_socket_string(addr, default) self.assertEqual(expected_host, host) self.assertEqual(expected_port, int(port)) else: with self.assertRaises(ValueError): utils.parse_socket_string(addr, default) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/corrupted_example.db0000664000567000056710000002000013024044352023262 0ustar jenkinsjenkins00000000000000junkte format 3@      F''Ktableoutgoing_syncoutgoing_syncCREATE TABLE outgoing_sync ( remote_id TEXT UNIQUE, sync_point INTEGER, updated_at TEXT DEFAULT 0 )9M'indexsqlite_autoindex_outgoing_sync_1outgoing_syncF''Ktableincoming_syncincoming_syncCREATE TABLE incoming_sync ( remote_id TEXT UNIQUE, sync_point INTEGER, updated_at TEXT DEFAULT 0 
)9M'indexsqlite_autoindex_incoming_sync_1incoming_sync5']triggeroutgoing_sync_insertoutgoing_syncCREATE TRIGGER outgoing_sync_insert AFTER INSERT ON outgoing_sync BEGIN UPDATE outgoing_sync SET updated_at = STRFTIME('%s', 'NOW') WHERE ROWID = new.ROWID; END }}0 EtabletesttestCREATE TABLE test (one TEXT)5']triggerincoming_sync_updateincoming_syncCREATE TRIGGER incoming_sync_update AFTER UPDATE ON incoming_sync BEGIN UPDATE incoming_sync SET updated_at = STRFTIME('%s', 'NOW') WHERE ROWID = new.ROWID; END5']triggerincoming_sync_insertincoming_syncCREATE TRIGGER incoming_sync_insert AFTER INSERT ON incoming_sync BEGIN UPDATE incoming_sync SET updated_at = STRFTIME('%s', 'NOW') WHERE ROWID = new.ROWID; END5']triggeroutgoing_sync_updateoutgoing_syncCREATE TRIGGER outgoing_sync_update AFTER UPDATE ON outgoing_sync BEGIN UPDATE outgoing_sync SET updated_at = STRFTIME('%s', 'NOW') WHERE ROWID = new.ROWID; END 1swift-2.7.1/test/unit/common/test_db_replicator.py0000664000567000056710000017144013024044354023474 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import print_function import unittest from contextlib import contextmanager import os import logging import errno import math import time from mock import patch, call from shutil import rmtree, copy from tempfile import mkdtemp, NamedTemporaryFile import mock import json from swift.container.backend import DATADIR from swift.common import db_replicator from swift.common.utils import (normalize_timestamp, hash_path, storage_directory) from swift.common.exceptions import DriveNotMounted from swift.common.swob import HTTPException from test import unit from test.unit.common.test_db import ExampleBroker TEST_ACCOUNT_NAME = 'a c t' TEST_CONTAINER_NAME = 'c o n' def teardown_module(): "clean up my monkey patching" reload(db_replicator) @contextmanager def lock_parent_directory(filename): yield True class FakeRing(object): class Ring(object): devs = [] def __init__(self, path, reload_time=15, ring_name=None): pass def get_part(self, account, container=None, obj=None): return 0 def get_part_nodes(self, part): return [] def get_more_nodes(self, *args): return [] class FakeRingWithSingleNode(object): class Ring(object): devs = [dict( id=1, weight=10.0, zone=1, ip='1.1.1.1', port=6000, device='sdb', meta='', replication_ip='1.1.1.1', replication_port=6000 )] def __init__(self, path, reload_time=15, ring_name=None): pass def get_part(self, account, container=None, obj=None): return 0 def get_part_nodes(self, part): return self.devs def get_more_nodes(self, *args): return (d for d in self.devs) class FakeRingWithNodes(object): class Ring(object): devs = [dict( id=1, weight=10.0, zone=1, ip='1.1.1.1', port=6000, device='sdb', meta='', replication_ip='1.1.1.1', replication_port=6000, region=1 ), dict( id=2, weight=10.0, zone=2, ip='1.1.1.2', port=6000, device='sdb', meta='', replication_ip='1.1.1.2', replication_port=6000, region=2 ), dict( id=3, weight=10.0, zone=3, 
ip='1.1.1.3', port=6000, device='sdb', meta='', replication_ip='1.1.1.3', replication_port=6000, region=1 ), dict( id=4, weight=10.0, zone=4, ip='1.1.1.4', port=6000, device='sdb', meta='', replication_ip='1.1.1.4', replication_port=6000, region=2 ), dict( id=5, weight=10.0, zone=5, ip='1.1.1.5', port=6000, device='sdb', meta='', replication_ip='1.1.1.5', replication_port=6000, region=1 ), dict( id=6, weight=10.0, zone=6, ip='1.1.1.6', port=6000, device='sdb', meta='', replication_ip='1.1.1.6', replication_port=6000, region=2 )] def __init__(self, path, reload_time=15, ring_name=None): pass def get_part(self, account, container=None, obj=None): return 0 def get_part_nodes(self, part): return self.devs[:3] def get_more_nodes(self, *args): return (d for d in self.devs[3:]) class FakeProcess(object): def __init__(self, *codes): self.codes = iter(codes) self.args = None self.kwargs = None def __call__(self, *args, **kwargs): self.args = args self.kwargs = kwargs class Failure(object): def communicate(innerself): next_item = next(self.codes) if isinstance(next_item, int): innerself.returncode = next_item return next_item raise next_item return Failure() @contextmanager def _mock_process(*args): orig_process = db_replicator.subprocess.Popen db_replicator.subprocess.Popen = FakeProcess(*args) yield db_replicator.subprocess.Popen db_replicator.subprocess.Popen = orig_process class ReplHttp(object): def __init__(self, response=None, set_status=200): self.response = response self.set_status = set_status replicated = False host = 'localhost' def replicate(self, *args): self.replicated = True class Response(object): status = self.set_status data = self.response def read(innerself): return self.response return Response() class ChangingMtimesOs(object): def __init__(self): self.mtime = 0 def __call__(self, *args, **kwargs): self.mtime += 1 return self.mtime class FakeBroker(object): db_file = __file__ get_repl_missing_table = False stub_replication_info = None db_type = 'container' db_contains_type = 'object' info = {'account': TEST_ACCOUNT_NAME, 'container': TEST_CONTAINER_NAME} def __init__(self, *args, **kwargs): self.locked = False return None @contextmanager def lock(self): self.locked = True yield True self.locked = False def get_sync(self, *args, **kwargs): return 5 def get_syncs(self): return [] def get_items_since(self, point, *args): if point == 0: return [{'ROWID': 1}] if point == -1: return [{'ROWID': 1}, {'ROWID': 2}] return [] def merge_syncs(self, *args, **kwargs): self.args = args def merge_items(self, *args): self.args = args def get_replication_info(self): if self.get_repl_missing_table: raise Exception('no such table') info = dict(self.info) info.update({ 'hash': 12345, 'delete_timestamp': 0, 'put_timestamp': 1, 'created_at': 1, 'count': 0, }) if self.stub_replication_info: info.update(self.stub_replication_info) return info def reclaim(self, item_timestamp, sync_timestamp): pass def newid(self, remote_d): pass def update_metadata(self, metadata): self.metadata = metadata def merge_timestamps(self, created_at, put_timestamp, delete_timestamp): self.created_at = created_at self.put_timestamp = put_timestamp self.delete_timestamp = delete_timestamp class FakeAccountBroker(FakeBroker): db_type = 'account' db_contains_type = 'container' info = {'account': TEST_ACCOUNT_NAME} class TestReplicator(db_replicator.Replicator): server_type = 'container' ring_file = 'container.ring.gz' brokerclass = FakeBroker datadir = DATADIR default_port = 1000 class TestDBReplicator(unittest.TestCase): def 
setUp(self): db_replicator.ring = FakeRing() self.delete_db_calls = [] self._patchers = [] # recon cache path self.recon_cache = mkdtemp() rmtree(self.recon_cache, ignore_errors=1) os.mkdir(self.recon_cache) def tearDown(self): for patcher in self._patchers: patcher.stop() rmtree(self.recon_cache, ignore_errors=1) def _patch(self, patching_fn, *args, **kwargs): patcher = patching_fn(*args, **kwargs) patched_thing = patcher.start() self._patchers.append(patcher) return patched_thing def stub_delete_db(self, broker): self.delete_db_calls.append('/path/to/file') def test_creation(self): # later config should be extended to assert more config options replicator = TestReplicator({'node_timeout': '3.5'}) self.assertEqual(replicator.node_timeout, 3.5) def test_repl_connection(self): node = {'replication_ip': '127.0.0.1', 'replication_port': 80, 'device': 'sdb1'} conn = db_replicator.ReplConnection(node, '1234567890', 'abcdefg', logging.getLogger()) def req(method, path, body, headers): self.assertEqual(method, 'REPLICATE') self.assertEqual(headers['Content-Type'], 'application/json') class Resp(object): def read(self): return 'data' resp = Resp() conn.request = req conn.getresponse = lambda *args: resp self.assertEqual(conn.replicate(1, 2, 3), resp) def other_req(method, path, body, headers): raise Exception('blah') conn.request = other_req self.assertEqual(conn.replicate(1, 2, 3), None) def test_rsync_file(self): replicator = TestReplicator({}) with _mock_process(-1): self.assertEqual( False, replicator._rsync_file('/some/file', 'remote:/some/file')) with _mock_process(0): self.assertEqual( True, replicator._rsync_file('/some/file', 'remote:/some/file')) def test_rsync_file_popen_args(self): replicator = TestReplicator({}) with _mock_process(0) as process: replicator._rsync_file('/some/file', 'remote:/some_file') exp_args = ([ 'rsync', '--quiet', '--no-motd', '--timeout=%s' % int(math.ceil(replicator.node_timeout)), '--contimeout=%s' % int(math.ceil(replicator.conn_timeout)), '--whole-file', '/some/file', 'remote:/some_file'],) self.assertEqual(exp_args, process.args) def test_rsync_file_popen_args_whole_file_false(self): replicator = TestReplicator({}) with _mock_process(0) as process: replicator._rsync_file('/some/file', 'remote:/some_file', False) exp_args = ([ 'rsync', '--quiet', '--no-motd', '--timeout=%s' % int(math.ceil(replicator.node_timeout)), '--contimeout=%s' % int(math.ceil(replicator.conn_timeout)), '/some/file', 'remote:/some_file'],) self.assertEqual(exp_args, process.args) def test_rsync_file_popen_args_different_region_and_rsync_compress(self): replicator = TestReplicator({}) for rsync_compress in (False, True): replicator.rsync_compress = rsync_compress for different_region in (False, True): with _mock_process(0) as process: replicator._rsync_file('/some/file', 'remote:/some_file', False, different_region) if rsync_compress and different_region: # --compress arg should be passed to rsync binary # only when rsync_compress option is enabled # AND destination node is in a different # region self.assertTrue('--compress' in process.args[0]) else: self.assertFalse('--compress' in process.args[0]) def test_rsync_db(self): replicator = TestReplicator({}) replicator._rsync_file = lambda *args, **kwargs: True fake_device = {'replication_ip': '127.0.0.1', 'device': 'sda1'} replicator._rsync_db(FakeBroker(), fake_device, ReplHttp(), 'abcd') def test_rsync_db_rsync_file_call(self): fake_device = {'ip': '127.0.0.1', 'port': '0', 'replication_ip': '127.0.0.1', 'replication_port': '0', 
'device': 'sda1'} class MyTestReplicator(TestReplicator): def __init__(self, db_file, remote_file): super(MyTestReplicator, self).__init__({}) self.db_file = db_file self.remote_file = remote_file def _rsync_file(self_, db_file, remote_file, whole_file=True, different_region=False): self.assertEqual(self_.db_file, db_file) self.assertEqual(self_.remote_file, remote_file) self_._rsync_file_called = True return False broker = FakeBroker() remote_file = '127.0.0.1::container/sda1/tmp/abcd' replicator = MyTestReplicator(broker.db_file, remote_file) replicator._rsync_db(broker, fake_device, ReplHttp(), 'abcd') self.assertTrue(replicator._rsync_file_called) def test_rsync_db_rsync_file_failure(self): class MyTestReplicator(TestReplicator): def __init__(self): super(MyTestReplicator, self).__init__({}) self._rsync_file_called = False def _rsync_file(self_, *args, **kwargs): self.assertEqual( False, self_._rsync_file_called, '_sync_file() should only be called once') self_._rsync_file_called = True return False with patch('os.path.exists', lambda *args: True): replicator = MyTestReplicator() fake_device = {'ip': '127.0.0.1', 'replication_ip': '127.0.0.1', 'device': 'sda1'} replicator._rsync_db(FakeBroker(), fake_device, ReplHttp(), 'abcd') self.assertEqual(True, replicator._rsync_file_called) def test_rsync_db_change_after_sync(self): class MyTestReplicator(TestReplicator): def __init__(self, broker): super(MyTestReplicator, self).__init__({}) self.broker = broker self._rsync_file_call_count = 0 def _rsync_file(self_, db_file, remote_file, whole_file=True, different_region=False): self_._rsync_file_call_count += 1 if self_._rsync_file_call_count == 1: self.assertEqual(True, whole_file) self.assertEqual(False, self_.broker.locked) elif self_._rsync_file_call_count == 2: self.assertEqual(False, whole_file) self.assertEqual(True, self_.broker.locked) else: raise RuntimeError('_rsync_file() called too many times') return True # with journal file with patch('os.path.exists', lambda *args: True): broker = FakeBroker() replicator = MyTestReplicator(broker) fake_device = {'ip': '127.0.0.1', 'replication_ip': '127.0.0.1', 'device': 'sda1'} replicator._rsync_db(broker, fake_device, ReplHttp(), 'abcd') self.assertEqual(2, replicator._rsync_file_call_count) # with new mtime with patch('os.path.exists', lambda *args: False): with patch('os.path.getmtime', ChangingMtimesOs()): broker = FakeBroker() replicator = MyTestReplicator(broker) fake_device = {'ip': '127.0.0.1', 'replication_ip': '127.0.0.1', 'device': 'sda1'} replicator._rsync_db(broker, fake_device, ReplHttp(), 'abcd') self.assertEqual(2, replicator._rsync_file_call_count) def test_in_sync(self): replicator = TestReplicator({}) self.assertEqual(replicator._in_sync( {'id': 'a', 'point': 0, 'max_row': 0, 'hash': 'b'}, {'id': 'a', 'point': -1, 'max_row': 0, 'hash': 'b'}, FakeBroker(), -1), True) self.assertEqual(replicator._in_sync( {'id': 'a', 'point': -1, 'max_row': 0, 'hash': 'b'}, {'id': 'a', 'point': -1, 'max_row': 10, 'hash': 'b'}, FakeBroker(), -1), True) self.assertEqual(bool(replicator._in_sync( {'id': 'a', 'point': -1, 'max_row': 0, 'hash': 'c'}, {'id': 'a', 'point': -1, 'max_row': 10, 'hash': 'd'}, FakeBroker(), -1)), False) def test_run_once_no_local_device_in_ring(self): logger = unit.debug_logger('test-replicator') replicator = TestReplicator({'recon_cache_path': self.recon_cache}, logger=logger) with patch('swift.common.db_replicator.whataremyips', return_value=['127.0.0.1']): replicator.run_once() expected = [ "Can't find itself 127.0.0.1 
with port 1000 " "in ring file, not replicating", ] self.assertEqual(expected, logger.get_lines_for_level('error')) def test_run_once_with_local_device_in_ring(self): logger = unit.debug_logger('test-replicator') base = 'swift.common.db_replicator.' with patch(base + 'whataremyips', return_value=['1.1.1.1']), \ patch(base + 'ring', FakeRingWithNodes()): replicator = TestReplicator({'bind_port': 6000, 'recon_cache_path': self.recon_cache}, logger=logger) replicator.run_once() self.assertFalse(logger.get_lines_for_level('error')) def test_run_once_no_ips(self): replicator = TestReplicator({}, logger=unit.FakeLogger()) self._patch(patch.object, db_replicator, 'whataremyips', lambda *a, **kw: []) replicator.run_once() self.assertEqual( replicator.logger.log_dict['error'], [(('ERROR Failed to get my own IPs?',), {})]) def test_run_once_node_is_not_mounted(self): db_replicator.ring = FakeRingWithSingleNode() # If a bind_ip is specified, it's plumbed into whataremyips() and # returned by itself. conf = {'mount_check': 'true', 'bind_ip': '1.1.1.1', 'bind_port': 6000} replicator = TestReplicator(conf, logger=unit.FakeLogger()) self.assertEqual(replicator.mount_check, True) self.assertEqual(replicator.port, 6000) def mock_ismount(path): self.assertEqual(path, os.path.join(replicator.root, replicator.ring.devs[0]['device'])) return False self._patch(patch.object, db_replicator, 'ismount', mock_ismount) replicator.run_once() self.assertEqual( replicator.logger.log_dict['warning'], [(('Skipping %(device)s as it is not mounted' % replicator.ring.devs[0],), {})]) def test_run_once_node_is_mounted(self): db_replicator.ring = FakeRingWithSingleNode() conf = {'mount_check': 'true', 'bind_port': 6000} replicator = TestReplicator(conf, logger=unit.FakeLogger()) self.assertEqual(replicator.mount_check, True) self.assertEqual(replicator.port, 6000) def mock_unlink_older_than(path, mtime): self.assertEqual(path, os.path.join(replicator.root, replicator.ring.devs[0]['device'], 'tmp')) self.assertTrue(time.time() - replicator.reclaim_age >= mtime) def mock_spawn_n(fn, part, object_file, node_id): self.assertEqual('123', part) self.assertEqual('/srv/node/sda/c.db', object_file) self.assertEqual(1, node_id) self._patch(patch.object, db_replicator, 'whataremyips', lambda *a, **kw: ['1.1.1.1']) self._patch(patch.object, db_replicator, 'ismount', lambda *args: True) self._patch(patch.object, db_replicator, 'unlink_older_than', mock_unlink_older_than) self._patch(patch.object, db_replicator, 'roundrobin_datadirs', lambda *args: [('123', '/srv/node/sda/c.db', 1)]) self._patch(patch.object, replicator.cpool, 'spawn_n', mock_spawn_n) with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.isdir.return_value = True replicator.run_once() mock_os.path.isdir.assert_called_with( os.path.join(replicator.root, replicator.ring.devs[0]['device'], replicator.datadir)) def test_usync(self): fake_http = ReplHttp() replicator = TestReplicator({}) replicator._usync_db(0, FakeBroker(), fake_http, '12345', '67890') def test_usync_http_error_above_300(self): fake_http = ReplHttp(set_status=301) replicator = TestReplicator({}) self.assertFalse( replicator._usync_db(0, FakeBroker(), fake_http, '12345', '67890')) def test_usync_http_error_below_200(self): fake_http = ReplHttp(set_status=101) replicator = TestReplicator({}) self.assertFalse( replicator._usync_db(0, FakeBroker(), fake_http, '12345', '67890')) def test_stats(self): # I'm not sure how to test that this logs the right thing, # but we can 
at least make sure it gets covered. replicator = TestReplicator({}) replicator._zero_stats() replicator._report_stats() def test_replicate_object(self): db_replicator.ring = FakeRingWithNodes() replicator = TestReplicator({}) replicator.delete_db = self.stub_delete_db replicator._replicate_object('0', '/path/to/file', 'node_id') self.assertEqual([], self.delete_db_calls) def test_replicate_object_quarantine(self): replicator = TestReplicator({}) self._patch(patch.object, replicator.brokerclass, 'db_file', '/a/b/c/d/e/hey') self._patch(patch.object, replicator.brokerclass, 'get_repl_missing_table', True) def mock_renamer(was, new, fsync=False, cause_colision=False): if cause_colision and '-' not in new: raise OSError(errno.EEXIST, "File already exists") self.assertEqual('/a/b/c/d/e', was) if '-' in new: self.assertTrue( new.startswith('/a/quarantined/containers/e-')) else: self.assertEqual('/a/quarantined/containers/e', new) def mock_renamer_error(was, new, fsync): return mock_renamer(was, new, fsync, cause_colision=True) with patch.object(db_replicator, 'renamer', mock_renamer): replicator._replicate_object('0', 'file', 'node_id') # try the double quarantine with patch.object(db_replicator, 'renamer', mock_renamer_error): replicator._replicate_object('0', 'file', 'node_id') def test_replicate_object_delete_because_deleted(self): replicator = TestReplicator({}) try: replicator.delete_db = self.stub_delete_db replicator.brokerclass.stub_replication_info = { 'delete_timestamp': 2, 'put_timestamp': 1} replicator._replicate_object('0', '/path/to/file', 'node_id') finally: replicator.brokerclass.stub_replication_info = None self.assertEqual(['/path/to/file'], self.delete_db_calls) def test_replicate_object_delete_because_not_shouldbehere(self): replicator = TestReplicator({}) replicator.delete_db = self.stub_delete_db replicator._replicate_object('0', '/path/to/file', 'node_id') self.assertEqual(['/path/to/file'], self.delete_db_calls) def test_replicate_account_out_of_place(self): replicator = TestReplicator({}, logger=unit.FakeLogger()) replicator.ring = FakeRingWithNodes().Ring('path') replicator.brokerclass = FakeAccountBroker replicator._repl_to_node = lambda *args: True replicator.delete_db = self.stub_delete_db # Correct node_id, wrong part part = replicator.ring.get_part(TEST_ACCOUNT_NAME) + 1 node_id = replicator.ring.get_part_nodes(part)[0]['id'] replicator._replicate_object(str(part), '/path/to/file', node_id) self.assertEqual(['/path/to/file'], self.delete_db_calls) error_msgs = replicator.logger.get_lines_for_level('error') expected = 'Found /path/to/file for /a%20c%20t when it should be ' \ 'on partition 0; will replicate out and remove.' 
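        # A single error line should describe the misplaced account DB that
        # gets replicated out and then removed.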
self.assertEqual(error_msgs, [expected]) def test_replicate_container_out_of_place(self): replicator = TestReplicator({}, logger=unit.FakeLogger()) replicator.ring = FakeRingWithNodes().Ring('path') replicator._repl_to_node = lambda *args: True replicator.delete_db = self.stub_delete_db # Correct node_id, wrong part part = replicator.ring.get_part( TEST_ACCOUNT_NAME, TEST_CONTAINER_NAME) + 1 node_id = replicator.ring.get_part_nodes(part)[0]['id'] replicator._replicate_object(str(part), '/path/to/file', node_id) self.assertEqual(['/path/to/file'], self.delete_db_calls) self.assertEqual( replicator.logger.log_dict['error'], [(('Found /path/to/file for /a%20c%20t/c%20o%20n when it should ' 'be on partition 0; will replicate out and remove.',), {})]) def test_replicate_object_different_region(self): db_replicator.ring = FakeRingWithNodes() replicator = TestReplicator({}) replicator._repl_to_node = mock.Mock() # For node_id = 1, one replica in same region(1) and other is in a # different region(2). Refer: FakeRingWithNodes replicator._replicate_object('0', '/path/to/file', 1) # different_region was set True and passed to _repl_to_node() self.assertEqual(replicator._repl_to_node.call_args_list[0][0][-1], True) # different_region was set False and passed to _repl_to_node() self.assertEqual(replicator._repl_to_node.call_args_list[1][0][-1], False) def test_delete_db(self): db_replicator.lock_parent_directory = lock_parent_directory replicator = TestReplicator({}, logger=unit.FakeLogger()) replicator._zero_stats() replicator.extract_device = lambda _: 'some_device' temp_dir = mkdtemp() try: temp_suf_dir = os.path.join(temp_dir, '16e') os.mkdir(temp_suf_dir) temp_hash_dir = os.path.join(temp_suf_dir, '166e33924a08ede4204871468c11e16e') os.mkdir(temp_hash_dir) temp_file = NamedTemporaryFile(dir=temp_hash_dir, delete=False) temp_hash_dir2 = os.path.join(temp_suf_dir, '266e33924a08ede4204871468c11e16e') os.mkdir(temp_hash_dir2) temp_file2 = NamedTemporaryFile(dir=temp_hash_dir2, delete=False) # sanity-checks self.assertTrue(os.path.exists(temp_dir)) self.assertTrue(os.path.exists(temp_suf_dir)) self.assertTrue(os.path.exists(temp_hash_dir)) self.assertTrue(os.path.exists(temp_file.name)) self.assertTrue(os.path.exists(temp_hash_dir2)) self.assertTrue(os.path.exists(temp_file2.name)) self.assertEqual(0, replicator.stats['remove']) temp_file.db_file = temp_file.name replicator.delete_db(temp_file) self.assertTrue(os.path.exists(temp_dir)) self.assertTrue(os.path.exists(temp_suf_dir)) self.assertFalse(os.path.exists(temp_hash_dir)) self.assertFalse(os.path.exists(temp_file.name)) self.assertTrue(os.path.exists(temp_hash_dir2)) self.assertTrue(os.path.exists(temp_file2.name)) self.assertEqual([(('removes.some_device',), {})], replicator.logger.log_dict['increment']) self.assertEqual(1, replicator.stats['remove']) temp_file2.db_file = temp_file2.name replicator.delete_db(temp_file2) self.assertTrue(os.path.exists(temp_dir)) self.assertFalse(os.path.exists(temp_suf_dir)) self.assertFalse(os.path.exists(temp_hash_dir)) self.assertFalse(os.path.exists(temp_file.name)) self.assertFalse(os.path.exists(temp_hash_dir2)) self.assertFalse(os.path.exists(temp_file2.name)) self.assertEqual([(('removes.some_device',), {})] * 2, replicator.logger.log_dict['increment']) self.assertEqual(2, replicator.stats['remove']) finally: rmtree(temp_dir) def test_extract_device(self): replicator = TestReplicator({'devices': '/some/root'}) self.assertEqual('some_device', replicator.extract_device( 
'/some/root/some_device/deeper/and/deeper')) self.assertEqual('UNKNOWN', replicator.extract_device( '/some/foo/some_device/deeper/and/deeper')) def test_dispatch_no_arg_pop(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) response = rpc.dispatch(('a',), 'arg') self.assertEqual('Invalid object type', response.body) self.assertEqual(400, response.status_int) def test_dispatch_drive_not_mounted(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, True) def mock_ismount(path): self.assertEqual('/drive', path) return False self._patch(patch.object, db_replicator, 'ismount', mock_ismount) response = rpc.dispatch(('drive', 'part', 'hash'), ['method']) self.assertEqual('507 drive is not mounted', response.status) self.assertEqual(507, response.status_int) def test_dispatch_unexpected_operation_db_does_not_exist(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) def mock_mkdirs(path): self.assertEqual('/drive/tmp', path) self._patch(patch.object, db_replicator, 'mkdirs', mock_mkdirs) with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.exists.return_value = False response = rpc.dispatch(('drive', 'part', 'hash'), ['unexpected']) self.assertEqual('404 Not Found', response.status) self.assertEqual(404, response.status_int) def test_dispatch_operation_unexpected(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) self._patch(patch.object, db_replicator, 'mkdirs', lambda *args: True) def unexpected_method(broker, args): self.assertEqual(FakeBroker, broker.__class__) self.assertEqual(['arg1', 'arg2'], args) return 'unexpected-called' rpc.unexpected = unexpected_method with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.exists.return_value = True response = rpc.dispatch(('drive', 'part', 'hash'), ['unexpected', 'arg1', 'arg2']) mock_os.path.exists.assert_called_with('/part/ash/hash/hash.db') self.assertEqual('unexpected-called', response) def test_dispatch_operation_rsync_then_merge(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) self._patch(patch.object, db_replicator, 'renamer', lambda *args: True) with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.exists.return_value = True response = rpc.dispatch(('drive', 'part', 'hash'), ['rsync_then_merge', 'arg1', 'arg2']) expected_calls = [call('/part/ash/hash/hash.db'), call('/drive/tmp/arg1')] self.assertEqual(mock_os.path.exists.call_args_list, expected_calls) self.assertEqual('204 No Content', response.status) self.assertEqual(204, response.status_int) def test_dispatch_operation_complete_rsync(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) self._patch(patch.object, db_replicator, 'renamer', lambda *args: True) with patch('swift.common.db_replicator.os', new=mock.MagicMock( wraps=os)) as mock_os: mock_os.path.exists.side_effect = [False, True] response = rpc.dispatch(('drive', 'part', 'hash'), ['complete_rsync', 'arg1', 'arg2']) expected_calls = [call('/part/ash/hash/hash.db'), call('/drive/tmp/arg1')] self.assertEqual(mock_os.path.exists.call_args_list, expected_calls) self.assertEqual('204 No Content', response.status) self.assertEqual(204, response.status_int) def test_rsync_then_merge_db_does_not_exist(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.exists.return_value = False response 
= rpc.rsync_then_merge('drive', '/data/db.db', ('arg1', 'arg2')) mock_os.path.exists.assert_called_with('/data/db.db') self.assertEqual('404 Not Found', response.status) self.assertEqual(404, response.status_int) def test_rsync_then_merge_old_does_not_exist(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.exists.side_effect = [True, False] response = rpc.rsync_then_merge('drive', '/data/db.db', ('arg1', 'arg2')) expected_calls = [call('/data/db.db'), call('/drive/tmp/arg1')] self.assertEqual(mock_os.path.exists.call_args_list, expected_calls) self.assertEqual('404 Not Found', response.status) self.assertEqual(404, response.status_int) def test_rsync_then_merge_with_objects(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) def mock_renamer(old, new): self.assertEqual('/drive/tmp/arg1', old) self.assertEqual('/data/db.db', new) self._patch(patch.object, db_replicator, 'renamer', mock_renamer) with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.exists.return_value = True response = rpc.rsync_then_merge('drive', '/data/db.db', ['arg1', 'arg2']) self.assertEqual('204 No Content', response.status) self.assertEqual(204, response.status_int) def test_complete_rsync_db_does_not_exist(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.exists.return_value = True response = rpc.complete_rsync('drive', '/data/db.db', ['arg1', 'arg2']) mock_os.path.exists.assert_called_with('/data/db.db') self.assertEqual('404 Not Found', response.status) self.assertEqual(404, response.status_int) def test_complete_rsync_old_file_does_not_exist(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.exists.return_value = False response = rpc.complete_rsync('drive', '/data/db.db', ['arg1', 'arg2']) expected_calls = [call('/data/db.db'), call('/drive/tmp/arg1')] self.assertEqual(expected_calls, mock_os.path.exists.call_args_list) self.assertEqual('404 Not Found', response.status) self.assertEqual(404, response.status_int) def test_complete_rsync_rename(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) def mock_exists(path): if path == '/data/db.db': return False self.assertEqual('/drive/tmp/arg1', path) return True def mock_renamer(old, new): self.assertEqual('/drive/tmp/arg1', old) self.assertEqual('/data/db.db', new) self._patch(patch.object, db_replicator, 'renamer', mock_renamer) with patch('swift.common.db_replicator.os', new=mock.MagicMock(wraps=os)) as mock_os: mock_os.path.exists.side_effect = [False, True] response = rpc.complete_rsync('drive', '/data/db.db', ['arg1', 'arg2']) self.assertEqual('204 No Content', response.status) self.assertEqual(204, response.status_int) def test_replicator_sync_with_broker_replication_missing_table(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) rpc.logger = unit.debug_logger() broker = FakeBroker() broker.get_repl_missing_table = True called = [] def mock_quarantine_db(object_file, server_type): called.append(True) self.assertEqual(broker.db_file, object_file) self.assertEqual(broker.db_type, server_type) self._patch(patch.object, db_replicator, 'quarantine_db', mock_quarantine_db) response = rpc.sync(broker, ('remote_sync', 
'hash_', 'id_', 'created_at', 'put_timestamp', 'delete_timestamp', 'metadata')) self.assertEqual('404 Not Found', response.status) self.assertEqual(404, response.status_int) self.assertEqual(called, [True]) errors = rpc.logger.get_lines_for_level('error') self.assertEqual(errors, ["Unable to decode remote metadata 'metadata'", "Quarantining DB %s" % broker]) def test_replicator_sync(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) broker = FakeBroker() response = rpc.sync(broker, (broker.get_sync() + 1, 12345, 'id_', 'created_at', 'put_timestamp', 'delete_timestamp', '{"meta1": "data1", "meta2": "data2"}')) self.assertEqual({'meta1': 'data1', 'meta2': 'data2'}, broker.metadata) self.assertEqual('created_at', broker.created_at) self.assertEqual('put_timestamp', broker.put_timestamp) self.assertEqual('delete_timestamp', broker.delete_timestamp) self.assertEqual('200 OK', response.status) self.assertEqual(200, response.status_int) def test_rsync_then_merge(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) rpc.rsync_then_merge('sda1', '/srv/swift/blah', ('a', 'b')) def test_merge_items(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) fake_broker = FakeBroker() args = ('a', 'b') rpc.merge_items(fake_broker, args) self.assertEqual(fake_broker.args, args) def test_merge_syncs(self): rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) fake_broker = FakeBroker() args = ('a', 'b') rpc.merge_syncs(fake_broker, args) self.assertEqual(fake_broker.args, (args[0],)) def test_complete_rsync_with_bad_input(self): drive = '/some/root' db_file = __file__ args = ['old_file'] rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) resp = rpc.complete_rsync(drive, db_file, args) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual(404, resp.status_int) resp = rpc.complete_rsync(drive, 'new_db_file', args) self.assertTrue(isinstance(resp, HTTPException)) self.assertEqual(404, resp.status_int) def test_complete_rsync(self): drive = mkdtemp() args = ['old_file'] rpc = db_replicator.ReplicatorRpc('/', '/', FakeBroker, False) os.mkdir('%s/tmp' % drive) old_file = '%s/tmp/old_file' % drive new_file = '%s/new_db_file' % drive try: fp = open(old_file, 'w') fp.write('void') fp.close resp = rpc.complete_rsync(drive, new_file, args) self.assertEqual(204, resp.status_int) finally: rmtree(drive) def test_roundrobin_datadirs(self): listdir_calls = [] isdir_calls = [] exists_calls = [] shuffle_calls = [] rmdir_calls = [] def _listdir(path): listdir_calls.append(path) if not path.startswith('/srv/node/sda/containers') and \ not path.startswith('/srv/node/sdb/containers'): return [] path = path[len('/srv/node/sdx/containers'):] if path == '': return ['123', '456', '789', '9999'] # 456 will pretend to be a file # 9999 will be an empty partition with no contents elif path == '/123': return ['abc', 'def.db'] # def.db will pretend to be a file elif path == '/123/abc': # 11111111111111111111111111111abc will pretend to be a file return ['00000000000000000000000000000abc', '11111111111111111111111111111abc'] elif path == '/123/abc/00000000000000000000000000000abc': return ['00000000000000000000000000000abc.db', # This other.db isn't in the right place, so should be # ignored later. 
'000000000000000000000000000other.db', 'weird1'] # weird1 will pretend to be a dir, if asked elif path == '/789': return ['ghi', 'jkl'] # jkl will pretend to be a file elif path == '/789/ghi': # 33333333333333333333333333333ghi will pretend to be a file return ['22222222222222222222222222222ghi', '33333333333333333333333333333ghi'] elif path == '/789/ghi/22222222222222222222222222222ghi': return ['22222222222222222222222222222ghi.db', 'weird2'] # weird2 will pretend to be a dir, if asked elif path == '9999': return [] return [] def _isdir(path): isdir_calls.append(path) if not path.startswith('/srv/node/sda/containers') and \ not path.startswith('/srv/node/sdb/containers'): return False path = path[len('/srv/node/sdx/containers'):] if path in ('/123', '/123/abc', '/123/abc/00000000000000000000000000000abc', '/123/abc/00000000000000000000000000000abc/weird1', '/789', '/789/ghi', '/789/ghi/22222222222222222222222222222ghi', '/789/ghi/22222222222222222222222222222ghi/weird2', '/9999'): return True return False def _exists(arg): exists_calls.append(arg) return True def _shuffle(arg): shuffle_calls.append(arg) def _rmdir(arg): rmdir_calls.append(arg) orig_listdir = db_replicator.os.listdir orig_isdir = db_replicator.os.path.isdir orig_exists = db_replicator.os.path.exists orig_shuffle = db_replicator.random.shuffle orig_rmdir = db_replicator.os.rmdir try: db_replicator.os.listdir = _listdir db_replicator.os.path.isdir = _isdir db_replicator.os.path.exists = _exists db_replicator.random.shuffle = _shuffle db_replicator.os.rmdir = _rmdir datadirs = [('/srv/node/sda/containers', 1), ('/srv/node/sdb/containers', 2)] results = list(db_replicator.roundrobin_datadirs(datadirs)) # The results show that the .db files are returned, the devices # interleaved. self.assertEqual(results, [ ('123', '/srv/node/sda/containers/123/abc/' '00000000000000000000000000000abc/' '00000000000000000000000000000abc.db', 1), ('123', '/srv/node/sdb/containers/123/abc/' '00000000000000000000000000000abc/' '00000000000000000000000000000abc.db', 2), ('789', '/srv/node/sda/containers/789/ghi/' '22222222222222222222222222222ghi/' '22222222222222222222222222222ghi.db', 1), ('789', '/srv/node/sdb/containers/789/ghi/' '22222222222222222222222222222ghi/' '22222222222222222222222222222ghi.db', 2)]) # The listdir calls show that we only listdir the dirs self.assertEqual(listdir_calls, [ '/srv/node/sda/containers', '/srv/node/sda/containers/123', '/srv/node/sda/containers/123/abc', '/srv/node/sdb/containers', '/srv/node/sdb/containers/123', '/srv/node/sdb/containers/123/abc', '/srv/node/sda/containers/789', '/srv/node/sda/containers/789/ghi', '/srv/node/sdb/containers/789', '/srv/node/sdb/containers/789/ghi', '/srv/node/sda/containers/9999', '/srv/node/sdb/containers/9999']) # The isdir calls show that we did ask about the things pretending # to be files at various levels. 
self.assertEqual(isdir_calls, [ '/srv/node/sda/containers/123', '/srv/node/sda/containers/123/abc', ('/srv/node/sda/containers/123/abc/' '00000000000000000000000000000abc'), '/srv/node/sdb/containers/123', '/srv/node/sdb/containers/123/abc', ('/srv/node/sdb/containers/123/abc/' '00000000000000000000000000000abc'), ('/srv/node/sda/containers/123/abc/' '11111111111111111111111111111abc'), '/srv/node/sda/containers/123/def.db', '/srv/node/sda/containers/456', '/srv/node/sda/containers/789', '/srv/node/sda/containers/789/ghi', ('/srv/node/sda/containers/789/ghi/' '22222222222222222222222222222ghi'), ('/srv/node/sdb/containers/123/abc/' '11111111111111111111111111111abc'), '/srv/node/sdb/containers/123/def.db', '/srv/node/sdb/containers/456', '/srv/node/sdb/containers/789', '/srv/node/sdb/containers/789/ghi', ('/srv/node/sdb/containers/789/ghi/' '22222222222222222222222222222ghi'), ('/srv/node/sda/containers/789/ghi/' '33333333333333333333333333333ghi'), '/srv/node/sda/containers/789/jkl', '/srv/node/sda/containers/9999', ('/srv/node/sdb/containers/789/ghi/' '33333333333333333333333333333ghi'), '/srv/node/sdb/containers/789/jkl', '/srv/node/sdb/containers/9999']) # The exists calls are the .db files we looked for as we walked the # structure. self.assertEqual(exists_calls, [ ('/srv/node/sda/containers/123/abc/' '00000000000000000000000000000abc/' '00000000000000000000000000000abc.db'), ('/srv/node/sdb/containers/123/abc/' '00000000000000000000000000000abc/' '00000000000000000000000000000abc.db'), ('/srv/node/sda/containers/789/ghi/' '22222222222222222222222222222ghi/' '22222222222222222222222222222ghi.db'), ('/srv/node/sdb/containers/789/ghi/' '22222222222222222222222222222ghi/' '22222222222222222222222222222ghi.db')]) # Shows that we called shuffle twice, once for each device. self.assertEqual( shuffle_calls, [['123', '456', '789', '9999'], ['123', '456', '789', '9999']]) # Shows that we removed the two empty partition directories.
self.assertEqual( rmdir_calls, ['/srv/node/sda/containers/9999', '/srv/node/sdb/containers/9999']) finally: db_replicator.os.listdir = orig_listdir db_replicator.os.path.isdir = orig_isdir db_replicator.os.path.exists = orig_exists db_replicator.random.shuffle = orig_shuffle db_replicator.os.rmdir = orig_rmdir @mock.patch("swift.common.db_replicator.ReplConnection", mock.Mock()) def test_http_connect(self): node = "node" partition = "partition" db_file = __file__ replicator = TestReplicator({}) replicator._http_connect(node, partition, db_file) db_replicator.ReplConnection.assert_has_calls([ mock.call(node, partition, os.path.basename(db_file).split('.', 1)[0], replicator.logger)]) class TestReplToNode(unittest.TestCase): def setUp(self): db_replicator.ring = FakeRing() self.delete_db_calls = [] self.broker = FakeBroker() self.replicator = TestReplicator({'per_diff': 10}) self.fake_node = {'ip': '127.0.0.1', 'device': 'sda1', 'port': 1000} self.fake_info = {'id': 'a', 'point': -1, 'max_row': 20, 'hash': 'b', 'created_at': 100, 'put_timestamp': 0, 'delete_timestamp': 0, 'count': 0, 'metadata': { 'Test': ('Value', normalize_timestamp(1))}} self.replicator.logger = mock.Mock() self.replicator._rsync_db = mock.Mock(return_value=True) self.replicator._usync_db = mock.Mock(return_value=True) self.http = ReplHttp('{"id": 3, "point": -1}') self.replicator._http_connect = lambda *args: self.http def test_repl_to_node_usync_success(self): rinfo = {"id": 3, "point": -1, "max_row": 10, "hash": "c"} self.http = ReplHttp(json.dumps(rinfo)) local_sync = self.broker.get_sync() self.assertEqual(self.replicator._repl_to_node( self.fake_node, self.broker, '0', self.fake_info), True) self.replicator._usync_db.assert_has_calls([ mock.call(max(rinfo['point'], local_sync), self.broker, self.http, rinfo['id'], self.fake_info['id']) ]) def test_repl_to_node_rsync_success(self): rinfo = {"id": 3, "point": -1, "max_row": 9, "hash": "c"} self.http = ReplHttp(json.dumps(rinfo)) self.broker.get_sync() self.assertEqual(self.replicator._repl_to_node( self.fake_node, self.broker, '0', self.fake_info), True) self.replicator.logger.increment.assert_has_calls([ mock.call.increment('remote_merges') ]) self.replicator._rsync_db.assert_has_calls([ mock.call(self.broker, self.fake_node, self.http, self.fake_info['id'], replicate_method='rsync_then_merge', replicate_timeout=(self.fake_info['count'] / 2000), different_region=False) ]) def test_repl_to_node_already_in_sync(self): rinfo = {"id": 3, "point": -1, "max_row": 20, "hash": "b"} self.http = ReplHttp(json.dumps(rinfo)) self.broker.get_sync() self.assertEqual(self.replicator._repl_to_node( self.fake_node, self.broker, '0', self.fake_info), True) self.assertEqual(self.replicator._rsync_db.call_count, 0) self.assertEqual(self.replicator._usync_db.call_count, 0) def test_repl_to_node_not_found(self): self.http = ReplHttp('{"id": 3, "point": -1}', set_status=404) self.assertEqual(self.replicator._repl_to_node( self.fake_node, self.broker, '0', self.fake_info, False), True) self.replicator.logger.increment.assert_has_calls([ mock.call.increment('rsyncs') ]) self.replicator._rsync_db.assert_has_calls([ mock.call(self.broker, self.fake_node, self.http, self.fake_info['id'], different_region=False) ]) def test_repl_to_node_drive_not_mounted(self): self.http = ReplHttp('{"id": 3, "point": -1}', set_status=507) self.assertRaises(DriveNotMounted, self.replicator._repl_to_node, self.fake_node, FakeBroker(), '0', self.fake_info) def test_repl_to_node_300_status(self): self.http = 
ReplHttp('{"id": 3, "point": -1}', set_status=300) self.assertEqual(self.replicator._repl_to_node( self.fake_node, FakeBroker(), '0', self.fake_info), None) def test_repl_to_node_not_response(self): self.http = mock.Mock(replicate=mock.Mock(return_value=None)) self.assertEqual(self.replicator._repl_to_node( self.fake_node, FakeBroker(), '0', self.fake_info), False) def test_repl_to_node_small_container_always_usync(self): # Tests that a small container that is > 50% out of sync will # still use usync. rinfo = {"id": 3, "point": -1, "hash": "c"} # Turn per_diff back to swift's default. self.replicator.per_diff = 1000 for r, l in ((5, 20), (40, 100), (450, 1000), (550, 1500)): rinfo['max_row'] = r self.fake_info['max_row'] = l self.replicator._usync_db = mock.Mock(return_value=True) self.http = ReplHttp(json.dumps(rinfo)) local_sync = self.broker.get_sync() self.assertEqual(self.replicator._repl_to_node( self.fake_node, self.broker, '0', self.fake_info), True) self.replicator._usync_db.assert_has_calls([ mock.call(max(rinfo['point'], local_sync), self.broker, self.http, rinfo['id'], self.fake_info['id']) ]) class FakeHTTPResponse(object): def __init__(self, resp): self.resp = resp @property def status(self): return self.resp.status_int @property def data(self): return self.resp.body def attach_fake_replication_rpc(rpc, replicate_hook=None): class FakeReplConnection(object): def __init__(self, node, partition, hash_, logger): self.logger = logger self.node = node self.partition = partition self.path = '/%s/%s/%s' % (node['device'], partition, hash_) self.host = node['replication_ip'] def replicate(self, op, *sync_args): print('REPLICATE: %s, %s, %r' % (self.path, op, sync_args)) replicate_args = self.path.lstrip('/').split('/') args = [op] + list(sync_args) swob_response = rpc.dispatch(replicate_args, args) resp = FakeHTTPResponse(swob_response) if replicate_hook: replicate_hook(op, *sync_args) return resp return FakeReplConnection class ExampleReplicator(db_replicator.Replicator): server_type = 'fake' brokerclass = ExampleBroker datadir = 'fake' default_port = 1000 class TestReplicatorSync(unittest.TestCase): # override in subclass backend = ExampleReplicator.brokerclass datadir = ExampleReplicator.datadir replicator_daemon = ExampleReplicator replicator_rpc = db_replicator.ReplicatorRpc def setUp(self): self.root = mkdtemp() self.rpc = self.replicator_rpc( self.root, self.datadir, self.backend, False, logger=unit.debug_logger()) FakeReplConnection = attach_fake_replication_rpc(self.rpc) self._orig_ReplConnection = db_replicator.ReplConnection db_replicator.ReplConnection = FakeReplConnection self._orig_Ring = db_replicator.ring.Ring self._ring = unit.FakeRing() db_replicator.ring.Ring = lambda *args, **kwargs: self._get_ring() self.logger = unit.debug_logger() def tearDown(self): db_replicator.ReplConnection = self._orig_ReplConnection db_replicator.ring.Ring = self._orig_Ring rmtree(self.root) def _get_ring(self): return self._ring def _get_broker(self, account, container=None, node_index=0): hash_ = hash_path(account, container) part, nodes = self._ring.get_nodes(account, container) drive = nodes[node_index]['device'] db_path = os.path.join(self.root, drive, storage_directory(self.datadir, part, hash_), hash_ + '.db') return self.backend(db_path, account=account, container=container) def _get_broker_part_node(self, broker): part, nodes = self._ring.get_nodes(broker.account, broker.container) storage_dir = broker.db_file[len(self.root):].lstrip(os.path.sep) broker_device = 
storage_dir.split(os.path.sep, 1)[0] for node in nodes: if node['device'] == broker_device: return part, node def _get_daemon(self, node, conf_updates): conf = { 'devices': self.root, 'recon_cache_path': self.root, 'mount_check': 'false', 'bind_port': node['replication_port'], } if conf_updates: conf.update(conf_updates) return self.replicator_daemon(conf, logger=self.logger) def _run_once(self, node, conf_updates=None, daemon=None): daemon = daemon or self._get_daemon(node, conf_updates) def _rsync_file(db_file, remote_file, **kwargs): remote_server, remote_path = remote_file.split('/', 1) dest_path = os.path.join(self.root, remote_path) copy(db_file, dest_path) return True daemon._rsync_file = _rsync_file with mock.patch('swift.common.db_replicator.whataremyips', new=lambda *a, **kw: [node['replication_ip']]): daemon.run_once() return daemon def test_local_ids(self): for drive in ('sda', 'sdb', 'sdd'): os.makedirs(os.path.join(self.root, drive, self.datadir)) for node in self._ring.devs: daemon = self._run_once(node) if node['device'] == 'sdc': self.assertEqual(daemon._local_device_ids, set()) else: self.assertEqual(daemon._local_device_ids, set([node['id']])) def test_clean_up_after_deleted_brokers(self): broker = self._get_broker('a', 'c', node_index=0) part, node = self._get_broker_part_node(broker) part = str(part) daemon = self._run_once(node) # create a super old broker and delete it! forever_ago = time.time() - daemon.reclaim_age put_timestamp = normalize_timestamp(forever_ago - 2) delete_timestamp = normalize_timestamp(forever_ago - 1) broker.initialize(put_timestamp) broker.delete_db(delete_timestamp) # if we have a container broker make sure it's reported if hasattr(broker, 'reported'): info = broker.get_info() broker.reported(info['put_timestamp'], info['delete_timestamp'], info['object_count'], info['bytes_used']) info = broker.get_replication_info() self.assertTrue(daemon.report_up_to_date(info)) # we have a part dir part_root = os.path.join(self.root, node['device'], self.datadir) parts = os.listdir(part_root) self.assertEqual([part], parts) # with a single suffix suff = os.listdir(os.path.join(part_root, part)) self.assertEqual(1, len(suff)) # running replicator will remove the deleted db daemon = self._run_once(node, daemon=daemon) self.assertEqual(1, daemon.stats['remove']) # we still have a part dir (but it's empty) suff = os.listdir(os.path.join(part_root, part)) self.assertEqual(0, len(suff)) # run it again and there's nothing to do... daemon = self._run_once(node, daemon=daemon) self.assertEqual(0, daemon.stats['attempted']) # but empty part dir is cleaned up! 
parts = os.listdir(part_root) self.assertEqual(0, len(parts)) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/malformed_example.db0000664000567000056710000002000013024044352023221 0ustar jenkinsjenkins00000000000000[binary SQLite fixture, raw bytes omitted: a deliberately malformed database whose readable schema defines the outgoing_sync and incoming_sync tables, their unique-index autoindexes, the incoming/outgoing sync insert and update triggers that bump updated_at, and a one-column test table] swift-2.7.1/test/unit/common/test_internal_client.py0000664000567000056710000016240313024044354024034 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License.
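# Tests for swift.common.internal_client, which provides InternalClient (a
# client that drives an internal proxy WSGI pipeline on behalf of Swift's
# background daemons) and SimpleClient (a urllib2-based HTTP client).
#
# For orientation only, a minimal sketch of how the client under test is
# typically constructed and used; the conf path, account, and container names
# below are illustrative and not part of this test suite:
#
#     from swift.common.internal_client import InternalClient
#
#     client = InternalClient('/etc/swift/internal-client.conf',
#                             'example-daemon', 3)
#     if client.container_exists('AUTH_test', 'backups'):
#         for obj in client.iter_objects('AUTH_test', 'backups'):
#             print(obj['name'])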
import json import mock import unittest import zlib from textwrap import dedent import os import six from six import StringIO from six.moves import range from six.moves.urllib.parse import quote from test.unit import FakeLogger from eventlet.green import urllib2 from swift.common import exceptions, internal_client, swob from swift.common.header_key_dict import HeaderKeyDict from swift.common.storage_policy import StoragePolicy from test.unit import with_tempdir, write_fake_ring, patch_policies from test.unit.common.middleware.helpers import FakeSwift class FakeConn(object): def __init__(self, body=None): if body is None: body = [] self.body = body def read(self): return json.dumps(self.body) def info(self): return {} def not_sleep(seconds): pass def unicode_string(start, length): return u''.join([six.unichr(x) for x in range(start, start + length)]) def path_parts(): account = unicode_string(1000, 4) + ' ' + unicode_string(1100, 4) container = unicode_string(2000, 4) + ' ' + unicode_string(2100, 4) obj = unicode_string(3000, 4) + ' ' + unicode_string(3100, 4) return account, container, obj def make_path(account, container=None, obj=None): path = '/v1/%s' % quote(account.encode('utf-8')) if container: path += '/%s' % quote(container.encode('utf-8')) if obj: path += '/%s' % quote(obj.encode('utf-8')) return path def make_path_info(account, container=None, obj=None): # FakeSwift keys on PATH_INFO - which is *encoded* but unquoted path = '/v1/%s' % '/'.join( p for p in (account, container, obj) if p) return path.encode('utf-8') def get_client_app(): app = FakeSwift() with mock.patch('swift.common.internal_client.loadapp', new=lambda *args, **kwargs: app): client = internal_client.InternalClient({}, 'test', 1) return client, app class InternalClient(internal_client.InternalClient): def __init__(self): pass class GetMetadataInternalClient(internal_client.InternalClient): def __init__(self, test, path, metadata_prefix, acceptable_statuses): self.test = test self.path = path self.metadata_prefix = metadata_prefix self.acceptable_statuses = acceptable_statuses self.get_metadata_called = 0 self.metadata = 'some_metadata' def _get_metadata(self, path, metadata_prefix, acceptable_statuses=None, headers=None): self.get_metadata_called += 1 self.test.assertEqual(self.path, path) self.test.assertEqual(self.metadata_prefix, metadata_prefix) self.test.assertEqual(self.acceptable_statuses, acceptable_statuses) return self.metadata class SetMetadataInternalClient(internal_client.InternalClient): def __init__( self, test, path, metadata, metadata_prefix, acceptable_statuses): self.test = test self.path = path self.metadata = metadata self.metadata_prefix = metadata_prefix self.acceptable_statuses = acceptable_statuses self.set_metadata_called = 0 self.metadata = 'some_metadata' def _set_metadata( self, path, metadata, metadata_prefix='', acceptable_statuses=None): self.set_metadata_called += 1 self.test.assertEqual(self.path, path) self.test.assertEqual(self.metadata_prefix, metadata_prefix) self.test.assertEqual(self.metadata, metadata) self.test.assertEqual(self.acceptable_statuses, acceptable_statuses) class IterInternalClient(internal_client.InternalClient): def __init__( self, test, path, marker, end_marker, acceptable_statuses, items): self.test = test self.path = path self.marker = marker self.end_marker = end_marker self.acceptable_statuses = acceptable_statuses self.items = items def _iter_items( self, path, marker='', end_marker='', acceptable_statuses=None): self.test.assertEqual(self.path, path) 
self.test.assertEqual(self.marker, marker) self.test.assertEqual(self.end_marker, end_marker) self.test.assertEqual(self.acceptable_statuses, acceptable_statuses) for item in self.items: yield item class TestCompressingfileReader(unittest.TestCase): def test_init(self): class CompressObj(object): def __init__(self, test, *args): self.test = test self.args = args def method(self, *args): self.test.assertEqual(self.args, args) return self try: compressobj = CompressObj( self, 9, zlib.DEFLATED, -zlib.MAX_WBITS, zlib.DEF_MEM_LEVEL, 0) old_compressobj = internal_client.compressobj internal_client.compressobj = compressobj.method f = StringIO('') fobj = internal_client.CompressingFileReader(f) self.assertEqual(f, fobj._f) self.assertEqual(compressobj, fobj._compressor) self.assertEqual(False, fobj.done) self.assertEqual(True, fobj.first) self.assertEqual(0, fobj.crc32) self.assertEqual(0, fobj.total_size) finally: internal_client.compressobj = old_compressobj def test_read(self): exp_data = 'abcdefghijklmnopqrstuvwxyz' fobj = internal_client.CompressingFileReader( StringIO(exp_data), chunk_size=5) data = '' d = zlib.decompressobj(16 + zlib.MAX_WBITS) for chunk in fobj.read(): data += d.decompress(chunk) self.assertEqual(exp_data, data) def test_seek(self): exp_data = 'abcdefghijklmnopqrstuvwxyz' fobj = internal_client.CompressingFileReader( StringIO(exp_data), chunk_size=5) # read a couple of chunks only for _ in range(2): fobj.read() # read whole thing after seek and check data fobj.seek(0) data = '' d = zlib.decompressobj(16 + zlib.MAX_WBITS) for chunk in fobj.read(): data += d.decompress(chunk) self.assertEqual(exp_data, data) def test_seek_not_implemented_exception(self): fobj = internal_client.CompressingFileReader( StringIO(''), chunk_size=5) self.assertRaises(NotImplementedError, fobj.seek, 10) self.assertRaises(NotImplementedError, fobj.seek, 0, 10) class TestInternalClient(unittest.TestCase): @mock.patch('swift.common.utils.HASH_PATH_SUFFIX', new='endcap') @with_tempdir def test_load_from_config(self, tempdir): conf_path = os.path.join(tempdir, 'interal_client.conf') conf_body = """ [DEFAULT] swift_dir = %s [pipeline:main] pipeline = catch_errors cache proxy-server [app:proxy-server] use = egg:swift#proxy auto_create_account_prefix = - [filter:cache] use = egg:swift#memcache [filter:catch_errors] use = egg:swift#catch_errors """ % tempdir with open(conf_path, 'w') as f: f.write(dedent(conf_body)) account_ring_path = os.path.join(tempdir, 'account.ring.gz') write_fake_ring(account_ring_path) container_ring_path = os.path.join(tempdir, 'container.ring.gz') write_fake_ring(container_ring_path) object_ring_path = os.path.join(tempdir, 'object.ring.gz') write_fake_ring(object_ring_path) with patch_policies([StoragePolicy(0, 'legacy', True)]): client = internal_client.InternalClient(conf_path, 'test', 1) self.assertEqual(client.account_ring, client.app.app.app.account_ring) self.assertEqual(client.account_ring.serialized_path, account_ring_path) self.assertEqual(client.container_ring, client.app.app.app.container_ring) self.assertEqual(client.container_ring.serialized_path, container_ring_path) object_ring = client.app.app.app.get_object_ring(0) self.assertEqual(client.get_object_ring(0), object_ring) self.assertEqual(object_ring.serialized_path, object_ring_path) self.assertEqual(client.auto_create_account_prefix, '-') def test_init(self): class App(object): def __init__(self, test, conf_path): self.test = test self.conf_path = conf_path self.load_called = 0 def load(self, uri, 
allow_modify_pipeline=True): self.load_called += 1 self.test.assertEqual(conf_path, uri) self.test.assertFalse(allow_modify_pipeline) return self conf_path = 'some_path' app = App(self, conf_path) old_loadapp = internal_client.loadapp internal_client.loadapp = app.load user_agent = 'some_user_agent' request_tries = 'some_request_tries' try: client = internal_client.InternalClient( conf_path, user_agent, request_tries) finally: internal_client.loadapp = old_loadapp self.assertEqual(1, app.load_called) self.assertEqual(app, client.app) self.assertEqual(user_agent, client.user_agent) self.assertEqual(request_tries, client.request_tries) def test_make_request_sets_user_agent(self): class InternalClient(internal_client.InternalClient): def __init__(self, test): self.test = test self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 1 def fake_app(self, env, start_response): self.test.assertEqual(self.user_agent, env['HTTP_USER_AGENT']) start_response('200 Ok', [('Content-Length', '0')]) return [] client = InternalClient(self) client.make_request('GET', '/', {}, (200,)) def test_make_request_retries(self): class InternalClient(internal_client.InternalClient): def __init__(self, test): self.test = test self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 4 self.tries = 0 self.sleep_called = 0 def fake_app(self, env, start_response): self.tries += 1 if self.tries < self.request_tries: start_response( '500 Internal Server Error', [('Content-Length', '0')]) else: start_response('200 Ok', [('Content-Length', '0')]) return [] def sleep(self, seconds): self.sleep_called += 1 self.test.assertEqual(2 ** (self.sleep_called), seconds) client = InternalClient(self) old_sleep = internal_client.sleep internal_client.sleep = client.sleep try: client.make_request('GET', '/', {}, (200,)) finally: internal_client.sleep = old_sleep self.assertEqual(3, client.sleep_called) self.assertEqual(4, client.tries) def test_base_request_timeout(self): # verify that base_request passes timeout arg on to urlopen body = {"some": "content"} for timeout in (0.0, 42.0, None): mocked_func = 'swift.common.internal_client.urllib2.urlopen' with mock.patch(mocked_func) as mock_urlopen: mock_urlopen.side_effect = [FakeConn(body)] sc = internal_client.SimpleClient('http://0.0.0.0/') _, resp_body = sc.base_request('GET', timeout=timeout) mock_urlopen.assert_called_once_with(mock.ANY, timeout=timeout) # sanity check self.assertEqual(body, resp_body) def test_base_full_listing(self): body1 = [{'name': 'a'}, {'name': "b"}, {'name': "c"}] body2 = [{'name': 'd'}] body3 = [] mocked_func = 'swift.common.internal_client.urllib2.urlopen' with mock.patch(mocked_func) as mock_urlopen: mock_urlopen.side_effect = [ FakeConn(body1), FakeConn(body2), FakeConn(body3)] sc = internal_client.SimpleClient('http://0.0.0.0/') _, resp_body = sc.base_request('GET', full_listing=True) self.assertEqual(body1 + body2, resp_body) self.assertEqual(3, mock_urlopen.call_count) actual_requests = map( lambda call: call[0][0], mock_urlopen.call_args_list) self.assertEqual('/?format=json', actual_requests[0].get_selector()) self.assertEqual( '/?format=json&marker=c', actual_requests[1].get_selector()) self.assertEqual( '/?format=json&marker=d', actual_requests[2].get_selector()) def test_make_request_method_path_headers(self): class InternalClient(internal_client.InternalClient): def __init__(self): self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 3 self.env = None def fake_app(self, env, 
start_response): self.env = env start_response('200 Ok', [('Content-Length', '0')]) return [] client = InternalClient() for method in 'GET PUT HEAD'.split(): client.make_request(method, '/', {}, (200,)) self.assertEqual(client.env['REQUEST_METHOD'], method) for path in '/one /two/three'.split(): client.make_request('GET', path, {'X-Test': path}, (200,)) self.assertEqual(client.env['PATH_INFO'], path) self.assertEqual(client.env['HTTP_X_TEST'], path) def test_make_request_codes(self): class InternalClient(internal_client.InternalClient): def __init__(self): self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 3 def fake_app(self, env, start_response): start_response('200 Ok', [('Content-Length', '0')]) return [] client = InternalClient() try: old_sleep = internal_client.sleep internal_client.sleep = not_sleep client.make_request('GET', '/', {}, (200,)) client.make_request('GET', '/', {}, (2,)) client.make_request('GET', '/', {}, (400, 200)) client.make_request('GET', '/', {}, (400, 2)) try: client.make_request('GET', '/', {}, (400,)) except Exception as err: pass self.assertEqual(200, err.resp.status_int) try: client.make_request('GET', '/', {}, (201,)) except Exception as err: pass self.assertEqual(200, err.resp.status_int) try: client.make_request('GET', '/', {}, (111,)) except Exception as err: self.assertTrue(str(err).startswith('Unexpected response')) else: self.fail("Expected the UnexpectedResponse") finally: internal_client.sleep = old_sleep def test_make_request_calls_fobj_seek_each_try(self): class FileObject(object): def __init__(self, test): self.test = test self.seek_called = 0 def seek(self, offset, whence=0): self.seek_called += 1 self.test.assertEqual(0, offset) self.test.assertEqual(0, whence) class InternalClient(internal_client.InternalClient): def __init__(self): self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 3 def fake_app(self, env, start_response): start_response('404 Not Found', [('Content-Length', '0')]) return [] fobj = FileObject(self) client = InternalClient() try: old_sleep = internal_client.sleep internal_client.sleep = not_sleep try: client.make_request('PUT', '/', {}, (2,), fobj) except Exception as err: pass self.assertEqual(404, err.resp.status_int) finally: internal_client.sleep = old_sleep self.assertEqual(client.request_tries, fobj.seek_called) def test_make_request_request_exception(self): class InternalClient(internal_client.InternalClient): def __init__(self): self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 3 def fake_app(self, env, start_response): raise Exception() client = InternalClient() try: old_sleep = internal_client.sleep internal_client.sleep = not_sleep self.assertRaises( Exception, client.make_request, 'GET', '/', {}, (2,)) finally: internal_client.sleep = old_sleep def test_get_metadata(self): class Response(object): def __init__(self, headers): self.headers = headers self.status_int = 200 class InternalClient(internal_client.InternalClient): def __init__(self, test, path, resp_headers): self.test = test self.path = path self.resp_headers = resp_headers self.make_request_called = 0 def make_request( self, method, path, headers, acceptable_statuses, body_file=None): self.make_request_called += 1 self.test.assertEqual('HEAD', method) self.test.assertEqual(self.path, path) self.test.assertEqual((2,), acceptable_statuses) self.test.assertIsNone(body_file) return Response(self.resp_headers) path = 'some_path' metadata_prefix = 'some_key-' resp_headers = { '%sone' 
% (metadata_prefix): '1', '%sTwo' % (metadata_prefix): '2', '%sThree' % (metadata_prefix): '3', 'some_header-four': '4', 'Some_header-five': '5', } exp_metadata = { 'one': '1', 'two': '2', 'three': '3', } client = InternalClient(self, path, resp_headers) metadata = client._get_metadata(path, metadata_prefix) self.assertEqual(exp_metadata, metadata) self.assertEqual(1, client.make_request_called) def test_get_metadata_invalid_status(self): class FakeApp(object): def __call__(self, environ, start_response): start_response('404 Not Found', [('x-foo', 'bar')]) return ['nope'] class InternalClient(internal_client.InternalClient): def __init__(self): self.user_agent = 'test' self.request_tries = 1 self.app = FakeApp() client = InternalClient() self.assertRaises(internal_client.UnexpectedResponse, client._get_metadata, 'path') metadata = client._get_metadata('path', metadata_prefix='x-', acceptable_statuses=(4,)) self.assertEqual(metadata, {'foo': 'bar'}) def test_make_path(self): account, container, obj = path_parts() path = make_path(account, container, obj) c = InternalClient() self.assertEqual(path, c.make_path(account, container, obj)) def test_make_path_exception(self): c = InternalClient() self.assertRaises(ValueError, c.make_path, 'account', None, 'obj') def test_iter_items(self): class Response(object): def __init__(self, status_int, body): self.status_int = status_int self.body = body class InternalClient(internal_client.InternalClient): def __init__(self, test, responses): self.test = test self.responses = responses self.make_request_called = 0 def make_request( self, method, path, headers, acceptable_statuses, body_file=None): self.make_request_called += 1 return self.responses.pop(0) exp_items = [] responses = [Response(200, json.dumps([])), ] items = [] client = InternalClient(self, responses) for item in client._iter_items('/'): items.append(item) self.assertEqual(exp_items, items) exp_items = [] responses = [] for i in range(3): data = [ {'name': 'item%02d' % (2 * i)}, {'name': 'item%02d' % (2 * i + 1)}] responses.append(Response(200, json.dumps(data))) exp_items.extend(data) responses.append(Response(204, '')) items = [] client = InternalClient(self, responses) for item in client._iter_items('/'): items.append(item) self.assertEqual(exp_items, items) def test_iter_items_with_markers(self): class Response(object): def __init__(self, status_int, body): self.status_int = status_int self.body = body class InternalClient(internal_client.InternalClient): def __init__(self, test, paths, responses): self.test = test self.paths = paths self.responses = responses def make_request( self, method, path, headers, acceptable_statuses, body_file=None): exp_path = self.paths.pop(0) self.test.assertEqual(exp_path, path) return self.responses.pop(0) paths = [ '/?format=json&marker=start&end_marker=end', '/?format=json&marker=one%C3%A9&end_marker=end', '/?format=json&marker=two&end_marker=end', ] responses = [ Response(200, json.dumps([{'name': 'one\xc3\xa9'}, ])), Response(200, json.dumps([{'name': 'two'}, ])), Response(204, ''), ] items = [] client = InternalClient(self, paths, responses) for item in client._iter_items('/', marker='start', end_marker='end'): items.append(item['name'].encode('utf8')) self.assertEqual('one\xc3\xa9 two'.split(), items) def test_iter_item_read_response_if_status_is_acceptable(self): class Response(object): def __init__(self, status_int, body, app_iter): self.status_int = status_int self.body = body self.app_iter = app_iter class 
InternalClient(internal_client.InternalClient): def __init__(self, test, responses): self.test = test self.responses = responses def make_request( self, method, path, headers, acceptable_statuses, body_file=None): resp = self.responses.pop(0) if resp.status_int in acceptable_statuses or \ resp.status_int // 100 in acceptable_statuses: return resp if resp: raise internal_client.UnexpectedResponse( 'Unexpected response: %s' % resp.status_int, resp) num_list = [] def generate_resp_body(): for i in range(1, 5): yield str(i) num_list.append(i) exp_items = [] responses = [Response(204, json.dumps([]), generate_resp_body())] items = [] client = InternalClient(self, responses) for item in client._iter_items('/'): items.append(item) self.assertEqual(exp_items, items) self.assertEqual(len(num_list), 0) responses = [Response(300, json.dumps([]), generate_resp_body())] client = InternalClient(self, responses) self.assertRaises(internal_client.UnexpectedResponse, next, client._iter_items('/')) exp_items = [] responses = [Response(404, json.dumps([]), generate_resp_body())] items = [] client = InternalClient(self, responses) for item in client._iter_items('/'): items.append(item) self.assertEqual(exp_items, items) self.assertEqual(len(num_list), 4) def test_set_metadata(self): class InternalClient(internal_client.InternalClient): def __init__(self, test, path, exp_headers): self.test = test self.path = path self.exp_headers = exp_headers self.make_request_called = 0 def make_request( self, method, path, headers, acceptable_statuses, body_file=None): self.make_request_called += 1 self.test.assertEqual('POST', method) self.test.assertEqual(self.path, path) self.test.assertEqual(self.exp_headers, headers) self.test.assertEqual((2,), acceptable_statuses) self.test.assertIsNone(body_file) path = 'some_path' metadata_prefix = 'some_key-' metadata = { '%sone' % (metadata_prefix): '1', '%stwo' % (metadata_prefix): '2', 'three': '3', } exp_headers = { '%sone' % (metadata_prefix): '1', '%stwo' % (metadata_prefix): '2', '%sthree' % (metadata_prefix): '3', } client = InternalClient(self, path, exp_headers) client._set_metadata(path, metadata, metadata_prefix) self.assertEqual(1, client.make_request_called) def test_iter_containers(self): account, container, obj = path_parts() path = make_path(account) items = '0 1 2'.split() marker = 'some_marker' end_marker = 'some_end_marker' acceptable_statuses = 'some_status_list' client = IterInternalClient( self, path, marker, end_marker, acceptable_statuses, items) ret_items = [] for container in client.iter_containers( account, marker, end_marker, acceptable_statuses=acceptable_statuses): ret_items.append(container) self.assertEqual(items, ret_items) def test_get_account_info(self): class Response(object): def __init__(self, containers, objects): self.headers = { 'x-account-container-count': containers, 'x-account-object-count': objects, } self.status_int = 200 class InternalClient(internal_client.InternalClient): def __init__(self, test, path, resp): self.test = test self.path = path self.resp = resp def make_request( self, method, path, headers, acceptable_statuses, body_file=None): self.test.assertEqual('HEAD', method) self.test.assertEqual(self.path, path) self.test.assertEqual({}, headers) self.test.assertEqual((2, 404), acceptable_statuses) self.test.assertIsNone(body_file) return self.resp account, container, obj = path_parts() path = make_path(account) containers, objects = 10, 100 client = InternalClient(self, path, Response(containers, objects)) info = 
client.get_account_info(account) self.assertEqual((containers, objects), info) def test_get_account_info_404(self): class Response(object): def __init__(self): self.headers = { 'x-account-container-count': 10, 'x-account-object-count': 100, } self.status_int = 404 class InternalClient(internal_client.InternalClient): def __init__(self): pass def make_path(self, *a, **kw): return 'some_path' def make_request(self, *a, **kw): return Response() client = InternalClient() info = client.get_account_info('some_account') self.assertEqual((0, 0), info) def test_get_account_metadata(self): account, container, obj = path_parts() path = make_path(account) acceptable_statuses = 'some_status_list' metadata_prefix = 'some_metadata_prefix' client = GetMetadataInternalClient( self, path, metadata_prefix, acceptable_statuses) metadata = client.get_account_metadata( account, metadata_prefix, acceptable_statuses) self.assertEqual(client.metadata, metadata) self.assertEqual(1, client.get_metadata_called) def test_get_metadadata_with_acceptable_status(self): account, container, obj = path_parts() path = make_path_info(account) client, app = get_client_app() resp_headers = {'some-important-header': 'some value'} app.register('GET', path, swob.HTTPOk, resp_headers) metadata = client.get_account_metadata( account, acceptable_statuses=(2, 4)) self.assertEqual(metadata['some-important-header'], 'some value') app.register('GET', path, swob.HTTPNotFound, resp_headers) metadata = client.get_account_metadata( account, acceptable_statuses=(2, 4)) self.assertEqual(metadata['some-important-header'], 'some value') app.register('GET', path, swob.HTTPServerError, resp_headers) self.assertRaises(internal_client.UnexpectedResponse, client.get_account_metadata, account, acceptable_statuses=(2, 4)) def test_set_account_metadata(self): account, container, obj = path_parts() path = make_path(account) metadata = 'some_metadata' metadata_prefix = 'some_metadata_prefix' acceptable_statuses = 'some_status_list' client = SetMetadataInternalClient( self, path, metadata, metadata_prefix, acceptable_statuses) client.set_account_metadata( account, metadata, metadata_prefix, acceptable_statuses) self.assertEqual(1, client.set_metadata_called) def test_container_exists(self): class Response(object): def __init__(self, status_int): self.status_int = status_int class InternalClient(internal_client.InternalClient): def __init__(self, test, path, resp): self.test = test self.path = path self.make_request_called = 0 self.resp = resp def make_request( self, method, path, headers, acceptable_statuses, body_file=None): self.make_request_called += 1 self.test.assertEqual('HEAD', method) self.test.assertEqual(self.path, path) self.test.assertEqual({}, headers) self.test.assertEqual((2, 404), acceptable_statuses) self.test.assertIsNone(body_file) return self.resp account, container, obj = path_parts() path = make_path(account, container) client = InternalClient(self, path, Response(200)) self.assertEqual(True, client.container_exists(account, container)) self.assertEqual(1, client.make_request_called) client = InternalClient(self, path, Response(404)) self.assertEqual(False, client.container_exists(account, container)) self.assertEqual(1, client.make_request_called) def test_create_container(self): class InternalClient(internal_client.InternalClient): def __init__(self, test, path, headers): self.test = test self.path = path self.headers = headers self.make_request_called = 0 def make_request( self, method, path, headers, acceptable_statuses, 
body_file=None): self.make_request_called += 1 self.test.assertEqual('PUT', method) self.test.assertEqual(self.path, path) self.test.assertEqual(self.headers, headers) self.test.assertEqual((2,), acceptable_statuses) self.test.assertIsNone(body_file) account, container, obj = path_parts() path = make_path(account, container) headers = 'some_headers' client = InternalClient(self, path, headers) client.create_container(account, container, headers) self.assertEqual(1, client.make_request_called) def test_delete_container(self): class InternalClient(internal_client.InternalClient): def __init__(self, test, path): self.test = test self.path = path self.make_request_called = 0 def make_request( self, method, path, headers, acceptable_statuses, body_file=None): self.make_request_called += 1 self.test.assertEqual('DELETE', method) self.test.assertEqual(self.path, path) self.test.assertEqual({}, headers) self.test.assertEqual((2, 404), acceptable_statuses) self.test.assertIsNone(body_file) account, container, obj = path_parts() path = make_path(account, container) client = InternalClient(self, path) client.delete_container(account, container) self.assertEqual(1, client.make_request_called) def test_get_container_metadata(self): account, container, obj = path_parts() path = make_path(account, container) metadata_prefix = 'some_metadata_prefix' acceptable_statuses = 'some_status_list' client = GetMetadataInternalClient( self, path, metadata_prefix, acceptable_statuses) metadata = client.get_container_metadata( account, container, metadata_prefix, acceptable_statuses) self.assertEqual(client.metadata, metadata) self.assertEqual(1, client.get_metadata_called) def test_iter_objects(self): account, container, obj = path_parts() path = make_path(account, container) marker = 'some_maker' end_marker = 'some_end_marker' acceptable_statuses = 'some_status_list' items = '0 1 2'.split() client = IterInternalClient( self, path, marker, end_marker, acceptable_statuses, items) ret_items = [] for obj in client.iter_objects( account, container, marker, end_marker, acceptable_statuses): ret_items.append(obj) self.assertEqual(items, ret_items) def test_set_container_metadata(self): account, container, obj = path_parts() path = make_path(account, container) metadata = 'some_metadata' metadata_prefix = 'some_metadata_prefix' acceptable_statuses = 'some_status_list' client = SetMetadataInternalClient( self, path, metadata, metadata_prefix, acceptable_statuses) client.set_container_metadata( account, container, metadata, metadata_prefix, acceptable_statuses) self.assertEqual(1, client.set_metadata_called) def test_delete_object(self): class InternalClient(internal_client.InternalClient): def __init__(self, test, path): self.test = test self.path = path self.make_request_called = 0 def make_request( self, method, path, headers, acceptable_statuses, body_file=None): self.make_request_called += 1 self.test.assertEqual('DELETE', method) self.test.assertEqual(self.path, path) self.test.assertEqual({}, headers) self.test.assertEqual((2, 404), acceptable_statuses) self.test.assertIsNone(body_file) account, container, obj = path_parts() path = make_path(account, container, obj) client = InternalClient(self, path) client.delete_object(account, container, obj) self.assertEqual(1, client.make_request_called) def test_get_object_metadata(self): account, container, obj = path_parts() path = make_path(account, container, obj) metadata_prefix = 'some_metadata_prefix' acceptable_statuses = 'some_status_list' client = 
GetMetadataInternalClient( self, path, metadata_prefix, acceptable_statuses) metadata = client.get_object_metadata( account, container, obj, metadata_prefix, acceptable_statuses) self.assertEqual(client.metadata, metadata) self.assertEqual(1, client.get_metadata_called) def test_get_metadata_extra_headers(self): class InternalClient(internal_client.InternalClient): def __init__(self): self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 3 def fake_app(self, env, start_response): self.req_env = env start_response('200 Ok', [('Content-Length', '0')]) return [] client = InternalClient() headers = {'X-Foo': 'bar'} client.get_object_metadata('account', 'container', 'obj', headers=headers) self.assertEqual(client.req_env['HTTP_X_FOO'], 'bar') def test_get_object(self): account, container, obj = path_parts() path_info = make_path_info(account, container, obj) client, app = get_client_app() headers = {'foo': 'bar'} body = 'some_object_body' app.register('GET', path_info, swob.HTTPOk, headers, body) req_headers = {'x-important-header': 'some_important_value'} status_int, resp_headers, obj_iter = client.get_object( account, container, obj, req_headers) self.assertEqual(status_int // 100, 2) for k, v in headers.items(): self.assertEqual(v, resp_headers[k]) self.assertEqual(''.join(obj_iter), body) self.assertEqual(resp_headers['content-length'], str(len(body))) self.assertEqual(app.call_count, 1) req_headers.update({ 'host': 'localhost:80', # from swob.Request.blank 'user-agent': 'test', # from InternalClient.make_request }) self.assertEqual(app.calls_with_headers, [( 'GET', path_info, HeaderKeyDict(req_headers))]) def test_iter_object_lines(self): class InternalClient(internal_client.InternalClient): def __init__(self, lines): self.lines = lines self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 3 def fake_app(self, env, start_response): start_response('200 Ok', [('Content-Length', '0')]) return ['%s\n' % x for x in self.lines] lines = 'line1 line2 line3'.split() client = InternalClient(lines) ret_lines = [] for line in client.iter_object_lines('account', 'container', 'object'): ret_lines.append(line) self.assertEqual(lines, ret_lines) def test_iter_object_lines_compressed_object(self): class InternalClient(internal_client.InternalClient): def __init__(self, lines): self.lines = lines self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 3 def fake_app(self, env, start_response): start_response('200 Ok', [('Content-Length', '0')]) return internal_client.CompressingFileReader( StringIO('\n'.join(self.lines))) lines = 'line1 line2 line3'.split() client = InternalClient(lines) ret_lines = [] for line in client.iter_object_lines( 'account', 'container', 'object.gz'): ret_lines.append(line) self.assertEqual(lines, ret_lines) def test_iter_object_lines_404(self): class InternalClient(internal_client.InternalClient): def __init__(self): self.app = self.fake_app self.user_agent = 'some_agent' self.request_tries = 3 def fake_app(self, env, start_response): start_response('404 Not Found', []) return ['one\ntwo\nthree'] client = InternalClient() lines = [] for line in client.iter_object_lines( 'some_account', 'some_container', 'some_object', acceptable_statuses=(2, 404)): lines.append(line) self.assertEqual([], lines) def test_set_object_metadata(self): account, container, obj = path_parts() path = make_path(account, container, obj) metadata = 'some_metadata' metadata_prefix = 'some_metadata_prefix' acceptable_statuses = 
'some_status_list' client = SetMetadataInternalClient( self, path, metadata, metadata_prefix, acceptable_statuses) client.set_object_metadata( account, container, obj, metadata, metadata_prefix, acceptable_statuses) self.assertEqual(1, client.set_metadata_called) def test_upload_object(self): class InternalClient(internal_client.InternalClient): def __init__(self, test, path, headers, fobj): self.test = test self.path = path self.headers = headers self.fobj = fobj self.make_request_called = 0 def make_request( self, method, path, headers, acceptable_statuses, body_file=None): self.make_request_called += 1 self.test.assertEqual(self.path, path) exp_headers = dict(self.headers) exp_headers['Transfer-Encoding'] = 'chunked' self.test.assertEqual(exp_headers, headers) self.test.assertEqual(self.fobj, fobj) fobj = 'some_fobj' account, container, obj = path_parts() path = make_path(account, container, obj) headers = {'key': 'value'} client = InternalClient(self, path, headers, fobj) client.upload_object(fobj, account, container, obj, headers) self.assertEqual(1, client.make_request_called) def test_upload_object_not_chunked(self): class InternalClient(internal_client.InternalClient): def __init__(self, test, path, headers, fobj): self.test = test self.path = path self.headers = headers self.fobj = fobj self.make_request_called = 0 def make_request( self, method, path, headers, acceptable_statuses, body_file=None): self.make_request_called += 1 self.test.assertEqual(self.path, path) exp_headers = dict(self.headers) self.test.assertEqual(exp_headers, headers) self.test.assertEqual(self.fobj, fobj) fobj = 'some_fobj' account, container, obj = path_parts() path = make_path(account, container, obj) headers = {'key': 'value', 'Content-Length': len(fobj)} client = InternalClient(self, path, headers, fobj) client.upload_object(fobj, account, container, obj, headers) self.assertEqual(1, client.make_request_called) class TestGetAuth(unittest.TestCase): @mock.patch('eventlet.green.urllib2.urlopen') @mock.patch('eventlet.green.urllib2.Request') def test_ok(self, request, urlopen): def getheader(name): d = {'X-Storage-Url': 'url', 'X-Auth-Token': 'token'} return d.get(name) urlopen.return_value.info.return_value.getheader = getheader url, token = internal_client.get_auth( 'http://127.0.0.1', 'user', 'key') self.assertEqual(url, "url") self.assertEqual(token, "token") request.assert_called_with('http://127.0.0.1') request.return_value.add_header.assert_any_call('X-Auth-User', 'user') request.return_value.add_header.assert_any_call('X-Auth-Key', 'key') def test_invalid_version(self): self.assertRaises(SystemExit, internal_client.get_auth, 'http://127.0.0.1', 'user', 'key', auth_version=2.0) class TestSimpleClient(unittest.TestCase): def _test_get_head(self, request, urlopen, method): mock_time_value = [1401224049.98] def mock_time(): # global mock_time_value mock_time_value[0] += 1 return mock_time_value[0] with mock.patch('swift.common.internal_client.time', mock_time): # basic request, only url as kwarg request.return_value.get_type.return_value = "http" urlopen.return_value.read.return_value = '' urlopen.return_value.getcode.return_value = 200 urlopen.return_value.info.return_value = {'content-length': '345'} sc = internal_client.SimpleClient(url='http://127.0.0.1') logger = FakeLogger() retval = sc.retry_request( method, headers={'content-length': '123'}, logger=logger) self.assertEqual(urlopen.call_count, 1) request.assert_called_with('http://127.0.0.1?format=json', headers={'content-length': '123'}, 
data=None) self.assertEqual([{'content-length': '345'}, None], retval) self.assertEqual(method, request.return_value.get_method()) self.assertEqual(logger.log_dict['debug'], [( ('-> 2014-05-27T20:54:11 ' + method + ' http://127.0.0.1%3Fformat%3Djson 200 ' '123 345 1401224050.98 1401224051.98 1.0 -',), {})]) # Check if JSON is decoded urlopen.return_value.read.return_value = '{}' retval = sc.retry_request(method) self.assertEqual([{'content-length': '345'}, {}], retval) # same as above, now with token sc = internal_client.SimpleClient(url='http://127.0.0.1', token='token') retval = sc.retry_request(method) request.assert_called_with('http://127.0.0.1?format=json', headers={'X-Auth-Token': 'token'}, data=None) self.assertEqual([{'content-length': '345'}, {}], retval) # same as above, now with prefix sc = internal_client.SimpleClient(url='http://127.0.0.1', token='token') retval = sc.retry_request(method, prefix="pre_") request.assert_called_with( 'http://127.0.0.1?format=json&prefix=pre_', headers={'X-Auth-Token': 'token'}, data=None) self.assertEqual([{'content-length': '345'}, {}], retval) # same as above, now with container name retval = sc.retry_request(method, container='cont') request.assert_called_with('http://127.0.0.1/cont?format=json', headers={'X-Auth-Token': 'token'}, data=None) self.assertEqual([{'content-length': '345'}, {}], retval) # same as above, now with object name retval = sc.retry_request(method, container='cont', name='obj') request.assert_called_with('http://127.0.0.1/cont/obj', headers={'X-Auth-Token': 'token'}, data=None) self.assertEqual([{'content-length': '345'}, {}], retval) @mock.patch('eventlet.green.urllib2.urlopen') @mock.patch('eventlet.green.urllib2.Request') def test_get(self, request, urlopen): self._test_get_head(request, urlopen, 'GET') @mock.patch('eventlet.green.urllib2.urlopen') @mock.patch('eventlet.green.urllib2.Request') def test_head(self, request, urlopen): self._test_get_head(request, urlopen, 'HEAD') @mock.patch('eventlet.green.urllib2.urlopen') @mock.patch('eventlet.green.urllib2.Request') def test_get_with_retries_all_failed(self, request, urlopen): # Simulate a failing request, ensure retries done request.return_value.get_type.return_value = "http" urlopen.side_effect = urllib2.URLError('') sc = internal_client.SimpleClient(url='http://127.0.0.1', retries=1) with mock.patch('swift.common.internal_client.sleep') as mock_sleep: self.assertRaises(urllib2.URLError, sc.retry_request, 'GET') self.assertEqual(mock_sleep.call_count, 1) self.assertEqual(request.call_count, 2) self.assertEqual(urlopen.call_count, 2) @mock.patch('eventlet.green.urllib2.urlopen') @mock.patch('eventlet.green.urllib2.Request') def test_get_with_retries(self, request, urlopen): # First request fails, retry successful request.return_value.get_type.return_value = "http" mock_resp = mock.MagicMock() mock_resp.read.return_value = '' mock_resp.info.return_value = {} urlopen.side_effect = [urllib2.URLError(''), mock_resp] sc = internal_client.SimpleClient(url='http://127.0.0.1', retries=1, token='token') with mock.patch('swift.common.internal_client.sleep') as mock_sleep: retval = sc.retry_request('GET') self.assertEqual(mock_sleep.call_count, 1) self.assertEqual(request.call_count, 2) self.assertEqual(urlopen.call_count, 2) request.assert_called_with('http://127.0.0.1?format=json', data=None, headers={'X-Auth-Token': 'token'}) self.assertEqual([{}, None], retval) self.assertEqual(sc.attempts, 2) @mock.patch('eventlet.green.urllib2.urlopen') def 
test_get_with_retries_param(self, mock_urlopen): mock_response = mock.MagicMock() mock_response.read.return_value = '' mock_response.info.return_value = {} mock_urlopen.side_effect = internal_client.httplib.BadStatusLine('') c = internal_client.SimpleClient(url='http://127.0.0.1', token='token') self.assertEqual(c.retries, 5) # first without retries param with mock.patch('swift.common.internal_client.sleep') as mock_sleep: self.assertRaises(internal_client.httplib.BadStatusLine, c.retry_request, 'GET') self.assertEqual(mock_sleep.call_count, 5) self.assertEqual(mock_urlopen.call_count, 6) # then with retries param mock_urlopen.reset_mock() with mock.patch('swift.common.internal_client.sleep') as mock_sleep: self.assertRaises(internal_client.httplib.BadStatusLine, c.retry_request, 'GET', retries=2) self.assertEqual(mock_sleep.call_count, 2) self.assertEqual(mock_urlopen.call_count, 3) # and this time with a real response mock_urlopen.reset_mock() mock_urlopen.side_effect = [internal_client.httplib.BadStatusLine(''), mock_response] with mock.patch('swift.common.internal_client.sleep') as mock_sleep: retval = c.retry_request('GET', retries=1) self.assertEqual(mock_sleep.call_count, 1) self.assertEqual(mock_urlopen.call_count, 2) self.assertEqual([{}, None], retval) @mock.patch('eventlet.green.urllib2.urlopen') def test_request_with_retries_with_HTTPError(self, mock_urlopen): mock_response = mock.MagicMock() mock_response.read.return_value = '' c = internal_client.SimpleClient(url='http://127.0.0.1', token='token') self.assertEqual(c.retries, 5) for request_method in 'GET PUT POST DELETE HEAD COPY'.split(): mock_urlopen.reset_mock() mock_urlopen.side_effect = urllib2.HTTPError(*[None] * 5) with mock.patch('swift.common.internal_client.sleep') \ as mock_sleep: self.assertRaises(exceptions.ClientException, c.retry_request, request_method, retries=1) self.assertEqual(mock_sleep.call_count, 1) self.assertEqual(mock_urlopen.call_count, 2) @mock.patch('eventlet.green.urllib2.urlopen') def test_request_container_with_retries_with_HTTPError(self, mock_urlopen): mock_response = mock.MagicMock() mock_response.read.return_value = '' c = internal_client.SimpleClient(url='http://127.0.0.1', token='token') self.assertEqual(c.retries, 5) for request_method in 'GET PUT POST DELETE HEAD COPY'.split(): mock_urlopen.reset_mock() mock_urlopen.side_effect = urllib2.HTTPError(*[None] * 5) with mock.patch('swift.common.internal_client.sleep') \ as mock_sleep: self.assertRaises(exceptions.ClientException, c.retry_request, request_method, container='con', retries=1) self.assertEqual(mock_sleep.call_count, 1) self.assertEqual(mock_urlopen.call_count, 2) @mock.patch('eventlet.green.urllib2.urlopen') def test_request_object_with_retries_with_HTTPError(self, mock_urlopen): mock_response = mock.MagicMock() mock_response.read.return_value = '' c = internal_client.SimpleClient(url='http://127.0.0.1', token='token') self.assertEqual(c.retries, 5) for request_method in 'GET PUT POST DELETE HEAD COPY'.split(): mock_urlopen.reset_mock() mock_urlopen.side_effect = urllib2.HTTPError(*[None] * 5) with mock.patch('swift.common.internal_client.sleep') \ as mock_sleep: self.assertRaises(exceptions.ClientException, c.retry_request, request_method, container='con', name='obj', retries=1) self.assertEqual(mock_sleep.call_count, 1) self.assertEqual(mock_urlopen.call_count, 2) def test_proxy(self): # check that proxy arg is passed through to the urllib Request scheme = 'http' proxy_host = '127.0.0.1:80' proxy = '%s://%s' % (scheme, 
proxy_host) url = 'https://127.0.0.1:1/a' mocked = 'swift.common.internal_client.urllib2.urlopen' # module level methods for func in (internal_client.put_object, internal_client.delete_object): with mock.patch(mocked) as mock_urlopen: mock_urlopen.return_value = FakeConn() func(url, container='c', name='o1', contents='', proxy=proxy, timeout=0.1, retries=0) self.assertEqual(1, mock_urlopen.call_count) args, kwargs = mock_urlopen.call_args self.assertEqual(1, len(args)) self.assertEqual(1, len(kwargs)) self.assertEqual(0.1, kwargs['timeout']) self.assertTrue(isinstance(args[0], urllib2.Request)) self.assertEqual(proxy_host, args[0].host) self.assertEqual(scheme, args[0].type) # class methods content = mock.MagicMock() cl = internal_client.SimpleClient(url) scenarios = ((cl.get_account, []), (cl.get_container, ['c']), (cl.put_container, ['c']), (cl.put_object, ['c', 'o', content])) for scenario in scenarios: with mock.patch(mocked) as mock_urlopen: mock_urlopen.return_value = FakeConn() scenario[0](*scenario[1], proxy=proxy, timeout=0.1) self.assertEqual(1, mock_urlopen.call_count) args, kwargs = mock_urlopen.call_args self.assertEqual(1, len(args)) self.assertEqual(1, len(kwargs)) self.assertEqual(0.1, kwargs['timeout']) self.assertTrue(isinstance(args[0], urllib2.Request)) self.assertEqual(proxy_host, args[0].host) self.assertEqual(scheme, args[0].type) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_storage_policy.py0000775000567000056710000013736613024044354023722 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
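# A minimal sketch (illustrative values only) of the swift.conf stanzas that
# parse_storage_policies() consumes and that the tests below exercise:
#
#     [storage-policy:0]
#     name = gold
#     aliases = yellow, orange
#     default = yes
#
#     [storage-policy:1]
#     name = ec10-4
#     policy_type = erasure_coding
#     ec_type = ...               # must be one of VALID_EC_TYPES
#     ec_num_data_fragments = 10
#     ec_num_parity_fragments = 4
#
# Valid stanzas become a StoragePolicyCollection; malformed ones (duplicate
# indexes or names, a deprecated default, bad EC fragment counts, ...) raise
# PolicyError, which is what most of the assertions below check for.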
""" Tests for swift.common.storage_policies """ import six import unittest import os import mock from functools import partial from six.moves.configparser import ConfigParser from tempfile import NamedTemporaryFile from test.unit import patch_policies, FakeRing, temptree, DEFAULT_TEST_EC_TYPE from swift.common.storage_policy import ( StoragePolicyCollection, POLICIES, PolicyError, parse_storage_policies, reload_storage_policies, get_policy_string, split_policy_string, BaseStoragePolicy, StoragePolicy, ECStoragePolicy, REPL_POLICY, EC_POLICY, VALID_EC_TYPES, DEFAULT_EC_OBJECT_SEGMENT_SIZE, BindPortsCache) from swift.common.ring import RingData from swift.common.exceptions import RingValidationError @BaseStoragePolicy.register('fake') class FakeStoragePolicy(BaseStoragePolicy): """ Test StoragePolicy class - the only user at the moment is test_validate_policies_type_invalid() """ def __init__(self, idx, name='', is_default=False, is_deprecated=False, object_ring=None): super(FakeStoragePolicy, self).__init__( idx, name, is_default, is_deprecated, object_ring) class TestStoragePolicies(unittest.TestCase): def _conf(self, conf_str): conf_str = "\n".join(line.strip() for line in conf_str.split("\n")) conf = ConfigParser() conf.readfp(six.StringIO(conf_str)) return conf def assertRaisesWithMessage(self, exc_class, message, f, *args, **kwargs): try: f(*args, **kwargs) except exc_class as err: err_msg = str(err) self.assertTrue(message in err_msg, 'Error message %r did not ' 'have expected substring %r' % (err_msg, message)) else: self.fail('%r did not raise %s' % (message, exc_class.__name__)) def test_policy_baseclass_instantiate(self): self.assertRaisesWithMessage(TypeError, "Can't instantiate BaseStoragePolicy", BaseStoragePolicy, 1, 'one') @patch_policies([ StoragePolicy(0, 'zero', is_default=True), StoragePolicy(1, 'one'), StoragePolicy(2, 'two'), StoragePolicy(3, 'three', is_deprecated=True), ECStoragePolicy(10, 'ten', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=4), ]) def test_swift_info(self): # the deprecated 'three' should not exist in expect expect = [{'aliases': 'zero', 'default': True, 'name': 'zero', }, {'aliases': 'two', 'name': 'two'}, {'aliases': 'one', 'name': 'one'}, {'aliases': 'ten', 'name': 'ten'}] swift_info = POLICIES.get_policy_info() self.assertEqual(sorted(expect, key=lambda k: k['name']), sorted(swift_info, key=lambda k: k['name'])) @patch_policies def test_get_policy_string(self): self.assertEqual(get_policy_string('something', 0), 'something') self.assertEqual(get_policy_string('something', None), 'something') self.assertEqual(get_policy_string('something', ''), 'something') self.assertEqual(get_policy_string('something', 1), 'something' + '-1') self.assertRaises(PolicyError, get_policy_string, 'something', 99) @patch_policies def test_split_policy_string(self): expectations = { 'something': ('something', POLICIES[0]), 'something-1': ('something', POLICIES[1]), 'tmp': ('tmp', POLICIES[0]), 'objects': ('objects', POLICIES[0]), 'tmp-1': ('tmp', POLICIES[1]), 'objects-1': ('objects', POLICIES[1]), 'objects-': PolicyError, 'objects-0': PolicyError, 'objects--1': ('objects-', POLICIES[1]), 'objects-+1': PolicyError, 'objects--': PolicyError, 'objects-foo': PolicyError, 'objects--bar': PolicyError, 'objects-+bar': PolicyError, # questionable, demonstrated as inverse of get_policy_string 'objects+0': ('objects+0', POLICIES[0]), '': ('', POLICIES[0]), '0': ('0', POLICIES[0]), '-1': ('', POLICIES[1]), } for policy_string, expected in expectations.items(): if 
expected == PolicyError: try: invalid = split_policy_string(policy_string) except PolicyError: continue # good else: self.fail('The string %r returned %r ' 'instead of raising a PolicyError' % (policy_string, invalid)) self.assertEqual(expected, split_policy_string(policy_string)) # should be inverse of get_policy_string self.assertEqual(policy_string, get_policy_string(*expected)) def test_defaults(self): self.assertTrue(len(POLICIES) > 0) # test class functions default_policy = POLICIES.default self.assertTrue(default_policy.is_default) zero_policy = POLICIES.get_by_index(0) self.assertTrue(zero_policy.idx == 0) zero_policy_by_name = POLICIES.get_by_name(zero_policy.name) self.assertTrue(zero_policy_by_name.idx == 0) def test_storage_policy_repr(self): test_policies = [StoragePolicy(0, 'aay', True), StoragePolicy(1, 'bee', False), StoragePolicy(2, 'cee', False), ECStoragePolicy(10, 'ten', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=3)] policies = StoragePolicyCollection(test_policies) for policy in policies: policy_repr = repr(policy) self.assertTrue(policy.__class__.__name__ in policy_repr) self.assertTrue('is_default=%s' % policy.is_default in policy_repr) self.assertTrue('is_deprecated=%s' % policy.is_deprecated in policy_repr) self.assertTrue(policy.name in policy_repr) if policy.policy_type == EC_POLICY: self.assertTrue('ec_type=%s' % policy.ec_type in policy_repr) self.assertTrue('ec_ndata=%s' % policy.ec_ndata in policy_repr) self.assertTrue('ec_nparity=%s' % policy.ec_nparity in policy_repr) self.assertTrue('ec_segment_size=%s' % policy.ec_segment_size in policy_repr) collection_repr = repr(policies) collection_repr_lines = collection_repr.splitlines() self.assertTrue( policies.__class__.__name__ in collection_repr_lines[0]) self.assertEqual(len(policies), len(collection_repr_lines[1:-1])) for policy, line in zip(policies, collection_repr_lines[1:-1]): self.assertTrue(repr(policy) in line) with patch_policies(policies): self.assertEqual(repr(POLICIES), collection_repr) def test_validate_policies_defaults(self): # 0 explicit default test_policies = [StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one', False), StoragePolicy(2, 'two', False)] policies = StoragePolicyCollection(test_policies) self.assertEqual(policies.default, test_policies[0]) self.assertEqual(policies.default.name, 'zero') # non-zero explicit default test_policies = [StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', False), StoragePolicy(2, 'two', True)] policies = StoragePolicyCollection(test_policies) self.assertEqual(policies.default, test_policies[2]) self.assertEqual(policies.default.name, 'two') # multiple defaults test_policies = [StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', True), StoragePolicy(2, 'two', True)] self.assertRaisesWithMessage( PolicyError, 'Duplicate default', StoragePolicyCollection, test_policies) # nothing specified test_policies = [] policies = StoragePolicyCollection(test_policies) self.assertEqual(policies.default, policies[0]) self.assertEqual(policies.default.name, 'Policy-0') # no default specified with only policy index 0 test_policies = [StoragePolicy(0, 'zero')] policies = StoragePolicyCollection(test_policies) self.assertEqual(policies.default, policies[0]) # no default specified with multiple policies test_policies = [StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', False), StoragePolicy(2, 'two', False)] self.assertRaisesWithMessage( PolicyError, 'Unable to find default policy', StoragePolicyCollection, test_policies) def 
test_deprecate_policies(self): # deprecation specified test_policies = [StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one', False), StoragePolicy(2, 'two', False, is_deprecated=True)] policies = StoragePolicyCollection(test_policies) self.assertEqual(policies.default, test_policies[0]) self.assertEqual(policies.default.name, 'zero') self.assertEqual(len(policies), 3) # multiple policies requires default test_policies = [StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', False, is_deprecated=True), StoragePolicy(2, 'two', False)] self.assertRaisesWithMessage( PolicyError, 'Unable to find default policy', StoragePolicyCollection, test_policies) def test_validate_policies_indexes(self): # duplicate indexes test_policies = [StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one', False), StoragePolicy(1, 'two', False)] self.assertRaises(PolicyError, StoragePolicyCollection, test_policies) def test_validate_policy_params(self): StoragePolicy(0, 'name') # sanity # bogus indexes self.assertRaises(PolicyError, FakeStoragePolicy, 'x', 'name') self.assertRaises(PolicyError, FakeStoragePolicy, -1, 'name') # non-zero Policy-0 self.assertRaisesWithMessage(PolicyError, 'reserved', FakeStoragePolicy, 1, 'policy-0') # deprecate default self.assertRaisesWithMessage( PolicyError, 'Deprecated policy can not be default', FakeStoragePolicy, 1, 'Policy-1', is_default=True, is_deprecated=True) # weird names names = ( '', 'name_foo', 'name\nfoo', 'name foo', u'name \u062a', 'name \xd8\xaa', ) for name in names: self.assertRaisesWithMessage(PolicyError, 'Invalid name', FakeStoragePolicy, 1, name) def test_validate_policies_names(self): # duplicate names test_policies = [StoragePolicy(0, 'zero', True), StoragePolicy(1, 'zero', False), StoragePolicy(2, 'two', False)] self.assertRaises(PolicyError, StoragePolicyCollection, test_policies) def test_validate_policies_type_default(self): # no type specified - make sure the policy is initialized to # DEFAULT_POLICY_TYPE test_policy = FakeStoragePolicy(0, 'zero', True) self.assertEqual(test_policy.policy_type, 'fake') def test_validate_policies_type_invalid(self): class BogusStoragePolicy(FakeStoragePolicy): policy_type = 'bogus' # unsupported policy type - initialization with FakeStoragePolicy self.assertRaisesWithMessage(PolicyError, 'Invalid type', BogusStoragePolicy, 1, 'one') def test_policies_type_attribute(self): test_policies = [ StoragePolicy(0, 'zero', is_default=True), StoragePolicy(1, 'one'), StoragePolicy(2, 'two'), StoragePolicy(3, 'three', is_deprecated=True), ECStoragePolicy(10, 'ten', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=3), ] policies = StoragePolicyCollection(test_policies) self.assertEqual(policies.get_by_index(0).policy_type, REPL_POLICY) self.assertEqual(policies.get_by_index(1).policy_type, REPL_POLICY) self.assertEqual(policies.get_by_index(2).policy_type, REPL_POLICY) self.assertEqual(policies.get_by_index(3).policy_type, REPL_POLICY) self.assertEqual(policies.get_by_index(10).policy_type, EC_POLICY) def test_names_are_normalized(self): test_policies = [StoragePolicy(0, 'zero', True), StoragePolicy(1, 'ZERO', False)] self.assertRaises(PolicyError, StoragePolicyCollection, test_policies) policies = StoragePolicyCollection([StoragePolicy(0, 'zEro', True), StoragePolicy(1, 'One', False)]) pol0 = policies[0] pol1 = policies[1] for name in ('zero', 'ZERO', 'zErO', 'ZeRo'): self.assertEqual(pol0, policies.get_by_name(name)) self.assertEqual(policies.get_by_name(name).name, 'zEro') for name in ('one', 'ONE', 'oNe', 'OnE'): 
self.assertEqual(pol1, policies.get_by_name(name)) self.assertEqual(policies.get_by_name(name).name, 'One') def test_multiple_names(self): # checking duplicate on insert test_policies = [StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one', False, aliases='zero')] self.assertRaises(PolicyError, StoragePolicyCollection, test_policies) # checking correct retrival using other names test_policies = [StoragePolicy(0, 'zero', True, aliases='cero, kore'), StoragePolicy(1, 'one', False, aliases='uno, tahi'), StoragePolicy(2, 'two', False, aliases='dos, rua')] policies = StoragePolicyCollection(test_policies) for name in ('zero', 'cero', 'kore'): self.assertEqual(policies.get_by_name(name), test_policies[0]) for name in ('two', 'dos', 'rua'): self.assertEqual(policies.get_by_name(name), test_policies[2]) # Testing parsing of conf files/text good_conf = self._conf(""" [storage-policy:0] name = one aliases = uno, tahi default = yes """) policies = parse_storage_policies(good_conf) self.assertEqual(policies.get_by_name('one'), policies[0]) self.assertEqual(policies.get_by_name('one'), policies.get_by_name('tahi')) name_repeat_conf = self._conf(""" [storage-policy:0] name = one aliases = one default = yes """) # Test on line below should not generate errors. Repeat of main # name under aliases is permitted during construction # but only because automated testing requires it. policies = parse_storage_policies(name_repeat_conf) bad_conf = self._conf(""" [storage-policy:0] name = one aliases = uno, uno default = yes """) self.assertRaisesWithMessage(PolicyError, 'is already assigned to this policy', parse_storage_policies, bad_conf) def test_multiple_names_EC(self): # checking duplicate names on insert test_policies_ec = [ ECStoragePolicy( 0, 'ec8-2', aliases='zeus, jupiter', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=8, ec_nparity=2, object_ring=FakeRing(replicas=8), is_default=True), ECStoragePolicy( 1, 'ec10-4', aliases='ec8-2', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=4, object_ring=FakeRing(replicas=10))] self.assertRaises(PolicyError, StoragePolicyCollection, test_policies_ec) # checking correct retrival using other names good_test_policies_EC = [ ECStoragePolicy(0, 'ec8-2', aliases='zeus, jupiter', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=8, ec_nparity=2, object_ring=FakeRing(replicas=8), is_default=True), ECStoragePolicy(1, 'ec10-4', aliases='athena, minerva', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=4, object_ring=FakeRing(replicas=10)), ECStoragePolicy(2, 'ec4-2', aliases='poseidon, neptune', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=4, ec_nparity=2, object_ring=FakeRing(replicas=7)), ] ec_policies = StoragePolicyCollection(good_test_policies_EC) for name in ('ec8-2', 'zeus', 'jupiter'): self.assertEqual(ec_policies.get_by_name(name), ec_policies[0]) for name in ('ec10-4', 'athena', 'minerva'): self.assertEqual(ec_policies.get_by_name(name), ec_policies[1]) # Testing parsing of conf files/text good_ec_conf = self._conf(""" [storage-policy:0] name = ec8-2 aliases = zeus, jupiter policy_type = erasure_coding ec_type = %(ec_type)s default = yes ec_num_data_fragments = 8 ec_num_parity_fragments = 2 [storage-policy:1] name = ec10-4 aliases = poseidon, neptune policy_type = erasure_coding ec_type = %(ec_type)s ec_num_data_fragments = 10 ec_num_parity_fragments = 4 """ % {'ec_type': DEFAULT_TEST_EC_TYPE}) ec_policies = parse_storage_policies(good_ec_conf) self.assertEqual(ec_policies.get_by_name('ec8-2'), ec_policies[0]) self.assertEqual(ec_policies.get_by_name('ec10-4'), 
ec_policies.get_by_name('poseidon')) name_repeat_ec_conf = self._conf(""" [storage-policy:0] name = ec8-2 aliases = ec8-2 policy_type = erasure_coding ec_type = %(ec_type)s default = yes ec_num_data_fragments = 8 ec_num_parity_fragments = 2 """ % {'ec_type': DEFAULT_TEST_EC_TYPE}) # Test on line below should not generate errors. Repeat of main # name under aliases is permitted during construction # but only because automated testing requires it. ec_policies = parse_storage_policies(name_repeat_ec_conf) bad_ec_conf = self._conf(""" [storage-policy:0] name = ec8-2 aliases = zeus, zeus policy_type = erasure_coding ec_type = %(ec_type)s default = yes ec_num_data_fragments = 8 ec_num_parity_fragments = 2 """ % {'ec_type': DEFAULT_TEST_EC_TYPE}) self.assertRaisesWithMessage(PolicyError, 'is already assigned to this policy', parse_storage_policies, bad_ec_conf) def test_add_remove_names(self): test_policies = [StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one', False), StoragePolicy(2, 'two', False)] policies = StoragePolicyCollection(test_policies) # add names policies.add_policy_alias(1, 'tahi') self.assertEqual(policies.get_by_name('tahi'), test_policies[1]) policies.add_policy_alias(2, 'rua', 'dos') self.assertEqual(policies.get_by_name('rua'), test_policies[2]) self.assertEqual(policies.get_by_name('dos'), test_policies[2]) self.assertRaisesWithMessage(PolicyError, 'Invalid name', policies.add_policy_alias, 2, 'double\n') # try to add existing name self.assertRaisesWithMessage(PolicyError, 'Duplicate name', policies.add_policy_alias, 2, 'two') self.assertRaisesWithMessage(PolicyError, 'Duplicate name', policies.add_policy_alias, 1, 'two') # remove name policies.remove_policy_alias('tahi') self.assertEqual(policies.get_by_name('tahi'), None) # remove only name self.assertRaisesWithMessage(PolicyError, 'Policies must have at least one name.', policies.remove_policy_alias, 'zero') # remove non-existent name self.assertRaisesWithMessage(PolicyError, 'No policy with name', policies.remove_policy_alias, 'three') # remove default name policies.remove_policy_alias('two') self.assertEqual(policies.get_by_name('two'), None) self.assertEqual(policies.get_by_index(2).name, 'rua') # change default name to a new name policies.change_policy_primary_name(2, 'two') self.assertEqual(policies.get_by_name('two'), test_policies[2]) self.assertEqual(policies.get_by_index(2).name, 'two') # change default name to an existing alias policies.change_policy_primary_name(2, 'dos') self.assertEqual(policies.get_by_index(2).name, 'dos') # change default name to a bad new name self.assertRaisesWithMessage(PolicyError, 'Invalid name', policies.change_policy_primary_name, 2, 'bad\nname') # change default name to a name belonging to another policy self.assertRaisesWithMessage(PolicyError, 'Other policy', policies.change_policy_primary_name, 1, 'dos') def test_deprecated_default(self): bad_conf = self._conf(""" [storage-policy:1] name = one deprecated = yes default = yes """) self.assertRaisesWithMessage( PolicyError, "Deprecated policy can not be default", parse_storage_policies, bad_conf) def test_multiple_policies_with_no_policy_index_zero(self): bad_conf = self._conf(""" [storage-policy:1] name = one default = yes """) # Policy-0 will not be implicitly added if other policies are defined self.assertRaisesWithMessage( PolicyError, "must specify a storage policy section " "for policy index 0", parse_storage_policies, bad_conf) def test_no_default(self): orig_conf = self._conf(""" [storage-policy:0] name = zero 
[storage-policy:1] name = one default = yes """) policies = parse_storage_policies(orig_conf) self.assertEqual(policies.default, policies[1]) self.assertTrue(policies[0].name, 'Policy-0') bad_conf = self._conf(""" [storage-policy:0] name = zero [storage-policy:1] name = one deprecated = yes """) # multiple polices and no explicit default self.assertRaisesWithMessage( PolicyError, "Unable to find default", parse_storage_policies, bad_conf) good_conf = self._conf(""" [storage-policy:0] name = Policy-0 default = yes [storage-policy:1] name = one deprecated = yes """) policies = parse_storage_policies(good_conf) self.assertEqual(policies.default, policies[0]) self.assertTrue(policies[1].is_deprecated, True) def test_parse_storage_policies(self): # ValueError when deprecating policy 0 bad_conf = self._conf(""" [storage-policy:0] name = zero deprecated = yes [storage-policy:1] name = one deprecated = yes """) self.assertRaisesWithMessage( PolicyError, "Unable to find policy that's not deprecated", parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:] name = zero """) self.assertRaisesWithMessage(PolicyError, 'Invalid index', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:-1] name = zero """) self.assertRaisesWithMessage(PolicyError, 'Invalid index', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:x] name = zero """) self.assertRaisesWithMessage(PolicyError, 'Invalid index', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:x-1] name = zero """) self.assertRaisesWithMessage(PolicyError, 'Invalid index', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:x] name = zero """) self.assertRaisesWithMessage(PolicyError, 'Invalid index', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:x:1] name = zero """) self.assertRaisesWithMessage(PolicyError, 'Invalid index', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:1] name = zero boo = berries """) self.assertRaisesWithMessage(PolicyError, 'Invalid option', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:0] name = """) self.assertRaisesWithMessage(PolicyError, 'Invalid name', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:3] name = Policy-0 """) self.assertRaisesWithMessage(PolicyError, 'Invalid name', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:1] name = policY-0 """) self.assertRaisesWithMessage(PolicyError, 'Invalid name', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:0] name = one [storage-policy:1] name = ONE """) self.assertRaisesWithMessage(PolicyError, 'Duplicate name', parse_storage_policies, bad_conf) bad_conf = self._conf(""" [storage-policy:0] name = good_stuff """) self.assertRaisesWithMessage(PolicyError, 'Invalid name', parse_storage_policies, bad_conf) # policy_type = erasure_coding # missing ec_type, ec_num_data_fragments and ec_num_parity_fragments bad_conf = self._conf(""" [storage-policy:0] name = zero [storage-policy:1] name = ec10-4 policy_type = erasure_coding """) self.assertRaisesWithMessage(PolicyError, 'Missing ec_type', parse_storage_policies, bad_conf) # missing ec_type, but other options valid... 
bad_conf = self._conf(""" [storage-policy:0] name = zero [storage-policy:1] name = ec10-4 policy_type = erasure_coding ec_num_data_fragments = 10 ec_num_parity_fragments = 4 """) self.assertRaisesWithMessage(PolicyError, 'Missing ec_type', parse_storage_policies, bad_conf) # ec_type specified, but invalid... bad_conf = self._conf(""" [storage-policy:0] name = zero default = yes [storage-policy:1] name = ec10-4 policy_type = erasure_coding ec_type = garbage_alg ec_num_data_fragments = 10 ec_num_parity_fragments = 4 """) self.assertRaisesWithMessage(PolicyError, 'Wrong ec_type garbage_alg for policy ' 'ec10-4, should be one of "%s"' % (', '.join(VALID_EC_TYPES)), parse_storage_policies, bad_conf) # missing and invalid ec_num_parity_fragments bad_conf = self._conf(""" [storage-policy:0] name = zero [storage-policy:1] name = ec10-4 policy_type = erasure_coding ec_type = %(ec_type)s ec_num_data_fragments = 10 """ % {'ec_type': DEFAULT_TEST_EC_TYPE}) self.assertRaisesWithMessage(PolicyError, 'Invalid ec_num_parity_fragments', parse_storage_policies, bad_conf) for num_parity in ('-4', '0', 'x'): bad_conf = self._conf(""" [storage-policy:0] name = zero [storage-policy:1] name = ec10-4 policy_type = erasure_coding ec_type = %(ec_type)s ec_num_data_fragments = 10 ec_num_parity_fragments = %(num_parity)s """ % {'ec_type': DEFAULT_TEST_EC_TYPE, 'num_parity': num_parity}) self.assertRaisesWithMessage(PolicyError, 'Invalid ec_num_parity_fragments', parse_storage_policies, bad_conf) # missing and invalid ec_num_data_fragments bad_conf = self._conf(""" [storage-policy:0] name = zero [storage-policy:1] name = ec10-4 policy_type = erasure_coding ec_type = %(ec_type)s ec_num_parity_fragments = 4 """ % {'ec_type': DEFAULT_TEST_EC_TYPE}) self.assertRaisesWithMessage(PolicyError, 'Invalid ec_num_data_fragments', parse_storage_policies, bad_conf) for num_data in ('-10', '0', 'x'): bad_conf = self._conf(""" [storage-policy:0] name = zero [storage-policy:1] name = ec10-4 policy_type = erasure_coding ec_type = %(ec_type)s ec_num_data_fragments = %(num_data)s ec_num_parity_fragments = 4 """ % {'num_data': num_data, 'ec_type': DEFAULT_TEST_EC_TYPE}) self.assertRaisesWithMessage(PolicyError, 'Invalid ec_num_data_fragments', parse_storage_policies, bad_conf) # invalid ec_object_segment_size for segment_size in ('-4', '0', 'x'): bad_conf = self._conf(""" [storage-policy:0] name = zero [storage-policy:1] name = ec10-4 policy_type = erasure_coding ec_object_segment_size = %(segment_size)s ec_type = %(ec_type)s ec_num_data_fragments = 10 ec_num_parity_fragments = 4 """ % {'segment_size': segment_size, 'ec_type': DEFAULT_TEST_EC_TYPE}) self.assertRaisesWithMessage(PolicyError, 'Invalid ec_object_segment_size', parse_storage_policies, bad_conf) # Additional section added to ensure parser ignores other sections conf = self._conf(""" [some-other-section] foo = bar [storage-policy:0] name = zero [storage-policy:5] name = one default = yes [storage-policy:6] name = duplicate-sections-are-ignored [storage-policy:6] name = apple """) policies = parse_storage_policies(conf) self.assertEqual(True, policies.get_by_index(5).is_default) self.assertEqual(False, policies.get_by_index(0).is_default) self.assertEqual(False, policies.get_by_index(6).is_default) self.assertEqual("object", policies.get_by_name("zero").ring_name) self.assertEqual("object-5", policies.get_by_name("one").ring_name) self.assertEqual("object-6", policies.get_by_name("apple").ring_name) self.assertEqual(0, int(policies.get_by_name('zero'))) self.assertEqual(5, 
int(policies.get_by_name('one'))) self.assertEqual(6, int(policies.get_by_name('apple'))) self.assertEqual("zero", policies.get_by_index(0).name) self.assertEqual("zero", policies.get_by_index("0").name) self.assertEqual("one", policies.get_by_index(5).name) self.assertEqual("apple", policies.get_by_index(6).name) self.assertEqual("zero", policies.get_by_index(None).name) self.assertEqual("zero", policies.get_by_index('').name) self.assertEqual(policies.get_by_index(0), policies.legacy) def test_reload_invalid_storage_policies(self): conf = self._conf(""" [storage-policy:0] name = zero [storage-policy:00] name = double-zero """) with NamedTemporaryFile() as f: conf.write(f) f.flush() with mock.patch('swift.common.storage_policy.SWIFT_CONF_FILE', new=f.name): try: reload_storage_policies() except SystemExit as e: err_msg = str(e) else: self.fail('SystemExit not raised') parts = [ 'Invalid Storage Policy Configuration', 'Duplicate index', ] for expected in parts: self.assertTrue( expected in err_msg, '%s was not in %s' % (expected, err_msg)) def test_storage_policy_ordering(self): test_policies = StoragePolicyCollection([ StoragePolicy(0, 'zero', is_default=True), StoragePolicy(503, 'error'), StoragePolicy(204, 'empty'), StoragePolicy(404, 'missing'), ]) self.assertEqual([0, 204, 404, 503], [int(p) for p in sorted(list(test_policies))]) p503 = test_policies[503] self.assertTrue(501 < p503 < 507) def test_get_object_ring(self): test_policies = [StoragePolicy(0, 'aay', True), StoragePolicy(1, 'bee', False), StoragePolicy(2, 'cee', False)] policies = StoragePolicyCollection(test_policies) class NamedFakeRing(FakeRing): def __init__(self, swift_dir, ring_name=None): self.ring_name = ring_name super(NamedFakeRing, self).__init__() with mock.patch('swift.common.storage_policy.Ring', new=NamedFakeRing): for policy in policies: self.assertFalse(policy.object_ring) ring = policies.get_object_ring(int(policy), '/path/not/used') self.assertEqual(ring.ring_name, policy.ring_name) self.assertTrue(policy.object_ring) self.assertTrue(isinstance(policy.object_ring, NamedFakeRing)) def blow_up(*args, **kwargs): raise Exception('kaboom!') with mock.patch('swift.common.storage_policy.Ring', new=blow_up): for policy in policies: policy.load_ring('/path/not/used') expected = policies.get_object_ring(int(policy), '/path/not/used') self.assertEqual(policy.object_ring, expected) # bad policy index self.assertRaises(PolicyError, policies.get_object_ring, 99, '/path/not/used') def test_bind_ports_cache(self): test_policies = [StoragePolicy(0, 'aay', True), StoragePolicy(1, 'bee', False), StoragePolicy(2, 'cee', False)] my_ips = ['1.2.3.4', '2.3.4.5'] other_ips = ['3.4.5.6', '4.5.6.7'] bind_ip = my_ips[1] devs_by_ring_name1 = { 'object': [ # 'aay' {'id': 0, 'zone': 0, 'region': 1, 'ip': my_ips[0], 'port': 6006}, {'id': 0, 'zone': 0, 'region': 1, 'ip': other_ips[0], 'port': 6007}, {'id': 0, 'zone': 0, 'region': 1, 'ip': my_ips[1], 'port': 6008}, None, {'id': 0, 'zone': 0, 'region': 1, 'ip': other_ips[1], 'port': 6009}], 'object-1': [ # 'bee' {'id': 0, 'zone': 0, 'region': 1, 'ip': my_ips[1], 'port': 6006}, # dupe {'id': 0, 'zone': 0, 'region': 1, 'ip': other_ips[0], 'port': 6010}, {'id': 0, 'zone': 0, 'region': 1, 'ip': my_ips[1], 'port': 6011}, {'id': 0, 'zone': 0, 'region': 1, 'ip': other_ips[1], 'port': 6012}], 'object-2': [ # 'cee' {'id': 0, 'zone': 0, 'region': 1, 'ip': my_ips[0], 'port': 6010}, # on our IP and a not-us IP {'id': 0, 'zone': 0, 'region': 1, 'ip': other_ips[0], 'port': 6013}, None, {'id': 0, 'zone': 
0, 'region': 1, 'ip': my_ips[1], 'port': 6014}, {'id': 0, 'zone': 0, 'region': 1, 'ip': other_ips[1], 'port': 6015}], } devs_by_ring_name2 = { 'object': [ # 'aay' {'id': 0, 'zone': 0, 'region': 1, 'ip': my_ips[0], 'port': 6016}, {'id': 0, 'zone': 0, 'region': 1, 'ip': other_ips[1], 'port': 6019}], 'object-1': [ # 'bee' {'id': 0, 'zone': 0, 'region': 1, 'ip': my_ips[1], 'port': 6016}, # dupe {'id': 0, 'zone': 0, 'region': 1, 'ip': other_ips[1], 'port': 6022}], 'object-2': [ # 'cee' {'id': 0, 'zone': 0, 'region': 1, 'ip': my_ips[0], 'port': 6020}, {'id': 0, 'zone': 0, 'region': 1, 'ip': other_ips[1], 'port': 6025}], } ring_files = [ring_name + '.ring.gz' for ring_name in sorted(devs_by_ring_name1)] def _fake_load(gz_path, stub_objs, metadata_only=False): return RingData( devs=stub_objs[os.path.basename(gz_path)[:-8]], replica2part2dev_id=[], part_shift=24) with mock.patch( 'swift.common.storage_policy.RingData.load' ) as mock_ld, \ patch_policies(test_policies), \ mock.patch('swift.common.storage_policy.whataremyips') \ as mock_whataremyips, \ temptree(ring_files) as tempdir: mock_whataremyips.return_value = my_ips cache = BindPortsCache(tempdir, bind_ip) self.assertEqual([ mock.call(bind_ip), ], mock_whataremyips.mock_calls) mock_whataremyips.reset_mock() mock_ld.side_effect = partial(_fake_load, stub_objs=devs_by_ring_name1) self.assertEqual(set([ 6006, 6008, 6011, 6010, 6014, ]), cache.all_bind_ports_for_node()) self.assertEqual([ mock.call(os.path.join(tempdir, ring_files[0]), metadata_only=True), mock.call(os.path.join(tempdir, ring_files[1]), metadata_only=True), mock.call(os.path.join(tempdir, ring_files[2]), metadata_only=True), ], mock_ld.mock_calls) mock_ld.reset_mock() mock_ld.side_effect = partial(_fake_load, stub_objs=devs_by_ring_name2) self.assertEqual(set([ 6006, 6008, 6011, 6010, 6014, ]), cache.all_bind_ports_for_node()) self.assertEqual([], mock_ld.mock_calls) # but when all the file mtimes are made different, it'll # reload for gz_file in [os.path.join(tempdir, n) for n in ring_files]: os.utime(gz_file, (88, 88)) self.assertEqual(set([ 6016, 6020, ]), cache.all_bind_ports_for_node()) self.assertEqual([ mock.call(os.path.join(tempdir, ring_files[0]), metadata_only=True), mock.call(os.path.join(tempdir, ring_files[1]), metadata_only=True), mock.call(os.path.join(tempdir, ring_files[2]), metadata_only=True), ], mock_ld.mock_calls) mock_ld.reset_mock() # Don't do something stupid like crash if a ring file is missing. 
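            # (the ports cached from the last successful load should still be
            # returned, and no attempt should be made to reload the rings)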
os.unlink(os.path.join(tempdir, 'object-2.ring.gz')) self.assertEqual(set([ 6016, 6020, ]), cache.all_bind_ports_for_node()) self.assertEqual([], mock_ld.mock_calls) # whataremyips() is only called in the constructor self.assertEqual([], mock_whataremyips.mock_calls) def test_singleton_passthrough(self): test_policies = [StoragePolicy(0, 'aay', True), StoragePolicy(1, 'bee', False), StoragePolicy(2, 'cee', False)] with patch_policies(test_policies): for policy in POLICIES: self.assertEqual(POLICIES[int(policy)], policy) def test_quorum_size_replication(self): expected_sizes = {1: 1, 2: 2, 3: 2, 4: 3, 5: 3} for n, expected in expected_sizes.items(): policy = StoragePolicy(0, 'zero', object_ring=FakeRing(replicas=n)) self.assertEqual(policy.quorum, expected) def test_quorum_size_erasure_coding(self): test_ec_policies = [ ECStoragePolicy(10, 'ec8-2', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=8, ec_nparity=2), ECStoragePolicy(11, 'df10-6', ec_type='flat_xor_hd_4', ec_ndata=10, ec_nparity=6), ] for ec_policy in test_ec_policies: k = ec_policy.ec_ndata expected_size = \ k + ec_policy.pyeclib_driver.min_parity_fragments_needed() self.assertEqual(expected_size, ec_policy.quorum) def test_validate_ring(self): test_policies = [ ECStoragePolicy(0, 'ec8-2', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=8, ec_nparity=2, object_ring=FakeRing(replicas=8), is_default=True), ECStoragePolicy(1, 'ec10-4', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=4, object_ring=FakeRing(replicas=10)), ECStoragePolicy(2, 'ec4-2', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=4, ec_nparity=2, object_ring=FakeRing(replicas=7)), ] policies = StoragePolicyCollection(test_policies) for policy in policies: msg = 'EC ring for policy %s needs to be configured with ' \ 'exactly %d nodes.' % \ (policy.name, policy.ec_ndata + policy.ec_nparity) self.assertRaisesWithMessage(RingValidationError, msg, policy._validate_ring) def test_storage_policy_get_info(self): test_policies = [ StoragePolicy(0, 'zero', is_default=True), StoragePolicy(1, 'one', is_deprecated=True, aliases='tahi, uno'), ECStoragePolicy(10, 'ten', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=3), ECStoragePolicy(11, 'done', is_deprecated=True, ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=3), ] policies = StoragePolicyCollection(test_policies) expected = { # default replication (0, True): { 'name': 'zero', 'aliases': 'zero', 'default': True, 'deprecated': False, 'policy_type': REPL_POLICY }, (0, False): { 'name': 'zero', 'aliases': 'zero', 'default': True, }, # deprecated replication (1, True): { 'name': 'one', 'aliases': 'one, tahi, uno', 'default': False, 'deprecated': True, 'policy_type': REPL_POLICY }, (1, False): { 'name': 'one', 'aliases': 'one, tahi, uno', 'deprecated': True, }, # enabled ec (10, True): { 'name': 'ten', 'aliases': 'ten', 'default': False, 'deprecated': False, 'policy_type': EC_POLICY, 'ec_type': DEFAULT_TEST_EC_TYPE, 'ec_num_data_fragments': 10, 'ec_num_parity_fragments': 3, 'ec_object_segment_size': DEFAULT_EC_OBJECT_SEGMENT_SIZE, }, (10, False): { 'name': 'ten', 'aliases': 'ten', }, # deprecated ec (11, True): { 'name': 'done', 'aliases': 'done', 'default': False, 'deprecated': True, 'policy_type': EC_POLICY, 'ec_type': DEFAULT_TEST_EC_TYPE, 'ec_num_data_fragments': 10, 'ec_num_parity_fragments': 3, 'ec_object_segment_size': DEFAULT_EC_OBJECT_SEGMENT_SIZE, }, (11, False): { 'name': 'done', 'aliases': 'done', 'deprecated': True, }, } self.maxDiff = None for policy in policies: expected_info = expected[(int(policy), True)] 
self.assertEqual(policy.get_info(config=True), expected_info) expected_info = expected[(int(policy), False)] self.assertEqual(policy.get_info(config=False), expected_info) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_container_sync_realms.py0000664000567000056710000001634113024044352025240 0ustar jenkinsjenkins00000000000000# Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import errno import os import unittest import uuid from mock import patch from swift.common.container_sync_realms import ContainerSyncRealms from test.unit import FakeLogger, temptree class TestUtils(unittest.TestCase): def test_no_file_there(self): unique = uuid.uuid4().hex logger = FakeLogger() csr = ContainerSyncRealms(unique, logger) self.assertEqual( logger.all_log_lines(), {'debug': [ "Could not load '%s': [Errno 2] No such file or directory: " "'%s'" % (unique, unique)]}) self.assertEqual(csr.mtime_check_interval, 300) self.assertEqual(csr.realms(), []) def test_os_error(self): fname = 'container-sync-realms.conf' fcontents = '' with temptree([fname], [fcontents]) as tempdir: logger = FakeLogger() fpath = os.path.join(tempdir, fname) def _mock_getmtime(path): raise OSError(errno.EACCES, os.strerror(errno.EACCES) + ": '%s'" % (fpath)) with patch('os.path.getmtime', _mock_getmtime): csr = ContainerSyncRealms(fpath, logger) self.assertEqual( logger.all_log_lines(), {'error': [ "Could not load '%s': [Errno 13] Permission denied: " "'%s'" % (fpath, fpath)]}) self.assertEqual(csr.mtime_check_interval, 300) self.assertEqual(csr.realms(), []) def test_empty(self): fname = 'container-sync-realms.conf' fcontents = '' with temptree([fname], [fcontents]) as tempdir: logger = FakeLogger() fpath = os.path.join(tempdir, fname) csr = ContainerSyncRealms(fpath, logger) self.assertEqual(logger.all_log_lines(), {}) self.assertEqual(csr.mtime_check_interval, 300) self.assertEqual(csr.realms(), []) def test_error_parsing(self): fname = 'container-sync-realms.conf' fcontents = 'invalid' with temptree([fname], [fcontents]) as tempdir: logger = FakeLogger() fpath = os.path.join(tempdir, fname) csr = ContainerSyncRealms(fpath, logger) self.assertEqual( logger.all_log_lines(), {'error': [ "Could not load '%s': File contains no section headers.\n" "file: %s, line: 1\n" "'invalid'" % (fpath, fpath)]}) self.assertEqual(csr.mtime_check_interval, 300) self.assertEqual(csr.realms(), []) def test_one_realm(self): fname = 'container-sync-realms.conf' fcontents = ''' [US] key = 9ff3b71c849749dbaec4ccdd3cbab62b cluster_dfw1 = http://dfw1.host/v1/ ''' with temptree([fname], [fcontents]) as tempdir: logger = FakeLogger() fpath = os.path.join(tempdir, fname) csr = ContainerSyncRealms(fpath, logger) self.assertEqual(logger.all_log_lines(), {}) self.assertEqual(csr.mtime_check_interval, 300) self.assertEqual(csr.realms(), ['US']) self.assertEqual(csr.key('US'), '9ff3b71c849749dbaec4ccdd3cbab62b') self.assertEqual(csr.key2('US'), None) self.assertEqual(csr.clusters('US'), ['DFW1']) 
self.assertEqual( csr.endpoint('US', 'DFW1'), 'http://dfw1.host/v1/') def test_two_realms_and_change_a_default(self): fname = 'container-sync-realms.conf' fcontents = ''' [DEFAULT] mtime_check_interval = 60 [US] key = 9ff3b71c849749dbaec4ccdd3cbab62b cluster_dfw1 = http://dfw1.host/v1/ [UK] key = e9569809dc8b4951accc1487aa788012 key2 = f6351bd1cc36413baa43f7ba1b45e51d cluster_lon3 = http://lon3.host/v1/ ''' with temptree([fname], [fcontents]) as tempdir: logger = FakeLogger() fpath = os.path.join(tempdir, fname) csr = ContainerSyncRealms(fpath, logger) self.assertEqual(logger.all_log_lines(), {}) self.assertEqual(csr.mtime_check_interval, 60) self.assertEqual(sorted(csr.realms()), ['UK', 'US']) self.assertEqual(csr.key('US'), '9ff3b71c849749dbaec4ccdd3cbab62b') self.assertEqual(csr.key2('US'), None) self.assertEqual(csr.clusters('US'), ['DFW1']) self.assertEqual( csr.endpoint('US', 'DFW1'), 'http://dfw1.host/v1/') self.assertEqual(csr.key('UK'), 'e9569809dc8b4951accc1487aa788012') self.assertEqual( csr.key2('UK'), 'f6351bd1cc36413baa43f7ba1b45e51d') self.assertEqual(csr.clusters('UK'), ['LON3']) self.assertEqual( csr.endpoint('UK', 'LON3'), 'http://lon3.host/v1/') def test_empty_realm(self): fname = 'container-sync-realms.conf' fcontents = ''' [US] ''' with temptree([fname], [fcontents]) as tempdir: logger = FakeLogger() fpath = os.path.join(tempdir, fname) csr = ContainerSyncRealms(fpath, logger) self.assertEqual(logger.all_log_lines(), {}) self.assertEqual(csr.mtime_check_interval, 300) self.assertEqual(csr.realms(), ['US']) self.assertEqual(csr.key('US'), None) self.assertEqual(csr.key2('US'), None) self.assertEqual(csr.clusters('US'), []) self.assertEqual(csr.endpoint('US', 'JUST_TESTING'), None) def test_bad_mtime_check_interval(self): fname = 'container-sync-realms.conf' fcontents = ''' [DEFAULT] mtime_check_interval = invalid ''' with temptree([fname], [fcontents]) as tempdir: logger = FakeLogger() fpath = os.path.join(tempdir, fname) csr = ContainerSyncRealms(fpath, logger) self.assertEqual( logger.all_log_lines(), {'error': [ "Error in '%s' with mtime_check_interval: invalid literal " "for int() with base 10: 'invalid'" % fpath]}) self.assertEqual(csr.mtime_check_interval, 300) def test_get_sig(self): fname = 'container-sync-realms.conf' fcontents = '' with temptree([fname], [fcontents]) as tempdir: logger = FakeLogger() fpath = os.path.join(tempdir, fname) csr = ContainerSyncRealms(fpath, logger) self.assertEqual( csr.get_sig( 'GET', '/some/path', '1387212345.67890', 'my_nonce', 'realm_key', 'user_key'), '5a6eb486eb7b44ae1b1f014187a94529c3f9c8f9') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_swob.py0000664000567000056710000021601713024044354021635 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
"Tests for swift.common.swob" import datetime import unittest import re import time from six import BytesIO from six.moves.urllib.parse import quote import swift.common.swob from swift.common import utils, exceptions class TestHeaderEnvironProxy(unittest.TestCase): def test_proxy(self): environ = {} proxy = swift.common.swob.HeaderEnvironProxy(environ) proxy['Content-Length'] = 20 proxy['Content-Type'] = 'text/plain' proxy['Something-Else'] = 'somevalue' self.assertEqual( proxy.environ, {'CONTENT_LENGTH': '20', 'CONTENT_TYPE': 'text/plain', 'HTTP_SOMETHING_ELSE': 'somevalue'}) self.assertEqual(proxy['content-length'], '20') self.assertEqual(proxy['content-type'], 'text/plain') self.assertEqual(proxy['something-else'], 'somevalue') self.assertEqual(set(['Something-Else', 'Content-Length', 'Content-Type']), set(proxy.keys())) self.assertEqual(list(iter(proxy)), proxy.keys()) self.assertEqual(3, len(proxy)) def test_ignored_keys(self): # Constructor doesn't normalize keys key = 'wsgi.input' environ = {key: ''} proxy = swift.common.swob.HeaderEnvironProxy(environ) self.assertEqual([], list(iter(proxy))) self.assertEqual([], proxy.keys()) self.assertEqual(0, len(proxy)) self.assertRaises(KeyError, proxy.__getitem__, key) self.assertNotIn(key, proxy) proxy['Content-Type'] = 'text/plain' self.assertEqual(['Content-Type'], list(iter(proxy))) self.assertEqual(['Content-Type'], proxy.keys()) self.assertEqual(1, len(proxy)) self.assertEqual('text/plain', proxy['Content-Type']) self.assertIn('Content-Type', proxy) def test_del(self): environ = {} proxy = swift.common.swob.HeaderEnvironProxy(environ) proxy['Content-Length'] = 20 proxy['Content-Type'] = 'text/plain' proxy['Something-Else'] = 'somevalue' del proxy['Content-Length'] del proxy['Content-Type'] del proxy['Something-Else'] self.assertEqual(proxy.environ, {}) self.assertEqual(0, len(proxy)) def test_contains(self): environ = {} proxy = swift.common.swob.HeaderEnvironProxy(environ) proxy['Content-Length'] = 20 proxy['Content-Type'] = 'text/plain' proxy['Something-Else'] = 'somevalue' self.assertTrue('content-length' in proxy) self.assertTrue('content-type' in proxy) self.assertTrue('something-else' in proxy) def test_keys(self): environ = {} proxy = swift.common.swob.HeaderEnvironProxy(environ) proxy['Content-Length'] = 20 proxy['Content-Type'] = 'text/plain' proxy['Something-Else'] = 'somevalue' self.assertEqual( set(proxy.keys()), set(('Content-Length', 'Content-Type', 'Something-Else'))) class TestRange(unittest.TestCase): def test_range(self): swob_range = swift.common.swob.Range('bytes=1-7') self.assertEqual(swob_range.ranges[0], (1, 7)) def test_upsidedown_range(self): swob_range = swift.common.swob.Range('bytes=5-10') self.assertEqual(swob_range.ranges_for_length(2), []) def test_str(self): for range_str in ('bytes=1-7', 'bytes=1-', 'bytes=-1', 'bytes=1-7,9-12', 'bytes=-7,9-'): swob_range = swift.common.swob.Range(range_str) self.assertEqual(str(swob_range), range_str) def test_ranges_for_length(self): swob_range = swift.common.swob.Range('bytes=1-7') self.assertEqual(swob_range.ranges_for_length(10), [(1, 8)]) self.assertEqual(swob_range.ranges_for_length(5), [(1, 5)]) self.assertEqual(swob_range.ranges_for_length(None), None) def test_ranges_for_large_length(self): swob_range = swift.common.swob.Range('bytes=-100000000000000000000000') self.assertEqual(swob_range.ranges_for_length(100), [(0, 100)]) def test_ranges_for_length_no_end(self): swob_range = swift.common.swob.Range('bytes=1-') 
self.assertEqual(swob_range.ranges_for_length(10), [(1, 10)]) self.assertEqual(swob_range.ranges_for_length(5), [(1, 5)]) self.assertEqual(swob_range.ranges_for_length(None), None) # This used to freak out: swob_range = swift.common.swob.Range('bytes=100-') self.assertEqual(swob_range.ranges_for_length(5), []) self.assertEqual(swob_range.ranges_for_length(None), None) swob_range = swift.common.swob.Range('bytes=4-6,100-') self.assertEqual(swob_range.ranges_for_length(5), [(4, 5)]) def test_ranges_for_length_no_start(self): swob_range = swift.common.swob.Range('bytes=-7') self.assertEqual(swob_range.ranges_for_length(10), [(3, 10)]) self.assertEqual(swob_range.ranges_for_length(5), [(0, 5)]) self.assertEqual(swob_range.ranges_for_length(None), None) swob_range = swift.common.swob.Range('bytes=4-6,-100') self.assertEqual(swob_range.ranges_for_length(5), [(4, 5), (0, 5)]) def test_ranges_for_length_multi(self): swob_range = swift.common.swob.Range('bytes=-20,4-') self.assertEqual(len(swob_range.ranges_for_length(200)), 2) # the actual length greater than each range element self.assertEqual(swob_range.ranges_for_length(200), [(180, 200), (4, 200)]) swob_range = swift.common.swob.Range('bytes=30-150,-10') self.assertEqual(len(swob_range.ranges_for_length(200)), 2) # the actual length lands in the middle of a range self.assertEqual(swob_range.ranges_for_length(90), [(30, 90), (80, 90)]) # the actual length greater than any of the range self.assertEqual(swob_range.ranges_for_length(200), [(30, 151), (190, 200)]) self.assertEqual(swob_range.ranges_for_length(None), None) def test_ranges_for_length_edges(self): swob_range = swift.common.swob.Range('bytes=0-1, -7') self.assertEqual(swob_range.ranges_for_length(10), [(0, 2), (3, 10)]) swob_range = swift.common.swob.Range('bytes=-7, 0-1') self.assertEqual(swob_range.ranges_for_length(10), [(3, 10), (0, 2)]) swob_range = swift.common.swob.Range('bytes=-7, 0-1') self.assertEqual(swob_range.ranges_for_length(5), [(0, 5), (0, 2)]) def test_ranges_for_length_overlapping(self): # Fewer than 3 overlaps is okay swob_range = swift.common.swob.Range('bytes=10-19,15-24') self.assertEqual(swob_range.ranges_for_length(100), [(10, 20), (15, 25)]) swob_range = swift.common.swob.Range('bytes=10-19,15-24,20-29') self.assertEqual(swob_range.ranges_for_length(100), [(10, 20), (15, 25), (20, 30)]) # Adjacent ranges, though suboptimal, don't overlap swob_range = swift.common.swob.Range('bytes=10-19,20-29,30-39') self.assertEqual(swob_range.ranges_for_length(100), [(10, 20), (20, 30), (30, 40)]) # Ranges that share a byte do overlap swob_range = swift.common.swob.Range('bytes=10-20,20-30,30-40,40-50') self.assertEqual(swob_range.ranges_for_length(100), []) # With suffix byte range specs (e.g. 
bytes=-2), make sure that we # correctly determine overlapping-ness based on the entity length swob_range = swift.common.swob.Range('bytes=10-15,15-20,30-39,-9') self.assertEqual(swob_range.ranges_for_length(100), [(10, 16), (15, 21), (30, 40), (91, 100)]) self.assertEqual(swob_range.ranges_for_length(20), []) def test_ranges_for_length_nonascending(self): few_ranges = ("bytes=100-109,200-209,300-309,500-509," "400-409,600-609,700-709") many_ranges = few_ranges + ",800-809" swob_range = swift.common.swob.Range(few_ranges) self.assertEqual(swob_range.ranges_for_length(100000), [(100, 110), (200, 210), (300, 310), (500, 510), (400, 410), (600, 610), (700, 710)]) swob_range = swift.common.swob.Range(many_ranges) self.assertEqual(swob_range.ranges_for_length(100000), []) def test_ranges_for_length_too_many(self): at_the_limit_ranges = ( "bytes=" + ",".join("%d-%d" % (x * 1000, x * 1000 + 10) for x in range(50))) too_many_ranges = at_the_limit_ranges + ",10000000-10000009" rng = swift.common.swob.Range(at_the_limit_ranges) self.assertEqual(len(rng.ranges_for_length(1000000000)), 50) rng = swift.common.swob.Range(too_many_ranges) self.assertEqual(rng.ranges_for_length(1000000000), []) def test_range_invalid_syntax(self): def _check_invalid_range(range_value): try: swift.common.swob.Range(range_value) return False except ValueError: return True """ All the following cases should result ValueError exception 1. value not starts with bytes= 2. range value start is greater than the end, eg. bytes=5-3 3. range does not have start or end, eg. bytes=- 4. range does not have hyphen, eg. bytes=45 5. range value is non numeric 6. any combination of the above """ self.assertTrue(_check_invalid_range('nonbytes=foobar,10-2')) self.assertTrue(_check_invalid_range('bytes=5-3')) self.assertTrue(_check_invalid_range('bytes=-')) self.assertTrue(_check_invalid_range('bytes=45')) self.assertTrue(_check_invalid_range('bytes=foo-bar,3-5')) self.assertTrue(_check_invalid_range('bytes=4-10,45')) self.assertTrue(_check_invalid_range('bytes=foobar,3-5')) self.assertTrue(_check_invalid_range('bytes=nonumber-5')) self.assertTrue(_check_invalid_range('bytes=nonumber')) class TestMatch(unittest.TestCase): def test_match(self): match = swift.common.swob.Match('"a", "b"') self.assertEqual(match.tags, set(('a', 'b'))) self.assertTrue('a' in match) self.assertTrue('b' in match) self.assertTrue('c' not in match) def test_match_star(self): match = swift.common.swob.Match('"a", "*"') self.assertTrue('a' in match) self.assertTrue('b' in match) self.assertTrue('c' in match) def test_match_noquote(self): match = swift.common.swob.Match('a, b') self.assertEqual(match.tags, set(('a', 'b'))) self.assertTrue('a' in match) self.assertTrue('b' in match) self.assertTrue('c' not in match) class TestTransferEncoding(unittest.TestCase): def test_is_chunked(self): headers = {} self.assertFalse(swift.common.swob.is_chunked(headers)) headers['Transfer-Encoding'] = 'chunked' self.assertTrue(swift.common.swob.is_chunked(headers)) headers['Transfer-Encoding'] = 'gzip,chunked' try: swift.common.swob.is_chunked(headers) except AttributeError as e: self.assertEqual(str(e), "Unsupported Transfer-Coding header" " value specified in Transfer-Encoding header") else: self.fail("Expected an AttributeError raised for 'gzip'") headers['Transfer-Encoding'] = 'gzip' try: swift.common.swob.is_chunked(headers) except ValueError as e: self.assertEqual(str(e), "Invalid Transfer-Encoding header value") else: self.fail("Expected a ValueError raised for 'gzip'") 
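# Added note: the next case, 'gzip,identity', is likewise expected to be
# rejected as an unsupported Transfer-Coding (AttributeError), matching
# the 'gzip,chunked' case above rather than the plain-'gzip' ValueError.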
headers['Transfer-Encoding'] = 'gzip,identity' try: swift.common.swob.is_chunked(headers) except AttributeError as e: self.assertEqual(str(e), "Unsupported Transfer-Coding header" " value specified in Transfer-Encoding header") else: self.fail("Expected an AttributeError raised for 'gzip,identity'") class TestAccept(unittest.TestCase): def test_accept_json(self): for accept in ('application/json', 'application/json;q=1.0,*/*;q=0.9', '*/*;q=0.9,application/json;q=1.0', 'application/*', 'text/*,application/json', 'application/*,text/*', 'application/json,text/xml'): acc = swift.common.swob.Accept(accept) match = acc.best_match(['text/plain', 'application/json', 'application/xml', 'text/xml']) self.assertEqual(match, 'application/json') def test_accept_plain(self): for accept in ('', 'text/plain', 'application/xml;q=0.8,*/*;q=0.9', '*/*;q=0.9,application/xml;q=0.8', '*/*', 'text/plain,application/xml'): acc = swift.common.swob.Accept(accept) match = acc.best_match(['text/plain', 'application/json', 'application/xml', 'text/xml']) self.assertEqual(match, 'text/plain') def test_accept_xml(self): for accept in ('application/xml', 'application/xml;q=1.0,*/*;q=0.9', '*/*;q=0.9,application/xml;q=1.0', 'application/xml;charset=UTF-8', 'application/xml;charset=UTF-8;qws="quoted with space"', 'application/xml; q=0.99 ; qws="quoted with space"'): acc = swift.common.swob.Accept(accept) match = acc.best_match(['text/plain', 'application/xml', 'text/xml']) self.assertEqual(match, 'application/xml') def test_accept_invalid(self): for accept in ('*', 'text/plain,,', 'some stuff', 'application/xml;q=1.0;q=1.1', 'text/plain,*', 'text /plain', 'text\x7f/plain', 'text/plain;a=b=c', 'text/plain;q=1;q=2', 'text/plain; ubq="unbalanced " quotes"'): acc = swift.common.swob.Accept(accept) match = acc.best_match(['text/plain', 'application/xml', 'text/xml']) self.assertEqual(match, None) def test_repr(self): acc = swift.common.swob.Accept("application/json") self.assertEqual(repr(acc), "application/json") class TestRequest(unittest.TestCase): def test_blank(self): req = swift.common.swob.Request.blank( '/', environ={'REQUEST_METHOD': 'POST'}, headers={'Content-Type': 'text/plain'}, body='hi') self.assertEqual(req.path_info, '/') self.assertEqual(req.body, 'hi') self.assertEqual(req.headers['Content-Type'], 'text/plain') self.assertEqual(req.method, 'POST') def test_blank_req_environ_property_args(self): blank = swift.common.swob.Request.blank req = blank('/', method='PATCH') self.assertEqual(req.method, 'PATCH') self.assertEqual(req.environ['REQUEST_METHOD'], 'PATCH') req = blank('/', referer='http://example.com') self.assertEqual(req.referer, 'http://example.com') self.assertEqual(req.referrer, 'http://example.com') self.assertEqual(req.environ['HTTP_REFERER'], 'http://example.com') self.assertEqual(req.headers['Referer'], 'http://example.com') req = blank('/', script_name='/application') self.assertEqual(req.script_name, '/application') self.assertEqual(req.environ['SCRIPT_NAME'], '/application') req = blank('/', host='www.example.com') self.assertEqual(req.host, 'www.example.com') self.assertEqual(req.environ['HTTP_HOST'], 'www.example.com') self.assertEqual(req.headers['Host'], 'www.example.com') req = blank('/', remote_addr='127.0.0.1') self.assertEqual(req.remote_addr, '127.0.0.1') self.assertEqual(req.environ['REMOTE_ADDR'], '127.0.0.1') req = blank('/', remote_user='username') self.assertEqual(req.remote_user, 'username') self.assertEqual(req.environ['REMOTE_USER'], 'username') req = blank('/', 
user_agent='curl/7.22.0 (x86_64-pc-linux-gnu)') self.assertEqual(req.user_agent, 'curl/7.22.0 (x86_64-pc-linux-gnu)') self.assertEqual(req.environ['HTTP_USER_AGENT'], 'curl/7.22.0 (x86_64-pc-linux-gnu)') self.assertEqual(req.headers['User-Agent'], 'curl/7.22.0 (x86_64-pc-linux-gnu)') req = blank('/', query_string='a=b&c=d') self.assertEqual(req.query_string, 'a=b&c=d') self.assertEqual(req.environ['QUERY_STRING'], 'a=b&c=d') req = blank('/', if_match='*') self.assertEqual(req.environ['HTTP_IF_MATCH'], '*') self.assertEqual(req.headers['If-Match'], '*') # multiple environ property kwargs req = blank('/', method='PATCH', referer='http://example.com', script_name='/application', host='www.example.com', remote_addr='127.0.0.1', remote_user='username', user_agent='curl/7.22.0 (x86_64-pc-linux-gnu)', query_string='a=b&c=d', if_match='*') self.assertEqual(req.method, 'PATCH') self.assertEqual(req.referer, 'http://example.com') self.assertEqual(req.script_name, '/application') self.assertEqual(req.host, 'www.example.com') self.assertEqual(req.remote_addr, '127.0.0.1') self.assertEqual(req.remote_user, 'username') self.assertEqual(req.user_agent, 'curl/7.22.0 (x86_64-pc-linux-gnu)') self.assertEqual(req.query_string, 'a=b&c=d') self.assertEqual(req.environ['QUERY_STRING'], 'a=b&c=d') def test_invalid_req_environ_property_args(self): # getter only property try: swift.common.swob.Request.blank('/', params={'a': 'b'}) except TypeError as e: self.assertEqual("got unexpected keyword argument 'params'", str(e)) else: self.assertTrue(False, "invalid req_environ_property " "didn't raise error!") # regular attribute try: swift.common.swob.Request.blank('/', _params_cache={'a': 'b'}) except TypeError as e: self.assertEqual("got unexpected keyword " "argument '_params_cache'", str(e)) else: self.assertTrue(False, "invalid req_environ_property " "didn't raise error!") # non-existent attribute try: swift.common.swob.Request.blank('/', params_cache={'a': 'b'}) except TypeError as e: self.assertEqual("got unexpected keyword " "argument 'params_cache'", str(e)) else: self.assertTrue(False, "invalid req_environ_property " "didn't raise error!") # method try: swift.common.swob.Request.blank( '/', as_referer='GET http://example.com') except TypeError as e: self.assertEqual("got unexpected keyword " "argument 'as_referer'", str(e)) else: self.assertTrue(False, "invalid req_environ_property " "didn't raise error!") def test_blank_path_info_precedence(self): blank = swift.common.swob.Request.blank req = blank('/a') self.assertEqual(req.path_info, '/a') req = blank('/a', environ={'PATH_INFO': '/a/c'}) self.assertEqual(req.path_info, '/a/c') req = blank('/a', environ={'PATH_INFO': '/a/c'}, path_info='/a/c/o') self.assertEqual(req.path_info, '/a/c/o') req = blank('/a', path_info='/a/c/o') self.assertEqual(req.path_info, '/a/c/o') def test_blank_body_precedence(self): req = swift.common.swob.Request.blank( '/', environ={'REQUEST_METHOD': 'POST', 'wsgi.input': BytesIO(b'')}, headers={'Content-Type': 'text/plain'}, body='hi') self.assertEqual(req.path_info, '/') self.assertEqual(req.body, 'hi') self.assertEqual(req.headers['Content-Type'], 'text/plain') self.assertEqual(req.method, 'POST') body_file = BytesIO(b'asdf') req = swift.common.swob.Request.blank( '/', environ={'REQUEST_METHOD': 'POST', 'wsgi.input': BytesIO(b'')}, headers={'Content-Type': 'text/plain'}, body='hi', body_file=body_file) self.assertTrue(req.body_file is body_file) req = swift.common.swob.Request.blank( '/', environ={'REQUEST_METHOD': 'POST', 
'wsgi.input': BytesIO(b'')}, headers={'Content-Type': 'text/plain'}, body='hi', content_length=3) self.assertEqual(req.content_length, 3) self.assertEqual(len(req.body), 2) def test_blank_parsing(self): req = swift.common.swob.Request.blank('http://test.com/') self.assertEqual(req.environ['wsgi.url_scheme'], 'http') self.assertEqual(req.environ['SERVER_PORT'], '80') self.assertEqual(req.environ['SERVER_NAME'], 'test.com') req = swift.common.swob.Request.blank('https://test.com:456/') self.assertEqual(req.environ['wsgi.url_scheme'], 'https') self.assertEqual(req.environ['SERVER_PORT'], '456') req = swift.common.swob.Request.blank('test.com/') self.assertEqual(req.environ['wsgi.url_scheme'], 'http') self.assertEqual(req.environ['SERVER_PORT'], '80') self.assertEqual(req.environ['PATH_INFO'], 'test.com/') self.assertRaises(TypeError, swift.common.swob.Request.blank, 'ftp://test.com/') def test_params(self): req = swift.common.swob.Request.blank('/?a=b&c=d') self.assertEqual(req.params['a'], 'b') self.assertEqual(req.params['c'], 'd') def test_timestamp_missing(self): req = swift.common.swob.Request.blank('/') self.assertRaises(exceptions.InvalidTimestamp, getattr, req, 'timestamp') def test_timestamp_invalid(self): req = swift.common.swob.Request.blank( '/', headers={'X-Timestamp': 'asdf'}) self.assertRaises(exceptions.InvalidTimestamp, getattr, req, 'timestamp') def test_timestamp(self): req = swift.common.swob.Request.blank( '/', headers={'X-Timestamp': '1402447134.13507_00000001'}) expected = utils.Timestamp('1402447134.13507', offset=1) self.assertEqual(req.timestamp, expected) self.assertEqual(req.timestamp.normal, expected.normal) self.assertEqual(req.timestamp.internal, expected.internal) def test_path(self): req = swift.common.swob.Request.blank('/hi?a=b&c=d') self.assertEqual(req.path, '/hi') req = swift.common.swob.Request.blank( '/', environ={'SCRIPT_NAME': '/hi', 'PATH_INFO': '/there'}) self.assertEqual(req.path, '/hi/there') def test_path_question_mark(self): req = swift.common.swob.Request.blank('/test%3Ffile') # This tests that .blank unquotes the path when setting PATH_INFO self.assertEqual(req.environ['PATH_INFO'], '/test?file') # This tests that .path requotes it self.assertEqual(req.path, '/test%3Ffile') def test_path_info_pop(self): req = swift.common.swob.Request.blank('/hi/there') self.assertEqual(req.path_info_pop(), 'hi') self.assertEqual(req.path_info, '/there') self.assertEqual(req.script_name, '/hi') def test_bad_path_info_pop(self): req = swift.common.swob.Request.blank('blahblah') self.assertEqual(req.path_info_pop(), None) def test_path_info_pop_last(self): req = swift.common.swob.Request.blank('/last') self.assertEqual(req.path_info_pop(), 'last') self.assertEqual(req.path_info, '') self.assertEqual(req.script_name, '/last') def test_path_info_pop_none(self): req = swift.common.swob.Request.blank('/') self.assertEqual(req.path_info_pop(), '') self.assertEqual(req.path_info, '') self.assertEqual(req.script_name, '/') def test_copy_get(self): req = swift.common.swob.Request.blank( '/hi/there', environ={'REQUEST_METHOD': 'POST'}) self.assertEqual(req.method, 'POST') req2 = req.copy_get() self.assertEqual(req2.method, 'GET') def test_get_response(self): def test_app(environ, start_response): start_response('200 OK', []) return ['hi'] req = swift.common.swob.Request.blank('/') resp = req.get_response(test_app) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, 'hi') def test_401_unauthorized(self): # No request environment resp = 
swift.common.swob.HTTPUnauthorized() self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) # Request environment req = swift.common.swob.Request.blank('/') resp = swift.common.swob.HTTPUnauthorized(request=req) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) def test_401_valid_account_path(self): def test_app(environ, start_response): start_response('401 Unauthorized', []) return ['hi'] # Request environment contains valid account in path req = swift.common.swob.Request.blank('/v1/account-name') resp = req.get_response(test_app) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual('Swift realm="account-name"', resp.headers['Www-Authenticate']) # Request environment contains valid account/container in path req = swift.common.swob.Request.blank('/v1/account-name/c') resp = req.get_response(test_app) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual('Swift realm="account-name"', resp.headers['Www-Authenticate']) def test_401_invalid_path(self): def test_app(environ, start_response): start_response('401 Unauthorized', []) return ['hi'] # Request environment contains bad path req = swift.common.swob.Request.blank('/random') resp = req.get_response(test_app) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual('Swift realm="unknown"', resp.headers['Www-Authenticate']) def test_401_non_keystone_auth_path(self): def test_app(environ, start_response): start_response('401 Unauthorized', []) return ['no creds in request'] # Request to get token req = swift.common.swob.Request.blank('/v1.0/auth') resp = req.get_response(test_app) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual('Swift realm="unknown"', resp.headers['Www-Authenticate']) # Other form of path req = swift.common.swob.Request.blank('/auth/v1.0') resp = req.get_response(test_app) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual('Swift realm="unknown"', resp.headers['Www-Authenticate']) def test_401_www_authenticate_exists(self): def test_app(environ, start_response): start_response('401 Unauthorized', { 'Www-Authenticate': 'Me realm="whatever"'}) return ['no creds in request'] # Auth middleware sets own Www-Authenticate req = swift.common.swob.Request.blank('/auth/v1.0') resp = req.get_response(test_app) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual('Me realm="whatever"', resp.headers['Www-Authenticate']) def test_401_www_authenticate_is_quoted(self): def test_app(environ, start_response): start_response('401 Unauthorized', []) return ['hi'] hacker = 'account-name\n\nfoo
' # url injection test quoted_hacker = quote(hacker) req = swift.common.swob.Request.blank('/v1/' + hacker) resp = req.get_response(test_app) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual('Swift realm="%s"' % quoted_hacker, resp.headers['Www-Authenticate']) req = swift.common.swob.Request.blank('/v1/' + quoted_hacker) resp = req.get_response(test_app) self.assertEqual(resp.status_int, 401) self.assertTrue('Www-Authenticate' in resp.headers) self.assertEqual('Swift realm="%s"' % quoted_hacker, resp.headers['Www-Authenticate']) def test_not_401(self): # Other status codes should not have WWW-Authenticate in response def test_app(environ, start_response): start_response('200 OK', []) return ['hi'] req = swift.common.swob.Request.blank('/') resp = req.get_response(test_app) self.assertTrue('Www-Authenticate' not in resp.headers) def test_properties(self): req = swift.common.swob.Request.blank('/hi/there', body='hi') self.assertEqual(req.body, 'hi') self.assertEqual(req.content_length, 2) req.remote_addr = 'something' self.assertEqual(req.environ['REMOTE_ADDR'], 'something') req.body = 'whatever' self.assertEqual(req.content_length, 8) self.assertEqual(req.body, 'whatever') self.assertEqual(req.method, 'GET') req.range = 'bytes=1-7' self.assertEqual(req.range.ranges[0], (1, 7)) self.assertTrue('Range' in req.headers) req.range = None self.assertTrue('Range' not in req.headers) def test_datetime_properties(self): req = swift.common.swob.Request.blank('/hi/there', body='hi') req.if_unmodified_since = 0 self.assertTrue(isinstance(req.if_unmodified_since, datetime.datetime)) if_unmodified_since = req.if_unmodified_since req.if_unmodified_since = if_unmodified_since self.assertEqual(if_unmodified_since, req.if_unmodified_since) req.if_unmodified_since = 'something' self.assertEqual(req.headers['If-Unmodified-Since'], 'something') self.assertEqual(req.if_unmodified_since, None) self.assertTrue('If-Unmodified-Since' in req.headers) req.if_unmodified_since = None self.assertTrue('If-Unmodified-Since' not in req.headers) too_big_date_list = list(datetime.datetime.max.timetuple()) too_big_date_list[0] += 1 # bump up the year too_big_date = time.strftime( "%a, %d %b %Y %H:%M:%S UTC", time.struct_time(too_big_date_list)) req.if_unmodified_since = too_big_date self.assertEqual(req.if_unmodified_since, None) def test_bad_range(self): req = swift.common.swob.Request.blank('/hi/there', body='hi') req.range = 'bad range' self.assertEqual(req.range, None) def test_accept_header(self): req = swift.common.swob.Request({'REQUEST_METHOD': 'GET', 'PATH_INFO': '/', 'HTTP_ACCEPT': 'application/json'}) self.assertEqual( req.accept.best_match(['application/json', 'text/plain']), 'application/json') self.assertEqual( req.accept.best_match(['text/plain', 'application/json']), 'application/json') def test_swift_entity_path(self): req = swift.common.swob.Request.blank('/v1/a/c/o') self.assertEqual(req.swift_entity_path, '/a/c/o') req = swift.common.swob.Request.blank('/v1/a/c') self.assertEqual(req.swift_entity_path, '/a/c') req = swift.common.swob.Request.blank('/v1/a') self.assertEqual(req.swift_entity_path, '/a') req = swift.common.swob.Request.blank('/v1') self.assertEqual(req.swift_entity_path, None) def test_path_qs(self): req = swift.common.swob.Request.blank('/hi/there?hello=equal&acl') self.assertEqual(req.path_qs, '/hi/there?hello=equal&acl') req = swift.common.swob.Request({'PATH_INFO': '/hi/there', 'QUERY_STRING': 'hello=equal&acl'}) 
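# Added note: when the Request is built from a raw environ dict rather
# than Request.blank, path_qs should still be reassembled from PATH_INFO
# and QUERY_STRING, giving the same '/hi/there?hello=equal&acl' value.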
self.assertEqual(req.path_qs, '/hi/there?hello=equal&acl') def test_url(self): req = swift.common.swob.Request.blank('/hi/there?hello=equal&acl') self.assertEqual(req.url, 'http://localhost/hi/there?hello=equal&acl') def test_wsgify(self): used_req = [] @swift.common.swob.wsgify def _wsgi_func(req): used_req.append(req) return swift.common.swob.Response('200 OK') req = swift.common.swob.Request.blank('/hi/there') resp = req.get_response(_wsgi_func) self.assertEqual(used_req[0].path, '/hi/there') self.assertEqual(resp.status_int, 200) def test_wsgify_raise(self): used_req = [] @swift.common.swob.wsgify def _wsgi_func(req): used_req.append(req) raise swift.common.swob.HTTPServerError() req = swift.common.swob.Request.blank('/hi/there') resp = req.get_response(_wsgi_func) self.assertEqual(used_req[0].path, '/hi/there') self.assertEqual(resp.status_int, 500) def test_split_path(self): """ Copied from swift.common.utils.split_path """ def _test_split_path(path, minsegs=1, maxsegs=None, rwl=False): req = swift.common.swob.Request.blank(path) return req.split_path(minsegs, maxsegs, rwl) self.assertRaises(ValueError, _test_split_path, '') self.assertRaises(ValueError, _test_split_path, '/') self.assertRaises(ValueError, _test_split_path, '//') self.assertEqual(_test_split_path('/a'), ['a']) self.assertRaises(ValueError, _test_split_path, '//a') self.assertEqual(_test_split_path('/a/'), ['a']) self.assertRaises(ValueError, _test_split_path, '/a/c') self.assertRaises(ValueError, _test_split_path, '//c') self.assertRaises(ValueError, _test_split_path, '/a/c/') self.assertRaises(ValueError, _test_split_path, '/a//') self.assertRaises(ValueError, _test_split_path, '/a', 2) self.assertRaises(ValueError, _test_split_path, '/a', 2, 3) self.assertRaises(ValueError, _test_split_path, '/a', 2, 3, True) self.assertEqual(_test_split_path('/a/c', 2), ['a', 'c']) self.assertEqual(_test_split_path('/a/c/o', 3), ['a', 'c', 'o']) self.assertRaises(ValueError, _test_split_path, '/a/c/o/r', 3, 3) self.assertEqual(_test_split_path('/a/c/o/r', 3, 3, True), ['a', 'c', 'o/r']) self.assertEqual(_test_split_path('/a/c', 2, 3, True), ['a', 'c', None]) self.assertRaises(ValueError, _test_split_path, '/a', 5, 4) self.assertEqual(_test_split_path('/a/c/', 2), ['a', 'c']) self.assertEqual(_test_split_path('/a/c/', 2, 3), ['a', 'c', '']) try: _test_split_path('o\nn e', 2) except ValueError as err: self.assertEqual(str(err), 'Invalid path: o%0An%20e') try: _test_split_path('o\nn e', 2, 3, True) except ValueError as err: self.assertEqual(str(err), 'Invalid path: o%0An%20e') def test_unicode_path(self): req = swift.common.swob.Request.blank(u'/\u2661') self.assertEqual(req.path, quote(u'/\u2661'.encode('utf-8'))) def test_unicode_query(self): req = swift.common.swob.Request.blank(u'/') req.query_string = u'x=\u2661' self.assertEqual(req.params['x'], u'\u2661'.encode('utf-8')) def test_url2(self): pi = '/hi/there' path = pi req = swift.common.swob.Request.blank(path) sche = 'http' exp_url = '%s://localhost%s' % (sche, pi) self.assertEqual(req.url, exp_url) qs = 'hello=equal&acl' path = '%s?%s' % (pi, qs) s, p = 'unit.test.example.com', '90' req = swift.common.swob.Request({'PATH_INFO': pi, 'QUERY_STRING': qs, 'SERVER_NAME': s, 'SERVER_PORT': p}) exp_url = '%s://%s:%s%s?%s' % (sche, s, p, pi, qs) self.assertEqual(req.url, exp_url) host = 'unit.test.example.com' req = swift.common.swob.Request({'PATH_INFO': pi, 'QUERY_STRING': qs, 'HTTP_HOST': host + ':80'}) exp_url = '%s://%s%s?%s' % (sche, host, pi, qs) self.assertEqual(req.url, 
exp_url) host = 'unit.test.example.com' sche = 'https' req = swift.common.swob.Request({'PATH_INFO': pi, 'QUERY_STRING': qs, 'HTTP_HOST': host + ':443', 'wsgi.url_scheme': sche}) exp_url = '%s://%s%s?%s' % (sche, host, pi, qs) self.assertEqual(req.url, exp_url) host = 'unit.test.example.com:81' req = swift.common.swob.Request({'PATH_INFO': pi, 'QUERY_STRING': qs, 'HTTP_HOST': host, 'wsgi.url_scheme': sche}) exp_url = '%s://%s%s?%s' % (sche, host, pi, qs) self.assertEqual(req.url, exp_url) def test_as_referer(self): pi = '/hi/there' qs = 'hello=equal&acl' sche = 'https' host = 'unit.test.example.com:81' req = swift.common.swob.Request({'REQUEST_METHOD': 'POST', 'PATH_INFO': pi, 'QUERY_STRING': qs, 'HTTP_HOST': host, 'wsgi.url_scheme': sche}) exp_url = '%s://%s%s?%s' % (sche, host, pi, qs) self.assertEqual(req.as_referer(), 'POST ' + exp_url) def test_message_length_just_content_length(self): req = swift.common.swob.Request.blank( u'/', environ={'REQUEST_METHOD': 'PUT', 'PATH_INFO': '/'}) self.assertEqual(req.message_length(), None) req = swift.common.swob.Request.blank( u'/', environ={'REQUEST_METHOD': 'PUT', 'PATH_INFO': '/'}, body='x' * 42) self.assertEqual(req.message_length(), 42) req.headers['Content-Length'] = 'abc' try: req.message_length() except ValueError as e: self.assertEqual(str(e), "Invalid Content-Length header value") else: self.fail("Expected a ValueError raised for 'abc'") def test_message_length_transfer_encoding(self): req = swift.common.swob.Request.blank( u'/', environ={'REQUEST_METHOD': 'PUT', 'PATH_INFO': '/'}, headers={'transfer-encoding': 'chunked'}, body='x' * 42) self.assertEqual(req.message_length(), None) req.headers['Transfer-Encoding'] = 'gzip,chunked' try: req.message_length() except AttributeError as e: self.assertEqual(str(e), "Unsupported Transfer-Coding header" " value specified in Transfer-Encoding header") else: self.fail("Expected an AttributeError raised for 'gzip'") req.headers['Transfer-Encoding'] = 'gzip' try: req.message_length() except ValueError as e: self.assertEqual(str(e), "Invalid Transfer-Encoding header value") else: self.fail("Expected a ValueError raised for 'gzip'") req.headers['Transfer-Encoding'] = 'gzip,identity' try: req.message_length() except AttributeError as e: self.assertEqual(str(e), "Unsupported Transfer-Coding header" " value specified in Transfer-Encoding header") else: self.fail("Expected an AttributeError raised for 'gzip,identity'") class TestStatusMap(unittest.TestCase): def test_status_map(self): response_args = [] def start_response(status, headers): response_args.append(status) response_args.append(headers) resp_cls = swift.common.swob.status_map[404] resp = resp_cls() self.assertEqual(resp.status_int, 404) self.assertEqual(resp.title, 'Not Found') body = ''.join(resp({}, start_response)) self.assertTrue('The resource could not be found.' 
in body) self.assertEqual(response_args[0], '404 Not Found') headers = dict(response_args[1]) self.assertEqual(headers['Content-Type'], 'text/html; charset=UTF-8') self.assertTrue(int(headers['Content-Length']) > 0) class TestResponse(unittest.TestCase): def _get_response(self): def test_app(environ, start_response): start_response('200 OK', []) return ['hi'] req = swift.common.swob.Request.blank('/') return req.get_response(test_app) def test_properties(self): resp = self._get_response() resp.location = 'something' self.assertEqual(resp.location, 'something') self.assertTrue('Location' in resp.headers) resp.location = None self.assertTrue('Location' not in resp.headers) resp.content_type = 'text/plain' self.assertTrue('Content-Type' in resp.headers) resp.content_type = None self.assertTrue('Content-Type' not in resp.headers) def test_empty_body(self): resp = self._get_response() resp.body = '' self.assertEqual(resp.body, '') def test_unicode_body(self): resp = self._get_response() resp.body = u'\N{SNOWMAN}' self.assertEqual(resp.body, u'\N{SNOWMAN}'.encode('utf-8')) def test_call_reifies_request_if_necessary(self): """ The actual bug was a HEAD response coming out with a body because the Request object wasn't passed into the Response object's constructor. The Response object's __call__ method should be able to reify a Request object from the env it gets passed. """ def test_app(environ, start_response): start_response('200 OK', []) return ['hi'] req = swift.common.swob.Request.blank('/') req.method = 'HEAD' status, headers, app_iter = req.call_application(test_app) resp = swift.common.swob.Response(status=status, headers=dict(headers), app_iter=app_iter) output_iter = resp(req.environ, lambda *_: None) self.assertEqual(list(output_iter), ['']) def test_call_preserves_closeability(self): def test_app(environ, start_response): start_response('200 OK', []) yield "igloo" yield "shindig" yield "macadamia" yield "hullabaloo" req = swift.common.swob.Request.blank('/') req.method = 'GET' status, headers, app_iter = req.call_application(test_app) iterator = iter(app_iter) self.assertEqual('igloo', next(iterator)) self.assertEqual('shindig', next(iterator)) app_iter.close() self.assertRaises(StopIteration, iterator.next) def test_location_rewrite(self): def start_response(env, headers): pass req = swift.common.swob.Request.blank( '/', environ={'HTTP_HOST': 'somehost'}) resp = self._get_response() resp.location = '/something' # read response ''.join(resp(req.environ, start_response)) self.assertEqual(resp.location, 'http://somehost/something') req = swift.common.swob.Request.blank( '/', environ={'HTTP_HOST': 'somehost:80'}) resp = self._get_response() resp.location = '/something' # read response ''.join(resp(req.environ, start_response)) self.assertEqual(resp.location, 'http://somehost/something') req = swift.common.swob.Request.blank( '/', environ={'HTTP_HOST': 'somehost:443', 'wsgi.url_scheme': 'http'}) resp = self._get_response() resp.location = '/something' # read response ''.join(resp(req.environ, start_response)) self.assertEqual(resp.location, 'http://somehost:443/something') req = swift.common.swob.Request.blank( '/', environ={'HTTP_HOST': 'somehost:443', 'wsgi.url_scheme': 'https'}) resp = self._get_response() resp.location = '/something' # read response ''.join(resp(req.environ, start_response)) self.assertEqual(resp.location, 'https://somehost/something') def test_location_rewrite_no_host(self): def start_response(env, headers): pass req = swift.common.swob.Request.blank( '/', 
environ={'SERVER_NAME': 'local', 'SERVER_PORT': 80}) del req.environ['HTTP_HOST'] resp = self._get_response() resp.location = '/something' # read response ''.join(resp(req.environ, start_response)) self.assertEqual(resp.location, 'http://local/something') req = swift.common.swob.Request.blank( '/', environ={'SERVER_NAME': 'local', 'SERVER_PORT': 81}) del req.environ['HTTP_HOST'] resp = self._get_response() resp.location = '/something' # read response ''.join(resp(req.environ, start_response)) self.assertEqual(resp.location, 'http://local:81/something') def test_location_no_rewrite(self): def start_response(env, headers): pass req = swift.common.swob.Request.blank( '/', environ={'HTTP_HOST': 'somehost'}) resp = self._get_response() resp.location = 'http://www.google.com/' # read response ''.join(resp(req.environ, start_response)) self.assertEqual(resp.location, 'http://www.google.com/') def test_location_no_rewrite_when_told_not_to(self): def start_response(env, headers): pass req = swift.common.swob.Request.blank( '/', environ={'SERVER_NAME': 'local', 'SERVER_PORT': 81, 'swift.leave_relative_location': True}) del req.environ['HTTP_HOST'] resp = self._get_response() resp.location = '/something' # read response ''.join(resp(req.environ, start_response)) self.assertEqual(resp.location, '/something') def test_app_iter(self): def start_response(env, headers): pass resp = self._get_response() resp.app_iter = ['a', 'b', 'c'] body = ''.join(resp({}, start_response)) self.assertEqual(body, 'abc') def test_multi_ranges_wo_iter_ranges(self): def test_app(environ, start_response): start_response('200 OK', [('Content-Length', '10')]) return ['1234567890'] req = swift.common.swob.Request.blank( '/', headers={'Range': 'bytes=0-9,10-19,20-29'}) resp = req.get_response(test_app) resp.conditional_response = True resp.content_length = 10 # read response ''.join(resp._response_iter(resp.app_iter, '')) self.assertEqual(resp.status, '200 OK') self.assertEqual(10, resp.content_length) def test_single_range_wo_iter_range(self): def test_app(environ, start_response): start_response('200 OK', [('Content-Length', '10')]) return ['1234567890'] req = swift.common.swob.Request.blank( '/', headers={'Range': 'bytes=0-9'}) resp = req.get_response(test_app) resp.conditional_response = True resp.content_length = 10 # read response ''.join(resp._response_iter(resp.app_iter, '')) self.assertEqual(resp.status, '200 OK') self.assertEqual(10, resp.content_length) def test_multi_range_body(self): def test_app(environ, start_response): start_response('200 OK', [('Content-Length', '4')]) return ['abcd'] req = swift.common.swob.Request.blank( '/', headers={'Range': 'bytes=0-9,10-19,20-29'}) resp = req.get_response(test_app) resp.conditional_response = True resp.content_length = 100 resp.content_type = 'text/plain; charset=utf8' content = ''.join(resp._response_iter(None, ('0123456789112345678' '92123456789'))) self.assertTrue(re.match(('--[a-f0-9]{32}\r\n' 'Content-Type: text/plain; charset=utf8\r\n' 'Content-Range: bytes ' '0-9/100\r\n\r\n0123456789\r\n' '--[a-f0-9]{32}\r\n' 'Content-Type: text/plain; charset=utf8\r\n' 'Content-Range: bytes ' '10-19/100\r\n\r\n1123456789\r\n' '--[a-f0-9]{32}\r\n' 'Content-Type: text/plain; charset=utf8\r\n' 'Content-Range: bytes ' '20-29/100\r\n\r\n2123456789\r\n' '--[a-f0-9]{32}--'), content)) def test_multi_response_iter(self): def test_app(environ, start_response): start_response('200 OK', [('Content-Length', '10'), ('Content-Type', 'application/xml')]) return ['0123456789'] app_iter_ranges_args 
= [] class App_iter(object): def app_iter_ranges(self, ranges, content_type, boundary, size): app_iter_ranges_args.append((ranges, content_type, boundary, size)) for i in range(3): yield str(i) + 'fun' yield boundary def __iter__(self): for i in range(3): yield str(i) + 'fun' req = swift.common.swob.Request.blank( '/', headers={'Range': 'bytes=1-5,8-11'}) resp = req.get_response(test_app) resp.conditional_response = True resp.content_length = 12 content = ''.join(resp._response_iter(App_iter(), '')) boundary = content[-32:] self.assertEqual(content[:-32], '0fun1fun2fun') self.assertEqual(app_iter_ranges_args, [([(1, 6), (8, 12)], 'application/xml', boundary, 12)]) def test_range_body(self): def test_app(environ, start_response): start_response('200 OK', [('Content-Length', '10')]) return ['1234567890'] def start_response(env, headers): pass req = swift.common.swob.Request.blank( '/', headers={'Range': 'bytes=1-3'}) resp = swift.common.swob.Response( body='1234567890', request=req, conditional_response=True) body = ''.join(resp([], start_response)) self.assertEqual(body, '234') self.assertEqual(resp.content_range, 'bytes 1-3/10') self.assertEqual(resp.status, '206 Partial Content') # syntactically valid, but does not make sense, so returning 416 # in next couple of cases. req = swift.common.swob.Request.blank( '/', headers={'Range': 'bytes=-0'}) resp = req.get_response(test_app) resp.conditional_response = True body = ''.join(resp([], start_response)) self.assertEqual(body, '') self.assertEqual(resp.content_length, 0) self.assertEqual(resp.status, '416 Requested Range Not Satisfiable') resp = swift.common.swob.Response( body='1234567890', request=req, conditional_response=True) body = ''.join(resp([], start_response)) self.assertEqual(body, '') self.assertEqual(resp.content_length, 0) self.assertEqual(resp.status, '416 Requested Range Not Satisfiable') # Syntactically-invalid Range headers "MUST" be ignored req = swift.common.swob.Request.blank( '/', headers={'Range': 'bytes=3-2'}) resp = req.get_response(test_app) resp.conditional_response = True body = ''.join(resp([], start_response)) self.assertEqual(body, '1234567890') self.assertEqual(resp.status, '200 OK') resp = swift.common.swob.Response( body='1234567890', request=req, conditional_response=True) body = ''.join(resp([], start_response)) self.assertEqual(body, '1234567890') self.assertEqual(resp.status, '200 OK') def test_content_type(self): resp = self._get_response() resp.content_type = 'text/plain; charset=utf8' self.assertEqual(resp.content_type, 'text/plain') def test_charset(self): resp = self._get_response() resp.content_type = 'text/plain; charset=utf8' self.assertEqual(resp.charset, 'utf8') resp.charset = 'utf16' self.assertEqual(resp.charset, 'utf16') def test_charset_content_type(self): resp = swift.common.swob.Response( content_type='text/plain', charset='utf-8') self.assertEqual(resp.charset, 'utf-8') resp = swift.common.swob.Response( charset='utf-8', content_type='text/plain') self.assertEqual(resp.charset, 'utf-8') def test_etag(self): resp = self._get_response() resp.etag = 'hi' self.assertEqual(resp.headers['Etag'], '"hi"') self.assertEqual(resp.etag, 'hi') self.assertTrue('etag' in resp.headers) resp.etag = None self.assertTrue('etag' not in resp.headers) def test_host_url_default(self): resp = self._get_response() env = resp.environ env['wsgi.url_scheme'] = 'http' env['SERVER_NAME'] = 'bob' env['SERVER_PORT'] = '1234' del env['HTTP_HOST'] self.assertEqual(resp.host_url, 'http://bob:1234') def 
test_host_url_default_port_squelched(self): resp = self._get_response() env = resp.environ env['wsgi.url_scheme'] = 'http' env['SERVER_NAME'] = 'bob' env['SERVER_PORT'] = '80' del env['HTTP_HOST'] self.assertEqual(resp.host_url, 'http://bob') def test_host_url_https(self): resp = self._get_response() env = resp.environ env['wsgi.url_scheme'] = 'https' env['SERVER_NAME'] = 'bob' env['SERVER_PORT'] = '1234' del env['HTTP_HOST'] self.assertEqual(resp.host_url, 'https://bob:1234') def test_host_url_https_port_squelched(self): resp = self._get_response() env = resp.environ env['wsgi.url_scheme'] = 'https' env['SERVER_NAME'] = 'bob' env['SERVER_PORT'] = '443' del env['HTTP_HOST'] self.assertEqual(resp.host_url, 'https://bob') def test_host_url_host_override(self): resp = self._get_response() env = resp.environ env['wsgi.url_scheme'] = 'http' env['SERVER_NAME'] = 'bob' env['SERVER_PORT'] = '1234' env['HTTP_HOST'] = 'someother' self.assertEqual(resp.host_url, 'http://someother') def test_host_url_host_port_override(self): resp = self._get_response() env = resp.environ env['wsgi.url_scheme'] = 'http' env['SERVER_NAME'] = 'bob' env['SERVER_PORT'] = '1234' env['HTTP_HOST'] = 'someother:5678' self.assertEqual(resp.host_url, 'http://someother:5678') def test_host_url_host_https(self): resp = self._get_response() env = resp.environ env['wsgi.url_scheme'] = 'https' env['SERVER_NAME'] = 'bob' env['SERVER_PORT'] = '1234' env['HTTP_HOST'] = 'someother:5678' self.assertEqual(resp.host_url, 'https://someother:5678') def test_507(self): resp = swift.common.swob.HTTPInsufficientStorage() content = ''.join(resp._response_iter(resp.app_iter, resp._body)) self.assertEqual( content, '

<html><h1>Insufficient Storage</h1><p>There was not enough space ' 'to save the resource. Drive: unknown</p></html>') resp = swift.common.swob.HTTPInsufficientStorage(drive='sda1') content = ''.join(resp._response_iter(resp.app_iter, resp._body)) self.assertEqual( content, '<html><h1>Insufficient Storage</h1><p>There was not enough space ' 'to save the resource. Drive: sda1</p></html>
') def test_200_with_body_and_headers(self): headers = {'Content-Length': '0'} content = 'foo' resp = swift.common.swob.HTTPOk(body=content, headers=headers) self.assertEqual(resp.body, content) self.assertEqual(resp.content_length, len(content)) def test_init_with_body_headers_app_iter(self): # body exists but no headers and no app_iter body = 'ok' resp = swift.common.swob.Response(body=body) self.assertEqual(resp.body, body) self.assertEqual(resp.content_length, len(body)) # body and headers with 0 content_length exist but no app_iter body = 'ok' resp = swift.common.swob.Response( body=body, headers={'Content-Length': '0'}) self.assertEqual(resp.body, body) self.assertEqual(resp.content_length, len(body)) # body and headers with content_length exist but no app_iter body = 'ok' resp = swift.common.swob.Response( body=body, headers={'Content-Length': '5'}) self.assertEqual(resp.body, body) self.assertEqual(resp.content_length, len(body)) # body and headers with no content_length exist but no app_iter body = 'ok' resp = swift.common.swob.Response(body=body, headers={}) self.assertEqual(resp.body, body) self.assertEqual(resp.content_length, len(body)) # body, headers with content_length and app_iter exist resp = swift.common.swob.Response( body='ok', headers={'Content-Length': '5'}, app_iter=iter([])) self.assertEqual(resp.content_length, 5) self.assertEqual(resp.body, '') # headers with content_length and app_iter exist but no body resp = swift.common.swob.Response( headers={'Content-Length': '5'}, app_iter=iter([])) self.assertEqual(resp.content_length, 5) self.assertEqual(resp.body, '') # app_iter exists but no body and headers resp = swift.common.swob.Response(app_iter=iter([])) self.assertEqual(resp.content_length, None) self.assertEqual(resp.body, '') class TestUTC(unittest.TestCase): def test_tzname(self): self.assertEqual(swift.common.swob.UTC.tzname(None), 'UTC') class TestConditionalIfNoneMatch(unittest.TestCase): def fake_app(self, environ, start_response): start_response('200 OK', [('Etag', 'the-etag')]) return ['hi'] def fake_start_response(*a, **kw): pass def test_simple_match(self): # etag matches --> 304 req = swift.common.swob.Request.blank( '/', headers={'If-None-Match': 'the-etag'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 304) self.assertEqual(body, '') def test_quoted_simple_match(self): # double quotes don't matter req = swift.common.swob.Request.blank( '/', headers={'If-None-Match': '"the-etag"'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 304) self.assertEqual(body, '') def test_list_match(self): # it works with lists of etags to match req = swift.common.swob.Request.blank( '/', headers={'If-None-Match': '"bert", "the-etag", "ernie"'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 304) self.assertEqual(body, '') def test_list_no_match(self): # no matches --> whatever the original status was req = swift.common.swob.Request.blank( '/', headers={'If-None-Match': '"bert", "ernie"'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') def test_match_star(self): # "*" 
means match anything; see RFC 2616 section 14.24 req = swift.common.swob.Request.blank( '/', headers={'If-None-Match': '*'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 304) self.assertEqual(body, '') class TestConditionalIfMatch(unittest.TestCase): def fake_app(self, environ, start_response): start_response('200 OK', [('Etag', 'the-etag')]) return ['hi'] def fake_start_response(*a, **kw): pass def test_simple_match(self): # if etag matches, proceed as normal req = swift.common.swob.Request.blank( '/', headers={'If-Match': 'the-etag'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') def test_simple_conditional_etag_match(self): # if etag matches, proceed as normal req = swift.common.swob.Request.blank( '/', headers={'If-Match': 'not-the-etag'}) resp = req.get_response(self.fake_app) resp.conditional_response = True resp._conditional_etag = 'not-the-etag' body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') def test_quoted_simple_match(self): # double quotes or not, doesn't matter req = swift.common.swob.Request.blank( '/', headers={'If-Match': '"the-etag"'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') def test_no_match(self): # no match --> 412 req = swift.common.swob.Request.blank( '/', headers={'If-Match': 'not-the-etag'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 412) self.assertEqual(body, '') def test_simple_conditional_etag_no_match(self): req = swift.common.swob.Request.blank( '/', headers={'If-Match': 'the-etag'}) resp = req.get_response(self.fake_app) resp.conditional_response = True resp._conditional_etag = 'not-the-etag' body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 412) self.assertEqual(body, '') def test_match_star(self): # "*" means match anything; see RFC 2616 section 14.24 req = swift.common.swob.Request.blank( '/', headers={'If-Match': '*'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') def test_match_star_on_404(self): def fake_app_404(environ, start_response): start_response('404 Not Found', []) return ['hi'] req = swift.common.swob.Request.blank( '/', headers={'If-Match': '*'}) resp = req.get_response(fake_app_404) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 412) self.assertEqual(body, '') class TestConditionalIfModifiedSince(unittest.TestCase): def fake_app(self, environ, start_response): start_response( '200 OK', [('Last-Modified', 'Thu, 27 Feb 2014 03:29:37 GMT')]) return ['hi'] def fake_start_response(*a, **kw): pass def test_absent(self): req = swift.common.swob.Request.blank('/') resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) 
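# Added note: with no If-Modified-Since header the conditional-response
# machinery is expected to pass the upstream 200 response and its body
# through unchanged.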
self.assertEqual(body, 'hi') def test_before(self): req = swift.common.swob.Request.blank( '/', headers={'If-Modified-Since': 'Thu, 27 Feb 2014 03:29:36 GMT'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') def test_same(self): req = swift.common.swob.Request.blank( '/', headers={'If-Modified-Since': 'Thu, 27 Feb 2014 03:29:37 GMT'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 304) self.assertEqual(body, '') def test_greater(self): req = swift.common.swob.Request.blank( '/', headers={'If-Modified-Since': 'Thu, 27 Feb 2014 03:29:38 GMT'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 304) self.assertEqual(body, '') def test_out_of_range_is_ignored(self): # All that datetime gives us is a ValueError or OverflowError when # something is out of range (i.e. less than datetime.datetime.min or # greater than datetime.datetime.max). Unfortunately, we can't # distinguish between a date being too old and a date being too new, # so the best we can do is ignore such headers. max_date_list = list(datetime.datetime.max.timetuple()) max_date_list[0] += 1 # bump up the year too_big_date_header = time.strftime( "%a, %d %b %Y %H:%M:%S GMT", time.struct_time(max_date_list)) req = swift.common.swob.Request.blank( '/', headers={'If-Modified-Since': too_big_date_header}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') class TestConditionalIfUnmodifiedSince(unittest.TestCase): def fake_app(self, environ, start_response): start_response( '200 OK', [('Last-Modified', 'Thu, 20 Feb 2014 03:29:37 GMT')]) return ['hi'] def fake_start_response(*a, **kw): pass def test_absent(self): req = swift.common.swob.Request.blank('/') resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') def test_before(self): req = swift.common.swob.Request.blank( '/', headers={'If-Unmodified-Since': 'Thu, 20 Feb 2014 03:29:36 GMT'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 412) self.assertEqual(body, '') def test_same(self): req = swift.common.swob.Request.blank( '/', headers={'If-Unmodified-Since': 'Thu, 20 Feb 2014 03:29:37 GMT'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') def test_greater(self): req = swift.common.swob.Request.blank( '/', headers={'If-Unmodified-Since': 'Thu, 20 Feb 2014 03:29:38 GMT'}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') def test_out_of_range_is_ignored(self): # All that datetime gives us is a ValueError or OverflowError when # something is out of range (i.e. 
less than datetime.datetime.min or # greater than datetime.datetime.max). Unfortunately, we can't # distinguish between a date being too old and a date being too new, # so the best we can do is ignore such headers. max_date_list = list(datetime.datetime.max.timetuple()) max_date_list[0] += 1 # bump up the year too_big_date_header = time.strftime( "%a, %d %b %Y %H:%M:%S GMT", time.struct_time(max_date_list)) req = swift.common.swob.Request.blank( '/', headers={'If-Unmodified-Since': too_big_date_header}) resp = req.get_response(self.fake_app) resp.conditional_response = True body = ''.join(resp(req.environ, self.fake_start_response)) self.assertEqual(resp.status_int, 200) self.assertEqual(body, 'hi') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_header_key_dict.py0000664000567000056710000000530613024044354023763 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from swift.common.header_key_dict import HeaderKeyDict class TestHeaderKeyDict(unittest.TestCase): def test_case_insensitive(self): headers = HeaderKeyDict() headers['Content-Length'] = 0 headers['CONTENT-LENGTH'] = 10 headers['content-length'] = 20 self.assertEqual(headers['Content-Length'], '20') self.assertEqual(headers['content-length'], '20') self.assertEqual(headers['CONTENT-LENGTH'], '20') def test_setdefault(self): headers = HeaderKeyDict() # it gets set headers.setdefault('x-rubber-ducky', 'the one') self.assertEqual(headers['X-Rubber-Ducky'], 'the one') # it has the right return value ret = headers.setdefault('x-boat', 'dinghy') self.assertEqual(ret, 'dinghy') ret = headers.setdefault('x-boat', 'yacht') self.assertEqual(ret, 'dinghy') # shouldn't crash headers.setdefault('x-sir-not-appearing-in-this-request', None) def test_del_contains(self): headers = HeaderKeyDict() headers['Content-Length'] = 0 self.assertTrue('Content-Length' in headers) del headers['Content-Length'] self.assertTrue('Content-Length' not in headers) def test_update(self): headers = HeaderKeyDict() headers.update({'Content-Length': '0'}) headers.update([('Content-Type', 'text/plain')]) self.assertEqual(headers['Content-Length'], '0') self.assertEqual(headers['Content-Type'], 'text/plain') def test_get(self): headers = HeaderKeyDict() headers['content-length'] = 20 self.assertEqual(headers.get('CONTENT-LENGTH'), '20') self.assertEqual(headers.get('something-else'), None) self.assertEqual(headers.get('something-else', True), True) def test_keys(self): headers = HeaderKeyDict() headers['content-length'] = 20 headers['cOnTent-tYpe'] = 'text/plain' headers['SomeThing-eLse'] = 'somevalue' self.assertEqual( set(headers.keys()), set(('Content-Length', 'Content-Type', 'Something-Else'))) swift-2.7.1/test/unit/common/test_memcached.py0000664000567000056710000006001313024044354022562 0ustar jenkinsjenkins00000000000000# -*- coding:utf-8 -*- # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you 
may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for swift.common.utils""" from collections import defaultdict import logging import socket import time import unittest from uuid import uuid4 from eventlet import GreenPool, sleep, Queue from eventlet.pools import Pool from swift.common import memcached from mock import patch, MagicMock from test.unit import NullLoggingHandler class MockedMemcachePool(memcached.MemcacheConnPool): def __init__(self, mocks): Pool.__init__(self, max_size=2) self.mocks = mocks # setting this for the eventlet workaround in the MemcacheConnPool self._parent_class_getter = super(memcached.MemcacheConnPool, self).get def create(self): return self.mocks.pop(0) class ExplodingMockMemcached(object): exploded = False def sendall(self, string): self.exploded = True raise socket.error() def readline(self): self.exploded = True raise socket.error() def read(self, size): self.exploded = True raise socket.error() def close(self): pass class MockMemcached(object): def __init__(self): self.inbuf = '' self.outbuf = '' self.cache = {} self.down = False self.exc_on_delete = False self.read_return_none = False self.close_called = False def sendall(self, string): if self.down: raise Exception('mock is down') self.inbuf += string while '\n' in self.inbuf: cmd, self.inbuf = self.inbuf.split('\n', 1) parts = cmd.split() if parts[0].lower() == 'set': self.cache[parts[1]] = parts[2], parts[3], \ self.inbuf[:int(parts[4])] self.inbuf = self.inbuf[int(parts[4]) + 2:] if len(parts) < 6 or parts[5] != 'noreply': self.outbuf += 'STORED\r\n' elif parts[0].lower() == 'add': value = self.inbuf[:int(parts[4])] self.inbuf = self.inbuf[int(parts[4]) + 2:] if parts[1] in self.cache: if len(parts) < 6 or parts[5] != 'noreply': self.outbuf += 'NOT_STORED\r\n' else: self.cache[parts[1]] = parts[2], parts[3], value if len(parts) < 6 or parts[5] != 'noreply': self.outbuf += 'STORED\r\n' elif parts[0].lower() == 'delete': if self.exc_on_delete: raise Exception('mock is has exc_on_delete set') if parts[1] in self.cache: del self.cache[parts[1]] if 'noreply' not in parts: self.outbuf += 'DELETED\r\n' elif 'noreply' not in parts: self.outbuf += 'NOT_FOUND\r\n' elif parts[0].lower() == 'get': for key in parts[1:]: if key in self.cache: val = self.cache[key] self.outbuf += 'VALUE %s %s %s\r\n' % ( key, val[0], len(val[2])) self.outbuf += val[2] + '\r\n' self.outbuf += 'END\r\n' elif parts[0].lower() == 'incr': if parts[1] in self.cache: val = list(self.cache[parts[1]]) val[2] = str(int(val[2]) + int(parts[2])) self.cache[parts[1]] = val self.outbuf += str(val[2]) + '\r\n' else: self.outbuf += 'NOT_FOUND\r\n' elif parts[0].lower() == 'decr': if parts[1] in self.cache: val = list(self.cache[parts[1]]) if int(val[2]) - int(parts[2]) > 0: val[2] = str(int(val[2]) - int(parts[2])) else: val[2] = '0' self.cache[parts[1]] = val self.outbuf += str(val[2]) + '\r\n' else: self.outbuf += 'NOT_FOUND\r\n' def readline(self): if self.read_return_none: return None if self.down: raise Exception('mock is down') if '\n' in self.outbuf: response, self.outbuf = self.outbuf.split('\n', 1) return response + 
'\n' def read(self, size): if self.down: raise Exception('mock is down') if len(self.outbuf) >= size: response = self.outbuf[:size] self.outbuf = self.outbuf[size:] return response def close(self): self.close_called = True pass class TestMemcached(unittest.TestCase): """Tests for swift.common.memcached""" def test_get_conns(self): sock1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock1.bind(('127.0.0.1', 0)) sock1.listen(1) sock1ipport = '%s:%s' % sock1.getsockname() sock2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock2.bind(('127.0.0.1', 0)) sock2.listen(1) orig_port = memcached.DEFAULT_MEMCACHED_PORT try: sock2ip, memcached.DEFAULT_MEMCACHED_PORT = sock2.getsockname() sock2ipport = '%s:%s' % (sock2ip, memcached.DEFAULT_MEMCACHED_PORT) # We're deliberately using sock2ip (no port) here to test that the # default port is used. memcache_client = memcached.MemcacheRing([sock1ipport, sock2ip]) one = two = True while one or two: # Run until we match hosts one and two key = uuid4().hex for conn in memcache_client._get_conns(key): peeripport = '%s:%s' % conn[2].getpeername() self.assertTrue(peeripport in (sock1ipport, sock2ipport)) if peeripport == sock1ipport: one = False if peeripport == sock2ipport: two = False self.assertEqual(len(memcache_client._errors[sock1ipport]), 0) self.assertEqual(len(memcache_client._errors[sock2ip]), 0) finally: memcached.DEFAULT_MEMCACHED_PORT = orig_port def test_get_conns_v6(self): if not socket.has_ipv6: return try: sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) sock.bind(('::1', 0, 0, 0)) sock.listen(1) sock_addr = sock.getsockname() server_socket = '[%s]:%s' % (sock_addr[0], sock_addr[1]) memcache_client = memcached.MemcacheRing([server_socket]) key = uuid4().hex for conn in memcache_client._get_conns(key): peer_sockaddr = conn[2].getpeername() peer_socket = '[%s]:%s' % (peer_sockaddr[0], peer_sockaddr[1]) self.assertEqual(peer_socket, server_socket) self.assertEqual(len(memcache_client._errors[server_socket]), 0) finally: sock.close() def test_get_conns_v6_default(self): if not socket.has_ipv6: return try: sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) sock.bind(('::1', 0)) sock.listen(1) sock_addr = sock.getsockname() server_socket = '[%s]:%s' % (sock_addr[0], sock_addr[1]) server_host = '[%s]' % sock_addr[0] memcached.DEFAULT_MEMCACHED_PORT = sock_addr[1] memcache_client = memcached.MemcacheRing([server_host]) key = uuid4().hex for conn in memcache_client._get_conns(key): peer_sockaddr = conn[2].getpeername() peer_socket = '[%s]:%s' % (peer_sockaddr[0], peer_sockaddr[1]) self.assertEqual(peer_socket, server_socket) self.assertEqual(len(memcache_client._errors[server_host]), 0) finally: sock.close() def test_get_conns_bad_v6(self): with self.assertRaises(ValueError): # IPv6 address with missing [] is invalid server_socket = '%s:%s' % ('::1', 11211) memcached.MemcacheRing([server_socket]) def test_get_conns_hostname(self): with patch('swift.common.memcached.socket.getaddrinfo') as addrinfo: try: sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.bind(('127.0.0.1', 0)) sock.listen(1) sock_addr = sock.getsockname() fqdn = socket.getfqdn() server_socket = '%s:%s' % (fqdn, sock_addr[1]) addrinfo.return_value = [(socket.AF_INET, socket.SOCK_STREAM, 0, '', ('127.0.0.1', sock_addr[1]))] memcache_client = memcached.MemcacheRing([server_socket]) key = uuid4().hex for conn in memcache_client._get_conns(key): peer_sockaddr = conn[2].getpeername() peer_socket = '%s:%s' % (peer_sockaddr[0], peer_sockaddr[1]) 
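# Sketch of the server-string formats the _get_conns tests exercise (nothing
# beyond what the surrounding assertions show): "host:port", a bare host that
# falls back to memcached.DEFAULT_MEMCACHED_PORT, and bracketed IPv6; an
# unbracketed IPv6 address is rejected when the ring is built.
ring = memcached.MemcacheRing(['127.0.0.1:11211',  # explicit IPv4 host:port
                               '127.0.0.1',        # default port filled in
                               '[::1]:11211'])     # IPv6 must be bracketed
try:
    memcached.MemcacheRing(['::1:11211'])          # ambiguous, raises ValueError
except ValueError:
    pass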
self.assertEqual(peer_socket, '127.0.0.1:%d' % sock_addr[1]) self.assertEqual(len(memcache_client._errors[server_socket]), 0) finally: sock.close() def test_get_conns_hostname6(self): with patch('swift.common.memcached.socket.getaddrinfo') as addrinfo: try: sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) sock.bind(('::1', 0)) sock.listen(1) sock_addr = sock.getsockname() fqdn = socket.getfqdn() server_socket = '%s:%s' % (fqdn, sock_addr[1]) addrinfo.return_value = [(socket.AF_INET6, socket.SOCK_STREAM, 0, '', ('::1', sock_addr[1]))] memcache_client = memcached.MemcacheRing([server_socket]) key = uuid4().hex for conn in memcache_client._get_conns(key): peer_sockaddr = conn[2].getpeername() peer_socket = '[%s]:%s' % (peer_sockaddr[0], peer_sockaddr[1]) self.assertEqual(peer_socket, '[::1]:%d' % sock_addr[1]) self.assertEqual(len(memcache_client._errors[server_socket]), 0) finally: sock.close() def test_set_get(self): memcache_client = memcached.MemcacheRing(['1.2.3.4:11211']) mock = MockMemcached() memcache_client._client_cache['1.2.3.4:11211'] = MockedMemcachePool( [(mock, mock)] * 2) memcache_client.set('some_key', [1, 2, 3]) self.assertEqual(memcache_client.get('some_key'), [1, 2, 3]) self.assertEqual(mock.cache.values()[0][1], '0') memcache_client.set('some_key', [4, 5, 6]) self.assertEqual(memcache_client.get('some_key'), [4, 5, 6]) memcache_client.set('some_key', ['simple str', 'utf8 str éà']) # As per http://wiki.openstack.org/encoding, # we should expect to have unicode self.assertEqual( memcache_client.get('some_key'), ['simple str', u'utf8 str éà']) self.assertTrue(float(mock.cache.values()[0][1]) == 0) memcache_client.set('some_key', [1, 2, 3], time=20) self.assertEqual(mock.cache.values()[0][1], '20') sixtydays = 60 * 24 * 60 * 60 esttimeout = time.time() + sixtydays memcache_client.set('some_key', [1, 2, 3], time=sixtydays) self.assertTrue( -1 <= float(mock.cache.values()[0][1]) - esttimeout <= 1) def test_incr(self): memcache_client = memcached.MemcacheRing(['1.2.3.4:11211']) mock = MockMemcached() memcache_client._client_cache['1.2.3.4:11211'] = MockedMemcachePool( [(mock, mock)] * 2) self.assertEqual(memcache_client.incr('some_key', delta=5), 5) self.assertEqual(memcache_client.get('some_key'), '5') self.assertEqual(memcache_client.incr('some_key', delta=5), 10) self.assertEqual(memcache_client.get('some_key'), '10') self.assertEqual(memcache_client.incr('some_key', delta=1), 11) self.assertEqual(memcache_client.get('some_key'), '11') self.assertEqual(memcache_client.incr('some_key', delta=-5), 6) self.assertEqual(memcache_client.get('some_key'), '6') self.assertEqual(memcache_client.incr('some_key', delta=-15), 0) self.assertEqual(memcache_client.get('some_key'), '0') mock.read_return_none = True self.assertRaises(memcached.MemcacheConnectionError, memcache_client.incr, 'some_key', delta=-15) self.assertTrue(mock.close_called) def test_incr_w_timeout(self): memcache_client = memcached.MemcacheRing(['1.2.3.4:11211']) mock = MockMemcached() memcache_client._client_cache['1.2.3.4:11211'] = MockedMemcachePool( [(mock, mock)] * 2) memcache_client.incr('some_key', delta=5, time=55) self.assertEqual(memcache_client.get('some_key'), '5') self.assertEqual(mock.cache.values()[0][1], '55') memcache_client.delete('some_key') self.assertEqual(memcache_client.get('some_key'), None) fiftydays = 50 * 24 * 60 * 60 esttimeout = time.time() + fiftydays memcache_client.incr('some_key', delta=5, time=fiftydays) self.assertEqual(memcache_client.get('some_key'), '5') self.assertTrue( -1 
<= float(mock.cache.values()[0][1]) - esttimeout <= 1) memcache_client.delete('some_key') self.assertEqual(memcache_client.get('some_key'), None) memcache_client.incr('some_key', delta=5) self.assertEqual(memcache_client.get('some_key'), '5') self.assertEqual(mock.cache.values()[0][1], '0') memcache_client.incr('some_key', delta=5, time=55) self.assertEqual(memcache_client.get('some_key'), '10') self.assertEqual(mock.cache.values()[0][1], '0') def test_decr(self): memcache_client = memcached.MemcacheRing(['1.2.3.4:11211']) mock = MockMemcached() memcache_client._client_cache['1.2.3.4:11211'] = MockedMemcachePool( [(mock, mock)] * 2) self.assertEqual(memcache_client.decr('some_key', delta=5), 0) self.assertEqual(memcache_client.get('some_key'), '0') self.assertEqual(memcache_client.incr('some_key', delta=15), 15) self.assertEqual(memcache_client.get('some_key'), '15') self.assertEqual(memcache_client.decr('some_key', delta=4), 11) self.assertEqual(memcache_client.get('some_key'), '11') self.assertEqual(memcache_client.decr('some_key', delta=15), 0) self.assertEqual(memcache_client.get('some_key'), '0') mock.read_return_none = True self.assertRaises(memcached.MemcacheConnectionError, memcache_client.decr, 'some_key', delta=15) def test_retry(self): logging.getLogger().addHandler(NullLoggingHandler()) memcache_client = memcached.MemcacheRing( ['1.2.3.4:11211', '1.2.3.5:11211']) mock1 = ExplodingMockMemcached() mock2 = MockMemcached() memcache_client._client_cache['1.2.3.4:11211'] = MockedMemcachePool( [(mock2, mock2)]) memcache_client._client_cache['1.2.3.5:11211'] = MockedMemcachePool( [(mock1, mock1)]) memcache_client.set('some_key', [1, 2, 3]) self.assertEqual(memcache_client.get('some_key'), [1, 2, 3]) self.assertEqual(mock1.exploded, True) def test_delete(self): memcache_client = memcached.MemcacheRing(['1.2.3.4:11211']) mock = MockMemcached() memcache_client._client_cache['1.2.3.4:11211'] = MockedMemcachePool( [(mock, mock)] * 2) memcache_client.set('some_key', [1, 2, 3]) self.assertEqual(memcache_client.get('some_key'), [1, 2, 3]) memcache_client.delete('some_key') self.assertEqual(memcache_client.get('some_key'), None) def test_multi(self): memcache_client = memcached.MemcacheRing(['1.2.3.4:11211']) mock = MockMemcached() memcache_client._client_cache['1.2.3.4:11211'] = MockedMemcachePool( [(mock, mock)] * 2) memcache_client.set_multi( {'some_key1': [1, 2, 3], 'some_key2': [4, 5, 6]}, 'multi_key') self.assertEqual( memcache_client.get_multi(('some_key2', 'some_key1'), 'multi_key'), [[4, 5, 6], [1, 2, 3]]) self.assertEqual(mock.cache.values()[0][1], '0') self.assertEqual(mock.cache.values()[1][1], '0') memcache_client.set_multi( {'some_key1': [1, 2, 3], 'some_key2': [4, 5, 6]}, 'multi_key', time=20) self.assertEqual(mock.cache.values()[0][1], '20') self.assertEqual(mock.cache.values()[1][1], '20') fortydays = 50 * 24 * 60 * 60 esttimeout = time.time() + fortydays memcache_client.set_multi( {'some_key1': [1, 2, 3], 'some_key2': [4, 5, 6]}, 'multi_key', time=fortydays) self.assertTrue( -1 <= float(mock.cache.values()[0][1]) - esttimeout <= 1) self.assertTrue( -1 <= float(mock.cache.values()[1][1]) - esttimeout <= 1) self.assertEqual(memcache_client.get_multi( ('some_key2', 'some_key1', 'not_exists'), 'multi_key'), [[4, 5, 6], [1, 2, 3], None]) def test_serialization(self): memcache_client = memcached.MemcacheRing(['1.2.3.4:11211'], allow_pickle=True) mock = MockMemcached() memcache_client._client_cache['1.2.3.4:11211'] = MockedMemcachePool( [(mock, mock)] * 2) 
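# Sketch of the incr/decr semantics the tests above assert, reusing the
# MockMemcached / MockedMemcachePool helpers defined earlier in this module:
# incr on a missing key creates it at the delta, a negative delta decrements,
# and decr never drops a counter below zero.
_client = memcached.MemcacheRing(['1.2.3.4:11211'])
_fake = MockMemcached()
_client._client_cache['1.2.3.4:11211'] = MockedMemcachePool([(_fake, _fake)] * 2)
assert _client.incr('counter', delta=5) == 5    # created at 5
assert _client.incr('counter', delta=-3) == 2   # negative delta decrements
assert _client.decr('counter', delta=10) == 0   # floored at zero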
memcache_client.set('some_key', [1, 2, 3]) self.assertEqual(memcache_client.get('some_key'), [1, 2, 3]) memcache_client._allow_pickle = False memcache_client._allow_unpickle = True self.assertEqual(memcache_client.get('some_key'), [1, 2, 3]) memcache_client._allow_unpickle = False self.assertEqual(memcache_client.get('some_key'), None) memcache_client.set('some_key', [1, 2, 3]) self.assertEqual(memcache_client.get('some_key'), [1, 2, 3]) memcache_client._allow_unpickle = True self.assertEqual(memcache_client.get('some_key'), [1, 2, 3]) memcache_client._allow_pickle = True self.assertEqual(memcache_client.get('some_key'), [1, 2, 3]) def test_connection_pooling(self): with patch('swift.common.memcached.socket') as mock_module: def mock_getaddrinfo(host, port, family=socket.AF_INET, socktype=socket.SOCK_STREAM, proto=0, flags=0): return [(family, socktype, proto, '', (host, port))] mock_module.getaddrinfo = mock_getaddrinfo # patch socket, stub socket.socket, mock sock mock_sock = mock_module.socket.return_value # track clients waiting for connections connected = [] connections = Queue() errors = [] def wait_connect(addr): connected.append(addr) sleep(0.1) # yield val = connections.get() if val is not None: errors.append(val) mock_sock.connect = wait_connect memcache_client = memcached.MemcacheRing(['1.2.3.4:11211'], connect_timeout=10) # sanity self.assertEqual(1, len(memcache_client._client_cache)) for server, pool in memcache_client._client_cache.items(): self.assertEqual(2, pool.max_size) # make 10 requests "at the same time" p = GreenPool() for i in range(10): p.spawn(memcache_client.set, 'key', 'value') for i in range(3): sleep(0.1) self.assertEqual(2, len(connected)) # give out a connection connections.put(None) # at this point, only one connection should have actually been # created, the other is in the creation step, and the rest of the # clients are not attempting to connect. we let this play out a # bit to verify. for i in range(3): sleep(0.1) self.assertEqual(2, len(connected)) # finish up, this allows the final connection to be created, so # that all the other clients can use the two existing connections # and no others will be created. connections.put(None) connections.put('nono') self.assertEqual(2, len(connected)) p.waitall() self.assertEqual(2, len(connected)) self.assertEqual(0, len(errors), "A client was allowed a third connection") connections.get_nowait() self.assertTrue(connections.empty()) def test_connection_pool_timeout(self): orig_conn_pool = memcached.MemcacheConnPool try: connections = defaultdict(Queue) pending = defaultdict(int) served = defaultdict(int) class MockConnectionPool(orig_conn_pool): def get(self): pending[self.host] += 1 conn = connections[self.host].get() pending[self.host] -= 1 return conn def put(self, *args, **kwargs): connections[self.host].put(*args, **kwargs) served[self.host] += 1 memcached.MemcacheConnPool = MockConnectionPool memcache_client = memcached.MemcacheRing(['1.2.3.4:11211', '1.2.3.5:11211'], io_timeout=0.5, pool_timeout=0.1) # Hand out a couple slow connections to 1.2.3.5, leaving 1.2.3.4 # fast. All ten (10) clients should try to talk to .5 first, and # then move on to .4, and we'll assert all that below. 
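# (Orientation note for the setup below: only the two slow connections queued
# for 1.2.3.5 ever exist, and their 0.2s sendall stays under the 0.5s
# io_timeout, so they serve exactly two requests; the other eight clients hit
# the 0.1s pool_timeout while waiting on the empty 1.2.3.5 pool, record an
# error against 1.2.3.5:11211, and are served by 1.2.3.4 instead -- which is
# what the assertions after p.waitall() verify.)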
mock_conn = MagicMock(), MagicMock() mock_conn[1].sendall = lambda x: sleep(0.2) connections['1.2.3.5'].put(mock_conn) connections['1.2.3.5'].put(mock_conn) mock_conn = MagicMock(), MagicMock() connections['1.2.3.4'].put(mock_conn) connections['1.2.3.4'].put(mock_conn) p = GreenPool() for i in range(10): p.spawn(memcache_client.set, 'key', 'value') # Wait for the dust to settle. p.waitall() self.assertEqual(pending['1.2.3.5'], 8) self.assertEqual(len(memcache_client._errors['1.2.3.5:11211']), 8) self.assertEqual(served['1.2.3.5'], 2) self.assertEqual(pending['1.2.3.4'], 0) self.assertEqual(len(memcache_client._errors['1.2.3.4:11211']), 0) self.assertEqual(served['1.2.3.4'], 8) # and we never got more put in that we gave out self.assertEqual(connections['1.2.3.5'].qsize(), 2) self.assertEqual(connections['1.2.3.4'].qsize(), 2) finally: memcached.MemcacheConnPool = orig_conn_pool if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_constraints.py0000664000567000056710000006512113024044354023230 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import mock import tempfile import time from six.moves import range from test import safe_repr from test.unit import MockTrue from swift.common.swob import HTTPBadRequest, Request, HTTPException from swift.common.http import HTTP_REQUEST_ENTITY_TOO_LARGE, \ HTTP_BAD_REQUEST, HTTP_LENGTH_REQUIRED, HTTP_NOT_IMPLEMENTED from swift.common import constraints, utils class TestConstraints(unittest.TestCase): def assertIn(self, member, container, msg=None): """Copied from 2.7""" if member not in container: standardMsg = '%s not found in %s' % (safe_repr(member), safe_repr(container)) self.fail(self._formatMessage(msg, standardMsg)) def test_check_metadata_empty(self): headers = {} self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object'), None) def test_check_metadata_good(self): headers = {'X-Object-Meta-Name': 'Value'} self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object'), None) def test_check_metadata_empty_name(self): headers = {'X-Object-Meta-': 'Value'} self.assertTrue(constraints.check_metadata(Request.blank( '/', headers=headers), 'object'), HTTPBadRequest) def test_check_metadata_name_length(self): name = 'a' * constraints.MAX_META_NAME_LENGTH headers = {'X-Object-Meta-%s' % name: 'v'} self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object'), None) name = 'a' * (constraints.MAX_META_NAME_LENGTH + 1) headers = {'X-Object-Meta-%s' % name: 'v'} self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object').status_int, HTTP_BAD_REQUEST) self.assertIn( ('X-Object-Meta-%s' % name).lower(), constraints.check_metadata(Request.blank( '/', headers=headers), 'object').body.lower()) def test_check_metadata_value_length(self): value = 'a' * constraints.MAX_META_VALUE_LENGTH headers = {'X-Object-Meta-Name': value} 
self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object'), None) value = 'a' * (constraints.MAX_META_VALUE_LENGTH + 1) headers = {'X-Object-Meta-Name': value} self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object').status_int, HTTP_BAD_REQUEST) self.assertIn( 'x-object-meta-name', constraints.check_metadata(Request.blank( '/', headers=headers), 'object').body.lower()) self.assertIn( str(constraints.MAX_META_VALUE_LENGTH), constraints.check_metadata(Request.blank( '/', headers=headers), 'object').body) def test_check_metadata_count(self): headers = {} for x in range(constraints.MAX_META_COUNT): headers['X-Object-Meta-%d' % x] = 'v' self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object'), None) headers['X-Object-Meta-Too-Many'] = 'v' self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object').status_int, HTTP_BAD_REQUEST) def test_check_metadata_size(self): headers = {} size = 0 chunk = constraints.MAX_META_NAME_LENGTH + \ constraints.MAX_META_VALUE_LENGTH x = 0 while size + chunk < constraints.MAX_META_OVERALL_SIZE: headers['X-Object-Meta-%04d%s' % (x, 'a' * (constraints.MAX_META_NAME_LENGTH - 4))] = \ 'v' * constraints.MAX_META_VALUE_LENGTH size += chunk x += 1 self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object'), None) # add two more headers in case adding just one falls exactly on the # limit (eg one header adds 1024 and the limit is 2048) headers['X-Object-Meta-%04d%s' % (x, 'a' * (constraints.MAX_META_NAME_LENGTH - 4))] = \ 'v' * constraints.MAX_META_VALUE_LENGTH headers['X-Object-Meta-%04d%s' % (x + 1, 'a' * (constraints.MAX_META_NAME_LENGTH - 4))] = \ 'v' * constraints.MAX_META_VALUE_LENGTH self.assertEqual(constraints.check_metadata(Request.blank( '/', headers=headers), 'object').status_int, HTTP_BAD_REQUEST) def test_check_object_creation_content_length(self): headers = {'Content-Length': str(constraints.MAX_FILE_SIZE), 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name'), None) headers = {'Content-Length': str(constraints.MAX_FILE_SIZE + 1), 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation( Request.blank('/', headers=headers), 'object_name').status_int, HTTP_REQUEST_ENTITY_TOO_LARGE) headers = {'Transfer-Encoding': 'chunked', 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name'), None) headers = {'Transfer-Encoding': 'gzip', 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name').status_int, HTTP_BAD_REQUEST) headers = {'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation( Request.blank('/', headers=headers), 'object_name').status_int, HTTP_LENGTH_REQUIRED) headers = {'Content-Length': 'abc', 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name').status_int, HTTP_BAD_REQUEST) headers = {'Transfer-Encoding': 'gzip,chunked', 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name').status_int, HTTP_NOT_IMPLEMENTED) def test_check_object_creation_copy(self): headers = {'Content-Length': '0', 'X-Copy-From': 'c/o2', 'Content-Type': 'text/plain'} 
self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name'), None) headers = {'Content-Length': '1', 'X-Copy-From': 'c/o2', 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name').status_int, HTTP_BAD_REQUEST) headers = {'Transfer-Encoding': 'chunked', 'X-Copy-From': 'c/o2', 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name'), None) # a content-length header is always required headers = {'X-Copy-From': 'c/o2', 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name').status_int, HTTP_LENGTH_REQUIRED) def test_check_object_creation_name_length(self): headers = {'Transfer-Encoding': 'chunked', 'Content-Type': 'text/plain'} name = 'o' * constraints.MAX_OBJECT_NAME_LENGTH self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), name), None) name = 'o' * (constraints.MAX_OBJECT_NAME_LENGTH + 1) self.assertEqual(constraints.check_object_creation( Request.blank('/', headers=headers), name).status_int, HTTP_BAD_REQUEST) def test_check_object_creation_content_type(self): headers = {'Transfer-Encoding': 'chunked', 'Content-Type': 'text/plain'} self.assertEqual(constraints.check_object_creation(Request.blank( '/', headers=headers), 'object_name'), None) headers = {'Transfer-Encoding': 'chunked'} self.assertEqual(constraints.check_object_creation( Request.blank('/', headers=headers), 'object_name').status_int, HTTP_BAD_REQUEST) def test_check_object_creation_bad_content_type(self): headers = {'Transfer-Encoding': 'chunked', 'Content-Type': '\xff\xff'} resp = constraints.check_object_creation( Request.blank('/', headers=headers), 'object_name') self.assertEqual(resp.status_int, HTTP_BAD_REQUEST) self.assertTrue('Content-Type' in resp.body) def test_check_object_creation_bad_delete_headers(self): headers = {'Transfer-Encoding': 'chunked', 'Content-Type': 'text/plain', 'X-Delete-After': 'abc'} resp = constraints.check_object_creation( Request.blank('/', headers=headers), 'object_name') self.assertEqual(resp.status_int, HTTP_BAD_REQUEST) self.assertTrue('Non-integer X-Delete-After' in resp.body) t = str(int(time.time() - 60)) headers = {'Transfer-Encoding': 'chunked', 'Content-Type': 'text/plain', 'X-Delete-At': t} resp = constraints.check_object_creation( Request.blank('/', headers=headers), 'object_name') self.assertEqual(resp.status_int, HTTP_BAD_REQUEST) self.assertTrue('X-Delete-At in past' in resp.body) def test_check_delete_headers(self): # X-Delete-After headers = {'X-Delete-After': '60'} resp = constraints.check_delete_headers( Request.blank('/', headers=headers)) self.assertTrue(isinstance(resp, Request)) self.assertTrue('x-delete-at' in resp.headers) headers = {'X-Delete-After': 'abc'} try: resp = constraints.check_delete_headers( Request.blank('/', headers=headers)) except HTTPException as e: self.assertEqual(e.status_int, HTTP_BAD_REQUEST) self.assertTrue('Non-integer X-Delete-After' in e.body) else: self.fail("Should have failed with HTTPBadRequest") headers = {'X-Delete-After': '60.1'} try: resp = constraints.check_delete_headers( Request.blank('/', headers=headers)) except HTTPException as e: self.assertEqual(e.status_int, HTTP_BAD_REQUEST) self.assertTrue('Non-integer X-Delete-After' in e.body) else: self.fail("Should have failed with HTTPBadRequest") headers = {'X-Delete-After': 
'-1'} try: resp = constraints.check_delete_headers( Request.blank('/', headers=headers)) except HTTPException as e: self.assertEqual(e.status_int, HTTP_BAD_REQUEST) self.assertTrue('X-Delete-After in past' in e.body) else: self.fail("Should have failed with HTTPBadRequest") # X-Delete-At t = str(int(time.time() + 100)) headers = {'X-Delete-At': t} resp = constraints.check_delete_headers( Request.blank('/', headers=headers)) self.assertTrue(isinstance(resp, Request)) self.assertTrue('x-delete-at' in resp.headers) self.assertEqual(resp.headers.get('X-Delete-At'), t) headers = {'X-Delete-At': 'abc'} try: resp = constraints.check_delete_headers( Request.blank('/', headers=headers)) except HTTPException as e: self.assertEqual(e.status_int, HTTP_BAD_REQUEST) self.assertTrue('Non-integer X-Delete-At' in e.body) else: self.fail("Should have failed with HTTPBadRequest") t = str(int(time.time() + 100)) + '.1' headers = {'X-Delete-At': t} try: resp = constraints.check_delete_headers( Request.blank('/', headers=headers)) except HTTPException as e: self.assertEqual(e.status_int, HTTP_BAD_REQUEST) self.assertTrue('Non-integer X-Delete-At' in e.body) else: self.fail("Should have failed with HTTPBadRequest") t = str(int(time.time())) headers = {'X-Delete-At': t} try: resp = constraints.check_delete_headers( Request.blank('/', headers=headers)) except HTTPException as e: self.assertEqual(e.status_int, HTTP_BAD_REQUEST) self.assertTrue('X-Delete-At in past' in e.body) else: self.fail("Should have failed with HTTPBadRequest") t = str(int(time.time() - 1)) headers = {'X-Delete-At': t} try: resp = constraints.check_delete_headers( Request.blank('/', headers=headers)) except HTTPException as e: self.assertEqual(e.status_int, HTTP_BAD_REQUEST) self.assertTrue('X-Delete-At in past' in e.body) else: self.fail("Should have failed with HTTPBadRequest") def test_check_delete_headers_sets_delete_at(self): t = time.time() + 1000 # check delete-at is passed through headers = {'Content-Length': '0', 'Content-Type': 'text/plain', 'X-Delete-At': str(int(t))} req = Request.blank('/', headers=headers) constraints.check_delete_headers(req) self.assertTrue('X-Delete-At' in req.headers) self.assertEqual(req.headers['X-Delete-At'], str(int(t))) # check delete-after is converted to delete-at headers = {'Content-Length': '0', 'Content-Type': 'text/plain', 'X-Delete-After': '42'} req = Request.blank('/', headers=headers) with mock.patch('time.time', lambda: t): constraints.check_delete_headers(req) self.assertTrue('X-Delete-At' in req.headers) expected = str(int(t) + 42) self.assertEqual(req.headers['X-Delete-At'], expected) # check delete-after takes precedence over delete-at headers = {'Content-Length': '0', 'Content-Type': 'text/plain', 'X-Delete-After': '42', 'X-Delete-At': str(int(t) + 40)} req = Request.blank('/', headers=headers) with mock.patch('time.time', lambda: t): constraints.check_delete_headers(req) self.assertTrue('X-Delete-At' in req.headers) self.assertEqual(req.headers['X-Delete-At'], expected) headers = {'Content-Length': '0', 'Content-Type': 'text/plain', 'X-Delete-After': '42', 'X-Delete-At': str(int(t) + 44)} req = Request.blank('/', headers=headers) with mock.patch('time.time', lambda: t): constraints.check_delete_headers(req) self.assertTrue('X-Delete-At' in req.headers) self.assertEqual(req.headers['X-Delete-At'], expected) def test_check_dir(self): self.assertFalse(constraints.check_dir('', '')) with mock.patch("os.path.isdir", MockTrue()): self.assertTrue(constraints.check_dir('/srv', 'foo/bar')) def 
test_check_mount(self): self.assertFalse(constraints.check_mount('', '')) with mock.patch("swift.common.utils.ismount", MockTrue()): self.assertTrue(constraints.check_mount('/srv', '1')) self.assertTrue(constraints.check_mount('/srv', 'foo-bar')) self.assertTrue(constraints.check_mount( '/srv', '003ed03c-242a-4b2f-bee9-395f801d1699')) self.assertFalse(constraints.check_mount('/srv', 'foo bar')) self.assertFalse(constraints.check_mount('/srv', 'foo/bar')) self.assertFalse(constraints.check_mount('/srv', 'foo?bar')) def test_check_float(self): self.assertFalse(constraints.check_float('')) self.assertTrue(constraints.check_float('0')) def test_valid_timestamp(self): self.assertRaises(HTTPException, constraints.valid_timestamp, Request.blank('/')) self.assertRaises(HTTPException, constraints.valid_timestamp, Request.blank('/', headers={ 'X-Timestamp': 'asdf'})) timestamp = utils.Timestamp(time.time()) req = Request.blank('/', headers={'X-Timestamp': timestamp.internal}) self.assertEqual(timestamp, constraints.valid_timestamp(req)) req = Request.blank('/', headers={'X-Timestamp': timestamp.normal}) self.assertEqual(timestamp, constraints.valid_timestamp(req)) def test_check_utf8(self): unicode_sample = u'\uc77c\uc601' valid_utf8_str = unicode_sample.encode('utf-8') invalid_utf8_str = unicode_sample.encode('utf-8')[::-1] unicode_with_null = u'abc\u0000def' utf8_with_null = unicode_with_null.encode('utf-8') for false_argument in [None, '', invalid_utf8_str, unicode_with_null, utf8_with_null]: self.assertFalse(constraints.check_utf8(false_argument)) for true_argument in ['this is ascii and utf-8, too', unicode_sample, valid_utf8_str]: self.assertTrue(constraints.check_utf8(true_argument)) def test_check_utf8_non_canonical(self): self.assertFalse(constraints.check_utf8('\xed\xa0\xbc\xed\xbc\xb8')) self.assertFalse(constraints.check_utf8('\xed\xa0\xbd\xed\xb9\x88')) def test_check_utf8_lone_surrogates(self): self.assertFalse(constraints.check_utf8('\xed\xa0\xbc')) self.assertFalse(constraints.check_utf8('\xed\xb9\x88')) def test_validate_bad_meta(self): req = Request.blank( '/v/a/c/o', headers={'x-object-meta-hello': 'ab' * constraints.MAX_HEADER_SIZE}) self.assertEqual(constraints.check_metadata(req, 'object').status_int, HTTP_BAD_REQUEST) self.assertIn('x-object-meta-hello', constraints.check_metadata(req, 'object').body.lower()) def test_validate_constraints(self): c = constraints self.assertTrue(c.MAX_META_OVERALL_SIZE > c.MAX_META_NAME_LENGTH) self.assertTrue(c.MAX_META_OVERALL_SIZE > c.MAX_META_VALUE_LENGTH) self.assertTrue(c.MAX_HEADER_SIZE > c.MAX_META_NAME_LENGTH) self.assertTrue(c.MAX_HEADER_SIZE > c.MAX_META_VALUE_LENGTH) def test_validate_copy_from(self): req = Request.blank( '/v/a/c/o', headers={'x-copy-from': 'c/o2'}) src_cont, src_obj = constraints.check_copy_from_header(req) self.assertEqual(src_cont, 'c') self.assertEqual(src_obj, 'o2') req = Request.blank( '/v/a/c/o', headers={'x-copy-from': 'c/subdir/o2'}) src_cont, src_obj = constraints.check_copy_from_header(req) self.assertEqual(src_cont, 'c') self.assertEqual(src_obj, 'subdir/o2') req = Request.blank( '/v/a/c/o', headers={'x-copy-from': '/c/o2'}) src_cont, src_obj = constraints.check_copy_from_header(req) self.assertEqual(src_cont, 'c') self.assertEqual(src_obj, 'o2') def test_validate_bad_copy_from(self): req = Request.blank( '/v/a/c/o', headers={'x-copy-from': 'bad_object'}) self.assertRaises(HTTPException, constraints.check_copy_from_header, req) def test_validate_destination(self): req = Request.blank( '/v/a/c/o', 
headers={'destination': 'c/o2'}) src_cont, src_obj = constraints.check_destination_header(req) self.assertEqual(src_cont, 'c') self.assertEqual(src_obj, 'o2') req = Request.blank( '/v/a/c/o', headers={'destination': 'c/subdir/o2'}) src_cont, src_obj = constraints.check_destination_header(req) self.assertEqual(src_cont, 'c') self.assertEqual(src_obj, 'subdir/o2') req = Request.blank( '/v/a/c/o', headers={'destination': '/c/o2'}) src_cont, src_obj = constraints.check_destination_header(req) self.assertEqual(src_cont, 'c') self.assertEqual(src_obj, 'o2') def test_validate_bad_destination(self): req = Request.blank( '/v/a/c/o', headers={'destination': 'bad_object'}) self.assertRaises(HTTPException, constraints.check_destination_header, req) def test_check_account_format(self): req = Request.blank( '/v/a/c/o', headers={'X-Copy-From-Account': 'account/with/slashes'}) self.assertRaises(HTTPException, constraints.check_account_format, req, req.headers['X-Copy-From-Account']) req = Request.blank( '/v/a/c/o', headers={'X-Copy-From-Account': ''}) self.assertRaises(HTTPException, constraints.check_account_format, req, req.headers['X-Copy-From-Account']) def test_check_container_format(self): invalid_versions_locations = ( 'container/with/slashes', '', # empty ) for versions_location in invalid_versions_locations: req = Request.blank( '/v/a/c/o', headers={ 'X-Versions-Location': versions_location}) try: constraints.check_container_format( req, req.headers['X-Versions-Location']) except HTTPException as e: self.assertTrue(e.body.startswith('Container name cannot')) else: self.fail('check_container_format did not raise error for %r' % req.headers['X-Versions-Location']) class TestConstraintsConfig(unittest.TestCase): def test_default_constraints(self): for key in constraints.DEFAULT_CONSTRAINTS: # if there is local over-rides in swift.conf we just continue on if key in constraints.OVERRIDE_CONSTRAINTS: continue # module level attrs (that aren't in OVERRIDE) should have the # same value as the DEFAULT map module_level_value = getattr(constraints, key.upper()) self.assertEqual(constraints.DEFAULT_CONSTRAINTS[key], module_level_value) def test_effective_constraints(self): for key in constraints.DEFAULT_CONSTRAINTS: # module level attrs should always mirror the same value as the # EFFECTIVE map module_level_value = getattr(constraints, key.upper()) self.assertEqual(constraints.EFFECTIVE_CONSTRAINTS[key], module_level_value) # if there are local over-rides in swift.conf those should be # reflected in the EFFECTIVE, otherwise we expect the DEFAULTs self.assertEqual(constraints.EFFECTIVE_CONSTRAINTS[key], constraints.OVERRIDE_CONSTRAINTS.get( key, constraints.DEFAULT_CONSTRAINTS[key])) def test_override_constraints(self): try: with tempfile.NamedTemporaryFile() as f: f.write('[swift-constraints]\n') # set everything to 1 for key in constraints.DEFAULT_CONSTRAINTS: f.write('%s = 1\n' % key) f.flush() with mock.patch.object(utils, 'SWIFT_CONF_FILE', f.name): constraints.reload_constraints() for key in constraints.DEFAULT_CONSTRAINTS: # module level attrs should all be 1 module_level_value = getattr(constraints, key.upper()) self.assertEqual(module_level_value, 1) # all keys should be in OVERRIDE self.assertEqual(constraints.OVERRIDE_CONSTRAINTS[key], module_level_value) # module level attrs should always mirror the same value as # the EFFECTIVE map self.assertEqual(constraints.EFFECTIVE_CONSTRAINTS[key], module_level_value) finally: constraints.reload_constraints() def test_reload_reset(self): try: with 
tempfile.NamedTemporaryFile() as f: f.write('[swift-constraints]\n') # set everything to 1 for key in constraints.DEFAULT_CONSTRAINTS: f.write('%s = 1\n' % key) f.flush() with mock.patch.object(utils, 'SWIFT_CONF_FILE', f.name): constraints.reload_constraints() self.assertTrue(constraints.SWIFT_CONSTRAINTS_LOADED) self.assertEqual(sorted(constraints.DEFAULT_CONSTRAINTS.keys()), sorted(constraints.OVERRIDE_CONSTRAINTS.keys())) # file is now deleted... with mock.patch.object(utils, 'SWIFT_CONF_FILE', f.name): constraints.reload_constraints() # no constraints have been loaded from non-existent swift.conf self.assertFalse(constraints.SWIFT_CONSTRAINTS_LOADED) # no constraints are in OVERRIDE self.assertEqual([], constraints.OVERRIDE_CONSTRAINTS.keys()) # the EFFECTIVE constraints mirror DEFAULT self.assertEqual(constraints.EFFECTIVE_CONSTRAINTS, constraints.DEFAULT_CONSTRAINTS) finally: constraints.reload_constraints() if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_request_helpers.py0000664000567000056710000002615113024044354024073 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for swift.common.request_helpers""" import unittest from swift.common.swob import Request, HTTPException, HeaderKeyDict from swift.common.storage_policy import POLICIES, EC_POLICY, REPL_POLICY from swift.common.request_helpers import is_sys_meta, is_user_meta, \ is_sys_or_user_meta, strip_sys_meta_prefix, strip_user_meta_prefix, \ remove_items, copy_header_subset, get_name_and_placement, \ http_response_to_document_iters from test.unit import patch_policies from test.unit.common.test_utils import FakeResponse server_types = ['account', 'container', 'object'] class TestRequestHelpers(unittest.TestCase): def test_is_user_meta(self): m_type = 'meta' for st in server_types: self.assertTrue(is_user_meta(st, 'x-%s-%s-foo' % (st, m_type))) self.assertFalse(is_user_meta(st, 'x-%s-%s-' % (st, m_type))) self.assertFalse(is_user_meta(st, 'x-%s-%sfoo' % (st, m_type))) def test_is_sys_meta(self): m_type = 'sysmeta' for st in server_types: self.assertTrue(is_sys_meta(st, 'x-%s-%s-foo' % (st, m_type))) self.assertFalse(is_sys_meta(st, 'x-%s-%s-' % (st, m_type))) self.assertFalse(is_sys_meta(st, 'x-%s-%sfoo' % (st, m_type))) def test_is_sys_or_user_meta(self): m_types = ['sysmeta', 'meta'] for mt in m_types: for st in server_types: self.assertTrue(is_sys_or_user_meta(st, 'x-%s-%s-foo' % (st, mt))) self.assertFalse(is_sys_or_user_meta(st, 'x-%s-%s-' % (st, mt))) self.assertFalse(is_sys_or_user_meta(st, 'x-%s-%sfoo' % (st, mt))) def test_strip_sys_meta_prefix(self): mt = 'sysmeta' for st in server_types: self.assertEqual(strip_sys_meta_prefix(st, 'x-%s-%s-a' % (st, mt)), 'a') def test_strip_user_meta_prefix(self): mt = 'meta' for st in server_types: self.assertEqual(strip_user_meta_prefix(st, 'x-%s-%s-a' % (st, mt)), 'a') def test_remove_items(self): src = {'a': 'b', 'c': 'd'} test = lambda x: x == 'a' rem = remove_items(src, test) 
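# A short sketch restating the header-name convention the tests above encode:
# user metadata lives under x-<type>-meta-<key>, system metadata under
# x-<type>-sysmeta-<key>, and the strip_* helpers return just <key>.
assert is_user_meta('object', 'x-object-meta-color')
assert is_sys_meta('container', 'x-container-sysmeta-owner')
assert strip_user_meta_prefix('object', 'x-object-meta-color') == 'color'
assert strip_sys_meta_prefix('container', 'x-container-sysmeta-owner') == 'owner'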
self.assertEqual(src, {'c': 'd'}) self.assertEqual(rem, {'a': 'b'}) def test_copy_header_subset(self): src = {'a': 'b', 'c': 'd'} from_req = Request.blank('/path', environ={}, headers=src) to_req = Request.blank('/path', {}) test = lambda x: x.lower() == 'a' copy_header_subset(from_req, to_req, test) self.assertTrue('A' in to_req.headers) self.assertEqual(to_req.headers['A'], 'b') self.assertFalse('c' in to_req.headers) self.assertFalse('C' in to_req.headers) @patch_policies(with_ec_default=True) def test_get_name_and_placement_object_req(self): path = '/device/part/account/container/object' req = Request.blank(path, headers={ 'X-Backend-Storage-Policy-Index': '0'}) device, part, account, container, obj, policy = \ get_name_and_placement(req, 5, 5, True) self.assertEqual(device, 'device') self.assertEqual(part, 'part') self.assertEqual(account, 'account') self.assertEqual(container, 'container') self.assertEqual(obj, 'object') self.assertEqual(policy, POLICIES[0]) self.assertEqual(policy.policy_type, EC_POLICY) req.headers['X-Backend-Storage-Policy-Index'] = 1 device, part, account, container, obj, policy = \ get_name_and_placement(req, 5, 5, True) self.assertEqual(device, 'device') self.assertEqual(part, 'part') self.assertEqual(account, 'account') self.assertEqual(container, 'container') self.assertEqual(obj, 'object') self.assertEqual(policy, POLICIES[1]) self.assertEqual(policy.policy_type, REPL_POLICY) req.headers['X-Backend-Storage-Policy-Index'] = 'foo' try: device, part, account, container, obj, policy = \ get_name_and_placement(req, 5, 5, True) except HTTPException as e: self.assertEqual(e.status_int, 503) self.assertEqual(str(e), '503 Service Unavailable') self.assertEqual(e.body, "No policy with index foo") else: self.fail('get_name_and_placement did not raise error ' 'for invalid storage policy index') @patch_policies(with_ec_default=True) def test_get_name_and_placement_object_replication(self): # yup, suffixes are sent '-'.joined in the path path = '/device/part/012-345-678-9ab-cde' req = Request.blank(path, headers={ 'X-Backend-Storage-Policy-Index': '0'}) device, partition, suffix_parts, policy = \ get_name_and_placement(req, 2, 3, True) self.assertEqual(device, 'device') self.assertEqual(partition, 'part') self.assertEqual(suffix_parts, '012-345-678-9ab-cde') self.assertEqual(policy, POLICIES[0]) self.assertEqual(policy.policy_type, EC_POLICY) path = '/device/part' req = Request.blank(path, headers={ 'X-Backend-Storage-Policy-Index': '1'}) device, partition, suffix_parts, policy = \ get_name_and_placement(req, 2, 3, True) self.assertEqual(device, 'device') self.assertEqual(partition, 'part') self.assertEqual(suffix_parts, None) # false-y self.assertEqual(policy, POLICIES[1]) self.assertEqual(policy.policy_type, REPL_POLICY) path = '/device/part/' # with a trailing slash req = Request.blank(path, headers={ 'X-Backend-Storage-Policy-Index': '1'}) device, partition, suffix_parts, policy = \ get_name_and_placement(req, 2, 3, True) self.assertEqual(device, 'device') self.assertEqual(partition, 'part') self.assertEqual(suffix_parts, '') # still false-y self.assertEqual(policy, POLICIES[1]) self.assertEqual(policy.policy_type, REPL_POLICY) class TestHTTPResponseToDocumentIters(unittest.TestCase): def test_200(self): fr = FakeResponse( 200, {'Content-Length': '10', 'Content-Type': 'application/lunch'}, 'sandwiches') doc_iters = http_response_to_document_iters(fr) first_byte, last_byte, length, headers, body = next(doc_iters) self.assertEqual(first_byte, 0) 
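# Sketch of the generator contract verified in this class (built only from the
# FakeResponse cases used here): http_response_to_document_iters yields
# (first_byte, last_byte, entity_length, headers, body_file) tuples -- one
# tuple for a plain 200, and one per part for a 206 multipart/byteranges reply.
_fr = FakeResponse(200, {'Content-Length': '5', 'Content-Type': 'text/plain'},
                   'hello')
(_first, _last, _length, _hdrs, _body), = list(
    http_response_to_document_iters(_fr))
assert (_first, _last, _length) == (0, 4, 5)
assert _body.read() == 'hello'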
self.assertEqual(last_byte, 9) self.assertEqual(length, 10) header_dict = HeaderKeyDict(headers) self.assertEqual(header_dict.get('Content-Length'), '10') self.assertEqual(header_dict.get('Content-Type'), 'application/lunch') self.assertEqual(body.read(), 'sandwiches') self.assertRaises(StopIteration, next, doc_iters) fr = FakeResponse( 200, {'Transfer-Encoding': 'chunked', 'Content-Type': 'application/lunch'}, 'sandwiches') doc_iters = http_response_to_document_iters(fr) first_byte, last_byte, length, headers, body = next(doc_iters) self.assertEqual(first_byte, 0) self.assertIsNone(last_byte) self.assertIsNone(length) header_dict = HeaderKeyDict(headers) self.assertEqual(header_dict.get('Transfer-Encoding'), 'chunked') self.assertEqual(header_dict.get('Content-Type'), 'application/lunch') self.assertEqual(body.read(), 'sandwiches') self.assertRaises(StopIteration, next, doc_iters) def test_206_single_range(self): fr = FakeResponse( 206, {'Content-Length': '8', 'Content-Type': 'application/lunch', 'Content-Range': 'bytes 1-8/10'}, 'andwiche') doc_iters = http_response_to_document_iters(fr) first_byte, last_byte, length, headers, body = next(doc_iters) self.assertEqual(first_byte, 1) self.assertEqual(last_byte, 8) self.assertEqual(length, 10) header_dict = HeaderKeyDict(headers) self.assertEqual(header_dict.get('Content-Length'), '8') self.assertEqual(header_dict.get('Content-Type'), 'application/lunch') self.assertEqual(body.read(), 'andwiche') self.assertRaises(StopIteration, next, doc_iters) # Chunked response should be treated in the same way as non-chunked one fr = FakeResponse( 206, {'Transfer-Encoding': 'chunked', 'Content-Type': 'application/lunch', 'Content-Range': 'bytes 1-8/10'}, 'andwiche') doc_iters = http_response_to_document_iters(fr) first_byte, last_byte, length, headers, body = next(doc_iters) self.assertEqual(first_byte, 1) self.assertEqual(last_byte, 8) self.assertEqual(length, 10) header_dict = HeaderKeyDict(headers) self.assertEqual(header_dict.get('Content-Type'), 'application/lunch') self.assertEqual(body.read(), 'andwiche') self.assertRaises(StopIteration, next, doc_iters) def test_206_multiple_ranges(self): fr = FakeResponse( 206, {'Content-Type': 'multipart/byteranges; boundary=asdfasdfasdf'}, ("--asdfasdfasdf\r\n" "Content-Type: application/lunch\r\n" "Content-Range: bytes 0-3/10\r\n" "\r\n" "sand\r\n" "--asdfasdfasdf\r\n" "Content-Type: application/lunch\r\n" "Content-Range: bytes 6-9/10\r\n" "\r\n" "ches\r\n" "--asdfasdfasdf--")) doc_iters = http_response_to_document_iters(fr) first_byte, last_byte, length, headers, body = next(doc_iters) self.assertEqual(first_byte, 0) self.assertEqual(last_byte, 3) self.assertEqual(length, 10) header_dict = HeaderKeyDict(headers) self.assertEqual(header_dict.get('Content-Type'), 'application/lunch') self.assertEqual(body.read(), 'sand') first_byte, last_byte, length, headers, body = next(doc_iters) self.assertEqual(first_byte, 6) self.assertEqual(last_byte, 9) self.assertEqual(length, 10) header_dict = HeaderKeyDict(headers) self.assertEqual(header_dict.get('Content-Type'), 'application/lunch') self.assertEqual(body.read(), 'ches') self.assertRaises(StopIteration, next, doc_iters) swift-2.7.1/test/unit/common/test_bufferedhttp.py0000664000567000056710000001200613024044352023333 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import unittest import socket from eventlet import spawn, Timeout, listen from swift.common import bufferedhttp class MockHTTPSConnection(object): def __init__(self, hostport): pass def putrequest(self, method, path, skip_host=0): self.path = path pass def putheader(self, header, *values): # Verify that path and values can be safely joined # Essentially what Python 2.7 does that caused us problems. '\r\n\t'.join((self.path,) + values) def endheaders(self): pass class TestBufferedHTTP(unittest.TestCase): def test_http_connect(self): bindsock = listen(('127.0.0.1', 0)) def accept(expected_par): try: with Timeout(3): sock, addr = bindsock.accept() fp = sock.makefile() fp.write('HTTP/1.1 200 OK\r\nContent-Length: 8\r\n\r\n' 'RESPONSE') fp.flush() self.assertEqual( fp.readline(), 'PUT /dev/%s/path/..%%25/?omg&no=%%7f HTTP/1.1\r\n' % expected_par) headers = {} line = fp.readline() while line and line != '\r\n': headers[line.split(':')[0].lower()] = \ line.split(':')[1].strip() line = fp.readline() self.assertEqual(headers['content-length'], '7') self.assertEqual(headers['x-header'], 'value') self.assertEqual(fp.readline(), 'REQUEST\r\n') except BaseException as err: return err return None for par in ('par', 1357): event = spawn(accept, par) try: with Timeout(3): conn = bufferedhttp.http_connect( '127.0.0.1', bindsock.getsockname()[1], 'dev', par, 'PUT', '/path/..%/', { 'content-length': 7, 'x-header': 'value'}, query_string='omg&no=%7f') conn.send('REQUEST\r\n') self.assertTrue(conn.sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)) resp = conn.getresponse() body = resp.read() conn.close() self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertEqual(body, 'RESPONSE') finally: err = event.wait() if err: raise Exception(err) def test_nonstr_header_values(self): origHTTPSConnection = bufferedhttp.HTTPSConnection bufferedhttp.HTTPSConnection = MockHTTPSConnection try: bufferedhttp.http_connect( '127.0.0.1', 8080, 'sda', 1, 'GET', '/', headers={'x-one': '1', 'x-two': 2, 'x-three': 3.0, 'x-four': {'crazy': 'value'}}, ssl=True) bufferedhttp.http_connect_raw( '127.0.0.1', 8080, 'GET', '/', headers={'x-one': '1', 'x-two': 2, 'x-three': 3.0, 'x-four': {'crazy': 'value'}}, ssl=True) finally: bufferedhttp.HTTPSConnection = origHTTPSConnection def test_unicode_values(self): with mock.patch('swift.common.bufferedhttp.HTTPSConnection', MockHTTPSConnection): for dev in ('sda', u'sda', u'sdá', u'sdá'.encode('utf-8')): for path in ( '/v1/a', u'/v1/a', u'/v1/á', u'/v1/á'.encode('utf-8')): for header in ('abc', u'abc', u'ábc'.encode('utf-8')): try: bufferedhttp.http_connect( '127.0.0.1', 8080, dev, 1, 'GET', path, headers={'X-Container-Meta-Whatever': header}, ssl=True) except Exception as e: self.fail( 'Exception %r for device=%r path=%r header=%r' % (e, dev, path, header)) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_direct_client.py0000664000567000056710000007332613024044354023477 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you 
may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import json import unittest import os from contextlib import contextmanager from hashlib import md5 import time import mock import six from six.moves import urllib from swift.common import direct_client from swift.common.exceptions import ClientException from swift.common.header_key_dict import HeaderKeyDict from swift.common.utils import Timestamp from swift.common.swob import RESPONSE_REASONS from swift.common.storage_policy import POLICIES from six.moves.http_client import HTTPException from test.unit import patch_policies, debug_logger class FakeConn(object): def __init__(self, status, headers=None, body='', **kwargs): self.status = status try: self.reason = RESPONSE_REASONS[self.status][0] except Exception: self.reason = 'Fake' self.body = body self.resp_headers = HeaderKeyDict() if headers: self.resp_headers.update(headers) self.etag = None def _update_raw_call_args(self, *args, **kwargs): capture_attrs = ('host', 'port', 'method', 'path', 'req_headers', 'query_string') for attr, value in zip(capture_attrs, args[:len(capture_attrs)]): setattr(self, attr, value) return self def getresponse(self): if self.etag: self.resp_headers['etag'] = str(self.etag.hexdigest()) if isinstance(self.status, Exception): raise self.status return self def getheader(self, header, default=None): return self.resp_headers.get(header, default) def getheaders(self): return self.resp_headers.items() def read(self, amt=None): if isinstance(self.body, six.StringIO): return self.body.read(amt) elif amt is None: return self.body else: return Exception('Not a StringIO entry') def send(self, data): if not self.etag: self.etag = md5() self.etag.update(data) @contextmanager def mocked_http_conn(*args, **kwargs): fake_conn = FakeConn(*args, **kwargs) mock_http_conn = lambda *args, **kwargs: \ fake_conn._update_raw_call_args(*args, **kwargs) with mock.patch('swift.common.bufferedhttp.http_connect_raw', new=mock_http_conn): yield fake_conn @patch_policies class TestDirectClient(unittest.TestCase): def setUp(self): self.node = {'ip': '1.2.3.4', 'port': '6000', 'device': 'sda'} self.part = '0' self.account = u'\u062a account' self.container = u'\u062a container' self.obj = u'\u062a obj/name' self.account_path = '/sda/0/%s' % urllib.parse.quote( self.account.encode('utf-8')) self.container_path = '/sda/0/%s/%s' % tuple( urllib.parse.quote(p.encode('utf-8')) for p in ( self.account, self.container)) self.obj_path = '/sda/0/%s/%s/%s' % tuple( urllib.parse.quote(p.encode('utf-8')) for p in ( self.account, self.container, self.obj)) self.user_agent = 'direct-client %s' % os.getpid() def test_gen_headers(self): stub_user_agent = 'direct-client %s' % os.getpid() headers = direct_client.gen_headers() self.assertEqual(headers['user-agent'], stub_user_agent) self.assertEqual(1, len(headers)) now = time.time() headers = direct_client.gen_headers(add_ts=True) self.assertEqual(headers['user-agent'], stub_user_agent) self.assertTrue(now - 1 < Timestamp(headers['x-timestamp']) < now + 1) self.assertEqual(headers['x-timestamp'], Timestamp(headers['x-timestamp']).internal) 
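# Sketch of the gen_headers() behaviour this test pins down: the
# 'direct-client <pid>' user-agent is always set (overriding any caller value),
# add_ts=True adds an X-Timestamp in Timestamp.internal form, and any other
# caller-supplied headers pass through untouched.
_hdrs = direct_client.gen_headers(hdrs_in={'x-foo': 'bar'}, add_ts=True)
assert _hdrs['user-agent'] == 'direct-client %s' % os.getpid()
assert _hdrs['x-foo'] == 'bar'
assert _hdrs['x-timestamp'] == Timestamp(_hdrs['x-timestamp']).internal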
self.assertEqual(2, len(headers)) headers = direct_client.gen_headers(hdrs_in={'foo-bar': '47'}) self.assertEqual(headers['user-agent'], stub_user_agent) self.assertEqual(headers['foo-bar'], '47') self.assertEqual(2, len(headers)) headers = direct_client.gen_headers(hdrs_in={'user-agent': '47'}) self.assertEqual(headers['user-agent'], stub_user_agent) self.assertEqual(1, len(headers)) for policy in POLICIES: for add_ts in (True, False): now = time.time() headers = direct_client.gen_headers( {'X-Backend-Storage-Policy-Index': policy.idx}, add_ts=add_ts) self.assertEqual(headers['user-agent'], stub_user_agent) self.assertEqual(headers['X-Backend-Storage-Policy-Index'], str(policy.idx)) expected_header_count = 2 if add_ts: expected_header_count += 1 self.assertEqual( headers['x-timestamp'], Timestamp(headers['x-timestamp']).internal) self.assertTrue( now - 1 < Timestamp(headers['x-timestamp']) < now + 1) self.assertEqual(expected_header_count, len(headers)) def test_direct_get_account(self): stub_headers = HeaderKeyDict({ 'X-Account-Container-Count': '1', 'X-Account-Object-Count': '1', 'X-Account-Bytes-Used': '1', 'X-Timestamp': '1234567890', 'X-PUT-Timestamp': '1234567890'}) body = '[{"count": 1, "bytes": 20971520, "name": "c1"}]' with mocked_http_conn(200, stub_headers, body) as conn: resp_headers, resp = direct_client.direct_get_account( self.node, self.part, self.account, marker='marker', prefix='prefix', delimiter='delimiter', limit=1000) self.assertEqual(conn.method, 'GET') self.assertEqual(conn.path, self.account_path) self.assertEqual(conn.req_headers['user-agent'], self.user_agent) self.assertEqual(resp_headers, stub_headers) self.assertEqual(json.loads(body), resp) self.assertTrue('marker=marker' in conn.query_string) self.assertTrue('delimiter=delimiter' in conn.query_string) self.assertTrue('limit=1000' in conn.query_string) self.assertTrue('prefix=prefix' in conn.query_string) self.assertTrue('format=json' in conn.query_string) def test_direct_client_exception(self): stub_headers = {'X-Trans-Id': 'txb5f59485c578460f8be9e-0053478d09'} body = 'a server error has occurred' with mocked_http_conn(500, stub_headers, body): try: direct_client.direct_get_account(self.node, self.part, self.account) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(err.http_status, 500) expected_err_msg_parts = ( 'Account server %s:%s' % (self.node['ip'], self.node['port']), 'GET %r' % self.account_path, 'status 500', ) for item in expected_err_msg_parts: self.assertTrue( item in str(err), '%r was not in "%s"' % (item, err)) self.assertEqual(err.http_host, self.node['ip']) self.assertEqual(err.http_port, self.node['port']) self.assertEqual(err.http_device, self.node['device']) self.assertEqual(err.http_status, 500) self.assertEqual(err.http_reason, 'Internal Error') self.assertEqual(err.http_headers, stub_headers) def test_direct_get_account_no_content_does_not_parse_body(self): headers = { 'X-Account-Container-Count': '1', 'X-Account-Object-Count': '1', 'X-Account-Bytes-Used': '1', 'X-Timestamp': '1234567890', 'X-PUT-Timestamp': '1234567890'} with mocked_http_conn(204, headers) as conn: resp_headers, resp = direct_client.direct_get_account( self.node, self.part, self.account) self.assertEqual(conn.method, 'GET') self.assertEqual(conn.path, self.account_path) self.assertEqual(conn.req_headers['user-agent'], self.user_agent) self.assertEqual(resp_headers, resp_headers) self.assertEqual([], resp) def test_direct_get_account_error(self): with 
mocked_http_conn(500) as conn: try: direct_client.direct_get_account( self.node, self.part, self.account) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'GET') self.assertEqual(conn.path, self.account_path) self.assertEqual(err.http_status, 500) self.assertTrue('GET' in str(err)) def test_direct_delete_account(self): node = {'ip': '1.2.3.4', 'port': '6000', 'device': 'sda'} part = '0' account = 'a' mock_path = 'swift.common.bufferedhttp.http_connect_raw' with mock.patch(mock_path) as fake_connect: fake_connect.return_value.getresponse.return_value.status = 200 direct_client.direct_delete_account(node, part, account) args, kwargs = fake_connect.call_args method = args[2] self.assertEqual('DELETE', method) path = args[3] self.assertEqual('/sda/0/a', path) headers = args[4] self.assertTrue('X-Timestamp' in headers) def test_direct_delete_account_failure(self): node = {'ip': '1.2.3.4', 'port': '6000', 'device': 'sda'} part = '0' account = 'a' with mocked_http_conn(500) as conn: try: direct_client.direct_delete_account(node, part, account) except ClientException as err: pass self.assertEqual('DELETE', conn.method) self.assertEqual('/sda/0/a', conn.path) self.assertEqual(err.http_status, 500) def test_direct_head_container(self): headers = HeaderKeyDict(key='value') with mocked_http_conn(200, headers) as conn: resp = direct_client.direct_head_container( self.node, self.part, self.account, self.container) self.assertEqual(conn.method, 'HEAD') self.assertEqual(conn.path, self.container_path) self.assertEqual(conn.req_headers['user-agent'], self.user_agent) self.assertEqual(headers, resp) def test_direct_head_container_error(self): headers = HeaderKeyDict(key='value') with mocked_http_conn(503, headers) as conn: try: direct_client.direct_head_container( self.node, self.part, self.account, self.container) except ClientException as err: pass else: self.fail('ClientException not raised') # check request self.assertEqual(conn.method, 'HEAD') self.assertEqual(conn.path, self.container_path) self.assertEqual(conn.req_headers['user-agent'], self.user_agent) self.assertEqual(err.http_status, 503) self.assertEqual(err.http_headers, headers) self.assertTrue('HEAD' in str(err)) def test_direct_head_container_deleted(self): important_timestamp = Timestamp(time.time()).internal headers = HeaderKeyDict({'X-Backend-Important-Timestamp': important_timestamp}) with mocked_http_conn(404, headers) as conn: try: direct_client.direct_head_container( self.node, self.part, self.account, self.container) except Exception as err: self.assertTrue(isinstance(err, ClientException)) else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'HEAD') self.assertEqual(conn.path, self.container_path) self.assertEqual(conn.req_headers['user-agent'], self.user_agent) self.assertEqual(err.http_status, 404) self.assertEqual(err.http_headers, headers) def test_direct_get_container(self): headers = HeaderKeyDict({'key': 'value'}) body = '[{"hash": "8f4e3", "last_modified": "317260", "bytes": 209}]' with mocked_http_conn(200, headers, body) as conn: resp_headers, resp = direct_client.direct_get_container( self.node, self.part, self.account, self.container, marker='marker', prefix='prefix', delimiter='delimiter', limit=1000) self.assertEqual(conn.req_headers['user-agent'], 'direct-client %s' % os.getpid()) self.assertEqual(headers, resp_headers) self.assertEqual(json.loads(body), resp) self.assertTrue('marker=marker' in conn.query_string) 
self.assertTrue('delimiter=delimiter' in conn.query_string) self.assertTrue('limit=1000' in conn.query_string) self.assertTrue('prefix=prefix' in conn.query_string) self.assertTrue('format=json' in conn.query_string) def test_direct_get_container_no_content_does_not_decode_body(self): headers = {} body = '' with mocked_http_conn(204, headers, body) as conn: resp_headers, resp = direct_client.direct_get_container( self.node, self.part, self.account, self.container) self.assertEqual(conn.req_headers['user-agent'], 'direct-client %s' % os.getpid()) self.assertEqual(headers, resp_headers) self.assertEqual([], resp) def test_direct_delete_container(self): with mocked_http_conn(200) as conn: direct_client.direct_delete_container( self.node, self.part, self.account, self.container) self.assertEqual(conn.method, 'DELETE') self.assertEqual(conn.path, self.container_path) def test_direct_delete_container_with_timestamp(self): # ensure timestamp is different from any that might be auto-generated timestamp = Timestamp(time.time() - 100) headers = {'X-Timestamp': timestamp.internal} with mocked_http_conn(200) as conn: direct_client.direct_delete_container( self.node, self.part, self.account, self.container, headers=headers) self.assertEqual(conn.method, 'DELETE') self.assertEqual(conn.path, self.container_path) self.assertTrue('X-Timestamp' in conn.req_headers) self.assertEqual(timestamp, conn.req_headers['X-Timestamp']) def test_direct_delete_container_error(self): with mocked_http_conn(500) as conn: try: direct_client.direct_delete_container( self.node, self.part, self.account, self.container) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'DELETE') self.assertEqual(conn.path, self.container_path) self.assertEqual(err.http_status, 500) self.assertTrue('DELETE' in str(err)) def test_direct_put_container_object(self): headers = {'x-foo': 'bar'} with mocked_http_conn(204) as conn: rv = direct_client.direct_put_container_object( self.node, self.part, self.account, self.container, self.obj, headers=headers) self.assertEqual(conn.method, 'PUT') self.assertEqual(conn.path, self.obj_path) self.assertTrue('x-timestamp' in conn.req_headers) self.assertEqual('bar', conn.req_headers.get('x-foo')) self.assertEqual(rv, None) def test_direct_put_container_object_error(self): with mocked_http_conn(500) as conn: try: direct_client.direct_put_container_object( self.node, self.part, self.account, self.container, self.obj) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'PUT') self.assertEqual(conn.path, self.obj_path) self.assertEqual(err.http_status, 500) self.assertTrue('PUT' in str(err)) def test_direct_delete_container_object(self): with mocked_http_conn(204) as conn: rv = direct_client.direct_delete_container_object( self.node, self.part, self.account, self.container, self.obj) self.assertEqual(conn.method, 'DELETE') self.assertEqual(conn.path, self.obj_path) self.assertEqual(rv, None) def test_direct_delete_container_obj_error(self): with mocked_http_conn(500) as conn: try: direct_client.direct_delete_container_object( self.node, self.part, self.account, self.container, self.obj) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'DELETE') self.assertEqual(conn.path, self.obj_path) self.assertEqual(err.http_status, 500) self.assertTrue('DELETE' in str(err)) def test_direct_head_object(self): headers = HeaderKeyDict({'x-foo': 
'bar'}) with mocked_http_conn(200, headers) as conn: resp = direct_client.direct_head_object( self.node, self.part, self.account, self.container, self.obj, headers=headers) self.assertEqual(conn.method, 'HEAD') self.assertEqual(conn.path, self.obj_path) self.assertEqual(conn.req_headers['user-agent'], self.user_agent) self.assertEqual('bar', conn.req_headers.get('x-foo')) self.assertTrue('x-timestamp' not in conn.req_headers, 'x-timestamp was in HEAD request headers') self.assertEqual(headers, resp) def test_direct_head_object_error(self): with mocked_http_conn(500) as conn: try: direct_client.direct_head_object( self.node, self.part, self.account, self.container, self.obj) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'HEAD') self.assertEqual(conn.path, self.obj_path) self.assertEqual(err.http_status, 500) self.assertTrue('HEAD' in str(err)) def test_direct_head_object_not_found(self): important_timestamp = Timestamp(time.time()).internal stub_headers = {'X-Backend-Important-Timestamp': important_timestamp} with mocked_http_conn(404, headers=stub_headers) as conn: try: direct_client.direct_head_object( self.node, self.part, self.account, self.container, self.obj) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'HEAD') self.assertEqual(conn.path, self.obj_path) self.assertEqual(err.http_status, 404) self.assertEqual(err.http_headers['x-backend-important-timestamp'], important_timestamp) def test_direct_get_object(self): contents = six.StringIO('123456') with mocked_http_conn(200, body=contents) as conn: resp_header, obj_body = direct_client.direct_get_object( self.node, self.part, self.account, self.container, self.obj) self.assertEqual(conn.method, 'GET') self.assertEqual(conn.path, self.obj_path) self.assertEqual(obj_body, contents.getvalue()) def test_direct_get_object_error(self): with mocked_http_conn(500) as conn: try: direct_client.direct_get_object( self.node, self.part, self.account, self.container, self.obj) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'GET') self.assertEqual(conn.path, self.obj_path) self.assertEqual(err.http_status, 500) self.assertTrue('GET' in str(err)) def test_direct_get_object_chunks(self): contents = six.StringIO('123456') downloaded = b'' with mocked_http_conn(200, body=contents) as conn: resp_header, obj_body = direct_client.direct_get_object( self.node, self.part, self.account, self.container, self.obj, resp_chunk_size=2) while obj_body: try: chunk = obj_body.next() except StopIteration: break downloaded += chunk self.assertEqual('GET', conn.method) self.assertEqual(self.obj_path, conn.path) self.assertEqual('123456', downloaded) def test_direct_post_object(self): headers = {'Key': 'value'} resp_headers = [] with mocked_http_conn(200, resp_headers) as conn: direct_client.direct_post_object( self.node, self.part, self.account, self.container, self.obj, headers) self.assertEqual(conn.method, 'POST') self.assertEqual(conn.path, self.obj_path) for header in headers: self.assertEqual(conn.req_headers[header], headers[header]) def test_direct_post_object_error(self): headers = {'Key': 'value'} with mocked_http_conn(500) as conn: try: direct_client.direct_post_object( self.node, self.part, self.account, self.container, self.obj, headers) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'POST') 
self.assertEqual(conn.path, self.obj_path) for header in headers: self.assertEqual(conn.req_headers[header], headers[header]) self.assertEqual(conn.req_headers['user-agent'], self.user_agent) self.assertTrue('x-timestamp' in conn.req_headers) self.assertEqual(err.http_status, 500) self.assertTrue('POST' in str(err)) def test_direct_delete_object(self): with mocked_http_conn(200) as conn: resp = direct_client.direct_delete_object( self.node, self.part, self.account, self.container, self.obj) self.assertEqual(conn.method, 'DELETE') self.assertEqual(conn.path, self.obj_path) self.assertEqual(resp, None) def test_direct_delete_object_with_timestamp(self): # ensure timestamp is different from any that might be auto-generated timestamp = Timestamp(time.time() - 100) headers = {'X-Timestamp': timestamp.internal} with mocked_http_conn(200) as conn: direct_client.direct_delete_object( self.node, self.part, self.account, self.container, self.obj, headers=headers) self.assertEqual(conn.method, 'DELETE') self.assertEqual(conn.path, self.obj_path) self.assertTrue('X-Timestamp' in conn.req_headers) self.assertEqual(timestamp, conn.req_headers['X-Timestamp']) def test_direct_delete_object_error(self): with mocked_http_conn(503) as conn: try: direct_client.direct_delete_object( self.node, self.part, self.account, self.container, self.obj) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'DELETE') self.assertEqual(conn.path, self.obj_path) self.assertEqual(err.http_status, 503) self.assertTrue('DELETE' in str(err)) def test_direct_put_object_with_content_length(self): contents = six.StringIO('123456') with mocked_http_conn(200) as conn: resp = direct_client.direct_put_object( self.node, self.part, self.account, self.container, self.obj, contents, 6) self.assertEqual(conn.method, 'PUT') self.assertEqual(conn.path, self.obj_path) self.assertEqual(md5('123456').hexdigest(), resp) def test_direct_put_object_fail(self): contents = six.StringIO('123456') with mocked_http_conn(500) as conn: try: direct_client.direct_put_object( self.node, self.part, self.account, self.container, self.obj, contents) except ClientException as err: pass else: self.fail('ClientException not raised') self.assertEqual(conn.method, 'PUT') self.assertEqual(conn.path, self.obj_path) self.assertEqual(err.http_status, 500) def test_direct_put_object_chunked(self): contents = six.StringIO('123456') with mocked_http_conn(200) as conn: resp = direct_client.direct_put_object( self.node, self.part, self.account, self.container, self.obj, contents) self.assertEqual(conn.method, 'PUT') self.assertEqual(conn.path, self.obj_path) self.assertEqual(md5('6\r\n123456\r\n0\r\n\r\n').hexdigest(), resp) def test_direct_put_object_args(self): # One test to cover all missing checks contents = "" with mocked_http_conn(200) as conn: resp = direct_client.direct_put_object( self.node, self.part, self.account, self.container, self.obj, contents, etag="testing-etag", content_type='Text') self.assertEqual('PUT', conn.method) self.assertEqual(self.obj_path, conn.path) self.assertEqual(conn.req_headers['Content-Length'], '0') self.assertEqual(conn.req_headers['Content-Type'], 'Text') self.assertEqual(md5('0\r\n\r\n').hexdigest(), resp) def test_direct_put_object_header_content_length(self): contents = six.StringIO('123456') stub_headers = HeaderKeyDict({ 'Content-Length': '6'}) with mocked_http_conn(200) as conn: resp = direct_client.direct_put_object( self.node, self.part, self.account, self.container, 
self.obj, contents, headers=stub_headers) self.assertEqual('PUT', conn.method) self.assertEqual(conn.req_headers['Content-length'], '6') self.assertEqual(md5('123456').hexdigest(), resp) def test_retry(self): headers = HeaderKeyDict({'key': 'value'}) with mocked_http_conn(200, headers) as conn: attempts, resp = direct_client.retry( direct_client.direct_head_object, self.node, self.part, self.account, self.container, self.obj) self.assertEqual(conn.method, 'HEAD') self.assertEqual(conn.path, self.obj_path) self.assertEqual(conn.req_headers['user-agent'], self.user_agent) self.assertEqual(headers, resp) self.assertEqual(attempts, 1) def test_retry_client_exception(self): logger = debug_logger('direct-client-test') with mock.patch('swift.common.direct_client.sleep') as mock_sleep, \ mocked_http_conn(500) as conn: with self.assertRaises(direct_client.ClientException) as err_ctx: direct_client.retry(direct_client.direct_delete_object, self.node, self.part, self.account, self.container, self.obj, retries=2, error_log=logger.error) self.assertEqual('DELETE', conn.method) self.assertEqual(err_ctx.exception.http_status, 500) self.assertEqual([mock.call(1), mock.call(2)], mock_sleep.call_args_list) error_lines = logger.get_lines_for_level('error') self.assertEqual(3, len(error_lines)) for line in error_lines: self.assertIn('500 Internal Error', line) def test_retry_http_exception(self): logger = debug_logger('direct-client-test') with mock.patch('swift.common.direct_client.sleep') as mock_sleep, \ mocked_http_conn(HTTPException('Kaboom!')) as conn: with self.assertRaises(HTTPException) as err_ctx: direct_client.retry(direct_client.direct_delete_object, self.node, self.part, self.account, self.container, self.obj, retries=2, error_log=logger.error) self.assertEqual('DELETE', conn.method) self.assertEqual('Kaboom!', str(err_ctx.exception)) self.assertEqual([mock.call(1), mock.call(2)], mock_sleep.call_args_list) error_lines = logger.get_lines_for_level('error') self.assertEqual(3, len(error_lines)) for line in error_lines: self.assertIn('Kaboom!', line) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/common/test_splice.py0000664000567000056710000002330613024044354022137 0ustar jenkinsjenkins00000000000000# Copyright (c) 2014 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
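# A condensed orientation for the splice/tee tests that follow, inferred from
# the assertions in this file rather than from the wrapper's own documentation
# (treat it as a sketch, not an authoritative API description):
#
#   splice(fd_in, off_in, fd_out, off_out, length, flags)
#       -> (bytes_moved, new_off_in, new_off_out)   # a None offset stays None
#   tee(fd_in, fd_out, length, flags) -> bytes_duplicated
#
#   Example, mirroring test_splice_pipe_to_pipe below:
#       os.write(p1b, 'abcdef')
#       splice(p1a, None, p2b, None, 3, 0)   # asserted to return (3, None, None)
#
# As exercised here, flags may be given either as an integer or as a list of
# SPLICE_F_* constants (which get OR'ed together), EnvironmentError is raised
# when the libc symbol is unavailable, and a failing syscall surfaces as an
# IOError whose message carries the errno.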
'''Tests for `swift.common.splice`''' import os import errno import ctypes import logging import tempfile import unittest import contextlib import re import mock import nose from swift.common.splice import splice, tee LOGGER = logging.getLogger(__name__) def safe_close(fd): '''Close a file descriptor, ignoring any exceptions''' try: os.close(fd) except Exception: LOGGER.exception('Error while closing FD') @contextlib.contextmanager def pipe(): '''Context-manager providing 2 ends of a pipe, closing them at exit''' fds = os.pipe() try: yield fds finally: safe_close(fds[0]) safe_close(fds[1]) class TestSplice(unittest.TestCase): '''Tests for `splice`''' def setUp(self): if not splice.available: raise nose.SkipTest('splice not available') def test_flags(self): '''Test flag attribute availability''' self.assertTrue(hasattr(splice, 'SPLICE_F_MOVE')) self.assertTrue(hasattr(splice, 'SPLICE_F_NONBLOCK')) self.assertTrue(hasattr(splice, 'SPLICE_F_MORE')) self.assertTrue(hasattr(splice, 'SPLICE_F_GIFT')) @mock.patch('swift.common.splice.splice._c_splice', None) def test_available(self): '''Test `available` attribute correctness''' self.assertFalse(splice.available) def test_splice_pipe_to_pipe(self): '''Test `splice` from a pipe to a pipe''' with pipe() as (p1a, p1b): with pipe() as (p2a, p2b): os.write(p1b, 'abcdef') res = splice(p1a, None, p2b, None, 3, 0) self.assertEqual(res, (3, None, None)) self.assertEqual(os.read(p2a, 3), 'abc') self.assertEqual(os.read(p1a, 3), 'def') def test_splice_file_to_pipe(self): '''Test `splice` from a file to a pipe''' with tempfile.NamedTemporaryFile(bufsize=0) as fd: with pipe() as (pa, pb): fd.write('abcdef') fd.seek(0, os.SEEK_SET) res = splice(fd, None, pb, None, 3, 0) self.assertEqual(res, (3, None, None)) # `fd.tell()` isn't updated... 
self.assertEqual(os.lseek(fd.fileno(), 0, os.SEEK_CUR), 3) fd.seek(0, os.SEEK_SET) res = splice(fd, 3, pb, None, 3, 0) self.assertEqual(res, (3, 6, None)) self.assertEqual(os.lseek(fd.fileno(), 0, os.SEEK_CUR), 0) self.assertEqual(os.read(pa, 6), 'abcdef') def test_splice_pipe_to_file(self): '''Test `splice` from a pipe to a file''' with tempfile.NamedTemporaryFile(bufsize=0) as fd: with pipe() as (pa, pb): os.write(pb, 'abcdef') res = splice(pa, None, fd, None, 3, 0) self.assertEqual(res, (3, None, None)) self.assertEqual(fd.tell(), 3) fd.seek(0, os.SEEK_SET) res = splice(pa, None, fd, 3, 3, 0) self.assertEqual(res, (3, None, 6)) self.assertEqual(fd.tell(), 0) self.assertEqual(fd.read(6), 'abcdef') @mock.patch.object(splice, '_c_splice') def test_fileno(self, mock_splice): '''Test handling of file-descriptors''' splice(1, None, 2, None, 3, 0) self.assertEqual(mock_splice.call_args, ((1, None, 2, None, 3, 0), {})) mock_splice.reset_mock() with open('/dev/zero', 'r') as fd: splice(fd, None, fd, None, 3, 0) self.assertEqual(mock_splice.call_args, ((fd.fileno(), None, fd.fileno(), None, 3, 0), {})) @mock.patch.object(splice, '_c_splice') def test_flags_list(self, mock_splice): '''Test handling of flag lists''' splice(1, None, 2, None, 3, [splice.SPLICE_F_MOVE, splice.SPLICE_F_NONBLOCK]) flags = splice.SPLICE_F_MOVE | splice.SPLICE_F_NONBLOCK self.assertEqual(mock_splice.call_args, ((1, None, 2, None, 3, flags), {})) mock_splice.reset_mock() splice(1, None, 2, None, 3, []) self.assertEqual(mock_splice.call_args, ((1, None, 2, None, 3, 0), {})) def test_errno(self): '''Test handling of failures''' # Invoke EBADF by using a read-only FD as fd_out with open('/dev/null', 'r') as fd: err = errno.EBADF msg = r'\[Errno %d\] splice: %s' % (err, os.strerror(err)) try: splice(fd, None, fd, None, 3, 0) except IOError as e: self.assertTrue(re.match(msg, str(e))) else: self.fail('Expected IOError was not raised') self.assertEqual(ctypes.get_errno(), 0) @mock.patch('swift.common.splice.splice._c_splice', None) def test_unavailable(self): '''Test exception when unavailable''' self.assertRaises(EnvironmentError, splice, 1, None, 2, None, 2, 0) def test_unavailable_in_libc(self): '''Test `available` attribute when `libc` has no `splice` support''' class LibC(object): '''A fake `libc` object tracking `splice` attribute access''' def __init__(self): self.splice_retrieved = False @property def splice(self): self.splice_retrieved = True raise AttributeError libc = LibC() mock_cdll = mock.Mock(return_value=libc) with mock.patch('ctypes.CDLL', new=mock_cdll): # Force re-construction of a `Splice` instance # Something you're not supposed to do in actual code new_splice = type(splice)() self.assertFalse(new_splice.available) libc_name = ctypes.util.find_library('c') mock_cdll.assert_called_once_with(libc_name, use_errno=True) self.assertTrue(libc.splice_retrieved) class TestTee(unittest.TestCase): '''Tests for `tee`''' def setUp(self): if not tee.available: raise nose.SkipTest('tee not available') @mock.patch('swift.common.splice.tee._c_tee', None) def test_available(self): '''Test `available` attribute correctness''' self.assertFalse(tee.available) def test_tee_pipe_to_pipe(self): '''Test `tee` from a pipe to a pipe''' with pipe() as (p1a, p1b): with pipe() as (p2a, p2b): os.write(p1b, 'abcdef') res = tee(p1a, p2b, 3, 0) self.assertEqual(res, 3) self.assertEqual(os.read(p2a, 3), 'abc') self.assertEqual(os.read(p1a, 6), 'abcdef') @mock.patch.object(tee, '_c_tee') def test_fileno(self, mock_tee): '''Test handling of 
file-descriptors''' with pipe() as (pa, pb): tee(pa, pb, 3, 0) self.assertEqual(mock_tee.call_args, ((pa, pb, 3, 0), {})) mock_tee.reset_mock() tee(os.fdopen(pa, 'r'), os.fdopen(pb, 'w'), 3, 0) self.assertEqual(mock_tee.call_args, ((pa, pb, 3, 0), {})) @mock.patch.object(tee, '_c_tee') def test_flags_list(self, mock_tee): '''Test handling of flag lists''' tee(1, 2, 3, [splice.SPLICE_F_MOVE | splice.SPLICE_F_NONBLOCK]) flags = splice.SPLICE_F_MOVE | splice.SPLICE_F_NONBLOCK self.assertEqual(mock_tee.call_args, ((1, 2, 3, flags), {})) mock_tee.reset_mock() tee(1, 2, 3, []) self.assertEqual(mock_tee.call_args, ((1, 2, 3, 0), {})) def test_errno(self): '''Test handling of failures''' # Invoke EBADF by using a read-only FD as fd_out with open('/dev/null', 'r') as fd: err = errno.EBADF msg = r'\[Errno %d\] tee: %s' % (err, os.strerror(err)) try: tee(fd, fd, 3, 0) except IOError as e: self.assertTrue(re.match(msg, str(e))) else: self.fail('Expected IOError was not raised') self.assertEqual(ctypes.get_errno(), 0) @mock.patch('swift.common.splice.tee._c_tee', None) def test_unavailable(self): '''Test exception when unavailable''' self.assertRaises(EnvironmentError, tee, 1, 2, 2, 0) def test_unavailable_in_libc(self): '''Test `available` attribute when `libc` has no `tee` support''' class LibC(object): '''A fake `libc` object tracking `tee` attribute access''' def __init__(self): self.tee_retrieved = False @property def tee(self): self.tee_retrieved = True raise AttributeError libc = LibC() mock_cdll = mock.Mock(return_value=libc) with mock.patch('ctypes.CDLL', new=mock_cdll): # Force re-construction of a `Tee` instance # Something you're not supposed to do in actual code new_tee = type(tee)() self.assertFalse(new_tee.available) libc_name = ctypes.util.find_library('c') mock_cdll.assert_called_once_with(libc_name, use_errno=True) self.assertTrue(libc.tee_retrieved) swift-2.7.1/test/unit/common/test_manager.py0000664000567000056710000026174413024044354022304 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
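# A brief orientation for the manager tests that follow, inferred from the
# test code itself (a sketch of what these tests rely on, not a full API
# reference): swift.common.manager exposes module-level helpers such as
# setup_env(), watch_server_pids() and safe_kill(), plus a Server class that
# maps a name like 'proxy' or 'object-replicator' to conf files under
# SWIFT_DIR and pid files under RUN_DIR, and offers get_running_pids(),
# signal_pids(), kill_running_pids(), spawn(), wait(), interact() and
# status().  Nothing real is signalled or spawned here: os, resource, time
# and subprocess are swapped for the Mock* stand-ins defined in this file,
# and temptree() provides throwaway SWIFT_DIR/RUN_DIR layouts.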
from __future__ import print_function import unittest from test.unit import temptree import os import sys import resource import signal import errno from collections import defaultdict from time import sleep, time from swift.common import manager from swift.common.exceptions import InvalidPidFileException import eventlet threading = eventlet.patcher.original('threading') DUMMY_SIG = 1 class MockOs(object): RAISE_EPERM_SIG = 99 def __init__(self, pids): self.running_pids = pids self.pid_sigs = defaultdict(list) self.closed_fds = [] self.child_pid = 9999 # fork defaults to test parent process path self.execlp_called = False def kill(self, pid, sig): if sig == self.RAISE_EPERM_SIG: raise OSError(errno.EPERM, 'Operation not permitted') if pid not in self.running_pids: raise OSError(3, 'No such process') self.pid_sigs[pid].append(sig) def __getattr__(self, name): # I only over-ride portions of the os module try: return object.__getattr__(self, name) except AttributeError: return getattr(os, name) def pop_stream(f): """read everything out of file from the top and clear it out """ f.flush() f.seek(0) output = f.read() f.seek(0) f.truncate() return output class TestManagerModule(unittest.TestCase): def test_servers(self): main_plus_rest = set(manager.MAIN_SERVERS + manager.REST_SERVERS) self.assertEqual(set(manager.ALL_SERVERS), main_plus_rest) # make sure there's no server listed in both self.assertEqual(len(main_plus_rest), len(manager.MAIN_SERVERS) + len(manager.REST_SERVERS)) def test_setup_env(self): class MockResource(object): def __init__(self, error=None): self.error = error self.called_with_args = [] def setrlimit(self, resource, limits): if self.error: raise self.error self.called_with_args.append((resource, limits)) def __getattr__(self, name): # I only over-ride portions of the resource module try: return object.__getattr__(self, name) except AttributeError: return getattr(resource, name) _orig_resource = manager.resource _orig_environ = os.environ try: manager.resource = MockResource() manager.os.environ = {} manager.setup_env() expected = [ (resource.RLIMIT_NOFILE, (manager.MAX_DESCRIPTORS, manager.MAX_DESCRIPTORS)), (resource.RLIMIT_DATA, (manager.MAX_MEMORY, manager.MAX_MEMORY)), (resource.RLIMIT_NPROC, (manager.MAX_PROCS, manager.MAX_PROCS)), ] self.assertEqual(manager.resource.called_with_args, expected) self.assertTrue( manager.os.environ['PYTHON_EGG_CACHE'].startswith('/tmp')) # test error condition manager.resource = MockResource(error=ValueError()) manager.os.environ = {} manager.setup_env() self.assertEqual(manager.resource.called_with_args, []) self.assertTrue( manager.os.environ['PYTHON_EGG_CACHE'].startswith('/tmp')) manager.resource = MockResource(error=OSError()) manager.os.environ = {} self.assertRaises(OSError, manager.setup_env) self.assertEqual(manager.os.environ.get('PYTHON_EGG_CACHE'), None) finally: manager.resource = _orig_resource os.environ = _orig_environ def test_command_wrapper(self): @manager.command def myfunc(arg1): """test doc """ return arg1 self.assertEqual(myfunc.__doc__.strip(), 'test doc') self.assertEqual(myfunc(1), 1) self.assertEqual(myfunc(0), 0) self.assertEqual(myfunc(True), 1) self.assertEqual(myfunc(False), 0) self.assertTrue(hasattr(myfunc, 'publicly_accessible')) self.assertTrue(myfunc.publicly_accessible) def test_watch_server_pids(self): class MockOs(object): WNOHANG = os.WNOHANG def __init__(self, pid_map=None): if pid_map is None: pid_map = {} self.pid_map = {} for pid, v in pid_map.items(): self.pid_map[pid] = (x for x in v) def 
waitpid(self, pid, options): try: rv = next(self.pid_map[pid]) except StopIteration: raise OSError(errno.ECHILD, os.strerror(errno.ECHILD)) except KeyError: raise OSError(errno.ESRCH, os.strerror(errno.ESRCH)) if isinstance(rv, Exception): raise rv else: return rv class MockTime(object): def __init__(self, ticks=None): self.tock = time() if not ticks: ticks = [] self.ticks = (t for t in ticks) def time(self): try: self.tock += next(self.ticks) except StopIteration: self.tock += 1 return self.tock def sleep(*args): return class MockServer(object): def __init__(self, pids, run_dir=manager.RUN_DIR, zombie=0): self.heartbeat = (pids for _ in range(zombie)) def get_running_pids(self): try: rv = next(self.heartbeat) return rv except StopIteration: return {} _orig_os = manager.os _orig_time = manager.time _orig_server = manager.Server try: manager.time = MockTime() manager.os = MockOs() # this server always says it's dead when you ask for running pids server = MockServer([1]) # list of pids keyed on servers to watch server_pids = { server: [1], } # basic test, server dies gen = manager.watch_server_pids(server_pids) expected = [(server, 1)] self.assertEqual([x for x in gen], expected) # start long running server and short interval server = MockServer([1], zombie=15) server_pids = { server: [1], } gen = manager.watch_server_pids(server_pids) self.assertEqual([x for x in gen], []) # wait a little longer gen = manager.watch_server_pids(server_pids, interval=15) self.assertEqual([x for x in gen], [(server, 1)]) # zombie process server = MockServer([1], zombie=200) server_pids = { server: [1], } # test weird os error manager.os = MockOs({1: [OSError()]}) gen = manager.watch_server_pids(server_pids) self.assertRaises(OSError, lambda: [x for x in gen]) # test multi-server server1 = MockServer([1, 10], zombie=200) server2 = MockServer([2, 20], zombie=8) server_pids = { server1: [1, 10], server2: [2, 20], } pid_map = { 1: [None for _ in range(10)], 2: [None for _ in range(8)], 20: [None for _ in range(4)], } manager.os = MockOs(pid_map) gen = manager.watch_server_pids(server_pids, interval=manager.KILL_WAIT) expected = [ (server2, 2), (server2, 20), ] self.assertEqual([x for x in gen], expected) finally: manager.os = _orig_os manager.time = _orig_time manager.Server = _orig_server def test_safe_kill(self): manager.os = MockOs([1, 2, 3, 4]) proc_files = ( ('1/cmdline', 'same-procname'), ('2/cmdline', 'another-procname'), ('4/cmdline', 'another-procname'), ) files, contents = zip(*proc_files) with temptree(files, contents) as t: manager.PROC_DIR = t manager.safe_kill(1, signal.SIG_DFL, 'same-procname') self.assertRaises(InvalidPidFileException, manager.safe_kill, 2, signal.SIG_DFL, 'same-procname') manager.safe_kill(3, signal.SIG_DFL, 'same-procname') manager.safe_kill(4, signal.SIGHUP, 'same-procname') def test_exc(self): self.assertTrue(issubclass(manager.UnknownCommandError, Exception)) class TestServer(unittest.TestCase): def tearDown(self): reload(manager) def join_swift_dir(self, path): return os.path.join(manager.SWIFT_DIR, path) def join_run_dir(self, path): return os.path.join(manager.RUN_DIR, path) def test_create_server(self): server = manager.Server('proxy') self.assertEqual(server.server, 'proxy-server') self.assertEqual(server.type, 'proxy') self.assertEqual(server.cmd, 'swift-proxy-server') server = manager.Server('object-replicator') self.assertEqual(server.server, 'object-replicator') self.assertEqual(server.type, 'object') self.assertEqual(server.cmd, 'swift-object-replicator') def 
test_server_to_string(self): server = manager.Server('Proxy') self.assertEqual(str(server), 'proxy-server') server = manager.Server('object-replicator') self.assertEqual(str(server), 'object-replicator') def test_server_repr(self): server = manager.Server('proxy') self.assertTrue(server.__class__.__name__ in repr(server)) self.assertTrue(str(server) in repr(server)) def test_server_equality(self): server1 = manager.Server('Proxy') server2 = manager.Server('proxy-server') self.assertEqual(server1, server2) # it is NOT a string self.assertNotEqual(server1, 'proxy-server') def test_get_pid_file_name(self): server = manager.Server('proxy') conf_file = self.join_swift_dir('proxy-server.conf') pid_file = self.join_run_dir('proxy-server.pid') self.assertEqual(pid_file, server.get_pid_file_name(conf_file)) server = manager.Server('object-replicator') conf_file = self.join_swift_dir('object-server/1.conf') pid_file = self.join_run_dir('object-replicator/1.pid') self.assertEqual(pid_file, server.get_pid_file_name(conf_file)) server = manager.Server('container-auditor') conf_file = self.join_swift_dir( 'container-server/1/container-auditor.conf') pid_file = self.join_run_dir( 'container-auditor/1/container-auditor.pid') self.assertEqual(pid_file, server.get_pid_file_name(conf_file)) def test_get_custom_pid_file_name(self): random_run_dir = "/random/dir" get_random_run_dir = lambda x: os.path.join(random_run_dir, x) server = manager.Server('proxy', run_dir=random_run_dir) conf_file = self.join_swift_dir('proxy-server.conf') pid_file = get_random_run_dir('proxy-server.pid') self.assertEqual(pid_file, server.get_pid_file_name(conf_file)) server = manager.Server('object-replicator', run_dir=random_run_dir) conf_file = self.join_swift_dir('object-server/1.conf') pid_file = get_random_run_dir('object-replicator/1.pid') self.assertEqual(pid_file, server.get_pid_file_name(conf_file)) server = manager.Server('container-auditor', run_dir=random_run_dir) conf_file = self.join_swift_dir( 'container-server/1/container-auditor.conf') pid_file = get_random_run_dir( 'container-auditor/1/container-auditor.pid') self.assertEqual(pid_file, server.get_pid_file_name(conf_file)) def test_get_conf_file_name(self): server = manager.Server('proxy') conf_file = self.join_swift_dir('proxy-server.conf') pid_file = self.join_run_dir('proxy-server.pid') self.assertEqual(conf_file, server.get_conf_file_name(pid_file)) server = manager.Server('object-replicator') conf_file = self.join_swift_dir('object-server/1.conf') pid_file = self.join_run_dir('object-replicator/1.pid') self.assertEqual(conf_file, server.get_conf_file_name(pid_file)) server = manager.Server('container-auditor') conf_file = self.join_swift_dir( 'container-server/1/container-auditor.conf') pid_file = self.join_run_dir( 'container-auditor/1/container-auditor.pid') self.assertEqual(conf_file, server.get_conf_file_name(pid_file)) server_name = manager.STANDALONE_SERVERS[0] server = manager.Server(server_name) conf_file = self.join_swift_dir(server_name + '.conf') pid_file = self.join_run_dir(server_name + '.pid') self.assertEqual(conf_file, server.get_conf_file_name(pid_file)) def test_conf_files(self): # test get single conf file conf_files = ( 'proxy-server.conf', 'proxy-server.ini', 'auth-server.conf', ) with temptree(conf_files) as t: manager.SWIFT_DIR = t server = manager.Server('proxy') conf_files = server.conf_files() self.assertEqual(len(conf_files), 1) conf_file = conf_files[0] proxy_conf = self.join_swift_dir('proxy-server.conf') self.assertEqual(conf_file, 
proxy_conf) # test multi server conf files & grouping of server-type config conf_files = ( 'object-server1.conf', 'object-server/2.conf', 'object-server/object3.conf', 'object-server/conf/server4.conf', 'object-server.txt', 'proxy-server.conf', ) with temptree(conf_files) as t: manager.SWIFT_DIR = t server = manager.Server('object-replicator') conf_files = server.conf_files() self.assertEqual(len(conf_files), 4) c1 = self.join_swift_dir('object-server1.conf') c2 = self.join_swift_dir('object-server/2.conf') c3 = self.join_swift_dir('object-server/object3.conf') c4 = self.join_swift_dir('object-server/conf/server4.conf') for c in [c1, c2, c3, c4]: self.assertTrue(c in conf_files) # test configs returned sorted sorted_confs = sorted([c1, c2, c3, c4]) self.assertEqual(conf_files, sorted_confs) # test get single numbered conf conf_files = ( 'account-server/1.conf', 'account-server/2.conf', 'account-server/3.conf', 'account-server/4.conf', ) with temptree(conf_files) as t: manager.SWIFT_DIR = t server = manager.Server('account') conf_files = server.conf_files(number=2) self.assertEqual(len(conf_files), 1) conf_file = conf_files[0] self.assertEqual(conf_file, self.join_swift_dir('account-server/2.conf')) # test missing config number conf_files = server.conf_files(number=5) self.assertFalse(conf_files) # test getting specific conf conf_files = ( 'account-server/1.conf', 'account-server/2.conf', 'account-server/3.conf', 'account-server/4.conf', ) with temptree(conf_files) as t: manager.SWIFT_DIR = t server = manager.Server('account.2') conf_files = server.conf_files() self.assertEqual(len(conf_files), 1) conf_file = conf_files[0] self.assertEqual(conf_file, self.join_swift_dir('account-server/2.conf')) # test verbose & quiet conf_files = ( 'auth-server.ini', 'container-server/1.conf', ) with temptree(conf_files) as t: manager.SWIFT_DIR = t old_stdout = sys.stdout try: with open(os.path.join(t, 'output'), 'w+') as f: sys.stdout = f server = manager.Server('auth') # check warn "unable to locate" conf_files = server.conf_files() self.assertFalse(conf_files) self.assertTrue('unable to locate config for auth' in pop_stream(f).lower()) # check quiet will silence warning conf_files = server.conf_files(verbose=True, quiet=True) self.assertEqual(pop_stream(f), '') # check found config no warning server = manager.Server('container-auditor') conf_files = server.conf_files() self.assertEqual(pop_stream(f), '') # check missing config number warn "unable to locate" conf_files = server.conf_files(number=2) self.assertTrue( 'unable to locate config number 2 for ' + 'container-auditor' in pop_stream(f).lower()) # check verbose lists configs conf_files = server.conf_files(number=2, verbose=True) c1 = self.join_swift_dir('container-server/1.conf') self.assertTrue(c1 in pop_stream(f)) finally: sys.stdout = old_stdout # test standalone conf file server_name = manager.STANDALONE_SERVERS[0] conf_files = (server_name + '.conf',) with temptree(conf_files) as t: manager.SWIFT_DIR = t server = manager.Server(server_name) conf_files = server.conf_files() self.assertEqual(len(conf_files), 1) conf_file = conf_files[0] conf = self.join_swift_dir(server_name + '.conf') self.assertEqual(conf_file, conf) def test_proxy_conf_dir(self): conf_files = ( 'proxy-server.conf.d/00.conf', 'proxy-server.conf.d/01.conf', ) with temptree(conf_files) as t: manager.SWIFT_DIR = t server = manager.Server('proxy') conf_dirs = server.conf_files() self.assertEqual(len(conf_dirs), 1) conf_dir = conf_dirs[0] proxy_conf_dir = 
self.join_swift_dir('proxy-server.conf.d') self.assertEqual(proxy_conf_dir, conf_dir) def test_named_conf_dir(self): conf_files = ( 'object-server/base.conf-template', 'object-server/object-server.conf.d/00_base.conf', 'object-server/object-server.conf.d/10_server.conf', 'object-server/object-replication.conf.d/00_base.conf', 'object-server/object-replication.conf.d/10_server.conf', ) with temptree(conf_files) as t: manager.SWIFT_DIR = t server = manager.Server('object.replication') conf_dirs = server.conf_files() self.assertEqual(len(conf_dirs), 1) conf_dir = conf_dirs[0] replication_server_conf_dir = self.join_swift_dir( 'object-server/object-replication.conf.d') self.assertEqual(replication_server_conf_dir, conf_dir) # and again with no named filter server = manager.Server('object') conf_dirs = server.conf_files() self.assertEqual(len(conf_dirs), 2) for named_conf in ('server', 'replication'): conf_dir = self.join_swift_dir( 'object-server/object-%s.conf.d' % named_conf) self.assertTrue(conf_dir in conf_dirs) def test_conf_dir(self): conf_files = ( 'object-server/object-server.conf-base', 'object-server/1.conf.d/base.conf', 'object-server/1.conf.d/1.conf', 'object-server/2.conf.d/base.conf', 'object-server/2.conf.d/2.conf', 'object-server/3.conf.d/base.conf', 'object-server/3.conf.d/3.conf', 'object-server/4.conf.d/base.conf', 'object-server/4.conf.d/4.conf', ) with temptree(conf_files) as t: manager.SWIFT_DIR = t server = manager.Server('object-replicator') conf_dirs = server.conf_files() self.assertEqual(len(conf_dirs), 4) c1 = self.join_swift_dir('object-server/1.conf.d') c2 = self.join_swift_dir('object-server/2.conf.d') c3 = self.join_swift_dir('object-server/3.conf.d') c4 = self.join_swift_dir('object-server/4.conf.d') for c in [c1, c2, c3, c4]: self.assertTrue(c in conf_dirs) # test configs returned sorted sorted_confs = sorted([c1, c2, c3, c4]) self.assertEqual(conf_dirs, sorted_confs) def test_named_conf_dir_pid_files(self): conf_files = ( 'object-server/object-server.pid.d', 'object-server/object-replication.pid.d', ) with temptree(conf_files) as t: manager.RUN_DIR = t server = manager.Server('object.replication', run_dir=t) pid_files = server.pid_files() self.assertEqual(len(pid_files), 1) pid_file = pid_files[0] replication_server_pid = self.join_run_dir( 'object-server/object-replication.pid.d') self.assertEqual(replication_server_pid, pid_file) # and again with no named filter server = manager.Server('object', run_dir=t) pid_files = server.pid_files() self.assertEqual(len(pid_files), 2) for named_pid in ('server', 'replication'): pid_file = self.join_run_dir( 'object-server/object-%s.pid.d' % named_pid) self.assertTrue(pid_file in pid_files) def test_iter_pid_files(self): """ Server.iter_pid_files is kinda boring, test the Server.pid_files stuff here as well """ pid_files = ( ('proxy-server.pid', 1), ('auth-server.pid', 'blah'), ('object-replicator/1.pid', 11), ('object-replicator/2.pid', 12), ) files, contents = zip(*pid_files) with temptree(files, contents) as t: manager.RUN_DIR = t server = manager.Server('proxy', run_dir=t) # test get one file iter = server.iter_pid_files() pid_file, pid = next(iter) self.assertEqual(pid_file, self.join_run_dir('proxy-server.pid')) self.assertEqual(pid, 1) # ... 
and only one file self.assertRaises(StopIteration, iter.next) # test invalid value in pid file server = manager.Server('auth', run_dir=t) pid_file, pid = next(server.iter_pid_files()) self.assertIsNone(pid) # test object-server doesn't steal pids from object-replicator server = manager.Server('object', run_dir=t) self.assertRaises(StopIteration, server.iter_pid_files().next) # test multi-pid iter server = manager.Server('object-replicator', run_dir=t) real_map = { 11: self.join_run_dir('object-replicator/1.pid'), 12: self.join_run_dir('object-replicator/2.pid'), } pid_map = {} for pid_file, pid in server.iter_pid_files(): pid_map[pid] = pid_file self.assertEqual(pid_map, real_map) # test get pid_files by number conf_files = ( 'object-server/1.conf', 'object-server/2.conf', 'object-server/3.conf', 'object-server/4.conf', ) pid_files = ( ('object-server/1.pid', 1), ('object-server/2.pid', 2), ('object-server/5.pid', 5), ) with temptree(conf_files) as swift_dir: manager.SWIFT_DIR = swift_dir files, pids = zip(*pid_files) with temptree(files, pids) as t: manager.RUN_DIR = t server = manager.Server('object', run_dir=t) # test get all pid files real_map = { 1: self.join_run_dir('object-server/1.pid'), 2: self.join_run_dir('object-server/2.pid'), 5: self.join_run_dir('object-server/5.pid'), } pid_map = {} for pid_file, pid in server.iter_pid_files(): pid_map[pid] = pid_file self.assertEqual(pid_map, real_map) # test get pid with matching conf pids = list(server.iter_pid_files(number=2)) self.assertEqual(len(pids), 1) pid_file, pid = pids[0] self.assertEqual(pid, 2) pid_two = self.join_run_dir('object-server/2.pid') self.assertEqual(pid_file, pid_two) # try to iter on a pid number with a matching conf but no pid pids = list(server.iter_pid_files(number=3)) self.assertFalse(pids) # test get pids w/o matching conf pids = list(server.iter_pid_files(number=5)) self.assertFalse(pids) # test get pid_files by conf name conf_files = ( 'object-server/1.conf', 'object-server/2.conf', 'object-server/3.conf', 'object-server/4.conf', ) pid_files = ( ('object-server/1.pid', 1), ('object-server/2.pid', 2), ('object-server/5.pid', 5), ) with temptree(conf_files) as swift_dir: manager.SWIFT_DIR = swift_dir files, pids = zip(*pid_files) with temptree(files, pids) as t: manager.RUN_DIR = t server = manager.Server('object.2', run_dir=t) # test get pid with matching conf pids = list(server.iter_pid_files()) self.assertEqual(len(pids), 1) pid_file, pid = pids[0] self.assertEqual(pid, 2) pid_two = self.join_run_dir('object-server/2.pid') self.assertEqual(pid_file, pid_two) def test_signal_pids(self): temp_files = ( ('var/run/zero-server.pid', 0), ('var/run/proxy-server.pid', 1), ('var/run/auth-server.pid', 2), ('var/run/one-server.pid', 3), ('var/run/object-server.pid', 4), ('var/run/invalid-server.pid', 'Forty-Two'), ('proc/3/cmdline', 'swift-another-server') ) with temptree(*zip(*temp_files)) as t: manager.RUN_DIR = os.path.join(t, 'var/run') manager.PROC_DIR = os.path.join(t, 'proc') # mock os with so both the first and second are running manager.os = MockOs([1, 2]) server = manager.Server('proxy', run_dir=manager.RUN_DIR) pids = server.signal_pids(DUMMY_SIG) self.assertEqual(len(pids), 1) self.assertTrue(1 in pids) self.assertEqual(manager.os.pid_sigs[1], [DUMMY_SIG]) # make sure other process not signaled self.assertFalse(2 in pids) self.assertFalse(2 in manager.os.pid_sigs) # capture stdio old_stdout = sys.stdout try: with open(os.path.join(t, 'output'), 'w+') as f: sys.stdout = f # test print details pids = 
server.signal_pids(DUMMY_SIG) output = pop_stream(f) self.assertTrue('pid: %s' % 1 in output) self.assertTrue('signal: %s' % DUMMY_SIG in output) # test no details on signal.SIG_DFL pids = server.signal_pids(signal.SIG_DFL) self.assertEqual(pop_stream(f), '') # reset mock os so only the second server is running manager.os = MockOs([2]) # test pid not running pids = server.signal_pids(signal.SIG_DFL) self.assertTrue(1 not in pids) self.assertTrue(1 not in manager.os.pid_sigs) # test remove stale pid file self.assertFalse(os.path.exists( self.join_run_dir('proxy-server.pid'))) # reset mock os with no running pids manager.os = MockOs([]) server = manager.Server('auth', run_dir=manager.RUN_DIR) # test verbose warns on removing stale pid file pids = server.signal_pids(signal.SIG_DFL, verbose=True) output = pop_stream(f) self.assertTrue('stale pid' in output.lower()) auth_pid = self.join_run_dir('auth-server.pid') self.assertTrue(auth_pid in output) # reset mock os so only the third server is running manager.os = MockOs([3]) server = manager.Server('one', run_dir=manager.RUN_DIR) # test verbose warns on removing invalid pid file pids = server.signal_pids(signal.SIG_DFL, verbose=True) output = pop_stream(f) old_stdout.write('output %s' % output) self.assertTrue('removing pid file' in output.lower()) one_pid = self.join_run_dir('one-server.pid') self.assertTrue(one_pid in output) server = manager.Server('zero', run_dir=manager.RUN_DIR) self.assertTrue(os.path.exists( self.join_run_dir('zero-server.pid'))) # sanity # test verbose warns on removing pid file with invalid pid pids = server.signal_pids(signal.SIG_DFL, verbose=True) output = pop_stream(f) old_stdout.write('output %s' % output) self.assertTrue('with invalid pid' in output.lower()) self.assertFalse(os.path.exists( self.join_run_dir('zero-server.pid'))) server = manager.Server('invalid-server', run_dir=manager.RUN_DIR) self.assertTrue(os.path.exists( self.join_run_dir('invalid-server.pid'))) # sanity # test verbose warns on removing pid file with invalid pid pids = server.signal_pids(signal.SIG_DFL, verbose=True) output = pop_stream(f) old_stdout.write('output %s' % output) self.assertTrue('with invalid pid' in output.lower()) self.assertFalse(os.path.exists( self.join_run_dir('invalid-server.pid'))) # reset mock os with no running pids manager.os = MockOs([]) # test warning with insufficient permissions server = manager.Server('object', run_dir=manager.RUN_DIR) pids = server.signal_pids(manager.os.RAISE_EPERM_SIG) output = pop_stream(f) self.assertTrue('no permission to signal pid 4' in output.lower(), output) finally: sys.stdout = old_stdout def test_get_running_pids(self): # test only gets running pids temp_files = ( ('var/run/test-server1.pid', 1), ('var/run/test-server2.pid', 2), ('var/run/test-server3.pid', 3), ('proc/1/cmdline', 'swift-test-server'), ('proc/3/cmdline', 'swift-another-server') ) with temptree(*zip(*temp_files)) as t: manager.RUN_DIR = os.path.join(t, 'var/run') manager.PROC_DIR = os.path.join(t, 'proc') server = manager.Server( 'test-server', run_dir=manager.RUN_DIR) # mock os, only pid '1' is running manager.os = MockOs([1, 3]) running_pids = server.get_running_pids() self.assertEqual(len(running_pids), 1) self.assertTrue(1 in running_pids) self.assertTrue(2 not in running_pids) self.assertTrue(3 not in running_pids) # test persistent running pid files self.assertTrue(os.path.exists( os.path.join(manager.RUN_DIR, 'test-server1.pid'))) # test clean up stale pids pid_two = self.join_swift_dir('test-server2.pid') 
self.assertFalse(os.path.exists(pid_two)) pid_three = self.join_swift_dir('test-server3.pid') self.assertFalse(os.path.exists(pid_three)) # reset mock os, no pids running manager.os = MockOs([]) running_pids = server.get_running_pids() self.assertFalse(running_pids) # and now all pid files are cleaned out pid_one = self.join_run_dir('test-server1.pid') self.assertFalse(os.path.exists(pid_one)) all_pids = os.listdir(manager.RUN_DIR) self.assertEqual(len(all_pids), 0) # test only get pids for right server pid_files = ( ('thing-doer.pid', 1), ('thing-sayer.pid', 2), ('other-doer.pid', 3), ('other-sayer.pid', 4), ) files, pids = zip(*pid_files) with temptree(files, pids) as t: manager.RUN_DIR = t # all pids are running manager.os = MockOs(pids) server = manager.Server('thing-doer', run_dir=t) running_pids = server.get_running_pids() # only thing-doer.pid, 1 self.assertEqual(len(running_pids), 1) self.assertTrue(1 in running_pids) # no other pids returned for n in (2, 3, 4): self.assertTrue(n not in running_pids) # assert stale pids for other servers ignored manager.os = MockOs([1]) # only thing-doer is running running_pids = server.get_running_pids() for f in ('thing-sayer.pid', 'other-doer.pid', 'other-sayer.pid'): # other server pid files persist self.assertTrue(os.path.exists, os.path.join(t, f)) # verify that servers are in fact not running for server_name in ('thing-sayer', 'other-doer', 'other-sayer'): server = manager.Server(server_name, run_dir=t) running_pids = server.get_running_pids() self.assertFalse(running_pids) # and now all OTHER pid files are cleaned out all_pids = os.listdir(t) self.assertEqual(len(all_pids), 1) self.assertTrue(os.path.exists(os.path.join(t, 'thing-doer.pid'))) def test_kill_running_pids(self): pid_files = ( ('object-server.pid', 1), ('object-replicator1.pid', 11), ('object-replicator2.pid', 12), ) files, running_pids = zip(*pid_files) with temptree(files, running_pids) as t: manager.RUN_DIR = t server = manager.Server('object', run_dir=t) # test no servers running manager.os = MockOs([]) pids = server.kill_running_pids() self.assertFalse(pids, pids) files, running_pids = zip(*pid_files) with temptree(files, running_pids) as t: manager.RUN_DIR = t server.run_dir = t # start up pid manager.os = MockOs([1]) server = manager.Server('object', run_dir=t) # test kill one pid pids = server.kill_running_pids() self.assertEqual(len(pids), 1) self.assertTrue(1 in pids) self.assertEqual(manager.os.pid_sigs[1], [signal.SIGTERM]) # reset os mock manager.os = MockOs([1]) # test shutdown self.assertTrue('object-server' in manager.GRACEFUL_SHUTDOWN_SERVERS) pids = server.kill_running_pids(graceful=True) self.assertEqual(len(pids), 1) self.assertTrue(1 in pids) self.assertEqual(manager.os.pid_sigs[1], [signal.SIGHUP]) # start up other servers manager.os = MockOs([11, 12]) # test multi server kill & ignore graceful on unsupported server self.assertFalse('object-replicator' in manager.GRACEFUL_SHUTDOWN_SERVERS) server = manager.Server('object-replicator', run_dir=t) pids = server.kill_running_pids(graceful=True) self.assertEqual(len(pids), 2) for pid in (11, 12): self.assertTrue(pid in pids) self.assertEqual(manager.os.pid_sigs[pid], [signal.SIGTERM]) # and the other pid is of course not signaled self.assertTrue(1 not in manager.os.pid_sigs) def test_status(self): conf_files = ( 'test-server/1.conf', 'test-server/2.conf', 'test-server/3.conf', 'test-server/4.conf', ) pid_files = ( ('test-server/1.pid', 1), ('test-server/2.pid', 2), ('test-server/3.pid', 3), 
('test-server/4.pid', 4), ) with temptree(conf_files) as swift_dir: manager.SWIFT_DIR = swift_dir files, pids = zip(*pid_files) with temptree(files, pids) as t: manager.RUN_DIR = t # setup running servers server = manager.Server('test', run_dir=t) # capture stdio old_stdout = sys.stdout try: with open(os.path.join(t, 'output'), 'w+') as f: sys.stdout = f # test status for all running manager.os = MockOs(pids) proc_files = ( ('1/cmdline', 'swift-test-server'), ('2/cmdline', 'swift-test-server'), ('3/cmdline', 'swift-test-server'), ('4/cmdline', 'swift-test-server'), ) files, contents = zip(*proc_files) with temptree(files, contents) as t: manager.PROC_DIR = t self.assertEqual(server.status(), 0) output = pop_stream(f).strip().splitlines() self.assertEqual(len(output), 4) for line in output: self.assertTrue('test-server running' in line) # test get single server by number with temptree([], []) as t: manager.PROC_DIR = t self.assertEqual(server.status(number=4), 0) output = pop_stream(f).strip().splitlines() self.assertEqual(len(output), 1) line = output[0] self.assertTrue('test-server running' in line) conf_four = self.join_swift_dir(conf_files[3]) self.assertTrue('4 - %s' % conf_four in line) # test some servers not running manager.os = MockOs([1, 2, 3]) proc_files = ( ('1/cmdline', 'swift-test-server'), ('2/cmdline', 'swift-test-server'), ('3/cmdline', 'swift-test-server'), ) files, contents = zip(*proc_files) with temptree(files, contents) as t: manager.PROC_DIR = t self.assertEqual(server.status(), 0) output = pop_stream(f).strip().splitlines() self.assertEqual(len(output), 3) for line in output: self.assertTrue('test-server running' in line) # test single server not running manager.os = MockOs([1, 2]) proc_files = ( ('1/cmdline', 'swift-test-server'), ('2/cmdline', 'swift-test-server'), ) files, contents = zip(*proc_files) with temptree(files, contents) as t: manager.PROC_DIR = t self.assertEqual(server.status(number=3), 1) output = pop_stream(f).strip().splitlines() self.assertEqual(len(output), 1) line = output[0] self.assertTrue('not running' in line) conf_three = self.join_swift_dir(conf_files[2]) self.assertTrue(conf_three in line) # test no running pids manager.os = MockOs([]) with temptree([], []) as t: manager.PROC_DIR = t self.assertEqual(server.status(), 1) output = pop_stream(f).lower() self.assertTrue('no test-server running' in output) # test use provided pids pids = { 1: '1.pid', 2: '2.pid', } # shouldn't call get_running_pids called = [] def mock(*args, **kwargs): called.append(True) server.get_running_pids = mock status = server.status(pids=pids) self.assertEqual(status, 0) self.assertFalse(called) output = pop_stream(f).strip().splitlines() self.assertEqual(len(output), 2) for line in output: self.assertTrue('test-server running' in line) finally: sys.stdout = old_stdout def test_spawn(self): # mocks class MockProcess(object): NOTHING = 'default besides None' STDOUT = 'stdout' PIPE = 'pipe' def __init__(self, pids=None): if pids is None: pids = [] self.pids = (p for p in pids) def Popen(self, args, **kwargs): return MockProc(next(self.pids), args, **kwargs) class MockProc(object): def __init__(self, pid, args, stdout=MockProcess.NOTHING, stderr=MockProcess.NOTHING): self.pid = pid self.args = args self.stdout = stdout if stderr == MockProcess.STDOUT: self.stderr = self.stdout else: self.stderr = stderr # setup running servers server = manager.Server('test') with temptree(['test-server.conf']) as swift_dir: manager.SWIFT_DIR = swift_dir with temptree([]) as t: 
manager.RUN_DIR = t server.run_dir = t old_subprocess = manager.subprocess try: # test single server process calls spawn once manager.subprocess = MockProcess([1]) conf_file = self.join_swift_dir('test-server.conf') # spawn server no kwargs server.spawn(conf_file) # test pid file pid_file = self.join_run_dir('test-server.pid') self.assertTrue(os.path.exists(pid_file)) pid_on_disk = int(open(pid_file).read().strip()) self.assertEqual(pid_on_disk, 1) # assert procs args self.assertTrue(server.procs) self.assertEqual(len(server.procs), 1) proc = server.procs[0] expected_args = [ 'swift-test-server', conf_file, ] self.assertEqual(proc.args, expected_args) # assert stdout is piped self.assertEqual(proc.stdout, MockProcess.PIPE) self.assertEqual(proc.stderr, proc.stdout) # test multi server process calls spawn multiple times manager.subprocess = MockProcess([11, 12, 13, 14]) conf1 = self.join_swift_dir('test-server/1.conf') conf2 = self.join_swift_dir('test-server/2.conf') conf3 = self.join_swift_dir('test-server/3.conf') conf4 = self.join_swift_dir('test-server/4.conf') server = manager.Server('test', run_dir=t) # test server run once server.spawn(conf1, once=True) self.assertTrue(server.procs) self.assertEqual(len(server.procs), 1) proc = server.procs[0] expected_args = ['swift-test-server', conf1, 'once'] # assert stdout is piped self.assertEqual(proc.stdout, MockProcess.PIPE) self.assertEqual(proc.stderr, proc.stdout) # test server not daemon server.spawn(conf2, daemon=False) self.assertTrue(server.procs) self.assertEqual(len(server.procs), 2) proc = server.procs[1] expected_args = ['swift-test-server', conf2, 'verbose'] self.assertEqual(proc.args, expected_args) # assert stdout is not changed self.assertEqual(proc.stdout, None) self.assertEqual(proc.stderr, None) # test server wait server.spawn(conf3, wait=False) self.assertTrue(server.procs) self.assertEqual(len(server.procs), 3) proc = server.procs[2] # assert stdout is /dev/null self.assertTrue(isinstance(proc.stdout, file)) self.assertEqual(proc.stdout.name, os.devnull) self.assertEqual(proc.stdout.mode, 'w+b') self.assertEqual(proc.stderr, proc.stdout) # test not daemon over-rides wait server.spawn(conf4, wait=False, daemon=False, once=True) self.assertTrue(server.procs) self.assertEqual(len(server.procs), 4) proc = server.procs[3] expected_args = ['swift-test-server', conf4, 'once', 'verbose'] self.assertEqual(proc.args, expected_args) # daemon behavior should trump wait, once shouldn't matter self.assertEqual(proc.stdout, None) self.assertEqual(proc.stderr, None) # assert pids for i, proc in enumerate(server.procs): pid_file = self.join_run_dir('test-server/%d.pid' % (i + 1)) pid_on_disk = int(open(pid_file).read().strip()) self.assertEqual(pid_on_disk, proc.pid) finally: manager.subprocess = old_subprocess def test_wait(self): server = manager.Server('test') self.assertEqual(server.wait(), 0) class MockProcess(threading.Thread): def __init__(self, delay=0.1, fail_to_start=False): threading.Thread.__init__(self) # setup pipe rfd, wfd = os.pipe() # subprocess connection to read stdout self.stdout = os.fdopen(rfd) # real process connection to write stdout self._stdout = os.fdopen(wfd, 'w') self.delay = delay self.finished = False self.returncode = None if fail_to_start: self._returncode = 1 self.run = self.fail else: self._returncode = 0 def __enter__(self): self.start() return self def __exit__(self, *args): if self.isAlive(): self.join() def close_stdout(self): self._stdout.flush() with open(os.devnull, 'wb') as nullfile: try: 
os.dup2(nullfile.fileno(), self._stdout.fileno()) except OSError: pass def fail(self): print('mock process started', file=self._stdout) sleep(self.delay) # perform setup processing print('mock process failed to start', file=self._stdout) self.close_stdout() def poll(self): self.returncode = self._returncode return self.returncode or None def run(self): print('mock process started', file=self._stdout) sleep(self.delay) # perform setup processing print('setup complete!', file=self._stdout) self.close_stdout() sleep(self.delay) # do some more processing print('mock process finished', file=self._stdout) self.finished = True class MockTime(object): def time(self): return time() def sleep(self, *args, **kwargs): pass with temptree([]) as t: old_stdout = sys.stdout old_wait = manager.WARNING_WAIT old_time = manager.time try: manager.WARNING_WAIT = 0.01 manager.time = MockTime() with open(os.path.join(t, 'output'), 'w+') as f: # actually capture the read stdout (for prints) sys.stdout = f # test closing pipe in subprocess unblocks read with MockProcess() as proc: server.procs = [proc] status = server.wait() self.assertEqual(status, 0) # wait should return before process exits self.assertTrue(proc.isAlive()) self.assertFalse(proc.finished) self.assertTrue(proc.finished) # make sure it did finish # test output kwarg prints subprocess output with MockProcess() as proc: server.procs = [proc] status = server.wait(output=True) output = pop_stream(f) self.assertTrue('mock process started' in output) self.assertTrue('setup complete' in output) # make sure we don't get prints after stdout was closed self.assertTrue('mock process finished' not in output) # test process which fails to start with MockProcess(fail_to_start=True) as proc: server.procs = [proc] status = server.wait() self.assertEqual(status, 1) self.assertTrue('failed' in pop_stream(f)) # test multiple procs procs = [MockProcess(delay=.5) for i in range(3)] for proc in procs: proc.start() server.procs = procs status = server.wait() self.assertEqual(status, 0) for proc in procs: self.assertTrue(proc.isAlive()) for proc in procs: proc.join() finally: sys.stdout = old_stdout manager.WARNING_WAIT = old_wait manager.time = old_time def test_interact(self): class MockProcess(object): def __init__(self, fail=False): self.returncode = None if fail: self._returncode = 1 else: self._returncode = 0 def communicate(self): self.returncode = self._returncode return '', '' server = manager.Server('test') server.procs = [MockProcess()] self.assertEqual(server.interact(), 0) server.procs = [MockProcess(fail=True)] self.assertEqual(server.interact(), 1) procs = [] for fail in (False, True, True): procs.append(MockProcess(fail=fail)) server.procs = procs self.assertTrue(server.interact() > 0) def test_launch(self): # stubs conf_files = ( 'proxy-server.conf', 'auth-server.conf', 'object-server/1.conf', 'object-server/2.conf', 'object-server/3.conf', 'object-server/4.conf', ) pid_files = ( ('proxy-server.pid', 1), ('proxy-server/2.pid', 2), ) # mocks class MockSpawn(object): def __init__(self, pids=None): self.conf_files = [] self.kwargs = [] if not pids: def one_forever(): while True: yield 1 self.pids = one_forever() else: self.pids = (x for x in pids) def __call__(self, conf_file, **kwargs): self.conf_files.append(conf_file) self.kwargs.append(kwargs) rv = next(self.pids) if isinstance(rv, Exception): raise rv else: return rv with temptree(conf_files) as swift_dir: manager.SWIFT_DIR = swift_dir files, pids = zip(*pid_files) with temptree(files, pids) as t: 
manager.RUN_DIR = t old_stdout = sys.stdout try: with open(os.path.join(t, 'output'), 'w+') as f: sys.stdout = f # can't start server w/o an conf server = manager.Server('test', run_dir=t) self.assertFalse(server.launch()) # start mock os running all pids manager.os = MockOs(pids) proc_files = ( ('1/cmdline', 'swift-proxy-server'), ('2/cmdline', 'swift-proxy-server'), ) files, contents = zip(*proc_files) with temptree(files, contents) as proc_dir: manager.PROC_DIR = proc_dir server = manager.Server('proxy', run_dir=t) # can't start server if it's already running self.assertFalse(server.launch()) output = pop_stream(f) self.assertTrue('running' in output) conf_file = self.join_swift_dir( 'proxy-server.conf') self.assertTrue(conf_file in output) pid_file = self.join_run_dir('proxy-server/2.pid') self.assertTrue(pid_file in output) self.assertTrue('already started' in output) # no running pids manager.os = MockOs([]) with temptree([], []) as proc_dir: manager.PROC_DIR = proc_dir # test ignore once for non-start-once server mock_spawn = MockSpawn([1]) server.spawn = mock_spawn conf_file = self.join_swift_dir( 'proxy-server.conf') expected = { 1: conf_file, } self.assertEqual(server.launch(once=True), expected) self.assertEqual(mock_spawn.conf_files, [conf_file]) expected = { 'once': False, } self.assertEqual(mock_spawn.kwargs, [expected]) output = pop_stream(f) self.assertTrue('Starting' in output) self.assertTrue('once' not in output) # test multi-server kwarg once server = manager.Server('object-replicator') with temptree([], []) as proc_dir: manager.PROC_DIR = proc_dir mock_spawn = MockSpawn([1, 2, 3, 4]) server.spawn = mock_spawn conf1 = self.join_swift_dir('object-server/1.conf') conf2 = self.join_swift_dir('object-server/2.conf') conf3 = self.join_swift_dir('object-server/3.conf') conf4 = self.join_swift_dir('object-server/4.conf') expected = { 1: conf1, 2: conf2, 3: conf3, 4: conf4, } self.assertEqual(server.launch(once=True), expected) self.assertEqual(mock_spawn.conf_files, [ conf1, conf2, conf3, conf4]) expected = { 'once': True, } self.assertEqual(len(mock_spawn.kwargs), 4) for kwargs in mock_spawn.kwargs: self.assertEqual(kwargs, expected) # test number kwarg mock_spawn = MockSpawn([4]) manager.PROC_DIR = proc_dir server.spawn = mock_spawn expected = { 4: conf4, } self.assertEqual(server.launch(number=4), expected) self.assertEqual(mock_spawn.conf_files, [conf4]) expected = { 'number': 4 } self.assertEqual(mock_spawn.kwargs, [expected]) # test cmd does not exist server = manager.Server('auth') with temptree([], []) as proc_dir: manager.PROC_DIR = proc_dir mock_spawn = MockSpawn([OSError(errno.ENOENT, 'blah')]) server.spawn = mock_spawn self.assertEqual(server.launch(), {}) self.assertTrue( 'swift-auth-server does not exist' in pop_stream(f)) finally: sys.stdout = old_stdout def test_stop(self): conf_files = ( 'account-server/1.conf', 'account-server/2.conf', 'account-server/3.conf', 'account-server/4.conf', ) pid_files = ( ('account-reaper/1.pid', 1), ('account-reaper/2.pid', 2), ('account-reaper/3.pid', 3), ('account-reaper/4.pid', 4), ) with temptree(conf_files) as swift_dir: manager.SWIFT_DIR = swift_dir files, pids = zip(*pid_files) with temptree(files, pids) as t: manager.RUN_DIR = t # start all pids in mock os manager.os = MockOs(pids) server = manager.Server('account-reaper', run_dir=t) # test kill all running pids pids = server.stop() self.assertEqual(len(pids), 4) for pid in (1, 2, 3, 4): self.assertTrue(pid in pids) self.assertEqual(manager.os.pid_sigs[pid], 
[signal.SIGTERM]) conf1 = self.join_swift_dir('account-reaper/1.conf') conf2 = self.join_swift_dir('account-reaper/2.conf') conf3 = self.join_swift_dir('account-reaper/3.conf') conf4 = self.join_swift_dir('account-reaper/4.conf') # reset mock os with only 2 running pids manager.os = MockOs([3, 4]) pids = server.stop() self.assertEqual(len(pids), 2) for pid in (3, 4): self.assertTrue(pid in pids) self.assertEqual(manager.os.pid_sigs[pid], [signal.SIGTERM]) self.assertFalse(os.path.exists(conf1)) self.assertFalse(os.path.exists(conf2)) # test number kwarg manager.os = MockOs([3, 4]) pids = server.stop(number=3) self.assertEqual(len(pids), 1) expected = { 3: conf3, } self.assertTrue(pids, expected) self.assertEqual(manager.os.pid_sigs[3], [signal.SIGTERM]) self.assertFalse(os.path.exists(conf4)) self.assertFalse(os.path.exists(conf3)) class TestManager(unittest.TestCase): def test_create(self): m = manager.Manager(['test']) self.assertEqual(len(m.servers), 1) server = m.servers.pop() self.assertTrue(isinstance(server, manager.Server)) self.assertEqual(server.server, 'test-server') # test multi-server and simple dedupe servers = ['object-replicator', 'object-auditor', 'object-replicator'] m = manager.Manager(servers) self.assertEqual(len(m.servers), 2) for server in m.servers: self.assertTrue(server.server in servers) # test all m = manager.Manager(['all']) self.assertEqual(len(m.servers), len(manager.ALL_SERVERS)) for server in m.servers: self.assertTrue(server.server in manager.ALL_SERVERS) # test main m = manager.Manager(['main']) self.assertEqual(len(m.servers), len(manager.MAIN_SERVERS)) for server in m.servers: self.assertTrue(server.server in manager.MAIN_SERVERS) # test rest m = manager.Manager(['rest']) self.assertEqual(len(m.servers), len(manager.REST_SERVERS)) for server in m.servers: self.assertTrue(server.server in manager.REST_SERVERS) # test main + rest == all m = manager.Manager(['main', 'rest']) self.assertEqual(len(m.servers), len(manager.ALL_SERVERS)) for server in m.servers: self.assertTrue(server.server in manager.ALL_SERVERS) # test dedupe m = manager.Manager(['main', 'rest', 'proxy', 'object', 'container', 'account']) self.assertEqual(len(m.servers), len(manager.ALL_SERVERS)) for server in m.servers: self.assertTrue(server.server in manager.ALL_SERVERS) # test glob m = manager.Manager(['object-*']) object_servers = [s for s in manager.ALL_SERVERS if s.startswith('object')] self.assertEqual(len(m.servers), len(object_servers)) for s in m.servers: self.assertTrue(str(s) in object_servers) m = manager.Manager(['*-replicator']) replicators = [s for s in manager.ALL_SERVERS if s.endswith('replicator')] for s in m.servers: self.assertTrue(str(s) in replicators) def test_iter(self): m = manager.Manager(['all']) self.assertEqual(len(list(m)), len(manager.ALL_SERVERS)) for server in m: self.assertTrue(server.server in manager.ALL_SERVERS) def test_default_strict(self): # test default strict m = manager.Manager(['proxy']) self.assertEqual(m._default_strict, True) # aliases m = manager.Manager(['main']) self.assertEqual(m._default_strict, False) m = manager.Manager(['proxy*']) self.assertEqual(m._default_strict, False) def test_status(self): class MockServer(object): def __init__(self, server, run_dir=manager.RUN_DIR): self.server = server self.called_kwargs = [] def status(self, **kwargs): self.called_kwargs.append(kwargs) if 'error' in self.server: return 1 else: return 0 old_server_class = manager.Server try: manager.Server = MockServer m = manager.Manager(['test']) status = 
m.status() self.assertEqual(status, 0) m = manager.Manager(['error']) status = m.status() self.assertEqual(status, 1) # test multi-server m = manager.Manager(['test', 'error']) kwargs = {'key': 'value'} status = m.status(**kwargs) self.assertEqual(status, 1) for server in m.servers: self.assertEqual(server.called_kwargs, [kwargs]) finally: manager.Server = old_server_class def test_start(self): def mock_setup_env(): getattr(mock_setup_env, 'called', []).append(True) class MockServer(object): def __init__(self, server, run_dir=manager.RUN_DIR): self.server = server self.called = defaultdict(list) def launch(self, **kwargs): self.called['launch'].append(kwargs) if 'noconfig' in self.server: return {} elif 'somerunning' in self.server: return {} else: return {1: self.server[0]} def wait(self, **kwargs): self.called['wait'].append(kwargs) return int('error' in self.server) def stop(self, **kwargs): self.called['stop'].append(kwargs) def interact(self, **kwargs): self.called['interact'].append(kwargs) if 'raise' in self.server: raise KeyboardInterrupt elif 'error' in self.server: return 1 else: return 0 old_setup_env = manager.setup_env old_swift_server = manager.Server try: manager.setup_env = mock_setup_env manager.Server = MockServer # test no errors on launch m = manager.Manager(['proxy']) status = m.start() self.assertEqual(status, 0) for server in m.servers: self.assertEqual(server.called['launch'], [{}]) # test error on launch m = manager.Manager(['proxy', 'error']) status = m.start() self.assertEqual(status, 1) for server in m.servers: self.assertEqual(server.called['launch'], [{}]) self.assertEqual(server.called['wait'], [{}]) # test interact m = manager.Manager(['proxy', 'error']) kwargs = {'daemon': False} status = m.start(**kwargs) self.assertEqual(status, 1) for server in m.servers: self.assertEqual(server.called['launch'], [kwargs]) self.assertEqual(server.called['interact'], [kwargs]) m = manager.Manager(['raise']) kwargs = {'daemon': False} status = m.start(**kwargs) # test no config m = manager.Manager(['proxy', 'noconfig']) status = m.start() self.assertEqual(status, 1) for server in m.servers: self.assertEqual(server.called['launch'], [{}]) self.assertEqual(server.called['wait'], [{}]) # test no config with --non-strict m = manager.Manager(['proxy', 'noconfig']) status = m.start(strict=False) self.assertEqual(status, 0) for server in m.servers: self.assertEqual(server.called['launch'], [{'strict': False}]) self.assertEqual(server.called['wait'], [{'strict': False}]) # test no config --strict m = manager.Manager(['proxy', 'noconfig']) status = m.start(strict=True) self.assertEqual(status, 1) for server in m.servers: self.assertEqual(server.called['launch'], [{'strict': True}]) self.assertEqual(server.called['wait'], [{'strict': True}]) # test no config with alias m = manager.Manager(['main', 'noconfig']) status = m.start() self.assertEqual(status, 0) for server in m.servers: self.assertEqual(server.called['launch'], [{}]) self.assertEqual(server.called['wait'], [{}]) # test no config with alias and --non-strict m = manager.Manager(['main', 'noconfig']) status = m.start(strict=False) self.assertEqual(status, 0) for server in m.servers: self.assertEqual(server.called['launch'], [{'strict': False}]) self.assertEqual(server.called['wait'], [{'strict': False}]) # test no config with alias and --strict m = manager.Manager(['main', 'noconfig']) status = m.start(strict=True) self.assertEqual(status, 1) for server in m.servers: self.assertEqual(server.called['launch'], [{'strict': 
True}]) self.assertEqual(server.called['wait'], [{'strict': True}]) # test already all running m = manager.Manager(['proxy', 'somerunning']) status = m.start() self.assertEqual(status, 1) for server in m.servers: self.assertEqual(server.called['launch'], [{}]) self.assertEqual(server.called['wait'], [{}]) # test already all running --non-strict m = manager.Manager(['proxy', 'somerunning']) status = m.start(strict=False) self.assertEqual(status, 0) for server in m.servers: self.assertEqual(server.called['launch'], [{'strict': False}]) self.assertEqual(server.called['wait'], [{'strict': False}]) # test already all running --strict m = manager.Manager(['proxy', 'somerunning']) status = m.start(strict=True) self.assertEqual(status, 1) for server in m.servers: self.assertEqual(server.called['launch'], [{'strict': True}]) self.assertEqual(server.called['wait'], [{'strict': True}]) # test already all running with alias m = manager.Manager(['main', 'somerunning']) status = m.start() self.assertEqual(status, 0) for server in m.servers: self.assertEqual(server.called['launch'], [{}]) self.assertEqual(server.called['wait'], [{}]) # test already all running with alias and --non-strict m = manager.Manager(['main', 'somerunning']) status = m.start(strict=False) self.assertEqual(status, 0) for server in m.servers: self.assertEqual(server.called['launch'], [{'strict': False}]) self.assertEqual(server.called['wait'], [{'strict': False}]) # test already all running with alias and --strict m = manager.Manager(['main', 'somerunning']) status = m.start(strict=True) self.assertEqual(status, 1) for server in m.servers: self.assertEqual(server.called['launch'], [{'strict': True}]) self.assertEqual(server.called['wait'], [{'strict': True}]) finally: manager.setup_env = old_setup_env manager.Server = old_swift_server def test_no_wait(self): class MockServer(object): def __init__(self, server, run_dir=manager.RUN_DIR): self.server = server self.called = defaultdict(list) def launch(self, **kwargs): self.called['launch'].append(kwargs) # must return non-empty dict if launch succeeded return {1: self.server[0]} def wait(self, **kwargs): self.called['wait'].append(kwargs) return int('error' in self.server) orig_swift_server = manager.Server try: manager.Server = MockServer # test success init = manager.Manager(['proxy']) status = init.no_wait() self.assertEqual(status, 0) for server in init.servers: self.assertEqual(len(server.called['launch']), 1) called_kwargs = server.called['launch'][0] self.assertFalse(called_kwargs['wait']) self.assertFalse(server.called['wait']) # test no errocode status even on error init = manager.Manager(['error']) status = init.no_wait() self.assertEqual(status, 0) for server in init.servers: self.assertEqual(len(server.called['launch']), 1) called_kwargs = server.called['launch'][0] self.assertTrue('wait' in called_kwargs) self.assertFalse(called_kwargs['wait']) self.assertFalse(server.called['wait']) # test wait with once option init = manager.Manager(['updater', 'replicator-error']) status = init.no_wait(once=True) self.assertEqual(status, 0) for server in init.servers: self.assertEqual(len(server.called['launch']), 1) called_kwargs = server.called['launch'][0] self.assertTrue('wait' in called_kwargs) self.assertFalse(called_kwargs['wait']) self.assertTrue('once' in called_kwargs) self.assertTrue(called_kwargs['once']) self.assertFalse(server.called['wait']) finally: manager.Server = orig_swift_server def test_no_daemon(self): class MockServer(object): def __init__(self, server, 
run_dir=manager.RUN_DIR): self.server = server self.called = defaultdict(list) def launch(self, **kwargs): self.called['launch'].append(kwargs) # must return non-empty dict if launch succeeded return {1: self.server[0]} def interact(self, **kwargs): self.called['interact'].append(kwargs) return int('error' in self.server) orig_swift_server = manager.Server try: manager.Server = MockServer # test success init = manager.Manager(['proxy']) stats = init.no_daemon() self.assertEqual(stats, 0) # test error init = manager.Manager(['proxy', 'object-error']) stats = init.no_daemon() self.assertEqual(stats, 1) # test once init = manager.Manager(['proxy', 'object-error']) stats = init.no_daemon() for server in init.servers: self.assertEqual(len(server.called['launch']), 1) self.assertEqual(len(server.called['wait']), 0) self.assertEqual(len(server.called['interact']), 1) finally: manager.Server = orig_swift_server def test_once(self): class MockServer(object): def __init__(self, server, run_dir=manager.RUN_DIR): self.server = server self.called = defaultdict(list) def wait(self, **kwargs): self.called['wait'].append(kwargs) if 'error' in self.server: return 1 else: return 0 def launch(self, **kwargs): self.called['launch'].append(kwargs) return {1: 'account-reaper'} orig_swift_server = manager.Server try: manager.Server = MockServer # test no errors init = manager.Manager(['account-reaper']) status = init.once() self.assertEqual(status, 0) # test error code on error init = manager.Manager(['error-reaper']) status = init.once() self.assertEqual(status, 1) for server in init.servers: self.assertEqual(len(server.called['launch']), 1) called_kwargs = server.called['launch'][0] self.assertEqual(called_kwargs, {'once': True}) self.assertEqual(len(server.called['wait']), 1) self.assertEqual(len(server.called['interact']), 0) finally: manager.Server = orig_swift_server def test_stop(self): class MockServerFactory(object): class MockServer(object): def __init__(self, pids, run_dir=manager.RUN_DIR): self.pids = pids def stop(self, **kwargs): return self.pids def status(self, **kwargs): return not self.pids def __init__(self, server_pids, run_dir=manager.RUN_DIR): self.server_pids = server_pids def __call__(self, server, run_dir=manager.RUN_DIR): return MockServerFactory.MockServer(self.server_pids[server]) def mock_watch_server_pids(server_pids, **kwargs): for server, pids in server_pids.items(): for pid in pids: if pid is None: continue yield server, pid def mock_kill_group(pid, sig): self.fail('kill_group should not be called') _orig_server = manager.Server _orig_watch_server_pids = manager.watch_server_pids _orig_kill_group = manager.kill_group try: manager.watch_server_pids = mock_watch_server_pids manager.kill_group = mock_kill_group # test stop one server server_pids = { 'test': {1: "dummy.pid"} } manager.Server = MockServerFactory(server_pids) m = manager.Manager(['test']) status = m.stop() self.assertEqual(status, 0) # test not running server_pids = { 'test': {} } manager.Server = MockServerFactory(server_pids) m = manager.Manager(['test']) status = m.stop() self.assertEqual(status, 1) # test kill not running server_pids = { 'test': {} } manager.Server = MockServerFactory(server_pids) m = manager.Manager(['test']) status = m.kill() self.assertEqual(status, 0) # test won't die server_pids = { 'test': {None: None} } manager.Server = MockServerFactory(server_pids) m = manager.Manager(['test']) status = m.stop() self.assertEqual(status, 1) finally: manager.Server = _orig_server manager.watch_server_pids = 
_orig_watch_server_pids manager.kill_group = _orig_kill_group def test_stop_kill_after_timeout(self): class MockServerFactory(object): class MockServer(object): def __init__(self, pids, run_dir=manager.RUN_DIR): self.pids = pids def stop(self, **kwargs): return self.pids def status(self, **kwargs): return not self.pids def __init__(self, server_pids, run_dir=manager.RUN_DIR): self.server_pids = server_pids def __call__(self, server, run_dir=manager.RUN_DIR): return MockServerFactory.MockServer(self.server_pids[server]) def mock_watch_server_pids(server_pids, **kwargs): for server, pids in server_pids.items(): for pid in pids: if pid is None: continue yield server, pid mock_kill_group_called = [] def mock_kill_group(*args): mock_kill_group_called.append(args) def mock_kill_group_oserr(*args): raise OSError() def mock_kill_group_oserr_ESRCH(*args): raise OSError(errno.ESRCH, 'No such process') _orig_server = manager.Server _orig_watch_server_pids = manager.watch_server_pids _orig_kill_group = manager.kill_group try: manager.watch_server_pids = mock_watch_server_pids manager.kill_group = mock_kill_group # test stop one server server_pids = { 'test': {None: None} } manager.Server = MockServerFactory(server_pids) m = manager.Manager(['test']) status = m.stop(kill_after_timeout=True) self.assertEqual(status, 1) self.assertEqual(mock_kill_group_called, [(None, 9)]) manager.kill_group = mock_kill_group_oserr # test stop one server - OSError server_pids = { 'test': {None: None} } manager.Server = MockServerFactory(server_pids) m = manager.Manager(['test']) with self.assertRaises(OSError): status = m.stop(kill_after_timeout=True) manager.kill_group = mock_kill_group_oserr_ESRCH # test stop one server - OSError: No such process server_pids = { 'test': {None: None} } manager.Server = MockServerFactory(server_pids) m = manager.Manager(['test']) status = m.stop(kill_after_timeout=True) self.assertEqual(status, 1) finally: manager.Server = _orig_server manager.watch_server_pids = _orig_watch_server_pids manager.kill_group = _orig_kill_group # TODO(clayg): more tests def test_shutdown(self): m = manager.Manager(['test']) m.stop_was_called = False def mock_stop(*args, **kwargs): m.stop_was_called = True expected = {'graceful': True} self.assertEqual(kwargs, expected) return 0 m.stop = mock_stop status = m.shutdown() self.assertEqual(status, 0) self.assertEqual(m.stop_was_called, True) def test_restart(self): m = manager.Manager(['test']) m.stop_was_called = False def mock_stop(*args, **kwargs): m.stop_was_called = True return 0 m.start_was_called = False def mock_start(*args, **kwargs): m.start_was_called = True return 0 m.stop = mock_stop m.start = mock_start status = m.restart() self.assertEqual(status, 0) self.assertEqual(m.stop_was_called, True) self.assertEqual(m.start_was_called, True) def test_reload(self): class MockManager(object): called = defaultdict(list) def __init__(self, servers): pass @classmethod def reset_called(cls): cls.called = defaultdict(list) def stop(self, **kwargs): MockManager.called['stop'].append(kwargs) return 0 def start(self, **kwargs): MockManager.called['start'].append(kwargs) return 0 _orig_manager = manager.Manager try: m = _orig_manager(['auth']) for server in m.servers: self.assertTrue(server.server in manager.GRACEFUL_SHUTDOWN_SERVERS) manager.Manager = MockManager status = m.reload() self.assertEqual(status, 0) expected = { 'start': [{'graceful': True}], 'stop': [{'graceful': True}], } self.assertEqual(MockManager.called, expected) # test force graceful 
MockManager.reset_called() m = _orig_manager(['*-server']) self.assertEqual(len(m.servers), 4) for server in m.servers: self.assertTrue(server.server in manager.GRACEFUL_SHUTDOWN_SERVERS) manager.Manager = MockManager status = m.reload(graceful=False) self.assertEqual(status, 0) expected = { 'start': [{'graceful': True}] * 4, 'stop': [{'graceful': True}] * 4, } self.assertEqual(MockManager.called, expected) finally: manager.Manager = _orig_manager def test_force_reload(self): m = manager.Manager(['test']) m.reload_was_called = False def mock_reload(*args, **kwargs): m.reload_was_called = True return 0 m.reload = mock_reload status = m.force_reload() self.assertEqual(status, 0) self.assertEqual(m.reload_was_called, True) def test_get_command(self): m = manager.Manager(['test']) self.assertEqual(m.start, m.get_command('start')) self.assertEqual(m.force_reload, m.get_command('force-reload')) self.assertEqual(m.get_command('force-reload'), m.get_command('force_reload')) self.assertRaises(manager.UnknownCommandError, m.get_command, 'no_command') self.assertRaises(manager.UnknownCommandError, m.get_command, '__init__') def test_list_commands(self): for cmd, help in manager.Manager.list_commands(): method = getattr(manager.Manager, cmd.replace('-', '_'), None) self.assertTrue(method, '%s is not a command' % cmd) self.assertTrue(getattr(method, 'publicly_accessible', False)) self.assertEqual(method.__doc__.strip(), help) def test_run_command(self): m = manager.Manager(['test']) m.cmd_was_called = False def mock_cmd(*args, **kwargs): m.cmd_was_called = True expected = {'kw1': True, 'kw2': False} self.assertEqual(kwargs, expected) return 0 mock_cmd.publicly_accessible = True m.mock_cmd = mock_cmd kwargs = {'kw1': True, 'kw2': False} status = m.run_command('mock_cmd', **kwargs) self.assertEqual(status, 0) self.assertEqual(m.cmd_was_called, True) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/container/0000775000567000056710000000000013024044470017734 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/container/test_backend.py0000664000567000056710000036763413024044354022760 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
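# A minimal usage sketch of the ContainerBroker calls exercised by the tests
# below (illustrative only; the calls, arguments and resulting counts are
# taken directly from test_get_info and test_delete_object, and ':memory:' is
# the same in-memory shortcut the tests use instead of a real db file):
#
#     broker = ContainerBroker(':memory:', account='a', container='c')
#     broker.initialize(Timestamp('1').internal, 0)
#     broker.put_object('o', Timestamp(time()).internal, 123, 'text/plain',
#                       '5af83e3196bf99f440f31f2e1a6c9afe')
#     info = broker.get_info()      # object_count == 1, bytes_used == 123
#     broker.delete_object('o', Timestamp(time()).internal)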
""" Tests for swift.container.backend """ import os import hashlib import unittest from time import sleep, time from uuid import uuid4 import itertools import random from collections import defaultdict from contextlib import contextmanager import sqlite3 import pickle import json from swift.container.backend import ContainerBroker, \ update_new_item_from_existing from swift.common.utils import Timestamp, encode_timestamps from swift.common.storage_policy import POLICIES import mock from test.unit import (patch_policies, with_tempdir, make_timestamp_iter, EMPTY_ETAG) from test.unit.common import test_db class TestContainerBroker(unittest.TestCase): """Tests for ContainerBroker""" def test_creation(self): # Test ContainerBroker.__init__ broker = ContainerBroker(':memory:', account='a', container='c') self.assertEqual(broker.db_file, ':memory:') broker.initialize(Timestamp('1').internal, 0) with broker.get() as conn: curs = conn.cursor() curs.execute('SELECT 1') self.assertEqual(curs.fetchall()[0][0], 1) @patch_policies def test_storage_policy_property(self): ts = (Timestamp(t).internal for t in itertools.count(int(time()))) for policy in POLICIES: broker = ContainerBroker(':memory:', account='a', container='policy_%s' % policy.name) broker.initialize(next(ts), policy.idx) with broker.get() as conn: try: conn.execute('''SELECT storage_policy_index FROM container_stat''') except Exception: is_migrated = False else: is_migrated = True if not is_migrated: # pre spi tests don't set policy on initialize broker.set_storage_policy_index(policy.idx) self.assertEqual(policy.idx, broker.storage_policy_index) # make sure it's cached with mock.patch.object(broker, 'get'): self.assertEqual(policy.idx, broker.storage_policy_index) def test_exception(self): # Test ContainerBroker throwing a conn away after # unhandled exception first_conn = None broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) with broker.get() as conn: first_conn = conn try: with broker.get() as conn: self.assertEqual(first_conn, conn) raise Exception('OMG') except Exception: pass self.assertTrue(broker.conn is None) def test_empty(self): # Test ContainerBroker.empty broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) self.assertTrue(broker.empty()) broker.put_object('o', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') self.assertTrue(not broker.empty()) sleep(.00001) broker.delete_object('o', Timestamp(time()).internal) self.assertTrue(broker.empty()) def test_reclaim(self): broker = ContainerBroker(':memory:', account='test_account', container='test_container') broker.initialize(Timestamp('1').internal, 0) broker.put_object('o', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 0").fetchone()[0], 1) self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 1").fetchone()[0], 0) broker.reclaim(Timestamp(time() - 999).internal, time()) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 0").fetchone()[0], 1) self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 1").fetchone()[0], 0) sleep(.00001) broker.delete_object('o', Timestamp(time()).internal) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 
0").fetchone()[0], 0) self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 1").fetchone()[0], 1) broker.reclaim(Timestamp(time() - 999).internal, time()) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 0").fetchone()[0], 0) self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 1").fetchone()[0], 1) sleep(.00001) broker.reclaim(Timestamp(time()).internal, time()) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 0").fetchone()[0], 0) self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 1").fetchone()[0], 0) # Test the return values of reclaim() broker.put_object('w', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('x', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('y', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('z', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') # Test before deletion broker.reclaim(Timestamp(time()).internal, time()) broker.delete_db(Timestamp(time()).internal) def test_get_info_is_deleted(self): start = int(time()) ts = (Timestamp(t).internal for t in itertools.count(start)) broker = ContainerBroker(':memory:', account='test_account', container='test_container') # create it broker.initialize(next(ts), POLICIES.default.idx) info, is_deleted = broker.get_info_is_deleted() self.assertEqual(is_deleted, broker.is_deleted()) self.assertEqual(is_deleted, False) # sanity self.assertEqual(info, broker.get_info()) self.assertEqual(info['put_timestamp'], Timestamp(start).internal) self.assertTrue(Timestamp(info['created_at']) >= start) self.assertEqual(info['delete_timestamp'], '0') if self.__class__ in (TestContainerBrokerBeforeMetadata, TestContainerBrokerBeforeXSync, TestContainerBrokerBeforeSPI): self.assertEqual(info['status_changed_at'], '0') else: self.assertEqual(info['status_changed_at'], Timestamp(start).internal) # delete it delete_timestamp = next(ts) broker.delete_db(delete_timestamp) info, is_deleted = broker.get_info_is_deleted() self.assertEqual(is_deleted, True) # sanity self.assertEqual(is_deleted, broker.is_deleted()) self.assertEqual(info, broker.get_info()) self.assertEqual(info['put_timestamp'], Timestamp(start).internal) self.assertTrue(Timestamp(info['created_at']) >= start) self.assertEqual(info['delete_timestamp'], delete_timestamp) self.assertEqual(info['status_changed_at'], delete_timestamp) # bring back to life broker.put_object('obj', next(ts), 0, 'text/plain', 'etag', storage_policy_index=broker.storage_policy_index) info, is_deleted = broker.get_info_is_deleted() self.assertEqual(is_deleted, False) # sanity self.assertEqual(is_deleted, broker.is_deleted()) self.assertEqual(info, broker.get_info()) self.assertEqual(info['put_timestamp'], Timestamp(start).internal) self.assertTrue(Timestamp(info['created_at']) >= start) self.assertEqual(info['delete_timestamp'], delete_timestamp) self.assertEqual(info['status_changed_at'], delete_timestamp) def test_delete_object(self): # Test ContainerBroker.delete_object broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) broker.put_object('o', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') with broker.get() as conn: self.assertEqual(conn.execute( 
"SELECT count(*) FROM object " "WHERE deleted = 0").fetchone()[0], 1) self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 1").fetchone()[0], 0) sleep(.00001) broker.delete_object('o', Timestamp(time()).internal) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 0").fetchone()[0], 0) self.assertEqual(conn.execute( "SELECT count(*) FROM object " "WHERE deleted = 1").fetchone()[0], 1) def test_put_object(self): # Test ContainerBroker.put_object broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) # Create initial object timestamp = Timestamp(time()).internal broker.put_object('"{}"', timestamp, 123, 'application/x-test', '5af83e3196bf99f440f31f2e1a6c9afe') with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT size FROM object").fetchone()[0], 123) self.assertEqual(conn.execute( "SELECT content_type FROM object").fetchone()[0], 'application/x-test') self.assertEqual(conn.execute( "SELECT etag FROM object").fetchone()[0], '5af83e3196bf99f440f31f2e1a6c9afe') self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], 0) # Reput same event broker.put_object('"{}"', timestamp, 123, 'application/x-test', '5af83e3196bf99f440f31f2e1a6c9afe') with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT size FROM object").fetchone()[0], 123) self.assertEqual(conn.execute( "SELECT content_type FROM object").fetchone()[0], 'application/x-test') self.assertEqual(conn.execute( "SELECT etag FROM object").fetchone()[0], '5af83e3196bf99f440f31f2e1a6c9afe') self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], 0) # Put new event sleep(.00001) timestamp = Timestamp(time()).internal broker.put_object('"{}"', timestamp, 124, 'application/x-test', 'aa0749bacbc79ec65fe206943d8fe449') with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT size FROM object").fetchone()[0], 124) self.assertEqual(conn.execute( "SELECT content_type FROM object").fetchone()[0], 'application/x-test') self.assertEqual(conn.execute( "SELECT etag FROM object").fetchone()[0], 'aa0749bacbc79ec65fe206943d8fe449') self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], 0) # Put old event otimestamp = Timestamp(float(Timestamp(timestamp)) - 1).internal broker.put_object('"{}"', otimestamp, 124, 'application/x-test', 'aa0749bacbc79ec65fe206943d8fe449') with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT size FROM object").fetchone()[0], 124) self.assertEqual(conn.execute( "SELECT content_type FROM object").fetchone()[0], 'application/x-test') self.assertEqual(conn.execute( "SELECT etag FROM object").fetchone()[0], 'aa0749bacbc79ec65fe206943d8fe449') self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], 0) # Put old 
delete event dtimestamp = Timestamp(float(Timestamp(timestamp)) - 1).internal broker.put_object('"{}"', dtimestamp, 0, '', '', deleted=1) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT size FROM object").fetchone()[0], 124) self.assertEqual(conn.execute( "SELECT content_type FROM object").fetchone()[0], 'application/x-test') self.assertEqual(conn.execute( "SELECT etag FROM object").fetchone()[0], 'aa0749bacbc79ec65fe206943d8fe449') self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], 0) # Put new delete event sleep(.00001) timestamp = Timestamp(time()).internal broker.put_object('"{}"', timestamp, 0, '', '', deleted=1) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], 1) # Put new event sleep(.00001) timestamp = Timestamp(time()).internal broker.put_object('"{}"', timestamp, 123, 'application/x-test', '5af83e3196bf99f440f31f2e1a6c9afe') with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT size FROM object").fetchone()[0], 123) self.assertEqual(conn.execute( "SELECT content_type FROM object").fetchone()[0], 'application/x-test') self.assertEqual(conn.execute( "SELECT etag FROM object").fetchone()[0], '5af83e3196bf99f440f31f2e1a6c9afe') self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], 0) # We'll use this later sleep(.0001) in_between_timestamp = Timestamp(time()).internal # New post event sleep(.0001) previous_timestamp = timestamp timestamp = Timestamp(time()).internal with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], previous_timestamp) self.assertEqual(conn.execute( "SELECT size FROM object").fetchone()[0], 123) self.assertEqual(conn.execute( "SELECT content_type FROM object").fetchone()[0], 'application/x-test') self.assertEqual(conn.execute( "SELECT etag FROM object").fetchone()[0], '5af83e3196bf99f440f31f2e1a6c9afe') self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], 0) # Put event from after last put but before last post timestamp = in_between_timestamp broker.put_object('"{}"', timestamp, 456, 'application/x-test3', '6af83e3196bf99f440f31f2e1a6c9afe') with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT size FROM object").fetchone()[0], 456) self.assertEqual(conn.execute( "SELECT content_type FROM object").fetchone()[0], 'application/x-test3') self.assertEqual(conn.execute( "SELECT etag FROM object").fetchone()[0], '6af83e3196bf99f440f31f2e1a6c9afe') self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], 0) def test_make_tuple_for_pickle(self): record = {'name': 'obj', 'created_at': '1234567890.12345', 'size': 42, 'content_type': 'text/plain', 'etag': 'hash_test', 'deleted': '1', 
'storage_policy_index': '2', 'ctype_timestamp': None, 'meta_timestamp': None} broker = ContainerBroker(':memory:', account='a', container='c') expect = ('obj', '1234567890.12345', 42, 'text/plain', 'hash_test', '1', '2', None, None) result = broker.make_tuple_for_pickle(record) self.assertEqual(expect, result) record['ctype_timestamp'] = '2233445566.00000' expect = ('obj', '1234567890.12345', 42, 'text/plain', 'hash_test', '1', '2', '2233445566.00000', None) result = broker.make_tuple_for_pickle(record) self.assertEqual(expect, result) record['meta_timestamp'] = '5566778899.00000' expect = ('obj', '1234567890.12345', 42, 'text/plain', 'hash_test', '1', '2', '2233445566.00000', '5566778899.00000') result = broker.make_tuple_for_pickle(record) self.assertEqual(expect, result) @with_tempdir def test_load_old_record_from_pending_file(self, tempdir): # Test reading old update record from pending file db_path = os.path.join(tempdir, 'container.db') broker = ContainerBroker(db_path, account='a', container='c') broker.initialize(time(), 0) record = {'name': 'obj', 'created_at': '1234567890.12345', 'size': 42, 'content_type': 'text/plain', 'etag': 'hash_test', 'deleted': '1', 'storage_policy_index': '2', 'ctype_timestamp': None, 'meta_timestamp': None} # sanity check self.assertFalse(os.path.isfile(broker.pending_file)) # simulate existing pending items written with old code, # i.e. without content_type and meta timestamps def old_make_tuple_for_pickle(_, record): return (record['name'], record['created_at'], record['size'], record['content_type'], record['etag'], record['deleted'], record['storage_policy_index']) _new = 'swift.container.backend.ContainerBroker.make_tuple_for_pickle' with mock.patch(_new, old_make_tuple_for_pickle): broker.put_record(dict(record)) self.assertTrue(os.path.getsize(broker.pending_file) > 0) read_items = [] def mock_merge_items(_, item_list, *args): # capture the items read from the pending file read_items.extend(item_list) with mock.patch('swift.container.backend.ContainerBroker.merge_items', mock_merge_items): broker._commit_puts() self.assertEqual(1, len(read_items)) self.assertEqual(record, read_items[0]) self.assertTrue(os.path.getsize(broker.pending_file) == 0) @with_tempdir def test_save_and_load_record_from_pending_file(self, tempdir): db_path = os.path.join(tempdir, 'container.db') broker = ContainerBroker(db_path, account='a', container='c') broker.initialize(time(), 0) record = {'name': 'obj', 'created_at': '1234567890.12345', 'size': 42, 'content_type': 'text/plain', 'etag': 'hash_test', 'deleted': '1', 'storage_policy_index': '2', 'ctype_timestamp': '1234567890.44444', 'meta_timestamp': '1234567890.99999'} # sanity check self.assertFalse(os.path.isfile(broker.pending_file)) broker.put_record(dict(record)) self.assertTrue(os.path.getsize(broker.pending_file) > 0) read_items = [] def mock_merge_items(_, item_list, *args): # capture the items read from the pending file read_items.extend(item_list) with mock.patch('swift.container.backend.ContainerBroker.merge_items', mock_merge_items): broker._commit_puts() self.assertEqual(1, len(read_items)) self.assertEqual(record, read_items[0]) self.assertTrue(os.path.getsize(broker.pending_file) == 0) def _assert_db_row(self, broker, name, timestamp, size, content_type, hash, deleted=0): with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM object").fetchone()[0], name) self.assertEqual(conn.execute( "SELECT created_at FROM object").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT 
size FROM object").fetchone()[0], size) self.assertEqual(conn.execute( "SELECT content_type FROM object").fetchone()[0], content_type) self.assertEqual(conn.execute( "SELECT etag FROM object").fetchone()[0], hash) self.assertEqual(conn.execute( "SELECT deleted FROM object").fetchone()[0], deleted) def _test_put_object_multiple_encoded_timestamps(self, broker): ts = (Timestamp(t) for t in itertools.count(int(time()))) broker.initialize(ts.next().internal, 0) t = [ts.next() for _ in range(9)] # Create initial object broker.put_object('obj_name', t[0].internal, 123, 'application/x-test', '5af83e3196bf99f440f31f2e1a6c9afe') self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t[0].internal, 123, 'application/x-test', '5af83e3196bf99f440f31f2e1a6c9afe') # hash and size change with same data timestamp are ignored t_encoded = encode_timestamps(t[0], t[1], t[1]) broker.put_object('obj_name', t_encoded, 456, 'application/x-test-2', '1234567890abcdeffedcba0987654321') self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 123, 'application/x-test-2', '5af83e3196bf99f440f31f2e1a6c9afe') # content-type change with same timestamp is ignored t_encoded = encode_timestamps(t[0], t[1], t[2]) broker.put_object('obj_name', t_encoded, 456, 'application/x-test-3', '1234567890abcdeffedcba0987654321') self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 123, 'application/x-test-2', '5af83e3196bf99f440f31f2e1a6c9afe') # update with differing newer timestamps t_encoded = encode_timestamps(t[4], t[6], t[8]) broker.put_object('obj_name', t_encoded, 789, 'application/x-test-3', 'abcdef1234567890abcdef1234567890') self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 789, 'application/x-test-3', 'abcdef1234567890abcdef1234567890') # update with differing older timestamps should be ignored t_encoded_older = encode_timestamps(t[3], t[5], t[7]) self.assertEqual(1, len(broker.get_items_since(0, 100))) broker.put_object('obj_name', t_encoded_older, 9999, 'application/x-test-ignored', 'ignored_hash') self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 789, 'application/x-test-3', 'abcdef1234567890abcdef1234567890') def test_put_object_multiple_encoded_timestamps_using_memory(self): # Test ContainerBroker.put_object with differing data, content-type # and metadata timestamps broker = ContainerBroker(':memory:', account='a', container='c') self._test_put_object_multiple_encoded_timestamps(broker) @with_tempdir def test_put_object_multiple_encoded_timestamps_using_file(self, tempdir): # Test ContainerBroker.put_object with differing data, content-type # and metadata timestamps, using file db to ensure that the code paths # to write/read pending file are exercised. 
db_path = os.path.join(tempdir, 'container.db') broker = ContainerBroker(db_path, account='a', container='c') self._test_put_object_multiple_encoded_timestamps(broker) def _test_put_object_multiple_explicit_timestamps(self, broker): ts = (Timestamp(t) for t in itertools.count(int(time()))) broker.initialize(ts.next().internal, 0) t = [ts.next() for _ in range(11)] # Create initial object broker.put_object('obj_name', t[0].internal, 123, 'application/x-test', '5af83e3196bf99f440f31f2e1a6c9afe', ctype_timestamp=None, meta_timestamp=None) self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t[0].internal, 123, 'application/x-test', '5af83e3196bf99f440f31f2e1a6c9afe') # hash and size change with same data timestamp are ignored t_encoded = encode_timestamps(t[0], t[1], t[1]) broker.put_object('obj_name', t[0].internal, 456, 'application/x-test-2', '1234567890abcdeffedcba0987654321', ctype_timestamp=t[1].internal, meta_timestamp=t[1].internal) self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 123, 'application/x-test-2', '5af83e3196bf99f440f31f2e1a6c9afe') # content-type change with same timestamp is ignored t_encoded = encode_timestamps(t[0], t[1], t[2]) broker.put_object('obj_name', t[0].internal, 456, 'application/x-test-3', '1234567890abcdeffedcba0987654321', ctype_timestamp=t[1].internal, meta_timestamp=t[2].internal) self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 123, 'application/x-test-2', '5af83e3196bf99f440f31f2e1a6c9afe') # update with differing newer timestamps t_encoded = encode_timestamps(t[4], t[6], t[8]) broker.put_object('obj_name', t[4].internal, 789, 'application/x-test-3', 'abcdef1234567890abcdef1234567890', ctype_timestamp=t[6].internal, meta_timestamp=t[8].internal) self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 789, 'application/x-test-3', 'abcdef1234567890abcdef1234567890') # update with differing older timestamps should be ignored broker.put_object('obj_name', t[3].internal, 9999, 'application/x-test-ignored', 'ignored_hash', ctype_timestamp=t[5].internal, meta_timestamp=t[7].internal) self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 789, 'application/x-test-3', 'abcdef1234567890abcdef1234567890') # content_type_timestamp == None defaults to data timestamp t_encoded = encode_timestamps(t[9], t[9], t[8]) broker.put_object('obj_name', t[9].internal, 9999, 'application/x-test-new', 'new_hash', ctype_timestamp=None, meta_timestamp=t[7].internal) self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 9999, 'application/x-test-new', 'new_hash') # meta_timestamp == None defaults to data timestamp t_encoded = encode_timestamps(t[9], t[10], t[10]) broker.put_object('obj_name', t[8].internal, 1111, 'application/x-test-newer', 'older_hash', ctype_timestamp=t[10].internal, meta_timestamp=None) self.assertEqual(1, len(broker.get_items_since(0, 100))) self._assert_db_row(broker, 'obj_name', t_encoded, 9999, 'application/x-test-newer', 'new_hash') def test_put_object_multiple_explicit_timestamps_using_memory(self): # Test ContainerBroker.put_object with differing data, content-type # and metadata timestamps passed as explicit args broker = ContainerBroker(':memory:', account='a', container='c') self._test_put_object_multiple_explicit_timestamps(broker) 
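    # A minimal illustrative sketch (not one of the original tests) of the
    # composite timestamp format relied on by the tests above: a data,
    # content-type and metadata timestamp are packed into one string by
    # encode_timestamps().  The local import assumes decode_timestamps() is
    # exported from swift.common.utils alongside encode_timestamps(); the
    # method name is illustrative and is not collected by the test runner.
    def _example_timestamp_encoding_round_trip(self):
        from swift.common.utils import decode_timestamps  # assumed available
        ts = (Timestamp(t) for t in itertools.count(int(time())))
        t_data, t_ctype, t_meta = [next(ts) for _ in range(3)]
        t_encoded = encode_timestamps(t_data, t_ctype, t_meta)
        # the encoded form begins with the data timestamp's internal form
        self.assertTrue(t_encoded.startswith(t_data.internal))
        # and the three component timestamps can be recovered from it
        self.assertEqual(decode_timestamps(t_encoded),
                         (t_data, t_ctype, t_meta))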
@with_tempdir def test_put_object_multiple_explicit_timestamps_using_file(self, tempdir): # Test ContainerBroker.put_object with differing data, content-type # and metadata timestamps passed as explicit args, using file db to # ensure that the code paths to write/read pending file are exercised. db_path = os.path.join(tempdir, 'container.db') broker = ContainerBroker(db_path, account='a', container='c') self._test_put_object_multiple_explicit_timestamps(broker) def test_last_modified_time(self): # Test container listing reports the most recent of data or metadata # timestamp as last-modified time ts = (Timestamp(t) for t in itertools.count(int(time()))) broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(ts.next().internal, 0) # simple 'single' timestamp case t0 = ts.next() broker.put_object('obj1', t0.internal, 0, 'text/plain', 'hash1') listing = broker.list_objects_iter(100, '', None, None, '') self.assertEqual(len(listing), 1) self.assertEqual(listing[0][0], 'obj1') self.assertEqual(listing[0][1], t0.internal) # content-type and metadata are updated at t1 t1 = ts.next() t_encoded = encode_timestamps(t0, t1, t1) broker.put_object('obj1', t_encoded, 0, 'text/plain', 'hash1') listing = broker.list_objects_iter(100, '', None, None, '') self.assertEqual(len(listing), 1) self.assertEqual(listing[0][0], 'obj1') self.assertEqual(listing[0][1], t1.internal) # used later t2 = ts.next() # metadata is updated at t3 t3 = ts.next() t_encoded = encode_timestamps(t0, t1, t3) broker.put_object('obj1', t_encoded, 0, 'text/plain', 'hash1') listing = broker.list_objects_iter(100, '', None, None, '') self.assertEqual(len(listing), 1) self.assertEqual(listing[0][0], 'obj1') self.assertEqual(listing[0][1], t3.internal) # all parts updated at t2, last-modified should remain at t3 t_encoded = encode_timestamps(t2, t2, t2) broker.put_object('obj1', t_encoded, 0, 'text/plain', 'hash1') listing = broker.list_objects_iter(100, '', None, None, '') self.assertEqual(len(listing), 1) self.assertEqual(listing[0][0], 'obj1') self.assertEqual(listing[0][1], t3.internal) # all parts updated at t4, last-modified should be t4 t4 = ts.next() t_encoded = encode_timestamps(t4, t4, t4) broker.put_object('obj1', t_encoded, 0, 'text/plain', 'hash1') listing = broker.list_objects_iter(100, '', None, None, '') self.assertEqual(len(listing), 1) self.assertEqual(listing[0][0], 'obj1') self.assertEqual(listing[0][1], t4.internal) @patch_policies def test_put_misplaced_object_does_not_effect_container_stats(self): policy = random.choice(list(POLICIES)) ts = (Timestamp(t).internal for t in itertools.count(int(time()))) broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(next(ts), policy.idx) # migration tests may not honor policy on initialize if isinstance(self, ContainerBrokerMigrationMixin): real_storage_policy_index = \ broker.get_info()['storage_policy_index'] policy = filter(lambda p: p.idx == real_storage_policy_index, POLICIES)[0] broker.put_object('correct_o', next(ts), 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe', storage_policy_index=policy.idx) info = broker.get_info() self.assertEqual(1, info['object_count']) self.assertEqual(123, info['bytes_used']) other_policy = random.choice([p for p in POLICIES if p is not policy]) broker.put_object('wrong_o', next(ts), 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe', storage_policy_index=other_policy.idx) self.assertEqual(1, info['object_count']) self.assertEqual(123, info['bytes_used']) @patch_policies def 
test_has_multiple_policies(self): policy = random.choice(list(POLICIES)) ts = (Timestamp(t).internal for t in itertools.count(int(time()))) broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(next(ts), policy.idx) # migration tests may not honor policy on initialize if isinstance(self, ContainerBrokerMigrationMixin): real_storage_policy_index = \ broker.get_info()['storage_policy_index'] policy = filter(lambda p: p.idx == real_storage_policy_index, POLICIES)[0] broker.put_object('correct_o', next(ts), 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe', storage_policy_index=policy.idx) self.assertFalse(broker.has_multiple_policies()) other_policy = [p for p in POLICIES if p is not policy][0] broker.put_object('wrong_o', next(ts), 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe', storage_policy_index=other_policy.idx) self.assertTrue(broker.has_multiple_policies()) @patch_policies def test_get_policy_info(self): policy = random.choice(list(POLICIES)) ts = (Timestamp(t).internal for t in itertools.count(int(time()))) broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(next(ts), policy.idx) # migration tests may not honor policy on initialize if isinstance(self, ContainerBrokerMigrationMixin): real_storage_policy_index = \ broker.get_info()['storage_policy_index'] policy = filter(lambda p: p.idx == real_storage_policy_index, POLICIES)[0] policy_stats = broker.get_policy_stats() expected = {policy.idx: {'bytes_used': 0, 'object_count': 0}} self.assertEqual(policy_stats, expected) # add an object broker.put_object('correct_o', next(ts), 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe', storage_policy_index=policy.idx) policy_stats = broker.get_policy_stats() expected = {policy.idx: {'bytes_used': 123, 'object_count': 1}} self.assertEqual(policy_stats, expected) # add a misplaced object other_policy = random.choice([p for p in POLICIES if p is not policy]) broker.put_object('wrong_o', next(ts), 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe', storage_policy_index=other_policy.idx) policy_stats = broker.get_policy_stats() expected = { policy.idx: {'bytes_used': 123, 'object_count': 1}, other_policy.idx: {'bytes_used': 123, 'object_count': 1}, } self.assertEqual(policy_stats, expected) def test_policy_stat_tracking(self): ts = (Timestamp(t).internal for t in itertools.count(int(time()))) broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(next(ts), POLICIES.default.idx) stats = defaultdict(dict) iters = 100 for i in range(iters): policy_index = random.randint(0, iters * 0.1) name = 'object-%s' % random.randint(0, iters * 0.1) size = random.randint(0, iters) broker.put_object(name, next(ts), size, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe', storage_policy_index=policy_index) # track the size of the latest timestamp put for each object # in each storage policy stats[policy_index][name] = size policy_stats = broker.get_policy_stats() # if no objects were added for the default policy we still # expect an entry for the default policy in the returned info # because the database was initialized with that storage policy # - but it must be empty. 
if POLICIES.default.idx not in stats: default_stats = policy_stats.pop(POLICIES.default.idx) expected = {'object_count': 0, 'bytes_used': 0} self.assertEqual(default_stats, expected) self.assertEqual(len(policy_stats), len(stats)) for policy_index, stat in policy_stats.items(): self.assertEqual(stat['object_count'], len(stats[policy_index])) self.assertEqual(stat['bytes_used'], sum(stats[policy_index].values())) def test_initialize_container_broker_in_default(self): broker = ContainerBroker(':memory:', account='test1', container='test2') # initialize with no storage_policy_index argument broker.initialize(Timestamp(1).internal) info = broker.get_info() self.assertEqual(info['account'], 'test1') self.assertEqual(info['container'], 'test2') self.assertEqual(info['hash'], '00000000000000000000000000000000') self.assertEqual(info['put_timestamp'], Timestamp(1).internal) self.assertEqual(info['delete_timestamp'], '0') info = broker.get_info() self.assertEqual(info['object_count'], 0) self.assertEqual(info['bytes_used'], 0) policy_stats = broker.get_policy_stats() # Act as policy-0 self.assertTrue(0 in policy_stats) self.assertEqual(policy_stats[0]['bytes_used'], 0) self.assertEqual(policy_stats[0]['object_count'], 0) broker.put_object('o1', Timestamp(time()).internal, 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe') info = broker.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 123) policy_stats = broker.get_policy_stats() self.assertTrue(0 in policy_stats) self.assertEqual(policy_stats[0]['object_count'], 1) self.assertEqual(policy_stats[0]['bytes_used'], 123) def test_get_info(self): # Test ContainerBroker.get_info broker = ContainerBroker(':memory:', account='test1', container='test2') broker.initialize(Timestamp('1').internal, 0) info = broker.get_info() self.assertEqual(info['account'], 'test1') self.assertEqual(info['container'], 'test2') self.assertEqual(info['hash'], '00000000000000000000000000000000') self.assertEqual(info['put_timestamp'], Timestamp(1).internal) self.assertEqual(info['delete_timestamp'], '0') if self.__class__ in (TestContainerBrokerBeforeMetadata, TestContainerBrokerBeforeXSync, TestContainerBrokerBeforeSPI): self.assertEqual(info['status_changed_at'], '0') else: self.assertEqual(info['status_changed_at'], Timestamp(1).internal) info = broker.get_info() self.assertEqual(info['object_count'], 0) self.assertEqual(info['bytes_used'], 0) broker.put_object('o1', Timestamp(time()).internal, 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe') info = broker.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 123) sleep(.00001) broker.put_object('o2', Timestamp(time()).internal, 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe') info = broker.get_info() self.assertEqual(info['object_count'], 2) self.assertEqual(info['bytes_used'], 246) sleep(.00001) broker.put_object('o2', Timestamp(time()).internal, 1000, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe') info = broker.get_info() self.assertEqual(info['object_count'], 2) self.assertEqual(info['bytes_used'], 1123) sleep(.00001) broker.delete_object('o1', Timestamp(time()).internal) info = broker.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 1000) sleep(.00001) broker.delete_object('o2', Timestamp(time()).internal) info = broker.get_info() self.assertEqual(info['object_count'], 0) self.assertEqual(info['bytes_used'], 0) info = broker.get_info() self.assertEqual(info['x_container_sync_point1'], 
-1) self.assertEqual(info['x_container_sync_point2'], -1) def test_set_x_syncs(self): broker = ContainerBroker(':memory:', account='test1', container='test2') broker.initialize(Timestamp('1').internal, 0) info = broker.get_info() self.assertEqual(info['x_container_sync_point1'], -1) self.assertEqual(info['x_container_sync_point2'], -1) broker.set_x_container_sync_points(1, 2) info = broker.get_info() self.assertEqual(info['x_container_sync_point1'], 1) self.assertEqual(info['x_container_sync_point2'], 2) def test_get_report_info(self): broker = ContainerBroker(':memory:', account='test1', container='test2') broker.initialize(Timestamp('1').internal, 0) info = broker.get_info() self.assertEqual(info['account'], 'test1') self.assertEqual(info['container'], 'test2') self.assertEqual(info['object_count'], 0) self.assertEqual(info['bytes_used'], 0) self.assertEqual(info['reported_object_count'], 0) self.assertEqual(info['reported_bytes_used'], 0) broker.put_object('o1', Timestamp(time()).internal, 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe') info = broker.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 123) self.assertEqual(info['reported_object_count'], 0) self.assertEqual(info['reported_bytes_used'], 0) sleep(.00001) broker.put_object('o2', Timestamp(time()).internal, 123, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe') info = broker.get_info() self.assertEqual(info['object_count'], 2) self.assertEqual(info['bytes_used'], 246) self.assertEqual(info['reported_object_count'], 0) self.assertEqual(info['reported_bytes_used'], 0) sleep(.00001) broker.put_object('o2', Timestamp(time()).internal, 1000, 'text/plain', '5af83e3196bf99f440f31f2e1a6c9afe') info = broker.get_info() self.assertEqual(info['object_count'], 2) self.assertEqual(info['bytes_used'], 1123) self.assertEqual(info['reported_object_count'], 0) self.assertEqual(info['reported_bytes_used'], 0) put_timestamp = Timestamp(time()).internal sleep(.001) delete_timestamp = Timestamp(time()).internal broker.reported(put_timestamp, delete_timestamp, 2, 1123) info = broker.get_info() self.assertEqual(info['object_count'], 2) self.assertEqual(info['bytes_used'], 1123) self.assertEqual(info['reported_put_timestamp'], put_timestamp) self.assertEqual(info['reported_delete_timestamp'], delete_timestamp) self.assertEqual(info['reported_object_count'], 2) self.assertEqual(info['reported_bytes_used'], 1123) sleep(.00001) broker.delete_object('o1', Timestamp(time()).internal) info = broker.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 1000) self.assertEqual(info['reported_object_count'], 2) self.assertEqual(info['reported_bytes_used'], 1123) sleep(.00001) broker.delete_object('o2', Timestamp(time()).internal) info = broker.get_info() self.assertEqual(info['object_count'], 0) self.assertEqual(info['bytes_used'], 0) self.assertEqual(info['reported_object_count'], 2) self.assertEqual(info['reported_bytes_used'], 1123) def test_list_objects_iter(self): # Test ContainerBroker.list_objects_iter broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) for obj1 in range(4): for obj2 in range(125): broker.put_object('%d/%04d' % (obj1, obj2), Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') for obj in range(125): broker.put_object('2/0051/%04d' % obj, Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') for obj in range(125): 
broker.put_object('3/%04d/0049' % obj, Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') listing = broker.list_objects_iter(100, '', None, None, '') self.assertEqual(len(listing), 100) self.assertEqual(listing[0][0], '0/0000') self.assertEqual(listing[-1][0], '0/0099') listing = broker.list_objects_iter(100, '', '0/0050', None, '') self.assertEqual(len(listing), 50) self.assertEqual(listing[0][0], '0/0000') self.assertEqual(listing[-1][0], '0/0049') listing = broker.list_objects_iter(100, '0/0099', None, None, '') self.assertEqual(len(listing), 100) self.assertEqual(listing[0][0], '0/0100') self.assertEqual(listing[-1][0], '1/0074') listing = broker.list_objects_iter(55, '1/0074', None, None, '') self.assertEqual(len(listing), 55) self.assertEqual(listing[0][0], '1/0075') self.assertEqual(listing[-1][0], '2/0004') listing = broker.list_objects_iter(55, '2/0005', None, None, '', reverse=True) self.assertEqual(len(listing), 55) self.assertEqual(listing[0][0], '2/0004') self.assertEqual(listing[-1][0], '1/0075') listing = broker.list_objects_iter(10, '', None, '0/01', '') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0/0100') self.assertEqual(listing[-1][0], '0/0109') listing = broker.list_objects_iter(10, '', None, '0/', '/') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0/0000') self.assertEqual(listing[-1][0], '0/0009') listing = broker.list_objects_iter(10, '', None, '0/', '/', reverse=True) self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0/0124') self.assertEqual(listing[-1][0], '0/0115') # Same as above, but using the path argument. listing = broker.list_objects_iter(10, '', None, None, '', '0') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0/0000') self.assertEqual(listing[-1][0], '0/0009') listing = broker.list_objects_iter(10, '', None, None, '', '0', reverse=True) self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0/0124') self.assertEqual(listing[-1][0], '0/0115') listing = broker.list_objects_iter(10, '', None, '', '/') self.assertEqual(len(listing), 4) self.assertEqual([row[0] for row in listing], ['0/', '1/', '2/', '3/']) listing = broker.list_objects_iter(10, '', None, '', '/', reverse=True) self.assertEqual(len(listing), 4) self.assertEqual([row[0] for row in listing], ['3/', '2/', '1/', '0/']) listing = broker.list_objects_iter(10, '2', None, None, '/') self.assertEqual(len(listing), 2) self.assertEqual([row[0] for row in listing], ['2/', '3/']) listing = broker.list_objects_iter(10, '2/', None, None, '/') self.assertEqual(len(listing), 1) self.assertEqual([row[0] for row in listing], ['3/']) listing = broker.list_objects_iter(10, '2/', None, None, '/', reverse=True) self.assertEqual(len(listing), 2) self.assertEqual([row[0] for row in listing], ['1/', '0/']) listing = broker.list_objects_iter(10, '20', None, None, '/', reverse=True) self.assertEqual(len(listing), 3) self.assertEqual([row[0] for row in listing], ['2/', '1/', '0/']) listing = broker.list_objects_iter(10, '2/0050', None, '2/', '/') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '2/0051') self.assertEqual(listing[1][0], '2/0051/') self.assertEqual(listing[2][0], '2/0052') self.assertEqual(listing[-1][0], '2/0059') listing = broker.list_objects_iter(10, '3/0045', None, '3/', '/') self.assertEqual(len(listing), 10) self.assertEqual([row[0] for row in listing], ['3/0045/', '3/0046', '3/0046/', '3/0047', '3/0047/', '3/0048', '3/0048/', '3/0049', '3/0049/', 
'3/0050']) broker.put_object('3/0049/', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') listing = broker.list_objects_iter(10, '3/0048', None, None, None) self.assertEqual(len(listing), 10) self.assertEqual( [row[0] for row in listing], ['3/0048/0049', '3/0049', '3/0049/', '3/0049/0049', '3/0050', '3/0050/0049', '3/0051', '3/0051/0049', '3/0052', '3/0052/0049']) listing = broker.list_objects_iter(10, '3/0048', None, '3/', '/') self.assertEqual(len(listing), 10) self.assertEqual( [row[0] for row in listing], ['3/0048/', '3/0049', '3/0049/', '3/0050', '3/0050/', '3/0051', '3/0051/', '3/0052', '3/0052/', '3/0053']) listing = broker.list_objects_iter(10, None, None, '3/0049/', '/') self.assertEqual(len(listing), 2) self.assertEqual( [row[0] for row in listing], ['3/0049/', '3/0049/0049']) listing = broker.list_objects_iter(10, None, None, None, None, '3/0049') self.assertEqual(len(listing), 1) self.assertEqual([row[0] for row in listing], ['3/0049/0049']) listing = broker.list_objects_iter(2, None, None, '3/', '/') self.assertEqual(len(listing), 2) self.assertEqual([row[0] for row in listing], ['3/0000', '3/0000/']) listing = broker.list_objects_iter(2, None, None, None, None, '3') self.assertEqual(len(listing), 2) self.assertEqual([row[0] for row in listing], ['3/0000', '3/0001']) def test_reverse_prefix_delim(self): expectations = [ { 'objects': [ 'topdir1/subdir1.0/obj1', 'topdir1/subdir1.1/obj1', 'topdir1/subdir1/obj1', ], 'params': { 'prefix': 'topdir1/', 'delimiter': '/', }, 'expected': [ 'topdir1/subdir1.0/', 'topdir1/subdir1.1/', 'topdir1/subdir1/', ], }, { 'objects': [ 'topdir1/subdir1.0/obj1', 'topdir1/subdir1.1/obj1', 'topdir1/subdir1/obj1', 'topdir1/subdir10', 'topdir1/subdir10/obj1', ], 'params': { 'prefix': 'topdir1/', 'delimiter': '/', }, 'expected': [ 'topdir1/subdir1.0/', 'topdir1/subdir1.1/', 'topdir1/subdir1/', 'topdir1/subdir10', 'topdir1/subdir10/', ], }, { 'objects': [ 'topdir1/subdir1/obj1', 'topdir1/subdir1.0/obj1', 'topdir1/subdir1.1/obj1', ], 'params': { 'prefix': 'topdir1/', 'delimiter': '/', 'reverse': True, }, 'expected': [ 'topdir1/subdir1/', 'topdir1/subdir1.1/', 'topdir1/subdir1.0/', ], }, { 'objects': [ 'topdir1/subdir10/obj1', 'topdir1/subdir10', 'topdir1/subdir1/obj1', 'topdir1/subdir1.0/obj1', 'topdir1/subdir1.1/obj1', ], 'params': { 'prefix': 'topdir1/', 'delimiter': '/', 'reverse': True, }, 'expected': [ 'topdir1/subdir10/', 'topdir1/subdir10', 'topdir1/subdir1/', 'topdir1/subdir1.1/', 'topdir1/subdir1.0/', ], }, { 'objects': [ '1', '2', '3/1', '3/2.2', '3/2/1', '3/2/2', '3/3', '4', ], 'params': { 'path': '3/', }, 'expected': [ '3/1', '3/2.2', '3/3', ], }, { 'objects': [ '1', '2', '3/1', '3/2.2', '3/2/1', '3/2/2', '3/3', '4', ], 'params': { 'path': '3/', 'reverse': True, }, 'expected': [ '3/3', '3/2.2', '3/1', ], }, ] ts = make_timestamp_iter() default_listing_params = { 'limit': 10000, 'marker': '', 'end_marker': None, 'prefix': None, 'delimiter': None, } obj_create_params = { 'size': 0, 'content_type': 'application/test', 'etag': EMPTY_ETAG, } failures = [] for expected in expectations: broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(next(ts).internal, 0) for name in expected['objects']: broker.put_object(name, next(ts).internal, **obj_create_params) params = default_listing_params.copy() params.update(expected['params']) listing = list(o[0] for o in broker.list_objects_iter(**params)) if listing != expected['expected']: expected['listing'] = listing failures.append( "With objects 
%(objects)r, the params %(params)r " "produced %(listing)r instead of %(expected)r" % expected) self.assertFalse(failures, "Found the following failures:\n%s" % '\n'.join(failures)) def test_list_objects_iter_non_slash(self): # Test ContainerBroker.list_objects_iter using a # delimiter that is not a slash broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) for obj1 in range(4): for obj2 in range(125): broker.put_object('%d:%04d' % (obj1, obj2), Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') for obj in range(125): broker.put_object('2:0051:%04d' % obj, Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') for obj in range(125): broker.put_object('3:%04d:0049' % obj, Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') listing = broker.list_objects_iter(100, '', None, None, '') self.assertEqual(len(listing), 100) self.assertEqual(listing[0][0], '0:0000') self.assertEqual(listing[-1][0], '0:0099') listing = broker.list_objects_iter(100, '', '0:0050', None, '') self.assertEqual(len(listing), 50) self.assertEqual(listing[0][0], '0:0000') self.assertEqual(listing[-1][0], '0:0049') listing = broker.list_objects_iter(100, '0:0099', None, None, '') self.assertEqual(len(listing), 100) self.assertEqual(listing[0][0], '0:0100') self.assertEqual(listing[-1][0], '1:0074') listing = broker.list_objects_iter(55, '1:0074', None, None, '') self.assertEqual(len(listing), 55) self.assertEqual(listing[0][0], '1:0075') self.assertEqual(listing[-1][0], '2:0004') listing = broker.list_objects_iter(10, '', None, '0:01', '') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0:0100') self.assertEqual(listing[-1][0], '0:0109') listing = broker.list_objects_iter(10, '', None, '0:', ':') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0:0000') self.assertEqual(listing[-1][0], '0:0009') # Same as above, but using the path argument, so nothing should be # returned since path uses a '/' as a delimiter. 
        listing = broker.list_objects_iter(10, '', None, None, '', '0')
        self.assertEqual(len(listing), 0)

        listing = broker.list_objects_iter(10, '', None, '', ':')
        self.assertEqual(len(listing), 4)
        self.assertEqual([row[0] for row in listing],
                         ['0:', '1:', '2:', '3:'])

        listing = broker.list_objects_iter(10, '2', None, None, ':')
        self.assertEqual(len(listing), 2)
        self.assertEqual([row[0] for row in listing], ['2:', '3:'])

        listing = broker.list_objects_iter(10, '2:', None, None, ':')
        self.assertEqual(len(listing), 1)
        self.assertEqual([row[0] for row in listing], ['3:'])

        listing = broker.list_objects_iter(10, '2:0050', None, '2:', ':')
        self.assertEqual(len(listing), 10)
        self.assertEqual(listing[0][0], '2:0051')
        self.assertEqual(listing[1][0], '2:0051:')
        self.assertEqual(listing[2][0], '2:0052')
        self.assertEqual(listing[-1][0], '2:0059')

        listing = broker.list_objects_iter(10, '3:0045', None, '3:', ':')
        self.assertEqual(len(listing), 10)
        self.assertEqual([row[0] for row in listing],
                         ['3:0045:', '3:0046', '3:0046:', '3:0047',
                          '3:0047:', '3:0048', '3:0048:', '3:0049',
                          '3:0049:', '3:0050'])

        broker.put_object('3:0049:', Timestamp(time()).internal, 0,
                          'text/plain', 'd41d8cd98f00b204e9800998ecf8427e')

        listing = broker.list_objects_iter(10, '3:0048', None, None, None)
        self.assertEqual(len(listing), 10)
        self.assertEqual(
            [row[0] for row in listing],
            ['3:0048:0049', '3:0049', '3:0049:', '3:0049:0049', '3:0050',
             '3:0050:0049', '3:0051', '3:0051:0049', '3:0052',
             '3:0052:0049'])

        listing = broker.list_objects_iter(10, '3:0048', None, '3:', ':')
        self.assertEqual(len(listing), 10)
        self.assertEqual(
            [row[0] for row in listing],
            ['3:0048:', '3:0049', '3:0049:', '3:0050', '3:0050:',
             '3:0051', '3:0051:', '3:0052', '3:0052:', '3:0053'])

        listing = broker.list_objects_iter(10, None, None, '3:0049:', ':')
        self.assertEqual(len(listing), 2)
        self.assertEqual(
            [row[0] for row in listing],
            ['3:0049:', '3:0049:0049'])

        # Same as above, but using the path argument, so nothing should be
        # returned since path uses a '/' as a delimiter.
listing = broker.list_objects_iter(10, None, None, None, None, '3:0049') self.assertEqual(len(listing), 0) listing = broker.list_objects_iter(2, None, None, '3:', ':') self.assertEqual(len(listing), 2) self.assertEqual([row[0] for row in listing], ['3:0000', '3:0000:']) listing = broker.list_objects_iter(2, None, None, None, None, '3') self.assertEqual(len(listing), 0) def test_list_objects_iter_prefix_delim(self): # Test ContainerBroker.list_objects_iter broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) broker.put_object( '/pets/dogs/1', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( '/pets/dogs/2', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( '/pets/fish/a', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( '/pets/fish/b', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( '/pets/fish_info.txt', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( '/snakes', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') # def list_objects_iter(self, limit, marker, prefix, delimiter, # path=None, format=None): listing = broker.list_objects_iter(100, None, None, '/pets/f', '/') self.assertEqual([row[0] for row in listing], ['/pets/fish/', '/pets/fish_info.txt']) listing = broker.list_objects_iter(100, None, None, '/pets/fish', '/') self.assertEqual([row[0] for row in listing], ['/pets/fish/', '/pets/fish_info.txt']) listing = broker.list_objects_iter(100, None, None, '/pets/fish/', '/') self.assertEqual([row[0] for row in listing], ['/pets/fish/a', '/pets/fish/b']) def test_list_objects_iter_order_and_reverse(self): # Test ContainerBroker.list_objects_iter broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) broker.put_object( 'o1', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( 'o10', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( 'O1', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( 'o2', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( 'o3', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object( 'O4', Timestamp(0).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') listing = broker.list_objects_iter(100, None, None, '', '', reverse=False) self.assertEqual([row[0] for row in listing], ['O1', 'O4', 'o1', 'o10', 'o2', 'o3']) listing = broker.list_objects_iter(100, None, None, '', '', reverse=True) self.assertEqual([row[0] for row in listing], ['o3', 'o2', 'o10', 'o1', 'O4', 'O1']) listing = broker.list_objects_iter(2, None, None, '', '', reverse=True) self.assertEqual([row[0] for row in listing], ['o3', 'o2']) listing = broker.list_objects_iter(100, 'o2', 'O4', '', '', reverse=True) self.assertEqual([row[0] for row in listing], ['o10', 'o1']) def test_double_check_trailing_delimiter(self): # Test ContainerBroker.list_objects_iter for a # container that has an odd file with a trailing delimiter broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) broker.put_object('a', Timestamp(time()).internal, 0, 'text/plain', 
'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a/', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a/a', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a/a/a', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a/a/b', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a/b', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('b', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('b/a', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('b/b', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('c', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a/0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0/', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('00', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0/0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0/00', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0/1', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0/1/', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0/1/0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('1', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('1/', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('1/0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') listing = broker.list_objects_iter(25, None, None, None, None) self.assertEqual(len(listing), 22) self.assertEqual( [row[0] for row in listing], ['0', '0/', '0/0', '0/00', '0/1', '0/1/', '0/1/0', '00', '1', '1/', '1/0', 'a', 'a/', 'a/0', 'a/a', 'a/a/a', 'a/a/b', 'a/b', 'b', 'b/a', 'b/b', 'c']) listing = broker.list_objects_iter(25, None, None, '', '/') self.assertEqual(len(listing), 10) self.assertEqual( [row[0] for row in listing], ['0', '0/', '00', '1', '1/', 'a', 'a/', 'b', 'b/', 'c']) listing = broker.list_objects_iter(25, None, None, 'a/', '/') self.assertEqual(len(listing), 5) self.assertEqual( [row[0] for row in listing], ['a/', 'a/0', 'a/a', 'a/a/', 'a/b']) listing = broker.list_objects_iter(25, None, None, '0/', '/') self.assertEqual(len(listing), 5) self.assertEqual( [row[0] for row in listing], ['0/', '0/0', '0/00', '0/1', '0/1/']) listing = broker.list_objects_iter(25, None, None, '0/1/', '/') self.assertEqual(len(listing), 2) self.assertEqual( [row[0] for row in listing], ['0/1/', '0/1/0']) listing = broker.list_objects_iter(25, None, None, 'b/', '/') self.assertEqual(len(listing), 2) self.assertEqual([row[0] for row in listing], ['b/a', 'b/b']) def test_double_check_trailing_delimiter_non_slash(self): # Test ContainerBroker.list_objects_iter for a # container 
that has an odd file with a trailing delimiter broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) broker.put_object('a', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a:', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a:a', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a:a:a', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a:a:b', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a:b', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('b', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('b:a', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('b:b', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('c', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('a:0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0:', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('00', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0:0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0:00', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0:1', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0:1:', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('0:1:0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('1', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('1:', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('1:0', Timestamp(time()).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') listing = broker.list_objects_iter(25, None, None, None, None) self.assertEqual(len(listing), 22) self.assertEqual( [row[0] for row in listing], ['0', '00', '0:', '0:0', '0:00', '0:1', '0:1:', '0:1:0', '1', '1:', '1:0', 'a', 'a:', 'a:0', 'a:a', 'a:a:a', 'a:a:b', 'a:b', 'b', 'b:a', 'b:b', 'c']) listing = broker.list_objects_iter(25, None, None, '', ':') self.assertEqual(len(listing), 10) self.assertEqual( [row[0] for row in listing], ['0', '00', '0:', '1', '1:', 'a', 'a:', 'b', 'b:', 'c']) listing = broker.list_objects_iter(25, None, None, 'a:', ':') self.assertEqual(len(listing), 5) self.assertEqual( [row[0] for row in listing], ['a:', 'a:0', 'a:a', 'a:a:', 'a:b']) listing = broker.list_objects_iter(25, None, None, '0:', ':') self.assertEqual(len(listing), 5) self.assertEqual( [row[0] for row in listing], ['0:', '0:0', '0:00', '0:1', '0:1:']) listing = broker.list_objects_iter(25, None, None, '0:1:', ':') self.assertEqual(len(listing), 2) self.assertEqual( [row[0] for row in listing], ['0:1:', '0:1:0']) listing = broker.list_objects_iter(25, None, None, 
'b:', ':') self.assertEqual(len(listing), 2) self.assertEqual([row[0] for row in listing], ['b:a', 'b:b']) def test_chexor(self): broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) broker.put_object('a', Timestamp(1).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker.put_object('b', Timestamp(2).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') hasha = hashlib.md5('%s-%s' % ('a', Timestamp(1).internal)).digest() hashb = hashlib.md5('%s-%s' % ('b', Timestamp(2).internal)).digest() hashc = ''.join( ('%02x' % (ord(a) ^ ord(b)) for a, b in zip(hasha, hashb))) self.assertEqual(broker.get_info()['hash'], hashc) broker.put_object('b', Timestamp(3).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') hashb = hashlib.md5('%s-%s' % ('b', Timestamp(3).internal)).digest() hashc = ''.join( ('%02x' % (ord(a) ^ ord(b)) for a, b in zip(hasha, hashb))) self.assertEqual(broker.get_info()['hash'], hashc) def test_newid(self): # test DatabaseBroker.newid broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) id = broker.get_info()['id'] broker.newid('someid') self.assertNotEqual(id, broker.get_info()['id']) def test_get_items_since(self): # test DatabaseBroker.get_items_since broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) broker.put_object('a', Timestamp(1).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') max_row = broker.get_replication_info()['max_row'] broker.put_object('b', Timestamp(2).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') items = broker.get_items_since(max_row, 1000) self.assertEqual(len(items), 1) self.assertEqual(items[0]['name'], 'b') def test_sync_merging(self): # exercise the DatabaseBroker sync functions a bit broker1 = ContainerBroker(':memory:', account='a', container='c') broker1.initialize(Timestamp('1').internal, 0) broker2 = ContainerBroker(':memory:', account='a', container='c') broker2.initialize(Timestamp('1').internal, 0) self.assertEqual(broker2.get_sync('12345'), -1) broker1.merge_syncs([{'sync_point': 3, 'remote_id': '12345'}]) broker2.merge_syncs(broker1.get_syncs()) self.assertEqual(broker2.get_sync('12345'), 3) def test_merge_items(self): broker1 = ContainerBroker(':memory:', account='a', container='c') broker1.initialize(Timestamp('1').internal, 0) broker2 = ContainerBroker(':memory:', account='a', container='c') broker2.initialize(Timestamp('1').internal, 0) broker1.put_object('a', Timestamp(1).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker1.put_object('b', Timestamp(2).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') id = broker1.get_info()['id'] broker2.merge_items(broker1.get_items_since( broker2.get_sync(id), 1000), id) items = broker2.get_items_since(-1, 1000) self.assertEqual(len(items), 2) self.assertEqual(['a', 'b'], sorted([rec['name'] for rec in items])) broker1.put_object('c', Timestamp(3).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker2.merge_items(broker1.get_items_since( broker2.get_sync(id), 1000), id) items = broker2.get_items_since(-1, 1000) self.assertEqual(len(items), 3) self.assertEqual(['a', 'b', 'c'], sorted([rec['name'] for rec in items])) def test_merge_items_overwrite_unicode(self): # test DatabaseBroker.merge_items snowman = u'\N{SNOWMAN}'.encode('utf-8') broker1 = ContainerBroker(':memory:', account='a', container='c') 
broker1.initialize(Timestamp('1').internal, 0) id = broker1.get_info()['id'] broker2 = ContainerBroker(':memory:', account='a', container='c') broker2.initialize(Timestamp('1').internal, 0) broker1.put_object(snowman, Timestamp(2).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker1.put_object('b', Timestamp(3).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker2.merge_items(json.loads(json.dumps(broker1.get_items_since( broker2.get_sync(id), 1000))), id) broker1.put_object(snowman, Timestamp(4).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker2.merge_items(json.loads(json.dumps(broker1.get_items_since( broker2.get_sync(id), 1000))), id) items = broker2.get_items_since(-1, 1000) self.assertEqual(['b', snowman], sorted([rec['name'] for rec in items])) for rec in items: if rec['name'] == snowman: self.assertEqual(rec['created_at'], Timestamp(4).internal) if rec['name'] == 'b': self.assertEqual(rec['created_at'], Timestamp(3).internal) def test_merge_items_overwrite(self): # test DatabaseBroker.merge_items broker1 = ContainerBroker(':memory:', account='a', container='c') broker1.initialize(Timestamp('1').internal, 0) id = broker1.get_info()['id'] broker2 = ContainerBroker(':memory:', account='a', container='c') broker2.initialize(Timestamp('1').internal, 0) broker1.put_object('a', Timestamp(2).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker1.put_object('b', Timestamp(3).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker2.merge_items(broker1.get_items_since( broker2.get_sync(id), 1000), id) broker1.put_object('a', Timestamp(4).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker2.merge_items(broker1.get_items_since( broker2.get_sync(id), 1000), id) items = broker2.get_items_since(-1, 1000) self.assertEqual(['a', 'b'], sorted([rec['name'] for rec in items])) for rec in items: if rec['name'] == 'a': self.assertEqual(rec['created_at'], Timestamp(4).internal) if rec['name'] == 'b': self.assertEqual(rec['created_at'], Timestamp(3).internal) def test_merge_items_post_overwrite_out_of_order(self): # test DatabaseBroker.merge_items broker1 = ContainerBroker(':memory:', account='a', container='c') broker1.initialize(Timestamp('1').internal, 0) id = broker1.get_info()['id'] broker2 = ContainerBroker(':memory:', account='a', container='c') broker2.initialize(Timestamp('1').internal, 0) broker1.put_object('a', Timestamp(2).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker1.put_object('b', Timestamp(3).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker2.merge_items(broker1.get_items_since( broker2.get_sync(id), 1000), id) broker1.put_object('a', Timestamp(4).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker2.merge_items(broker1.get_items_since( broker2.get_sync(id), 1000), id) items = broker2.get_items_since(-1, 1000) self.assertEqual(['a', 'b'], sorted([rec['name'] for rec in items])) for rec in items: if rec['name'] == 'a': self.assertEqual(rec['created_at'], Timestamp(4).internal) if rec['name'] == 'b': self.assertEqual(rec['created_at'], Timestamp(3).internal) self.assertEqual(rec['content_type'], 'text/plain') items = broker2.get_items_since(-1, 1000) self.assertEqual(['a', 'b'], sorted([rec['name'] for rec in items])) for rec in items: if rec['name'] == 'a': self.assertEqual(rec['created_at'], Timestamp(4).internal) if rec['name'] == 'b': self.assertEqual(rec['created_at'], Timestamp(3).internal) broker1.put_object('b', 
Timestamp(5).internal, 0, 'text/plain', 'd41d8cd98f00b204e9800998ecf8427e') broker2.merge_items(broker1.get_items_since( broker2.get_sync(id), 1000), id) items = broker2.get_items_since(-1, 1000) self.assertEqual(['a', 'b'], sorted([rec['name'] for rec in items])) for rec in items: if rec['name'] == 'a': self.assertEqual(rec['created_at'], Timestamp(4).internal) if rec['name'] == 'b': self.assertEqual(rec['created_at'], Timestamp(5).internal) self.assertEqual(rec['content_type'], 'text/plain') def test_set_storage_policy_index(self): ts = (Timestamp(t).internal for t in itertools.count(int(time()))) broker = ContainerBroker(':memory:', account='test_account', container='test_container') timestamp = next(ts) broker.initialize(timestamp, 0) info = broker.get_info() self.assertEqual(0, info['storage_policy_index']) # sanity check self.assertEqual(0, info['object_count']) self.assertEqual(0, info['bytes_used']) if self.__class__ in (TestContainerBrokerBeforeMetadata, TestContainerBrokerBeforeXSync, TestContainerBrokerBeforeSPI): self.assertEqual(info['status_changed_at'], '0') else: self.assertEqual(timestamp, info['status_changed_at']) expected = {0: {'object_count': 0, 'bytes_used': 0}} self.assertEqual(expected, broker.get_policy_stats()) timestamp = next(ts) broker.set_storage_policy_index(111, timestamp) self.assertEqual(broker.storage_policy_index, 111) info = broker.get_info() self.assertEqual(111, info['storage_policy_index']) self.assertEqual(0, info['object_count']) self.assertEqual(0, info['bytes_used']) self.assertEqual(timestamp, info['status_changed_at']) expected[111] = {'object_count': 0, 'bytes_used': 0} self.assertEqual(expected, broker.get_policy_stats()) timestamp = next(ts) broker.set_storage_policy_index(222, timestamp) self.assertEqual(broker.storage_policy_index, 222) info = broker.get_info() self.assertEqual(222, info['storage_policy_index']) self.assertEqual(0, info['object_count']) self.assertEqual(0, info['bytes_used']) self.assertEqual(timestamp, info['status_changed_at']) expected[222] = {'object_count': 0, 'bytes_used': 0} self.assertEqual(expected, broker.get_policy_stats()) old_timestamp, timestamp = timestamp, next(ts) broker.set_storage_policy_index(222, timestamp) # it's idempotent info = broker.get_info() self.assertEqual(222, info['storage_policy_index']) self.assertEqual(0, info['object_count']) self.assertEqual(0, info['bytes_used']) self.assertEqual(old_timestamp, info['status_changed_at']) self.assertEqual(expected, broker.get_policy_stats()) def test_set_storage_policy_index_empty(self): # Putting an object may trigger migrations, so test with a # never-had-an-object container to make sure we handle it broker = ContainerBroker(':memory:', account='test_account', container='test_container') broker.initialize(Timestamp('1').internal, 0) info = broker.get_info() self.assertEqual(0, info['storage_policy_index']) broker.set_storage_policy_index(2) info = broker.get_info() self.assertEqual(2, info['storage_policy_index']) def test_reconciler_sync(self): broker = ContainerBroker(':memory:', account='test_account', container='test_container') broker.initialize(Timestamp('1').internal, 0) self.assertEqual(-1, broker.get_reconciler_sync()) broker.update_reconciler_sync(10) self.assertEqual(10, broker.get_reconciler_sync()) @with_tempdir def test_legacy_pending_files(self, tempdir): ts = (Timestamp(t).internal for t in itertools.count(int(time()))) db_path = os.path.join(tempdir, 'container.db') # first init an acct DB without the policy_stat table present 
broker = ContainerBroker(db_path, account='a', container='c') broker.initialize(next(ts), 1) # manually make some pending entries lacking storage_policy_index with open(broker.pending_file, 'a+b') as fp: for i in range(10): name, timestamp, size, content_type, etag, deleted = ( 'o%s' % i, next(ts), 0, 'c', 'e', 0) fp.write(':') fp.write(pickle.dumps( (name, timestamp, size, content_type, etag, deleted), protocol=2).encode('base64')) fp.flush() # use put_object to append some more entries with different # values for storage_policy_index for i in range(10, 30): name = 'o%s' % i if i < 20: size = 1 storage_policy_index = 0 else: size = 2 storage_policy_index = 1 broker.put_object(name, next(ts), size, 'c', 'e', 0, storage_policy_index=storage_policy_index) broker._commit_puts_stale_ok() # 10 objects with 0 bytes each in the legacy pending entries # 10 objects with 1 bytes each in storage policy 0 # 10 objects with 2 bytes each in storage policy 1 expected = { 0: {'object_count': 20, 'bytes_used': 10}, 1: {'object_count': 10, 'bytes_used': 20}, } self.assertEqual(broker.get_policy_stats(), expected) class TestCommonContainerBroker(test_db.TestExampleBroker): broker_class = ContainerBroker def setUp(self): super(TestCommonContainerBroker, self).setUp() self.policy = random.choice(list(POLICIES)) def put_item(self, broker, timestamp): broker.put_object('test', timestamp, 0, 'text/plain', 'x', storage_policy_index=int(self.policy)) def delete_item(self, broker, timestamp): broker.delete_object('test', timestamp, storage_policy_index=int(self.policy)) class ContainerBrokerMigrationMixin(object): """ Mixin for running ContainerBroker against databases created with older schemas. """ def setUp(self): self._imported_create_object_table = \ ContainerBroker.create_object_table ContainerBroker.create_object_table = \ prespi_create_object_table self._imported_create_container_info_table = \ ContainerBroker.create_container_info_table ContainerBroker.create_container_info_table = \ premetadata_create_container_info_table self._imported_create_policy_stat_table = \ ContainerBroker.create_policy_stat_table ContainerBroker.create_policy_stat_table = lambda *args: None @classmethod @contextmanager def old_broker(cls): cls.runTest = lambda *a, **k: None case = cls() case.setUp() try: yield ContainerBroker finally: case.tearDown() def tearDown(self): ContainerBroker.create_container_info_table = \ self._imported_create_container_info_table ContainerBroker.create_object_table = \ self._imported_create_object_table ContainerBroker.create_policy_stat_table = \ self._imported_create_policy_stat_table def premetadata_create_container_info_table(self, conn, put_timestamp, _spi=None): """ Copied from ContainerBroker before the metadata column was added; used for testing with TestContainerBrokerBeforeMetadata. Create the container_stat table which is specific to the container DB. 
:param conn: DB connection object :param put_timestamp: put timestamp """ if put_timestamp is None: put_timestamp = Timestamp(0).internal conn.executescript(''' CREATE TABLE container_stat ( account TEXT, container TEXT, created_at TEXT, put_timestamp TEXT DEFAULT '0', delete_timestamp TEXT DEFAULT '0', object_count INTEGER, bytes_used INTEGER, reported_put_timestamp TEXT DEFAULT '0', reported_delete_timestamp TEXT DEFAULT '0', reported_object_count INTEGER DEFAULT 0, reported_bytes_used INTEGER DEFAULT 0, hash TEXT default '00000000000000000000000000000000', id TEXT, status TEXT DEFAULT '', status_changed_at TEXT DEFAULT '0' ); INSERT INTO container_stat (object_count, bytes_used) VALUES (0, 0); ''') conn.execute(''' UPDATE container_stat SET account = ?, container = ?, created_at = ?, id = ?, put_timestamp = ? ''', (self.account, self.container, Timestamp(time()).internal, str(uuid4()), put_timestamp)) class TestContainerBrokerBeforeMetadata(ContainerBrokerMigrationMixin, TestContainerBroker): """ Tests for ContainerBroker against databases created before the metadata column was added. """ def setUp(self): super(TestContainerBrokerBeforeMetadata, self).setUp() broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) exc = None with broker.get() as conn: try: conn.execute('SELECT metadata FROM container_stat') except BaseException as err: exc = err self.assertTrue('no such column: metadata' in str(exc)) def tearDown(self): super(TestContainerBrokerBeforeMetadata, self).tearDown() broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) with broker.get() as conn: conn.execute('SELECT metadata FROM container_stat') def prexsync_create_container_info_table(self, conn, put_timestamp, _spi=None): """ Copied from ContainerBroker before the x_container_sync_point[12] columns were added; used for testing with TestContainerBrokerBeforeXSync. Create the container_stat table which is specific to the container DB. :param conn: DB connection object :param put_timestamp: put timestamp """ if put_timestamp is None: put_timestamp = Timestamp(0).internal conn.executescript(""" CREATE TABLE container_stat ( account TEXT, container TEXT, created_at TEXT, put_timestamp TEXT DEFAULT '0', delete_timestamp TEXT DEFAULT '0', object_count INTEGER, bytes_used INTEGER, reported_put_timestamp TEXT DEFAULT '0', reported_delete_timestamp TEXT DEFAULT '0', reported_object_count INTEGER DEFAULT 0, reported_bytes_used INTEGER DEFAULT 0, hash TEXT default '00000000000000000000000000000000', id TEXT, status TEXT DEFAULT '', status_changed_at TEXT DEFAULT '0', metadata TEXT DEFAULT '' ); INSERT INTO container_stat (object_count, bytes_used) VALUES (0, 0); """) conn.execute(''' UPDATE container_stat SET account = ?, container = ?, created_at = ?, id = ?, put_timestamp = ? ''', (self.account, self.container, Timestamp(time()).internal, str(uuid4()), put_timestamp)) class TestContainerBrokerBeforeXSync(ContainerBrokerMigrationMixin, TestContainerBroker): """ Tests for ContainerBroker against databases created before the x_container_sync_point[12] columns were added. 
""" def setUp(self): super(TestContainerBrokerBeforeXSync, self).setUp() ContainerBroker.create_container_info_table = \ prexsync_create_container_info_table broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) exc = None with broker.get() as conn: try: conn.execute('''SELECT x_container_sync_point1 FROM container_stat''') except BaseException as err: exc = err self.assertTrue('no such column: x_container_sync_point1' in str(exc)) def tearDown(self): super(TestContainerBrokerBeforeXSync, self).tearDown() broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) with broker.get() as conn: conn.execute('SELECT x_container_sync_point1 FROM container_stat') def prespi_create_object_table(self, conn, *args, **kwargs): conn.executescript(""" CREATE TABLE object ( ROWID INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, created_at TEXT, size INTEGER, content_type TEXT, etag TEXT, deleted INTEGER DEFAULT 0 ); CREATE INDEX ix_object_deleted_name ON object (deleted, name); CREATE TRIGGER object_insert AFTER INSERT ON object BEGIN UPDATE container_stat SET object_count = object_count + (1 - new.deleted), bytes_used = bytes_used + new.size, hash = chexor(hash, new.name, new.created_at); END; CREATE TRIGGER object_update BEFORE UPDATE ON object BEGIN SELECT RAISE(FAIL, 'UPDATE not allowed; DELETE and INSERT'); END; CREATE TRIGGER object_delete AFTER DELETE ON object BEGIN UPDATE container_stat SET object_count = object_count - (1 - old.deleted), bytes_used = bytes_used - old.size, hash = chexor(hash, old.name, old.created_at); END; """) def prespi_create_container_info_table(self, conn, put_timestamp, _spi=None): """ Copied from ContainerBroker before the storage_policy_index column was added; used for testing with TestContainerBrokerBeforeSPI. Create the container_stat table which is specific to the container DB. :param conn: DB connection object :param put_timestamp: put timestamp """ if put_timestamp is None: put_timestamp = Timestamp(0).internal conn.executescript(""" CREATE TABLE container_stat ( account TEXT, container TEXT, created_at TEXT, put_timestamp TEXT DEFAULT '0', delete_timestamp TEXT DEFAULT '0', object_count INTEGER, bytes_used INTEGER, reported_put_timestamp TEXT DEFAULT '0', reported_delete_timestamp TEXT DEFAULT '0', reported_object_count INTEGER DEFAULT 0, reported_bytes_used INTEGER DEFAULT 0, hash TEXT default '00000000000000000000000000000000', id TEXT, status TEXT DEFAULT '', status_changed_at TEXT DEFAULT '0', metadata TEXT DEFAULT '', x_container_sync_point1 INTEGER DEFAULT -1, x_container_sync_point2 INTEGER DEFAULT -1 ); INSERT INTO container_stat (object_count, bytes_used) VALUES (0, 0); """) conn.execute(''' UPDATE container_stat SET account = ?, container = ?, created_at = ?, id = ?, put_timestamp = ? ''', (self.account, self.container, Timestamp(time()).internal, str(uuid4()), put_timestamp)) class TestContainerBrokerBeforeSPI(ContainerBrokerMigrationMixin, TestContainerBroker): """ Tests for ContainerBroker against databases created before the storage_policy_index column was added. 
""" def setUp(self): super(TestContainerBrokerBeforeSPI, self).setUp() ContainerBroker.create_container_info_table = \ prespi_create_container_info_table broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) exc = None with broker.get() as conn: try: conn.execute('''SELECT storage_policy_index FROM container_stat''') except BaseException as err: exc = err self.assertTrue('no such column: storage_policy_index' in str(exc)) def tearDown(self): super(TestContainerBrokerBeforeSPI, self).tearDown() broker = ContainerBroker(':memory:', account='a', container='c') broker.initialize(Timestamp('1').internal, 0) with broker.get() as conn: conn.execute('SELECT storage_policy_index FROM container_stat') @patch_policies @with_tempdir def test_object_table_migration(self, tempdir): db_path = os.path.join(tempdir, 'container.db') # initialize an un-migrated database broker = ContainerBroker(db_path, account='a', container='c') put_timestamp = Timestamp(int(time())).internal broker.initialize(put_timestamp, None) with broker.get() as conn: try: conn.execute(''' SELECT storage_policy_index FROM object ''').fetchone()[0] except sqlite3.OperationalError as err: # confirm that the table doesn't have this column self.assertTrue('no such column: storage_policy_index' in str(err)) else: self.fail('broker did not raise sqlite3.OperationalError ' 'trying to select from storage_policy_index ' 'from object table!') # manually insert an existing row to avoid automatic migration obj_put_timestamp = Timestamp(time()).internal with broker.get() as conn: conn.execute(''' INSERT INTO object (name, created_at, size, content_type, etag, deleted) VALUES (?, ?, ?, ?, ?, ?) ''', ('test_name', obj_put_timestamp, 123, 'text/plain', '8f4c680e75ca4c81dc1917ddab0a0b5c', 0)) conn.commit() # make sure we can iter objects without performing migration for o in broker.list_objects_iter(1, None, None, None, None): self.assertEqual(o, ('test_name', obj_put_timestamp, 123, 'text/plain', '8f4c680e75ca4c81dc1917ddab0a0b5c')) # get_info info = broker.get_info() expected = { 'account': 'a', 'container': 'c', 'put_timestamp': put_timestamp, 'delete_timestamp': '0', 'status_changed_at': '0', 'bytes_used': 123, 'object_count': 1, 'reported_put_timestamp': '0', 'reported_delete_timestamp': '0', 'reported_object_count': 0, 'reported_bytes_used': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1, 'storage_policy_index': 0, } for k, v in expected.items(): self.assertEqual(info[k], v, 'The value for %s was %r not %r' % ( k, info[k], v)) self.assertTrue( Timestamp(info['created_at']) > Timestamp(put_timestamp)) self.assertNotEqual(int(info['hash'], 16), 0) orig_hash = info['hash'] # get_replication_info info = broker.get_replication_info() # translate object count for replicators expected['count'] = expected.pop('object_count') for k, v in expected.items(): self.assertEqual(info[k], v) self.assertTrue( Timestamp(info['created_at']) > Timestamp(put_timestamp)) self.assertEqual(info['hash'], orig_hash) self.assertEqual(info['max_row'], 1) self.assertEqual(info['metadata'], '') # get_policy_stats info = broker.get_policy_stats() expected = { 0: {'bytes_used': 123, 'object_count': 1} } self.assertEqual(info, expected) # empty & is_deleted self.assertEqual(broker.empty(), False) self.assertEqual(broker.is_deleted(), False) # no migrations have occurred yet # container_stat table with broker.get() as conn: try: conn.execute(''' SELECT storage_policy_index FROM container_stat 
''').fetchone()[0] except sqlite3.OperationalError as err: # confirm that the table doesn't have this column self.assertTrue('no such column: storage_policy_index' in str(err)) else: self.fail('broker did not raise sqlite3.OperationalError ' 'trying to select from storage_policy_index ' 'from container_stat table!') # object table with broker.get() as conn: try: conn.execute(''' SELECT storage_policy_index FROM object ''').fetchone()[0] except sqlite3.OperationalError as err: # confirm that the table doesn't have this column self.assertTrue('no such column: storage_policy_index' in str(err)) else: self.fail('broker did not raise sqlite3.OperationalError ' 'trying to select from storage_policy_index ' 'from object table!') # policy_stat table with broker.get() as conn: try: conn.execute(''' SELECT storage_policy_index FROM policy_stat ''').fetchone()[0] except sqlite3.OperationalError as err: # confirm that the table does not exist yet self.assertTrue('no such table: policy_stat' in str(err)) else: self.fail('broker did not raise sqlite3.OperationalError ' 'trying to select from storage_policy_index ' 'from policy_stat table!') # now do a PUT with a different value for storage_policy_index # which will update the DB schema as well as update policy_stats # for legacy objects in the DB (those without an SPI) second_object_put_timestamp = Timestamp(time()).internal other_policy = [p for p in POLICIES if p.idx != 0][0] broker.put_object('test_second', second_object_put_timestamp, 456, 'text/plain', 'cbac50c175793513fa3c581551c876ab', storage_policy_index=other_policy.idx) broker._commit_puts_stale_ok() # we are fully migrated and both objects have their # storage_policy_index with broker.get() as conn: storage_policy_index = conn.execute(''' SELECT storage_policy_index FROM container_stat ''').fetchone()[0] self.assertEqual(storage_policy_index, 0) rows = conn.execute(''' SELECT name, storage_policy_index FROM object ''').fetchall() for row in rows: if row[0] == 'test_name': self.assertEqual(row[1], 0) else: self.assertEqual(row[1], other_policy.idx) # and all stats tracking is in place stats = broker.get_policy_stats() self.assertEqual(len(stats), 2) self.assertEqual(stats[0]['object_count'], 1) self.assertEqual(stats[0]['bytes_used'], 123) self.assertEqual(stats[other_policy.idx]['object_count'], 1) self.assertEqual(stats[other_policy.idx]['bytes_used'], 456) # get info still reports on the legacy storage policy info = broker.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 123) # unless you change the storage policy broker.set_storage_policy_index(other_policy.idx) info = broker.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 456) class TestUpdateNewItemFromExisting(unittest.TestCase): # TODO: add test scenarios that have swift_bytes in content_type t0 = '1234567890.00000' t1 = '1234567890.00001' t2 = '1234567890.00002' t3 = '1234567890.00003' t4 = '1234567890.00004' t5 = '1234567890.00005' t6 = '1234567890.00006' t7 = '1234567890.00007' t8 = '1234567890.00008' t20 = '1234567890.00020' t30 = '1234567890.00030' base_new_item = {'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item', 'deleted': '0'} base_existing = {'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting', 'deleted': '0'} # # each scenario is a tuple of: # (existing time, new item times, expected updated item) # # e.g.: # existing -> ({'created_at': t5}, # new_item -> {'created_at': t, 'ctype_timestamp': t, 
'meta_timestamp': t}, # expected -> {'created_at': t, # 'etag': , 'size': , 'content_type': }) # scenarios_when_all_existing_wins = ( # # all new_item times <= all existing times -> existing values win # # existing has attrs at single time # ({'created_at': t3}, {'created_at': t0, 'ctype_timestamp': t0, 'meta_timestamp': t0}, {'created_at': t3, 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3}, {'created_at': t0, 'ctype_timestamp': t0, 'meta_timestamp': t1}, {'created_at': t3, 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3}, {'created_at': t0, 'ctype_timestamp': t1, 'meta_timestamp': t1}, {'created_at': t3, 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3}, {'created_at': t0, 'ctype_timestamp': t1, 'meta_timestamp': t2}, {'created_at': t3, 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3}, {'created_at': t0, 'ctype_timestamp': t1, 'meta_timestamp': t3}, {'created_at': t3, 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3}, {'created_at': t0, 'ctype_timestamp': t3, 'meta_timestamp': t3}, {'created_at': t3, 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3}, {'created_at': t3, 'ctype_timestamp': t3, 'meta_timestamp': t3}, {'created_at': t3, 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), # # existing has attrs at multiple times: # data @ t3, ctype @ t5, meta @t7 -> existing created_at = t3+2+2 # ({'created_at': t3 + '+2+2'}, {'created_at': t0, 'ctype_timestamp': t0, 'meta_timestamp': t0}, {'created_at': t3 + '+2+2', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t3, 'meta_timestamp': t3}, {'created_at': t3 + '+2+2', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t4, 'meta_timestamp': t4}, {'created_at': t3 + '+2+2', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t4, 'meta_timestamp': t5}, {'created_at': t3 + '+2+2', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t4, 'meta_timestamp': t7}, {'created_at': t3 + '+2+2', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t4, 'meta_timestamp': t7}, {'created_at': t3 + '+2+2', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t5, 'meta_timestamp': t5}, {'created_at': t3 + '+2+2', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t5, 'meta_timestamp': t6}, {'created_at': t3 + '+2+2', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t5, 'meta_timestamp': t7}, {'created_at': t3 + '+2+2', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ) scenarios_when_all_new_item_wins = ( # no existing record (None, {'created_at': t4, 'ctype_timestamp': t4, 'meta_timestamp': t4}, {'created_at': t4, 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), (None, 
{'created_at': t4, 'ctype_timestamp': t4, 'meta_timestamp': t5}, {'created_at': t4 + '+0+1', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), (None, {'created_at': t4, 'ctype_timestamp': t5, 'meta_timestamp': t5}, {'created_at': t4 + '+1+0', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), (None, {'created_at': t4, 'ctype_timestamp': t5, 'meta_timestamp': t6}, {'created_at': t4 + '+1+1', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), # # all new_item times > all existing times -> new item values win # # existing has attrs at single time # ({'created_at': t3}, {'created_at': t4, 'ctype_timestamp': t4, 'meta_timestamp': t4}, {'created_at': t4, 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), ({'created_at': t3}, {'created_at': t4, 'ctype_timestamp': t4, 'meta_timestamp': t5}, {'created_at': t4 + '+0+1', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), ({'created_at': t3}, {'created_at': t4, 'ctype_timestamp': t5, 'meta_timestamp': t5}, {'created_at': t4 + '+1+0', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), ({'created_at': t3}, {'created_at': t4, 'ctype_timestamp': t5, 'meta_timestamp': t6}, {'created_at': t4 + '+1+1', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), # # existing has attrs at multiple times: # data @ t3, ctype @ t5, meta @t7 -> existing created_at = t3+2+2 # ({'created_at': t3 + '+2+2'}, {'created_at': t4, 'ctype_timestamp': t6, 'meta_timestamp': t8}, {'created_at': t4 + '+2+2', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), ({'created_at': t3 + '+2+2'}, {'created_at': t6, 'ctype_timestamp': t6, 'meta_timestamp': t8}, {'created_at': t6 + '+0+2', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), ({'created_at': t3 + '+2+2'}, {'created_at': t4, 'ctype_timestamp': t8, 'meta_timestamp': t8}, {'created_at': t4 + '+4+0', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), ({'created_at': t3 + '+2+2'}, {'created_at': t6, 'ctype_timestamp': t8, 'meta_timestamp': t8}, {'created_at': t6 + '+2+0', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), ({'created_at': t3 + '+2+2'}, {'created_at': t8, 'ctype_timestamp': t8, 'meta_timestamp': t8}, {'created_at': t8, 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), ) scenarios_when_some_new_item_wins = ( # # some but not all new_item times > existing times -> mixed updates # # existing has attrs at single time # ({'created_at': t3}, {'created_at': t3, 'ctype_timestamp': t3, 'meta_timestamp': t4}, {'created_at': t3 + '+0+1', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3}, {'created_at': t3, 'ctype_timestamp': t4, 'meta_timestamp': t4}, {'created_at': t3 + '+1+0', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'neW_item'}), ({'created_at': t3}, {'created_at': t3, 'ctype_timestamp': t4, 'meta_timestamp': t5}, {'created_at': t3 + '+1+1', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'neW_item'}), # # existing has attrs at multiple times: # data @ t3, ctype @ t5, meta @t7 -> existing created_at = t3+2+2 # ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t3, 'meta_timestamp': t8}, {'created_at': t3 + '+2+3', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t3, 'ctype_timestamp': t6, 'meta_timestamp': t8}, {'created_at': t3 + '+3+2', 'etag': 'Existing', 
'size': 'eXisting', 'content_type': 'neW_item'}), ({'created_at': t3 + '+2+2'}, {'created_at': t4, 'ctype_timestamp': t4, 'meta_timestamp': t6}, {'created_at': t4 + '+1+2', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'exIsting'}), ({'created_at': t3 + '+2+2'}, {'created_at': t4, 'ctype_timestamp': t6, 'meta_timestamp': t6}, {'created_at': t4 + '+2+1', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'neW_item'}), ({'created_at': t3 + '+2+2'}, {'created_at': t4, 'ctype_timestamp': t4, 'meta_timestamp': t8}, {'created_at': t4 + '+1+3', 'etag': 'New_item', 'size': 'nEw_item', 'content_type': 'exIsting'}), # this scenario is to check that the deltas are in hex ({'created_at': t3 + '+2+2'}, {'created_at': t2, 'ctype_timestamp': t20, 'meta_timestamp': t30}, {'created_at': t3 + '+11+a', 'etag': 'Existing', 'size': 'eXisting', 'content_type': 'neW_item'}), ) def _test_scenario(self, scenario, newer): existing_time, new_item_times, expected_attrs = scenario # this is the existing record... existing = None if existing_time: existing = dict(self.base_existing) existing.update(existing_time) # this is the new item to update new_item = dict(self.base_new_item) new_item.update(new_item_times) # this is the expected result of the update expected = dict(new_item) expected.update(expected_attrs) expected['data_timestamp'] = new_item['created_at'] try: self.assertIs(newer, update_new_item_from_existing(new_item, existing)) self.assertDictEqual(expected, new_item) except AssertionError as e: msg = ('Scenario: existing %s, new_item %s, expected %s.' % scenario) msg = '%s Failed with: %s' % (msg, e.message) raise AssertionError(msg) def test_update_new_item_from_existing(self): for scenario in self.scenarios_when_all_existing_wins: self._test_scenario(scenario, False) for scenario in self.scenarios_when_all_new_item_wins: self._test_scenario(scenario, True) for scenario in self.scenarios_when_some_new_item_wins: self._test_scenario(scenario, True) swift-2.7.1/test/unit/container/test_reconciler.py0000664000567000056710000023522213024044354023501 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
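# Overview of the behaviour exercised below: the container reconciler drains
# the .misplaced_objects account, whose queue entries are grouped into hourly
# containers named int(timestamp // 3600 * 3600). Each entry name encodes
# "<policy_index>:/<account>/<container>/<object>", the listing 'hash' field
# carries the enqueued (possibly composite) timestamp, and the content type
# (application/x-put or application/x-delete) records the queued operation.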
import json import numbers import mock import operator import time import unittest import socket import os import errno import itertools import random from collections import defaultdict from datetime import datetime from six.moves import urllib from swift.container import reconciler from swift.container.server import gen_resp_headers from swift.common.direct_client import ClientException from swift.common import swob from swift.common.header_key_dict import HeaderKeyDict from swift.common.utils import split_path, Timestamp, encode_timestamps from test.unit import debug_logger, FakeRing, fake_http_connect from test.unit.common.middleware.helpers import FakeSwift def timestamp_to_last_modified(timestamp): return datetime.utcfromtimestamp( float(Timestamp(timestamp))).strftime('%Y-%m-%dT%H:%M:%S.%f') def container_resp_headers(**kwargs): return HeaderKeyDict(gen_resp_headers(kwargs)) class FakeStoragePolicySwift(object): def __init__(self): self.storage_policy = defaultdict(FakeSwift) self._mock_oldest_spi_map = {} def __getattribute__(self, name): try: return object.__getattribute__(self, name) except AttributeError: return getattr(self.storage_policy[None], name) def __call__(self, env, start_response): method = env['REQUEST_METHOD'] path = env['PATH_INFO'] _, acc, cont, obj = split_path(env['PATH_INFO'], 0, 4, rest_with_last=True) if not obj: policy_index = None else: policy_index = self._mock_oldest_spi_map.get(cont, 0) # allow backend policy override if 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX' in env: policy_index = int(env['HTTP_X_BACKEND_STORAGE_POLICY_INDEX']) try: return self.storage_policy[policy_index].__call__( env, start_response) except KeyError: pass if method == 'PUT': resp_class = swob.HTTPCreated else: resp_class = swob.HTTPNotFound self.storage_policy[policy_index].register( method, path, resp_class, {}, '') return self.storage_policy[policy_index].__call__( env, start_response) class FakeInternalClient(reconciler.InternalClient): def __init__(self, listings): self.app = FakeStoragePolicySwift() self.user_agent = 'fake-internal-client' self.request_tries = 1 self.parse(listings) def parse(self, listings): self.accounts = defaultdict(lambda: defaultdict(list)) for item, timestamp in listings.items(): # XXX this interface is stupid if isinstance(timestamp, tuple): timestamp, content_type = timestamp else: timestamp, content_type = timestamp, 'application/x-put' storage_policy_index, path = item account, container_name, obj_name = split_path( path.encode('utf-8'), 0, 3, rest_with_last=True) self.accounts[account][container_name].append( (obj_name, storage_policy_index, timestamp, content_type)) for account_name, containers in self.accounts.items(): for con in containers: self.accounts[account_name][con].sort(key=lambda t: t[0]) for account, containers in self.accounts.items(): account_listing_data = [] account_path = '/v1/%s' % account for container, objects in containers.items(): container_path = account_path + '/' + container container_listing_data = [] for entry in objects: (obj_name, storage_policy_index, timestamp, content_type) = entry if storage_policy_index is None and not obj_name: # empty container continue obj_path = container_path + '/' + obj_name ts = Timestamp(timestamp) headers = {'X-Timestamp': ts.normal, 'X-Backend-Timestamp': ts.internal} # register object response self.app.storage_policy[storage_policy_index].register( 'GET', obj_path, swob.HTTPOk, headers) self.app.storage_policy[storage_policy_index].register( 'DELETE', obj_path, swob.HTTPNoContent, {}) # 
container listing entry last_modified = timestamp_to_last_modified(timestamp) # some tests setup mock listings using floats, some use # strings, so normalize here if isinstance(timestamp, numbers.Number): timestamp = '%f' % timestamp obj_data = { 'bytes': 0, # listing data is unicode 'name': obj_name.decode('utf-8'), 'last_modified': last_modified, 'hash': timestamp.decode('utf-8'), 'content_type': content_type, } container_listing_data.append(obj_data) container_listing_data.sort(key=operator.itemgetter('name')) # register container listing response container_headers = {} container_qry_string = '?format=json&marker=&end_marker=' self.app.register('GET', container_path + container_qry_string, swob.HTTPOk, container_headers, json.dumps(container_listing_data)) if container_listing_data: obj_name = container_listing_data[-1]['name'] # client should quote and encode marker end_qry_string = '?format=json&marker=%s&end_marker=' % ( urllib.parse.quote(obj_name.encode('utf-8'))) self.app.register('GET', container_path + end_qry_string, swob.HTTPOk, container_headers, json.dumps([])) self.app.register('DELETE', container_path, swob.HTTPConflict, {}, '') # simple account listing entry container_data = {'name': container} account_listing_data.append(container_data) # register account response account_listing_data.sort(key=operator.itemgetter('name')) account_headers = {} account_qry_string = '?format=json&marker=&end_marker=' self.app.register('GET', account_path + account_qry_string, swob.HTTPOk, account_headers, json.dumps(account_listing_data)) end_qry_string = '?format=json&marker=%s&end_marker=' % ( urllib.parse.quote(account_listing_data[-1]['name'])) self.app.register('GET', account_path + end_qry_string, swob.HTTPOk, account_headers, json.dumps([])) class TestReconcilerUtils(unittest.TestCase): def setUp(self): self.fake_ring = FakeRing() reconciler.direct_get_container_policy_index.reset() def test_parse_raw_obj(self): got = reconciler.parse_raw_obj({ 'name': "2:/AUTH_bob/con/obj", 'hash': Timestamp(2017551.49350).internal, 'last_modified': timestamp_to_last_modified(2017551.49352), 'content_type': 'application/x-delete', }) self.assertEqual(got['q_policy_index'], 2) self.assertEqual(got['account'], 'AUTH_bob') self.assertEqual(got['container'], 'con') self.assertEqual(got['obj'], 'obj') self.assertEqual(got['q_ts'], 2017551.49350) self.assertEqual(got['q_record'], 2017551.49352) self.assertEqual(got['q_op'], 'DELETE') got = reconciler.parse_raw_obj({ 'name': "1:/AUTH_bob/con/obj", 'hash': Timestamp(1234.20190).internal, 'last_modified': timestamp_to_last_modified(1234.20192), 'content_type': 'application/x-put', }) self.assertEqual(got['q_policy_index'], 1) self.assertEqual(got['account'], 'AUTH_bob') self.assertEqual(got['container'], 'con') self.assertEqual(got['obj'], 'obj') self.assertEqual(got['q_ts'], 1234.20190) self.assertEqual(got['q_record'], 1234.20192) self.assertEqual(got['q_op'], 'PUT') # the 'hash' field in object listing has the raw 'created_at' value # which could be a composite of timestamps timestamp_str = encode_timestamps(Timestamp(1234.20190), Timestamp(1245.20190), Timestamp(1256.20190), explicit=True) got = reconciler.parse_raw_obj({ 'name': "1:/AUTH_bob/con/obj", 'hash': timestamp_str, 'last_modified': timestamp_to_last_modified(1234.20192), 'content_type': 'application/x-put', }) self.assertEqual(got['q_policy_index'], 1) self.assertEqual(got['account'], 'AUTH_bob') self.assertEqual(got['container'], 'con') self.assertEqual(got['obj'], 'obj') 
self.assertEqual(got['q_ts'], 1234.20190) self.assertEqual(got['q_record'], 1234.20192) self.assertEqual(got['q_op'], 'PUT') # negative test obj_info = { 'name': "1:/AUTH_bob/con/obj", 'hash': Timestamp(1234.20190).internal, 'last_modified': timestamp_to_last_modified(1234.20192), } self.assertRaises(ValueError, reconciler.parse_raw_obj, obj_info) obj_info['content_type'] = 'foo' self.assertRaises(ValueError, reconciler.parse_raw_obj, obj_info) obj_info['content_type'] = 'appliation/x-post' self.assertRaises(ValueError, reconciler.parse_raw_obj, obj_info) self.assertRaises(ValueError, reconciler.parse_raw_obj, {'name': 'bogus'}) self.assertRaises(ValueError, reconciler.parse_raw_obj, {'name': '-1:/AUTH_test/container'}) self.assertRaises(ValueError, reconciler.parse_raw_obj, {'name': 'asdf:/AUTH_test/c/obj'}) self.assertRaises(KeyError, reconciler.parse_raw_obj, {'name': '0:/AUTH_test/c/obj', 'content_type': 'application/x-put'}) def test_get_container_policy_index(self): ts = itertools.count(int(time.time())) mock_path = 'swift.container.reconciler.direct_head_container' stub_resp_headers = [ container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=0, ), container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=1, ), container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=0, ), ] for permutation in itertools.permutations((0, 1, 2)): reconciler.direct_get_container_policy_index.reset() resp_headers = [stub_resp_headers[i] for i in permutation] with mock.patch(mock_path) as direct_head: direct_head.side_effect = resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') test_values = [(info['x-storage-policy-index'], info['x-backend-status-changed-at']) for info in resp_headers] self.assertEqual(oldest_spi, 0, "oldest policy index wrong " "for permutation %r" % test_values) def test_get_container_policy_index_with_error(self): ts = itertools.count(int(time.time())) mock_path = 'swift.container.reconciler.direct_head_container' stub_resp_headers = [ container_resp_headers( status_change_at=next(ts), storage_policy_index=2, ), container_resp_headers( status_changed_at=next(ts), storage_policy_index=1, ), # old timestamp, but 500 should be ignored... 
ClientException( 'Container Server blew up', http_status=500, http_reason='Server Error', http_headers=container_resp_headers( status_changed_at=Timestamp(0).internal, storage_policy_index=0, ), ), ] random.shuffle(stub_resp_headers) with mock.patch(mock_path) as direct_head: direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') self.assertEqual(oldest_spi, 2) def test_get_container_policy_index_with_socket_error(self): ts = itertools.count(int(time.time())) mock_path = 'swift.container.reconciler.direct_head_container' stub_resp_headers = [ container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=1, ), container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=0, ), socket.error(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED)), ] random.shuffle(stub_resp_headers) with mock.patch(mock_path) as direct_head: direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') self.assertEqual(oldest_spi, 1) def test_get_container_policy_index_with_too_many_errors(self): ts = itertools.count(int(time.time())) mock_path = 'swift.container.reconciler.direct_head_container' stub_resp_headers = [ container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=0, ), socket.error(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED)), ClientException( 'Container Server blew up', http_status=500, http_reason='Server Error', http_headers=container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=1, ), ), ] random.shuffle(stub_resp_headers) with mock.patch(mock_path) as direct_head: direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') self.assertEqual(oldest_spi, None) def test_get_container_policy_index_for_deleted(self): mock_path = 'swift.container.reconciler.direct_head_container' headers = container_resp_headers( status_changed_at=Timestamp(time.time()).internal, storage_policy_index=1, ) stub_resp_headers = [ ClientException( 'Container Not Found', http_status=404, http_reason='Not Found', http_headers=headers, ), ClientException( 'Container Not Found', http_status=404, http_reason='Not Found', http_headers=headers, ), ClientException( 'Container Not Found', http_status=404, http_reason='Not Found', http_headers=headers, ), ] random.shuffle(stub_resp_headers) with mock.patch(mock_path) as direct_head: direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') self.assertEqual(oldest_spi, 1) def test_get_container_policy_index_for_recently_deleted(self): ts = itertools.count(int(time.time())) mock_path = 'swift.container.reconciler.direct_head_container' stub_resp_headers = [ ClientException( 'Container Not Found', http_status=404, http_reason='Not Found', http_headers=container_resp_headers( put_timestamp=next(ts), delete_timestamp=next(ts), status_changed_at=next(ts), storage_policy_index=0, ), ), ClientException( 'Container Not Found', http_status=404, http_reason='Not Found', http_headers=container_resp_headers( put_timestamp=next(ts), delete_timestamp=next(ts), status_changed_at=next(ts), storage_policy_index=1, ), ), ClientException( 'Container Not Found', http_status=404, http_reason='Not Found', http_headers=container_resp_headers( put_timestamp=next(ts), 
delete_timestamp=next(ts), status_changed_at=next(ts), storage_policy_index=2, ), ), ] random.shuffle(stub_resp_headers) with mock.patch(mock_path) as direct_head: direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') self.assertEqual(oldest_spi, 2) def test_get_container_policy_index_for_recently_recreated(self): ts = itertools.count(int(time.time())) mock_path = 'swift.container.reconciler.direct_head_container' stub_resp_headers = [ # old put, no recreate container_resp_headers( delete_timestamp=0, put_timestamp=next(ts), status_changed_at=next(ts), storage_policy_index=0, ), # recently deleted ClientException( 'Container Not Found', http_status=404, http_reason='Not Found', http_headers=container_resp_headers( put_timestamp=next(ts), delete_timestamp=next(ts), status_changed_at=next(ts), storage_policy_index=1, ), ), # recently recreated container_resp_headers( delete_timestamp=next(ts), put_timestamp=next(ts), status_changed_at=next(ts), storage_policy_index=2, ), ] random.shuffle(stub_resp_headers) with mock.patch(mock_path) as direct_head: direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') self.assertEqual(oldest_spi, 2) def test_get_container_policy_index_for_recently_split_brain(self): ts = itertools.count(int(time.time())) mock_path = 'swift.container.reconciler.direct_head_container' stub_resp_headers = [ # oldest put container_resp_headers( delete_timestamp=0, put_timestamp=next(ts), status_changed_at=next(ts), storage_policy_index=0, ), # old recreate container_resp_headers( delete_timestamp=next(ts), put_timestamp=next(ts), status_changed_at=next(ts), storage_policy_index=1, ), # recently put container_resp_headers( delete_timestamp=0, put_timestamp=next(ts), status_changed_at=next(ts), storage_policy_index=2, ), ] random.shuffle(stub_resp_headers) with mock.patch(mock_path) as direct_head: direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') self.assertEqual(oldest_spi, 1) def test_get_container_policy_index_cache(self): now = time.time() ts = itertools.count(int(now)) mock_path = 'swift.container.reconciler.direct_head_container' stub_resp_headers = [ container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=0, ), container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=1, ), container_resp_headers( status_changed_at=Timestamp(next(ts)).internal, storage_policy_index=0, ), ] random.shuffle(stub_resp_headers) with mock.patch(mock_path) as direct_head: direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') self.assertEqual(oldest_spi, 0) # re-mock with errors stub_resp_headers = [ socket.error(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED)), socket.error(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED)), socket.error(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED)), ] with mock.patch('time.time', new=lambda: now): with mock.patch(mock_path) as direct_head: direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') # still cached self.assertEqual(oldest_spi, 0) # propel time forward the_future = now + 31 with mock.patch('time.time', new=lambda: the_future): with mock.patch(mock_path) as direct_head: 
direct_head.side_effect = stub_resp_headers oldest_spi = reconciler.direct_get_container_policy_index( self.fake_ring, 'a', 'con') # expired self.assertEqual(oldest_spi, None) def test_direct_delete_container_entry(self): mock_path = 'swift.common.direct_client.http_connect' connect_args = [] def test_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): connect_args.append({ 'ipaddr': ipaddr, 'port': port, 'device': device, 'partition': partition, 'method': method, 'path': path, 'headers': headers, 'query_string': query_string}) x_timestamp = Timestamp(time.time()) headers = {'x-timestamp': x_timestamp.internal} fake_hc = fake_http_connect(200, 200, 200, give_connect=test_connect) with mock.patch(mock_path, fake_hc): reconciler.direct_delete_container_entry( self.fake_ring, 'a', 'c', 'o', headers=headers) self.assertEqual(len(connect_args), 3) for args in connect_args: self.assertEqual(args['method'], 'DELETE') self.assertEqual(args['path'], '/a/c/o') self.assertEqual(args['headers'].get('x-timestamp'), headers['x-timestamp']) def test_direct_delete_container_entry_with_errors(self): # setup mock direct_delete mock_path = \ 'swift.container.reconciler.direct_delete_container_object' stub_resp = [ None, socket.error(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED)), ClientException( 'Container Server blew up', '10.0.0.12', 6001, 'sdj', 404, 'Not Found' ), ] mock_direct_delete = mock.MagicMock() mock_direct_delete.side_effect = stub_resp with mock.patch(mock_path, mock_direct_delete), \ mock.patch('eventlet.greenpool.DEBUG', False): rv = reconciler.direct_delete_container_entry( self.fake_ring, 'a', 'c', 'o') self.assertEqual(rv, None) self.assertEqual(len(mock_direct_delete.mock_calls), 3) def test_add_to_reconciler_queue(self): mock_path = 'swift.common.direct_client.http_connect' connect_args = [] def test_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): connect_args.append({ 'ipaddr': ipaddr, 'port': port, 'device': device, 'partition': partition, 'method': method, 'path': path, 'headers': headers, 'query_string': query_string}) fake_hc = fake_http_connect(200, 200, 200, give_connect=test_connect) with mock.patch(mock_path, fake_hc): ret = reconciler.add_to_reconciler_queue( self.fake_ring, 'a', 'c', 'o', 17, 5948918.63946, 'DELETE') self.assertTrue(ret) self.assertEqual(ret, str(int(5948918.63946 // 3600 * 3600))) self.assertEqual(len(connect_args), 3) connect_args.sort(key=lambda a: (a['ipaddr'], a['port'])) required_headers = ('x-content-type', 'x-etag') for args in connect_args: self.assertEqual(args['headers']['X-Timestamp'], '5948918.63946') self.assertEqual(args['path'], '/.misplaced_objects/5947200/17:/a/c/o') self.assertEqual(args['headers']['X-Content-Type'], 'application/x-delete') for header in required_headers: self.assertTrue(header in args['headers'], '%r was missing request headers %r' % ( header, args['headers'])) def test_add_to_reconciler_queue_force(self): mock_path = 'swift.common.direct_client.http_connect' connect_args = [] def test_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None): connect_args.append({ 'ipaddr': ipaddr, 'port': port, 'device': device, 'partition': partition, 'method': method, 'path': path, 'headers': headers, 'query_string': query_string}) fake_hc = fake_http_connect(200, 200, 200, give_connect=test_connect) now = time.time() with mock.patch(mock_path, fake_hc), \ mock.patch('swift.container.reconciler.time.time', lambda: now): ret 
= reconciler.add_to_reconciler_queue( self.fake_ring, 'a', 'c', 'o', 17, 5948918.63946, 'PUT', force=True) self.assertTrue(ret) self.assertEqual(ret, str(int(5948918.63946 // 3600 * 3600))) self.assertEqual(len(connect_args), 3) connect_args.sort(key=lambda a: (a['ipaddr'], a['port'])) required_headers = ('x-size', 'x-content-type') for args in connect_args: self.assertEqual(args['headers']['X-Timestamp'], Timestamp(now).internal) self.assertEqual(args['headers']['X-Etag'], '5948918.63946') self.assertEqual(args['path'], '/.misplaced_objects/5947200/17:/a/c/o') for header in required_headers: self.assertTrue(header in args['headers'], '%r was missing request headers %r' % ( header, args['headers'])) def test_add_to_reconciler_queue_fails(self): mock_path = 'swift.common.direct_client.http_connect' fake_connects = [fake_http_connect(200), fake_http_connect(200, raise_timeout_exc=True), fake_http_connect(507)] def fake_hc(*a, **kw): return fake_connects.pop()(*a, **kw) with mock.patch(mock_path, fake_hc): ret = reconciler.add_to_reconciler_queue( self.fake_ring, 'a', 'c', 'o', 17, 5948918.63946, 'PUT') self.assertFalse(ret) def test_add_to_reconciler_queue_socket_error(self): mock_path = 'swift.common.direct_client.http_connect' exc = socket.error(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED)) fake_connects = [fake_http_connect(200), fake_http_connect(200, raise_timeout_exc=True), fake_http_connect(500, raise_exc=exc)] def fake_hc(*a, **kw): return fake_connects.pop()(*a, **kw) with mock.patch(mock_path, fake_hc): ret = reconciler.add_to_reconciler_queue( self.fake_ring, 'a', 'c', 'o', 17, 5948918.63946, 'DELETE') self.assertFalse(ret) def listing_qs(marker): return "?format=json&marker=%s&end_marker=" % \ urllib.parse.quote(marker.encode('utf-8')) class TestReconciler(unittest.TestCase): maxDiff = None def setUp(self): self.logger = debug_logger() conf = {} with mock.patch('swift.container.reconciler.InternalClient'): self.reconciler = reconciler.ContainerReconciler(conf) self.reconciler.logger = self.logger self.start_interval = int(time.time() // 3600 * 3600) self.current_container_path = '/v1/.misplaced_objects/%d' % ( self.start_interval) + listing_qs('') def _mock_listing(self, objects): self.reconciler.swift = FakeInternalClient(objects) self.fake_swift = self.reconciler.swift.app def _mock_oldest_spi(self, container_oldest_spi_map): self.fake_swift._mock_oldest_spi_map = container_oldest_spi_map def _run_once(self): """ Helper method to run the reconciler once with appropriate direct-client mocks in place. Returns the list of direct-deleted container entries in the format [(acc1, con1, obj1), ...] 
""" def mock_oldest_spi(ring, account, container_name): return self.fake_swift._mock_oldest_spi_map.get(container_name, 0) items = { 'direct_get_container_policy_index': mock_oldest_spi, 'direct_delete_container_entry': mock.DEFAULT, } mock_time_iter = itertools.count(self.start_interval) with mock.patch.multiple(reconciler, **items) as mocks: self.mock_delete_container_entry = \ mocks['direct_delete_container_entry'] with mock.patch('time.time', mock_time_iter.next): self.reconciler.run_once() return [c[1][1:4] for c in mocks['direct_delete_container_entry'].mock_calls] def test_invalid_queue_name(self): self._mock_listing({ (None, "/.misplaced_objects/3600/bogus"): 3618.84187, }) deleted_container_entries = self._run_once() # we try to find something useful self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('bogus'))]) # but only get the bogus record self.assertEqual(self.reconciler.stats['invalid_record'], 1) # and just leave it on the queue self.assertEqual(self.reconciler.stats['pop_queue'], 0) self.assertFalse(deleted_container_entries) def test_invalid_queue_name_marches_onward(self): # there's something useful there on the queue self._mock_listing({ (None, "/.misplaced_objects/3600/00000bogus"): 3600.0000, (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): 3618.84187, (1, "/AUTH_bob/c/o1"): 3618.84187, }) self._mock_oldest_spi({'c': 1}) # already in the right spot! deleted_container_entries = self._run_once() # we get all the queue entries we can self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/AUTH_bob/c/o1'))]) # and one is garbage self.assertEqual(self.reconciler.stats['invalid_record'], 1) # but the other is workable self.assertEqual(self.reconciler.stats['noop_object'], 1) # so pop the queue for that one self.assertEqual(self.reconciler.stats['pop_queue'], 1) self.assertEqual(deleted_container_entries, [('.misplaced_objects', '3600', '1:/AUTH_bob/c/o1')]) self.assertEqual(self.reconciler.stats['success'], 1) def test_queue_name_with_policy_index_delimiter_in_name(self): q_path = '.misplaced_objects/3600' obj_path = "AUTH_bob/c:sneaky/o1:sneaky" # there's something useful there on the queue self._mock_listing({ (None, "/%s/1:/%s" % (q_path, obj_path)): 3618.84187, (1, '/%s' % obj_path): 3618.84187, }) self._mock_oldest_spi({'c': 0}) deleted_container_entries = self._run_once() # we find the misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/%s' % obj_path))]) # move it self.assertEqual(self.reconciler.stats['copy_attempt'], 1) self.assertEqual(self.reconciler.stats['copy_success'], 1) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/%s' % obj_path), ('DELETE', '/v1/%s' % obj_path)]) delete_headers = self.fake_swift.storage_policy[1].headers[1] 
self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/%s' % obj_path), ('PUT', '/v1/%s' % obj_path)]) # clean up the source self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1) self.assertEqual(self.reconciler.stats['cleanup_success'], 1) # we DELETE the object from the wrong place with source_ts + offset 1 # timestamp to make sure the change takes effect self.assertEqual(delete_headers.get('X-Timestamp'), Timestamp(3618.84187, offset=1).internal) # and pop the queue for that one self.assertEqual(self.reconciler.stats['pop_queue'], 1) self.assertEqual(deleted_container_entries, [( '.misplaced_objects', '3600', '1:/%s' % obj_path)]) self.assertEqual(self.reconciler.stats['success'], 1) def test_unable_to_direct_get_oldest_storage_policy(self): self._mock_listing({ (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): 3618.84187, }) # the reconciler gets "None" if we can't quorum the container self._mock_oldest_spi({'c': None}) deleted_container_entries = self._run_once() # we look for misplaced objects self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/AUTH_bob/c/o1'))]) # but can't really say where to go looking self.assertEqual(self.reconciler.stats['unavailable_container'], 1) # we don't clean up anything self.assertEqual(self.reconciler.stats['cleanup_object'], 0) # and we definitely should not pop_queue self.assertFalse(deleted_container_entries) self.assertEqual(self.reconciler.stats['retry'], 1) def test_object_move(self): self._mock_listing({ (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): 3618.84187, (1, "/AUTH_bob/c/o1"): 3618.84187, }) self._mock_oldest_spi({'c': 0}) deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/AUTH_bob/c/o1'))]) # moves it self.assertEqual(self.reconciler.stats['copy_attempt'], 1) self.assertEqual(self.reconciler.stats['copy_success'], 1) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1'), ('DELETE', '/v1/AUTH_bob/c/o1')]) delete_headers = self.fake_swift.storage_policy[1].headers[1] self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1'), ('PUT', '/v1/AUTH_bob/c/o1')]) put_headers = self.fake_swift.storage_policy[0].headers[1] # we PUT the object in the right place with q_ts + offset 2 self.assertEqual(put_headers.get('X-Timestamp'), Timestamp(3618.84187, offset=2)) # cleans up the old self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1) self.assertEqual(self.reconciler.stats['cleanup_success'], 1) # we DELETE the object from the wrong place with source_ts + offset 1 # timestamp to make sure the change takes effect self.assertEqual(delete_headers.get('X-Timestamp'), Timestamp(3618.84187, offset=1)) # and when we're done, we pop the entry from the queue self.assertEqual(self.reconciler.stats['pop_queue'], 1) self.assertEqual(deleted_container_entries, [('.misplaced_objects', '3600', '1:/AUTH_bob/c/o1')]) 
self.assertEqual(self.reconciler.stats['success'], 1) def test_object_move_the_other_direction(self): self._mock_listing({ (None, "/.misplaced_objects/3600/0:/AUTH_bob/c/o1"): 3618.84187, (0, "/AUTH_bob/c/o1"): 3618.84187, }) self._mock_oldest_spi({'c': 1}) deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('0:/AUTH_bob/c/o1'))]) # moves it self.assertEqual(self.reconciler.stats['copy_attempt'], 1) self.assertEqual(self.reconciler.stats['copy_success'], 1) self.assertEqual( self.fake_swift.storage_policy[0].calls, [('GET', '/v1/AUTH_bob/c/o1'), # 2 ('DELETE', '/v1/AUTH_bob/c/o1')]) # 4 delete_headers = self.fake_swift.storage_policy[0].headers[1] self.assertEqual( self.fake_swift.storage_policy[1].calls, [('HEAD', '/v1/AUTH_bob/c/o1'), # 1 ('PUT', '/v1/AUTH_bob/c/o1')]) # 3 put_headers = self.fake_swift.storage_policy[1].headers[1] # we PUT the object in the right place with q_ts + offset 2 self.assertEqual(put_headers.get('X-Timestamp'), Timestamp(3618.84187, offset=2).internal) # cleans up the old self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1) self.assertEqual(self.reconciler.stats['cleanup_success'], 1) # we DELETE the object from the wrong place with source_ts + offset 1 # timestamp to make sure the change takes effect self.assertEqual(delete_headers.get('X-Timestamp'), Timestamp(3618.84187, offset=1).internal) # and when we're done, we pop the entry from the queue self.assertEqual(self.reconciler.stats['pop_queue'], 1) self.assertEqual(deleted_container_entries, [('.misplaced_objects', '3600', '0:/AUTH_bob/c/o1')]) self.assertEqual(self.reconciler.stats['success'], 1) def test_object_move_with_unicode_and_spaces(self): # the "name" in listings and the unicode string passed to all # functions where we call them with (account, container, obj) obj_name = u"AUTH_bob/c \u062a/o1 \u062a" # anytime we talk about a call made to swift for a path obj_path = obj_name.encode('utf-8') # this mock expects unquoted unicode because it handles container # listings as well as paths self._mock_listing({ (None, "/.misplaced_objects/3600/1:/%s" % obj_name): 3618.84187, (1, "/%s" % obj_name): 3618.84187, }) self._mock_oldest_spi({'c': 0}) deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) # listing_qs encodes and quotes - so give it name self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/%s' % obj_name))]) # moves it self.assertEqual(self.reconciler.stats['copy_attempt'], 1) self.assertEqual(self.reconciler.stats['copy_success'], 1) # these calls are to the real path self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/%s' % obj_path), # 2 ('DELETE', '/v1/%s' % obj_path)]) # 4 delete_headers = self.fake_swift.storage_policy[1].headers[1] self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/%s' % obj_path), # 1 ('PUT', '/v1/%s' % obj_path)]) # 3 
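# (the "# 1".."# 4" markers above record the request order: HEAD the
# destination policy, GET the source policy, PUT the destination, then
# DELETE the source)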
put_headers = self.fake_swift.storage_policy[0].headers[1] # we PUT the object in the right place with q_ts + offset 2 self.assertEqual(put_headers.get('X-Timestamp'), Timestamp(3618.84187, offset=2).internal) # cleans up the old self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1) self.assertEqual(self.reconciler.stats['cleanup_success'], 1) # we DELETE the object from the wrong place with source_ts + offset 1 # timestamp to make sure the change takes effect self.assertEqual(delete_headers.get('X-Timestamp'), Timestamp(3618.84187, offset=1).internal) self.assertEqual( delete_headers.get('X-Backend-Storage-Policy-Index'), '1') # and when we're done, we pop the entry from the queue self.assertEqual(self.reconciler.stats['pop_queue'], 1) # this mock received the name, it's encoded down in buffered_http self.assertEqual(deleted_container_entries, [('.misplaced_objects', '3600', '1:/%s' % obj_name)]) self.assertEqual(self.reconciler.stats['success'], 1) def test_object_delete(self): q_ts = time.time() self._mock_listing({ (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): ( Timestamp(q_ts).internal, 'application/x-delete'), # object exists in "correct" storage policy - slightly older (0, "/AUTH_bob/c/o1"): Timestamp(q_ts - 1).internal, }) self._mock_oldest_spi({'c': 0}) # the tombstone exists in the enqueued storage policy self.fake_swift.storage_policy[1].register( 'GET', '/v1/AUTH_bob/c/o1', swob.HTTPNotFound, {'X-Backend-Timestamp': Timestamp(q_ts).internal}) deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/AUTH_bob/c/o1'))]) # delete it self.assertEqual(self.reconciler.stats['delete_attempt'], 1) self.assertEqual(self.reconciler.stats['delete_success'], 1) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1'), ('DELETE', '/v1/AUTH_bob/c/o1')]) delete_headers = self.fake_swift.storage_policy[1].headers[1] self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1'), ('DELETE', '/v1/AUTH_bob/c/o1')]) reconcile_headers = self.fake_swift.storage_policy[0].headers[1] # we DELETE the object in the right place with q_ts + offset 2 self.assertEqual(reconcile_headers.get('X-Timestamp'), Timestamp(q_ts, offset=2).internal) # cleans up the old self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1) self.assertEqual(self.reconciler.stats['cleanup_success'], 1) # we DELETE the object from the wrong place with source_ts + offset 1 # timestamp to make sure the change takes effect self.assertEqual(delete_headers.get('X-Timestamp'), Timestamp(q_ts, offset=1)) # and when we're done, we pop the entry from the queue self.assertEqual(self.reconciler.stats['pop_queue'], 1) self.assertEqual(deleted_container_entries, [('.misplaced_objects', '3600', '1:/AUTH_bob/c/o1')]) self.assertEqual(self.reconciler.stats['success'], 1) def test_object_enqueued_for_the_correct_dest_noop(self): self._mock_listing({ (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): 3618.84187, (1, "/AUTH_bob/c/o1"): 3618.84187, }) self._mock_oldest_spi({'c': 1}) # already in the right spot! 
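# the queued policy index (1) matches the container's current policy, so the
# reconciler should treat the entry as a no-op and simply pop it off the queue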
deleted_container_entries = self._run_once() # nothing to see here self.assertEqual(self.reconciler.stats['noop_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/AUTH_bob/c/o1'))]) # so we just pop the queue self.assertEqual(self.reconciler.stats['pop_queue'], 1) self.assertEqual(deleted_container_entries, [('.misplaced_objects', '3600', '1:/AUTH_bob/c/o1')]) self.assertEqual(self.reconciler.stats['success'], 1) def test_object_move_src_object_newer_than_queue_entry(self): # setup the cluster self._mock_listing({ (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): 3600.123456, (1, '/AUTH_bob/c/o1'): 3600.234567, # slightly newer }) self._mock_oldest_spi({'c': 0}) # destination # turn the crank deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/AUTH_bob/c/o1'))]) # proceed with the move self.assertEqual(self.reconciler.stats['copy_attempt'], 1) self.assertEqual(self.reconciler.stats['copy_success'], 1) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1'), # 2 ('DELETE', '/v1/AUTH_bob/c/o1')]) # 4 delete_headers = self.fake_swift.storage_policy[1].headers[1] self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1'), # 1 ('PUT', '/v1/AUTH_bob/c/o1')]) # 3 # .. with source timestamp + offset 2 put_headers = self.fake_swift.storage_policy[0].headers[1] self.assertEqual(put_headers.get('X-Timestamp'), Timestamp(3600.234567, offset=2)) # src object is cleaned up self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1) self.assertEqual(self.reconciler.stats['cleanup_success'], 1) # ... 
with q_ts + offset 1 self.assertEqual(delete_headers.get('X-Timestamp'), Timestamp(3600.123456, offset=1)) # and queue is popped self.assertEqual(self.reconciler.stats['pop_queue'], 1) self.assertEqual(deleted_container_entries, [('.misplaced_objects', '3600', '1:/AUTH_bob/c/o1')]) self.assertEqual(self.reconciler.stats['success'], 1) def test_object_move_src_object_older_than_queue_entry(self): # should be some sort of retry case q_ts = time.time() container = str(int(q_ts // 3600 * 3600)) q_path = '.misplaced_objects/%s' % container self._mock_listing({ (None, "/%s/1:/AUTH_bob/c/o1" % q_path): q_ts, (1, '/AUTH_bob/c/o1'): q_ts - 0.00001, # slightly older }) self._mock_oldest_spi({'c': 0}) deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', '/v1/%s' % q_path + listing_qs('')), ('GET', '/v1/%s' % q_path + listing_qs('1:/AUTH_bob/c/o1')), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs(container))]) self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1')]) # but no object copy is attempted self.assertEqual(self.reconciler.stats['unavailable_source'], 1) self.assertEqual(self.reconciler.stats['copy_attempt'], 0) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1')]) # src object is un-modified self.assertEqual(self.reconciler.stats['cleanup_attempt'], 0) # queue is un-changed, we'll have to retry self.assertEqual(self.reconciler.stats['pop_queue'], 0) self.assertEqual(deleted_container_entries, []) self.assertEqual(self.reconciler.stats['retry'], 1) def test_src_object_unavailable_with_slightly_newer_tombstone(self): # should be some sort of retry case q_ts = float(Timestamp(time.time())) container = str(int(q_ts // 3600 * 3600)) q_path = '.misplaced_objects/%s' % container self._mock_listing({ (None, "/%s/1:/AUTH_bob/c/o1" % q_path): q_ts, }) self._mock_oldest_spi({'c': 0}) self.fake_swift.storage_policy[1].register( 'GET', '/v1/AUTH_bob/c/o1', swob.HTTPNotFound, {'X-Backend-Timestamp': Timestamp(q_ts, offset=2).internal}) deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', '/v1/%s' % q_path + listing_qs('')), ('GET', '/v1/%s' % q_path + listing_qs('1:/AUTH_bob/c/o1')), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs(container))]) self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1')]) # but no object copy is attempted self.assertEqual(self.reconciler.stats['unavailable_source'], 1) self.assertEqual(self.reconciler.stats['copy_attempt'], 0) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1')]) # src object is un-modified self.assertEqual(self.reconciler.stats['cleanup_attempt'], 0) # queue is un-changed, we'll have to retry self.assertEqual(self.reconciler.stats['pop_queue'], 0) self.assertEqual(deleted_container_entries, []) self.assertEqual(self.reconciler.stats['retry'], 1) def test_src_object_unavailable_server_error(self): # should be some sort of retry case q_ts = float(Timestamp(time.time())) container = str(int(q_ts // 3600 * 3600)) q_path = '.misplaced_objects/%s' % container self._mock_listing({ (None, "/%s/1:/AUTH_bob/c/o1" % q_path): q_ts, }) self._mock_oldest_spi({'c': 0}) 
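# register a 503 for the source GET: the reconciler should record the source
# as unavailable, skip the copy, and leave the queue entry in place for retry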
self.fake_swift.storage_policy[1].register( 'GET', '/v1/AUTH_bob/c/o1', swob.HTTPServiceUnavailable, {}) deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', '/v1/%s' % q_path + listing_qs('')), ('GET', '/v1/%s' % q_path + listing_qs('1:/AUTH_bob/c/o1')), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs(container))]) self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1')]) # but no object copy is attempted self.assertEqual(self.reconciler.stats['unavailable_source'], 1) self.assertEqual(self.reconciler.stats['copy_attempt'], 0) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1')]) # src object is un-modified self.assertEqual(self.reconciler.stats['cleanup_attempt'], 0) # queue is un-changed, we'll have to retry self.assertEqual(self.reconciler.stats['pop_queue'], 0) self.assertEqual(deleted_container_entries, []) self.assertEqual(self.reconciler.stats['retry'], 1) def test_object_move_fails_cleanup(self): # setup the cluster self._mock_listing({ (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): 3600.123456, (1, '/AUTH_bob/c/o1'): 3600.123457, # slightly newer }) self._mock_oldest_spi({'c': 0}) # destination # make the DELETE blow up self.fake_swift.storage_policy[1].register( 'DELETE', '/v1/AUTH_bob/c/o1', swob.HTTPServiceUnavailable, {}) # turn the crank deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/AUTH_bob/c/o1'))]) # proceed with the move self.assertEqual(self.reconciler.stats['copy_attempt'], 1) self.assertEqual(self.reconciler.stats['copy_success'], 1) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1'), # 2 ('DELETE', '/v1/AUTH_bob/c/o1')]) # 4 delete_headers = self.fake_swift.storage_policy[1].headers[1] self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1'), # 1 ('PUT', '/v1/AUTH_bob/c/o1')]) # 3 # .. with source timestamp + offset 2 put_headers = self.fake_swift.storage_policy[0].headers[1] self.assertEqual(put_headers.get('X-Timestamp'), Timestamp(3600.123457, offset=2)) # we try to cleanup self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1) # ... with q_ts + offset 1 self.assertEqual(delete_headers.get('X-Timestamp'), Timestamp(3600.12346, offset=1)) # but cleanup fails! 
self.assertEqual(self.reconciler.stats['cleanup_failed'], 1) # so the queue is not popped self.assertEqual(self.reconciler.stats['pop_queue'], 0) self.assertEqual(deleted_container_entries, []) # and we'll have to retry self.assertEqual(self.reconciler.stats['retry'], 1) def test_object_move_src_object_is_forever_gone(self): # oh boy, hate to be here - this is an oldy q_ts = self.start_interval - self.reconciler.reclaim_age - 1 self._mock_listing({ (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): q_ts, }) self._mock_oldest_spi({'c': 0}) deleted_container_entries = self._run_once() # found a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/AUTH_bob/c/o1'))]) self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1')]) # but it's gone :\ self.assertEqual(self.reconciler.stats['lost_source'], 1) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1')]) # gah, look, even if it was out there somewhere - we've been at this # two weeks and haven't found it. We can't just keep looking forever, # so... we're done self.assertEqual(self.reconciler.stats['pop_queue'], 1) self.assertEqual(deleted_container_entries, [('.misplaced_objects', '3600', '1:/AUTH_bob/c/o1')]) # dunno if this is helpful, but FWIW we don't throw tombstones? self.assertEqual(self.reconciler.stats['cleanup_attempt'], 0) self.assertEqual(self.reconciler.stats['success'], 1) # lol def test_object_move_dest_already_moved(self): self._mock_listing({ (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): 3679.2019, (1, "/AUTH_bob/c/o1"): 3679.2019, (0, "/AUTH_bob/c/o1"): 3679.2019, }) self._mock_oldest_spi({'c': 0}) deleted_container_entries = self._run_once() # we look for misplaced objects self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('3600')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')), ('GET', '/v1/.misplaced_objects/3600' + listing_qs('1:/AUTH_bob/c/o1'))]) # but we found it already in the right place! 
        self.assertEqual(self.reconciler.stats['found_object'], 1)
        self.assertEqual(
            self.fake_swift.storage_policy[0].calls,
            [('HEAD', '/v1/AUTH_bob/c/o1')])
        # so no attempt to read the source is made, but we do cleanup
        self.assertEqual(
            self.fake_swift.storage_policy[1].calls,
            [('DELETE', '/v1/AUTH_bob/c/o1')])
        delete_headers = self.fake_swift.storage_policy[1].headers[0]
        # rather we just clean up the dark matter
        self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1)
        self.assertEqual(self.reconciler.stats['cleanup_success'], 1)
        self.assertEqual(delete_headers.get('X-Timestamp'),
                         Timestamp(3679.2019, offset=1))
        # and wipe our hands of it
        self.assertEqual(self.reconciler.stats['pop_queue'], 1)
        self.assertEqual(deleted_container_entries,
                         [('.misplaced_objects', '3600',
                           '1:/AUTH_bob/c/o1')])
        self.assertEqual(self.reconciler.stats['success'], 1)

    def test_object_move_dest_object_newer_than_queue_entry(self):
        self._mock_listing({
            (None, "/.misplaced_objects/3600/1:/AUTH_bob/c/o1"): 3679.2019,
            (1, "/AUTH_bob/c/o1"): 3679.2019,
            (0, "/AUTH_bob/c/o1"): 3679.2019 + 0.00001,  # slightly newer
        })
        self._mock_oldest_spi({'c': 0})
        deleted_container_entries = self._run_once()
        # we look for misplaced objects...
        self.assertEqual(
            self.fake_swift.calls,
            [('GET', self.current_container_path),
             ('GET', '/v1/.misplaced_objects' + listing_qs('')),
             ('GET', '/v1/.misplaced_objects' + listing_qs('3600')),
             ('GET', '/v1/.misplaced_objects/3600' + listing_qs('')),
             ('GET', '/v1/.misplaced_objects/3600' +
              listing_qs('1:/AUTH_bob/c/o1'))])
        # but we found it already in the right place!
        self.assertEqual(self.reconciler.stats['found_object'], 1)
        self.assertEqual(
            self.fake_swift.storage_policy[0].calls,
            [('HEAD', '/v1/AUTH_bob/c/o1')])
        # so no attempt to read is made, but we do cleanup
        self.assertEqual(self.reconciler.stats['copy_attempt'], 0)
        self.assertEqual(
            self.fake_swift.storage_policy[1].calls,
            [('DELETE', '/v1/AUTH_bob/c/o1')])
        delete_headers = self.fake_swift.storage_policy[1].headers[0]
        # rather we just clean up the dark matter
        self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1)
        self.assertEqual(self.reconciler.stats['cleanup_success'], 1)
        self.assertEqual(delete_headers.get('X-Timestamp'),
                         Timestamp(3679.2019, offset=1))
        # and since we cleaned up the old object, this counts as done
        self.assertEqual(self.reconciler.stats['pop_queue'], 1)
        self.assertEqual(deleted_container_entries,
                         [('.misplaced_objects', '3600',
                           '1:/AUTH_bob/c/o1')])
        self.assertEqual(self.reconciler.stats['success'], 1)

    def test_object_move_dest_object_older_than_queue_entry(self):
        self._mock_listing({
            (None, "/.misplaced_objects/36000/1:/AUTH_bob/c/o1"): 36123.38393,
            (1, "/AUTH_bob/c/o1"): 36123.38393,
            (0, "/AUTH_bob/c/o1"): 36123.38393 - 0.00001,  # slightly older
        })
        self._mock_oldest_spi({'c': 0})
        deleted_container_entries = self._run_once()
        # we found a misplaced object
        self.assertEqual(self.reconciler.stats['misplaced_object'], 1)
        self.assertEqual(
            self.fake_swift.calls,
            [('GET', self.current_container_path),
             ('GET', '/v1/.misplaced_objects' + listing_qs('')),
             ('GET', '/v1/.misplaced_objects' + listing_qs('36000')),
             ('GET', '/v1/.misplaced_objects/36000' + listing_qs('')),
             ('GET', '/v1/.misplaced_objects/36000' +
              listing_qs('1:/AUTH_bob/c/o1'))])
        # and since our version is *newer*, we overwrite
        self.assertEqual(self.reconciler.stats['copy_attempt'], 1)
        self.assertEqual(self.reconciler.stats['copy_success'], 1)
        self.assertEqual(
            self.fake_swift.storage_policy[1].calls,
            [('GET', '/v1/AUTH_bob/c/o1'),  # 2
             ('DELETE',
'/v1/AUTH_bob/c/o1')]) # 4 delete_headers = self.fake_swift.storage_policy[1].headers[1] self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1'), # 1 ('PUT', '/v1/AUTH_bob/c/o1')]) # 3 # ... with a q_ts + offset 2 put_headers = self.fake_swift.storage_policy[0].headers[1] self.assertEqual(put_headers.get('X-Timestamp'), Timestamp(36123.38393, offset=2)) # then clean the dark matter self.assertEqual(self.reconciler.stats['cleanup_attempt'], 1) self.assertEqual(self.reconciler.stats['cleanup_success'], 1) # ... with a q_ts + offset 1 self.assertEqual(delete_headers.get('X-Timestamp'), Timestamp(36123.38393, offset=1)) # and pop the queue self.assertEqual(self.reconciler.stats['pop_queue'], 1) self.assertEqual(deleted_container_entries, [('.misplaced_objects', '36000', '1:/AUTH_bob/c/o1')]) self.assertEqual(self.reconciler.stats['success'], 1) def test_object_move_put_fails(self): # setup the cluster self._mock_listing({ (None, "/.misplaced_objects/36000/1:/AUTH_bob/c/o1"): 36123.383925, (1, "/AUTH_bob/c/o1"): 36123.383925, }) self._mock_oldest_spi({'c': 0}) # make the put to dest fail! self.fake_swift.storage_policy[0].register( 'PUT', '/v1/AUTH_bob/c/o1', swob.HTTPServiceUnavailable, {}) # turn the crank deleted_container_entries = self._run_once() # we find a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('36000')), ('GET', '/v1/.misplaced_objects/36000' + listing_qs('')), ('GET', '/v1/.misplaced_objects/36000' + listing_qs('1:/AUTH_bob/c/o1'))]) # and try to move it, but it fails self.assertEqual(self.reconciler.stats['copy_attempt'], 1) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1')]) # 2 self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1'), # 1 ('PUT', '/v1/AUTH_bob/c/o1')]) # 3 put_headers = self.fake_swift.storage_policy[0].headers[1] # ...with q_ts + offset 2 (20-microseconds) self.assertEqual(put_headers.get('X-Timestamp'), Timestamp(36123.383925, offset=2)) # but it failed self.assertEqual(self.reconciler.stats['copy_success'], 0) self.assertEqual(self.reconciler.stats['copy_failed'], 1) # ... 
so we don't clean up the source self.assertEqual(self.reconciler.stats['cleanup_attempt'], 0) # and we don't pop the queue self.assertEqual(deleted_container_entries, []) self.assertEqual(self.reconciler.stats['unhandled_errors'], 0) self.assertEqual(self.reconciler.stats['retry'], 1) def test_object_move_put_blows_up_crazy_town(self): # setup the cluster self._mock_listing({ (None, "/.misplaced_objects/36000/1:/AUTH_bob/c/o1"): 36123.383925, (1, "/AUTH_bob/c/o1"): 36123.383925, }) self._mock_oldest_spi({'c': 0}) # make the put to dest blow up crazy town def blow_up(*args, **kwargs): raise Exception('kaboom!') self.fake_swift.storage_policy[0].register( 'PUT', '/v1/AUTH_bob/c/o1', blow_up, {}) # turn the crank deleted_container_entries = self._run_once() # we find a misplaced object self.assertEqual(self.reconciler.stats['misplaced_object'], 1) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs('36000')), ('GET', '/v1/.misplaced_objects/36000' + listing_qs('')), ('GET', '/v1/.misplaced_objects/36000' + listing_qs('1:/AUTH_bob/c/o1'))]) # and attempt to move it self.assertEqual(self.reconciler.stats['copy_attempt'], 1) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_bob/c/o1')]) # 2 self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_bob/c/o1'), # 1 ('PUT', '/v1/AUTH_bob/c/o1')]) # 3 put_headers = self.fake_swift.storage_policy[0].headers[1] # ...with q_ts + offset 2 (20-microseconds) self.assertEqual(put_headers.get('X-Timestamp'), Timestamp(36123.383925, offset=2)) # but it blows up hard self.assertEqual(self.reconciler.stats['unhandled_error'], 1) # so we don't cleanup self.assertEqual(self.reconciler.stats['cleanup_attempt'], 0) # and we don't pop the queue self.assertEqual(self.reconciler.stats['pop_queue'], 0) self.assertEqual(deleted_container_entries, []) self.assertEqual(self.reconciler.stats['retry'], 1) def test_object_move_no_such_object_no_tombstone_recent(self): q_ts = float(Timestamp(time.time())) container = str(int(q_ts // 3600 * 3600)) q_path = '.misplaced_objects/%s' % container self._mock_listing({ (None, "/%s/1:/AUTH_jeb/c/o1" % q_path): q_ts }) self._mock_oldest_spi({'c': 0}) deleted_container_entries = self._run_once() self.assertEqual( self.fake_swift.calls, [('GET', '/v1/.misplaced_objects/%s' % container + listing_qs('')), ('GET', '/v1/.misplaced_objects/%s' % container + listing_qs('1:/AUTH_jeb/c/o1')), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs(container))]) self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_jeb/c/o1')], ) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_jeb/c/o1')], ) # the queue entry is recent enough that there could easily be # tombstones on offline nodes or something, so we'll just leave it # here and try again later self.assertEqual(deleted_container_entries, []) def test_object_move_no_such_object_no_tombstone_ancient(self): queue_ts = float(Timestamp(time.time())) - \ self.reconciler.reclaim_age * 1.1 container = str(int(queue_ts // 3600 * 3600)) self._mock_listing({ ( None, "/.misplaced_objects/%s/1:/AUTH_jeb/c/o1" % container ): queue_ts }) self._mock_oldest_spi({'c': 0}) deleted_container_entries = self._run_once() self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), 
('GET', '/v1/.misplaced_objects' + listing_qs(container)), ('GET', '/v1/.misplaced_objects/%s' % container + listing_qs('')), ('GET', '/v1/.misplaced_objects/%s' % container + listing_qs('1:/AUTH_jeb/c/o1'))]) self.assertEqual( self.fake_swift.storage_policy[0].calls, [('HEAD', '/v1/AUTH_jeb/c/o1')], ) self.assertEqual( self.fake_swift.storage_policy[1].calls, [('GET', '/v1/AUTH_jeb/c/o1')], ) # the queue entry is old enough that the tombstones, if any, have # probably been reaped, so we'll just give up self.assertEqual( deleted_container_entries, [('.misplaced_objects', container, '1:/AUTH_jeb/c/o1')]) def test_delete_old_empty_queue_containers(self): ts = time.time() - self.reconciler.reclaim_age * 1.1 container = str(int(ts // 3600 * 3600)) older_ts = ts - 3600 older_container = str(int(older_ts // 3600 * 3600)) self._mock_listing({ (None, "/.misplaced_objects/%s/" % container): 0, (None, "/.misplaced_objects/%s/something" % older_container): 0, }) deleted_container_entries = self._run_once() self.assertEqual(deleted_container_entries, []) self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs(container)), ('GET', '/v1/.misplaced_objects/%s' % container + listing_qs('')), ('DELETE', '/v1/.misplaced_objects/%s' % container), ('GET', '/v1/.misplaced_objects/%s' % older_container + listing_qs('')), ('GET', '/v1/.misplaced_objects/%s' % older_container + listing_qs('something'))]) self.assertEqual(self.reconciler.stats['invalid_record'], 1) def test_iter_over_old_containers_in_reverse(self): step = reconciler.MISPLACED_OBJECTS_CONTAINER_DIVISOR now = self.start_interval containers = [] for i in range(10): container_ts = int(now - step * i) container_name = str(container_ts // 3600 * 3600) containers.append(container_name) # add some old containers too now -= self.reconciler.reclaim_age old_containers = [] for i in range(10): container_ts = int(now - step * i) container_name = str(container_ts // 3600 * 3600) old_containers.append(container_name) containers.sort() old_containers.sort() all_containers = old_containers + containers self._mock_listing(dict(( (None, "/.misplaced_objects/%s/" % container), 0 ) for container in all_containers)) deleted_container_entries = self._run_once() self.assertEqual(deleted_container_entries, []) last_container = all_containers[-1] account_listing_calls = [ ('GET', '/v1/.misplaced_objects' + listing_qs('')), ('GET', '/v1/.misplaced_objects' + listing_qs(last_container)), ] new_container_calls = [ ('GET', '/v1/.misplaced_objects/%s' % container + listing_qs('')) for container in reversed(containers) ][1:] # current_container get's skipped the second time around... 
old_container_listings = [ ('GET', '/v1/.misplaced_objects/%s' % container + listing_qs('')) for container in reversed(old_containers) ] old_container_deletes = [ ('DELETE', '/v1/.misplaced_objects/%s' % container) for container in reversed(old_containers) ] old_container_calls = list(itertools.chain(*zip( old_container_listings, old_container_deletes))) self.assertEqual(self.fake_swift.calls, [('GET', self.current_container_path)] + account_listing_calls + new_container_calls + old_container_calls) def test_error_in_iter_containers(self): self._mock_listing({}) # make the listing return an error self.fake_swift.storage_policy[None].register( 'GET', '/v1/.misplaced_objects' + listing_qs(''), swob.HTTPServiceUnavailable, {}) self._run_once() self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs(''))]) self.assertEqual(self.reconciler.stats, {}) errors = self.reconciler.logger.get_lines_for_level('error') self.assertEqual(errors, [ 'Error listing containers in account ' '.misplaced_objects (Unexpected response: ' '503 Service Unavailable)']) def test_unhandled_exception_in_reconcile(self): self._mock_listing({}) # make the listing blow up def blow_up(*args, **kwargs): raise Exception('kaboom!') self.fake_swift.storage_policy[None].register( 'GET', '/v1/.misplaced_objects' + listing_qs(''), blow_up, {}) self._run_once() self.assertEqual( self.fake_swift.calls, [('GET', self.current_container_path), ('GET', '/v1/.misplaced_objects' + listing_qs(''))]) self.assertEqual(self.reconciler.stats, {}) errors = self.reconciler.logger.get_lines_for_level('error') self.assertEqual(errors, ['Unhandled Exception trying to reconcile: ']) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/container/__init__.py0000664000567000056710000000000013024044352022032 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/container/test_sync_store.py0000664000567000056710000003656713024044352023555 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2016 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
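# A rough, illustrative sketch of the path conversion the tests below
# exercise (the production logic lives in swift.container.sync_store and is
# more careful than this): only the data-dir component sitting immediately
# above the part/suffix/hash/hash.db tail is swapped, so occurrences of
# 'containers' / 'sync_containers' elsewhere in the devices or device
# portion of the path are left alone.  The helper name below is made up for
# this sketch and is not used by the tests.
def _example_swap_datadir_component(path, old='containers',
                                    new='sync_containers'):
    parts = path.split('/')
    # the db path always ends with part/suffix/hash/hash.db (4 components),
    # so the component to swap always sits at index -5
    if parts[-5] == old:
        parts[-5] = new
    return '/'.join(parts)

# For example (values illustrative):
#   _example_swap_datadir_component(
#       '/srv/node/containers/sdb/containers/21765/312/f19ed/f19ed.db')
#   # -> '/srv/node/containers/sdb/sync_containers/21765/312/f19ed/f19ed.db'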
import os
import errno
import mock
import random
import logging
import unittest
import tempfile
from shutil import rmtree

from test.unit import debug_logger
from swift.container.backend import DATADIR
from swift.container import sync_store


class FakeContainerBroker(object):

    def __init__(self, path):
        self.db_file = path
        self.db_dir = os.path.dirname(path)
        self.metadata = dict()
        self._is_deleted = False

    def is_deleted(self):
        return self._is_deleted


class TestContainerSyncStore(unittest.TestCase):

    def setUp(self):
        self.logger = debug_logger('test-container-sync-store')
        self.logger.level = logging.DEBUG
        self.test_dir_prefix = tempfile.mkdtemp()
        self.devices_dir = os.path.join(self.test_dir_prefix, 'srv/node/')
        os.makedirs(self.devices_dir)
        # Create dummy container dbs
        self.devices = ['sdax', 'sdb', 'sdc']
        self.partitions = ['21765', '38965', '13234']
        self.suffixes = ['312', '435']
        self.hashes = ['f19ed', '53ef', '0ab5', '9c3a']
        for device in self.devices:
            data_dir_path = os.path.join(self.devices_dir, device, DATADIR)
            os.makedirs(data_dir_path)
            for part in self.partitions:
                for suffix in self.suffixes:
                    for hsh in self.hashes:
                        db_dir = os.path.join(data_dir_path, part,
                                              suffix, hsh)
                        os.makedirs(db_dir)
                        db_file = os.path.join(db_dir, '%s.db' % hsh)
                        with open(db_file, 'w') as outfile:
                            outfile.write('%s' % db_file)

    def tearDown(self):
        rmtree(self.test_dir_prefix)

    def pick_dbfile(self):
        hsh = random.choice(self.hashes)
        return os.path.join(self.devices_dir,
                            random.choice(self.devices),
                            DATADIR,
                            random.choice(self.partitions),
                            random.choice(self.suffixes),
                            hsh, '%s.db' % hsh)

    # Path conversion tests
    # container path is of the form:
    # /srv/node/sdb/containers/part/.../*.db
    # or more generally:
    # devices/device/DATADIR/part/.../*.db
    # synced container path is assumed to be of the form:
    # /srv/node/sdb/sync_containers/part/.../*.db
    # or more generally:
    # devices/device/SYNC_DATADIR/part/.../*.db
    # Indeed the ONLY DIFFERENCE is DATADIR <-> SYNC_DATADIR
    # Since, however, the strings represented by the constants
    # DATADIR or SYNC_DATADIR
    # can appear in the devices or the device part, the conversion
    # function between the two is a bit more subtle than a mere replacement.
    # This function tests the conversion between a container path
    # and a synced container path
    def test_container_to_synced_container_path_conversion(self):
        # The conversion functions are oblivious to the suffix
        # so we just pick up a constant one.
        db_path_suffix = self._db_path_suffix()

        # We build various container paths, putting in both
        # DATADIR and SYNC_DATADIR strings in the
        # device and devices parts.
        for devices, device in self._container_path_elements_generator():
            path = os.path.join(devices, device, DATADIR, db_path_suffix)

            # Call the conversion function
            sds = sync_store.ContainerSyncStore(devices, self.logger, False)
            path = sds._container_to_synced_container_path(path)

            # Validate that ONLY the DATADIR part was replaced with
            # sync_store.SYNC_DATADIR
            self._validate_container_path_parts(path, devices, device,
                                                sync_store.SYNC_DATADIR,
                                                db_path_suffix)

    # This function tests the conversion between a synced container path
    # and a container path
    def test_synced_container_to_container_path_conversion(self):
        # The conversion functions are oblivious to the suffix
        # so we just pick up a constant one.
        db_path_suffix = ('133791/625/82a7f5a2c43281b0eab3597e35bb9625/'
                          '82a7f5a2c43281b0eab3597e35bb9625.db')

        # We build various synced container paths, putting in both
        # DATADIR and SYNC_DATADIR strings in the
        # device and devices parts.
        for devices, device in self._container_path_elements_generator():
            path = os.path.join(devices, device,
                                sync_store.SYNC_DATADIR, db_path_suffix)

            # Call the conversion function
            sds = sync_store.ContainerSyncStore(devices, self.logger, False)
            path = sds._synced_container_to_container_path(path)

            # Validate that ONLY the SYNC_DATADIR part was replaced with
            # DATADIR
            self._validate_container_path_parts(path, devices, device,
                                                DATADIR, db_path_suffix)

    # Constructs a db path suffix of the form:
    # 133791/625/82...25/82...25.db
    def _db_path_suffix(self):
        def random_hexa_string(length):
            return '%0x' % random.randrange(16 ** length)

        db = random_hexa_string(32)
        return '%s/%s/%s/%s.db' % (random_hexa_string(5),
                                   random_hexa_string(3),
                                   db, db)

    def _container_path_elements_generator(self):
        # We build various container path elements, putting in both
        # DATADIR and SYNC_DATADIR strings in the
        # device and devices parts.
        for devices in ['/srv/node', '/srv/node/', '/srv/node/dev',
                        '/srv/node/%s' % DATADIR,
                        '/srv/node/%s' % sync_store.SYNC_DATADIR]:
            for device in ['sdf1', 'sdf1/sdf2',
                           'sdf1/%s' % DATADIR,
                           'sdf1/%s' % sync_store.SYNC_DATADIR,
                           '%s/sda' % DATADIR,
                           '%s/sda' % sync_store.SYNC_DATADIR]:
                yield devices, device

    def _validate_container_path_parts(self, path, devices,
                                       device, target, suffix):
        # Recall that the path is of the form:
        # devices/device/target/suffix
        # where each of the sub path elements (e.g. devices)
        # has a path structure containing path elements separated by '/'
        # We thus validate by splitting the path according to '/'
        # traversing all of its path elements making sure that the
        # first elements are those of devices,
        # the second are those of device
        # etc.
        spath = path.split('/')
        spath.reverse()
        self.assertEqual(spath.pop(), '')

        # Validate path against 'devices'
        for p in [p for p in devices.split('/') if p]:
            self.assertEqual(spath.pop(), p)

        # Validate path against 'device'
        for p in [p for p in device.split('/') if p]:
            self.assertEqual(spath.pop(), p)

        # Validate path against target
        self.assertEqual(spath.pop(), target)

        # Validate path against suffix
        for p in [p for p in suffix.split('/') if p]:
            self.assertEqual(spath.pop(), p)

    def test_add_synced_container(self):
        # Add non-existing and existing synced containers
        sds = sync_store.ContainerSyncStore(self.devices_dir,
                                            self.logger,
                                            False)
        cfile = self.pick_dbfile()
        broker = FakeContainerBroker(cfile)
        for i in range(2):
            sds.add_synced_container(broker)
            scpath = sds._container_to_synced_container_path(cfile)
            with open(scpath, 'r') as infile:
                self.assertEqual(infile.read(), cfile)

        iterated_synced_containers = list()
        for db_path in sds.synced_containers_generator():
            iterated_synced_containers.append(db_path)

        self.assertEqual(len(iterated_synced_containers), 1)

    def test_remove_synced_container(self):
        # Add a synced container to remove
        sds = sync_store.ContainerSyncStore(self.devices_dir,
                                            self.logger,
                                            False)
        cfile = self.pick_dbfile()
        # We keep here the link file so as to validate its deletion later
        lfile = sds._container_to_synced_container_path(cfile)
        broker = FakeContainerBroker(cfile)
        sds.add_synced_container(broker)

        # Remove existing and non-existing synced containers
        for i in range(2):
            sds.remove_synced_container(broker)

        iterated_synced_containers = list()
        for db_path in sds.synced_containers_generator():
            iterated_synced_containers.append(db_path)

        self.assertEqual(len(iterated_synced_containers), 0)

        # Make sure the whole link path gets deleted
        # recall that the path has the following suffix:
        # <hexa string>/<hexa string>/
        # <hexa string>/<same hexa string>.db
        # and we expect the .db as well as all path elements
        # to get deleted
        self.assertFalse(os.path.exists(lfile))
        lfile = os.path.dirname(lfile)
        for i in range(3):
            self.assertFalse(os.path.exists(os.path.dirname(lfile)))
            lfile = os.path.dirname(lfile)

    def test_iterate_synced_containers(self):
        # populate sync container db
        sds = sync_store.ContainerSyncStore(self.devices_dir,
                                            self.logger,
                                            False)
        containers = list()
        for i in range(10):
            cfile = self.pick_dbfile()
            broker = FakeContainerBroker(cfile)
            sds.add_synced_container(broker)
            containers.append(cfile)

        iterated_synced_containers = list()
        for db_path in sds.synced_containers_generator():
            iterated_synced_containers.append(db_path)

        self.assertEqual(
            set(containers),
            set(iterated_synced_containers))

    def test_unhandled_exceptions_in_add_remove(self):
        sds = sync_store.ContainerSyncStore(self.devices_dir,
                                            self.logger,
                                            False)
        cfile = self.pick_dbfile()
        broker = FakeContainerBroker(cfile)

        with mock.patch(
                'swift.container.sync_store.os.stat',
                side_effect=OSError(errno.EPERM, 'permission denied')):
            with self.assertRaises(OSError) as cm:
                sds.add_synced_container(broker)
        self.assertEqual(errno.EPERM, cm.exception.errno)

        with mock.patch(
                'swift.container.sync_store.os.makedirs',
                side_effect=OSError(errno.EPERM, 'permission denied')):
            with self.assertRaises(OSError) as cm:
                sds.add_synced_container(broker)
        self.assertEqual(errno.EPERM, cm.exception.errno)

        with mock.patch(
                'swift.container.sync_store.os.symlink',
                side_effect=OSError(errno.EPERM, 'permission denied')):
            with self.assertRaises(OSError) as cm:
                sds.add_synced_container(broker)
        self.assertEqual(errno.EPERM, cm.exception.errno)

        with mock.patch(
                'swift.container.sync_store.os.unlink',
                side_effect=OSError(errno.EPERM, 'permission denied')):
            with self.assertRaises(OSError) as cm:
                sds.remove_synced_container(broker)
        self.assertEqual(errno.EPERM, cm.exception.errno)

    def test_update_sync_store_according_to_metadata_and_deleted(self):
        # This function tests the update_sync_store 'logics'
        # with respect to various combinations of the
        # sync-to and sync-key metadata items and whether
        # the database is marked for delete.
        # The table below summarizes the expected result
        # for the various combinations, e.g.:
        # If metadata items exist and the database
        # is not marked for delete then add should be called.
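        # In short (a sketch of the expected rule, not the production code
        # in swift.container.sync_store):
        #
        #     if sync_to is None and sync_key is None:
        #         expected_op = 'none'
        #     elif not deleted and sync_to and sync_key:
        #         expected_op = 'add'
        #     else:
        #         expected_op = 'remove'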
results_list = [ [False, 'a', 'b', 'add'], [False, 'a', '', 'remove'], [False, 'a', None, 'remove'], [False, '', 'b', 'remove'], [False, '', '', 'remove'], [False, '', None, 'remove'], [False, None, 'b', 'remove'], [False, None, '', 'remove'], [False, None, None, 'none'], [True, 'a', 'b', 'remove'], [True, 'a', '', 'remove'], [True, 'a', None, 'remove'], [True, '', 'b', 'remove'], [True, '', '', 'remove'], [True, '', None, 'remove'], [True, None, 'b', 'remove'], [True, None, '', 'remove'], [True, None, None, 'none'], ] store = 'swift.container.sync_store.ContainerSyncStore' with mock.patch(store + '.add_synced_container') as add_container: with mock.patch( store + '.remove_synced_container') as remove_container: sds = sync_store.ContainerSyncStore(self.devices_dir, self.logger, False) add_calls = 0 remove_calls = 0 # We now iterate over the list of combinations # Validating that add and removed are called as # expected for deleted, sync_to, sync_key, expected_op in results_list: cfile = self.pick_dbfile() broker = FakeContainerBroker(cfile) broker._is_deleted = deleted if sync_to is not None: broker.metadata['X-Container-Sync-To'] = [ sync_to, 1] if sync_key is not None: broker.metadata['X-Container-Sync-Key'] = [ sync_key, 1] sds.update_sync_store(broker) if expected_op == 'add': add_calls += 1 if expected_op == 'remove': remove_calls += 1 self.assertEqual(add_container.call_count, add_calls) self.assertEqual(remove_container.call_count, remove_calls) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/container/test_updater.py0000664000567000056710000002632513024044354023022 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
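# The updater's report to the account server, as exercised by the tests
# below, is a bare PUT per container whose headers carry the container's
# stats.  Roughly (the header values here are illustrative, not taken from
# the tests):
#
#     PUT /sda1/0/a/c HTTP/1.1
#     X-Put-Timestamp: 0000000002.00000
#     X-Delete-Timestamp: 0000000000.00000
#     X-Object-Count: 1
#     X-Bytes-Used: 3
#
# The fake account server in test_run_once() asserts this request line and
# the presence of these four headers.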
import six.moves.cPickle as pickle import mock import os import unittest from contextlib import closing from gzip import GzipFile from shutil import rmtree from tempfile import mkdtemp from test.unit import FakeLogger from eventlet import spawn, Timeout, listen from swift.common import utils from swift.container import updater as container_updater from swift.container.backend import ContainerBroker, DATADIR from swift.common.ring import RingData from swift.common.utils import normalize_timestamp class TestContainerUpdater(unittest.TestCase): def setUp(self): utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = 'startcap' self.testdir = os.path.join(mkdtemp(), 'tmp_test_container_updater') rmtree(self.testdir, ignore_errors=1) os.mkdir(self.testdir) ring_file = os.path.join(self.testdir, 'account.ring.gz') with closing(GzipFile(ring_file, 'wb')) as f: pickle.dump( RingData([[0, 1, 0, 1], [1, 0, 1, 0]], [{'id': 0, 'ip': '127.0.0.1', 'port': 12345, 'device': 'sda1', 'zone': 0}, {'id': 1, 'ip': '127.0.0.1', 'port': 12345, 'device': 'sda1', 'zone': 2}], 30), f) self.devices_dir = os.path.join(self.testdir, 'devices') os.mkdir(self.devices_dir) self.sda1 = os.path.join(self.devices_dir, 'sda1') os.mkdir(self.sda1) def tearDown(self): rmtree(os.path.dirname(self.testdir), ignore_errors=1) def test_creation(self): cu = container_updater.ContainerUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '2', 'node_timeout': '5.5', }) self.assertTrue(hasattr(cu, 'logger')) self.assertTrue(cu.logger is not None) self.assertEqual(cu.devices, self.devices_dir) self.assertEqual(cu.interval, 1) self.assertEqual(cu.concurrency, 2) self.assertEqual(cu.node_timeout, 5.5) self.assertTrue(cu.get_account_ring() is not None) @mock.patch.object(container_updater, 'ismount') @mock.patch.object(container_updater.ContainerUpdater, 'container_sweep') def test_run_once_with_device_unmounted(self, mock_sweep, mock_ismount): mock_ismount.return_value = False cu = container_updater.ContainerUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15', 'account_suppression_time': 0 }) containers_dir = os.path.join(self.sda1, DATADIR) os.mkdir(containers_dir) partition_dir = os.path.join(containers_dir, "a") os.mkdir(partition_dir) cu.run_once() self.assertTrue(os.path.exists(containers_dir)) # sanity check # only called if a partition dir exists self.assertTrue(mock_sweep.called) mock_sweep.reset_mock() cu = container_updater.ContainerUpdater({ 'devices': self.devices_dir, 'mount_check': 'true', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15', 'account_suppression_time': 0 }) cu.logger = FakeLogger() cu.run_once() log_lines = cu.logger.get_lines_for_level('warning') self.assertTrue(len(log_lines) > 0) msg = 'sda1 is not mounted' self.assertEqual(log_lines[0], msg) # Ensure that the container_sweep did not run self.assertFalse(mock_sweep.called) def test_run_once(self): cu = container_updater.ContainerUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15', 'account_suppression_time': 0 }) cu.run_once() containers_dir = os.path.join(self.sda1, DATADIR) os.mkdir(containers_dir) cu.run_once() self.assertTrue(os.path.exists(containers_dir)) subdir = os.path.join(containers_dir, 'subdir') os.mkdir(subdir) cb = 
ContainerBroker(os.path.join(subdir, 'hash.db'), account='a', container='c') cb.initialize(normalize_timestamp(1), 0) cu.run_once() info = cb.get_info() self.assertEqual(info['object_count'], 0) self.assertEqual(info['bytes_used'], 0) self.assertEqual(info['reported_object_count'], 0) self.assertEqual(info['reported_bytes_used'], 0) cb.put_object('o', normalize_timestamp(2), 3, 'text/plain', '68b329da9893e34099c7d8ad5cb9c940') cu.run_once() info = cb.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 3) self.assertEqual(info['reported_object_count'], 0) self.assertEqual(info['reported_bytes_used'], 0) def accept(sock, addr, return_code): try: with Timeout(3): inc = sock.makefile('rb') out = sock.makefile('wb') out.write('HTTP/1.1 %d OK\r\nContent-Length: 0\r\n\r\n' % return_code) out.flush() self.assertEqual(inc.readline(), 'PUT /sda1/0/a/c HTTP/1.1\r\n') headers = {} line = inc.readline() while line and line != '\r\n': headers[line.split(':')[0].lower()] = \ line.split(':')[1].strip() line = inc.readline() self.assertTrue('x-put-timestamp' in headers) self.assertTrue('x-delete-timestamp' in headers) self.assertTrue('x-object-count' in headers) self.assertTrue('x-bytes-used' in headers) except BaseException as err: import traceback traceback.print_exc() return err return None bindsock = listen(('127.0.0.1', 0)) def spawn_accepts(): events = [] for _junk in range(2): sock, addr = bindsock.accept() events.append(spawn(accept, sock, addr, 201)) return events spawned = spawn(spawn_accepts) for dev in cu.get_account_ring().devs: if dev is not None: dev['port'] = bindsock.getsockname()[1] cu.run_once() for event in spawned.wait(): err = event.wait() if err: raise err info = cb.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 3) self.assertEqual(info['reported_object_count'], 1) self.assertEqual(info['reported_bytes_used'], 3) @mock.patch('os.listdir') def test_listdir_with_exception(self, mock_listdir): e = OSError('permission_denied') mock_listdir.side_effect = e cu = container_updater.ContainerUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15', 'account_suppression_time': 0 }) cu.logger = FakeLogger() paths = cu.get_paths() self.assertEqual(paths, []) log_lines = cu.logger.get_lines_for_level('error') msg = ('ERROR: Failed to get paths to drive partitions: ' 'permission_denied') self.assertEqual(log_lines[0], msg) @mock.patch('os.listdir', return_value=['foo', 'bar']) def test_listdir_without_exception(self, mock_listdir): cu = container_updater.ContainerUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15', 'account_suppression_time': 0 }) cu.logger = FakeLogger() path = cu._listdir('foo/bar/') self.assertEqual(path, ['foo', 'bar']) log_lines = cu.logger.get_lines_for_level('error') self.assertEqual(len(log_lines), 0) def test_unicode(self): cu = container_updater.ContainerUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15', }) containers_dir = os.path.join(self.sda1, DATADIR) os.mkdir(containers_dir) subdir = os.path.join(containers_dir, 'subdir') os.mkdir(subdir) cb = ContainerBroker(os.path.join(subdir, 'hash.db'), account='a', container='\xce\xa9') cb.initialize(normalize_timestamp(1), 0) cb.put_object('\xce\xa9', 
normalize_timestamp(2), 3, 'text/plain', '68b329da9893e34099c7d8ad5cb9c940') def accept(sock, addr): try: with Timeout(3): inc = sock.makefile('rb') out = sock.makefile('wb') out.write('HTTP/1.1 201 OK\r\nContent-Length: 0\r\n\r\n') out.flush() inc.read() except BaseException as err: import traceback traceback.print_exc() return err return None bindsock = listen(('127.0.0.1', 0)) def spawn_accepts(): events = [] for _junk in range(2): with Timeout(3): sock, addr = bindsock.accept() events.append(spawn(accept, sock, addr)) return events spawned = spawn(spawn_accepts) for dev in cu.get_account_ring().devs: if dev is not None: dev['port'] = bindsock.getsockname()[1] cu.run_once() for event in spawned.wait(): err = event.wait() if err: raise err info = cb.get_info() self.assertEqual(info['object_count'], 1) self.assertEqual(info['bytes_used'], 3) self.assertEqual(info['reported_object_count'], 1) self.assertEqual(info['reported_bytes_used'], 3) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/container/test_auditor.py0000664000567000056710000001554413024044352023024 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import mock import time import os import random from tempfile import mkdtemp from shutil import rmtree from eventlet import Timeout from swift.common.utils import normalize_timestamp from swift.container import auditor from test.unit import debug_logger, with_tempdir from test.unit.container import test_backend class FakeContainerBroker(object): def __init__(self, path): self.path = path self.db_file = path self.file = os.path.basename(path) def is_deleted(self): return False def get_info(self): if self.file.startswith('fail'): raise ValueError if self.file.startswith('true'): return 'ok' class TestAuditor(unittest.TestCase): def setUp(self): self.testdir = os.path.join(mkdtemp(), 'tmp_test_container_auditor') self.logger = debug_logger() rmtree(self.testdir, ignore_errors=1) os.mkdir(self.testdir) fnames = ['true1.db', 'true2.db', 'true3.db', 'fail1.db', 'fail2.db'] for fn in fnames: with open(os.path.join(self.testdir, fn), 'w+') as f: f.write(' ') def tearDown(self): rmtree(os.path.dirname(self.testdir), ignore_errors=1) @mock.patch('swift.container.auditor.ContainerBroker', FakeContainerBroker) def test_run_forever(self): sleep_times = random.randint(5, 10) call_times = sleep_times - 1 class FakeTime(object): def __init__(self): self.times = 0 def sleep(self, sec): self.times += 1 if self.times < sleep_times: time.sleep(0.1) else: # stop forever by an error raise ValueError() def time(self): return time.time() conf = {} test_auditor = auditor.ContainerAuditor(conf, logger=self.logger) with mock.patch('swift.container.auditor.time', FakeTime()): def fake_audit_location_generator(*args, **kwargs): files = os.listdir(self.testdir) return [(os.path.join(self.testdir, f), '', '') for f in files] with mock.patch('swift.container.auditor.audit_location_generator', 
fake_audit_location_generator): self.assertRaises(ValueError, test_auditor.run_forever) self.assertEqual(test_auditor.container_failures, 2 * call_times) self.assertEqual(test_auditor.container_passes, 3 * call_times) # now force timeout path code coverage with mock.patch('swift.container.auditor.ContainerAuditor.' '_one_audit_pass', side_effect=Timeout()): with mock.patch('swift.container.auditor.time', FakeTime()): self.assertRaises(ValueError, test_auditor.run_forever) @mock.patch('swift.container.auditor.ContainerBroker', FakeContainerBroker) def test_run_once(self): conf = {} test_auditor = auditor.ContainerAuditor(conf, logger=self.logger) def fake_audit_location_generator(*args, **kwargs): files = os.listdir(self.testdir) return [(os.path.join(self.testdir, f), '', '') for f in files] with mock.patch('swift.container.auditor.audit_location_generator', fake_audit_location_generator): test_auditor.run_once() self.assertEqual(test_auditor.container_failures, 2) self.assertEqual(test_auditor.container_passes, 3) @mock.patch('swift.container.auditor.ContainerBroker', FakeContainerBroker) def test_one_audit_pass(self): conf = {} test_auditor = auditor.ContainerAuditor(conf, logger=self.logger) def fake_audit_location_generator(*args, **kwargs): files = sorted(os.listdir(self.testdir)) return [(os.path.join(self.testdir, f), '', '') for f in files] # force code coverage for logging path test_auditor.logging_interval = 0 with mock.patch('swift.container.auditor.audit_location_generator', fake_audit_location_generator): test_auditor._one_audit_pass(test_auditor.logging_interval) self.assertEqual(test_auditor.container_failures, 1) self.assertEqual(test_auditor.container_passes, 3) @mock.patch('swift.container.auditor.ContainerBroker', FakeContainerBroker) def test_container_auditor(self): conf = {} test_auditor = auditor.ContainerAuditor(conf, logger=self.logger) files = os.listdir(self.testdir) for f in files: path = os.path.join(self.testdir, f) test_auditor.container_audit(path) self.assertEqual(test_auditor.container_failures, 2) self.assertEqual(test_auditor.container_passes, 3) class TestAuditorMigrations(unittest.TestCase): @with_tempdir def test_db_migration(self, tempdir): db_path = os.path.join(tempdir, 'sda', 'containers', '0', '0', '0', 'test.db') with test_backend.TestContainerBrokerBeforeSPI.old_broker() as \ old_ContainerBroker: broker = old_ContainerBroker(db_path, account='a', container='c') broker.initialize(normalize_timestamp(0), -1) with broker.get() as conn: try: conn.execute('SELECT storage_policy_index ' 'FROM container_stat') except Exception as err: self.assertTrue('no such column: storage_policy_index' in str(err)) else: self.fail('TestContainerBrokerBeforeSPI broker class ' 'was already migrated') conf = {'devices': tempdir, 'mount_check': False} test_auditor = auditor.ContainerAuditor(conf, logger=debug_logger()) test_auditor.run_once() broker = auditor.ContainerBroker(db_path, account='a', container='c') info = broker.get_info() expected = { 'account': 'a', 'container': 'c', 'object_count': 0, 'bytes_used': 0, 'storage_policy_index': 0, } for k, v in expected.items(): self.assertEqual(info[k], v) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/container/test_server.py0000664000567000056710000041303013024044354022655 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import operator import os import mock import unittest import itertools from contextlib import contextmanager from shutil import rmtree from tempfile import mkdtemp from test.unit import FakeLogger from time import gmtime from xml.dom import minidom import time import random from eventlet import spawn, Timeout, listen import json import six from six import BytesIO from six import StringIO from swift import __version__ as swift_version from swift.common.header_key_dict import HeaderKeyDict from swift.common.swob import (Request, WsgiBytesIO, HTTPNoContent) import swift.container from swift.container import server as container_server from swift.common import constraints from swift.common.utils import (Timestamp, mkdirs, public, replication, storage_directory, lock_parent_directory) from test.unit import fake_http_connect, debug_logger from swift.common.storage_policy import (POLICIES, StoragePolicy) from swift.common.request_helpers import get_sys_meta_prefix from test.unit import patch_policies @contextmanager def save_globals(): orig_http_connect = getattr(swift.container.server, 'http_connect', None) try: yield True finally: swift.container.server.http_connect = orig_http_connect @patch_policies class TestContainerController(unittest.TestCase): """Test swift.container.server.ContainerController""" def setUp(self): """Set up for testing swift.object_server.ObjectController""" self.testdir = os.path.join(mkdtemp(), 'tmp_test_object_server_ObjectController') mkdirs(self.testdir) rmtree(self.testdir) mkdirs(os.path.join(self.testdir, 'sda1')) mkdirs(os.path.join(self.testdir, 'sda1', 'tmp')) self.controller = container_server.ContainerController( {'devices': self.testdir, 'mount_check': 'false'}) # some of the policy tests want at least two policies self.assertTrue(len(POLICIES) > 1) def tearDown(self): rmtree(os.path.dirname(self.testdir), ignore_errors=1) def _update_object_put_headers(self, req): """ Override this method in test subclasses to test post upgrade behavior. 
""" pass def _check_put_container_storage_policy(self, req, policy_index): resp = req.get_response(self.controller) self.assertEqual(201, resp.status_int) req = Request.blank(req.path, method='HEAD') resp = req.get_response(self.controller) self.assertEqual(204, resp.status_int) self.assertEqual(str(policy_index), resp.headers['X-Backend-Storage-Policy-Index']) def test_creation(self): # later config should be extended to assert more config options replicator = container_server.ContainerController( {'node_timeout': '3.5'}) self.assertEqual(replicator.node_timeout, 3.5) def test_get_and_validate_policy_index(self): # no policy is OK req = Request.blank('/sda1/p/a/container_default', method='PUT', headers={'X-Timestamp': '0'}) self._check_put_container_storage_policy(req, POLICIES.default.idx) # bogus policies for policy in ('nada', 999): req = Request.blank('/sda1/p/a/c_%s' % policy, method='PUT', headers={ 'X-Timestamp': '0', 'X-Backend-Storage-Policy-Index': policy }) resp = req.get_response(self.controller) self.assertEqual(400, resp.status_int) self.assertTrue('invalid' in resp.body.lower()) # good policies for policy in POLICIES: req = Request.blank('/sda1/p/a/c_%s' % policy.name, method='PUT', headers={ 'X-Timestamp': '0', 'X-Backend-Storage-Policy-Index': policy.idx, }) self._check_put_container_storage_policy(req, policy.idx) def test_acl_container(self): # Ensure no acl by default req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '0'}) resp = req.get_response(self.controller) self.assertTrue(resp.status.startswith('201')) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) response = req.get_response(self.controller) self.assertTrue(response.status.startswith('204')) self.assertTrue('x-container-read' not in response.headers) self.assertTrue('x-container-write' not in response.headers) # Ensure POSTing acls works req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': '1', 'X-Container-Read': '.r:*', 'X-Container-Write': 'account:user'}) resp = req.get_response(self.controller) self.assertTrue(resp.status.startswith('204')) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) response = req.get_response(self.controller) self.assertTrue(response.status.startswith('204')) self.assertEqual(response.headers.get('x-container-read'), '.r:*') self.assertEqual(response.headers.get('x-container-write'), 'account:user') # Ensure we can clear acls on POST req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': '3', 'X-Container-Read': '', 'X-Container-Write': ''}) resp = req.get_response(self.controller) self.assertTrue(resp.status.startswith('204')) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) response = req.get_response(self.controller) self.assertTrue(response.status.startswith('204')) self.assertTrue('x-container-read' not in response.headers) self.assertTrue('x-container-write' not in response.headers) # Ensure PUTing acls works req = Request.blank( '/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '4', 'X-Container-Read': '.r:*', 'X-Container-Write': 'account:user'}) resp = req.get_response(self.controller) self.assertTrue(resp.status.startswith('201')) req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'HEAD'}) response = req.get_response(self.controller) self.assertTrue(response.status.startswith('204')) 
self.assertEqual(response.headers.get('x-container-read'), '.r:*') self.assertEqual(response.headers.get('x-container-write'), 'account:user') def test_HEAD(self): start = int(time.time()) ts = (Timestamp(t).internal for t in itertools.count(start)) req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'x-timestamp': next(ts)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c', method='HEAD') response = req.get_response(self.controller) self.assertEqual(response.status_int, 204) self.assertEqual(response.headers['x-container-bytes-used'], '0') self.assertEqual(response.headers['x-container-object-count'], '0') obj_put_request = Request.blank( '/sda1/p/a/c/o', method='PUT', headers={ 'x-timestamp': next(ts), 'x-size': 42, 'x-content-type': 'text/plain', 'x-etag': 'x', }) self._update_object_put_headers(obj_put_request) obj_put_resp = obj_put_request.get_response(self.controller) self.assertEqual(obj_put_resp.status_int // 100, 2) # re-issue HEAD request response = req.get_response(self.controller) self.assertEqual(response.status_int // 100, 2) self.assertEqual(response.headers['x-container-bytes-used'], '42') self.assertEqual(response.headers['x-container-object-count'], '1') # created at time... created_at_header = Timestamp(response.headers['x-timestamp']) self.assertEqual(response.headers['x-timestamp'], created_at_header.normal) self.assertTrue(created_at_header >= start) self.assertEqual(response.headers['x-put-timestamp'], Timestamp(start).normal) # backend headers self.assertEqual(int(response.headers ['X-Backend-Storage-Policy-Index']), int(POLICIES.default)) self.assertTrue( Timestamp(response.headers['x-backend-timestamp']) >= start) self.assertEqual(response.headers['x-backend-put-timestamp'], Timestamp(start).internal) self.assertEqual(response.headers['x-backend-delete-timestamp'], Timestamp(0).internal) self.assertEqual(response.headers['x-backend-status-changed-at'], Timestamp(start).internal) def test_HEAD_not_found(self): req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertEqual(int(resp.headers['X-Backend-Storage-Policy-Index']), 0) self.assertEqual(resp.headers['x-backend-timestamp'], Timestamp(0).internal) self.assertEqual(resp.headers['x-backend-put-timestamp'], Timestamp(0).internal) self.assertEqual(resp.headers['x-backend-status-changed-at'], Timestamp(0).internal) self.assertEqual(resp.headers['x-backend-delete-timestamp'], Timestamp(0).internal) for header in ('x-container-object-count', 'x-container-bytes-used', 'x-timestamp', 'x-put-timestamp'): self.assertEqual(resp.headers[header], None) def test_deleted_headers(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) request_method_times = { 'PUT': next(ts), 'DELETE': next(ts), } # setup a deleted container for method in ('PUT', 'DELETE'): x_timestamp = request_method_times[method] req = Request.blank('/sda1/p/a/c', method=method, headers={'x-timestamp': x_timestamp}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int // 100, 2) for method in ('GET', 'HEAD'): req = Request.blank('/sda1/p/a/c', method=method) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) # backend headers self.assertEqual(int(resp.headers[ 'X-Backend-Storage-Policy-Index']), int(POLICIES.default)) self.assertTrue(Timestamp(resp.headers['x-backend-timestamp']) >= Timestamp(request_method_times['PUT'])) 
self.assertEqual(resp.headers['x-backend-put-timestamp'], request_method_times['PUT']) self.assertEqual(resp.headers['x-backend-delete-timestamp'], request_method_times['DELETE']) self.assertEqual(resp.headers['x-backend-status-changed-at'], request_method_times['DELETE']) for header in ('x-container-object-count', 'x-container-bytes-used', 'x-timestamp', 'x-put-timestamp'): self.assertEqual(resp.headers[header], None) def test_HEAD_invalid_partition(self): req = Request.blank('/sda1/./a/c', environ={'REQUEST_METHOD': 'HEAD', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_HEAD_insufficient_storage(self): self.controller = container_server.ContainerController( {'devices': self.testdir}) req = Request.blank( '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'HEAD', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 507) def test_HEAD_invalid_content_type(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}, headers={'Accept': 'application/plain'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 406) def test_HEAD_invalid_format(self): format = '%D1%BD%8A9' # invalid UTF-8; should be %E1%BD%8A9 (E -> D) req = Request.blank( '/sda1/p/a/c?format=' + format, environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_OPTIONS(self): server_handler = container_server.ContainerController( {'devices': self.testdir, 'mount_check': 'false'}) req = Request.blank('/sda1/p/a/c/o', {'REQUEST_METHOD': 'OPTIONS'}) req.content_length = 0 resp = server_handler.OPTIONS(req) self.assertEqual(200, resp.status_int) for verb in 'OPTIONS GET POST PUT DELETE HEAD REPLICATE'.split(): self.assertTrue( verb in resp.headers['Allow'].split(', ')) self.assertEqual(len(resp.headers['Allow'].split(', ')), 7) self.assertEqual(resp.headers['Server'], (self.controller.server_type + '/' + swift_version)) def test_PUT(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) def test_PUT_simulated_create_race(self): state = ['initial'] from swift.container.backend import ContainerBroker as OrigCoBr class InterceptedCoBr(OrigCoBr): def __init__(self, *args, **kwargs): super(InterceptedCoBr, self).__init__(*args, **kwargs) if state[0] == 'initial': # Do nothing initially pass elif state[0] == 'race': # Save the original db_file attribute value self._saved_db_file = self.db_file self.db_file += '.doesnotexist' def initialize(self, *args, **kwargs): if state[0] == 'initial': # Do nothing initially pass elif state[0] == 'race': # Restore the original db_file attribute to get the race # behavior self.db_file = self._saved_db_file return super(InterceptedCoBr, self).initialize(*args, **kwargs) with mock.patch("swift.container.server.ContainerBroker", InterceptedCoBr): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) state[0] = "race" req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) 
self.assertEqual(resp.status_int, 202) def test_PUT_obj_not_found(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '1', 'X-Size': '0', 'X-Content-Type': 'text/plain', 'X-ETag': 'e'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_PUT_good_policy_specified(self): policy = random.choice(list(POLICIES)) # Set metadata header req = Request.blank('/sda1/p/a/c', method='PUT', headers={'X-Timestamp': Timestamp(1).internal, 'X-Backend-Storage-Policy-Index': policy.idx}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(policy.idx)) # now make sure we read it back req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(policy.idx)) def test_PUT_no_policy_specified(self): # Set metadata header req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(1).internal}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(POLICIES.default.idx)) # now make sure the default was used (pol 1) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(POLICIES.default.idx)) def test_PUT_bad_policy_specified(self): # Set metadata header req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(1).internal, 'X-Backend-Storage-Policy-Index': 'nada'}) resp = req.get_response(self.controller) # make sure we get bad response self.assertEqual(resp.status_int, 400) self.assertFalse('X-Backend-Storage-Policy-Index' in resp.headers) def test_PUT_no_policy_change(self): ts = (Timestamp(t).internal for t in itertools.count(time.time())) policy = random.choice(list(POLICIES)) # Set metadata header req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': policy.idx}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # make sure we get the right index back self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(policy.idx)) # now try to update w/o changing the policy for method in ('POST', 'PUT'): req = Request.blank('/sda1/p/a/c', method=method, headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': policy.idx }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int // 100, 2) # make sure we get the right index back req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(policy.idx)) def test_PUT_bad_policy_change(self): ts = (Timestamp(t).internal for t in itertools.count(time.time())) policy = random.choice(list(POLICIES)) # Set metadata header req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': policy.idx}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = 
Request.blank('/sda1/p/a/c') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # make sure we get the right index back self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(policy.idx)) other_policies = [p for p in POLICIES if p != policy] for other_policy in other_policies: # now try to change it and make sure we get a conflict req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': other_policy.idx }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 409) self.assertEqual( resp.headers.get('X-Backend-Storage-Policy-Index'), str(policy.idx)) # and make sure there is no change! req = Request.blank('/sda1/p/a/c') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # make sure we get the right index back self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(policy.idx)) def test_POST_ignores_policy_change(self): ts = (Timestamp(t).internal for t in itertools.count(time.time())) policy = random.choice(list(POLICIES)) req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': policy.idx}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # make sure we get the right index back self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(policy.idx)) other_policies = [p for p in POLICIES if p != policy] for other_policy in other_policies: # now try to change it; the POST is accepted but the policy change is ignored req = Request.blank('/sda1/p/a/c', method='POST', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': other_policy.idx }) resp = req.get_response(self.controller) # valid request self.assertEqual(resp.status_int // 100, 2) # but it does nothing req = Request.blank('/sda1/p/a/c') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # make sure we get the right index back self.assertEqual(resp.headers.get('X-Backend-Storage-Policy-Index'), str(policy.idx)) def test_PUT_no_policy_for_existing_default(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) # create a container with the default storage policy req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity check # check the policy index req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'], str(POLICIES.default.idx)) # put again without specifying the storage policy req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) # sanity check # policy index is unchanged req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'], str(POLICIES.default.idx)) def test_PUT_proxy_default_no_policy_for_existing_default(self): # make it look like the proxy has a different default than we do, like # during a config change restart across a multi node cluster.
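        # The proxy advertises its own notion of the default policy via the
        # X-Backend-Storage-Policy-Default header, e.g.:
        #
        #     X-Backend-Storage-Policy-Default: 1
        #
        # When no explicit X-Backend-Storage-Policy-Index is supplied, the
        # container server records that advertised default rather than its
        # own POLICIES.default.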
proxy_default = random.choice([p for p in POLICIES if not p.is_default]) ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) # create a container with the default storage policy req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Default': int(proxy_default), }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity check # check the policy index req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(int(resp.headers['X-Backend-Storage-Policy-Index']), int(proxy_default)) # put again without proxy specifying the different default req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Default': int(POLICIES.default), }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) # sanity check # policy index is unchanged req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(int(resp.headers['X-Backend-Storage-Policy-Index']), int(proxy_default)) def test_PUT_no_policy_for_existing_non_default(self): ts = (Timestamp(t).internal for t in itertools.count(time.time())) non_default_policy = [p for p in POLICIES if not p.is_default][0] # create a container with the non-default storage policy req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': non_default_policy.idx, }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity check # check the policy index req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'], str(non_default_policy.idx)) # put again without specifying the storage policy req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) # sanity check # policy index is unchanged req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'], str(non_default_policy.idx)) def test_PUT_GET_metadata(self): # Set metadata header req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(1).internal, 'X-Container-Meta-Test': 'Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-container-meta-test'), 'Value') # Set another metadata header, ensuring old one doesn't disappear req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(1).internal, 'X-Container-Meta-Test2': 'Value2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-container-meta-test'), 'Value') self.assertEqual(resp.headers.get('x-container-meta-test2'), 'Value2') # Update 
metadata header req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(3).internal, 'X-Container-Meta-Test': 'New Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-container-meta-test'), 'New Value') # Send old update to metadata header req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(2).internal, 'X-Container-Meta-Test': 'Old Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-container-meta-test'), 'New Value') # Remove metadata header (by setting it to empty) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(4).internal, 'X-Container-Meta-Test': ''}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertTrue('x-container-meta-test' not in resp.headers) def test_PUT_GET_sys_metadata(self): prefix = get_sys_meta_prefix('container') key = '%sTest' % prefix key2 = '%sTest2' % prefix # Set metadata header req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(1).internal, key: 'Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(key.lower()), 'Value') # Set another metadata header, ensuring old one doesn't disappear req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(1).internal, key2: 'Value2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(key.lower()), 'Value') self.assertEqual(resp.headers.get(key2.lower()), 'Value2') # Update metadata header req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(3).internal, key: 'New Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(key.lower()), 'New Value') # Send old update to metadata header req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(2).internal, key: 'Old Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(key.lower()), 'New Value') # Remove metadata header (by setting it to empty) req = 
Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(4).internal, key: ''}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertTrue(key.lower() not in resp.headers) def test_PUT_invalid_partition(self): req = Request.blank('/sda1/./a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_PUT_timestamp_not_float(self): req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 'not-float'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_PUT_insufficient_storage(self): self.controller = container_server.ContainerController( {'devices': self.testdir}) req = Request.blank( '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 507) def test_POST_HEAD_metadata(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(1).internal}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # Set metadata header req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(1).internal, 'X-Container-Meta-Test': 'Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-container-meta-test'), 'Value') self.assertEqual(resp.headers.get('x-put-timestamp'), '0000000001.00000') # Update metadata header req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(3).internal, 'X-Container-Meta-Test': 'New Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-container-meta-test'), 'New Value') self.assertEqual(resp.headers.get('x-put-timestamp'), '0000000003.00000') # Send old update to metadata header req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(2).internal, 'X-Container-Meta-Test': 'Old Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-container-meta-test'), 'New Value') self.assertEqual(resp.headers.get('x-put-timestamp'), '0000000003.00000') # Remove metadata header (by setting it to empty) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(4).internal, 'X-Container-Meta-Test': ''}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) resp = 
req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertTrue('x-container-meta-test' not in resp.headers) self.assertEqual(resp.headers.get('x-put-timestamp'), '0000000004.00000') def test_POST_HEAD_sys_metadata(self): prefix = get_sys_meta_prefix('container') key = '%sTest' % prefix req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(1).internal}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # Set metadata header req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(1).internal, key: 'Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(key.lower()), 'Value') # Update metadata header req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(3).internal, key: 'New Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(key.lower()), 'New Value') # Send old update to metadata header req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(2).internal, key: 'Old Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(key.lower()), 'New Value') # Remove metadata header (by setting it to empty) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': Timestamp(4).internal, key: ''}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertTrue(key.lower() not in resp.headers) def test_POST_invalid_partition(self): req = Request.blank('/sda1/./a/c', environ={'REQUEST_METHOD': 'POST', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_POST_timestamp_not_float(self): req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': 'not-float'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_POST_insufficient_storage(self): self.controller = container_server.ContainerController( {'devices': self.testdir}) req = Request.blank( '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'POST', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 507) def test_POST_invalid_container_sync_to(self): self.controller = container_server.ContainerController( {'devices': self.testdir}) req = Request.blank( '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'POST', 'HTTP_X_TIMESTAMP': '1'}, headers={'x-container-sync-to': '192.168.0.1'}) resp = req.get_response(self.controller) 
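        # '192.168.0.1' is a bare host with no scheme or path; a valid
        # X-Container-Sync-To value is a full container URL such as
        # 'http://127.0.0.1:12345/v1/a/c' (as used in the sync tests below),
        # so this request is rejected: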
self.assertEqual(resp.status_int, 400) def test_POST_after_DELETE_not_found(self): req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '1'}) resp = req.get_response(self.controller) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': '2'}) resp = req.get_response(self.controller) req = Request.blank('/sda1/p/a/c/', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': '3'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_DELETE_obj_not_found(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_DELETE_container_not_found(self): req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_PUT_utf8(self): snowman = u'\u2603' container_name = snowman.encode('utf-8') req = Request.blank( '/sda1/p/a/%s' % container_name, environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) def test_account_update_mismatched_host_device(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}, headers={'X-Timestamp': '0000000001.00000', 'X-Account-Host': '127.0.0.1:0', 'X-Account-Partition': '123', 'X-Account-Device': 'sda1,sda2'}) broker = self.controller._get_container_broker('sda1', 'p', 'a', 'c') resp = self.controller.account_update(req, 'a', 'c', broker) self.assertEqual(resp.status_int, 400) def test_account_update_account_override_deleted(self): bindsock = listen(('127.0.0.1', 0)) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}, headers={'X-Timestamp': '0000000001.00000', 'X-Account-Host': '%s:%s' % bindsock.getsockname(), 'X-Account-Partition': '123', 'X-Account-Device': 'sda1', 'X-Account-Override-Deleted': 'yes'}) with save_globals(): new_connect = fake_http_connect(200, count=123) swift.container.server.http_connect = new_connect resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) def test_PUT_account_update(self): bindsock = listen(('127.0.0.1', 0)) def accept(return_code, expected_timestamp): try: with Timeout(3): sock, addr = bindsock.accept() inc = sock.makefile('rb') out = sock.makefile('wb') out.write('HTTP/1.1 %d OK\r\nContent-Length: 0\r\n\r\n' % return_code) out.flush() self.assertEqual(inc.readline(), 'PUT /sda1/123/a/c HTTP/1.1\r\n') headers = {} line = inc.readline() while line and line != '\r\n': headers[line.split(':')[0].lower()] = \ line.split(':')[1].strip() line = inc.readline() self.assertEqual(headers['x-put-timestamp'], expected_timestamp) except BaseException as err: return err return None req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(1).internal, 'X-Account-Host': '%s:%s' % bindsock.getsockname(), 'X-Account-Partition': '123', 'X-Account-Device': 'sda1'}) event = spawn(accept, 201, Timestamp(1).internal) try: with Timeout(3): resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) finally: err = event.wait() if err: raise 
Exception(err) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': '2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(3).internal, 'X-Account-Host': '%s:%s' % bindsock.getsockname(), 'X-Account-Partition': '123', 'X-Account-Device': 'sda1'}) event = spawn(accept, 404, Timestamp(3).internal) try: with Timeout(3): resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) finally: err = event.wait() if err: raise Exception(err) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': Timestamp(5).internal, 'X-Account-Host': '%s:%s' % bindsock.getsockname(), 'X-Account-Partition': '123', 'X-Account-Device': 'sda1'}) event = spawn(accept, 503, Timestamp(5).internal) got_exc = False try: with Timeout(3): resp = req.get_response(self.controller) except BaseException as err: got_exc = True finally: err = event.wait() if err: raise Exception(err) self.assertTrue(not got_exc) def test_PUT_reset_container_sync(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'x-timestamp': '1', 'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') info = db.get_info() self.assertEqual(info['x_container_sync_point1'], -1) self.assertEqual(info['x_container_sync_point2'], -1) db.set_x_container_sync_points(123, 456) info = db.get_info() self.assertEqual(info['x_container_sync_point1'], 123) self.assertEqual(info['x_container_sync_point2'], 456) # Set to same value req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'x-timestamp': '1', 'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') info = db.get_info() self.assertEqual(info['x_container_sync_point1'], 123) self.assertEqual(info['x_container_sync_point2'], 456) # Set to new value req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'x-timestamp': '1', 'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') info = db.get_info() self.assertEqual(info['x_container_sync_point1'], -1) self.assertEqual(info['x_container_sync_point2'], -1) def test_POST_reset_container_sync(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'x-timestamp': '1', 'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') info = db.get_info() self.assertEqual(info['x_container_sync_point1'], -1) self.assertEqual(info['x_container_sync_point2'], -1) db.set_x_container_sync_points(123, 456) info = db.get_info() self.assertEqual(info['x_container_sync_point1'], 123) self.assertEqual(info['x_container_sync_point2'], 456) # Set to same value req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'x-timestamp': '1', 'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c'}) resp = req.get_response(self.controller) 
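        # Re-POSTing the same sync-to target should be accepted without
        # touching the stored sync points; only changing the target (further
        # below) resets them to -1: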
self.assertEqual(resp.status_int, 204) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') info = db.get_info() self.assertEqual(info['x_container_sync_point1'], 123) self.assertEqual(info['x_container_sync_point2'], 456) # Set to new value req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'x-timestamp': '1', 'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') info = db.get_info() self.assertEqual(info['x_container_sync_point1'], -1) self.assertEqual(info['x_container_sync_point2'], -1) def test_update_sync_store_on_PUT(self): # Create a synced container and validate a link is created self._create_synced_container_and_validate_sync_store('PUT') # remove the sync using PUT and validate the link is deleted self._remove_sync_and_validate_sync_store('PUT') def test_update_sync_store_on_POST(self): # Create a container and validate a link is not created self._create_container_and_validate_sync_store() # Update the container to be synced and validate a link is created self._create_synced_container_and_validate_sync_store('POST') # remove the sync using POST and validate the link is deleted self._remove_sync_and_validate_sync_store('POST') def test_update_sync_store_on_DELETE(self): # Create a synced container and validate a link is created self._create_synced_container_and_validate_sync_store('PUT') # Remove the container and validate the link is deleted self._remove_sync_and_validate_sync_store('DELETE') def _create_container_and_validate_sync_store(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'x-timestamp': '0'}) req.get_response(self.controller) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') sync_store = self.controller.sync_store db_path = db.db_file db_link = sync_store._container_to_synced_container_path(db_path) self.assertFalse(os.path.exists(db_link)) sync_containers = [c for c in sync_store.synced_containers_generator()] self.assertFalse(sync_containers) def _create_synced_container_and_validate_sync_store(self, method): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': method}, headers={'x-timestamp': '1', 'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c', 'x-container-sync-key': '1234'}) req.get_response(self.controller) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') sync_store = self.controller.sync_store db_path = db.db_file db_link = sync_store._container_to_synced_container_path(db_path) self.assertTrue(os.path.exists(db_link)) sync_containers = [c for c in sync_store.synced_containers_generator()] self.assertEqual(1, len(sync_containers)) self.assertEqual(db_path, sync_containers[0]) def _remove_sync_and_validate_sync_store(self, method): if method == 'DELETE': headers = {'x-timestamp': '2'} else: headers = {'x-timestamp': '2', 'x-container-sync-to': '', 'x-container-sync-key': '1234'} req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': method}, headers=headers) req.get_response(self.controller) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') sync_store = self.controller.sync_store db_path = db.db_file db_link = sync_store._container_to_synced_container_path(db_path) self.assertFalse(os.path.exists(db_link)) sync_containers = [c for c in sync_store.synced_containers_generator()] self.assertFalse(sync_containers) def test_REPLICATE_insufficient_storage(self): 
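        # With mount_check enabled and check_mount patched to report the
        # drive as unmounted, REPLICATE should fail with 507.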
conf = {'devices': self.testdir, 'mount_check': 'true'} self.container_controller = container_server.ContainerController( conf) def fake_check_mount(*args, **kwargs): return False with mock.patch("swift.common.constraints.check_mount", fake_check_mount): req = Request.blank('/sda1/p/suff', environ={'REQUEST_METHOD': 'REPLICATE'}, headers={}) resp = req.get_response(self.container_controller) self.assertEqual(resp.status_int, 507) def test_REPLICATE_works(self): mkdirs(os.path.join(self.testdir, 'sda1', 'containers', 'p', 'a', 'a')) db_file = os.path.join(self.testdir, 'sda1', storage_directory('containers', 'p', 'a'), 'a' + '.db') open(db_file, 'w') def fake_rsync_then_merge(self, drive, db_file, args): return HTTPNoContent() with mock.patch("swift.container.replicator.ContainerReplicatorRpc." "rsync_then_merge", fake_rsync_then_merge): req = Request.blank('/sda1/p/a/', environ={'REQUEST_METHOD': 'REPLICATE'}, headers={}) json_string = '["rsync_then_merge", "a.db"]' inbuf = WsgiBytesIO(json_string) req.environ['wsgi.input'] = inbuf resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # check valuerror wsgi_input_valuerror = '["sync" : sync, "-1"]' inbuf1 = WsgiBytesIO(wsgi_input_valuerror) req.environ['wsgi.input'] = inbuf1 resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_DELETE(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': '2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}, headers={'X-Timestamp': '3'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_DELETE_PUT_recreate(self): path = '/sda1/p/a/c' req = Request.blank(path, method='PUT', headers={'X-Timestamp': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank(path, method='DELETE', headers={'X-Timestamp': '2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank(path, method='GET') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) # sanity # backend headers expectations = { 'x-backend-put-timestamp': Timestamp(1).internal, 'x-backend-delete-timestamp': Timestamp(2).internal, 'x-backend-status-changed-at': Timestamp(2).internal, } for header, value in expectations.items(): self.assertEqual(resp.headers[header], value, 'response header %s was %s not %s' % ( header, resp.headers[header], value)) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') self.assertEqual(True, db.is_deleted()) info = db.get_info() self.assertEqual(info['put_timestamp'], Timestamp('1').internal) self.assertEqual(info['delete_timestamp'], Timestamp('2').internal) self.assertEqual(info['status_changed_at'], Timestamp('2').internal) # recreate req = Request.blank(path, method='PUT', headers={'X-Timestamp': '4'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') self.assertEqual(False, db.is_deleted()) info = db.get_info() self.assertEqual(info['put_timestamp'], Timestamp('4').internal) self.assertEqual(info['delete_timestamp'], Timestamp('2').internal) 
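        # status_changed_at tracks the most recent deleted/undeleted
        # transition, so after the recreate it matches the new PUT rather
        # than the earlier DELETE: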
self.assertEqual(info['status_changed_at'], Timestamp('4').internal) for method in ('GET', 'HEAD'): req = Request.blank(path) resp = req.get_response(self.controller) expectations = { 'x-put-timestamp': Timestamp(4).normal, 'x-backend-put-timestamp': Timestamp(4).internal, 'x-backend-delete-timestamp': Timestamp(2).internal, 'x-backend-status-changed-at': Timestamp(4).internal, } for header, expected in expectations.items(): self.assertEqual(resp.headers[header], expected, 'header %s was %s is not expected %s' % ( header, resp.headers[header], expected)) def test_DELETE_PUT_recreate_replication_race(self): path = '/sda1/p/a/c' # create a deleted db req = Request.blank(path, method='PUT', headers={'X-Timestamp': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') req = Request.blank(path, method='DELETE', headers={'X-Timestamp': '2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank(path, method='GET') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) # sanity self.assertEqual(True, db.is_deleted()) # now save a copy of this db (and remove it from the "current node") db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') db_path = db.db_file other_path = os.path.join(self.testdir, 'othernode.db') os.rename(db_path, other_path) # that should make it missing on this node req = Request.blank(path, method='GET') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) # sanity # setup the race in os.path.exists (first time no, then yes) mock_called = [] _real_exists = os.path.exists def mock_exists(db_path): rv = _real_exists(db_path) if not mock_called: # be as careful as we might hope backend replication can be... with lock_parent_directory(db_path, timeout=1): os.rename(other_path, db_path) mock_called.append((rv, db_path)) return rv req = Request.blank(path, method='PUT', headers={'X-Timestamp': '4'}) with mock.patch.object(container_server.os.path, 'exists', mock_exists): resp = req.get_response(self.controller) # db was successfully created self.assertEqual(resp.status_int // 100, 2) db = self.controller._get_container_broker('sda1', 'p', 'a', 'c') self.assertEqual(False, db.is_deleted()) # mock proves the race self.assertEqual(mock_called[:2], [(exists, db.db_file) for exists in (False, True)]) # info was updated info = db.get_info() self.assertEqual(info['put_timestamp'], Timestamp('4').internal) self.assertEqual(info['delete_timestamp'], Timestamp('2').internal) def test_DELETE_not_found(self): # Even if the container wasn't previously heard of, the container # server will accept the delete and replicate it to where it belongs # later. 
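        # (The response to the client is still a 404, as asserted below.)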
req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_change_storage_policy_via_DELETE_then_PUT(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) policy = random.choice(list(POLICIES)) req = Request.blank( '/sda1/p/a/c', method='PUT', headers={'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': policy.idx}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity check # try re-recreate with other policies other_policies = [p for p in POLICIES if p != policy] for other_policy in other_policies: # first delete the existing container req = Request.blank('/sda1/p/a/c', method='DELETE', headers={ 'X-Timestamp': next(ts)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # sanity check # at this point, the DB should still exist but be in a deleted # state, so changing the policy index is perfectly acceptable req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': other_policy.idx}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity check req = Request.blank( '/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'], str(other_policy.idx)) def test_change_to_default_storage_policy_via_DELETE_then_PUT(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) non_default_policy = random.choice([p for p in POLICIES if not p.is_default]) req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts), 'X-Backend-Storage-Policy-Index': non_default_policy.idx, }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity check req = Request.blank( '/sda1/p/a/c', method='DELETE', headers={'X-Timestamp': next(ts)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # sanity check # at this point, the DB should still exist but be in a deleted state, # so changing the policy index is perfectly acceptable req = Request.blank( '/sda1/p/a/c', method='PUT', headers={'X-Timestamp': next(ts)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity check req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'], str(POLICIES.default.idx)) def test_DELETE_object(self): req = Request.blank( '/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': Timestamp(2).internal}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', method='PUT', headers={ 'X-Timestamp': Timestamp(0).internal, 'X-Size': 1, 'X-Content-Type': 'text/plain', 'X-Etag': 'x'}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) ts = (Timestamp(t).internal for t in itertools.count(3)) req = Request.blank('/sda1/p/a/c', method='DELETE', headers={ 'X-Timestamp': next(ts)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 409) req = Request.blank('/sda1/p/a/c/o', method='DELETE', headers={ 'X-Timestamp': next(ts)}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', 
method='DELETE', headers={ 'X-Timestamp': next(ts)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c', method='GET', headers={ 'X-Timestamp': next(ts)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_object_update_with_offset(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) # create container req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': next(ts)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # check status req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(int(resp.headers['X-Backend-Storage-Policy-Index']), int(POLICIES.default)) # create object obj_timestamp = next(ts) req = Request.blank( '/sda1/p/a/c/o', method='PUT', headers={ 'X-Timestamp': obj_timestamp, 'X-Size': 1, 'X-Content-Type': 'text/plain', 'X-Etag': 'x'}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # check listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 1) listing_data = json.loads(resp.body) self.assertEqual(1, len(listing_data)) for obj in listing_data: self.assertEqual(obj['name'], 'o') self.assertEqual(obj['bytes'], 1) self.assertEqual(obj['hash'], 'x') self.assertEqual(obj['content_type'], 'text/plain') # send an update with an offset offset_timestamp = Timestamp(obj_timestamp, offset=1).internal req = Request.blank( '/sda1/p/a/c/o', method='PUT', headers={ 'X-Timestamp': offset_timestamp, 'X-Size': 2, 'X-Content-Type': 'text/html', 'X-Etag': 'y'}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # check updated listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 2) listing_data = json.loads(resp.body) self.assertEqual(1, len(listing_data)) for obj in listing_data: self.assertEqual(obj['name'], 'o') self.assertEqual(obj['bytes'], 2) self.assertEqual(obj['hash'], 'y') self.assertEqual(obj['content_type'], 'text/html') # now overwrite with a newer time delete_timestamp = next(ts) req = Request.blank( '/sda1/p/a/c/o', method='DELETE', headers={ 'X-Timestamp': delete_timestamp}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # check empty listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 0) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 0) listing_data = json.loads(resp.body) self.assertEqual(0, len(listing_data)) # recreate with an offset offset_timestamp = Timestamp(delete_timestamp, offset=1).internal req = Request.blank( '/sda1/p/a/c/o', method='PUT', headers={ 'X-Timestamp': offset_timestamp, 'X-Size': 3, 'X-Content-Type': 'text/enriched', 'X-Etag': 
'z'}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # check un-deleted listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 3) listing_data = json.loads(resp.body) self.assertEqual(1, len(listing_data)) for obj in listing_data: self.assertEqual(obj['name'], 'o') self.assertEqual(obj['bytes'], 3) self.assertEqual(obj['hash'], 'z') self.assertEqual(obj['content_type'], 'text/enriched') # delete offset with newer offset delete_timestamp = Timestamp(offset_timestamp, offset=1).internal req = Request.blank( '/sda1/p/a/c/o', method='DELETE', headers={ 'X-Timestamp': delete_timestamp}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # check empty listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 0) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 0) listing_data = json.loads(resp.body) self.assertEqual(0, len(listing_data)) def test_object_update_with_multiple_timestamps(self): def do_update(t_data, etag, size, content_type, t_type=None, t_meta=None): """ Make a PUT request to container controller to update an object """ headers = {'X-Timestamp': t_data.internal, 'X-Size': size, 'X-Content-Type': content_type, 'X-Etag': etag} if t_type: headers['X-Content-Type-Timestamp'] = t_type.internal if t_meta: headers['X-Meta-Timestamp'] = t_meta.internal req = Request.blank( '/sda1/p/a/c/o', method='PUT', headers=headers) self._update_object_put_headers(req) return req.get_response(self.controller) ts = (Timestamp(t) for t in itertools.count(int(time.time()))) t0 = ts.next() # create container req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': t0.internal}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # check status req = Request.blank('/sda1/p/a/c', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # create object at t1 t1 = ts.next() resp = do_update(t1, 'etag_at_t1', 1, 'ctype_at_t1') self.assertEqual(resp.status_int, 201) # check listing, expect last_modified = t1 req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 1) listing_data = json.loads(resp.body) self.assertEqual(1, len(listing_data)) for obj in listing_data: self.assertEqual(obj['name'], 'o') self.assertEqual(obj['bytes'], 1) self.assertEqual(obj['hash'], 'etag_at_t1') self.assertEqual(obj['content_type'], 'ctype_at_t1') self.assertEqual(obj['last_modified'], t1.isoformat) # send an update with a content type timestamp at t4 t2 = ts.next() t3 = ts.next() t4 = ts.next() resp = do_update(t1, 'etag_at_t1', 1, 'ctype_at_t4', t_type=t4) self.assertEqual(resp.status_int, 201) # check updated listing, expect last_modified = t4 req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) 
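        # The listing row should merge the t1 data with the newer t4
        # content-type: etag and size from t1, content_type from t4, and
        # last_modified reported as the newest of the object's timestamps
        # (t4 here):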
self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 1) listing_data = json.loads(resp.body) self.assertEqual(1, len(listing_data)) for obj in listing_data: self.assertEqual(obj['name'], 'o') self.assertEqual(obj['bytes'], 1) self.assertEqual(obj['hash'], 'etag_at_t1') self.assertEqual(obj['content_type'], 'ctype_at_t4') self.assertEqual(obj['last_modified'], t4.isoformat) # now overwrite with an in-between data timestamp at t2 resp = do_update(t2, 'etag_at_t2', 2, 'ctype_at_t2', t_type=t2) self.assertEqual(resp.status_int, 201) # check updated listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 2) listing_data = json.loads(resp.body) self.assertEqual(1, len(listing_data)) for obj in listing_data: self.assertEqual(obj['name'], 'o') self.assertEqual(obj['bytes'], 2) self.assertEqual(obj['hash'], 'etag_at_t2') self.assertEqual(obj['content_type'], 'ctype_at_t4') self.assertEqual(obj['last_modified'], t4.isoformat) # now overwrite with an in-between content-type timestamp at t3 resp = do_update(t2, 'etag_at_t2', 2, 'ctype_at_t3', t_type=t3) self.assertEqual(resp.status_int, 201) # check updated listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 2) listing_data = json.loads(resp.body) self.assertEqual(1, len(listing_data)) for obj in listing_data: self.assertEqual(obj['name'], 'o') self.assertEqual(obj['bytes'], 2) self.assertEqual(obj['hash'], 'etag_at_t2') self.assertEqual(obj['content_type'], 'ctype_at_t4') self.assertEqual(obj['last_modified'], t4.isoformat) # now update with an in-between meta timestamp at t5 t5 = ts.next() resp = do_update(t2, 'etag_at_t2', 2, 'ctype_at_t3', t_type=t3, t_meta=t5) self.assertEqual(resp.status_int, 201) # check updated listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 2) listing_data = json.loads(resp.body) self.assertEqual(1, len(listing_data)) for obj in listing_data: self.assertEqual(obj['name'], 'o') self.assertEqual(obj['bytes'], 2) self.assertEqual(obj['hash'], 'etag_at_t2') self.assertEqual(obj['content_type'], 'ctype_at_t4') self.assertEqual(obj['last_modified'], t5.isoformat) # delete object at t6 t6 = ts.next() req = Request.blank( '/sda1/p/a/c/o', method='DELETE', headers={ 'X-Timestamp': t6.internal}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # check empty listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 0) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 0) listing_data = json.loads(resp.body) self.assertEqual(0, len(listing_data)) # subsequent content 
type timestamp at t8 should leave object deleted t7 = ts.next() t8 = ts.next() t9 = ts.next() resp = do_update(t2, 'etag_at_t2', 2, 'ctype_at_t8', t_type=t8, t_meta=t9) self.assertEqual(resp.status_int, 201) # check empty listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 0) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 0) listing_data = json.loads(resp.body) self.assertEqual(0, len(listing_data)) # object recreated at t7 should pick up existing, later content-type resp = do_update(t7, 'etag_at_t7', 7, 'ctype_at_t7') self.assertEqual(resp.status_int, 201) # check listing req = Request.blank('/sda1/p/a/c', method='GET', query_string='format=json') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1) self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 7) listing_data = json.loads(resp.body) self.assertEqual(1, len(listing_data)) for obj in listing_data: self.assertEqual(obj['name'], 'o') self.assertEqual(obj['bytes'], 7) self.assertEqual(obj['hash'], 'etag_at_t7') self.assertEqual(obj['content_type'], 'ctype_at_t8') self.assertEqual(obj['last_modified'], t9.isoformat) def test_DELETE_account_update(self): bindsock = listen(('127.0.0.1', 0)) def accept(return_code, expected_timestamp): try: with Timeout(3): sock, addr = bindsock.accept() inc = sock.makefile('rb') out = sock.makefile('wb') out.write('HTTP/1.1 %d OK\r\nContent-Length: 0\r\n\r\n' % return_code) out.flush() self.assertEqual(inc.readline(), 'PUT /sda1/123/a/c HTTP/1.1\r\n') headers = {} line = inc.readline() while line and line != '\r\n': headers[line.split(':')[0].lower()] = \ line.split(':')[1].strip() line = inc.readline() self.assertEqual(headers['x-delete-timestamp'], expected_timestamp) except BaseException as err: return err return None req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': Timestamp(2).internal, 'X-Account-Host': '%s:%s' % bindsock.getsockname(), 'X-Account-Partition': '123', 'X-Account-Device': 'sda1'}) event = spawn(accept, 204, Timestamp(2).internal) try: with Timeout(3): resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) finally: err = event.wait() if err: raise Exception(err) req = Request.blank( '/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': Timestamp(2).internal}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': Timestamp(3).internal, 'X-Account-Host': '%s:%s' % bindsock.getsockname(), 'X-Account-Partition': '123', 'X-Account-Device': 'sda1'}) event = spawn(accept, 404, Timestamp(3).internal) try: with Timeout(3): resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) finally: err = event.wait() if err: raise Exception(err) req = Request.blank( '/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': Timestamp(4).internal}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': 
Timestamp(5).internal, 'X-Account-Host': '%s:%s' % bindsock.getsockname(), 'X-Account-Partition': '123', 'X-Account-Device': 'sda1'}) event = spawn(accept, 503, Timestamp(5).internal) got_exc = False try: with Timeout(3): resp = req.get_response(self.controller) except BaseException as err: got_exc = True finally: err = event.wait() if err: raise Exception(err) self.assertTrue(not got_exc) def test_DELETE_invalid_partition(self): req = Request.blank( '/sda1/./a/c', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_DELETE_timestamp_not_float(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': 'not-float'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_DELETE_insufficient_storage(self): self.controller = container_server.ContainerController( {'devices': self.testdir}) req = Request.blank( '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 507) def test_GET_over_limit(self): req = Request.blank( '/sda1/p/a/c?limit=%d' % (constraints.CONTAINER_LISTING_LIMIT + 1), environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 412) def test_GET_json(self): # make a container req = Request.blank( '/sda1/p/a/jsonc', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) # test an empty container req = Request.blank( '/sda1/p/a/jsonc?format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(json.loads(resp.body), []) # fill the container for i in range(3): req = Request.blank( '/sda1/p/a/jsonc/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # test format json_body = [{"name": "0", "hash": "x", "bytes": 0, "content_type": "text/plain", "last_modified": "1970-01-01T00:00:01.000000"}, {"name": "1", "hash": "x", "bytes": 0, "content_type": "text/plain", "last_modified": "1970-01-01T00:00:01.000000"}, {"name": "2", "hash": "x", "bytes": 0, "content_type": "text/plain", "last_modified": "1970-01-01T00:00:01.000000"}] req = Request.blank( '/sda1/p/a/jsonc?format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(json.loads(resp.body), json_body) self.assertEqual(resp.charset, 'utf-8') req = Request.blank( '/sda1/p/a/jsonc?format=json', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/json') for accept in ('application/json', 'application/json;q=1.0,*/*;q=0.9', '*/*;q=0.9,application/json;q=1.0', 'application/*'): req = Request.blank( '/sda1/p/a/jsonc', environ={'REQUEST_METHOD': 'GET'}) req.accept = accept resp = req.get_response(self.controller) self.assertEqual( json.loads(resp.body), json_body, 'Invalid body for Accept: %s' % accept) self.assertEqual( resp.content_type, 'application/json', 'Invalid content_type 
for Accept: %s' % accept) req = Request.blank( '/sda1/p/a/jsonc', environ={'REQUEST_METHOD': 'HEAD'}) req.accept = accept resp = req.get_response(self.controller) self.assertEqual( resp.content_type, 'application/json', 'Invalid content_type for Accept: %s' % accept) def test_GET_plain(self): # make a container req = Request.blank( '/sda1/p/a/plainc', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) # test an empty container req = Request.blank( '/sda1/p/a/plainc', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # fill the container for i in range(3): req = Request.blank( '/sda1/p/a/plainc/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) plain_body = '0\n1\n2\n' req = Request.blank('/sda1/p/a/plainc', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(resp.body, plain_body) self.assertEqual(resp.charset, 'utf-8') req = Request.blank('/sda1/p/a/plainc', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'text/plain') for accept in ('', 'text/plain', 'application/xml;q=0.8,*/*;q=0.9', '*/*;q=0.9,application/xml;q=0.8', '*/*', 'text/plain,application/xml'): req = Request.blank( '/sda1/p/a/plainc', environ={'REQUEST_METHOD': 'GET'}) req.accept = accept resp = req.get_response(self.controller) self.assertEqual( resp.body, plain_body, 'Invalid body for Accept: %s' % accept) self.assertEqual( resp.content_type, 'text/plain', 'Invalid content_type for Accept: %s' % accept) req = Request.blank( '/sda1/p/a/plainc', environ={'REQUEST_METHOD': 'GET'}) req.accept = accept resp = req.get_response(self.controller) self.assertEqual( resp.content_type, 'text/plain', 'Invalid content_type for Accept: %s' % accept) # test conflicting formats req = Request.blank( '/sda1/p/a/plainc?format=plain', environ={'REQUEST_METHOD': 'GET'}) req.accept = 'application/json' resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(resp.body, plain_body) # test unknown format uses default plain req = Request.blank( '/sda1/p/a/plainc?format=somethingelse', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(resp.body, plain_body) def test_GET_json_last_modified(self): # make a container req = Request.blank( '/sda1/p/a/jsonc', environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) for i, d in [(0, 1.5), (1, 1.0), ]: req = Request.blank( '/sda1/p/a/jsonc/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': d, 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # test format # last_modified format must be uniform, even when there are not msecs json_body = [{"name": "0", "hash": "x", "bytes": 0, "content_type": "text/plain", "last_modified": "1970-01-01T00:00:01.500000"}, {"name": "1", "hash": "x", "bytes": 0, "content_type": "text/plain", "last_modified": 
"1970-01-01T00:00:01.000000"}, ] req = Request.blank( '/sda1/p/a/jsonc?format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(json.loads(resp.body), json_body) self.assertEqual(resp.charset, 'utf-8') def test_GET_xml(self): # make a container req = Request.blank( '/sda1/p/a/xmlc', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) # fill the container for i in range(3): req = Request.blank( '/sda1/p/a/xmlc/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) xml_body = '\n' \ '' \ '0x0' \ 'text/plain' \ '1970-01-01T00:00:01.000000' \ '' \ '1x0' \ 'text/plain' \ '1970-01-01T00:00:01.000000' \ '' \ '2x0' \ 'text/plain' \ '1970-01-01T00:00:01.000000' \ '' \ '' # tests req = Request.blank( '/sda1/p/a/xmlc?format=xml', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/xml') self.assertEqual(resp.body, xml_body) self.assertEqual(resp.charset, 'utf-8') req = Request.blank( '/sda1/p/a/xmlc?format=xml', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/xml') for xml_accept in ( 'application/xml', 'application/xml;q=1.0,*/*;q=0.9', '*/*;q=0.9,application/xml;q=1.0', 'application/xml,text/xml'): req = Request.blank( '/sda1/p/a/xmlc', environ={'REQUEST_METHOD': 'GET'}) req.accept = xml_accept resp = req.get_response(self.controller) self.assertEqual( resp.body, xml_body, 'Invalid body for Accept: %s' % xml_accept) self.assertEqual( resp.content_type, 'application/xml', 'Invalid content_type for Accept: %s' % xml_accept) req = Request.blank( '/sda1/p/a/xmlc', environ={'REQUEST_METHOD': 'HEAD'}) req.accept = xml_accept resp = req.get_response(self.controller) self.assertEqual( resp.content_type, 'application/xml', 'Invalid content_type for Accept: %s' % xml_accept) req = Request.blank( '/sda1/p/a/xmlc', environ={'REQUEST_METHOD': 'GET'}) req.accept = 'text/xml' resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'text/xml') self.assertEqual(resp.body, xml_body) def test_GET_marker(self): # make a container req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) # fill the container for i in range(3): req = Request.blank( '/sda1/p/a/c/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # test limit with marker req = Request.blank('/sda1/p/a/c?limit=2&marker=1', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) result = resp.body.split() self.assertEqual(result, ['2', ]) def test_weird_content_types(self): snowman = u'\u2603' req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) for i, ctype in enumerate((snowman.encode('utf-8'), 'text/plain; charset="utf-8"')): req = Request.blank( '/sda1/p/a/c/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 
'HTTP_X_CONTENT_TYPE': ctype, 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c?format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) result = [x['content_type'] for x in json.loads(resp.body)] self.assertEqual(result, [u'\u2603', 'text/plain;charset="utf-8"']) def test_GET_accept_not_valid(self): req = Request.blank('/sda1/p/a/c', method='PUT', headers={ 'X-Timestamp': Timestamp(0).internal}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c', method='GET') req.accept = 'application/xml*' resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 406) def test_GET_limit(self): # make a container req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) # fill the container for i in range(3): req = Request.blank( '/sda1/p/a/c/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # test limit req = Request.blank( '/sda1/p/a/c?limit=2', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) result = resp.body.split() self.assertEqual(result, ['0', '1']) def test_GET_prefix(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) for i in ('a1', 'b1', 'a2', 'b2', 'a3', 'b3'): req = Request.blank( '/sda1/p/a/c/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c?prefix=a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.body.split(), ['a1', 'a2', 'a3']) def test_GET_delimiter_too_long(self): req = Request.blank('/sda1/p/a/c?delimiter=xx', environ={'REQUEST_METHOD': 'GET', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 412) def test_GET_delimiter(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) for i in ('US-TX-A', 'US-TX-B', 'US-OK-A', 'US-OK-B', 'US-UT-A'): req = Request.blank( '/sda1/p/a/c/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c?prefix=US-&delimiter=-&format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual( json.loads(resp.body), [{"subdir": "US-OK-"}, {"subdir": "US-TX-"}, {"subdir": "US-UT-"}]) def test_GET_delimiter_xml(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) for i in ('US-TX-A', 'US-TX-B', 'US-OK-A', 'US-OK-B', 'US-UT-A'): req = Request.blank( '/sda1/p/a/c/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 
'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c?prefix=US-&delimiter=-&format=xml', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual( resp.body, '' '\n' 'US-OK-' 'US-TX-' 'US-UT-') def test_GET_delimiter_xml_with_quotes(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) req = Request.blank( '/sda1/p/a/c/<\'sub\' "dir">/object', environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c?delimiter=/&format=xml', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) dom = minidom.parseString(resp.body) self.assertTrue(len(dom.getElementsByTagName('container')) == 1) container = dom.getElementsByTagName('container')[0] self.assertTrue(len(container.getElementsByTagName('subdir')) == 1) subdir = container.getElementsByTagName('subdir')[0] self.assertEqual(six.text_type(subdir.attributes['name'].value), u'<\'sub\' "dir">/') self.assertTrue(len(subdir.getElementsByTagName('name')) == 1) name = subdir.getElementsByTagName('name')[0] self.assertEqual(six.text_type(name.childNodes[0].data), u'<\'sub\' "dir">/') def test_GET_path(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) for i in ('US/TX', 'US/TX/B', 'US/OK', 'US/OK/B', 'US/UT/A'): req = Request.blank( '/sda1/p/a/c/%s' % i, environ={ 'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0}) self._update_object_put_headers(req) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c?path=US&format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual( json.loads(resp.body), [{"name": "US/OK", "hash": "x", "bytes": 0, "content_type": "text/plain", "last_modified": "1970-01-01T00:00:01.000000"}, {"name": "US/TX", "hash": "x", "bytes": 0, "content_type": "text/plain", "last_modified": "1970-01-01T00:00:01.000000"}]) def test_GET_insufficient_storage(self): self.controller = container_server.ContainerController( {'devices': self.testdir}) req = Request.blank( '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'GET', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 507) def test_through_call(self): inbuf = BytesIO() errbuf = StringIO() outbuf = StringIO() def start_response(*args): outbuf.writelines(args) self.controller.__call__({'REQUEST_METHOD': 'GET', 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '404 ') def test_through_call_invalid_path(self): inbuf = BytesIO() errbuf = StringIO() 
outbuf = StringIO() def start_response(*args): outbuf.writelines(args) self.controller.__call__({'REQUEST_METHOD': 'GET', 'SCRIPT_NAME': '', 'PATH_INFO': '/bob', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '400 ') def test_through_call_invalid_path_utf8(self): inbuf = BytesIO() errbuf = StringIO() outbuf = StringIO() def start_response(*args): outbuf.writelines(args) self.controller.__call__({'REQUEST_METHOD': 'GET', 'SCRIPT_NAME': '', 'PATH_INFO': '\x00', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '412 ') def test_invalid_method_doesnt_exist(self): errbuf = StringIO() outbuf = StringIO() def start_response(*args): outbuf.writelines(args) self.controller.__call__({'REQUEST_METHOD': 'method_doesnt_exist', 'PATH_INFO': '/sda1/p/a/c'}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '405 ') def test_invalid_method_is_not_public(self): errbuf = StringIO() outbuf = StringIO() def start_response(*args): outbuf.writelines(args) self.controller.__call__({'REQUEST_METHOD': '__init__', 'PATH_INFO': '/sda1/p/a/c'}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '405 ') def test_params_format(self): req = Request.blank( '/sda1/p/a/c', method='PUT', headers={'X-Timestamp': Timestamp(1).internal}) req.get_response(self.controller) for format in ('xml', 'json'): req = Request.blank('/sda1/p/a/c?format=%s' % format, method='GET') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) def test_params_utf8(self): # Bad UTF8 sequence, all parameters should cause 400 error for param in ('delimiter', 'limit', 'marker', 'path', 'prefix', 'end_marker', 'format'): req = Request.blank('/sda1/p/a/c?%s=\xce' % param, environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400, "%d on param %s" % (resp.status_int, param)) # Good UTF8 sequence for delimiter, too long (1 byte delimiters only) req = Request.blank('/sda1/p/a/c?delimiter=\xce\xa9', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 412, "%d on param delimiter" % (resp.status_int)) req = Request.blank('/sda1/p/a/c', method='PUT', headers={'X-Timestamp': Timestamp(1).internal}) req.get_response(self.controller) # Good UTF8 sequence, ignored for limit, doesn't affect other queries for param in ('limit', 'marker', 'path', 'prefix', 'end_marker', 'format'): req = Request.blank('/sda1/p/a/c?%s=\xce\xa9' % param, environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204, "%d on param %s" % (resp.status_int, param)) def test_put_auto_create(self): headers = {'x-timestamp': Timestamp(1).internal, 'x-size': '0', 'x-content-type': 'text/plain', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e'} req = Request.blank('/sda1/p/a/c/o', 
environ={'REQUEST_METHOD': 'PUT'}, headers=dict(headers)) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) req = Request.blank('/sda1/p/.a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=dict(headers)) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/.c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=dict(headers)) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) req = Request.blank('/sda1/p/a/c/.o', environ={'REQUEST_METHOD': 'PUT'}, headers=dict(headers)) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_delete_auto_create(self): headers = {'x-timestamp': Timestamp(1).internal} req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers=dict(headers)) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) req = Request.blank('/sda1/p/.a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers=dict(headers)) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/.c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers=dict(headers)) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) req = Request.blank('/sda1/p/a/.c/.o', environ={'REQUEST_METHOD': 'DELETE'}, headers=dict(headers)) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_content_type_on_HEAD(self): Request.blank('/sda1/p/a/o', headers={'X-Timestamp': Timestamp(1).internal}, environ={'REQUEST_METHOD': 'PUT'}).get_response( self.controller) env = {'REQUEST_METHOD': 'HEAD'} req = Request.blank('/sda1/p/a/o?format=xml', environ=env) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/xml') self.assertEqual(resp.charset, 'utf-8') req = Request.blank('/sda1/p/a/o?format=json', environ=env) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(resp.charset, 'utf-8') req = Request.blank('/sda1/p/a/o', environ=env) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(resp.charset, 'utf-8') req = Request.blank( '/sda1/p/a/o', headers={'Accept': 'application/json'}, environ=env) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(resp.charset, 'utf-8') req = Request.blank( '/sda1/p/a/o', headers={'Accept': 'application/xml'}, environ=env) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/xml') self.assertEqual(resp.charset, 'utf-8') def test_updating_multiple_container_servers(self): http_connect_args = [] def fake_http_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None, ssl=False): class SuccessfulFakeConn(object): @property def status(self): return 200 def getresponse(self): return self def read(self): return '' captured_args = {'ipaddr': ipaddr, 'port': port, 'device': device, 'partition': partition, 'method': method, 'path': path, 'ssl': ssl, 'headers': headers, 'query_string': query_string} http_connect_args.append( dict((k, v) for k, v in captured_args.items() if v is not None)) req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '12345', 'X-Account-Partition': '30', 'X-Account-Host': '1.2.3.4:5, 6.7.8.9:10', 'X-Account-Device': 'sdb1, sdf1'}) orig_http_connect = 
container_server.http_connect try: container_server.http_connect = fake_http_connect req.get_response(self.controller) finally: container_server.http_connect = orig_http_connect http_connect_args.sort(key=operator.itemgetter('ipaddr')) self.assertEqual(len(http_connect_args), 2) self.assertEqual( http_connect_args[0], {'ipaddr': '1.2.3.4', 'port': '5', 'path': '/a/c', 'device': 'sdb1', 'partition': '30', 'method': 'PUT', 'ssl': False, 'headers': HeaderKeyDict({ 'x-bytes-used': 0, 'x-delete-timestamp': '0', 'x-object-count': 0, 'x-put-timestamp': Timestamp(12345).internal, 'X-Backend-Storage-Policy-Index': '%s' % POLICIES.default.idx, 'referer': 'PUT http://localhost/sda1/p/a/c', 'user-agent': 'container-server %d' % os.getpid(), 'x-trans-id': '-'})}) self.assertEqual( http_connect_args[1], {'ipaddr': '6.7.8.9', 'port': '10', 'path': '/a/c', 'device': 'sdf1', 'partition': '30', 'method': 'PUT', 'ssl': False, 'headers': HeaderKeyDict({ 'x-bytes-used': 0, 'x-delete-timestamp': '0', 'x-object-count': 0, 'x-put-timestamp': Timestamp(12345).internal, 'X-Backend-Storage-Policy-Index': '%s' % POLICIES.default.idx, 'referer': 'PUT http://localhost/sda1/p/a/c', 'user-agent': 'container-server %d' % os.getpid(), 'x-trans-id': '-'})}) def test_serv_reserv(self): # Test replication_server flag was set from configuration file. container_controller = container_server.ContainerController conf = {'devices': self.testdir, 'mount_check': 'false'} self.assertEqual(container_controller(conf).replication_server, None) for val in [True, '1', 'True', 'true']: conf['replication_server'] = val self.assertTrue(container_controller(conf).replication_server) for val in [False, 0, '0', 'False', 'false', 'test_string']: conf['replication_server'] = val self.assertFalse(container_controller(conf).replication_server) def test_list_allowed_methods(self): # Test list of allowed_methods obj_methods = ['DELETE', 'PUT', 'HEAD', 'GET', 'POST'] repl_methods = ['REPLICATE'] for method_name in obj_methods: method = getattr(self.controller, method_name) self.assertFalse(hasattr(method, 'replication')) for method_name in repl_methods: method = getattr(self.controller, method_name) self.assertEqual(method.replication, True) def test_correct_allowed_method(self): # Test correct work for allowed method using # swift.container.server.ContainerController.__call__ inbuf = BytesIO() errbuf = StringIO() outbuf = StringIO() self.controller = container_server.ContainerController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false'}) def start_response(*args): """Sends args to outbuf""" outbuf.writelines(args) method = 'PUT' env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} method_res = mock.MagicMock() mock_method = public(lambda x: mock.MagicMock(return_value=method_res)) with mock.patch.object(self.controller, method, new=mock_method): response = self.controller(env, start_response) self.assertEqual(response, method_res) def test_not_allowed_method(self): # Test correct work for NOT allowed method using # swift.container.server.ContainerController.__call__ inbuf = BytesIO() errbuf = StringIO() outbuf = StringIO() self.controller = container_server.ContainerController( {'devices': self.testdir, 'mount_check': 
'false', 'replication_server': 'false'}) def start_response(*args): """Sends args to outbuf""" outbuf.writelines(args) method = 'PUT' env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} answer = ['

<html><h1>Method Not Allowed</h1><p>The method is not '
                  'allowed for this resource.</p></html>
'] mock_method = replication(public(lambda x: mock.MagicMock())) with mock.patch.object(self.controller, method, new=mock_method): response = self.controller.__call__(env, start_response) self.assertEqual(response, answer) def test_call_incorrect_replication_method(self): inbuf = BytesIO() errbuf = StringIO() outbuf = StringIO() self.controller = container_server.ContainerController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'true'}) def start_response(*args): """Sends args to outbuf""" outbuf.writelines(args) obj_methods = ['DELETE', 'PUT', 'HEAD', 'GET', 'POST', 'OPTIONS'] for method in obj_methods: env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} self.controller(env, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '405 ') def test__call__raise_timeout(self): inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() self.logger = debug_logger('test') self.container_controller = container_server.ContainerController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false', 'log_requests': 'false'}, logger=self.logger) def start_response(*args): # Sends args to outbuf outbuf.writelines(args) method = 'PUT' env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} @public def mock_put_method(*args, **kwargs): raise Exception() with mock.patch.object(self.container_controller, method, new=mock_put_method): response = self.container_controller.__call__(env, start_response) self.assertTrue(response[0].startswith( 'Traceback (most recent call last):')) self.assertEqual(self.logger.get_lines_for_level('error'), [ 'ERROR __call__ error with %(method)s %(path)s : ' % { 'method': 'PUT', 'path': '/sda1/p/a/c'}, ]) self.assertEqual(self.logger.get_lines_for_level('info'), []) def test_GET_log_requests_true(self): self.controller.logger = FakeLogger() self.controller.log_requests = True req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertTrue(self.controller.logger.log_dict['info']) def test_GET_log_requests_false(self): self.controller.logger = FakeLogger() self.controller.log_requests = False req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertFalse(self.controller.logger.log_dict['info']) def test_log_line_format(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD', 'REMOTE_ADDR': '1.2.3.4'}) self.controller.logger = FakeLogger() with mock.patch( 'time.gmtime', mock.MagicMock(side_effect=[gmtime(10001.0)])): with mock.patch( 'time.time', mock.MagicMock(side_effect=[10000.0, 10001.0, 10002.0])): with mock.patch( 'os.getpid', mock.MagicMock(return_value=1234)): req.get_response(self.controller) self.assertEqual( self.controller.logger.log_dict['info'], 
[(('1.2.3.4 - - [01/Jan/1970:02:46:41 +0000] "HEAD /sda1/p/a/c" ' '404 - "-" "-" "-" 2.0000 "-" 1234 0',), {})]) @patch_policies([ StoragePolicy(0, 'legacy'), StoragePolicy(1, 'one'), StoragePolicy(2, 'two', True), StoragePolicy(3, 'three'), StoragePolicy(4, 'four'), ]) class TestNonLegacyDefaultStoragePolicy(TestContainerController): """ Test swift.container.server.ContainerController with a non-legacy default Storage Policy. """ def _update_object_put_headers(self, req): """ Add policy index headers for containers created with default policy - which in this TestCase is 1. """ req.headers['X-Backend-Storage-Policy-Index'] = \ str(POLICIES.default.idx) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/container/test_sync.py0000664000567000056710000015414013024044354022327 0ustar jenkinsjenkins00000000000000 # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import unittest from textwrap import dedent import mock import errno from swift.common.utils import Timestamp from test.unit import debug_logger from swift.container import sync from swift.common.db import DatabaseConnectionError from swift.common import utils from swift.common.wsgi import ConfigString from swift.common.exceptions import ClientException from swift.common.storage_policy import StoragePolicy import test from test.unit import patch_policies, with_tempdir utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = 'endcap' class FakeRing(object): def __init__(self): self.devs = [{'ip': '10.0.0.%s' % x, 'port': 1000 + x, 'device': 'sda'} for x in range(3)] def get_nodes(self, account, container=None, obj=None): return 1, list(self.devs) class FakeContainerBroker(object): def __init__(self, path, metadata=None, info=None, deleted=False, items_since=None): self.db_file = path self.db_dir = os.path.dirname(path) self.metadata = metadata if metadata else {} self.info = info if info else {} self.deleted = deleted self.items_since = items_since if items_since else [] self.sync_point1 = -1 self.sync_point2 = -1 def get_info(self): return self.info def is_deleted(self): return self.deleted def get_items_since(self, sync_point, limit): if sync_point < 0: sync_point = 0 return self.items_since[sync_point:sync_point + limit] def set_x_container_sync_points(self, sync_point1, sync_point2): self.sync_point1 = sync_point1 self.sync_point2 = sync_point2 @patch_policies([StoragePolicy(0, 'zero', True, object_ring=FakeRing())]) class TestContainerSync(unittest.TestCase): def setUp(self): self.logger = debug_logger('test-container-sync') def test_FileLikeIter(self): # Retained test to show new FileLikeIter acts just like the removed # _Iter2FileLikeObject did. 
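        # Editor's note: the assertions below only depend on read(size)
        # returning at most `size` bytes and read() with no argument
        # draining whatever is left.  A minimal sketch of such a wrapper
        # (an illustration only, not Swift's actual FileLikeIter from
        # swift.common.utils) could look like this:
        class _SketchFileLikeIter(object):
            """Wrap an iterator of strings with a file-like read()."""

            def __init__(self, iterable):
                self.iterator = iter(iterable)
                self.buf = ''

            def read(self, size=-1):
                # Pull chunks until we have `size` bytes (or run out).
                while size < 0 or len(self.buf) < size:
                    try:
                        self.buf += next(self.iterator)
                    except StopIteration:
                        break
                if size < 0:
                    data, self.buf = self.buf, ''
                else:
                    data, self.buf = self.buf[:size], self.buf[size:]
                return data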
flo = sync.FileLikeIter(iter(['123', '4567', '89', '0'])) expect = '1234567890' got = flo.read(2) self.assertTrue(len(got) <= 2) self.assertEqual(got, expect[:len(got)]) expect = expect[len(got):] got = flo.read(5) self.assertTrue(len(got) <= 5) self.assertEqual(got, expect[:len(got)]) expect = expect[len(got):] self.assertEqual(flo.read(), expect) self.assertEqual(flo.read(), '') self.assertEqual(flo.read(2), '') flo = sync.FileLikeIter(iter(['123', '4567', '89', '0'])) self.assertEqual(flo.read(), '1234567890') self.assertEqual(flo.read(), '') self.assertEqual(flo.read(2), '') def assertLogMessage(self, msg_level, expected, skip=0): for line in self.logger.get_lines_for_level(msg_level)[skip:]: msg = 'expected %r not in %r' % (expected, line) self.assertTrue(expected in line, msg) @with_tempdir def test_init(self, tempdir): ic_conf_path = os.path.join(tempdir, 'internal-client.conf') cring = FakeRing() with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=cring) self.assertTrue(cs.container_ring is cring) # specified but not exists will not start conf = {'internal_client_conf_path': ic_conf_path} self.assertRaises(SystemExit, sync.ContainerSync, conf, container_ring=cring, logger=self.logger) # not specified will use default conf with mock.patch('swift.container.sync.InternalClient') as mock_ic: cs = sync.ContainerSync({}, container_ring=cring, logger=self.logger) self.assertTrue(cs.container_ring is cring) self.assertTrue(mock_ic.called) conf_path, name, retry = mock_ic.call_args[0] self.assertTrue(isinstance(conf_path, ConfigString)) self.assertEqual(conf_path.contents.getvalue(), dedent(sync.ic_conf_body)) self.assertLogMessage('warning', 'internal_client_conf_path') self.assertLogMessage('warning', 'internal-client.conf-sample') # correct contents = dedent(sync.ic_conf_body) with open(ic_conf_path, 'w') as f: f.write(contents) with mock.patch('swift.container.sync.InternalClient') as mock_ic: cs = sync.ContainerSync(conf, container_ring=cring) self.assertTrue(cs.container_ring is cring) self.assertTrue(mock_ic.called) conf_path, name, retry = mock_ic.call_args[0] self.assertEqual(conf_path, ic_conf_path) sample_conf_filename = os.path.join( os.path.dirname(test.__file__), '../etc/internal-client.conf-sample') with open(sample_conf_filename) as sample_conf_file: sample_conf = sample_conf_file.read() self.assertEqual(contents, sample_conf) def test_run_forever(self): # This runs runs_forever with fakes to succeed for two loops, the first # causing a report but no interval sleep, the second no report but an # interval sleep. time_calls = [0] sleep_calls = [] def fake_time(): time_calls[0] += 1 returns = [1, # Initialized reported time 1, # Start time 3602, # Is it report time (yes) 3602, # Report time 3602, # Elapsed time for "under interval" (no) 3602, # Start time 3603, # Is it report time (no) 3603] # Elapsed time for "under interval" (yes) if time_calls[0] == len(returns) + 1: raise Exception('we are now done') return returns[time_calls[0] - 1] def fake_sleep(amount): sleep_calls.append(amount) gen_func = ('swift.container.sync_store.' 
'ContainerSyncStore.synced_containers_generator') with mock.patch('swift.container.sync.InternalClient'), \ mock.patch('swift.container.sync.time', fake_time), \ mock.patch('swift.container.sync.sleep', fake_sleep), \ mock.patch(gen_func) as fake_generator, \ mock.patch('swift.container.sync.ContainerBroker', lambda p: FakeContainerBroker(p, info={ 'account': 'a', 'container': 'c', 'storage_policy_index': 0})): fake_generator.side_effect = [iter(['container.db']), iter(['container.db'])] cs = sync.ContainerSync({}, container_ring=FakeRing()) try: cs.run_forever() except Exception as err: if str(err) != 'we are now done': raise self.assertEqual(time_calls, [9]) self.assertEqual(len(sleep_calls), 2) self.assertLessEqual(sleep_calls[0], cs.interval) self.assertEqual(cs.interval - 1, sleep_calls[1]) self.assertEqual(2, fake_generator.call_count) self.assertEqual(cs.reported, 3602) def test_run_once(self): # This runs runs_once with fakes twice, the first causing an interim # report, the second with no interim report. time_calls = [0] def fake_time(): time_calls[0] += 1 returns = [1, # Initialized reported time 1, # Start time 3602, # Is it report time (yes) 3602, # Report time 3602, # End report time 3602, # For elapsed 3602, # Start time 3603, # Is it report time (no) 3604, # End report time 3605] # For elapsed if time_calls[0] == len(returns) + 1: raise Exception('we are now done') return returns[time_calls[0] - 1] gen_func = ('swift.container.sync_store.' 'ContainerSyncStore.synced_containers_generator') with mock.patch('swift.container.sync.InternalClient'), \ mock.patch('swift.container.sync.time', fake_time), \ mock.patch(gen_func) as fake_generator, \ mock.patch('swift.container.sync.ContainerBroker', lambda p: FakeContainerBroker(p, info={ 'account': 'a', 'container': 'c', 'storage_policy_index': 0})): fake_generator.side_effect = [iter(['container.db']), iter(['container.db'])] cs = sync.ContainerSync({}, container_ring=FakeRing()) try: cs.run_once() self.assertEqual(time_calls, [6]) self.assertEqual(1, fake_generator.call_count) self.assertEqual(cs.reported, 3602) cs.run_once() except Exception as err: if str(err) != 'we are now done': raise self.assertEqual(time_calls, [10]) self.assertEqual(2, fake_generator.call_count) self.assertEqual(cs.reported, 3604) def test_container_sync_not_db(self): cring = FakeRing() with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=cring) self.assertEqual(cs.container_failures, 0) def test_container_sync_missing_db(self): cring = FakeRing() with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=cring) broker = 'swift.container.backend.ContainerBroker' store = 'swift.container.sync_store.ContainerSyncStore' # In this test we call the container_sync instance several # times with a missing db in various combinations. # Since we use the same ContainerSync instance for all tests # its failures counter increases by one with each call. # Test the case where get_info returns DatabaseConnectionError # with DB does not exist, and we succeed in deleting it. 
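        # Editor's note: in every case below container_failures is
        # incremented; what differs is whether remove_synced_container()
        # is attempted (only when the error says the DB does not exist)
        # and whether that removal itself raises.  The per-case assertions
        # on container_failures and fake_remove.call_count track exactly
        # that.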
with mock.patch(broker + '.get_info') as fake_get_info: with mock.patch(store + '.remove_synced_container') as fake_remove: fake_get_info.side_effect = DatabaseConnectionError( 'a', "DB doesn't exist") cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 1) self.assertEqual(cs.container_skips, 0) self.assertEqual(1, fake_remove.call_count) self.assertEqual('isa.db', fake_remove.call_args[0][0].db_file) # Test the case where get_info returns DatabaseConnectionError # with DB does not exist, and we fail to delete it. with mock.patch(broker + '.get_info') as fake_get_info: with mock.patch(store + '.remove_synced_container') as fake_remove: fake_get_info.side_effect = DatabaseConnectionError( 'a', "DB doesn't exist") fake_remove.side_effect = OSError('1') cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 2) self.assertEqual(cs.container_skips, 0) self.assertEqual(1, fake_remove.call_count) self.assertEqual('isa.db', fake_remove.call_args[0][0].db_file) # Test the case where get_info returns DatabaseConnectionError # with DB does not exist, and it returns an error != ENOENT. with mock.patch(broker + '.get_info') as fake_get_info: with mock.patch(store + '.remove_synced_container') as fake_remove: fake_get_info.side_effect = DatabaseConnectionError( 'a', "DB doesn't exist") fake_remove.side_effect = OSError(errno.EPERM, 'a') cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 3) self.assertEqual(cs.container_skips, 0) self.assertEqual(1, fake_remove.call_count) self.assertEqual('isa.db', fake_remove.call_args[0][0].db_file) # Test the case where get_info returns DatabaseConnectionError # error different than DB does not exist with mock.patch(broker + '.get_info') as fake_get_info: with mock.patch(store + '.remove_synced_container') as fake_remove: fake_get_info.side_effect = DatabaseConnectionError('a', 'a') cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 4) self.assertEqual(cs.container_skips, 0) self.assertEqual(0, fake_remove.call_count) def test_container_sync_not_my_db(self): # Db could be there due to handoff replication so test that we ignore # those. 
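        # Editor's note: ContainerSync only processes a DB when one of the
        # ring devices for that DB's partition matches both a local IP
        # (cs._myips) and the local port (cs._myport).  The FakeRing devices
        # are 10.0.0.0-2 on ports 1000-1002, so the combinations below show
        # that a partial match is silently skipped and only the full match
        # goes on to attempt (and then fail) the actual sync.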
cring = FakeRing() with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({ 'bind_ip': '10.0.0.0', }, container_ring=cring) # Plumbing test for bind_ip and whataremyips() self.assertEqual(['10.0.0.0'], cs._myips) orig_ContainerBroker = sync.ContainerBroker try: sync.ContainerBroker = lambda p: FakeContainerBroker( p, info={'account': 'a', 'container': 'c', 'storage_policy_index': 0}) cs._myips = ['127.0.0.1'] # No match cs._myport = 1 # No match cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 0) cs._myips = ['10.0.0.0'] # Match cs._myport = 1 # No match cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 0) cs._myips = ['127.0.0.1'] # No match cs._myport = 1000 # Match cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 0) cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match # This complete match will cause the 1 container failure since the # broker's info doesn't contain sync point keys cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 1) finally: sync.ContainerBroker = orig_ContainerBroker def test_container_sync_deleted(self): cring = FakeRing() with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=cring) orig_ContainerBroker = sync.ContainerBroker try: sync.ContainerBroker = lambda p: FakeContainerBroker( p, info={'account': 'a', 'container': 'c', 'storage_policy_index': 0}, deleted=False) cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match # This complete match will cause the 1 container failure since the # broker's info doesn't contain sync point keys cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 1) sync.ContainerBroker = lambda p: FakeContainerBroker( p, info={'account': 'a', 'container': 'c', 'storage_policy_index': 0}, deleted=True) # This complete match will not cause any more container failures # since the broker indicates deletion cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 1) finally: sync.ContainerBroker = orig_ContainerBroker def test_container_sync_no_to_or_key(self): cring = FakeRing() with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=cring) orig_ContainerBroker = sync.ContainerBroker try: sync.ContainerBroker = lambda p: FakeContainerBroker( p, info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1}) cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match # This complete match will be skipped since the broker's metadata # has no x-container-sync-to or x-container-sync-key cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 0) self.assertEqual(cs.container_skips, 1) sync.ContainerBroker = lambda p: FakeContainerBroker( p, info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1)}) cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match # This complete match will be skipped since the broker's metadata # has no x-container-sync-key cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 0) self.assertEqual(cs.container_skips, 2) sync.ContainerBroker = lambda p: FakeContainerBroker( p, info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1}, metadata={'x-container-sync-key': ('key', 1)}) cs._myips = ['10.0.0.0'] # Match 
cs._myport = 1000 # Match # This complete match will be skipped since the broker's metadata # has no x-container-sync-to cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 0) self.assertEqual(cs.container_skips, 3) sync.ContainerBroker = lambda p: FakeContainerBroker( p, info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}) cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = [] # This complete match will cause a container failure since the # sync-to won't validate as allowed. cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 1) self.assertEqual(cs.container_skips, 3) sync.ContainerBroker = lambda p: FakeContainerBroker( p, info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}) cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] # This complete match will succeed completely since the broker # get_items_since will return no new rows. cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 1) self.assertEqual(cs.container_skips, 3) finally: sync.ContainerBroker = orig_ContainerBroker def test_container_stop_at(self): cring = FakeRing() with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=cring) orig_ContainerBroker = sync.ContainerBroker orig_time = sync.time try: sync.ContainerBroker = lambda p: FakeContainerBroker( p, info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}, items_since=['erroneous data']) cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] # This sync will fail since the items_since data is bad. cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 1) self.assertEqual(cs.container_skips, 0) # Set up fake times to make the sync short-circuit as having taken # too long fake_times = [ 1.0, # Compute the time to move on 100000.0, # Compute if it's time to move on from first loop 100000.0] # Compute if it's time to move on from second loop def fake_time(): return fake_times.pop(0) sync.time = fake_time # This same sync won't fail since it will look like it took so long # as to be time to move on (before it ever actually tries to do # anything). 
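            # Editor's note: the first fake_time() value (1.0) is read when
            # container_sync computes its cut-off, and the later 100000.0
            # readings make every loop iteration believe that cut-off has
            # already passed, so the erroneous row is never examined and the
            # failure counter stays at 1.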
cs.container_sync('isa.db') self.assertEqual(cs.container_failures, 1) self.assertEqual(cs.container_skips, 0) finally: sync.ContainerBroker = orig_ContainerBroker sync.time = orig_time def test_container_first_loop(self): cring = FakeRing() with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=cring) def fake_hash_path(account, container, obj, raw_digest=False): # Ensures that no rows match for full syncing, ordinal is 0 and # all hashes are 0 return '\x00' * 16 fcb = FakeContainerBroker( 'path', info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': 2, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}, items_since=[{'ROWID': 1, 'name': 'o'}]) with mock.patch('swift.container.sync.ContainerBroker', lambda p: fcb), \ mock.patch('swift.container.sync.hash_path', fake_hash_path): cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] cs.container_sync('isa.db') # Succeeds because no rows match self.assertEqual(cs.container_failures, 1) self.assertEqual(cs.container_skips, 0) self.assertEqual(fcb.sync_point1, None) self.assertEqual(fcb.sync_point2, -1) def fake_hash_path(account, container, obj, raw_digest=False): # Ensures that all rows match for full syncing, ordinal is 0 # and all hashes are 1 return '\x01' * 16 fcb = FakeContainerBroker('path', info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': 1, 'x_container_sync_point2': 1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}, items_since=[{'ROWID': 1, 'name': 'o'}]) with mock.patch('swift.container.sync.ContainerBroker', lambda p: fcb), \ mock.patch('swift.container.sync.hash_path', fake_hash_path): cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] cs.container_sync('isa.db') # Succeeds because the two sync points haven't deviated yet self.assertEqual(cs.container_failures, 1) self.assertEqual(cs.container_skips, 0) self.assertEqual(fcb.sync_point1, -1) self.assertEqual(fcb.sync_point2, -1) fcb = FakeContainerBroker( 'path', info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': 2, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}, items_since=[{'ROWID': 1, 'name': 'o'}]) with mock.patch('swift.container.sync.ContainerBroker', lambda p: fcb): cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] cs.container_sync('isa.db') # Fails because container_sync_row will fail since the row has no # 'deleted' key self.assertEqual(cs.container_failures, 2) self.assertEqual(cs.container_skips, 0) self.assertEqual(fcb.sync_point1, None) self.assertEqual(fcb.sync_point2, -1) def fake_delete_object(*args, **kwargs): raise ClientException fcb = FakeContainerBroker( 'path', info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': 2, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}, items_since=[{'ROWID': 1, 'name': 'o', 'created_at': '1.2', 'deleted': True}]) with mock.patch('swift.container.sync.ContainerBroker', lambda p: fcb), \ mock.patch('swift.container.sync.delete_object', fake_delete_object): cs._myips = ['10.0.0.0'] # Match 
cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] cs.container_sync('isa.db') # Fails because delete_object fails self.assertEqual(cs.container_failures, 3) self.assertEqual(cs.container_skips, 0) self.assertEqual(fcb.sync_point1, None) self.assertEqual(fcb.sync_point2, -1) fcb = FakeContainerBroker( 'path', info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': 2, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}, items_since=[{'ROWID': 1, 'name': 'o', 'created_at': '1.2', 'deleted': True}]) with mock.patch('swift.container.sync.ContainerBroker', lambda p: fcb), \ mock.patch('swift.container.sync.delete_object', lambda *x, **y: None): cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] cs.container_sync('isa.db') # Succeeds because delete_object succeeds self.assertEqual(cs.container_failures, 3) self.assertEqual(cs.container_skips, 0) self.assertEqual(fcb.sync_point1, None) self.assertEqual(fcb.sync_point2, 1) def test_container_second_loop(self): cring = FakeRing() with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=cring, logger=self.logger) orig_ContainerBroker = sync.ContainerBroker orig_hash_path = sync.hash_path orig_delete_object = sync.delete_object try: # We'll ensure the first loop is always skipped by keeping the two # sync points equal def fake_hash_path(account, container, obj, raw_digest=False): # Ensures that no rows match for second loop, ordinal is 0 and # all hashes are 1 return '\x01' * 16 sync.hash_path = fake_hash_path fcb = FakeContainerBroker( 'path', info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}, items_since=[{'ROWID': 1, 'name': 'o'}]) sync.ContainerBroker = lambda p: fcb cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] cs.container_sync('isa.db') # Succeeds because no rows match self.assertEqual(cs.container_failures, 0) self.assertEqual(cs.container_skips, 0) self.assertEqual(fcb.sync_point1, 1) self.assertEqual(fcb.sync_point2, None) def fake_hash_path(account, container, obj, raw_digest=False): # Ensures that all rows match for second loop, ordinal is 0 and # all hashes are 0 return '\x00' * 16 def fake_delete_object(*args, **kwargs): pass sync.hash_path = fake_hash_path sync.delete_object = fake_delete_object fcb = FakeContainerBroker( 'path', info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': ('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}, items_since=[{'ROWID': 1, 'name': 'o'}]) sync.ContainerBroker = lambda p: fcb cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] cs.container_sync('isa.db') # Fails because row is missing 'deleted' key # Nevertheless the fault is skipped self.assertEqual(cs.container_failures, 1) self.assertEqual(cs.container_skips, 0) self.assertEqual(fcb.sync_point1, 1) self.assertEqual(fcb.sync_point2, None) fcb = FakeContainerBroker( 'path', info={'account': 'a', 'container': 'c', 'storage_policy_index': 0, 'x_container_sync_point1': -1, 'x_container_sync_point2': -1}, metadata={'x-container-sync-to': 
('http://127.0.0.1/a/c', 1), 'x-container-sync-key': ('key', 1)}, items_since=[{'ROWID': 1, 'name': 'o', 'created_at': '1.2', 'deleted': True}]) sync.ContainerBroker = lambda p: fcb cs._myips = ['10.0.0.0'] # Match cs._myport = 1000 # Match cs.allowed_sync_hosts = ['127.0.0.1'] cs.container_sync('isa.db') # Succeeds because row now has 'deleted' key and delete_object # succeeds self.assertEqual(cs.container_failures, 1) self.assertEqual(cs.container_skips, 0) self.assertEqual(fcb.sync_point1, 1) self.assertEqual(fcb.sync_point2, None) finally: sync.ContainerBroker = orig_ContainerBroker sync.hash_path = orig_hash_path sync.delete_object = orig_delete_object def test_container_sync_row_delete(self): self._test_container_sync_row_delete(None, None) def test_container_sync_row_delete_using_realms(self): self._test_container_sync_row_delete('US', 'realm_key') def _test_container_sync_row_delete(self, realm, realm_key): orig_uuid = sync.uuid orig_delete_object = sync.delete_object try: class FakeUUID(object): class uuid4(object): hex = 'abcdef' sync.uuid = FakeUUID ts_data = Timestamp(1.1) def fake_delete_object(path, name=None, headers=None, proxy=None, logger=None, timeout=None): self.assertEqual(path, 'http://sync/to/path') self.assertEqual(name, 'object') if realm: self.assertEqual(headers, { 'x-container-sync-auth': 'US abcdef a2401ecb1256f469494a0abcb0eb62ffa73eca63', 'x-timestamp': ts_data.internal}) else: self.assertEqual( headers, {'x-container-sync-key': 'key', 'x-timestamp': ts_data.internal}) self.assertEqual(proxy, 'http://proxy') self.assertEqual(timeout, 5.0) self.assertEqual(logger, self.logger) sync.delete_object = fake_delete_object with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=FakeRing(), logger=self.logger) cs.http_proxies = ['http://proxy'] # Success. 
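            # Editor's note (assuming Swift's encode_timestamps convention):
            # the '+1388+1388' suffix used below appends two hex offsets,
            # each in units of the internal timestamp resolution (1e-5 s),
            # e.g. int('1388', 16) == 5000 -> 0.05 seconds.  That puts the
            # content-type time at 1.1 + 0.05 = 1.15 and the metadata time
            # at 1.15 + 0.05 = 1.2, which is why the row's last-modified
            # time is 1.2 even though its data timestamp is 1.1.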
# simulate a row with tombstone at 1.1 and later ctype, meta times created_at = ts_data.internal + '+1388+1388' # last modified = 1.2 self.assertTrue(cs.container_sync_row( {'deleted': True, 'name': 'object', 'created_at': created_at}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) self.assertEqual(cs.container_deletes, 1) exc = [] def fake_delete_object(*args, **kwargs): exc.append(Exception('test exception')) raise exc[-1] sync.delete_object = fake_delete_object # Failure because of delete_object exception self.assertFalse(cs.container_sync_row( {'deleted': True, 'name': 'object', 'created_at': '1.2'}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) self.assertEqual(cs.container_deletes, 1) self.assertEqual(len(exc), 1) self.assertEqual(str(exc[-1]), 'test exception') def fake_delete_object(*args, **kwargs): exc.append(ClientException('test client exception')) raise exc[-1] sync.delete_object = fake_delete_object # Failure because of delete_object exception self.assertFalse(cs.container_sync_row( {'deleted': True, 'name': 'object', 'created_at': '1.2'}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) self.assertEqual(cs.container_deletes, 1) self.assertEqual(len(exc), 2) self.assertEqual(str(exc[-1]), 'test client exception') def fake_delete_object(*args, **kwargs): exc.append(ClientException('test client exception', http_status=404)) raise exc[-1] sync.delete_object = fake_delete_object # Success because the object wasn't even found self.assertTrue(cs.container_sync_row( {'deleted': True, 'name': 'object', 'created_at': '1.2'}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) self.assertEqual(cs.container_deletes, 2) self.assertEqual(len(exc), 3) self.assertEqual(str(exc[-1]), 'test client exception: 404') finally: sync.uuid = orig_uuid sync.delete_object = orig_delete_object def test_container_sync_row_put(self): self._test_container_sync_row_put(None, None) def test_container_sync_row_put_using_realms(self): self._test_container_sync_row_put('US', 'realm_key') def _test_container_sync_row_put(self, realm, realm_key): orig_uuid = sync.uuid orig_put_object = sync.put_object orig_head_object = sync.head_object try: class FakeUUID(object): class uuid4(object): hex = 'abcdef' sync.uuid = FakeUUID ts_data = Timestamp(1.1) timestamp = Timestamp(1.2) def fake_put_object(sync_to, name=None, headers=None, contents=None, proxy=None, logger=None, timeout=None): self.assertEqual(sync_to, 'http://sync/to/path') self.assertEqual(name, 'object') if realm: self.assertEqual(headers, { 'x-container-sync-auth': 'US abcdef a5fb3cf950738e6e3b364190e246bd7dd21dad3c', 'x-timestamp': timestamp.internal, 'etag': 'etagvalue', 'other-header': 'other header value', 'content-type': 'text/plain'}) else: self.assertEqual(headers, { 'x-container-sync-key': 'key', 'x-timestamp': timestamp.internal, 'other-header': 'other header value', 'etag': 'etagvalue', 'content-type': 'text/plain'}) self.assertEqual(contents.read(), 'contents') self.assertEqual(proxy, 'http://proxy') self.assertEqual(timeout, 5.0) self.assertEqual(logger, self.logger) sync.put_object = fake_put_object expected_put_count = 0 excepted_failure_count = 0 with 
mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync({}, container_ring=FakeRing(), logger=self.logger) cs.http_proxies = ['http://proxy'] def fake_get_object(acct, con, obj, headers, acceptable_statuses): self.assertEqual(headers['X-Backend-Storage-Policy-Index'], '0') return (200, {'other-header': 'other header value', 'etag': '"etagvalue"', 'x-timestamp': timestamp.internal, 'content-type': 'text/plain; swift_bytes=123'}, iter('contents')) cs.swift.get_object = fake_get_object # Success as everything says it worked. # simulate a row with data at 1.1 and later ctype, meta times created_at = ts_data.internal + '+1388+1388' # last modified = 1.2 def fake_object_in_rcontainer(row, sync_to, user_key, broker, realm, realm_key): return False orig_object_in_rcontainer = cs._object_in_remote_container cs._object_in_remote_container = fake_object_in_rcontainer self.assertTrue(cs.container_sync_row( {'deleted': False, 'name': 'object', 'created_at': created_at}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) expected_put_count += 1 self.assertEqual(cs.container_puts, expected_put_count) def fake_get_object(acct, con, obj, headers, acceptable_statuses): self.assertEqual(headers['X-Newest'], True) self.assertEqual(headers['X-Backend-Storage-Policy-Index'], '0') return (200, {'date': 'date value', 'last-modified': 'last modified value', 'x-timestamp': timestamp.internal, 'other-header': 'other header value', 'etag': '"etagvalue"', 'content-type': 'text/plain; swift_bytes=123'}, iter('contents')) cs.swift.get_object = fake_get_object # Success as everything says it worked, also checks 'date' and # 'last-modified' headers are removed and that 'etag' header is # stripped of double quotes. self.assertTrue(cs.container_sync_row( {'deleted': False, 'name': 'object', 'created_at': timestamp.internal}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) expected_put_count += 1 self.assertEqual(cs.container_puts, expected_put_count) # Success as everything says it worked, also check that PUT # timestamp equals GET timestamp when it is newer than created_at # value. 
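            # Editor's note: in all three successful PUT cases (the two
            # above and the one below) fake_put_object expects
            # 'x-timestamp' to be timestamp.internal (1.2), i.e. the newest
            # timestamp known for the object locally is what gets pushed to
            # the remote end, regardless of how the row's created_at was
            # encoded.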
self.assertTrue(cs.container_sync_row( {'deleted': False, 'name': 'object', 'created_at': '1.1'}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) expected_put_count += 1 self.assertEqual(cs.container_puts, expected_put_count) exc = [] def fake_get_object(acct, con, obj, headers, acceptable_statuses): self.assertEqual(headers['X-Newest'], True) self.assertEqual(headers['X-Backend-Storage-Policy-Index'], '0') exc.append(Exception('test exception')) raise exc[-1] cs.swift.get_object = fake_get_object # Fail due to completely unexpected exception self.assertFalse(cs.container_sync_row( {'deleted': False, 'name': 'object', 'created_at': timestamp.internal}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) self.assertEqual(cs.container_puts, expected_put_count) excepted_failure_count += 1 self.assertEqual(len(exc), 1) self.assertEqual(str(exc[-1]), 'test exception') exc = [] def fake_get_object(acct, con, obj, headers, acceptable_statuses): self.assertEqual(headers['X-Newest'], True) self.assertEqual(headers['X-Backend-Storage-Policy-Index'], '0') exc.append(ClientException('test client exception')) raise exc[-1] cs.swift.get_object = fake_get_object # Fail due to all direct_get_object calls failing self.assertFalse(cs.container_sync_row( {'deleted': False, 'name': 'object', 'created_at': timestamp.internal}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) self.assertEqual(cs.container_puts, expected_put_count) excepted_failure_count += 1 self.assertEqual(len(exc), 1) self.assertEqual(str(exc[-1]), 'test client exception') def fake_get_object(acct, con, obj, headers, acceptable_statuses): self.assertEqual(headers['X-Newest'], True) self.assertEqual(headers['X-Backend-Storage-Policy-Index'], '0') return (200, {'other-header': 'other header value', 'x-timestamp': timestamp.internal, 'etag': '"etagvalue"'}, iter('contents')) def fake_put_object(*args, **kwargs): raise ClientException('test client exception', http_status=401) cs.swift.get_object = fake_get_object sync.put_object = fake_put_object # Fail due to 401 self.assertFalse(cs.container_sync_row( {'deleted': False, 'name': 'object', 'created_at': timestamp.internal}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) self.assertEqual(cs.container_puts, expected_put_count) excepted_failure_count += 1 self.assertEqual(cs.container_failures, excepted_failure_count) self.assertLogMessage('info', 'Unauth') def fake_put_object(*args, **kwargs): raise ClientException('test client exception', http_status=404) sync.put_object = fake_put_object # Fail due to 404 self.assertFalse(cs.container_sync_row( {'deleted': False, 'name': 'object', 'created_at': timestamp.internal}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) self.assertEqual(cs.container_puts, expected_put_count) excepted_failure_count += 1 self.assertEqual(cs.container_failures, excepted_failure_count) self.assertLogMessage('info', 'Not found', 1) def fake_put_object(*args, **kwargs): raise ClientException('test client exception', http_status=503) sync.put_object = fake_put_object # Fail due to 503 
self.assertFalse(cs.container_sync_row( {'deleted': False, 'name': 'object', 'created_at': timestamp.internal}, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), {'account': 'a', 'container': 'c', 'storage_policy_index': 0}, realm, realm_key)) self.assertEqual(cs.container_puts, expected_put_count) excepted_failure_count += 1 self.assertEqual(cs.container_failures, excepted_failure_count) self.assertLogMessage('error', 'ERROR Syncing') # Test the following cases: # remote has the same date and a put doesn't take place # remote has more up to date copy and a put doesn't take place # head_object returns ClientException(404) and a put takes place # head_object returns other ClientException put doesn't take place # and we get failure # head_object returns other Exception put does not take place # and we get failure # remote returns old copy and a put takes place test_row = {'deleted': False, 'name': 'object', 'created_at': timestamp.internal, 'etag': '1111'} test_info = {'account': 'a', 'container': 'c', 'storage_policy_index': 0} actual_puts = [] def fake_put_object(*args, **kwargs): actual_puts.append((args, kwargs)) def fake_head_object(*args, **kwargs): return ({'x-timestamp': '1.2'}, '') sync.put_object = fake_put_object sync.head_object = fake_head_object cs._object_in_remote_container = orig_object_in_rcontainer self.assertTrue(cs.container_sync_row( test_row, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), test_info, realm, realm_key)) # No additional put has taken place self.assertEqual(len(actual_puts), 0) # No additional errors self.assertEqual(cs.container_failures, excepted_failure_count) def fake_head_object(*args, **kwargs): return ({'x-timestamp': '1.3'}, '') sync.head_object = fake_head_object self.assertTrue(cs.container_sync_row( test_row, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), test_info, realm, realm_key)) # No additional put has taken place self.assertEqual(len(actual_puts), 0) # No additional errors self.assertEqual(cs.container_failures, excepted_failure_count) actual_puts = [] def fake_head_object(*args, **kwargs): raise ClientException('test client exception', http_status=404) sync.head_object = fake_head_object self.assertTrue(cs.container_sync_row( test_row, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), test_info, realm, realm_key)) # Additional put has taken place self.assertEqual(len(actual_puts), 1) # No additional errors self.assertEqual(cs.container_failures, excepted_failure_count) def fake_head_object(*args, **kwargs): raise ClientException('test client exception', http_status=401) sync.head_object = fake_head_object self.assertFalse(cs.container_sync_row( test_row, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), test_info, realm, realm_key)) # No additional put has taken place, failures increased self.assertEqual(len(actual_puts), 1) excepted_failure_count += 1 self.assertEqual(cs.container_failures, excepted_failure_count) def fake_head_object(*args, **kwargs): raise Exception() sync.head_object = fake_head_object self.assertFalse(cs.container_sync_row( test_row, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), test_info, realm, realm_key)) # No additional put has taken place, failures increased self.assertEqual(len(actual_puts), 1) excepted_failure_count += 1 self.assertEqual(cs.container_failures, excepted_failure_count) def fake_head_object(*args, **kwargs): return ({'x-timestamp': '1.1'}, '') sync.head_object = fake_head_object self.assertTrue(cs.container_sync_row( 
test_row, 'http://sync/to/path', 'key', FakeContainerBroker('broker'), test_info, realm, realm_key)) # Additional put has taken place self.assertEqual(len(actual_puts), 2) # No additional errors self.assertEqual(cs.container_failures, excepted_failure_count) finally: sync.uuid = orig_uuid sync.put_object = orig_put_object sync.head_object = orig_head_object def test_select_http_proxy_None(self): with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync( {'sync_proxy': ''}, container_ring=FakeRing()) self.assertEqual(cs.select_http_proxy(), None) def test_select_http_proxy_one(self): with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync( {'sync_proxy': 'http://one'}, container_ring=FakeRing()) self.assertEqual(cs.select_http_proxy(), 'http://one') def test_select_http_proxy_multiple(self): with mock.patch('swift.container.sync.InternalClient'): cs = sync.ContainerSync( {'sync_proxy': 'http://one,http://two,http://three'}, container_ring=FakeRing()) self.assertEqual( set(cs.http_proxies), set(['http://one', 'http://two', 'http://three'])) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/container/test_replicator.py0000664000567000056710000015243113024044352023516 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
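
# The tests in this module largely follow one pattern: build "local" and
# "remote" ContainerBroker fixtures on different node indexes using the
# helpers inherited from test_db_replicator.TestReplicatorSync, seed them
# with rows and/or metadata, drive replication either directly through
# ContainerReplicator._repl_to_node() or via the _run_once() helper, and
# finally assert on the daemon.stats counters ('no_change', 'rsync',
# 'diff', 'diff_capped', 'remote_merge', 'failure') and on the resulting
# broker info.  A rough sketch of that shape (illustrative only, not an
# additional test case):
#
#     broker = self._get_broker('a', 'c', node_index=0)
#     broker.initialize(time.time(), POLICIES.default.idx)
#     part, node = self._get_broker_part_node(broker)
#     daemon = replicator.ContainerReplicator({})
#     info = broker.get_replication_info()
#     success = daemon._repl_to_node(node, broker, part, info)
#     self.assertTrue(success)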
import os import time import shutil import itertools import unittest import mock import random import sqlite3 from swift.common import db_replicator from swift.container import replicator, backend, server, sync_store from swift.container.reconciler import ( MISPLACED_OBJECTS_ACCOUNT, get_reconciler_container_name) from swift.common.utils import Timestamp, encode_timestamps from swift.common.storage_policy import POLICIES from test.unit.common import test_db_replicator from test.unit import patch_policies, make_timestamp_iter, FakeLogger from contextlib import contextmanager @patch_policies class TestReplicatorSync(test_db_replicator.TestReplicatorSync): backend = backend.ContainerBroker datadir = server.DATADIR replicator_daemon = replicator.ContainerReplicator replicator_rpc = replicator.ContainerReplicatorRpc def test_report_up_to_date(self): broker = self._get_broker('a', 'c', node_index=0) broker.initialize(Timestamp(1).internal, int(POLICIES.default)) info = broker.get_info() broker.reported(info['put_timestamp'], info['delete_timestamp'], info['object_count'], info['bytes_used']) full_info = broker.get_replication_info() expected_info = {'put_timestamp': Timestamp(1).internal, 'delete_timestamp': '0', 'count': 0, 'bytes_used': 0, 'reported_put_timestamp': Timestamp(1).internal, 'reported_delete_timestamp': '0', 'reported_object_count': 0, 'reported_bytes_used': 0} for key, value in expected_info.items(): msg = 'expected value for %r, %r != %r' % ( key, full_info[key], value) self.assertEqual(full_info[key], value, msg) repl = replicator.ContainerReplicator({}) self.assertTrue(repl.report_up_to_date(full_info)) full_info['delete_timestamp'] = Timestamp(2).internal self.assertFalse(repl.report_up_to_date(full_info)) full_info['reported_delete_timestamp'] = Timestamp(2).internal self.assertTrue(repl.report_up_to_date(full_info)) full_info['count'] = 1 self.assertFalse(repl.report_up_to_date(full_info)) full_info['reported_object_count'] = 1 self.assertTrue(repl.report_up_to_date(full_info)) full_info['bytes_used'] = 1 self.assertFalse(repl.report_up_to_date(full_info)) full_info['reported_bytes_used'] = 1 self.assertTrue(repl.report_up_to_date(full_info)) full_info['put_timestamp'] = Timestamp(3).internal self.assertFalse(repl.report_up_to_date(full_info)) full_info['reported_put_timestamp'] = Timestamp(3).internal self.assertTrue(repl.report_up_to_date(full_info)) def test_sync_remote_in_sync(self): # setup a local container broker = self._get_broker('a', 'c', node_index=0) put_timestamp = time.time() broker.initialize(put_timestamp, POLICIES.default.idx) # "replicate" to same database node = {'device': 'sdb', 'replication_ip': '127.0.0.1'} daemon = replicator.ContainerReplicator({}) # replicate part, node = self._get_broker_part_node(broker) info = broker.get_replication_info() success = daemon._repl_to_node(node, broker, part, info) # nothing to do self.assertTrue(success) self.assertEqual(1, daemon.stats['no_change']) def test_sync_remote_with_timings(self): ts_iter = make_timestamp_iter() # setup a local container broker = self._get_broker('a', 'c', node_index=0) put_timestamp = next(ts_iter) broker.initialize(put_timestamp.internal, POLICIES.default.idx) broker.update_metadata( {'x-container-meta-test': ('foo', put_timestamp.internal)}) # setup remote container remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(next(ts_iter).internal, POLICIES.default.idx) timestamp = next(ts_iter) for db in (broker, remote_broker): db.put_object( '/a/c/o', 
timestamp.internal, 0, 'content-type', 'etag', storage_policy_index=db.storage_policy_index) # replicate daemon = replicator.ContainerReplicator({}) part, node = self._get_broker_part_node(remote_broker) info = broker.get_replication_info() with mock.patch.object(db_replicator, 'DEBUG_TIMINGS_THRESHOLD', -1): success = daemon._repl_to_node(node, broker, part, info) # nothing to do self.assertTrue(success) self.assertEqual(1, daemon.stats['no_change']) expected_timings = ('info', 'update_metadata', 'merge_timestamps', 'get_sync', 'merge_syncs') debug_lines = self.rpc.logger.logger.get_lines_for_level('debug') self.assertEqual(len(expected_timings), len(debug_lines), 'Expected %s debug lines but only got %s: %s' % (len(expected_timings), len(debug_lines), debug_lines)) for metric in expected_timings: expected = 'replicator-rpc-sync time for %s:' % metric self.assertTrue(any(expected in line for line in debug_lines), 'debug timing %r was not in %r' % ( expected, debug_lines)) def test_sync_remote_missing(self): broker = self._get_broker('a', 'c', node_index=0) put_timestamp = time.time() broker.initialize(put_timestamp, POLICIES.default.idx) # "replicate" part, node = self._get_broker_part_node(broker) daemon = self._run_once(node) # complete rsync to all other nodes self.assertEqual(2, daemon.stats['rsync']) for i in range(1, 3): remote_broker = self._get_broker('a', 'c', node_index=i) self.assertTrue(os.path.exists(remote_broker.db_file)) remote_info = remote_broker.get_info() local_info = self._get_broker( 'a', 'c', node_index=0).get_info() for k, v in local_info.items(): if k == 'id': continue self.assertEqual(remote_info[k], v, "mismatch remote %s %r != %r" % ( k, remote_info[k], v)) def test_rsync_failure(self): broker = self._get_broker('a', 'c', node_index=0) put_timestamp = time.time() broker.initialize(put_timestamp, POLICIES.default.idx) # "replicate" to different device daemon = replicator.ContainerReplicator({}) def _rsync_file(*args, **kwargs): return False daemon._rsync_file = _rsync_file # replicate part, local_node = self._get_broker_part_node(broker) node = random.choice([n for n in self._ring.devs if n['id'] != local_node['id']]) info = broker.get_replication_info() success = daemon._repl_to_node(node, broker, part, info) self.assertFalse(success) def test_sync_remote_missing_most_rows(self): put_timestamp = time.time() # create "local" broker broker = self._get_broker('a', 'c', node_index=0) broker.initialize(put_timestamp, POLICIES.default.idx) # create "remote" broker remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(put_timestamp, POLICIES.default.idx) # add a row to "local" db broker.put_object('/a/c/o', time.time(), 0, 'content-type', 'etag', storage_policy_index=broker.storage_policy_index) # replicate node = {'device': 'sdc', 'replication_ip': '127.0.0.1'} daemon = replicator.ContainerReplicator({'per_diff': 1}) def _rsync_file(db_file, remote_file, **kwargs): remote_server, remote_path = remote_file.split('/', 1) dest_path = os.path.join(self.root, remote_path) shutil.copy(db_file, dest_path) return True daemon._rsync_file = _rsync_file part, node = self._get_broker_part_node(remote_broker) info = broker.get_replication_info() success = daemon._repl_to_node(node, broker, part, info) self.assertTrue(success) # row merge self.assertEqual(1, daemon.stats['remote_merge']) local_info = self._get_broker( 'a', 'c', node_index=0).get_info() remote_info = self._get_broker( 'a', 'c', node_index=1).get_info() for k, v in local_info.items(): if 
k == 'id': continue self.assertEqual(remote_info[k], v, "mismatch remote %s %r != %r" % ( k, remote_info[k], v)) def test_sync_remote_missing_one_rows(self): put_timestamp = time.time() # create "local" broker broker = self._get_broker('a', 'c', node_index=0) broker.initialize(put_timestamp, POLICIES.default.idx) # create "remote" broker remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(put_timestamp, POLICIES.default.idx) # add some rows to both db for i in range(10): put_timestamp = time.time() for db in (broker, remote_broker): path = '/a/c/o_%s' % i db.put_object(path, put_timestamp, 0, 'content-type', 'etag', storage_policy_index=db.storage_policy_index) # now a row to the "local" broker only broker.put_object('/a/c/o_missing', time.time(), 0, 'content-type', 'etag', storage_policy_index=broker.storage_policy_index) # replicate daemon = replicator.ContainerReplicator({}) part, node = self._get_broker_part_node(remote_broker) info = broker.get_replication_info() success = daemon._repl_to_node(node, broker, part, info) self.assertTrue(success) # row merge self.assertEqual(1, daemon.stats['diff']) local_info = self._get_broker( 'a', 'c', node_index=0).get_info() remote_info = self._get_broker( 'a', 'c', node_index=1).get_info() for k, v in local_info.items(): if k == 'id': continue self.assertEqual(remote_info[k], v, "mismatch remote %s %r != %r" % ( k, remote_info[k], v)) def test_sync_remote_can_not_keep_up(self): put_timestamp = time.time() # create "local" broker broker = self._get_broker('a', 'c', node_index=0) broker.initialize(put_timestamp, POLICIES.default.idx) # create "remote" broker remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(put_timestamp, POLICIES.default.idx) # add some rows to both db's for i in range(10): put_timestamp = time.time() for db in (broker, remote_broker): obj_name = 'o_%s' % i db.put_object(obj_name, put_timestamp, 0, 'content-type', 'etag', storage_policy_index=db.storage_policy_index) # setup REPLICATE callback to simulate adding rows during merge_items missing_counter = itertools.count() def put_more_objects(op, *args): if op != 'merge_items': return path = '/a/c/o_missing_%s' % next(missing_counter) broker.put_object(path, time.time(), 0, 'content-type', 'etag', storage_policy_index=db.storage_policy_index) test_db_replicator.FakeReplConnection = \ test_db_replicator.attach_fake_replication_rpc( self.rpc, replicate_hook=put_more_objects) db_replicator.ReplConnection = test_db_replicator.FakeReplConnection # and add one extra to local db to trigger merge_items put_more_objects('merge_items') # limit number of times we'll call merge_items daemon = replicator.ContainerReplicator({'max_diffs': 10}) # replicate part, node = self._get_broker_part_node(remote_broker) info = broker.get_replication_info() success = daemon._repl_to_node(node, broker, part, info) self.assertFalse(success) # back off on the PUTs during replication... 
FakeReplConnection = test_db_replicator.attach_fake_replication_rpc( self.rpc, replicate_hook=None) db_replicator.ReplConnection = FakeReplConnection # retry replication info = broker.get_replication_info() success = daemon._repl_to_node(node, broker, part, info) self.assertTrue(success) # row merge self.assertEqual(2, daemon.stats['diff']) self.assertEqual(1, daemon.stats['diff_capped']) local_info = self._get_broker( 'a', 'c', node_index=0).get_info() remote_info = self._get_broker( 'a', 'c', node_index=1).get_info() for k, v in local_info.items(): if k == 'id': continue self.assertEqual(remote_info[k], v, "mismatch remote %s %r != %r" % ( k, remote_info[k], v)) def test_diff_capped_sync(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) put_timestamp = next(ts) # start off with with a local db that is way behind broker = self._get_broker('a', 'c', node_index=0) broker.initialize(put_timestamp, POLICIES.default.idx) for i in range(50): broker.put_object( 'o%s' % i, next(ts), 0, 'content-type-old', 'etag', storage_policy_index=broker.storage_policy_index) # remote primary db has all the new bits... remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(put_timestamp, POLICIES.default.idx) for i in range(100): remote_broker.put_object( 'o%s' % i, next(ts), 0, 'content-type-new', 'etag', storage_policy_index=remote_broker.storage_policy_index) # except there's *one* tiny thing in our local broker that's newer broker.put_object( 'o101', next(ts), 0, 'content-type-new', 'etag', storage_policy_index=broker.storage_policy_index) # setup daemon with smaller per_diff and max_diffs part, node = self._get_broker_part_node(broker) daemon = self._get_daemon(node, conf_updates={'per_diff': 10, 'max_diffs': 3}) self.assertEqual(daemon.per_diff, 10) self.assertEqual(daemon.max_diffs, 3) # run once and verify diff capped self._run_once(node, daemon=daemon) self.assertEqual(1, daemon.stats['diff']) self.assertEqual(1, daemon.stats['diff_capped']) # run again and verify fully synced self._run_once(node, daemon=daemon) self.assertEqual(1, daemon.stats['diff']) self.assertEqual(0, daemon.stats['diff_capped']) # now that we're synced the new item should be in remote db remote_names = set() for item in remote_broker.list_objects_iter(500, '', '', '', ''): name, ts, size, content_type, etag = item remote_names.add(name) self.assertEqual(content_type, 'content-type-new') self.assertTrue('o101' in remote_names) self.assertEqual(len(remote_names), 101) self.assertEqual(remote_broker.get_info()['object_count'], 101) def test_sync_status_change(self): # setup a local container broker = self._get_broker('a', 'c', node_index=0) put_timestamp = time.time() broker.initialize(put_timestamp, POLICIES.default.idx) # setup remote container remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(put_timestamp, POLICIES.default.idx) # delete local container broker.delete_db(time.time()) # replicate daemon = replicator.ContainerReplicator({}) part, node = self._get_broker_part_node(remote_broker) info = broker.get_replication_info() success = daemon._repl_to_node(node, broker, part, info) # nothing to do self.assertTrue(success) self.assertEqual(1, daemon.stats['no_change']) # status in sync self.assertTrue(remote_broker.is_deleted()) info = broker.get_info() remote_info = remote_broker.get_info() self.assertTrue(Timestamp(remote_info['status_changed_at']) > Timestamp(remote_info['put_timestamp']), 'remote status_changed_at (%s) is not ' 'greater 
than put_timestamp (%s)' % ( remote_info['status_changed_at'], remote_info['put_timestamp'])) self.assertTrue(Timestamp(remote_info['status_changed_at']) > Timestamp(info['status_changed_at']), 'remote status_changed_at (%s) is not ' 'greater than local status_changed_at (%s)' % ( remote_info['status_changed_at'], info['status_changed_at'])) @contextmanager def _wrap_merge_timestamps(self, broker, calls): def fake_merge_timestamps(*args, **kwargs): calls.append(args[0]) orig_merge_timestamps(*args, **kwargs) orig_merge_timestamps = broker.merge_timestamps broker.merge_timestamps = fake_merge_timestamps try: yield True finally: broker.merge_timestamps = orig_merge_timestamps def test_sync_merge_timestamps(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) # setup a local container broker = self._get_broker('a', 'c', node_index=0) put_timestamp = next(ts) broker.initialize(put_timestamp, POLICIES.default.idx) # setup remote container remote_broker = self._get_broker('a', 'c', node_index=1) remote_put_timestamp = next(ts) remote_broker.initialize(remote_put_timestamp, POLICIES.default.idx) # replicate, expect call to merge_timestamps on remote and local daemon = replicator.ContainerReplicator({}) part, node = self._get_broker_part_node(remote_broker) info = broker.get_replication_info() local_calls = [] remote_calls = [] with self._wrap_merge_timestamps(broker, local_calls): with self._wrap_merge_timestamps(broker, remote_calls): success = daemon._repl_to_node(node, broker, part, info) self.assertTrue(success) self.assertEqual(1, len(remote_calls)) self.assertEqual(1, len(local_calls)) self.assertEqual(remote_put_timestamp, broker.get_info()['put_timestamp']) self.assertEqual(remote_put_timestamp, remote_broker.get_info()['put_timestamp']) # replicate again, no changes so expect no calls to merge_timestamps info = broker.get_replication_info() local_calls = [] remote_calls = [] with self._wrap_merge_timestamps(broker, local_calls): with self._wrap_merge_timestamps(broker, remote_calls): success = daemon._repl_to_node(node, broker, part, info) self.assertTrue(success) self.assertEqual(0, len(remote_calls)) self.assertEqual(0, len(local_calls)) self.assertEqual(remote_put_timestamp, broker.get_info()['put_timestamp']) self.assertEqual(remote_put_timestamp, remote_broker.get_info()['put_timestamp']) def test_sync_bogus_db_quarantines(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) policy = random.choice(list(POLICIES)) # create "local" broker local_broker = self._get_broker('a', 'c', node_index=0) local_broker.initialize(next(ts), policy.idx) # create "remote" broker remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(next(ts), policy.idx) db_path = local_broker.db_file self.assertTrue(os.path.exists(db_path)) # sanity check old_inode = os.stat(db_path).st_ino _orig_get_info = backend.ContainerBroker.get_info def fail_like_bad_db(broker): if broker.db_file == local_broker.db_file: raise sqlite3.OperationalError("no such table: container_info") else: return _orig_get_info(broker) part, node = self._get_broker_part_node(remote_broker) with mock.patch('swift.container.backend.ContainerBroker.get_info', fail_like_bad_db): # Have the remote node replicate to local; local should see its # corrupt DB, quarantine it, and act like the DB wasn't ever there # in the first place. 
daemon = self._run_once(node) self.assertTrue(os.path.exists(db_path)) # Make sure we didn't just keep the old DB, but quarantined it and # made a fresh copy. new_inode = os.stat(db_path).st_ino self.assertNotEqual(old_inode, new_inode) self.assertEqual(daemon.stats['failure'], 0) def _replication_scenarios(self, *scenarios, **kwargs): remote_wins = kwargs.get('remote_wins', False) # these tests are duplicated because of the differences in replication # when row counts cause full rsync vs. merge scenarios = scenarios or ( 'no_row', 'local_row', 'remote_row', 'both_rows') for scenario_name in scenarios: ts = itertools.count(int(time.time())) policy = random.choice(list(POLICIES)) remote_policy = random.choice( [p for p in POLICIES if p is not policy]) broker = self._get_broker('a', 'c', node_index=0) remote_broker = self._get_broker('a', 'c', node_index=1) yield ts, policy, remote_policy, broker, remote_broker # variations on different replication scenarios variations = { 'no_row': (), 'local_row': (broker,), 'remote_row': (remote_broker,), 'both_rows': (broker, remote_broker), } dbs = variations[scenario_name] obj_ts = next(ts) for db in dbs: db.put_object('/a/c/o', obj_ts, 0, 'content-type', 'etag', storage_policy_index=db.storage_policy_index) # replicate part, node = self._get_broker_part_node(broker) daemon = self._run_once(node) self.assertEqual(0, daemon.stats['failure']) # in sync local_info = self._get_broker( 'a', 'c', node_index=0).get_info() remote_info = self._get_broker( 'a', 'c', node_index=1).get_info() if remote_wins: expected = remote_policy.idx err = 'local policy did not change to match remote ' \ 'for replication row scenario %s' % scenario_name else: expected = policy.idx err = 'local policy changed to match remote ' \ 'for replication row scenario %s' % scenario_name self.assertEqual(local_info['storage_policy_index'], expected, err) self.assertEqual(remote_info['storage_policy_index'], local_info['storage_policy_index']) test_db_replicator.TestReplicatorSync.tearDown(self) test_db_replicator.TestReplicatorSync.setUp(self) def test_sync_local_create_policy_over_newer_remote_create(self): for setup in self._replication_scenarios(): ts, policy, remote_policy, broker, remote_broker = setup # create "local" broker broker.initialize(next(ts), policy.idx) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) def test_sync_local_create_policy_over_newer_remote_delete(self): for setup in self._replication_scenarios(): ts, policy, remote_policy, broker, remote_broker = setup # create older "local" broker broker.initialize(next(ts), policy.idx) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # delete "remote" broker remote_broker.delete_db(next(ts)) def test_sync_local_create_policy_over_older_remote_delete(self): # remote_row & both_rows cases are covered by # "test_sync_remote_half_delete_policy_over_newer_local_create" for setup in self._replication_scenarios( 'no_row', 'local_row'): ts, policy, remote_policy, broker, remote_broker = setup # create older "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # delete older "remote" broker remote_broker.delete_db(next(ts)) # create "local" broker broker.initialize(next(ts), policy.idx) def test_sync_local_half_delete_policy_over_newer_remote_create(self): # no_row & remote_row cases are covered by # "test_sync_remote_create_policy_over_older_local_delete" for setup in self._replication_scenarios('local_row', 'both_rows'): ts, policy, remote_policy, broker, 
remote_broker = setup # create older "local" broker broker.initialize(next(ts), policy.idx) # half delete older "local" broker broker.delete_db(next(ts)) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) def test_sync_local_recreate_policy_over_newer_remote_create(self): for setup in self._replication_scenarios(): ts, policy, remote_policy, broker, remote_broker = setup # create "local" broker broker.initialize(next(ts), policy.idx) # older recreate "local" broker broker.delete_db(next(ts)) recreate_timestamp = next(ts) broker.update_put_timestamp(recreate_timestamp) broker.update_status_changed_at(recreate_timestamp) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) def test_sync_local_recreate_policy_over_older_remote_create(self): for setup in self._replication_scenarios(): ts, policy, remote_policy, broker, remote_broker = setup # create older "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # create "local" broker broker.initialize(next(ts), policy.idx) # recreate "local" broker broker.delete_db(next(ts)) recreate_timestamp = next(ts) broker.update_put_timestamp(recreate_timestamp) broker.update_status_changed_at(recreate_timestamp) def test_sync_local_recreate_policy_over_newer_remote_delete(self): for setup in self._replication_scenarios(): ts, policy, remote_policy, broker, remote_broker = setup # create "local" broker broker.initialize(next(ts), policy.idx) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # recreate "local" broker broker.delete_db(next(ts)) recreate_timestamp = next(ts) broker.update_put_timestamp(recreate_timestamp) broker.update_status_changed_at(recreate_timestamp) # older delete "remote" broker remote_broker.delete_db(next(ts)) def test_sync_local_recreate_policy_over_older_remote_delete(self): for setup in self._replication_scenarios(): ts, policy, remote_policy, broker, remote_broker = setup # create "local" broker broker.initialize(next(ts), policy.idx) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # older delete "remote" broker remote_broker.delete_db(next(ts)) # recreate "local" broker broker.delete_db(next(ts)) recreate_timestamp = next(ts) broker.update_put_timestamp(recreate_timestamp) broker.update_status_changed_at(recreate_timestamp) def test_sync_local_recreate_policy_over_older_remote_recreate(self): for setup in self._replication_scenarios(): ts, policy, remote_policy, broker, remote_broker = setup # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # create "local" broker broker.initialize(next(ts), policy.idx) # older recreate "remote" broker remote_broker.delete_db(next(ts)) remote_recreate_timestamp = next(ts) remote_broker.update_put_timestamp(remote_recreate_timestamp) remote_broker.update_status_changed_at(remote_recreate_timestamp) # recreate "local" broker broker.delete_db(next(ts)) local_recreate_timestamp = next(ts) broker.update_put_timestamp(local_recreate_timestamp) broker.update_status_changed_at(local_recreate_timestamp) def test_sync_remote_create_policy_over_newer_local_create(self): for setup in self._replication_scenarios(remote_wins=True): ts, policy, remote_policy, broker, remote_broker = setup # create older "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # create "local" broker broker.initialize(next(ts), policy.idx) def test_sync_remote_create_policy_over_newer_local_delete(self): for setup in 
self._replication_scenarios(remote_wins=True): ts, policy, remote_policy, broker, remote_broker = setup # create older "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # create "local" broker broker.initialize(next(ts), policy.idx) # delete "local" broker broker.delete_db(next(ts)) def test_sync_remote_create_policy_over_older_local_delete(self): # local_row & both_rows cases are covered by # "test_sync_local_half_delete_policy_over_newer_remote_create" for setup in self._replication_scenarios( 'no_row', 'remote_row', remote_wins=True): ts, policy, remote_policy, broker, remote_broker = setup # create older "local" broker broker.initialize(next(ts), policy.idx) # delete older "local" broker broker.delete_db(next(ts)) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) def test_sync_remote_half_delete_policy_over_newer_local_create(self): # no_row & both_rows cases are covered by # "test_sync_local_create_policy_over_older_remote_delete" for setup in self._replication_scenarios('remote_row', 'both_rows', remote_wins=True): ts, policy, remote_policy, broker, remote_broker = setup # create older "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # half delete older "remote" broker remote_broker.delete_db(next(ts)) # create "local" broker broker.initialize(next(ts), policy.idx) def test_sync_remote_recreate_policy_over_newer_local_create(self): for setup in self._replication_scenarios(remote_wins=True): ts, policy, remote_policy, broker, remote_broker = setup # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # older recreate "remote" broker remote_broker.delete_db(next(ts)) recreate_timestamp = next(ts) remote_broker.update_put_timestamp(recreate_timestamp) remote_broker.update_status_changed_at(recreate_timestamp) # create "local" broker broker.initialize(next(ts), policy.idx) def test_sync_remote_recreate_policy_over_older_local_create(self): for setup in self._replication_scenarios(remote_wins=True): ts, policy, remote_policy, broker, remote_broker = setup # create older "local" broker broker.initialize(next(ts), policy.idx) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # recreate "remote" broker remote_broker.delete_db(next(ts)) recreate_timestamp = next(ts) remote_broker.update_put_timestamp(recreate_timestamp) remote_broker.update_status_changed_at(recreate_timestamp) def test_sync_remote_recreate_policy_over_newer_local_delete(self): for setup in self._replication_scenarios(remote_wins=True): ts, policy, remote_policy, broker, remote_broker = setup # create "local" broker broker.initialize(next(ts), policy.idx) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # recreate "remote" broker remote_broker.delete_db(next(ts)) remote_recreate_timestamp = next(ts) remote_broker.update_put_timestamp(remote_recreate_timestamp) remote_broker.update_status_changed_at(remote_recreate_timestamp) # older delete "local" broker broker.delete_db(next(ts)) def test_sync_remote_recreate_policy_over_older_local_delete(self): for setup in self._replication_scenarios(remote_wins=True): ts, policy, remote_policy, broker, remote_broker = setup # create "local" broker broker.initialize(next(ts), policy.idx) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # older delete "local" broker broker.delete_db(next(ts)) # recreate "remote" broker remote_broker.delete_db(next(ts)) remote_recreate_timestamp = next(ts) 
remote_broker.update_put_timestamp(remote_recreate_timestamp) remote_broker.update_status_changed_at(remote_recreate_timestamp) def test_sync_remote_recreate_policy_over_older_local_recreate(self): for setup in self._replication_scenarios(remote_wins=True): ts, policy, remote_policy, broker, remote_broker = setup # create older "local" broker broker.initialize(next(ts), policy.idx) # create "remote" broker remote_broker.initialize(next(ts), remote_policy.idx) # older recreate "local" broker broker.delete_db(next(ts)) local_recreate_timestamp = next(ts) broker.update_put_timestamp(local_recreate_timestamp) broker.update_status_changed_at(local_recreate_timestamp) # recreate "remote" broker remote_broker.delete_db(next(ts)) remote_recreate_timestamp = next(ts) remote_broker.update_put_timestamp(remote_recreate_timestamp) remote_broker.update_status_changed_at(remote_recreate_timestamp) def test_sync_to_remote_with_misplaced(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) # create "local" broker policy = random.choice(list(POLICIES)) broker = self._get_broker('a', 'c', node_index=0) broker.initialize(next(ts), policy.idx) # create "remote" broker remote_policy = random.choice([p for p in POLICIES if p is not policy]) remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(next(ts), remote_policy.idx) # add misplaced row to remote_broker remote_broker.put_object( '/a/c/o', next(ts), 0, 'content-type', 'etag', storage_policy_index=remote_broker.storage_policy_index) # since this row matches policy index or remote, it shows up in count self.assertEqual(remote_broker.get_info()['object_count'], 1) self.assertEqual([], remote_broker.get_misplaced_since(-1, 1)) # replicate part, node = self._get_broker_part_node(broker) daemon = self._run_once(node) # since our local broker has no rows to push it logs as no_change self.assertEqual(1, daemon.stats['no_change']) self.assertEqual(0, broker.get_info()['object_count']) # remote broker updates it's policy index; this makes the remote # broker's object count change info = remote_broker.get_info() expectations = { 'object_count': 0, 'storage_policy_index': policy.idx, } for key, value in expectations.items(): self.assertEqual(info[key], value) # but it also knows those objects are misplaced now misplaced = remote_broker.get_misplaced_since(-1, 100) self.assertEqual(len(misplaced), 1) # we also pushed out to node 3 with rsync self.assertEqual(1, daemon.stats['rsync']) third_broker = self._get_broker('a', 'c', node_index=2) info = third_broker.get_info() for key, value in expectations.items(): self.assertEqual(info[key], value) def test_misplaced_rows_replicate_and_enqueue(self): # force all timestamps to fall in same hour ts = (Timestamp(t) for t in itertools.count(int(time.time()) // 3600 * 3600)) policy = random.choice(list(POLICIES)) broker = self._get_broker('a', 'c', node_index=0) broker.initialize(next(ts).internal, policy.idx) remote_policy = random.choice([p for p in POLICIES if p is not policy]) remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(next(ts).internal, remote_policy.idx) # add a misplaced row to *local* broker obj_put_timestamp = next(ts).internal broker.put_object( 'o', obj_put_timestamp, 0, 'content-type', 'etag', storage_policy_index=remote_policy.idx) misplaced = broker.get_misplaced_since(-1, 10) self.assertEqual(len(misplaced), 1) # since this row is misplaced it doesn't show up in count self.assertEqual(broker.get_info()['object_count'], 0) # 
add another misplaced row to *local* broker with composite timestamp ts_data = next(ts) ts_ctype = next(ts) ts_meta = next(ts) broker.put_object( 'o2', ts_data.internal, 0, 'content-type', 'etag', storage_policy_index=remote_policy.idx, ctype_timestamp=ts_ctype.internal, meta_timestamp=ts_meta.internal) misplaced = broker.get_misplaced_since(-1, 10) self.assertEqual(len(misplaced), 2) # since this row is misplaced it doesn't show up in count self.assertEqual(broker.get_info()['object_count'], 0) # replicate part, node = self._get_broker_part_node(broker) daemon = self._run_once(node) # push to remote, and third node was missing (also maybe reconciler) self.assertTrue(2 < daemon.stats['rsync'] <= 3, daemon.stats['rsync']) # grab the rsynced instance of remote_broker remote_broker = self._get_broker('a', 'c', node_index=1) # remote has misplaced rows too now misplaced = remote_broker.get_misplaced_since(-1, 10) self.assertEqual(len(misplaced), 2) # and the correct policy_index and object_count info = remote_broker.get_info() expectations = { 'object_count': 0, 'storage_policy_index': policy.idx, } for key, value in expectations.items(): self.assertEqual(info[key], value) # and we should have also enqueued these rows in a single reconciler, # since we forced the object timestamps to be in the same hour. reconciler = daemon.get_reconciler_broker(misplaced[0]['created_at']) # but it may not be on the same node as us anymore though... reconciler = self._get_broker(reconciler.account, reconciler.container, node_index=0) self.assertEqual(reconciler.get_info()['object_count'], 2) objects = reconciler.list_objects_iter( 10, '', None, None, None, None, storage_policy_index=0) self.assertEqual(len(objects), 2) expected = ('%s:/a/c/o' % remote_policy.idx, obj_put_timestamp, 0, 'application/x-put', obj_put_timestamp) self.assertEqual(objects[0], expected) # the second object's listing has ts_meta as its last modified time # but its full composite timestamp is in the hash field. 
expected = ('%s:/a/c/o2' % remote_policy.idx, ts_meta.internal, 0, 'application/x-put', encode_timestamps(ts_data, ts_ctype, ts_meta)) self.assertEqual(objects[1], expected) # having safely enqueued to the reconciler we can advance # our sync pointer self.assertEqual(broker.get_reconciler_sync(), 2) def test_multiple_out_sync_reconciler_enqueue_normalize(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) policy = random.choice(list(POLICIES)) broker = self._get_broker('a', 'c', node_index=0) broker.initialize(next(ts), policy.idx) remote_policy = random.choice([p for p in POLICIES if p is not policy]) remote_broker = self._get_broker('a', 'c', node_index=1) remote_broker.initialize(next(ts), remote_policy.idx) # add some rows to brokers for db in (broker, remote_broker): for p in (policy, remote_policy): db.put_object('o-%s' % p.name, next(ts), 0, 'content-type', 'etag', storage_policy_index=p.idx) db._commit_puts() expected_policy_stats = { policy.idx: {'object_count': 1, 'bytes_used': 0}, remote_policy.idx: {'object_count': 1, 'bytes_used': 0}, } for db in (broker, remote_broker): policy_stats = db.get_policy_stats() self.assertEqual(policy_stats, expected_policy_stats) # each db has 2 rows, 4 total all_items = set() for db in (broker, remote_broker): items = db.get_items_since(-1, 4) all_items.update( (item['name'], item['created_at']) for item in items) self.assertEqual(4, len(all_items)) # replicate both ways part, node = self._get_broker_part_node(broker) self._run_once(node) part, node = self._get_broker_part_node(remote_broker) self._run_once(node) # only the latest timestamps should survive most_recent_items = {} for name, timestamp in all_items: most_recent_items[name] = max( timestamp, most_recent_items.get(name, -1)) self.assertEqual(2, len(most_recent_items)) for db in (broker, remote_broker): items = db.get_items_since(-1, 4) self.assertEqual(len(items), len(most_recent_items)) for item in items: self.assertEqual(most_recent_items[item['name']], item['created_at']) # and the reconciler also collapses updates reconciler_containers = set() for item in all_items: _name, timestamp = item reconciler_containers.add( get_reconciler_container_name(timestamp)) reconciler_items = set() for reconciler_container in reconciler_containers: for node_index in range(3): reconciler = self._get_broker(MISPLACED_OBJECTS_ACCOUNT, reconciler_container, node_index=node_index) items = reconciler.get_items_since(-1, 4) reconciler_items.update( (item['name'], item['created_at']) for item in items) # they can't *both* be in the wrong policy ;) self.assertEqual(1, len(reconciler_items)) for reconciler_name, timestamp in reconciler_items: _policy_index, path = reconciler_name.split(':', 1) a, c, name = path.lstrip('/').split('/') self.assertEqual(most_recent_items[name], timestamp) @contextmanager def _wrap_update_reconciler_sync(self, broker, calls): def wrapper_function(*args, **kwargs): calls.append(args) orig_function(*args, **kwargs) orig_function = broker.update_reconciler_sync broker.update_reconciler_sync = wrapper_function try: yield True finally: broker.update_reconciler_sync = orig_function def test_post_replicate_hook(self): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) broker = self._get_broker('a', 'c', node_index=0) broker.initialize(next(ts), 0) broker.put_object('foo', next(ts), 0, 'text/plain', 'xyz', deleted=0, storage_policy_index=0) info = broker.get_replication_info() self.assertEqual(1, info['max_row']) self.assertEqual(-1, 
broker.get_reconciler_sync()) daemon = replicator.ContainerReplicator({}) calls = [] with self._wrap_update_reconciler_sync(broker, calls): daemon._post_replicate_hook(broker, info, []) self.assertEqual(1, len(calls)) # repeated call to _post_replicate_hook with no change to info # should not call update_reconciler_sync calls = [] with self._wrap_update_reconciler_sync(broker, calls): daemon._post_replicate_hook(broker, info, []) self.assertEqual(0, len(calls)) def test_update_sync_store_exception(self): class FakeContainerSyncStore(object): def update_sync_store(self, broker): raise OSError(1, '1') logger = FakeLogger() daemon = replicator.ContainerReplicator({}, logger) daemon.sync_store = FakeContainerSyncStore() ts_iter = make_timestamp_iter() broker = self._get_broker('a', 'c', node_index=0) timestamp = next(ts_iter) broker.initialize(timestamp.internal, POLICIES.default.idx) info = broker.get_replication_info() daemon._post_replicate_hook(broker, info, []) log_lines = logger.get_lines_for_level('error') self.assertEqual(1, len(log_lines)) self.assertIn('Failed to update sync_store', log_lines[0]) def test_update_sync_store(self): klass = 'swift.container.sync_store.ContainerSyncStore' daemon = replicator.ContainerReplicator({}) daemon.sync_store = sync_store.ContainerSyncStore( daemon.root, daemon.logger, daemon.mount_check) ts_iter = make_timestamp_iter() broker = self._get_broker('a', 'c', node_index=0) timestamp = next(ts_iter) broker.initialize(timestamp.internal, POLICIES.default.idx) info = broker.get_replication_info() with mock.patch(klass + '.remove_synced_container') as mock_remove: with mock.patch(klass + '.add_synced_container') as mock_add: daemon._post_replicate_hook(broker, info, []) self.assertEqual(0, mock_remove.call_count) self.assertEqual(0, mock_add.call_count) timestamp = next(ts_iter) # sync-to and sync-key empty - remove from store broker.update_metadata( {'X-Container-Sync-To': ('', timestamp.internal), 'X-Container-Sync-Key': ('', timestamp.internal)}) with mock.patch(klass + '.remove_synced_container') as mock_remove: with mock.patch(klass + '.add_synced_container') as mock_add: daemon._post_replicate_hook(broker, info, []) self.assertEqual(0, mock_add.call_count) mock_remove.assert_called_once_with(broker) timestamp = next(ts_iter) # sync-to is not empty sync-key is empty - remove from store broker.update_metadata( {'X-Container-Sync-To': ('a', timestamp.internal)}) with mock.patch(klass + '.remove_synced_container') as mock_remove: with mock.patch(klass + '.add_synced_container') as mock_add: daemon._post_replicate_hook(broker, info, []) self.assertEqual(0, mock_add.call_count) mock_remove.assert_called_once_with(broker) timestamp = next(ts_iter) # sync-to is empty sync-key is not empty - remove from store broker.update_metadata( {'X-Container-Sync-To': ('', timestamp.internal), 'X-Container-Sync-Key': ('secret', timestamp.internal)}) with mock.patch(klass + '.remove_synced_container') as mock_remove: with mock.patch(klass + '.add_synced_container') as mock_add: daemon._post_replicate_hook(broker, info, []) self.assertEqual(0, mock_add.call_count) mock_remove.assert_called_once_with(broker) timestamp = next(ts_iter) # sync-to, sync-key both not empty - add to store broker.update_metadata( {'X-Container-Sync-To': ('a', timestamp.internal), 'X-Container-Sync-Key': ('secret', timestamp.internal)}) with mock.patch(klass + '.remove_synced_container') as mock_remove: with mock.patch(klass + '.add_synced_container') as mock_add: 
daemon._post_replicate_hook(broker, info, []) mock_add.assert_called_once_with(broker) self.assertEqual(0, mock_remove.call_count) timestamp = next(ts_iter) # container is removed - need to remove from store broker.delete_db(timestamp.internal) broker.update_metadata( {'X-Container-Sync-To': ('a', timestamp.internal), 'X-Container-Sync-Key': ('secret', timestamp.internal)}) with mock.patch(klass + '.remove_synced_container') as mock_remove: with mock.patch(klass + '.add_synced_container') as mock_add: daemon._post_replicate_hook(broker, info, []) self.assertEqual(0, mock_add.call_count) mock_remove.assert_called_once_with(broker) def test_sync_triggers_sync_store_update(self): klass = 'swift.container.sync_store.ContainerSyncStore' ts_iter = make_timestamp_iter() # Create two containers as follows: # broker_1 which is not set for sync # broker_2 which is set for sync and then unset # test that while replicating both we see no activity # for broker_1, and the anticipated activity for broker_2 broker_1 = self._get_broker('a', 'c', node_index=0) broker_1.initialize(next(ts_iter).internal, POLICIES.default.idx) broker_2 = self._get_broker('b', 'd', node_index=0) broker_2.initialize(next(ts_iter).internal, POLICIES.default.idx) broker_2.update_metadata( {'X-Container-Sync-To': ('a', next(ts_iter).internal), 'X-Container-Sync-Key': ('secret', next(ts_iter).internal)}) # replicate once according to broker_1 # relying on the fact that FakeRing would place both # in the same partition. part, node = self._get_broker_part_node(broker_1) with mock.patch(klass + '.remove_synced_container') as mock_remove: with mock.patch(klass + '.add_synced_container') as mock_add: self._run_once(node) self.assertEqual(1, mock_add.call_count) self.assertEqual(broker_2.db_file, mock_add.call_args[0][0].db_file) self.assertEqual(0, mock_remove.call_count) broker_2.update_metadata( {'X-Container-Sync-To': ('', next(ts_iter).internal)}) # replicate once this time according to broker_2 # relying on the fact that FakeRing would place both # in the same partition. part, node = self._get_broker_part_node(broker_2) with mock.patch(klass + '.remove_synced_container') as mock_remove: with mock.patch(klass + '.add_synced_container') as mock_add: self._run_once(node) self.assertEqual(0, mock_add.call_count) self.assertEqual(1, mock_remove.call_count) self.assertEqual(broker_2.db_file, mock_remove.call_args[0][0].db_file) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/obj/0000775000567000056710000000000013024044470016524 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/obj/test_ssync.py0000664000567000056710000020433213024044354021301 0ustar jenkinsjenkins00000000000000# Copyright (c) 2013 - 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
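
# These tests exercise the ssync protocol end to end: a sender
# (ssync_sender.Sender) talks to a real ObjectController served by an
# eventlet wsgi server, while the sender's connect()/send()/readline()
# are wrapped so every protocol message is recorded as a ('tx'|'rx', msg)
# tuple.  The recorded trace is then split by _analyze_trace() into the
# 'tx_missing', 'rx_missing', 'tx_updates' and 'rx_updates' phases, and
# assertions are made both on those messages and on the final diskfile
# state of each side.  A rough illustration of how a captured trace is
# consumed (illustrative only, not an additional test case):
#
#     results = self._analyze_trace(trace)
#     self.assertEqual(1, len(results['tx_missing']))
#     self.assertFalse(results['rx_updates'])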
from collections import defaultdict import mock import os import time import unittest import eventlet import itertools from six.moves import urllib from swift.common.exceptions import DiskFileNotExist, DiskFileError, \ DiskFileDeleted from swift.common import utils from swift.common.storage_policy import POLICIES, EC_POLICY from swift.common.utils import Timestamp from swift.obj import ssync_sender, server from swift.obj.reconstructor import RebuildingECDiskFileStream, \ ObjectReconstructor from test.unit import patch_policies, debug_logger, encode_frag_archive_bodies from test.unit.obj.common import BaseTest, FakeReplicator class TestBaseSsync(BaseTest): """ Provides a framework to test end to end interactions between sender and receiver. The basis for each test is actual diskfile state on either side. The connection between sender and receiver is wrapped to capture ssync traffic for subsequent verification of the protocol. Assertions are made about the final state of the sender and receiver diskfiles. """ def setUp(self): super(TestBaseSsync, self).setUp() self.device = 'dev' self.partition = '9' # sender side setup self.tx_testdir = os.path.join(self.tmpdir, 'tmp_test_ssync_sender') utils.mkdirs(os.path.join(self.tx_testdir, self.device)) self.daemon = FakeReplicator(self.tx_testdir) # rx side setup self.rx_testdir = os.path.join(self.tmpdir, 'tmp_test_ssync_receiver') utils.mkdirs(os.path.join(self.rx_testdir, self.device)) conf = { 'devices': self.rx_testdir, 'mount_check': 'false', 'replication_one_per_device': 'false', 'log_requests': 'false'} self.rx_logger = debug_logger() self.rx_controller = server.ObjectController(conf, self.rx_logger) self.ts_iter = (Timestamp(t) for t in itertools.count(int(time.time()))) self.rx_ip = '127.0.0.1' sock = eventlet.listen((self.rx_ip, 0)) self.rx_server = eventlet.spawn( eventlet.wsgi.server, sock, self.rx_controller, self.rx_logger) self.rx_port = sock.getsockname()[1] self.rx_node = {'replication_ip': self.rx_ip, 'replication_port': self.rx_port, 'device': self.device} self.obj_data = {} # maps obj path -> obj data def tearDown(self): self.rx_server.kill() super(TestBaseSsync, self).tearDown() def make_connect_wrapper(self, sender): """ Make a wrapper function for the ssync_sender.Sender.connect() method that will in turn wrap the HTTConnection.send() and the Sender.readline() so that ssync protocol messages can be captured. 
""" orig_connect = sender.connect trace = dict(messages=[]) def add_trace(type, msg): # record a protocol event for later analysis if msg.strip(): trace['messages'].append((type, msg.strip())) def make_send_wrapper(send): def wrapped_send(msg): _msg = msg.split('\r\n', 1)[1] _msg = _msg.rsplit('\r\n', 1)[0] add_trace('tx', _msg) send(msg) return wrapped_send def make_readline_wrapper(readline): def wrapped_readline(): data = readline() add_trace('rx', data) bytes_read = trace.setdefault('readline_bytes', 0) trace['readline_bytes'] = bytes_read + len(data) return data return wrapped_readline def wrapped_connect(): orig_connect() sender.connection.send = make_send_wrapper( sender.connection.send) sender.readline = make_readline_wrapper(sender.readline) return wrapped_connect, trace def _get_object_data(self, path, **kwargs): # return data for given path if path not in self.obj_data: self.obj_data[path] = '%s___data' % path return self.obj_data[path] def _create_ondisk_files(self, df_mgr, obj_name, policy, timestamp, frag_indexes=None, commit=True): frag_indexes = frag_indexes or [None] metadata = {'Content-Type': 'plain/text'} diskfiles = [] for frag_index in frag_indexes: object_data = self._get_object_data('/a/c/%s' % obj_name, frag_index=frag_index) if policy.policy_type == EC_POLICY: metadata['X-Object-Sysmeta-Ec-Frag-Index'] = str(frag_index) df = self._make_diskfile( device=self.device, partition=self.partition, account='a', container='c', obj=obj_name, body=object_data, extra_metadata=metadata, timestamp=timestamp, policy=policy, frag_index=frag_index, df_mgr=df_mgr, commit=commit) diskfiles.append(df) return diskfiles def _open_tx_diskfile(self, obj_name, policy, frag_index=None): df_mgr = self.daemon._diskfile_router[policy] df = df_mgr.get_diskfile( self.device, self.partition, account='a', container='c', obj=obj_name, policy=policy, frag_index=frag_index) df.open() return df def _open_rx_diskfile(self, obj_name, policy, frag_index=None): df = self.rx_controller.get_diskfile( self.device, self.partition, 'a', 'c', obj_name, policy=policy, frag_index=frag_index) df.open() return df def _verify_diskfile_sync(self, tx_df, rx_df, frag_index, same_etag=False): # verify that diskfiles' metadata match # sanity check, they are not the same ondisk files! 
self.assertNotEqual(tx_df._datadir, rx_df._datadir) rx_metadata = dict(rx_df.get_metadata()) for k, v in tx_df.get_metadata().items(): if k == 'X-Object-Sysmeta-Ec-Frag-Index': # if tx_df had a frag_index then rx_df should also have one self.assertTrue(k in rx_metadata) self.assertEqual(frag_index, int(rx_metadata.pop(k))) elif k == 'ETag' and not same_etag: self.assertNotEqual(v, rx_metadata.pop(k, None)) continue else: self.assertEqual(v, rx_metadata.pop(k), k) self.assertFalse(rx_metadata) expected_body = self._get_object_data(tx_df._name, frag_index=frag_index) actual_body = ''.join([chunk for chunk in rx_df.reader()]) self.assertEqual(expected_body, actual_body) def _analyze_trace(self, trace): """ Parse protocol trace captured by fake connection, making some assertions along the way, and return results as a dict of form: results = {'tx_missing': , 'rx_missing': , 'tx_updates': , 'rx_updates': } Each subreq is a dict with keys: 'method', 'path', 'headers', 'body' """ def tx_missing(results, line): self.assertEqual('tx', line[0]) results['tx_missing'].append(line[1]) def rx_missing(results, line): self.assertEqual('rx', line[0]) parts = line[1].split('\r\n') for part in parts: results['rx_missing'].append(part) def tx_updates(results, line): self.assertEqual('tx', line[0]) subrequests = results['tx_updates'] if line[1].startswith(('PUT', 'DELETE', 'POST')): parts = line[1].split('\r\n') method, path = parts[0].split() subreq = {'method': method, 'path': path, 'req': line[1], 'headers': parts[1:]} subrequests.append(subreq) else: self.assertTrue(subrequests) body = (subrequests[-1]).setdefault('body', '') body += line[1] subrequests[-1]['body'] = body def rx_updates(results, line): self.assertEqual('rx', line[0]) results.setdefault['rx_updates'].append(line[1]) def unexpected(results, line): results.setdefault('unexpected', []).append(line) # each trace line is a tuple of ([tx|rx], msg) handshakes = iter([(('tx', ':MISSING_CHECK: START'), tx_missing), (('tx', ':MISSING_CHECK: END'), unexpected), (('rx', ':MISSING_CHECK: START'), rx_missing), (('rx', ':MISSING_CHECK: END'), unexpected), (('tx', ':UPDATES: START'), tx_updates), (('tx', ':UPDATES: END'), unexpected), (('rx', ':UPDATES: START'), rx_updates), (('rx', ':UPDATES: END'), unexpected)]) expect_handshake = next(handshakes) phases = ('tx_missing', 'rx_missing', 'tx_updates', 'rx_updates') results = dict((k, []) for k in phases) handler = unexpected lines = list(trace.get('messages', [])) lines.reverse() while lines: line = lines.pop() if line == expect_handshake[0]: handler = expect_handshake[1] try: expect_handshake = next(handshakes) except StopIteration: # should be the last line self.assertFalse( lines, 'Unexpected trailing lines %s' % lines) continue handler(results, line) try: # check all handshakes occurred missed = next(handshakes) self.fail('Handshake %s not found' % str(missed[0])) except StopIteration: pass # check no message outside of a phase self.assertFalse(results.get('unexpected'), 'Message outside of a phase: %s' % results.get(None)) return results def _verify_ondisk_files(self, tx_objs, policy, tx_frag_index=None, rx_frag_index=None): """ Verify tx and rx files that should be in sync. 
:param tx_objs: sender diskfiles :param policy: storage policy instance :param tx_frag_index: the fragment index of tx diskfiles that should have been used as a source for sync'ing :param rx_frag_index: the fragment index of expected rx diskfiles """ for o_name, diskfiles in tx_objs.items(): for tx_df in diskfiles: # check tx file still intact - ssync does not do any cleanup! tx_df.open() if tx_frag_index is None or tx_df._frag_index == tx_frag_index: # this diskfile should have been sync'd, # check rx file is ok rx_df = self._open_rx_diskfile( o_name, policy, rx_frag_index) # for EC revert job or replication etags should match match_etag = (tx_frag_index == rx_frag_index) self._verify_diskfile_sync( tx_df, rx_df, rx_frag_index, match_etag) else: # this diskfile should not have been sync'd, # check no rx file, self.assertRaises(DiskFileNotExist, self._open_rx_diskfile, o_name, policy, frag_index=tx_df._frag_index) def _verify_tombstones(self, tx_objs, policy): # verify tx and rx tombstones that should be in sync for o_name, diskfiles in tx_objs.items(): try: self._open_tx_diskfile(o_name, policy) self.fail('DiskFileDeleted expected') except DiskFileDeleted as exc: tx_delete_time = exc.timestamp try: self._open_rx_diskfile(o_name, policy) self.fail('DiskFileDeleted expected') except DiskFileDeleted as exc: rx_delete_time = exc.timestamp self.assertEqual(tx_delete_time, rx_delete_time) @patch_policies(with_ec_default=True) class TestBaseSsyncEC(TestBaseSsync): def setUp(self): super(TestBaseSsyncEC, self).setUp() self.policy = POLICIES.default def _get_object_data(self, path, frag_index=None, **kwargs): # return a frag archive for given object name and frag index. # for EC policies obj_data maps obj path -> list of frag archives if path not in self.obj_data: # make unique frag archives for each object name data = path * 2 * (self.policy.ec_ndata + self.policy.ec_nparity) self.obj_data[path] = encode_frag_archive_bodies( self.policy, data) return self.obj_data[path][frag_index] class TestSsyncEC(TestBaseSsyncEC): def test_handoff_fragment_revert(self): # test that a sync_revert type job does send the correct frag archives # to the receiver policy = POLICIES.default rx_node_index = 0 tx_node_index = 1 # for a revert job we iterate over frag index that belongs on # remote node frag_index = rx_node_index # create sender side diskfiles... 
tx_objs = {} rx_objs = {} tx_tombstones = {} tx_df_mgr = self.daemon._diskfile_router[policy] rx_df_mgr = self.rx_controller._diskfile_router[policy] # o1 has primary and handoff fragment archives t1 = next(self.ts_iter) tx_objs['o1'] = self._create_ondisk_files( tx_df_mgr, 'o1', policy, t1, (rx_node_index, tx_node_index)) # o2 only has primary t2 = next(self.ts_iter) tx_objs['o2'] = self._create_ondisk_files( tx_df_mgr, 'o2', policy, t2, (tx_node_index,)) # o3 only has handoff, rx has other frag index t3 = next(self.ts_iter) tx_objs['o3'] = self._create_ondisk_files( tx_df_mgr, 'o3', policy, t3, (rx_node_index,)) rx_objs['o3'] = self._create_ondisk_files( rx_df_mgr, 'o3', policy, t3, (13,)) # o4 primary and handoff fragment archives on tx, handoff in sync on rx t4 = next(self.ts_iter) tx_objs['o4'] = self._create_ondisk_files( tx_df_mgr, 'o4', policy, t4, (tx_node_index, rx_node_index,)) rx_objs['o4'] = self._create_ondisk_files( rx_df_mgr, 'o4', policy, t4, (rx_node_index,)) # o5 is a tombstone, missing on receiver t5 = next(self.ts_iter) tx_tombstones['o5'] = self._create_ondisk_files( tx_df_mgr, 'o5', policy, t5, (tx_node_index,)) tx_tombstones['o5'][0].delete(t5) suffixes = set() for diskfiles in (tx_objs.values() + tx_tombstones.values()): for df in diskfiles: suffixes.add(os.path.basename(os.path.dirname(df._datadir))) # create ssync sender instance... job = {'device': self.device, 'partition': self.partition, 'policy': policy, 'frag_index': frag_index} node = dict(self.rx_node) node.update({'index': rx_node_index}) sender = ssync_sender.Sender(self.daemon, node, job, suffixes) # wrap connection from tx to rx to capture ssync messages... sender.connect, trace = self.make_connect_wrapper(sender) # run the sync protocol... sender() # verify protocol results = self._analyze_trace(trace) # sender has handoff frags for o1, o3 and o4 and ts for o5 self.assertEqual(4, len(results['tx_missing'])) # receiver is missing frags for o1, o3 and ts for o5 self.assertEqual(3, len(results['rx_missing'])) self.assertEqual(3, len(results['tx_updates'])) self.assertFalse(results['rx_updates']) sync_paths = [] for subreq in results.get('tx_updates'): if subreq.get('method') == 'PUT': self.assertTrue( 'X-Object-Sysmeta-Ec-Frag-Index: %s' % rx_node_index in subreq.get('headers')) expected_body = self._get_object_data(subreq['path'], rx_node_index) self.assertEqual(expected_body, subreq['body']) elif subreq.get('method') == 'DELETE': self.assertEqual('/a/c/o5', subreq['path']) sync_paths.append(subreq.get('path')) self.assertEqual(['/a/c/o1', '/a/c/o3', '/a/c/o5'], sorted(sync_paths)) # verify on disk files... self._verify_ondisk_files( tx_objs, policy, frag_index, rx_node_index) self._verify_tombstones(tx_tombstones, policy) def test_handoff_fragment_only_missing_durable(self): # test that a sync_revert type job does not PUT when the rx is only # missing a durable file policy = POLICIES.default rx_node_index = frag_index = 0 tx_node_index = 1 # create sender side diskfiles... 
tx_objs = {} rx_objs = {} tx_df_mgr = self.daemon._diskfile_router[policy] rx_df_mgr = self.rx_controller._diskfile_router[policy] expected_subreqs = defaultdict(list) # o1 in sync on rx but rx missing .durable - no PUT required t1a = next(self.ts_iter) # older rx .data with .durable t1b = next(self.ts_iter) # rx .meta t1c = next(self.ts_iter) # tx .data with .durable, rx missing .durable obj_name = 'o1' tx_objs[obj_name] = self._create_ondisk_files( tx_df_mgr, obj_name, policy, t1c, (tx_node_index, rx_node_index,)) rx_objs[obj_name] = self._create_ondisk_files( rx_df_mgr, obj_name, policy, t1a, (rx_node_index,)) metadata = {'X-Timestamp': t1b.internal} rx_objs[obj_name][0].write_metadata(metadata) rx_objs[obj_name] = self._create_ondisk_files( rx_df_mgr, obj_name, policy, t1c, (rx_node_index, 9), commit=False) # o2 on rx has wrong frag_indexes and missing .durable - PUT required t2 = next(self.ts_iter) obj_name = 'o2' tx_objs[obj_name] = self._create_ondisk_files( tx_df_mgr, obj_name, policy, t2, (tx_node_index, rx_node_index,)) rx_objs[obj_name] = self._create_ondisk_files( rx_df_mgr, obj_name, policy, t2, (12, 13), commit=False) expected_subreqs['PUT'].append(obj_name) # o3 on rx has frag at other time missing .durable - PUT required t3 = next(self.ts_iter) obj_name = 'o3' tx_objs[obj_name] = self._create_ondisk_files( tx_df_mgr, obj_name, policy, t3, (tx_node_index, rx_node_index,)) t3b = next(self.ts_iter) rx_objs[obj_name] = self._create_ondisk_files( rx_df_mgr, obj_name, policy, t3b, (rx_node_index,), commit=False) expected_subreqs['PUT'].append(obj_name) # o4 on rx has a newer tombstone and even newer frags - no PUT required t4 = next(self.ts_iter) obj_name = 'o4' tx_objs[obj_name] = self._create_ondisk_files( tx_df_mgr, obj_name, policy, t4, (tx_node_index, rx_node_index,)) rx_objs[obj_name] = self._create_ondisk_files( rx_df_mgr, obj_name, policy, t4, (rx_node_index,)) t4b = next(self.ts_iter) rx_objs[obj_name][0].delete(t4b) t4c = next(self.ts_iter) rx_objs[obj_name] = self._create_ondisk_files( rx_df_mgr, obj_name, policy, t4c, (rx_node_index,), commit=False) suffixes = set() for diskfiles in tx_objs.values(): for df in diskfiles: suffixes.add(os.path.basename(os.path.dirname(df._datadir))) # create ssync sender instance... job = {'device': self.device, 'partition': self.partition, 'policy': policy, 'frag_index': frag_index} node = dict(self.rx_node) node.update({'index': rx_node_index}) sender = ssync_sender.Sender(self.daemon, node, job, suffixes) # wrap connection from tx to rx to capture ssync messages... sender.connect, trace = self.make_connect_wrapper(sender) # run the sync protocol... sender() # verify protocol results = self._analyze_trace(trace) self.assertEqual(4, len(results['tx_missing'])) self.assertEqual(2, len(results['rx_missing'])) self.assertEqual(2, len(results['tx_updates'])) self.assertFalse(results['rx_updates']) for subreq in results.get('tx_updates'): obj = subreq['path'].split('/')[3] method = subreq['method'] self.assertTrue(obj in expected_subreqs[method], 'Unexpected %s subreq for object %s, expected %s' % (method, obj, expected_subreqs[method])) expected_subreqs[method].remove(obj) if method == 'PUT': expected_body = self._get_object_data( subreq['path'], frag_index=rx_node_index) self.assertEqual(expected_body, subreq['body']) # verify all expected subreqs consumed for _method, expected in expected_subreqs.items(): self.assertFalse(expected) # verify on disk files... 
tx_objs.pop('o4') # o4 should not have been sync'd self._verify_ondisk_files( tx_objs, policy, frag_index, rx_node_index) def test_fragment_sync(self): # check that a sync_only type job does call reconstructor to build a # diskfile to send, and continues making progress despite an error # when building one diskfile policy = POLICIES.default rx_node_index = 0 tx_node_index = 1 # for a sync job we iterate over frag index that belongs on local node frag_index = tx_node_index # create sender side diskfiles... tx_objs = {} tx_tombstones = {} rx_objs = {} tx_df_mgr = self.daemon._diskfile_router[policy] rx_df_mgr = self.rx_controller._diskfile_router[policy] # o1 only has primary t1 = next(self.ts_iter) tx_objs['o1'] = self._create_ondisk_files( tx_df_mgr, 'o1', policy, t1, (tx_node_index,)) # o2 only has primary t2 = next(self.ts_iter) tx_objs['o2'] = self._create_ondisk_files( tx_df_mgr, 'o2', policy, t2, (tx_node_index,)) # o3 only has primary t3 = next(self.ts_iter) tx_objs['o3'] = self._create_ondisk_files( tx_df_mgr, 'o3', policy, t3, (tx_node_index,)) # o4 primary fragment archives on tx, handoff in sync on rx t4 = next(self.ts_iter) tx_objs['o4'] = self._create_ondisk_files( tx_df_mgr, 'o4', policy, t4, (tx_node_index,)) rx_objs['o4'] = self._create_ondisk_files( rx_df_mgr, 'o4', policy, t4, (rx_node_index,)) # o5 is a tombstone, missing on receiver t5 = next(self.ts_iter) tx_tombstones['o5'] = self._create_ondisk_files( tx_df_mgr, 'o5', policy, t5, (tx_node_index,)) tx_tombstones['o5'][0].delete(t5) suffixes = set() for diskfiles in (tx_objs.values() + tx_tombstones.values()): for df in diskfiles: suffixes.add(os.path.basename(os.path.dirname(df._datadir))) reconstruct_fa_calls = [] def fake_reconstruct_fa(job, node, metadata): reconstruct_fa_calls.append((job, node, policy, metadata)) if len(reconstruct_fa_calls) == 2: # simulate second reconstruct failing raise DiskFileError content = self._get_object_data(metadata['name'], frag_index=rx_node_index) return RebuildingECDiskFileStream( metadata, rx_node_index, iter([content])) # create ssync sender instance... job = {'device': self.device, 'partition': self.partition, 'policy': policy, 'frag_index': frag_index, 'sync_diskfile_builder': fake_reconstruct_fa} node = dict(self.rx_node) node.update({'index': rx_node_index}) sender = ssync_sender.Sender(self.daemon, node, job, suffixes) # wrap connection from tx to rx to capture ssync messages... sender.connect, trace = self.make_connect_wrapper(sender) # run the sync protocol... 
sender() # verify protocol results = self._analyze_trace(trace) # sender has primary for o1, o2 and o3, o4 and ts for o5 self.assertEqual(5, len(results['tx_missing'])) # receiver is missing o1, o2 and o3 and ts for o5 self.assertEqual(4, len(results['rx_missing'])) # sender can only construct 2 out of 3 missing frags self.assertEqual(3, len(results['tx_updates'])) self.assertEqual(3, len(reconstruct_fa_calls)) self.assertFalse(results['rx_updates']) actual_sync_paths = [] for subreq in results.get('tx_updates'): if subreq.get('method') == 'PUT': self.assertTrue( 'X-Object-Sysmeta-Ec-Frag-Index: %s' % rx_node_index in subreq.get('headers')) expected_body = self._get_object_data( subreq['path'], frag_index=rx_node_index) self.assertEqual(expected_body, subreq['body']) elif subreq.get('method') == 'DELETE': self.assertEqual('/a/c/o5', subreq['path']) actual_sync_paths.append(subreq.get('path')) # remove the failed df from expected synced df's expect_sync_paths = ['/a/c/o1', '/a/c/o2', '/a/c/o3', '/a/c/o5'] failed_path = reconstruct_fa_calls[1][3]['name'] expect_sync_paths.remove(failed_path) failed_obj = None for obj, diskfiles in tx_objs.items(): if diskfiles[0]._name == failed_path: failed_obj = obj # sanity check self.assertTrue(tx_objs.pop(failed_obj)) # verify on disk files... self.assertEqual(sorted(expect_sync_paths), sorted(actual_sync_paths)) self._verify_ondisk_files( tx_objs, policy, frag_index, rx_node_index) self._verify_tombstones(tx_tombstones, policy) def test_send_with_frag_index_none(self): policy = POLICIES.default tx_df_mgr = self.daemon._diskfile_router[policy] rx_df_mgr = self.rx_controller._diskfile_router[policy] # create an ec fragment on the remote node ts1 = next(self.ts_iter) remote_df = self._create_ondisk_files( rx_df_mgr, 'o', policy, ts1, (3,))[0] # create a tombstone on the local node df = self._create_ondisk_files( tx_df_mgr, 'o', policy, ts1, (3,))[0] suffix = os.path.basename(os.path.dirname(df._datadir)) ts2 = next(self.ts_iter) df.delete(ts2) # a reconstructor revert job with only tombstones will have frag_index # explicitly set to None job = { 'frag_index': None, 'partition': self.partition, 'policy': policy, 'device': self.device, } sender = ssync_sender.Sender( self.daemon, self.rx_node, job, [suffix]) success, _ = sender() self.assertTrue(success) try: remote_df.read_metadata() except DiskFileDeleted as e: self.assertEqual(e.timestamp, ts2) else: self.fail('Successfully opened remote DiskFile') def test_send_invalid_frag_index(self): policy = POLICIES.default job = {'frag_index': 'Not a number', 'device': self.device, 'partition': self.partition, 'policy': policy} sender = ssync_sender.Sender( self.daemon, self.rx_node, job, ['abc']) success, _ = sender() self.assertFalse(success) error_log_lines = self.daemon.logger.get_lines_for_level('error') self.assertEqual(1, len(error_log_lines)) error_msg = error_log_lines[0] self.assertIn("Expected status 200; got 400", error_msg) self.assertIn("Invalid X-Backend-Ssync-Frag-Index 'Not a number'", error_msg) class FakeResponse(object): def __init__(self, frag_index, obj_data, length=None): self.headers = { 'X-Object-Sysmeta-Ec-Frag-Index': str(frag_index), 'X-Object-Sysmeta-Ec-Etag': 'the etag', 'X-Backend-Timestamp': '1234567890.12345' } self.frag_index = frag_index self.obj_data = obj_data self.data = '' self.length = length def init(self, path): if isinstance(self.obj_data, Exception): self.data = self.obj_data else: self.data = self.obj_data[path][self.frag_index] def getheaders(self): return self.headers 
def read(self, length): if isinstance(self.data, Exception): raise self.data val = self.data self.data = '' return val if self.length is None else val[:self.length] class TestSsyncECReconstructorSyncJob(TestBaseSsyncEC): def setUp(self): super(TestSsyncECReconstructorSyncJob, self).setUp() self.rx_node_index = 0 self.tx_node_index = 1 # create sender side diskfiles... self.tx_objs = {} tx_df_mgr = self.daemon._diskfile_router[self.policy] t1 = next(self.ts_iter) self.tx_objs['o1'] = self._create_ondisk_files( tx_df_mgr, 'o1', self.policy, t1, (self.tx_node_index,)) t2 = next(self.ts_iter) self.tx_objs['o2'] = self._create_ondisk_files( tx_df_mgr, 'o2', self.policy, t2, (self.tx_node_index,)) self.suffixes = set() for diskfiles in list(self.tx_objs.values()): for df in diskfiles: self.suffixes.add( os.path.basename(os.path.dirname(df._datadir))) self.job_node = dict(self.rx_node) self.job_node['index'] = self.rx_node_index self.frag_length = int( self.tx_objs['o1'][0].get_metadata()['Content-Length']) def _test_reconstructor_sync_job(self, frag_responses): # Helper method to mock reconstructor to consume given lists of fake # responses while reconstructing a fragment for a sync type job. The # tests verify that when the reconstructed fragment iter fails in some # way then ssync does not mistakenly create fragments on the receiving # node which have incorrect data. # See https://bugs.launchpad.net/swift/+bug/1631144 # frag_responses is a list of two lists of responses to each # reconstructor GET request for a fragment archive. The two items in # the outer list are lists of responses for each of the two fragments # to be reconstructed. Items in the inner lists are responses for each # of the other fragments fetched during the reconstructor rebuild. path_to_responses = {} fake_get_response_calls = [] def fake_get_response(recon, node, part, path, headers, policy): # select a list of fake responses for this path and return the next # from the list if path not in path_to_responses: path_to_responses[path] = frag_responses.pop(0) response = path_to_responses[path].pop() # the frag_responses list is in ssync task order, we only know the # path when consuming the responses so initialise the path in the # response now if response: response.init(path) fake_get_response_calls.append(path) return response def fake_get_part_nodes(part): # the reconstructor will try to remove the receiver node from the # object ring part nodes, but the fake node we created for our # receiver is not actually in the ring part nodes, so append it # here simply so that the reconstructor does not fail to remove it. return (self.policy.object_ring._get_part_nodes(part) + [self.job_node]) with mock.patch( 'swift.obj.reconstructor.ObjectReconstructor._get_response', fake_get_response), \ mock.patch.object( self.policy.object_ring, 'get_part_nodes', fake_get_part_nodes): self.reconstructor = ObjectReconstructor( {}, logger=debug_logger('test_reconstructor')) job = { 'device': self.device, 'partition': self.partition, 'policy': self.policy, 'sync_diskfile_builder': self.reconstructor.reconstruct_fa } sender = ssync_sender.Sender( self.daemon, self.job_node, job, self.suffixes) sender.connect, trace = self.make_connect_wrapper(sender) sender() return trace def test_sync_reconstructor_partial_rebuild(self): # First fragment to sync gets partial content from reconstructor. # Expect ssync job to exit early with no file written on receiver. 
frag_responses = [ [FakeResponse(i, self.obj_data, length=-1) for i in range(self.policy.ec_ndata + self.policy.ec_nparity)], [FakeResponse(i, self.obj_data) for i in range(self.policy.ec_ndata + self.policy.ec_nparity)]] self._test_reconstructor_sync_job(frag_responses) msgs = [] for obj_name in ('o1', 'o2'): try: df = self._open_rx_diskfile( obj_name, self.policy, self.rx_node_index) msgs.append('Unexpected rx diskfile for %r with content %r' % (obj_name, ''.join([d for d in df.reader()]))) except DiskFileNotExist: pass # expected outcome if msgs: self.fail('Failed with:\n%s' % '\n'.join(msgs)) log_lines = self.daemon.logger.get_lines_for_level('error') self.assertIn('Sent data length does not match content-length', log_lines[0]) self.assertFalse(log_lines[1:]) # trampoline for the receiver to write a log for x in range(2): # Trampoline twice - the receiving ObjectController is a coroutine # and it also creates a coroutine to write chunks to disk. Note # that the second of these coroutines is removed in later releases # by commit 4c11833. eventlet.sleep(0) self.assertIn('SSYNC', self.rx_logger.get_lines_for_level('info')[-1]) log_lines = self.rx_logger.get_lines_for_level('warning') self.assertIn('ssync subrequest failed with 499', log_lines[0]) self.assertFalse(log_lines[1:]) self.assertFalse(self.rx_logger.get_lines_for_level('error')) def test_sync_reconstructor_no_rebuilt_content(self): # First fragment to sync gets no content in any response to # reconstructor. Expect ssync job to exit early with no file written on # receiver. frag_responses = [ [FakeResponse(i, self.obj_data, length=0) for i in range(self.policy.ec_ndata + self.policy.ec_nparity)], [FakeResponse(i, self.obj_data) for i in range(self.policy.ec_ndata + self.policy.ec_nparity)]] self._test_reconstructor_sync_job(frag_responses) msgs = [] for obj_name in ('o1', 'o2'): try: df = self._open_rx_diskfile( obj_name, self.policy, self.rx_node_index) msgs.append('Unexpected rx diskfile for %r with content %r' % (obj_name, ''.join([d for d in df.reader()]))) except DiskFileNotExist: pass # expected outcome if msgs: self.fail('Failed with:\n%s' % '\n'.join(msgs)) log_lines = self.daemon.logger.get_lines_for_level('error') self.assertIn('Sent data length does not match content-length', log_lines[0]) self.assertFalse(log_lines[1:]) # trampoline for the receiver to write a log eventlet.sleep(0) self.assertIn('SSYNC', self.rx_logger.get_lines_for_level('info')[-1]) log_lines = self.rx_logger.get_lines_for_level('warning') self.assertIn('ssync subrequest failed with 499', log_lines[0]) self.assertFalse(log_lines[1:]) self.assertFalse(self.rx_logger.get_lines_for_level('error')) def test_sync_reconstructor_exception_during_rebuild(self): # First fragment to sync has some reconstructor get responses raise # exception while rebuilding. Expect ssync job to exit early with no # files written on receiver. 
frag_responses = [ # ec_ndata responses are ok, but one of these will be ignored as # it is for the frag index being rebuilt [FakeResponse(i, self.obj_data) for i in range(self.policy.ec_ndata)] + # ec_nparity responses will raise an Exception - at least one of # these will be used during rebuild [FakeResponse(i, Exception('raised in response read method')) for i in range(self.policy.ec_ndata, self.policy.ec_ndata + self.policy.ec_nparity)], # second set of response are all good [FakeResponse(i, self.obj_data) for i in range(self.policy.ec_ndata + self.policy.ec_nparity)]] self._test_reconstructor_sync_job(frag_responses) msgs = [] for obj_name in ('o1', 'o2'): try: df = self._open_rx_diskfile( obj_name, self.policy, self.rx_node_index) msgs.append('Unexpected rx diskfile for %r with content %r' % (obj_name, ''.join([d for d in df.reader()]))) except DiskFileNotExist: pass # expected outcome if msgs: self.fail('Failed with:\n%s' % '\n'.join(msgs)) log_lines = self.reconstructor.logger.get_lines_for_level('error') self.assertIn('Error trying to rebuild', log_lines[0]) log_lines = self.daemon.logger.get_lines_for_level('error') self.assertIn('Sent data length does not match content-length', log_lines[0]) self.assertFalse(log_lines[1:]) # trampoline for the receiver to write a log eventlet.sleep(0) self.assertIn('SSYNC', self.rx_logger.get_lines_for_level('info')[-1]) log_lines = self.rx_logger.get_lines_for_level('warning') self.assertIn('ssync subrequest failed with 499', log_lines[0]) self.assertFalse(log_lines[1:]) self.assertFalse(self.rx_logger.get_lines_for_level('error')) def test_sync_reconstructor_no_responses(self): # First fragment to sync gets no responses for reconstructor to rebuild # with, nothing is sent to receiver so expect to skip that fragment and # continue with second. 
frag_responses = [ [None for i in range(self.policy.ec_ndata + self.policy.ec_nparity)], [FakeResponse(i, self.obj_data) for i in range(self.policy.ec_ndata + self.policy.ec_nparity)]] trace = self._test_reconstructor_sync_job(frag_responses) results = self._analyze_trace(trace) self.assertEqual(2, len(results['tx_missing'])) self.assertEqual(2, len(results['rx_missing'])) self.assertEqual(1, len(results['tx_updates'])) self.assertFalse(results['rx_updates']) self.assertEqual('PUT', results['tx_updates'][0].get('method')) synced_obj_path = results['tx_updates'][0].get('path') synced_obj_name = synced_obj_path[-2:] msgs = [] obj_name = synced_obj_name try: df = self._open_rx_diskfile( obj_name, self.policy, self.rx_node_index) self.assertEqual( self._get_object_data(synced_obj_path, frag_index=self.rx_node_index), ''.join([d for d in df.reader()])) except DiskFileNotExist: msgs.append('Missing rx diskfile for %r' % obj_name) obj_names = list(self.tx_objs) obj_names.remove(synced_obj_name) obj_name = obj_names[0] try: df = self._open_rx_diskfile( obj_name, self.policy, self.rx_node_index) msgs.append('Unexpected rx diskfile for %r with content %r' % (obj_name, ''.join([d for d in df.reader()]))) except DiskFileNotExist: pass # expected outcome if msgs: self.fail('Failed with:\n%s' % '\n'.join(msgs)) self.assertFalse(self.daemon.logger.get_lines_for_level('error')) log_lines = self.reconstructor.logger.get_lines_for_level('error') self.assertIn('Unable to get enough responses', log_lines[0]) # trampoline for the receiver to write a log eventlet.sleep(0) self.assertIn('SSYNC', self.rx_logger.get_lines_for_level('info')[-1]) self.assertFalse(self.rx_logger.get_lines_for_level('warning')) self.assertFalse(self.rx_logger.get_lines_for_level('error')) def test_sync_reconstructor_rebuild_ok(self): # Sanity test for this class of tests. Both fragments get a full # complement of responses and rebuild correctly. frag_responses = [ [FakeResponse(i, self.obj_data) for i in range(self.policy.ec_ndata + self.policy.ec_nparity)], [FakeResponse(i, self.obj_data) for i in range(self.policy.ec_ndata + self.policy.ec_nparity)]] trace = self._test_reconstructor_sync_job(frag_responses) results = self._analyze_trace(trace) self.assertEqual(2, len(results['tx_missing'])) self.assertEqual(2, len(results['rx_missing'])) self.assertEqual(2, len(results['tx_updates'])) self.assertFalse(results['rx_updates']) msgs = [] for obj_name in self.tx_objs: try: df = self._open_rx_diskfile( obj_name, self.policy, self.rx_node_index) self.assertEqual( self._get_object_data(df._name, frag_index=self.rx_node_index), ''.join([d for d in df.reader()])) except DiskFileNotExist: msgs.append('Missing rx diskfile for %r' % obj_name) if msgs: self.fail('Failed with:\n%s' % '\n'.join(msgs)) self.assertFalse(self.daemon.logger.get_lines_for_level('error')) self.assertFalse( self.reconstructor.logger.get_lines_for_level('error')) # trampoline for the receiver to write a log eventlet.sleep(0) self.assertIn('SSYNC', self.rx_logger.get_lines_for_level('info')[-1]) self.assertFalse(self.rx_logger.get_lines_for_level('warning')) self.assertFalse(self.rx_logger.get_lines_for_level('error')) @patch_policies class TestSsyncReplication(TestBaseSsync): def test_sync(self): policy = POLICIES.default rx_node_index = 0 # create sender side diskfiles... 
tx_objs = {} rx_objs = {} tx_tombstones = {} rx_tombstones = {} tx_df_mgr = self.daemon._diskfile_router[policy] rx_df_mgr = self.rx_controller._diskfile_router[policy] # o1 and o2 are on tx only t1 = next(self.ts_iter) tx_objs['o1'] = self._create_ondisk_files(tx_df_mgr, 'o1', policy, t1) t2 = next(self.ts_iter) tx_objs['o2'] = self._create_ondisk_files(tx_df_mgr, 'o2', policy, t2) # o3 is on tx and older copy on rx t3a = next(self.ts_iter) rx_objs['o3'] = self._create_ondisk_files(rx_df_mgr, 'o3', policy, t3a) t3b = next(self.ts_iter) tx_objs['o3'] = self._create_ondisk_files(tx_df_mgr, 'o3', policy, t3b) # o4 in sync on rx and tx t4 = next(self.ts_iter) tx_objs['o4'] = self._create_ondisk_files(tx_df_mgr, 'o4', policy, t4) rx_objs['o4'] = self._create_ondisk_files(rx_df_mgr, 'o4', policy, t4) # o5 is a tombstone, missing on receiver t5 = next(self.ts_iter) tx_tombstones['o5'] = self._create_ondisk_files( tx_df_mgr, 'o5', policy, t5) tx_tombstones['o5'][0].delete(t5) # o6 is a tombstone, in sync on tx and rx t6 = next(self.ts_iter) tx_tombstones['o6'] = self._create_ondisk_files( tx_df_mgr, 'o6', policy, t6) tx_tombstones['o6'][0].delete(t6) rx_tombstones['o6'] = self._create_ondisk_files( rx_df_mgr, 'o6', policy, t6) rx_tombstones['o6'][0].delete(t6) # o7 is a tombstone on tx, older data on rx t7a = next(self.ts_iter) rx_objs['o7'] = self._create_ondisk_files(rx_df_mgr, 'o7', policy, t7a) t7b = next(self.ts_iter) tx_tombstones['o7'] = self._create_ondisk_files( tx_df_mgr, 'o7', policy, t7b) tx_tombstones['o7'][0].delete(t7b) suffixes = set() for diskfiles in (tx_objs.values() + tx_tombstones.values()): for df in diskfiles: suffixes.add(os.path.basename(os.path.dirname(df._datadir))) # create ssync sender instance... job = {'device': self.device, 'partition': self.partition, 'policy': policy} node = dict(self.rx_node) node.update({'index': rx_node_index}) sender = ssync_sender.Sender(self.daemon, node, job, suffixes) # wrap connection from tx to rx to capture ssync messages... sender.connect, trace = self.make_connect_wrapper(sender) # run the sync protocol... success, in_sync_objs = sender() self.assertEqual(7, len(in_sync_objs)) self.assertTrue(success) # verify protocol results = self._analyze_trace(trace) self.assertEqual(7, len(results['tx_missing'])) self.assertEqual(5, len(results['rx_missing'])) self.assertEqual(5, len(results['tx_updates'])) self.assertFalse(results['rx_updates']) sync_paths = [] for subreq in results.get('tx_updates'): if subreq.get('method') == 'PUT': self.assertTrue( subreq['path'] in ('/a/c/o1', '/a/c/o2', '/a/c/o3')) expected_body = self._get_object_data(subreq['path']) self.assertEqual(expected_body, subreq['body']) elif subreq.get('method') == 'DELETE': self.assertTrue(subreq['path'] in ('/a/c/o5', '/a/c/o7')) sync_paths.append(subreq.get('path')) self.assertEqual( ['/a/c/o1', '/a/c/o2', '/a/c/o3', '/a/c/o5', '/a/c/o7'], sorted(sync_paths)) # verify on disk files... self._verify_ondisk_files(tx_objs, policy) self._verify_tombstones(tx_tombstones, policy) def test_nothing_to_sync(self): job = {'device': self.device, 'partition': self.partition, 'policy': POLICIES.default} node = {'replication_ip': self.rx_ip, 'replication_port': self.rx_port, 'device': self.device, 'index': 0} sender = ssync_sender.Sender(self.daemon, node, job, ['abc']) # wrap connection from tx to rx to capture ssync messages... 
sender.connect, trace = self.make_connect_wrapper(sender) result, in_sync_objs = sender() self.assertTrue(result) self.assertFalse(in_sync_objs) results = self._analyze_trace(trace) self.assertFalse(results['tx_missing']) self.assertFalse(results['rx_missing']) self.assertFalse(results['tx_updates']) self.assertFalse(results['rx_updates']) # Minimal receiver response as read by sender: # 2 <-- initial \r\n to start ssync exchange # + 23 <-- :MISSING CHECK START\r\n # + 2 <-- \r\n (minimal missing check response) # + 21 <-- :MISSING CHECK END\r\n # + 17 <-- :UPDATES START\r\n # + 15 <-- :UPDATES END\r\n # TOTAL = 80 self.assertEqual(80, trace.get('readline_bytes')) def test_meta_file_sync(self): policy = POLICIES.default rx_node_index = 0 # create diskfiles... tx_objs = {} rx_objs = {} tx_tombstones = {} rx_tombstones = {} tx_df_mgr = self.daemon._diskfile_router[policy] rx_df_mgr = self.rx_controller._diskfile_router[policy] expected_subreqs = defaultdict(list) # o1 on tx only with meta file t1 = next(self.ts_iter) tx_objs['o1'] = self._create_ondisk_files(tx_df_mgr, 'o1', policy, t1) t1_meta = next(self.ts_iter) metadata = {'X-Timestamp': t1_meta.internal, 'X-Object-Meta-Test': 'o1', 'X-Object-Sysmeta-Test': 'sys_o1'} tx_objs['o1'][0].write_metadata(metadata) expected_subreqs['PUT'].append('o1') expected_subreqs['POST'].append('o1') # o2 on tx with meta, on rx without meta t2 = next(self.ts_iter) tx_objs['o2'] = self._create_ondisk_files(tx_df_mgr, 'o2', policy, t2) t2_meta = next(self.ts_iter) metadata = {'X-Timestamp': t2_meta.internal, 'X-Object-Meta-Test': 'o2', 'X-Object-Sysmeta-Test': 'sys_o2'} tx_objs['o2'][0].write_metadata(metadata) rx_objs['o2'] = self._create_ondisk_files(rx_df_mgr, 'o2', policy, t2) expected_subreqs['POST'].append('o2') # o3 is on tx with meta, rx has newer data but no meta t3a = next(self.ts_iter) tx_objs['o3'] = self._create_ondisk_files(tx_df_mgr, 'o3', policy, t3a) t3b = next(self.ts_iter) rx_objs['o3'] = self._create_ondisk_files(rx_df_mgr, 'o3', policy, t3b) t3_meta = next(self.ts_iter) metadata = {'X-Timestamp': t3_meta.internal, 'X-Object-Meta-Test': 'o3', 'X-Object-Sysmeta-Test': 'sys_o3'} tx_objs['o3'][0].write_metadata(metadata) expected_subreqs['POST'].append('o3') # o4 is on tx with meta, rx has older data and up to date meta t4a = next(self.ts_iter) rx_objs['o4'] = self._create_ondisk_files(rx_df_mgr, 'o4', policy, t4a) t4b = next(self.ts_iter) tx_objs['o4'] = self._create_ondisk_files(tx_df_mgr, 'o4', policy, t4b) t4_meta = next(self.ts_iter) metadata = {'X-Timestamp': t4_meta.internal, 'X-Object-Meta-Test': 'o4', 'X-Object-Sysmeta-Test': 'sys_o4'} tx_objs['o4'][0].write_metadata(metadata) rx_objs['o4'][0].write_metadata(metadata) expected_subreqs['PUT'].append('o4') # o5 is on tx with meta, rx is in sync with data and meta t5 = next(self.ts_iter) rx_objs['o5'] = self._create_ondisk_files(rx_df_mgr, 'o5', policy, t5) tx_objs['o5'] = self._create_ondisk_files(tx_df_mgr, 'o5', policy, t5) t5_meta = next(self.ts_iter) metadata = {'X-Timestamp': t5_meta.internal, 'X-Object-Meta-Test': 'o5', 'X-Object-Sysmeta-Test': 'sys_o5'} tx_objs['o5'][0].write_metadata(metadata) rx_objs['o5'][0].write_metadata(metadata) # o6 is tombstone on tx, rx has older data and meta t6 = next(self.ts_iter) tx_tombstones['o6'] = self._create_ondisk_files( tx_df_mgr, 'o6', policy, t6) rx_tombstones['o6'] = self._create_ondisk_files( rx_df_mgr, 'o6', policy, t6) metadata = {'X-Timestamp': next(self.ts_iter).internal, 'X-Object-Meta-Test': 'o6', 'X-Object-Sysmeta-Test': 
'sys_o6'} rx_tombstones['o6'][0].write_metadata(metadata) tx_tombstones['o6'][0].delete(next(self.ts_iter)) expected_subreqs['DELETE'].append('o6') # o7 is tombstone on rx, tx has older data and meta, # no subreqs expected... t7 = next(self.ts_iter) tx_objs['o7'] = self._create_ondisk_files(tx_df_mgr, 'o7', policy, t7) rx_tombstones['o7'] = self._create_ondisk_files( rx_df_mgr, 'o7', policy, t7) metadata = {'X-Timestamp': next(self.ts_iter).internal, 'X-Object-Meta-Test': 'o7', 'X-Object-Sysmeta-Test': 'sys_o7'} tx_objs['o7'][0].write_metadata(metadata) rx_tombstones['o7'][0].delete(next(self.ts_iter)) suffixes = set() for diskfiles in (tx_objs.values() + tx_tombstones.values()): for df in diskfiles: suffixes.add(os.path.basename(os.path.dirname(df._datadir))) # create ssync sender instance... job = {'device': self.device, 'partition': self.partition, 'policy': policy} node = dict(self.rx_node) node.update({'index': rx_node_index}) sender = ssync_sender.Sender(self.daemon, node, job, suffixes) # wrap connection from tx to rx to capture ssync messages... sender.connect, trace = self.make_connect_wrapper(sender) # run the sync protocol... success, in_sync_objs = sender() self.assertEqual(7, len(in_sync_objs)) self.assertTrue(success) # verify protocol results = self._analyze_trace(trace) self.assertEqual(7, len(results['tx_missing'])) self.assertEqual(5, len(results['rx_missing'])) for subreq in results.get('tx_updates'): obj = subreq['path'].split('/')[3] method = subreq['method'] self.assertTrue(obj in expected_subreqs[method], 'Unexpected %s subreq for object %s, expected %s' % (method, obj, expected_subreqs[method])) expected_subreqs[method].remove(obj) if method == 'PUT': expected_body = self._get_object_data(subreq['path']) self.assertEqual(expected_body, subreq['body']) # verify all expected subreqs consumed for _method, expected in expected_subreqs.items(): self.assertFalse(expected) self.assertFalse(results['rx_updates']) # verify on disk files... del tx_objs['o7'] # o7 not expected to be sync'd self._verify_ondisk_files(tx_objs, policy) self._verify_tombstones(tx_tombstones, policy) for oname, rx_obj in rx_objs.items(): df = rx_obj[0].open() metadata = df.get_metadata() self.assertEqual(metadata['X-Object-Meta-Test'], oname) self.assertEqual(metadata['X-Object-Sysmeta-Test'], 'sys_' + oname) def test_meta_file_not_synced_to_legacy_receiver(self): # verify that the sender does sync a data file to a legacy receiver, # but does not PUT meta file content to a legacy receiver policy = POLICIES.default rx_node_index = 0 # create diskfiles... tx_df_mgr = self.daemon._diskfile_router[policy] rx_df_mgr = self.rx_controller._diskfile_router[policy] # rx has data at t1 but no meta # object is on tx with data at t2, meta at t3, t1 = next(self.ts_iter) self._create_ondisk_files(rx_df_mgr, 'o1', policy, t1) t2 = next(self.ts_iter) tx_obj = self._create_ondisk_files(tx_df_mgr, 'o1', policy, t2)[0] t3 = next(self.ts_iter) metadata = {'X-Timestamp': t3.internal, 'X-Object-Meta-Test': 'o3', 'X-Object-Sysmeta-Test': 'sys_o3'} tx_obj.write_metadata(metadata) suffixes = [os.path.basename(os.path.dirname(tx_obj._datadir))] # create ssync sender instance... job = {'device': self.device, 'partition': self.partition, 'policy': policy} node = dict(self.rx_node) node.update({'index': rx_node_index}) sender = ssync_sender.Sender(self.daemon, node, job, suffixes) # wrap connection from tx to rx to capture ssync messages... 
sender.connect, trace = self.make_connect_wrapper(sender) def _legacy_check_missing(self, line): # reproduces behavior of 'legacy' ssync receiver missing_checks() parts = line.split() object_hash = urllib.parse.unquote(parts[0]) timestamp = urllib.parse.unquote(parts[1]) want = False try: df = self.diskfile_mgr.get_diskfile_from_hash( self.device, self.partition, object_hash, self.policy, frag_index=self.frag_index) except DiskFileNotExist: want = True else: try: df.open() except DiskFileDeleted as err: want = err.timestamp < timestamp except DiskFileError: want = True else: want = df.timestamp < timestamp if want: return urllib.parse.quote(object_hash) return None # run the sync protocol... func = 'swift.obj.ssync_receiver.Receiver._check_missing' with mock.patch(func, _legacy_check_missing): success, in_sync_objs = sender() self.assertEqual(1, len(in_sync_objs)) self.assertTrue(success) # verify protocol, expecting only a PUT to legacy receiver results = self._analyze_trace(trace) self.assertEqual(1, len(results['tx_missing'])) self.assertEqual(1, len(results['rx_missing'])) self.assertEqual(1, len(results['tx_updates'])) self.assertEqual('PUT', results['tx_updates'][0]['method']) self.assertFalse(results['rx_updates']) # verify on disk files... rx_obj = self._open_rx_diskfile('o1', policy) tx_obj = self._open_tx_diskfile('o1', policy) # with legacy behavior rx_obj data and meta timestamps are equal self.assertEqual(t2, rx_obj.data_timestamp) self.assertEqual(t2, rx_obj.timestamp) # with legacy behavior rx_obj data timestamp should equal tx_obj self.assertEqual(rx_obj.data_timestamp, tx_obj.data_timestamp) # tx meta file should not have been sync'd to rx data file self.assertNotIn('X-Object-Meta-Test', rx_obj.get_metadata()) def test_content_type_sync(self): policy = POLICIES.default rx_node_index = 0 # create diskfiles... 
tx_objs = {} rx_objs = {} tx_df_mgr = self.daemon._diskfile_router[policy] rx_df_mgr = self.rx_controller._diskfile_router[policy] expected_subreqs = defaultdict(list) # o1 on tx only with two meta files name = 'o1' t1 = self.ts_iter.next() tx_objs[name] = self._create_ondisk_files(tx_df_mgr, name, policy, t1) t1_type = self.ts_iter.next() metadata_1 = {'X-Timestamp': t1_type.internal, 'Content-Type': 'text/test', 'Content-Type-Timestamp': t1_type.internal} tx_objs[name][0].write_metadata(metadata_1) t1_meta = self.ts_iter.next() metadata_2 = {'X-Timestamp': t1_meta.internal, 'X-Object-Meta-Test': name} tx_objs[name][0].write_metadata(metadata_2) expected_subreqs['PUT'].append(name) expected_subreqs['POST'].append(name) # o2 on tx with two meta files, rx has .data and newest .meta but is # missing latest content-type name = 'o2' t2 = self.ts_iter.next() tx_objs[name] = self._create_ondisk_files(tx_df_mgr, name, policy, t2) t2_type = self.ts_iter.next() metadata_1 = {'X-Timestamp': t2_type.internal, 'Content-Type': 'text/test', 'Content-Type-Timestamp': t2_type.internal} tx_objs[name][0].write_metadata(metadata_1) t2_meta = self.ts_iter.next() metadata_2 = {'X-Timestamp': t2_meta.internal, 'X-Object-Meta-Test': name} tx_objs[name][0].write_metadata(metadata_2) rx_objs[name] = self._create_ondisk_files(rx_df_mgr, name, policy, t2) rx_objs[name][0].write_metadata(metadata_2) expected_subreqs['POST'].append(name) # o3 on tx with two meta files, rx has .data and one .meta but does # have latest content-type so nothing to sync name = 'o3' t3 = self.ts_iter.next() tx_objs[name] = self._create_ondisk_files(tx_df_mgr, name, policy, t3) t3_type = self.ts_iter.next() metadata_1 = {'X-Timestamp': t3_type.internal, 'Content-Type': 'text/test', 'Content-Type-Timestamp': t3_type.internal} tx_objs[name][0].write_metadata(metadata_1) t3_meta = self.ts_iter.next() metadata_2 = {'X-Timestamp': t3_meta.internal, 'X-Object-Meta-Test': name} tx_objs[name][0].write_metadata(metadata_2) rx_objs[name] = self._create_ondisk_files(rx_df_mgr, name, policy, t3) metadata_2b = {'X-Timestamp': t3_meta.internal, 'X-Object-Meta-Test': name, 'Content-Type': 'text/test', 'Content-Type-Timestamp': t3_type.internal} rx_objs[name][0].write_metadata(metadata_2b) # o4 on tx with one meta file having latest content-type, rx has # .data and two .meta having latest content-type so nothing to sync # i.e. 
o4 is the reverse of o3 scenario name = 'o4' t4 = self.ts_iter.next() tx_objs[name] = self._create_ondisk_files(tx_df_mgr, name, policy, t4) t4_type = self.ts_iter.next() t4_meta = self.ts_iter.next() metadata_2b = {'X-Timestamp': t4_meta.internal, 'X-Object-Meta-Test': name, 'Content-Type': 'text/test', 'Content-Type-Timestamp': t4_type.internal} tx_objs[name][0].write_metadata(metadata_2b) rx_objs[name] = self._create_ondisk_files(rx_df_mgr, name, policy, t4) metadata_1 = {'X-Timestamp': t4_type.internal, 'Content-Type': 'text/test', 'Content-Type-Timestamp': t4_type.internal} rx_objs[name][0].write_metadata(metadata_1) metadata_2 = {'X-Timestamp': t4_meta.internal, 'X-Object-Meta-Test': name} rx_objs[name][0].write_metadata(metadata_2) # o5 on tx with one meta file having latest content-type, rx has # .data and no .meta name = 'o5' t5 = self.ts_iter.next() tx_objs[name] = self._create_ondisk_files(tx_df_mgr, name, policy, t5) t5_type = self.ts_iter.next() t5_meta = self.ts_iter.next() metadata = {'X-Timestamp': t5_meta.internal, 'X-Object-Meta-Test': name, 'Content-Type': 'text/test', 'Content-Type-Timestamp': t5_type.internal} tx_objs[name][0].write_metadata(metadata) rx_objs[name] = self._create_ondisk_files(rx_df_mgr, name, policy, t5) expected_subreqs['POST'].append(name) suffixes = set() for diskfiles in tx_objs.values(): for df in diskfiles: suffixes.add(os.path.basename(os.path.dirname(df._datadir))) # create ssync sender instance... job = {'device': self.device, 'partition': self.partition, 'policy': policy} node = dict(self.rx_node) node.update({'index': rx_node_index}) sender = ssync_sender.Sender(self.daemon, node, job, suffixes) # wrap connection from tx to rx to capture ssync messages... sender.connect, trace = self.make_connect_wrapper(sender) # run the sync protocol... success, in_sync_objs = sender() self.assertEqual(5, len(in_sync_objs), trace['messages']) self.assertTrue(success) # verify protocol results = self._analyze_trace(trace) self.assertEqual(5, len(results['tx_missing'])) self.assertEqual(3, len(results['rx_missing'])) for subreq in results.get('tx_updates'): obj = subreq['path'].split('/')[3] method = subreq['method'] self.assertTrue(obj in expected_subreqs[method], 'Unexpected %s subreq for object %s, expected %s' % (method, obj, expected_subreqs[method])) expected_subreqs[method].remove(obj) if method == 'PUT': expected_body = self._get_object_data(subreq['path']) self.assertEqual(expected_body, subreq['body']) # verify all expected subreqs consumed for _method, expected in expected_subreqs.items(): self.assertFalse(expected, 'Expected subreqs not seen for %s for objects %s' % (_method, expected)) self.assertFalse(results['rx_updates']) # verify on disk files... self._verify_ondisk_files(tx_objs, policy) for oname, rx_obj in rx_objs.items(): df = rx_obj[0].open() metadata = df.get_metadata() self.assertEqual(metadata['X-Object-Meta-Test'], oname) self.assertEqual(metadata['Content-Type'], 'text/test') # verify that tx and rx both generate the same suffix hashes... 
tx_hashes = tx_df_mgr.get_hashes( self.device, self.partition, suffixes, policy) rx_hashes = rx_df_mgr.get_hashes( self.device, self.partition, suffixes, policy) self.assertEqual(tx_hashes, rx_hashes) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/obj/__init__.py0000664000567000056710000000000013024044352020622 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/obj/test_ssync_receiver.py0000664000567000056710000027464313024044354023201 0ustar jenkinsjenkins00000000000000# Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import shutil import tempfile import unittest import eventlet import mock import six from swift.common import bufferedhttp from swift.common import exceptions from swift.common import swob from swift.common.storage_policy import POLICIES from swift.common import utils from swift.common.swob import HTTPException from swift.obj import diskfile from swift.obj import server from swift.obj import ssync_receiver, ssync_sender from swift.obj.reconstructor import ObjectReconstructor from test import unit from test.unit import debug_logger, patch_policies, make_timestamp_iter @unit.patch_policies() class TestReceiver(unittest.TestCase): def setUp(self): utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = 'startcap' # Not sure why the test.unit stuff isn't taking effect here; so I'm # reinforcing it. 
diskfile.getxattr = unit._getxattr diskfile.setxattr = unit._setxattr self.testdir = os.path.join( tempfile.mkdtemp(), 'tmp_test_ssync_receiver') utils.mkdirs(os.path.join(self.testdir, 'sda1', 'tmp')) self.conf = { 'devices': self.testdir, 'mount_check': 'false', 'replication_one_per_device': 'false', 'log_requests': 'false'} utils.mkdirs(os.path.join(self.testdir, 'device', 'partition')) self.controller = server.ObjectController(self.conf) self.controller.bytes_per_sync = 1 self.account1 = 'a' self.container1 = 'c' self.object1 = 'o1' self.name1 = '/' + '/'.join(( self.account1, self.container1, self.object1)) self.hash1 = utils.hash_path( self.account1, self.container1, self.object1) self.ts1 = '1372800001.00000' self.metadata1 = { 'name': self.name1, 'X-Timestamp': self.ts1, 'Content-Length': '0'} self.account2 = 'a' self.container2 = 'c' self.object2 = 'o2' self.name2 = '/' + '/'.join(( self.account2, self.container2, self.object2)) self.hash2 = utils.hash_path( self.account2, self.container2, self.object2) self.ts2 = '1372800002.00000' self.metadata2 = { 'name': self.name2, 'X-Timestamp': self.ts2, 'Content-Length': '0'} def tearDown(self): shutil.rmtree(os.path.dirname(self.testdir)) def body_lines(self, body): lines = [] for line in body.split('\n'): line = line.strip() if line: lines.append(line) return lines def test_SSYNC_semaphore_locked(self): with mock.patch.object( self.controller, 'replication_semaphore') as \ mocked_replication_semaphore: self.controller.logger = mock.MagicMock() mocked_replication_semaphore.acquire.return_value = False req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [":ERROR: 503 '
<html><h1>Service Unavailable</h1><p>The " "server is currently unavailable. Please try again at a " "later time.</p></html>
'"]) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) def test_SSYNC_calls_replication_lock(self): with mock.patch.object( self.controller._diskfile_router[POLICIES.legacy], 'replication_lock') as mocked_replication_lock: req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) mocked_replication_lock.assert_called_once_with('sda1') def test_Receiver_with_default_storage_policy(self): req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') rcvr = ssync_receiver.Receiver(self.controller, req) body_lines = [chunk.strip() for chunk in rcvr() if chunk.strip()] self.assertEqual( body_lines, [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(rcvr.policy, POLICIES[0]) def test_Receiver_with_storage_policy_index_header(self): # update router post policy patch self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '1'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') rcvr = ssync_receiver.Receiver(self.controller, req) body_lines = [chunk.strip() for chunk in rcvr() if chunk.strip()] self.assertEqual( body_lines, [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(rcvr.policy, POLICIES[1]) self.assertEqual(rcvr.frag_index, None) def test_Receiver_with_bad_storage_policy_index_header(self): valid_indices = sorted([int(policy) for policy in POLICIES]) bad_index = valid_indices[-1] + 1 req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_SSYNC_FRAG_INDEX': '0', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': bad_index}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') self.controller.logger = mock.MagicMock() try: ssync_receiver.Receiver(self.controller, req) self.fail('Expected HTTPException to be raised.') except HTTPException as err: self.assertEqual('503 Service Unavailable', err.status) self.assertEqual('No policy with index 2', err.body) @unit.patch_policies() def test_Receiver_with_only_frag_index_header(self): # update router post policy patch self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_SSYNC_FRAG_INDEX': '7', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '1'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') rcvr = ssync_receiver.Receiver(self.controller, req) body_lines = [chunk.strip() for chunk in rcvr() if chunk.strip()] self.assertEqual( body_lines, [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(rcvr.policy, POLICIES[1]) self.assertEqual(rcvr.frag_index, 7) self.assertEqual(rcvr.node_index, None) @unit.patch_policies() def 
test_Receiver_with_only_node_index_header(self): # update router post policy patch self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_SSYNC_NODE_INDEX': '7', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '1'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') with self.assertRaises(HTTPException) as e: ssync_receiver.Receiver(self.controller, req) self.assertEqual(e.exception.status_int, 400) # if a node index is included - it *must* be # the same value of frag index self.assertEqual(e.exception.body, 'Frag-Index (None) != Node-Index (7)') @unit.patch_policies() def test_Receiver_with_matched_indexes(self): # update router post policy patch self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_SSYNC_NODE_INDEX': '7', 'HTTP_X_BACKEND_SSYNC_FRAG_INDEX': '7', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '1'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') rcvr = ssync_receiver.Receiver(self.controller, req) body_lines = [chunk.strip() for chunk in rcvr() if chunk.strip()] self.assertEqual( body_lines, [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(rcvr.policy, POLICIES[1]) self.assertEqual(rcvr.frag_index, 7) self.assertEqual(rcvr.node_index, 7) @unit.patch_policies() def test_Receiver_with_invalid_indexes(self): # update router post policy patch self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_SSYNC_NODE_INDEX': 'None', 'HTTP_X_BACKEND_SSYNC_FRAG_INDEX': 'None', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '1'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) @unit.patch_policies() def test_Receiver_with_mismatched_indexes(self): # update router post policy patch self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_SSYNC_NODE_INDEX': '6', 'HTTP_X_BACKEND_SSYNC_FRAG_INDEX': '7', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '1'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') self.assertRaises(HTTPException, ssync_receiver.Receiver, self.controller, req) def test_SSYNC_replication_lock_fail(self): def _mock(path): with exceptions.ReplicationLockTimeout(0.01, '/somewhere/' + path): eventlet.sleep(0.05) with mock.patch.object( self.controller._diskfile_router[POLICIES.legacy], 'replication_lock', _mock): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [":ERROR: 0 '0.01 seconds: /somewhere/sda1'"]) self.controller.logger.debug.assert_called_once_with( 'None/sda1/1 SSYNC LOCK TIMEOUT: 0.01 seconds: ' '/somewhere/sda1') def test_SSYNC_initial_path(self): with mock.patch.object( self.controller, 'replication_semaphore') as 
\ mocked_replication_semaphore: req = swob.Request.blank( '/device', environ={'REQUEST_METHOD': 'SSYNC'}) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), ["Invalid path: /device"]) self.assertEqual(resp.status_int, 400) self.assertFalse(mocked_replication_semaphore.acquire.called) self.assertFalse(mocked_replication_semaphore.release.called) with mock.patch.object( self.controller, 'replication_semaphore') as \ mocked_replication_semaphore: req = swob.Request.blank( '/device/', environ={'REQUEST_METHOD': 'SSYNC'}) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), ["Invalid path: /device/"]) self.assertEqual(resp.status_int, 400) self.assertFalse(mocked_replication_semaphore.acquire.called) self.assertFalse(mocked_replication_semaphore.release.called) with mock.patch.object( self.controller, 'replication_semaphore') as \ mocked_replication_semaphore: req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':ERROR: 0 "Looking for :MISSING_CHECK: START got \'\'"']) self.assertEqual(resp.status_int, 200) mocked_replication_semaphore.acquire.assert_called_once_with(0) mocked_replication_semaphore.release.assert_called_once_with() with mock.patch.object( self.controller, 'replication_semaphore') as \ mocked_replication_semaphore: req = swob.Request.blank( '/device/partition/junk', environ={'REQUEST_METHOD': 'SSYNC'}) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), ["Invalid path: /device/partition/junk"]) self.assertEqual(resp.status_int, 400) self.assertFalse(mocked_replication_semaphore.acquire.called) self.assertFalse(mocked_replication_semaphore.release.called) def test_SSYNC_mount_check(self): with mock.patch.object(self.controller, 'replication_semaphore'), \ mock.patch.object( self.controller._diskfile_router[POLICIES.legacy], 'mount_check', False), \ mock.patch('swift.obj.diskfile.check_mount', return_value=False) as mocked_check_mount: req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':ERROR: 0 "Looking for :MISSING_CHECK: START got \'\'"']) self.assertEqual(resp.status_int, 200) self.assertFalse(mocked_check_mount.called) with mock.patch.object(self.controller, 'replication_semaphore'), \ mock.patch.object( self.controller._diskfile_router[POLICIES.legacy], 'mount_check', True), \ mock.patch('swift.obj.diskfile.check_mount', return_value=False) as mocked_check_mount: req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), ["

<html><h1>Insufficient Storage</h1><p>There " "was not enough space to save the resource. Drive: " "device</p></html>
"]) self.assertEqual(resp.status_int, 507) mocked_check_mount.assert_called_once_with( self.controller._diskfile_router[POLICIES.legacy].devices, 'device') mocked_check_mount.reset_mock() mocked_check_mount.return_value = True req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':ERROR: 0 "Looking for :MISSING_CHECK: START got \'\'"']) self.assertEqual(resp.status_int, 200) mocked_check_mount.assert_called_once_with( self.controller._diskfile_router[POLICIES.legacy].devices, 'device') def test_SSYNC_Exception(self): class _Wrapper(six.StringIO): def __init__(self, value): six.StringIO.__init__(self, value) self.mock_socket = mock.MagicMock() def get_socket(self): return self.mock_socket with mock.patch.object( ssync_receiver.eventlet.greenio, 'shutdown_safe') as \ mock_shutdown_safe: self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\nBad content is here') req.remote_addr = '1.2.3.4' mock_wsgi_input = _Wrapper(req.body) req.environ['wsgi.input'] = mock_wsgi_input resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'Got no headers for Bad content is here'"]) self.assertEqual(resp.status_int, 200) mock_shutdown_safe.assert_called_once_with( mock_wsgi_input.mock_socket) mock_wsgi_input.mock_socket.close.assert_called_once_with() self.controller.logger.exception.assert_called_once_with( '1.2.3.4/device/partition EXCEPTION in ssync.Receiver') def test_SSYNC_Exception_Exception(self): class _Wrapper(six.StringIO): def __init__(self, value): six.StringIO.__init__(self, value) self.mock_socket = mock.MagicMock() def get_socket(self): return self.mock_socket with mock.patch.object( ssync_receiver.eventlet.greenio, 'shutdown_safe') as \ mock_shutdown_safe: self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\nBad content is here') req.remote_addr = mock.MagicMock() req.remote_addr.__str__ = mock.Mock( side_effect=Exception("can't stringify this")) mock_wsgi_input = _Wrapper(req.body) req.environ['wsgi.input'] = mock_wsgi_input resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END']) self.assertEqual(resp.status_int, 200) mock_shutdown_safe.assert_called_once_with( mock_wsgi_input.mock_socket) mock_wsgi_input.mock_socket.close.assert_called_once_with() self.controller.logger.exception.assert_called_once_with( 'EXCEPTION in ssync.Receiver') def test_MISSING_CHECK_timeout(self): class _Wrapper(six.StringIO): def __init__(self, value): six.StringIO.__init__(self, value) self.mock_socket = mock.MagicMock() def readline(self, sizehint=-1): line = six.StringIO.readline(self) if line.startswith('hash'): eventlet.sleep(0.1) return line def get_socket(self): return self.mock_socket self.controller.client_timeout = 0.01 with mock.patch.object( ssync_receiver.eventlet.greenio, 'shutdown_safe') as \ mock_shutdown_safe: self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' 'hash ts\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: 
START\r\n:UPDATES: END\r\n') req.remote_addr = '2.3.4.5' mock_wsgi_input = _Wrapper(req.body) req.environ['wsgi.input'] = mock_wsgi_input resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [":ERROR: 408 '0.01 seconds: missing_check line'"]) self.assertEqual(resp.status_int, 200) self.assertTrue(mock_shutdown_safe.called) self.controller.logger.error.assert_called_once_with( '2.3.4.5/sda1/1 TIMEOUT in ssync.Receiver: ' '0.01 seconds: missing_check line') def test_MISSING_CHECK_other_exception(self): class _Wrapper(six.StringIO): def __init__(self, value): six.StringIO.__init__(self, value) self.mock_socket = mock.MagicMock() def readline(self, sizehint=-1): line = six.StringIO.readline(self) if line.startswith('hash'): raise Exception('test exception') return line def get_socket(self): return self.mock_socket self.controller.client_timeout = 0.01 with mock.patch.object( ssync_receiver.eventlet.greenio, 'shutdown_safe') as \ mock_shutdown_safe: self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' 'hash ts\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') req.remote_addr = '3.4.5.6' mock_wsgi_input = _Wrapper(req.body) req.environ['wsgi.input'] = mock_wsgi_input resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [":ERROR: 0 'test exception'"]) self.assertEqual(resp.status_int, 200) self.assertTrue(mock_shutdown_safe.called) self.controller.logger.exception.assert_called_once_with( '3.4.5.6/sda1/1 EXCEPTION in ssync.Receiver') def test_MISSING_CHECK_empty_list(self): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) def test_MISSING_CHECK_have_none(self): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + self.ts1 + '\r\n' + self.hash2 + ' ' + self.ts2 + '\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', self.hash1 + ' dm', self.hash2 + ' dm', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) def test_MISSING_CHECK_extra_line_parts(self): # check that rx tolerates extra parts in missing check lines to # allow for protocol upgrades extra_1 = 'extra' extra_2 = 'multiple extra parts' self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + self.ts1 + ' ' + extra_1 + '\r\n' + self.hash2 + ' ' + self.ts2 + ' ' + extra_2 + '\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', self.hash1 
+ ' dm', self.hash2 + ' dm', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) def test_MISSING_CHECK_have_one_exact(self): object_dir = utils.storage_directory( os.path.join(self.testdir, 'sda1', diskfile.get_data_dir(POLICIES[0])), '1', self.hash1) utils.mkdirs(object_dir) fp = open(os.path.join(object_dir, self.ts1 + '.data'), 'w+') fp.write('1') fp.flush() self.metadata1['Content-Length'] = '1' diskfile.write_metadata(fp, self.metadata1) self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + self.ts1 + '\r\n' + self.hash2 + ' ' + self.ts2 + '\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', self.hash2 + ' dm', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) @patch_policies(with_ec_default=True) def test_MISSING_CHECK_missing_durable(self): self.controller.logger = mock.MagicMock() self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) # make rx disk file but don't commit it, so .durable is missing ts1 = next(make_timestamp_iter()).internal object_dir = utils.storage_directory( os.path.join(self.testdir, 'sda1', diskfile.get_data_dir(POLICIES[0])), '1', self.hash1) utils.mkdirs(object_dir) fp = open(os.path.join(object_dir, ts1 + '#2.data'), 'w+') fp.write('1') fp.flush() metadata1 = { 'name': self.name1, 'X-Timestamp': ts1, 'Content-Length': '1'} diskfile.write_metadata(fp, metadata1) # make a request - expect no data to be wanted req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '0', 'HTTP_X_BACKEND_SSYNC_FRAG_INDEX': '2'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + ts1 + '\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) @patch_policies(with_ec_default=True) @mock.patch('swift.obj.diskfile.ECDiskFileWriter.commit') def test_MISSING_CHECK_missing_durable_but_commit_fails(self, mock_commit): self.controller.logger = mock.MagicMock() self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) # make rx disk file but don't commit it, so .durable is missing ts1 = next(make_timestamp_iter()).internal object_dir = utils.storage_directory( os.path.join(self.testdir, 'sda1', diskfile.get_data_dir(POLICIES[0])), '1', self.hash1) utils.mkdirs(object_dir) fp = open(os.path.join(object_dir, ts1 + '#2.data'), 'w+') fp.write('1') fp.flush() metadata1 = { 'name': self.name1, 'X-Timestamp': ts1, 'Content-Length': '1'} diskfile.write_metadata(fp, metadata1) # make a request with commit disabled - expect data to be wanted req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '0', 
'HTTP_X_BACKEND_SSYNC_FRAG_INDEX': '2'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + ts1 + '\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', self.hash1 + ' dm', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) # make a request with commit raising error - expect data to be wanted mock_commit.side_effect = Exception req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '0', 'HTTP_X_BACKEND_SSYNC_FRAG_INDEX': '2'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + ts1 + '\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', self.hash1 + ' dm', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertTrue(self.controller.logger.exception.called) self.assertIn( 'EXCEPTION in ssync.Receiver while attempting commit of', self.controller.logger.exception.call_args[0][0]) def test_MISSING_CHECK_storage_policy(self): # update router post policy patch self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) object_dir = utils.storage_directory( os.path.join(self.testdir, 'sda1', diskfile.get_data_dir(POLICIES[1])), '1', self.hash1) utils.mkdirs(object_dir) fp = open(os.path.join(object_dir, self.ts1 + '.data'), 'w+') fp.write('1') fp.flush() self.metadata1['Content-Length'] = '1' diskfile.write_metadata(fp, self.metadata1) self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '1'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + self.ts1 + '\r\n' + self.hash2 + ' ' + self.ts2 + '\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', self.hash2 + ' dm', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) def test_MISSING_CHECK_have_one_newer(self): object_dir = utils.storage_directory( os.path.join(self.testdir, 'sda1', diskfile.get_data_dir(POLICIES[0])), '1', self.hash1) utils.mkdirs(object_dir) newer_ts1 = utils.normalize_timestamp(float(self.ts1) + 1) self.metadata1['X-Timestamp'] = newer_ts1 fp = open(os.path.join(object_dir, newer_ts1 + '.data'), 'w+') fp.write('1') fp.flush() self.metadata1['Content-Length'] = '1' diskfile.write_metadata(fp, self.metadata1) self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + self.ts1 + '\r\n' + self.hash2 + ' ' + self.ts2 + '\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', self.hash2 + ' dm', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) 
self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) def test_MISSING_CHECK_have_newer_meta(self): object_dir = utils.storage_directory( os.path.join(self.testdir, 'sda1', diskfile.get_data_dir(POLICIES[0])), '1', self.hash1) utils.mkdirs(object_dir) older_ts1 = utils.normalize_timestamp(float(self.ts1) - 1) self.metadata1['X-Timestamp'] = older_ts1 fp = open(os.path.join(object_dir, older_ts1 + '.data'), 'w+') fp.write('1') fp.flush() self.metadata1['Content-Length'] = '1' diskfile.write_metadata(fp, self.metadata1) # write newer .meta file metadata = {'name': self.name1, 'X-Timestamp': self.ts2, 'X-Object-Meta-Test': 'test'} fp = open(os.path.join(object_dir, self.ts2 + '.meta'), 'w+') diskfile.write_metadata(fp, metadata) # receiver has .data at older_ts, .meta at ts2 # sender has .data at ts1 self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + self.ts1 + '\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', self.hash1 + ' d', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) def test_MISSING_CHECK_have_older_meta(self): object_dir = utils.storage_directory( os.path.join(self.testdir, 'sda1', diskfile.get_data_dir(POLICIES[0])), '1', self.hash1) utils.mkdirs(object_dir) older_ts1 = utils.normalize_timestamp(float(self.ts1) - 1) self.metadata1['X-Timestamp'] = older_ts1 fp = open(os.path.join(object_dir, older_ts1 + '.data'), 'w+') fp.write('1') fp.flush() self.metadata1['Content-Length'] = '1' diskfile.write_metadata(fp, self.metadata1) # write .meta file at ts1 metadata = {'name': self.name1, 'X-Timestamp': self.ts1, 'X-Object-Meta-Test': 'test'} fp = open(os.path.join(object_dir, self.ts1 + '.meta'), 'w+') diskfile.write_metadata(fp, metadata) # receiver has .data at older_ts, .meta at ts1 # sender has .data at older_ts, .meta at ts2 self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/sda1/1', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n' + self.hash1 + ' ' + older_ts1 + ' m:30d40\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n:UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', self.hash1 + ' m', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.error.called) self.assertFalse(self.controller.logger.exception.called) def test_UPDATES_timeout(self): class _Wrapper(six.StringIO): def __init__(self, value): six.StringIO.__init__(self, value) self.mock_socket = mock.MagicMock() def readline(self, sizehint=-1): line = six.StringIO.readline(self) if line.startswith('DELETE'): eventlet.sleep(0.1) return line def get_socket(self): return self.mock_socket self.controller.client_timeout = 0.01 with mock.patch.object( ssync_receiver.eventlet.greenio, 'shutdown_safe') as \ mock_shutdown_safe: self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n' 
'X-Timestamp: 1364456113.76334\r\n' '\r\n' ':UPDATES: END\r\n') req.remote_addr = '2.3.4.5' mock_wsgi_input = _Wrapper(req.body) req.environ['wsgi.input'] = mock_wsgi_input resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 408 '0.01 seconds: updates line'"]) self.assertEqual(resp.status_int, 200) mock_shutdown_safe.assert_called_once_with( mock_wsgi_input.mock_socket) mock_wsgi_input.mock_socket.close.assert_called_once_with() self.controller.logger.error.assert_called_once_with( '2.3.4.5/device/partition TIMEOUT in ssync.Receiver: ' '0.01 seconds: updates line') def test_UPDATES_other_exception(self): class _Wrapper(six.StringIO): def __init__(self, value): six.StringIO.__init__(self, value) self.mock_socket = mock.MagicMock() def readline(self, sizehint=-1): line = six.StringIO.readline(self) if line.startswith('DELETE'): raise Exception('test exception') return line def get_socket(self): return self.mock_socket self.controller.client_timeout = 0.01 with mock.patch.object( ssync_receiver.eventlet.greenio, 'shutdown_safe') as \ mock_shutdown_safe: self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n' 'X-Timestamp: 1364456113.76334\r\n' '\r\n' ':UPDATES: END\r\n') req.remote_addr = '3.4.5.6' mock_wsgi_input = _Wrapper(req.body) req.environ['wsgi.input'] = mock_wsgi_input resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'test exception'"]) self.assertEqual(resp.status_int, 200) mock_shutdown_safe.assert_called_once_with( mock_wsgi_input.mock_socket) mock_wsgi_input.mock_socket.close.assert_called_once_with() self.controller.logger.exception.assert_called_once_with( '3.4.5.6/device/partition EXCEPTION in ssync.Receiver') def test_UPDATES_no_problems_no_hard_disconnect(self): class _Wrapper(six.StringIO): def __init__(self, value): six.StringIO.__init__(self, value) self.mock_socket = mock.MagicMock() def get_socket(self): return self.mock_socket self.controller.client_timeout = 0.01 with mock.patch.object(ssync_receiver.eventlet.greenio, 'shutdown_safe') as mock_shutdown_safe, \ mock.patch.object( self.controller, 'DELETE', return_value=swob.HTTPNoContent()): req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n' 'X-Timestamp: 1364456113.76334\r\n' '\r\n' ':UPDATES: END\r\n') mock_wsgi_input = _Wrapper(req.body) req.environ['wsgi.input'] = mock_wsgi_input resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(mock_shutdown_safe.called) self.assertFalse(mock_wsgi_input.mock_socket.close.called) def test_UPDATES_bad_subrequest_line(self): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'bad_subrequest_line\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'need more than 1 value to 
unpack'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') with mock.patch.object( self.controller, 'DELETE', return_value=swob.HTTPNoContent()): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n' 'X-Timestamp: 1364456113.76334\r\n' '\r\n' 'bad_subrequest_line2') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'need more than 1 value to unpack'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') def test_UPDATES_no_headers(self): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'Got no headers for DELETE /a/c/o'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') def test_UPDATES_bad_headers(self): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n' 'Bad-Header Test\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'need more than 1 value to unpack'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n' 'Good-Header: Test\r\n' 'Bad-Header Test\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'need more than 1 value to unpack'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') def test_UPDATES_bad_content_length(self): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'PUT /a/c/o\r\n' 'Content-Length: a\r\n\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':ERROR: 0 "invalid literal for int() with base 10: \'a\'"']) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') def test_UPDATES_content_length_with_DELETE(self): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: 
START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n' 'Content-Length: 1\r\n\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'DELETE subrequest with content-length /a/c/o'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') def test_UPDATES_no_content_length_with_PUT(self): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'PUT /a/c/o\r\n\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'No content-length sent for PUT /a/c/o'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') def test_UPDATES_early_termination(self): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'PUT /a/c/o\r\n' 'Content-Length: 1\r\n\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'Early termination for PUT /a/c/o'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') def test_UPDATES_failures(self): @server.public def _DELETE(request): if request.path == '/device/partition/a/c/works': return swob.HTTPNoContent() else: return swob.HTTPInternalServerError() # failures never hit threshold with mock.patch.object(self.controller, 'DELETE', _DELETE): self.controller.replication_failure_threshold = 4 self.controller.replication_failure_ratio = 1.5 self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 500 'ERROR: With :UPDATES: 3 failures to 0 " "successes'"]) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.exception.called) self.assertFalse(self.controller.logger.error.called) self.assertTrue(self.controller.logger.warning.called) self.assertEqual(3, self.controller.logger.warning.call_count) self.controller.logger.clear() # failures hit threshold and no successes, so ratio is like infinity with mock.patch.object(self.controller, 'DELETE', _DELETE): self.controller.replication_failure_threshold = 4 self.controller.replication_failure_ratio = 1.5 self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' ':UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', 
':MISSING_CHECK: END', ":ERROR: 0 'Too many 4 failures to 0 successes'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') self.assertFalse(self.controller.logger.error.called) self.assertTrue(self.controller.logger.warning.called) self.assertEqual(4, self.controller.logger.warning.call_count) self.controller.logger.clear() # failures hit threshold and ratio hits 1.33333333333 with mock.patch.object(self.controller, 'DELETE', _DELETE): self.controller.replication_failure_threshold = 4 self.controller.replication_failure_ratio = 1.5 self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/works\r\n\r\n' 'DELETE /a/c/works\r\n\r\n' 'DELETE /a/c/works\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' ':UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 500 'ERROR: With :UPDATES: 4 failures to 3 " "successes'"]) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.exception.called) self.assertFalse(self.controller.logger.error.called) self.assertTrue(self.controller.logger.warning.called) self.assertEqual(4, self.controller.logger.warning.call_count) self.controller.logger.clear() # failures hit threshold and ratio hits 2.0 with mock.patch.object(self.controller, 'DELETE', _DELETE): self.controller.replication_failure_threshold = 4 self.controller.replication_failure_ratio = 1.5 self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/works\r\n\r\n' 'DELETE /a/c/works\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' 'DELETE /a/c/o\r\n\r\n' ':UPDATES: END\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'Too many 4 failures to 2 successes'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') self.assertFalse(self.controller.logger.error.called) self.assertTrue(self.controller.logger.warning.called) self.assertEqual(4, self.controller.logger.warning.call_count) self.controller.logger.clear() def test_UPDATES_PUT(self): _PUT_request = [None] @server.public def _PUT(request): _PUT_request[0] = request request.read_body = request.environ['wsgi.input'].read() return swob.HTTPCreated() with mock.patch.object(self.controller, 'PUT', _PUT): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'PUT /a/c/o\r\n' 'Content-Length: 1\r\n' 'Etag: c4ca4238a0b923820dcc509a6f75849b\r\n' 'X-Timestamp: 1364456113.12344\r\n' 'X-Object-Meta-Test1: one\r\n' 'Content-Encoding: gzip\r\n' 'Specialty-Header: value\r\n' '\r\n' '1') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) 
self.assertFalse(self.controller.logger.exception.called) self.assertFalse(self.controller.logger.error.called) self.assertEqual(len(_PUT_request), 1) # sanity req = _PUT_request[0] self.assertEqual(req.path, '/device/partition/a/c/o') self.assertEqual(req.content_length, 1) self.assertEqual(req.headers, { 'Etag': 'c4ca4238a0b923820dcc509a6f75849b', 'Content-Length': '1', 'X-Timestamp': '1364456113.12344', 'X-Object-Meta-Test1': 'one', 'Content-Encoding': 'gzip', 'Specialty-Header': 'value', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'content-length x-timestamp x-object-meta-test1 ' 'content-encoding specialty-header')}) def test_UPDATES_PUT_replication_headers(self): self.controller.logger = mock.MagicMock() # sanity check - regular PUT will not persist Specialty-Header req = swob.Request.blank( '/sda1/0/a/c/o1', body='1', environ={'REQUEST_METHOD': 'PUT'}, headers={'Content-Length': '1', 'Content-Type': 'text/plain', 'Etag': 'c4ca4238a0b923820dcc509a6f75849b', 'X-Timestamp': '1364456113.12344', 'X-Object-Meta-Test1': 'one', 'Content-Encoding': 'gzip', 'Specialty-Header': 'value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) df = self.controller.get_diskfile( 'sda1', '0', 'a', 'c', 'o1', POLICIES.default) df.open() self.assertFalse('Specialty-Header' in df.get_metadata()) # an SSYNC request can override PUT header filtering... req = swob.Request.blank( '/sda1/0', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'PUT /a/c/o2\r\n' 'Content-Length: 1\r\n' 'Content-Type: text/plain\r\n' 'Etag: c4ca4238a0b923820dcc509a6f75849b\r\n' 'X-Timestamp: 1364456113.12344\r\n' 'X-Object-Meta-Test1: one\r\n' 'Content-Encoding: gzip\r\n' 'Specialty-Header: value\r\n' '\r\n' '1') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) # verify diskfile has metadata permitted by replication headers # including Specialty-Header df = self.controller.get_diskfile( 'sda1', '0', 'a', 'c', 'o2', POLICIES.default) df.open() for chunk in df.reader(): self.assertEqual('1', chunk) expected = {'ETag': 'c4ca4238a0b923820dcc509a6f75849b', 'Content-Length': '1', 'Content-Type': 'text/plain', 'X-Timestamp': '1364456113.12344', 'X-Object-Meta-Test1': 'one', 'Content-Encoding': 'gzip', 'Specialty-Header': 'value', 'name': '/a/c/o2'} actual = df.get_metadata() self.assertEqual(expected, actual) def test_UPDATES_POST(self): _POST_request = [None] @server.public def _POST(request): _POST_request[0] = request return swob.HTTPAccepted() with mock.patch.object(self.controller, 'POST', _POST): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'POST /a/c/o\r\n' 'X-Timestamp: 1364456113.12344\r\n' 'X-Object-Meta-Test1: one\r\n' 'Specialty-Header: value\r\n\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.exception.called) self.assertFalse(self.controller.logger.error.called) req = _POST_request[0] self.assertEqual(req.path, 
'/device/partition/a/c/o') self.assertEqual(req.content_length, None) self.assertEqual(req.headers, { 'X-Timestamp': '1364456113.12344', 'X-Object-Meta-Test1': 'one', 'Specialty-Header': 'value', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'x-timestamp x-object-meta-test1 specialty-header')}) def test_UPDATES_with_storage_policy(self): # update router post policy patch self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) _PUT_request = [None] @server.public def _PUT(request): _PUT_request[0] = request request.read_body = request.environ['wsgi.input'].read() return swob.HTTPCreated() with mock.patch.object(self.controller, 'PUT', _PUT): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '1'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'PUT /a/c/o\r\n' 'Content-Length: 1\r\n' 'X-Timestamp: 1364456113.12344\r\n' 'X-Object-Meta-Test1: one\r\n' 'Content-Encoding: gzip\r\n' 'Specialty-Header: value\r\n' '\r\n' '1') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.exception.called) self.assertFalse(self.controller.logger.error.called) self.assertEqual(len(_PUT_request), 1) # sanity req = _PUT_request[0] self.assertEqual(req.path, '/device/partition/a/c/o') self.assertEqual(req.content_length, 1) self.assertEqual(req.headers, { 'Content-Length': '1', 'X-Timestamp': '1364456113.12344', 'X-Object-Meta-Test1': 'one', 'Content-Encoding': 'gzip', 'Specialty-Header': 'value', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '1', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'content-length x-timestamp x-object-meta-test1 ' 'content-encoding specialty-header')}) self.assertEqual(req.read_body, '1') def test_UPDATES_PUT_with_storage_policy_and_node_index(self): # update router post policy patch self.controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.controller.logger) _PUT_request = [None] @server.public def _PUT(request): _PUT_request[0] = request request.read_body = request.environ['wsgi.input'].read() return swob.HTTPCreated() with mock.patch.object(self.controller, 'PUT', _PUT): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC', 'HTTP_X_BACKEND_SSYNC_NODE_INDEX': '7', 'HTTP_X_BACKEND_SSYNC_FRAG_INDEX': '7', 'HTTP_X_BACKEND_STORAGE_POLICY_INDEX': '0'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'PUT /a/c/o\r\n' 'Content-Length: 1\r\n' 'X-Timestamp: 1364456113.12344\r\n' 'X-Object-Meta-Test1: one\r\n' 'Content-Encoding: gzip\r\n' 'Specialty-Header: value\r\n' '\r\n' '1') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.exception.called) self.assertFalse(self.controller.logger.error.called) self.assertEqual(len(_PUT_request), 1) # sanity req = _PUT_request[0] self.assertEqual(req.path, '/device/partition/a/c/o') self.assertEqual(req.content_length, 1) 
self.assertEqual(req.headers, { 'Content-Length': '1', 'X-Timestamp': '1364456113.12344', 'X-Object-Meta-Test1': 'one', 'Content-Encoding': 'gzip', 'Specialty-Header': 'value', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Ssync-Frag-Index': '7', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'content-length x-timestamp x-object-meta-test1 ' 'content-encoding specialty-header')}) self.assertEqual(req.read_body, '1') def test_UPDATES_DELETE(self): _DELETE_request = [None] @server.public def _DELETE(request): _DELETE_request[0] = request return swob.HTTPNoContent() with mock.patch.object(self.controller, 'DELETE', _DELETE): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'DELETE /a/c/o\r\n' 'X-Timestamp: 1364456113.76334\r\n' '\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.exception.called) self.assertFalse(self.controller.logger.error.called) self.assertEqual(len(_DELETE_request), 1) # sanity req = _DELETE_request[0] self.assertEqual(req.path, '/device/partition/a/c/o') self.assertEqual(req.headers, { 'X-Timestamp': '1364456113.76334', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': 'x-timestamp'}) def test_UPDATES_BONK(self): _BONK_request = [None] @server.public def _BONK(request): _BONK_request[0] = request return swob.HTTPOk() self.controller.BONK = _BONK self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'BONK /a/c/o\r\n' 'X-Timestamp: 1364456113.76334\r\n' '\r\n') resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 0 'Invalid subrequest method BONK'"]) self.assertEqual(resp.status_int, 200) self.controller.logger.exception.assert_called_once_with( 'None/device/partition EXCEPTION in ssync.Receiver') self.assertEqual(len(_BONK_request), 1) # sanity self.assertEqual(_BONK_request[0], None) def test_UPDATES_multiple(self): _requests = [] @server.public def _PUT(request): _requests.append(request) request.read_body = request.environ['wsgi.input'].read() return swob.HTTPCreated() @server.public def _POST(request): _requests.append(request) return swob.HTTPOk() @server.public def _DELETE(request): _requests.append(request) return swob.HTTPNoContent() with mock.patch.object(self.controller, 'PUT', _PUT), \ mock.patch.object(self.controller, 'POST', _POST), \ mock.patch.object(self.controller, 'DELETE', _DELETE): self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'PUT /a/c/o1\r\n' 'Content-Length: 1\r\n' 'X-Timestamp: 1364456113.00001\r\n' 'X-Object-Meta-Test1: one\r\n' 'Content-Encoding: gzip\r\n' 'Specialty-Header: value\r\n' '\r\n' '1' 'DELETE /a/c/o2\r\n' 'X-Timestamp: 1364456113.00002\r\n' '\r\n' 'PUT /a/c/o3\r\n' 'Content-Length: 3\r\n' 'X-Timestamp: 1364456113.00003\r\n' '\r\n' '123' 'PUT /a/c/o4\r\n' 
'Content-Length: 4\r\n' 'X-Timestamp: 1364456113.00004\r\n' '\r\n' '1\r\n4' 'DELETE /a/c/o5\r\n' 'X-Timestamp: 1364456113.00005\r\n' '\r\n' 'DELETE /a/c/o6\r\n' 'X-Timestamp: 1364456113.00006\r\n' '\r\n' 'PUT /a/c/o7\r\n' 'Content-Length: 7\r\n' 'X-Timestamp: 1364456113.00007\r\n' '\r\n' '1234567' 'POST /a/c/o7\r\n' 'X-Object-Meta-Test-User: user_meta\r\n' 'X-Timestamp: 1364456113.00008\r\n' '\r\n' ) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ':UPDATES: START', ':UPDATES: END']) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.exception.called) self.assertFalse(self.controller.logger.error.called) self.assertEqual(len(_requests), 8) # sanity req = _requests.pop(0) self.assertEqual(req.method, 'PUT') self.assertEqual(req.path, '/device/partition/a/c/o1') self.assertEqual(req.content_length, 1) self.assertEqual(req.headers, { 'Content-Length': '1', 'X-Timestamp': '1364456113.00001', 'X-Object-Meta-Test1': 'one', 'Content-Encoding': 'gzip', 'Specialty-Header': 'value', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'content-length x-timestamp x-object-meta-test1 ' 'content-encoding specialty-header')}) self.assertEqual(req.read_body, '1') req = _requests.pop(0) self.assertEqual(req.method, 'DELETE') self.assertEqual(req.path, '/device/partition/a/c/o2') self.assertEqual(req.headers, { 'X-Timestamp': '1364456113.00002', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': 'x-timestamp'}) req = _requests.pop(0) self.assertEqual(req.method, 'PUT') self.assertEqual(req.path, '/device/partition/a/c/o3') self.assertEqual(req.content_length, 3) self.assertEqual(req.headers, { 'Content-Length': '3', 'X-Timestamp': '1364456113.00003', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'content-length x-timestamp')}) self.assertEqual(req.read_body, '123') req = _requests.pop(0) self.assertEqual(req.method, 'PUT') self.assertEqual(req.path, '/device/partition/a/c/o4') self.assertEqual(req.content_length, 4) self.assertEqual(req.headers, { 'Content-Length': '4', 'X-Timestamp': '1364456113.00004', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'content-length x-timestamp')}) self.assertEqual(req.read_body, '1\r\n4') req = _requests.pop(0) self.assertEqual(req.method, 'DELETE') self.assertEqual(req.path, '/device/partition/a/c/o5') self.assertEqual(req.headers, { 'X-Timestamp': '1364456113.00005', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': 'x-timestamp'}) req = _requests.pop(0) self.assertEqual(req.method, 'DELETE') self.assertEqual(req.path, '/device/partition/a/c/o6') self.assertEqual(req.headers, { 'X-Timestamp': '1364456113.00006', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': 'x-timestamp'}) req = _requests.pop(0) self.assertEqual(req.method, 'PUT') self.assertEqual(req.path, '/device/partition/a/c/o7') self.assertEqual(req.content_length, 7) self.assertEqual(req.headers, { 'Content-Length': '7', 'X-Timestamp': '1364456113.00007', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 
'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'content-length x-timestamp')}) self.assertEqual(req.read_body, '1234567') req = _requests.pop(0) self.assertEqual(req.method, 'POST') self.assertEqual(req.path, '/device/partition/a/c/o7') self.assertEqual(req.content_length, None) self.assertEqual(req.headers, { 'X-Timestamp': '1364456113.00008', 'X-Object-Meta-Test-User': 'user_meta', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'x-object-meta-test-user x-timestamp')}) self.assertEqual(_requests, []) def test_UPDATES_subreq_does_not_read_all(self): # This tests that if a SSYNC subrequest fails and doesn't read # all the subrequest body that it will read and throw away the rest of # the body before moving on to the next subrequest. # If you comment out the part in ssync_receiver where it does: # for junk in subreq.environ['wsgi.input']: # pass # You can then see this test fail. _requests = [] @server.public def _PUT(request): _requests.append(request) # Deliberately just reading up to first 2 bytes. request.read_body = request.environ['wsgi.input'].read(2) return swob.HTTPInternalServerError() class _IgnoreReadlineHint(six.StringIO): def __init__(self, value): six.StringIO.__init__(self, value) def readline(self, hint=-1): return six.StringIO.readline(self) self.controller.PUT = _PUT self.controller.network_chunk_size = 2 self.controller.logger = mock.MagicMock() req = swob.Request.blank( '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'}, body=':MISSING_CHECK: START\r\n:MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' 'PUT /a/c/o1\r\n' 'Content-Length: 3\r\n' 'X-Timestamp: 1364456113.00001\r\n' '\r\n' '123' 'PUT /a/c/o2\r\n' 'Content-Length: 1\r\n' 'X-Timestamp: 1364456113.00002\r\n' '\r\n' '1') req.environ['wsgi.input'] = _IgnoreReadlineHint(req.body) resp = req.get_response(self.controller) self.assertEqual( self.body_lines(resp.body), [':MISSING_CHECK: START', ':MISSING_CHECK: END', ":ERROR: 500 'ERROR: With :UPDATES: 2 failures to 0 successes'"]) self.assertEqual(resp.status_int, 200) self.assertFalse(self.controller.logger.exception.called) self.assertFalse(self.controller.logger.error.called) self.assertTrue(self.controller.logger.warning.called) self.assertEqual(2, self.controller.logger.warning.call_count) self.assertEqual(len(_requests), 2) # sanity req = _requests.pop(0) self.assertEqual(req.path, '/device/partition/a/c/o1') self.assertEqual(req.content_length, 3) self.assertEqual(req.headers, { 'Content-Length': '3', 'X-Timestamp': '1364456113.00001', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'content-length x-timestamp')}) self.assertEqual(req.read_body, '12') req = _requests.pop(0) self.assertEqual(req.path, '/device/partition/a/c/o2') self.assertEqual(req.content_length, 1) self.assertEqual(req.headers, { 'Content-Length': '1', 'X-Timestamp': '1364456113.00002', 'Host': 'localhost:80', 'X-Backend-Storage-Policy-Index': '0', 'X-Backend-Replication': 'True', 'X-Backend-Replication-Headers': ( 'content-length x-timestamp')}) self.assertEqual(req.read_body, '1') self.assertEqual(_requests, []) @patch_policies(with_ec_default=True) class TestSsyncRxServer(unittest.TestCase): # Tests to verify behavior of SSYNC requests sent to an object # server socket. 
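    # Illustrative sketch only (an editorial addition, not part of the original
    # test suite): request bodies used throughout this module frame a
    # MISSING_CHECK section of "<hash> <timestamp>" lines and an UPDATES
    # section of embedded subrequests, each bounded by START/END markers.
    # The attribute name EXAMPLE_SSYNC_BODY is hypothetical and is not
    # referenced by the receiver or by any test below.
    EXAMPLE_SSYNC_BODY = (
        ':MISSING_CHECK: START\r\n'
        '9d41d8cd98f00b204e9800998ecf0abc 1364456113.12344\r\n'
        ':MISSING_CHECK: END\r\n'
        ':UPDATES: START\r\n'
        'PUT /a/c/o\r\n'
        'Content-Length: 1\r\n'
        'X-Timestamp: 1364456113.12344\r\n'
        '\r\n'
        '1'
        ':UPDATES: END\r\n')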
def setUp(self): self.rx_ip = '127.0.0.1' # dirs self.tmpdir = tempfile.mkdtemp() self.tempdir = os.path.join(self.tmpdir, 'tmp_test_obj_server') self.devices = os.path.join(self.tempdir, 'srv/node') for device in ('sda1', 'sdb1'): os.makedirs(os.path.join(self.devices, device)) self.conf = { 'devices': self.devices, 'swift_dir': self.tempdir, } self.rx_logger = debug_logger('test-object-server') rx_server = server.ObjectController(self.conf, logger=self.rx_logger) self.sock = eventlet.listen((self.rx_ip, 0)) self.rx_server = eventlet.spawn( eventlet.wsgi.server, self.sock, rx_server, utils.NullLogger()) self.rx_port = self.sock.getsockname()[1] self.tx_logger = debug_logger('test-reconstructor') self.daemon = ObjectReconstructor(self.conf, self.tx_logger) self.daemon._diskfile_mgr = self.daemon._df_router[POLICIES[0]] def tearDown(self): self.rx_server.kill() self.sock.close() eventlet.sleep(0) shutil.rmtree(self.tmpdir) def test_SSYNC_disconnect(self): node = { 'replication_ip': '127.0.0.1', 'replication_port': self.rx_port, 'device': 'sdb1', } job = { 'partition': 0, 'policy': POLICIES[0], 'device': 'sdb1', } sender = ssync_sender.Sender(self.daemon, node, job, ['abc']) # kick off the sender and let the error trigger failure with mock.patch('swift.obj.ssync_receiver.Receiver.initialize_request')\ as mock_initialize_request: mock_initialize_request.side_effect = \ swob.HTTPInternalServerError() success, _ = sender() self.assertFalse(success) stderr = six.StringIO() with mock.patch('sys.stderr', stderr): # let gc and eventlet spin a bit del sender for i in range(3): eventlet.sleep(0) self.assertNotIn('ValueError: invalid literal for int() with base 16', stderr.getvalue()) def test_SSYNC_device_not_available(self): with mock.patch('swift.obj.ssync_receiver.Receiver.missing_check')\ as mock_missing_check: self.connection = bufferedhttp.BufferedHTTPConnection( '127.0.0.1:%s' % self.rx_port) self.connection.putrequest('SSYNC', '/sdc1/0') self.connection.putheader('Transfer-Encoding', 'chunked') self.connection.putheader('X-Backend-Storage-Policy-Index', int(POLICIES[0])) self.connection.endheaders() resp = self.connection.getresponse() self.assertEqual(507, resp.status) resp.read() resp.close() # sanity check that the receiver did not proceed to missing_check self.assertFalse(mock_missing_check.called) def test_SSYNC_invalid_policy(self): valid_indices = sorted([int(policy) for policy in POLICIES]) bad_index = valid_indices[-1] + 1 with mock.patch('swift.obj.ssync_receiver.Receiver.missing_check')\ as mock_missing_check: self.connection = bufferedhttp.BufferedHTTPConnection( '127.0.0.1:%s' % self.rx_port) self.connection.putrequest('SSYNC', '/sda1/0') self.connection.putheader('Transfer-Encoding', 'chunked') self.connection.putheader('X-Backend-Storage-Policy-Index', bad_index) self.connection.endheaders() resp = self.connection.getresponse() self.assertEqual(503, resp.status) resp.read() resp.close() # sanity check that the receiver did not proceed to missing_check self.assertFalse(mock_missing_check.called) def test_bad_request_invalid_frag_index(self): with mock.patch('swift.obj.ssync_receiver.Receiver.missing_check')\ as mock_missing_check: self.connection = bufferedhttp.BufferedHTTPConnection( '127.0.0.1:%s' % self.rx_port) self.connection.putrequest('SSYNC', '/sda1/0') self.connection.putheader('Transfer-Encoding', 'chunked') self.connection.putheader('X-Backend-Ssync-Frag-Index', 'None') self.connection.endheaders() resp = self.connection.getresponse() self.assertEqual(400, 
resp.status) error_msg = resp.read() self.assertIn("Invalid X-Backend-Ssync-Frag-Index 'None'", error_msg) resp.close() # sanity check that the receiver did not proceed to missing_check self.assertFalse(mock_missing_check.called) class TestModuleMethods(unittest.TestCase): def test_decode_missing(self): object_hash = '9d41d8cd98f00b204e9800998ecf0abc' ts_iter = make_timestamp_iter() t_data = next(ts_iter) t_meta = next(ts_iter) t_ctype = next(ts_iter) d_meta_data = t_meta.raw - t_data.raw d_ctype_data = t_ctype.raw - t_data.raw # legacy single timestamp string msg = '%s %s' % (object_hash, t_data.internal) expected = dict(object_hash=object_hash, ts_meta=t_data, ts_data=t_data, ts_ctype=t_data) self.assertEqual(expected, ssync_receiver.decode_missing(msg)) # hex meta delta encoded as extra message part msg = '%s %s m:%x' % (object_hash, t_data.internal, d_meta_data) expected = dict(object_hash=object_hash, ts_data=t_data, ts_meta=t_meta, ts_ctype=t_data) self.assertEqual(expected, ssync_receiver.decode_missing(msg)) # hex content type delta encoded in extra message part msg = '%s %s t:%x,m:%x' % (object_hash, t_data.internal, d_ctype_data, d_meta_data) expected = dict(object_hash=object_hash, ts_data=t_data, ts_meta=t_meta, ts_ctype=t_ctype) self.assertEqual( expected, ssync_receiver.decode_missing(msg)) # order of subparts does not matter msg = '%s %s m:%x,t:%x' % (object_hash, t_data.internal, d_meta_data, d_ctype_data) self.assertEqual( expected, ssync_receiver.decode_missing(msg)) # hex content type delta may be zero msg = '%s %s t:0,m:%x' % (object_hash, t_data.internal, d_meta_data) expected = dict(object_hash=object_hash, ts_data=t_data, ts_meta=t_meta, ts_ctype=t_data) self.assertEqual( expected, ssync_receiver.decode_missing(msg)) # unexpected zero delta is tolerated msg = '%s %s m:0' % (object_hash, t_data.internal) expected = dict(object_hash=object_hash, ts_meta=t_data, ts_data=t_data, ts_ctype=t_data) self.assertEqual(expected, ssync_receiver.decode_missing(msg)) # unexpected subparts in timestamp delta part are tolerated msg = '%s %s c:12345,m:%x,junk' % (object_hash, t_data.internal, d_meta_data) expected = dict(object_hash=object_hash, ts_meta=t_meta, ts_data=t_data, ts_ctype=t_data) self.assertEqual( expected, ssync_receiver.decode_missing(msg)) # extra message parts tolerated msg = '%s %s m:%x future parts' % (object_hash, t_data.internal, d_meta_data) expected = dict(object_hash=object_hash, ts_meta=t_meta, ts_data=t_data, ts_ctype=t_data) self.assertEqual(expected, ssync_receiver.decode_missing(msg)) def test_encode_wanted(self): ts_iter = make_timestamp_iter() old_t_data = next(ts_iter) t_data = next(ts_iter) old_t_meta = next(ts_iter) t_meta = next(ts_iter) remote = { 'object_hash': 'theremotehash', 'ts_data': t_data, 'ts_meta': t_meta, } # missing local = {} expected = 'theremotehash dm' self.assertEqual(ssync_receiver.encode_wanted(remote, local), expected) # in-sync local = { 'ts_data': t_data, 'ts_meta': t_meta, } expected = None self.assertEqual(ssync_receiver.encode_wanted(remote, local), expected) # out-of-sync local = { 'ts_data': old_t_data, 'ts_meta': old_t_meta, } expected = 'theremotehash dm' self.assertEqual(ssync_receiver.encode_wanted(remote, local), expected) # old data local = { 'ts_data': old_t_data, 'ts_meta': t_meta, } expected = 'theremotehash d' self.assertEqual(ssync_receiver.encode_wanted(remote, local), expected) # old metadata local = { 'ts_data': t_data, 'ts_meta': old_t_meta, } expected = 'theremotehash m' 
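        # Hedged illustration drawn from the cases in this test (not an
        # authoritative protocol statement): 'd' means the receiver wants the
        # data, 'm' means it wants the metadata, both together read 'dm', and
        # None means it is already in sync. The names below are throwaway
        # locals added for illustration only.
        example_remote = {'object_hash': 'examplehash',
                          'ts_data': t_data, 'ts_meta': t_meta}
        example_local = {'ts_data': old_t_data, 'ts_meta': old_t_meta}
        self.assertEqual(
            ssync_receiver.encode_wanted(example_remote, example_local),
            'examplehash dm')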
self.assertEqual(ssync_receiver.encode_wanted(remote, local), expected) # in-sync tombstone local = { 'ts_data': t_data, } expected = None self.assertEqual(ssync_receiver.encode_wanted(remote, local), expected) # old tombstone local = { 'ts_data': old_t_data, } expected = 'theremotehash d' self.assertEqual(ssync_receiver.encode_wanted(remote, local), expected) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/obj/test_updater.py0000664000567000056710000004564313024044354021616 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import six.moves.cPickle as pickle import mock import os import unittest import random import itertools from contextlib import closing from gzip import GzipFile from tempfile import mkdtemp from shutil import rmtree from test.unit import FakeLogger from time import time from distutils.dir_util import mkpath from eventlet import spawn, Timeout, listen from six.moves import range from swift.obj import updater as object_updater from swift.obj.diskfile import (ASYNCDIR_BASE, get_async_dir, DiskFileManager, get_tmp_dir) from swift.common.ring import RingData from swift.common import utils from swift.common.header_key_dict import HeaderKeyDict from swift.common.utils import hash_path, normalize_timestamp, mkdirs, \ write_pickle from test.unit import debug_logger, patch_policies, mocked_http_conn from swift.common.storage_policy import StoragePolicy, POLICIES _mocked_policies = [StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', True)] @patch_policies(_mocked_policies) class TestObjectUpdater(unittest.TestCase): def setUp(self): utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = '' self.testdir = mkdtemp() ring_file = os.path.join(self.testdir, 'container.ring.gz') with closing(GzipFile(ring_file, 'wb')) as f: pickle.dump( RingData([[0, 1, 2, 0, 1, 2], [1, 2, 0, 1, 2, 0], [2, 3, 1, 2, 3, 1]], [{'id': 0, 'ip': '127.0.0.1', 'port': 1, 'device': 'sda1', 'zone': 0}, {'id': 1, 'ip': '127.0.0.1', 'port': 1, 'device': 'sda1', 'zone': 2}, {'id': 2, 'ip': '127.0.0.1', 'port': 1, 'device': 'sda1', 'zone': 4}], 30), f) self.devices_dir = os.path.join(self.testdir, 'devices') os.mkdir(self.devices_dir) self.sda1 = os.path.join(self.devices_dir, 'sda1') os.mkdir(self.sda1) for policy in POLICIES: os.mkdir(os.path.join(self.sda1, get_tmp_dir(policy))) self.logger = debug_logger() def tearDown(self): rmtree(self.testdir, ignore_errors=1) def test_creation(self): cu = object_updater.ObjectUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '2', 'node_timeout': '5.5'}) self.assertTrue(hasattr(cu, 'logger')) self.assertTrue(cu.logger is not None) self.assertEqual(cu.devices, self.devices_dir) self.assertEqual(cu.interval, 1) self.assertEqual(cu.concurrency, 2) self.assertEqual(cu.node_timeout, 5.5) self.assertTrue(cu.get_container_ring() is not None) @mock.patch('os.listdir') def test_listdir_with_exception(self, mock_listdir): e 
= OSError('permission_denied') mock_listdir.side_effect = e # setup updater conf = { 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, } daemon = object_updater.ObjectUpdater(conf) daemon.logger = FakeLogger() paths = daemon._listdir('foo/bar') self.assertEqual([], paths) log_lines = daemon.logger.get_lines_for_level('error') msg = ('ERROR: Unable to access foo/bar: permission_denied') self.assertEqual(log_lines[0], msg) @mock.patch('os.listdir', return_value=['foo', 'bar']) def test_listdir_without_exception(self, mock_listdir): # setup updater conf = { 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, } daemon = object_updater.ObjectUpdater(conf) daemon.logger = FakeLogger() path = daemon._listdir('foo/bar/') log_lines = daemon.logger.get_lines_for_level('error') self.assertEqual(len(log_lines), 0) self.assertEqual(path, ['foo', 'bar']) def test_object_sweep(self): def check_with_idx(index, warn, should_skip): if int(index) > 0: asyncdir = os.path.join(self.sda1, ASYNCDIR_BASE + "-" + index) else: asyncdir = os.path.join(self.sda1, ASYNCDIR_BASE) prefix_dir = os.path.join(asyncdir, 'abc') mkpath(prefix_dir) # A non-directory where directory is expected should just be # skipped, but should not stop processing of subsequent # directories. not_dirs = ( os.path.join(self.sda1, 'not_a_dir'), os.path.join(self.sda1, ASYNCDIR_BASE + '-' + 'twentington'), os.path.join(self.sda1, ASYNCDIR_BASE + '-' + str(int(index) + 100))) for not_dir in not_dirs: with open(not_dir, 'w'): pass objects = { 'a': [1089.3, 18.37, 12.83, 1.3], 'b': [49.4, 49.3, 49.2, 49.1], 'c': [109984.123], } expected = set() for o, timestamps in objects.items(): ohash = hash_path('account', 'container', o) for t in timestamps: o_path = os.path.join(prefix_dir, ohash + '-' + normalize_timestamp(t)) if t == timestamps[0]: expected.add((o_path, int(index))) write_pickle({}, o_path) seen = set() class MockObjectUpdater(object_updater.ObjectUpdater): def process_object_update(self, update_path, device, policy): seen.add((update_path, int(policy))) os.unlink(update_path) cu = MockObjectUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '5'}) cu.logger = mock_logger = mock.MagicMock() cu.object_sweep(self.sda1) self.assertEqual(mock_logger.warning.call_count, warn) self.assertTrue( os.path.exists(os.path.join(self.sda1, 'not_a_dir'))) if should_skip: # if we were supposed to skip over the dir, we didn't process # anything at all self.assertTrue(os.path.exists(prefix_dir)) self.assertEqual(set(), seen) else: self.assertTrue(not os.path.exists(prefix_dir)) self.assertEqual(expected, seen) # test cleanup: the tempdir gets cleaned up between runs, but this # way we can be called multiple times in a single test method for not_dir in not_dirs: os.unlink(not_dir) # first check with valid policies for pol in POLICIES: check_with_idx(str(pol.idx), 0, should_skip=False) # now check with a bogus async dir policy and make sure we get # a warning indicating that the '99' policy isn't valid check_with_idx('99', 1, should_skip=True) @mock.patch.object(object_updater, 'ismount') def test_run_once_with_disk_unmounted(self, mock_ismount): mock_ismount.return_value = False cu = object_updater.ObjectUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15'}) cu.run_once() async_dir = os.path.join(self.sda1, 
get_async_dir(POLICIES[0])) os.mkdir(async_dir) cu.run_once() self.assertTrue(os.path.exists(async_dir)) # mount_check == False means no call to ismount self.assertEqual([], mock_ismount.mock_calls) cu = object_updater.ObjectUpdater({ 'devices': self.devices_dir, 'mount_check': 'TrUe', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15'}, logger=self.logger) odd_dir = os.path.join(async_dir, 'not really supposed ' 'to be here') os.mkdir(odd_dir) cu.run_once() self.assertTrue(os.path.exists(async_dir)) self.assertTrue(os.path.exists(odd_dir)) # skipped - not mounted! # mount_check == True means ismount was checked self.assertEqual([ mock.call(self.sda1), ], mock_ismount.mock_calls) self.assertEqual(cu.logger.get_increment_counts(), {'errors': 1}) @mock.patch.object(object_updater, 'ismount') def test_run_once(self, mock_ismount): mock_ismount.return_value = True cu = object_updater.ObjectUpdater({ 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15'}, logger=self.logger) cu.run_once() async_dir = os.path.join(self.sda1, get_async_dir(POLICIES[0])) os.mkdir(async_dir) cu.run_once() self.assertTrue(os.path.exists(async_dir)) # mount_check == False means no call to ismount self.assertEqual([], mock_ismount.mock_calls) cu = object_updater.ObjectUpdater({ 'devices': self.devices_dir, 'mount_check': 'TrUe', 'swift_dir': self.testdir, 'interval': '1', 'concurrency': '1', 'node_timeout': '15'}, logger=self.logger) odd_dir = os.path.join(async_dir, 'not really supposed ' 'to be here') os.mkdir(odd_dir) cu.run_once() self.assertTrue(os.path.exists(async_dir)) self.assertTrue(not os.path.exists(odd_dir)) # mount_check == True means ismount was checked self.assertEqual([ mock.call(self.sda1), ], mock_ismount.mock_calls) ohash = hash_path('a', 'c', 'o') odir = os.path.join(async_dir, ohash[-3:]) mkdirs(odir) older_op_path = os.path.join( odir, '%s-%s' % (ohash, normalize_timestamp(time() - 1))) op_path = os.path.join( odir, '%s-%s' % (ohash, normalize_timestamp(time()))) for path in (op_path, older_op_path): with open(path, 'wb') as async_pending: pickle.dump({'op': 'PUT', 'account': 'a', 'container': 'c', 'obj': 'o', 'headers': { 'X-Container-Timestamp': normalize_timestamp(0)}}, async_pending) cu.run_once() self.assertTrue(not os.path.exists(older_op_path)) self.assertTrue(os.path.exists(op_path)) self.assertEqual(cu.logger.get_increment_counts(), {'failures': 1, 'unlinks': 1}) self.assertIsNone(pickle.load(open(op_path)).get('successes')) bindsock = listen(('127.0.0.1', 0)) def accepter(sock, return_code): try: with Timeout(3): inc = sock.makefile('rb') out = sock.makefile('wb') out.write('HTTP/1.1 %d OK\r\nContent-Length: 0\r\n\r\n' % return_code) out.flush() self.assertEqual(inc.readline(), 'PUT /sda1/0/a/c/o HTTP/1.1\r\n') headers = HeaderKeyDict() line = inc.readline() while line and line != '\r\n': headers[line.split(':')[0]] = \ line.split(':')[1].strip() line = inc.readline() self.assertTrue('x-container-timestamp' in headers) self.assertTrue('X-Backend-Storage-Policy-Index' in headers) except BaseException as err: return err return None def accept(return_codes): codes = iter(return_codes) try: events = [] for x in range(len(return_codes)): with Timeout(3): sock, addr = bindsock.accept() events.append( spawn(accepter, sock, next(codes))) for event in events: err = event.wait() if err: raise err except BaseException as err: return err return None event = spawn(accept, [201, 500, 
500]) for dev in cu.get_container_ring().devs: if dev is not None: dev['port'] = bindsock.getsockname()[1] cu.logger._clear() cu.run_once() err = event.wait() if err: raise err self.assertTrue(os.path.exists(op_path)) self.assertEqual(cu.logger.get_increment_counts(), {'failures': 1}) self.assertEqual([0], pickle.load(open(op_path)).get('successes')) event = spawn(accept, [404, 500]) cu.logger._clear() cu.run_once() err = event.wait() if err: raise err self.assertTrue(os.path.exists(op_path)) self.assertEqual(cu.logger.get_increment_counts(), {'failures': 1}) self.assertEqual([0, 1], pickle.load(open(op_path)).get('successes')) event = spawn(accept, [201]) cu.logger._clear() cu.run_once() err = event.wait() if err: raise err self.assertTrue(not os.path.exists(op_path)) self.assertEqual(cu.logger.get_increment_counts(), {'unlinks': 1, 'successes': 1}) def test_obj_put_legacy_updates(self): ts = (normalize_timestamp(t) for t in itertools.count(int(time()))) policy = POLICIES.get_by_index(0) # setup updater conf = { 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, } async_dir = os.path.join(self.sda1, get_async_dir(policy)) os.mkdir(async_dir) account, container, obj = 'a', 'c', 'o' # write an async for op in ('PUT', 'DELETE'): self.logger._clear() daemon = object_updater.ObjectUpdater(conf, logger=self.logger) dfmanager = DiskFileManager(conf, daemon.logger) # don't include storage-policy-index in headers_out pickle headers_out = HeaderKeyDict({ 'x-size': 0, 'x-content-type': 'text/plain', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-timestamp': next(ts), }) data = {'op': op, 'account': account, 'container': container, 'obj': obj, 'headers': headers_out} dfmanager.pickle_async_update(self.sda1, account, container, obj, data, next(ts), policy) request_log = [] def capture(*args, **kwargs): request_log.append((args, kwargs)) # run once fake_status_codes = [200, 200, 200] with mocked_http_conn(*fake_status_codes, give_connect=capture): daemon.run_once() self.assertEqual(len(fake_status_codes), len(request_log)) for request_args, request_kwargs in request_log: ip, part, method, path, headers, qs, ssl = request_args self.assertEqual(method, op) self.assertEqual(headers['X-Backend-Storage-Policy-Index'], str(int(policy))) self.assertEqual(daemon.logger.get_increment_counts(), {'successes': 1, 'unlinks': 1, 'async_pendings': 1}) def test_obj_put_async_updates(self): ts = (normalize_timestamp(t) for t in itertools.count(int(time()))) policy = random.choice(list(POLICIES)) # setup updater conf = { 'devices': self.devices_dir, 'mount_check': 'false', 'swift_dir': self.testdir, } daemon = object_updater.ObjectUpdater(conf, logger=self.logger) async_dir = os.path.join(self.sda1, get_async_dir(policy)) os.mkdir(async_dir) # write an async dfmanager = DiskFileManager(conf, daemon.logger) account, container, obj = 'a', 'c', 'o' op = 'PUT' headers_out = HeaderKeyDict({ 'x-size': 0, 'x-content-type': 'text/plain', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-timestamp': next(ts), 'X-Backend-Storage-Policy-Index': int(policy), }) data = {'op': op, 'account': account, 'container': container, 'obj': obj, 'headers': headers_out} dfmanager.pickle_async_update(self.sda1, account, container, obj, data, next(ts), policy) request_log = [] def capture(*args, **kwargs): request_log.append((args, kwargs)) # run once fake_status_codes = [ 200, # object update success 200, # object update success 200, # object update conflict ] with mocked_http_conn(*fake_status_codes, 
give_connect=capture): daemon.run_once() self.assertEqual(len(fake_status_codes), len(request_log)) for request_args, request_kwargs in request_log: ip, part, method, path, headers, qs, ssl = request_args self.assertEqual(method, 'PUT') self.assertEqual(headers['X-Backend-Storage-Policy-Index'], str(int(policy))) self.assertEqual(daemon.logger.get_increment_counts(), {'successes': 1, 'unlinks': 1, 'async_pendings': 1}) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/obj/test_reconstructor.py0000775000567000056710000035507213024044354023071 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools import unittest import os from hashlib import md5 import mock import six.moves.cPickle as pickle import tempfile import time import shutil import re import random import struct from eventlet import Timeout, sleep from contextlib import closing, contextmanager from gzip import GzipFile from shutil import rmtree from swift.common import utils from swift.common.exceptions import DiskFileError from swift.common.header_key_dict import HeaderKeyDict from swift.obj import diskfile, reconstructor as object_reconstructor from swift.common import ring from swift.common.storage_policy import (StoragePolicy, ECStoragePolicy, POLICIES, EC_POLICY) from swift.obj.reconstructor import REVERT from test.unit import (patch_policies, debug_logger, mocked_http_conn, FabricatedRing, make_timestamp_iter, DEFAULT_TEST_EC_TYPE) @contextmanager def mock_ssync_sender(ssync_calls=None, response_callback=None, **kwargs): def fake_ssync(daemon, node, job, suffixes): if ssync_calls is not None: ssync_calls.append( {'node': node, 'job': job, 'suffixes': suffixes}) def fake_call(): if response_callback: response = response_callback(node, job, suffixes) else: response = True, {} return response return fake_call with mock.patch('swift.obj.reconstructor.ssync_sender', fake_ssync): yield fake_ssync def make_ec_archive_bodies(policy, test_body): segment_size = policy.ec_segment_size # split up the body into buffers chunks = [test_body[x:x + segment_size] for x in range(0, len(test_body), segment_size)] # encode the buffers into fragment payloads fragment_payloads = [] for chunk in chunks: fragments = policy.pyeclib_driver.encode(chunk) if not fragments: break fragment_payloads.append(fragments) # join up the fragment payloads per node ec_archive_bodies = [''.join(frags) for frags in zip(*fragment_payloads)] return ec_archive_bodies def _create_test_rings(path): testgz = os.path.join(path, 'object.ring.gz') intended_replica2part2dev_id = [ [0, 1, 2], [1, 2, 3], [2, 3, 0] ] intended_devs = [ {'id': 0, 'device': 'sda1', 'zone': 0, 'ip': '127.0.0.0', 'port': 6000}, {'id': 1, 'device': 'sda1', 'zone': 1, 'ip': '127.0.0.1', 'port': 6000}, {'id': 2, 'device': 'sda1', 'zone': 2, 'ip': '127.0.0.2', 'port': 6000}, {'id': 3, 'device': 'sda1', 'zone': 4, 'ip': '127.0.0.3', 'port': 6000} ] intended_part_shift = 30 with closing(GzipFile(testgz, 'wb')) as f: 
pickle.dump( ring.RingData(intended_replica2part2dev_id, intended_devs, intended_part_shift), f) testgz = os.path.join(path, 'object-1.ring.gz') with closing(GzipFile(testgz, 'wb')) as f: pickle.dump( ring.RingData(intended_replica2part2dev_id, intended_devs, intended_part_shift), f) def count_stats(logger, key, metric): count = 0 for record in logger.log_dict[key]: log_args, log_kwargs = record m = log_args[0] if re.match(metric, m): count += 1 return count def get_header_frag_index(self, body): metadata = self.policy.pyeclib_driver.get_metadata(body) frag_index = struct.unpack('h', metadata[:2])[0] return { 'X-Object-Sysmeta-Ec-Frag-Index': frag_index, } @patch_policies([StoragePolicy(0, name='zero', is_default=True), ECStoragePolicy(1, name='one', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=2, ec_nparity=1)]) class TestGlobalSetupObjectReconstructor(unittest.TestCase): def setUp(self): self.testdir = tempfile.mkdtemp() _create_test_rings(self.testdir) POLICIES[0].object_ring = ring.Ring(self.testdir, ring_name='object') POLICIES[1].object_ring = ring.Ring(self.testdir, ring_name='object-1') utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = '' self.devices = os.path.join(self.testdir, 'node') os.makedirs(self.devices) os.mkdir(os.path.join(self.devices, 'sda1')) self.objects = os.path.join(self.devices, 'sda1', diskfile.get_data_dir(POLICIES[0])) self.objects_1 = os.path.join(self.devices, 'sda1', diskfile.get_data_dir(POLICIES[1])) os.mkdir(self.objects) os.mkdir(self.objects_1) self.parts = {} self.parts_1 = {} self.part_nums = ['0', '1', '2'] for part in self.part_nums: self.parts[part] = os.path.join(self.objects, part) os.mkdir(self.parts[part]) self.parts_1[part] = os.path.join(self.objects_1, part) os.mkdir(self.parts_1[part]) self.conf = dict( swift_dir=self.testdir, devices=self.devices, mount_check='false', timeout='300', stats_interval='1') self.logger = debug_logger('test-reconstructor') self.reconstructor = object_reconstructor.ObjectReconstructor( self.conf, logger=self.logger) self.policy = POLICIES[1] # most of the reconstructor test methods require that there be # real objects in place, not just part dirs, so we'll create them # all here.... 
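# the EC policy patched in above uses ec_ndata=2 and ec_nparity=1, so each object is stored as three fragment archives with frag indexes 0-2; the FI values in the layout sketch below refer to those fragment indexes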
# part 0: 3C1/hash/xxx-1.data <-- job: sync_only - partners (FI 1) # /xxx.durable <-- included in earlier job (FI 1) # 061/hash/xxx-1.data <-- included in earlier job (FI 1) # /xxx.durable <-- included in earlier job (FI 1) # /xxx-2.data <-- job: sync_revert to index 2 # part 1: 3C1/hash/xxx-0.data <-- job: sync_only - partners (FI 0) # /xxx-1.data <-- job: sync_revert to index 1 # /xxx.durable <-- included in earlier jobs (FI 0, 1) # 061/hash/xxx-1.data <-- included in earlier job (FI 1) # /xxx.durable <-- included in earlier job (FI 1) # part 2: 3C1/hash/xxx-2.data <-- job: sync_revert to index 2 # /xxx.durable <-- included in earlier job (FI 2) # 061/hash/xxx-0.data <-- job: sync_revert to index 0 # /xxx.durable <-- included in earlier job (FI 0) def _create_frag_archives(policy, obj_path, local_id, obj_set): # we'll create 2 sets of objects in different suffix dirs # so we cover all the scenarios we want (3 of them) # 1) part dir with all FI's matching the local node index # 2) part dir with one local and mix of others # 3) part dir with no local FI and one or more others def part_0(set): if set == 0: # just the local return local_id else: # one local and all of another if obj_num == 0: return local_id else: return (local_id + 1) % 3 def part_1(set): if set == 0: # one local and all of another if obj_num == 0: return local_id else: return (local_id + 2) % 3 else: # just the local node return local_id def part_2(set): # this part is a handoff in our config (always) # so let's do a set with indices from different nodes if set == 0: return (local_id + 1) % 3 else: return (local_id + 2) % 3 # function dictionary for defining test scenarios based on set # scenarios = {'0': part_0, '1': part_1, '2': part_2} def _create_df(obj_num, part_num): self._create_diskfile( part=part_num, object_name='o' + str(obj_set), policy=policy, frag_index=scenarios[part_num](obj_set), timestamp=utils.Timestamp(t)) for part_num in self.part_nums: # create 3 unique objects per part, each part # will then have a unique mix of FIs for the # possible scenarios for obj_num in range(0, 3): _create_df(obj_num, part_num) ips = utils.whataremyips() for policy in [p for p in POLICIES if p.policy_type == EC_POLICY]: self.ec_policy = policy self.ec_obj_ring = self.reconstructor.load_object_ring( self.ec_policy) data_dir = diskfile.get_data_dir(self.ec_policy) for local_dev in [dev for dev in self.ec_obj_ring.devs if dev and dev['replication_ip'] in ips and dev['replication_port'] == self.reconstructor.port]: self.ec_local_dev = local_dev dev_path = os.path.join(self.reconstructor.devices_dir, self.ec_local_dev['device']) self.ec_obj_path = os.path.join(dev_path, data_dir) # create a bunch of FA's to test t = 1421181937.70054 # time.time() with mock.patch('swift.obj.diskfile.time') as mock_time: # since (a) we are using a fixed time here to create # frags which correspond to all the hardcoded hashes and # (b) the EC diskfile will delete its .data file right # after creating if it has expired, use this horrible hack # to prevent the reclaim happening mock_time.time.return_value = 0.0 _create_frag_archives(self.ec_policy, self.ec_obj_path, self.ec_local_dev['id'], 0) _create_frag_archives(self.ec_policy, self.ec_obj_path, self.ec_local_dev['id'], 1) break break def tearDown(self): rmtree(self.testdir, ignore_errors=1) def _create_diskfile(self, policy=None, part=0, object_name='o', frag_index=0, timestamp=None, test_data=None): policy = policy or self.policy df_mgr = self.reconstructor._df_router[policy] df =
df_mgr.get_diskfile('sda1', part, 'a', 'c', object_name, policy=policy) with df.create() as writer: timestamp = timestamp or utils.Timestamp(time.time()) test_data = test_data or 'test data' writer.write(test_data) metadata = { 'X-Timestamp': timestamp.internal, 'Content-Length': len(test_data), 'Etag': md5(test_data).hexdigest(), 'X-Object-Sysmeta-Ec-Frag-Index': frag_index, } writer.put(metadata) writer.commit(timestamp) return df def assert_expected_jobs(self, part_num, jobs): for job in jobs: del job['path'] del job['policy'] if 'local_index' in job: del job['local_index'] job['suffixes'].sort() expected = [] # part num 0 expected.append( [{ 'sync_to': [{ 'index': 2, 'replication_port': 6000, 'zone': 2, 'ip': '127.0.0.2', 'region': 1, 'port': 6000, 'replication_ip': '127.0.0.2', 'device': 'sda1', 'id': 2, }], 'job_type': object_reconstructor.REVERT, 'suffixes': ['061'], 'partition': 0, 'frag_index': 2, 'device': 'sda1', 'local_dev': { 'replication_port': 6000, 'zone': 1, 'ip': '127.0.0.1', 'region': 1, 'id': 1, 'replication_ip': '127.0.0.1', 'device': 'sda1', 'port': 6000, }, 'hashes': { '061': { None: '85b02a5283704292a511078a5c483da5', 2: '0e6e8d48d801dc89fd31904ae3b31229', 1: '0e6e8d48d801dc89fd31904ae3b31229', }, '3c1': { None: '85b02a5283704292a511078a5c483da5', 1: '0e6e8d48d801dc89fd31904ae3b31229', }, }, }, { 'sync_to': [{ 'index': 0, 'replication_port': 6000, 'zone': 0, 'ip': '127.0.0.0', 'region': 1, 'port': 6000, 'replication_ip': '127.0.0.0', 'device': 'sda1', 'id': 0, }, { 'index': 2, 'replication_port': 6000, 'zone': 2, 'ip': '127.0.0.2', 'region': 1, 'port': 6000, 'replication_ip': '127.0.0.2', 'device': 'sda1', 'id': 2, }], 'job_type': object_reconstructor.SYNC, 'sync_diskfile_builder': self.reconstructor.reconstruct_fa, 'suffixes': ['061', '3c1'], 'partition': 0, 'frag_index': 1, 'device': 'sda1', 'local_dev': { 'replication_port': 6000, 'zone': 1, 'ip': '127.0.0.1', 'region': 1, 'id': 1, 'replication_ip': '127.0.0.1', 'device': 'sda1', 'port': 6000, }, 'hashes': { '061': { None: '85b02a5283704292a511078a5c483da5', 2: '0e6e8d48d801dc89fd31904ae3b31229', 1: '0e6e8d48d801dc89fd31904ae3b31229' }, '3c1': { None: '85b02a5283704292a511078a5c483da5', 1: '0e6e8d48d801dc89fd31904ae3b31229', }, }, }] ) # part num 1 expected.append( [{ 'sync_to': [{ 'index': 1, 'replication_port': 6000, 'zone': 2, 'ip': '127.0.0.2', 'region': 1, 'port': 6000, 'replication_ip': '127.0.0.2', 'device': 'sda1', 'id': 2, }], 'job_type': object_reconstructor.REVERT, 'suffixes': ['061', '3c1'], 'partition': 1, 'frag_index': 1, 'device': 'sda1', 'local_dev': { 'replication_port': 6000, 'zone': 1, 'ip': '127.0.0.1', 'region': 1, 'id': 1, 'replication_ip': '127.0.0.1', 'device': 'sda1', 'port': 6000, }, 'hashes': { '061': { None: '85b02a5283704292a511078a5c483da5', 1: '0e6e8d48d801dc89fd31904ae3b31229', }, '3c1': { 0: '0e6e8d48d801dc89fd31904ae3b31229', None: '85b02a5283704292a511078a5c483da5', 1: '0e6e8d48d801dc89fd31904ae3b31229', }, }, }, { 'sync_to': [{ 'index': 2, 'replication_port': 6000, 'zone': 4, 'ip': '127.0.0.3', 'region': 1, 'port': 6000, 'replication_ip': '127.0.0.3', 'device': 'sda1', 'id': 3, }, { 'index': 1, 'replication_port': 6000, 'zone': 2, 'ip': '127.0.0.2', 'region': 1, 'port': 6000, 'replication_ip': '127.0.0.2', 'device': 'sda1', 'id': 2, }], 'job_type': object_reconstructor.SYNC, 'sync_diskfile_builder': self.reconstructor.reconstruct_fa, 'suffixes': ['3c1'], 'partition': 1, 'frag_index': 0, 'device': 'sda1', 'local_dev': { 'replication_port': 6000, 'zone': 1, 'ip': '127.0.0.1', 
'region': 1, 'id': 1, 'replication_ip': '127.0.0.1', 'device': 'sda1', 'port': 6000, }, 'hashes': { '061': { None: '85b02a5283704292a511078a5c483da5', 1: '0e6e8d48d801dc89fd31904ae3b31229', }, '3c1': { 0: '0e6e8d48d801dc89fd31904ae3b31229', None: '85b02a5283704292a511078a5c483da5', 1: '0e6e8d48d801dc89fd31904ae3b31229', }, }, }] ) # part num 2 expected.append( [{ 'sync_to': [{ 'index': 0, 'replication_port': 6000, 'zone': 2, 'ip': '127.0.0.2', 'region': 1, 'port': 6000, 'replication_ip': '127.0.0.2', 'device': 'sda1', 'id': 2, }], 'job_type': object_reconstructor.REVERT, 'suffixes': ['061'], 'partition': 2, 'frag_index': 0, 'device': 'sda1', 'local_dev': { 'replication_port': 6000, 'zone': 1, 'ip': '127.0.0.1', 'region': 1, 'id': 1, 'replication_ip': '127.0.0.1', 'device': 'sda1', 'port': 6000, }, 'hashes': { '061': { 0: '0e6e8d48d801dc89fd31904ae3b31229', None: '85b02a5283704292a511078a5c483da5' }, '3c1': { None: '85b02a5283704292a511078a5c483da5', 2: '0e6e8d48d801dc89fd31904ae3b31229' }, }, }, { 'sync_to': [{ 'index': 2, 'replication_port': 6000, 'zone': 0, 'ip': '127.0.0.0', 'region': 1, 'port': 6000, 'replication_ip': '127.0.0.0', 'device': 'sda1', 'id': 0, }], 'job_type': object_reconstructor.REVERT, 'suffixes': ['3c1'], 'partition': 2, 'frag_index': 2, 'device': 'sda1', 'local_dev': { 'replication_port': 6000, 'zone': 1, 'ip': '127.0.0.1', 'region': 1, 'id': 1, 'replication_ip': '127.0.0.1', 'device': 'sda1', 'port': 6000 }, 'hashes': { '061': { 0: '0e6e8d48d801dc89fd31904ae3b31229', None: '85b02a5283704292a511078a5c483da5' }, '3c1': { None: '85b02a5283704292a511078a5c483da5', 2: '0e6e8d48d801dc89fd31904ae3b31229' }, }, }] ) def check_jobs(part_num): try: expected_jobs = expected[int(part_num)] except (IndexError, ValueError): self.fail('Unknown part number %r' % part_num) expected_by_part_frag_index = dict( ((j['partition'], j['frag_index']), j) for j in expected_jobs) for job in jobs: job_key = (job['partition'], job['frag_index']) if job_key in expected_by_part_frag_index: for k, value in job.items(): expected_value = \ expected_by_part_frag_index[job_key][k] try: if isinstance(value, list): value.sort() expected_value.sort() self.assertEqual(value, expected_value) except AssertionError as e: extra_info = \ '\n\n... 
for %r in part num %s job %r' % ( k, part_num, job_key) raise AssertionError(str(e) + extra_info) else: self.fail( 'Unexpected job %r for part num %s - ' 'expected jobs were %r' % ( job_key, part_num, expected_by_part_frag_index.keys())) for expected_job in expected_jobs: if expected_job in jobs: jobs.remove(expected_job) self.assertFalse(jobs) # that should be all of them check_jobs(part_num) def _run_once(self, http_count, extra_devices, override_devices=None): ring_devs = list(self.policy.object_ring.devs) for device, parts in extra_devices.items(): device_path = os.path.join(self.devices, device) os.mkdir(device_path) for part in range(parts): os.makedirs(os.path.join(device_path, 'objects-1', str(part))) # we update the ring to make is_local happy devs = [dict(d) for d in ring_devs] for d in devs: d['device'] = device self.policy.object_ring.devs.extend(devs) self.reconstructor.stats_interval = 0 self.process_job = lambda j: sleep(0) with mocked_http_conn(*[200] * http_count, body=pickle.dumps({})): with mock_ssync_sender(): self.reconstructor.run_once(devices=override_devices) def test_run_once(self): # sda1: 3 is done in setup extra_devices = { 'sdb1': 4, 'sdc1': 1, 'sdd1': 0, } self._run_once(18, extra_devices) stats_lines = set() for line in self.logger.get_lines_for_level('info'): if 'devices reconstructed in' not in line: continue stat_line = line.split('of', 1)[0].strip() stats_lines.add(stat_line) acceptable = set([ '0/3 (0.00%) partitions', '8/8 (100.00%) partitions', ]) matched = stats_lines & acceptable self.assertEqual(matched, acceptable, 'missing some expected acceptable:\n%s' % ( '\n'.join(sorted(acceptable - matched)))) self.assertEqual(self.reconstructor.reconstruction_device_count, 4) self.assertEqual(self.reconstructor.reconstruction_part_count, 8) self.assertEqual(self.reconstructor.part_count, 8) def test_run_once_override_devices(self): # sda1: 3 is done in setup extra_devices = { 'sdb1': 4, 'sdc1': 1, 'sdd1': 0, } self._run_once(2, extra_devices, 'sdc1') stats_lines = set() for line in self.logger.get_lines_for_level('info'): if 'devices reconstructed in' not in line: continue stat_line = line.split('of', 1)[0].strip() stats_lines.add(stat_line) acceptable = set([ '1/1 (100.00%) partitions', ]) matched = stats_lines & acceptable self.assertEqual(matched, acceptable, 'missing some expected acceptable:\n%s' % ( '\n'.join(sorted(acceptable - matched)))) self.assertEqual(self.reconstructor.reconstruction_device_count, 1) self.assertEqual(self.reconstructor.reconstruction_part_count, 1) self.assertEqual(self.reconstructor.part_count, 1) def test_get_response(self): part = self.part_nums[0] node = POLICIES[0].object_ring.get_part_nodes(int(part))[0] for stat_code in (200, 400): with mocked_http_conn(stat_code): resp = self.reconstructor._get_response(node, part, path='nada', headers={}, policy=POLICIES[0]) if resp: self.assertEqual(resp.status, 200) else: self.assertEqual( len(self.reconstructor.logger.log_dict['warning']), 1) def test_reconstructor_does_not_log_on_404(self): part = self.part_nums[0] node = POLICIES[0].object_ring.get_part_nodes(int(part))[0] with mocked_http_conn(404): self.reconstructor._get_response(node, part, path='some_path', headers={}, policy=POLICIES[0]) # Make sure that no warnings are emitted for a 404 len_warning_lines = len(self.logger.get_lines_for_level('warning')) self.assertEqual(len_warning_lines, 0) def test_reconstructor_skips_bogus_partition_dirs(self): # A directory in the wrong place shouldn't crash the reconstructor
self.reconstructor._reset_stats() rmtree(self.objects_1) os.mkdir(self.objects_1) os.mkdir(os.path.join(self.objects_1, "burrito")) jobs = [] for part_info in self.reconstructor.collect_parts(): jobs += self.reconstructor.build_reconstruction_jobs(part_info) self.assertEqual(len(jobs), 0) def test_check_ring(self): testring = tempfile.mkdtemp() _create_test_rings(testring) obj_ring = ring.Ring(testring, ring_name='object') # noqa self.assertTrue(self.reconstructor.check_ring(obj_ring)) orig_check = self.reconstructor.next_check self.reconstructor.next_check = orig_check - 30 self.assertTrue(self.reconstructor.check_ring(obj_ring)) self.reconstructor.next_check = orig_check orig_ring_time = obj_ring._mtime obj_ring._mtime = orig_ring_time - 30 self.assertTrue(self.reconstructor.check_ring(obj_ring)) self.reconstructor.next_check = orig_check - 30 self.assertFalse(self.reconstructor.check_ring(obj_ring)) rmtree(testring, ignore_errors=1) def test_build_reconstruction_jobs(self): self.reconstructor.handoffs_first = False self.reconstructor._reset_stats() for part_info in self.reconstructor.collect_parts(): jobs = self.reconstructor.build_reconstruction_jobs(part_info) self.assertTrue(jobs[0]['job_type'] in (object_reconstructor.SYNC, object_reconstructor.REVERT)) self.assert_expected_jobs(part_info['partition'], jobs) self.reconstructor.handoffs_first = True self.reconstructor._reset_stats() for part_info in self.reconstructor.collect_parts(): jobs = self.reconstructor.build_reconstruction_jobs(part_info) self.assertTrue(jobs[0]['job_type'] == object_reconstructor.REVERT) self.assert_expected_jobs(part_info['partition'], jobs) def test_get_partners(self): # we're going to perform an exhaustive test of every possible # combination of partitions and nodes in our custom test ring # format: [dev_id in question, 'part_num', # [part_nodes for the given part], left id, right id...] 
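# for example, the first entry below, (0, '0', [0, 1, 2], 2, 1), says dev 0 holds frag index 0 of part 0, so its left partner (wrapping around the node list) is dev 2 and its right partner is dev 1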
expected_partners = sorted([ (0, '0', [0, 1, 2], 2, 1), (0, '2', [2, 3, 0], 3, 2), (1, '0', [0, 1, 2], 0, 2), (1, '1', [1, 2, 3], 3, 2), (2, '0', [0, 1, 2], 1, 0), (2, '1', [1, 2, 3], 1, 3), (2, '2', [2, 3, 0], 0, 3), (3, '1', [1, 2, 3], 2, 1), (3, '2', [2, 3, 0], 2, 0), (0, '0', [0, 1, 2], 2, 1), (0, '2', [2, 3, 0], 3, 2), (1, '0', [0, 1, 2], 0, 2), (1, '1', [1, 2, 3], 3, 2), (2, '0', [0, 1, 2], 1, 0), (2, '1', [1, 2, 3], 1, 3), (2, '2', [2, 3, 0], 0, 3), (3, '1', [1, 2, 3], 2, 1), (3, '2', [2, 3, 0], 2, 0), ]) got_partners = [] for pol in POLICIES: obj_ring = pol.object_ring for part_num in self.part_nums: part_nodes = obj_ring.get_part_nodes(int(part_num)) primary_ids = [n['id'] for n in part_nodes] for node in part_nodes: partners = object_reconstructor._get_partners( node['index'], part_nodes) left = partners[0]['id'] right = partners[1]['id'] got_partners.append(( node['id'], part_num, primary_ids, left, right)) self.assertEqual(expected_partners, sorted(got_partners)) def test_collect_parts(self): self.reconstructor._reset_stats() parts = [] for part_info in self.reconstructor.collect_parts(): parts.append(part_info['partition']) self.assertEqual(sorted(parts), [0, 1, 2]) def test_collect_parts_mkdirs_error(self): def blowup_mkdirs(path): raise OSError('Ow!') self.reconstructor._reset_stats() with mock.patch.object(object_reconstructor, 'mkdirs', blowup_mkdirs): rmtree(self.objects_1, ignore_errors=1) parts = [] for part_info in self.reconstructor.collect_parts(): parts.append(part_info['partition']) error_lines = self.logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) log_args, log_kwargs = self.logger.log_dict['error'][0] self.assertEqual(str(log_kwargs['exc_info'][1]), 'Ow!') def test_removes_zbf(self): # After running xfs_repair, a partition directory could become a # zero-byte file. If this happens, the reconstructor should clean it # up, log something, and move on to the next partition. # Surprise! Partition dir 1 is actually a zero-byte file. pol_1_part_1_path = os.path.join(self.objects_1, '1') rmtree(pol_1_part_1_path) with open(pol_1_part_1_path, 'w'): pass self.assertTrue(os.path.isfile(pol_1_part_1_path)) # sanity check # since our collect_parts job is a generator, that yields directly # into build_jobs and then spawns it's safe to do the remove_files # without making reconstructor startup slow self.reconstructor._reset_stats() for part_info in self.reconstructor.collect_parts(): self.assertNotEqual(pol_1_part_1_path, part_info['part_path']) self.assertFalse(os.path.exists(pol_1_part_1_path)) warnings = self.reconstructor.logger.get_lines_for_level('warning') self.assertEqual(1, len(warnings)) self.assertIn('Unexpected entity in data dir:', warnings[0]) def test_ignores_status_file(self): # Following fd86d5a, the auditor will leave status files on each device # until an audit can complete. 
The reconstructor should ignore these @contextmanager def status_files(*auditor_types): status_paths = [os.path.join(self.objects_1, 'auditor_status_%s.json' % typ) for typ in auditor_types] for status_path in status_paths: self.assertFalse(os.path.exists(status_path)) # sanity check with open(status_path, 'w'): pass self.assertTrue(os.path.isfile(status_path)) # sanity check try: yield status_paths finally: for status_path in status_paths: try: os.unlink(status_path) except OSError as e: if e.errno != 2: raise # since our collect_parts job is a generator, that yields directly # into build_jobs and then spawns it's safe to do the remove_files # without making reconstructor startup slow with status_files('ALL', 'ZBF') as status_paths: self.reconstructor._reset_stats() for part_info in self.reconstructor.collect_parts(): self.assertNotIn(part_info['part_path'], status_paths) warnings = self.reconstructor.logger.get_lines_for_level('warning') self.assertEqual(0, len(warnings)) for status_path in status_paths: self.assertTrue(os.path.exists(status_path)) def _make_fake_ssync(self, ssync_calls): class _fake_ssync(object): def __init__(self, daemon, node, job, suffixes, **kwargs): # capture context and generate an available_map of objs context = {} context['node'] = node context['job'] = job context['suffixes'] = suffixes self.suffixes = suffixes self.daemon = daemon self.job = job hash_gen = self.daemon._diskfile_mgr.yield_hashes( self.job['device'], self.job['partition'], self.job['policy'], self.suffixes, frag_index=self.job.get('frag_index')) self.available_map = {} for path, hash_, timestamps in hash_gen: self.available_map[hash_] = timestamps context['available_map'] = self.available_map ssync_calls.append(context) def __call__(self, *args, **kwargs): return True, self.available_map return _fake_ssync def test_delete_reverted(self): # verify reconstructor deletes reverted frag indexes after ssync'ing def visit_obj_dirs(context): for suff in context['suffixes']: suff_dir = os.path.join( context['job']['path'], suff) for root, dirs, files in os.walk(suff_dir): for d in dirs: dirpath = os.path.join(root, d) files = os.listdir(dirpath) yield dirpath, files n_files = n_files_after = 0 # run reconstructor with delete function mocked out to check calls ssync_calls = [] delete_func =\ 'swift.obj.reconstructor.ObjectReconstructor.delete_reverted_objs' with mock.patch('swift.obj.reconstructor.ssync_sender', self._make_fake_ssync(ssync_calls)): with mocked_http_conn(*[200] * 12, body=pickle.dumps({})): with mock.patch(delete_func) as mock_delete: self.reconstructor.reconstruct() expected_calls = [] for context in ssync_calls: if context['job']['job_type'] == REVERT: for dirpath, files in visit_obj_dirs(context): # sanity check - expect some files to be in dir, # may not be for the reverted frag index self.assertTrue(files) n_files += len(files) expected_calls.append(mock.call(context['job'], context['available_map'], context['node']['index'])) mock_delete.assert_has_calls(expected_calls, any_order=True) ssync_calls = [] with mock.patch('swift.obj.reconstructor.ssync_sender', self._make_fake_ssync(ssync_calls)): with mocked_http_conn(*[200] * 12, body=pickle.dumps({})): self.reconstructor.reconstruct() for context in ssync_calls: if context['job']['job_type'] == REVERT: data_file_tail = ('#%s.data' % context['node']['index']) for dirpath, files in visit_obj_dirs(context): n_files_after += len(files) for filename in files: self.assertFalse( filename.endswith(data_file_tail)) # sanity check that some 
files should have been deleted self.assertTrue(n_files > n_files_after) def test_get_part_jobs(self): # yeah, this test code expects a specific setup self.assertEqual(len(self.part_nums), 3) # OK, at this point we should have 4 loaded parts with one jobs = [] for partition in os.listdir(self.ec_obj_path): part_path = os.path.join(self.ec_obj_path, partition) jobs = self.reconstructor._get_part_jobs( self.ec_local_dev, part_path, int(partition), self.ec_policy) self.assert_expected_jobs(partition, jobs) def assertStatCount(self, stat_method, stat_prefix, expected_count): count = count_stats(self.logger, stat_method, stat_prefix) msg = 'expected %s != %s for %s %s' % ( expected_count, count, stat_method, stat_prefix) self.assertEqual(expected_count, count, msg) def test_delete_partition(self): # part 2 is predefined to have all revert jobs part_path = os.path.join(self.objects_1, '2') self.assertTrue(os.access(part_path, os.F_OK)) ssync_calls = [] status = [200] * 2 body = pickle.dumps({}) with mocked_http_conn(*status, body=body) as request_log: with mock.patch('swift.obj.reconstructor.ssync_sender', self._make_fake_ssync(ssync_calls)): self.reconstructor.reconstruct(override_partitions=[2]) expected_replicate_calls = set([ ('127.0.0.0', '/sda1/2/3c1'), ('127.0.0.2', '/sda1/2/061'), ]) found_calls = set((r['ip'], r['path']) for r in request_log.requests) self.assertEqual(expected_replicate_calls, found_calls) expected_ssync_calls = sorted([ ('127.0.0.0', REVERT, 2, ['3c1']), ('127.0.0.2', REVERT, 2, ['061']), ]) self.assertEqual(expected_ssync_calls, sorted(( c['node']['ip'], c['job']['job_type'], c['job']['partition'], c['suffixes'], ) for c in ssync_calls)) expected_stats = { ('increment', 'partition.delete.count.'): 2, ('timing_since', 'partition.delete.timing'): 2, } for stat_key, expected in expected_stats.items(): stat_method, stat_prefix = stat_key self.assertStatCount(stat_method, stat_prefix, expected) # part 2 should be totally empty policy = POLICIES[1] hash_gen = self.reconstructor._df_router[policy].yield_hashes( 'sda1', '2', policy) for path, hash_, ts in hash_gen: self.fail('found %s with %s in %s' % (hash_, ts, path)) # but the partition directory and hashes pkl still exist self.assertTrue(os.access(part_path, os.F_OK)) hashes_path = os.path.join(self.objects_1, '2', diskfile.HASH_FILE) self.assertTrue(os.access(hashes_path, os.F_OK)) # ... but on next pass ssync_calls = [] with mocked_http_conn() as request_log: with mock.patch('swift.obj.reconstructor.ssync_sender', self._make_fake_ssync(ssync_calls)): self.reconstructor.reconstruct(override_partitions=[2]) # reconstruct won't generate any replicate or ssync_calls self.assertFalse(request_log.requests) self.assertFalse(ssync_calls) # and the partition will get removed!
self.assertFalse(os.access(part_path, os.F_OK)) def test_process_job_all_success(self): self.reconstructor._reset_stats() with mock_ssync_sender(): with mocked_http_conn(*[200] * 12, body=pickle.dumps({})): found_jobs = [] for part_info in self.reconstructor.collect_parts(): jobs = self.reconstructor.build_reconstruction_jobs( part_info) found_jobs.extend(jobs) for job in jobs: self.logger._clear() node_count = len(job['sync_to']) self.reconstructor.process_job(job) if job['job_type'] == object_reconstructor.REVERT: self.assertEqual(0, count_stats( self.logger, 'update_stats', 'suffix.hashes')) else: self.assertStatCount('update_stats', 'suffix.hashes', node_count) self.assertEqual(node_count, count_stats( self.logger, 'update_stats', 'suffix.hashes')) self.assertEqual(node_count, count_stats( self.logger, 'update_stats', 'suffix.syncs')) self.assertFalse('error' in self.logger.all_log_lines()) self.assertEqual(self.reconstructor.suffix_sync, 8) self.assertEqual(self.reconstructor.suffix_count, 8) self.assertEqual(len(found_jobs), 6) def test_process_job_all_insufficient_storage(self): self.reconstructor._reset_stats() with mock_ssync_sender(): with mocked_http_conn(*[507] * 8): found_jobs = [] for part_info in self.reconstructor.collect_parts(): jobs = self.reconstructor.build_reconstruction_jobs( part_info) found_jobs.extend(jobs) for job in jobs: self.logger._clear() self.reconstructor.process_job(job) for line in self.logger.get_lines_for_level('error'): self.assertTrue('responded as unmounted' in line) self.assertEqual(0, count_stats( self.logger, 'update_stats', 'suffix.hashes')) self.assertEqual(0, count_stats( self.logger, 'update_stats', 'suffix.syncs')) self.assertEqual(self.reconstructor.suffix_sync, 0) self.assertEqual(self.reconstructor.suffix_count, 0) self.assertEqual(len(found_jobs), 6) def test_process_job_all_client_error(self): self.reconstructor._reset_stats() with mock_ssync_sender(): with mocked_http_conn(*[400] * 8): found_jobs = [] for part_info in self.reconstructor.collect_parts(): jobs = self.reconstructor.build_reconstruction_jobs( part_info) found_jobs.extend(jobs) for job in jobs: self.logger._clear() self.reconstructor.process_job(job) for line in self.logger.get_lines_for_level('error'): self.assertTrue('Invalid response 400' in line) self.assertEqual(0, count_stats( self.logger, 'update_stats', 'suffix.hashes')) self.assertEqual(0, count_stats( self.logger, 'update_stats', 'suffix.syncs')) self.assertEqual(self.reconstructor.suffix_sync, 0) self.assertEqual(self.reconstructor.suffix_count, 0) self.assertEqual(len(found_jobs), 6) def test_process_job_all_timeout(self): self.reconstructor._reset_stats() with mock_ssync_sender(), mocked_http_conn(*[Timeout()] * 8): found_jobs = [] for part_info in self.reconstructor.collect_parts(): jobs = self.reconstructor.build_reconstruction_jobs( part_info) found_jobs.extend(jobs) for job in jobs: self.logger._clear() self.reconstructor.process_job(job) for line in self.logger.get_lines_for_level('error'): self.assertTrue('Timeout (Nones)' in line) self.assertStatCount( 'update_stats', 'suffix.hashes', 0) self.assertStatCount( 'update_stats', 'suffix.syncs', 0) self.assertEqual(self.reconstructor.suffix_sync, 0) self.assertEqual(self.reconstructor.suffix_count, 0) self.assertEqual(len(found_jobs), 6) @patch_policies(with_ec_default=True) class TestObjectReconstructor(unittest.TestCase): def setUp(self): self.policy = POLICIES.default self.policy.object_ring._rtime = time.time() + 3600 self.testdir = tempfile.mkdtemp() 
self.devices = os.path.join(self.testdir, 'devices') self.local_dev = self.policy.object_ring.devs[0] self.ip = self.local_dev['replication_ip'] self.port = self.local_dev['replication_port'] self.conf = { 'devices': self.devices, 'mount_check': False, 'bind_ip': self.ip, 'bind_port': self.port, } self.logger = debug_logger('object-reconstructor') self._configure_reconstructor() self.policy.object_ring.max_more_nodes = \ self.policy.object_ring.replicas self.ts_iter = make_timestamp_iter() def _configure_reconstructor(self, **kwargs): self.conf.update(kwargs) self.reconstructor = object_reconstructor.ObjectReconstructor( self.conf, logger=self.logger) self.reconstructor._reset_stats() # some tests bypass build_reconstruction_jobs and go to process_job # directly, so you end up with a /0 when you try to show the # percentage of complete jobs as ratio of the total job count self.reconstructor.job_count = 1 def tearDown(self): self.reconstructor._reset_stats() self.reconstructor.stats_line() shutil.rmtree(self.testdir) def ts(self): return next(self.ts_iter) def test_collect_parts_skips_non_ec_policy_and_device(self): stub_parts = (371, 78, 419, 834) for policy in POLICIES: datadir = diskfile.get_data_dir(policy) for part in stub_parts: utils.mkdirs(os.path.join( self.devices, self.local_dev['device'], datadir, str(part))) part_infos = list(self.reconstructor.collect_parts()) found_parts = sorted(int(p['partition']) for p in part_infos) self.assertEqual(found_parts, sorted(stub_parts)) for part_info in part_infos: self.assertEqual(part_info['local_dev'], self.local_dev) self.assertEqual(part_info['policy'], self.policy) self.assertEqual(part_info['part_path'], os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(part_info['partition']))) def test_collect_parts_skips_non_local_devs_servers_per_port(self): self._configure_reconstructor(devices=self.devices, mount_check=False, bind_ip=self.ip, bind_port=self.port, servers_per_port=2) device_parts = { 'sda': (374,), 'sdb': (179, 807), # w/one-serv-per-port, same IP alone is local 'sdc': (363, 468, 843), 'sdd': (912,), # "not local" via different IP } for policy in POLICIES: datadir = diskfile.get_data_dir(policy) for dev, parts in device_parts.items(): for part in parts: utils.mkdirs(os.path.join( self.devices, dev, datadir, str(part))) # we're only going to add sda and sdc into the ring local_devs = ('sda', 'sdb', 'sdc') stub_ring_devs = [{ 'device': dev, 'replication_ip': self.ip, 'replication_port': self.port + 1 if dev == 'sdb' else self.port, } for dev in local_devs] stub_ring_devs.append({ 'device': 'sdd', 'replication_ip': '127.0.0.88', # not local via IP 'replication_port': self.port, }) self.reconstructor.bind_ip = '0.0.0.0' # use whataremyips with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch.object(self.policy.object_ring, '_devs', new=stub_ring_devs): part_infos = list(self.reconstructor.collect_parts()) found_parts = sorted(int(p['partition']) for p in part_infos) expected_parts = sorted(itertools.chain( *(device_parts[d] for d in local_devs))) self.assertEqual(found_parts, expected_parts) for part_info in part_infos: self.assertEqual(part_info['policy'], self.policy) self.assertTrue(part_info['local_dev'] in stub_ring_devs) dev = part_info['local_dev'] self.assertEqual(part_info['part_path'], os.path.join(self.devices, dev['device'], diskfile.get_data_dir(self.policy), str(part_info['partition']))) def 
test_collect_parts_multi_device_skips_non_non_local_devs(self): device_parts = { 'sda': (374,), 'sdb': (179, 807), # "not local" via different port 'sdc': (363, 468, 843), 'sdd': (912,), # "not local" via different IP } for policy in POLICIES: datadir = diskfile.get_data_dir(policy) for dev, parts in device_parts.items(): for part in parts: utils.mkdirs(os.path.join( self.devices, dev, datadir, str(part))) # we're only going to add sda and sdc into the ring local_devs = ('sda', 'sdc') stub_ring_devs = [{ 'device': dev, 'replication_ip': self.ip, 'replication_port': self.port, } for dev in local_devs] stub_ring_devs.append({ 'device': 'sdb', 'replication_ip': self.ip, 'replication_port': self.port + 1, # not local via port }) stub_ring_devs.append({ 'device': 'sdd', 'replication_ip': '127.0.0.88', # not local via IP 'replication_port': self.port, }) self.reconstructor.bind_ip = '0.0.0.0' # use whataremyips with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch.object(self.policy.object_ring, '_devs', new=stub_ring_devs): part_infos = list(self.reconstructor.collect_parts()) found_parts = sorted(int(p['partition']) for p in part_infos) expected_parts = sorted(itertools.chain( *(device_parts[d] for d in local_devs))) self.assertEqual(found_parts, expected_parts) for part_info in part_infos: self.assertEqual(part_info['policy'], self.policy) self.assertTrue(part_info['local_dev'] in stub_ring_devs) dev = part_info['local_dev'] self.assertEqual(part_info['part_path'], os.path.join(self.devices, dev['device'], diskfile.get_data_dir(self.policy), str(part_info['partition']))) def test_collect_parts_multi_device_skips_non_ring_devices(self): device_parts = { 'sda': (374,), 'sdc': (363, 468, 843), } for policy in POLICIES: datadir = diskfile.get_data_dir(policy) for dev, parts in device_parts.items(): for part in parts: utils.mkdirs(os.path.join( self.devices, dev, datadir, str(part))) # we're only going to add sda and sdc into the ring local_devs = ('sda', 'sdc') stub_ring_devs = [{ 'device': dev, 'replication_ip': self.ip, 'replication_port': self.port, } for dev in local_devs] self.reconstructor.bind_ip = '0.0.0.0' # use whataremyips with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch.object(self.policy.object_ring, '_devs', new=stub_ring_devs): part_infos = list(self.reconstructor.collect_parts()) found_parts = sorted(int(p['partition']) for p in part_infos) expected_parts = sorted(itertools.chain( *(device_parts[d] for d in local_devs))) self.assertEqual(found_parts, expected_parts) for part_info in part_infos: self.assertEqual(part_info['policy'], self.policy) self.assertTrue(part_info['local_dev'] in stub_ring_devs) dev = part_info['local_dev'] self.assertEqual(part_info['part_path'], os.path.join(self.devices, dev['device'], diskfile.get_data_dir(self.policy), str(part_info['partition']))) def test_collect_parts_mount_check(self): # each device has one part in it local_devs = ('sda', 'sdb') for i, dev in enumerate(local_devs): datadir = diskfile.get_data_dir(self.policy) utils.mkdirs(os.path.join( self.devices, dev, datadir, str(i))) stub_ring_devs = [{ 'device': dev, 'replication_ip': self.ip, 'replication_port': self.port } for dev in local_devs] with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch.object(self.policy.object_ring, '_devs', new=stub_ring_devs): part_infos = list(self.reconstructor.collect_parts()) self.assertEqual(2, len(part_infos)) # sanity 
self.assertEqual(set(int(p['partition']) for p in part_infos), set([0, 1])) paths = [] def fake_check_mount(devices, device): paths.append(os.path.join(devices, device)) return False with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch.object(self.policy.object_ring, '_devs', new=stub_ring_devs), \ mock.patch('swift.obj.diskfile.check_mount', fake_check_mount): part_infos = list(self.reconstructor.collect_parts()) self.assertEqual(2, len(part_infos)) # sanity, same jobs self.assertEqual(set(int(p['partition']) for p in part_infos), set([0, 1])) # ... because ismount was not called self.assertEqual(paths, []) # ... now with mount check self._configure_reconstructor(mount_check=True) self.assertTrue(self.reconstructor.mount_check) for policy in POLICIES: self.assertTrue(self.reconstructor._df_router[policy].mount_check) with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch.object(self.policy.object_ring, '_devs', new=stub_ring_devs), \ mock.patch('swift.obj.diskfile.check_mount', fake_check_mount): part_infos = list(self.reconstructor.collect_parts()) self.assertEqual([], part_infos) # sanity, no jobs # ... because fake_ismount returned False for both paths self.assertEqual(set(paths), set([ os.path.join(self.devices, dev) for dev in local_devs])) def fake_check_mount(devices, device): path = os.path.join(devices, device) if path.endswith('sda'): return True else: return False with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch.object(self.policy.object_ring, '_devs', new=stub_ring_devs), \ mock.patch('swift.obj.diskfile.check_mount', fake_check_mount): part_infos = list(self.reconstructor.collect_parts()) self.assertEqual(1, len(part_infos)) # only sda picked up (part 0) self.assertEqual(part_infos[0]['partition'], 0) def test_collect_parts_cleans_tmp(self): local_devs = ('sda', 'sdc') stub_ring_devs = [{ 'device': dev, 'replication_ip': self.ip, 'replication_port': self.port } for dev in local_devs] for device in local_devs: utils.mkdirs(os.path.join(self.devices, device)) fake_unlink = mock.MagicMock() self.reconstructor.reclaim_age = 1000 now = time.time() with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch('swift.obj.reconstructor.time.time', return_value=now), \ mock.patch.object(self.policy.object_ring, '_devs', new=stub_ring_devs), \ mock.patch('swift.obj.reconstructor.unlink_older_than', fake_unlink): self.assertEqual([], list(self.reconstructor.collect_parts())) # each local device hash unlink_older_than called on it, # with now - self.reclaim_age tmpdir = diskfile.get_tmp_dir(self.policy) expected = now - 1000 self.assertEqual(fake_unlink.mock_calls, [ mock.call(os.path.join(self.devices, dev, tmpdir), expected) for dev in local_devs]) def test_collect_parts_creates_datadir(self): # create just the device path dev_path = os.path.join(self.devices, self.local_dev['device']) utils.mkdirs(dev_path) with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]): self.assertEqual([], list(self.reconstructor.collect_parts())) datadir_path = os.path.join(dev_path, diskfile.get_data_dir(self.policy)) self.assertTrue(os.path.exists(datadir_path)) def test_collect_parts_creates_datadir_error(self): # create just the device path datadir_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy)) utils.mkdirs(os.path.dirname(datadir_path)) with 
mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch('swift.obj.reconstructor.mkdirs', side_effect=OSError('kaboom!')): self.assertEqual([], list(self.reconstructor.collect_parts())) error_lines = self.logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) line = error_lines[0] self.assertTrue('Unable to create' in line) self.assertTrue(datadir_path in line) def test_collect_parts_skips_invalid_paths(self): datadir_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy)) utils.mkdirs(os.path.dirname(datadir_path)) with open(datadir_path, 'w') as f: f.write('junk') with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]): self.assertEqual([], list(self.reconstructor.collect_parts())) self.assertTrue(os.path.exists(datadir_path)) error_lines = self.logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) line = error_lines[0] self.assertTrue('Unable to list partitions' in line) self.assertTrue(datadir_path in line) def test_collect_parts_removes_non_partition_files(self): # create some junk next to partitions datadir_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy)) num_parts = 3 for part in range(num_parts): utils.mkdirs(os.path.join(datadir_path, str(part))) junk_file = os.path.join(datadir_path, 'junk') with open(junk_file, 'w') as f: f.write('junk') with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]): part_infos = list(self.reconstructor.collect_parts()) # the file is not included in the part_infos map self.assertEqual(sorted(p['part_path'] for p in part_infos), sorted([os.path.join(datadir_path, str(i)) for i in range(num_parts)])) # and gets cleaned up self.assertFalse(os.path.exists(junk_file)) def test_collect_parts_overrides(self): # setup multiple devices, with multiple parts device_parts = { 'sda': (374, 843), 'sdb': (179, 807), 'sdc': (363, 468, 843), } datadir = diskfile.get_data_dir(self.policy) for dev, parts in device_parts.items(): for part in parts: utils.mkdirs(os.path.join( self.devices, dev, datadir, str(part))) # we're only going to add sda and sdc into the ring local_devs = ('sda', 'sdc') stub_ring_devs = [{ 'device': dev, 'replication_ip': self.ip, 'replication_port': self.port } for dev in local_devs] expected = ( ({}, [ ('sda', 374), ('sda', 843), ('sdc', 363), ('sdc', 468), ('sdc', 843), ]), ({'override_devices': ['sda', 'sdc']}, [ ('sda', 374), ('sda', 843), ('sdc', 363), ('sdc', 468), ('sdc', 843), ]), ({'override_devices': ['sdc']}, [ ('sdc', 363), ('sdc', 468), ('sdc', 843), ]), ({'override_devices': ['sda']}, [ ('sda', 374), ('sda', 843), ]), ({'override_devices': ['sdx']}, []), ({'override_partitions': [374]}, [ ('sda', 374), ]), ({'override_partitions': [843]}, [ ('sda', 843), ('sdc', 843), ]), ({'override_partitions': [843], 'override_devices': ['sda']}, [ ('sda', 843), ]), ) with mock.patch('swift.obj.reconstructor.whataremyips', return_value=[self.ip]), \ mock.patch.object(self.policy.object_ring, '_devs', new=stub_ring_devs): for kwargs, expected_parts in expected: part_infos = list(self.reconstructor.collect_parts(**kwargs)) expected_paths = set( os.path.join(self.devices, dev, datadir, str(part)) for dev, part in expected_parts) found_paths = set(p['part_path'] for p in part_infos) msg = 'expected %r != %r for %r' % ( expected_paths, found_paths, kwargs) self.assertEqual(expected_paths, found_paths, msg) def 
test_build_jobs_creates_empty_hashes(self): part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), '0') utils.mkdirs(part_path) part_info = { 'local_dev': self.local_dev, 'policy': self.policy, 'partition': 0, 'part_path': part_path, } jobs = self.reconstructor.build_reconstruction_jobs(part_info) self.assertEqual(1, len(jobs)) job = jobs[0] self.assertEqual(job['job_type'], object_reconstructor.SYNC) self.assertEqual(job['frag_index'], 0) self.assertEqual(job['suffixes'], []) self.assertEqual(len(job['sync_to']), 2) self.assertEqual(job['partition'], 0) self.assertEqual(job['path'], part_path) self.assertEqual(job['hashes'], {}) self.assertEqual(job['policy'], self.policy) self.assertEqual(job['local_dev'], self.local_dev) self.assertEqual(job['device'], self.local_dev['device']) hashes_file = os.path.join(part_path, diskfile.HASH_FILE) self.assertTrue(os.path.exists(hashes_file)) suffixes = self.reconstructor._get_hashes( self.policy, part_path, do_listdir=True) self.assertEqual(suffixes, {}) def test_build_jobs_no_hashes(self): part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), '0') part_info = { 'local_dev': self.local_dev, 'policy': self.policy, 'partition': 0, 'part_path': part_path, } stub_hashes = {} with mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)): jobs = self.reconstructor.build_reconstruction_jobs(part_info) self.assertEqual(1, len(jobs)) job = jobs[0] self.assertEqual(job['job_type'], object_reconstructor.SYNC) self.assertEqual(job['frag_index'], 0) self.assertEqual(job['suffixes'], []) self.assertEqual(len(job['sync_to']), 2) self.assertEqual(job['partition'], 0) self.assertEqual(job['path'], part_path) self.assertEqual(job['hashes'], {}) self.assertEqual(job['policy'], self.policy) self.assertEqual(job['local_dev'], self.local_dev) self.assertEqual(job['device'], self.local_dev['device']) def test_build_jobs_primary(self): ring = self.policy.object_ring = FabricatedRing() # find a partition for which we're a primary for partition in range(2 ** ring.part_power): part_nodes = ring.get_part_nodes(partition) try: frag_index = [n['id'] for n in part_nodes].index( self.local_dev['id']) except ValueError: pass else: break else: self.fail("the ring doesn't work: %r" % ring._replica2part2dev_id) part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) part_info = { 'local_dev': self.local_dev, 'policy': self.policy, 'partition': partition, 'part_path': part_path, } stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } with mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)): jobs = self.reconstructor.build_reconstruction_jobs(part_info) self.assertEqual(1, len(jobs)) job = jobs[0] self.assertEqual(job['job_type'], object_reconstructor.SYNC) self.assertEqual(job['frag_index'], frag_index) self.assertEqual(job['suffixes'], stub_hashes.keys()) self.assertEqual(set([n['index'] for n in job['sync_to']]), set([(frag_index + 1) % ring.replicas, (frag_index - 1) % ring.replicas])) self.assertEqual(job['partition'], partition) self.assertEqual(job['path'], part_path) self.assertEqual(job['hashes'], stub_hashes) self.assertEqual(job['policy'], self.policy) self.assertEqual(job['local_dev'], self.local_dev) self.assertEqual(job['device'], self.local_dev['device']) def 
test_build_jobs_handoff(self): ring = self.policy.object_ring = FabricatedRing() # find a partition for which we're a handoff for partition in range(2 ** ring.part_power): part_nodes = ring.get_part_nodes(partition) if self.local_dev['id'] not in [n['id'] for n in part_nodes]: break else: self.fail("the ring doesn't work: %r" % ring._replica2part2dev_id) part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) part_info = { 'local_dev': self.local_dev, 'policy': self.policy, 'partition': partition, 'part_path': part_path, } # since this part doesn't belong on us it doesn't matter what # frag_index we have frag_index = random.randint(0, ring.replicas - 1) stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {None: 'hash'}, } with mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)): jobs = self.reconstructor.build_reconstruction_jobs(part_info) self.assertEqual(1, len(jobs)) job = jobs[0] self.assertEqual(job['job_type'], object_reconstructor.REVERT) self.assertEqual(job['frag_index'], frag_index) self.assertEqual(sorted(job['suffixes']), sorted(stub_hashes.keys())) self.assertEqual(len(job['sync_to']), 1) self.assertEqual(job['sync_to'][0]['index'], frag_index) self.assertEqual(job['path'], part_path) self.assertEqual(job['partition'], partition) self.assertEqual(sorted(job['hashes']), sorted(stub_hashes)) self.assertEqual(job['local_dev'], self.local_dev) def test_build_jobs_mixed(self): ring = self.policy.object_ring = FabricatedRing() # find a partition for which we're a primary for partition in range(2 ** ring.part_power): part_nodes = ring.get_part_nodes(partition) try: frag_index = [n['id'] for n in part_nodes].index( self.local_dev['id']) except ValueError: pass else: break else: self.fail("the ring doesn't work: %r" % ring._replica2part2dev_id) part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) part_info = { 'local_dev': self.local_dev, 'policy': self.policy, 'partition': partition, 'part_path': part_path, } other_frag_index = random.choice([f for f in range(ring.replicas) if f != frag_index]) stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, '456': {other_frag_index: 'hash', None: 'hash'}, 'abc': {None: 'hash'}, } with mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)): jobs = self.reconstructor.build_reconstruction_jobs(part_info) self.assertEqual(2, len(jobs)) sync_jobs, revert_jobs = [], [] for job in jobs: self.assertEqual(job['partition'], partition) self.assertEqual(job['path'], part_path) self.assertEqual(sorted(job['hashes']), sorted(stub_hashes)) self.assertEqual(job['policy'], self.policy) self.assertEqual(job['local_dev'], self.local_dev) self.assertEqual(job['device'], self.local_dev['device']) { object_reconstructor.SYNC: sync_jobs, object_reconstructor.REVERT: revert_jobs, }[job['job_type']].append(job) self.assertEqual(1, len(sync_jobs)) job = sync_jobs[0] self.assertEqual(job['frag_index'], frag_index) self.assertEqual(sorted(job['suffixes']), sorted(['123', 'abc'])) self.assertEqual(len(job['sync_to']), 2) self.assertEqual(set([n['index'] for n in job['sync_to']]), set([(frag_index + 1) % ring.replicas, (frag_index - 1) % ring.replicas])) self.assertEqual(1, len(revert_jobs)) job = revert_jobs[0] self.assertEqual(job['frag_index'], other_frag_index) self.assertEqual(job['suffixes'], ['456']) self.assertEqual(len(job['sync_to']), 
1) self.assertEqual(job['sync_to'][0]['index'], other_frag_index) def test_build_jobs_revert_only_tombstones(self): ring = self.policy.object_ring = FabricatedRing() # find a partition for which we're a handoff for partition in range(2 ** ring.part_power): part_nodes = ring.get_part_nodes(partition) if self.local_dev['id'] not in [n['id'] for n in part_nodes]: break else: self.fail("the ring doesn't work: %r" % ring._replica2part2dev_id) part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) part_info = { 'local_dev': self.local_dev, 'policy': self.policy, 'partition': partition, 'part_path': part_path, } # we have no fragment index to hint the jobs where they belong stub_hashes = { '123': {None: 'hash'}, 'abc': {None: 'hash'}, } with mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)): jobs = self.reconstructor.build_reconstruction_jobs(part_info) self.assertEqual(len(jobs), 1) job = jobs[0] expected = { 'job_type': object_reconstructor.REVERT, 'frag_index': None, 'suffixes': stub_hashes.keys(), 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'local_dev': self.local_dev, 'device': self.local_dev['device'], } self.assertEqual(ring.replica_count, len(job['sync_to'])) for k, v in expected.items(): msg = 'expected %s != %s for %s' % ( v, job[k], k) self.assertEqual(v, job[k], msg) def test_get_suffix_delta(self): # different local_suff = {'123': {None: 'abc', 0: 'def'}} remote_suff = {'456': {None: 'ghi', 0: 'jkl'}} local_index = 0 remote_index = 0 suffs = self.reconstructor.get_suffix_delta(local_suff, local_index, remote_suff, remote_index) self.assertEqual(suffs, ['123']) # now the same remote_suff = {'123': {None: 'abc', 0: 'def'}} suffs = self.reconstructor.get_suffix_delta(local_suff, local_index, remote_suff, remote_index) self.assertEqual(suffs, []) # now with a mis-matched None key (missing durable) remote_suff = {'123': {None: 'ghi', 0: 'def'}} suffs = self.reconstructor.get_suffix_delta(local_suff, local_index, remote_suff, remote_index) self.assertEqual(suffs, ['123']) # now with bogus local index local_suff = {'123': {None: 'abc', 99: 'def'}} remote_suff = {'456': {None: 'ghi', 0: 'jkl'}} suffs = self.reconstructor.get_suffix_delta(local_suff, local_index, remote_suff, remote_index) self.assertEqual(suffs, ['123']) def test_process_job_primary_in_sync(self): replicas = self.policy.object_ring.replicas frag_index = random.randint(0, replicas - 1) sync_to = [n for n in self.policy.object_ring.devs if n != self.local_dev][:2] # setup left and right hashes stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } left_index = sync_to[0]['index'] = (frag_index - 1) % replicas left_hashes = { '123': {left_index: 'hash', None: 'hash'}, 'abc': {left_index: 'hash', None: 'hash'}, } right_index = sync_to[1]['index'] = (frag_index + 1) % replicas right_hashes = { '123': {right_index: 'hash', None: 'hash'}, 'abc': {right_index: 'hash', None: 'hash'}, } partition = 0 part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) job = { 'job_type': object_reconstructor.SYNC, 'frag_index': frag_index, 'suffixes': stub_hashes.keys(), 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'local_dev': self.local_dev, } responses = [(200, pickle.dumps(hashes)) for hashes in ( left_hashes, 
right_hashes)] codes, body_iter = zip(*responses) ssync_calls = [] with mock_ssync_sender(ssync_calls), \ mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)), \ mocked_http_conn(*codes, body_iter=body_iter) as request_log: self.reconstructor.process_job(job) expected_suffix_calls = set([ ('10.0.0.1', '/sdb/0'), ('10.0.0.2', '/sdc/0'), ]) self.assertEqual(expected_suffix_calls, set((r['ip'], r['path']) for r in request_log.requests)) self.assertEqual(len(ssync_calls), 0) def test_process_job_primary_not_in_sync(self): replicas = self.policy.object_ring.replicas frag_index = random.randint(0, replicas - 1) sync_to = [n for n in self.policy.object_ring.devs if n != self.local_dev][:2] # setup left and right hashes stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } sync_to[0]['index'] = (frag_index - 1) % replicas left_hashes = {} sync_to[1]['index'] = (frag_index + 1) % replicas right_hashes = {} partition = 0 part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) job = { 'job_type': object_reconstructor.SYNC, 'frag_index': frag_index, 'suffixes': stub_hashes.keys(), 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'local_dev': self.local_dev, } responses = [(200, pickle.dumps(hashes)) for hashes in ( left_hashes, left_hashes, right_hashes, right_hashes)] codes, body_iter = zip(*responses) ssync_calls = [] with mock_ssync_sender(ssync_calls), \ mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)), \ mocked_http_conn(*codes, body_iter=body_iter) as request_log: self.reconstructor.process_job(job) expected_suffix_calls = set([ ('10.0.0.1', '/sdb/0'), ('10.0.0.1', '/sdb/0/123-abc'), ('10.0.0.2', '/sdc/0'), ('10.0.0.2', '/sdc/0/123-abc'), ]) self.assertEqual(expected_suffix_calls, set((r['ip'], r['path']) for r in request_log.requests)) expected_ssync_calls = sorted([ ('10.0.0.1', 0, set(['123', 'abc'])), ('10.0.0.2', 0, set(['123', 'abc'])), ]) self.assertEqual(expected_ssync_calls, sorted(( c['node']['ip'], c['job']['partition'], set(c['suffixes']), ) for c in ssync_calls)) def test_process_job_sync_missing_durable(self): replicas = self.policy.object_ring.replicas frag_index = random.randint(0, replicas - 1) sync_to = [n for n in self.policy.object_ring.devs if n != self.local_dev][:2] # setup left and right hashes stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } # left hand side is in sync left_index = sync_to[0]['index'] = (frag_index - 1) % replicas left_hashes = { '123': {left_index: 'hash', None: 'hash'}, 'abc': {left_index: 'hash', None: 'hash'}, } # right hand side has fragment, but no durable (None key is whack) right_index = sync_to[1]['index'] = (frag_index + 1) % replicas right_hashes = { '123': {right_index: 'hash', None: 'hash'}, 'abc': {right_index: 'hash', None: 'different-because-durable'}, } partition = 0 part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) job = { 'job_type': object_reconstructor.SYNC, 'frag_index': frag_index, 'suffixes': stub_hashes.keys(), 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'local_dev': self.local_dev, } responses = [(200, pickle.dumps(hashes)) for hashes in ( left_hashes, right_hashes, right_hashes)] codes, 
body_iter = zip(*responses) ssync_calls = [] with mock_ssync_sender(ssync_calls), \ mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)), \ mocked_http_conn(*codes, body_iter=body_iter) as request_log: self.reconstructor.process_job(job) expected_suffix_calls = set([ ('10.0.0.1', '/sdb/0'), ('10.0.0.2', '/sdc/0'), ('10.0.0.2', '/sdc/0/abc'), ]) self.assertEqual(expected_suffix_calls, set((r['ip'], r['path']) for r in request_log.requests)) expected_ssync_calls = sorted([ ('10.0.0.2', 0, ['abc']), ]) self.assertEqual(expected_ssync_calls, sorted(( c['node']['ip'], c['job']['partition'], c['suffixes'], ) for c in ssync_calls)) def test_process_job_primary_some_in_sync(self): replicas = self.policy.object_ring.replicas frag_index = random.randint(0, replicas - 1) sync_to = [n for n in self.policy.object_ring.devs if n != self.local_dev][:2] # setup left and right hashes stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } left_index = sync_to[0]['index'] = (frag_index - 1) % replicas left_hashes = { '123': {left_index: 'hashX', None: 'hash'}, 'abc': {left_index: 'hash', None: 'hash'}, } right_index = sync_to[1]['index'] = (frag_index + 1) % replicas right_hashes = { '123': {right_index: 'hash', None: 'hash'}, } partition = 0 part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) job = { 'job_type': object_reconstructor.SYNC, 'frag_index': frag_index, 'suffixes': stub_hashes.keys(), 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'local_dev': self.local_dev, } responses = [(200, pickle.dumps(hashes)) for hashes in ( left_hashes, left_hashes, right_hashes, right_hashes)] codes, body_iter = zip(*responses) ssync_calls = [] with mock_ssync_sender(ssync_calls), \ mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)), \ mocked_http_conn(*codes, body_iter=body_iter) as request_log: self.reconstructor.process_job(job) expected_suffix_calls = set([ ('10.0.0.1', '/sdb/0'), ('10.0.0.1', '/sdb/0/123'), ('10.0.0.2', '/sdc/0'), ('10.0.0.2', '/sdc/0/abc'), ]) self.assertEqual(expected_suffix_calls, set((r['ip'], r['path']) for r in request_log.requests)) self.assertEqual(len(ssync_calls), 2) self.assertEqual(set(c['node']['index'] for c in ssync_calls), set([left_index, right_index])) for call in ssync_calls: if call['node']['index'] == left_index: self.assertEqual(call['suffixes'], ['123']) elif call['node']['index'] == right_index: self.assertEqual(call['suffixes'], ['abc']) else: self.fail('unexpected call %r' % call) def test_process_job_primary_down(self): replicas = self.policy.object_ring.replicas partition = 0 frag_index = random.randint(0, replicas - 1) stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } part_nodes = self.policy.object_ring.get_part_nodes(partition) sync_to = part_nodes[:2] part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) job = { 'job_type': object_reconstructor.SYNC, 'frag_index': frag_index, 'suffixes': stub_hashes.keys(), 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'device': self.local_dev['device'], 'local_dev': self.local_dev, } non_local = {'called': 0} def ssync_response_callback(*args): # in this test, ssync fails on the first (primary 
sync_to) node if non_local['called'] >= 1: return True, {} non_local['called'] += 1 return False, {} expected_suffix_calls = set() for node in part_nodes[:3]: expected_suffix_calls.update([ (node['replication_ip'], '/%s/0' % node['device']), (node['replication_ip'], '/%s/0/123-abc' % node['device']), ]) ssync_calls = [] with mock_ssync_sender(ssync_calls, response_callback=ssync_response_callback), \ mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)), \ mocked_http_conn(*[200] * len(expected_suffix_calls), body=pickle.dumps({})) as request_log: self.reconstructor.process_job(job) found_suffix_calls = set((r['ip'], r['path']) for r in request_log.requests) self.assertEqual(expected_suffix_calls, found_suffix_calls) expected_ssync_calls = sorted([ ('10.0.0.0', 0, set(['123', 'abc'])), ('10.0.0.1', 0, set(['123', 'abc'])), ('10.0.0.2', 0, set(['123', 'abc'])), ]) found_ssync_calls = sorted(( c['node']['ip'], c['job']['partition'], set(c['suffixes']), ) for c in ssync_calls) self.assertEqual(expected_ssync_calls, found_ssync_calls) def test_process_job_suffix_call_errors(self): replicas = self.policy.object_ring.replicas partition = 0 frag_index = random.randint(0, replicas - 1) stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } part_nodes = self.policy.object_ring.get_part_nodes(partition) sync_to = part_nodes[:2] part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) job = { 'job_type': object_reconstructor.SYNC, 'frag_index': frag_index, 'suffixes': stub_hashes.keys(), 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'device': self.local_dev['device'], 'local_dev': self.local_dev, } expected_suffix_calls = set(( node['replication_ip'], '/%s/0' % node['device'] ) for node in part_nodes) possible_errors = [404, 507, Timeout(), Exception('kaboom!')] codes = [random.choice(possible_errors) for r in expected_suffix_calls] ssync_calls = [] with mock_ssync_sender(ssync_calls), \ mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)), \ mocked_http_conn(*codes) as request_log: self.reconstructor.process_job(job) found_suffix_calls = set((r['ip'], r['path']) for r in request_log.requests) self.assertEqual(expected_suffix_calls, found_suffix_calls) self.assertFalse(ssync_calls) def test_process_job_handoff(self): replicas = self.policy.object_ring.replicas frag_index = random.randint(0, replicas - 1) sync_to = [random.choice([n for n in self.policy.object_ring.devs if n != self.local_dev])] sync_to[0]['index'] = frag_index stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } partition = 0 part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) job = { 'job_type': object_reconstructor.REVERT, 'frag_index': frag_index, 'suffixes': stub_hashes.keys(), 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'local_dev': self.local_dev, } ssync_calls = [] with mock_ssync_sender(ssync_calls), \ mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)), \ mocked_http_conn(200, body=pickle.dumps({})) as request_log: self.reconstructor.process_job(job) expected_suffix_calls = set([ (sync_to[0]['ip'], '/%s/0/123-abc' % sync_to[0]['device']), ]) 
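# the REVERT job pushes its suffixes to just the one primary holding this frag index, so exactly one remote suffix-hash update and one ssync transfer are expected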
found_suffix_calls = set((r['ip'], r['path']) for r in request_log.requests) self.assertEqual(expected_suffix_calls, found_suffix_calls) self.assertEqual(len(ssync_calls), 1) call = ssync_calls[0] self.assertEqual(call['node'], sync_to[0]) self.assertEqual(set(call['suffixes']), set(['123', 'abc'])) def test_process_job_revert_to_handoff(self): replicas = self.policy.object_ring.replicas frag_index = random.randint(0, replicas - 1) sync_to = [random.choice([n for n in self.policy.object_ring.devs if n != self.local_dev])] sync_to[0]['index'] = frag_index partition = 0 handoff = next(self.policy.object_ring.get_more_nodes(partition)) stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) job = { 'job_type': object_reconstructor.REVERT, 'frag_index': frag_index, 'suffixes': stub_hashes.keys(), 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'local_dev': self.local_dev, } non_local = {'called': 0} def ssync_response_callback(*args): # in this test, ssync fails on the first (primary sync_to) node if non_local['called'] >= 1: return True, {} non_local['called'] += 1 return False, {} expected_suffix_calls = set([ (node['replication_ip'], '/%s/0/123-abc' % node['device']) for node in (sync_to[0], handoff) ]) ssync_calls = [] with mock_ssync_sender(ssync_calls, response_callback=ssync_response_callback), \ mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)), \ mocked_http_conn(*[200] * len(expected_suffix_calls), body=pickle.dumps({})) as request_log: self.reconstructor.process_job(job) found_suffix_calls = set((r['ip'], r['path']) for r in request_log.requests) self.assertEqual(expected_suffix_calls, found_suffix_calls) self.assertEqual(len(ssync_calls), len(expected_suffix_calls)) call = ssync_calls[0] self.assertEqual(call['node'], sync_to[0]) self.assertEqual(set(call['suffixes']), set(['123', 'abc'])) def test_process_job_revert_is_handoff(self): replicas = self.policy.object_ring.replicas frag_index = random.randint(0, replicas - 1) sync_to = [random.choice([n for n in self.policy.object_ring.devs if n != self.local_dev])] sync_to[0]['index'] = frag_index partition = 0 handoff_nodes = list(self.policy.object_ring.get_more_nodes(partition)) stub_hashes = { '123': {frag_index: 'hash', None: 'hash'}, 'abc': {frag_index: 'hash', None: 'hash'}, } part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) job = { 'job_type': object_reconstructor.REVERT, 'frag_index': frag_index, 'suffixes': stub_hashes.keys(), 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': stub_hashes, 'policy': self.policy, 'local_dev': handoff_nodes[-1], } def ssync_response_callback(*args): # in this test ssync always fails, until we encounter ourselves in # the list of possible handoff's to sync to return False, {} expected_suffix_calls = set([ (sync_to[0]['replication_ip'], '/%s/0/123-abc' % sync_to[0]['device']) ] + [ (node['replication_ip'], '/%s/0/123-abc' % node['device']) for node in handoff_nodes[:-1] ]) ssync_calls = [] with mock_ssync_sender(ssync_calls, response_callback=ssync_response_callback), \ mock.patch('swift.obj.diskfile.ECDiskFileManager._get_hashes', return_value=(None, stub_hashes)), \ mocked_http_conn(*[200] * len(expected_suffix_calls), 
body=pickle.dumps({})) as request_log: self.reconstructor.process_job(job) found_suffix_calls = set((r['ip'], r['path']) for r in request_log.requests) self.assertEqual(expected_suffix_calls, found_suffix_calls) # this is ssync call to primary (which fails) plus the ssync call to # all of the handoffs (except the last one - which is the local_dev) self.assertEqual(len(ssync_calls), len(handoff_nodes)) call = ssync_calls[0] self.assertEqual(call['node'], sync_to[0]) self.assertEqual(set(call['suffixes']), set(['123', 'abc'])) def test_process_job_revert_cleanup(self): replicas = self.policy.object_ring.replicas frag_index = random.randint(0, replicas - 1) sync_to = [random.choice([n for n in self.policy.object_ring.devs if n != self.local_dev])] sync_to[0]['index'] = frag_index partition = 0 part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) os.makedirs(part_path) df_mgr = self.reconstructor._df_router[self.policy] df = df_mgr.get_diskfile(self.local_dev['device'], partition, 'a', 'c', 'data-obj', policy=self.policy) ts = self.ts() with df.create() as writer: test_data = 'test data' writer.write(test_data) metadata = { 'X-Timestamp': ts.internal, 'Content-Length': len(test_data), 'Etag': md5(test_data).hexdigest(), 'X-Object-Sysmeta-Ec-Frag-Index': frag_index, } writer.put(metadata) writer.commit(ts) ohash = os.path.basename(df._datadir) suffix = os.path.basename(os.path.dirname(df._datadir)) job = { 'job_type': object_reconstructor.REVERT, 'frag_index': frag_index, 'suffixes': [suffix], 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': {}, 'policy': self.policy, 'local_dev': self.local_dev, } def ssync_response_callback(*args): return True, {ohash: {'ts_data': ts}} ssync_calls = [] with mock_ssync_sender(ssync_calls, response_callback=ssync_response_callback): with mocked_http_conn(200, body=pickle.dumps({})) as request_log: self.reconstructor.process_job(job) self.assertEqual([ (sync_to[0]['replication_ip'], '/%s/0/%s' % ( sync_to[0]['device'], suffix)), ], [ (r['ip'], r['path']) for r in request_log.requests ]) # hashpath is still there, but only the durable remains files = os.listdir(df._datadir) self.assertEqual(1, len(files)) self.assertTrue(files[0].endswith('.durable')) # and more to the point, the next suffix recalc will clean it up df_mgr = self.reconstructor._df_router[self.policy] df_mgr.get_hashes(self.local_dev['device'], str(partition), [], self.policy) self.assertFalse(os.access(df._datadir, os.F_OK)) def test_process_job_revert_cleanup_tombstone(self): sync_to = [random.choice([n for n in self.policy.object_ring.devs if n != self.local_dev])] partition = 0 part_path = os.path.join(self.devices, self.local_dev['device'], diskfile.get_data_dir(self.policy), str(partition)) os.makedirs(part_path) df_mgr = self.reconstructor._df_router[self.policy] df = df_mgr.get_diskfile(self.local_dev['device'], partition, 'a', 'c', 'data-obj', policy=self.policy) ts = self.ts() df.delete(ts) ohash = os.path.basename(df._datadir) suffix = os.path.basename(os.path.dirname(df._datadir)) job = { 'job_type': object_reconstructor.REVERT, 'frag_index': None, 'suffixes': [suffix], 'sync_to': sync_to, 'partition': partition, 'path': part_path, 'hashes': {}, 'policy': self.policy, 'local_dev': self.local_dev, } def ssync_response_callback(*args): return True, {ohash: {'ts_data': ts}} ssync_calls = [] with mock_ssync_sender(ssync_calls, response_callback=ssync_response_callback): with mocked_http_conn(200, 
body=pickle.dumps({})) as request_log: self.reconstructor.process_job(job) self.assertEqual([ (sync_to[0]['replication_ip'], '/%s/0/%s' % ( sync_to[0]['device'], suffix)), ], [ (r['ip'], r['path']) for r in request_log.requests ]) # hashpath is still there, but it's empty self.assertEqual([], os.listdir(df._datadir)) def test_reconstruct_fa_no_errors(self): job = { 'partition': 0, 'policy': self.policy, } part_nodes = self.policy.object_ring.get_part_nodes(0) node = part_nodes[1] metadata = { 'name': '/a/c/o', 'Content-Length': '0', 'ETag': 'etag', } test_data = ('rebuild' * self.policy.ec_segment_size)[:-777] etag = md5(test_data).hexdigest() ec_archive_bodies = make_ec_archive_bodies(self.policy, test_data) broken_body = ec_archive_bodies.pop(1) responses = list() for body in ec_archive_bodies: headers = get_header_frag_index(self, body) headers.update({'X-Object-Sysmeta-Ec-Etag': etag}) responses.append((200, body, headers)) # make a hook point at # swift.obj.reconstructor.ObjectReconstructor._get_response called_headers = [] orig_func = object_reconstructor.ObjectReconstructor._get_response def _get_response_hook(self, node, part, path, headers, policy): called_headers.append(headers) return orig_func(self, node, part, path, headers, policy) codes, body_iter, headers = zip(*responses) get_response_path = \ 'swift.obj.reconstructor.ObjectReconstructor._get_response' with mock.patch(get_response_path, _get_response_hook): with mocked_http_conn( *codes, body_iter=body_iter, headers=headers): df = self.reconstructor.reconstruct_fa( job, node, metadata) self.assertEqual(0, df.content_length) fixed_body = ''.join(df.reader()) self.assertEqual(len(fixed_body), len(broken_body)) self.assertEqual(md5(fixed_body).hexdigest(), md5(broken_body).hexdigest()) for called_header in called_headers: called_header = HeaderKeyDict(called_header) self.assertTrue('Content-Length' in called_header) self.assertEqual(called_header['Content-Length'], '0') self.assertTrue('User-Agent' in called_header) user_agent = called_header['User-Agent'] self.assertTrue(user_agent.startswith('obj-reconstructor')) def test_reconstruct_fa_errors_works(self): job = { 'partition': 0, 'policy': self.policy, } part_nodes = self.policy.object_ring.get_part_nodes(0) node = part_nodes[4] metadata = { 'name': '/a/c/o', 'Content-Length': 0, 'ETag': 'etag', } test_data = ('rebuild' * self.policy.ec_segment_size)[:-777] etag = md5(test_data).hexdigest() ec_archive_bodies = make_ec_archive_bodies(self.policy, test_data) broken_body = ec_archive_bodies.pop(4) base_responses = list() for body in ec_archive_bodies: headers = get_header_frag_index(self, body) headers.update({'X-Object-Sysmeta-Ec-Etag': etag}) base_responses.append((200, body, headers)) # since we're already missing a fragment a +2 scheme can only support # one additional failure at a time for error in (Timeout(), 404, Exception('kaboom!')): responses = base_responses error_index = random.randint(0, len(responses) - 1) responses[error_index] = (error, '', '') codes, body_iter, headers_iter = zip(*responses) with mocked_http_conn(*codes, body_iter=body_iter, headers=headers_iter): df = self.reconstructor.reconstruct_fa( job, node, dict(metadata)) fixed_body = ''.join(df.reader()) self.assertEqual(len(fixed_body), len(broken_body)) self.assertEqual(md5(fixed_body).hexdigest(), md5(broken_body).hexdigest()) def test_reconstruct_parity_fa_with_data_node_failure(self): job = { 'partition': 0, 'policy': self.policy, } part_nodes = self.policy.object_ring.get_part_nodes(0) node = 
part_nodes[-4] metadata = { 'name': '/a/c/o', 'Content-Length': 0, 'ETag': 'etag', } # make up some data (trim some amount to make it unaligned with # segment size) test_data = ('rebuild' * self.policy.ec_segment_size)[:-454] etag = md5(test_data).hexdigest() ec_archive_bodies = make_ec_archive_bodies(self.policy, test_data) # the scheme is 10+4, so this gets a parity node broken_body = ec_archive_bodies.pop(-4) responses = list() for body in ec_archive_bodies: headers = get_header_frag_index(self, body) headers.update({'X-Object-Sysmeta-Ec-Etag': etag}) responses.append((200, body, headers)) for error in (Timeout(), 404, Exception('kaboom!')): # grab a data node index error_index = random.randint(0, self.policy.ec_ndata - 1) responses[error_index] = (error, '', '') codes, body_iter, headers_iter = zip(*responses) with mocked_http_conn(*codes, body_iter=body_iter, headers=headers_iter): df = self.reconstructor.reconstruct_fa( job, node, dict(metadata)) fixed_body = ''.join(df.reader()) self.assertEqual(len(fixed_body), len(broken_body)) self.assertEqual(md5(fixed_body).hexdigest(), md5(broken_body).hexdigest()) def test_reconstruct_fa_errors_fails(self): job = { 'partition': 0, 'policy': self.policy, } part_nodes = self.policy.object_ring.get_part_nodes(0) node = part_nodes[1] policy = self.policy metadata = { 'name': '/a/c/o', 'Content-Length': 0, 'ETag': 'etag', } possible_errors = [404, Timeout(), Exception('kaboom!')] codes = [random.choice(possible_errors) for i in range(policy.object_ring.replicas - 1)] with mocked_http_conn(*codes): self.assertRaises(DiskFileError, self.reconstructor.reconstruct_fa, job, node, metadata) def test_reconstruct_fa_with_mixed_old_etag(self): job = { 'partition': 0, 'policy': self.policy, } part_nodes = self.policy.object_ring.get_part_nodes(0) node = part_nodes[1] metadata = { 'name': '/a/c/o', 'Content-Length': 0, 'ETag': 'etag', } test_data = ('rebuild' * self.policy.ec_segment_size)[:-777] etag = md5(test_data).hexdigest() ec_archive_bodies = make_ec_archive_bodies(self.policy, test_data) broken_body = ec_archive_bodies.pop(1) ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) # bad response bad_headers = { 'X-Object-Sysmeta-Ec-Etag': 'some garbage', 'X-Backend-Timestamp': next(ts).internal, } # good responses responses = list() t1 = next(ts).internal for body in ec_archive_bodies: headers = get_header_frag_index(self, body) headers.update({'X-Object-Sysmeta-Ec-Etag': etag, 'X-Backend-Timestamp': t1}) responses.append((200, body, headers)) # mixed together error_index = random.randint(0, self.policy.ec_ndata) error_headers = get_header_frag_index(self, (responses[error_index])[1]) error_headers.update(bad_headers) bad_response = (200, '', bad_headers) responses[error_index] = bad_response codes, body_iter, headers = zip(*responses) with mocked_http_conn(*codes, body_iter=body_iter, headers=headers): df = self.reconstructor.reconstruct_fa( job, node, metadata) fixed_body = ''.join(df.reader()) self.assertEqual(len(fixed_body), len(broken_body)) self.assertEqual(md5(fixed_body).hexdigest(), md5(broken_body).hexdigest()) def test_reconstruct_fa_with_mixed_new_etag(self): job = { 'partition': 0, 'policy': self.policy, } part_nodes = self.policy.object_ring.get_part_nodes(0) node = part_nodes[1] metadata = { 'name': '/a/c/o', 'Content-Length': 0, 'ETag': 'etag', } test_data = ('rebuild' * self.policy.ec_segment_size)[:-777] etag = md5(test_data).hexdigest() ec_archive_bodies = make_ec_archive_bodies(self.policy, test_data) broken_body = 
ec_archive_bodies.pop(1) ts = (utils.Timestamp(t) for t in itertools.count(int(time.time()))) # good responses responses = list() t0 = next(ts).internal for body in ec_archive_bodies: headers = get_header_frag_index(self, body) headers.update({'X-Object-Sysmeta-Ec-Etag': etag, 'X-Backend-Timestamp': t0}) responses.append((200, body, headers)) # sanity check before negative test codes, body_iter, headers = zip(*responses) with mocked_http_conn(*codes, body_iter=body_iter, headers=headers): df = self.reconstructor.reconstruct_fa( job, node, dict(metadata)) fixed_body = ''.join(df.reader()) self.assertEqual(len(fixed_body), len(broken_body)) self.assertEqual(md5(fixed_body).hexdigest(), md5(broken_body).hexdigest()) # one newer etag can spoil the bunch new_index = random.randint(0, len(responses) - self.policy.ec_nparity) new_headers = get_header_frag_index(self, (responses[new_index])[1]) new_headers.update({'X-Object-Sysmeta-Ec-Etag': 'some garbage', 'X-Backend-Timestamp': next(ts).internal}) new_response = (200, '', new_headers) responses[new_index] = new_response codes, body_iter, headers = zip(*responses) with mocked_http_conn(*codes, body_iter=body_iter, headers=headers): self.assertRaises(DiskFileError, self.reconstructor.reconstruct_fa, job, node, dict(metadata)) def test_reconstruct_fa_finds_itself_does_not_fail(self): job = { 'partition': 0, 'policy': self.policy, } part_nodes = self.policy.object_ring.get_part_nodes(0) node = part_nodes[1] metadata = { 'name': '/a/c/o', 'Content-Length': 0, 'ETag': 'etag', } test_data = ('rebuild' * self.policy.ec_segment_size)[:-777] etag = md5(test_data).hexdigest() ec_archive_bodies = make_ec_archive_bodies(self.policy, test_data) # instead of popping the broken body, we'll just leave it in the list # of responses and take away something else. 
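# i.e. the response set still contains the fragment this node would hold itself; reconstruct_fa is expected to skip it rather than fail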
broken_body = ec_archive_bodies[1] ec_archive_bodies = ec_archive_bodies[:-1] def make_header(body): metadata = self.policy.pyeclib_driver.get_metadata(body) frag_index = struct.unpack('h', metadata[:2])[0] return { 'X-Object-Sysmeta-Ec-Frag-Index': frag_index, 'X-Object-Sysmeta-Ec-Etag': etag, } responses = [(200, body, make_header(body)) for body in ec_archive_bodies] codes, body_iter, headers = zip(*responses) with mocked_http_conn(*codes, body_iter=body_iter, headers=headers): df = self.reconstructor.reconstruct_fa( job, node, metadata) fixed_body = ''.join(df.reader()) self.assertEqual(len(fixed_body), len(broken_body)) self.assertEqual(md5(fixed_body).hexdigest(), md5(broken_body).hexdigest()) def test_reconstruct_fa_finds_duplicate_does_not_fail(self): job = { 'partition': 0, 'policy': self.policy, } part_nodes = self.policy.object_ring.get_part_nodes(0) node = part_nodes[1] metadata = { 'name': '/a/c/o', 'Content-Length': 0, 'ETag': 'etag', } test_data = ('rebuild' * self.policy.ec_segment_size)[:-777] etag = md5(test_data).hexdigest() ec_archive_bodies = make_ec_archive_bodies(self.policy, test_data) broken_body = ec_archive_bodies.pop(1) # add some duplicates num_duplicates = self.policy.ec_nparity - 1 ec_archive_bodies = (ec_archive_bodies[:num_duplicates] + ec_archive_bodies)[:-num_duplicates] def make_header(body): metadata = self.policy.pyeclib_driver.get_metadata(body) frag_index = struct.unpack('h', metadata[:2])[0] return { 'X-Object-Sysmeta-Ec-Frag-Index': frag_index, 'X-Object-Sysmeta-Ec-Etag': etag, } responses = [(200, body, make_header(body)) for body in ec_archive_bodies] codes, body_iter, headers = zip(*responses) with mocked_http_conn(*codes, body_iter=body_iter, headers=headers): df = self.reconstructor.reconstruct_fa( job, node, metadata) fixed_body = ''.join(df.reader()) self.assertEqual(len(fixed_body), len(broken_body)) self.assertEqual(md5(fixed_body).hexdigest(), md5(broken_body).hexdigest()) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/obj/test_auditor.py0000664000567000056710000013071113024044354021610 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from test import unit import unittest import mock import os import time import string from shutil import rmtree from hashlib import md5 from tempfile import mkdtemp import textwrap from test.unit import (FakeLogger, patch_policies, make_timestamp_iter, DEFAULT_TEST_EC_TYPE) from swift.obj import auditor, replicator from swift.obj.diskfile import ( DiskFile, write_metadata, invalidate_hash, get_data_dir, DiskFileManager, ECDiskFileManager, AuditLocation, clear_auditor_status, get_auditor_status) from swift.common.utils import ( mkdirs, normalize_timestamp, Timestamp, readconf) from swift.common.storage_policy import ( ECStoragePolicy, StoragePolicy, POLICIES, EC_POLICY) _mocked_policies = [ StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', True), ECStoragePolicy(2, 'two', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=2, ec_nparity=1, ec_segment_size=4096), ] def works_only_once(callable_thing, exception): called = [False] def only_once(*a, **kw): if called[0]: raise exception else: called[0] = True return callable_thing(*a, **kw) return only_once @patch_policies(_mocked_policies) class TestAuditor(unittest.TestCase): def setUp(self): self.testdir = os.path.join(mkdtemp(), 'tmp_test_object_auditor') self.devices = os.path.join(self.testdir, 'node') self.rcache = os.path.join(self.testdir, 'object.recon') self.logger = FakeLogger() rmtree(self.testdir, ignore_errors=1) mkdirs(os.path.join(self.devices, 'sda')) os.mkdir(os.path.join(self.devices, 'sdb')) # policy 0 self.objects = os.path.join(self.devices, 'sda', get_data_dir(POLICIES[0])) self.objects_2 = os.path.join(self.devices, 'sdb', get_data_dir(POLICIES[0])) os.mkdir(self.objects) # policy 1 self.objects_p1 = os.path.join(self.devices, 'sda', get_data_dir(POLICIES[1])) self.objects_2_p1 = os.path.join(self.devices, 'sdb', get_data_dir(POLICIES[1])) os.mkdir(self.objects_p1) # policy 2 self.objects_p2 = os.path.join(self.devices, 'sda', get_data_dir(POLICIES[2])) self.objects_2_p2 = os.path.join(self.devices, 'sdb', get_data_dir(POLICIES[2])) os.mkdir(self.objects_p2) self.parts = {} self.parts_p1 = {} self.parts_p2 = {} for part in ['0', '1', '2', '3']: self.parts[part] = os.path.join(self.objects, part) self.parts_p1[part] = os.path.join(self.objects_p1, part) self.parts_p2[part] = os.path.join(self.objects_p2, part) os.mkdir(os.path.join(self.objects, part)) os.mkdir(os.path.join(self.objects_p1, part)) os.mkdir(os.path.join(self.objects_p2, part)) self.conf = dict( devices=self.devices, mount_check='false', object_size_stats='10,100,1024,10240') self.df_mgr = DiskFileManager(self.conf, self.logger) self.ec_df_mgr = ECDiskFileManager(self.conf, self.logger) # diskfiles for policy 0, 1, 2 self.disk_file = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'o', policy=POLICIES[0]) self.disk_file_p1 = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'o', policy=POLICIES[1]) self.disk_file_ec = self.ec_df_mgr.get_diskfile( 'sda', '0', 'a', 'c', 'o', policy=POLICIES[2], frag_index=1) def tearDown(self): rmtree(os.path.dirname(self.testdir), ignore_errors=1) unit.xattr_data = {} def test_worker_conf_parms(self): def check_common_defaults(): self.assertEqual(auditor_worker.max_bytes_per_second, 10000000) self.assertEqual(auditor_worker.log_time, 3600) # test default values conf = dict( devices=self.devices, mount_check='false', object_size_stats='10,100,1024,10240') auditor_worker = auditor.AuditorWorker(conf, self.logger, self.rcache, self.devices) check_common_defaults() for policy in POLICIES: mgr = auditor_worker.diskfile_router[policy] 
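# with no overrides the diskfile managers keep the stock 64 KiB disk_chunk_size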
self.assertEqual(mgr.disk_chunk_size, 65536) self.assertEqual(auditor_worker.max_files_per_second, 20) self.assertEqual(auditor_worker.zero_byte_only_at_fps, 0) # test specified audit value overrides conf.update({'disk_chunk_size': 4096}) auditor_worker = auditor.AuditorWorker(conf, self.logger, self.rcache, self.devices, zero_byte_only_at_fps=50) check_common_defaults() for policy in POLICIES: mgr = auditor_worker.diskfile_router[policy] self.assertEqual(mgr.disk_chunk_size, 4096) self.assertEqual(auditor_worker.max_files_per_second, 50) self.assertEqual(auditor_worker.zero_byte_only_at_fps, 50) def test_object_audit_extra_data(self): def run_tests(disk_file): auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) data = b'0' * 1024 if disk_file.policy.policy_type == EC_POLICY: data = disk_file.policy.pyeclib_driver.encode(data)[0] etag = md5() with disk_file.create() as writer: writer.write(data) etag.update(data) etag = etag.hexdigest() timestamp = str(normalize_timestamp(time.time())) metadata = { 'ETag': etag, 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) writer.commit(Timestamp(timestamp)) pre_quarantines = auditor_worker.quarantines auditor_worker.object_audit( AuditLocation(disk_file._datadir, 'sda', '0', policy=disk_file.policy)) self.assertEqual(auditor_worker.quarantines, pre_quarantines) os.write(writer._fd, 'extra_data') auditor_worker.object_audit( AuditLocation(disk_file._datadir, 'sda', '0', policy=disk_file.policy)) self.assertEqual(auditor_worker.quarantines, pre_quarantines + 1) run_tests(self.disk_file) run_tests(self.disk_file_p1) run_tests(self.disk_file_ec) def test_object_audit_diff_data(self): auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) data = '0' * 1024 etag = md5() timestamp = str(normalize_timestamp(time.time())) with self.disk_file.create() as writer: writer.write(data) etag.update(data) etag = etag.hexdigest() metadata = { 'ETag': etag, 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) writer.commit(Timestamp(timestamp)) pre_quarantines = auditor_worker.quarantines # remake so it will have metadata self.disk_file = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'o', policy=POLICIES.legacy) auditor_worker.object_audit( AuditLocation(self.disk_file._datadir, 'sda', '0', policy=POLICIES.legacy)) self.assertEqual(auditor_worker.quarantines, pre_quarantines) etag = md5() etag.update('1' + '0' * 1023) etag = etag.hexdigest() metadata['ETag'] = etag with self.disk_file.create() as writer: writer.write(data) writer.put(metadata) writer.commit(Timestamp(timestamp)) auditor_worker.object_audit( AuditLocation(self.disk_file._datadir, 'sda', '0', policy=POLICIES.legacy)) self.assertEqual(auditor_worker.quarantines, pre_quarantines + 1) def test_object_audit_checks_EC_fragments(self): disk_file = self.disk_file_ec def do_test(data): # create diskfile and set ETag and content-length to match the data etag = md5(data).hexdigest() timestamp = str(normalize_timestamp(time.time())) with disk_file.create() as writer: writer.write(data) metadata = { 'ETag': etag, 'X-Timestamp': timestamp, 'Content-Length': len(data), } writer.put(metadata) writer.commit(Timestamp(timestamp)) auditor_worker = auditor.AuditorWorker(self.conf, FakeLogger(), self.rcache, self.devices) self.assertEqual(0, auditor_worker.quarantines) # sanity check auditor_worker.object_audit( AuditLocation(disk_file._datadir, 'sda', 
'0', policy=disk_file.policy)) return auditor_worker # two good frags in an EC archive frag_0 = disk_file.policy.pyeclib_driver.encode( 'x' * disk_file.policy.ec_segment_size)[0] frag_1 = disk_file.policy.pyeclib_driver.encode( 'y' * disk_file.policy.ec_segment_size)[0] data = frag_0 + frag_1 auditor_worker = do_test(data) self.assertEqual(0, auditor_worker.quarantines) self.assertFalse(auditor_worker.logger.get_lines_for_level('error')) # corrupt second frag headers corrupt_frag_1 = 'blah' * 16 + frag_1[64:] data = frag_0 + corrupt_frag_1 auditor_worker = do_test(data) self.assertEqual(1, auditor_worker.quarantines) log_lines = auditor_worker.logger.get_lines_for_level('error') self.assertIn('failed audit and was quarantined: ' 'Invalid EC metadata at offset 0x%x' % len(frag_0), log_lines[0]) # dangling extra corrupt frag data data = frag_0 + frag_1 + 'wtf' * 100 auditor_worker = do_test(data) self.assertEqual(1, auditor_worker.quarantines) log_lines = auditor_worker.logger.get_lines_for_level('error') self.assertIn('failed audit and was quarantined: ' 'Invalid EC metadata at offset 0x%x' % len(frag_0 + frag_1), log_lines[0]) # simulate bug https://bugs.launchpad.net/bugs/1631144 by writing start # of an ssync subrequest into the diskfile data = ( b'PUT /a/c/o\r\n' + b'Content-Length: 999\r\n' + b'Content-Type: image/jpeg\r\n' + b'X-Object-Sysmeta-Ec-Content-Length: 1024\r\n' + b'X-Object-Sysmeta-Ec-Etag: 1234bff7eb767cc6d19627c6b6f9edef\r\n' + b'X-Object-Sysmeta-Ec-Frag-Index: 1\r\n' + b'X-Object-Sysmeta-Ec-Scheme: ' + DEFAULT_TEST_EC_TYPE + '\r\n' + b'X-Object-Sysmeta-Ec-Segment-Size: 1048576\r\n' + b'X-Timestamp: 1471512345.17333\r\n\r\n' ) data += frag_0[:disk_file.policy.fragment_size - len(data)] auditor_worker = do_test(data) self.assertEqual(1, auditor_worker.quarantines) log_lines = auditor_worker.logger.get_lines_for_level('error') self.assertIn('failed audit and was quarantined: ' 'Invalid EC metadata at offset 0x0', log_lines[0]) def test_object_audit_no_meta(self): timestamp = str(normalize_timestamp(time.time())) path = os.path.join(self.disk_file._datadir, timestamp + '.data') mkdirs(self.disk_file._datadir) fp = open(path, 'w') fp.write('0' * 1024) fp.close() invalidate_hash(os.path.dirname(self.disk_file._datadir)) auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) pre_quarantines = auditor_worker.quarantines auditor_worker.object_audit( AuditLocation(self.disk_file._datadir, 'sda', '0', policy=POLICIES.legacy)) self.assertEqual(auditor_worker.quarantines, pre_quarantines + 1) def test_object_audit_will_not_swallow_errors_in_tests(self): timestamp = str(normalize_timestamp(time.time())) path = os.path.join(self.disk_file._datadir, timestamp + '.data') mkdirs(self.disk_file._datadir) with open(path, 'w') as f: write_metadata(f, {'name': '/a/c/o'}) auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) def blowup(*args): raise NameError('tpyo') with mock.patch.object(DiskFileManager, 'get_diskfile_from_audit_location', blowup): self.assertRaises(NameError, auditor_worker.object_audit, AuditLocation(os.path.dirname(path), 'sda', '0', policy=POLICIES.legacy)) def test_failsafe_object_audit_will_swallow_errors_in_tests(self): timestamp = str(normalize_timestamp(time.time())) path = os.path.join(self.disk_file._datadir, timestamp + '.data') mkdirs(self.disk_file._datadir) with open(path, 'w') as f: write_metadata(f, {'name': '/a/c/o'}) auditor_worker = auditor.AuditorWorker(self.conf, self.logger, 
self.rcache, self.devices) def blowup(*args): raise NameError('tpyo') with mock.patch('swift.obj.diskfile.DiskFileManager.diskfile_cls', blowup): auditor_worker.failsafe_object_audit( AuditLocation(os.path.dirname(path), 'sda', '0', policy=POLICIES.legacy)) self.assertEqual(auditor_worker.errors, 1) def test_audit_location_gets_quarantined(self): auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) location = AuditLocation(self.disk_file._datadir, 'sda', '0', policy=self.disk_file.policy) # instead of a datadir, we'll make a file! mkdirs(os.path.dirname(self.disk_file._datadir)) open(self.disk_file._datadir, 'w') # after we turn the crank ... auditor_worker.object_audit(location) # ... it should get quarantined self.assertFalse(os.path.exists(self.disk_file._datadir)) self.assertEqual(1, auditor_worker.quarantines) def test_rsync_tempfile_timeout_auto_option(self): # if we don't have access to the replicator config section we'll use # our default auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) self.assertEqual(auditor_worker.rsync_tempfile_timeout, 86400) # if the rsync_tempfile_timeout option is set explicitly we use that self.conf['rsync_tempfile_timeout'] = '1800' auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) self.assertEqual(auditor_worker.rsync_tempfile_timeout, 1800) # if we have a real config we can be a little smarter config_path = os.path.join(self.testdir, 'objserver.conf') stub_config = """ [object-auditor] rsync_tempfile_timeout = auto """ with open(config_path, 'w') as f: f.write(textwrap.dedent(stub_config)) # the Daemon loader will hand the object-auditor config to the # auditor who will build the workers from it conf = readconf(config_path, 'object-auditor') auditor_worker = auditor.AuditorWorker(conf, self.logger, self.rcache, self.devices) # if there is no object-replicator section we still have to fall back # to default because we can't parse the config for that section! 
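# i.e. 'auto' degrades to the same 86400 second (24h) default used above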
self.assertEqual(auditor_worker.rsync_tempfile_timeout, 86400) stub_config = """ [object-replicator] [object-auditor] rsync_tempfile_timeout = auto """ with open(os.path.join(self.testdir, 'objserver.conf'), 'w') as f: f.write(textwrap.dedent(stub_config)) conf = readconf(config_path, 'object-auditor') auditor_worker = auditor.AuditorWorker(conf, self.logger, self.rcache, self.devices) # if the object-replicator section will parse but does not override # the default rsync_timeout we assume the default rsync_timeout value # and add 15mins self.assertEqual(auditor_worker.rsync_tempfile_timeout, replicator.DEFAULT_RSYNC_TIMEOUT + 900) stub_config = """ [DEFAULT] reclaim_age = 1209600 [object-replicator] rsync_timeout = 3600 [object-auditor] rsync_tempfile_timeout = auto """ with open(os.path.join(self.testdir, 'objserver.conf'), 'w') as f: f.write(textwrap.dedent(stub_config)) conf = readconf(config_path, 'object-auditor') auditor_worker = auditor.AuditorWorker(conf, self.logger, self.rcache, self.devices) # if there is an object-replicator section with a rsync_timeout # configured we'll use that value (3600) + 900 self.assertEqual(auditor_worker.rsync_tempfile_timeout, 3600 + 900) def test_inprogress_rsync_tempfiles_get_cleaned_up(self): auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) location = AuditLocation(self.disk_file._datadir, 'sda', '0', policy=self.disk_file.policy) data = 'VERIFY' etag = md5() timestamp = str(normalize_timestamp(time.time())) with self.disk_file.create() as writer: writer.write(data) etag.update(data) metadata = { 'ETag': etag.hexdigest(), 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) writer.commit(Timestamp(timestamp)) datafilename = None datadir_files = os.listdir(self.disk_file._datadir) for filename in datadir_files: if filename.endswith('.data'): datafilename = filename break else: self.fail('Did not find .data file in %r: %r' % (self.disk_file._datadir, datadir_files)) rsynctempfile_path = os.path.join(self.disk_file._datadir, '.%s.9ILVBL' % datafilename) open(rsynctempfile_path, 'w') # sanity check we have an extra file rsync_files = os.listdir(self.disk_file._datadir) self.assertEqual(len(datadir_files) + 1, len(rsync_files)) # and after we turn the crank ... auditor_worker.object_audit(location) # ... we've still got the rsync file self.assertEqual(rsync_files, os.listdir(self.disk_file._datadir)) # and we'll keep it - depending on the rsync_tempfile_timeout self.assertEqual(auditor_worker.rsync_tempfile_timeout, 86400) self.conf['rsync_tempfile_timeout'] = '3600' auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) self.assertEqual(auditor_worker.rsync_tempfile_timeout, 3600) now = time.time() + 1900 with mock.patch('swift.obj.auditor.time.time', return_value=now): auditor_worker.object_audit(location) self.assertEqual(rsync_files, os.listdir(self.disk_file._datadir)) # but *tomorrow* when we run tomorrow = time.time() + 86400 with mock.patch('swift.obj.auditor.time.time', return_value=tomorrow): auditor_worker.object_audit(location) # ... we'll totally clean that stuff up! 
self.assertEqual(datadir_files, os.listdir(self.disk_file._datadir)) # but if we have some random crazy file in there random_crazy_file_path = os.path.join(self.disk_file._datadir, '.random.crazy.file') open(random_crazy_file_path, 'w') tomorrow = time.time() + 86400 with mock.patch('swift.obj.auditor.time.time', return_value=tomorrow): auditor_worker.object_audit(location) # that's someone elses problem self.assertIn(os.path.basename(random_crazy_file_path), os.listdir(self.disk_file._datadir)) def test_generic_exception_handling(self): auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) # pretend that we logged (and reset counters) just now auditor_worker.last_logged = time.time() timestamp = str(normalize_timestamp(time.time())) pre_errors = auditor_worker.errors data = '0' * 1024 etag = md5() with self.disk_file.create() as writer: writer.write(data) etag.update(data) etag = etag.hexdigest() metadata = { 'ETag': etag, 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) writer.commit(Timestamp(timestamp)) with mock.patch('swift.obj.diskfile.DiskFileManager.diskfile_cls', lambda *_: 1 / 0): auditor_worker.audit_all_objects() self.assertEqual(auditor_worker.errors, pre_errors + 1) def test_object_run_once_pass(self): auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) auditor_worker.log_time = 0 timestamp = str(normalize_timestamp(time.time())) pre_quarantines = auditor_worker.quarantines data = '0' * 1024 def write_file(df): with df.create() as writer: writer.write(data) metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) writer.commit(Timestamp(timestamp)) # policy 0 write_file(self.disk_file) # policy 1 write_file(self.disk_file_p1) # policy 2 write_file(self.disk_file_ec) auditor_worker.audit_all_objects() self.assertEqual(auditor_worker.quarantines, pre_quarantines) # 1 object per policy falls into 1024 bucket self.assertEqual(auditor_worker.stats_buckets[1024], 3) self.assertEqual(auditor_worker.stats_buckets[10240], 0) # pick up some additional code coverage, large file data = '0' * 1024 * 1024 for df in (self.disk_file, self.disk_file_ec): with df.create() as writer: writer.write(data) metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) writer.commit(Timestamp(timestamp)) auditor_worker.audit_all_objects(device_dirs=['sda', 'sdb']) self.assertEqual(auditor_worker.quarantines, pre_quarantines) # still have the 1024 byte object left in policy-1 (plus the # stats from the original 3) self.assertEqual(auditor_worker.stats_buckets[1024], 4) self.assertEqual(auditor_worker.stats_buckets[10240], 0) # and then policy-0 disk_file was re-written as a larger object self.assertEqual(auditor_worker.stats_buckets['OVER'], 2) # pick up even more additional code coverage, misc paths auditor_worker.log_time = -1 auditor_worker.stats_sizes = [] auditor_worker.audit_all_objects(device_dirs=['sda', 'sdb']) self.assertEqual(auditor_worker.quarantines, pre_quarantines) self.assertEqual(auditor_worker.stats_buckets[1024], 4) self.assertEqual(auditor_worker.stats_buckets[10240], 0) self.assertEqual(auditor_worker.stats_buckets['OVER'], 2) def test_object_run_logging(self): logger = FakeLogger() auditor_worker = auditor.AuditorWorker(self.conf, logger, self.rcache, self.devices) 
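# --- Illustrative sketch (an assumption, not the auditor's real accounting
# code) of the size-bucket bookkeeping that test_object_run_once_pass above
# relies on: each audited object is counted in the smallest configured
# bucket that can hold it, and anything larger than every bucket lands in
# the 'OVER' bucket.
def pick_stats_bucket(size, bucket_limits=(1024, 10240)):
    for limit in sorted(bucket_limits):
        if size <= limit:
            return limit
    return 'OVER'

# matches the assertions above: a 1024-byte object counts in the 1024
# bucket, a 1 MiB object counts as 'OVER'
assert pick_stats_bucket(1024) == 1024
assert pick_stats_bucket(1024 * 1024) == 'OVER'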
auditor_worker.audit_all_objects(device_dirs=['sda']) log_lines = logger.get_lines_for_level('info') self.assertTrue(len(log_lines) > 0) self.assertTrue(log_lines[0].index('ALL - parallel, sda')) logger = FakeLogger() auditor_worker = auditor.AuditorWorker(self.conf, logger, self.rcache, self.devices, zero_byte_only_at_fps=50) auditor_worker.audit_all_objects(device_dirs=['sda']) log_lines = logger.get_lines_for_level('info') self.assertTrue(len(log_lines) > 0) self.assertTrue(log_lines[0].index('ZBF - sda')) def test_object_run_once_no_sda(self): auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) timestamp = str(normalize_timestamp(time.time())) pre_quarantines = auditor_worker.quarantines # pretend that we logged (and reset counters) just now auditor_worker.last_logged = time.time() data = '0' * 1024 etag = md5() with self.disk_file.create() as writer: writer.write(data) etag.update(data) etag = etag.hexdigest() metadata = { 'ETag': etag, 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) os.write(writer._fd, 'extra_data') writer.commit(Timestamp(timestamp)) auditor_worker.audit_all_objects() self.assertEqual(auditor_worker.quarantines, pre_quarantines + 1) def test_object_run_once_multi_devices(self): auditor_worker = auditor.AuditorWorker(self.conf, self.logger, self.rcache, self.devices) # pretend that we logged (and reset counters) just now auditor_worker.last_logged = time.time() timestamp = str(normalize_timestamp(time.time())) pre_quarantines = auditor_worker.quarantines data = '0' * 10 etag = md5() with self.disk_file.create() as writer: writer.write(data) etag.update(data) etag = etag.hexdigest() metadata = { 'ETag': etag, 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) writer.commit(Timestamp(timestamp)) auditor_worker.audit_all_objects() self.disk_file = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'ob', policy=POLICIES.legacy) data = '1' * 10 etag = md5() with self.disk_file.create() as writer: writer.write(data) etag.update(data) etag = etag.hexdigest() metadata = { 'ETag': etag, 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) writer.commit(Timestamp(timestamp)) os.write(writer._fd, 'extra_data') auditor_worker.audit_all_objects() self.assertEqual(auditor_worker.quarantines, pre_quarantines + 1) def test_object_run_fast_track_non_zero(self): self.auditor = auditor.ObjectAuditor(self.conf) self.auditor.log_time = 0 data = '0' * 1024 etag = md5() with self.disk_file.create() as writer: writer.write(data) etag.update(data) etag = etag.hexdigest() timestamp = str(normalize_timestamp(time.time())) metadata = { 'ETag': etag, 'X-Timestamp': timestamp, 'Content-Length': str(os.fstat(writer._fd).st_size), } writer.put(metadata) writer.commit(Timestamp(timestamp)) etag = md5() etag.update('1' + '0' * 1023) etag = etag.hexdigest() metadata['ETag'] = etag write_metadata(writer._fd, metadata) quarantine_path = os.path.join(self.devices, 'sda', 'quarantined', 'objects') kwargs = {'mode': 'once'} kwargs['zero_byte_fps'] = 50 self.auditor.run_audit(**kwargs) self.assertFalse(os.path.isdir(quarantine_path)) del(kwargs['zero_byte_fps']) clear_auditor_status(self.devices) self.auditor.run_audit(**kwargs) self.assertTrue(os.path.isdir(quarantine_path)) def setup_bad_zero_byte(self, timestamp=None): if timestamp is None: timestamp = Timestamp(time.time()) self.auditor = 
auditor.ObjectAuditor(self.conf) self.auditor.log_time = 0 etag = md5() with self.disk_file.create() as writer: etag = etag.hexdigest() metadata = { 'ETag': etag, 'X-Timestamp': timestamp.internal, 'Content-Length': 10, } writer.put(metadata) writer.commit(Timestamp(timestamp)) etag = md5() etag = etag.hexdigest() metadata['ETag'] = etag write_metadata(writer._fd, metadata) def test_object_run_fast_track_all(self): self.setup_bad_zero_byte() kwargs = {'mode': 'once'} self.auditor.run_audit(**kwargs) quarantine_path = os.path.join(self.devices, 'sda', 'quarantined', 'objects') self.assertTrue(os.path.isdir(quarantine_path)) def test_object_run_fast_track_zero(self): self.setup_bad_zero_byte() kwargs = {'mode': 'once'} kwargs['zero_byte_fps'] = 50 called_args = [0] def mock_get_auditor_status(path, logger, audit_type): called_args[0] = audit_type return get_auditor_status(path, logger, audit_type) with mock.patch('swift.obj.diskfile.get_auditor_status', mock_get_auditor_status): self.auditor.run_audit(**kwargs) quarantine_path = os.path.join(self.devices, 'sda', 'quarantined', 'objects') self.assertTrue(os.path.isdir(quarantine_path)) self.assertEqual('ZBF', called_args[0]) def test_object_run_fast_track_zero_check_closed(self): rat = [False] class FakeFile(DiskFile): def _quarantine(self, data_file, msg): rat[0] = True DiskFile._quarantine(self, data_file, msg) self.setup_bad_zero_byte() with mock.patch('swift.obj.diskfile.DiskFileManager.diskfile_cls', FakeFile): kwargs = {'mode': 'once'} kwargs['zero_byte_fps'] = 50 self.auditor.run_audit(**kwargs) quarantine_path = os.path.join(self.devices, 'sda', 'quarantined', 'objects') self.assertTrue(os.path.isdir(quarantine_path)) self.assertTrue(rat[0]) @mock.patch.object(auditor.ObjectAuditor, 'run_audit') @mock.patch('os.fork', return_value=0) def test_with_inaccessible_object_location(self, mock_os_fork, mock_run_audit): # Need to ensure that any failures in run_audit do # not prevent sys.exit() from running. Otherwise we get # zombie processes. 
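# --- Illustrative sketch (an assumption, not Swift's actual fork_child) of
# the pattern the test below is guarding: after forking, the child must
# always reach sys.exit(), even when the audit work raises, otherwise the
# parent is left with zombie children that never get reaped.
import os
import sys

def fork_child_sketch(run_audit, **kwargs):
    pid = os.fork()
    if pid:
        return pid              # parent: remember the child pid
    try:
        run_audit(**kwargs)     # child: do the audit work
    finally:
        sys.exit()              # child: always exit, even on failure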
e = OSError('permission denied') mock_run_audit.side_effect = e self.auditor = auditor.ObjectAuditor(self.conf) self.assertRaises(SystemExit, self.auditor.fork_child, self) def test_with_only_tombstone(self): # sanity check that auditor doesn't touch solitary tombstones ts_iter = make_timestamp_iter() self.setup_bad_zero_byte(timestamp=next(ts_iter)) self.disk_file.delete(next(ts_iter)) files = os.listdir(self.disk_file._datadir) self.assertEqual(1, len(files)) self.assertTrue(files[0].endswith('ts')) kwargs = {'mode': 'once'} self.auditor.run_audit(**kwargs) files_after = os.listdir(self.disk_file._datadir) self.assertEqual(files, files_after) def test_with_tombstone_and_data(self): # rsync replication could leave a tombstone and data file in object # dir - verify they are both removed during audit ts_iter = make_timestamp_iter() ts_tomb = next(ts_iter) ts_data = next(ts_iter) self.setup_bad_zero_byte(timestamp=ts_data) tomb_file_path = os.path.join(self.disk_file._datadir, '%s.ts' % ts_tomb.internal) with open(tomb_file_path, 'wb') as fd: write_metadata(fd, {'X-Timestamp': ts_tomb.internal}) files = os.listdir(self.disk_file._datadir) self.assertEqual(2, len(files)) self.assertTrue(os.path.basename(tomb_file_path) in files, files) kwargs = {'mode': 'once'} self.auditor.run_audit(**kwargs) self.assertFalse(os.path.exists(self.disk_file._datadir)) def test_sleeper(self): with mock.patch( 'time.sleep', mock.MagicMock()) as mock_sleep: my_auditor = auditor.ObjectAuditor(self.conf) my_auditor._sleep() mock_sleep.assert_called_with(30) my_conf = dict(interval=2) my_conf.update(self.conf) my_auditor = auditor.ObjectAuditor(my_conf) my_auditor._sleep() mock_sleep.assert_called_with(2) my_auditor = auditor.ObjectAuditor(self.conf) my_auditor.interval = 2 my_auditor._sleep() mock_sleep.assert_called_with(2) def test_run_parallel_audit(self): class StopForever(Exception): pass class Bogus(Exception): pass loop_error = Bogus('exception') class LetMeOut(BaseException): pass class ObjectAuditorMock(object): check_args = () check_kwargs = {} check_device_dir = None fork_called = 0 master = 0 wait_called = 0 def mock_run(self, *args, **kwargs): self.check_args = args self.check_kwargs = kwargs if 'zero_byte_fps' in kwargs: self.check_device_dir = kwargs.get('device_dirs') def mock_sleep_stop(self): raise StopForever('stop') def mock_sleep_continue(self): return def mock_audit_loop_error(self, parent, zbo_fps, override_devices=None, **kwargs): raise loop_error def mock_fork(self): self.fork_called += 1 if self.master: return self.fork_called else: return 0 def mock_wait(self): self.wait_called += 1 return (self.wait_called, 0) for i in string.ascii_letters[2:26]: mkdirs(os.path.join(self.devices, 'sd%s' % i)) my_auditor = auditor.ObjectAuditor(dict(devices=self.devices, mount_check='false', zero_byte_files_per_second=89, concurrency=1)) mocker = ObjectAuditorMock() my_auditor.logger.exception = mock.MagicMock() real_audit_loop = my_auditor.audit_loop my_auditor.audit_loop = mocker.mock_audit_loop_error my_auditor.run_audit = mocker.mock_run was_fork = os.fork was_wait = os.wait os.fork = mocker.mock_fork os.wait = mocker.mock_wait try: my_auditor._sleep = mocker.mock_sleep_stop my_auditor.run_once(zero_byte_fps=50) my_auditor.logger.exception.assert_called_once_with( 'ERROR auditing: %s', loop_error) my_auditor.logger.exception.reset_mock() self.assertRaises(StopForever, my_auditor.run_forever) my_auditor.logger.exception.assert_called_once_with( 'ERROR auditing: %s', loop_error) my_auditor.audit_loop = 
real_audit_loop self.assertRaises(StopForever, my_auditor.run_forever, zero_byte_fps=50) self.assertEqual(mocker.check_kwargs['zero_byte_fps'], 50) self.assertEqual(mocker.fork_called, 0) self.assertRaises(SystemExit, my_auditor.run_once) self.assertEqual(mocker.fork_called, 1) self.assertEqual(mocker.check_kwargs['zero_byte_fps'], 89) self.assertEqual(mocker.check_device_dir, []) self.assertEqual(mocker.check_args, ()) device_list = ['sd%s' % i for i in string.ascii_letters[2:10]] device_string = ','.join(device_list) device_string_bogus = device_string + ',bogus' mocker.fork_called = 0 self.assertRaises(SystemExit, my_auditor.run_once, devices=device_string_bogus) self.assertEqual(mocker.fork_called, 1) self.assertEqual(mocker.check_kwargs['zero_byte_fps'], 89) self.assertEqual(sorted(mocker.check_device_dir), device_list) mocker.master = 1 mocker.fork_called = 0 self.assertRaises(StopForever, my_auditor.run_forever) # Fork is called 2 times since the zbf process is forked just # once before self._sleep() is called and StopForever is raised # Also wait is called just once before StopForever is raised self.assertEqual(mocker.fork_called, 2) self.assertEqual(mocker.wait_called, 1) my_auditor._sleep = mocker.mock_sleep_continue my_auditor.audit_loop = works_only_once(my_auditor.audit_loop, LetMeOut()) my_auditor.concurrency = 2 mocker.fork_called = 0 mocker.wait_called = 0 self.assertRaises(LetMeOut, my_auditor.run_forever) # Fork is called no. of devices + (no. of devices)/2 + 1 times # since zbf process is forked (no.of devices)/2 + 1 times no_devices = len(os.listdir(self.devices)) self.assertEqual(mocker.fork_called, no_devices + no_devices / 2 + 1) self.assertEqual(mocker.wait_called, no_devices + no_devices / 2 + 1) finally: os.fork = was_fork os.wait = was_wait def test_run_audit_once(self): my_auditor = auditor.ObjectAuditor(dict(devices=self.devices, mount_check='false', zero_byte_files_per_second=89, concurrency=1)) forked_pids = [] next_zbf_pid = [2] next_normal_pid = [1001] outstanding_pids = [[]] def fake_fork_child(**kwargs): if len(forked_pids) > 10: # something's gone horribly wrong raise BaseException("forking too much") # ZBF pids are all smaller than the normal-audit pids; this way # we can return them first. # # Also, ZBF pids are even and normal-audit pids are odd; this is # so humans seeing this test fail can better tell what's happening. if kwargs.get('zero_byte_fps'): pid = next_zbf_pid[0] next_zbf_pid[0] += 2 else: pid = next_normal_pid[0] next_normal_pid[0] += 2 outstanding_pids[0].append(pid) forked_pids.append(pid) return pid def fake_os_wait(): # Smallest pid first; that's ZBF if we have one, else normal outstanding_pids[0].sort() pid = outstanding_pids[0].pop(0) return (pid, 0) # (pid, status) with mock.patch("swift.obj.auditor.os.wait", fake_os_wait), \ mock.patch.object(my_auditor, 'fork_child', fake_fork_child), \ mock.patch.object(my_auditor, '_sleep', lambda *a: None): my_auditor.run_once() self.assertEqual(sorted(forked_pids), [2, 1001]) def test_run_parallel_audit_once(self): my_auditor = auditor.ObjectAuditor( dict(devices=self.devices, mount_check='false', zero_byte_files_per_second=89, concurrency=2)) # ZBF pids are smaller than the normal-audit pids; this way we can # return them first from our mocked os.wait(). # # Also, ZBF pids are even and normal-audit pids are odd; this is so # humans seeing this test fail can better tell what's happening. 
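# --- The pid convention used by the fakes below, shown in isolation (the
# literal pids here are just the hypothetical values the fakes hand out):
# ZBF auditor pids are small and even, normal-audit pids are large and odd,
# so a mocked os.wait() that always reaps the smallest outstanding pid
# reaps the ZBF auditor first, and a human reading a failure can tell the
# two kinds of worker apart at a glance.
zbf_pids = [2, 4, 6]                    # zero-byte-file auditors
normal_audit_pids = [1001, 1003]        # normal auditors
outstanding = sorted(zbf_pids + normal_audit_pids)
assert outstanding.pop(0) in zbf_pids   # the ZBF auditor is reaped first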
forked_pids = [] next_zbf_pid = [2] next_normal_pid = [1001] outstanding_pids = [[]] def fake_fork_child(**kwargs): if len(forked_pids) > 10: # something's gone horribly wrong; try not to hang the test # run because of it raise BaseException("forking too much") if kwargs.get('zero_byte_fps'): pid = next_zbf_pid[0] next_zbf_pid[0] += 2 else: pid = next_normal_pid[0] next_normal_pid[0] += 2 outstanding_pids[0].append(pid) forked_pids.append(pid) return pid def fake_os_wait(): if not outstanding_pids[0]: raise BaseException("nobody waiting") # ZBF auditor finishes first outstanding_pids[0].sort() pid = outstanding_pids[0].pop(0) return (pid, 0) # (pid, status) # make sure we've got enough devs that the ZBF auditor can finish # before all the normal auditors have been started mkdirs(os.path.join(self.devices, 'sdc')) mkdirs(os.path.join(self.devices, 'sdd')) with mock.patch("swift.obj.auditor.os.wait", fake_os_wait), \ mock.patch.object(my_auditor, 'fork_child', fake_fork_child), \ mock.patch.object(my_auditor, '_sleep', lambda *a: None): my_auditor.run_once() self.assertEqual(sorted(forked_pids), [2, 1001, 1003, 1005, 1007]) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/obj/common.py0000664000567000056710000001027213024044354020371 0ustar jenkinsjenkins00000000000000# Copyright (c) 2013 - 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import hashlib import os import shutil import tempfile import unittest import time from swift.common.storage_policy import POLICIES from swift.common.utils import Timestamp from swift.obj import diskfile from test.unit import debug_logger class FakeReplicator(object): def __init__(self, testdir, policy=None): self.logger = debug_logger('test-ssync-sender') self.conn_timeout = 1 self.node_timeout = 2 self.http_timeout = 3 self.network_chunk_size = 65536 self.disk_chunk_size = 4096 conf = { 'devices': testdir, 'mount_check': 'false', } policy = POLICIES.default if policy is None else policy self._diskfile_router = diskfile.DiskFileRouter(conf, self.logger) self._diskfile_mgr = self._diskfile_router[policy] def write_diskfile(df, timestamp, data='test data', frag_index=None, commit=True, extra_metadata=None): # Helper method to write some data and metadata to a diskfile. 
# Optionally do not commit the diskfile with df.create() as writer: writer.write(data) metadata = { 'ETag': hashlib.md5(data).hexdigest(), 'X-Timestamp': timestamp.internal, 'Content-Length': str(len(data)), } if extra_metadata: metadata.update(extra_metadata) if frag_index is not None: metadata['X-Object-Sysmeta-Ec-Frag-Index'] = str(frag_index) writer.put(metadata) if commit: writer.commit(timestamp) # else: don't make it durable return metadata class BaseTest(unittest.TestCase): def setUp(self): # daemon will be set in subclass setUp self.daemon = None self.tmpdir = tempfile.mkdtemp() def tearDown(self): shutil.rmtree(self.tmpdir, ignore_errors=True) def _make_diskfile(self, device='dev', partition='9', account='a', container='c', obj='o', body='test', extra_metadata=None, policy=None, frag_index=None, timestamp=None, df_mgr=None, commit=True): policy = policy or POLICIES.legacy object_parts = account, container, obj timestamp = Timestamp(time.time()) if timestamp is None else timestamp if df_mgr is None: df_mgr = self.daemon._diskfile_router[policy] df = df_mgr.get_diskfile( device, partition, *object_parts, policy=policy, frag_index=frag_index) write_diskfile(df, timestamp, data=body, extra_metadata=extra_metadata, commit=commit) if commit: # when we write and commit stub data, sanity check it's readable # and not quarantined because of any validation check with df.open(): self.assertEqual(''.join(df.reader()), body) # sanity checks listing = os.listdir(df._datadir) self.assertTrue(listing) for filename in listing: self.assertTrue(filename.startswith(timestamp.internal)) return df def _make_open_diskfile(self, device='dev', partition='9', account='a', container='c', obj='o', body='test', extra_metadata=None, policy=None, frag_index=None, timestamp=None, df_mgr=None): df = self._make_diskfile(device, partition, account, container, obj, body, extra_metadata, policy, frag_index, timestamp, df_mgr) df.open() return df swift-2.7.1/test/unit/obj/test_server.py0000775000567000056710000111460013024044354021452 0ustar jenkinsjenkins00000000000000# coding: utf-8 # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
"""Tests for swift.obj.server""" import six.moves.cPickle as pickle import datetime import json import errno import operator import os import mock import six from six import StringIO import unittest import math import random from shutil import rmtree from time import gmtime, strftime, time, struct_time from tempfile import mkdtemp from hashlib import md5 import itertools import tempfile from collections import defaultdict from contextlib import contextmanager from eventlet import sleep, spawn, wsgi, listen, Timeout, tpool, greenthread from eventlet.green import httplib from nose import SkipTest from swift import __version__ as swift_version from swift.common.http import is_success from test.unit import FakeLogger, debug_logger, mocked_http_conn, \ make_timestamp_iter, DEFAULT_TEST_EC_TYPE from test.unit import connect_tcp, readuntil2crlfs, patch_policies, \ encode_frag_archive_bodies from swift.obj import server as object_server from swift.obj import updater from swift.obj import diskfile from swift.common import utils, bufferedhttp from swift.common.header_key_dict import HeaderKeyDict from swift.common.utils import hash_path, mkdirs, normalize_timestamp, \ NullLogger, storage_directory, public, replication, encode_timestamps, \ Timestamp from swift.common import constraints from swift.common.swob import Request, WsgiBytesIO from swift.common.splice import splice from swift.common.storage_policy import (StoragePolicy, ECStoragePolicy, POLICIES, EC_POLICY) from swift.common.exceptions import DiskFileDeviceUnavailable, \ DiskFileNoSpace, DiskFileQuarantined def mock_time(*args, **kwargs): return 5000.0 test_policies = [ StoragePolicy(0, name='zero', is_default=True), ECStoragePolicy(1, name='one', ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=4), ] @contextmanager def fake_spawn(): """ Spawn and capture the result so we can later wait on it. This means we can test code executing in a greenthread but still wait() on the result to ensure that the method has completed. 
""" greenlets = [] def _inner_fake_spawn(func, *a, **kw): gt = greenthread.spawn(func, *a, **kw) greenlets.append(gt) return gt object_server.spawn = _inner_fake_spawn with mock.patch('swift.obj.server.spawn', _inner_fake_spawn): try: yield finally: for gt in greenlets: gt.wait() @patch_policies(test_policies) class TestObjectController(unittest.TestCase): """Test swift.obj.server.ObjectController""" def setUp(self): """Set up for testing swift.object.server.ObjectController""" utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = 'startcap' self.tmpdir = mkdtemp() self.testdir = os.path.join(self.tmpdir, 'tmp_test_object_server_ObjectController') mkdirs(os.path.join(self.testdir, 'sda1')) self.conf = {'devices': self.testdir, 'mount_check': 'false', 'container_update_timeout': 0.0} self.object_controller = object_server.ObjectController( self.conf, logger=debug_logger()) self.object_controller.bytes_per_sync = 1 self._orig_tpool_exc = tpool.execute tpool.execute = lambda f, *args, **kwargs: f(*args, **kwargs) self.df_mgr = diskfile.DiskFileManager(self.conf, self.object_controller.logger) self.logger = debug_logger('test-object-controller') self.ts = make_timestamp_iter() self.ec_policies = [p for p in POLICIES if p.policy_type == EC_POLICY] def tearDown(self): """Tear down for testing swift.object.server.ObjectController""" rmtree(self.tmpdir) tpool.execute = self._orig_tpool_exc def _stage_tmp_dir(self, policy): mkdirs(os.path.join(self.testdir, 'sda1', diskfile.get_tmp_dir(policy))) def check_all_api_methods(self, obj_name='o', alt_res=None): path = '/sda1/p/a/c/%s' % obj_name body = 'SPECIAL_STRING' op_table = { "PUT": (body, alt_res or 201, ''), # create one "GET": ('', alt_res or 200, body), # check it "POST": ('', alt_res or 202, ''), # update it "HEAD": ('', alt_res or 200, ''), # head it "DELETE": ('', alt_res or 204, '') # delete it } for method in ["PUT", "GET", "POST", "HEAD", "DELETE"]: in_body, res, out_body = op_table[method] timestamp = normalize_timestamp(time()) req = Request.blank( path, environ={'REQUEST_METHOD': method}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test'}) req.body = in_body resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, res) if out_body and (200 <= res < 300): self.assertEqual(resp.body, out_body) def test_REQUEST_SPECIAL_CHARS(self): obj = 'special昆%20/%' self.check_all_api_methods(obj) def test_device_unavailable(self): def raise_disk_unavail(*args, **kwargs): raise DiskFileDeviceUnavailable() self.object_controller.get_diskfile = raise_disk_unavail self.check_all_api_methods(alt_res=507) def test_allowed_headers(self): dah = ['content-disposition', 'content-encoding', 'x-delete-at', 'x-object-manifest', 'x-static-large-object'] conf = {'devices': self.testdir, 'mount_check': 'false', 'allowed_headers': ','.join(['content-length'] + dah)} self.object_controller = object_server.ObjectController( conf, logger=debug_logger()) self.assertEqual(self.object_controller.allowed_headers, set(dah)) def test_POST_update_meta(self): # Test swift.obj.server.ObjectController.POST original_headers = self.object_controller.allowed_headers test_headers = 'content-encoding foo bar'.split() self.object_controller.allowed_headers = set(test_headers) timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test', 'Foo': 'fooheader', 'Baz': 'bazheader', 'X-Object-Meta-1': 'One', 
'X-Object-Meta-Two': 'Two'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp, 'X-Object-Meta-3': 'Three', 'X-Object-Meta-4': 'Four', 'Content-Encoding': 'gzip', 'Foo': 'fooheader', 'Bar': 'barheader', 'Content-Type': 'application/x-test'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) self.assertTrue("X-Object-Meta-1" not in resp.headers and "X-Object-Meta-Two" not in resp.headers and "X-Object-Meta-3" in resp.headers and "X-Object-Meta-4" in resp.headers and "Foo" in resp.headers and "Bar" in resp.headers and "Baz" not in resp.headers and "Content-Encoding" in resp.headers) self.assertEqual(resp.headers['Content-Type'], 'application/x-test') req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertTrue("X-Object-Meta-1" not in resp.headers and "X-Object-Meta-Two" not in resp.headers and "X-Object-Meta-3" in resp.headers and "X-Object-Meta-4" in resp.headers and "Foo" in resp.headers and "Bar" in resp.headers and "Baz" not in resp.headers and "Content-Encoding" in resp.headers) self.assertEqual(resp.headers['Content-Type'], 'application/x-test') timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) self.assertTrue("X-Object-Meta-3" not in resp.headers and "X-Object-Meta-4" not in resp.headers and "Foo" not in resp.headers and "Bar" not in resp.headers and "Content-Encoding" not in resp.headers) self.assertEqual(resp.headers['Content-Type'], 'application/x-test') # test defaults self.object_controller.allowed_headers = original_headers timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test', 'Foo': 'fooheader', 'X-Object-Meta-1': 'One', 'X-Object-Manifest': 'c/bar', 'Content-Encoding': 'gzip', 'Content-Disposition': 'bar', 'X-Static-Large-Object': 'True', }) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) self.assertTrue("X-Object-Meta-1" in resp.headers and "Foo" not in resp.headers and "Content-Encoding" in resp.headers and "X-Object-Manifest" in resp.headers and "Content-Disposition" in resp.headers and "X-Static-Large-Object" in resp.headers) self.assertEqual(resp.headers['Content-Type'], 'application/x-test') timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp, 'X-Object-Meta-3': 'Three', 'Foo': 'fooheader', 'Content-Type': 'application/x-test'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) self.assertTrue("X-Object-Meta-1" not in resp.headers and "Foo" not in resp.headers and "Content-Encoding" not in 
resp.headers and "X-Object-Manifest" not in resp.headers and "Content-Disposition" not in resp.headers and "X-Object-Meta-3" in resp.headers and "X-Static-Large-Object" in resp.headers) self.assertEqual(resp.headers['Content-Type'], 'application/x-test') # Test for empty metadata timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test', 'X-Object-Meta-3': ''}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) self.assertEqual(resp.headers["x-object-meta-3"], '') def test_POST_old_timestamp(self): ts = time() orig_timestamp = utils.Timestamp(ts).internal req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': orig_timestamp, 'Content-Type': 'application/x-test', 'X-Object-Meta-1': 'One', 'X-Object-Meta-Two': 'Two'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) # Same timestamp should result in 409 req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': orig_timestamp, 'X-Object-Meta-3': 'Three', 'X-Object-Meta-4': 'Four', 'Content-Encoding': 'gzip', 'Content-Type': 'application/x-test'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) self.assertEqual(resp.headers['X-Backend-Timestamp'], orig_timestamp) # Earlier timestamp should result in 409 timestamp = normalize_timestamp(ts - 1) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp, 'X-Object-Meta-5': 'Five', 'X-Object-Meta-6': 'Six', 'Content-Encoding': 'gzip', 'Content-Type': 'application/x-test'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) self.assertEqual(resp.headers['X-Backend-Timestamp'], orig_timestamp) def test_POST_conflicts_with_later_POST(self): ts_iter = make_timestamp_iter() t_put = next(ts_iter).internal req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': t_put, 'Content-Length': 0, 'Content-Type': 'plain/text'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) t_post1 = next(ts_iter).internal t_post2 = next(ts_iter).internal req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': t_post2}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': t_post1}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) obj_dir = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(0), 'p', hash_path('a', 'c', 'o'))) ts_file = os.path.join(obj_dir, t_post2 + '.meta') self.assertTrue(os.path.isfile(ts_file)) meta_file = os.path.join(obj_dir, t_post1 + '.meta') self.assertFalse(os.path.isfile(meta_file)) def test_POST_not_exist(self): timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/fail', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp, 'X-Object-Meta-1': 'One', 'X-Object-Meta-2': 'Two', 'Content-Type': 'text/plain'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) def test_POST_invalid_path(self): timestamp = 
normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp, 'X-Object-Meta-1': 'One', 'X-Object-Meta-2': 'Two', 'Content-Type': 'text/plain'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_POST_no_timestamp(self): req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Object-Meta-1': 'One', 'X-Object-Meta-2': 'Two', 'Content-Type': 'text/plain'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_POST_bad_timestamp(self): req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': 'bad', 'X-Object-Meta-1': 'One', 'X-Object-Meta-2': 'Two', 'Content-Type': 'text/plain'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_POST_container_connection(self): # Test that POST does call container_update and returns success # whether update to container server succeeds or fails def mock_http_connect(calls, response, with_exc=False): class FakeConn(object): def __init__(self, calls, status, with_exc): self.calls = calls self.status = status self.reason = 'Fake' self.host = '1.2.3.4' self.port = '1234' self.with_exc = with_exc def getresponse(self): calls[0] += 1 if self.with_exc: raise Exception('test') return self def read(self, amt=None): return '' return lambda *args, **kwargs: FakeConn(calls, response, with_exc) ts = time() timestamp = normalize_timestamp(ts) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Content-Length': '0'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(ts + 1), 'X-Container-Host': '1.2.3.4:0', 'X-Container-Partition': '3', 'X-Container-Device': 'sda1', 'X-Container-Timestamp': '1', 'Content-Type': 'application/new1'}) calls = [0] with mock.patch.object(object_server, 'http_connect', mock_http_connect(calls, 202)): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(ts + 2), 'X-Container-Host': '1.2.3.4:0', 'X-Container-Partition': '3', 'X-Container-Device': 'sda1', 'X-Container-Timestamp': '1', 'Content-Type': 'application/new1'}) calls = [0] with mock.patch.object(object_server, 'http_connect', mock_http_connect(calls, 202, with_exc=True)): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(ts + 3), 'X-Container-Host': '1.2.3.4:0', 'X-Container-Partition': '3', 'X-Container-Device': 'sda1', 'X-Container-Timestamp': '1', 'Content-Type': 'application/new2'}) calls = [0] with mock.patch.object(object_server, 'http_connect', mock_http_connect(calls, 500)): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) def _test_POST_container_updates(self, policy, update_etag=None): # Test that POST requests result in correct calls to container_update ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) t = [ts_iter.next() for _ in range(0, 5)] calls_made = [] update_etag = update_etag or '098f6bcd4621d373cade4e832627b4f6' 
def mock_container_update(ctlr, op, account, container, obj, request, headers_out, objdevice, policy): calls_made.append((headers_out, policy)) headers = { 'X-Timestamp': t[1].internal, 'Content-Type': 'application/octet-stream;swift_bytes=123456789', 'Content-Length': '4', 'X-Backend-Storage-Policy-Index': int(policy)} if policy.policy_type == EC_POLICY: headers['X-Backend-Container-Update-Override-Etag'] = update_etag headers['X-Object-Sysmeta-Ec-Etag'] = update_etag headers['X-Object-Sysmeta-Ec-Frag-Index'] = 2 req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=headers) req.body = 'test' with mock.patch('swift.obj.server.ObjectController.container_update', mock_container_update): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) self.assertEqual(1, len(calls_made)) expected_headers = HeaderKeyDict({ 'x-size': '4', 'x-content-type': 'application/octet-stream;swift_bytes=123456789', 'x-timestamp': t[1].internal, 'x-etag': update_etag}) self.assertDictEqual(expected_headers, calls_made[0][0]) self.assertEqual(policy, calls_made[0][1]) # POST with no metadata newer than the data should return 409, # container update not expected calls_made = [] req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': t[0].internal, 'X-Backend-Storage-Policy-Index': int(policy)}) with mock.patch('swift.obj.server.ObjectController.container_update', mock_container_update): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) self.assertEqual(resp.headers['x-backend-timestamp'], t[1].internal) self.assertEqual(0, len(calls_made)) # POST with newer metadata returns success and container update # is expected calls_made = [] req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': t[3].internal, 'X-Backend-Storage-Policy-Index': int(policy)}) with mock.patch('swift.obj.server.ObjectController.container_update', mock_container_update): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) self.assertEqual(1, len(calls_made)) expected_headers = HeaderKeyDict({ 'x-size': '4', 'x-content-type': 'application/octet-stream;swift_bytes=123456789', 'x-timestamp': t[1].internal, 'x-content-type-timestamp': t[1].internal, 'x-meta-timestamp': t[3].internal, 'x-etag': update_etag}) self.assertDictEqual(expected_headers, calls_made[0][0]) self.assertEqual(policy, calls_made[0][1]) # POST with no metadata newer than existing metadata should return # 409, container update not expected calls_made = [] req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': t[2].internal, 'X-Backend-Storage-Policy-Index': int(policy)}) with mock.patch('swift.obj.server.ObjectController.container_update', mock_container_update): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) self.assertEqual(resp.headers['x-backend-timestamp'], t[3].internal) self.assertEqual(0, len(calls_made)) # POST with newer content-type but older metadata returns success # and container update is expected newer content-type should have # existing swift_bytes appended calls_made = [] req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={ 'X-Timestamp': t[2].internal, 'Content-Type': 'text/plain', 'Content-Type-Timestamp': t[2].internal, 'X-Backend-Storage-Policy-Index': int(policy) }) with mock.patch('swift.obj.server.ObjectController.container_update', 
mock_container_update): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) self.assertEqual(1, len(calls_made)) expected_headers = HeaderKeyDict({ 'x-size': '4', 'x-content-type': 'text/plain;swift_bytes=123456789', 'x-timestamp': t[1].internal, 'x-content-type-timestamp': t[2].internal, 'x-meta-timestamp': t[3].internal, 'x-etag': update_etag}) self.assertDictEqual(expected_headers, calls_made[0][0]) self.assertEqual(policy, calls_made[0][1]) # POST with older content-type but newer metadata returns success # and container update is expected calls_made = [] req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={ 'X-Timestamp': t[4].internal, 'Content-Type': 'older', 'Content-Type-Timestamp': t[1].internal, 'X-Backend-Storage-Policy-Index': int(policy) }) with mock.patch('swift.obj.server.ObjectController.container_update', mock_container_update): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) self.assertEqual(1, len(calls_made)) expected_headers = HeaderKeyDict({ 'x-size': '4', 'x-content-type': 'text/plain;swift_bytes=123456789', 'x-timestamp': t[1].internal, 'x-content-type-timestamp': t[2].internal, 'x-meta-timestamp': t[4].internal, 'x-etag': update_etag}) self.assertDictEqual(expected_headers, calls_made[0][0]) self.assertEqual(policy, calls_made[0][1]) # POST with same-time content-type and metadata returns 409 # and no container update is expected calls_made = [] req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={ 'X-Timestamp': t[4].internal, 'Content-Type': 'ignored', 'Content-Type-Timestamp': t[2].internal, 'X-Backend-Storage-Policy-Index': int(policy) }) with mock.patch('swift.obj.server.ObjectController.container_update', mock_container_update): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) self.assertEqual(0, len(calls_made)) # POST with implicit newer content-type but older metadata # returns success and container update is expected, # update reports existing metadata timestamp calls_made = [] req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={ 'X-Timestamp': t[3].internal, 'Content-Type': 'text/newer', 'X-Backend-Storage-Policy-Index': int(policy) }) with mock.patch('swift.obj.server.ObjectController.container_update', mock_container_update): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) self.assertEqual(1, len(calls_made)) expected_headers = HeaderKeyDict({ 'x-size': '4', 'x-content-type': 'text/newer;swift_bytes=123456789', 'x-timestamp': t[1].internal, 'x-content-type-timestamp': t[3].internal, 'x-meta-timestamp': t[4].internal, 'x-etag': update_etag}) self.assertDictEqual(expected_headers, calls_made[0][0]) self.assertEqual(policy, calls_made[0][1]) def test_POST_container_updates_with_replication_policy(self): self._test_POST_container_updates(POLICIES[0]) def test_POST_container_updates_with_EC_policy(self): self._test_POST_container_updates( POLICIES[1], update_etag='override_etag') def _test_PUT_then_POST_async_pendings(self, policy, update_etag=None): # Test that PUT and POST requests result in distinct async pending # files when sync container update fails. 
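# --- Illustrative sketch (the dict shape and path layout are taken from the
# assertions below; the helper itself is hypothetical): when the synchronous
# container update fails, the object server leaves a pickled "async pending"
# record on disk for the object updater to replay later. One record is
# written per request timestamp, which is why a PUT followed by a POST
# leaves two distinct files.
import pickle

def read_async_pending(path):
    # e.g. <device>/<async dir>/<hash suffix>/<object hash>-<timestamp>
    with open(path, 'rb') as fp:
        entry = pickle.load(fp)
    # entry looks like:
    # {'op': 'PUT', 'account': 'a', 'container': 'c', 'obj': 'o',
    #  'headers': {...the container update headers...}}
    return entry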
def fake_http_connect(*args): raise Exception('test') device_dir = os.path.join(self.testdir, 'sda1') ts_iter = make_timestamp_iter() t_put = ts_iter.next() update_etag = update_etag or '098f6bcd4621d373cade4e832627b4f6' put_headers = { 'X-Trans-Id': 'put_trans_id', 'X-Timestamp': t_put.internal, 'Content-Type': 'application/octet-stream;swift_bytes=123456789', 'Content-Length': '4', 'X-Backend-Storage-Policy-Index': int(policy), 'X-Container-Host': 'chost:cport', 'X-Container-Partition': 'cpartition', 'X-Container-Device': 'cdevice'} if policy.policy_type == EC_POLICY: put_headers.update({ 'X-Object-Sysmeta-Ec-Frag-Index': '2', 'X-Backend-Container-Update-Override-Etag': update_etag, 'X-Object-Sysmeta-Ec-Etag': update_etag}) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=put_headers, body='test') with mock.patch('swift.obj.server.http_connect', fake_http_connect), \ mock.patch('swift.common.utils.HASH_PATH_PREFIX', ''), \ fake_spawn(): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) async_pending_file_put = os.path.join( device_dir, diskfile.get_async_dir(policy), 'a83', '06fbf0b514e5199dfc4e00f42eb5ea83-%s' % t_put.internal) self.assertTrue(os.path.isfile(async_pending_file_put), 'Expected %s to be a file but it is not.' % async_pending_file_put) expected_put_headers = { 'Referer': 'PUT http://localhost/sda1/p/a/c/o', 'X-Trans-Id': 'put_trans_id', 'X-Timestamp': t_put.internal, 'X-Content-Type': 'application/octet-stream;swift_bytes=123456789', 'X-Size': '4', 'X-Etag': '098f6bcd4621d373cade4e832627b4f6', 'User-Agent': 'object-server %s' % os.getpid(), 'X-Backend-Storage-Policy-Index': '%d' % int(policy)} if policy.policy_type == EC_POLICY: expected_put_headers['X-Etag'] = update_etag self.assertDictEqual( pickle.load(open(async_pending_file_put)), {'headers': expected_put_headers, 'account': 'a', 'container': 'c', 'obj': 'o', 'op': 'PUT'}) # POST with newer metadata returns success and container update # is expected t_post = ts_iter.next() post_headers = { 'X-Trans-Id': 'post_trans_id', 'X-Timestamp': t_post.internal, 'Content-Type': 'application/other', 'X-Backend-Storage-Policy-Index': int(policy), 'X-Container-Host': 'chost:cport', 'X-Container-Partition': 'cpartition', 'X-Container-Device': 'cdevice'} req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers=post_headers) with mock.patch('swift.obj.server.http_connect', fake_http_connect), \ mock.patch('swift.common.utils.HASH_PATH_PREFIX', ''), \ fake_spawn(): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) self.maxDiff = None # check async pending file for PUT is still intact self.assertDictEqual( pickle.load(open(async_pending_file_put)), {'headers': expected_put_headers, 'account': 'a', 'container': 'c', 'obj': 'o', 'op': 'PUT'}) # check distinct async pending file for POST async_pending_file_post = os.path.join( device_dir, diskfile.get_async_dir(policy), 'a83', '06fbf0b514e5199dfc4e00f42eb5ea83-%s' % t_post.internal) self.assertTrue(os.path.isfile(async_pending_file_post), 'Expected %s to be a file but it is not.' 
% async_pending_file_post) expected_post_headers = { 'Referer': 'POST http://localhost/sda1/p/a/c/o', 'X-Trans-Id': 'post_trans_id', 'X-Timestamp': t_put.internal, 'X-Content-Type': 'application/other;swift_bytes=123456789', 'X-Size': '4', 'X-Etag': '098f6bcd4621d373cade4e832627b4f6', 'User-Agent': 'object-server %s' % os.getpid(), 'X-Backend-Storage-Policy-Index': '%d' % int(policy), 'X-Meta-Timestamp': t_post.internal, 'X-Content-Type-Timestamp': t_post.internal, } if policy.policy_type == EC_POLICY: expected_post_headers['X-Etag'] = update_etag self.assertDictEqual( pickle.load(open(async_pending_file_post)), {'headers': expected_post_headers, 'account': 'a', 'container': 'c', 'obj': 'o', 'op': 'PUT'}) # verify that only the POST (most recent) async update gets sent by the # object updater, and that both update files are deleted with mock.patch( 'swift.obj.updater.ObjectUpdater.object_update') as mock_update, \ mock.patch('swift.obj.updater.dump_recon_cache'): object_updater = updater.ObjectUpdater( {'devices': self.testdir, 'mount_check': 'false'}, logger=debug_logger()) node = {'id': 1} mock_ring = mock.MagicMock() mock_ring.get_nodes.return_value = (99, [node]) object_updater.container_ring = mock_ring mock_update.return_value = ((True, 1)) object_updater.run_once() self.assertEqual(1, mock_update.call_count) self.assertEqual((node, 99, 'PUT', '/a/c/o'), mock_update.call_args_list[0][0][0:4]) actual_headers = mock_update.call_args_list[0][0][4] self.assertTrue( actual_headers.pop('user-agent').startswith('object-updater')) self.assertDictEqual(expected_post_headers, actual_headers) self.assertFalse( os.listdir(os.path.join( device_dir, diskfile.get_async_dir(policy)))) def test_PUT_then_POST_async_pendings_with_repl_policy(self): self._test_PUT_then_POST_async_pendings(POLICIES[0]) def test_PUT_then_POST_async_pendings_with_EC_policy(self): self._test_PUT_then_POST_async_pendings( POLICIES[1], update_etag='override_etag') def test_POST_quarantine_zbyte(self): timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) objfile = self.df_mgr.get_diskfile('sda1', 'p', 'a', 'c', 'o', policy=POLICIES.legacy) objfile.open() file_name = os.path.basename(objfile._data_file) with open(objfile._data_file) as fp: metadata = diskfile.read_metadata(fp) os.unlink(objfile._data_file) with open(objfile._data_file, 'w') as fp: diskfile.write_metadata(fp, metadata) self.assertEqual(os.listdir(objfile._datadir)[0], file_name) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(time())}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) quar_dir = os.path.join( self.testdir, 'sda1', 'quarantined', 'objects', os.path.basename(os.path.dirname(objfile._data_file))) self.assertEqual(os.listdir(quar_dir)[0], file_name) def test_PUT_invalid_path(self): req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_PUT_no_timestamp(self): req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'CONTENT_LENGTH': '0'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_PUT_no_content_type(self): req = Request.blank( 
'/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '6'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_PUT_invalid_content_type(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '6', 'Content-Type': '\xff\xff'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) self.assertTrue('Content-Type' in resp.body) def test_PUT_no_content_length(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Type': 'application/octet-stream'}) req.body = 'VERIFY' del req.headers['Content-Length'] resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 411) def test_PUT_zero_content_length(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Type': 'application/octet-stream'}) req.body = '' self.assertEqual(req.headers['Content-Length'], '0') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) def test_PUT_bad_transfer_encoding(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Type': 'application/octet-stream'}) req.body = 'VERIFY' req.headers['Transfer-Encoding'] = 'bad' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_PUT_if_none_match_star(self): # First PUT should succeed timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Length': '6', 'Content-Type': 'application/octet-stream', 'If-None-Match': '*'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) # File should already exist so it should fail timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Length': '6', 'Content-Type': 'application/octet-stream', 'If-None-Match': '*'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) def test_PUT_if_none_match(self): # PUT with if-none-match set and nothing there should succeed timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Length': '6', 'Content-Type': 'application/octet-stream', 'If-None-Match': 'notthere'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) # PUT with if-none-match of the object etag should fail timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Length': '6', 'Content-Type': 'application/octet-stream', 'If-None-Match': '0b4c12d7e0a73840c1c4f148fda3b037'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) def test_PUT_common(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 
'Content-Length': '6', 'Content-Type': 'application/octet-stream', 'x-object-meta-test': 'one', 'Custom-Header': '*', 'X-Backend-Replication-Headers': 'Content-Type Content-Length'}) req.body = 'VERIFY' with mock.patch.object(self.object_controller, 'allowed_headers', ['Custom-Header']): self.object_controller.allowed_headers = ['Custom-Header'] resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.data') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(open(objfile).read(), 'VERIFY') self.assertEqual(diskfile.read_metadata(objfile), {'X-Timestamp': utils.Timestamp(timestamp).internal, 'Content-Length': '6', 'ETag': '0b4c12d7e0a73840c1c4f148fda3b037', 'Content-Type': 'application/octet-stream', 'name': '/a/c/o', 'X-Object-Meta-Test': 'one', 'Custom-Header': '*'}) def test_PUT_overwrite(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '6', 'Content-Type': 'application/octet-stream'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) sleep(.00001) timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Content-Encoding': 'gzip'}) req.body = 'VERIFY TWO' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.data') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(open(objfile).read(), 'VERIFY TWO') self.assertEqual(diskfile.read_metadata(objfile), {'X-Timestamp': utils.Timestamp(timestamp).internal, 'Content-Length': '10', 'ETag': 'b381a4c5dab1eaa1eb9711fa647cd039', 'Content-Type': 'text/plain', 'name': '/a/c/o', 'Content-Encoding': 'gzip'}) def test_PUT_overwrite_to_older_ts_succcess(self): ts_iter = make_timestamp_iter() old_timestamp = next(ts_iter) new_timestamp = next(ts_iter) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': old_timestamp.normal, 'Content-Length': '0', 'Content-Type': 'application/octet-stream'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': new_timestamp.normal, 'Content-Type': 'text/plain', 'Content-Encoding': 'gzip'}) req.body = 'VERIFY TWO' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), new_timestamp.internal + '.data') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(open(objfile).read(), 'VERIFY TWO') self.assertEqual( diskfile.read_metadata(objfile), {'X-Timestamp': new_timestamp.internal, 'Content-Length': '10', 'ETag': 'b381a4c5dab1eaa1eb9711fa647cd039', 'Content-Type': 'text/plain', 'name': '/a/c/o', 'Content-Encoding': 'gzip'}) def test_PUT_overwrite_to_newer_ts_failed(self): ts_iter = make_timestamp_iter() old_timestamp = next(ts_iter) new_timestamp = next(ts_iter) req = Request.blank( 
'/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': new_timestamp.normal, 'Content-Length': '0', 'Content-Type': 'application/octet-stream'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': old_timestamp.normal, 'Content-Type': 'text/plain', 'Content-Encoding': 'gzip'}) req.body = 'VERIFY TWO' with mock.patch( 'swift.obj.diskfile.BaseDiskFile.create') as mock_create: resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) self.assertEqual(mock_create.call_count, 0) # the data file should not exist there (this is a sanity check: even if a # .data file had been written unexpectedly, it would be removed # by hash_cleanup_list_dir) datafile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), old_timestamp.internal + '.data') self.assertFalse(os.path.exists(datafile)) # ts file still exists tsfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), new_timestamp.internal + '.ts') self.assertTrue(os.path.isfile(tsfile)) def test_PUT_overwrite_w_delete_at(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'X-Delete-At': 9999999999, 'Content-Length': '6', 'Content-Type': 'application/octet-stream'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) sleep(.00001) timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Content-Encoding': 'gzip'}) req.body = 'VERIFY TWO' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.data') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(open(objfile).read(), 'VERIFY TWO') self.assertEqual(diskfile.read_metadata(objfile), {'X-Timestamp': utils.Timestamp(timestamp).internal, 'Content-Length': '10', 'ETag': 'b381a4c5dab1eaa1eb9711fa647cd039', 'Content-Type': 'text/plain', 'name': '/a/c/o', 'Content-Encoding': 'gzip'}) def test_PUT_old_timestamp(self): ts = time() orig_timestamp = utils.Timestamp(ts).internal req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': orig_timestamp, 'Content-Length': '6', 'Content-Type': 'application/octet-stream'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(ts), 'Content-Type': 'text/plain', 'Content-Encoding': 'gzip'}) req.body = 'VERIFY TWO' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) self.assertEqual(resp.headers['X-Backend-Timestamp'], orig_timestamp) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': normalize_timestamp(ts - 1), 'Content-Type': 'text/plain', 'Content-Encoding': 'gzip'}) req.body = 'VERIFY THREE' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) self.assertEqual(resp.headers['X-Backend-Timestamp'],
orig_timestamp) def test_PUT_new_object_really_old_timestamp(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '-1', # 1969-12-31 23:59:59 'Content-Length': '6', 'Content-Type': 'application/octet-stream'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '1', # 1970-01-01 00:00:01 'Content-Length': '6', 'Content-Type': 'application/octet-stream'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) def test_PUT_object_really_new_timestamp(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '9999999999', # 2286-11-20 17:46:40 'Content-Length': '6', 'Content-Type': 'application/octet-stream'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) # roll over to 11 digits before the decimal req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '10000000000', 'Content-Length': '6', 'Content-Type': 'application/octet-stream'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_PUT_no_etag(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Type': 'text/plain'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) def test_PUT_invalid_etag(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Type': 'text/plain', 'ETag': 'invalid'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 422) def test_PUT_user_metadata(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'ETag': 'b114ab7b90d9ccac4bd5d99cc7ebb568', 'X-Object-Meta-1': 'One', 'X-Object-Meta-Two': 'Two'}) req.body = 'VERIFY THREE' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.data') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(open(objfile).read(), 'VERIFY THREE') self.assertEqual(diskfile.read_metadata(objfile), {'X-Timestamp': utils.Timestamp(timestamp).internal, 'Content-Length': '12', 'ETag': 'b114ab7b90d9ccac4bd5d99cc7ebb568', 'Content-Type': 'text/plain', 'name': '/a/c/o', 'X-Object-Meta-1': 'One', 'X-Object-Meta-Two': 'Two'}) def test_PUT_etag_in_footer(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Transfer-Encoding': 'chunked', 'Etag': 'other-etag', 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary'}, environ={'REQUEST_METHOD': 'PUT'}) obj_etag = md5("obj data").hexdigest() footer_meta = json.dumps({"Etag": obj_etag}) footer_meta_cksum = md5(footer_meta).hexdigest() req.body = "\r\n".join(( "--boundary", "", "obj data", "--boundary", "Content-MD5: " + footer_meta_cksum, "", footer_meta, 
"--boundary--", )) req.headers.pop("Content-Length", None) resp = req.get_response(self.object_controller) self.assertEqual(resp.etag, obj_etag) self.assertEqual(resp.status_int, 201) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.data') with open(objfile) as fh: self.assertEqual(fh.read(), "obj data") def test_PUT_etag_in_footer_mismatch(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Transfer-Encoding': 'chunked', 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary'}, environ={'REQUEST_METHOD': 'PUT'}) footer_meta = json.dumps({"Etag": md5("green").hexdigest()}) footer_meta_cksum = md5(footer_meta).hexdigest() req.body = "\r\n".join(( "--boundary", "", "blue", "--boundary", "Content-MD5: " + footer_meta_cksum, "", footer_meta, "--boundary--", )) req.headers.pop("Content-Length", None) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 422) def test_PUT_meta_in_footer(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Transfer-Encoding': 'chunked', 'X-Object-Meta-X': 'Z', 'X-Object-Sysmeta-X': 'Z', 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary'}, environ={'REQUEST_METHOD': 'PUT'}) footer_meta = json.dumps({ 'X-Object-Meta-X': 'Y', 'X-Object-Sysmeta-X': 'Y', }) footer_meta_cksum = md5(footer_meta).hexdigest() req.body = "\r\n".join(( "--boundary", "", "stuff stuff stuff", "--boundary", "Content-MD5: " + footer_meta_cksum, "", footer_meta, "--boundary--", )) req.headers.pop("Content-Length", None) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', headers={'X-Timestamp': timestamp}, environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.headers.get('X-Object-Meta-X'), 'Y') self.assertEqual(resp.headers.get('X-Object-Sysmeta-X'), 'Y') def test_PUT_missing_footer_checksum(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Transfer-Encoding': 'chunked', 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary'}, environ={'REQUEST_METHOD': 'PUT'}) footer_meta = json.dumps({"Etag": md5("obj data").hexdigest()}) req.body = "\r\n".join(( "--boundary", "", "obj data", "--boundary", # no Content-MD5 "", footer_meta, "--boundary--", )) req.headers.pop("Content-Length", None) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_PUT_bad_footer_checksum(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Transfer-Encoding': 'chunked', 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary'}, environ={'REQUEST_METHOD': 'PUT'}) footer_meta = json.dumps({"Etag": md5("obj data").hexdigest()}) bad_footer_meta_cksum = md5(footer_meta + "bad").hexdigest() req.body = "\r\n".join(( "--boundary", "", "obj data", "--boundary", "Content-MD5: " + bad_footer_meta_cksum, "", footer_meta, "--boundary--", )) 
req.headers.pop("Content-Length", None) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 422) def test_PUT_bad_footer_json(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Transfer-Encoding': 'chunked', 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary'}, environ={'REQUEST_METHOD': 'PUT'}) footer_meta = "{{{[[{{[{[[{[{[[{{{[{{{{[[{{[{[" footer_meta_cksum = md5(footer_meta).hexdigest() req.body = "\r\n".join(( "--boundary", "", "obj data", "--boundary", "Content-MD5: " + footer_meta_cksum, "", footer_meta, "--boundary--", )) req.headers.pop("Content-Length", None) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_PUT_extra_mime_docs_ignored(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Transfer-Encoding': 'chunked', 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary'}, environ={'REQUEST_METHOD': 'PUT'}) footer_meta = json.dumps({'X-Object-Meta-Mint': 'pepper'}) footer_meta_cksum = md5(footer_meta).hexdigest() req.body = "\r\n".join(( "--boundary", "", "obj data", "--boundary", "Content-MD5: " + footer_meta_cksum, "", footer_meta, "--boundary", "This-Document-Is-Useless: yes", "", "blah blah I take up space", "--boundary--" )) req.headers.pop("Content-Length", None) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) # swob made this into a StringIO for us wsgi_input = req.environ['wsgi.input'] self.assertEqual(wsgi_input.tell(), len(wsgi_input.getvalue())) def test_PUT_user_metadata_no_xattr(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'ETag': 'b114ab7b90d9ccac4bd5d99cc7ebb568', 'X-Object-Meta-1': 'One', 'X-Object-Meta-Two': 'Two'}) req.body = 'VERIFY THREE' def mock_get_and_setxattr(*args, **kargs): error_num = errno.ENOTSUP if hasattr(errno, 'ENOTSUP') else \ errno.EOPNOTSUPP raise IOError(error_num, 'Operation not supported') with mock.patch('xattr.getxattr', mock_get_and_setxattr): with mock.patch('xattr.setxattr', mock_get_and_setxattr): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 507) def test_PUT_client_timeout(self): class FakeTimeout(BaseException): def __enter__(self): raise self def __exit__(self, typ, value, tb): pass # This is just so the test fails when run on older object server code # instead of exploding. 
if not hasattr(object_server, 'ChunkReadTimeout'): object_server.ChunkReadTimeout = None with mock.patch.object(object_server, 'ChunkReadTimeout', FakeTimeout): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'Content-Length': '6'}) req.environ['wsgi.input'] = WsgiBytesIO(b'VERIFY') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 408) def test_PUT_system_metadata(self): # check that sysmeta is stored in diskfile timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'X-Object-Meta-1': 'One', 'X-Object-Sysmeta-1': 'One', 'X-Object-Sysmeta-Two': 'Two'}) req.body = 'VERIFY SYSMETA' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), timestamp + '.data') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(open(objfile).read(), 'VERIFY SYSMETA') self.assertEqual(diskfile.read_metadata(objfile), {'X-Timestamp': timestamp, 'Content-Length': '14', 'Content-Type': 'text/plain', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'name': '/a/c/o', 'X-Object-Meta-1': 'One', 'X-Object-Sysmeta-1': 'One', 'X-Object-Sysmeta-Two': 'Two'}) def test_PUT_succeeds_with_later_POST(self): ts_iter = make_timestamp_iter() t_put = next(ts_iter).internal req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': t_put, 'Content-Length': 0, 'Content-Type': 'plain/text'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) t_put2 = next(ts_iter).internal t_post = next(ts_iter).internal req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': t_post}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': t_put2, 'Content-Length': 0, 'Content-Type': 'plain/text'}, ) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) obj_dir = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(0), 'p', hash_path('a', 'c', 'o'))) ts_file = os.path.join(obj_dir, t_put2 + '.data') self.assertTrue(os.path.isfile(ts_file)) meta_file = os.path.join(obj_dir, t_post + '.meta') self.assertTrue(os.path.isfile(meta_file)) def test_POST_system_metadata(self): # check that diskfile sysmeta is not changed by a POST timestamp1 = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp1, 'Content-Type': 'text/plain', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'X-Object-Meta-1': 'One', 'X-Object-Sysmeta-1': 'One', 'X-Object-Sysmeta-Two': 'Two'}) req.body = 'VERIFY SYSMETA' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) timestamp2 = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp2, 'X-Object-Meta-1': 'Not One', 'X-Object-Sysmeta-1': 'Not One', 'X-Object-Sysmeta-Two': 'Not Two'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) # 
original .data file metadata should be unchanged objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), timestamp1 + '.data') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(open(objfile).read(), 'VERIFY SYSMETA') self.assertEqual(diskfile.read_metadata(objfile), {'X-Timestamp': timestamp1, 'Content-Length': '14', 'Content-Type': 'text/plain', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'name': '/a/c/o', 'X-Object-Meta-1': 'One', 'X-Object-Sysmeta-1': 'One', 'X-Object-Sysmeta-Two': 'Two'}) # .meta file metadata should have only user meta items metafile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), timestamp2 + '.meta') self.assertTrue(os.path.isfile(metafile)) self.assertEqual(diskfile.read_metadata(metafile), {'X-Timestamp': timestamp2, 'name': '/a/c/o', 'X-Object-Meta-1': 'Not One'}) def test_POST_then_fetch_content_type(self): # check that content_type is updated by a POST timestamp1 = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp1, 'Content-Type': 'text/plain', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'X-Object-Meta-1': 'One'}) req.body = 'VERIFY SYSMETA' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) timestamp2 = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp2, 'X-Object-Meta-1': 'Not One', 'Content-Type': 'text/html'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) # original .data file metadata should be unchanged objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(0), 'p', hash_path('a', 'c', 'o')), timestamp1 + '.data') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(open(objfile).read(), 'VERIFY SYSMETA') self.assertEqual(diskfile.read_metadata(objfile), {'X-Timestamp': timestamp1, 'Content-Length': '14', 'Content-Type': 'text/plain', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'name': '/a/c/o', 'X-Object-Meta-1': 'One'}) # .meta file metadata should have updated content-type metafile_name = encode_timestamps(Timestamp(timestamp2), Timestamp(timestamp2), explicit=True) metafile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(0), 'p', hash_path('a', 'c', 'o')), metafile_name + '.meta') self.assertTrue(os.path.isfile(metafile)) self.assertEqual(diskfile.read_metadata(metafile), {'X-Timestamp': timestamp2, 'name': '/a/c/o', 'Content-Type': 'text/html', 'Content-Type-Timestamp': timestamp2, 'X-Object-Meta-1': 'Not One'}) def check_response(resp): self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_length, 14) self.assertEqual(resp.content_type, 'text/html') self.assertEqual(resp.headers['content-type'], 'text/html') self.assertEqual( resp.headers['last-modified'], strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp2))))) self.assertEqual(resp.headers['etag'], '"1000d172764c9dbc3a5798a67ec5bb76"') self.assertEqual(resp.headers['x-object-meta-1'], 'Not One') req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) check_response(resp) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) check_response(resp) def 
test_PUT_then_fetch_system_metadata(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'X-Object-Meta-1': 'One', 'X-Object-Sysmeta-1': 'One', 'X-Object-Sysmeta-Two': 'Two'}) req.body = 'VERIFY SYSMETA' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) def check_response(resp): self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_length, 14) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(resp.headers['content-type'], 'text/plain') self.assertEqual( resp.headers['last-modified'], strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp))))) self.assertEqual(resp.headers['etag'], '"1000d172764c9dbc3a5798a67ec5bb76"') self.assertEqual(resp.headers['x-object-meta-1'], 'One') self.assertEqual(resp.headers['x-object-sysmeta-1'], 'One') self.assertEqual(resp.headers['x-object-sysmeta-two'], 'Two') req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) check_response(resp) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) check_response(resp) def test_PUT_then_POST_then_fetch_system_metadata(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'text/plain', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'X-Object-Meta-1': 'One', 'X-Object-Sysmeta-1': 'One', 'X-Object-Sysmeta-Two': 'Two'}) req.body = 'VERIFY SYSMETA' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) timestamp2 = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp2, 'X-Object-Meta-1': 'Not One', 'X-Object-Sysmeta-1': 'Not One', 'X-Object-Sysmeta-Two': 'Not Two'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) def check_response(resp): # user meta should be updated but not sysmeta self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_length, 14) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(resp.headers['content-type'], 'text/plain') self.assertEqual( resp.headers['last-modified'], strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp2))))) self.assertEqual(resp.headers['etag'], '"1000d172764c9dbc3a5798a67ec5bb76"') self.assertEqual(resp.headers['x-object-meta-1'], 'Not One') self.assertEqual(resp.headers['x-object-sysmeta-1'], 'One') self.assertEqual(resp.headers['x-object-sysmeta-two'], 'Two') req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) check_response(resp) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) check_response(resp) def test_PUT_with_replication_headers(self): # check that otherwise disallowed headers are accepted when specified # by X-Backend-Replication-Headers # first PUT object timestamp1 = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp1, 'Content-Type': 'text/plain', 'Content-Length': '14', 'Etag': '1000d172764c9dbc3a5798a67ec5bb76', 'Custom-Header': 'custom1', 
'X-Object-Meta-1': 'meta1', 'X-Static-Large-Object': 'False'}) req.body = 'VERIFY SYSMETA' # restrict set of allowed headers on this server with mock.patch.object(self.object_controller, 'allowed_headers', ['Custom-Header']): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(0), 'p', hash_path('a', 'c', 'o')), timestamp1 + '.data') # X-Static-Large-Object is disallowed. self.assertEqual(diskfile.read_metadata(objfile), {'X-Timestamp': timestamp1, 'Content-Type': 'text/plain', 'Content-Length': '14', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'name': '/a/c/o', 'Custom-Header': 'custom1', 'X-Object-Meta-1': 'meta1'}) # PUT object again with X-Backend-Replication-Headers timestamp2 = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp2, 'Content-Type': 'text/plain', 'Content-Length': '14', 'Etag': '1000d172764c9dbc3a5798a67ec5bb76', 'Custom-Header': 'custom1', 'X-Object-Meta-1': 'meta1', 'X-Static-Large-Object': 'False', 'X-Backend-Replication-Headers': 'X-Static-Large-Object'}) req.body = 'VERIFY SYSMETA' with mock.patch.object(self.object_controller, 'allowed_headers', ['Custom-Header']): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(0), 'p', hash_path('a', 'c', 'o')), timestamp2 + '.data') # X-Static-Large-Object should be copied since it is now allowed by # replication headers. self.assertEqual(diskfile.read_metadata(objfile), {'X-Timestamp': timestamp2, 'Content-Type': 'text/plain', 'Content-Length': '14', 'ETag': '1000d172764c9dbc3a5798a67ec5bb76', 'name': '/a/c/o', 'Custom-Header': 'custom1', 'X-Object-Meta-1': 'meta1', 'X-Static-Large-Object': 'False'}) def test_PUT_container_connection(self): def mock_http_connect(response, with_exc=False): class FakeConn(object): def __init__(self, status, with_exc): self.status = status self.reason = 'Fake' self.host = '1.2.3.4' self.port = '1234' self.with_exc = with_exc def getresponse(self): if self.with_exc: raise Exception('test') return self def read(self, amt=None): return '' return lambda *args, **kwargs: FakeConn(response, with_exc) timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'X-Container-Host': '1.2.3.4:0', 'X-Container-Partition': '3', 'X-Container-Device': 'sda1', 'X-Container-Timestamp': '1', 'Content-Type': 'application/new1', 'Content-Length': '0'}) with mock.patch.object( object_server, 'http_connect', mock_http_connect(201)): with fake_spawn(): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'X-Container-Host': '1.2.3.4:0', 'X-Container-Partition': '3', 'X-Container-Device': 'sda1', 'X-Container-Timestamp': '1', 'Content-Type': 'application/new1', 'Content-Length': '0'}) with mock.patch.object( object_server, 'http_connect', mock_http_connect(500)): with fake_spawn(): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'X-Container-Host': 
'1.2.3.4:0', 'X-Container-Partition': '3', 'X-Container-Device': 'sda1', 'X-Container-Timestamp': '1', 'Content-Type': 'application/new1', 'Content-Length': '0'}) with mock.patch.object( object_server, 'http_connect', mock_http_connect(500, with_exc=True)): with fake_spawn(): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) def test_EC_GET_PUT_data(self): for policy in self.ec_policies: raw_data = ('VERIFY' * policy.ec_segment_size)[:-432] frag_archives = encode_frag_archive_bodies(policy, raw_data) frag_index = random.randint(0, len(frag_archives) - 1) # put EC frag archive req = Request.blank('/sda1/p/a/c/o', method='PUT', headers={ 'X-Timestamp': next(self.ts).internal, 'Content-Type': 'application/verify', 'Content-Length': len(frag_archives[frag_index]), 'X-Object-Sysmeta-Ec-Frag-Index': frag_index, 'X-Backend-Storage-Policy-Index': int(policy), }) req.body = frag_archives[frag_index] resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) # get EC frag archive req = Request.blank('/sda1/p/a/c/o', headers={ 'X-Backend-Storage-Policy-Index': int(policy), }) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, frag_archives[frag_index]) def test_EC_GET_quarantine_invalid_frag_archive(self): policy = random.choice(self.ec_policies) raw_data = ('VERIFY' * policy.ec_segment_size)[:-432] frag_archives = encode_frag_archive_bodies(policy, raw_data) frag_index = random.randint(0, len(frag_archives) - 1) content_length = len(frag_archives[frag_index]) # put EC frag archive req = Request.blank('/sda1/p/a/c/o', method='PUT', headers={ 'X-Timestamp': next(self.ts).internal, 'Content-Type': 'application/verify', 'Content-Length': content_length, 'X-Object-Sysmeta-Ec-Frag-Index': frag_index, 'X-Backend-Storage-Policy-Index': int(policy), }) corrupt = 'garbage' + frag_archives[frag_index] req.body = corrupt[:content_length] resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) # get EC frag archive req = Request.blank('/sda1/p/a/c/o', headers={ 'X-Backend-Storage-Policy-Index': int(policy), }) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) with self.assertRaises(DiskFileQuarantined) as ctx: resp.body self.assertIn("Invalid EC metadata", str(ctx.exception)) # nothing is logged on *our* loggers errors = self.object_controller.logger.get_lines_for_level('error') self.assertEqual(errors, []) # get EC frag archive - it's gone req = Request.blank('/sda1/p/a/c/o', headers={ 'X-Backend-Storage-Policy-Index': int(policy), }) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) def test_PUT_ssync_multi_frag(self): timestamp = utils.Timestamp(time()).internal def put_with_index(expected_rsp, frag_index, node_index=None): data_file_tail = '#%d.data' % frag_index headers = {'X-Timestamp': timestamp, 'Content-Length': '6', 'Content-Type': 'application/octet-stream', 'X-Backend-Ssync-Frag-Index': node_index, 'X-Object-Sysmeta-Ec-Frag-Index': frag_index, 'X-Backend-Storage-Policy-Index': int(policy)} req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=headers) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual( resp.status_int, expected_rsp, 'got %s != %s for frag_index=%s node_index=%s' % ( resp.status_int, expected_rsp, frag_index, node_index)) if expected_rsp == 409: return obj_dir = os.path.join( self.testdir, 
'sda1', storage_directory(diskfile.get_data_dir(int(policy)), 'p', hash_path('a', 'c', 'o'))) data_file = os.path.join(obj_dir, timestamp) + data_file_tail self.assertTrue(os.path.isfile(data_file), 'Expected file %r not found in %r for policy %r' % (data_file, os.listdir(obj_dir), int(policy))) for policy in POLICIES: if policy.policy_type == EC_POLICY: # upload with an ec-frag-index put_with_index(201, 3) # same timestamp will conflict with a different ec-frag-index put_with_index(409, 2) # but with the ssync-frag-index (primary node) it will just # save both! put_with_index(201, 2, 2) # but even with the ssync-frag-index we can still get a # timestamp collision if the file already exists put_with_index(409, 3, 3) # FWIW, ssync will never send inconsistent indexes - but if # something else did, from the object server perspective ... # ... the ssync-frag-index is canonical on the # read/pre-existence check put_with_index(409, 7, 2) # ... but the ec-frag-index is canonical when it comes to on # disk file put_with_index(201, 7, 6) def test_PUT_durable_files(self): for policy in POLICIES: timestamp = utils.Timestamp(int(time())).internal data_file_tail = '.data' headers = {'X-Timestamp': timestamp, 'Content-Length': '6', 'Content-Type': 'application/octet-stream', 'X-Backend-Storage-Policy-Index': int(policy)} if policy.policy_type == EC_POLICY: headers['X-Object-Sysmeta-Ec-Frag-Index'] = '2' data_file_tail = '#2.data' req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=headers) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) obj_dir = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(int(policy)), 'p', hash_path('a', 'c', 'o'))) data_file = os.path.join(obj_dir, timestamp) + data_file_tail self.assertTrue(os.path.isfile(data_file), 'Expected file %r not found in %r for policy %r' % (data_file, os.listdir(obj_dir), int(policy))) durable_file = os.path.join(obj_dir, timestamp) + '.durable' if policy.policy_type == EC_POLICY: self.assertTrue(os.path.isfile(durable_file)) self.assertFalse(os.path.getsize(durable_file)) else: self.assertFalse(os.path.isfile(durable_file)) rmtree(obj_dir) def test_HEAD(self): # Test swift.obj.server.ObjectController.HEAD req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) self.assertFalse('X-Backend-Timestamp' in resp.headers) timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test', 'X-Object-Meta-1': 'One', 'X-Object-Meta-Two': 'Two'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.content_length, 6) self.assertEqual(resp.content_type, 'application/x-test') self.assertEqual(resp.headers['content-type'], 'application/x-test') self.assertEqual( resp.headers['last-modified'], strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp))))) self.assertEqual(resp.headers['etag'], '"0b4c12d7e0a73840c1c4f148fda3b037"')
self.assertEqual(resp.headers['x-object-meta-1'], 'One') self.assertEqual(resp.headers['x-object-meta-two'], 'Two') objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.data') os.unlink(objfile) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) sleep(.00001) timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': timestamp, 'Content-Type': 'application/octet-stream', 'Content-length': '6'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) sleep(.00001) timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': timestamp}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Backend-Timestamp'], utils.Timestamp(timestamp).internal) def test_HEAD_quarantine_zbyte(self): # Test swift.obj.server.ObjectController.GET timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) disk_file = self.df_mgr.get_diskfile('sda1', 'p', 'a', 'c', 'o', policy=POLICIES.legacy) disk_file.open() file_name = os.path.basename(disk_file._data_file) with open(disk_file._data_file) as fp: metadata = diskfile.read_metadata(fp) os.unlink(disk_file._data_file) with open(disk_file._data_file, 'w') as fp: diskfile.write_metadata(fp, metadata) file_name = os.path.basename(disk_file._data_file) self.assertEqual(os.listdir(disk_file._datadir)[0], file_name) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) quar_dir = os.path.join( self.testdir, 'sda1', 'quarantined', 'objects', os.path.basename(os.path.dirname(disk_file._data_file))) self.assertEqual(os.listdir(quar_dir)[0], file_name) def test_OPTIONS(self): conf = {'devices': self.testdir, 'mount_check': 'false'} server_handler = object_server.ObjectController( conf, logger=debug_logger()) req = Request.blank('/sda1/p/a/c/o', {'REQUEST_METHOD': 'OPTIONS'}) req.content_length = 0 resp = server_handler.OPTIONS(req) self.assertEqual(200, resp.status_int) for verb in 'OPTIONS GET POST PUT DELETE HEAD REPLICATE \ SSYNC'.split(): self.assertTrue( verb in resp.headers['Allow'].split(', ')) self.assertEqual(len(resp.headers['Allow'].split(', ')), 8) self.assertEqual(resp.headers['Server'], (server_handler.server_type + '/' + swift_version)) def test_GET(self): # Test swift.obj.server.ObjectController.GET req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) self.assertFalse('X-Backend-Timestamp' in resp.headers) timestamp = 
normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test', 'X-Object-Meta-1': 'One', 'X-Object-Meta-Two': 'Two'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, 'VERIFY') self.assertEqual(resp.content_length, 6) self.assertEqual(resp.content_type, 'application/x-test') self.assertEqual(resp.headers['content-length'], '6') self.assertEqual(resp.headers['content-type'], 'application/x-test') self.assertEqual( resp.headers['last-modified'], strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp))))) self.assertEqual(resp.headers['etag'], '"0b4c12d7e0a73840c1c4f148fda3b037"') self.assertEqual(resp.headers['x-object-meta-1'], 'One') self.assertEqual(resp.headers['x-object-meta-two'], 'Two') req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) req.range = 'bytes=1-3' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 206) self.assertEqual(resp.body, 'ERI') self.assertEqual(resp.headers['content-length'], '3') req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) req.range = 'bytes=1-' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 206) self.assertEqual(resp.body, 'ERIFY') self.assertEqual(resp.headers['content-length'], '5') req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) req.range = 'bytes=-2' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 206) self.assertEqual(resp.body, 'FY') self.assertEqual(resp.headers['content-length'], '2') objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.data') os.unlink(objfile) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) sleep(.00001) timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': timestamp, 'Content-Type': 'application:octet-stream', 'Content-Length': '6'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) sleep(.00001) timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': timestamp}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Backend-Timestamp'], utils.Timestamp(timestamp).internal) def test_GET_if_match(self): req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': normalize_timestamp(time()), 'Content-Type': 'application/octet-stream', 'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) etag = resp.etag req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) 
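# (As context for the If-Match cases below, and assuming `etag` holds the
# value returned by the PUT above: If-Match succeeds when '*' or any listed
# entity tag matches the stored object's etag, otherwise a 412 is expected,
# e.g.
#     'If-Match': '"%s"' % etag           -> 200
#     'If-Match': '*'                     -> 200 (the object exists)
#     'If-Match': '"' + 32 * '1' + '"'    -> 412)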
self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Match': '*'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank('/sda1/p/a/c/o2', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Match': '*'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Match': '"%s"' % etag}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Match': '"11111111111111111111111111111111"'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={ 'If-Match': '"11111111111111111111111111111111", "%s"' % etag}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={ 'If-Match': '"11111111111111111111111111111111", ' '"22222222222222222222222222222222"'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) def test_GET_if_match_etag_is_at(self): headers = { 'X-Timestamp': utils.Timestamp(time()).internal, 'Content-Type': 'application/octet-stream', 'X-Object-Meta-Xtag': 'madeup', } req = Request.blank('/sda1/p/a/c/o', method='PUT', headers=headers) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) real_etag = resp.etag # match x-backend-etag-is-at req = Request.blank('/sda1/p/a/c/o', headers={ 'If-Match': 'madeup', 'X-Backend-Etag-Is-At': 'X-Object-Meta-Xtag'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) # no match x-backend-etag-is-at req = Request.blank('/sda1/p/a/c/o', headers={ 'If-Match': real_etag, 'X-Backend-Etag-Is-At': 'X-Object-Meta-Xtag'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) # etag-is-at metadata doesn't exist, default to real etag req = Request.blank('/sda1/p/a/c/o', headers={ 'If-Match': real_etag, 'X-Backend-Etag-Is-At': 'X-Object-Meta-Missing'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) # sanity no-match with no etag-is-at req = Request.blank('/sda1/p/a/c/o', headers={ 'If-Match': 'madeup'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) # sanity match with no etag-is-at req = Request.blank('/sda1/p/a/c/o', headers={ 'If-Match': real_etag}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) # sanity with no if-match req = Request.blank('/sda1/p/a/c/o', headers={ 'X-Backend-Etag-Is-At': 'X-Object-Meta-Xtag'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) def test_HEAD_if_match(self): req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': normalize_timestamp(time()), 'Content-Type': 'application/octet-stream', 'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) etag = resp.etag req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = 
req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Match': '*'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank('/sda1/p/a/c/o2', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Match': '*'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Match': '"%s"' % etag}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Match': '"11111111111111111111111111111111"'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={ 'If-Match': '"11111111111111111111111111111111", "%s"' % etag}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={ 'If-Match': '"11111111111111111111111111111111", ' '"22222222222222222222222222222222"'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) def test_GET_if_none_match(self): req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': normalize_timestamp(time()), 'X-Object-Meta-Soup': 'gazpacho', 'Content-Type': 'application/fizzbuzz', 'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) etag = resp.etag req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-None-Match': '*'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) self.assertEqual(resp.etag, etag) self.assertEqual(resp.headers['Content-Type'], 'application/fizzbuzz') self.assertEqual(resp.headers['X-Object-Meta-Soup'], 'gazpacho') req = Request.blank('/sda1/p/a/c/o2', environ={'REQUEST_METHOD': 'GET'}, headers={'If-None-Match': '*'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-None-Match': '"%s"' % etag}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) self.assertEqual(resp.etag, etag) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-None-Match': '"11111111111111111111111111111111"'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-None-Match': '"11111111111111111111111111111111", ' '"%s"' % etag}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) self.assertEqual(resp.etag, etag) def test_HEAD_if_none_match(self): req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': normalize_timestamp(time()), 'Content-Type': 
'application/octet-stream', 'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) etag = resp.etag req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-None-Match': '*'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) self.assertEqual(resp.etag, etag) req = Request.blank('/sda1/p/a/c/o2', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-None-Match': '*'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-None-Match': '"%s"' % etag}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) self.assertEqual(resp.etag, etag) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-None-Match': '"11111111111111111111111111111111"'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.etag, etag) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-None-Match': '"11111111111111111111111111111111", ' '"%s"' % etag}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) self.assertEqual(resp.etag, etag) def test_GET_if_modified_since(self): timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': timestamp, 'Content-Type': 'application/octet-stream', 'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) since = strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp) + 1)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Modified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) since = \ strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp) - 1)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Modified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) since = \ strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp) + 1)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Modified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) since = resp.headers['Last-Modified'] self.assertEqual(since, strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp))))) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Modified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) timestamp = normalize_timestamp(int(time())) req = Request.blank('/sda1/p/a/c/o2', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': timestamp, 'Content-Type': 'application/octet-stream', 
'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) since = strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp))) req = Request.blank('/sda1/p/a/c/o2', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Modified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) def test_HEAD_if_modified_since(self): timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': timestamp, 'Content-Type': 'application/octet-stream', 'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) since = strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp) + 1)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Modified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) since = \ strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp) - 1)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Modified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) since = \ strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp) + 1)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Modified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) since = resp.headers['Last-Modified'] self.assertEqual(since, strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp))))) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Modified-Since': since}) resp = self.object_controller.GET(req) self.assertEqual(resp.status_int, 304) timestamp = normalize_timestamp(int(time())) req = Request.blank('/sda1/p/a/c/o2', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': timestamp, 'Content-Type': 'application/octet-stream', 'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) since = strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp))) req = Request.blank('/sda1/p/a/c/o2', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Modified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 304) def test_GET_if_unmodified_since(self): timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={ 'X-Timestamp': timestamp, 'X-Object-Meta-Burr': 'ito', 'Content-Type': 'application/cat-picture', 'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) since = strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp) + 1)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Unmodified-Since': since}) resp = 
req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) since = \ strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp) - 9)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Unmodified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) self.assertEqual(resp.headers['Content-Type'], 'application/cat-picture') self.assertEqual(resp.headers['X-Object-Meta-Burr'], 'ito') since = \ strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(float(timestamp) + 9)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Unmodified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) since = resp.headers['Last-Modified'] self.assertEqual(since, strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp))))) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Unmodified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) def test_HEAD_if_unmodified_since(self): timestamp = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/octet-stream', 'Content-Length': '4'}) req.body = 'test' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) since = strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp)) + 1)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Unmodified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) since = strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp)))) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Unmodified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) since = strftime('%a, %d %b %Y %H:%M:%S GMT', gmtime(math.ceil(float(timestamp)) - 1)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'If-Unmodified-Since': since}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) def test_GET_quarantine(self): # Test swift.obj.server.ObjectController.GET timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) disk_file = self.df_mgr.get_diskfile('sda1', 'p', 'a', 'c', 'o', policy=POLICIES.legacy) disk_file.open() file_name = os.path.basename(disk_file._data_file) etag = md5() etag.update('VERIF') etag = etag.hexdigest() metadata = {'X-Timestamp': timestamp, 'name': '/a/c/o', 'Content-Length': 6, 'ETag': etag} diskfile.write_metadata(disk_file._fp, metadata) self.assertEqual(os.listdir(disk_file._datadir)[0], file_name) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) quar_dir = os.path.join( self.testdir, 'sda1', 'quarantined', 'objects', os.path.basename(os.path.dirname(disk_file._data_file))) self.assertEqual(os.listdir(disk_file._datadir)[0], file_name) body = resp.body # actually does 
quarantining self.assertEqual(body, 'VERIFY') self.assertEqual(os.listdir(quar_dir)[0], file_name) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) def test_GET_quarantine_zbyte(self): # Test swift.obj.server.ObjectController.GET timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) disk_file = self.df_mgr.get_diskfile('sda1', 'p', 'a', 'c', 'o', policy=POLICIES.legacy) disk_file.open() file_name = os.path.basename(disk_file._data_file) with open(disk_file._data_file) as fp: metadata = diskfile.read_metadata(fp) os.unlink(disk_file._data_file) with open(disk_file._data_file, 'w') as fp: diskfile.write_metadata(fp, metadata) self.assertEqual(os.listdir(disk_file._datadir)[0], file_name) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) quar_dir = os.path.join( self.testdir, 'sda1', 'quarantined', 'objects', os.path.basename(os.path.dirname(disk_file._data_file))) self.assertEqual(os.listdir(quar_dir)[0], file_name) def test_GET_quarantine_range(self): # Test swift.obj.server.ObjectController.GET timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test'}) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) disk_file = self.df_mgr.get_diskfile('sda1', 'p', 'a', 'c', 'o', policy=POLICIES.legacy) disk_file.open() file_name = os.path.basename(disk_file._data_file) etag = md5() etag.update('VERIF') etag = etag.hexdigest() metadata = {'X-Timestamp': timestamp, 'name': '/a/c/o', 'Content-Length': 6, 'ETag': etag} diskfile.write_metadata(disk_file._fp, metadata) self.assertEqual(os.listdir(disk_file._datadir)[0], file_name) req = Request.blank('/sda1/p/a/c/o') req.range = 'bytes=0-4' # partial resp = req.get_response(self.object_controller) quar_dir = os.path.join( self.testdir, 'sda1', 'quarantined', 'objects', os.path.basename(os.path.dirname(disk_file._data_file))) resp.body self.assertEqual(os.listdir(disk_file._datadir)[0], file_name) self.assertFalse(os.path.isdir(quar_dir)) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) req = Request.blank('/sda1/p/a/c/o') req.range = 'bytes=1-6' # partial resp = req.get_response(self.object_controller) quar_dir = os.path.join( self.testdir, 'sda1', 'quarantined', 'objects', os.path.basename(os.path.dirname(disk_file._data_file))) resp.body self.assertEqual(os.listdir(disk_file._datadir)[0], file_name) self.assertFalse(os.path.isdir(quar_dir)) req = Request.blank('/sda1/p/a/c/o') req.range = 'bytes=0-14' # full resp = req.get_response(self.object_controller) quar_dir = os.path.join( self.testdir, 'sda1', 'quarantined', 'objects', os.path.basename(os.path.dirname(disk_file._data_file))) self.assertEqual(os.listdir(disk_file._datadir)[0], file_name) resp.body self.assertTrue(os.path.isdir(quar_dir)) req = Request.blank('/sda1/p/a/c/o') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) @mock.patch("time.time", mock_time) def test_DELETE(self): # Test swift.obj.server.ObjectController.DELETE 
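        # The assertions below spell out DELETE's on-disk contract: a
        # tombstone is written as <internal timestamp>.ts inside the object's
        # hash directory.  Roughly, for the first timestamped DELETE in this
        # test:
        #
        #     utils.Timestamp(normalize_timestamp(1000)).internal + '.ts'
        #     # -> '0000001000.00000.ts', living under something like
        #     # self.testdir/sda1/objects/p/<suffix>/<hash of 'a/c/o'>/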
        req = Request.blank('/sda1/p/a/c',
                            environ={'REQUEST_METHOD': 'DELETE'})
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 400)

        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'DELETE'})
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 400)

        # The following should have created a tombstone file
        timestamp = normalize_timestamp(1000)
        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers={'X-Timestamp': timestamp})
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 404)
        ts_1000_file = os.path.join(
            self.testdir, 'sda1',
            storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p',
                              hash_path('a', 'c', 'o')),
            utils.Timestamp(timestamp).internal + '.ts')
        self.assertTrue(os.path.isfile(ts_1000_file))
        # There should now be a 1000 ts file.
        self.assertEqual(len(os.listdir(os.path.dirname(ts_1000_file))), 1)

        # The following should *not* have created a tombstone file.
        timestamp = normalize_timestamp(999)
        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers={'X-Timestamp': timestamp})
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 404)
        ts_999_file = os.path.join(
            self.testdir, 'sda1',
            storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p',
                              hash_path('a', 'c', 'o')),
            utils.Timestamp(timestamp).internal + '.ts')
        self.assertFalse(os.path.isfile(ts_999_file))
        self.assertTrue(os.path.isfile(ts_1000_file))
        self.assertEqual(len(os.listdir(os.path.dirname(ts_1000_file))), 1)

        orig_timestamp = utils.Timestamp(1002).internal
        headers = {'X-Timestamp': orig_timestamp,
                   'Content-Type': 'application/octet-stream',
                   'Content-Length': '4'}
        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'PUT'},
                            headers=headers)
        req.body = 'test'
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 201)
        # There should now be only the 1002 data file; the PUT cleans up the
        # older 1000 tombstone.
        data_1002_file = os.path.join(
            self.testdir, 'sda1',
            storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p',
                              hash_path('a', 'c', 'o')),
            orig_timestamp + '.data')
        self.assertTrue(os.path.isfile(data_1002_file))
        self.assertEqual(len(os.listdir(os.path.dirname(data_1002_file))), 1)

        # The following should *not* have created a tombstone file.
        timestamp = normalize_timestamp(1001)
        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers={'X-Timestamp': timestamp})
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 409)
        self.assertEqual(resp.headers['X-Backend-Timestamp'], orig_timestamp)
        ts_1001_file = os.path.join(
            self.testdir, 'sda1',
            storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p',
                              hash_path('a', 'c', 'o')),
            utils.Timestamp(timestamp).internal + '.ts')
        self.assertFalse(os.path.isfile(ts_1001_file))
        self.assertTrue(os.path.isfile(data_1002_file))
        self.assertEqual(len(os.listdir(os.path.dirname(ts_1001_file))), 1)

        timestamp = normalize_timestamp(1003)
        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers={'X-Timestamp': timestamp})
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 204)
        ts_1003_file = os.path.join(
            self.testdir, 'sda1',
            storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p',
                              hash_path('a', 'c', 'o')),
            utils.Timestamp(timestamp).internal + '.ts')
        self.assertTrue(os.path.isfile(ts_1003_file))
        self.assertEqual(len(os.listdir(os.path.dirname(ts_1003_file))), 1)

    def test_DELETE_succeeds_with_later_POST(self):
        ts_iter = make_timestamp_iter()
        t_put = next(ts_iter).internal
        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': t_put,
                                     'Content-Length': 0,
                                     'Content-Type': 'plain/text'})
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 201)

        t_delete = next(ts_iter).internal
        t_post = next(ts_iter).internal
        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'POST'},
                            headers={'X-Timestamp': t_post})
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 202)

        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers={'X-Timestamp': t_delete})
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 204)

        obj_dir = os.path.join(
            self.testdir, 'sda1',
            storage_directory(diskfile.get_data_dir(0), 'p',
                              hash_path('a', 'c', 'o')))
        ts_file = os.path.join(obj_dir, t_delete + '.ts')
        self.assertTrue(os.path.isfile(ts_file))
        meta_file = os.path.join(obj_dir, t_post + '.meta')
        self.assertTrue(os.path.isfile(meta_file))

    def test_DELETE_container_updates(self):
        # Test swift.obj.server.ObjectController.DELETE and container
        # updates, making sure container update is called in the correct
        # state.
        start = time()
        orig_timestamp = utils.Timestamp(start)
        headers = {'X-Timestamp': orig_timestamp.internal,
                   'Content-Type': 'application/octet-stream',
                   'Content-Length': '4'}
        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'PUT'},
                            headers=headers)
        req.body = 'test'
        resp = req.get_response(self.object_controller)
        self.assertEqual(resp.status_int, 201)

        calls_made = [0]

        def our_container_update(*args, **kwargs):
            calls_made[0] += 1

        orig_cu = self.object_controller.container_update
        self.object_controller.container_update = our_container_update
        try:
            # The following request should return 409 (HTTP Conflict). A
            # tombstone file should not have been created with this
            # timestamp.
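            # (The object server only honours a DELETE whose X-Timestamp is
            # newer than the newest file it already has on disk; otherwise it
            # answers 409 and echoes the winning timestamp back.  A rough
            # sketch of that check, with hypothetical names:
            #
            #     if req_timestamp <= orig_timestamp:
            #         return HTTPConflict(
            #             headers={'X-Backend-Timestamp': orig_timestamp})
            # )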
timestamp = utils.Timestamp(start - 0.00001) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': timestamp.internal}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 409) self.assertEqual(resp.headers['x-backend-timestamp'], orig_timestamp.internal) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.ts') self.assertFalse(os.path.isfile(objfile)) self.assertEqual(len(os.listdir(os.path.dirname(objfile))), 1) self.assertEqual(0, calls_made[0]) # The following request should return 204, and the object should # be truly deleted (container update is performed) because this # timestamp is newer. A tombstone file should have been created # with this timestamp. timestamp = utils.Timestamp(start + 0.00001) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': timestamp.internal}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 204) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.ts') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(1, calls_made[0]) self.assertEqual(len(os.listdir(os.path.dirname(objfile))), 1) # The following request should return a 404, as the object should # already have been deleted, but it should have also performed a # container update because the timestamp is newer, and a tombstone # file should also exist with this timestamp. timestamp = utils.Timestamp(start + 0.00002) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': timestamp.internal}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.ts') self.assertTrue(os.path.isfile(objfile)) self.assertEqual(2, calls_made[0]) self.assertEqual(len(os.listdir(os.path.dirname(objfile))), 1) # The following request should return a 404, as the object should # already have been deleted, and it should not have performed a # container update because the timestamp is older, or created a # tombstone file with this timestamp. 
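            # Taken together, the DELETEs in this block walk through the
            # following matrix (offsets relative to the PUT at `start`):
            #
            #     X-Timestamp       status  tombstone written  cont. update
            #     start - 0.00001   409     no                 no
            #     start + 0.00001   204     yes                yes
            #     start + 0.00002   404     yes                yes
            #     start + 0.00001   404     no (stale repeat)  no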
timestamp = utils.Timestamp(start + 0.00001) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': timestamp.internal}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(timestamp).internal + '.ts') self.assertFalse(os.path.isfile(objfile)) self.assertEqual(2, calls_made[0]) self.assertEqual(len(os.listdir(os.path.dirname(objfile))), 1) finally: self.object_controller.container_update = orig_cu def test_DELETE_full_drive(self): def mock_diskfile_delete(self, timestamp): raise DiskFileNoSpace() t_put = utils.Timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': t_put.internal, 'Content-Length': 0, 'Content-Type': 'plain/text'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) with mock.patch('swift.obj.diskfile.BaseDiskFile.delete', mock_diskfile_delete): t_delete = utils.Timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': t_delete.internal}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 507) def test_object_update_with_offset(self): ts = (utils.Timestamp(t).internal for t in itertools.count(int(time()))) container_updates = [] def capture_updates(ip, port, method, path, headers, *args, **kwargs): container_updates.append((ip, port, method, path, headers)) # create a new object create_timestamp = next(ts) req = Request.blank('/sda1/p/a/c/o', method='PUT', body='test1', headers={'X-Timestamp': create_timestamp, 'X-Container-Host': '10.0.0.1:8080', 'X-Container-Device': 'sda1', 'X-Container-Partition': 'p', 'Content-Type': 'text/plain'}) with mocked_http_conn(200, give_connect=capture_updates) as fake_conn: with fake_spawn(): resp = req.get_response(self.object_controller) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 201) self.assertEqual(1, len(container_updates)) for update in container_updates: ip, port, method, path, headers = update self.assertEqual(ip, '10.0.0.1') self.assertEqual(port, '8080') self.assertEqual(method, 'PUT') self.assertEqual(path, '/sda1/p/a/c/o') expected = { 'X-Size': len('test1'), 'X-Etag': md5('test1').hexdigest(), 'X-Content-Type': 'text/plain', 'X-Timestamp': create_timestamp, } for key, value in expected.items(): self.assertEqual(headers[key], str(value)) container_updates = [] # reset # read back object req = Request.blank('/sda1/p/a/c/o', method='GET') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['X-Timestamp'], utils.Timestamp(create_timestamp).normal) self.assertEqual(resp.headers['X-Backend-Timestamp'], create_timestamp) self.assertEqual(resp.body, 'test1') # send an update with an offset offset_timestamp = utils.Timestamp( create_timestamp, offset=1).internal req = Request.blank('/sda1/p/a/c/o', method='PUT', body='test2', headers={'X-Timestamp': offset_timestamp, 'X-Container-Host': '10.0.0.1:8080', 'X-Container-Device': 'sda1', 'X-Container-Partition': 'p', 'Content-Type': 'text/html'}) with mocked_http_conn(200, give_connect=capture_updates) as fake_conn: with fake_spawn(): resp = req.get_response(self.object_controller) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 
201) self.assertEqual(1, len(container_updates)) for update in container_updates: ip, port, method, path, headers = update self.assertEqual(ip, '10.0.0.1') self.assertEqual(port, '8080') self.assertEqual(method, 'PUT') self.assertEqual(path, '/sda1/p/a/c/o') expected = { 'X-Size': len('test2'), 'X-Etag': md5('test2').hexdigest(), 'X-Content-Type': 'text/html', 'X-Timestamp': offset_timestamp, } for key, value in expected.items(): self.assertEqual(headers[key], str(value)) container_updates = [] # reset # read back new offset req = Request.blank('/sda1/p/a/c/o', method='GET') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['X-Timestamp'], utils.Timestamp(offset_timestamp).normal) self.assertEqual(resp.headers['X-Backend-Timestamp'], offset_timestamp) self.assertEqual(resp.body, 'test2') # now overwrite with a newer time overwrite_timestamp = next(ts) req = Request.blank('/sda1/p/a/c/o', method='PUT', body='test3', headers={'X-Timestamp': overwrite_timestamp, 'X-Container-Host': '10.0.0.1:8080', 'X-Container-Device': 'sda1', 'X-Container-Partition': 'p', 'Content-Type': 'text/enriched'}) with mocked_http_conn(200, give_connect=capture_updates) as fake_conn: with fake_spawn(): resp = req.get_response(self.object_controller) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 201) self.assertEqual(1, len(container_updates)) for update in container_updates: ip, port, method, path, headers = update self.assertEqual(ip, '10.0.0.1') self.assertEqual(port, '8080') self.assertEqual(method, 'PUT') self.assertEqual(path, '/sda1/p/a/c/o') expected = { 'X-Size': len('test3'), 'X-Etag': md5('test3').hexdigest(), 'X-Content-Type': 'text/enriched', 'X-Timestamp': overwrite_timestamp, } for key, value in expected.items(): self.assertEqual(headers[key], str(value)) container_updates = [] # reset # read back overwrite req = Request.blank('/sda1/p/a/c/o', method='GET') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['X-Timestamp'], utils.Timestamp(overwrite_timestamp).normal) self.assertEqual(resp.headers['X-Backend-Timestamp'], overwrite_timestamp) self.assertEqual(resp.body, 'test3') # delete with an offset offset_delete = utils.Timestamp(overwrite_timestamp, offset=1).internal req = Request.blank('/sda1/p/a/c/o', method='DELETE', headers={'X-Timestamp': offset_delete, 'X-Container-Host': '10.0.0.1:8080', 'X-Container-Device': 'sda1', 'X-Container-Partition': 'p'}) with mocked_http_conn(200, give_connect=capture_updates) as fake_conn: with fake_spawn(): resp = req.get_response(self.object_controller) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 204) self.assertEqual(1, len(container_updates)) for update in container_updates: ip, port, method, path, headers = update self.assertEqual(ip, '10.0.0.1') self.assertEqual(port, '8080') self.assertEqual(method, 'DELETE') self.assertEqual(path, '/sda1/p/a/c/o') expected = { 'X-Timestamp': offset_delete, } for key, value in expected.items(): self.assertEqual(headers[key], str(value)) container_updates = [] # reset # read back offset delete req = Request.blank('/sda1/p/a/c/o', method='GET') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Timestamp'], None) self.assertEqual(resp.headers['X-Backend-Timestamp'], offset_delete) # and one more delete with a newer timestamp 
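        # (A note on the offset form used throughout this test: an offset
        # timestamp keeps the same .normal value but grows a suffix in its
        # .internal form; for an arbitrary example time, roughly:
        #
        #     utils.Timestamp('1402437380.58186', offset=1).internal
        #     # -> '1402437380.58186_0000000000000001'
        #
        # which is why the earlier readbacks see the plain time in
        # X-Timestamp but the full offset form in X-Backend-Timestamp.)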
delete_timestamp = next(ts) req = Request.blank('/sda1/p/a/c/o', method='DELETE', headers={'X-Timestamp': delete_timestamp, 'X-Container-Host': '10.0.0.1:8080', 'X-Container-Device': 'sda1', 'X-Container-Partition': 'p'}) with mocked_http_conn(200, give_connect=capture_updates) as fake_conn: with fake_spawn(): resp = req.get_response(self.object_controller) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 404) self.assertEqual(1, len(container_updates)) for update in container_updates: ip, port, method, path, headers = update self.assertEqual(ip, '10.0.0.1') self.assertEqual(port, '8080') self.assertEqual(method, 'DELETE') self.assertEqual(path, '/sda1/p/a/c/o') expected = { 'X-Timestamp': delete_timestamp, } for key, value in expected.items(): self.assertEqual(headers[key], str(value)) container_updates = [] # reset # read back delete req = Request.blank('/sda1/p/a/c/o', method='GET') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Timestamp'], None) self.assertEqual(resp.headers['X-Backend-Timestamp'], delete_timestamp) def test_call_bad_request(self): # Test swift.obj.server.ObjectController.__call__ inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() def start_response(*args): """Sends args to outbuf""" outbuf.writelines(args) self.object_controller.__call__({'REQUEST_METHOD': 'PUT', 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c/o', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '400 ') def test_call_not_found(self): inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() def start_response(*args): """Sends args to outbuf""" outbuf.writelines(args) self.object_controller.__call__({'REQUEST_METHOD': 'GET', 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c/o', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '404 ') def test_call_bad_method(self): inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() def start_response(*args): """Sends args to outbuf""" outbuf.writelines(args) self.object_controller.__call__({'REQUEST_METHOD': 'INVALID', 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c/o', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '405 ') def test_call_name_collision(self): def my_check(*args): return False def my_hash_path(*args): return md5('collide').hexdigest() with mock.patch("swift.obj.diskfile.hash_path", my_hash_path): with mock.patch("swift.obj.server.check_object_creation", my_check): inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() def start_response(*args): 
"""Sends args to outbuf""" outbuf.writelines(args) self.object_controller.__call__({ 'REQUEST_METHOD': 'PUT', 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c/o', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'CONTENT_TYPE': 'text/html', 'HTTP_X_TIMESTAMP': normalize_timestamp(1.2), 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '201 ') inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() def start_response(*args): """Sends args to outbuf""" outbuf.writelines(args) self.object_controller.__call__({ 'REQUEST_METHOD': 'PUT', 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/b/d/x', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'CONTENT_TYPE': 'text/html', 'HTTP_X_TIMESTAMP': normalize_timestamp(1.3), 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '403 ') def test_invalid_method_doesnt_exist(self): errbuf = StringIO() outbuf = StringIO() def start_response(*args): outbuf.writelines(args) self.object_controller.__call__({ 'REQUEST_METHOD': 'method_doesnt_exist', 'PATH_INFO': '/sda1/p/a/c/o'}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '405 ') def test_invalid_method_is_not_public(self): errbuf = StringIO() outbuf = StringIO() def start_response(*args): outbuf.writelines(args) self.object_controller.__call__({'REQUEST_METHOD': '__init__', 'PATH_INFO': '/sda1/p/a/c/o'}, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '405 ') def test_chunked_put(self): listener = listen(('localhost', 0)) port = listener.getsockname()[1] killer = spawn(wsgi.server, listener, self.object_controller, NullLogger()) sock = connect_tcp(('localhost', port)) fd = sock.makefile() fd.write('PUT /sda1/p/a/c/o HTTP/1.1\r\nHost: localhost\r\n' 'Content-Type: text/plain\r\n' 'Connection: close\r\nX-Timestamp: %s\r\n' 'Transfer-Encoding: chunked\r\n\r\n' '2\r\noh\r\n4\r\n hai\r\n0\r\n\r\n' % normalize_timestamp( 1.0)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = connect_tcp(('localhost', port)) fd = sock.makefile() fd.write('GET /sda1/p/a/c/o HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) response = fd.read() self.assertEqual(response, 'oh hai') killer.kill() def test_chunked_content_length_mismatch_zero(self): listener = listen(('localhost', 0)) port = listener.getsockname()[1] killer = spawn(wsgi.server, listener, self.object_controller, NullLogger()) sock = connect_tcp(('localhost', port)) fd = sock.makefile() fd.write('PUT /sda1/p/a/c/o HTTP/1.1\r\nHost: localhost\r\n' 'Content-Type: text/plain\r\n' 'Connection: close\r\nX-Timestamp: %s\r\n' 'Content-Length: 0\r\n' 'Transfer-Encoding: chunked\r\n\r\n' '2\r\noh\r\n4\r\n hai\r\n0\r\n\r\n' % normalize_timestamp( 1.0)) fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 201' self.assertEqual(headers[:len(exp)], exp) sock = 
connect_tcp(('localhost', port)) fd = sock.makefile() fd.write('GET /sda1/p/a/c/o HTTP/1.1\r\nHost: localhost\r\n' 'Connection: close\r\n\r\n') fd.flush() headers = readuntil2crlfs(fd) exp = 'HTTP/1.1 200' self.assertEqual(headers[:len(exp)], exp) response = fd.read() self.assertEqual(response, 'oh hai') killer.kill() def test_max_object_name_length(self): timestamp = normalize_timestamp(time()) max_name_len = constraints.MAX_OBJECT_NAME_LENGTH req = Request.blank( '/sda1/p/a/c/' + ('1' * max_name_len), environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'DATA' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/' + ('2' * (max_name_len + 1)), environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'DATA' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_max_upload_time(self): class SlowBody(object): def __init__(self): self.sent = 0 def read(self, size=-1): if self.sent < 4: sleep(0.1) self.sent += 1 return ' ' return '' def set_hundred_continue_response_headers(*a, **kw): pass req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': SlowBody()}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '4', 'Content-Type': 'text/plain'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) self.object_controller.max_upload_time = 0.1 req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': SlowBody()}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '4', 'Content-Type': 'text/plain'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 408) def test_short_body(self): class ShortBody(object): def __init__(self): self.sent = False def read(self, size=-1): if not self.sent: self.sent = True return ' ' return '' def set_hundred_continue_response_headers(*a, **kw): pass req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': ShortBody()}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '4', 'Content-Type': 'text/plain'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 499) def test_bad_sinces(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '4', 'Content-Type': 'text/plain'}, body=' ') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Unmodified-Since': 'Not a valid date'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Modified-Since': 'Not a valid date'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) too_big_date_list = list(datetime.datetime.max.timetuple()) too_big_date_list[0] += 1 # bump up the year too_big_date = strftime( "%a, %d %b %Y %H:%M:%S UTC", struct_time(too_big_date_list)) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'If-Unmodified-Since': too_big_date}) resp = req.get_response(self.object_controller) 
self.assertEqual(resp.status_int, 200) def test_content_encoding(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '4', 'Content-Type': 'text/plain', 'Content-Encoding': 'gzip'}, body=' ') resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['content-encoding'], 'gzip') req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['content-encoding'], 'gzip') def test_async_update_http_connect(self): policy = random.choice(list(POLICIES)) self._stage_tmp_dir(policy) given_args = [] def fake_http_connect(*args): given_args.extend(args) raise Exception('test') orig_http_connect = object_server.http_connect try: object_server.http_connect = fake_http_connect self.object_controller.async_update( 'PUT', 'a', 'c', 'o', '127.0.0.1:1234', 1, 'sdc1', {'x-timestamp': '1', 'x-out': 'set', 'X-Backend-Storage-Policy-Index': int(policy)}, 'sda1', policy) finally: object_server.http_connect = orig_http_connect self.assertEqual( given_args, ['127.0.0.1', '1234', 'sdc1', 1, 'PUT', '/a/c/o', { 'x-timestamp': '1', 'x-out': 'set', 'user-agent': 'object-server %s' % os.getpid(), 'X-Backend-Storage-Policy-Index': int(policy)}]) @patch_policies([StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one'), StoragePolicy(37, 'fantastico')]) def test_updating_multiple_delete_at_container_servers(self): # update router post patch self.object_controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.object_controller.logger) policy = random.choice(list(POLICIES)) self.object_controller.expiring_objects_account = 'exp' self.object_controller.expiring_objects_container_divisor = 60 http_connect_args = [] def fake_http_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None, ssl=False): class SuccessfulFakeConn(object): @property def status(self): return 200 def getresponse(self): return self def read(self): return '' captured_args = {'ipaddr': ipaddr, 'port': port, 'device': device, 'partition': partition, 'method': method, 'path': path, 'ssl': ssl, 'headers': headers, 'query_string': query_string} http_connect_args.append( dict((k, v) for k, v in captured_args.items() if v is not None)) return SuccessfulFakeConn() req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '12345', 'Content-Type': 'application/burrito', 'Content-Length': '0', 'X-Backend-Storage-Policy-Index': int(policy), 'X-Container-Partition': '20', 'X-Container-Host': '1.2.3.4:5', 'X-Container-Device': 'sdb1', 'X-Delete-At': 9999999999, 'X-Delete-At-Container': '9999999960', 'X-Delete-At-Host': "10.1.1.1:6001,10.2.2.2:6002", 'X-Delete-At-Partition': '6237', 'X-Delete-At-Device': 'sdp,sdq'}) with mock.patch.object( object_server, 'http_connect', fake_http_connect): with fake_spawn(): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) http_connect_args.sort(key=operator.itemgetter('ipaddr')) self.assertEqual(len(http_connect_args), 3) self.assertEqual( http_connect_args[0], {'ipaddr': '1.2.3.4', 'port': '5', 'path': '/a/c/o', 'device': 'sdb1', 'partition': '20', 'method': 'PUT', 'ssl': False, 
'headers': HeaderKeyDict({ 'x-content-type': 'application/burrito', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-size': '0', 'x-timestamp': utils.Timestamp('12345').internal, 'referer': 'PUT http://localhost/sda1/p/a/c/o', 'user-agent': 'object-server %d' % os.getpid(), 'X-Backend-Storage-Policy-Index': int(policy), 'x-trans-id': '-'})}) self.assertEqual( http_connect_args[1], {'ipaddr': '10.1.1.1', 'port': '6001', 'path': '/exp/9999999960/9999999999-a/c/o', 'device': 'sdp', 'partition': '6237', 'method': 'PUT', 'ssl': False, 'headers': HeaderKeyDict({ 'x-content-type': 'text/plain', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-size': '0', 'x-timestamp': utils.Timestamp('12345').internal, 'referer': 'PUT http://localhost/sda1/p/a/c/o', 'user-agent': 'object-server %d' % os.getpid(), # system account storage policy is 0 'X-Backend-Storage-Policy-Index': 0, 'x-trans-id': '-'})}) self.assertEqual( http_connect_args[2], {'ipaddr': '10.2.2.2', 'port': '6002', 'path': '/exp/9999999960/9999999999-a/c/o', 'device': 'sdq', 'partition': '6237', 'method': 'PUT', 'ssl': False, 'headers': HeaderKeyDict({ 'x-content-type': 'text/plain', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-size': '0', 'x-timestamp': utils.Timestamp('12345').internal, 'referer': 'PUT http://localhost/sda1/p/a/c/o', 'user-agent': 'object-server %d' % os.getpid(), # system account storage policy is 0 'X-Backend-Storage-Policy-Index': 0, 'x-trans-id': '-'})}) @patch_policies([StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one'), StoragePolicy(26, 'twice-thirteen')]) def test_updating_multiple_container_servers(self): # update router post patch self.object_controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.object_controller.logger) http_connect_args = [] def fake_http_connect(ipaddr, port, device, partition, method, path, headers=None, query_string=None, ssl=False): class SuccessfulFakeConn(object): @property def status(self): return 200 def getresponse(self): return self def read(self): return '' captured_args = {'ipaddr': ipaddr, 'port': port, 'device': device, 'partition': partition, 'method': method, 'path': path, 'ssl': ssl, 'headers': headers, 'query_string': query_string} http_connect_args.append( dict((k, v) for k, v in captured_args.items() if v is not None)) return SuccessfulFakeConn() req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '12345', 'Content-Type': 'application/burrito', 'Content-Length': '0', 'X-Backend-Storage-Policy-Index': '26', 'X-Container-Partition': '20', 'X-Container-Host': '1.2.3.4:5, 6.7.8.9:10', 'X-Container-Device': 'sdb1, sdf1'}) with mock.patch.object( object_server, 'http_connect', fake_http_connect): with fake_spawn(): req.get_response(self.object_controller) http_connect_args.sort(key=operator.itemgetter('ipaddr')) self.assertEqual(len(http_connect_args), 2) self.assertEqual( http_connect_args[0], {'ipaddr': '1.2.3.4', 'port': '5', 'path': '/a/c/o', 'device': 'sdb1', 'partition': '20', 'method': 'PUT', 'ssl': False, 'headers': HeaderKeyDict({ 'x-content-type': 'application/burrito', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-size': '0', 'x-timestamp': utils.Timestamp('12345').internal, 'X-Backend-Storage-Policy-Index': '26', 'referer': 'PUT http://localhost/sda1/p/a/c/o', 'user-agent': 'object-server %d' % os.getpid(), 'x-trans-id': '-'})}) self.assertEqual( http_connect_args[1], {'ipaddr': '6.7.8.9', 'port': '10', 'path': '/a/c/o', 'device': 'sdf1', 'partition': '20', 'method': 'PUT', 'ssl': False, 
'headers': HeaderKeyDict({ 'x-content-type': 'application/burrito', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-size': '0', 'x-timestamp': utils.Timestamp('12345').internal, 'X-Backend-Storage-Policy-Index': '26', 'referer': 'PUT http://localhost/sda1/p/a/c/o', 'user-agent': 'object-server %d' % os.getpid(), 'x-trans-id': '-'})}) def test_object_delete_at_async_update(self): policy = random.choice(list(POLICIES)) ts = (utils.Timestamp(t) for t in itertools.count(int(time()))) container_updates = [] def capture_updates(ip, port, method, path, headers, *args, **kwargs): container_updates.append((ip, port, method, path, headers)) put_timestamp = next(ts).internal delete_at_timestamp = utils.normalize_delete_at_timestamp( next(ts).normal) delete_at_container = ( int(delete_at_timestamp) / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) headers = { 'Content-Type': 'text/plain', 'X-Timestamp': put_timestamp, 'X-Container-Host': '10.0.0.1:6001', 'X-Container-Device': 'sda1', 'X-Container-Partition': 'p', 'X-Delete-At': delete_at_timestamp, 'X-Delete-At-Container': delete_at_container, 'X-Delete-At-Partition': 'p', 'X-Delete-At-Host': '10.0.0.2:6002', 'X-Delete-At-Device': 'sda1', 'X-Backend-Storage-Policy-Index': int(policy)} if policy.policy_type == EC_POLICY: headers['X-Object-Sysmeta-Ec-Frag-Index'] = '2' req = Request.blank( '/sda1/p/a/c/o', method='PUT', body='', headers=headers) with mocked_http_conn( 500, 500, give_connect=capture_updates) as fake_conn: with fake_spawn(): resp = req.get_response(self.object_controller) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 201) self.assertEqual(2, len(container_updates)) delete_at_update, container_update = container_updates # delete_at_update ip, port, method, path, headers = delete_at_update self.assertEqual(ip, '10.0.0.2') self.assertEqual(port, '6002') self.assertEqual(method, 'PUT') self.assertEqual(path, '/sda1/p/.expiring_objects/%s/%s-a/c/o' % (delete_at_container, delete_at_timestamp)) expected = { 'X-Timestamp': put_timestamp, # system account storage policy is 0 'X-Backend-Storage-Policy-Index': 0, } for key, value in expected.items(): self.assertEqual(headers[key], str(value)) # container_update ip, port, method, path, headers = container_update self.assertEqual(ip, '10.0.0.1') self.assertEqual(port, '6001') self.assertEqual(method, 'PUT') self.assertEqual(path, '/sda1/p/a/c/o') expected = { 'X-Timestamp': put_timestamp, 'X-Backend-Storage-Policy-Index': int(policy), } for key, value in expected.items(): self.assertEqual(headers[key], str(value)) # check async pendings async_dir = os.path.join(self.testdir, 'sda1', diskfile.get_async_dir(policy)) found_files = [] for root, dirs, files in os.walk(async_dir): for f in files: async_file = os.path.join(root, f) found_files.append(async_file) data = pickle.load(open(async_file)) if data['account'] == 'a': self.assertEqual( int(data['headers'] ['X-Backend-Storage-Policy-Index']), int(policy)) elif data['account'] == '.expiring_objects': self.assertEqual( int(data['headers'] ['X-Backend-Storage-Policy-Index']), 0) else: self.fail('unexpected async pending data') self.assertEqual(2, len(found_files)) def test_async_update_saves_on_exception(self): policy = random.choice(list(POLICIES)) self._stage_tmp_dir(policy) _prefix = utils.HASH_PATH_PREFIX utils.HASH_PATH_PREFIX = '' def fake_http_connect(*args): raise Exception('test') orig_http_connect = object_server.http_connect 
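        # If the container server cannot be reached, async_update is expected
        # to fall back to pickling the update under the policy's async dir,
        # named after the hash of the object path plus the request timestamp.
        # Roughly (the literal values reappear in the assertion below):
        #
        #     self.testdir/sda1/async_pending[-<policy index>]/a83/
        #         06fbf0b514e5199dfc4e00f42eb5ea83-0000000001.00000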
try: object_server.http_connect = fake_http_connect self.object_controller.async_update( 'PUT', 'a', 'c', 'o', '127.0.0.1:1234', 1, 'sdc1', {'x-timestamp': '1', 'x-out': 'set', 'X-Backend-Storage-Policy-Index': int(policy)}, 'sda1', policy) finally: object_server.http_connect = orig_http_connect utils.HASH_PATH_PREFIX = _prefix async_dir = diskfile.get_async_dir(policy) self.assertEqual( pickle.load(open(os.path.join( self.testdir, 'sda1', async_dir, 'a83', '06fbf0b514e5199dfc4e00f42eb5ea83-%s' % utils.Timestamp(1).internal))), {'headers': {'x-timestamp': '1', 'x-out': 'set', 'user-agent': 'object-server %s' % os.getpid(), 'X-Backend-Storage-Policy-Index': int(policy)}, 'account': 'a', 'container': 'c', 'obj': 'o', 'op': 'PUT'}) def test_async_update_saves_on_non_2xx(self): policy = random.choice(list(POLICIES)) self._stage_tmp_dir(policy) _prefix = utils.HASH_PATH_PREFIX utils.HASH_PATH_PREFIX = '' def fake_http_connect(status): class FakeConn(object): def __init__(self, status): self.status = status def getresponse(self): return self def read(self): return '' return lambda *args: FakeConn(status) orig_http_connect = object_server.http_connect try: for status in (199, 300, 503): object_server.http_connect = fake_http_connect(status) self.object_controller.async_update( 'PUT', 'a', 'c', 'o', '127.0.0.1:1234', 1, 'sdc1', {'x-timestamp': '1', 'x-out': str(status), 'X-Backend-Storage-Policy-Index': int(policy)}, 'sda1', policy) async_dir = diskfile.get_async_dir(policy) self.assertEqual( pickle.load(open(os.path.join( self.testdir, 'sda1', async_dir, 'a83', '06fbf0b514e5199dfc4e00f42eb5ea83-%s' % utils.Timestamp(1).internal))), {'headers': {'x-timestamp': '1', 'x-out': str(status), 'user-agent': 'object-server %s' % os.getpid(), 'X-Backend-Storage-Policy-Index': int(policy)}, 'account': 'a', 'container': 'c', 'obj': 'o', 'op': 'PUT'}) finally: object_server.http_connect = orig_http_connect utils.HASH_PATH_PREFIX = _prefix def test_async_update_does_not_save_on_2xx(self): _prefix = utils.HASH_PATH_PREFIX utils.HASH_PATH_PREFIX = '' def fake_http_connect(status): class FakeConn(object): def __init__(self, status): self.status = status def getresponse(self): return self def read(self): return '' return lambda *args: FakeConn(status) orig_http_connect = object_server.http_connect try: for status in (200, 299): object_server.http_connect = fake_http_connect(status) self.object_controller.async_update( 'PUT', 'a', 'c', 'o', '127.0.0.1:1234', 1, 'sdc1', {'x-timestamp': '1', 'x-out': str(status)}, 'sda1', 0) self.assertFalse( os.path.exists(os.path.join( self.testdir, 'sda1', 'async_pending', 'a83', '06fbf0b514e5199dfc4e00f42eb5ea83-0000000001.00000'))) finally: object_server.http_connect = orig_http_connect utils.HASH_PATH_PREFIX = _prefix def test_async_update_saves_on_timeout(self): policy = random.choice(list(POLICIES)) self._stage_tmp_dir(policy) _prefix = utils.HASH_PATH_PREFIX utils.HASH_PATH_PREFIX = '' def fake_http_connect(): class FakeConn(object): def getresponse(self): return sleep(1) return lambda *args: FakeConn() orig_http_connect = object_server.http_connect try: for status in (200, 299): object_server.http_connect = fake_http_connect() self.object_controller.node_timeout = 0.001 self.object_controller.async_update( 'PUT', 'a', 'c', 'o', '127.0.0.1:1234', 1, 'sdc1', {'x-timestamp': '1', 'x-out': str(status)}, 'sda1', policy) async_dir = diskfile.get_async_dir(policy) self.assertTrue( os.path.exists(os.path.join( self.testdir, 'sda1', async_dir, 'a83', 
'06fbf0b514e5199dfc4e00f42eb5ea83-%s' % utils.Timestamp(1).internal))) finally: object_server.http_connect = orig_http_connect utils.HASH_PATH_PREFIX = _prefix def test_container_update_no_async_update(self): policy = random.choice(list(POLICIES)) given_args = [] def fake_async_update(*args): given_args.extend(args) self.object_controller.async_update = fake_async_update req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '1234', 'X-Backend-Storage-Policy-Index': int(policy)}) self.object_controller.container_update( 'PUT', 'a', 'c', 'o', req, { 'x-size': '0', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-content-type': 'text/plain', 'x-timestamp': '1'}, 'sda1', policy) self.assertEqual(given_args, []) def test_container_update_success(self): container_updates = [] def capture_updates(ip, port, method, path, headers, *args, **kwargs): container_updates.append((ip, port, method, path, headers)) req = Request.blank( '/sda1/0/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '123', 'X-Container-Host': 'chost:cport', 'X-Container-Partition': 'cpartition', 'X-Container-Device': 'cdevice', 'Content-Type': 'text/plain'}, body='') with mocked_http_conn(200, give_connect=capture_updates) as fake_conn: with fake_spawn(): resp = req.get_response(self.object_controller) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 201) self.assertEqual(len(container_updates), 1) ip, port, method, path, headers = container_updates[0] self.assertEqual(ip, 'chost') self.assertEqual(port, 'cport') self.assertEqual(method, 'PUT') self.assertEqual(path, '/cdevice/cpartition/a/c/o') self.assertEqual(headers, HeaderKeyDict({ 'user-agent': 'object-server %s' % os.getpid(), 'x-size': '0', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-content-type': 'text/plain', 'x-timestamp': utils.Timestamp(1).internal, 'X-Backend-Storage-Policy-Index': '0', # default when not given 'x-trans-id': '123', 'referer': 'PUT http://localhost/sda1/0/a/c/o'})) def test_container_update_overrides(self): container_updates = [] def capture_updates(ip, port, method, path, headers, *args, **kwargs): container_updates.append((ip, port, method, path, headers)) headers = { 'X-Timestamp': 1, 'X-Trans-Id': '123', 'X-Container-Host': 'chost:cport', 'X-Container-Partition': 'cpartition', 'X-Container-Device': 'cdevice', 'Content-Type': 'text/plain', 'X-Backend-Container-Update-Override-Etag': 'override_etag', 'X-Backend-Container-Update-Override-Content-Type': 'override_val', 'X-Backend-Container-Update-Override-Foo': 'bar', 'X-Backend-Container-Ignored': 'ignored' } req = Request.blank('/sda1/0/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers=headers, body='') with mocked_http_conn(200, give_connect=capture_updates) as fake_conn: with fake_spawn(): resp = req.get_response(self.object_controller) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 201) self.assertEqual(len(container_updates), 1) ip, port, method, path, headers = container_updates[0] self.assertEqual(ip, 'chost') self.assertEqual(port, 'cport') self.assertEqual(method, 'PUT') self.assertEqual(path, '/cdevice/cpartition/a/c/o') self.assertEqual(headers, HeaderKeyDict({ 'user-agent': 'object-server %s' % os.getpid(), 'x-size': '0', 'x-etag': 'override_etag', 'x-content-type': 'override_val', 'x-timestamp': utils.Timestamp(1).internal, 'X-Backend-Storage-Policy-Index': '0', # default when not given 'x-trans-id': 
'123', 'referer': 'PUT http://localhost/sda1/0/a/c/o', 'x-foo': 'bar'})) def test_container_update_async(self): policy = random.choice(list(POLICIES)) req = Request.blank( '/sda1/0/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '123', 'X-Container-Host': 'chost:cport', 'X-Container-Partition': 'cpartition', 'X-Container-Device': 'cdevice', 'Content-Type': 'text/plain', 'X-Object-Sysmeta-Ec-Frag-Index': 0, 'X-Backend-Storage-Policy-Index': int(policy)}, body='') given_args = [] def fake_pickle_async_update(*args): given_args[:] = args diskfile_mgr = self.object_controller._diskfile_router[policy] diskfile_mgr.pickle_async_update = fake_pickle_async_update with mocked_http_conn(500) as fake_conn, fake_spawn(): resp = req.get_response(self.object_controller) self.assertRaises(StopIteration, fake_conn.code_iter.next) self.assertEqual(resp.status_int, 201) self.assertEqual(len(given_args), 7) (objdevice, account, container, obj, data, timestamp, policy) = given_args self.assertEqual(objdevice, 'sda1') self.assertEqual(account, 'a') self.assertEqual(container, 'c') self.assertEqual(obj, 'o') self.assertEqual(timestamp, utils.Timestamp(1).internal) self.assertEqual(policy, policy) self.assertEqual(data, { 'headers': HeaderKeyDict({ 'X-Size': '0', 'User-Agent': 'object-server %s' % os.getpid(), 'X-Content-Type': 'text/plain', 'X-Timestamp': utils.Timestamp(1).internal, 'X-Trans-Id': '123', 'Referer': 'PUT http://localhost/sda1/0/a/c/o', 'X-Backend-Storage-Policy-Index': int(policy), 'X-Etag': 'd41d8cd98f00b204e9800998ecf8427e'}), 'obj': 'o', 'account': 'a', 'container': 'c', 'op': 'PUT'}) def test_container_update_as_greenthread(self): greenthreads = [] saved_spawn_calls = [] called_async_update_args = [] def local_fake_spawn(func, *a, **kw): saved_spawn_calls.append((func, a, kw)) return mock.MagicMock() def local_fake_async_update(*a, **kw): # just capture the args to see that we would have called called_async_update_args.append([a, kw]) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '12345', 'Content-Type': 'application/burrito', 'Content-Length': '0', 'X-Backend-Storage-Policy-Index': 0, 'X-Container-Partition': '20', 'X-Container-Host': '1.2.3.4:5', 'X-Container-Device': 'sdb1'}) with mock.patch.object(object_server, 'spawn', local_fake_spawn): with mock.patch.object(self.object_controller, 'async_update', local_fake_async_update): resp = req.get_response(self.object_controller) # check the response is completed and successful self.assertEqual(resp.status_int, 201) # check that async_update hasn't been called self.assertFalse(len(called_async_update_args)) # now do the work in greenthreads for func, a, kw in saved_spawn_calls: gt = spawn(func, *a, **kw) greenthreads.append(gt) # wait for the greenthreads to finish for gt in greenthreads: gt.wait() # check that the calls to async_update have happened headers_out = {'X-Size': '0', 'X-Content-Type': 'application/burrito', 'X-Timestamp': '0000012345.00000', 'X-Trans-Id': '-', 'Referer': 'PUT http://localhost/sda1/p/a/c/o', 'X-Backend-Storage-Policy-Index': '0', 'X-Etag': 'd41d8cd98f00b204e9800998ecf8427e'} expected = [('PUT', 'a', 'c', 'o', '1.2.3.4:5', '20', 'sdb1', headers_out, 'sda1', POLICIES[0]), {'logger_thread_locals': (None, None)}] self.assertEqual(called_async_update_args, [expected]) def test_container_update_as_greenthread_with_timeout(self): ''' give it one container to update (for only one greenthred) fake the greenthred so it will raise a timeout 
test that the right message is logged and the method returns None ''' called_async_update_args = [] def local_fake_spawn(func, *a, **kw): m = mock.MagicMock() def wait_with_error(): raise Timeout() m.wait = wait_with_error # because raise can't be in a lambda return m def local_fake_async_update(*a, **kw): # just capture the args to see that we would have called called_async_update_args.append([a, kw]) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '12345', 'Content-Type': 'application/burrito', 'Content-Length': '0', 'X-Backend-Storage-Policy-Index': 0, 'X-Container-Partition': '20', 'X-Container-Host': '1.2.3.4:5', 'X-Container-Device': 'sdb1'}) with mock.patch.object(object_server, 'spawn', local_fake_spawn): with mock.patch.object(self.object_controller, 'container_update_timeout', 1.414213562): resp = req.get_response(self.object_controller) # check the response is completed and successful self.assertEqual(resp.status_int, 201) # check that the timeout was logged expected_logged_error = "Container update timeout (1.4142s) " \ "waiting for [('1.2.3.4:5', 'sdb1')]" self.assertTrue( expected_logged_error in self.object_controller.logger.get_lines_for_level('debug')) def test_container_update_bad_args(self): policy = random.choice(list(POLICIES)) given_args = [] def fake_async_update(*args): given_args.extend(args) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '123', 'X-Container-Host': 'chost,badhost', 'X-Container-Partition': 'cpartition', 'X-Container-Device': 'cdevice', 'X-Backend-Storage-Policy-Index': int(policy)}) with mock.patch.object(self.object_controller, 'async_update', fake_async_update): self.object_controller.container_update( 'PUT', 'a', 'c', 'o', req, { 'x-size': '0', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-content-type': 'text/plain', 'x-timestamp': '1'}, 'sda1', policy) self.assertEqual(given_args, []) errors = self.object_controller.logger.get_lines_for_level('error') self.assertEqual(len(errors), 1) msg = errors[0] self.assertTrue('Container update failed' in msg) self.assertTrue('different numbers of hosts and devices' in msg) self.assertTrue('chost,badhost' in msg) self.assertTrue('cdevice' in msg) def test_delete_at_update_on_put(self): # Test how delete_at_update works when issued a delete for old # expiration info after a new put with no new expiration info. policy = random.choice(list(POLICIES)) given_args = [] def fake_async_update(*args): given_args.extend(args) req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '123', 'X-Backend-Storage-Policy-Index': int(policy)}) with mock.patch.object(self.object_controller, 'async_update', fake_async_update): self.object_controller.delete_at_update( 'DELETE', 2, 'a', 'c', 'o', req, 'sda1', policy) self.assertEqual( given_args, [ 'DELETE', '.expiring_objects', '0000000000', '0000000002-a/c/o', None, None, None, HeaderKeyDict({ 'X-Backend-Storage-Policy-Index': 0, 'x-timestamp': utils.Timestamp('1').internal, 'x-trans-id': '123', 'referer': 'PUT http://localhost/v1/a/c/o'}), 'sda1', policy]) def test_delete_at_negative(self): # Test how delete_at_update works when issued a delete for old # expiration info after a new put with no new expiration info. 
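        # delete_at_update addresses the internal .expiring_objects account
        # (always stored at policy index 0), so the async_update args these
        # tests expect take roughly the shape:
        #
        #     ['DELETE', '.expiring_objects', '<rounded delete-at>',
        #      '<delete-at>-a/c/o', None, None, None, <headers>,
        #      'sda1', policy]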
# Test negative is reset to 0 policy = random.choice(list(POLICIES)) given_args = [] def fake_async_update(*args): given_args.extend(args) self.object_controller.async_update = fake_async_update req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '1234', 'X-Backend-Storage-Policy-Index': int(policy)}) self.object_controller.delete_at_update( 'DELETE', -2, 'a', 'c', 'o', req, 'sda1', policy) self.assertEqual(given_args, [ 'DELETE', '.expiring_objects', '0000000000', '0000000000-a/c/o', None, None, None, HeaderKeyDict({ # the expiring objects account is always 0 'X-Backend-Storage-Policy-Index': 0, 'x-timestamp': utils.Timestamp('1').internal, 'x-trans-id': '1234', 'referer': 'PUT http://localhost/v1/a/c/o'}), 'sda1', policy]) def test_delete_at_cap(self): # Test how delete_at_update works when issued a delete for old # expiration info after a new put with no new expiration info. # Test past cap is reset to cap policy = random.choice(list(POLICIES)) given_args = [] def fake_async_update(*args): given_args.extend(args) self.object_controller.async_update = fake_async_update req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '1234', 'X-Backend-Storage-Policy-Index': int(policy)}) self.object_controller.delete_at_update( 'DELETE', 12345678901, 'a', 'c', 'o', req, 'sda1', policy) expiring_obj_container = given_args.pop(2) expected_exp_cont = utils.get_expirer_container( utils.normalize_delete_at_timestamp(12345678901), 86400, 'a', 'c', 'o') self.assertEqual(expiring_obj_container, expected_exp_cont) self.assertEqual(given_args, [ 'DELETE', '.expiring_objects', '9999999999-a/c/o', None, None, None, HeaderKeyDict({ 'X-Backend-Storage-Policy-Index': 0, 'x-timestamp': utils.Timestamp('1').internal, 'x-trans-id': '1234', 'referer': 'PUT http://localhost/v1/a/c/o'}), 'sda1', policy]) def test_delete_at_update_put_with_info(self): # Keep next test, # test_delete_at_update_put_with_info_but_missing_container, in sync # with this one but just missing the X-Delete-At-Container header. policy = random.choice(list(POLICIES)) given_args = [] def fake_async_update(*args): given_args.extend(args) self.object_controller.async_update = fake_async_update req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '1234', 'X-Delete-At-Container': '0', 'X-Delete-At-Host': '127.0.0.1:1234', 'X-Delete-At-Partition': '3', 'X-Delete-At-Device': 'sdc1', 'X-Backend-Storage-Policy-Index': int(policy)}) self.object_controller.delete_at_update('PUT', 2, 'a', 'c', 'o', req, 'sda1', policy) self.assertEqual( given_args, [ 'PUT', '.expiring_objects', '0000000000', '0000000002-a/c/o', '127.0.0.1:1234', '3', 'sdc1', HeaderKeyDict({ # the .expiring_objects account is always policy-0 'X-Backend-Storage-Policy-Index': 0, 'x-size': '0', 'x-etag': 'd41d8cd98f00b204e9800998ecf8427e', 'x-content-type': 'text/plain', 'x-timestamp': utils.Timestamp('1').internal, 'x-trans-id': '1234', 'referer': 'PUT http://localhost/v1/a/c/o'}), 'sda1', policy]) def test_delete_at_update_put_with_info_but_missing_container(self): # Same as previous test, test_delete_at_update_put_with_info, but just # missing the X-Delete-At-Container header. 
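        # The expected behaviour, asserted below, is that the update is not
        # failed outright: the server logs the "X-Delete-At-Container header
        # must be specified..." warning and makes its own best guess at the
        # expirer container name.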
policy = random.choice(list(POLICIES)) given_args = [] def fake_async_update(*args): given_args.extend(args) self.object_controller.async_update = fake_async_update self.object_controller.logger = self.logger req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '1234', 'X-Delete-At-Host': '127.0.0.1:1234', 'X-Delete-At-Partition': '3', 'X-Delete-At-Device': 'sdc1', 'X-Backend-Storage-Policy-Index': int(policy)}) self.object_controller.delete_at_update('PUT', 2, 'a', 'c', 'o', req, 'sda1', policy) self.assertEqual( self.logger.get_lines_for_level('warning'), ['X-Delete-At-Container header must be specified for expiring ' 'objects background PUT to work properly. Making best guess as ' 'to the container name for now.']) def test_delete_at_update_delete(self): policy = random.choice(list(POLICIES)) given_args = [] def fake_async_update(*args): given_args.extend(args) self.object_controller.async_update = fake_async_update req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '1234', 'X-Backend-Storage-Policy-Index': int(policy)}) self.object_controller.delete_at_update('DELETE', 2, 'a', 'c', 'o', req, 'sda1', policy) self.assertEqual( given_args, [ 'DELETE', '.expiring_objects', '0000000000', '0000000002-a/c/o', None, None, None, HeaderKeyDict({ 'X-Backend-Storage-Policy-Index': 0, 'x-timestamp': utils.Timestamp('1').internal, 'x-trans-id': '1234', 'referer': 'DELETE http://localhost/v1/a/c/o'}), 'sda1', policy]) def test_delete_backend_replication(self): # If X-Backend-Replication: True delete_at_update should completely # short-circuit. policy = random.choice(list(POLICIES)) given_args = [] def fake_async_update(*args): given_args.extend(args) self.object_controller.async_update = fake_async_update req = Request.blank( '/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': 1, 'X-Trans-Id': '1234', 'X-Backend-Replication': 'True', 'X-Backend-Storage-Policy-Index': int(policy)}) self.object_controller.delete_at_update( 'DELETE', -2, 'a', 'c', 'o', req, 'sda1', policy) self.assertEqual(given_args, []) def test_POST_calls_delete_at(self): policy = random.choice(list(POLICIES)) given_args = [] def fake_delete_at_update(*args): given_args.extend(args) self.object_controller.delete_at_update = fake_delete_at_update req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '4', 'Content-Type': 'application/octet-stream', 'X-Backend-Storage-Policy-Index': int(policy), 'X-Object-Sysmeta-Ec-Frag-Index': 2}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) self.assertEqual(given_args, []) sleep(.00001) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Type': 'application/x-test', 'X-Backend-Storage-Policy-Index': int(policy)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) self.assertEqual(given_args, []) sleep(.00001) timestamp1 = normalize_timestamp(time()) delete_at_timestamp1 = str(int(time() + 1000)) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp1, 'Content-Type': 'application/x-test', 'X-Delete-At': delete_at_timestamp1, 'X-Backend-Storage-Policy-Index': int(policy)}) resp = req.get_response(self.object_controller) 
self.assertEqual(resp.status_int, 202) self.assertEqual( given_args, [ 'PUT', int(delete_at_timestamp1), 'a', 'c', 'o', given_args[5], 'sda1', policy]) while given_args: given_args.pop() sleep(.00001) timestamp2 = normalize_timestamp(time()) delete_at_timestamp2 = str(int(time() + 2000)) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': timestamp2, 'Content-Type': 'application/x-test', 'X-Delete-At': delete_at_timestamp2, 'X-Backend-Storage-Policy-Index': int(policy)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) self.assertEqual( given_args, [ 'PUT', int(delete_at_timestamp2), 'a', 'c', 'o', given_args[5], 'sda1', policy, 'DELETE', int(delete_at_timestamp1), 'a', 'c', 'o', given_args[5], 'sda1', policy]) def test_PUT_calls_delete_at(self): policy = random.choice(list(POLICIES)) given_args = [] def fake_delete_at_update(*args): given_args.extend(args) self.object_controller.delete_at_update = fake_delete_at_update req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '4', 'Content-Type': 'application/octet-stream', 'X-Backend-Storage-Policy-Index': int(policy), 'X-Object-Sysmeta-Ec-Frag-Index': 4}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) self.assertEqual(given_args, []) sleep(.00001) timestamp1 = normalize_timestamp(time()) delete_at_timestamp1 = str(int(time() + 1000)) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp1, 'Content-Length': '4', 'Content-Type': 'application/octet-stream', 'X-Delete-At': delete_at_timestamp1, 'X-Backend-Storage-Policy-Index': int(policy), 'X-Object-Sysmeta-Ec-Frag-Index': 3}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) self.assertEqual( given_args, [ 'PUT', int(delete_at_timestamp1), 'a', 'c', 'o', given_args[5], 'sda1', policy]) while given_args: given_args.pop() sleep(.00001) timestamp2 = normalize_timestamp(time()) delete_at_timestamp2 = str(int(time() + 2000)) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp2, 'Content-Length': '4', 'Content-Type': 'application/octet-stream', 'X-Delete-At': delete_at_timestamp2, 'X-Backend-Storage-Policy-Index': int(policy), 'X-Object-Sysmeta-Ec-Frag-Index': 3}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) self.assertEqual( given_args, [ 'PUT', int(delete_at_timestamp2), 'a', 'c', 'o', given_args[5], 'sda1', policy, 'DELETE', int(delete_at_timestamp1), 'a', 'c', 'o', given_args[5], 'sda1', policy]) def test_GET_but_expired(self): test_time = time() + 10000 delete_at_timestamp = int(test_time + 100) delete_at_container = str( delete_at_timestamp / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(test_time - 2000), 'X-Delete-At': str(delete_at_timestamp), 'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'X-Timestamp': 
normalize_timestamp(test_time)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) orig_time = object_server.time.time try: t = time() object_server.time.time = lambda: t delete_at_timestamp = int(t + 1) delete_at_container = str( delete_at_timestamp / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) put_timestamp = normalize_timestamp(test_time - 1000) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': put_timestamp, 'X-Delete-At': str(delete_at_timestamp), 'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'X-Timestamp': normalize_timestamp(test_time)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) finally: object_server.time.time = orig_time orig_time = object_server.time.time try: t = time() + 2 object_server.time.time = lambda: t req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'X-Timestamp': normalize_timestamp(t)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Backend-Timestamp'], utils.Timestamp(put_timestamp)) finally: object_server.time.time = orig_time def test_HEAD_but_expired(self): test_time = time() + 10000 delete_at_timestamp = int(test_time + 100) delete_at_container = str( delete_at_timestamp / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(test_time - 2000), 'X-Delete-At': str(delete_at_timestamp), 'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'X-Timestamp': normalize_timestamp(test_time)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) orig_time = object_server.time.time try: t = time() delete_at_timestamp = int(t + 1) delete_at_container = str( delete_at_timestamp / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) object_server.time.time = lambda: t put_timestamp = normalize_timestamp(test_time - 1000) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': put_timestamp, 'X-Delete-At': str(delete_at_timestamp), 'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}, headers={'X-Timestamp': normalize_timestamp(test_time)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) finally: object_server.time.time = orig_time orig_time = object_server.time.time try: t = time() + 2 object_server.time.time = lambda: t req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 
'HEAD'}, headers={'X-Timestamp': normalize_timestamp(time())}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Backend-Timestamp'], utils.Timestamp(put_timestamp)) finally: object_server.time.time = orig_time def test_POST_but_expired(self): test_time = time() + 10000 delete_at_timestamp = int(test_time + 100) delete_at_container = str( delete_at_timestamp / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(test_time - 2000), 'X-Delete-At': str(delete_at_timestamp), 'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(test_time - 1500)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 202) delete_at_timestamp = int(time() + 1) delete_at_container = str( delete_at_timestamp / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(test_time - 1000), 'X-Delete-At': str(delete_at_timestamp), 'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) orig_time = object_server.time.time try: t = time() + 2 object_server.time.time = lambda: t req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(time())}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) finally: object_server.time.time = orig_time def test_DELETE_but_expired(self): test_time = time() + 10000 delete_at_timestamp = int(test_time + 100) delete_at_container = str( delete_at_timestamp / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(test_time - 2000), 'X-Delete-At': str(delete_at_timestamp), 'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) orig_time = object_server.time.time try: t = test_time + 100 object_server.time.time = lambda: float(t) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': normalize_timestamp(time())}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) finally: object_server.time.time = orig_time def test_DELETE_if_delete_at_expired_still_deletes(self): test_time = time() + 10 test_timestamp = normalize_timestamp(test_time) delete_at_time = int(test_time + 10) delete_at_timestamp = str(delete_at_time) delete_at_container = str( delete_at_time / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) req = Request.blank( 
'/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': test_timestamp, 'X-Delete-At': delete_at_timestamp, 'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) # sanity req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'X-Timestamp': test_timestamp}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body, 'TEST') objfile = os.path.join( self.testdir, 'sda1', storage_directory(diskfile.get_data_dir(POLICIES[0]), 'p', hash_path('a', 'c', 'o')), utils.Timestamp(test_timestamp).internal + '.data') self.assertTrue(os.path.isfile(objfile)) # move time past expirery with mock.patch('swift.obj.diskfile.time') as mock_time: mock_time.time.return_value = test_time + 100 req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'GET'}, headers={'X-Timestamp': test_timestamp}) resp = req.get_response(self.object_controller) # request will 404 self.assertEqual(resp.status_int, 404) # but file still exists self.assertTrue(os.path.isfile(objfile)) # make the x-if-delete-at with some wrong bits req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': delete_at_timestamp, 'X-If-Delete-At': int(time() + 1)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) self.assertTrue(os.path.isfile(objfile)) # make the x-if-delete-at with all the right bits req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': delete_at_timestamp, 'X-If-Delete-At': delete_at_timestamp}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 204) self.assertFalse(os.path.isfile(objfile)) # make the x-if-delete-at with all the right bits (again) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': delete_at_timestamp, 'X-If-Delete-At': delete_at_timestamp}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) self.assertFalse(os.path.isfile(objfile)) # make the x-if-delete-at for some not found req = Request.blank( '/sda1/p/a/c/o-not-found', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': delete_at_timestamp, 'X-If-Delete-At': delete_at_timestamp}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) def test_DELETE_if_delete_at(self): test_time = time() + 10000 req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(test_time - 99), 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': normalize_timestamp(test_time - 98)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 204) delete_at_timestamp = int(test_time - 1) delete_at_container = str( delete_at_timestamp / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(test_time - 97), 'X-Delete-At': str(delete_at_timestamp), 
'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': normalize_timestamp(test_time - 95), 'X-If-Delete-At': str(int(test_time))}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': normalize_timestamp(test_time - 95)}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 204) delete_at_timestamp = int(test_time - 1) delete_at_container = str( delete_at_timestamp / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(test_time - 94), 'X-Delete-At': str(delete_at_timestamp), 'X-Delete-At-Container': delete_at_container, 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': normalize_timestamp(test_time - 92), 'X-If-Delete-At': str(int(test_time))}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 412) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': normalize_timestamp(test_time - 92), 'X-If-Delete-At': delete_at_timestamp}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 204) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': normalize_timestamp(test_time - 92), 'X-If-Delete-At': 'abc'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) def test_DELETE_calls_delete_at(self): given_args = [] def fake_delete_at_update(*args): given_args.extend(args) self.object_controller.delete_at_update = fake_delete_at_update timestamp1 = normalize_timestamp(time()) delete_at_timestamp1 = int(time() + 1000) delete_at_container1 = str( delete_at_timestamp1 / self.object_controller.expiring_objects_container_divisor * self.object_controller.expiring_objects_container_divisor) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp1, 'Content-Length': '4', 'Content-Type': 'application/octet-stream', 'X-Delete-At': str(delete_at_timestamp1), 'X-Delete-At-Container': delete_at_container1}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) self.assertEqual(given_args, [ 'PUT', int(delete_at_timestamp1), 'a', 'c', 'o', given_args[5], 'sda1', POLICIES[0]]) while given_args: given_args.pop() sleep(.00001) timestamp2 = normalize_timestamp(time()) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': timestamp2, 'Content-Type': 'application/octet-stream'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 204) self.assertEqual(given_args, [ 'DELETE', int(delete_at_timestamp1), 'a', 'c', 'o', given_args[5], 'sda1', POLICIES[0]]) def test_PUT_delete_at_in_past(self): req = Request.blank( '/sda1/p/a/c/o', 
environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'X-Delete-At': str(int(time() - 1)), 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) self.assertTrue('X-Delete-At in past' in resp.body) def test_POST_delete_at_in_past(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(time()), 'Content-Length': '4', 'Content-Type': 'application/octet-stream'}) req.body = 'TEST' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(time() + 1), 'X-Delete-At': str(int(time() - 1))}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 400) self.assertTrue('X-Delete-At in past' in resp.body) def test_REPLICATE_works(self): def fake_get_hashes(*args, **kwargs): return 0, {1: 2} def my_tpool_execute(func, *args, **kwargs): return func(*args, **kwargs) was_get_hashes = diskfile.DiskFileManager._get_hashes was_tpool_exe = tpool.execute try: diskfile.DiskFileManager._get_hashes = fake_get_hashes tpool.execute = my_tpool_execute req = Request.blank('/sda1/p/suff', environ={'REQUEST_METHOD': 'REPLICATE'}, headers={}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) p_data = pickle.loads(resp.body) self.assertEqual(p_data, {1: 2}) finally: tpool.execute = was_tpool_exe diskfile.DiskFileManager._get_hashes = was_get_hashes def test_REPLICATE_timeout(self): def fake_get_hashes(*args, **kwargs): raise Timeout() def my_tpool_execute(func, *args, **kwargs): return func(*args, **kwargs) was_get_hashes = diskfile.DiskFileManager._get_hashes was_tpool_exe = tpool.execute try: diskfile.DiskFileManager._get_hashes = fake_get_hashes tpool.execute = my_tpool_execute req = Request.blank('/sda1/p/suff', environ={'REQUEST_METHOD': 'REPLICATE'}, headers={}) self.assertRaises(Timeout, self.object_controller.REPLICATE, req) finally: tpool.execute = was_tpool_exe diskfile.DiskFileManager._get_hashes = was_get_hashes def test_REPLICATE_insufficient_storage(self): conf = {'devices': self.testdir, 'mount_check': 'true'} self.object_controller = object_server.ObjectController( conf, logger=debug_logger()) self.object_controller.bytes_per_sync = 1 def fake_check_mount(*args, **kwargs): return False with mock.patch("swift.obj.diskfile.check_mount", fake_check_mount): req = Request.blank('/sda1/p/suff', environ={'REQUEST_METHOD': 'REPLICATE'}, headers={}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 507) def test_SSYNC_can_be_called(self): req = Request.blank('/sda1/0', environ={'REQUEST_METHOD': 'SSYNC'}, headers={}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) def test_PUT_with_full_drive(self): class IgnoredBody(object): def __init__(self): self.read_called = False def read(self, size=-1): if not self.read_called: self.read_called = True return 'VERIFY' return '' def fake_fallocate(fd, size): raise OSError(errno.ENOSPC, os.strerror(errno.ENOSPC)) orig_fallocate = diskfile.fallocate try: diskfile.fallocate = fake_fallocate timestamp = normalize_timestamp(time()) body_reader = IgnoredBody() req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': body_reader}, headers={'X-Timestamp': 
timestamp, 'Content-Length': '6', 'Content-Type': 'application/octet-stream', 'Expect': '100-continue'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 507) self.assertFalse(body_reader.read_called) finally: diskfile.fallocate = orig_fallocate def test_global_conf_callback_does_nothing(self): preloaded_app_conf = {} global_conf = {} object_server.global_conf_callback(preloaded_app_conf, global_conf) self.assertEqual(preloaded_app_conf, {}) self.assertEqual(global_conf.keys(), ['replication_semaphore']) try: value = global_conf['replication_semaphore'][0].get_value() except NotImplementedError: # On some operating systems (at a minimum, OS X) it's not possible # to introspect the value of a semaphore raise SkipTest else: self.assertEqual(value, 4) def test_global_conf_callback_replication_semaphore(self): preloaded_app_conf = {'replication_concurrency': 123} global_conf = {} with mock.patch.object( object_server.multiprocessing, 'BoundedSemaphore', return_value='test1') as mocked_Semaphore: object_server.global_conf_callback(preloaded_app_conf, global_conf) self.assertEqual(preloaded_app_conf, {'replication_concurrency': 123}) self.assertEqual(global_conf, {'replication_semaphore': ['test1']}) mocked_Semaphore.assert_called_once_with(123) def test_handling_of_replication_semaphore_config(self): conf = {'devices': self.testdir, 'mount_check': 'false'} objsrv = object_server.ObjectController(conf) self.assertTrue(objsrv.replication_semaphore is None) conf['replication_semaphore'] = ['sema'] objsrv = object_server.ObjectController(conf) self.assertEqual(objsrv.replication_semaphore, 'sema') def test_serv_reserv(self): # Test replication_server flag was set from configuration file. conf = {'devices': self.testdir, 'mount_check': 'false'} self.assertEqual( object_server.ObjectController(conf).replication_server, None) for val in [True, '1', 'True', 'true']: conf['replication_server'] = val self.assertTrue( object_server.ObjectController(conf).replication_server) for val in [False, 0, '0', 'False', 'false', 'test_string']: conf['replication_server'] = val self.assertFalse( object_server.ObjectController(conf).replication_server) def test_list_allowed_methods(self): # Test list of allowed_methods obj_methods = ['DELETE', 'PUT', 'HEAD', 'GET', 'POST'] repl_methods = ['REPLICATE', 'SSYNC'] for method_name in obj_methods: method = getattr(self.object_controller, method_name) self.assertFalse(hasattr(method, 'replication')) for method_name in repl_methods: method = getattr(self.object_controller, method_name) self.assertEqual(method.replication, True) def test_correct_allowed_method(self): # Test correct work for allowed method using # swift.obj.server.ObjectController.__call__ inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() self.object_controller = object_server.app_factory( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false'}) def start_response(*args): # Sends args to outbuf outbuf.writelines(args) method = 'PUT' env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c/o', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} method_res = mock.MagicMock() mock_method = public(lambda x: mock.MagicMock(return_value=method_res)) with mock.patch.object(self.object_controller, method, 
new=mock_method): response = self.object_controller(env, start_response) self.assertEqual(response, method_res) def test_not_allowed_method(self): # Test correct work for NOT allowed method using # swift.obj.server.ObjectController.__call__ inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() self.object_controller = object_server.ObjectController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false'}, logger=self.logger) def start_response(*args): # Sends args to outbuf outbuf.writelines(args) method = 'PUT' env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c/o', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} answer = ['

<html><h1>Method Not Allowed</h1><p>The method is not ' 'allowed for this resource.</p></html>
'] mock_method = replication(public(lambda x: mock.MagicMock())) with mock.patch.object(self.object_controller, method, new=mock_method): mock_method.replication = True with mock.patch('time.gmtime', mock.MagicMock(side_effect=[gmtime(10001.0)])): with mock.patch('time.time', mock.MagicMock(side_effect=[10000.0, 10001.0])): with mock.patch('os.getpid', mock.MagicMock(return_value=1234)): response = self.object_controller.__call__( env, start_response) self.assertEqual(response, answer) self.assertEqual( self.logger.get_lines_for_level('info'), ['None - - [01/Jan/1970:02:46:41 +0000] "PUT' ' /sda1/p/a/c/o" 405 - "-" "-" "-" 1.0000 "-"' ' 1234 -']) def test_call_incorrect_replication_method(self): inbuf = StringIO() errbuf = StringIO() outbuf = StringIO() self.object_controller = object_server.ObjectController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'true'}, logger=FakeLogger()) def start_response(*args): """Sends args to outbuf""" outbuf.writelines(args) obj_methods = ['DELETE', 'PUT', 'HEAD', 'GET', 'POST', 'OPTIONS'] for method in obj_methods: env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} self.object_controller(env, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '405 ') def test_not_utf8_and_not_logging_requests(self): inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() self.object_controller = object_server.ObjectController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false', 'log_requests': 'false'}, logger=FakeLogger()) def start_response(*args): # Sends args to outbuf outbuf.writelines(args) method = 'PUT' env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c/\x00%20/%', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} answer = ['Invalid UTF8 or contains NULL'] mock_method = public(lambda x: mock.MagicMock()) with mock.patch.object(self.object_controller, method, new=mock_method): response = self.object_controller.__call__(env, start_response) self.assertEqual(response, answer) self.assertEqual(self.logger.get_lines_for_level('info'), []) def test__call__returns_500(self): inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() self.logger = debug_logger('test') self.object_controller = object_server.ObjectController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false', 'log_requests': 'false'}, logger=self.logger) def start_response(*args): # Sends args to outbuf outbuf.writelines(args) method = 'PUT' env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c/o', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} @public def mock_put_method(*args, **kwargs): raise Exception() with mock.patch.object(self.object_controller, method, 
new=mock_put_method): response = self.object_controller.__call__(env, start_response) self.assertTrue(response[0].startswith( 'Traceback (most recent call last):')) self.assertEqual(self.logger.get_lines_for_level('error'), [ 'ERROR __call__ error with %(method)s %(path)s : ' % { 'method': 'PUT', 'path': '/sda1/p/a/c/o'}, ]) self.assertEqual(self.logger.get_lines_for_level('info'), []) def test_PUT_slow(self): inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() self.object_controller = object_server.ObjectController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false', 'log_requests': 'false', 'slow': '10'}, logger=self.logger) def start_response(*args): # Sends args to outbuf outbuf.writelines(args) method = 'PUT' env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c/o', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} mock_method = public(lambda x: mock.MagicMock()) with mock.patch.object(self.object_controller, method, new=mock_method): with mock.patch('time.time', mock.MagicMock(side_effect=[10000.0, 10001.0])): with mock.patch('swift.obj.server.sleep', mock.MagicMock()) as ms: self.object_controller.__call__(env, start_response) ms.assert_called_with(9) self.assertEqual(self.logger.get_lines_for_level('info'), []) def test_log_line_format(self): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD', 'REMOTE_ADDR': '1.2.3.4'}) self.object_controller.logger = self.logger with mock.patch( 'time.gmtime', mock.MagicMock(side_effect=[gmtime(10001.0)])): with mock.patch( 'time.time', mock.MagicMock(side_effect=[10000.0, 10001.0, 10002.0])): with mock.patch( 'os.getpid', mock.MagicMock(return_value=1234)): req.get_response(self.object_controller) self.assertEqual( self.logger.get_lines_for_level('info'), ['1.2.3.4 - - [01/Jan/1970:02:46:41 +0000] "HEAD /sda1/p/a/c/o" ' '404 - "-" "-" "-" 2.0000 "-" 1234 -']) @patch_policies([StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one', False)]) def test_dynamic_datadir(self): # update router post patch self.object_controller._diskfile_router = diskfile.DiskFileRouter( self.conf, self.object_controller.logger) timestamp = normalize_timestamp(time()) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test', 'Foo': 'fooheader', 'Baz': 'bazheader', 'X-Backend-Storage-Policy-Index': 1, 'X-Object-Meta-1': 'One', 'X-Object-Meta-Two': 'Two'}) req.body = 'VERIFY' object_dir = self.testdir + "/sda1/objects-1" self.assertFalse(os.path.isdir(object_dir)) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) self.assertTrue(os.path.isdir(object_dir)) # make sure no idx in header uses policy 0 data_dir req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': timestamp, 'Content-Type': 'application/x-test', 'Foo': 'fooheader', 'Baz': 'bazheader', 'X-Object-Meta-1': 'One', 'X-Object-Meta-Two': 'Two'}) req.body = 'VERIFY' object_dir = self.testdir + "/sda1/objects" self.assertFalse(os.path.isdir(object_dir)) with mock.patch.object(POLICIES, 'get_by_index', lambda _: True): resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) self.assertTrue(os.path.isdir(object_dir)) def 
test_storage_policy_index_is_validated(self): # sanity check that index for existing policy is ok ts = (utils.Timestamp(t).internal for t in itertools.count(int(time()))) methods = ('PUT', 'POST', 'GET', 'HEAD', 'REPLICATE', 'DELETE') valid_indices = sorted([int(policy) for policy in POLICIES]) for index in valid_indices: object_dir = self.testdir + "/sda1/objects" if index > 0: object_dir = "%s-%s" % (object_dir, index) self.assertFalse(os.path.isdir(object_dir)) for method in methods: headers = { 'X-Timestamp': next(ts), 'Content-Type': 'application/x-test', 'X-Backend-Storage-Policy-Index': index} if POLICIES[index].policy_type == EC_POLICY: headers['X-Object-Sysmeta-Ec-Frag-Index'] = '2' req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': method}, headers=headers) req.body = 'VERIFY' resp = req.get_response(self.object_controller) self.assertTrue(is_success(resp.status_int), '%s method failed: %r' % (method, resp.status)) # index for non-existent policy should return 503 index = valid_indices[-1] + 1 for method in methods: req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': method}, headers={ 'X-Timestamp': next(ts), 'Content-Type': 'application/x-test', 'X-Backend-Storage-Policy-Index': index}) req.body = 'VERIFY' object_dir = self.testdir + "/sda1/objects-%s" % index resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 503) self.assertFalse(os.path.isdir(object_dir)) def test_race_doesnt_quarantine(self): existing_timestamp = normalize_timestamp(time()) delete_timestamp = normalize_timestamp(time() + 1) put_timestamp = normalize_timestamp(time() + 2) # make a .ts req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': existing_timestamp}) req.get_response(self.object_controller) # force a PUT between the listdir and read_metadata of a DELETE put_once = [False] orig_listdir = os.listdir def mock_listdir(path): listing = orig_listdir(path) if not put_once[0]: put_once[0] = True req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': put_timestamp, 'Content-Length': '9', 'Content-Type': 'application/octet-stream'}) req.body = 'some data' resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 201) return listing with mock.patch('os.listdir', mock_listdir): req = Request.blank( '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': delete_timestamp}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 404) qdir = os.path.join(self.testdir, 'sda1', 'quarantined') self.assertFalse(os.path.exists(qdir)) req = Request.blank('/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.object_controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['X-Timestamp'], put_timestamp) def test_multiphase_put_draining(self): # We want to ensure that we read the whole response body even if # it's multipart MIME and there's document parts that we don't # expect or understand. This'll help save our bacon if we ever jam # more stuff in there. 
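# --- Illustrative sketch (not part of the original test module): how the
# multiphase PUT tests below frame their request bodies. build_put_mime_body
# and encode_chunk are names invented for this sketch; the framing itself
# mirrors the test_doc and to_send literals used in the tests. The body is a
# series of parts delimited by the boundary announced in
# X-Backend-Obj-Multipart-Mime-Boundary, each introduced by an "X-Document"
# header; the metadata footer part carries a Content-MD5 of its JSON payload.
import json
from hashlib import md5

def build_put_mime_body(obj_data, footer_meta, boundary='boundary123'):
    footer_json = json.dumps(footer_meta)
    footer_cksum = md5(footer_json).hexdigest()
    return "\r\n".join((
        "--" + boundary,
        "X-Document: object body",
        "",
        obj_data,
        "--" + boundary,
        "X-Document: object metadata",
        "Content-MD5: " + footer_cksum,
        "",
        footer_json,
        "--" + boundary + "--",
    ))

# Sent with Transfer-Encoding: chunked (as in these tests), each chunk is framed
# as "<hex length>\r\n<payload>\r\n" and the body ends with the zero-length
# chunk "0\r\n\r\n".
def encode_chunk(payload):
    return "%x\r\n%s\r\n" % (len(payload), payload)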
in_a_timeout = [False] # inherit from BaseException so we get a stack trace when the test # fails instead of just a 500 class NotInATimeout(BaseException): pass class FakeTimeout(BaseException): def __enter__(self): in_a_timeout[0] = True def __exit__(self, typ, value, tb): in_a_timeout[0] = False class PickyWsgiBytesIO(WsgiBytesIO): def read(self, *a, **kw): if not in_a_timeout[0]: raise NotInATimeout() return WsgiBytesIO.read(self, *a, **kw) def readline(self, *a, **kw): if not in_a_timeout[0]: raise NotInATimeout() return WsgiBytesIO.readline(self, *a, **kw) test_data = 'obj data' footer_meta = { "X-Object-Sysmeta-Ec-Frag-Index": "7", "Etag": md5(test_data).hexdigest(), } footer_json = json.dumps(footer_meta) footer_meta_cksum = md5(footer_json).hexdigest() test_doc = "\r\n".join(( "--boundary123", "X-Document: object body", "", test_data, "--boundary123", "X-Document: object metadata", "Content-MD5: " + footer_meta_cksum, "", footer_json, "--boundary123", "X-Document: we got cleverer", "", "stuff stuff meaningless stuuuuuuuuuuff", "--boundary123", "X-Document: we got even cleverer; can you believe it?", "Waneshaft: ambifacient lunar", "Casing: malleable logarithmic", "", "potato potato potato potato potato potato potato", "--boundary123--" )) if six.PY3: test_doc = test_doc.encode('utf-8') # phase1 - PUT request with object metadata in footer and # multiphase commit conversation put_timestamp = utils.Timestamp(time()).internal headers = { 'Content-Type': 'text/plain', 'X-Timestamp': put_timestamp, 'Transfer-Encoding': 'chunked', 'Expect': '100-continue', 'X-Backend-Storage-Policy-Index': '1', 'X-Backend-Obj-Content-Length': len(test_data), 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary123', } wsgi_input = PickyWsgiBytesIO(test_doc) req = Request.blank( "/sda1/0/a/c/o", environ={'REQUEST_METHOD': 'PUT', 'wsgi.input': wsgi_input}, headers=headers) app = object_server.ObjectController(self.conf, logger=self.logger) with mock.patch('swift.obj.server.ChunkReadTimeout', FakeTimeout): resp = req.get_response(app) self.assertEqual(resp.status_int, 201) # sanity check in_a_timeout[0] = True # so we can check without an exception self.assertEqual(wsgi_input.read(), '') # we read all the bytes @patch_policies(test_policies) class TestObjectServer(unittest.TestCase): def setUp(self): # dirs self.tmpdir = tempfile.mkdtemp() self.tempdir = os.path.join(self.tmpdir, 'tmp_test_obj_server') self.devices = os.path.join(self.tempdir, 'srv/node') for device in ('sda1', 'sdb1'): os.makedirs(os.path.join(self.devices, device)) self.conf = { 'devices': self.devices, 'swift_dir': self.tempdir, 'mount_check': 'false', } self.logger = debug_logger('test-object-server') self.app = object_server.ObjectController( self.conf, logger=self.logger) sock = listen(('127.0.0.1', 0)) self.server = spawn(wsgi.server, sock, self.app, utils.NullLogger()) self.port = sock.getsockname()[1] def tearDown(self): rmtree(self.tmpdir) def test_not_found(self): conn = bufferedhttp.http_connect('127.0.0.1', self.port, 'sda1', '0', 'GET', '/a/c/o') resp = conn.getresponse() self.assertEqual(resp.status, 404) resp.read() resp.close() def test_expect_on_put(self): test_body = 'test' headers = { 'Expect': '100-continue', 'Content-Length': len(test_body), 'X-Timestamp': utils.Timestamp(time()).internal, } conn = bufferedhttp.http_connect('127.0.0.1', self.port, 'sda1', '0', 'PUT', '/a/c/o', headers=headers) resp = conn.getexpect() self.assertEqual(resp.status, 100) conn.send(test_body) resp = 
conn.getresponse() self.assertEqual(resp.status, 201) resp.read() resp.close() def test_expect_on_put_footer(self): test_body = 'test' headers = { 'Expect': '100-continue', 'Content-Length': len(test_body), 'X-Timestamp': utils.Timestamp(time()).internal, 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary123', } conn = bufferedhttp.http_connect('127.0.0.1', self.port, 'sda1', '0', 'PUT', '/a/c/o', headers=headers) resp = conn.getexpect() self.assertEqual(resp.status, 100) headers = HeaderKeyDict(resp.getheaders()) self.assertEqual(headers['X-Obj-Metadata-Footer'], 'yes') resp.close() def test_expect_on_put_conflict(self): test_body = 'test' put_timestamp = utils.Timestamp(time()) headers = { 'Expect': '100-continue', 'Content-Length': len(test_body), 'X-Timestamp': put_timestamp.internal, } conn = bufferedhttp.http_connect('127.0.0.1', self.port, 'sda1', '0', 'PUT', '/a/c/o', headers=headers) resp = conn.getexpect() self.assertEqual(resp.status, 100) conn.send(test_body) resp = conn.getresponse() self.assertEqual(resp.status, 201) resp.read() resp.close() # and again with same timestamp conn = bufferedhttp.http_connect('127.0.0.1', self.port, 'sda1', '0', 'PUT', '/a/c/o', headers=headers) resp = conn.getexpect() self.assertEqual(resp.status, 409) headers = HeaderKeyDict(resp.getheaders()) self.assertEqual(headers['X-Backend-Timestamp'], put_timestamp) resp.read() resp.close() def test_multiphase_put_no_mime_boundary(self): test_data = 'obj data' put_timestamp = utils.Timestamp(time()).internal headers = { 'Content-Type': 'text/plain', 'X-Timestamp': put_timestamp, 'Transfer-Encoding': 'chunked', 'Expect': '100-continue', 'X-Backend-Obj-Content-Length': len(test_data), 'X-Backend-Obj-Multiphase-Commit': 'yes', } conn = bufferedhttp.http_connect('127.0.0.1', self.port, 'sda1', '0', 'PUT', '/a/c/o', headers=headers) resp = conn.getexpect() self.assertEqual(resp.status, 400) resp.read() resp.close() def test_expect_on_multiphase_put_diconnect(self): put_timestamp = utils.Timestamp(time()).internal headers = { 'Content-Type': 'text/plain', 'X-Timestamp': put_timestamp, 'Transfer-Encoding': 'chunked', 'Expect': '100-continue', 'X-Backend-Obj-Content-Length': 0, 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary123', 'X-Backend-Obj-Multiphase-Commit': 'yes', } conn = bufferedhttp.http_connect('127.0.0.1', self.port, 'sda1', '0', 'PUT', '/a/c/o', headers=headers) resp = conn.getexpect() self.assertEqual(resp.status, 100) headers = HeaderKeyDict(resp.getheaders()) self.assertEqual(headers['X-Obj-Multiphase-Commit'], 'yes') conn.send('c\r\n--boundary123\r\n') # disconnect client conn.sock.fd._sock.close() for i in range(2): sleep(0) self.assertFalse(self.logger.get_lines_for_level('error')) for line in self.logger.get_lines_for_level('info'): self.assertIn(' 499 ', line) def find_files(self): found_files = defaultdict(list) for root, dirs, files in os.walk(self.devices): for filename in files: _name, ext = os.path.splitext(filename) file_path = os.path.join(root, filename) found_files[ext].append(file_path) return found_files @contextmanager def _check_multiphase_put_commit_handling(self, test_doc=None, headers=None, finish_body=True): """ This helper will setup a multiphase chunked PUT request and yield at the context at the commit phase (after getting the second expect-100 continue response. It can setup a reasonable stub request, but you can over-ride some characteristics of the request via kwargs. 
:param test_doc: first part of the mime conversation before the object server will send the 100-continue, this includes the object body :param headers: headers to send along with the initial request; some object-metadata (e.g. X-Backend-Obj-Content-Length) is generally expected to match the test_doc) :param finish_body: boolean, if true send "0\r\n\r\n" after test_doc and wait for 100-continue before yielding context """ test_data = encode_frag_archive_bodies(POLICIES[1], 'obj data')[0] footer_meta = { "X-Object-Sysmeta-Ec-Frag-Index": "2", "Etag": md5(test_data).hexdigest(), } footer_json = json.dumps(footer_meta) footer_meta_cksum = md5(footer_json).hexdigest() test_doc = test_doc or "\r\n".join(( "--boundary123", "X-Document: object body", "", test_data, "--boundary123", "X-Document: object metadata", "Content-MD5: " + footer_meta_cksum, "", footer_json, "--boundary123", )) # phase1 - PUT request with object metadata in footer and # multiphase commit conversation headers = headers or { 'Content-Type': 'text/plain', 'Transfer-Encoding': 'chunked', 'Expect': '100-continue', 'X-Backend-Storage-Policy-Index': '1', 'X-Backend-Obj-Content-Length': len(test_data), 'X-Backend-Obj-Metadata-Footer': 'yes', 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary123', 'X-Backend-Obj-Multiphase-Commit': 'yes', } put_timestamp = utils.Timestamp(headers.setdefault( 'X-Timestamp', utils.Timestamp(time()).internal)) container_update = \ 'swift.obj.server.ObjectController.container_update' with mock.patch(container_update) as _container_update: conn = bufferedhttp.http_connect( '127.0.0.1', self.port, 'sda1', '0', 'PUT', '/a/c/o', headers=headers) resp = conn.getexpect() self.assertEqual(resp.status, 100) expect_headers = HeaderKeyDict(resp.getheaders()) to_send = "%x\r\n%s\r\n" % (len(test_doc), test_doc) conn.send(to_send) if finish_body: conn.send("0\r\n\r\n") # verify 100-continue response to mark end of phase1 resp = conn.getexpect() self.assertEqual(resp.status, 100) # yield relevant context for test yield { 'conn': conn, 'expect_headers': expect_headers, 'put_timestamp': put_timestamp, 'mock_container_update': _container_update, } # give the object server a little time to trampoline enough to # recognize request has finished, or socket has closed or whatever sleep(0.1) def test_multiphase_put_client_disconnect_right_before_commit(self): with self._check_multiphase_put_commit_handling() as context: conn = context['conn'] # just bail stright out conn.sock.fd._sock.close() put_timestamp = context['put_timestamp'] _container_update = context['mock_container_update'] # and make sure it demonstrates the client disconnect log_lines = self.logger.get_lines_for_level('info') self.assertEqual(len(log_lines), 1) self.assertIn(' 499 ', log_lines[0]) # verify successful object data and durable state file write found_files = self.find_files() # .data file is there self.assertEqual(len(found_files['.data']), 1) obj_datafile = found_files['.data'][0] self.assertEqual("%s#2.data" % put_timestamp.internal, os.path.basename(obj_datafile)) # but .durable isn't self.assertEqual(found_files['.durable'], []) # And no container update self.assertFalse(_container_update.called) def test_multiphase_put_client_disconnect_in_the_middle_of_commit(self): with self._check_multiphase_put_commit_handling() as context: conn = context['conn'] # start commit confirmation to start phase2 commit_confirmation_doc = "\r\n".join(( "X-Document: put commit", "", "commit_confirmation", "--boundary123--", )) # but don't quite the commit body 
to_send = "%x\r\n%s" % \ (len(commit_confirmation_doc), commit_confirmation_doc[:-1]) conn.send(to_send) # and then bail out conn.sock.fd._sock.close() put_timestamp = context['put_timestamp'] _container_update = context['mock_container_update'] # and make sure it demonstrates the client disconnect log_lines = self.logger.get_lines_for_level('info') self.assertEqual(len(log_lines), 1) self.assertIn(' 499 ', log_lines[0]) # verify successful object data and durable state file write found_files = self.find_files() # .data file is there self.assertEqual(len(found_files['.data']), 1) obj_datafile = found_files['.data'][0] self.assertEqual("%s#2.data" % put_timestamp.internal, os.path.basename(obj_datafile)) # but .durable isn't self.assertEqual(found_files['.durable'], []) # And no container update self.assertFalse(_container_update.called) def test_multiphase_put_no_metadata_replicated(self): test_data = 'obj data' test_doc = "\r\n".join(( "--boundary123", "X-Document: object body", "", test_data, "--boundary123", )) put_timestamp = utils.Timestamp(time()).internal headers = { 'Content-Type': 'text/plain', 'X-Timestamp': put_timestamp, 'Transfer-Encoding': 'chunked', 'Expect': '100-continue', 'X-Backend-Obj-Content-Length': len(test_data), 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary123', 'X-Backend-Obj-Multiphase-Commit': 'yes', } with self._check_multiphase_put_commit_handling( test_doc=test_doc, headers=headers) as context: expect_headers = context['expect_headers'] self.assertEqual(expect_headers['X-Obj-Multiphase-Commit'], 'yes') # N.B. no X-Obj-Metadata-Footer header self.assertNotIn('X-Obj-Metadata-Footer', expect_headers) conn = context['conn'] # send commit confirmation to start phase2 commit_confirmation_doc = "\r\n".join(( "X-Document: put commit", "", "commit_confirmation", "--boundary123--", )) to_send = "%x\r\n%s\r\n0\r\n\r\n" % \ (len(commit_confirmation_doc), commit_confirmation_doc) conn.send(to_send) # verify success (2xx) to make end of phase2 resp = conn.getresponse() self.assertEqual(resp.status, 201) resp.read() resp.close() # verify successful object data and durable state file write put_timestamp = context['put_timestamp'] found_files = self.find_files() # .data file is there self.assertEqual(len(found_files['.data']), 1) obj_datafile = found_files['.data'][0] self.assertEqual("%s.data" % put_timestamp.internal, os.path.basename(obj_datafile)) # replicated objects do not have a .durable file self.assertEqual(found_files['.durable'], []) # And container update was called self.assertTrue(context['mock_container_update'].called) def test_multiphase_put_metadata_footer(self): with self._check_multiphase_put_commit_handling() as context: expect_headers = context['expect_headers'] self.assertEqual(expect_headers['X-Obj-Multiphase-Commit'], 'yes') self.assertEqual(expect_headers['X-Obj-Metadata-Footer'], 'yes') conn = context['conn'] # send commit confirmation to start phase2 commit_confirmation_doc = "\r\n".join(( "X-Document: put commit", "", "commit_confirmation", "--boundary123--", )) to_send = "%x\r\n%s\r\n0\r\n\r\n" % \ (len(commit_confirmation_doc), commit_confirmation_doc) conn.send(to_send) # verify success (2xx) to make end of phase2 resp = conn.getresponse() self.assertEqual(resp.status, 201) resp.read() resp.close() # verify successful object data and durable state file write put_timestamp = context['put_timestamp'] found_files = self.find_files() # .data file is there self.assertEqual(len(found_files['.data']), 1) obj_datafile = found_files['.data'][0] 
self.assertEqual("%s#2.data" % put_timestamp.internal, os.path.basename(obj_datafile)) # .durable file is there self.assertEqual(len(found_files['.durable']), 1) durable_file = found_files['.durable'][0] self.assertEqual("%s.durable" % put_timestamp.internal, os.path.basename(durable_file)) # And container update was called self.assertTrue(context['mock_container_update'].called) def test_multiphase_put_metadata_footer_disconnect(self): test_data = 'obj data' test_doc = "\r\n".join(( "--boundary123", "X-Document: object body", "", test_data, "--boundary123", )) # eventlet.wsgi won't return < network_chunk_size from a chunked read self.app.network_chunk_size = 16 with self._check_multiphase_put_commit_handling( test_doc=test_doc, finish_body=False) as context: conn = context['conn'] # make footer doc footer_meta = { "X-Object-Sysmeta-Ec-Frag-Index": "2", "Etag": md5(test_data).hexdigest(), } footer_json = json.dumps(footer_meta) footer_meta_cksum = md5(footer_json).hexdigest() # send most of the footer doc footer_doc = "\r\n".join(( "X-Document: object metadata", "Content-MD5: " + footer_meta_cksum, "", footer_json, )) # but don't send final boundary nor last chunk to_send = "%x\r\n%s\r\n" % \ (len(footer_doc), footer_doc) conn.send(to_send) # and then bail out conn.sock.fd._sock.close() # and make sure it demonstrates the client disconnect log_lines = self.logger.get_lines_for_level('info') self.assertEqual(len(log_lines), 1) self.assertIn(' 499 ', log_lines[0]) # no artifacts left on disk found_files = self.find_files() self.assertEqual(len(found_files['.data']), 0) self.assertEqual(len(found_files['.durable']), 0) # ... and no container update _container_update = context['mock_container_update'] self.assertFalse(_container_update.called) def test_multiphase_put_ec_fragment_in_headers_no_footers(self): test_data = 'obj data' test_doc = "\r\n".join(( "--boundary123", "X-Document: object body", "", test_data, "--boundary123", )) # phase1 - PUT request with multiphase commit conversation # no object metadata in footer put_timestamp = utils.Timestamp(time()).internal headers = { 'Content-Type': 'text/plain', 'X-Timestamp': put_timestamp, 'Transfer-Encoding': 'chunked', 'Expect': '100-continue', # normally the frag index gets sent in the MIME footer (which this # test doesn't have, see `test_multiphase_put_metadata_footer`), # but the proxy *could* send the frag index in the headers and # this test verifies that would work. 'X-Object-Sysmeta-Ec-Frag-Index': '2', 'X-Backend-Storage-Policy-Index': '1', 'X-Backend-Obj-Content-Length': len(test_data), 'X-Backend-Obj-Multipart-Mime-Boundary': 'boundary123', 'X-Backend-Obj-Multiphase-Commit': 'yes', } with self._check_multiphase_put_commit_handling( test_doc=test_doc, headers=headers) as context: expect_headers = context['expect_headers'] self.assertEqual(expect_headers['X-Obj-Multiphase-Commit'], 'yes') # N.B. 
no X-Obj-Metadata-Footer header self.assertNotIn('X-Obj-Metadata-Footer', expect_headers) conn = context['conn'] # send commit confirmation to start phase2 commit_confirmation_doc = "\r\n".join(( "X-Document: put commit", "", "commit_confirmation", "--boundary123--", )) to_send = "%x\r\n%s\r\n0\r\n\r\n" % \ (len(commit_confirmation_doc), commit_confirmation_doc) conn.send(to_send) # verify success (2xx) to make end of phase2 resp = conn.getresponse() self.assertEqual(resp.status, 201) resp.read() resp.close() # verify successful object data and durable state file write put_timestamp = context['put_timestamp'] found_files = self.find_files() # .data file is there self.assertEqual(len(found_files['.data']), 1) obj_datafile = found_files['.data'][0] self.assertEqual("%s#2.data" % put_timestamp.internal, os.path.basename(obj_datafile)) # .durable file is there self.assertEqual(len(found_files['.durable']), 1) durable_file = found_files['.durable'][0] self.assertEqual("%s.durable" % put_timestamp.internal, os.path.basename(durable_file)) # And container update was called self.assertTrue(context['mock_container_update'].called) def test_multiphase_put_bad_commit_message(self): with self._check_multiphase_put_commit_handling() as context: conn = context['conn'] # send commit confirmation to start phase2 commit_confirmation_doc = "\r\n".join(( "junkjunk", "--boundary123--", )) to_send = "%x\r\n%s\r\n0\r\n\r\n" % \ (len(commit_confirmation_doc), commit_confirmation_doc) conn.send(to_send) resp = conn.getresponse() self.assertEqual(resp.status, 500) resp.read() resp.close() put_timestamp = context['put_timestamp'] _container_update = context['mock_container_update'] # verify that durable file was NOT created found_files = self.find_files() # .data file is there self.assertEqual(len(found_files['.data']), 1) obj_datafile = found_files['.data'][0] self.assertEqual("%s#2.data" % put_timestamp.internal, os.path.basename(obj_datafile)) # but .durable isn't self.assertEqual(found_files['.durable'], []) # And no container update self.assertFalse(_container_update.called) def test_multiphase_put_drains_extra_commit_junk(self): with self._check_multiphase_put_commit_handling() as context: conn = context['conn'] # send commit confirmation to start phase2 commit_confirmation_doc = "\r\n".join(( "X-Document: put commit", "", "commit_confirmation", "--boundary123", "X-Document: we got cleverer", "", "stuff stuff meaningless stuuuuuuuuuuff", "--boundary123", "X-Document: we got even cleverer; can you believe it?", "Waneshaft: ambifacient lunar", "Casing: malleable logarithmic", "", "potato potato potato potato potato potato potato", "--boundary123--", )) to_send = "%x\r\n%s\r\n0\r\n\r\n" % \ (len(commit_confirmation_doc), commit_confirmation_doc) conn.send(to_send) # verify success (2xx) to make end of phase2 resp = conn.getresponse() self.assertEqual(resp.status, 201) resp.read() # make another request to validate the HTTP protocol state conn.putrequest('GET', '/sda1/0/a/c/o') conn.putheader('X-Backend-Storage-Policy-Index', '1') conn.endheaders() resp = conn.getresponse() self.assertEqual(resp.status, 200) resp.read() resp.close() # verify successful object data and durable state file write put_timestamp = context['put_timestamp'] found_files = self.find_files() # .data file is there self.assertEqual(len(found_files['.data']), 1) obj_datafile = found_files['.data'][0] self.assertEqual("%s#2.data" % put_timestamp.internal, os.path.basename(obj_datafile)) # .durable file is there 
self.assertEqual(len(found_files['.durable']), 1) durable_file = found_files['.durable'][0] self.assertEqual("%s.durable" % put_timestamp.internal, os.path.basename(durable_file)) # And container update was called self.assertTrue(context['mock_container_update'].called) def test_multiphase_put_drains_extra_commit_junk_disconnect(self): commit_confirmation_doc = "\r\n".join(( "X-Document: put commit", "", "commit_confirmation", "--boundary123", "X-Document: we got cleverer", "", "stuff stuff meaningless stuuuuuuuuuuff", "--boundary123", "X-Document: we got even cleverer; can you believe it?", "Waneshaft: ambifacient lunar", "Casing: malleable logarithmic", "", "potato potato potato potato potato potato potato", )) # eventlet.wsgi won't return < network_chunk_size from a chunked read self.app.network_chunk_size = 16 with self._check_multiphase_put_commit_handling() as context: conn = context['conn'] # send commit confirmation and some other stuff # but don't send final boundary or last chunk to_send = "%x\r\n%s\r\n" % \ (len(commit_confirmation_doc), commit_confirmation_doc) conn.send(to_send) # and then bail out conn.sock.fd._sock.close() # and make sure it demonstrates the client disconnect log_lines = self.logger.get_lines_for_level('info') self.assertEqual(len(log_lines), 1) self.assertIn(' 499 ', log_lines[0]) # verify successful object data and durable state file write put_timestamp = context['put_timestamp'] found_files = self.find_files() # .data file is there self.assertEqual(len(found_files['.data']), 1) obj_datafile = found_files['.data'][0] self.assertEqual("%s#2.data" % put_timestamp.internal, os.path.basename(obj_datafile)) # ... and .durable is there self.assertEqual(len(found_files['.durable']), 1) durable_file = found_files['.durable'][0] self.assertEqual("%s.durable" % put_timestamp.internal, os.path.basename(durable_file)) # but no container update self.assertFalse(context['mock_container_update'].called) @patch_policies class TestZeroCopy(unittest.TestCase): """Test the object server's zero-copy functionality""" def _system_can_zero_copy(self): if not splice.available: return False try: utils.get_md5_socket() except IOError: return False return True def setUp(self): if not self._system_can_zero_copy(): raise SkipTest("zero-copy support is missing") self.testdir = mkdtemp(suffix="obj_server_zero_copy") mkdirs(os.path.join(self.testdir, 'sda1', 'tmp')) conf = {'devices': self.testdir, 'mount_check': 'false', 'splice': 'yes', 'disk_chunk_size': '4096'} self.object_controller = object_server.ObjectController( conf, logger=debug_logger()) self.df_mgr = diskfile.DiskFileManager( conf, self.object_controller.logger) listener = listen(('localhost', 0)) port = listener.getsockname()[1] self.wsgi_greenlet = spawn( wsgi.server, listener, self.object_controller, NullLogger()) self.http_conn = httplib.HTTPConnection('127.0.0.1', port) self.http_conn.connect() def tearDown(self): """Tear down for testing swift.object.server.ObjectController""" self.wsgi_greenlet.kill() rmtree(self.testdir) def test_GET(self): url_path = '/sda1/2100/a/c/o' self.http_conn.request('PUT', url_path, 'obj contents', {'X-Timestamp': '127082564.24709'}) response = self.http_conn.getresponse() self.assertEqual(response.status, 201) response.read() self.http_conn.request('GET', url_path) response = self.http_conn.getresponse() self.assertEqual(response.status, 200) contents = response.read() self.assertEqual(contents, 'obj contents') def test_GET_big(self): # Test with a large-ish object to make sure we handle 
full socket # buffers correctly. obj_contents = 'A' * 4 * 1024 * 1024 # 4 MiB url_path = '/sda1/2100/a/c/o' self.http_conn.request('PUT', url_path, obj_contents, {'X-Timestamp': '1402600322.52126'}) response = self.http_conn.getresponse() self.assertEqual(response.status, 201) response.read() self.http_conn.request('GET', url_path) response = self.http_conn.getresponse() self.assertEqual(response.status, 200) contents = response.read() self.assertEqual(contents, obj_contents) def test_quarantine(self): obj_hash = hash_path('a', 'c', 'o') url_path = '/sda1/2100/a/c/o' ts = '1402601849.47475' self.http_conn.request('PUT', url_path, 'obj contents', {'X-Timestamp': ts}) response = self.http_conn.getresponse() self.assertEqual(response.status, 201) response.read() # go goof up the file on disk fname = os.path.join(self.testdir, 'sda1', 'objects', '2100', obj_hash[-3:], obj_hash, ts + '.data') with open(fname, 'rb+') as fh: fh.write('XYZ') self.http_conn.request('GET', url_path) response = self.http_conn.getresponse() self.assertEqual(response.status, 200) contents = response.read() self.assertEqual(contents, 'XYZ contents') self.http_conn.request('GET', url_path) response = self.http_conn.getresponse() # it was quarantined by the previous request self.assertEqual(response.status, 404) response.read() def test_quarantine_on_well_formed_zero_byte_file(self): # Make sure we work around an oddity in Linux's hash sockets url_path = '/sda1/2100/a/c/o' ts = '1402700497.71333' self.http_conn.request( 'PUT', url_path, '', {'X-Timestamp': ts, 'Content-Length': '0'}) response = self.http_conn.getresponse() self.assertEqual(response.status, 201) response.read() self.http_conn.request('GET', url_path) response = self.http_conn.getresponse() self.assertEqual(response.status, 200) contents = response.read() self.assertEqual(contents, '') self.http_conn.request('GET', url_path) response = self.http_conn.getresponse() self.assertEqual(response.status, 200) # still there contents = response.read() self.assertEqual(contents, '') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/obj/test_diskfile.py0000664000567000056710000102045713024044354021741 0ustar jenkinsjenkins00000000000000# -*- coding:utf-8 -*- # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
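# Orientation for the on-disk layout exercised by the tests below
# (illustrative paths and assumed device/partition values, not referenced
# by any test case):
#   replicated:   <devices>/sda1/objects/<part>/<suffix>/<hash>/<ts>.data
#   EC (index 1): <devices>/sda1/objects-1/<part>/<suffix>/<hash>/<ts>#<frag>.data
#                 plus a '<ts>.durable' marker written at commit time.
# Tombstones are '<ts>.ts' files and metadata updates are '<ts>.meta'.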
"""Tests for swift.obj.diskfile""" import six.moves.cPickle as pickle import os import errno import itertools from unittest.util import safe_repr import mock import unittest import email import tempfile import uuid import xattr import re from collections import defaultdict from random import shuffle, randint from shutil import rmtree from time import time from tempfile import mkdtemp from hashlib import md5 from contextlib import closing, contextmanager from gzip import GzipFile import pyeclib.ec_iface from eventlet import hubs, timeout, tpool from test.unit import (FakeLogger, mock as unit_mock, temptree, patch_policies, debug_logger, EMPTY_ETAG, make_timestamp_iter, DEFAULT_TEST_EC_TYPE, encode_frag_archive_bodies) from nose import SkipTest from swift.obj import diskfile from swift.common import utils from swift.common.utils import hash_path, mkdirs, Timestamp, encode_timestamps from swift.common import ring from swift.common.splice import splice from swift.common.exceptions import DiskFileNotExist, DiskFileQuarantined, \ DiskFileDeviceUnavailable, DiskFileDeleted, DiskFileNotOpen, \ DiskFileError, ReplicationLockTimeout, DiskFileCollision, \ DiskFileExpired, SwiftException, DiskFileNoSpace, DiskFileXattrNotSupported from swift.common.storage_policy import ( POLICIES, get_policy_string, StoragePolicy, ECStoragePolicy, BaseStoragePolicy, REPL_POLICY, EC_POLICY) from test.unit.obj.common import write_diskfile test_policies = [ StoragePolicy(0, name='zero', is_default=True), ECStoragePolicy(1, name='one', is_default=False, ec_type=DEFAULT_TEST_EC_TYPE, ec_ndata=10, ec_nparity=4), ] def find_paths_with_matching_suffixes(needed_matches=2, needed_suffixes=3): paths = defaultdict(list) while True: path = ('a', 'c', uuid.uuid4().hex) hash_ = hash_path(*path) suffix = hash_[-3:] paths[suffix].append(path) if len(paths) < needed_suffixes: # in the extreamly unlikely situation where you land the matches # you need before you get the total suffixes you need - it's # simpler to just ignore this suffix for now continue if len(paths[suffix]) >= needed_matches: break return paths, suffix def _create_test_ring(path, policy): ring_name = get_policy_string('object', policy) testgz = os.path.join(path, ring_name + '.ring.gz') intended_replica2part2dev_id = [ [0, 1, 2, 3, 4, 5, 6], [1, 2, 3, 0, 5, 6, 4], [2, 3, 0, 1, 6, 4, 5]] intended_devs = [ {'id': 0, 'device': 'sda1', 'zone': 0, 'ip': '127.0.0.0', 'port': 6000}, {'id': 1, 'device': 'sda1', 'zone': 1, 'ip': '127.0.0.1', 'port': 6000}, {'id': 2, 'device': 'sda1', 'zone': 2, 'ip': '127.0.0.2', 'port': 6000}, {'id': 3, 'device': 'sda1', 'zone': 4, 'ip': '127.0.0.3', 'port': 6000}, {'id': 4, 'device': 'sda1', 'zone': 5, 'ip': '127.0.0.4', 'port': 6000}, {'id': 5, 'device': 'sda1', 'zone': 6, 'ip': 'fe80::202:b3ff:fe1e:8329', 'port': 6000}, {'id': 6, 'device': 'sda1', 'zone': 7, 'ip': '2001:0db8:85a3:0000:0000:8a2e:0370:7334', 'port': 6000}] intended_part_shift = 30 intended_reload_time = 15 with closing(GzipFile(testgz, 'wb')) as f: pickle.dump( ring.RingData(intended_replica2part2dev_id, intended_devs, intended_part_shift), f) return ring.Ring(path, ring_name=ring_name, reload_time=intended_reload_time) @patch_policies class TestDiskFileModuleMethods(unittest.TestCase): def setUp(self): utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = '' # Setup a test ring per policy (stolen from common/test_ring.py) self.testdir = tempfile.mkdtemp() self.devices = os.path.join(self.testdir, 'node') rmtree(self.testdir, ignore_errors=1) os.mkdir(self.testdir) 
os.mkdir(self.devices) self.existing_device = 'sda1' os.mkdir(os.path.join(self.devices, self.existing_device)) self.objects = os.path.join(self.devices, self.existing_device, 'objects') os.mkdir(self.objects) self.parts = {} for part in ['0', '1', '2', '3']: self.parts[part] = os.path.join(self.objects, part) os.mkdir(os.path.join(self.objects, part)) self.ring = _create_test_ring(self.testdir, POLICIES.legacy) self.conf = dict( swift_dir=self.testdir, devices=self.devices, mount_check='false', timeout='300', stats_interval='1') self.df_mgr = diskfile.DiskFileManager(self.conf, FakeLogger()) def tearDown(self): rmtree(self.testdir, ignore_errors=1) def _create_diskfile(self, policy): return self.df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) def test_extract_policy(self): # good path names pn = 'objects/0/606/1984527ed7ef6247c78606/1401379842.14643.data' self.assertEqual(diskfile.extract_policy(pn), POLICIES[0]) pn = 'objects-1/0/606/198452b6ef6247c78606/1401379842.14643.data' self.assertEqual(diskfile.extract_policy(pn), POLICIES[1]) # leading slash pn = '/objects/0/606/1984527ed7ef6247c78606/1401379842.14643.data' self.assertEqual(diskfile.extract_policy(pn), POLICIES[0]) pn = '/objects-1/0/606/198452b6ef6247c78606/1401379842.14643.data' self.assertEqual(diskfile.extract_policy(pn), POLICIES[1]) # full paths good_path = '/srv/node/sda1/objects-1/1/abc/def/1234.data' self.assertEqual(diskfile.extract_policy(good_path), POLICIES[1]) good_path = '/srv/node/sda1/objects/1/abc/def/1234.data' self.assertEqual(diskfile.extract_policy(good_path), POLICIES[0]) # short paths path = '/srv/node/sda1/objects/1/1234.data' self.assertEqual(diskfile.extract_policy(path), POLICIES[0]) path = '/srv/node/sda1/objects-1/1/1234.data' self.assertEqual(diskfile.extract_policy(path), POLICIES[1]) # well formatted but, unknown policy index pn = 'objects-2/0/606/198427efcff042c78606/1401379842.14643.data' self.assertEqual(diskfile.extract_policy(pn), None) # malformed path self.assertEqual(diskfile.extract_policy(''), None) bad_path = '/srv/node/sda1/objects-t/1/abc/def/1234.data' self.assertEqual(diskfile.extract_policy(bad_path), None) pn = 'XXXX/0/606/1984527ed42b6ef6247c78606/1401379842.14643.data' self.assertEqual(diskfile.extract_policy(pn), None) bad_path = '/srv/node/sda1/foo-1/1/abc/def/1234.data' self.assertEqual(diskfile.extract_policy(bad_path), None) bad_path = '/srv/node/sda1/obj1/1/abc/def/1234.data' self.assertEqual(diskfile.extract_policy(bad_path), None) def test_quarantine_renamer(self): for policy in POLICIES: # we use this for convenience, not really about a diskfile layout df = self._create_diskfile(policy=policy) mkdirs(df._datadir) exp_dir = os.path.join(self.devices, 'quarantined', diskfile.get_data_dir(policy), os.path.basename(df._datadir)) qbit = os.path.join(df._datadir, 'qbit') with open(qbit, 'w') as f: f.write('abc') to_dir = diskfile.quarantine_renamer(self.devices, qbit) self.assertEqual(to_dir, exp_dir) self.assertRaises(OSError, diskfile.quarantine_renamer, self.devices, qbit) def test_get_data_dir(self): self.assertEqual(diskfile.get_data_dir(POLICIES[0]), diskfile.DATADIR_BASE) self.assertEqual(diskfile.get_data_dir(POLICIES[1]), diskfile.DATADIR_BASE + "-1") self.assertRaises(ValueError, diskfile.get_data_dir, 'junk') self.assertRaises(ValueError, diskfile.get_data_dir, 99) def test_get_async_dir(self): self.assertEqual(diskfile.get_async_dir(POLICIES[0]), diskfile.ASYNCDIR_BASE) self.assertEqual(diskfile.get_async_dir(POLICIES[1]), 
diskfile.ASYNCDIR_BASE + "-1") self.assertRaises(ValueError, diskfile.get_async_dir, 'junk') self.assertRaises(ValueError, diskfile.get_async_dir, 99) def test_get_tmp_dir(self): self.assertEqual(diskfile.get_tmp_dir(POLICIES[0]), diskfile.TMP_BASE) self.assertEqual(diskfile.get_tmp_dir(POLICIES[1]), diskfile.TMP_BASE + "-1") self.assertRaises(ValueError, diskfile.get_tmp_dir, 'junk') self.assertRaises(ValueError, diskfile.get_tmp_dir, 99) def test_pickle_async_update_tmp_dir(self): for policy in POLICIES: if int(policy) == 0: tmp_part = 'tmp' else: tmp_part = 'tmp-%d' % policy tmp_path = os.path.join( self.devices, self.existing_device, tmp_part) self.assertFalse(os.path.isdir(tmp_path)) pickle_args = (self.existing_device, 'a', 'c', 'o', 'data', 0.0, policy) # async updates don't create their tmpdir on their own self.assertRaises(OSError, self.df_mgr.pickle_async_update, *pickle_args) os.makedirs(tmp_path) # now create a async update self.df_mgr.pickle_async_update(*pickle_args) # check tempdir self.assertTrue(os.path.isdir(tmp_path)) @patch_policies class TestObjectAuditLocationGenerator(unittest.TestCase): def _make_file(self, path): try: os.makedirs(os.path.dirname(path)) except OSError as err: if err.errno != errno.EEXIST: raise with open(path, 'w'): pass def test_audit_location_class(self): al = diskfile.AuditLocation('abc', '123', '_-_', policy=POLICIES.legacy) self.assertEqual(str(al), 'abc') def test_finding_of_hashdirs(self): with temptree([]) as tmpdir: # the good os.makedirs(os.path.join(tmpdir, "sdp", "objects", "1519", "aca", "5c1fdc1ffb12e5eaf84edc30d8b67aca")) os.makedirs(os.path.join(tmpdir, "sdp", "objects", "1519", "aca", "fdfd184d39080020bc8b487f8a7beaca")) os.makedirs(os.path.join(tmpdir, "sdp", "objects", "1519", "df2", "b0fe7af831cc7b1af5bf486b1c841df2")) os.makedirs(os.path.join(tmpdir, "sdp", "objects", "9720", "ca5", "4a943bc72c2e647c4675923d58cf4ca5")) os.makedirs(os.path.join(tmpdir, "sdq", "objects", "3071", "8eb", "fcd938702024c25fef6c32fef05298eb")) os.makedirs(os.path.join(tmpdir, "sdp", "objects-1", "9970", "ca5", "4a943bc72c2e647c4675923d58cf4ca5")) os.makedirs(os.path.join(tmpdir, "sdq", "objects-2", "9971", "8eb", "fcd938702024c25fef6c32fef05298eb")) os.makedirs(os.path.join(tmpdir, "sdq", "objects-99", "9972", "8eb", "fcd938702024c25fef6c32fef05298eb")) # the bad os.makedirs(os.path.join(tmpdir, "sdq", "objects-", "1135", "6c3", "fcd938702024c25fef6c32fef05298eb")) os.makedirs(os.path.join(tmpdir, "sdq", "objects-fud", "foo")) os.makedirs(os.path.join(tmpdir, "sdq", "objects-+1", "foo")) self._make_file(os.path.join(tmpdir, "sdp", "objects", "1519", "fed")) self._make_file(os.path.join(tmpdir, "sdq", "objects", "9876")) # the empty os.makedirs(os.path.join(tmpdir, "sdr")) os.makedirs(os.path.join(tmpdir, "sds", "objects")) os.makedirs(os.path.join(tmpdir, "sdt", "objects", "9601")) os.makedirs(os.path.join(tmpdir, "sdu", "objects", "6499", "f80")) # the irrelevant os.makedirs(os.path.join(tmpdir, "sdv", "accounts", "77", "421", "4b8c86149a6d532f4af018578fd9f421")) os.makedirs(os.path.join(tmpdir, "sdw", "containers", "28", "51e", "4f9eee668b66c6f0250bfa3c7ab9e51e")) logger = debug_logger() locations = [(loc.path, loc.device, loc.partition, loc.policy) for loc in diskfile.object_audit_location_generator( devices=tmpdir, mount_check=False, logger=logger)] locations.sort() # expect some warnings about those bad dirs warnings = logger.get_lines_for_level('warning') self.assertEqual(set(warnings), set([ ("Directory 'objects-' does not map to a valid policy 
" "(Unknown policy, for index '')"), ("Directory 'objects-2' does not map to a valid policy " "(Unknown policy, for index '2')"), ("Directory 'objects-99' does not map to a valid policy " "(Unknown policy, for index '99')"), ("Directory 'objects-fud' does not map to a valid policy " "(Unknown policy, for index 'fud')"), ("Directory 'objects-+1' does not map to a valid policy " "(Unknown policy, for index '+1')"), ])) expected = \ [(os.path.join(tmpdir, "sdp", "objects-1", "9970", "ca5", "4a943bc72c2e647c4675923d58cf4ca5"), "sdp", "9970", POLICIES[1]), (os.path.join(tmpdir, "sdp", "objects", "1519", "aca", "5c1fdc1ffb12e5eaf84edc30d8b67aca"), "sdp", "1519", POLICIES[0]), (os.path.join(tmpdir, "sdp", "objects", "1519", "aca", "fdfd184d39080020bc8b487f8a7beaca"), "sdp", "1519", POLICIES[0]), (os.path.join(tmpdir, "sdp", "objects", "1519", "df2", "b0fe7af831cc7b1af5bf486b1c841df2"), "sdp", "1519", POLICIES[0]), (os.path.join(tmpdir, "sdp", "objects", "9720", "ca5", "4a943bc72c2e647c4675923d58cf4ca5"), "sdp", "9720", POLICIES[0]), (os.path.join(tmpdir, "sdq", "objects", "3071", "8eb", "fcd938702024c25fef6c32fef05298eb"), "sdq", "3071", POLICIES[0]), ] self.assertEqual(locations, expected) # Reset status file for next run diskfile.clear_auditor_status(tmpdir) # now without a logger locations = [(loc.path, loc.device, loc.partition, loc.policy) for loc in diskfile.object_audit_location_generator( devices=tmpdir, mount_check=False)] locations.sort() self.assertEqual(locations, expected) def test_skipping_unmounted_devices(self): def mock_ismount(path): return path.endswith('sdp') with mock.patch('swift.obj.diskfile.ismount', mock_ismount): with temptree([]) as tmpdir: os.makedirs(os.path.join(tmpdir, "sdp", "objects", "2607", "df3", "ec2871fe724411f91787462f97d30df3")) os.makedirs(os.path.join(tmpdir, "sdq", "objects", "9785", "a10", "4993d582f41be9771505a8d4cb237a10")) locations = [ (loc.path, loc.device, loc.partition, loc.policy) for loc in diskfile.object_audit_location_generator( devices=tmpdir, mount_check=True)] locations.sort() self.assertEqual( locations, [(os.path.join(tmpdir, "sdp", "objects", "2607", "df3", "ec2871fe724411f91787462f97d30df3"), "sdp", "2607", POLICIES[0])]) # Do it again, this time with a logger. ml = mock.MagicMock() locations = [ (loc.path, loc.device, loc.partition, loc.policy) for loc in diskfile.object_audit_location_generator( devices=tmpdir, mount_check=True, logger=ml)] ml.debug.assert_called_once_with( 'Skipping %s as it is not mounted', 'sdq') def test_only_catch_expected_errors(self): # Crazy exceptions should still escape object_audit_location_generator # so that errors get logged and a human can see what's going wrong; # only normal FS corruption should be skipped over silently. 
def list_locations(dirname): return [(loc.path, loc.device, loc.partition, loc.policy) for loc in diskfile.object_audit_location_generator( devices=dirname, mount_check=False)] real_listdir = os.listdir def splode_if_endswith(suffix): def sploder(path): if path.endswith(suffix): raise OSError(errno.EACCES, "don't try to ad-lib") else: return real_listdir(path) return sploder with temptree([]) as tmpdir: os.makedirs(os.path.join(tmpdir, "sdf", "objects", "2607", "b54", "fe450ec990a88cc4b252b181bab04b54")) with mock.patch('os.listdir', splode_if_endswith("sdf/objects")): self.assertRaises(OSError, list_locations, tmpdir) with mock.patch('os.listdir', splode_if_endswith("2607")): self.assertRaises(OSError, list_locations, tmpdir) with mock.patch('os.listdir', splode_if_endswith("b54")): self.assertRaises(OSError, list_locations, tmpdir) def test_auditor_status(self): with temptree([]) as tmpdir: os.makedirs(os.path.join(tmpdir, "sdf", "objects", "1", "a", "b")) os.makedirs(os.path.join(tmpdir, "sdf", "objects", "2", "a", "b")) # Auditor starts, there are two partitions to check gen = diskfile.object_audit_location_generator(tmpdir, False) gen.next() gen.next() # Auditor stopped for some reason without raising StopIterator in # the generator and restarts There is now only one remaining # partition to check gen = diskfile.object_audit_location_generator(tmpdir, False) gen.next() # There are no more remaining partitions self.assertRaises(StopIteration, gen.next) # There are no partitions to check if the auditor restarts another # time and the status files have not been cleared gen = diskfile.object_audit_location_generator(tmpdir, False) self.assertRaises(StopIteration, gen.next) # Reset status file diskfile.clear_auditor_status(tmpdir) # If the auditor restarts another time, we expect to # check two partitions again, because the remaining # partitions were empty and a new listdir was executed gen = diskfile.object_audit_location_generator(tmpdir, False) gen.next() gen.next() class TestDiskFileRouter(unittest.TestCase): def test_register(self): with mock.patch.dict( diskfile.DiskFileRouter.policy_type_to_manager_cls, {}): @diskfile.DiskFileRouter.register('test-policy') class TestDiskFileManager(diskfile.DiskFileManager): pass @BaseStoragePolicy.register('test-policy') class TestStoragePolicy(BaseStoragePolicy): pass with patch_policies([TestStoragePolicy(0, 'test')]): router = diskfile.DiskFileRouter({}, debug_logger('test')) manager = router[POLICIES.default] self.assertTrue(isinstance(manager, TestDiskFileManager)) class BaseDiskFileTestMixin(object): """ Bag of helpers that are useful in the per-policy DiskFile test classes. """ def _manager_mock(self, manager_attribute_name, df=None): mgr_cls = df._manager.__class__ if df else self.mgr_cls return '.'.join([ mgr_cls.__module__, mgr_cls.__name__, manager_attribute_name]) def _assertDictContainsSubset(self, subset, dictionary, msg=None): """Checks whether dictionary is a superset of subset.""" # This is almost identical to the method in python3.4 version of # unitest.case.TestCase.assertDictContainsSubset, reproduced here to # avoid the deprecation warning in the original when using python3. 
missing = [] mismatched = [] for key, value in subset.items(): if key not in dictionary: missing.append(key) elif value != dictionary[key]: mismatched.append('%s, expected: %s, actual: %s' % (safe_repr(key), safe_repr(value), safe_repr(dictionary[key]))) if not (missing or mismatched): return standardMsg = '' if missing: standardMsg = 'Missing: %s' % ','.join(safe_repr(m) for m in missing) if mismatched: if standardMsg: standardMsg += '; ' standardMsg += 'Mismatched values: %s' % ','.join(mismatched) self.fail(self._formatMessage(msg, standardMsg)) class DiskFileManagerMixin(BaseDiskFileTestMixin): """ Abstract test method mixin for concrete test cases - this class won't get picked up by test runners because it doesn't subclass unittest.TestCase and doesn't have [Tt]est in the name. """ # set mgr_cls on subclasses mgr_cls = None def setUp(self): self.tmpdir = mkdtemp() self.testdir = os.path.join( self.tmpdir, 'tmp_test_obj_server_DiskFile') self.existing_device1 = 'sda1' self.existing_device2 = 'sda2' for policy in POLICIES: mkdirs(os.path.join(self.testdir, self.existing_device1, diskfile.get_tmp_dir(policy))) mkdirs(os.path.join(self.testdir, self.existing_device2, diskfile.get_tmp_dir(policy))) self._orig_tpool_exc = tpool.execute tpool.execute = lambda f, *args, **kwargs: f(*args, **kwargs) self.conf = dict(devices=self.testdir, mount_check='false', keep_cache_size=2 * 1024) self.logger = debug_logger('test-' + self.__class__.__name__) self.df_mgr = self.mgr_cls(self.conf, self.logger) self.df_router = diskfile.DiskFileRouter(self.conf, self.logger) def tearDown(self): rmtree(self.tmpdir, ignore_errors=1) def _get_diskfile(self, policy, frag_index=None): df_mgr = self.df_router[policy] return df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o', policy=policy, frag_index=frag_index) def _test_get_ondisk_files(self, scenarios, policy, frag_index=None): class_under_test = self._get_diskfile(policy, frag_index=frag_index) for test in scenarios: # test => [('filename.ext', '.ext'|False, ...), ...] 
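# Illustrative reading of one such scenario (assumed timestamps, not one
# of the scenarios passed in by callers):
#   [('0000000009.00000.meta', '.meta'), ('0000000007.00000.data', '.data')]
# builds expected = {'meta_file': <datadir>/0000000009.00000.meta,
#                    'data_file': <datadir>/0000000007.00000.data}
# and the same result must be produced for any ordering of the files.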
expected = { ext[1:] + '_file': os.path.join( class_under_test._datadir, filename) for (filename, ext) in [v[:2] for v in test] if ext in ('.data', '.meta', '.ts')} # list(zip(...)) for py3 compatibility (zip is lazy there) files = list(list(zip(*test))[0]) for _order in ('ordered', 'shuffled', 'shuffled'): class_under_test = self._get_diskfile(policy, frag_index) try: actual = class_under_test._get_ondisk_files(files) self._assertDictContainsSubset( expected, actual, 'Expected %s from %s but got %s' % (expected, files, actual)) except AssertionError as e: self.fail('%s with files %s' % (str(e), files)) shuffle(files) def _test_hash_cleanup_listdir_files(self, scenarios, policy, reclaim_age=None): # check that expected files are left in hashdir after cleanup for test in scenarios: class_under_test = self.df_router[policy] # list(zip(...)) for py3 compatibility (zip is lazy there) files = list(list(zip(*test))[0]) hashdir = os.path.join(self.testdir, str(uuid.uuid4())) os.mkdir(hashdir) for fname in files: open(os.path.join(hashdir, fname), 'w') expected_after_cleanup = set([f[0] for f in test if (f[2] if len(f) > 2 else f[1])]) if reclaim_age: class_under_test.cleanup_ondisk_files( hashdir, reclaim_age=reclaim_age)['files'] else: with mock.patch('swift.obj.diskfile.time') as mock_time: # don't reclaim anything mock_time.time.return_value = 0.0 class_under_test.cleanup_ondisk_files(hashdir)['files'] after_cleanup = set(os.listdir(hashdir)) errmsg = "expected %r, got %r for test %r" % ( sorted(expected_after_cleanup), sorted(after_cleanup), test ) self.assertEqual(expected_after_cleanup, after_cleanup, errmsg) def _test_yield_hashes_cleanup(self, scenarios, policy): # opportunistic test to check that yield_hashes cleans up dir using # same scenarios as passed to _test_hash_cleanup_listdir_files for test in scenarios: class_under_test = self.df_router[policy] # list(zip(...)) for py3 compatibility (zip is lazy there) files = list(list(zip(*test))[0]) dev_path = os.path.join(self.testdir, str(uuid.uuid4())) hashdir = os.path.join( dev_path, diskfile.get_data_dir(policy), '0', 'abc', '9373a92d072897b136b3fc06595b4abc') os.makedirs(hashdir) for fname in files: open(os.path.join(hashdir, fname), 'w') expected_after_cleanup = set([f[0] for f in test if f[1] or len(f) > 2 and f[2]]) with mock.patch('swift.obj.diskfile.time') as mock_time: # don't reclaim anything mock_time.time.return_value = 0.0 mocked = 'swift.obj.diskfile.BaseDiskFileManager.get_dev_path' with mock.patch(mocked) as mock_path: mock_path.return_value = dev_path for _ in class_under_test.yield_hashes( 'ignored', '0', policy, suffixes=['abc']): # return values are tested in test_yield_hashes_* pass after_cleanup = set(os.listdir(hashdir)) errmsg = "expected %r, got %r for test %r" % ( sorted(expected_after_cleanup), sorted(after_cleanup), test ) self.assertEqual(expected_after_cleanup, after_cleanup, errmsg) def test_get_ondisk_files_with_empty_dir(self): files = [] expected = dict( data_file=None, meta_file=None, ctype_file=None, ts_file=None) for policy in POLICIES: for frag_index in (0, None, '14'): # check manager df_mgr = self.df_router[policy] datadir = os.path.join('/srv/node/sdb1/', diskfile.get_data_dir(policy)) actual = df_mgr.get_ondisk_files(files, datadir) self._assertDictContainsSubset(expected, actual) # check diskfile under the hood df = self._get_diskfile(policy, frag_index=frag_index) actual = df._get_ondisk_files(files) self._assertDictContainsSubset(expected, actual) # check diskfile open 
self.assertRaises(DiskFileNotExist, df.open) def test_get_ondisk_files_with_unexpected_file(self): unexpected_files = ['junk', 'junk.data', '.junk'] timestamp = next(make_timestamp_iter()) tomb_file = timestamp.internal + '.ts' for policy in POLICIES: for unexpected in unexpected_files: files = [unexpected, tomb_file] df_mgr = self.df_router[policy] df_mgr.logger = FakeLogger() datadir = os.path.join('/srv/node/sdb1/', diskfile.get_data_dir(policy)) results = df_mgr.get_ondisk_files(files, datadir) expected = {'ts_file': os.path.join(datadir, tomb_file)} self._assertDictContainsSubset(expected, results) log_lines = df_mgr.logger.get_lines_for_level('warning') self.assertTrue( log_lines[0].startswith( 'Unexpected file %s' % os.path.join(datadir, unexpected))) def test_construct_dev_path(self): res_path = self.df_mgr.construct_dev_path('abc') self.assertEqual(os.path.join(self.df_mgr.devices, 'abc'), res_path) def test_pickle_async_update(self): self.df_mgr.logger.increment = mock.MagicMock() ts = Timestamp(10000.0).internal with mock.patch('swift.obj.diskfile.write_pickle') as wp: self.df_mgr.pickle_async_update(self.existing_device1, 'a', 'c', 'o', dict(a=1, b=2), ts, POLICIES[0]) dp = self.df_mgr.construct_dev_path(self.existing_device1) ohash = diskfile.hash_path('a', 'c', 'o') wp.assert_called_with({'a': 1, 'b': 2}, os.path.join( dp, diskfile.get_async_dir(POLICIES[0]), ohash[-3:], ohash + '-' + ts), os.path.join(dp, 'tmp')) self.df_mgr.logger.increment.assert_called_with('async_pendings') def test_object_audit_location_generator(self): locations = list(self.df_mgr.object_audit_location_generator()) self.assertEqual(locations, []) def test_replication_lock_on(self): # Double check settings self.df_mgr.replication_one_per_device = True self.df_mgr.replication_lock_timeout = 0.1 dev_path = os.path.join(self.testdir, self.existing_device1) with self.df_mgr.replication_lock(self.existing_device1): lock_exc = None exc = None try: with self.df_mgr.replication_lock(self.existing_device1): raise Exception( '%r was not replication locked!' % dev_path) except ReplicationLockTimeout as err: lock_exc = err except Exception as err: exc = err self.assertTrue(lock_exc is not None) self.assertTrue(exc is None) def test_replication_lock_off(self): # Double check settings self.df_mgr.replication_one_per_device = False self.df_mgr.replication_lock_timeout = 0.1 dev_path = os.path.join(self.testdir, self.existing_device1) with self.df_mgr.replication_lock(dev_path): lock_exc = None exc = None try: with self.df_mgr.replication_lock(dev_path): raise Exception( '%r was not replication locked!' 
% dev_path) except ReplicationLockTimeout as err: lock_exc = err except Exception as err: exc = err self.assertTrue(lock_exc is None) self.assertTrue(exc is not None) def test_replication_lock_another_device_fine(self): # Double check settings self.df_mgr.replication_one_per_device = True self.df_mgr.replication_lock_timeout = 0.1 with self.df_mgr.replication_lock(self.existing_device1): lock_exc = None try: with self.df_mgr.replication_lock(self.existing_device2): pass except ReplicationLockTimeout as err: lock_exc = err self.assertTrue(lock_exc is None) def test_missing_splice_warning(self): logger = FakeLogger() with mock.patch('swift.common.splice.splice._c_splice', None): self.conf['splice'] = 'yes' mgr = diskfile.DiskFileManager(self.conf, logger) warnings = logger.get_lines_for_level('warning') self.assertTrue(len(warnings) > 0) self.assertTrue('splice()' in warnings[-1]) self.assertFalse(mgr.use_splice) def test_get_diskfile_from_hash_dev_path_fail(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value=None) with mock.patch(self._manager_mock('diskfile_cls')), \ mock.patch(self._manager_mock( 'cleanup_ondisk_files')) as hclistdir, \ mock.patch('swift.obj.diskfile.read_metadata') as readmeta: hclistdir.return_value = {'files': ['1381679759.90941.data']} readmeta.return_value = {'name': '/a/c/o'} self.assertRaises( DiskFileDeviceUnavailable, self.df_mgr.get_diskfile_from_hash, 'dev', '9', '9a7175077c01a23ade5956b8a2bba900', POLICIES[0]) def test_get_diskfile_from_hash_not_dir(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value='/srv/dev/') with mock.patch(self._manager_mock('diskfile_cls')), \ mock.patch(self._manager_mock( 'cleanup_ondisk_files')) as hclistdir, \ mock.patch('swift.obj.diskfile.read_metadata') as readmeta, \ mock.patch(self._manager_mock( 'quarantine_renamer')) as quarantine_renamer: osexc = OSError() osexc.errno = errno.ENOTDIR hclistdir.side_effect = osexc readmeta.return_value = {'name': '/a/c/o'} self.assertRaises( DiskFileNotExist, self.df_mgr.get_diskfile_from_hash, 'dev', '9', '9a7175077c01a23ade5956b8a2bba900', POLICIES[0]) quarantine_renamer.assert_called_once_with( '/srv/dev/', '/srv/dev/objects/9/900/9a7175077c01a23ade5956b8a2bba900') def test_get_diskfile_from_hash_no_dir(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value='/srv/dev/') with mock.patch(self._manager_mock('diskfile_cls')), \ mock.patch(self._manager_mock( 'cleanup_ondisk_files')) as hclistdir, \ mock.patch('swift.obj.diskfile.read_metadata') as readmeta: osexc = OSError() osexc.errno = errno.ENOENT hclistdir.side_effect = osexc readmeta.return_value = {'name': '/a/c/o'} self.assertRaises( DiskFileNotExist, self.df_mgr.get_diskfile_from_hash, 'dev', '9', '9a7175077c01a23ade5956b8a2bba900', POLICIES[0]) def test_get_diskfile_from_hash_other_oserror(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value='/srv/dev/') with mock.patch(self._manager_mock('diskfile_cls')), \ mock.patch(self._manager_mock( 'cleanup_ondisk_files')) as hclistdir, \ mock.patch('swift.obj.diskfile.read_metadata') as readmeta: osexc = OSError() hclistdir.side_effect = osexc readmeta.return_value = {'name': '/a/c/o'} self.assertRaises( OSError, self.df_mgr.get_diskfile_from_hash, 'dev', '9', '9a7175077c01a23ade5956b8a2bba900', POLICIES[0]) def test_get_diskfile_from_hash_no_actual_files(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value='/srv/dev/') with mock.patch(self._manager_mock('diskfile_cls')), \ mock.patch(self._manager_mock( 'cleanup_ondisk_files')) as hclistdir, \ 
mock.patch('swift.obj.diskfile.read_metadata') as readmeta: hclistdir.return_value = {'files': []} readmeta.return_value = {'name': '/a/c/o'} self.assertRaises( DiskFileNotExist, self.df_mgr.get_diskfile_from_hash, 'dev', '9', '9a7175077c01a23ade5956b8a2bba900', POLICIES[0]) def test_get_diskfile_from_hash_read_metadata_problem(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value='/srv/dev/') with mock.patch(self._manager_mock('diskfile_cls')), \ mock.patch(self._manager_mock( 'cleanup_ondisk_files')) as hclistdir, \ mock.patch('swift.obj.diskfile.read_metadata') as readmeta: hclistdir.return_value = {'files': ['1381679759.90941.data']} readmeta.side_effect = EOFError() self.assertRaises( DiskFileNotExist, self.df_mgr.get_diskfile_from_hash, 'dev', '9', '9a7175077c01a23ade5956b8a2bba900', POLICIES[0]) def test_get_diskfile_from_hash_no_meta_name(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value='/srv/dev/') with mock.patch(self._manager_mock('diskfile_cls')), \ mock.patch(self._manager_mock( 'cleanup_ondisk_files')) as hclistdir, \ mock.patch('swift.obj.diskfile.read_metadata') as readmeta: hclistdir.return_value = {'files': ['1381679759.90941.data']} readmeta.return_value = {} try: self.df_mgr.get_diskfile_from_hash( 'dev', '9', '9a7175077c01a23ade5956b8a2bba900', POLICIES[0]) except DiskFileNotExist as err: exc = err self.assertEqual(str(exc), '') def test_get_diskfile_from_hash_bad_meta_name(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value='/srv/dev/') with mock.patch(self._manager_mock('diskfile_cls')), \ mock.patch(self._manager_mock( 'cleanup_ondisk_files')) as hclistdir, \ mock.patch('swift.obj.diskfile.read_metadata') as readmeta: hclistdir.return_value = {'files': ['1381679759.90941.data']} readmeta.return_value = {'name': 'bad'} try: self.df_mgr.get_diskfile_from_hash( 'dev', '9', '9a7175077c01a23ade5956b8a2bba900', POLICIES[0]) except DiskFileNotExist as err: exc = err self.assertEqual(str(exc), '') def test_get_diskfile_from_hash(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value='/srv/dev/') with mock.patch(self._manager_mock('diskfile_cls')) as dfclass, \ mock.patch(self._manager_mock( 'cleanup_ondisk_files')) as hclistdir, \ mock.patch('swift.obj.diskfile.read_metadata') as readmeta: hclistdir.return_value = {'files': ['1381679759.90941.data']} readmeta.return_value = {'name': '/a/c/o'} self.df_mgr.get_diskfile_from_hash( 'dev', '9', '9a7175077c01a23ade5956b8a2bba900', POLICIES[0]) dfclass.assert_called_once_with( self.df_mgr, '/srv/dev/', self.df_mgr.threadpools['dev'], '9', 'a', 'c', 'o', policy=POLICIES[0]) hclistdir.assert_called_once_with( '/srv/dev/objects/9/900/9a7175077c01a23ade5956b8a2bba900', 604800) readmeta.assert_called_once_with( '/srv/dev/objects/9/900/9a7175077c01a23ade5956b8a2bba900/' '1381679759.90941.data') def test_listdir_enoent(self): oserror = OSError() oserror.errno = errno.ENOENT self.df_mgr.logger.error = mock.MagicMock() with mock.patch('os.listdir', side_effect=oserror): self.assertEqual(self.df_mgr._listdir('path'), []) self.assertEqual(self.df_mgr.logger.error.mock_calls, []) def test_listdir_other_oserror(self): oserror = OSError() self.df_mgr.logger.error = mock.MagicMock() with mock.patch('os.listdir', side_effect=oserror): self.assertEqual(self.df_mgr._listdir('path'), []) self.df_mgr.logger.error.assert_called_once_with( 'ERROR: Skipping %r due to error with listdir attempt: %s', 'path', oserror) def test_listdir(self): self.df_mgr.logger.error = mock.MagicMock() with mock.patch('os.listdir', 
return_value=['abc', 'def']): self.assertEqual(self.df_mgr._listdir('path'), ['abc', 'def']) self.assertEqual(self.df_mgr.logger.error.mock_calls, []) def test_yield_suffixes_dev_path_fail(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value=None) exc = None try: list(self.df_mgr.yield_suffixes(self.existing_device1, '9', 0)) except DiskFileDeviceUnavailable as err: exc = err self.assertEqual(str(exc), '') def test_yield_suffixes(self): self.df_mgr._listdir = mock.MagicMock(return_value=[ 'abc', 'def', 'ghi', 'abcd', '012']) dev = self.existing_device1 self.assertEqual( list(self.df_mgr.yield_suffixes(dev, '9', POLICIES[0])), [(self.testdir + '/' + dev + '/objects/9/abc', 'abc'), (self.testdir + '/' + dev + '/objects/9/def', 'def'), (self.testdir + '/' + dev + '/objects/9/012', '012')]) def test_yield_hashes_dev_path_fail(self): self.df_mgr.get_dev_path = mock.MagicMock(return_value=None) exc = None try: list(self.df_mgr.yield_hashes(self.existing_device1, '9', POLICIES[0])) except DiskFileDeviceUnavailable as err: exc = err self.assertEqual(str(exc), '') def test_yield_hashes_empty(self): def _listdir(path): return [] with mock.patch('os.listdir', _listdir): self.assertEqual(list(self.df_mgr.yield_hashes( self.existing_device1, '9', POLICIES[0])), []) def test_yield_hashes_empty_suffixes(self): def _listdir(path): return [] with mock.patch('os.listdir', _listdir): self.assertEqual( list(self.df_mgr.yield_hashes(self.existing_device1, '9', POLICIES[0], suffixes=['456'])), []) def _check_yield_hashes(self, policy, suffix_map, expected, **kwargs): device = self.existing_device1 part = '9' part_path = os.path.join( self.testdir, device, diskfile.get_data_dir(policy), part) def _listdir(path): if path == part_path: return suffix_map.keys() for suff, hash_map in suffix_map.items(): if path == os.path.join(part_path, suff): return hash_map.keys() for hash_, files in hash_map.items(): if path == os.path.join(part_path, suff, hash_): return files self.fail('Unexpected listdir of %r' % path) expected_items = [ (os.path.join(part_path, hash_[-3:], hash_), hash_, timestamps) for hash_, timestamps in expected.items()] with mock.patch('os.listdir', _listdir), \ mock.patch('os.unlink'): df_mgr = self.df_router[policy] hash_items = list(df_mgr.yield_hashes( device, part, policy, **kwargs)) expected = sorted(expected_items) actual = sorted(hash_items) # default list diff easiest to debug self.assertEqual(actual, expected) def test_yield_hashes_tombstones(self): ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) ts1 = next(ts_iter) ts2 = next(ts_iter) ts3 = next(ts_iter) suffix_map = { '27e': { '1111111111111111111111111111127e': [ ts1.internal + '.ts'], '2222222222222222222222222222227e': [ ts2.internal + '.ts'], }, 'd41': { 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaad41': [] }, 'd98': {}, '00b': { '3333333333333333333333333333300b': [ ts1.internal + '.ts', ts2.internal + '.ts', ts3.internal + '.ts', ] }, '204': { 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbb204': [ ts3.internal + '.ts', ] } } expected = { '1111111111111111111111111111127e': {'ts_data': ts1.internal}, '2222222222222222222222222222227e': {'ts_data': ts2.internal}, '3333333333333333333333333333300b': {'ts_data': ts3.internal}, } for policy in POLICIES: self._check_yield_hashes(policy, suffix_map, expected, suffixes=['27e', '00b']) @patch_policies class TestDiskFileManager(DiskFileManagerMixin, unittest.TestCase): mgr_cls = diskfile.DiskFileManager def test_get_ondisk_files_with_repl_policy(self): # Each scenario specifies a list of (filename, 
extension) tuples. If # extension is set then that filename should be returned by the method # under test for that extension type. scenarios = [[('0000000007.00000.data', '.data')], [('0000000007.00000.ts', '.ts')], # older tombstone is ignored [('0000000007.00000.ts', '.ts'), ('0000000006.00000.ts', False)], # older data is ignored [('0000000007.00000.data', '.data'), ('0000000006.00000.data', False), ('0000000004.00000.ts', False)], # newest meta trumps older meta [('0000000009.00000.meta', '.meta'), ('0000000008.00000.meta', False), ('0000000007.00000.data', '.data'), ('0000000004.00000.ts', False)], # meta older than data is ignored [('0000000007.00000.data', '.data'), ('0000000006.00000.meta', False), ('0000000004.00000.ts', False)], # meta without data is ignored [('0000000007.00000.meta', False, True), ('0000000006.00000.ts', '.ts'), ('0000000004.00000.data', False)], # tombstone trumps meta and data at same timestamp [('0000000006.00000.meta', False), ('0000000006.00000.ts', '.ts'), ('0000000006.00000.data', False)], ] self._test_get_ondisk_files(scenarios, POLICIES[0], None) self._test_hash_cleanup_listdir_files(scenarios, POLICIES[0]) self._test_yield_hashes_cleanup(scenarios, POLICIES[0]) def test_get_ondisk_files_with_stray_meta(self): # get_ondisk_files ignores a stray .meta file class_under_test = self._get_diskfile(POLICIES[0]) files = ['0000000007.00000.meta'] with mock.patch('swift.obj.diskfile.os.listdir', lambda *args: files): self.assertRaises(DiskFileNotExist, class_under_test.open) def test_verify_ondisk_files(self): # ._verify_ondisk_files should only return False if get_ondisk_files # has produced a bad set of files due to a bug, so to test it we need # to probe it directly. mgr = self.df_router[POLICIES.default] ok_scenarios = ( {'ts_file': None, 'data_file': None, 'meta_file': None}, {'ts_file': None, 'data_file': 'a_file', 'meta_file': None}, {'ts_file': None, 'data_file': 'a_file', 'meta_file': 'a_file'}, {'ts_file': 'a_file', 'data_file': None, 'meta_file': None}, ) for scenario in ok_scenarios: self.assertTrue(mgr._verify_ondisk_files(scenario), 'Unexpected result for scenario %s' % scenario) # construct every possible invalid combination of results vals = (None, 'a_file') for ts_file, data_file, meta_file in [ (a, b, c) for a in vals for b in vals for c in vals]: scenario = { 'ts_file': ts_file, 'data_file': data_file, 'meta_file': meta_file} if scenario in ok_scenarios: continue self.assertFalse(mgr._verify_ondisk_files(scenario), 'Unexpected result for scenario %s' % scenario) def test_parse_on_disk_filename(self): mgr = self.df_router[POLICIES.default] for ts in (Timestamp('1234567890.00001'), Timestamp('1234567890.00001', offset=17)): for ext in ('.meta', '.data', '.ts'): fname = '%s%s' % (ts.internal, ext) info = mgr.parse_on_disk_filename(fname) self.assertEqual(ts, info['timestamp']) self.assertEqual(ext, info['ext']) def test_parse_on_disk_filename_errors(self): mgr = self.df_router[POLICIES.default] with self.assertRaises(DiskFileError) as cm: mgr.parse_on_disk_filename('junk') self.assertEqual("Invalid Timestamp value in filename 'junk'", str(cm.exception)) def test_hash_cleanup_listdir_reclaim(self): # Each scenario specifies a list of (filename, extension, [survives]) # tuples. If extension is set or 'survives' is True, the filename # should still be in the dir after cleanup. 
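# Illustrative reading (assumed values, not one of the scenarios below):
# with reclaim_age=1000, ('%s.ts' % older, False, False) means a
# tombstone more than 1000 seconds old is neither returned nor kept
# after cleanup, while ('%s.data' % older, '.data', True) means a .data
# file of the same age is still returned and survives, since replicated
# .data files are never reclaimed by this method.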
much_older = Timestamp(time() - 2000).internal older = Timestamp(time() - 1001).internal newer = Timestamp(time() - 900).internal scenarios = [[('%s.ts' % older, False, False)], # fresh tombstone is preserved [('%s.ts' % newer, '.ts', True)], # .data files are not reclaimed, ever [('%s.data' % older, '.data', True)], [('%s.data' % newer, '.data', True)], # ... and we could have a mixture of fresh and stale .data [('%s.data' % newer, '.data', True), ('%s.data' % older, False, False)], # tombstone reclaimed despite newer data [('%s.data' % newer, '.data', True), ('%s.data' % older, False, False), ('%s.ts' % much_older, '.ts', False)], # tombstone reclaimed despite junk file [('junk', False, True), ('%s.ts' % much_older, '.ts', False)], ] self._test_hash_cleanup_listdir_files(scenarios, POLICIES.default, reclaim_age=1000) def test_yield_hashes(self): old_ts = '1383180000.12345' fresh_ts = Timestamp(time() - 10).internal fresher_ts = Timestamp(time() - 1).internal suffix_map = { 'abc': { '9373a92d072897b136b3fc06595b4abc': [ fresh_ts + '.ts'], }, '456': { '9373a92d072897b136b3fc06595b0456': [ old_ts + '.data'], '9373a92d072897b136b3fc06595b7456': [ fresh_ts + '.ts', fresher_ts + '.data'], }, 'def': {}, } expected = { '9373a92d072897b136b3fc06595b4abc': {'ts_data': fresh_ts}, '9373a92d072897b136b3fc06595b0456': {'ts_data': old_ts}, '9373a92d072897b136b3fc06595b7456': {'ts_data': fresher_ts}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected) def test_yield_hashes_yields_meta_timestamp(self): ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) ts1 = next(ts_iter) ts2 = next(ts_iter) ts3 = next(ts_iter) suffix_map = { 'abc': { # only tombstone is yield/sync -able '9333a92d072897b136b3fc06595b4abc': [ ts1.internal + '.ts', ts2.internal + '.meta'], }, '456': { # only latest metadata timestamp '9444a92d072897b136b3fc06595b0456': [ ts1.internal + '.data', ts2.internal + '.meta', ts3.internal + '.meta'], # exemplary datadir with .meta '9555a92d072897b136b3fc06595b7456': [ ts1.internal + '.data', ts2.internal + '.meta'], }, } expected = { '9333a92d072897b136b3fc06595b4abc': {'ts_data': ts1}, '9444a92d072897b136b3fc06595b0456': {'ts_data': ts1, 'ts_meta': ts3}, '9555a92d072897b136b3fc06595b7456': {'ts_data': ts1, 'ts_meta': ts2}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected) def test_yield_hashes_yields_content_type_timestamp(self): hash_ = '9373a92d072897b136b3fc06595b4abc' ts_iter = make_timestamp_iter() ts0, ts1, ts2, ts3, ts4 = (next(ts_iter) for _ in range(5)) data_file = ts1.internal + '.data' # no content-type delta meta_file = ts2.internal + '.meta' suffix_map = {'abc': {hash_: [data_file, meta_file]}} expected = {hash_: {'ts_data': ts1, 'ts_meta': ts2}} self._check_yield_hashes(POLICIES.default, suffix_map, expected) # non-zero content-type delta delta = ts3.raw - ts2.raw meta_file = '%s-%x.meta' % (ts3.internal, delta) suffix_map = {'abc': {hash_: [data_file, meta_file]}} expected = {hash_: {'ts_data': ts1, 'ts_meta': ts3, 'ts_ctype': ts2}} self._check_yield_hashes(POLICIES.default, suffix_map, expected) # zero content-type delta meta_file = '%s+0.meta' % ts3.internal suffix_map = {'abc': {hash_: [data_file, meta_file]}} expected = {hash_: {'ts_data': ts1, 'ts_meta': ts3, 'ts_ctype': ts3}} self._check_yield_hashes(POLICIES.default, suffix_map, expected) # content-type in second meta file delta = ts3.raw - ts2.raw meta_file1 = '%s-%x.meta' % (ts3.internal, delta) meta_file2 = '%s.meta' % ts4.internal suffix_map = {'abc': {hash_: [data_file, 
meta_file1, meta_file2]}} expected = {hash_: {'ts_data': ts1, 'ts_meta': ts4, 'ts_ctype': ts2}} self._check_yield_hashes(POLICIES.default, suffix_map, expected) # obsolete content-type in second meta file, older than data file delta = ts3.raw - ts0.raw meta_file1 = '%s-%x.meta' % (ts3.internal, delta) meta_file2 = '%s.meta' % ts4.internal suffix_map = {'abc': {hash_: [data_file, meta_file1, meta_file2]}} expected = {hash_: {'ts_data': ts1, 'ts_meta': ts4}} self._check_yield_hashes(POLICIES.default, suffix_map, expected) # obsolete content-type in second meta file, same time as data file delta = ts3.raw - ts1.raw meta_file1 = '%s-%x.meta' % (ts3.internal, delta) meta_file2 = '%s.meta' % ts4.internal suffix_map = {'abc': {hash_: [data_file, meta_file1, meta_file2]}} expected = {hash_: {'ts_data': ts1, 'ts_meta': ts4}} self._check_yield_hashes(POLICIES.default, suffix_map, expected) def test_yield_hashes_suffix_filter(self): # test again with limited suffixes old_ts = '1383180000.12345' fresh_ts = Timestamp(time() - 10).internal fresher_ts = Timestamp(time() - 1).internal suffix_map = { 'abc': { '9373a92d072897b136b3fc06595b4abc': [ fresh_ts + '.ts'], }, '456': { '9373a92d072897b136b3fc06595b0456': [ old_ts + '.data'], '9373a92d072897b136b3fc06595b7456': [ fresh_ts + '.ts', fresher_ts + '.data'], }, 'def': {}, } expected = { '9373a92d072897b136b3fc06595b0456': {'ts_data': old_ts}, '9373a92d072897b136b3fc06595b7456': {'ts_data': fresher_ts}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, suffixes=['456']) def test_yield_hashes_fails_with_bad_ondisk_filesets(self): ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) ts1 = next(ts_iter) suffix_map = { '456': { '9373a92d072897b136b3fc06595b0456': [ ts1.internal + '.data'], '9373a92d072897b136b3fc06595ba456': [ ts1.internal + '.meta'], }, } expected = { '9373a92d072897b136b3fc06595b0456': {'ts_data': ts1}, } try: self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=2) self.fail('Expected AssertionError') except AssertionError: pass @patch_policies(with_ec_default=True) class TestECDiskFileManager(DiskFileManagerMixin, unittest.TestCase): mgr_cls = diskfile.ECDiskFileManager def test_get_ondisk_files_with_ec_policy(self): # Each scenario specifies a list of (filename, extension, [survives]) # tuples. If extension is set then that filename should be returned by # the method under test for that extension type. If the optional # 'survives' is True, the filename should still be in the dir after # cleanup. 
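# Illustrative EC fileset (assumed timestamp, not one of the scenarios
# below): '0000000007.00000#1.data' is the fragment archive with frag
# index 1 and is only eligible to be returned once the matching
# '0000000007.00000.durable' marker exists at the same timestamp; a
# fragment archive with no .durable is left on disk but not returned.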
scenarios = [[('0000000007.00000.ts', '.ts')], [('0000000007.00000.ts', '.ts'), ('0000000006.00000.ts', False)], # highest frag index is chosen by default [('0000000007.00000.durable', '.durable'), ('0000000007.00000#1.data', '.data'), ('0000000007.00000#0.data', False, True)], # data with no durable is ignored [('0000000007.00000#0.data', False, True)], # data newer than tombstone with no durable is ignored [('0000000007.00000#0.data', False, True), ('0000000006.00000.ts', '.ts', True)], # data newer than durable is ignored [('0000000008.00000#1.data', False, True), ('0000000007.00000.durable', '.durable'), ('0000000007.00000#1.data', '.data'), ('0000000007.00000#0.data', False, True)], # data newer than durable ignored, even if its only data [('0000000008.00000#1.data', False, True), ('0000000007.00000.durable', False, False)], # data older than durable is ignored [('0000000007.00000.durable', '.durable'), ('0000000007.00000#1.data', '.data'), ('0000000006.00000#1.data', False), ('0000000004.00000.ts', False)], # data older than durable ignored, even if its only data [('0000000007.00000.durable', False, False), ('0000000006.00000#1.data', False), ('0000000004.00000.ts', False)], # newer meta trumps older meta [('0000000009.00000.meta', '.meta'), ('0000000008.00000.meta', False), ('0000000007.00000.durable', '.durable'), ('0000000007.00000#14.data', '.data'), ('0000000004.00000.ts', False)], # older meta is ignored [('0000000007.00000.durable', '.durable'), ('0000000007.00000#14.data', '.data'), ('0000000006.00000.meta', False), ('0000000004.00000.ts', False)], # tombstone trumps meta, data, durable at older timestamp [('0000000006.00000.ts', '.ts'), ('0000000005.00000.meta', False), ('0000000004.00000.durable', False), ('0000000004.00000#0.data', False)], # tombstone trumps meta, data, durable at same timestamp [('0000000006.00000.meta', False), ('0000000006.00000.ts', '.ts'), ('0000000006.00000.durable', False), ('0000000006.00000#0.data', False)], # missing durable invalidates data [('0000000006.00000.meta', False, True), ('0000000006.00000#0.data', False, True)] ] self._test_get_ondisk_files(scenarios, POLICIES.default, None) self._test_hash_cleanup_listdir_files(scenarios, POLICIES.default) self._test_yield_hashes_cleanup(scenarios, POLICIES.default) def test_get_ondisk_files_with_ec_policy_and_frag_index(self): # Each scenario specifies a list of (filename, extension) tuples. If # extension is set then that filename should be returned by the method # under test for that extension type. 
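# Illustrative sketch (assumes frag_index=1, matching the call further
# below): when a specific frag_index is requested, only '<ts>#1.data'
# can fill the 'data_file' slot; sibling fragments such as '<ts>#0.data'
# or '<ts>#2.data' remain on disk but are not returned.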
scenarios = [[('0000000007.00000#2.data', False, True), ('0000000007.00000#1.data', '.data'), ('0000000007.00000#0.data', False, True), ('0000000007.00000.durable', '.durable')], # specific frag newer than durable is ignored [('0000000007.00000#2.data', False, True), ('0000000007.00000#1.data', False, True), ('0000000007.00000#0.data', False, True), ('0000000006.00000.durable', '.durable')], # specific frag older than durable is ignored [('0000000007.00000#2.data', False), ('0000000007.00000#1.data', False), ('0000000007.00000#0.data', False), ('0000000008.00000.durable', '.durable')], # specific frag older than newest durable is ignored # even if is also has a durable [('0000000007.00000#2.data', False), ('0000000007.00000#1.data', False), ('0000000007.00000.durable', False), ('0000000008.00000#0.data', False), ('0000000008.00000.durable', '.durable')], # meta included when frag index is specified [('0000000009.00000.meta', '.meta'), ('0000000007.00000#2.data', False, True), ('0000000007.00000#1.data', '.data'), ('0000000007.00000#0.data', False, True), ('0000000007.00000.durable', '.durable')], # specific frag older than tombstone is ignored [('0000000009.00000.ts', '.ts'), ('0000000007.00000#2.data', False), ('0000000007.00000#1.data', False), ('0000000007.00000#0.data', False), ('0000000007.00000.durable', False)], # no data file returned if specific frag index missing [('0000000007.00000#2.data', False, True), ('0000000007.00000#14.data', False, True), ('0000000007.00000#0.data', False, True), ('0000000007.00000.durable', '.durable')], # meta ignored if specific frag index missing [('0000000008.00000.meta', False, True), ('0000000007.00000#14.data', False, True), ('0000000007.00000#0.data', False, True), ('0000000007.00000.durable', '.durable')], # meta ignored if no data files # Note: this is anomalous, because we are specifying a # frag_index, get_ondisk_files will tolerate .meta with # no .data [('0000000088.00000.meta', False, True), ('0000000077.00000.durable', '.durable')] ] self._test_get_ondisk_files(scenarios, POLICIES.default, frag_index=1) # note: not calling self._test_hash_cleanup_listdir_files(scenarios, 0) # here due to the anomalous scenario as commented above def test_hash_cleanup_listdir_reclaim(self): # Each scenario specifies a list of (filename, extension, [survives]) # tuples. If extension is set or 'survives' is True, the filename # should still be in the dir after cleanup. much_older = Timestamp(time() - 2000).internal older = Timestamp(time() - 1001).internal newer = Timestamp(time() - 900).internal scenarios = [[('%s.ts' % older, False, False)], # fresh tombstone is preserved [('%s.ts' % newer, '.ts', True)], # isolated .durable is cleaned up immediately [('%s.durable' % newer, False, False)], # ...even when other older files are in dir [('%s.durable' % older, False, False), ('%s.ts' % much_older, False, False)], # isolated .data files are cleaned up when stale [('%s#2.data' % older, False, False), ('%s#4.data' % older, False, False)], # ...even when there is an older durable fileset [('%s#2.data' % older, False, False), ('%s#4.data' % older, False, False), ('%s#2.data' % much_older, '.data', True), ('%s#4.data' % much_older, False, True), ('%s.durable' % much_older, '.durable', True)], # ... but preserved if still fresh [('%s#2.data' % newer, False, True), ('%s#4.data' % newer, False, True)], # ... 
and we could have a mixture of fresh and stale .data [('%s#2.data' % newer, False, True), ('%s#4.data' % older, False, False)], # tombstone reclaimed despite newer non-durable data [('%s#2.data' % newer, False, True), ('%s#4.data' % older, False, False), ('%s.ts' % much_older, '.ts', False)], # tombstone reclaimed despite much older durable [('%s.ts' % older, '.ts', False), ('%s.durable' % much_older, False, False)], # tombstone reclaimed despite junk file [('junk', False, True), ('%s.ts' % much_older, '.ts', False)], ] self._test_hash_cleanup_listdir_files(scenarios, POLICIES.default, reclaim_age=1000) def test_get_ondisk_files_with_stray_meta(self): # get_ondisk_files ignores a stray .meta file class_under_test = self._get_diskfile(POLICIES.default) @contextmanager def create_files(df, files): os.makedirs(df._datadir) for fname in files: fpath = os.path.join(df._datadir, fname) with open(fpath, 'w') as f: diskfile.write_metadata(f, {'name': df._name, 'Content-Length': 0}) yield rmtree(df._datadir, ignore_errors=True) # sanity files = [ '0000000006.00000#1.data', '0000000006.00000.durable', ] with create_files(class_under_test, files): class_under_test.open() scenarios = [['0000000007.00000.meta'], ['0000000007.00000.meta', '0000000006.00000.durable'], ['0000000007.00000.meta', '0000000006.00000#1.data'], ['0000000007.00000.meta', '0000000006.00000.durable', '0000000005.00000#1.data'] ] for files in scenarios: with create_files(class_under_test, files): try: class_under_test.open() except DiskFileNotExist: continue self.fail('expected DiskFileNotExist opening %s with %r' % ( class_under_test.__class__.__name__, files)) def test_verify_ondisk_files(self): # _verify_ondisk_files should only return False if get_ondisk_files # has produced a bad set of files due to a bug, so to test it we need # to probe it directly. 
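        # Added commentary (not from the original source): the dict probed
        # here is the intermediate result that get_ondisk_files assembles,
        # keyed as in the ok_scenarios below, roughly:
        #   {'ts_file': None,                  # tombstone path, if any
        #    'data_file': 'a_file',            # chosen .data path, if any
        #    'meta_file': None,                # newest .meta path, if any
        #    'durable_frag_set': ['a_file']}   # frags covered by a .durable
        # A consistent set is either empty, a lone tombstone, or a durable
        # data file optionally accompanied by a meta file.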
mgr = self.df_router[POLICIES.default] ok_scenarios = ( {'ts_file': None, 'data_file': None, 'meta_file': None, 'durable_frag_set': None}, {'ts_file': None, 'data_file': 'a_file', 'meta_file': None, 'durable_frag_set': ['a_file']}, {'ts_file': None, 'data_file': 'a_file', 'meta_file': 'a_file', 'durable_frag_set': ['a_file']}, {'ts_file': 'a_file', 'data_file': None, 'meta_file': None, 'durable_frag_set': None}, ) for scenario in ok_scenarios: self.assertTrue(mgr._verify_ondisk_files(scenario), 'Unexpected result for scenario %s' % scenario) # construct every possible invalid combination of results vals = (None, 'a_file') for ts_file, data_file, meta_file, durable_frag in [ (a, b, c, d) for a in vals for b in vals for c in vals for d in vals]: scenario = { 'ts_file': ts_file, 'data_file': data_file, 'meta_file': meta_file, 'durable_frag_set': [durable_frag] if durable_frag else None} if scenario in ok_scenarios: continue self.assertFalse(mgr._verify_ondisk_files(scenario), 'Unexpected result for scenario %s' % scenario) def test_parse_on_disk_filename(self): mgr = self.df_router[POLICIES.default] for ts in (Timestamp('1234567890.00001'), Timestamp('1234567890.00001', offset=17)): for frag in (0, 2, 14): fname = '%s#%s.data' % (ts.internal, frag) info = mgr.parse_on_disk_filename(fname) self.assertEqual(ts, info['timestamp']) self.assertEqual('.data', info['ext']) self.assertEqual(frag, info['frag_index']) self.assertEqual(mgr.make_on_disk_filename(**info), fname) for ext in ('.meta', '.durable', '.ts'): fname = '%s%s' % (ts.internal, ext) info = mgr.parse_on_disk_filename(fname) self.assertEqual(ts, info['timestamp']) self.assertEqual(ext, info['ext']) self.assertIsNone(info['frag_index']) self.assertEqual(mgr.make_on_disk_filename(**info), fname) def test_parse_on_disk_filename_errors(self): mgr = self.df_router[POLICIES.default] for ts in (Timestamp('1234567890.00001'), Timestamp('1234567890.00001', offset=17)): fname = '%s.data' % ts.internal with self.assertRaises(DiskFileError) as cm: mgr.parse_on_disk_filename(fname) self.assertTrue(str(cm.exception).startswith("Bad fragment index")) expected = { '': 'bad', 'foo': 'bad', '1.314': 'bad', 1.314: 'bad', -2: 'negative', '-2': 'negative', None: 'bad', 'None': 'bad', } for frag, msg in expected.items(): fname = '%s#%s.data' % (ts.internal, frag) with self.assertRaises(DiskFileError) as cm: mgr.parse_on_disk_filename(fname) self.assertIn(msg, str(cm.exception).lower()) with self.assertRaises(DiskFileError) as cm: mgr.parse_on_disk_filename('junk') self.assertEqual("Invalid Timestamp value in filename 'junk'", str(cm.exception)) def test_make_on_disk_filename(self): mgr = self.df_router[POLICIES.default] for ts in (Timestamp('1234567890.00001'), Timestamp('1234567890.00001', offset=17)): for frag in (0, '0', 2, '2', 14, '14'): expected = '%s#%s.data' % (ts.internal, frag) actual = mgr.make_on_disk_filename( ts, '.data', frag_index=frag) self.assertEqual(expected, actual) parsed = mgr.parse_on_disk_filename(actual) self.assertEqual(parsed, { 'timestamp': ts, 'frag_index': int(frag), 'ext': '.data', 'ctype_timestamp': None }) # these functions are inverse self.assertEqual( mgr.make_on_disk_filename(**parsed), expected) for ext in ('.meta', '.durable', '.ts'): expected = '%s%s' % (ts.internal, ext) # frag index should not be required actual = mgr.make_on_disk_filename(ts, ext) self.assertEqual(expected, actual) # frag index should be ignored actual = mgr.make_on_disk_filename( ts, ext, frag_index=frag) self.assertEqual(expected, actual) parsed 
= mgr.parse_on_disk_filename(actual) self.assertEqual(parsed, { 'timestamp': ts, 'frag_index': None, 'ext': ext, 'ctype_timestamp': None }) # these functions are inverse self.assertEqual( mgr.make_on_disk_filename(**parsed), expected) actual = mgr.make_on_disk_filename(ts) self.assertEqual(ts, actual) def test_make_on_disk_filename_with_bad_frag_index(self): mgr = self.df_router[POLICIES.default] ts = Timestamp('1234567890.00001') try: # .data requires a frag_index kwarg mgr.make_on_disk_filename(ts, '.data') self.fail('Expected DiskFileError for missing frag_index') except DiskFileError: pass for frag in (None, 'foo', '1.314', 1.314, -2, '-2'): try: mgr.make_on_disk_filename(ts, '.data', frag_index=frag) self.fail('Expected DiskFileError for frag_index %s' % frag) except DiskFileError: pass for ext in ('.meta', '.durable', '.ts'): expected = '%s%s' % (ts.internal, ext) # bad frag index should be ignored actual = mgr.make_on_disk_filename(ts, ext, frag_index=frag) self.assertEqual(expected, actual) def test_make_on_disk_filename_for_meta_with_content_type(self): # verify .meta filename encodes content-type timestamp mgr = self.df_router[POLICIES.default] time_ = 1234567890.00001 for delta in (0.0, .00001, 1.11111): t_meta = Timestamp(time_) t_type = Timestamp(time_ - delta) sign = '-' if delta else '+' expected = '%s%s%x.meta' % (t_meta.short, sign, 100000 * delta) actual = mgr.make_on_disk_filename( t_meta, '.meta', ctype_timestamp=t_type) self.assertEqual(expected, actual) parsed = mgr.parse_on_disk_filename(actual) self.assertEqual(parsed, { 'timestamp': t_meta, 'frag_index': None, 'ext': '.meta', 'ctype_timestamp': t_type }) # these functions are inverse self.assertEqual( mgr.make_on_disk_filename(**parsed), expected) def test_yield_hashes(self): old_ts = '1383180000.12345' fresh_ts = Timestamp(time() - 10).internal fresher_ts = Timestamp(time() - 1).internal suffix_map = { 'abc': { '9373a92d072897b136b3fc06595b4abc': [ fresh_ts + '.ts'], }, '456': { '9373a92d072897b136b3fc06595b0456': [ old_ts + '#2.data', old_ts + '.durable'], '9373a92d072897b136b3fc06595b7456': [ fresh_ts + '.ts', fresher_ts + '#2.data', fresher_ts + '.durable'], }, 'def': {}, } expected = { '9373a92d072897b136b3fc06595b4abc': {'ts_data': fresh_ts}, '9373a92d072897b136b3fc06595b0456': {'ts_data': old_ts}, '9373a92d072897b136b3fc06595b7456': {'ts_data': fresher_ts}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=2) def test_yield_hashes_yields_meta_timestamp(self): ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) ts1 = next(ts_iter) ts2 = next(ts_iter) ts3 = next(ts_iter) suffix_map = { 'abc': { '9373a92d072897b136b3fc06595b4abc': [ ts1.internal + '.ts', ts2.internal + '.meta'], }, '456': { '9373a92d072897b136b3fc06595b0456': [ ts1.internal + '#2.data', ts1.internal + '.durable', ts2.internal + '.meta', ts3.internal + '.meta'], '9373a92d072897b136b3fc06595b7456': [ ts1.internal + '#2.data', ts1.internal + '.durable', ts2.internal + '.meta'], }, } expected = { '9373a92d072897b136b3fc06595b4abc': {'ts_data': ts1}, '9373a92d072897b136b3fc06595b0456': {'ts_data': ts1, 'ts_meta': ts3}, '9373a92d072897b136b3fc06595b7456': {'ts_data': ts1, 'ts_meta': ts2}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected) # but meta timestamp is *not* returned if specified frag index # is not found expected = { '9373a92d072897b136b3fc06595b4abc': {'ts_data': ts1}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=3) def 
test_yield_hashes_suffix_filter(self): # test again with limited suffixes old_ts = '1383180000.12345' fresh_ts = Timestamp(time() - 10).internal fresher_ts = Timestamp(time() - 1).internal suffix_map = { 'abc': { '9373a92d072897b136b3fc06595b4abc': [ fresh_ts + '.ts'], }, '456': { '9373a92d072897b136b3fc06595b0456': [ old_ts + '#2.data', old_ts + '.durable'], '9373a92d072897b136b3fc06595b7456': [ fresh_ts + '.ts', fresher_ts + '#2.data', fresher_ts + '.durable'], }, 'def': {}, } expected = { '9373a92d072897b136b3fc06595b0456': {'ts_data': old_ts}, '9373a92d072897b136b3fc06595b7456': {'ts_data': fresher_ts}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, suffixes=['456'], frag_index=2) def test_yield_hashes_skips_missing_durable(self): ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) ts1 = next(ts_iter) suffix_map = { '456': { '9373a92d072897b136b3fc06595b0456': [ ts1.internal + '#2.data', ts1.internal + '.durable'], '9373a92d072897b136b3fc06595b7456': [ ts1.internal + '#2.data'], }, } expected = { '9373a92d072897b136b3fc06595b0456': {'ts_data': ts1}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=2) # if we add a durable it shows up suffix_map['456']['9373a92d072897b136b3fc06595b7456'].append( ts1.internal + '.durable') expected = { '9373a92d072897b136b3fc06595b0456': {'ts_data': ts1}, '9373a92d072897b136b3fc06595b7456': {'ts_data': ts1}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=2) def test_yield_hashes_skips_data_without_durable(self): ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) ts1 = next(ts_iter) ts2 = next(ts_iter) ts3 = next(ts_iter) suffix_map = { '456': { '9373a92d072897b136b3fc06595b0456': [ ts1.internal + '#2.data', ts1.internal + '.durable', ts2.internal + '#2.data', ts3.internal + '#2.data'], }, } expected = { '9373a92d072897b136b3fc06595b0456': {'ts_data': ts1}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=None) self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=2) # if we add a durable then newer data shows up suffix_map['456']['9373a92d072897b136b3fc06595b0456'].append( ts2.internal + '.durable') expected = { '9373a92d072897b136b3fc06595b0456': {'ts_data': ts2}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=None) self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=2) def test_yield_hashes_ignores_bad_ondisk_filesets(self): # this differs from DiskFileManager.yield_hashes which will fail # when encountering a bad on-disk file set ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) ts1 = next(ts_iter) ts2 = next(ts_iter) suffix_map = { '456': { # this one is fine '9333a92d072897b136b3fc06595b0456': [ ts1.internal + '#2.data', ts1.internal + '.durable'], # missing frag index '9444a92d072897b136b3fc06595b7456': [ ts1.internal + '.data'], # junk '9555a92d072897b136b3fc06595b8456': [ 'junk_file'], # missing .durable '9666a92d072897b136b3fc06595b9456': [ ts1.internal + '#2.data', ts2.internal + '.meta'], # .meta files w/o .data files can't be opened, and are ignored '9777a92d072897b136b3fc06595ba456': [ ts1.internal + '.meta'], # multiple meta files with no data '9888a92d072897b136b3fc06595bb456': [ ts1.internal + '.meta', ts2.internal + '.meta'], # this is good with meta '9999a92d072897b136b3fc06595bb456': [ ts1.internal + '#2.data', ts1.internal + '.durable', ts2.internal + '.meta'], # this one is wrong frag index 
'9aaaa92d072897b136b3fc06595b0456': [ ts1.internal + '#7.data', ts1.internal + '.durable'], }, } expected = { '9333a92d072897b136b3fc06595b0456': {'ts_data': ts1}, '9999a92d072897b136b3fc06595bb456': {'ts_data': ts1, 'ts_meta': ts2}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=2) def test_yield_hashes_filters_frag_index(self): ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) ts1 = next(ts_iter) ts2 = next(ts_iter) ts3 = next(ts_iter) suffix_map = { '27e': { '1111111111111111111111111111127e': [ ts1.internal + '#2.data', ts1.internal + '#3.data', ts1.internal + '.durable', ], '2222222222222222222222222222227e': [ ts1.internal + '#2.data', ts1.internal + '.durable', ts2.internal + '#2.data', ts2.internal + '.durable', ], }, 'd41': { 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaad41': [ ts1.internal + '#3.data', ts1.internal + '.durable', ], }, '00b': { '3333333333333333333333333333300b': [ ts1.internal + '#2.data', ts2.internal + '#2.data', ts3.internal + '#2.data', ts3.internal + '.durable', ], }, } expected = { '1111111111111111111111111111127e': {'ts_data': ts1}, '2222222222222222222222222222227e': {'ts_data': ts2}, '3333333333333333333333333333300b': {'ts_data': ts3}, } self._check_yield_hashes(POLICIES.default, suffix_map, expected, frag_index=2) def test_get_diskfile_from_hash_frag_index_filter(self): df = self._get_diskfile(POLICIES.default) hash_ = os.path.basename(df._datadir) self.assertRaises(DiskFileNotExist, self.df_mgr.get_diskfile_from_hash, self.existing_device1, '0', hash_, POLICIES.default) # sanity frag_index = 7 timestamp = Timestamp(time()) for frag_index in (4, 7): with df.create() as writer: data = 'test_data' writer.write(data) metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': timestamp.internal, 'Content-Length': len(data), 'X-Object-Sysmeta-Ec-Frag-Index': str(frag_index), } writer.put(metadata) writer.commit(timestamp) df4 = self.df_mgr.get_diskfile_from_hash( self.existing_device1, '0', hash_, POLICIES.default, frag_index=4) self.assertEqual(df4._frag_index, 4) self.assertEqual( df4.read_metadata()['X-Object-Sysmeta-Ec-Frag-Index'], '4') df7 = self.df_mgr.get_diskfile_from_hash( self.existing_device1, '0', hash_, POLICIES.default, frag_index=7) self.assertEqual(df7._frag_index, 7) self.assertEqual( df7.read_metadata()['X-Object-Sysmeta-Ec-Frag-Index'], '7') class DiskFileMixin(BaseDiskFileTestMixin): # set mgr_cls on subclasses mgr_cls = None def setUp(self): """Set up for testing swift.obj.diskfile""" self.tmpdir = mkdtemp() self.testdir = os.path.join( self.tmpdir, 'tmp_test_obj_server_DiskFile') self.existing_device = 'sda1' for policy in POLICIES: mkdirs(os.path.join(self.testdir, self.existing_device, diskfile.get_tmp_dir(policy))) self._orig_tpool_exc = tpool.execute tpool.execute = lambda f, *args, **kwargs: f(*args, **kwargs) self.conf = dict(devices=self.testdir, mount_check='false', keep_cache_size=2 * 1024, mb_per_sync=1) self.logger = debug_logger('test-' + self.__class__.__name__) self.df_mgr = self.mgr_cls(self.conf, self.logger) self.df_router = diskfile.DiskFileRouter(self.conf, self.logger) self._ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) def ts(self): """ Timestamps - forever. 
""" return next(self._ts_iter) def tearDown(self): """Tear down for testing swift.obj.diskfile""" rmtree(self.tmpdir, ignore_errors=1) tpool.execute = self._orig_tpool_exc def _create_ondisk_file(self, df, data, timestamp, metadata=None, ctype_timestamp=None, ext='.data'): mkdirs(df._datadir) if timestamp is None: timestamp = time() timestamp = Timestamp(timestamp) if not metadata: metadata = {} if 'X-Timestamp' not in metadata: metadata['X-Timestamp'] = timestamp.internal if 'ETag' not in metadata: etag = md5() etag.update(data) metadata['ETag'] = etag.hexdigest() if 'name' not in metadata: metadata['name'] = '/a/c/o' if 'Content-Length' not in metadata: metadata['Content-Length'] = str(len(data)) filename = timestamp.internal if ext == '.data' and df.policy.policy_type == EC_POLICY: filename = '%s#%s' % (timestamp.internal, df._frag_index) if ctype_timestamp: metadata.update( {'Content-Type-Timestamp': Timestamp(ctype_timestamp).internal}) filename = encode_timestamps(timestamp, Timestamp(ctype_timestamp), explicit=True) data_file = os.path.join(df._datadir, filename + ext) with open(data_file, 'wb') as f: f.write(data) xattr.setxattr(f.fileno(), diskfile.METADATA_KEY, pickle.dumps(metadata, diskfile.PICKLE_PROTOCOL)) def _simple_get_diskfile(self, partition='0', account='a', container='c', obj='o', policy=None, frag_index=None): policy = policy or POLICIES.default df_mgr = self.df_router[policy] if policy.policy_type == EC_POLICY and frag_index is None: frag_index = 2 return df_mgr.get_diskfile(self.existing_device, partition, account, container, obj, policy=policy, frag_index=frag_index) def _create_test_file(self, data, timestamp=None, metadata=None, account='a', container='c', obj='o'): if metadata is None: metadata = {} metadata.setdefault('name', '/%s/%s/%s' % (account, container, obj)) df = self._simple_get_diskfile(account=account, container=container, obj=obj) if timestamp is None: timestamp = time() timestamp = Timestamp(timestamp) if df.policy.policy_type == EC_POLICY: data = encode_frag_archive_bodies(df.policy, data)[df._frag_index] with df.create() as writer: new_metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': timestamp.internal, 'Content-Length': len(data), } new_metadata.update(metadata) writer.write(data) writer.put(new_metadata) writer.commit(timestamp) df.open() return df, data def test_get_dev_path(self): self.df_mgr.devices = '/srv' device = 'sda1' dev_path = os.path.join(self.df_mgr.devices, device) mount_check = None self.df_mgr.mount_check = True with mock.patch('swift.obj.diskfile.check_mount', mock.MagicMock(return_value=False)): self.assertEqual(self.df_mgr.get_dev_path(device, mount_check), None) with mock.patch('swift.obj.diskfile.check_mount', mock.MagicMock(return_value=True)): self.assertEqual(self.df_mgr.get_dev_path(device, mount_check), dev_path) self.df_mgr.mount_check = False with mock.patch('swift.obj.diskfile.check_dir', mock.MagicMock(return_value=False)): self.assertEqual(self.df_mgr.get_dev_path(device, mount_check), None) with mock.patch('swift.obj.diskfile.check_dir', mock.MagicMock(return_value=True)): self.assertEqual(self.df_mgr.get_dev_path(device, mount_check), dev_path) mount_check = True with mock.patch('swift.obj.diskfile.check_mount', mock.MagicMock(return_value=False)): self.assertEqual(self.df_mgr.get_dev_path(device, mount_check), None) with mock.patch('swift.obj.diskfile.check_mount', mock.MagicMock(return_value=True)): self.assertEqual(self.df_mgr.get_dev_path(device, mount_check), dev_path) mount_check = False 
self.assertEqual(self.df_mgr.get_dev_path(device, mount_check), dev_path) def test_open_not_exist(self): df = self._simple_get_diskfile() self.assertRaises(DiskFileNotExist, df.open) def test_open_expired(self): self.assertRaises(DiskFileExpired, self._create_test_file, '1234567890', metadata={'X-Delete-At': '0'}) def test_open_not_expired(self): try: self._create_test_file( '1234567890', metadata={'X-Delete-At': str(2 * int(time()))}) except SwiftException as err: self.fail("Unexpected swift exception raised: %r" % err) def test_get_metadata(self): timestamp = self.ts().internal df, df_data = self._create_test_file('1234567890', timestamp=timestamp) md = df.get_metadata() self.assertEqual(md['X-Timestamp'], timestamp) def test_read_metadata(self): timestamp = self.ts().internal self._create_test_file('1234567890', timestamp=timestamp) df = self._simple_get_diskfile() md = df.read_metadata() self.assertEqual(md['X-Timestamp'], timestamp) def test_read_metadata_no_xattr(self): def mock_getxattr(*args, **kargs): error_num = errno.ENOTSUP if hasattr(errno, 'ENOTSUP') else \ errno.EOPNOTSUPP raise IOError(error_num, "Operation not supported") with mock.patch('xattr.getxattr', mock_getxattr): self.assertRaises( DiskFileXattrNotSupported, diskfile.read_metadata, 'n/a') def test_get_metadata_not_opened(self): df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotOpen): df.get_metadata() def test_get_datafile_metadata(self): ts_iter = make_timestamp_iter() body = '1234567890' ts_data = next(ts_iter) metadata = {'X-Object-Meta-Test': 'test1', 'X-Object-Sysmeta-Test': 'test1'} df, df_data = self._create_test_file(body, timestamp=ts_data.internal, metadata=metadata) expected = df.get_metadata() ts_meta = next(ts_iter) df.write_metadata({'X-Timestamp': ts_meta.internal, 'X-Object-Meta-Test': 'changed', 'X-Object-Sysmeta-Test': 'ignored'}) df.open() self.assertEqual(expected, df.get_datafile_metadata()) expected.update({'X-Timestamp': ts_meta.internal, 'X-Object-Meta-Test': 'changed'}) self.assertEqual(expected, df.get_metadata()) def test_get_datafile_metadata_not_opened(self): df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotOpen): df.get_datafile_metadata() def test_get_metafile_metadata(self): ts_iter = make_timestamp_iter() body = '1234567890' ts_data = next(ts_iter) metadata = {'X-Object-Meta-Test': 'test1', 'X-Object-Sysmeta-Test': 'test1'} df, df_data = self._create_test_file(body, timestamp=ts_data.internal, metadata=metadata) self.assertIsNone(df.get_metafile_metadata()) # now create a meta file ts_meta = next(ts_iter) df.write_metadata({'X-Timestamp': ts_meta.internal, 'X-Object-Meta-Test': 'changed'}) df.open() expected = {'X-Timestamp': ts_meta.internal, 'X-Object-Meta-Test': 'changed'} self.assertEqual(expected, df.get_metafile_metadata()) def test_get_metafile_metadata_not_opened(self): df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotOpen): df.get_metafile_metadata() def test_not_opened(self): df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotOpen): with df: pass def test_disk_file_default_disallowed_metadata(self): # build an object with some meta (at t0+1s) orig_metadata = {'X-Object-Meta-Key1': 'Value1', 'Content-Type': 'text/garbage'} df = self._get_open_disk_file(ts=self.ts().internal, extra_metadata=orig_metadata) with df.open(): if df.policy.policy_type == EC_POLICY: expected = df.policy.pyeclib_driver.get_segment_info( 1024, df.policy.ec_segment_size)['fragment_size'] else: expected = 1024 
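            # Added note (not from the original test): for an EC policy the
            # .data file holds a fragment archive, so the stored
            # Content-Length is pyeclib's fragment_size for the 1024-byte
            # object rather than 1024 itself; replicated policies store the
            # full 1024 bytes.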
self.assertEqual(str(expected), df._metadata['Content-Length']) # write some new metadata (fast POST, don't send orig meta, at t0+1) df = self._simple_get_diskfile() df.write_metadata({'X-Timestamp': self.ts().internal, 'X-Object-Meta-Key2': 'Value2'}) df = self._simple_get_diskfile() with df.open(): # non-fast-post updateable keys are preserved self.assertEqual('text/garbage', df._metadata['Content-Type']) # original fast-post updateable keys are removed self.assertNotIn('X-Object-Meta-Key1', df._metadata) # new fast-post updateable keys are added self.assertEqual('Value2', df._metadata['X-Object-Meta-Key2']) def test_disk_file_preserves_sysmeta(self): # build an object with some meta (at t0) orig_metadata = {'X-Object-Sysmeta-Key1': 'Value1', 'Content-Type': 'text/garbage'} df = self._get_open_disk_file(ts=self.ts().internal, extra_metadata=orig_metadata) with df.open(): if df.policy.policy_type == EC_POLICY: expected = df.policy.pyeclib_driver.get_segment_info( 1024, df.policy.ec_segment_size)['fragment_size'] else: expected = 1024 self.assertEqual(str(expected), df._metadata['Content-Length']) # write some new metadata (fast POST, don't send orig meta, at t0+1s) df = self._simple_get_diskfile() df.write_metadata({'X-Timestamp': self.ts().internal, 'X-Object-Sysmeta-Key1': 'Value2', 'X-Object-Meta-Key3': 'Value3'}) df = self._simple_get_diskfile() with df.open(): # non-fast-post updateable keys are preserved self.assertEqual('text/garbage', df._metadata['Content-Type']) # original sysmeta keys are preserved self.assertEqual('Value1', df._metadata['X-Object-Sysmeta-Key1']) def test_disk_file_reader_iter(self): df, df_data = self._create_test_file('1234567890') quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) self.assertEqual(''.join(reader), df_data) self.assertEqual(quarantine_msgs, []) def test_disk_file_reader_iter_w_quarantine(self): df, df_data = self._create_test_file('1234567890') def raise_dfq(m): raise DiskFileQuarantined(m) reader = df.reader(_quarantine_hook=raise_dfq) reader._obj_size += 1 self.assertRaises(DiskFileQuarantined, ''.join, reader) def test_disk_file_app_iter_corners(self): df, df_data = self._create_test_file('1234567890') quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) self.assertEqual(''.join(reader.app_iter_range(0, None)), df_data) self.assertEqual(quarantine_msgs, []) df = self._simple_get_diskfile() with df.open(): reader = df.reader() self.assertEqual(''.join(reader.app_iter_range(5, None)), df_data[5:]) def test_disk_file_app_iter_range_w_none(self): df, df_data = self._create_test_file('1234567890') quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) self.assertEqual(''.join(reader.app_iter_range(None, None)), df_data) self.assertEqual(quarantine_msgs, []) def test_disk_file_app_iter_partial_closes(self): df, df_data = self._create_test_file('1234567890') quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) it = reader.app_iter_range(0, 5) self.assertEqual(''.join(it), df_data[:5]) self.assertEqual(quarantine_msgs, []) self.assertTrue(reader._fp is None) def test_disk_file_app_iter_ranges(self): df, df_data = self._create_test_file('012345678911234567892123456789') quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) it = reader.app_iter_ranges([(0, 10), (10, 20), (20, 30)], 'plain/text', '\r\n--someheader\r\n', len(df_data)) value = ''.join(it) self.assertIn(df_data[:10], value) 
self.assertIn(df_data[10:20], value) self.assertIn(df_data[20:30], value) self.assertEqual(quarantine_msgs, []) def test_disk_file_app_iter_ranges_w_quarantine(self): df, df_data = self._create_test_file('012345678911234567892123456789') quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) self.assertEqual(len(df_data), reader._obj_size) # sanity check reader._obj_size += 1 it = reader.app_iter_ranges([(0, len(df_data))], 'plain/text', '\r\n--someheader\r\n', len(df_data)) value = ''.join(it) self.assertIn(df_data, value) self.assertEqual(quarantine_msgs, ["Bytes read: %s, does not match metadata: %s" % (len(df_data), len(df_data) + 1)]) def test_disk_file_app_iter_ranges_w_no_etag_quarantine(self): df, df_data = self._create_test_file('012345678911234567892123456789') quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) it = reader.app_iter_ranges([(0, 10)], 'plain/text', '\r\n--someheader\r\n', len(df_data)) value = ''.join(it) self.assertIn(df_data[:10], value) self.assertEqual(quarantine_msgs, []) def test_disk_file_app_iter_ranges_edges(self): df, df_data = self._create_test_file('012345678911234567892123456789') quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) it = reader.app_iter_ranges([(3, 10), (0, 2)], 'application/whatever', '\r\n--someheader\r\n', len(df_data)) value = ''.join(it) self.assertIn(df_data[3:10], value) self.assertIn(df_data[:2], value) self.assertEqual(quarantine_msgs, []) def test_disk_file_large_app_iter_ranges(self): # This test case is to make sure that the disk file app_iter_ranges # method all the paths being tested. long_str = '01234567890' * 65536 df, df_data = self._create_test_file(long_str) target_strs = [df_data[3:10], df_data[0:65590]] quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) it = reader.app_iter_ranges([(3, 10), (0, 65590)], 'plain/text', '5e816ff8b8b8e9a5d355497e5d9e0301', len(df_data)) # The produced string actually missing the MIME headers # need to add these headers to make it as real MIME message. # The body of the message is produced by method app_iter_ranges # off of DiskFile object. 
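        # Added sketch (not from the original source): once the Content-Type
        # header is prepended below, the joined value is parseable as a
        # multipart/byteranges document along the lines of:
        #   Content-Type: multipart/byteranges;boundary=5e816ff8b8b8e9a5d...
        #
        #   --5e816ff8b8b8e9a5d...
        #   Content-Type: plain/text
        #   Content-Range: bytes 3-9/<len(df_data)>
        #   <the 7 requested bytes>
        #   ...
        # which is why email.message_from_string().walk() can pull out the
        # two body parts for comparison.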
header = ''.join(['Content-Type: multipart/byteranges;', 'boundary=', '5e816ff8b8b8e9a5d355497e5d9e0301\r\n']) value = header + ''.join(it) self.assertEqual(quarantine_msgs, []) parts = map(lambda p: p.get_payload(decode=True), email.message_from_string(value).walk())[1:3] self.assertEqual(parts, target_strs) def test_disk_file_app_iter_ranges_empty(self): # This test case tests when empty value passed into app_iter_ranges # When ranges passed into the method is either empty array or None, # this method will yield empty string df, df_data = self._create_test_file('012345678911234567892123456789') quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) it = reader.app_iter_ranges([], 'application/whatever', '\r\n--someheader\r\n', len(df_data)) self.assertEqual(''.join(it), '') df = self._simple_get_diskfile() with df.open(): reader = df.reader() it = reader.app_iter_ranges(None, 'app/something', '\r\n--someheader\r\n', 150) self.assertEqual(''.join(it), '') self.assertEqual(quarantine_msgs, []) def test_disk_file_mkstemp_creates_dir(self): for policy in POLICIES: tmpdir = os.path.join(self.testdir, self.existing_device, diskfile.get_tmp_dir(policy)) os.rmdir(tmpdir) df = self._simple_get_diskfile(policy=policy) with df.create(): self.assertTrue(os.path.exists(tmpdir)) def _get_open_disk_file(self, invalid_type=None, obj_name='o', fsize=1024, csize=8, mark_deleted=False, prealloc=False, ts=None, mount_check=False, extra_metadata=None, policy=None, frag_index=None, data=None, commit=True): '''returns a DiskFile''' policy = policy or POLICIES.legacy df = self._simple_get_diskfile(obj=obj_name, policy=policy, frag_index=frag_index) data = data or '0' * fsize if policy.policy_type == EC_POLICY: archives = encode_frag_archive_bodies(policy, data) try: data = archives[df._frag_index] except IndexError: data = archives[0] etag = md5() if ts: timestamp = Timestamp(ts) else: timestamp = Timestamp(time()) if prealloc: prealloc_size = fsize else: prealloc_size = None with df.create(size=prealloc_size) as writer: upload_size = writer.write(data) etag.update(data) etag = etag.hexdigest() metadata = { 'ETag': etag, 'X-Timestamp': timestamp.internal, 'Content-Length': str(upload_size), } metadata.update(extra_metadata or {}) writer.put(metadata) if invalid_type == 'ETag': etag = md5() etag.update('1' + '0' * (fsize - 1)) etag = etag.hexdigest() metadata['ETag'] = etag diskfile.write_metadata(writer._fd, metadata) elif invalid_type == 'Content-Length': metadata['Content-Length'] = fsize - 1 diskfile.write_metadata(writer._fd, metadata) elif invalid_type == 'Bad-Content-Length': metadata['Content-Length'] = 'zero' diskfile.write_metadata(writer._fd, metadata) elif invalid_type == 'Missing-Content-Length': del metadata['Content-Length'] diskfile.write_metadata(writer._fd, metadata) elif invalid_type == 'Bad-X-Delete-At': metadata['X-Delete-At'] = 'bad integer' diskfile.write_metadata(writer._fd, metadata) if commit: writer.commit(timestamp) if mark_deleted: df.delete(timestamp) data_files = [os.path.join(df._datadir, fname) for fname in sorted(os.listdir(df._datadir), reverse=True) if fname.endswith('.data')] if invalid_type == 'Corrupt-Xattrs': # We have to go below read_metadata/write_metadata to get proper # corruption. 
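            # Added note (not from the original source): diskfile metadata
            # lives in the 'user.swift.metadata' xattr as a pickled dict, so
            # flipping its first byte here (or truncating it just below)
            # leaves a blob that no longer unpickles and causes the file to
            # be quarantined on open().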
meta_xattr = xattr.getxattr(data_files[0], "user.swift.metadata") wrong_byte = 'X' if meta_xattr[0] != 'X' else 'Y' xattr.setxattr(data_files[0], "user.swift.metadata", wrong_byte + meta_xattr[1:]) elif invalid_type == 'Truncated-Xattrs': meta_xattr = xattr.getxattr(data_files[0], "user.swift.metadata") xattr.setxattr(data_files[0], "user.swift.metadata", meta_xattr[:-1]) elif invalid_type == 'Missing-Name': md = diskfile.read_metadata(data_files[0]) del md['name'] diskfile.write_metadata(data_files[0], md) elif invalid_type == 'Bad-Name': md = diskfile.read_metadata(data_files[0]) md['name'] = md['name'] + 'garbage' diskfile.write_metadata(data_files[0], md) self.conf['disk_chunk_size'] = csize self.conf['mount_check'] = mount_check self.df_mgr = self.mgr_cls(self.conf, self.logger) self.df_router = diskfile.DiskFileRouter(self.conf, self.logger) # actual on disk frag_index may have been set by metadata frag_index = metadata.get('X-Object-Sysmeta-Ec-Frag-Index', frag_index) df = self._simple_get_diskfile(obj=obj_name, policy=policy, frag_index=frag_index) df.open() if invalid_type == 'Zero-Byte': fp = open(df._data_file, 'w') fp.close() df.unit_test_len = fsize return df def test_keep_cache(self): df = self._get_open_disk_file(fsize=65) with mock.patch("swift.obj.diskfile.drop_buffer_cache") as foo: for _ in df.reader(): pass self.assertTrue(foo.called) df = self._get_open_disk_file(fsize=65) with mock.patch("swift.obj.diskfile.drop_buffer_cache") as bar: for _ in df.reader(keep_cache=False): pass self.assertTrue(bar.called) df = self._get_open_disk_file(fsize=65) with mock.patch("swift.obj.diskfile.drop_buffer_cache") as boo: for _ in df.reader(keep_cache=True): pass self.assertFalse(boo.called) df = self._get_open_disk_file(fsize=50 * 1024, csize=256) with mock.patch("swift.obj.diskfile.drop_buffer_cache") as goo: for _ in df.reader(keep_cache=True): pass self.assertTrue(goo.called) def test_quarantine_valids(self): def verify(*args, **kwargs): try: df = self._get_open_disk_file(**kwargs) reader = df.reader() for chunk in reader: pass except DiskFileQuarantined: self.fail( "Unexpected quarantining occurred: args=%r, kwargs=%r" % ( args, kwargs)) else: pass verify(obj_name='1') verify(obj_name='2', csize=1) verify(obj_name='3', csize=100000) def run_quarantine_invalids(self, invalid_type): def verify(*args, **kwargs): open_exc = invalid_type in ('Content-Length', 'Bad-Content-Length', 'Corrupt-Xattrs', 'Truncated-Xattrs', 'Missing-Name', 'Bad-X-Delete-At') open_collision = invalid_type == 'Bad-Name' reader = None quarantine_msgs = [] try: df = self._get_open_disk_file(**kwargs) reader = df.reader(_quarantine_hook=quarantine_msgs.append) except DiskFileQuarantined as err: if not open_exc: self.fail( "Unexpected DiskFileQuarantine raised: %r" % err) return except DiskFileCollision as err: if not open_collision: self.fail( "Unexpected DiskFileCollision raised: %r" % err) return else: if open_exc: self.fail("Expected DiskFileQuarantine exception") try: for chunk in reader: pass except DiskFileQuarantined as err: self.fail("Unexpected DiskFileQuarantine raised: :%r" % err) else: if not open_exc: self.assertEqual(1, len(quarantine_msgs)) verify(invalid_type=invalid_type, obj_name='1') verify(invalid_type=invalid_type, obj_name='2', csize=1) verify(invalid_type=invalid_type, obj_name='3', csize=100000) verify(invalid_type=invalid_type, obj_name='4') def verify_air(params, start=0, adjustment=0): """verify (a)pp (i)ter (r)ange""" open_exc = invalid_type in ('Content-Length', 
'Bad-Content-Length', 'Corrupt-Xattrs', 'Truncated-Xattrs', 'Missing-Name', 'Bad-X-Delete-At') open_collision = invalid_type == 'Bad-Name' reader = None try: df = self._get_open_disk_file(**params) reader = df.reader() except DiskFileQuarantined as err: if not open_exc: self.fail( "Unexpected DiskFileQuarantine raised: %r" % err) return except DiskFileCollision as err: if not open_collision: self.fail( "Unexpected DiskFileCollision raised: %r" % err) return else: if open_exc: self.fail("Expected DiskFileQuarantine exception") try: for chunk in reader.app_iter_range( start, df.unit_test_len + adjustment): pass except DiskFileQuarantined as err: self.fail("Unexpected DiskFileQuarantine raised: :%r" % err) verify_air(dict(invalid_type=invalid_type, obj_name='5')) verify_air(dict(invalid_type=invalid_type, obj_name='6'), 0, 100) verify_air(dict(invalid_type=invalid_type, obj_name='7'), 1) verify_air(dict(invalid_type=invalid_type, obj_name='8'), 0, -1) verify_air(dict(invalid_type=invalid_type, obj_name='8'), 1, 1) def test_quarantine_corrupt_xattrs(self): self.run_quarantine_invalids('Corrupt-Xattrs') def test_quarantine_truncated_xattrs(self): self.run_quarantine_invalids('Truncated-Xattrs') def test_quarantine_invalid_etag(self): self.run_quarantine_invalids('ETag') def test_quarantine_invalid_missing_name(self): self.run_quarantine_invalids('Missing-Name') def test_quarantine_invalid_bad_name(self): self.run_quarantine_invalids('Bad-Name') def test_quarantine_invalid_bad_x_delete_at(self): self.run_quarantine_invalids('Bad-X-Delete-At') def test_quarantine_invalid_content_length(self): self.run_quarantine_invalids('Content-Length') def test_quarantine_invalid_content_length_bad(self): self.run_quarantine_invalids('Bad-Content-Length') def test_quarantine_invalid_zero_byte(self): self.run_quarantine_invalids('Zero-Byte') def test_quarantine_deleted_files(self): try: self._get_open_disk_file(invalid_type='Content-Length') except DiskFileQuarantined: pass else: self.fail("Expected DiskFileQuarantined exception") try: self._get_open_disk_file(invalid_type='Content-Length', mark_deleted=True) except DiskFileQuarantined as err: self.fail("Unexpected DiskFileQuarantined exception" " encountered: %r" % err) except DiskFileNotExist: pass else: self.fail("Expected DiskFileNotExist exception") try: self._get_open_disk_file(invalid_type='Content-Length', mark_deleted=True) except DiskFileNotExist: pass else: self.fail("Expected DiskFileNotExist exception") def test_quarantine_missing_content_length(self): self.assertRaises( DiskFileQuarantined, self._get_open_disk_file, invalid_type='Missing-Content-Length') def test_quarantine_bad_content_length(self): self.assertRaises( DiskFileQuarantined, self._get_open_disk_file, invalid_type='Bad-Content-Length') def test_quarantine_fstat_oserror(self): invocations = [0] orig_os_fstat = os.fstat def bad_fstat(fd): invocations[0] += 1 if invocations[0] == 4: # FIXME - yes, this an icky way to get code coverage ... worth # it? 
raise OSError() return orig_os_fstat(fd) with mock.patch('os.fstat', bad_fstat): self.assertRaises( DiskFileQuarantined, self._get_open_disk_file) def test_quarantine_hashdir_not_a_directory(self): df, df_data = self._create_test_file('1234567890', account="abc", container='123', obj='xyz') hashdir = df._datadir rmtree(hashdir) with open(hashdir, 'w'): pass df = self.df_mgr.get_diskfile(self.existing_device, '0', 'abc', '123', 'xyz', policy=POLICIES.legacy) self.assertRaises(DiskFileQuarantined, df.open) # make sure the right thing got quarantined; the suffix dir should not # have moved, as that could have many objects in it self.assertFalse(os.path.exists(hashdir)) self.assertTrue(os.path.exists(os.path.dirname(hashdir))) def test_create_prealloc(self): df = self.df_mgr.get_diskfile(self.existing_device, '0', 'abc', '123', 'xyz', policy=POLICIES.legacy) with mock.patch("swift.obj.diskfile.fallocate") as fa: with df.create(size=200) as writer: used_fd = writer._fd fa.assert_called_with(used_fd, 200) def test_create_prealloc_oserror(self): df = self.df_mgr.get_diskfile(self.existing_device, '0', 'abc', '123', 'xyz', policy=POLICIES.legacy) for e in (errno.ENOSPC, errno.EDQUOT): with mock.patch("swift.obj.diskfile.fallocate", mock.MagicMock(side_effect=OSError( e, os.strerror(e)))): try: with df.create(size=200): pass except DiskFileNoSpace: pass else: self.fail("Expected exception DiskFileNoSpace") # Other OSErrors must not be raised as DiskFileNoSpace with mock.patch("swift.obj.diskfile.fallocate", mock.MagicMock(side_effect=OSError( errno.EACCES, os.strerror(errno.EACCES)))): try: with df.create(size=200): pass except OSError: pass else: self.fail("Expected exception OSError") def test_create_mkstemp_no_space(self): df = self.df_mgr.get_diskfile(self.existing_device, '0', 'abc', '123', 'xyz', policy=POLICIES.legacy) for e in (errno.ENOSPC, errno.EDQUOT): with mock.patch("swift.obj.diskfile.mkstemp", mock.MagicMock(side_effect=OSError( e, os.strerror(e)))): try: with df.create(size=200): pass except DiskFileNoSpace: pass else: self.fail("Expected exception DiskFileNoSpace") # Other OSErrors must not be raised as DiskFileNoSpace with mock.patch("swift.obj.diskfile.mkstemp", mock.MagicMock(side_effect=OSError( errno.EACCES, os.strerror(errno.EACCES)))): try: with df.create(size=200): pass except OSError: pass else: self.fail("Expected exception OSError") def test_create_close_oserror(self): df = self.df_mgr.get_diskfile(self.existing_device, '0', 'abc', '123', 'xyz', policy=POLICIES.legacy) with mock.patch("swift.obj.diskfile.os.close", mock.MagicMock(side_effect=OSError( errno.EACCES, os.strerror(errno.EACCES)))): try: with df.create(size=200): pass except Exception as err: self.fail("Unexpected exception raised: %r" % err) else: pass def test_write_metadata(self): df, df_data = self._create_test_file('1234567890') file_count = len(os.listdir(df._datadir)) timestamp = Timestamp(time()).internal metadata = {'X-Timestamp': timestamp, 'X-Object-Meta-test': 'data'} df.write_metadata(metadata) dl = os.listdir(df._datadir) self.assertEqual(len(dl), file_count + 1) exp_name = '%s.meta' % timestamp self.assertIn(exp_name, set(dl)) def test_write_metadata_with_content_type(self): # if metadata has content-type then its time should be in file name df, df_data = self._create_test_file('1234567890') file_count = len(os.listdir(df._datadir)) timestamp = Timestamp(time()) metadata = {'X-Timestamp': timestamp.internal, 'X-Object-Meta-test': 'data', 'Content-Type': 'foo', 'Content-Type-Timestamp': 
timestamp.internal} df.write_metadata(metadata) dl = os.listdir(df._datadir) self.assertEqual(len(dl), file_count + 1) exp_name = '%s+0.meta' % timestamp.internal self.assertTrue(exp_name in set(dl), 'Expected file %s not found in %s' % (exp_name, dl)) def test_write_metadata_with_older_content_type(self): # if metadata has content-type then its time should be in file name ts_iter = make_timestamp_iter() df, df_data = self._create_test_file('1234567890', timestamp=ts_iter.next()) file_count = len(os.listdir(df._datadir)) timestamp = ts_iter.next() timestamp2 = ts_iter.next() metadata = {'X-Timestamp': timestamp2.internal, 'X-Object-Meta-test': 'data', 'Content-Type': 'foo', 'Content-Type-Timestamp': timestamp.internal} df.write_metadata(metadata) dl = os.listdir(df._datadir) self.assertEqual(len(dl), file_count + 1, dl) exp_name = '%s-%x.meta' % (timestamp2.internal, timestamp2.raw - timestamp.raw) self.assertTrue(exp_name in set(dl), 'Expected file %s not found in %s' % (exp_name, dl)) def test_write_metadata_with_content_type_removes_same_time_meta(self): # a meta file without content-type should be cleaned up in favour of # a meta file at same time with content-type ts_iter = make_timestamp_iter() df, df_data = self._create_test_file('1234567890', timestamp=ts_iter.next()) file_count = len(os.listdir(df._datadir)) timestamp = ts_iter.next() timestamp2 = ts_iter.next() metadata = {'X-Timestamp': timestamp2.internal, 'X-Object-Meta-test': 'data'} df.write_metadata(metadata) metadata = {'X-Timestamp': timestamp2.internal, 'X-Object-Meta-test': 'data', 'Content-Type': 'foo', 'Content-Type-Timestamp': timestamp.internal} df.write_metadata(metadata) dl = os.listdir(df._datadir) self.assertEqual(len(dl), file_count + 1, dl) exp_name = '%s-%x.meta' % (timestamp2.internal, timestamp2.raw - timestamp.raw) self.assertTrue(exp_name in set(dl), 'Expected file %s not found in %s' % (exp_name, dl)) def test_write_metadata_with_content_type_removes_multiple_metas(self): # a combination of a meta file without content-type and an older meta # file with content-type should be cleaned up in favour of a meta file # at newer time with content-type ts_iter = make_timestamp_iter() df, df_data = self._create_test_file('1234567890', timestamp=ts_iter.next()) file_count = len(os.listdir(df._datadir)) timestamp = ts_iter.next() timestamp2 = ts_iter.next() metadata = {'X-Timestamp': timestamp2.internal, 'X-Object-Meta-test': 'data'} df.write_metadata(metadata) metadata = {'X-Timestamp': timestamp.internal, 'X-Object-Meta-test': 'data', 'Content-Type': 'foo', 'Content-Type-Timestamp': timestamp.internal} df.write_metadata(metadata) dl = os.listdir(df._datadir) self.assertEqual(len(dl), file_count + 2, dl) metadata = {'X-Timestamp': timestamp2.internal, 'X-Object-Meta-test': 'data', 'Content-Type': 'foo', 'Content-Type-Timestamp': timestamp.internal} df.write_metadata(metadata) dl = os.listdir(df._datadir) self.assertEqual(len(dl), file_count + 1, dl) exp_name = '%s-%x.meta' % (timestamp2.internal, timestamp2.raw - timestamp.raw) self.assertTrue(exp_name in set(dl), 'Expected file %s not found in %s' % (exp_name, dl)) def test_write_metadata_no_xattr(self): timestamp = Timestamp(time()).internal metadata = {'X-Timestamp': timestamp, 'X-Object-Meta-test': 'data'} def mock_setxattr(*args, **kargs): error_num = errno.ENOTSUP if hasattr(errno, 'ENOTSUP') else \ errno.EOPNOTSUPP raise IOError(error_num, "Operation not supported") with mock.patch('xattr.setxattr', mock_setxattr): self.assertRaises( 
DiskFileXattrNotSupported, diskfile.write_metadata, 'n/a', metadata) def test_write_metadata_disk_full(self): timestamp = Timestamp(time()).internal metadata = {'X-Timestamp': timestamp, 'X-Object-Meta-test': 'data'} def mock_setxattr_ENOSPC(*args, **kargs): raise IOError(errno.ENOSPC, "No space left on device") def mock_setxattr_EDQUOT(*args, **kargs): raise IOError(errno.EDQUOT, "Exceeded quota") with mock.patch('xattr.setxattr', mock_setxattr_ENOSPC): self.assertRaises( DiskFileNoSpace, diskfile.write_metadata, 'n/a', metadata) with mock.patch('xattr.setxattr', mock_setxattr_EDQUOT): self.assertRaises( DiskFileNoSpace, diskfile.write_metadata, 'n/a', metadata) def _create_diskfile_dir(self, timestamp, policy): timestamp = Timestamp(timestamp) df = self._simple_get_diskfile(account='a', container='c', obj='o_%s' % policy, policy=policy) with df.create() as writer: metadata = { 'ETag': 'bogus_etag', 'X-Timestamp': timestamp.internal, 'Content-Length': '0', } if policy.policy_type == EC_POLICY: metadata['X-Object-Sysmeta-Ec-Frag-Index'] = \ df._frag_index or 7 writer.put(metadata) writer.commit(timestamp) return writer._datadir def test_commit(self): for policy in POLICIES: # create first fileset as starting state timestamp = Timestamp(time()).internal datadir = self._create_diskfile_dir(timestamp, policy) dl = os.listdir(datadir) expected = ['%s.data' % timestamp] if policy.policy_type == EC_POLICY: expected = ['%s#2.data' % timestamp, '%s.durable' % timestamp] self.assertEqual(len(dl), len(expected), 'Unexpected dir listing %s' % dl) self.assertEqual(sorted(expected), sorted(dl)) def test_write_cleanup(self): for policy in POLICIES: # create first fileset as starting state timestamp_1 = Timestamp(time()).internal datadir_1 = self._create_diskfile_dir(timestamp_1, policy) # second write should clean up first fileset timestamp_2 = Timestamp(time() + 1).internal datadir_2 = self._create_diskfile_dir(timestamp_2, policy) # sanity check self.assertEqual(datadir_1, datadir_2) dl = os.listdir(datadir_2) expected = ['%s.data' % timestamp_2] if policy.policy_type == EC_POLICY: expected = ['%s#2.data' % timestamp_2, '%s.durable' % timestamp_2] self.assertEqual(len(dl), len(expected), 'Unexpected dir listing %s' % dl) self.assertEqual(sorted(expected), sorted(dl)) def test_commit_fsync(self): for policy in POLICIES: mock_fsync = mock.MagicMock() df = self._simple_get_diskfile(account='a', container='c', obj='o', policy=policy) timestamp = Timestamp(time()) with df.create() as writer: metadata = { 'ETag': 'bogus_etag', 'X-Timestamp': timestamp.internal, 'Content-Length': '0', } writer.put(metadata) with mock.patch('swift.obj.diskfile.fsync', mock_fsync): writer.commit(timestamp) expected = { EC_POLICY: 1, REPL_POLICY: 0, }[policy.policy_type] self.assertEqual(expected, mock_fsync.call_count) if policy.policy_type == EC_POLICY: self.assertTrue(isinstance(mock_fsync.call_args[0][0], int)) def test_commit_ignores_hash_cleanup_listdir_error(self): for policy in POLICIES: # Check OSError from hash_cleanup_listdir is caught and ignored mock_hcl = mock.MagicMock(side_effect=OSError) df = self._simple_get_diskfile(account='a', container='c', obj='o_hcl_error', policy=policy) timestamp = Timestamp(time()) with df.create() as writer: metadata = { 'ETag': 'bogus_etag', 'X-Timestamp': timestamp.internal, 'Content-Length': '0', } writer.put(metadata) with mock.patch(self._manager_mock( 'cleanup_ondisk_files', df), mock_hcl): writer.commit(timestamp) expected = { EC_POLICY: 1, REPL_POLICY: 0, 
}[policy.policy_type] self.assertEqual(expected, mock_hcl.call_count) expected = ['%s.data' % timestamp.internal] if policy.policy_type == EC_POLICY: expected = ['%s#2.data' % timestamp.internal, '%s.durable' % timestamp.internal] dl = os.listdir(df._datadir) self.assertEqual(len(dl), len(expected), 'Unexpected dir listing %s' % dl) self.assertEqual(sorted(expected), sorted(dl)) def test_number_calls_to_hash_cleanup_listdir_during_create(self): # Check how many calls are made to hash_cleanup_listdir, and when, # during put(), commit() sequence for policy in POLICIES: expected = { EC_POLICY: (0, 1), REPL_POLICY: (1, 0), }[policy.policy_type] df = self._simple_get_diskfile(account='a', container='c', obj='o_hcl_error', policy=policy) timestamp = Timestamp(time()) with df.create() as writer: metadata = { 'ETag': 'bogus_etag', 'X-Timestamp': timestamp.internal, 'Content-Length': '0', } with mock.patch(self._manager_mock( 'cleanup_ondisk_files', df)) as mock_hcl: writer.put(metadata) self.assertEqual(expected[0], mock_hcl.call_count) with mock.patch(self._manager_mock( 'cleanup_ondisk_files', df)) as mock_hcl: writer.commit(timestamp) self.assertEqual(expected[1], mock_hcl.call_count) def test_number_calls_to_hash_cleanup_listdir_during_delete(self): # Check how many calls are made to hash_cleanup_listdir, and when, # for delete() and necessary prerequisite steps for policy in POLICIES: expected = { EC_POLICY: (0, 1, 1), REPL_POLICY: (1, 0, 1), }[policy.policy_type] df = self._simple_get_diskfile(account='a', container='c', obj='o_hcl_error', policy=policy) timestamp = Timestamp(time()) with df.create() as writer: metadata = { 'ETag': 'bogus_etag', 'X-Timestamp': timestamp.internal, 'Content-Length': '0', } with mock.patch(self._manager_mock( 'cleanup_ondisk_files', df)) as mock_hcl: writer.put(metadata) self.assertEqual(expected[0], mock_hcl.call_count) with mock.patch(self._manager_mock( 'cleanup_ondisk_files', df)) as mock_hcl: writer.commit(timestamp) self.assertEqual(expected[1], mock_hcl.call_count) with mock.patch(self._manager_mock( 'cleanup_ondisk_files', df)) as mock_hcl: timestamp = Timestamp(time()) df.delete(timestamp) self.assertEqual(expected[2], mock_hcl.call_count) def test_delete(self): for policy in POLICIES: if policy.policy_type == EC_POLICY: metadata = {'X-Object-Sysmeta-Ec-Frag-Index': '1'} fi = 1 else: metadata = {} fi = None df = self._get_open_disk_file(policy=policy, frag_index=fi, extra_metadata=metadata) ts = Timestamp(time()) df.delete(ts) exp_name = '%s.ts' % ts.internal dl = os.listdir(df._datadir) self.assertEqual(len(dl), 1) self.assertIn(exp_name, set(dl)) # cleanup before next policy os.unlink(os.path.join(df._datadir, exp_name)) def test_open_deleted(self): df = self._get_open_disk_file() ts = time() df.delete(ts) exp_name = '%s.ts' % str(Timestamp(ts).internal) dl = os.listdir(df._datadir) self.assertEqual(len(dl), 1) self.assertIn(exp_name, set(dl)) df = self._simple_get_diskfile() self.assertRaises(DiskFileDeleted, df.open) def test_open_deleted_with_corrupt_tombstone(self): df = self._get_open_disk_file() ts = time() df.delete(ts) exp_name = '%s.ts' % str(Timestamp(ts).internal) dl = os.listdir(df._datadir) self.assertEqual(len(dl), 1) self.assertIn(exp_name, set(dl)) # it's pickle-format, so removing the last byte is sufficient to # corrupt it ts_fullpath = os.path.join(df._datadir, exp_name) self.assertTrue(os.path.exists(ts_fullpath)) # sanity check meta_xattr = xattr.getxattr(ts_fullpath, "user.swift.metadata") xattr.setxattr(ts_fullpath, 
"user.swift.metadata", meta_xattr[:-1]) df = self._simple_get_diskfile() self.assertRaises(DiskFileNotExist, df.open) self.assertFalse(os.path.exists(ts_fullpath)) def test_from_audit_location(self): df, df_data = self._create_test_file( 'blah blah', account='three', container='blind', obj='mice') hashdir = df._datadir df = self.df_mgr.get_diskfile_from_audit_location( diskfile.AuditLocation(hashdir, self.existing_device, '0', policy=POLICIES.default)) df.open() self.assertEqual(df._name, '/three/blind/mice') def test_from_audit_location_with_mismatched_hash(self): df, df_data = self._create_test_file( 'blah blah', account='this', container='is', obj='right') hashdir = df._datadir datafilename = [f for f in os.listdir(hashdir) if f.endswith('.data')][0] datafile = os.path.join(hashdir, datafilename) meta = diskfile.read_metadata(datafile) meta['name'] = '/this/is/wrong' diskfile.write_metadata(datafile, meta) df = self.df_mgr.get_diskfile_from_audit_location( diskfile.AuditLocation(hashdir, self.existing_device, '0', policy=POLICIES.default)) self.assertRaises(DiskFileQuarantined, df.open) def test_close_error(self): def mock_handle_close_quarantine(): raise Exception("Bad") df = self._get_open_disk_file(fsize=1024 * 1024 * 2, csize=1024) reader = df.reader() reader._handle_close_quarantine = mock_handle_close_quarantine for chunk in reader: pass # close is called at the end of the iterator self.assertEqual(reader._fp, None) error_lines = df._logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) self.assertIn('close failure', error_lines[0]) self.assertIn('Bad', error_lines[0]) def test_mount_checking(self): def _mock_cm(*args, **kwargs): return False with mock.patch("swift.common.constraints.check_mount", _mock_cm): self.assertRaises( DiskFileDeviceUnavailable, self._get_open_disk_file, mount_check=True) def test_ondisk_search_loop_ts_meta_data(self): df = self._simple_get_diskfile() self._create_ondisk_file(df, '', ext='.ts', timestamp=10) self._create_ondisk_file(df, '', ext='.ts', timestamp=9) self._create_ondisk_file(df, '', ext='.meta', timestamp=8) self._create_ondisk_file(df, '', ext='.meta', timestamp=7) self._create_ondisk_file(df, 'B', ext='.data', timestamp=6) self._create_ondisk_file(df, 'A', ext='.data', timestamp=5) df = self._simple_get_diskfile() try: df.open() except DiskFileDeleted as d: self.assertEqual(d.timestamp, Timestamp(10).internal) else: self.fail("Expected DiskFileDeleted exception") def test_ondisk_search_loop_meta_ts_data(self): df = self._simple_get_diskfile() self._create_ondisk_file(df, '', ext='.meta', timestamp=10) self._create_ondisk_file(df, '', ext='.meta', timestamp=9) self._create_ondisk_file(df, '', ext='.ts', timestamp=8) self._create_ondisk_file(df, '', ext='.ts', timestamp=7) self._create_ondisk_file(df, 'B', ext='.data', timestamp=6) self._create_ondisk_file(df, 'A', ext='.data', timestamp=5) df = self._simple_get_diskfile() try: df.open() except DiskFileDeleted as d: self.assertEqual(d.timestamp, Timestamp(8).internal) else: self.fail("Expected DiskFileDeleted exception") def test_ondisk_search_loop_meta_data_ts(self): df = self._simple_get_diskfile() self._create_ondisk_file(df, '', ext='.meta', timestamp=10) self._create_ondisk_file(df, '', ext='.meta', timestamp=9) self._create_ondisk_file(df, 'B', ext='.data', timestamp=8) self._create_ondisk_file(df, 'A', ext='.data', timestamp=7) if df.policy.policy_type == EC_POLICY: self._create_ondisk_file(df, '', ext='.durable', timestamp=8) self._create_ondisk_file(df, '', 
ext='.durable', timestamp=7) self._create_ondisk_file(df, '', ext='.ts', timestamp=6) self._create_ondisk_file(df, '', ext='.ts', timestamp=5) df = self._simple_get_diskfile() with df.open(): self.assertIn('X-Timestamp', df._metadata) self.assertEqual(df._metadata['X-Timestamp'], Timestamp(10).internal) self.assertNotIn('deleted', df._metadata) def test_ondisk_search_loop_multiple_meta_data(self): df = self._simple_get_diskfile() self._create_ondisk_file(df, '', ext='.meta', timestamp=10, metadata={'X-Object-Meta-User': 'user-meta'}) self._create_ondisk_file(df, '', ext='.meta', timestamp=9, ctype_timestamp=9, metadata={'Content-Type': 'newest', 'X-Object-Meta-User': 'blah'}) self._create_ondisk_file(df, 'B', ext='.data', timestamp=8, metadata={'Content-Type': 'newer'}) self._create_ondisk_file(df, 'A', ext='.data', timestamp=7, metadata={'Content-Type': 'oldest'}) if df.policy.policy_type == EC_POLICY: self._create_ondisk_file(df, '', ext='.durable', timestamp=8) self._create_ondisk_file(df, '', ext='.durable', timestamp=7) df = self._simple_get_diskfile() with df.open(): self.assertTrue('X-Timestamp' in df._metadata) self.assertEqual(df._metadata['X-Timestamp'], Timestamp(10).internal) self.assertTrue('Content-Type' in df._metadata) self.assertEqual(df._metadata['Content-Type'], 'newest') self.assertTrue('X-Object-Meta-User' in df._metadata) self.assertEqual(df._metadata['X-Object-Meta-User'], 'user-meta') def test_ondisk_search_loop_stale_meta_data(self): df = self._simple_get_diskfile() self._create_ondisk_file(df, '', ext='.meta', timestamp=10, metadata={'X-Object-Meta-User': 'user-meta'}) self._create_ondisk_file(df, '', ext='.meta', timestamp=9, ctype_timestamp=7, metadata={'Content-Type': 'older', 'X-Object-Meta-User': 'blah'}) self._create_ondisk_file(df, 'B', ext='.data', timestamp=8, metadata={'Content-Type': 'newer'}) if df.policy.policy_type == EC_POLICY: self._create_ondisk_file(df, '', ext='.durable', timestamp=8) df = self._simple_get_diskfile() with df.open(): self.assertTrue('X-Timestamp' in df._metadata) self.assertEqual(df._metadata['X-Timestamp'], Timestamp(10).internal) self.assertTrue('Content-Type' in df._metadata) self.assertEqual(df._metadata['Content-Type'], 'newer') self.assertTrue('X-Object-Meta-User' in df._metadata) self.assertEqual(df._metadata['X-Object-Meta-User'], 'user-meta') def test_ondisk_search_loop_data_ts_meta(self): df = self._simple_get_diskfile() self._create_ondisk_file(df, 'B', ext='.data', timestamp=10) self._create_ondisk_file(df, 'A', ext='.data', timestamp=9) if df.policy.policy_type == EC_POLICY: self._create_ondisk_file(df, '', ext='.durable', timestamp=10) self._create_ondisk_file(df, '', ext='.durable', timestamp=9) self._create_ondisk_file(df, '', ext='.ts', timestamp=8) self._create_ondisk_file(df, '', ext='.ts', timestamp=7) self._create_ondisk_file(df, '', ext='.meta', timestamp=6) self._create_ondisk_file(df, '', ext='.meta', timestamp=5) df = self._simple_get_diskfile() with df.open(): self.assertIn('X-Timestamp', df._metadata) self.assertEqual(df._metadata['X-Timestamp'], Timestamp(10).internal) self.assertNotIn('deleted', df._metadata) def test_ondisk_search_loop_wayward_files_ignored(self): df = self._simple_get_diskfile() self._create_ondisk_file(df, 'X', ext='.bar', timestamp=11) self._create_ondisk_file(df, 'B', ext='.data', timestamp=10) self._create_ondisk_file(df, 'A', ext='.data', timestamp=9) if df.policy.policy_type == EC_POLICY: self._create_ondisk_file(df, '', ext='.durable', timestamp=10) 
self._create_ondisk_file(df, '', ext='.durable', timestamp=9) self._create_ondisk_file(df, '', ext='.ts', timestamp=8) self._create_ondisk_file(df, '', ext='.ts', timestamp=7) self._create_ondisk_file(df, '', ext='.meta', timestamp=6) self._create_ondisk_file(df, '', ext='.meta', timestamp=5) df = self._simple_get_diskfile() with df.open(): self.assertIn('X-Timestamp', df._metadata) self.assertEqual(df._metadata['X-Timestamp'], Timestamp(10).internal) self.assertNotIn('deleted', df._metadata) def test_ondisk_search_loop_listdir_error(self): df = self._simple_get_diskfile() def mock_listdir_exp(*args, **kwargs): raise OSError(errno.EACCES, os.strerror(errno.EACCES)) with mock.patch("os.listdir", mock_listdir_exp): self._create_ondisk_file(df, 'X', ext='.bar', timestamp=11) self._create_ondisk_file(df, 'B', ext='.data', timestamp=10) self._create_ondisk_file(df, 'A', ext='.data', timestamp=9) if df.policy.policy_type == EC_POLICY: self._create_ondisk_file(df, '', ext='.durable', timestamp=10) self._create_ondisk_file(df, '', ext='.durable', timestamp=9) self._create_ondisk_file(df, '', ext='.ts', timestamp=8) self._create_ondisk_file(df, '', ext='.ts', timestamp=7) self._create_ondisk_file(df, '', ext='.meta', timestamp=6) self._create_ondisk_file(df, '', ext='.meta', timestamp=5) df = self._simple_get_diskfile() self.assertRaises(DiskFileError, df.open) def test_exception_in_handle_close_quarantine(self): df = self._get_open_disk_file() def blow_up(): raise Exception('a very special error') reader = df.reader() reader._handle_close_quarantine = blow_up for _ in reader: pass reader.close() log_lines = df._logger.get_lines_for_level('error') self.assertIn('a very special error', log_lines[-1]) def test_diskfile_names(self): df = self._simple_get_diskfile() self.assertEqual(df.account, 'a') self.assertEqual(df.container, 'c') self.assertEqual(df.obj, 'o') def test_diskfile_content_length_not_open(self): df = self._simple_get_diskfile() exc = None try: df.content_length except DiskFileNotOpen as err: exc = err self.assertEqual(str(exc), '') def test_diskfile_content_length_deleted(self): df = self._get_open_disk_file() ts = time() df.delete(ts) exp_name = '%s.ts' % str(Timestamp(ts).internal) dl = os.listdir(df._datadir) self.assertEqual(len(dl), 1) self.assertIn(exp_name, set(dl)) df = self._simple_get_diskfile() exc = None try: with df.open(): df.content_length except DiskFileDeleted as err: exc = err self.assertEqual(str(exc), '') def test_diskfile_content_length(self): self._get_open_disk_file() df = self._simple_get_diskfile() with df.open(): if df.policy.policy_type == EC_POLICY: expected = df.policy.pyeclib_driver.get_segment_info( 1024, df.policy.ec_segment_size)['fragment_size'] else: expected = 1024 self.assertEqual(df.content_length, expected) def test_diskfile_timestamp_not_open(self): df = self._simple_get_diskfile() exc = None try: df.timestamp except DiskFileNotOpen as err: exc = err self.assertEqual(str(exc), '') def test_diskfile_timestamp_deleted(self): df = self._get_open_disk_file() ts = time() df.delete(ts) exp_name = '%s.ts' % str(Timestamp(ts).internal) dl = os.listdir(df._datadir) self.assertEqual(len(dl), 1) self.assertIn(exp_name, set(dl)) df = self._simple_get_diskfile() exc = None try: with df.open(): df.timestamp except DiskFileDeleted as err: exc = err self.assertEqual(str(exc), '') def test_diskfile_timestamp(self): ts_1 = self.ts() self._get_open_disk_file(ts=ts_1.internal) df = self._simple_get_diskfile() with df.open(): self.assertEqual(df.timestamp, 
ts_1.internal) ts_2 = self.ts() df.write_metadata({'X-Timestamp': ts_2.internal}) with df.open(): self.assertEqual(df.timestamp, ts_2.internal) def test_data_timestamp(self): ts_1 = self.ts() self._get_open_disk_file(ts=ts_1.internal) df = self._simple_get_diskfile() with df.open(): self.assertEqual(df.data_timestamp, ts_1.internal) ts_2 = self.ts() df.write_metadata({'X-Timestamp': ts_2.internal}) with df.open(): self.assertEqual(df.data_timestamp, ts_1.internal) def test_data_timestamp_not_open(self): df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotOpen): df.data_timestamp def test_content_type_and_timestamp(self): ts_1 = self.ts() self._get_open_disk_file(ts=ts_1.internal, extra_metadata={'Content-Type': 'image/jpeg'}) df = self._simple_get_diskfile() with df.open(): self.assertEqual(ts_1.internal, df.data_timestamp) self.assertEqual(ts_1.internal, df.timestamp) self.assertEqual(ts_1.internal, df.content_type_timestamp) self.assertEqual('image/jpeg', df.content_type) ts_2 = self.ts() ts_3 = self.ts() df.write_metadata({'X-Timestamp': ts_3.internal, 'Content-Type': 'image/gif', 'Content-Type-Timestamp': ts_2.internal}) with df.open(): self.assertEqual(ts_1.internal, df.data_timestamp) self.assertEqual(ts_3.internal, df.timestamp) self.assertEqual(ts_2.internal, df.content_type_timestamp) self.assertEqual('image/gif', df.content_type) def test_content_type_timestamp_not_open(self): df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotOpen): df.content_type_timestamp def test_content_type_not_open(self): df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotOpen): df.content_type def test_durable_timestamp(self): ts_1 = self.ts() df = self._get_open_disk_file(ts=ts_1.internal) with df.open(): self.assertEqual(df.durable_timestamp, ts_1.internal) # verify durable timestamp does not change when metadata is written ts_2 = self.ts() df.write_metadata({'X-Timestamp': ts_2.internal}) with df.open(): self.assertEqual(df.durable_timestamp, ts_1.internal) def test_durable_timestamp_not_open(self): df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotOpen): df.durable_timestamp def test_durable_timestamp_no_data_file(self): df = self._get_open_disk_file(self.ts().internal) for f in os.listdir(df._datadir): if f.endswith('.data'): os.unlink(os.path.join(df._datadir, f)) df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotExist): df.open() # open() was attempted, but no data file so expect None self.assertIsNone(df.durable_timestamp) def test_error_in_hash_cleanup_listdir(self): def mock_hcl(*args, **kwargs): raise OSError() df = self._get_open_disk_file() file_count = len(os.listdir(df._datadir)) ts = time() with mock.patch(self._manager_mock('cleanup_ondisk_files'), mock_hcl): try: df.delete(ts) except OSError: self.fail("OSError raised when it should have been swallowed") exp_name = '%s.ts' % str(Timestamp(ts).internal) dl = os.listdir(df._datadir) self.assertEqual(len(dl), file_count + 1) self.assertIn(exp_name, set(dl)) def _system_can_zero_copy(self): if not splice.available: return False try: utils.get_md5_socket() except IOError: return False return True def test_zero_copy_cache_dropping(self): if not self._system_can_zero_copy(): raise SkipTest("zero-copy support is missing") self.conf['splice'] = 'on' self.conf['keep_cache_size'] = 16384 self.conf['disk_chunk_size'] = 4096 df = self._get_open_disk_file(fsize=163840) reader = df.reader() self.assertTrue(reader.can_zero_copy_send()) with 
mock.patch("swift.obj.diskfile.drop_buffer_cache") as dbc: with mock.patch("swift.obj.diskfile.DROP_CACHE_WINDOW", 4095): with open('/dev/null', 'w') as devnull: reader.zero_copy_send(devnull.fileno()) if df.policy.policy_type == EC_POLICY: expected = 4 + 1 else: expected = (4 * 10) + 1 self.assertEqual(len(dbc.mock_calls), expected) def test_zero_copy_turns_off_when_md5_sockets_not_supported(self): if not self._system_can_zero_copy(): raise SkipTest("zero-copy support is missing") df_mgr = self.df_router[POLICIES.default] self.conf['splice'] = 'on' with mock.patch('swift.obj.diskfile.get_md5_socket') as mock_md5sock: mock_md5sock.side_effect = IOError( errno.EAFNOSUPPORT, "MD5 socket busted") df = self._get_open_disk_file(fsize=128) reader = df.reader() self.assertFalse(reader.can_zero_copy_send()) log_lines = df_mgr.logger.get_lines_for_level('warning') self.assertIn('MD5 sockets', log_lines[-1]) def test_tee_to_md5_pipe_length_mismatch(self): if not self._system_can_zero_copy(): raise SkipTest("zero-copy support is missing") self.conf['splice'] = 'on' df = self._get_open_disk_file(fsize=16385) reader = df.reader() self.assertTrue(reader.can_zero_copy_send()) with mock.patch('swift.obj.diskfile.tee') as mock_tee: mock_tee.side_effect = lambda _1, _2, _3, cnt: cnt - 1 with open('/dev/null', 'w') as devnull: exc_re = (r'tee\(\) failed: tried to move \d+ bytes, but only ' 'moved -?\d+') try: reader.zero_copy_send(devnull.fileno()) except Exception as e: self.assertTrue(re.match(exc_re, str(e))) else: self.fail('Expected Exception was not raised') def test_splice_to_wsockfd_blocks(self): if not self._system_can_zero_copy(): raise SkipTest("zero-copy support is missing") self.conf['splice'] = 'on' df = self._get_open_disk_file(fsize=16385) reader = df.reader() self.assertTrue(reader.can_zero_copy_send()) def _run_test(): # Set up mock of `splice` splice_called = [False] # State hack def fake_splice(fd_in, off_in, fd_out, off_out, len_, flags): if fd_out == devnull.fileno() and not splice_called[0]: splice_called[0] = True err = errno.EWOULDBLOCK raise IOError(err, os.strerror(err)) return splice(fd_in, off_in, fd_out, off_out, len_, flags) mock_splice.side_effect = fake_splice # Set up mock of `trampoline` # There are 2 reasons to mock this: # # - We want to ensure it's called with the expected arguments at # least once # - When called with our write FD (which points to `/dev/null`), we # can't actually call `trampoline`, because adding such FD to an # `epoll` handle results in `EPERM` def fake_trampoline(fd, read=None, write=None, timeout=None, timeout_exc=timeout.Timeout, mark_as_closed=None): if write and fd == devnull.fileno(): return else: hubs.trampoline(fd, read=read, write=write, timeout=timeout, timeout_exc=timeout_exc, mark_as_closed=mark_as_closed) mock_trampoline.side_effect = fake_trampoline reader.zero_copy_send(devnull.fileno()) # Assert the end of `zero_copy_send` was reached self.assertTrue(mock_close.called) # Assert there was at least one call to `trampoline` waiting for # `write` access to the output FD mock_trampoline.assert_any_call(devnull.fileno(), write=True) # Assert at least one call to `splice` with the output FD we expect for call in mock_splice.call_args_list: args = call[0] if args[2] == devnull.fileno(): break else: self.fail('`splice` not called with expected arguments') with mock.patch('swift.obj.diskfile.splice') as mock_splice: with mock.patch.object( reader, 'close', side_effect=reader.close) as mock_close: with open('/dev/null', 'w') as devnull: with 
mock.patch('swift.obj.diskfile.trampoline') as \ mock_trampoline: _run_test() def test_create_unlink_cleanup_DiskFileNoSpace(self): # Test cleanup when DiskFileNoSpace() is raised. df = self.df_mgr.get_diskfile(self.existing_device, '0', 'abc', '123', 'xyz', policy=POLICIES.legacy) _m_fallocate = mock.MagicMock(side_effect=OSError(errno.ENOSPC, os.strerror(errno.ENOSPC))) _m_unlink = mock.Mock() with mock.patch("swift.obj.diskfile.fallocate", _m_fallocate): with mock.patch("os.unlink", _m_unlink): try: with df.create(size=100): pass except DiskFileNoSpace: pass else: self.fail("Expected exception DiskFileNoSpace") self.assertTrue(_m_fallocate.called) self.assertTrue(_m_unlink.called) self.assertNotIn('error', self.logger.all_log_lines()) def test_create_unlink_cleanup_renamer_fails(self): # Test cleanup when renamer fails _m_renamer = mock.MagicMock(side_effect=OSError(errno.ENOENT, os.strerror(errno.ENOENT))) _m_unlink = mock.Mock() df = self._simple_get_diskfile() data = '0' * 100 metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': Timestamp(time()).internal, 'Content-Length': str(100), } with mock.patch("swift.obj.diskfile.renamer", _m_renamer): with mock.patch("os.unlink", _m_unlink): try: with df.create(size=100) as writer: writer.write(data) writer.put(metadata) except OSError: pass else: self.fail("Expected OSError exception") self.assertFalse(writer.put_succeeded) self.assertTrue(_m_renamer.called) self.assertTrue(_m_unlink.called) self.assertNotIn('error', self.logger.all_log_lines()) def test_create_unlink_cleanup_logging(self): # Test logging of os.unlink() failures. df = self.df_mgr.get_diskfile(self.existing_device, '0', 'abc', '123', 'xyz', policy=POLICIES.legacy) _m_fallocate = mock.MagicMock(side_effect=OSError(errno.ENOSPC, os.strerror(errno.ENOSPC))) _m_unlink = mock.MagicMock(side_effect=OSError(errno.ENOENT, os.strerror(errno.ENOENT))) with mock.patch("swift.obj.diskfile.fallocate", _m_fallocate): with mock.patch("os.unlink", _m_unlink): try: with df.create(size=100): pass except DiskFileNoSpace: pass else: self.fail("Expected exception DiskFileNoSpace") self.assertTrue(_m_fallocate.called) self.assertTrue(_m_unlink.called) error_lines = self.logger.get_lines_for_level('error') for line in error_lines: self.assertTrue(line.startswith("Error removing tempfile:")) @patch_policies(test_policies) class TestDiskFile(DiskFileMixin, unittest.TestCase): mgr_cls = diskfile.DiskFileManager @patch_policies(with_ec_default=True) class TestECDiskFile(DiskFileMixin, unittest.TestCase): mgr_cls = diskfile.ECDiskFileManager def test_commit_raises_DiskFileErrors(self): scenarios = ((errno.ENOSPC, DiskFileNoSpace), (errno.EDQUOT, DiskFileNoSpace), (errno.ENOTDIR, DiskFileError), (errno.EPERM, DiskFileError)) # Check IOErrors from open() is handled for err_number, expected_exception in scenarios: io_error = IOError() io_error.errno = err_number mock_open = mock.MagicMock(side_effect=io_error) df = self._simple_get_diskfile(account='a', container='c', obj='o_%s' % err_number, policy=POLICIES.default) timestamp = Timestamp(time()) with df.create() as writer: metadata = { 'ETag': 'bogus_etag', 'X-Timestamp': timestamp.internal, 'Content-Length': '0', } writer.put(metadata) with mock.patch('six.moves.builtins.open', mock_open): self.assertRaises(expected_exception, writer.commit, timestamp) dl = os.listdir(df._datadir) self.assertEqual(1, len(dl), dl) rmtree(df._datadir) # Check OSError from fsync() is handled mock_fsync = mock.MagicMock(side_effect=OSError) df = 
self._simple_get_diskfile(account='a', container='c', obj='o_fsync_error') timestamp = Timestamp(time()) with df.create() as writer: metadata = { 'ETag': 'bogus_etag', 'X-Timestamp': timestamp.internal, 'Content-Length': '0', } writer.put(metadata) with mock.patch('swift.obj.diskfile.fsync', mock_fsync): self.assertRaises(DiskFileError, writer.commit, timestamp) def test_commit_fsync_dir_raises_DiskFileErrors(self): scenarios = ((errno.ENOSPC, DiskFileNoSpace), (errno.EDQUOT, DiskFileNoSpace), (errno.ENOTDIR, DiskFileError), (errno.EPERM, DiskFileError)) # Check IOErrors from fsync_dir() is handled for err_number, expected_exception in scenarios: io_error = IOError(err_number, os.strerror(err_number)) mock_open = mock.MagicMock(side_effect=io_error) mock_io_error = mock.MagicMock(side_effect=io_error) df = self._simple_get_diskfile(account='a', container='c', obj='o_%s' % err_number, policy=POLICIES.default) timestamp = Timestamp(time()) with df.create() as writer: metadata = { 'ETag': 'bogus_etag', 'X-Timestamp': timestamp.internal, 'Content-Length': '0', } writer.put(metadata) with mock.patch('six.moves.builtins.open', mock_open): self.assertRaises(expected_exception, writer.commit, timestamp) with mock.patch('swift.obj.diskfile.fsync_dir', mock_io_error): self.assertRaises(expected_exception, writer.commit, timestamp) dl = os.listdir(df._datadir) self.assertEqual(2, len(dl), dl) rmtree(df._datadir) # Check OSError from fsync_dir() is handled mock_os_error = mock.MagicMock( side_effect=OSError(100, 'Some Error')) df = self._simple_get_diskfile(account='a', container='c', obj='o_fsync_dir_error') timestamp = Timestamp(time()) with df.create() as writer: metadata = { 'ETag': 'bogus_etag', 'X-Timestamp': timestamp.internal, 'Content-Length': '0', } writer.put(metadata) with mock.patch('swift.obj.diskfile.fsync_dir', mock_os_error): self.assertRaises(DiskFileError, writer.commit, timestamp) def test_data_file_has_frag_index(self): policy = POLICIES.default for good_value in (0, '0', 2, '2', 14, '14'): # frag_index set by constructor arg ts = self.ts().internal expected = ['%s#%s.data' % (ts, good_value), '%s.durable' % ts] df = self._get_open_disk_file(ts=ts, policy=policy, frag_index=good_value) self.assertEqual(expected, sorted(os.listdir(df._datadir))) # frag index should be added to object sysmeta actual = df.get_metadata().get('X-Object-Sysmeta-Ec-Frag-Index') self.assertEqual(int(good_value), int(actual)) # metadata value overrides the constructor arg ts = self.ts().internal expected = ['%s#%s.data' % (ts, good_value), '%s.durable' % ts] meta = {'X-Object-Sysmeta-Ec-Frag-Index': good_value} df = self._get_open_disk_file(ts=ts, policy=policy, frag_index='99', extra_metadata=meta) self.assertEqual(expected, sorted(os.listdir(df._datadir))) actual = df.get_metadata().get('X-Object-Sysmeta-Ec-Frag-Index') self.assertEqual(int(good_value), int(actual)) # metadata value alone is sufficient ts = self.ts().internal expected = ['%s#%s.data' % (ts, good_value), '%s.durable' % ts] meta = {'X-Object-Sysmeta-Ec-Frag-Index': good_value} df = self._get_open_disk_file(ts=ts, policy=policy, frag_index=None, extra_metadata=meta) self.assertEqual(expected, sorted(os.listdir(df._datadir))) actual = df.get_metadata().get('X-Object-Sysmeta-Ec-Frag-Index') self.assertEqual(int(good_value), int(actual)) def test_sysmeta_frag_index_is_immutable(self): # the X-Object-Sysmeta-Ec-Frag-Index should *only* be set when # the .data file is written. 
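# (For reference, and as the expectations below spell out: an EC diskfile is stored # on disk as <timestamp>#<frag_index>.data next to a <timestamp>.durable marker, and # the frag index is also mirrored into X-Object-Sysmeta-Ec-Frag-Index - the # <...> placeholder naming in this note is illustrative only.)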
policy = POLICIES.default orig_frag_index = 14 # frag_index set by constructor arg ts = self.ts().internal expected = ['%s#%s.data' % (ts, orig_frag_index), '%s.durable' % ts] df = self._get_open_disk_file(ts=ts, policy=policy, obj_name='my_obj', frag_index=orig_frag_index) self.assertEqual(expected, sorted(os.listdir(df._datadir))) # frag index should be added to object sysmeta actual = df.get_metadata().get('X-Object-Sysmeta-Ec-Frag-Index') self.assertEqual(int(orig_frag_index), int(actual)) # open the same diskfile with no frag_index passed to constructor df = self.df_router[policy].get_diskfile( self.existing_device, 0, 'a', 'c', 'my_obj', policy=policy, frag_index=None) df.open() actual = df.get_metadata().get('X-Object-Sysmeta-Ec-Frag-Index') self.assertEqual(int(orig_frag_index), int(actual)) # write metadata to a meta file ts = self.ts().internal metadata = {'X-Timestamp': ts, 'X-Object-Meta-Fruit': 'kiwi'} df.write_metadata(metadata) # sanity check we did write a meta file expected.append('%s.meta' % ts) actual_files = sorted(os.listdir(df._datadir)) self.assertEqual(expected, actual_files) # open the same diskfile, check frag index is unchanged df = self.df_router[policy].get_diskfile( self.existing_device, 0, 'a', 'c', 'my_obj', policy=policy, frag_index=None) df.open() # sanity check we have read the meta file self.assertEqual(ts, df.get_metadata().get('X-Timestamp')) self.assertEqual('kiwi', df.get_metadata().get('X-Object-Meta-Fruit')) # check frag index sysmeta is unchanged actual = df.get_metadata().get('X-Object-Sysmeta-Ec-Frag-Index') self.assertEqual(int(orig_frag_index), int(actual)) # attempt to overwrite frag index sysmeta ts = self.ts().internal metadata = {'X-Timestamp': ts, 'X-Object-Sysmeta-Ec-Frag-Index': 99, 'X-Object-Meta-Fruit': 'apple'} df.write_metadata(metadata) # open the same diskfile, check frag index is unchanged df = self.df_router[policy].get_diskfile( self.existing_device, 0, 'a', 'c', 'my_obj', policy=policy, frag_index=None) df.open() # sanity check we have read the meta file self.assertEqual(ts, df.get_metadata().get('X-Timestamp')) self.assertEqual('apple', df.get_metadata().get('X-Object-Meta-Fruit')) actual = df.get_metadata().get('X-Object-Sysmeta-Ec-Frag-Index') self.assertEqual(int(orig_frag_index), int(actual)) def test_data_file_errors_bad_frag_index(self): policy = POLICIES.default df_mgr = self.df_router[policy] for bad_value in ('foo', '-2', -2, '3.14', 3.14): # check that bad frag_index set by constructor arg raises error # as soon as diskfile is constructed, before data is written self.assertRaises(DiskFileError, self._simple_get_diskfile, policy=policy, frag_index=bad_value) # bad frag_index set by metadata value # (drive-by check that it is ok for constructor arg to be None) df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy, frag_index=None) ts = self.ts() meta = {'X-Object-Sysmeta-Ec-Frag-Index': bad_value, 'X-Timestamp': ts.internal, 'Content-Length': 0, 'Etag': EMPTY_ETAG, 'Content-Type': 'plain/text'} with df.create() as writer: try: writer.put(meta) self.fail('Expected DiskFileError for frag_index %s' % bad_value) except DiskFileError: pass # bad frag_index set by metadata value overrides ok constructor arg df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy, frag_index=2) ts = self.ts() meta = {'X-Object-Sysmeta-Ec-Frag-Index': bad_value, 'X-Timestamp': ts.internal, 'Content-Length': 0, 'Etag': EMPTY_ETAG, 'Content-Type': 'plain/text'} with df.create() as writer: try: 
writer.put(meta) self.fail('Expected DiskFileError for frag_index %s' % bad_value) except DiskFileError: pass def test_purge_one_fragment_index(self): ts = self.ts() for frag_index in (1, 2): df = self._simple_get_diskfile(frag_index=frag_index) with df.create() as writer: data = 'test data' writer.write(data) metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': ts.internal, 'Content-Length': len(data), } writer.put(metadata) writer.commit(ts) # sanity self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '#1.data', ts.internal + '#2.data', ts.internal + '.durable', ]) df.purge(ts, 2) self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '#1.data', ts.internal + '.durable', ]) def test_purge_last_fragment_index(self): ts = self.ts() frag_index = 0 df = self._simple_get_diskfile(frag_index=frag_index) with df.create() as writer: data = 'test data' writer.write(data) metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': ts.internal, 'Content-Length': len(data), } writer.put(metadata) writer.commit(ts) # sanity self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '#0.data', ts.internal + '.durable', ]) df.purge(ts, 0) self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '.durable', ]) def test_purge_non_existent_fragment_index(self): ts = self.ts() frag_index = 7 df = self._simple_get_diskfile(frag_index=frag_index) with df.create() as writer: data = 'test data' writer.write(data) metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': ts.internal, 'Content-Length': len(data), } writer.put(metadata) writer.commit(ts) # sanity self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '#7.data', ts.internal + '.durable', ]) df.purge(ts, 3) # no effect self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '#7.data', ts.internal + '.durable', ]) def test_purge_old_timestamp_frag_index(self): old_ts = self.ts() ts = self.ts() frag_index = 1 df = self._simple_get_diskfile(frag_index=frag_index) with df.create() as writer: data = 'test data' writer.write(data) metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': ts.internal, 'Content-Length': len(data), } writer.put(metadata) writer.commit(ts) # sanity self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '#1.data', ts.internal + '.durable', ]) df.purge(old_ts, 1) # no effect self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '#1.data', ts.internal + '.durable', ]) def test_purge_tombstone(self): ts = self.ts() df = self._simple_get_diskfile(frag_index=3) df.delete(ts) # sanity self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '.ts', ]) df.purge(ts, 3) self.assertEqual(sorted(os.listdir(df._datadir)), []) def test_purge_without_frag(self): ts = self.ts() df = self._simple_get_diskfile() df.delete(ts) # sanity self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '.ts', ]) df.purge(ts, None) self.assertEqual(sorted(os.listdir(df._datadir)), []) def test_purge_old_tombstone(self): old_ts = self.ts() ts = self.ts() df = self._simple_get_diskfile(frag_index=5) df.delete(ts) # sanity self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '.ts', ]) df.purge(old_ts, 5) # no effect self.assertEqual(sorted(os.listdir(df._datadir)), [ ts.internal + '.ts', ]) def test_purge_already_removed(self): df = self._simple_get_diskfile(frag_index=6) df.purge(self.ts(), 6) # no errors # sanity os.makedirs(df._datadir) self.assertEqual(sorted(os.listdir(df._datadir)), []) df.purge(self.ts(), 6) # no effect 
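# (Summary of the purge behaviour exercised by the tests above: purge(timestamp, # frag_index) is best-effort cleanup - it removes a matching .data fragment or # tombstone, a non-matching timestamp or frag index is a no-op, and purging an # already-removed diskfile is not an error.)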
self.assertEqual(sorted(os.listdir(df._datadir)), []) def test_open_most_recent_durable(self): policy = POLICIES.default df_mgr = self.df_router[policy] df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) ts = self.ts() with df.create() as writer: data = 'test data' writer.write(data) metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': ts.internal, 'Content-Length': len(data), 'X-Object-Sysmeta-Ec-Frag-Index': 3, } writer.put(metadata) writer.commit(ts) # add some .meta stuff extra_meta = { 'X-Object-Meta-Foo': 'Bar', 'X-Timestamp': self.ts().internal, } df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) df.write_metadata(extra_meta) # sanity df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) metadata.update(extra_meta) self.assertEqual(metadata, df.read_metadata()) # add a newer datafile df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) ts = self.ts() with df.create() as writer: data = 'test data' writer.write(data) new_metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': ts.internal, 'Content-Length': len(data), 'X-Object-Sysmeta-Ec-Frag-Index': 3, } writer.put(new_metadata) # N.B. don't make it durable # and we still get the old metadata (same as if no .data!) df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) self.assertEqual(metadata, df.read_metadata()) def test_open_most_recent_missing_durable(self): policy = POLICIES.default df_mgr = self.df_router[policy] df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) self.assertRaises(DiskFileNotExist, df.read_metadata) # now create a datafile missing durable ts = self.ts() with df.create() as writer: data = 'test data' writer.write(data) new_metadata = { 'ETag': md5(data).hexdigest(), 'X-Timestamp': ts.internal, 'Content-Length': len(data), 'X-Object-Sysmeta-Ec-Frag-Index': 3, } writer.put(new_metadata) # N.B. don't make it durable # add some .meta stuff extra_meta = { 'X-Object-Meta-Foo': 'Bar', 'X-Timestamp': self.ts().internal, } df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) df.write_metadata(extra_meta) # we still get the DiskFileNotExist (same as if no .data!) df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy, frag_index=3) self.assertRaises(DiskFileNotExist, df.read_metadata) # sanity, without the frag_index kwarg df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) self.assertRaises(DiskFileNotExist, df.read_metadata) def test_fragments(self): ts_1 = self.ts() self._get_open_disk_file(ts=ts_1.internal, frag_index=0) df = self._get_open_disk_file(ts=ts_1.internal, frag_index=2) self.assertEqual(df.fragments, {ts_1: [0, 2]}) # now add a newer datafile for frag index 3 but don't write a # durable with it (so ignore the error when we try to open) ts_2 = self.ts() try: df = self._get_open_disk_file(ts=ts_2.internal, frag_index=3, commit=False) except DiskFileNotExist: pass # sanity check: should have 2 * .data, .durable, .data files = os.listdir(df._datadir) self.assertEqual(4, len(files)) with df.open(): self.assertEqual(df.fragments, {ts_1: [0, 2], ts_2: [3]}) # verify frags available even if open fails e.g. 
if .durable missing for f in filter(lambda f: f.endswith('.durable'), files): os.remove(os.path.join(df._datadir, f)) self.assertRaises(DiskFileNotExist, df.open) self.assertEqual(df.fragments, {ts_1: [0, 2], ts_2: [3]}) def test_fragments_not_open(self): df = self._simple_get_diskfile() self.assertIsNone(df.fragments) def test_durable_timestamp_no_durable_file(self): try: self._get_open_disk_file(self.ts().internal, commit=False) except DiskFileNotExist: pass df = self._simple_get_diskfile() with self.assertRaises(DiskFileNotExist): df.open() # open() was attempted, but no durable file so expect None self.assertIsNone(df.durable_timestamp) def test_durable_timestamp_missing_frag_index(self): ts1 = self.ts() self._get_open_disk_file(ts=ts1.internal, frag_index=1) df = self._simple_get_diskfile(frag_index=2) with self.assertRaises(DiskFileNotExist): df.open() # open() was attempted, but no data file for frag index so expect None self.assertIsNone(df.durable_timestamp) def test_durable_timestamp_newer_non_durable_data_file(self): ts1 = self.ts() self._get_open_disk_file(ts=ts1.internal) ts2 = self.ts() try: self._get_open_disk_file(ts=ts2.internal, commit=False) except DiskFileNotExist: pass df = self._simple_get_diskfile() # sanity check - one .durable file, two .data files self.assertEqual(3, len(os.listdir(df._datadir))) df.open() self.assertEqual(ts1, df.durable_timestamp) def test_disk_file_app_iter_ranges_checks_only_aligned_frag_data(self): policy = POLICIES.default frag_size = policy.fragment_size # make sure there are two fragment size worth of data on disk data = 'ab' * policy.ec_segment_size df, df_data = self._create_test_file(data) quarantine_msgs = [] reader = df.reader(_quarantine_hook=quarantine_msgs.append) # each range uses a fresh reader app_iter_range which triggers a disk # read at the range offset - make sure each of those disk reads will # fetch an amount of data from disk that is greater than but not equal # to a fragment size reader._disk_chunk_size = int(frag_size * 1.5) with mock.patch.object( reader._diskfile.policy.pyeclib_driver, 'get_metadata')\ as mock_get_metadata: it = reader.app_iter_ranges( [(0, 10), (10, 20), (frag_size + 20, frag_size + 30)], 'plain/text', '\r\n--someheader\r\n', len(df_data)) value = ''.join(it) # check that only first range which starts at 0 triggers a frag check self.assertEqual(1, mock_get_metadata.call_count) self.assertIn(df_data[:10], value) self.assertIn(df_data[10:20], value) self.assertIn(df_data[frag_size + 20:frag_size + 30], value) self.assertEqual(quarantine_msgs, []) def test_reader_quarantines_corrupted_ec_archive(self): # This has same purpose as # TestAuditor.test_object_audit_checks_EC_fragments just making # sure that checks happen in DiskFileReader layer. 
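# (What these reader tests rely on: ECDiskFileReader checks each fragment's EC # metadata as chunks are read; a failed check raises DiskFileQuarantined with an # 'Invalid EC metadata at offset ...' message, the object is quarantined, and a # later open() therefore raises DiskFileNotExist.)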
policy = POLICIES.default df, df_data = self._create_test_file('x' * policy.ec_segment_size, timestamp=self.ts()) def do_test(corrupted_frag_body, expected_offset, expected_read): # expected_offset is offset at which corruption should be reported # expected_read is number of bytes that should be read before the # exception is raised ts = self.ts() write_diskfile(df, ts, corrupted_frag_body) # no error occurs when the diskfile is opened; # reading the first corrupt frag is sufficient to detect the corruption df.open() with self.assertRaises(DiskFileQuarantined) as cm: reader = df.reader() reader._disk_chunk_size = int(policy.fragment_size) bytes_read = 0 for chunk in reader: bytes_read += len(chunk) with self.assertRaises(DiskFileNotExist): df.open() self.assertEqual(expected_read, bytes_read) self.assertEqual('Invalid EC metadata at offset 0x%x' % expected_offset, cm.exception.message) # TODO with liberasurecode < 1.2.0 the EC metadata verification checks # only the magic number at offset 59 bytes into the frag so we'll # corrupt up to and including that. Once liberasurecode >= 1.2.0 is # required we should be able to reduce the corruption length. corruption_length = 64 # a corrupted first frag can be detected corrupted_frag_body = (' ' * corruption_length + df_data[corruption_length:]) do_test(corrupted_frag_body, 0, 0) # a corrupted second frag can also be detected corrupted_frag_body = (df_data + ' ' * corruption_length + df_data[corruption_length:]) do_test(corrupted_frag_body, len(df_data), len(df_data)) # if the second frag is shorter than frag size then corruption is # detected when the reader is closed corrupted_frag_body = (df_data + ' ' * corruption_length + df_data[corruption_length:-10]) do_test(corrupted_frag_body, len(df_data), len(corrupted_frag_body)) def test_reader_ec_exception_causes_quarantine(self): policy = POLICIES.default def do_test(exception): df, df_data = self._create_test_file('x' * policy.ec_segment_size, timestamp=self.ts()) df.manager.logger.clear() with mock.patch.object(df.policy.pyeclib_driver, 'get_metadata', side_effect=exception): df.open() with self.assertRaises(DiskFileQuarantined) as cm: for chunk in df.reader(): pass with self.assertRaises(DiskFileNotExist): df.open() self.assertEqual('Invalid EC metadata at offset 0x0', cm.exception.message) log_lines = df.manager.logger.get_lines_for_level('warning') self.assertIn('Quarantined object', log_lines[0]) self.assertIn('Invalid EC metadata at offset 0x0', log_lines[0]) do_test(pyeclib.ec_iface.ECInvalidFragmentMetadata('testing')) do_test(pyeclib.ec_iface.ECBadFragmentChecksum('testing')) do_test(pyeclib.ec_iface.ECInvalidParameter('testing')) def test_reader_ec_exception_does_not_cause_quarantine(self): # ECDriverError should not cause quarantine, only certain subclasses policy = POLICIES.default df, df_data = self._create_test_file('x' * policy.ec_segment_size, timestamp=self.ts()) with mock.patch.object( df.policy.pyeclib_driver, 'get_metadata', side_effect=pyeclib.ec_iface.ECDriverError('testing')): df.open() read_data = ''.join([d for d in df.reader()]) self.assertEqual(df_data, read_data) log_lines = df.manager.logger.get_lines_for_level('warning') self.assertIn('Problem checking EC fragment', log_lines[0]) df.open() # not quarantined def test_reader_frag_check_does_not_quarantine_if_its_not_binary(self): # This may look weird but for super-safety, check the # ECDiskFileReader._frag_check doesn't quarantine when a non-binary # type chunk comes in (that would occur only from a coding bug) policy = 
POLICIES.default df, df_data = self._create_test_file('x' * policy.ec_segment_size, timestamp=self.ts()) df.open() for invalid_type_chunk in (None, [], [[]], 1): reader = df.reader() reader._check_frag(invalid_type_chunk) # None and [] are just skipped and [[]] and 1 are detected as invalid # chunks log_lines = df.manager.logger.get_lines_for_level('warning') self.assertEqual(2, len(log_lines)) for log_line in log_lines: self.assertIn( 'Unexpected fragment data type (not quarantined)', log_line) df.open() # not quarantined @patch_policies(with_ec_default=True) class TestSuffixHashes(unittest.TestCase): """ This tests all things related to hashing suffixes and therefore there's also few test methods for hash_cleanup_listdir as well (because it's used by hash_suffix). The public interface to suffix hashing is on the Manager:: * hash_cleanup_listdir(hsh_path) * get_hashes(device, partition, suffixes, policy) * invalidate_hash(suffix_dir) The Manager.get_hashes method (used by the REPLICATE verb) calls Manager._get_hashes (which may be an alias to the module method get_hashes), which calls hash_suffix, which calls hash_cleanup_listdir. Outside of that, hash_cleanup_listdir and invalidate_hash are used mostly after writing new files via PUT or DELETE. Test methods are organized by:: * hash_cleanup_listdir tests - behaviors * hash_cleanup_listdir tests - error handling * invalidate_hash tests - behavior * invalidate_hash tests - error handling * get_hashes tests - hash_suffix behaviors * get_hashes tests - hash_suffix error handling * get_hashes tests - behaviors * get_hashes tests - error handling """ def setUp(self): self.testdir = tempfile.mkdtemp() self.logger = debug_logger('suffix-hash-test') self.devices = os.path.join(self.testdir, 'node') os.mkdir(self.devices) self.existing_device = 'sda1' os.mkdir(os.path.join(self.devices, self.existing_device)) self.conf = { 'swift_dir': self.testdir, 'devices': self.devices, 'mount_check': False, } self.df_router = diskfile.DiskFileRouter(self.conf, self.logger) self._ts_iter = (Timestamp(t) for t in itertools.count(int(time()))) self.policy = None def ts(self): """ Timestamps - forever. """ return next(self._ts_iter) def fname_to_ts_hash(self, fname): """ EC datafiles are only hashed by their timestamp """ return md5(fname.split('#', 1)[0]).hexdigest() def tearDown(self): rmtree(self.testdir, ignore_errors=1) def iter_policies(self): for policy in POLICIES: self.policy = policy yield policy def assertEqual(self, *args): try: unittest.TestCase.assertEqual(self, *args) except AssertionError as err: if not self.policy: raise policy_trailer = '\n\n... for policy %r' % self.policy raise AssertionError(str(err) + policy_trailer) def _datafilename(self, timestamp, policy, frag_index=None): if frag_index is None: frag_index = randint(0, 9) filename = timestamp.internal if policy.policy_type == EC_POLICY: filename += '#%d' % frag_index filename += '.data' return filename def _metafilename(self, meta_timestamp, ctype_timestamp=None): filename = meta_timestamp.internal if ctype_timestamp is not None: delta = meta_timestamp.raw - ctype_timestamp.raw filename = '%s-%x' % (filename, delta) filename += '.meta' return filename def check_hash_cleanup_listdir(self, policy, input_files, output_files): orig_unlink = os.unlink file_list = list(input_files) def mock_listdir(path): return list(file_list) def mock_unlink(path): # timestamp 1 is a special tag to pretend a file disappeared # between the listdir and unlink. if '/0000000001.00000.' 
in path: # Using actual os.unlink for a non-existent name to reproduce # exactly what OSError it raises in order to prove that # common.utils.remove_file is squelching the error - but any # OSError would do. orig_unlink(uuid.uuid4().hex) file_list.remove(os.path.basename(path)) df_mgr = self.df_router[policy] with unit_mock({'os.listdir': mock_listdir, 'os.unlink': mock_unlink}): if isinstance(output_files, Exception): path = os.path.join(self.testdir, 'does-not-matter') self.assertRaises(output_files.__class__, df_mgr.cleanup_ondisk_files, path) return files = df_mgr.cleanup_ondisk_files('/whatever')['files'] self.assertEqual(files, output_files) # hash_cleanup_listdir tests - behaviors def test_hash_cleanup_listdir_purge_data_newer_ts(self): for policy in self.iter_policies(): # purge .data if there's a newer .ts file1 = self._datafilename(self.ts(), policy) file2 = self.ts().internal + '.ts' file_list = [file1, file2] self.check_hash_cleanup_listdir(policy, file_list, [file2]) def test_hash_cleanup_listdir_purge_expired_ts(self): for policy in self.iter_policies(): # purge older .ts files if there's a newer .data file1 = self.ts().internal + '.ts' file2 = self.ts().internal + '.ts' timestamp = self.ts() file3 = self._datafilename(timestamp, policy) file_list = [file1, file2, file3] expected = { # no durable datafile means you can't get rid of the # latest tombstone even if datafile is newer EC_POLICY: [file3, file2], REPL_POLICY: [file3], }[policy.policy_type] self.check_hash_cleanup_listdir(policy, file_list, expected) def test_hash_cleanup_listdir_purge_ts_newer_data(self): for policy in self.iter_policies(): # purge .ts if there's a newer .data file1 = self.ts().internal + '.ts' timestamp = self.ts() file2 = self._datafilename(timestamp, policy) file_list = [file1, file2] if policy.policy_type == EC_POLICY: durable_file = timestamp.internal + '.durable' file_list.append(durable_file) expected = { EC_POLICY: [durable_file, file2], REPL_POLICY: [file2], }[policy.policy_type] self.check_hash_cleanup_listdir(policy, file_list, expected) def test_hash_cleanup_listdir_purge_older_ts(self): for policy in self.iter_policies(): file1 = self.ts().internal + '.ts' file2 = self.ts().internal + '.ts' file3 = self._datafilename(self.ts(), policy) file4 = self.ts().internal + '.meta' expected = { # no durable means we can only throw out things before # the latest tombstone EC_POLICY: [file4, file3, file2], # keep .meta and .data and purge all .ts files REPL_POLICY: [file4, file3], }[policy.policy_type] file_list = [file1, file2, file3, file4] self.check_hash_cleanup_listdir(policy, file_list, expected) def test_hash_cleanup_listdir_keep_meta_data_purge_ts(self): for policy in self.iter_policies(): file1 = self.ts().internal + '.ts' file2 = self.ts().internal + '.ts' timestamp = self.ts() file3 = self._datafilename(timestamp, policy) file_list = [file1, file2, file3] if policy.policy_type == EC_POLICY: durable_filename = timestamp.internal + '.durable' file_list.append(durable_filename) file4 = self.ts().internal + '.meta' file_list.append(file4) # keep .meta and .data if meta newer than data and purge .ts expected = { EC_POLICY: [file4, durable_filename, file3], REPL_POLICY: [file4, file3], }[policy.policy_type] self.check_hash_cleanup_listdir(policy, file_list, expected) def test_hash_cleanup_listdir_keep_one_ts(self): for policy in self.iter_policies(): file1, file2, file3 = [self.ts().internal + '.ts' for i in range(3)] file_list = [file1, file2, file3] # keep only latest of multiple .ts files 
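# (check_hash_cleanup_listdir, defined above, feeds input_files to # cleanup_ondisk_files() via mocked os.listdir/os.unlink and asserts that exactly # output_files survive; that pattern is shared by all the behaviour tests in this # group.)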
self.check_hash_cleanup_listdir(policy, file_list, [file3]) def test_hash_cleanup_listdir_multi_data_file(self): for policy in self.iter_policies(): file1 = self._datafilename(self.ts(), policy, 1) file2 = self._datafilename(self.ts(), policy, 2) file3 = self._datafilename(self.ts(), policy, 3) expected = { # keep all non-durable datafiles EC_POLICY: [file3, file2, file1], # keep only latest of multiple .data files REPL_POLICY: [file3] }[policy.policy_type] file_list = [file1, file2, file3] self.check_hash_cleanup_listdir(policy, file_list, expected) def test_hash_cleanup_listdir_keeps_one_datafile(self): for policy in self.iter_policies(): timestamps = [self.ts() for i in range(3)] file1 = self._datafilename(timestamps[0], policy, 1) file2 = self._datafilename(timestamps[1], policy, 2) file3 = self._datafilename(timestamps[2], policy, 3) file_list = [file1, file2, file3] if policy.policy_type == EC_POLICY: for t in timestamps: file_list.append(t.internal + '.durable') latest_durable = file_list[-1] expected = { # keep latest durable and datafile EC_POLICY: [latest_durable, file3], # keep only latest of multiple .data files REPL_POLICY: [file3] }[policy.policy_type] self.check_hash_cleanup_listdir(policy, file_list, expected) def test_hash_cleanup_listdir_keep_one_meta(self): for policy in self.iter_policies(): # keep only latest of multiple .meta files t_data = self.ts() file1 = self._datafilename(t_data, policy) file2, file3 = [self.ts().internal + '.meta' for i in range(2)] file_list = [file1, file2, file3] if policy.policy_type == EC_POLICY: durable_file = t_data.internal + '.durable' file_list.append(durable_file) expected = { EC_POLICY: [file3, durable_file, file1], REPL_POLICY: [file3, file1] }[policy.policy_type] self.check_hash_cleanup_listdir(policy, file_list, expected) def test_hash_cleanup_listdir_only_meta(self): for policy in self.iter_policies(): file1, file2 = [self.ts().internal + '.meta' for i in range(2)] file_list = [file1, file2] self.check_hash_cleanup_listdir(policy, file_list, [file2]) def test_hash_cleanup_listdir_ignore_orphaned_ts(self): for policy in self.iter_policies(): # A more recent orphaned .meta file will prevent old .ts files # from being cleaned up otherwise file1, file2 = [self.ts().internal + '.ts' for i in range(2)] file3 = self.ts().internal + '.meta' file_list = [file1, file2, file3] self.check_hash_cleanup_listdir(policy, file_list, [file3, file2]) def test_hash_cleanup_listdir_purge_old_data_only(self): for policy in self.iter_policies(): # Oldest .data will be purge, .meta and .ts won't be touched file1 = self._datafilename(self.ts(), policy) file2 = self.ts().internal + '.ts' file3 = self.ts().internal + '.meta' file_list = [file1, file2, file3] self.check_hash_cleanup_listdir(policy, file_list, [file3, file2]) def test_hash_cleanup_listdir_purge_old_ts(self): for policy in self.iter_policies(): # A single old .ts file will be removed old_float = time() - (diskfile.ONE_WEEK + 1) file1 = Timestamp(old_float).internal + '.ts' file_list = [file1] self.check_hash_cleanup_listdir(policy, file_list, []) def test_hash_cleanup_listdir_meta_keeps_old_ts(self): for policy in self.iter_policies(): old_float = time() - (diskfile.ONE_WEEK + 1) file1 = Timestamp(old_float).internal + '.ts' file2 = Timestamp(time() + 2).internal + '.meta' file_list = [file1, file2] self.check_hash_cleanup_listdir(policy, file_list, [file2]) def test_hash_cleanup_listdir_keep_single_old_data(self): for policy in self.iter_policies(): old_float = time() - (diskfile.ONE_WEEK + 
1) file1 = self._datafilename(Timestamp(old_float), policy) file_list = [file1] if policy.policy_type == EC_POLICY: # for EC an isolated old .data file is removed, it's useless # without a .durable expected = [] else: # A single old .data file will not be removed expected = file_list self.check_hash_cleanup_listdir(policy, file_list, expected) def test_hash_cleanup_listdir_drops_isolated_durable(self): for policy in self.iter_policies(): if policy.policy_type == EC_POLICY: file1 = Timestamp(time()).internal + '.durable' file_list = [file1] self.check_hash_cleanup_listdir(policy, file_list, []) def test_hash_cleanup_listdir_keep_single_old_meta(self): for policy in self.iter_policies(): # A single old .meta file will not be removed old_float = time() - (diskfile.ONE_WEEK + 1) file1 = Timestamp(old_float).internal + '.meta' file_list = [file1] self.check_hash_cleanup_listdir(policy, file_list, [file1]) # hash_cleanup_listdir tests - error handling def test_hash_cleanup_listdir_hsh_path_enoent(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] # common.utils.listdir *completely* mutes ENOENT path = os.path.join(self.testdir, 'does-not-exist') self.assertEqual(df_mgr.cleanup_ondisk_files(path)['files'], []) def test_hash_cleanup_listdir_hsh_path_other_oserror(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] with mock.patch('os.listdir') as mock_listdir: mock_listdir.side_effect = OSError('kaboom!') # but it will raise other OSErrors path = os.path.join(self.testdir, 'does-not-matter') self.assertRaises(OSError, df_mgr.cleanup_ondisk_files, path) def test_hash_cleanup_listdir_reclaim_tombstone_remove_file_error(self): for policy in self.iter_policies(): # Timestamp 1 makes the check routine pretend the file # disappeared after listdir before unlink. file1 = '0000000001.00000.ts' file_list = [file1] self.check_hash_cleanup_listdir(policy, file_list, []) def test_hash_cleanup_listdir_older_remove_file_error(self): for policy in self.iter_policies(): # Timestamp 1 makes the check routine pretend the file # disappeared after listdir before unlink. 
file1 = self._datafilename(Timestamp(1), policy) file2 = '0000000002.00000.ts' file_list = [file1, file2] self.check_hash_cleanup_listdir(policy, file_list, []) # invalidate_hash tests - behavior def test_invalidate_hash_file_does_not_exist(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o', policy=policy) suffix_dir = os.path.dirname(df._datadir) part_path = os.path.join(self.devices, 'sda1', diskfile.get_data_dir(policy), '0') hashes_file = os.path.join(part_path, diskfile.HASH_FILE) inv_file = os.path.join( part_path, diskfile.HASH_INVALIDATIONS_FILE) self.assertFalse(os.path.exists(hashes_file)) # sanity with mock.patch('swift.obj.diskfile.lock_path') as mock_lock: df_mgr.invalidate_hash(suffix_dir) self.assertFalse(mock_lock.called) # does not create files self.assertFalse(os.path.exists(hashes_file)) self.assertFalse(os.path.exists(inv_file)) def test_invalidate_hash_empty_file_exists(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertEqual(hashes, {}) # create something to hash df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o', policy=policy) df.delete(self.ts()) suffix_dir = os.path.dirname(df._datadir) suffix = os.path.basename(suffix_dir) hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertIn(suffix, hashes) # sanity def test_invalidate_hash_consolidation(self): def assert_consolidation(suffixes): # verify that suffixes are invalidated after consolidation with mock.patch('swift.obj.diskfile.lock_path') as mock_lock: hashes = df_mgr.consolidate_hashes(part_path) self.assertTrue(mock_lock.called) for suffix in suffixes: self.assertIn(suffix, hashes) self.assertIsNone(hashes[suffix]) with open(hashes_file, 'rb') as f: self.assertEqual(hashes, pickle.load(f)) with open(invalidations_file, 'rb') as f: self.assertEqual("", f.read()) return hashes for policy in self.iter_policies(): df_mgr = self.df_router[policy] # create something to hash df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o', policy=policy) df.delete(self.ts()) suffix_dir = os.path.dirname(df._datadir) suffix = os.path.basename(suffix_dir) original_hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertIn(suffix, original_hashes) # sanity self.assertIsNotNone(original_hashes[suffix]) # sanity check hashes file part_path = os.path.join(self.devices, 'sda1', diskfile.get_data_dir(policy), '0') hashes_file = os.path.join(part_path, diskfile.HASH_FILE) invalidations_file = os.path.join( part_path, diskfile.HASH_INVALIDATIONS_FILE) with open(hashes_file, 'rb') as f: self.assertEqual(original_hashes, pickle.load(f)) self.assertFalse(os.path.exists(invalidations_file)) # invalidate the hash with mock.patch('swift.obj.diskfile.lock_path') as mock_lock: df_mgr.invalidate_hash(suffix_dir) self.assertTrue(mock_lock.called) # suffix should be in invalidations file with open(invalidations_file, 'rb') as f: self.assertEqual(suffix + "\n", f.read()) # hashes file is unchanged with open(hashes_file, 'rb') as f: self.assertEqual(original_hashes, pickle.load(f)) # consolidate the hash and the invalidations hashes = assert_consolidation([suffix]) # invalidate a different suffix hash in same partition but not in # existing hashes.pkl i = 0 while True: df2 = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o%d' % i, policy=policy) i += 1 suffix_dir2 = os.path.dirname(df2._datadir) if suffix_dir != suffix_dir2: break df2.delete(self.ts()) suffix2 = 
os.path.basename(suffix_dir2) # suffix2 should be in invalidations file with open(invalidations_file, 'rb') as f: self.assertEqual(suffix2 + "\n", f.read()) # hashes file is not yet changed with open(hashes_file, 'rb') as f: self.assertEqual(hashes, pickle.load(f)) # consolidate hashes hashes = assert_consolidation([suffix, suffix2]) # invalidating suffix2 multiple times is ok df2.delete(self.ts()) df2.delete(self.ts()) # suffix2 should be in invalidations file with open(invalidations_file, 'rb') as f: self.assertEqual("%s\n%s\n" % (suffix2, suffix2), f.read()) # hashes file is not yet changed with open(hashes_file, 'rb') as f: self.assertEqual(hashes, pickle.load(f)) # consolidate hashes assert_consolidation([suffix, suffix2]) # invalidate_hash tests - error handling def test_invalidate_hash_bad_pickle(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] # make some valid data df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o', policy=policy) suffix_dir = os.path.dirname(df._datadir) suffix = os.path.basename(suffix_dir) df.delete(self.ts()) # sanity check hashes file part_path = os.path.join(self.devices, 'sda1', diskfile.get_data_dir(policy), '0') hashes_file = os.path.join(part_path, diskfile.HASH_FILE) self.assertFalse(os.path.exists(hashes_file)) # write some garbage in hashes file with open(hashes_file, 'w') as f: f.write('asdf') # invalidate_hash silently does *NOT* repair invalid data df_mgr.invalidate_hash(suffix_dir) with open(hashes_file) as f: self.assertEqual(f.read(), 'asdf') # ... but get_hashes will hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertIn(suffix, hashes) # get_hashes tests - hash_suffix behaviors def test_hash_suffix_one_tombstone(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile( 'sda1', '0', 'a', 'c', 'o', policy=policy) suffix = os.path.basename(os.path.dirname(df._datadir)) # write a tombstone timestamp = self.ts() df.delete(timestamp) tombstone_hash = md5(timestamp.internal + '.ts').hexdigest() hashes = df_mgr.get_hashes('sda1', '0', [], policy) expected = { REPL_POLICY: {suffix: tombstone_hash}, EC_POLICY: {suffix: { # fi is None here because we have a tombstone None: tombstone_hash}}, }[policy.policy_type] self.assertEqual(hashes, expected) def test_hash_suffix_one_tombstone_and_one_meta(self): # A tombstone plus a newer meta file can happen if a tombstone is # replicated to a node with a newer meta file but older data file. The # meta file will be ignored when the diskfile is opened so the # effective state of the disk files is equivalent to only having the # tombstone. Replication cannot remove the meta file, and the meta file # cannot be ssync replicated to a node with only the tombstone, so # we want the get_hashes result to be the same as if the meta file was # not there. 
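# (Shape of the get_hashes() result asserted throughout these tests: a REPL policy # maps each suffix to an md5 over the relevant on-disk filenames, while an EC policy # maps each suffix to a dict keyed by frag index, with a None key covering tombstones # and .meta/.durable files.)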
for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile( 'sda1', '0', 'a', 'c', 'o', policy=policy) suffix = os.path.basename(os.path.dirname(df._datadir)) # write a tombstone timestamp = self.ts() df.delete(timestamp) # write a meta file df.write_metadata({'X-Timestamp': self.ts().internal}) # sanity check self.assertEqual(2, len(os.listdir(df._datadir))) tombstone_hash = md5(timestamp.internal + '.ts').hexdigest() hashes = df_mgr.get_hashes('sda1', '0', [], policy) expected = { REPL_POLICY: {suffix: tombstone_hash}, EC_POLICY: {suffix: { # fi is None here because we have a tombstone None: tombstone_hash}}, }[policy.policy_type] self.assertEqual(hashes, expected) def test_hash_suffix_one_reclaim_tombstone(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile( 'sda1', '0', 'a', 'c', 'o', policy=policy) # scale back this tests manager's reclaim age a bit df_mgr.reclaim_age = 1000 # write a tombstone that's just a *little* older old_time = time() - 1001 timestamp = Timestamp(old_time) df.delete(timestamp.internal) hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertEqual(hashes, {}) def test_hash_suffix_ts_cleanup_after_recalc(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile( 'sda1', '0', 'a', 'c', 'o', policy=policy) suffix_dir = os.path.dirname(df._datadir) suffix = os.path.basename(suffix_dir) # scale back reclaim age a bit df_mgr.reclaim_age = 1000 # write a valid tombstone old_time = time() - 500 timestamp = Timestamp(old_time) df.delete(timestamp.internal) hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertIn(suffix, hashes) self.assertIsNotNone(hashes[suffix]) # we have tombstone entry tombstone = '%s.ts' % timestamp.internal self.assertTrue(os.path.exists(df._datadir)) self.assertIn(tombstone, os.listdir(df._datadir)) # lower reclaim age to force tombstone reclaiming df_mgr.reclaim_age = 200 # not cleaning up because suffix not invalidated hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertTrue(os.path.exists(df._datadir)) self.assertIn(tombstone, os.listdir(df._datadir)) self.assertIn(suffix, hashes) self.assertIsNotNone(hashes[suffix]) # recalculating suffix hash cause cleanup hashes = df_mgr.get_hashes('sda1', '0', [suffix], policy) self.assertEqual(hashes, {}) self.assertFalse(os.path.exists(df._datadir)) def test_hash_suffix_ts_cleanup_after_invalidate_hash(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile( 'sda1', '0', 'a', 'c', 'o', policy=policy) suffix_dir = os.path.dirname(df._datadir) suffix = os.path.basename(suffix_dir) # scale back reclaim age a bit df_mgr.reclaim_age = 1000 # write a valid tombstone old_time = time() - 500 timestamp = Timestamp(old_time) df.delete(timestamp.internal) hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertIn(suffix, hashes) self.assertIsNotNone(hashes[suffix]) # we have tombstone entry tombstone = '%s.ts' % timestamp.internal self.assertTrue(os.path.exists(df._datadir)) self.assertIn(tombstone, os.listdir(df._datadir)) # lower reclaim age to force tombstone reclaiming df_mgr.reclaim_age = 200 # not cleaning up because suffix not invalidated hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertTrue(os.path.exists(df._datadir)) self.assertIn(tombstone, os.listdir(df._datadir)) self.assertIn(suffix, hashes) self.assertIsNotNone(hashes[suffix]) # However if we call invalidate_hash for the suffix dir, # 
get_hashes can reclaim the tombstone with mock.patch('swift.obj.diskfile.lock_path'): df_mgr.invalidate_hash(suffix_dir) # updating invalidated hashes cause cleanup hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertEqual(hashes, {}) self.assertFalse(os.path.exists(df._datadir)) def test_hash_suffix_one_reclaim_and_one_valid_tombstone(self): for policy in self.iter_policies(): paths, suffix = find_paths_with_matching_suffixes(2, 1) df_mgr = self.df_router[policy] a, c, o = paths[suffix][0] df1 = df_mgr.get_diskfile( 'sda1', '0', a, c, o, policy=policy) # scale back this tests manager's reclaim age a bit df_mgr.reclaim_age = 1000 # write one tombstone that's just a *little* older df1.delete(Timestamp(time() - 1001)) # create another tombstone in same suffix dir that's newer a, c, o = paths[suffix][1] df2 = df_mgr.get_diskfile( 'sda1', '0', a, c, o, policy=policy) t_df2 = Timestamp(time() - 900) df2.delete(t_df2) hashes = df_mgr.get_hashes('sda1', '0', [], policy) suffix = os.path.basename(os.path.dirname(df1._datadir)) df2_tombstone_hash = md5(t_df2.internal + '.ts').hexdigest() expected = { REPL_POLICY: {suffix: df2_tombstone_hash}, EC_POLICY: {suffix: { # fi is None here because we have a tombstone None: df2_tombstone_hash}}, }[policy.policy_type] self.assertEqual(hashes, expected) def test_hash_suffix_one_datafile(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile( 'sda1', '0', 'a', 'c', 'o', policy=policy, frag_index=7) suffix = os.path.basename(os.path.dirname(df._datadir)) # write a datafile timestamp = self.ts() with df.create() as writer: test_data = 'test file' writer.write(test_data) metadata = { 'X-Timestamp': timestamp.internal, 'ETag': md5(test_data).hexdigest(), 'Content-Length': len(test_data), } writer.put(metadata) hashes = df_mgr.get_hashes('sda1', '0', [], policy) datafile_hash = md5({ EC_POLICY: timestamp.internal, REPL_POLICY: timestamp.internal + '.data', }[policy.policy_type]).hexdigest() expected = { REPL_POLICY: {suffix: datafile_hash}, EC_POLICY: {suffix: { # because there's no .durable file, we have no hash for # the None key - only the frag index for the data file 7: datafile_hash}}, }[policy.policy_type] msg = 'expected %r != %r for policy %r' % ( expected, hashes, policy) self.assertEqual(hashes, expected, msg) def test_hash_suffix_multi_file_ends_in_tombstone(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o', policy=policy, frag_index=4) suffix = os.path.basename(os.path.dirname(df._datadir)) mkdirs(df._datadir) now = time() # go behind the scenes and setup a bunch of weird file names for tdiff in [500, 100, 10, 1]: for suff in ['.meta', '.data', '.ts']: timestamp = Timestamp(now - tdiff) filename = timestamp.internal if policy.policy_type == EC_POLICY and suff == '.data': filename += '#%s' % df._frag_index filename += suff open(os.path.join(df._datadir, filename), 'w').close() tombstone_hash = md5(filename).hexdigest() # call get_hashes and it should clean things up hashes = df_mgr.get_hashes('sda1', '0', [], policy) expected = { REPL_POLICY: {suffix: tombstone_hash}, EC_POLICY: {suffix: { # fi is None here because we have a tombstone None: tombstone_hash}}, }[policy.policy_type] self.assertEqual(hashes, expected) # only the tombstone should be left found_files = os.listdir(df._datadir) self.assertEqual(found_files, [filename]) def test_hash_suffix_multi_file_ends_in_datafile(self): for policy in self.iter_policies(): 
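            # The loop below seeds a mix of stale .meta/.data/.ts (and, for
            # EC, .durable) files and checks that rehashing keeps only the
            # newest data, meta and (for EC) durable files.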
df_mgr = self.df_router[policy] df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o', policy=policy, frag_index=4) suffix = os.path.basename(os.path.dirname(df._datadir)) mkdirs(df._datadir) now = time() timestamp = None # go behind the scenes and setup a bunch of weird file names for tdiff in [500, 100, 10, 1]: suffs = ['.meta', '.data'] if tdiff > 50: suffs.append('.ts') if policy.policy_type == EC_POLICY: suffs.append('.durable') for suff in suffs: timestamp = Timestamp(now - tdiff) filename = timestamp.internal if policy.policy_type == EC_POLICY and suff == '.data': filename += '#%s' % df._frag_index filename += suff open(os.path.join(df._datadir, filename), 'w').close() meta_timestamp = Timestamp(now) metadata_filename = meta_timestamp.internal + '.meta' open(os.path.join(df._datadir, metadata_filename), 'w').close() # call get_hashes and it should clean things up hashes = df_mgr.get_hashes('sda1', '0', [], policy) data_filename = timestamp.internal if policy.policy_type == EC_POLICY: data_filename += '#%s' % df._frag_index data_filename += '.data' if policy.policy_type == EC_POLICY: durable_filename = timestamp.internal + '.durable' hasher = md5() hasher.update(metadata_filename) hasher.update(durable_filename) expected = { suffix: { # metadata & durable updates are hashed separately None: hasher.hexdigest(), 4: self.fname_to_ts_hash(data_filename), } } expected_files = [data_filename, durable_filename, metadata_filename] elif policy.policy_type == REPL_POLICY: hasher = md5() hasher.update(metadata_filename) hasher.update(data_filename) expected = {suffix: hasher.hexdigest()} expected_files = [data_filename, metadata_filename] else: self.fail('unknown policy type %r' % policy.policy_type) msg = 'expected %r != %r for policy %r' % ( expected, hashes, policy) self.assertEqual(hashes, expected, msg) # only the meta and data should be left self.assertEqual(sorted(os.listdir(df._datadir)), sorted(expected_files)) def _verify_get_hashes(self, filenames, ts_data, ts_meta, ts_ctype, policy): """ Helper method to create a set of ondisk files and verify suffix_hashes. 
:param filenames: list of filenames to create in an object hash dir :param ts_data: newest data timestamp, used for expected result :param ts_meta: newest meta timestamp, used for expected result :param ts_ctype: newest content-type timestamp, used for expected result :param policy: storage policy to use for test """ df_mgr = self.df_router[policy] df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o', policy=policy, frag_index=4) suffix = os.path.basename(os.path.dirname(df._datadir)) mkdirs(df._datadir) # calculate expected result hasher = md5() if policy.policy_type == EC_POLICY: hasher.update(ts_meta.internal + '.meta') hasher.update(ts_data.internal + '.durable') if ts_ctype: hasher.update(ts_ctype.internal + '_ctype') expected = { suffix: { None: hasher.hexdigest(), 4: md5(ts_data.internal).hexdigest(), } } elif policy.policy_type == REPL_POLICY: hasher.update(ts_meta.internal + '.meta') hasher.update(ts_data.internal + '.data') if ts_ctype: hasher.update(ts_ctype.internal + '_ctype') expected = {suffix: hasher.hexdigest()} else: self.fail('unknown policy type %r' % policy.policy_type) for fname in filenames: open(os.path.join(df._datadir, fname), 'w').close() hashes = df_mgr.get_hashes('sda1', '0', [], policy) msg = 'expected %r != %r for policy %r' % ( expected, hashes, policy) self.assertEqual(hashes, expected, msg) def test_hash_suffix_with_older_content_type_in_meta(self): # single meta file having older content-type for policy in self.iter_policies(): ts_data, ts_ctype, ts_meta = ( self.ts(), self.ts(), self.ts()) filenames = [self._datafilename(ts_data, policy, frag_index=4), self._metafilename(ts_meta, ts_ctype)] if policy.policy_type == EC_POLICY: filenames.append(ts_data.internal + '.durable') self._verify_get_hashes( filenames, ts_data, ts_meta, ts_ctype, policy) def test_hash_suffix_with_same_age_content_type_in_meta(self): # single meta file having same age content-type for policy in self.iter_policies(): ts_data, ts_meta = (self.ts(), self.ts()) filenames = [self._datafilename(ts_data, policy, frag_index=4), self._metafilename(ts_meta, ts_meta)] if policy.policy_type == EC_POLICY: filenames.append(ts_data.internal + '.durable') self._verify_get_hashes( filenames, ts_data, ts_meta, ts_meta, policy) def test_hash_suffix_with_obsolete_content_type_in_meta(self): # After rsync replication we could have a single meta file having # content-type older than a replicated data file for policy in self.iter_policies(): ts_ctype, ts_data, ts_meta = (self.ts(), self.ts(), self.ts()) filenames = [self._datafilename(ts_data, policy, frag_index=4), self._metafilename(ts_meta, ts_ctype)] if policy.policy_type == EC_POLICY: filenames.append(ts_data.internal + '.durable') self._verify_get_hashes( filenames, ts_data, ts_meta, None, policy) def test_hash_suffix_with_older_content_type_in_newer_meta(self): # After rsync replication we could have two meta files: newest # content-type is in newer meta file, older than newer meta file for policy in self.iter_policies(): ts_data, ts_older_meta, ts_ctype, ts_newer_meta = ( self.ts() for _ in range(4)) filenames = [self._datafilename(ts_data, policy, frag_index=4), self._metafilename(ts_older_meta), self._metafilename(ts_newer_meta, ts_ctype)] if policy.policy_type == EC_POLICY: filenames.append(ts_data.internal + '.durable') self._verify_get_hashes( filenames, ts_data, ts_newer_meta, ts_ctype, policy) def test_hash_suffix_with_same_age_content_type_in_newer_meta(self): # After rsync replication we could have two meta files: newest # content-type is 
in newer meta file, at same age as newer meta file for policy in self.iter_policies(): ts_data, ts_older_meta, ts_newer_meta = ( self.ts() for _ in range(3)) filenames = [self._datafilename(ts_data, policy, frag_index=4), self._metafilename(ts_newer_meta, ts_newer_meta)] if policy.policy_type == EC_POLICY: filenames.append(ts_data.internal + '.durable') self._verify_get_hashes( filenames, ts_data, ts_newer_meta, ts_newer_meta, policy) def test_hash_suffix_with_older_content_type_in_older_meta(self): # After rsync replication we could have two meta files: newest # content-type is in older meta file, older than older meta file for policy in self.iter_policies(): ts_data, ts_ctype, ts_older_meta, ts_newer_meta = ( self.ts() for _ in range(4)) filenames = [self._datafilename(ts_data, policy, frag_index=4), self._metafilename(ts_newer_meta), self._metafilename(ts_older_meta, ts_ctype)] if policy.policy_type == EC_POLICY: filenames.append(ts_data.internal + '.durable') self._verify_get_hashes( filenames, ts_data, ts_newer_meta, ts_ctype, policy) def test_hash_suffix_with_same_age_content_type_in_older_meta(self): # After rsync replication we could have two meta files: newest # content-type is in older meta file, at same age as older meta file for policy in self.iter_policies(): ts_data, ts_older_meta, ts_newer_meta = ( self.ts() for _ in range(3)) filenames = [self._datafilename(ts_data, policy, frag_index=4), self._metafilename(ts_newer_meta), self._metafilename(ts_older_meta, ts_older_meta)] if policy.policy_type == EC_POLICY: filenames.append(ts_data.internal + '.durable') self._verify_get_hashes( filenames, ts_data, ts_newer_meta, ts_older_meta, policy) def test_hash_suffix_with_obsolete_content_type_in_older_meta(self): # After rsync replication we could have two meta files: newest # content-type is in older meta file, but older than data file for policy in self.iter_policies(): ts_ctype, ts_data, ts_older_meta, ts_newer_meta = ( self.ts() for _ in range(4)) filenames = [self._datafilename(ts_data, policy, frag_index=4), self._metafilename(ts_newer_meta), self._metafilename(ts_older_meta, ts_ctype)] if policy.policy_type == EC_POLICY: filenames.append(ts_data.internal + '.durable') self._verify_get_hashes( filenames, ts_data, ts_newer_meta, None, policy) def test_hash_suffix_removes_empty_hashdir_and_suffix(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o', policy=policy, frag_index=2) os.makedirs(df._datadir) self.assertTrue(os.path.exists(df._datadir)) # sanity df_mgr.get_hashes('sda1', '0', [], policy) suffix_dir = os.path.dirname(df._datadir) self.assertFalse(os.path.exists(suffix_dir)) def test_hash_suffix_removes_empty_hashdirs_in_valid_suffix(self): paths, suffix = find_paths_with_matching_suffixes(needed_matches=3, needed_suffixes=0) matching_paths = paths.pop(suffix) for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile('sda1', '0', *matching_paths[0], policy=policy, frag_index=2) # create a real, valid hsh_path df.delete(Timestamp(time())) # and a couple of empty hsh_paths empty_hsh_paths = [] for path in matching_paths[1:]: fake_df = df_mgr.get_diskfile('sda1', '0', *path, policy=policy) os.makedirs(fake_df._datadir) empty_hsh_paths.append(fake_df._datadir) for hsh_path in empty_hsh_paths: self.assertTrue(os.path.exists(hsh_path)) # sanity # get_hashes will cleanup empty hsh_path and leave valid one hashes = df_mgr.get_hashes('sda1', '0', [], policy) 
self.assertIn(suffix, hashes) self.assertTrue(os.path.exists(df._datadir)) for hsh_path in empty_hsh_paths: self.assertFalse(os.path.exists(hsh_path)) # get_hashes tests - hash_suffix error handling def test_hash_suffix_listdir_enotdir(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] suffix = '123' suffix_path = os.path.join(self.devices, 'sda1', diskfile.get_data_dir(policy), '0', suffix) os.makedirs(suffix_path) self.assertTrue(os.path.exists(suffix_path)) # sanity hashes = df_mgr.get_hashes('sda1', '0', [suffix], policy) # suffix dir cleaned up by get_hashes self.assertFalse(os.path.exists(suffix_path)) expected = {} msg = 'expected %r != %r for policy %r' % ( expected, hashes, policy) self.assertEqual(hashes, expected, msg) # now make the suffix path a file open(suffix_path, 'w').close() hashes = df_mgr.get_hashes('sda1', '0', [suffix], policy) expected = {} msg = 'expected %r != %r for policy %r' % ( expected, hashes, policy) self.assertEqual(hashes, expected, msg) def test_hash_suffix_listdir_enoent(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] orig_listdir = os.listdir listdir_calls = [] def mock_listdir(path): success = False try: rv = orig_listdir(path) success = True return rv finally: listdir_calls.append((path, success)) with mock.patch('swift.obj.diskfile.os.listdir', mock_listdir): # recalc always forces hash_suffix even if the suffix # does not exist! df_mgr.get_hashes('sda1', '0', ['123'], policy) part_path = os.path.join(self.devices, 'sda1', diskfile.get_data_dir(policy), '0') self.assertEqual(listdir_calls, [ # part path gets created automatically (part_path, True), # this one blows up (os.path.join(part_path, '123'), False), ]) def test_hash_suffix_hash_cleanup_listdir_enotdir_quarantined(self): for policy in self.iter_policies(): df = self.df_router[policy].get_diskfile( self.existing_device, '0', 'a', 'c', 'o', policy=policy) # make the suffix directory suffix_path = os.path.dirname(df._datadir) os.makedirs(suffix_path) suffix = os.path.basename(suffix_path) # make the df hash path a file open(df._datadir, 'wb').close() df_mgr = self.df_router[policy] hashes = df_mgr.get_hashes(self.existing_device, '0', [suffix], policy) self.assertEqual(hashes, {}) # and hash path is quarantined self.assertFalse(os.path.exists(df._datadir)) # each device a quarantined directory quarantine_base = os.path.join(self.devices, self.existing_device, 'quarantined') # the quarantine path is... 
quarantine_path = os.path.join( quarantine_base, # quarantine root diskfile.get_data_dir(policy), # per-policy data dir suffix, # first dir from which quarantined file was removed os.path.basename(df._datadir) # name of quarantined file ) self.assertTrue(os.path.exists(quarantine_path)) def test_hash_suffix_hash_cleanup_listdir_other_oserror(self): for policy in self.iter_policies(): timestamp = self.ts() df_mgr = self.df_router[policy] df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy, frag_index=7) suffix = os.path.basename(os.path.dirname(df._datadir)) with df.create() as writer: test_data = 'test_data' writer.write(test_data) metadata = { 'X-Timestamp': timestamp.internal, 'ETag': md5(test_data).hexdigest(), 'Content-Length': len(test_data), } writer.put(metadata) orig_os_listdir = os.listdir listdir_calls = [] part_path = os.path.join(self.devices, self.existing_device, diskfile.get_data_dir(policy), '0') suffix_path = os.path.join(part_path, suffix) datadir_path = os.path.join(suffix_path, hash_path('a', 'c', 'o')) def mock_os_listdir(path): listdir_calls.append(path) if path == datadir_path: # we want the part and suffix listdir calls to pass and # make the hash_cleanup_listdir raise an exception raise OSError(errno.EACCES, os.strerror(errno.EACCES)) return orig_os_listdir(path) with mock.patch('os.listdir', mock_os_listdir): hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) self.assertEqual(listdir_calls, [ part_path, suffix_path, datadir_path, ]) expected = {suffix: None} msg = 'expected %r != %r for policy %r' % ( expected, hashes, policy) self.assertEqual(hashes, expected, msg) def test_hash_suffix_rmdir_hsh_path_oserror(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] # make an empty hsh_path to be removed df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) os.makedirs(df._datadir) suffix = os.path.basename(os.path.dirname(df._datadir)) with mock.patch('os.rmdir', side_effect=OSError()): hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) expected = { EC_POLICY: {}, REPL_POLICY: md5().hexdigest(), }[policy.policy_type] self.assertEqual(hashes, {suffix: expected}) self.assertTrue(os.path.exists(df._datadir)) def test_hash_suffix_rmdir_suffix_oserror(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] # make an empty hsh_path to be removed df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy) os.makedirs(df._datadir) suffix_path = os.path.dirname(df._datadir) suffix = os.path.basename(suffix_path) captured_paths = [] def mock_rmdir(path): captured_paths.append(path) if path == suffix_path: raise OSError('kaboom!') with mock.patch('os.rmdir', mock_rmdir): hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) expected = { EC_POLICY: {}, REPL_POLICY: md5().hexdigest(), }[policy.policy_type] self.assertEqual(hashes, {suffix: expected}) self.assertTrue(os.path.exists(suffix_path)) self.assertEqual([ df._datadir, suffix_path, ], captured_paths) # get_hashes tests - behaviors def test_get_hashes_creates_partition_and_pkl(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) self.assertEqual(hashes, {}) part_path = os.path.join( self.devices, 'sda1', diskfile.get_data_dir(policy), '0') self.assertTrue(os.path.exists(part_path)) hashes_file = os.path.join(part_path, diskfile.HASH_FILE) 
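            # calling get_hashes on a missing partition should have created
            # both the partition directory and the hashes pickle
            # (diskfile.HASH_FILE)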
self.assertTrue(os.path.exists(hashes_file)) # and double check the hashes new_hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) self.assertEqual(hashes, new_hashes) def test_get_hashes_new_pkl_finds_new_suffix_dirs(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] part_path = os.path.join( self.devices, self.existing_device, diskfile.get_data_dir(policy), '0') hashes_file = os.path.join(part_path, diskfile.HASH_FILE) # add something to find df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy, frag_index=4) timestamp = self.ts() df.delete(timestamp) suffix = os.path.basename(os.path.dirname(df._datadir)) # get_hashes will find the untracked suffix dir self.assertFalse(os.path.exists(hashes_file)) # sanity hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) self.assertIn(suffix, hashes) # ... and create a hashes pickle for it self.assertTrue(os.path.exists(hashes_file)) def test_get_hashes_old_pickle_does_not_find_new_suffix_dirs(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] # create a empty stale pickle part_path = os.path.join( self.devices, 'sda1', diskfile.get_data_dir(policy), '0') hashes_file = os.path.join(part_path, diskfile.HASH_FILE) hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) self.assertEqual(hashes, {}) self.assertTrue(os.path.exists(hashes_file)) # sanity # add something to find df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy, frag_index=4) os.makedirs(df._datadir) filename = Timestamp(time()).internal + '.ts' open(os.path.join(df._datadir, filename), 'w').close() suffix = os.path.basename(os.path.dirname(df._datadir)) # but get_hashes has no reason to find it (because we didn't # call invalidate_hash) new_hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) self.assertEqual(new_hashes, hashes) # ... unless remote end asks for a recalc hashes = df_mgr.get_hashes(self.existing_device, '0', [suffix], policy) self.assertIn(suffix, hashes) def test_get_hashes_does_not_rehash_known_suffix_dirs(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy, frag_index=4) suffix = os.path.basename(os.path.dirname(df._datadir)) timestamp = self.ts() df.delete(timestamp) # create the baseline hashes file hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) self.assertIn(suffix, hashes) # now change the contents of the suffix w/o calling # invalidate_hash rmtree(df._datadir) suffix_path = os.path.dirname(df._datadir) self.assertTrue(os.path.exists(suffix_path)) # sanity new_hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) # ... and get_hashes is none the wiser self.assertEqual(new_hashes, hashes) # ... unless remote end asks for a recalc hashes = df_mgr.get_hashes(self.existing_device, '0', [suffix], policy) self.assertNotEqual(new_hashes, hashes) # and the empty suffix path is removed self.assertFalse(os.path.exists(suffix_path)) # ... 
and the suffix key is removed expected = {} self.assertEqual(expected, hashes) def test_get_hashes_multi_file_multi_suffix(self): paths, suffix = find_paths_with_matching_suffixes(needed_matches=2, needed_suffixes=3) matching_paths = paths.pop(suffix) matching_paths.sort(key=lambda path: hash_path(*path)) other_paths = [] for suffix, paths in paths.items(): other_paths.append(paths[0]) if len(other_paths) >= 2: break for policy in self.iter_policies(): df_mgr = self.df_router[policy] # first we'll make a tombstone df = df_mgr.get_diskfile(self.existing_device, '0', *other_paths[0], policy=policy, frag_index=4) timestamp = self.ts() df.delete(timestamp) tombstone_hash = md5(timestamp.internal + '.ts').hexdigest() tombstone_suffix = os.path.basename(os.path.dirname(df._datadir)) # second file in another suffix has a .datafile df = df_mgr.get_diskfile(self.existing_device, '0', *other_paths[1], policy=policy, frag_index=5) timestamp = self.ts() with df.create() as writer: test_data = 'test_file' writer.write(test_data) metadata = { 'X-Timestamp': timestamp.internal, 'ETag': md5(test_data).hexdigest(), 'Content-Length': len(test_data), } writer.put(metadata) writer.commit(timestamp) datafile_name = timestamp.internal if policy.policy_type == EC_POLICY: datafile_name += '#%d' % df._frag_index datafile_name += '.data' durable_hash = md5(timestamp.internal + '.durable').hexdigest() datafile_suffix = os.path.basename(os.path.dirname(df._datadir)) # in the *third* suffix - two datafiles for different hashes df = df_mgr.get_diskfile(self.existing_device, '0', *matching_paths[0], policy=policy, frag_index=6) matching_suffix = os.path.basename(os.path.dirname(df._datadir)) timestamp = self.ts() with df.create() as writer: test_data = 'test_file' writer.write(test_data) metadata = { 'X-Timestamp': timestamp.internal, 'ETag': md5(test_data).hexdigest(), 'Content-Length': len(test_data), } writer.put(metadata) writer.commit(timestamp) # we'll keep track of file names for hash calculations filename = timestamp.internal if policy.policy_type == EC_POLICY: filename += '#%d' % df._frag_index filename += '.data' filenames = { 'data': { 6: filename }, 'durable': [timestamp.internal + '.durable'], } df = df_mgr.get_diskfile(self.existing_device, '0', *matching_paths[1], policy=policy, frag_index=7) self.assertEqual(os.path.basename(os.path.dirname(df._datadir)), matching_suffix) # sanity timestamp = self.ts() with df.create() as writer: test_data = 'test_file' writer.write(test_data) metadata = { 'X-Timestamp': timestamp.internal, 'ETag': md5(test_data).hexdigest(), 'Content-Length': len(test_data), } writer.put(metadata) writer.commit(timestamp) filename = timestamp.internal if policy.policy_type == EC_POLICY: filename += '#%d' % df._frag_index filename += '.data' filenames['data'][7] = filename filenames['durable'].append(timestamp.internal + '.durable') # now make up the expected suffixes! 
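            # Three suffixes are expected: one holding only the tombstone, one
            # holding a single datafile, and the "matching" suffix holding two
            # datafiles (frag indexes 6 and 7 for EC).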
if policy.policy_type == EC_POLICY: hasher = md5() for filename in filenames['durable']: hasher.update(filename) expected = { tombstone_suffix: { None: tombstone_hash, }, datafile_suffix: { None: durable_hash, 5: self.fname_to_ts_hash(datafile_name), }, matching_suffix: { None: hasher.hexdigest(), 6: self.fname_to_ts_hash(filenames['data'][6]), 7: self.fname_to_ts_hash(filenames['data'][7]), }, } elif policy.policy_type == REPL_POLICY: hasher = md5() for filename in filenames['data'].values(): hasher.update(filename) expected = { tombstone_suffix: tombstone_hash, datafile_suffix: md5(datafile_name).hexdigest(), matching_suffix: hasher.hexdigest(), } else: self.fail('unknown policy type %r' % policy.policy_type) hashes = df_mgr.get_hashes('sda1', '0', [], policy) self.assertEqual(hashes, expected) # get_hashes tests - error handling def test_get_hashes_bad_dev(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] df_mgr.mount_check = True with mock.patch('swift.obj.diskfile.check_mount', mock.MagicMock(side_effect=[False])): self.assertRaises( DiskFileDeviceUnavailable, df_mgr.get_hashes, self.existing_device, '0', ['123'], policy) def test_get_hashes_zero_bytes_pickle(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] part_path = os.path.join(self.devices, self.existing_device, diskfile.get_data_dir(policy), '0') os.makedirs(part_path) # create a pre-existing zero-byte file open(os.path.join(part_path, diskfile.HASH_FILE), 'w').close() hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) self.assertEqual(hashes, {}) def test_get_hashes_hash_suffix_enotdir(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] # create a real suffix dir df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c', 'o', policy=policy, frag_index=3) df.delete(Timestamp(time())) suffix = os.path.basename(os.path.dirname(df._datadir)) # touch a bad suffix dir part_dir = os.path.join(self.devices, self.existing_device, diskfile.get_data_dir(policy), '0') open(os.path.join(part_dir, 'bad'), 'w').close() hashes = df_mgr.get_hashes(self.existing_device, '0', [], policy) self.assertIn(suffix, hashes) self.assertNotIn('bad', hashes) def test_get_hashes_hash_suffix_other_oserror(self): for policy in self.iter_policies(): df_mgr = self.df_router[policy] suffix = '123' suffix_path = os.path.join(self.devices, self.existing_device, diskfile.get_data_dir(policy), '0', suffix) os.makedirs(suffix_path) self.assertTrue(os.path.exists(suffix_path)) # sanity hashes = df_mgr.get_hashes(self.existing_device, '0', [suffix], policy) expected = {} msg = 'expected %r != %r for policy %r' % (expected, hashes, policy) self.assertEqual(hashes, expected, msg) # this OSError does *not* raise PathNotDir, and is allowed to leak # from hash_suffix into get_hashes mocked_os_listdir = mock.Mock( side_effect=OSError(errno.EACCES, os.strerror(errno.EACCES))) with mock.patch("os.listdir", mocked_os_listdir): with mock.patch('swift.obj.diskfile.logging') as mock_logging: hashes = df_mgr.get_hashes('sda1', '0', [suffix], policy) self.assertEqual(mock_logging.method_calls, [mock.call.exception('Error hashing suffix')]) # recalc always causes a suffix to get reset to None; the listdir # error prevents the suffix from being rehashed expected = {'123': None} msg = 'expected %r != %r for policy %r' % (expected, hashes, policy) self.assertEqual(hashes, expected, msg) def test_get_hashes_modified_recursive_retry(self): for policy in self.iter_policies(): df_mgr = 
self.df_router[policy] # first create an empty pickle df_mgr.get_hashes(self.existing_device, '0', [], policy) hashes_file = os.path.join( self.devices, self.existing_device, diskfile.get_data_dir(policy), '0', diskfile.HASH_FILE) mtime = os.path.getmtime(hashes_file) non_local = {'mtime': mtime} calls = [] def mock_getmtime(filename): t = non_local['mtime'] if len(calls) <= 3: # this will make the *next* call get a slightly # newer mtime than the last non_local['mtime'] += 1 # track exactly the value for every return calls.append(t) return t with mock.patch('swift.obj.diskfile.getmtime', mock_getmtime): df_mgr.get_hashes(self.existing_device, '0', ['123'], policy) self.assertEqual(calls, [ mtime + 0, # read mtime + 1, # modified mtime + 2, # read mtime + 3, # modifed mtime + 4, # read mtime + 4, # not modifed ]) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/obj/test_ssync_sender.py0000664000567000056710000020354013024044352022637 0ustar jenkinsjenkins00000000000000# Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import time import unittest import eventlet import mock import six from swift.common import exceptions, utils from swift.common.storage_policy import POLICIES from swift.common.utils import Timestamp from swift.obj import ssync_sender, diskfile, ssync_receiver from test.unit import patch_policies, make_timestamp_iter from test.unit.obj.common import FakeReplicator, BaseTest class NullBufferedHTTPConnection(object): def __init__(*args, **kwargs): pass def putrequest(*args, **kwargs): pass def putheader(*args, **kwargs): pass def endheaders(*args, **kwargs): pass def getresponse(*args, **kwargs): pass def close(*args, **kwargs): pass class FakeResponse(object): def __init__(self, chunk_body=''): self.status = 200 self.close_called = False if chunk_body: self.fp = six.StringIO( '%x\r\n%s\r\n0\r\n\r\n' % (len(chunk_body), chunk_body)) def read(self, *args, **kwargs): return '' def close(self): self.close_called = True class FakeConnection(object): def __init__(self): self.sent = [] self.closed = False def send(self, data): self.sent.append(data) def close(self): self.closed = True @patch_policies() class TestSender(BaseTest): def setUp(self): super(TestSender, self).setUp() self.testdir = os.path.join(self.tmpdir, 'tmp_test_ssync_sender') utils.mkdirs(os.path.join(self.testdir, 'dev')) self.daemon = FakeReplicator(self.testdir) self.sender = ssync_sender.Sender(self.daemon, None, None, None) def test_call_catches_MessageTimeout(self): def connect(self): exc = exceptions.MessageTimeout(1, 'test connect') # Cancels Eventlet's raising of this since we're about to do it. 
exc.cancel() raise exc with mock.patch.object(ssync_sender.Sender, 'connect', connect): node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1') job = dict(partition='9', policy=POLICIES.legacy) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] success, candidates = self.sender() self.assertFalse(success) self.assertEqual(candidates, {}) error_lines = self.daemon.logger.get_lines_for_level('error') self.assertEqual(1, len(error_lines)) self.assertEqual('1.2.3.4:5678/sda1/9 1 second: test connect', error_lines[0]) def test_call_catches_ReplicationException(self): def connect(self): raise exceptions.ReplicationException('test connect') with mock.patch.object(ssync_sender.Sender, 'connect', connect): node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1') job = dict(partition='9', policy=POLICIES.legacy) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] success, candidates = self.sender() self.assertFalse(success) self.assertEqual(candidates, {}) error_lines = self.daemon.logger.get_lines_for_level('error') self.assertEqual(1, len(error_lines)) self.assertEqual('1.2.3.4:5678/sda1/9 test connect', error_lines[0]) def test_call_catches_other_exceptions(self): node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1') job = dict(partition='9', policy=POLICIES.legacy) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] self.sender.connect = 'cause exception' success, candidates = self.sender() self.assertFalse(success) self.assertEqual(candidates, {}) error_lines = self.daemon.logger.get_lines_for_level('error') for line in error_lines: self.assertTrue(line.startswith( '1.2.3.4:5678/sda1/9 EXCEPTION in ssync.Sender:')) def test_call_catches_exception_handling_exception(self): job = node = None # Will cause inside exception handler to fail self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] self.sender.connect = 'cause exception' success, candidates = self.sender() self.assertFalse(success) self.assertEqual(candidates, {}) error_lines = self.daemon.logger.get_lines_for_level('error') for line in error_lines: self.assertTrue(line.startswith( 'EXCEPTION in ssync.Sender')) def test_call_calls_others(self): self.sender.suffixes = ['abc'] self.sender.connect = mock.MagicMock() self.sender.missing_check = mock.MagicMock() self.sender.updates = mock.MagicMock() self.sender.disconnect = mock.MagicMock() success, candidates = self.sender() self.assertTrue(success) self.assertEqual(candidates, {}) self.sender.connect.assert_called_once_with() self.sender.missing_check.assert_called_once_with() self.sender.updates.assert_called_once_with() self.sender.disconnect.assert_called_once_with() def test_call_calls_others_returns_failure(self): self.sender.suffixes = ['abc'] self.sender.connect = mock.MagicMock() self.sender.missing_check = mock.MagicMock() self.sender.updates = mock.MagicMock() self.sender.disconnect = mock.MagicMock() self.sender.failures = 1 success, candidates = self.sender() self.assertFalse(success) self.assertEqual(candidates, {}) self.sender.connect.assert_called_once_with() self.sender.missing_check.assert_called_once_with() self.sender.updates.assert_called_once_with() self.sender.disconnect.assert_called_once_with() def test_connect(self): node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1', index=0) job = dict(partition='9', 
policy=POLICIES[1]) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] with mock.patch( 'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection' ) as mock_conn_class: mock_conn = mock_conn_class.return_value mock_resp = mock.MagicMock() mock_resp.status = 200 mock_conn.getresponse.return_value = mock_resp self.sender.connect() mock_conn_class.assert_called_once_with('1.2.3.4:5678') expectations = { 'putrequest': [ mock.call('SSYNC', '/sda1/9'), ], 'putheader': [ mock.call('Transfer-Encoding', 'chunked'), mock.call('X-Backend-Storage-Policy-Index', 1), mock.call('X-Backend-Ssync-Frag-Index', 0), mock.call('X-Backend-Ssync-Node-Index', 0), ], 'endheaders': [mock.call()], } for method_name, expected_calls in expectations.items(): mock_method = getattr(mock_conn, method_name) self.assertEqual(expected_calls, mock_method.mock_calls, 'connection method "%s" got %r not %r' % ( method_name, mock_method.mock_calls, expected_calls)) def test_connect_handoff(self): node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1') job = dict(partition='9', policy=POLICIES[1], frag_index=9) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] with mock.patch( 'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection' ) as mock_conn_class: mock_conn = mock_conn_class.return_value mock_resp = mock.MagicMock() mock_resp.status = 200 mock_conn.getresponse.return_value = mock_resp self.sender.connect() mock_conn_class.assert_called_once_with('1.2.3.4:5678') expectations = { 'putrequest': [ mock.call('SSYNC', '/sda1/9'), ], 'putheader': [ mock.call('Transfer-Encoding', 'chunked'), mock.call('X-Backend-Storage-Policy-Index', 1), mock.call('X-Backend-Ssync-Frag-Index', 9), mock.call('X-Backend-Ssync-Node-Index', ''), ], 'endheaders': [mock.call()], } for method_name, expected_calls in expectations.items(): mock_method = getattr(mock_conn, method_name) self.assertEqual(expected_calls, mock_method.mock_calls, 'connection method "%s" got %r not %r' % ( method_name, mock_method.mock_calls, expected_calls)) def test_connect_handoff_no_frag(self): node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1') job = dict(partition='9', policy=POLICIES[0]) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] with mock.patch( 'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection' ) as mock_conn_class: mock_conn = mock_conn_class.return_value mock_resp = mock.MagicMock() mock_resp.status = 200 mock_conn.getresponse.return_value = mock_resp self.sender.connect() mock_conn_class.assert_called_once_with('1.2.3.4:5678') expectations = { 'putrequest': [ mock.call('SSYNC', '/sda1/9'), ], 'putheader': [ mock.call('Transfer-Encoding', 'chunked'), mock.call('X-Backend-Storage-Policy-Index', 0), mock.call('X-Backend-Ssync-Frag-Index', ''), mock.call('X-Backend-Ssync-Node-Index', ''), ], 'endheaders': [mock.call()], } for method_name, expected_calls in expectations.items(): mock_method = getattr(mock_conn, method_name) self.assertEqual(expected_calls, mock_method.mock_calls, 'connection method "%s" got %r not %r' % ( method_name, mock_method.mock_calls, expected_calls)) def test_connect_handoff_none_frag(self): node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1') job = dict(partition='9', policy=POLICIES[1], frag_index=None) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] with mock.patch( 
'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection' ) as mock_conn_class: mock_conn = mock_conn_class.return_value mock_resp = mock.MagicMock() mock_resp.status = 200 mock_conn.getresponse.return_value = mock_resp self.sender.connect() mock_conn_class.assert_called_once_with('1.2.3.4:5678') expectations = { 'putrequest': [ mock.call('SSYNC', '/sda1/9'), ], 'putheader': [ mock.call('Transfer-Encoding', 'chunked'), mock.call('X-Backend-Storage-Policy-Index', 1), mock.call('X-Backend-Ssync-Frag-Index', ''), mock.call('X-Backend-Ssync-Node-Index', ''), ], 'endheaders': [mock.call()], } for method_name, expected_calls in expectations.items(): mock_method = getattr(mock_conn, method_name) self.assertEqual(expected_calls, mock_method.mock_calls, 'connection method "%s" got %r not %r' % ( method_name, mock_method.mock_calls, expected_calls)) def test_connect_handoff_replicated(self): node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1') # no frag_index in rsync job job = dict(partition='9', policy=POLICIES[1]) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] with mock.patch( 'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection' ) as mock_conn_class: mock_conn = mock_conn_class.return_value mock_resp = mock.MagicMock() mock_resp.status = 200 mock_conn.getresponse.return_value = mock_resp self.sender.connect() mock_conn_class.assert_called_once_with('1.2.3.4:5678') expectations = { 'putrequest': [ mock.call('SSYNC', '/sda1/9'), ], 'putheader': [ mock.call('Transfer-Encoding', 'chunked'), mock.call('X-Backend-Storage-Policy-Index', 1), mock.call('X-Backend-Ssync-Frag-Index', ''), mock.call('X-Backend-Ssync-Node-Index', ''), ], 'endheaders': [mock.call()], } for method_name, expected_calls in expectations.items(): mock_method = getattr(mock_conn, method_name) self.assertEqual(expected_calls, mock_method.mock_calls, 'connection method "%s" got %r not %r' % ( method_name, mock_method.mock_calls, expected_calls)) def test_call(self): def patch_sender(sender): sender.connect = mock.MagicMock() sender.missing_check = mock.MagicMock() sender.updates = mock.MagicMock() sender.disconnect = mock.MagicMock() node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1') job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, 'frag_index': 0, } available_map = dict([('9d41d8cd98f00b204e9800998ecf0abc', '1380144470.00000'), ('9d41d8cd98f00b204e9800998ecf0def', '1380144472.22222'), ('9d41d8cd98f00b204e9800998ecf1def', '1380144474.44444')]) # no suffixes -> no work done sender = ssync_sender.Sender( self.daemon, node, job, [], remote_check_objs=None) patch_sender(sender) sender.available_map = available_map success, candidates = sender() self.assertTrue(success) self.assertEqual({}, candidates) # all objs in sync sender = ssync_sender.Sender( self.daemon, node, job, ['ignored'], remote_check_objs=None) patch_sender(sender) sender.available_map = available_map success, candidates = sender() self.assertTrue(success) self.assertEqual(available_map, candidates) # one obj not in sync, sync'ing faked, all objs should be in return set wanted = '9d41d8cd98f00b204e9800998ecf0def' sender = ssync_sender.Sender( self.daemon, node, job, ['ignored'], remote_check_objs=None) patch_sender(sender) sender.send_map = {wanted: []} sender.available_map = available_map success, candidates = sender() self.assertTrue(success) self.assertEqual(available_map, candidates) # one obj not in sync, remote check only so that obj is not 
sync'd # and should not be in the return set wanted = '9d41d8cd98f00b204e9800998ecf0def' remote_check_objs = set(available_map.keys()) sender = ssync_sender.Sender( self.daemon, node, job, ['ignored'], remote_check_objs=remote_check_objs) patch_sender(sender) sender.send_map = {wanted: []} sender.available_map = available_map success, candidates = sender() self.assertTrue(success) expected_map = dict([('9d41d8cd98f00b204e9800998ecf0abc', '1380144470.00000'), ('9d41d8cd98f00b204e9800998ecf1def', '1380144474.44444')]) self.assertEqual(expected_map, candidates) def test_call_and_missing_check_metadata_legacy_response(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if device == 'dev' and partition == '9' and suffixes == ['abc'] \ and policy == POLICIES.legacy: yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000), 'ts_meta': Timestamp(1380155570.00005)}) else: raise Exception( 'No match for %r %r %r' % (device, partition, suffixes)) self.sender.connection = FakeConnection() self.sender.node = {} self.sender.job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, 'frag_index': 0, } self.sender.suffixes = ['abc'] self.sender.response = FakeResponse( chunk_body=( ':MISSING_CHECK: START\r\n' '9d41d8cd98f00b204e9800998ecf0abc\r\n' ':MISSING_CHECK: END\r\n' ':UPDATES: START\r\n' ':UPDATES: END\r\n' )) self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.connect = mock.MagicMock() df = mock.MagicMock() df.content_length = 0 self.sender.df_mgr.get_diskfile_from_hash = mock.MagicMock( return_value=df) self.sender.disconnect = mock.MagicMock() success, candidates = self.sender() self.assertTrue(success) found_post = found_put = False for chunk in self.sender.connection.sent: if 'POST' in chunk: found_post = True if 'PUT' in chunk: found_put = True self.assertFalse(found_post) self.assertTrue(found_put) self.assertEqual(self.sender.failures, 0) def test_call_and_missing_check(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if device == 'dev' and partition == '9' and suffixes == ['abc'] \ and policy == POLICIES.legacy: yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)}) else: raise Exception( 'No match for %r %r %r' % (device, partition, suffixes)) self.sender.connection = FakeConnection() self.sender.node = {} self.sender.job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, 'frag_index': 0, } self.sender.suffixes = ['abc'] self.sender.response = FakeResponse( chunk_body=( ':MISSING_CHECK: START\r\n' '9d41d8cd98f00b204e9800998ecf0abc d\r\n' ':MISSING_CHECK: END\r\n')) self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.connect = mock.MagicMock() self.sender.updates = mock.MagicMock() self.sender.disconnect = mock.MagicMock() success, candidates = self.sender() self.assertTrue(success) self.assertEqual(candidates, dict([('9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)})])) self.assertEqual(self.sender.failures, 0) def test_call_and_missing_check_with_obj_list(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if device == 'dev' and partition == '9' and suffixes == ['abc'] \ and policy == POLICIES.legacy: yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': 
Timestamp(1380144470.00000)}) else: raise Exception( 'No match for %r %r %r' % (device, partition, suffixes)) job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, 'frag_index': 0, } self.sender = ssync_sender.Sender(self.daemon, None, job, ['abc'], ['9d41d8cd98f00b204e9800998ecf0abc']) self.sender.connection = FakeConnection() self.sender.response = FakeResponse( chunk_body=( ':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n')) self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.connect = mock.MagicMock() self.sender.updates = mock.MagicMock() self.sender.disconnect = mock.MagicMock() success, candidates = self.sender() self.assertTrue(success) self.assertEqual(candidates, dict([('9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)})])) self.assertEqual(self.sender.failures, 0) def test_call_and_missing_check_with_obj_list_but_required(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if device == 'dev' and partition == '9' and suffixes == ['abc'] \ and policy == POLICIES.legacy: yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)}) else: raise Exception( 'No match for %r %r %r' % (device, partition, suffixes)) job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, 'frag_index': 0, } self.sender = ssync_sender.Sender(self.daemon, {}, job, ['abc'], ['9d41d8cd98f00b204e9800998ecf0abc']) self.sender.connection = FakeConnection() self.sender.response = FakeResponse( chunk_body=( ':MISSING_CHECK: START\r\n' '9d41d8cd98f00b204e9800998ecf0abc d\r\n' ':MISSING_CHECK: END\r\n')) self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.connect = mock.MagicMock() self.sender.updates = mock.MagicMock() self.sender.disconnect = mock.MagicMock() success, candidates = self.sender() self.assertTrue(success) self.assertEqual(candidates, {}) def test_connect_send_timeout(self): self.daemon.node_timeout = 0.01 # make disconnect fail fast self.daemon.conn_timeout = 0.01 node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1') job = dict(partition='9', policy=POLICIES.legacy) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] def putrequest(*args, **kwargs): eventlet.sleep(0.1) with mock.patch.object( ssync_sender.bufferedhttp.BufferedHTTPConnection, 'putrequest', putrequest): success, candidates = self.sender() self.assertFalse(success) self.assertEqual(candidates, {}) error_lines = self.daemon.logger.get_lines_for_level('error') for line in error_lines: self.assertTrue(line.startswith( '1.2.3.4:5678/sda1/9 0.01 seconds: connect send')) def test_connect_receive_timeout(self): self.daemon.node_timeout = 0.02 node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1', index=0) job = dict(partition='9', policy=POLICIES.legacy) self.sender = ssync_sender.Sender(self.daemon, node, job, None) self.sender.suffixes = ['abc'] class FakeBufferedHTTPConnection(NullBufferedHTTPConnection): def getresponse(*args, **kwargs): eventlet.sleep(0.1) with mock.patch.object( ssync_sender.bufferedhttp, 'BufferedHTTPConnection', FakeBufferedHTTPConnection): success, candidates = self.sender() self.assertFalse(success) self.assertEqual(candidates, {}) error_lines = self.daemon.logger.get_lines_for_level('error') for line in error_lines: self.assertTrue(line.startswith( '1.2.3.4:5678/sda1/9 0.02 seconds: connect receive')) def 
test_connect_bad_status(self): self.daemon.node_timeout = 0.02 node = dict(replication_ip='1.2.3.4', replication_port=5678, device='sda1', index=0) job = dict(partition='9', policy=POLICIES.legacy) class FakeBufferedHTTPConnection(NullBufferedHTTPConnection): def getresponse(*args, **kwargs): response = FakeResponse() response.status = 503 response.read = lambda: 'an error message' return response missing_check_fn = 'swift.obj.ssync_sender.Sender.missing_check' with mock.patch(missing_check_fn) as mock_missing_check: with mock.patch.object( ssync_sender.bufferedhttp, 'BufferedHTTPConnection', FakeBufferedHTTPConnection): self.sender = ssync_sender.Sender( self.daemon, node, job, ['abc']) success, candidates = self.sender() self.assertFalse(success) self.assertEqual(candidates, {}) error_lines = self.daemon.logger.get_lines_for_level('error') for line in error_lines: self.assertTrue(line.startswith( '1.2.3.4:5678/sda1/9 Expected status 200; got 503')) self.assertIn('an error message', line) # sanity check that Sender did not proceed to missing_check exchange self.assertFalse(mock_missing_check.called) def test_readline_newline_in_buffer(self): self.sender.response_buffer = 'Has a newline already.\r\nOkay.' self.assertEqual(self.sender.readline(), 'Has a newline already.\r\n') self.assertEqual(self.sender.response_buffer, 'Okay.') def test_readline_buffer_exceeds_network_chunk_size_somehow(self): self.daemon.network_chunk_size = 2 self.sender.response_buffer = '1234567890' self.assertEqual(self.sender.readline(), '1234567890') self.assertEqual(self.sender.response_buffer, '') def test_readline_at_start_of_chunk(self): self.sender.response = FakeResponse() self.sender.response.fp = six.StringIO('2\r\nx\n\r\n') self.assertEqual(self.sender.readline(), 'x\n') def test_readline_chunk_with_extension(self): self.sender.response = FakeResponse() self.sender.response.fp = six.StringIO( '2 ; chunk=extension\r\nx\n\r\n') self.assertEqual(self.sender.readline(), 'x\n') def test_readline_broken_chunk(self): self.sender.response = FakeResponse() self.sender.response.fp = six.StringIO('q\r\nx\n\r\n') self.assertRaises( exceptions.ReplicationException, self.sender.readline) self.assertTrue(self.sender.response.close_called) def test_readline_terminated_chunk(self): self.sender.response = FakeResponse() self.sender.response.fp = six.StringIO('b\r\nnot enough') self.assertRaises( exceptions.ReplicationException, self.sender.readline) self.assertTrue(self.sender.response.close_called) def test_readline_all(self): self.sender.response = FakeResponse() self.sender.response.fp = six.StringIO('2\r\nx\n\r\n0\r\n\r\n') self.assertEqual(self.sender.readline(), 'x\n') self.assertEqual(self.sender.readline(), '') self.assertEqual(self.sender.readline(), '') def test_readline_all_trailing_not_newline_termed(self): self.sender.response = FakeResponse() self.sender.response.fp = six.StringIO( '2\r\nx\n\r\n3\r\n123\r\n0\r\n\r\n') self.assertEqual(self.sender.readline(), 'x\n') self.assertEqual(self.sender.readline(), '123') self.assertEqual(self.sender.readline(), '') self.assertEqual(self.sender.readline(), '') def test_missing_check_timeout(self): self.sender.connection = FakeConnection() self.sender.connection.send = lambda d: eventlet.sleep(1) self.sender.daemon.node_timeout = 0.01 self.assertRaises(exceptions.MessageTimeout, self.sender.missing_check) def test_missing_check_has_empty_suffixes(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if (device != 'dev' or partition != '9' or policy 
!= POLICIES.legacy or suffixes != ['abc', 'def']): yield # Just here to make this a generator raise Exception( 'No match for %r %r %r %r' % (device, partition, policy, suffixes)) self.sender.connection = FakeConnection() self.sender.job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, } self.sender.suffixes = ['abc', 'def'] self.sender.response = FakeResponse( chunk_body=( ':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n')) self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.missing_check() self.assertEqual( ''.join(self.sender.connection.sent), '17\r\n:MISSING_CHECK: START\r\n\r\n' '15\r\n:MISSING_CHECK: END\r\n\r\n') self.assertEqual(self.sender.send_map, {}) self.assertEqual(self.sender.available_map, {}) def test_missing_check_has_suffixes(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if (device == 'dev' and partition == '9' and policy == POLICIES.legacy and suffixes == ['abc', 'def']): yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)}) yield ( '/srv/node/dev/objects/9/def/' '9d41d8cd98f00b204e9800998ecf0def', '9d41d8cd98f00b204e9800998ecf0def', {'ts_data': Timestamp(1380144472.22222), 'ts_meta': Timestamp(1380144473.22222)}) yield ( '/srv/node/dev/objects/9/def/' '9d41d8cd98f00b204e9800998ecf1def', '9d41d8cd98f00b204e9800998ecf1def', {'ts_data': Timestamp(1380144474.44444), 'ts_ctype': Timestamp(1380144474.44448), 'ts_meta': Timestamp(1380144475.44444)}) else: raise Exception( 'No match for %r %r %r %r' % (device, partition, policy, suffixes)) self.sender.connection = FakeConnection() self.sender.job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, } self.sender.suffixes = ['abc', 'def'] self.sender.response = FakeResponse( chunk_body=( ':MISSING_CHECK: START\r\n' ':MISSING_CHECK: END\r\n')) self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.missing_check() self.assertEqual( ''.join(self.sender.connection.sent), '17\r\n:MISSING_CHECK: START\r\n\r\n' '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n' '3b\r\n9d41d8cd98f00b204e9800998ecf0def 1380144472.22222 ' 'm:186a0\r\n\r\n' '3f\r\n9d41d8cd98f00b204e9800998ecf1def 1380144474.44444 ' 'm:186a0,t:4\r\n\r\n' '15\r\n:MISSING_CHECK: END\r\n\r\n') self.assertEqual(self.sender.send_map, {}) candidates = [('9d41d8cd98f00b204e9800998ecf0abc', dict(ts_data=Timestamp(1380144470.00000))), ('9d41d8cd98f00b204e9800998ecf0def', dict(ts_data=Timestamp(1380144472.22222), ts_meta=Timestamp(1380144473.22222))), ('9d41d8cd98f00b204e9800998ecf1def', dict(ts_data=Timestamp(1380144474.44444), ts_meta=Timestamp(1380144475.44444), ts_ctype=Timestamp(1380144474.44448)))] self.assertEqual(self.sender.available_map, dict(candidates)) def test_missing_check_far_end_disconnect(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if (device == 'dev' and partition == '9' and policy == POLICIES.legacy and suffixes == ['abc']): yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)}) else: raise Exception( 'No match for %r %r %r %r' % (device, partition, policy, suffixes)) self.sender.connection = FakeConnection() self.sender.job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, } self.sender.suffixes = ['abc'] self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.response = 
FakeResponse(chunk_body='\r\n') exc = None try: self.sender.missing_check() except exceptions.ReplicationException as err: exc = err self.assertEqual(str(exc), 'Early disconnect') self.assertEqual( ''.join(self.sender.connection.sent), '17\r\n:MISSING_CHECK: START\r\n\r\n' '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n' '15\r\n:MISSING_CHECK: END\r\n\r\n') self.assertEqual(self.sender.available_map, dict([('9d41d8cd98f00b204e9800998ecf0abc', dict(ts_data=Timestamp(1380144470.00000)))])) def test_missing_check_far_end_disconnect2(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if (device == 'dev' and partition == '9' and policy == POLICIES.legacy and suffixes == ['abc']): yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)}) else: raise Exception( 'No match for %r %r %r %r' % (device, partition, policy, suffixes)) self.sender.connection = FakeConnection() self.sender.job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, } self.sender.suffixes = ['abc'] self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.response = FakeResponse( chunk_body=':MISSING_CHECK: START\r\n') exc = None try: self.sender.missing_check() except exceptions.ReplicationException as err: exc = err self.assertEqual(str(exc), 'Early disconnect') self.assertEqual( ''.join(self.sender.connection.sent), '17\r\n:MISSING_CHECK: START\r\n\r\n' '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n' '15\r\n:MISSING_CHECK: END\r\n\r\n') self.assertEqual(self.sender.available_map, dict([('9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)})])) def test_missing_check_far_end_unexpected(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if (device == 'dev' and partition == '9' and policy == POLICIES.legacy and suffixes == ['abc']): yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)}) else: raise Exception( 'No match for %r %r %r %r' % (device, partition, policy, suffixes)) self.sender.connection = FakeConnection() self.sender.job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, } self.sender.suffixes = ['abc'] self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.response = FakeResponse(chunk_body='OH HAI\r\n') exc = None try: self.sender.missing_check() except exceptions.ReplicationException as err: exc = err self.assertEqual(str(exc), "Unexpected response: 'OH HAI'") self.assertEqual( ''.join(self.sender.connection.sent), '17\r\n:MISSING_CHECK: START\r\n\r\n' '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n' '15\r\n:MISSING_CHECK: END\r\n\r\n') self.assertEqual(self.sender.available_map, dict([('9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)})])) def test_missing_check_send_map(self): def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if (device == 'dev' and partition == '9' and policy == POLICIES.legacy and suffixes == ['abc']): yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)}) else: raise Exception( 'No match for %r %r %r %r' % (device, partition, policy, suffixes)) self.sender.connection = FakeConnection() self.sender.job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, } 
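        # The faked receiver response below flags object hash '0123abc' with
        # 'dm', i.e. it wants both the data and the meta file, which should
        # surface in send_map as {'data': True, 'meta': True}.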
self.sender.suffixes = ['abc'] self.sender.response = FakeResponse( chunk_body=( ':MISSING_CHECK: START\r\n' '0123abc dm\r\n' ':MISSING_CHECK: END\r\n')) self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.missing_check() self.assertEqual( ''.join(self.sender.connection.sent), '17\r\n:MISSING_CHECK: START\r\n\r\n' '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n' '15\r\n:MISSING_CHECK: END\r\n\r\n') self.assertEqual( self.sender.send_map, {'0123abc': {'data': True, 'meta': True}}) self.assertEqual(self.sender.available_map, dict([('9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)})])) def test_missing_check_extra_line_parts(self): # check that sender tolerates extra parts in missing check # line responses to allow for protocol upgrades def yield_hashes(device, partition, policy, suffixes=None, **kwargs): if (device == 'dev' and partition == '9' and policy == POLICIES.legacy and suffixes == ['abc']): yield ( '/srv/node/dev/objects/9/abc/' '9d41d8cd98f00b204e9800998ecf0abc', '9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)}) else: raise Exception( 'No match for %r %r %r %r' % (device, partition, policy, suffixes)) self.sender.connection = FakeConnection() self.sender.job = { 'device': 'dev', 'partition': '9', 'policy': POLICIES.legacy, } self.sender.suffixes = ['abc'] self.sender.response = FakeResponse( chunk_body=( ':MISSING_CHECK: START\r\n' '0123abc d extra response parts\r\n' ':MISSING_CHECK: END\r\n')) self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes self.sender.missing_check() self.assertEqual(self.sender.send_map, {'0123abc': {'data': True}}) self.assertEqual(self.sender.available_map, dict([('9d41d8cd98f00b204e9800998ecf0abc', {'ts_data': Timestamp(1380144470.00000)})])) def test_updates_timeout(self): self.sender.connection = FakeConnection() self.sender.connection.send = lambda d: eventlet.sleep(1) self.sender.daemon.node_timeout = 0.01 self.assertRaises(exceptions.MessageTimeout, self.sender.updates) def test_updates_empty_send_map(self): self.sender.connection = FakeConnection() self.sender.send_map = {} self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' ':UPDATES: END\r\n')) self.sender.updates() self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_updates_unexpected_response_lines1(self): self.sender.connection = FakeConnection() self.sender.send_map = {} self.sender.response = FakeResponse( chunk_body=( 'abc\r\n' ':UPDATES: START\r\n' ':UPDATES: END\r\n')) exc = None try: self.sender.updates() except exceptions.ReplicationException as err: exc = err self.assertEqual(str(exc), "Unexpected response: 'abc'") self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_updates_unexpected_response_lines2(self): self.sender.connection = FakeConnection() self.sender.send_map = {} self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' 'abc\r\n' ':UPDATES: END\r\n')) exc = None try: self.sender.updates() except exceptions.ReplicationException as err: exc = err self.assertEqual(str(exc), "Unexpected response: 'abc'") self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_updates_is_deleted(self): device = 'dev' part = '9' object_parts = ('a', 'c', 'o') df = self._make_open_diskfile(device, part, *object_parts) object_hash = 
utils.hash_path(*object_parts) delete_timestamp = utils.normalize_timestamp(time.time()) df.delete(delete_timestamp) self.sender.connection = FakeConnection() self.sender.job = { 'device': device, 'partition': part, 'policy': POLICIES.legacy, 'frag_index': 0, } self.sender.node = {} self.sender.send_map = {object_hash: {'data': True}} self.sender.send_delete = mock.MagicMock() self.sender.send_put = mock.MagicMock() self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' ':UPDATES: END\r\n')) self.sender.updates() self.sender.send_delete.assert_called_once_with( '/a/c/o', delete_timestamp) self.assertEqual(self.sender.send_put.mock_calls, []) # note that the delete line isn't actually sent since we mock # send_delete; send_delete is tested separately. self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_update_send_delete(self): device = 'dev' part = '9' object_parts = ('a', 'c', 'o') df = self._make_open_diskfile(device, part, *object_parts) object_hash = utils.hash_path(*object_parts) delete_timestamp = utils.normalize_timestamp(time.time()) df.delete(delete_timestamp) self.sender.connection = FakeConnection() self.sender.job = { 'device': device, 'partition': part, 'policy': POLICIES.legacy, 'frag_index': 0, } self.sender.node = {} self.sender.send_map = {object_hash: {'data': True}} self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' ':UPDATES: END\r\n')) self.sender.updates() self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' '30\r\n' 'DELETE /a/c/o\r\n' 'X-Timestamp: %s\r\n\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n' % delete_timestamp ) def test_updates_put(self): # sender has data file and meta file ts_iter = make_timestamp_iter() device = 'dev' part = '9' object_parts = ('a', 'c', 'o') t1 = next(ts_iter) df = self._make_open_diskfile( device, part, *object_parts, timestamp=t1) t2 = next(ts_iter) metadata = {'X-Timestamp': t2.internal, 'X-Object-Meta-Fruit': 'kiwi'} df.write_metadata(metadata) object_hash = utils.hash_path(*object_parts) df.open() expected = df.get_metadata() self.sender.connection = FakeConnection() self.sender.job = { 'device': device, 'partition': part, 'policy': POLICIES.legacy, 'frag_index': 0, } self.sender.node = {} # receiver requested data only self.sender.send_map = {object_hash: {'data': True}} self.sender.send_delete = mock.MagicMock() self.sender.send_put = mock.MagicMock() self.sender.send_post = mock.MagicMock() self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' ':UPDATES: END\r\n')) self.sender.updates() self.assertEqual(self.sender.send_delete.mock_calls, []) self.assertEqual(self.sender.send_post.mock_calls, []) self.assertEqual(1, len(self.sender.send_put.mock_calls)) args, _kwargs = self.sender.send_put.call_args path, df = args self.assertEqual(path, '/a/c/o') self.assertTrue(isinstance(df, diskfile.DiskFile)) self.assertEqual(expected, df.get_metadata()) # note that the put line isn't actually sent since we mock send_put; # send_put is tested separately. 
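        # The expected bytes below are HTTP chunked transfer encoding frames:
        # each message is sent as '<hex length>\r\n<payload>\r\n', so '11' is
        # hex(17) == len(':UPDATES: START\r\n') and 'f' is hex(15) ==
        # len(':UPDATES: END\r\n') (similarly '17'/'15' for the MISSING_CHECK
        # markers in the tests above).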
self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_updates_post(self): ts_iter = make_timestamp_iter() device = 'dev' part = '9' object_parts = ('a', 'c', 'o') t1 = next(ts_iter) df = self._make_open_diskfile( device, part, *object_parts, timestamp=t1) t2 = next(ts_iter) metadata = {'X-Timestamp': t2.internal, 'X-Object-Meta-Fruit': 'kiwi'} df.write_metadata(metadata) object_hash = utils.hash_path(*object_parts) df.open() expected = df.get_metadata() self.sender.connection = FakeConnection() self.sender.job = { 'device': device, 'partition': part, 'policy': POLICIES.legacy, 'frag_index': 0, } self.sender.node = {} # receiver requested only meta self.sender.send_map = {object_hash: {'meta': True}} self.sender.send_delete = mock.MagicMock() self.sender.send_put = mock.MagicMock() self.sender.send_post = mock.MagicMock() self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' ':UPDATES: END\r\n')) self.sender.updates() self.assertEqual(self.sender.send_delete.mock_calls, []) self.assertEqual(self.sender.send_put.mock_calls, []) self.assertEqual(1, len(self.sender.send_post.mock_calls)) args, _kwargs = self.sender.send_post.call_args path, df = args self.assertEqual(path, '/a/c/o') self.assertIsInstance(df, diskfile.DiskFile) self.assertEqual(expected, df.get_metadata()) # note that the post line isn't actually sent since we mock send_post; # send_post is tested separately. self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_updates_put_and_post(self): ts_iter = make_timestamp_iter() device = 'dev' part = '9' object_parts = ('a', 'c', 'o') t1 = next(ts_iter) df = self._make_open_diskfile( device, part, *object_parts, timestamp=t1) t2 = next(ts_iter) metadata = {'X-Timestamp': t2.internal, 'X-Object-Meta-Fruit': 'kiwi'} df.write_metadata(metadata) object_hash = utils.hash_path(*object_parts) df.open() expected = df.get_metadata() self.sender.connection = FakeConnection() self.sender.job = { 'device': device, 'partition': part, 'policy': POLICIES.legacy, 'frag_index': 0, } self.sender.node = {} # receiver requested data and meta self.sender.send_map = {object_hash: {'meta': True, 'data': True}} self.sender.send_delete = mock.MagicMock() self.sender.send_put = mock.MagicMock() self.sender.send_post = mock.MagicMock() self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' ':UPDATES: END\r\n')) self.sender.updates() self.assertEqual(self.sender.send_delete.mock_calls, []) self.assertEqual(1, len(self.sender.send_put.mock_calls)) self.assertEqual(1, len(self.sender.send_post.mock_calls)) args, _kwargs = self.sender.send_put.call_args path, df = args self.assertEqual(path, '/a/c/o') self.assertIsInstance(df, diskfile.DiskFile) self.assertEqual(expected, df.get_metadata()) args, _kwargs = self.sender.send_post.call_args path, df = args self.assertEqual(path, '/a/c/o') self.assertIsInstance(df, diskfile.DiskFile) self.assertEqual(expected, df.get_metadata()) self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_updates_storage_policy_index(self): device = 'dev' part = '9' object_parts = ('a', 'c', 'o') df = self._make_open_diskfile(device, part, *object_parts, policy=POLICIES[0]) object_hash = utils.hash_path(*object_parts) expected = df.get_metadata() self.sender.connection = FakeConnection() self.sender.job = { 'device': device, 'partition': part, 
'policy': POLICIES[0], 'frag_index': 0} self.sender.node = {} self.sender.send_map = {object_hash: {'data': True}} self.sender.send_delete = mock.MagicMock() self.sender.send_put = mock.MagicMock() self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' ':UPDATES: END\r\n')) self.sender.updates() args, _kwargs = self.sender.send_put.call_args path, df = args self.assertEqual(path, '/a/c/o') self.assertTrue(isinstance(df, diskfile.DiskFile)) self.assertEqual(expected, df.get_metadata()) self.assertEqual(os.path.join(self.testdir, 'dev/objects/9/', object_hash[-3:], object_hash), df._datadir) def test_updates_read_response_timeout_start(self): self.sender.connection = FakeConnection() self.sender.send_map = {} self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' ':UPDATES: END\r\n')) orig_readline = self.sender.readline def delayed_readline(): eventlet.sleep(1) return orig_readline() self.sender.readline = delayed_readline self.sender.daemon.http_timeout = 0.01 self.assertRaises(exceptions.MessageTimeout, self.sender.updates) def test_updates_read_response_disconnect_start(self): self.sender.connection = FakeConnection() self.sender.send_map = {} self.sender.response = FakeResponse(chunk_body='\r\n') exc = None try: self.sender.updates() except exceptions.ReplicationException as err: exc = err self.assertEqual(str(exc), 'Early disconnect') self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_updates_read_response_unexp_start(self): self.sender.connection = FakeConnection() self.sender.send_map = {} self.sender.response = FakeResponse( chunk_body=( 'anything else\r\n' ':UPDATES: START\r\n' ':UPDATES: END\r\n')) exc = None try: self.sender.updates() except exceptions.ReplicationException as err: exc = err self.assertEqual(str(exc), "Unexpected response: 'anything else'") self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_updates_read_response_timeout_end(self): self.sender.connection = FakeConnection() self.sender.send_map = {} self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' ':UPDATES: END\r\n')) orig_readline = self.sender.readline def delayed_readline(): rv = orig_readline() if rv == ':UPDATES: END\r\n': eventlet.sleep(1) return rv self.sender.readline = delayed_readline self.sender.daemon.http_timeout = 0.01 self.assertRaises(exceptions.MessageTimeout, self.sender.updates) def test_updates_read_response_disconnect_end(self): self.sender.connection = FakeConnection() self.sender.send_map = {} self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' '\r\n')) exc = None try: self.sender.updates() except exceptions.ReplicationException as err: exc = err self.assertEqual(str(exc), 'Early disconnect') self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_updates_read_response_unexp_end(self): self.sender.connection = FakeConnection() self.sender.send_map = {} self.sender.response = FakeResponse( chunk_body=( ':UPDATES: START\r\n' 'anything else\r\n' ':UPDATES: END\r\n')) exc = None try: self.sender.updates() except exceptions.ReplicationException as err: exc = err self.assertEqual(str(exc), "Unexpected response: 'anything else'") self.assertEqual( ''.join(self.sender.connection.sent), '11\r\n:UPDATES: START\r\n\r\n' 'f\r\n:UPDATES: END\r\n\r\n') def test_send_delete_timeout(self): self.sender.connection = 
FakeConnection() self.sender.connection.send = lambda d: eventlet.sleep(1) self.sender.daemon.node_timeout = 0.01 exc = None try: self.sender.send_delete('/a/c/o', utils.Timestamp('1381679759.90941')) except exceptions.MessageTimeout as err: exc = err self.assertEqual(str(exc), '0.01 seconds: send_delete') def test_send_delete(self): self.sender.connection = FakeConnection() self.sender.send_delete('/a/c/o', utils.Timestamp('1381679759.90941')) self.assertEqual( ''.join(self.sender.connection.sent), '30\r\n' 'DELETE /a/c/o\r\n' 'X-Timestamp: 1381679759.90941\r\n' '\r\n\r\n') def test_send_put_initial_timeout(self): df = self._make_open_diskfile() df._disk_chunk_size = 2 self.sender.connection = FakeConnection() self.sender.connection.send = lambda d: eventlet.sleep(1) self.sender.daemon.node_timeout = 0.01 exc = None try: self.sender.send_put('/a/c/o', df) except exceptions.MessageTimeout as err: exc = err self.assertEqual(str(exc), '0.01 seconds: send_put') def test_send_put_chunk_timeout(self): df = self._make_open_diskfile() self.sender.connection = FakeConnection() self.sender.daemon.node_timeout = 0.01 one_shot = [None] def mock_send(data): try: one_shot.pop() except IndexError: eventlet.sleep(1) self.sender.connection.send = mock_send exc = None try: self.sender.send_put('/a/c/o', df) except exceptions.MessageTimeout as err: exc = err self.assertEqual(str(exc), '0.01 seconds: send_put chunk') def test_send_put(self): ts_iter = make_timestamp_iter() t1 = next(ts_iter) body = 'test' extra_metadata = {'Some-Other-Header': 'value'} df = self._make_open_diskfile(body=body, timestamp=t1, extra_metadata=extra_metadata) expected = dict(df.get_metadata()) expected['body'] = body expected['chunk_size'] = len(body) # .meta file metadata is not included in expected for data only PUT t2 = next(ts_iter) metadata = {'X-Timestamp': t2.internal, 'X-Object-Meta-Fruit': 'kiwi'} df.write_metadata(metadata) df.open() self.sender.connection = FakeConnection() self.sender.send_put('/a/c/o', df) self.assertEqual( ''.join(self.sender.connection.sent), '82\r\n' 'PUT /a/c/o\r\n' 'Content-Length: %(Content-Length)s\r\n' 'ETag: %(ETag)s\r\n' 'Some-Other-Header: value\r\n' 'X-Timestamp: %(X-Timestamp)s\r\n' '\r\n' '\r\n' '%(chunk_size)s\r\n' '%(body)s\r\n' % expected) def test_send_post(self): ts_iter = make_timestamp_iter() # create .data file extra_metadata = {'X-Object-Meta-Foo': 'old_value', 'X-Object-Sysmeta-Test': 'test_sysmeta', 'Content-Type': 'test_content_type'} ts_0 = next(ts_iter) df = self._make_open_diskfile(extra_metadata=extra_metadata, timestamp=ts_0) # create .meta file ts_1 = next(ts_iter) newer_metadata = {'X-Object-Meta-Foo': 'new_value', 'X-Timestamp': ts_1.internal} df.write_metadata(newer_metadata) self.sender.connection = FakeConnection() with df.open(): self.sender.send_post('/a/c/o', df) self.assertEqual( ''.join(self.sender.connection.sent), '4c\r\n' 'POST /a/c/o\r\n' 'X-Object-Meta-Foo: new_value\r\n' 'X-Timestamp: %s\r\n' '\r\n' '\r\n' % ts_1.internal) def test_disconnect_timeout(self): self.sender.connection = FakeConnection() self.sender.connection.send = lambda d: eventlet.sleep(1) self.sender.daemon.node_timeout = 0.01 self.sender.disconnect() self.assertEqual(''.join(self.sender.connection.sent), '') self.assertTrue(self.sender.connection.closed) def test_disconnect(self): self.sender.connection = FakeConnection() self.sender.disconnect() self.assertEqual(''.join(self.sender.connection.sent), '0\r\n\r\n') self.assertTrue(self.sender.connection.closed) class 
TestModuleMethods(unittest.TestCase): def test_encode_missing(self): object_hash = '9d41d8cd98f00b204e9800998ecf0abc' ts_iter = make_timestamp_iter() t_data = next(ts_iter) t_type = next(ts_iter) t_meta = next(ts_iter) d_meta_data = t_meta.raw - t_data.raw d_type_data = t_type.raw - t_data.raw # equal data and meta timestamps -> legacy single timestamp string expected = '%s %s' % (object_hash, t_data.internal) self.assertEqual( expected, ssync_sender.encode_missing(object_hash, t_data, ts_meta=t_data)) # newer meta timestamp -> hex data delta encoded as extra message part expected = '%s %s m:%x' % (object_hash, t_data.internal, d_meta_data) self.assertEqual( expected, ssync_sender.encode_missing(object_hash, t_data, ts_meta=t_meta)) # newer meta timestamp -> hex data delta encoded as extra message part # content type timestamp equals data timestamp -> no delta expected = '%s %s m:%x' % (object_hash, t_data.internal, d_meta_data) self.assertEqual( expected, ssync_sender.encode_missing(object_hash, t_data, t_meta, t_data)) # content type timestamp newer data timestamp -> delta encoded expected = ('%s %s m:%x,t:%x' % (object_hash, t_data.internal, d_meta_data, d_type_data)) self.assertEqual( expected, ssync_sender.encode_missing(object_hash, t_data, t_meta, t_type)) # content type timestamp equal to meta timestamp -> delta encoded expected = ('%s %s m:%x,t:%x' % (object_hash, t_data.internal, d_meta_data, d_type_data)) self.assertEqual( expected, ssync_sender.encode_missing(object_hash, t_data, t_meta, t_type)) # test encode and decode functions invert expected = {'object_hash': object_hash, 'ts_meta': t_meta, 'ts_data': t_data, 'ts_ctype': t_type} msg = ssync_sender.encode_missing(**expected) actual = ssync_receiver.decode_missing(msg) self.assertEqual(expected, actual) expected = {'object_hash': object_hash, 'ts_meta': t_meta, 'ts_data': t_meta, 'ts_ctype': t_meta} msg = ssync_sender.encode_missing(**expected) actual = ssync_receiver.decode_missing(msg) self.assertEqual(expected, actual) def test_decode_wanted(self): parts = ['d'] expected = {'data': True} self.assertEqual(ssync_sender.decode_wanted(parts), expected) parts = ['m'] expected = {'meta': True} self.assertEqual(ssync_sender.decode_wanted(parts), expected) parts = ['dm'] expected = {'data': True, 'meta': True} self.assertEqual(ssync_sender.decode_wanted(parts), expected) # you don't really expect these next few... parts = ['md'] expected = {'data': True, 'meta': True} self.assertEqual(ssync_sender.decode_wanted(parts), expected) parts = ['xcy', 'funny', {'business': True}] expected = {'data': True} self.assertEqual(ssync_sender.decode_wanted(parts), expected) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/obj/test_expirer.py0000664000567000056710000006463413024044354021631 0ustar jenkinsjenkins00000000000000# Copyright (c) 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
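# The tests below drive swift.obj.expirer.ObjectExpirer against small stub
# internal clients. For test_process_based_concurrency, the relevant idea is
# that each of the `processes` expirer processes only handles the queue
# entries that hash to its own `process` slot. A minimal sketch of that
# partitioning (an illustration only -- the helper name and hashing details
# here are assumptions, not a verbatim copy of expirer.py):
#
#     import hashlib
#
#     def is_mine(entry_name, process, processes):
#         # keep the entry only if its hash lands on this process's slot
#         return int(hashlib.md5(entry_name).hexdigest(), 16) \
#             % processes == process
#
# Running run_once() once per process (0, 1 and 2 with processes=3, as the
# test does) should therefore cover every queue entry between them.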
from time import time from unittest import main, TestCase from test.unit import FakeRing, mocked_http_conn, debug_logger from copy import deepcopy from tempfile import mkdtemp from shutil import rmtree import mock import six from six.moves import urllib from swift.common import internal_client, utils from swift.obj import expirer def not_random(): return 0.5 last_not_sleep = 0 def not_sleep(seconds): global last_not_sleep last_not_sleep = seconds class TestObjectExpirer(TestCase): maxDiff = None internal_client = None def setUp(self): global not_sleep self.old_loadapp = internal_client.loadapp self.old_sleep = internal_client.sleep internal_client.loadapp = lambda *a, **kw: None internal_client.sleep = not_sleep self.rcache = mkdtemp() self.conf = {'recon_cache_path': self.rcache} self.logger = debug_logger('test-recon') def tearDown(self): rmtree(self.rcache) internal_client.sleep = self.old_sleep internal_client.loadapp = self.old_loadapp def test_get_process_values_from_kwargs(self): x = expirer.ObjectExpirer({}) vals = { 'processes': 5, 'process': 1, } x.get_process_values(vals) self.assertEqual(x.processes, 5) self.assertEqual(x.process, 1) def test_get_process_values_from_config(self): vals = { 'processes': 5, 'process': 1, } x = expirer.ObjectExpirer(vals) x.get_process_values({}) self.assertEqual(x.processes, 5) self.assertEqual(x.process, 1) def test_get_process_values_negative_process(self): vals = { 'processes': 5, 'process': -1, } # from config x = expirer.ObjectExpirer(vals) self.assertRaises(ValueError, x.get_process_values, {}) # from kwargs x = expirer.ObjectExpirer({}) self.assertRaises(ValueError, x.get_process_values, vals) def test_get_process_values_negative_processes(self): vals = { 'processes': -5, 'process': 1, } # from config x = expirer.ObjectExpirer(vals) self.assertRaises(ValueError, x.get_process_values, {}) # from kwargs x = expirer.ObjectExpirer({}) self.assertRaises(ValueError, x.get_process_values, vals) def test_get_process_values_process_greater_than_processes(self): vals = { 'processes': 5, 'process': 7, } # from config x = expirer.ObjectExpirer(vals) self.assertRaises(ValueError, x.get_process_values, {}) # from kwargs x = expirer.ObjectExpirer({}) self.assertRaises(ValueError, x.get_process_values, vals) def test_init_concurrency_too_small(self): conf = { 'concurrency': 0, } self.assertRaises(ValueError, expirer.ObjectExpirer, conf) conf = { 'concurrency': -1, } self.assertRaises(ValueError, expirer.ObjectExpirer, conf) def test_process_based_concurrency(self): class ObjectExpirer(expirer.ObjectExpirer): def __init__(self, conf): super(ObjectExpirer, self).__init__(conf) self.processes = 3 self.deleted_objects = {} self.obj_containers_in_order = [] def delete_object(self, actual_obj, timestamp, container, obj): if container not in self.deleted_objects: self.deleted_objects[container] = set() self.deleted_objects[container].add(obj) self.obj_containers_in_order.append(container) class InternalClient(object): def __init__(self, containers): self.containers = containers def get_account_info(self, *a, **kw): return len(self.containers.keys()), \ sum([len(self.containers[x]) for x in self.containers]) def iter_containers(self, *a, **kw): return [{'name': six.text_type(x)} for x in self.containers.keys()] def iter_objects(self, account, container): return [{'name': six.text_type(x)} for x in self.containers[container]] def delete_container(*a, **kw): pass containers = { '0': set('1-one 2-two 3-three'.split()), '1': set('2-two 3-three 4-four'.split()), '2': 
set('5-five 6-six'.split()), '3': set(u'7-seven\u2661'.split()), } x = ObjectExpirer(self.conf) x.swift = InternalClient(containers) deleted_objects = {} for i in range(3): x.process = i x.run_once() self.assertNotEqual(deleted_objects, x.deleted_objects) deleted_objects = deepcopy(x.deleted_objects) self.assertEqual(containers['3'].pop(), deleted_objects['3'].pop().decode('utf8')) self.assertEqual(containers, deleted_objects) self.assertEqual(len(set(x.obj_containers_in_order[:4])), 4) def test_delete_object(self): class InternalClient(object): container_ring = None def __init__(self, test, account, container, obj): self.test = test self.account = account self.container = container self.obj = obj self.delete_object_called = False class DeleteActualObject(object): def __init__(self, test, actual_obj, timestamp): self.test = test self.actual_obj = actual_obj self.timestamp = timestamp self.called = False def __call__(self, actual_obj, timestamp): self.test.assertEqual(self.actual_obj, actual_obj) self.test.assertEqual(self.timestamp, timestamp) self.called = True container = 'container' obj = 'obj' actual_obj = 'actual_obj' timestamp = 'timestamp' x = expirer.ObjectExpirer({}, logger=self.logger) x.swift = \ InternalClient(self, x.expiring_objects_account, container, obj) x.delete_actual_object = \ DeleteActualObject(self, actual_obj, timestamp) delete_object_called = [] def pop_queue(c, o): self.assertEqual(container, c) self.assertEqual(obj, o) delete_object_called[:] = [True] x.pop_queue = pop_queue x.delete_object(actual_obj, timestamp, container, obj) self.assertTrue(delete_object_called) self.assertTrue(x.delete_actual_object.called) def test_report(self): x = expirer.ObjectExpirer({}, logger=self.logger) x.report() self.assertEqual(x.logger.get_lines_for_level('info'), []) x.logger._clear() x.report(final=True) self.assertTrue( 'completed' in str(x.logger.get_lines_for_level('info'))) self.assertTrue( 'so far' not in str(x.logger.get_lines_for_level('info'))) x.logger._clear() x.report_last_time = time() - x.report_interval x.report() self.assertTrue( 'completed' not in str(x.logger.get_lines_for_level('info'))) self.assertTrue( 'so far' in str(x.logger.get_lines_for_level('info'))) def test_run_once_nothing_to_do(self): x = expirer.ObjectExpirer(self.conf, logger=self.logger) x.swift = 'throw error because a string does not have needed methods' x.run_once() self.assertEqual(x.logger.get_lines_for_level('error'), ["Unhandled exception: "]) log_args, log_kwargs = x.logger.log_dict['error'][0] self.assertEqual(str(log_kwargs['exc_info'][1]), "'str' object has no attribute 'get_account_info'") def test_run_once_calls_report(self): class InternalClient(object): def get_account_info(*a, **kw): return 1, 2 def iter_containers(*a, **kw): return [] x = expirer.ObjectExpirer(self.conf, logger=self.logger) x.swift = InternalClient() x.run_once() self.assertEqual( x.logger.get_lines_for_level('info'), [ 'Pass beginning; 1 possible containers; 2 possible objects', 'Pass completed in 0s; 0 objects expired', ]) def test_run_once_unicode_problem(self): class InternalClient(object): container_ring = FakeRing() def get_account_info(*a, **kw): return 1, 2 def iter_containers(*a, **kw): return [{'name': u'1234'}] def iter_objects(*a, **kw): return [{'name': u'1234-troms\xf8'}] def make_request(*a, **kw): pass def delete_container(*a, **kw): pass x = expirer.ObjectExpirer(self.conf, logger=self.logger) x.swift = InternalClient() requests = [] def capture_requests(ipaddr, port, method, path, *args, 
**kwargs): requests.append((method, path)) with mocked_http_conn( 200, 200, 200, give_connect=capture_requests): x.run_once() self.assertEqual(len(requests), 3) def test_container_timestamp_break(self): class InternalClient(object): def __init__(self, containers): self.containers = containers def get_account_info(*a, **kw): return 1, 2 def iter_containers(self, *a, **kw): return self.containers def iter_objects(*a, **kw): raise Exception('This should not have been called') x = expirer.ObjectExpirer(self.conf, logger=self.logger) x.swift = InternalClient([{'name': str(int(time() + 86400))}]) x.run_once() logs = x.logger.all_log_lines() self.assertEqual(logs['info'], [ 'Pass beginning; 1 possible containers; 2 possible objects', 'Pass completed in 0s; 0 objects expired', ]) self.assertTrue('error' not in logs) # Reverse test to be sure it still would blow up the way expected. fake_swift = InternalClient([{'name': str(int(time() - 86400))}]) x = expirer.ObjectExpirer(self.conf, logger=self.logger, swift=fake_swift) x.run_once() self.assertEqual( x.logger.get_lines_for_level('error'), [ 'Unhandled exception: ']) log_args, log_kwargs = x.logger.log_dict['error'][-1] self.assertEqual(str(log_kwargs['exc_info'][1]), 'This should not have been called') def test_object_timestamp_break(self): class InternalClient(object): def __init__(self, containers, objects): self.containers = containers self.objects = objects def get_account_info(*a, **kw): return 1, 2 def iter_containers(self, *a, **kw): return self.containers def delete_container(*a, **kw): pass def iter_objects(self, *a, **kw): return self.objects def should_not_be_called(*a, **kw): raise Exception('This should not have been called') fake_swift = InternalClient( [{'name': str(int(time() - 86400))}], [{'name': '%d-actual-obj' % int(time() + 86400)}]) x = expirer.ObjectExpirer(self.conf, logger=self.logger, swift=fake_swift) x.run_once() self.assertTrue('error' not in x.logger.all_log_lines()) self.assertEqual(x.logger.get_lines_for_level('info'), [ 'Pass beginning; 1 possible containers; 2 possible objects', 'Pass completed in 0s; 0 objects expired', ]) # Reverse test to be sure it still would blow up the way expected. 
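        # Here the object name encodes a timestamp in the past, so the expirer
        # does try to delete it; delete_actual_object is stubbed to raise, and
        # that exception must show up in the error log. This confirms the
        # earlier (future-timestamp) pass really skipped the delete rather
        # than failing silently.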
ts = int(time() - 86400) fake_swift = InternalClient( [{'name': str(int(time() - 86400))}], [{'name': '%d-actual-obj' % ts}]) x = expirer.ObjectExpirer(self.conf, logger=self.logger, swift=fake_swift) x.delete_actual_object = should_not_be_called x.run_once() self.assertEqual( x.logger.get_lines_for_level('error'), ['Exception while deleting object %d %d-actual-obj ' 'This should not have been called: ' % (ts, ts)]) def test_failed_delete_keeps_entry(self): class InternalClient(object): container_ring = None def __init__(self, containers, objects): self.containers = containers self.objects = objects def get_account_info(*a, **kw): return 1, 2 def iter_containers(self, *a, **kw): return self.containers def delete_container(*a, **kw): pass def iter_objects(self, *a, **kw): return self.objects def deliberately_blow_up(actual_obj, timestamp): raise Exception('failed to delete actual object') def should_not_get_called(container, obj): raise Exception('This should not have been called') ts = int(time() - 86400) fake_swift = InternalClient( [{'name': str(int(time() - 86400))}], [{'name': '%d-actual-obj' % ts}]) x = expirer.ObjectExpirer(self.conf, logger=self.logger, swift=fake_swift) x.iter_containers = lambda: [str(int(time() - 86400))] x.delete_actual_object = deliberately_blow_up x.pop_queue = should_not_get_called x.run_once() error_lines = x.logger.get_lines_for_level('error') self.assertEqual( error_lines, ['Exception while deleting object %d %d-actual-obj ' 'failed to delete actual object: ' % (ts, ts)]) self.assertEqual( x.logger.get_lines_for_level('info'), [ 'Pass beginning; 1 possible containers; 2 possible objects', 'Pass completed in 0s; 0 objects expired', ]) # Reverse test to be sure it still would blow up the way expected. ts = int(time() - 86400) fake_swift = InternalClient( [{'name': str(int(time() - 86400))}], [{'name': '%d-actual-obj' % ts}]) self.logger._clear() x = expirer.ObjectExpirer(self.conf, logger=self.logger, swift=fake_swift) x.delete_actual_object = lambda o, t: None x.pop_queue = should_not_get_called x.run_once() self.assertEqual( self.logger.get_lines_for_level('error'), ['Exception while deleting object %d %d-actual-obj This should ' 'not have been called: ' % (ts, ts)]) def test_success_gets_counted(self): class InternalClient(object): container_ring = None def __init__(self, containers, objects): self.containers = containers self.objects = objects def get_account_info(*a, **kw): return 1, 2 def iter_containers(self, *a, **kw): return self.containers def delete_container(*a, **kw): pass def delete_object(*a, **kw): pass def iter_objects(self, *a, **kw): return self.objects fake_swift = InternalClient( [{'name': str(int(time() - 86400))}], [{'name': '%d-acc/c/actual-obj' % int(time() - 86400)}]) x = expirer.ObjectExpirer(self.conf, logger=self.logger, swift=fake_swift) x.delete_actual_object = lambda o, t: None x.pop_queue = lambda c, o: None self.assertEqual(x.report_objects, 0) with mock.patch('swift.obj.expirer.MAX_OBJECTS_TO_CACHE', 0): x.run_once() self.assertEqual(x.report_objects, 1) self.assertEqual( x.logger.get_lines_for_level('info'), ['Pass beginning; 1 possible containers; 2 possible objects', 'Pass completed in 0s; 1 objects expired']) def test_delete_actual_object_does_not_get_unicode(self): class InternalClient(object): container_ring = None def __init__(self, containers, objects): self.containers = containers self.objects = objects def get_account_info(*a, **kw): return 1, 2 def iter_containers(self, *a, **kw): return self.containers def 
delete_container(*a, **kw): pass def delete_object(*a, **kw): pass def iter_objects(self, *a, **kw): return self.objects got_unicode = [False] def delete_actual_object_test_for_unicode(actual_obj, timestamp): if isinstance(actual_obj, six.text_type): got_unicode[0] = True fake_swift = InternalClient( [{'name': str(int(time() - 86400))}], [{'name': u'%d-actual-obj' % int(time() - 86400)}]) x = expirer.ObjectExpirer(self.conf, logger=self.logger, swift=fake_swift) x.delete_actual_object = delete_actual_object_test_for_unicode x.pop_queue = lambda c, o: None self.assertEqual(x.report_objects, 0) x.run_once() self.assertEqual(x.report_objects, 1) self.assertEqual( x.logger.get_lines_for_level('info'), [ 'Pass beginning; 1 possible containers; 2 possible objects', 'Pass completed in 0s; 1 objects expired', ]) self.assertFalse(got_unicode[0]) def test_failed_delete_continues_on(self): class InternalClient(object): container_ring = None def __init__(self, containers, objects): self.containers = containers self.objects = objects def get_account_info(*a, **kw): return 1, 2 def iter_containers(self, *a, **kw): return self.containers def delete_container(*a, **kw): raise Exception('failed to delete container') def delete_object(*a, **kw): pass def iter_objects(self, *a, **kw): return self.objects def fail_delete_actual_object(actual_obj, timestamp): raise Exception('failed to delete actual object') x = expirer.ObjectExpirer(self.conf, logger=self.logger) cts = int(time() - 86400) ots = int(time() - 86400) containers = [ {'name': str(cts)}, {'name': str(cts + 1)}, ] objects = [ {'name': '%d-actual-obj' % ots}, {'name': '%d-next-obj' % ots} ] x.swift = InternalClient(containers, objects) x.delete_actual_object = fail_delete_actual_object x.run_once() error_lines = x.logger.get_lines_for_level('error') self.assertEqual(sorted(error_lines), sorted([ 'Exception while deleting object %d %d-actual-obj failed to ' 'delete actual object: ' % (cts, ots), 'Exception while deleting object %d %d-next-obj failed to ' 'delete actual object: ' % (cts, ots), 'Exception while deleting object %d %d-actual-obj failed to ' 'delete actual object: ' % (cts + 1, ots), 'Exception while deleting object %d %d-next-obj failed to ' 'delete actual object: ' % (cts + 1, ots), 'Exception while deleting container %d failed to delete ' 'container: ' % (cts,), 'Exception while deleting container %d failed to delete ' 'container: ' % (cts + 1,)])) self.assertEqual(x.logger.get_lines_for_level('info'), [ 'Pass beginning; 1 possible containers; 2 possible objects', 'Pass completed in 0s; 0 objects expired', ]) def test_run_forever_initial_sleep_random(self): global last_not_sleep def raise_system_exit(): raise SystemExit('test_run_forever') interval = 1234 x = expirer.ObjectExpirer({'__file__': 'unit_test', 'interval': interval}) orig_random = expirer.random orig_sleep = expirer.sleep try: expirer.random = not_random expirer.sleep = not_sleep x.run_once = raise_system_exit x.run_forever() except SystemExit as err: pass finally: expirer.random = orig_random expirer.sleep = orig_sleep self.assertEqual(str(err), 'test_run_forever') self.assertEqual(last_not_sleep, 0.5 * interval) def test_run_forever_catches_usual_exceptions(self): raises = [0] def raise_exceptions(): raises[0] += 1 if raises[0] < 2: raise Exception('exception %d' % raises[0]) raise SystemExit('exiting exception %d' % raises[0]) x = expirer.ObjectExpirer({}, logger=self.logger) orig_sleep = expirer.sleep try: expirer.sleep = not_sleep x.run_once = raise_exceptions 
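            # run_forever() is expected to catch and log the ordinary
            # Exception from the first pass ('exception 1') and keep looping;
            # only the SystemExit raised on the second pass should propagate
            # and end the loop, which the assertions below verify.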
x.run_forever() except SystemExit as err: pass finally: expirer.sleep = orig_sleep self.assertEqual(str(err), 'exiting exception 2') self.assertEqual(x.logger.get_lines_for_level('error'), ['Unhandled exception: ']) log_args, log_kwargs = x.logger.log_dict['error'][0] self.assertEqual(str(log_kwargs['exc_info'][1]), 'exception 1') def test_delete_actual_object(self): got_env = [None] def fake_app(env, start_response): got_env[0] = env start_response('204 No Content', [('Content-Length', '0')]) return [] internal_client.loadapp = lambda *a, **kw: fake_app x = expirer.ObjectExpirer({}) ts = '1234' x.delete_actual_object('/path/to/object', ts) self.assertEqual(got_env[0]['HTTP_X_IF_DELETE_AT'], ts) def test_delete_actual_object_nourlquoting(self): # delete_actual_object should not do its own url quoting because # internal client's make_request handles that. got_env = [None] def fake_app(env, start_response): got_env[0] = env start_response('204 No Content', [('Content-Length', '0')]) return [] internal_client.loadapp = lambda *a, **kw: fake_app x = expirer.ObjectExpirer({}) ts = '1234' x.delete_actual_object('/path/to/object name', ts) self.assertEqual(got_env[0]['HTTP_X_IF_DELETE_AT'], ts) self.assertEqual(got_env[0]['PATH_INFO'], '/v1/path/to/object name') def test_delete_actual_object_raises_404(self): def fake_app(env, start_response): start_response('404 Not Found', [('Content-Length', '0')]) return [] internal_client.loadapp = lambda *a, **kw: fake_app x = expirer.ObjectExpirer({}) self.assertRaises(internal_client.UnexpectedResponse, x.delete_actual_object, '/path/to/object', '1234') def test_delete_actual_object_handles_412(self): def fake_app(env, start_response): start_response('412 Precondition Failed', [('Content-Length', '0')]) return [] internal_client.loadapp = lambda *a, **kw: fake_app x = expirer.ObjectExpirer({}) x.delete_actual_object('/path/to/object', '1234') def test_delete_actual_object_does_not_handle_odd_stuff(self): def fake_app(env, start_response): start_response( '503 Internal Server Error', [('Content-Length', '0')]) return [] internal_client.loadapp = lambda *a, **kw: fake_app x = expirer.ObjectExpirer({}) exc = None try: x.delete_actual_object('/path/to/object', '1234') except Exception as err: exc = err finally: pass self.assertEqual(503, exc.resp.status_int) def test_delete_actual_object_quotes(self): name = 'this name should get quoted' timestamp = '1366063156.863045' x = expirer.ObjectExpirer({}) x.swift.make_request = mock.MagicMock() x.delete_actual_object(name, timestamp) self.assertEqual(x.swift.make_request.call_count, 1) self.assertEqual(x.swift.make_request.call_args[0][1], '/v1/' + urllib.parse.quote(name)) def test_pop_queue(self): class InternalClient(object): container_ring = FakeRing() x = expirer.ObjectExpirer({}, logger=self.logger, swift=InternalClient()) requests = [] def capture_requests(ipaddr, port, method, path, *args, **kwargs): requests.append((method, path)) with mocked_http_conn( 200, 200, 200, give_connect=capture_requests) as fake_conn: x.pop_queue('c', 'o') self.assertRaises(StopIteration, fake_conn.code_iter.next) for method, path in requests: self.assertEqual(method, 'DELETE') device, part, account, container, obj = utils.split_path( path, 5, 5, True) self.assertEqual(account, '.expiring_objects') self.assertEqual(container, 'c') self.assertEqual(obj, 'o') if __name__ == '__main__': main() swift-2.7.1/test/unit/obj/test_replicator.py0000664000567000056710000025111113024044354022303 0ustar jenkinsjenkins00000000000000# Copyright 
(c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import os import mock from gzip import GzipFile from shutil import rmtree import six.moves.cPickle as pickle import time import tempfile from contextlib import contextmanager, closing from collections import defaultdict from errno import ENOENT, ENOTEMPTY, ENOTDIR from eventlet.green import subprocess from eventlet import Timeout, tpool from test.unit import (debug_logger, patch_policies, make_timestamp_iter, mocked_http_conn) from swift.common import utils from swift.common.utils import (hash_path, mkdirs, normalize_timestamp, storage_directory) from swift.common import ring from swift.obj import diskfile, replicator as object_replicator from swift.common.storage_policy import StoragePolicy, POLICIES def _ips(*args, **kwargs): return ['127.0.0.0'] def mock_http_connect(status): class FakeConn(object): def __init__(self, status, *args, **kwargs): self.status = status self.reason = 'Fake' self.host = args[0] self.port = args[1] self.method = args[4] self.path = args[5] self.with_exc = False self.headers = kwargs.get('headers', {}) def getresponse(self): if self.with_exc: raise Exception('test') return self def getheader(self, header): return self.headers[header] def read(self, amt=None): return pickle.dumps({}) def close(self): return return lambda *args, **kwargs: FakeConn(status, *args, **kwargs) process_errors = [] class MockProcess(object): ret_code = None ret_log = None check_args = None captured_log = None class Stream(object): def read(self): return next(MockProcess.ret_log) def __init__(self, *args, **kwargs): targs = next(MockProcess.check_args) for targ in targs: # Allow more than 2 candidate targs # (e.g. 
a case that either node is fine when nodes shuffled) if isinstance(targ, tuple): allowed = False for target in targ: if target in args[0]: allowed = True if not allowed: process_errors.append("Invalid: %s not in %s" % (targ, args)) else: if targ not in args[0]: process_errors.append("Invalid: %s not in %s" % (targ, args)) self.captured_info = { 'rsync_args': args[0], } self.stdout = self.Stream() def wait(self): # the _mock_process context manager assures this class attribute is a # mutable list and takes care of resetting it rv = next(self.ret_code) if self.captured_log is not None: self.captured_info['ret_code'] = rv self.captured_log.append(self.captured_info) return rv @contextmanager def _mock_process(ret): captured_log = [] MockProcess.captured_log = captured_log orig_process = subprocess.Popen MockProcess.ret_code = (i[0] for i in ret) MockProcess.ret_log = (i[1] for i in ret) MockProcess.check_args = (i[2] for i in ret) object_replicator.subprocess.Popen = MockProcess yield captured_log MockProcess.captured_log = None object_replicator.subprocess.Popen = orig_process def _create_test_rings(path, devs=None): testgz = os.path.join(path, 'object.ring.gz') intended_replica2part2dev_id = [ [0, 1, 2, 3, 4, 5, 6], [1, 2, 3, 0, 5, 6, 4], [2, 3, 0, 1, 6, 4, 5], ] intended_devs = devs or [ {'id': 0, 'device': 'sda', 'zone': 0, 'region': 1, 'ip': '127.0.0.0', 'port': 6000}, {'id': 1, 'device': 'sda', 'zone': 1, 'region': 2, 'ip': '127.0.0.1', 'port': 6000}, {'id': 2, 'device': 'sda', 'zone': 2, 'region': 3, 'ip': '127.0.0.2', 'port': 6000}, {'id': 3, 'device': 'sda', 'zone': 4, 'region': 2, 'ip': '127.0.0.3', 'port': 6000}, {'id': 4, 'device': 'sda', 'zone': 5, 'region': 1, 'ip': '127.0.0.4', 'port': 6000}, {'id': 5, 'device': 'sda', 'zone': 6, 'region': 3, 'ip': 'fe80::202:b3ff:fe1e:8329', 'port': 6000}, {'id': 6, 'device': 'sda', 'zone': 7, 'region': 1, 'ip': '2001:0db8:85a3:0000:0000:8a2e:0370:7334', 'port': 6000}, ] intended_part_shift = 30 with closing(GzipFile(testgz, 'wb')) as f: pickle.dump( ring.RingData(intended_replica2part2dev_id, intended_devs, intended_part_shift), f) testgz = os.path.join(path, 'object-1.ring.gz') with closing(GzipFile(testgz, 'wb')) as f: pickle.dump( ring.RingData(intended_replica2part2dev_id, intended_devs, intended_part_shift), f) for policy in POLICIES: policy.object_ring = None # force reload return @patch_policies([StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', True)]) class TestObjectReplicator(unittest.TestCase): def setUp(self): utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = '' # recon cache path self.recon_cache = tempfile.mkdtemp() rmtree(self.recon_cache, ignore_errors=1) os.mkdir(self.recon_cache) # Setup a test ring (stolen from common/test_ring.py) self.testdir = tempfile.mkdtemp() self.devices = os.path.join(self.testdir, 'node') rmtree(self.testdir, ignore_errors=1) os.mkdir(self.testdir) os.mkdir(self.devices) self.objects, self.objects_1, self.parts, self.parts_1 = \ self._write_disk_data('sda') _create_test_rings(self.testdir) self.logger = debug_logger('test-replicator') self.conf = dict( bind_ip=_ips()[0], bind_port=6000, swift_dir=self.testdir, devices=self.devices, mount_check='false', timeout='300', stats_interval='1', sync_method='rsync') self._create_replicator() self.ts = make_timestamp_iter() def tearDown(self): self.assertFalse(process_errors) rmtree(self.testdir, ignore_errors=1) rmtree(self.recon_cache, ignore_errors=1) def test_handoff_replication_setting_warnings(self): conf_tests = [ # (config, 
expected_warning) ({}, False), ({'handoff_delete': 'auto'}, False), ({'handoffs_first': 'no'}, False), ({'handoff_delete': '2'}, True), ({'handoffs_first': 'yes'}, True), ({'handoff_delete': '1', 'handoffs_first': 'yes'}, True), ] log_message = 'Handoff only mode is not intended for normal ' \ 'operation, please disable handoffs_first and ' \ 'handoff_delete before the next normal rebalance' for config, expected_warning in conf_tests: self.logger.clear() object_replicator.ObjectReplicator(config, logger=self.logger) warning_log_lines = self.logger.get_lines_for_level('warning') if expected_warning: expected_log_lines = [log_message] else: expected_log_lines = [] self.assertEqual(expected_log_lines, warning_log_lines, 'expected %s != %s for config %r' % ( expected_log_lines, warning_log_lines, config, )) def _write_disk_data(self, disk_name, with_json=False): os.mkdir(os.path.join(self.devices, disk_name)) objects = os.path.join(self.devices, disk_name, diskfile.get_data_dir(POLICIES[0])) objects_1 = os.path.join(self.devices, disk_name, diskfile.get_data_dir(POLICIES[1])) os.mkdir(objects) os.mkdir(objects_1) parts = {} parts_1 = {} for part in ['0', '1', '2', '3']: parts[part] = os.path.join(objects, part) os.mkdir(parts[part]) parts_1[part] = os.path.join(objects_1, part) os.mkdir(parts_1[part]) if with_json: for json_file in ['auditor_status_ZBF.json', 'auditor_status_ALL.json']: for obj_dir in [objects, objects_1]: with open(os.path.join(obj_dir, json_file), 'w'): pass return objects, objects_1, parts, parts_1 def _create_replicator(self): self.replicator = object_replicator.ObjectReplicator(self.conf) self.replicator.logger = self.logger self.replicator._zero_stats() self.replicator.all_devs_info = set() self.df_mgr = diskfile.DiskFileManager(self.conf, self.logger) def test_run_once_no_local_device_in_ring(self): conf = dict(swift_dir=self.testdir, devices=self.devices, bind_ip='1.1.1.1', recon_cache_path=self.recon_cache, mount_check='false', timeout='300', stats_interval='1') replicator = object_replicator.ObjectReplicator(conf, logger=self.logger) replicator.run_once() expected = [ "Can't find itself 1.1.1.1 with port 6000 " "in ring file, not replicating", "Can't find itself 1.1.1.1 with port 6000 " "in ring file, not replicating", ] self.assertEqual(expected, self.logger.get_lines_for_level('error')) def test_run_once(self): conf = dict(swift_dir=self.testdir, devices=self.devices, bind_ip=_ips()[0], recon_cache_path=self.recon_cache, mount_check='false', timeout='300', stats_interval='1') replicator = object_replicator.ObjectReplicator(conf, logger=self.logger) was_connector = object_replicator.http_connect object_replicator.http_connect = mock_http_connect(200) cur_part = '0' df = self.df_mgr.get_diskfile('sda', cur_part, 'a', 'c', 'o', policy=POLICIES[0]) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects, cur_part, data_dir) process_arg_checker = [] ring = replicator.load_object_ring(POLICIES[0]) nodes = [node for node in ring.get_part_nodes(int(cur_part)) if node['ip'] not in _ips()] rsync_mods = tuple(['%s::object/sda/objects/%s' % (node['ip'], cur_part) for node in nodes]) for node in nodes: process_arg_checker.append( (0, '', ['rsync', whole_path_from, rsync_mods])) start = replicator.replication_cycle self.assertGreaterEqual(start, 0) self.assertLess(start, 9) with 
_mock_process(process_arg_checker): replicator.run_once() self.assertEqual(start + 1, replicator.replication_cycle) self.assertFalse(process_errors) self.assertFalse(self.logger.get_lines_for_level('error')) object_replicator.http_connect = was_connector with _mock_process(process_arg_checker): for cycle in range(1, 10): replicator.run_once() self.assertEqual((start + 1 + cycle) % 10, replicator.replication_cycle) # policy 1 def test_run_once_1(self): conf = dict(swift_dir=self.testdir, devices=self.devices, recon_cache_path=self.recon_cache, mount_check='false', timeout='300', stats_interval='1') replicator = object_replicator.ObjectReplicator(conf, logger=self.logger) was_connector = object_replicator.http_connect object_replicator.http_connect = mock_http_connect(200) cur_part = '0' df = self.df_mgr.get_diskfile('sda', cur_part, 'a', 'c', 'o', policy=POLICIES[1]) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects_1, cur_part, data_dir) process_arg_checker = [] ring = replicator.load_object_ring(POLICIES[1]) nodes = [node for node in ring.get_part_nodes(int(cur_part)) if node['ip'] not in _ips()] rsync_mods = tuple(['%s::object/sda/objects-1/%s' % (node['ip'], cur_part) for node in nodes]) for node in nodes: process_arg_checker.append( (0, '', ['rsync', whole_path_from, rsync_mods])) with _mock_process(process_arg_checker): with mock.patch('swift.obj.replicator.whataremyips', side_effect=_ips): replicator.run_once() self.assertFalse(process_errors) self.assertFalse(self.logger.get_lines_for_level('error')) object_replicator.http_connect = was_connector def test_check_ring(self): for pol in POLICIES: obj_ring = self.replicator.load_object_ring(pol) self.assertTrue(self.replicator.check_ring(obj_ring)) orig_check = self.replicator.next_check self.replicator.next_check = orig_check - 30 self.assertTrue(self.replicator.check_ring(obj_ring)) self.replicator.next_check = orig_check orig_ring_time = obj_ring._mtime obj_ring._mtime = orig_ring_time - 30 self.assertTrue(self.replicator.check_ring(obj_ring)) self.replicator.next_check = orig_check - 30 self.assertFalse(self.replicator.check_ring(obj_ring)) def test_collect_jobs_mkdirs_error(self): non_local = {} def blowup_mkdirs(path): non_local['path'] = path raise OSError('Ow!') with mock.patch.object(object_replicator, 'mkdirs', blowup_mkdirs): rmtree(self.objects, ignore_errors=1) object_replicator.mkdirs = blowup_mkdirs self.replicator.collect_jobs() self.assertEqual(self.logger.get_lines_for_level('error'), [ 'ERROR creating %s: ' % non_local['path']]) log_args, log_kwargs = self.logger.log_dict['error'][0] self.assertEqual(str(log_kwargs['exc_info'][1]), 'Ow!') def test_collect_jobs(self): jobs = self.replicator.collect_jobs() jobs_to_delete = [j for j in jobs if j['delete']] jobs_by_pol_part = {} for job in jobs: jobs_by_pol_part[str(int(job['policy'])) + job['partition']] = job self.assertEqual(len(jobs_to_delete), 2) self.assertTrue('1', jobs_to_delete[0]['partition']) self.assertEqual( [node['id'] for node in jobs_by_pol_part['00']['nodes']], [1, 2]) self.assertEqual( [node['id'] for node in jobs_by_pol_part['01']['nodes']], [1, 2, 3]) self.assertEqual( [node['id'] for node in jobs_by_pol_part['02']['nodes']], [2, 3]) self.assertEqual( [node['id'] for node in jobs_by_pol_part['03']['nodes']], [3, 1]) self.assertEqual( [node['id'] for node in 
jobs_by_pol_part['10']['nodes']], [1, 2]) self.assertEqual( [node['id'] for node in jobs_by_pol_part['11']['nodes']], [1, 2, 3]) self.assertEqual( [node['id'] for node in jobs_by_pol_part['12']['nodes']], [2, 3]) self.assertEqual( [node['id'] for node in jobs_by_pol_part['13']['nodes']], [3, 1]) for part in ['00', '01', '02', '03']: for node in jobs_by_pol_part[part]['nodes']: self.assertEqual(node['device'], 'sda') self.assertEqual(jobs_by_pol_part[part]['path'], os.path.join(self.objects, part[1:])) for part in ['10', '11', '12', '13']: for node in jobs_by_pol_part[part]['nodes']: self.assertEqual(node['device'], 'sda') self.assertEqual(jobs_by_pol_part[part]['path'], os.path.join(self.objects_1, part[1:])) def test_collect_jobs_failure_report_with_auditor_stats_json(self): devs = [ {'id': 0, 'device': 'sda', 'zone': 0, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.0', 'replication_port': 6000}, {'id': 1, 'device': 'sdb', 'zone': 1, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.0', 'replication_port': 6000}, {'id': 2, 'device': 'sdc', 'zone': 2, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.1', 'replication_port': 6000}, {'id': 3, 'device': 'sdd', 'zone': 3, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.1', 'replication_port': 6000}, ] objects_sdb, objects_1_sdb, _, _ = \ self._write_disk_data('sdb', with_json=True) objects_sdc, objects_1_sdc, _, _ = \ self._write_disk_data('sdc', with_json=True) objects_sdd, objects_1_sdd, _, _ = \ self._write_disk_data('sdd', with_json=True) _create_test_rings(self.testdir, devs) self.replicator.collect_jobs() self.assertEqual(self.replicator.stats['failure'], 0) @mock.patch('swift.obj.replicator.random.shuffle', side_effect=lambda l: l) def test_collect_jobs_multi_disk(self, mock_shuffle): devs = [ # Two disks on same IP/port {'id': 0, 'device': 'sda', 'zone': 0, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.0', 'replication_port': 6000}, {'id': 1, 'device': 'sdb', 'zone': 1, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.0', 'replication_port': 6000}, # Two disks on same server, different ports {'id': 2, 'device': 'sdc', 'zone': 2, 'region': 2, 'ip': '1.1.1.2', 'port': 1112, 'replication_ip': '127.0.0.1', 'replication_port': 6000}, {'id': 3, 'device': 'sdd', 'zone': 4, 'region': 2, 'ip': '1.1.1.2', 'port': 1112, 'replication_ip': '127.0.0.1', 'replication_port': 6001}, ] objects_sdb, objects_1_sdb, _, _ = self._write_disk_data('sdb') objects_sdc, objects_1_sdc, _, _ = self._write_disk_data('sdc') objects_sdd, objects_1_sdd, _, _ = self._write_disk_data('sdd') _create_test_rings(self.testdir, devs) jobs = self.replicator.collect_jobs() self.assertEqual([mock.call(jobs)], mock_shuffle.mock_calls) jobs_to_delete = [j for j in jobs if j['delete']] self.assertEqual(len(jobs_to_delete), 4) self.assertEqual([ '1', '2', # policy 0; 1 not on sda, 2 not on sdb '1', '2', # policy 1; 1 not on sda, 2 not on sdb ], [j['partition'] for j in jobs_to_delete]) jobs_by_pol_part_dev = {} for job in jobs: # There should be no jobs with a device not in just sda & sdb self.assertTrue(job['device'] in ('sda', 'sdb')) jobs_by_pol_part_dev[ str(int(job['policy'])) + job['partition'] + job['device'] ] = job self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['00sda']['nodes']], [1, 2]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['00sdb']['nodes']], [0, 2]) self.assertEqual([node['id'] for node in 
jobs_by_pol_part_dev['01sda']['nodes']], [1, 2, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['01sdb']['nodes']], [2, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['02sda']['nodes']], [2, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['02sdb']['nodes']], [2, 3, 0]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['03sda']['nodes']], [3, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['03sdb']['nodes']], [3, 0]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['10sda']['nodes']], [1, 2]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['10sdb']['nodes']], [0, 2]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['11sda']['nodes']], [1, 2, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['11sdb']['nodes']], [2, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['12sda']['nodes']], [2, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['12sdb']['nodes']], [2, 3, 0]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['13sda']['nodes']], [3, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['13sdb']['nodes']], [3, 0]) for part in ['00', '01', '02', '03']: self.assertEqual(jobs_by_pol_part_dev[part + 'sda']['path'], os.path.join(self.objects, part[1:])) self.assertEqual(jobs_by_pol_part_dev[part + 'sdb']['path'], os.path.join(objects_sdb, part[1:])) for part in ['10', '11', '12', '13']: self.assertEqual(jobs_by_pol_part_dev[part + 'sda']['path'], os.path.join(self.objects_1, part[1:])) self.assertEqual(jobs_by_pol_part_dev[part + 'sdb']['path'], os.path.join(objects_1_sdb, part[1:])) @mock.patch('swift.obj.replicator.random.shuffle', side_effect=lambda l: l) def test_collect_jobs_multi_disk_diff_ports_normal(self, mock_shuffle): # Normally (servers_per_port=0), replication_ip AND replication_port # are used to determine local ring device entries. Here we show that # with bind_ip='127.0.0.1', bind_port=6000, only "sdc" is local. 
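        # A rough restatement of the rule above (illustrative only, not the
        # replicator's actual helper): with servers_per_port=0 a ring device
        # is treated as local only when both replication fields match the
        # bind address, e.g. something like
        #     dev['replication_ip'] == bind_ip and
        #     dev['replication_port'] == bind_port
        # so for the devs list below only id 2 ("sdc") qualifies.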
devs = [ # Two disks on same IP/port {'id': 0, 'device': 'sda', 'zone': 0, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.0', 'replication_port': 6000}, {'id': 1, 'device': 'sdb', 'zone': 1, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.0', 'replication_port': 6000}, # Two disks on same server, different ports {'id': 2, 'device': 'sdc', 'zone': 2, 'region': 2, 'ip': '1.1.1.2', 'port': 1112, 'replication_ip': '127.0.0.1', 'replication_port': 6000}, {'id': 3, 'device': 'sdd', 'zone': 4, 'region': 2, 'ip': '1.1.1.2', 'port': 1112, 'replication_ip': '127.0.0.1', 'replication_port': 6001}, ] objects_sdb, objects_1_sdb, _, _ = self._write_disk_data('sdb') objects_sdc, objects_1_sdc, _, _ = self._write_disk_data('sdc') objects_sdd, objects_1_sdd, _, _ = self._write_disk_data('sdd') _create_test_rings(self.testdir, devs) self.conf['bind_ip'] = '127.0.0.1' self._create_replicator() jobs = self.replicator.collect_jobs() self.assertEqual([mock.call(jobs)], mock_shuffle.mock_calls) jobs_to_delete = [j for j in jobs if j['delete']] self.assertEqual(len(jobs_to_delete), 2) self.assertEqual([ '3', # policy 0; 3 not on sdc '3', # policy 1; 3 not on sdc ], [j['partition'] for j in jobs_to_delete]) jobs_by_pol_part_dev = {} for job in jobs: # There should be no jobs with a device not sdc self.assertEqual(job['device'], 'sdc') jobs_by_pol_part_dev[ str(int(job['policy'])) + job['partition'] + job['device'] ] = job self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['00sdc']['nodes']], [0, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['01sdc']['nodes']], [1, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['02sdc']['nodes']], [3, 0]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['03sdc']['nodes']], [3, 0, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['10sdc']['nodes']], [0, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['11sdc']['nodes']], [1, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['12sdc']['nodes']], [3, 0]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['13sdc']['nodes']], [3, 0, 1]) for part in ['00', '01', '02', '03']: self.assertEqual(jobs_by_pol_part_dev[part + 'sdc']['path'], os.path.join(objects_sdc, part[1:])) for part in ['10', '11', '12', '13']: self.assertEqual(jobs_by_pol_part_dev[part + 'sdc']['path'], os.path.join(objects_1_sdc, part[1:])) @mock.patch('swift.obj.replicator.random.shuffle', side_effect=lambda l: l) def test_collect_jobs_multi_disk_servers_per_port(self, mock_shuffle): # Normally (servers_per_port=0), replication_ip AND replication_port # are used to determine local ring device entries. Here we show that # with servers_per_port > 0 and bind_ip='127.0.0.1', bind_port=6000, # then both "sdc" and "sdd" are local. 
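        # Illustrative contrast with the previous test: once servers_per_port
        # is non-zero the port no longer has to match, so any device whose
        # replication_ip equals the bind address counts as local; here both
        # id 2 ("sdc", port 6000) and id 3 ("sdd", port 6001) below.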
devs = [ # Two disks on same IP/port {'id': 0, 'device': 'sda', 'zone': 0, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.0', 'replication_port': 6000}, {'id': 1, 'device': 'sdb', 'zone': 1, 'region': 1, 'ip': '1.1.1.1', 'port': 1111, 'replication_ip': '127.0.0.0', 'replication_port': 6000}, # Two disks on same server, different ports {'id': 2, 'device': 'sdc', 'zone': 2, 'region': 2, 'ip': '1.1.1.2', 'port': 1112, 'replication_ip': '127.0.0.1', 'replication_port': 6000}, {'id': 3, 'device': 'sdd', 'zone': 4, 'region': 2, 'ip': '1.1.1.2', 'port': 1112, 'replication_ip': '127.0.0.1', 'replication_port': 6001}, ] objects_sdb, objects_1_sdb, _, _ = self._write_disk_data('sdb') objects_sdc, objects_1_sdc, _, _ = self._write_disk_data('sdc') objects_sdd, objects_1_sdd, _, _ = self._write_disk_data('sdd') _create_test_rings(self.testdir, devs) self.conf['bind_ip'] = '127.0.0.1' self.conf['servers_per_port'] = 1 # diff port ok self._create_replicator() jobs = self.replicator.collect_jobs() self.assertEqual([mock.call(jobs)], mock_shuffle.mock_calls) jobs_to_delete = [j for j in jobs if j['delete']] self.assertEqual(len(jobs_to_delete), 4) self.assertEqual([ '3', '0', # policy 0; 3 not on sdc, 0 not on sdd '3', '0', # policy 1; 3 not on sdc, 0 not on sdd ], [j['partition'] for j in jobs_to_delete]) jobs_by_pol_part_dev = {} for job in jobs: # There should be no jobs with a device not in just sdc & sdd self.assertTrue(job['device'] in ('sdc', 'sdd')) jobs_by_pol_part_dev[ str(int(job['policy'])) + job['partition'] + job['device'] ] = job self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['00sdc']['nodes']], [0, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['00sdd']['nodes']], [0, 1, 2]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['01sdc']['nodes']], [1, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['01sdd']['nodes']], [1, 2]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['02sdc']['nodes']], [3, 0]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['02sdd']['nodes']], [2, 0]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['03sdc']['nodes']], [3, 0, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['03sdd']['nodes']], [0, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['10sdc']['nodes']], [0, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['10sdd']['nodes']], [0, 1, 2]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['11sdc']['nodes']], [1, 3]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['11sdd']['nodes']], [1, 2]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['12sdc']['nodes']], [3, 0]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['12sdd']['nodes']], [2, 0]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['13sdc']['nodes']], [3, 0, 1]) self.assertEqual([node['id'] for node in jobs_by_pol_part_dev['13sdd']['nodes']], [0, 1]) for part in ['00', '01', '02', '03']: self.assertEqual(jobs_by_pol_part_dev[part + 'sdc']['path'], os.path.join(objects_sdc, part[1:])) self.assertEqual(jobs_by_pol_part_dev[part + 'sdd']['path'], os.path.join(objects_sdd, part[1:])) for part in ['10', '11', '12', '13']: self.assertEqual(jobs_by_pol_part_dev[part + 'sdc']['path'], os.path.join(objects_1_sdc, part[1:])) self.assertEqual(jobs_by_pol_part_dev[part + 'sdd']['path'], os.path.join(objects_1_sdd, part[1:])) def test_collect_jobs_handoffs_first(self): 
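        # With handoffs_first enabled, collect_jobs() should put the handoff
        # ("delete") jobs at the front of the list, which is what the two
        # assertions below verify for partition '1'.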
self.replicator.handoffs_first = True jobs = self.replicator.collect_jobs() self.assertTrue(jobs[0]['delete']) self.assertEqual('1', jobs[0]['partition']) def test_handoffs_first_mode_will_process_all_jobs_after_handoffs(self): # make a object in the handoff & primary partition expected_suffix_paths = [] for policy in POLICIES: # primary ts = next(self.ts) df = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'o', policy) with df.create() as w: w.write('asdf') w.put({'X-Timestamp': ts.internal}) w.commit(ts) expected_suffix_paths.append(os.path.dirname(df._datadir)) # handoff ts = next(self.ts) df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy) with df.create() as w: w.write('asdf') w.put({'X-Timestamp': ts.internal}) w.commit(ts) expected_suffix_paths.append(os.path.dirname(df._datadir)) # rsync will be called for all parts we created objects in process_arg_checker = [ # (return_code, stdout, ) (0, '', []), (0, '', []), (0, '', []), # handoff job "first" policy (0, '', []), (0, '', []), (0, '', []), # handoff job "second" policy (0, '', []), (0, '', []), # update job "first" policy (0, '', []), (0, '', []), # update job "second" policy ] # each handoff partition node gets one replicate request for after # rsync (2 * 3), each primary partition with objects gets two # replicate requests (pre-flight and post sync) to each of each # partners (2 * 2 * 2), the 2 remaining empty parts (2 & 3) get a # pre-flight replicate request per node for each storage policy # (2 * 2 * 2) - so 6 + 8 + 8 == 22 replicate_responses = [200] * 22 stub_body = pickle.dumps({}) with _mock_process(process_arg_checker) as rsync_log, \ mock.patch('swift.obj.replicator.whataremyips', side_effect=_ips), \ mocked_http_conn(*replicate_responses, body=stub_body) as conn_log: self.replicator.handoffs_first = True self.replicator.replicate() # all jobs processed! 
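        # i.e. nothing was left for a later pass: when every handoff sync
        # succeeds, handoffs_first does not cut the cycle short and the
        # primary-partition jobs are processed too (contrast with the abort
        # test that follows).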
self.assertEqual(self.replicator.job_count, self.replicator.replication_count) self.assertFalse(self.replicator.handoffs_remaining) # sanity, all the handoffs suffixes we filled in were rsync'd found_rsync_suffix_paths = set() for subprocess_info in rsync_log: local_path, remote_path = subprocess_info['rsync_args'][-2:] found_rsync_suffix_paths.add(local_path) self.assertEqual(set(expected_suffix_paths), found_rsync_suffix_paths) # sanity, all nodes got replicated found_replicate_calls = defaultdict(int) for req in conn_log.requests: self.assertEqual(req['method'], 'REPLICATE') found_replicate_key = ( int(req['headers']['X-Backend-Storage-Policy-Index']), req['path']) found_replicate_calls[found_replicate_key] += 1 expected_replicate_calls = { (0, '/sda/1/a83'): 3, (1, '/sda/1/a83'): 3, (0, '/sda/0'): 2, (0, '/sda/0/a83'): 2, (1, '/sda/0'): 2, (1, '/sda/0/a83'): 2, (0, '/sda/2'): 2, (1, '/sda/2'): 2, (0, '/sda/3'): 2, (1, '/sda/3'): 2, } self.assertEqual(dict(found_replicate_calls), expected_replicate_calls) def test_handoffs_first_mode_will_abort_if_handoffs_remaining(self): # make an object in the handoff partition handoff_suffix_paths = [] for policy in POLICIES: ts = next(self.ts) df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy) with df.create() as w: w.write('asdf') w.put({'X-Timestamp': ts.internal}) w.commit(ts) handoff_suffix_paths.append(os.path.dirname(df._datadir)) process_arg_checker = [ # (return_code, stdout, ) (0, '', []), (1, '', []), (0, '', []), (0, '', []), (0, '', []), (0, '', []), ] stub_body = pickle.dumps({}) with _mock_process(process_arg_checker) as rsync_log, \ mock.patch('swift.obj.replicator.whataremyips', side_effect=_ips), \ mocked_http_conn(*[200] * 5, body=stub_body) as conn_log: self.replicator.handoffs_first = True self.replicator.replicate() # stopped after handoffs! self.assertEqual(1, self.replicator.handoffs_remaining) self.assertEqual(8, self.replicator.job_count) # in addition to the two update_deleted jobs as many as "concurrency" # jobs may have been spawned into the pool before the failed # update_deleted job incremented handoffs_remaining and caused the # handoffs_first check to abort the current pass self.assertLessEqual(self.replicator.replication_count, 2 + self.replicator.concurrency) # sanity, all the handoffs suffixes we filled in were rsync'd found_rsync_suffix_paths = set() expected_replicate_requests = set() for subprocess_info in rsync_log: local_path, remote_path = subprocess_info['rsync_args'][-2:] found_rsync_suffix_paths.add(local_path) if subprocess_info['ret_code'] == 0: node_ip = remote_path.split(':', 1)[0] expected_replicate_requests.add(node_ip) self.assertEqual(set(handoff_suffix_paths), found_rsync_suffix_paths) # sanity, all successful rsync nodes got REPLICATE requests found_replicate_requests = set() self.assertEqual(5, len(conn_log.requests)) for req in conn_log.requests: self.assertEqual(req['method'], 'REPLICATE') found_replicate_requests.add(req['ip']) self.assertEqual(expected_replicate_requests, found_replicate_requests) # and at least one partition got removed! 
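        # Exactly one of the two per-policy handoff suffixes should survive:
        # the rsync call forced to fail above (return code 1) keeps that
        # policy's handoff from being treated as fully synced, so it is not
        # deleted, while the other policy's handoff is.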
remaining_policies = [] for path in handoff_suffix_paths: if os.path.exists(path): policy = diskfile.extract_policy(path) remaining_policies.append(policy) self.assertEqual(len(remaining_policies), 1) remaining_policy = remaining_policies[0] # try again but with handoff_delete allowing for a single failure with _mock_process(process_arg_checker) as rsync_log, \ mock.patch('swift.obj.replicator.whataremyips', side_effect=_ips), \ mocked_http_conn(*[200] * 14, body=stub_body) as conn_log: self.replicator.handoff_delete = 2 self.replicator.replicate() # all jobs processed! self.assertEqual(self.replicator.job_count, self.replicator.replication_count) self.assertFalse(self.replicator.handoffs_remaining) # sanity, all parts got replicated found_replicate_calls = defaultdict(int) for req in conn_log.requests: self.assertEqual(req['method'], 'REPLICATE') found_replicate_key = ( int(req['headers']['X-Backend-Storage-Policy-Index']), req['path']) found_replicate_calls[found_replicate_key] += 1 expected_replicate_calls = { (int(remaining_policy), '/sda/1/a83'): 2, (0, '/sda/0'): 2, (1, '/sda/0'): 2, (0, '/sda/2'): 2, (1, '/sda/2'): 2, (0, '/sda/3'): 2, (1, '/sda/3'): 2, } self.assertEqual(dict(found_replicate_calls), expected_replicate_calls) # and now all handoff partitions have been rebalanced away! removed_paths = set() for path in handoff_suffix_paths: if not os.path.exists(path): removed_paths.add(path) self.assertEqual(removed_paths, set(handoff_suffix_paths)) def test_replicator_skips_bogus_partition_dirs(self): # A directory in the wrong place shouldn't crash the replicator rmtree(self.objects) rmtree(self.objects_1) os.mkdir(self.objects) os.mkdir(self.objects_1) os.mkdir(os.path.join(self.objects, "burrito")) jobs = self.replicator.collect_jobs() self.assertEqual(len(jobs), 0) def test_replicator_removes_zbf(self): # After running xfs_repair, a partition directory could become a # zero-byte file. If this happens, the replicator should clean it # up, log something, and move on to the next partition. # Surprise! Partition dir 1 is actually a zero-byte file. pol_0_part_1_path = os.path.join(self.objects, '1') rmtree(pol_0_part_1_path) with open(pol_0_part_1_path, 'w'): pass self.assertTrue(os.path.isfile(pol_0_part_1_path)) # sanity check # Policy 1's partition dir 1 is also a zero-byte file. pol_1_part_1_path = os.path.join(self.objects_1, '1') rmtree(pol_1_part_1_path) with open(pol_1_part_1_path, 'w'): pass self.assertTrue(os.path.isfile(pol_1_part_1_path)) # sanity check # Don't delete things in collect_jobs(); all the stat() calls would # make replicator startup really slow. 
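        # So immediately after collect_jobs() the zero-byte "partition"
        # files must still be present; the cleanup (and the warning log
        # lines checked further down) only happens during the replication
        # pass itself.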
self.replicator.collect_jobs() self.assertTrue(os.path.exists(pol_0_part_1_path)) self.assertTrue(os.path.exists(pol_1_part_1_path)) # After a replication pass, the files should be gone with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): self.replicator.run_once() self.assertFalse(os.path.exists(pol_0_part_1_path)) self.assertFalse(os.path.exists(pol_1_part_1_path)) self.assertEqual( sorted(self.logger.get_lines_for_level('warning')), [ ('Removing partition directory which was a file: %s' % pol_1_part_1_path), ('Removing partition directory which was a file: %s' % pol_0_part_1_path), ]) def test_delete_partition(self): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects, '1', data_dir) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) ring = self.replicator.load_object_ring(POLICIES[0]) nodes = [node for node in ring.get_part_nodes(1) if node['ip'] not in _ips()] process_arg_checker = [] for node in nodes: rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], 1) process_arg_checker.append( (0, '', ['rsync', whole_path_from, rsync_mod])) with _mock_process(process_arg_checker): self.replicator.replicate() self.assertFalse(os.access(part_path, os.F_OK)) def test_delete_partition_default_sync_method(self): self.replicator.conf.pop('sync_method') with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects, '1', data_dir) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) ring = self.replicator.load_object_ring(POLICIES[0]) nodes = [node for node in ring.get_part_nodes(1) if node['ip'] not in _ips()] process_arg_checker = [] for node in nodes: rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], 1) process_arg_checker.append( (0, '', ['rsync', whole_path_from, rsync_mod])) with _mock_process(process_arg_checker): self.replicator.replicate() self.assertFalse(os.access(part_path, os.F_OK)) def test_delete_partition_ssync_single_region(self): devs = [ {'id': 0, 'device': 'sda', 'zone': 0, 'region': 1, 'ip': '127.0.0.0', 'port': 6000}, {'id': 1, 'device': 'sda', 'zone': 1, 'region': 1, 'ip': '127.0.0.1', 'port': 6000}, {'id': 2, 'device': 'sda', 'zone': 2, 'region': 1, 'ip': '127.0.0.2', 'port': 6000}, {'id': 3, 'device': 'sda', 'zone': 4, 'region': 1, 'ip': '127.0.0.3', 'port': 6000}, {'id': 4, 'device': 'sda', 'zone': 5, 'region': 1, 'ip': '127.0.0.4', 'port': 6000}, {'id': 5, 'device': 'sda', 'zone': 6, 'region': 1, 'ip': 'fe80::202:b3ff:fe1e:8329', 'port': 6000}, {'id': 6, 'device': 'sda', 'zone': 7, 'region': 1, 'ip': '2001:0db8:85a3:0000:0000:8a2e:0370:7334', 'port': 6000}, ] _create_test_rings(self.testdir, devs=devs) self.conf['sync_method'] = 'ssync' self.replicator = object_replicator.ObjectReplicator(self.conf) self.replicator.logger = debug_logger() self.replicator._zero_stats() with 
mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) ts = normalize_timestamp(time.time()) f = open(os.path.join(df._datadir, ts + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') whole_path_from = storage_directory(self.objects, 1, ohash) suffix_dir_path = os.path.dirname(whole_path_from) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) def _fake_ssync(node, job, suffixes, **kwargs): return True, {ohash: ts} self.replicator.sync_method = _fake_ssync self.replicator.replicate() self.assertFalse(os.access(whole_path_from, os.F_OK)) self.assertFalse(os.access(suffix_dir_path, os.F_OK)) self.assertFalse(os.access(part_path, os.F_OK)) def test_delete_partition_1(self): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES[1]) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects_1, '1', data_dir) part_path = os.path.join(self.objects_1, '1') self.assertTrue(os.access(part_path, os.F_OK)) ring = self.replicator.load_object_ring(POLICIES[1]) nodes = [node for node in ring.get_part_nodes(1) if node['ip'] not in _ips()] process_arg_checker = [] for node in nodes: rsync_mod = '%s::object/sda/objects-1/%s' % (node['ip'], 1) process_arg_checker.append( (0, '', ['rsync', whole_path_from, rsync_mod])) with _mock_process(process_arg_checker): self.replicator.replicate() self.assertFalse(os.access(part_path, os.F_OK)) def test_delete_partition_with_failures(self): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects, '1', data_dir) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) ring = self.replicator.load_object_ring(POLICIES[0]) nodes = [node for node in ring.get_part_nodes(1) if node['ip'] not in _ips()] process_arg_checker = [] for i, node in enumerate(nodes): rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], 1) if i == 0: # force one of the rsync calls to fail ret_code = 1 else: ret_code = 0 process_arg_checker.append( (ret_code, '', ['rsync', whole_path_from, rsync_mod])) with _mock_process(process_arg_checker): self.replicator.replicate() # The path should still exist self.assertTrue(os.access(part_path, os.F_OK)) def test_delete_partition_with_handoff_delete(self): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): self.replicator.handoff_delete = 2 df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects, '1', data_dir) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) ring = self.replicator.load_object_ring(POLICIES[0]) 
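            # get_part_nodes(1) yields the primary nodes for the handoff
            # partition; anything whose IP is one of ours (_ips()) is
            # skipped, and the loop below registers one expected rsync per
            # remaining remote primary. With handoff_delete = 2, a single
            # rsync failure should still let the handoff be removed.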
nodes = [node for node in ring.get_part_nodes(1) if node['ip'] not in _ips()] process_arg_checker = [] for i, node in enumerate(nodes): rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], 1) if i == 0: # force one of the rsync calls to fail ret_code = 1 else: ret_code = 0 process_arg_checker.append( (ret_code, '', ['rsync', whole_path_from, rsync_mod])) with _mock_process(process_arg_checker): self.replicator.replicate() self.assertFalse(os.access(part_path, os.F_OK)) def test_delete_partition_with_handoff_delete_failures(self): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): self.replicator.handoff_delete = 2 df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects, '1', data_dir) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) ring = self.replicator.load_object_ring(POLICIES[0]) nodes = [node for node in ring.get_part_nodes(1) if node['ip'] not in _ips()] process_arg_checker = [] for i, node in enumerate(nodes): rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], 1) if i in (0, 1): # force two of the rsync calls to fail ret_code = 1 else: ret_code = 0 process_arg_checker.append( (ret_code, '', ['rsync', whole_path_from, rsync_mod])) with _mock_process(process_arg_checker): self.replicator.replicate() # The file should still exist self.assertTrue(os.access(part_path, os.F_OK)) def test_delete_partition_with_handoff_delete_fail_in_other_region(self): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects, '1', data_dir) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) ring = self.replicator.load_object_ring(POLICIES[0]) nodes = [node for node in ring.get_part_nodes(1) if node['ip'] not in _ips()] process_arg_checker = [] for node in nodes: rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], 1) if node['region'] != 1: # the rsync calls for other region to fail ret_code = 1 else: ret_code = 0 process_arg_checker.append( (ret_code, '', ['rsync', whole_path_from, rsync_mod])) with _mock_process(process_arg_checker): self.replicator.replicate() # The file should still exist self.assertTrue(os.access(part_path, os.F_OK)) def test_delete_partition_override_params(self): df = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) self.replicator.replicate(override_devices=['sdb']) self.assertTrue(os.access(part_path, os.F_OK)) self.replicator.replicate(override_partitions=['9']) self.assertTrue(os.access(part_path, os.F_OK)) self.replicator.replicate(override_devices=['sda'], override_partitions=['1']) self.assertFalse(os.access(part_path, os.F_OK)) def test_delete_policy_override_params(self): df0 = self.df_mgr.get_diskfile('sda', '99', 'a', 'c', 'o', policy=POLICIES.legacy) df1 = self.df_mgr.get_diskfile('sda', '99', 'a', 'c', 'o', policy=POLICIES[1]) 
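        # Partition '99' data is created under both policies; run_once()
        # below is called with policies='1,2,5', and only the policy indexes
        # that actually exist (policy 1 here) are acted on, so just the
        # objects-1 copy should end up removed while policy 0's remains.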
mkdirs(df0._datadir) mkdirs(df1._datadir) pol0_part_path = os.path.join(self.objects, '99') pol1_part_path = os.path.join(self.objects_1, '99') # sanity checks self.assertTrue(os.access(pol0_part_path, os.F_OK)) self.assertTrue(os.access(pol1_part_path, os.F_OK)) # a bogus policy index doesn't bother the replicator any more than a # bogus device or partition does self.replicator.run_once(policies='1,2,5') self.assertFalse(os.access(pol1_part_path, os.F_OK)) self.assertTrue(os.access(pol0_part_path, os.F_OK)) def test_delete_partition_ssync(self): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) ts = normalize_timestamp(time.time()) f = open(os.path.join(df._datadir, ts + '.data'), 'wb') f.write('0') f.close() ohash = hash_path('a', 'c', 'o') whole_path_from = storage_directory(self.objects, 1, ohash) suffix_dir_path = os.path.dirname(whole_path_from) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) self.call_nums = 0 self.conf['sync_method'] = 'ssync' def _fake_ssync(node, job, suffixes, **kwargs): success = True ret_val = {ohash: ts} if self.call_nums == 2: # ssync should return (True, []) only when the second # candidate node has not get the replica yet. success = False ret_val = {} self.call_nums += 1 return success, ret_val self.replicator.sync_method = _fake_ssync self.replicator.replicate() # The file should still exist self.assertTrue(os.access(whole_path_from, os.F_OK)) self.assertTrue(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) self.replicator.replicate() # The file should be deleted at the second replicate call self.assertFalse(os.access(whole_path_from, os.F_OK)) self.assertFalse(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) self.replicator.replicate() # The partition should be deleted at the third replicate call self.assertFalse(os.access(whole_path_from, os.F_OK)) self.assertFalse(os.access(suffix_dir_path, os.F_OK)) self.assertFalse(os.access(part_path, os.F_OK)) del self.call_nums def test_delete_partition_ssync_with_sync_failure(self): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) ts = normalize_timestamp(time.time()) mkdirs(df._datadir) f = open(os.path.join(df._datadir, ts + '.data'), 'wb') f.write('0') f.close() ohash = hash_path('a', 'c', 'o') whole_path_from = storage_directory(self.objects, 1, ohash) suffix_dir_path = os.path.dirname(whole_path_from) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) self.call_nums = 0 self.conf['sync_method'] = 'ssync' def _fake_ssync(node, job, suffixes, **kwags): success = False ret_val = {} if self.call_nums == 2: # ssync should return (True, []) only when the second # candidate node has not get the replica yet. 
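                    # (In these fakes the dict of {object hash: timestamp}
                    # stands in for what the remote node reports as already
                    # in sync; success=False with an empty dict models a
                    # node that never got the data, so the local copy must
                    # not be removed.)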
success = True ret_val = {ohash: ts} self.call_nums += 1 return success, ret_val self.replicator.sync_method = _fake_ssync self.replicator.replicate() # The file should still exist self.assertTrue(os.access(whole_path_from, os.F_OK)) self.assertTrue(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) self.replicator.replicate() # The file should still exist self.assertTrue(os.access(whole_path_from, os.F_OK)) self.assertTrue(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) self.replicator.replicate() # The file should still exist self.assertTrue(os.access(whole_path_from, os.F_OK)) self.assertTrue(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) del self.call_nums def test_delete_objs_ssync_only_when_in_sync(self): self.replicator.logger = debug_logger('test-replicator') with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) ts = normalize_timestamp(time.time()) f = open(os.path.join(df._datadir, ts + '.data'), 'wb') f.write('0') f.close() ohash = hash_path('a', 'c', 'o') whole_path_from = storage_directory(self.objects, 1, ohash) suffix_dir_path = os.path.dirname(whole_path_from) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) self.call_nums = 0 self.conf['sync_method'] = 'ssync' in_sync_objs = {} def _fake_ssync(node, job, suffixes, remote_check_objs=None): self.call_nums += 1 if remote_check_objs is None: # sync job ret_val = {ohash: ts} else: ret_val = in_sync_objs return True, ret_val self.replicator.sync_method = _fake_ssync self.replicator.replicate() self.assertEqual(3, self.call_nums) # The file should still exist self.assertTrue(os.access(whole_path_from, os.F_OK)) self.assertTrue(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) del self.call_nums def test_delete_partition_ssync_with_cleanup_failure(self): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): self.replicator.logger = mock_logger = \ debug_logger('test-replicator') df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) ts = normalize_timestamp(time.time()) f = open(os.path.join(df._datadir, ts + '.data'), 'wb') f.write('0') f.close() ohash = hash_path('a', 'c', 'o') whole_path_from = storage_directory(self.objects, 1, ohash) suffix_dir_path = os.path.dirname(whole_path_from) part_path = os.path.join(self.objects, '1') self.assertTrue(os.access(part_path, os.F_OK)) self.call_nums = 0 self.conf['sync_method'] = 'ssync' def _fake_ssync(node, job, suffixes, **kwargs): success = True ret_val = {ohash: ts} if self.call_nums == 2: # ssync should return (True, []) only when the second # candidate node has not get the replica yet. 
success = False ret_val = {} self.call_nums += 1 return success, ret_val rmdir_func = os.rmdir def raise_exception_rmdir(exception_class, error_no): instance = exception_class() instance.errno = error_no def func(directory): if directory == suffix_dir_path: raise instance else: rmdir_func(directory) return func self.replicator.sync_method = _fake_ssync self.replicator.replicate() # The file should still exist self.assertTrue(os.access(whole_path_from, os.F_OK)) self.assertTrue(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) # Fail with ENOENT with mock.patch('os.rmdir', raise_exception_rmdir(OSError, ENOENT)): self.replicator.replicate() self.assertFalse(mock_logger.get_lines_for_level('error')) self.assertFalse(os.access(whole_path_from, os.F_OK)) self.assertTrue(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) # Fail with ENOTEMPTY with mock.patch('os.rmdir', raise_exception_rmdir(OSError, ENOTEMPTY)): self.replicator.replicate() self.assertFalse(mock_logger.get_lines_for_level('error')) self.assertFalse(os.access(whole_path_from, os.F_OK)) self.assertTrue(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) # Fail with ENOTDIR with mock.patch('os.rmdir', raise_exception_rmdir(OSError, ENOTDIR)): self.replicator.replicate() self.assertEqual(len(mock_logger.get_lines_for_level('error')), 1) self.assertFalse(os.access(whole_path_from, os.F_OK)) self.assertTrue(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) # Finally we can cleanup everything self.replicator.replicate() self.assertFalse(os.access(whole_path_from, os.F_OK)) self.assertFalse(os.access(suffix_dir_path, os.F_OK)) self.assertTrue(os.access(part_path, os.F_OK)) self.replicator.replicate() self.assertFalse(os.access(whole_path_from, os.F_OK)) self.assertFalse(os.access(suffix_dir_path, os.F_OK)) self.assertFalse(os.access(part_path, os.F_OK)) def test_run_once_recover_from_failure(self): conf = dict(swift_dir=self.testdir, devices=self.devices, bind_ip=_ips()[0], mount_check='false', timeout='300', stats_interval='1') replicator = object_replicator.ObjectReplicator(conf) was_connector = object_replicator.http_connect try: object_replicator.http_connect = mock_http_connect(200) # Write some files into '1' and run replicate- they should be moved # to the other partitions and then node should get deleted. 
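            # One successful rsync is expected per remote primary registered
            # in process_arg_checker below; afterwards partition '1' should
            # no longer have a hashes file while '0', '2' and '3' do, which
            # is what the loop of assertEqual calls at the end checks.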
cur_part = '1' df = self.df_mgr.get_diskfile('sda', cur_part, 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects, cur_part, data_dir) ring = replicator.load_object_ring(POLICIES[0]) process_arg_checker = [] nodes = [node for node in ring.get_part_nodes(int(cur_part)) if node['ip'] not in _ips()] for node in nodes: rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], cur_part) process_arg_checker.append( (0, '', ['rsync', whole_path_from, rsync_mod])) self.assertTrue(os.access(os.path.join(self.objects, '1', data_dir, ohash), os.F_OK)) with _mock_process(process_arg_checker): replicator.run_once() self.assertFalse(process_errors) for i, result in [('0', True), ('1', False), ('2', True), ('3', True)]: self.assertEqual(os.access( os.path.join(self.objects, i, diskfile.HASH_FILE), os.F_OK), result) finally: object_replicator.http_connect = was_connector def test_run_once_recover_from_timeout(self): conf = dict(swift_dir=self.testdir, devices=self.devices, bind_ips=_ips()[0], mount_check='false', timeout='300', stats_interval='1') replicator = object_replicator.ObjectReplicator(conf) was_connector = object_replicator.http_connect was_get_hashes = object_replicator.DiskFileManager._get_hashes was_execute = tpool.execute self.get_hash_count = 0 try: def fake_get_hashes(*args, **kwargs): self.get_hash_count += 1 if self.get_hash_count == 3: # raise timeout on last call to get hashes raise Timeout() return 2, {'abc': 'def'} def fake_exc(tester, *args, **kwargs): if 'Error syncing partition timeout' in args[0]: tester.i_failed = True self.i_failed = False object_replicator.http_connect = mock_http_connect(200) object_replicator.DiskFileManager._get_hashes = fake_get_hashes replicator.logger.exception = \ lambda *args, **kwargs: fake_exc(self, *args, **kwargs) # Write some files into '1' and run replicate- they should be moved # to the other partitions and then node should get deleted. 
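            # Same shape as the recovery-from-failure test above, except the
            # patched _get_hashes raises Timeout on its third call; the run
            # should still finish with process_errors empty and without
            # tripping the i_failed flag set via the fake exception logger.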
cur_part = '1' df = self.df_mgr.get_diskfile('sda', cur_part, 'a', 'c', 'o', policy=POLICIES.legacy) mkdirs(df._datadir) f = open(os.path.join(df._datadir, normalize_timestamp(time.time()) + '.data'), 'wb') f.write('1234567890') f.close() ohash = hash_path('a', 'c', 'o') data_dir = ohash[-3:] whole_path_from = os.path.join(self.objects, cur_part, data_dir) process_arg_checker = [] ring = replicator.load_object_ring(POLICIES[0]) nodes = [node for node in ring.get_part_nodes(int(cur_part)) if node['ip'] not in _ips()] for node in nodes: rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], cur_part) process_arg_checker.append( (0, '', ['rsync', whole_path_from, rsync_mod])) self.assertTrue(os.access(os.path.join(self.objects, '1', data_dir, ohash), os.F_OK)) with _mock_process(process_arg_checker): replicator.run_once() self.assertFalse(process_errors) self.assertFalse(self.i_failed) finally: object_replicator.http_connect = was_connector object_replicator.DiskFileManager._get_hashes = was_get_hashes tpool.execute = was_execute def test_run(self): with _mock_process([(0, '')] * 100): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): self.replicator.replicate() def test_run_withlog(self): with _mock_process([(0, "stuff in log")] * 100): with mock.patch('swift.obj.replicator.http_connect', mock_http_connect(200)): self.replicator.replicate() def test_sync_just_calls_sync_method(self): self.replicator.sync_method = mock.MagicMock() self.replicator.sync('node', 'job', 'suffixes') self.replicator.sync_method.assert_called_once_with( 'node', 'job', 'suffixes') @mock.patch('swift.obj.replicator.tpool_reraise') @mock.patch('swift.obj.replicator.http_connect', autospec=True) @mock.patch('swift.obj.replicator._do_listdir') def test_update(self, mock_do_listdir, mock_http, mock_tpool_reraise): def set_default(self): self.replicator.suffix_count = 0 self.replicator.suffix_sync = 0 self.replicator.suffix_hash = 0 self.replicator.replication_count = 0 self.replicator.partition_times = [] self.headers = {'Content-Length': '0', 'user-agent': 'object-replicator %s' % os.getpid()} self.replicator.logger = mock_logger = mock.MagicMock() mock_tpool_reraise.return_value = (0, {}) all_jobs = self.replicator.collect_jobs() jobs = [job for job in all_jobs if not job['delete']] mock_http.return_value = answer = mock.MagicMock() answer.getresponse.return_value = resp = mock.MagicMock() # Check incorrect http_connect with status 507 and # count of attempts and call args resp.status = 507 error = '%(ip)s/%(device)s responded as unmounted' expect = 'Error syncing partition' expected_listdir_calls = [ mock.call(int(job['partition']), self.replicator.replication_cycle) for job in jobs] do_listdir_results = [False, False, True, False, True, False] mock_do_listdir.side_effect = do_listdir_results expected_tpool_calls = [ mock.call(self.replicator._diskfile_mgr._get_hashes, job['path'], do_listdir=do_listdir, reclaim_age=self.replicator.reclaim_age) for job, do_listdir in zip(jobs, do_listdir_results) ] for job in jobs: set_default(self) ring = job['policy'].object_ring self.headers['X-Backend-Storage-Policy-Index'] = int(job['policy']) self.replicator.update(job) self.assertTrue(error in mock_logger.error.call_args[0][0]) self.assertTrue(expect in mock_logger.exception.call_args[0][0]) self.assertEqual(len(self.replicator.partition_times), 1) self.assertEqual(mock_http.call_count, len(ring._devs) - 1) reqs = [] for node in job['nodes']: reqs.append(mock.call(node['ip'], node['port'], 
node['device'], job['partition'], 'REPLICATE', '', headers=self.headers)) if job['partition'] == '0': self.assertEqual(self.replicator.suffix_hash, 0) mock_http.assert_has_calls(reqs, any_order=True) mock_http.reset_mock() mock_logger.reset_mock() mock_do_listdir.assert_has_calls(expected_listdir_calls) mock_tpool_reraise.assert_has_calls(expected_tpool_calls) mock_do_listdir.side_effect = None mock_do_listdir.return_value = False # Check incorrect http_connect with status 400 != HTTP_OK resp.status = 400 error = 'Invalid response %(resp)s from %(ip)s' for job in jobs: set_default(self) self.replicator.update(job) self.assertTrue(error in mock_logger.error.call_args[0][0]) self.assertEqual(len(self.replicator.partition_times), 1) mock_logger.reset_mock() # Check successful http_connection and exception with # incorrect pickle.loads(resp.read()) resp.status = 200 expect = 'Error syncing with node:' for job in jobs: set_default(self) self.replicator.update(job) self.assertTrue(expect in mock_logger.exception.call_args[0][0]) self.assertEqual(len(self.replicator.partition_times), 1) mock_logger.reset_mock() # Check successful http_connection and correct # pickle.loads(resp.read()) for non local node resp.status = 200 local_job = None resp.read.return_value = pickle.dumps({}) for job in jobs: set_default(self) # limit local job to policy 0 for simplicity if job['partition'] == '0' and int(job['policy']) == 0: local_job = job.copy() continue self.replicator.update(job) self.assertEqual(mock_logger.exception.call_count, 0) self.assertEqual(mock_logger.error.call_count, 0) self.assertEqual(len(self.replicator.partition_times), 1) self.assertEqual(self.replicator.suffix_hash, 0) self.assertEqual(self.replicator.suffix_sync, 0) self.assertEqual(self.replicator.suffix_count, 0) mock_logger.reset_mock() # Check successful http_connect and sync for local node mock_tpool_reraise.return_value = (1, {'a83': 'ba47fd314242ec8c' '7efb91f5d57336e4'}) resp.read.return_value = pickle.dumps({'a83': 'c130a2c17ed45102a' 'ada0f4eee69494ff'}) set_default(self) self.replicator.sync = fake_func = \ mock.MagicMock(return_value=(True, [])) self.replicator.update(local_job) reqs = [] for node in local_job['nodes']: reqs.append(mock.call(node, local_job, ['a83'])) fake_func.assert_has_calls(reqs, any_order=True) self.assertEqual(fake_func.call_count, 2) self.assertEqual(self.replicator.replication_count, 1) self.assertEqual(self.replicator.suffix_sync, 2) self.assertEqual(self.replicator.suffix_hash, 1) self.assertEqual(self.replicator.suffix_count, 1) # Efficient Replication Case set_default(self) self.replicator.sync = fake_func = \ mock.MagicMock(return_value=(True, [])) all_jobs = self.replicator.collect_jobs() job = None for tmp in all_jobs: if tmp['partition'] == '3': job = tmp break # The candidate nodes to replicate (i.e. 
dev1 and dev3) # belong to another region self.replicator.update(job) self.assertEqual(fake_func.call_count, 1) self.assertEqual(self.replicator.replication_count, 1) self.assertEqual(self.replicator.suffix_sync, 1) self.assertEqual(self.replicator.suffix_hash, 1) self.assertEqual(self.replicator.suffix_count, 1) mock_http.reset_mock() mock_logger.reset_mock() # test for replication params on policy 0 only repl_job = local_job.copy() for node in repl_job['nodes']: node['replication_ip'] = '127.0.0.11' node['replication_port'] = '6011' set_default(self) # with only one set of headers make sure we specify index 0 here # as otherwise it may be different from earlier tests self.headers['X-Backend-Storage-Policy-Index'] = 0 self.replicator.update(repl_job) reqs = [] for node in repl_job['nodes']: reqs.append(mock.call(node['replication_ip'], node['replication_port'], node['device'], repl_job['partition'], 'REPLICATE', '', headers=self.headers)) reqs.append(mock.call(node['replication_ip'], node['replication_port'], node['device'], repl_job['partition'], 'REPLICATE', '/a83', headers=self.headers)) mock_http.assert_has_calls(reqs, any_order=True) def test_rsync_compress_different_region(self): self.assertEqual(self.replicator.sync_method, self.replicator.rsync) jobs = self.replicator.collect_jobs() _m_rsync = mock.Mock(return_value=0) _m_os_path_exists = mock.Mock(return_value=True) with mock.patch.object(self.replicator, '_rsync', _m_rsync): with mock.patch('os.path.exists', _m_os_path_exists): for job in jobs: self.assertTrue('region' in job) for node in job['nodes']: for rsync_compress in (True, False): self.replicator.rsync_compress = rsync_compress ret = \ self.replicator.sync(node, job, ['fake_suffix']) self.assertTrue(ret) if node['region'] != job['region']: if rsync_compress: # --compress arg should be passed to rsync # binary only when rsync_compress option is # enabled AND destination node is in a # different region self.assertTrue('--compress' in _m_rsync.call_args[0][0]) else: self.assertFalse('--compress' in _m_rsync.call_args[0][0]) else: self.assertFalse('--compress' in _m_rsync.call_args[0][0]) self.assertEqual( _m_os_path_exists.call_args_list[-1][0][0], os.path.join(job['path'], 'fake_suffix')) self.assertEqual( _m_os_path_exists.call_args_list[-2][0][0], os.path.join(job['path'])) def test_do_listdir(self): # Test if do_listdir is enabled for every 10th partition to rehash # First number is the number of partitions in the job, list entries # are the expected partition numbers per run test_data = { 9: [1, 0, 1, 1, 1, 1, 1, 1, 1, 1], 29: [3, 2, 3, 3, 3, 3, 3, 3, 3, 3], 111: [12, 11, 11, 11, 11, 11, 11, 11, 11, 11]} for partitions, expected in test_data.items(): seen = [] for phase in range(10): invalidated = 0 for partition in range(partitions): if object_replicator._do_listdir(partition, phase): seen.append(partition) invalidated += 1 # Every 10th partition is seen after each phase self.assertEqual(expected[phase], invalidated) # After 10 cycles every partition is seen exactly once self.assertEqual(sorted(range(partitions)), sorted(seen)) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/cli/0000775000567000056710000000000013024044470016521 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/cli/__init__.py0000664000567000056710000000000013024044352020617 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/cli/test_ring_builder_analyzer.py0000664000567000056710000002350013024044354024505 0ustar jenkinsjenkins00000000000000#! 
/usr/bin/env python # Copyright (c) 2015 Samuel Merritt # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import json import mock import unittest from StringIO import StringIO from test.unit import with_tempdir from swift.cli.ring_builder_analyzer import parse_scenario, run_scenario class TestRunScenario(unittest.TestCase): @with_tempdir def test_it_runs(self, tempdir): builder_path = os.path.join(tempdir, 'test.builder') scenario = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0, 'rounds': [[['add', 'r1z2-3.4.5.6:7/sda8', 100], ['add', 'z2-3.4.5.6:7/sda9', 200], ['add', 'z2-3.4.5.6:7/sda10', 200], ['add', 'z2-3.4.5.6:7/sda11', 200]], [['set_weight', 0, 150]], [['remove', 1]], [['save', builder_path]]]} parsed = parse_scenario(json.dumps(scenario)) fake_stdout = StringIO() with mock.patch('sys.stdout', fake_stdout): run_scenario(parsed) # Just test that it produced some output as it ran; the fact that # this doesn't crash and produces output that resembles something # useful is good enough. self.assertTrue('Rebalance' in fake_stdout.getvalue()) self.assertTrue(os.path.exists(builder_path)) class TestParseScenario(unittest.TestCase): def test_good(self): scenario = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0, 'rounds': [[['add', 'r1z2-3.4.5.6:7/sda8', 100], ['add', 'z2-3.4.5.6:7/sda9', 200]], [['set_weight', 0, 150]], [['remove', 1]]]} parsed = parse_scenario(json.dumps(scenario)) self.assertEqual(parsed['replicas'], 3) self.assertEqual(parsed['part_power'], 8) self.assertEqual(parsed['random_seed'], 123) self.assertEqual(parsed['overload'], 0) self.assertEqual(parsed['rounds'], [ [['add', {'device': 'sda8', 'ip': '3.4.5.6', 'meta': '', 'port': 7, 'region': 1, 'replication_ip': '3.4.5.6', 'replication_port': 7, 'weight': 100.0, 'zone': 2}], ['add', {'device': u'sda9', 'ip': u'3.4.5.6', 'meta': '', 'port': 7, 'region': 1, 'replication_ip': '3.4.5.6', 'replication_port': 7, 'weight': 200.0, 'zone': 2}]], [['set_weight', 0, 150.0]], [['remove', 1]]]) # The rest of this test class is just a catalog of the myriad ways that # the input can be malformed. 
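    # Each case below starts from the known-good scenario, breaks exactly
    # one aspect of it (a missing or mistyped key, an out-of-range value,
    # or a malformed round/command), and checks that parse_scenario
    # rejects it with ValueError.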
def test_invalid_json(self): self.assertRaises(ValueError, parse_scenario, "{") def test_json_not_object(self): self.assertRaises(ValueError, parse_scenario, "[]") self.assertRaises(ValueError, parse_scenario, "\"stuff\"") def test_bad_replicas(self): working_scenario = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0, 'rounds': [[['add', 'r1z2-3.4.5.6:7/sda8', 100]]]} busted = dict(working_scenario) del busted['replicas'] self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(working_scenario, replicas='blahblah') self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(working_scenario, replicas=-1) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) def test_bad_part_power(self): working_scenario = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0, 'rounds': [[['add', 'r1z2-3.4.5.6:7/sda8', 100]]]} busted = dict(working_scenario) del busted['part_power'] self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(working_scenario, part_power='blahblah') self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(working_scenario, part_power=0) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(working_scenario, part_power=33) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) def test_bad_random_seed(self): working_scenario = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0, 'rounds': [[['add', 'r1z2-3.4.5.6:7/sda8', 100]]]} busted = dict(working_scenario) del busted['random_seed'] self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(working_scenario, random_seed='blahblah') self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) def test_bad_overload(self): working_scenario = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0, 'rounds': [[['add', 'r1z2-3.4.5.6:7/sda8', 100]]]} busted = dict(working_scenario) del busted['overload'] self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(working_scenario, overload='blahblah') self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(working_scenario, overload=-0.01) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) def test_bad_rounds(self): base = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0} self.assertRaises(ValueError, parse_scenario, json.dumps(base)) busted = dict(base, rounds={}) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(base, rounds=[{}]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) busted = dict(base, rounds=[[['bork']]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) def test_bad_add(self): base = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0} # no dev busted = dict(base, rounds=[[['add']]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) # no weight busted = dict(base, rounds=[[['add', 'r1z2-1.2.3.4:6000/d7']]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) # too many fields busted = dict(base, rounds=[[['add', 'r1z2-1.2.3.4:6000/d7', 1, 2]]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) # can't parse busted = dict(base, rounds=[[['add', 'not a good value', 100]]]) # N.B. 
the ValueError's coming out of ring.utils.parse_add_value # are already pretty good expected = "Invalid device specifier (round 0, command 0): " \ "Invalid add value: not a good value" try: parse_scenario(json.dumps(busted)) except ValueError as err: self.assertEqual(str(err), expected) # negative weight busted = dict(base, rounds=[[['add', 'r1z2-1.2.3.4:6000/d7', -1]]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) def test_bad_remove(self): base = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0} # no dev busted = dict(base, rounds=[[['remove']]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) # bad dev id busted = dict(base, rounds=[[['remove', 'not an int']]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) # too many fields busted = dict(base, rounds=[[['remove', 1, 2]]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) def test_bad_set_weight(self): base = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0} # no dev busted = dict(base, rounds=[[['set_weight']]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) # no weight busted = dict(base, rounds=[[['set_weight', 0]]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) # bad dev id busted = dict(base, rounds=[[['set_weight', 'not an int', 90]]]) expected = "Invalid device ID in set_weight (round 0, command 0): " \ "invalid literal for int() with base 10: 'not an int'" try: parse_scenario(json.dumps(busted)) except ValueError as e: self.assertEqual(str(e), expected) # negative weight busted = dict(base, rounds=[[['set_weight', 1, -1]]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) # bogus weight busted = dict(base, rounds=[[['set_weight', 1, 'bogus']]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) def test_bad_save(self): base = { 'replicas': 3, 'part_power': 8, 'random_seed': 123, 'overload': 0} # no builder name busted = dict(base, rounds=[[['save']]]) self.assertRaises(ValueError, parse_scenario, json.dumps(busted)) swift-2.7.1/test/unit/cli/test_form_signature.py0000664000567000056710000000754713024044354023174 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2014 Samuel Merritt # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import hashlib import hmac import mock from six import StringIO import unittest from swift.cli import form_signature class TestFormSignature(unittest.TestCase): def test_prints_signature(self): the_time = 1406143563.020043 key = 'secret squirrel' expires = 3600 path = '/v1/a/c/o' redirect = 'https://example.com/done.html' max_file_size = str(int(1024 * 1024 * 1024 * 3.14159)) # π GiB max_file_count = '3' expected_signature = hmac.new( key, "\n".join(( path, redirect, max_file_size, max_file_count, str(int(the_time + expires)))), hashlib.sha1).hexdigest() out = StringIO() with mock.patch('swift.cli.form_signature.time', lambda: the_time): with mock.patch('sys.stdout', out): exitcode = form_signature.main([ '/path/to/swift-form-signature', path, redirect, max_file_size, max_file_count, str(expires), key]) self.assertEqual(exitcode, 0) self.assertTrue("Signature: %s" % expected_signature in out.getvalue()) self.assertTrue("Expires: %d" % (the_time + expires,) in out.getvalue()) sig_input = ('' % expected_signature) self.assertTrue(sig_input in out.getvalue()) def test_too_few_args(self): out = StringIO() with mock.patch('sys.stdout', out): exitcode = form_signature.main([ '/path/to/swift-form-signature', '/v1/a/c/o', '', '12', '34', '3600']) self.assertNotEqual(exitcode, 0) usage = 'Syntax: swift-form-signature ' self.assertTrue(usage in out.getvalue()) def test_invalid_filesize_arg(self): out = StringIO() key = 'secret squirrel' with mock.patch('sys.stdout', out): exitcode = form_signature.main([ '/path/to/swift-form-signature', '/v1/a/c/o', '', '-1', '34', '3600', key]) self.assertNotEqual(exitcode, 0) def test_invalid_filecount_arg(self): out = StringIO() key = 'secret squirrel' with mock.patch('sys.stdout', out): exitcode = form_signature.main([ '/path/to/swift-form-signature', '/v1/a/c/o', '', '12', '-34', '3600', key]) self.assertNotEqual(exitcode, 0) def test_invalid_path_arg(self): out = StringIO() key = 'secret squirrel' with mock.patch('sys.stdout', out): exitcode = form_signature.main([ '/path/to/swift-form-signature', '/v1/a/', '', '12', '34', '3600', key]) self.assertNotEqual(exitcode, 0) def test_invalid_seconds_arg(self): out = StringIO() key = 'secret squirrel' with mock.patch('sys.stdout', out): exitcode = form_signature.main([ '/path/to/swift-form-signature', '/v1/a/c/o', '', '12', '34', '-922337203685477580799999999999999', key]) self.assertNotEqual(exitcode, 0) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/cli/test_ringbuilder.py0000664000567000056710000024603113024044354022447 0ustar jenkinsjenkins00000000000000# Copyright (c) 2014 Christian Schwede # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import logging import mock import os import re import six import tempfile import unittest import uuid import shlex import shutil from swift.cli import ringbuilder from swift.cli.ringbuilder import EXIT_SUCCESS, EXIT_WARNING, EXIT_ERROR from swift.common import exceptions from swift.common.ring import RingBuilder class RunSwiftRingBuilderMixin(object): def run_srb(self, *argv, **kwargs): if len(argv) == 1 and isinstance(argv[0], six.string_types): # convert a single string to a list argv = shlex.split(argv[0]) mock_stdout = six.StringIO() mock_stderr = six.StringIO() if 'exp_results' in kwargs: exp_results = kwargs['exp_results'] argv = argv[:-1] else: exp_results = None srb_args = ["", self.tempfile] + [str(s) for s in argv] try: with mock.patch("sys.stdout", mock_stdout): with mock.patch("sys.stderr", mock_stderr): ringbuilder.main(srb_args) except SystemExit as err: valid_exit_codes = None if exp_results is not None and 'valid_exit_codes' in exp_results: valid_exit_codes = exp_results['valid_exit_codes'] else: valid_exit_codes = (0, 1) # (success, warning) if err.code not in valid_exit_codes: msg = 'Unexpected exit status %s\n' % err.code msg += 'STDOUT:\n%s\nSTDERR:\n%s\n' % ( mock_stdout.getvalue(), mock_stderr.getvalue()) self.fail(msg) return (mock_stdout.getvalue(), mock_stderr.getvalue()) class TestCommands(unittest.TestCase, RunSwiftRingBuilderMixin): def __init__(self, *args, **kwargs): super(TestCommands, self).__init__(*args, **kwargs) # List of search values for various actions # These should all match the first device in the sample ring # (see below) but not the second device self.search_values = ["d0", "/sda1", "r0", "z0", "z0-127.0.0.1", "127.0.0.1", "z0:6000", ":6000", "R127.0.0.1", "127.0.0.1R127.0.0.1", "R:6000", "_some meta data"] def setUp(self): self.tmpdir = tempfile.mkdtemp() tmpf = tempfile.NamedTemporaryFile(dir=self.tmpdir) self.tempfile = self.tmpfile = tmpf.name def tearDown(self): try: shutil.rmtree(self.tmpdir, True) except OSError: pass def create_sample_ring(self, part_power=6): """ Create a sample ring with four devices At least four devices are needed to test removing a device, since having less devices than replicas is not allowed. 
""" # Ensure there is no existing test builder file because # create_sample_ring() might be used more than once in a single test try: os.remove(self.tmpfile) except OSError: pass ring = RingBuilder(part_power, 3, 1) ring.add_dev({'weight': 100.0, 'region': 0, 'zone': 0, 'ip': '127.0.0.1', 'port': 6000, 'device': 'sda1', 'meta': 'some meta data', }) ring.add_dev({'weight': 100.0, 'region': 1, 'zone': 1, 'ip': '127.0.0.2', 'port': 6001, 'device': 'sda2' }) ring.add_dev({'weight': 100.0, 'region': 2, 'zone': 2, 'ip': '127.0.0.3', 'port': 6002, 'device': 'sdc3' }) ring.add_dev({'weight': 100.0, 'region': 3, 'zone': 3, 'ip': '127.0.0.4', 'port': 6003, 'device': 'sdd4' }) ring.save(self.tmpfile) def assertSystemExit(self, return_code, func, *argv): with self.assertRaises(SystemExit) as cm: func(*argv) self.assertEqual(return_code, cm.exception.code) def test_parse_search_values_old_format(self): # Test old format argv = ["d0r0z0-127.0.0.1:6000R127.0.0.1:6000/sda1_some meta data"] search_values = ringbuilder._parse_search_values(argv) self.assertEqual(search_values['id'], 0) self.assertEqual(search_values['region'], 0) self.assertEqual(search_values['zone'], 0) self.assertEqual(search_values['ip'], '127.0.0.1') self.assertEqual(search_values['port'], 6000) self.assertEqual(search_values['replication_ip'], '127.0.0.1') self.assertEqual(search_values['replication_port'], 6000) self.assertEqual(search_values['device'], 'sda1') self.assertEqual(search_values['meta'], 'some meta data') def test_parse_search_values_new_format(self): # Test new format argv = ["--id", "0", "--region", "0", "--zone", "0", "--ip", "127.0.0.1", "--port", "6000", "--replication-ip", "127.0.0.1", "--replication-port", "6000", "--device", "sda1", "--meta", "some meta data", "--weight", "100"] search_values = ringbuilder._parse_search_values(argv) self.assertEqual(search_values['id'], 0) self.assertEqual(search_values['region'], 0) self.assertEqual(search_values['zone'], 0) self.assertEqual(search_values['ip'], '127.0.0.1') self.assertEqual(search_values['port'], 6000) self.assertEqual(search_values['replication_ip'], '127.0.0.1') self.assertEqual(search_values['replication_port'], 6000) self.assertEqual(search_values['device'], 'sda1') self.assertEqual(search_values['meta'], 'some meta data') self.assertEqual(search_values['weight'], 100) def test_parse_search_values_number_of_arguments(self): # Test Number of arguments abnormal argv = ["--region", "2", "test"] self.assertSystemExit( EXIT_ERROR, ringbuilder._parse_search_values, argv) def test_find_parts(self): rb = RingBuilder(8, 3, 0) rb.add_dev({'id': 0, 'region': 1, 'zone': 0, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'}) rb.add_dev({'id': 3, 'region': 1, 'zone': 0, 'weight': 100, 'ip': '127.0.0.1', 'port': 10000, 'device': 'sdb1'}) rb.add_dev({'id': 1, 'region': 1, 'zone': 1, 'weight': 100, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'}) rb.add_dev({'id': 4, 'region': 1, 'zone': 1, 'weight': 100, 'ip': '127.0.0.1', 'port': 10001, 'device': 'sdb1'}) rb.add_dev({'id': 2, 'region': 1, 'zone': 2, 'weight': 100, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'}) rb.add_dev({'id': 5, 'region': 1, 'zone': 2, 'weight': 100, 'ip': '127.0.0.1', 'port': 10002, 'device': 'sdb1'}) rb.rebalance() rb.add_dev({'id': 6, 'region': 2, 'zone': 1, 'weight': 10, 'ip': '127.0.0.1', 'port': 10004, 'device': 'sda1'}) rb.pretend_min_part_hours_passed() rb.rebalance() ringbuilder.builder = rb sorted_partition_count = ringbuilder._find_parts( rb.search_devs({'ip': 
'127.0.0.1'})) # Expect 256 partitions in the output self.assertEqual(256, len(sorted_partition_count)) # Each partitions should have 3 replicas for partition, count in sorted_partition_count: self.assertEqual( 3, count, "Partition %d has only %d replicas" % (partition, count)) def test_parse_list_parts_values_number_of_arguments(self): # Test Number of arguments abnormal argv = ["--region", "2", "test"] self.assertSystemExit( EXIT_ERROR, ringbuilder._parse_list_parts_values, argv) def test_parse_add_values_number_of_arguments(self): # Test Number of arguments abnormal argv = ["--region", "2", "test"] self.assertSystemExit( EXIT_ERROR, ringbuilder._parse_add_values, argv) def test_set_weight_values_no_devices(self): # Test no devices # _set_weight_values doesn't take argv-like arguments self.assertSystemExit( EXIT_ERROR, ringbuilder._set_weight_values, [], 100) def test_parse_set_weight_values_number_of_arguments(self): # Test Number of arguments abnormal argv = ["r1", "100", "r2"] self.assertSystemExit( EXIT_ERROR, ringbuilder._parse_set_weight_values, argv) argv = ["--region", "2"] self.assertSystemExit( EXIT_ERROR, ringbuilder._parse_set_weight_values, argv) def test_set_info_values_no_devices(self): # Test no devices # _set_info_values doesn't take argv-like arguments self.assertSystemExit( EXIT_ERROR, ringbuilder._set_info_values, [], 100) def test_parse_set_info_values_number_of_arguments(self): # Test Number of arguments abnormal argv = ["r1", "127.0.0.1", "r2"] self.assertSystemExit( EXIT_ERROR, ringbuilder._parse_set_info_values, argv) def test_parse_remove_values_number_of_arguments(self): # Test Number of arguments abnormal argv = ["--region", "2", "test"] self.assertSystemExit( EXIT_ERROR, ringbuilder._parse_remove_values, argv) def test_create_ring(self): argv = ["", self.tmpfile, "create", "6", "3.14159265359", "1"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.part_power, 6) self.assertEqual(ring.replicas, 3.14159265359) self.assertEqual(ring.min_part_hours, 1) def test_create_ring_number_of_arguments(self): # Test missing arguments argv = ["", self.tmpfile, "create"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_add_device_ipv4_old_format(self): self.create_sample_ring() # Test ipv4(old format) argv = ["", self.tmpfile, "add", "r2z3-127.0.0.1:6000/sda3_some meta data", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Check that device was created with given data ring = RingBuilder.load(self.tmpfile) dev = ring.devs[-1] self.assertEqual(dev['region'], 2) self.assertEqual(dev['zone'], 3) self.assertEqual(dev['ip'], '127.0.0.1') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda3') self.assertEqual(dev['weight'], 3.14159265359) self.assertEqual(dev['replication_ip'], '127.0.0.1') self.assertEqual(dev['replication_port'], 6000) self.assertEqual(dev['meta'], 'some meta data') def test_add_duplicate_devices(self): self.create_sample_ring() # Test adding duplicate devices argv = ["", self.tmpfile, "add", "r1z1-127.0.0.1:6000/sda9", "3.14159265359", "r1z1-127.0.0.1:6000/sda9", "2"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_add_device_ipv6_old_format(self): self.create_sample_ring() # Test ipv6(old format) argv = \ ["", self.tmpfile, "add", "r2z3-2001:0000:1234:0000:0000:C1C0:ABCD:0876:6000" "R2::10:7000/sda3_some meta data", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Check that 
device was created with given data ring = RingBuilder.load(self.tmpfile) dev = ring.devs[-1] self.assertEqual(dev['region'], 2) self.assertEqual(dev['zone'], 3) self.assertEqual(dev['ip'], '2001:0:1234::c1c0:abcd:876') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda3') self.assertEqual(dev['weight'], 3.14159265359) self.assertEqual(dev['replication_ip'], '2::10') self.assertEqual(dev['replication_port'], 7000) self.assertEqual(dev['meta'], 'some meta data') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_add_device_ipv4_new_format(self): self.create_sample_ring() # Test ipv4(new format) argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "127.0.0.2", "--port", "6000", "--replication-ip", "127.0.0.2", "--replication-port", "6000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Check that device was created with given data ring = RingBuilder.load(self.tmpfile) dev = ring.devs[-1] self.assertEqual(dev['region'], 2) self.assertEqual(dev['zone'], 3) self.assertEqual(dev['ip'], '127.0.0.2') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda3') self.assertEqual(dev['weight'], 3.14159265359) self.assertEqual(dev['replication_ip'], '127.0.0.2') self.assertEqual(dev['replication_port'], 6000) self.assertEqual(dev['meta'], 'some meta data') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_add_device_ipv6_new_format(self): self.create_sample_ring() # Test ipv6(new format) argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[3001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[3::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Check that device was created with given data ring = RingBuilder.load(self.tmpfile) dev = ring.devs[-1] self.assertEqual(dev['region'], 2) self.assertEqual(dev['zone'], 3) self.assertEqual(dev['ip'], '3001:0:1234::c1c0:abcd:876') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda3') self.assertEqual(dev['weight'], 3.14159265359) self.assertEqual(dev['replication_ip'], '3::10') self.assertEqual(dev['replication_port'], 7000) self.assertEqual(dev['meta'], 'some meta data') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_add_device_domain_new_format(self): self.create_sample_ring() # Test domain name argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Check that device was created with given data ring = RingBuilder.load(self.tmpfile) dev = ring.devs[-1] self.assertEqual(dev['region'], 2) self.assertEqual(dev['zone'], 3) self.assertEqual(dev['ip'], 'test.test.com') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda3') self.assertEqual(dev['weight'], 3.14159265359) self.assertEqual(dev['replication_ip'], 'r.test.com') self.assertEqual(dev['replication_port'], 7000) self.assertEqual(dev['meta'], 'some meta data') # Final check, rebalance and check ring is ok ring.rebalance() 
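# validate() re-checks the partition assignments and per-device partition
# counts after the rebalance; on an inconsistent builder it is expected to
# raise rather than return quietly, so reaching the assertion below means the
# newly added device was integrated cleanly.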
self.assertTrue(ring.validate()) def test_add_device_number_of_arguments(self): # Test Number of arguments abnormal argv = ["", self.tmpfile, "add"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_add_device_already_exists(self): # Test Add a device that already exists argv = ["", self.tmpfile, "add", "r0z0-127.0.0.1:6000/sda1_some meta data", "100"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_add_device_old_missing_region(self): self.create_sample_ring() # Test add device without specifying a region argv = ["", self.tmpfile, "add", "z3-127.0.0.1:6000/sde3_some meta data", "3.14159265359"] exp_results = {'valid_exit_codes': [2]} self.run_srb(*argv, exp_results=exp_results) # Check that ring was created with sane value for region ring = RingBuilder.load(self.tmpfile) dev = ring.devs[-1] self.assertTrue(dev['region'] > 0) def test_remove_device(self): for search_value in self.search_values: self.create_sample_ring() argv = ["", self.tmpfile, "remove", search_value] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that weight was set to 0 dev = ring.devs[0] self.assertEqual(dev['weight'], 0) # Check that device is in list of devices to be removed self.assertEqual(dev['region'], 0) self.assertEqual(dev['zone'], 0) self.assertEqual(dev['ip'], '127.0.0.1') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda1') self.assertEqual(dev['weight'], 0) self.assertEqual(dev['replication_ip'], '127.0.0.1') self.assertEqual(dev['replication_port'], 6000) self.assertEqual(dev['meta'], 'some meta data') # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) self.assertFalse([d for d in ring._remove_devs if d['id'] == 1]) # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_remove_device_ipv4_old_format(self): self.create_sample_ring() # Test ipv4(old format) argv = ["", self.tmpfile, "remove", "d0r0z0-127.0.0.1:6000R127.0.0.1:6000/sda1_some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that weight was set to 0 dev = ring.devs[0] self.assertEqual(dev['weight'], 0) # Check that device is in list of devices to be removed self.assertEqual(dev['region'], 0) self.assertEqual(dev['zone'], 0) self.assertEqual(dev['ip'], '127.0.0.1') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda1') self.assertEqual(dev['weight'], 0) self.assertEqual(dev['replication_ip'], '127.0.0.1') self.assertEqual(dev['replication_port'], 6000) self.assertEqual(dev['meta'], 'some meta data') # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) self.assertFalse([d for d in ring._remove_devs if d['id'] == 1]) # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_remove_device_ipv6_old_format(self): self.create_sample_ring() # add IPV6 argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Test ipv6(old format) argv = ["", self.tmpfile, "remove", "d4r2z3-[2001:0000:1234:0000:0000:C1C0:ABCD:0876]:6000" "R[2::10]:7000/sda3_some meta data"] 
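# The leading "d4" pins the search to device id 4, i.e. the IPv6 device added
# just above (the four sample-ring devices take ids 0-3); bracketed addresses
# and the R<ip>:<port> replication suffix follow the same old-format grammar
# as the IPv4 cases.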
self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that second device in ring is not affected dev = ring.devs[0] self.assertEqual(dev['weight'], 100) self.assertFalse([d for d in ring._remove_devs if d['id'] == 0]) # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) self.assertFalse([d for d in ring._remove_devs if d['id'] == 1]) # Check that weight was set to 0 dev = ring.devs[-1] self.assertEqual(dev['weight'], 0) # Check that device is in list of devices to be removed self.assertEqual(dev['region'], 2) self.assertEqual(dev['zone'], 3) self.assertEqual(dev['ip'], '2001:0:1234::c1c0:abcd:876') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda3') self.assertEqual(dev['weight'], 0) self.assertEqual(dev['replication_ip'], '2::10') self.assertEqual(dev['replication_port'], 7000) self.assertEqual(dev['meta'], 'some meta data') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_remove_device_ipv4_new_format(self): self.create_sample_ring() # Test ipv4(new format) argv = \ ["", self.tmpfile, "remove", "--id", "0", "--region", "0", "--zone", "0", "--ip", "127.0.0.1", "--port", "6000", "--replication-ip", "127.0.0.1", "--replication-port", "6000", "--device", "sda1", "--meta", "some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that weight was set to 0 dev = ring.devs[0] self.assertEqual(dev['weight'], 0) # Check that device is in list of devices to be removed self.assertEqual(dev['region'], 0) self.assertEqual(dev['zone'], 0) self.assertEqual(dev['ip'], '127.0.0.1') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda1') self.assertEqual(dev['weight'], 0) self.assertEqual(dev['replication_ip'], '127.0.0.1') self.assertEqual(dev['replication_port'], 6000) self.assertEqual(dev['meta'], 'some meta data') # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) self.assertFalse([d for d in ring._remove_devs if d['id'] == 1]) # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_remove_device_ipv6_new_format(self): self.create_sample_ring() argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[3001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "8000", "--replication-ip", "[3::10]", "--replication-port", "9000", "--device", "sda30", "--meta", "other meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Test ipv6(new format) argv = \ ["", self.tmpfile, "remove", "--id", "4", "--region", "2", "--zone", "3", "--ip", "[3001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "8000", "--replication-ip", "[3::10]", "--replication-port", "9000", "--device", "sda30", "--meta", "other meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that second device in ring is not affected dev = ring.devs[0] self.assertEqual(dev['weight'], 100) self.assertFalse([d for d in ring._remove_devs if d['id'] == 0]) # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) self.assertFalse([d for d in ring._remove_devs if d['id'] == 1]) # Check that weight was set to 0 dev = ring.devs[-1] self.assertEqual(dev['weight'], 0) # Check that device is in list of devices to be 
removed self.assertEqual(dev['region'], 2) self.assertEqual(dev['zone'], 3) self.assertEqual(dev['ip'], '3001:0:1234::c1c0:abcd:876') self.assertEqual(dev['port'], 8000) self.assertEqual(dev['device'], 'sda30') self.assertEqual(dev['weight'], 0) self.assertEqual(dev['replication_ip'], '3::10') self.assertEqual(dev['replication_port'], 9000) self.assertEqual(dev['meta'], 'other meta data') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_remove_device_domain_new_format(self): self.create_sample_ring() # add domain name argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Test domain name argv = \ ["", self.tmpfile, "remove", "--id", "4", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that second device in ring is not affected dev = ring.devs[0] self.assertEqual(dev['weight'], 100) self.assertFalse([d for d in ring._remove_devs if d['id'] == 0]) # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) self.assertFalse([d for d in ring._remove_devs if d['id'] == 1]) # Check that weight was set to 0 dev = ring.devs[-1] self.assertEqual(dev['weight'], 0) # Check that device is in list of devices to be removed self.assertEqual(dev['region'], 2) self.assertEqual(dev['zone'], 3) self.assertEqual(dev['ip'], 'test.test.com') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['device'], 'sda3') self.assertEqual(dev['weight'], 0) self.assertEqual(dev['replication_ip'], 'r.test.com') self.assertEqual(dev['replication_port'], 7000) self.assertEqual(dev['meta'], 'some meta data') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_remove_device_number_of_arguments(self): self.create_sample_ring() # Test Number of arguments abnormal argv = ["", self.tmpfile, "remove"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_remove_device_no_matching(self): self.create_sample_ring() # Test No matching devices argv = ["", self.tmpfile, "remove", "--ip", "unknown"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_set_weight(self): for search_value in self.search_values: self.create_sample_ring() argv = ["", self.tmpfile, "set_weight", search_value, "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that weight was changed dev = ring.devs[0] self.assertEqual(dev['weight'], 3.14159265359) # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_weight_ipv4_old_format(self): self.create_sample_ring() # Test ipv4(old format) argv = ["", self.tmpfile, "set_weight", "d0r0z0-127.0.0.1:6000R127.0.0.1:6000/sda1_some meta data", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that weight was changed dev = ring.devs[0] 
self.assertEqual(dev['weight'], 3.14159265359) # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_weight_ipv6_old_format(self): self.create_sample_ring() # add IPV6 argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "100"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Test ipv6(old format) argv = ["", self.tmpfile, "set_weight", "d4r2z3-[2001:0000:1234:0000:0000:C1C0:ABCD:0876]:6000" "R[2::10]:7000/sda3_some meta data", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that second device in ring is not affected dev = ring.devs[0] self.assertEqual(dev['weight'], 100) # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) # Check that weight was changed dev = ring.devs[-1] self.assertEqual(dev['weight'], 3.14159265359) # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_weight_ipv4_new_format(self): self.create_sample_ring() # Test ipv4(new format) argv = \ ["", self.tmpfile, "set_weight", "--id", "0", "--region", "0", "--zone", "0", "--ip", "127.0.0.1", "--port", "6000", "--replication-ip", "127.0.0.1", "--replication-port", "6000", "--device", "sda1", "--meta", "some meta data", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that weight was changed dev = ring.devs[0] self.assertEqual(dev['weight'], 3.14159265359) # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_weight_ipv6_new_format(self): self.create_sample_ring() # add IPV6 argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "100"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Test ipv6(new format) argv = \ ["", self.tmpfile, "set_weight", "--id", "4", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that second device in ring is not affected dev = ring.devs[0] self.assertEqual(dev['weight'], 100) # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) # Check that weight was changed dev = ring.devs[-1] self.assertEqual(dev['weight'], 3.14159265359) # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_weight_domain_new_format(self): self.create_sample_ring() # add domain name argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", 
"--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "100"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Test domain name argv = \ ["", self.tmpfile, "set_weight", "--id", "4", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that second device in ring is not affected dev = ring.devs[0] self.assertEqual(dev['weight'], 100) # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['weight'], 100) # Check that weight was changed dev = ring.devs[-1] self.assertEqual(dev['weight'], 3.14159265359) # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_weight_number_of_arguments(self): self.create_sample_ring() # Test Number of arguments abnormal argv = ["", self.tmpfile, "set_weight"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_set_weight_no_matching(self): self.create_sample_ring() # Test No matching devices argv = ["", self.tmpfile, "set_weight", "--ip", "unknown"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_set_info(self): for search_value in self.search_values: self.create_sample_ring() argv = ["", self.tmpfile, "set_info", search_value, "127.0.1.1:8000/sda1_other meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Check that device was created with given data ring = RingBuilder.load(self.tmpfile) dev = ring.devs[0] self.assertEqual(dev['ip'], '127.0.1.1') self.assertEqual(dev['port'], 8000) self.assertEqual(dev['device'], 'sda1') self.assertEqual(dev['meta'], 'other meta data') # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['ip'], '127.0.0.2') self.assertEqual(dev['port'], 6001) self.assertEqual(dev['device'], 'sda2') self.assertEqual(dev['meta'], '') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_info_ipv4_old_format(self): self.create_sample_ring() # Test ipv4(old format) argv = ["", self.tmpfile, "set_info", "d0r0z0-127.0.0.1:6000R127.0.0.1:6000/sda1_some meta data", "127.0.1.1:8000R127.0.1.1:8000/sda10_other meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Check that device was created with given data ring = RingBuilder.load(self.tmpfile) dev = ring.devs[0] self.assertEqual(dev['ip'], '127.0.1.1') self.assertEqual(dev['port'], 8000) self.assertEqual(dev['replication_ip'], '127.0.1.1') self.assertEqual(dev['replication_port'], 8000) self.assertEqual(dev['device'], 'sda10') self.assertEqual(dev['meta'], 'other meta data') # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['ip'], '127.0.0.2') self.assertEqual(dev['port'], 6001) self.assertEqual(dev['device'], 'sda2') self.assertEqual(dev['meta'], '') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_info_ipv6_old_format(self): self.create_sample_ring() # add IPV6 argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] 
self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Test ipv6(old format) argv = ["", self.tmpfile, "set_info", "d4r2z3-[2001:0000:1234:0000:0000:C1C0:ABCD:0876]:6000" "R[2::10]:7000/sda3_some meta data", "[3001:0000:1234:0000:0000:C1C0:ABCD:0876]:8000" "R[3::10]:8000/sda30_other meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that second device in ring is not affected dev = ring.devs[0] self.assertEqual(dev['ip'], '127.0.0.1') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['replication_ip'], '127.0.0.1') self.assertEqual(dev['replication_port'], 6000) self.assertEqual(dev['device'], 'sda1') self.assertEqual(dev['meta'], 'some meta data') # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['ip'], '127.0.0.2') self.assertEqual(dev['port'], 6001) self.assertEqual(dev['device'], 'sda2') self.assertEqual(dev['meta'], '') # Check that device was created with given data dev = ring.devs[-1] self.assertEqual(dev['ip'], '3001:0:1234::c1c0:abcd:876') self.assertEqual(dev['port'], 8000) self.assertEqual(dev['replication_ip'], '3::10') self.assertEqual(dev['replication_port'], 8000) self.assertEqual(dev['device'], 'sda30') self.assertEqual(dev['meta'], 'other meta data') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_info_ipv4_new_format(self): self.create_sample_ring() # Test ipv4(new format) argv = \ ["", self.tmpfile, "set_info", "--id", "0", "--region", "0", "--zone", "0", "--ip", "127.0.0.1", "--port", "6000", "--replication-ip", "127.0.0.1", "--replication-port", "6000", "--device", "sda1", "--meta", "some meta data", "--change-ip", "127.0.2.1", "--change-port", "9000", "--change-replication-ip", "127.0.2.1", "--change-replication-port", "9000", "--change-device", "sda100", "--change-meta", "other meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Check that device was created with given data ring = RingBuilder.load(self.tmpfile) dev = ring.devs[0] self.assertEqual(dev['ip'], '127.0.2.1') self.assertEqual(dev['port'], 9000) self.assertEqual(dev['replication_ip'], '127.0.2.1') self.assertEqual(dev['replication_port'], 9000) self.assertEqual(dev['device'], 'sda100') self.assertEqual(dev['meta'], 'other meta data') # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['ip'], '127.0.0.2') self.assertEqual(dev['port'], 6001) self.assertEqual(dev['device'], 'sda2') self.assertEqual(dev['meta'], '') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_info_ipv6_new_format(self): self.create_sample_ring() # add IPV6 argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Test ipv6(new format) argv = \ ["", self.tmpfile, "set_info", "--id", "4", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--change-ip", "[4001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--change-port", "9000", "--change-replication-ip", "[4::10]", "--change-replication-port", "9000", 
"--change-device", "sda300", "--change-meta", "other meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that second device in ring is not affected dev = ring.devs[0] self.assertEqual(dev['ip'], '127.0.0.1') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['replication_ip'], '127.0.0.1') self.assertEqual(dev['replication_port'], 6000) self.assertEqual(dev['device'], 'sda1') self.assertEqual(dev['meta'], 'some meta data') # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['ip'], '127.0.0.2') self.assertEqual(dev['port'], 6001) self.assertEqual(dev['device'], 'sda2') self.assertEqual(dev['meta'], '') # Check that device was created with given data ring = RingBuilder.load(self.tmpfile) dev = ring.devs[-1] self.assertEqual(dev['ip'], '4001:0:1234::c1c0:abcd:876') self.assertEqual(dev['port'], 9000) self.assertEqual(dev['replication_ip'], '4::10') self.assertEqual(dev['replication_port'], 9000) self.assertEqual(dev['device'], 'sda300') self.assertEqual(dev['meta'], 'other meta data') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_info_domain_new_format(self): self.create_sample_ring() # add domain name argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Test domain name argv = \ ["", self.tmpfile, "set_info", "--id", "4", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--change-ip", "test.test2.com", "--change-port", "9000", "--change-replication-ip", "r.test2.com", "--change-replication-port", "9000", "--change-device", "sda300", "--change-meta", "other meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) # Check that second device in ring is not affected dev = ring.devs[0] self.assertEqual(dev['ip'], '127.0.0.1') self.assertEqual(dev['port'], 6000) self.assertEqual(dev['replication_ip'], '127.0.0.1') self.assertEqual(dev['replication_port'], 6000) self.assertEqual(dev['device'], 'sda1') self.assertEqual(dev['meta'], 'some meta data') # Check that second device in ring is not affected dev = ring.devs[1] self.assertEqual(dev['ip'], '127.0.0.2') self.assertEqual(dev['port'], 6001) self.assertEqual(dev['device'], 'sda2') self.assertEqual(dev['meta'], '') # Check that device was created with given data dev = ring.devs[-1] self.assertEqual(dev['ip'], 'test.test2.com') self.assertEqual(dev['port'], 9000) self.assertEqual(dev['replication_ip'], 'r.test2.com') self.assertEqual(dev['replication_port'], 9000) self.assertEqual(dev['device'], 'sda300') self.assertEqual(dev['meta'], 'other meta data') # Final check, rebalance and check ring is ok ring.rebalance() self.assertTrue(ring.validate()) def test_set_info_number_of_arguments(self): self.create_sample_ring() # Test Number of arguments abnormal argv = ["", self.tmpfile, "set_info"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_set_info_no_matching(self): self.create_sample_ring() # Test No matching devices argv = ["", self.tmpfile, "set_info", "--ip", "unknown"] self.assertSystemExit(EXIT_ERROR, 
ringbuilder.main, argv) def test_set_info_already_exists(self): self.create_sample_ring() # Test Set a device that already exists argv = \ ["", self.tmpfile, "set_info", "--id", "0", "--region", "0", "--zone", "0", "--ip", "127.0.0.1", "--port", "6000", "--replication-ip", "127.0.0.1", "--replication-port", "6000", "--device", "sda1", "--meta", "some meta data", "--change-ip", "127.0.0.2", "--change-port", "6001", "--change-replication-ip", "127.0.0.2", "--change-replication-port", "6001", "--change-device", "sda2", "--change-meta", ""] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_set_min_part_hours(self): self.create_sample_ring() argv = ["", self.tmpfile, "set_min_part_hours", "24"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.min_part_hours, 24) def test_set_min_part_hours_number_of_arguments(self): self.create_sample_ring() # Test Number of arguments abnormal argv = ["", self.tmpfile, "set_min_part_hours"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_set_replicas(self): self.create_sample_ring() argv = ["", self.tmpfile, "set_replicas", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.replicas, 3.14159265359) def test_set_overload(self): self.create_sample_ring() argv = ["", self.tmpfile, "set_overload", "0.19878"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.overload, 0.19878) def test_set_overload_negative(self): self.create_sample_ring() argv = ["", self.tmpfile, "set_overload", "-0.19878"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.overload, 0.0) def test_set_overload_non_numeric(self): self.create_sample_ring() argv = ["", self.tmpfile, "set_overload", "swedish fish"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.overload, 0.0) def test_set_overload_percent(self): self.create_sample_ring() argv = "set_overload 10%".split() out, err = self.run_srb(*argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.overload, 0.1) self.assertTrue('10.00%' in out) self.assertTrue('0.100000' in out) def test_set_overload_percent_strange_input(self): self.create_sample_ring() argv = "set_overload 26%%%%".split() out, err = self.run_srb(*argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.overload, 0.26) self.assertTrue('26.00%' in out) self.assertTrue('0.260000' in out) def test_server_overload_crazy_high(self): self.create_sample_ring() argv = "set_overload 10".split() out, err = self.run_srb(*argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.overload, 10.0) self.assertTrue('Warning overload is greater than 100%' in out) self.assertTrue('1000.00%' in out) self.assertTrue('10.000000' in out) # but it's cool if you do it on purpose argv[-1] = '1000%' out, err = self.run_srb(*argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.overload, 10.0) self.assertTrue('Warning overload is greater than 100%' not in out) self.assertTrue('1000.00%' in out) self.assertTrue('10.000000' in out) def test_set_overload_number_of_arguments(self): self.create_sample_ring() # Test missing arguments argv = ["", self.tmpfile, "set_overload"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_set_replicas_number_of_arguments(self): 
self.create_sample_ring() # Test Number of arguments abnormal argv = ["", self.tmpfile, "set_replicas"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_set_replicas_invalid_value(self): self.create_sample_ring() # Test not a valid number argv = ["", self.tmpfile, "set_replicas", "test"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) # Test new replicas is 0 argv = ["", self.tmpfile, "set_replicas", "0"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_validate(self): self.create_sample_ring() ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) argv = ["", self.tmpfile, "validate"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_validate_empty_file(self): open(self.tmpfile, 'a').close argv = ["", self.tmpfile, "validate"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_validate_corrupted_file(self): self.create_sample_ring() ring = RingBuilder.load(self.tmpfile) ring.rebalance() self.assertTrue(ring.validate()) # ring is valid until now ring.save(self.tmpfile) argv = ["", self.tmpfile, "validate"] # corrupt the file with open(self.tmpfile, 'wb') as f: f.write(os.urandom(1024)) self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_validate_non_existent_file(self): rand_file = '%s/%s' % ('/tmp', str(uuid.uuid4())) argv = ["", rand_file, "validate"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_validate_non_accessible_file(self): with mock.patch.object( RingBuilder, 'load', mock.Mock(side_effect=exceptions.PermissionError)): argv = ["", self.tmpfile, "validate"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_validate_generic_error(self): with mock.patch.object( RingBuilder, 'load', mock.Mock( side_effect=IOError('Generic error occurred'))): argv = ["", self.tmpfile, "validate"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_search_device_ipv4_old_format(self): self.create_sample_ring() # Test ipv4(old format) argv = ["", self.tmpfile, "search", "d0r0z0-127.0.0.1:6000R127.0.0.1:6000/sda1_some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_search_device_ipv6_old_format(self): self.create_sample_ring() # add IPV6 argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # write ring file ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) # Test ipv6(old format) argv = ["", self.tmpfile, "search", "d4r2z3-[2001:0000:1234:0000:0000:C1C0:ABCD:0876]:6000" "R[2::10]:7000/sda3_some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_search_device_ipv4_new_format(self): self.create_sample_ring() # Test ipv4(new format) argv = \ ["", self.tmpfile, "search", "--id", "0", "--region", "0", "--zone", "0", "--ip", "127.0.0.1", "--port", "6000", "--replication-ip", "127.0.0.1", "--replication-port", "6000", "--device", "sda1", "--meta", "some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_search_device_ipv6_new_format(self): self.create_sample_ring() # add IPV6 argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", 
"--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # write ring file ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) # Test ipv6(new format) argv = \ ["", self.tmpfile, "search", "--id", "4", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_search_device_domain_new_format(self): self.create_sample_ring() # add domain name argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # write ring file ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) # Test domain name argv = \ ["", self.tmpfile, "search", "--id", "4", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_search_device_number_of_arguments(self): self.create_sample_ring() # Test Number of arguments abnormal argv = ["", self.tmpfile, "search"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_search_device_no_matching(self): self.create_sample_ring() # Test No matching devices argv = ["", self.tmpfile, "search", "--ip", "unknown"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_list_parts_ipv4_old_format(self): self.create_sample_ring() ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) # Test ipv4(old format) argv = ["", self.tmpfile, "list_parts", "d0r0z0-127.0.0.1:6000R127.0.0.1:6000/sda1_some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_list_parts_ipv6_old_format(self): self.create_sample_ring() # add IPV6 argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # write ring file ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) # Test ipv6(old format) argv = ["", self.tmpfile, "list_parts", "d4r2z3-[2001:0000:1234:0000:0000:C1C0:ABCD:0876]:6000" "R[2::10]:7000/sda3_some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_list_parts_ipv4_new_format(self): self.create_sample_ring() ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) # Test ipv4(new format) argv = \ ["", self.tmpfile, "list_parts", "--id", "0", "--region", "0", "--zone", "0", "--ip", "127.0.0.1", "--port", "6000", "--replication-ip", "127.0.0.1", "--replication-port", "6000", "--device", "sda1", "--meta", "some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_list_parts_ipv6_new_format(self): self.create_sample_ring() # add IPV6 argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", 
"[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # write ring file ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) # Test ipv6(new format) argv = \ ["", self.tmpfile, "list_parts", "--id", "4", "--region", "2", "--zone", "3", "--ip", "[2001:0000:1234:0000:0000:C1C0:ABCD:0876]", "--port", "6000", "--replication-ip", "[2::10]", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_list_parts_domain_new_format(self): self.create_sample_ring() # add domain name argv = \ ["", self.tmpfile, "add", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data", "--weight", "3.14159265359"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # write ring file ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) # Test domain name argv = \ ["", self.tmpfile, "list_parts", "--id", "4", "--region", "2", "--zone", "3", "--ip", "test.test.com", "--port", "6000", "--replication-ip", "r.test.com", "--replication-port", "7000", "--device", "sda3", "--meta", "some meta data"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_list_parts_number_of_arguments(self): self.create_sample_ring() # Test Number of arguments abnormal argv = ["", self.tmpfile, "list_parts"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_list_parts_no_matching(self): self.create_sample_ring() # Test No matching devices argv = ["", self.tmpfile, "list_parts", "--ip", "unknown"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_unknown(self): self.create_sample_ring() argv = ["", self.tmpfile, "unknown"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_default(self): self.create_sample_ring() argv = ["", self.tmpfile] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_default_show_removed(self): mock_stdout = six.StringIO() mock_stderr = six.StringIO() self.create_sample_ring() # Note: it also sets device's weight to zero. argv = ["", self.tmpfile, "remove", "--id", "1"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # Setting another device's weight to zero to be sure we distinguish # real removed device and device with zero weight. 
argv = ["", self.tmpfile, "set_weight", "0", "--id", "3"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) argv = ["", self.tmpfile] with mock.patch("sys.stdout", mock_stdout): with mock.patch("sys.stderr", mock_stderr): self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) expected = "%s, build version 6\n" \ "64 partitions, 3.000000 replicas, 4 regions, 4 zones, " \ "4 devices, 100.00 balance, 0.00 dispersion\n" \ "The minimum number of hours before a partition can be " \ "reassigned is 1 (0:00:00 remaining)\n" \ "The overload factor is 0.00%% (0.000000)\n" \ "Ring file %s.ring.gz not found, probably " \ "it hasn't been written yet\n" \ "Devices: id region zone ip address port " \ "replication ip replication port name weight " \ "partitions balance flags meta\n" \ " 0 0 0 127.0.0.1 6000 " \ "127.0.0.1 6000 sda1 100.00" \ " 0 -100.00 some meta data\n" \ " 1 1 1 127.0.0.2 6001 " \ "127.0.0.2 6001 sda2 0.00" \ " 0 0.00 DEL \n" \ " 2 2 2 127.0.0.3 6002 " \ "127.0.0.3 6002 sdc3 100.00" \ " 0 -100.00 \n" \ " 3 3 3 127.0.0.4 6003 " \ "127.0.0.4 6003 sdd4 0.00" \ " 0 0.00 \n" % (self.tmpfile, self.tmpfile) self.assertEqual(expected, mock_stdout.getvalue()) def test_default_ringfile_check(self): self.create_sample_ring() # ring file not created mock_stdout = six.StringIO() mock_stderr = six.StringIO() argv = ["", self.tmpfile] with mock.patch("sys.stdout", mock_stdout): with mock.patch("sys.stderr", mock_stderr): self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring_not_found_re = re.compile("Ring file .*\.ring\.gz not found") self.assertTrue(ring_not_found_re.findall(mock_stdout.getvalue())) # write ring file argv = ["", self.tmpfile, "rebalance"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # ring file is up-to-date mock_stdout = six.StringIO() argv = ["", self.tmpfile] with mock.patch("sys.stdout", mock_stdout): with mock.patch("sys.stderr", mock_stderr): self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring_up_to_date_re = re.compile("Ring file .*\.ring\.gz is up-to-date") self.assertTrue(ring_up_to_date_re.findall(mock_stdout.getvalue())) # change builder (set weight) argv = ["", self.tmpfile, "set_weight", "0", "--id", "3"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # ring file is obsolete after set_weight mock_stdout = six.StringIO() argv = ["", self.tmpfile] with mock.patch("sys.stdout", mock_stdout): with mock.patch("sys.stderr", mock_stderr): self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring_obsolete_re = re.compile("Ring file .*\.ring\.gz is obsolete") self.assertTrue(ring_obsolete_re.findall(mock_stdout.getvalue())) # write ring file argv = ["", self.tmpfile, "write_ring"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) # ring file up-to-date again mock_stdout = six.StringIO() argv = ["", self.tmpfile] with mock.patch("sys.stdout", mock_stdout): with mock.patch("sys.stderr", mock_stderr): self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) self.assertTrue(ring_up_to_date_re.findall(mock_stdout.getvalue())) # Break ring file e.g. 
just make it empty open('%s.ring.gz' % self.tmpfile, 'w').close() # ring file is invalid mock_stdout = six.StringIO() argv = ["", self.tmpfile] with mock.patch("sys.stdout", mock_stdout): with mock.patch("sys.stderr", mock_stderr): self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring_invalid_re = re.compile("Ring file .*\.ring\.gz is invalid") self.assertTrue(ring_invalid_re.findall(mock_stdout.getvalue())) def test_rebalance(self): self.create_sample_ring() argv = ["", self.tmpfile, "rebalance", "3"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertTrue(ring.validate()) def test_rebalance_no_device_change(self): self.create_sample_ring() ring = RingBuilder.load(self.tmpfile) ring.rebalance() ring.save(self.tmpfile) # Test No change to the device argv = ["", self.tmpfile, "rebalance", "3"] self.assertSystemExit(EXIT_WARNING, ringbuilder.main, argv) def test_rebalance_no_devices(self): # Test no devices argv = ["", self.tmpfile, "create", "6", "3.14159265359", "1"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) argv = ["", self.tmpfile, "rebalance"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_rebalance_remove_zero_weighted_device(self): self.create_sample_ring() ring = RingBuilder.load(self.tmpfile) ring.set_dev_weight(3, 0.0) ring.rebalance() ring.pretend_min_part_hours_passed() ring.remove_dev(3) ring.save(self.tmpfile) # Test rebalance after remove 0 weighted device argv = ["", self.tmpfile, "rebalance", "3"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertTrue(ring.validate()) self.assertEqual(ring.devs[3], None) def test_rebalance_resets_time_remaining(self): self.create_sample_ring() ring = RingBuilder.load(self.tmpfile) time_path = 'swift.common.ring.builder.time' argv = ["", self.tmpfile, "rebalance", "3"] time = 0 # first rebalance, should have 1 hour left before next rebalance time += 3600 with mock.patch(time_path, return_value=time): self.assertEqual(ring.min_part_seconds_left, 0) self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.min_part_seconds_left, 3600) # min part hours passed, change ring and save for rebalance ring.set_dev_weight(0, ring.devs[0]['weight'] * 2) ring.save(self.tmpfile) # second rebalance, should have 1 hour left time += 3600 with mock.patch(time_path, return_value=time): self.assertEqual(ring.min_part_seconds_left, 0) self.assertSystemExit(EXIT_WARNING, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertTrue(ring.min_part_seconds_left, 3600) def test_rebalance_failure_does_not_reset_last_moves_epoch(self): ring = RingBuilder(8, 3, 1) ring.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 6010, 'device': 'sda1'}) ring.add_dev({'id': 1, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 6020, 'device': 'sdb1'}) ring.add_dev({'id': 2, 'region': 0, 'zone': 0, 'weight': 1, 'ip': '127.0.0.1', 'port': 6030, 'device': 'sdc1'}) time_path = 'swift.common.ring.builder.time' argv = ["", self.tmpfile, "rebalance", "3"] with mock.patch(time_path, return_value=0): ring.rebalance() ring.save(self.tmpfile) # min part hours not passed with mock.patch(time_path, return_value=(3600 * 0.6)): self.assertSystemExit(EXIT_WARNING, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.min_part_seconds_left, 3600 * 0.4) ring.save(self.tmpfile) # min part 
hours passed, no partitions need to be moved with mock.patch(time_path, return_value=(3600 * 1.5)): self.assertSystemExit(EXIT_WARNING, ringbuilder.main, argv) ring = RingBuilder.load(self.tmpfile) self.assertEqual(ring.min_part_seconds_left, 0) def test_rebalance_with_seed(self): self.create_sample_ring() # Test rebalance using explicit seed parameter argv = ["", self.tmpfile, "rebalance", "--seed", "2"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_write_ring(self): self.create_sample_ring() argv = ["", self.tmpfile, "rebalance"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) argv = ["", self.tmpfile, "write_ring"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_write_builder(self): # Test builder file already exists self.create_sample_ring() argv = ["", self.tmpfile, "rebalance"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) argv = ["", self.tmpfile, "write_builder"] exp_results = {'valid_exit_codes': [2]} self.run_srb(*argv, exp_results=exp_results) def test_write_builder_after_device_removal(self): # Test regenerating builder file after having removed a device # and lost the builder file self.create_sample_ring() argv = ["", self.tmpfile, "add", "r1z1-127.0.0.1:6000/sdb", "1.0"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) argv = ["", self.tmpfile, "add", "r1z1-127.0.0.1:6000/sdc", "1.0"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) argv = ["", self.tmpfile, "rebalance"] self.assertSystemExit(EXIT_WARNING, ringbuilder.main, argv) argv = ["", self.tmpfile, "remove", "--id", "0"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) argv = ["", self.tmpfile, "rebalance"] self.assertSystemExit(EXIT_WARNING, ringbuilder.main, argv) backup_file = os.path.join(os.path.dirname(self.tmpfile), os.path.basename(self.tmpfile) + ".ring.gz") os.remove(self.tmpfile) # loses file... 
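        # This mirrors the operator recovery path when a builder file is
        # lost: point the tool at the surviving ring and regenerate the
        # builder, roughly equivalent to running (file name illustrative):
        #
        #   swift-ring-builder object.ring.gz write_builder 24
        #
        # min_part_hours is not stored in the ring file, which is why the
        # test passes "24" explicitly as the final argument below.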
argv = ["", backup_file, "write_builder", "24"] self.assertEqual(ringbuilder.main(argv), None) def test_warn_at_risk(self): # when the number of total part replicas (3 * 2 ** 4 = 48 in # this ring) is less than the total units of weight (310 in this # ring) the relative number of parts per unit of weight (called # weight_of_one_part) is less than 1 - and each whole part # placed takes up a larger ratio of the fractional number of # parts the device wants - so it's much more difficult to # satisfy a device's weight exactly - that is to say less parts # to go around tends to make things lumpy self.create_sample_ring(4) ring = RingBuilder.load(self.tmpfile) ring.devs[0]['weight'] = 10 ring.save(self.tmpfile) argv = ["", self.tmpfile, "rebalance"] self.assertSystemExit(EXIT_WARNING, ringbuilder.main, argv) def test_no_warn_when_balanced(self): # when the number of total part replicas (3 * 2 ** 10 = 3072 in # this ring) is larger than the total units of weight (310 in # this ring) the relative number of parts per unit of weight # (called weight_of_one_part) is more than 1 - and each whole # part placed takes up a smaller ratio of the larger number of # parts the device wants - so it's much easier to satisfy a # device's weight exactly - that is to say more parts to go # around tends to smooth things out self.create_sample_ring(10) ring = RingBuilder.load(self.tmpfile) ring.devs[0]['weight'] = 10 ring.save(self.tmpfile) argv = ["", self.tmpfile, "rebalance"] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_invalid_device_name(self): self.create_sample_ring() for device_name in ["", " ", " sda1", "sda1 ", " meta "]: argv = ["", self.tmpfile, "add", "r1z1-127.0.0.1:6000/%s" % device_name, "1"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) argv = ["", self.tmpfile, "add", "--region", "1", "--zone", "1", "--ip", "127.0.0.1", "--port", "6000", "--device", device_name, "--weight", "100"] self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) def test_dispersion_command(self): self.create_sample_ring() self.run_srb('rebalance') out, err = self.run_srb('dispersion -v') self.assertIn('dispersion', out.lower()) self.assertFalse(err) def test_use_ringfile_as_builderfile(self): mock_stdout = six.StringIO() mock_stderr = six.StringIO() argv = ["", "object.ring.gz"] with mock.patch("sys.stdout", mock_stdout): with mock.patch("sys.stderr", mock_stderr): self.assertSystemExit(EXIT_ERROR, ringbuilder.main, argv) expected = "Note: using object.builder instead of object.ring.gz " \ "as builder file\n" \ "Ring Builder file does not exist: object.builder\n" self.assertEqual(expected, mock_stdout.getvalue()) def test_main_no_arguments(self): # Test calling main with no arguments argv = [] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_main_single_argument(self): # Test calling main with single argument argv = [""] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) def test_main_with_safe(self): # Test calling main with '-safe' argument self.create_sample_ring() argv = ["-safe", self.tmpfile] self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv) class TestRebalanceCommand(unittest.TestCase, RunSwiftRingBuilderMixin): def __init__(self, *args, **kwargs): super(TestRebalanceCommand, self).__init__(*args, **kwargs) def setUp(self): self.tmpdir = tempfile.mkdtemp() tmpf = tempfile.NamedTemporaryFile(dir=self.tmpdir) self.tempfile = self.tmpfile = tmpf.name def tearDown(self): try: shutil.rmtree(self.tmpdir, True) except OSError: pass def run_srb(self, 
*argv): mock_stdout = six.StringIO() mock_stderr = six.StringIO() srb_args = ["", self.tempfile] + [str(s) for s in argv] try: with mock.patch("sys.stdout", mock_stdout): with mock.patch("sys.stderr", mock_stderr): ringbuilder.main(srb_args) except SystemExit as err: if err.code not in (0, 1): # (success, warning) raise return (mock_stdout.getvalue(), mock_stderr.getvalue()) def test_debug(self): # NB: getLogger(name) always returns the same object rb_logger = logging.getLogger("swift.ring.builder") try: self.assertNotEqual(rb_logger.getEffectiveLevel(), logging.DEBUG) self.run_srb("create", 8, 3, 1) self.run_srb("add", "r1z1-10.1.1.1:2345/sda", 100.0, "r1z1-10.1.1.1:2345/sdb", 100.0, "r1z1-10.1.1.1:2345/sdc", 100.0, "r1z1-10.1.1.1:2345/sdd", 100.0) self.run_srb("rebalance", "--debug") self.assertEqual(rb_logger.getEffectiveLevel(), logging.DEBUG) rb_logger.setLevel(logging.INFO) self.run_srb("rebalance", "--debug", "123") self.assertEqual(rb_logger.getEffectiveLevel(), logging.DEBUG) rb_logger.setLevel(logging.INFO) self.run_srb("rebalance", "123", "--debug") self.assertEqual(rb_logger.getEffectiveLevel(), logging.DEBUG) finally: rb_logger.setLevel(logging.INFO) # silence other test cases def test_rebalance_warning_appears(self): self.run_srb("create", 8, 3, 24) # all in one machine: totally balanceable self.run_srb("add", "r1z1-10.1.1.1:2345/sda", 100.0, "r1z1-10.1.1.1:2345/sdb", 100.0, "r1z1-10.1.1.1:2345/sdc", 100.0, "r1z1-10.1.1.1:2345/sdd", 100.0) out, err = self.run_srb("rebalance") self.assertTrue("rebalance/repush" not in out) # 2 machines of equal size: balanceable, but not in one pass due to # min_part_hours > 0 self.run_srb("add", "r1z1-10.1.1.2:2345/sda", 100.0, "r1z1-10.1.1.2:2345/sdb", 100.0, "r1z1-10.1.1.2:2345/sdc", 100.0, "r1z1-10.1.1.2:2345/sdd", 100.0) self.run_srb("pretend_min_part_hours_passed") out, err = self.run_srb("rebalance") self.assertTrue("rebalance/repush" in out) # after two passes, it's all balanced out self.run_srb("pretend_min_part_hours_passed") out, err = self.run_srb("rebalance") self.assertTrue("rebalance/repush" not in out) def test_rebalance_warning_with_overload(self): self.run_srb("create", 8, 3, 24) self.run_srb("set_overload", 0.12) # The ring's balance is at least 5, so normally we'd get a warning, # but it's suppressed due to the overload factor. self.run_srb("add", "r1z1-10.1.1.1:2345/sda", 100.0, "r1z1-10.1.1.1:2345/sdb", 100.0, "r1z1-10.1.1.1:2345/sdc", 120.0) out, err = self.run_srb("rebalance") self.assertTrue("rebalance/repush" not in out) # Now we add in a really big device, but not enough partitions move # to fill it in one pass, so we see the rebalance warning. 
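        # Background for the check below: with a non-zero min_part_hours the
        # builder avoids reassigning more than one replica of any partition
        # until min_part_hours have elapsed, so a large imbalance gets worked
        # off over several pretend_min_part_hours_passed/rebalance cycles.
        # The "rebalance/repush" text is the warning telling the operator to
        # push this ring and rebalance again later.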
self.run_srb("add", "r1z1-10.1.1.1:2345/sdd", 99999.0) self.run_srb("pretend_min_part_hours_passed") out, err = self.run_srb("rebalance") self.assertTrue("rebalance/repush" in out) def test_cached_dispersion_value(self): self.run_srb("create", 8, 3, 24) self.run_srb("add", "r1z1-10.1.1.1:2345/sda", 100.0, "r1z1-10.1.1.1:2345/sdb", 100.0, "r1z1-10.1.1.1:2345/sdc", 100.0, "r1z1-10.1.1.1:2345/sdd", 100.0) self.run_srb('rebalance') out, err = self.run_srb() # list devices self.assertTrue('dispersion' in out) # remove cached dispersion value builder = RingBuilder.load(self.tempfile) builder.dispersion = None builder.save(self.tempfile) # now dispersion output is suppressed out, err = self.run_srb() # list devices self.assertFalse('dispersion' in out) # but will show up after rebalance self.run_srb('rebalance', '-f') out, err = self.run_srb() # list devices self.assertTrue('dispersion' in out) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/cli/test_info.py0000664000567000056710000010070713024044354021073 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may not # use this file except in compliance with the License. You may obtain a copy # of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for swift.cli.info""" import os import unittest import mock from shutil import rmtree from tempfile import mkdtemp from six.moves import cStringIO as StringIO from test.unit import patch_policies, write_fake_ring from swift.common import ring, utils from swift.common.swob import Request from swift.common.storage_policy import StoragePolicy, POLICIES from swift.cli.info import print_db_info_metadata, print_ring_locations, \ print_info, print_obj_metadata, print_obj, InfoSystemExit, \ print_item_locations from swift.account.server import AccountController from swift.container.server import ContainerController from swift.obj.diskfile import write_metadata @patch_policies([StoragePolicy(0, 'zero', True), StoragePolicy(1, 'one', False), StoragePolicy(2, 'two', False)]) class TestCliInfoBase(unittest.TestCase): def setUp(self): self.orig_hp = utils.HASH_PATH_PREFIX, utils.HASH_PATH_SUFFIX utils.HASH_PATH_PREFIX = 'info' utils.HASH_PATH_SUFFIX = 'info' self.testdir = os.path.join(mkdtemp(), 'tmp_test_cli_info') utils.mkdirs(self.testdir) rmtree(self.testdir) utils.mkdirs(os.path.join(self.testdir, 'sda1')) utils.mkdirs(os.path.join(self.testdir, 'sda1', 'tmp')) utils.mkdirs(os.path.join(self.testdir, 'sdb1')) utils.mkdirs(os.path.join(self.testdir, 'sdb1', 'tmp')) self.account_ring_path = os.path.join(self.testdir, 'account.ring.gz') account_devs = [ {'ip': '127.0.0.1', 'port': 42}, {'ip': '127.0.0.2', 'port': 43}, ] write_fake_ring(self.account_ring_path, *account_devs) self.container_ring_path = os.path.join(self.testdir, 'container.ring.gz') container_devs = [ {'ip': '127.0.0.3', 'port': 42}, {'ip': '127.0.0.4', 'port': 43}, ] write_fake_ring(self.container_ring_path, *container_devs) self.object_ring_path = os.path.join(self.testdir, 'object.ring.gz') object_devs = [ {'ip': '127.0.0.3', 'port': 42}, {'ip': '127.0.0.4', 'port': 43}, ] write_fake_ring(self.object_ring_path, *object_devs) # another ring for 
policy 1 self.one_ring_path = os.path.join(self.testdir, 'object-1.ring.gz') write_fake_ring(self.one_ring_path, *object_devs) # ... and another for policy 2 self.two_ring_path = os.path.join(self.testdir, 'object-2.ring.gz') write_fake_ring(self.two_ring_path, *object_devs) def tearDown(self): utils.HASH_PATH_PREFIX, utils.HASH_PATH_SUFFIX = self.orig_hp rmtree(os.path.dirname(self.testdir)) def assertRaisesMessage(self, exc, msg, func, *args, **kwargs): try: func(*args, **kwargs) except Exception as e: self.assertTrue(msg in str(e), "Expected %r in %r" % (msg, str(e))) self.assertTrue(isinstance(e, exc), "Expected %s, got %s" % (exc, type(e))) class TestCliInfo(TestCliInfoBase): def test_print_db_info_metadata(self): self.assertRaisesMessage(ValueError, 'Wrong DB type', print_db_info_metadata, 't', {}, {}) self.assertRaisesMessage(ValueError, 'DB info is None', print_db_info_metadata, 'container', None, {}) self.assertRaisesMessage(ValueError, 'Info is incomplete', print_db_info_metadata, 'container', {}, {}) info = dict( account='acct', created_at=100.1, put_timestamp=106.3, delete_timestamp=107.9, status_changed_at=108.3, container_count='3', object_count='20', bytes_used='42') info['hash'] = 'abaddeadbeefcafe' info['id'] = 'abadf100d0ddba11' md = {'x-account-meta-mydata': ('swift', '0000000000.00000'), 'x-other-something': ('boo', '0000000000.00000')} out = StringIO() with mock.patch('sys.stdout', out): print_db_info_metadata('account', info, md) exp_out = '''Path: /acct Account: acct Account Hash: dc5be2aa4347a22a0fee6bc7de505b47 Metadata: Created at: 1970-01-01T00:01:40.100000 (100.1) Put Timestamp: 1970-01-01T00:01:46.300000 (106.3) Delete Timestamp: 1970-01-01T00:01:47.900000 (107.9) Status Timestamp: 1970-01-01T00:01:48.300000 (108.3) Container Count: 3 Object Count: 20 Bytes Used: 42 Chexor: abaddeadbeefcafe UUID: abadf100d0ddba11 X-Other-Something: boo No system metadata found in db file User Metadata: {'mydata': 'swift'}''' self.assertEqual(sorted(out.getvalue().strip().split('\n')), sorted(exp_out.split('\n'))) info = dict( account='acct', container='cont', storage_policy_index=0, created_at='0000000100.10000', put_timestamp='0000000106.30000', delete_timestamp='0000000107.90000', status_changed_at='0000000108.30000', object_count='20', bytes_used='42', reported_put_timestamp='0000010106.30000', reported_delete_timestamp='0000010107.90000', reported_object_count='20', reported_bytes_used='42', x_container_foo='bar', x_container_bar='goo') info['hash'] = 'abaddeadbeefcafe' info['id'] = 'abadf100d0ddba11' md = {'x-container-sysmeta-mydata': ('swift', '0000000000.00000')} out = StringIO() with mock.patch('sys.stdout', out): print_db_info_metadata('container', info, md) exp_out = '''Path: /acct/cont Account: acct Container: cont Container Hash: d49d0ecbb53be1fcc49624f2f7c7ccae Metadata: Created at: 1970-01-01T00:01:40.100000 (0000000100.10000) Put Timestamp: 1970-01-01T00:01:46.300000 (0000000106.30000) Delete Timestamp: 1970-01-01T00:01:47.900000 (0000000107.90000) Status Timestamp: 1970-01-01T00:01:48.300000 (0000000108.30000) Object Count: 20 Bytes Used: 42 Storage Policy: %s (0) Reported Put Timestamp: 1970-01-01T02:48:26.300000 (0000010106.30000) Reported Delete Timestamp: 1970-01-01T02:48:27.900000 (0000010107.90000) Reported Object Count: 20 Reported Bytes Used: 42 Chexor: abaddeadbeefcafe UUID: abadf100d0ddba11 X-Container-Bar: goo X-Container-Foo: bar System Metadata: {'mydata': 'swift'} No user metadata found in db file''' % POLICIES[0].name 
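        # The metadata lines are rendered from a dict, so their relative
        # order in the output is not guaranteed; comparing sorted lines (as
        # in the account case above) keeps the assertion order-independent.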
self.assertEqual(sorted(out.getvalue().strip().split('\n')), sorted(exp_out.split('\n'))) def test_print_ring_locations_invalid_args(self): self.assertRaises(ValueError, print_ring_locations, None, 'dir', 'acct') self.assertRaises(ValueError, print_ring_locations, [], None, 'acct') self.assertRaises(ValueError, print_ring_locations, [], 'dir', None) self.assertRaises(ValueError, print_ring_locations, [], 'dir', 'acct', 'con') self.assertRaises(ValueError, print_ring_locations, [], 'dir', 'acct', obj='o') def test_print_ring_locations_account(self): out = StringIO() with mock.patch('sys.stdout', out): acctring = ring.Ring(self.testdir, ring_name='account') print_ring_locations(acctring, 'dir', 'acct') exp_db = os.path.join('${DEVICE:-/srv/node*}', 'sdb1', 'dir', '3', 'b47', 'dc5be2aa4347a22a0fee6bc7de505b47') self.assertTrue(exp_db in out.getvalue()) self.assertTrue('127.0.0.1' in out.getvalue()) self.assertTrue('127.0.0.2' in out.getvalue()) def test_print_ring_locations_container(self): out = StringIO() with mock.patch('sys.stdout', out): contring = ring.Ring(self.testdir, ring_name='container') print_ring_locations(contring, 'dir', 'acct', 'con') exp_db = os.path.join('${DEVICE:-/srv/node*}', 'sdb1', 'dir', '1', 'fe6', '63e70955d78dfc62821edc07d6ec1fe6') self.assertTrue(exp_db in out.getvalue()) def test_print_ring_locations_obj(self): out = StringIO() with mock.patch('sys.stdout', out): objring = ring.Ring(self.testdir, ring_name='object') print_ring_locations(objring, 'dir', 'acct', 'con', 'obj') exp_obj = os.path.join('${DEVICE:-/srv/node*}', 'sda1', 'dir', '1', '117', '4a16154fc15c75e26ba6afadf5b1c117') self.assertTrue(exp_obj in out.getvalue()) def test_print_ring_locations_partition_number(self): out = StringIO() with mock.patch('sys.stdout', out): objring = ring.Ring(self.testdir, ring_name='object') print_ring_locations(objring, 'objects', None, tpart='1') exp_obj1 = os.path.join('${DEVICE:-/srv/node*}', 'sda1', 'objects', '1') exp_obj2 = os.path.join('${DEVICE:-/srv/node*}', 'sdb1', 'objects', '1') self.assertTrue(exp_obj1 in out.getvalue()) self.assertTrue(exp_obj2 in out.getvalue()) def test_print_item_locations_invalid_args(self): # No target specified self.assertRaises(InfoSystemExit, print_item_locations, None) # Need a ring or policy self.assertRaises(InfoSystemExit, print_item_locations, None, account='account', obj='object') # No account specified self.assertRaises(InfoSystemExit, print_item_locations, None, container='con') # No policy named 'xyz' (unrecognized policy) self.assertRaises(InfoSystemExit, print_item_locations, None, obj='object', policy_name='xyz') # No container specified objring = ring.Ring(self.testdir, ring_name='object') self.assertRaises(InfoSystemExit, print_item_locations, objring, account='account', obj='object') def test_print_item_locations_ring_policy_mismatch_no_target(self): out = StringIO() with mock.patch('sys.stdout', out): objring = ring.Ring(self.testdir, ring_name='object') # Test mismatch of ring and policy name (valid policy) self.assertRaises(InfoSystemExit, print_item_locations, objring, policy_name='zero') self.assertTrue('Warning: mismatch between ring and policy name!' 
in out.getvalue()) self.assertTrue('No target specified' in out.getvalue()) def test_print_item_locations_invalid_policy_no_target(self): out = StringIO() policy_name = 'nineteen' with mock.patch('sys.stdout', out): objring = ring.Ring(self.testdir, ring_name='object') self.assertRaises(InfoSystemExit, print_item_locations, objring, policy_name=policy_name) exp_msg = 'Warning: Policy %s is not valid' % policy_name self.assertTrue(exp_msg in out.getvalue()) self.assertTrue('No target specified' in out.getvalue()) def test_print_item_locations_policy_object(self): out = StringIO() part = '1' with mock.patch('sys.stdout', out): print_item_locations(None, partition=part, policy_name='zero', swift_dir=self.testdir) exp_part_msg = 'Partition\t%s' % part exp_acct_msg = 'Account \tNone' exp_cont_msg = 'Container\tNone' exp_obj_msg = 'Object \tNone' self.assertTrue(exp_part_msg in out.getvalue()) self.assertTrue(exp_acct_msg in out.getvalue()) self.assertTrue(exp_cont_msg in out.getvalue()) self.assertTrue(exp_obj_msg in out.getvalue()) def test_print_item_locations_dashed_ring_name_partition(self): out = StringIO() part = '1' with mock.patch('sys.stdout', out): print_item_locations(None, policy_name='one', ring_name='foo-bar', partition=part, swift_dir=self.testdir) exp_part_msg = 'Partition\t%s' % part exp_acct_msg = 'Account \tNone' exp_cont_msg = 'Container\tNone' exp_obj_msg = 'Object \tNone' self.assertTrue(exp_part_msg in out.getvalue()) self.assertTrue(exp_acct_msg in out.getvalue()) self.assertTrue(exp_cont_msg in out.getvalue()) self.assertTrue(exp_obj_msg in out.getvalue()) def test_print_item_locations_account_with_ring(self): out = StringIO() account = 'account' with mock.patch('sys.stdout', out): account_ring = ring.Ring(self.testdir, ring_name=account) print_item_locations(account_ring, account=account) exp_msg = 'Account \t%s' % account self.assertTrue(exp_msg in out.getvalue()) exp_warning = 'Warning: account specified ' + \ 'but ring not named "account"' self.assertTrue(exp_warning in out.getvalue()) exp_acct_msg = 'Account \t%s' % account exp_cont_msg = 'Container\tNone' exp_obj_msg = 'Object \tNone' self.assertTrue(exp_acct_msg in out.getvalue()) self.assertTrue(exp_cont_msg in out.getvalue()) self.assertTrue(exp_obj_msg in out.getvalue()) def test_print_item_locations_account_no_ring(self): out = StringIO() account = 'account' with mock.patch('sys.stdout', out): print_item_locations(None, account=account, swift_dir=self.testdir) exp_acct_msg = 'Account \t%s' % account exp_cont_msg = 'Container\tNone' exp_obj_msg = 'Object \tNone' self.assertTrue(exp_acct_msg in out.getvalue()) self.assertTrue(exp_cont_msg in out.getvalue()) self.assertTrue(exp_obj_msg in out.getvalue()) def test_print_item_locations_account_container_ring(self): out = StringIO() account = 'account' container = 'container' with mock.patch('sys.stdout', out): container_ring = ring.Ring(self.testdir, ring_name='container') print_item_locations(container_ring, account=account, container=container) exp_acct_msg = 'Account \t%s' % account exp_cont_msg = 'Container\t%s' % container exp_obj_msg = 'Object \tNone' self.assertTrue(exp_acct_msg in out.getvalue()) self.assertTrue(exp_cont_msg in out.getvalue()) self.assertTrue(exp_obj_msg in out.getvalue()) def test_print_item_locations_account_container_no_ring(self): out = StringIO() account = 'account' container = 'container' with mock.patch('sys.stdout', out): print_item_locations(None, account=account, container=container, swift_dir=self.testdir) exp_acct_msg = 
'Account \t%s' % account exp_cont_msg = 'Container\t%s' % container exp_obj_msg = 'Object \tNone' self.assertTrue(exp_acct_msg in out.getvalue()) self.assertTrue(exp_cont_msg in out.getvalue()) self.assertTrue(exp_obj_msg in out.getvalue()) def test_print_item_locations_account_container_object_ring(self): out = StringIO() account = 'account' container = 'container' obj = 'object' with mock.patch('sys.stdout', out): object_ring = ring.Ring(self.testdir, ring_name='object') print_item_locations(object_ring, ring_name='object', account=account, container=container, obj=obj) exp_acct_msg = 'Account \t%s' % account exp_cont_msg = 'Container\t%s' % container exp_obj_msg = 'Object \t%s' % obj self.assertTrue(exp_acct_msg in out.getvalue()) self.assertTrue(exp_cont_msg in out.getvalue()) self.assertTrue(exp_obj_msg in out.getvalue()) def test_print_item_locations_account_container_object_dashed_ring(self): out = StringIO() account = 'account' container = 'container' obj = 'object' with mock.patch('sys.stdout', out): object_ring = ring.Ring(self.testdir, ring_name='object-1') print_item_locations(object_ring, ring_name='object-1', account=account, container=container, obj=obj) exp_acct_msg = 'Account \t%s' % account exp_cont_msg = 'Container\t%s' % container exp_obj_msg = 'Object \t%s' % obj self.assertTrue(exp_acct_msg in out.getvalue()) self.assertTrue(exp_cont_msg in out.getvalue()) self.assertTrue(exp_obj_msg in out.getvalue()) def test_print_info(self): db_file = 'foo' self.assertRaises(InfoSystemExit, print_info, 'object', db_file) db_file = os.path.join(self.testdir, './acct.db') self.assertRaises(InfoSystemExit, print_info, 'account', db_file) controller = AccountController( {'devices': self.testdir, 'mount_check': 'false'}) req = Request.blank('/sda1/1/acct', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(controller) self.assertEqual(resp.status_int, 201) out = StringIO() exp_raised = False with mock.patch('sys.stdout', out): db_file = os.path.join(self.testdir, 'sda1', 'accounts', '1', 'b47', 'dc5be2aa4347a22a0fee6bc7de505b47', 'dc5be2aa4347a22a0fee6bc7de505b47.db') try: print_info('account', db_file, swift_dir=self.testdir) except Exception: exp_raised = True if exp_raised: self.fail("Unexpected exception raised") else: self.assertTrue(len(out.getvalue().strip()) > 800) controller = ContainerController( {'devices': self.testdir, 'mount_check': 'false'}) req = Request.blank('/sda1/1/acct/cont', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(controller) self.assertEqual(resp.status_int, 201) out = StringIO() exp_raised = False with mock.patch('sys.stdout', out): db_file = os.path.join(self.testdir, 'sda1', 'containers', '1', 'cae', 'd49d0ecbb53be1fcc49624f2f7c7ccae', 'd49d0ecbb53be1fcc49624f2f7c7ccae.db') orig_cwd = os.getcwd() try: os.chdir(os.path.dirname(db_file)) print_info('container', os.path.basename(db_file), swift_dir='/dev/null') except Exception: exp_raised = True finally: os.chdir(orig_cwd) if exp_raised: self.fail("Unexpected exception raised") else: self.assertTrue(len(out.getvalue().strip()) > 600) out = StringIO() exp_raised = False with mock.patch('sys.stdout', out): db_file = os.path.join(self.testdir, 'sda1', 'containers', '1', 'cae', 'd49d0ecbb53be1fcc49624f2f7c7ccae', 'd49d0ecbb53be1fcc49624f2f7c7ccae.db') orig_cwd = os.getcwd() try: os.chdir(os.path.dirname(db_file)) print_info('account', os.path.basename(db_file), swift_dir='/dev/null') except InfoSystemExit: exp_raised = True finally: 
os.chdir(orig_cwd) if exp_raised: exp_out = 'Does not appear to be a DB of type "account":' \ ' ./d49d0ecbb53be1fcc49624f2f7c7ccae.db' self.assertEqual(out.getvalue().strip(), exp_out) else: self.fail("Expected an InfoSystemExit exception to be raised") class TestPrintObj(TestCliInfoBase): def setUp(self): super(TestPrintObj, self).setUp() self.datafile = os.path.join(self.testdir, '1402017432.46642.data') with open(self.datafile, 'wb') as fp: md = {'name': '/AUTH_admin/c/obj', 'Content-Type': 'application/octet-stream'} write_metadata(fp, md) def test_print_obj_invalid(self): datafile = '1402017324.68634.data' self.assertRaises(InfoSystemExit, print_obj, datafile) datafile = os.path.join(self.testdir, './1234.data') self.assertRaises(InfoSystemExit, print_obj, datafile) with open(datafile, 'wb') as fp: fp.write('1234') out = StringIO() with mock.patch('sys.stdout', out): self.assertRaises(InfoSystemExit, print_obj, datafile) self.assertEqual(out.getvalue().strip(), 'Invalid metadata') def test_print_obj_valid(self): out = StringIO() with mock.patch('sys.stdout', out): print_obj(self.datafile, swift_dir=self.testdir) etag_msg = 'ETag: Not found in metadata' length_msg = 'Content-Length: Not found in metadata' self.assertTrue(etag_msg in out.getvalue()) self.assertTrue(length_msg in out.getvalue()) def test_print_obj_with_policy(self): out = StringIO() with mock.patch('sys.stdout', out): print_obj(self.datafile, swift_dir=self.testdir, policy_name='one') etag_msg = 'ETag: Not found in metadata' length_msg = 'Content-Length: Not found in metadata' ring_loc_msg = 'ls -lah' self.assertTrue(etag_msg in out.getvalue()) self.assertTrue(length_msg in out.getvalue()) self.assertTrue(ring_loc_msg in out.getvalue()) def test_missing_etag(self): out = StringIO() with mock.patch('sys.stdout', out): print_obj(self.datafile) self.assertTrue('ETag: Not found in metadata' in out.getvalue()) class TestPrintObjFullMeta(TestCliInfoBase): def setUp(self): super(TestPrintObjFullMeta, self).setUp() self.datafile = os.path.join(self.testdir, 'sda', 'objects-1', '1', 'ea8', 'db4449e025aca992307c7c804a67eea8', '1402017884.18202.data') utils.mkdirs(os.path.dirname(self.datafile)) with open(self.datafile, 'wb') as fp: md = {'name': '/AUTH_admin/c/obj', 'Content-Type': 'application/octet-stream', 'ETag': 'd41d8cd98f00b204e9800998ecf8427e', 'Content-Length': 0} write_metadata(fp, md) def test_print_obj(self): out = StringIO() with mock.patch('sys.stdout', out): print_obj(self.datafile, swift_dir=self.testdir) self.assertTrue('/objects-1/' in out.getvalue()) def test_print_obj_policy_index(self): # Check an output of policy index when current directory is in # object-* directory out = StringIO() hash_dir = os.path.dirname(self.datafile) file_name = os.path.basename(self.datafile) # Change working directory to object hash dir cwd = os.getcwd() try: os.chdir(hash_dir) with mock.patch('sys.stdout', out): print_obj(file_name, swift_dir=self.testdir) finally: os.chdir(cwd) self.assertTrue('X-Backend-Storage-Policy-Index: 1' in out.getvalue()) def test_print_obj_meta_and_ts_files(self): # verify that print_obj will also read from meta and ts files base = os.path.splitext(self.datafile)[0] for ext in ('.meta', '.ts'): test_file = '%s%s' % (base, ext) os.link(self.datafile, test_file) out = StringIO() with mock.patch('sys.stdout', out): print_obj(test_file, swift_dir=self.testdir) self.assertTrue('/objects-1/' in out.getvalue()) def test_print_obj_no_ring(self): no_rings_dir = os.path.join(self.testdir, 'no_rings_here') 
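        # With no ring available, print_obj should still dump the on-disk
        # metadata (the ETag checked below) but omit the ring-location
        # details, hence the assertion that 'Partition' never appears.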
os.mkdir(no_rings_dir) out = StringIO() with mock.patch('sys.stdout', out): print_obj(self.datafile, swift_dir=no_rings_dir) self.assertTrue('d41d8cd98f00b204e9800998ecf8427e' in out.getvalue()) self.assertTrue('Partition' not in out.getvalue()) def test_print_obj_policy_name_mismatch(self): out = StringIO() with mock.patch('sys.stdout', out): print_obj(self.datafile, policy_name='two', swift_dir=self.testdir) ring_alert_msg = 'Warning: Ring does not match policy!' self.assertTrue(ring_alert_msg in out.getvalue()) def test_valid_etag(self): out = StringIO() with mock.patch('sys.stdout', out): print_obj(self.datafile) self.assertTrue('ETag: d41d8cd98f00b204e9800998ecf8427e (valid)' in out.getvalue()) def test_invalid_etag(self): with open(self.datafile, 'wb') as fp: md = {'name': '/AUTH_admin/c/obj', 'Content-Type': 'application/octet-stream', 'ETag': 'badetag', 'Content-Length': 0} write_metadata(fp, md) out = StringIO() with mock.patch('sys.stdout', out): print_obj(self.datafile) self.assertTrue('ETag: badetag doesn\'t match file hash' in out.getvalue()) def test_unchecked_etag(self): out = StringIO() with mock.patch('sys.stdout', out): print_obj(self.datafile, check_etag=False) self.assertTrue('ETag: d41d8cd98f00b204e9800998ecf8427e (not checked)' in out.getvalue()) def test_print_obj_metadata(self): self.assertRaisesMessage(ValueError, 'Metadata is None', print_obj_metadata, []) def get_metadata(items): md = dict(name='/AUTH_admin/c/dummy') md['Content-Type'] = 'application/octet-stream' md['X-Timestamp'] = 106.3 md.update(items) return md metadata = get_metadata({'X-Object-Meta-Mtime': '107.3'}) out = StringIO() with mock.patch('sys.stdout', out): print_obj_metadata(metadata) exp_out = '''Path: /AUTH_admin/c/dummy Account: AUTH_admin Container: c Object: dummy Object hash: 128fdf98bddd1b1e8695f4340e67a67a Content-Type: application/octet-stream Timestamp: 1970-01-01T00:01:46.300000 (%s) System Metadata: No metadata found User Metadata: X-Object-Meta-Mtime: 107.3 Other Metadata: No metadata found''' % ( utils.Timestamp(106.3).internal) self.assertEqual(out.getvalue().strip(), exp_out) metadata = get_metadata({ 'X-Object-Sysmeta-Mtime': '107.3', 'X-Object-Sysmeta-Name': 'Obj name', }) out = StringIO() with mock.patch('sys.stdout', out): print_obj_metadata(metadata) exp_out = '''Path: /AUTH_admin/c/dummy Account: AUTH_admin Container: c Object: dummy Object hash: 128fdf98bddd1b1e8695f4340e67a67a Content-Type: application/octet-stream Timestamp: 1970-01-01T00:01:46.300000 (%s) System Metadata: X-Object-Sysmeta-Mtime: 107.3 X-Object-Sysmeta-Name: Obj name User Metadata: No metadata found Other Metadata: No metadata found''' % ( utils.Timestamp(106.3).internal) self.assertEqual(out.getvalue().strip(), exp_out) metadata = get_metadata({ 'X-Object-Meta-Mtime': '107.3', 'X-Object-Sysmeta-Mtime': '107.3', 'X-Object-Mtime': '107.3', }) out = StringIO() with mock.patch('sys.stdout', out): print_obj_metadata(metadata) exp_out = '''Path: /AUTH_admin/c/dummy Account: AUTH_admin Container: c Object: dummy Object hash: 128fdf98bddd1b1e8695f4340e67a67a Content-Type: application/octet-stream Timestamp: 1970-01-01T00:01:46.300000 (%s) System Metadata: X-Object-Sysmeta-Mtime: 107.3 User Metadata: X-Object-Meta-Mtime: 107.3 Other Metadata: X-Object-Mtime: 107.3''' % ( utils.Timestamp(106.3).internal) self.assertEqual(out.getvalue().strip(), exp_out) metadata = get_metadata({}) out = StringIO() with mock.patch('sys.stdout', out): print_obj_metadata(metadata) exp_out = '''Path: /AUTH_admin/c/dummy Account: 
AUTH_admin Container: c Object: dummy Object hash: 128fdf98bddd1b1e8695f4340e67a67a Content-Type: application/octet-stream Timestamp: 1970-01-01T00:01:46.300000 (%s) System Metadata: No metadata found User Metadata: No metadata found Other Metadata: No metadata found''' % ( utils.Timestamp(106.3).internal) self.assertEqual(out.getvalue().strip(), exp_out) metadata = get_metadata({'X-Object-Meta-Mtime': '107.3'}) metadata['name'] = '/a-s' self.assertRaisesMessage(ValueError, 'Path is invalid', print_obj_metadata, metadata) metadata = get_metadata({'X-Object-Meta-Mtime': '107.3'}) del metadata['name'] out = StringIO() with mock.patch('sys.stdout', out): print_obj_metadata(metadata) exp_out = '''Path: Not found in metadata Content-Type: application/octet-stream Timestamp: 1970-01-01T00:01:46.300000 (%s) System Metadata: No metadata found User Metadata: X-Object-Meta-Mtime: 107.3 Other Metadata: No metadata found''' % ( utils.Timestamp(106.3).internal) self.assertEqual(out.getvalue().strip(), exp_out) metadata = get_metadata({'X-Object-Meta-Mtime': '107.3'}) del metadata['Content-Type'] out = StringIO() with mock.patch('sys.stdout', out): print_obj_metadata(metadata) exp_out = '''Path: /AUTH_admin/c/dummy Account: AUTH_admin Container: c Object: dummy Object hash: 128fdf98bddd1b1e8695f4340e67a67a Content-Type: Not found in metadata Timestamp: 1970-01-01T00:01:46.300000 (%s) System Metadata: No metadata found User Metadata: X-Object-Meta-Mtime: 107.3 Other Metadata: No metadata found''' % ( utils.Timestamp(106.3).internal) self.assertEqual(out.getvalue().strip(), exp_out) metadata = get_metadata({'X-Object-Meta-Mtime': '107.3'}) del metadata['X-Timestamp'] out = StringIO() with mock.patch('sys.stdout', out): print_obj_metadata(metadata) exp_out = '''Path: /AUTH_admin/c/dummy Account: AUTH_admin Container: c Object: dummy Object hash: 128fdf98bddd1b1e8695f4340e67a67a Content-Type: application/octet-stream Timestamp: Not found in metadata System Metadata: No metadata found User Metadata: X-Object-Meta-Mtime: 107.3 Other Metadata: No metadata found''' self.assertEqual(out.getvalue().strip(), exp_out) class TestPrintObjWeirdPath(TestPrintObjFullMeta): def setUp(self): super(TestPrintObjWeirdPath, self).setUp() # device name is objects-0 instead of sda, this is weird. self.datafile = os.path.join(self.testdir, 'objects-0', 'objects-1', '1', 'ea8', 'db4449e025aca992307c7c804a67eea8', '1402017884.18202.data') utils.mkdirs(os.path.dirname(self.datafile)) with open(self.datafile, 'wb') as fp: md = {'name': '/AUTH_admin/c/obj', 'Content-Type': 'application/octet-stream', 'ETag': 'd41d8cd98f00b204e9800998ecf8427e', 'Content-Length': 0} write_metadata(fp, md) swift-2.7.1/test/unit/cli/test_recon.py0000664000567000056710000010621413024044354021245 0ustar jenkinsjenkins00000000000000# Copyright (c) 2013 Christian Schwede # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
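"""Tests for swift.cli.recon"""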
import json import mock import os import re import tempfile import time import unittest import shutil import sys from eventlet.green import urllib2, socket from six import StringIO from six.moves import urllib from swift.cli import recon from swift.common import utils from swift.common.ring import builder from swift.common.ring import utils as ring_utils from swift.common.storage_policy import StoragePolicy, POLICIES from test.unit import patch_policies class TestHelpers(unittest.TestCase): def test_seconds2timeunit(self): self.assertEqual(recon.seconds2timeunit(10), (10, 'seconds')) self.assertEqual(recon.seconds2timeunit(600), (10, 'minutes')) self.assertEqual(recon.seconds2timeunit(36000), (10, 'hours')) self.assertEqual(recon.seconds2timeunit(60 * 60 * 24 * 10), (10, 'days')) def test_size_suffix(self): self.assertEqual(recon.size_suffix(5 * 10 ** 2), '500 bytes') self.assertEqual(recon.size_suffix(5 * 10 ** 3), '5 kB') self.assertEqual(recon.size_suffix(5 * 10 ** 6), '5 MB') self.assertEqual(recon.size_suffix(5 * 10 ** 9), '5 GB') self.assertEqual(recon.size_suffix(5 * 10 ** 12), '5 TB') self.assertEqual(recon.size_suffix(5 * 10 ** 15), '5 PB') self.assertEqual(recon.size_suffix(5 * 10 ** 18), '5 EB') self.assertEqual(recon.size_suffix(5 * 10 ** 21), '5 ZB') class TestScout(unittest.TestCase): def setUp(self, *_args, **_kwargs): self.scout_instance = recon.Scout("type", suppress_errors=True) self.url = 'http://127.0.0.1:8080/recon/type' self.server_type_url = 'http://127.0.0.1:8080/' @mock.patch('eventlet.green.urllib2.urlopen') def test_scout_ok(self, mock_urlopen): mock_urlopen.return_value.read = lambda: json.dumps([]) url, content, status, ts_start, ts_end = self.scout_instance.scout( ("127.0.0.1", "8080")) self.assertEqual(url, self.url) self.assertEqual(content, []) self.assertEqual(status, 200) @mock.patch('eventlet.green.urllib2.urlopen') def test_scout_url_error(self, mock_urlopen): mock_urlopen.side_effect = urllib2.URLError("") url, content, status, ts_start, ts_end = self.scout_instance.scout( ("127.0.0.1", "8080")) self.assertTrue(isinstance(content, urllib2.URLError)) self.assertEqual(url, self.url) self.assertEqual(status, -1) @mock.patch('eventlet.green.urllib2.urlopen') def test_scout_http_error(self, mock_urlopen): mock_urlopen.side_effect = urllib2.HTTPError( self.url, 404, "Internal error", None, None) url, content, status, ts_start, ts_end = self.scout_instance.scout( ("127.0.0.1", "8080")) self.assertEqual(url, self.url) self.assertTrue(isinstance(content, urllib2.HTTPError)) self.assertEqual(status, 404) @mock.patch('eventlet.green.urllib2.urlopen') def test_scout_socket_timeout(self, mock_urlopen): mock_urlopen.side_effect = socket.timeout("timeout") url, content, status, ts_start, ts_end = self.scout_instance.scout( ("127.0.0.1", "8080")) self.assertTrue(isinstance(content, socket.timeout)) self.assertEqual(url, self.url) self.assertEqual(status, -1) @mock.patch('eventlet.green.urllib2.urlopen') def test_scout_server_type_ok(self, mock_urlopen): def getheader(name): d = {'Server': 'server-type'} return d.get(name) mock_urlopen.return_value.info.return_value.getheader = getheader url, content, status = self.scout_instance.scout_server_type( ("127.0.0.1", "8080")) self.assertEqual(url, self.server_type_url) self.assertEqual(content, 'server-type') self.assertEqual(status, 200) @mock.patch('eventlet.green.urllib2.urlopen') def test_scout_server_type_url_error(self, mock_urlopen): mock_urlopen.side_effect = urllib2.URLError("") url, content, status = 
self.scout_instance.scout_server_type( ("127.0.0.1", "8080")) self.assertTrue(isinstance(content, urllib2.URLError)) self.assertEqual(url, self.server_type_url) self.assertEqual(status, -1) @mock.patch('eventlet.green.urllib2.urlopen') def test_scout_server_type_http_error(self, mock_urlopen): mock_urlopen.side_effect = urllib2.HTTPError( self.server_type_url, 404, "Internal error", None, None) url, content, status = self.scout_instance.scout_server_type( ("127.0.0.1", "8080")) self.assertEqual(url, self.server_type_url) self.assertTrue(isinstance(content, urllib2.HTTPError)) self.assertEqual(status, 404) @mock.patch('eventlet.green.urllib2.urlopen') def test_scout_server_type_socket_timeout(self, mock_urlopen): mock_urlopen.side_effect = socket.timeout("timeout") url, content, status = self.scout_instance.scout_server_type( ("127.0.0.1", "8080")) self.assertTrue(isinstance(content, socket.timeout)) self.assertEqual(url, self.server_type_url) self.assertEqual(status, -1) @patch_policies class TestRecon(unittest.TestCase): def setUp(self, *_args, **_kwargs): self.recon_instance = recon.SwiftRecon() self.swift_dir = tempfile.mkdtemp() self.ring_name = POLICIES.legacy.ring_name self.tmpfile_name = os.path.join( self.swift_dir, self.ring_name + '.ring.gz') self.ring_name2 = POLICIES[1].ring_name self.tmpfile_name2 = os.path.join( self.swift_dir, self.ring_name2 + '.ring.gz') utils.HASH_PATH_SUFFIX = 'endcap' utils.HASH_PATH_PREFIX = 'startcap' def tearDown(self, *_args, **_kwargs): shutil.rmtree(self.swift_dir, ignore_errors=True) def _make_object_rings(self): ringbuilder = builder.RingBuilder(2, 3, 1) devs = [ 'r0z0-127.0.0.1:10000/sda1', 'r0z1-127.0.0.1:10001/sda1', 'r1z0-127.0.0.1:10002/sda1', 'r1z1-127.0.0.1:10003/sda1', ] for raw_dev_str in devs: dev = ring_utils.parse_add_value(raw_dev_str) dev['weight'] = 1.0 ringbuilder.add_dev(dev) ringbuilder.rebalance() ringbuilder.get_ring().save(self.tmpfile_name) ringbuilder = builder.RingBuilder(2, 2, 1) devs = [ 'r0z0-127.0.0.1:10000/sda1', 'r0z1-127.0.0.2:10004/sda1', ] for raw_dev_str in devs: dev = ring_utils.parse_add_value(raw_dev_str) dev['weight'] = 1.0 ringbuilder.add_dev(dev) ringbuilder.rebalance() ringbuilder.get_ring().save(self.tmpfile_name2) def test_gen_stats(self): stats = self.recon_instance._gen_stats((1, 4, 10, None), 'Sample') self.assertEqual(stats.get('name'), 'Sample') self.assertEqual(stats.get('average'), 5.0) self.assertEqual(stats.get('high'), 10) self.assertEqual(stats.get('reported'), 3) self.assertEqual(stats.get('low'), 1) self.assertEqual(stats.get('total'), 15) self.assertEqual(stats.get('number_none'), 1) self.assertEqual(stats.get('perc_none'), 25.0) def test_ptime(self): with mock.patch('time.gmtime') as mock_gmtime: mock_gmtime.return_value = time.struct_time( (2013, 12, 17, 10, 0, 0, 1, 351, 0)) timestamp = self.recon_instance._ptime(1387274400) self.assertEqual(timestamp, "2013-12-17 10:00:00") mock_gmtime.assert_called_with(1387274400) timestamp2 = self.recon_instance._ptime() self.assertEqual(timestamp2, "2013-12-17 10:00:00") mock_gmtime.assert_called_with() def test_get_hosts(self): self._make_object_rings() ips = self.recon_instance.get_hosts( None, None, self.swift_dir, [self.ring_name]) self.assertEqual( set([('127.0.0.1', 10000), ('127.0.0.1', 10001), ('127.0.0.1', 10002), ('127.0.0.1', 10003)]), ips) ips = self.recon_instance.get_hosts( 0, None, self.swift_dir, [self.ring_name]) self.assertEqual( set([('127.0.0.1', 10000), ('127.0.0.1', 10001)]), ips) ips = self.recon_instance.get_hosts( 1, None, 
self.swift_dir, [self.ring_name]) self.assertEqual( set([('127.0.0.1', 10002), ('127.0.0.1', 10003)]), ips) ips = self.recon_instance.get_hosts( 0, 0, self.swift_dir, [self.ring_name]) self.assertEqual(set([('127.0.0.1', 10000)]), ips) ips = self.recon_instance.get_hosts( 1, 1, self.swift_dir, [self.ring_name]) self.assertEqual(set([('127.0.0.1', 10003)]), ips) ips = self.recon_instance.get_hosts( None, None, self.swift_dir, [self.ring_name, self.ring_name2]) self.assertEqual( set([('127.0.0.1', 10000), ('127.0.0.1', 10001), ('127.0.0.1', 10002), ('127.0.0.1', 10003), ('127.0.0.2', 10004)]), ips) ips = self.recon_instance.get_hosts( 0, None, self.swift_dir, [self.ring_name, self.ring_name2]) self.assertEqual( set([('127.0.0.1', 10000), ('127.0.0.1', 10001), ('127.0.0.2', 10004)]), ips) ips = self.recon_instance.get_hosts( 1, None, self.swift_dir, [self.ring_name, self.ring_name2]) self.assertEqual( set([('127.0.0.1', 10002), ('127.0.0.1', 10003)]), ips) ips = self.recon_instance.get_hosts( 0, 1, self.swift_dir, [self.ring_name, self.ring_name2]) self.assertEqual(set([('127.0.0.1', 10001), ('127.0.0.2', 10004)]), ips) def test_get_ringmd5(self): for server_type in ('account', 'container', 'object', 'object-1'): ring_name = '%s.ring.gz' % server_type ring_file = os.path.join(self.swift_dir, ring_name) open(ring_file, 'w') empty_file_hash = 'd41d8cd98f00b204e9800998ecf8427e' hosts = [("127.0.0.1", "8080")] with mock.patch('swift.cli.recon.Scout') as mock_scout: scout_instance = mock.MagicMock() url = 'http://%s:%s/recon/ringmd5' % hosts[0] response = { '/etc/swift/account.ring.gz': empty_file_hash, '/etc/swift/container.ring.gz': empty_file_hash, '/etc/swift/object.ring.gz': empty_file_hash, '/etc/swift/object-1.ring.gz': empty_file_hash, } status = 200 scout_instance.scout.return_value = (url, response, status, 0, 0) mock_scout.return_value = scout_instance stdout = StringIO() mock_hash = mock.MagicMock() with mock.patch('sys.stdout', new=stdout), \ mock.patch('swift.cli.recon.md5', new=mock_hash): mock_hash.return_value.hexdigest.return_value = \ empty_file_hash self.recon_instance.get_ringmd5(hosts, self.swift_dir) output = stdout.getvalue() expected = '1/1 hosts matched' for line in output.splitlines(): if '!!' 
in line: self.fail('Unexpected Error in output: %r' % line) if expected in line: break else: self.fail('Did not find expected substring %r ' 'in output:\n%s' % (expected, output)) for ring in ('account', 'container', 'object', 'object-1'): os.remove(os.path.join(self.swift_dir, "%s.ring.gz" % ring)) def test_quarantine_check(self): hosts = [('127.0.0.1', 6010), ('127.0.0.1', 6020), ('127.0.0.1', 6030), ('127.0.0.1', 6040), ('127.0.0.1', 6050)] # sample json response from http://:/recon/quarantined responses = {6010: {'accounts': 0, 'containers': 0, 'objects': 1, 'policies': {'0': {'objects': 0}, '1': {'objects': 1}}}, 6020: {'accounts': 1, 'containers': 1, 'objects': 3, 'policies': {'0': {'objects': 1}, '1': {'objects': 2}}}, 6030: {'accounts': 2, 'containers': 2, 'objects': 5, 'policies': {'0': {'objects': 2}, '1': {'objects': 3}}}, 6040: {'accounts': 3, 'containers': 3, 'objects': 7, 'policies': {'0': {'objects': 3}, '1': {'objects': 4}}}, # A server without storage policies enabled 6050: {'accounts': 0, 'containers': 0, 'objects': 4}} # expected = {'objects_0': (0, 3, 1.5, 6, 0.0, 0, 4), 'objects_1': (1, 4, 2.5, 10, 0.0, 0, 4), 'objects': (1, 7, 4.0, 20, 0.0, 0, 5), 'accounts': (0, 3, 1.2, 6, 0.0, 0, 5), 'containers': (0, 3, 1.2, 6, 0.0, 0, 5)} def mock_scout_quarantine(app, host): url = 'http://%s:%s/recon/quarantined' % host response = responses[host[1]] status = 200 return url, response, status, 0, 0 stdout = StringIO() with mock.patch('swift.cli.recon.Scout.scout', mock_scout_quarantine), \ mock.patch('sys.stdout', new=stdout): self.recon_instance.quarantine_check(hosts) output = stdout.getvalue() r = re.compile("\[quarantined_(.*)\](.*)") for line in output.splitlines(): m = r.match(line) if m: ex = expected.pop(m.group(1)) self.assertEqual(m.group(2), " low: %s, high: %s, avg: %s, total: %s," " Failed: %s%%, no_result: %s, reported: %s" % ex) self.assertFalse(expected) def test_drive_audit_check(self): hosts = [('127.0.0.1', 6010), ('127.0.0.1', 6020), ('127.0.0.1', 6030), ('127.0.0.1', 6040)] # sample json response from http://:/recon/driveaudit responses = {6010: {'drive_audit_errors': 15}, 6020: {'drive_audit_errors': 0}, 6030: {'drive_audit_errors': 257}, 6040: {'drive_audit_errors': 56}} # expected = (0, 257, 82.0, 328, 0.0, 0, 4) def mock_scout_driveaudit(app, host): url = 'http://%s:%s/recon/driveaudit' % host response = responses[host[1]] status = 200 return url, response, status, 0, 0 stdout = StringIO() with mock.patch('swift.cli.recon.Scout.scout', mock_scout_driveaudit), \ mock.patch('sys.stdout', new=stdout): self.recon_instance.driveaudit_check(hosts) output = stdout.getvalue() r = re.compile("\[drive_audit_errors(.*)\](.*)") lines = output.splitlines() self.assertTrue(lines) for line in lines: m = r.match(line) if m: self.assertEqual(m.group(2), " low: %s, high: %s, avg: %s, total: %s," " Failed: %s%%, no_result: %s, reported: %s" % expected) def test_get_ring_names(self): self.recon_instance.server_type = 'not-object' self.assertEqual(self.recon_instance._get_ring_names(), ['not-object']) self.recon_instance.server_type = 'object' with patch_policies([StoragePolicy(0, 'zero', is_default=True)]): self.assertEqual(self.recon_instance._get_ring_names(), ['object']) with patch_policies([StoragePolicy(0, 'zero', is_default=True), StoragePolicy(1, 'one')]): self.assertEqual(self.recon_instance._get_ring_names(), ['object', 'object-1']) self.assertEqual(self.recon_instance._get_ring_names('0'), ['object']) self.assertEqual(self.recon_instance._get_ring_names('zero'), 
['object']) self.assertEqual(self.recon_instance._get_ring_names('1'), ['object-1']) self.assertEqual(self.recon_instance._get_ring_names('one'), ['object-1']) self.assertEqual(self.recon_instance._get_ring_names('3'), []) self.assertEqual(self.recon_instance._get_ring_names('wrong'), []) def test_main_object_hosts_default_all_policies(self): self._make_object_rings() discovered_hosts = set() def server_type_check(hosts): for h in hosts: discovered_hosts.add(h) self.recon_instance.server_type_check = server_type_check with mock.patch.object(sys, 'argv', [ "prog", "object", "--swiftdir=%s" % self.swift_dir, "--validate-servers"]): self.recon_instance.main() expected = set([ ('127.0.0.1', 10000), ('127.0.0.1', 10001), ('127.0.0.1', 10002), ('127.0.0.1', 10003), ('127.0.0.2', 10004), ]) self.assertEqual(expected, discovered_hosts) def test_main_object_hosts_default_unu(self): self._make_object_rings() discovered_hosts = set() def server_type_check(hosts): for h in hosts: discovered_hosts.add(h) self.recon_instance.server_type_check = server_type_check with mock.patch.object(sys, 'argv', [ "prog", "object", "--swiftdir=%s" % self.swift_dir, "--validate-servers", '--policy=unu']): self.recon_instance.main() expected = set([ ('127.0.0.1', 10000), ('127.0.0.2', 10004), ]) self.assertEqual(expected, discovered_hosts) def test_main_object_hosts_default_invalid(self): self._make_object_rings() stdout = StringIO() with mock.patch.object(sys, 'argv', [ "prog", "object", "--swiftdir=%s" % self.swift_dir, "--validate-servers", '--policy=invalid']),\ mock.patch('sys.stdout', stdout): self.assertRaises(SystemExit, recon.main) self.assertIn('Invalid Storage Policy', stdout.getvalue()) class TestReconCommands(unittest.TestCase): def setUp(self): self.recon = recon.SwiftRecon() self.hosts = set([('127.0.0.1', 10000)]) def mock_responses(self, resps): def fake_urlopen(url, timeout): scheme, netloc, path, _, _, _ = urllib.parse.urlparse(url) self.assertEqual(scheme, 'http') # can't handle anything else self.assertTrue(path.startswith('/recon/')) if ':' in netloc: host, port = netloc.split(':', 1) port = int(port) else: host = netloc port = 80 response_body = resps[(host, port, path[7:])] resp = mock.MagicMock() resp.read = mock.MagicMock(side_effect=[response_body]) return resp return mock.patch('eventlet.green.urllib2.urlopen', fake_urlopen) def test_server_type_check(self): hosts = [('127.0.0.1', 6010), ('127.0.0.1', 6011), ('127.0.0.1', 6012)] # sample json response from http://:/ responses = {6010: 'object-server', 6011: 'container-server', 6012: 'account-server'} def mock_scout_server_type(app, host): url = 'http://%s:%s/' % (host[0], host[1]) response = responses[host[1]] status = 200 return url, response, status stdout = StringIO() res_object = 'Invalid: http://127.0.0.1:6010/ is object-server' res_container = 'Invalid: http://127.0.0.1:6011/ is container-server' res_account = 'Invalid: http://127.0.0.1:6012/ is account-server' valid = "1/1 hosts ok, 0 error[s] while checking hosts." 
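        # server_type_check() scouts "/" on each host and compares the Server
        # header it gets back (mocked above as object-/container-/account-
        # server) with self.recon.server_type, printing an "Invalid: ..."
        # line per mismatch plus the summary line captured in `valid`.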
# Test for object server type - default with mock.patch('swift.cli.recon.Scout.scout_server_type', mock_scout_server_type), \ mock.patch('sys.stdout', new=stdout): self.recon.server_type_check(hosts) output = stdout.getvalue() self.assertTrue(res_container in output.splitlines()) self.assertTrue(res_account in output.splitlines()) stdout.truncate(0) # Test ok for object server type - default with mock.patch('swift.cli.recon.Scout.scout_server_type', mock_scout_server_type), \ mock.patch('sys.stdout', new=stdout): self.recon.server_type_check([hosts[0]]) output = stdout.getvalue() self.assertTrue(valid in output.splitlines()) stdout.truncate(0) # Test for account server type with mock.patch('swift.cli.recon.Scout.scout_server_type', mock_scout_server_type), \ mock.patch('sys.stdout', new=stdout): self.recon.server_type = 'account' self.recon.server_type_check(hosts) output = stdout.getvalue() self.assertTrue(res_container in output.splitlines()) self.assertTrue(res_object in output.splitlines()) stdout.truncate(0) # Test ok for account server type with mock.patch('swift.cli.recon.Scout.scout_server_type', mock_scout_server_type), \ mock.patch('sys.stdout', new=stdout): self.recon.server_type = 'account' self.recon.server_type_check([hosts[2]]) output = stdout.getvalue() self.assertTrue(valid in output.splitlines()) stdout.truncate(0) # Test for container server type with mock.patch('swift.cli.recon.Scout.scout_server_type', mock_scout_server_type), \ mock.patch('sys.stdout', new=stdout): self.recon.server_type = 'container' self.recon.server_type_check(hosts) output = stdout.getvalue() self.assertTrue(res_account in output.splitlines()) self.assertTrue(res_object in output.splitlines()) stdout.truncate(0) # Test ok for container server type with mock.patch('swift.cli.recon.Scout.scout_server_type', mock_scout_server_type), \ mock.patch('sys.stdout', new=stdout): self.recon.server_type = 'container' self.recon.server_type_check([hosts[1]]) output = stdout.getvalue() self.assertTrue(valid in output.splitlines()) def test_get_swiftconfmd5(self): hosts = set([('10.1.1.1', 10000), ('10.2.2.2', 10000)]) cksum = '729cf900f2876dead617d088ece7fe8c' responses = { ('10.1.1.1', 10000, 'swiftconfmd5'): json.dumps({'/etc/swift/swift.conf': cksum}), ('10.2.2.2', 10000, 'swiftconfmd5'): json.dumps({'/etc/swift/swift.conf': cksum})} printed = [] with self.mock_responses(responses): with mock.patch.object(self.recon, '_md5_file', lambda _: cksum): self.recon.get_swiftconfmd5(hosts, printfn=printed.append) output = '\n'.join(printed) + '\n' self.assertTrue("2/2 hosts matched" in output) def test_get_swiftconfmd5_mismatch(self): hosts = set([('10.1.1.1', 10000), ('10.2.2.2', 10000)]) cksum = '29d5912b1fcfcc1066a7f51412769c1d' responses = { ('10.1.1.1', 10000, 'swiftconfmd5'): json.dumps({'/etc/swift/swift.conf': cksum}), ('10.2.2.2', 10000, 'swiftconfmd5'): json.dumps({'/etc/swift/swift.conf': 'bogus'})} printed = [] with self.mock_responses(responses): with mock.patch.object(self.recon, '_md5_file', lambda _: cksum): self.recon.get_swiftconfmd5(hosts, printfn=printed.append) output = '\n'.join(printed) + '\n' self.assertTrue("1/2 hosts matched" in output) self.assertTrue("http://10.2.2.2:10000/recon/swiftconfmd5 (bogus) " "doesn't match on disk md5sum" in output) def test_object_auditor_check(self): # Recon middleware response from an object server def dummy_request(*args, **kwargs): values = { 'passes': 0, 'errors': 0, 'audit_time': 0, 'start_time': 0, 'quarantined': 0, 'bytes_processed': 0} return 
[('http://127.0.0.1:6010/recon/auditor/object', { 'object_auditor_stats_ALL': values, 'object_auditor_stats_ZBF': values, }, 200, 0, 0)] response = {} def catch_print(computed): response[computed.get('name')] = computed cli = recon.SwiftRecon() cli.pool.imap = dummy_request cli._print_stats = catch_print cli.object_auditor_check([('127.0.0.1', 6010)]) # Now check that output contains all keys and names keys = ['average', 'number_none', 'high', 'reported', 'low', 'total', 'perc_none'] names = [ 'ALL_audit_time_last_path', 'ALL_quarantined_last_path', 'ALL_errors_last_path', 'ALL_passes_last_path', 'ALL_bytes_processed_last_path', 'ZBF_audit_time_last_path', 'ZBF_quarantined_last_path', 'ZBF_errors_last_path', 'ZBF_bytes_processed_last_path' ] for name in names: computed = response.get(name) self.assertTrue(computed) for key in keys: self.assertTrue(key in computed) def test_disk_usage(self): def dummy_request(*args, **kwargs): return [('http://127.0.0.1:6010/recon/diskusage', [ {"device": "sdb1", "mounted": True, "avail": 10, "used": 90, "size": 100}, {"device": "sdc1", "mounted": True, "avail": 15, "used": 85, "size": 100}, {"device": "sdd1", "mounted": True, "avail": 15, "used": 85, "size": 100}], 200, 0, 0)] cli = recon.SwiftRecon() cli.pool.imap = dummy_request default_calls = [ mock.call('Distribution Graph:'), mock.call(' 85% 2 **********************************' + '***********************************'), mock.call(' 90% 1 **********************************'), mock.call('Disk usage: space used: 260 of 300'), mock.call('Disk usage: space free: 40 of 300'), mock.call('Disk usage: lowest: 85.0%, ' + 'highest: 90.0%, avg: 86.6666666667%'), mock.call('=' * 79), ] with mock.patch('six.moves.builtins.print') as mock_print: cli.disk_usage([('127.0.0.1', 6010)]) mock_print.assert_has_calls(default_calls) with mock.patch('six.moves.builtins.print') as mock_print: expected_calls = default_calls + [ mock.call('LOWEST 5'), mock.call('85.00% 127.0.0.1 sdc1'), mock.call('85.00% 127.0.0.1 sdd1'), mock.call('90.00% 127.0.0.1 sdb1') ] cli.disk_usage([('127.0.0.1', 6010)], 0, 5) mock_print.assert_has_calls(expected_calls) with mock.patch('six.moves.builtins.print') as mock_print: expected_calls = default_calls + [ mock.call('TOP 5'), mock.call('90.00% 127.0.0.1 sdb1'), mock.call('85.00% 127.0.0.1 sdc1'), mock.call('85.00% 127.0.0.1 sdd1') ] cli.disk_usage([('127.0.0.1', 6010)], 5, 0) mock_print.assert_has_calls(expected_calls) @mock.patch('six.moves.builtins.print') @mock.patch('time.time') def test_replication_check(self, mock_now, mock_print): now = 1430000000.0 def dummy_request(*args, **kwargs): return [ ('http://127.0.0.1:6011/recon/replication/container', {"replication_last": now, "replication_stats": { "no_change": 2, "rsync": 0, "success": 3, "failure": 1, "attempted": 0, "ts_repl": 0, "remove": 0, "remote_merge": 0, "diff_capped": 0, "start": now, "hashmatch": 0, "diff": 0, "empty": 0}, "replication_time": 42}, 200, 0, 0), ('http://127.0.0.1:6021/recon/replication/container', {"replication_last": now, "replication_stats": { "no_change": 0, "rsync": 0, "success": 1, "failure": 0, "attempted": 0, "ts_repl": 0, "remove": 0, "remote_merge": 0, "diff_capped": 0, "start": now, "hashmatch": 0, "diff": 0, "empty": 0}, "replication_time": 23}, 200, 0, 0), ] cli = recon.SwiftRecon() cli.pool.imap = dummy_request default_calls = [ mock.call('[replication_failure] low: 0, high: 1, avg: 0.5, ' + 'total: 1, Failed: 0.0%, no_result: 0, reported: 2'), mock.call('[replication_success] low: 1, high: 3, avg: 2.0, 
' + 'total: 4, Failed: 0.0%, no_result: 0, reported: 2'), mock.call('[replication_time] low: 23, high: 42, avg: 32.5, ' + 'total: 65, Failed: 0.0%, no_result: 0, reported: 2'), mock.call('[replication_attempted] low: 0, high: 0, avg: 0.0, ' + 'total: 0, Failed: 0.0%, no_result: 0, reported: 2'), mock.call('Oldest completion was 2015-04-25 22:13:20 ' + '(42 seconds ago) by 127.0.0.1:6011.'), mock.call('Most recent completion was 2015-04-25 22:13:20 ' + '(42 seconds ago) by 127.0.0.1:6011.'), ] mock_now.return_value = now + 42 cli.replication_check([('127.0.0.1', 6011), ('127.0.0.1', 6021)]) # We need any_order=True because the order of calls depends on the dict # that is returned from the recon middleware, thus can't rely on it mock_print.assert_has_calls(default_calls, any_order=True) @mock.patch('six.moves.builtins.print') @mock.patch('time.time') def test_load_check(self, mock_now, mock_print): now = 1430000000.0 def dummy_request(*args, **kwargs): return [ ('http://127.0.0.1:6010/recon/load', {"1m": 0.2, "5m": 0.4, "15m": 0.25, "processes": 10000, "tasks": "1/128"}, 200, 0, 0), ('http://127.0.0.1:6020/recon/load', {"1m": 0.4, "5m": 0.8, "15m": 0.75, "processes": 9000, "tasks": "1/200"}, 200, 0, 0), ] cli = recon.SwiftRecon() cli.pool.imap = dummy_request default_calls = [ mock.call('[5m_load_avg] low: 0, high: 0, avg: 0.6, total: 1, ' + 'Failed: 0.0%, no_result: 0, reported: 2'), mock.call('[15m_load_avg] low: 0, high: 0, avg: 0.5, total: 1, ' + 'Failed: 0.0%, no_result: 0, reported: 2'), mock.call('[1m_load_avg] low: 0, high: 0, avg: 0.3, total: 0, ' + 'Failed: 0.0%, no_result: 0, reported: 2'), ] mock_now.return_value = now + 42 cli.load_check([('127.0.0.1', 6010), ('127.0.0.1', 6020)]) # We need any_order=True because the order of calls depends on the dict # that is returned from the recon middleware, thus can't rely on it mock_print.assert_has_calls(default_calls, any_order=True) @mock.patch('six.moves.builtins.print') @mock.patch('time.time') def test_time_check(self, mock_now, mock_print): now = 1430000000.0 mock_now.return_value = now def dummy_request(*args, **kwargs): return [ ('http://127.0.0.1:6010/recon/load', now, 200, now - 0.5, now + 0.5), ('http://127.0.0.1:6020/recon/load', now, 200, now, now), ] cli = recon.SwiftRecon() cli.pool.imap = dummy_request default_calls = [ mock.call('2/2 hosts matched, 0 error[s] while checking hosts.') ] cli.time_check([('127.0.0.1', 6010), ('127.0.0.1', 6020)]) # We need any_order=True because the order of calls depends on the dict # that is returned from the recon middleware, thus can't rely on it mock_print.assert_has_calls(default_calls, any_order=True) @mock.patch('six.moves.builtins.print') @mock.patch('time.time') def test_time_check_mismatch(self, mock_now, mock_print): now = 1430000000.0 mock_now.return_value = now def dummy_request(*args, **kwargs): return [ ('http://127.0.0.1:6010/recon/time', now, 200, now + 0.5, now + 1.3), ('http://127.0.0.1:6020/recon/time', now, 200, now, now), ] cli = recon.SwiftRecon() cli.pool.imap = dummy_request default_calls = [ mock.call("!! 
http://127.0.0.1:6010/recon/time current time is " "2015-04-25 22:13:21, but remote is " "2015-04-25 22:13:20, differs by 1.30 sec"), mock.call('1/2 hosts matched, 0 error[s] while checking hosts.'), ] cli.time_check([('127.0.0.1', 6010), ('127.0.0.1', 6020)]) # We need any_order=True because the order of calls depends on the dict # that is returned from the recon middleware, thus can't rely on it mock_print.assert_has_calls(default_calls, any_order=True) swift-2.7.1/test/unit/account/0000775000567000056710000000000013024044470017406 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/account/test_backend.py0000664000567000056710000021423613024044354022417 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Tests for swift.account.backend """ from collections import defaultdict import hashlib import json import unittest import pickle import os from time import sleep, time from uuid import uuid4 from tempfile import mkdtemp from shutil import rmtree import sqlite3 import itertools from contextlib import contextmanager import random import mock from swift.account.backend import AccountBroker from swift.common.utils import Timestamp from test.unit import patch_policies, with_tempdir, make_timestamp_iter from swift.common.db import DatabaseConnectionError from swift.common.storage_policy import StoragePolicy, POLICIES from test.unit.common import test_db @patch_policies class TestAccountBroker(unittest.TestCase): """Tests for AccountBroker""" def test_creation(self): # Test AccountBroker.__init__ broker = AccountBroker(':memory:', account='a') self.assertEqual(broker.db_file, ':memory:') try: with broker.get() as conn: pass except DatabaseConnectionError as e: self.assertTrue(hasattr(e, 'path')) self.assertEqual(e.path, ':memory:') self.assertTrue(hasattr(e, 'msg')) self.assertEqual(e.msg, "DB doesn't exist") except Exception as e: self.fail("Unexpected exception raised: %r" % e) else: self.fail("Expected a DatabaseConnectionError exception") broker.initialize(Timestamp('1').internal) with broker.get() as conn: curs = conn.cursor() curs.execute('SELECT 1') self.assertEqual(curs.fetchall()[0][0], 1) def test_initialize_fail(self): broker = AccountBroker(':memory:') with self.assertRaises(ValueError) as cm: broker.initialize(Timestamp('1').internal) self.assertEqual(str(cm.exception), 'Attempting to create a new' ' database with no account set') def test_exception(self): # Test AccountBroker throwing a conn away after exception first_conn = None broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) with broker.get() as conn: first_conn = conn try: with broker.get() as conn: self.assertEqual(first_conn, conn) raise Exception('OMG') except Exception: pass self.assertTrue(broker.conn is None) def test_empty(self): # Test AccountBroker.empty broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) self.assertTrue(broker.empty()) broker.put_container('o', 
Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) self.assertTrue(not broker.empty()) sleep(.00001) broker.put_container('o', 0, Timestamp(time()).internal, 0, 0, POLICIES.default.idx) self.assertTrue(broker.empty()) def test_is_status_deleted(self): # Test AccountBroker.is_status_deleted broker1 = AccountBroker(':memory:', account='a') broker1.initialize(Timestamp(time()).internal) self.assertTrue(not broker1.is_status_deleted()) broker1.delete_db(Timestamp(time()).internal) self.assertTrue(broker1.is_status_deleted()) broker2 = AccountBroker(':memory:', account='a') broker2.initialize(Timestamp(time()).internal) # Set delete_timestamp greater than put_timestamp broker2.merge_timestamps( time(), Timestamp(time()).internal, Timestamp(time() + 999).internal) self.assertTrue(broker2.is_status_deleted()) def test_reclaim(self): broker = AccountBroker(':memory:', account='test_account') broker.initialize(Timestamp('1').internal) broker.put_container('c', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 0").fetchone()[0], 1) self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 1").fetchone()[0], 0) broker.reclaim(Timestamp(time() - 999).internal, time()) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 0").fetchone()[0], 1) self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 1").fetchone()[0], 0) sleep(.00001) broker.put_container('c', 0, Timestamp(time()).internal, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 0").fetchone()[0], 0) self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 1").fetchone()[0], 1) broker.reclaim(Timestamp(time() - 999).internal, time()) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 0").fetchone()[0], 0) self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 1").fetchone()[0], 1) sleep(.00001) broker.reclaim(Timestamp(time()).internal, time()) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 0").fetchone()[0], 0) self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 1").fetchone()[0], 0) # Test reclaim after deletion. 
Create 3 test containers broker.put_container('x', 0, 0, 0, 0, POLICIES.default.idx) broker.put_container('y', 0, 0, 0, 0, POLICIES.default.idx) broker.put_container('z', 0, 0, 0, 0, POLICIES.default.idx) broker.reclaim(Timestamp(time()).internal, time()) # Now delete the account broker.delete_db(Timestamp(time()).internal) broker.reclaim(Timestamp(time()).internal, time()) def test_delete_db_status(self): ts = (Timestamp(t).internal for t in itertools.count(int(time()))) start = next(ts) broker = AccountBroker(':memory:', account='a') broker.initialize(start) info = broker.get_info() self.assertEqual(info['put_timestamp'], Timestamp(start).internal) self.assertTrue(Timestamp(info['created_at']) >= start) self.assertEqual(info['delete_timestamp'], '0') if self.__class__ == TestAccountBrokerBeforeMetadata: self.assertEqual(info['status_changed_at'], '0') else: self.assertEqual(info['status_changed_at'], Timestamp(start).internal) # delete it delete_timestamp = next(ts) broker.delete_db(delete_timestamp) info = broker.get_info() self.assertEqual(info['put_timestamp'], Timestamp(start).internal) self.assertTrue(Timestamp(info['created_at']) >= start) self.assertEqual(info['delete_timestamp'], delete_timestamp) self.assertEqual(info['status_changed_at'], delete_timestamp) def test_delete_container(self): # Test AccountBroker.delete_container broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) broker.put_container('o', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 0").fetchone()[0], 1) self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 1").fetchone()[0], 0) sleep(.00001) broker.put_container('o', 0, Timestamp(time()).internal, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 0").fetchone()[0], 0) self.assertEqual(conn.execute( "SELECT count(*) FROM container " "WHERE deleted = 1").fetchone()[0], 1) def test_put_container(self): # Test AccountBroker.put_container broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) # Create initial container timestamp = Timestamp(time()).internal broker.put_container('"{}"', timestamp, 0, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM container").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT put_timestamp FROM container").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT deleted FROM container").fetchone()[0], 0) # Reput same event broker.put_container('"{}"', timestamp, 0, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM container").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT put_timestamp FROM container").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT deleted FROM container").fetchone()[0], 0) # Put new event sleep(.00001) timestamp = Timestamp(time()).internal broker.put_container('"{}"', timestamp, 0, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM container").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT put_timestamp FROM container").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT deleted FROM container").fetchone()[0], 0) # Put old event otimestamp = 
Timestamp(float(Timestamp(timestamp)) - 1).internal broker.put_container('"{}"', otimestamp, 0, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM container").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT put_timestamp FROM container").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT deleted FROM container").fetchone()[0], 0) # Put old delete event dtimestamp = Timestamp(float(Timestamp(timestamp)) - 1).internal broker.put_container('"{}"', 0, dtimestamp, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM container").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT put_timestamp FROM container").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT delete_timestamp FROM container").fetchone()[0], dtimestamp) self.assertEqual(conn.execute( "SELECT deleted FROM container").fetchone()[0], 0) # Put new delete event sleep(.00001) timestamp = Timestamp(time()).internal broker.put_container('"{}"', 0, timestamp, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM container").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT delete_timestamp FROM container").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT deleted FROM container").fetchone()[0], 1) # Put new event sleep(.00001) timestamp = Timestamp(time()).internal broker.put_container('"{}"', timestamp, 0, 0, 0, POLICIES.default.idx) with broker.get() as conn: self.assertEqual(conn.execute( "SELECT name FROM container").fetchone()[0], '"{}"') self.assertEqual(conn.execute( "SELECT put_timestamp FROM container").fetchone()[0], timestamp) self.assertEqual(conn.execute( "SELECT deleted FROM container").fetchone()[0], 0) def test_get_info(self): # Test AccountBroker.get_info broker = AccountBroker(':memory:', account='test1') broker.initialize(Timestamp('1').internal) info = broker.get_info() self.assertEqual(info['account'], 'test1') self.assertEqual(info['hash'], '00000000000000000000000000000000') self.assertEqual(info['put_timestamp'], Timestamp(1).internal) self.assertEqual(info['delete_timestamp'], '0') if self.__class__ == TestAccountBrokerBeforeMetadata: self.assertEqual(info['status_changed_at'], '0') else: self.assertEqual(info['status_changed_at'], Timestamp(1).internal) info = broker.get_info() self.assertEqual(info['container_count'], 0) broker.put_container('c1', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) info = broker.get_info() self.assertEqual(info['container_count'], 1) sleep(.00001) broker.put_container('c2', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) info = broker.get_info() self.assertEqual(info['container_count'], 2) sleep(.00001) broker.put_container('c2', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) info = broker.get_info() self.assertEqual(info['container_count'], 2) sleep(.00001) broker.put_container('c1', 0, Timestamp(time()).internal, 0, 0, POLICIES.default.idx) info = broker.get_info() self.assertEqual(info['container_count'], 1) sleep(.00001) broker.put_container('c2', 0, Timestamp(time()).internal, 0, 0, POLICIES.default.idx) info = broker.get_info() self.assertEqual(info['container_count'], 0) def test_list_containers_iter(self): # Test AccountBroker.list_containers_iter broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) for cont1 in range(4): for cont2 in range(125): 
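        # (these loops seed containers '0-0000' .. '3-0124' -- four prefixes
        #  with 125 names each -- and the two loops after them add
        #  '2-0051-NNNN' and '3-NNNN-0049' entries, so the listings below can
        #  exercise marker, end_marker, prefix, delimiter and reverse handling)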
broker.put_container('%d-%04d' % (cont1, cont2), Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) for cont in range(125): broker.put_container('2-0051-%04d' % cont, Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) for cont in range(125): broker.put_container('3-%04d-0049' % cont, Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) listing = broker.list_containers_iter(100, '', None, None, '') self.assertEqual(len(listing), 100) self.assertEqual(listing[0][0], '0-0000') self.assertEqual(listing[-1][0], '0-0099') listing = broker.list_containers_iter(100, '', '0-0050', None, '') self.assertEqual(len(listing), 50) self.assertEqual(listing[0][0], '0-0000') self.assertEqual(listing[-1][0], '0-0049') listing = broker.list_containers_iter(100, '0-0099', None, None, '') self.assertEqual(len(listing), 100) self.assertEqual(listing[0][0], '0-0100') self.assertEqual(listing[-1][0], '1-0074') listing = broker.list_containers_iter(55, '1-0074', None, None, '') self.assertEqual(len(listing), 55) self.assertEqual(listing[0][0], '1-0075') self.assertEqual(listing[-1][0], '2-0004') listing = broker.list_containers_iter(10, '', None, '0-01', '') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0-0100') self.assertEqual(listing[-1][0], '0-0109') listing = broker.list_containers_iter(10, '', None, '0-01', '-') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0-0100') self.assertEqual(listing[-1][0], '0-0109') listing = broker.list_containers_iter(10, '', None, '0-00', '-', reverse=True) self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0-0099') self.assertEqual(listing[-1][0], '0-0090') listing = broker.list_containers_iter(10, '', None, '0-', '-') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0-0000') self.assertEqual(listing[-1][0], '0-0009') listing = broker.list_containers_iter(10, '', None, '0-', '-', reverse=True) self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '0-0124') self.assertEqual(listing[-1][0], '0-0115') listing = broker.list_containers_iter(10, '', None, '', '-') self.assertEqual(len(listing), 4) self.assertEqual([row[0] for row in listing], ['0-', '1-', '2-', '3-']) listing = broker.list_containers_iter(10, '', None, '', '-', reverse=True) self.assertEqual(len(listing), 4) self.assertEqual([row[0] for row in listing], ['3-', '2-', '1-', '0-']) listing = broker.list_containers_iter(10, '2-', None, None, '-') self.assertEqual(len(listing), 1) self.assertEqual([row[0] for row in listing], ['3-']) listing = broker.list_containers_iter(10, '2-', None, None, '-', reverse=True) self.assertEqual(len(listing), 2) self.assertEqual([row[0] for row in listing], ['1-', '0-']) listing = broker.list_containers_iter(10, '2.', None, None, '-', reverse=True) self.assertEqual(len(listing), 3) self.assertEqual([row[0] for row in listing], ['2-', '1-', '0-']) listing = broker.list_containers_iter(10, '', None, '2', '-') self.assertEqual(len(listing), 1) self.assertEqual([row[0] for row in listing], ['2-']) listing = broker.list_containers_iter(10, '2-0050', None, '2-', '-') self.assertEqual(len(listing), 10) self.assertEqual(listing[0][0], '2-0051') self.assertEqual(listing[1][0], '2-0051-') self.assertEqual(listing[2][0], '2-0052') self.assertEqual(listing[-1][0], '2-0059') listing = broker.list_containers_iter(10, '3-0045', None, '3-', '-') self.assertEqual(len(listing), 10) self.assertEqual([row[0] for row in listing], ['3-0045-', '3-0046', '3-0046-', '3-0047', '3-0047-', '3-0048', 
'3-0048-', '3-0049', '3-0049-', '3-0050']) broker.put_container('3-0049-', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) listing = broker.list_containers_iter(10, '3-0048', None, None, None) self.assertEqual(len(listing), 10) self.assertEqual([row[0] for row in listing], ['3-0048-0049', '3-0049', '3-0049-', '3-0049-0049', '3-0050', '3-0050-0049', '3-0051', '3-0051-0049', '3-0052', '3-0052-0049']) listing = broker.list_containers_iter(10, '3-0048', None, '3-', '-') self.assertEqual(len(listing), 10) self.assertEqual([row[0] for row in listing], ['3-0048-', '3-0049', '3-0049-', '3-0050', '3-0050-', '3-0051', '3-0051-', '3-0052', '3-0052-', '3-0053']) listing = broker.list_containers_iter(10, None, None, '3-0049-', '-') self.assertEqual(len(listing), 2) self.assertEqual([row[0] for row in listing], ['3-0049-', '3-0049-0049']) def test_list_objects_iter_order_and_reverse(self): # Test ContainerBroker.list_objects_iter broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal, 0) broker.put_container( 'c1', Timestamp(0).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container( 'c10', Timestamp(0).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container( 'C1', Timestamp(0).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container( 'c2', Timestamp(0).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container( 'c3', Timestamp(0).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container( 'C4', Timestamp(0).internal, 0, 0, 0, POLICIES.default.idx) listing = broker.list_containers_iter(100, None, None, '', '', reverse=False) self.assertEqual([row[0] for row in listing], ['C1', 'C4', 'c1', 'c10', 'c2', 'c3']) listing = broker.list_containers_iter(100, None, None, '', '', reverse=True) self.assertEqual([row[0] for row in listing], ['c3', 'c2', 'c10', 'c1', 'C4', 'C1']) listing = broker.list_containers_iter(2, None, None, '', '', reverse=True) self.assertEqual([row[0] for row in listing], ['c3', 'c2']) listing = broker.list_containers_iter(100, 'c2', 'C4', '', '', reverse=True) self.assertEqual([row[0] for row in listing], ['c10', 'c1']) def test_reverse_prefix_delim(self): expectations = [ { 'containers': [ 'topdir1-subdir1,0-c1', 'topdir1-subdir1,1-c1', 'topdir1-subdir1-c1', ], 'params': { 'prefix': 'topdir1-', 'delimiter': '-', }, 'expected': [ 'topdir1-subdir1,0-', 'topdir1-subdir1,1-', 'topdir1-subdir1-', ], }, { 'containers': [ 'topdir1-subdir1,0-c1', 'topdir1-subdir1,1-c1', 'topdir1-subdir1-c1', 'topdir1-subdir1.', 'topdir1-subdir1.-c1', ], 'params': { 'prefix': 'topdir1-', 'delimiter': '-', }, 'expected': [ 'topdir1-subdir1,0-', 'topdir1-subdir1,1-', 'topdir1-subdir1-', 'topdir1-subdir1.', 'topdir1-subdir1.-', ], }, { 'containers': [ 'topdir1-subdir1-c1', 'topdir1-subdir1,0-c1', 'topdir1-subdir1,1-c1', ], 'params': { 'prefix': 'topdir1-', 'delimiter': '-', 'reverse': True, }, 'expected': [ 'topdir1-subdir1-', 'topdir1-subdir1,1-', 'topdir1-subdir1,0-', ], }, { 'containers': [ 'topdir1-subdir1.-c1', 'topdir1-subdir1.', 'topdir1-subdir1-c1', 'topdir1-subdir1-', 'topdir1-subdir1,', 'topdir1-subdir1,0-c1', 'topdir1-subdir1,1-c1', ], 'params': { 'prefix': 'topdir1-', 'delimiter': '-', 'reverse': True, }, 'expected': [ 'topdir1-subdir1.-', 'topdir1-subdir1.', 'topdir1-subdir1-', 'topdir1-subdir1,1-', 'topdir1-subdir1,0-', 'topdir1-subdir1,', ], }, { 'containers': [ '1', '2', '3:1', '3:2:1', '3:2:2', '3:3', '4', ], 'params': { 'prefix': '3:', 'delimiter': ':', 'reverse': True, }, 'expected': [ '3:3', '3:2:', '3:1', ], }, ] ts = 
make_timestamp_iter() default_listing_params = { 'limit': 10000, 'marker': '', 'end_marker': None, 'prefix': None, 'delimiter': None, } failures = [] for expected in expectations: broker = AccountBroker(':memory:', account='a') broker.initialize(next(ts).internal, 0) for name in expected['containers']: broker.put_container(name, next(ts).internal, 0, 0, 0, POLICIES.default.idx) params = default_listing_params.copy() params.update(expected['params']) listing = list(c[0] for c in broker.list_containers_iter(**params)) if listing != expected['expected']: expected['listing'] = listing failures.append( "With containers %(containers)r, the params %(params)r " "produced %(listing)r instead of %(expected)r" % expected) self.assertFalse(failures, "Found the following failures:\n%s" % '\n'.join(failures)) def test_double_check_trailing_delimiter(self): # Test AccountBroker.list_containers_iter for an # account that has an odd container with a trailing delimiter broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) broker.put_container('a', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('a-', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('a-a', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('a-a-a', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('a-a-b', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('a-b', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) # NB: ord(".") == ord("-") + 1 broker.put_container('a.', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('a.b', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('b', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('b-a', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('b-b', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker.put_container('c', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) listing = broker.list_containers_iter(15, None, None, None, None) self.assertEqual([row[0] for row in listing], ['a', 'a-', 'a-a', 'a-a-a', 'a-a-b', 'a-b', 'a.', 'a.b', 'b', 'b-a', 'b-b', 'c']) listing = broker.list_containers_iter(15, None, None, '', '-') self.assertEqual([row[0] for row in listing], ['a', 'a-', 'a.', 'a.b', 'b', 'b-', 'c']) listing = broker.list_containers_iter(15, None, None, 'a-', '-') self.assertEqual([row[0] for row in listing], ['a-', 'a-a', 'a-a-', 'a-b']) listing = broker.list_containers_iter(15, None, None, 'b-', '-') self.assertEqual([row[0] for row in listing], ['b-a', 'b-b']) def test_chexor(self): broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) broker.put_container('a', Timestamp(1).internal, Timestamp(0).internal, 0, 0, POLICIES.default.idx) broker.put_container('b', Timestamp(2).internal, Timestamp(0).internal, 0, 0, POLICIES.default.idx) hasha = hashlib.md5( '%s-%s' % ('a', "%s-%s-%s-%s" % ( Timestamp(1).internal, Timestamp(0).internal, 0, 0)) ).digest() hashb = hashlib.md5( '%s-%s' % ('b', "%s-%s-%s-%s" % ( Timestamp(2).internal, Timestamp(0).internal, 0, 0)) ).digest() hashc = \ ''.join(('%02x' % (ord(a) ^ ord(b)) for a, b in zip(hasha, hashb))) self.assertEqual(broker.get_info()['hash'], hashc) broker.put_container('b', Timestamp(3).internal, Timestamp(0).internal, 0, 0, POLICIES.default.idx) hashb = hashlib.md5( '%s-%s' % 
('b', "%s-%s-%s-%s" % ( Timestamp(3).internal, Timestamp(0).internal, 0, 0)) ).digest() hashc = \ ''.join(('%02x' % (ord(a) ^ ord(b)) for a, b in zip(hasha, hashb))) self.assertEqual(broker.get_info()['hash'], hashc) def test_merge_items(self): broker1 = AccountBroker(':memory:', account='a') broker1.initialize(Timestamp('1').internal) broker2 = AccountBroker(':memory:', account='a') broker2.initialize(Timestamp('1').internal) broker1.put_container('a', Timestamp(1).internal, 0, 0, 0, POLICIES.default.idx) broker1.put_container('b', Timestamp(2).internal, 0, 0, 0, POLICIES.default.idx) id = broker1.get_info()['id'] broker2.merge_items(broker1.get_items_since( broker2.get_sync(id), 1000), id) items = broker2.get_items_since(-1, 1000) self.assertEqual(len(items), 2) self.assertEqual(['a', 'b'], sorted([rec['name'] for rec in items])) broker1.put_container('c', Timestamp(3).internal, 0, 0, 0, POLICIES.default.idx) broker2.merge_items(broker1.get_items_since( broker2.get_sync(id), 1000), id) items = broker2.get_items_since(-1, 1000) self.assertEqual(len(items), 3) self.assertEqual(['a', 'b', 'c'], sorted([rec['name'] for rec in items])) def test_merge_items_overwrite_unicode(self): snowman = u'\N{SNOWMAN}'.encode('utf-8') broker1 = AccountBroker(':memory:', account='a') broker1.initialize(Timestamp('1').internal, 0) id1 = broker1.get_info()['id'] broker2 = AccountBroker(':memory:', account='a') broker2.initialize(Timestamp('1').internal, 0) broker1.put_container(snowman, Timestamp(2).internal, 0, 1, 100, POLICIES.default.idx) broker1.put_container('b', Timestamp(3).internal, 0, 0, 0, POLICIES.default.idx) broker2.merge_items(json.loads(json.dumps(broker1.get_items_since( broker2.get_sync(id1), 1000))), id1) broker1.put_container(snowman, Timestamp(4).internal, 0, 2, 200, POLICIES.default.idx) broker2.merge_items(json.loads(json.dumps(broker1.get_items_since( broker2.get_sync(id1), 1000))), id1) items = broker2.get_items_since(-1, 1000) self.assertEqual(['b', snowman], sorted([rec['name'] for rec in items])) items_by_name = dict((rec['name'], rec) for rec in items) self.assertEqual(items_by_name[snowman]['object_count'], 2) self.assertEqual(items_by_name[snowman]['bytes_used'], 200) self.assertEqual(items_by_name['b']['object_count'], 0) self.assertEqual(items_by_name['b']['bytes_used'], 0) def test_load_old_pending_puts(self): # pending puts from pre-storage-policy account brokers won't contain # the storage policy index tempdir = mkdtemp() broker_path = os.path.join(tempdir, 'test-load-old.db') try: broker = AccountBroker(broker_path, account='real') broker.initialize(Timestamp(1).internal) with open(broker_path + '.pending', 'a+b') as pending: pending.write(':') pending.write(pickle.dumps( # name, put_timestamp, delete_timestamp, object_count, # bytes_used, deleted ('oldcon', Timestamp(200).internal, Timestamp(0).internal, 896, 9216695, 0)).encode('base64')) broker._commit_puts() with broker.get() as conn: results = list(conn.execute(''' SELECT name, storage_policy_index FROM container ''')) self.assertEqual(len(results), 1) self.assertEqual(dict(results[0]), {'name': 'oldcon', 'storage_policy_index': 0}) finally: rmtree(tempdir) @patch_policies([StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', True), StoragePolicy(2, 'two', False), StoragePolicy(3, 'three', False)]) def test_get_policy_stats(self): ts = (Timestamp(t).internal for t in itertools.count(int(time()))) broker = AccountBroker(':memory:', account='a') broker.initialize(next(ts)) # check empty policy_stats 
self.assertTrue(broker.empty()) policy_stats = broker.get_policy_stats() self.assertEqual(policy_stats, {}) # add some empty containers for policy in POLICIES: container_name = 'c-%s' % policy.name put_timestamp = next(ts) broker.put_container(container_name, put_timestamp, 0, 0, 0, policy.idx) policy_stats = broker.get_policy_stats() stats = policy_stats[policy.idx] if 'container_count' in stats: self.assertEqual(stats['container_count'], 1) self.assertEqual(stats['object_count'], 0) self.assertEqual(stats['bytes_used'], 0) # update the containers object & byte count for policy in POLICIES: container_name = 'c-%s' % policy.name put_timestamp = next(ts) count = policy.idx * 100 # good as any integer broker.put_container(container_name, put_timestamp, 0, count, count, policy.idx) policy_stats = broker.get_policy_stats() stats = policy_stats[policy.idx] if 'container_count' in stats: self.assertEqual(stats['container_count'], 1) self.assertEqual(stats['object_count'], count) self.assertEqual(stats['bytes_used'], count) # check all the policy_stats at once for policy_index, stats in policy_stats.items(): policy = POLICIES[policy_index] count = policy.idx * 100 # coupled with policy for test if 'container_count' in stats: self.assertEqual(stats['container_count'], 1) self.assertEqual(stats['object_count'], count) self.assertEqual(stats['bytes_used'], count) # now delete the containers one by one for policy in POLICIES: container_name = 'c-%s' % policy.name delete_timestamp = next(ts) broker.put_container(container_name, 0, delete_timestamp, 0, 0, policy.idx) policy_stats = broker.get_policy_stats() stats = policy_stats[policy.idx] if 'container_count' in stats: self.assertEqual(stats['container_count'], 0) self.assertEqual(stats['object_count'], 0) self.assertEqual(stats['bytes_used'], 0) @patch_policies([StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', True)]) def test_policy_stats_tracking(self): ts = (Timestamp(t).internal for t in itertools.count(int(time()))) broker = AccountBroker(':memory:', account='a') broker.initialize(next(ts)) # policy 0 broker.put_container('con1', next(ts), 0, 12, 2798641, 0) broker.put_container('con1', next(ts), 0, 13, 8156441, 0) # policy 1 broker.put_container('con2', next(ts), 0, 7, 5751991, 1) broker.put_container('con2', next(ts), 0, 8, 6085379, 1) stats = broker.get_policy_stats() self.assertEqual(len(stats), 2) if 'container_count' in stats[0]: self.assertEqual(stats[0]['container_count'], 1) self.assertEqual(stats[0]['object_count'], 13) self.assertEqual(stats[0]['bytes_used'], 8156441) if 'container_count' in stats[1]: self.assertEqual(stats[1]['container_count'], 1) self.assertEqual(stats[1]['object_count'], 8) self.assertEqual(stats[1]['bytes_used'], 6085379) # Break encapsulation here to make sure that there's only 2 rows in # the stats table. It's possible that there could be 4 rows (one per # put_container) but that they came out in the right order so that # get_policy_stats() collapsed them down to the right number. To prove # that's not so, we have to go peek at the broker's internals. with broker.get() as conn: nrows = conn.execute( "SELECT COUNT(*) FROM policy_stat").fetchall()[0][0] self.assertEqual(nrows, 2) def prespi_AccountBroker_initialize(self, conn, put_timestamp, **kwargs): """ The AccountBroker initialze() function before we added the policy stat table. Used by test_policy_table_creation() to make sure that the AccountBroker will correctly add the table for cases where the DB existed before the policy support was added. 
:param conn: DB connection object :param put_timestamp: put timestamp """ if not self.account: raise ValueError( 'Attempting to create a new database with no account set') self.create_container_table(conn) self.create_account_stat_table(conn, put_timestamp) def premetadata_create_account_stat_table(self, conn, put_timestamp): """ Copied from AccountBroker before the metadata column was added; used for testing with TestAccountBrokerBeforeMetadata. Create account_stat table which is specific to the account DB. :param conn: DB connection object :param put_timestamp: put timestamp """ conn.executescript(''' CREATE TABLE account_stat ( account TEXT, created_at TEXT, put_timestamp TEXT DEFAULT '0', delete_timestamp TEXT DEFAULT '0', container_count INTEGER, object_count INTEGER DEFAULT 0, bytes_used INTEGER DEFAULT 0, hash TEXT default '00000000000000000000000000000000', id TEXT, status TEXT DEFAULT '', status_changed_at TEXT DEFAULT '0' ); INSERT INTO account_stat (container_count) VALUES (0); ''') conn.execute(''' UPDATE account_stat SET account = ?, created_at = ?, id = ?, put_timestamp = ? ''', (self.account, Timestamp(time()).internal, str(uuid4()), put_timestamp)) class TestCommonAccountBroker(test_db.TestExampleBroker): broker_class = AccountBroker def setUp(self): super(TestCommonAccountBroker, self).setUp() self.policy = random.choice(list(POLICIES)) def put_item(self, broker, timestamp): broker.put_container('test', timestamp, 0, 0, 0, int(self.policy)) def delete_item(self, broker, timestamp): broker.put_container('test', 0, timestamp, 0, 0, int(self.policy)) class TestAccountBrokerBeforeMetadata(TestAccountBroker): """ Tests for AccountBroker against databases created before the metadata column was added. """ def setUp(self): self._imported_create_account_stat_table = \ AccountBroker.create_account_stat_table AccountBroker.create_account_stat_table = \ premetadata_create_account_stat_table broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) exc = None with broker.get() as conn: try: conn.execute('SELECT metadata FROM account_stat') except BaseException as err: exc = err self.assertTrue('no such column: metadata' in str(exc)) def tearDown(self): AccountBroker.create_account_stat_table = \ self._imported_create_account_stat_table broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) with broker.get() as conn: conn.execute('SELECT metadata FROM account_stat') def prespi_create_container_table(self, conn): """ Copied from AccountBroker before the sstoage_policy_index column was added; used for testing with TestAccountBrokerBeforeSPI. Create container table which is specific to the account DB. 
:param conn: DB connection object """ conn.executescript(""" CREATE TABLE container ( ROWID INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, put_timestamp TEXT, delete_timestamp TEXT, object_count INTEGER, bytes_used INTEGER, deleted INTEGER DEFAULT 0 ); CREATE INDEX ix_container_deleted_name ON container (deleted, name); CREATE TRIGGER container_insert AFTER INSERT ON container BEGIN UPDATE account_stat SET container_count = container_count + (1 - new.deleted), object_count = object_count + new.object_count, bytes_used = bytes_used + new.bytes_used, hash = chexor(hash, new.name, new.put_timestamp || '-' || new.delete_timestamp || '-' || new.object_count || '-' || new.bytes_used); END; CREATE TRIGGER container_update BEFORE UPDATE ON container BEGIN SELECT RAISE(FAIL, 'UPDATE not allowed; DELETE and INSERT'); END; CREATE TRIGGER container_delete AFTER DELETE ON container BEGIN UPDATE account_stat SET container_count = container_count - (1 - old.deleted), object_count = object_count - old.object_count, bytes_used = bytes_used - old.bytes_used, hash = chexor(hash, old.name, old.put_timestamp || '-' || old.delete_timestamp || '-' || old.object_count || '-' || old.bytes_used); END; """) class TestAccountBrokerBeforeSPI(TestAccountBroker): """ Tests for AccountBroker against databases created before the storage_policy_index column was added. """ def setUp(self): self._imported_create_container_table = \ AccountBroker.create_container_table AccountBroker.create_container_table = \ prespi_create_container_table self._imported_initialize = AccountBroker._initialize AccountBroker._initialize = prespi_AccountBroker_initialize broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) exc = None with broker.get() as conn: try: conn.execute('SELECT storage_policy_index FROM container') except BaseException as err: exc = err self.assertTrue('no such column: storage_policy_index' in str(exc)) with broker.get() as conn: try: conn.execute('SELECT * FROM policy_stat') except sqlite3.OperationalError as err: self.assertTrue('no such table: policy_stat' in str(err)) else: self.fail('database created with policy_stat table') def tearDown(self): AccountBroker.create_container_table = \ self._imported_create_container_table AccountBroker._initialize = self._imported_initialize broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) with broker.get() as conn: conn.execute('SELECT storage_policy_index FROM container') @with_tempdir def test_policy_table_migration(self, tempdir): db_path = os.path.join(tempdir, 'account.db') # first init an acct DB without the policy_stat table present broker = AccountBroker(db_path, account='a') broker.initialize(Timestamp('1').internal) with broker.get() as conn: try: conn.execute(''' SELECT * FROM policy_stat ''').fetchone()[0] except sqlite3.OperationalError as err: # confirm that the table really isn't there self.assertTrue('no such table: policy_stat' in str(err)) else: self.fail('broker did not raise sqlite3.OperationalError ' 'trying to select from policy_stat table!') # make sure we can HEAD this thing w/o the table stats = broker.get_policy_stats() self.assertEqual(len(stats), 0) # now do a PUT to create the table broker.put_container('o', Timestamp(time()).internal, 0, 0, 0, POLICIES.default.idx) broker._commit_puts_stale_ok() # now confirm that the table was created with broker.get() as conn: conn.execute('SELECT * FROM policy_stat') stats = broker.get_policy_stats() self.assertEqual(len(stats), 1) 
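    # --- illustrative sketch; not part of the upstream test suite ---
    # Every schema-migration test in this class probes SQLite the same way:
    # run a SELECT against the suspect table or column and treat
    # sqlite3.OperationalError as "not migrated yet".  A hypothetical helper
    # capturing that pattern could look like the method below; its name and
    # failure messages are assumptions for illustration only, while sqlite3
    # and broker.get() are exactly what the surrounding tests already use.
    def _assert_not_yet_migrated(self, broker, query, expected_fragment):
        # e.g. query='SELECT * FROM policy_stat',
        #      expected_fragment='no such table: policy_stat'
        with broker.get() as conn:
            try:
                conn.execute(query)
            except sqlite3.OperationalError as err:
                # the failure must be the specific schema gap we expect
                self.assertTrue(expected_fragment in str(err))
            else:
                self.fail('schema already migrated; %r succeeded' % query)
    # --- end illustrative sketch ---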
@patch_policies @with_tempdir def test_container_table_migration(self, tempdir): db_path = os.path.join(tempdir, 'account.db') # first init an acct DB without the policy_stat table present broker = AccountBroker(db_path, account='a') broker.initialize(Timestamp('1').internal) with broker.get() as conn: try: conn.execute(''' SELECT storage_policy_index FROM container ''').fetchone()[0] except sqlite3.OperationalError as err: # confirm that the table doesn't have this column self.assertTrue('no such column: storage_policy_index' in str(err)) else: self.fail('broker did not raise sqlite3.OperationalError ' 'trying to select from storage_policy_index ' 'from container table!') # manually insert an existing row to avoid migration with broker.get() as conn: conn.execute(''' INSERT INTO container (name, put_timestamp, delete_timestamp, object_count, bytes_used, deleted) VALUES (?, ?, ?, ?, ?, ?) ''', ('test_name', Timestamp(time()).internal, 0, 1, 2, 0)) conn.commit() # make sure we can iter containers without the migration for c in broker.list_containers_iter(1, None, None, None, None): self.assertEqual(c, ('test_name', 1, 2, 0)) # stats table is mysteriously empty... stats = broker.get_policy_stats() self.assertEqual(len(stats), 0) # now do a PUT with a different value for storage_policy_index # which will update the DB schema as well as update policy_stats # for legacy containers in the DB (those without an SPI) other_policy = [p for p in POLICIES if p.idx != 0][0] broker.put_container('test_second', Timestamp(time()).internal, 0, 3, 4, other_policy.idx) broker._commit_puts_stale_ok() with broker.get() as conn: rows = conn.execute(''' SELECT name, storage_policy_index FROM container ''').fetchall() for row in rows: if row[0] == 'test_name': self.assertEqual(row[1], 0) else: self.assertEqual(row[1], other_policy.idx) # we should have stats for both containers stats = broker.get_policy_stats() self.assertEqual(len(stats), 2) if 'container_count' in stats[0]: self.assertEqual(stats[0]['container_count'], 1) self.assertEqual(stats[0]['object_count'], 1) self.assertEqual(stats[0]['bytes_used'], 2) if 'container_count' in stats[1]: self.assertEqual(stats[1]['container_count'], 1) self.assertEqual(stats[1]['object_count'], 3) self.assertEqual(stats[1]['bytes_used'], 4) # now lets delete a container and make sure policy_stats is OK with broker.get() as conn: conn.execute(''' DELETE FROM container WHERE name = ? 
''', ('test_name',)) conn.commit() stats = broker.get_policy_stats() self.assertEqual(len(stats), 2) if 'container_count' in stats[0]: self.assertEqual(stats[0]['container_count'], 0) self.assertEqual(stats[0]['object_count'], 0) self.assertEqual(stats[0]['bytes_used'], 0) if 'container_count' in stats[1]: self.assertEqual(stats[1]['container_count'], 1) self.assertEqual(stats[1]['object_count'], 3) self.assertEqual(stats[1]['bytes_used'], 4) @with_tempdir def test_half_upgraded_database(self, tempdir): db_path = os.path.join(tempdir, 'account.db') ts = itertools.count() ts = (Timestamp(t).internal for t in itertools.count(int(time()))) broker = AccountBroker(db_path, account='a') broker.initialize(next(ts)) self.assertTrue(broker.empty()) # add a container (to pending file) broker.put_container('c', next(ts), 0, 0, 0, POLICIES.default.idx) real_get = broker.get called = [] @contextmanager def mock_get(): with real_get() as conn: def mock_executescript(script): if called: raise Exception('kaboom!') called.append(script) conn.executescript = mock_executescript yield conn broker.get = mock_get try: broker._commit_puts() except Exception: pass else: self.fail('mock exception was not raised') self.assertEqual(len(called), 1) self.assertTrue('CREATE TABLE policy_stat' in called[0]) # nothing was committed broker = AccountBroker(db_path, account='a') with broker.get() as conn: try: conn.execute('SELECT * FROM policy_stat') except sqlite3.OperationalError as err: self.assertTrue('no such table: policy_stat' in str(err)) else: self.fail('half upgraded database!') container_count = conn.execute( 'SELECT count(*) FROM container').fetchone()[0] self.assertEqual(container_count, 0) # try again to commit puts self.assertFalse(broker.empty()) # full migration successful with broker.get() as conn: conn.execute('SELECT * FROM policy_stat') conn.execute('SELECT storage_policy_index FROM container') @with_tempdir def test_pre_storage_policy_replication(self, tempdir): ts = make_timestamp_iter() # make and two account database "replicas" old_broker = AccountBroker(os.path.join(tempdir, 'old_account.db'), account='a') old_broker.initialize(next(ts).internal) new_broker = AccountBroker(os.path.join(tempdir, 'new_account.db'), account='a') new_broker.initialize(next(ts).internal) # manually insert an existing row to avoid migration for old database with old_broker.get() as conn: conn.execute(''' INSERT INTO container (name, put_timestamp, delete_timestamp, object_count, bytes_used, deleted) VALUES (?, ?, ?, ?, ?, ?) ''', ('test_name', next(ts).internal, 0, 1, 2, 0)) conn.commit() # get replication info and rows form old database info = old_broker.get_info() rows = old_broker.get_items_since(0, 10) # "send" replication rows to new database new_broker.merge_items(rows, info['id']) # make sure "test_name" container in new database self.assertEqual(new_broker.get_info()['container_count'], 1) for c in new_broker.list_containers_iter(1, None, None, None, None): self.assertEqual(c, ('test_name', 1, 2, 0)) # full migration successful with new_broker.get() as conn: conn.execute('SELECT * FROM policy_stat') conn.execute('SELECT storage_policy_index FROM container') def pre_track_containers_create_policy_stat(self, conn): """ Copied from AccountBroker before the container_count column was added. Create policy_stat table which is specific to the account DB. Not a part of Pluggable Back-ends, internal to the baseline code. 
:param conn: DB connection object """ conn.executescript(""" CREATE TABLE policy_stat ( storage_policy_index INTEGER PRIMARY KEY, object_count INTEGER DEFAULT 0, bytes_used INTEGER DEFAULT 0 ); INSERT OR IGNORE INTO policy_stat ( storage_policy_index, object_count, bytes_used ) SELECT 0, object_count, bytes_used FROM account_stat WHERE container_count > 0; """) def pre_track_containers_create_container_table(self, conn): """ Copied from AccountBroker before the container_count column was added (using old stat trigger script) Create container table which is specific to the account DB. :param conn: DB connection object """ # revert to old trigger script to support one of the tests OLD_POLICY_STAT_TRIGGER_SCRIPT = """ CREATE TRIGGER container_insert_ps AFTER INSERT ON container BEGIN INSERT OR IGNORE INTO policy_stat (storage_policy_index, object_count, bytes_used) VALUES (new.storage_policy_index, 0, 0); UPDATE policy_stat SET object_count = object_count + new.object_count, bytes_used = bytes_used + new.bytes_used WHERE storage_policy_index = new.storage_policy_index; END; CREATE TRIGGER container_delete_ps AFTER DELETE ON container BEGIN UPDATE policy_stat SET object_count = object_count - old.object_count, bytes_used = bytes_used - old.bytes_used WHERE storage_policy_index = old.storage_policy_index; END; """ conn.executescript(""" CREATE TABLE container ( ROWID INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, put_timestamp TEXT, delete_timestamp TEXT, object_count INTEGER, bytes_used INTEGER, deleted INTEGER DEFAULT 0, storage_policy_index INTEGER DEFAULT 0 ); CREATE INDEX ix_container_deleted_name ON container (deleted, name); CREATE TRIGGER container_insert AFTER INSERT ON container BEGIN UPDATE account_stat SET container_count = container_count + (1 - new.deleted), object_count = object_count + new.object_count, bytes_used = bytes_used + new.bytes_used, hash = chexor(hash, new.name, new.put_timestamp || '-' || new.delete_timestamp || '-' || new.object_count || '-' || new.bytes_used); END; CREATE TRIGGER container_update BEFORE UPDATE ON container BEGIN SELECT RAISE(FAIL, 'UPDATE not allowed; DELETE and INSERT'); END; CREATE TRIGGER container_delete AFTER DELETE ON container BEGIN UPDATE account_stat SET container_count = container_count - (1 - old.deleted), object_count = object_count - old.object_count, bytes_used = bytes_used - old.bytes_used, hash = chexor(hash, old.name, old.put_timestamp || '-' || old.delete_timestamp || '-' || old.object_count || '-' || old.bytes_used); END; """ + OLD_POLICY_STAT_TRIGGER_SCRIPT) class AccountBrokerPreTrackContainerCountSetup(object): def assertUnmigrated(self, broker): with broker.get() as conn: try: conn.execute(''' SELECT container_count FROM policy_stat ''').fetchone()[0] except sqlite3.OperationalError as err: # confirm that the column really isn't there self.assertTrue('no such column: container_count' in str(err)) else: self.fail('broker did not raise sqlite3.OperationalError ' 'trying to select container_count from policy_stat!') def setUp(self): # use old version of policy_stat self._imported_create_policy_stat_table = \ AccountBroker.create_policy_stat_table AccountBroker.create_policy_stat_table = \ pre_track_containers_create_policy_stat # use old container table so we use old trigger for # updating policy_stat self._imported_create_container_table = \ AccountBroker.create_container_table AccountBroker.create_container_table = \ pre_track_containers_create_container_table broker = AccountBroker(':memory:', account='a') 
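        # this throwaway in-memory broker only confirms that the
        # monkey-patched, pre-migration schema is what new brokers now get;
        # the broker the tests actually exercise is created on disk below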
broker.initialize(Timestamp('1').internal) self.assertUnmigrated(broker) self.tempdir = mkdtemp() self.ts = (Timestamp(t).internal for t in itertools.count(int(time()))) self.db_path = os.path.join(self.tempdir, 'sda', 'accounts', '0', '0', '0', 'test.db') self.broker = AccountBroker(self.db_path, account='a') self.broker.initialize(next(self.ts)) # Common sanity-check that our starting, pre-migration state correctly # does not have the container_count column. self.assertUnmigrated(self.broker) def tearDown(self): rmtree(self.tempdir, ignore_errors=True) self.restore_account_broker() broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) with broker.get() as conn: conn.execute('SELECT container_count FROM policy_stat') def restore_account_broker(self): AccountBroker.create_policy_stat_table = \ self._imported_create_policy_stat_table AccountBroker.create_container_table = \ self._imported_create_container_table @patch_policies([StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', True), StoragePolicy(2, 'two', False), StoragePolicy(3, 'three', False)]) class TestAccountBrokerBeforePerPolicyContainerTrack( AccountBrokerPreTrackContainerCountSetup, TestAccountBroker): """ Tests for AccountBroker against databases created before the container_count column was added to the policy_stat table. """ def test_policy_table_cont_count_do_migrations(self): # add a few containers num_containers = 8 policies = itertools.cycle(POLICIES) per_policy_container_counts = defaultdict(int) # add a few container entries for i in range(num_containers): name = 'test-container-%02d' % i policy = next(policies) self.broker.put_container(name, next(self.ts), 0, 0, 0, int(policy)) per_policy_container_counts[int(policy)] += 1 total_container_count = self.broker.get_info()['container_count'] self.assertEqual(total_container_count, num_containers) # still un-migrated self.assertUnmigrated(self.broker) policy_stats = self.broker.get_policy_stats() self.assertEqual(len(policy_stats), len(per_policy_container_counts)) for stats in policy_stats.values(): self.assertEqual(stats['object_count'], 0) self.assertEqual(stats['bytes_used'], 0) # un-migrated dbs should not return container_count self.assertFalse('container_count' in stats) # now force the migration policy_stats = self.broker.get_policy_stats(do_migrations=True) self.assertEqual(len(policy_stats), len(per_policy_container_counts)) for policy_index, stats in policy_stats.items(): self.assertEqual(stats['object_count'], 0) self.assertEqual(stats['bytes_used'], 0) self.assertEqual(stats['container_count'], per_policy_container_counts[policy_index]) def test_policy_table_cont_count_update_get_stats(self): # add a few container entries for policy in POLICIES: for i in range(0, policy.idx + 1): container_name = 'c%s_0' % policy.idx self.broker.put_container('c%s_%s' % (policy.idx, i), 0, 0, 0, 0, policy.idx) # _commit_puts_stale_ok() called by get_policy_stats() # calling get_policy_stats() with do_migrations will alter the table # and populate it based on what's in the container table now stats = self.broker.get_policy_stats(do_migrations=True) # now confirm that the column was created with self.broker.get() as conn: conn.execute('SELECT container_count FROM policy_stat') # confirm stats reporting back correctly self.assertEqual(len(stats), 4) for policy in POLICIES: self.assertEqual(stats[policy.idx]['container_count'], policy.idx + 1) # now delete one from each policy and check the stats with self.broker.get() as conn: for policy 
in POLICIES: container_name = 'c%s_0' % policy.idx conn.execute(''' DELETE FROM container WHERE name = ? ''', (container_name,)) conn.commit() stats = self.broker.get_policy_stats() self.assertEqual(len(stats), 4) for policy in POLICIES: self.assertEqual(stats[policy.idx]['container_count'], policy.idx) # now put them back and make sure things are still cool for policy in POLICIES: container_name = 'c%s_0' % policy.idx self.broker.put_container(container_name, 0, 0, 0, 0, policy.idx) # _commit_puts_stale_ok() called by get_policy_stats() # confirm stats reporting back correctly stats = self.broker.get_policy_stats() self.assertEqual(len(stats), 4) for policy in POLICIES: self.assertEqual(stats[policy.idx]['container_count'], policy.idx + 1) def test_per_policy_cont_count_migration_with_deleted(self): num_containers = 15 policies = itertools.cycle(POLICIES) container_policy_map = {} # add a few container entries for i in range(num_containers): name = 'test-container-%02d' % i policy = next(policies) self.broker.put_container(name, next(self.ts), 0, 0, 0, int(policy)) # keep track of stub container policies container_policy_map[name] = policy # delete about half of the containers for i in range(0, num_containers, 2): name = 'test-container-%02d' % i policy = container_policy_map[name] self.broker.put_container(name, 0, next(self.ts), 0, 0, int(policy)) total_container_count = self.broker.get_info()['container_count'] self.assertEqual(total_container_count, num_containers / 2) # trigger migration policy_info = self.broker.get_policy_stats(do_migrations=True) self.assertEqual(len(policy_info), min(num_containers, len(POLICIES))) policy_container_count = sum(p['container_count'] for p in policy_info.values()) self.assertEqual(total_container_count, policy_container_count) def test_per_policy_cont_count_migration_with_single_policy(self): num_containers = 100 with patch_policies(legacy_only=True): policy = POLICIES[0] # add a few container entries for i in range(num_containers): name = 'test-container-%02d' % i self.broker.put_container(name, next(self.ts), 0, 0, 0, int(policy)) # delete about half of the containers for i in range(0, num_containers, 2): name = 'test-container-%02d' % i self.broker.put_container(name, 0, next(self.ts), 0, 0, int(policy)) total_container_count = self.broker.get_info()['container_count'] # trigger migration policy_info = self.broker.get_policy_stats(do_migrations=True) self.assertEqual(total_container_count, num_containers / 2) self.assertEqual(len(policy_info), 1) policy_container_count = sum(p['container_count'] for p in policy_info.values()) self.assertEqual(total_container_count, policy_container_count) def test_per_policy_cont_count_migration_impossible(self): with patch_policies(legacy_only=True): # add a container for the legacy policy policy = POLICIES[0] self.broker.put_container('test-legacy-container', next(self.ts), 0, 0, 0, int(policy)) # now create an impossible situation by adding a container for a # policy index that doesn't exist non_existent_policy_index = int(policy) + 1 self.broker.put_container('test-non-existent-policy', next(self.ts), 0, 0, 0, non_existent_policy_index) total_container_count = self.broker.get_info()['container_count'] # trigger migration policy_info = self.broker.get_policy_stats(do_migrations=True) self.assertEqual(total_container_count, 2) self.assertEqual(len(policy_info), 2) for policy_stat in policy_info.values(): self.assertEqual(policy_stat['container_count'], 1) def test_migrate_add_storage_policy_index_fail(self): 
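        # create_policy_stat_table is patched to raise
        # sqlite3.OperationalError, and _migrate_add_storage_policy_index is
        # expected to let that error propagate (matching '.*foobar.*')
        # rather than swallow it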
broker = AccountBroker(':memory:', account='a') broker.initialize(Timestamp('1').internal) with mock.patch.object( broker, 'create_policy_stat_table', side_effect=sqlite3.OperationalError('foobar')): with broker.get() as conn: self.assertRaisesRegexp( sqlite3.OperationalError, '.*foobar.*', broker._migrate_add_storage_policy_index, conn=conn) swift-2.7.1/test/unit/account/__init__.py0000664000567000056710000000000013024044352021504 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/account/test_utils.py0000664000567000056710000001703613024044352022165 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools import time import unittest import mock from swift.account import utils, backend from swift.common.storage_policy import POLICIES from swift.common.utils import Timestamp from swift.common.header_key_dict import HeaderKeyDict from test.unit import patch_policies class TestFakeAccountBroker(unittest.TestCase): def test_fake_broker_get_info(self): broker = utils.FakeAccountBroker() now = time.time() with mock.patch('time.time', new=lambda: now): info = broker.get_info() timestamp = Timestamp(now) expected = { 'container_count': 0, 'object_count': 0, 'bytes_used': 0, 'created_at': timestamp.internal, 'put_timestamp': timestamp.internal, } self.assertEqual(info, expected) def test_fake_broker_list_containers_iter(self): broker = utils.FakeAccountBroker() self.assertEqual(broker.list_containers_iter(), []) def test_fake_broker_metadata(self): broker = utils.FakeAccountBroker() self.assertEqual(broker.metadata, {}) def test_fake_broker_get_policy_stats(self): broker = utils.FakeAccountBroker() self.assertEqual(broker.get_policy_stats(), {}) class TestAccountUtils(unittest.TestCase): def test_get_response_headers_fake_broker(self): broker = utils.FakeAccountBroker() now = time.time() expected = { 'X-Account-Container-Count': 0, 'X-Account-Object-Count': 0, 'X-Account-Bytes-Used': 0, 'X-Timestamp': Timestamp(now).normal, 'X-PUT-Timestamp': Timestamp(now).normal, } with mock.patch('time.time', new=lambda: now): resp_headers = utils.get_response_headers(broker) self.assertEqual(resp_headers, expected) def test_get_response_headers_empty_memory_broker(self): broker = backend.AccountBroker(':memory:', account='a') now = time.time() with mock.patch('time.time', new=lambda: now): broker.initialize(Timestamp(now).internal) expected = { 'X-Account-Container-Count': 0, 'X-Account-Object-Count': 0, 'X-Account-Bytes-Used': 0, 'X-Timestamp': Timestamp(now).normal, 'X-PUT-Timestamp': Timestamp(now).normal, } resp_headers = utils.get_response_headers(broker) self.assertEqual(resp_headers, expected) @patch_policies def test_get_response_headers_with_data(self): broker = backend.AccountBroker(':memory:', account='a') now = time.time() with mock.patch('time.time', new=lambda: now): broker.initialize(Timestamp(now).internal) # add some container data ts = (Timestamp(t).internal for t in itertools.count(int(now))) total_containers = 0 total_objects = 0 total_bytes = 0 for policy 
in POLICIES: delete_timestamp = next(ts) put_timestamp = next(ts) object_count = int(policy) bytes_used = int(policy) * 10 broker.put_container('c-%s' % policy.name, put_timestamp, delete_timestamp, object_count, bytes_used, int(policy)) total_containers += 1 total_objects += object_count total_bytes += bytes_used expected = HeaderKeyDict({ 'X-Account-Container-Count': total_containers, 'X-Account-Object-Count': total_objects, 'X-Account-Bytes-Used': total_bytes, 'X-Timestamp': Timestamp(now).normal, 'X-PUT-Timestamp': Timestamp(now).normal, }) for policy in POLICIES: prefix = 'X-Account-Storage-Policy-%s-' % policy.name expected[prefix + 'Container-Count'] = 1 expected[prefix + 'Object-Count'] = int(policy) expected[prefix + 'Bytes-Used'] = int(policy) * 10 resp_headers = utils.get_response_headers(broker) per_policy_container_headers = [ h for h in resp_headers if h.lower().startswith('x-account-storage-policy-') and h.lower().endswith('-container-count')] self.assertTrue(per_policy_container_headers) for key, value in resp_headers.items(): expected_value = expected.pop(key) self.assertEqual(expected_value, str(value), 'value for %r was %r not %r' % ( key, value, expected_value)) self.assertFalse(expected) @patch_policies def test_get_response_headers_with_legacy_data(self): broker = backend.AccountBroker(':memory:', account='a') now = time.time() with mock.patch('time.time', new=lambda: now): broker.initialize(Timestamp(now).internal) # add some container data ts = (Timestamp(t).internal for t in itertools.count(int(now))) total_containers = 0 total_objects = 0 total_bytes = 0 for policy in POLICIES: delete_timestamp = next(ts) put_timestamp = next(ts) object_count = int(policy) bytes_used = int(policy) * 10 broker.put_container('c-%s' % policy.name, put_timestamp, delete_timestamp, object_count, bytes_used, int(policy)) total_containers += 1 total_objects += object_count total_bytes += bytes_used expected = HeaderKeyDict({ 'X-Account-Container-Count': total_containers, 'X-Account-Object-Count': total_objects, 'X-Account-Bytes-Used': total_bytes, 'X-Timestamp': Timestamp(now).normal, 'X-PUT-Timestamp': Timestamp(now).normal, }) for policy in POLICIES: prefix = 'X-Account-Storage-Policy-%s-' % policy.name expected[prefix + 'Object-Count'] = int(policy) expected[prefix + 'Bytes-Used'] = int(policy) * 10 orig_policy_stats = broker.get_policy_stats def stub_policy_stats(*args, **kwargs): policy_stats = orig_policy_stats(*args, **kwargs) for stats in policy_stats.values(): # legacy db's won't return container_count del stats['container_count'] return policy_stats broker.get_policy_stats = stub_policy_stats resp_headers = utils.get_response_headers(broker) per_policy_container_headers = [ h for h in resp_headers if h.lower().startswith('x-account-storage-policy-') and h.lower().endswith('-container-count')] self.assertFalse(per_policy_container_headers) for key, value in resp_headers.items(): expected_value = expected.pop(key) self.assertEqual(expected_value, str(value), 'value for %r was %r not %r' % ( key, value, expected_value)) self.assertFalse(expected) swift-2.7.1/test/unit/account/test_auditor.py0000664000567000056710000002336213024044352022473 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from collections import defaultdict import itertools import unittest import mock import time import os import random from tempfile import mkdtemp from shutil import rmtree from eventlet import Timeout from swift.account import auditor from swift.common.storage_policy import POLICIES from swift.common.utils import Timestamp from test.unit import debug_logger, patch_policies, with_tempdir from test.unit.account.test_backend import ( AccountBrokerPreTrackContainerCountSetup) class FakeAccountBroker(object): def __init__(self, path): self.path = path self.db_file = path self.file = os.path.basename(path) def is_deleted(self): return False def get_info(self): if self.file.startswith('fail'): raise ValueError() if self.file.startswith('true'): return defaultdict(int) def get_policy_stats(self, **kwargs): if self.file.startswith('fail'): raise ValueError() if self.file.startswith('true'): return defaultdict(int) class TestAuditor(unittest.TestCase): def setUp(self): self.testdir = os.path.join(mkdtemp(), 'tmp_test_account_auditor') self.logger = debug_logger() rmtree(self.testdir, ignore_errors=1) os.mkdir(self.testdir) fnames = ['true1.db', 'true2.db', 'true3.db', 'fail1.db', 'fail2.db'] for fn in fnames: with open(os.path.join(self.testdir, fn), 'w+') as f: f.write(' ') def tearDown(self): rmtree(os.path.dirname(self.testdir), ignore_errors=1) @mock.patch('swift.account.auditor.AccountBroker', FakeAccountBroker) def test_run_forever(self): sleep_times = random.randint(5, 10) call_times = sleep_times - 1 class FakeTime(object): def __init__(self): self.times = 0 def sleep(self, sec): self.times += 1 if self.times >= sleep_times: # stop forever by an error raise ValueError() def time(self): return time.time() conf = {} test_auditor = auditor.AccountAuditor(conf, logger=self.logger) with mock.patch('swift.account.auditor.time', FakeTime()): def fake_audit_location_generator(*args, **kwargs): files = os.listdir(self.testdir) return [(os.path.join(self.testdir, f), '', '') for f in files] with mock.patch('swift.account.auditor.audit_location_generator', fake_audit_location_generator): self.assertRaises(ValueError, test_auditor.run_forever) self.assertEqual(test_auditor.account_failures, 2 * call_times) self.assertEqual(test_auditor.account_passes, 3 * call_times) # now force timeout path code coverage def fake_one_audit_pass(reported): raise Timeout() with mock.patch('swift.account.auditor.AccountAuditor._one_audit_pass', fake_one_audit_pass): with mock.patch('swift.account.auditor.time', FakeTime()): self.assertRaises(ValueError, test_auditor.run_forever) self.assertEqual(test_auditor.account_failures, 2 * call_times) self.assertEqual(test_auditor.account_passes, 3 * call_times) @mock.patch('swift.account.auditor.AccountBroker', FakeAccountBroker) def test_run_once(self): conf = {} test_auditor = auditor.AccountAuditor(conf, logger=self.logger) def fake_audit_location_generator(*args, **kwargs): files = os.listdir(self.testdir) return [(os.path.join(self.testdir, f), '', '') for f in files] with mock.patch('swift.account.auditor.audit_location_generator', fake_audit_location_generator): 
test_auditor.run_once() self.assertEqual(test_auditor.account_failures, 2) self.assertEqual(test_auditor.account_passes, 3) @mock.patch('swift.account.auditor.AccountBroker', FakeAccountBroker) def test_one_audit_pass(self): conf = {} test_auditor = auditor.AccountAuditor(conf, logger=self.logger) def fake_audit_location_generator(*args, **kwargs): files = os.listdir(self.testdir) return [(os.path.join(self.testdir, f), '', '') for f in files] # force code coverage for logging path test_auditor.logging_interval = 0 with mock.patch('swift.account.auditor.audit_location_generator', fake_audit_location_generator): test_auditor._one_audit_pass(test_auditor.logging_interval) self.assertEqual(test_auditor.account_failures, 0) self.assertEqual(test_auditor.account_passes, 0) @mock.patch('swift.account.auditor.AccountBroker', FakeAccountBroker) def test_account_auditor(self): conf = {} test_auditor = auditor.AccountAuditor(conf, logger=self.logger) files = os.listdir(self.testdir) for f in files: path = os.path.join(self.testdir, f) test_auditor.account_audit(path) self.assertEqual(test_auditor.account_failures, 2) self.assertEqual(test_auditor.account_passes, 3) @patch_policies class TestAuditorRealBrokerMigration( AccountBrokerPreTrackContainerCountSetup, unittest.TestCase): def test_db_migration(self): # add a few containers policies = itertools.cycle(POLICIES) num_containers = len(POLICIES) * 3 per_policy_container_counts = defaultdict(int) for i in range(num_containers): name = 'test-container-%02d' % i policy = next(policies) self.broker.put_container(name, next(self.ts), 0, 0, 0, int(policy)) per_policy_container_counts[int(policy)] += 1 self.broker._commit_puts() self.assertEqual(num_containers, self.broker.get_info()['container_count']) # still un-migrated self.assertUnmigrated(self.broker) # run auditor, and validate migration conf = {'devices': self.tempdir, 'mount_check': False, 'recon_cache_path': self.tempdir} test_auditor = auditor.AccountAuditor(conf, logger=debug_logger()) test_auditor.run_once() self.restore_account_broker() broker = auditor.AccountBroker(self.db_path) # go after rows directly to avoid unintentional migration with broker.get() as conn: rows = conn.execute(''' SELECT storage_policy_index, container_count FROM policy_stat ''').fetchall() for policy_index, container_count in rows: self.assertEqual(container_count, per_policy_container_counts[policy_index]) class TestAuditorRealBroker(unittest.TestCase): def setUp(self): self.logger = debug_logger() @with_tempdir def test_db_validate_fails(self, tempdir): ts = (Timestamp(t).internal for t in itertools.count(int(time.time()))) db_path = os.path.join(tempdir, 'sda', 'accounts', '0', '0', '0', 'test.db') broker = auditor.AccountBroker(db_path, account='a') broker.initialize(next(ts)) # add a few containers policies = itertools.cycle(POLICIES) num_containers = len(POLICIES) * 3 per_policy_container_counts = defaultdict(int) for i in range(num_containers): name = 'test-container-%02d' % i policy = next(policies) broker.put_container(name, next(ts), 0, 0, 0, int(policy)) per_policy_container_counts[int(policy)] += 1 broker._commit_puts() self.assertEqual(broker.get_info()['container_count'], num_containers) messed_up_policy = random.choice(list(POLICIES)) # now mess up a policy_stats table count with broker.get() as conn: conn.executescript(''' UPDATE policy_stat SET container_count = container_count - 1 WHERE storage_policy_index = %d; ''' % int(messed_up_policy)) # validate it's messed up policy_stats = 
broker.get_policy_stats() self.assertEqual( policy_stats[int(messed_up_policy)]['container_count'], per_policy_container_counts[int(messed_up_policy)] - 1) # do an audit conf = {'devices': tempdir, 'mount_check': False, 'recon_cache_path': tempdir} test_auditor = auditor.AccountAuditor(conf, logger=self.logger) test_auditor.run_once() # validate errors self.assertEqual(test_auditor.account_failures, 1) error_lines = test_auditor.logger.get_lines_for_level('error') self.assertEqual(len(error_lines), 1) error_message = error_lines[0] self.assertTrue(broker.db_file in error_message) self.assertTrue('container_count' in error_message) self.assertTrue('does not match' in error_message) self.assertEqual(test_auditor.logger.get_increment_counts(), {'failures': 1}) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/account/test_server.py0000664000567000056710000030007713024044354022335 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import errno import os import mock import unittest from tempfile import mkdtemp from shutil import rmtree from time import gmtime from test.unit import FakeLogger import itertools import random import json from six import BytesIO from six import StringIO import xml.dom.minidom from swift import __version__ as swift_version from swift.common.swob import (Request, WsgiBytesIO, HTTPNoContent) from swift.common import constraints from swift.account.server import AccountController from swift.common.utils import (normalize_timestamp, replication, public, mkdirs, storage_directory) from swift.common.request_helpers import get_sys_meta_prefix from test.unit import patch_policies, debug_logger from swift.common.storage_policy import StoragePolicy, POLICIES @patch_policies class TestAccountController(unittest.TestCase): """Test swift.account.server.AccountController""" def setUp(self): """Set up for testing swift.account.server.AccountController""" self.testdir_base = mkdtemp() self.testdir = os.path.join(self.testdir_base, 'account_server') self.controller = AccountController( {'devices': self.testdir, 'mount_check': 'false'}) def tearDown(self): """Tear down for testing swift.account.server.AccountController""" try: rmtree(self.testdir_base) except OSError as err: if err.errno != errno.ENOENT: raise def test_OPTIONS(self): server_handler = AccountController( {'devices': self.testdir, 'mount_check': 'false'}) req = Request.blank('/sda1/p/a/c/o', {'REQUEST_METHOD': 'OPTIONS'}) req.content_length = 0 resp = server_handler.OPTIONS(req) self.assertEqual(200, resp.status_int) for verb in 'OPTIONS GET POST PUT DELETE HEAD REPLICATE'.split(): self.assertTrue( verb in resp.headers['Allow'].split(', ')) self.assertEqual(len(resp.headers['Allow'].split(', ')), 7) self.assertEqual(resp.headers['Server'], (server_handler.server_type + '/' + swift_version)) def test_DELETE_not_found(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '0'}) resp = 
req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertTrue('X-Account-Status' not in resp.headers) def test_DELETE_empty(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['X-Account-Status'], 'Deleted') def test_DELETE_not_empty(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) # We now allow deleting non-empty accounts self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['X-Account-Status'], 'Deleted') def test_DELETE_now_empty(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank( '/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '2', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['X-Account-Status'], 'Deleted') def test_DELETE_invalid_partition(self): req = Request.blank('/sda1/./a', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_DELETE_timestamp_not_float(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': 'not-float'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_DELETE_insufficient_storage(self): self.controller = AccountController({'devices': self.testdir}) req = Request.blank( '/sda-null/p/a', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 507) def test_REPLICATE_insufficient_storage(self): conf = {'devices': self.testdir, 'mount_check': 'true'} self.account_controller = AccountController(conf) def fake_check_mount(*args, **kwargs): return False with mock.patch("swift.common.constraints.check_mount", fake_check_mount): req = Request.blank('/sda1/p/suff', environ={'REQUEST_METHOD': 'REPLICATE'}, headers={}) resp = req.get_response(self.account_controller) self.assertEqual(resp.status_int, 507) def test_REPLICATE_works(self): mkdirs(os.path.join(self.testdir, 'sda1', 'account', 'p', 'a', 'a')) db_file = 
os.path.join(self.testdir, 'sda1', storage_directory('account', 'p', 'a'), 'a' + '.db') open(db_file, 'w') def fake_rsync_then_merge(self, drive, db_file, args): return HTTPNoContent() with mock.patch("swift.common.db_replicator.ReplicatorRpc." "rsync_then_merge", fake_rsync_then_merge): req = Request.blank('/sda1/p/a/', environ={'REQUEST_METHOD': 'REPLICATE'}, headers={}) json_string = '["rsync_then_merge", "a.db"]' inbuf = WsgiBytesIO(json_string) req.environ['wsgi.input'] = inbuf resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) # check valuerror wsgi_input_valuerror = '["sync" : sync, "-1"]' inbuf1 = WsgiBytesIO(wsgi_input_valuerror) req.environ['wsgi.input'] = inbuf1 resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_HEAD_not_found(self): # Test the case in which account does not exist (can be recreated) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertTrue('X-Account-Status' not in resp.headers) # Test the case in which account was deleted but not yet reaped req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Account-Status'], 'Deleted') def test_HEAD_empty_account(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['x-account-container-count'], '0') self.assertEqual(resp.headers['x-account-object-count'], '0') self.assertEqual(resp.headers['x-account-bytes-used'], '0') def test_HEAD_with_containers(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '2', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['x-account-container-count'], '2') self.assertEqual(resp.headers['x-account-object-count'], '0') self.assertEqual(resp.headers['x-account-bytes-used'], '0') req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '1', 'X-Bytes-Used': '2', 'X-Timestamp': 
normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '2', 'X-Delete-Timestamp': '0', 'X-Object-Count': '3', 'X-Bytes-Used': '4', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD', 'HTTP_X_TIMESTAMP': '5'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['x-account-container-count'], '2') self.assertEqual(resp.headers['x-account-object-count'], '4') self.assertEqual(resp.headers['x-account-bytes-used'], '6') def test_HEAD_invalid_partition(self): req = Request.blank('/sda1/./a', environ={'REQUEST_METHOD': 'HEAD', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_HEAD_invalid_content_type(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}, headers={'Accept': 'application/plain'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 406) def test_HEAD_insufficient_storage(self): self.controller = AccountController({'devices': self.testdir}) req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'HEAD', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 507) def test_HEAD_invalid_format(self): format = '%D1%BD%8A9' # invalid UTF-8; should be %E1%BD%8A9 (E -> D) req = Request.blank('/sda1/p/a?format=' + format, environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_PUT_not_found(self): req = Request.blank( '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-PUT-Timestamp': normalize_timestamp(1), 'X-DELETE-Timestamp': normalize_timestamp(0), 'X-Object-Count': '1', 'X-Bytes-Used': '1', 'X-Timestamp': normalize_timestamp(0)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertTrue('X-Account-Status' not in resp.headers) def test_PUT(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) def test_PUT_simulated_create_race(self): state = ['initial'] from swift.account.backend import AccountBroker as OrigAcBr class InterceptedAcBr(OrigAcBr): def __init__(self, *args, **kwargs): super(InterceptedAcBr, self).__init__(*args, **kwargs) if state[0] == 'initial': # Do nothing initially pass elif state[0] == 'race': # Save the original db_file attribute value self._saved_db_file = self.db_file self.db_file += '.doesnotexist' def initialize(self, *args, **kwargs): if state[0] == 'initial': # Do nothing initially pass elif state[0] == 'race': # Restore the original db_file attribute to get the race # behavior self.db_file = self._saved_db_file return super(InterceptedAcBr, self).initialize(*args, **kwargs) with mock.patch("swift.account.server.AccountBroker", InterceptedAcBr): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) state[0] = "race" req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = 
req.get_response(self.controller) self.assertEqual(resp.status_int, 202) def test_PUT_after_DELETE(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(1)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': normalize_timestamp(1)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(2)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 403) self.assertEqual(resp.body, 'Recently deleted') self.assertEqual(resp.headers['X-Account-Status'], 'Deleted') def test_PUT_GET_metadata(self): # Set metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(1), 'X-Account-Meta-Test': 'Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-account-meta-test'), 'Value') # Set another metadata header, ensuring old one doesn't disappear req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(1), 'X-Account-Meta-Test2': 'Value2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-account-meta-test'), 'Value') self.assertEqual(resp.headers.get('x-account-meta-test2'), 'Value2') # Update metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(3), 'X-Account-Meta-Test': 'New Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-account-meta-test'), 'New Value') # Send old update to metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(2), 'X-Account-Meta-Test': 'Old Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-account-meta-test'), 'New Value') # Remove metadata header (by setting it to empty) req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(4), 'X-Account-Meta-Test': ''}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertTrue('x-account-meta-test' not in resp.headers) def test_PUT_GET_sys_metadata(self): prefix = get_sys_meta_prefix('account') hdr = '%stest' % prefix hdr2 = '%stest2' % prefix # Set metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 
'PUT'}, headers={'X-Timestamp': normalize_timestamp(1), hdr.title(): 'Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(hdr), 'Value') # Set another metadata header, ensuring old one doesn't disappear req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(1), hdr2.title(): 'Value2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(hdr), 'Value') self.assertEqual(resp.headers.get(hdr2), 'Value2') # Update metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(3), hdr.title(): 'New Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(hdr), 'New Value') # Send old update to metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(2), hdr.title(): 'Old Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(hdr), 'New Value') # Remove metadata header (by setting it to empty) req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(4), hdr.title(): ''}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 202) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertTrue(hdr not in resp.headers) def test_PUT_invalid_partition(self): req = Request.blank('/sda1/./a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_PUT_insufficient_storage(self): self.controller = AccountController({'devices': self.testdir}) req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 507) def test_POST_HEAD_metadata(self): req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(1)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # Set metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(1), 'X-Account-Meta-Test': 'Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-account-meta-test'), 'Value') # Update metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, 
headers={'X-Timestamp': normalize_timestamp(3), 'X-Account-Meta-Test': 'New Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-account-meta-test'), 'New Value') # Send old update to metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(2), 'X-Account-Meta-Test': 'Old Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get('x-account-meta-test'), 'New Value') # Remove metadata header (by setting it to empty) req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(4), 'X-Account-Meta-Test': ''}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertTrue('x-account-meta-test' not in resp.headers) def test_POST_HEAD_sys_metadata(self): prefix = get_sys_meta_prefix('account') hdr = '%stest' % prefix req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': normalize_timestamp(1)}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # Set metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(1), hdr.title(): 'Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(hdr), 'Value') # Update metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(3), hdr.title(): 'New Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(hdr), 'New Value') # Send old update to metadata header req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(2), hdr.title(): 'Old Value'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers.get(hdr), 'New Value') # Remove metadata header (by setting it to empty) req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'POST'}, headers={'X-Timestamp': normalize_timestamp(4), hdr.title(): ''}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertTrue(hdr not in resp.headers) def test_POST_invalid_partition(self): req = Request.blank('/sda1/./a', 
environ={'REQUEST_METHOD': 'POST', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_POST_timestamp_not_float(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'POST', 'HTTP_X_TIMESTAMP': '0'}, headers={'X-Timestamp': 'not-float'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 400) def test_POST_insufficient_storage(self): self.controller = AccountController({'devices': self.testdir}) req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'POST', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 507) def test_POST_after_DELETE_not_found(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'POST', 'HTTP_X_TIMESTAMP': '2'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Account-Status'], 'Deleted') def test_GET_not_found_plain(self): # Test the case in which account does not exist (can be recreated) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertTrue('X-Account-Status' not in resp.headers) # Test the case in which account was deleted but not yet reaped req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertEqual(resp.headers['X-Account-Status'], 'Deleted') def test_GET_not_found_json(self): req = Request.blank('/sda1/p/a?format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_GET_not_found_xml(self): req = Request.blank('/sda1/p/a?format=xml', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) def test_GET_empty_account_plain(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 204) self.assertEqual(resp.headers['Content-Type'], 'text/plain; charset=utf-8') def test_GET_empty_account_json(self): req = Request.blank('/sda1/p/a?format=json', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a?format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['Content-Type'], 'application/json; 
charset=utf-8') def test_GET_empty_account_xml(self): req = Request.blank('/sda1/p/a?format=xml', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a?format=xml', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.headers['Content-Type'], 'application/xml; charset=utf-8') def test_GET_over_limit(self): req = Request.blank( '/sda1/p/a?limit=%d' % (constraints.ACCOUNT_LISTING_LIMIT + 1), environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 412) def test_GET_with_containers_plain(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '2', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body.strip().split('\n'), ['c1', 'c2']) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '1', 'X-Bytes-Used': '2', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '2', 'X-Delete-Timestamp': '0', 'X-Object-Count': '3', 'X-Bytes-Used': '4', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body.strip().split('\n'), ['c1', 'c2']) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(resp.charset, 'utf-8') # test unknown format uses default plain req = Request.blank('/sda1/p/a?format=somethinglese', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(resp.body.strip().split('\n'), ['c1', 'c2']) self.assertEqual(resp.content_type, 'text/plain') self.assertEqual(resp.charset, 'utf-8') def test_GET_with_containers_json(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '2', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a?format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) 
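# --- Editor's note: illustrative sketch only, not part of the original
# test module. The assertions around this point exercise the JSON account
# listing, in which each container appears as a dict with 'name', 'count'
# and 'bytes' keys. The helper name summarize_listing is hypothetical and
# merely shows how such a listing body could be rolled up by a client.
def summarize_listing(body):
    """Return (container count, total objects, total bytes) for a JSON
    account listing body."""
    import json  # test_server.py already imports json at module level
    containers = json.loads(body)
    return (len(containers),
            sum(c['count'] for c in containers),
            sum(c['bytes'] for c in containers))
# e.g. summarize_listing('[{"count": 1, "bytes": 2, "name": "c1"}]')
# returns (1, 1, 2).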
self.assertEqual(json.loads(resp.body), [{'count': 0, 'bytes': 0, 'name': 'c1'}, {'count': 0, 'bytes': 0, 'name': 'c2'}]) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '1', 'X-Bytes-Used': '2', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '2', 'X-Delete-Timestamp': '0', 'X-Object-Count': '3', 'X-Bytes-Used': '4', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a?format=json', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) self.assertEqual(json.loads(resp.body), [{'count': 1, 'bytes': 2, 'name': 'c1'}, {'count': 3, 'bytes': 4, 'name': 'c2'}]) self.assertEqual(resp.content_type, 'application/json') self.assertEqual(resp.charset, 'utf-8') def test_GET_with_containers_xml(self): req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '2', 'X-Delete-Timestamp': '0', 'X-Object-Count': '0', 'X-Bytes-Used': '0', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a?format=xml', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.content_type, 'application/xml') self.assertEqual(resp.status_int, 200) dom = xml.dom.minidom.parseString(resp.body) self.assertEqual(dom.firstChild.nodeName, 'account') listing = \ [n for n in dom.firstChild.childNodes if n.nodeName != '#text'] self.assertEqual(len(listing), 2) self.assertEqual(listing[0].nodeName, 'container') container = [n for n in listing[0].childNodes if n.nodeName != '#text'] self.assertEqual(sorted([n.nodeName for n in container]), ['bytes', 'count', 'name']) node = [n for n in container if n.nodeName == 'name'][0] self.assertEqual(node.firstChild.nodeValue, 'c1') node = [n for n in container if n.nodeName == 'count'][0] self.assertEqual(node.firstChild.nodeValue, '0') node = [n for n in container if n.nodeName == 'bytes'][0] self.assertEqual(node.firstChild.nodeValue, '0') self.assertEqual(listing[-1].nodeName, 'container') container = \ [n for n in listing[-1].childNodes if n.nodeName != '#text'] self.assertEqual(sorted([n.nodeName for n in container]), ['bytes', 'count', 'name']) node = [n for n in container if n.nodeName == 'name'][0] self.assertEqual(node.firstChild.nodeValue, 'c2') node = [n for n in container if n.nodeName == 'count'][0] self.assertEqual(node.firstChild.nodeValue, '0') node = [n for n in container if n.nodeName == 'bytes'][0] self.assertEqual(node.firstChild.nodeValue, '0') req = Request.blank('/sda1/p/a/c1', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '1', 'X-Delete-Timestamp': '0', 'X-Object-Count': '1', 'X-Bytes-Used': '2', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Put-Timestamp': '2', 'X-Delete-Timestamp': '0', 'X-Object-Count': '3', 
'X-Bytes-Used': '4', 'X-Timestamp': normalize_timestamp(0)}) req.get_response(self.controller) req = Request.blank('/sda1/p/a?format=xml', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 200) dom = xml.dom.minidom.parseString(resp.body) self.assertEqual(dom.firstChild.nodeName, 'account') listing = \ [n for n in dom.firstChild.childNodes if n.nodeName != '#text'] self.assertEqual(len(listing), 2) self.assertEqual(listing[0].nodeName, 'container') container = [n for n in listing[0].childNodes if n.nodeName != '#text'] self.assertEqual(sorted([n.nodeName for n in container]), ['bytes', 'count', 'name']) node = [n for n in container if n.nodeName == 'name'][0] self.assertEqual(node.firstChild.nodeValue, 'c1') node = [n for n in container if n.nodeName == 'count'][0] self.assertEqual(node.firstChild.nodeValue, '1') node = [n for n in container if n.nodeName == 'bytes'][0] self.assertEqual(node.firstChild.nodeValue, '2') self.assertEqual(listing[-1].nodeName, 'container') container = [ n for n in listing[-1].childNodes if n.nodeName != '#text'] self.assertEqual(sorted([n.nodeName for n in container]), ['bytes', 'count', 'name']) node = [n for n in container if n.nodeName == 'name'][0] self.assertEqual(node.firstChild.nodeValue, 'c2') node = [n for n in container if n.nodeName == 'count'][0] self.assertEqual(node.firstChild.nodeValue, '3') node = [n for n in container if n.nodeName == 'bytes'][0] self.assertEqual(node.firstChild.nodeValue, '4') self.assertEqual(resp.charset, 'utf-8') def test_GET_xml_escapes_account_name(self): req = Request.blank( '/sda1/p/%22%27', # "' environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank( '/sda1/p/%22%27?format=xml', environ={'REQUEST_METHOD': 'GET', 'HTTP_X_TIMESTAMP': '1'}) resp = req.get_response(self.controller) dom = xml.dom.minidom.parseString(resp.body) self.assertEqual(dom.firstChild.attributes['name'].value, '"\'') def test_GET_xml_escapes_container_name(self): req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '0'}) req.get_response(self.controller) req = Request.blank( '/sda1/p/a/%22%3Cword', # "

answer = ['<html><h1>Method Not Allowed</h1><p>The method is not ' 'allowed for this resource.</p></html>

'] mock_method = replication(public(lambda x: mock.MagicMock())) with mock.patch.object(self.controller, method, new=mock_method): mock_method.replication = True response = self.controller.__call__(env, start_response) self.assertEqual(response, answer) def test_call_incorrect_replication_method(self): inbuf = BytesIO() errbuf = StringIO() outbuf = StringIO() self.controller = AccountController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'true'}) def start_response(*args): """Sends args to outbuf""" outbuf.writelines(args) obj_methods = ['DELETE', 'PUT', 'HEAD', 'GET', 'POST', 'OPTIONS'] for method in obj_methods: env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} self.controller(env, start_response) self.assertEqual(errbuf.getvalue(), '') self.assertEqual(outbuf.getvalue()[:4], '405 ') def test__call__raise_timeout(self): inbuf = WsgiBytesIO() errbuf = StringIO() outbuf = StringIO() self.logger = debug_logger('test') self.account_controller = AccountController( {'devices': self.testdir, 'mount_check': 'false', 'replication_server': 'false', 'log_requests': 'false'}, logger=self.logger) def start_response(*args): # Sends args to outbuf outbuf.writelines(args) method = 'PUT' env = {'REQUEST_METHOD': method, 'SCRIPT_NAME': '', 'PATH_INFO': '/sda1/p/a/c', 'SERVER_NAME': '127.0.0.1', 'SERVER_PORT': '8080', 'SERVER_PROTOCOL': 'HTTP/1.0', 'CONTENT_LENGTH': '0', 'wsgi.version': (1, 0), 'wsgi.url_scheme': 'http', 'wsgi.input': inbuf, 'wsgi.errors': errbuf, 'wsgi.multithread': False, 'wsgi.multiprocess': False, 'wsgi.run_once': False} @public def mock_put_method(*args, **kwargs): raise Exception() with mock.patch.object(self.account_controller, method, new=mock_put_method): response = self.account_controller.__call__(env, start_response) self.assertTrue(response[0].startswith( 'Traceback (most recent call last):')) self.assertEqual(self.logger.get_lines_for_level('error'), [ 'ERROR __call__ error with %(method)s %(path)s : ' % { 'method': 'PUT', 'path': '/sda1/p/a/c'}, ]) self.assertEqual(self.logger.get_lines_for_level('info'), []) def test_GET_log_requests_true(self): self.controller.logger = FakeLogger() self.controller.log_requests = True req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertTrue(self.controller.logger.log_dict['info']) def test_GET_log_requests_false(self): self.controller.logger = FakeLogger() self.controller.log_requests = False req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'GET'}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 404) self.assertFalse(self.controller.logger.log_dict['info']) def test_log_line_format(self): req = Request.blank( '/sda1/p/a', environ={'REQUEST_METHOD': 'HEAD', 'REMOTE_ADDR': '1.2.3.4'}) self.controller.logger = FakeLogger() with mock.patch( 'time.gmtime', mock.MagicMock(side_effect=[gmtime(10001.0)])): with mock.patch( 'time.time', mock.MagicMock(side_effect=[10000.0, 10001.0, 10002.0])): with mock.patch( 'os.getpid', mock.MagicMock(return_value=1234)): req.get_response(self.controller) self.assertEqual( self.controller.logger.log_dict['info'], [(('1.2.3.4 - - 
[01/Jan/1970:02:46:41 +0000] "HEAD /sda1/p/a" 404 ' '- "-" "-" "-" 2.0000 "-" 1234 -',), {})]) def test_policy_stats_with_legacy(self): ts = itertools.count() # create the account req = Request.blank('/sda1/p/a', method='PUT', headers={ 'X-Timestamp': normalize_timestamp(next(ts))}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity # add a container req = Request.blank('/sda1/p/a/c1', method='PUT', headers={ 'X-Put-Timestamp': normalize_timestamp(next(ts)), 'X-Delete-Timestamp': '0', 'X-Object-Count': '2', 'X-Bytes-Used': '4', }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # read back rollup for method in ('GET', 'HEAD'): req = Request.blank('/sda1/p/a', method=method) resp = req.get_response(self.controller) self.assertEqual(resp.status_int // 100, 2) self.assertEqual(resp.headers['X-Account-Object-Count'], '2') self.assertEqual(resp.headers['X-Account-Bytes-Used'], '4') self.assertEqual( resp.headers['X-Account-Storage-Policy-%s-Object-Count' % POLICIES[0].name], '2') self.assertEqual( resp.headers['X-Account-Storage-Policy-%s-Bytes-Used' % POLICIES[0].name], '4') self.assertEqual( resp.headers['X-Account-Storage-Policy-%s-Container-Count' % POLICIES[0].name], '1') def test_policy_stats_non_default(self): ts = itertools.count() # create the account req = Request.blank('/sda1/p/a', method='PUT', headers={ 'X-Timestamp': normalize_timestamp(next(ts))}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity # add a container non_default_policies = [p for p in POLICIES if not p.is_default] policy = random.choice(non_default_policies) req = Request.blank('/sda1/p/a/c1', method='PUT', headers={ 'X-Put-Timestamp': normalize_timestamp(next(ts)), 'X-Delete-Timestamp': '0', 'X-Object-Count': '2', 'X-Bytes-Used': '4', 'X-Backend-Storage-Policy-Index': policy.idx, }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # read back rollup for method in ('GET', 'HEAD'): req = Request.blank('/sda1/p/a', method=method) resp = req.get_response(self.controller) self.assertEqual(resp.status_int // 100, 2) self.assertEqual(resp.headers['X-Account-Object-Count'], '2') self.assertEqual(resp.headers['X-Account-Bytes-Used'], '4') self.assertEqual( resp.headers['X-Account-Storage-Policy-%s-Object-Count' % policy.name], '2') self.assertEqual( resp.headers['X-Account-Storage-Policy-%s-Bytes-Used' % policy.name], '4') self.assertEqual( resp.headers['X-Account-Storage-Policy-%s-Container-Count' % policy.name], '1') def test_empty_policy_stats(self): ts = itertools.count() # create the account req = Request.blank('/sda1/p/a', method='PUT', headers={ 'X-Timestamp': normalize_timestamp(next(ts))}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity for method in ('GET', 'HEAD'): req = Request.blank('/sda1/p/a', method=method) resp = req.get_response(self.controller) self.assertEqual(resp.status_int // 100, 2) for key in resp.headers: self.assertTrue('storage-policy' not in key.lower()) def test_empty_except_for_used_policies(self): ts = itertools.count() # create the account req = Request.blank('/sda1/p/a', method='PUT', headers={ 'X-Timestamp': normalize_timestamp(next(ts))}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity # starts empty for method in ('GET', 'HEAD'): req = Request.blank('/sda1/p/a', method=method) resp = req.get_response(self.controller) self.assertEqual(resp.status_int // 100, 2) for key 
in resp.headers: self.assertTrue('storage-policy' not in key.lower()) # add a container policy = random.choice(POLICIES) req = Request.blank('/sda1/p/a/c1', method='PUT', headers={ 'X-Put-Timestamp': normalize_timestamp(next(ts)), 'X-Delete-Timestamp': '0', 'X-Object-Count': '2', 'X-Bytes-Used': '4', 'X-Backend-Storage-Policy-Index': policy.idx, }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # only policy of the created container should be in headers for method in ('GET', 'HEAD'): req = Request.blank('/sda1/p/a', method=method) resp = req.get_response(self.controller) self.assertEqual(resp.status_int // 100, 2) for key in resp.headers: if 'storage-policy' in key.lower(): self.assertTrue(policy.name.lower() in key.lower()) def test_multiple_policies_in_use(self): ts = itertools.count() # create the account req = Request.blank('/sda1/p/a', method='PUT', headers={ 'X-Timestamp': normalize_timestamp(next(ts))}) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) # sanity # add some containers for policy in POLICIES: count = policy.idx * 100 # good as any integer container_path = '/sda1/p/a/c_%s' % policy.name req = Request.blank( container_path, method='PUT', headers={ 'X-Put-Timestamp': normalize_timestamp(next(ts)), 'X-Delete-Timestamp': '0', 'X-Object-Count': count, 'X-Bytes-Used': count, 'X-Backend-Storage-Policy-Index': policy.idx, }) resp = req.get_response(self.controller) self.assertEqual(resp.status_int, 201) req = Request.blank('/sda1/p/a', method='HEAD') resp = req.get_response(self.controller) self.assertEqual(resp.status_int // 100, 2) # check container counts in roll up headers total_object_count = 0 total_bytes_used = 0 for key in resp.headers: if 'storage-policy' not in key.lower(): continue for policy in POLICIES: if policy.name.lower() not in key.lower(): continue if key.lower().endswith('object-count'): object_count = int(resp.headers[key]) self.assertEqual(policy.idx * 100, object_count) total_object_count += object_count if key.lower().endswith('bytes-used'): bytes_used = int(resp.headers[key]) self.assertEqual(policy.idx * 100, bytes_used) total_bytes_used += bytes_used expected_total_count = sum([p.idx * 100 for p in POLICIES]) self.assertEqual(expected_total_count, total_object_count) self.assertEqual(expected_total_count, total_bytes_used) @patch_policies([StoragePolicy(0, 'zero', False), StoragePolicy(1, 'one', True), StoragePolicy(2, 'two', False), StoragePolicy(3, 'three', False)]) class TestNonLegacyDefaultStoragePolicy(TestAccountController): pass if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/account/test_reaper.py0000664000567000056710000006702413024044354022307 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import os import time import random import shutil import tempfile import unittest from logging import DEBUG from mock import patch, call, DEFAULT import six from swift.account import reaper from swift.account.backend import DATADIR from swift.common.exceptions import ClientException from swift.common.utils import normalize_timestamp from test import unit from swift.common.storage_policy import StoragePolicy, POLICIES class FakeLogger(object): def __init__(self, *args, **kwargs): self.inc = {'return_codes.4': 0, 'return_codes.2': 0, 'objects_failures': 0, 'objects_deleted': 0, 'objects_remaining': 0, 'objects_possibly_remaining': 0, 'containers_failures': 0, 'containers_deleted': 0, 'containers_remaining': 0, 'containers_possibly_remaining': 0} self.exp = [] def info(self, msg, *args): self.msg = msg def error(self, msg, *args): self.msg = msg def timing_since(*args, **kwargs): pass def getEffectiveLevel(self): return DEBUG def exception(self, *args): self.exp.append(args) def increment(self, key): self.inc[key] += 1 class FakeBroker(object): def __init__(self): self.info = {} def get_info(self): return self.info class FakeAccountBroker(object): def __init__(self, containers): self.containers = containers self.containers_yielded = [] def get_info(self): info = {'account': 'a', 'delete_timestamp': time.time() - 10} return info def list_containers_iter(self, *args): for cont in self.containers: yield cont, None, None, None def is_status_deleted(self): return True def empty(self): return False class FakeRing(object): def __init__(self): self.nodes = [{'id': '1', 'ip': '10.10.10.1', 'port': 6002, 'device': None}, {'id': '2', 'ip': '10.10.10.2', 'port': 6002, 'device': None}, {'id': '3', 'ip': '10.10.10.3', 'port': 6002, 'device': None}, ] def get_nodes(self, *args, **kwargs): return ('partition', self.nodes) def get_part_nodes(self, *args, **kwargs): return self.nodes acc_nodes = [{'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}] cont_nodes = [{'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}] @unit.patch_policies([StoragePolicy(0, 'zero', False, object_ring=unit.FakeRing()), StoragePolicy(1, 'one', True, object_ring=unit.FakeRing(replicas=4))]) class TestReaper(unittest.TestCase): def setUp(self): self.to_delete = [] self.myexp = ClientException("", http_host=None, http_port=None, http_device=None, http_status=404, http_reason=None ) def tearDown(self): for todel in self.to_delete: shutil.rmtree(todel) def fake_direct_delete_object(self, *args, **kwargs): if self.amount_fail < self.max_fail: self.amount_fail += 1 raise self.myexp def fake_direct_delete_container(self, *args, **kwargs): if self.amount_delete_fail < self.max_delete_fail: self.amount_delete_fail += 1 raise self.myexp def fake_direct_get_container(self, *args, **kwargs): if self.get_fail: raise self.myexp objects = [{'name': 'o1'}, {'name': 'o2'}, {'name': six.text_type('o3')}, {'name': ''}] return None, objects def fake_container_ring(self): return FakeRing() def fake_reap_object(self, *args, **kwargs): if self.reap_obj_fail: raise Exception def prepare_data_dir(self, ts=False): devices_path = tempfile.mkdtemp() # will be deleted by teardown self.to_delete.append(devices_path) path = os.path.join(devices_path, 'sda1', DATADIR) os.makedirs(path) path = os.path.join(path, '100', 'a86', 'a8c682d2472e1720f2d81ff8993aba6') os.makedirs(path) suffix = 'db' if ts: suffix = 'ts' with 
open(os.path.join(path, 'a8c682203aba6.%s' % suffix), 'w') as fd: fd.write('') return devices_path def init_reaper(self, conf=None, myips=None, fakelogger=False): if conf is None: conf = {} if myips is None: myips = ['10.10.10.1'] r = reaper.AccountReaper(conf) r.stats_return_codes = {} r.stats_containers_deleted = 0 r.stats_containers_remaining = 0 r.stats_containers_possibly_remaining = 0 r.stats_objects_deleted = 0 r.stats_objects_remaining = 0 r.stats_objects_possibly_remaining = 0 r.myips = myips if fakelogger: r.logger = unit.debug_logger('test-reaper') return r def fake_reap_account(self, *args, **kwargs): self.called_amount += 1 def fake_account_ring(self): return FakeRing() def test_creation(self): # later config should be extended to assert more config options r = reaper.AccountReaper({'node_timeout': '3.5'}) self.assertEqual(r.node_timeout, 3.5) def test_delay_reaping_conf_default(self): r = reaper.AccountReaper({}) self.assertEqual(r.delay_reaping, 0) r = reaper.AccountReaper({'delay_reaping': ''}) self.assertEqual(r.delay_reaping, 0) def test_delay_reaping_conf_set(self): r = reaper.AccountReaper({'delay_reaping': '123'}) self.assertEqual(r.delay_reaping, 123) def test_delay_reaping_conf_bad_value(self): self.assertRaises(ValueError, reaper.AccountReaper, {'delay_reaping': 'abc'}) def test_reap_warn_after_conf_set(self): conf = {'delay_reaping': '2', 'reap_warn_after': '3'} r = reaper.AccountReaper(conf) self.assertEqual(r.reap_not_done_after, 5) def test_reap_warn_after_conf_bad_value(self): self.assertRaises(ValueError, reaper.AccountReaper, {'reap_warn_after': 'abc'}) def test_reap_delay(self): time_value = [100] def _time(): return time_value[0] time_orig = reaper.time try: reaper.time = _time r = reaper.AccountReaper({'delay_reaping': '10'}) b = FakeBroker() b.info['delete_timestamp'] = normalize_timestamp(110) self.assertFalse(r.reap_account(b, 0, None)) b.info['delete_timestamp'] = normalize_timestamp(100) self.assertFalse(r.reap_account(b, 0, None)) b.info['delete_timestamp'] = normalize_timestamp(90) self.assertFalse(r.reap_account(b, 0, None)) # KeyError raised immediately as reap_account tries to get the # account's name to do the reaping. 
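# (FakeBroker.get_info() returns a dict with no 'account' key, so once the
# delay_reaping window has elapsed the reaper actually proceeds and trips
# over the missing key; the KeyError is therefore the signal that reaping
# was attempted rather than skipped.)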
b.info['delete_timestamp'] = normalize_timestamp(89) self.assertRaises(KeyError, r.reap_account, b, 0, None) b.info['delete_timestamp'] = normalize_timestamp(1) self.assertRaises(KeyError, r.reap_account, b, 0, None) finally: reaper.time = time_orig def test_reap_object(self): conf = { 'mount_check': 'false', } r = reaper.AccountReaper(conf, logger=unit.debug_logger()) mock_path = 'swift.account.reaper.direct_delete_object' for policy in POLICIES: r.reset_stats() with patch(mock_path) as fake_direct_delete: with patch('swift.account.reaper.time') as mock_time: mock_time.return_value = 1429117638.86767 r.reap_object('a', 'c', 'partition', cont_nodes, 'o', policy.idx) mock_time.assert_called_once_with() for i, call_args in enumerate( fake_direct_delete.call_args_list): cnode = cont_nodes[i % len(cont_nodes)] host = '%(ip)s:%(port)s' % cnode device = cnode['device'] headers = { 'X-Container-Host': host, 'X-Container-Partition': 'partition', 'X-Container-Device': device, 'X-Backend-Storage-Policy-Index': policy.idx, 'X-Timestamp': '1429117638.86767' } ring = r.get_object_ring(policy.idx) expected = call(dict(ring.devs[i], index=i), 0, 'a', 'c', 'o', headers=headers, conn_timeout=0.5, response_timeout=10) self.assertEqual(call_args, expected) self.assertEqual(policy.object_ring.replicas - 1, i) self.assertEqual(r.stats_objects_deleted, policy.object_ring.replicas) def test_reap_object_fail(self): r = self.init_reaper({}, fakelogger=True) self.amount_fail = 0 self.max_fail = 1 policy = random.choice(list(POLICIES)) with patch('swift.account.reaper.direct_delete_object', self.fake_direct_delete_object): r.reap_object('a', 'c', 'partition', cont_nodes, 'o', policy.idx) # IMHO, the stat handling in the node loop of reap object is # over indented, but no one has complained, so I'm not inclined # to move it. However it's worth noting we're currently keeping # stats on deletes per *replica* - which is rather obvious from # these tests, but this results is surprising because of some # funny logic to *skip* increments on successful deletes of # replicas until we have more successful responses than # failures. This means that while the first replica doesn't # increment deleted because of the failure, the second one # *does* get successfully deleted, but *also does not* increment # the counter (!?). # # In the three replica case this leaves only the last deleted # object incrementing the counter - in the four replica case # this leaves the last two. 
# # Basically this test will always result in: # deleted == num_replicas - 2 self.assertEqual(r.stats_objects_deleted, policy.object_ring.replicas - 2) self.assertEqual(r.stats_objects_remaining, 1) self.assertEqual(r.stats_objects_possibly_remaining, 1) def test_reap_object_non_exist_policy_index(self): r = self.init_reaper({}, fakelogger=True) r.reap_object('a', 'c', 'partition', cont_nodes, 'o', 2) self.assertEqual(r.stats_objects_deleted, 0) self.assertEqual(r.stats_objects_remaining, 1) self.assertEqual(r.stats_objects_possibly_remaining, 0) @patch('swift.account.reaper.Ring', lambda *args, **kwargs: unit.FakeRing()) def test_reap_container(self): policy = random.choice(list(POLICIES)) r = self.init_reaper({}, fakelogger=True) with patch.multiple('swift.account.reaper', direct_get_container=DEFAULT, direct_delete_object=DEFAULT, direct_delete_container=DEFAULT) as mocks: headers = {'X-Backend-Storage-Policy-Index': policy.idx} obj_listing = [{'name': 'o'}] def fake_get_container(*args, **kwargs): try: obj = obj_listing.pop(0) except IndexError: obj_list = [] else: obj_list = [obj] return headers, obj_list mocks['direct_get_container'].side_effect = fake_get_container with patch('swift.account.reaper.time') as mock_time: mock_time.side_effect = [1429117638.86767, 1429117639.67676] r.reap_container('a', 'partition', acc_nodes, 'c') # verify calls to direct_delete_object mock_calls = mocks['direct_delete_object'].call_args_list self.assertEqual(policy.object_ring.replicas, len(mock_calls)) for call_args in mock_calls: _args, kwargs = call_args self.assertEqual(kwargs['headers'] ['X-Backend-Storage-Policy-Index'], policy.idx) self.assertEqual(kwargs['headers'] ['X-Timestamp'], '1429117638.86767') # verify calls to direct_delete_container self.assertEqual(mocks['direct_delete_container'].call_count, 3) for i, call_args in enumerate( mocks['direct_delete_container'].call_args_list): anode = acc_nodes[i % len(acc_nodes)] host = '%(ip)s:%(port)s' % anode device = anode['device'] headers = { 'X-Account-Host': host, 'X-Account-Partition': 'partition', 'X-Account-Device': device, 'X-Account-Override-Deleted': 'yes', 'X-Timestamp': '1429117639.67676' } ring = r.get_object_ring(policy.idx) expected = call(dict(ring.devs[i], index=i), 0, 'a', 'c', headers=headers, conn_timeout=0.5, response_timeout=10) self.assertEqual(call_args, expected) self.assertEqual(r.stats_objects_deleted, policy.object_ring.replicas) def test_reap_container_get_object_fail(self): r = self.init_reaper({}, fakelogger=True) self.get_fail = True self.reap_obj_fail = False self.amount_delete_fail = 0 self.max_delete_fail = 0 with patch('swift.account.reaper.direct_get_container', self.fake_direct_get_container), \ patch('swift.account.reaper.direct_delete_container', self.fake_direct_delete_container), \ patch('swift.account.reaper.AccountReaper.get_container_ring', self.fake_container_ring), \ patch('swift.account.reaper.AccountReaper.reap_object', self.fake_reap_object): r.reap_container('a', 'partition', acc_nodes, 'c') self.assertEqual(r.logger.get_increment_counts()['return_codes.4'], 1) self.assertEqual(r.stats_containers_deleted, 1) def test_reap_container_partial_fail(self): r = self.init_reaper({}, fakelogger=True) self.get_fail = False self.reap_obj_fail = False self.amount_delete_fail = 0 self.max_delete_fail = 2 with patch('swift.account.reaper.direct_get_container', self.fake_direct_get_container), \ patch('swift.account.reaper.direct_delete_container', self.fake_direct_delete_container), \ 
patch('swift.account.reaper.AccountReaper.get_container_ring', self.fake_container_ring), \ patch('swift.account.reaper.AccountReaper.reap_object', self.fake_reap_object): r.reap_container('a', 'partition', acc_nodes, 'c') self.assertEqual(r.logger.get_increment_counts()['return_codes.4'], 2) self.assertEqual(r.stats_containers_possibly_remaining, 1) def test_reap_container_full_fail(self): r = self.init_reaper({}, fakelogger=True) self.get_fail = False self.reap_obj_fail = False self.amount_delete_fail = 0 self.max_delete_fail = 3 with patch('swift.account.reaper.direct_get_container', self.fake_direct_get_container), \ patch('swift.account.reaper.direct_delete_container', self.fake_direct_delete_container), \ patch('swift.account.reaper.AccountReaper.get_container_ring', self.fake_container_ring), \ patch('swift.account.reaper.AccountReaper.reap_object', self.fake_reap_object): r.reap_container('a', 'partition', acc_nodes, 'c') self.assertEqual(r.logger.get_increment_counts()['return_codes.4'], 3) self.assertEqual(r.stats_containers_remaining, 1) @patch('swift.account.reaper.Ring', lambda *args, **kwargs: unit.FakeRing()) def test_reap_container_non_exist_policy_index(self): r = self.init_reaper({}, fakelogger=True) with patch.multiple('swift.account.reaper', direct_get_container=DEFAULT, direct_delete_object=DEFAULT, direct_delete_container=DEFAULT) as mocks: headers = {'X-Backend-Storage-Policy-Index': 2} obj_listing = [{'name': 'o'}] def fake_get_container(*args, **kwargs): try: obj = obj_listing.pop(0) except IndexError: obj_list = [] else: obj_list = [obj] return headers, obj_list mocks['direct_get_container'].side_effect = fake_get_container r.reap_container('a', 'partition', acc_nodes, 'c') self.assertEqual(r.logger.get_lines_for_level('error'), [ 'ERROR: invalid storage policy index: 2']) def fake_reap_container(self, *args, **kwargs): self.called_amount += 1 self.r.stats_containers_deleted = 1 self.r.stats_objects_deleted = 1 self.r.stats_containers_remaining = 1 self.r.stats_objects_remaining = 1 self.r.stats_containers_possibly_remaining = 1 self.r.stats_objects_possibly_remaining = 1 def test_reap_account(self): containers = ('c1', 'c2', 'c3', '') broker = FakeAccountBroker(containers) self.called_amount = 0 self.r = r = self.init_reaper({}, fakelogger=True) r.start_time = time.time() with patch('swift.account.reaper.AccountReaper.reap_container', self.fake_reap_container), \ patch('swift.account.reaper.AccountReaper.get_account_ring', self.fake_account_ring): nodes = r.get_account_ring().get_part_nodes() for container_shard, node in enumerate(nodes): self.assertTrue( r.reap_account(broker, 'partition', nodes, container_shard=container_shard)) self.assertEqual(self.called_amount, 4) info_lines = r.logger.get_lines_for_level('info') self.assertEqual(len(info_lines), 6) for start_line, stat_line in zip(*[iter(info_lines)] * 2): self.assertEqual(start_line, 'Beginning pass on account a') self.assertTrue(stat_line.find('1 containers deleted')) self.assertTrue(stat_line.find('1 objects deleted')) self.assertTrue(stat_line.find('1 containers remaining')) self.assertTrue(stat_line.find('1 objects remaining')) self.assertTrue(stat_line.find('1 containers possibly remaining')) self.assertTrue(stat_line.find('1 objects possibly remaining')) def test_reap_account_no_container(self): broker = FakeAccountBroker(tuple()) self.r = r = self.init_reaper({}, fakelogger=True) self.called_amount = 0 r.start_time = time.time() with patch('swift.account.reaper.AccountReaper.reap_container', 
self.fake_reap_container), \ patch('swift.account.reaper.AccountReaper.get_account_ring', self.fake_account_ring): nodes = r.get_account_ring().get_part_nodes() self.assertTrue(r.reap_account(broker, 'partition', nodes)) self.assertTrue(r.logger.get_lines_for_level( 'info')[-1].startswith('Completed pass')) self.assertEqual(self.called_amount, 0) def test_reap_device(self): devices = self.prepare_data_dir() self.called_amount = 0 conf = {'devices': devices} r = self.init_reaper(conf) with patch('swift.account.reaper.AccountBroker', FakeAccountBroker), \ patch('swift.account.reaper.AccountReaper.get_account_ring', self.fake_account_ring), \ patch('swift.account.reaper.AccountReaper.reap_account', self.fake_reap_account): r.reap_device('sda1') self.assertEqual(self.called_amount, 1) def test_reap_device_with_ts(self): devices = self.prepare_data_dir(ts=True) self.called_amount = 0 conf = {'devices': devices} r = self.init_reaper(conf=conf) with patch('swift.account.reaper.AccountBroker', FakeAccountBroker), \ patch('swift.account.reaper.AccountReaper.get_account_ring', self.fake_account_ring), \ patch('swift.account.reaper.AccountReaper.reap_account', self.fake_reap_account): r.reap_device('sda1') self.assertEqual(self.called_amount, 0) def test_reap_device_with_not_my_ip(self): devices = self.prepare_data_dir() self.called_amount = 0 conf = {'devices': devices} r = self.init_reaper(conf, myips=['10.10.1.2']) with patch('swift.account.reaper.AccountBroker', FakeAccountBroker), \ patch('swift.account.reaper.AccountReaper.get_account_ring', self.fake_account_ring), \ patch('swift.account.reaper.AccountReaper.reap_account', self.fake_reap_account): r.reap_device('sda1') self.assertEqual(self.called_amount, 0) def test_reap_device_with_sharding(self): devices = self.prepare_data_dir() conf = {'devices': devices} r = self.init_reaper(conf, myips=['10.10.10.2']) container_shard_used = [-1] def fake_reap_account(*args, **kwargs): container_shard_used[0] = kwargs.get('container_shard') with patch('swift.account.reaper.AccountBroker', FakeAccountBroker), \ patch('swift.account.reaper.AccountReaper.get_account_ring', self.fake_account_ring), \ patch('swift.account.reaper.AccountReaper.reap_account', fake_reap_account): r.reap_device('sda1') # 10.10.10.2 is second node from ring self.assertEqual(container_shard_used[0], 1) def test_reap_account_with_sharding(self): devices = self.prepare_data_dir() self.called_amount = 0 conf = {'devices': devices} r = self.init_reaper(conf, myips=['10.10.10.2']) container_reaped = [0] def fake_list_containers_iter(self, *args): for container in self.containers: if container in self.containers_yielded: continue yield container, None, None, None self.containers_yielded.append(container) def fake_reap_container(self, account, account_partition, account_nodes, container): container_reaped[0] += 1 fake_ring = FakeRing() with patch('swift.account.reaper.AccountBroker', FakeAccountBroker), \ patch( 'swift.account.reaper.AccountBroker.list_containers_iter', fake_list_containers_iter), \ patch('swift.account.reaper.AccountReaper.reap_container', fake_reap_container): fake_broker = FakeAccountBroker(['c', 'd', 'e']) r.reap_account(fake_broker, 10, fake_ring.nodes, 0) self.assertEqual(container_reaped[0], 1) fake_broker = FakeAccountBroker(['c', 'd', 'e']) container_reaped[0] = 0 r.reap_account(fake_broker, 10, fake_ring.nodes, 1) self.assertEqual(container_reaped[0], 2) container_reaped[0] = 0 fake_broker = FakeAccountBroker(['c', 'd', 'e']) r.reap_account(fake_broker, 10, 
fake_ring.nodes, 2) self.assertEqual(container_reaped[0], 0) def test_run_once(self): def prepare_data_dir(): devices_path = tempfile.mkdtemp() # will be deleted by teardown self.to_delete.append(devices_path) path = os.path.join(devices_path, 'sda1', DATADIR) os.makedirs(path) return devices_path def init_reaper(devices): r = reaper.AccountReaper({'devices': devices}) return r devices = prepare_data_dir() r = init_reaper(devices) with patch('swift.account.reaper.ismount', lambda x: True): with patch( 'swift.account.reaper.AccountReaper.reap_device') as foo: r.run_once() self.assertEqual(foo.called, 1) with patch('swift.account.reaper.ismount', lambda x: False): with patch( 'swift.account.reaper.AccountReaper.reap_device') as foo: r.run_once() self.assertFalse(foo.called) def test_run_forever(self): def fake_sleep(val): self.val = val def fake_random(): return 1 def fake_run_once(): raise Exception('exit') def init_reaper(): r = reaper.AccountReaper({'interval': 1}) r.run_once = fake_run_once return r r = init_reaper() with patch('swift.account.reaper.sleep', fake_sleep): with patch('swift.account.reaper.random.random', fake_random): try: r.run_forever() except Exception as err: pass self.assertEqual(self.val, 1) self.assertEqual(str(err), 'exit') if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/account/test_replicator.py0000664000567000056710000001317613024044352023172 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
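# A quick orientation for the sync tests below, phrased from the stats they
# assert on (treat this as a sketch of the db replicator's strategies, not a
# spec):
#
#     no_change    - remote DB already matches, nothing to send
#     rsync        - remote DB is missing entirely, ship the whole file
#     remote_merge - remote is missing more rows than per_diff allows, so
#                    rsync the DB aside and merge it on the remote end
#     diff         - remote is only missing a few rows, send just those
#
# Each test fabricates one of these situations and then checks the matching
# counter in daemon.stats.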
import os import time import unittest import shutil from swift.account import replicator, backend, server from swift.common.utils import normalize_timestamp from swift.common.storage_policy import POLICIES from test.unit.common import test_db_replicator class TestReplicatorSync(test_db_replicator.TestReplicatorSync): backend = backend.AccountBroker datadir = server.DATADIR replicator_daemon = replicator.AccountReplicator def test_sync(self): broker = self._get_broker('a', node_index=0) put_timestamp = normalize_timestamp(time.time()) broker.initialize(put_timestamp) # "replicate" to same database daemon = replicator.AccountReplicator({}) part, node = self._get_broker_part_node(broker) info = broker.get_replication_info() success = daemon._repl_to_node(node, broker, part, info) # nothing to do self.assertTrue(success) self.assertEqual(1, daemon.stats['no_change']) def test_sync_remote_missing(self): broker = self._get_broker('a', node_index=0) put_timestamp = time.time() broker.initialize(put_timestamp) # "replicate" to all other nodes part, node = self._get_broker_part_node(broker) daemon = self._run_once(node) # complete rsync self.assertEqual(2, daemon.stats['rsync']) local_info = self._get_broker( 'a', node_index=0).get_info() for i in range(1, 3): remote_broker = self._get_broker('a', node_index=i) self.assertTrue(os.path.exists(remote_broker.db_file)) remote_info = remote_broker.get_info() for k, v in local_info.items(): if k == 'id': continue self.assertEqual(remote_info[k], v, "mismatch remote %s %r != %r" % ( k, remote_info[k], v)) def test_sync_remote_missing_most_rows(self): put_timestamp = time.time() # create "local" broker broker = self._get_broker('a', node_index=0) broker.initialize(put_timestamp) # create "remote" broker remote_broker = self._get_broker('a', node_index=1) remote_broker.initialize(put_timestamp) # add a row to "local" db broker.put_container('/a/c', time.time(), 0, 0, 0, POLICIES.default.idx) # replicate daemon = replicator.AccountReplicator({'per_diff': 1}) def _rsync_file(db_file, remote_file, **kwargs): remote_server, remote_path = remote_file.split('/', 1) dest_path = os.path.join(self.root, remote_path) shutil.copy(db_file, dest_path) return True daemon._rsync_file = _rsync_file part, node = self._get_broker_part_node(remote_broker) info = broker.get_replication_info() success = daemon._repl_to_node(node, broker, part, info) self.assertTrue(success) # row merge self.assertEqual(1, daemon.stats['remote_merge']) local_info = self._get_broker( 'a', node_index=0).get_info() remote_info = self._get_broker( 'a', node_index=1).get_info() for k, v in local_info.items(): if k == 'id': continue self.assertEqual(remote_info[k], v, "mismatch remote %s %r != %r" % ( k, remote_info[k], v)) def test_sync_remote_missing_one_rows(self): put_timestamp = time.time() # create "local" broker broker = self._get_broker('a', node_index=0) broker.initialize(put_timestamp) # create "remote" broker remote_broker = self._get_broker('a', node_index=1) remote_broker.initialize(put_timestamp) # add some rows to both db for i in range(10): put_timestamp = time.time() for db in (broker, remote_broker): path = '/a/c_%s' % i db.put_container(path, put_timestamp, 0, 0, 0, POLICIES.default.idx) # now a row to the "local" broker only broker.put_container('/a/c_missing', time.time(), 0, 0, 0, POLICIES.default.idx) # replicate daemon = replicator.AccountReplicator({}) part, node = self._get_broker_part_node(remote_broker) info = broker.get_replication_info() success = 
daemon._repl_to_node(node, broker, part, info) self.assertTrue(success) # row merge self.assertEqual(1, daemon.stats['diff']) local_info = self._get_broker( 'a', node_index=0).get_info() remote_info = self._get_broker( 'a', node_index=1).get_info() for k, v in local_info.items(): if k == 'id': continue self.assertEqual(remote_info[k], v, "mismatch remote %s %r != %r" % ( k, remote_info[k], v)) if __name__ == '__main__': unittest.main() swift-2.7.1/test/unit/test_locale/0000775000567000056710000000000013024044470020250 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/test_locale/eo/0000775000567000056710000000000013024044470020653 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/test_locale/eo/LC_MESSAGES/0000775000567000056710000000000013024044470022440 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/test_locale/eo/LC_MESSAGES/swift.mo0000664000567000056710000000012313024044352024124 0ustar jenkinsjenkins00000000000000$, 8 Etest messageprova mesaĝoswift-2.7.1/test/unit/test_locale/messages.mo0000664000567000056710000000012313024044352022407 0ustar jenkinsjenkins00000000000000$, 8 Etest messageprova mesaĝoswift-2.7.1/test/unit/test_locale/eo.po0000664000567000056710000000005413024044352021211 0ustar jenkinsjenkins00000000000000msgid "test message" msgstr "prova mesaĝo" swift-2.7.1/test/unit/test_locale/__init__.py0000664000567000056710000000000013024044352022346 0ustar jenkinsjenkins00000000000000swift-2.7.1/test/unit/test_locale/test_locale.py0000664000567000056710000000526713024044352023131 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # coding: utf-8 # Copyright (c) 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
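# How the translation test below works, in brief: it re-runs this very file
# as a child process with LC_ALL=eo and SWIFT_LOCALEDIR pointed at this
# directory, so gettext loads the compiled Esperanto catalogue
# (eo/LC_MESSAGES/swift.mo, built from eo.po with msgfmt as noted in the
# README).  The child prints the translated string and the parent captures
# it with check_output() and compares it against 'prova mesaĝo'.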
from __future__ import print_function import eventlet import os import unittest import sys threading = eventlet.patcher.original('threading') try: from subprocess import check_output except ImportError: from subprocess import Popen, PIPE, CalledProcessError def check_output(*popenargs, **kwargs): """Lifted from python 2.7 stdlib.""" if 'stdout' in kwargs: raise ValueError('stdout argument not allowed, it will be ' 'overridden.') process = Popen(stdout=PIPE, *popenargs, **kwargs) output, unused_err = process.communicate() retcode = process.poll() if retcode: cmd = kwargs.get("args") if cmd is None: cmd = popenargs[0] raise CalledProcessError(retcode, cmd, output=output) return output class TestTranslations(unittest.TestCase): def setUp(self): self.orig_env = {} for var in 'LC_ALL', 'SWIFT_LOCALEDIR', 'LANGUAGE': self.orig_env[var] = os.environ.get(var) os.environ['LC_ALL'] = 'eo' os.environ['SWIFT_LOCALEDIR'] = os.path.dirname(__file__) os.environ['LANGUAGE'] = '' self.orig_stop = threading._DummyThread._Thread__stop # See http://stackoverflow.com/questions/13193278/\ # understand-python-threading-bug threading._DummyThread._Thread__stop = lambda x: 42 def tearDown(self): for var, val in self.orig_env.items(): if val is not None: os.environ[var] = val else: del os.environ[var] threading._DummyThread._Thread__stop = self.orig_stop def test_translations(self): path = ':'.join(sys.path) translated_message = check_output(['python', __file__, path]) self.assertEqual(translated_message, 'prova mesaĝo\n') if __name__ == "__main__": os.environ['LC_ALL'] = 'eo' os.environ['SWIFT_LOCALEDIR'] = os.path.dirname(__file__) sys.path = sys.argv[1].split(':') from swift import gettext_ as _ print(_('test message')) swift-2.7.1/test/unit/test_locale/README0000664000567000056710000000011213024044352021121 0ustar jenkinsjenkins00000000000000rebuild the .mo with msgfmt (included with GNU gettext) msgfmt eo.po swift-2.7.1/test/probe/0000775000567000056710000000000013024044470016102 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/probe/test_object_handoff.py0000775000567000056710000003122313024044354022453 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
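# A rough outline of the scenario TestObjectHandoff.test_main drives
# (summarised here for orientation; the authoritative steps are the inline
# comments in the test itself):
#
#   1. kill one object primary, PUT an object   -> the proxy writes the
#      third copy to a handoff node
#   2. kill the remaining primaries              -> the GET must be served
#      from that handoff
#   3. restart everything and run the replicators -> the revived primary gets
#      the object back and the handoff removes its extra partition
#   4. repeat the dance for DELETE               -> the tombstone replicates
#      the same way and the handoff copy disappears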
from unittest import main from uuid import uuid4 import random from hashlib import md5 from collections import defaultdict from swiftclient import client from swift.common import direct_client from swift.common.exceptions import ClientException from swift.common.manager import Manager from test.probe.common import (kill_server, start_server, ReplProbeTest, ECProbeTest, Body) class TestObjectHandoff(ReplProbeTest): def test_main(self): # Create container container = 'container-%s' % uuid4() client.put_container(self.url, self.token, container, headers={'X-Storage-Policy': self.policy.name}) # Kill one container/obj primary server cpart, cnodes = self.container_ring.get_nodes(self.account, container) cnode = cnodes[0] obj = 'object-%s' % uuid4() opart, onodes = self.object_ring.get_nodes( self.account, container, obj) onode = onodes[0] kill_server((onode['ip'], onode['port']), self.ipport2server, self.pids) # Create container/obj (goes to two primary servers and one handoff) client.put_object(self.url, self.token, container, obj, 'VERIFY') odata = client.get_object(self.url, self.token, container, obj)[-1] if odata != 'VERIFY': raise Exception('Object GET did not return VERIFY, instead it ' 'returned: %s' % repr(odata)) # Kill other two container/obj primary servers # to ensure GET handoff works for node in onodes[1:]: kill_server((node['ip'], node['port']), self.ipport2server, self.pids) # Indirectly through proxy assert we can get container/obj odata = client.get_object(self.url, self.token, container, obj)[-1] if odata != 'VERIFY': raise Exception('Object GET did not return VERIFY, instead it ' 'returned: %s' % repr(odata)) # Restart those other two container/obj primary servers for node in onodes[1:]: start_server((node['ip'], node['port']), self.ipport2server, self.pids) # We've indirectly verified the handoff node has the container/object, # but let's directly verify it. 
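# (object_ring.get_more_nodes(opart) yields handoff nodes for the partition
# in order, so the first one it returns should be the node the proxy fell
# back to while the primary was down -- an assumption about handoff ordering
# that holds for the small rings these probe tests run against.)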
another_onode = next(self.object_ring.get_more_nodes(opart)) odata = direct_client.direct_get_object( another_onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx})[-1] if odata != 'VERIFY': raise Exception('Direct object GET did not return VERIFY, instead ' 'it returned: %s' % repr(odata)) # Assert container listing (via proxy and directly) has container/obj objs = [o['name'] for o in client.get_container(self.url, self.token, container)[1]] if obj not in objs: raise Exception('Container listing did not know about object') for cnode in cnodes: objs = [o['name'] for o in direct_client.direct_get_container( cnode, cpart, self.account, container)[1]] if obj not in objs: raise Exception( 'Container server %s:%s did not know about object' % (cnode['ip'], cnode['port'])) # Bring the first container/obj primary server back up start_server((onode['ip'], onode['port']), self.ipport2server, self.pids) # Assert that it doesn't have container/obj yet try: direct_client.direct_get_object( onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx}) except ClientException as err: self.assertEqual(err.http_status, 404) else: self.fail("Expected ClientException but didn't get it") # Run object replication, ensuring we run the handoff node last so it # will remove its extra handoff partition for node in onodes: try: port_num = node['replication_port'] except KeyError: port_num = node['port'] node_id = (port_num - 6000) / 10 Manager(['object-replicator']).once(number=node_id) try: another_port_num = another_onode['replication_port'] except KeyError: another_port_num = another_onode['port'] another_num = (another_port_num - 6000) / 10 Manager(['object-replicator']).once(number=another_num) # Assert the first container/obj primary server now has container/obj odata = direct_client.direct_get_object( onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx})[-1] if odata != 'VERIFY': raise Exception('Direct object GET did not return VERIFY, instead ' 'it returned: %s' % repr(odata)) # Assert the handoff server no longer has container/obj try: direct_client.direct_get_object( another_onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx}) except ClientException as err: self.assertEqual(err.http_status, 404) else: self.fail("Expected ClientException but didn't get it") # Kill the first container/obj primary server again (we have two # primaries and the handoff up now) kill_server((onode['ip'], onode['port']), self.ipport2server, self.pids) # Delete container/obj try: client.delete_object(self.url, self.token, container, obj) except client.ClientException as err: if self.object_ring.replica_count > 2: raise # Object DELETE returning 503 for (404, 204) # remove this with fix for # https://bugs.launchpad.net/swift/+bug/1318375 self.assertEqual(503, err.http_status) # Assert we can't head container/obj try: client.head_object(self.url, self.token, container, obj) except client.ClientException as err: self.assertEqual(err.http_status, 404) else: self.fail("Expected ClientException but didn't get it") # Assert container/obj is not in the container listing, both indirectly # and directly objs = [o['name'] for o in client.get_container(self.url, self.token, container)[1]] if obj in objs: raise Exception('Container listing still knew about object') for cnode in cnodes: objs = [o['name'] for o in direct_client.direct_get_container( 
cnode, cpart, self.account, container)[1]] if obj in objs: raise Exception( 'Container server %s:%s still knew about object' % (cnode['ip'], cnode['port'])) # Restart the first container/obj primary server again start_server((onode['ip'], onode['port']), self.ipport2server, self.pids) # Assert it still has container/obj direct_client.direct_get_object( onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx}) # Run object replication, ensuring we run the handoff node last so it # will remove its extra handoff partition for node in onodes: try: port_num = node['replication_port'] except KeyError: port_num = node['port'] node_id = (port_num - 6000) / 10 Manager(['object-replicator']).once(number=node_id) another_node_id = (another_port_num - 6000) / 10 Manager(['object-replicator']).once(number=another_node_id) # Assert primary node no longer has container/obj try: direct_client.direct_get_object( another_onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx}) except ClientException as err: self.assertEqual(err.http_status, 404) else: self.fail("Expected ClientException but didn't get it") class TestECObjectHandoffOverwrite(ECProbeTest): def get_object(self, container_name, object_name): headers, body = client.get_object(self.url, self.token, container_name, object_name, resp_chunk_size=64 * 2 ** 10) resp_checksum = md5() for chunk in body: resp_checksum.update(chunk) return resp_checksum.hexdigest() def test_ec_handoff_overwrite(self): container_name = 'container-%s' % uuid4() object_name = 'object-%s' % uuid4() # create EC container headers = {'X-Storage-Policy': self.policy.name} client.put_container(self.url, self.token, container_name, headers=headers) # PUT object old_contents = Body() client.put_object(self.url, self.token, container_name, object_name, contents=old_contents) # get our node lists opart, onodes = self.object_ring.get_nodes( self.account, container_name, object_name) # shutdown one of the primary data nodes failed_primary = random.choice(onodes) failed_primary_device_path = self.device_dir('object', failed_primary) self.kill_drive(failed_primary_device_path) # overwrite our object with some new data new_contents = Body() client.put_object(self.url, self.token, container_name, object_name, contents=new_contents) self.assertNotEqual(new_contents.etag, old_contents.etag) # restore failed primary device self.revive_drive(failed_primary_device_path) # sanity - failed node has old contents req_headers = {'X-Backend-Storage-Policy-Index': int(self.policy)} headers = direct_client.direct_head_object( failed_primary, opart, self.account, container_name, object_name, headers=req_headers) self.assertEqual(headers['X-Object-Sysmeta-EC-Etag'], old_contents.etag) # we have 1 primary with wrong old etag, and we should have 5 with # new etag plus a handoff with the new etag, so killing 2 other # primaries forces proxy to try to GET from all primaries plus handoff. 
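# (Assuming the usual probe EC scheme of 4 data + 2 parity fragments, the
# four surviving fragments carrying the new etag are exactly ec_ndata, the
# minimum the proxy needs to decode the object -- hence the "should be
# enough to rebuild" assertion further down.)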
other_nodes = [n for n in onodes if n != failed_primary] random.shuffle(other_nodes) for node in other_nodes[:2]: self.kill_drive(self.device_dir('object', node)) # sanity, after taking out two primaries we should be down to # only four primaries, one of which has the old etag - but we # also have a handoff with the new etag out there found_frags = defaultdict(int) req_headers = {'X-Backend-Storage-Policy-Index': int(self.policy)} for node in onodes + list(self.object_ring.get_more_nodes(opart)): try: headers = direct_client.direct_head_object( node, opart, self.account, container_name, object_name, headers=req_headers) except Exception: continue found_frags[headers['X-Object-Sysmeta-EC-Etag']] += 1 self.assertEqual(found_frags, { new_contents.etag: 4, # this should be enough to rebuild! old_contents.etag: 1, }) # clear node error limiting Manager(['proxy']).restart() resp_etag = self.get_object(container_name, object_name) self.assertEqual(resp_etag, new_contents.etag) if __name__ == '__main__': main() swift-2.7.1/test/probe/test_replication_servers_working.py0000664000567000056710000001622313024044354025342 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import main from uuid import uuid4 import os import time import shutil from swiftclient import client from swift.obj.diskfile import get_data_dir from test.probe.common import ReplProbeTest from swift.common.utils import readconf def collect_info(path_list): """ Recursive collect dirs and files in path_list directory. :param path_list: start directory for collecting :return files_list, dir_list: tuple of included directories and files """ files_list = [] dir_list = [] for path in path_list: temp_files_list = [] temp_dir_list = [] for root, dirs, files in os.walk(path): temp_files_list += files temp_dir_list += dirs files_list.append(temp_files_list) dir_list.append(temp_dir_list) return files_list, dir_list def find_max_occupancy_node(dir_list): """ Find node with maximum occupancy. :param list_dir: list of directories for each node. :return number: number node in list_dir """ count = 0 number = 0 length = 0 for dirs in dir_list: if length < len(dirs): length = len(dirs) number = count count += 1 return number class TestReplicatorFunctions(ReplProbeTest): """ Class for testing replicators and replication servers. By default configuration - replication servers not used. For testing separate replication servers servers need to change ring's files using set_info command or new ring's files with different port values. """ def test_main(self): # Create one account, container and object file. # Find node with account, container and object replicas. # Delete all directories and files from this node (device). # Wait 60 seconds and check replication results. # Delete directories and files in objects storage without # deleting file "hashes.pkl". # Check, that files not replicated. # Delete file "hashes.pkl". 
# Check, that all files were replicated. path_list = [] data_dir = get_data_dir(self.policy) # Figure out where the devices are for node_id in range(1, 5): conf = readconf(self.configs['object-server'][node_id]) device_path = conf['app:object-server']['devices'] for dev in self.object_ring.devs: if dev['port'] == int(conf['app:object-server']['bind_port']): device = dev['device'] path_list.append(os.path.join(device_path, device)) # Put data to storage nodes container = 'container-%s' % uuid4() client.put_container(self.url, self.token, container, headers={'X-Storage-Policy': self.policy.name}) obj = 'object-%s' % uuid4() client.put_object(self.url, self.token, container, obj, 'VERIFY') # Get all data file information (files_list, dir_list) = collect_info(path_list) num = find_max_occupancy_node(dir_list) test_node = path_list[num] test_node_files_list = [] for files in files_list[num]: if not files.endswith('.pending'): test_node_files_list.append(files) test_node_dir_list = [] for d in dir_list[num]: if not d.startswith('tmp'): test_node_dir_list.append(d) # Run all replicators try: self.replicators.start() # Delete some files for directory in os.listdir(test_node): shutil.rmtree(os.path.join(test_node, directory)) self.assertFalse(os.listdir(test_node)) # We will keep trying these tests until they pass for up to 60s begin = time.time() while True: (new_files_list, new_dir_list) = collect_info([test_node]) try: # Check replicate files and dir for files in test_node_files_list: self.assertTrue(files in new_files_list[0]) for dir in test_node_dir_list: self.assertTrue(dir in new_dir_list[0]) break except Exception: if time.time() - begin > 60: raise time.sleep(1) # Check behavior by deleting hashes.pkl file for directory in os.listdir(os.path.join(test_node, data_dir)): for input_dir in os.listdir(os.path.join( test_node, data_dir, directory)): if os.path.isdir(os.path.join( test_node, data_dir, directory, input_dir)): shutil.rmtree(os.path.join( test_node, data_dir, directory, input_dir)) # We will keep trying these tests until they pass for up to 60s begin = time.time() while True: try: for directory in os.listdir(os.path.join( test_node, data_dir)): for input_dir in os.listdir(os.path.join( test_node, data_dir, directory)): self.assertFalse(os.path.isdir( os.path.join(test_node, data_dir, directory, '/', input_dir))) break except Exception: if time.time() - begin > 60: raise time.sleep(1) for directory in os.listdir(os.path.join(test_node, data_dir)): os.remove(os.path.join( test_node, data_dir, directory, 'hashes.pkl')) # We will keep trying these tests until they pass for up to 60s begin = time.time() while True: try: (new_files_list, new_dir_list) = collect_info([test_node]) # Check replicate files and dirs for files in test_node_files_list: self.assertTrue(files in new_files_list[0]) for directory in test_node_dir_list: self.assertTrue(directory in new_dir_list[0]) break except Exception: if time.time() - begin > 60: raise time.sleep(1) finally: self.replicators.stop() if __name__ == '__main__': main() swift-2.7.1/test/probe/test_object_expirer.py0000664000567000056710000001225113024044354022521 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import random import uuid import unittest from nose import SkipTest from swift.common.internal_client import InternalClient from swift.common.manager import Manager from swift.common.utils import Timestamp from test.probe.common import ReplProbeTest, ENABLED_POLICIES from test.probe.test_container_merge_policy_index import BrainSplitter from swiftclient import client class TestObjectExpirer(ReplProbeTest): def setUp(self): if len(ENABLED_POLICIES) < 2: raise SkipTest('Need more than one policy') self.expirer = Manager(['object-expirer']) self.expirer.start() err = self.expirer.stop() if err: raise SkipTest('Unable to verify object-expirer service') conf_files = [] for server in self.expirer.servers: conf_files.extend(server.conf_files()) conf_file = conf_files[0] self.client = InternalClient(conf_file, 'probe-test', 3) super(TestObjectExpirer, self).setUp() self.container_name = 'container-%s' % uuid.uuid4() self.object_name = 'object-%s' % uuid.uuid4() self.brain = BrainSplitter(self.url, self.token, self.container_name, self.object_name) def test_expirer_object_split_brain(self): old_policy = random.choice(ENABLED_POLICIES) wrong_policy = random.choice([p for p in ENABLED_POLICIES if p != old_policy]) # create an expiring object and a container with the wrong policy self.brain.stop_primary_half() self.brain.put_container(int(old_policy)) self.brain.put_object(headers={'X-Delete-After': 2}) # get the object timestamp metadata = self.client.get_object_metadata( self.account, self.container_name, self.object_name, headers={'X-Backend-Storage-Policy-Index': int(old_policy)}) create_timestamp = Timestamp(metadata['x-timestamp']) self.brain.start_primary_half() # get the expiring object updates in their queue, while we have all # the servers up Manager(['object-updater']).once() self.brain.stop_handoff_half() self.brain.put_container(int(wrong_policy)) # don't start handoff servers, only wrong policy is available # make sure auto-created containers get in the account listing Manager(['container-updater']).once() # this guy should no-op since it's unable to expire the object self.expirer.once() self.brain.start_handoff_half() self.get_to_final_state() # validate object is expired found_in_policy = None metadata = self.client.get_object_metadata( self.account, self.container_name, self.object_name, acceptable_statuses=(4,), headers={'X-Backend-Storage-Policy-Index': int(old_policy)}) self.assertTrue('x-backend-timestamp' in metadata) self.assertEqual(Timestamp(metadata['x-backend-timestamp']), create_timestamp) # but it is still in the listing for obj in self.client.iter_objects(self.account, self.container_name): if self.object_name == obj['name']: break else: self.fail('Did not find listing for %s' % self.object_name) # clear proxy cache client.post_container(self.url, self.token, self.container_name, {}) # run the expirier again after replication self.expirer.once() # object is not in the listing for obj in self.client.iter_objects(self.account, self.container_name): if self.object_name == obj['name']: self.fail('Found listing for %s' % self.object_name) # and validate object is tombstoned 
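# (With acceptable_statuses=(4,) the internal client still hands back the
# response headers for a 404, and a reaped/expired object then shows up as
# an x-backend-timestamp newer than the original create timestamp -- that is
# the tombstone the loop below hunts for, expecting it in exactly one
# policy.)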
found_in_policy = None for policy in ENABLED_POLICIES: metadata = self.client.get_object_metadata( self.account, self.container_name, self.object_name, acceptable_statuses=(4,), headers={'X-Backend-Storage-Policy-Index': int(policy)}) if 'x-backend-timestamp' in metadata: if found_in_policy: self.fail('found object in %s and also %s' % (found_in_policy, policy)) found_in_policy = policy self.assertTrue('x-backend-timestamp' in metadata) self.assertTrue(Timestamp(metadata['x-backend-timestamp']) > create_timestamp) if __name__ == "__main__": unittest.main() swift-2.7.1/test/probe/test_object_async_update.py0000775000567000056710000001123013024044354023521 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import shutil from io import StringIO from tempfile import mkdtemp from textwrap import dedent from unittest import main from uuid import uuid4 from swiftclient import client from swift.common import direct_client, internal_client from swift.common.manager import Manager from test.probe.common import kill_nonprimary_server, \ kill_server, ReplProbeTest, start_server class TestObjectAsyncUpdate(ReplProbeTest): def test_main(self): # Create container container = 'container-%s' % uuid4() client.put_container(self.url, self.token, container) # Kill container servers excepting two of the primaries cpart, cnodes = self.container_ring.get_nodes(self.account, container) cnode = cnodes[0] kill_nonprimary_server(cnodes, self.ipport2server, self.pids) kill_server((cnode['ip'], cnode['port']), self.ipport2server, self.pids) # Create container/obj obj = 'object-%s' % uuid4() client.put_object(self.url, self.token, container, obj, '') # Restart other primary server start_server((cnode['ip'], cnode['port']), self.ipport2server, self.pids) # Assert it does not know about container/obj self.assertFalse(direct_client.direct_get_container( cnode, cpart, self.account, container)[1]) # Run the object-updaters Manager(['object-updater']).once() # Assert the other primary server now knows about container/obj objs = [o['name'] for o in direct_client.direct_get_container( cnode, cpart, self.account, container)[1]] self.assertTrue(obj in objs) class TestUpdateOverrides(ReplProbeTest): """ Use an internal client to PUT an object to proxy server, bypassing gatekeeper so that X-Backend- headers can be included. Verify that the update override headers take effect and override values propagate to the container server. """ def setUp(self): """ Reset all environment and start all servers. 
""" super(TestUpdateOverrides, self).setUp() self.tempdir = mkdtemp() conf_path = os.path.join(self.tempdir, 'internal_client.conf') conf_body = """ [DEFAULT] swift_dir = /etc/swift [pipeline:main] pipeline = catch_errors cache proxy-server [app:proxy-server] use = egg:swift#proxy [filter:cache] use = egg:swift#memcache [filter:catch_errors] use = egg:swift#catch_errors """ with open(conf_path, 'w') as f: f.write(dedent(conf_body)) self.int_client = internal_client.InternalClient(conf_path, 'test', 1) def tearDown(self): super(TestUpdateOverrides, self).tearDown() shutil.rmtree(self.tempdir) def test(self): headers = { 'Content-Type': 'text/plain', 'X-Backend-Container-Update-Override-Etag': 'override-etag', 'X-Backend-Container-Update-Override-Content-Type': 'override-type' } client.put_container(self.url, self.token, 'c1', headers={'X-Storage-Policy': self.policy.name}) self.int_client.upload_object(StringIO(u'stuff'), self.account, 'c1', 'o1', headers) # Run the object-updaters to be sure updates are done Manager(['object-updater']).once() meta = self.int_client.get_object_metadata(self.account, 'c1', 'o1') self.assertEqual('text/plain', meta['content-type']) self.assertEqual('c13d88cb4cb02003daedb8a84e5d272a', meta['etag']) obj_iter = self.int_client.iter_objects(self.account, 'c1') for obj in obj_iter: if obj['name'] == 'o1': self.assertEqual('override-etag', obj['hash']) self.assertEqual('override-type', obj['content_type']) break else: self.fail('Failed to find object o1 in listing') if __name__ == '__main__': main() swift-2.7.1/test/probe/__init__.py0000664000567000056710000000037213024044352020214 0ustar jenkinsjenkins00000000000000from test import get_config from swift.common.utils import config_true_value config = get_config('probe_test') CHECK_SERVER_TIMEOUT = int(config.get('check_server_timeout', 30)) VALIDATE_RSYNC = config_true_value(config.get('validate_rsync', False)) swift-2.7.1/test/probe/test_reconstructor_rebuild.py0000664000567000056710000002006413024044354024140 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from hashlib import md5 import unittest import uuid import shutil import random from collections import defaultdict from test.probe.common import ECProbeTest from swift.common import direct_client from swift.common.storage_policy import EC_POLICY from swift.common.manager import Manager from swift.obj.reconstructor import _get_partners from swiftclient import client class Body(object): def __init__(self, total=3.5 * 2 ** 20): self.total = total self.hasher = md5() self.size = 0 self.chunk = 'test' * 16 * 2 ** 10 @property def etag(self): return self.hasher.hexdigest() def __iter__(self): return self def next(self): if self.size > self.total: raise StopIteration() self.size += len(self.chunk) self.hasher.update(self.chunk) return self.chunk def __next__(self): return next(self) class TestReconstructorRebuild(ECProbeTest): def setUp(self): super(TestReconstructorRebuild, self).setUp() self.container_name = 'container-%s' % uuid.uuid4() self.object_name = 'object-%s' % uuid.uuid4() # sanity self.assertEqual(self.policy.policy_type, EC_POLICY) self.reconstructor = Manager(["object-reconstructor"]) def proxy_get(self): # GET object headers, body = client.get_object(self.url, self.token, self.container_name, self.object_name, resp_chunk_size=64 * 2 ** 10) resp_checksum = md5() for chunk in body: resp_checksum.update(chunk) return resp_checksum.hexdigest() def direct_get(self, node, part): req_headers = {'X-Backend-Storage-Policy-Index': int(self.policy)} headers, data = direct_client.direct_get_object( node, part, self.account, self.container_name, self.object_name, headers=req_headers, resp_chunk_size=64 * 2 ** 20) hasher = md5() for chunk in data: hasher.update(chunk) return hasher.hexdigest() def _check_node(self, node, part, etag, headers_post): # get fragment archive etag fragment_archive_etag = self.direct_get(node, part) # remove data from the selected node part_dir = self.storage_dir('object', node, part=part) shutil.rmtree(part_dir, True) # this node can't servce the data any more try: self.direct_get(node, part) except direct_client.DirectClientException as err: self.assertEqual(err.http_status, 404) else: self.fail('Node data on %r was not fully destoryed!' % (node,)) # make sure we can still GET the object and its correct, the # proxy is doing decode on remaining fragments to get the obj self.assertEqual(etag, self.proxy_get()) # fire up reconstructor self.reconstructor.once() # fragment is rebuilt exactly as it was before! self.assertEqual(fragment_archive_etag, self.direct_get(node, part)) # check meta meta = client.head_object(self.url, self.token, self.container_name, self.object_name) for key in headers_post: self.assertTrue(key in meta) self.assertEqual(meta[key], headers_post[key]) def _format_node(self, node): return '%s#%s' % (node['device'], node['index']) def test_main(self): # create EC container headers = {'X-Storage-Policy': self.policy.name} client.put_container(self.url, self.token, self.container_name, headers=headers) # PUT object contents = Body() headers = {'x-object-meta-foo': 'meta-foo'} headers_post = {'x-object-meta-bar': 'meta-bar'} etag = client.put_object(self.url, self.token, self.container_name, self.object_name, contents=contents, headers=headers) client.post_object(self.url, self.token, self.container_name, self.object_name, headers=headers_post) del headers_post['X-Auth-Token'] # WTF, where did this come from? 
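# (Most likely swiftclient's client.post_object() injected the token into
# the headers_post dict we handed it -- the client mutates the caller's
# dict -- so it is stripped here to keep the later metadata comparison
# limited to the user metadata this test actually set.)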
# built up a list of node lists to kill data from, # first try a single node # then adjacent nodes and then nodes >1 node apart opart, onodes = self.object_ring.get_nodes( self.account, self.container_name, self.object_name) single_node = [random.choice(onodes)] adj_nodes = [onodes[0], onodes[-1]] far_nodes = [onodes[0], onodes[-2]] test_list = [single_node, adj_nodes, far_nodes] for node_list in test_list: for onode in node_list: try: self._check_node(onode, opart, etag, headers_post) except AssertionError as e: self.fail( str(e) + '\n... for node %r of scenario %r' % ( self._format_node(onode), [self._format_node(n) for n in node_list])) def test_rebuild_partner_down(self): # create EC container headers = {'X-Storage-Policy': self.policy.name} client.put_container(self.url, self.token, self.container_name, headers=headers) # PUT object contents = Body() client.put_object(self.url, self.token, self.container_name, self.object_name, contents=contents) opart, onodes = self.object_ring.get_nodes( self.account, self.container_name, self.object_name) # find a primary server that only has one of it's devices in the # primary node list group_nodes_by_config = defaultdict(list) for n in onodes: group_nodes_by_config[self.config_number(n)].append(n) for config_number, node_list in group_nodes_by_config.items(): if len(node_list) == 1: break else: self.fail('ring balancing did not use all available nodes') primary_node = node_list[0] # pick one it's partners to fail randomly partner_node = random.choice(_get_partners( primary_node['index'], onodes)) # 507 the partner device device_path = self.device_dir('object', partner_node) self.kill_drive(device_path) # select another primary sync_to node to fail failed_primary = [n for n in onodes if n['id'] not in (primary_node['id'], partner_node['id'])][0] # ... capture it's fragment etag failed_primary_etag = self.direct_get(failed_primary, opart) # ... and delete it part_dir = self.storage_dir('object', failed_primary, part=opart) shutil.rmtree(part_dir, True) # reconstruct from the primary, while one of it's partners is 507'd self.reconstructor.once(number=self.config_number(primary_node)) # the other failed primary will get it's fragment rebuilt instead self.assertEqual(failed_primary_etag, self.direct_get(failed_primary, opart)) # just to be nice self.revive_drive(device_path) if __name__ == "__main__": unittest.main() swift-2.7.1/test/probe/test_container_failures.py0000775000567000056710000001633013024044354023376 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
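# Probe tests for container-server failures: subsets of a container's primary
# servers are killed around PUT/DELETE operations, and replication is then
# expected to reconcile the container and account listings.  The last test
# also checks that DELETE returns 503 when a majority of the container
# databases are locked by exclusive transactions.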
from os import listdir from os.path import join as path_join from unittest import main from uuid import uuid4 from eventlet import GreenPool, Timeout import eventlet from sqlite3 import connect from swiftclient import client from swift.common import direct_client from swift.common.exceptions import ClientException from swift.common.utils import hash_path, readconf from test.probe.common import kill_nonprimary_server, \ kill_server, ReplProbeTest, start_server eventlet.monkey_patch(all=False, socket=True) def get_db_file_path(obj_dir): files = sorted(listdir(obj_dir), reverse=True) for filename in files: if filename.endswith('db'): return path_join(obj_dir, filename) class TestContainerFailures(ReplProbeTest): def test_one_node_fails(self): # Create container1 container1 = 'container-%s' % uuid4() cpart, cnodes = self.container_ring.get_nodes(self.account, container1) client.put_container(self.url, self.token, container1) # Kill container1 servers excepting two of the primaries kill_nonprimary_server(cnodes, self.ipport2server, self.pids) kill_server((cnodes[0]['ip'], cnodes[0]['port']), self.ipport2server, self.pids) # Delete container1 client.delete_container(self.url, self.token, container1) # Restart other container1 primary server start_server((cnodes[0]['ip'], cnodes[0]['port']), self.ipport2server, self.pids) # Create container1/object1 (allowed because at least server thinks the # container exists) client.put_object(self.url, self.token, container1, 'object1', '123') # Get to a final state self.get_to_final_state() # Assert all container1 servers indicate container1 is alive and # well with object1 for cnode in cnodes: self.assertEqual( [o['name'] for o in direct_client.direct_get_container( cnode, cpart, self.account, container1)[1]], ['object1']) # Assert account level also indicates container1 is alive and # well with object1 headers, containers = client.get_account(self.url, self.token) self.assertEqual(headers['x-account-container-count'], '1') self.assertEqual(headers['x-account-object-count'], '1') self.assertEqual(headers['x-account-bytes-used'], '3') def test_two_nodes_fail(self): # Create container1 container1 = 'container-%s' % uuid4() cpart, cnodes = self.container_ring.get_nodes(self.account, container1) client.put_container(self.url, self.token, container1) # Kill container1 servers excepting one of the primaries cnp_ipport = kill_nonprimary_server(cnodes, self.ipport2server, self.pids) kill_server((cnodes[0]['ip'], cnodes[0]['port']), self.ipport2server, self.pids) kill_server((cnodes[1]['ip'], cnodes[1]['port']), self.ipport2server, self.pids) # Delete container1 directly to the one primary still up direct_client.direct_delete_container(cnodes[2], cpart, self.account, container1) # Restart other container1 servers start_server((cnodes[0]['ip'], cnodes[0]['port']), self.ipport2server, self.pids) start_server((cnodes[1]['ip'], cnodes[1]['port']), self.ipport2server, self.pids) start_server(cnp_ipport, self.ipport2server, self.pids) # Get to a final state self.get_to_final_state() # Assert all container1 servers indicate container1 is gone (happens # because the one node that knew about the delete replicated to the # others.) 
for cnode in cnodes: try: direct_client.direct_get_container(cnode, cpart, self.account, container1) except ClientException as err: self.assertEqual(err.http_status, 404) else: self.fail("Expected ClientException but didn't get it") # Assert account level also indicates container1 is gone headers, containers = client.get_account(self.url, self.token) self.assertEqual(headers['x-account-container-count'], '0') self.assertEqual(headers['x-account-object-count'], '0') self.assertEqual(headers['x-account-bytes-used'], '0') def _get_container_db_files(self, container): opart, onodes = self.container_ring.get_nodes(self.account, container) onode = onodes[0] db_files = [] for onode in onodes: node_id = (onode['port'] - 6000) / 10 device = onode['device'] hash_str = hash_path(self.account, container) server_conf = readconf(self.configs['container-server'][node_id]) devices = server_conf['app:container-server']['devices'] obj_dir = '%s/%s/containers/%s/%s/%s/' % (devices, device, opart, hash_str[-3:], hash_str) db_files.append(get_db_file_path(obj_dir)) return db_files def test_locked_container_dbs(self): def run_test(num_locks, catch_503): container = 'container-%s' % uuid4() client.put_container(self.url, self.token, container) db_files = self._get_container_db_files(container) db_conns = [] for i in range(num_locks): db_conn = connect(db_files[i]) db_conn.execute('begin exclusive transaction') db_conns.append(db_conn) if catch_503: try: client.delete_container(self.url, self.token, container) except client.ClientException as err: self.assertEqual(err.http_status, 503) else: self.fail("Expected ClientException but didn't get it") else: client.delete_container(self.url, self.token, container) pool = GreenPool() try: with Timeout(15): pool.spawn(run_test, 1, False) pool.spawn(run_test, 2, True) pool.spawn(run_test, 3, True) pool.waitall() except Timeout as err: raise Exception( "The server did not return a 503 on container db locks, " "it just hangs: %s" % err) if __name__ == '__main__': main() swift-2.7.1/test/probe/test_container_sync.py0000664000567000056710000002775013024044354022545 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
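# Probe tests for container sync.  A source container is pointed at a
# destination container with two headers:
#
#   X-Container-Sync-To:  //<realm>/<cluster>/<account>/<dest container>
#   X-Container-Sync-Key: shared secret that must match the destination's key
#
# The realm and cluster names are discovered from the proxy's /info endpoint
# (see get_current_realm_cluster below); the tests are skipped when no
# "current" realm/cluster is advertised there.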
import uuid import random from nose import SkipTest import unittest from six.moves.urllib.parse import urlparse from swiftclient import client, ClientException from swift.common.http import HTTP_NOT_FOUND from swift.common.manager import Manager from test.probe.brain import BrainSplitter from test.probe.common import ReplProbeTest, ENABLED_POLICIES def get_current_realm_cluster(url): parts = urlparse(url) url = parts.scheme + '://' + parts.netloc + '/info' http_conn = client.http_connection(url) try: info = client.get_capabilities(http_conn) except client.ClientException: raise SkipTest('Unable to retrieve cluster info') try: realms = info['container_sync']['realms'] except KeyError: raise SkipTest('Unable to find container sync realms') for realm, realm_info in realms.items(): for cluster, options in realm_info['clusters'].items(): if options.get('current', False): return realm, cluster raise SkipTest('Unable find current realm cluster') class TestContainerSync(ReplProbeTest): def setUp(self): super(TestContainerSync, self).setUp() self.realm, self.cluster = get_current_realm_cluster(self.url) def _setup_synced_containers(self, skey='secret', dkey='secret'): # setup dest container dest_container = 'dest-container-%s' % uuid.uuid4() dest_headers = {} dest_policy = None if len(ENABLED_POLICIES) > 1: dest_policy = random.choice(ENABLED_POLICIES) dest_headers['X-Storage-Policy'] = dest_policy.name if dkey is not None: dest_headers['X-Container-Sync-Key'] = dkey client.put_container(self.url, self.token, dest_container, headers=dest_headers) # setup source container source_container = 'source-container-%s' % uuid.uuid4() source_headers = {} sync_to = '//%s/%s/%s/%s' % (self.realm, self.cluster, self.account, dest_container) source_headers['X-Container-Sync-To'] = sync_to if skey is not None: source_headers['X-Container-Sync-Key'] = skey if dest_policy: source_policy = random.choice([p for p in ENABLED_POLICIES if p is not dest_policy]) source_headers['X-Storage-Policy'] = source_policy.name client.put_container(self.url, self.token, source_container, headers=source_headers) return source_container, dest_container def _test_sync(self, object_post_as_copy): source_container, dest_container = self._setup_synced_containers() # upload to source object_name = 'object-%s' % uuid.uuid4() put_headers = {'X-Object-Meta-Test': 'put_value'} client.put_object(self.url, self.token, source_container, object_name, 'test-body', headers=put_headers) # cycle container-sync Manager(['container-sync']).once() resp_headers, body = client.get_object(self.url, self.token, dest_container, object_name) self.assertEqual(body, 'test-body') self.assertIn('x-object-meta-test', resp_headers) self.assertEqual('put_value', resp_headers['x-object-meta-test']) # update metadata with a POST, using an internal client so we can # vary the object_post_as_copy setting - first use post-as-copy post_headers = {'Content-Type': 'image/jpeg', 'X-Object-Meta-Test': 'post_value'} int_client = self.make_internal_client( object_post_as_copy=object_post_as_copy) int_client.set_object_metadata(self.account, source_container, object_name, post_headers) # sanity checks... 
resp_headers = client.head_object( self.url, self.token, source_container, object_name) self.assertIn('x-object-meta-test', resp_headers) self.assertEqual('post_value', resp_headers['x-object-meta-test']) self.assertEqual('image/jpeg', resp_headers['content-type']) # cycle container-sync Manager(['container-sync']).once() # verify that metadata changes were sync'd resp_headers, body = client.get_object(self.url, self.token, dest_container, object_name) self.assertEqual(body, 'test-body') self.assertIn('x-object-meta-test', resp_headers) self.assertEqual('post_value', resp_headers['x-object-meta-test']) self.assertEqual('image/jpeg', resp_headers['content-type']) # delete the object client.delete_object( self.url, self.token, source_container, object_name) with self.assertRaises(ClientException) as cm: client.get_object( self.url, self.token, source_container, object_name) self.assertEqual(404, cm.exception.http_status) # sanity check # cycle container-sync Manager(['container-sync']).once() # verify delete has been sync'd with self.assertRaises(ClientException) as cm: client.get_object( self.url, self.token, dest_container, object_name) self.assertEqual(404, cm.exception.http_status) # sanity check def test_sync_with_post_as_copy(self): self._test_sync(True) def test_sync_with_fast_post(self): self._test_sync(False) def test_sync_lazy_skey(self): # Create synced containers, but with no key at source source_container, dest_container =\ self._setup_synced_containers(None, 'secret') # upload to source object_name = 'object-%s' % uuid.uuid4() client.put_object(self.url, self.token, source_container, object_name, 'test-body') # cycle container-sync, nothing should happen Manager(['container-sync']).once() with self.assertRaises(ClientException) as err: _junk, body = client.get_object(self.url, self.token, dest_container, object_name) self.assertEqual(err.exception.http_status, HTTP_NOT_FOUND) # amend source key source_headers = {'X-Container-Sync-Key': 'secret'} client.put_container(self.url, self.token, source_container, headers=source_headers) # cycle container-sync, should replicate Manager(['container-sync']).once() _junk, body = client.get_object(self.url, self.token, dest_container, object_name) self.assertEqual(body, 'test-body') def test_sync_lazy_dkey(self): # Create synced containers, but with no key at dest source_container, dest_container =\ self._setup_synced_containers('secret', None) # upload to source object_name = 'object-%s' % uuid.uuid4() client.put_object(self.url, self.token, source_container, object_name, 'test-body') # cycle container-sync, nothing should happen Manager(['container-sync']).once() with self.assertRaises(ClientException) as err: _junk, body = client.get_object(self.url, self.token, dest_container, object_name) self.assertEqual(err.exception.http_status, HTTP_NOT_FOUND) # amend dest key dest_headers = {'X-Container-Sync-Key': 'secret'} client.put_container(self.url, self.token, dest_container, headers=dest_headers) # cycle container-sync, should replicate Manager(['container-sync']).once() _junk, body = client.get_object(self.url, self.token, dest_container, object_name) self.assertEqual(body, 'test-body') def test_sync_with_stale_container_rows(self): source_container, dest_container = self._setup_synced_containers() brain = BrainSplitter(self.url, self.token, source_container, None, 'container') # upload to source object_name = 'object-%s' % uuid.uuid4() client.put_object(self.url, self.token, source_container, object_name, 'test-body') # check source 
container listing _, listing = client.get_container( self.url, self.token, source_container) for expected_obj_dict in listing: if expected_obj_dict['name'] == object_name: break else: self.fail('Failed to find source object %r in container listing %r' % (object_name, listing)) # stop all container servers brain.stop_primary_half() brain.stop_handoff_half() # upload new object content to source - container updates will fail client.put_object(self.url, self.token, source_container, object_name, 'new-test-body') source_headers = client.head_object( self.url, self.token, source_container, object_name) # start all container servers brain.start_primary_half() brain.start_handoff_half() # sanity check: source container listing should not have changed _, listing = client.get_container( self.url, self.token, source_container) for actual_obj_dict in listing: if actual_obj_dict['name'] == object_name: self.assertDictEqual(expected_obj_dict, actual_obj_dict) break else: self.fail('Failed to find source object %r in container listing %r' % (object_name, listing)) # cycle container-sync - object should be correctly sync'd despite # stale info in container row Manager(['container-sync']).once() # verify sync'd object has same content and headers dest_headers, body = client.get_object(self.url, self.token, dest_container, object_name) self.assertEqual(body, 'new-test-body') mismatched_headers = [] for k in ('etag', 'content-length', 'content-type', 'x-timestamp', 'last-modified'): if source_headers[k] == dest_headers[k]: continue mismatched_headers.append((k, source_headers[k], dest_headers[k])) if mismatched_headers: msg = '\n'.join([('Mismatched header %r, expected %r but got %r' % item) for item in mismatched_headers]) self.fail(msg) def test_sync_newer_remote(self): source_container, dest_container = self._setup_synced_containers() # upload to source object_name = 'object-%s' % uuid.uuid4() client.put_object(self.url, self.token, source_container, object_name, 'old-source-body') # upload to dest with same name client.put_object(self.url, self.token, dest_container, object_name, 'new-test-body') # cycle container-sync Manager(['container-sync']).once() # verify that the remote object did not change resp_headers, body = client.get_object(self.url, self.token, dest_container, object_name) self.assertEqual(body, 'new-test-body') if __name__ == "__main__": unittest.main() swift-2.7.1/test/probe/test_reconstructor_revert.py0000775000567000056710000003473413024044354024035 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
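# Probe tests for EC fragment revert: primary devices are failed with
# kill_drive() (unmount or rename of the device dir, so the object-server
# answers 507), which forces writes onto handoff nodes.  Once the devices are
# revived, running the object-reconstructor on the handoff nodes should
# revert fragments (and tombstones) back to the primaries and leave the
# handoffs empty.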
from hashlib import md5 import itertools import unittest import uuid import random import shutil from collections import defaultdict from test.probe.common import ECProbeTest, Body from swift.common import direct_client from swift.common.storage_policy import EC_POLICY from swift.common.manager import Manager from swift.obj import reconstructor from swiftclient import client class TestReconstructorRevert(ECProbeTest): def setUp(self): super(TestReconstructorRevert, self).setUp() self.container_name = 'container-%s' % uuid.uuid4() self.object_name = 'object-%s' % uuid.uuid4() # sanity self.assertEqual(self.policy.policy_type, EC_POLICY) self.reconstructor = Manager(["object-reconstructor"]) def proxy_get(self): # GET object headers, body = client.get_object(self.url, self.token, self.container_name, self.object_name, resp_chunk_size=64 * 2 ** 10) resp_checksum = md5() for chunk in body: resp_checksum.update(chunk) return resp_checksum.hexdigest() def direct_get(self, node, part): req_headers = {'X-Backend-Storage-Policy-Index': int(self.policy)} headers, data = direct_client.direct_get_object( node, part, self.account, self.container_name, self.object_name, headers=req_headers, resp_chunk_size=64 * 2 ** 20) hasher = md5() for chunk in data: hasher.update(chunk) return hasher.hexdigest() def test_revert_object(self): # create EC container headers = {'X-Storage-Policy': self.policy.name} client.put_container(self.url, self.token, self.container_name, headers=headers) # get our node lists opart, onodes = self.object_ring.get_nodes( self.account, self.container_name, self.object_name) hnodes = self.object_ring.get_more_nodes(opart) # kill 2 primary nodes (a parity count's worth) so we can # force data onto handoffs; we do that by renaming dev dirs # to induce 507s p_dev1 = self.device_dir('object', onodes[0]) p_dev2 = self.device_dir('object', onodes[1]) self.kill_drive(p_dev1) self.kill_drive(p_dev2) # PUT object contents = Body() headers = {'x-object-meta-foo': 'meta-foo'} headers_post = {'x-object-meta-bar': 'meta-bar'} client.put_object(self.url, self.token, self.container_name, self.object_name, contents=contents, headers=headers) client.post_object(self.url, self.token, self.container_name, self.object_name, headers=headers_post) del headers_post['X-Auth-Token'] # WTF, where did this come from? # these primaries can't serve the data any more, we expect 507 # here and not 404 because we're using mount_check to kill nodes for onode in (onodes[0], onodes[1]): try: self.direct_get(onode, opart) except direct_client.DirectClientException as err: self.assertEqual(err.http_status, 507) else: self.fail('Node data on %r was not fully destroyed!' % (onode,)) # now take out another primary p_dev3 = self.device_dir('object', onodes[2]) self.kill_drive(p_dev3) # this node can't serve the data any more try: self.direct_get(onodes[2], opart) except direct_client.DirectClientException as err: self.assertEqual(err.http_status, 507) else: self.fail('Node data on %r was not fully destroyed!'
% (onode,)) # make sure we can still GET the object and its correct # we're now pulling from handoffs and reconstructing etag = self.proxy_get() self.assertEqual(etag, contents.etag) # rename the dev dirs so they don't 507 anymore self.revive_drive(p_dev1) self.revive_drive(p_dev2) self.revive_drive(p_dev3) # fire up reconstructor on handoff nodes only for hnode in hnodes: hnode_id = (hnode['port'] - 6000) / 10 self.reconstructor.once(number=hnode_id) # first three primaries have data again for onode in (onodes[0], onodes[2]): self.direct_get(onode, opart) # check meta meta = client.head_object(self.url, self.token, self.container_name, self.object_name) for key in headers_post: self.assertTrue(key in meta) self.assertEqual(meta[key], headers_post[key]) # handoffs are empty for hnode in hnodes: try: self.direct_get(hnode, opart) except direct_client.DirectClientException as err: self.assertEqual(err.http_status, 404) else: self.fail('Node data on %r was not fully destroyed!' % (hnode,)) def test_delete_propagate(self): # create EC container headers = {'X-Storage-Policy': self.policy.name} client.put_container(self.url, self.token, self.container_name, headers=headers) # get our node lists opart, onodes = self.object_ring.get_nodes( self.account, self.container_name, self.object_name) hnodes = list(itertools.islice( self.object_ring.get_more_nodes(opart), 2)) # PUT object contents = Body() client.put_object(self.url, self.token, self.container_name, self.object_name, contents=contents) # now lets shut down a couple primaries failed_nodes = random.sample(onodes, 2) for node in failed_nodes: self.kill_drive(self.device_dir('object', node)) # Write tombstones over the nodes that are still online client.delete_object(self.url, self.token, self.container_name, self.object_name) # spot check the primary nodes that are still online delete_timestamp = None for node in onodes: if node in failed_nodes: continue try: self.direct_get(node, opart) except direct_client.DirectClientException as err: self.assertEqual(err.http_status, 404) delete_timestamp = err.http_headers['X-Backend-Timestamp'] else: self.fail('Node data on %r was not fully destroyed!' % (node,)) # repair the first primary self.revive_drive(self.device_dir('object', failed_nodes[0])) # run the reconstructor on the *second* handoff node self.reconstructor.once(number=self.config_number(hnodes[1])) # make sure it's tombstone was pushed out try: self.direct_get(hnodes[1], opart) except direct_client.DirectClientException as err: self.assertEqual(err.http_status, 404) self.assertNotIn('X-Backend-Timestamp', err.http_headers) else: self.fail('Found obj data on %r' % hnodes[1]) # ... and it's on the first failed (now repaired) primary try: self.direct_get(failed_nodes[0], opart) except direct_client.DirectClientException as err: self.assertEqual(err.http_status, 404) self.assertEqual(err.http_headers['X-Backend-Timestamp'], delete_timestamp) else: self.fail('Found obj data on %r' % failed_nodes[0]) # repair the second primary self.revive_drive(self.device_dir('object', failed_nodes[1])) # run the reconstructor on the *first* handoff node self.reconstructor.once(number=self.config_number(hnodes[0])) # make sure it's tombstone was pushed out try: self.direct_get(hnodes[0], opart) except direct_client.DirectClientException as err: self.assertEqual(err.http_status, 404) self.assertNotIn('X-Backend-Timestamp', err.http_headers) else: self.fail('Found obj data on %r' % hnodes[0]) # ... and now it's on the second failed primary too! 
try: self.direct_get(failed_nodes[1], opart) except direct_client.DirectClientException as err: self.assertEqual(err.http_status, 404) self.assertEqual(err.http_headers['X-Backend-Timestamp'], delete_timestamp) else: self.fail('Found obj data on %r' % failed_nodes[1]) # sanity make sure proxy get can't find it try: self.proxy_get() except Exception as err: self.assertEqual(err.http_status, 404) else: self.fail('Node data on %r was not fully destroyed!' % (onodes[0])) def test_reconstruct_from_reverted_fragment_archive(self): headers = {'X-Storage-Policy': self.policy.name} client.put_container(self.url, self.token, self.container_name, headers=headers) # get our node lists opart, onodes = self.object_ring.get_nodes( self.account, self.container_name, self.object_name) # find a primary server that only has one of it's devices in the # primary node list group_nodes_by_config = defaultdict(list) for n in onodes: group_nodes_by_config[self.config_number(n)].append(n) for config_number, node_list in group_nodes_by_config.items(): if len(node_list) == 1: break else: self.fail('ring balancing did not use all available nodes') primary_node = node_list[0] # ... and 507 it's device primary_device = self.device_dir('object', primary_node) self.kill_drive(primary_device) # PUT object contents = Body() etag = client.put_object(self.url, self.token, self.container_name, self.object_name, contents=contents) self.assertEqual(contents.etag, etag) # fix the primary device and sanity GET self.revive_drive(primary_device) self.assertEqual(etag, self.proxy_get()) # find a handoff holding the fragment for hnode in self.object_ring.get_more_nodes(opart): try: reverted_fragment_etag = self.direct_get(hnode, opart) except direct_client.DirectClientException as err: if err.http_status != 404: raise else: break else: self.fail('Unable to find handoff fragment!') # we'll force the handoff device to revert instead of potentially # racing with rebuild by deleting any other fragments that may be on # the same server handoff_fragment_etag = None for node in onodes: if self.is_local_to(node, hnode): # we'll keep track of the etag of this fragment we're removing # in case we need it later (queue forshadowing music)... try: handoff_fragment_etag = self.direct_get(node, opart) except direct_client.DirectClientException as err: if err.http_status != 404: raise # this just means our handoff device was on the same # machine as the primary! continue # use the primary nodes device - not the hnode device part_dir = self.storage_dir('object', node, part=opart) shutil.rmtree(part_dir, True) # revert from handoff device with reconstructor self.reconstructor.once(number=self.config_number(hnode)) # verify fragment reverted to primary server self.assertEqual(reverted_fragment_etag, self.direct_get(primary_node, opart)) # now we'll remove some data on one of the primary node's partners partner = random.choice(reconstructor._get_partners( primary_node['index'], onodes)) try: rebuilt_fragment_etag = self.direct_get(partner, opart) except direct_client.DirectClientException as err: if err.http_status != 404: raise # partner already had it's fragment removed if (handoff_fragment_etag is not None and self.is_local_to(hnode, partner)): # oh, well that makes sense then... rebuilt_fragment_etag = handoff_fragment_etag else: # I wonder what happened? 
self.fail('Partner inexplicably missing fragment!') part_dir = self.storage_dir('object', partner, part=opart) shutil.rmtree(part_dir, True) # sanity, it's gone try: self.direct_get(partner, opart) except direct_client.DirectClientException as err: if err.http_status != 404: raise else: self.fail('successful GET of removed partner fragment archive!?') # and force the primary node to do a rebuild self.reconstructor.once(number=self.config_number(primary_node)) # and validate the partners rebuilt_fragment_etag try: self.assertEqual(rebuilt_fragment_etag, self.direct_get(partner, opart)) except direct_client.DirectClientException as err: if err.http_status != 404: raise else: self.fail('Did not find rebuilt fragment on partner node') if __name__ == "__main__": unittest.main() swift-2.7.1/test/probe/test_object_failures.py0000775000567000056710000001716313024044354022667 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import time from os import listdir, unlink from os.path import join as path_join from unittest import main from uuid import uuid4 from swiftclient import client from swift.common import direct_client from swift.common.exceptions import ClientException from swift.common.utils import hash_path, readconf from swift.obj.diskfile import write_metadata, read_metadata, get_data_dir from test.probe.common import ReplProbeTest RETRIES = 5 def get_data_file_path(obj_dir): files = [] # We might need to try a few times if a request hasn't yet settled. For # instance, a PUT can return success when just 2 of 3 nodes has completed. 
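    # The directory listing is reverse-sorted, so the first entry returned is
    # the most recent .data/.meta/.ts file for the object; listdir() is
    # retried once a second, up to RETRIES times, in case the hash dir has
    # not appeared yet.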
for attempt in range(RETRIES + 1): try: files = sorted(listdir(obj_dir), reverse=True) break except Exception: if attempt < RETRIES: time.sleep(1) else: raise for filename in files: return path_join(obj_dir, filename) class TestObjectFailures(ReplProbeTest): def _setup_data_file(self, container, obj, data): client.put_container(self.url, self.token, container, headers={'X-Storage-Policy': self.policy.name}) client.put_object(self.url, self.token, container, obj, data) odata = client.get_object(self.url, self.token, container, obj)[-1] self.assertEqual(odata, data) opart, onodes = self.object_ring.get_nodes( self.account, container, obj) onode = onodes[0] node_id = (onode['port'] - 6000) / 10 device = onode['device'] hash_str = hash_path(self.account, container, obj) obj_server_conf = readconf(self.configs['object-server'][node_id]) devices = obj_server_conf['app:object-server']['devices'] obj_dir = '%s/%s/%s/%s/%s/%s/' % (devices, device, get_data_dir(self.policy), opart, hash_str[-3:], hash_str) data_file = get_data_file_path(obj_dir) return onode, opart, data_file def run_quarantine(self): container = 'container-%s' % uuid4() obj = 'object-%s' % uuid4() onode, opart, data_file = self._setup_data_file(container, obj, 'VERIFY') metadata = read_metadata(data_file) metadata['ETag'] = 'badetag' write_metadata(data_file, metadata) odata = direct_client.direct_get_object( onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx})[-1] self.assertEqual(odata, 'VERIFY') try: direct_client.direct_get_object( onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx}) raise Exception("Did not quarantine object") except ClientException as err: self.assertEqual(err.http_status, 404) def run_quarantine_range_etag(self): container = 'container-range-%s' % uuid4() obj = 'object-range-%s' % uuid4() onode, opart, data_file = self._setup_data_file(container, obj, 'RANGE') metadata = read_metadata(data_file) metadata['ETag'] = 'badetag' write_metadata(data_file, metadata) base_headers = {'X-Backend-Storage-Policy-Index': self.policy.idx} for header, result in [({'Range': 'bytes=0-2'}, 'RAN'), ({'Range': 'bytes=1-11'}, 'ANGE'), ({'Range': 'bytes=0-11'}, 'RANGE')]: req_headers = base_headers.copy() req_headers.update(header) odata = direct_client.direct_get_object( onode, opart, self.account, container, obj, headers=req_headers)[-1] self.assertEqual(odata, result) try: direct_client.direct_get_object( onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx}) raise Exception("Did not quarantine object") except ClientException as err: self.assertEqual(err.http_status, 404) def run_quarantine_zero_byte_get(self): container = 'container-zbyte-%s' % uuid4() obj = 'object-zbyte-%s' % uuid4() onode, opart, data_file = self._setup_data_file(container, obj, 'DATA') metadata = read_metadata(data_file) unlink(data_file) with open(data_file, 'w') as fpointer: write_metadata(fpointer, metadata) try: direct_client.direct_get_object( onode, opart, self.account, container, obj, conn_timeout=1, response_timeout=1, headers={'X-Backend-Storage-Policy-Index': self.policy.idx}) raise Exception("Did not quarantine object") except ClientException as err: self.assertEqual(err.http_status, 404) def run_quarantine_zero_byte_head(self): container = 'container-zbyte-%s' % uuid4() obj = 'object-zbyte-%s' % uuid4() onode, opart, data_file = self._setup_data_file(container, obj, 'DATA') metadata = 
read_metadata(data_file) unlink(data_file) with open(data_file, 'w') as fpointer: write_metadata(fpointer, metadata) try: direct_client.direct_head_object( onode, opart, self.account, container, obj, conn_timeout=1, response_timeout=1, headers={'X-Backend-Storage-Policy-Index': self.policy.idx}) raise Exception("Did not quarantine object") except ClientException as err: self.assertEqual(err.http_status, 404) def run_quarantine_zero_byte_post(self): container = 'container-zbyte-%s' % uuid4() obj = 'object-zbyte-%s' % uuid4() onode, opart, data_file = self._setup_data_file(container, obj, 'DATA') metadata = read_metadata(data_file) unlink(data_file) with open(data_file, 'w') as fpointer: write_metadata(fpointer, metadata) try: headers = {'X-Object-Meta-1': 'One', 'X-Object-Meta-Two': 'Two', 'X-Backend-Storage-Policy-Index': self.policy.idx} direct_client.direct_post_object( onode, opart, self.account, container, obj, headers=headers, conn_timeout=1, response_timeout=1) raise Exception("Did not quarantine object") except ClientException as err: self.assertEqual(err.http_status, 404) def test_runner(self): self.run_quarantine() self.run_quarantine_range_etag() self.run_quarantine_zero_byte_get() self.run_quarantine_zero_byte_head() self.run_quarantine_zero_byte_post() if __name__ == '__main__': main() swift-2.7.1/test/probe/test_account_reaper.py0000664000567000056710000001502613024044354022512 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
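# Probe test for the account reaper: the account database replicas are
# deleted directly on every account server, the account-reaper is run, and
# the reaped containers and objects are checked to carry a single, consistent
# delete timestamp per resource (re-checked after replicators and updaters
# have run).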
import uuid import unittest from swiftclient import client from swift.common.storage_policy import POLICIES from swift.common.manager import Manager from swift.common.direct_client import direct_delete_account, \ direct_get_object, direct_head_container, ClientException from test.probe.common import ReplProbeTest, ENABLED_POLICIES class TestAccountReaper(ReplProbeTest): def test_sync(self): all_objects = [] # upload some containers for policy in ENABLED_POLICIES: container = 'container-%s-%s' % (policy.name, uuid.uuid4()) client.put_container(self.url, self.token, container, headers={'X-Storage-Policy': policy.name}) obj = 'object-%s' % uuid.uuid4() body = 'test-body' client.put_object(self.url, self.token, container, obj, body) all_objects.append((policy, container, obj)) Manager(['container-updater']).once() headers = client.head_account(self.url, self.token) self.assertEqual(int(headers['x-account-container-count']), len(ENABLED_POLICIES)) self.assertEqual(int(headers['x-account-object-count']), len(ENABLED_POLICIES)) self.assertEqual(int(headers['x-account-bytes-used']), len(ENABLED_POLICIES) * len(body)) part, nodes = self.account_ring.get_nodes(self.account) for node in nodes: direct_delete_account(node, part, self.account) # run the reaper Manager(['account-reaper']).once() for policy, container, obj in all_objects: # verify that any container deletes were at same timestamp cpart, cnodes = self.container_ring.get_nodes( self.account, container) delete_times = set() for cnode in cnodes: try: direct_head_container(cnode, cpart, self.account, container) except ClientException as err: self.assertEqual(err.http_status, 404) delete_time = err.http_headers.get( 'X-Backend-DELETE-Timestamp') # 'X-Backend-DELETE-Timestamp' confirms it was deleted self.assertTrue(delete_time) delete_times.add(delete_time) else: # Container replicas may not yet be deleted if we have a # policy with object replicas < container replicas, so # ignore successful HEAD. We'll check for all replicas to # be deleted again after running the replicators. pass self.assertEqual(1, len(delete_times), delete_times) # verify that all object deletes were at same timestamp object_ring = POLICIES.get_object_ring(policy.idx, '/etc/swift/') part, nodes = object_ring.get_nodes(self.account, container, obj) headers = {'X-Backend-Storage-Policy-Index': int(policy)} delete_times = set() for node in nodes: try: direct_get_object(node, part, self.account, container, obj, headers=headers) except ClientException as err: self.assertEqual(err.http_status, 404) delete_time = err.http_headers.get('X-Backend-Timestamp') # 'X-Backend-Timestamp' confirms obj was deleted self.assertTrue(delete_time) delete_times.add(delete_time) else: self.fail('Found un-reaped /%s/%s/%s on %r in %s!' 
% (self.account, container, obj, node, policy)) self.assertEqual(1, len(delete_times)) # run replicators and updaters self.get_to_final_state() for policy, container, obj in all_objects: # verify that ALL container replicas are now deleted cpart, cnodes = self.container_ring.get_nodes( self.account, container) delete_times = set() for cnode in cnodes: try: direct_head_container(cnode, cpart, self.account, container) except ClientException as err: self.assertEqual(err.http_status, 404) delete_time = err.http_headers.get( 'X-Backend-DELETE-Timestamp') # 'X-Backend-DELETE-Timestamp' confirms it was deleted self.assertTrue(delete_time) delete_times.add(delete_time) else: self.fail('Found un-reaped /%s/%s on %r' % (self.account, container, cnode)) # sanity check that object state is still consistent... object_ring = POLICIES.get_object_ring(policy.idx, '/etc/swift/') part, nodes = object_ring.get_nodes(self.account, container, obj) headers = {'X-Backend-Storage-Policy-Index': int(policy)} delete_times = set() for node in nodes: try: direct_get_object(node, part, self.account, container, obj, headers=headers) except ClientException as err: self.assertEqual(err.http_status, 404) delete_time = err.http_headers.get('X-Backend-Timestamp') # 'X-Backend-Timestamp' confirms obj was deleted self.assertTrue(delete_time) delete_times.add(delete_time) else: self.fail('Found un-reaped /%s/%s/%s on %r in %s!' % (self.account, container, obj, node, policy)) self.assertEqual(1, len(delete_times)) if __name__ == "__main__": unittest.main() swift-2.7.1/test/probe/common.py0000664000567000056710000004250213024044354017750 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
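# Shared plumbing for the probe tests: resetting the SAIO (resetswift),
# mapping ring devices to numbered server configs, starting/stopping servers
# by (ip, port), validating rings and rsync exports, and the ProbeTest base
# classes (ReplProbeTest, ECProbeTest) that the test modules above subclass.
#
# Illustrative sketch only (not part of the suite) of how these helpers are
# typically combined in a probe test; MyProbeTest is a hypothetical name:
#
#   class MyProbeTest(ReplProbeTest):
#       def test_something(self):
#           opart, onodes = self.object_ring.get_nodes(self.account, 'c', 'o')
#           kill_server((onodes[0]['ip'], onodes[0]['port']),
#                       self.ipport2server, self.pids)
#           # ... exercise the cluster with one primary down ...
#           start_server((onodes[0]['ip'], onodes[0]['port']),
#                        self.ipport2server, self.pids)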
from __future__ import print_function import os from subprocess import Popen, PIPE import sys from tempfile import mkdtemp from textwrap import dedent from time import sleep, time from collections import defaultdict import unittest from hashlib import md5 from uuid import uuid4 from nose import SkipTest from six.moves.http_client import HTTPConnection import shutil from swiftclient import get_auth, head_account from swift.common import internal_client from swift.obj.diskfile import get_data_dir from swift.common.ring import Ring from swift.common.utils import readconf, renamer, \ config_true_value, rsync_module_interpolation from swift.common.manager import Manager from swift.common.storage_policy import POLICIES, EC_POLICY, REPL_POLICY from test.probe import CHECK_SERVER_TIMEOUT, VALIDATE_RSYNC ENABLED_POLICIES = [p for p in POLICIES if not p.is_deprecated] POLICIES_BY_TYPE = defaultdict(list) for p in POLICIES: POLICIES_BY_TYPE[p.policy_type].append(p) def get_server_number(ipport, ipport2server): server_number = ipport2server[ipport] server, number = server_number[:-1], server_number[-1:] try: number = int(number) except ValueError: # probably the proxy return server_number, None return server, number def start_server(ipport, ipport2server, pids, check=True): server, number = get_server_number(ipport, ipport2server) err = Manager([server]).start(number=number, wait=False) if err: raise Exception('unable to start %s' % ( server if not number else '%s%s' % (server, number))) if check: return check_server(ipport, ipport2server, pids) return None def check_server(ipport, ipport2server, pids, timeout=CHECK_SERVER_TIMEOUT): server = ipport2server[ipport] if server[:-1] in ('account', 'container', 'object'): if int(server[-1]) > 4: return None path = '/connect/1/2' if server[:-1] == 'container': path += '/3' elif server[:-1] == 'object': path += '/3/4' try_until = time() + timeout while True: try: conn = HTTPConnection(*ipport) conn.request('GET', path) resp = conn.getresponse() # 404 because it's a nonsense path (and mount_check is false) # 507 in case the test target is a VM using mount_check if resp.status not in (404, 507): raise Exception( 'Unexpected status %s' % resp.status) break except Exception as err: if time() > try_until: print(err) print('Giving up on %s:%s after %s seconds.' 
% ( server, ipport, timeout)) raise err sleep(0.1) else: try_until = time() + timeout while True: try: url, token = get_auth('http://%s:%d/auth/v1.0' % ipport, 'test:tester', 'testing') account = url.split('/')[-1] head_account(url, token) return url, token, account except Exception as err: if time() > try_until: print(err) print('Giving up on proxy:8080 after 30 seconds.') raise err sleep(0.1) return None def kill_server(ipport, ipport2server, pids): server, number = get_server_number(ipport, ipport2server) err = Manager([server]).kill(number=number) if err: raise Exception('unable to kill %s' % (server if not number else '%s%s' % (server, number))) try_until = time() + 30 while True: try: conn = HTTPConnection(*ipport) conn.request('GET', '/') conn.getresponse() except Exception as err: break if time() > try_until: raise Exception( 'Still answering on %s:%s after 30 seconds' % ipport) sleep(0.1) def kill_nonprimary_server(primary_nodes, ipport2server, pids): primary_ipports = [(n['ip'], n['port']) for n in primary_nodes] for ipport, server in ipport2server.items(): if ipport in primary_ipports: server_type = server[:-1] break else: raise Exception('Cannot figure out server type for %r' % primary_nodes) for ipport, server in list(ipport2server.items()): if server[:-1] == server_type and ipport not in primary_ipports: kill_server(ipport, ipport2server, pids) return ipport def add_ring_devs_to_ipport2server(ring, server_type, ipport2server, servers_per_port=0): # We'll number the servers by order of unique occurrence of: # IP, if servers_per_port > 0 OR there > 1 IP in ring # ipport, otherwise unique_ip_count = len(set(dev['ip'] for dev in ring.devs if dev)) things_to_number = {} number = 0 for dev in filter(None, ring.devs): ip = dev['ip'] ipport = (ip, dev['port']) unique_by = ip if servers_per_port or unique_ip_count > 1 else ipport if unique_by not in things_to_number: number += 1 things_to_number[unique_by] = number ipport2server[ipport] = '%s%d' % (server_type, things_to_number[unique_by]) def store_config_paths(name, configs): for server_name in (name, '%s-replicator' % name): for server in Manager([server_name]): for i, conf in enumerate(server.conf_files(), 1): configs[server.server][i] = conf def get_ring(ring_name, required_replicas, required_devices, server=None, force_validate=None, ipport2server=None, config_paths=None): if not server: server = ring_name ring = Ring('/etc/swift', ring_name=ring_name) if ipport2server is None: ipport2server = {} # used internally, even if not passed in if config_paths is None: config_paths = defaultdict(dict) store_config_paths(server, config_paths) repl_name = '%s-replicator' % server repl_configs = {i: readconf(c, section_name=repl_name) for i, c in config_paths[repl_name].items()} servers_per_port = any(int(c.get('servers_per_port', '0')) for c in repl_configs.values()) add_ring_devs_to_ipport2server(ring, server, ipport2server, servers_per_port=servers_per_port) if not VALIDATE_RSYNC and not force_validate: return ring # easy sanity checks if ring.replica_count != required_replicas: raise SkipTest('%s has %s replicas instead of %s' % ( ring.serialized_path, ring.replica_count, required_replicas)) devs = [dev for dev in ring.devs if dev is not None] if len(devs) != required_devices: raise SkipTest('%s has %s devices instead of %s' % ( ring.serialized_path, len(ring.devs), required_devices)) for dev in devs: # verify server is exposing mounted device ipport = (dev['ip'], dev['port']) _, server_number = get_server_number(ipport, ipport2server) 
conf = repl_configs[server_number] for device in os.listdir(conf['devices']): if device == dev['device']: dev_path = os.path.join(conf['devices'], device) full_path = os.path.realpath(dev_path) if not os.path.exists(full_path): raise SkipTest( 'device %s in %s was not found (%s)' % (device, conf['devices'], full_path)) break else: raise SkipTest( "unable to find ring device %s under %s's devices (%s)" % ( dev['device'], server, conf['devices'])) # verify server is exposing rsync device rsync_export = conf.get('rsync_module', '').rstrip('/') if not rsync_export: rsync_export = '{replication_ip}::%s' % server if config_true_value(conf.get('vm_test_mode', 'no')): rsync_export += '{replication_port}' cmd = "rsync %s" % rsync_module_interpolation(rsync_export, dev) p = Popen(cmd, shell=True, stdout=PIPE) stdout, _stderr = p.communicate() if p.returncode: raise SkipTest('unable to connect to rsync ' 'export %s (%s)' % (rsync_export, cmd)) for line in stdout.splitlines(): if line.rsplit(None, 1)[-1] == dev['device']: break else: raise SkipTest("unable to find ring device %s under rsync's " "exported devices for %s (%s)" % (dev['device'], rsync_export, cmd)) return ring def get_policy(**kwargs): kwargs.setdefault('is_deprecated', False) # go through the policies and make sure they match the # requirements of kwargs for policy in POLICIES: # TODO: for EC, pop policy type here and check it first matches = True for key, value in kwargs.items(): try: if getattr(policy, key) != value: matches = False except AttributeError: matches = False if matches: return policy raise SkipTest('No policy matching %s' % kwargs) def resetswift(): p = Popen("resetswift 2>&1", shell=True, stdout=PIPE) stdout, _stderr = p.communicate() print(stdout) Manager(['all']).stop() class Body(object): def __init__(self, total=3.5 * 2 ** 20): self.length = total self.hasher = md5() self.read_amount = 0 self.chunk = uuid4().hex * 2 ** 10 self.buff = '' @property def etag(self): return self.hasher.hexdigest() def __len__(self): return self.length def read(self, amount): if len(self.buff) < amount: try: self.buff += next(self) except StopIteration: pass rv, self.buff = self.buff[:amount], self.buff[amount:] return rv def __iter__(self): return self def next(self): if self.buff: rv, self.buff = self.buff, '' return rv if self.read_amount >= self.length: raise StopIteration() rv = self.chunk[:int(self.length - self.read_amount)] self.read_amount += len(rv) self.hasher.update(rv) return rv def __next__(self): return next(self) class ProbeTest(unittest.TestCase): """ Don't instantiate this directly, use a child class instead. 
""" def setUp(self): resetswift() self.pids = {} try: self.ipport2server = {} self.configs = defaultdict(dict) self.account_ring = get_ring( 'account', self.acct_cont_required_replicas, self.acct_cont_required_devices, ipport2server=self.ipport2server, config_paths=self.configs) self.container_ring = get_ring( 'container', self.acct_cont_required_replicas, self.acct_cont_required_devices, ipport2server=self.ipport2server, config_paths=self.configs) self.policy = get_policy(**self.policy_requirements) self.object_ring = get_ring( self.policy.ring_name, self.obj_required_replicas, self.obj_required_devices, server='object', ipport2server=self.ipport2server, config_paths=self.configs) self.servers_per_port = any( int(readconf(c, section_name='object-replicator').get( 'servers_per_port', '0')) for c in self.configs['object-replicator'].values()) Manager(['main']).start(wait=False) for ipport in self.ipport2server: check_server(ipport, self.ipport2server, self.pids) proxy_ipport = ('127.0.0.1', 8080) self.ipport2server[proxy_ipport] = 'proxy' self.url, self.token, self.account = check_server( proxy_ipport, self.ipport2server, self.pids) self.replicators = Manager( ['account-replicator', 'container-replicator', 'object-replicator']) self.updaters = Manager(['container-updater', 'object-updater']) except BaseException: try: raise finally: try: Manager(['all']).kill() except Exception: pass def tearDown(self): Manager(['all']).kill() def device_dir(self, server, node): server_type, config_number = get_server_number( (node['ip'], node['port']), self.ipport2server) repl_server = '%s-replicator' % server_type conf = readconf(self.configs[repl_server][config_number], section_name=repl_server) return os.path.join(conf['devices'], node['device']) def storage_dir(self, server, node, part=None, policy=None): policy = policy or self.policy device_path = self.device_dir(server, node) path_parts = [device_path, get_data_dir(policy)] if part is not None: path_parts.append(str(part)) return os.path.join(*path_parts) def config_number(self, node): _server_type, config_number = get_server_number( (node['ip'], node['port']), self.ipport2server) return config_number def is_local_to(self, node1, node2): """ Return True if both ring devices are "local" to each other (on the same "server". """ if self.servers_per_port: return node1['ip'] == node2['ip'] # Without a disambiguating IP, for SAIOs, we have to assume ports # uniquely identify "servers". SAIOs should be configured to *either* # have unique IPs per node (e.g. 127.0.0.1, 127.0.0.2, etc.) OR unique # ports per server (i.e. sdb1 & sdb5 would have same port numbers in # the 8-disk EC ring). 
return node1['port'] == node2['port'] def get_to_final_state(self): # these .stop()s are probably not strictly necessary, # but may prevent race conditions self.replicators.stop() self.updaters.stop() self.replicators.once() self.updaters.once() self.replicators.once() def kill_drive(self, device): if os.path.ismount(device): os.system('sudo umount %s' % device) else: renamer(device, device + "X") def revive_drive(self, device): disabled_name = device + "X" if os.path.isdir(disabled_name): renamer(device + "X", device) else: os.system('sudo mount %s' % device) def make_internal_client(self, object_post_as_copy=True): tempdir = mkdtemp() try: conf_path = os.path.join(tempdir, 'internal_client.conf') conf_body = """ [DEFAULT] swift_dir = /etc/swift [pipeline:main] pipeline = catch_errors cache proxy-server [app:proxy-server] use = egg:swift#proxy object_post_as_copy = %s [filter:cache] use = egg:swift#memcache [filter:catch_errors] use = egg:swift#catch_errors """ % object_post_as_copy with open(conf_path, 'w') as f: f.write(dedent(conf_body)) return internal_client.InternalClient(conf_path, 'test', 1) finally: shutil.rmtree(tempdir) class ReplProbeTest(ProbeTest): acct_cont_required_replicas = 3 acct_cont_required_devices = 4 obj_required_replicas = 3 obj_required_devices = 4 policy_requirements = {'policy_type': REPL_POLICY} class ECProbeTest(ProbeTest): acct_cont_required_replicas = 3 acct_cont_required_devices = 4 obj_required_replicas = 6 obj_required_devices = 8 policy_requirements = {'policy_type': EC_POLICY} if __name__ == "__main__": for server in ('account', 'container'): try: get_ring(server, 3, 4, force_validate=True) except SkipTest as err: sys.exit('%s ERROR: %s' % (server, err)) print('%s OK' % server) for policy in POLICIES: try: get_ring(policy.ring_name, 3, 4, server='object', force_validate=True) except SkipTest as err: sys.exit('object ERROR (%s): %s' % (policy.name, err)) print('object OK (%s)' % policy.name) swift-2.7.1/test/probe/test_wsgi_servers.py0000664000567000056710000000652713024044354022250 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
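# Probe test for zero-downtime reloads: the worker pids of a running WSGI
# server are recorded, Manager.reload() is issued (the swift-init "reload"
# behaviour), and the test verifies that none of the original pids survive
# while an HTTP connection opened before the reload keeps working afterwards.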
import unittest import httplib import random from swift.common.storage_policy import POLICIES from swift.common.ring import Ring from swift.common.manager import Manager from test.probe.common import resetswift def putrequest(conn, method, path, headers): conn.putrequest(method, path, skip_host=(headers and 'Host' in headers)) if headers: for header, value in headers.items(): conn.putheader(header, str(value)) conn.endheaders() class TestWSGIServerProcessHandling(unittest.TestCase): def setUp(self): resetswift() def _check_reload(self, server_name, ip, port): manager = Manager([server_name]) manager.start() starting_pids = set(pid for server in manager.servers for (_, pid) in server.iter_pid_files()) body = 'test' * 10 conn = httplib.HTTPConnection('%s:%s' % (ip, port)) # sanity request putrequest(conn, 'PUT', 'blah', headers={'Content-Length': len(body)}) conn.send(body) resp = conn.getresponse() self.assertEqual(resp.status // 100, 4) resp.read() manager.reload() post_reload_pids = set(pid for server in manager.servers for (_, pid) in server.iter_pid_files()) # none of the pids we started with are being tracked after reload msg = 'expected all pids from %r to have died, but found %r' % ( starting_pids, post_reload_pids) self.assertFalse(starting_pids & post_reload_pids, msg) # ... and yet we can keep using the same connection! putrequest(conn, 'PUT', 'blah', headers={'Content-Length': len(body)}) conn.send(body) resp = conn.getresponse() self.assertEqual(resp.status // 100, 4) resp.read() # close our connection conn.close() # sanity post_close_pids = set(pid for server in manager.servers for (_, pid) in server.iter_pid_files()) self.assertEqual(post_reload_pids, post_close_pids) def test_proxy_reload(self): self._check_reload('proxy-server', 'localhost', 8080) def test_object_reload(self): policy = random.choice(list(POLICIES)) policy.load_ring('/etc/swift') node = random.choice(policy.object_ring.get_part_nodes(1)) self._check_reload('object', node['ip'], node['port']) def test_account_container_reload(self): for server in ('account', 'container'): ring = Ring('/etc/swift', ring_name=server) node = random.choice(ring.get_part_nodes(1)) self._check_reload(server, node['ip'], node['port']) if __name__ == '__main__': unittest.main() swift-2.7.1/test/probe/test_empty_device_handoff.py0000775000567000056710000001674013024044354023671 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
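# Probe test for an emptied primary device: the objects directory on one
# primary is wiped and its server stopped, an object is written (landing on
# the other primaries and a handoff), reads are verified via the handoff, and
# the object-replicator is then expected to repopulate the revived primary
# and remove the handoff copy.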
import os import shutil import time from unittest import main from uuid import uuid4 from swiftclient import client from swift.common import direct_client from swift.obj.diskfile import get_data_dir from swift.common.exceptions import ClientException from test.probe.common import ( kill_server, ReplProbeTest, start_server, get_server_number) from swift.common.utils import readconf from swift.common.manager import Manager class TestEmptyDevice(ReplProbeTest): def _get_objects_dir(self, onode): device = onode['device'] _, node_id = get_server_number((onode['ip'], onode['port']), self.ipport2server) obj_server_conf = readconf(self.configs['object-server'][node_id]) devices = obj_server_conf['app:object-server']['devices'] obj_dir = '%s/%s' % (devices, device) return obj_dir def test_main(self): # Create container container = 'container-%s' % uuid4() client.put_container(self.url, self.token, container, headers={'X-Storage-Policy': self.policy.name}) cpart, cnodes = self.container_ring.get_nodes(self.account, container) cnode = cnodes[0] obj = 'object-%s' % uuid4() opart, onodes = self.object_ring.get_nodes( self.account, container, obj) onode = onodes[0] # Kill one container/obj primary server kill_server((onode['ip'], onode['port']), self.ipport2server, self.pids) # Delete the default data directory for objects on the primary server obj_dir = '%s/%s' % (self._get_objects_dir(onode), get_data_dir(self.policy)) shutil.rmtree(obj_dir, True) self.assertFalse(os.path.exists(obj_dir)) # Create container/obj (goes to two primary servers and one handoff) client.put_object(self.url, self.token, container, obj, 'VERIFY') odata = client.get_object(self.url, self.token, container, obj)[-1] if odata != 'VERIFY': raise Exception('Object GET did not return VERIFY, instead it ' 'returned: %s' % repr(odata)) # Kill other two container/obj primary servers # to ensure GET handoff works for node in onodes[1:]: kill_server((node['ip'], node['port']), self.ipport2server, self.pids) # Indirectly through proxy assert we can get container/obj odata = client.get_object(self.url, self.token, container, obj)[-1] if odata != 'VERIFY': raise Exception('Object GET did not return VERIFY, instead it ' 'returned: %s' % repr(odata)) # Restart those other two container/obj primary servers for node in onodes[1:]: start_server((node['ip'], node['port']), self.ipport2server, self.pids) self.assertFalse(os.path.exists(obj_dir)) # We've indirectly verified the handoff node has the object, but # let's directly verify it. 
# Directly to handoff server assert we can get container/obj another_onode = next(self.object_ring.get_more_nodes(opart)) odata = direct_client.direct_get_object( another_onode, opart, self.account, container, obj, headers={'X-Backend-Storage-Policy-Index': self.policy.idx})[-1] if odata != 'VERIFY': raise Exception('Direct object GET did not return VERIFY, instead ' 'it returned: %s' % repr(odata)) # Assert container listing (via proxy and directly) has container/obj objs = [o['name'] for o in client.get_container(self.url, self.token, container)[1]] if obj not in objs: raise Exception('Container listing did not know about object') timeout = time.time() + 5 found_objs_on_cnode = [] while time.time() < timeout: for cnode in [c for c in cnodes if cnodes not in found_objs_on_cnode]: objs = [o['name'] for o in direct_client.direct_get_container( cnode, cpart, self.account, container)[1]] if obj in objs: found_objs_on_cnode.append(cnode) if len(found_objs_on_cnode) >= len(cnodes): break time.sleep(0.3) if len(found_objs_on_cnode) < len(cnodes): missing = ['%s:%s' % (cnode['ip'], cnode['port']) for cnode in cnodes if cnode not in found_objs_on_cnode] raise Exception('Container servers %r did not know about object' % missing) # Bring the first container/obj primary server back up start_server((onode['ip'], onode['port']), self.ipport2server, self.pids) # Assert that it doesn't have container/obj yet self.assertFalse(os.path.exists(obj_dir)) try: direct_client.direct_get_object( onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx}) except ClientException as err: self.assertEqual(err.http_status, 404) self.assertFalse(os.path.exists(obj_dir)) else: self.fail("Expected ClientException but didn't get it") # Run object replication for first container/obj primary server _, num = get_server_number( (onode['ip'], onode.get('replication_port', onode['port'])), self.ipport2server) Manager(['object-replicator']).once(number=num) # Run object replication for handoff node _, another_num = get_server_number( (another_onode['ip'], another_onode.get('replication_port', another_onode['port'])), self.ipport2server) Manager(['object-replicator']).once(number=another_num) # Assert the first container/obj primary server now has container/obj odata = direct_client.direct_get_object( onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx})[-1] if odata != 'VERIFY': raise Exception('Direct object GET did not return VERIFY, instead ' 'it returned: %s' % repr(odata)) # Assert the handoff server no longer has container/obj try: direct_client.direct_get_object( another_onode, opart, self.account, container, obj, headers={ 'X-Backend-Storage-Policy-Index': self.policy.idx}) except ClientException as err: self.assertEqual(err.http_status, 404) else: self.fail("Expected ClientException but didn't get it") if __name__ == '__main__': main() swift-2.7.1/test/probe/brain.py0000664000567000056710000001776113024044354017564 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and # limitations under the License. from __future__ import print_function import sys import itertools import uuid from optparse import OptionParser import random import six from six.moves.urllib.parse import urlparse from swift.common.manager import Manager from swift.common import utils, ring from swift.common.storage_policy import POLICIES from swift.common.http import HTTP_NOT_FOUND from swiftclient import client, get_auth, ClientException from test.probe.common import ENABLED_POLICIES TIMEOUT = 60 def meta_command(name, bases, attrs): """ Look for attrs with a truthy attribute __command__ and add them to an attribute __commands__ on the type that maps names to decorated methods. The decorated methods' doc strings also get mapped in __docs__. Also adds a method run(command_name, *args, **kwargs) that will execute the method mapped to the name in __commands__. """ commands = {} docs = {} for attr, value in attrs.items(): if getattr(value, '__command__', False): commands[attr] = value # methods have always have a __doc__ attribute, sometimes empty docs[attr] = (getattr(value, '__doc__', None) or 'perform the %s command' % attr).strip() attrs['__commands__'] = commands attrs['__docs__'] = docs def run(self, command, *args, **kwargs): return self.__commands__[command](self, *args, **kwargs) attrs.setdefault('run', run) return type(name, bases, attrs) def command(f): f.__command__ = True return f @six.add_metaclass(meta_command) class BrainSplitter(object): def __init__(self, url, token, container_name='test', object_name='test', server_type='container', policy=None): self.url = url self.token = token self.account = utils.split_path(urlparse(url).path, 2, 2)[1] self.container_name = container_name self.object_name = object_name server_list = ['%s-server' % server_type] if server_type else ['all'] self.servers = Manager(server_list) policies = list(ENABLED_POLICIES) random.shuffle(policies) self.policies = itertools.cycle(policies) o = object_name if server_type == 'object' else None c = container_name if server_type in ('object', 'container') else None if server_type in ('container', 'account'): if policy: raise TypeError('Metadata server brains do not ' 'support specific storage policies') self.policy = None self.ring = ring.Ring( '/etc/swift/%s.ring.gz' % server_type) elif server_type == 'object': if not policy: raise TypeError('Object BrainSplitters need to ' 'specify the storage policy') self.policy = policy policy.load_ring('/etc/swift') self.ring = policy.object_ring else: raise ValueError('Unkonwn server_type: %r' % server_type) self.server_type = server_type part, nodes = self.ring.get_nodes(self.account, c, o) node_ids = [n['id'] for n in nodes] if all(n_id in node_ids for n_id in (0, 1)): self.primary_numbers = (1, 2) self.handoff_numbers = (3, 4) else: self.primary_numbers = (3, 4) self.handoff_numbers = (1, 2) @command def start_primary_half(self): """ start servers 1 & 2 """ tuple(self.servers.start(number=n) for n in self.primary_numbers) @command def stop_primary_half(self): """ stop servers 1 & 2 """ tuple(self.servers.stop(number=n) for n in self.primary_numbers) @command def start_handoff_half(self): """ start servers 3 & 4 """ tuple(self.servers.start(number=n) for n in self.handoff_numbers) @command def stop_handoff_half(self): """ stop servers 3 & 4 """ tuple(self.servers.stop(number=n) for n in self.handoff_numbers) @command def put_container(self, policy_index=None): """ put container with next storage 
policy """ policy = next(self.policies) if policy_index is not None: policy = POLICIES.get_by_index(int(policy_index)) if not policy: raise ValueError('Unknown policy with index %s' % policy) headers = {'X-Storage-Policy': policy.name} client.put_container(self.url, self.token, self.container_name, headers=headers) @command def delete_container(self): """ delete container """ client.delete_container(self.url, self.token, self.container_name) @command def put_object(self, headers=None): """ issue put for zero byte test object """ client.put_object(self.url, self.token, self.container_name, self.object_name, headers=headers) @command def delete_object(self): """ issue delete for test object """ try: client.delete_object(self.url, self.token, self.container_name, self.object_name) except ClientException as err: if err.http_status != HTTP_NOT_FOUND: raise parser = OptionParser('%prog [options] ' '[:[,...]] [...]') parser.usage += '\n\nCommands:\n\t' + \ '\n\t'.join("%s - %s" % (name, doc) for name, doc in BrainSplitter.__docs__.items()) parser.add_option('-c', '--container', default='container-%s' % uuid.uuid4(), help='set container name') parser.add_option('-o', '--object', default='object-%s' % uuid.uuid4(), help='set object name') parser.add_option('-s', '--server_type', default='container', help='set server type') parser.add_option('-P', '--policy_name', default=None, help='set policy') def main(): options, commands = parser.parse_args() if not commands: parser.print_help() return 'ERROR: must specify at least one command' for cmd_args in commands: cmd = cmd_args.split(':', 1)[0] if cmd not in BrainSplitter.__commands__: parser.print_help() return 'ERROR: unknown command %s' % cmd url, token = get_auth('http://127.0.0.1:8080/auth/v1.0', 'test:tester', 'testing') if options.server_type == 'object' and not options.policy_name: options.policy_name = POLICIES.default.name if options.policy_name: options.server_type = 'object' policy = POLICIES.get_by_name(options.policy_name) if not policy: return 'ERROR: unknown policy %r' % options.policy_name else: policy = None brain = BrainSplitter(url, token, options.container, options.object, options.server_type, policy=policy) for cmd_args in commands: parts = cmd_args.split(':', 1) command = parts[0] if len(parts) > 1: args = utils.list_from_csv(parts[1]) else: args = () try: brain.run(command, *args) except ClientException as e: print('**WARNING**: %s raised %s' % (command, e)) print('STATUS'.join(['*' * 25] * 2)) brain.servers.status() sys.exit() if __name__ == "__main__": sys.exit(main()) swift-2.7.1/test/probe/test_account_get_fake_responses_match.py0000775000567000056710000000733513024044354026265 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import re import unittest from six.moves import http_client from six.moves.urllib.parse import urlparse from swiftclient import get_auth from test.probe.common import ReplProbeTest class TestAccountGetFakeResponsesMatch(ReplProbeTest): def setUp(self): super(TestAccountGetFakeResponsesMatch, self).setUp() self.url, self.token = get_auth( 'http://127.0.0.1:8080/auth/v1.0', 'admin:admin', 'admin') def _account_path(self, account): _, _, path, _, _, _ = urlparse(self.url) basepath, _ = path.rsplit('/', 1) return basepath + '/' + account def _get(self, *a, **kw): kw['method'] = 'GET' return self._account_request(*a, **kw) def _account_request(self, account, method, headers=None): if headers is None: headers = {} headers['X-Auth-Token'] = self.token scheme, netloc, path, _, _, _ = urlparse(self.url) host, port = netloc.split(':') port = int(port) conn = http_client.HTTPConnection(host, port) conn.request(method, self._account_path(account), headers=headers) resp = conn.getresponse() if resp.status // 100 != 2: raise Exception("Unexpected status %s\n%s" % (resp.status, resp.read())) response_headers = dict(resp.getheaders()) response_body = resp.read() resp.close() return response_headers, response_body def test_main(self): # Two accounts: "real" and "fake". The fake one doesn't have any .db # files on disk; the real one does. The real one is empty. # # Make sure the important response fields match. real_acct = "AUTH_real" fake_acct = "AUTH_fake" self._account_request(real_acct, 'POST', {'X-Account-Meta-Bert': 'Ernie'}) # text real_headers, real_body = self._get(real_acct) fake_headers, fake_body = self._get(fake_acct) self.assertEqual(real_body, fake_body) self.assertEqual(real_headers['content-type'], fake_headers['content-type']) # json real_headers, real_body = self._get( real_acct, headers={'Accept': 'application/json'}) fake_headers, fake_body = self._get( fake_acct, headers={'Accept': 'application/json'}) self.assertEqual(real_body, fake_body) self.assertEqual(real_headers['content-type'], fake_headers['content-type']) # xml real_headers, real_body = self._get( real_acct, headers={'Accept': 'application/xml'}) fake_headers, fake_body = self._get( fake_acct, headers={'Accept': 'application/xml'}) # the account name is in the XML response real_body = re.sub('AUTH_\w{4}', 'AUTH_someaccount', real_body) fake_body = re.sub('AUTH_\w{4}', 'AUTH_someaccount', fake_body) self.assertEqual(real_body, fake_body) self.assertEqual(real_headers['content-type'], fake_headers['content-type']) if __name__ == '__main__': unittest.main() swift-2.7.1/test/probe/test_object_metadata_replication.py0000664000567000056710000007332013024044354025220 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
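# Every test in this module follows the same split-brain recipe: stop half of
# the servers, write divergent data or metadata through the proxy, restart
# that half, repeat for the other half, then run replication and assert that
# all nodes converge on the same object state, container rows and suffix
# hashes.  A minimal sketch of one such divergent write, assuming a
# BrainSplitter-style object with the start/stop half methods used below
# (write_older and write_newer stand in for the client requests a test makes):
def _divergent_write_sketch(brain, write_older, write_newer):
    brain.stop_primary_half()
    write_older()    # only the handoff half sees this write
    brain.start_primary_half()
    brain.stop_handoff_half()
    write_newer()    # only the primary half sees this write
    brain.start_handoff_half()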
from io import StringIO import unittest import os import uuid from swift.common.direct_client import direct_get_suffix_hashes from swift.common.exceptions import DiskFileDeleted from swift.common.internal_client import UnexpectedResponse from swift.container.backend import ContainerBroker from swift.common import utils from swiftclient import client from swift.common.ring import Ring from swift.common.utils import Timestamp, get_logger, hash_path from swift.obj.diskfile import DiskFileManager from swift.common.storage_policy import POLICIES from test.probe.brain import BrainSplitter from test.probe.common import ReplProbeTest class Test(ReplProbeTest): def setUp(self): """ Reset all environment and start all servers. """ super(Test, self).setUp() self.container_name = 'container-%s' % uuid.uuid4() self.object_name = 'object-%s' % uuid.uuid4() self.brain = BrainSplitter(self.url, self.token, self.container_name, self.object_name, 'object', policy=self.policy) self.int_client = self.make_internal_client(object_post_as_copy=False) def tearDown(self): super(Test, self).tearDown() def _get_object_info(self, account, container, obj, number): obj_conf = self.configs['object-server'] config_path = obj_conf[number] options = utils.readconf(config_path, 'app:object-server') swift_dir = options.get('swift_dir', '/etc/swift') ring = POLICIES.get_object_ring(int(self.policy), swift_dir) part, nodes = ring.get_nodes(account, container, obj) for node in nodes: # assumes one to one mapping if node['port'] == int(options.get('bind_port')): device = node['device'] break else: return None mgr = DiskFileManager(options, get_logger(options)) disk_file = mgr.get_diskfile(device, part, account, container, obj, self.policy) info = disk_file.read_metadata() return info def _assert_consistent_object_metadata(self): obj_info = [] for i in range(1, 5): info_i = self._get_object_info(self.account, self.container_name, self.object_name, i) if info_i: obj_info.append(info_i) self.assertTrue(len(obj_info) > 1) for other in obj_info[1:]: self.assertDictEqual(obj_info[0], other) def _assert_consistent_deleted_object(self): for i in range(1, 5): try: info = self._get_object_info(self.account, self.container_name, self.object_name, i) if info is not None: self.fail('Expected no disk file info but found %s' % info) except DiskFileDeleted: pass def _get_db_info(self, account, container, number): server_type = 'container' obj_conf = self.configs['%s-server' % server_type] config_path = obj_conf[number] options = utils.readconf(config_path, 'app:container-server') root = options.get('devices') swift_dir = options.get('swift_dir', '/etc/swift') ring = Ring(swift_dir, ring_name=server_type) part, nodes = ring.get_nodes(account, container) for node in nodes: # assumes one to one mapping if node['port'] == int(options.get('bind_port')): device = node['device'] break else: return None path_hash = utils.hash_path(account, container) _dir = utils.storage_directory('%ss' % server_type, part, path_hash) db_dir = os.path.join(root, device, _dir) db_file = os.path.join(db_dir, '%s.db' % path_hash) db = ContainerBroker(db_file) return db.get_info() def _assert_consistent_container_dbs(self): db_info = [] for i in range(1, 5): info_i = self._get_db_info(self.account, self.container_name, i) if info_i: db_info.append(info_i) self.assertTrue(len(db_info) > 1) for other in db_info[1:]: self.assertEqual(db_info[0]['hash'], other['hash'], 'Container db hash mismatch: %s != %s' % (db_info[0]['hash'], other['hash'])) def 
_assert_object_metadata_matches_listing(self, listing, metadata): self.assertEqual(listing['bytes'], int(metadata['content-length'])) self.assertEqual(listing['hash'], metadata['etag']) self.assertEqual(listing['content_type'], metadata['content-type']) modified = Timestamp(metadata['x-timestamp']).isoformat self.assertEqual(listing['last_modified'], modified) def _put_object(self, headers=None, body=u'stuff'): headers = headers or {} self.int_client.upload_object(StringIO(body), self.account, self.container_name, self.object_name, headers) def _post_object(self, headers): self.int_client.set_object_metadata(self.account, self.container_name, self.object_name, headers) def _delete_object(self): self.int_client.delete_object(self.account, self.container_name, self.object_name) def _get_object(self, headers=None, expect_statuses=(2,)): return self.int_client.get_object(self.account, self.container_name, self.object_name, headers, acceptable_statuses=expect_statuses) def _get_object_metadata(self): return self.int_client.get_object_metadata(self.account, self.container_name, self.object_name) def _assert_consistent_suffix_hashes(self): opart, onodes = self.object_ring.get_nodes( self.account, self.container_name, self.object_name) name_hash = hash_path( self.account, self.container_name, self.object_name) results = [] for node in onodes: results.append( (node, direct_get_suffix_hashes(node, opart, [name_hash[-3:]]))) for (node, hashes) in results[1:]: self.assertEqual(results[0][1], hashes, 'Inconsistent suffix hashes found: %s' % results) def test_object_delete_is_replicated(self): self.brain.put_container(policy_index=int(self.policy)) # put object self._put_object() # put newer object with sysmeta to first server subset self.brain.stop_primary_half() self._put_object() self.brain.start_primary_half() # delete object on second server subset self.brain.stop_handoff_half() self._delete_object() self.brain.start_handoff_half() # run replicator self.get_to_final_state() # check object deletion has been replicated on first server set self.brain.stop_primary_half() self._get_object(expect_statuses=(4,)) self.brain.start_primary_half() # check object deletion persists on second server set self.brain.stop_handoff_half() self._get_object(expect_statuses=(4,)) # put newer object to second server set self._put_object() self.brain.start_handoff_half() # run replicator self.get_to_final_state() # check new object has been replicated on first server set self.brain.stop_primary_half() self._get_object() self.brain.start_primary_half() # check new object persists on second server set self.brain.stop_handoff_half() self._get_object() def test_object_after_replication_with_subsequent_post(self): self.brain.put_container(policy_index=0) # put object self._put_object(headers={'Content-Type': 'foo'}, body=u'older') # put newer object to first server subset self.brain.stop_primary_half() self._put_object(headers={'Content-Type': 'bar'}, body=u'newer') metadata = self._get_object_metadata() etag = metadata['etag'] self.brain.start_primary_half() # post some user meta to all servers self._post_object({'x-object-meta-bar': 'meta-bar'}) # run replicator self.get_to_final_state() # check that newer data has been replicated to second server subset self.brain.stop_handoff_half() metadata = self._get_object_metadata() self.assertEqual(etag, metadata['etag']) self.assertEqual('bar', metadata['content-type']) self.assertEqual('meta-bar', metadata['x-object-meta-bar']) self.brain.start_handoff_half() 
self._assert_consistent_object_metadata() self._assert_consistent_container_dbs() self._assert_consistent_suffix_hashes() def test_sysmeta_after_replication_with_subsequent_put(self): sysmeta = {'x-object-sysmeta-foo': 'older'} sysmeta2 = {'x-object-sysmeta-foo': 'newer'} usermeta = {'x-object-meta-bar': 'meta-bar'} self.brain.put_container(policy_index=0) # put object with sysmeta to first server subset self.brain.stop_primary_half() self._put_object(headers=sysmeta) metadata = self._get_object_metadata() for key in sysmeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], sysmeta[key]) self.brain.start_primary_half() # put object with updated sysmeta to second server subset self.brain.stop_handoff_half() self._put_object(headers=sysmeta2) metadata = self._get_object_metadata() for key in sysmeta2: self.assertTrue(key in metadata) self.assertEqual(metadata[key], sysmeta2[key]) self._post_object(usermeta) metadata = self._get_object_metadata() for key in usermeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], usermeta[key]) for key in sysmeta2: self.assertTrue(key in metadata) self.assertEqual(metadata[key], sysmeta2[key]) self.brain.start_handoff_half() # run replicator self.get_to_final_state() # check sysmeta has been replicated to first server subset self.brain.stop_primary_half() metadata = self._get_object_metadata() for key in usermeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], usermeta[key]) for key in sysmeta2.keys(): self.assertTrue(key in metadata, key) self.assertEqual(metadata[key], sysmeta2[key]) self.brain.start_primary_half() # check user sysmeta ok on second server subset self.brain.stop_handoff_half() metadata = self._get_object_metadata() for key in usermeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], usermeta[key]) for key in sysmeta2.keys(): self.assertTrue(key in metadata, key) self.assertEqual(metadata[key], sysmeta2[key]) self.brain.start_handoff_half() self._assert_consistent_object_metadata() self._assert_consistent_container_dbs() self._assert_consistent_suffix_hashes() def test_sysmeta_after_replication_with_subsequent_post(self): sysmeta = {'x-object-sysmeta-foo': 'sysmeta-foo'} usermeta = {'x-object-meta-bar': 'meta-bar'} self.brain.put_container(policy_index=int(self.policy)) # put object self._put_object() # put newer object with sysmeta to first server subset self.brain.stop_primary_half() self._put_object(headers=sysmeta) metadata = self._get_object_metadata() for key in sysmeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], sysmeta[key]) self.brain.start_primary_half() # post some user meta to second server subset self.brain.stop_handoff_half() self._post_object(usermeta) metadata = self._get_object_metadata() for key in usermeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], usermeta[key]) for key in sysmeta: self.assertFalse(key in metadata) self.brain.start_handoff_half() # run replicator self.get_to_final_state() # check user metadata has been replicated to first server subset # and sysmeta is unchanged self.brain.stop_primary_half() metadata = self._get_object_metadata() expected = dict(sysmeta) expected.update(usermeta) for key in expected.keys(): self.assertTrue(key in metadata, key) self.assertEqual(metadata[key], expected[key]) self.brain.start_primary_half() # check user metadata and sysmeta both on second server subset self.brain.stop_handoff_half() metadata = self._get_object_metadata() for key in expected.keys(): 
self.assertTrue(key in metadata, key) self.assertEqual(metadata[key], expected[key]) self.brain.start_handoff_half() self._assert_consistent_object_metadata() self._assert_consistent_container_dbs() self._assert_consistent_suffix_hashes() def test_sysmeta_after_replication_with_prior_post(self): sysmeta = {'x-object-sysmeta-foo': 'sysmeta-foo'} usermeta = {'x-object-meta-bar': 'meta-bar'} self.brain.put_container(policy_index=int(self.policy)) # put object self._put_object() # put user meta to first server subset self.brain.stop_handoff_half() self._post_object(headers=usermeta) metadata = self._get_object_metadata() for key in usermeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], usermeta[key]) self.brain.start_handoff_half() # put newer object with sysmeta to second server subset self.brain.stop_primary_half() self._put_object(headers=sysmeta) metadata = self._get_object_metadata() for key in sysmeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], sysmeta[key]) self.brain.start_primary_half() # run replicator self.get_to_final_state() # check stale user metadata is not replicated to first server subset # and sysmeta is unchanged self.brain.stop_primary_half() metadata = self._get_object_metadata() for key in sysmeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], sysmeta[key]) for key in usermeta: self.assertFalse(key in metadata) self.brain.start_primary_half() # check stale user metadata is removed from second server subset # and sysmeta is replicated self.brain.stop_handoff_half() metadata = self._get_object_metadata() for key in sysmeta: self.assertTrue(key in metadata) self.assertEqual(metadata[key], sysmeta[key]) for key in usermeta: self.assertFalse(key in metadata) self.brain.start_handoff_half() self._assert_consistent_object_metadata() self._assert_consistent_container_dbs() self._assert_consistent_suffix_hashes() def test_post_ctype_replicated_when_previous_incomplete_puts(self): # primary half handoff half # ------------ ------------ # t0.data: ctype = foo # t1.data: ctype = bar # t2.meta: ctype = baz # # ...run replicator and expect... # # t1.data: # t2.meta: ctype = baz self.brain.put_container(policy_index=0) # incomplete write to primary half self.brain.stop_handoff_half() self._put_object(headers={'Content-Type': 'foo'}) self.brain.start_handoff_half() # handoff write self.brain.stop_primary_half() self._put_object(headers={'Content-Type': 'bar'}) self.brain.start_primary_half() # content-type update to primary half self.brain.stop_handoff_half() self._post_object(headers={'Content-Type': 'baz'}) self.brain.start_handoff_half() self.get_to_final_state() # check object metadata metadata = client.head_object(self.url, self.token, self.container_name, self.object_name) # check container listing metadata container_metadata, objs = client.get_container(self.url, self.token, self.container_name) for obj in objs: if obj['name'] == self.object_name: break expected = 'baz' self.assertEqual(obj['content_type'], expected) self._assert_object_metadata_matches_listing(obj, metadata) self._assert_consistent_container_dbs() self._assert_consistent_object_metadata() self._assert_consistent_suffix_hashes() def test_put_ctype_replicated_when_subsequent_post(self): # primary half handoff half # ------------ ------------ # t0.data: ctype = foo # t1.data: ctype = bar # t2.meta: # # ...run replicator and expect... 
# # t1.data: ctype = bar # t2.meta: self.brain.put_container(policy_index=0) # incomplete write self.brain.stop_handoff_half() self._put_object(headers={'Content-Type': 'foo'}) self.brain.start_handoff_half() # handoff write self.brain.stop_primary_half() self._put_object(headers={'Content-Type': 'bar'}) self.brain.start_primary_half() # metadata update with newest data unavailable self.brain.stop_handoff_half() self._post_object(headers={'X-Object-Meta-Color': 'Blue'}) self.brain.start_handoff_half() self.get_to_final_state() # check object metadata metadata = client.head_object(self.url, self.token, self.container_name, self.object_name) # check container listing metadata container_metadata, objs = client.get_container(self.url, self.token, self.container_name) for obj in objs: if obj['name'] == self.object_name: break else: self.fail('obj not found in container listing') expected = 'bar' self.assertEqual(obj['content_type'], expected) self.assertEqual(metadata['x-object-meta-color'], 'Blue') self._assert_object_metadata_matches_listing(obj, metadata) self._assert_consistent_container_dbs() self._assert_consistent_object_metadata() self._assert_consistent_suffix_hashes() def test_post_ctype_replicated_when_subsequent_post_without_ctype(self): # primary half handoff half # ------------ ------------ # t0.data: ctype = foo # t1.data: ctype = bar # t2.meta: ctype = bif # t3.data: ctype = baz, color = 'Red' # t4.meta: color = Blue # # ...run replicator and expect... # # t1.data: # t4-delta.meta: ctype = baz, color = Blue self.brain.put_container(policy_index=0) # incomplete write self.brain.stop_handoff_half() self._put_object(headers={'Content-Type': 'foo', 'X-Object-Sysmeta-Test': 'older'}) self.brain.start_handoff_half() # handoff write self.brain.stop_primary_half() self._put_object(headers={'Content-Type': 'bar', 'X-Object-Sysmeta-Test': 'newer'}) self.brain.start_primary_half() # incomplete post with content type self.brain.stop_handoff_half() self._post_object(headers={'Content-Type': 'bif'}) self.brain.start_handoff_half() # incomplete post to handoff with content type self.brain.stop_primary_half() self._post_object(headers={'Content-Type': 'baz', 'X-Object-Meta-Color': 'Red'}) self.brain.start_primary_half() # complete post with no content type self._post_object(headers={'X-Object-Meta-Color': 'Blue', 'X-Object-Sysmeta-Test': 'ignored'}) # 'baz' wins over 'bar' but 'Blue' wins over 'Red' self.get_to_final_state() # check object metadata metadata = self._get_object_metadata() # check container listing metadata container_metadata, objs = client.get_container(self.url, self.token, self.container_name) for obj in objs: if obj['name'] == self.object_name: break expected = 'baz' self.assertEqual(obj['content_type'], expected) self.assertEqual(metadata['x-object-meta-color'], 'Blue') self.assertEqual(metadata['x-object-sysmeta-test'], 'newer') self._assert_object_metadata_matches_listing(obj, metadata) self._assert_consistent_container_dbs() self._assert_consistent_object_metadata() self._assert_consistent_suffix_hashes() def test_put_ctype_replicated_when_subsequent_posts_without_ctype(self): # primary half handoff half # ------------ ------------ # t0.data: ctype = foo # t1.data: ctype = bar # t2.meta: # t3.meta # # ...run replicator and expect... 
# # t1.data: ctype = bar # t3.meta self.brain.put_container(policy_index=0) self._put_object(headers={'Content-Type': 'foo', 'X-Object-Sysmeta-Test': 'older'}) # incomplete write to handoff half self.brain.stop_primary_half() self._put_object(headers={'Content-Type': 'bar', 'X-Object-Sysmeta-Test': 'newer'}) self.brain.start_primary_half() # incomplete post with no content type to primary half self.brain.stop_handoff_half() self._post_object(headers={'X-Object-Meta-Color': 'Red', 'X-Object-Sysmeta-Test': 'ignored'}) self.brain.start_handoff_half() # incomplete post with no content type to handoff half self.brain.stop_primary_half() self._post_object(headers={'X-Object-Meta-Color': 'Blue'}) self.brain.start_primary_half() self.get_to_final_state() # check object metadata metadata = self._get_object_metadata() # check container listing metadata container_metadata, objs = client.get_container(self.url, self.token, self.container_name) for obj in objs: if obj['name'] == self.object_name: break expected = 'bar' self.assertEqual(obj['content_type'], expected) self._assert_object_metadata_matches_listing(obj, metadata) self.assertEqual(metadata['x-object-meta-color'], 'Blue') self.assertEqual(metadata['x-object-sysmeta-test'], 'newer') self._assert_object_metadata_matches_listing(obj, metadata) self._assert_consistent_container_dbs() self._assert_consistent_object_metadata() self._assert_consistent_suffix_hashes() def test_posted_metadata_only_persists_after_prior_put(self): # newer metadata posted to subset of nodes should persist after an # earlier put on other nodes, but older content-type on that subset # should not persist self.brain.put_container(policy_index=0) # incomplete put to handoff self.brain.stop_primary_half() self._put_object(headers={'Content-Type': 'oldest', 'X-Object-Sysmeta-Test': 'oldest', 'X-Object-Meta-Test': 'oldest'}) self.brain.start_primary_half() # incomplete put to primary self.brain.stop_handoff_half() self._put_object(headers={'Content-Type': 'oldest', 'X-Object-Sysmeta-Test': 'oldest', 'X-Object-Meta-Test': 'oldest'}) self.brain.start_handoff_half() # incomplete post with content-type to handoff self.brain.stop_primary_half() self._post_object(headers={'Content-Type': 'newer', 'X-Object-Meta-Test': 'newer'}) self.brain.start_primary_half() # incomplete put to primary self.brain.stop_handoff_half() self._put_object(headers={'Content-Type': 'newest', 'X-Object-Sysmeta-Test': 'newest', 'X-Object-Meta-Test': 'newer'}) self.brain.start_handoff_half() # incomplete post with no content-type to handoff which still has # out of date content-type self.brain.stop_primary_half() self._post_object(headers={'X-Object-Meta-Test': 'newest'}) metadata = self._get_object_metadata() self.assertEqual(metadata['x-object-meta-test'], 'newest') self.assertEqual(metadata['content-type'], 'newer') self.brain.start_primary_half() self.get_to_final_state() # check object metadata metadata = self._get_object_metadata() self.assertEqual(metadata['x-object-meta-test'], 'newest') self.assertEqual(metadata['x-object-sysmeta-test'], 'newest') self.assertEqual(metadata['content-type'], 'newest') # check container listing metadata container_metadata, objs = client.get_container(self.url, self.token, self.container_name) for obj in objs: if obj['name'] == self.object_name: break self.assertEqual(obj['content_type'], 'newest') self._assert_object_metadata_matches_listing(obj, metadata) self._assert_object_metadata_matches_listing(obj, metadata) self._assert_consistent_container_dbs() 
self._assert_consistent_object_metadata() self._assert_consistent_suffix_hashes() def test_post_trumped_by_prior_delete(self): # new metadata and content-type posted to subset of nodes should not # cause object to persist after replication of an earlier delete on # other nodes. self.brain.put_container(policy_index=0) # incomplete put self.brain.stop_primary_half() self._put_object(headers={'Content-Type': 'oldest', 'X-Object-Sysmeta-Test': 'oldest', 'X-Object-Meta-Test': 'oldest'}) self.brain.start_primary_half() # incomplete put then delete self.brain.stop_handoff_half() self._put_object(headers={'Content-Type': 'oldest', 'X-Object-Sysmeta-Test': 'oldest', 'X-Object-Meta-Test': 'oldest'}) self._delete_object() self.brain.start_handoff_half() # handoff post self.brain.stop_primary_half() self._post_object(headers={'Content-Type': 'newest', 'X-Object-Sysmeta-Test': 'ignored', 'X-Object-Meta-Test': 'newest'}) # check object metadata metadata = self._get_object_metadata() self.assertEqual(metadata['x-object-sysmeta-test'], 'oldest') self.assertEqual(metadata['x-object-meta-test'], 'newest') self.assertEqual(metadata['content-type'], 'newest') self.brain.start_primary_half() # delete trumps later post self.get_to_final_state() # check object is now deleted self.assertRaises(UnexpectedResponse, self._get_object_metadata) container_metadata, objs = client.get_container(self.url, self.token, self.container_name) self.assertEqual(0, len(objs)) self._assert_consistent_container_dbs() self._assert_consistent_deleted_object() self._assert_consistent_suffix_hashes() if __name__ == "__main__": unittest.main() swift-2.7.1/test/probe/test_container_merge_policy_index.py0000664000567000056710000005214513024044354025432 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
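# These tests create a container "split brain" in which the primary container
# databases disagree about the storage policy, then run the replicators and
# the container-reconciler and verify that everything converges on a single
# policy index with objects stored in the matching object ring.  A minimal
# sketch of how the disagreement is detected, using the direct_client call the
# tests rely on:
def _policy_indexes_sketch(container_nodes, container_part, account, container):
    from swift.common import direct_client
    # HEAD every primary container server directly and collect the backend
    # policy index each one reports; more than one distinct value means the
    # primaries are in a split-brain state.
    return set(
        direct_client.direct_head_container(
            node, container_part, account,
            container)['X-Backend-Storage-Policy-Index']
        for node in container_nodes)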
from hashlib import md5 import time import uuid import random import unittest from nose import SkipTest from swift.common.manager import Manager from swift.common.internal_client import InternalClient from swift.common import utils, direct_client from swift.common.storage_policy import POLICIES from swift.common.http import HTTP_NOT_FOUND from test.probe.brain import BrainSplitter from test.probe.common import (ReplProbeTest, ENABLED_POLICIES, POLICIES_BY_TYPE, REPL_POLICY) from swiftclient import client, ClientException TIMEOUT = 60 class TestContainerMergePolicyIndex(ReplProbeTest): def setUp(self): if len(ENABLED_POLICIES) < 2: raise SkipTest('Need more than one policy') super(TestContainerMergePolicyIndex, self).setUp() self.container_name = 'container-%s' % uuid.uuid4() self.object_name = 'object-%s' % uuid.uuid4() self.brain = BrainSplitter(self.url, self.token, self.container_name, self.object_name, 'container') def test_merge_storage_policy_index(self): # generic split brain self.brain.stop_primary_half() self.brain.put_container() self.brain.start_primary_half() self.brain.stop_handoff_half() self.brain.put_container() self.brain.put_object() self.brain.start_handoff_half() # make sure we have some manner of split brain container_part, container_nodes = self.container_ring.get_nodes( self.account, self.container_name) head_responses = [] for node in container_nodes: metadata = direct_client.direct_head_container( node, container_part, self.account, self.container_name) head_responses.append((node, metadata)) found_policy_indexes = \ set(metadata['X-Backend-Storage-Policy-Index'] for node, metadata in head_responses) self.assertTrue( len(found_policy_indexes) > 1, 'primary nodes did not disagree about policy index %r' % head_responses) # find our object orig_policy_index = None for policy_index in found_policy_indexes: object_ring = POLICIES.get_object_ring(policy_index, '/etc/swift') part, nodes = object_ring.get_nodes( self.account, self.container_name, self.object_name) for node in nodes: try: direct_client.direct_head_object( node, part, self.account, self.container_name, self.object_name, headers={'X-Backend-Storage-Policy-Index': policy_index}) except direct_client.ClientException as err: continue orig_policy_index = policy_index break if orig_policy_index is not None: break else: self.fail('Unable to find /%s/%s/%s in %r' % ( self.account, self.container_name, self.object_name, found_policy_indexes)) self.get_to_final_state() Manager(['container-reconciler']).once() # validate containers head_responses = [] for node in container_nodes: metadata = direct_client.direct_head_container( node, container_part, self.account, self.container_name) head_responses.append((node, metadata)) found_policy_indexes = \ set(metadata['X-Backend-Storage-Policy-Index'] for node, metadata in head_responses) self.assertTrue(len(found_policy_indexes) == 1, 'primary nodes disagree about policy index %r' % head_responses) expected_policy_index = found_policy_indexes.pop() self.assertNotEqual(orig_policy_index, expected_policy_index) # validate object placement orig_policy_ring = POLICIES.get_object_ring(orig_policy_index, '/etc/swift') for node in orig_policy_ring.devs: try: direct_client.direct_head_object( node, part, self.account, self.container_name, self.object_name, headers={ 'X-Backend-Storage-Policy-Index': orig_policy_index}) except direct_client.ClientException as err: if err.http_status == HTTP_NOT_FOUND: continue raise else: self.fail('Found /%s/%s/%s in %s' % ( self.account, 
self.container_name, self.object_name, orig_policy_index)) # use proxy to access object (bad container info might be cached...) timeout = time.time() + TIMEOUT while time.time() < timeout: try: metadata = client.head_object(self.url, self.token, self.container_name, self.object_name) except ClientException as err: if err.http_status != HTTP_NOT_FOUND: raise time.sleep(1) else: break else: self.fail('could not HEAD /%s/%s/%s/ from policy %s ' 'after %s seconds.' % ( self.account, self.container_name, self.object_name, expected_policy_index, TIMEOUT)) def test_reconcile_delete(self): # generic split brain self.brain.stop_primary_half() self.brain.put_container() self.brain.put_object() self.brain.start_primary_half() self.brain.stop_handoff_half() self.brain.put_container() self.brain.delete_object() self.brain.start_handoff_half() # make sure we have some manner of split brain container_part, container_nodes = self.container_ring.get_nodes( self.account, self.container_name) head_responses = [] for node in container_nodes: metadata = direct_client.direct_head_container( node, container_part, self.account, self.container_name) head_responses.append((node, metadata)) found_policy_indexes = \ set(metadata['X-Backend-Storage-Policy-Index'] for node, metadata in head_responses) self.assertTrue( len(found_policy_indexes) > 1, 'primary nodes did not disagree about policy index %r' % head_responses) # find our object orig_policy_index = ts_policy_index = None for policy_index in found_policy_indexes: object_ring = POLICIES.get_object_ring(policy_index, '/etc/swift') part, nodes = object_ring.get_nodes( self.account, self.container_name, self.object_name) for node in nodes: try: direct_client.direct_head_object( node, part, self.account, self.container_name, self.object_name, headers={'X-Backend-Storage-Policy-Index': policy_index}) except direct_client.ClientException as err: if 'x-backend-timestamp' in err.http_headers: ts_policy_index = policy_index break else: orig_policy_index = policy_index break if not orig_policy_index: self.fail('Unable to find /%s/%s/%s in %r' % ( self.account, self.container_name, self.object_name, found_policy_indexes)) if not ts_policy_index: self.fail('Unable to find tombstone /%s/%s/%s in %r' % ( self.account, self.container_name, self.object_name, found_policy_indexes)) self.get_to_final_state() Manager(['container-reconciler']).once() # validate containers head_responses = [] for node in container_nodes: metadata = direct_client.direct_head_container( node, container_part, self.account, self.container_name) head_responses.append((node, metadata)) new_found_policy_indexes = \ set(metadata['X-Backend-Storage-Policy-Index'] for node, metadata in head_responses) self.assertTrue(len(new_found_policy_indexes) == 1, 'primary nodes disagree about policy index %r' % dict((node['port'], metadata['X-Backend-Storage-Policy-Index']) for node, metadata in head_responses)) expected_policy_index = new_found_policy_indexes.pop() self.assertEqual(orig_policy_index, expected_policy_index) # validate object fully deleted for policy_index in found_policy_indexes: object_ring = POLICIES.get_object_ring(policy_index, '/etc/swift') part, nodes = object_ring.get_nodes( self.account, self.container_name, self.object_name) for node in nodes: try: direct_client.direct_head_object( node, part, self.account, self.container_name, self.object_name, headers={'X-Backend-Storage-Policy-Index': policy_index}) except direct_client.ClientException as err: if err.http_status == HTTP_NOT_FOUND: continue 
else: self.fail('Found /%s/%s/%s in %s on %s' % ( self.account, self.container_name, self.object_name, orig_policy_index, node)) def test_reconcile_manifest(self): # this test is not only testing a split brain scenario on # multiple policies with mis-placed objects - it even writes out # a static large object directly to the storage nodes while the # objects are unavailably mis-placed from *behind* the proxy and # doesn't know how to do that for EC_POLICY (clayg: why did you # guys let me write a test that does this!?) - so we force # wrong_policy (where the manifest gets written) to be one of # any of your configured REPL_POLICY (we know you have one # because this is a ReplProbeTest) wrong_policy = random.choice(POLICIES_BY_TYPE[REPL_POLICY]) policy = random.choice([p for p in ENABLED_POLICIES if p is not wrong_policy]) manifest_data = [] def write_part(i): body = 'VERIFY%0.2d' % i + '\x00' * 1048576 part_name = 'manifest_part_%0.2d' % i manifest_entry = { "path": "/%s/%s" % (self.container_name, part_name), "etag": md5(body).hexdigest(), "size_bytes": len(body), } client.put_object(self.url, self.token, self.container_name, part_name, contents=body) manifest_data.append(manifest_entry) # get an old container stashed self.brain.stop_primary_half() self.brain.put_container(int(policy)) self.brain.start_primary_half() # write some parts for i in range(10): write_part(i) self.brain.stop_handoff_half() self.brain.put_container(int(wrong_policy)) # write some more parts for i in range(10, 20): write_part(i) # write manifest try: client.put_object(self.url, self.token, self.container_name, self.object_name, contents=utils.json.dumps(manifest_data), query_string='multipart-manifest=put') except ClientException as err: # so as it works out, you can't really upload a multi-part # manifest for objects that are currently misplaced - you have to # wait until they're all available - which is about the same as # some other failure that causes data to be unavailable to the # proxy at the time of upload self.assertEqual(err.http_status, 400) # but what the heck, we'll sneak one in just to see what happens... direct_manifest_name = self.object_name + '-direct-test' object_ring = POLICIES.get_object_ring(wrong_policy.idx, '/etc/swift') part, nodes = object_ring.get_nodes( self.account, self.container_name, direct_manifest_name) container_part = self.container_ring.get_part(self.account, self.container_name) def translate_direct(data): return { 'hash': data['etag'], 'bytes': data['size_bytes'], 'name': data['path'], } direct_manifest_data = map(translate_direct, manifest_data) headers = { 'x-container-host': ','.join('%s:%s' % (n['ip'], n['port']) for n in self.container_ring.devs), 'x-container-device': ','.join(n['device'] for n in self.container_ring.devs), 'x-container-partition': container_part, 'X-Backend-Storage-Policy-Index': wrong_policy.idx, 'X-Static-Large-Object': 'True', } for node in nodes: direct_client.direct_put_object( node, part, self.account, self.container_name, direct_manifest_name, contents=utils.json.dumps(direct_manifest_data), headers=headers) break # one should do it... self.brain.start_handoff_half() self.get_to_final_state() Manager(['container-reconciler']).once() # clear proxy cache client.post_container(self.url, self.token, self.container_name, {}) # let's see how that direct upload worked out... 
metadata, body = client.get_object( self.url, self.token, self.container_name, direct_manifest_name, query_string='multipart-manifest=get') self.assertEqual(metadata['x-static-large-object'].lower(), 'true') for i, entry in enumerate(utils.json.loads(body)): for key in ('hash', 'bytes', 'name'): self.assertEqual(entry[key], direct_manifest_data[i][key]) metadata, body = client.get_object( self.url, self.token, self.container_name, direct_manifest_name) self.assertEqual(metadata['x-static-large-object'].lower(), 'true') self.assertEqual(int(metadata['content-length']), sum(part['size_bytes'] for part in manifest_data)) self.assertEqual(body, ''.join('VERIFY%0.2d' % i + '\x00' * 1048576 for i in range(20))) # and regular upload should work now too client.put_object(self.url, self.token, self.container_name, self.object_name, contents=utils.json.dumps(manifest_data), query_string='multipart-manifest=put') metadata = client.head_object(self.url, self.token, self.container_name, self.object_name) self.assertEqual(int(metadata['content-length']), sum(part['size_bytes'] for part in manifest_data)) def test_reconciler_move_object_twice(self): # select some policies old_policy = random.choice(ENABLED_POLICIES) new_policy = random.choice([p for p in ENABLED_POLICIES if p != old_policy]) # setup a split brain self.brain.stop_handoff_half() # get old_policy on two primaries self.brain.put_container(policy_index=int(old_policy)) self.brain.start_handoff_half() self.brain.stop_primary_half() # force a recreate on handoffs self.brain.put_container(policy_index=int(old_policy)) self.brain.delete_container() self.brain.put_container(policy_index=int(new_policy)) self.brain.put_object() # populate memcache with new_policy self.brain.start_primary_half() # at this point two primaries have old policy container_part, container_nodes = self.container_ring.get_nodes( self.account, self.container_name) head_responses = [] for node in container_nodes: metadata = direct_client.direct_head_container( node, container_part, self.account, self.container_name) head_responses.append((node, metadata)) old_container_node_ids = [ node['id'] for node, metadata in head_responses if int(old_policy) == int(metadata['X-Backend-Storage-Policy-Index'])] self.assertEqual(2, len(old_container_node_ids)) # hopefully memcache still has the new policy cached self.brain.put_object() # double-check object correctly written to new policy conf_files = [] for server in Manager(['container-reconciler']).servers: conf_files.extend(server.conf_files()) conf_file = conf_files[0] client = InternalClient(conf_file, 'probe-test', 3) client.get_object_metadata( self.account, self.container_name, self.object_name, headers={'X-Backend-Storage-Policy-Index': int(new_policy)}) client.get_object_metadata( self.account, self.container_name, self.object_name, acceptable_statuses=(4,), headers={'X-Backend-Storage-Policy-Index': int(old_policy)}) # shutdown the containers that know about the new policy self.brain.stop_handoff_half() # and get rows enqueued from old nodes for server_type in ('container-replicator', 'container-updater'): server = Manager([server_type]) tuple(server.once(number=n + 1) for n in old_container_node_ids) # verify entry in the queue for the "misplaced" new_policy for container in client.iter_containers('.misplaced_objects'): for obj in client.iter_objects('.misplaced_objects', container['name']): expected = '%d:/%s/%s/%s' % (new_policy, self.account, self.container_name, self.object_name) self.assertEqual(obj['name'], expected) 
Manager(['container-reconciler']).once() # verify object in old_policy client.get_object_metadata( self.account, self.container_name, self.object_name, headers={'X-Backend-Storage-Policy-Index': int(old_policy)}) # verify object is *not* in new_policy client.get_object_metadata( self.account, self.container_name, self.object_name, acceptable_statuses=(4,), headers={'X-Backend-Storage-Policy-Index': int(new_policy)}) self.get_to_final_state() # verify entry in the queue client = InternalClient(conf_file, 'probe-test', 3) for container in client.iter_containers('.misplaced_objects'): for obj in client.iter_objects('.misplaced_objects', container['name']): expected = '%d:/%s/%s/%s' % (old_policy, self.account, self.container_name, self.object_name) self.assertEqual(obj['name'], expected) Manager(['container-reconciler']).once() # and now it flops back client.get_object_metadata( self.account, self.container_name, self.object_name, headers={'X-Backend-Storage-Policy-Index': int(new_policy)}) client.get_object_metadata( self.account, self.container_name, self.object_name, acceptable_statuses=(4,), headers={'X-Backend-Storage-Policy-Index': int(old_policy)}) # make sure the queue is settled self.get_to_final_state() for container in client.iter_containers('.misplaced_objects'): for obj in client.iter_objects('.misplaced_objects', container['name']): self.fail('Found unexpected object %r in the queue' % obj) if __name__ == "__main__": unittest.main() swift-2.7.1/test/probe/test_account_failures.py0000775000567000056710000001743613024044354023060 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
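# This test drives an account through container creation, object writes and
# partial account-server failures, checking at each step that the
# account-level headers (container count, object count, bytes used) and the
# per-container listing entries reflect only what the surviving account
# servers could have learned so far.  A small sketch of the repeated header
# check, assuming the swiftclient get_account() call used below:
def _account_stats_sketch(url, token):
    from swiftclient import client
    headers, _containers = client.get_account(url, token)
    # The account servers report their current view of the cluster in these
    # headers; the test compares them against expected values after each step.
    return (headers['x-account-container-count'],
            headers['x-account-object-count'],
            headers['x-account-bytes-used'])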
from unittest import main from swiftclient import client from swift.common import direct_client from swift.common.manager import Manager from test.probe.common import kill_nonprimary_server, \ kill_server, ReplProbeTest, start_server class TestAccountFailures(ReplProbeTest): def test_main(self): # Create container1 and container2 container1 = 'container1' client.put_container(self.url, self.token, container1) container2 = 'container2' client.put_container(self.url, self.token, container2) # Assert account level sees them headers, containers = client.get_account(self.url, self.token) self.assertEqual(headers['x-account-container-count'], '2') self.assertEqual(headers['x-account-object-count'], '0') self.assertEqual(headers['x-account-bytes-used'], '0') found1 = False found2 = False for container in containers: if container['name'] == container1: found1 = True self.assertEqual(container['count'], 0) self.assertEqual(container['bytes'], 0) elif container['name'] == container2: found2 = True self.assertEqual(container['count'], 0) self.assertEqual(container['bytes'], 0) self.assertTrue(found1) self.assertTrue(found2) # Create container2/object1 client.put_object(self.url, self.token, container2, 'object1', '1234') # Assert account level doesn't see it yet headers, containers = client.get_account(self.url, self.token) self.assertEqual(headers['x-account-container-count'], '2') self.assertEqual(headers['x-account-object-count'], '0') self.assertEqual(headers['x-account-bytes-used'], '0') found1 = False found2 = False for container in containers: if container['name'] == container1: found1 = True self.assertEqual(container['count'], 0) self.assertEqual(container['bytes'], 0) elif container['name'] == container2: found2 = True self.assertEqual(container['count'], 0) self.assertEqual(container['bytes'], 0) self.assertTrue(found1) self.assertTrue(found2) # Get to final state self.get_to_final_state() # Assert account level now sees the container2/object1 headers, containers = client.get_account(self.url, self.token) self.assertEqual(headers['x-account-container-count'], '2') self.assertEqual(headers['x-account-object-count'], '1') self.assertEqual(headers['x-account-bytes-used'], '4') found1 = False found2 = False for container in containers: if container['name'] == container1: found1 = True self.assertEqual(container['count'], 0) self.assertEqual(container['bytes'], 0) elif container['name'] == container2: found2 = True self.assertEqual(container['count'], 1) self.assertEqual(container['bytes'], 4) self.assertTrue(found1) self.assertTrue(found2) apart, anodes = self.account_ring.get_nodes(self.account) kill_nonprimary_server(anodes, self.ipport2server, self.pids) kill_server((anodes[0]['ip'], anodes[0]['port']), self.ipport2server, self.pids) # Kill account servers excepting two of the primaries # Delete container1 client.delete_container(self.url, self.token, container1) # Put container2/object2 client.put_object(self.url, self.token, container2, 'object2', '12345') # Assert account level knows container1 is gone but doesn't know about # container2/object2 yet headers, containers = client.get_account(self.url, self.token) self.assertEqual(headers['x-account-container-count'], '1') self.assertEqual(headers['x-account-object-count'], '1') self.assertEqual(headers['x-account-bytes-used'], '4') found1 = False found2 = False for container in containers: if container['name'] == container1: found1 = True elif container['name'] == container2: found2 = True self.assertEqual(container['count'], 1) 
self.assertEqual(container['bytes'], 4) self.assertFalse(found1) self.assertTrue(found2) # Run container updaters Manager(['container-updater']).once() # Assert account level now knows about container2/object2 headers, containers = client.get_account(self.url, self.token) self.assertEqual(headers['x-account-container-count'], '1') self.assertEqual(headers['x-account-object-count'], '2') self.assertEqual(headers['x-account-bytes-used'], '9') found1 = False found2 = False for container in containers: if container['name'] == container1: found1 = True elif container['name'] == container2: found2 = True self.assertEqual(container['count'], 2) self.assertEqual(container['bytes'], 9) self.assertFalse(found1) self.assertTrue(found2) # Restart other primary account server start_server((anodes[0]['ip'], anodes[0]['port']), self.ipport2server, self.pids) # Assert that server doesn't know about container1's deletion or the # new container2/object2 yet headers, containers = \ direct_client.direct_get_account(anodes[0], apart, self.account) self.assertEqual(headers['x-account-container-count'], '2') self.assertEqual(headers['x-account-object-count'], '1') self.assertEqual(headers['x-account-bytes-used'], '4') found1 = False found2 = False for container in containers: if container['name'] == container1: found1 = True elif container['name'] == container2: found2 = True self.assertEqual(container['count'], 1) self.assertEqual(container['bytes'], 4) self.assertTrue(found1) self.assertTrue(found2) # Get to final state self.get_to_final_state() # Assert that server is now up to date headers, containers = \ direct_client.direct_get_account(anodes[0], apart, self.account) self.assertEqual(headers['x-account-container-count'], '1') self.assertEqual(headers['x-account-object-count'], '2') self.assertEqual(headers['x-account-bytes-used'], '9') found1 = False found2 = False for container in containers: if container['name'] == container1: found1 = True elif container['name'] == container2: found2 = True self.assertEqual(container['count'], 2) self.assertEqual(container['bytes'], 9) self.assertEqual(container['bytes'], 9) self.assertFalse(found1) self.assertTrue(found2) if __name__ == '__main__': main() swift-2.7.1/test/probe/test_reconstructor_durable.py0000664000567000056710000001245613024044354024136 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
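# Probe test overview: after a successful PUT to an erasure-coded container,
# remove the .durable marker (and any hashes.pkl) from selected object nodes,
# run the object-reconstructor, and verify that durability is restored while
# the fragment archives and object metadata remain unchanged.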
from hashlib import md5 import unittest import uuid import random import os import errno from test.probe.common import ECProbeTest from swift.common import direct_client from swift.common.storage_policy import EC_POLICY from swift.common.manager import Manager from swiftclient import client class Body(object): def __init__(self, total=3.5 * 2 ** 20): self.total = total self.hasher = md5() self.size = 0 self.chunk = 'test' * 16 * 2 ** 10 @property def etag(self): return self.hasher.hexdigest() def __iter__(self): return self def next(self): if self.size > self.total: raise StopIteration() self.size += len(self.chunk) self.hasher.update(self.chunk) return self.chunk def __next__(self): return next(self) class TestReconstructorPropDurable(ECProbeTest): def setUp(self): super(TestReconstructorPropDurable, self).setUp() self.container_name = 'container-%s' % uuid.uuid4() self.object_name = 'object-%s' % uuid.uuid4() # sanity self.assertEqual(self.policy.policy_type, EC_POLICY) self.reconstructor = Manager(["object-reconstructor"]) def direct_get(self, node, part): req_headers = {'X-Backend-Storage-Policy-Index': int(self.policy)} headers, data = direct_client.direct_get_object( node, part, self.account, self.container_name, self.object_name, headers=req_headers, resp_chunk_size=64 * 2 ** 20) hasher = md5() for chunk in data: hasher.update(chunk) return hasher.hexdigest() def _check_node(self, node, part, etag, headers_post): # get fragment archive etag fragment_archive_etag = self.direct_get(node, part) # remove the .durable from the selected node part_dir = self.storage_dir('object', node, part=part) for dirs, subdirs, files in os.walk(part_dir): for fname in files: if fname.endswith('.durable'): durable = os.path.join(dirs, fname) os.remove(durable) break try: os.remove(os.path.join(part_dir, 'hashes.pkl')) except OSError as e: if e.errno != errno.ENOENT: raise # fire up reconstructor to propagate the .durable self.reconstructor.once() # fragment is still exactly as it was before! self.assertEqual(fragment_archive_etag, self.direct_get(node, part)) # check meta meta = client.head_object(self.url, self.token, self.container_name, self.object_name) for key in headers_post: self.assertTrue(key in meta) self.assertEqual(meta[key], headers_post[key]) def _format_node(self, node): return '%s#%s' % (node['device'], node['index']) def test_main(self): # create EC container headers = {'X-Storage-Policy': self.policy.name} client.put_container(self.url, self.token, self.container_name, headers=headers) # PUT object contents = Body() headers = {'x-object-meta-foo': 'meta-foo'} headers_post = {'x-object-meta-bar': 'meta-bar'} etag = client.put_object(self.url, self.token, self.container_name, self.object_name, contents=contents, headers=headers) client.post_object(self.url, self.token, self.container_name, self.object_name, headers=headers_post) del headers_post['X-Auth-Token'] # WTF, where did this come from? # built up a list of node lists to kill a .durable from, # first try a single node # then adjacent nodes and then nodes >1 node apart opart, onodes = self.object_ring.get_nodes( self.account, self.container_name, self.object_name) single_node = [random.choice(onodes)] adj_nodes = [onodes[0], onodes[-1]] far_nodes = [onodes[0], onodes[-2]] test_list = [single_node, adj_nodes, far_nodes] for node_list in test_list: for onode in node_list: try: self._check_node(onode, opart, etag, headers_post) except AssertionError as e: self.fail( str(e) + '\n... 
for node %r of scenario %r' % ( self._format_node(onode), [self._format_node(n) for n in node_list])) if __name__ == "__main__": unittest.main() swift-2.7.1/test/functional/0000775000567000056710000000000013024044470017135 5ustar jenkinsjenkins00000000000000swift-2.7.1/test/functional/swift_test_client.py0000664000567000056710000010361413024044354023246 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import hashlib import json import os import random import socket import time from unittest2 import SkipTest from xml.dom import minidom import six from six.moves import http_client from six.moves import urllib from swiftclient import get_auth from swift.common import constraints from swift.common.utils import config_true_value from test import safe_repr http_client._MAXHEADERS = constraints.MAX_HEADER_COUNT class AuthenticationFailed(Exception): pass class RequestError(Exception): pass class ResponseError(Exception): def __init__(self, response, method=None, path=None): self.status = response.status self.reason = response.reason self.method = method self.path = path self.headers = response.getheaders() for name, value in self.headers: if name.lower() == 'x-trans-id': self.txid = value break else: self.txid = None super(ResponseError, self).__init__() def __str__(self): return repr(self) def __repr__(self): return '%d: %r (%r %r) txid=%s' % ( self.status, self.reason, self.method, self.path, self.txid) def listing_empty(method): for i in range(6): if len(method()) == 0: return True time.sleep(2 ** i) return False def listing_items(method): marker = None once = True items = [] while once or items: for i in items: yield i if once or marker: if marker: items = method(parms={'marker': marker}) else: items = method() if len(items) == 10000: marker = items[-1] else: marker = None once = False else: items = [] class Connection(object): def __init__(self, config): for key in 'auth_host auth_port auth_ssl username password'.split(): if key not in config: raise SkipTest( "Missing required configuration parameter: %s" % key) self.auth_host = config['auth_host'] self.auth_port = int(config['auth_port']) self.auth_ssl = config['auth_ssl'] in ('on', 'true', 'yes', '1') self.insecure = config_true_value(config.get('insecure', 'false')) self.auth_prefix = config.get('auth_prefix', '/') self.auth_version = str(config.get('auth_version', '1')) self.account = config.get('account') self.username = config['username'] self.password = config['password'] self.storage_host = None self.storage_port = None self.storage_url = None self.conn_class = None def get_account(self): return Account(self, self.account) def authenticate(self, clone_conn=None): if clone_conn: self.conn_class = clone_conn.conn_class self.storage_host = clone_conn.storage_host self.storage_url = clone_conn.storage_url self.storage_port = clone_conn.storage_port self.storage_token = clone_conn.storage_token return if self.auth_version == "1": auth_path = '%sv1.0' % (self.auth_prefix) 
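            # For v1 auth (e.g. tempauth) the user is '<account>:<username>'
            # when an account is configured; otherwise the bare username is
            # used.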
if self.account: auth_user = '%s:%s' % (self.account, self.username) else: auth_user = self.username else: auth_user = self.username auth_path = self.auth_prefix auth_scheme = 'https://' if self.auth_ssl else 'http://' auth_netloc = "%s:%d" % (self.auth_host, self.auth_port) auth_url = auth_scheme + auth_netloc + auth_path authargs = dict(snet=False, tenant_name=self.account, auth_version=self.auth_version, os_options={}, insecure=self.insecure) (storage_url, storage_token) = get_auth( auth_url, auth_user, self.password, **authargs) if not (storage_url and storage_token): raise AuthenticationFailed() x = storage_url.split('/') if x[0] == 'http:': self.conn_class = http_client.HTTPConnection self.storage_port = 80 elif x[0] == 'https:': self.conn_class = http_client.HTTPSConnection self.storage_port = 443 else: raise ValueError('unexpected protocol %s' % (x[0])) self.storage_host = x[2].split(':')[0] if ':' in x[2]: self.storage_port = int(x[2].split(':')[1]) # Make sure storage_url is a string and not unicode, since # keystoneclient (called by swiftclient) returns them in # unicode and this would cause troubles when doing # no_safe_quote query. self.storage_url = str('/%s/%s' % (x[3], x[4])) self.account_name = str(x[4]) self.auth_user = auth_user # With v2 keystone, storage_token is unicode. # We want it to be string otherwise this would cause # troubles when doing query with already encoded # non ascii characters in its headers. self.storage_token = str(storage_token) self.user_acl = '%s:%s' % (self.account, self.username) self.http_connect() return self.storage_url, self.storage_token def cluster_info(self): """ Retrieve the data in /info, or {} on 404 """ status = self.make_request('GET', '/info', cfg={'absolute_path': True}) if status // 100 == 4: return {} if not 200 <= status <= 299: raise ResponseError(self.response, 'GET', '/info') return json.loads(self.response.read()) def http_connect(self): self.connection = self.conn_class(self.storage_host, port=self.storage_port) # self.connection.set_debuglevel(3) def make_path(self, path=None, cfg=None): if path is None: path = [] if cfg is None: cfg = {} if cfg.get('version_only_path'): return '/' + self.storage_url.split('/')[1] if path: quote = urllib.parse.quote if cfg.get('no_quote') or cfg.get('no_path_quote'): quote = lambda x: x return '%s/%s' % (self.storage_url, '/'.join([quote(i) for i in path])) else: return self.storage_url def make_headers(self, hdrs, cfg=None): if cfg is None: cfg = {} headers = {} if not cfg.get('no_auth_token'): headers['X-Auth-Token'] = self.storage_token if cfg.get('use_token'): headers['X-Auth-Token'] = cfg.get('use_token') if isinstance(hdrs, dict): headers.update(hdrs) return headers def make_request(self, method, path=None, data='', hdrs=None, parms=None, cfg=None): if path is None: path = [] if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} if not cfg.get('absolute_path'): # Set absolute_path=True to make a request to exactly the given # path, not storage path + given path. Useful for # non-account/container/object requests. 
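            # cluster_info() above relies on absolute_path=True to GET /info
            # directly.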
path = self.make_path(path, cfg=cfg) headers = self.make_headers(hdrs, cfg=cfg) if isinstance(parms, dict) and parms: quote = urllib.parse.quote if cfg.get('no_quote') or cfg.get('no_parms_quote'): quote = lambda x: x query_args = ['%s=%s' % (quote(x), quote(str(y))) for (x, y) in parms.items()] path = '%s?%s' % (path, '&'.join(query_args)) if not cfg.get('no_content_length'): if cfg.get('set_content_length'): headers['Content-Length'] = cfg.get('set_content_length') else: headers['Content-Length'] = len(data) def try_request(): self.http_connect() self.connection.request(method, path, data, headers) return self.connection.getresponse() self.response = None try_count = 0 fail_messages = [] while try_count < 5: try_count += 1 try: self.response = try_request() except http_client.HTTPException as e: fail_messages.append(safe_repr(e)) continue if self.response.status == 401: fail_messages.append("Response 401") self.authenticate() continue elif self.response.status == 503: fail_messages.append("Response 503") if try_count != 5: time.sleep(5) continue break if self.response: return self.response.status request = "{method} {path} headers: {headers} data: {data}".format( method=method, path=path, headers=headers, data=data) raise RequestError('Unable to complete http request: %s. ' 'Attempts: %s, Failures: %s' % (request, len(fail_messages), fail_messages)) def put_start(self, path, hdrs=None, parms=None, cfg=None, chunked=False): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} self.http_connect() path = self.make_path(path, cfg) headers = self.make_headers(hdrs, cfg=cfg) if chunked: headers['Transfer-Encoding'] = 'chunked' headers.pop('Content-Length', None) if isinstance(parms, dict) and parms: quote = urllib.parse.quote if cfg.get('no_quote') or cfg.get('no_parms_quote'): quote = lambda x: x query_args = ['%s=%s' % (quote(x), quote(str(y))) for (x, y) in parms.items()] path = '%s?%s' % (path, '&'.join(query_args)) self.connection = self.conn_class(self.storage_host, port=self.storage_port) # self.connection.set_debuglevel(3) self.connection.putrequest('PUT', path) for key, value in headers.items(): self.connection.putheader(key, value) self.connection.endheaders() def put_data(self, data, chunked=False): if chunked: self.connection.send('%x\r\n%s\r\n' % (len(data), data)) else: self.connection.send(data) def put_end(self, chunked=False): if chunked: self.connection.send('0\r\n\r\n') self.response = self.connection.getresponse() self.connection.close() return self.response.status class Base(object): def __str__(self): return self.name def header_fields(self, required_fields, optional_fields=None): if optional_fields is None: optional_fields = () headers = dict(self.conn.response.getheaders()) ret = {} for field in required_fields: if field[1] not in headers: raise ValueError("%s was not found in response header" % (field[1])) try: ret[field[0]] = int(headers[field[1]]) except ValueError: ret[field[0]] = headers[field[1]] for field in optional_fields: if field[1] not in headers: continue try: ret[field[0]] = int(headers[field[1]]) except ValueError: ret[field[0]] = headers[field[1]] return ret class Account(Base): def __init__(self, conn, name): self.conn = conn self.name = str(name) def update_metadata(self, metadata=None, cfg=None): if metadata is None: metadata = {} if cfg is None: cfg = {} headers = dict(("X-Account-Meta-%s" % k, v) for k, v in metadata.items()) self.conn.make_request('POST', self.path, hdrs=headers, cfg=cfg) if not 200 <= 
self.conn.response.status <= 299: raise ResponseError(self.conn.response, 'POST', self.conn.make_path(self.path)) return True def container(self, container_name): return Container(self.conn, self.name, container_name) def containers(self, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} format_type = parms.get('format', None) if format_type not in [None, 'json', 'xml']: raise RequestError('Invalid format: %s' % format_type) if format_type is None and 'format' in parms: del parms['format'] status = self.conn.make_request('GET', self.path, hdrs=hdrs, parms=parms, cfg=cfg) if status == 200: if format_type == 'json': conts = json.loads(self.conn.response.read()) for cont in conts: cont['name'] = cont['name'].encode('utf-8') return conts elif format_type == 'xml': conts = [] tree = minidom.parseString(self.conn.response.read()) for x in tree.getElementsByTagName('container'): cont = {} for key in ['name', 'count', 'bytes']: cont[key] = x.getElementsByTagName(key)[0].\ childNodes[0].nodeValue conts.append(cont) for cont in conts: cont['name'] = cont['name'].encode('utf-8') return conts else: lines = self.conn.response.read().split('\n') if lines and not lines[-1]: lines = lines[:-1] return lines elif status == 204: return [] raise ResponseError(self.conn.response, 'GET', self.conn.make_path(self.path)) def delete_containers(self): for c in listing_items(self.containers): cont = self.container(c) cont.update_metadata(hdrs={'x-versions-location': ''}) if not cont.delete_recursive(): return False return listing_empty(self.containers) def info(self, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} if self.conn.make_request('HEAD', self.path, hdrs=hdrs, parms=parms, cfg=cfg) != 204: raise ResponseError(self.conn.response, 'HEAD', self.conn.make_path(self.path)) fields = [['object_count', 'x-account-object-count'], ['container_count', 'x-account-container-count'], ['bytes_used', 'x-account-bytes-used']] return self.header_fields(fields) @property def path(self): return [] class Container(Base): # policy_specified is set in __init__.py when tests are being set up. 
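    # Typically a specific storage policy is selected via the test runner
    # (e.g. the SWIFT_TEST_POLICY environment variable); when unset, newly
    # created containers fall back to the cluster's default policy.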
policy_specified = None def __init__(self, conn, account, name): self.conn = conn self.account = str(account) self.name = str(name) def create(self, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} if self.policy_specified and 'X-Storage-Policy' not in hdrs: hdrs['X-Storage-Policy'] = self.policy_specified return self.conn.make_request('PUT', self.path, hdrs=hdrs, parms=parms, cfg=cfg) in (201, 202) def update_metadata(self, hdrs=None, cfg=None): if hdrs is None: hdrs = {} if cfg is None: cfg = {} self.conn.make_request('POST', self.path, hdrs=hdrs, cfg=cfg) if not 200 <= self.conn.response.status <= 299: raise ResponseError(self.conn.response, 'POST', self.conn.make_path(self.path)) return True def delete(self, hdrs=None, parms=None): if hdrs is None: hdrs = {} if parms is None: parms = {} return self.conn.make_request('DELETE', self.path, hdrs=hdrs, parms=parms) == 204 def delete_files(self): for f in listing_items(self.files): file_item = self.file(f) if not file_item.delete(): return False return listing_empty(self.files) def delete_recursive(self): return self.delete_files() and self.delete() def file(self, file_name): return File(self.conn, self.account, self.name, file_name) def files(self, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} format_type = parms.get('format', None) if format_type not in [None, 'json', 'xml']: raise RequestError('Invalid format: %s' % format_type) if format_type is None and 'format' in parms: del parms['format'] status = self.conn.make_request('GET', self.path, hdrs=hdrs, parms=parms, cfg=cfg) if status == 200: if format_type == 'json': files = json.loads(self.conn.response.read()) for file_item in files: file_item['name'] = file_item['name'].encode('utf-8') file_item['content_type'] = file_item['content_type'].\ encode('utf-8') return files elif format_type == 'xml': files = [] tree = minidom.parseString(self.conn.response.read()) for x in tree.getElementsByTagName('object'): file_item = {} for key in ['name', 'hash', 'bytes', 'content_type', 'last_modified']: file_item[key] = x.getElementsByTagName(key)[0].\ childNodes[0].nodeValue files.append(file_item) for file_item in files: file_item['name'] = file_item['name'].encode('utf-8') file_item['content_type'] = file_item['content_type'].\ encode('utf-8') return files else: content = self.conn.response.read() if content: lines = content.split('\n') if lines and not lines[-1]: lines = lines[:-1] return lines else: return [] elif status == 204: return [] raise ResponseError(self.conn.response, 'GET', self.conn.make_path(self.path)) def info(self, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} self.conn.make_request('HEAD', self.path, hdrs=hdrs, parms=parms, cfg=cfg) if self.conn.response.status == 204: required_fields = [['bytes_used', 'x-container-bytes-used'], ['object_count', 'x-container-object-count']] optional_fields = [ ['versions', 'x-versions-location'], ['tempurl_key', 'x-container-meta-temp-url-key'], ['tempurl_key2', 'x-container-meta-temp-url-key-2']] return self.header_fields(required_fields, optional_fields) raise ResponseError(self.conn.response, 'HEAD', self.conn.make_path(self.path)) @property def path(self): return [self.name] class File(Base): def __init__(self, conn, account, container, name): self.conn = conn self.account = str(account) self.container = str(container) self.name = str(name) 
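        # Tracks whether a chunked PUT opened by chunked_write() is still in
        # progress.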
self.chunked_write_in_progress = False self.content_type = None self.content_range = None self.size = None self.metadata = {} def make_headers(self, cfg=None): if cfg is None: cfg = {} headers = {} if not cfg.get('no_content_length'): if cfg.get('set_content_length'): headers['Content-Length'] = cfg.get('set_content_length') elif self.size: headers['Content-Length'] = self.size else: headers['Content-Length'] = 0 if cfg.get('use_token'): headers['X-Auth-Token'] = cfg.get('use_token') if cfg.get('no_content_type'): pass elif self.content_type: headers['Content-Type'] = self.content_type else: headers['Content-Type'] = 'application/octet-stream' for key in self.metadata: headers['X-Object-Meta-' + key] = self.metadata[key] return headers @classmethod def compute_md5sum(cls, data): block_size = 4096 if isinstance(data, str): data = six.StringIO(data) checksum = hashlib.md5() buff = data.read(block_size) while buff: checksum.update(buff) buff = data.read(block_size) data.seek(0) return checksum.hexdigest() def copy(self, dest_cont, dest_file, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} if 'destination' in cfg: headers = {'Destination': cfg['destination']} elif cfg.get('no_destination'): headers = {} else: headers = {'Destination': '%s/%s' % (dest_cont, dest_file)} headers.update(hdrs) if 'Destination' in headers: headers['Destination'] = urllib.parse.quote(headers['Destination']) return self.conn.make_request('COPY', self.path, hdrs=headers, parms=parms) == 201 def copy_account(self, dest_account, dest_cont, dest_file, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} if 'destination' in cfg: headers = {'Destination': cfg['destination']} elif cfg.get('no_destination'): headers = {} else: headers = {'Destination-Account': dest_account, 'Destination': '%s/%s' % (dest_cont, dest_file)} headers.update(hdrs) if 'Destination-Account' in headers: headers['Destination-Account'] = \ urllib.parse.quote(headers['Destination-Account']) if 'Destination' in headers: headers['Destination'] = urllib.parse.quote(headers['Destination']) return self.conn.make_request('COPY', self.path, hdrs=headers, parms=parms) == 201 def delete(self, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if self.conn.make_request('DELETE', self.path, hdrs=hdrs, cfg=cfg, parms=parms) != 204: raise ResponseError(self.conn.response, 'DELETE', self.conn.make_path(self.path)) return True def info(self, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} if self.conn.make_request('HEAD', self.path, hdrs=hdrs, parms=parms, cfg=cfg) != 200: raise ResponseError(self.conn.response, 'HEAD', self.conn.make_path(self.path)) fields = [['content_length', 'content-length'], ['content_type', 'content-type'], ['last_modified', 'last-modified'], ['etag', 'etag']] optional_fields = [['x_object_manifest', 'x-object-manifest']] header_fields = self.header_fields(fields, optional_fields=optional_fields) header_fields['etag'] = header_fields['etag'].strip('"') return header_fields def initialize(self, hdrs=None, parms=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if not self.name: return False status = self.conn.make_request('HEAD', self.path, hdrs=hdrs, parms=parms) if status == 404: return False elif (status < 200) or (status > 299): raise ResponseError(self.conn.response, 'HEAD', 
self.conn.make_path(self.path)) for hdr in self.conn.response.getheaders(): if hdr[0].lower() == 'content-type': self.content_type = hdr[1] if hdr[0].lower().startswith('x-object-meta-'): self.metadata[hdr[0][14:]] = hdr[1] if hdr[0].lower() == 'etag': self.etag = hdr[1].strip('"') if hdr[0].lower() == 'content-length': self.size = int(hdr[1]) if hdr[0].lower() == 'last-modified': self.last_modified = hdr[1] return True def load_from_filename(self, filename, callback=None): fobj = open(filename, 'rb') self.write(fobj, callback=callback) fobj.close() @property def path(self): return [self.container, self.name] @classmethod def random_data(cls, size=None): if size is None: size = random.randint(1, 32768) fd = open('/dev/urandom', 'r') data = fd.read(size) fd.close() return data def read(self, size=-1, offset=0, hdrs=None, buffer=None, callback=None, cfg=None, parms=None): if cfg is None: cfg = {} if parms is None: parms = {} if size > 0: range_string = 'bytes=%d-%d' % (offset, (offset + size) - 1) if hdrs: hdrs['Range'] = range_string else: hdrs = {'Range': range_string} status = self.conn.make_request('GET', self.path, hdrs=hdrs, cfg=cfg, parms=parms) if (status < 200) or (status > 299): raise ResponseError(self.conn.response, 'GET', self.conn.make_path(self.path)) for hdr in self.conn.response.getheaders(): if hdr[0].lower() == 'content-type': self.content_type = hdr[1] if hdr[0].lower() == 'content-range': self.content_range = hdr[1] if hasattr(buffer, 'write'): scratch = self.conn.response.read(8192) transferred = 0 while len(scratch) > 0: buffer.write(scratch) transferred += len(scratch) if callable(callback): callback(transferred, self.size) scratch = self.conn.response.read(8192) return None else: return self.conn.response.read() def read_md5(self): status = self.conn.make_request('GET', self.path) if (status < 200) or (status > 299): raise ResponseError(self.conn.response, 'GET', self.conn.make_path(self.path)) checksum = hashlib.md5() scratch = self.conn.response.read(8192) while len(scratch) > 0: checksum.update(scratch) scratch = self.conn.response.read(8192) return checksum.hexdigest() def save_to_filename(self, filename, callback=None): try: fobj = open(filename, 'wb') self.read(buffer=fobj, callback=callback) finally: fobj.close() def sync_metadata(self, metadata=None, cfg=None, parms=None): if metadata is None: metadata = {} if cfg is None: cfg = {} self.metadata.update(metadata) if self.metadata: headers = self.make_headers(cfg=cfg) if not cfg.get('no_content_length'): if cfg.get('set_content_length'): headers['Content-Length'] = \ cfg.get('set_content_length') else: headers['Content-Length'] = 0 self.conn.make_request('POST', self.path, hdrs=headers, parms=parms, cfg=cfg) if self.conn.response.status not in (201, 202): raise ResponseError(self.conn.response, 'POST', self.conn.make_path(self.path)) return True def chunked_write(self, data=None, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} if data is not None and self.chunked_write_in_progress: self.conn.put_data(data, True) elif data is not None: self.chunked_write_in_progress = True headers = self.make_headers(cfg=cfg) headers.update(hdrs) self.conn.put_start(self.path, hdrs=headers, parms=parms, cfg=cfg, chunked=True) self.conn.put_data(data, True) elif self.chunked_write_in_progress: self.chunked_write_in_progress = False return self.conn.put_end(True) == 201 else: raise RuntimeError def write(self, data='', hdrs=None, parms=None, callback=None, 
cfg=None, return_resp=False): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} block_size = 2 ** 20 if isinstance(data, file): try: data.flush() data.seek(0) except IOError: pass self.size = int(os.fstat(data.fileno())[6]) else: data = six.StringIO(data) self.size = data.len headers = self.make_headers(cfg=cfg) headers.update(hdrs) self.conn.put_start(self.path, hdrs=headers, parms=parms, cfg=cfg) transferred = 0 buff = data.read(block_size) buff_len = len(buff) try: while buff_len > 0: self.conn.put_data(buff) transferred += buff_len if callable(callback): callback(transferred, self.size) buff = data.read(block_size) buff_len = len(buff) self.conn.put_end() except socket.timeout as err: raise err if (self.conn.response.status < 200) or \ (self.conn.response.status > 299): raise ResponseError(self.conn.response, 'PUT', self.conn.make_path(self.path)) try: data.seek(0) except IOError: pass self.md5 = self.compute_md5sum(data) if return_resp: return self.conn.response return True def write_random(self, size=None, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} data = self.random_data(size) if not self.write(data, hdrs=hdrs, parms=parms, cfg=cfg): raise ResponseError(self.conn.response, 'PUT', self.conn.make_path(self.path)) self.md5 = self.compute_md5sum(six.StringIO(data)) return data def write_random_return_resp(self, size=None, hdrs=None, parms=None, cfg=None): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} data = self.random_data(size) resp = self.write(data, hdrs=hdrs, parms=parms, cfg=cfg, return_resp=True) if not resp: raise ResponseError(self.conn.response) self.md5 = self.compute_md5sum(six.StringIO(data)) return resp def post(self, hdrs=None, parms=None, cfg=None, return_resp=False): if hdrs is None: hdrs = {} if parms is None: parms = {} if cfg is None: cfg = {} headers = self.make_headers(cfg=cfg) headers.update(hdrs) self.conn.make_request('POST', self.path, hdrs=headers, parms=parms, cfg=cfg) if self.conn.response.status not in (201, 202): raise ResponseError(self.conn.response, 'POST', self.conn.make_path(self.path)) if return_resp: return self.conn.response return True swift-2.7.1/test/functional/tests.py0000664000567000056710000055217313024044354020667 0ustar jenkinsjenkins00000000000000#!/usr/bin/python -u # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
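# Functional API tests. setUpModule()/tearDownModule() below delegate to
# test.functional.setup_package()/teardown_package(), which prepare the
# shared connection/configuration these tests run against.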
from datetime import datetime import email.parser import hashlib import hmac import itertools import json import locale import random import six from six.moves import urllib import time import unittest2 import uuid from copy import deepcopy import eventlet from unittest2 import SkipTest from swift.common.http import is_success, is_client_error from test.functional import normalized_urls, load_constraint, cluster_info from test.functional import check_response, retry, requires_acls import test.functional as tf from test.functional.swift_test_client import Account, Connection, File, \ ResponseError def setUpModule(): tf.setup_package() def tearDownModule(): tf.teardown_package() class Utils(object): @classmethod def create_ascii_name(cls, length=None): return uuid.uuid4().hex @classmethod def create_utf8_name(cls, length=None): if length is None: length = 15 else: length = int(length) utf8_chars = u'\uF10F\uD20D\uB30B\u9409\u8508\u5605\u3703\u1801'\ u'\u0900\uF110\uD20E\uB30C\u940A\u8509\u5606\u3704'\ u'\u1802\u0901\uF111\uD20F\uB30D\u940B\u850A\u5607'\ u'\u3705\u1803\u0902\uF112\uD210\uB30E\u940C\u850B'\ u'\u5608\u3706\u1804\u0903\u03A9\u2603' return ''.join([random.choice(utf8_chars) for x in range(length)]).encode('utf-8') create_name = create_ascii_name class Base(unittest2.TestCase): def setUp(self): cls = type(self) if not cls.set_up: cls.env.setUp() cls.set_up = True def assert_body(self, body): response_body = self.env.conn.response.read() self.assertTrue(response_body == body, 'Body returned: %s' % (response_body)) def assert_status(self, status_or_statuses): self.assertTrue( self.env.conn.response.status == status_or_statuses or (hasattr(status_or_statuses, '__iter__') and self.env.conn.response.status in status_or_statuses), 'Status returned: %d Expected: %s' % (self.env.conn.response.status, status_or_statuses)) def assert_header(self, header_name, expected_value): try: actual_value = self.env.conn.response.getheader(header_name) except KeyError: self.fail( 'Expected header name %r not found in response.' 
% header_name) self.assertEqual(expected_value, actual_value) class Base2(object): def setUp(self): Utils.create_name = Utils.create_utf8_name super(Base2, self).setUp() def tearDown(self): Utils.create_name = Utils.create_ascii_name class TestAccountEnv(object): @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.containers = [] for i in range(10): cont = cls.account.container(Utils.create_name()) if not cont.create(): raise ResponseError(cls.conn.response) cls.containers.append(cont) class TestAccountDev(Base): env = TestAccountEnv set_up = False class TestAccountDevUTF8(Base2, TestAccountDev): set_up = False class TestAccount(Base): env = TestAccountEnv set_up = False def testNoAuthToken(self): self.assertRaises(ResponseError, self.env.account.info, cfg={'no_auth_token': True}) self.assert_status([401, 412]) self.assertRaises(ResponseError, self.env.account.containers, cfg={'no_auth_token': True}) self.assert_status([401, 412]) def testInvalidUTF8Path(self): invalid_utf8 = Utils.create_utf8_name()[::-1] container = self.env.account.container(invalid_utf8) self.assertFalse(container.create(cfg={'no_path_quote': True})) self.assert_status(412) self.assert_body('Invalid UTF8 or contains NULL') def testVersionOnlyPath(self): self.env.account.conn.make_request('PUT', cfg={'version_only_path': True}) self.assert_status(412) self.assert_body('Bad URL') def testInvalidPath(self): was_url = self.env.account.conn.storage_url if (normalized_urls): self.env.account.conn.storage_url = '/' else: self.env.account.conn.storage_url = "/%s" % was_url self.env.account.conn.make_request('GET') try: self.assert_status(404) finally: self.env.account.conn.storage_url = was_url def testPUT(self): self.env.account.conn.make_request('PUT') self.assert_status([403, 405]) def testAccountHead(self): try_count = 0 while try_count < 5: try_count += 1 info = self.env.account.info() for field in ['object_count', 'container_count', 'bytes_used']: self.assertTrue(info[field] >= 0) if info['container_count'] == len(self.env.containers): break if try_count < 5: time.sleep(1) self.assertEqual(info['container_count'], len(self.env.containers)) self.assert_status(204) def testContainerSerializedInfo(self): container_info = {} for container in self.env.containers: info = {'bytes': 0} info['count'] = random.randint(10, 30) for i in range(info['count']): file_item = container.file(Utils.create_name()) bytes = random.randint(1, 32768) file_item.write_random(bytes) info['bytes'] += bytes container_info[container.name] = info for format_type in ['json', 'xml']: for a in self.env.account.containers( parms={'format': format_type}): self.assertTrue(a['count'] >= 0) self.assertTrue(a['bytes'] >= 0) headers = dict(self.env.conn.response.getheaders()) if format_type == 'json': self.assertEqual(headers['content-type'], 'application/json; charset=utf-8') elif format_type == 'xml': self.assertEqual(headers['content-type'], 'application/xml; charset=utf-8') def testListingLimit(self): limit = load_constraint('account_listing_limit') for l in (1, 100, limit / 2, limit - 1, limit, limit + 1, limit * 2): p = {'limit': l} if l <= limit: self.assertTrue(len(self.env.account.containers(parms=p)) <= l) self.assert_status(200) else: self.assertRaises(ResponseError, self.env.account.containers, parms=p) self.assert_status(412) def testContainerListing(self): a = sorted([c.name for c in 
self.env.containers]) for format_type in [None, 'json', 'xml']: b = self.env.account.containers(parms={'format': format_type}) if isinstance(b[0], dict): b = [x['name'] for x in b] self.assertEqual(a, b) def testListDelimiter(self): delimiter = '-' containers = ['test', delimiter.join(['test', 'bar']), delimiter.join(['test', 'foo'])] for c in containers: cont = self.env.account.container(c) self.assertTrue(cont.create()) results = self.env.account.containers(parms={'delimiter': delimiter}) expected = ['test', 'test-'] results = [r for r in results if r in expected] self.assertEqual(expected, results) results = self.env.account.containers(parms={'delimiter': delimiter, 'reverse': 'yes'}) expected.reverse() results = [r for r in results if r in expected] self.assertEqual(expected, results) def testListDelimiterAndPrefix(self): delimiter = 'a' containers = ['bar', 'bazar'] for c in containers: cont = self.env.account.container(c) self.assertTrue(cont.create()) results = self.env.account.containers(parms={'delimiter': delimiter, 'prefix': 'ba'}) expected = ['bar', 'baza'] results = [r for r in results if r in expected] self.assertEqual(expected, results) results = self.env.account.containers(parms={'delimiter': delimiter, 'prefix': 'ba', 'reverse': 'yes'}) expected.reverse() results = [r for r in results if r in expected] self.assertEqual(expected, results) def testInvalidAuthToken(self): hdrs = {'X-Auth-Token': 'bogus_auth_token'} self.assertRaises(ResponseError, self.env.account.info, hdrs=hdrs) self.assert_status(401) def testLastContainerMarker(self): for format_type in [None, 'json', 'xml']: containers = self.env.account.containers({'format': format_type}) self.assertEqual(len(containers), len(self.env.containers)) self.assert_status(200) containers = self.env.account.containers( parms={'format': format_type, 'marker': containers[-1]}) self.assertEqual(len(containers), 0) if format_type is None: self.assert_status(204) else: self.assert_status(200) def testMarkerLimitContainerList(self): for format_type in [None, 'json', 'xml']: for marker in ['0', 'A', 'I', 'R', 'Z', 'a', 'i', 'r', 'z', 'abc123', 'mnop', 'xyz']: limit = random.randint(2, 9) containers = self.env.account.containers( parms={'format': format_type, 'marker': marker, 'limit': limit}) self.assertTrue(len(containers) <= limit) if containers: if isinstance(containers[0], dict): containers = [x['name'] for x in containers] self.assertTrue(locale.strcoll(containers[0], marker) > 0) def testContainersOrderedByName(self): for format_type in [None, 'json', 'xml']: containers = self.env.account.containers( parms={'format': format_type}) if isinstance(containers[0], dict): containers = [x['name'] for x in containers] self.assertEqual(sorted(containers, cmp=locale.strcoll), containers) def testQuotedWWWAuthenticateHeader(self): # check that the www-authenticate header value with the swift realm # is correctly quoted. conn = Connection(tf.config) conn.authenticate() inserted_html = 'Hello World' hax = 'AUTH_haxx"\nContent-Length: %d\n\n%s' % (len(inserted_html), inserted_html) quoted_hax = urllib.parse.quote(hax) conn.connection.request('GET', '/v1/' + quoted_hax, None, {}) resp = conn.connection.getresponse() resp_headers = dict(resp.getheaders()) self.assertIn('www-authenticate', resp_headers) actual = resp_headers['www-authenticate'] expected = 'Swift realm="%s"' % quoted_hax # other middleware e.g. auth_token may also set www-authenticate # headers in which case actual values will be a comma separated list. 
# check that expected value is among the actual values self.assertIn(expected, actual) class TestAccountUTF8(Base2, TestAccount): set_up = False class TestAccountNoContainersEnv(object): @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() class TestAccountNoContainers(Base): env = TestAccountNoContainersEnv set_up = False def testGetRequest(self): for format_type in [None, 'json', 'xml']: self.assertFalse(self.env.account.containers( parms={'format': format_type})) if format_type is None: self.assert_status(204) else: self.assert_status(200) class TestAccountNoContainersUTF8(Base2, TestAccountNoContainers): set_up = False class TestAccountSortingEnv(object): @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() postfix = Utils.create_name() cls.cont_items = ('a1', 'a2', 'A3', 'b1', 'B2', 'a10', 'b10', 'zz') cls.cont_items = ['%s%s' % (x, postfix) for x in cls.cont_items] for container in cls.cont_items: c = cls.account.container(container) if not c.create(): raise ResponseError(cls.conn.response) class TestAccountSorting(Base): env = TestAccountSortingEnv set_up = False def testAccountContainerListSorting(self): # name (byte order) sorting. cont_list = sorted(self.env.cont_items) for reverse in ('false', 'no', 'off', '', 'garbage'): cont_listing = self.env.account.containers( parms={'reverse': reverse}) self.assert_status(200) self.assertEqual(cont_list, cont_listing, 'Expected %s but got %s with reverse param %r' % (cont_list, cont_listing, reverse)) def testAccountContainerListSortingReverse(self): # name (byte order) sorting. 
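        # Every truthy spelling of the 'reverse' parameter tried below should
        # return the listing in reverse byte order.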
cont_list = sorted(self.env.cont_items) cont_list.reverse() for reverse in ('true', '1', 'yes', 'on', 't', 'y'): cont_listing = self.env.account.containers( parms={'reverse': reverse}) self.assert_status(200) self.assertEqual(cont_list, cont_listing, 'Expected %s but got %s with reverse param %r' % (cont_list, cont_listing, reverse)) def testAccountContainerListSortingByPrefix(self): cont_list = sorted(c for c in self.env.cont_items if c.startswith('a')) cont_list.reverse() cont_listing = self.env.account.containers(parms={ 'reverse': 'on', 'prefix': 'a'}) self.assert_status(200) self.assertEqual(cont_list, cont_listing) def testAccountContainerListSortingByMarkersExclusive(self): first_item = self.env.cont_items[3] # 'b1' + postfix last_item = self.env.cont_items[4] # 'B2' + postfix cont_list = sorted(c for c in self.env.cont_items if last_item < c < first_item) cont_list.reverse() cont_listing = self.env.account.containers(parms={ 'reverse': 'on', 'marker': first_item, 'end_marker': last_item}) self.assert_status(200) self.assertEqual(cont_list, cont_listing) def testAccountContainerListSortingByMarkersInclusive(self): first_item = self.env.cont_items[3] # 'b1' + postfix last_item = self.env.cont_items[4] # 'B2' + postfix cont_list = sorted(c for c in self.env.cont_items if last_item <= c <= first_item) cont_list.reverse() cont_listing = self.env.account.containers(parms={ 'reverse': 'on', 'marker': first_item + '\x00', 'end_marker': last_item[:-1] + chr(ord(last_item[-1]) - 1)}) self.assert_status(200) self.assertEqual(cont_list, cont_listing) def testAccountContainerListSortingByReversedMarkers(self): cont_listing = self.env.account.containers(parms={ 'reverse': 'on', 'marker': 'B', 'end_marker': 'b1'}) self.assert_status(204) self.assertEqual([], cont_listing) class TestContainerEnv(object): @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.container = cls.account.container(Utils.create_name()) if not cls.container.create(): raise ResponseError(cls.conn.response) cls.file_count = 10 cls.file_size = 128 cls.files = list() for x in range(cls.file_count): file_item = cls.container.file(Utils.create_name()) file_item.write_random(cls.file_size) cls.files.append(file_item.name) class TestContainerDev(Base): env = TestContainerEnv set_up = False class TestContainerDevUTF8(Base2, TestContainerDev): set_up = False class TestContainer(Base): env = TestContainerEnv set_up = False def testContainerNameLimit(self): limit = load_constraint('max_container_name_length') for l in (limit - 100, limit - 10, limit - 1, limit, limit + 1, limit + 10, limit + 100): cont = self.env.account.container('a' * l) if l <= limit: self.assertTrue(cont.create()) self.assert_status(201) else: self.assertFalse(cont.create()) self.assert_status(400) def testFileThenContainerDelete(self): cont = self.env.account.container(Utils.create_name()) self.assertTrue(cont.create()) file_item = cont.file(Utils.create_name()) self.assertTrue(file_item.write_random()) self.assertTrue(file_item.delete()) self.assert_status(204) self.assertNotIn(file_item.name, cont.files()) self.assertTrue(cont.delete()) self.assert_status(204) self.assertNotIn(cont.name, self.env.account.containers()) def testFileListingLimitMarkerPrefix(self): cont = self.env.account.container(Utils.create_name()) self.assertTrue(cont.create()) files = sorted([Utils.create_name() for x in range(10)]) for f in 
files: file_item = cont.file(f) self.assertTrue(file_item.write_random()) for i in range(len(files)): f = files[i] for j in range(1, len(files) - i): self.assertTrue( cont.files(parms={'limit': j, 'marker': f}) == files[i + 1: i + j + 1]) self.assertTrue(cont.files(parms={'marker': f}) == files[i + 1:]) self.assertTrue(cont.files(parms={'marker': f, 'prefix': f}) == []) self.assertTrue(cont.files(parms={'prefix': f}) == [f]) def testPrefixAndLimit(self): load_constraint('container_listing_limit') cont = self.env.account.container(Utils.create_name()) self.assertTrue(cont.create()) prefix_file_count = 10 limit_count = 2 prefixs = ['alpha/', 'beta/', 'kappa/'] prefix_files = {} for prefix in prefixs: prefix_files[prefix] = [] for i in range(prefix_file_count): file_item = cont.file(prefix + Utils.create_name()) file_item.write() prefix_files[prefix].append(file_item.name) for format_type in [None, 'json', 'xml']: for prefix in prefixs: files = cont.files(parms={'prefix': prefix}) self.assertEqual(files, sorted(prefix_files[prefix])) for format_type in [None, 'json', 'xml']: for prefix in prefixs: files = cont.files(parms={'limit': limit_count, 'prefix': prefix}) self.assertEqual(len(files), limit_count) for file_item in files: self.assertTrue(file_item.startswith(prefix)) def testListDelimiter(self): cont = self.env.account.container(Utils.create_name()) self.assertTrue(cont.create()) delimiter = '-' files = ['test', delimiter.join(['test', 'bar']), delimiter.join(['test', 'foo'])] for f in files: file_item = cont.file(f) self.assertTrue(file_item.write_random()) results = cont.files() results = cont.files(parms={'delimiter': delimiter}) self.assertEqual(results, ['test', 'test-']) results = cont.files(parms={'delimiter': delimiter, 'reverse': 'yes'}) self.assertEqual(results, ['test-', 'test']) def testListDelimiterAndPrefix(self): cont = self.env.account.container(Utils.create_name()) self.assertTrue(cont.create()) delimiter = 'a' files = ['bar', 'bazar'] for f in files: file_item = cont.file(f) self.assertTrue(file_item.write_random()) results = cont.files(parms={'delimiter': delimiter, 'prefix': 'ba'}) self.assertEqual(results, ['bar', 'baza']) results = cont.files(parms={'delimiter': delimiter, 'prefix': 'ba', 'reverse': 'yes'}) self.assertEqual(results, ['baza', 'bar']) def testCreate(self): cont = self.env.account.container(Utils.create_name()) self.assertTrue(cont.create()) self.assert_status(201) self.assertIn(cont.name, self.env.account.containers()) def testContainerFileListOnContainerThatDoesNotExist(self): for format_type in [None, 'json', 'xml']: container = self.env.account.container(Utils.create_name()) self.assertRaises(ResponseError, container.files, parms={'format': format_type}) self.assert_status(404) def testUtf8Container(self): valid_utf8 = Utils.create_utf8_name() invalid_utf8 = valid_utf8[::-1] container = self.env.account.container(valid_utf8) self.assertTrue(container.create(cfg={'no_path_quote': True})) self.assertIn(container.name, self.env.account.containers()) self.assertEqual(container.files(), []) self.assertTrue(container.delete()) container = self.env.account.container(invalid_utf8) self.assertFalse(container.create(cfg={'no_path_quote': True})) self.assert_status(412) self.assertRaises(ResponseError, container.files, cfg={'no_path_quote': True}) self.assert_status(412) def testCreateOnExisting(self): cont = self.env.account.container(Utils.create_name()) self.assertTrue(cont.create()) self.assert_status(201) self.assertTrue(cont.create()) 
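        # A PUT to a container that already exists is acknowledged with
        # 202 Accepted rather than 201 Created.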
self.assert_status(202) def testSlashInName(self): if Utils.create_name == Utils.create_utf8_name: cont_name = list(six.text_type(Utils.create_name(), 'utf-8')) else: cont_name = list(Utils.create_name()) cont_name[random.randint(2, len(cont_name) - 2)] = '/' cont_name = ''.join(cont_name) if Utils.create_name == Utils.create_utf8_name: cont_name = cont_name.encode('utf-8') cont = self.env.account.container(cont_name) self.assertFalse(cont.create(cfg={'no_path_quote': True}), 'created container with name %s' % (cont_name)) self.assert_status(404) self.assertNotIn(cont.name, self.env.account.containers()) def testDelete(self): cont = self.env.account.container(Utils.create_name()) self.assertTrue(cont.create()) self.assert_status(201) self.assertTrue(cont.delete()) self.assert_status(204) self.assertNotIn(cont.name, self.env.account.containers()) def testDeleteOnContainerThatDoesNotExist(self): cont = self.env.account.container(Utils.create_name()) self.assertFalse(cont.delete()) self.assert_status(404) def testDeleteOnContainerWithFiles(self): cont = self.env.account.container(Utils.create_name()) self.assertTrue(cont.create()) file_item = cont.file(Utils.create_name()) file_item.write_random(self.env.file_size) self.assertIn(file_item.name, cont.files()) self.assertFalse(cont.delete()) self.assert_status(409) def testFileCreateInContainerThatDoesNotExist(self): file_item = File(self.env.conn, self.env.account, Utils.create_name(), Utils.create_name()) self.assertRaises(ResponseError, file_item.write) self.assert_status(404) def testLastFileMarker(self): for format_type in [None, 'json', 'xml']: files = self.env.container.files({'format': format_type}) self.assertEqual(len(files), len(self.env.files)) self.assert_status(200) files = self.env.container.files( parms={'format': format_type, 'marker': files[-1]}) self.assertEqual(len(files), 0) if format_type is None: self.assert_status(204) else: self.assert_status(200) def testContainerFileList(self): for format_type in [None, 'json', 'xml']: files = self.env.container.files(parms={'format': format_type}) self.assert_status(200) if isinstance(files[0], dict): files = [x['name'] for x in files] for file_item in self.env.files: self.assertIn(file_item, files) for file_item in files: self.assertIn(file_item, self.env.files) def testMarkerLimitFileList(self): for format_type in [None, 'json', 'xml']: for marker in ['0', 'A', 'I', 'R', 'Z', 'a', 'i', 'r', 'z', 'abc123', 'mnop', 'xyz']: limit = random.randint(2, self.env.file_count - 1) files = self.env.container.files(parms={'format': format_type, 'marker': marker, 'limit': limit}) if not files: continue if isinstance(files[0], dict): files = [x['name'] for x in files] self.assertTrue(len(files) <= limit) if files: if isinstance(files[0], dict): files = [x['name'] for x in files] self.assertTrue(locale.strcoll(files[0], marker) > 0) def testFileOrder(self): for format_type in [None, 'json', 'xml']: files = self.env.container.files(parms={'format': format_type}) if isinstance(files[0], dict): files = [x['name'] for x in files] self.assertEqual(sorted(files, cmp=locale.strcoll), files) def testContainerInfo(self): info = self.env.container.info() self.assert_status(204) self.assertEqual(info['object_count'], self.env.file_count) self.assertEqual(info['bytes_used'], self.env.file_count * self.env.file_size) def testContainerInfoOnContainerThatDoesNotExist(self): container = self.env.account.container(Utils.create_name()) self.assertRaises(ResponseError, container.info) self.assert_status(404) def 
testContainerFileListWithLimit(self): for format_type in [None, 'json', 'xml']: files = self.env.container.files(parms={'format': format_type, 'limit': 2}) self.assertEqual(len(files), 2) def testTooLongName(self): cont = self.env.account.container('x' * 257) self.assertFalse(cont.create(), 'created container with name %s' % (cont.name)) self.assert_status(400) def testContainerExistenceCachingProblem(self): cont = self.env.account.container(Utils.create_name()) self.assertRaises(ResponseError, cont.files) self.assertTrue(cont.create()) cont.files() cont = self.env.account.container(Utils.create_name()) self.assertRaises(ResponseError, cont.files) self.assertTrue(cont.create()) file_item = cont.file(Utils.create_name()) file_item.write_random() class TestContainerUTF8(Base2, TestContainer): set_up = False class TestContainerSortingEnv(object): @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.container = cls.account.container(Utils.create_name()) if not cls.container.create(): raise ResponseError(cls.conn.response) cls.file_items = ('a1', 'a2', 'A3', 'b1', 'B2', 'a10', 'b10', 'zz') cls.files = list() cls.file_size = 128 for name in cls.file_items: file_item = cls.container.file(name) file_item.write_random(cls.file_size) cls.files.append(file_item.name) class TestContainerSorting(Base): env = TestContainerSortingEnv set_up = False def testContainerFileListSortingReversed(self): file_list = list(sorted(self.env.file_items)) file_list.reverse() for reverse in ('true', '1', 'yes', 'on', 't', 'y'): cont_files = self.env.container.files(parms={'reverse': reverse}) self.assert_status(200) self.assertEqual(file_list, cont_files, 'Expected %s but got %s with reverse param %r' % (file_list, cont_files, reverse)) def testContainerFileSortingByPrefixReversed(self): cont_list = sorted(c for c in self.env.file_items if c.startswith('a')) cont_list.reverse() cont_listing = self.env.container.files(parms={ 'reverse': 'on', 'prefix': 'a'}) self.assert_status(200) self.assertEqual(cont_list, cont_listing) def testContainerFileSortingByMarkersExclusiveReversed(self): first_item = self.env.file_items[3] # 'b1' + postfix last_item = self.env.file_items[4] # 'B2' + postfix cont_list = sorted(c for c in self.env.file_items if last_item < c < first_item) cont_list.reverse() cont_listing = self.env.container.files(parms={ 'reverse': 'on', 'marker': first_item, 'end_marker': last_item}) self.assert_status(200) self.assertEqual(cont_list, cont_listing) def testContainerFileSortingByMarkersInclusiveReversed(self): first_item = self.env.file_items[3] # 'b1' + postfix last_item = self.env.file_items[4] # 'B2' + postfix cont_list = sorted(c for c in self.env.file_items if last_item <= c <= first_item) cont_list.reverse() cont_listing = self.env.container.files(parms={ 'reverse': 'on', 'marker': first_item + '\x00', 'end_marker': last_item[:-1] + chr(ord(last_item[-1]) - 1)}) self.assert_status(200) self.assertEqual(cont_list, cont_listing) def testContainerFileSortingByReversedMarkersReversed(self): cont_listing = self.env.container.files(parms={ 'reverse': 'on', 'marker': 'B', 'end_marker': 'b1'}) self.assert_status(204) self.assertEqual([], cont_listing) def testContainerFileListSorting(self): file_list = list(sorted(self.env.file_items)) cont_files = self.env.container.files() self.assert_status(200) self.assertEqual(file_list, cont_files) # Lets try again but with 
reverse is specifically turned off cont_files = self.env.container.files(parms={'reverse': 'off'}) self.assert_status(200) self.assertEqual(file_list, cont_files) cont_files = self.env.container.files(parms={'reverse': 'false'}) self.assert_status(200) self.assertEqual(file_list, cont_files) cont_files = self.env.container.files(parms={'reverse': 'no'}) self.assert_status(200) self.assertEqual(file_list, cont_files) cont_files = self.env.container.files(parms={'reverse': ''}) self.assert_status(200) self.assertEqual(file_list, cont_files) # Lets try again but with a incorrect reverse values cont_files = self.env.container.files(parms={'reverse': 'foo'}) self.assert_status(200) self.assertEqual(file_list, cont_files) cont_files = self.env.container.files(parms={'reverse': 'hai'}) self.assert_status(200) self.assertEqual(file_list, cont_files) cont_files = self.env.container.files(parms={'reverse': 'o=[]::::>'}) self.assert_status(200) self.assertEqual(file_list, cont_files) class TestContainerPathsEnv(object): @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.file_size = 8 cls.container = cls.account.container(Utils.create_name()) if not cls.container.create(): raise ResponseError(cls.conn.response) cls.files = [ '/file1', '/file A', '/dir1/', '/dir2/', '/dir1/file2', '/dir1/subdir1/', '/dir1/subdir2/', '/dir1/subdir1/file2', '/dir1/subdir1/file3', '/dir1/subdir1/file4', '/dir1/subdir1/subsubdir1/', '/dir1/subdir1/subsubdir1/file5', '/dir1/subdir1/subsubdir1/file6', '/dir1/subdir1/subsubdir1/file7', '/dir1/subdir1/subsubdir1/file8', '/dir1/subdir1/subsubdir2/', '/dir1/subdir1/subsubdir2/file9', '/dir1/subdir1/subsubdir2/file0', 'file1', 'dir1/', 'dir2/', 'dir1/file2', 'dir1/subdir1/', 'dir1/subdir2/', 'dir1/subdir1/file2', 'dir1/subdir1/file3', 'dir1/subdir1/file4', 'dir1/subdir1/subsubdir1/', 'dir1/subdir1/subsubdir1/file5', 'dir1/subdir1/subsubdir1/file6', 'dir1/subdir1/subsubdir1/file7', 'dir1/subdir1/subsubdir1/file8', 'dir1/subdir1/subsubdir2/', 'dir1/subdir1/subsubdir2/file9', 'dir1/subdir1/subsubdir2/file0', 'dir1/subdir with spaces/', 'dir1/subdir with spaces/file B', 'dir1/subdir+with{whatever/', 'dir1/subdir+with{whatever/file D', ] stored_files = set() for f in cls.files: file_item = cls.container.file(f) if f.endswith('/'): file_item.write(hdrs={'Content-Type': 'application/directory'}) else: file_item.write_random(cls.file_size, hdrs={'Content-Type': 'application/directory'}) if (normalized_urls): nfile = '/'.join(filter(None, f.split('/'))) if (f[-1] == '/'): nfile += '/' stored_files.add(nfile) else: stored_files.add(f) cls.stored_files = sorted(stored_files) class TestContainerPaths(Base): env = TestContainerPathsEnv set_up = False def testTraverseContainer(self): found_files = [] found_dirs = [] def recurse_path(path, count=0): if count > 10: raise ValueError('too deep recursion') for file_item in self.env.container.files(parms={'path': path}): self.assertTrue(file_item.startswith(path)) if file_item.endswith('/'): recurse_path(file_item, count + 1) found_dirs.append(file_item) else: found_files.append(file_item) recurse_path('') for file_item in self.env.stored_files: if file_item.startswith('/'): self.assertNotIn(file_item, found_dirs) self.assertNotIn(file_item, found_files) elif file_item.endswith('/'): self.assertIn(file_item, found_dirs) self.assertNotIn(file_item, found_files) else: self.assertIn(file_item, 
found_files) self.assertNotIn(file_item, found_dirs) found_files = [] found_dirs = [] recurse_path('/') for file_item in self.env.stored_files: if not file_item.startswith('/'): self.assertNotIn(file_item, found_dirs) self.assertNotIn(file_item, found_files) elif file_item.endswith('/'): self.assertIn(file_item, found_dirs) self.assertNotIn(file_item, found_files) else: self.assertIn(file_item, found_files) self.assertNotIn(file_item, found_dirs) def testContainerListing(self): for format_type in (None, 'json', 'xml'): files = self.env.container.files(parms={'format': format_type}) if isinstance(files[0], dict): files = [str(x['name']) for x in files] self.assertEqual(files, self.env.stored_files) for format_type in ('json', 'xml'): for file_item in self.env.container.files(parms={'format': format_type}): self.assertTrue(int(file_item['bytes']) >= 0) self.assertIn('last_modified', file_item) if file_item['name'].endswith('/'): self.assertEqual(file_item['content_type'], 'application/directory') def testStructure(self): def assert_listing(path, file_list): files = self.env.container.files(parms={'path': path}) self.assertEqual(sorted(file_list, cmp=locale.strcoll), files) if not normalized_urls: assert_listing('/', ['/dir1/', '/dir2/', '/file1', '/file A']) assert_listing('/dir1', ['/dir1/file2', '/dir1/subdir1/', '/dir1/subdir2/']) assert_listing('/dir1/', ['/dir1/file2', '/dir1/subdir1/', '/dir1/subdir2/']) assert_listing('/dir1/subdir1', ['/dir1/subdir1/subsubdir2/', '/dir1/subdir1/file2', '/dir1/subdir1/file3', '/dir1/subdir1/file4', '/dir1/subdir1/subsubdir1/']) assert_listing('/dir1/subdir2', []) assert_listing('', ['file1', 'dir1/', 'dir2/']) else: assert_listing('', ['file1', 'dir1/', 'dir2/', 'file A']) assert_listing('dir1', ['dir1/file2', 'dir1/subdir1/', 'dir1/subdir2/', 'dir1/subdir with spaces/', 'dir1/subdir+with{whatever/']) assert_listing('dir1/subdir1', ['dir1/subdir1/file4', 'dir1/subdir1/subsubdir2/', 'dir1/subdir1/file2', 'dir1/subdir1/file3', 'dir1/subdir1/subsubdir1/']) assert_listing('dir1/subdir1/subsubdir1', ['dir1/subdir1/subsubdir1/file7', 'dir1/subdir1/subsubdir1/file5', 'dir1/subdir1/subsubdir1/file8', 'dir1/subdir1/subsubdir1/file6']) assert_listing('dir1/subdir1/subsubdir1/', ['dir1/subdir1/subsubdir1/file7', 'dir1/subdir1/subsubdir1/file5', 'dir1/subdir1/subsubdir1/file8', 'dir1/subdir1/subsubdir1/file6']) assert_listing('dir1/subdir with spaces/', ['dir1/subdir with spaces/file B']) class TestFileEnv(object): @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) # creating another account and connection # for account to account copy tests config2 = deepcopy(tf.config) config2['account'] = tf.config['account2'] config2['username'] = tf.config['username2'] config2['password'] = tf.config['password2'] cls.conn2 = Connection(config2) cls.conn2.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.account2 = cls.conn2.get_account() cls.account2.delete_containers() cls.container = cls.account.container(Utils.create_name()) if not cls.container.create(): raise ResponseError(cls.conn.response) cls.file_size = 128 # With keystoneauth we need the accounts to have had the project # domain id persisted as sysmeta prior to testing ACLs. 
This may # not be the case if, for example, the account was created using # a request with reseller_admin role, when project domain id may # not have been known. So we ensure that the project domain id is # in sysmeta by making a POST to the accounts using an admin role. cls.account.update_metadata() cls.account2.update_metadata() class TestFileDev(Base): env = TestFileEnv set_up = False class TestFileDevUTF8(Base2, TestFileDev): set_up = False class TestFile(Base): env = TestFileEnv set_up = False def testCopy(self): # makes sure to test encoded characters source_filename = 'dealde%2Fl04 011e%204c8df/flash.png' file_item = self.env.container.file(source_filename) metadata = {} for i in range(1): metadata[Utils.create_ascii_name()] = Utils.create_name() data = file_item.write_random() file_item.sync_metadata(metadata) dest_cont = self.env.account.container(Utils.create_name()) self.assertTrue(dest_cont.create()) # copy both from within and across containers for cont in (self.env.container, dest_cont): # copy both with and without initial slash for prefix in ('', '/'): dest_filename = Utils.create_name() file_item = self.env.container.file(source_filename) file_item.copy('%s%s' % (prefix, cont), dest_filename) self.assertIn(dest_filename, cont.files()) file_item = cont.file(dest_filename) self.assertTrue(data == file_item.read()) self.assertTrue(file_item.initialize()) self.assertTrue(metadata == file_item.metadata) def testCopyAccount(self): # makes sure to test encoded characters source_filename = 'dealde%2Fl04 011e%204c8df/flash.png' file_item = self.env.container.file(source_filename) metadata = {Utils.create_ascii_name(): Utils.create_name()} data = file_item.write_random() file_item.sync_metadata(metadata) dest_cont = self.env.account.container(Utils.create_name()) self.assertTrue(dest_cont.create()) acct = self.env.conn.account_name # copy both from within and across containers for cont in (self.env.container, dest_cont): # copy both with and without initial slash for prefix in ('', '/'): dest_filename = Utils.create_name() file_item = self.env.container.file(source_filename) file_item.copy_account(acct, '%s%s' % (prefix, cont), dest_filename) self.assertIn(dest_filename, cont.files()) file_item = cont.file(dest_filename) self.assertTrue(data == file_item.read()) self.assertTrue(file_item.initialize()) self.assertTrue(metadata == file_item.metadata) dest_cont = self.env.account2.container(Utils.create_name()) self.assertTrue(dest_cont.create(hdrs={ 'X-Container-Write': self.env.conn.user_acl })) acct = self.env.conn2.account_name # copy both with and without initial slash for prefix in ('', '/'): dest_filename = Utils.create_name() file_item = self.env.container.file(source_filename) file_item.copy_account(acct, '%s%s' % (prefix, dest_cont), dest_filename) self.assertIn(dest_filename, dest_cont.files()) file_item = dest_cont.file(dest_filename) self.assertTrue(data == file_item.read()) self.assertTrue(file_item.initialize()) self.assertTrue(metadata == file_item.metadata) def testCopy404s(self): source_filename = Utils.create_name() file_item = self.env.container.file(source_filename) file_item.write_random() dest_cont = self.env.account.container(Utils.create_name()) self.assertTrue(dest_cont.create()) for prefix in ('', '/'): # invalid source container source_cont = self.env.account.container(Utils.create_name()) file_item = source_cont.file(source_filename) self.assertFalse(file_item.copy( '%s%s' % (prefix, self.env.container), Utils.create_name())) self.assert_status(404) 
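            # copying out of the missing source container into the existing
            # destination container should fail the same way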
self.assertFalse(file_item.copy('%s%s' % (prefix, dest_cont), Utils.create_name())) self.assert_status(404) # invalid source object file_item = self.env.container.file(Utils.create_name()) self.assertFalse(file_item.copy( '%s%s' % (prefix, self.env.container), Utils.create_name())) self.assert_status(404) self.assertFalse(file_item.copy('%s%s' % (prefix, dest_cont), Utils.create_name())) self.assert_status(404) # invalid destination container file_item = self.env.container.file(source_filename) self.assertTrue( not file_item.copy( '%s%s' % (prefix, Utils.create_name()), Utils.create_name())) def testCopyAccount404s(self): acct = self.env.conn.account_name acct2 = self.env.conn2.account_name source_filename = Utils.create_name() file_item = self.env.container.file(source_filename) file_item.write_random() dest_cont = self.env.account.container(Utils.create_name()) self.assertTrue(dest_cont.create(hdrs={ 'X-Container-Read': self.env.conn2.user_acl })) dest_cont2 = self.env.account2.container(Utils.create_name()) self.assertTrue(dest_cont2.create(hdrs={ 'X-Container-Write': self.env.conn.user_acl, 'X-Container-Read': self.env.conn.user_acl })) for acct, cont in ((acct, dest_cont), (acct2, dest_cont2)): for prefix in ('', '/'): # invalid source container source_cont = self.env.account.container(Utils.create_name()) file_item = source_cont.file(source_filename) self.assertFalse(file_item.copy_account( acct, '%s%s' % (prefix, self.env.container), Utils.create_name())) if acct == acct2: # there is no such source container # and foreign user can have no permission to read it self.assert_status(403) else: self.assert_status(404) self.assertFalse(file_item.copy_account( acct, '%s%s' % (prefix, cont), Utils.create_name())) self.assert_status(404) # invalid source object file_item = self.env.container.file(Utils.create_name()) self.assertFalse(file_item.copy_account( acct, '%s%s' % (prefix, self.env.container), Utils.create_name())) if acct == acct2: # there is no such object # and foreign user can have no permission to read it self.assert_status(403) else: self.assert_status(404) self.assertFalse(file_item.copy_account( acct, '%s%s' % (prefix, cont), Utils.create_name())) self.assert_status(404) # invalid destination container file_item = self.env.container.file(source_filename) self.assertFalse(file_item.copy_account( acct, '%s%s' % (prefix, Utils.create_name()), Utils.create_name())) if acct == acct2: # there is no such destination container # and foreign user can have no permission to write there self.assert_status(403) else: self.assert_status(404) def testCopyNoDestinationHeader(self): source_filename = Utils.create_name() file_item = self.env.container.file(source_filename) file_item.write_random() file_item = self.env.container.file(source_filename) self.assertFalse(file_item.copy(Utils.create_name(), Utils.create_name(), cfg={'no_destination': True})) self.assert_status(412) def testCopyDestinationSlashProblems(self): source_filename = Utils.create_name() file_item = self.env.container.file(source_filename) file_item.write_random() # no slash self.assertFalse(file_item.copy(Utils.create_name(), Utils.create_name(), cfg={'destination': Utils.create_name()})) self.assert_status(412) def testCopyFromHeader(self): source_filename = Utils.create_name() file_item = self.env.container.file(source_filename) metadata = {} for i in range(1): metadata[Utils.create_ascii_name()] = Utils.create_name() file_item.metadata = metadata data = file_item.write_random() dest_cont = 
self.env.account.container(Utils.create_name()) self.assertTrue(dest_cont.create()) # copy both from within and across containers for cont in (self.env.container, dest_cont): # copy both with and without initial slash for prefix in ('', '/'): dest_filename = Utils.create_name() file_item = cont.file(dest_filename) file_item.write(hdrs={'X-Copy-From': '%s%s/%s' % ( prefix, self.env.container.name, source_filename)}) self.assertIn(dest_filename, cont.files()) file_item = cont.file(dest_filename) self.assertTrue(data == file_item.read()) self.assertTrue(file_item.initialize()) self.assertTrue(metadata == file_item.metadata) def testCopyFromAccountHeader(self): acct = self.env.conn.account_name src_cont = self.env.account.container(Utils.create_name()) self.assertTrue(src_cont.create(hdrs={ 'X-Container-Read': self.env.conn2.user_acl })) source_filename = Utils.create_name() file_item = src_cont.file(source_filename) metadata = {} for i in range(1): metadata[Utils.create_ascii_name()] = Utils.create_name() file_item.metadata = metadata data = file_item.write_random() dest_cont = self.env.account.container(Utils.create_name()) self.assertTrue(dest_cont.create()) dest_cont2 = self.env.account2.container(Utils.create_name()) self.assertTrue(dest_cont2.create(hdrs={ 'X-Container-Write': self.env.conn.user_acl })) for cont in (src_cont, dest_cont, dest_cont2): # copy both with and without initial slash for prefix in ('', '/'): dest_filename = Utils.create_name() file_item = cont.file(dest_filename) file_item.write(hdrs={'X-Copy-From-Account': acct, 'X-Copy-From': '%s%s/%s' % ( prefix, src_cont.name, source_filename)}) self.assertIn(dest_filename, cont.files()) file_item = cont.file(dest_filename) self.assertTrue(data == file_item.read()) self.assertTrue(file_item.initialize()) self.assertTrue(metadata == file_item.metadata) def testCopyFromHeader404s(self): source_filename = Utils.create_name() file_item = self.env.container.file(source_filename) file_item.write_random() for prefix in ('', '/'): # invalid source container file_item = self.env.container.file(Utils.create_name()) copy_from = ('%s%s/%s' % (prefix, Utils.create_name(), source_filename)) self.assertRaises(ResponseError, file_item.write, hdrs={'X-Copy-From': copy_from}) self.assert_status(404) # invalid source object copy_from = ('%s%s/%s' % (prefix, self.env.container.name, Utils.create_name())) file_item = self.env.container.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.write, hdrs={'X-Copy-From': copy_from}) self.assert_status(404) # invalid destination container dest_cont = self.env.account.container(Utils.create_name()) file_item = dest_cont.file(Utils.create_name()) copy_from = ('%s%s/%s' % (prefix, self.env.container.name, source_filename)) self.assertRaises(ResponseError, file_item.write, hdrs={'X-Copy-From': copy_from}) self.assert_status(404) def testCopyFromAccountHeader404s(self): acct = self.env.conn2.account_name src_cont = self.env.account2.container(Utils.create_name()) self.assertTrue(src_cont.create(hdrs={ 'X-Container-Read': self.env.conn.user_acl })) source_filename = Utils.create_name() file_item = src_cont.file(source_filename) file_item.write_random() dest_cont = self.env.account.container(Utils.create_name()) self.assertTrue(dest_cont.create()) for prefix in ('', '/'): # invalid source container file_item = dest_cont.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.write, hdrs={'X-Copy-From-Account': acct, 'X-Copy-From': '%s%s/%s' % (prefix, Utils.create_name(), 
source_filename)}) # looks like cached responses leak "not found" # to un-authorized users, not going to fix it now, but... self.assert_status([403, 404]) # invalid source object file_item = self.env.container.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.write, hdrs={'X-Copy-From-Account': acct, 'X-Copy-From': '%s%s/%s' % (prefix, src_cont, Utils.create_name())}) self.assert_status(404) # invalid destination container dest_cont = self.env.account.container(Utils.create_name()) file_item = dest_cont.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.write, hdrs={'X-Copy-From-Account': acct, 'X-Copy-From': '%s%s/%s' % (prefix, src_cont, source_filename)}) self.assert_status(404) def testNameLimit(self): limit = load_constraint('max_object_name_length') for l in (1, 10, limit / 2, limit - 1, limit, limit + 1, limit * 2): file_item = self.env.container.file('a' * l) if l <= limit: self.assertTrue(file_item.write()) self.assert_status(201) else: self.assertRaises(ResponseError, file_item.write) self.assert_status(400) def testQuestionMarkInName(self): if Utils.create_name == Utils.create_ascii_name: file_name = list(Utils.create_name()) file_name[random.randint(2, len(file_name) - 2)] = '?' file_name = "".join(file_name) else: file_name = Utils.create_name(6) + '?' + Utils.create_name(6) file_item = self.env.container.file(file_name) self.assertTrue(file_item.write(cfg={'no_path_quote': True})) self.assertNotIn(file_name, self.env.container.files()) self.assertIn(file_name.split('?')[0], self.env.container.files()) def testDeleteThen404s(self): file_item = self.env.container.file(Utils.create_name()) self.assertTrue(file_item.write_random()) self.assert_status(201) self.assertTrue(file_item.delete()) self.assert_status(204) file_item.metadata = {Utils.create_ascii_name(): Utils.create_name()} for method in (file_item.info, file_item.read, file_item.sync_metadata, file_item.delete): self.assertRaises(ResponseError, method) self.assert_status(404) def testBlankMetadataName(self): file_item = self.env.container.file(Utils.create_name()) file_item.metadata = {'': Utils.create_name()} self.assertRaises(ResponseError, file_item.write_random) self.assert_status(400) def testMetadataNumberLimit(self): number_limit = load_constraint('max_meta_count') size_limit = load_constraint('max_meta_overall_size') for i in (number_limit - 10, number_limit - 1, number_limit, number_limit + 1, number_limit + 10, number_limit + 100): j = size_limit / (i * 2) size = 0 metadata = {} while len(metadata.keys()) < i: key = Utils.create_ascii_name() val = Utils.create_name() if len(key) > j: key = key[:j] val = val[:j] size += len(key) + len(val) metadata[key] = val file_item = self.env.container.file(Utils.create_name()) file_item.metadata = metadata if i <= number_limit: self.assertTrue(file_item.write()) self.assert_status(201) self.assertTrue(file_item.sync_metadata()) self.assert_status((201, 202)) else: self.assertRaises(ResponseError, file_item.write) self.assert_status(400) file_item.metadata = {} self.assertTrue(file_item.write()) self.assert_status(201) file_item.metadata = metadata self.assertRaises(ResponseError, file_item.sync_metadata) self.assert_status(400) def testContentTypeGuessing(self): file_types = {'wav': 'audio/x-wav', 'txt': 'text/plain', 'zip': 'application/zip'} container = self.env.account.container(Utils.create_name()) self.assertTrue(container.create()) for i in file_types.keys(): file_item = container.file(Utils.create_name() + '.' 
+ i) file_item.write('', cfg={'no_content_type': True}) file_types_read = {} for i in container.files(parms={'format': 'json'}): file_types_read[i['name'].split('.')[1]] = i['content_type'] self.assertEqual(file_types, file_types_read) def testRangedGets(self): # We set the file_length to a strange multiple here. This is to check # that ranges still work in the EC case when the requested range # spans EC segment boundaries. The 1 MiB base value is chosen because # that's a common EC segment size. The 1.33 multiple is to ensure we # aren't aligned on segment boundaries file_length = int(1048576 * 1.33) range_size = file_length / 10 file_item = self.env.container.file(Utils.create_name()) data = file_item.write_random(file_length) for i in range(0, file_length, range_size): range_string = 'bytes=%d-%d' % (i, i + range_size - 1) hdrs = {'Range': range_string} self.assertTrue( data[i: i + range_size] == file_item.read(hdrs=hdrs), range_string) range_string = 'bytes=-%d' % (i) hdrs = {'Range': range_string} if i == 0: # RFC 2616 14.35.1 # "If a syntactically valid byte-range-set includes ... at # least one suffix-byte-range-spec with a NON-ZERO # suffix-length, then the byte-range-set is satisfiable. # Otherwise, the byte-range-set is unsatisfiable. self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(416) else: self.assertEqual(file_item.read(hdrs=hdrs), data[-i:]) self.assert_header('etag', file_item.md5) self.assert_header('accept-ranges', 'bytes') range_string = 'bytes=%d-' % (i) hdrs = {'Range': range_string} self.assertEqual( file_item.read(hdrs=hdrs), data[i - file_length:], range_string) range_string = 'bytes=%d-%d' % (file_length + 1000, file_length + 2000) hdrs = {'Range': range_string} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(416) self.assert_header('etag', file_item.md5) self.assert_header('accept-ranges', 'bytes') range_string = 'bytes=%d-%d' % (file_length - 1000, file_length + 2000) hdrs = {'Range': range_string} self.assertEqual(file_item.read(hdrs=hdrs), data[-1000:], range_string) hdrs = {'Range': '0-4'} self.assertEqual(file_item.read(hdrs=hdrs), data, '0-4') # RFC 2616 14.35.1 # "If the entity is shorter than the specified suffix-length, the # entire entity-body is used." range_string = 'bytes=-%d' % (file_length + 10) hdrs = {'Range': range_string} self.assertEqual(file_item.read(hdrs=hdrs), data, range_string) def testMultiRangeGets(self): file_length = 10000 range_size = file_length / 10 subrange_size = range_size / 10 file_item = self.env.container.file(Utils.create_name()) data = file_item.write_random( file_length, hdrs={"Content-Type": "lovecraft/rugose; squamous=true"}) for i in range(0, file_length, range_size): range_string = 'bytes=%d-%d,%d-%d,%d-%d' % ( i, i + subrange_size - 1, i + 2 * subrange_size, i + 3 * subrange_size - 1, i + 4 * subrange_size, i + 5 * subrange_size - 1) hdrs = {'Range': range_string} fetched = file_item.read(hdrs=hdrs) self.assert_status(206) content_type = file_item.content_type self.assertTrue(content_type.startswith("multipart/byteranges")) self.assertIsNone(file_item.content_range) # email.parser.FeedParser wants a message with headers on the # front, then two CRLFs, and then a body (like emails have but # HTTP response bodies don't). We fake it out by constructing a # one-header preamble containing just the Content-Type, then # feeding in the response body. 
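            # each Range header above names three sub-ranges, so the parsed
            # multipart body should contain exactly three parts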
parser = email.parser.FeedParser() parser.feed("Content-Type: %s\r\n\r\n" % content_type) parser.feed(fetched) root_message = parser.close() self.assertTrue(root_message.is_multipart()) byteranges = root_message.get_payload() self.assertEqual(len(byteranges), 3) self.assertEqual(byteranges[0]['Content-Type'], "lovecraft/rugose; squamous=true") self.assertEqual( byteranges[0]['Content-Range'], "bytes %d-%d/%d" % (i, i + subrange_size - 1, file_length)) self.assertEqual( byteranges[0].get_payload(), data[i:(i + subrange_size)]) self.assertEqual(byteranges[1]['Content-Type'], "lovecraft/rugose; squamous=true") self.assertEqual( byteranges[1]['Content-Range'], "bytes %d-%d/%d" % (i + 2 * subrange_size, i + 3 * subrange_size - 1, file_length)) self.assertEqual( byteranges[1].get_payload(), data[(i + 2 * subrange_size):(i + 3 * subrange_size)]) self.assertEqual(byteranges[2]['Content-Type'], "lovecraft/rugose; squamous=true") self.assertEqual( byteranges[2]['Content-Range'], "bytes %d-%d/%d" % (i + 4 * subrange_size, i + 5 * subrange_size - 1, file_length)) self.assertEqual( byteranges[2].get_payload(), data[(i + 4 * subrange_size):(i + 5 * subrange_size)]) # The first two ranges are satisfiable but the third is not; the # result is a multipart/byteranges response containing only the two # satisfiable byteranges. range_string = 'bytes=%d-%d,%d-%d,%d-%d' % ( 0, subrange_size - 1, 2 * subrange_size, 3 * subrange_size - 1, file_length, file_length + subrange_size - 1) hdrs = {'Range': range_string} fetched = file_item.read(hdrs=hdrs) self.assert_status(206) content_type = file_item.content_type self.assertTrue(content_type.startswith("multipart/byteranges")) self.assertIsNone(file_item.content_range) parser = email.parser.FeedParser() parser.feed("Content-Type: %s\r\n\r\n" % content_type) parser.feed(fetched) root_message = parser.close() self.assertTrue(root_message.is_multipart()) byteranges = root_message.get_payload() self.assertEqual(len(byteranges), 2) self.assertEqual(byteranges[0]['Content-Type'], "lovecraft/rugose; squamous=true") self.assertEqual( byteranges[0]['Content-Range'], "bytes %d-%d/%d" % (0, subrange_size - 1, file_length)) self.assertEqual(byteranges[0].get_payload(), data[:subrange_size]) self.assertEqual(byteranges[1]['Content-Type'], "lovecraft/rugose; squamous=true") self.assertEqual( byteranges[1]['Content-Range'], "bytes %d-%d/%d" % (2 * subrange_size, 3 * subrange_size - 1, file_length)) self.assertEqual( byteranges[1].get_payload(), data[(2 * subrange_size):(3 * subrange_size)]) # The first range is satisfiable but the second is not; the # result is either a multipart/byteranges response containing one # byterange or a normal, non-MIME 206 response. 
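        # both response forms are accepted by the checks below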
range_string = 'bytes=%d-%d,%d-%d' % ( 0, subrange_size - 1, file_length, file_length + subrange_size - 1) hdrs = {'Range': range_string} fetched = file_item.read(hdrs=hdrs) self.assert_status(206) content_type = file_item.content_type if content_type.startswith("multipart/byteranges"): self.assertIsNone(file_item.content_range) parser = email.parser.FeedParser() parser.feed("Content-Type: %s\r\n\r\n" % content_type) parser.feed(fetched) root_message = parser.close() self.assertTrue(root_message.is_multipart()) byteranges = root_message.get_payload() self.assertEqual(len(byteranges), 1) self.assertEqual(byteranges[0]['Content-Type'], "lovecraft/rugose; squamous=true") self.assertEqual( byteranges[0]['Content-Range'], "bytes %d-%d/%d" % (0, subrange_size - 1, file_length)) self.assertEqual(byteranges[0].get_payload(), data[:subrange_size]) else: self.assertEqual( file_item.content_range, "bytes %d-%d/%d" % (0, subrange_size - 1, file_length)) self.assertEqual(content_type, "lovecraft/rugose; squamous=true") self.assertEqual(fetched, data[:subrange_size]) # No byterange is satisfiable, so we get a 416 response. range_string = 'bytes=%d-%d,%d-%d' % ( file_length, file_length + 2, file_length + 100, file_length + 102) hdrs = {'Range': range_string} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(416) def testRangedGetsWithLWSinHeader(self): file_length = 10000 file_item = self.env.container.file(Utils.create_name()) data = file_item.write_random(file_length) for r in ('BYTES=0-999', 'bytes = 0-999', 'BYTES = 0 - 999', 'bytes = 0 - 999', 'bytes=0 - 999', 'bytes=0-999 '): self.assertTrue(file_item.read(hdrs={'Range': r}) == data[0:1000]) def testFileSizeLimit(self): limit = load_constraint('max_file_size') tsecs = 3 def timeout(seconds, method, *args, **kwargs): try: with eventlet.Timeout(seconds): method(*args, **kwargs) except eventlet.Timeout: return True else: return False for i in (limit - 100, limit - 10, limit - 1, limit, limit + 1, limit + 10, limit + 100): file_item = self.env.container.file(Utils.create_name()) if i <= limit: self.assertTrue(timeout(tsecs, file_item.write, cfg={'set_content_length': i})) else: self.assertRaises(ResponseError, timeout, tsecs, file_item.write, cfg={'set_content_length': i}) def testNoContentLengthForPut(self): file_item = self.env.container.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.write, 'testing', cfg={'no_content_length': True}) self.assert_status(411) def testDelete(self): file_item = self.env.container.file(Utils.create_name()) file_item.write_random(self.env.file_size) self.assertIn(file_item.name, self.env.container.files()) self.assertTrue(file_item.delete()) self.assertNotIn(file_item.name, self.env.container.files()) def testBadHeaders(self): file_length = 100 # no content type on puts should be ok file_item = self.env.container.file(Utils.create_name()) file_item.write_random(file_length, cfg={'no_content_type': True}) self.assert_status(201) # content length x self.assertRaises(ResponseError, file_item.write_random, file_length, hdrs={'Content-Length': 'X'}, cfg={'no_content_length': True}) self.assert_status(400) # no content-length self.assertRaises(ResponseError, file_item.write_random, file_length, cfg={'no_content_length': True}) self.assert_status(411) self.assertRaises(ResponseError, file_item.write_random, file_length, hdrs={'transfer-encoding': 'gzip,chunked'}, cfg={'no_content_length': True}) self.assert_status(501) # bad request types # for req in ('LICK', 'GETorHEAD_base', 
'container_info', # 'best_response'): for req in ('LICK', 'GETorHEAD_base'): self.env.account.conn.make_request(req) self.assert_status(405) # bad range headers self.assertTrue( len(file_item.read(hdrs={'Range': 'parsecs=8-12'})) == file_length) self.assert_status(200) def testMetadataLengthLimits(self): key_limit = load_constraint('max_meta_name_length') value_limit = load_constraint('max_meta_value_length') lengths = [[key_limit, value_limit], [key_limit, value_limit + 1], [key_limit + 1, value_limit], [key_limit, 0], [key_limit, value_limit * 10], [key_limit * 10, value_limit]] for l in lengths: metadata = {'a' * l[0]: 'b' * l[1]} file_item = self.env.container.file(Utils.create_name()) file_item.metadata = metadata if l[0] <= key_limit and l[1] <= value_limit: self.assertTrue(file_item.write()) self.assert_status(201) self.assertTrue(file_item.sync_metadata()) else: self.assertRaises(ResponseError, file_item.write) self.assert_status(400) file_item.metadata = {} self.assertTrue(file_item.write()) self.assert_status(201) file_item.metadata = metadata self.assertRaises(ResponseError, file_item.sync_metadata) self.assert_status(400) def testEtagWayoff(self): file_item = self.env.container.file(Utils.create_name()) hdrs = {'etag': 'reallylonganddefinitelynotavalidetagvalue'} self.assertRaises(ResponseError, file_item.write_random, hdrs=hdrs) self.assert_status(422) def testFileCreate(self): for i in range(10): file_item = self.env.container.file(Utils.create_name()) data = file_item.write_random() self.assert_status(201) self.assertTrue(data == file_item.read()) self.assert_status(200) def testHead(self): file_name = Utils.create_name() content_type = Utils.create_name() file_item = self.env.container.file(file_name) file_item.content_type = content_type file_item.write_random(self.env.file_size) md5 = file_item.md5 file_item = self.env.container.file(file_name) info = file_item.info() self.assert_status(200) self.assertEqual(info['content_length'], self.env.file_size) self.assertEqual(info['etag'], md5) self.assertEqual(info['content_type'], content_type) self.assertIn('last_modified', info) def testDeleteOfFileThatDoesNotExist(self): # in container that exists file_item = self.env.container.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.delete) self.assert_status(404) # in container that does not exist container = self.env.account.container(Utils.create_name()) file_item = container.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.delete) self.assert_status(404) def testHeadOnFileThatDoesNotExist(self): # in container that exists file_item = self.env.container.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.info) self.assert_status(404) # in container that does not exist container = self.env.account.container(Utils.create_name()) file_item = container.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.info) self.assert_status(404) def testMetadataOnPost(self): file_item = self.env.container.file(Utils.create_name()) file_item.write_random(self.env.file_size) for i in range(10): metadata = {} for j in range(10): metadata[Utils.create_ascii_name()] = Utils.create_name() file_item.metadata = metadata self.assertTrue(file_item.sync_metadata()) self.assert_status((201, 202)) file_item = self.env.container.file(file_item.name) self.assertTrue(file_item.initialize()) self.assert_status(200) self.assertEqual(file_item.metadata, metadata) def testGetContentType(self): file_name = Utils.create_name() content_type = 
Utils.create_name() file_item = self.env.container.file(file_name) file_item.content_type = content_type file_item.write_random() file_item = self.env.container.file(file_name) file_item.read() self.assertEqual(content_type, file_item.content_type) def testGetOnFileThatDoesNotExist(self): # in container that exists file_item = self.env.container.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.read) self.assert_status(404) # in container that does not exist container = self.env.account.container(Utils.create_name()) file_item = container.file(Utils.create_name()) self.assertRaises(ResponseError, file_item.read) self.assert_status(404) def testPostOnFileThatDoesNotExist(self): # in container that exists file_item = self.env.container.file(Utils.create_name()) file_item.metadata['Field'] = 'Value' self.assertRaises(ResponseError, file_item.sync_metadata) self.assert_status(404) # in container that does not exist container = self.env.account.container(Utils.create_name()) file_item = container.file(Utils.create_name()) file_item.metadata['Field'] = 'Value' self.assertRaises(ResponseError, file_item.sync_metadata) self.assert_status(404) def testMetadataOnPut(self): for i in range(10): metadata = {} for j in range(10): metadata[Utils.create_ascii_name()] = Utils.create_name() file_item = self.env.container.file(Utils.create_name()) file_item.metadata = metadata file_item.write_random(self.env.file_size) file_item = self.env.container.file(file_item.name) self.assertTrue(file_item.initialize()) self.assert_status(200) self.assertEqual(file_item.metadata, metadata) def testSerialization(self): container = self.env.account.container(Utils.create_name()) self.assertTrue(container.create()) files = [] for i in (0, 1, 10, 100, 1000, 10000): files.append({'name': Utils.create_name(), 'content_type': Utils.create_name(), 'bytes': i}) write_time = time.time() for f in files: file_item = container.file(f['name']) file_item.content_type = f['content_type'] file_item.write_random(f['bytes']) f['hash'] = file_item.md5 f['json'] = False f['xml'] = False write_time = time.time() - write_time for format_type in ['json', 'xml']: for file_item in container.files(parms={'format': format_type}): found = False for f in files: if f['name'] != file_item['name']: continue self.assertEqual(file_item['content_type'], f['content_type']) self.assertEqual(int(file_item['bytes']), f['bytes']) d = datetime.strptime( file_item['last_modified'].split('.')[0], "%Y-%m-%dT%H:%M:%S") lm = time.mktime(d.timetuple()) if 'last_modified' in f: self.assertEqual(f['last_modified'], lm) else: f['last_modified'] = lm f[format_type] = True found = True self.assertTrue( found, 'Unexpected file %s found in ' '%s listing' % (file_item['name'], format_type)) headers = dict(self.env.conn.response.getheaders()) if format_type == 'json': self.assertEqual(headers['content-type'], 'application/json; charset=utf-8') elif format_type == 'xml': self.assertEqual(headers['content-type'], 'application/xml; charset=utf-8') lm_diff = max([f['last_modified'] for f in files]) -\ min([f['last_modified'] for f in files]) self.assertTrue( lm_diff < write_time + 1, 'Diff in last ' 'modified times should be less than time to write files') for f in files: for format_type in ['json', 'xml']: self.assertTrue( f[format_type], 'File %s not found in %s listing' % (f['name'], format_type)) def testStackedOverwrite(self): file_item = self.env.container.file(Utils.create_name()) for i in range(1, 11): data = file_item.write_random(512) 
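            # re-upload the identical payload and confirm a read returns it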
file_item.write(data) self.assertTrue(file_item.read() == data) def testTooLongName(self): file_item = self.env.container.file('x' * 1025) self.assertRaises(ResponseError, file_item.write) self.assert_status(400) def testZeroByteFile(self): file_item = self.env.container.file(Utils.create_name()) self.assertTrue(file_item.write('')) self.assertIn(file_item.name, self.env.container.files()) self.assertTrue(file_item.read() == '') def testEtagResponse(self): file_item = self.env.container.file(Utils.create_name()) data = six.StringIO(file_item.write_random(512)) etag = File.compute_md5sum(data) headers = dict(self.env.conn.response.getheaders()) self.assertIn('etag', headers.keys()) header_etag = headers['etag'].strip('"') self.assertEqual(etag, header_etag) def testChunkedPut(self): if (tf.web_front_end == 'apache2'): raise SkipTest("Chunked PUT can only be tested with apache2 web" " front end") def chunks(s, length=3): i, j = 0, length while i < len(s): yield s[i:j] i, j = j, j + length data = File.random_data(10000) etag = File.compute_md5sum(data) for i in (1, 10, 100, 1000): file_item = self.env.container.file(Utils.create_name()) for j in chunks(data, i): file_item.chunked_write(j) self.assertTrue(file_item.chunked_write()) self.assertTrue(data == file_item.read()) info = file_item.info() self.assertEqual(etag, info['etag']) def test_POST(self): # verify consistency between object and container listing metadata file_name = Utils.create_name() file_item = self.env.container.file(file_name) file_item.content_type = 'text/foobar' file_item.write_random(1024) # sanity check file_item = self.env.container.file(file_name) file_item.initialize() self.assertEqual('text/foobar', file_item.content_type) self.assertEqual(1024, file_item.size) etag = file_item.etag # check container listing is consistent listing = self.env.container.files(parms={'format': 'json'}) for f_dict in listing: if f_dict['name'] == file_name: break else: self.fail('Failed to find file %r in listing' % file_name) self.assertEqual(1024, f_dict['bytes']) self.assertEqual('text/foobar', f_dict['content_type']) self.assertEqual(etag, f_dict['hash']) # now POST updated content-type to each file file_item = self.env.container.file(file_name) file_item.content_type = 'image/foobarbaz' file_item.sync_metadata({'Test': 'blah'}) # sanity check object metadata file_item = self.env.container.file(file_name) file_item.initialize() self.assertEqual(1024, file_item.size) self.assertEqual('image/foobarbaz', file_item.content_type) self.assertEqual(etag, file_item.etag) self.assertIn('test', file_item.metadata) # check for consistency between object and container listing listing = self.env.container.files(parms={'format': 'json'}) for f_dict in listing: if f_dict['name'] == file_name: break else: self.fail('Failed to find file %r in listing' % file_name) self.assertEqual(1024, f_dict['bytes']) self.assertEqual('image/foobarbaz', f_dict['content_type']) self.assertEqual(etag, f_dict['hash']) class TestFileUTF8(Base2, TestFile): set_up = False class TestDloEnv(object): @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() config2 = tf.config.copy() config2['username'] = tf.config['username3'] config2['password'] = tf.config['password3'] cls.conn2 = Connection(config2) cls.conn2.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.container = cls.account.container(Utils.create_name()) cls.container2 = 
cls.account.container(Utils.create_name()) for cont in (cls.container, cls.container2): if not cont.create(): raise ResponseError(cls.conn.response) # avoid getting a prefix that stops halfway through an encoded # character prefix = Utils.create_name().decode("utf-8")[:10].encode("utf-8") cls.segment_prefix = prefix for letter in ('a', 'b', 'c', 'd', 'e'): file_item = cls.container.file("%s/seg_lower%s" % (prefix, letter)) file_item.write(letter * 10) file_item = cls.container.file("%s/seg_upper%s" % (prefix, letter)) file_item.write(letter.upper() * 10) for letter in ('f', 'g', 'h', 'i', 'j'): file_item = cls.container2.file("%s/seg_lower%s" % (prefix, letter)) file_item.write(letter * 10) man1 = cls.container.file("man1") man1.write('man1-contents', hdrs={"X-Object-Manifest": "%s/%s/seg_lower" % (cls.container.name, prefix)}) man2 = cls.container.file("man2") man2.write('man2-contents', hdrs={"X-Object-Manifest": "%s/%s/seg_upper" % (cls.container.name, prefix)}) manall = cls.container.file("manall") manall.write('manall-contents', hdrs={"X-Object-Manifest": "%s/%s/seg" % (cls.container.name, prefix)}) mancont2 = cls.container.file("mancont2") mancont2.write( 'mancont2-contents', hdrs={"X-Object-Manifest": "%s/%s/seg_lower" % (cls.container2.name, prefix)}) class TestDlo(Base): env = TestDloEnv set_up = False def test_get_manifest(self): file_item = self.env.container.file('man1') file_contents = file_item.read() self.assertEqual( file_contents, "aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeee") file_item = self.env.container.file('man2') file_contents = file_item.read() self.assertEqual( file_contents, "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEE") file_item = self.env.container.file('manall') file_contents = file_item.read() self.assertEqual( file_contents, ("aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeee" + "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEE")) def test_get_manifest_document_itself(self): file_item = self.env.container.file('man1') file_contents = file_item.read(parms={'multipart-manifest': 'get'}) self.assertEqual(file_contents, "man1-contents") self.assertEqual(file_item.info()['x_object_manifest'], "%s/%s/seg_lower" % (self.env.container.name, self.env.segment_prefix)) def test_get_range(self): file_item = self.env.container.file('man1') file_contents = file_item.read(size=25, offset=8) self.assertEqual(file_contents, "aabbbbbbbbbbccccccccccddd") file_contents = file_item.read(size=1, offset=47) self.assertEqual(file_contents, "e") def test_get_range_out_of_range(self): file_item = self.env.container.file('man1') self.assertRaises(ResponseError, file_item.read, size=7, offset=50) self.assert_status(416) def test_copy(self): # Adding a new segment, copying the manifest, and then deleting the # segment proves that the new object is really the concatenated # segments and not just a manifest. 
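        # the new segment's name matches the manifest's X-Object-Manifest
        # prefix, so it is served as part of the DLO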
f_segment = self.env.container.file("%s/seg_lowerf" % (self.env.segment_prefix)) f_segment.write('ffffffffff') try: man1_item = self.env.container.file('man1') man1_item.copy(self.env.container.name, "copied-man1") finally: # try not to leave this around for other tests to stumble over f_segment.delete() file_item = self.env.container.file('copied-man1') file_contents = file_item.read() self.assertEqual( file_contents, "aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeeeffffffffff") # The copied object must not have X-Object-Manifest self.assertNotIn("x_object_manifest", file_item.info()) def test_copy_account(self): # dlo use same account and same container only acct = self.env.conn.account_name # Adding a new segment, copying the manifest, and then deleting the # segment proves that the new object is really the concatenated # segments and not just a manifest. f_segment = self.env.container.file("%s/seg_lowerf" % (self.env.segment_prefix)) f_segment.write('ffffffffff') try: man1_item = self.env.container.file('man1') man1_item.copy_account(acct, self.env.container.name, "copied-man1") finally: # try not to leave this around for other tests to stumble over f_segment.delete() file_item = self.env.container.file('copied-man1') file_contents = file_item.read() self.assertEqual( file_contents, "aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeeeffffffffff") # The copied object must not have X-Object-Manifest self.assertNotIn("x_object_manifest", file_item.info()) def test_copy_manifest(self): # Copying the manifest with multipart-manifest=get query string # should result in another manifest try: man1_item = self.env.container.file('man1') man1_item.copy(self.env.container.name, "copied-man1", parms={'multipart-manifest': 'get'}) copied = self.env.container.file("copied-man1") copied_contents = copied.read(parms={'multipart-manifest': 'get'}) self.assertEqual(copied_contents, "man1-contents") copied_contents = copied.read() self.assertEqual( copied_contents, "aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeee") self.assertEqual(man1_item.info()['x_object_manifest'], copied.info()['x_object_manifest']) finally: # try not to leave this around for other tests to stumble over self.env.container.file("copied-man1").delete() def test_dlo_if_match_get(self): manifest = self.env.container.file("man1") etag = manifest.info()['etag'] self.assertRaises(ResponseError, manifest.read, hdrs={'If-Match': 'not-%s' % etag}) self.assert_status(412) manifest.read(hdrs={'If-Match': etag}) self.assert_status(200) def test_dlo_if_none_match_get(self): manifest = self.env.container.file("man1") etag = manifest.info()['etag'] self.assertRaises(ResponseError, manifest.read, hdrs={'If-None-Match': etag}) self.assert_status(304) manifest.read(hdrs={'If-None-Match': "not-%s" % etag}) self.assert_status(200) def test_dlo_if_match_head(self): manifest = self.env.container.file("man1") etag = manifest.info()['etag'] self.assertRaises(ResponseError, manifest.info, hdrs={'If-Match': 'not-%s' % etag}) self.assert_status(412) manifest.info(hdrs={'If-Match': etag}) self.assert_status(200) def test_dlo_if_none_match_head(self): manifest = self.env.container.file("man1") etag = manifest.info()['etag'] self.assertRaises(ResponseError, manifest.info, hdrs={'If-None-Match': etag}) self.assert_status(304) manifest.info(hdrs={'If-None-Match': "not-%s" % etag}) self.assert_status(200) def test_dlo_referer_on_segment_container(self): # First the account2 (test3) should fail headers = {'X-Auth-Token': self.env.conn2.storage_token, 'Referer': 
'http://blah.example.com'} dlo_file = self.env.container.file("mancont2") self.assertRaises(ResponseError, dlo_file.read, hdrs=headers) self.assert_status(403) # Now set the referer on the dlo container only referer_metadata = {'X-Container-Read': '.r:*.example.com,.rlistings'} self.env.container.update_metadata(referer_metadata) self.assertRaises(ResponseError, dlo_file.read, hdrs=headers) self.assert_status(403) # Finally set the referer on the segment container self.env.container2.update_metadata(referer_metadata) contents = dlo_file.read(hdrs=headers) self.assertEqual( contents, "ffffffffffgggggggggghhhhhhhhhhiiiiiiiiiijjjjjjjjjj") class TestDloUTF8(Base2, TestDlo): set_up = False class TestFileComparisonEnv(object): @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.container = cls.account.container(Utils.create_name()) if not cls.container.create(): raise ResponseError(cls.conn.response) cls.file_count = 20 cls.file_size = 128 cls.files = list() for x in range(cls.file_count): file_item = cls.container.file(Utils.create_name()) file_item.write_random(cls.file_size) cls.files.append(file_item) cls.time_old_f1 = time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime(time.time() - 86400)) cls.time_old_f2 = time.strftime("%A, %d-%b-%y %H:%M:%S GMT", time.gmtime(time.time() - 86400)) cls.time_old_f3 = time.strftime("%a %b %d %H:%M:%S %Y", time.gmtime(time.time() - 86400)) cls.time_new = time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime(time.time() + 86400)) class TestFileComparison(Base): env = TestFileComparisonEnv set_up = False def testIfMatch(self): for file_item in self.env.files: hdrs = {'If-Match': file_item.md5} self.assertTrue(file_item.read(hdrs=hdrs)) hdrs = {'If-Match': 'bogus'} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(412) self.assert_header('etag', file_item.md5) def testIfMatchMultipleEtags(self): for file_item in self.env.files: hdrs = {'If-Match': '"bogus1", "%s", "bogus2"' % file_item.md5} self.assertTrue(file_item.read(hdrs=hdrs)) hdrs = {'If-Match': '"bogus1", "bogus2", "bogus3"'} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(412) self.assert_header('etag', file_item.md5) def testIfNoneMatch(self): for file_item in self.env.files: hdrs = {'If-None-Match': 'bogus'} self.assertTrue(file_item.read(hdrs=hdrs)) hdrs = {'If-None-Match': file_item.md5} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(304) self.assert_header('etag', file_item.md5) self.assert_header('accept-ranges', 'bytes') def testIfNoneMatchMultipleEtags(self): for file_item in self.env.files: hdrs = {'If-None-Match': '"bogus1", "bogus2", "bogus3"'} self.assertTrue(file_item.read(hdrs=hdrs)) hdrs = {'If-None-Match': '"bogus1", "bogus2", "%s"' % file_item.md5} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(304) self.assert_header('etag', file_item.md5) self.assert_header('accept-ranges', 'bytes') def testIfModifiedSince(self): for file_item in self.env.files: hdrs = {'If-Modified-Since': self.env.time_old_f1} self.assertTrue(file_item.read(hdrs=hdrs)) self.assertTrue(file_item.info(hdrs=hdrs)) hdrs = {'If-Modified-Since': self.env.time_new} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(304) self.assert_header('etag', file_item.md5) self.assert_header('accept-ranges', 'bytes') 
self.assertRaises(ResponseError, file_item.info, hdrs=hdrs) self.assert_status(304) self.assert_header('etag', file_item.md5) self.assert_header('accept-ranges', 'bytes') def testIfUnmodifiedSince(self): for file_item in self.env.files: hdrs = {'If-Unmodified-Since': self.env.time_new} self.assertTrue(file_item.read(hdrs=hdrs)) self.assertTrue(file_item.info(hdrs=hdrs)) hdrs = {'If-Unmodified-Since': self.env.time_old_f2} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(412) self.assert_header('etag', file_item.md5) self.assertRaises(ResponseError, file_item.info, hdrs=hdrs) self.assert_status(412) self.assert_header('etag', file_item.md5) def testIfMatchAndUnmodified(self): for file_item in self.env.files: hdrs = {'If-Match': file_item.md5, 'If-Unmodified-Since': self.env.time_new} self.assertTrue(file_item.read(hdrs=hdrs)) hdrs = {'If-Match': 'bogus', 'If-Unmodified-Since': self.env.time_new} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(412) self.assert_header('etag', file_item.md5) hdrs = {'If-Match': file_item.md5, 'If-Unmodified-Since': self.env.time_old_f3} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(412) self.assert_header('etag', file_item.md5) def testLastModified(self): file_name = Utils.create_name() content_type = Utils.create_name() file_item = self.env.container.file(file_name) file_item.content_type = content_type resp = file_item.write_random_return_resp(self.env.file_size) put_last_modified = resp.getheader('last-modified') etag = file_item.md5 file_item = self.env.container.file(file_name) info = file_item.info() self.assertIn('last_modified', info) last_modified = info['last_modified'] self.assertEqual(put_last_modified, info['last_modified']) hdrs = {'If-Modified-Since': last_modified} self.assertRaises(ResponseError, file_item.read, hdrs=hdrs) self.assert_status(304) self.assert_header('etag', etag) self.assert_header('accept-ranges', 'bytes') hdrs = {'If-Unmodified-Since': last_modified} self.assertTrue(file_item.read(hdrs=hdrs)) class TestFileComparisonUTF8(Base2, TestFileComparison): set_up = False class TestSloEnv(object): slo_enabled = None # tri-state: None initially, then True/False @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() config2 = deepcopy(tf.config) config2['account'] = tf.config['account2'] config2['username'] = tf.config['username2'] config2['password'] = tf.config['password2'] cls.conn2 = Connection(config2) cls.conn2.authenticate() cls.account2 = cls.conn2.get_account() cls.account2.delete_containers() config3 = tf.config.copy() config3['username'] = tf.config['username3'] config3['password'] = tf.config['password3'] cls.conn3 = Connection(config3) cls.conn3.authenticate() if cls.slo_enabled is None: cls.slo_enabled = 'slo' in cluster_info if not cls.slo_enabled: return cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.container = cls.account.container(Utils.create_name()) cls.container2 = cls.account.container(Utils.create_name()) for cont in (cls.container, cls.container2): if not cont.create(): raise ResponseError(cls.conn.response) cls.seg_info = seg_info = {} for letter, size in (('a', 1024 * 1024), ('b', 1024 * 1024), ('c', 1024 * 1024), ('d', 1024 * 1024), ('e', 1)): seg_name = "seg_%s" % letter file_item = cls.container.file(seg_name) file_item.write(letter * size) seg_info[seg_name] = { 'size_bytes': size, 'etag': file_item.md5, 'path': 
'/%s/%s' % (cls.container.name, seg_name)} file_item = cls.container.file("manifest-abcde") file_item.write( json.dumps([seg_info['seg_a'], seg_info['seg_b'], seg_info['seg_c'], seg_info['seg_d'], seg_info['seg_e']]), parms={'multipart-manifest': 'put'}) # Put the same manifest in the container2 file_item = cls.container2.file("manifest-abcde") file_item.write( json.dumps([seg_info['seg_a'], seg_info['seg_b'], seg_info['seg_c'], seg_info['seg_d'], seg_info['seg_e']]), parms={'multipart-manifest': 'put'}) file_item = cls.container.file('manifest-cd') cd_json = json.dumps([seg_info['seg_c'], seg_info['seg_d']]) file_item.write(cd_json, parms={'multipart-manifest': 'put'}) cd_etag = hashlib.md5(seg_info['seg_c']['etag'] + seg_info['seg_d']['etag']).hexdigest() file_item = cls.container.file("manifest-bcd-submanifest") file_item.write( json.dumps([seg_info['seg_b'], {'etag': cd_etag, 'size_bytes': (seg_info['seg_c']['size_bytes'] + seg_info['seg_d']['size_bytes']), 'path': '/%s/%s' % (cls.container.name, 'manifest-cd')}]), parms={'multipart-manifest': 'put'}) bcd_submanifest_etag = hashlib.md5( seg_info['seg_b']['etag'] + cd_etag).hexdigest() file_item = cls.container.file("manifest-abcde-submanifest") file_item.write( json.dumps([ seg_info['seg_a'], {'etag': bcd_submanifest_etag, 'size_bytes': (seg_info['seg_b']['size_bytes'] + seg_info['seg_c']['size_bytes'] + seg_info['seg_d']['size_bytes']), 'path': '/%s/%s' % (cls.container.name, 'manifest-bcd-submanifest')}, seg_info['seg_e']]), parms={'multipart-manifest': 'put'}) abcde_submanifest_etag = hashlib.md5( seg_info['seg_a']['etag'] + bcd_submanifest_etag + seg_info['seg_e']['etag']).hexdigest() abcde_submanifest_size = (seg_info['seg_a']['size_bytes'] + seg_info['seg_b']['size_bytes'] + seg_info['seg_c']['size_bytes'] + seg_info['seg_d']['size_bytes'] + seg_info['seg_e']['size_bytes']) file_item = cls.container.file("ranged-manifest") file_item.write( json.dumps([ {'etag': abcde_submanifest_etag, 'size_bytes': abcde_submanifest_size, 'path': '/%s/%s' % (cls.container.name, 'manifest-abcde-submanifest'), 'range': '-1048578'}, # 'c' + ('d' * 2**20) + 'e' {'etag': abcde_submanifest_etag, 'size_bytes': abcde_submanifest_size, 'path': '/%s/%s' % (cls.container.name, 'manifest-abcde-submanifest'), 'range': '524288-1572863'}, # 'a' * 2**19 + 'b' * 2**19 {'etag': abcde_submanifest_etag, 'size_bytes': abcde_submanifest_size, 'path': '/%s/%s' % (cls.container.name, 'manifest-abcde-submanifest'), 'range': '3145727-3145728'}]), # 'cd' parms={'multipart-manifest': 'put'}) ranged_manifest_etag = hashlib.md5( abcde_submanifest_etag + ':3145727-4194304;' + abcde_submanifest_etag + ':524288-1572863;' + abcde_submanifest_etag + ':3145727-3145728;').hexdigest() ranged_manifest_size = 2 * 1024 * 1024 + 4 file_item = cls.container.file("ranged-submanifest") file_item.write( json.dumps([ seg_info['seg_c'], {'etag': ranged_manifest_etag, 'size_bytes': ranged_manifest_size, 'path': '/%s/%s' % (cls.container.name, 'ranged-manifest')}, {'etag': ranged_manifest_etag, 'size_bytes': ranged_manifest_size, 'path': '/%s/%s' % (cls.container.name, 'ranged-manifest'), 'range': '524289-1572865'}, {'etag': ranged_manifest_etag, 'size_bytes': ranged_manifest_size, 'path': '/%s/%s' % (cls.container.name, 'ranged-manifest'), 'range': '-3'}]), parms={'multipart-manifest': 'put'}) file_item = cls.container.file("manifest-db") file_item.write( json.dumps([ {'path': seg_info['seg_d']['path'], 'etag': None, 'size_bytes': None}, {'path': seg_info['seg_b']['path'], 'etag': None, 
'size_bytes': None}, ]), parms={'multipart-manifest': 'put'}) file_item = cls.container.file("ranged-manifest-repeated-segment") file_item.write( json.dumps([ {'path': seg_info['seg_a']['path'], 'etag': None, 'size_bytes': None, 'range': '-1048578'}, {'path': seg_info['seg_a']['path'], 'etag': None, 'size_bytes': None}, {'path': seg_info['seg_b']['path'], 'etag': None, 'size_bytes': None, 'range': '-1048578'}, ]), parms={'multipart-manifest': 'put'}) class TestSlo(Base): env = TestSloEnv set_up = False def setUp(self): super(TestSlo, self).setUp() if self.env.slo_enabled is False: raise SkipTest("SLO not enabled") elif self.env.slo_enabled is not True: # just some sanity checking raise Exception( "Expected slo_enabled to be True/False, got %r" % (self.env.slo_enabled,)) def test_slo_get_simple_manifest(self): file_item = self.env.container.file('manifest-abcde') file_contents = file_item.read() self.assertEqual(4 * 1024 * 1024 + 1, len(file_contents)) self.assertEqual('a', file_contents[0]) self.assertEqual('a', file_contents[1024 * 1024 - 1]) self.assertEqual('b', file_contents[1024 * 1024]) self.assertEqual('d', file_contents[-2]) self.assertEqual('e', file_contents[-1]) def test_slo_container_listing(self): # the listing object size should equal the sum of the size of the # segments, not the size of the manifest body raise SkipTest('Only passes with object_post_as_copy=False') file_item = self.env.container.file(Utils.create_name) file_item.write( json.dumps([self.env.seg_info['seg_a']]), parms={'multipart-manifest': 'put'}) files = self.env.container.files(parms={'format': 'json'}) for f_dict in files: if f_dict['name'] == file_item.name: self.assertEqual(1024 * 1024, f_dict['bytes']) self.assertEqual('application/octet-stream', f_dict['content_type']) break else: self.fail('Failed to find manifest file in container listing') # now POST updated content-type file file_item.content_type = 'image/jpeg' file_item.sync_metadata({'X-Object-Meta-Test': 'blah'}) file_item.initialize() self.assertEqual('image/jpeg', file_item.content_type) # sanity # verify that the container listing is consistent with the file files = self.env.container.files(parms={'format': 'json'}) for f_dict in files: if f_dict['name'] == file_item.name: self.assertEqual(1024 * 1024, f_dict['bytes']) self.assertEqual(file_item.content_type, f_dict['content_type']) break else: self.fail('Failed to find manifest file in container listing') def test_slo_get_nested_manifest(self): file_item = self.env.container.file('manifest-abcde-submanifest') file_contents = file_item.read() self.assertEqual(4 * 1024 * 1024 + 1, len(file_contents)) self.assertEqual('a', file_contents[0]) self.assertEqual('a', file_contents[1024 * 1024 - 1]) self.assertEqual('b', file_contents[1024 * 1024]) self.assertEqual('d', file_contents[-2]) self.assertEqual('e', file_contents[-1]) def test_slo_get_ranged_manifest(self): file_item = self.env.container.file('ranged-manifest') grouped_file_contents = [ (char, sum(1 for _char in grp)) for char, grp in itertools.groupby(file_item.read())] self.assertEqual([ ('c', 1), ('d', 1024 * 1024), ('e', 1), ('a', 512 * 1024), ('b', 512 * 1024), ('c', 1), ('d', 1)], grouped_file_contents) def test_slo_get_ranged_manifest_repeated_segment(self): file_item = self.env.container.file('ranged-manifest-repeated-segment') grouped_file_contents = [ (char, sum(1 for _char in grp)) for char, grp in itertools.groupby(file_item.read())] self.assertEqual( [('a', 2097152), ('b', 1048576)], grouped_file_contents) def 
test_slo_get_ranged_submanifest(self): file_item = self.env.container.file('ranged-submanifest') grouped_file_contents = [ (char, sum(1 for _char in grp)) for char, grp in itertools.groupby(file_item.read())] self.assertEqual([ ('c', 1024 * 1024 + 1), ('d', 1024 * 1024), ('e', 1), ('a', 512 * 1024), ('b', 512 * 1024), ('c', 1), ('d', 512 * 1024 + 1), ('e', 1), ('a', 512 * 1024), ('b', 1), ('c', 1), ('d', 1)], grouped_file_contents) def test_slo_ranged_get(self): file_item = self.env.container.file('manifest-abcde') file_contents = file_item.read(size=1024 * 1024 + 2, offset=1024 * 1024 - 1) self.assertEqual('a', file_contents[0]) self.assertEqual('b', file_contents[1]) self.assertEqual('b', file_contents[-2]) self.assertEqual('c', file_contents[-1]) def test_slo_ranged_submanifest(self): file_item = self.env.container.file('manifest-abcde-submanifest') file_contents = file_item.read(size=1024 * 1024 + 2, offset=1024 * 1024 * 2 - 1) self.assertEqual('b', file_contents[0]) self.assertEqual('c', file_contents[1]) self.assertEqual('c', file_contents[-2]) self.assertEqual('d', file_contents[-1]) def test_slo_etag_is_hash_of_etags(self): expected_hash = hashlib.md5() expected_hash.update(hashlib.md5('a' * 1024 * 1024).hexdigest()) expected_hash.update(hashlib.md5('b' * 1024 * 1024).hexdigest()) expected_hash.update(hashlib.md5('c' * 1024 * 1024).hexdigest()) expected_hash.update(hashlib.md5('d' * 1024 * 1024).hexdigest()) expected_hash.update(hashlib.md5('e').hexdigest()) expected_etag = expected_hash.hexdigest() file_item = self.env.container.file('manifest-abcde') self.assertEqual(expected_etag, file_item.info()['etag']) def test_slo_etag_is_hash_of_etags_submanifests(self): def hd(x): return hashlib.md5(x).hexdigest() expected_etag = hd(hd('a' * 1024 * 1024) + hd(hd('b' * 1024 * 1024) + hd(hd('c' * 1024 * 1024) + hd('d' * 1024 * 1024))) + hd('e')) file_item = self.env.container.file('manifest-abcde-submanifest') self.assertEqual(expected_etag, file_item.info()['etag']) def test_slo_etag_mismatch(self): file_item = self.env.container.file("manifest-a-bad-etag") try: file_item.write( json.dumps([{ 'size_bytes': 1024 * 1024, 'etag': 'not it', 'path': '/%s/%s' % (self.env.container.name, 'seg_a')}]), parms={'multipart-manifest': 'put'}) except ResponseError as err: self.assertEqual(400, err.status) else: self.fail("Expected ResponseError but didn't get it") def test_slo_size_mismatch(self): file_item = self.env.container.file("manifest-a-bad-size") try: file_item.write( json.dumps([{ 'size_bytes': 1024 * 1024 - 1, 'etag': hashlib.md5('a' * 1024 * 1024).hexdigest(), 'path': '/%s/%s' % (self.env.container.name, 'seg_a')}]), parms={'multipart-manifest': 'put'}) except ResponseError as err: self.assertEqual(400, err.status) else: self.fail("Expected ResponseError but didn't get it") def test_slo_unspecified_etag(self): file_item = self.env.container.file("manifest-a-unspecified-etag") file_item.write( json.dumps([{ 'size_bytes': 1024 * 1024, 'etag': None, 'path': '/%s/%s' % (self.env.container.name, 'seg_a')}]), parms={'multipart-manifest': 'put'}) self.assert_status(201) def test_slo_unspecified_size(self): file_item = self.env.container.file("manifest-a-unspecified-size") file_item.write( json.dumps([{ 'size_bytes': None, 'etag': hashlib.md5('a' * 1024 * 1024).hexdigest(), 'path': '/%s/%s' % (self.env.container.name, 'seg_a')}]), parms={'multipart-manifest': 'put'}) self.assert_status(201) def test_slo_missing_etag(self): file_item = self.env.container.file("manifest-a-missing-etag") try: 
file_item.write( json.dumps([{ 'size_bytes': 1024 * 1024, 'path': '/%s/%s' % (self.env.container.name, 'seg_a')}]), parms={'multipart-manifest': 'put'}) except ResponseError as err: self.assertEqual(400, err.status) else: self.fail("Expected ResponseError but didn't get it") def test_slo_missing_size(self): file_item = self.env.container.file("manifest-a-missing-size") try: file_item.write( json.dumps([{ 'etag': hashlib.md5('a' * 1024 * 1024).hexdigest(), 'path': '/%s/%s' % (self.env.container.name, 'seg_a')}]), parms={'multipart-manifest': 'put'}) except ResponseError as err: self.assertEqual(400, err.status) else: self.fail("Expected ResponseError but didn't get it") def test_slo_overwrite_segment_with_manifest(self): file_item = self.env.container.file("seg_b") with self.assertRaises(ResponseError) as catcher: file_item.write( json.dumps([ {'size_bytes': 1024 * 1024, 'etag': hashlib.md5('a' * 1024 * 1024).hexdigest(), 'path': '/%s/%s' % (self.env.container.name, 'seg_a')}, {'size_bytes': 1024 * 1024, 'etag': hashlib.md5('b' * 1024 * 1024).hexdigest(), 'path': '/%s/%s' % (self.env.container.name, 'seg_b')}, {'size_bytes': 1024 * 1024, 'etag': hashlib.md5('c' * 1024 * 1024).hexdigest(), 'path': '/%s/%s' % (self.env.container.name, 'seg_c')}]), parms={'multipart-manifest': 'put'}) self.assertEqual(400, catcher.exception.status) def test_slo_copy(self): file_item = self.env.container.file("manifest-abcde") file_item.copy(self.env.container.name, "copied-abcde") copied = self.env.container.file("copied-abcde") copied_contents = copied.read(parms={'multipart-manifest': 'get'}) self.assertEqual(4 * 1024 * 1024 + 1, len(copied_contents)) def test_slo_copy_account(self): acct = self.env.conn.account_name # same account copy file_item = self.env.container.file("manifest-abcde") file_item.copy_account(acct, self.env.container.name, "copied-abcde") copied = self.env.container.file("copied-abcde") copied_contents = copied.read(parms={'multipart-manifest': 'get'}) self.assertEqual(4 * 1024 * 1024 + 1, len(copied_contents)) # copy to different account acct = self.env.conn2.account_name dest_cont = self.env.account2.container(Utils.create_name()) self.assertTrue(dest_cont.create(hdrs={ 'X-Container-Write': self.env.conn.user_acl })) file_item = self.env.container.file("manifest-abcde") file_item.copy_account(acct, dest_cont, "copied-abcde") copied = dest_cont.file("copied-abcde") copied_contents = copied.read(parms={'multipart-manifest': 'get'}) self.assertEqual(4 * 1024 * 1024 + 1, len(copied_contents)) def test_slo_copy_the_manifest(self): file_item = self.env.container.file("manifest-abcde") file_item.copy(self.env.container.name, "copied-abcde-manifest-only", parms={'multipart-manifest': 'get'}) copied = self.env.container.file("copied-abcde-manifest-only") copied_contents = copied.read(parms={'multipart-manifest': 'get'}) try: json.loads(copied_contents) except ValueError: self.fail("COPY didn't copy the manifest (invalid json on GET)") def test_slo_copy_the_manifest_account(self): acct = self.env.conn.account_name # same account file_item = self.env.container.file("manifest-abcde") file_item.copy_account(acct, self.env.container.name, "copied-abcde-manifest-only", parms={'multipart-manifest': 'get'}) copied = self.env.container.file("copied-abcde-manifest-only") copied_contents = copied.read(parms={'multipart-manifest': 'get'}) try: json.loads(copied_contents) except ValueError: self.fail("COPY didn't copy the manifest (invalid json on GET)") # different account acct = self.env.conn2.account_name 
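# Aside: a minimal, pure-stdlib sketch (not invoked by the tests) of the etag
# rule exercised by test_slo_etag_is_hash_of_etags above. An SLO manifest's
# etag is the MD5 of the concatenated hex MD5s of its segments, taken in
# manifest order; the segment bodies below simply mirror what
# TestSloEnv.setUp writes.
import hashlib

def expected_slo_etag(segment_bodies):
    # segment_bodies: the segment payloads, in the same order as the manifest
    concatenated = ''.join(hashlib.md5(body).hexdigest()
                           for body in segment_bodies)
    return hashlib.md5(concatenated).hexdigest()

# expected_slo_etag(['a' * 2 ** 20, 'b' * 2 ** 20, 'c' * 2 ** 20,
#                    'd' * 2 ** 20, 'e'])  # == the etag of manifest-abcde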
dest_cont = self.env.account2.container(Utils.create_name()) self.assertTrue(dest_cont.create(hdrs={ 'X-Container-Write': self.env.conn.user_acl })) file_item.copy_account(acct, dest_cont, "copied-abcde-manifest-only", parms={'multipart-manifest': 'get'}) copied = dest_cont.file("copied-abcde-manifest-only") copied_contents = copied.read(parms={'multipart-manifest': 'get'}) try: json.loads(copied_contents) except ValueError: self.fail("COPY didn't copy the manifest (invalid json on GET)") def _make_manifest(self): file_item = self.env.container.file("manifest-post") seg_info = self.env.seg_info file_item.write( json.dumps([seg_info['seg_a'], seg_info['seg_b'], seg_info['seg_c'], seg_info['seg_d'], seg_info['seg_e']]), parms={'multipart-manifest': 'put'}) return file_item def test_slo_post_the_manifest_metadata_update(self): file_item = self._make_manifest() # sanity check, check the object is an SLO manifest file_item.info() file_item.header_fields([('slo', 'x-static-large-object')]) # POST a user metadata (i.e. x-object-meta-post) file_item.sync_metadata({'post': 'update'}) updated = self.env.container.file("manifest-post") updated.info() updated.header_fields([('user-meta', 'x-object-meta-post')]) # sanity updated.header_fields([('slo', 'x-static-large-object')]) updated_contents = updated.read(parms={'multipart-manifest': 'get'}) try: json.loads(updated_contents) except ValueError: self.fail("Unexpected content on GET, expected a json body") def test_slo_post_the_manifest_metadata_update_with_qs(self): # multipart-manifest query should be ignored on post for verb in ('put', 'get', 'delete'): file_item = self._make_manifest() # sanity check, check the object is an SLO manifest file_item.info() file_item.header_fields([('slo', 'x-static-large-object')]) # POST a user metadata (i.e. 
x-object-meta-post) file_item.sync_metadata(metadata={'post': 'update'}, parms={'multipart-manifest': verb}) updated = self.env.container.file("manifest-post") updated.info() updated.header_fields( [('user-meta', 'x-object-meta-post')]) # sanity updated.header_fields([('slo', 'x-static-large-object')]) updated_contents = updated.read( parms={'multipart-manifest': 'get'}) try: json.loads(updated_contents) except ValueError: self.fail( "Unexpected content on GET, expected a json body") def test_slo_get_the_manifest(self): manifest = self.env.container.file("manifest-abcde") got_body = manifest.read(parms={'multipart-manifest': 'get'}) self.assertEqual('application/json; charset=utf-8', manifest.content_type) try: json.loads(got_body) except ValueError: self.fail("GET with multipart-manifest=get got invalid json") def test_slo_get_the_manifest_with_details_from_server(self): manifest = self.env.container.file("manifest-db") got_body = manifest.read(parms={'multipart-manifest': 'get'}) self.assertEqual('application/json; charset=utf-8', manifest.content_type) try: value = json.loads(got_body) except ValueError: self.fail("GET with multipart-manifest=get got invalid json") self.assertEqual(len(value), 2) self.assertEqual(value[0]['bytes'], 1024 * 1024) self.assertEqual(value[0]['hash'], hashlib.md5('d' * 1024 * 1024).hexdigest()) self.assertEqual(value[0]['name'], '/%s/seg_d' % self.env.container.name.decode("utf-8")) self.assertEqual(value[1]['bytes'], 1024 * 1024) self.assertEqual(value[1]['hash'], hashlib.md5('b' * 1024 * 1024).hexdigest()) self.assertEqual(value[1]['name'], '/%s/seg_b' % self.env.container.name.decode("utf-8")) def test_slo_get_raw_the_manifest_with_details_from_server(self): manifest = self.env.container.file("manifest-db") got_body = manifest.read(parms={'multipart-manifest': 'get', 'format': 'raw'}) self.assertEqual('application/json; charset=utf-8', manifest.content_type) try: value = json.loads(got_body) except ValueError: msg = "GET with multipart-manifest=get&format=raw got invalid json" self.fail(msg) self.assertEqual( set(value[0].keys()), set(('size_bytes', 'etag', 'path'))) self.assertEqual(len(value), 2) self.assertEqual(value[0]['size_bytes'], 1024 * 1024) self.assertEqual(value[0]['etag'], hashlib.md5('d' * 1024 * 1024).hexdigest()) self.assertEqual(value[0]['path'], '/%s/seg_d' % self.env.container.name.decode("utf-8")) self.assertEqual(value[1]['size_bytes'], 1024 * 1024) self.assertEqual(value[1]['etag'], hashlib.md5('b' * 1024 * 1024).hexdigest()) self.assertEqual(value[1]['path'], '/%s/seg_b' % self.env.container.name.decode("utf-8")) file_item = self.env.container.file("manifest-from-get-raw") file_item.write(got_body, parms={'multipart-manifest': 'put'}) file_contents = file_item.read() self.assertEqual(2 * 1024 * 1024, len(file_contents)) def test_slo_head_the_manifest(self): manifest = self.env.container.file("manifest-abcde") got_info = manifest.info(parms={'multipart-manifest': 'get'}) self.assertEqual('application/json; charset=utf-8', got_info['content_type']) def test_slo_if_match_get(self): manifest = self.env.container.file("manifest-abcde") etag = manifest.info()['etag'] self.assertRaises(ResponseError, manifest.read, hdrs={'If-Match': 'not-%s' % etag}) self.assert_status(412) manifest.read(hdrs={'If-Match': etag}) self.assert_status(200) def test_slo_if_none_match_get(self): manifest = self.env.container.file("manifest-abcde") etag = manifest.info()['etag'] self.assertRaises(ResponseError, manifest.read, hdrs={'If-None-Match': etag}) 
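# For an SLO, If-Match/If-None-Match are evaluated against the manifest's own
# etag (the MD5-of-segment-MD5s reported by info()), not against any single
# segment's etag, so a GET with If-None-Match equal to that etag is expected
# to come back 304, as asserted just below.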
self.assert_status(304) manifest.read(hdrs={'If-None-Match': "not-%s" % etag}) self.assert_status(200) def test_slo_if_match_head(self): manifest = self.env.container.file("manifest-abcde") etag = manifest.info()['etag'] self.assertRaises(ResponseError, manifest.info, hdrs={'If-Match': 'not-%s' % etag}) self.assert_status(412) manifest.info(hdrs={'If-Match': etag}) self.assert_status(200) def test_slo_if_none_match_head(self): manifest = self.env.container.file("manifest-abcde") etag = manifest.info()['etag'] self.assertRaises(ResponseError, manifest.info, hdrs={'If-None-Match': etag}) self.assert_status(304) manifest.info(hdrs={'If-None-Match': "not-%s" % etag}) self.assert_status(200) def test_slo_referer_on_segment_container(self): # First the account2 (test3) should fail headers = {'X-Auth-Token': self.env.conn3.storage_token, 'Referer': 'http://blah.example.com'} slo_file = self.env.container2.file('manifest-abcde') self.assertRaises(ResponseError, slo_file.read, hdrs=headers) self.assert_status(403) # Now set the referer on the slo container only referer_metadata = {'X-Container-Read': '.r:*.example.com,.rlistings'} self.env.container2.update_metadata(referer_metadata) self.assertRaises(ResponseError, slo_file.read, hdrs=headers) self.assert_status(409) # Finally set the referer on the segment container self.env.container.update_metadata(referer_metadata) contents = slo_file.read(hdrs=headers) self.assertEqual(4 * 1024 * 1024 + 1, len(contents)) self.assertEqual('a', contents[0]) self.assertEqual('a', contents[1024 * 1024 - 1]) self.assertEqual('b', contents[1024 * 1024]) self.assertEqual('d', contents[-2]) self.assertEqual('e', contents[-1]) class TestSloUTF8(Base2, TestSlo): set_up = False class TestObjectVersioningEnv(object): versioning_enabled = None # tri-state: None initially, then True/False @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.storage_url, cls.storage_token = cls.conn.authenticate() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) # Second connection for ACL tests config2 = deepcopy(tf.config) config2['account'] = tf.config['account2'] config2['username'] = tf.config['username2'] config2['password'] = tf.config['password2'] cls.conn2 = Connection(config2) cls.conn2.authenticate() # avoid getting a prefix that stops halfway through an encoded # character prefix = Utils.create_name().decode("utf-8")[:10].encode("utf-8") cls.versions_container = cls.account.container(prefix + "-versions") if not cls.versions_container.create(): raise ResponseError(cls.conn.response) cls.container = cls.account.container(prefix + "-objs") if not cls.container.create( hdrs={'X-Versions-Location': cls.versions_container.name}): raise ResponseError(cls.conn.response) container_info = cls.container.info() # if versioning is off, then X-Versions-Location won't persist cls.versioning_enabled = 'versions' in container_info # setup another account to test ACLs config2 = deepcopy(tf.config) config2['account'] = tf.config['account2'] config2['username'] = tf.config['username2'] config2['password'] = tf.config['password2'] cls.conn2 = Connection(config2) cls.storage_url2, cls.storage_token2 = cls.conn2.authenticate() cls.account2 = cls.conn2.get_account() cls.account2.delete_containers() # setup another account with no access to anything to test ACLs config3 = deepcopy(tf.config) config3['account'] = tf.config['account'] config3['username'] = tf.config['username3'] config3['password'] = tf.config['password3'] cls.conn3 = Connection(config3) 
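# (config3 reuses the primary test account but authenticates as the third,
# non-admin user from test.conf; that user is granted no ACLs in these tests,
# which is what the access-control assertions below depend on.)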
cls.storage_url3, cls.storage_token3 = cls.conn3.authenticate() cls.account3 = cls.conn3.get_account() @classmethod def tearDown(cls): cls.account.delete_containers() cls.account2.delete_containers() class TestCrossPolicyObjectVersioningEnv(object): # tri-state: None initially, then True/False versioning_enabled = None multiple_policies_enabled = None policies = None @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() if cls.multiple_policies_enabled is None: try: cls.policies = tf.FunctionalStoragePolicyCollection.from_info() except AssertionError: pass if cls.policies and len(cls.policies) > 1: cls.multiple_policies_enabled = True else: cls.multiple_policies_enabled = False cls.versioning_enabled = False return if cls.versioning_enabled is None: cls.versioning_enabled = 'versioned_writes' in cluster_info if not cls.versioning_enabled: return policy = cls.policies.select() version_policy = cls.policies.exclude(name=policy['name']).select() cls.account = Account(cls.conn, tf.config.get('account', tf.config['username'])) # Second connection for ACL tests config2 = deepcopy(tf.config) config2['account'] = tf.config['account2'] config2['username'] = tf.config['username2'] config2['password'] = tf.config['password2'] cls.conn2 = Connection(config2) cls.conn2.authenticate() # avoid getting a prefix that stops halfway through an encoded # character prefix = Utils.create_name().decode("utf-8")[:10].encode("utf-8") cls.versions_container = cls.account.container(prefix + "-versions") if not cls.versions_container.create( {'X-Storage-Policy': policy['name']}): raise ResponseError(cls.conn.response) cls.container = cls.account.container(prefix + "-objs") if not cls.container.create( hdrs={'X-Versions-Location': cls.versions_container.name, 'X-Storage-Policy': version_policy['name']}): raise ResponseError(cls.conn.response) container_info = cls.container.info() # if versioning is off, then X-Versions-Location won't persist cls.versioning_enabled = 'versions' in container_info # setup another account to test ACLs config2 = deepcopy(tf.config) config2['account'] = tf.config['account2'] config2['username'] = tf.config['username2'] config2['password'] = tf.config['password2'] cls.conn2 = Connection(config2) cls.storage_url2, cls.storage_token2 = cls.conn2.authenticate() cls.account2 = cls.conn2.get_account() cls.account2.delete_containers() # setup another account with no access to anything to test ACLs config3 = deepcopy(tf.config) config3['account'] = tf.config['account'] config3['username'] = tf.config['username3'] config3['password'] = tf.config['password3'] cls.conn3 = Connection(config3) cls.storage_url3, cls.storage_token3 = cls.conn3.authenticate() cls.account3 = cls.conn3.get_account() class TestObjectVersioning(Base): env = TestObjectVersioningEnv set_up = False def setUp(self): super(TestObjectVersioning, self).setUp() if self.env.versioning_enabled is False: raise SkipTest("Object versioning not enabled") elif self.env.versioning_enabled is not True: # just some sanity checking raise Exception( "Expected versioning_enabled to be True/False, got %r" % (self.env.versioning_enabled,)) def _tear_down_files(self): try: # only delete files and not containers # as they were configured in self.env self.env.versions_container.delete_files() self.env.container.delete_files() except ResponseError: pass def tearDown(self): super(TestObjectVersioning, self).tearDown() self._tear_down_files() def test_clear_version_option(self): # sanity 
self.assertEqual(self.env.container.info()['versions'], self.env.versions_container.name) self.env.container.update_metadata( hdrs={'X-Versions-Location': ''}) self.assertEqual(self.env.container.info().get('versions'), None) # set location back to the way it was self.env.container.update_metadata( hdrs={'X-Versions-Location': self.env.versions_container.name}) self.assertEqual(self.env.container.info()['versions'], self.env.versions_container.name) def test_overwriting(self): container = self.env.container versions_container = self.env.versions_container cont_info = container.info() self.assertEqual(cont_info['versions'], versions_container.name) obj_name = Utils.create_name() versioned_obj = container.file(obj_name) versioned_obj.write("aaaaa", hdrs={'Content-Type': 'text/jibberish01'}) obj_info = versioned_obj.info() self.assertEqual('text/jibberish01', obj_info['content_type']) self.assertEqual(0, versions_container.info()['object_count']) versioned_obj.write("bbbbb", hdrs={'Content-Type': 'text/jibberish02', 'X-Object-Meta-Foo': 'Bar'}) versioned_obj.initialize() self.assertEqual(versioned_obj.content_type, 'text/jibberish02') self.assertEqual(versioned_obj.metadata['foo'], 'Bar') # the old version got saved off self.assertEqual(1, versions_container.info()['object_count']) versioned_obj_name = versions_container.files()[0] prev_version = versions_container.file(versioned_obj_name) prev_version.initialize() self.assertEqual("aaaaa", prev_version.read()) self.assertEqual(prev_version.content_type, 'text/jibberish01') # make sure the new obj metadata did not leak to the prev. version self.assertTrue('foo' not in prev_version.metadata) # check that POST does not create a new version versioned_obj.sync_metadata(metadata={'fu': 'baz'}) self.assertEqual(1, versions_container.info()['object_count']) # if we overwrite it again, there are two versions versioned_obj.write("ccccc") self.assertEqual(2, versions_container.info()['object_count']) versioned_obj_name = versions_container.files()[1] prev_version = versions_container.file(versioned_obj_name) prev_version.initialize() self.assertEqual("bbbbb", prev_version.read()) self.assertEqual(prev_version.content_type, 'text/jibberish02') self.assertTrue('foo' in prev_version.metadata) self.assertTrue('fu' in prev_version.metadata) # as we delete things, the old contents return self.assertEqual("ccccc", versioned_obj.read()) # test copy from a different container src_container = self.env.account.container(Utils.create_name()) self.assertTrue(src_container.create()) src_name = Utils.create_name() src_obj = src_container.file(src_name) src_obj.write("ddddd", hdrs={'Content-Type': 'text/jibberish04'}) src_obj.copy(container.name, obj_name) self.assertEqual("ddddd", versioned_obj.read()) versioned_obj.initialize() self.assertEqual(versioned_obj.content_type, 'text/jibberish04') # make sure versions container has the previous version self.assertEqual(3, versions_container.info()['object_count']) versioned_obj_name = versions_container.files()[2] prev_version = versions_container.file(versioned_obj_name) prev_version.initialize() self.assertEqual("ccccc", prev_version.read()) # test delete versioned_obj.delete() self.assertEqual("ccccc", versioned_obj.read()) versioned_obj.delete() self.assertEqual("bbbbb", versioned_obj.read()) versioned_obj.delete() self.assertEqual("aaaaa", versioned_obj.read()) self.assertEqual(0, versions_container.info()['object_count']) versioned_obj.delete() self.assertRaises(ResponseError, versioned_obj.read) def 
test_versioning_dlo(self): container = self.env.container versions_container = self.env.versions_container obj_name = Utils.create_name() for i in ('1', '2', '3'): time.sleep(.01) # guarantee that the timestamp changes obj_name_seg = obj_name + '/' + i versioned_obj = container.file(obj_name_seg) versioned_obj.write(i) versioned_obj.write(i + i) self.assertEqual(3, versions_container.info()['object_count']) man_file = container.file(obj_name) man_file.write('', hdrs={"X-Object-Manifest": "%s/%s/" % (self.env.container.name, obj_name)}) # guarantee that the timestamp changes time.sleep(.01) # write manifest file again man_file.write('', hdrs={"X-Object-Manifest": "%s/%s/" % (self.env.container.name, obj_name)}) self.assertEqual(3, versions_container.info()['object_count']) self.assertEqual("112233", man_file.read()) def test_versioning_container_acl(self): # create versions container and DO NOT give write access to account2 versions_container = self.env.account.container(Utils.create_name()) self.assertTrue(versions_container.create(hdrs={ 'X-Container-Write': '' })) # check account2 cannot write to versions container fail_obj_name = Utils.create_name() fail_obj = versions_container.file(fail_obj_name) self.assertRaises(ResponseError, fail_obj.write, "should fail", cfg={'use_token': self.env.storage_token2}) # create container and give write access to account2 # don't set X-Versions-Location just yet container = self.env.account.container(Utils.create_name()) self.assertTrue(container.create(hdrs={ 'X-Container-Write': self.env.conn2.user_acl})) # check account2 cannot set X-Versions-Location on container self.assertRaises(ResponseError, container.update_metadata, hdrs={ 'X-Versions-Location': versions_container}, cfg={'use_token': self.env.storage_token2}) # good! now let admin set the X-Versions-Location # p.s.: sticking a 'x-remove' header here to test precedence # of both headers. Setting the location should succeed. 
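# (The assertions that follow expect versioning to be in effect afterwards,
# i.e. X-Versions-Location is expected to win when both it and
# X-Remove-Versions-Location are sent on the same request.)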
self.assertTrue(container.update_metadata(hdrs={ 'X-Remove-Versions-Location': versions_container, 'X-Versions-Location': versions_container})) # write object twice to container and check version obj_name = Utils.create_name() versioned_obj = container.file(obj_name) self.assertTrue(versioned_obj.write("never argue with the data", cfg={'use_token': self.env.storage_token2})) self.assertEqual(versioned_obj.read(), "never argue with the data") self.assertTrue( versioned_obj.write("we don't have no beer, just tequila", cfg={'use_token': self.env.storage_token2})) self.assertEqual(versioned_obj.read(), "we don't have no beer, just tequila") self.assertEqual(1, versions_container.info()['object_count']) # read the original uploaded object for filename in versions_container.files(): backup_file = versions_container.file(filename) break self.assertEqual(backup_file.read(), "never argue with the data") # user3 (some random user with no access to anything) # tries to read from versioned container self.assertRaises(ResponseError, backup_file.read, cfg={'use_token': self.env.storage_token3}) # user3 cannot write or delete from source container either self.assertRaises(ResponseError, versioned_obj.write, "some random user trying to write data", cfg={'use_token': self.env.storage_token3}) self.assertRaises(ResponseError, versioned_obj.delete, cfg={'use_token': self.env.storage_token3}) # user2 can't read or delete from versions-location self.assertRaises(ResponseError, backup_file.read, cfg={'use_token': self.env.storage_token2}) self.assertRaises(ResponseError, backup_file.delete, cfg={'use_token': self.env.storage_token2}) # but is able to delete from the source container # this could be a helpful scenario for dev ops that want to setup # just one container to hold object versions of multiple containers # and each one of those containers are owned by different users self.assertTrue(versioned_obj.delete( cfg={'use_token': self.env.storage_token2})) # tear-down since we create these containers here # and not in self.env versions_container.delete_recursive() container.delete_recursive() def test_versioning_check_acl(self): container = self.env.container versions_container = self.env.versions_container versions_container.create(hdrs={'X-Container-Read': '.r:*,.rlistings'}) obj_name = Utils.create_name() versioned_obj = container.file(obj_name) versioned_obj.write("aaaaa") self.assertEqual("aaaaa", versioned_obj.read()) versioned_obj.write("bbbbb") self.assertEqual("bbbbb", versioned_obj.read()) # Use token from second account and try to delete the object org_token = self.env.account.conn.storage_token self.env.account.conn.storage_token = self.env.conn2.storage_token try: self.assertRaises(ResponseError, versioned_obj.delete) finally: self.env.account.conn.storage_token = org_token # Verify with token from first account self.assertEqual("bbbbb", versioned_obj.read()) versioned_obj.delete() self.assertEqual("aaaaa", versioned_obj.read()) class TestObjectVersioningUTF8(Base2, TestObjectVersioning): set_up = False def tearDown(self): self._tear_down_files() super(TestObjectVersioningUTF8, self).tearDown() class TestCrossPolicyObjectVersioning(TestObjectVersioning): env = TestCrossPolicyObjectVersioningEnv set_up = False def setUp(self): super(TestCrossPolicyObjectVersioning, self).setUp() if self.env.multiple_policies_enabled is False: raise SkipTest('Cross policy test requires multiple policies') elif self.env.multiple_policies_enabled is not True: # just some sanity checking raise Exception("Expected 
multiple_policies_enabled " "to be True/False, got %r" % ( self.env.versioning_enabled,)) class TestTempurlEnv(object): tempurl_enabled = None # tri-state: None initially, then True/False @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() if cls.tempurl_enabled is None: cls.tempurl_enabled = 'tempurl' in cluster_info if not cls.tempurl_enabled: return cls.tempurl_key = Utils.create_name() cls.tempurl_key2 = Utils.create_name() cls.account = Account( cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.account.update_metadata({ 'temp-url-key': cls.tempurl_key, 'temp-url-key-2': cls.tempurl_key2 }) cls.container = cls.account.container(Utils.create_name()) if not cls.container.create(): raise ResponseError(cls.conn.response) cls.obj = cls.container.file(Utils.create_name()) cls.obj.write("obj contents") cls.other_obj = cls.container.file(Utils.create_name()) cls.other_obj.write("other obj contents") class TestTempurl(Base): env = TestTempurlEnv set_up = False def setUp(self): super(TestTempurl, self).setUp() if self.env.tempurl_enabled is False: raise SkipTest("TempURL not enabled") elif self.env.tempurl_enabled is not True: # just some sanity checking raise Exception( "Expected tempurl_enabled to be True/False, got %r" % (self.env.tempurl_enabled,)) expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'GET', expires, self.env.conn.make_path(self.env.obj.path), self.env.tempurl_key) self.obj_tempurl_parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} def tempurl_sig(self, method, expires, path, key): return hmac.new( key, '%s\n%s\n%s' % (method, expires, urllib.parse.unquote(path)), hashlib.sha1).hexdigest() def test_GET(self): contents = self.env.obj.read( parms=self.obj_tempurl_parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "obj contents") # GET tempurls also allow HEAD requests self.assertTrue(self.env.obj.info(parms=self.obj_tempurl_parms, cfg={'no_auth_token': True})) def test_GET_with_key_2(self): expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'GET', expires, self.env.conn.make_path(self.env.obj.path), self.env.tempurl_key2) parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} contents = self.env.obj.read(parms=parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "obj contents") def test_GET_DLO_inside_container(self): seg1 = self.env.container.file( "get-dlo-inside-seg1" + Utils.create_name()) seg2 = self.env.container.file( "get-dlo-inside-seg2" + Utils.create_name()) seg1.write("one fish two fish ") seg2.write("red fish blue fish") manifest = self.env.container.file("manifest" + Utils.create_name()) manifest.write( '', hdrs={"X-Object-Manifest": "%s/get-dlo-inside-seg" % (self.env.container.name,)}) expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'GET', expires, self.env.conn.make_path(manifest.path), self.env.tempurl_key) parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} contents = manifest.read(parms=parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "one fish two fish red fish blue fish") def test_GET_DLO_outside_container(self): seg1 = self.env.container.file( "get-dlo-outside-seg1" + Utils.create_name()) seg2 = self.env.container.file( "get-dlo-outside-seg2" + Utils.create_name()) seg1.write("one fish two fish ") seg2.write("red fish blue fish") container2 = self.env.account.container(Utils.create_name()) container2.create() manifest = container2.file("manifest" + Utils.create_name()) manifest.write( 
'', hdrs={"X-Object-Manifest": "%s/get-dlo-outside-seg" % (self.env.container.name,)}) expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'GET', expires, self.env.conn.make_path(manifest.path), self.env.tempurl_key) parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} # cross container tempurl works fine for account tempurl key contents = manifest.read(parms=parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "one fish two fish red fish blue fish") self.assert_status([200]) def test_PUT(self): new_obj = self.env.container.file(Utils.create_name()) expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'PUT', expires, self.env.conn.make_path(new_obj.path), self.env.tempurl_key) put_parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} new_obj.write('new obj contents', parms=put_parms, cfg={'no_auth_token': True}) self.assertEqual(new_obj.read(), "new obj contents") # PUT tempurls also allow HEAD requests self.assertTrue(new_obj.info(parms=put_parms, cfg={'no_auth_token': True})) def test_PUT_manifest_access(self): new_obj = self.env.container.file(Utils.create_name()) # give out a signature which allows a PUT to new_obj expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'PUT', expires, self.env.conn.make_path(new_obj.path), self.env.tempurl_key) put_parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} # try to create manifest pointing to some random container try: new_obj.write('', { 'x-object-manifest': '%s/foo' % 'some_random_container' }, parms=put_parms, cfg={'no_auth_token': True}) except ResponseError as e: self.assertEqual(e.status, 400) else: self.fail('request did not error') # create some other container other_container = self.env.account.container(Utils.create_name()) if not other_container.create(): raise ResponseError(self.conn.response) # try to create manifest pointing to new container try: new_obj.write('', { 'x-object-manifest': '%s/foo' % other_container }, parms=put_parms, cfg={'no_auth_token': True}) except ResponseError as e: self.assertEqual(e.status, 400) else: self.fail('request did not error') # try again using a tempurl POST to an already created object new_obj.write('', {}, parms=put_parms, cfg={'no_auth_token': True}) expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'POST', expires, self.env.conn.make_path(new_obj.path), self.env.tempurl_key) post_parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} try: new_obj.post({'x-object-manifest': '%s/foo' % other_container}, parms=post_parms, cfg={'no_auth_token': True}) except ResponseError as e: self.assertEqual(e.status, 400) else: self.fail('request did not error') def test_HEAD(self): expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'HEAD', expires, self.env.conn.make_path(self.env.obj.path), self.env.tempurl_key) head_parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} self.assertTrue(self.env.obj.info(parms=head_parms, cfg={'no_auth_token': True})) # HEAD tempurls don't allow PUT or GET requests, despite the fact that # PUT and GET tempurls both allow HEAD requests self.assertRaises(ResponseError, self.env.other_obj.read, cfg={'no_auth_token': True}, parms=self.obj_tempurl_parms) self.assert_status([401]) self.assertRaises(ResponseError, self.env.other_obj.write, 'new contents', cfg={'no_auth_token': True}, parms=self.obj_tempurl_parms) self.assert_status([401]) def test_different_object(self): contents = self.env.obj.read( parms=self.obj_tempurl_parms, cfg={'no_auth_token': True}) self.assertEqual(contents, 
"obj contents") self.assertRaises(ResponseError, self.env.other_obj.read, cfg={'no_auth_token': True}, parms=self.obj_tempurl_parms) self.assert_status([401]) def test_changing_sig(self): contents = self.env.obj.read( parms=self.obj_tempurl_parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "obj contents") parms = self.obj_tempurl_parms.copy() if parms['temp_url_sig'][0] == 'a': parms['temp_url_sig'] = 'b' + parms['temp_url_sig'][1:] else: parms['temp_url_sig'] = 'a' + parms['temp_url_sig'][1:] self.assertRaises(ResponseError, self.env.obj.read, cfg={'no_auth_token': True}, parms=parms) self.assert_status([401]) def test_changing_expires(self): contents = self.env.obj.read( parms=self.obj_tempurl_parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "obj contents") parms = self.obj_tempurl_parms.copy() if parms['temp_url_expires'][-1] == '0': parms['temp_url_expires'] = parms['temp_url_expires'][:-1] + '1' else: parms['temp_url_expires'] = parms['temp_url_expires'][:-1] + '0' self.assertRaises(ResponseError, self.env.obj.read, cfg={'no_auth_token': True}, parms=parms) self.assert_status([401]) class TestTempurlUTF8(Base2, TestTempurl): set_up = False class TestContainerTempurlEnv(object): tempurl_enabled = None # tri-state: None initially, then True/False @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() if cls.tempurl_enabled is None: cls.tempurl_enabled = 'tempurl' in cluster_info if not cls.tempurl_enabled: return cls.tempurl_key = Utils.create_name() cls.tempurl_key2 = Utils.create_name() cls.account = Account( cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() # creating another account and connection # for ACL tests config2 = deepcopy(tf.config) config2['account'] = tf.config['account2'] config2['username'] = tf.config['username2'] config2['password'] = tf.config['password2'] cls.conn2 = Connection(config2) cls.conn2.authenticate() cls.account2 = Account( cls.conn2, config2.get('account', config2['username'])) cls.account2 = cls.conn2.get_account() cls.container = cls.account.container(Utils.create_name()) if not cls.container.create({ 'x-container-meta-temp-url-key': cls.tempurl_key, 'x-container-meta-temp-url-key-2': cls.tempurl_key2, 'x-container-read': cls.account2.name}): raise ResponseError(cls.conn.response) cls.obj = cls.container.file(Utils.create_name()) cls.obj.write("obj contents") cls.other_obj = cls.container.file(Utils.create_name()) cls.other_obj.write("other obj contents") class TestContainerTempurl(Base): env = TestContainerTempurlEnv set_up = False def setUp(self): super(TestContainerTempurl, self).setUp() if self.env.tempurl_enabled is False: raise SkipTest("TempURL not enabled") elif self.env.tempurl_enabled is not True: # just some sanity checking raise Exception( "Expected tempurl_enabled to be True/False, got %r" % (self.env.tempurl_enabled,)) expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'GET', expires, self.env.conn.make_path(self.env.obj.path), self.env.tempurl_key) self.obj_tempurl_parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} def tempurl_sig(self, method, expires, path, key): return hmac.new( key, '%s\n%s\n%s' % (method, expires, urllib.parse.unquote(path)), hashlib.sha1).hexdigest() def test_GET(self): contents = self.env.obj.read( parms=self.obj_tempurl_parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "obj contents") # GET tempurls also allow HEAD requests 
self.assertTrue(self.env.obj.info(parms=self.obj_tempurl_parms, cfg={'no_auth_token': True})) def test_GET_with_key_2(self): expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'GET', expires, self.env.conn.make_path(self.env.obj.path), self.env.tempurl_key2) parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} contents = self.env.obj.read(parms=parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "obj contents") def test_PUT(self): new_obj = self.env.container.file(Utils.create_name()) expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'PUT', expires, self.env.conn.make_path(new_obj.path), self.env.tempurl_key) put_parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} new_obj.write('new obj contents', parms=put_parms, cfg={'no_auth_token': True}) self.assertEqual(new_obj.read(), "new obj contents") # PUT tempurls also allow HEAD requests self.assertTrue(new_obj.info(parms=put_parms, cfg={'no_auth_token': True})) def test_HEAD(self): expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'HEAD', expires, self.env.conn.make_path(self.env.obj.path), self.env.tempurl_key) head_parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} self.assertTrue(self.env.obj.info(parms=head_parms, cfg={'no_auth_token': True})) # HEAD tempurls don't allow PUT or GET requests, despite the fact that # PUT and GET tempurls both allow HEAD requests self.assertRaises(ResponseError, self.env.other_obj.read, cfg={'no_auth_token': True}, parms=self.obj_tempurl_parms) self.assert_status([401]) self.assertRaises(ResponseError, self.env.other_obj.write, 'new contents', cfg={'no_auth_token': True}, parms=self.obj_tempurl_parms) self.assert_status([401]) def test_different_object(self): contents = self.env.obj.read( parms=self.obj_tempurl_parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "obj contents") self.assertRaises(ResponseError, self.env.other_obj.read, cfg={'no_auth_token': True}, parms=self.obj_tempurl_parms) self.assert_status([401]) def test_changing_sig(self): contents = self.env.obj.read( parms=self.obj_tempurl_parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "obj contents") parms = self.obj_tempurl_parms.copy() if parms['temp_url_sig'][0] == 'a': parms['temp_url_sig'] = 'b' + parms['temp_url_sig'][1:] else: parms['temp_url_sig'] = 'a' + parms['temp_url_sig'][1:] self.assertRaises(ResponseError, self.env.obj.read, cfg={'no_auth_token': True}, parms=parms) self.assert_status([401]) def test_changing_expires(self): contents = self.env.obj.read( parms=self.obj_tempurl_parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "obj contents") parms = self.obj_tempurl_parms.copy() if parms['temp_url_expires'][-1] == '0': parms['temp_url_expires'] = parms['temp_url_expires'][:-1] + '1' else: parms['temp_url_expires'] = parms['temp_url_expires'][:-1] + '0' self.assertRaises(ResponseError, self.env.obj.read, cfg={'no_auth_token': True}, parms=parms) self.assert_status([401]) @requires_acls def test_tempurl_keys_visible_to_account_owner(self): if not tf.cluster_info.get('tempauth'): raise SkipTest('TEMP AUTH SPECIFIC TEST') metadata = self.env.container.info() self.assertEqual(metadata.get('tempurl_key'), self.env.tempurl_key) self.assertEqual(metadata.get('tempurl_key2'), self.env.tempurl_key2) @requires_acls def test_tempurl_keys_hidden_from_acl_readonly(self): if not tf.cluster_info.get('tempauth'): raise SkipTest('TEMP AUTH SPECIFIC TEST') original_token = self.env.container.conn.storage_token 
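# Aside: a small standalone sketch (not invoked by the tests) of how the
# signatures produced by tempurl_sig() above become a complete temp URL query
# string. The path and key below are made-up example values; the HMAC-SHA1
# over "METHOD\nexpires\npath" scheme is the same one these tests compute.
import hmac
import time
from hashlib import sha1

def example_temp_url(path, key, method='GET', ttl=86400):
    expires = int(time.time()) + ttl
    sig = hmac.new(key, '%s\n%s\n%s' % (method, expires, path),
                   sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%d' % (path, sig, expires)

# example_temp_url('/v1/AUTH_test/container/object', 'container-tempurl-key')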
self.env.container.conn.storage_token = self.env.conn2.storage_token metadata = self.env.container.info() self.env.container.conn.storage_token = original_token self.assertNotIn( 'tempurl_key', metadata, 'Container TempURL key found, should not be visible ' 'to readonly ACLs') self.assertNotIn( 'tempurl_key2', metadata, 'Container TempURL key-2 found, should not be visible ' 'to readonly ACLs') def test_GET_DLO_inside_container(self): seg1 = self.env.container.file( "get-dlo-inside-seg1" + Utils.create_name()) seg2 = self.env.container.file( "get-dlo-inside-seg2" + Utils.create_name()) seg1.write("one fish two fish ") seg2.write("red fish blue fish") manifest = self.env.container.file("manifest" + Utils.create_name()) manifest.write( '', hdrs={"X-Object-Manifest": "%s/get-dlo-inside-seg" % (self.env.container.name,)}) expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'GET', expires, self.env.conn.make_path(manifest.path), self.env.tempurl_key) parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} contents = manifest.read(parms=parms, cfg={'no_auth_token': True}) self.assertEqual(contents, "one fish two fish red fish blue fish") def test_GET_DLO_outside_container(self): container2 = self.env.account.container(Utils.create_name()) container2.create() seg1 = container2.file( "get-dlo-outside-seg1" + Utils.create_name()) seg2 = container2.file( "get-dlo-outside-seg2" + Utils.create_name()) seg1.write("one fish two fish ") seg2.write("red fish blue fish") manifest = self.env.container.file("manifest" + Utils.create_name()) manifest.write( '', hdrs={"X-Object-Manifest": "%s/get-dlo-outside-seg" % (container2.name,)}) expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'GET', expires, self.env.conn.make_path(manifest.path), self.env.tempurl_key) parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} # cross container tempurl does not work for container tempurl key try: manifest.read(parms=parms, cfg={'no_auth_token': True}) except ResponseError as e: self.assertEqual(e.status, 401) else: self.fail('request did not error') try: manifest.info(parms=parms, cfg={'no_auth_token': True}) except ResponseError as e: self.assertEqual(e.status, 401) else: self.fail('request did not error') class TestContainerTempurlUTF8(Base2, TestContainerTempurl): set_up = False class TestSloTempurlEnv(object): enabled = None # tri-state: None initially, then True/False @classmethod def setUp(cls): cls.conn = Connection(tf.config) cls.conn.authenticate() if cls.enabled is None: cls.enabled = 'tempurl' in cluster_info and 'slo' in cluster_info cls.tempurl_key = Utils.create_name() cls.account = Account( cls.conn, tf.config.get('account', tf.config['username'])) cls.account.delete_containers() cls.account.update_metadata({'temp-url-key': cls.tempurl_key}) cls.manifest_container = cls.account.container(Utils.create_name()) cls.segments_container = cls.account.container(Utils.create_name()) if not cls.manifest_container.create(): raise ResponseError(cls.conn.response) if not cls.segments_container.create(): raise ResponseError(cls.conn.response) seg1 = cls.segments_container.file(Utils.create_name()) seg1.write('1' * 1024 * 1024) seg2 = cls.segments_container.file(Utils.create_name()) seg2.write('2' * 1024 * 1024) cls.manifest_data = [{'size_bytes': 1024 * 1024, 'etag': seg1.md5, 'path': '/%s/%s' % (cls.segments_container.name, seg1.name)}, {'size_bytes': 1024 * 1024, 'etag': seg2.md5, 'path': '/%s/%s' % (cls.segments_container.name, seg2.name)}] cls.manifest = 
cls.manifest_container.file(Utils.create_name()) cls.manifest.write( json.dumps(cls.manifest_data), parms={'multipart-manifest': 'put'}) class TestSloTempurl(Base): env = TestSloTempurlEnv set_up = False def setUp(self): super(TestSloTempurl, self).setUp() if self.env.enabled is False: raise SkipTest("TempURL and SLO not both enabled") elif self.env.enabled is not True: # just some sanity checking raise Exception( "Expected enabled to be True/False, got %r" % (self.env.enabled,)) def tempurl_sig(self, method, expires, path, key): return hmac.new( key, '%s\n%s\n%s' % (method, expires, urllib.parse.unquote(path)), hashlib.sha1).hexdigest() def test_GET(self): expires = int(time.time()) + 86400 sig = self.tempurl_sig( 'GET', expires, self.env.conn.make_path(self.env.manifest.path), self.env.tempurl_key) parms = {'temp_url_sig': sig, 'temp_url_expires': str(expires)} contents = self.env.manifest.read( parms=parms, cfg={'no_auth_token': True}) self.assertEqual(len(contents), 2 * 1024 * 1024) # GET tempurls also allow HEAD requests self.assertTrue(self.env.manifest.info( parms=parms, cfg={'no_auth_token': True})) class TestSloTempurlUTF8(Base2, TestSloTempurl): set_up = False class TestServiceToken(unittest2.TestCase): def setUp(self): if tf.skip_service_tokens: raise SkipTest self.SET_TO_USERS_TOKEN = 1 self.SET_TO_SERVICE_TOKEN = 2 # keystoneauth and tempauth differ in allowing PUT account # Even if keystoneauth allows it, the proxy-server uses # allow_account_management to decide if accounts can be created self.put_account_expect = is_client_error if tf.swift_test_auth_version != '1': if cluster_info.get('swift').get('allow_account_management'): self.put_account_expect = is_success def _scenario_generator(self): paths = ((None, None), ('c', None), ('c', 'o')) for path in paths: for method in ('PUT', 'POST', 'HEAD', 'GET', 'OPTIONS'): yield method, path[0], path[1] for path in reversed(paths): yield 'DELETE', path[0], path[1] def _assert_is_authed_response(self, method, container, object, resp): resp.read() expect = is_success if method == 'DELETE' and not container: expect = is_client_error if method == 'PUT' and not container: expect = self.put_account_expect self.assertTrue(expect(resp.status), 'Unexpected %s for %s %s %s' % (resp.status, method, container, object)) def _assert_not_authed_response(self, method, container, object, resp): resp.read() expect = is_client_error if method == 'OPTIONS': expect = is_success self.assertTrue(expect(resp.status), 'Unexpected %s for %s %s %s' % (resp.status, method, container, object)) def prepare_request(self, method, use_service_account=False, container=None, obj=None, body=None, headers=None, x_auth_token=None, x_service_token=None, dbg=False): """ Setup for making the request When retry() calls the do_request() function, it calls it the test user's token, the parsed path, a connection and (optionally) a token from the test service user. We save options here so that do_request() can make the appropriate request. :param method: The operation (e.g. 'HEAD') :param use_service_account: Optional. Set True to change the path to be the service account :param container: Optional. Adds a container name to the path :param obj: Optional. Adds an object name to the path :param body: Optional. Adds a body (string) in the request :param headers: Optional. Adds additional headers. :param x_auth_token: Optional. Default is SET_TO_USERS_TOKEN. 
One of: SET_TO_USERS_TOKEN Put the test user's token in X-Auth-Token SET_TO_SERVICE_TOKEN Put the service token in X-Auth-Token :param x_service_token: Optional. Default is to not set X-Service-Token to any value. If specified, is one of following: SET_TO_USERS_TOKEN Put the test user's token in X-Service-Token SET_TO_SERVICE_TOKEN Put the service token in X-Service-Token :param dbg: Optional. Set true to check request arguments """ self.method = method self.use_service_account = use_service_account self.container = container self.obj = obj self.body = body self.headers = headers if x_auth_token: self.x_auth_token = x_auth_token else: self.x_auth_token = self.SET_TO_USERS_TOKEN self.x_service_token = x_service_token self.dbg = dbg def do_request(self, url, token, parsed, conn, service_token=''): if self.use_service_account: path = self._service_account(parsed.path) else: path = parsed.path if self.container: path += '/%s' % self.container if self.obj: path += '/%s' % self.obj headers = {} if self.body: headers.update({'Content-Length': len(self.body)}) if self.x_auth_token == self.SET_TO_USERS_TOKEN: headers.update({'X-Auth-Token': token}) elif self.x_auth_token == self.SET_TO_SERVICE_TOKEN: headers.update({'X-Auth-Token': service_token}) if self.x_service_token == self.SET_TO_USERS_TOKEN: headers.update({'X-Service-Token': token}) elif self.x_service_token == self.SET_TO_SERVICE_TOKEN: headers.update({'X-Service-Token': service_token}) if self.dbg: print('DEBUG: conn.request: method:%s path:%s' ' body:%s headers:%s' % (self.method, path, self.body, headers)) conn.request(self.method, path, self.body, headers=headers) return check_response(conn) def _service_account(self, path): parts = path.split('/', 3) account = parts[2] try: project_id = account[account.index('_') + 1:] except ValueError: project_id = account parts[2] = '%s%s' % (tf.swift_test_service_prefix, project_id) return '/'.join(parts) def test_user_access_own_auth_account(self): # This covers ground tested elsewhere (tests a user doing HEAD # on own account). 
However, if this fails, none of the remaining # tests will work self.prepare_request('HEAD') resp = retry(self.do_request) resp.read() self.assertIn(resp.status, (200, 204)) def test_user_cannot_access_service_account(self): for method, container, obj in self._scenario_generator(): self.prepare_request(method, use_service_account=True, container=container, obj=obj) resp = retry(self.do_request) self._assert_not_authed_response(method, container, obj, resp) def test_service_user_denied_with_x_auth_token(self): for method, container, obj in self._scenario_generator(): self.prepare_request(method, use_service_account=True, container=container, obj=obj, x_auth_token=self.SET_TO_SERVICE_TOKEN) resp = retry(self.do_request, service_user=5) self._assert_not_authed_response(method, container, obj, resp) def test_service_user_denied_with_x_service_token(self): for method, container, obj in self._scenario_generator(): self.prepare_request(method, use_service_account=True, container=container, obj=obj, x_auth_token=self.SET_TO_SERVICE_TOKEN, x_service_token=self.SET_TO_SERVICE_TOKEN) resp = retry(self.do_request, service_user=5) self._assert_not_authed_response(method, container, obj, resp) def test_user_plus_service_can_access_service_account(self): for method, container, obj in self._scenario_generator(): self.prepare_request(method, use_service_account=True, container=container, obj=obj, x_auth_token=self.SET_TO_USERS_TOKEN, x_service_token=self.SET_TO_SERVICE_TOKEN) resp = retry(self.do_request, service_user=5) self._assert_is_authed_response(method, container, obj, resp) if __name__ == '__main__': unittest2.main() swift-2.7.1/test/functional/__init__.py0000664000567000056710000011724313024044354021257 0ustar jenkinsjenkins00000000000000# Copyright (c) 2014 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import print_function import mock import os from six.moves.urllib.parse import urlparse import sys import pickle import socket import locale import eventlet import eventlet.debug import functools import random from time import time, sleep from contextlib import closing from gzip import GzipFile from shutil import rmtree from tempfile import mkdtemp from unittest2 import SkipTest from six.moves.configparser import ConfigParser, NoSectionError from six.moves import http_client from six.moves.http_client import HTTPException from swift.common.middleware.memcache import MemcacheMiddleware from swift.common.storage_policy import parse_storage_policies, PolicyError from test import get_config from test.functional.swift_test_client import Account, Connection, Container, \ ResponseError # This has the side effect of mocking out the xattr module so that unit tests # (and in this case, when in-process functional tests are called for) can run # on file systems that don't support extended attributes. 
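# (Together with the FakeMemcacheMiddleware defined further below, this lets
# the in-process functional test mode run without a real memcached or an
# xattr-capable filesystem.)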
from test.unit import debug_logger, FakeMemcache from swift.common import constraints, utils, ring, storage_policy from swift.common.ring import Ring from swift.common.wsgi import monkey_patch_mimetools, loadapp from swift.common.utils import config_true_value, split_path from swift.account import server as account_server from swift.container import server as container_server from swift.obj import server as object_server, mem_server as mem_object_server import swift.proxy.controllers.obj http_client._MAXHEADERS = constraints.MAX_HEADER_COUNT DEBUG = True # In order to get the proper blocking behavior of sockets without using # threads, where we can set an arbitrary timeout for some piece of code under # test, we use eventlet with the standard socket library patched. We have to # perform this setup at module import time, since all the socket module # bindings in the swiftclient code will have been made by the time nose # invokes the package or class setup methods. eventlet.hubs.use_hub(utils.get_hub()) eventlet.patcher.monkey_patch(all=False, socket=True) eventlet.debug.hub_exceptions(False) from swiftclient import get_auth, http_connection has_insecure = False try: from swiftclient import __version__ as client_version # Prevent a ValueError in StrictVersion with '2.0.3.68.ga99c2ff' client_version = '.'.join(client_version.split('.')[:3]) except ImportError: # Pre-PBR we had version, not __version__. Anyhow... client_version = '1.2' from distutils.version import StrictVersion if StrictVersion(client_version) >= StrictVersion('2.0'): has_insecure = True config = {} web_front_end = None normalized_urls = None # If no config was read, we will fall back to old school env vars swift_test_auth_version = None swift_test_auth = os.environ.get('SWIFT_TEST_AUTH') swift_test_user = [os.environ.get('SWIFT_TEST_USER'), None, None, '', '', ''] swift_test_key = [os.environ.get('SWIFT_TEST_KEY'), None, None, '', '', ''] swift_test_tenant = ['', '', '', '', '', ''] swift_test_perm = ['', '', '', '', '', ''] swift_test_domain = ['', '', '', '', '', ''] swift_test_user_id = ['', '', '', '', '', ''] swift_test_tenant_id = ['', '', '', '', '', ''] skip, skip2, skip3, skip_service_tokens, skip_if_no_reseller_admin = \ False, False, False, False, False orig_collate = '' insecure = False orig_hash_path_suff_pref = ('', '') orig_swift_conf_name = None in_process = False _testdir = _test_servers = _test_coros = _test_socks = None policy_specified = None class FakeMemcacheMiddleware(MemcacheMiddleware): """ Caching middleware that fakes out caching in swift if memcached does not appear to be running. 
""" def __init__(self, app, conf): super(FakeMemcacheMiddleware, self).__init__(app, conf) self.memcache = FakeMemcache() class InProcessException(BaseException): pass def _info(msg): print(msg, file=sys.stderr) def _debug(msg): if DEBUG: _info('DEBUG: ' + msg) def _in_process_setup_swift_conf(swift_conf_src, testdir): # override swift.conf contents for in-process functional test runs conf = ConfigParser() conf.read(swift_conf_src) try: section = 'swift-hash' conf.set(section, 'swift_hash_path_suffix', 'inprocfunctests') conf.set(section, 'swift_hash_path_prefix', 'inprocfunctests') section = 'swift-constraints' max_file_size = (8 * 1024 * 1024) + 2 # 8 MB + 2 conf.set(section, 'max_file_size', max_file_size) except NoSectionError: msg = 'Conf file %s is missing section %s' % (swift_conf_src, section) raise InProcessException(msg) test_conf_file = os.path.join(testdir, 'swift.conf') with open(test_conf_file, 'w') as fp: conf.write(fp) return test_conf_file def _in_process_find_conf_file(conf_src_dir, conf_file_name, use_sample=True): """ Look for a file first in conf_src_dir, if it exists, otherwise optionally look in the source tree sample 'etc' dir. :param conf_src_dir: Directory in which to search first for conf file. May be None :param conf_file_name: Name of conf file :param use_sample: If True and the conf_file_name is not found, then return any sample conf file found in the source tree sample 'etc' dir by appending '-sample' to conf_file_name :returns: Path to conf file :raises InProcessException: If no conf file is found """ dflt_src_dir = os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir, os.pardir, os.pardir, 'etc')) conf_src_dir = dflt_src_dir if conf_src_dir is None else conf_src_dir conf_file_path = os.path.join(conf_src_dir, conf_file_name) if os.path.exists(conf_file_path): return conf_file_path if use_sample: # fall back to using the corresponding sample conf file conf_file_name += '-sample' conf_file_path = os.path.join(dflt_src_dir, conf_file_name) if os.path.exists(conf_file_path): return conf_file_path msg = 'Failed to find config file %s' % conf_file_name raise InProcessException(msg) def _in_process_setup_ring(swift_conf, conf_src_dir, testdir): """ If SWIFT_TEST_POLICY is set: - look in swift.conf file for specified policy - move this to be policy-0 but preserving its options - copy its ring file to test dir, changing its devices to suit in process testing, and renaming it to suit policy-0 Otherwise, create a default ring file. 
""" conf = ConfigParser() conf.read(swift_conf) sp_prefix = 'storage-policy:' try: # policy index 0 will be created if no policy exists in conf policies = parse_storage_policies(conf) except PolicyError as e: raise InProcessException(e) # clear all policies from test swift.conf before adding test policy back for policy in policies: conf.remove_section(sp_prefix + str(policy.idx)) if policy_specified: policy_to_test = policies.get_by_name(policy_specified) if policy_to_test is None: raise InProcessException('Failed to find policy name "%s"' % policy_specified) _info('Using specified policy %s' % policy_to_test.name) else: policy_to_test = policies.default _info('Defaulting to policy %s' % policy_to_test.name) # make policy_to_test be policy index 0 and default for the test config sp_zero_section = sp_prefix + '0' conf.add_section(sp_zero_section) for (k, v) in policy_to_test.get_info(config=True).items(): conf.set(sp_zero_section, k, v) conf.set(sp_zero_section, 'default', True) with open(swift_conf, 'w') as fp: conf.write(fp) # look for a source ring file ring_file_src = ring_file_test = 'object.ring.gz' if policy_to_test.idx: ring_file_src = 'object-%s.ring.gz' % policy_to_test.idx try: ring_file_src = _in_process_find_conf_file(conf_src_dir, ring_file_src, use_sample=False) except InProcessException as e: if policy_specified: raise InProcessException('Failed to find ring file %s' % ring_file_src) ring_file_src = None ring_file_test = os.path.join(testdir, ring_file_test) if ring_file_src: # copy source ring file to a policy-0 test ring file, re-homing servers _info('Using source ring file %s' % ring_file_src) ring_data = ring.RingData.load(ring_file_src) obj_sockets = [] for dev in ring_data.devs: device = 'sd%c1' % chr(len(obj_sockets) + ord('a')) utils.mkdirs(os.path.join(_testdir, 'sda1')) utils.mkdirs(os.path.join(_testdir, 'sda1', 'tmp')) obj_socket = eventlet.listen(('localhost', 0)) obj_sockets.append(obj_socket) dev['port'] = obj_socket.getsockname()[1] dev['ip'] = '127.0.0.1' dev['device'] = device dev['replication_port'] = dev['port'] dev['replication_ip'] = dev['ip'] ring_data.save(ring_file_test) else: # make default test ring, 2 replicas, 4 partitions, 2 devices _info('No source object ring file, creating 2rep/4part/2dev ring') obj_sockets = [eventlet.listen(('localhost', 0)) for _ in (0, 1)] ring_data = ring.RingData( [[0, 1, 0, 1], [1, 0, 1, 0]], [{'id': 0, 'zone': 0, 'device': 'sda1', 'ip': '127.0.0.1', 'port': obj_sockets[0].getsockname()[1]}, {'id': 1, 'zone': 1, 'device': 'sdb1', 'ip': '127.0.0.1', 'port': obj_sockets[1].getsockname()[1]}], 30) with closing(GzipFile(ring_file_test, 'wb')) as f: pickle.dump(ring_data, f) for dev in ring_data.devs: _debug('Ring file dev: %s' % dev) return obj_sockets def in_process_setup(the_object_server=object_server): _info('IN-PROCESS SERVERS IN USE FOR FUNCTIONAL TESTS') _info('Using object_server class: %s' % the_object_server.__name__) conf_src_dir = os.environ.get('SWIFT_TEST_IN_PROCESS_CONF_DIR') show_debug_logs = os.environ.get('SWIFT_TEST_DEBUG_LOGS') if conf_src_dir is not None: if not os.path.isdir(conf_src_dir): msg = 'Config source %s is not a dir' % conf_src_dir raise InProcessException(msg) _info('Using config source dir: %s' % conf_src_dir) # If SWIFT_TEST_IN_PROCESS_CONF specifies a config source dir then # prefer config files from there, otherwise read config from source tree # sample files. A mixture of files from the two sources is allowed. 
proxy_conf = _in_process_find_conf_file(conf_src_dir, 'proxy-server.conf') _info('Using proxy config from %s' % proxy_conf) swift_conf_src = _in_process_find_conf_file(conf_src_dir, 'swift.conf') _info('Using swift config from %s' % swift_conf_src) monkey_patch_mimetools() global _testdir _testdir = os.path.join(mkdtemp(), 'tmp_functional') utils.mkdirs(_testdir) rmtree(_testdir) utils.mkdirs(os.path.join(_testdir, 'sda1')) utils.mkdirs(os.path.join(_testdir, 'sda1', 'tmp')) utils.mkdirs(os.path.join(_testdir, 'sdb1')) utils.mkdirs(os.path.join(_testdir, 'sdb1', 'tmp')) swift_conf = _in_process_setup_swift_conf(swift_conf_src, _testdir) obj_sockets = _in_process_setup_ring(swift_conf, conf_src_dir, _testdir) global orig_swift_conf_name orig_swift_conf_name = utils.SWIFT_CONF_FILE utils.SWIFT_CONF_FILE = swift_conf constraints.reload_constraints() storage_policy.SWIFT_CONF_FILE = swift_conf storage_policy.reload_storage_policies() global config if constraints.SWIFT_CONSTRAINTS_LOADED: # Use the swift constraints that are loaded for the test framework # configuration _c = dict((k, str(v)) for k, v in constraints.EFFECTIVE_CONSTRAINTS.items()) config.update(_c) else: # In-process swift constraints were not loaded, somethings wrong raise SkipTest global orig_hash_path_suff_pref orig_hash_path_suff_pref = utils.HASH_PATH_PREFIX, utils.HASH_PATH_SUFFIX utils.validate_hash_conf() global _test_socks _test_socks = [] # We create the proxy server listening socket to get its port number so # that we can add it as the "auth_port" value for the functional test # clients. prolis = eventlet.listen(('localhost', 0)) _test_socks.append(prolis) # The following set of configuration values is used both for the # functional test frame work and for the various proxy, account, container # and object servers. config.update({ # Values needed by the various in-process swift servers 'devices': _testdir, 'swift_dir': _testdir, 'mount_check': 'false', 'client_timeout': '4', 'allow_account_management': 'true', 'account_autocreate': 'true', 'allow_versions': 'True', # Below are values used by the functional test framework, as well as # by the various in-process swift servers 'auth_host': '127.0.0.1', 'auth_port': str(prolis.getsockname()[1]), 'auth_ssl': 'no', 'auth_prefix': '/auth/', # Primary functional test account (needs admin access to the # account) 'account': 'test', 'username': 'tester', 'password': 'testing', # User on a second account (needs admin access to the account) 'account2': 'test2', 'username2': 'tester2', 'password2': 'testing2', # User on same account as first, but without admin access 'username3': 'tester3', 'password3': 'testing3', # Service user and prefix (emulates glance, cinder, etc. user) 'account5': 'test5', 'username5': 'tester5', 'password5': 'testing5', 'service_prefix': 'SERVICE', # For tempauth middleware. Update reseller_prefix 'reseller_prefix': 'AUTH, SERVICE', 'SERVICE_require_group': 'service', # Reseller admin user (needs reseller_admin_role) 'account6': 'test6', 'username6': 'tester6', 'password6': 'testing6' }) # If an env var explicitly specifies the proxy-server object_post_as_copy # option then use its value, otherwise leave default config unchanged. 
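# For example (illustrative): exporting
# SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY=false makes the in-process proxy
# use fast-POST rather than POST-as-COPY; any value understood by
# config_true_value() is accepted.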
object_post_as_copy = os.environ.get( 'SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY') if object_post_as_copy is not None: object_post_as_copy = config_true_value(object_post_as_copy) config['object_post_as_copy'] = str(object_post_as_copy) _debug('Setting object_post_as_copy to %r' % object_post_as_copy) acc1lis = eventlet.listen(('localhost', 0)) acc2lis = eventlet.listen(('localhost', 0)) con1lis = eventlet.listen(('localhost', 0)) con2lis = eventlet.listen(('localhost', 0)) _test_socks += [acc1lis, acc2lis, con1lis, con2lis] + obj_sockets account_ring_path = os.path.join(_testdir, 'account.ring.gz') with closing(GzipFile(account_ring_path, 'wb')) as f: pickle.dump(ring.RingData([[0, 1, 0, 1], [1, 0, 1, 0]], [{'id': 0, 'zone': 0, 'device': 'sda1', 'ip': '127.0.0.1', 'port': acc1lis.getsockname()[1]}, {'id': 1, 'zone': 1, 'device': 'sdb1', 'ip': '127.0.0.1', 'port': acc2lis.getsockname()[1]}], 30), f) container_ring_path = os.path.join(_testdir, 'container.ring.gz') with closing(GzipFile(container_ring_path, 'wb')) as f: pickle.dump(ring.RingData([[0, 1, 0, 1], [1, 0, 1, 0]], [{'id': 0, 'zone': 0, 'device': 'sda1', 'ip': '127.0.0.1', 'port': con1lis.getsockname()[1]}, {'id': 1, 'zone': 1, 'device': 'sdb1', 'ip': '127.0.0.1', 'port': con2lis.getsockname()[1]}], 30), f) eventlet.wsgi.HttpProtocol.default_request_version = "HTTP/1.0" # Turn off logging requests by the underlying WSGI software. eventlet.wsgi.HttpProtocol.log_request = lambda *a: None logger = utils.get_logger(config, 'wsgi-server', log_route='wsgi') # Redirect logging other messages by the underlying WSGI software. eventlet.wsgi.HttpProtocol.log_message = \ lambda s, f, *a: logger.error('ERROR WSGI: ' + f % a) # Default to only 4 seconds for in-process functional test runs eventlet.wsgi.WRITE_TIMEOUT = 4 def get_logger_name(name): if show_debug_logs: return debug_logger(name) else: return None acc1srv = account_server.AccountController( config, logger=get_logger_name('acct1')) acc2srv = account_server.AccountController( config, logger=get_logger_name('acct2')) con1srv = container_server.ContainerController( config, logger=get_logger_name('cont1')) con2srv = container_server.ContainerController( config, logger=get_logger_name('cont2')) objsrvs = [ (obj_sockets[index], the_object_server.ObjectController( config, logger=get_logger_name('obj%d' % (index + 1)))) for index in range(len(obj_sockets)) ] if show_debug_logs: logger = debug_logger('proxy') def get_logger(name, *args, **kwargs): return logger with mock.patch('swift.common.utils.get_logger', get_logger): with mock.patch('swift.common.middleware.memcache.MemcacheMiddleware', FakeMemcacheMiddleware): try: app = loadapp(proxy_conf, global_conf=config) except Exception as e: raise InProcessException(e) nl = utils.NullLogger() global proxy_srv proxy_srv = prolis prospa = eventlet.spawn(eventlet.wsgi.server, prolis, app, nl) acc1spa = eventlet.spawn(eventlet.wsgi.server, acc1lis, acc1srv, nl) acc2spa = eventlet.spawn(eventlet.wsgi.server, acc2lis, acc2srv, nl) con1spa = eventlet.spawn(eventlet.wsgi.server, con1lis, con1srv, nl) con2spa = eventlet.spawn(eventlet.wsgi.server, con2lis, con2srv, nl) objspa = [eventlet.spawn(eventlet.wsgi.server, objsrv[0], objsrv[1], nl) for objsrv in objsrvs] global _test_coros _test_coros = \ (prospa, acc1spa, acc2spa, con1spa, con2spa) + tuple(objspa) # Create accounts "test" and "test2" def create_account(act): ts = utils.normalize_timestamp(time()) account_ring = Ring(_testdir, ring_name='account') partition, nodes = account_ring.get_nodes(act) 
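# get_nodes() returns the partition number and the primary node dicts
# (ip/port/device) for the account; the loop below PUTs the account to each
# of those nodes so every replica exists before the tests begin.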
for node in nodes: # Note: we are just using the http_connect method in the object # controller here to talk to the account server nodes. conn = swift.proxy.controllers.obj.http_connect( node['ip'], node['port'], node['device'], partition, 'PUT', '/' + act, {'X-Timestamp': ts, 'x-trans-id': act}) resp = conn.getresponse() assert(resp.status == 201) create_account('AUTH_test') create_account('AUTH_test2') cluster_info = {} def get_cluster_info(): # The fallback constraints used for testing will come from the current # effective constraints. eff_constraints = dict(constraints.EFFECTIVE_CONSTRAINTS) # We'll update those constraints based on what the /info API provides, if # anything. global cluster_info global config try: conn = Connection(config) conn.authenticate() cluster_info.update(conn.cluster_info()) except (ResponseError, socket.error): # Failed to get cluster_information via /info API, so fall back on # test.conf data pass else: try: eff_constraints.update(cluster_info['swift']) except KeyError: # Most likely the swift cluster has "expose_info = false" set # in its proxy-server.conf file, so we'll just do the best we # can. print("** Swift Cluster not exposing /info **", file=sys.stderr) # Finally, we'll allow any constraint present in the swift-constraints # section of test.conf to override everything. Note that only those # constraints defined in the constraints module are converted to integers. test_constraints = get_config('swift-constraints') for k in constraints.DEFAULT_CONSTRAINTS: try: test_constraints[k] = int(test_constraints[k]) except KeyError: pass except ValueError: print("Invalid constraint value: %s = %s" % ( k, test_constraints[k]), file=sys.stderr) eff_constraints.update(test_constraints) # Just make it look like these constraints were loaded from a /info call, # even if the /info call failed, or when they are overridden by values # from the swift-constraints section of test.conf cluster_info['swift'] = eff_constraints def setup_package(): global policy_specified policy_specified = os.environ.get('SWIFT_TEST_POLICY') in_process_env = os.environ.get('SWIFT_TEST_IN_PROCESS') if in_process_env is not None: use_in_process = utils.config_true_value(in_process_env) else: use_in_process = None global in_process global config if use_in_process: # Explicitly set to True, so barrel on ahead with in-process # functional test setup. in_process = True # NOTE: No attempt is made to a read local test.conf file. else: if use_in_process is None: # Not explicitly set, default to using in-process functional tests # if the test.conf file is not found, or does not provide a usable # configuration. config.update(get_config('func_test')) if not config: in_process = True # else... leave in_process value unchanged. It may be that # setup_package is called twice, in which case in_process_setup may # have loaded config before we reach here a second time, so the # existence of config is not reliable to determine that in_process # should be False. Anyway, it's default value is False. else: # Explicitly set to False, do not attempt to use in-process # functional tests, be sure we attempt to read from local # test.conf file. 
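# Summary of this env-var handling: SWIFT_TEST_IN_PROCESS=1 forces in-process
# servers, SWIFT_TEST_IN_PROCESS=0 forces an external cluster via test.conf,
# and leaving it unset uses test.conf when a usable func_test section is
# found and otherwise falls back to in-process.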
in_process = False config.update(get_config('func_test')) if in_process: in_mem_obj_env = os.environ.get('SWIFT_TEST_IN_MEMORY_OBJ') in_mem_obj = utils.config_true_value(in_mem_obj_env) try: in_process_setup(the_object_server=( mem_object_server if in_mem_obj else object_server)) except InProcessException as exc: print(('Exception during in-process setup: %s' % str(exc)), file=sys.stderr) raise global web_front_end web_front_end = config.get('web_front_end', 'integral') global normalized_urls normalized_urls = config.get('normalized_urls', False) global orig_collate orig_collate = locale.setlocale(locale.LC_COLLATE) locale.setlocale(locale.LC_COLLATE, config.get('collate', 'C')) global insecure insecure = config_true_value(config.get('insecure', False)) global swift_test_auth_version global swift_test_auth global swift_test_user global swift_test_key global swift_test_tenant global swift_test_perm global swift_test_domain global swift_test_service_prefix swift_test_service_prefix = None if config: swift_test_auth_version = str(config.get('auth_version', '1')) swift_test_auth = 'http' if config_true_value(config.get('auth_ssl', 'no')): swift_test_auth = 'https' if 'auth_prefix' not in config: config['auth_prefix'] = '/' try: suffix = '://%(auth_host)s:%(auth_port)s%(auth_prefix)s' % config swift_test_auth += suffix except KeyError: pass # skip if 'service_prefix' in config: swift_test_service_prefix = utils.append_underscore( config['service_prefix']) if swift_test_auth_version == "1": swift_test_auth += 'v1.0' try: if 'account' in config: swift_test_user[0] = '%(account)s:%(username)s' % config else: swift_test_user[0] = '%(username)s' % config swift_test_key[0] = config['password'] except KeyError: # bad config, no account/username configured, tests cannot be # run pass try: swift_test_user[1] = '%s%s' % ( '%s:' % config['account2'] if 'account2' in config else '', config['username2']) swift_test_key[1] = config['password2'] except KeyError: pass # old config, no second account tests can be run try: swift_test_user[2] = '%s%s' % ( '%s:' % config['account'] if 'account' in config else '', config['username3']) swift_test_key[2] = config['password3'] except KeyError: pass # old config, no third account tests can be run try: swift_test_user[4] = '%s%s' % ( '%s:' % config['account5'], config['username5']) swift_test_key[4] = config['password5'] swift_test_tenant[4] = config['account5'] except KeyError: pass # no service token tests can be run for _ in range(3): swift_test_perm[_] = swift_test_user[_] else: swift_test_user[0] = config['username'] swift_test_tenant[0] = config['account'] swift_test_key[0] = config['password'] swift_test_user[1] = config['username2'] swift_test_tenant[1] = config['account2'] swift_test_key[1] = config['password2'] swift_test_user[2] = config['username3'] swift_test_tenant[2] = config['account'] swift_test_key[2] = config['password3'] if 'username4' in config: swift_test_user[3] = config['username4'] swift_test_tenant[3] = config['account4'] swift_test_key[3] = config['password4'] swift_test_domain[3] = config['domain4'] if 'username5' in config: swift_test_user[4] = config['username5'] swift_test_tenant[4] = config['account5'] swift_test_key[4] = config['password5'] if 'username6' in config: swift_test_user[5] = config['username6'] swift_test_tenant[5] = config['account6'] swift_test_key[5] = config['password6'] for _ in range(5): swift_test_perm[_] = swift_test_tenant[_] + ':' \ + swift_test_user[_] global skip skip = not all([swift_test_auth, 
swift_test_user[0], swift_test_key[0]]) if skip: print('SKIPPING FUNCTIONAL TESTS DUE TO NO CONFIG', file=sys.stderr) global skip2 skip2 = not all([not skip, swift_test_user[1], swift_test_key[1]]) if not skip and skip2: print('SKIPPING SECOND ACCOUNT FUNCTIONAL TESTS ' 'DUE TO NO CONFIG FOR THEM', file=sys.stderr) global skip3 skip3 = not all([not skip, swift_test_user[2], swift_test_key[2]]) if not skip and skip3: print('SKIPPING THIRD ACCOUNT FUNCTIONAL TESTS' 'DUE TO NO CONFIG FOR THEM', file=sys.stderr) global skip_if_not_v3 skip_if_not_v3 = (swift_test_auth_version != '3' or not all([not skip, swift_test_user[3], swift_test_key[3]])) if not skip and skip_if_not_v3: print('SKIPPING FUNCTIONAL TESTS SPECIFIC TO AUTH VERSION 3', file=sys.stderr) global skip_service_tokens skip_service_tokens = not all([not skip, swift_test_user[4], swift_test_key[4], swift_test_tenant[4], swift_test_service_prefix]) if not skip and skip_service_tokens: print( 'SKIPPING FUNCTIONAL TESTS SPECIFIC TO SERVICE TOKENS', file=sys.stderr) if policy_specified: policies = FunctionalStoragePolicyCollection.from_info() for p in policies: # policy names are case-insensitive if policy_specified.lower() == p['name'].lower(): _info('Using specified policy %s' % policy_specified) FunctionalStoragePolicyCollection.policy_specified = p Container.policy_specified = policy_specified break else: _info( 'SKIPPING FUNCTIONAL TESTS: Failed to find specified policy %s' % policy_specified) raise Exception('Failed to find specified policy %s' % policy_specified) global skip_if_no_reseller_admin skip_if_no_reseller_admin = not all([not skip, swift_test_user[5], swift_test_key[5], swift_test_tenant[5]]) if not skip and skip_if_no_reseller_admin: print( 'SKIPPING FUNCTIONAL TESTS DUE TO NO CONFIG FOR RESELLER ADMIN', file=sys.stderr) get_cluster_info() def teardown_package(): global orig_collate locale.setlocale(locale.LC_COLLATE, orig_collate) # clean up containers and objects left behind after running tests global config if config: conn = Connection(config) conn.authenticate() account = Account(conn, config.get('account', config['username'])) account.delete_containers() global in_process global _test_socks if in_process: try: for i, server in enumerate(_test_coros): server.kill() if not server.dead: # kill it from the socket level _test_socks[i].close() except Exception: pass try: rmtree(os.path.dirname(_testdir)) except Exception: pass utils.HASH_PATH_PREFIX, utils.HASH_PATH_SUFFIX = \ orig_hash_path_suff_pref utils.SWIFT_CONF_FILE = orig_swift_conf_name constraints.reload_constraints() reset_globals() class AuthError(Exception): pass class InternalServerError(Exception): pass url = [None, None, None, None, None] token = [None, None, None, None, None] service_token = [None, None, None, None, None] parsed = [None, None, None, None, None] conn = [None, None, None, None, None] def reset_globals(): global url, token, service_token, parsed, conn, config url = [None, None, None, None, None] token = [None, None, None, None, None] service_token = [None, None, None, None, None] parsed = [None, None, None, None, None] conn = [None, None, None, None, None] if config: config = {} def connection(url): if has_insecure: parsed_url, http_conn = http_connection(url, insecure=insecure) else: parsed_url, http_conn = http_connection(url) orig_request = http_conn.request # Add the policy header if policy_specified is set def request_with_policy(method, url, body=None, headers={}): version, account, container, obj = split_path(url, 1, 4, True) if 
policy_specified and method == 'PUT' and container and not obj \ and 'X-Storage-Policy' not in headers: headers['X-Storage-Policy'] = policy_specified return orig_request(method, url, body, headers) http_conn.request = request_with_policy return parsed_url, http_conn def get_url_token(user_index, os_options): authargs = dict(snet=False, tenant_name=swift_test_tenant[user_index], auth_version=swift_test_auth_version, os_options=os_options, insecure=insecure) return get_auth(swift_test_auth, swift_test_user[user_index], swift_test_key[user_index], **authargs) def retry(func, *args, **kwargs): """ You can use the kwargs to override: 'retries' (default: 5) 'use_account' (default: 1) - which user's token to pass 'url_account' (default: matches 'use_account') - which user's storage URL 'resource' (default: url[url_account] - URL to connect to; retry() will interpolate the variable :storage_url: if present 'service_user' - add a service token from this user (1 indexed) """ global url, token, service_token, parsed, conn retries = kwargs.get('retries', 5) attempts, backoff = 0, 1 # use account #1 by default; turn user's 1-indexed account into 0-indexed use_account = kwargs.pop('use_account', 1) - 1 service_user = kwargs.pop('service_user', None) if service_user: service_user -= 1 # 0-index # access our own account by default url_account = kwargs.pop('url_account', use_account + 1) - 1 os_options = {'user_domain_name': swift_test_domain[use_account], 'project_domain_name': swift_test_domain[use_account]} while attempts <= retries: auth_failure = False attempts += 1 try: if not url[use_account] or not token[use_account]: url[use_account], token[use_account] = get_url_token( use_account, os_options) parsed[use_account] = conn[use_account] = None if not parsed[use_account] or not conn[use_account]: parsed[use_account], conn[use_account] = \ connection(url[use_account]) # default resource is the account url[url_account] resource = kwargs.pop('resource', '%(storage_url)s') template_vars = {'storage_url': url[url_account]} parsed_result = urlparse(resource % template_vars) if isinstance(service_user, int): if not service_token[service_user]: dummy, service_token[service_user] = get_url_token( service_user, os_options) kwargs['service_token'] = service_token[service_user] return func(url[url_account], token[use_account], parsed_result, conn[url_account], *args, **kwargs) except (socket.error, HTTPException): if attempts > retries: raise parsed[use_account] = conn[use_account] = None if service_user: service_token[service_user] = None except AuthError: auth_failure = True url[use_account] = token[use_account] = None if service_user: service_token[service_user] = None except InternalServerError: pass if attempts <= retries: if not auth_failure: sleep(backoff) backoff *= 2 raise Exception('No result after %s retries.' 
% retries) def check_response(conn): resp = conn.getresponse() if resp.status == 401: resp.read() raise AuthError() elif resp.status // 100 == 5: resp.read() raise InternalServerError() return resp def load_constraint(name): global cluster_info try: c = cluster_info['swift'][name] except KeyError: raise SkipTest("Missing constraint: %s" % name) if not isinstance(c, int): raise SkipTest("Bad value, %r, for constraint: %s" % (c, name)) return c def get_storage_policy_from_cluster_info(info): policies = info['swift'].get('policies', {}) default_policy = [] non_default_policies = [] for p in policies: if p.get('default', {}): default_policy.append(p) else: non_default_policies.append(p) return default_policy, non_default_policies def reset_acl(): def post(url, token, parsed, conn): conn.request('POST', parsed.path, '', { 'X-Auth-Token': token, 'X-Account-Access-Control': '{}' }) return check_response(conn) resp = retry(post, use_account=1) resp.read() def requires_acls(f): @functools.wraps(f) def wrapper(*args, **kwargs): global skip, cluster_info if skip or not cluster_info: raise SkipTest('Requires account ACLs') # Determine whether this cluster has account ACLs; if not, skip test if not cluster_info.get('tempauth', {}).get('account_acls'): raise SkipTest('Requires account ACLs') if swift_test_auth_version != '1': # remove when keystoneauth supports account acls raise SkipTest('Requires account ACLs') reset_acl() try: rv = f(*args, **kwargs) finally: reset_acl() return rv return wrapper class FunctionalStoragePolicyCollection(object): # policy_specified is set in __init__.py when tests are being set up. policy_specified = None def __init__(self, policies): self._all = policies self.default = None for p in self: if p.get('default', False): assert self.default is None, 'Found multiple default ' \ 'policies %r and %r' % (self.default, p) self.default = p @classmethod def from_info(cls, info=None): if not (info or cluster_info): get_cluster_info() info = info or cluster_info try: policy_info = info['swift']['policies'] except KeyError: raise AssertionError('Did not find any policy info in %r' % info) policies = cls(policy_info) assert policies.default, \ 'Did not find default policy in %r' % policy_info return policies def __len__(self): return len(self._all) def __iter__(self): return iter(self._all) def __getitem__(self, index): return self._all[index] def filter(self, **kwargs): return self.__class__([p for p in self if all( p.get(k) == v for k, v in kwargs.items())]) def exclude(self, **kwargs): return self.__class__([p for p in self if all( p.get(k) != v for k, v in kwargs.items())]) def select(self): # check that a policy was specified and that it is available # in the current list (i.e., hasn't been excluded of the current list) if self.policy_specified and self.policy_specified in self: return self.policy_specified else: return random.choice(self) def requires_policies(f): @functools.wraps(f) def wrapper(self, *args, **kwargs): if skip: raise SkipTest try: self.policies = FunctionalStoragePolicyCollection.from_info() except AssertionError: raise SkipTest("Unable to determine available policies") if len(self.policies) < 2: raise SkipTest("Multiple policies not enabled") return f(self, *args, **kwargs) return wrapper swift-2.7.1/test/functional/test_account.py0000775000567000056710000010332013024044354022205 0ustar jenkinsjenkins00000000000000#!/usr/bin/python # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not 
use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest2 import json from uuid import uuid4 from unittest2 import SkipTest from string import letters from six.moves import range from swift.common.middleware.acl import format_acl from test.functional import check_response, retry, requires_acls, \ load_constraint import test.functional as tf def setUpModule(): tf.setup_package() def tearDownModule(): tf.teardown_package() class TestAccount(unittest2.TestCase): def setUp(self): self.max_meta_count = load_constraint('max_meta_count') self.max_meta_name_length = load_constraint('max_meta_name_length') self.max_meta_overall_size = load_constraint('max_meta_overall_size') self.max_meta_value_length = load_constraint('max_meta_value_length') def head(url, token, parsed, conn): conn.request('HEAD', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(head) self.existing_metadata = set([ k for k, v in resp.getheaders() if k.lower().startswith('x-account-meta')]) def tearDown(self): def head(url, token, parsed, conn): conn.request('HEAD', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(head) resp.read() new_metadata = set( [k for k, v in resp.getheaders() if k.lower().startswith('x-account-meta')]) def clear_meta(url, token, parsed, conn, remove_metadata_keys): headers = {'X-Auth-Token': token} headers.update((k, '') for k in remove_metadata_keys) conn.request('POST', parsed.path, '', headers) return check_response(conn) extra_metadata = list(self.existing_metadata ^ new_metadata) for i in range(0, len(extra_metadata), 90): batch = extra_metadata[i:i + 90] resp = retry(clear_meta, batch) resp.read() self.assertEqual(resp.status // 100, 2) def test_metadata(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, value): conn.request('POST', parsed.path, '', {'X-Auth-Token': token, 'X-Account-Meta-Test': value}) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(post, '') resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-account-meta-test'), None) resp = retry(get) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-account-meta-test'), None) resp = retry(post, 'Value') resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-account-meta-test'), 'Value') resp = retry(get) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-account-meta-test'), 'Value') def test_invalid_acls(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) # needs to be an acceptable 
header size num_keys = 8 max_key_size = load_constraint('max_header_size') / num_keys acl = {'admin': [c * max_key_size for c in letters[:num_keys]]} headers = {'x-account-access-control': format_acl( version=2, acl_dict=acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 400) # and again a touch smaller acl = {'admin': [c * max_key_size for c in letters[:num_keys - 1]]} headers = {'x-account-access-control': format_acl( version=2, acl_dict=acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) @requires_acls def test_invalid_acl_keys(self): def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) # needs to be json resp = retry(post, headers={'X-Account-Access-Control': 'invalid'}, use_account=1) resp.read() self.assertEqual(resp.status, 400) acl_user = tf.swift_test_user[1] acl = {'admin': [acl_user], 'invalid_key': 'invalid_value'} headers = {'x-account-access-control': format_acl( version=2, acl_dict=acl)} resp = retry(post, headers, use_account=1) resp.read() self.assertEqual(resp.status, 400) self.assertEqual(resp.getheader('X-Account-Access-Control'), None) @requires_acls def test_invalid_acl_values(self): def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) acl = {'admin': 'invalid_value'} headers = {'x-account-access-control': format_acl( version=2, acl_dict=acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 400) self.assertEqual(resp.getheader('X-Account-Access-Control'), None) @requires_acls def test_read_only_acl(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) # cannot read account resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read access acl_user = tf.swift_test_user[2] acl = {'read-only': [acl_user]} headers = {'x-account-access-control': format_acl( version=2, acl_dict=acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # read-only can read account headers resp = retry(get, use_account=3) resp.read() self.assertIn(resp.status, (200, 204)) # but not acls self.assertEqual(resp.getheader('X-Account-Access-Control'), None) # read-only can not write metadata headers = {'x-account-meta-test': 'value'} resp = retry(post, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 403) # but they can read it headers = {'x-account-meta-test': 'value'} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, use_account=3) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('X-Account-Meta-Test'), 'value') @requires_acls def test_read_write_acl(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', 
parsed.path, '', new_headers) return check_response(conn) # cannot read account resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read-write access acl_user = tf.swift_test_user[2] acl = {'read-write': [acl_user]} headers = {'x-account-access-control': format_acl( version=2, acl_dict=acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # read-write can read account headers resp = retry(get, use_account=3) resp.read() self.assertIn(resp.status, (200, 204)) # but not acls self.assertEqual(resp.getheader('X-Account-Access-Control'), None) # read-write can not write account metadata headers = {'x-account-meta-test': 'value'} resp = retry(post, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 403) @requires_acls def test_admin_acl(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) # cannot read account resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant admin access acl_user = tf.swift_test_user[2] acl = {'admin': [acl_user]} acl_json_str = format_acl(version=2, acl_dict=acl) headers = {'x-account-access-control': acl_json_str} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # admin can read account headers resp = retry(get, use_account=3) resp.read() self.assertIn(resp.status, (200, 204)) # including acls self.assertEqual(resp.getheader('X-Account-Access-Control'), acl_json_str) # admin can write account metadata value = str(uuid4()) headers = {'x-account-meta-test': value} resp = retry(post, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, use_account=3) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('X-Account-Meta-Test'), value) # admin can even revoke their own access headers = {'x-account-access-control': '{}'} resp = retry(post, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) # and again, cannot read account resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) @requires_acls def test_protected_tempurl(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) # add an account metadata, and temp-url-key to account value = str(uuid4()) headers = { 'x-account-meta-temp-url-key': 'secret', 'x-account-meta-test': value, } resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # grant read-only access to tester3 acl_user = tf.swift_test_user[2] acl = {'read-only': [acl_user]} acl_json_str = format_acl(version=2, acl_dict=acl) headers = {'x-account-access-control': acl_json_str} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # read-only tester3 can read account metadata resp = retry(get, use_account=3) resp.read() self.assertTrue( resp.status in (200, 204), 'Expected status in (200, 204), got %s' % 
resp.status) self.assertEqual(resp.getheader('X-Account-Meta-Test'), value) # but not temp-url-key self.assertEqual(resp.getheader('X-Account-Meta-Temp-Url-Key'), None) # grant read-write access to tester3 acl_user = tf.swift_test_user[2] acl = {'read-write': [acl_user]} acl_json_str = format_acl(version=2, acl_dict=acl) headers = {'x-account-access-control': acl_json_str} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # read-write tester3 can read account metadata resp = retry(get, use_account=3) resp.read() self.assertTrue( resp.status in (200, 204), 'Expected status in (200, 204), got %s' % resp.status) self.assertEqual(resp.getheader('X-Account-Meta-Test'), value) # but not temp-url-key self.assertEqual(resp.getheader('X-Account-Meta-Temp-Url-Key'), None) # grant admin access to tester3 acl_user = tf.swift_test_user[2] acl = {'admin': [acl_user]} acl_json_str = format_acl(version=2, acl_dict=acl) headers = {'x-account-access-control': acl_json_str} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # admin tester3 can read account metadata resp = retry(get, use_account=3) resp.read() self.assertTrue( resp.status in (200, 204), 'Expected status in (200, 204), got %s' % resp.status) self.assertEqual(resp.getheader('X-Account-Meta-Test'), value) # including temp-url-key self.assertEqual(resp.getheader('X-Account-Meta-Temp-Url-Key'), 'secret') # admin tester3 can even change temp-url-key secret = str(uuid4()) headers = { 'x-account-meta-temp-url-key': secret, } resp = retry(post, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, use_account=3) resp.read() self.assertTrue( resp.status in (200, 204), 'Expected status in (200, 204), got %s' % resp.status) self.assertEqual(resp.getheader('X-Account-Meta-Temp-Url-Key'), secret) @requires_acls def test_account_acls(self): if tf.skip2: raise SkipTest def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def put(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('PUT', parsed.path, '', new_headers) return check_response(conn) def delete(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('DELETE', parsed.path, '', new_headers) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) try: # User1 can POST to their own account (and reset the ACLs) resp = retry(post, headers={'X-Account-Access-Control': '{}'}, use_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Account-Access-Control'), None) # User1 can GET their own empty account resp = retry(get, use_account=1) resp.read() self.assertEqual(resp.status // 100, 2) self.assertEqual(resp.getheader('X-Account-Access-Control'), None) # User2 can't GET User1's account resp = retry(get, use_account=2, url_account=1) resp.read() self.assertEqual(resp.status, 403) # User1 is swift_owner of their own account, so they can POST an # ACL -- let's do this and make User2 (test_user[1]) an admin acl_user = tf.swift_test_user[1] acl = {'admin': [acl_user]} headers = 
{'x-account-access-control': format_acl( version=2, acl_dict=acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # User1 can see the new header resp = retry(get, use_account=1) resp.read() self.assertEqual(resp.status // 100, 2) data_from_headers = resp.getheader('x-account-access-control') expected = json.dumps(acl, separators=(',', ':')) self.assertEqual(data_from_headers, expected) # Now User2 should be able to GET the account and see the ACL resp = retry(head, use_account=2, url_account=1) resp.read() data_from_headers = resp.getheader('x-account-access-control') self.assertEqual(data_from_headers, expected) # Revoke User2's admin access, grant User2 read-write access acl = {'read-write': [acl_user]} headers = {'x-account-access-control': format_acl( version=2, acl_dict=acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # User2 can still GET the account, but not see the ACL # (since it's privileged data) resp = retry(head, use_account=2, url_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('x-account-access-control'), None) # User2 can PUT and DELETE a container resp = retry(put, use_account=2, url_account=1, resource='%(storage_url)s/mycontainer', headers={}) resp.read() self.assertEqual(resp.status, 201) resp = retry(delete, use_account=2, url_account=1, resource='%(storage_url)s/mycontainer', headers={}) resp.read() self.assertEqual(resp.status, 204) # Revoke User2's read-write access, grant User2 read-only access acl = {'read-only': [acl_user]} headers = {'x-account-access-control': format_acl( version=2, acl_dict=acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # User2 can still GET the account, but not see the ACL # (since it's privileged data) resp = retry(head, use_account=2, url_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('x-account-access-control'), None) # User2 can't PUT a container resp = retry(put, use_account=2, url_account=1, resource='%(storage_url)s/mycontainer', headers={}) resp.read() self.assertEqual(resp.status, 403) finally: # Make sure to clean up even if tests fail -- User2 should not # have access to User1's account in other functional tests! 
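# Posting an empty JSON document ('{}') as X-Account-Access-Control removes
# every read-only/read-write/admin grant and returns the account to its
# default owner-only state.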
resp = retry(post, headers={'X-Account-Access-Control': '{}'}, use_account=1) resp.read() @requires_acls def test_swift_account_acls(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) try: # User1 can POST to their own account resp = retry(post, headers={'X-Account-Access-Control': '{}'}) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Account-Access-Control'), None) # User1 can GET their own empty account resp = retry(get) resp.read() self.assertEqual(resp.status // 100, 2) self.assertEqual(resp.getheader('X-Account-Access-Control'), None) # User1 can POST non-empty data acl_json = '{"admin":["bob"]}' resp = retry(post, headers={'X-Account-Access-Control': acl_json}) resp.read() self.assertEqual(resp.status, 204) # User1 can GET the non-empty data resp = retry(get) resp.read() self.assertEqual(resp.status // 100, 2) self.assertEqual(resp.getheader('X-Account-Access-Control'), acl_json) # POST non-JSON ACL should fail resp = retry(post, headers={'X-Account-Access-Control': 'yuck'}) resp.read() # resp.status will be 400 if tempauth or some other ACL-aware # auth middleware rejects it, or 200 (but silently swallowed by # core Swift) if ACL-unaware auth middleware approves it. # A subsequent GET should show the old, valid data, not the garbage resp = retry(get) resp.read() self.assertEqual(resp.status // 100, 2) self.assertEqual(resp.getheader('X-Account-Access-Control'), acl_json) finally: # Make sure to clean up even if tests fail -- User2 should not # have access to User1's account in other functional tests! resp = retry(post, headers={'X-Account-Access-Control': '{}'}) resp.read() def test_swift_prohibits_garbage_account_acls(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) try: # User1 can POST to their own account resp = retry(post, headers={'X-Account-Access-Control': '{}'}) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Account-Access-Control'), None) # User1 can GET their own empty account resp = retry(get) resp.read() self.assertEqual(resp.status // 100, 2) self.assertEqual(resp.getheader('X-Account-Access-Control'), None) # User1 can POST non-empty data acl_json = '{"admin":["bob"]}' resp = retry(post, headers={'X-Account-Access-Control': acl_json}) resp.read() self.assertEqual(resp.status, 204) # If this request is handled by ACL-aware auth middleware, then the # ACL will be persisted. If it is handled by ACL-unaware auth # middleware, then the header will be thrown out. But the request # should return successfully in any case. # User1 can GET the non-empty data resp = retry(get) resp.read() self.assertEqual(resp.status // 100, 2) # ACL will be set if some ACL-aware auth middleware (e.g. tempauth) # propagates it to sysmeta; if no ACL-aware auth middleware does, # then X-Account-Access-Control will still be empty. 
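# Put differently, the checks below are written to pass with either an
# ACL-aware pipeline (e.g. tempauth, which persists the ACL to sysmeta and
# rejects garbage with a 400) or an ACL-unaware one that silently drops the
# header, so the same test runs against both kinds of auth middleware.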
# POST non-JSON ACL should fail resp = retry(post, headers={'X-Account-Access-Control': 'yuck'}) resp.read() # resp.status will be 400 if tempauth or some other ACL-aware # auth middleware rejects it, or 200 (but silently swallowed by # core Swift) if ACL-unaware auth middleware approves it. # A subsequent GET should either show the old, valid data (if # ACL-aware auth middleware is propagating it) or show nothing # (if no auth middleware in the pipeline is ACL-aware), but should # never return the garbage ACL. resp = retry(get) resp.read() self.assertEqual(resp.status // 100, 2) self.assertNotEqual(resp.getheader('X-Account-Access-Control'), 'yuck') finally: # Make sure to clean up even if tests fail -- User2 should not # have access to User1's account in other functional tests! resp = retry(post, headers={'X-Account-Access-Control': '{}'}) resp.read() def test_unicode_metadata(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, name, value): conn.request('POST', parsed.path, '', {'X-Auth-Token': token, name: value}) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) uni_key = u'X-Account-Meta-uni\u0E12' uni_value = u'uni\u0E12' if (tf.web_front_end == 'integral'): resp = retry(post, uni_key, '1') resp.read() self.assertIn(resp.status, (201, 204)) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader(uni_key.encode('utf-8')), '1') resp = retry(post, 'X-Account-Meta-uni', uni_value) resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('X-Account-Meta-uni'), uni_value.encode('utf-8')) if (tf.web_front_end == 'integral'): resp = retry(post, uni_key, uni_value) resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader(uni_key.encode('utf-8')), uni_value.encode('utf-8')) def test_multi_metadata(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, name, value): conn.request('POST', parsed.path, '', {'X-Auth-Token': token, name: value}) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(post, 'X-Account-Meta-One', '1') resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-account-meta-one'), '1') resp = retry(post, 'X-Account-Meta-Two', '2') resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-account-meta-one'), '1') self.assertEqual(resp.getheader('x-account-meta-two'), '2') def test_bad_metadata(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, extra_headers): headers = {'X-Auth-Token': token} headers.update(extra_headers) conn.request('POST', parsed.path, '', headers) return check_response(conn) resp = retry(post, {'X-Account-Meta-' + ( 'k' * self.max_meta_name_length): 'v'}) resp.read() self.assertEqual(resp.status, 204) resp = retry( post, {'X-Account-Meta-' + ('k' * ( self.max_meta_name_length + 1)): 'v'}) resp.read() self.assertEqual(resp.status, 400) resp = retry(post, {'X-Account-Meta-Too-Long': ( 'k' * self.max_meta_value_length)}) resp.read() self.assertEqual(resp.status, 204) resp = retry( 
post, {'X-Account-Meta-Too-Long': 'k' * ( self.max_meta_value_length + 1)}) resp.read() self.assertEqual(resp.status, 400) def test_bad_metadata2(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, extra_headers): headers = {'X-Auth-Token': token} headers.update(extra_headers) conn.request('POST', parsed.path, '', headers) return check_response(conn) # TODO: Find the test that adds these and remove them. headers = {'x-remove-account-meta-temp-url-key': 'remove', 'x-remove-account-meta-temp-url-key-2': 'remove'} resp = retry(post, headers) headers = {} for x in range(self.max_meta_count): headers['X-Account-Meta-%d' % x] = 'v' resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 204) headers = {} for x in range(self.max_meta_count + 1): headers['X-Account-Meta-%d' % x] = 'v' resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 400) def test_bad_metadata3(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, extra_headers): headers = {'X-Auth-Token': token} headers.update(extra_headers) conn.request('POST', parsed.path, '', headers) return check_response(conn) headers = {} header_value = 'k' * self.max_meta_value_length size = 0 x = 0 while size < (self.max_meta_overall_size - 4 - self.max_meta_value_length): size += 4 + self.max_meta_value_length headers['X-Account-Meta-%04d' % x] = header_value x += 1 if self.max_meta_overall_size - size > 1: headers['X-Account-Meta-k'] = \ 'v' * (self.max_meta_overall_size - size - 1) resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 204) # this POST includes metadata size that is over limit headers['X-Account-Meta-k'] = \ 'x' * (self.max_meta_overall_size - size) resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 400) # this POST would be ok and the aggregate backend metadata # size is on the border headers = {'X-Account-Meta-k': 'y' * (self.max_meta_overall_size - size - 1)} resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 204) # this last POST would be ok by itself but takes the aggregate # backend metadata size over limit headers = {'X-Account-Meta-k': 'z' * (self.max_meta_overall_size - size)} resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 400) class TestAccountInNonDefaultDomain(unittest2.TestCase): def setUp(self): if tf.skip or tf.skip2 or tf.skip_if_not_v3: raise SkipTest('AUTH VERSION 3 SPECIFIC TEST') def test_project_domain_id_header(self): # make sure account exists (assumes account auto create) def post(url, token, parsed, conn): conn.request('POST', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(post, use_account=4) resp.read() self.assertEqual(resp.status, 204) # account in non-default domain should have a project domain id def head(url, token, parsed, conn): conn.request('HEAD', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(head, use_account=4) resp.read() self.assertEqual(resp.status, 204) self.assertIn('X-Account-Project-Domain-Id', resp.headers) if __name__ == '__main__': unittest2.main() swift-2.7.1/test/functional/test_object.py0000775000567000056710000015743413024044354022036 0ustar jenkinsjenkins00000000000000#!/usr/bin/python # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import datetime import json import unittest2 from unittest2 import SkipTest from uuid import uuid4 import time from six.moves import range from test.functional import check_response, retry, requires_acls, \ requires_policies import test.functional as tf def setUpModule(): tf.setup_package() def tearDownModule(): tf.teardown_package() class TestObject(unittest2.TestCase): def setUp(self): if tf.skip: raise SkipTest self.container = uuid4().hex self.containers = [] self._create_container(self.container) self._create_container(self.container, use_account=2) self.obj = uuid4().hex def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % ( parsed.path, self.container, self.obj), 'test', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) def _create_container(self, name=None, headers=None, use_account=1): if not name: name = uuid4().hex self.containers.append(name) headers = headers or {} def put(url, token, parsed, conn, name): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('PUT', parsed.path + '/' + name, '', new_headers) return check_response(conn) resp = retry(put, name, use_account=use_account) resp.read() self.assertEqual(resp.status, 201) # With keystoneauth we need the accounts to have had the project # domain id persisted as sysmeta prior to testing ACLs. This may # not be the case if, for example, the account was created using # a request with reseller_admin role, when project domain id may # not have been known. So we ensure that the project domain id is # in sysmeta by making a POST to the accounts using an admin role. 
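        # (Once this admin POST has persisted the project domain id as
        # sysmeta, it is reflected on account HEAD responses as the
        # X-Account-Project-Domain-Id header, which is what
        # TestAccountInNonDefaultDomain.test_project_domain_id_header in
        # test_account.py checks for.)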
def post(url, token, parsed, conn): conn.request('POST', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(post, use_account=use_account) resp.read() self.assertEqual(resp.status, 204) return name def tearDown(self): if tf.skip: raise SkipTest # get list of objects in container def get(url, token, parsed, conn, container): conn.request( 'GET', parsed.path + '/' + container + '?format=json', '', {'X-Auth-Token': token}) return check_response(conn) # delete an object def delete(url, token, parsed, conn, container, obj): conn.request( 'DELETE', '/'.join([parsed.path, container, obj['name']]), '', {'X-Auth-Token': token}) return check_response(conn) for container in self.containers: while True: resp = retry(get, container) body = resp.read() if resp.status == 404: break self.assertTrue(resp.status // 100 == 2, resp.status) objs = json.loads(body) if not objs: break for obj in objs: resp = retry(delete, container, obj) resp.read() self.assertIn(resp.status, (204, 404)) # delete the container def delete(url, token, parsed, conn, name): conn.request('DELETE', parsed.path + '/' + name, '', {'X-Auth-Token': token}) return check_response(conn) for container in self.containers: resp = retry(delete, container) resp.read() self.assertIn(resp.status, (204, 404)) def test_if_none_match(self): def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % ( parsed.path, self.container, 'if_none_match_test'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'If-None-Match': '*'}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) resp = retry(put) resp.read() self.assertEqual(resp.status, 412) def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % ( parsed.path, self.container, 'if_none_match_test'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'If-None-Match': 'somethingelse'}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 400) def test_too_small_x_timestamp(self): def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % (parsed.path, self.container, 'too_small_x_timestamp'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Timestamp': '-1'}) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', '%s/%s/%s' % (parsed.path, self.container, 'too_small_x_timestamp'), '', {'X-Auth-Token': token, 'Content-Length': '0'}) return check_response(conn) ts_before = time.time() resp = retry(put) body = resp.read() ts_after = time.time() if resp.status == 400: # shunt_inbound_x_timestamp must be false self.assertIn( 'X-Timestamp should be a UNIX timestamp float value', body) else: self.assertEqual(resp.status, 201) self.assertEqual(body, '') resp = retry(head) resp.read() self.assertGreater(float(resp.headers['x-timestamp']), ts_before) self.assertLess(float(resp.headers['x-timestamp']), ts_after) def test_too_big_x_timestamp(self): def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % (parsed.path, self.container, 'too_big_x_timestamp'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Timestamp': '99999999999.9999999999'}) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', '%s/%s/%s' % (parsed.path, self.container, 'too_big_x_timestamp'), '', {'X-Auth-Token': token, 'Content-Length': '0'}) return check_response(conn) ts_before = time.time() resp = retry(put) body = resp.read() ts_after = time.time() if resp.status == 400: # shunt_inbound_x_timestamp must be false self.assertIn( 
'X-Timestamp should be a UNIX timestamp float value', body) else: self.assertEqual(resp.status, 201) self.assertEqual(body, '') resp = retry(head) resp.read() self.assertGreater(float(resp.headers['x-timestamp']), ts_before) self.assertLess(float(resp.headers['x-timestamp']), ts_after) def test_x_delete_after(self): def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % (parsed.path, self.container, 'x_delete_after'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Delete-After': '1'}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) def get(url, token, parsed, conn): conn.request( 'GET', '%s/%s/%s' % (parsed.path, self.container, 'x_delete_after'), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get) resp.read() count = 0 while resp.status == 200 and count < 10: resp = retry(get) resp.read() count += 1 time.sleep(1) self.assertEqual(resp.status, 404) # To avoid an error when the object deletion in tearDown(), # the object is added again. resp = retry(put) resp.read() self.assertEqual(resp.status, 201) def test_x_delete_at(self): def put(url, token, parsed, conn): dt = datetime.datetime.now() epoch = time.mktime(dt.timetuple()) delete_time = str(int(epoch) + 3) conn.request( 'PUT', '%s/%s/%s' % (parsed.path, self.container, 'x_delete_at'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Delete-At': delete_time}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) def get(url, token, parsed, conn): conn.request( 'GET', '%s/%s/%s' % (parsed.path, self.container, 'x_delete_at'), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get) resp.read() count = 0 while resp.status == 200 and count < 10: resp = retry(get) resp.read() count += 1 time.sleep(1) self.assertEqual(resp.status, 404) # To avoid an error when the object deletion in tearDown(), # the object is added again. 
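        # X-Delete-After (exercised in test_x_delete_after above) takes a
        # relative number of seconds, whereas X-Delete-At takes an absolute
        # UNIX timestamp.  A minimal sketch of the same "3 seconds from now"
        # value used above, assuming only the already-imported time module:
        #
        #     delete_time = str(int(time.time()) + 3)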
resp = retry(put) resp.read() self.assertEqual(resp.status, 201) def test_non_integer_x_delete_after(self): def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % (parsed.path, self.container, 'non_integer_x_delete_after'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Delete-After': '*'}) return check_response(conn) resp = retry(put) body = resp.read() self.assertEqual(resp.status, 400) self.assertEqual(body, 'Non-integer X-Delete-After') def test_non_integer_x_delete_at(self): def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % (parsed.path, self.container, 'non_integer_x_delete_at'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Delete-At': '*'}) return check_response(conn) resp = retry(put) body = resp.read() self.assertEqual(resp.status, 400) self.assertEqual(body, 'Non-integer X-Delete-At') def test_x_delete_at_in_the_past(self): def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % (parsed.path, self.container, 'x_delete_at_in_the_past'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Delete-At': '0'}) return check_response(conn) resp = retry(put) body = resp.read() self.assertEqual(resp.status, 400) self.assertEqual(body, 'X-Delete-At in past') def test_copy_object(self): if tf.skip: raise SkipTest source = '%s/%s' % (self.container, self.obj) dest = '%s/%s' % (self.container, 'test_copy') # get contents of source def get_source(url, token, parsed, conn): conn.request('GET', '%s/%s' % (parsed.path, source), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get_source) source_contents = resp.read() self.assertEqual(resp.status, 200) self.assertEqual(source_contents, 'test') # copy source to dest with X-Copy-From def put(url, token, parsed, conn): conn.request('PUT', '%s/%s' % (parsed.path, dest), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Copy-From': source}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) # contents of dest should be the same as source def get_dest(url, token, parsed, conn): conn.request('GET', '%s/%s' % (parsed.path, dest), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get_dest) dest_contents = resp.read() self.assertEqual(resp.status, 200) self.assertEqual(dest_contents, source_contents) # delete the copy def delete(url, token, parsed, conn): conn.request('DELETE', '%s/%s' % (parsed.path, dest), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete) resp.read() self.assertIn(resp.status, (204, 404)) # verify dest does not exist resp = retry(get_dest) resp.read() self.assertEqual(resp.status, 404) # copy source to dest with COPY def copy(url, token, parsed, conn): conn.request('COPY', '%s/%s' % (parsed.path, source), '', {'X-Auth-Token': token, 'Destination': dest}) return check_response(conn) resp = retry(copy) resp.read() self.assertEqual(resp.status, 201) # contents of dest should be the same as source resp = retry(get_dest) dest_contents = resp.read() self.assertEqual(resp.status, 200) self.assertEqual(dest_contents, source_contents) # copy source to dest with COPY and range def copy(url, token, parsed, conn): conn.request('COPY', '%s/%s' % (parsed.path, source), '', {'X-Auth-Token': token, 'Destination': dest, 'Range': 'bytes=1-2'}) return check_response(conn) resp = retry(copy) resp.read() self.assertEqual(resp.status, 201) # contents of dest should be the same as source resp = retry(get_dest) dest_contents = resp.read() self.assertEqual(resp.status, 200) 
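        # 'Range: bytes=1-2' is inclusive at both ends, so for the 4-byte
        # source body 'test' it selects 'es', i.e. the Python slice [1:3]
        # checked below.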
self.assertEqual(dest_contents, source_contents[1:3]) # delete the copy resp = retry(delete) resp.read() self.assertIn(resp.status, (204, 404)) def test_copy_between_accounts(self): if tf.skip: raise SkipTest source = '%s/%s' % (self.container, self.obj) dest = '%s/%s' % (self.container, 'test_copy') # get contents of source def get_source(url, token, parsed, conn): conn.request('GET', '%s/%s' % (parsed.path, source), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get_source) source_contents = resp.read() self.assertEqual(resp.status, 200) self.assertEqual(source_contents, 'test') acct = tf.parsed[0].path.split('/', 2)[2] # copy source to dest with X-Copy-From-Account def put(url, token, parsed, conn): conn.request('PUT', '%s/%s' % (parsed.path, dest), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Copy-From-Account': acct, 'X-Copy-From': source}) return check_response(conn) # try to put, will not succeed # user does not have permissions to read from source resp = retry(put, use_account=2) self.assertEqual(resp.status, 403) # add acl to allow reading from source def post(url, token, parsed, conn): conn.request('POST', '%s/%s' % (parsed.path, self.container), '', {'X-Auth-Token': token, 'X-Container-Read': tf.swift_test_perm[1]}) return check_response(conn) resp = retry(post) self.assertEqual(resp.status, 204) # retry previous put, now should succeed resp = retry(put, use_account=2) self.assertEqual(resp.status, 201) # contents of dest should be the same as source def get_dest(url, token, parsed, conn): conn.request('GET', '%s/%s' % (parsed.path, dest), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get_dest, use_account=2) dest_contents = resp.read() self.assertEqual(resp.status, 200) self.assertEqual(dest_contents, source_contents) # delete the copy def delete(url, token, parsed, conn): conn.request('DELETE', '%s/%s' % (parsed.path, dest), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete, use_account=2) resp.read() self.assertIn(resp.status, (204, 404)) # verify dest does not exist resp = retry(get_dest, use_account=2) resp.read() self.assertEqual(resp.status, 404) acct_dest = tf.parsed[1].path.split('/', 2)[2] # copy source to dest with COPY def copy(url, token, parsed, conn): conn.request('COPY', '%s/%s' % (parsed.path, source), '', {'X-Auth-Token': token, 'Destination-Account': acct_dest, 'Destination': dest}) return check_response(conn) # try to copy, will not succeed # user does not have permissions to write to destination resp = retry(copy) resp.read() self.assertEqual(resp.status, 403) # add acl to allow write to destination def post(url, token, parsed, conn): conn.request('POST', '%s/%s' % (parsed.path, self.container), '', {'X-Auth-Token': token, 'X-Container-Write': tf.swift_test_perm[0]}) return check_response(conn) resp = retry(post, use_account=2) self.assertEqual(resp.status, 204) # now copy will succeed resp = retry(copy) resp.read() self.assertEqual(resp.status, 201) # contents of dest should be the same as source resp = retry(get_dest, use_account=2) dest_contents = resp.read() self.assertEqual(resp.status, 200) self.assertEqual(dest_contents, source_contents) # delete the copy resp = retry(delete, use_account=2) resp.read() self.assertIn(resp.status, (204, 404)) def test_public_object(self): if tf.skip: raise SkipTest def get(url, token, parsed, conn): conn.request('GET', '%s/%s/%s' % (parsed.path, self.container, self.obj)) return check_response(conn) try: resp = retry(get) raise 
Exception('Should not have been able to GET') except Exception as err: self.assertTrue(str(err).startswith('No result after ')) def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.container, '', {'X-Auth-Token': token, 'X-Container-Read': '.r:*'}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) resp = retry(get) resp.read() self.assertEqual(resp.status, 200) def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.container, '', {'X-Auth-Token': token, 'X-Container-Read': ''}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) try: resp = retry(get) raise Exception('Should not have been able to GET') except Exception as err: self.assertTrue(str(err).startswith('No result after ')) def test_private_object(self): if tf.skip or tf.skip3: raise SkipTest # Ensure we can't access the object with the third account def get(url, token, parsed, conn): conn.request('GET', '%s/%s/%s' % ( parsed.path, self.container, self.obj), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # create a shared container writable by account3 shared_container = uuid4().hex def put(url, token, parsed, conn): conn.request('PUT', '%s/%s' % ( parsed.path, shared_container), '', {'X-Auth-Token': token, 'X-Container-Read': tf.swift_test_perm[2], 'X-Container-Write': tf.swift_test_perm[2]}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) # verify third account can not copy from private container def copy(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % ( parsed.path, shared_container, 'private_object'), '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Copy-From': '%s/%s' % (self.container, self.obj)}) return check_response(conn) resp = retry(copy, use_account=3) resp.read() self.assertEqual(resp.status, 403) # verify third account can write "obj1" to shared container def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % ( parsed.path, shared_container, 'obj1'), 'test', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put, use_account=3) resp.read() self.assertEqual(resp.status, 201) # verify third account can copy "obj1" to shared container def copy2(url, token, parsed, conn): conn.request('COPY', '%s/%s/%s' % ( parsed.path, shared_container, 'obj1'), '', {'X-Auth-Token': token, 'Destination': '%s/%s' % (shared_container, 'obj1')}) return check_response(conn) resp = retry(copy2, use_account=3) resp.read() self.assertEqual(resp.status, 201) # verify third account STILL can not copy from private container def copy3(url, token, parsed, conn): conn.request('COPY', '%s/%s/%s' % ( parsed.path, self.container, self.obj), '', {'X-Auth-Token': token, 'Destination': '%s/%s' % (shared_container, 'private_object')}) return check_response(conn) resp = retry(copy3, use_account=3) resp.read() self.assertEqual(resp.status, 403) # clean up "obj1" def delete(url, token, parsed, conn): conn.request('DELETE', '%s/%s/%s' % ( parsed.path, shared_container, 'obj1'), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete) resp.read() self.assertIn(resp.status, (204, 404)) # clean up shared_container def delete(url, token, parsed, conn): conn.request('DELETE', parsed.path + '/' + shared_container, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete) resp.read() self.assertIn(resp.status, 
(204, 404)) def test_container_write_only(self): if tf.skip or tf.skip3: raise SkipTest # Ensure we can't access the object with the third account def get(url, token, parsed, conn): conn.request('GET', '%s/%s/%s' % ( parsed.path, self.container, self.obj), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # create a shared container writable (but not readable) by account3 shared_container = uuid4().hex def put(url, token, parsed, conn): conn.request('PUT', '%s/%s' % ( parsed.path, shared_container), '', {'X-Auth-Token': token, 'X-Container-Write': tf.swift_test_perm[2]}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) # verify third account can write "obj1" to shared container def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/%s' % ( parsed.path, shared_container, 'obj1'), 'test', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put, use_account=3) resp.read() self.assertEqual(resp.status, 201) # verify third account cannot copy "obj1" to shared container def copy(url, token, parsed, conn): conn.request('COPY', '%s/%s/%s' % ( parsed.path, shared_container, 'obj1'), '', {'X-Auth-Token': token, 'Destination': '%s/%s' % (shared_container, 'obj2')}) return check_response(conn) resp = retry(copy, use_account=3) resp.read() self.assertEqual(resp.status, 403) # verify third account can POST to "obj1" in shared container def post(url, token, parsed, conn): conn.request('POST', '%s/%s/%s' % ( parsed.path, shared_container, 'obj1'), '', {'X-Auth-Token': token, 'X-Object-Meta-Color': 'blue'}) return check_response(conn) resp = retry(post, use_account=3) resp.read() self.assertEqual(resp.status, 202) # verify third account can DELETE from shared container def delete(url, token, parsed, conn): conn.request('DELETE', '%s/%s/%s' % ( parsed.path, shared_container, 'obj1'), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete, use_account=3) resp.read() self.assertIn(resp.status, (204, 404)) # clean up shared_container def delete(url, token, parsed, conn): conn.request('DELETE', parsed.path + '/' + shared_container, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete) resp.read() self.assertIn(resp.status, (204, 404)) @requires_acls def test_read_only(self): if tf.skip3: raise tf.SkipTest def get_listing(url, token, parsed, conn): conn.request('GET', '%s/%s' % (parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) def post_account(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def get(url, token, parsed, conn, name): conn.request('GET', '%s/%s/%s' % ( parsed.path, self.container, name), '', {'X-Auth-Token': token}) return check_response(conn) def put(url, token, parsed, conn, name): conn.request('PUT', '%s/%s/%s' % ( parsed.path, self.container, name), 'test', {'X-Auth-Token': token}) return check_response(conn) def delete(url, token, parsed, conn, name): conn.request('PUT', '%s/%s/%s' % ( parsed.path, self.container, name), '', {'X-Auth-Token': token}) return check_response(conn) # cannot list objects resp = retry(get_listing, use_account=3) resp.read() self.assertEqual(resp.status, 403) # cannot get object resp = retry(get, self.obj, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read-only access acl_user = 
tf.swift_test_user[2] acl = {'read-only': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can list objects resp = retry(get_listing, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(self.obj, listing) # can get object resp = retry(get, self.obj, use_account=3) body = resp.read() self.assertEqual(resp.status, 200) self.assertEqual(body, 'test') # can not put an object obj_name = str(uuid4()) resp = retry(put, obj_name, use_account=3) body = resp.read() self.assertEqual(resp.status, 403) # can not delete an object resp = retry(delete, self.obj, use_account=3) body = resp.read() self.assertEqual(resp.status, 403) # sanity with account1 resp = retry(get_listing, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertNotIn(obj_name, listing) self.assertIn(self.obj, listing) @requires_acls def test_read_write(self): if tf.skip3: raise SkipTest def get_listing(url, token, parsed, conn): conn.request('GET', '%s/%s' % (parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) def post_account(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def get(url, token, parsed, conn, name): conn.request('GET', '%s/%s/%s' % ( parsed.path, self.container, name), '', {'X-Auth-Token': token}) return check_response(conn) def put(url, token, parsed, conn, name): conn.request('PUT', '%s/%s/%s' % ( parsed.path, self.container, name), 'test', {'X-Auth-Token': token}) return check_response(conn) def delete(url, token, parsed, conn, name): conn.request('DELETE', '%s/%s/%s' % ( parsed.path, self.container, name), '', {'X-Auth-Token': token}) return check_response(conn) # cannot list objects resp = retry(get_listing, use_account=3) resp.read() self.assertEqual(resp.status, 403) # cannot get object resp = retry(get, self.obj, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read-write access acl_user = tf.swift_test_user[2] acl = {'read-write': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can list objects resp = retry(get_listing, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(self.obj, listing) # can get object resp = retry(get, self.obj, use_account=3) body = resp.read() self.assertEqual(resp.status, 200) self.assertEqual(body, 'test') # can put an object obj_name = str(uuid4()) resp = retry(put, obj_name, use_account=3) body = resp.read() self.assertEqual(resp.status, 201) # can delete an object resp = retry(delete, self.obj, use_account=3) body = resp.read() self.assertIn(resp.status, (204, 404)) # sanity with account1 resp = retry(get_listing, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(obj_name, listing) self.assertNotIn(self.obj, listing) @requires_acls def test_admin(self): if tf.skip3: raise SkipTest def get_listing(url, token, parsed, conn): conn.request('GET', '%s/%s' % (parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) def post_account(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def 
get(url, token, parsed, conn, name): conn.request('GET', '%s/%s/%s' % ( parsed.path, self.container, name), '', {'X-Auth-Token': token}) return check_response(conn) def put(url, token, parsed, conn, name): conn.request('PUT', '%s/%s/%s' % ( parsed.path, self.container, name), 'test', {'X-Auth-Token': token}) return check_response(conn) def delete(url, token, parsed, conn, name): conn.request('DELETE', '%s/%s/%s' % ( parsed.path, self.container, name), '', {'X-Auth-Token': token}) return check_response(conn) # cannot list objects resp = retry(get_listing, use_account=3) resp.read() self.assertEqual(resp.status, 403) # cannot get object resp = retry(get, self.obj, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant admin access acl_user = tf.swift_test_user[2] acl = {'admin': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can list objects resp = retry(get_listing, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(self.obj, listing) # can get object resp = retry(get, self.obj, use_account=3) body = resp.read() self.assertEqual(resp.status, 200) self.assertEqual(body, 'test') # can put an object obj_name = str(uuid4()) resp = retry(put, obj_name, use_account=3) body = resp.read() self.assertEqual(resp.status, 201) # can delete an object resp = retry(delete, self.obj, use_account=3) body = resp.read() self.assertIn(resp.status, (204, 404)) # sanity with account1 resp = retry(get_listing, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(obj_name, listing) self.assertNotIn(self.obj, listing) def test_manifest(self): if tf.skip: raise SkipTest # Data for the object segments segments1 = ['one', 'two', 'three', 'four', 'five'] segments2 = ['six', 'seven', 'eight'] segments3 = ['nine', 'ten', 'eleven'] # Upload the first set of segments def put(url, token, parsed, conn, objnum): conn.request('PUT', '%s/%s/segments1/%s' % ( parsed.path, self.container, str(objnum)), segments1[objnum], {'X-Auth-Token': token}) return check_response(conn) for objnum in range(len(segments1)): resp = retry(put, objnum) resp.read() self.assertEqual(resp.status, 201) # Upload the manifest def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/manifest' % ( parsed.path, self.container), '', { 'X-Auth-Token': token, 'X-Object-Manifest': '%s/segments1/' % self.container, 'Content-Type': 'text/jibberish', 'Content-Length': '0'}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) # Get the manifest (should get all the segments as the body) def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get) self.assertEqual(resp.read(), ''.join(segments1)) self.assertEqual(resp.status, 200) self.assertEqual(resp.getheader('content-type'), 'text/jibberish') # Get with a range at the start of the second segment def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', { 'X-Auth-Token': token, 'Range': 'bytes=3-'}) return check_response(conn) resp = retry(get) self.assertEqual(resp.read(), ''.join(segments1[1:])) self.assertEqual(resp.status, 206) # Get with a range in the middle of the second segment def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), 
'', { 'X-Auth-Token': token, 'Range': 'bytes=5-'}) return check_response(conn) resp = retry(get) self.assertEqual(resp.read(), ''.join(segments1)[5:]) self.assertEqual(resp.status, 206) # Get with a full start and stop range def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', { 'X-Auth-Token': token, 'Range': 'bytes=5-10'}) return check_response(conn) resp = retry(get) self.assertEqual(resp.read(), ''.join(segments1)[5:11]) self.assertEqual(resp.status, 206) # Upload the second set of segments def put(url, token, parsed, conn, objnum): conn.request('PUT', '%s/%s/segments2/%s' % ( parsed.path, self.container, str(objnum)), segments2[objnum], {'X-Auth-Token': token}) return check_response(conn) for objnum in range(len(segments2)): resp = retry(put, objnum) resp.read() self.assertEqual(resp.status, 201) # Get the manifest (should still be the first segments of course) def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get) self.assertEqual(resp.read(), ''.join(segments1)) self.assertEqual(resp.status, 200) # Update the manifest def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/manifest' % ( parsed.path, self.container), '', { 'X-Auth-Token': token, 'X-Object-Manifest': '%s/segments2/' % self.container, 'Content-Length': '0'}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) # Get the manifest (should be the second set of segments now) def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get) self.assertEqual(resp.read(), ''.join(segments2)) self.assertEqual(resp.status, 200) if not tf.skip3: # Ensure we can't access the manifest with the third account def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # Grant access to the third account def post(url, token, parsed, conn): conn.request('POST', '%s/%s' % (parsed.path, self.container), '', {'X-Auth-Token': token, 'X-Container-Read': tf.swift_test_perm[2]}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) # The third account should be able to get the manifest now def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get, use_account=3) self.assertEqual(resp.read(), ''.join(segments2)) self.assertEqual(resp.status, 200) # Create another container for the third set of segments acontainer = uuid4().hex def put(url, token, parsed, conn): conn.request('PUT', parsed.path + '/' + acontainer, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) # Upload the third set of segments in the other container def put(url, token, parsed, conn, objnum): conn.request('PUT', '%s/%s/segments3/%s' % ( parsed.path, acontainer, str(objnum)), segments3[objnum], {'X-Auth-Token': token}) return check_response(conn) for objnum in range(len(segments3)): resp = retry(put, objnum) resp.read() self.assertEqual(resp.status, 201) # Update the manifest def put(url, token, parsed, conn): conn.request('PUT', 
'%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token, 'X-Object-Manifest': '%s/segments3/' % acontainer, 'Content-Length': '0'}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) # Get the manifest to ensure it's the third set of segments def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get) self.assertEqual(resp.read(), ''.join(segments3)) self.assertEqual(resp.status, 200) if not tf.skip3: # Ensure we can't access the manifest with the third account # (because the segments are in a protected container even if the # manifest itself is not). def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # Grant access to the third account def post(url, token, parsed, conn): conn.request('POST', '%s/%s' % (parsed.path, acontainer), '', {'X-Auth-Token': token, 'X-Container-Read': tf.swift_test_perm[2]}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) # The third account should be able to get the manifest now def get(url, token, parsed, conn): conn.request('GET', '%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get, use_account=3) self.assertEqual(resp.read(), ''.join(segments3)) self.assertEqual(resp.status, 200) # Delete the manifest def delete(url, token, parsed, conn, objnum): conn.request('DELETE', '%s/%s/manifest' % ( parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete, objnum) resp.read() self.assertIn(resp.status, (204, 404)) # Delete the third set of segments def delete(url, token, parsed, conn, objnum): conn.request('DELETE', '%s/%s/segments3/%s' % ( parsed.path, acontainer, str(objnum)), '', {'X-Auth-Token': token}) return check_response(conn) for objnum in range(len(segments3)): resp = retry(delete, objnum) resp.read() self.assertIn(resp.status, (204, 404)) # Delete the second set of segments def delete(url, token, parsed, conn, objnum): conn.request('DELETE', '%s/%s/segments2/%s' % ( parsed.path, self.container, str(objnum)), '', {'X-Auth-Token': token}) return check_response(conn) for objnum in range(len(segments2)): resp = retry(delete, objnum) resp.read() self.assertIn(resp.status, (204, 404)) # Delete the first set of segments def delete(url, token, parsed, conn, objnum): conn.request('DELETE', '%s/%s/segments1/%s' % ( parsed.path, self.container, str(objnum)), '', {'X-Auth-Token': token}) return check_response(conn) for objnum in range(len(segments1)): resp = retry(delete, objnum) resp.read() self.assertIn(resp.status, (204, 404)) # Delete the extra container def delete(url, token, parsed, conn): conn.request('DELETE', '%s/%s' % (parsed.path, acontainer), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete) resp.read() self.assertIn(resp.status, (204, 404)) def test_delete_content_type(self): if tf.skip: raise SkipTest def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/hi' % (parsed.path, self.container), 'there', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) def delete(url, token, parsed, conn): conn.request('DELETE', '%s/%s/hi' % 
(parsed.path, self.container), '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete) resp.read() self.assertIn(resp.status, (204, 404)) self.assertEqual(resp.getheader('Content-Type'), 'text/html; charset=UTF-8') def test_delete_if_delete_at_bad(self): if tf.skip: raise SkipTest def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/hi-delete-bad' % (parsed.path, self.container), 'there', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) def delete(url, token, parsed, conn): conn.request('DELETE', '%s/%s/hi' % (parsed.path, self.container), '', {'X-Auth-Token': token, 'X-If-Delete-At': 'bad'}) return check_response(conn) resp = retry(delete) resp.read() self.assertEqual(resp.status, 400) def test_null_name(self): if tf.skip: raise SkipTest def put(url, token, parsed, conn): conn.request('PUT', '%s/%s/abc%%00def' % ( parsed.path, self.container), 'test', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put) if (tf.web_front_end == 'apache2'): self.assertEqual(resp.status, 404) else: self.assertEqual(resp.read(), 'Invalid UTF8 or contains NULL') self.assertEqual(resp.status, 412) def test_cors(self): if tf.skip: raise SkipTest try: strict_cors = tf.cluster_info['swift']['strict_cors_mode'] except KeyError: raise SkipTest("cors mode is unknown") def put_cors_cont(url, token, parsed, conn, orig): conn.request( 'PUT', '%s/%s' % (parsed.path, self.container), '', {'X-Auth-Token': token, 'X-Container-Meta-Access-Control-Allow-Origin': orig}) return check_response(conn) def put_obj(url, token, parsed, conn, obj): conn.request( 'PUT', '%s/%s/%s' % (parsed.path, self.container, obj), 'test', {'X-Auth-Token': token}) return check_response(conn) def check_cors(url, token, parsed, conn, method, obj, headers): if method != 'OPTIONS': headers['X-Auth-Token'] = token conn.request( method, '%s/%s/%s' % (parsed.path, self.container, obj), '', headers) return conn.getresponse() resp = retry(put_cors_cont, '*') resp.read() self.assertEqual(resp.status // 100, 2) resp = retry(put_obj, 'cat') resp.read() self.assertEqual(resp.status // 100, 2) resp = retry(check_cors, 'OPTIONS', 'cat', {'Origin': 'http://m.com'}) self.assertEqual(resp.status, 401) resp = retry(check_cors, 'OPTIONS', 'cat', {'Origin': 'http://m.com', 'Access-Control-Request-Method': 'GET'}) self.assertEqual(resp.status, 200) resp.read() headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('access-control-allow-origin'), '*') resp = retry(check_cors, 'GET', 'cat', {'Origin': 'http://m.com'}) self.assertEqual(resp.status, 200) headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('access-control-allow-origin'), '*') resp = retry(check_cors, 'GET', 'cat', {'Origin': 'http://m.com', 'X-Web-Mode': 'True'}) self.assertEqual(resp.status, 200) headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('access-control-allow-origin'), '*') #################### resp = retry(put_cors_cont, 'http://secret.com') resp.read() self.assertEqual(resp.status // 100, 2) resp = retry(check_cors, 'OPTIONS', 'cat', {'Origin': 'http://m.com', 'Access-Control-Request-Method': 'GET'}) resp.read() self.assertEqual(resp.status, 401) if strict_cors: resp = retry(check_cors, 'GET', 'cat', {'Origin': 'http://m.com'}) resp.read() self.assertEqual(resp.status, 200) headers = dict((k.lower(), v) for k, v in resp.getheaders()) 
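            # With strict_cors_mode enabled, a GET with a non-matching
            # Origin still returns 200 (the request carries an auth token),
            # but Swift withholds the CORS headers, so no
            # Access-Control-Allow-Origin is expected in the response.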
self.assertNotIn('access-control-allow-origin', headers) resp = retry(check_cors, 'GET', 'cat', {'Origin': 'http://secret.com'}) resp.read() self.assertEqual(resp.status, 200) headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('access-control-allow-origin'), 'http://secret.com') else: resp = retry(check_cors, 'GET', 'cat', {'Origin': 'http://m.com'}) resp.read() self.assertEqual(resp.status, 200) headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('access-control-allow-origin'), 'http://m.com') @requires_policies def test_cross_policy_copy(self): # create container in first policy policy = self.policies.select() container = self._create_container( headers={'X-Storage-Policy': policy['name']}) obj = uuid4().hex # create a container in second policy other_policy = self.policies.exclude(name=policy['name']).select() other_container = self._create_container( headers={'X-Storage-Policy': other_policy['name']}) other_obj = uuid4().hex def put_obj(url, token, parsed, conn, container, obj): # to keep track of things, use the original path as the body content = '%s/%s' % (container, obj) path = '%s/%s' % (parsed.path, content) conn.request('PUT', path, content, {'X-Auth-Token': token}) return check_response(conn) # create objects for c, o in zip((container, other_container), (obj, other_obj)): resp = retry(put_obj, c, o) resp.read() self.assertEqual(resp.status, 201) def put_copy_from(url, token, parsed, conn, container, obj, source): dest_path = '%s/%s/%s' % (parsed.path, container, obj) conn.request('PUT', dest_path, '', {'X-Auth-Token': token, 'Content-Length': '0', 'X-Copy-From': source}) return check_response(conn) copy_requests = ( (container, other_obj, '%s/%s' % (other_container, other_obj)), (other_container, obj, '%s/%s' % (container, obj)), ) # copy objects for c, o, source in copy_requests: resp = retry(put_copy_from, c, o, source) resp.read() self.assertEqual(resp.status, 201) def get_obj(url, token, parsed, conn, container, obj): path = '%s/%s/%s' % (parsed.path, container, obj) conn.request('GET', path, '', {'X-Auth-Token': token}) return check_response(conn) # validate contents, contents should be source validate_requests = copy_requests for c, o, body in validate_requests: resp = retry(get_obj, c, o) self.assertEqual(resp.status, 200) self.assertEqual(body, resp.read()) if __name__ == '__main__': unittest2.main() swift-2.7.1/test/functional/test_container.py0000775000567000056710000020354413024044354022544 0ustar jenkinsjenkins00000000000000#!/usr/bin/python # Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
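# The container tests below use the same request-callback pattern as
# test_object.py above: a small function taking (url, token, parsed, conn)
# plus any extra positional arguments is handed to test.functional.retry(),
# which supplies an authenticated connection (use_account=N selects one of
# the configured test accounts) and returns the result of the callback,
# i.e. check_response(conn).  A minimal sketch of such a callback, assuming
# the helpers imported just below:
#
#     def head_container(url, token, parsed, conn, name):
#         conn.request('HEAD', parsed.path + '/' + name, '',
#                      {'X-Auth-Token': token})
#         return check_response(conn)
#
#     resp = retry(head_container, 'my-container', use_account=1)
#     resp.read()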
import json import unittest2 from unittest2 import SkipTest from uuid import uuid4 from test.functional import check_response, cluster_info, retry, \ requires_acls, load_constraint, requires_policies import test.functional as tf from six.moves import range def setUpModule(): tf.setup_package() def tearDownModule(): tf.teardown_package() class TestContainer(unittest2.TestCase): def setUp(self): if tf.skip: raise SkipTest self.name = uuid4().hex # this container isn't created by default, but will be cleaned up self.container = uuid4().hex def put(url, token, parsed, conn): conn.request('PUT', parsed.path + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 201) self.max_meta_count = load_constraint('max_meta_count') self.max_meta_name_length = load_constraint('max_meta_name_length') self.max_meta_overall_size = load_constraint('max_meta_overall_size') self.max_meta_value_length = load_constraint('max_meta_value_length') def tearDown(self): if tf.skip: raise SkipTest def get(url, token, parsed, conn, container): conn.request( 'GET', parsed.path + '/' + container + '?format=json', '', {'X-Auth-Token': token}) return check_response(conn) def delete(url, token, parsed, conn, container, obj): conn.request( 'DELETE', '/'.join([parsed.path, container, obj['name']]), '', {'X-Auth-Token': token}) return check_response(conn) for container in (self.name, self.container): while True: resp = retry(get, container) body = resp.read() if resp.status == 404: break self.assertTrue(resp.status // 100 == 2, resp.status) objs = json.loads(body) if not objs: break for obj in objs: resp = retry(delete, container, obj) resp.read() self.assertEqual(resp.status, 204) def delete(url, token, parsed, conn, container): conn.request('DELETE', parsed.path + '/' + container, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete, self.name) resp.read() self.assertEqual(resp.status, 204) # container may have not been created resp = retry(delete, self.container) resp.read() self.assertIn(resp.status, (204, 404)) def test_multi_metadata(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, name, value): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, name: value}) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(post, 'X-Container-Meta-One', '1') resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-one'), '1') resp = retry(post, 'X-Container-Meta-Two', '2') resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-one'), '1') self.assertEqual(resp.getheader('x-container-meta-two'), '2') def test_unicode_metadata(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, name, value): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, name: value}) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) uni_key = u'X-Container-Meta-uni\u0E12' uni_value = u'uni\u0E12' if (tf.web_front_end == 'integral'): resp = retry(post, uni_key, '1') resp.read() 
self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader(uni_key.encode('utf-8')), '1') resp = retry(post, 'X-Container-Meta-uni', uni_value) resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('X-Container-Meta-uni'), uni_value.encode('utf-8')) if (tf.web_front_end == 'integral'): resp = retry(post, uni_key, uni_value) resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader(uni_key.encode('utf-8')), uni_value.encode('utf-8')) def test_PUT_metadata(self): if tf.skip: raise SkipTest def put(url, token, parsed, conn, name, value): conn.request('PUT', parsed.path + '/' + name, '', {'X-Auth-Token': token, 'X-Container-Meta-Test': value}) return check_response(conn) def head(url, token, parsed, conn, name): conn.request('HEAD', parsed.path + '/' + name, '', {'X-Auth-Token': token}) return check_response(conn) def get(url, token, parsed, conn, name): conn.request('GET', parsed.path + '/' + name, '', {'X-Auth-Token': token}) return check_response(conn) def delete(url, token, parsed, conn, name): conn.request('DELETE', parsed.path + '/' + name, '', {'X-Auth-Token': token}) return check_response(conn) name = uuid4().hex resp = retry(put, name, 'Value') resp.read() self.assertEqual(resp.status, 201) resp = retry(head, name) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-test'), 'Value') resp = retry(get, name) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-test'), 'Value') resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 204) name = uuid4().hex resp = retry(put, name, '') resp.read() self.assertEqual(resp.status, 201) resp = retry(head, name) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-test'), None) resp = retry(get, name) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-test'), None) resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 204) def test_POST_metadata(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, value): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, 'X-Container-Meta-Test': value}) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) def get(url, token, parsed, conn): conn.request('GET', parsed.path + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-test'), None) resp = retry(get) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-test'), None) resp = retry(post, 'Value') resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-test'), 'Value') resp = retry(get) resp.read() self.assertIn(resp.status, (200, 204)) self.assertEqual(resp.getheader('x-container-meta-test'), 'Value') def test_PUT_bad_metadata(self): if tf.skip: raise SkipTest def put(url, token, parsed, conn, name, extra_headers): headers = 
{'X-Auth-Token': token} headers.update(extra_headers) conn.request('PUT', parsed.path + '/' + name, '', headers) return check_response(conn) def delete(url, token, parsed, conn, name): conn.request('DELETE', parsed.path + '/' + name, '', {'X-Auth-Token': token}) return check_response(conn) name = uuid4().hex resp = retry( put, name, {'X-Container-Meta-' + ('k' * self.max_meta_name_length): 'v'}) resp.read() self.assertEqual(resp.status, 201) resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 204) name = uuid4().hex resp = retry( put, name, {'X-Container-Meta-' + ( 'k' * (self.max_meta_name_length + 1)): 'v'}) resp.read() self.assertEqual(resp.status, 400) resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 404) name = uuid4().hex resp = retry( put, name, {'X-Container-Meta-Too-Long': 'k' * self.max_meta_value_length}) resp.read() self.assertEqual(resp.status, 201) resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 204) name = uuid4().hex resp = retry( put, name, {'X-Container-Meta-Too-Long': 'k' * ( self.max_meta_value_length + 1)}) resp.read() self.assertEqual(resp.status, 400) resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 404) name = uuid4().hex headers = {} for x in range(self.max_meta_count): headers['X-Container-Meta-%d' % x] = 'v' resp = retry(put, name, headers) resp.read() self.assertEqual(resp.status, 201) resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 204) name = uuid4().hex headers = {} for x in range(self.max_meta_count + 1): headers['X-Container-Meta-%d' % x] = 'v' resp = retry(put, name, headers) resp.read() self.assertEqual(resp.status, 400) resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 404) name = uuid4().hex headers = {} header_value = 'k' * self.max_meta_value_length size = 0 x = 0 while size < (self.max_meta_overall_size - 4 - self.max_meta_value_length): size += 4 + self.max_meta_value_length headers['X-Container-Meta-%04d' % x] = header_value x += 1 if self.max_meta_overall_size - size > 1: headers['X-Container-Meta-k'] = \ 'v' * (self.max_meta_overall_size - size - 1) resp = retry(put, name, headers) resp.read() self.assertEqual(resp.status, 201) resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 204) name = uuid4().hex headers['X-Container-Meta-k'] = \ 'v' * (self.max_meta_overall_size - size) resp = retry(put, name, headers) resp.read() self.assertEqual(resp.status, 400) resp = retry(delete, name) resp.read() self.assertEqual(resp.status, 404) def test_POST_bad_metadata(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, extra_headers): headers = {'X-Auth-Token': token} headers.update(extra_headers) conn.request('POST', parsed.path + '/' + self.name, '', headers) return check_response(conn) resp = retry( post, {'X-Container-Meta-' + ('k' * self.max_meta_name_length): 'v'}) resp.read() self.assertEqual(resp.status, 204) resp = retry( post, {'X-Container-Meta-' + ( 'k' * (self.max_meta_name_length + 1)): 'v'}) resp.read() self.assertEqual(resp.status, 400) resp = retry( post, {'X-Container-Meta-Too-Long': 'k' * self.max_meta_value_length}) resp.read() self.assertEqual(resp.status, 204) resp = retry( post, {'X-Container-Meta-Too-Long': 'k' * ( self.max_meta_value_length + 1)}) resp.read() self.assertEqual(resp.status, 400) def test_POST_bad_metadata2(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, extra_headers): headers = {'X-Auth-Token': token} headers.update(extra_headers) 
conn.request('POST', parsed.path + '/' + self.name, '', headers) return check_response(conn) headers = {} for x in range(self.max_meta_count): headers['X-Container-Meta-%d' % x] = 'v' resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 204) headers = {} for x in range(self.max_meta_count + 1): headers['X-Container-Meta-%d' % x] = 'v' resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 400) def test_POST_bad_metadata3(self): if tf.skip: raise SkipTest def post(url, token, parsed, conn, extra_headers): headers = {'X-Auth-Token': token} headers.update(extra_headers) conn.request('POST', parsed.path + '/' + self.name, '', headers) return check_response(conn) headers = {} header_value = 'k' * self.max_meta_value_length size = 0 x = 0 while size < (self.max_meta_overall_size - 4 - self.max_meta_value_length): size += 4 + self.max_meta_value_length headers['X-Container-Meta-%04d' % x] = header_value x += 1 if self.max_meta_overall_size - size > 1: headers['X-Container-Meta-k'] = \ 'v' * (self.max_meta_overall_size - size - 1) resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 204) # this POST includes metadata size that is over limit headers['X-Container-Meta-k'] = \ 'x' * (self.max_meta_overall_size - size) resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 400) # this POST would be ok and the aggregate backend metadata # size is on the border headers = {'X-Container-Meta-k': 'y' * (self.max_meta_overall_size - size - 1)} resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 204) # this last POST would be ok by itself but takes the aggregate # backend metadata size over limit headers = {'X-Container-Meta-k': 'z' * (self.max_meta_overall_size - size)} resp = retry(post, headers) resp.read() self.assertEqual(resp.status, 400) def test_public_container(self): if tf.skip: raise SkipTest def get(url, token, parsed, conn): conn.request('GET', parsed.path + '/' + self.name) return check_response(conn) try: resp = retry(get) raise Exception('Should not have been able to GET') except Exception as err: self.assertTrue(str(err).startswith('No result after '), err) def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, 'X-Container-Read': '.r:*,.rlistings'}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) resp = retry(get) resp.read() self.assertEqual(resp.status, 204) def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, 'X-Container-Read': ''}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) try: resp = retry(get) raise Exception('Should not have been able to GET') except Exception as err: self.assertTrue(str(err).startswith('No result after '), err) def test_cross_account_container(self): if tf.skip or tf.skip2: raise SkipTest # Obtain the first account's string first_account = ['unknown'] def get1(url, token, parsed, conn): first_account[0] = parsed.path conn.request('HEAD', parsed.path + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get1) resp.read() # Ensure we can't access the container with the second account def get2(url, token, parsed, conn): conn.request('GET', first_account[0] + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get2, use_account=2) resp.read() self.assertEqual(resp.status, 403) # Make the container 
accessible by the second account def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, 'X-Container-Read': tf.swift_test_perm[1], 'X-Container-Write': tf.swift_test_perm[1]}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) # Ensure we can now use the container with the second account resp = retry(get2, use_account=2) resp.read() self.assertEqual(resp.status, 204) # Make the container private again def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, 'X-Container-Read': '', 'X-Container-Write': ''}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) # Ensure we can't access the container with the second account again resp = retry(get2, use_account=2) resp.read() self.assertEqual(resp.status, 403) def test_cross_account_public_container(self): if tf.skip or tf.skip2: raise SkipTest # Obtain the first account's string first_account = ['unknown'] def get1(url, token, parsed, conn): first_account[0] = parsed.path conn.request('HEAD', parsed.path + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get1) resp.read() # Ensure we can't access the container with the second account def get2(url, token, parsed, conn): conn.request('GET', first_account[0] + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get2, use_account=2) resp.read() self.assertEqual(resp.status, 403) # Make the container completely public def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, 'X-Container-Read': '.r:*,.rlistings'}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) # Ensure we can now read the container with the second account resp = retry(get2, use_account=2) resp.read() self.assertEqual(resp.status, 204) # But we shouldn't be able to write with the second account def put2(url, token, parsed, conn): conn.request('PUT', first_account[0] + '/' + self.name + '/object', 'test object', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put2, use_account=2) resp.read() self.assertEqual(resp.status, 403) # Now make the container also writeable by the second account def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, 'X-Container-Write': tf.swift_test_perm[1]}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) # Ensure we can still read the container with the second account resp = retry(get2, use_account=2) resp.read() self.assertEqual(resp.status, 204) # And that we can now write with the second account resp = retry(put2, use_account=2) resp.read() self.assertEqual(resp.status, 201) def test_nonadmin_user(self): if tf.skip or tf.skip3: raise SkipTest # Obtain the first account's string first_account = ['unknown'] def get1(url, token, parsed, conn): first_account[0] = parsed.path conn.request('HEAD', parsed.path + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get1) resp.read() # Ensure we can't access the container with the third account def get3(url, token, parsed, conn): conn.request('GET', first_account[0] + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(get3, use_account=3) resp.read() self.assertEqual(resp.status, 403) # Make the 
container accessible by the third account def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, 'X-Container-Read': tf.swift_test_perm[2]}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) # Ensure we can now read the container with the third account resp = retry(get3, use_account=3) resp.read() self.assertEqual(resp.status, 204) # But we shouldn't be able to write with the third account def put3(url, token, parsed, conn): conn.request('PUT', first_account[0] + '/' + self.name + '/object', 'test object', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put3, use_account=3) resp.read() self.assertEqual(resp.status, 403) # Now make the container also writeable by the third account def post(url, token, parsed, conn): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, 'X-Container-Write': tf.swift_test_perm[2]}) return check_response(conn) resp = retry(post) resp.read() self.assertEqual(resp.status, 204) # Ensure we can still read the container with the third account resp = retry(get3, use_account=3) resp.read() self.assertEqual(resp.status, 204) # And that we can now write with the third account resp = retry(put3, use_account=3) resp.read() self.assertEqual(resp.status, 201) @requires_acls def test_read_only_acl_listings(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def post_account(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def put(url, token, parsed, conn, name): conn.request('PUT', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) # cannot list containers resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read-only access acl_user = tf.swift_test_user[2] acl = {'read-only': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # read-only can list containers resp = retry(get, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(self.name, listing) # read-only can not create containers new_container_name = str(uuid4()) resp = retry(put, new_container_name, use_account=3) resp.read() self.assertEqual(resp.status, 403) # but it can see newly created ones resp = retry(put, new_container_name, use_account=1) resp.read() self.assertEqual(resp.status, 201) resp = retry(get, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(new_container_name, listing) @requires_acls def test_read_only_acl_metadata(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn, name): conn.request('GET', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) def post_account(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def post(url, token, parsed, conn, name, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path + '/%s' % name, '', new_headers) return check_response(conn) # add some metadata value = str(uuid4()) headers = {'x-container-meta-test': value} resp = 
retry(post, self.name, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # cannot see metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read-only access acl_user = tf.swift_test_user[2] acl = {'read-only': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # read-only can NOT write container metadata new_value = str(uuid4()) headers = {'x-container-meta-test': new_value} resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 403) # read-only can read container metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) @requires_acls def test_read_write_acl_listings(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def put(url, token, parsed, conn, name): conn.request('PUT', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) def delete(url, token, parsed, conn, name): conn.request('DELETE', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) # cannot list containers resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read-write access acl_user = tf.swift_test_user[2] acl = {'read-write': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can list containers resp = retry(get, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(self.name, listing) # can create new containers new_container_name = str(uuid4()) resp = retry(put, new_container_name, use_account=3) resp.read() self.assertEqual(resp.status, 201) resp = retry(get, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(new_container_name, listing) # can also delete them resp = retry(delete, new_container_name, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertNotIn(new_container_name, listing) # even if they didn't create them empty_container_name = str(uuid4()) resp = retry(put, empty_container_name, use_account=1) resp.read() self.assertEqual(resp.status, 201) resp = retry(delete, empty_container_name, use_account=3) resp.read() self.assertEqual(resp.status, 204) @requires_acls def test_read_write_acl_metadata(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn, name): conn.request('GET', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) def post_account(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def post(url, token, parsed, conn, name, headers): new_headers = 
dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path + '/%s' % name, '', new_headers) return check_response(conn) # add some metadata value = str(uuid4()) headers = {'x-container-meta-test': value} resp = retry(post, self.name, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # cannot see metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read-write access acl_user = tf.swift_test_user[2] acl = {'read-write': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # read-write can read container metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # read-write can also write container metadata new_value = str(uuid4()) headers = {'x-container-meta-test': new_value} resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), new_value) # and remove it headers = {'x-remove-container-meta-test': 'true'} resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), None) @requires_acls def test_admin_acl_listing(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn): conn.request('GET', parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def put(url, token, parsed, conn, name): conn.request('PUT', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) def delete(url, token, parsed, conn, name): conn.request('DELETE', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) # cannot list containers resp = retry(get, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant admin access acl_user = tf.swift_test_user[2] acl = {'admin': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can list containers resp = retry(get, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(self.name, listing) # can create new containers new_container_name = str(uuid4()) resp = retry(put, new_container_name, use_account=3) resp.read() self.assertEqual(resp.status, 201) resp = retry(get, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertIn(new_container_name, listing) # can also delete them resp = retry(delete, new_container_name, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, use_account=3) listing = resp.read() self.assertEqual(resp.status, 200) self.assertNotIn(new_container_name, listing) # even if they didn't create them empty_container_name = 
str(uuid4()) resp = retry(put, empty_container_name, use_account=1) resp.read() self.assertEqual(resp.status, 201) resp = retry(delete, empty_container_name, use_account=3) resp.read() self.assertEqual(resp.status, 204) @requires_acls def test_admin_acl_metadata(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn, name): conn.request('GET', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) def post_account(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def post(url, token, parsed, conn, name, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path + '/%s' % name, '', new_headers) return check_response(conn) # add some metadata value = str(uuid4()) headers = {'x-container-meta-test': value} resp = retry(post, self.name, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # cannot see metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant access acl_user = tf.swift_test_user[2] acl = {'admin': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can read container metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # can also write container metadata new_value = str(uuid4()) headers = {'x-container-meta-test': new_value} resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), new_value) # and remove it headers = {'x-remove-container-meta-test': 'true'} resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), None) @requires_acls def test_protected_container_sync(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn, name): conn.request('GET', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) def post_account(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def post(url, token, parsed, conn, name, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path + '/%s' % name, '', new_headers) return check_response(conn) # add some metadata value = str(uuid4()) headers = { 'x-container-sync-key': 'secret', 'x-container-meta-test': value, } resp = retry(post, self.name, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Sync-Key'), 'secret') self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # grant read-only access acl_user = tf.swift_test_user[2] 
acl = {'read-only': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can read container metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # but not sync-key self.assertEqual(resp.getheader('X-Container-Sync-Key'), None) # and can not write headers = {'x-container-sync-key': str(uuid4())} resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read-write access acl_user = tf.swift_test_user[2] acl = {'read-write': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can read container metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # but not sync-key self.assertEqual(resp.getheader('X-Container-Sync-Key'), None) # sanity check sync-key w/ account1 resp = retry(get, self.name, use_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Sync-Key'), 'secret') # and can write new_value = str(uuid4()) headers = { 'x-container-sync-key': str(uuid4()), 'x-container-meta-test': new_value, } resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=1) # validate w/ account1 resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), new_value) # but can not write sync-key self.assertEqual(resp.getheader('X-Container-Sync-Key'), 'secret') # grant admin access acl_user = tf.swift_test_user[2] acl = {'admin': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # admin can read container metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), new_value) # and ALSO sync-key self.assertEqual(resp.getheader('X-Container-Sync-Key'), 'secret') # admin tester3 can even change sync-key new_secret = str(uuid4()) headers = {'x-container-sync-key': new_secret} resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Sync-Key'), new_secret) @requires_acls def test_protected_container_acl(self): if tf.skip3: raise SkipTest def get(url, token, parsed, conn, name): conn.request('GET', parsed.path + '/%s' % name, '', {'X-Auth-Token': token}) return check_response(conn) def post_account(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path, '', new_headers) return check_response(conn) def post(url, token, parsed, conn, name, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path + '/%s' % name, '', new_headers) return check_response(conn) # add some container acls value = str(uuid4()) headers = { 'x-container-read': 'jdoe', 'x-container-write': 'jdoe', 'x-container-meta-test': value, } resp 
= retry(post, self.name, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Read'), 'jdoe') self.assertEqual(resp.getheader('X-Container-Write'), 'jdoe') self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # grant read-only access acl_user = tf.swift_test_user[2] acl = {'read-only': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can read container metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # but not container acl self.assertEqual(resp.getheader('X-Container-Read'), None) self.assertEqual(resp.getheader('X-Container-Write'), None) # and can not write headers = { 'x-container-read': 'frank', 'x-container-write': 'frank', } resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 403) # grant read-write access acl_user = tf.swift_test_user[2] acl = {'read-write': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # can read container metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), value) # but not container acl self.assertEqual(resp.getheader('X-Container-Read'), None) self.assertEqual(resp.getheader('X-Container-Write'), None) # sanity check container acls with account1 resp = retry(get, self.name, use_account=1) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Read'), 'jdoe') self.assertEqual(resp.getheader('X-Container-Write'), 'jdoe') # and can write new_value = str(uuid4()) headers = { 'x-container-read': 'frank', 'x-container-write': 'frank', 'x-container-meta-test': new_value, } resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=1) # validate w/ account1 resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), new_value) # but can not write container acls self.assertEqual(resp.getheader('X-Container-Read'), 'jdoe') self.assertEqual(resp.getheader('X-Container-Write'), 'jdoe') # grant admin access acl_user = tf.swift_test_user[2] acl = {'admin': [acl_user]} headers = {'x-account-access-control': json.dumps(acl)} resp = retry(post_account, headers=headers, use_account=1) resp.read() self.assertEqual(resp.status, 204) # admin can read container metadata resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) self.assertEqual(resp.getheader('X-Container-Meta-Test'), new_value) # and ALSO container acls self.assertEqual(resp.getheader('X-Container-Read'), 'jdoe') self.assertEqual(resp.getheader('X-Container-Write'), 'jdoe') # admin tester3 can even change container acls new_value = str(uuid4()) headers = { 'x-container-read': '.r:*', } resp = retry(post, self.name, headers=headers, use_account=3) resp.read() self.assertEqual(resp.status, 204) resp = retry(get, self.name, use_account=3) resp.read() self.assertEqual(resp.status, 204) 
self.assertEqual(resp.getheader('X-Container-Read'), '.r:*') def test_long_name_content_type(self): if tf.skip: raise SkipTest def put(url, token, parsed, conn): container_name = 'X' * 2048 conn.request('PUT', '%s/%s' % (parsed.path, container_name), 'there', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status, 400) self.assertEqual(resp.getheader('Content-Type'), 'text/html; charset=UTF-8') def test_null_name(self): if tf.skip: raise SkipTest def put(url, token, parsed, conn): conn.request('PUT', '%s/abc%%00def' % parsed.path, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(put) if (tf.web_front_end == 'apache2'): self.assertEqual(resp.status, 404) else: self.assertEqual(resp.read(), 'Invalid UTF8 or contains NULL') self.assertEqual(resp.status, 412) def test_create_container_gets_default_policy_by_default(self): try: default_policy = \ tf.FunctionalStoragePolicyCollection.from_info().default except AssertionError: raise SkipTest() def put(url, token, parsed, conn): # using the empty storage policy header value here to ensure # that the default policy is chosen in case policy_specified is set # see __init__.py for details on policy_specified conn.request('PUT', parsed.path + '/' + self.container, '', {'X-Auth-Token': token, 'X-Storage-Policy': ''}) return check_response(conn) resp = retry(put) resp.read() self.assertEqual(resp.status // 100, 2) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path + '/' + self.container, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(head) resp.read() headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('x-storage-policy'), default_policy['name']) def test_error_invalid_storage_policy_name(self): def put(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('PUT', parsed.path + '/' + self.container, '', new_headers) return check_response(conn) # create resp = retry(put, {'X-Storage-Policy': uuid4().hex}) resp.read() self.assertEqual(resp.status, 400) @requires_policies def test_create_non_default_storage_policy_container(self): policy = self.policies.exclude(default=True).select() def put(url, token, parsed, conn, headers=None): base_headers = {'X-Auth-Token': token} if headers: base_headers.update(headers) conn.request('PUT', parsed.path + '/' + self.container, '', base_headers) return check_response(conn) headers = {'X-Storage-Policy': policy['name']} resp = retry(put, headers=headers) resp.read() self.assertEqual(resp.status, 201) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path + '/' + self.container, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(head) resp.read() headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('x-storage-policy'), policy['name']) # and test recreate with-out specifying Storage Policy resp = retry(put) resp.read() self.assertEqual(resp.status, 202) # should still be original storage policy resp = retry(head) resp.read() headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('x-storage-policy'), policy['name']) # delete it def delete(url, token, parsed, conn): conn.request('DELETE', parsed.path + '/' + self.container, '', {'X-Auth-Token': token}) return check_response(conn) resp = retry(delete) resp.read() self.assertEqual(resp.status, 204) # verify no policy header resp = retry(head) resp.read() headers = 
dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('x-storage-policy'), None) @requires_policies def test_conflict_change_storage_policy_with_put(self): def put(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('PUT', parsed.path + '/' + self.container, '', new_headers) return check_response(conn) # create policy = self.policies.select() resp = retry(put, {'X-Storage-Policy': policy['name']}) resp.read() self.assertEqual(resp.status, 201) # can't change it other_policy = self.policies.exclude(name=policy['name']).select() resp = retry(put, {'X-Storage-Policy': other_policy['name']}) resp.read() self.assertEqual(resp.status, 409) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path + '/' + self.container, '', {'X-Auth-Token': token}) return check_response(conn) # still original policy resp = retry(head) resp.read() headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('x-storage-policy'), policy['name']) @requires_policies def test_noop_change_storage_policy_with_post(self): def put(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('PUT', parsed.path + '/' + self.container, '', new_headers) return check_response(conn) # create policy = self.policies.select() resp = retry(put, {'X-Storage-Policy': policy['name']}) resp.read() self.assertEqual(resp.status, 201) def post(url, token, parsed, conn, headers): new_headers = dict({'X-Auth-Token': token}, **headers) conn.request('POST', parsed.path + '/' + self.container, '', new_headers) return check_response(conn) # attempt update for header in ('X-Storage-Policy', 'X-Storage-Policy-Index'): other_policy = self.policies.exclude(name=policy['name']).select() resp = retry(post, {header: other_policy['name']}) resp.read() self.assertEqual(resp.status, 204) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path + '/' + self.container, '', {'X-Auth-Token': token}) return check_response(conn) # still original policy resp = retry(head) resp.read() headers = dict((k.lower(), v) for k, v in resp.getheaders()) self.assertEqual(headers.get('x-storage-policy'), policy['name']) def test_container_quota_bytes(self): if 'container_quotas' not in cluster_info: raise SkipTest('Container quotas not enabled') def post(url, token, parsed, conn, name, value): conn.request('POST', parsed.path + '/' + self.name, '', {'X-Auth-Token': token, name: value}) return check_response(conn) def head(url, token, parsed, conn): conn.request('HEAD', parsed.path + '/' + self.name, '', {'X-Auth-Token': token}) return check_response(conn) # set X-Container-Meta-Quota-Bytes is 10 resp = retry(post, 'X-Container-Meta-Quota-Bytes', '10') resp.read() self.assertEqual(resp.status, 204) resp = retry(head) resp.read() self.assertIn(resp.status, (200, 204)) # confirm X-Container-Meta-Quota-Bytes self.assertEqual(resp.getheader('X-Container-Meta-Quota-Bytes'), '10') def put(url, token, parsed, conn, data): conn.request('PUT', parsed.path + '/' + self.name + '/object', data, {'X-Auth-Token': token}) return check_response(conn) # upload 11 bytes object resp = retry(put, '01234567890') resp.read() self.assertEqual(resp.status, 413) # upload 10 bytes object resp = retry(put, '0123456789') resp.read() self.assertEqual(resp.status, 201) def get(url, token, parsed, conn): conn.request('GET', parsed.path + '/' + self.name + '/object', '', {'X-Auth-Token': token}) return check_response(conn) # download 
10 bytes object
        resp = retry(get)
        body = resp.read()
        self.assertEqual(resp.status, 200)
        self.assertEqual(body, '0123456789')


class BaseTestContainerACLs(unittest2.TestCase):
    # subclasses can change the account in which container
    # is created/deleted by setUp/tearDown
    account = 1

    def _get_account(self, url, token, parsed, conn):
        return parsed.path

    def _get_tenant_id(self, url, token, parsed, conn):
        account = parsed.path
        return account.replace('/v1/AUTH_', '', 1)

    def setUp(self):
        if tf.skip or tf.skip2 or tf.skip_if_not_v3:
            raise SkipTest('AUTH VERSION 3 SPECIFIC TEST')
        self.name = uuid4().hex

        def put(url, token, parsed, conn):
            conn.request('PUT', parsed.path + '/' + self.name, '',
                         {'X-Auth-Token': token})
            return check_response(conn)

        resp = retry(put, use_account=self.account)
        resp.read()
        self.assertEqual(resp.status, 201)

    def tearDown(self):
        if tf.skip or tf.skip2 or tf.skip_if_not_v3:
            raise SkipTest

        def get(url, token, parsed, conn):
            conn.request('GET',
                         parsed.path + '/' + self.name + '?format=json', '',
                         {'X-Auth-Token': token})
            return check_response(conn)

        def delete(url, token, parsed, conn, obj):
            conn.request('DELETE',
                         '/'.join([parsed.path, self.name, obj['name']]), '',
                         {'X-Auth-Token': token})
            return check_response(conn)

        while True:
            resp = retry(get, use_account=self.account)
            body = resp.read()
            self.assertTrue(resp.status // 100 == 2, resp.status)
            objs = json.loads(body)
            if not objs:
                break
            for obj in objs:
                resp = retry(delete, obj, use_account=self.account)
                resp.read()
                self.assertEqual(resp.status, 204)

        def delete(url, token, parsed, conn):
            conn.request('DELETE', parsed.path + '/' + self.name, '',
                         {'X-Auth-Token': token})
            return check_response(conn)

        resp = retry(delete, use_account=self.account)
        resp.read()
        self.assertEqual(resp.status, 204)

    def _assert_cross_account_acl_granted(self, granted, grantee_account,
                                          acl):
        '''
        Check whether a given container ACL is granted when a user specified
        by account_b attempts to access a container.
        '''
        # Obtain the first account's string
        first_account = retry(self._get_account, use_account=self.account)

        # Ensure we can't access the container with the grantee account
        def get2(url, token, parsed, conn):
            conn.request('GET', first_account + '/' + self.name, '',
                         {'X-Auth-Token': token})
            return check_response(conn)

        resp = retry(get2, use_account=grantee_account)
        resp.read()
        self.assertEqual(resp.status, 403)

        def put2(url, token, parsed, conn):
            conn.request('PUT', first_account + '/' + self.name + '/object',
                         'test object', {'X-Auth-Token': token})
            return check_response(conn)

        resp = retry(put2, use_account=grantee_account)
        resp.read()
        self.assertEqual(resp.status, 403)

        # Post ACL to the container
        def post(url, token, parsed, conn):
            conn.request('POST', parsed.path + '/' + self.name, '',
                         {'X-Auth-Token': token,
                          'X-Container-Read': acl,
                          'X-Container-Write': acl})
            return check_response(conn)

        resp = retry(post, use_account=self.account)
        resp.read()
        self.assertEqual(resp.status, 204)

        # Check access to container from grantee account with ACL in place
        resp = retry(get2, use_account=grantee_account)
        resp.read()
        expected = 204 if granted else 403
        self.assertEqual(resp.status, expected)
        resp = retry(put2, use_account=grantee_account)
        resp.read()
        expected = 201 if granted else 403
        self.assertEqual(resp.status, expected)

        # Make the container private again
        def post(url, token, parsed, conn):
            conn.request('POST', parsed.path + '/' + self.name, '',
                         {'X-Auth-Token': token,
                          'X-Container-Read': '',
                          'X-Container-Write': ''})
            return check_response(conn)

        resp = retry(post, use_account=self.account)
        resp.read()
        self.assertEqual(resp.status, 204)

        # Ensure we can't access the container with the grantee account again
        resp = retry(get2, use_account=grantee_account)
        resp.read()
        self.assertEqual(resp.status, 403)
        resp = retry(put2, use_account=grantee_account)
        resp.read()
        self.assertEqual(resp.status, 403)


class TestContainerACLsAccount1(BaseTestContainerACLs):
    def test_cross_account_acl_names_with_user_in_non_default_domain(self):
        # names in acls are disallowed when grantee is in a non-default domain
        acl = '%s:%s' % (tf.swift_test_tenant[3], tf.swift_test_user[3])
        self._assert_cross_account_acl_granted(False, 4, acl)

    def test_cross_account_acl_ids_with_user_in_non_default_domain(self):
        # ids are allowed in acls when grantee is in a non-default domain
        tenant_id = retry(self._get_tenant_id, use_account=4)
        acl = '%s:%s' % (tenant_id, '*')
        self._assert_cross_account_acl_granted(True, 4, acl)

    def test_cross_account_acl_names_in_default_domain(self):
        # names are allowed in acls when grantee and project are in
        # the default domain
        acl = '%s:%s' % (tf.swift_test_tenant[1], tf.swift_test_user[1])
        self._assert_cross_account_acl_granted(True, 2, acl)

    def test_cross_account_acl_ids_in_default_domain(self):
        # ids are allowed in acls when grantee and project are in
        # the default domain
        tenant_id = retry(self._get_tenant_id, use_account=2)
        acl = '%s:%s' % (tenant_id, '*')
        self._assert_cross_account_acl_granted(True, 2, acl)


class TestContainerACLsAccount4(BaseTestContainerACLs):
    account = 4

    def test_cross_account_acl_names_with_project_in_non_default_domain(self):
        # names in acls are disallowed when project is in a non-default domain
        acl = '%s:%s' % (tf.swift_test_tenant[0], tf.swift_test_user[0])
        self._assert_cross_account_acl_granted(False, 1, acl)

    def test_cross_account_acl_ids_with_project_in_non_default_domain(self):
        # ids are allowed in acls when project is in a non-default domain
        tenant_id = retry(self._get_tenant_id,
                          use_account=1)
        acl = '%s:%s' % (tenant_id, '*')
        self._assert_cross_account_acl_granted(True, 1, acl)


if __name__ == '__main__':
    unittest2.main()
swift-2.7.1/test/functional/test_access_control.py0000664000567000056710000015012613024044354023555 0ustar jenkinsjenkins00000000000000#!/usr/bin/python
# coding: UTF-8

# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
import uuid
from random import shuffle
from nose import SkipTest
from swiftclient import get_auth, http_connection
import test.functional as tf


def setUpModule():
    tf.setup_package()


def tearDownModule():
    tf.teardown_package()


TEST_CASE_FORMAT = (
    'http_method', 'header', 'account_name', 'container_name', 'object_name',
    'prep_container_header', 'reseller_prefix', 'target_user_name',
    'auth_user_name', 'service_user_name', 'expected')
# http_method           : HTTP methods such as PUT, GET, POST, HEAD and so on
# header                : headers for a request
# account_name          : Account name. Usually the name will be automatically
#                         created by keystone
# container_name        : Container name. If 'UUID' is specified, a container
#                         name will be created automatically
# object_name           : Object name. If 'UUID' is specified, an object
#                         name will be created automatically
# prep_container_header : headers which will be set on the container
# reseller_prefix       : Reseller prefix that will be used for request url.
#                         Can be None or SERVICE to select the user account
#                         prefix or the service prefix respectively
# target_user_name      : a user name which is used for getting the project id
#                         of the target
# auth_user_name        : a user name which is used for getting a token for
#                         X-Auth-Token
# service_user_name     : a user name which is used for getting a token for
#                         X-Service-Token
# expected              : expected status code
#
# a combination of account_name, container_name and object_name
# represents a target.
# +------------+--------------+-----------+---------+
# |account_name|container_name|object_name| target  |
# +------------+--------------+-----------+---------+
# |    None    |     None     |    None   | account |
# +------------+--------------+-----------+---------+
# |    None    |    'UUID'    |    None   |container|
# +------------+--------------+-----------+---------+
# |    None    |    'UUID'    |   'UUID'  |  object |
# +------------+--------------+-----------+---------+
#
# The following users are required to run this functional test.
# No.6, tester6, is added for this test.
# +----+-----------+-------+---------+-------------+
# |No. |  Domain   |Project|User name|    Role     |
# +----+-----------+-------+---------+-------------+
# | 1  |  default  | test  | tester  |    admin    |
# +----+-----------+-------+---------+-------------+
# | 2  |  default  | test2 | tester2 |    admin    |
# +----+-----------+-------+---------+-------------+
# | 3  |  default  | test  | tester3 |  _member_   |
# +----+-----------+-------+---------+-------------+
# | 4  |test-domain| test4 | tester4 |    admin    |
# +----+-----------+-------+---------+-------------+
# | 5  |  default  | test5 | tester5 |   service   |
# +----+-----------+-------+---------+-------------+
# | 6  |  default  | test  | tester6 |ResellerAdmin|
# +----+-----------+-------+---------+-------------+


# A scenario of put for account, container and object with
# several roles.
RBAC_PUT = [
    # PUT container in own account: ok
    ('PUT', None, None, 'UUID', None, None, None, 'tester', 'tester', None, 201),
    ('PUT', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester', 201),
    # PUT container in other users account: not allowed for role admin
    ('PUT', None, None, 'UUID', None, None, None, 'tester2', 'tester', None, 403),
    ('PUT', None, None, 'UUID', None, None, None, 'tester4', 'tester', None, 403),
    # PUT container in other users account: not allowed for role _member_
    ('PUT', None, None, 'UUID', None, None, None, 'tester3', 'tester3', None, 403),
    ('PUT', None, None, 'UUID', None, None, None, 'tester2', 'tester3', None, 403),
    ('PUT', None, None, 'UUID', None, None, None, 'tester4', 'tester3', None, 403),
    # PUT container in other users account: allowed for role ResellerAdmin
    ('PUT', None, None, 'UUID', None, None, None, 'tester6', 'tester6', None, 201),
    ('PUT', None, None, 'UUID', None, None, None, 'tester2', 'tester6', None, 201),
    ('PUT', None, None, 'UUID', None, None, None, 'tester4', 'tester6', None, 201),
    # PUT object in own account: ok
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', None, 201),
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester', 201),
    # PUT object in other users account: not allowed for role admin
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester', None, 403),
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester', None, 403),
    # PUT object in other users account: not allowed for role _member_
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester3', 'tester3', None, 403),
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester3', None, 403),
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester3', None, 403),
    # PUT object in other users account: allowed for role ResellerAdmin
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester6', 'tester6', None, 201),
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester6', None, 201),
    ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester6', None, 201)
]

RBAC_PUT_WITH_SERVICE_PREFIX = [
    # PUT container in own account: ok
    ('PUT', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester5', 201),
    # PUT container in other users account: not allowed for role service
    ('PUT', None, None, 'UUID', None, None, None, 'tester', 'tester3', 'tester5', 403),
    ('PUT', None, None, 'UUID', None, None, None, 'tester', None, 'tester5', 401),
    ('PUT', None, None, 'UUID', None, None, None, 'tester5', 'tester5', None, 403),
    ('PUT', None, None, 'UUID', None, None, None, 'tester2', 'tester5', None, 403),
    ('PUT', None, None, 'UUID', None, None, None, 'tester4', 'tester5', None, 403),
    # PUT object in own account: ok
    ('PUT', None, None, 'UUID', 'UUID',
None, None, 'tester', 'tester', 'tester5', 201), # PUT object in other users account: not allowed for role service ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester3', 'tester5', 403), ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester', None, 'tester5', 401), ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester5', 'tester5', None, 403), ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester5', None, 403), ('PUT', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester5', None, 403), # All following actions are using SERVICE prefix # PUT container in own account: ok ('PUT', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester5', 201), # PUT container fails if wrong user, or only one token sent ('PUT', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('PUT', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', None, 403), ('PUT', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('PUT', None, None, 'UUID', None, None, 'SERVICE', 'tester', None, 'tester5', 401), # PUT object in own account: ok ('PUT', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester5', 201), # PUT object fails if wrong user, or only one token sent ('PUT', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('PUT', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', None, 403), ('PUT', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('PUT', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', None, 'tester5', 401), ] # A scenario of delete for account, container and object with # several roles. RBAC_DELETE = [ # DELETE container in own account: ok ('DELETE', None, None, 'UUID', None, None, None, 'tester', 'tester', None, 204), ('DELETE', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester', 204), # DELETE container in other users account: not allowed for role admin ('DELETE', None, None, 'UUID', None, None, None, 'tester2', 'tester', None, 403), ('DELETE', None, None, 'UUID', None, None, None, 'tester4', 'tester', None, 403), # DELETE container in other users account: not allowed for role _member_ ('DELETE', None, None, 'UUID', None, None, None, 'tester3', 'tester3', None, 403), ('DELETE', None, None, 'UUID', None, None, None, 'tester2', 'tester3', None, 403), ('DELETE', None, None, 'UUID', None, None, None, 'tester4', 'tester3', None, 403), # DELETE container in other users account: allowed for role ResellerAdmin ('DELETE', None, None, 'UUID', None, None, None, 'tester6', 'tester6', None, 204), ('DELETE', None, None, 'UUID', None, None, None, 'tester2', 'tester6', None, 204), ('DELETE', None, None, 'UUID', None, None, None, 'tester4', 'tester6', None, 204), # DELETE object in own account: ok ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', None, 204), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester', 204), # DELETE object in other users account: not allowed for role admin ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester', None, 403), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester', None, 403), # DELETE object in other users account: not allowed for role _member_ ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester3', 'tester3', None, 403), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester3', None, 403), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester4', 
'tester3', None, 403), # DELETE object in other users account: allowed for role ResellerAdmin ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester6', 'tester6', None, 204), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester6', None, 204), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester6', None, 204) ] RBAC_DELETE_WITH_SERVICE_PREFIX = [ # DELETE container in own account: ok ('DELETE', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester5', 204), # DELETE container in other users account: not allowed for role service ('DELETE', None, None, 'UUID', None, None, None, 'tester', 'tester3', 'tester5', 403), ('DELETE', None, None, 'UUID', None, None, None, 'tester', None, 'tester5', 401), ('DELETE', None, None, 'UUID', None, None, None, 'tester5', 'tester5', None, 403), ('DELETE', None, None, 'UUID', None, None, None, 'tester2', 'tester5', None, 403), ('DELETE', None, None, 'UUID', None, None, None, 'tester4', 'tester5', None, 403), # DELETE object in own account: ok ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester5', 204), # DELETE object in other users account: not allowed for role service ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester3', 'tester5', 403), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester', None, 'tester5', 401), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester5', 'tester5', None, 403), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester5', None, 403), ('DELETE', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester5', None, 403), # All following actions are using SERVICE prefix # DELETE container in own account: ok ('DELETE', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester5', 204), # DELETE container fails if wrong user, or only one token sent ('DELETE', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('DELETE', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', None, 403), ('DELETE', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('DELETE', None, None, 'UUID', None, None, 'SERVICE', 'tester', None, 'tester5', 401), # DELETE object in own account: ok ('DELETE', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester5', 204), # DELETE object fails if wrong user, or only one token sent ('DELETE', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('DELETE', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', None, 403), ('DELETE', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('DELETE', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', None, 'tester5', 401) ] # A scenario of get for account, container and object with # several roles. 
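# ---------------------------------------------------------------------------
# Editorial annotation (not part of the original swift-2.7.1 source): the
# sketch below shows roughly how one of the case tuples in the RBAC_* lists
# maps onto an HTTP request, following the TEST_CASE_FORMAT comments above.
# The helper name `_illustrate_rbac_case` and the placeholder token values are
# hypothetical; the real test runner in this module builds its requests
# differently.
def _illustrate_rbac_case(case, auth_token='<token>', service_token=None):
    """Return (method, headers) that a case tuple roughly corresponds to."""
    params = dict(zip(TEST_CASE_FORMAT, case))
    headers = dict(params['header'] or {})
    if params['auth_user_name']:
        # a token obtained for auth_user_name would be sent here
        headers['X-Auth-Token'] = auth_token
    if params['service_user_name'] and service_token:
        # a second token, sent only in the service-token scenarios
        headers['X-Service-Token'] = service_token
    return params['http_method'], headers

# Example (illustrative only):
#   _illustrate_rbac_case(RBAC_PUT[0]) -> ('PUT', {'X-Auth-Token': '<token>'})
# ---------------------------------------------------------------------------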
RBAC_GET = [ # GET own account: ok ('GET', None, None, None, None, None, None, 'tester', 'tester', None, 200), ('GET', None, None, None, None, None, None, 'tester', 'tester', 'tester', 200), # GET other users account: not allowed for role admin ('GET', None, None, None, None, None, None, 'tester2', 'tester', None, 403), ('GET', None, None, None, None, None, None, 'tester4', 'tester', None, 403), # GET other users account: not allowed for role _member_ ('GET', None, None, None, None, None, None, 'tester3', 'tester3', None, 403), ('GET', None, None, None, None, None, None, 'tester2', 'tester3', None, 403), ('GET', None, None, None, None, None, None, 'tester4', 'tester3', None, 403), # GET other users account: allowed for role ResellerAdmin ('GET', None, None, None, None, None, None, 'tester6', 'tester6', None, 200), ('GET', None, None, None, None, None, None, 'tester2', 'tester6', None, 200), ('GET', None, None, None, None, None, None, 'tester4', 'tester6', None, 200), # GET container in own account: ok ('GET', None, None, 'UUID', None, None, None, 'tester', 'tester', None, 200), ('GET', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester', 200), # GET container in other users account: not allowed for role admin ('GET', None, None, 'UUID', None, None, None, 'tester2', 'tester', None, 403), ('GET', None, None, 'UUID', None, None, None, 'tester4', 'tester', None, 403), # GET container in other users account: not allowed for role _member_ ('GET', None, None, 'UUID', None, None, None, 'tester3', 'tester3', None, 403), ('GET', None, None, 'UUID', None, None, None, 'tester2', 'tester3', None, 403), ('GET', None, None, 'UUID', None, None, None, 'tester4', 'tester3', None, 403), # GET container in other users account: allowed for role ResellerAdmin ('GET', None, None, 'UUID', None, None, None, 'tester6', 'tester6', None, 200), ('GET', None, None, 'UUID', None, None, None, 'tester2', 'tester6', None, 200), ('GET', None, None, 'UUID', None, None, None, 'tester4', 'tester6', None, 200), # GET object in own account: ok ('GET', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', None, 200), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester', 200), # GET object in other users account: not allowed for role admin ('GET', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester', None, 403), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester', None, 403), # GET object in other users account: not allowed for role _member_ ('GET', None, None, 'UUID', 'UUID', None, None, 'tester3', 'tester3', None, 403), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester3', None, 403), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester3', None, 403), # GET object in other users account: allowed for role ResellerAdmin ('GET', None, None, 'UUID', 'UUID', None, None, 'tester6', 'tester6', None, 200), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester6', None, 200), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester6', None, 200) ] RBAC_GET_WITH_SERVICE_PREFIX = [ # GET own account: ok ('GET', None, None, None, None, None, None, 'tester', 'tester', 'tester5', 200), # GET other account: not allowed for role service ('GET', None, None, None, None, None, None, 'tester', 'tester3', 'tester5', 403), ('GET', None, None, None, None, None, None, 'tester', None, 'tester5', 401), ('GET', None, None, None, None, None, None, 'tester5', 'tester5', None, 403), ('GET', None, None, None, None, None, None, 'tester2', 
'tester5', None, 403), ('GET', None, None, None, None, None, None, 'tester4', 'tester5', None, 403), # GET container in own account: ok ('GET', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester5', 200), # GET container in other users account: not allowed for role service ('GET', None, None, 'UUID', None, None, None, 'tester', 'tester3', 'tester5', 403), ('GET', None, None, 'UUID', None, None, None, 'tester', None, 'tester5', 401), ('GET', None, None, 'UUID', None, None, None, 'tester5', 'tester5', None, 403), ('GET', None, None, 'UUID', None, None, None, 'tester2', 'tester5', None, 403), ('GET', None, None, 'UUID', None, None, None, 'tester4', 'tester5', None, 403), # GET object in own account: ok ('GET', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester5', 200), # GET object fails if wrong user, or only one token sent ('GET', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester3', 'tester5', 403), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester', None, 'tester5', 401), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester5', 'tester5', None, 403), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester5', None, 403), ('GET', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester5', None, 403), # All following actions are using SERVICE prefix # GET own account: ok ('GET', None, None, None, None, None, 'SERVICE', 'tester', 'tester', 'tester5', 200), # GET other account: not allowed for role service ('GET', None, None, None, None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('GET', None, None, None, None, None, 'SERVICE', 'tester', 'tester', None, 403), ('GET', None, None, None, None, None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('GET', None, None, None, None, None, 'SERVICE', 'tester', None, 'tester5', 401), # GET container in own account: ok ('GET', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester5', 200), # GET container fails if wrong user, or only one token sent ('GET', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('GET', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', None, 403), ('GET', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('GET', None, None, 'UUID', None, None, 'SERVICE', 'tester', None, 'tester5', 401), # GET object in own account: ok ('GET', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester5', 200), # GET object fails if wrong user, or only one token sent ('GET', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('GET', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', None, 403), ('GET', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('GET', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', None, 'tester5', 401) ] # A scenario of head for account, container and object with # several roles. 
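# ---------------------------------------------------------------------------
# Editorial annotation (not part of the original source): the SERVICE-prefix
# GET cases above only expect a 2xx when *both* tokens are supplied, i.e. the
# composite-token pattern exercised by the service-token scenarios.  A minimal
# sketch of such a request, assuming a host, a SERVICE_-prefixed account path
# and two already-acquired tokens (all values hypothetical):
def _example_service_token_get(host='127.0.0.1:8080',
                               path='/v1/SERVICE_test',
                               user_token='<user token>',
                               service_token='<service token>'):
    import httplib
    conn = httplib.HTTPConnection(host)
    # The user token authenticates the end user; the service token shows the
    # request is made on the user's behalf by a trusted service user.
    conn.request('GET', path, '', {'X-Auth-Token': user_token,
                                   'X-Service-Token': service_token})
    return conn.getresponse()
# ---------------------------------------------------------------------------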
RBAC_HEAD = [ # HEAD own account: ok ('HEAD', None, None, None, None, None, None, 'tester', 'tester', None, 204), ('HEAD', None, None, None, None, None, None, 'tester', 'tester', 'tester', 204), # HEAD other users account: not allowed for role admin ('HEAD', None, None, None, None, None, None, 'tester2', 'tester', None, 403), ('HEAD', None, None, None, None, None, None, 'tester4', 'tester', None, 403), # HEAD other users account: not allowed for role _member_ ('HEAD', None, None, None, None, None, None, 'tester3', 'tester3', None, 403), ('HEAD', None, None, None, None, None, None, 'tester2', 'tester3', None, 403), ('HEAD', None, None, None, None, None, None, 'tester4', 'tester3', None, 403), # HEAD other users account: allowed for role ResellerAdmin ('HEAD', None, None, None, None, None, None, 'tester6', 'tester6', None, 204), ('HEAD', None, None, None, None, None, None, 'tester2', 'tester6', None, 204), ('HEAD', None, None, None, None, None, None, 'tester4', 'tester6', None, 204), # HEAD container in own account: ok ('HEAD', None, None, 'UUID', None, None, None, 'tester', 'tester', None, 204), ('HEAD', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester', 204), # HEAD container in other users account: not allowed for role admin ('HEAD', None, None, 'UUID', None, None, None, 'tester2', 'tester', None, 403), ('HEAD', None, None, 'UUID', None, None, None, 'tester4', 'tester', None, 403), # HEAD container in other users account: not allowed for role _member_ ('HEAD', None, None, 'UUID', None, None, None, 'tester3', 'tester3', None, 403), ('HEAD', None, None, 'UUID', None, None, None, 'tester2', 'tester3', None, 403), ('HEAD', None, None, 'UUID', None, None, None, 'tester4', 'tester3', None, 403), # HEAD container in other users account: allowed for role ResellerAdmin ('HEAD', None, None, 'UUID', None, None, None, 'tester6', 'tester6', None, 204), ('HEAD', None, None, 'UUID', None, None, None, 'tester2', 'tester6', None, 204), ('HEAD', None, None, 'UUID', None, None, None, 'tester4', 'tester6', None, 204), # HEAD object in own account: ok ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', None, 200), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester', 200), # HEAD object in other users account: not allowed for role admin ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester', None, 403), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester', None, 403), # HEAD object in other users account: not allowed for role _member_ ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester3', 'tester3', None, 403), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester3', None, 403), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester3', None, 403), # HEAD object in other users account: allowed for role ResellerAdmin ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester6', 'tester6', None, 200), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester6', None, 200), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester6', None, 200) ] RBAC_HEAD_WITH_SERVICE_PREFIX = [ # HEAD own account: ok ('HEAD', None, None, None, None, None, None, 'tester', 'tester', 'tester5', 204), # HEAD other account: not allowed for role service ('HEAD', None, None, None, None, None, None, 'tester', 'tester3', 'tester5', 403), ('HEAD', None, None, None, None, None, None, 'tester', None, 'tester5', 401), ('HEAD', None, None, None, None, None, None, 'tester5', 'tester5', None, 403), ('HEAD', 
None, None, None, None, None, None, 'tester2', 'tester5', None, 403), ('HEAD', None, None, None, None, None, None, 'tester4', 'tester5', None, 403), # HEAD container in own account: ok ('HEAD', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester5', 204), # HEAD container in other users account: not allowed for role service ('HEAD', None, None, 'UUID', None, None, None, 'tester', 'tester3', 'tester5', 403), ('HEAD', None, None, 'UUID', None, None, None, 'tester', None, 'tester5', 401), ('HEAD', None, None, 'UUID', None, None, None, 'tester5', 'tester5', None, 403), ('HEAD', None, None, 'UUID', None, None, None, 'tester2', 'tester5', None, 403), ('HEAD', None, None, 'UUID', None, None, None, 'tester4', 'tester5', None, 403), # HEAD object in own account: ok ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester5', 200), # HEAD object fails if wrong user, or only one token sent ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester3', 'tester5', 403), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester', None, 'tester5', 401), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester5', 'tester5', None, 403), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester5', None, 403), ('HEAD', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester5', None, 403), # All following actions are using SERVICE prefix # HEAD own account: ok ('HEAD', None, None, None, None, None, 'SERVICE', 'tester', 'tester', 'tester5', 204), # HEAD other account: not allowed for role service ('HEAD', None, None, None, None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('HEAD', None, None, None, None, None, 'SERVICE', 'tester', 'tester', None, 403), ('HEAD', None, None, None, None, None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('HEAD', None, None, None, None, None, 'SERVICE', 'tester', None, 'tester5', 401), # HEAD container in own account: ok ('HEAD', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester5', 204), # HEAD container in other users account: not allowed for role service ('HEAD', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('HEAD', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', None, 403), ('HEAD', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('HEAD', None, None, 'UUID', None, None, 'SERVICE', 'tester', None, 'tester5', 401), # HEAD object in own account: ok ('HEAD', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester5', 200), # HEAD object fails if wrong user, or only one token sent ('HEAD', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('HEAD', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', None, 403), ('HEAD', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('HEAD', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', None, 'tester5', 401) ] # A scenario of post for account, container and object with # several roles. 
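# ---------------------------------------------------------------------------
# Editorial annotation (not part of the original source): every scenario list
# in this module repeats the same 11-field layout declared by
# TEST_CASE_FORMAT, so a quick sanity check like the sketch below (the helper
# name is hypothetical) can catch a malformed case tuple before it surfaces as
# a confusing failure inside the test runner.
def _check_case_arity(cases):
    for case in cases:
        assert len(case) == len(TEST_CASE_FORMAT), \
            'malformed RBAC case: %r' % (case,)

# Example (illustrative only):
#   _check_case_arity(RBAC_HEAD)
# ---------------------------------------------------------------------------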
RBAC_POST = [ # POST own account: ok ('POST', None, None, None, None, None, None, 'tester', 'tester', None, 204), ('POST', None, None, None, None, None, None, 'tester', 'tester', 'tester', 204), # POST other users account: not allowed for role admin ('POST', None, None, None, None, None, None, 'tester2', 'tester', None, 403), ('POST', None, None, None, None, None, None, 'tester4', 'tester', None, 403), # POST other users account: not allowed for role _member_ ('POST', None, None, None, None, None, None, 'tester3', 'tester3', None, 403), ('POST', None, None, None, None, None, None, 'tester2', 'tester3', None, 403), ('POST', None, None, None, None, None, None, 'tester4', 'tester3', None, 403), # POST other users account: allowed for role ResellerAdmin ('POST', None, None, None, None, None, None, 'tester6', 'tester6', None, 204), ('POST', None, None, None, None, None, None, 'tester2', 'tester6', None, 204), ('POST', None, None, None, None, None, None, 'tester4', 'tester6', None, 204), # POST container in own account: ok ('POST', None, None, 'UUID', None, None, None, 'tester', 'tester', None, 204), ('POST', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester', 204), # POST container in other users account: not allowed for role admin ('POST', None, None, 'UUID', None, None, None, 'tester2', 'tester', None, 403), ('POST', None, None, 'UUID', None, None, None, 'tester4', 'tester', None, 403), # POST container in other users account: not allowed for role _member_ ('POST', None, None, 'UUID', None, None, None, 'tester3', 'tester3', None, 403), ('POST', None, None, 'UUID', None, None, None, 'tester2', 'tester3', None, 403), ('POST', None, None, 'UUID', None, None, None, 'tester4', 'tester3', None, 403), # POST container in other users account: allowed for role ResellerAdmin ('POST', None, None, 'UUID', None, None, None, 'tester6', 'tester6', None, 204), ('POST', None, None, 'UUID', None, None, None, 'tester2', 'tester6', None, 204), ('POST', None, None, 'UUID', None, None, None, 'tester4', 'tester6', None, 204), # POST object in own account: ok ('POST', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', None, 202), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester', 202), # POST object in other users account: not allowed for role admin ('POST', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester', None, 403), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester', None, 403), # POST object in other users account: not allowed for role _member_ ('POST', None, None, 'UUID', 'UUID', None, None, 'tester3', 'tester3', None, 403), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester3', None, 403), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester3', None, 403), # POST object in other users account: allowed for role ResellerAdmin ('POST', None, None, 'UUID', 'UUID', None, None, 'tester6', 'tester6', None, 202), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester6', None, 202), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester6', None, 202) ] RBAC_POST_WITH_SERVICE_PREFIX = [ # POST own account: ok ('POST', None, None, None, None, None, None, 'tester', 'tester', 'tester5', 204), # POST own account: ok ('POST', None, None, None, None, None, None, 'tester', 'tester3', 'tester5', 403), ('POST', None, None, None, None, None, None, 'tester', None, 'tester5', 401), ('POST', None, None, None, None, None, None, 'tester5', 'tester5', None, 403), ('POST', None, None, None, None, 
None, None, 'tester2', 'tester5', None, 403), ('POST', None, None, None, None, None, None, 'tester4', 'tester5', None, 403), # POST container in own account: ok ('POST', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester5', 204), # POST container in other users account: not allowed for role service ('POST', None, None, 'UUID', None, None, None, 'tester', 'tester3', 'tester5', 403), ('POST', None, None, 'UUID', None, None, None, 'tester', None, 'tester5', 401), ('POST', None, None, 'UUID', None, None, None, 'tester5', 'tester5', None, 403), ('POST', None, None, 'UUID', None, None, None, 'tester2', 'tester5', None, 403), ('POST', None, None, 'UUID', None, None, None, 'tester4', 'tester5', None, 403), # POST object in own account: ok ('POST', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester5', 202), # POST object fails if wrong user, or only one token sent ('POST', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester3', 'tester5', 403), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester', None, 'tester5', 401), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester5', 'tester5', None, 403), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester5', None, 403), ('POST', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester5', None, 403), # All following actions are using SERVICE prefix # POST own account: ok ('POST', None, None, None, None, None, 'SERVICE', 'tester', 'tester', 'tester5', 204), # POST other account: not allowed for role service ('POST', None, None, None, None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('POST', None, None, None, None, None, 'SERVICE', 'tester', 'tester', None, 403), ('POST', None, None, None, None, None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('POST', None, None, None, None, None, 'SERVICE', 'tester', None, 'tester5', 401), # POST container in own account: ok ('POST', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester5', 204), # POST container in other users account: not allowed for role service ('POST', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('POST', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', None, 403), ('POST', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('POST', None, None, 'UUID', None, None, 'SERVICE', 'tester', None, 'tester5', 401), # POST object in own account: ok ('POST', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester5', 202), # POST object fails if wrong user, or only one token sent ('POST', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester3', 'tester5', 403), ('POST', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', None, 403), ('POST', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester', 403), ('POST', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', None, 'tester5', 401) ] # A scenario of options for account, container and object with # several roles. 
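# Illustrative sketch (not part of the original scenarios): the CORS entries
# in the OPTIONS scenarios work by having _prepare() PUT the test container
# with the prep_container_header field (if any), after which the OPTIONS
# request is sent with the Origin and Access-Control-Request-Method headers
# from the header field.  Roughly, using the SwiftClient defined later in
# this module ('some-container' is a placeholder name):
#
#   client = SwiftClient()
#   storage_url, token = client.auth('tester')
#   # assume the container was created with
#   #   X-Container-Meta-Access-Control-Allow-Origin: http://invalid.com
#   resp = client.send_request(
#       'OPTIONS', storage_url + '/some-container', token,
#       headers={'Origin': 'http://localhost',
#                'Access-Control-Request-Method': 'GET'})
#   # the matching scenario below expects 401, because the request Origin
#   # does not match the origin allowed by the container metadata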
RBAC_OPTIONS = [ # OPTIONS request is always ok ('OPTIONS', None, None, None, None, None, None, 'tester', 'tester', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester', 'tester', 'tester', 200), ('OPTIONS', None, None, None, None, None, None, 'tester2', 'tester', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester4', 'tester', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester3', 'tester3', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester2', 'tester3', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester4', 'tester3', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester6', 'tester6', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester2', 'tester6', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester4', 'tester6', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester', 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester2', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester4', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester3', 'tester3', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester2', 'tester3', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester4', 'tester3', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester6', 'tester6', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester2', 'tester6', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester4', 'tester6', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester', 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester3', 'tester3', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester3', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester3', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester6', 'tester6', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester6', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester6', None, 200), ('OPTIONS', None, None, None, None, {"X-Container-Meta-Access-Control-Allow-Origin": "*"}, None, 'tester', 'tester', None, 200), ('OPTIONS', None, None, None, None, {"X-Container-Meta-Access-Control-Allow-Origin": "http://invalid.com"}, None, 'tester', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', None, {"X-Container-Meta-Access-Control-Allow-Origin": "*"}, None, 'tester', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', None, {"X-Container-Meta-Access-Control-Allow-Origin": "http://invalid.com"}, None, 'tester', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', {"X-Container-Meta-Access-Control-Allow-Origin": "*"}, None, 'tester', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', {"X-Container-Meta-Access-Control-Allow-Origin": "http://invalid.com"}, None, 'tester', 'tester', None, 200), ('OPTIONS', {"Origin": "http://localhost", "Access-Control-Request-Method": "GET"}, None, None, None, None, None, 'tester', 'tester', None, 200), 
('OPTIONS', {"Origin": "http://localhost", "Access-Control-Request-Method": "GET"}, None, None, None, {"X-Container-Meta-Access-Control-Allow-Origin": "*"}, None, 'tester', 'tester', None, 200), ('OPTIONS', {"Origin": "http://localhost", "Access-Control-Request-Method": "GET"}, None, None, None, {"X-Container-Meta-Access-Control-Allow-Origin": "http://invalid.com"}, None, 'tester', 'tester', None, 200), ('OPTIONS', {"Origin": "http://localhost", "Access-Control-Request-Method": "GET"}, None, 'UUID', None, None, None, 'tester', 'tester', None, 401), ('OPTIONS', {"Origin": "http://localhost", "Access-Control-Request-Method": "GET"}, None, 'UUID', None, {"X-Container-Meta-Access-Control-Allow-Origin": "*"}, None, 'tester', 'tester', None, 200), # Not OK for container: wrong origin ('OPTIONS', {"Origin": "http://localhost", "Access-Control-Request-Method": "GET"}, None, 'UUID', None, {"X-Container-Meta-Access-Control-Allow-Origin": "http://invalid.com"}, None, 'tester', 'tester', None, 401), # Not OK for object: missing X-Container-Meta-Access-Control-Allow-Origin ('OPTIONS', {"Origin": "http://localhost", "Access-Control-Request-Method": "GET"}, None, 'UUID', 'UUID', None, None, 'tester', 'tester', None, 401), ('OPTIONS', {"Origin": "http://localhost", "Access-Control-Request-Method": "GET"}, None, 'UUID', 'UUID', {"X-Container-Meta-Access-Control-Allow-Origin": "*"}, None, 'tester', None, None, 200), # Not OK for object: wrong origin ('OPTIONS', {"Origin": "http://localhost", "Access-Control-Request-Method": "GET"}, None, 'UUID', 'UUID', {"X-Container-Meta-Access-Control-Allow-Origin": "http://invalid.com"}, None, 'tester', 'tester', None, 401) ] RBAC_OPTIONS_WITH_SERVICE_PREFIX = [ # OPTIONS request is always ok ('OPTIONS', None, None, None, None, None, None, 'tester', 'tester', 'tester5', 200), ('OPTIONS', None, None, None, None, None, None, 'tester', 'tester3', 'tester5', 200), ('OPTIONS', None, None, None, None, None, None, 'tester', None, 'tester5', 200), ('OPTIONS', None, None, None, None, None, None, 'tester5', 'tester5', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester2', 'tester5', None, 200), ('OPTIONS', None, None, None, None, None, None, 'tester4', 'tester5', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester', 'tester', 'tester5', 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester', 'tester3', 'tester5', 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester', None, 'tester5', 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester5', 'tester5', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester2', 'tester5', None, 200), ('OPTIONS', None, None, 'UUID', None, None, None, 'tester4', 'tester5', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester', 'tester5', 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester', 'tester3', 'tester5', 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester', None, 'tester5', 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester5', 'tester5', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester2', 'tester5', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, None, 'tester4', 'tester5', None, 200), ('OPTIONS', None, None, None, None, None, 'SERVICE', 'tester', 'tester', 'tester5', 200), ('OPTIONS', None, None, None, None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 200), ('OPTIONS', None, None, None, None, None, 'SERVICE', 'tester', 'tester', None, 200), ('OPTIONS', None, 
None, None, None, None, 'SERVICE', 'tester', 'tester', 'tester', 200), ('OPTIONS', None, None, None, None, None, 'SERVICE', 'tester', None, 'tester5', 200), ('OPTIONS', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester5', 200), ('OPTIONS', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester3', 'tester5', 200), ('OPTIONS', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', None, None, 'SERVICE', 'tester', 'tester', 'tester', 200), ('OPTIONS', None, None, 'UUID', None, None, 'SERVICE', 'tester', None, 'tester5', 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester5', 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester3', 'tester5', 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', None, 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', 'tester', 'tester', 200), ('OPTIONS', None, None, 'UUID', 'UUID', None, 'SERVICE', 'tester', None, 'tester5', 200) ] class SwiftClient(object): _tokens = {} def __init__(self): self._set_users() self.auth_url = tf.swift_test_auth self.insecure = tf.insecure self.auth_version = tf.swift_test_auth_version def _set_users(self): self.users = {} for index in range(6): self.users[tf.swift_test_user[index]] = { 'account': tf.swift_test_tenant[index], 'password': tf.swift_test_key[index], 'domain': tf.swift_test_domain[index]} def _get_auth(self, user_name): info = self.users.get(user_name) if info is None: return None, None os_options = {'user_domain_name': info['domain'], 'project_domain_name': info['domain']} authargs = dict(snet=False, tenant_name=info['account'], auth_version=self.auth_version, os_options=os_options, insecure=self.insecure) storage_url, token = get_auth( self.auth_url, user_name, info['password'], **authargs) return storage_url, token def auth(self, user_name): storage_url, token = SwiftClient._tokens.get(user_name, (None, None)) if not token: SwiftClient._tokens[user_name] = self._get_auth(user_name) storage_url, token = SwiftClient._tokens.get(user_name) return storage_url, token def send_request(self, method, url, token=None, headers=None, service_token=None): headers = {} if headers is None else headers.copy() headers.update({'Content-Type': 'application/json', 'Accept': 'application/json'}) if token: headers['X-Auth-Token'] = token if service_token: headers['X-Service-Token'] = service_token if self.insecure: parsed, conn = http_connection(url, insecure=self.insecure) else: parsed, conn = http_connection(url) conn.request(method, parsed.path, headers=headers) resp = conn.getresponse() return resp class BaseTestAC(unittest.TestCase): def setUp(self): self.reseller_admin = tf.swift_test_user[5] self.client = SwiftClient() def _create_resource_url(self, storage_url, account=None, container=None, obj=None, reseller_prefix=None): # e.g. 
# storage_url = 'http://localhost/v1/AUTH_xxx' # storage_url_list[:-1] is ['http:', '', 'localhost', 'v1'] # storage_url_list[-1] is 'AUTH_xxx' storage_url_list = storage_url.rstrip('/').split('/') base_url = '/'.join(storage_url_list[:-1]) if account is None: account = storage_url_list[-1] if reseller_prefix == 'SERVICE': # replace endpoint reseller prefix with service reseller prefix i = (account.index('_') + 1) if '_' in account else 0 account = tf.swift_test_service_prefix + account[i:] return '/'.join([part for part in (base_url, account, container, obj) if part]) def _put_container(self, storage_url, token, test_case): resource_url = self._create_resource_url( storage_url, test_case['account_name'], test_case['container_name'], reseller_prefix=test_case['reseller_prefix']) self.created_resources.append(resource_url) self.client.send_request('PUT', resource_url, token, headers=test_case['prep_container_header']) def _put_object(self, storage_url, token, test_case): resource_url = self._create_resource_url( storage_url, test_case['account_name'], test_case['container_name'], test_case['object_name'], reseller_prefix=test_case['reseller_prefix']) self.created_resources.append(resource_url) self.client.send_request('PUT', resource_url, token) def _get_storage_url_and_token(self, storage_url_user, token_user): storage_url, _junk = self.client.auth(storage_url_user) _junk, token = self.client.auth(token_user) return storage_url, token def _prepare(self, test_case): storage_url, reseller_token = self._get_storage_url_and_token( test_case['target_user_name'], self.reseller_admin) if test_case['http_method'] in ('GET', 'POST', 'DELETE', 'HEAD', 'OPTIONS'): temp_test_case = test_case.copy() if test_case['container_name'] is None: # When the target is for account, dummy container will be # created to create an account. This account is created by # account_autocreate. 
temp_test_case['container_name'] = uuid.uuid4().hex self._put_container(storage_url, reseller_token, temp_test_case) if test_case['object_name']: self._put_object(storage_url, reseller_token, test_case) elif test_case['http_method'] in ('PUT',): if test_case['object_name']: self._put_container(storage_url, reseller_token, test_case) def _execute(self, test_case): storage_url, token = self._get_storage_url_and_token( test_case['target_user_name'], test_case['auth_user_name']) service_user = test_case['service_user_name'] service_token = (None if service_user is None else self.client.auth(service_user)[1]) resource_url = self._create_resource_url( storage_url, test_case['account_name'], test_case['container_name'], test_case['object_name'], test_case['reseller_prefix']) if test_case['http_method'] in ('PUT'): self.created_resources.append(resource_url) resp = self.client.send_request(test_case['http_method'], resource_url, token, headers=test_case['header'], service_token=service_token) return resp.status def _cleanup(self): _junk, reseller_token = self.client.auth(self.reseller_admin) for resource_url in reversed(self.created_resources): resp = self.client.send_request('DELETE', resource_url, reseller_token) self.assertIn(resp.status, (204, 404)) def _convert_data(self, data): test_case = dict(zip(TEST_CASE_FORMAT, data)) if test_case['container_name'] == 'UUID': test_case['container_name'] = uuid.uuid4().hex if test_case['object_name'] == 'UUID': test_case['object_name'] = uuid.uuid4().hex return test_case def _run_scenario(self, scenario): for data in scenario: test_case = self._convert_data(data) self.created_resources = [] try: self._prepare(test_case) result = self._execute(test_case) self.assertEqual(test_case['expected'], result, 'Expected %s but got %s for test case %s' % (test_case['expected'], result, test_case)) finally: self._cleanup() class TestRBAC(BaseTestAC): def test_rbac(self): if any((tf.skip, tf.skip2, tf.skip3, tf.skip_if_not_v3, tf.skip_if_no_reseller_admin)): raise SkipTest scenario_rbac = RBAC_PUT + RBAC_DELETE + RBAC_GET +\ RBAC_HEAD + RBAC_POST + RBAC_OPTIONS shuffle(scenario_rbac) self._run_scenario(scenario_rbac) def test_rbac_with_service_prefix(self): if any((tf.skip, tf.skip2, tf.skip3, tf.skip_if_not_v3, tf.skip_service_tokens, tf.skip_if_no_reseller_admin)): raise SkipTest scenario_rbac = RBAC_PUT_WITH_SERVICE_PREFIX +\ RBAC_DELETE_WITH_SERVICE_PREFIX +\ RBAC_GET_WITH_SERVICE_PREFIX +\ RBAC_HEAD_WITH_SERVICE_PREFIX +\ RBAC_POST_WITH_SERVICE_PREFIX +\ RBAC_OPTIONS_WITH_SERVICE_PREFIX shuffle(scenario_rbac) self._run_scenario(scenario_rbac) if __name__ == '__main__': unittest.main() swift-2.7.1/LICENSE0000664000567000056710000002613613024044352015030 0ustar jenkinsjenkins00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. 
For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. 
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
swift-2.7.1/bandit.yaml0000664000567000056710000001252213024044354016144 0ustar jenkinsjenkins00000000000000# optional: after how many files to update progress #show_progress_every: 100 # optional: plugins directory name #plugins_dir: 'plugins' # optional: plugins discovery name pattern plugin_name_pattern: '*.py' # optional: terminal escape sequences to display colors #output_colors: # DEFAULT: '\033[0m' # HEADER: '\033[95m' # LOW: '\033[94m' # MEDIUM: '\033[93m' # HIGH: '\033[91m' # optional: log format string #log_format: "[%(module)s]\t%(levelname)s\t%(message)s" # globs of files which should be analyzed include: - '*.py' # a list of strings, which if found in the path will cause files to be # excluded # for example /tests/ - to remove all all files in tests directory #exclude_dirs: # - '/tests/' #configured for swift profiles: gate: include: - blacklist_calls - blacklist_imports - exec_used - linux_commands_wildcard_injection - request_with_no_cert_validation - set_bad_file_permissions - subprocess_popen_with_shell_equals_true - ssl_with_bad_version - password_config_option_not_marked_secret # - any_other_function_with_shell_equals_true # - ssl_with_bad_defaults # - jinja2_autoescape_false # - use_of_mako_templates # - subprocess_without_shell_equals_true # - any_other_function_with_shell_equals_true # - start_process_with_a_shell # - start_process_with_no_shell # - hardcoded_sql_expressions # - hardcoded_tmp_director # - linux_commands_wildcard_injection #For now some items are commented which could be included as per use later. blacklist_calls: bad_name_sets: # - pickle: # qualnames: [pickle.loads, pickle.load, pickle.Unpickler, # cPickle.loads, cPickle.load, cPickle.Unpickler] # level: LOW # message: "Pickle library appears to be in use, possible security #issue." - marshal: qualnames: [marshal.load, marshal.loads] message: "Deserialization with the marshal module is possibly dangerous." # - md5: # qualnames: [hashlib.md5] # level: LOW # message: "Use of insecure MD5 hash function." - mktemp_q: qualnames: [tempfile.mktemp] message: "Use of insecure and deprecated function (mktemp)." # - eval: # qualnames: [eval] # level: LOW # message: "Use of possibly insecure function - consider using safer #ast.literal_eval." - mark_safe: names: [mark_safe] message: "Use of mark_safe() may expose cross-site scripting vulnerabilities and should be reviewed." - httpsconnection: qualnames: [httplib.HTTPSConnection] message: "Use of HTTPSConnection does not provide security, see https://wiki.openstack.org/wiki/OSSN/OSSN-0033" - yaml_load: qualnames: [yaml.load] message: "Use of unsafe yaml load. Allows instantiation of arbitrary objects. Consider yaml.safe_load()." - urllib_urlopen: qualnames: [urllib.urlopen, urllib.urlretrieve, urllib.URLopener, urllib.FancyURLopener, urllib2.urlopen, urllib2.Request] message: "Audit url open for permitted schemes. Allowing use of file:/ or custom schemes is often unexpected." - paramiko_injection: qualnames: [paramiko.exec_command, paramiko.invoke_shell] message: "Paramiko exec_command() and invoke_shell() usage may expose command injection vulnerabilities and should be reviewed." shell_injection: # Start a process using the subprocess module, or one of its wrappers. subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output, utils.execute, utils.execute_with_timeout] # Start a process with a function vulnerable to shell injection. 
shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3, popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput] # Start a process with a function that is not vulnerable to shell # injection. no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv,os.execve, os.execvp, os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve, os.spawnvp, os.spawnvpe, os.startfile] blacklist_imports: bad_import_sets: - telnet: imports: [telnetlib] level: HIGH message: "Telnet is considered insecure. Use SSH or some other encrypted protocol." - info_libs: imports: [Crypto] level: LOW message: "Consider possible security implications associated with #{module} module." hardcoded_password: word_list: "wordlist/default-passwords" ssl_with_bad_version: bad_protocol_versions: - 'PROTOCOL_SSLv2' - 'SSLv2_METHOD' - 'SSLv23_METHOD' - 'PROTOCOL_SSLv3' # strict option - 'PROTOCOL_TLSv1' # strict option - 'SSLv3_METHOD' # strict option - 'TLSv1_METHOD' # strict option password_config_option_not_marked_secret: function_names: - oslo.config.cfg.StrOpt - oslo_config.cfg.StrOpt swift-2.7.1/README.md0000664000567000056710000000617313024044354015303 0ustar jenkinsjenkins00000000000000# Swift A distributed object storage system designed to scale from a single machine to thousands of servers. Swift is optimized for multi-tenancy and high concurrency. Swift is ideal for backups, web and mobile content, and any other unstructured data that can grow without bound. Swift provides a simple, REST-based API fully documented at http://docs.openstack.org/. Swift was originally developed as the basis for Rackspace's Cloud Files and was open-sourced in 2010 as part of the OpenStack project. It has since grown to include contributions from many companies and has spawned a thriving ecosystem of 3rd party tools. Swift's contributors are listed in the AUTHORS file. ## Docs To build documentation install sphinx (`pip install sphinx`), run `python setup.py build_sphinx`, and then browse to /doc/build/html/index.html. These docs are auto-generated after every commit and available online at http://docs.openstack.org/developer/swift/. ## For Developers The best place to get started is the ["SAIO - Swift All In One"](http://docs.openstack.org/developer/swift/development_saio.html). This document will walk you through setting up a development cluster of Swift in a VM. The SAIO environment is ideal for running small-scale tests against swift and trying out new features and bug fixes. You can run unit tests with `.unittests` and functional tests with `.functests`. If you would like to start contributing, check out these [notes](CONTRIBUTING.md) to help you get started. ### Code Organization * bin/: Executable scripts that are the processes run by the deployer * doc/: Documentation * etc/: Sample config files * swift/: Core code * account/: account server * common/: code shared by different modules * middleware/: "standard", officially-supported middleware * ring/: code implementing Swift's ring * container/: container server * obj/: object server * proxy/: proxy server * test/: Unit and functional tests ### Data Flow Swift is a WSGI application and uses eventlet's WSGI server. After the processes are running, the entry point for new requests is the `Application` class in `swift/proxy/server.py`. From there, a controller is chosen, and the request is processed. The proxy may choose to forward the request to a back- end server. 
For example, the entry point for requests to the object server is the `ObjectController` class in `swift/obj/server.py`. ## For Deployers Deployer docs are also available at http://docs.openstack.org/developer/swift/. A good starting point is at http://docs.openstack.org/developer/swift/deployment_guide.html You can run functional tests against a swift cluster with `.functests`. These functional tests require `/etc/swift/test.conf` to run. A sample config file can be found in this source tree in `test/sample.conf`. ## For Client Apps For client applications, official Python language bindings are provided at http://github.com/openstack/python-swiftclient. Complete API documentation at http://docs.openstack.org/api/openstack-object-storage/1.0/content/ ---- For more information come hang out in #openstack-swift on freenode. Thanks, The Swift Development Team swift-2.7.1/setup.py0000664000567000056710000000202513024044352015524 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. # solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr'], pbr=True) swift-2.7.1/bindep.txt0000664000567000056710000000073413024044354016023 0ustar jenkinsjenkins00000000000000# This is a cross-platform list tracking distribution packages needed by tests; # see http://docs.openstack.org/infra/bindep/ for additional information. 
build-essential [platform:dpkg] gcc [platform:rpm] gettext liberasurecode-dev [platform:dpkg] liberasurecode-devel [platform:rpm] libffi-dev [platform:dpkg] libffi-devel [platform:rpm] memcached python-dev [platform:dpkg] python-devel [platform:rpm] rsync xfsprogs libssl-dev [platform:dpkg] openssl-devel [platform:rpm] swift-2.7.1/examples/0000775000567000056710000000000013024044470015632 5ustar jenkinsjenkins00000000000000swift-2.7.1/examples/apache2/0000775000567000056710000000000013024044470017135 5ustar jenkinsjenkins00000000000000swift-2.7.1/examples/apache2/container-server.template0000664000567000056710000000153513024044352024163 0ustar jenkinsjenkins00000000000000# Container Server VHOST Template For Apache2 # # Change %PORT% to the port that you wish to use on your system # Change %SERVICENAME% to the service name you are using # Change %USER% to the system user that will run the daemon process # Change the debug level as you see fit # # For example: # Replace %PORT% by 6011 # Replace %SERVICENAME% by container-server-1 # Replace %USER% with apache (or remove it for default) NameVirtualHost *:%PORT% Listen %PORT% WSGIDaemonProcess %SERVICENAME% processes=5 threads=1 user=%USER% WSGIProcessGroup %SERVICENAME% WSGIScriptAlias / /var/www/swift/%SERVICENAME%.wsgi WSGIApplicationGroup %{GLOBAL} LimitRequestFields 200 ErrorLog /var/log/%APACHE_NAME%/%SERVICENAME% LogLevel debug CustomLog /var/log/%APACHE_NAME%/access.log combined swift-2.7.1/examples/apache2/object-server.template0000664000567000056710000000152713024044352023450 0ustar jenkinsjenkins00000000000000# Object Server VHOST Template For Apache2 # # Change %PORT% to the port that you wish to use on your system # Change %SERVICENAME% to the service name you are using # Change %USER% to the system user that will run the daemon process # Change the debug level as you see fit # # For example: # Replace %PORT% by 6010 # Replace %SERVICENAME% by object-server-1 # Replace %USER% with apache (or remove it for default) NameVirtualHost *:%PORT% Listen %PORT% WSGIDaemonProcess %SERVICENAME% processes=5 threads=1 user=%USER% WSGIProcessGroup %SERVICENAME% WSGIScriptAlias / /var/www/swift/%SERVICENAME%.wsgi WSGIApplicationGroup %{GLOBAL} LimitRequestFields 200 ErrorLog /var/log/%APACHE_NAME%/%SERVICENAME% LogLevel debug CustomLog /var/log/%APACHE_NAME%/access.log combined swift-2.7.1/examples/apache2/account-server.template0000664000567000056710000000153113024044352023631 0ustar jenkinsjenkins00000000000000# Account Server VHOST Template For Apache2 # # Change %PORT% to the port that you wish to use on your system # Change %SERVICENAME% to the service name you are using # Change %USER% to the system user that will run the daemon process # Change the debug level as you see fit # # For example: # Replace %PORT% by 6012 # Replace %SERVICENAME% by account-server-1 # Replace %USER% with apache (or remove it for default) NameVirtualHost *:%PORT% Listen %PORT% WSGIDaemonProcess %SERVICENAME% processes=5 threads=1 user=%USER% WSGIProcessGroup %SERVICENAME% WSGIScriptAlias / /var/www/swift/%SERVICENAME%.wsgi WSGIApplicationGroup %{GLOBAL} LimitRequestFields 200 ErrorLog /var/log/%APACHE_NAME%/%SERVICENAME% LogLevel debug CustomLog /var/log/%APACHE_NAME%/access.log combined swift-2.7.1/examples/apache2/proxy-server.template0000664000567000056710000000162513024044352023362 0ustar jenkinsjenkins00000000000000# Proxy Server VHOST Template For Apache2 # # Change %PORT% to the port that you wish to use on your system # Change %SERVICENAME% to the service 
name you are using # Change %USER% to the system user that will run the daemon process # Change the debug level as you see fit # # For example: # Replace %PORT% by 8080 # Replace %SERVICENAME% by proxy-server # Replace %USER% with apache (or remove it for default) NameVirtualHost *:%PORT% Listen %PORT% # The limit of an object size LimitRequestBody 5368709122 WSGIDaemonProcess %SERVICENAME% processes=5 threads=1 user=%USER% WSGIProcessGroup %SERVICENAME% WSGIScriptAlias / /var/www/swift/%SERVICENAME%.wsgi WSGIApplicationGroup %{GLOBAL} LimitRequestFields 200 ErrorLog /var/log/%APACHE_NAME%/%SERVICENAME% LogLevel debug CustomLog /var/log/%APACHE_NAME%/access.log combined swift-2.7.1/examples/wsgi/0000775000567000056710000000000013024044470016603 5ustar jenkinsjenkins00000000000000swift-2.7.1/examples/wsgi/proxy-server.wsgi.template0000664000567000056710000000100213024044352023765 0ustar jenkinsjenkins00000000000000# Proxy Server wsgi Template # # Change %SERVICECONF% to the service conf file you are using # # For example: # Replace %SERVICECONF% by proxy-server.conf # # This file than need to be saved under /var/www/swift/%SERVICENAME%.wsgi # * Replace %SERVICENAME% with the service name you use your system # E.g. Replace %SERVICENAME% by proxy-server from swift.common.wsgi import init_request_processor application, conf, logger, log_name = \ init_request_processor('/etc/swift/%SERVICECONF%','proxy-server') swift-2.7.1/examples/wsgi/container-server.wsgi.template0000664000567000056710000000102613024044352024574 0ustar jenkinsjenkins00000000000000# Container Server wsgi Template # # Change %SERVICECONF% to the service conf file you are using # # For example: # Replace %SERVICECONF% by container-server/1.conf # # This file than need to be saved under /var/www/swift/%SERVICENAME%.wsgi # * Replace %SERVICENAME% with the service name you use your system # E.g. Replace %SERVICENAME% by container-server-1 from swift.common.wsgi import init_request_processor application, conf, logger, log_name = \ init_request_processor('/etc/swift/%SERVICECONF%','container-server') swift-2.7.1/examples/wsgi/object-server.wsgi.template0000664000567000056710000000101213024044352024053 0ustar jenkinsjenkins00000000000000# Object Server wsgi Template # # Change %SERVICECONF% to the service conf file you are using # # For example: # Replace %SERVICECONF% by object-server/1.conf # # This file than need to be saved under /var/www/swift/%SERVICENAME%.wsgi # * Replace %SERVICENAME% with the service name you use your system # E.g. Replace %SERVICENAME% by object-server-1 from swift.common.wsgi import init_request_processor application, conf, logger, log_name = \ init_request_processor('/etc/swift/%SERVICECONF%','object-server') swift-2.7.1/examples/wsgi/account-server.wsgi.template0000664000567000056710000000101613024044352024245 0ustar jenkinsjenkins00000000000000# Account Server wsgi Template # # Change %SERVICECONF% to the service conf file you are using # # For example: # Replace %SERVICECONF% by account-server/1.conf # # This file than need to be saved under /var/www/swift/%SERVICENAME%.wsgi # * Replace %SERVICENAME% with the service name you use your system # E.g. 
Replace %SERVICENAME% by account-server-1 from swift.common.wsgi import init_request_processor application, conf, logger, log_name = \ init_request_processor('/etc/swift/%SERVICECONF%','account-server') swift-2.7.1/CONTRIBUTING.md0000664000567000056710000001014513024044354016247 0ustar jenkinsjenkins00000000000000If you would like to contribute to the development of OpenStack, you must follow the steps in this page: [http://docs.openstack.org/infra/manual/developers.html](http://docs.openstack.org/infra/manual/developers.html) Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at [http://docs.openstack.org/infra/manual/developers.html#development-workflow](http://docs.openstack.org/infra/manual/developers.html#development-workflow). Gerrit is the review system used in the OpenStack projects. We're sorry, but we won't be able to respond to pull requests submitted through GitHub. Bugs should be filed [on Launchpad](https://bugs.launchpad.net/swift), not in GitHub's issue tracker. Swift Design Principles ======================= * [The Zen of Python](http://legacy.python.org/dev/peps/pep-0020/) * Simple Scales * Minimal dependencies * Re-use existing tools and libraries when reasonable * Leverage the economies of scale * Small, loosely coupled RESTful services * No single points of failure * Start with the use case * ... then design from the cluster operator up * If you haven't argued about it, you don't have the right answer yet :) * If it is your first implementation, you probably aren't done yet :) Please don't feel offended by difference of opinion. Be prepared to advocate for your change and iterate on it based on feedback. Reach out to other people working on the project on [IRC](http://eavesdrop.openstack.org/irclogs/%23openstack-swift/) or the [mailing list](http://lists.openstack.org/pipermail/openstack-dev/) - we want to help. Recommended workflow ==================== * Set up a [Swift All-In-One VM](http://docs.openstack.org/developer/swift/development_saio.html)(SAIO). * Make your changes. Docs and tests for your patch must land before or with your patch. * Run unit tests, functional tests, probe tests ``./.unittests`` ``./.functests`` ``./.probetests`` * Run ``tox`` (no command-line args needed) * ``git review`` Notes on Testing ================ Running the tests above against Swift in your development environment (ie your SAIO) will catch most issues. Any patch you propose is expected to be both tested and documented and all tests should pass. If you want to run just a subset of the tests while you are developing, you can use nosetests:: cd test/unit/common/middleware/ && nosetests test_healthcheck.py To check which parts of your code are being exercised by a test, you can run tox and then point your browser to swift/cover/index.html:: tox -e py27 -- test.unit.common.middleware.test_healthcheck:TestHealthCheck.test_healthcheck Swift's unit tests are designed to test small parts of the code in isolation. The functional tests validate that the entire system is working from an external perspective (they are "black-box" tests). You can even run functional tests against public Swift endpoints. The probetests are designed to test much of Swift's internal processes. For example, a test may write data, intentionally corrupt it, and then ensure that the correct processes detect and repair it. 
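For a rough sense of the shape of a unit test (the module and test names below are hypothetical examples, not files in the Swift tree, though ``swift.common.utils.split_path`` is an existing helper), a minimal test case looks like::

    import unittest

    from swift.common.utils import split_path


    class TestSplitPath(unittest.TestCase):
        def test_account_container_object(self):
            # with rest_with_last=True, everything after the account is
            # returned as a single trailing segment
            self.assertEqual(
                split_path('/v1/AUTH_test/cont/obj/with/slashes', 1, 3, True),
                ['v1', 'AUTH_test', 'cont/obj/with/slashes'])


    if __name__ == '__main__':
        unittest.main()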
When your patch is submitted for code review, it will automatically be tested on the OpenStack CI infrastructure. In addition to many of the tests above, it will also be tested by several other OpenStack test jobs. Once your patch has been reviewed and approved by two core reviewers and has passed all automated tests, it will be merged into the Swift source tree. Specs ===== The [``swift-specs``](https://github.com/openstack/swift-specs) repo can be used for collaborative design work before a feature is implemented. OpenStack's gerrit system is used to collaborate on the design spec. Once approved OpenStack provides a doc site to easily read these [specs](http://specs.openstack.org/openstack/swift-specs/) A spec is needed for more impactful features. Coordinating a feature between many devs (especially across companies) is a great example of when a spec is needed. If you are unsure if a spec document is needed, please feel free to ask in #openstack-swift on freenode IRC. swift-2.7.1/doc/0000775000567000056710000000000013024044470014561 5ustar jenkinsjenkins00000000000000swift-2.7.1/doc/source/0000775000567000056710000000000013024044470016061 5ustar jenkinsjenkins00000000000000swift-2.7.1/doc/source/replication_network.rst0000664000567000056710000003361513024044352022704 0ustar jenkinsjenkins00000000000000.. _Dedicated-replication-network: ============================= Dedicated replication network ============================= ------- Summary ------- Swift's replication process is essential for consistency and availability of data. By default, replication activity will use the same network interface as other cluster operations. However, if a replication interface is set in the ring for a node, that node will send replication traffic on its designated separate replication network interface. Replication traffic includes REPLICATE requests and rsync traffic. To separate the cluster-internal replication traffic from client traffic, separate replication servers can be used. These replication servers are based on the standard storage servers, but they listen on the replication IP and only respond to REPLICATE requests. Storage servers can serve REPLICATE requests, so an operator can transition to using a separate replication network with no cluster downtime. Replication IP and port information is stored in the ring on a per-node basis. These parameters will be used if they are present, but they are not required. If this information does not exist or is empty for a particular node, the node's standard IP and port will be used for replication. -------------------- For SAIO replication -------------------- #. 
Create new script in ~/bin/ (for example: remakerings_new):: #!/bin/bash cd /etc/swift rm -f *.builder *.ring.gz backups/*.builder backups/*.ring.gz swift-ring-builder object.builder create 18 3 1 swift-ring-builder object.builder add z1-127.0.0.1:6010R127.0.0.1:6050/sdb1 1 swift-ring-builder object.builder add z2-127.0.0.1:6020R127.0.0.1:6060/sdb2 1 swift-ring-builder object.builder add z3-127.0.0.1:6030R127.0.0.1:6070/sdb3 1 swift-ring-builder object.builder add z4-127.0.0.1:6040R127.0.0.1:6080/sdb4 1 swift-ring-builder object.builder rebalance swift-ring-builder container.builder create 18 3 1 swift-ring-builder container.builder add z1-127.0.0.1:6011R127.0.0.1:6051/sdb1 1 swift-ring-builder container.builder add z2-127.0.0.1:6021R127.0.0.1:6061/sdb2 1 swift-ring-builder container.builder add z3-127.0.0.1:6031R127.0.0.1:6071/sdb3 1 swift-ring-builder container.builder add z4-127.0.0.1:6041R127.0.0.1:6081/sdb4 1 swift-ring-builder container.builder rebalance swift-ring-builder account.builder create 18 3 1 swift-ring-builder account.builder add z1-127.0.0.1:6012R127.0.0.1:6052/sdb1 1 swift-ring-builder account.builder add z2-127.0.0.1:6022R127.0.0.1:6062/sdb2 1 swift-ring-builder account.builder add z3-127.0.0.1:6032R127.0.0.1:6072/sdb3 1 swift-ring-builder account.builder add z4-127.0.0.1:6042R127.0.0.1:6082/sdb4 1 swift-ring-builder account.builder rebalance .. note:: Syntax of adding device has been changed: R: was added between z-: and /_ . Added devices will use and for replication activities. #. Add next rows in /etc/rsyncd.conf:: [account6052] max connections = 25 path = /srv/1/node/ read only = false lock file = /var/lock/account6052.lock [account6062] max connections = 25 path = /srv/2/node/ read only = false lock file = /var/lock/account6062.lock [account6072] max connections = 25 path = /srv/3/node/ read only = false lock file = /var/lock/account6072.lock [account6082] max connections = 25 path = /srv/4/node/ read only = false lock file = /var/lock/account6082.lock [container6051] max connections = 25 path = /srv/1/node/ read only = false lock file = /var/lock/container6051.lock [container6061] max connections = 25 path = /srv/2/node/ read only = false lock file = /var/lock/container6061.lock [container6071] max connections = 25 path = /srv/3/node/ read only = false lock file = /var/lock/container6071.lock [container6081] max connections = 25 path = /srv/4/node/ read only = false lock file = /var/lock/container6081.lock [object6050] max connections = 25 path = /srv/1/node/ read only = false lock file = /var/lock/object6050.lock [object6060] max connections = 25 path = /srv/2/node/ read only = false lock file = /var/lock/object6060.lock [object6070] max connections = 25 path = /srv/3/node/ read only = false lock file = /var/lock/object6070.lock [object6080] max connections = 25 path = /srv/4/node/ read only = false lock file = /var/lock/object6080.lock #. Restart rsync daemon:: service rsync restart #. Add changes in configuration files in directories: * /etc/swift/object-server(files: 1.conf, 2.conf, 3.conf, 4.conf) * /etc/swift/container-server(files: 1.conf, 2.conf, 3.conf, 4.conf) * /etc/swift/account-server(files: 1.conf, 2.conf, 3.conf, 4.conf) delete all configuration options in section [<*>-replicator] #. 
Add configuration files for object-server, in /etc/swift/object-server/ * 5.conf:: [DEFAULT] devices = /srv/1/node mount_check = false disable_fallocate = true bind_port = 6050 user = swift log_facility = LOG_LOCAL2 recon_cache_path = /var/cache/swift [pipeline:main] pipeline = recon object-server [app:object-server] use = egg:swift#object replication_server = True [filter:recon] use = egg:swift#recon [object-replicator] rsync_module = {replication_ip}::object{replication_port} * 6.conf:: [DEFAULT] devices = /srv/2/node mount_check = false disable_fallocate = true bind_port = 6060 user = swift log_facility = LOG_LOCAL3 recon_cache_path = /var/cache/swift2 [pipeline:main] pipeline = recon object-server [app:object-server] use = egg:swift#object replication_server = True [filter:recon] use = egg:swift#recon [object-replicator] rsync_module = {replication_ip}::object{replication_port} * 7.conf:: [DEFAULT] devices = /srv/3/node mount_check = false disable_fallocate = true bind_port = 6070 user = swift log_facility = LOG_LOCAL4 recon_cache_path = /var/cache/swift3 [pipeline:main] pipeline = recon object-server [app:object-server] use = egg:swift#object replication_server = True [filter:recon] use = egg:swift#recon [object-replicator] rsync_module = {replication_ip}::object{replication_port} * 8.conf:: [DEFAULT] devices = /srv/4/node mount_check = false disable_fallocate = true bind_port = 6080 user = swift log_facility = LOG_LOCAL5 recon_cache_path = /var/cache/swift4 [pipeline:main] pipeline = recon object-server [app:object-server] use = egg:swift#object replication_server = True [filter:recon] use = egg:swift#recon [object-replicator] rsync_module = {replication_ip}::object{replication_port} #. Add configuration files for container-server, in /etc/swift/container-server/ * 5.conf:: [DEFAULT] devices = /srv/1/node mount_check = false disable_fallocate = true bind_port = 6051 user = swift log_facility = LOG_LOCAL2 recon_cache_path = /var/cache/swift [pipeline:main] pipeline = recon container-server [app:container-server] use = egg:swift#container replication_server = True [filter:recon] use = egg:swift#recon [container-replicator] rsync_module = {replication_ip}::container{replication_port} * 6.conf:: [DEFAULT] devices = /srv/2/node mount_check = false disable_fallocate = true bind_port = 6061 user = swift log_facility = LOG_LOCAL3 recon_cache_path = /var/cache/swift2 [pipeline:main] pipeline = recon container-server [app:container-server] use = egg:swift#container replication_server = True [filter:recon] use = egg:swift#recon [container-replicator] rsync_module = {replication_ip}::container{replication_port} * 7.conf:: [DEFAULT] devices = /srv/3/node mount_check = false disable_fallocate = true bind_port = 6071 user = swift log_facility = LOG_LOCAL4 recon_cache_path = /var/cache/swift3 [pipeline:main] pipeline = recon container-server [app:container-server] use = egg:swift#container replication_server = True [filter:recon] use = egg:swift#recon [container-replicator] rsync_module = {replication_ip}::container{replication_port} * 8.conf:: [DEFAULT] devices = /srv/4/node mount_check = false disable_fallocate = true bind_port = 6081 user = swift log_facility = LOG_LOCAL5 recon_cache_path = /var/cache/swift4 [pipeline:main] pipeline = recon container-server [app:container-server] use = egg:swift#container replication_server = True [filter:recon] use = egg:swift#recon [container-replicator] rsync_module = {replication_ip}::container{replication_port} #. 
Add configuration files for account-server, in /etc/swift/account-server/ * 5.conf:: [DEFAULT] devices = /srv/1/node mount_check = false disable_fallocate = true bind_port = 6052 user = swift log_facility = LOG_LOCAL2 recon_cache_path = /var/cache/swift [pipeline:main] pipeline = recon account-server [app:account-server] use = egg:swift#account replication_server = True [filter:recon] use = egg:swift#recon [account-replicator] rsync_module = {replication_ip}::account{replication_port} * 6.conf:: [DEFAULT] devices = /srv/2/node mount_check = false disable_fallocate = true bind_port = 6062 user = swift log_facility = LOG_LOCAL3 recon_cache_path = /var/cache/swift2 [pipeline:main] pipeline = recon account-server [app:account-server] use = egg:swift#account replication_server = True [filter:recon] use = egg:swift#recon [account-replicator] rsync_module = {replication_ip}::account{replication_port} * 7.conf:: [DEFAULT] devices = /srv/3/node mount_check = false disable_fallocate = true bind_port = 6072 user = swift log_facility = LOG_LOCAL4 recon_cache_path = /var/cache/swift3 [pipeline:main] pipeline = recon account-server [app:account-server] use = egg:swift#account replication_server = True [filter:recon] use = egg:swift#recon [account-replicator] rsync_module = {replication_ip}::account{replication_port} * 8.conf:: [DEFAULT] devices = /srv/4/node mount_check = false disable_fallocate = true bind_port = 6082 user = swift log_facility = LOG_LOCAL5 recon_cache_path = /var/cache/swift4 [pipeline:main] pipeline = recon account-server [app:account-server] use = egg:swift#account replication_server = True [filter:recon] use = egg:swift#recon [account-replicator] rsync_module = {replication_ip}::account{replication_port} --------------------------------- For a Multiple Server replication --------------------------------- #. Move configuration file. * Configuration file for object-server from /etc/swift/object-server.conf to /etc/swift/object-server/1.conf * Configuration file for container-server from /etc/swift/container-server.conf to /etc/swift/container-server/1.conf * Configuration file for account-server from /etc/swift/account-server.conf to /etc/swift/account-server/1.conf #. Add changes in configuration files in directories: * /etc/swift/object-server(files: 1.conf) * /etc/swift/container-server(files: 1.conf) * /etc/swift/account-server(files: 1.conf) delete all configuration options in section [<*>-replicator] #. Add configuration files for object-server, in /etc/swift/object-server/2.conf:: [DEFAULT] bind_ip = $STORAGE_LOCAL_NET_IP workers = 2 [pipeline:main] pipeline = object-server [app:object-server] use = egg:swift#object replication_server = True [object-replicator] #. Add configuration files for container-server, in /etc/swift/container-server/2.conf:: [DEFAULT] bind_ip = $STORAGE_LOCAL_NET_IP workers = 2 [pipeline:main] pipeline = container-server [app:container-server] use = egg:swift#container replication_server = True [container-replicator] #. Add configuration files for account-server, in /etc/swift/account-server/2.conf:: [DEFAULT] bind_ip = $STORAGE_LOCAL_NET_IP workers = 2 [pipeline:main] pipeline = account-server [app:account-server] use = egg:swift#account replication_server = True [account-replicator] swift-2.7.1/doc/source/object.rst0000664000567000056710000000170013024044352020056 0ustar jenkinsjenkins00000000000000.. _object: ****** Object ****** .. _object-auditor: Object Auditor ============== .. 
automodule:: swift.obj.auditor :members: :undoc-members: :show-inheritance: .. _object-diskfile: Object Backend ============== .. automodule:: swift.obj.diskfile :members: :undoc-members: :show-inheritance: .. _object-replicator: Object Replicator ================= .. automodule:: swift.obj.replicator :members: :undoc-members: :show-inheritance: .. automodule:: swift.obj.ssync_sender :members: :undoc-members: :show-inheritance: .. automodule:: swift.obj.ssync_receiver :members: :undoc-members: :show-inheritance: .. _object-server: Object Server ============= .. automodule:: swift.obj.server :members: :undoc-members: :show-inheritance: .. _object-updater: Object Updater ============== .. automodule:: swift.obj.updater :members: :undoc-members: :show-inheritance: swift-2.7.1/doc/source/account.rst0000664000567000056710000000117213024044352020247 0ustar jenkinsjenkins00000000000000.. _account: ******* Account ******* .. _account-auditor: Account Auditor =============== .. automodule:: swift.account.auditor :members: :undoc-members: :show-inheritance: .. _account-backend: Account Backend =============== .. automodule:: swift.account.backend :members: :undoc-members: :show-inheritance: .. _account-reaper: Account Reaper ============== .. automodule:: swift.account.reaper :members: :undoc-members: :show-inheritance: .. _account-server: Account Server ============== .. automodule:: swift.account.server :members: :undoc-members: :show-inheritance: swift-2.7.1/doc/source/overview_container_sync.rst0000664000567000056710000004426513024044354023573 0ustar jenkinsjenkins00000000000000====================================== Container to Container Synchronization ====================================== -------- Overview -------- Swift has a feature where all the contents of a container can be mirrored to another container through background synchronization. Swift cluster operators configure their cluster to allow/accept sync requests to/from other clusters, and the user specifies where to sync their container to along with a secret synchronization key. .. note:: If you are using the large objects feature you will need to ensure both your manifest file and your segment files are synced if they happen to be in different containers. -------------------------- Configuring Container Sync -------------------------- Create a ``container-sync-realms.conf`` file specifying the allowable clusters and their information:: [realm1] key = realm1key key2 = realm1key2 cluster_clustername1 = https://host1/v1/ cluster_clustername2 = https://host2/v1/ [realm2] key = realm2key key2 = realm2key2 cluster_clustername3 = https://host3/v1/ cluster_clustername4 = https://host4/v1/ Each section name is the name of a sync realm. A sync realm is a set of clusters that have agreed to allow container syncing with each other. Realm names will be considered case insensitive. The key is the overall cluster-to-cluster key used in combination with the external users' key that they set on their containers' ``X-Container-Sync-Key`` metadata header values. These keys will be used to sign each request the container sync daemon makes and used to validate each incoming container sync request. The key2 is optional and is an additional key incoming requests will be checked against. This is so you can rotate keys if you wish; you move the existing key to key2 and make a new key value. 
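For illustration only, the way the two keys cooperate can be sketched in a few lines of Python. This is a simplified stand-in for Swift's actual signing code, and the function name and sample values are made up for the example::

    import hashlib
    import hmac

    def example_sync_sig(method, path, x_timestamp, nonce, realm_key, user_key):
        # Illustrative sketch: both the cluster-to-cluster realm key and the
        # per-container user key go into one HMAC, so a receiving cluster can
        # reject a request if either key does not match what it expects.
        msg = u'\n'.join((method, path, x_timestamp, nonce, user_key))
        return hmac.new(realm_key.encode('utf8'), msg.encode('utf8'),
                        hashlib.sha1).hexdigest()

    sig = example_sync_sig('PUT', '/v1/AUTH_account/container2/obj',
                           '1456349072.12345', 'somenonce',
                           'realm1key', 'secret')

Rotating from key to key2 works because the receiving side can try validating an incoming signature against both values before rejecting the request.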
Any values in the realm section whose names begin with ``cluster_`` will indicate the name and endpoint of a cluster and will be used by external users in their containers' ``X-Container-Sync-To`` metadata header values with the format "//realm_name/cluster_name/account_name/container_name". Realm and cluster names are considered case insensitive. The endpoint is what the container sync daemon will use when sending out requests to that cluster. Keep in mind this endpoint must be reachable by all container servers, since that is where the container sync daemon runs. Note that the endpoint ends with /v1/ and that the container sync daemon will then add the account/container/obj name after that. Distribute this ``container-sync-realms.conf`` file to all your proxy servers and container servers. You also need to add the container_sync middleware to your proxy pipeline. It needs to be after any memcache middleware and before any auth middleware. The container_sync section only needs the "use" item. For example:: [pipeline:main] pipeline = healthcheck proxy-logging cache container_sync tempauth proxy-logging proxy-server [filter:container_sync] use = egg:swift#container_sync ------------------------------------------------------- Old-Style: Configuring a Cluster's Allowable Sync Hosts ------------------------------------------------------- This section is for the old-style of using container sync. See the previous section, Configuring Container Sync, for the new-style. With the old-style, the Swift cluster operator must allow synchronization with a set of hosts before the user can enable container synchronization. First, the backend container server needs to be given this list of hosts in the ``container-server.conf`` file:: [DEFAULT] # This is a comma separated list of hosts allowed in the # X-Container-Sync-To field for containers. # allowed_sync_hosts = 127.0.0.1 allowed_sync_hosts = host1,host2,etc. ... [container-sync] # You can override the default log routing for this app here (don't # use set!): # log_name = container-sync # log_facility = LOG_LOCAL0 # log_level = INFO # Will sync, at most, each container once per interval # interval = 300 # Maximum amount of time to spend syncing each container # container_time = 60 ---------------------- Logging Container Sync ---------------------- Tracking sync progress, problems, and just general activity can only be achieved with log processing currently for container synchronization. In that light, you may wish to set the above `log_` options to direct the container-sync logs to a different file for easier monitoring. Additionally, it should be noted there is no way for an end user to detect sync progress or problems other than HEADing both containers and comparing the overall information. ---------------------------------------------------------- Using the ``swift`` tool to set up synchronized containers ---------------------------------------------------------- .. note:: The ``swift`` tool is available from the `python-swiftclient`_ library. .. note:: You must be the account admin on the account to set synchronization targets and keys. You simply tell each container where to sync to and give it a secret synchronization key. 
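The walkthrough below uses the ``swift`` command line tool. If you would rather drive the same calls from Python, a rough equivalent using `python-swiftclient`_ looks like this; the credentials, realm, cluster and container names are simply the example values used throughout this section::

    from swiftclient import client

    # Authenticate against the first cluster, then point container1 at the
    # remote container by posting the two sync headers.
    url, token = client.get_auth('http://cluster1/auth/v1.0',
                                 'test:tester', 'testing')
    sync_to = ('//realm_name/clustername2/'
               'AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c/container2')
    client.post_container(url, token, 'container1',
                          headers={'X-Container-Sync-To': sync_to,
                                   'X-Container-Sync-Key': 'secret'})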
First, let's get the account details for our two cluster accounts:: $ swift -A http://cluster1/auth/v1.0 -U test:tester -K testing stat -v StorageURL: http://cluster1/v1/AUTH_208d1854-e475-4500-b315-81de645d060e Auth Token: AUTH_tkd5359e46ff9e419fa193dbd367f3cd19 Account: AUTH_208d1854-e475-4500-b315-81de645d060e Containers: 0 Objects: 0 Bytes: 0 $ swift -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 stat -v StorageURL: http://cluster2/v1/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c Auth Token: AUTH_tk816a1aaf403c49adb92ecfca2f88e430 Account: AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c Containers: 0 Objects: 0 Bytes: 0 Now, let's make our first container and tell it to synchronize to a second we'll make next:: $ swift -A http://cluster1/auth/v1.0 -U test:tester -K testing post \ -t '//realm_name/clustername2/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c/container2' \ -k 'secret' container1 The ``-t`` indicates the cluster to sync to, which is the realm name of the section from container-sync-realms.conf, followed by the cluster name from that section (without the cluster\_ prefix), followed by the account and container names we want to sync to. The ``-k`` specifies the secret key the two containers will share for synchronization; this is the user key, the cluster key in container-sync-realms.conf will also be used behind the scenes. Now, we'll do something similar for the second cluster's container:: $ swift -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 post \ -t '//realm_name/clustername1/AUTH_208d1854-e475-4500-b315-81de645d060e/container1' \ -k 'secret' container2 That's it. Now we can upload a bunch of stuff to the first container and watch as it gets synchronized over to the second:: $ swift -A http://cluster1/auth/v1.0 -U test:tester -K testing \ upload container1 . photo002.png photo004.png photo001.png photo003.png $ swift -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 \ list container2 [Nothing there yet, so we wait a bit...] .. note:: If you're an operator running SAIO and just testing, each time you configure a container for synchronization and place objects in the source container you will need to ensure that container-sync runs before attempting to retrieve objects from the target container. That is, you need to run:: swift-init container-sync once Now expect to see objects copied from the first container to the second:: $ swift -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 \ list container2 photo001.png photo002.png photo003.png photo004.png You can also set up a chain of synced containers if you want more than two. You'd point 1 -> 2, then 2 -> 3, and finally 3 -> 1 for three containers. They'd all need to share the same secret synchronization key. .. _`python-swiftclient`: http://github.com/openstack/python-swiftclient ----------------------------------- Using curl (or other tools) instead ----------------------------------- So what's ``swift`` doing behind the scenes? Nothing overly complicated. It translates the ``-t `` option into an ``X-Container-Sync-To: `` header and the ``-k `` option into an ``X-Container-Sync-Key: `` header. 
For instance, when we created the first container above and told it to synchronize to the second, we could have used this curl command:: $ curl -i -X POST -H 'X-Auth-Token: AUTH_tkd5359e46ff9e419fa193dbd367f3cd19' \ -H 'X-Container-Sync-To: //realm_name/clustername2/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c/container2' \ -H 'X-Container-Sync-Key: secret' \ 'http://cluster1/v1/AUTH_208d1854-e475-4500-b315-81de645d060e/container1' HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/plain; charset=UTF-8 Date: Thu, 24 Feb 2011 22:39:14 GMT --------------------------------------------------------------------- Old-Style: Using the ``swift`` tool to set up synchronized containers --------------------------------------------------------------------- .. note:: The ``swift`` tool is available from the `python-swiftclient`_ library. .. note:: You must be the account admin on the account to set synchronization targets and keys. This is for the old-style of container syncing using allowed_sync_hosts. You simply tell each container where to sync to and give it a secret synchronization key. First, let's get the account details for our two cluster accounts:: $ swift -A http://cluster1/auth/v1.0 -U test:tester -K testing stat -v StorageURL: http://cluster1/v1/AUTH_208d1854-e475-4500-b315-81de645d060e Auth Token: AUTH_tkd5359e46ff9e419fa193dbd367f3cd19 Account: AUTH_208d1854-e475-4500-b315-81de645d060e Containers: 0 Objects: 0 Bytes: 0 $ swift -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 stat -v StorageURL: http://cluster2/v1/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c Auth Token: AUTH_tk816a1aaf403c49adb92ecfca2f88e430 Account: AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c Containers: 0 Objects: 0 Bytes: 0 Now, let's make our first container and tell it to synchronize to a second we'll make next:: $ swift -A http://cluster1/auth/v1.0 -U test:tester -K testing post \ -t 'http://cluster2/v1/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c/container2' \ -k 'secret' container1 The ``-t`` indicates the URL to sync to, which is the ``StorageURL`` from cluster2 we retrieved above plus the container name. The ``-k`` specifies the secret key the two containers will share for synchronization. Now, we'll do something similar for the second cluster's container:: $ swift -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 post \ -t 'http://cluster1/v1/AUTH_208d1854-e475-4500-b315-81de645d060e/container1' \ -k 'secret' container2 That's it. Now we can upload a bunch of stuff to the first container and watch as it gets synchronized over to the second:: $ swift -A http://cluster1/auth/v1.0 -U test:tester -K testing \ upload container1 . photo002.png photo004.png photo001.png photo003.png $ swift -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 \ list container2 [Nothing there yet, so we wait a bit...] [If you're an operator running SAIO and just testing, you may need to run 'swift-init container-sync once' to perform a sync scan.] $ swift -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 \ list container2 photo001.png photo002.png photo003.png photo004.png You can also set up a chain of synced containers if you want more than two. You'd point 1 -> 2, then 2 -> 3, and finally 3 -> 1 for three containers. They'd all need to share the same secret synchronization key. .. 
_`python-swiftclient`: http://github.com/openstack/python-swiftclient ---------------------------------------------- Old-Style: Using curl (or other tools) instead ---------------------------------------------- This is for the old-style of container syncing using allowed_sync_hosts. So what's ``swift`` doing behind the scenes? Nothing overly complicated. It translates the ``-t `` option into an ``X-Container-Sync-To: `` header and the ``-k `` option into an ``X-Container-Sync-Key: `` header. For instance, when we created the first container above and told it to synchronize to the second, we could have used this curl command:: $ curl -i -X POST -H 'X-Auth-Token: AUTH_tkd5359e46ff9e419fa193dbd367f3cd19' \ -H 'X-Container-Sync-To: http://cluster2/v1/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c/container2' \ -H 'X-Container-Sync-Key: secret' \ 'http://cluster1/v1/AUTH_208d1854-e475-4500-b315-81de645d060e/container1' HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/plain; charset=UTF-8 Date: Thu, 24 Feb 2011 22:39:14 GMT -------------------------------------------------- What's going on behind the scenes, in the cluster? -------------------------------------------------- Container ring devices have a directory called ``containers``, where container databases reside. In addition to ``containers``, each container ring device also has a directory called ``sync-containers``. ``sync-containers`` holds symlinks to container databases that were configured for container sync using ``x-container-sync-to`` and ``x-container-sync-key`` metadata keys. The swift-container-sync process does the job of sending updates to the remote container. This is done by scanning ``sync-containers`` for container databases. For each container db found, newer rows since the last sync will trigger PUTs or DELETEs to the other container. ``sync-containers`` is maintained as follows: Whenever the container-server processes a PUT or a POST request that carries ``x-container-sync-to`` and ``x-container-sync-key`` metadata keys the server creates a symlink to the container database in ``sync-containers``. Whenever the container server deletes a synced container, the appropriate symlink is deleted from ``sync-containers``. In addition to the container-server, the container-replicator process does the job of identifying containers that should be synchronized. This is done by scanning the local devices for container databases and checking for x-container-sync-to and x-container-sync-key metadata values. If they exist then a symlink to the container database is created in a sync-containers sub-directory on the same device. Similarly, when the container sync metadata keys are deleted, the container server and container-replicator would take care of deleting the symlinks from ``sync-containers``. .. note:: The swift-container-sync process runs on each container server in the cluster and talks to the proxy servers (or load balancers) in the remote cluster. Therefore, the container servers must be permitted to initiate outbound connections to the remote proxy servers (or load balancers). The actual syncing is slightly more complicated to make use of the three (or number-of-replicas) main nodes for a container without each trying to do the exact same work but also without missing work if one node happens to be down. Two sync points are kept in each container database. When syncing a container, the container-sync process figures out which replica of the container it has. 
In a standard 3-replica scenario, the process will have either replica number 0, 1, or 2. This is used to figure out which rows are belong to this sync process and which ones don't. An example may help. Assume a replica count of 3 and database row IDs are 1..6. Also, assume that container-sync is running on this container for the first time, hence SP1 = SP2 = -1. :: SP1 SP2 | v -1 0 1 2 3 4 5 6 First, the container-sync process looks for rows with id between SP1 and SP2. Since this is the first run, SP1 = SP2 = -1, and there aren't any such rows. :: SP1 SP2 | v -1 0 1 2 3 4 5 6 Second, the container-sync process looks for rows with id greater than SP1, and syncs those rows which it owns. Ownership is based on the hash of the object name, so it's not always guaranteed to be exactly one out of every three rows, but it usually gets close. For the sake of example, let's say that this process ends up owning rows 2 and 5. Once it's finished trying to sync those rows, it updates SP1 to be the biggest row-id that it's seen, which is 6 in this example. :: SP2 SP1 | | v v -1 0 1 2 3 4 5 6 While all that was going on, clients uploaded new objects into the container, creating new rows in the database. :: SP2 SP1 | | v v -1 0 1 2 3 4 5 6 7 8 9 10 11 12 On the next run, the container-sync starts off looking at rows with ids between SP1 and SP2. This time, there are a bunch of them. The sync process try to sync all of them. If it succeeds, it will set SP2 to equal SP1. If it fails, it will set SP2 to the failed object and will continue to try all other objects till SP1, setting SP2 to the first object that failed. Under normal circumstances, the container-sync processes will have already taken care of synchronizing all rows, between SP1 and SP2, resulting in a set of quick checks. However, if one of the sync processes failed for some reason, then this is a vital fallback to make sure all the objects in the container get synchronized. Without this seemingly-redundant work, any container-sync failure results in unsynchronized objects. Note that the container sync will persistently retry to sync any faulty object until success, while logging each failure. Once it's done with the fallback rows, and assuming no faults occurred, SP2 is advanced to SP1. :: SP2 SP1 | v -1 0 1 2 3 4 5 6 7 8 9 10 11 12 Then, rows with row ID greater than SP1 are synchronized (provided this container-sync process is responsible for them), and SP1 is moved up to the greatest row ID seen. :: SP2 SP1 | | v v -1 0 1 2 3 4 5 6 7 8 9 10 11 12 swift-2.7.1/doc/source/cors.rst0000664000567000056710000000765313024044354017575 0ustar jenkinsjenkins00000000000000==== CORS ==== CORS_ is a mechanism to allow code running in a browser (Javascript for example) make requests to a domain other then the one from where it originated. Swift supports CORS requests to containers and objects. CORS metadata is held on the container only. The values given apply to the container itself and all objects within it. The supported headers are, +------------------------------------------------+------------------------------+ | Metadata | Use | +================================================+==============================+ | X-Container-Meta-Access-Control-Allow-Origin | Origins to be allowed to | | | make Cross Origin Requests, | | | space separated. | +------------------------------------------------+------------------------------+ | X-Container-Meta-Access-Control-Max-Age | Max age for the Origin to | | | hold the preflight results. 
| +------------------------------------------------+------------------------------+ | X-Container-Meta-Access-Control-Expose-Headers | Headers exposed to the user | | | agent (e.g. browser) in the | | | the actual request response. | | | Space separated. | +------------------------------------------------+------------------------------+ Before a browser issues an actual request it may issue a `preflight request`_. The preflight request is an OPTIONS call to verify the Origin is allowed to make the request. The sequence of events are, * Browser makes OPTIONS request to Swift * Swift returns 200/401 to browser based on allowed origins * If 200, browser makes the "actual request" to Swift, i.e. PUT, POST, DELETE, HEAD, GET When a browser receives a response to an actual request it only exposes those headers listed in the ``Access-Control-Expose-Headers`` header. By default Swift returns the following values for this header, * "simple response headers" as listed on http://www.w3.org/TR/cors/#simple-response-header * the headers ``etag``, ``x-timestamp``, ``x-trans-id`` * all metadata headers (``X-Container-Meta-*`` for containers and ``X-Object-Meta-*`` for objects) * headers listed in ``X-Container-Meta-Access-Control-Expose-Headers`` ----------------- Sample Javascript ----------------- To see some CORS Javascript in action download the `test CORS page`_ (source below). Host it on a webserver and take note of the protocol and hostname (origin) you'll be using to request the page, e.g. http://localhost. Locate a container you'd like to query. Needless to say the Swift cluster hosting this container should have CORS support. Append the origin of the test page to the container's ``X-Container-Meta-Access-Control-Allow-Origin`` header,:: curl -X POST -H 'X-Auth-Token: xxx' \ -H 'X-Container-Meta-Access-Control-Allow-Origin: http://localhost' \ http://192.168.56.3:8080/v1/AUTH_test/cont1 At this point the container is now accessible to CORS clients hosted on http://localhost. Open the test CORS page in your browser. #. Populate the Token field #. Populate the URL field with the URL of either a container or object #. Select the request method #. Hit Submit Assuming the request succeeds you should see the response header and body. If something went wrong the response status will be 0. .. _test CORS page: -------------- Test CORS Page -------------- A sample cross-site test page is located in the project source tree ``doc/source/test-cors.html``. .. literalinclude:: test-cors.html .. _CORS: https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS .. _preflight request: https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS#Preflighted_requests swift-2.7.1/doc/source/deployment_guide.rst0000664000567000056710000030554113024044354022161 0ustar jenkinsjenkins00000000000000================ Deployment Guide ================ ----------------------- Hardware Considerations ----------------------- Swift is designed to run on commodity hardware. At Rackspace, our storage servers are currently running fairly generic 4U servers with 24 2T SATA drives and 8 cores of processing power. RAID on the storage drives is not required and not recommended. Swift's disk usage pattern is the worst case possible for RAID, and performance degrades very quickly using RAID 5 or 6. ------------------ Deployment Options ------------------ The swift services run completely autonomously, which provides for a lot of flexibility when architecting the hardware deployment for swift. The 4 main services are: #. 
Proxy Services #. Object Services #. Container Services #. Account Services The Proxy Services are more CPU and network I/O intensive. If you are using 10g networking to the proxy, or are terminating SSL traffic at the proxy, greater CPU power will be required. The Object, Container, and Account Services (Storage Services) are more disk and network I/O intensive. The easiest deployment is to install all services on each server. There is nothing wrong with doing this, as it scales each service out horizontally. At Rackspace, we put the Proxy Services on their own servers and all of the Storage Services on the same server. This allows us to send 10g networking to the proxy and 1g to the storage servers, and keep load balancing to the proxies more manageable. Storage Services scale out horizontally as storage servers are added, and we can scale overall API throughput by adding more Proxies. If you need more throughput to either Account or Container Services, they may each be deployed to their own servers. For example you might use faster (but more expensive) SAS or even SSD drives to get faster disk I/O to the databases. Load balancing and network design is left as an exercise to the reader, but this is a very important part of the cluster, so time should be spent designing the network for a Swift cluster. --------------------- Web Front End Options --------------------- Swift comes with an integral web front end. However, it can also be deployed as a request processor of an Apache2 using mod_wsgi as described in :doc:`Apache Deployment Guide `. .. _ring-preparing: ------------------ Preparing the Ring ------------------ The first step is to determine the number of partitions that will be in the ring. We recommend that there be a minimum of 100 partitions per drive to insure even distribution across the drives. A good starting point might be to figure out the maximum number of drives the cluster will contain, and then multiply by 100, and then round up to the nearest power of two. For example, imagine we are building a cluster that will have no more than 5,000 drives. That would mean that we would have a total number of 500,000 partitions, which is pretty close to 2^19, rounded up. It is also a good idea to keep the number of partitions small (relatively). The more partitions there are, the more work that has to be done by the replicators and other backend jobs and the more memory the rings consume in process. The goal is to find a good balance between small rings and maximum cluster size. The next step is to determine the number of replicas to store of the data. Currently it is recommended to use 3 (as this is the only value that has been tested). The higher the number, the more storage that is used but the less likely you are to lose data. It is also important to determine how many zones the cluster should have. It is recommended to start with a minimum of 5 zones. You can start with fewer, but our testing has shown that having at least five zones is optimal when failures occur. We also recommend trying to configure the zones at as high a level as possible to create as much isolation as possible. Some example things to take into consideration can include physical location, power availability, and network connectivity. For example, in a small cluster you might decide to split the zones up by cabinet, with each cabinet having its own power and network connectivity. The zone concept is very abstract, so feel free to use it in whatever way best isolates your data from failure. 
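To make the sizing arithmetic above concrete, the example part power can be computed with a few lines of Python; the drive count is only the example figure from this guide, not a recommendation::

    import math

    max_drives = 5000            # largest number of drives the cluster may reach
    partitions_per_drive = 100   # recommended minimum for even distribution

    # Round the target partition count up to the nearest power of two.
    part_power = int(math.ceil(math.log(max_drives * partitions_per_drive, 2)))
    print(part_power)       # 19
    print(2 ** part_power)  # 524288 partitions in the ring

The resulting value is what gets passed as the part power when the ring is created below.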
Zones are referenced by number, beginning with 1. You can now start building the ring with:: swift-ring-builder <builder_file> create <part_power> <replicas> <min_part_hours> This will start the ring build process creating the <builder_file> with 2^<part_power> partitions. <min_part_hours> is the time in hours before a specific partition can be moved in succession (24 is a good value for this). Devices can be added to the ring with:: swift-ring-builder <builder_file> add z<zone>-<ip>:<port>/<device_name>_<meta> <weight> This will add a device to the ring where <builder_file> is the name of the builder file that was created previously, <zone> is the number of the zone this device is in, <ip> is the ip address of the server the device is in, <port> is the port number that the server is running on, <device_name> is the name of the device on the server (for example: sdb1), <meta> is a string of metadata for the device (optional), and <weight> is a float weight that determines how many partitions are put on the device relative to the rest of the devices in the cluster (a good starting point is 100.0 x TB on the drive). Add each device that will be initially in the cluster. Once all of the devices are added to the ring, run:: swift-ring-builder <builder_file> rebalance This will distribute the partitions across the drives in the ring. It is important whenever making changes to the ring to make all the changes required before running rebalance. This will ensure that the ring stays as balanced as possible, and as few partitions are moved as possible. The above process should be done to make a ring for each storage service (Account, Container and Object). The builder files will be needed in future changes to the ring, so it is very important that these be kept and backed up. The resulting .ring.gz ring file should be pushed to all of the servers in the cluster. For more information about building rings, running swift-ring-builder with no options will display help text with available commands and options. More information on how the ring works internally can be found in the :doc:`Ring Overview <overview_ring>`. .. _server-per-port-configuration: ------------------------------- Running object-servers Per Disk ------------------------------- The lack of true asynchronous file I/O on Linux leaves the object-server workers vulnerable to misbehaving disks. Because any object-server worker can service a request for any disk, and a slow I/O request blocks the eventlet hub, a single slow disk can impair an entire storage node. This also prevents object servers from fully utilizing all their disks during heavy load. The :ref:`threads_per_disk <object-server-options>` option was one way to address this, but came with severe performance overhead which was worse than the benefit of I/O isolation. Any clusters using threads_per_disk should switch to using `servers_per_port`. Another way to get full I/O isolation is to give each disk on a storage node a different port in the storage policy rings. Then set the :ref:`servers_per_port <object-server-default-options>` option in the object-server config. NOTE: while the purpose of this config setting is to run one or more object-server worker processes per *disk*, the implementation just runs object-servers per unique port of local devices in the rings. The deployer must combine this option with appropriately-configured rings to benefit from this feature.
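A rough way to double-check that a ring really does give every local disk its own port is to inspect the device list programmatically. This sketch assumes the object ring has already been built and copied to /etc/swift on the node being examined::

    from collections import defaultdict

    from swift.common.ring import Ring

    # Group device names by (ip, port); in a servers_per_port layout each
    # port should map to exactly one device on any given node.
    ring = Ring('/etc/swift', ring_name='object')
    by_port = defaultdict(list)
    for dev in ring.devs:
        if dev:  # removed devices can appear as None entries
            by_port[(dev['ip'], dev['port'])].append(dev['device'])

    for (ip, port), devices in sorted(by_port.items()):
        print('%s:%s -> %s' % (ip, port, ', '.join(devices)))

With the old-style ring shown next, several devices share one port; with the per-port layout each port lists a single device.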
Here's an example (abbreviated) old-style ring (2 node cluster with 2 disks each):: Devices: id region zone ip address port replication ip replication port name 0 1 1 1.1.0.1 6000 1.1.0.1 6000 d1 1 1 1 1.1.0.1 6000 1.1.0.1 6000 d2 2 1 2 1.1.0.2 6000 1.1.0.2 6000 d3 3 1 2 1.1.0.2 6000 1.1.0.2 6000 d4 And here's the same ring set up for `servers_per_port`:: Devices: id region zone ip address port replication ip replication port name 0 1 1 1.1.0.1 6000 1.1.0.1 6000 d1 1 1 1 1.1.0.1 6001 1.1.0.1 6001 d2 2 1 2 1.1.0.2 6000 1.1.0.2 6000 d3 3 1 2 1.1.0.2 6001 1.1.0.2 6001 d4 When migrating from normal to `servers_per_port`, perform these steps in order: #. Upgrade Swift code to a version capable of doing `servers_per_port`. #. Enable `servers_per_port` with a > 0 value #. Restart `swift-object-server` processes with a SIGHUP. At this point, you will have the `servers_per_port` number of `swift-object-server` processes serving all requests for all disks on each node. This preserves availability, but you should perform the next step as quickly as possible. #. Push out new rings that actually have different ports per disk on each server. One of the ports in the new ring should be the same as the port used in the old ring ("6000" in the example above). This will cover existing proxy-server processes who haven't loaded the new ring yet. They can still talk to any storage node regardless of whether or not that storage node has loaded the ring and started object-server processes on the new ports. If you do not run a separate object-server for replication, then this setting must be available to the object-replicator and object-reconstructor (i.e. appear in the [DEFAULT] config section). .. _general-service-configuration: ----------------------------- General Service Configuration ----------------------------- Most Swift services fall into two categories. Swift's wsgi servers and background daemons. For more information specific to the configuration of Swift's wsgi servers with paste deploy see :ref:`general-server-configuration`. Configuration for servers and daemons can be expressed together in the same file for each type of server, or separately. If a required section for the service trying to start is missing there will be an error. The sections not used by the service are ignored. Consider the example of an object storage node. By convention, configuration for the object-server, object-updater, object-replicator, and object-auditor exist in a single file ``/etc/swift/object-server.conf``:: [DEFAULT] [pipeline:main] pipeline = object-server [app:object-server] use = egg:swift#object [object-replicator] reclaim_age = 259200 [object-updater] [object-auditor] Swift services expect a configuration path as the first argument:: $ swift-object-auditor Usage: swift-object-auditor CONFIG [options] Error: missing config path argument If you omit the object-auditor section this file could not be used as the configuration path when starting the ``swift-object-auditor`` daemon:: $ swift-object-auditor /etc/swift/object-server.conf Unable to find object-auditor config section in /etc/swift/object-server.conf If the configuration path is a directory instead of a file all of the files in the directory with the file extension ".conf" will be combined to generate the configuration object which is delivered to the Swift service. This is referred to generally as "directory based configuration". Directory based configuration leverages ConfigParser's native multi-file support. 
Files ending in ".conf" in the given directory are parsed in lexicographical order. Filenames starting with '.' are ignored. A mixture of file and directory configuration paths is not supported - if the configuration path is a file only that file will be parsed. The swift service management tool ``swift-init`` has adopted the convention of looking for ``/etc/swift/{type}-server.conf.d/`` if the file ``/etc/swift/{type}-server.conf`` file does not exist. When using directory based configuration, if the same option under the same section appears more than once in different files, the last value parsed is said to override previous occurrences. You can ensure proper override precedence by prefixing the files in the configuration directory with numerical values.:: /etc/swift/ default.base object-server.conf.d/ 000_default.conf -> ../default.base 001_default-override.conf 010_server.conf 020_replicator.conf 030_updater.conf 040_auditor.conf You can inspect the resulting combined configuration object using the ``swift-config`` command line tool .. _general-server-configuration: ---------------------------- General Server Configuration ---------------------------- Swift uses paste.deploy (http://pythonpaste.org/deploy/) to manage server configurations. Default configuration options are set in the `[DEFAULT]` section, and any options specified there can be overridden in any of the other sections BUT ONLY BY USING THE SYNTAX ``set option_name = value``. This is the unfortunate way paste.deploy works and I'll try to explain it in full. First, here's an example paste.deploy configuration file:: [DEFAULT] name1 = globalvalue name2 = globalvalue name3 = globalvalue set name4 = globalvalue [pipeline:main] pipeline = myapp [app:myapp] use = egg:mypkg#myapp name2 = localvalue set name3 = localvalue set name5 = localvalue name6 = localvalue The resulting configuration that myapp receives is:: global {'__file__': '/etc/mypkg/wsgi.conf', 'here': '/etc/mypkg', 'name1': 'globalvalue', 'name2': 'globalvalue', 'name3': 'localvalue', 'name4': 'globalvalue', 'name5': 'localvalue', 'set name4': 'globalvalue'} local {'name6': 'localvalue'} So, `name1` got the global value which is fine since it's only in the `DEFAULT` section anyway. `name2` got the global value from `DEFAULT` even though it appears to be overridden in the `app:myapp` subsection. This is just the unfortunate way paste.deploy works (at least at the time of this writing.) `name3` got the local value from the `app:myapp` subsection because it is using the special paste.deploy syntax of ``set option_name = value``. So, if you want a default value for most app/filters but want to override it in one subsection, this is how you do it. `name4` got the global value from `DEFAULT` since it's only in that section anyway. But, since we used the ``set`` syntax in the `DEFAULT` section even though we shouldn't, notice we also got a ``set name4`` variable. Weird, but probably not harmful. `name5` got the local value from the `app:myapp` subsection since it's only there anyway, but notice that it is in the global configuration and not the local configuration. This is because we used the ``set`` syntax to set the value. Again, weird, but not harmful since Swift just treats the two sets of configuration values as one set anyway. `name6` got the local value from `app:myapp` subsection since it's only there, and since we didn't use the ``set`` syntax, it's only in the local configuration and not the global one. 
Though, as indicated above, there is no special distinction with Swift. That's quite an explanation for something that should be so much simpler, but it might be important to know how paste.deploy interprets configuration files. The main rule to remember when working with Swift configuration files is: .. note:: Use the ``set option_name = value`` syntax in subsections if the option is also set in the ``[DEFAULT]`` section. Don't get in the habit of always using the ``set`` syntax or you'll probably mess up your non-paste.deploy configuration files. -------------------- Common configuration -------------------- An example of common configuration file can be found at etc/swift.conf-sample The following configuration options are available: =================== ========== ============================================= Option Default Description ------------------- ---------- --------------------------------------------- max_header_size 8192 max_header_size is the max number of bytes in the utf8 encoding of each header. Using 8192 as default because eventlet use 8192 as max size of header line. This value may need to be increased when using identity v3 API tokens including more than 7 catalog entries. See also include_service_catalog in proxy-server.conf-sample (documented in overview_auth.rst). extra_header_count 0 By default the maximum number of allowed headers depends on the number of max allowed metadata settings plus a default value of 32 for regular http headers. If for some reason this is not enough (custom middleware for example) it can be increased with the extra_header_count constraint. =================== ========== ============================================= --------------------------- Object Server Configuration --------------------------- An Example Object Server configuration can be found at etc/object-server.conf-sample in the source code repository. The following configuration options are available: .. _object-server-default-options: [DEFAULT] ================================ ========== ========================================== Option Default Description -------------------------------- ---------- ------------------------------------------ swift_dir /etc/swift Swift configuration directory devices /srv/node Parent directory of where devices are mounted mount_check true Whether or not check if the devices are mounted to prevent accidentally writing to the root device bind_ip 0.0.0.0 IP Address for server to bind to bind_port 6000 Port for server to bind to bind_timeout 30 Seconds to attempt bind before giving up backlog 4096 Maximum number of allowed pending connections workers auto Override the number of pre-forked workers that will accept connections. If set it should be an integer, zero means no fork. If unset, it will try to default to the number of effective cpu cores and fallback to one. Increasing the number of workers helps slow filesystem operations in one request from negatively impacting other requests, but only the :ref:`servers_per_port ` option provides complete I/O isolation with no measurable overhead. servers_per_port 0 If each disk in each storage policy ring has unique port numbers for its "ip" value, you can use this setting to have each object-server worker only service requests for the single disk matching the port in the ring. The value of this setting determines how many worker processes run for each port (disk) in the ring. 
If you have 24 disks per server, and this setting is 4, then each storage node will have 1 + (24 * 4) = 97 total object-server processes running. This gives complete I/O isolation, drastically reducing the impact of slow disks on storage node performance. The object-replicator and object-reconstructor need to see this setting too, so it must be in the [DEFAULT] section. See :ref:`server-per-port-configuration`. max_clients 1024 Maximum number of clients one worker can process simultaneously (it will actually accept(2) N + 1). Setting this to one (1) will only handle one request at a time, without accepting another request concurrently. disable_fallocate false Disable "fast fail" fallocate checks if the underlying filesystem does not support it. log_name swift Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory log_max_line_length 0 Caps the length of log lines to the value given; no limit if set to 0, the default. log_custom_handlers None Comma-separated list of functions to call to setup custom log handlers. log_udp_host Override log_address log_udp_port 514 UDP log port log_statsd_host None Enables StatsD logging; IPv4/IPv6 address or a hostname. If a hostname resolves to an IPv4 and IPv6 address, the IPv4 address will be used. log_statsd_port 8125 log_statsd_default_sample_rate 1.0 log_statsd_sample_rate_factor 1.0 log_statsd_metric_prefix eventlet_debug false If true, turn on debug logging for eventlet fallocate_reserve 0 You can set fallocate_reserve to the number of bytes you'd like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they're out of space early. conn_timeout 0.5 Time to wait while attempting to connect to another backend node. node_timeout 3 Time to wait while sending each chunk of data to another backend node. client_timeout 60 Time to wait while receiving each chunk of data from a client or another backend node network_chunk_size 65536 Size of chunks to read/write over the network disk_chunk_size 65536 Size of chunks to read/write to disk container_update_timeout 1 Time to wait while sending a container update on object update. ================================ ========== ========================================== .. _object-server-options: [object-server] ============================= ====================== =============================================== Option Default Description ----------------------------- ---------------------- ----------------------------------------------- use paste.deploy entry point for the object server. For most cases, this should be `egg:swift#object`. set log_name object-server Label used when logging set log_facility LOG_LOCAL0 Syslog log facility set log_level INFO Logging level set log_requests True Whether or not to log each request set log_address /dev/log Logging directory user swift User to run as max_upload_time 86400 Maximum time allowed to upload an object slow 0 If > 0, Minimum time in seconds for a PUT or DELETE request to complete. This is only useful to simulate slow devices during testing and development. 
mb_per_sync 512 On PUT requests, sync file every n MB keep_cache_size 5242880 Largest object size to keep in buffer cache keep_cache_private false Allow non-public objects to stay in kernel's buffer cache allowed_headers Content-Disposition, Comma separated list of headers Content-Encoding, that can be set in metadata on an object. X-Delete-At, This list is in addition to X-Object-Manifest, X-Object-Meta-* headers and cannot include X-Static-Large-Object Content-Type, etag, Content-Length, or deleted auto_create_account_prefix . Prefix used when automatically creating accounts. threads_per_disk 0 Size of the per-disk thread pool used for performing disk I/O. The default of 0 means to not use a per-disk thread pool. This option is no longer recommended and the :ref:`servers_per_port ` should be used instead. replication_server Configure parameter for creating specific server. To handle all verbs, including replication verbs, do not specify "replication_server" (this is the default). To only handle replication, set to a True value (e.g. "True" or "1"). To handle only non-replication verbs, set to "False". Unless you have a separate replication network, you should not specify any value for "replication_server". replication_concurrency 4 Set to restrict the number of concurrent incoming SSYNC requests; set to 0 for unlimited replication_one_per_device True Restricts incoming SSYNC requests to one per device, replication_currency above allowing. This can help control I/O to each device, but you may wish to set this to False to allow multiple SSYNC requests (up to the above replication_concurrency setting) per device. replication_lock_timeout 15 Number of seconds to wait for an existing replication device lock before giving up. replication_failure_threshold 100 The number of subrequest failures before the replication_failure_ratio is checked replication_failure_ratio 1.0 If the value of failures / successes of SSYNC subrequests exceeds this ratio, the overall SSYNC request will be aborted splice no Use splice() for zero-copy object GETs. This requires Linux kernel version 3.0 or greater. If you set "splice = yes" but the kernel does not support it, error messages will appear in the object server logs at startup, but your object servers should continue to function. ============================= ====================== =============================================== [object-replicator] =========================== ======================== ================================ Option Default Description --------------------------- ------------------------ -------------------------------- log_name object-replicator Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory daemonize yes Whether or not to run replication as a daemon interval 30 Time in seconds to wait between replication passes concurrency 1 Number of replication workers to spawn sync_method rsync The sync method to use; default is rsync but you can use ssync to try the EXPERIMENTAL all-swift-code-no-rsync-callouts method. Once ssync is verified as or better than, rsync, we plan to deprecate rsync so we can move on with more features for replication. rsync_timeout 900 Max duration of a partition rsync rsync_bwlimit 0 Bandwidth limit for rsync in kB/s. 0 means unlimited. rsync_io_timeout 30 Timeout value sent to rsync --timeout and --contimeout options rsync_compress no Allow rsync to compress data which is transmitted to destination node during sync. 
However, this is applicable only when destination node is in a different region than the local one. NOTE: Objects that are already compressed (for example: .tar.gz, .mp3) might slow down the syncing process. stats_interval 300 Interval in seconds between logging replication statistics reclaim_age 604800 Time elapsed in seconds before an object can be reclaimed handoffs_first false If set to True, partitions that are not supposed to be on the node will be replicated first. The default setting should not be changed, except for extreme situations. handoff_delete auto By default handoff partitions will be removed when it has successfully replicated to all the canonical nodes. If set to an integer n, it will remove the partition if it is successfully replicated to n nodes. The default setting should not be changed, except for extreme situations. node_timeout DEFAULT or 10 Request timeout to external services. This uses what's set here, or what's set in the DEFAULT section, or 10 (though other sections use 3 as the final default). http_timeout 60 Max duration of an http request. This is for REPLICATE finalization calls and so should be longer than node_timeout. lockup_timeout 1800 Attempts to kill all workers if nothing replicates for lockup_timeout seconds rsync_module {replication_ip}::object Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples. rsync_error_log_line_length 0 Limits how long rsync error log lines are ring_check_interval 15 Interval for checking new ring file recon_cache_path /var/cache/swift Path to recon cache =========================== ======================== ================================ [object-updater] ================== =================== ========================================== Option Default Description ------------------ ------------------- ------------------------------------------ log_name object-updater Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory interval 300 Minimum time for a pass to take concurrency 1 Number of updater workers to spawn node_timeout DEFAULT or 10 Request timeout to external services. This uses what's set here, or what's set in the DEFAULT section, or 10 (though other sections use 3 as the final default). slowdown 0.01 Time in seconds to wait between objects recon_cache_path /var/cache/swift Path to recon cache ================== =================== ========================================== [object-auditor] =========================== =================== ========================================== Option Default Description --------------------------- ------------------- ------------------------------------------ log_name object-auditor Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory log_time 3600 Frequency of status logs in seconds. interval 30 Time in seconds to wait between auditor passes disk_chunk_size 65536 Size of chunks read during auditing files_per_second 20 Maximum files audited per second per auditor process. Should be tuned according to individual system specs. 0 is unlimited. bytes_per_second 10000000 Maximum bytes audited per second per auditor process. 
Should be tuned according to individual system specs. 0 is unlimited. concurrency 1 The number of parallel processes to use for checksum auditing. zero_byte_files_per_second 50 object_size_stats recon_cache_path /var/cache/swift Path to recon cache rsync_tempfile_timeout auto Time elapsed in seconds before rsync tempfiles will be unlinked. Config value of "auto" try to use object-replicator's rsync_timeout + 900 or fallback to 86400 (1 day). =========================== =================== ========================================== ------------------------------ Container Server Configuration ------------------------------ An example Container Server configuration can be found at etc/container-server.conf-sample in the source code repository. The following configuration options are available: [DEFAULT] =============================== ========== ============================================ Option Default Description ------------------------------- ---------- -------------------------------------------- swift_dir /etc/swift Swift configuration directory devices /srv/node Parent directory of where devices are mounted mount_check true Whether or not check if the devices are mounted to prevent accidentally writing to the root device bind_ip 0.0.0.0 IP Address for server to bind to bind_port 6001 Port for server to bind to bind_timeout 30 Seconds to attempt bind before giving up backlog 4096 Maximum number of allowed pending connections workers auto Override the number of pre-forked workers that will accept connections. If set it should be an integer, zero means no fork. If unset, it will try to default to the number of effective cpu cores and fallback to one. Increasing the number of workers may reduce the possibility of slow file system operations in one request from negatively impacting other requests. See :ref:`general-service-tuning`. max_clients 1024 Maximum number of clients one worker can process simultaneously (it will actually accept(2) N + 1). Setting this to one (1) will only handle one request at a time, without accepting another request concurrently. user swift User to run as disable_fallocate false Disable "fast fail" fallocate checks if the underlying filesystem does not support it. log_name swift Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory log_max_line_length 0 Caps the length of log lines to the value given; no limit if set to 0, the default. log_custom_handlers None Comma-separated list of functions to call to setup custom log handlers. log_udp_host Override log_address log_udp_port 514 UDP log port log_statsd_host None Enables StatsD logging; IPv4/IPv6 address or a hostname. If a hostname resolves to an IPv4 and IPv6 address, the IPv4 address will be used. log_statsd_port 8125 log_statsd_default_sample_rate 1.0 log_statsd_sample_rate_factor 1.0 log_statsd_metric_prefix eventlet_debug false If true, turn on debug logging for eventlet fallocate_reserve 0 You can set fallocate_reserve to the number of bytes you'd like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they're out of space early. db_preallocation off If you don't mind the extra disk space usage in overhead, you can turn this on to preallocate disk space with SQLite databases to decrease fragmentation. 
=============================== ========== ============================================ [container-server] ============================== ================ ======================================== Option Default Description ------------------------------ ---------------- ---------------------------------------- use paste.deploy entry point for the container server. For most cases, this should be `egg:swift#container`. set log_name container-server Label used when logging set log_facility LOG_LOCAL0 Syslog log facility set log_level INFO Logging level set log_requests True Whether or not to log each request set log_address /dev/log Logging directory node_timeout 3 Request timeout to external services conn_timeout 0.5 Connection timeout to external services allow_versions false Enable/Disable object versioning feature auto_create_account_prefix . Prefix used when automatically replication_server Configure parameter for creating specific server. To handle all verbs, including replication verbs, do not specify "replication_server" (this is the default). To only handle replication, set to a True value (e.g. "True" or "1"). To handle only non-replication verbs, set to "False". Unless you have a separate replication network, you should not specify any value for "replication_server". ============================== ================ ======================================== [container-replicator] ================== =========================== ============================= Option Default Description ------------------ --------------------------- ----------------------------- log_name container-replicator Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory per_diff 1000 Maximum number of database rows that will be sync'd in a single HTTP replication request. Databases with less than or equal to this number of differing rows will always be sync'd using an HTTP replication request rather than using rsync. max_diffs 100 Maximum number of HTTP replication requests attempted on each replication pass for any one container. This caps how long the replicator will spend trying to sync a given database per pass so the other databases don't get starved. concurrency 8 Number of replication workers to spawn interval 30 Time in seconds to wait between replication passes node_timeout 10 Request timeout to external services conn_timeout 0.5 Connection timeout to external services reclaim_age 604800 Time elapsed in seconds before a container can be reclaimed rsync_module {replication_ip}::container Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples. rsync_compress no Allow rsync to compress data which is transmitted to destination node during sync. However, this is applicable only when destination node is in a different region than the local one. NOTE: Objects that are already compressed (for example: .tar.gz, mp3) might slow down the syncing process. 
recon_cache_path /var/cache/swift Path to recon cache ================== =========================== ============================= [container-updater] ======================== ================= ================================== Option Default Description ------------------------ ----------------- ---------------------------------- log_name container-updater Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory interval 300 Minimum time for a pass to take concurrency 4 Number of updater workers to spawn node_timeout 3 Request timeout to external services conn_timeout 0.5 Connection timeout to external services slowdown 0.01 Time in seconds to wait between containers account_suppression_time 60 Seconds to suppress updating an account that has generated an error (timeout, not yet found, etc.) recon_cache_path /var/cache/swift Path to recon cache ======================== ================= ================================== [container-auditor] ===================== ================= ======================================= Option Default Description --------------------- ----------------- --------------------------------------- log_name container-auditor Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory interval 1800 Minimum time for a pass to take containers_per_second 200 Maximum containers audited per second. Should be tuned according to individual system specs. 0 is unlimited. recon_cache_path /var/cache/swift Path to recon cache ===================== ================= ======================================= ---------------------------- Account Server Configuration ---------------------------- An example Account Server configuration can be found at etc/account-server.conf-sample in the source code repository. The following configuration options are available: [DEFAULT] =============================== ========== ============================================= Option Default Description ------------------------------- ---------- --------------------------------------------- swift_dir /etc/swift Swift configuration directory devices /srv/node Parent directory or where devices are mounted mount_check true Whether or not check if the devices are mounted to prevent accidentally writing to the root device bind_ip 0.0.0.0 IP Address for server to bind to bind_port 6002 Port for server to bind to bind_timeout 30 Seconds to attempt bind before giving up backlog 4096 Maximum number of allowed pending connections workers auto Override the number of pre-forked workers that will accept connections. If set it should be an integer, zero means no fork. If unset, it will try to default to the number of effective cpu cores and fallback to one. Increasing the number of workers may reduce the possibility of slow file system operations in one request from negatively impacting other requests. See :ref:`general-service-tuning`. max_clients 1024 Maximum number of clients one worker can process simultaneously (it will actually accept(2) N + 1). Setting this to one (1) will only handle one request at a time, without accepting another request concurrently. user swift User to run as db_preallocation off If you don't mind the extra disk space usage in overhead, you can turn this on to preallocate disk space with SQLite databases to decrease fragmentation. 
disable_fallocate false Disable "fast fail" fallocate checks if the underlying filesystem does not support it. log_name swift Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory log_max_line_length 0 Caps the length of log lines to the value given; no limit if set to 0, the default. log_custom_handlers None Comma-separated list of functions to call to setup custom log handlers. log_udp_host Override log_address log_udp_port 514 UDP log port log_statsd_host None Enables StatsD logging; IPv4/IPv6 address or a hostname. If a hostname resolves to an IPv4 and IPv6 address, the IPv4 address will be used. log_statsd_port 8125 log_statsd_default_sample_rate 1.0 log_statsd_sample_rate_factor 1.0 log_statsd_metric_prefix eventlet_debug false If true, turn on debug logging for eventlet fallocate_reserve 0 You can set fallocate_reserve to the number of bytes you'd like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they're out of space early. =============================== ========== ============================================= [account-server] ============================= ============== ========================================== Option Default Description ----------------------------- -------------- ------------------------------------------ use Entry point for paste.deploy for the account server. For most cases, this should be `egg:swift#account`. set log_name account-server Label used when logging set log_facility LOG_LOCAL0 Syslog log facility set log_level INFO Logging level set log_requests True Whether or not to log each request set log_address /dev/log Logging directory auto_create_account_prefix . Prefix used when automatically creating accounts. replication_server Configure parameter for creating specific server. To handle all verbs, including replication verbs, do not specify "replication_server" (this is the default). To only handle replication, set to a True value (e.g. "True" or "1"). To handle only non-replication verbs, set to "False". Unless you have a separate replication network, you should not specify any value for "replication_server". ============================= ============== ========================================== [account-replicator] ================== ========================= =============================== Option Default Description ------------------ ------------------------- ------------------------------- log_name account-replicator Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory per_diff 1000 Maximum number of database rows that will be sync'd in a single HTTP replication request. Databases with less than or equal to this number of differing rows will always be sync'd using an HTTP replication request rather than using rsync. max_diffs 100 Maximum number of HTTP replication requests attempted on each replication pass for any one container. This caps how long the replicator will spend trying to sync a given database per pass so the other databases don't get starved. 
concurrency 8 Number of replication workers to spawn interval 30 Time in seconds to wait between replication passes node_timeout 10 Request timeout to external services conn_timeout 0.5 Connection timeout to external services reclaim_age 604800 Time elapsed in seconds before an account can be reclaimed rsync_module {replication_ip}::account Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples. rsync_compress no Allow rsync to compress data which is transmitted to destination node during sync. However, this is applicable only when destination node is in a different region than the local one. NOTE: Objects that are already compressed (for example: .tar.gz, mp3) might slow down the syncing process. recon_cache_path /var/cache/swift Path to recon cache ================== ========================= =============================== [account-auditor] ==================== ================ ======================================= Option Default Description -------------------- ---------------- --------------------------------------- log_name account-auditor Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory interval 1800 Minimum time for a pass to take accounts_per_second 200 Maximum accounts audited per second. Should be tuned according to individual system specs. 0 is unlimited. recon_cache_path /var/cache/swift Path to recon cache ==================== ================ ======================================= [account-reaper] ================== =============== ========================================= Option Default Description ------------------ --------------- ----------------------------------------- log_name account-reaper Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_address /dev/log Logging directory concurrency 25 Number of replication workers to spawn interval 3600 Minimum time for a pass to take node_timeout 10 Request timeout to external services conn_timeout 0.5 Connection timeout to external services delay_reaping 0 Normally, the reaper begins deleting account information for deleted accounts immediately; you can set this to delay its work however. The value is in seconds, 2592000 = 30 days, for example. reap_warn_after 2892000 If the account fails to be be reaped due to a persistent error, the account reaper will log a message such as: Account has not been reaped since You can search logs for this message if space is not being reclaimed after you delete account(s). This is in addition to any time requested by delay_reaping. ================== =============== ========================================= .. _proxy-server-config: -------------------------- Proxy Server Configuration -------------------------- An example Proxy Server configuration can be found at etc/proxy-server.conf-sample in the source code repository. 
The following configuration options are available: [DEFAULT] ==================================== ======================== ======================================== Option Default Description ------------------------------------ ------------------------ ---------------------------------------- bind_ip 0.0.0.0 IP Address for server to bind to bind_port 80 Port for server to bind to bind_timeout 30 Seconds to attempt bind before giving up backlog 4096 Maximum number of allowed pending connections swift_dir /etc/swift Swift configuration directory workers auto Override the number of pre-forked workers that will accept connections. If set it should be an integer, zero means no fork. If unset, it will try to default to the number of effective cpu cores and fallback to one. See :ref:`general-service-tuning`. max_clients 1024 Maximum number of clients one worker can process simultaneously (it will actually accept(2) N + 1). Setting this to one (1) will only handle one request at a time, without accepting another request concurrently. user swift User to run as cert_file Path to the ssl .crt. This should be enabled for testing purposes only. key_file Path to the ssl .key. This should be enabled for testing purposes only. cors_allow_origin This is a list of hosts that are included with any CORS request by default and returned with the Access-Control-Allow-Origin header in addition to what the container has set. strict_cors_mode True client_timeout 60 trans_id_suffix This optional suffix (default is empty) that would be appended to the swift transaction id allows one to easily figure out from which cluster that X-Trans-Id belongs to. This is very useful when one is managing more than one swift cluster. log_name swift Label used when logging log_facility LOG_LOCAL0 Syslog log facility log_level INFO Logging level log_headers False log_address /dev/log Logging directory log_max_line_length 0 Caps the length of log lines to the value given; no limit if set to 0, the default. log_custom_handlers None Comma separated list of functions to call to setup custom log handlers. log_udp_host Override log_address log_udp_port 514 UDP log port log_statsd_host None Enables StatsD logging; IPv4/IPv6 address or a hostname. If a hostname resolves to an IPv4 and IPv6 address, the IPv4 address will be used. log_statsd_port 8125 log_statsd_default_sample_rate 1.0 log_statsd_sample_rate_factor 1.0 log_statsd_metric_prefix eventlet_debug false If true, turn on debug logging for eventlet expose_info true Enables exposing configuration settings via HTTP GET /info. admin_key Key to use for admin calls that are HMAC signed. Default is empty, which will disable admin calls to /info. disallowed_sections swift.valid_api_versions Allows the ability to withhold sections from showing up in the public calls to /info. You can withhold subsections by separating the dict level with a ".". expiring_objects_container_divisor 86400 expiring_objects_account_name expiring_objects ==================================== ======================== ======================================== [proxy-server] ============================ =============== ============================= Option Default Description ---------------------------- --------------- ----------------------------- use Entry point for paste.deploy for the proxy server. For most cases, this should be `egg:swift#proxy`. 
set log_name proxy-server Label used when logging set log_facility LOG_LOCAL0 Syslog log facility set log_level INFO Log level set log_headers True If True, log headers in each request set log_handoffs True If True, the proxy will log whenever it has to failover to a handoff node recheck_account_existence 60 Cache timeout in seconds to send memcached for account existence recheck_container_existence 60 Cache timeout in seconds to send memcached for container existence object_chunk_size 65536 Chunk size to read from object servers client_chunk_size 65536 Chunk size to read from clients memcache_servers 127.0.0.1:11211 Comma separated list of memcached servers ip:port or [ipv6addr]:port memcache_max_connections 2 Max number of connections to each memcached server per worker node_timeout 10 Request timeout to external services recoverable_node_timeout node_timeout Request timeout to external services for requests that, on failure, can be recovered from. For example, object GET. client_timeout 60 Timeout to read one chunk from a client conn_timeout 0.5 Connection timeout to external services error_suppression_interval 60 Time in seconds that must elapse since the last error for a node to be considered no longer error limited error_suppression_limit 10 Error count to consider a node error limited allow_account_management false Whether account PUTs and DELETEs are even callable object_post_as_copy true Set object_post_as_copy = false to turn on fast posts where only the metadata changes are stored anew and the original data file is kept in place. This makes for quicker posts. account_autocreate false If set to 'true' authorized accounts that do not yet exist within the Swift cluster will be automatically created. max_containers_per_account 0 If set to a positive value, trying to create a container when the account already has at least this maximum containers will result in a 403 Forbidden. Note: This is a soft limit, meaning a user might exceed the cap for recheck_account_existence before the 403s kick in. max_containers_whitelist This is a comma separated list of account names that ignore the max_containers_per_account cap. rate_limit_after_segment 10 Rate limit the download of large object segments after this segment is downloaded. rate_limit_segments_per_sec 1 Rate limit large object downloads at this rate. request_node_count 2 * replicas Set to the number of nodes to contact for a normal request. You can use '* replicas' at the end to have it use the number given times the number of replicas for the ring being used for the request. swift_owner_headers up to the auth system in use, but usually indicates administrative responsibilities. sorting_method shuffle Storage nodes can be chosen at random (shuffle), by using timing measurements (timing), or by using an explicit match (affinity). Using timing measurements may allow for lower overall latency, while using affinity allows for finer control. In both the timing and affinity cases, equally-sorting nodes are still randomly chosen to spread load. timing_expiry 300 If the "timing" sorting_method is used, the timings will only be valid for the number of seconds configured by timing_expiry. concurrent_gets off Use replica count number of threads concurrently during a GET/HEAD and return with the first successful response. In the EC case, this parameter only effects an EC HEAD as an EC GET behaves differently. concurrency_timeout conn_timeout This parameter controls how long to wait before firing off the next concurrent_get thread. 
A value of 0 would be fully concurrent; any other number will stagger the firing of the threads. This number should be between 0 and node_timeout. The default is conn_timeout (0.5). ============================ =============== ============================= [tempauth] ===================== =============================== ======================= Option Default Description --------------------- ------------------------------- ----------------------- use Entry point for paste.deploy to use for auth. To use tempauth set to: `egg:swift#tempauth` set log_name tempauth Label used when logging set log_facility LOG_LOCAL0 Syslog log facility set log_level INFO Log level set log_headers True If True, log headers in each request reseller_prefix AUTH The naming scope for the auth service. Swift storage accounts and auth tokens will begin with this prefix. auth_prefix /auth/ The HTTP request path prefix for the auth service. Swift itself reserves anything beginning with the letter `v`. token_life 86400 The number of seconds a token is valid. storage_url_scheme default Scheme to return with storage urls: http, https, or default (chooses based on what the server is running as) This can be useful with an SSL load balancer in front of a non-SSL server. ===================== =============================== ======================= Additionally, you need to list all the accounts/users you want here. The format is:: user_<account>_<user> = <key> [group] [group] [...] [storage_url] or if you want to be able to include underscores in the ``<account>`` or ``<user>`` portions, you can base64 encode them (with *no* equal signs) in a line like this:: user64_<account_b64>_<user_b64> = <key> [group] [group] [...] [storage_url] There are special groups of:: .reseller_admin = can do anything to any account for this auth .admin = can do anything within the account If neither of these groups is specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate url to hand back to the user upon authentication. If not specified, this defaults to:: $HOST/v1/<reseller_prefix>_<account> Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name. Note that $HOST cannot possibly handle the case when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override. Here are example entries, required for running the tests:: user_admin_admin = admin .admin .reseller_admin user_test_tester = testing .admin user_test2_tester2 = testing2 .admin user_test_tester3 = testing3 # account "test_y" and user "tester_y" (note the lack of padding = chars) user64_dGVzdF95_dGVzdGVyX3k = testing4 .admin ------------------------ Memcached Considerations ------------------------ Several of the Services rely on Memcached for caching certain types of lookups, such as auth tokens and container/account existence. Swift does not do any caching of actual object data. Memcached should be able to run on any servers that have available RAM and CPU. At Rackspace, we run Memcached on the proxy servers. The `memcache_servers` config option in the `proxy-server.conf` should contain all memcached servers. ----------- System Time ----------- Time may be relative but it is relatively important for Swift! Swift uses timestamps to determine which is the most recent version of an object.
It is very important for the system time on each server in the cluster to be synced as closely as possible (more so for the proxy server, but in general it is a good idea for all the servers). At Rackspace, we use NTP with a local NTP server to ensure that the system times are as close as possible. This should also be monitored to ensure that the times do not vary too much. .. _general-service-tuning: ---------------------- General Service Tuning ---------------------- Most services support either a `worker` or `concurrency` value in the settings. This allows the services to make effective use of the cores available. A good starting point is to set the concurrency level for the proxy and storage services to 2 times the number of cores available. If more than one service is sharing a server, then some experimentation may be needed to find the best balance. At Rackspace, our Proxy servers have dual quad core processors, giving us 8 cores. Our testing has shown 16 workers to be a pretty good balance when saturating a 10g network, while giving good CPU utilization. Our Storage server processes all run together on the same servers. These servers have dual quad core processors, for 8 cores total. We run the Account, Container, and Object servers with 8 workers each. Most of the background jobs are run at a concurrency of 1, with the exception of the replicators which are run at a concurrency of 2. The `max_clients` parameter can be used to adjust the number of client requests an individual worker accepts for processing. The fewer requests being processed at one time, the less likely a request that consumes the worker's CPU time, or blocks in the OS, will negatively impact other requests. The more requests being processed at one time, the more likely one worker can utilize network and disk capacity. On systems that have more cores, and more memory, where one can afford to run more workers, raising the number of workers and lowering the maximum number of clients serviced per worker can lessen the impact of CPU intensive or stalled requests. The above configuration settings should be taken as suggestions; testing of configuration settings should be done to ensure the best utilization of CPU, network connectivity, and disk I/O. ------------------------- Filesystem Considerations ------------------------- Swift is designed to be mostly filesystem agnostic--the only requirement being that the filesystem supports extended attributes (xattrs); a quick way to verify xattr support is shown below. After thorough testing with our use cases and hardware configurations, XFS was the best all-around choice. If you decide to use a filesystem other than XFS, we highly recommend thorough testing. For distros with more recent kernels (for example Ubuntu 12.04 Precise), we recommend using the default settings (including the default inode size of 256 bytes) when creating the file system:: mkfs.xfs /dev/sda1 In the last couple of years, XFS has made great improvements in how inodes are allocated and used. Using the default inode size no longer has an impact on performance. For distros with older kernels (for example Ubuntu 10.04 Lucid), some settings can dramatically impact performance. We recommend the following when creating the file system:: mkfs.xfs -i size=1024 /dev/sda1 Setting the inode size is important, as XFS stores xattr data in the inode. If the metadata is too large to fit in the inode, a new extent is created, which can cause quite a performance problem. Upping the inode size to 1024 bytes provides enough room to write the default metadata, plus a little headroom.
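If you are evaluating a filesystem other than XFS, a quick sanity check of xattr support (a minimal sketch, assuming the ``attr`` package provides the ``setfattr``/``getfattr`` tools and that the filesystem under test is mounted at the hypothetical path ``/mnt/test``) is to write a user extended attribute and read it back::

    touch /mnt/test/xattr-probe
    setfattr -n user.swift.test -v hello /mnt/test/xattr-probe
    getfattr -n user.swift.test /mnt/test/xattr-probe

If ``getfattr`` prints the value back, xattrs are supported; an "Operation not supported" error means the filesystem or its mount options will not work for Swift data drives.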
The following example mount options are recommended when using XFS:: mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8 /dev/sda1 /srv/node/sda We do not recommend running Swift on RAID, but if you are using RAID it is also important to make sure that the proper sunit and swidth settings get set so that XFS can make most efficient use of the RAID array. For a standard swift install, all data drives are mounted directly under ``/srv/node`` (as can be seen in the above example of mounting ``/dev/sda1`` as ``/srv/node/sda``). If you choose to mount the drives in another directory, be sure to set the `devices` config option in all of the server configs to point to the correct directory. The mount points for each drive in ``/srv/node/`` should be owned by the root user almost exclusively (``root:root 755``). This is required to prevent rsync from syncing files into the root drive in the event a drive is unmounted. Swift uses system calls to reserve space for new objects being written into the system. If your filesystem does not support `fallocate()` or `posix_fallocate()`, be sure to set the `disable_fallocate = true` config parameter in account, container, and object server configs. Most current Linux distributions ship with a default installation of updatedb. This tool runs periodically and updates the file name database that is used by the GNU locate tool. However, including Swift object and container database files is most likely not required and the periodic update affects the performance quite a bit. To disable the inclusion of these files add the path where Swift stores its data to the setting PRUNEPATHS in `/etc/updatedb.conf`:: PRUNEPATHS="... /tmp ... /var/spool ... /srv/node" --------------------- General System Tuning --------------------- Rackspace currently runs Swift on Ubuntu Server 10.04, and the following changes have been found to be useful for our use cases. The following settings should be in `/etc/sysctl.conf`:: # disable TIME_WAIT.. wait.. net.ipv4.tcp_tw_recycle=1 net.ipv4.tcp_tw_reuse=1 # disable syn cookies net.ipv4.tcp_syncookies = 0 # double amount of allowed conntrack net.ipv4.netfilter.ip_conntrack_max = 262144 To load the updated sysctl settings, run ``sudo sysctl -p`` A note about changing the TIME_WAIT values. By default the OS will hold a port open for 60 seconds to ensure that any remaining packets can be received. During high usage, and with the number of connections that are created, it is easy to run out of ports. We can change this since we are in control of the network. If you are not in control of the network, or do not expect high loads, then you may not want to adjust those values. ---------------------- Logging Considerations ---------------------- Swift is set up to log directly to syslog. Every service can be configured with the `log_facility` option to set the syslog log facility destination. We recommended using syslog-ng to route the logs to specific log files locally on the server and also to remote log collecting servers. Additionally, custom log handlers can be used via the custom_log_handlers setting. swift-2.7.1/doc/source/development_saio.rst0000664000567000056710000005132413024044354022156 0ustar jenkinsjenkins00000000000000======================= SAIO - Swift All In One ======================= --------------------------------------------- Instructions for setting up a development VM --------------------------------------------- This section documents setting up a virtual machine for doing Swift development. 
The virtual machine will emulate running a four node Swift cluster. To begin: * Get an Ubuntu 14.04 LTS server image or try something Fedora/CentOS. * Create guest virtual machine from the image. ---------------------------- What's in a ---------------------------- Much of the configuration described in this guide requires escalated administrator (``root``) privileges; however, we assume that administrator logs in as an unprivileged user and can use ``sudo`` to run privileged commands. Swift processes also run under a separate user and group, set by configuration option, and referenced as ``:``. The default user is ``swift``, which may not exist on your system. These instructions are intended to allow a developer to use his/her username for ``:``. ----------------------- Installing dependencies ----------------------- * On ``apt`` based systems:: sudo apt-get update sudo apt-get install curl gcc memcached rsync sqlite3 xfsprogs \ git-core libffi-dev python-setuptools \ liberasurecode-dev sudo apt-get install python-coverage python-dev python-nose \ python-xattr python-eventlet \ python-greenlet python-pastedeploy \ python-netifaces python-pip python-dnspython \ python-mock * On ``yum`` based systems:: sudo yum update sudo yum install curl gcc memcached rsync sqlite xfsprogs git-core \ libffi-devel xinetd liberasurecode-devel \ python-setuptools \ python-coverage python-devel python-nose \ pyxattr python-eventlet \ python-greenlet python-paste-deploy \ python-netifaces python-pip python-dns \ python-mock Note: This installs necessary system dependencies and *most* of the python dependencies. Later in the process setuptools/distribute or pip will install and/or upgrade packages. Next, choose either :ref:`partition-section` or :ref:`loopback-section`. .. _partition-section: Using a partition for storage ============================= If you are going to use a separate partition for Swift data, be sure to add another device when creating the VM, and follow these instructions: #. Set up a single partition:: sudo fdisk /dev/sdb sudo mkfs.xfs /dev/sdb1 #. Edit ``/etc/fstab`` and add:: /dev/sdb1 /mnt/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0 #. Create the mount point and the individualized links:: sudo mkdir /mnt/sdb1 sudo mount /mnt/sdb1 sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4 sudo chown ${USER}:${USER} /mnt/sdb1/* sudo mkdir /srv for x in {1..4}; do sudo ln -s /mnt/sdb1/$x /srv/$x; done sudo mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \ /srv/2/node/sdb2 /srv/2/node/sdb6 \ /srv/3/node/sdb3 /srv/3/node/sdb7 \ /srv/4/node/sdb4 /srv/4/node/sdb8 \ /var/run/swift sudo chown -R ${USER}:${USER} /var/run/swift # **Make sure to include the trailing slash after /srv/$x/** for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done Note: We create the mount points and mount the storage disk under /mnt/sdb1. This disk will contain one directory per simulated swift node, each owned by the current swift user. We then create symlinks to these directories under /srv. If the disk sdb is unmounted, files will not be written under /srv/\*, because the symbolic link destination /mnt/sdb1/* will not exist. This prevents disk sync operations from writing to the root partition in the event a drive is unmounted. #. Next, skip to :ref:`common-dev-section`. .. _loopback-section: Using a loopback device for storage =================================== If you want to use a loopback device instead of another partition, follow these instructions: #. 
Create the file for the loopback device:: sudo mkdir /srv sudo truncate -s 1GB /srv/swift-disk sudo mkfs.xfs /srv/swift-disk Modify size specified in the ``truncate`` command to make a larger or smaller partition as needed. #. Edit `/etc/fstab` and add:: /srv/swift-disk /mnt/sdb1 xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0 #. Create the mount point and the individualized links:: sudo mkdir /mnt/sdb1 sudo mount /mnt/sdb1 sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4 sudo chown ${USER}:${USER} /mnt/sdb1/* for x in {1..4}; do sudo ln -s /mnt/sdb1/$x /srv/$x; done sudo mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \ /srv/2/node/sdb2 /srv/2/node/sdb6 \ /srv/3/node/sdb3 /srv/3/node/sdb7 \ /srv/4/node/sdb4 /srv/4/node/sdb8 \ /var/run/swift sudo chown -R ${USER}:${USER} /var/run/swift # **Make sure to include the trailing slash after /srv/$x/** for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done Note: We create the mount points and mount the loopback file under /mnt/sdb1. This file will contain one directory per simulated swift node, each owned by the current swift user. We then create symlinks to these directories under /srv. If the loopback file is unmounted, files will not be written under /srv/\*, because the symbolic link destination /mnt/sdb1/* will not exist. This prevents disk sync operations from writing to the root partition in the event a drive is unmounted. .. _common-dev-section: Common Post-Device Setup ======================== Add the following lines to ``/etc/rc.local`` (before the ``exit 0``):: mkdir -p /var/cache/swift /var/cache/swift2 /var/cache/swift3 /var/cache/swift4 chown : /var/cache/swift* mkdir -p /var/run/swift chown : /var/run/swift Note that on some systems you might have to create ``/etc/rc.local``. On Fedora 19 or later, you need to place these in ``/etc/rc.d/rc.local``. ---------------- Getting the code ---------------- #. Check out the python-swiftclient repo:: cd $HOME; git clone https://github.com/openstack/python-swiftclient.git #. Build a development installation of python-swiftclient:: cd $HOME/python-swiftclient; sudo python setup.py develop; cd - Ubuntu 12.04 users need to install python-swiftclient's dependencies before the installation of python-swiftclient. This is due to a bug in an older version of setup tools:: cd $HOME/python-swiftclient; sudo pip install -r requirements.txt; sudo python setup.py develop; cd - #. Check out the swift repo:: git clone https://github.com/openstack/swift.git #. Build a development installation of swift:: cd $HOME/swift; sudo pip install -r requirements.txt; sudo python setup.py develop; cd - Fedora 19 or later users might have to perform the following if development installation of swift fails:: sudo pip install -U xattr #. Install swift's test dependencies:: cd $HOME/swift; sudo pip install -r test-requirements.txt ---------------- Setting up rsync ---------------- #. Create ``/etc/rsyncd.conf``:: sudo cp $HOME/swift/doc/saio/rsyncd.conf /etc/ sudo sed -i "s//${USER}/" /etc/rsyncd.conf Here is the default ``rsyncd.conf`` file contents maintained in the repo that is copied and fixed up above: .. literalinclude:: /../saio/rsyncd.conf #. On Ubuntu, edit the following line in ``/etc/default/rsync``:: RSYNC_ENABLE=true On Fedora, edit the following line in ``/etc/xinetd.d/rsync``:: disable = no One might have to create the above files to perform the edits. #. 
On platforms with SELinux in ``Enforcing`` mode, either set to ``Permissive``:: sudo setenforce Permissive Or just allow rsync full access:: sudo setsebool -P rsync_full_access 1 #. Start the rsync daemon * On Ubuntu, run:: sudo service rsync restart * On Fedora, run:: sudo systemctl restart xinetd.service sudo systemctl enable rsyncd.service sudo systemctl start rsyncd.service * On other xinetd based systems simply run:: sudo service xinetd restart #. Verify rsync is accepting connections for all servers:: rsync rsync://pub@localhost/ You should see the following output from the above command:: account6012 account6022 account6032 account6042 container6011 container6021 container6031 container6041 object6010 object6020 object6030 object6040 ------------------ Starting memcached ------------------ On non-Ubuntu distros you need to ensure memcached is running:: sudo service memcached start sudo chkconfig memcached on or:: sudo systemctl enable memcached.service sudo systemctl start memcached.service The tempauth middleware stores tokens in memcached. If memcached is not running, tokens cannot be validated, and accessing Swift becomes impossible. --------------------------------------------------- Optional: Setting up rsyslog for individual logging --------------------------------------------------- #. Install the swift rsyslogd configuration:: sudo cp $HOME/swift/doc/saio/rsyslog.d/10-swift.conf /etc/rsyslog.d/ Be sure to review that conf file to determine if you want all the logs in one file vs. all the logs separated out, and if you want hourly logs for stats processing. For convenience, we provide its default contents below: .. literalinclude:: /../saio/rsyslog.d/10-swift.conf #. Edit ``/etc/rsyslog.conf`` and make the following change (usually in the "GLOBAL DIRECTIVES" section):: $PrivDropToGroup adm #. If using hourly logs (see above) perform:: sudo mkdir -p /var/log/swift/hourly Otherwise perform:: sudo mkdir -p /var/log/swift #. Setup the logging directory and start syslog: * On Ubuntu:: sudo chown -R syslog.adm /var/log/swift sudo chmod -R g+w /var/log/swift sudo service rsyslog restart * On Fedora:: sudo chown -R root:adm /var/log/swift sudo chmod -R g+w /var/log/swift sudo systemctl restart rsyslog.service --------------------- Configuring each node --------------------- After performing the following steps, be sure to verify that Swift has access to resulting configuration files (sample configuration files are provided with all defaults in line-by-line comments). #. Optionally remove an existing swift directory:: sudo rm -rf /etc/swift #. Populate the ``/etc/swift`` directory itself:: cd $HOME/swift/doc; sudo cp -r saio/swift /etc/swift; cd - sudo chown -R ${USER}:${USER} /etc/swift #. Update ```` references in the Swift config files:: find /etc/swift/ -name \*.conf | xargs sudo sed -i "s//${USER}/" The contents of the configuration files provided by executing the above commands are as follows: #. ``/etc/swift/swift.conf`` .. literalinclude:: /../saio/swift/swift.conf #. ``/etc/swift/proxy-server.conf`` .. literalinclude:: /../saio/swift/proxy-server.conf #. ``/etc/swift/object-expirer.conf`` .. literalinclude:: /../saio/swift/object-expirer.conf #. ``/etc/swift/container-reconciler.conf`` .. literalinclude:: /../saio/swift/container-reconciler.conf #. ``/etc/swift/container-sync-realms.conf`` .. literalinclude:: /../saio/swift/container-sync-realms.conf #. ``/etc/swift/account-server/1.conf`` .. literalinclude:: /../saio/swift/account-server/1.conf #. 
``/etc/swift/container-server/1.conf`` .. literalinclude:: /../saio/swift/container-server/1.conf #. ``/etc/swift/object-server/1.conf`` .. literalinclude:: /../saio/swift/object-server/1.conf #. ``/etc/swift/account-server/2.conf`` .. literalinclude:: /../saio/swift/account-server/2.conf #. ``/etc/swift/container-server/2.conf`` .. literalinclude:: /../saio/swift/container-server/2.conf #. ``/etc/swift/object-server/2.conf`` .. literalinclude:: /../saio/swift/object-server/2.conf #. ``/etc/swift/account-server/3.conf`` .. literalinclude:: /../saio/swift/account-server/3.conf #. ``/etc/swift/container-server/3.conf`` .. literalinclude:: /../saio/swift/container-server/3.conf #. ``/etc/swift/object-server/3.conf`` .. literalinclude:: /../saio/swift/object-server/3.conf #. ``/etc/swift/account-server/4.conf`` .. literalinclude:: /../saio/swift/account-server/4.conf #. ``/etc/swift/container-server/4.conf`` .. literalinclude:: /../saio/swift/container-server/4.conf #. ``/etc/swift/object-server/4.conf`` .. literalinclude:: /../saio/swift/object-server/4.conf .. _setup_scripts: ------------------------------------ Setting up scripts for running Swift ------------------------------------ #. Copy the SAIO scripts for resetting the environment:: mkdir -p $HOME/bin cd $HOME/swift/doc; cp saio/bin/* $HOME/bin; cd - chmod +x $HOME/bin/* #. Edit the ``$HOME/bin/resetswift`` script The template ``resetswift`` script looks like the following: .. literalinclude:: /../saio/bin/resetswift If you are using a loopback device add an environment var to subsitute ``/dev/sdb1`` with ``/srv/swift-disk``:: echo "export SAIO_BLOCK_DEVICE=/srv/swift-disk" >> $HOME/.bashrc If you did not set up rsyslog for individual logging, remove the ``find /var/log/swift...`` line:: sed -i "/find \/var\/log\/swift/d" $HOME/bin/resetswift #. Install the sample configuration file for running tests:: cp $HOME/swift/test/sample.conf /etc/swift/test.conf The template ``test.conf`` looks like the following: .. literalinclude:: /../../test/sample.conf #. Add an environment variable for running tests below:: echo "export SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf" >> $HOME/.bashrc #. Be sure that your ``PATH`` includes the ``bin`` directory:: echo "export PATH=${PATH}:$HOME/bin" >> $HOME/.bashrc #. Source the above environment variables into your current environment:: . $HOME/.bashrc #. Construct the initial rings using the provided script:: remakerings The ``remakerings`` script looks like the following: .. literalinclude:: /../saio/bin/remakerings You can expect the output from this command to produce the following. Note that 3 object rings are created in order to test storage policies and EC in the SAIO environment. The EC ring is the only one with all 8 devices. There are also two replication rings, one for 3x replication and another for 2x replication, but those rings only use 4 devices:: Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0 Device d1r1z2-127.0.0.1:6020R127.0.0.1:6020/sdb2_"" with 1.0 weight got id 1 Device d2r1z3-127.0.0.1:6030R127.0.0.1:6030/sdb3_"" with 1.0 weight got id 2 Device d3r1z4-127.0.0.1:6040R127.0.0.1:6040/sdb4_"" with 1.0 weight got id 3 Reassigned 1024 (100.00%) partitions. Balance is now 0.00. 
Dispersion is now 0.00 Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0 Device d1r1z2-127.0.0.1:6020R127.0.0.1:6020/sdb2_"" with 1.0 weight got id 1 Device d2r1z3-127.0.0.1:6030R127.0.0.1:6030/sdb3_"" with 1.0 weight got id 2 Device d3r1z4-127.0.0.1:6040R127.0.0.1:6040/sdb4_"" with 1.0 weight got id 3 Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00 Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0 Device d1r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb5_"" with 1.0 weight got id 1 Device d2r1z2-127.0.0.1:6020R127.0.0.1:6020/sdb2_"" with 1.0 weight got id 2 Device d3r1z2-127.0.0.1:6020R127.0.0.1:6020/sdb6_"" with 1.0 weight got id 3 Device d4r1z3-127.0.0.1:6030R127.0.0.1:6030/sdb3_"" with 1.0 weight got id 4 Device d5r1z3-127.0.0.1:6030R127.0.0.1:6030/sdb7_"" with 1.0 weight got id 5 Device d6r1z4-127.0.0.1:6040R127.0.0.1:6040/sdb4_"" with 1.0 weight got id 6 Device d7r1z4-127.0.0.1:6040R127.0.0.1:6040/sdb8_"" with 1.0 weight got id 7 Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00 Device d0r1z1-127.0.0.1:6011R127.0.0.1:6011/sdb1_"" with 1.0 weight got id 0 Device d1r1z2-127.0.0.1:6021R127.0.0.1:6021/sdb2_"" with 1.0 weight got id 1 Device d2r1z3-127.0.0.1:6031R127.0.0.1:6031/sdb3_"" with 1.0 weight got id 2 Device d3r1z4-127.0.0.1:6041R127.0.0.1:6041/sdb4_"" with 1.0 weight got id 3 Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00 Device d0r1z1-127.0.0.1:6012R127.0.0.1:6012/sdb1_"" with 1.0 weight got id 0 Device d1r1z2-127.0.0.1:6022R127.0.0.1:6022/sdb2_"" with 1.0 weight got id 1 Device d2r1z3-127.0.0.1:6032R127.0.0.1:6032/sdb3_"" with 1.0 weight got id 2 Device d3r1z4-127.0.0.1:6042R127.0.0.1:6042/sdb4_"" with 1.0 weight got id 3 Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00 #. Read more about Storage Policies and your SAIO :doc:`policies_saio` #. Verify the unit tests run:: $HOME/swift/.unittests Note that the unit tests do not require any swift daemons running. #. Start the "main" Swift daemon processes (proxy, account, container, and object):: startmain (The "``Unable to increase file descriptor limit. Running as non-root?``" warnings are expected and ok.) The ``startmain`` script looks like the following: .. literalinclude:: /../saio/bin/startmain #. Get an ``X-Storage-Url`` and ``X-Auth-Token``:: curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0 #. Check that you can ``GET`` account:: curl -v -H 'X-Auth-Token: ' #. Check that ``swift`` command provided by the python-swiftclient package works:: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat #. Verify the functional tests run:: $HOME/swift/.functests (Note: functional tests will first delete everything in the configured accounts.) #. Verify the probe tests run:: $HOME/swift/.probetests (Note: probe tests will reset your environment as they call ``resetswift`` for each test.) ---------------- Debugging Issues ---------------- If all doesn't go as planned, and tests fail, or you can't auth, or something doesn't work, here are some good starting places to look for issues: #. Everything is logged using system facilities -- usually in ``/var/log/syslog``, but possibly in ``/var/log/messages`` on e.g. Fedora -- so that is a good first place to look for errors (most likely python tracebacks). #. Make sure all of the server processes are running. 
For the base functionality, the Proxy, Account, Container, and Object servers should be running. #. If one of the servers are not running, and no errors are logged to syslog, it may be useful to try to start the server manually, for example: ``swift-object-server /etc/swift/object-server/1.conf`` will start the object server. If there are problems not showing up in syslog, then you will likely see the traceback on startup. #. If you need to, you can turn off syslog for unit tests. This can be useful for environments where ``/dev/log`` is unavailable, or which cannot rate limit (unit tests generate a lot of logs very quickly). Open the file ``SWIFT_TEST_CONFIG_FILE`` points to, and change the value of ``fake_syslog`` to ``True``. #. If you encounter a ``401 Unauthorized`` when following Step 12 where you check that you can ``GET`` account, use ``sudo service memcached status`` and check if memcache is running. If memcache is not running, start it using ``sudo service memcached start``. Once memcache is running, rerun ``GET`` account. swift-2.7.1/doc/source/proxy.rst0000664000567000056710000000131113024044352017767 0ustar jenkinsjenkins00000000000000.. _proxy: ***** Proxy ***** .. _proxy-controllers: Proxy Controllers ================= Base ~~~~ .. automodule:: swift.proxy.controllers.base :members: :undoc-members: :show-inheritance: Account ~~~~~~~ .. automodule:: swift.proxy.controllers.account :members: :undoc-members: :show-inheritance: Container ~~~~~~~~~ .. automodule:: swift.proxy.controllers.container :members: :undoc-members: :show-inheritance: Object ~~~~~~ .. automodule:: swift.proxy.controllers.obj :members: :undoc-members: :show-inheritance: .. _proxy-server: Proxy Server ============ .. automodule:: swift.proxy.server :members: :undoc-members: :show-inheritance: swift-2.7.1/doc/source/overview_replication.rst0000664000567000056710000002027513024044354023061 0ustar jenkinsjenkins00000000000000=========== Replication =========== Because each replica in swift functions independently, and clients generally require only a simple majority of nodes responding to consider an operation successful, transient failures like network partitions can quickly cause replicas to diverge. These differences are eventually reconciled by asynchronous, peer-to-peer replicator processes. The replicator processes traverse their local filesystems, concurrently performing operations in a manner that balances load across physical disks. Replication uses a push model, with records and files generally only being copied from local to remote replicas. This is important because data on the node may not belong there (as in the case of handoffs and ring changes), and a replicator can't know what data exists elsewhere in the cluster that it should pull in. It's the duty of any node that contains data to ensure that data gets to where it belongs. Replica placement is handled by the ring. Every deleted record or file in the system is marked by a tombstone, so that deletions can be replicated alongside creations. The replication process cleans up tombstones after a time period known as the consistency window. The consistency window encompasses replication duration and how long transient failure can remove a node from the cluster. Tombstone cleanup must be tied to replication to reach replica convergence. If a replicator detects that a remote drive has failed, the replicator uses the get_more_nodes interface for the ring to choose an alternate node with which to synchronize. 
The replicator can maintain desired levels of replication in the face of disk failures, though some replicas may not be in an immediately usable location. Note that the replicator doesn't maintain desired levels of replication when other failures, such as entire node failures, occur because most failure are transient. Replication is an area of active development, and likely rife with potential improvements to speed and correctness. There are two major classes of replicator - the db replicator, which replicates accounts and containers, and the object replicator, which replicates object data. -------------- DB Replication -------------- The first step performed by db replication is a low-cost hash comparison to determine whether two replicas already match. Under normal operation, this check is able to verify that most databases in the system are already synchronized very quickly. If the hashes differ, the replicator brings the databases in sync by sharing records added since the last sync point. This sync point is a high water mark noting the last record at which two databases were known to be in sync, and is stored in each database as a tuple of the remote database id and record id. Database ids are unique amongst all replicas of the database, and record ids are monotonically increasing integers. After all new records have been pushed to the remote database, the entire sync table of the local database is pushed, so the remote database can guarantee that it is in sync with everything with which the local database has previously synchronized. If a replica is found to be missing entirely, the whole local database file is transmitted to the peer using rsync(1) and vested with a new unique id. In practice, DB replication can process hundreds of databases per concurrency setting per second (up to the number of available CPUs or disks) and is bound by the number of DB transactions that must be performed. ------------------ Object Replication ------------------ The initial implementation of object replication simply performed an rsync to push data from a local partition to all remote servers it was expected to exist on. While this performed adequately at small scale, replication times skyrocketed once directory structures could no longer be held in RAM. We now use a modification of this scheme in which a hash of the contents for each suffix directory is saved to a per-partition hashes file. The hash for a suffix directory is invalidated when the contents of that suffix directory are modified. The object replication process reads in these hash files, calculating any invalidated hashes. It then transmits the hashes to each remote server that should hold the partition, and only suffix directories with differing hashes on the remote server are rsynced. After pushing files to the remote server, the replication process notifies it to recalculate hashes for the rsynced suffix directories. Performance of object replication is generally bound by the number of uncached directories it has to traverse, usually as a result of invalidated suffix directory hashes. Using write volume and partition counts from our running systems, it was designed so that around 2% of the hash space on a normal node will be invalidated per day, which has experimentally given us acceptable replication speeds. .. _ssync: Work continues with a new ssync method where rsync is not used at all and instead all-Swift code is used to transfer the objects. At first, this ssync will just strive to emulate the rsync behavior. 
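Operators who want to experiment with ssync can already opt in per object server; a minimal sketch of the relevant snippet in ``object-server.conf``, using only the ``sync_method`` option documented in the [object-replicator] section of the deployment guide above (the value shown is illustrative)::

    [object-replicator]
    sync_method = ssync

Reverting to the default rsync behavior is simply a matter of removing the option or setting it back to ``rsync``.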
Once deemed stable it will open the way for future improvements in replication since we'll be able to easily add code in the replication path instead of trying to alter the rsync code base and distributing such modifications. One of the first improvements planned is an "index.db" that will replace the hashes.pkl. This will allow quicker updates to that data as well as more streamlined queries. Quite likely we'll implement a better scheme than the current one hashes.pkl uses (hash-trees, that sort of thing). Another improvement planned all along the way is separating the local disk structure from the protocol path structure. This separation will allow ring resizing at some point, or at least ring-doubling. Note that for objects being stored with an Erasure Code policy, the replicator daemon is not involved. Instead, the reconstructor is used by Erasure Code policies and is analogous to the replicator for Replication type policies. See :doc:`overview_erasure_code` for complete information on both Erasure Code support as well as the reconstructor. ---------- Hashes.pkl ---------- The hashes.pkl file is a key element for both replication and reconstruction (for Erasure Coding). Both daemons use this file to determine if any kind of action is required between nodes that are participating in the durability scheme. The file itself is a pickled dictionary with slightly different formats depending on whether the policy is Replication or Erasure Code. In either case, however, the same basic information is provided between the nodes. The dictionary contains a dictionary where the key is a suffix directory name and the value is the MD5 hash of the directory listing for that suffix. In this manner, the daemon can quickly identify differences between local and remote suffix directories on a per partition basis as the scope of any one hashes.pkl file is a partition directory. For Erasure Code policies, there is a little more information required. An object's hash directory may contain multiple fragments of a single object in the event that the node is acting as a handoff or perhaps if a rebalance is underway. Each fragment of an object is stored with a fragment index, so the hashes.pkl for an Erasure Code partition will still be a dictionary keyed on the suffix directory name, however, the value is another dictionary keyed on the fragment index with subsequent MD5 hashes for each one as values. Some files within an object hash directory don't require a fragment index so None is used to represent those. Below are examples of what these dictionaries might look like. Replication hashes.pkl:: {'a43': '72018c5fbfae934e1f56069ad4425627', 'b23': '12348c5fbfae934e1f56069ad4421234'} Erasure Code hashes.pkl:: {'a43': {None: '72018c5fbfae934e1f56069ad4425627', 2: 'b6dd6db937cb8748f50a5b6e4bc3b808'}, 'b23': {None: '12348c5fbfae934e1f56069ad4421234', 1: '45676db937cb8748f50a5b6e4bc34567'}} ----------------------------- Dedicated replication network ----------------------------- Swift has support for using dedicated network for replication traffic. For more information see :ref:`Overview of dedicated replication network `. swift-2.7.1/doc/source/ring.rst0000664000567000056710000000061213024044352017550 0ustar jenkinsjenkins00000000000000.. _consistent_hashing_ring: ******************************** Partitioned Consistent Hash Ring ******************************** .. _ring: Ring ==== .. automodule:: swift.common.ring.ring :members: :undoc-members: :show-inheritance: .. _ring-builder: Ring Builder ============ .. 
automodule:: swift.common.ring.builder :members: :undoc-members: :show-inheritance: swift-2.7.1/doc/source/overview_ring.rst0000664000567000056710000005527313024044354021505 0ustar jenkinsjenkins00000000000000========= The Rings ========= The rings determine where data should reside in the cluster. There is a separate ring for account databases, container databases, and individual object storage policies, but each ring works in the same way. These rings are externally managed, in that the server processes themselves do not modify the rings; they are instead given new rings modified by other tools. The ring uses a configurable number of bits from a path's MD5 hash as a partition index that designates a device. The number of bits kept from the hash is known as the partition power, and 2 to the partition power indicates the partition count. Partitioning the full MD5 hash ring allows other parts of the cluster to work in batches of items at once, which ends up being either more efficient or at least less complex than working with each item separately or the entire cluster all at once. Another configurable value is the replica count, which indicates how many of the partition->device assignments comprise a single ring. For a given partition number, each replica will be assigned to a different device in the ring. Devices are added to the ring to describe the capacity available for part-replica assignment. Devices are placed into failure domains consisting of region, zone, and server. Regions can be used to describe geographically distributed systems characterized by lower bandwidth or higher latency between machines in different regions. Many rings will consist of only a single region. Zones can be used to group devices based on physical locations, power separations, network separations, or any other attribute that would lessen multiple replicas being unavailable at the same time. Devices are given a weight which describes the relative weight of the device in comparison to other devices. When building a ring, all of each part's replicas will be assigned to devices according to their weight. Additionally, each replica of a part will attempt to be assigned to a device whose failure domain does not already have a replica for the part. Only a single replica of a part may be assigned to each device - you must have as many devices as replicas. ------------ Ring Builder ------------ The rings are built and managed manually by a utility called the ring-builder. The ring-builder assigns partitions to devices and writes an optimized Python structure to a gzipped, serialized file on disk for shipping out to the servers. The server processes just check the modification time of the file occasionally and reload their in-memory copies of the ring structure as needed. Because of how the ring-builder manages changes to the ring, using a slightly older ring usually just means one of the three replicas for a subset of the partitions will be incorrect, which can be easily worked around. The ring-builder also keeps its own builder file with the ring information and additional data required to build future rings. It is very important to keep multiple backup copies of these builder files. One option is to copy the builder files out to every server while copying the ring files themselves. Another is to upload the builder files into the cluster itself.
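To make the ring-builder workflow concrete, a minimal session looks something like the following; the part power, replica count, ``min_part_hours`` and device parameters shown are illustrative values, not recommendations::

    swift-ring-builder object.builder create 10 3 1
    swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 100
    swift-ring-builder object.builder add r1z2-10.0.0.2:6000/sdb1 100
    swift-ring-builder object.builder add r1z3-10.0.0.3:6000/sdb1 100
    swift-ring-builder object.builder rebalance

The ``rebalance`` step writes out ``object.ring.gz`` for distribution to the servers; ``object.builder`` is the builder file that, as noted above, should be carefully backed up.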
Complete loss of a builder file will mean creating a new ring from scratch, nearly all partitions will end up assigned to different devices, and therefore nearly all data stored will have to be replicated to new locations. So, recovery from a builder file loss is possible, but data will definitely be unreachable for an extended time. ------------------- Ring Data Structure ------------------- The ring data structure consists of three top level fields: a list of devices in the cluster, a list of lists of device ids indicating partition to device assignments, and an integer indicating the number of bits to shift an MD5 hash to calculate the partition for the hash. *************** List of Devices *************** The list of devices is known internally to the Ring class as devs. Each item in the list of devices is a dictionary with the following keys: ====== ======= ============================================================== id integer The index into the list devices. zone integer The zone the devices resides in. weight float The relative weight of the device in comparison to other devices. This usually corresponds directly to the amount of disk space the device has compared to other devices. For instance a device with 1 terabyte of space might have a weight of 100.0 and another device with 2 terabytes of space might have a weight of 200.0. This weight can also be used to bring back into balance a device that has ended up with more or less data than desired over time. A good average weight of 100.0 allows flexibility in lowering the weight later if necessary. ip string The IP address or hostname of the server containing the device. port int The TCP port the listening server process uses that serves requests for the device. device string The on disk name of the device on the server. For example: sdb1 meta string A general-use field for storing additional information for the device. This information isn't used directly by the server processes, but can be useful in debugging. For example, the date and time of installation and hardware manufacturer could be stored here. ====== ======= ============================================================== Note: The list of devices may contain holes, or indexes set to None, for devices that have been removed from the cluster. However, device ids are reused. Device ids are reused to avoid potentially running out of device id slots when there are available slots (from prior removal of devices). A consequence of this device id reuse is that the device id (integer value) does not necessarily correspond with the chronology of when the device was added to the ring. Also, some devices may be temporarily disabled by setting their weight to 0.0. To obtain a list of active devices (for uptime polling, for example) the Python code would look like: ``devices = list(self._iter_devs())`` ************************* Partition Assignment List ************************* This is a list of array('H') of devices ids. The outermost list contains an array('H') for each replica. Each array('H') has a length equal to the partition count for the ring. Each integer in the array('H') is an index into the above list of devices. The partition list is known internally to the Ring class as _replica2part2dev_id. So, to create a list of device dictionaries assigned to a partition, the Python code would look like: ``devices = [self.devs[part2dev_id[partition]] for part2dev_id in self._replica2part2dev_id]`` array('H') is used for memory conservation as there may be millions of partitions. 
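To make the shape of these structures concrete, here is a toy example (far smaller than any real ring, with made-up assignments) showing how the devices for a partition are looked up from the assignment list::

    from array import array

    # Toy ring: 3 devices, 8 partitions (partition power 3), 2 replicas.
    devs = [{'id': 0, 'zone': 1, 'ip': '10.0.0.1', 'port': 6000, 'device': 'sdb1'},
            {'id': 1, 'zone': 2, 'ip': '10.0.0.2', 'port': 6000, 'device': 'sdb1'},
            {'id': 2, 'zone': 3, 'ip': '10.0.0.3', 'port': 6000, 'device': 'sdb1'}]

    # One array('H') per replica, each as long as the partition count.
    _replica2part2dev_id = [array('H', [0, 1, 2, 0, 1, 2, 0, 1]),
                            array('H', [1, 2, 0, 1, 2, 0, 1, 2])]

    partition = 3
    devices = [devs[part2dev_id[partition]]
               for part2dev_id in _replica2part2dev_id]
    # devices now holds the dictionaries for device ids 0 and 1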
********************* Partition Shift Value ********************* The partition shift value is known internally to the Ring class as _part_shift. This value used to shift an MD5 hash to calculate the partition on which the data for that hash should reside. Only the top four bytes of the hash is used in this process. For example, to compute the partition for the path /account/container/object the Python code might look like: ``partition = unpack_from('>I', md5('/account/container/object').digest())[0] >> self._part_shift`` For a ring generated with part_power P, the partition shift value is 32 - P. ******************* Fractional Replicas ******************* A ring is not restricted to having an integer number of replicas. In order to support the gradual changing of replica counts, the ring is able to have a real number of replicas. When the number of replicas is not an integer, then the last element of _replica2part2dev_id will have a length that is less than the partition count for the ring. This means that some partitions will have more replicas than others. For example, if a ring has 3.25 replicas, then 25% of its partitions will have four replicas, while the remaining 75% will have just three. ********** Dispersion ********** With each rebalance, the ring builder calculates a dispersion metric. This is the percentage of partitions in the ring that have too many replicas within a particular failure domain. For example, if you have three servers in a cluster but two replicas for a partition get placed onto the same server, that partition will count towards the dispersion metric. A lower dispersion value is better, and the value can be used to find the proper value for "overload". ******** Overload ******** The ring builder tries to keep replicas as far apart as possible while still respecting device weights. When it can't do both, the overload factor determines what happens. Each device will take some extra fraction of its desired partitions to allow for replica dispersion; once that extra fraction is exhausted, replicas will be placed closer together than optimal. Essentially, the overload factor lets the operator trade off replica dispersion (durability) against data dispersion (uniform disk usage). The default overload factor is 0, so device weights will be strictly followed. With an overload factor of 0.1, each device will accept 10% more partitions than it otherwise would, but only if needed to maintain partition dispersion. Example: Consider a 3-node cluster of machines with equal-size disks; let node A have 12 disks, node B have 12 disks, and node C have only 11 disks. Let the ring have an overload factor of 0.1 (10%). Without the overload, some partitions would end up with replicas only on nodes A and B. However, with the overload, every device is willing to accept up to 10% more partitions for the sake of dispersion. The missing disk in C means there is one disk's worth of partitions that would like to spread across the remaining 11 disks, which gives each disk in C an extra 9.09% load. Since this is less than the 10% overload, there is one replica of each partition on each node. However, this does mean that the disks in node C will have more data on them than the disks in nodes A and B. If 80% full is the warning threshold for the cluster, node C's disks will reach 80% full while A and B's disks are only 72.7% full. 
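Before moving on to terminology, the partition calculation from the Partition Shift Value section above can be written out as a short, self-contained sketch. Note that real Swift also mixes a cluster-wide hash path prefix and suffix into the hash (see ``swift.common.utils.hash_path``); that detail is omitted here for brevity, and the part power is an assumed example value::

    from hashlib import md5
    from struct import unpack_from

    part_power = 10                # assumed ring with 2 ** 10 = 1024 partitions
    part_shift = 32 - part_power   # only the top four bytes of the hash are used

    path = '/account/container/object'
    partition = unpack_from('>I', md5(path.encode('utf-8')).digest())[0] >> part_shift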
------------------------------- Partition & Replica Terminology ------------------------------- All descriptions of consistent hashing describe the process of breaking the keyspace up into multiple ranges (vnodes, buckets, etc.) - many more than the number of "nodes" to which keys in the keyspace must be assigned. Swift calls these ranges `partitions` - they are partitions of the total keyspace. Each partition will have multiple replicas. Every replica of each partition must be assigned to a device in the ring. When describing a specific replica of a partition (like when it's assigned a device) it is described as a `part-replica` in that it is a specific `replica` of the specific `partition`. A single device may be assigned different replicas from many parts, but it may not be assigned multiple replicas of a single part. The total number of partitions in a ring is calculated as ``2 ** <part-power>``. The total number of part-replicas in a ring is calculated as ``<replica-count> * 2 ** <part-power>``. When considering a device's `weight` it is useful to describe the number of part-replicas it would like to be assigned. A single device, regardless of weight, will never hold more than ``2 ** <part-power>`` part-replicas because it cannot have more than one replica of any part assigned. The number of part-replicas a device can take by weight is calculated as its `parts_wanted`. The true number of part-replicas assigned to a device can be compared to its parts wanted similarly to a calculation of percentage error - this deviation in the observed result from the idealized target is called a device's `balance`. When considering a device's `failure domain` it is useful to describe the number of part-replicas it would like to be assigned. The number of part-replicas wanted in a failure domain of a tier is the sum of the part-replicas wanted in the failure domains of its sub-tier. However, collectively, once the total number of part-replicas in a failure domain equals or exceeds ``2 ** <part-power>``, it is no longer sufficient to consider only the total number of part-replicas; we must instead consider the fraction of each replica's partitions. Consider, for example, a ring with ``3`` replicas and ``3`` servers: while it's necessary for dispersion that each server hold only ``1/3`` of the total part-replicas, it is additionally constrained to require ``1.0`` replica of *each* partition. It would not be sufficient to satisfy dispersion if two devices on one of the servers each held a replica of a single partition, while another server held none. By considering a decimal fraction of one replica's worth of parts in a failure domain we can derive the total part-replicas wanted in a failure domain (``1.0 * 2 ** <part-power>``). Additionally we infer more about `which` part-replicas must go in the failure domain. Consider a ring with three replicas, and two zones, each with two servers (four servers total). The three replicas' worth of partitions will be assigned into two failure domains at the zone tier. Each zone must hold more than one replica of some parts. We represent this improper fraction of a replica's worth of partitions in decimal form as ``1.5`` (``3.0 / 2``). This tells us not only the *number* of total parts (``1.5 * 2 ** <part-power>``) but also that *each* partition must have `at least` one replica in this failure domain (in fact ``0.5`` of the partitions will have ``2`` replicas).
Within each zone the two servers will hold ``0.75`` of a replica's worth of partitions - this is equal both to "the fraction of a replica's worth of partitions assigned to each zone (``1.5``) divided evenly among the number of failure domains in its sub-tier (``2`` servers in each zone, i.e. ``1.5 / 2``)" but *also* "the total number of replicas (``3.0``) divided evenly among the total number of failure domains in the server tier (``2`` servers x ``2`` zones = ``4``, i.e. ``3.0 / 4``)". It is useful to consider that each server in this ring will hold only ``0.75`` of a replica's worth of partitions, which tells us that any server should have `at most` one replica of a given part assigned. In the interests of brevity, some variable names will often refer to the concept representing the fraction of a replica's worth of partitions in decimal form as *replicanths* - this is meant to invoke connotations similar to ordinal numbers as applied to fractions, but generalized to a replica instead of a four*th* or a fif*th*. The 'n' was probably thrown in because of Blade Runner. ----------------- Building the Ring ----------------- First the ring builder calculates the replicanths wanted at each tier in the ring's topology based on weight. Then the ring builder calculates the replicanths wanted at each tier in the ring's topology based on dispersion. Then the ring builder calculates the maximum deviation on a single device between its weighted replicanths and wanted replicanths. Next we interpolate between the two replicanth values (weighted & wanted) at each tier using the specified overload (up to the maximum required overload). It's a linear interpolation, similar to solving for a point on a line between two points - we calculate the slope across the max required overload and then calculate the intersection of the line with the desired overload. This becomes the target. From the target we calculate the minimum and maximum number of replicas any part may have in a tier. This becomes the replica_plan. Finally, we calculate the number of partitions that should ideally be assigned to each device based on the replica_plan. On initial balance, the first time partitions are placed to generate a ring, we must assign each replica of each partition to the device that desires the most partitions, excluding any devices that already have their maximum number of replicas of that part assigned to some parent tier of that device's failure domain. When building a new ring based on an old ring, the desired number of partitions each device wants is recalculated from the current replica_plan. Next the partitions to be reassigned are gathered up. Any removed devices have all their assigned partitions unassigned and added to the gathered list. Any partition replicas that (due to the addition of new devices) can be spread out for better durability are unassigned and added to the gathered list. Any devices that have more partitions than they now desire have random partitions unassigned from them and added to the gathered list. Lastly, the gathered partitions are then reassigned to devices using a similar method as in the initial assignment described above. Whenever a partition has a replica reassigned, the time of the reassignment is recorded. This is taken into account when gathering partitions to reassign so that no partition is moved twice in a configurable amount of time. This configurable amount of time is known internally to the RingBuilder class as min_part_hours.
This restriction is ignored for replicas of partitions on devices that have been removed, as removing a device only happens on device failure and there's no choice but to make a reassignment. The above processes don't always perfectly rebalance a ring due to the random nature of gathering partitions for reassignment. To help reach a more balanced ring, the rebalance process is repeated a fixed number of times until the replica_plan is fulfilled or unable to be fulfilled (indicating we probably can't get perfect balance due to too many partitions recently moved). --------------------- Ring Builder Analyzer --------------------- .. automodule:: swift.cli.ring_builder_analyzer ------- History ------- The ring code went through many iterations before arriving at what it is now and while it has largely been stable, the algorithm has seen a few tweaks or perhaps even fundamentally changed as new ideas emerge. This section will try to describe the previous ideas attempted and attempt to explain why they were discarded. A "live ring" option was considered where each server could maintain its own copy of the ring and the servers would use a gossip protocol to communicate the changes they made. This was discarded as too complex and error prone to code correctly in the project time span available. One bug could easily gossip bad data out to the entire cluster and be difficult to recover from. Having an externally managed ring simplifies the process, allows full validation of data before it's shipped out to the servers, and guarantees each server is using a ring from the same timeline. It also means that the servers themselves aren't spending a lot of resources maintaining rings. A couple of "ring server" options were considered. One was where all ring lookups would be done by calling a service on a separate server or set of servers, but this was discarded due to the latency involved. Another was much like the current process but where servers could submit change requests to the ring server to have a new ring built and shipped back out to the servers. This was discarded due to project time constraints and because ring changes are currently infrequent enough that manual control was sufficient. However, lack of quick automatic ring changes did mean that other parts of the system had to be coded to handle devices being unavailable for a period of hours until someone could manually update the ring. The current ring process has each replica of a partition independently assigned to a device. A version of the ring that used a third of the memory was tried, where the first replica of a partition was directly assigned and the other two were determined by "walking" the ring until finding additional devices in other zones. This was discarded as control was lost as to how many replicas for a given partition moved at once. Keeping each replica independent allows for moving only one partition replica within a given time window (except due to device failures). Using the additional memory was deemed a good trade-off for moving data around the cluster much less often. Another ring design was tried where the partition to device assignments weren't stored in a big list in memory but instead each device was assigned a set of hashes, or anchors. The partition would be determined from the data item's hash and the nearest device anchors would determine where the replicas should be stored. 
However, to get reasonable distribution of data each device had to have a lot of anchors and walking through those anchors to find replicas started to add up. In the end, the memory savings wasn't that great and more processing power was used, so the idea was discarded. A completely non-partitioned ring was also tried but discarded as the partitioning helps many other parts of the system, especially replication. Replication can be attempted and retried in a partition batch with the other replicas rather than each data item independently attempted and retried. Hashes of directory structures can be calculated and compared with other replicas to reduce directory walking and network traffic. Partitioning and independently assigning partition replicas also allowed for the best balanced cluster. The best of the other strategies tended to give +-10% variance on device balance with devices of equal weight and +-15% with devices of varying weights. The current strategy allows us to get +-3% and +-8% respectively. Various hashing algorithms were tried. SHA offers better security, but the ring doesn't need to be cryptographically secure and SHA is slower. Murmur was much faster, but MD5 was built-in and hash computation is a small percentage of the overall request handling time. In all, once it was decided the servers wouldn't be maintaining the rings themselves anyway and only doing hash lookups, MD5 was chosen for its general availability, good distribution, and adequate speed. The placement algorithm has seen a number of behavioral changes for unbalanceable rings. The ring builder wants to keep replicas as far apart as possible while still respecting device weights. In most cases, the ring builder can achieve both, but sometimes they conflict. At first, the behavior was to keep the replicas far apart and ignore device weight, but that made it impossible to gradually go from one region to two, or from two to three. Then it was changed to favor device weight over dispersion, but that wasn't so good for rings that were close to balanceable, like 3 machines with 60TB, 60TB, and 57TB of disk space; operators were expecting one replica per machine, but didn't always get it. After that, overload was added to the ring builder so that operators could choose a balance between dispersion and device weights. In time the overload concept was improved and made more accurate. swift-2.7.1/doc/source/middleware.rst0000664000567000056710000001026413024044354020734 0ustar jenkinsjenkins00000000000000.. _common_middleware: ********** Middleware ********** Account Quotas ============== .. automodule:: swift.common.middleware.account_quotas :members: :show-inheritance: .. _bulk: Bulk Operations (Delete and Archive Auto Extraction) ==================================================== .. automodule:: swift.common.middleware.bulk :members: :show-inheritance: .. _catch_errors: CatchErrors ============= .. automodule:: swift.common.middleware.catch_errors :members: :show-inheritance: CNAME Lookup ============ .. automodule:: swift.common.middleware.cname_lookup :members: :show-inheritance: .. _container-quotas: Container Quotas ================ .. automodule:: swift.common.middleware.container_quotas :members: :show-inheritance: .. _container-sync: Container Sync Middleware ========================= .. automodule:: swift.common.middleware.container_sync :members: :show-inheritance: Cross Domain Policies ===================== .. automodule:: swift.common.middleware.crossdomain :members: :show-inheritance: .. 
_discoverability: Discoverability =============== Swift will by default provide clients with an interface providing details about the installation. Unless disabled (i.e ``expose_info=false`` in :ref:`proxy-server-config`), a GET request to ``/info`` will return configuration data in JSON format. An example response:: {"swift": {"version": "1.11.0"}, "staticweb": {}, "tempurl": {}} This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via ``/info``. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so:: swift-temp-url GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g Domain Remap ============ .. automodule:: swift.common.middleware.domain_remap :members: :show-inheritance: Dynamic Large Objects ===================== DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for :ref:`dlo-doc` further details. .. _formpost: FormPost ======== .. automodule:: swift.common.middleware.formpost :members: :show-inheritance: .. _gatekeeper: GateKeeper ============= .. automodule:: swift.common.middleware.gatekeeper :members: :show-inheritance: .. _healthcheck: Healthcheck =========== .. automodule:: swift.common.middleware.healthcheck :members: :show-inheritance: .. _keystoneauth: KeystoneAuth ============ .. automodule:: swift.common.middleware.keystoneauth :members: :show-inheritance: .. _list_endpoints: List Endpoints ============== .. automodule:: swift.common.middleware.list_endpoints :members: :show-inheritance: Memcache ======== .. automodule:: swift.common.middleware.memcache :members: :show-inheritance: Name Check (Forbidden Character Filter) ======================================= .. automodule:: swift.common.middleware.name_check :members: :show-inheritance: .. _versioned_writes: Object Versioning ================= .. automodule:: swift.common.middleware.versioned_writes :members: :show-inheritance: Proxy Logging ============= .. automodule:: swift.common.middleware.proxy_logging :members: :show-inheritance: Ratelimit ========= .. automodule:: swift.common.middleware.ratelimit :members: :show-inheritance: .. _recon: Recon =========== .. automodule:: swift.common.middleware.recon :members: :show-inheritance: Static Large Objects ==================== Please see the SLO docs for :ref:`slo-doc` further details. .. _staticweb: StaticWeb ========= .. automodule:: swift.common.middleware.staticweb :members: :show-inheritance: .. _common_tempauth: TempAuth ======== .. automodule:: swift.common.middleware.tempauth :members: :show-inheritance: .. _tempurl: TempURL ======= .. automodule:: swift.common.middleware.tempurl :members: :show-inheritance: XProfile ============== .. automodule:: swift.common.middleware.xprofile :members: :show-inheritance: swift-2.7.1/doc/source/development_ondisk_backends.rst0000664000567000056710000000267213024044352024344 0ustar jenkinsjenkins00000000000000=============================== Pluggable On-Disk Back-end APIs =============================== The internal REST API used between the proxy server and the account, container and object server is almost identical to public Swift REST API, but with a few internal extensions (for example, update an account with a new container). 
The pluggable back-end APIs for the three REST API servers (account, container, object) abstracts the needs for servicing the various REST APIs from the details of how data is laid out and stored on-disk. The APIs are documented in the reference implementations for all three servers. For historical reasons, the object server backend reference implementation module is named `diskfile`, while the account and container server backend reference implementation modules are named appropriately. This API is still under development and not yet finalized. ----------------------------------------- Back-end API for Account Server REST APIs ----------------------------------------- .. automodule:: swift.account.backend :noindex: :members: ------------------------------------------- Back-end API for Container Server REST APIs ------------------------------------------- .. automodule:: swift.container.backend :noindex: :members: ---------------------------------------- Back-end API for Object Server REST APIs ---------------------------------------- .. automodule:: swift.obj.diskfile :noindex: :members: swift-2.7.1/doc/source/getting_started.rst0000664000567000056710000000307413024044354022007 0ustar jenkinsjenkins00000000000000=============== Getting Started =============== ------------------- System Requirements ------------------- Swift development currently targets Ubuntu Server 14.04, but should work on most Linux platforms. Swift is written in Python and has these dependencies: * Python 2.7 * rsync 3.0 * The Python packages listed in `the requirements file `_ * Testing additionally requires `the test dependencies `_ There is no current support for Python 3. ----------- Development ----------- To get started with development with Swift, or to just play around, the following docs will be useful: * :doc:`Swift All in One ` - Set up a VM with Swift installed * :doc:`Development Guidelines ` * :doc:`First Contribution to Swift ` * :doc:`Associated Projects ` -------------------------- CLI client and SDK library -------------------------- There are many clients in the `ecosystem `_. The official CLI and SDK is python-swiftclient. * `Source code `_ * `Python Package Index `_ ---------- Production ---------- If you want to set up and configure Swift for a production cluster, the following doc should be useful: * :doc:`Multiple Server Swift Installation ` swift-2.7.1/doc/source/crossdomain.rst0000664000567000056710000000371513024044352021141 0ustar jenkinsjenkins00000000000000======================== Cross-domain Policy File ======================== A cross-domain policy file allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. See http://www.adobe.com/devnet/articles/crossdomain_policy_file_spec.html for a description of the purpose and structure of the cross-domain policy file. The cross-domain policy file is installed in the root of a web server (i.e., the path is /crossdomain.xml). The crossdomain middleware responds to a path of /crossdomain.xml with an XML document such as:: You should use a policy appropriate to your site. The examples and the default policy are provided to indicate how to syntactically construct a cross domain policy file -- they are not recommendations. ------------- Configuration ------------- To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis (...) 
indicate other middleware you may have chosen to use:: [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server And add a filter section, such as:: [filter:crossdomain] use = egg:swift#crossdomain cross_domain_policy = For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the cross_domain_policy value. The cross_domain_policy name/value is optional. If omitted, the policy defaults as if you had specified:: cross_domain_policy = swift-2.7.1/doc/source/associated_projects.rst0000664000567000056710000001405113024044354022645 0ustar jenkinsjenkins00000000000000.. _associated_projects: Associated Projects =================== Application Bindings -------------------- * OpenStack supported binding: * `Python-SwiftClient `_ * Unofficial libraries and bindings: * `PHP-opencloud `_ - Official Rackspace PHP bindings that should work for other Swift deployments too. * `PyRAX `_ - Official Rackspace Python bindings for CloudFiles that should work for other Swift deployments too. * `openstack.net `_ - Official Rackspace .NET bindings that should work for other Swift deployments too. * `RSwift `_ - R API bindings. * `Go language bindings `_ * `supload `_ - Bash script to upload file to cloud storage based on OpenStack Swift API. * `libcloud `_ - Apache Libcloud - a unified interface in Python for different clouds with OpenStack Swift support. * `SwiftBox `_ - C# library using RestSharp * `jclouds `_ - Java library offering bindings for all OpenStack projects * `java-openstack-swift `_ - Java bindings for OpenStack Swift * `swift_client `_ - Small but powerful Ruby client to interact with OpenStack Swift * `nightcrawler_swift `_ - This Ruby gem teleports your assets to a OpenStack Swift bucket/container * `swift storage `_ - Simple OpenStack Swift storage client. Authentication -------------- * `Keystone `_ - Official Identity Service for OpenStack. * `Swauth `_ - An alternative Swift authentication service that only requires Swift itself. * `Basicauth `_ - HTTP Basic authentication support (keystone backed). Command Line Access ------------------- * `Swiftly `_ - Alternate command line access to Swift with direct (no proxy) access capabilities as well. Log Processing -------------- * `Slogging `_ - Basic stats and logging tools. Monitoring & Statistics ----------------------- * `Swift Informant `_ - Swift Proxy Middleware to send events to a statsd instance. * `Swift Inspector `_ - Swift middleware to relay information about a request back to the client. Content Distribution Network Integration ---------------------------------------- * `SOS `_ - Swift Origin Server. Alternative API --------------- * `Swift3 `_ - Amazon S3 API emulation. * `CDMI `_ - CDMI support * `SwiftHLM `_ - a middleware for using OpenStack Swift with tape and other high latency media storage backends Benchmarking/Load Generators ---------------------------- * `getput `_ - getput tool suite * `COSbench `_ - COSbench tool suite * `ssbench `_ - ssbench tool suite .. _custom-logger-hooks-label: Custom Logger Hooks ------------------- * `swift-sentry `_ - Sentry exception reporting for Swift Storage Backends (DiskFile API implementations) ----------------------------------------------- * `Swift-on-File `_ - Enables objects created using Swift API to be accessed as files on a POSIX filesystem and vice versa. * `swift-ceph-backend `_ - Ceph RADOS object server implementation for Swift. 
* `kinetic-swift `_ - Seagate Kinetic Drive as backend for Swift * `swift-scality-backend `_ - Scality sproxyd object server implementation for Swift. Developer Tools --------------- * `vagrant-swift-all-in-one `_ - Quickly setup a standard development environment using Vagrant and Chef cookbooks in an Ubuntu virtual machine. * `SAIO Ansible playbook `_ - Quickly setup a standard development environment using Vagrant and Ansible in a Fedora virtual machine (with built-in `Swift-on-File `_ support). Other ----- * `Glance `_ - Provides services for discovering, registering, and retrieving virtual machine images (for OpenStack Compute [Nova], for example). * `Better Staticweb `_ - Makes swift containers accessible by default. * `Swiftsync `_ - A massive syncer between two swift clusters. * `Django Swiftbrowser `_ - Simple Django web app to access OpenStack Swift. * `Swift-account-stats `_ - Swift-account-stats is a tool to report statistics on Swift usage at tenant and global levels. * `PyECLib `_ - High Level Erasure Code library used by Swift * `liberasurecode `_ - Low Level Erasure Code library used by PyECLib * `Swift Browser `_ - JavaScript interface for Swift * `swift-ui `_ - OpenStack Swift web browser * `Swift Durability Calculator `_ - Data Durability Calculation Tool for Swift swift-2.7.1/doc/source/howto_installmultinode.rst0000664000567000056710000000416313024044354023427 0ustar jenkinsjenkins00000000000000===================================================== Instructions for a Multiple Server Swift Installation ===================================================== Please refer to the latest official `OpenStack Installation Guides `_ for the most up-to-date documentation. Object Storage installation guide for OpenStack Liberty ------------------------------------------------------- * `openSUSE 13.2 and SUSE Linux Enterprise Server 12 `_ * `RHEL 7, CentOS 7 `_ * `Ubuntu 14.04 `_ Object Storage installation guide for OpenStack Kilo ---------------------------------------------------- * `openSUSE 13.2 and SUSE Linux Enterprise Server 12 `_ * `RHEL 7, CentOS 7, and Fedora 21 `_ * `Ubuntu 14.04 `_ Object Storage installation guide for OpenStack Juno ---------------------------------------------------- * `openSUSE 13.1 and SUSE Linux Enterprise Server 11 `_ * `RHEL 7, CentOS 7, and Fedora 20 `_ * `Ubuntu 14.04 `_ Object Storage installation guide for OpenStack Icehouse -------------------------------------------------------- * `openSUSE and SUSE Linux Enterprise Server `_ * `Red Hat Enterprise Linux, CentOS, and Fedora `_ * `Ubuntu 12.04/14.04 (LTS) `_ swift-2.7.1/doc/source/development_guidelines.rst0000664000567000056710000001775313024044354023363 0ustar jenkinsjenkins00000000000000====================== Development Guidelines ====================== ----------------- Coding Guidelines ----------------- For the most part we try to follow PEP 8 guidelines which can be viewed here: http://www.python.org/dev/peps/pep-0008/ ------------------ Testing Guidelines ------------------ Swift has a comprehensive suite of tests and pep8 checks that are run on all submitted code, and it is recommended that developers execute the tests themselves to catch regressions early. Developers are also expected to keep the test suite up-to-date with any submitted code changes. 
Swift's tests and pep8 checks can be executed in an isolated environment with Tox: http://tox.testrun.org/ To execute the tests: * Install Tox:: pip install tox * Run Tox from the root of the swift repo:: tox Remarks: If you installed using ``cd ~/swift; sudo python setup.py develop``, you may need to do ``cd ~/swift; sudo chown -R ${USER}:${USER} swift.egg-info`` prior to running tox. * By default ``tox`` will run all of the unit test and pep8 checks listed in the ``tox.ini`` file ``envlist`` option. A subset of the test environments can be specified on the tox command line or by setting the ``TOXENV`` environment variable. For example, to run only the pep8 checks and python2.7 unit tests use:: tox -e pep8,py27 or:: TOXENV=py27,pep8 tox .. note:: As of tox version 2.0.0, most environment variables are not automatically passed to the test environment. Swift's ``tox.ini`` overrides this default behavior so that variable names matching ``SWIFT_*`` and ``*_proxy`` will be passed, but you may need to run ``tox --recreate`` for this to take effect after upgrading from tox<2.0.0. Conversely, if you do not want those environment variables to be passed to the test environment then you will need to unset them before calling ``tox``. Also, if you ever encounter DistributionNotFound, try to use ``tox --recreate`` or remove the ``.tox`` directory to force tox to recreate the dependency list. Swift's functional tests may be executed against a :doc:`development_saio` or other running Swift cluster using the command:: tox -e func The endpoint and authorization credentials to be used by functional tests should be configured in the ``test.conf`` file as described in the section :ref:`setup_scripts`. The environment variable ``SWIFT_TEST_POLICY`` may be set to specify a particular storage policy *name* that will be used for testing. When set, tests that would otherwise not specify a policy or choose a random policy from those available will instead use the policy specified. Tests that use more than one policy will include the specified policy in the set of policies used. The specified policy must be available on the cluster under test. For example, this command would run the functional tests using policy 'silver':: SWIFT_TEST_POLICY=silver tox -e func In-process functional testing ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If the ``test.conf`` file is not found then the functional test framework will instantiate a set of Swift servers in the same process that executes the functional tests. This 'in-process test' mode may also be enabled (or disabled) by setting the environment variable ``SWIFT_TEST_IN_PROCESS`` to a true (or false) value prior to executing `tox -e func`. When using the 'in-process test' mode some server configuration options may be set using environment variables: - the optional in-memory object server may be selected by setting the environment variable ``SWIFT_TEST_IN_MEMORY_OBJ`` to a true value. - the proxy-server ``object_post_as_copy`` option may be set using the environment variable ``SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY``. 
For example, this command would run the in-process mode functional tests with the proxy-server using object_post_as_copy=False (the 'fast-POST' mode):: SWIFT_TEST_IN_PROCESS=1 SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY=False \ tox -e func This particular example may also be run using the ``func-in-process-fast-post`` tox environment:: tox -e func-in-process-fast-post The 'in-process test' mode searches for ``proxy-server.conf`` and ``swift.conf`` config files from which it copies config options and overrides some options to suit in process testing. The search will first look for config files in a ```` that may optionally be specified using the environment variable:: SWIFT_TEST_IN_PROCESS_CONF_DIR= If ``SWIFT_TEST_IN_PROCESS_CONF_DIR`` is not set, or if a config file is not found in ````, the search will then look in the ``etc/`` directory in the source tree. If the config file is still not found, the corresponding sample config file from ``etc/`` is used (e.g. ``proxy-server.conf-sample`` or ``swift.conf-sample``). When using the 'in-process test' mode ``SWIFT_TEST_POLICY`` may be set to specify a particular storage policy *name* that will be used for testing as described above. When set, this policy must exist in the ``swift.conf`` file and its corresponding ring file must exist in ```` (if specified) or ``etc/``. The test setup will set the specified policy to be the default and use its ring file properties for constructing the test object ring. This allows in-process testing to be run against various policy types and ring files. For example, this command would run the in-process mode functional tests using config files found in ``$HOME/my_tests`` and policy 'silver':: SWIFT_TEST_IN_PROCESS=1 SWIFT_TEST_IN_PROCESS_CONF_DIR=$HOME/my_tests \ SWIFT_TEST_POLICY=silver tox -e func ------------ Coding Style ------------ Swift uses flake8 with the OpenStack `hacking`_ module to enforce coding style. Install flake8 and hacking with pip or by the packages of your Operating System. It is advised to integrate flake8+hacking with your editor to get it automated and not get `caught` by Jenkins. For example for Vim the `syntastic`_ plugin can do this for you. .. _`hacking`: https://pypi.python.org/pypi/hacking .. _`syntastic`: https://github.com/scrooloose/syntastic ------------------------ Documentation Guidelines ------------------------ The documentation in docstrings should follow the PEP 257 conventions (as mentioned in the PEP 8 guidelines). More specifically: 1. Triple quotes should be used for all docstrings. 2. If the docstring is simple and fits on one line, then just use one line. 3. For docstrings that take multiple lines, there should be a newline after the opening quotes, and before the closing quotes. 4. Sphinx is used to build documentation, so use the restructured text markup to designate parameters, return values, etc. Documentation on the sphinx specific markup can be found here: http://sphinx.pocoo.org/markup/index.html Installing Sphinx: #. Install sphinx (On Ubuntu: `sudo apt-get install python-sphinx`) #. `python setup.py build_sphinx` -------- Manpages -------- For sanity check of your change in manpage, use this command in the root of your Swift repo:: ./.manpages --------------------- License and Copyright --------------------- You can have the following copyright and license statement at the top of each source file. Copyright assignment is optional. New files should contain the current year. 
Substantial updates can have another year added, and date ranges are not needed.:: # Copyright (c) 2013 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. swift-2.7.1/doc/source/overview_auth.rst0000664000567000056710000003543313024044354021513 0ustar jenkinsjenkins00000000000000=============== The Auth System =============== -------- TempAuth -------- The auth system for Swift is loosely based on the auth system from the existing Rackspace architecture -- actually from a few existing auth systems -- and is therefore a bit disjointed. The distilled points about it are: * The authentication/authorization part can be an external system or a subsystem run within Swift as WSGI middleware * The user of Swift passes in an auth token with each request * Swift validates each token with the external auth system or auth subsystem and caches the result * The token does not change from request to request, but does expire The token can be passed into Swift using the X-Auth-Token or the X-Storage-Token header. Both have the same format: just a simple string representing the token. Some auth systems use UUID tokens, some an MD5 hash of something unique, some use "something else" but the salient point is that the token is a string which can be sent as-is back to the auth system for validation. Swift will make calls to the auth system, giving the auth token to be validated. For a valid token, the auth system responds with an overall expiration in seconds from now. Swift will cache the token up to the expiration time. The included TempAuth also has the concept of admin and non-admin users within an account. Admin users can do anything within the account. Non-admin users can only perform operations per container based on the container's X-Container-Read and X-Container-Write ACLs. Container ACLs use the "V1" ACL syntax, which looks like this: ``name1, name2, .r:referrer1.com, .r:-bad.referrer1.com, .rlistings`` For more information on ACLs, see :mod:`swift.common.middleware.acl`. Additionally, if the auth system sets the request environ's swift_owner key to True, the proxy will return additional header information in some requests, such as the X-Container-Sync-Key for a container GET or HEAD. In addition to container ACLs, TempAuth allows account-level ACLs. Any auth system may use the special header ``X-Account-Access-Control`` to specify account-level ACLs in a format specific to that auth system. (Following the TempAuth format is strongly recommended.) These headers are visible and settable only by account owners (those for whom ``swift_owner`` is true). Behavior of account ACLs is auth-system-dependent. In the case of TempAuth, if an authenticated user has membership in a group which is listed in the ACL, then the user is allowed the access level of that ACL. Account ACLs use the "V2" ACL syntax, which is a JSON dictionary with keys named "admin", "read-write", and "read-only". (Note the case sensitivity.) 
An example value for the ``X-Account-Access-Control`` header looks like this: ``{"admin":["a","b"],"read-only":["c"]}`` Keys may be absent (as shown). The recommended way to generate ACL strings is as follows:: from swift.common.middleware.acl import format_acl acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] } acl_string = format_acl(version=2, acl_dict=acl_data) Using the :func:`format_acl` method will ensure that JSON is encoded as ASCII (using e.g. '\u1234' for Unicode). While it's permissible to manually send ``curl`` commands containing ``X-Account-Access-Control`` headers, you should exercise caution when doing so, due to the potential for human error. Within the JSON dictionary stored in ``X-Account-Access-Control``, the keys have the following meanings: ============ ============================================================== Access Level Description ============ ============================================================== read-only These identities can read *everything* (except privileged headers) in the account. Specifically, a user with read-only account access can get a list of containers in the account, list the contents of any container, retrieve any object, and see the (non-privileged) headers of the account, any container, or any object. read-write These identities can read or write (or create) any container. A user with read-write account access can create new containers, set any unprivileged container headers, overwrite objects, delete containers, etc. A read-write user can NOT set account headers (or perform any PUT/POST/DELETE requests on the account). admin These identities have "swift_owner" privileges. A user with admin account access can do anything the account owner can, including setting account headers and any privileged headers -- and thus granting read-only, read-write, or admin access to other users. ============ ============================================================== For more details, see :mod:`swift.common.middleware.tempauth`. For details on the ACL format, see :mod:`swift.common.middleware.acl`. Users with the special group ``.reseller_admin`` can operate on any account. For an example usage please see :mod:`swift.common.middleware.tempauth`. If a request is coming from a reseller the auth system sets the request environ reseller_request to True. This can be used by other middlewares. TempAuth will now allow OPTIONS requests to go through without a token. The user starts a session by sending a ReST request to the auth system to receive the auth token and a URL to the Swift system. ------------- Keystone Auth ------------- Swift is able to authenticate against OpenStack Keystone_ via the :ref:`keystoneauth` middleware. In order to use the ``keystoneauth`` middleware the ``auth_token`` middleware from KeystoneMiddleware_ will need to be configured. The ``authtoken`` middleware performs the authentication token validation and retrieves actual user authentication information. It can be found in the KeystoneMiddleware_ distribution. The :ref:`keystoneauth` middleware performs authorization and mapping the Keystone roles to Swift's ACLs. .. _KeystoneMiddleware: http://docs.openstack.org/developer/keystonemiddleware/ .. _Keystone: http://docs.openstack.org/developer/keystone/ Configuring Swift to use Keystone ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Configuring Swift to use Keystone_ is relatively straight forward. The first step is to ensure that you have the ``auth_token`` middleware installed. 
It can either be dropped in your python path or installed via the KeystoneMiddleware_ package. You need at first make sure you have a service endpoint of type ``object-store`` in Keystone pointing to your Swift proxy. For example having this in your ``/etc/keystone/default_catalog.templates`` :: catalog.RegionOne.object_store.name = Swift Service catalog.RegionOne.object_store.publicURL = http://swiftproxy:8080/v1/AUTH_$(tenant_id)s catalog.RegionOne.object_store.adminURL = http://swiftproxy:8080/ catalog.RegionOne.object_store.internalURL = http://swiftproxy:8080/v1/AUTH_$(tenant_id)s On your Swift Proxy server you will want to adjust your main pipeline and add auth_token and keystoneauth in your ``/etc/swift/proxy-server.conf`` like this :: [pipeline:main] pipeline = [....] authtoken keystoneauth proxy-logging proxy-server add the configuration for the authtoken middleware:: [filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory auth_uri = http://keystonehost:5000/ auth_url = http://keystonehost:35357/ auth_plugin = password project_domain_id = default user_domain_id = default project_name = service username = swift password = password cache = swift.cache include_service_catalog = False delay_auth_decision = True The actual values for these variables will need to be set depending on your situation, but in short: * ``auth_uri`` should point to a Keystone service from which users may retrieve tokens. This value is used in the `WWW-Authenticate` header that auth_token sends with any denial response. * ``auth_url`` points to the Keystone Admin service. This information is used by the middleware to actually query Keystone about the validity of the authentication tokens. It is not necessary to append any Keystone API version number to this URI. * The auth credentials (``project_domain_id``, ``user_domain_id``, ``username``, ``project_name``, ``password``) will be used to retrieve an admin token. That token will be used to authorize user tokens behind the scenes. * ``cache`` is set to ``swift.cache``. This means that the middleware will get the Swift memcache from the request environment. * ``include_service_catalog`` defaults to ``True`` if not set. This means that when validating a token, the service catalog is retrieved and stored in the ``X-Service-Catalog`` header. Since Swift does not use the ``X-Service-Catalog`` header, there is no point in getting the service catalog. We recommend you set ``include_service_catalog`` to ``False``. .. note:: The authtoken config variable ``delay_auth_decision`` must be set to ``True``. The default is ``False``, but that breaks public access, :ref:`staticweb`, :ref:`formpost`, :ref:`tempurl`, and authenticated capabilities requests (using :ref:`discoverability`). and you can finally add the keystoneauth configuration. Here is a simple configuration:: [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator Use an appropriate list of roles in operator_roles. For example, in some systems, the role ``_member_`` or ``Member`` is used to indicate that the user is allowed to operate on project resources. OpenStack Service Using Composite Tokens ---------------------------------------- Some OpenStack services such as Cinder and Glance may use a "service account". In this mode, you configure a separate account where the service stores project data that it manages. This account is not used directly by the end-user. Instead, all access is done through the service. 
To access the "service" account, the service must present two tokens: one from
the end-user and another from its own service user. Only when both tokens are
present can the account be accessed. This section describes how to set the
configuration options to correctly control access to both the "normal" and
"service" accounts.

In this example, end users use the ``AUTH_`` prefix in account names, whereas
services use the ``SERVICE_`` prefix::

    [filter:keystoneauth]
    use = egg:swift#keystoneauth
    reseller_prefix = AUTH, SERVICE
    operator_roles = admin, swiftoperator
    SERVICE_service_roles = service

The actual values for these variables will need to be set depending on your
situation as follows:

* The first item in the reseller_prefix list must match Keystone's endpoint
  (see ``/etc/keystone/default_catalog.templates`` above). Normally this is
  ``AUTH``.
* The second item in the reseller_prefix list is the prefix used by the
  OpenStack service(s). You must configure this value (``SERVICE`` in the
  example) with whatever the other OpenStack service(s) use.
* Set the operator_roles option to contain a role or roles that end-users have
  on the projects they use.
* Set the SERVICE_service_roles value to a role or roles that only the
  OpenStack service user has. Do not use a role that is assigned to "normal"
  end users. In this example, the role ``service`` is used. The service user
  is granted this role to a *single* project only. You do not need to make the
  service user a member of every project.

This configuration works as follows:

* The end-user presents a user token to an OpenStack service. The service then
  makes a Swift request to the account with the ``SERVICE`` prefix.
* The service forwards the original user token with the request. It also adds
  its own service token.
* Swift validates both tokens. When validated, the user token gives the
  ``admin`` or ``swiftoperator`` role(s). When validated, the service token
  gives the ``service`` role.
* Swift interprets the above configuration as follows:

  * Did the user token provide one of the roles listed in operator_roles?
  * Did the service token have the ``service`` role as described by the
    ``SERVICE_service_roles`` option?

* If both conditions are met, the request is granted. Otherwise, Swift rejects
  the request.

In the above example, all services share the same account. You can separate
each service into its own account. For example, the following provides a
dedicated account for each of the Glance and Cinder services. In addition, you
must assign the ``glance_service`` and ``cinder_service`` roles to the
appropriate service users::

    [filter:keystoneauth]
    use = egg:swift#keystoneauth
    reseller_prefix = AUTH, IMAGE, VOLUME
    operator_roles = admin, swiftoperator
    IMAGE_service_roles = glance_service
    VOLUME_service_roles = cinder_service

Access control using keystoneauth
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, the only users able to perform operations (e.g. create a
container) on an account are those having a Keystone role for the
corresponding Keystone project that matches one of the roles specified in the
``operator_roles`` option.

Users who have one of the ``operator_roles`` will be able to set container
ACLs to grant other users permission to read and/or write objects in specific
containers, using ``X-Container-Read`` and ``X-Container-Write`` headers
respectively. In addition to the ACL formats described in
:mod:`swift.common.middleware.acl`, keystoneauth supports ACLs using the
format::

    other_project_id:other_user_id
where ``other_project_id`` is the UUID of a Keystone project and ``other_user_id`` is the UUID of a Keystone user. This will allow the other user to access a container provided their token is scoped on the other project. Both ``other_project_id`` and ``other_user_id`` may be replaced with the wildcard character ``*`` which will match any project or user respectively. Be sure to use Keystone UUIDs rather than names in container ACLs. .. note:: For backwards compatibility, keystoneauth will by default grant container ACLs expressed as ``other_project_name:other_user_name`` (i.e. using Keystone names rather than UUIDs) in the special case when both the other project and the other user are in Keystone's default domain and the project being accessed is also in the default domain. For further information see :ref:`keystoneauth` Users with the Keystone role defined in ``reseller_admin_role`` (``ResellerAdmin`` by default) can operate on any account. The auth system sets the request environ reseller_request to True if a request is coming from a user with this role. This can be used by other middlewares. -------------- Extending Auth -------------- TempAuth is written as wsgi middleware, so implementing your own auth is as easy as writing new wsgi middleware, and plugging it in to the proxy server. The KeyStone project and the Swauth project are examples of additional auth services. Also, see :doc:`development_auth`. swift-2.7.1/doc/source/overview_large_objects.rst0000664000567000056710000001477313024044354023361 0ustar jenkinsjenkins00000000000000.. _large-objects: ==================== Large Object Support ==================== -------- Overview -------- Swift has a limit on the size of a single uploaded object; by default this is 5GB. However, the download size of a single object is virtually unlimited with the concept of segmentation. Segments of the larger object are uploaded and a special manifest file is created that, when downloaded, sends all the segments concatenated as a single object. This also offers much greater upload speed with the possibility of parallel uploads of the segments. .. _dynamic-large-objects: .. _dlo-doc: --------------------- Dynamic Large Objects --------------------- .. automodule:: swift.common.middleware.dlo :members: :show-inheritance: .. _static-large-objects: .. _slo-doc: -------------------- Static Large Objects -------------------- .. automodule:: swift.common.middleware.slo :members: :show-inheritance: ---------- Direct API ---------- SLO support centers around the user generated manifest file. After the user has uploaded the segments into their account a manifest file needs to be built and uploaded. All object segments, must be at least 1 byte in size. Please see the SLO docs for :ref:`slo-doc` further details. ---------------- Additional Notes ---------------- * With a ``GET`` or ``HEAD`` of a manifest file, the ``X-Object-Manifest: /`` header will be returned with the concatenated object so you can tell where it's getting its segments from. * The response's ``Content-Length`` for a ``GET`` or ``HEAD`` on the manifest file will be the sum of all the segments in the ``/`` listing, dynamically. So, uploading additional segments after the manifest is created will cause the concatenated object to be that much larger; there's no need to recreate the manifest file. * The response's ``Content-Type`` for a ``GET`` or ``HEAD`` on the manifest will be the same as the ``Content-Type`` set during the ``PUT`` request that created the manifest. 
You can easily change the ``Content-Type`` by reissuing the ``PUT``.

* The response's ``ETag`` for a ``GET`` or ``HEAD`` on the manifest file will
  be the MD5 sum of the concatenated string of ETags for each of the segments
  in the manifest (for DLO, from the listing ``<container>/<prefix>``).
  Usually in Swift the ETag is the MD5 sum of the contents of the object, and
  that holds true for each segment independently. But it's not meaningful to
  generate such an ETag for the manifest itself, so this method was chosen to
  at least offer change detection.

.. note::

    If you are using the container sync feature you will need to ensure both
    your manifest file and your segment files are synced if they happen to be
    in different containers.

-------
History
-------

Dynamic large object support has gone through various iterations before
settling on this implementation.

The primary factor driving the limitation of object size in swift is
maintaining balance among the partitions of the ring. To maintain an even
dispersion of disk usage throughout the cluster the obvious storage pattern
was to simply split larger objects into smaller segments, which could then be
glued together during a read.

Before the introduction of large object support some applications were already
splitting their uploads into segments and re-assembling them on the client
side after retrieving the individual pieces. This design allowed the client to
support backup and archiving of large data sets, but was also frequently
employed to improve performance or reduce errors due to network interruption.
The major disadvantage of this method is that knowledge of the original
partitioning scheme is required to properly reassemble the object, which is
not practical for some use cases, such as CDN origination.

In order to eliminate any barrier to entry for clients wanting to store
objects larger than 5GB, initially we also prototyped fully transparent
support for large object uploads. A fully transparent implementation would
support a larger max size by automatically splitting objects into segments
during upload within the proxy without any changes to the client API. All
segments were completely hidden from the client API. This solution introduced
a number of challenging failure conditions into the cluster, wouldn't provide
the client with any option to do parallel uploads, and had no basis for a
resume feature. The transparent implementation was deemed just too complex for
the benefit.

The current "user manifest" design was chosen in order to provide a
transparent download of large objects to the client and still provide the
uploading client a clean API to support segmented uploads.

To meet as many use cases as possible, swift supports two types of large
object manifests. Dynamic and static large object manifests both support the
same idea of allowing the user to upload many segments to be later downloaded
as a single file.

Dynamic large objects rely on a container listing to provide the manifest.
This has the advantage of allowing the user to add/remove segments from the
manifest at any time. It has the disadvantage of relying on eventually
consistent container listings. All three copies of the container dbs must be
updated for a complete list to be guaranteed. Also, all segments must be in a
single container, which can limit concurrent upload speed.

Static large objects rely on a user-provided manifest file. A user can upload
objects into multiple containers and then reference those objects (segments)
in a self-generated manifest file.
Future GETs to that file will download the concatenation of the specified segments. This has the advantage of being able to immediately download the complete object once the manifest has been successfully PUT. Being able to upload segments into separate containers also improves concurrent upload speed. It has the disadvantage that the manifest is finalized once PUT. Any changes to it means it has to be replaced. Between these two methods the user has great flexibility in how (s)he chooses to upload and retrieve large objects to swift. Swift does not, however, stop the user from harming themselves. In both cases the segments are deletable by the user at any time. If a segment was deleted by mistake, a dynamic large object, having no way of knowing it was ever there, would happily ignore the deleted file and the user will get an incomplete file. A static large object would, when failing to retrieve the object specified in the manifest, drop the connection and the user would receive partial results. swift-2.7.1/doc/source/overview_expiring_objects.rst0000664000567000056710000000640713024044354024107 0ustar jenkinsjenkins00000000000000======================= Expiring Object Support ======================= The ``swift-object-expirer`` offers scheduled deletion of objects. The Swift client would use the ``X-Delete-At`` or ``X-Delete-After`` headers during an object ``PUT`` or ``POST`` and the cluster would automatically quit serving that object at the specified time and would shortly thereafter remove the object from the system. The ``X-Delete-At`` header takes a Unix Epoch timestamp, in integer form; for example: ``1317070737`` represents ``Mon Sep 26 20:58:57 2011 UTC``. The ``X-Delete-After`` header takes an integer number of seconds. The proxy server that receives the request will convert this header into an ``X-Delete-At`` header using its current time plus the value given. As expiring objects are added to the system, the object servers will record the expirations in a hidden ``.expiring_objects`` account for the ``swift-object-expirer`` to handle later. Usually, just one instance of the ``swift-object-expirer`` daemon needs to run for a cluster. This isn't exactly automatic failover high availability, but if this daemon doesn't run for a few hours it should not be any real issue. The expired-but-not-yet-deleted objects will still ``404 Not Found`` if someone tries to ``GET`` or ``HEAD`` them and they'll just be deleted a bit later when the daemon is restarted. By default, the ``swift-object-expirer`` daemon will run with a concurrency of 1. Increase this value to get more concurrency. A concurrency of 1 may not be enough to delete expiring objects in a timely fashion for a particular swift cluster. It is possible to run multiple daemons to do different parts of the work if a single process with a concurrency of more than 1 is not enough (see the sample config file for details). To run the ``swift-object-expirer`` as multiple processes, set ``processes`` to the number of processes (either in the config file or on the command line). Then run one process for each part. Use ``process`` to specify the part of the work to be done by a process using the command line or the config. So, for example, if you'd like to run three processes, set ``processes`` to 3 and run three processes with ``process`` set to 0, 1, and 2 for the three processes. If multiple processes are used, it's necessary to run one for each part of the work or that part of the work will not be done. 
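From a client's perspective, scheduling an object for deletion is just a
matter of supplying one of the two headers described above on a ``PUT`` or
``POST``. Below is a minimal, illustrative sketch using the python-requests
library; the storage URL, token, and object name are placeholders, not part of
any real deployment::

    import time

    import requests

    storage_url = 'http://swiftproxy:8080/v1/AUTH_test'  # placeholder endpoint
    token = 'AUTH_tk_example'                             # placeholder token
    obj = storage_url + '/backups/db.dump'

    # Let the proxy compute the expiry: delete 24 hours after the upload.
    requests.put(obj, data=b'contents',
                 headers={'X-Auth-Token': token,
                          'X-Delete-After': '86400'})

    # Or set an absolute Unix timestamp on an existing object with a POST.
    delete_at = int(time.time()) + 86400
    requests.post(obj, headers={'X-Auth-Token': token,
                                'X-Delete-At': str(delete_at)})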
The daemon uses the ``/etc/swift/object-expirer.conf`` by default, and here is a quick sample conf file:: [DEFAULT] # swift_dir = /etc/swift # user = swift # You can specify default log routing here if you want: # log_name = swift # log_facility = LOG_LOCAL0 # log_level = INFO [object-expirer] interval = 300 [pipeline:main] pipeline = catch_errors cache proxy-server [app:proxy-server] use = egg:swift#proxy # See proxy-server.conf-sample for options [filter:cache] use = egg:swift#memcache # See proxy-server.conf-sample for options [filter:catch_errors] use = egg:swift#catch_errors # See proxy-server.conf-sample for options The daemon needs to run on a machine with access to all the backend servers in the cluster, but does not need proxy server or public access. The daemon will use its own internal proxy code instance to access the backend servers. swift-2.7.1/doc/source/admin_guide.rst0000664000567000056710000021453213024044354021070 0ustar jenkinsjenkins00000000000000===================== Administrator's Guide ===================== ------------------------- Defining Storage Policies ------------------------- Defining your Storage Policies is very easy to do with Swift. It is important that the administrator understand the concepts behind Storage Policies before actually creating and using them in order to get the most benefit out of the feature and, more importantly, to avoid having to make unnecessary changes once a set of policies have been deployed to a cluster. It is highly recommended that the reader fully read and comprehend :doc:`overview_policies` before proceeding with administration of policies. Plan carefully and it is suggested that experimentation be done first on a non-production cluster to be certain that the desired configuration meets the needs of the users. See :ref:`upgrade-policy` before planning the upgrade of your existing deployment. Following is a high level view of the very few steps it takes to configure policies once you have decided what you want to do: #. Define your policies in ``/etc/swift/swift.conf`` #. Create the corresponding object rings #. Communicate the names of the Storage Policies to cluster users For a specific example that takes you through these steps, please see :doc:`policies_saio` ------------------ Managing the Rings ------------------ You may build the storage rings on any server with the appropriate version of Swift installed. Once built or changed (rebalanced), you must distribute the rings to all the servers in the cluster. Storage rings contain information about all the Swift storage partitions and how they are distributed between the different nodes and disks. Swift 1.6.0 is the last version to use a Python pickle format. Subsequent versions use a different serialization format. **Rings generated by Swift versions 1.6.0 and earlier may be read by any version, but rings generated after 1.6.0 may only be read by Swift versions greater than 1.6.0.** So when upgrading from version 1.6.0 or earlier to a version greater than 1.6.0, either upgrade Swift on your ring building server **last** after all Swift nodes have been successfully upgraded, or refrain from generating rings until all Swift nodes have been successfully upgraded. If you need to downgrade from a version of swift greater than 1.6.0 to a version less than or equal to 1.6.0, first downgrade your ring-building server, generate new rings, push them out, then continue with the rest of the downgrade. For more information see :doc:`overview_ring`. 
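Most day-to-day ring management is done with the ``swift-ring-builder``
commands shown below, and the Scripting Ring Creation section later in this
guide shows a shell script version. The same operations can also be performed
programmatically; the following is a minimal sketch using
:class:`swift.common.ring.RingBuilder`, the class the CLI itself drives. The
IP addresses are placeholders and the exact set of device keys required may
vary between Swift versions::

    from swift.common.ring import RingBuilder

    # part_power=18, replicas=3, min_part_hours=1, matching the shell script
    # example later in this guide.
    builder = RingBuilder(18, 3, 1)
    for zone, ip in enumerate(['10.0.0.1', '10.0.0.2', '10.0.0.3'], start=1):
        builder.add_dev({'region': 1, 'zone': zone, 'ip': ip, 'port': 6002,
                         'device': 'sdb1', 'weight': 100})
    builder.rebalance()
    builder.save('/etc/swift/account.builder')             # keep for later changes
    builder.get_ring().save('/etc/swift/account.ring.gz')  # push to all nodes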
Removing a device from the ring::

    swift-ring-builder <builder-file> remove <ip_address>/<device_name>

Removing a server from the ring::

    swift-ring-builder <builder-file> remove <ip_address>

Adding devices to the ring:

See :ref:`ring-preparing`

See what devices for a server are in the ring::

    swift-ring-builder <builder-file> search <ip_address>

Once you are done with all changes to the ring, the changes need to be
"committed"::

    swift-ring-builder <builder-file> rebalance

Once the new rings are built, they should be pushed out to all the servers in
the cluster.

Optionally, if invoked as 'swift-ring-builder-safe' the directory containing
the specified builder file will be locked (via a .lock file in the parent
directory). This provides a basic safeguard against multiple instances of the
swift-ring-builder (or other utilities that observe this lock) attempting to
write to or read the builder/ring files while operations are in progress. This
can be useful in environments where ring management has been automated but the
operator still needs to interact with the rings manually.

If the ring builder is not producing the balances that you are expecting, you
can gain visibility into what it's doing with the ``--debug`` flag::

    swift-ring-builder <builder-file> rebalance --debug

This produces a great deal of output that is mostly useful if you are either
(a) attempting to fix the ring builder, or (b) filing a bug against the ring
builder.

-----------------------
Scripting Ring Creation
-----------------------

You can create scripts to create the account and container rings and
rebalance. Here's an example script for the Account ring. Use similar commands
to create a make-container-ring.sh script on the proxy server node.

1. Create a script file called make-account-ring.sh on the proxy server node
   with the following content::

       #!/bin/bash
       cd /etc/swift
       rm -f account.builder account.ring.gz backups/account.builder backups/account.ring.gz
       swift-ring-builder account.builder create 18 3 1
       swift-ring-builder account.builder add r1z1-<account-server-1>:6002/sdb1 1
       swift-ring-builder account.builder add r1z2-<account-server-2>:6002/sdb1 1
       swift-ring-builder account.builder rebalance

   You need to replace the values of <account-server-1>, <account-server-2>,
   etc. with the IP addresses of the account servers used in your setup. You
   can have as many account servers as you need. All account servers are
   assumed to be listening on port 6002, and have a storage device called
   "sdb1" (this is a directory name created under /drives when we set up the
   account server). The "z1", "z2", etc. designate zones, and you can choose
   whether you put devices in the same or different zones. The "r1" designates
   the region, with different regions specified as "r1", "r2", etc.

2. Make the script file executable and run it to create the account ring
   file::

       chmod +x make-account-ring.sh
       sudo ./make-account-ring.sh

3. Copy the resulting ring file /etc/swift/account.ring.gz to all the account
   server nodes in your Swift environment, and put them in the /etc/swift
   directory on these nodes. Make sure that every time you change the account
   ring configuration, you copy the resulting ring file to all the account
   nodes.

-----------------------
Handling System Updates
-----------------------

It is recommended that system updates and reboots are done a zone at a time.
This allows the update to happen, and for the Swift cluster to stay available
and responsive to requests. It is also advisable, when updating a zone, to let
it run for a while before updating the other zones to make sure the update
doesn't have any adverse effects.
---------------------- Handling Drive Failure ---------------------- In the event that a drive has failed, the first step is to make sure the drive is unmounted. This will make it easier for swift to work around the failure until it has been resolved. If the drive is going to be replaced immediately, then it is just best to replace the drive, format it, remount it, and let replication fill it up. After the drive is unmounted, make sure the mount point is owned by root (root:root 755). This ensures that rsync will not try to replicate into the root drive once the failed drive is unmounted. If the drive can't be replaced immediately, then it is best to leave it unmounted, and set the device weight to 0. This will allow all the replicas that were on that drive to be replicated elsewhere until the drive is replaced. Once the drive is replaced, the device weight can be increased again. Setting the device weight to 0 instead of removing the drive from the ring gives Swift the chance to replicate data from the failing disk too (in case it is still possible to read some of the data). Setting the device weight to 0 (or removing a failed drive from the ring) has another benefit: all partitions that were stored on the failed drive are distributed over the remaining disks in the cluster, and each disk only needs to store a few new partitions. This is much faster compared to replicating all partitions to a single, new disk. It decreases the time to recover from a degraded number of replicas significantly, and becomes more and more important with bigger disks. ----------------------- Handling Server Failure ----------------------- If a server is having hardware issues, it is a good idea to make sure the swift services are not running. This will allow Swift to work around the failure while you troubleshoot. If the server just needs a reboot, or a small amount of work that should only last a couple of hours, then it is probably best to let Swift work around the failure and get the machine fixed and back online. When the machine comes back online, replication will make sure that anything that is missing during the downtime will get updated. If the server has more serious issues, then it is probably best to remove all of the server's devices from the ring. Once the server has been repaired and is back online, the server's devices can be added back into the ring. It is important that the devices are reformatted before putting them back into the ring as it is likely to be responsible for a different set of partitions than before. ----------------------- Detecting Failed Drives ----------------------- It has been our experience that when a drive is about to fail, error messages will spew into `/var/log/kern.log`. There is a script called `swift-drive-audit` that can be run via cron to watch for bad drives. If errors are detected, it will unmount the bad drive, so that Swift can work around it. 
The script takes a configuration file with the following settings: [drive-audit] ================== ============== =========================================== Option Default Description ------------------ -------------- ------------------------------------------- log_facility LOG_LOCAL0 Syslog log facility log_level INFO Log level device_dir /srv/node Directory devices are mounted under minutes 60 Number of minutes to look back in `/var/log/kern.log` error_limit 1 Number of errors to find before a device is unmounted log_file_pattern /var/log/kern* Location of the log file with globbing pattern to check against device errors regex_pattern_X (see below) Regular expression patterns to be used to locate device blocks with errors in the log file ================== ============== =========================================== The default regex pattern used to locate device blocks with errors are `\berror\b.*\b(sd[a-z]{1,2}\d?)\b` and `\b(sd[a-z]{1,2}\d?)\b.*\berror\b`. One is able to overwrite the default above by providing new expressions using the format `regex_pattern_X = regex_expression`, where `X` is a number. This script has been tested on Ubuntu 10.04 and Ubuntu 12.04, so if you are using a different distro or OS, some care should be taken before using in production. .. _dispersion_report: ----------------- Dispersion Report ----------------- There is a swift-dispersion-report tool for measuring overall cluster health. This is accomplished by checking if a set of deliberately distributed containers and objects are currently in their proper places within the cluster. For instance, a common deployment has three replicas of each object. The health of that object can be measured by checking if each replica is in its proper place. If only 2 of the 3 is in place the object's heath can be said to be at 66.66%, where 100% would be perfect. A single object's health, especially an older object, usually reflects the health of that entire partition the object is in. If we make enough objects on a distinct percentage of the partitions in the cluster, we can get a pretty valid estimate of the overall cluster health. In practice, about 1% partition coverage seems to balance well between accuracy and the amount of time it takes to gather results. The first thing that needs to be done to provide this health value is create a new account solely for this usage. Next, we need to place the containers and objects throughout the system so that they are on distinct partitions. The swift-dispersion-populate tool does this by making up random container and object names until they fall on distinct partitions. Last, and repeatedly for the life of the cluster, we need to run the swift-dispersion-report tool to check the health of each of these containers and objects. These tools need direct access to the entire cluster and to the ring files (installing them on a proxy server will probably do). Both swift-dispersion-populate and swift-dispersion-report use the same configuration file, /etc/swift/dispersion.conf. Example conf file:: [dispersion] auth_url = http://localhost:8080/auth/v1.0 auth_user = test:tester auth_key = testing endpoint_type = internalURL There are also options for the conf file for specifying the dispersion coverage (defaults to 1%), retries, concurrency, etc. though usually the defaults are fine. If you want to use keystone v3 for authentication there are options like auth_version, user_domain_name, project_domain_name and project_name. 
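To put the default 1% coverage in perspective, the number of partitions the
populate tool touches follows directly from the ring's partition count. Here
is a back-of-the-envelope illustration for a ring with partition power 18 and
3 replicas, which lines up with the example report output below::

    part_power = 18
    replicas = 3
    dispersion_coverage = 1.0   # percent, the default

    partitions_covered = int(2 ** part_power * dispersion_coverage / 100)
    copies_expected = partitions_covered * replicas
    print(partitions_covered, copies_expected)   # 2621 7863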
Once the configuration is in place, run `swift-dispersion-populate` to populate the containers and objects throughout the cluster. Now that those containers and objects are in place, you can run `swift-dispersion-report` to get a dispersion report, or the overall health of the cluster. Here is an example of a cluster in perfect health:: $ swift-dispersion-report Queried 2621 containers for dispersion reporting, 19s, 0 retries 100.00% of container copies found (7863 of 7863) Sample represents 1.00% of the container partition space Queried 2619 objects for dispersion reporting, 7s, 0 retries 100.00% of object copies found (7857 of 7857) Sample represents 1.00% of the object partition space Now I'll deliberately double the weight of a device in the object ring (with replication turned off) and rerun the dispersion report to show what impact that has:: $ swift-ring-builder object.builder set_weight d0 200 $ swift-ring-builder object.builder rebalance ... $ swift-dispersion-report Queried 2621 containers for dispersion reporting, 8s, 0 retries 100.00% of container copies found (7863 of 7863) Sample represents 1.00% of the container partition space Queried 2619 objects for dispersion reporting, 7s, 0 retries There were 1763 partitions missing one copy. 77.56% of object copies found (6094 of 7857) Sample represents 1.00% of the object partition space You can see the health of the objects in the cluster has gone down significantly. Of course, I only have four devices in this test environment, in a production environment with many many devices the impact of one device change is much less. Next, I'll run the replicators to get everything put back into place and then rerun the dispersion report:: ... start object replicators and monitor logs until they're caught up ... $ swift-dispersion-report Queried 2621 containers for dispersion reporting, 17s, 0 retries 100.00% of container copies found (7863 of 7863) Sample represents 1.00% of the container partition space Queried 2619 objects for dispersion reporting, 7s, 0 retries 100.00% of object copies found (7857 of 7857) Sample represents 1.00% of the object partition space You can also run the report for only containers or objects:: $ swift-dispersion-report --container-only Queried 2621 containers for dispersion reporting, 17s, 0 retries 100.00% of container copies found (7863 of 7863) Sample represents 1.00% of the container partition space $ swift-dispersion-report --object-only Queried 2619 objects for dispersion reporting, 7s, 0 retries 100.00% of object copies found (7857 of 7857) Sample represents 1.00% of the object partition space Alternatively, the dispersion report can also be output in json format. This allows it to be more easily consumed by third party utilities:: $ swift-dispersion-report -j {"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0, "copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container": {"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0, "copies_expected": 12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}} Note that you may select which storage policy to use by setting the option '--policy-name silver' or '-P silver' (silver is the example policy name here). If no policy is specified, the default will be used per the swift.conf file. When you specify a policy the containers created also include the policy index, thus even when running a container_only report, you will need to specify the policy not using the default. 
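The JSON output shown above is convenient for wiring the dispersion report
into monitoring. A small, illustrative check might look like the following;
the 99% threshold is an arbitrary example, not a recommendation::

    import json
    import subprocess
    import sys

    raw = subprocess.check_output(['swift-dispersion-report', '-j'])
    report = json.loads(raw)
    for kind in ('container', 'object'):
        stats = report[kind]
        pct = 100.0 * stats['copies_found'] / stats['copies_expected']
        print('%s copies found: %.2f%%' % (kind, pct))
        if pct < 99.0:
            sys.exit('%s dispersion is below threshold' % kind)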
----------------------------------- Geographically Distributed Clusters ----------------------------------- Swift's default configuration is currently designed to work in a single region, where a region is defined as a group of machines with high-bandwidth, low-latency links between them. However, configuration options exist that make running a performant multi-region Swift cluster possible. For the rest of this section, we will assume a two-region Swift cluster: region 1 in San Francisco (SF), and region 2 in New York (NY). Each region shall contain within it 3 zones, numbered 1, 2, and 3, for a total of 6 zones. ~~~~~~~~~~~~~ read_affinity ~~~~~~~~~~~~~ This setting makes the proxy server prefer local backend servers for GET and HEAD requests over non-local ones. For example, it is preferable for an SF proxy server to service object GET requests by talking to SF object servers, as the client will receive lower latency and higher throughput. By default, Swift randomly chooses one of the three replicas to give to the client, thereby spreading the load evenly. In the case of a geographically-distributed cluster, the administrator is likely to prioritize keeping traffic local over even distribution of results. This is where the read_affinity setting comes in. Example:: [app:proxy-server] read_affinity = r1=100 This will make the proxy attempt to service GET and HEAD requests from backends in region 1 before contacting any backends in region 2. However, if no region 1 backends are available (due to replica placement, failed hardware, or other reasons), then the proxy will fall back to backend servers in other regions. Example:: [app:proxy-server] read_affinity = r1z1=100, r1=200 This will make the proxy attempt to service GET and HEAD requests from backends in region 1 zone 1, then backends in region 1, then any other backends. If a proxy is physically close to a particular zone or zones, this can provide bandwidth savings. For example, if a zone corresponds to servers in a particular rack, and the proxy server is in that same rack, then setting read_affinity to prefer reads from within the rack will result in less traffic between the top-of-rack switches. The read_affinity setting may contain any number of region/zone specifiers; the priority number (after the equals sign) determines the ordering in which backend servers will be contacted. A lower number means higher priority. Note that read_affinity only affects the ordering of primary nodes (see ring docs for definition of primary node), not the ordering of handoff nodes. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ write_affinity and write_affinity_node_count ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This setting makes the proxy server prefer local backend servers for object PUT requests over non-local ones. For example, it may be preferable for an SF proxy server to service object PUT requests by talking to SF object servers, as the client will receive lower latency and higher throughput. However, if this setting is used, note that a NY proxy server handling a GET request for an object that was PUT using write affinity may have to fetch it across the WAN link, as the object won't immediately have any replicas in NY. However, replication will move the object's replicas to their proper homes in both SF and NY. Note that only object PUT requests are affected by the write_affinity setting; POST, GET, HEAD, DELETE, OPTIONS, and account/container PUT requests are not affected. This setting lets you trade data distribution for throughput. 
If write_affinity is enabled, then object replicas will initially be stored
all within a particular region or zone, thereby decreasing the quality of the
data distribution, but the replicas will be distributed over fast WAN links,
giving higher throughput to clients. Note that the replicators will eventually
move objects to their proper, well-distributed homes.

The write_affinity setting is useful only when you don't typically read
objects immediately after writing them. For example, consider a workload of
mainly backups: if you have a bunch of machines in NY that periodically write
backups to Swift, then odds are that you don't then immediately read those
backups in SF. If your workload doesn't look like that, then you probably
shouldn't use write_affinity.

The write_affinity_node_count setting is only useful in conjunction with
write_affinity; it governs how many local object servers will be tried before
falling back to non-local ones.

Example::

    [app:proxy-server]
    write_affinity = r1
    write_affinity_node_count = 2 * replicas

Assuming 3 replicas, this configuration will make object PUTs try storing the
object's replicas on up to 6 disks ("2 * replicas") in region 1 ("r1"). The
proxy server tries to find 3 devices for storing the object. If a device is
unavailable, it queries the ring for a 4th device, and so on, up to the 6th
device. If the 6th device is still unavailable, the last replica will be sent
to another region. This does not mean that there will be 6 replicas in
region 1.

You should be aware that, if you have data coming into SF faster than your
link to NY can transfer it, then your cluster's data distribution will get
worse and worse over time as objects pile up in SF. If this happens, it is
recommended to disable write_affinity and simply let object PUTs traverse the
WAN link, as that will naturally limit the object growth rate to what your WAN
link can handle.

--------------------------------
Cluster Telemetry and Monitoring
--------------------------------

Various metrics and telemetry can be obtained from the account, container, and
object servers using the recon server middleware and the swift-recon cli. To
do so, update your account, container, or object server pipelines to include
recon and add the associated filter config.

object-server.conf sample::

    [pipeline:main]
    pipeline = recon object-server

    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift

container-server.conf sample::

    [pipeline:main]
    pipeline = recon container-server

    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift

account-server.conf sample::

    [pipeline:main]
    pipeline = recon account-server

    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift

The recon_cache_path simply sets the directory where stats for a few items
will be stored. Depending on the method of deployment you may need to create
this directory manually and ensure that swift has read/write access.
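Because the recon middleware simply exposes JSON over HTTP (see the endpoint
table below), the data is easy to collect with any HTTP client as well as with
the swift-recon tool. A minimal sketch using python-requests, with placeholder
hostnames standing in for your own object servers (the default object-server
port is 6000)::

    import requests

    object_servers = ['http://storage1.example.com:6000',
                      'http://storage2.example.com:6000']
    for server in object_servers:
        for check in ('load', 'diskusage', 'quarantined'):
            resp = requests.get('%s/recon/%s' % (server, check), timeout=5)
            print(server, check, resp.json())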
Finally, if you also wish to track asynchronous pending on your object servers you will need to setup a cronjob to run the swift-recon-cron script periodically on your object servers:: */5 * * * * swift /usr/bin/swift-recon-cron /etc/swift/object-server.conf Once the recon middleware is enabled, a GET request for "/recon/" to the backend object server will return a JSON-formatted response:: fhines@ubuntu:~$ curl -i http://localhost:6030/recon/async HTTP/1.1 200 OK Content-Type: application/json Content-Length: 20 Date: Tue, 18 Oct 2011 21:03:01 GMT {"async_pending": 0} Note that the default port for the object server is 6000, except on a Swift All-In-One installation, which uses 6010, 6020, 6030, and 6040. The following metrics and telemetry are currently exposed: ========================= ======================================================================================== Request URI Description ------------------------- ---------------------------------------------------------------------------------------- /recon/load returns 1,5, and 15 minute load average /recon/mem returns /proc/meminfo /recon/mounted returns *ALL* currently mounted filesystems /recon/unmounted returns all unmounted drives if mount_check = True /recon/diskusage returns disk utilization for storage devices /recon/ringmd5 returns object/container/account ring md5sums /recon/quarantined returns # of quarantined objects/accounts/containers /recon/sockstat returns consumable info from /proc/net/sockstat|6 /recon/devices returns list of devices and devices dir i.e. /srv/node /recon/async returns count of async pending /recon/replication returns object replication info (for backward compatibility) /recon/replication/ returns replication info for given type (account, container, object) /recon/auditor/ returns auditor stats on last reported scan for given type (account, container, object) /recon/updater/ returns last updater sweep times for given type (container, object) ========================= ======================================================================================== Note that 'object_replication_last' and 'object_replication_time' in object replication info are considered to be transitional and will be removed in the subsequent releases. Use 'replication_last' and 'replication_time' instead. This information can also be queried via the swift-recon command line utility:: fhines@ubuntu:~$ swift-recon -h Usage: usage: swift-recon [-v] [--suppress] [-a] [-r] [-u] [-d] [-l] [-T] [--md5] [--auditor] [--updater] [--expirer] [--sockstat] account|container|object Defaults to object server. ex: swift-recon container -l --auditor Options: -h, --help show this help message and exit -v, --verbose Print verbose info --suppress Suppress most connection related errors -a, --async Get async stats -r, --replication Get replication stats --auditor Get auditor stats --updater Get updater stats --expirer Get expirer stats -u, --unmounted Check cluster for unmounted devices -d, --diskusage Get disk usage stats -l, --loadstats Get cluster load average stats -q, --quarantined Get cluster quarantine stats --md5 Get md5sum of servers ring and compare to local copy --sockstat Get cluster socket usage stats -T, --time Check time synchronization --all Perform all checks. 
Equal to -arudlqT --md5 --sockstat --auditor --updater --expirer --driveaudit --validate-servers -z ZONE, --zone=ZONE Only query servers in specified zone -t SECONDS, --timeout=SECONDS Time to wait for a response from a server --swiftdir=SWIFTDIR Default = /etc/swift For example, to obtain container replication info from all hosts in zone "3":: fhines@ubuntu:~$ swift-recon container -r --zone 3 =============================================================================== --> Starting reconnaissance on 1 hosts =============================================================================== [2012-04-02 02:45:48] Checking on replication [failure] low: 0.000, high: 0.000, avg: 0.000, reported: 1 [success] low: 486.000, high: 486.000, avg: 486.000, reported: 1 [replication_time] low: 20.853, high: 20.853, avg: 20.853, reported: 1 [attempted] low: 243.000, high: 243.000, avg: 243.000, reported: 1 --------------------------- Reporting Metrics to StatsD --------------------------- If you have a StatsD_ server running, Swift may be configured to send it real-time operational metrics. To enable this, set the following configuration entries (see the sample configuration files):: log_statsd_host = localhost log_statsd_port = 8125 log_statsd_default_sample_rate = 1.0 log_statsd_sample_rate_factor = 1.0 log_statsd_metric_prefix = [empty-string] If `log_statsd_host` is not set, this feature is disabled. The default values for the other settings are given above. The `log_statsd_host` can be a hostname, an IPv4 address, or an IPv6 address (not surrounded with brackets, as this is unnecessary since the port is specified separately). If a hostname resolves to an IPv4 address, an IPv4 socket will be used to send StatsD UDP packets, even if the hostname would also resolve to an IPv6 address. .. _StatsD: http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/ .. _Graphite: http://graphite.wikidot.com/ .. _Ganglia: http://ganglia.sourceforge.net/ The sample rate is a real number between 0 and 1 which defines the probability of sending a sample for any given event or timing measurement. This sample rate is sent with each sample to StatsD and used to multiply the value. For example, with a sample rate of 0.5, StatsD will multiply that counter's value by 2 when flushing the metric to an upstream monitoring system (Graphite_, Ganglia_, etc.). Some relatively high-frequency metrics have a default sample rate less than one. If you want to override the default sample rate for all metrics whose default sample rate is not specified in the Swift source, you may set `log_statsd_default_sample_rate` to a value less than one. This is NOT recommended (see next paragraph). A better way to reduce StatsD load is to adjust `log_statsd_sample_rate_factor` to a value less than one. The `log_statsd_sample_rate_factor` is multiplied to any sample rate (either the global default or one specified by the actual metric logging call in the Swift source) prior to handling. In other words, this one tunable can lower the frequency of all StatsD logging by a proportional amount. To get the best data, start with the default `log_statsd_default_sample_rate` and `log_statsd_sample_rate_factor` values of 1 and only lower `log_statsd_sample_rate_factor` if needed. The `log_statsd_default_sample_rate` should not be used and remains for backward compatibility only. 
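The interaction of the two sample-rate settings is easier to see in code. The
following is an illustration of the math described above, not Swift's actual
implementation: the effective rate decides whether a given sample is emitted,
and is included in the metric so the StatsD server can scale the value back up
(for example, multiply by 2 for a rate of 0.5)::

    import random

    def send_counter(metric, value, sample_rate=1.0, sample_rate_factor=1.0):
        rate = sample_rate * sample_rate_factor
        if rate >= 1 or random.random() < rate:
            payload = '%s:%s|c' % (metric, value)
            if rate < 1:
                payload += '|@%s' % rate   # the receiver scales by 1/rate
            print(payload)                 # really sent as a UDP packet

    # Halve the volume of every metric without touching per-metric defaults.
    send_counter('proxy-server.errors', 1, sample_rate_factor=0.5)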
The metric prefix will be prepended to every metric sent to the StatsD server For example, with:: log_statsd_metric_prefix = proxy01 the metric `proxy-server.errors` would be sent to StatsD as `proxy01.proxy-server.errors`. This is useful for differentiating different servers when sending statistics to a central StatsD server. If you run a local StatsD server per node, you could configure a per-node metrics prefix there and leave `log_statsd_metric_prefix` blank. Note that metrics reported to StatsD are counters or timing data (which are sent in units of milliseconds). StatsD usually expands timing data out to min, max, avg, count, and 90th percentile per timing metric, but the details of this behavior will depend on the configuration of your StatsD server. Some important "gauge" metrics may still need to be collected using another method. For example, the `object-server.async_pendings` StatsD metric counts the generation of async_pendings in real-time, but will not tell you the current number of async_pending container updates on disk at any point in time. Note also that the set of metrics collected, their names, and their semantics are not locked down and will change over time. Metrics for `account-auditor`: ========================== ========================================================= Metric Name Description -------------------------- --------------------------------------------------------- `account-auditor.errors` Count of audit runs (across all account databases) which caught an Exception. `account-auditor.passes` Count of individual account databases which passed audit. `account-auditor.failures` Count of individual account databases which failed audit. `account-auditor.timing` Timing data for individual account database audits. ========================== ========================================================= Metrics for `account-reaper`: ============================================== ==================================================== Metric Name Description ---------------------------------------------- ---------------------------------------------------- `account-reaper.errors` Count of devices failing the mount check. `account-reaper.timing` Timing data for each reap_account() call. `account-reaper.return_codes.X` Count of HTTP return codes from various operations (e.g. object listing, container deletion, etc.). The value for X is the first digit of the return code (2 for 201, 4 for 404, etc.). `account-reaper.containers_failures` Count of failures to delete a container. `account-reaper.containers_deleted` Count of containers successfully deleted. `account-reaper.containers_remaining` Count of containers which failed to delete with zero successes. `account-reaper.containers_possibly_remaining` Count of containers which failed to delete with at least one success. `account-reaper.objects_failures` Count of failures to delete an object. `account-reaper.objects_deleted` Count of objects successfully deleted. `account-reaper.objects_remaining` Count of objects which failed to delete with zero successes. `account-reaper.objects_possibly_remaining` Count of objects which failed to delete with at least one success. 
============================================== ==================================================== Metrics for `account-server` ("Not Found" is not considered an error and requests which increment `errors` are not included in the timing data): ======================================== ======================================================= Metric Name Description ---------------------------------------- ------------------------------------------------------- `account-server.DELETE.errors.timing` Timing data for each DELETE request resulting in an error: bad request, not mounted, missing timestamp. `account-server.DELETE.timing` Timing data for each DELETE request not resulting in an error. `account-server.PUT.errors.timing` Timing data for each PUT request resulting in an error: bad request, not mounted, conflict, recently-deleted. `account-server.PUT.timing` Timing data for each PUT request not resulting in an error. `account-server.HEAD.errors.timing` Timing data for each HEAD request resulting in an error: bad request, not mounted. `account-server.HEAD.timing` Timing data for each HEAD request not resulting in an error. `account-server.GET.errors.timing` Timing data for each GET request resulting in an error: bad request, not mounted, bad delimiter, account listing limit too high, bad accept header. `account-server.GET.timing` Timing data for each GET request not resulting in an error. `account-server.REPLICATE.errors.timing` Timing data for each REPLICATE request resulting in an error: bad request, not mounted. `account-server.REPLICATE.timing` Timing data for each REPLICATE request not resulting in an error. `account-server.POST.errors.timing` Timing data for each POST request resulting in an error: bad request, bad or missing timestamp, not mounted. `account-server.POST.timing` Timing data for each POST request not resulting in an error. ======================================== ======================================================= Metrics for `account-replicator`: ===================================== ==================================================== Metric Name Description ------------------------------------- ---------------------------------------------------- `account-replicator.diffs` Count of syncs handled by sending differing rows. `account-replicator.diff_caps` Count of "diffs" operations which failed because "max_diffs" was hit. `account-replicator.no_changes` Count of accounts found to be in sync. `account-replicator.hashmatches` Count of accounts found to be in sync via hash comparison (`broker.merge_syncs` was called). `account-replicator.rsyncs` Count of completely missing accounts which were sent via rsync. `account-replicator.remote_merges` Count of syncs handled by sending entire database via rsync. `account-replicator.attempts` Count of database replication attempts. `account-replicator.failures` Count of database replication attempts which failed due to corruption (quarantined) or inability to read as well as attempts to individual nodes which failed. `account-replicator.removes.` Count of databases on deleted because the delete_timestamp was greater than the put_timestamp and the database had no rows or because it was successfully sync'ed to other locations and doesn't belong here anymore. `account-replicator.successes` Count of replication attempts to an individual node which were successful. `account-replicator.timing` Timing data for each database replication attempt not resulting in a failure. 
===================================== ==================================================== Metrics for `container-auditor`: ============================ ==================================================== Metric Name Description ---------------------------- ---------------------------------------------------- `container-auditor.errors` Incremented when an Exception is caught in an audit pass (only once per pass, max). `container-auditor.passes` Count of individual containers passing an audit. `container-auditor.failures` Count of individual containers failing an audit. `container-auditor.timing` Timing data for each container audit. ============================ ==================================================== Metrics for `container-replicator`: ======================================= ==================================================== Metric Name Description --------------------------------------- ---------------------------------------------------- `container-replicator.diffs` Count of syncs handled by sending differing rows. `container-replicator.diff_caps` Count of "diffs" operations which failed because "max_diffs" was hit. `container-replicator.no_changes` Count of containers found to be in sync. `container-replicator.hashmatches` Count of containers found to be in sync via hash comparison (`broker.merge_syncs` was called). `container-replicator.rsyncs` Count of completely missing containers where were sent via rsync. `container-replicator.remote_merges` Count of syncs handled by sending entire database via rsync. `container-replicator.attempts` Count of database replication attempts. `container-replicator.failures` Count of database replication attempts which failed due to corruption (quarantined) or inability to read as well as attempts to individual nodes which failed. `container-replicator.removes.` Count of databases deleted on because the delete_timestamp was greater than the put_timestamp and the database had no rows or because it was successfully sync'ed to other locations and doesn't belong here anymore. `container-replicator.successes` Count of replication attempts to an individual node which were successful. `container-replicator.timing` Timing data for each database replication attempt not resulting in a failure. ======================================= ==================================================== Metrics for `container-server` ("Not Found" is not considered an error and requests which increment `errors` are not included in the timing data): ========================================== ==================================================== Metric Name Description ------------------------------------------ ---------------------------------------------------- `container-server.DELETE.errors.timing` Timing data for DELETE request errors: bad request, not mounted, missing timestamp, conflict. `container-server.DELETE.timing` Timing data for each DELETE request not resulting in an error. `container-server.PUT.errors.timing` Timing data for PUT request errors: bad request, missing timestamp, not mounted, conflict. `container-server.PUT.timing` Timing data for each PUT request not resulting in an error. `container-server.HEAD.errors.timing` Timing data for HEAD request errors: bad request, not mounted. `container-server.HEAD.timing` Timing data for each HEAD request not resulting in an error. `container-server.GET.errors.timing` Timing data for GET request errors: bad request, not mounted, parameters not utf8, bad accept header. 
`container-server.GET.timing` Timing data for each GET request not resulting in an error. `container-server.REPLICATE.errors.timing` Timing data for REPLICATE request errors: bad request, not mounted. `container-server.REPLICATE.timing` Timing data for each REPLICATE request not resulting in an error. `container-server.POST.errors.timing` Timing data for POST request errors: bad request, bad x-container-sync-to, not mounted. `container-server.POST.timing` Timing data for each POST request not resulting in an error. ========================================== ==================================================== Metrics for `container-sync`: =============================== ==================================================== Metric Name Description ------------------------------- ---------------------------------------------------- `container-sync.skips` Count of containers skipped because they don't have sync'ing enabled. `container-sync.failures` Count of failures sync'ing of individual containers. `container-sync.syncs` Count of individual containers sync'ed successfully. `container-sync.deletes` Count of container database rows sync'ed by deletion. `container-sync.deletes.timing` Timing data for each container database row synchronization via deletion. `container-sync.puts` Count of container database rows sync'ed by PUTing. `container-sync.puts.timing` Timing data for each container database row synchronization via PUTing. =============================== ==================================================== Metrics for `container-updater`: ============================== ==================================================== Metric Name Description ------------------------------ ---------------------------------------------------- `container-updater.successes` Count of containers which successfully updated their account. `container-updater.failures` Count of containers which failed to update their account. `container-updater.no_changes` Count of containers which didn't need to update their account. `container-updater.timing` Timing data for processing a container; only includes timing for containers which needed to update their accounts (i.e. "successes" and "failures" but not "no_changes"). ============================== ==================================================== Metrics for `object-auditor`: ============================ ==================================================== Metric Name Description ---------------------------- ---------------------------------------------------- `object-auditor.quarantines` Count of objects failing audit and quarantined. `object-auditor.errors` Count of errors encountered while auditing objects. `object-auditor.timing` Timing data for each object audit (does not include any rate-limiting sleep time for max_files_per_second, but does include rate-limiting sleep time for max_bytes_per_second). ============================ ==================================================== Metrics for `object-expirer`: ======================== ==================================================== Metric Name Description ------------------------ ---------------------------------------------------- `object-expirer.objects` Count of objects expired. `object-expirer.errors` Count of errors encountered while attempting to expire an object. `object-expirer.timing` Timing data for each object expiration attempt, including ones resulting in an error. 
======================== ====================================================

Metrics for `object-reconstructor`:

======================================================= ======================================================
Metric Name                                             Description
------------------------------------------------------- ------------------------------------------------------
`object-reconstructor.partition.delete.count.<device>`  A count of partitions on <device> which were reconstructed and synced to another node because they didn't belong on this node. This metric is tracked per-device to allow for "quiescence detection" for object reconstruction activity on each device.
`object-reconstructor.partition.delete.timing`          Timing data for partitions reconstructed and synced to another node because they didn't belong on this node. This metric is not tracked per device.
`object-reconstructor.partition.update.count.<device>`  A count of partitions on <device> which were reconstructed and synced to another node, but also belong on this node. As with delete.count, this metric is tracked per-device.
`object-reconstructor.partition.update.timing`          Timing data for partitions reconstructed which also belong on this node. This metric is not tracked per-device.
`object-reconstructor.suffix.hashes`                    Count of suffix directories whose hash (of filenames) was recalculated.
`object-reconstructor.suffix.syncs`                     Count of suffix directories reconstructed with ssync.
======================================================= ======================================================

Metrics for `object-replicator`:

==================================================== ====================================================
Metric Name                                          Description
---------------------------------------------------- ----------------------------------------------------
`object-replicator.partition.delete.count.<device>` A count of partitions on <device> which were replicated to another node because they didn't belong on this node. This metric is tracked per-device to allow for "quiescence detection" for object replication activity on each device.
`object-replicator.partition.delete.timing`          Timing data for partitions replicated to another node because they didn't belong on this node. This metric is not tracked per device.
`object-replicator.partition.update.count.<device>`  A count of partitions on <device> which were replicated to another node, but also belong on this node. As with delete.count, this metric is tracked per-device.
`object-replicator.partition.update.timing`          Timing data for partitions replicated which also belong on this node. This metric is not tracked per-device.
`object-replicator.suffix.hashes`                    Count of suffix directories whose hash (of filenames) was recalculated.
`object-replicator.suffix.syncs`                     Count of suffix directories replicated with rsync.
==================================================== ====================================================

Metrics for `object-server`:

======================================= ====================================================
Metric Name                             Description
--------------------------------------- ----------------------------------------------------
`object-server.quarantines`             Count of objects (files) found bad and moved to quarantine.
`object-server.async_pendings`          Count of container updates saved as async_pendings (may result from PUT or DELETE requests).
`object-server.POST.errors.timing`      Timing data for POST request errors: bad request, missing timestamp, delete-at in past, not mounted.
`object-server.POST.timing`             Timing data for each POST request not resulting in an error.
`object-server.PUT.errors.timing`       Timing data for PUT request errors: bad request, not mounted, missing timestamp, object creation constraint violation, delete-at in past.
`object-server.PUT.timeouts`            Count of object PUTs which exceeded max_upload_time.
`object-server.PUT.timing`              Timing data for each PUT request not resulting in an error.
`object-server.PUT.<device>.timing`     Timing data per kB transferred (ms/kB) for each non-zero-byte PUT request on each device. Monitoring problematic devices, higher is bad.
`object-server.GET.errors.timing`       Timing data for GET request errors: bad request, not mounted, header timestamps before the epoch, precondition failed. File errors resulting in a quarantine are not counted here.
`object-server.GET.timing`              Timing data for each GET request not resulting in an error. Includes requests which couldn't find the object (including disk errors resulting in file quarantine).
`object-server.HEAD.errors.timing`      Timing data for HEAD request errors: bad request, not mounted.
`object-server.HEAD.timing`             Timing data for each HEAD request not resulting in an error. Includes requests which couldn't find the object (including disk errors resulting in file quarantine).
`object-server.DELETE.errors.timing`    Timing data for DELETE request errors: bad request, missing timestamp, not mounted, precondition failed. Includes requests which couldn't find or match the object.
`object-server.DELETE.timing`           Timing data for each DELETE request not resulting in an error.
`object-server.REPLICATE.errors.timing` Timing data for REPLICATE request errors: bad request, not mounted.
`object-server.REPLICATE.timing`        Timing data for each REPLICATE request not resulting in an error.
======================================= ====================================================

Metrics for `object-updater`:

============================ ====================================================
Metric Name                  Description
---------------------------- ----------------------------------------------------
`object-updater.errors`      Count of drives not mounted or async_pending files with an unexpected name.
`object-updater.timing`      Timing data for object sweeps to flush async_pending container updates. Does not include object sweeps which did not find an existing async_pending storage directory.
`object-updater.quarantines` Count of async_pending container updates which were corrupted and moved to quarantine.
`object-updater.successes`   Count of successful container updates.
`object-updater.failures`    Count of failed container updates.
`object-updater.unlinks`     Count of async_pending files unlinked. An async_pending file is unlinked either when it is successfully processed or when the replicator sees that there is a newer async_pending file for the same object.
============================ ====================================================

Metrics for `proxy-server` (in the table, ``<type>`` is the proxy-server controller responsible for the request and will be one of "account", "container", or "object"):

======================================== ====================================================
Metric Name                              Description
---------------------------------------- ----------------------------------------------------
`proxy-server.errors`                    Count of errors encountered while serving requests before the controller type is determined. Includes invalid Content-Length, errors finding the internal controller to handle the request, invalid utf8, and bad URLs.
`proxy-server.<type>.handoff_count`      Count of node hand-offs; only tracked if log_handoffs is set in the proxy-server config.
`proxy-server.<type>.handoff_all_count`  Count of times *only* hand-off locations were utilized; only tracked if log_handoffs is set in the proxy-server config.
`proxy-server.<type>.client_timeouts`    Count of client timeouts (client did not read within `client_timeout` seconds during a GET or did not supply data within `client_timeout` seconds during a PUT).
`proxy-server.<type>.client_disconnects` Count of detected client disconnects during PUT operations (does NOT include caught Exceptions in the proxy-server which caused a client disconnect).
======================================== ====================================================

Metrics for `proxy-logging` middleware (in the table, ``<type>`` is either the proxy-server controller responsible for the request: "account", "container", "object", or the string "SOS" if the request came from the `Swift Origin Server`_ middleware. The ``<verb>`` portion will be one of "GET", "HEAD", "POST", "PUT", "DELETE", "COPY", "OPTIONS", or "BAD_METHOD". The list of valid HTTP methods is configurable via the `log_statsd_valid_http_methods` config variable and the default setting yields the above behavior):

.. _Swift Origin Server: https://github.com/dpgoetz/sos

===================================================== ============================================
Metric Name                                           Description
----------------------------------------------------- --------------------------------------------
`proxy-server.<type>.<verb>.<status>.timing`          Timing data for requests, start to finish. The <status> portion is the numeric HTTP status code for the request (e.g. "200" or "404").
`proxy-server.<type>.GET.<status>.first-byte.timing`  Timing data up to completion of sending the response headers (only for GET requests). <status> and <type> are as for the main timing metric.
`proxy-server.<type>.<verb>.<status>.xfer`            This counter metric is the sum of bytes transferred in (from clients) and out (to clients) for requests. The <type>, <verb>, and <status> portions of the metric are just like the main timing metric.
===================================================== ============================================

The `proxy-logging` middleware also groups these metrics by policy (the ``<policy-index>`` portion represents a policy index):

=========================================================================== =====================================
Metric Name                                                                 Description
--------------------------------------------------------------------------- -------------------------------------
`proxy-server.object.policy.<policy-index>.<verb>.<status>.timing`          Timing data for requests, aggregated by policy index.
`proxy-server.object.policy.<policy-index>.GET.<status>.first-byte.timing`  Timing data up to completion of sending the response headers, aggregated by policy index.
`proxy-server.object.policy.<policy-index>.<verb>.<status>.xfer`            Sum of bytes transferred in and out, aggregated by policy index.
=========================================================================== =====================================
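These per-request metrics only start flowing once the servers and middleware
are pointed at a StatsD endpoint. A rough sketch of the kind of settings
involved follows; apart from `log_statsd_valid_http_methods` mentioned above,
the option names and values here are assumptions based on typical Swift
StatsD configuration rather than settings quoted from this guide::

    [DEFAULT]
    # Address of the StatsD daemon that should receive the metrics.
    log_statsd_host = localhost
    log_statsd_port = 8125
    # 1.0 sends every sample; lower this on very busy proxies.
    log_statsd_default_sample_rate = 1.0
    # Optional prefix to tell nodes apart in the aggregated metrics.
    log_statsd_metric_prefix = proxy01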
Metrics for `tempauth` middleware (in the table, ``<reseller_prefix>`` represents the actual configured reseller_prefix or "`NONE`" if the reseller_prefix is the empty string):

========================================= ====================================================
Metric Name                               Description
----------------------------------------- ----------------------------------------------------
`tempauth.<reseller_prefix>.unauthorized` Count of regular requests which were denied with HTTPUnauthorized.
`tempauth.<reseller_prefix>.forbidden`    Count of regular requests which were denied with HTTPForbidden.
`tempauth.<reseller_prefix>.token_denied` Count of token requests which were denied.
`tempauth.<reseller_prefix>.errors`       Count of errors.
========================================= ====================================================

------------------------
Debugging Tips and Tools
------------------------

When a request is made to Swift, it is given a unique transaction id. This id
should be in every log line that has to do with that request. This can be
useful when looking at all the services that are hit by a single request.

If you need to know where a specific account, container or object is in the
cluster, `swift-get-nodes` will show the location where each replica should
be.

If you are looking at an object on the server and need more info,
`swift-object-info` will display the account, container, replica locations
and metadata of the object.

If you are looking at a container on the server and need more info,
`swift-container-info` will display the account, container, replica locations
and metadata of the container.

If you are looking at an account on the server and need more info,
`swift-account-info` will display the account, replica locations and metadata
of the account.

If you want to audit the data for an account, `swift-account-audit` can be
used to crawl the account, checking that all containers and objects can be
found.
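As a quick illustration of the tools above, a typical `swift-get-nodes`
invocation looks roughly like the following; the ring path and the
account/container/object names are placeholders for this example, not values
taken from this guide::

    swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject

The output lists the partition and the server IP, port and device for each
replica (plus handoff locations), which is handy when you then want to
inspect the on-disk files with `swift-object-info` on one of those storage
nodes.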
-----------------
Managing Services
-----------------

Swift services are generally managed with `swift-init`. The general usage is
``swift-init <service> <command>``, where ``<service>`` is the Swift service
to manage (for example object, container, account, proxy) and ``<command>``
is one of:

========== ===============================================
Command    Description
---------- -----------------------------------------------
start      Start the service
stop       Stop the service
restart    Restart the service
shutdown   Attempt to gracefully shutdown the service
reload     Attempt to gracefully restart the service
========== ===============================================

A graceful shutdown or reload will finish any current requests before
completely stopping the old service. There is also a special case of
``swift-init all <command>``, which will run the command for all swift
services.

In cases where there are multiple configs for a service, a specific config
can be managed with ``swift-init <service>.<config> <command>``. For example,
when a separate replication network is used, there might be
`/etc/swift/object-server/public.conf` for the object server and
`/etc/swift/object-server/replication.conf` for the replication services. In
this case, the replication services could be restarted with
``swift-init object-server.replication restart``.

--------------
Object Auditor
--------------

On system failures, the XFS file system can sometimes truncate files it's
trying to write and produce zero-byte files. The object-auditor will catch
these problems but in the case of a system crash it would be advisable to run
an extra, less rate limited sweep to check for these specific files. You can
run this command as follows::

    swift-object-auditor /path/to/object-server/config/file.conf once -z 1000

``-z`` means to only check for zero-byte files at 1000 files per second.

At times it is useful to be able to run the object auditor on a specific
device or set of devices. You can run the object-auditor as follows::

    swift-object-auditor /path/to/object-server/config/file.conf once --devices=sda,sdb

This will run the object auditor on only the sda and sdb devices. This param
accepts a comma separated list of values.

-----------------
Object Replicator
-----------------

At times it is useful to be able to run the object replicator on a specific
device or partition. You can run the object-replicator as follows::

    swift-object-replicator /path/to/object-server/config/file.conf once --devices=sda,sdb

This will run the object replicator on only the sda and sdb devices. You can
likewise run that command with --partitions. Both params accept a comma
separated list of values. If both are specified they will be ANDed together.
These can only be run in "once" mode.

-------------
Swift Orphans
-------------

Swift Orphans are processes left over after a reload of a Swift server. For
example, when upgrading a proxy server you would probably finish with a
`swift-init proxy-server reload` or `/etc/init.d/swift-proxy reload`. This
kills the parent proxy server process and leaves the child processes running
to finish processing whatever requests they might be handling at the time. It
then starts up a new parent proxy server process and its children to handle
new incoming requests. This allows zero-downtime upgrades with no impact to
existing requests.

The orphaned child processes may take a while to exit, depending on the
length of the requests they were handling. However, sometimes an old process
can be hung up due to some bug or hardware issue. In these cases, these
orphaned processes will hang around forever. `swift-orphans` can be used to
find and kill these orphans.

`swift-orphans` with no arguments will just list the orphans it finds that
were started more than 24 hours ago. You shouldn't really check for orphans
until 24 hours after you perform a reload, as some requests can take a long
time to process. `swift-orphans -k TERM` will send the SIGTERM signal to the
orphan processes, or you can `kill -TERM` the pids yourself if you prefer.

You can run `swift-orphans --help` for more options.

------------
Swift Oldies
------------

Swift Oldies are processes that have just been around for a long time.
There's nothing necessarily wrong with this, but it might indicate a hung
process if you regularly upgrade and reload/restart services. You might have
so many servers that you don't notice when a reload/restart fails;
`swift-oldies` can help with this. For example, if you upgraded and
reloaded/restarted everything 2 days ago, and you've already cleaned up any
orphans with `swift-orphans`, you can run `swift-oldies -a 48` to find any
Swift processes still around that were started more than 2 days ago and then
investigate them accordingly.

-------------------
Custom Log Handlers
-------------------

Swift supports setting up custom log handlers for services by specifying a
comma-separated list of functions to invoke when logging is setup. It does so
via the `log_custom_handlers` configuration option. Logger hooks invoked are
passed the same arguments as Swift's get_logger function (as well as the
getLogger and LogAdapter object):

============== ===============================================
Name           Description
-------------- -----------------------------------------------
conf           Configuration dict to read settings from
name           Name of the logger received
log_to_console (optional) Write log messages to console on stderr
log_route      Route for the logging received
fmt            Override log format received
logger         The logging.getLogger object
adapted_logger The LogAdapter object
============== ===============================================

A basic example that sets up a custom logger might look like the following:

.. code-block:: python

    def my_logger(conf, name, log_to_console, log_route, fmt, logger,
                  adapted_logger):
        my_conf_opt = conf.get('some_custom_setting')
        my_handler = third_party_logstore_handler(my_conf_opt)
        logger.addHandler(my_handler)

See :ref:`custom-logger-hooks-label` for sample use cases.

------------------------
Securing OpenStack Swift
------------------------

Please refer to the security guides at:

* http://docs.openstack.org/sec/
* http://docs.openstack.org/security-guide/content/object-storage.html
swift-2.7.1/doc/source/images/0000775000567000056710000000000013024044470017326 5ustar jenkinsjenkins00000000000000swift-2.7.1/doc/source/images/ec_overview.png0000664000567000056710000044117213024044352022361 0ustar jenkinsjenkins00000000000000[binary PNG data for ec_overview.png omitted]
963~p>|ܜvv pEr/_AX7ZM'vQ}(2^C\0$ԭ:o0494Ęq&`WL\ >Cu_{KUVu<.ڃ18k~:nŹ~h5" KQxfwAaX/T5vqYΨ)x rtaRX iY.-{W$ ݖFU`&@G!sSȃ-ީҾb irz"^l֡Kw?tֹa IFEW^XY(Z>/ uD٭QJtɔEÁ+00G$_ i!_GK4l!F8iT*+&&9?FQFt&Tǫ,>l>š>!Q% `)+fIӕоvN}HR7A=g1*3c!#e;w#<}+@֫Q"b[k~3&~ {G9҇PaS/b߽=/P+u+Ak3$a'IEzB<Đ9:ޘtZ)TWƯrϻ~ ?)6ڕCt#4$:eJ}x F5_) έrƻe2ϋdG՝8tNpRw|G7l9.~+}1Bꭙ"yp,vKaXqa;3'%o LmӚg.vrAz텈3#p7KW_ x;epD{})^<+u?3$p_pr} .vpwt1\Ũ!G-ԋ9U73PJGًN앴`AX=W!7Lܰ`,)8sm ❤qE$$]ptd 'R7(9ѣL|ٴ?2Ce.aK͓;k!Zo\;=^BeLpva5n^G]3/ f! FS'LjۈKٰHG%6'Pk 6|ȳbyO\]LcNdqH/7A4Бe_=<BI"(s`$\gޫ0DQf*%J@9Ex=vIKɳqFl-.bn8G'.AR)ZysV(|O:Xuz?f):7.Nb G+(ϯT7}%-tۿ3jvVs'dH4\3IF9bb,Oxz7d#"\λ9Db/s0ȿ F-N-*]q=_vYKۉA3$WpL:a3-- ]ǒ^?aZ ppv,MgObtF-V"~]f#صyJaվ%Sc*Uwu $&oq>X=-wQ;+¼! ٥ ¶͌8+!v\cUbw7`;G=)"`:I-|',!utR>M^~fnJ9N ɇ[O QKSv8X*Yl u/A\*[ Yqh 0=?`ZFN_7Lj= s'UZyg|Wo9Xi#;DbΨ?0o}_ \0hOLğ6øĶK$0n# PLݖ4x!XbS6:%7;=I^e5k Y 9ߢweo,#ê1 sٸ_T>7.igao/v-p!yt@P>GOg3 ^>XHr؍$k&:Cбz7 +θk( }V~0.FQc\Ħ2ݑ6=0hqOMc9tO#MyMY`'S$ϋ[}jv΀=6 $;6gIݴw |?WvWើ7vIV̘Vĸg(_<%}\Nq-W=oZXL* QF&E1ϒ?{ĠE=/Zm_!qף-5˂[1/#[2T|JT #:df! i{wז v9). ȘԎ&gMBJW'=;bي`fr)P̻⃹Usbb~S-"9{>(E#.&Eo@zEIu"o\89[\ nl'ud,JLI;|aXۓ߮Xޢ#grvx[z-bb.+^GDC5Ւ7;9]{2"DǤX b,OQ%T[idsQU~ٮbyܤ24B8DyI~KeZh$$K~}n˝Rݝ\FMdB$BR e´Jvp Co/æҋM.y` *3Ewo؋l-,l\Ҹ:{c_cTy[\tƑ%`t*,y|CzG1VS"cbQLѕ\ ;OQ,:l,ohWx wʁXmMg"gtNx< }G<"p.[މEOgrIGlRgp5$[# @ꙻS2ut}7O8DsG5e ;]q,6+wP{rt{!{*HB."1arm1nbs<})b;I#$>WWWPOb+Btzp d(TG+&u%`ږEx<#sy fnWC&c:t(iX7ɇ F'gǧ߱|wDj/JhSށDx0l?ːwc9oZ܅|`!dNm!\h)/I eKCy3=y*{UhJ>8y:ŒCFw`ʕ{6 g8Oks6@J)g˪ia'¡GY@sGPP2f(hs .E=6UCYR' }%x6AyDq 5փۧYv\ o'HZO7Izܹmаm,O:3gI{8J(Kg&yiGX8j{Bjz_\"ȷkT°w(Agΰ=elΡ|^kJM[jCo,eJ.{^?ʊ(䅗˛e ƈP@s#}E 3YnF*1}gdFލ6(Kt 'P>^,$-liR8|%ך.I"-|cD1E^DYpdäFo"2;~}%Kl4Rb_ "q+2\H]>#P;5k#a,^G#e~1Bofﵰfvש͗]yk/Ux"t9_Ԋu}mabhh< HƒбT4uA|M]s?SBA? JNgď`=z^dև$PY(ӃBo{Hmz)}>K+Y ~"Qe2/^cDlkϓAl۳""C1X|X5MBBB!Ck]E/tVЫTkM?gG'EÿgËFg7x**gď;._\rhhhU*c`wddH q2g~p4#.^]vYmAH{BŊO~PFƱcwm 54 )S&T\qIkhhh<pvvVɃ/khh]8::*[$? Aw54444444lihhhhhhh4аh⧡a#OCCCCCCCF@? &~6M44444444lihhhhhhh4аh⧡a#OCCCCCCCF@? &~6M44444444lihhhhhhh4аh⧡a#OCCCCCCCF@? &~6M44444444lihhhhhhh4аh⧡a#OCCCCCCCF@? &~6M44444444lihhhhhhh4аh⧡a#OCCCCCCCF@? &~6M44444444lihhhhhhh4аh⧡a#OCCCCCCCF@? &~6M44444444lVn8׌IENDB`swift-2.7.1/doc/source/policies_saio.rst0000664000567000056710000001464613024044354021451 0ustar jenkinsjenkins00000000000000=========================================== Adding Storage Policies to an Existing SAIO =========================================== Depending on when you downloaded your SAIO environment, it may already be prepared with two storage policies that enable some basic functional tests. In the event that you are adding a storage policy to an existing installation, however, the following section will walk you through the steps for setting up Storage Policies. Note that configuring more than one storage policy on your development environment is recommended but optional. Enabling multiple Storage Policies is very easy regardless of whether you are working with an existing installation or starting a brand new one. Now we will create two policies - the first one will be a standard triple replication policy that we will also explicitly set as the default and the second will be setup for reduced replication using a factor of 2x. We will call the first one 'gold' and the second one 'silver'. In this example both policies map to the same devices because it's also important for this sample implementation to be simple and easy to understand and adding a bunch of new devices isn't really required to implement a usable set of policies. 1. 
To define your policies, add the following to your ``/etc/swift/swift.conf`` file:: [storage-policy:0] name = gold aliases = yellow, orange default = yes [storage-policy:1] name = silver See :doc:`overview_policies` for detailed information on ``swift.conf`` policy options. 2. To create the object ring for the silver policy (index 1), add the following to your ``bin/remakerings`` script and re-run it (your script may already have these changes):: swift-ring-builder object-1.builder create 10 2 1 swift-ring-builder object-1.builder add r1z1-127.0.0.1:6010/sdb1 1 swift-ring-builder object-1.builder add r1z2-127.0.0.1:6020/sdb2 1 swift-ring-builder object-1.builder add r1z3-127.0.0.1:6030/sdb3 1 swift-ring-builder object-1.builder add r1z4-127.0.0.1:6040/sdb4 1 swift-ring-builder object-1.builder rebalance Note that the reduced replication of the silver policy is only a function of the replication parameter in the ``swift-ring-builder create`` command and is not specified in ``/etc/swift/swift.conf``. 3. Copy ``etc/container-reconciler.conf-sample`` to ``/etc/swift/container-reconciler.conf`` and fix the user option:: cp etc/container-reconciler.conf-sample /etc/swift/container-reconciler.conf sed -i "s/# user.*/user = $USER/g" /etc/swift/container-reconciler.conf ------------------ Using Policies ------------------ Setting up Storage Policies was very simple, and using them is even simpler. In this section, we will run some commands to create a few containers with different policies and store objects in them and see how Storage Policies affect placement of data in Swift. 1. We will be using the list_endpoints middleware to confirm object locations, so enable that now in your ``proxy-server.conf`` file by adding it to the pipeline and including the filter section as shown below (be sure to restart your proxy after making these changes):: pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk \ slo dlo ratelimit crossdomain list-endpoints tempurl tempauth staticweb \ container-quotas account-quotas proxy-logging proxy-server [filter:list-endpoints] use = egg:swift#list_endpoints 2. Check to see that your policies are reported via /info:: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing info You should see this: (only showing the policy output here):: policies: [{'aliases': 'gold, yellow, orange', 'default': True, 'name': 'gold'}, {'aliases': 'silver', 'name': 'silver'}] 3. Now create a container without specifying a policy; it will use the default, 'gold'. Then put a test object in it (create the file ``file0.txt`` with your favorite editor with some content):: curl -v -X PUT -H 'X-Auth-Token: <token>' \ http://127.0.0.1:8080/v1/AUTH_test/myCont0 curl -X PUT -v -T file0.txt -H 'X-Auth-Token: <token>' \ http://127.0.0.1:8080/v1/AUTH_test/myCont0/file0.txt 4. Now confirm placement of the object with the :ref:`list_endpoints` middleware:: curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont0/file0.txt You should see this: (note placement on expected devices):: ["http://127.0.0.1:6030/sdb3/761/AUTH_test/myCont0/file0.txt", "http://127.0.0.1:6010/sdb1/761/AUTH_test/myCont0/file0.txt", "http://127.0.0.1:6020/sdb2/761/AUTH_test/myCont0/file0.txt"] 5. Create a container using policy 'silver' and put a different file in it:: curl -v -X PUT -H 'X-Auth-Token: <token>' -H \ "X-Storage-Policy: silver" \ http://127.0.0.1:8080/v1/AUTH_test/myCont1 curl -X PUT -v -T file1.txt -H 'X-Auth-Token: <token>' \ http://127.0.0.1:8080/v1/AUTH_test/myCont1/ 6.
Confirm placement of the object for policy 'silver':: curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont1/file1.txt You should see this: (note placement on expected devices):: ["http://127.0.0.1:6010/sdb1/32/AUTH_test/myCont1/file1.txt", "http://127.0.0.1:6040/sdb4/32/AUTH_test/myCont1/file1.txt"] 7. Confirm account information with HEAD, make sure that your container-updater service is running and has executed once since you performed the PUTs or the account database won't be updated yet:: curl -i -X HEAD -H 'X-Auth-Token: ' \ http://127.0.0.1:8080/v1/AUTH_test You should see something like this (note that total and per policy stats object sizes will vary):: HTTP/1.1 204 No Content Content-Length: 0 X-Account-Object-Count: 2 X-Account-Bytes-Used: 174 X-Account-Container-Count: 2 X-Account-Storage-Policy-Gold-Object-Count: 1 X-Account-Storage-Policy-Gold-Bytes-Used: 84 X-Account-Storage-Policy-Silver-Object-Count: 1 X-Account-Storage-Policy-Silver-Bytes-Used: 90 X-Timestamp: 1397230339.71525 Content-Type: text/plain; charset=utf-8 Accept-Ranges: bytes X-Trans-Id: tx96e7496b19bb44abb55a3-0053482c75 Date: Fri, 11 Apr 2014 17:55:01 GMT swift-2.7.1/doc/source/ratelimit.rst0000664000567000056710000001177313024044354020617 0ustar jenkinsjenkins00000000000000.. _ratelimit: ============= Rate Limiting ============= Rate limiting in swift is implemented as a pluggable middleware. Rate limiting is performed on requests that result in database writes to the account and container sqlite dbs. It uses memcached and is dependent on the proxy servers having highly synchronized time. The rate limits are limited by the accuracy of the proxy server clocks. -------------- Configuration -------------- All configuration is optional. If no account or container limits are provided there will be no rate limiting. Configuration available: ================================ ======= ====================================== Option Default Description -------------------------------- ------- -------------------------------------- clock_accuracy 1000 Represents how accurate the proxy servers' system clocks are with each other. 1000 means that all the proxies' clock are accurate to each other within 1 millisecond. No ratelimit should be higher than the clock accuracy. max_sleep_time_seconds 60 App will immediately return a 498 response if the necessary sleep time ever exceeds the given max_sleep_time_seconds. log_sleep_time_seconds 0 To allow visibility into rate limiting set this value > 0 and all sleeps greater than the number will be logged. rate_buffer_seconds 5 Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. account_ratelimit 0 If set, will limit PUT and DELETE requests to /account_name/container_name. Number is in requests per second. container_ratelimit_size '' When set with container_ratelimit_x = r: for containers of size x, limit requests per second to r. Will limit PUT, DELETE, and POST requests to /a/c/o. container_listing_ratelimit_size '' When set with container_listing_ratelimit_x = r: for containers of size x, limit listing requests per second to r. Will limit GET requests to /a/c. ================================ ======= ====================================== The container rate limits are linearly interpolated from the values given. 
A sample container rate limiting could be: container_ratelimit_100 = 100 container_ratelimit_200 = 50 container_ratelimit_500 = 20 This would result in ================ ============ Container Size Rate Limit ---------------- ------------ 0-99 No limiting 100 100 150 75 500 20 1000 20 ================ ============ ----------------------------- Account Specific Ratelimiting ----------------------------- The above ratelimiting is to prevent the "many writes to a single container" bottleneck from causing a problem. There could also be a problem where a single account is just using too much of the cluster's resources. In this case, the container ratelimits may not help because the customer could be doing thousands of reqs/sec to distributed containers each getting a small fraction of the total so those limits would never trigger. If a system administrator notices this, he/she can set the X-Account-Sysmeta-Global-Write-Ratelimit on an account and that will limit the total number of write requests (PUT, POST, DELETE, COPY) that account can do for the whole account. This limit will be in addition to the applicable account/container limits from above. This header will be hidden from the user, because of the gatekeeper middleware, and can only be set using a direct client to the account nodes. It accepts a float value and will only limit requests if the value is > 0. ------------------- Black/White-listing ------------------- To blacklist or whitelist an account set: X-Account-Sysmeta-Global-Write-Ratelimit: BLACKLIST or X-Account-Sysmeta-Global-Write-Ratelimit: WHITELIST in the account headers. swift-2.7.1/doc/source/first_contribution_swift.rst0000664000567000056710000001632113024044352023757 0ustar jenkinsjenkins00000000000000=========================== First Contribution to Swift =========================== ------------- Getting Swift ------------- Swift's source code is hosted on github and managed with git. The current trunk can be checked out like this: ``git clone https://github.com/openstack/swift.git`` This will clone the Swift repository under your account. A source tarball for the latest release of Swift is available on the `launchpad project page `_. Prebuilt packages for Ubuntu and RHEL variants are available. * `Swift Ubuntu Packages `_ * `Swift RDO Packages `_ -------------------- Source Control Setup -------------------- Swift uses `git` for source control. The OpenStack `Developer's Guide `_ describes the steps for setting up Git and all the necessary accounts for contributing code to Swift. ---------------- Changes to Swift ---------------- Once you have the source code and source control set up, you can make your changes to Swift. ------- Testing ------- The :doc:`Development Guidelines ` describes the testing requirements before submitting Swift code. In summary, you can execute tox from the swift home directory (where you checked out the source code): ``tox`` Tox will present tests results. Notice that in the beginning, it is very common to break many coding style guidelines. -------------------------- Proposing changes to Swift -------------------------- The OpenStack `Developer's Guide `_ describes the most common `git` commands that you will need. Following is a list of the commands that you need to know for your first contribution to Swift: To clone a copy of Swift: ``git clone https://github.com/openstack/swift.git`` Under the swift directory, set up the Gerrit repository. 
The following command configures the repository to know about Gerrit and makes the Change-Id commit hook get installed. You only need to do this once: ``git review -s`` To create your development branch (substitute branch_name for a name of your choice: ``git checkout -b `` To check the files that have been updated in your branch: ``git status`` To check the differences between your branch and the repository: ``git diff`` Assuming you have not added new files, you commit all your changes using: ``git commit -a`` Read the `Summary of Git commit message structure `_ for best practices on writing the commit message. When you are ready to send your changes for review use: ``git review`` If successful, Git response message will contain a URL you can use to track your changes. If you need to make further changes to the same review, you can commit them using: ``git commit -a --amend`` This will commit the changes under the same set of changes you issued earlier. Notice that in order to send your latest version for review, you will still need to call: ``git review`` --------------------- Tracking your changes --------------------- After you proposed your changes to Swift, you can track the review in: * ``_ .. _post-rebase-instructions: ------------------------ Post rebase instructions ------------------------ After rebasing, the following steps should be performed to rebuild the swift installation. Note that these commands should be performed from the root of the swift repo directory (e.g. $HOME/swift/): ``sudo python setup.py develop`` ``sudo pip install -r test-requirements.txt`` If using TOX, depending on the changes made during the rebase, you may need to rebuild the TOX environment (generally this will be the case if test-requirements.txt was updated such that a new version of a package is required), this can be accomplished using the '-r' argument to the TOX cli: ``tox -r`` You can include any of the other TOX arguments as well, for example, to run the pep8 suite and rebuild the TOX environment the following can be used: ``tox -r -e pep8`` The rebuild option only needs to be specified once for a particular build (e.g. pep8), that is further invocations of the same build will not require this until the next rebase. --------------- Troubleshooting --------------- You may run into the following errors when starting Swift if you rebase your commit using: ``git rebase`` .. code-block:: python Traceback (most recent call last): File "/usr/local/bin/swift-init", line 5, in from pkg_resources import require File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2749, in working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 446, in _build_master return cls._build_from_requirements(__requires__) File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 459, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 628, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: swift==2.3.1.devXXX (where XXX represents a dev version of Swift). .. 
code-block:: python Traceback (most recent call last): File "/usr/local/bin/swift-proxy-server", line 10, in execfile(__file__) File "/home/swift/swift/bin/swift-proxy-server", line 23, in sys.exit(run_wsgi(conf_file, 'proxy-server', **options)) File "/home/swift/swift/swift/common/wsgi.py", line 888, in run_wsgi loadapp(conf_path, global_conf=global_conf) File "/home/swift/swift/swift/common/wsgi.py", line 390, in loadapp func(PipelineWrapper(ctx)) File "/home/swift/swift/swift/proxy/server.py", line 602, in modify_wsgi_pipeline ctx = pipe.create_filter(filter_name) File "/home/swift/swift/swift/common/wsgi.py", line 329, in create_filter global_conf=self.context.global_conf) File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext global_conf=global_conf) File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 328, in _loadegg return loader.get_context(object_type, name, global_conf) File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 620, in get_context object_type, name=name) File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 659, in find_egg_entry_point for prot in protocol_options] or '(no entry points)')))) LookupError: Entry point 'versioned_writes' not found in egg 'swift' (dir: /home/swift/swift; protocols: paste.filter_factory, paste.filter_app_factory; entry_points: ) This happens because `git rebase` will retrieve code for a different version of Swift in the development stream, but the start scripts under `/usr/local/bin` have not been updated. The solution is to follow the steps described in the :ref:`post-rebase-instructions` section. swift-2.7.1/doc/source/overview_object_versioning.rst0000664000567000056710000000020313024044352024244 0ustar jenkinsjenkins00000000000000Object Versioning ================= .. automodule:: swift.common.middleware.versioned_writes :members: :show-inheritance: swift-2.7.1/doc/source/overview_erasure_code.rst0000664000567000056710000007732513024044354023220 0ustar jenkinsjenkins00000000000000==================== Erasure Code Support ==================== ------------------------------- History and Theory of Operation ------------------------------- There's a lot of good material out there on Erasure Code (EC) theory, this short introduction is just meant to provide some basic context to help the reader better understand the implementation in Swift. Erasure Coding for storage applications grew out of Coding Theory as far back as the 1960s with the Reed-Solomon codes. These codes have been used for years in applications ranging from CDs to DVDs to general communications and, yes, even in the space program starting with Voyager! The basic idea is that some amount of data is broken up into smaller pieces called fragments and coded in such a way that it can be transmitted with the ability to tolerate the loss of some number of the coded fragments. That's where the word "erasure" comes in, if you transmit 14 fragments and only 13 are received then one of them is said to be "erased". The word "erasure" provides an important distinction with EC; it isn't about detecting errors, it's about dealing with failures. Another important element of EC is that the number of erasures that can be tolerated can be adjusted to meet the needs of the application. 
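To make this concrete, here is a toy illustration (not a scheme Swift actually uses) of tolerating a single erasure with nothing more than XOR parity; real codes generalize the same idea to arbitrary numbers of data and parity fragments::

    # Toy single-parity example: split the data in two and keep an XOR
    # parity fragment. Any one of the three fragments can be "erased"
    # and rebuilt from the other two.
    data = bytearray(b'0123456789abcdef')
    frag_a, frag_b = data[:8], data[8:]
    parity = bytearray(a ^ b for a, b in zip(frag_a, frag_b))

    # Pretend frag_b was lost; recompute it from frag_a and the parity.
    rebuilt_b = bytearray(a ^ p for a, p in zip(frag_a, parity))
    assert frag_a + rebuilt_b == data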
At a high level EC works by using a specific scheme to break up a single data buffer into several smaller data buffers then, depending on the scheme, performing some encoding operation on that data in order to generate additional information. So you end up with more data than you started with and that extra data is often called "parity". Note that there are many, many different encoding techniques that vary both in how they organize and manipulate the data as well as by what means they use to calculate parity. For example, one scheme might rely on `Galois Field Arithmetic `_ while others may work with only XOR. The number of variations and details about their differences are well beyond the scope of this introduction, but we will talk more about a few of them when we get into the implementation of EC in Swift. -------------------------------- Overview of EC Support in Swift -------------------------------- First and foremost, from an application perspective EC support is totally transparent. There is no EC-related external API; a container is simply created using a Storage Policy defined to use EC and then interaction with the cluster is the same as any other durability policy. EC is implemented in Swift as a Storage Policy; see :doc:`overview_policies` for complete details on Storage Policies. Because support is implemented as a Storage Policy, all of the storage devices associated with your cluster's EC capability can be isolated. It is entirely possible to share devices between storage policies, but for EC it may make more sense to not only use separate devices but possibly even entire nodes dedicated for EC. Which direction one chooses depends on why the EC policy is being deployed. If, for example, there is a production replication policy in place already and the goal is to add a cold storage tier such that the existing nodes performing replication are impacted as little as possible, adding a new set of nodes dedicated to EC might make the most sense but also incurs the most cost. On the other hand, if EC is being added as a capability to provide additional durability for a specific set of applications and the existing infrastructure is well suited for EC (sufficient number of nodes, zones for the EC scheme that is chosen) then leveraging the existing infrastructure such that the EC ring shares nodes with the replication ring makes the most sense. These are some of the main considerations: * Layout of existing infrastructure. * Cost of adding dedicated EC nodes (or just dedicated EC devices). * Intended usage model(s). The Swift code base does not include any of the algorithms necessary to perform the actual encoding and decoding of data; that is left to external libraries. The Storage Policies architecture is leveraged to enable EC on a per container basis -- the object rings are still used to determine the placement of EC data fragments. Although there are several code paths that are unique to an operation associated with an EC policy, an external dependency to an Erasure Code library is what Swift counts on to perform the low level EC functions. The use of an external library allows for maximum flexibility as there are a significant number of options out there, each with its own pros and cons that can vary greatly from one use case to another.
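Since EC is exposed purely as a storage policy, opting in looks just like using any other policy. As a sketch, assuming an EC policy named ``deepfreeze`` had been configured (the policy name, account and endpoint here are placeholders only), a client would simply create a container with that policy and then use it normally::

    curl -v -X PUT -H 'X-Auth-Token: <token>' \
         -H 'X-Storage-Policy: deepfreeze' \
         http://127.0.0.1:8080/v1/AUTH_test/cold_data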
--------------------------------------- PyECLib: External Erasure Code Library --------------------------------------- PyECLib is a Python Erasure Coding Library originally designed and written as part of the effort to add EC support to the Swift project, however it is an independent project. The library provides a well-defined and simple Python interface and internally implements a plug-in architecture allowing it to take advantage of many well-known C libraries such as: * Jerasure and GFComplete at http://jerasure.org. * Intel(R) ISA-L at http://01.org/intel%C2%AE-storage-acceleration-library-open-source-version. * Or write your own! PyECLib uses a C based library called liberasurecode to implement the plug in infrastructure; liberasure code is available at: * liberasurecode: https://bitbucket.org/tsg-/liberasurecode PyECLib itself therefore allows for not only choice but further extensibility as well. PyECLib also comes with a handy utility to help determine the best algorithm to use based on the equipment that will be used (processors and server configurations may vary in performance per algorithm). More on this will be covered in the configuration section. PyECLib is included as a Swift requirement. For complete details see `PyECLib `_ ------------------------------ Storing and Retrieving Objects ------------------------------ We will discuss the details of how PUT and GET work in the "Under the Hood" section later on. The key point here is that all of the erasure code work goes on behind the scenes; this summary is a high level information overview only. The PUT flow looks like this: #. The proxy server streams in an object and buffers up "a segment" of data (size is configurable). #. The proxy server calls on PyECLib to encode the data into smaller fragments. #. The proxy streams the encoded fragments out to the storage nodes based on ring locations. #. Repeat until the client is done sending data. #. The client is notified of completion when a quorum is met. The GET flow looks like this: #. The proxy server makes simultaneous requests to participating nodes. #. As soon as the proxy has the fragments it needs, it calls on PyECLib to decode the data. #. The proxy streams the decoded data it has back to the client. #. Repeat until the proxy is done sending data back to the client. It may sound like, from this high level overview, that using EC is going to cause an explosion in the number of actual files stored in each node's local file system. Although it is true that more files will be stored (because an object is broken into pieces), the implementation works to minimize this where possible, more details are available in the Under the Hood section. ------------- Handoff Nodes ------------- In EC policies, similarly to replication, handoff nodes are a set of storage nodes used to augment the list of primary nodes responsible for storing an erasure coded object. These handoff nodes are used in the event that one or more of the primaries are unavailable. Handoff nodes are still selected with an attempt to achieve maximum separation of the data being placed. -------------- Reconstruction -------------- For an EC policy, reconstruction is analogous to the process of replication for a replication type policy -- essentially "the reconstructor" replaces "the replicator" for EC policy types. 
The basic framework of reconstruction is very similar to that of replication with a few notable exceptions: * Because EC does not actually replicate partitions, it needs to operate at a finer granularity than what is provided with rsync, therefore EC leverages much of ssync behind the scenes (you do not need to manually configure ssync). * Once a pair of nodes has determined the need to replace a missing object fragment, instead of pushing over a copy like replication would do, the reconstructor has to read in enough surviving fragments from other nodes and perform a local reconstruction before it has the correct data to push to the other node. * A reconstructor does not talk to all other reconstructors in the set of nodes responsible for an EC partition, this would be far too chatty, instead each reconstructor is responsible for sync'ing with the partition's closest two neighbors (closest meaning left and right on the ring). .. note:: EC work (encode and decode) takes place both on the proxy nodes, for PUT/GET operations, as well as on the storage nodes for reconstruction. As with replication, reconstruction can be the result of rebalancing, bit-rot, drive failure or reverting data from a hand-off node back to its primary. -------------------------- Performance Considerations -------------------------- In general, EC has different performance characteristics than replicated data. EC requires substantially more CPU to read and write data, and is more suited for larger objects that are not frequently accessed (eg backups). Operators are encouraged to characterize the performance of various EC schemes and share their observations with the developer community. ---------------------------- Using an Erasure Code Policy ---------------------------- To use an EC policy, the administrator simply needs to define an EC policy in `swift.conf` and create/configure the associated object ring. An example of how an EC policy can be setup is shown below:: [storage-policy:2] name = ec104 policy_type = erasure_coding ec_type = liberasurecode_rs_vand ec_num_data_fragments = 10 ec_num_parity_fragments = 4 ec_object_segment_size = 1048576 Let's take a closer look at each configuration parameter: * ``name``: This is a standard storage policy parameter. See :doc:`overview_policies` for details. * ``policy_type``: Set this to ``erasure_coding`` to indicate that this is an EC policy. * ``ec_type``: Set this value according to the available options in the selected PyECLib back-end. This specifies the EC scheme that is to be used. For example the option shown here selects Vandermonde Reed-Solomon encoding while an option of ``flat_xor_hd_3`` would select Flat-XOR based HD combination codes. See the `PyECLib `_ page for full details. * ``ec_num_data_fragments``: The total number of fragments that will be comprised of data. * ``ec_num_parity_fragments``: The total number of fragments that will be comprised of parity. * ``ec_object_segment_size``: The amount of data that will be buffered up before feeding a segment into the encoder/decoder. The default value is 1048576. When PyECLib encodes an object, it will break it into N fragments. However, what is important during configuration, is how many of those are data and how many are parity. So in the example above, PyECLib will actually break an object in 14 different fragments, 10 of them will be made up of actual object data and 4 of them will be made of parity data (calculations depending on ec_type). 
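The same parameters can be exercised directly against PyECLib, which can be a useful sanity check when evaluating a scheme. A rough sketch mirroring the example policy above (which ``ec_type`` values are actually available depends on the back-end libraries installed)::

    from pyeclib.ec_iface import ECDriver

    # 10 data + 4 parity fragments, as in the ec104 policy above.
    driver = ECDriver(k=10, m=4, ec_type='liberasurecode_rs_vand')

    segment = b'x' * 1048576             # one segment's worth of object data
    fragments = driver.encode(segment)   # returns 14 fragments

    # Any 10 of the 14 fragments are sufficient to rebuild the segment.
    assert driver.decode(fragments[4:]) == segment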
When deciding which devices to use in the EC policy's object ring, be sure to carefully consider the performance impacts. Running some performance benchmarking in a test environment for your configuration is highly recommended before deployment. To create the EC policy's object ring, the only difference in the usage of the ``swift-ring-builder create`` command is the ``replicas`` parameter. The ``replicas`` value is the number of fragments spread across the object servers associated with the ring; ``replicas`` must be equal to the sum of ``ec_num_data_fragments`` and ``ec_num_parity_fragments``. For example:: swift-ring-builder object-1.builder create 10 14 1 Note that in this example the ``replicas`` value of 14 is based on the sum of 10 EC data fragments and 4 EC parity fragments. Once you have configured your EC policy in `swift.conf` and created your object ring, your application is ready to start using EC simply by creating a container with the specified policy name and interacting as usual. .. note:: It's important to note that once you have deployed a policy and have created objects with that policy, these configuration options cannot be changed. In case a change in the configuration is desired, you must create a new policy and migrate the data to a new container. Migrating Between Policies -------------------------- A common usage of EC is to migrate less commonly accessed data from a more expensive but lower latency policy such as replication. When an application determines that it wants to move data from a replication policy to an EC policy, it simply needs to move the data from the replicated container to an EC container that was created with the target durability policy. Region Support -------------- For at least the initial version of EC, it is not recommended that an EC scheme span beyond a single region; neither performance nor functional validation has been done in such a configuration. -------------- Under the Hood -------------- Now that we've explained a little about EC support in Swift and how to configure/use it, let's explore how EC fits in at the nuts-n-bolts level. Terminology ----------- The term 'fragment' has been used already to describe the output of the EC process (a series of fragments), however we need to define some other key terms here before going any deeper. Without paying special attention to using the correct terms consistently, it is very easy to get confused in a hurry! * **chunk**: HTTP chunks received over the wire (term not used to describe any EC specific operation). * **segment**: Not to be confused with SLO/DLO use of the word, in EC we call a segment a series of consecutive HTTP chunks buffered up before performing an EC operation. * **fragment**: Data and parity 'fragments' are generated when erasure coding transformation is applied to a segment. * **EC archive**: A concatenation of EC fragments; to a storage node this looks like an object. * **ec_ndata**: Number of EC data fragments. * **ec_nparity**: Number of EC parity fragments. Middleware ---------- Middleware remains unchanged. For most middleware (e.g., SLO/DLO) the fact that the proxy is fragmenting incoming objects is transparent. For list endpoints, however, it is a bit different. A caller of list endpoints will get back the locations of all of the fragments. The caller will be unable to re-assemble the original object with this information, however the node locations may still prove to be useful information for some applications.
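Before moving on to the on-disk layout, a rough back-of-the-envelope sketch of how these terms relate in practice may help; the numbers below simply assume the 10+4 example policy with a 1MiB segment size and are purely illustrative::

    import math

    segment_size = 1048576          # ec_object_segment_size
    ndata, nparity = 10, 4          # ec_num_data_fragments / ec_num_parity_fragments
    object_size = 25 * 1024 * 1024  # an arbitrary 25 MiB object

    num_segments = int(math.ceil(object_size / float(segment_size)))  # 25
    fragments_per_segment = ndata + nparity                           # 14

    # Fragment i of every segment is appended to fragment archive i, so the
    # object ends up stored as 14 EC archives, each roughly object_size / ndata
    # bytes (plus a small per-fragment PyECLib header).
    approx_archive_size = object_size // ndata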
On Disk Storage --------------- EC archives are stored on disk in their respective objects-N directory based on their policy index. See :doc:`overview_policies` for details on per policy directory information. The actual names on disk of EC archives also have one additional piece of data encoded in the filename, the fragment archive index. Each storage policy now must include a transformation function that diskfile will use to build the filename to store on disk. The functions are implemented in the diskfile module as policy specific sub classes of ``DiskFileManager``. This is required for a few reasons. For one, it allows us to store fragment archives of different indexes on the same storage node which is not typical however it is possible in many circumstances. Without unique filenames for the different EC archive files in a set, we would be at risk of overwriting one archive of index n with another of index m in some scenarios. The transformation function for the replication policy is simply a NOP. For reconstruction, the index is appended to the filename just before the .data extension. An example filename for a fragment archive storing the 5th fragment would look like this:: 1418673556.92690#5.data An additional file is also included for Erasure Code policies called the ``.durable`` file. Its meaning will be covered in detail later, however, its on-disk format does not require the name transformation function that was just covered. The .durable for the example above would simply look like this:: 1418673556.92690.durable And it would be found alongside every fragment specific .data file following a 100% successful PUT operation. Proxy Server ------------ High Level ========== The Proxy Server handles Erasure Coding in a different manner than replication, therefore there are several code paths unique to EC policies either through sub classing or simple conditionals. Taking a closer look at the PUT and the GET paths will help make this clearer. But first, a high level overview of how an object flows through the system: .. image:: images/ec_overview.png Note how: * Incoming objects are buffered into segments at the proxy. * Segments are erasure coded into fragments at the proxy. * The proxy stripes fragments across participating nodes such that the on-disk stored files that we call a fragment archive is appended with each new fragment. This scheme makes it possible to minimize the number of on-disk files given our segmenting and fragmenting. Multi_Phase Conversation ======================== Multi-part MIME document support is used to allow the proxy to engage in a handshake conversation with the storage node for processing PUT requests. This is required for a few different reasons. #. From the perspective of the storage node, a fragment archive is really just another object, so we need a mechanism to send down the original object etag after all fragment archives have landed. #. Without introducing strong consistency semantics, the proxy needs a mechanism to know when a quorum of fragment archives have actually made it to disk before it can inform the client of a successful PUT. MIME supports a conversation between the proxy and the storage nodes for every PUT. This provides us with the ability to handle a PUT in one connection and assure that we have the essence of a 2 phase commit, basically having the proxy communicate back to the storage nodes once it has confirmation that a quorum of fragment archives in the set have been written.
For the first phase of the conversation the proxy requires a quorum of `ec_ndata + 1` fragment archives to be successfully put to storage nodes. This ensures that the object could still be reconstructed even if one of the fragment archives becomes unavailable. During the second phase of the conversation the proxy communicates a confirmation to storage nodes that the fragment archive quorum has been achieved. This causes the storage node to create a `ts.durable` file at timestamp `ts` which acts as an indicator of the last known durable set of fragment archives for a given object. The presence of a `ts.durable` file means, to the object server, `there is a set of ts.data files that are durable at timestamp ts`. For the second phase of the conversation the proxy requires a quorum of `ec_ndata + 1` successful commits on storage nodes. This ensures that there are sufficient committed fragment archives for the object to be reconstructed even if one becomes unavailable. The reconstructor ensures that `.durable` files are replicated on storage nodes where they may be missing. Note that the completion of the commit phase of the conversation is also a signal for the object server to go ahead and immediately delete older timestamp files for this object. This is critical as we do not want to delete the older object until the storage node has confirmation from the proxy, via the multi-phase conversation, that the other nodes have landed enough for a quorum. The basic flow looks like this: * The Proxy Server erasure codes and streams the object fragments (ec_ndata + ec_nparity) to the storage nodes. * The storage nodes store objects as EC archives and upon finishing object data/metadata write, send a 1st-phase response to proxy. * Upon quorum of storage nodes responses, the proxy initiates 2nd-phase by sending commit confirmations to object servers. * Upon receipt of commit message, object servers store a 0-byte data file as `.durable` indicating successful PUT, and send a final response to the proxy server. * The proxy waits for `ec_ndata + 1` object servers to respond with a success (2xx) status before responding to the client with a successful status. Here is a high level example of what the conversation looks like:: proxy: PUT /p/a/c/o Transfer-Encoding': 'chunked' Expect': '100-continue' X-Backend-Obj-Multiphase-Commit: yes obj: 100 Continue X-Obj-Multiphase-Commit: yes proxy: --MIMEboundary X-Document: object body --MIMEboundary X-Document: object metadata Content-MD5: --MIMEboundary obj: 100 Continue proxy: X-Document: put commit commit_confirmation --MIMEboundary-- obj: 20x =2 2xx responses> proxy: 2xx -> client A few key points on the .durable file: * The .durable file means \"the matching .data file for this has sufficient fragment archives somewhere, committed, to reconstruct the object\". * The Proxy Server will never have knowledge, either on GET or HEAD, of the existence of a .data file on an object server if it does not have a matching .durable file. * The object server will never return a .data that does not have a matching .durable. * When a proxy does a GET, it will only receive fragment archives that have enough present somewhere to be reconstructed. Partial PUT Failures ==================== A partial PUT failure has a few different modes. In one scenario the Proxy Server is alive through the entire PUT conversation. This is a very straightforward case. 
The client will receive a good response if and only if a quorum of fragment archives were successfully landed on their storage nodes. In this case the Reconstructor will discover the missing fragment archives, perform a reconstruction and deliver fragment archives and their matching .durable files to the nodes. The more interesting case is what happens if the proxy dies in the middle of a conversation. If it turns out that a quorum had been met and the commit phase of the conversation finished, its as simple as the previous case in that the reconstructor will repair things. However, if the commit didn't get a chance to happen then some number of the storage nodes have .data files on them (fragment archives) but none of them knows whether there are enough elsewhere for the entire object to be reconstructed. In this case the client will not have received a 2xx response so there is no issue there, however, it is left to the storage nodes to clean up the stale fragment archives. Work is ongoing in this area to enable the proxy to play a role in reviving these fragment archives, however, for the current release, a proxy failure after the start of a conversation but before the commit message will simply result in a PUT failure. GET === The GET for EC is different enough from replication that subclassing the `BaseObjectController` to the `ECObjectController` enables an efficient way to implement the high level steps described earlier: #. The proxy server makes simultaneous requests to participating nodes. #. As soon as the proxy has the fragments it needs, it calls on PyECLib to decode the data. #. The proxy streams the decoded data it has back to the client. #. Repeat until the proxy is done sending data back to the client. The GET path will attempt to contact all nodes participating in the EC scheme, if not enough primaries respond then handoffs will be contacted just as with replication. Etag and content length headers are updated for the client response following reconstruction as the individual fragment archives metadata is valid only for that fragment archive. Object Server ------------- The Object Server, like the Proxy Server, supports MIME conversations as described in the proxy section earlier. This includes processing of the commit message and decoding various sections of the MIME document to extract the footer which includes things like the entire object etag. DiskFile ======== Erasure code uses subclassed ``ECDiskFile``, ``ECDiskFileWriter``, ``ECDiskFileReader`` and ``ECDiskFileManager`` to implement EC specific handling of on disk files. This includes things like file name manipulation to include the fragment index in the filename, determination of valid .data files based on .durable presence, construction of EC specific hashes.pkl file to include fragment index information, etc., etc. Metadata -------- There are few different categories of metadata that are associated with EC: System Metadata: EC has a set of object level system metadata that it attaches to each of the EC archives. The metadata is for internal use only: * ``X-Object-Sysmeta-EC-Etag``: The Etag of the original object. * ``X-Object-Sysmeta-EC-Content-Length``: The content length of the original object. * ``X-Object-Sysmeta-EC-Frag-Index``: The fragment index for the object. * ``X-Object-Sysmeta-EC-Scheme``: Description of the EC policy used to encode the object. * ``X-Object-Sysmeta-EC-Segment-Size``: The segment size used for the object. 
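Purely as an illustration, the system metadata carried by one fragment archive of an object stored under the 10+4 example policy might look something like the following (all values here are made up)::

    X-Object-Sysmeta-EC-Etag: 65f7b1e6ac3c16e3b1a4e4d3a1b2c3d4
    X-Object-Sysmeta-EC-Content-Length: 26214400
    X-Object-Sysmeta-EC-Frag-Index: 5
    X-Object-Sysmeta-EC-Scheme: liberasurecode_rs_vand 10+4
    X-Object-Sysmeta-EC-Segment-Size: 1048576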
User Metadata: User metadata is unaffected by EC, however, a full copy of the user metadata is stored with every EC archive. This is required as the reconstructor needs this information and each reconstructor only communicates with its closest neighbors on the ring. PyECLib Metadata: PyECLib stores a small amount of metadata on a per fragment basis. This metadata is not documented here as it is opaque to Swift. Database Updates ---------------- As account and container rings are not associated with a Storage Policy, there is no change to how these database updates occur when using an EC policy. The Reconstructor ----------------- The Reconstructor performs analogous functions to the replicator: #. Recovery from disk drive failure. #. Moving data around because of a rebalance. #. Reverting data back to a primary from a handoff. #. Recovering fragment archives from bit rot discovered by the auditor. However, under the hood it operates quite differently. The following are some of the key elements in understanding how the reconstructor operates. Unlike the replicator, the work that the reconstructor does is not always as easy to break down into the 2 basic tasks of synchronize or revert (move data from handoff back to primary) because of the fact that one storage node can house fragment archives of various indexes and each index really /"belongs/" to a different node. So, whereas when the replicator is reverting data from a handoff it has just one node to send its data to, the reconstructor can have several. Additionally, its not always the case that the processing of a particular suffix directory means one or the other for the entire directory (as it does for replication). The scenarios that create these mixed situations can be pretty complex so we will just focus on what the reconstructor does here and not a detailed explanation of why. Job Construction and Processing =============================== Because of the nature of the work it has to do as described above, the reconstructor builds jobs for a single job processor. The job itself contains all of the information needed for the processor to execute the job which may be a synchronization or a data reversion and there may be a mix of jobs that perform both of these operations on the same suffix directory. Jobs are constructed on a per partition basis and then per fragment index basis. That is, there will be one job for every fragment index in a partition. Performing this construction \"up front\" like this helps minimize the interaction between nodes collecting hashes.pkl information. Once a set of jobs for a partition has been constructed, those jobs are sent off to threads for execution. The single job processor then performs the necessary actions working closely with ssync to carry out its instructions. For data reversion, the actual objects themselves are cleaned up via the ssync module and once that partition's set of jobs is complete, the reconstructor will attempt to remove the relevant directory structures. The scenarios that job construction has to take into account include: #. A partition directory with all fragment indexes matching the local node index. This is the case where everything is where it belongs and we just need to compare hashes and sync if needed, here we sync with our partners. #. A partition directory with one local fragment index and mix of others. Here we need to sync with our partners where fragment indexes matches the local_id, all others are sync'd with their home nodes and then deleted. #. 
A partition directory with no local fragment index and just one or more of others. Here we sync with just the home nodes for the fragment indexes that we have and then all the local archives are deleted. This is the basic handoff reversion case. .. note:: A \"home node\" is the node where the fragment index encoded in the fragment archive's filename matches the node index of a node in the primary partition list. Node Communication ================== The replicators talk to all nodes who have a copy of their object, typically just 2 other nodes. For EC, having each reconstructor node talk to all nodes would incur a large amount of overhead as there will typically be a much larger number of nodes participating in the EC scheme. Therefore, the reconstructor is built to talk to its adjacent nodes on the ring only. These nodes are typically referred to as partners. Reconstruction ============== Reconstruction can be thought of sort of like replication but with an extra step in the middle. The reconstructor is hard-wired to use ssync to determine what is missing and desired by the other side. However, before an object is sent over the wire it needs to be reconstructed from the remaining fragments as the local fragment is just that - a different fragment index than what the other end is asking for. Thus, there are hooks in ssync for EC based policies. One case would be for basic reconstruction which, at a high level, looks like this: * Determine which nodes need to be contacted to collect other EC archives needed to perform reconstruction. * Update the etag and fragment index metadata elements of the newly constructed fragment archive. * Establish a connection to the target nodes and give ssync a DiskFileLike class that it can stream data from. The reader in this class gathers fragments from the nodes and uses PyECLib to reconstruct each segment before yielding data back to ssync. Essentially what this means is that data is buffered, in memory, on a per segment basis at the node performing reconstruction and each segment is dynamically reconstructed and delivered to `ssync_sender` where the `send_put()` method will ship them on over. The sender is then responsible for deleting the objects as they are sent in the case of data reversion. The Auditor ----------- Because the auditor already operates on a per storage policy basis, there are no specific auditor changes associated with EC. Each EC archive looks like, and is treated like, a regular object from the perspective of the auditor. Therefore, if the auditor finds bit-rot in an EC archive, it simply quarantines it and the reconstructor will take care of the rest just as the replicator does for replication policies. swift-2.7.1/doc/source/api/0000775000567000056710000000000013024044470016632 5ustar jenkinsjenkins00000000000000swift-2.7.1/doc/source/api/container_quotas.rst0000664000567000056710000000230313024044352022737 0ustar jenkinsjenkins00000000000000================ Container quotas ================ You can set quotas on the size and number of objects stored in a container by setting the following metadata: - ``X-Container-Meta-Quota-Bytes``. The size, in bytes, of objects that can be stored in a container. - ``X-Container-Meta-Quota-Count``. The number of objects that can be stored in a container. When you exceed a container quota, subsequent requests to create objects fail with a 413 Request Entity Too Large error. The Object Storage system uses an eventual consistency model. 
When you create a new object, the container size and object count might not be immediately updated. Consequently, you might be allowed to create objects even though you have actually exceeded the quota. At some later time, the system updates the container size and object count to the actual values. At this time, subsequent requests fail. In addition, if you are currently under the ``X-Container-Meta-Quota-Bytes`` limit and a request uses chunked transfer encoding, the system cannot know if the request will exceed the quota, so the system allows the request. However, once the quota is exceeded, any subsequent uploads that use chunked transfer encoding fail. swift-2.7.1/doc/source/api/form_post_middleware.rst0000664000567000056710000001505413024044352023575 0ustar jenkinsjenkins00000000000000==================== Form POST middleware ==================== To discover whether your Object Storage system supports this feature, check with your service provider or send a **GET** request using the :file:`/info` path. You can upload objects directly to the Object Storage system from a browser by using the form **POST** middleware. This middleware uses account or container secret keys to generate a cryptographic signature for the request. This means that you do not need to send an authentication token in the ``X-Auth-Token`` header to perform the request. The form **POST** middleware uses the same secret keys as the temporary URL middleware uses. For information about how to set these keys, see :ref:`secret_keys`. For information about the form **POST** middleware configuration options, see :ref:`formpost` in the *Source Documentation*. Form POST format ~~~~~~~~~~~~~~~~ To upload objects to a cluster, you can use an HTML form **POST** request. The format of the form **POST** request is: **Example 1.14. Form POST format** .. code::
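    <!-- Example form layout only; the attribute names and placeholder
         values follow the descriptions given below. -->
    <form action="SWIFT_URL"
          method="POST"
          enctype="multipart/form-data">
      <input type="hidden" name="redirect" value="REDIRECT_URL"/>
      <input type="hidden" name="max_file_size" value="BYTES"/>
      <input type="hidden" name="max_file_count" value="COUNT"/>
      <input type="hidden" name="expires" value="UNIX_TIMESTAMP"/>
      <input type="hidden" name="signature" value="HMAC"/>
      <input type="file" name="FILE_NAME"/>
      <br/>
      <input type="submit"/>
    </form>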
]]> **action="SWIFT_URL"** Set to full URL where the objects are to be uploaded. The names of uploaded files are appended to the specified *SWIFT_URL*. So, you can upload directly to the root of a container with a URL like: .. code:: https://swift-cluster.example.com/v1/my_account/container/ Optionally, you can include an object prefix to separate uploads, such as: .. code:: https://swift-cluster.example.com/v1/my_account/container/OBJECT_PREFIX **method="POST"** Must be ``POST``. **enctype="multipart/form-data"** Must be ``multipart/form-data``. **name="redirect" value="REDIRECT_URL"** Redirects the browser to the *REDIRECT_URL* after the upload completes. The URL has status and message query parameters added to it, which specify the HTTP status code for the upload and an optional error message. The 2\ *nn* status code indicates success. The *REDIRECT_URL* can be an empty string. If so, the ``Location`` response header is not set. **name="max\_file\_size" value="BYTES"** Required. Indicates the size, in bytes, of the maximum single file upload. **name="max\_file\_count" value= "COUNT"** Required. Indicates the maximum number of files that can be uploaded with the form. **name="expires" value="UNIX_TIMESTAMP"** The UNIX timestamp that specifies the time before which the form must be submitted before it becomes no longer valid. **name="signature" value="HMAC"** The HMAC-SHA1 signature of the form. **type="file" name="FILE_NAME"** File name of the file to be uploaded. You can include from one to the ``max_file_count`` value of files. The file attributes must appear after the other attributes to be processed correctly. If attributes appear after the file attributes, they are not sent with the sub-request because all attributes in the file cannot be parsed on the server side unless the whole file is read into memory; the server does not have enough memory to service these requests. Attributes that follow the file attributes are ignored. Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input: .. code:: **type= "submit"** Must be ``submit``. HMAC-SHA1 signature for form POST ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Form **POST** middleware uses an HMAC-SHA1 cryptographic signature. This signature includes these elements from the form: - The path. Starting with ``/v1/`` onwards and including a container name and, optionally, an object prefix. In `Example 1.15`, “HMAC-SHA1 signature for form POST” the path is ``/v1/my_account/container/object_prefix``. Do not URL-encode the path at this stage. - A redirect URL. If there is no redirect URL, use the empty string. - Maximum file size. In `Example 1.15`, “HMAC-SHA1 signature for form POST” the ``max_file_size`` is ``104857600`` bytes. - The maximum number of objects to upload. In `Example 1.15`, “HMAC-SHA1 signature for form POST” ``max_file_count`` is ``10``. - Expiry time. In `Example 1.15, “HMAC-SHA1 signature for form POST” the expiry time is set to ``600`` seconds into the future. - The secret key. Set as the ``X-Account-Meta-Temp-URL-Key`` header value for accounts or ``X-Container-Meta-Temp-URL-Key`` header value for containers. See :ref:`secret_keys` for more information. The following example code generates a signature for use with form **POST**: **Example 1.15. HMAC-SHA1 signature for form POST** .. 
code:: import hmac from hashlib import sha1 from time import time path = '/v1/my_account/container/object_prefix' redirect = 'https://myserver.com/some-page' max_file_size = 104857600 max_file_count = 10 expires = int(time() + 600) key = 'MYKEY' hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size, max_file_count, expires) signature = hmac.new(key, hmac_body, sha1).hexdigest() For more information, see `RFC 2104: HMAC: Keyed-Hashing for Message Authentication `__. Form POST example ~~~~~~~~~~~~~~~~~ The following example shows how to submit a form by using a cURL command. In this example, the object prefix is ``photos/`` and the file being uploaded is called ``flower.jpg``. This example uses the **swift-form-signature** script to compute the ``expires`` and ``signature`` values. .. code:: $ bin/swift-form-signature /v1/my_account/container/photos/ https://example.com/done.html 5373952000 1 200 MYKEY Expires: 1390825338 Signature: 35129416ebda2f1a21b3c2b8939850dfc63d8f43 .. code:: $ curl -i https://swift-cluster.example.com/v1/my_account/container/photos/ -X POST \ -F max_file_size=5373952000 -F max_file_count=1 -F expires=1390825338 \ -F signature=35129416ebda2f1a21b3c2b8939850dfc63d8f43 \ -F redirect=https://example.com/done.html \ -F file=@flower.jpg swift-2.7.1/doc/source/api/object_versioning.rst0000664000567000056710000001354113024044354023102 0ustar jenkinsjenkins00000000000000================= Object versioning ================= You can store multiple versions of your content so that you can recover from unintended overwrites. Object versioning is an easy way to implement version control, which you can use with any type of content. Note ~~~~ You cannot version a large-object manifest file, but the large-object manifest file can point to versioned segments. It is strongly recommended that you put non-current objects in a different container than the container where current object versions reside. To enable object versioning, the cloud provider sets the ``allow_versions`` option to ``TRUE`` in the container configuration file. The ``X-Versions-Location`` header defines the container that holds the non-current versions of your objects. You must UTF-8-encode and then URL-encode the container name before you include it in the ``X-Versions-Location`` header. This header enables object versioning for all objects in the container. With a comparable ``archive`` container in place, changes to objects in the ``current`` container automatically create non-current versions in the ``archive`` container. Here's an example: #. Create the ``current`` container: .. code:: # curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: archive" .. code:: HTTP/1.1 201 Created Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: txb91810fb717347d09eec8-0052e18997 Date: Thu, 23 Jan 2014 21:28:55 GMT #. Create the first version of an object in the ``current`` container: .. code:: # curl -i $publicURL/current/my_object --data-binary 1 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" .. code:: HTTP/1.1 201 Created Last-Modified: Thu, 23 Jan 2014 21:31:22 GMT Content-Length: 0 Etag: d41d8cd98f00b204e9800998ecf8427e Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx5992d536a4bd4fec973aa-0052e18a2a Date: Thu, 23 Jan 2014 21:31:22 GMT Nothing is written to the non-current version container when you initially **PUT** an object in the ``current`` container. 
However, subsequent **PUT** requests that edit an object trigger the creation of a version of that object in the ``archive`` container. These non-current versions are named as follows: .. code::

   <length><object_name>/<timestamp>

Where ``<length>`` is the 3-character, zero-padded hexadecimal character length of the object name, ``<object_name>`` is the object name, and ``<timestamp>`` is the time when the object was initially created as a current version. #. Create a second version of the object in the ``current`` container: .. code:: # curl -i $publicURL/current/my_object --data-binary 2 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" .. code:: HTTP/1.1 201 Created Last-Modified: Thu, 23 Jan 2014 21:41:32 GMT Content-Length: 0 Etag: d41d8cd98f00b204e9800998ecf8427e Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx468287ce4fc94eada96ec-0052e18c8c Date: Thu, 23 Jan 2014 21:41:32 GMT #. Issue a **GET** request to a versioned object to get the current version of the object. You do not have to do any request redirects or metadata lookups. List older versions of the object in the ``archive`` container: .. code:: # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token" .. code:: HTTP/1.1 200 OK Content-Length: 30 X-Container-Object-Count: 1 Accept-Ranges: bytes X-Timestamp: 1390513280.79684 X-Container-Bytes-Used: 0 Content-Type: text/plain; charset=utf-8 X-Trans-Id: tx9a441884997542d3a5868-0052e18d8e Date: Thu, 23 Jan 2014 21:45:50 GMT 009my_object/1390512682.92052 Note ~~~~ A **POST** request to a versioned object updates only the metadata for the object and does not create a new version of the object. New versions are created only when the content of the object changes. #. Issue a **DELETE** request to a versioned object to remove the current version of the object and replace it with the next-most current version in the non-current container. .. code:: # curl -i $publicURL/current/my_object -X DELETE -H "X-Auth-Token: $token" .. code:: HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx006d944e02494e229b8ee-0052e18edd Date: Thu, 23 Jan 2014 21:51:25 GMT List objects in the ``archive`` container to show that the archived object was moved back to the ``current`` container: .. code:: # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token" .. code:: HTTP/1.1 204 No Content Content-Length: 0 X-Container-Object-Count: 0 Accept-Ranges: bytes X-Timestamp: 1390513280.79684 X-Container-Bytes-Used: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx044f2a05f56f4997af737-0052e18eed Date: Thu, 23 Jan 2014 21:51:41 GMT This next-most current version carries with it any metadata last set on it. If you want to completely remove an object and you have five versions of it, you must **DELETE** it five times. #. To disable object versioning for the ``current`` container, remove its ``X-Versions-Location`` metadata header by sending an empty key value. .. code:: # curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: " .. code:: HTTP/1.1 202 Accepted Content-Length: 76 Content-Type: text/html; charset=UTF-8 X-Trans-Id: txe2476de217134549996d0-0052e19038 Date: Thu, 23 Jan 2014 21:57:12 GMT
   <html>
    <h1>Accepted</h1>
    <p>The request is accepted for processing.</p>
   </html>
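The following short sketch illustrates how the non-current version names described above are formed. It is illustrative only; the ``archive_name`` helper and its arguments are not part of Swift:

.. code::

    def archive_name(object_name, put_timestamp):
        # <length> is the 3-character, zero-padded hexadecimal
        # length of the object name.
        return '%03x%s/%s' % (len(object_name), object_name, put_timestamp)

    # archive_name('my_object', '1390512682.92052')
    # returns '009my_object/1390512682.92052', matching the listing above.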
swift-2.7.1/doc/source/api/object_api_v1_overview.rst0000664000567000056710000001642713024044354024032 0ustar jenkinsjenkins00000000000000Object Storage API overview --------------------------- OpenStack Object Storage is a highly available, distributed, eventually consistent object/blob store. You create, modify, and get objects and metadata by using the Object Storage API, which is implemented as a set of Representational State Transfer (REST) web services. For an introduction to OpenStack Object Storage, see `Object Storage ` in the *OpenStack Cloud Administrator Guide*. You use the HTTPS (SSL) protocol to interact with Object Storage, and you use standard HTTP calls to perform API operations. You can also use language-specific APIs, which use the RESTful API, that make it easier for you to integrate into your applications. To assert your right to access and change data in an account, you identify yourself to Object Storage by using an authentication token. To get a token, you present your credentials to an authentication service. The authentication service returns a token and the URL for the account. Depending on which authentication service that you use, the URL for the account appears in: - **OpenStack Identity Service**. The URL is defined in the service catalog. - **Tempauth**. The URL is provided in the ``X-Storage-Url`` response header. In both cases, the URL is the full URL and includes the account resource. The Object Storage API supports the standard, non-serialized response format, which is the default, and both JSON and XML serialized response formats. The Object Storage system organizes data in a hierarchy, as follows: - **Account**. Represents the top-level of the hierarchy. Your service provider creates your account and you own all resources in that account. The account defines a namespace for containers. A container might have the same name in two different accounts. In the OpenStack environment, *account* is synonymous with a project or tenant. - **Container**. Defines a namespace for objects. An object with the same name in two different containers represents two different objects. You can create any number of containers within an account. In addition to containing objects, you can also use the container to control access to objects by using an access control list (ACL). You cannot store an ACL with individual objects. In addition, you configure and control many other features, such as object versioning, at the container level. You can bulk-delete up to 10,000 containers in a single request. You can set a storage policy on a container with predefined names and definitions from your cloud provider. - **Object**. Stores data content, such as documents, images, and so on. You can also store custom metadata with an object. With the Object Storage API, you can: - Store an unlimited number of objects. Each object can be as large as 5 GB, which is the default. You can configure the maximum object size. - Upload and store objects of any size with large object creation. - Use cross-origin resource sharing to manage object security. - Compress files using content-encoding metadata. - Override browser behavior for an object using content-disposition metadata. - Schedule objects for deletion. - Bulk-delete up to 10,000 objects in a single request. - Auto-extract archive files. - Generate a URL that provides time-limited **GET** access to an object. 
- Upload objects directly to the Object Storage system from a browser by using form **POST** middleware The account, container, and object hierarchy affects the way you interact with the Object Storage API. Specifically, the resource path reflects this structure and has this format: .. code:: /v1/{account}/{container}/{object} For example, for the ``flowers/rose.jpg`` object in the ``images`` container in the ``12345678912345`` account, the resource path is: .. code:: /v1/12345678912345/images/flowers/rose.jpg Notice that the object name contains the ``/`` character. This slash does not indicate that Object Storage has a sub-hierarchy called ``flowers`` because containers do not store objects in actual sub-folders. However, the inclusion of ``/`` or a similar convention inside object names enables you to create pseudo-hierarchical folders and directories. For example, if the endpoint for Object Storage is ``objects.mycloud.com``, the returned URL is ``https://objects.mycloud.com/v1/12345678912345``. To access a container, append the container name to the resource path. To access an object, append the container and the object name to the path. If you have a large number of containers or objects, you can use query parameters to page through large lists of containers or objects. Use the *``marker``*, *``limit``*, and *``end_marker``* query parameters to control how many items are returned in a list and where the list starts or ends. If you want to page through in reverse order, you can use the query parameter *``reverse``*, noting that your marker and end_markers should be switched when applied to a reverse listing. I.e, for a list of objects ``[a, b, c, d, e]`` the non-reversed could be: .. code:: /v1/{account}/{container}/?marker=a&end_marker=d b c However, when reversed marker and end_marker are applied to a reversed list: .. code:: /v1/{account}/{container}/?marker=d&end_marker=a&reverse=on c b Object Storage HTTP requests have the following default constraints. Your service provider might use different default values. ============================ ============= ===== Item Maximum value Notes ============================ ============= ===== Number of HTTP headers 90 Length of HTTP headers 4096 bytes Length per HTTP request line 8192 bytes Length of HTTP request 5 GB Length of container names 256 bytes Cannot contain the ``/`` character. Length of object names 1024 bytes By default, there are no character restrictions. ============================ ============= ===== You must UTF-8-encode and then URL-encode container and object names before you call the API binding. If you use an API binding that performs the URL-encoding for you, do not URL-encode the names before you call the API binding. Otherwise, you double-encode these names. Check the length restrictions against the URL-encoded string. The API Reference describes the operations that you can perform with the Object Storage API: - `Storage accounts `__: Use to perform account-level tasks. Lists containers for a specified account. Creates, updates, and deletes account metadata. Shows account metadata. - `Storage containers `__: Use to perform container-level tasks. Lists objects in a specified container. Creates, shows details for, and deletes containers. Creates, updates, shows, and deletes container metadata. - `Storage objects `__: Use to perform object-level tasks. Creates, replaces, shows details for, and deletes objects. Copies objects with another object with a new or different name. Updates object metadata. 
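As an illustration of the listing parameters described earlier (``marker``, ``limit``, and ``end_marker``), a paged listing of the ``images`` container used above might look like the following; the values are illustrative only:

.. code::

    GET /v1/12345678912345/images?marker=flowers/rose.jpg&limit=100 HTTP/1.1
    Host: objects.mycloud.com
    X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb

The response lists at most 100 object names that sort after ``flowers/rose.jpg``. To page through the full listing, repeat the request with the last returned name as the new ``marker`` until an empty list is returned.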
swift-2.7.1/doc/source/api/use_content-encoding_metadata.rst0000664000567000056710000000140213024044352025332 0ustar jenkinsjenkins00000000000000============================= Use Content-Encoding metadata ============================= When you create an object or update its metadata, you can optionally set the ``Content-Encoding`` metadata. This metadata enables you to indicate that the object content is compressed without losing the identity of the underlying media type (``Content-Type``) of the file, such as a video. **Example Content-Encoding header request: HTTP** This example sets the ``Content-Encoding`` header to ``gzip`` to indicate that the object content is compressed, while the ``Content-Type`` header still identifies the underlying media type as a video: .. code:: PUT /{api_version}/{account}/{container}/{object} HTTP/1.1 Host: storage.clouddrive.com X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb Content-Type: video/mp4 Content-Encoding: gzip swift-2.7.1/doc/source/api/use_the_content-disposition_metadata.rst0000664000567000056710000000216013024044354026754 0ustar jenkinsjenkins00000000000000==================================== Use the Content-Disposition metadata ==================================== To override the default behavior for a browser, use the ``Content-Disposition`` header to specify the override behavior and assign this header to an object. For example, this header might specify that the browser use a download program to save this file rather than show the file, which is the default. **Example Override browser default behavior request: HTTP** This example assigns an attachment type to the ``Content-Disposition`` header. This attachment type indicates that the file is to be downloaded as ``goodbye.txt``: .. code:: # curl -i $publicURL/marktwain/goodbye -X POST -H "X-Auth-Token: $token" -H "Content-Length: 14" -H "Content-Type: application/octet-stream" -H "Content-Disposition: attachment; filename=goodbye.txt" .. code:: HTTP/1.1 202 Accepted Content-Length: 76 Content-Type: text/html; charset=UTF-8 X-Trans-Id: txa9b5e57d7f354d7ea9f57-0052e17e13 Date: Thu, 23 Jan 2014 20:39:47 GMT
   <html>
    <h1>Accepted</h1>
    <p>The request is accepted for processing.</p>
   </html>
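To confirm the setting, you can issue a **HEAD** request against the same object; the stored value is returned with the response headers. This is a hypothetical check, with the output abbreviated to the relevant header:

.. code::

    # curl -I $publicURL/marktwain/goodbye -H "X-Auth-Token: $token"

.. code::

    HTTP/1.1 200 OK
    Content-Disposition: attachment; filename=goodbye.txt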
swift-2.7.1/doc/source/api/temporary_url_middleware.rst0000664000567000056710000001337413024044354024476 0ustar jenkinsjenkins00000000000000======================== Temporary URL middleware ======================== To discover whether your Object Storage system supports this feature, check with your service provider or send a **GET** request using the ``/info`` path. A temporary URL gives users temporary access to objects. For example, a website might want to provide a link to download a large object in Object Storage, but the Object Storage account has no public access. The website can generate a URL that provides time-limited **GET** access to the object. When the web browser user clicks on the link, the browser downloads the object directly from Object Storage, eliminating the need for the website to act as a proxy for the request. Ask your cloud administrator to enable the temporary URL feature. For information, see :ref:`tempurl` in the *Source Documentation*. Note ~~~~ To use **POST** requests to upload objects to specific Object Storage locations, use :doc:`form_post_middleware` instead of temporary URL middleware. Temporary URL format ~~~~~~~~~~~~~~~~~~~~ A temporary URL is comprised of the URL for an object with added query parameters: **Example Temporary URL format** .. code:: https://swift-cluster.example.com/v1/my_account/container/object ?temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709 &temp_url_expires=1323479485 &filename=My+Test+File.pdf The example shows these elements: **Object URL**: Required. The full path URL to the object. **temp\_url\_sig**: Required. An HMAC-SHA1 cryptographic signature that defines the allowed HTTP method, expiration date, full path to the object, and the secret key for the temporary URL. **temp\_url\_expires**: Required. An expiration date as a UNIX Epoch timestamp, which is an integer value. For example, ``1390852007`` represents ``Mon, 27 Jan 2014 19:46:47 GMT``. For more information, see `Epoch & Unix Timestamp Conversion Tools `__. **filename**: Optional. Overrides the default file name. Object Storage generates a default file name for **GET** temporary URLs that is based on the object name. Object Storage returns this value in the ``Content-Disposition`` response header. Browsers can interpret this file name value as a file attachment to be saved. .. _secret_keys: Secret Keys ~~~~~~~~~~~ The cryptographic signature used in Temporary URLs and also in :doc:`form_post_middleware` uses a secret key. Object Storage allows you to store two secret key values per account, and two per container. When validating a request, Object Storage checks signatures against all keys. Using two keys at each level enables key rotation without invalidating existing temporary URLs. To set the keys at the account level, set one or both of the following request headers to arbitrary values on a **POST** request to the account: - ``X-Account-Meta-Temp-URL-Key`` - ``X-Account-Meta-Temp-URL-Key-2`` To set the keys at the container level, set one or both of the following request headers to arbitrary values on a **POST** or **PUT** request to the container: - ``X-Container-Meta-Temp-URL-Key`` - ``X-Container-Meta-Temp-URL-Key-2`` The arbitrary values serve as the secret keys. For example, use the **swift post** command to set the secret key to *``MYKEY``*: .. code:: $ swift post -m "Temp-URL-Key:MYKEY" Note ~~~~ Changing these headers invalidates any previously generated temporary URLs within 60 seconds, which is the memcache time for the key. 
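If you prefer raw HTTP, a request like the following is equivalent to the **swift post** command shown earlier; the ``$publicURL``, ``$token``, and *``MYKEY``* values are placeholders:

.. code::

    # curl -i $publicURL -X POST -H "X-Auth-Token: $token" -H "X-Account-Meta-Temp-URL-Key: MYKEY"

The same pattern works at the container level by sending the ``X-Container-Meta-Temp-URL-Key`` header on a **POST** or **PUT** request to the container.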
HMAC-SHA1 signature for temporary URLs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Temporary URL middleware uses an HMAC-SHA1 cryptographic signature. This signature includes these elements: - The allowed method. Typically, **GET** or **PUT**. - Expiry time. In the example for the HMAC-SHA1 signature for temporary URLs below, the expiry time is set to ``86400`` seconds (or 1 day) into the future. - The path. Starting with ``/v1/`` onwards and including a container name and object. In the example below, the path is ``/v1/my_account/container/object``. Do not URL-encode the path at this stage. - The secret key. Use one of the key values as described in :ref:`secret_keys`. This sample Python code shows how to compute a signature for use with temporary URLs: **Example HMAC-SHA1 signature for temporary URLs** .. code:: import hmac from hashlib import sha1 from time import time method = 'GET' duration_in_seconds = 60*60*24 expires = int(time() + duration_in_seconds) path = '/v1/my_account/container/object' key = 'MYKEY' hmac_body = '%s\n%s\n%s' % (method, expires, path) signature = hmac.new(key, hmac_body, sha1).hexdigest() Do not URL-encode the path when you generate the HMAC-SHA1 signature. However, when you make the actual HTTP request, you should properly URL-encode the URL. The *``MYKEY``* value is one of the key values as described in :ref:`secret_keys`. For more information, see `RFC 2104: HMAC: Keyed-Hashing for Message Authentication `__. swift-temp-url script ~~~~~~~~~~~~~~~~~~~~~ Object Storage provides the **swift-temp-url** script that auto-generates the *``temp_url_sig``* and *``temp_url_expires``* query parameters. For example, you might run this command: .. code:: $ bin/swift-temp-url GET 3600 /v1/my_account/container/object MYKEY This command returns the path: .. code:: /v1/my_account/container/object ?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91 &temp_url_expires=1374497657 To create the temporary URL, prefix this path with the Object Storage storage host name. For example, prefix the path with ``https://swift-cluster.example.com``, as follows: .. code:: https://swift-cluster.example.com/v1/my_account/container/object ?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91 &temp_url_expires=1374497657 swift-2.7.1/doc/source/api/authentication.rst0000664000567000056710000000422213024044352022402 0ustar jenkinsjenkins00000000000000============== Authentication ============== The owner of an Object Storage account controls access to that account and its containers and objects. An owner is the user who has the ''admin'' role for that tenant. The tenant is also known as the project or account. As the account owner, you can modify account metadata and create, modify, and delete containers and objects. To identify yourself as the account owner, include an authentication token in the ''X-Auth-Token'' header in the API request. Depending on the token value in the ''X-Auth-Token'' header, one of the following actions occur: - ''X-Auth-Token'' contains the token for the account owner. The request is permitted and has full access to make changes to the account. - The ''X-Auth-Token'' header is omitted or it contains a token for a non-owner or a token that is not valid. The request fails with a 401 Unauthorized or 403 Forbidden response. You have no access to accounts or containers, unless an access control list (ACL) explicitly grants access. The account owner can grant account and container access to users through access control lists (ACLs). 
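For example, a container read ACL can be granted with a **POST** request to the container; the container name and the ``account:user`` value shown here are illustrative:

.. code::

    # curl -i $publicURL/container -X POST -H "X-Auth-Token: $token" -H "X-Container-Read: test2:tester2"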
In addition, it is possible to provide an additional token in the ''X-Service-Token'' header. More information about how this is used is in :doc:`../overview_backing_store`. The following list describes the authentication services that you can use with Object Storage: - OpenStack Identity (keystone): For Object Storage, account is synonymous with project or tenant ID. - Tempauth middleware: Object Storage includes this middleware. User and account management is performed in Object Storage itself. - Swauth middleware: Stored in github, this custom middleware is modeled on Tempauth. Usage is similar to Tempauth. - Other custom middleware: Write it yourself to fit your environment. Specifically, you use the ''X-Auth-Token'' header to pass an authentication token to an API request. Authentication tokens expire after a time period that the authentication service defines. When a token expires, use of the token causes requests to fail with a 401 Unauthorized response. To continue, you must obtain a new token. swift-2.7.1/doc/source/api/discoverability.rst0000664000567000056710000000165113024044352022562 0ustar jenkinsjenkins00000000000000=============== Discoverability =============== Your Object Storage system might not enable all features that you read about because your service provider chooses which features to enable. To discover which features are enabled in your Object Storage system, use the ``/info`` request. However, your service provider might have disabled the ``/info`` request, or you might be using an older version that does not support the ``/info`` request. To use the ``/info`` request, send a **GET** request using the ``/info`` path to the Object Store endpoint as shown in this example: .. code:: # curl https://storage.clouddrive.com/info This example shows a truncated response body: .. code:: { "swift":{ "version":"1.11.0" }, "staticweb":{ }, "tempurl":{ } } This output shows that the Object Storage system has enabled the static website and temporary URL features. swift-2.7.1/doc/source/api/large_objects.rst0000664000567000056710000003125513024044354022176 0ustar jenkinsjenkins00000000000000============= Large objects ============= By default, the content of an object cannot be greater than 5 GB. However, you can use a number of smaller objects to construct a large object. The large object is comprised of two types of objects: - **Segment objects** store the object content. You can divide your content into segments, and upload each segment into its own segment object. Segment objects do not have any special features. You create, update, download, and delete segment objects just as you would normal objects. - A **manifest object** links the segment objects into one logical large object. When you download a manifest object, Object Storage concatenates and returns the contents of the segment objects in the response body of the request. This behavior extends to the response headers returned by **GET** and **HEAD** requests. The ``Content-Length`` response header value is the total size of all segment objects. Object Storage calculates the ``ETag`` response header value by taking the ``ETag`` value of each segment, concatenating them together, and returning the MD5 checksum of the result. The manifest object types are: **Static large objects** The manifest object content is an ordered list of the names of the segment objects in JSON format. **Dynamic large objects** The manifest object has a ``X-Object-Manifest`` metadata header. 
The value of this header is ``{container}/{prefix}``, where ``{container}`` is the name of the container where the segment objects are stored, and ``{prefix}`` is a string that all segment objects have in common. The manifest object should have no content. However, this is not enforced. Note ~~~~ If you make a **COPY** request by using a manifest object as the source, the new object is a normal, and not a segment, object. If the total size of the source segment objects exceeds 5 GB, the **COPY** request fails. However, you can make a duplicate of the manifest object and this new object can be larger than 5 GB. Static large objects ~~~~~~~~~~~~~~~~~~~~ To create a static large object, divide your content into pieces and create (upload) a segment object to contain each piece. You must record the ``ETag`` response header that the **PUT** operation returns. Alternatively, you can calculate the MD5 checksum of the segment prior to uploading and include this in the ``ETag`` request header. This ensures that the upload cannot corrupt your data. List the name of each segment object along with its size and MD5 checksum in order. Create a manifest object. Include the ``multipart-manifest=put`` query string at the end of the manifest object name to indicate that this is a manifest object. The body of the **PUT** request on the manifest object comprises a json list, where each element contains the following attributes: - ``path``. The container and object name in the format: ``{container-name}/{object-name}`` - ``etag``. The MD5 checksum of the content of the segment object. This value must match the ``ETag`` of that object. - ``size_bytes``. The size of the segment object. This value must match the ``Content-Length`` of that object. **Example Static large object manifest list** This example shows three segment objects. You can use several containers and the object names do not have to conform to a specific pattern, in contrast to dynamic large objects. .. code:: [ { "path": "mycontainer/objseg1", "etag": "0228c7926b8b642dfb29554cd1f00963", "size_bytes": 1468006 }, { "path": "mycontainer/pseudodir/seg-obj2", "etag": "5bfc9ea51a00b790717eeb934fb77b9b", "size_bytes": 1572864 }, { "path": "other-container/seg-final", "etag": "b9c3da507d2557c1ddc51f27c54bae51", "size_bytes": 256 } ] | The ``Content-Length`` request header must contain the length of the json content—not the length of the segment objects. However, after the **PUT** operation completes, the ``Content-Length`` metadata is set to the total length of all the object segments. A similar situation applies to the ``ETag``. If used in the **PUT** operation, it must contain the MD5 checksum of the json content. The ``ETag`` metadata value is then set to be the MD5 checksum of the concatenated ``ETag`` values of the object segments. You can also set the ``Content-Type`` request header and custom object metadata. When the **PUT** operation sees the ``multipart-manifest=put`` query string, it reads the request body and verifies that each segment object exists and that the sizes and ETags match. If there is a mismatch, the **PUT**\ operation fails. If everything matches, the manifest object is created. The ``X-Static-Large-Object`` metadata is set to ``true`` indicating that this is a static object manifest. Normally when you perform a **GET** operation on the manifest object, the response body contains the concatenated content of the segment objects. To download the manifest list, use the ``multipart-manifest=get`` query string. 
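For example, a hypothetical request to retrieve the manifest itself, rather than the concatenated segments, looks like this; the container and object names are illustrative:

.. code::

    # curl -i "$publicURL/mycontainer/my-large-object?multipart-manifest=get" -X GET -H "X-Auth-Token: $token"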
The resulting list is not formatted the same as the manifest you originally used in the **PUT** operation. If you use the **DELETE** operation on a manifest object, the manifest object is deleted. The segment objects are not affected. However, if you add the ``multipart-manifest=delete`` query string, the segment objects are deleted and if all are successfully deleted, the manifest object is also deleted. To change the manifest, use a **PUT** operation with the ``multipart-manifest=put`` query string. This request creates a manifest object. You can also update the object metadata in the usual way. Dynamic large objects ~~~~~~~~~~~~~~~~~~~~~ You must segment objects that are larger than 5 GB before you can upload them. You then upload the segment objects like you would any other object and create a dynamic large manifest object. The manifest object tells Object Storage how to find the segment objects that comprise the large object. The segments remain individually addressable, but retrieving the manifest object streams all the segments concatenated. There is no limit to the number of segments that can be a part of a single large object. To ensure the download works correctly, you must upload all the object segments to the same container and ensure that each object name is prefixed in such a way that it sorts in the order in which it should be concatenated. You also create and upload a manifest file. The manifest file is a zero-byte file with the extra ``X-Object-Manifest`` ``{container}/{prefix}`` header, where ``{container}`` is the container the object segments are in and ``{prefix}`` is the common prefix for all the segments. You must UTF-8-encode and then URL-encode the container and common prefix in the ``X-Object-Manifest`` header. It is best to upload all the segments first and then create or update the manifest. With this method, the full object is not available for downloading until the upload is complete. Also, you can upload a new set of segments to a second location and update the manifest to point to this new location. During the upload of the new segments, the original manifest is still available to download the first set of segments. **Example Upload segment of large object request: HTTP** .. code:: PUT /{api_version}/{account}/{container}/{object} HTTP/1.1 Host: storage.clouddrive.com X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb ETag: 8a964ee2a5e88be344f36c22562a6486 Content-Length: 1 X-Object-Meta-PIN: 1234 No response body is returned. A status code of 2\ *``nn``* (between 200 and 299, inclusive) indicates a successful write; status 411 Length Required denotes a missing ``Content-Length`` or ``Content-Type`` header in the request. If the MD5 checksum of the data written to the storage system does NOT match the (optionally) supplied ETag value, a 422 Unprocessable Entity response is returned. You can continue uploading segments like this example shows, prior to uploading the manifest. **Example Upload next segment of large object request: HTTP** .. code:: PUT /{api_version}/{account}/{container}/{object} HTTP/1.1 Host: storage.clouddrive.com X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb ETag: 8a964ee2a5e88be344f36c22562a6486 Content-Length: 1 X-Object-Meta-PIN: 1234 Next, upload the manifest you created that indicates the container the object segments reside within. 
Note that uploading additional segments after the manifest is created causes the concatenated object to be that much larger but you do not need to recreate the manifest file for subsequent additional segments. **Example Upload manifest request: HTTP** .. code:: PUT /{api_version}/{account}/{container}/{object} HTTP/1.1 Host: storage.clouddrive.com X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb Content-Length: 0 X-Object-Meta-PIN: 1234 X-Object-Manifest: {container}/{prefix} **Example Upload manifest response: HTTP** .. code:: [...] The ``Content-Type`` in the response for a **GET** or **HEAD** on the manifest is the same as the ``Content-Type`` set during the **PUT** request that created the manifest. You can easily change the ``Content-Type`` by reissuing the **PUT** request. Comparison of static and dynamic large objects ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ While static and dynamic objects have similar behavior, here are their differences: End-to-end integrity -------------------- With static large objects, integrity can be assured. The list of segments may include the MD5 checksum (``ETag``) of each segment. You cannot upload the manifest object if the ``ETag`` in the list differs from the uploaded segment object. If a segment is somehow lost, an attempt to download the manifest object results in an error. With dynamic large objects, integrity is not guaranteed. The eventual consistency model means that although you have uploaded a segment object, it might not appear in the container listing until later. If you download the manifest before it appears in the container, it does not form part of the content returned in response to a **GET** request. Upload Order ------------ With static large objects, you must upload the segment objects before you upload the manifest object. With dynamic large objects, you can upload manifest and segment objects in any order. In case a premature download of the manifest occurs, we recommend users upload the manifest object after the segments. However, the system does not enforce the order. Removal or addition of segment objects -------------------------------------- With static large objects, you cannot add or remove segment objects from the manifest. However, you can create a completely new manifest object of the same name with a different manifest list. With dynamic large objects, you can upload new segment objects or remove existing segments. The names must simply match the ``{prefix}`` supplied in ``X-Object-Manifest``. Segment object size and number ------------------------------ With static large objects, the segment objects must be at least 1 byte in size. However, if the segment objects are less than 1MB (by default), the SLO download is (by default) rate limited. At most, 1000 segments are supported (by default) and the manifest has a limit (by default) of 2MB in size. With dynamic large objects, segment objects can be any size. Segment object container name ----------------------------- With static large objects, the manifest list includes the container name of each object. Segment objects can be in different containers. With dynamic large objects, all segment objects must be in the same container. Manifest object metadata ------------------------ With static large objects, the manifest object has ``X-Static-Large-Object`` set to ``true``. You do not set this metadata directly. Instead the system sets it when you **PUT** a static manifest object. 
With dynamic large objects, the ``X-Object-Manifest`` value is the ``{container}/{prefix}``, which indicates where the segment objects are located. You supply this request header in the **PUT** operation. Copying the manifest object --------------------------- The semantics are the same for both static and dynamic large objects. When copying large objects, the **COPY** operation does not create a manifest object but a normal object with content same as what you would get on a **GET** request to the original manifest object. To copy the manifest object, you include the ``multipart-manifest=get`` query string in the **COPY** request. The new object contains the same manifest as the original. The segment objects are not copied. Instead, both the original and new manifest objects share the same set of segment objects. swift-2.7.1/doc/source/overview_reaper.rst0000664000567000056710000001100113024044354022011 0ustar jenkinsjenkins00000000000000================== The Account Reaper ================== The Account Reaper removes data from deleted accounts in the background. An account is marked for deletion by a reseller issuing a DELETE request on the account's storage URL. This simply puts the value DELETED into the status column of the account_stat table in the account database (and replicas), indicating the data for the account should be deleted later. There is normally no set retention time and no undelete; it is assumed the reseller will implement such features and only call DELETE on the account once it is truly desired the account's data be removed. However, in order to protect the Swift cluster accounts from an improper or mistaken delete request, you can set a delay_reaping value in the [account-reaper] section of the account-server.conf to delay the actual deletion of data. At this time, there is no utility to undelete an account; one would have to update the account database replicas directly, setting the status column to an empty string and updating the put_timestamp to be greater than the delete_timestamp. (On the TODO list is writing a utility to perform this task, preferably through a ReST call.) The account reaper runs on each account server and scans the server occasionally for account databases marked for deletion. It will only trigger on accounts that server is the primary node for, so that multiple account servers aren't all trying to do the same work at the same time. Using multiple servers to delete one account might improve deletion speed, but requires coordination so they aren't duplicating effort. Speed really isn't as much of a concern with data deletion and large accounts aren't deleted that often. The deletion process for an account itself is pretty straightforward. For each container in the account, each object is deleted and then the container is deleted. Any deletion requests that fail won't stop the overall process, but will cause the overall process to fail eventually (for example, if an object delete times out, the container won't be able to be deleted later and therefore the account won't be deleted either). The overall process continues even on a failure so that it doesn't get hung up reclaiming cluster space because of one troublesome spot. The account reaper will keep trying to delete an account until it eventually becomes empty, at which point the database reclaim process within the db_replicator will eventually remove the database files. Sometimes a persistent error state can prevent some object or container from being deleted. 
If this happens, you will see a message such as "Account has not been reaped since " in the log. You can control when this is logged with the reap_warn_after value in the [account-reaper] section of the account-server.conf file. By default this is 30 days. ------- History ------- At first, a simple approach of deleting an account through completely external calls was considered as it required no changes to the system. All data would simply be deleted in the same way the actual user would, through the public ReST API. However, the downside was that it would use proxy resources and log everything when it didn't really need to. Also, it would likely need a dedicated server or two, just for issuing the delete requests. A completely bottom-up approach was also considered, where the object and container servers would occasionally scan the data they held and check if the account was deleted, removing the data if so. The upside was the speed of reclamation with no impact on the proxies or logging, but the downside was that nearly 100% of the scanning would result in no action creating a lot of I/O load for no reason. A more container server centric approach was also considered, where the account server would mark all the containers for deletion and the container servers would delete the objects in each container and then themselves. This has the benefit of still speedy reclamation for accounts with a lot of containers, but has the downside of a pretty big load spike. The process could be slowed down to alleviate the load spike possibility, but then the benefit of speedy reclamation is lost and what's left is just a more complex process. Also, scanning all the containers for those marked for deletion when the majority wouldn't be seemed wasteful. The db_replicator could do this work while performing its replication scan, but it would have to spawn and track deletion processes which seemed needlessly complex. In the end, an account server centric approach seemed best, as described above. swift-2.7.1/doc/source/development_auth.rst0000664000567000056710000004644413024044352022171 0ustar jenkinsjenkins00000000000000========================== Auth Server and Middleware ========================== -------------------------------------------- Creating Your Own Auth Server and Middleware -------------------------------------------- The included swift/common/middleware/tempauth.py is a good example of how to create an auth subsystem with proxy server auth middleware. The main points are that the auth middleware can reject requests up front, before they ever get to the Swift Proxy application, and afterwards when the proxy issues callbacks to verify authorization. It's generally good to separate the authentication and authorization procedures. Authentication verifies that a request actually comes from who it says it does. Authorization verifies the 'who' has access to the resource(s) the request wants. Authentication is performed on the request before it ever gets to the Swift Proxy application. The identity information is gleaned from the request, validated in some way, and the validation information is added to the WSGI environment as needed by the future authorization procedure. What exactly is added to the WSGI environment is solely dependent on what the installed authorization procedures need; the Swift Proxy application itself needs no specific information, it just passes it along. 
Convention has environ['REMOTE_USER'] set to the authenticated user string but often more information is needed than just that. The included TempAuth will set the REMOTE_USER to a comma separated list of groups the user belongs to. The first group will be the "user's group", a group that only the user belongs to. The second group will be the "account's group", a group that includes all users for that auth account (different than the storage account). The third group is optional and is the storage account string. If the user does not have admin access to the account, the third group will be omitted. It is highly recommended that authentication server implementers prefix their tokens and Swift storage accounts they create with a configurable reseller prefix (`AUTH_` by default with the included TempAuth). This prefix will avoid conflicts with other authentication servers that might be using the same Swift cluster. Otherwise, the Swift cluster will have to try all the resellers until one validates a token or all fail. A restriction with group names is that no group name should begin with a period '.' as that is reserved for internal Swift use (such as the .r for referrer designations as you'll see later). Example Authentication with TempAuth: * Token AUTH_tkabcd is given to the TempAuth middleware in a request's X-Auth-Token header. * The TempAuth middleware validates the token AUTH_tkabcd and discovers it matches the "tester" user within the "test" account for the storage account "AUTH_storage_xyz". * The TempAuth middleware sets the REMOTE_USER to "test:tester,test,AUTH_storage_xyz" * Now this user will have full access (via authorization procedures later) to the AUTH_storage_xyz Swift storage account and access to containers in other storage accounts, provided the storage account begins with the same `AUTH_` reseller prefix and the container has an ACL specifying at least one of those three groups. Authorization is performed through callbacks by the Swift Proxy server to the WSGI environment's swift.authorize value, if one is set. The swift.authorize value should simply be a function that takes a Request as an argument and returns None if access is granted or returns a callable(environ, start_response) if access is denied. This callable is a standard WSGI callable. Generally, you should return 403 Forbidden for requests by an authenticated user and 401 Unauthorized for an unauthenticated request. For example, here's an authorize function that only allows GETs (in this case you'd probably return 405 Method Not Allowed, but ignore that for the moment).:: from swift.common.swob import HTTPForbidden, HTTPUnauthorized def authorize(req): if req.method == 'GET': return None if req.remote_user: return HTTPForbidden(request=req) else: return HTTPUnauthorized(request=req) Adding the swift.authorize callback is often done by the authentication middleware as authentication and authorization are often paired together. But, you could create separate authorization middleware that simply sets the callback before passing on the request. 
To continue our example above:: from swift.common.swob import HTTPForbidden, HTTPUnauthorized class Authorization(object): def __init__(self, app, conf): self.app = app self.conf = conf def __call__(self, environ, start_response): environ['swift.authorize'] = self.authorize return self.app(environ, start_response) def authorize(self, req): if req.method == 'GET': return None if req.remote_user: return HTTPForbidden(request=req) else: return HTTPUnauthorized(request=req) def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return Authorization(app, conf) return auth_filter The Swift Proxy server will call swift.authorize after some initial work, but before truly trying to process the request. Positive authorization at this point will cause the request to be fully processed immediately. A denial at this point will immediately send the denial response for most operations. But for some operations that might be approved with more information, the additional information will be gathered and added to the WSGI environment and then swift.authorize will be called once more. These are called delay_denial requests and currently include container read requests and object read and write requests. For these requests, the read or write access control string (X-Container-Read and X-Container-Write) will be fetched and set as the 'acl' attribute in the Request passed to swift.authorize. The delay_denial procedures allow skipping possibly expensive access control string retrievals for requests that can be approved without that information, such as administrator or account owner requests. To further our example, we now will approve all requests that have the access control string set to same value as the authenticated user string. Note that you probably wouldn't do this exactly as the access control string represents a list rather than a single user, but it'll suffice for this example:: from swift.common.swob import HTTPForbidden, HTTPUnauthorized class Authorization(object): def __init__(self, app, conf): self.app = app self.conf = conf def __call__(self, environ, start_response): environ['swift.authorize'] = self.authorize return self.app(environ, start_response) def authorize(self, req): # Allow anyone to perform GET requests if req.method == 'GET': return None # Allow any request where the acl equals the authenticated user if getattr(req, 'acl', None) == req.remote_user: return None if req.remote_user: return HTTPForbidden(request=req) else: return HTTPUnauthorized(request=req) def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return Authorization(app, conf) return auth_filter The access control string has a standard format included with Swift, though this can be overridden if desired. The standard format can be parsed with swift.common.middleware.acl.parse_acl which converts the string into two arrays of strings: (referrers, groups). The referrers allow comparing the request's Referer header to control access. The groups allow comparing the request.remote_user (or other sources of group information) to control access. Checking referrer access can be accomplished by using the swift.common.middleware.acl.referrer_allowed function. Checking group access is usually a simple string comparison. Let's continue our example to use parse_acl and referrer_allowed. 
Now we'll only allow GETs after a referrer check and any requests after a group check:: from swift.common.middleware.acl import parse_acl, referrer_allowed from swift.common.swob import HTTPForbidden, HTTPUnauthorized class Authorization(object): def __init__(self, app, conf): self.app = app self.conf = conf def __call__(self, environ, start_response): environ['swift.authorize'] = self.authorize return self.app(environ, start_response) def authorize(self, req): if hasattr(req, 'acl'): referrers, groups = parse_acl(req.acl) if req.method == 'GET' and referrer_allowed(req, referrers): return None if req.remote_user and groups and req.remote_user in groups: return None if req.remote_user: return HTTPForbidden(request=req) else: return HTTPUnauthorized(request=req) def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return Authorization(app, conf) return auth_filter The access control strings are set with PUTs and POSTs to containers with the X-Container-Read and X-Container-Write headers. Swift allows these strings to be set to any value, though it's very useful to validate that the strings meet the desired format and return a useful error to the user if they don't. To support this validation, the Swift Proxy application will call the WSGI environment's swift.clean_acl callback whenever one of these headers is to be written. The callback should take a header name and value as its arguments. It should return the cleaned value to save if valid or raise a ValueError with a reasonable error message if not. There is an included swift.common.middleware.acl.clean_acl that validates the standard Swift format. Let's improve our example by making use of that:: from swift.common.middleware.acl import \ clean_acl, parse_acl, referrer_allowed from swift.common.swob import HTTPForbidden, HTTPUnauthorized class Authorization(object): def __init__(self, app, conf): self.app = app self.conf = conf def __call__(self, environ, start_response): environ['swift.authorize'] = self.authorize environ['swift.clean_acl'] = clean_acl return self.app(environ, start_response) def authorize(self, req): if hasattr(req, 'acl'): referrers, groups = parse_acl(req.acl) if req.method == 'GET' and referrer_allowed(req, referrers): return None if req.remote_user and groups and req.remote_user in groups: return None if req.remote_user: return HTTPForbidden(request=req) else: return HTTPUnauthorized(request=req) def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return Authorization(app, conf) return auth_filter Now, if you want to override the format for access control strings you'll have to provide your own clean_acl function and you'll have to do your own parsing and authorization checking for that format. It's highly recommended you use the standard format simply to support the widest range of external tools, but sometimes that's less important than meeting certain ACL requirements. 
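As a quick illustration of the standard format, parse_acl splits an access control string into referrer designations and groups; the ACL string and names below are illustrative::

    from swift.common.middleware.acl import parse_acl

    # ".r:*" allows any referrer, ".rlistings" allows container listings,
    # and "test:tester" is an example user group.
    referrers, groups = parse_acl('.r:*,.rlistings,test:tester')
    # referrers -> ['*']
    # groups -> ['.rlistings', 'test:tester']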
---------------------------- Integrating With repoze.what ---------------------------- Here's an example of integration with repoze.what, though honestly I'm no repoze.what expert by any stretch; this is just included here to hopefully give folks a start on their own code if they want to use repoze.what:: from time import time from eventlet.timeout import Timeout from repoze.what.adapters import BaseSourceAdapter from repoze.what.middleware import setup_auth from repoze.what.predicates import in_any_group, NotAuthorizedError from swift.common.bufferedhttp import http_connect_raw as http_connect from swift.common.middleware.acl import clean_acl, parse_acl, referrer_allowed from swift.common.utils import cache_from_env, split_path from swift.common.swob import HTTPForbidden, HTTPUnauthorized class DevAuthorization(object): def __init__(self, app, conf): self.app = app self.conf = conf def __call__(self, environ, start_response): environ['swift.authorize'] = self.authorize environ['swift.clean_acl'] = clean_acl return self.app(environ, start_response) def authorize(self, req): version, account, container, obj = split_path(req.path, 1, 4, True) if not account: return self.denied_response(req) referrers, groups = parse_acl(getattr(req, 'acl', None)) if referrer_allowed(req, referrers): return None try: in_any_group(account, *groups).check_authorization(req.environ) except NotAuthorizedError: return self.denied_response(req) return None def denied_response(self, req): if req.remote_user: return HTTPForbidden(request=req) else: return HTTPUnauthorized(request=req) class DevIdentifier(object): def __init__(self, conf): self.conf = conf def identify(self, env): return {'token': env.get('HTTP_X_AUTH_TOKEN', env.get('HTTP_X_STORAGE_TOKEN'))} def remember(self, env, identity): return [] def forget(self, env, identity): return [] class DevAuthenticator(object): def __init__(self, conf): self.conf = conf self.auth_host = conf.get('ip', '127.0.0.1') self.auth_port = int(conf.get('port', 11000)) self.ssl = \ conf.get('ssl', 'false').lower() in ('true', 'on', '1', 'yes') self.auth_prefix = conf.get('prefix', '/') self.timeout = float(conf.get('node_timeout', 10)) def authenticate(self, env, identity): token = identity.get('token') if not token: return None memcache_client = cache_from_env(env) key = 'devauth/%s' % token cached_auth_data = memcache_client.get(key) if cached_auth_data: start, expiration, user = cached_auth_data if time() - start <= expiration: return user with Timeout(self.timeout): conn = http_connect(self.auth_host, self.auth_port, 'GET', '%stoken/%s' % (self.auth_prefix, token), ssl=self.ssl) resp = conn.getresponse() resp.read() conn.close() if resp.status == 204: expiration = float(resp.getheader('x-auth-ttl')) user = resp.getheader('x-auth-user') memcache_client.set(key, (time(), expiration, user), time=expiration) return user return None class DevChallenger(object): def __init__(self, conf): self.conf = conf def challenge(self, env, status, app_headers, forget_headers): def no_challenge(env, start_response): start_response(str(status), []) return [] return no_challenge class DevGroupSourceAdapter(BaseSourceAdapter): def __init__(self, *args, **kwargs): super(DevGroupSourceAdapter, self).__init__(*args, **kwargs) self.sections = {} def _get_all_sections(self): return self.sections def _get_section_items(self, section): return self.sections[section] def _find_sections(self, credentials): return credentials['repoze.what.userid'].split(',') def _include_items(self, section, items): 
self.sections[section] |= items def _exclude_items(self, section, items): for item in items: self.sections[section].remove(item) def _item_is_included(self, section, item): return item in self.sections[section] def _create_section(self, section): self.sections[section] = set() def _edit_section(self, section, new_section): self.sections[new_section] = self.sections[section] del self.sections[section] def _delete_section(self, section): del self.sections[section] def _section_exists(self, section): return self.sections.has_key(section) class DevPermissionSourceAdapter(BaseSourceAdapter): def __init__(self, *args, **kwargs): super(DevPermissionSourceAdapter, self).__init__(*args, **kwargs) self.sections = {} def _get_all_sections(self): return self.sections def _get_section_items(self, section): return self.sections[section] def _find_sections(self, group_name): return set([n for (n, p) in self.sections.items() if group_name in p]) def _include_items(self, section, items): self.sections[section] |= items def _exclude_items(self, section, items): for item in items: self.sections[section].remove(item) def _item_is_included(self, section, item): return item in self.sections[section] def _create_section(self, section): self.sections[section] = set() def _edit_section(self, section, new_section): self.sections[new_section] = self.sections[section] del self.sections[section] def _delete_section(self, section): del self.sections[section] def _section_exists(self, section): return self.sections.has_key(section) def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return setup_auth(DevAuthorization(app, conf), group_adapters={'all_groups': DevGroupSourceAdapter()}, permission_adapters={'all_perms': DevPermissionSourceAdapter()}, identifiers=[('devauth', DevIdentifier(conf))], authenticators=[('devauth', DevAuthenticator(conf))], challengers=[('devauth', DevChallenger(conf))]) return auth_filter ----------------------- Allowing CORS with Auth ----------------------- Cross Origin Resource Sharing (CORS) require that the auth system allow the OPTIONS method to pass through without a token. The preflight request will make an OPTIONS call against the object or container and will not work if the auth system stops it. See TempAuth for an example of how OPTIONS requests are handled. swift-2.7.1/doc/source/apache_deployment_guide.rst0000664000567000056710000001501013024044354023447 0ustar jenkinsjenkins00000000000000======================= Apache Deployment Guide ======================= ---------------------------- Web Front End Considerations ---------------------------- Swift can be configured to work both using an integral web front-end and using a full-fledged Web Server such as the Apache2 (HTTPD) web server. The integral web front-end is a wsgi mini "Web Server" which opens up its own socket and serves http requests directly. The incoming requests accepted by the integral web front-end are then forwarded to a wsgi application (the core swift) for further handling, possibly via wsgi middleware sub-components. client<---->'integral web front-end'<---->middleware<---->'core swift' To gain full advantage of Apache2, Swift can alternatively be configured to work as a request processor of the Apache2 server. This alternative deployment scenario uses mod_wsgi of Apache2 to forward requests to the swift wsgi application and middleware. 
client<---->'Apache2 with mod_wsgi'<----->middleware<---->'core swift'

The integral web front-end offers simplicity and requires minimal
configuration. It is also the web front-end most commonly used with Swift.
Additionally, the integral web front-end includes support for receiving
chunked transfer encoding from a client, which is presently not supported by
Apache2 in the operation mode described here.

The use of Apache2 offers new ways to extend Swift and integrate it with
existing authentication, administration and control systems. A single Apache2
server can serve as the web front end of any number of swift servers residing
on a swift node. For example, when a storage node offers account, container
and object services, a single Apache2 server can serve as the web front end of
all three services.

The Apache variant described here was tested as part of an IBM research work.
It was found that, following tuning, Apache2 offers performance generally
equivalent to that offered by the integral web front-end. As an alternative to
Apache2, other web servers may be used, but they have not been tested.

-------------
Apache2 Setup
-------------

Both Apache2 and mod-wsgi need to be installed on the system. Ubuntu comes
with Apache2 installed. Install mod-wsgi using::

    sudo apt-get install libapache2-mod-wsgi

First, change the User and Group IDs of Apache2 to be those used by Swift.
For example, in /etc/apache2/envvars use::

    export APACHE_RUN_USER=swift
    export APACHE_RUN_GROUP=swift

Create a directory for the Apache2 wsgi files::

    sudo mkdir /var/www/swift

Create a file for each service under /var/www/swift.

For a proxy service create /var/www/swift/proxy-server.wsgi::

    from swift.common.wsgi import init_request_processor
    application, conf, logger, log_name = \
        init_request_processor('/etc/swift/proxy-server.conf', 'proxy-server')

For an account service create /var/www/swift/account-server.wsgi::

    from swift.common.wsgi import init_request_processor
    application, conf, logger, log_name = \
        init_request_processor('/etc/swift/account-server.conf', 'account-server')

For a container service create /var/www/swift/container-server.wsgi::

    from swift.common.wsgi import init_request_processor
    application, conf, logger, log_name = \
        init_request_processor('/etc/swift/container-server.conf', 'container-server')

For an object service create /var/www/swift/object-server.wsgi::

    from swift.common.wsgi import init_request_processor
    application, conf, logger, log_name = \
        init_request_processor('/etc/swift/object-server.conf', 'object-server')

Create a /etc/apache2/conf.d/swift_wsgi.conf configuration file that will
define a port and Virtual Host for each local service. For example, an Apache2
server serving as the web front end of a proxy service::

    #Proxy
    NameVirtualHost *:8080
    Listen 8080
    <VirtualHost *:8080>
        ServerName proxy-server
        LimitRequestBody 5368709122
        WSGIDaemonProcess proxy-server processes=5 threads=1
        WSGIProcessGroup proxy-server
        WSGIScriptAlias / /var/www/swift/proxy-server.wsgi
        LimitRequestFields 200
        ErrorLog /var/log/apache2/proxy-server
        LogLevel debug
        CustomLog /var/log/apache2/proxy.log combined
    </VirtualHost>

Notice that when using Apache the limit on the maximal object size should be
imposed by Apache using the LimitRequestBody directive rather than by the
swift proxy. Note also that the LimitRequestBody should indicate the same
value as indicated by max_file_size located in both /etc/swift/swift.conf and
in /etc/swift/test.conf. The Swift default value for max_file_size (when not
present) is 5368709122.
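For reference, the corresponding constraint is set in the ``[swift-constraints]``
section of those files; a minimal sketch, assuming the default value quoted
above, is::

    [swift-constraints]
    max_file_size = 5368709122

Keeping this value and the LimitRequestBody directive in sync avoids uploads
that one layer accepts and the other rejects.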
For example an Apache2 serving as a web front end of a storage node:: #Object Service NameVirtualHost *:6000 Listen 6000 ServerName object-server WSGIDaemonProcess object-server processes=5 threads=1 WSGIProcessGroup object-server WSGIScriptAlias / /var/www/swift/object-server.wsgi LimitRequestFields 200 ErrorLog /var/log/apache2/object-server LogLevel debug CustomLog /var/log/apache2/access.log combined #Container Service NameVirtualHost *:6001 Listen 6001 ServerName container-server WSGIDaemonProcess container-server processes=5 threads=1 WSGIProcessGroup container-server WSGIScriptAlias / /var/www/swift/container-server.wsgi LimitRequestFields 200 ErrorLog /var/log/apache2/container-server LogLevel debug CustomLog /var/log/apache2/access.log combined #Account Service NameVirtualHost *:6002 Listen 6002 ServerName account-server WSGIDaemonProcess account-server processes=5 threads=1 WSGIProcessGroup account-server WSGIScriptAlias / /var/www/swift/account-server.wsgi LimitRequestFields 200 ErrorLog /var/log/apache2/account-server LogLevel debug CustomLog /var/log/apache2/access.log combined Next stop the Apache2 and start it again (apache2ctl restart is not enough):: apache2ctl stop apache2ctl start Edit the tests config file and add:: web_front_end = apache2 normalized_urls = True Also check to see that the file includes max_file_size of the same value as used for the LimitRequestBody in the apache config file above. We are done. You may run functional tests to test - e.g.:: cd ~swift/swift ./.functests swift-2.7.1/doc/source/overview_backing_store.rst0000664000567000056710000002717113024044352023362 0ustar jenkinsjenkins00000000000000 ============================================= Using Swift as Backing Store for Service Data ============================================= ---------- Background ---------- This section provides guidance to OpenStack Service developers for how to store your users' data in Swift. An example of this is that a user requests that Nova save a snapshot of a VM. Nova passes the request to Glance, Glance writes the image to a Swift container as a set of objects. Throughout this section, the following terminology and concepts are used: * User or end-user. This is a person making a request that will result in an OpenStack Service making a request to Swift. * Project (also known as Tenant). This is the unit of resource ownership. While data such as snapshot images or block volume backups may be stored as a result of an end-user's request, the reality is that these are project data. * Service. This is a program or system used by end-users. Specifically, it is any program or system that is capable of receiving end-user's tokens and validating the token with the Keystone Service and has a need to store data in Swift. Glance and Cinder are examples of such Services. * Service User. This is a Keystone user that has been assigned to a Service. This allows the Service to generate and use its own tokens so that it can interact with other Services as itself. * Service Project. This is a project (tenant) that is associated with a Service. There may be a single project shared by many Services or there may be a project dedicated to each Service. In this document, the main purpose of the Service Project is to allow the system operator to configure specific roles for each Service User. 
-------------------------------
Alternate Backing Store Schemes
-------------------------------

There are three schemes described here:

* Dedicated Service Account (Single Tenant)

  Your Service has a dedicated Service Project (hence a single dedicated
  Swift account). Data for all users and projects is stored in this account.
  Your Service must have a user assigned to it (the Service User). When you
  have data to store on behalf of one of your users, you use the Service User
  credentials to get a token for the Service Project and request Swift to
  store the data in the Service Project.

  With this scheme, data for all users is stored in a single account. This is
  transparent to your users and, since the credentials for the Service User
  are typically not shared with anyone, your users cannot access their data
  by making a request directly to Swift. However, since data belonging to all
  users is stored in one account, it presents a single point of vulnerability
  to accidental deletion or a leak of the service-user credentials.

* Multi Project (Multi Tenant)

  Data belonging to a project is stored in the Swift account associated with
  the project. Users make requests to your Service using a token scoped to a
  project in the normal way. You can then use this same token to store the
  user's data in the project's Swift account. The effect is that data is
  stored in multiple projects (aka tenants). Hence this scheme has been known
  as the "multi tenant" scheme.

  With this scheme, access is controlled by Keystone. The users must have a
  role that allows them to perform the request to your Service. In addition,
  they must have a role that also allows them to store data in the Swift
  account. By default, the admin or swiftoperator roles are used for this
  purpose (specific systems may use other role names). If the user does not
  have the appropriate roles, the operation will fail when your Service
  attempts to access Swift.

  Since you are using the user's token to access the data, it follows that
  the user can use the same token to access Swift directly -- bypassing your
  Service. When end-users are browsing containers, they will also see your
  Service's containers and objects -- and may potentially delete the data.
  Conversely, there is no single account where all data is stored, so a leak
  of credentials will only affect a single project/tenant.

* Service Prefix Account

  Data belonging to a project is stored in a Swift account associated with
  the project. This is similar to the Multi Project scheme described above.
  However, the Swift account is different from the account that users access.
  Specifically, it has a different account prefix. For example, for the
  project 1234, the user account is named AUTH_1234. Your Service uses a
  different account, for example, SERVICE_1234.

  To access the SERVICE_1234 account, you must present two tokens: the user's
  token is put in the X-Auth-Token header and your Service's token in the
  X-Service-Token header. Swift is configured such that only when both tokens
  are presented will it allow access. Specifically, the user cannot bypass
  your Service because they only have their own token. Conversely, your
  Service can only access the data while it has a copy of the user's token --
  the Service's token by itself will not grant access.

  The data stored in the Service Prefix Account cannot be seen by end-users,
  so they cannot delete this data -- they can only access it if they make a
  request through your Service. The data is also more secure. To make an
  unauthorized access, someone would need to compromise both an end-user's
  credentials and your Service User's credentials. Even then, this would only
  expose one project -- not other projects.

The Service Prefix Account scheme combines features of the Dedicated Service
Account and Multi Project schemes. It has the private, dedicated
characteristics of the Dedicated Service Account scheme but does not present
a single point of attack. Using the Service Prefix Account scheme is a little
more involved than the other schemes, so the rest of this document describes
it in more detail.

-------------------------------
Service Prefix Account Overview
-------------------------------

The following diagram shows the flow through the system from the end-user, to
your Service and then on to Swift::

    client
         \
          \   <request>: <path>
           \  x-auth-token: <user's token>
            \
             SERVICE
                  \
                   \  PUT: /v1/SERVICE_1234/<container>/<object>
                    \ x-auth-token: <user's token>
                     \ x-service-token: <service token>
                      \
                       Swift

The sequence of events and actions is as follows:

* Request arrives at your Service.

* The <user's token> is validated by the keystonemiddleware.auth_token
  middleware. The user's role(s) are used to determine if the user can
  perform the request. See :doc:`overview_auth` for technical information on
  the authentication system.

* As part of this request, your Service needs to access Swift (either to
  write or read a container or object). In this example, you want to perform
  a PUT on <container>/<object>.

* In the wsgi environment, the auth_token module will have populated the
  HTTP_X_SERVICE_CATALOG item. This lists the Swift endpoint and account.
  This is something such as https://<netloc>/v1/AUTH_1234 where ``AUTH_`` is
  a prefix and ``1234`` is the project id.

* The ``AUTH_`` prefix is the default value. However, your system may use a
  different prefix. To determine the actual prefix, search for the first
  underscore ('_') character in the account name. If there is no underscore
  character in the account name, this means there is no prefix.

* Your Service should have a configuration parameter that provides the
  appropriate prefix to use for storing data in Swift. There is more
  discussion of this below, but for now assume the prefix is ``SERVICE_``.

* Replace the prefix (``AUTH_`` in the above examples) in the path with
  ``SERVICE_``, so the full URL to access the object becomes
  https://<netloc>/v1/SERVICE_1234/<container>/<object>.

* Make the request to Swift, using this URL. In the X-Auth-Token header place
  a copy of the <user's token>. In the X-Service-Token header, place your
  Service's token. If you use python-swiftclient you can achieve this by:

  * Putting the URL in the ``preauthurl`` parameter
  * Putting the <user's token> in the ``preauthtoken`` parameter
  * Adding the X-Service-Token to the ``headers`` parameter

Using the HTTP_X_SERVICE_CATALOG to get Swift Account Name
----------------------------------------------------------

The auth_token middleware populates the wsgi environment with information when
it validates the user's token. The HTTP_X_SERVICE_CATALOG item is a JSON
string containing details of the OpenStack endpoints. For Swift, this also
contains the project's Swift account name. Here is an example of a catalog
entry for Swift::

    "serviceCatalog": [
        ...
        {
            ....
            "type": "object-store",
            "endpoints": [
                ...
                {
                    ...
                    "publicURL": "https://<netloc>/v1/AUTH_1234",
                    "region": "<region name>"
                    ...
                }
                ...
            ...
        }
    }

To get the end-user's account:

* Look for an entry with ``type`` of ``object-store``.

* If there are several regions, there will be several endpoints. Use the
  appropriate region name and select the ``publicURL`` item.

* The Swift account name is the final item in the path ("AUTH_1234" in this
  example).
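The steps above can be sketched in a few lines of Python. This is only an
illustration -- the helper name, the hard-coded ``SERVICE_`` prefix and the
assumption that the catalog parses to a list of service entries (as in the
example above) are not part of Swift or keystonemiddleware::

    import json

    def service_account_url(environ, service_prefix='SERVICE_'):
        """Illustrative helper: derive the Service Prefix Account URL from
        the service catalog that auth_token placed in the wsgi environment."""
        catalog = json.loads(environ['HTTP_X_SERVICE_CATALOG'])
        for entry in catalog:
            if entry.get('type') == 'object-store':
                # pick the endpoint for your region here; we simply take
                # the first one
                public_url = entry['endpoints'][0]['publicURL']
                break
        else:
            raise ValueError('no object-store endpoint in the catalog')
        base, account = public_url.rsplit('/', 1)
        # strip any reseller prefix (everything up to and including the
        # first underscore); if there is no underscore there is no prefix
        project_id = account.split('_', 1)[-1]
        return '%s/%s%s' % (base, service_prefix, project_id)

The returned URL would then be passed to python-swiftclient as ``preauthurl``,
with the user's token as ``preauthtoken`` and the Service's token supplied in
the ``headers`` parameter as X-Service-Token.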
Getting a Service Token ----------------------- A Service Token is no different than any other token and is requested from Keystone using user credentials and project in the usual way. The core requirement is that your Service User has the appropriate role. In practice: * Your Service must have a user assigned to it (the Service User). * Your Service has a project assigned to it (the Service Project). * The Service User must have a role on the Service Project. This role is distinct from any of the normal end-user roles. * The role used must the role configured in the /etc/swift/proxy-server.conf. This is the ``_service_roles`` option. In this example, the role is the ``service`` role:: [keystoneauth] reseller_prefix = AUTH_, SERVICE_ SERVICE_service_role = service The ``service`` role should only be granted to OpenStack Services. It should not be granted to users. Single or multiple Service Prefixes? ------------------------------------ Most of the examples used in this document used a single prefix. The prefix, ``SERVICE`` was used. By using a single prefix, an operator is allowing all OpenStack Services to share the same account for data associated with a given project. For test systems or deployments well protected on private firewalled networks, this is appropriate. However, if one Service is compromised, that Service can access data created by another Service. To prevent this, multiple Service Prefixes may be used. This also requires that the operator configure multiple service roles. For example, in a system that has Glance and Cinder, the following Swift configuration could be used:: [keystoneauth] reseller_prefix = AUTH_, IMAGE_, BLOCK_ IMAGE_service_roles = image_service BLOCK_service_roles = block_service The Service User for Glance would be granted the ``image_service`` role on its Service Project and the Cinder Service user is granted the ``block_service`` role on its project. In this scheme, if the Cinder Service was compromised, it would not be able to access any Glance data. Container Naming ---------------- Since a single Service Prefix is possible, container names should be prefixed with a unique string to prevent name clashes. We suggest you use the service type field (as used in the service catalog). For example, The Glance Service would use "image" as a prefix. swift-2.7.1/doc/source/index.rst0000664000567000056710000000667613024044354017742 0ustar jenkinsjenkins00000000000000.. Copyright 2010-2012 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Welcome to Swift's documentation! ================================= Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply. This documentation is generated by the Sphinx toolkit and lives in the source tree. Additional documentation on Swift and other components of OpenStack can be found on the `OpenStack wiki`_ and at http://docs.openstack.org. .. _`OpenStack wiki`: http://wiki.openstack.org .. 
note:: If you're looking for associated projects that enhance or use Swift, please see the :ref:`associated_projects` page. .. toctree:: :maxdepth: 1 getting_started Overview and Concepts ===================== .. toctree:: :maxdepth: 1 api/object_api_v1_overview overview_architecture overview_ring overview_policies overview_reaper overview_auth overview_replication ratelimit overview_large_objects overview_object_versioning overview_container_sync overview_expiring_objects cors crossdomain overview_erasure_code overview_backing_store associated_projects Developer Documentation ======================= .. toctree:: :maxdepth: 1 development_guidelines development_saio first_contribution_swift policies_saio development_auth development_middleware development_ondisk_backends Administrator Documentation =========================== .. toctree:: :maxdepth: 1 howto_installmultinode deployment_guide apache_deployment_guide admin_guide replication_network logs ops_runbook/index Object Storage v1 REST API Documentation ======================================== See `Complete Reference for the Object Storage REST API `_ The following provides supporting information for the REST API: .. toctree:: :maxdepth: 1 api/object_api_v1_overview.rst api/discoverability.rst api/authentication.rst api/container_quotas.rst api/object_versioning.rst api/large_objects.rst api/temporary_url_middleware.rst api/form_post_middleware.rst api/use_content-encoding_metadata.rst api/use_the_content-disposition_metadata.rst OpenStack End User Guide ======================== The `OpenStack End User Guide `_ has additional information on using Swift. See the `Manage objects and containers `_ section. Source Documentation ==================== .. toctree:: :maxdepth: 2 ring proxy account container db object misc middleware Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` swift-2.7.1/doc/source/development_middleware.rst0000664000567000056710000002240413024044354023335 0ustar jenkinsjenkins00000000000000======================= Middleware and Metadata ======================= ---------------- Using Middleware ---------------- `Python WSGI Middleware`_ (or just "middleware") can be used to "wrap" the request and response of a Python WSGI application (i.e. a webapp, or REST/HTTP API), like Swift's WSGI servers (proxy-server, account-server, container-server, object-server). Swift uses middleware to add (sometimes optional) behaviors to the Swift WSGI servers. .. _Python WSGI Middleware: http://www.python.org/dev/peps/pep-0333/#middleware-components-that-play-both-sides Middleware can be added to the Swift WSGI servers by modifying their `paste`_ configuration file. The majority of Swift middleware is applied to the :ref:`proxy-server`. .. _paste: http://pythonpaste.org/ Given the following basic configuration:: [DEFAULT] log_level = DEBUG user = [pipeline:main] pipeline = proxy-server [app:proxy-server] use = egg:swift#proxy You could add the :ref:`healthcheck` middleware by adding a section for that filter and adding it to the pipeline:: [DEFAULT] log_level = DEBUG user = [pipeline:main] pipeline = healthcheck proxy-server [filter:healthcheck] use = egg:swift#healthcheck [app:proxy-server] use = egg:swift#proxy Some middleware is required and will be inserted into your pipeline automatically by core swift code (e.g. the proxy-server will insert :ref:`catch_errors` and :ref:`gatekeeper` at the start of the pipeline if they are not already present). 
You can see which features are available on a given Swift endpoint (including
middleware) using the :ref:`discoverability` interface.

----------------------------
Creating Your Own Middleware
----------------------------

The best way to see how to write middleware is to look at examples. Many
optional features in Swift are implemented as :ref:`common_middleware` and
provided in ``swift.common.middleware``, but Swift middleware may be packaged
and distributed as a separate project. Some examples are listed on the
:ref:`associated_projects` page.

A contrived middleware example is presented below; it modifies request
behavior by inspecting custom HTTP headers (e.g. X-Webhook), uses
:ref:`sysmeta` to persist data to backend storage, and demonstrates common
patterns like a :func:`.get_container_info` cache/query and the
:func:`.wsgify` decorator::

    from swift.common.http import is_success
    from swift.common.swob import wsgify
    from swift.common.utils import split_path, get_logger
    from swift.common.request_helpers import get_sys_meta_prefix
    from swift.proxy.controllers.base import get_container_info

    from eventlet import Timeout
    from eventlet.green import urllib2

    # x-container-sysmeta-webhook
    SYSMETA_WEBHOOK = get_sys_meta_prefix('container') + 'webhook'


    class WebhookMiddleware(object):

        def __init__(self, app, conf):
            self.app = app
            self.logger = get_logger(conf, log_route='webhook')

        @wsgify
        def __call__(self, req):
            obj = None
            try:
                (version, account, container, obj) = \
                    split_path(req.path_info, 4, 4, True)
            except ValueError:
                # not an object request
                pass
            if 'x-webhook' in req.headers:
                # translate user's request header to sysmeta
                req.headers[SYSMETA_WEBHOOK] = \
                    req.headers['x-webhook']
            if 'x-remove-webhook' in req.headers:
                # empty value will tombstone sysmeta
                req.headers[SYSMETA_WEBHOOK] = ''
            # account and object storage will ignore x-container-sysmeta-*
            resp = req.get_response(self.app)
            if obj and is_success(resp.status_int) and req.method == 'PUT':
                container_info = get_container_info(req.environ, self.app)
                # container_info may have our new sysmeta key
                webhook = container_info['sysmeta'].get('webhook')
                if webhook:
                    # create a POST request with obj name as body
                    webhook_req = urllib2.Request(webhook, data=obj)
                    with Timeout(20):
                        try:
                            urllib2.urlopen(webhook_req).read()
                        except (Exception, Timeout):
                            self.logger.exception(
                                'failed POST to webhook %s' % webhook)
                        else:
                            self.logger.info(
                                'successfully called webhook %s' % webhook)
            if 'x-container-sysmeta-webhook' in resp.headers:
                # translate sysmeta from the backend resp to
                # user-visible client resp header
                resp.headers['x-webhook'] = resp.headers[SYSMETA_WEBHOOK]
            return resp


    def webhook_factory(global_conf, **local_conf):
        conf = global_conf.copy()
        conf.update(local_conf)

        def webhook_filter(app):
            return WebhookMiddleware(app, conf)
        return webhook_filter

In practice this middleware will call the URL stored on the container as
X-Webhook on all successful object uploads.

If this example was at ``/swift/common/middleware/webhook.py``, you could add
it to your proxy by creating a new filter section and adding it to the
pipeline::

    [DEFAULT]
    log_level = DEBUG
    user = <your-user-name>

    [pipeline:main]
    pipeline = healthcheck webhook proxy-server

    [filter:webhook]
    paste.filter_factory = swift.common.middleware.webhook:webhook_factory

    [filter:healthcheck]
    use = egg:swift#healthcheck

    [app:proxy-server]
    use = egg:swift#proxy

Most python packages expose middleware as entrypoints. See `PasteDeploy`_
documentation for more information about the syntax of the ``use`` option.
All middleware included with Swift is installed to support the ``egg:swift`` syntax. .. _PasteDeploy: http://pythonpaste.org/deploy/#egg-uris Middleware may advertize its availability and capabilities via Swift's :ref:`discoverability` support by using :func:`.register_swift_info`:: from swift.common.utils import register_swift_info def webhook_factory(global_conf, **local_conf): register_swift_info('webhook') def webhook_filter(app): return WebhookMiddleware(app) return webhook_filter -------------- Swift Metadata -------------- Generally speaking metadata is information about a resource that is associated with the resource but is not the data contained in the resource itself - which is set and retrieved via HTTP headers. (e.g. the "Content-Type" of a Swift object that is returned in HTTP response headers) All user resources in Swift (i.e. account, container, objects) can have user metadata associated with them. Middleware may also persist custom metadata to accounts and containers safely using System Metadata. Some core swift features which predate sysmeta have added exceptions for custom non-user metadata headers (e.g. :ref:`acls`, :ref:`large-objects`) ^^^^^^^^^^^^^ User Metadata ^^^^^^^^^^^^^ User metadata takes the form of ``X--Meta-: ``, where ```` depends on the resources type (i.e. Account, Container, Object) and ```` and ```` are set by the client. User metadata should generally be reserved for use by the client or client applications. An perfect example use-case for user metadata is `python-swiftclient`_'s ``X-Object-Meta-Mtime`` which it stores on object it uploads to implement its ``--changed`` option which will only upload files that have changed since the last upload. .. _python-swiftclient: https://github.com/openstack/python-swiftclient New middleware should avoid storing metadata within the User Metadata namespace to avoid potential conflict with existing user metadata when introducing new metadata keys. An example of legacy middleware that borrows the user metadata namespace is :ref:`tempurl`. An example of middleware which uses custom non-user metadata to avoid the user metadata namespace is :ref:`slo-doc`. .. _sysmeta: ^^^^^^^^^^^^^^^ System Metadata ^^^^^^^^^^^^^^^ System metadata takes the form of ``X--Sysmeta-: ``, where ```` depends on the resources type (i.e. Account, Container, Object) and ```` and ```` are set by trusted code running in a Swift WSGI Server. All headers on client requests in the form of ``X--Sysmeta-`` will be dropped from the request before being processed by any middleware. All headers on responses from back-end systems in the form of ``X--Sysmeta-`` will be removed after all middleware has processed the response but before the response is sent to the client. See :ref:`gatekeeper` middleware for more information. System metadata provides a means to store potentially private custom metadata with associated Swift resources in a safe and secure fashion without actually having to plumb custom metadata through the core swift servers. The incoming filtering ensures that the namespace can not be modified directly by client requests, and the outgoing filter ensures that removing middleware that uses a specific system metadata key renders it benign. New middleware should take advantage of system metadata. swift-2.7.1/doc/source/overview_architecture.rst0000664000567000056710000002265513024044354023236 0ustar jenkinsjenkins00000000000000============================ Swift Architectural Overview ============================ .. 
TODO - add links to more detailed overview in each section below. ------------ Proxy Server ------------ The Proxy Server is responsible for tying together the rest of the Swift architecture. For each request, it will look up the location of the account, container, or object in the ring (see below) and route the request accordingly. For Erasure Code type policies, the Proxy Server is also responsible for encoding and decoding object data. See :doc:`overview_erasure_code` for complete information on Erasure Code support. The public API is also exposed through the Proxy Server. A large number of failures are also handled in the Proxy Server. For example, if a server is unavailable for an object PUT, it will ask the ring for a handoff server and route there instead. When objects are streamed to or from an object server, they are streamed directly through the proxy server to or from the user -- the proxy server does not spool them. -------- The Ring -------- A ring represents a mapping between the names of entities stored on disk and their physical location. There are separate rings for accounts, containers, and one object ring per storage policy. When other components need to perform any operation on an object, container, or account, they need to interact with the appropriate ring to determine its location in the cluster. The Ring maintains this mapping using zones, devices, partitions, and replicas. Each partition in the ring is replicated, by default, 3 times across the cluster, and the locations for a partition are stored in the mapping maintained by the ring. The ring is also responsible for determining which devices are used for handoff in failure scenarios. The replicas of each partition will be isolated onto as many distinct regions, zones, servers and devices as the capacity of these failure domains allow. If there are less failure domains at a given tier than replicas of the partition assigned within a tier (e.g. a 3 replica cluster with 2 servers), or the available capacity across the failure domains within a tier are not well balanced it will not be possible to achieve both even capacity distribution (`balance`) as well as complete isolation of replicas across failure domains (`dispersion`). When this occurs the ring management tools will display a warning so that the operator can evaluate the cluster topology. Data is evenly distributed across the capacity available in the cluster as described by the devices weight. Weights can be used to balance the distribution of partitions on drives across the cluster. This can be useful, for example, when different sized drives are used in a cluster. Device weights can also be used when adding or removing capacity or failure domains to control how many partitions are reassigned during a rebalance to be moved as soon as replication bandwidth allows. .. note:: Prior to Swift 2.1.0 it was not possible to restrict partition movement by device weight when adding new failure domains, and would allow extremely unbalanced rings. The greedy dispersion algorithm is now subject to the constraints of the physical capacity in the system, but can be adjusted with-in reason via the overload option. Artificially unbalancing the partition assignment without respect to capacity can introduce unexpected full devices when a given failure domain does not physically support its share of the used capacity in the tier. 
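As a concrete illustration of how weights and overload are expressed (the
builder file name, device names, weights and the overload value below are only
examples, not recommendations), an operator might build a small ring like
this::

    swift-ring-builder object.builder create 10 3 1
    swift-ring-builder object.builder add r1z1-192.168.1.1:6000/sdb1 100
    swift-ring-builder object.builder add r1z2-192.168.1.2:6000/sdb1 100
    swift-ring-builder object.builder add r1z3-192.168.1.3:6000/sdb1 50
    swift-ring-builder object.builder set_overload 0.1
    swift-ring-builder object.builder rebalance

Here the third device receives roughly half as many partitions as the others
because of its lower weight, and the overload value allows a small, bounded
deviation from perfect balance in order to keep replicas dispersed across
failure domains.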
When partitions need to be moved around (for example if a device is added to the cluster), the ring ensures that a minimum number of partitions are moved at a time, and only one replica of a partition is moved at a time. The ring is used by the Proxy server and several background processes (like replication). ---------------- Storage Policies ---------------- Storage Policies provide a way for object storage providers to differentiate service levels, features and behaviors of a Swift deployment. Each Storage Policy configured in Swift is exposed to the client via an abstract name. Each device in the system is assigned to one or more Storage Policies. This is accomplished through the use of multiple object rings, where each Storage Policy has an independent object ring, which may include a subset of hardware implementing a particular differentiation. For example, one might have the default policy with 3x replication, and create a second policy which, when applied to new containers only uses 2x replication. Another might add SSDs to a set of storage nodes and create a performance tier storage policy for certain containers to have their objects stored there. Yet another might be the use of Erasure Coding to define a cold-storage tier. This mapping is then exposed on a per-container basis, where each container can be assigned a specific storage policy when it is created, which remains in effect for the lifetime of the container. Applications require minimal awareness of storage policies to use them; once a container has been created with a specific policy, all objects stored in it will be done so in accordance with that policy. The Storage Policies feature is implemented throughout the entire code base so it is an important concept in understanding Swift architecture. ------------- Object Server ------------- The Object Server is a very simple blob storage server that can store, retrieve and delete objects stored on local devices. Objects are stored as binary files on the filesystem with metadata stored in the file's extended attributes (xattrs). This requires that the underlying filesystem choice for object servers support xattrs on files. Some filesystems, like ext3, have xattrs turned off by default. Each object is stored using a path derived from the object name's hash and the operation's timestamp. Last write always wins, and ensures that the latest object version will be served. A deletion is also treated as a version of the file (a 0 byte file ending with ".ts", which stands for tombstone). This ensures that deleted files are replicated correctly and older versions don't magically reappear due to failure scenarios. ---------------- Container Server ---------------- The Container Server's primary job is to handle listings of objects. It doesn't know where those object's are, just what objects are in a specific container. The listings are stored as sqlite database files, and replicated across the cluster similar to how objects are. Statistics are also tracked that include the total number of objects, and total storage usage for that container. -------------- Account Server -------------- The Account Server is very similar to the Container Server, excepting that it is responsible for listings of containers rather than objects. ----------- Replication ----------- Replication is designed to keep the system in a consistent state in the face of temporary error conditions like network outages or drive failures. 
The replication processes compare local data with each remote copy to ensure they all contain the latest version. Object replication uses a hash list to quickly compare subsections of each partition, and container and account replication use a combination of hashes and shared high water marks. Replication updates are push based. For object replication, updating is just a matter of rsyncing files to the peer. Account and container replication push missing records over HTTP or rsync whole database files. The replicator also ensures that data is removed from the system. When an item (object, container, or account) is deleted, a tombstone is set as the latest version of the item. The replicator will see the tombstone and ensure that the item is removed from the entire system. -------------- Reconstruction -------------- The reconstructor is used by Erasure Code policies and is analogous to the replicator for Replication type policies. See :doc:`overview_erasure_code` for complete information on both Erasure Code support as well as the reconstructor. -------- Updaters -------- There are times when container or account data can not be immediately updated. This usually occurs during failure scenarios or periods of high load. If an update fails, the update is queued locally on the filesystem, and the updater will process the failed updates. This is where an eventual consistency window will most likely come in to play. For example, suppose a container server is under load and a new object is put in to the system. The object will be immediately available for reads as soon as the proxy server responds to the client with success. However, the container server did not update the object listing, and so the update would be queued for a later update. Container listings, therefore, may not immediately contain the object. In practice, the consistency window is only as large as the frequency at which the updater runs and may not even be noticed as the proxy server will route listing requests to the first container server which responds. The server under load may not be the one that serves subsequent listing requests -- one of the other two replicas may handle the listing. -------- Auditors -------- Auditors crawl the local server checking the integrity of the objects, containers, and accounts. If corruption is found (in the case of bit rot, for example), the file is quarantined, and replication will replace the bad file from another replica. If other errors are found they are logged (for example, an object's listing can't be found on any container server it should be). swift-2.7.1/doc/source/db.rst0000664000567000056710000000056313024044352017203 0ustar jenkinsjenkins00000000000000.. _account_and_container_db: *************************** Account DB and Container DB *************************** .. _db: DB == .. automodule:: swift.common.db :members: :undoc-members: :show-inheritance: .. _db-replicator: DB replicator ============= .. 
automodule:: swift.common.db_replicator :members: :undoc-members: :show-inheritance: swift-2.7.1/doc/source/ops_runbook/0000775000567000056710000000000013024044470020421 5ustar jenkinsjenkins00000000000000swift-2.7.1/doc/source/ops_runbook/troubleshooting.rst0000664000567000056710000002515013024044354024406 0ustar jenkinsjenkins00000000000000==================== Troubleshooting tips ==================== Diagnose: Customer complains they receive a HTTP status 500 when trying to browse containers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This entry is prompted by a real customer issue and exclusively focused on how that problem was identified. There are many reasons why a http status of 500 could be returned. If there are no obvious problems with the swift object store, then it may be necessary to take a closer look at the users transactions. After finding the users swift account, you can search the swift proxy logs on each swift proxy server for transactions from this user. The linux ``bzgrep`` command can be used to search all the proxy log files on a node including the ``.bz2`` compressed files. For example: .. code:: $ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l -R ssh \ -w .68.[4-11,132-139 4-11,132-139],.132.[4-11,132-139] \ 'sudo bzgrep -w AUTH_redacted-4962-4692-98fb-52ddda82a5af /var/log/swift/proxy.log*' | dshbak -c . . ---------------- .132.6 ---------------- Feb 29 08:51:57 sw-aw2az2-proxy011 proxy-server .16.132 .66.8 29/Feb/2012/08/51/57 GET /v1.0/AUTH_redacted-4962-4692-98fb-52ddda82a5af /%3Fformat%3Djson HTTP/1.0 404 - - _4f4d50c5e4b064d88bd7ab82 - - - tx429fc3be354f434ab7f9c6c4206c1dc3 - 0.0130 This shows a ``GET`` operation on the users account. .. note:: The HTTP status returned is 404, Not found, rather than 500 as reported by the user. Using the transaction ID, ``tx429fc3be354f434ab7f9c6c4206c1dc3`` you can search the swift object servers log files for this transaction ID: .. code:: $ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l -R ssh \ -w .72.[4-67|4-67],.[4-67|4-67],.[4-67|4-67],.204.[4-131] \ 'sudo bzgrep tx429fc3be354f434ab7f9c6c4206c1dc3 /var/log/swift/server.log*' | dshbak -c . . ---------------- .72.16 ---------------- Feb 29 08:51:57 sw-aw2az1-object013 account-server .132.6 - - [29/Feb/2012:08:51:57 +0000|] "GET /disk9/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" 404 - "tx429fc3be354f434ab7f9c6c4206c1dc3" "-" "-" 0.0016 "" ---------------- .31 ---------------- Feb 29 08:51:57 node-az2-object060 account-server .132.6 - - [29/Feb/2012:08:51:57 +0000|] "GET /disk6/198875/AUTH_redacted-4962- 4692-98fb-52ddda82a5af" 404 - "tx429fc3be354f434ab7f9c6c4206c1dc3" "-" "-" 0.0011 "" ---------------- .204.70 ---------------- Feb 29 08:51:57 sw-aw2az3-object0067 account-server .132.6 - - [29/Feb/2012:08:51:57 +0000|] "GET /disk6/198875/AUTH_redacted-4962- 4692-98fb-52ddda82a5af" 404 - "tx429fc3be354f434ab7f9c6c4206c1dc3" "-" "-" 0.0014 "" .. note:: The 3 GET operations to 3 different object servers that hold the 3 replicas of this users account. Each ``GET`` returns a HTTP status of 404, Not found. Next, use the ``swift-get-nodes`` command to determine exactly where the user's account data is stored: .. 
code:: $ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_redacted-4962-4692-98fb-52ddda82a5af Account AUTH_redacted-4962-4692-98fb-52ddda82a5af Container None Object None Partition 198875 Hash 1846d99185f8a0edaf65cfbf37439696 Server:Port Device .31:6002 disk6 Server:Port Device .204.70:6002 disk6 Server:Port Device .72.16:6002 disk9 Server:Port Device .204.64:6002 disk11 [Handoff] Server:Port Device .26:6002 disk11 [Handoff] Server:Port Device .72.27:6002 disk11 [Handoff] curl -I -XHEAD "`http://.31:6002/disk6/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" `_ curl -I -XHEAD "`http://.204.70:6002/disk6/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" `_ curl -I -XHEAD "`http://.72.16:6002/disk9/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" `_ curl -I -XHEAD "`http://.204.64:6002/disk11/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" `_ # [Handoff] curl -I -XHEAD "`http://.26:6002/disk11/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" `_ # [Handoff] curl -I -XHEAD "`http://.72.27:6002/disk11/198875/AUTH_redacted-4962-4692-98fb-52ddda82a5af" `_ # [Handoff] ssh .31 "ls -lah /srv/node/disk6/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" ssh .204.70 "ls -lah /srv/node/disk6/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" ssh .72.16 "ls -lah /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" ssh .204.64 "ls -lah /srv/node/disk11/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" # [Handoff] ssh .26 "ls -lah /srv/node/disk11/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" # [Handoff] ssh .72.27 "ls -lah /srv/node/disk11/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/" # [Handoff] Check each of the primary servers, .31, .204.70 and .72.16, for this users account. For example on .72.16: .. code:: $ ls -lah /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/ total 1.0M drwxrwxrwx 2 swift swift 98 2012-02-23 14:49 . drwxrwxrwx 3 swift swift 45 2012-02-03 23:28 .. -rw------- 1 swift swift 15K 2012-02-23 14:49 1846d99185f8a0edaf65cfbf37439696.db -rw-rw-rw- 1 swift swift 0 2012-02-23 14:49 1846d99185f8a0edaf65cfbf37439696.db.pending So this users account db, an sqlite db is present. Use sqlite to checkout the account: .. code:: $ sudo cp /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/1846d99185f8a0edaf65cfbf37439696.db /tmp $ sudo sqlite3 /tmp/1846d99185f8a0edaf65cfbf37439696.db sqlite> .mode line sqlite> select * from account_stat; account = AUTH_redacted-4962-4692-98fb-52ddda82a5af created_at = 1328311738.42190 put_timestamp = 1330000873.61411 delete_timestamp = 1330001026.00514 container_count = 0 object_count = 0 bytes_used = 0 hash = eb7e5d0ea3544d9def940b19114e8b43 id = 2de8c8a8-cef9-4a94-a421-2f845802fe90 status = DELETED status_changed_at = 1330001026.00514 metadata = .. note: The status is ``DELETED``. So this account was deleted. This explains why the GET operations are returning 404, not found. Check the account delete date/time: .. code:: $ python >>> import time >>> time.ctime(1330001026.00514) 'Thu Feb 23 12:43:46 2012' Next try and find the ``DELETE`` operation for this account in the proxy server logs: .. code:: $ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l -R ssh \ -w .68.[4-11,132-139 4-11,132-139],.132.[4-11,132-139|4-11,132-139] \ 'sudo bzgrep AUTH_redacted-4962-4692-98fb-52ddda82a5af /var/log/swift/proxy.log* \ | grep -w DELETE | awk "{print $3,$10,$12}"' |- dshbak -c . . 
Feb 23 12:43:46 sw-aw2az2-proxy001 proxy-server .66.7 23/Feb/2012/12/43/46 DELETE /v1.0/AUTH_redacted-4962-4692-98fb- 52ddda82a5af/ HTTP/1.0 204 - Apache-HttpClient/4.1.2%20%28java%201.5%29 _4f458ee4e4b02a869c3aad02 - - - tx4471188b0b87406899973d297c55ab53 - 0.0086 From this you can see the operation that resulted in the account being deleted. Procedure: Deleting objects ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Simple case - deleting small number of objects and containers ------------------------------------------------------------- .. note:: ``swift-direct`` is specific to the Hewlett Packard Enterprise Helion Public Cloud. Use ``swiftly`` as an alternative. .. note:: Object and container names are in UTF8. Swift direct accepts UTF8 directly, not URL-encoded UTF8 (the REST API expects UTF8 and then URL-encoded). In practice cut and paste of foreign language strings to a terminal window will produce the right result. Hint: Use the ``head`` command before any destructive commands. To delete a small number of objects, log into any proxy node and proceed as follows: Examine the object in question: .. code:: $ sudo -u swift /opt/hp/swift/bin/swift-direct head 132345678912345 container_name obj_name See if ``X-Object-Manifest`` or ``X-Static-Large-Object`` is set, then this is the manifest object and segment objects may be in another container. If the ``X-Object-Manifest`` attribute is set, you need to find the name of the objects this means it is a DLO. For example, if ``X-Object-Manifest`` is ``container2/seg-blah``, list the contents of the container container2 as follows: .. code:: $ sudo -u swift /opt/hp/swift/bin/swift-direct show 132345678912345 container2 Pick out the objects whose names start with ``seg-blah``. Delete the segment objects as follows: .. code:: $ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah01 $ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah02 etc If ``X-Static-Large-Object`` is set, you need to read the contents. Do this by: - Using swift-get-nodes to get the details of the object's location. - Change the ``-X HEAD`` to ``-X GET`` and run ``curl`` against one copy. - This lists a json body listing containers and object names - Delete the objects as described above for DLO segments Once the segments are deleted, you can delete the object using ``swift-direct`` as described above. Finally, use ``swift-direct`` to delete the container. Procedure: Decommissioning swift nodes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Should Swift nodes need to be decommissioned (e.g.,, where they are being re-purposed), it is very important to follow the following steps. #. In the case of object servers, follow the procedure for removing the node from the rings. #. In the case of swift proxy servers, have the network team remove the node from the load balancers. #. Open a network ticket to have the node removed from network firewalls. #. Make sure that you remove the ``/etc/swift`` directory and everything in it. swift-2.7.1/doc/source/ops_runbook/procedures.rst0000664000567000056710000003413213024044354023332 0ustar jenkinsjenkins00000000000000================================= Software configuration procedures ================================= .. _fix_broken_gpt_table: Fix broken GPT table (broken disk partition) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - If a GPT table is broken, a message like the following should be observed when the command... .. code:: $ sudo parted -l - ... is run. .. code:: ... 
Error: The backup GPT table is corrupt, but the primary appears OK, so that will be used. OK/Cancel? #. To fix this, firstly install the ``gdisk`` program to fix this: .. code:: $ sudo aptitude install gdisk #. Run ``gdisk`` for the particular drive with the damaged partition: .. code: $ sudo gdisk /dev/sd*a-l* GPT fdisk (gdisk) version 0.6.14 Caution: invalid backup GPT header, but valid main header; regenerating backup header from main header. Warning! One or more CRCs don't match. You should repair the disk! Partition table scan: MBR: protective BSD: not present APM: not present GPT: damaged /dev/sd ***************************************************************************** Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk verification and recovery are STRONGLY recommended. ***************************************************************************** #. On the command prompt, type ``r`` (recovery and transformation options), followed by ``d`` (use main GPT header) , ``v`` (verify disk) and finally ``w`` (write table to disk and exit). Will also need to enter ``Y`` when prompted in order to confirm actions. .. code:: Command (? for help): r Recovery/transformation command (? for help): d Recovery/transformation command (? for help): v Caution: The CRC for the backup partition table is invalid. This table may be corrupt. This program will automatically create a new backup partition table when you save your partitions. Caution: Partition 1 doesn't begin on a 8-sector boundary. This may result in degraded performance on some modern (2009 and later) hard disks. Caution: Partition 2 doesn't begin on a 8-sector boundary. This may result in degraded performance on some modern (2009 and later) hard disks. Caution: Partition 3 doesn't begin on a 8-sector boundary. This may result in degraded performance on some modern (2009 and later) hard disks. Identified 1 problems! Recovery/transformation command (? for help): w Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!! Do you want to proceed, possibly destroying your data? (Y/N): Y OK; writing new GUID partition table (GPT). The operation has completed successfully. #. Running the command: .. code:: $ sudo parted /dev/sd# #. Should now show that the partition is recovered and healthy again. #. Finally, uninstall ``gdisk`` from the node: .. code:: $ sudo aptitude remove gdisk .. _fix_broken_xfs_filesystem: Procedure: Fix broken XFS filesystem ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. A filesystem may be corrupt or broken if the following output is observed when checking its label: .. code:: $ sudo xfs_admin -l /dev/sd# cache_node_purge: refcount was 1, not zero (node=0x25d5ee0) xfs_admin: cannot read root inode (117) cache_node_purge: refcount was 1, not zero (node=0x25d92b0) xfs_admin: cannot read realtime bitmap inode (117) bad sb magic # 0 in AG 1 failed to read label in AG 1 #. Run the following commands to remove the broken/corrupt filesystem and replace. (This example uses the filesystem ``/dev/sdb2``) Firstly need to replace the partition: .. code:: $ sudo parted GNU Parted 2.3 Using /dev/sda Welcome to GNU Parted! Type 'help' to view a list of commands. 
(parted) select /dev/sdb Using /dev/sdb (parted) p Model: HP LOGICAL VOLUME (scsi) Disk /dev/sdb: 2000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 17.4kB 1024MB 1024MB ext3 boot 2 1024MB 1751GB 1750GB xfs sw-aw2az1-object045-disk1 3 1751GB 2000GB 249GB lvm (parted) rm 2 (parted) mkpart primary 2 -1 Warning: You requested a partition from 2000kB to 2000GB. The closest location we can manage is 1024MB to 1751GB. Is this still acceptable to you? Yes/No? Yes Warning: The resulting partition is not properly aligned for best performance. Ignore/Cancel? Ignore (parted) p Model: HP LOGICAL VOLUME (scsi) Disk /dev/sdb: 2000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 17.4kB 1024MB 1024MB ext3 boot 2 1024MB 1751GB 1750GB xfs primary 3 1751GB 2000GB 249GB lvm (parted) quit #. Next step is to scrub the filesystem and format: .. code:: $ sudo dd if=/dev/zero of=/dev/sdb2 bs=$((1024*1024)) count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00480617 s, 218 MB/s $ sudo /sbin/mkfs.xfs -f -i size=1024 /dev/sdb2 meta-data=/dev/sdb2 isize=1024 agcount=4, agsize=106811524 blks = sectsz=512 attr=2, projid32bit=0 data = bsize=4096 blocks=427246093, imaxpct=5 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 log =internal log bsize=4096 blocks=208616, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 #. You should now label and mount your filesystem. #. Can now check to see if the filesystem is mounted using the command: .. code:: $ mount .. _checking_if_account_ok: Procedure: Checking if an account is okay ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: ``swift-direct`` is only available in the HPE Helion Public Cloud. Use ``swiftly`` as an alternate (or use ``swift-get-nodes`` as explained here). You must know the tenant/project ID. You can check if the account is okay as follows from a proxy. .. code:: $ sudo -u swift /opt/hp/swift/bin/swift-direct show AUTH_ The response will either be similar to a swift list of the account containers, or an error indicating that the resource could not be found. Alternatively, you can use ``swift-get-nodes`` to find the account database files. Run the following on a proxy: .. code:: $ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_ The response will print curl/ssh commands that will list the replicated account databases. Use the indicated ``curl`` or ``ssh`` commands to check the status and existence of the account. Procedure: Getting swift account stats ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: ``swift-direct`` is specific to the HPE Helion Public Cloud. Go look at ``swifty`` for an alternate or use ``swift-get-nodes`` as explained in :ref:`checking_if_account_ok`. This procedure describes how you determine the swift usage for a given swift account, that is the number of containers, number of objects and total bytes used. To do this you will need the project ID. Log onto one of the swift proxy servers. Use swift-direct to show this accounts usage: .. code:: $ sudo -u swift /opt/hp/swift/bin/swift-direct show AUTH_ Status: 200 Content-Length: 0 Accept-Ranges: bytes X-Timestamp: 1379698586.88364 X-Account-Bytes-Used: 67440225625994 X-Account-Container-Count: 1 Content-Type: text/plain; charset=utf-8 X-Account-Object-Count: 8436776 Status: 200 name: my_container count: 8436776 bytes: 67440225625994 This account has 1 container. 
That container has 8436776 objects. The total bytes used is 67440225625994. Procedure: Revive a deleted account ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Swift accounts are normally not recreated. If a tenant/project is deleted, the account can then be deleted. If the user wishes to use Swift again, the normal process is to create a new tenant/project -- and hence a new Swift account. However, if the Swift account is deleted, but the tenant/project is not deleted from Keystone, the user can no longer access the account. This is because the account is marked deleted in Swift. You can revive the account as described in this process. .. note:: The containers and objects in the "old" account cannot be listed anymore. In addition, if the Account Reaper process has not finished reaping the containers and objects in the "old" account, these are effectively orphaned and it is virtually impossible to find and delete them to free up disk space. The solution is to delete the account database files and re-create the account as follows: #. You must know the tenant/project ID. The account name is AUTH_. In this example, the tenant/project is is ``4ebe3039674d4864a11fe0864ae4d905`` so the Swift account name is ``AUTH_4ebe3039674d4864a11fe0864ae4d905``. #. Use ``swift-get-nodes`` to locate the account's database files (on three servers). The output has been truncated so we can focus on the import pieces of data: .. code:: $ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_4ebe3039674d4864a11fe0864ae4d905 ... curl -I -XHEAD "http://192.168.245.5:6002/disk1/3934/AUTH_4ebe3039674d4864a11fe0864ae4d905" curl -I -XHEAD "http://192.168.245.3:6002/disk0/3934/AUTH_4ebe3039674d4864a11fe0864ae4d905" curl -I -XHEAD "http://192.168.245.4:6002/disk1/3934/AUTH_4ebe3039674d4864a11fe0864ae4d905" ... Use your own device location of servers: such as "export DEVICE=/srv/node" ssh 192.168.245.5 "ls -lah ${DEVICE:-/srv/node*}/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052" ssh 192.168.245.3 "ls -lah ${DEVICE:-/srv/node*}/disk0/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052" ssh 192.168.245.4 "ls -lah ${DEVICE:-/srv/node*}/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052" ... note: `/srv/node*` is used as default value of `devices`, the real value is set in the config file on each storage node. #. Before proceeding check that the account is really deleted by using curl. Execute the commands printed by ``swift-get-nodes``. For example: .. code:: $ curl -I -XHEAD "http://192.168.245.5:6002/disk1/3934/AUTH_4ebe3039674d4864a11fe0864ae4d905" HTTP/1.1 404 Not Found Content-Length: 0 Content-Type: text/html; charset=utf-8 Repeat for the other two servers (192.168.245.3 and 192.168.245.4). A ``404 Not Found`` indicates that the account is deleted (or never existed). If you get a ``204 No Content`` response, do **not** proceed. #. Use the ssh commands printed by ``swift-get-nodes`` to check if database files exist. For example: .. code:: $ ssh 192.168.245.5 "ls -lah ${DEVICE:-/srv/node*}/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052" total 20K drwxr-xr-x 2 swift swift 110 Mar 9 10:22 . drwxr-xr-x 3 swift swift 45 Mar 9 10:18 .. -rw------- 1 swift swift 17K Mar 9 10:22 f5ecf8b40de3e1b0adb0dbe576874052.db -rw-r--r-- 1 swift swift 0 Mar 9 10:22 f5ecf8b40de3e1b0adb0dbe576874052.db.pending -rwxr-xr-x 1 swift swift 0 Mar 9 10:18 .lock Repeat for the other two servers (192.168.245.3 and 192.168.245.4). If no files exist, no further action is needed. #. 
Stop Swift processes on all nodes listed by ``swift-get-nodes`` (in this example, that is 192.168.245.3, 192.168.245.4 and 192.168.245.5). #. We recommend you make backup copies of the database files. #. Delete the database files. For example: .. code:: $ ssh 192.168.245.5 $ cd /srv/node/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052 $ sudo rm * Repeat for the other two servers (192.168.245.3 and 192.168.245.4). #. Restart Swift on all three servers. At this stage, the account is fully deleted. If you enable the auto-create option, the next time the user attempts to access the account, the account will be created. You may also use swiftly to recreate the account. Procedure: Temporarily stop load balancers from directing traffic to a proxy server ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can stop the load balancers from sending requests to a proxy server as follows. This can be useful when a proxy is misbehaving but you need Swift running to help diagnose the problem. By removing the proxy from the load balancers, customers are not impacted by the misbehaving proxy. #. Ensure that in /etc/swift/proxy-server.conf the ``disable_path`` variable is set to ``/etc/swift/disabled-by-file``. #. Log onto the proxy node. #. Shut down Swift as follows: .. code:: sudo swift-init proxy shutdown .. note:: Shutdown, not stop. #. Create the ``/etc/swift/disabled-by-file`` file. For example: .. code:: sudo touch /etc/swift/disabled-by-file #. Optionally, restart Swift: .. code:: sudo swift-init proxy start It works because the healthcheck middleware looks for /etc/swift/disabled-by-file. If it exists, the middleware will return 503/error instead of 200/OK. This means the load balancer should stop sending traffic to the proxy. Procedure: Ad-Hoc disk performance test ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can get an idea of whether a disk drive is performing well as follows: .. code:: sudo dd bs=1M count=256 if=/dev/zero conv=fdatasync of=/srv/node/disk11/remember-to-delete-this-later You can expect ~600MB/sec. If you get a low number, repeat the test many times, as Swift itself may also read or write to the disk, hence giving a lower number. swift-2.7.1/doc/source/ops_runbook/maintenance.rst0000664000567000056710000004000413024044354023434 0ustar jenkinsjenkins00000000000000================== Server maintenance ================== General assumptions ~~~~~~~~~~~~~~~~~~~ - It is assumed that anyone attempting to replace hardware components will have already read and understood the appropriate maintenance and service guides. - It is assumed that where servers need to be taken off-line for hardware replacement, that this will be done in series, bringing the server back on-line before taking the next off-line. - It is assumed that the operations directed procedure will be used for identifying hardware for replacement. Assessing the health of swift ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can run the swift-recon tool on a Swift proxy node to get a quick check of how Swift is doing. Please note that the numbers below are necessarily somewhat subjective. Sometimes parameters for which we say 'low values are good' will have pretty high values for a time. Often if you wait a while things get better. For example: .. code:: sudo swift-recon -rla =============================================================================== [2012-03-10 12:57:21] Checking async pendings on 384 hosts...
Async stats: low: 0, high: 1, avg: 0, total: 1 =============================================================================== [2012-03-10 12:57:22] Checking replication times on 384 hosts... [Replication Times] shortest: 1.4113877813, longest: 36.8293570836, avg: 4.86278064749 =============================================================================== [2012-03-10 12:57:22] Checking load avg's on 384 hosts... [5m load average] lowest: 2.22, highest: 9.5, avg: 4.59578125 [15m load average] lowest: 2.36, highest: 9.45, avg: 4.62622395833 [1m load average] lowest: 1.84, highest: 9.57, avg: 4.5696875 =============================================================================== In the example above we ask for information on replication times (-r), load averages (-l) and async pendings (-a). This is a healthy Swift system. Rules-of-thumb for 'good' recon output are: - Nodes that respond are up and running Swift. If all nodes respond, that is a good sign. But some nodes may time out. For example: .. code:: -> [http://.29:6000/recon/load:] -> [http://.31:6000/recon/load:] - That could be okay or could require investigation. - Low values (say < 10 for high and average) for async pendings are good. Higher values occur when disks are down and/or when the system is heavily loaded. Many simultaneous PUTs to the same container can drive async pendings up. This may be normal, and may resolve itself after a while. If it persists, one way to track down the problem is to find a node with high async pendings (with ``swift-recon -av | sort -n -k4``), then check its Swift logs, Often async pendings are high because a node cannot write to a container on another node. Often this is because the node or disk is offline or bad. This may be okay if we know about it. - Low values for replication times are good. These values rise when new rings are pushed, and when nodes and devices are brought back on line. - Our 'high' load average values are typically in the 9-15 range. If they are a lot bigger it is worth having a look at the systems pushing the average up. Run ``swift-recon -av`` to get the individual averages. To sort the entries with the highest at the end, run ``swift-recon -av | sort -n -k4``. For comparison here is the recon output for the same system above when two entire racks of Swift are down: .. code:: [2012-03-10 16:56:33] Checking async pendings on 384 hosts... -> http://.22:6000/recon/async: -> http://.18:6000/recon/async: -> http://.16:6000/recon/async: -> http://.13:6000/recon/async: -> http://.30:6000/recon/async: -> http://.6:6000/recon/async: ......... -> http://.5:6000/recon/async: -> http://.15:6000/recon/async: -> http://.9:6000/recon/async: -> http://.27:6000/recon/async: -> http://.4:6000/recon/async: -> http://.8:6000/recon/async: Async stats: low: 243, high: 659, avg: 413, total: 132275 =============================================================================== [2012-03-10 16:57:48] Checking replication times on 384 hosts... -> http://.22:6000/recon/replication: -> http://.18:6000/recon/replication: -> http://.16:6000/recon/replication: -> http://.13:6000/recon/replication: -> http://.30:6000/recon/replication: -> http://.6:6000/recon/replication: ............ 
-> http://.5:6000/recon/replication: -> http://.15:6000/recon/replication: -> http://.9:6000/recon/replication: -> http://.27:6000/recon/replication: -> http://.4:6000/recon/replication: -> http://.8:6000/recon/replication: [Replication Times] shortest: 1.38144306739, longest: 112.620954418, avg: 10.2859475361 =============================================================================== [2012-03-10 16:59:03] Checking load avg's on 384 hosts... -> http://.22:6000/recon/load: -> http://.18:6000/recon/load: -> http://.16:6000/recon/load: -> http://.13:6000/recon/load: -> http://.30:6000/recon/load: -> http://.6:6000/recon/load: ............ -> http://.15:6000/recon/load: -> http://.9:6000/recon/load: -> http://.27:6000/recon/load: -> http://.4:6000/recon/load: -> http://.8:6000/recon/load: [5m load average] lowest: 1.71, highest: 4.91, avg: 2.486375 [15m load average] lowest: 1.79, highest: 5.04, avg: 2.506125 [1m load average] lowest: 1.46, highest: 4.55, avg: 2.4929375 =============================================================================== .. note:: The replication times and load averages are within reasonable parameters, even with 80 object stores down. Async pendings, however, are quite high. This is due to the fact that the containers on the servers which are down cannot be updated. When those servers come back up, async pendings should drop. If async pendings were at this level without an explanation, we have a problem. Recon examples ~~~~~~~~~~~~~~ Here is an example of noting and tracking down a problem with recon. Running recon shows some async pendings: .. code:: bob@notso:~/swift-1.4.4/swift$ ssh -q .132.7 sudo swift-recon -alr =============================================================================== [2012-03-14 17:25:55] Checking async pendings on 384 hosts... Async stats: low: 0, high: 23, avg: 8, total: 3356 =============================================================================== [2012-03-14 17:25:55] Checking replication times on 384 hosts... [Replication Times] shortest: 1.49303831657, longest: 39.6982825994, avg: 4.2418222066 =============================================================================== [2012-03-14 17:25:56] Checking load avg's on 384 hosts... [5m load average] lowest: 2.35, highest: 8.88, avg: 4.45911458333 [15m load average] lowest: 2.41, highest: 9.11, avg: 4.504765625 [1m load average] lowest: 1.95, highest: 8.56, avg: 4.40588541667 =============================================================================== Why? Running recon again with -av swift (not shown here) tells us that the node with the highest (23) is .72.61. Looking at the log files on .72.61 we see: ..
code:: souzab@:~$ sudo tail -f /var/log/swift/background.log | - grep -i ERROR Mar 14 17:28:06 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.119', 'id': 5481, 'meta': '', 'device': 'disk6', 'port': 6001} Mar 14 17:28:06 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.119', 'id': 5481, 'meta': '', 'device': 'disk6', 'port': 6001} Mar 14 17:28:09 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.20', 'id': 2311, 'meta': '', 'device': 'disk5', 'port': 6001} Mar 14 17:28:11 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.20', 'id': 2311, 'meta': '', 'device': 'disk5', 'port': 6001} Mar 14 17:28:13 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.119', 'id': 5481, 'meta': '', 'device': 'disk6', 'port': 6001} Mar 14 17:28:13 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.119', 'id': 5481, 'meta': '', 'device': 'disk6', 'port': 6001} Mar 14 17:28:15 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.20', 'id': 2311, 'meta': '', 'device': 'disk5', 'port': 6001} Mar 14 17:28:15 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.20', 'id': 2311, 'meta': '', 'device': 'disk5', 'port': 6001} Mar 14 17:28:19 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.20', 'id': 2311, 'meta': '', 'device': 'disk5', 'port': 6001} Mar 14 17:28:19 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.20', 'id': 2311, 'meta': '', 'device': 'disk5', 'port': 6001} Mar 14 17:28:20 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.119', 'id': 5481, 'meta': '', 'device': 'disk6', 'port': 6001} Mar 14 17:28:21 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.20', 'id': 2311, 'meta': '', 'device': 'disk5', 'port': 6001} Mar 14 17:28:21 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.20', 'id': 2311, 'meta': '', 'device': 'disk5', 'port': 6001} Mar 14 17:28:22 container-replicator ERROR Remote drive not mounted {'zone': 5, 'weight': 1952.0, 'ip': '.204.20', 'id': 2311, 'meta': '', 'device': 'disk5', 'port': 6001} That is why this node has a lot of async pendings: a bunch of disks that are not mounted on and . There may be other issues, but clearing this up will likely drop the async pendings a fair bit, as other nodes will be having the same problem. Assessing the availability risk when multiple storage servers are down ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: This procedure will tell you if you have a problem, however, in practice you will find that you will not use this procedure frequently. If three storage nodes (or, more precisely, three disks on three different storage nodes) are down, there is a small but nonzero probability that user objects, containers, or accounts will not be available. Procedure --------- .. note:: swift has three rings: one each for objects, containers and accounts. This procedure should be run three times, each time specifying the appropriate ``*.builder`` file. #. 
Determine whether all three nodes are in different Swift zones by running the ring builder on a proxy node to determine which zones the storage nodes are in. For example: .. code:: % sudo swift-ring-builder /etc/swift/object.builder /etc/swift/object.builder, build version 1467 2097152 partitions, 3 replicas, 5 zones, 1320 devices, 0.02 balance The minimum number of hours before a partition can be reassigned is 24 Devices: id zone ip address port name weight partitions balance meta 0 1 .4 6000 disk0 1708.00 4259 -0.00 1 1 .4 6000 disk1 1708.00 4260 0.02 2 1 .4 6000 disk2 1952.00 4868 0.01 3 1 .4 6000 disk3 1952.00 4868 0.01 4 1 .4 6000 disk4 1952.00 4867 -0.01 #. Here, node .4 is in zone 1. If two or more of the three nodes under consideration are in the same Swift zone, they do not have any ring partitions in common; there is little/no data availability risk if all three nodes are down. #. If the nodes are in three distinct Swift zones it is necessary to determine whether the nodes have ring partitions in common. Run ``swift-ring-builder`` again, this time with the ``list_parts`` option and specify the nodes under consideration. For example: .. code:: % sudo swift-ring-builder /etc/swift/object.builder list_parts .8 .15 .72.2 Partition Matches 91 2 729 2 3754 2 3769 2 3947 2 5818 2 7918 2 8733 2 9509 2 10233 2 #. The ``list_parts`` option to the ring builder indicates how many ring partitions the nodes have in common. If, as in this case, the first entry in the list has a ‘Matches’ column of 2 or less, there is no data availability risk if all three nodes are down. #. If the ‘Matches’ column has entries equal to 3, there is some data availability risk if all three nodes are down. The risk is generally small, and is proportional to the number of entries that have a 3 in the Matches column. For example: .. code:: Partition Matches 26865 3 362367 3 745940 3 778715 3 797559 3 820295 3 822118 3 839603 3 852332 3 855965 3 858016 3 #. A quick way to count the number of rows with 3 matches is: .. code:: % sudo swift-ring-builder /etc/swift/object.builder list_parts .8 .15 .72.2 | grep "3$" | wc -l 30 #. In this case the nodes have 30 out of a total of 2097152 partitions in common; about 0.001%. The risk is small but nonzero. Recall that a partition is simply a portion of the ring mapping space, not actual data. So having partitions in common is a necessary but not sufficient condition for data unavailability. .. note:: We should not bring down a node for repair if it shows Matches entries of 3 with other nodes that are also down. If three nodes that have 3 partitions in common are all down, there is a nonzero probability that data are unavailable and we should work to bring some or all of the nodes up ASAP. Swift startup/shutdown ~~~~~~~~~~~~~~~~~~~~~~ - Use reload - not stop/start/restart. - Try to roll sets of servers (especially proxy) in groups of less than 20% of your servers.swift-2.7.1/doc/source/ops_runbook/diagnose.rst0000664000567000056710000013344413024044354022756 0ustar jenkinsjenkins00000000000000================================== Identifying issues and resolutions ================================== Is the system up? ----------------- If you have a report that Swift is down, perform the following basic checks: #. Run swift functional tests. #. From a server in your data center, use ``curl`` to check ``/healthcheck`` (see below). #. If you have a monitoring system, check your monitoring system. #. Check your hardware load balancer infrastructure. #. Run swift-recon on a proxy node.
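As a quick illustration, the healthcheck and recon checks from this list can be combined into a small shell snippet run from a proxy node. This is only a sketch; ``https://swift.example.com`` is a placeholder for your own load-balanced Swift endpoint.

.. code::

   # Hit the public healthcheck endpoint (should print "OK").
   curl -sk https://swift.example.com/healthcheck ; echo

   # Quick cluster overview from a proxy node: replication times,
   # load averages and async pendings, as used elsewhere in this runbook.
   sudo swift-recon -rla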
Functional tests usage ----------------------- We would recommend that you set up the functional tests to run against your production system. Run regularly, they can be a useful tool to validate that the system is configured correctly. In addition, they can provide early warning about failures in your system (if the functional tests stop working, user applications will also probably stop working). A script for running the functional tests is located in ``swift/.functests``. External monitoring ------------------- We use pingdom.com to monitor the external Swift API. We suggest the following: - Do a GET on ``/healthcheck`` - Create a container, make it public (x-container-read: .r*,.rlistings), create a small file in the container; do a GET on the object Diagnose: General approach -------------------------- - Look at service status in your monitoring system. - In addition to system monitoring tools and issue logging by users, swift errors will often result in log entries (see :ref:`swift_logs`). - Look at any logs your deployment tool produces. - Log files should be reviewed for error signatures (see below) that may point to a known issue, or root cause issues reported by the diagnostics tools, prior to escalation. Dependencies ^^^^^^^^^^^^ The Swift software is dependent on overall system health. Operating system level issues with network connectivity, domain name resolution, user management, hardware and system configuration and capacity in terms of memory and free disk space, may result in secondary Swift issues. System level issues should be resolved prior to diagnosis of swift issues. Diagnose: Swift-dispersion-report --------------------------------- The swift-dispersion-report is a useful tool to gauge the general health of the system. Configure the ``swift-dispersion`` report to cover, at a minimum, every disk drive in your system (usually 1% coverage). See :ref:`dispersion_report` for details of how to configure and use the dispersion reporting tool. The ``swift-dispersion-report`` tool can take a long time to run, especially if any servers are down. We suggest you run it regularly (e.g., in a cron job) and save the results. This makes it easy to refer to the last report without having to wait for a long-running command to complete. Diagnose: Is system responding to /healthcheck? ----------------------------------------------- When you want to establish if a swift endpoint is running, run ``curl -k`` against https://*[ENDPOINT]*/healthcheck. .. _swift_logs: Diagnose: Interpreting messages in ``/var/log/swift/`` files ------------------------------------------------------------ .. note:: In the Hewlett Packard Enterprise Helion Public Cloud we send logs to ``proxy.log`` (proxy-server logs), ``server.log`` (object-server, account-server, container-server logs), ``background.log`` (all other servers [object-replicator, etc]). The following table lists known issues: .. list-table:: :widths: 25 25 25 25 :header-rows: 1 * - **Logfile** - **Signature** - **Issue** - **Steps to take** * - /var/log/syslog - kernel: [] sd .... [csbu:sd...] Sense Key: Medium Error - Suggests disk surface issues - Run ``swift-drive-audit`` on the target node to check for disk errors, repair disk errors * - /var/log/syslog - kernel: [] sd .... [csbu:sd...] Sense Key: Hardware Error - Suggests storage hardware issues - Run diagnostics on the target node to check for disk failures, replace failed disks * - /var/log/syslog - kernel: [] .... I/O error, dev sd.... ,sector ....
- - Run diagnostics on the target node to check for disk errors * - /var/log/syslog - pound: NULL get_thr_arg - Multiple threads woke up - Noise, safe to ignore * - /var/log/swift/proxy.log - .... ERROR .... ConnectionTimeout .... - A storage node is not responding in a timely fashion - Check if node is down, not running Swift, unconfigured, storage off-line or for network issues between the proxy and non responding node * - /var/log/swift/proxy.log - proxy-server .... HTTP/1.0 500 .... - A proxy server has reported an internal server error - Examine the logs for any errors at the time the error was reported to attempt to understand the cause of the error. * - /var/log/swift/server.log - .... ERROR .... ConnectionTimeout .... - A storage server is not responding in a timely fashion - Check if node is down, not running Swift, unconfigured, storage off-line or for network issues between the server and non responding node * - /var/log/swift/server.log - .... ERROR .... Remote I/O error: '/srv/node/disk.... - A storage device is not responding as expected - Run ``swift-drive-audit`` and check the filesystem named in the error for corruption (unmount & xfs_repair). Check if the filesystem is mounted and working. * - /var/log/swift/background.log - object-server ERROR container update failed .... Connection refused - A container server node could not be contacted - Check if node is down, not running Swift, unconfigured, storage off-line or for network issues between the server and non responding node * - /var/log/swift/background.log - object-updater ERROR with remote .... ConnectionTimeout - The remote container server is busy - If the container is very large, some errors updating it can be expected. However, this error can also occur if there is a networking issue. * - /var/log/swift/background.log - account-reaper STDOUT: .... error: ECONNREFUSED - Network connectivity issue or the target server is down. - Resolve network issue or reboot the target server * - /var/log/swift/background.log - .... ERROR .... ConnectionTimeout - A storage server is not responding in a timely fashion - The target server may be busy. However, this error can also occur if there is a networking issue. * - /var/log/swift/background.log - .... ERROR syncing .... Timeout - A timeout occurred syncing data to another node. - The target server may be busy. However, this error can also occur if there is a networking issue. * - /var/log/swift/background.log - .... ERROR Remote drive not mounted .... - A storage server disk is unavailable - Repair and remount the file system (on the remote node) * - /var/log/swift/background.log - object-replicator .... responded as unmounted - A storage server disk is unavailable - Repair and remount the file system (on the remote node) * - /var/log/swift/*.log - STDOUT: EXCEPTION IN - A unexpected error occurred - Read the Traceback details, if it matches known issues (e.g. active network/disk issues), check for re-ocurrences after the primary issues have been resolved * - /var/log/rsyncd.log - rsync: mkdir "/disk....failed: No such file or directory.... - A local storage server disk is unavailable - Run diagnostics on the node to check for a failed or unmounted disk * - /var/log/swift* - Exception: Could not bind to 0.0.0.0:6xxx - Possible Swift process restart issue. This indicates an old swift process is still running. - Restart Swift services. If some swift services are reported down, check if they left residual process behind. 
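For the last entry in the table above, a minimal sketch of checking for residual swift processes before restarting; the commands are the same ones used later in this document, and service names may need adjusting for your deployment.

.. code::

   # Look for swift processes that survived a previous restart attempt.
   ps auxww | grep swift | grep python

   # If any are left behind, stop everything, kill the stragglers by PID,
   # and then start the services again.
   sudo swift-init all shutdown
   sudo swift-init all start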
Diagnose: Parted reports the backup GPT table is corrupt -------------------------------------------------------- - If a GPT table is broken, a message like the following should be observed when the following command is run: .. code:: $ sudo parted -l .. code:: Error: The backup GPT table is corrupt, but the primary appears OK, so that will be used. OK/Cancel? To fix, go to :ref:`fix_broken_gpt_table` Diagnose: Drives diagnostic reports a FS label is not acceptable ---------------------------------------------------------------- If diagnostics reports something like "FS label: obj001dsk011 is not acceptable", it indicates that a partition has a valid disk label, but an invalid filesystem label. In such cases proceed as follows: #. Verify that the disk labels are correct: .. code:: FS=/dev/sd#1 sudo parted -l | grep object #. If partition labels are inconsistent then, resolve the disk label issues before proceeding: .. code:: sudo parted -s ${FS} name ${PART_NO} ${PART_NAME} #Partition Label #PART_NO is 1 for object disks and 3 for OS disks #PART_NAME follows the convention seen in "sudo parted -l | grep object" #. If the Filesystem label is missing then create it with care: .. code:: sudo xfs_admin -l ${FS} #Filesystem label (12 Char limit) #Check for the existence of a FS label OBJNO=<3 Length Object No.> #I.E OBJNO for sw-stbaz3-object0007 would be 007 DISKNO=<3 Length Disk No.> #I.E DISKNO for /dev/sdb would be 001, /dev/sdc would be 002 etc. sudo xfs_admin -L "obj${OBJNO}dsk${DISKNO}" ${FS} #Create a FS Label Diagnose: Failed LUNs --------------------- .. note:: The HPE Helion Public Cloud uses direct attach SmartArray controllers/drives. The information here is specific to that environment. The hpacucli utility mentioned here may be called hpssacli in your environment. The ``swift_diagnostics`` mount checks may return a warning that a LUN has failed, typically accompanied by DriveAudit check failures and device errors. Such cases are typically caused by a drive failure, and if drive check also reports a failed status for the underlying drive, then follow the procedure to replace the disk. Otherwise the lun can be re-enabled as follows: #. Generate a hpssacli diagnostic report. This report allows the DC team to troubleshoot potential cabling or hardware issues so it is imperative that you run it immediately when troubleshooting a failed LUN. You will come back later and grep this file for more details, but just generate it for now. .. code:: sudo hpssacli controller all diag file=/tmp/hpacu.diag ris=on xml=off zip=off Export the following variables using the below instructions before proceeding further. #. Print a list of logical drives and their numbers and take note of the failed drive's number and array value (example output: "array A logicaldrive 1..." would be exported as LDRIVE=1): .. code:: sudo hpssacli controller slot=1 ld all show #. Export the number of the logical drive that was retrieved from the previous command into the LDRIVE variable: .. code:: export LDRIVE= #. Print the array value and Port:Box:Bay for all drives and take note of the Port:Box:Bay for the failed drive (example output: " array A physicaldrive 2C:1:1..." would be exported as PBOX=2C:1:1). Match the array value of this output with the array value obtained from the previous command to be sure you are working on the same drive. 
Also, the array value usually matches the device name (For example, /dev/sdc in the case of "array c"), but we will run a different command to be sure we are operating on the correct device. .. code:: sudo hpssacli controller slot=1 pd all show .. note:: Sometimes a LUN may appear to be failed as it is not and cannot be mounted but the hpssacli/parted commands may show no problems with the LUNS/drives. In this case, the filesystem may be corrupt and may be necessary to run ``sudo xfs_check /dev/sd[a-l][1-2]`` to see if there is an xfs issue. The results of running this command may require that ``xfs_repair`` is run. #. Export the Port:Box:Bay for the failed drive into the PBOX variable: .. code:: export PBOX= #. Print the physical device information and take note of the Disk Name (example output: "Disk Name: /dev/sdk" would be exported as DEV=/dev/sdk): .. code:: sudo hpssacli controller slot=1 ld ${LDRIVE} show detail | grep -i "Disk Name" #. Export the device name variable from the preceding command (example: /dev/sdk): .. code:: export DEV= #. Export the filesystem variable. Disks that are split between the operating system and data storage, typically sda and sdb, should only have repairs done on their data filesystem, usually /dev/sda2 and /dev/sdb2, Other data only disks have just one partition on the device, so the filesystem will be 1. In any case you should verify the data filesystem by running ``df -h | grep /srv/node`` and using the listed data filesystem for the device in question as the export. For example: /dev/sdk1. .. code:: export FS= #. Verify the LUN is failed, and the device is not: .. code:: sudo hpssacli controller slot=1 ld all show sudo hpssacli controller slot=1 pd all show sudo hpssacli controller slot=1 ld ${LDRIVE} show detail sudo hpssacli controller slot=1 pd ${PBOX} show detail #. Stop the swift and rsync service: .. code:: sudo service rsync stop sudo swift-init shutdown all #. Unmount the problem drive, fix the LUN and the filesystem: .. code:: sudo umount ${FS} #. If umount fails, you should run lsof search for the mountpoint and kill any lingering processes before repeating the unpount: .. code:: sudo hpacucli controller slot=1 ld ${LDRIVE} modify reenable sudo xfs_repair ${FS} #. If the ``xfs_repair`` complains about possible journal data, use the ``xfs_repair -L`` option to zeroise the journal log. #. Once complete test-mount the filesystem, and tidy up its lost and found area. .. code:: sudo mount ${FS} /mnt sudo rm -rf /mnt/lost+found/ sudo umount /mnt #. Mount the filesystem and restart swift and rsync. #. Run the following to determine if a DC ticket is needed to check the cables on the node: .. code:: grep -y media.exchanged /tmp/hpacu.diag grep -y hot.plug.count /tmp/hpacu.diag #. If the output reports any non 0x00 values, it suggests that the cables should be checked. For example, log a DC ticket to check the sas cables between the drive and the expander. .. _diagnose_slow_disk_drives: Diagnose: Slow disk devices --------------------------- .. note:: collectl is an open-source performance gathering/analysis tool. If the diagnostics report a message such as ``sda: drive is slow``, you should log onto the node and run the following command (remove ``-c 1`` option to continuously monitor the data): .. code:: $ /usr/bin/collectl -s D -c 1 waiting for 1 second sample... 
# DISK STATISTICS (/sec) # <---------reads---------><---------writes---------><--------averages--------> Pct #Name KBytes Merged IOs Size KBytes Merged IOs Size RWSize QLen Wait SvcTim Util sdb 204 0 33 6 43 0 4 11 6 1 7 6 23 sda 84 0 13 6 108 21 6 18 10 1 7 7 13 sdc 100 0 16 6 0 0 0 0 6 1 7 6 9 sdd 140 0 22 6 22 0 2 11 6 1 9 9 22 sde 76 0 12 6 255 0 52 5 5 1 2 1 10 sdf 276 0 44 6 0 0 0 0 6 1 11 8 38 sdg 112 0 17 7 18 0 2 9 6 1 7 7 13 sdh 3552 0 73 49 0 0 0 0 48 1 9 8 62 sdi 72 0 12 6 0 0 0 0 6 1 8 8 10 sdj 112 0 17 7 22 0 2 11 7 1 10 9 18 sdk 120 0 19 6 21 0 2 11 6 1 8 8 16 sdl 144 0 22 7 18 0 2 9 6 1 9 7 18 dm-0 0 0 0 0 0 0 0 0 0 0 0 0 0 dm-1 0 0 0 0 60 0 15 4 4 0 0 0 0 dm-2 0 0 0 0 48 0 12 4 4 0 0 0 0 dm-3 0 0 0 0 0 0 0 0 0 0 0 0 0 dm-4 0 0 0 0 0 0 0 0 0 0 0 0 0 dm-5 0 0 0 0 0 0 0 0 0 0 0 0 0 Look at the ``Wait`` and ``SvcTime`` values. It is not normal for these values to exceed 50msec. This is known to impact customer performance (upload/download). For a controller problem, many/all drives will show long wait and service times. A reboot may correct the problem; otherwise hardware replacement is needed. Another way to look at the data is as follows: .. code:: $ /opt/hp/syseng/disk-anal.pl -d Disk: sda Wait: 54580 371 65 25 12 6 6 0 1 2 0 46 Disk: sdb Wait: 54532 374 96 36 16 7 4 1 0 2 0 46 Disk: sdc Wait: 54345 554 105 29 15 4 7 1 4 4 0 46 Disk: sdd Wait: 54175 553 254 31 20 11 6 6 2 2 1 53 Disk: sde Wait: 54923 66 56 15 8 7 7 0 1 0 2 29 Disk: sdf Wait: 50952 941 565 403 426 366 442 447 338 99 38 97 Disk: sdg Wait: 50711 689 808 562 642 675 696 185 43 14 7 82 Disk: sdh Wait: 51018 668 688 483 575 542 692 275 55 22 9 87 Disk: sdi Wait: 51012 1011 849 672 568 240 344 280 38 13 6 81 Disk: sdj Wait: 50724 743 770 586 662 509 684 283 46 17 11 79 Disk: sdk Wait: 50886 700 585 517 633 511 729 352 89 23 8 81 Disk: sdl Wait: 50106 617 794 553 604 504 532 501 288 234 165 216 Disk: sda Time: 55040 22 16 6 1 1 13 0 0 0 3 12 Disk: sdb Time: 55014 41 19 8 3 1 8 0 0 0 3 17 Disk: sdc Time: 55032 23 14 8 9 2 6 1 0 0 0 19 Disk: sdd Time: 55022 29 17 12 6 2 11 0 0 0 1 14 Disk: sde Time: 55018 34 15 11 12 1 9 0 0 0 2 12 Disk: sdf Time: 54809 250 45 7 1 0 0 0 0 0 1 1 Disk: sdg Time: 55070 36 6 2 0 0 0 0 0 0 0 0 Disk: sdh Time: 55079 33 2 0 0 0 0 0 0 0 0 0 Disk: sdi Time: 55074 28 7 2 0 0 2 0 0 0 0 1 Disk: sdj Time: 55067 35 10 0 1 0 0 0 0 0 0 1 Disk: sdk Time: 55068 31 10 3 0 0 1 0 0 0 0 1 Disk: sdl Time: 54905 130 61 7 3 4 1 0 0 0 0 3 This shows the historical distribution of the wait and service times over a day. This is how you read it: - sda did 54580 operations with a short wait time, 371 operations with a longer wait time and 65 with an even longer wait time. - sdl did 50106 operations with a short wait time, but as you can see many took longer. There is a clear pattern that sdf to sdl have a problem. Actually, sda to sde would more normally have lots of zeros in their data. But maybe this is a busy system. In this example it is worth changing the controller as the individual drives may be ok. After the controller is changed, use collectl -s D as described above to see if the problem has cleared. disk-anal.pl will continue to show historical data. You can look at recent data as follows. It only looks at data from 13:15 to 14:15. As you can see, this is a relatively clean system (few if any long wait or service times): .. 
code:: $ /opt/hp/syseng/disk-anal.pl -d -t 13:15-14:15 Disk: sda Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdb Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdc Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdd Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sde Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdf Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdg Wait: 3594 6 0 0 0 0 0 0 0 0 0 0 Disk: sdh Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdi Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdj Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdk Wait: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdl Wait: 3599 1 0 0 0 0 0 0 0 0 0 0 Disk: sda Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdb Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdc Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdd Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sde Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdf Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdg Time: 3594 6 0 0 0 0 0 0 0 0 0 0 Disk: sdh Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdi Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdj Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdk Time: 3600 0 0 0 0 0 0 0 0 0 0 0 Disk: sdl Time: 3599 1 0 0 0 0 0 0 0 0 0 0 For long wait times where the service time appears normal, check the logical drive cache status. While the cache may be enabled, it can be disabled on a per-drive basis. Diagnose: Slow network link - Measuring network performance ----------------------------------------------------------- Network faults can cause performance between Swift nodes to degrade. Testing with ``netperf`` is recommended. Other methods (such as copying large files) may also work, but can produce inconclusive results. Install ``netperf`` on all systems if not already installed. Check that the UFW rules for its control port are in place. However, there are no pre-opened ports for netperf's data connection. Pick a port number. In this example, 12866 is used because it is one higher than netperf's default control port number, 12865. If you get very strange results including zero values, you may not have gotten the data port opened in UFW at the target or may have gotten the netperf command-line wrong. Pick a ``source`` and ``target`` node. The source is often a proxy node and the target is often an object node. Using the same source proxy, you can test communication to different object nodes in different AZs to identify possible bottlenecks. Running tests ^^^^^^^^^^^^^ #. Prepare the ``target`` node as follows: .. code:: sudo iptables -I INPUT -p tcp -j ACCEPT Or, do: .. code:: sudo ufw allow 12866/tcp #. On the ``source`` node, run the following command to check throughput. Note the double-dash before the -P option. The command takes 10 seconds to complete. The ``target`` node is 192.168.245.5. .. code:: $ netperf -H 192.168.245.5 -- -P 12866 MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 12866 AF_INET to .72.4 (.72.4) port 12866 AF_INET : demo Recv Send Send Socket Socket Message Elapsed Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec 87380 16384 16384 10.02 923.69 #. On the ``source`` node, run the following command to check latency: .. code:: $ netperf -H 192.168.245.5 -t TCP_RR -- -P 12866 MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 12866 AF_INET to .72.4 (.72.4) port 12866 AF_INET : demo : first burst 0 Local Remote Socket Size Request Resp. Elapsed Trans. Send Recv Size Size Time Rate bytes Bytes bytes bytes secs. per sec 16384 87380 1 1 10.00 11753.37 16384 87380 Expected results ^^^^^^^^^^^^^^^^ Faults will show up as differences between different pairs of nodes.
However, for reference, here are some expected numbers: - For throughput, proxy to proxy, expect ~9300 Mbit/sec (proxies have a 10Ge link). - For throughput, proxy to object, expect ~920 Mbit/sec (at time of writing this, object nodes have a 1Ge link). - For throughput, object to object, expect ~920 Mbit/sec. - For latency (all types), expect ~11000 transactions/sec. Diagnose: Remapping sectors experiencing UREs --------------------------------------------- #. Find the bad sector, device, and filesystem in ``kern.log``. #. Set the environment variables SEC, DEV & FS, for example: .. code:: SEC=2930954256 DEV=/dev/sdi FS=/dev/sdi1 #. Verify that the sector is bad: .. code:: sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC} #. If the sector is bad this command will output an input/output error: .. code:: dd: reading `/dev/sdi`: Input/output error 0+0 records in 0+0 records out #. Prevent chef from attempting to re-mount the filesystem while the repair is in progress: .. code:: sudo mv /etc/chef/client.pem /etc/chef/xx-client.xx-pem #. Stop the swift and rsync services: .. code:: sudo service rsync stop sudo swift-init all shutdown #. Unmount the problem drive: .. code:: sudo umount ${FS} #. Overwrite/remap the bad sector: .. code:: sudo dd_rescue -d -A -m8b -s ${SEC}b ${DEV} ${DEV} #. This command should report an input/output error the first time it is run. Run the command a second time; if it successfully remapped the bad sector, it should not report an input/output error. #. Verify the sector is now readable: .. code:: sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC} #. If the sector is now readable this command should not report an input/output error. #. If more than one problem sector is listed, set the SEC environment variable to the next sector in the list: .. code:: SEC=123456789 #. Repeat from step 8. #. Repair the filesystem: .. code:: sudo xfs_repair ${FS} #. If ``xfs_repair`` reports that the filesystem has valuable filesystem changes: .. code:: sudo xfs_repair ${FS} Phase 1 - find and verify superblock... Phase 2 - using internal log - zero log... ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this. #. You should attempt to mount the filesystem, and clear the lost+found area: .. code:: sudo mount $FS /mnt sudo rm -rf /mnt/lost+found/* sudo umount /mnt #. If the filesystem fails to mount then you will need to use the ``xfs_repair -L`` option to force log zeroing. Repeat step 11. #. If ``xfs_repair`` reports that an additional input/output error has been encountered, get the sector details as follows: .. code:: sudo grep "I/O error" /var/log/kern.log | grep sector | tail -1 #. If a new input/output error is reported then set the SEC environment variable to the problem sector number: .. code:: SEC=234567890 #. Repeat from step 8. #. Remount the filesystem and restart swift and rsync. - If all UREs in the kern.log have been fixed and you are still unable to have ``xfs_repair`` repair the filesystem, it is possible that the UREs have corrupted the filesystem or possibly destroyed the drive altogether. In this case, the first step is to re-format the filesystem and if this fails, get the disk replaced.
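Where ``kern.log`` lists several suspect sectors, the read test from the steps above can be looped over each one before any remapping. This is only a convenience sketch using the same ``dd`` check; the sector numbers shown are hypothetical values you would replace with your own.

.. code::

   DEV=/dev/sdi
   # Hypothetical sector list taken from kern.log; replace with real values.
   for SEC in 2930954256 2930954257; do
       echo "sector ${SEC}:"
       sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC} 2>&1 \
           | grep -i "error" || echo "  readable"
   done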
Diagnose: High system latency ----------------------------- .. note:: The latency measurements described here are specific to the HPE Helion Public Cloud. - A bad NIC on a proxy server. However, as explained above, this usually causes the peak to rise, but the average should remain near normal parameters. A quick fix is to shut down the proxy. - A stuck memcache server. It accepts connections, but then will not respond. Expect to see timeout messages in ``/var/log/proxy.log`` (port 11211). Swift Diags will also report this as a failed node/port. A quick fix is to shut down the proxy server. - A bad/broken object server can also cause problems if the accounts used by the monitor program happen to live on the bad object server. - A general network problem within the data center. Compare the results with the Pingdom monitors to see if they also have a problem. Diagnose: Interface reports errors ---------------------------------- Should a network interface on a Swift node begin reporting network errors, it may well indicate a cable, switch, or network issue. Get an overview of the interface with: .. code:: sudo ifconfig eth{n} sudo ethtool eth{n} The ``Link Detected:`` indicator will read ``yes`` if the NIC is cabled. Establish the adapter type with: .. code:: sudo ethtool -i eth{n} Gather the interface statistics with: .. code:: sudo ethtool -S eth{n} If the NIC supports self test, this can be performed with: .. code:: sudo ethtool -t eth{n} Self tests should read ``PASS`` if the NIC is operating correctly. NIC module drivers can be re-initialised by carefully removing and re-installing the modules (this avoids rebooting the server). For example, Mellanox drivers use a two-part driver, mlx4_en and mlx4_core. To reload these you must carefully remove the mlx4_en (ethernet) then the mlx4_core modules, and reinstall them in the reverse order. As the interface will be disabled while the modules are unloaded, you must be very careful not to lock yourself out, so it may be better to script this. Diagnose: Hung swift object replicator -------------------------------------- A replicator reports in its log that remaining time exceeds 100 hours. This may indicate that the swift ``object-replicator`` is stuck and not making progress. Another useful way to check this is with the 'swift-recon -r' command on a swift proxy server: .. code:: sudo swift-recon -r =============================================================================== --> Starting reconnaissance on 384 hosts =============================================================================== [2013-07-17 12:56:19] Checking on replication [replication_time] low: 2, high: 80, avg: 28.8, total: 11037, Failed: 0.0%, no_result: 0, reported: 383 Oldest completion was 2013-06-12 22:46:50 (12 days ago) by 192.168.245.3:6000. Most recent completion was 2013-07-17 12:56:19 (5 seconds ago) by 192.168.245.5:6000. =============================================================================== The ``Oldest completion`` line in this example indicates that the object-replicator on swift object server 192.168.245.3 has not completed the replication cycle in 12 days. This replicator is stuck. The object replicator cycle is generally less than 1 hour, though a replicator cycle of 15-20 hours can occur if nodes are added to the system and a new ring has been deployed. You can further check if the object replicator is stuck by logging on to the object server and checking the object replicator progress with the following command: ..
code:: # sudo grep object-rep /var/log/swift/background.log | grep -e "Starting object replication" -e "Object replication complete" -e "partitions rep" Jul 16 06:25:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69018.48s (0.22/sec, 22h remaining) Jul 16 06:30:46 192.168.245.4object-replicator 15344/16450 (93.28%) partitions replicated in 69318.58s (0.22/sec, 22h remaining) Jul 16 06:35:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69618.63s (0.22/sec, 23h remaining) Jul 16 06:40:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69918.73s (0.22/sec, 23h remaining) Jul 16 06:45:46 192.168.245.4 object-replicator 15348/16450 (93.30%) partitions replicated in 70218.75s (0.22/sec, 24h remaining) Jul 16 06:50:47 192.168.245.4object-replicator 15348/16450 (93.30%) partitions replicated in 70518.85s (0.22/sec, 24h remaining) Jul 16 06:55:47 192.168.245.4 object-replicator 15348/16450 (93.30%) partitions replicated in 70818.95s (0.22/sec, 25h remaining) Jul 16 07:00:47 192.168.245.4 object-replicator 15348/16450 (93.30%) partitions replicated in 71119.05s (0.22/sec, 25h remaining) Jul 16 07:05:47 192.168.245.4 object-replicator 15348/16450 (93.30%) partitions replicated in 71419.15s (0.21/sec, 26h remaining) Jul 16 07:10:47 192.168.245.4object-replicator 15348/16450 (93.30%) partitions replicated in 71719.25s (0.21/sec, 26h remaining) Jul 16 07:15:47 192.168.245.4 object-replicator 15348/16450 (93.30%) partitions replicated in 72019.27s (0.21/sec, 27h remaining) Jul 16 07:20:47 192.168.245.4object-replicator 15348/16450 (93.30%) partitions replicated in 72319.37s (0.21/sec, 27h remaining) Jul 16 07:25:47 192.168.245.4 object-replicator 15348/16450 (93.30%) partitions replicated in 72619.47s (0.21/sec, 28h remaining) Jul 16 07:30:47 192.168.245.4 object-replicator 15348/16450 (93.30%) partitions replicated in 72919.56s (0.21/sec, 28h remaining) Jul 16 07:35:47 192.168.245.4 object-replicator 15348/16450 (93.30%) partitions replicated in 73219.67s (0.21/sec, 29h remaining) Jul 16 07:40:47 192.168.245.4 object-replicator 15348/16450 (93.30%) partitions replicated in 73519.76s (0.21/sec, 29h remaining) The above status is output every 5 minutes to ``/var/log/swift/background.log``. .. note:: The 'remaining' time is increasing as time goes on, normally the time remaining should be decreasing. Also note the partition number. For example, 15344 remains the same for several status lines. Eventually the object replicator detects the hang and attempts to make progress by killing the problem thread. The replicator then progresses to the next partition but quite often it again gets stuck on the same partition. One of the reasons for the object replicator hanging like this is filesystem corruption on the drive. The following is a typical log entry of a corrupted filesystem detected by the object replicator: .. 
code:: # sudo bzgrep "Remote I/O error" /var/log/swift/background.log* | grep srv | tail -1 Jul 12 03:33:30 192.168.245.4 object-replicator STDOUT: ERROR:root:Error hashing suffix#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/dist-packages/swift/obj/replicator.py", line 199, in get_hashes#012 hashes[suffix] = hash_suffix(suffix_dir, reclaim_age)#012 File "/usr/lib/python2.7/dist-packages/swift/obj/replicator.py", line 84, in hash_suffix#012 path_contents = sorted(os.listdir(path))#012OSError: [Errno 121] Remote I/O error: '/srv/node/disk4/objects/1643763/b51' An ``ls`` of the problem file or directory usually shows something like the following: .. code:: # ls -l /srv/node/disk4/objects/1643763/b51 ls: cannot access /srv/node/disk4/objects/1643763/b51: Remote I/O error If no entry with ``Remote I/O error`` occurs in the ``background.log`` it is not possible to determine why the object-replicator is hung. It may be that the ``Remote I/O error`` entry is older than 7 days and so has been rotated out of the logs. In this scenario it may be best to simply restart the object-replicator. #. Stop the object-replicator: .. code:: # sudo swift-init object-replicator stop #. Make sure the object replicator has stopped; if it has hung, the stop command will not stop the hung process: .. code:: # ps auxww | grep swift-object-replicator #. If the previous ps shows the object-replicator is still running, kill the process: .. code:: # kill -9 #. Start the object-replicator: .. code:: # sudo swift-init object-replicator start If the above grep did find a ``Remote I/O error`` then it may be possible to repair the problem filesystem. #. Stop swift and rsync: .. code:: # sudo swift-init all shutdown # sudo service rsync stop #. Make sure all swift processes have stopped: .. code:: # ps auxww | grep swift | grep python #. Kill any swift processes still running. #. Unmount the problem filesystem: .. code:: # sudo umount /srv/node/disk4 #. Repair the filesystem: .. code:: # sudo xfs_repair -P /dev/sde1 #. If the ``xfs_repair`` fails then it may be necessary to re-format the filesystem. See :ref:`fix_broken_xfs_filesystem`. If the ``xfs_repair`` is successful, re-enable chef using the following command and replication should commence again. Diagnose: High CPU load ----------------------- The CPU load average on an object server, as shown with the 'uptime' command, is typically under 10 when the server is lightly to moderately loaded: .. code:: $ uptime 07:59:26 up 99 days, 5:57, 1 user, load average: 8.59, 8.39, 8.32 During times of increased activity, due to user transactions or object replication, the CPU load average can increase to around 30. However, sometimes the CPU load average can increase significantly. The following is an example of an object server that has extremely high CPU load: .. code:: $ uptime 07:44:02 up 18:22, 1 user, load average: 407.12, 406.36, 404.59 Further issues and resolutions ------------------------------ .. note:: The urgency levels in each **Action** column indicate whether or not it is required to take immediate action, or if the problem can be worked on during business hours. .. list-table:: :widths: 33 33 33 :header-rows: 1 * - **Scenario** - **Description** - **Action** * - ``/healthcheck`` latency is high. - The ``/healthcheck`` test does not tax the proxy very much so any drop in value is probably related to network issues, rather than the proxies being very busy.
A very slow proxy might impact the average number, but it would need to be very slow to shift the number that much. - Check networks. Do a ``curl https://:/healthcheck`` where ``ip-address`` is an individual proxy IP address. Repeat this for every proxy server to see if you can pinpoint the problem. Urgency: If there are other indications that your system is slow, you should treat this as an urgent problem. * - Swift process is not running. - You can use ``swift-init`` status to check if swift processes are running on any given server. - Run this command: .. code:: sudo swift-init all start Examine messages in the swift log files to see if there are any error messages related to any of the swift processes since the time you ran the ``swift-init`` command. Take any corrective actions that seem necessary. Urgency: If this only affects one server, and you have more than one, identifying and fixing the problem can wait until business hours. If this same problem affects many servers, then you need to take corrective action immediately. * - ntpd is not running. - NTP is not running. - Configure and start NTP. Urgency: For proxy servers, this is vital. * - Host clock is not synced to an NTP server. - Node time settings do not match NTP server time. This may take some time to sync after a reboot. - Assuming NTP is configured and running, you have to wait until the times sync. * - A swift process has hundreds to thousands of open file descriptors. - May happen to any of the swift processes. Known to have happened with a ``rsyslogd`` restart and where ``/tmp`` was hanging. - Restart the swift processes on the affected node: .. code:: % sudo swift-init all reload Urgency: If known performance problem: Immediate If system seems fine: Medium * - A swift process is not owned by the swift user. - If the UID of the swift user has changed, then the processes might not be owned by that UID. - Urgency: If this only affects one server, and you have more than one, identifying and fixing the problem can wait until business hours. If this same problem affects many servers, then you need to take corrective action immediately. * - Object account or container files not owned by swift. - This typically happens if, during a reinstall or a re-image of a server, the UID of the swift user was changed. The data files in the object account and container directories are owned by the original swift UID. As a result, the current swift user does not own these files. - Correct the UID of the swift user to reflect that of the original UID. An alternate action is to change the ownership of every file on all file systems. This alternate action is often impractical and will take considerable time. Urgency: If this only affects one server, and you have more than one, identifying and fixing the problem can wait until business hours. If this same problem affects many servers, then you need to take corrective action immediately. * - A disk drive has a high IO wait or service time. - If high wait IO times are seen for a single disk, then the disk drive is the problem. If most/all devices are slow, the controller is probably the source of the problem. The controller cache may also be misconfigured, which will cause similar long wait or service times. - As a first step, if your controllers have a cache, check that it is enabled and their battery/capacitor is working. Second, reboot the server. If the problem persists, file a DC ticket to have the drive or controller replaced.
See :ref:`diagnose_slow_disk_drives` on how to check the drive wait or service times. Urgency: Medium * - The network interface is not up. - Use the ``ifconfig`` and ``ethtool`` commands to determine the network state. - You can try restarting the interface. However, generally the interface (or cable) is probably broken, especially if the interface is flapping. Urgency: If this only affects one server, and you have more than one, identifying and fixing the problem can wait until business hours. If this same problem affects many servers, then you need to take corrective action immediately. * - Network interface card (NIC) is not operating at the expected speed. - The NIC is running at a slower speed than its nominal rated speed. For example, it is running at 100 Mb/s and the NIC is a 1Ge NIC. - 1. Try resetting the interface with: .. code:: sudo ethtool -s eth0 speed 1000 ... and then run: .. code:: sudo lshw -class See if the speed goes to the expected value. Failing that, check the hardware (NIC cable/switch port). 2. If persistent, consider shutting down the server (especially if a proxy) until the problem is identified and resolved. If you leave this server running it can have a large impact on overall performance. Urgency: High * - The interface RX/TX error count is non-zero. - A value of 0 is typical, but counts of 1 or 2 do not indicate a problem. - 1. For low numbers (for example, 1 or 2), you can simply ignore them. Numbers in the range 3-30 probably indicate that the error count has crept up slowly over a long time. Consider rebooting the server to remove the report from the noise. Typically, when a cable or interface is bad, the error count goes to 400+, so it stands out. There may be other symptoms such as the interface going up and down or not running at the correct speed. A server with a high error count should be watched. 2. If the error count continues to climb, consider taking the server down until it can be properly investigated. In any case, a reboot should be done to clear the error count. Urgency: High, if the error count is increasing. * - In a swift log you see a message that a process has not replicated in over 24 hours. - The replicator has not successfully completed a run in the last 24 hours. This indicates that the replicator has probably hung. - Use ``swift-init`` to stop and then restart the replicator process. Urgency: Low. However, if you recently added or replaced disk drives then you should treat this urgently. * - Container Updater has not run in 4 hour(s). - The service may appear to be running; however, it may be hung. Examine the swift logs to see if there are any error messages relating to the container updater. This may potentially explain why the container updater is not running. - Urgency: Medium This may have been triggered by a recent restart of the rsyslog daemon. Restart the service with: .. code:: sudo swift-init reload * - Object replicator: Reports the remaining time and that time is more than 100 hours. - Each replication cycle the object replicator writes a log message to its log reporting statistics about the current cycle. This includes an estimate for the remaining time needed to replicate all objects. If this time is longer than 100 hours, there is a problem with the replication process. - Urgency: Medium Restart the service with: .. code:: sudo swift-init object-replicator reload Check that the remaining replication time is going down (see the example following this table).
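To confirm that the remaining replication time really is falling after the reload, the ``background.log`` search shown earlier in this section can be repeated periodically; a minimal sketch, assuming the same log layout used throughout this runbook:

.. code::

   # Show the most recent object-replicator progress lines; the "remaining"
   # figure should decrease from one status line to the next.
   sudo grep "partitions rep" /var/log/swift/background.log | tail -5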
swift-2.7.1/doc/source/ops_runbook/index.rst0000664000567000056710000000157313024044352022267 0ustar jenkinsjenkins00000000000000================= Swift Ops Runbook ================= This document contains operational procedures that Hewlett Packard Enterprise (HPE) uses to operate and monitor the Swift system within the HPE Helion Public Cloud. This document is an excerpt of a larger product-specific handbook. As such, the material may appear incomplete. The suggestions and recommendations made in this document are for our particular environment, and may not be suitable for your environment or situation. We make no representations concerning the accuracy, adequacy, completeness or suitability of the information, suggestions or recommendations. This document are provided for reference only. We are not responsible for your use of any information, suggestions or recommendations contained herein. .. toctree:: :maxdepth: 2 diagnose.rst procedures.rst maintenance.rst troubleshooting.rst swift-2.7.1/doc/source/container.rst0000664000567000056710000000175313024044352020602 0ustar jenkinsjenkins00000000000000.. _Container: ********* Container ********* .. _container-auditor: Container Auditor ================= .. automodule:: swift.container.auditor :members: :undoc-members: :show-inheritance: .. _container-backend: Container Backend ================= .. automodule:: swift.container.backend :members: :undoc-members: :show-inheritance: .. _container-server: Container Server ================ .. automodule:: swift.container.server :members: :undoc-members: :show-inheritance: .. _container-replicator: Container Replicator ==================== .. automodule:: swift.container.replicator :members: :undoc-members: :show-inheritance: .. _container-sync-daemon: Container Sync ============== .. automodule:: swift.container.sync :members: :undoc-members: :show-inheritance: .. _container-updater: Container Updater ================= .. automodule:: swift.container.updater :members: :undoc-members: :show-inheritance: swift-2.7.1/doc/source/conf.py0000664000567000056710000001716613024044354017374 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # # Copyright (c) 2010-2012 OpenStack Foundation. # # Swift documentation build configuration file, created by # sphinx-quickstart on Tue May 18 13:50:15 2010. # # This file is execfile()d with the current directory set to its containing # dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import datetime import os from swift import __version__ import subprocess import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. 
sys.path.extend([os.path.abspath('../swift'), os.path.abspath('..'), os.path.abspath('../bin')]) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.ifconfig', 'oslosphinx'] todo_include_todos = True # Add any paths that contain templates here, relative to this directory. # Changing the path so that the Hudson build output contains GA code and the # source docs do not contain the code so local, offline sphinx builds are # "clean." # templates_path = [] # if os.getenv('HUDSON_PUBLISH_DOCS'): # templates_path = ['_ga', '_templates'] # else: # templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Swift' copyright = u'%d, OpenStack Foundation' % datetime.datetime.now().year # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = __version__.rsplit('.', 1)[0] # The full version, including alpha/beta/rc tags. release = __version__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. # unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['swift.'] # -- Options for HTML output ----------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme = 'default' # html_theme_path = ["."] # html_theme = '_theme' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. 
# html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". # html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local", "-n1"] html_last_updated_fmt = subprocess.Popen( git_cmd, stdout=subprocess.PIPE).communicate()[0] # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_use_modindex = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'swiftdoc' # -- Options for LaTeX output ------------------------------------------------- # The paper size ('letter' or 'a4'). # latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). # latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', 'Swift.tex', u'Swift Documentation', u'Swift Team', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # Additional stuff for the LaTeX preamble. # latex_preamble = '' # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_use_modindex = True swift-2.7.1/doc/source/overview_policies.rst0000664000567000056710000010006113024044354022347 0ustar jenkinsjenkins00000000000000================ Storage Policies ================ Storage Policies allow for some level of segmenting the cluster for various purposes through the creation of multiple object rings. The Storage Policies feature is implemented throughout the entire code base so it is an important concept in understanding Swift architecture. As described in :doc:`overview_ring`, Swift uses modified hashing rings to determine where data should reside in the cluster. There is a separate ring for account databases, container databases, and there is also one object ring per storage policy. 
Each object ring behaves exactly the same way and is maintained in the same manner, but with policies, different devices can belong to different rings. By supporting multiple object rings, Swift allows the application and/or deployer to essentially segregate the object storage within a single cluster. There are many reasons why this might be desirable: * Different levels of durability: If a provider wants to offer, for example, 2x replication and 3x replication but doesn't want to maintain 2 separate clusters, they would setup a 2x and a 3x replication policy and assign the nodes to their respective rings. Furthermore, if a provider wanted to offer a cold storage tier, they could create an erasure coded policy. * Performance: Just as SSDs can be used as the exclusive members of an account or database ring, an SSD-only object ring can be created as well and used to implement a low-latency/high performance policy. * Collecting nodes into group: Different object rings may have different physical servers so that objects in specific storage policies are always placed in a particular data center or geography. * Different Storage implementations: Another example would be to collect together a set of nodes that use a different Diskfile (e.g., Kinetic, GlusterFS) and use a policy to direct traffic just to those nodes. .. note:: Today, Swift supports two different policy types: Replication and Erasure Code. See :doc:`overview_erasure_code` for details. Also note that Diskfile refers to backend object storage plug-in architecture. See :doc:`development_ondisk_backends` for details. ----------------------- Containers and Policies ----------------------- Policies are implemented at the container level. There are many advantages to this approach, not the least of which is how easy it makes life on applications that want to take advantage of them. It also ensures that Storage Policies remain a core feature of swift independent of the auth implementation. Policies were not implemented at the account/auth layer because it would require changes to all auth systems in use by Swift deployers. Each container has a new special immutable metadata element called the storage policy index. Note that internally, Swift relies on policy indexes and not policy names. Policy names exist for human readability and translation is managed in the proxy. When a container is created, one new optional header is supported to specify the policy name. If no name is specified, the default policy is used (and if no other policies defined, Policy-0 is considered the default). We will be covering the difference between default and Policy-0 in the next section. Policies are assigned when a container is created. Once a container has been assigned a policy, it cannot be changed (unless it is deleted/recreated). The implications on data placement/movement for large datasets would make this a task best left for applications to perform. Therefore, if a container has an existing policy of, for example 3x replication, and one wanted to migrate that data to an Erasure Code policy, the application would create another container specifying the other policy parameters and then simply move the data from one container to the other. Policies apply on a per container basis allowing for minimal application awareness; once a container has been created with a specific policy, all objects stored in it will be done so in accordance with that policy. 
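As a concrete illustration of the migration pattern described above, a minimal client-side sketch (not part of Swift itself) might look like the following. The storage URL, token, policy name and container names are assumptions for illustration; error handling, large-object handling and listing pagination are omitted::

    import requests  # any HTTP client will do; python-swiftclient also works

    STORAGE_URL = 'http://127.0.0.1:8080/v1/AUTH_test'   # assumption
    HEADERS = {'X-Auth-Token': 'AUTH_tk...'}             # assumption

    # Create the destination container with the desired (e.g. EC) policy.
    requests.put(STORAGE_URL + '/archive',
                 headers=dict(HEADERS, **{'X-Storage-Policy': 'deepfreeze'}))

    # Copy every object from the old container into the new one.
    listing = requests.get(STORAGE_URL + '/originals?format=json',
                           headers=HEADERS).json()
    for entry in listing:
        body = requests.get(STORAGE_URL + '/originals/' + entry['name'],
                            headers=HEADERS).content
        requests.put(STORAGE_URL + '/archive/' + entry['name'],
                     headers=HEADERS, data=body)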
If a container with a specific name is deleted (requires the container be empty) a new container may be created with the same name without any restriction on storage policy enforced by the deleted container which previously shared the same name. Containers have a many-to-one relationship with policies meaning that any number of containers can share one policy. There is no limit to how many containers can use a specific policy. The notion of associating a ring with a container introduces an interesting scenario: What would happen if 2 containers of the same name were created with different Storage Policies on either side of a network outage at the same time? Furthermore, what would happen if objects were placed in those containers, a whole bunch of them, and then later the network outage was restored? Well, without special care it would be a big problem as an application could end up using the wrong ring to try and find an object. Luckily there is a solution for this problem, a daemon known as the Container Reconciler works tirelessly to identify and rectify this potential scenario. -------------------- Container Reconciler -------------------- Because atomicity of container creation cannot be enforced in a distributed eventually consistent system, object writes into the wrong storage policy must be eventually merged into the correct storage policy by an asynchronous daemon. Recovery from object writes during a network partition which resulted in a split brain container created with different storage policies are handled by the `swift-container-reconciler` daemon. The container reconciler works off a queue similar to the object-expirer. The queue is populated during container-replication. It is never considered incorrect to enqueue an object to be evaluated by the container-reconciler because if there is nothing wrong with the location of the object the reconciler will simply dequeue it. The container-reconciler queue is an indexed log for the real location of an object for which a discrepancy in the storage policy of the container was discovered. To determine the correct storage policy of a container, it is necessary to update the status_changed_at field in the container_stat table when a container changes status from deleted to re-created. This transaction log allows the container-replicator to update the correct storage policy both when replicating a container and handling REPLICATE requests. Because each object write is a separate distributed transaction it is not possible to determine the correctness of the storage policy for each object write with respect to the entire transaction log at a given container database. As such, container databases will always record the object write regardless of the storage policy on a per object row basis. Object byte and count stats are tracked per storage policy in each container and reconciled using normal object row merge semantics. The object rows are ensured to be fully durable during replication using the normal container replication. After the container replicator pushes its object rows to available primary nodes any misplaced object rows are bulk loaded into containers based off the object timestamp under the ``.misplaced_objects`` system account. The rows are initially written to a handoff container on the local node, and at the end of the replication pass the ``.misplaced_objects`` containers are replicated to the correct primary nodes. 
The container-reconciler processes the ``.misplaced_objects`` containers in descending order and reaps its containers as the objects represented by the rows are successfully reconciled. The container-reconciler will always validate the correct storage policy for enqueued objects using direct container HEAD requests which are accelerated via caching. Because failure of individual storage nodes in aggregate is assumed to be common at scale, the container-reconciler will make forward progress with a simple quorum majority. During a combination of failures and rebalances it is possible that a quorum could provide an incomplete record of the correct storage policy - so an object write may have to be applied more than once. Because storage nodes and container databases will not process writes with an ``X-Timestamp`` less than or equal to their existing record when objects writes are re-applied their timestamp is slightly incremented. In order for this increment to be applied transparently to the client a second vector of time has been added to Swift for internal use. See :class:`~swift.common.utils.Timestamp`. As the reconciler applies object writes to the correct storage policy it cleans up writes which no longer apply to the incorrect storage policy and removes the rows from the ``.misplaced_objects`` containers. After all rows have been successfully processed it sleeps and will periodically check for newly enqueued rows to be discovered during container replication. .. _default-policy: ------------------------- Default versus 'Policy-0' ------------------------- Storage Policies is a versatile feature intended to support both new and pre-existing clusters with the same level of flexibility. For that reason, we introduce the ``Policy-0`` concept which is not the same as the "default" policy. As you will see when we begin to configure policies, each policy has a single name and an arbitrary number of aliases (human friendly, configurable) as well as an index (or simply policy number). Swift reserves index 0 to map to the object ring that's present in all installations (e.g., ``/etc/swift/object.ring.gz``). You can name this policy anything you like, and if no policies are defined it will report itself as ``Policy-0``, however you cannot change the index as there must always be a policy with index 0. Another important concept is the default policy which can be any policy in the cluster. The default policy is the policy that is automatically chosen when a container creation request is sent without a storage policy being specified. :ref:`configure-policy` describes how to set the default policy. The difference from ``Policy-0`` is subtle but extremely important. ``Policy-0`` is what is used by Swift when accessing pre-storage-policy containers which won't have a policy - in this case we would not use the default as it might not have the same policy as legacy containers. When no other policies are defined, Swift will always choose ``Policy-0`` as the default. In other words, default means "create using this policy if nothing else is specified" and ``Policy-0`` means "use the legacy policy if a container doesn't have one" which really means use ``object.ring.gz`` for lookups. .. note:: With the Storage Policy based code, it's not possible to create a container that doesn't have a policy. If nothing is provided, Swift will still select the default and assign it to the container. For containers created before Storage Policies were introduced, the legacy Policy-0 will be used. .. 
_deprecate-policy: -------------------- Deprecating Policies -------------------- There will be times when a policy is no longer desired; however simply deleting the policy and associated rings would be problematic for existing data. In order to ensure that resources are not orphaned in the cluster (left on disk but no longer accessible) and to provide proper messaging to applications when a policy needs to be retired, the notion of deprecation is used. :ref:`configure-policy` describes how to deprecate a policy. Swift's behavior with deprecated policies is as follows: * The deprecated policy will not appear in /info * PUT/GET/DELETE/POST/HEAD are still allowed on the pre-existing containers created with a deprecated policy * Clients will get an ''400 Bad Request'' error when trying to create a new container using the deprecated policy * Clients still have access to policy statistics via HEAD on pre-existing containers .. note:: A policy cannot be both the default and deprecated. If you deprecate the default policy, you must specify a new default. You can also use the deprecated feature to rollout new policies. If you want to test a new storage policy before making it generally available you could deprecate the policy when you initially roll it the new configuration and rings to all nodes. Being deprecated will render it innate and unable to be used. To test it you will need to create a container with that storage policy; which will require a single proxy instance (or a set of proxy-servers which are only internally accessible) that has been one-off configured with the new policy NOT marked deprecated. Once the container has been created with the new storage policy any client authorized to use that container will be able to add and access data stored in that container in the new storage policy. When satisfied you can roll out a new ``swift.conf`` which does not mark the policy as deprecated to all nodes. .. _configure-policy: -------------------- Configuring Policies -------------------- Policies are configured in ``swift.conf`` and it is important that the deployer have a solid understanding of the semantics for configuring policies. Recall that a policy must have a corresponding ring file, so configuring a policy is a two-step process. First, edit your ``/etc/swift/swift.conf`` file to add your new policy and, second, create the corresponding policy object ring file. See :doc:`policies_saio` for a step by step guide on adding a policy to the SAIO setup. Note that each policy has a section starting with ``[storage-policy:N]`` where N is the policy index. There's no reason other than readability that these be sequential but there are a number of rules enforced by Swift when parsing this file: * If a policy with index 0 is not declared and no other policies defined, Swift will create one * The policy index must be a non-negative integer * If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default * Policy indexes must be unique * Policy names are required * Policy names are case insensitive * Policy names must contain only letters, digits or a dash * Policy names must be unique * The policy name 'Policy-0' can only be used for the policy with index 0 * Multiple names can be assigned to one policy using aliases. All names must follow the Swift naming rules. 
* If any policies are defined, exactly one policy must be declared default * Deprecated policies cannot be declared the default * If no ``policy_type`` is provided, ``replication`` is the default value. The following is an example of a properly configured ``swift.conf`` file. See :doc:`policies_saio` for full instructions on setting up an all-in-one with this example configuration.:: [swift-hash] # random unique strings that can never change (DO NOT LOSE) # Use only printable chars (python -c "import string; print(string.printable)") swift_hash_path_prefix = changeme swift_hash_path_suffix = changeme [storage-policy:0] name = gold aliases = yellow, orange policy_type = replication default = yes [storage-policy:1] name = silver policy_type = replication deprecated = yes Review :ref:`default-policy` and :ref:`deprecate-policy` for more information about the ``default`` and ``deprecated`` options. There are some other considerations when managing policies: * Policy names can be changed. * Aliases are supported and can be added and removed. If the primary name of a policy is removed the next available alias will be adopted as the primary name. A policy must always have at least one name. * You cannot change the index of a policy once it has been created * The default policy can be changed at any time, by adding the default directive to the desired policy section * Any policy may be deprecated by adding the deprecated directive to the desired policy section, but a deprecated policy may not also be declared the default, and you must specify a default - so you must have policy which is not deprecated at all times. * The option ``policy_type`` is used to distinguish between different policy types. The default value is ``replication``. When defining an EC policy use the value ``erasure_coding``. * The EC policy has additional required parameters. See :doc:`overview_erasure_code` for details. Once ``swift.conf`` is configured for a new policy, a new ring must be created. The ring tools are not policy name aware so it's critical that the correct policy index be used when creating the new policy's ring file. Additional object rings are created in the same manner as the legacy ring except that '-N' is appended after the word ``object`` where N matches the policy index used in ``swift.conf``. This naming convention follows the pattern for per-policy storage node data directories as well. So, to create the ring for policy 1:: swift-ring-builder object-1.builder create 10 3 1 .. note:: The same drives can indeed be used for multiple policies and the details of how that's managed on disk will be covered in a later section, it's important to understand the implications of such a configuration before setting one up. Make sure it's really what you want to do, in many cases it will be, but in others maybe not. -------------- Using Policies -------------- Using policies is very simple - a policy is only specified when a container is initially created. There are no other API changes. Creating a container can be done without any special policy information:: curl -v -X PUT -H 'X-Auth-Token: ' \ http://127.0.0.1:8080/v1/AUTH_test/myCont0 Which will result in a container created that is associated with the policy name 'gold' assuming we're using the swift.conf example from above. It would use 'gold' because it was specified as the default. Now, when we put an object into this container, it will get placed on nodes that are part of the ring we created for policy 'gold'. 
If we wanted to explicitly state that we wanted policy 'gold' the command would simply need to include a new header as shown below:: curl -v -X PUT -H 'X-Auth-Token: ' \ -H 'X-Storage-Policy: gold' http://127.0.0.1:8080/v1/AUTH_test/myCont0 And that's it! The application does not need to specify the policy name ever again. There are some illegal operations however: * If an invalid (typo, non-existent) policy is specified: 400 Bad Request * if you try to change the policy either via PUT or POST: 409 Conflict If you'd like to see how the storage in the cluster is being used, simply HEAD the account and you'll see not only the cumulative numbers, as before, but per policy statistics as well. In the example below there's 3 objects total with two of them in policy 'gold' and one in policy 'silver':: curl -i -X HEAD -H 'X-Auth-Token: ' \ http://127.0.0.1:8080/v1/AUTH_test and your results will include (some output removed for readability):: X-Account-Container-Count: 3 X-Account-Object-Count: 3 X-Account-Bytes-Used: 21 X-Storage-Policy-Gold-Object-Count: 2 X-Storage-Policy-Gold-Bytes-Used: 14 X-Storage-Policy-Silver-Object-Count: 1 X-Storage-Policy-Silver-Bytes-Used: 7 -------------- Under the Hood -------------- Now that we've explained a little about what Policies are and how to configure/use them, let's explore how Storage Policies fit in at the nuts-n-bolts level. Parsing and Configuring ----------------------- The module, :ref:`storage_policy`, is responsible for parsing the ``swift.conf`` file, validating the input, and creating a global collection of configured policies via class :class:`.StoragePolicyCollection`. This collection is made up of policies of class :class:`.StoragePolicy`. The collection class includes handy functions for getting to a policy either by name or by index , getting info about the policies, etc. There's also one very important function, :meth:`~.StoragePolicyCollection.get_object_ring`. Object rings are members of the :class:`.StoragePolicy` class and are actually not instantiated until the :meth:`~.StoragePolicy.load_ring` method is called. Any caller anywhere in the code base that needs to access an object ring must use the :data:`.POLICIES` global singleton to access the :meth:`~.StoragePolicyCollection.get_object_ring` function and provide the policy index which will call :meth:`~.StoragePolicy.load_ring` if needed; however, when starting request handling services such as the :ref:`proxy-server` rings are proactively loaded to provide moderate protection against a mis-configuration resulting in a run time error. The global is instantiated when Swift starts and provides a mechanism to patch policies for the test code. Middleware ---------- Middleware can take advantage of policies through the :data:`.POLICIES` global and by importing :func:`.get_container_info` to gain access to the policy index associated with the container in question. From the index it can then use the :data:`.POLICIES` singleton to grab the right ring. For example, :ref:`list_endpoints` is policy aware using the means just described. Another example is :ref:`recon` which will report the md5 sums for all of the rings. Proxy Server ------------ The :ref:`proxy-server` module's role in Storage Policies is essentially to make sure the correct ring is used as its member element. Before policies, the one object ring would be instantiated when the :class:`.Application` class was instantiated and could be overridden by test code via init parameter. 
With policies, however, there is no init parameter and the :class:`.Application` class instead depends on the :data:`.POLICIES` global singleton to retrieve the ring which is instantiated the first time it's needed. So, instead of an object ring member of the :class:`.Application` class, there is an accessor function, :meth:`~.Application.get_object_ring`, that gets the ring from :data:`.POLICIES`. In general, when any module running on the proxy requires an object ring, it does so via first getting the policy index from the cached container info. The exception is during container creation where it uses the policy name from the request header to look up policy index from the :data:`.POLICIES` global. Once the proxy has determined the policy index, it can use the :meth:`~.Application.get_object_ring` method described earlier to gain access to the correct ring. It then has the responsibility of passing the index information, not the policy name, on to the back-end servers via the header ``X -Backend-Storage-Policy-Index``. Going the other way, the proxy also strips the index out of headers that go back to clients, and makes sure they only see the friendly policy names. On Disk Storage --------------- Policies each have their own directories on the back-end servers and are identified by their storage policy indexes. Organizing the back-end directory structures by policy index helps keep track of things and also allows for sharing of disks between policies which may or may not make sense depending on the needs of the provider. More on this later, but for now be aware of the following directory naming convention: * ``/objects`` maps to objects associated with Policy-0 * ``/objects-N`` maps to storage policy index #N * ``/async_pending`` maps to async pending update for Policy-0 * ``/async_pending-N`` maps to async pending update for storage policy index #N * ``/tmp`` maps to the DiskFile temporary directory for Policy-0 * ``/tmp-N`` maps to the DiskFile temporary directory for policy index #N * ``/quarantined/objects`` maps to the quarantine directory for Policy-0 * ``/quarantined/objects-N`` maps to the quarantine directory for policy index #N Note that these directory names are actually owned by the specific Diskfile implementation, the names shown above are used by the default Diskfile. Object Server ------------- The :ref:`object-server` is not involved with selecting the storage policy placement directly. However, because of how back-end directory structures are setup for policies, as described earlier, the object server modules do play a role. When the object server gets a :class:`.Diskfile`, it passes in the policy index and leaves the actual directory naming/structure mechanisms to :class:`.Diskfile`. By passing in the index, the instance of :class:`.Diskfile` being used will assure that data is properly located in the tree based on its policy. For the same reason, the :ref:`object-updater` also is policy aware. As previously described, different policies use different async pending directories so the updater needs to know how to scan them appropriately. The :ref:`object-replicator` is policy aware in that, depending on the policy, it may have to do drastically different things, or maybe not. For example, the difference in handling a replication job for 2x versus 3x is trivial; however, the difference in handling replication between 3x and erasure code is most definitely not. 
In fact, the term 'replication' really isn't appropriate for some policies like erasure code; however, the majority of the framework for collecting and processing jobs is common. Thus, those functions in the replicator are leveraged for all policies and then there is policy specific code required for each policy, added when the policy is defined if needed. The ssync functionality is policy aware for the same reason. Some of the other modules may not obviously be affected, but the back-end directory structure owned by :class:`.Diskfile` requires the policy index parameter. Therefore ssync being policy aware really means passing the policy index along. See :class:`~swift.obj.ssync_sender` and :class:`~swift.obj.ssync_receiver` for more information on ssync. For :class:`.Diskfile` itself, being policy aware is all about managing the back-end structure using the provided policy index. In other words, callers who get a :class:`.Diskfile` instance provide a policy index and :class:`.Diskfile`'s job is to keep data separated via this index (however it chooses) such that policies can share the same media/nodes if desired. The included implementation of :class:`.Diskfile` lays out the directory structure described earlier but that's owned within :class:`.Diskfile`; external modules have no visibility into that detail. A common function is provided to map various directory names and/or strings based on their policy index. For example :class:`.Diskfile` defines :func:`.get_data_dir` which builds off of a generic :func:`.get_policy_string` to consistently build policy aware strings for various usage. Container Server ---------------- The :ref:`container-server` plays a very important role in Storage Policies, it is responsible for handling the assignment of a policy to a container and the prevention of bad things like changing policies or picking the wrong policy to use when nothing is specified (recall earlier discussion on Policy-0 versus default). The :ref:`container-updater` is policy aware, however its job is very simple, to pass the policy index along to the :ref:`account-server` via a request header. The :ref:`container-backend` is responsible for both altering existing DB schema as well as assuring new DBs are created with a schema that supports storage policies. The "on-demand" migration of container schemas allows Swift to upgrade without downtime (sqlite's alter statements are fast regardless of row count). To support rolling upgrades (and downgrades) the incompatible schema changes to the ``container_stat`` table are made to a ``container_info`` table, and the ``container_stat`` table is replaced with a view that includes an ``INSTEAD OF UPDATE`` trigger which makes it behave like the old table. The policy index is stored here for use in reporting information about the container as well as managing split-brain scenario induced discrepancies between containers and their storage policies. Furthermore, during split-brain, containers must be prepared to track object updates from multiple policies so the object table also includes a ``storage_policy_index`` column. Per-policy object counts and bytes are updated in the ``policy_stat`` table using ``INSERT`` and ``DELETE`` triggers similar to the pre-policy triggers that updated ``container_stat`` directly. The :ref:`container-replicator` daemon will pro-actively migrate legacy schemas as part of its normal consistency checking process when it updates the ``reconciler_sync_point`` entry in the ``container_info`` table. 
This ensures that read heavy containers which do not encounter any writes will still get migrated to be fully compatible with the post-storage-policy queries without having to fall back and retry queries with the legacy schema to service container read requests. The :ref:`container-sync-daemon` functionality only needs to be policy aware in that it accesses the object rings. Therefore, it needs to pull the policy index out of the container information and use it to select the appropriate object ring from the :data:`.POLICIES` global. Account Server -------------- The :ref:`account-server`'s role in Storage Policies is really limited to reporting. When a HEAD request is made on an account (see example provided earlier), the account server is provided with the storage policy index and builds the ``object_count`` and ``byte_count`` information for the client on a per policy basis. The account servers are able to report per-storage-policy object and byte counts because of some policy specific DB schema changes. A policy specific table, ``policy_stat``, maintains information on a per policy basis (one row per policy) in the same manner in which the ``account_stat`` table does. The ``account_stat`` table still serves the same purpose and is not replaced by ``policy_stat``, it holds the total account stats whereas ``policy_stat`` just has the break downs. The backend is also responsible for migrating pre-storage-policy accounts by altering the DB schema and populating the ``policy_stat`` table for Policy-0 with current ``account_stat`` data at that point in time. The per-storage-policy object and byte counts are not updated with each object PUT and DELETE request, instead container updates to the account server are performed asynchronously by the ``swift-container-updater``. .. _upgrade-policy: Upgrading and Confirming Functionality -------------------------------------- Upgrading to a version of Swift that has Storage Policy support is not difficult, in fact, the cluster administrator isn't required to make any special configuration changes to get going. Swift will automatically begin using the existing object ring as both the default ring and the Policy-0 ring. Adding the declaration of policy 0 is totally optional and in its absence, the name given to the implicit policy 0 will be 'Policy-0'. Let's say for testing purposes that you wanted to take an existing cluster that already has lots of data on it and upgrade to Swift with Storage Policies. From there you want to go ahead and create a policy and test a few things out. All you need to do is: #. Upgrade all of your Swift nodes to a policy-aware version of Swift #. Define your policies in ``/etc/swift/swift.conf`` #. Create the corresponding object rings #. Create containers and objects and confirm their placement is as expected For a specific example that takes you through these steps, please see :doc:`policies_saio` .. note:: If you downgrade from a Storage Policy enabled version of Swift to an older version that doesn't support policies, you will not be able to access any data stored in policies other than the policy with index 0 but those objects WILL appear in container listings (possibly as duplicates if there was a network partition and un-reconciled objects). It is EXTREMELY important that you perform any necessary integration testing on the upgraded deployment before enabling an additional storage policy to ensure a consistent API experience for your clients. 
DO NOT downgrade to a version of Swift that does not support storage policies once you expose multiple storage policies. swift-2.7.1/doc/source/misc.rst0000664000567000056710000000336613024044352017555 0ustar jenkinsjenkins00000000000000.. _misc: **** Misc **** .. _acls: ACLs ==== .. automodule:: swift.common.middleware.acl :members: :show-inheritance: .. _buffered_http: Buffered HTTP ============= .. automodule:: swift.common.bufferedhttp :members: :show-inheritance: .. _constraints: Constraints =========== .. automodule:: swift.common.constraints :members: :undoc-members: :show-inheritance: Container Sync Realms ===================== .. automodule:: swift.common.container_sync_realms :members: :show-inheritance: .. _direct_client: Direct Client ============= .. automodule:: swift.common.direct_client :members: :undoc-members: :show-inheritance: .. _exceptions: Exceptions ========== .. automodule:: swift.common.exceptions :members: :undoc-members: :show-inheritance: .. _internal_client: Internal Client =============== .. automodule:: swift.common.internal_client :members: :undoc-members: :show-inheritance: Manager ========= .. automodule:: swift.common.manager :members: :show-inheritance: MemCacheD ========= .. automodule:: swift.common.memcached :members: :show-inheritance: .. _request_helpers: Request Helpers =============== .. automodule:: swift.common.request_helpers :members: :undoc-members: :show-inheritance: .. _swob: Swob ==== .. automodule:: swift.common.swob :members: :show-inheritance: :special-members: __call__ .. _utils: Utils ===== .. automodule:: swift.common.utils :members: :show-inheritance: .. _wsgi: WSGI ==== .. automodule:: swift.common.wsgi :members: :show-inheritance: .. _storage_policy: Storage Policy ============== .. automodule:: swift.common.storage_policy :members: :show-inheritance: swift-2.7.1/doc/source/test-cors.html0000664000567000056710000000372513024044352020700 0ustar jenkinsjenkins00000000000000 Test CORS Token


[test-cors.html: HTML markup lost in extraction; the surviving form labels are "Token", "Method" and "URL (Container or Object)".]
swift-2.7.1/doc/source/logs.rst0000664000567000056710000001607513024044354017571 0ustar  jenkinsjenkins00000000000000====
Logs
====

Swift has quite verbose logging, and the generated logs can be used for
cluster monitoring, utilization calculations, audit records, and more. As an
overview, Swift's logs are sent to syslog and organized by log level and
syslog facility. All log lines related to the same request have the same
transaction id. This page documents the log formats used in the system.

.. note::

    By default, Swift will log full log lines. However, with the
    ``log_max_line_length`` setting and depending on your logging server
    software, lines may be truncated or shortened. With ``log_max_line_length <
    7``, the log line will be truncated. With ``log_max_line_length >= 7``, the
    log line will be "shortened": about half the max length followed by " ... "
    followed by the other half the max length. Unless you use exceptionally
    short values, you are unlikely to run across this with the following
    documented log lines, but you may see it with debugging and error log
    lines.
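
As a rough illustration, the shortening rule described in the note behaves
approximately like the following sketch (an approximation for illustration
only, not the exact implementation in ``swift.common.utils``)::

    def shorten(line, log_max_line_length):
        """Approximate Swift's log line shortening behaviour."""
        if log_max_line_length <= 0 or len(line) <= log_max_line_length:
            return line
        if log_max_line_length < 7:
            # Very small limits simply truncate the line.
            return line[:log_max_line_length]
        # Otherwise keep about half from each end around " ... ".
        half = (log_max_line_length - 5) // 2
        return line[:half] + ' ... ' + line[-half:]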

----------
Proxy Logs
----------

The proxy logs contain the record of all external API requests made to the
proxy server. Swift's proxy servers log requests using a custom format
designed to provide robust information and simple processing. The log format
is::

    client_ip remote_addr datetime request_method request_path protocol
        status_int referer user_agent auth_token bytes_recvd bytes_sent
        client_etag transaction_id headers request_time source log_info
        request_start_time request_end_time policy_index

=================== ==========================================================
**Log Field**       **Value**
------------------- ----------------------------------------------------------
client_ip           Swift's guess at the end-client IP, taken from various
                    headers in the request.
remote_addr         The IP address of the other end of the TCP connection.
datetime            Timestamp of the request, in
                    day/month/year/hour/minute/second format.
request_method      The HTTP verb in the request.
request_path        The path portion of the request.
protocol            The transport protocol used (currently one of http or
                    https).
status_int          The response code for the request.
referer             The value of the HTTP Referer header.
user_agent          The value of the HTTP User-Agent header.
auth_token          The value of the auth token. This may be truncated or
                    otherwise obscured.
bytes_recvd         The number of bytes read from the client for this request.
bytes_sent          The number of bytes sent to the client in the body of the
                    response. This is how many bytes were yielded to the WSGI
                    server.
client_etag         The etag header value given by the client.
transaction_id      The transaction id of the request.
headers             The headers given in the request.
request_time        The duration of the request.
source              The "source" of the request. This may be set for requests
                    that are generated in order to fulfill client requests,
                    e.g. bulk uploads.
log_info            Various info that may be useful for diagnostics, e.g. the
                    value of any x-delete-at header.
request_start_time  High-resolution timestamp from the start of the request.
request_end_time    High-resolution timestamp from the end of the request.
policy_index        The value of the storage policy index.
=================== ==========================================================

In one log line, all of the above fields are space-separated and url-encoded.
If any value is empty, it will be logged as a "-". This allows for simple
parsing by splitting each line on whitespace. New values may be placed at the
end of the log line from time to time, but the order of the existing values
will not change. Swift log processing utilities should look for the first N
fields they require (e.g. in Python using something like
``log_line.split()[:14]`` to get up through the transaction id).
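
For example, a log processing utility might use a small helper along these
lines (a sketch only; the field names follow the table above, and the
``urllib`` import differs on Python 2)::

    from urllib.parse import unquote  # Python 2: from urllib import unquote

    PROXY_FIELDS = [
        'client_ip', 'remote_addr', 'datetime', 'request_method',
        'request_path', 'protocol', 'status_int', 'referer', 'user_agent',
        'auth_token', 'bytes_recvd', 'bytes_sent', 'client_etag',
        'transaction_id']

    def parse_proxy_log_line(line):
        # Values are space-separated and url-encoded; empty values are "-".
        values = [unquote(v) for v in line.split()[:14]]
        return dict(zip(PROXY_FIELDS, values))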

Swift Source
============

The ``source`` value in the proxy logs is used to identify the originator of a
request in the system. For example, if the client initiates a bulk upload, the
proxy server may end up doing many requests. The initial bulk upload request
will be logged as normal, but all of the internal "child requests" will have a
source value indicating they came from the bulk functionality.

======================= =============================
**Logged Source Value** **Originator of the Request**
----------------------- -----------------------------
FP                      :ref:`formpost`
SLO                     :ref:`static-large-objects`
SW                      :ref:`staticweb`
TU                      :ref:`tempurl`
BD                      :ref:`bulk` (delete)
EA                      :ref:`bulk` (extract)
CQ                      :ref:`container-quotas`
CS                      :ref:`container-sync`
TA                      :ref:`common_tempauth`
DLO                     :ref:`dynamic-large-objects`
LE                      :ref:`list_endpoints`
KS                      :ref:`keystoneauth`
RL                      :ref:`ratelimit`
VW                      :ref:`versioned_writes`
======================= =============================


-----------------
Storage Node Logs
-----------------

Swift's account, container, and object server processes each log requests
that they receive, if they have been configured to do so with the
``log_requests`` config parameter (which defaults to true). The format for
these log lines is::

    remote_addr - - [datetime] "request_method request_path" status_int
        content_length "referer" "transaction_id" "user_agent" request_time
        additional_info server_pid policy_index

=================== ==========================================================
**Log Field**       **Value**
------------------- ----------------------------------------------------------
remote_addr         The IP address of the other end of the TCP connection.
datetime            Timestamp of the request, in
                    "day/month/year:hour:minute:second +0000" format.
request_method      The HTTP verb in the request.
request_path        The path portion of the request.
status_int          The response code for the request.
content_length      The value of the Content-Length header in the response.
referer             The value of the HTTP Referer header.
transaction_id      The transaction id of the request.
user_agent          The value of the HTTP User-Agent header. Swift services
                    report a user-agent string of the service name followed by
                    the process ID, such as ``"proxy-server <pid>"`` or ``"object-updater <pid>"``.
request_time        The duration of the request.
additional_info     Additional useful information.
server_pid          The process id of the server
policy_index        The value of the storage policy index.
=================== ==========================================================
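
For storage node logs, a similar sketch using a regular expression might look
like this (an illustration only; it assumes the ``additional_info`` field
contains no embedded spaces)::

    import re

    STORAGE_LOG_RE = re.compile(
        r'(?P<remote_addr>\S+) - - \[(?P<datetime>[^\]]+)\] '
        r'"(?P<request_method>\S+) (?P<request_path>\S+)" '
        r'(?P<status_int>\S+) (?P<content_length>\S+) '
        r'"(?P<referer>[^"]*)" "(?P<transaction_id>[^"]*)" '
        r'"(?P<user_agent>[^"]*)" (?P<request_time>\S+) '
        r'(?P<additional_info>\S+) (?P<server_pid>\S+) '
        r'(?P<policy_index>\S+)$')

    def parse_storage_log_line(line):
        match = STORAGE_LOG_RE.match(line)
        return match.groupdict() if match else None
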
swift-2.7.1/doc/saio/0000775000567000056710000000000013024044470015514 5ustar  jenkinsjenkins00000000000000swift-2.7.1/doc/saio/rsyslog.d/0000775000567000056710000000000013024044470017440 5ustar  jenkinsjenkins00000000000000swift-2.7.1/doc/saio/rsyslog.d/10-swift.conf0000664000567000056710000000213413024044352021660 0ustar  jenkinsjenkins00000000000000# Uncomment the following to have a log containing all logs together
#local1,local2,local3,local4,local5.*   /var/log/swift/all.log

# Uncomment the following to have hourly proxy logs for stats processing
#$template HourlyProxyLog,"/var/log/swift/hourly/%$YEAR%%$MONTH%%$DAY%%$HOUR%"
#local1.*;local1.!notice ?HourlyProxyLog

local1.*;local1.!notice /var/log/swift/proxy.log
local1.notice           /var/log/swift/proxy.error
local1.*                ~

local2.*;local2.!notice /var/log/swift/storage1.log
local2.notice           /var/log/swift/storage1.error
local2.*                ~

local3.*;local3.!notice /var/log/swift/storage2.log
local3.notice           /var/log/swift/storage2.error
local3.*                ~

local4.*;local4.!notice /var/log/swift/storage3.log
local4.notice           /var/log/swift/storage3.error
local4.*                ~

local5.*;local5.!notice /var/log/swift/storage4.log
local5.notice           /var/log/swift/storage4.error
local5.*                ~

local6.*;local6.!notice /var/log/swift/expirer.log
local6.notice           /var/log/swift/expirer.error
local6.*                ~
swift-2.7.1/doc/saio/swift/0000775000567000056710000000000013024044470016650 5ustar  jenkinsjenkins00000000000000swift-2.7.1/doc/saio/swift/account-server/0000775000567000056710000000000013024044470021610 5ustar  jenkinsjenkins00000000000000swift-2.7.1/doc/saio/swift/account-server/2.conf0000664000567000056710000000074613024044352022626 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6022
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true

[pipeline:main]
pipeline = recon account-server

[app:account-server]
use = egg:swift#account

[filter:recon]
use = egg:swift#recon

[account-replicator]
rsync_module = {replication_ip}::account{replication_port}

[account-auditor]

[account-reaper]
swift-2.7.1/doc/saio/swift/account-server/4.conf0000664000567000056710000000074613024044352022630 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6042
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true

[pipeline:main]
pipeline = recon account-server

[app:account-server]
use = egg:swift#account

[filter:recon]
use = egg:swift#recon

[account-replicator]
rsync_module = {replication_ip}::account{replication_port}

[account-auditor]

[account-reaper]
swift-2.7.1/doc/saio/swift/account-server/3.conf0000664000567000056710000000074613024044352022627 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6032
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true

[pipeline:main]
pipeline = recon account-server

[app:account-server]
use = egg:swift#account

[filter:recon]
use = egg:swift#recon

[account-replicator]
rsync_module = {replication_ip}::account{replication_port}

[account-auditor]

[account-reaper]
swift-2.7.1/doc/saio/swift/account-server/1.conf0000664000567000056710000000074513024044352022624 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6012
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true

[pipeline:main]
pipeline = recon account-server

[app:account-server]
use = egg:swift#account

[filter:recon]
use = egg:swift#recon

[account-replicator]
rsync_module = {replication_ip}::account{replication_port}

[account-auditor]

[account-reaper]
swift-2.7.1/doc/saio/swift/proxy-server.conf0000664000567000056710000000306313024044354022207 0ustar  jenkinsjenkins00000000000000[DEFAULT]
bind_ip = 127.0.0.1
bind_port = 8080
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL1
eventlet_debug = true

[pipeline:main]
# Yes, proxy-logging appears twice. This is so that
# middleware-originated requests get logged too.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit crossdomain container_sync tempauth staticweb container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:bulk]
use = egg:swift#bulk

[filter:ratelimit]
use = egg:swift#ratelimit

[filter:crossdomain]
use = egg:swift#crossdomain

[filter:dlo]
use = egg:swift#dlo

[filter:slo]
use = egg:swift#slo

[filter:container_sync]
use = egg:swift#container_sync
current = //saio/saio_endpoint

[filter:tempurl]
use = egg:swift#tempurl

[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3

[filter:staticweb]
use = egg:swift#staticweb

[filter:account-quotas]
use = egg:swift#account_quotas

[filter:container-quotas]
use = egg:swift#container_quotas

[filter:cache]
use = egg:swift#memcache

[filter:gatekeeper]
use = egg:swift#gatekeeper

[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
swift-2.7.1/doc/saio/swift/container-sync-realms.conf0000664000567000056710000000013113024044352023726 0ustar  jenkinsjenkins00000000000000[saio]
key = changeme
key2 = changeme
cluster_saio_endpoint = http://127.0.0.1:8080/v1/
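# The section name above is the realm ("saio") and cluster_saio_endpoint names a
# cluster within it; the proxy's container_sync filter refers to them with
# "current = //saio/saio_endpoint". key and key2 are the shared secrets used to
# sign container sync requests and should be changed from "changeme".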

swift-2.7.1/doc/saio/swift/container-reconciler.conf0000664000567000056710000000220313024044352023620 0ustar  jenkinsjenkins00000000000000[DEFAULT]
# swift_dir = /etc/swift
user = 
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[container-reconciler]
# reclaim_age = 604800
# interval = 300
# request_tries = 3

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
swift-2.7.1/doc/saio/swift/object-server/0000775000567000056710000000000013024044470021422 5ustar  jenkinsjenkins00000000000000swift-2.7.1/doc/saio/swift/object-server/2.conf0000664000567000056710000000077013024044352022435 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6020
workers = 1
user = 
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true

[pipeline:main]
pipeline = recon object-server

[app:object-server]
use = egg:swift#object

[filter:recon]
use = egg:swift#recon

[object-replicator]
rsync_module = {replication_ip}::object{replication_port}

[object-reconstructor]

[object-updater]

[object-auditor]
swift-2.7.1/doc/saio/swift/object-server/4.conf0000664000567000056710000000077013024044352022437 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6040
workers = 1
user = 
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true

[pipeline:main]
pipeline = recon object-server

[app:object-server]
use = egg:swift#object

[filter:recon]
use = egg:swift#recon

[object-replicator]
rsync_module = {replication_ip}::object{replication_port}

[object-reconstructor]

[object-updater]

[object-auditor]
swift-2.7.1/doc/saio/swift/object-server/3.conf0000664000567000056710000000077013024044352022436 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6030
workers = 1
user = 
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true

[pipeline:main]
pipeline = recon object-server

[app:object-server]
use = egg:swift#object

[filter:recon]
use = egg:swift#recon

[object-replicator]
rsync_module = {replication_ip}::object{replication_port}

[object-reconstructor]

[object-updater]

[object-auditor]
swift-2.7.1/doc/saio/swift/object-server/1.conf0000664000567000056710000000076713024044352022442 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6010
workers = 1
user = 
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true

[pipeline:main]
pipeline = recon object-server

[app:object-server]
use = egg:swift#object

[filter:recon]
use = egg:swift#recon

[object-replicator]
rsync_module = {replication_ip}::object{replication_port}

[object-reconstructor]

[object-updater]

[object-auditor]
swift-2.7.1/doc/saio/swift/container-server/0000775000567000056710000000000013024044470022136 5ustar  jenkinsjenkins00000000000000swift-2.7.1/doc/saio/swift/container-server/2.conf0000664000567000056710000000100713024044352023143 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6021
workers = 1
user = 
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true

[pipeline:main]
pipeline = recon container-server

[app:container-server]
use = egg:swift#container

[filter:recon]
use = egg:swift#recon

[container-replicator]
rsync_module = {replication_ip}::container{replication_port}

[container-updater]

[container-auditor]

[container-sync]
swift-2.7.1/doc/saio/swift/container-server/4.conf0000664000567000056710000000100713024044352023145 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6041
workers = 1
user = 
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true

[pipeline:main]
pipeline = recon container-server

[app:container-server]
use = egg:swift#container

[filter:recon]
use = egg:swift#recon

[container-replicator]
rsync_module = {replication_ip}::container{replication_port}

[container-updater]

[container-auditor]

[container-sync]
swift-2.7.1/doc/saio/swift/container-server/3.conf0000664000567000056710000000100713024044352023144 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6031
workers = 1
user = 
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true

[pipeline:main]
pipeline = recon container-server

[app:container-server]
use = egg:swift#container

[filter:recon]
use = egg:swift#recon

[container-replicator]
rsync_module = {replication_ip}::container{replication_port}

[container-updater]

[container-auditor]

[container-sync]
swift-2.7.1/doc/saio/swift/container-server/1.conf0000664000567000056710000000100613024044352023141 0ustar  jenkinsjenkins00000000000000[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6011
workers = 1
user = 
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true

[pipeline:main]
pipeline = recon container-server

[app:container-server]
use = egg:swift#container

[filter:recon]
use = egg:swift#recon

[container-replicator]
rsync_module = {replication_ip}::container{replication_port}

[container-updater]

[container-auditor]

[container-sync]
swift-2.7.1/doc/saio/swift/object-expirer.conf0000664000567000056710000000340413024044352022441 0ustar  jenkinsjenkins00000000000000[DEFAULT]
# swift_dir = /etc/swift
user = 
# You can specify default log routing here if you want:
log_name = object-expirer
log_facility = LOG_LOCAL6
log_level = INFO
#log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[object-expirer]
interval = 300
# auto_create_account_prefix = .
# report_interval = 300
# concurrency is the level of concurrency to use to do the work; this value
# must be set to at least 1
# concurrency = 1
# processes is how many parts to divide the work into, one part per process
#   that will be doing the work
# processes set to 0 means that a single process will be doing all the work
# processes can also be specified on the command line and will override the
#   config value
# processes = 0
# process is which of the parts a particular process will work on
# process can also be specified on the command line and will override the config
#   value
# process is "zero based", if you want to use 3 processes, you should run
#  processes with process set to 0, 1, and 2
# process = 0
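#
# For illustration (a sketch only): to split the expirer's work across three
# daemons, each daemon's config could set
#   processes = 3
#   process = 0     # and 1 and 2 respectively in the other two configs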

[pipeline:main]
pipeline = catch_errors cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
swift-2.7.1/doc/saio/swift/swift.conf0000664000567000056710000000076513024044352020662 0ustar  jenkinsjenkins00000000000000[swift-hash]
# random unique strings that can never change (DO NOT LOSE)
# Use only printable chars (python -c "import string; print(string.printable)")
swift_hash_path_prefix = changeme
swift_hash_path_suffix = changeme

[storage-policy:0]
name = gold
policy_type = replication
default = yes

[storage-policy:1]
name = silver
policy_type = replication

[storage-policy:2]
name = ec42
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 2
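
# Each policy index maps to its own object ring: policy 0 (gold) uses
# object.ring.gz, policy 1 (silver) uses object-1.ring.gz, and policy 2 (ec42)
# uses object-2.ring.gz, which is exactly what the SAIO remakerings script builds.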
swift-2.7.1/doc/saio/bin/0000775000567000056710000000000013024044470016264 5ustar  jenkinsjenkins00000000000000swift-2.7.1/doc/saio/bin/startmain0000775000567000056710000000004213024044354020211 0ustar  jenkinsjenkins00000000000000#!/bin/bash

swift-init main startswift-2.7.1/doc/saio/bin/startrest0000775000567000056710000000004213024044354020242 0ustar  jenkinsjenkins00000000000000#!/bin/bash

swift-init rest startswift-2.7.1/doc/saio/bin/resetswift0000775000567000056710000000166613024044354020423 0ustar  jenkinsjenkins00000000000000#!/bin/bash

swift-init all stop
# Remove the following line if you did not set up rsyslog for individual logging:
sudo find /var/log/swift -type f -exec rm -f {} \;
sudo umount /mnt/sdb1
# If you are using a loopback device set SAIO_BLOCK_DEVICE to "/srv/swift-disk"
sudo mkfs.xfs -f ${SAIO_BLOCK_DEVICE:-/dev/sdb1}
sudo mount /mnt/sdb1
sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4
sudo chown ${USER}:${USER} /mnt/sdb1/*
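# The per-server node directories below match the devices (sdb1..sdb8) that the
# remakerings script adds to the rings; sdb5..sdb8 exist to give the erasure
# coding policy a second device per server.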
mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \
         /srv/2/node/sdb2 /srv/2/node/sdb6 \
         /srv/3/node/sdb3 /srv/3/node/sdb7 \
         /srv/4/node/sdb4 /srv/4/node/sdb8
sudo rm -f /var/log/debug /var/log/messages /var/log/rsyncd.log /var/log/syslog
find /var/cache/swift* -type f -name "*.recon" -exec rm -f {} \;
if [ "`type -t systemctl`" == "file" ]; then
    sudo systemctl restart rsyslog
    sudo systemctl restart memcached
else
    sudo service rsyslog restart
    sudo service memcached restart
fi
swift-2.7.1/doc/saio/bin/remakerings0000775000567000056710000000416713024044354020532 0ustar  jenkinsjenkins00000000000000#!/bin/bash

cd /etc/swift

rm -f *.builder *.ring.gz backups/*.builder backups/*.ring.gz
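
# swift-ring-builder <builder_file> create <part_power> <replicas> <min_part_hours>:
# "create 10 3 1" below makes a ring with 2^10 partitions, 3 replicas and a one
# hour minimum between moves of any given partition. object-1 (2 replicas) and
# object-2 (6 "replicas", i.e. 4+2 erasure-coded fragments) back storage
# policies 1 and 2 from swift.conf.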

swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add r1z1-127.0.0.1:6010/sdb1 1
swift-ring-builder object.builder add r1z2-127.0.0.1:6020/sdb2 1
swift-ring-builder object.builder add r1z3-127.0.0.1:6030/sdb3 1
swift-ring-builder object.builder add r1z4-127.0.0.1:6040/sdb4 1
swift-ring-builder object.builder rebalance
swift-ring-builder object-1.builder create 10 2 1
swift-ring-builder object-1.builder add r1z1-127.0.0.1:6010/sdb1 1
swift-ring-builder object-1.builder add r1z2-127.0.0.1:6020/sdb2 1
swift-ring-builder object-1.builder add r1z3-127.0.0.1:6030/sdb3 1
swift-ring-builder object-1.builder add r1z4-127.0.0.1:6040/sdb4 1
swift-ring-builder object-1.builder rebalance
swift-ring-builder object-2.builder create 10 6 1
swift-ring-builder object-2.builder add r1z1-127.0.0.1:6010/sdb1 1
swift-ring-builder object-2.builder add r1z1-127.0.0.1:6010/sdb5 1
swift-ring-builder object-2.builder add r1z2-127.0.0.1:6020/sdb2 1
swift-ring-builder object-2.builder add r1z2-127.0.0.1:6020/sdb6 1
swift-ring-builder object-2.builder add r1z3-127.0.0.1:6030/sdb3 1
swift-ring-builder object-2.builder add r1z3-127.0.0.1:6030/sdb7 1
swift-ring-builder object-2.builder add r1z4-127.0.0.1:6040/sdb4 1
swift-ring-builder object-2.builder add r1z4-127.0.0.1:6040/sdb8 1
swift-ring-builder object-2.builder rebalance
swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add r1z1-127.0.0.1:6011/sdb1 1
swift-ring-builder container.builder add r1z2-127.0.0.1:6021/sdb2 1
swift-ring-builder container.builder add r1z3-127.0.0.1:6031/sdb3 1
swift-ring-builder container.builder add r1z4-127.0.0.1:6041/sdb4 1
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder create 10 3 1
swift-ring-builder account.builder add r1z1-127.0.0.1:6012/sdb1 1
swift-ring-builder account.builder add r1z2-127.0.0.1:6022/sdb2 1
swift-ring-builder account.builder add r1z3-127.0.0.1:6032/sdb3 1
swift-ring-builder account.builder add r1z4-127.0.0.1:6042/sdb4 1
swift-ring-builder account.builder rebalance
swift-2.7.1/doc/saio/rsyncd.conf0000664000567000056710000000272413024044352017671 0ustar  jenkinsjenkins00000000000000uid = 
gid = 
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 127.0.0.1
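# Module names below follow <type><replication_port>, so per-server settings of
# the form "rsync_module = {replication_ip}::account{replication_port}" resolve
# to these sections; e.g. the account server bound to port 6012 replicates via
# [account6012] into /srv/1/node/.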

[account6012]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/account6012.lock

[account6022]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/account6022.lock

[account6032]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/account6032.lock

[account6042]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/account6042.lock

[container6011]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/container6011.lock

[container6021]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/container6021.lock

[container6031]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/container6031.lock

[container6041]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/container6041.lock

[object6010]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/object6010.lock

[object6020]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/object6020.lock

[object6030]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/object6030.lock

[object6040]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/object6040.lock
swift-2.7.1/doc/manpages/0000775000567000056710000000000013024044470016354 5ustar  jenkinsjenkins00000000000000swift-2.7.1/doc/manpages/swift-dispersion-populate.10000664000567000056710000001007713024044354023604 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-dispersion-populate 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-dispersion-populate
\- Openstack-swift dispersion populate 

.SH SYNOPSIS
.LP
.B swift-dispersion-populate [--container-suffix-start] [--object-suffix-start] [--container-only|--object-only] [--insecure] [conf_file]

.SH DESCRIPTION 
.PP
This is one of the swift-dispersion utilities that is used to evaluate the
overall cluster health. This is accomplished by checking if a set of 
deliberately distributed containers and objects are currently in their
proper places within the cluster.

.PP 
For instance, a common deployment has three replicas of each object.
The health of that object can be measured by checking if each replica
is in its proper place. If only 2 of the 3 is in place the object's health
can be said to be at 66.66%, where 100% would be perfect.

.PP
We need to place the containers and objects throughout the system so
that they are on distinct partitions. The \fBswift-dispersion-populate\fR tool
does this by making up random container and object names until they fall
on distinct partitions. Last, and repeatedly for the life of the cluster,
we need to run the \fBswift-dispersion-report\fR tool to check the health of each
of these containers and objects.

.PP
These tools need direct access to the entire cluster and to the ring files. 
Installing them on a proxy server, or on a box used for swift administration
purposes that also contains the common swift packages and rings, will usually
suffice. Both \fBswift-dispersion-populate\fR and \fBswift-dispersion-report\fR use the
same configuration file, /etc/swift/dispersion.conf. The account used by these
tools should be a dedicated account for the dispersion stats and should also have
admin privileges.

.SH OPTIONS
.RS 0
.PD 1
.IP "\fB--insecure\fR"
Allow accessing insecure keystone server. The keystone's certificate will not
be verified.
.IP "\fB--container-suffix-start=NUMBER\fR"
Start container suffix at NUMBER and resume population at this point; default: 0
.IP "\fB--object-suffix-start=NUMBER\fR"
Start object suffix at NUMBER and resume population at this point; default: 0
.IP "\fB--object-only\fR"
Only run object population
.IP "\fB--container-only\fR"
Only run container population
.IP "\fB--object-only\fR"
Only run object population
.IP "\fB--no-overlap\fR"
Increase coverage by amount in dispersion_coverage option with no overlap of existing partitions (if run more than once)

.SH CONFIGURATION
.PD 0 
Example \fI/etc/swift/dispersion.conf\fR: 

.RS 3
.IP "[dispersion]"
.IP "auth_url = https://127.0.0.1:443/auth/v1.0"
.IP "auth_user = dpstats:dpstats"
.IP "auth_key = dpstats"
.IP "swift_dir = /etc/swift"
.IP "# project_name = dpstats"
.IP "# project_domain_name = default"
.IP "# user_domain_name = default"
.IP "# dispersion_coverage = 1.0"
.IP "# retries = 5"
.IP "# concurrency = 25"
.IP "# endpoint_type = publicURL"
.RE
.PD 

.SH EXAMPLE
.PP 
.PD 0
$ swift-dispersion-populate
.RS 1
.IP "Created 2621 containers for dispersion reporting, 38s, 0 retries"
.IP "Created 2621 objects for dispersion reporting, 27s, 0 retries"
.RE

.PD
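.PP
Population can also be resumed or extended later; for instance (the suffix
value is illustrative and should match where a previous run stopped):
.RS 1
.IP "$ swift-dispersion-populate --object-only --object-suffix-start=2621"
.RE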
 

.SH DOCUMENTATION
.LP
More in depth documentation about the swift-dispersion utilities and
also Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/admin_guide.html#cluster-health
and 
.BI http://swift.openstack.org


.SH "SEE ALSO"
.BR swift-dispersion-report(1),
.BR dispersion.conf (5)
swift-2.7.1/doc/manpages/object-expirer.conf.50000664000567000056710000001552013024044354022314 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH object-expirer.conf 5 "03/15/2012" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B object-expirer.conf
\- configuration file for the openstack-swift object expirer daemon



.SH SYNOPSIS
.LP
.B object-expirer.conf



.SH DESCRIPTION 
.PP
This is the configuration file used by the object expirer daemon. The daemon's 
function is to query the internal hidden expiring_objects_account to discover 
objects that need to be deleted and to then delete them.

The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section will contain a 
certain number of key/value parameters which are described later. 

Any line that begins with a '#' symbol is ignored. 

You can find more information about python-pastedeploy configuration format at 
\fIhttp://pythonpaste.org/deploy/#config-format\fR



.SH GLOBAL SECTION
.PD 1 
.RS 0
This is indicated by section named [DEFAULT]. Below are the parameters that 
are acceptable within this section. 

.IP \fBswift_dir\fR 
Swift configuration directory. The default is /etc/swift.
.IP \fBuser\fR 
The system user that the object server will run as. The default is swift. 
.IP \fBlog_name\fR 
Label used when logging. The default is swift.
.IP \fBlog_facility\fR 
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR 
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBlog_max_line_length\fR
Caps the length of log lines to the value given; no limit if
set to 0, the default.
.IP \fBlog_custom_handlers\fR
Comma separated list of functions to call to setup custom log handlers.
functions get passed: conf, name, log_to_console, log_route, fmt, logger,
adapted_logger. The default is empty.
.IP \fBlog_udp_host\fR
If set, log_udp_host will override log_address.
.IP "\fBlog_udp_port\fR
UDP log port, the default is 514.
.IP \fBlog_statsd_host\fR
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to an IPv4 and IPv6 address, the IPv4
address will be used.
.IP \fBlog_statsd_port\fR
The default is 8125.
.IP \fBlog_statsd_default_sample_rate\fR
The default is 1.
.IP \fBlog_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.RE
.PD



.SH PIPELINE SECTION
.PD 1 
.RS 0
This is indicated by section name [pipeline:main]. Below are the parameters that
are acceptable within this section. 

.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters 
ended by an application. The default should be \fB"catch_errors cache proxy-server"\fR
.RE
.PD



.SH APP SECTION
.PD 1 
.RS 0
This is indicated by section name [app:proxy-server]. Below are the parameters
that are acceptable within this section.
.IP "\fBuse\fR"
Entry point for paste.deploy for the internal proxy server used by the expirer. This is the reference to the installed python egg.
The default is \fBegg:swift#proxy\fR. See proxy-server.conf-sample or the proxy-server.conf manpage for options.
.RE
.PD



.SH FILTER SECTION
.PD 1 
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and respective acceptable parameters. 

.RS 0
.IP "\fB[filter:cache]\fR"
.RE

Caching middleware that manages caching in swift.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the memcache middleware. This is the reference to the installed python egg.
The default is \fBegg:swift#memcache\fR. See proxy-server.conf-sample for options or See proxy-server.conf manpage.
.RE


.RS 0  
.IP "\fB[filter:catch_errors]\fR" 
.RE
.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the catch_errors middleware. This is the reference to the installed python egg.
The default is \fBegg:swift#catch_errors\fR. See proxy-server.conf-sample for options or See proxy-server.conf manpage.
.RE

.RS 0
.IP "\fB[filter:proxy-logging]\fR"
.RE

Logging for the proxy server now lives in this middleware.
If the access_* variables are not set, logging directives from [DEFAULT]
without "access_" will be used.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the proxy_logging middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#proxy_logging\fR. See proxy-server.conf-sample for options or See proxy-server.conf manpage.
.RE

.PD


.SH ADDITIONAL SECTIONS
.PD 1
.RS 0
The following section configures the object expirer daemon itself.
.IP "\fB[object-expirer]\fR"
.RE
.RS 3
.IP \fBinterval\fR
Time in seconds between expirer passes. The default is 300.
.IP "\fBauto_create_account_prefix\fR
The default is ".".
.IP \fBexpiring_objects_account_name\fR
The default is 'expiring_objects'.
.IP \fBreport_interval\fR
The default is 300 seconds.
.IP \fBconcurrency\fR
Level of concurrency to use to do the work; this value must be at least 1. The default is 1.
.IP \fBprocesses\fR
Processes is how many parts to divide the work into, one part per process that will be doing the work.
Processes set 0 means that a single process will be doing all the work.
Processes can also be specified on the command line and will override the config value.
The default is 0.
.IP \fBprocess\fR
Process is which of the parts a particular process will work on process can also be specified
on the command line and will override the config value process is "zero based", if you want
to use 3 processes, you should run processes with process set to 0, 1, and 2. The default is 0.
.IP \fBreclaim_age\fR
The expirer will re-attempt expiring if the source object is not available
up to reclaim_age seconds before it gives up and deletes the entry in the
queue. The default is 604800 seconds.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.RE
.PD


.SH DOCUMENTATION
.LP
More in depth documentation about the swift-object-expirer and
also Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/admin_guide.html 
and 
.BI http://swift.openstack.org


.SH "SEE ALSO"
.BR swift-proxy-server.conf(5),

swift-2.7.1/doc/manpages/swift-get-nodes.10000664000567000056710000000537713024044354021472 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-get-nodes 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-get-nodes
\- Openstack-swift get-nodes tool

.SH SYNOPSIS
.LP
.B swift-get-nodes 
\  <ring.gz> <account> [<container> [<object>]]
 
.SH DESCRIPTION 
.PP
The swift-get-nodes tool can be used to find out the location where
a particular account, container or object item is located within the 
swift cluster nodes. For example, if you have the account hash and a container 
name that belongs to that account, you can use swift-get-nodes to look up 
where the container resides by using the container ring.

.RS 0
.IP "\fIExample:\fR"
.RE

.RS 4
.PD 0 
.IP "$ swift-get-nodes /etc/swift/account.ring.gz MyAccount-12ac01446be2"

.PD 0
.IP "Account     MyAccount-12ac01446be2"
.IP "Container   None"
.IP "Object      None"

.IP "Partition 221082"
.IP "Hash d7e6ba68cfdce0f0e4ca7890e46cacce"

.IP "Server:Port Device      172.24.24.29:6002 sdd"
.IP "Server:Port Device      172.24.24.27:6002 sdr"
.IP "Server:Port Device      172.24.24.32:6002 sde"
.IP "Server:Port Device      172.24.24.26:6002 sdv    [Handoff]"


.IP "curl -I -XHEAD http://172.24.24.29:6002/sdd/221082/MyAccount-12ac01446be2"
.IP "curl -I -XHEAD http://172.24.24.27:6002/sdr/221082/MyAccount-12ac01446be2"
.IP "curl -I -XHEAD http://172.24.24.32:6002/sde/221082/MyAccount-12ac01446be2"
.IP "curl -I -XHEAD http://172.24.24.26:6002/sdv/221082/MyAccount-12ac01446be2 # [Handoff]"

.IP "ssh 172.24.24.29 ls -lah /srv/node/sdd/accounts/221082/cce/d7e6ba68cfdce0f0e4ca7890e46cacce/ "
.IP "ssh 172.24.24.27 ls -lah /srv/node/sdr/accounts/221082/cce/d7e6ba68cfdce0f0e4ca7890e46cacce/"
.IP "ssh 172.24.24.32 ls -lah /srv/node/sde/accounts/221082/cce/d7e6ba68cfdce0f0e4ca7890e46cacce/"
.IP "ssh 172.24.24.26 ls -lah /srv/node/sdv/accounts/221082/cce/d7e6ba68cfdce0f0e4ca7890e46cacce/ # [Handoff] "

.PD 
.RE 

.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at 
.BI http://swift.openstack.org/index.html



.SH "SEE ALSO"

.BR swift-account-info(1),
.BR swift-container-info(1),
.BR swift-object-info(1),
.BR swift-ring-builder(1)
swift-2.7.1/doc/manpages/swift-account-replicator.10000664000567000056710000000420113024044354023364 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-account-replicator 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-account-replicator 
\- Openstack-swift account replicator

.SH SYNOPSIS
.LP
.B swift-account-replicator 
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP
Replication is designed to keep the system in a consistent state in the face of 
temporary error conditions like network outages or drive failures. The replication 
processes compare local data with each remote copy to ensure they all contain the 
latest version. Account replication uses a combination of hashes and shared high 
water marks to quickly compare subsections of each partition.
.PP
Replication updates are push based. Account replication pushes missing records over 
HTTP or rsyncs whole database files. The replicator also ensures that data is removed
from the system. When an account item is deleted, a tombstone is set as the latest 
version of the item. The replicator will see the tombstone and ensure that the item 
is removed from the entire system.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE
.PD 
.RE
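
.SH EXAMPLE
.PP
For illustration, a single verbose pass against one of the SAIO account server
configs (the config path is only an example):
.RS 3
.IP "$ swift-account-replicator /etc/swift/account-server/1.conf --once --verbose"
.RE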
    
   
.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-account-replicator
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR account-server.conf(5)
swift-2.7.1/doc/manpages/account-server.conf.50000664000567000056710000003131113024044354022326 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH account-server.conf 5 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B account-server.conf
\- configuration file for the openstack-swift account server



.SH SYNOPSIS
.LP
.B account-server.conf



.SH DESCRIPTION
.PP
This is the configuration file used by the account server and other account
background services, such as the replicator, auditor and reaper.

The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section will contain a
certain number of key/value parameters which are described later.

Any line that begins with a '#' symbol is ignored.

You can find more information about python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR



.SH GLOBAL SECTION
.PD 1
.RS 0
This is indicated by section named [DEFAULT]. Below are the parameters that
are acceptable within this section.

.IP "\fBbind_ip\fR"
IP address the account server should bind to. The default is 0.0.0.0 which will make
it bind to all available addresses.
.IP "\fBbind_port\fR"
TCP port the account server should bind to. The default is 6002.
.IP "\fBbind_timeout\fR"
Timeout to bind socket. The default is 30.
.IP \fBbacklog\fR
TCP backlog.  Maximum number of allowed pending connections. The default value is 4096.
.IP \fBworkers\fR
The number of pre-forked processes that will accept connections.  Zero means
no fork.  The default is auto which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fallback to one.  It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently.  The default is 1024.
.IP \fBuser\fR
The system user that the account server will run as. The default is swift.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBdevices\fR
Parent directory or where devices are mounted. Default is /srv/node.
.IP \fBmount_check\fR
Whether or not to check if the devices are mounted to prevent accidentally writing to
the root device. The default is set to true.
.IP \fBdisable_fallocate\fR
Disable pre-allocate disk space for a file. The default is false.
.IP \fBlog_name\fR
Label used when logging. The default is swift.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP "\fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBlog_max_line_length\fR
Caps the length of log lines to the value given; no limit if
set to 0, the default.
.IP \fBlog_custom_handlers\fR
Comma separated list of functions to call to setup custom log handlers.
functions get passed: conf, name, log_to_console, log_route, fmt, logger,
adapted_logger. The default is empty.
.IP \fBlog_udp_host\fR
If set, log_udp_host will override log_address.
.IP "\fBlog_udp_port\fR
UDP log port, the default is 514.
.IP \fBlog_statsd_host\fR
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to an IPv4 and IPv6 address, the IPv4
address will be used.
.IP \fBlog_statsd_port\fR
The default is 8125.
.IP \fBlog_statsd_default_sample_rate\fR
The default is 1.
.IP \fBlog_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.IP \fBdb_preallocation\fR
If you don't mind the extra disk space usage in overhead, you can turn this
on to preallocate disk space with SQLite databases to decrease fragmentation.
The default is false.
.IP \fBeventlet_debug\fR
Debug mode for eventlet library. The default is false.
.IP \fBfallocate_reserve\fR
You can set fallocate_reserve to the number of bytes you'd like fallocate to
reserve, whether there is space for the given file size or not. The default is 0.
.RE
.PD



.SH PIPELINE SECTION
.PD 1
.RS 0
This is indicated by section name [pipeline:main]. Below are the parameters that
are acceptable within this section.

.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters
ended by an application. The normal pipeline is "healthcheck
recon account-server".
.RE
.PD



.SH APP SECTION
.PD 1
.RS 0
This is indicated by section name [app:account-server]. Below are the parameters
that are acceptable within this section.
.IP "\fBuse\fR"
Entry point for paste.deploy for the account server. This is the reference to the installed python egg.
This is normally \fBegg:swift#account\fR.
.IP "\fBset log_name\fR
Label used when logging. The default is account-server.
.IP "\fBset log_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR
Logging level. The default is INFO.
.IP "\fBset log_requests\fR
Enables request logging. The default is True.
.IP "\fBset log_address\fR
Logging address. The default is /dev/log.
.IP "\fBauto_create_account_prefix\fR
The default is ".".
.IP "\fBreplication_server\fR
Configure parameter for creating specific server.
To handle all verbs, including replication verbs, do not specify
"replication_server" (this is the default). To only handle replication,
set to a true value (e.g. "true" or "1"). To handle only non-replication
verbs, set to "false". Unless you have a separate replication network, you
should not specify any value for "replication_server". The default is empty.
.RE
.PD



.SH FILTER SECTION
.PD 1
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and respective acceptable parameters.
.IP "\fB[filter:healthcheck]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the healthcheck middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#healthcheck\fR.
.IP "\fBdisable_path\fR"
An optional filesystem path which, if present, will cause the healthcheck
URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
.RE

.RS 0
.IP "\fB[filter:recon]\fR"
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the recon middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#recon\fR.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write access. The default is /var/cache/swift.
.RE
.PD

.RS 0
.IP "\fB[filter:xprofile]\fR"
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the xprofile middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#xprofile\fR.
.IP "\fBprofile_module\fR"
This option enables you to switch profilers, which should inherit from the python
standard profiler. Currently the supported value can be 'cProfile', 'eventlet.green.profile' etc.
.IP "\fBlog_filename_prefix\fR"
This prefix will be used to combine process ID and timestamp to name the
profile data file.  Make sure the executing user has permission to write
into this path (missing path segments will be created, if necessary).
If you enable profiling in more than one type of daemon, you must override
it with a unique value for each; the default is /var/log/swift/profile/account.profile.
.IP "\fBdump_interval\fR"
The profile data will be dumped to local disk based on above naming rule
in this interval. The default is 5.0.
.IP "\fBdump_timestamp\fR"
Be careful: this option will enable the profiler to dump data into files with a
time stamp, which means there will be lots of files piled up in the directory.
The default is false.
.IP "\fBpath\fR"
This is the path of the URL to access the mini web UI. The default is __profile__.
.IP "\fBflush_at_shutdown\fR"
Clear the data when the wsgi server shutdown. The default is false.
.IP "\fBunwind\fR"
Unwind the iterator of applications. Default is false.
.RE
.PD


.SH ADDITIONAL SECTIONS
.PD 1
.RS 0
The following sections are used by other swift-account services, such as replicator,
auditor and reaper.
.IP "\fB[account-replicator]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is account-replicator.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBper_diff\fR
Maximum number of database rows that will be sync'd in a single HTTP replication request. The default is 1000.
.IP \fBmax_diffs\fR
This caps how long the replicator will spend trying to sync a given database per pass so the other databases don't get starved. The default is 100.
.IP \fBconcurrency\fR
Number of replication workers to spawn. The default is 8.
.IP "\fBrun_pause [deprecated]\fR"
Time in seconds to wait between replication passes. The default is 30.
.IP \fBinterval\fR
Replaces run_pause with the more standard "interval", which means the replicator won't pause unless it takes less than the interval set. The default is 30.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBreclaim_age\fR
Time elapsed in seconds before an account can be reclaimed. The default is
604800 seconds.
.IP \fBrsync_compress\fR
Allow rsync to compress data which is transmitted to destination node
during sync. However, this is applicable only when destination node is in
a different region than the local one. The default is false.
.IP \fBrsync_module\fR
Format of the rsync module where the replicator will send data. See
etc/rsyncd.conf-sample for some usage examples.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.RE



.RS 0
.IP "\fB[account-auditor]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is account-auditor.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBinterval\fR
Will audit, at most, 1 account per device per interval. The default is 1800 seconds.
.IP \fBaccounts_per_second\fR
Maximum accounts audited per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 200.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.RE



.RS 0
.IP "\fB[account-reaper]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is account-reaper.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBconcurrency\fR
Number of reaper workers to spawn. The default is 25.
.IP \fBinterval\fR
Minimum time for a pass to take. The default is 3600 seconds.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBdelay_reaping\fR
Normally, the reaper begins deleting account information for deleted accounts
immediately; you can set this to delay its work however. The value is in
seconds. The default is 0.
.IP \fBreap_warn_after\fR
If the account fails to be reaped due to a persistent error, the
account reaper will log a message such as:
    Account <name> has not been reaped since <time>
You can search logs for this message if space is not being reclaimed
after you delete account(s).
Default is 2592000 seconds (30 days). This is in addition to any time
requested by delay_reaping.
.RE
.PD




.SH DOCUMENTATION
.LP
More in depth documentation about the swift-account-server and
also Openstack-Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html
and
.BI http://swift.openstack.org


.SH "SEE ALSO"
.BR swift-account-server(1),
swift-2.7.1/doc/manpages/swift-account-auditor.10000664000567000056710000000315013024044354022671 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-account-auditor 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-account-auditor 
\- Openstack-swift account auditor

.SH SYNOPSIS
.LP
.B swift-account-auditor 
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP

The account auditor crawls the local account system checking the integrity of account 
databases. If corruption is found (in the case of bit rot, for example), the file is 
quarantined, and replication will replace the bad file from another replica.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE
.PD
.RE
    
.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-account-auditor 
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html

.SH "SEE ALSO"
.BR account-server.conf(5)
swift-2.7.1/doc/manpages/swift-ring-builder.10000664000567000056710000001716613024044354022167 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-ring-builder 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-ring-builder
\- Openstack-swift ring builder

.SH SYNOPSIS
.LP
.B swift-ring-builder
   <builder_file> <command> <...>

.SH DESCRIPTION 
.PP
The swift-ring-builder utility is used to create, search and manipulate 
the swift storage ring. The ring-builder assigns partitions to devices and 
writes an optimized Python structure to a gzipped, pickled file on disk for
shipping out to the servers. The server processes just check the modification 
time of the file occasionally and reload their in-memory copies of the ring 
structure as needed. Because of how the ring-builder manages changes to the
ring, using a slightly older ring usually just means one of the three replicas
for a subset of the partitions will be incorrect, which can be easily worked around.
.PP
The ring-builder also keeps its own builder file with the ring information and
additional data required to build future rings. It is very important to keep
multiple backup copies of these builder files. One option is to copy the
builder files out to every server while copying the ring files themselves.
Another is to upload the builder files into the cluster itself. Complete loss
of a builder file will mean creating a new ring from scratch, nearly all
partitions will end up assigned to different devices, and therefore nearly all
data stored will have to be replicated to new locations. So, recovery from a
builder file loss is possible, but data will definitely be unreachable for an
extended time.
.PP
If invoked as 'swift-ring-builder-safe' the directory containing the builder
file provided will be locked (via a .lock file in the files parent directory).
This provides a basic safe guard against multiple instances of the swift-ring-builder
(or other utilities that observe this lock) from attempting to write to or read
the builder/ring files while operations are in progress. This can be useful in
environments where ring management has been automated but the operator still
needs to interact with the rings manually.


.SH SEARCH
.PD 0 

.IP "\fB\fR"
.RS 5
.IP "Can be of the form:"
.IP "dz-:/_"

.IP "Any part is optional, but you must include at least one, examples:"

.RS 3
.IP "d74              Matches the device id 74"
.IP "z1               Matches devices in zone 1"
.IP "z1-1.2.3.4       Matches devices in zone 1 with the ip 1.2.3.4"
.IP "1.2.3.4          Matches devices in any zone with the ip 1.2.3.4"
.IP "z1:5678          Matches devices in zone 1 using port 5678"
.IP ":5678            Matches devices that use port 5678"
.IP "/sdb1            Matches devices with the device name sdb1"
.IP "_shiny           Matches devices with shiny in the meta data"
.IP "_'snet: 5.6.7.8' Matches devices with snet: 5.6.7.8 in the meta data"
.IP "[::1]            Matches devices in any zone with the ip ::1"
.IP "z1-[::1]:5678    Matches devices in zone 1 with ip ::1 and port 5678"
.RE
   
Most specific example:

.RS 3
d74z1-1.2.3.4:5678/sdb1_"snet: 5.6.7.8" 
.RE 

Nerd explanation:

.RS 3
.IP "All items require their single character prefix except the ip, in which case the - is optional unless the device id or zone is also included."
.RE
.RE
.PD 


.SH COMMANDS

.PD 0 


.IP "\fB\fR"
.RS 5
Shows information about the ring and the devices within. 
.RE


.IP "\fBsearch\fR  "
.RS 5
Shows information about matching devices.
.RE


.IP "\fBadd\fR z-:/_ "
.IP "\fBadd\fR rz-:/_ "
.IP "\fBadd\fR -r  -z  -i  -p  -d  -m  -w "
.RS 5
Adds a device to the ring with the given information. No partitions will be 
assigned to the new device until after running 'rebalance'. This is so you 
can make multiple device changes and rebalance them all just once.
.RE


.IP "\fBcreate\fR   "
.RS 5
Creates <builder_file> with 2^<part_power> partitions and <replicas> replicas.
<min_part_hours> is the number of hours to restrict moving a partition more than once.
.RE


.IP "\fBlist_parts\fR  [] .."
.RS 5
Returns a 2 column list of all the partitions that are assigned to any of
the devices matching the search values given. The first column is the
assigned partition number and the second column is the number of device
matches for that partition. The list is ordered from most number of matches
to least. If there are a lot of devices to match against, this command
could take a while to run.  
.RE


.IP "\fBrebalance\fR"
.RS 5
Attempts to rebalance the ring by reassigning partitions that haven't been recently reassigned.
.RE


.IP "\fBremove\fR  "
.RS 5
Removes the device(s) from the ring. This should normally just be used for 
a device that has failed. For a device you wish to decommission, it's best 
to set its weight to 0, wait for it to drain all its data, then use this 
remove command. This will not take effect until after running 'rebalance'. 
This is so you can make multiple device changes and rebalance them all just once.
.RE


.IP "\fBset_info\fR  :/_"
.RS 5
Resets the device's information. This information isn't used to assign 
partitions, so you can use 'write_ring' afterward to rewrite the current 
ring with the newer device information. Any of the parts are optional 
in the final :/_ parameter; just give what you 
want to change. For instance set_info d74 _"snet: 5.6.7.8" would just 
update the meta data for device id 74.
.RE


.IP "\fBset_min_part_hours\fR "
.RS 5
Changes the <min_part_hours> to the given <hours>. This should be set to 
however long a full replication/update cycle takes. We're working on a way 
to determine this more easily than scanning logs.
.RE


.IP "\fBset_weight\fR  "
.RS 5
Resets the device's weight. No partitions will be reassigned to or from the 
device until after running 'rebalance'. This is so you can make multiple 
device changes and rebalance them all just once.
.RE


.IP "\fBvalidate\fR"
.RS 5
Just runs the validation routines on the ring.
.RE


.IP "\fBwrite_ring\fR"
.RS 5
Just rewrites the distributable ring file. This is done automatically after 
a successful rebalance, so really this is only useful after one or more 'set_info' 
calls when no rebalance is needed but you want to send out the new device information.
.RE


\fBQuick list:\fR add create list_parts rebalance remove search set_info
            set_min_part_hours set_weight validate write_ring

\fBExit codes:\fR 0 = ring changed, 1 = ring did not change, 2 = error
.PD 
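
.SH EXAMPLE
.PP
For illustration, the following commands (taken from the SAIO remakerings
helper shipped with Swift) create and populate a small account ring; the IPs,
ports and device names are examples only:
.RS 3
.PD 0
.IP "$ swift-ring-builder account.builder create 10 3 1"
.IP "$ swift-ring-builder account.builder add r1z1-127.0.0.1:6012/sdb1 1"
.IP "$ swift-ring-builder account.builder add r1z2-127.0.0.1:6022/sdb2 1"
.IP "$ swift-ring-builder account.builder rebalance"
.PD
.RE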


 

.SH DOCUMENTATION
.LP
More in depth documentation about the swift ring and also Openstack-Swift as a 
whole can be found at 
.BI http://swift.openstack.org/overview_ring.html, 
.BI http://swift.openstack.org/admin_guide.html#managing-the-rings 
and 
.BI http://swift.openstack.org


swift-2.7.1/doc/manpages/dispersion.conf.50000664000567000056710000000630313024044354021550 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH dispersion.conf 5 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B dispersion.conf
\- configuration file for the openstack-swift dispersion tools 

.SH SYNOPSIS
.LP
.B dispersion.conf

.SH DESCRIPTION 
.PP
This is the configuration file used by the dispersion populate and report tools.
The file format consists of the '[dispersion]' module as the header and available parameters. 
Any line that begins with a '#' symbol is ignored. 


.SH PARAMETERS
.PD 1 
.RS 0
.IP "\fBauth_version\fR"
Authentication system API version. The default is 1.0.
.IP "\fBauth_url\fR"
Authentication system URL 
.IP "\fBauth_user\fR" 
Authentication system account/user name
.IP "\fBauth_key\fR"
Authentication system account/user password
.IP "\fBproject_name\fR"
Project name in case of keystone auth version 3
.IP "\fBproject_domain_name\fR"
Project domain name in case of keystone auth version 3
.IP "\fBuser_domain_name\fR"
User domain name in case of keystone auth version 3
.IP "\fBendpoint_type\fR"
The default is 'publicURL'.
.IP "\fBkeystone_api_insecure\fR"
The default is false.
.IP "\fBswift_dir\fR"
Location of openstack-swift configuration and ring files
.IP "\fBdispersion_coverage\fR"
Percentage of partition coverage to use. The default is 1.0.
.IP "\fBretries\fR"
Maximum number of attempts. The default is 5.
.IP "\fBconcurrency\fR"
Concurrency to use. The default is 25.
.IP "\fBcontainer_populate\fR"
The default is true.
.IP "\fBobject_populate\fR"
The default is true.
.IP "\fBdump_json\fR"
Whether to output in json format. The default is no.
.IP "\fBcontainer_report\fR"
Whether to run the container report. The default is yes.
.IP "\fBobject_report\fR"
Whether to run the object report. The default is yes.
.RE
.PD

.SH SAMPLE
.PD 0 
.RS 0
.IP "[dispersion]"
.IP "auth_url = https://127.0.0.1:443/auth/v1.0"
.IP "auth_user = dpstats:dpstats"
.IP "auth_key = dpstats"
.IP "swift_dir = /etc/swift"
.IP "# keystone_api_insecure = no"
.IP "# project_name = dpstats"
.IP "# project_domain_name = default"
.IP "# user_domain_name = default"
.IP "# dispersion_coverage = 1.0"
.IP "# retries = 5"
.IP "# concurrency = 25"
.IP "# dump_json = no"
.IP "# container_report = yes"
.IP "# object_report = yes"
.RE
.PD 

 
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-dispersion utilities and
also Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/admin_guide.html#cluster-health
and 
.BI http://swift.openstack.org


.SH "SEE ALSO"
.BR swift-dispersion-report(1),
.BR swift-dispersion-populate(1)

swift-2.7.1/doc/manpages/swift-orphans.10000664000567000056710000000361213024044354021245 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-orphans 1 "3/15/2012" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B swift-orphans
\- Openstack-swift orphans tool

.SH SYNOPSIS
.LP
.B swift-orphans
[-h|--help] [-a|--age] [-k|--kill] [-w|--wide] [-r|--run-dir]


.SH DESCRIPTION
.PP
Lists and optionally kills orphaned Swift processes. This is done by scanning
/var/run/swift or the directory specified to the \-r switch for .pid files and
listing any processes that look like Swift processes but aren't associated with
the pids in those .pid files. Any Swift processes running with the 'once'
parameter are ignored, as those are usually for full-speed audit scans and
such.

Example (sends SIGTERM to all orphaned Swift processes older than two hours):
swift-orphans \-a 2 \-k TERM

The options are as follows:

.RS 4
.PD 0
.IP "-a HOURS"
.IP "--age=HOURS"
.RS 4
.IP "Look for processes at least HOURS old; default: 24"
.RE
.IP "-k SIGNAL"
.IP "--kill=SIGNAL"
.RS 4
.IP "Send SIGNAL to matched processes; default: just list process information"
.RE
.IP "-w"
.IP "--wide"
.RS 4
.IP "Don't clip the listing at 80 characters"
.RE
.PD
.RE


.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at
.BI http://swift.openstack.org/index.html

swift-2.7.1/doc/manpages/swift-init.10000664000567000056710000000773113024044354020544 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-init 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-init
\- Openstack-swift swift-init tool

.SH SYNOPSIS
.LP
.B swift-init
<server> [<server> ...] <command> [options]
 
.SH DESCRIPTION 
.PP
The swift-init tool can be used to initialize all swift daemons available as part of
openstack-swift. Instead of calling individual init scripts for each 
swift daemon, one can just use swift-init. With swift-init you can initialize 
just one swift service, such as the "proxy", or a combination of them. The tool also 
allows one to use the keywords such as "all", "main" and "rest" for the <server> argument.


\fBServers:\fR

.PD 0
.RS 4
.IP "\fIproxy\fR" "4"
.IP "    - Initializes the swift proxy daemon" 
.RE

.RS 4
.IP "\fIobject\fR, \fIobject-replicator\fR, \fIobject-auditor\fR, \fIobject-updater\fR"
.IP "    - Initializes the swift object daemons above"
.RE

.RS 4
.IP "\fIcontainer\fR, \fIcontainer-update\fR, \fIcontainer-replicator\fR, \fIcontainer-auditor\fR"
.IP "    - Initializes the swift container daemons above"
.RE

.RS 4
.IP "\fIaccount\fR, \fIaccount-auditor\fR, \fIaccount-reaper\fR, \fIaccount-replicator\fR"
.IP "    - Initializes the swift account daemons above"
.RE

.RS 4
.IP "\fIall\fR"
.IP "    - Initializes \fBall\fR the swift daemons"
.RE

.RS 4
.IP "\fImain\fR"
.IP "    - Initializes all the \fBmain\fR swift daemons"
.IP "      (proxy, container, account and object servers)"
.RE

.RS 4
.IP "\fIrest\fR"
.IP "    - Initializes all the other \fBswift background daemons\fR"
.IP "      (updater, replicator, auditor, reaper, etc)"
.RE
.PD 


\fBCommands:\fR

.RS 4
.PD 0
.IP "\fIforce-reload\fR: \t\t alias for reload"
.IP "\fIno-daemon\fR: \t\t start a server interactively"
.IP "\fIno-wait\fR: \t\t\t spawn server and return immediately"
.IP "\fIonce\fR: \t\t\t start server and run one pass on supporting daemons"
.IP "\fIreload\fR: \t\t\t graceful shutdown then restart on supporting servers"
.IP "\fIrestart\fR: \t\t\t stops then restarts server"
.IP "\fIshutdown\fR: \t\t allow current requests to finish on supporting servers"
.IP "\fIstart\fR: \t\t\t starts a server"
.IP "\fIstatus\fR: \t\t\t display status of tracked pids for server"
.IP "\fIstop\fR: \t\t\t stops a server"
.PD 
.RE



\fBOptions:\fR
.RS 4
.PD 0 
.IP "-h, --help \t\t\t show this help message and exit"
.IP "-v, --verbose \t\t\t display verbose output"
.IP "-w, --no-wait \t\t\t won't wait for server to start before returning
.IP "-o, --once \t\t\t only run one pass of daemon
.IP "-n, --no-daemon \t\t start server interactively
.IP "-g, --graceful \t\t send SIGHUP to supporting servers
.IP "-c N, --config-num=N \t send command to the Nth server only
.IP "-k N, --kill-wait=N \t wait N seconds for processes to die (default 15)
.IP "-r RUN_DIR, --run-dir=RUN_DIR directory where the pids will be stored (default /var/run/swift)
.IP "--strict return non-zero status code if some config is missing. Default mode if server is explicitly named."
.IP "--non-strict return zero status code even if some config is missing. Default mode if server is one of aliases `all`, `main` or `rest`."
.IP "--kill-after-timeout kill daemon and all children after kill-wait period."
.PD 
.RE
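.PP
For example, assuming the corresponding configuration files exist under /etc/swift,
typical invocations look like the following:
.PD 0
.RS 4
.IP "swift-init proxy start"
.IP "swift-init main restart"
.IP "swift-init object-replicator once"
.IP "swift-init all stop"
.RE
.PD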



.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at 
.BI http://swift.openstack.org/index.html



swift-2.7.1/doc/manpages/swift-container-auditor.10000664000567000056710000000320113024044354023214 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-container-auditor 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-container-auditor 
\- Openstack-swift container auditor

.SH SYNOPSIS
.LP
.B swift-container-auditor 
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP

The container auditor crawls the local container system checking the integrity of the container
databases. If corruption is found (in the case of bit rot, for example), the database is
quarantined, and replication will replace the bad file with a copy from another replica.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE
.PD
.RE
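.PP
For example, a single audit pass can be run in the foreground, assuming the
conventional configuration path /etc/swift/container-server.conf:
.PD 0
.RS 4
.IP "swift-container-auditor /etc/swift/container-server.conf --once --verbose"
.RE
.PD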
     	
    
.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-container-auditor 
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR container-server.conf(5)
swift-2.7.1/doc/manpages/swift-container-server.10000664000567000056710000000314113024044354023056 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-container-server 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-container-server
\- Openstack-swift container server

.SH SYNOPSIS
.LP
.B swift-container-server
[CONFIG] [-h|--help] [-v|--verbose]

.SH DESCRIPTION 
.PP
The Container Server's primary job is to handle listings of objects. It doesn't know 
where those objects are, just what objects are in a specific container. The listings 
are stored as sqlite database files, and replicated across the cluster similar to how 
objects are. Statistics are also tracked that include the total number of objects, and 
total storage usage for that container.
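.PP
For example, the server can be started directly against a configuration file; the
path below is only the conventional one and may differ on your system:
.PD 0
.RS 4
.IP "swift-container-server /etc/swift/container-server.conf --verbose"
.RE
.PD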

.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-container-server
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html
and 
.BI http://docs.openstack.org

.LP 

.SH "SEE ALSO"
.BR container-server.conf(5)
swift-2.7.1/doc/manpages/swift-account-info.10000664000567000056710000000317613024044354022165 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Madhuri Kumari 
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-account-info 1 "3/22/2014" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-account-info
\- Openstack-swift account-info tool

.SH SYNOPSIS
.LP
.B swift-account-info
[ACCOUNT_DB_FILE] [SWIFT_DIR] 

.SH DESCRIPTION 
.PP
This is a very simple swift tool that allows a Swift operator to retrieve
information about an account that is located on a storage node. One calls
the tool with a given account database file as it is stored on the storage node system.
It will then return several pieces of information about that account, such as:

.PD 0
.IP	"- Account"
.IP  "- Account hash "
.IP  "- Created timestamp "
.IP  "- Put timestamp "
.IP  "- Delete timestamp "
.IP  "- Container Count "
.IP  "- Object count "
.IP  "- Bytes used "
.IP  "- Chexor "
.IP  "- ID"
.IP  "- User Metadata "
.IP  "- Ring Location"
.PD 
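.PP
For example, where /path/to/account.db is a placeholder for an account database
file found on the storage node and /etc/swift is the usual configuration directory:
.PD 0
.RS 4
.IP "swift-account-info /path/to/account.db /etc/swift"
.RE
.PD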
    
.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at 
.BI http://swift.openstack.org/index.html

.SH "SEE ALSO"

.BR swift-container-info(1),
.BR swift-get-nodes(1),
.BR swift-object-info(1)
swift-2.7.1/doc/manpages/swift-account-reaper.10000664000567000056710000000360213024044354022502 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-account-reaper 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-account-reaper
\- Openstack-swift account reaper

.SH SYNOPSIS
.LP
.B swift-account-reaper 
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP
Removes data from accounts with status=DELETED. These are accounts that the
reseller has asked to be removed via the services remove_storage_account
XMLRPC call.
.PP
The account is not deleted immediately by the services call, but instead
the account is simply marked for deletion by setting the status column in
the account_stat table of the account database. This account reaper scans
for such accounts and removes the data in the background. The background
deletion process will occur on the primary account server for the account.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE
.PD
.RE
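.PP
For example, a single reaper pass can be run in the foreground, assuming the
conventional configuration path /etc/swift/account-server.conf:
.PD 0
.RS 4
.IP "swift-account-reaper /etc/swift/account-server.conf --once --verbose"
.RE
.PD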

    
.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-account-reaper 
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR account-server.conf(5)
swift-2.7.1/doc/manpages/swift-object-server.10000664000567000056710000000401113024044354022337 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-object-server 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-object-server
\- Openstack-swift object server.

.SH SYNOPSIS
.LP
.B swift-object-server
[CONFIG] [-h|--help] [-v|--verbose]

.SH DESCRIPTION 
.PP
The Object Server is a very simple blob storage server that can store, retrieve
and delete objects stored on local devices. Objects are stored as binary files 
on the filesystem with metadata stored in the file's extended attributes (xattrs).
This requires that the underlying filesystem choice for object servers support 
xattrs on files. Some filesystems, like ext3, have xattrs turned off by default. 
Each object is stored using a path derived from the object name's hash and the operation's
timestamp. Last write always wins, and ensures that the latest object version will be
served. A deletion is also treated as a version of the file (a 0 byte file ending with
".ts", which stands for tombstone). This ensures that deleted files are replicated 
correctly and older versions don't magically reappear due to failure scenarios.
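.PP
For example, the server can be started directly against a configuration file; the
path below is only the conventional one and may differ on your system:
.PD 0
.RS 4
.IP "swift-object-server /etc/swift/object-server.conf --verbose"
.RE
.PD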

.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-object-server
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html
and 
.BI http://docs.openstack.org


.SH "SEE ALSO"
.BR object-server.conf(5)
swift-2.7.1/doc/manpages/container-server.conf.50000664000567000056710000003362413024044354022665 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH container-server.conf 5 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B container-server.conf
\- configuration file for the openstack-swift container server



.SH SYNOPSIS
.LP
.B container-server.conf



.SH DESCRIPTION
.PP
This is the configuration file used by the container server and other container
background services, such as the replicator, updater, auditor and sync.

The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section will contain a
certain number of key/value parameters which are described later.

Any line that begins with a '#' symbol is ignored.

You can find more information about python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR



.SH GLOBAL SECTION
.PD 1
.RS 0
This is indicated by section named [DEFAULT]. Below are the parameters that
are acceptable within this section.

.IP "\fBbind_ip\fR"
IP address the container server should bind to. The default is 0.0.0.0 which will make
it bind to all available addresses.
.IP "\fBbind_port\fR"
TCP port the container server should bind to. The default is 6001.
.IP "\fBbind_timeout\fR"
Timeout to bind socket. The default is 30.
.IP \fBbacklog\fR
TCP backlog.  Maximum number of allowed pending connections. The default value is 4096.
.IP \fBworkers\fR
The number of pre-forked processes that will accept connections.  Zero means
no fork.  The default is auto which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fallback to one.  It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently.  The default is 1024.
.IP \fBallowed_sync_hosts\fR
This is a comma separated list of hosts allowed in the X-Container-Sync-To
field for containers. This is the old style of using container sync. It is
strongly recommended to use the new style of a separate
container-sync-realms.conf -- see container-sync-realms.conf-sample.
The default is 127.0.0.1.
.IP \fBuser\fR
The system user that the container server will run as. The default is swift.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBdevices\fR
Parent directory of where devices are mounted. The default is /srv/node.
.IP \fBmount_check\fR
Whether or not to check if the devices are mounted to prevent accidentally writing to
the root device. The default is set to true.
.IP \fBdisable_fallocate\fR
Disable pre-allocate disk space for a file. The default is false.
.IP \fBlog_name\fR
Label used when logging. The default is swift.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBlog_max_line_length\fR
Caps the length of log lines to the value given; no limit if
set to 0, the default.
.IP \fBlog_custom_handlers\fR
Comma separated list of functions to call to setup custom log handlers.
functions get passed: conf, name, log_to_console, log_route, fmt, logger,
adapted_logger. The default is empty.
.IP \fBlog_udp_host\fR
If set, log_udp_host will override log_address.
.IP "\fBlog_udp_port\fR
UDP log port, the default is 514.
.IP \fBlog_statsd_host\fR
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to an IPv4 and IPv6 address, the IPv4
address will be used.
.IP \fBlog_statsd_port\fR
The default is 8125.
.IP \fBlog_statsd_default_sample_rate\fR
The default is 1.
.IP \fBlog_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.IP \fBdb_preallocation\fR
If you don't mind the extra disk space usage in overhead, you can turn this
on to preallocate disk space with SQLite databases to decrease fragmentation.
The default is false.
.IP \fBeventlet_debug\fR
Debug mode for eventlet library. The default is false.
.IP \fBfallocate_reserve\fR
You can set fallocate_reserve to the number of bytes you'd like fallocate to
reserve, whether there is space for the given file size or not. The default is 0.
.RE
.PD



.SH PIPELINE SECTION
.PD 1
.RS 0
This is indicated by section name [pipeline:main]. Below are the parameters that
are acceptable within this section.

.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters
ended by an application.  The normal pipeline is "healthcheck
recon container-server".
.RE
.PD



.SH APP SECTION
.PD 1
.RS 0
This is indicated by section name [app:container-server]. Below are the parameters
that are acceptable within this section.
.IP "\fBuse\fR"
Entry point for paste.deploy for the container server. This is the reference to the installed python egg.
This is normally \fBegg:swift#container\fR.
.IP "\fBset log_name\fR
Label used when logging. The default is container-server.
.IP "\fBset log_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR
Logging level. The default is INFO.
.IP "\fBset log_requests\fR
Enables request logging. The default is True.
.IP "\fBset log_address\fR
Logging address. The default is /dev/log.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBallow_versions\fR
The default is false.
.IP \fBauto_create_account_prefix\fR
The default is '.'.
.IP \fBreplication_server\fR
Configure parameter for creating specific server.
To handle all verbs, including replication verbs, do not specify
"replication_server" (this is the default). To only handle replication,
set to a True value (e.g. "True" or "1"). To handle only non-replication
verbs, set to "False". Unless you have a separate replication network, you
should not specify any value for "replication_server".
.RE
.PD



.SH FILTER SECTION
.PD 1
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and respective acceptable parameters.
.IP "\fB[filter:healthcheck]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the healthcheck middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#healthcheck\fR.
.IP "\fBdisable_path\fR"
An optional filesystem path which, if present, will cause the healthcheck
URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
.RE

.RS 0
.IP "\fB[filter:recon]\fR"
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the recon middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#recon\fR.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write access. The default is /var/cache/swift.
.RE
.PD

.RS 0
.IP "\fB[filter:xprofile]\fR"
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the xprofile middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#xprofile\fR.
.IP "\fBprofile_module\fR"
This option enables you to switch profilers, which should inherit from the python
standard profiler. Currently the supported values include 'cProfile', 'eventlet.green.profile', etc.
.IP "\fBlog_filename_prefix\fR"
This prefix will be used to combine process ID and timestamp to name the
profile data file.  Make sure the executing user has permission to write
into this path (missing path segments will be created, if necessary).
If you enable profiling in more than one type of daemon, you must override
it with a unique value for each; the default is /var/log/swift/profile/account.profile.
.IP "\fBdump_interval\fR"
The profile data will be dumped to local disk based on above naming rule
in this interval. The default is 5.0.
.IP "\fBdump_timestamp\fR"
Be careful: this option will enable the profiler to dump data into files named with a
timestamp, which means lots of files can pile up in the directory.
The default is false.
.IP "\fBpath\fR"
This is the path of the URL to access the mini web UI. The default is __profile__.
.IP "\fBflush_at_shutdown\fR"
Clear the data when the wsgi server shutdown. The default is false.
.IP "\fBunwind\fR"
Unwind the iterator of applications. Default is false.
.RE
.PD


.SH ADDITIONAL SECTIONS
.PD 1
.RS 0
The following sections are used by other swift-container services, such as replicator,
updater, auditor and sync.
.IP "\fB[container-replicator]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is container-replicator.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBper_diff\fR
Maximum number of database rows that will be sync'd in a single HTTP replication request. The default is 1000.
.IP \fBmax_diffs\fR
This caps how long the replicator will spend trying to sync a given database per pass so the other databases don't get starved. The default is 100.
.IP \fBconcurrency\fR
Number of replication workers to spawn. The default is 8.
.IP "\fBrun_pause [deprecated]\fR"
Time in seconds to wait between replication passes. The default is 30.
.IP \fBinterval\fR
Replaces run_pause with the more standard "interval", which means the replicator won't pause unless it takes less than the interval set. The default is 30.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBreclaim_age\fR
Time elapsed in seconds before a container can be reclaimed. The default is
604800 seconds.
.IP \fBrsync_compress\fR
Allow rsync to compress data which is transmitted to destination node
during sync. However, this is applicable only when destination node is in
a different region than the local one. The default is false.
.IP \fBrsync_module\fR
Format of the rsync module where the replicator will send data. See
etc/rsyncd.conf-sample for some usage examples.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.RE


.RS 0
.IP "\fB[container-updater]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is container-updater.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBinterval\fR
Minimum time for a pass to take. The default is 300 seconds.
.IP \fBconcurrency\fR
Number of updater workers to spawn. The default is 4.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBslowdown\fR
Time in seconds to sleep between containers. The default is 0.01 seconds.
.IP \fBaccount_suppression_time\fR
Seconds to suppress updating an account that has generated an error. The default is 60 seconds.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.RE
.PD


.RS 0
.IP "\fB[container-auditor]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is container-auditor.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBinterval\fR
Will audit, at most, 1 container per device per interval. The default is 1800 seconds.
.IP \fBcontainers_per_second\fR
Maximum containers audited per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 200.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.RE



.RS 0
.IP "\fB[container-sync]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is container-sync.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBsync_proxy\fR
If you need to use an HTTP Proxy, set it here; defaults to no proxy.
.IP \fBinterval\fR
Will audit, at most, each container once per interval. The default is 300 seconds.
.IP \fBcontainer_time\fR
Maximum amount of time to spend syncing each container per pass. The default is 60 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 5 seconds.
.IP \fBrequest_tries\fR
Server errors from requests will be retried by default. The default is 3.
.IP \fBinternal_client_conf_path\fR
Internal client config file path.
.RE
.PD
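.SH SAMPLE
.PP
The following is a minimal illustrative configuration rather than a recommended
one; every value shown is simply a default documented above, and the background
service sections are left empty so they inherit those defaults.
.PD 0
.RS 0
.IP "[DEFAULT]"
.IP "bind_ip = 0.0.0.0"
.IP "bind_port = 6001"
.IP "workers = auto"
.IP ""
.IP "[pipeline:main]"
.IP "pipeline = healthcheck recon container-server"
.IP ""
.IP "[app:container-server]"
.IP "use = egg:swift#container"
.IP ""
.IP "[filter:healthcheck]"
.IP "use = egg:swift#healthcheck"
.IP ""
.IP "[filter:recon]"
.IP "use = egg:swift#recon"
.IP "recon_cache_path = /var/cache/swift"
.IP ""
.IP "[container-replicator]"
.IP "[container-updater]"
.IP "[container-auditor]"
.IP "[container-sync]"
.RE
.PD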




.SH DOCUMENTATION
.LP
More in depth documentation about the swift-container-server and
also Openstack-Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html
and
.BI http://swift.openstack.org


.SH "SEE ALSO"
.BR swift-container-server(1)
swift-2.7.1/doc/manpages/swift-container-info.10000664000567000056710000000355413024044354022513 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Madhuri Kumari 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-container-info 1 "3/20/2013" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-container-info
\- Openstack-swift container-info tool

.SH SYNOPSIS
.LP
.B swift-container-info
[CONTAINER_DB_FILE] [SWIFT_DIR] 

.SH DESCRIPTION 
.PP
This is a very simple swift tool that allows a Swift operator to retrieve
information about a container that is located on a storage node.
One calls the tool with a given container database file as
it is stored on the storage node system.
It will then return several pieces of information about that container, such as:

.PD 0
.IP	"- Account it belongs to"
.IP  "- Container "
.IP  "- Created timestamp "
.IP  "- Put timestamp "
.IP  "- Delete timestamp "
.IP  "- Object count "
.IP  "- Bytes used "
.IP  "- Reported put timestamp "
.IP  "- Reported delete timestamp "
.IP  "- Reported object count "
.IP  "- Reported bytes used "
.IP  "- Hash "
.IP  "- ID "
.IP  "- User metadata "
.IP  "- X-Container-Sync-Point 1 " 
.IP  "- X-Container-Sync-Point 2 " 
.IP  "- Location on the ring "
.PD 
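.PP
For example, where /path/to/container.db is a placeholder for a container database
file found on the storage node and /etc/swift is the usual configuration directory:
.PD 0
.RS 4
.IP "swift-container-info /path/to/container.db /etc/swift"
.RE
.PD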
    
.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at 
.BI http://swift.openstack.org/index.html

.SH "SEE ALSO"
.BR swift-get-nodes(1),
.BR swift-object-info(1)
swift-2.7.1/doc/manpages/swift-object-expirer.10000664000567000056710000000410213024044354022510 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-object-expirer 1 "3/15/2012" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-object-expirer
\- Openstack-swift object expirer

.SH SYNOPSIS
.LP
.B swift-object-expirer 
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP
The swift-object-expirer offers scheduled deletion of objects. The Swift client would 
use the X-Delete-At or X-Delete-After headers during an object PUT or POST and the 
cluster would automatically quit serving that object at the specified time and would 
shortly thereafter remove the object from the system.

The X-Delete-At header takes a Unix Epoch timestamp, in integer form; for example: 
1317070737 represents Mon Sep 26 20:58:57 2011 UTC.

The X-Delete-After header takes an integer number of seconds. The proxy server 
that receives the request will convert this header into an X-Delete-At header 
using its current time plus the value given.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE
.PD
.RE
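.PP
For example, a client could schedule an object for removal one hour after the
request with a POST such as the following, where the proxy endpoint, account,
container, object and token are all placeholders:
.PD 0
.RS 4
.IP "curl -X POST -H 'X-Auth-Token: <token>' -H 'X-Delete-After: 3600' http://<proxy>:8080/v1/AUTH_<account>/<container>/<object>"
.RE
.PD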
    
   
.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-object-expirer
can be found at 
.BI http://swift.openstack.org/overview_expiring_objects.html
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR object-expirer.conf(5)

swift-2.7.1/doc/manpages/swift-proxy-server.10000664000567000056710000000344213024044354022261 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-proxy-server 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-proxy-server 
\- Openstack-swift proxy server.

.SH SYNOPSIS
.LP
.B swift-proxy-server
[CONFIG] [-h|--help] [-v|--verbose]

.SH DESCRIPTION 
.PP
The Swift Proxy Server is responsible for tying together the rest of the Swift architecture. 
For each request, it will look up the location of the account, container, or object in the 
ring and route the request accordingly. The public API is also exposed through the Proxy 
Server. A large number of failures are also handled in the Proxy Server. For example, 
if a server is unavailable for an object PUT, it will ask the ring for a handoff server
and route there instead. When objects are streamed to or from an object server, they are
streamed directly through the proxy server to or from the user the proxy server does 
not spool them.

.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-proxy-server
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR proxy-server.conf(5)
swift-2.7.1/doc/manpages/swift-recon.10000664000567000056710000001002313024044354020673 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-recon 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B swift-recon
\- Openstack-swift recon middleware cli tool

.SH SYNOPSIS
.LP
.B swift-recon
\ <server_type> [-v] [--suppress] [-a] [-r] [-u] [-d] [-l] [-T] [--md5] [--auditor] [--updater] [--expirer] [--sockstat]

.SH DESCRIPTION
.PP
The swift-recon cli tool can be used to retrieve various metrics and telemetry information about
a cluster that has been collected by the swift-recon middleware.

In order to make use of the swift-recon middleware, update the object-server.conf file and
enable the recon middleware by adding a pipeline entry and setting its option(s). You can view
more information in the example section below.


.SH OPTIONS
.RS 0
.PD 1
.IP "\fB\fR"
account|container|object - Defaults to object server.
.IP "\fB-h, --help\fR"
show this help message and exit
.IP "\fB-v, --verbose\fR"
Print verbose information
.IP "\fB--suppress\fR"
Suppress most connection related errors
.IP "\fB-a, --async\fR"
Get async stats
.IP "\fB--auditor\fR"
Get auditor stats
.IP "\fB--updater\fR"
Get updater stats
.IP "\fB--expirer\fR"
Get expirer stats
.IP "\fB-r, --replication\fR"
Get replication stats
.IP "\fB-u, --unmounted\fR"
Check cluster for unmounted devices
.IP "\fB-d, --diskusage\fR"
Get disk usage stats
.IP "\fB--top=COUNT\fR"
Also show the top COUNT entries in rank order
.IP "\fB--lowest=COUNT\fR"
Also show the lowest COUNT entries in rank order
.IP "\fB--human-readable\fR"
Use human readable suffix for disk usage stats
.IP "\fB-l, --loadstats\fR"
Get cluster load average stats
.IP "\fB-q, --quarantined\fR"
Get cluster quarantine stats
.IP "\fB--validate-servers\fR"
Validate servers on the ring
.IP "\fB--md5\fR"
Get md5sum of servers ring and compare to local copy
.IP "\fB--sockstat\fR"
Get cluster socket usage stats
.IP "\fB--driveaudit\fR"
Get drive audit error stats
.IP "\fB-T, --time\fR"
Check time synchronization
.IP "\fB--all\fR"
Perform all checks. Equivalent to \-arudlqT
\-\-md5 \-\-sockstat \-\-auditor \-\-updater \-\-expirer
\-\-driveaudit \-\-validate\-servers
.IP "\fB--region=REGION\fR"
Only query servers in specified region
.IP "\fB-z ZONE, --zone=ZONE\fR"
Only query servers in specified zone
.IP "\fB-t SECONDS, --timeout=SECONDS\fR"
Time to wait for a response from a server
.IP "\fB--swiftdir=PATH\fR"
Default = /etc/swift
.PD
.RE



.SH EXAMPLE
.LP
.PD 0
.RS 0
.IP "ubuntu:~$ swift-recon -q --zone 3"
.IP "================================================================="
.IP "[2011-10-18 19:36:00] Checking quarantine dirs on 1 hosts... "
.IP "[Quarantined objects] low: 4, high: 4, avg: 4, total: 4 "
.IP "[Quarantined accounts] low: 0, high: 0, avg: 0, total: 0 "
.IP "[Quarantined containers] low: 0, high: 0, avg: 0, total: 0 "
.IP "================================================================="
.RE

.RS 0
Finally, if you also wish to track asynchronous pendings, you will need to set up a
cronjob to run the swift-recon-cron script periodically:

.IP "*/5 * * * * swift /usr/bin/swift-recon-cron /etc/swift/object-server.conf"
.RE
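.RS 0
As a sketch of the middleware setup mentioned above, the object-server.conf pipeline
needs a recon entry; the values below mirror the defaults documented in the server
configuration manual pages and may differ in your deployment:

.IP "[pipeline:main]"
.IP "pipeline = healthcheck recon object-server"
.IP ""
.IP "[filter:recon]"
.IP "use = egg:swift#recon"
.IP "recon_cache_path = /var/cache/swift"
.RE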




.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at
.BI http://swift.openstack.org/index.html
Also more specific documentation about swift-recon can be found at
.BI http://swift.openstack.org/admin_guide.html#cluster-telemetry-and-monitoring



.SH "SEE ALSO"
.BR object-server.conf(5),


swift-2.7.1/doc/manpages/swift-container-sync.10000664000567000056710000000361013024044354022525 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-container-sync 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-container-sync
\- Openstack-swift container sync

.SH SYNOPSIS
.LP
.B swift-container-sync
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP
Swift has a feature where all the contents of a container can be mirrored to
another container through background synchronization. Swift cluster operators
configure their cluster to allow/accept sync requests to/from other clusters,
and the user specifies where to sync their container to along with a secret 
synchronization key.
.PP
The swift-container-sync does the job of sending updates to the remote container.
This is done by scanning the local devices for container databases and checking
for x-container-sync-to and x-container-sync-key metadata values. If they exist,
newer rows since the last sync will trigger PUTs or DELETEs to the other container.
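.PP
For example, a user might point a container at a remote one with the
python-swiftclient command line tool; the URL and key below are placeholders and
the container sync overview linked below describes the authoritative workflow:
.PD 0
.RS 4
.IP "swift post -t 'https://remote-proxy/v1/AUTH_remote/container2' -k 'secret' container1"
.RE
.PD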

.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-container-sync
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/overview_container_sync.html
and 
.BI http://docs.openstack.org

.LP 

.SH "SEE ALSO"
.BR container-server.conf(5)
swift-2.7.1/doc/manpages/proxy-server.conf.50000664000567000056710000012102613024044354022056 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH proxy-server.conf 5 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B proxy-server.conf
\- configuration file for the openstack-swift proxy server



.SH SYNOPSIS
.LP
.B proxy-server.conf



.SH DESCRIPTION
.PP
This is the configuration file used by the proxy server and other proxy middlewares.

The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section will contain a
certain number of key/value parameters which are described later.

Any line that begins with a '#' symbol is ignored.

You can find more information about python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR



.SH GLOBAL SECTION
.PD 1
.RS 0
This is indicated by section named [DEFAULT]. Below are the parameters that
are acceptable within this section.

.IP "\fBbind_ip\fR"
IP address the proxy server should bind to. The default is 0.0.0.0 which will make
it bind to all available addresses.
.IP "\fBbind_port\fR"
TCP port the proxy server should bind to. The default is 80.
.IP "\fBbind_timeout\fR"
Timeout to bind socket. The default is 30.
.IP \fBbacklog\fR
TCP backlog.  Maximum number of allowed pending connections. The default value is 4096.
.IP \fBadmin_key\fR
Key to use for admin calls that are HMAC signed.  Default is empty,
which will disable admin calls to /info.
.IP \fBdisallowed_sections\fR
Allows the ability to withhold sections from showing up in the public calls
to /info.  You can withhold subsections by separating the dict level with a
".".  The following would cause the sections 'container_quotas' and 'tempurl'
to not be listed, and the key max_failed_deletes would be removed from
bulk_delete.  Default value is 'swift.valid_api_versions' which allows all
registered features to be listed via HTTP GET /info except
swift.valid_api_versions information
.IP \fBworkers\fR
The number of pre-forked processes that will accept connections.  Zero means
no fork.  The default is auto which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fallback to one.  It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently.  The default is 1024.
.IP \fBuser\fR
The system user that the proxy server will run as. The default is swift.
.IP \fBexpose_info\fR
Enables exposing configuration settings via HTTP GET /info. The default is true.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBcert_file\fR
Location of the SSL certificate file. The default path is /etc/swift/proxy.crt. This is
disabled by default.
.IP \fBkey_file\fR
Location of the SSL certificate key file. The default path is /etc/swift/proxy.key. This is
disabled by default.
.IP \fBexpiring_objects_container_divisor\fR
The default is 86400.
.IP \fBexpiring_objects_account_name\fR
The default is 'expiring_objects'.
.IP \fBlog_name\fR
Label used when logging. The default is swift.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBlog_max_line_length\fR
To cap the length of log lines to the value given. No limit if set to 0, the default.
.IP \fBlog_headers\fR
The default is false.
.IP \fBlog_custom_handlers\fR
Comma separated list of functions to call to setup custom log handlers.
functions get passed: conf, name, log_to_console, log_route, fmt, logger,
adapted_logger. The default is empty.
.IP \fBlog_udp_host\fR
If set, log_udp_host will override log_address.
.IP "\fBlog_udp_port\fR
UDP log port, the default is 514.
.IP \fBlog_statsd_host\fR
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to an IPv4 and IPv6 address, the IPv4
address will be used.
.IP \fBlog_statsd_port\fR
The default is 8125.
.IP \fBlog_statsd_default_sample_rate\fR
The default is 1.
.IP \fBlog_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.IP \fBclient_timeout\fR
Time to wait while receiving each chunk of data from a client or another
backend node. The default is 60.
.IP \fBeventlet_debug\fR
Debug mode for eventlet library. The default is false.
.IP \fBtrans_id_suffix\fR
This optional suffix (default is empty) is appended to the swift transaction
id and allows one to easily figure out which cluster an X-Trans-Id belongs to.
This is very useful when one is managing more than one swift cluster.
.IP \fBcors_allow_origin\fR
Use a comma separated list of full url (http://foo.bar:1234,https://foo.bar)
.IP \fBstrict_cors_mode\fR
The default is true.
.RE
.PD



.SH PIPELINE SECTION
.PD 1
.RS 0
This is indicated by section name [pipeline:main]. Below are the parameters that
are acceptable within this section.

.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters
ended by an application. The normal pipeline is "catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server".

Note: The double proxy-logging in the pipeline is not a mistake. The
left-most proxy-logging is there to log requests that were handled in
middleware and never made it through to the right-most middleware (and
proxy server). Double logging is prevented for normal requests. See
proxy-logging docs.
.RE
.PD



.SH FILTER SECTION
.PD 1
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and respective acceptable parameters.
.IP "\fB[filter:healthcheck]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the healthcheck middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#healthcheck\fR.
.IP "\fBdisable_path\fR"
An optional filesystem path which, if present, will cause the healthcheck
URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
.RE
.PD


.RS 0
.IP "\fB[filter:tempauth]\fR"
.RE
.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the tempauth middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#tempauth\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is tempauth.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR "
Logging level. The default is INFO.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP "\fBset log_headers\fR "
Enables the ability to log request headers. The default is False.
.IP \fBreseller_prefix\fR
The reseller prefix will verify a token begins with this prefix before even
attempting to validate it. Also, with authorization, only Swift storage accounts
with this prefix will be authorized by this middleware. Useful if multiple auth
systems are in use for one Swift cluster. The default is AUTH.
.IP \fBauth_prefix\fR
The auth prefix will cause requests beginning with this prefix to be routed
to the auth subsystem, for granting tokens, etc. The default is /auth/.
.IP \fBrequire_group\fR
The require_group parameter names a group that must be presented by
either X-Auth-Token or X-Service-Token. Usually this parameter is
used only with multiple reseller prefixes (e.g., SERVICE_require_group=blah).
By default, no group is needed. Do not use .admin.
.IP \fBtoken_life\fR
This is the time in seconds before the token expires. The default is 86400.
.IP \fBallow_overrides\fR
This allows middleware higher in the WSGI pipeline to override auth
processing, useful for middleware such as tempurl and formpost. If you know
you're not going to use such middleware and you want a bit of extra security,
you can set this to false. The default is true.
.IP \fBstorage_url_scheme\fR
This specifies what scheme to return with storage urls:
http, https, or default (chooses based on what the server is running as)
This can be useful with an SSL load balancer in front of a non-SSL server.
.IP \fBuser_<account>_<user>\fR
Lastly, you need to list all the accounts/users you want here. The format is:
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
or if you want underscores in <account> or <user>, you can base64 encode them
(with no equal signs) and use this format:
user64_<account_b64>_<user_b64> = <key> [group] [group] [...] [storage_url]

There are special groups of: \fI.reseller_admin\fR who can do anything to any account for this auth
and also \fI.admin\fR who can do anything within the account.

If neither of these groups are specified, the user can only access containers that
have been explicitly allowed for them by a \fI.admin\fR or \fI.reseller_admin\fR.
The trailing optional storage_url allows you to specify an alternate url to hand
back to the user upon authentication. If not specified, this defaults to
\fIhttp[s]://<ip>:<port>/v1/<reseller_prefix>_<account>\fR where http or https depends
on whether cert_file is specified in the [DEFAULT] section, <ip> and <port> are based
on the [DEFAULT] section's bind_ip and bind_port (falling back to 127.0.0.1 and 8080),
<reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.

Here are example entries, required for running the tests:
.RE

.PD 0
.RS 10
.IP "user_admin_admin = admin .admin .reseller_admin"
.IP "user_test_tester = testing .admin"
.IP "user_test2_tester2 = testing2 .admin"
.IP "user_test_tester3 = testing3"
.RE
.PD

.RS 0
.IP "\fB[filter:authtoken]\fR"
.RE

To enable Keystone authentication you need to have the auth token
middleware first to be configured. Here is an example below, please
refer to the keystone's documentation for details about the
different settings.

You'll need to have as well the keystoneauth middleware enabled
and have it in your main pipeline so instead of having tempauth in
there you can change it to: authtoken keystoneauth

.PD 0
.RS 10
.IP "paste.filter_factory = keystonemiddleware.auth_token:filter_factory"
.IP "auth_uri = http://keystonehost:5000"
.IP "auth_url = http://keystonehost:35357"
.IP "auth_plugin = password"
.IP "project_domain_id = default"
.IP "user_domain_id = default"
.IP "project_name = service"
.IP "username = swift"
.IP "password = password"
.IP ""
.IP "# delay_auth_decision defaults to False, but leaving it as false will"
.IP "# prevent other auth systems, staticweb, tempurl, formpost, and ACLs from"
.IP "# working. This value must be explicitly set to True."
.IP "delay_auth_decision = False"
.IP
.IP "cache = swift.cache"
.IP "include_service_catalog = False"
.RE
.PD


.RS 0
.IP "\fB[filter:keystoneauth]\fR"
.RE

Keystone authentication middleware.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the keystoneauth middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#keystoneauth\fR.
.IP \fBreseller_prefix\fR
The reseller_prefix option lists account namespaces that this middleware is
responsible for. The prefix is placed before the Keystone project id.
For example, for project 12345678, and prefix AUTH, the account is
named AUTH_12345678 (i.e., path is /v1/AUTH_12345678/...).
Several prefixes are allowed by specifying a comma-separated list
as in: "reseller_prefix = AUTH, SERVICE". The empty string indicates a
single blank/empty prefix. If an empty prefix is required in a list of
prefixes, a value of '' (two single quote characters) indicates a
blank/empty prefix. Except for the blank/empty prefix, an underscore ('_')
character is appended to the value unless already present.
.IP \fBoperator_roles\fR
The user must have at least one role named by operator_roles on a
project in order to create, delete and modify containers and objects
and to set and read privileged headers such as ACLs.
If there are several reseller prefix items, you can prefix the
parameter so it applies only to those accounts (for example
the parameter SERVICE_operator_roles applies to the /v1/SERVICE_
path). If you omit the prefix, the option applies to all reseller
prefix items. For the blank/empty prefix, prefix with '' (do not put
underscore after the two single quote characters).
.IP \fBreseller_admin_role\fR
The reseller admin role has the ability to create and delete accounts.
.IP \fBallow_overrides\fR
This allows middleware higher in the WSGI pipeline to override auth
processing, useful for middleware such as tempurl and formpost. If you know
you're not going to use such middleware and you want a bit of extra security,
you can set this to false.
.IP \fBservice_roles\fR
If the service_roles parameter is present, an X-Service-Token must be
present in the request that when validated, grants at least one role listed
in the parameter. The X-Service-Token may be scoped to any project.
If there are several reseller prefix items, you can prefix the
parameter so it applies only to those accounts (for example
the parameter SERVICE_service_roles applies to the /v1/SERVICE_
path). If you omit the prefix, the option applies to all reseller
prefix items. For the blank/empty prefix, prefix with '' (do not put
underscore after the two single quote characters).
By default, no service_roles are required.
.IP \fBdefault_domain_id\fR
For backwards compatibility, keystoneauth will match names in cross-tenant
access control lists (ACLs) when both the requesting user and the tenant
are in the default domain i.e the domain to which existing tenants are
migrated. The default_domain_id value configured here should be the same as
the value used during migration of tenants to keystone domains.
.IP \fBallow_names_in_acls\fR
For a new installation, or an installation in which keystone projects may
move between domains, you should disable backwards compatible name matching
in ACLs by setting allow_names_in_acls to false.
.RE
.PD
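A minimal illustrative entry is shown below; the operator role names are
deployment-specific examples rather than required values:
.PD 0
.RS 10
.IP "use = egg:swift#keystoneauth"
.IP "reseller_prefix = AUTH"
.IP "operator_roles = admin, swiftoperator"
.RE
.PD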


.RS 0
.IP "\fB[filter:cache]\fR"
.RE

Caching middleware that manages caching in swift.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the memcache middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#memcache\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is memcache.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR "
Logging level. The default is INFO.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP "\fBset log_headers\fR"
Enables the ability to log request headers. The default is False.
.IP \fBmemcache_max_connections\fR
Sets the maximum number of connections to each memcached server per worker.
.IP \fBmemcache_servers\fR
If not set in the configuration file, the value for memcache_servers will be
read from /etc/swift/memcache.conf (see memcache.conf-sample) or lacking that
file, it will default to 127.0.0.1:11211. You can specify multiple servers
separated with commas, as in: 10.1.2.3:11211,10.1.2.4:11211.  (IPv6
addresses must follow rfc3986 section-3.2.2, i.e. [::1]:11211)
.IP \fBmemcache_serialization_support\fR
This sets how memcache values are serialized and deserialized:
.RE

.PD 0
.RS 10
.IP "0 = older, insecure pickle serialization"
.IP "1 = json serialization but pickles can still be read (still insecure)"
.IP "2 = json serialization only (secure and the default)"
.RE

.RS 10
To avoid an instant full cache flush, existing installations should upgrade with 0, then set to 1 and reload, then after some time (24 hours) set to 2 and reload. In the future, the ability to use pickle serialization will be removed.

If not set in the configuration file, the value for memcache_serialization_support will be read from /etc/swift/memcache.conf if it exists (see memcache.conf-sample). Otherwise, the default value as indicated above will be used.
.RE
.PD


.RS 0
.IP "\fB[filter:ratelimit]\fR"
.RE

Rate limits requests on both an Account and Container level.  Limits are configurable.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the ratelimit middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#ratelimit\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is ratelimit.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR "
Logging level. The default is INFO.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP "\fBset log_headers\fR "
Enables the ability to log request headers. The default is False.
.IP \fBclock_accuracy\fR
This should represent how accurate the proxy servers' system clocks are with each other.
1000 means that all the proxies' clocks are accurate to each other within 1 millisecond.
No ratelimit should be higher than the clock accuracy. The default is 1000.
.IP \fBmax_sleep_time_seconds\fR
App will immediately return a 498 response if the necessary sleep time ever exceeds
the given max_sleep_time_seconds. The default is 60 seconds.
.IP \fBlog_sleep_time_seconds\fR
To allow visibility into rate limiting, set this value > 0; all sleeps greater than
this number will be logged. A value of 0 disables logging. The default is 0.
.IP \fBrate_buffer_seconds\fR
Number of seconds the rate counter can drop and be allowed to catch up
(at a faster than listed rate). A larger number will result in larger spikes in
rate but better average accuracy. The default is 5.
.IP \fBaccount_ratelimit\fR
If set, limits PUT and DELETE requests to /account_name/container_name. The number is
in requests per second. A value of 0 disables the limit. The default is 0.
.IP \fBcontainer_ratelimit_size\fR
When set with container_ratelimit_x = r: for containers of size x, limit requests per second
to r. This limits PUT, DELETE, and POST requests to /a/c/o. The default is ''.
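For example, the following illustrative values limit writes to 100 requests per second for containers of size 100 and 50 requests per second for containers of size 500:
container_ratelimit_100 = 100
container_ratelimit_500 = 50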
.IP \fBcontainer_listing_ratelimit_size\fR
Similar to the container-level write limits above, settings of the form
container_listing_ratelimit_x = r limit container GET (listing) requests per
second to r for containers of size x.
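For example (illustrative values):
container_listing_ratelimit_100 = 100
container_listing_ratelimit_500 = 50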
.RE
.PD



.RS 0
.IP "\fB[filter:domain_remap]\fR"
.RE

Middleware that translates container and account parts of a domain to path parameters that the proxy server understands.
container.account.storageurl/object gets translated to container.account.storageurl/path_root/account/container/object, and account.storageurl/path_root/container/object gets translated to account.storageurl/path_root/account/container/object.
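For example, assuming illustrative settings of storage_domain = example.com and path_root = v1, a request for container.AUTH_test.example.com/object is remapped to the path /v1/AUTH_test/container/object on the same host.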

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the domain_remap middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#domain_remap\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is domain_remap.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR "
Logging level. The default is INFO.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP "\fBset log_headers\fR "
Enables the ability to log request headers. The default is False.
.IP \fBstorage_domain\fR
The domain to be used by the middleware.
.IP \fBpath_root\fR
The path root value for the storage URL. The default is v1.
.IP \fBreseller_prefixes\fR
Browsers can convert a host header to lowercase, so this middleware checks that
the reseller prefix on the account is the correct case. This is done by comparing the
items in the reseller_prefixes config option to the found prefix. If they
match except for case, the item from reseller_prefixes will be used
instead of the found reseller prefix. When none match, the default reseller
prefix is used. When no default reseller prefix is configured, any request with
an account prefix not in that list will be ignored by this middleware.
Defaults to 'AUTH'.
.IP \fBdefault_reseller_prefix\fR
The default reseller prefix. This is used when none of the configured
reseller_prefixes match. When not set, no reseller prefix is added.
.RE
.PD


.RS 0
.IP "\fB[filter:catch_errors]\fR"
.RE
.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the catch_errors middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#catch_errors\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is catch_errors.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR "
Logging level. The default is INFO.
.IP "\fBset log_address\fR "
Logging address. The default is /dev/log.
.IP "\fBset log_headers\fR"
Enables the ability to log request headers. The default is False.
.RE
.PD


.RS 0
.IP "\fB[filter:cname_lookup]\fR"
.RE

Note: this middleware requires python-dnspython

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the cname_lookup middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#cname_lookup\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is cname_lookup.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR "
Logging level. The default is INFO.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP "\fBset log_headers\fR"
Enables the ability to log request headers. The default is False.
.IP \fBstorage_domain\fR
The domain to be used by the middleware.
.IP \fBlookup_depth\fR
How deep in the CNAME chain to look for something that matches the storage domain.
The default is 1.
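For example, the following illustrative settings look up to two CNAMEs deep for a name ending in example.com:
storage_domain = example.com
lookup_depth = 2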
.RE
.PD


.RS 0
.IP "\fB[filter:staticweb]\fR"
.RE

Note: Put staticweb just after your auth filter(s) in the pipeline

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the staticweb middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#staticweb\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is staticweb.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR "
Logging level. The default is INFO.
.IP "\fBset log_address\fR "
Logging address. The default is /dev/log.
.IP "\fBset log_headers\fR"
Enables the ability to log request headers. The default is False.
.RE
.PD


.RS 0
.IP "\fB[filter:tempurl]\fR"
.RE

Note: Put tempurl before slo, dlo, and your auth filter(s) in the pipeline

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the tempurl middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#tempurl\fR.
.IP \fBmethods\fR
The methods allowed with Temp URLs. The default is 'GET HEAD PUT POST DELETE'.
.IP \fBincoming_remove_headers\fR
The headers to remove from incoming requests. Simply a whitespace delimited list of header names and names can optionally end with '*' to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals.
.IP \fBincoming_allow_headers\fR
The headers allowed as exceptions to incoming_remove_headers. Simply a whitespace delimited list of header names and names can optionally end with '*' to indicate a prefix match.
.IP "\fBoutgoing_remove_headers\fR"
The headers to remove from outgoing responses. Simply a whitespace delimited list of header names and names can optionally end with '*' to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals.
.IP "\fBoutgoing_allow_headers\fR"
The headers allowed as exceptions to outgoing_remove_headers. Simply a whitespace delimited list of header names and names can optionally end with '*' to indicate a prefix match.
.RE
.PD


.RS 0
.IP "\fB[filter:formpost]\fR"
.RE

Note: Put formpost just before your auth filter(s) in the pipeline

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the formpost middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#formpost\fR.
.RE
.PD



.RS 0
.IP "\fB[filter:name_check]\fR"
.RE

Note: Just needs to be placed before the proxy-server in the pipeline.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the name_check middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#name_check\fR.
.IP \fBforbidden_chars\fR
Characters that will not be allowed in a name. The default is '"`<>.
.IP \fBmaximum_length\fR
Maximum number of characters that can be in the name. The default is 255.
.IP \fBforbidden_regexp\fR
Python regular expressions of substrings that will not be allowed in a name. The default is /\./|/\.\./|/\.$|/\.\.$.
.RE
.PD


.RS 0
.IP "\fB[filter:list-endpoints]\fR"
.RE
.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the list_endpoints middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#list_endpoints\fR.
.IP \fBlist_endpoints_path\fR
The default is '/endpoints/'.
.RE
.PD


.RS 0
.IP "\fB[filter:proxy-logging]\fR"
.RE

Logging for the proxy server now lives in this middleware.
If the access_* variables are not set, logging directives from [DEFAULT]
without "access_" will be used.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the proxy_logging middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#proxy_logging\fR.
.IP "\fBaccess_log_name\fR"
Label used when logging. The default is proxy-server.
.IP "\fBaccess_log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBaccess_log_level\fR "
Logging level. The default is INFO.
.IP \fBaccess_log_address\fR
Default is /dev/log.
.IP \fBaccess_log_udp_host\fR
If set, access_log_udp_host will override access_log_address.  Default is
unset.
.IP \fBaccess_log_udp_port\fR
Default is 514.
.IP \fBaccess_log_statsd_host\fR
You can use log_statsd_* from [DEFAULT], or override them here.
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to an IPv4 and IPv6 address, the IPv4
address will be used.
.IP \fBaccess_log_statsd_port\fR
Default is 8125.
.IP \fBaccess_log_statsd_default_sample_rate\fR
Default is 1.
.IP \fBaccess_log_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBaccess_log_statsd_metric_prefix\fR
Default is "" (empty-string)
.IP \fBaccess_log_headers\fR
Default is False.
.IP \fBaccess_log_headers_only\fR
If access_log_headers is True and access_log_headers_only is set only
these headers are logged. Multiple headers can be defined as comma separated
list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
.IP \fBreveal_sensitive_prefix\fR
By default, the X-Auth-Token is logged. To obscure the value,
set reveal_sensitive_prefix to the number of characters to log.
For example, if set to 12, only the first 12 characters of the
token appear in the log. An unauthorized access of the log file
won't allow unauthorized usage of the token. However, the first
12 or so characters are unique enough that you can trace/debug
token usage. Set to 0 to suppress the token completely (replaced
by '...' in the log). The default is 16 chars.
Note: reveal_sensitive_prefix will not affect the value logged with access_log_headers=True.
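For example, to log only the first 12 characters of each token, as described above: reveal_sensitive_prefix = 12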
.IP \fBlog_statsd_valid_http_methods\fR
HTTP methods allowed for StatsD logging (comma-separated); request methods
not in this list will have "BAD_METHOD" for the method portion of the metric.
The default is "GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS".
.RE
.PD


.RS 0
.IP "\fB[filter:bulk]\fR"
.RE

Note: Put before both ratelimit and auth in the pipeline.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the bulk middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#bulk\fR.
.IP \fBmax_containers_per_extraction\fR
The default is 10000.
.IP \fBmax_failed_extractions\fR
The default is 1000.
.IP \fBmax_deletes_per_request\fR
The default is 10000.
.IP \fBmax_failed_deletes\fR
The default is 1000.
.IP \fByield_frequency\fR
In order to keep a connection active during a potentially long bulk request,
Swift may return whitespace prepended to the actual response body. This
whitespace will be yielded no more than every yield_frequency seconds.
The default is 10.
.IP \fBdelete_container_retry_count\fR
Note: This parameter is used during a bulk delete of objects and
their container. Such a delete can frequently fail because it is very likely
that not all replicated objects have been deleted by the time the middleware
gets a successful response. This option configures the number of retries; the
number of seconds to wait between each retry will be 1.5**retry.
The default is 0.
.RE
.PD


.RS 0
.IP "\fB[filter:slo]\fR"
.RE

Note: Put after auth and staticweb in the pipeline.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the slo middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#slo\fR.
.IP \fBmax_manifest_segments\fR
The default is 1000.
.IP \fBmax_manifest_size\fR
The default is 2097152.
.IP \fBmin_segment_size\fR
The default is 1048576.
.IP \fBrate_limit_after_segment\fR
Start rate-limiting object segments after the Nth segment of a segmented
object. The default is 10 segments.
.IP \fBrate_limit_segments_per_sec\fR
Once segment rate-limiting kicks in for an object, limit segments served to N
per second. The default is 1.
.IP \fBmax_get_time\fR
Time limit on GET requests (seconds). The default is 86400.
.RE
.PD


.RS 0
.IP "\fB[filter:dlo]\fR"
.RE

Note: Put after auth and staticweb in the pipeline.
If you don't put it in the pipeline, it will be inserted for you.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the dlo middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#dlo\fR.
.IP \fBrate_limit_after_segment\fR
Start rate-limiting object segments after the Nth segment of a segmented
object. The default is 10 segments.
.IP \fBrate_limit_segments_per_sec\fR
Once segment rate-limiting kicks in for an object, limit segments served to N
per second. The default is 1.
.IP \fBmax_get_time\fR
Time limit on GET requests (seconds). The default is 86400.
.RE
.PD


.RS 0
.IP "\fB[filter:container-quotas]\fR"
.RE

Note: Put after auth in the pipeline.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the container_quotas middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#container_quotas\fR.
.RE
.PD


.RS 0
.IP "\fB[filter:account-quotas]\fR"
.RE

Note: Put after auth in the pipeline.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the account_quotas middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#account_quotas\fR.
.RE
.PD


.RS 0
.IP "\fB[filter:gatekeeper]\fR"
.RE

Note: this middleware is inserted automatically by the proxy server if it is not present in the pipeline.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the gatekeeper middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#gatekeeper\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is gatekeeper.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR "
Logging level. The default is INFO.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP "\fBset log_headers\fR"
Enables the ability to log request headers. The default is False.
.RE
.PD


.RS 0
.IP "\fB[filter:container_sync]\fR"
.RE


.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the container_sync middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#container_sync\fR.
.IP \fBallow_full_urls\fR
Set this to false if you want to disallow any full url values to be set for
any new X-Container-Sync-To headers. This will keep any new full urls from
coming in, but won't change any existing values already in the cluster.
Updating those will have to be done manually, as knowing what the true realm
endpoint should be cannot always be guessed. The default is true.
.IP \fBcurrent\fR
Set this to specify this cluster's //realm/cluster as "current" in /info.
.RE
.PD


.RS 0
.IP "\fB[filter:xprofile]\fR"
.RE

Note: Put this at the beginning of the pipeline to profile all middleware, but it is safer to put it after healthcheck.

.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the xprofile middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#xprofile\fR.
.IP "\fBprofile_module\fR"
This option enables you to switch profilers; the profiler should inherit from the
Python standard profiler. Currently supported values include 'cProfile' and 'eventlet.green.profile'.
.IP "\fBlog_filename_prefix\fR"
This prefix will be used to combine process ID and timestamp to name the
profile data file.  Make sure the executing user has permission to write
into this path (missing path segments will be created, if necessary).
If you enable profiling in more than one type of daemon, you must override
it with a unique value for each. The default is /var/log/swift/profile/account.profile.
.IP "\fBdump_interval\fR"
The profile data will be dumped to local disk based on above naming rule
in this interval. The default is 5.0.
.IP "\fBdump_timestamp\fR"
Be careful: this option will enable the profiler to dump data into files named
with a timestamp, which means many files will pile up in the directory.
The default is false.
.IP "\fBpath\fR"
This is the path of the URL to access the mini web UI. The default is __profile__.
.IP "\fBflush_at_shutdown\fR"
Clear the data when the WSGI server shuts down. The default is false.
.IP "\fBunwind\fR"
Unwind the iterator of applications. Default is false.
.RE
.PD


.RS 0
.IP "\fB[filter:versioned_writes]\fR"
.RE

Note: Put after slo, dlo in the pipeline.
If you don't put it in the pipeline, it will be inserted automatically.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the versioned_writes middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#versioned_writes\fR.
.IP \fBallow_versioned_writes\fR
Enables using versioned writes middleware and exposing configuration settings via HTTP GET /info.
WARNING: Setting this option bypasses the "allow_versions" option
in the container configuration file, which will be eventually
deprecated. See documentation for more details.
.RE
.PD


.SH APP SECTION
.PD 1
.RS 0
This is indicated by section name [app:proxy-server]. Below are the parameters
that are acceptable within this section.
.IP \fBuse\fR
Entry point for paste.deploy for the proxy server. This is the reference to the installed python egg.
This is normally \fBegg:swift#proxy\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is proxy-server.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR"
Logging level. The default is INFO.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP \fBlog_handoffs\fR
Log when handoff locations are used.  Default is True.
.IP \fBrecheck_account_existence\fR
Cache timeout in seconds that the proxy sends to memcached for account existence. The default is 60 seconds.
.IP \fBrecheck_container_existence\fR
Cache timeout in seconds that the proxy sends to memcached for container existence. The default is 60 seconds.
.IP \fBobject_chunk_size\fR
Chunk size to read from object servers. The default is 8192.
.IP \fBclient_chunk_size\fR
Chunk size to read from clients. The default is 8192.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBrecoverable_node_timeout\fR
How long the proxy server will wait for an initial response and to read a
chunk of data from the object servers while serving GET / HEAD requests.
Timeouts from these requests can be recovered from so setting this to
something lower than node_timeout would provide quicker error recovery
while allowing for a longer timeout for non-recoverable requests (PUTs).
Defaults to node_timeout, should be overridden if node_timeout is set to a
high number to prevent client timeouts from firing before the proxy server
has a chance to retry.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBpost_quorum_timeout\fR
How long to wait for requests to finish after a quorum has been established. The default is 0.5 seconds.
.IP \fBerror_suppression_interval\fR
Time in seconds that must elapse since the last error for a node to
be considered no longer error limited. The default is 60 seconds.
.IP \fBerror_suppression_limit\fR
Error count to consider a node error limited. The default is 10.
.IP \fBallow_account_management\fR
Whether account PUTs and DELETEs are even callable. If set to 'true' any authorized
user may create and delete accounts; if 'false' no one, even authorized, can. The default
is false.
.IP \fBobject_post_as_copy\fR
Set object_post_as_copy = false to turn on fast posts where only the metadata changes
are stored as new and the original data file is kept in place. This makes for quicker
posts. The default is True.
.IP \fBaccount_autocreate\fR
If set to 'true' authorized accounts that do not yet exist within the Swift cluster
will be automatically created. The default is set to false.
.IP \fBauto_create_account_prefix\fR
Prefix used when automatically creating accounts. The default is '.'.
.IP \fBmax_containers_per_account\fR
If set to a positive value, trying to create a container when the account
already has at least this many containers will result in a 403 Forbidden.
Note: This is a soft limit, meaning a user might exceed the cap for
recheck_account_existence before the 403s kick in.
.IP \fBmax_containers_whitelist\fR
This is a comma separated list of account hashes that ignore the max_containers_per_account cap.
.IP \fBdeny_host_headers\fR
Comma separated list of Host headers to which the proxy will deny requests. The default is empty.
.IP \fBput_queue_depth\fR
Depth of the proxy put queue. The default is 10.
.IP \fBsorting_method\fR
Storage nodes can be chosen at random (shuffle - default), by using timing
measurements (timing), or by using an explicit match (affinity).
Using timing measurements may allow for lower overall latency, while
using affinity allows for finer control. In both the timing and
affinity cases, equally-sorting nodes are still randomly chosen to
spread load.
The valid values for sorting_method are "affinity", "shuffle", and "timing".
.IP \fBtiming_expiry\fR
If the "timing" sorting_method is used, the timings will only be valid for
the number of seconds configured by timing_expiry. The default is 300.
.IP \fBrequest_node_count\fR
Set to the number of nodes to contact for a normal request. You can use '* replicas'
at the end to have it use the number given times the number of
replicas for the ring being used for the request. The default is '2 * replicas'.
.IP \fBread_affinity\fR
Which backend servers to prefer on reads. Format is rN for region N or
rNzM for region N, zone M. The value after the equals sign is
the priority; lower numbers are higher priority.
Default is empty, meaning no preference.
Example: first read from region 1 zone 1, then region 1 zone 2, then anything in region 2, then everything else:
read_affinity = r1z1=100, r1z2=200, r2=300
.IP \fBwrite_affinity\fR
Which backend servers to prefer on writes. Format is rN for region N or
rNzM for region N, zone M. If this is set, then when
handling an object PUT request, some number (see setting
write_affinity_node_count) of local backend servers will be tried
before any nonlocal ones. Default is empty, meaning no preference.
Example: try to write to regions 1 and 2 before writing to any other
nodes:
write_affinity = r1, r2
.IP \fBwrite_affinity_node_count\fR
The number of local (as governed by the write_affinity setting)
nodes to attempt to contact first, before any non-local ones. You
can use '* replicas' at the end to have it use the number given
times the number of replicas for the ring being used for the
request. The default is '2 * replicas'.
.IP \fBswift_owner_headers\fR
These are the headers whose values will only be shown to swift_owners. The
exact definition of a swift_owner is up to the auth system in use, but
usually indicates administrative responsibilities.
The default is 'x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control'.
.RE
.PD

.SH DOCUMENTATION
.LP
More in depth documentation about the swift-proxy-server and
also Openstack-Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html
and
.BI http://swift.openstack.org

.SH "SEE ALSO"
.BR swift-proxy-server(1)
swift-2.7.1/doc/manpages/object-server.conf.50000664000567000056710000005031213024044354022142 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH object-server.conf 5 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B object-server.conf
\- configuration file for the openstack-swift object server



.SH SYNOPSIS
.LP
.B object-server.conf



.SH DESCRIPTION
.PP
This is the configuration file used by the object server and other object
background services, such as the replicator, reconstructor, updater and auditor.

The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, with each section name enclosed in square brackets. Each section
contains a certain number of key/value parameters, which are described later.

Any line that begins with a '#' symbol is ignored.

You can find more information about python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR



.SH GLOBAL SECTION
.PD 1
.RS 0
This is indicated by section named [DEFAULT]. Below are the parameters that
are acceptable within this section.

.IP "\fBbind_ip\fR"
IP address the object server should bind to. The default is 0.0.0.0 which will make
it bind to all available addresses.
.IP "\fBbind_port\fR"
TCP port the object server should bind to. The default is 6000.
.IP "\fBbind_timeout\fR"
Timeout to bind socket. The default is 30.
.IP \fBbacklog\fR
TCP backlog. Maximum number of allowed pending connections. The default value is 4096.
.IP \fBworkers\fR
The number of pre-forked processes that will accept connections.  Zero means
no fork.  The default is auto which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fall back to one. It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. The default is 1024.
.IP \fBuser\fR
The system user that the object server will run as. The default is swift.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBdevices\fR
Parent directory of where devices are mounted. Default is /srv/node.
.IP \fBmount_check\fR
Whether or not to check if the devices are mounted to prevent accidentally writing to
the root device. The default is set to true.
.IP \fBdisable_fallocate\fR
Disable pre-allocating disk space for a file. The default is false.
.IP \fBexpiring_objects_container_divisor\fR
The default is 86400.
.IP \fBexpiring_objects_account_name\fR
The default is 'expiring_objects'.
.IP \fBservers_per_port\fR
Make object-server run this many worker processes per unique port of
"local" ring devices across all storage policies.  This can help provide
the isolation of threads_per_disk without the severe overhead.  The default
value of 0 disables this feature.
.IP \fBlog_name\fR
Label used when logging. The default is swift.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBlog_max_line_length\fR
Caps the length of log lines to the value given; no limit if
set to 0, the default.
.IP \fBlog_custom_handlers\fR
Comma separated list of functions to call to set up custom log handlers.
Functions get passed: conf, name, log_to_console, log_route, fmt, logger,
adapted_logger. The default is empty.
.IP \fBlog_udp_host\fR
If set, log_udp_host will override log_address.
.IP "\fBlog_udp_port\fR
UDP log port, the default is 514.
.IP \fBlog_statsd_host\fR
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to an IPv4 and IPv6 address, the IPv4
address will be used.
.IP \fBlog_statsd_port\fR
The default is 8125.
.IP \fBlog_statsd_default_sample_rate\fR
The default is 1.
.IP \fBlog_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.IP \fBeventlet_debug\fR
Debug mode for eventlet library. The default is false.
.IP \fBfallocate_reserve\fR
You can set fallocate_reserve to the number of bytes you'd like fallocate to
reserve, whether there is space for the given file size or not. The default is 0.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBcontainer_update_timeout\fR
Time to wait while sending a container update on object update. The default is 1 second.
.IP \fBclient_timeout\fR
Time to wait while receiving each chunk of data from a client or another
backend node. The default is 60.
.IP \fBnetwork_chunk_size\fR
The default is 65536.
.IP \fBdisk_chunk_size\fR
The default is 65536.
.RE
.PD



.SH PIPELINE SECTION
.PD 1
.RS 0
This is indicated by section name [pipeline:main]. Below are the parameters that
are acceptable within this section.

.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters
ended by an application. The normal pipeline is "healthcheck recon
object-server".
.RE
.PD



.SH APP SECTION
.PD 1
.RS 0
This is indicated by section name [app:object-server]. Below are the parameters
that are acceptable within this section.
.IP "\fBuse\fR"
Entry point for paste.deploy for the object server. This is the reference to the installed python egg.
This is normally \fBegg:swift#object\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is object-server.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR"
Logging level. The default is INFO.
.IP "\fBset log_requests\fR"
Enables request logging. The default is True.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP "\fBmax_upload_time\fR"
The default is 86400.
.IP "\fBslow\fR"
The default is 0.
.IP "\fBkeep_cache_size\fR"
Objects smaller than this are not evicted from the buffercache once read. The default is 5242880.
.IP "\fBkeep_cache_private\fR"
If true, objects for authenticated GET requests may be kept in buffer cache
if small enough. The default is false.
.IP "\fBmb_per_sync\fR"
On PUTs, sync data every n MB. The default is 512.
.IP "\fBallowed_headers\fR"
Comma separated list of headers that can be set in metadata on an object.
This list is in addition to X-Object-Meta-* headers and cannot include Content-Type, etag, Content-Length, or deleted.
The default is 'Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object'.
.IP "\fBauto_create_account_prefix\fR"
The default is '.'.
.IP "\fBthreads_per_disk\fR"
A value of 0 means "don't use thread pools". A reasonable starting point is
4. The default is 0.
.IP "\fBreplication_server\fR"
Configure parameter for creating a specific server.
To handle all verbs, including replication verbs, do not specify
"replication_server" (this is the default). To only handle replication,
set to a True value (e.g. "True" or "1"). To handle only non-replication
verbs, set to "False". Unless you have a separate replication network, you
should not specify any value for "replication_server".
.IP "\fBreplication_concurrency\fR"
Set to restrict the number of concurrent incoming SSYNC requests; set to 0
for unlimited (the default is 4). Note that SSYNC requests are only used
by the object reconstructor or the object replicator when configured to use ssync.
.IP "\fBreplication_one_per_device\fR"
Restricts incoming SSYNC requests to one per device,
replication_concurrency above allowing. This can help control I/O to each
device, but you may wish to set this to False to allow multiple SSYNC
requests (up to the above replication_concurrency setting) per device. The default is true.
.IP "\fBreplication_lock_timeout\fR"
Number of seconds to wait for an existing replication device lock before
giving up. The default is 15.
.IP "\fBreplication_failure_threshold\fR"
.IP "\fBreplication_failure_ratio\fR"
These two settings control when the SSYNC subrequest handler will
abort an incoming SSYNC attempt. An abort will occur if there are at
least threshold number of failures and the value of failures / successes
exceeds the ratio. The defaults of 100 and 1.0 mean that at least 100
failures have to occur and there have to be more failures than successes for
an abort to occur.
.IP "\fBsplice\fR"
Use splice() for zero-copy object GETs. This requires Linux kernel
version 3.0 or greater. If you set "splice = yes" but the kernel
does not support it, error messages will appear in the object server
logs at startup, but your object servers should continue to function.
The default is false.
.RE
.PD



.SH FILTER SECTION
.PD 1
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and respective acceptable parameters.
.IP "\fB[filter:healthcheck]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the healthcheck middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#healthcheck\fR.
.IP "\fBdisable_path\fR"
An optional filesystem path which, if present, will cause the healthcheck
URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
.RE

.RS 0
.IP "\fB[filter:recon]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the recon middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#recon\fR.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write. The default is /var/cache/swift.
.IP "\fBrecon_lock_path\fR"
The default is /var/lock.
.RE
.PD

.RS 0
.IP "\fB[filter:xprofile]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the xprofile middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#xprofile\fR.
.IP "\fBprofile_module\fR"
This option enables you to switch profilers; the profiler should inherit from the
Python standard profiler. Currently supported values include 'cProfile' and 'eventlet.green.profile'.
.IP "\fBlog_filename_prefix\fR"
This prefix will be used to combine process ID and timestamp to name the
profile data file.  Make sure the executing user has permission to write
into this path (missing path segments will be created, if necessary).
If you enable profiling in more than one type of daemon, you must override
it with a unique value for each. The default is /var/log/swift/profile/account.profile.
.IP "\fBdump_interval\fR"
The profile data will be dumped to local disk based on above naming rule
in this interval. The default is 5.0.
.IP "\fBdump_timestamp\fR"
Be careful: this option will enable the profiler to dump data into files named
with a timestamp, which means many files will pile up in the directory.
The default is false.
.IP "\fBpath\fR"
This is the path of the URL to access the mini web UI. The default is __profile__.
.IP "\fBflush_at_shutdown\fR"
Clear the data when the WSGI server shuts down. The default is false.
.IP "\fBunwind\fR"
Unwind the iterator of applications. Default is false.
.RE
.PD


.SH ADDITIONAL SECTIONS
.PD 1
.RS 0
The following sections are used by other swift-object services, such as replicator,
updater, auditor.
.IP "\fB[object-replicator]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is object-replicator.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBdaemonize\fR
Whether or not to run replication as a daemon. The default is yes.
.IP "\fBrun_pause [deprecated]\fR"
Time in seconds to wait between replication passes. The default is 30.
.IP \fBinterval\fR
Time in seconds to wait between replication passes. The default is 30.
.IP \fBconcurrency\fR
Number of replication workers to spawn. The default is 1.
.IP \fBstats_interval\fR
Interval in seconds between logging replication statistics. The default is 300.
.IP \fBsync_method\fR
The sync method to use; default is rsync but you can use ssync to try the
EXPERIMENTAL all-swift-code-no-rsync-callouts method. Once ssync is verified
as having performance comparable to, or better than, rsync, we plan to
deprecate rsync so we can move on with more features for replication.
.IP \fBrsync_timeout\fR
Max duration of a partition rsync. The default is 900 seconds.
.IP \fBrsync_io_timeout\fR
Passed to rsync for I/O OP timeout. The default is 30 seconds.
.IP \fBrsync_compress\fR
Allow rsync to compress data which is transmitted to destination node
during sync. However, this is applicable only when destination node is in
a different region than the local one.
NOTE: Objects that are already compressed (for example: .tar.gz, .mp3) might
slow down the syncing process. The default is false.
.IP \fBrsync_module\fR
Format of the rsync module where the replicator will send data. See
etc/rsyncd.conf-sample for some usage examples. The default is empty.
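An illustrative value (see etc/rsyncd.conf-sample for others), using per-node substitution of the replication IP: rsync_module = {replication_ip}::object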
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBrsync_bwlimit\fR
Passed to rsync for bandwidth limit in kB/s.  The default is 0 (unlimited).
.IP \fBhttp_timeout\fR
Max duration of an HTTP request. The default is 60 seconds.
.IP \fBlockup_timeout\fR
Attempts to kill all workers if nothing replicates for lockup_timeout seconds. The
default is 1800 seconds.
.IP \fBring_check_interval\fR
The default is 15.
.IP \fBrsync_error_log_line_length\fR
Limits how long rsync error log lines are. 0 (default) means to log the entire line.
.IP \fBreclaim_age\fR
Time elapsed in seconds before an object can be reclaimed. The default is
604800 seconds.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write. The default is /var/cache/swift.
.IP "\fBhandoffs_first\fR"
The flag to replicate handoffs prior to canonical partitions.
It allows one to force syncing and deleting handoffs quickly.
If set to a True value(e.g. "True" or "1"), partitions
that are not supposed to be on the node will be replicated first.
The default is false.
.IP "\fBhandoff_delete\fR"
The number of replicas which are ensured in swift.
If a number less than the number of replicas is set, the object-replicator
may delete local handoffs even though not all replicas exist in the
cluster. The object-replicator will remove a local handoff partition directory
after syncing the partition when the number of successful responses is greater
than or equal to this number. By default (auto), handoff partitions will be
removed only when they have successfully replicated to all the canonical nodes.

The handoffs_first and handoff_delete options are intended for special cases,
such as disks filling up in the cluster. These two options SHOULD NOT BE
CHANGED, except in such extreme situations (e.g. disks are full or about to
fill up; in any case, DO NOT let your drives fill up).
.RE


.RS 0
.IP "\fB[object-reconstructor]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is object-reconstructor.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBdaemonize\fR
Whether or not to run replication as a daemon. The default is yes.
.IP "\fBrun_pause [deprecated]\fR"
Time in seconds to wait between replication passes. The default is 30.
.IP \fBinterval\fR
Time in seconds to wait between replication passes. The default is 30.
.IP \fBconcurrency\fR
Number of replication workers to spawn. The default is 1.
.IP \fBstats_interval\fR
Interval in seconds between logging replication statistics. The default is 300.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBhttp_timeout\fR
Max duration of an HTTP request. The default is 60 seconds.
.IP \fBlockup_timeout\fR
Attempts to kill all workers if nothing replicates for lockup_timeout seconds. The
default is 1800 seconds.
.IP \fBring_check_interval\fR
The default is 15.
.IP \fBreclaim_age\fR
Time elapsed in seconds before an object can be reclaimed. The default is
604800 seconds.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write. The default is /var/cache/swift.
.IP "\fBhandoffs_first\fR"
The flag to replicate handoffs prior to canonical partitions.
It allows one to force syncing and deleting handoffs quickly.
If set to a True value(e.g. "True" or "1"), partitions
that are not supposed to be on the node will be replicated first.
The default is false.
.RE
.PD


.RS 0
.IP "\fB[object-updater]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is object-updater.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBinterval\fR
Minimum time for a pass to take. The default is 300 seconds.
.IP \fBconcurrency\fR
Number of updater workers to spawn. The default is 1.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBslowdown\fR
Slowdown will sleep that amount between objects. The default is 0.01 seconds.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write. The default is /var/cache/swift.
.RE
.PD


.RS 0
.IP "\fB[object-auditor]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is object-auditor.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.

.IP \fBdisk_chunk_size\fR
The default is 65536.
.IP \fBfiles_per_second\fR
Maximum files audited per second. Should be tuned according to individual
system specs. 0 is unlimited. The default is 20.
.IP \fBbytes_per_second\fR
Maximum bytes audited per second. Should be tuned according to individual
system specs. 0 is unlimited. The default is 10000000.
.IP \fBconcurrency\fR
Number of auditor workers to spawn. The default is 1.
.IP \fBlog_time\fR
The default is 3600 seconds.
.IP \fBzero_byte_files_per_second\fR
The default is 50.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write. The default is /var/cache/swift.
.IP \fBobject_size_stats\fR
Takes a comma separated list of ints. If set, the object auditor will
increment a counter for every object whose size is <= to the given break
points and report the result after a full scan.
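For example, the following illustrative breakpoints count objects of up to 10, 100, 1024 and 10240 bytes: object_size_stats = 10, 100, 1024, 10240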
.IP \fBrsync_tempfile_timeout\fR
Time elapsed in seconds before rsync tempfiles will be unlinked. Config value of "auto"
will try to use object-replicator's rsync_timeout + 900, or fall back to 86400 (1 day).
.RE




.SH DOCUMENTATION
.LP
More in depth documentation about the swift-object-server and
also Openstack-Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html
and
.BI http://swift.openstack.org


.SH "SEE ALSO"
.BR swift-object-server(1),
swift-2.7.1/doc/manpages/swift-object-auditor.10000664000567000056710000000336013024044354022506 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-object-auditor 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-object-auditor 
\- Openstack-swift object auditor

.SH SYNOPSIS
.LP
.B swift-object-auditor 
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once] [-z|--zero_byte_fps]

.SH DESCRIPTION 
.PP
The object auditor crawls the local object system checking the integrity of objects. 
If corruption is found (in the case of bit rot, for example), the file is 
quarantined, and replication will replace the bad file from another replica.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE

.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE

.IP "-z ZERO_BYTE_FPS"
.IP "--zero_byte_fps=ZERO_BYTE_FPS"
.RS 4
.IP "Audit only zero byte files at specified files/sec"
.RE
.PD
.RE
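.PP
For example (an illustrative invocation, assuming the default configuration path), to run a single pass auditing only zero byte files at 50 files per second:
.PP
.RS 4
swift-object-auditor /etc/swift/object-server.conf --once --zero_byte_fps=50
.RE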
    
    
.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-object-auditor 
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR object-server.conf(5)
swift-2.7.1/doc/manpages/swift-object-replicator.10000664000567000056710000000411713024044354023204 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-object-replicator 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-object-replicator 
\- Openstack-swift object replicator

.SH SYNOPSIS
.LP
.B swift-object-replicator 
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP
Replication is designed to keep the system in a consistent state in the face of 
temporary error conditions like network outages or drive failures. The replication 
processes compare local data with each remote copy to ensure they all contain the 
latest version. Object replication uses a hash list to quickly compare subsections 
of each partition.
.PP
Replication updates are push based. For object replication, updating is just a matter 
of rsyncing files to the peer. The replicator also ensures that data is removed
from the system. When an object item is deleted, a tombstone is set as the latest
version of the item. The replicator will see the tombstone and ensure that the item 
is removed from the entire system.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE
.PD
.RE
    
   
.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-object-replicator
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR object-server.conf(5)
swift-2.7.1/doc/manpages/swift-object-info.10000664000567000056710000000322713024044354021774 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-object-info 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-object-info
\- Openstack-swift object-info tool

.SH SYNOPSIS
.LP
.B swift-object-info
[OBJECT_FILE] [SWIFT_DIR] 

.SH DESCRIPTION 
.PP
This is a very simple swift tool that allows a swiftop engineer to retrieve
information about an object that is located on a storage node. One calls
the tool with a given object file as it is stored on the storage node system.
It will then return several pieces of information about that object, such as:

.PD 0
.IP	"- Account it belongs to"
.IP  "- Container "
.IP  "- Object hash "
.IP  "- Content Type "
.IP  "- timestamp "
.IP  "- Etag "
.IP  "- Content Length "
.IP  "- User Metadata "
.IP  "- Location on the ring "
.PD 
    
.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at 
.BI http://swift.openstack.org/index.html

.SH "SEE ALSO"

.BR swift-account-info(1),
.BR swift-container-info(1),
.BR swift-get-nodes(1)
swift-2.7.1/doc/manpages/swift-object-updater.10000664000567000056710000000500213024044354022476 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-object-updater 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-object-updater
\- Openstack-swift object updater

.SH SYNOPSIS
.LP
.B swift-object-updater
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP
The object updater is responsible for updating object information in container listings. 
It will check to see if there are any locally queued updates on the filesystem of each
device, also known as async pending file(s), walk each one and update the
container listing.

For example, suppose a container server is under load and a new object is put 
into the system. The object will be immediately available for reads as soon as 
the proxy server responds to the client with success. However, the object 
server has not been able to update the object listing in the container server. 
Therefore, the update would be queued locally for a later update. Container listings, 
therefore, may not immediately contain the object. This is where an eventual consistency
window will most likely come into play.

In practice, the consistency window is only as large as the frequency at which 
the updater runs and may not even be noticed as the proxy server will route 
listing requests to the first container server which responds. The server under
load may not be the one that serves subsequent listing requests – one of the other
two replicas may handle the listing.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE
.PD 
.RE
    
    
.SH DOCUMENTATION
.LP
More in depth documentation in regards to 
.BI swift-object-updater
and also about Openstack-Swift as a whole can be found at 
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR object-server.conf(5)
swift-2.7.1/doc/manpages/swift-container-replicator.10000664000567000056710000000422213024044354023715 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-container-replicator 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-container-replicator 
\- Openstack-swift container replicator

.SH SYNOPSIS
.LP
.B swift-container-replicator 
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP
Replication is designed to keep the system in a consistent state in the face of 
temporary error conditions like network outages or drive failures. The replication 
processes compare local data with each remote copy to ensure they all contain the 
latest version. Container replication uses a combination of hashes and shared high 
water marks to quickly compare subsections of each partition.
.PP
Replication updates are push based. Container replication pushes missing records over
HTTP or rsyncs whole database files. The replicator also ensures that data is removed
from the system. When a container item is deleted, a tombstone is set as the latest
version of the item. The replicator will see the tombstone and ensure that the item
is removed from the entire system.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE
.PD
.RE
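.SH EXAMPLE
.PP
A typical invocation is shown below. The configuration path is only the
conventional default and is assumed here; adjust it to match your
installation. This runs a single replication pass and then exits:
.PP
.PD 0
.RS 1
.IP "$ swift-container-replicator /etc/swift/container-server.conf --once"
.RE
.PD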
    
   
.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift-container-replicator
and OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR container-server.conf(5)
swift-2.7.1/doc/manpages/swift-container-updater.10000664000567000056710000000425613024044354023224 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-container-updater 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-container-updater
\- OpenStack Swift container updater

.SH SYNOPSIS
.LP
.B swift-container-updater 
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION 
.PP
The container updater is responsible for updating container information in the account database.
It walks the container path in the system looking for container DBs, sending updates
to the account server as needed as it goes along.

There are times when account data cannot be immediately updated. This usually occurs
during failure scenarios or periods of high load. This is where an eventual consistency
window will most likely come into play.

In practice, the consistency window is only as large as the frequency at which
the updater runs and may not even be noticed, as the proxy server will route
listing requests to the first account server that responds. The server under
load may not be the one that serves subsequent listing requests; one of the
other two replicas may handle the listing.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon" 
.RE
.PD
.RE
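.SH EXAMPLE
.PP
A typical invocation is shown below. The configuration path is only the
conventional default and is assumed here; adjust it to match your
installation. This runs a single update pass and then exits:
.PP
.PD 0
.RS 1
.IP "$ swift-container-updater /etc/swift/container-server.conf --once"
.RE
.PD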
       
.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift-container-updater
and OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html


.SH "SEE ALSO"
.BR container-server.conf(5)
swift-2.7.1/doc/manpages/swift-account-server.10000664000567000056710000000260413024044354022533 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-account-server 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-account-server
\- OpenStack Swift account server

.SH SYNOPSIS
.LP
.B swift-account-server
[CONFIG] [-h|--help] [-v|--verbose]

.SH DESCRIPTION 
.PP
The Account Server's primary job is to handle listings of containers. The listings
are stored as SQLite database files and replicated across the cluster in a manner
similar to how objects are.
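
.SH EXAMPLE
.PP
A typical invocation is shown below. The configuration path is only the
conventional default and is assumed here; adjust it to match your
installation. In production the server is commonly managed with swift-init
instead:
.PP
.PD 0
.RS 1
.IP "$ swift-account-server /etc/swift/account-server.conf"
.IP "$ swift-init account-server start"
.RE
.PD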

.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift-account-server
and OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
and 
.BI http://docs.openstack.org


.SH "SEE ALSO"
.BR account-server.conf(5)
swift-2.7.1/doc/manpages/swift-dispersion-report.10000664000567000056710000001007613024044354023265 0ustar  jenkinsjenkins00000000000000.\"
.\" Author: Joao Marcelo Martins  or 
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"  
.TH swift-dispersion-report 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME 
.LP
.B swift-dispersion-report
\- OpenStack Swift dispersion report

.SH SYNOPSIS
.LP
.B swift-dispersion-report [-d|--debug] [-j|--dump-json] [-p|--partitions] [--container-only|--object-only] [--insecure] [conf_file]

.SH DESCRIPTION 
.PP
This is one of the swift-dispersion utilities that is used to evaluate the
overall cluster health. This is accomplished by checking if a set of 
deliberately distributed containers and objects are currently in their
proper places within the cluster.

.PP 
For instance, a common deployment has three replicas of each object.
The health of that object can be measured by checking if each replica
is in its proper place. If only 2 of the 3 are in place, the object's health
can be said to be at 66.66%, where 100% would be perfect.

.PP
Once \fBswift-dispersion-populate\fR has been used to populate the
dispersion account, one should run the \fBswift-dispersion-report\fR tool 
repeatedly for the life of the cluster, in order to check the health of each
of these containers and objects.

.PP
These tools need direct access to the entire cluster and to the ring files.
Installing them on a proxy server will probably do, or on a box used for Swift
administration purposes that also contains the common Swift packages and rings.
Both \fBswift-dispersion-populate\fR and \fBswift-dispersion-report\fR use the
same configuration file, /etc/swift/dispersion.conf. The account used by these
tools should be a dedicated account for the dispersion stats and also have admin
privileges.

.SH OPTIONS
.RS 0
.PD 1
.IP "\fB-d, --debug\fR"
output any 404 responses to standard error
.IP "\fB-j, --dump-json\fR"
output dispersion report in json format
.IP "\fB-p, --partitions\fR"
output the partition numbers that have any missing replicas
.IP "\fB--container-only\fR"
Only run the container report
.IP "\fB--object-only\fR"
Only run the object report
.IP "\fB--insecure\fR"
Allow accessing insecure keystone server. The keystone's certificate will not
be verified.
.RE

.SH CONFIGURATION
.PD 0 
Example \fI/etc/swift/dispersion.conf\fR: 

.RS 3
.IP "[dispersion]"
.IP "auth_url = https://127.0.0.1:443/auth/v1.0"
.IP "auth_user = dpstats:dpstats"
.IP "auth_key = dpstats"
.IP "swift_dir = /etc/swift"
.IP "# project_name = dpstats"
.IP "# project_domain_name = default"
.IP "# user_domain_name = default"
.IP "# dispersion_coverage = 1.0"
.IP "# retries = 5"
.IP "# concurrency = 25"
.IP "# dump_json = no"
.IP "# endpoint_type = publicURL"
.RE
.PD 

.SH EXAMPLE
.PP 
.PD 0
$ swift-dispersion-report 


.RS 1
.IP "Queried 2622 containers for dispersion reporting, 31s, 0 retries"
.IP "100.00% of container copies found (7866 of 7866)"
.IP "Sample represents 1.00% of the container partition space"

.IP "Queried 2621 objects for dispersion reporting, 22s, 0 retries"
.IP "100.00% of object copies found (7863 of 7863)"
.IP "Sample represents 1.00% of the object partition space"
.RE

.PD
 

.SH DOCUMENTATION
.LP
More in-depth documentation about the swift-dispersion utilities and
about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html#cluster-health
and 
.BI http://swift.openstack.org


.SH "SEE ALSO"
.BR swift-dispersion-populate(1),
.BR dispersion.conf (5)
swift-2.7.1/test-requirements.txt0000664000567000056710000000072113024044354020256 0ustar  jenkinsjenkins00000000000000# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

# Hacking already pins down pep8, pyflakes and flake8
hacking>=0.10.0,<0.11
coverage
nose
nosexcover
nosehtmloutput
oslosphinx
sphinx>=1.1.2,<1.2
os-testr>=0.4.1
mock>=1.0
python-swiftclient
python-keystoneclient>=1.3.0

# Security checks
bandit>=0.10.1
swift-2.7.1/MANIFEST.in0000664000567000056710000000047113024044354015555 0ustar  jenkinsjenkins00000000000000include AUTHORS LICENSE .functests .unittests .probetests test/__init__.py
include CHANGELOG CONTRIBUTING.md README.md
include babel.cfg
include test/sample.conf
include tox.ini
include requirements.txt test-requirements.txt
graft doc
graft etc
graft locale
graft test/functional
graft test/probe
graft test/unit
swift-2.7.1/etc/0000775000567000056710000000000013024044470014567 5ustar  jenkinsjenkins00000000000000swift-2.7.1/etc/rsyncd.conf-sample0000664000567000056710000000320313024044352020214 0ustar  jenkinsjenkins00000000000000uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid

[account]
max connections = 2
path = /srv/node
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 8
path = /srv/node
read only = false
lock file = /var/lock/object.lock


# If rsync_module includes the device, you can tune rsyncd to permit 4
# connections per device instead of simply allowing 8 connections for all
# devices:
# rsync_module = {replication_ip}::object_{device}
#
# (if devices in your object ring are named sda, sdb and sdc)
#
#[object_sda]
#max connections = 4
#path = /srv/node
#read only = false
#lock file = /var/lock/object_sda.lock
#
#[object_sdb]
#max connections = 4
#path = /srv/node
#read only = false
#lock file = /var/lock/object_sdb.lock
#
#[object_sdc]
#max connections = 4
#path = /srv/node
#read only = false
#lock file = /var/lock/object_sdc.lock


# To emulate the deprecated option vm_test_mode = yes, set:
# rsync_module = {replication_ip}::object{replication_port}
#
# So, on your SAIO, you have to set the following rsyncd configuration:
#
#[object6010]
#max connections = 25
#path = /srv/1/node/
#read only = false
#lock file = /var/lock/object6010.lock
#
#[object6020]
#max connections = 25
#path = /srv/2/node/
#read only = false
#lock file = /var/lock/object6020.lock
#
#[object6030]
#max connections = 25
#path = /srv/3/node/
#read only = false
#lock file = /var/lock/object6030.lock
#
#[object6040]
#max connections = 25
#path = /srv/4/node/
#read only = false
#lock file = /var/lock/object6040.lock
swift-2.7.1/etc/swift-rsyslog.conf-sample0000664000567000056710000000403413024044352021551 0ustar  jenkinsjenkins00000000000000# Uncomment the following to have a log containing all logs together
#local.* /var/log/swift/all.log

# Uncomment the following to have hourly swift logs.
#$template HourlyProxyLog,"/var/log/swift/hourly/%$YEAR%%$MONTH%%$DAY%%$HOUR%"
#local0.* ?HourlyProxyLog

# Use the following to have separate log files for each of the main servers:
# account-server, container-server, object-server, proxy-server. Note:
# object-updater's output will be stored in object.log.
if $programname contains 'swift' then /var/log/swift/swift.log
if $programname contains 'account' then /var/log/swift/account.log
if $programname contains 'container' then /var/log/swift/container.log
if $programname contains 'object' then /var/log/swift/object.log
if $programname contains 'proxy' then /var/log/swift/proxy.log

# Uncomment the following to have specific log via program name.
#if $programname == 'swift' then /var/log/swift/swift.log
#if $programname == 'account-server' then /var/log/swift/account-server.log
#if $programname == 'account-replicator' then /var/log/swift/account-replicator.log
#if $programname == 'account-auditor' then /var/log/swift/account-auditor.log
#if $programname == 'account-reaper' then /var/log/swift/account-reaper.log
#if $programname == 'container-server' then /var/log/swift/container-server.log
#if $programname == 'container-replicator' then /var/log/swift/container-replicator.log
#if $programname == 'container-updater' then /var/log/swift/container-updater.log
#if $programname == 'container-auditor' then /var/log/swift/container-auditor.log
#if $programname == 'container-sync' then /var/log/swift/container-sync.log
#if $programname == 'object-server' then /var/log/swift/object-server.log
#if $programname == 'object-replicator' then /var/log/swift/object-replicator.log
#if $programname == 'object-updater' then /var/log/swift/object-updater.log
#if $programname == 'object-auditor' then /var/log/swift/object-auditor.log

# Use the following to discard logs that don't match any of the above to avoid
# them filling up /var/log/messages.
local0.* ~
swift-2.7.1/etc/drive-audit.conf-sample0000664000567000056710000000216213024044354021134 0ustar  jenkinsjenkins00000000000000[drive-audit]
# device_dir = /srv/node
#
# You can specify default log routing here if you want:
# log_name = drive-audit
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# minutes = 60
# error_limit = 1
# recon_cache_path = /var/cache/swift
# unmount_failed_device = True
#
# By default, drive-audit logs only to syslog. Setting this option True
# makes drive-audit log to console in addition to syslog.
# log_to_console = False
#
# Location of the log file with globbing
# pattern to check against device errors.
# log_file_pattern = /var/log/kern.*[!.][!g][!z]
#
# Regular expression patterns to be used to locate
# device blocks with errors in the log file. Currently
# the default ones are as follows:
#   \berror\b.*\b(sd[a-z]{1,2}\d?)\b
#   \b(sd[a-z]{1,2}\d?)\b.*\berror\b
# One can overwrite the default ones by providing
# new expressions using the format below:
# Format: regex_pattern_X = regex_expression
# Example:
#   regex_pattern_1 = \berror\b.*\b(dm-[0-9]{1,2}\d?)\b
swift-2.7.1/etc/memcache.conf-sample0000664000567000056710000000227113024044352020460 0ustar  jenkinsjenkins00000000000000[memcache]
# You can use this single conf file instead of having memcache_servers set in
# several other conf files under [filter:cache] for example. You can specify
# multiple servers separated with commas, as in: 10.1.2.3:11211,10.1.2.4:11211
# (IPv6 addresses must follow rfc3986 section-3.2.2, i.e. [::1]:11211)
# memcache_servers = 127.0.0.1:11211
#
# Sets how memcache values are serialized and deserialized:
# 0 = older, insecure pickle serialization
# 1 = json serialization but pickles can still be read (still insecure)
# 2 = json serialization only (secure and the default)
# To avoid an instant full cache flush, existing installations should
# upgrade with 0, then set to 1 and reload, then after some time (24 hours)
# set to 2 and reload.
# In the future, the ability to use pickle serialization will be removed.
# memcache_serialization_support = 2
#
# Sets the maximum number of connections to each memcached server per worker
# memcache_max_connections = 2
#
# Timeout for connection
# connect_timeout = 0.3
# Timeout for pooled connection
# pool_timeout = 1.0
# number of servers to retry on failures getting a pooled connection
# tries = 3
# Timeout for read and writes
# io_timeout = 2.0
swift-2.7.1/etc/container-sync-realms.conf-sample0000664000567000056710000000365413024044354023143 0ustar  jenkinsjenkins00000000000000# [DEFAULT]
# The number of seconds between checking the modified time of this config file
# for changes and therefore reloading it.
# mtime_check_interval = 300


# [realm1]
# key = realm1key
# key2 = realm1key2
# cluster_clustername1 = https://host1/v1/
# cluster_clustername2 = https://host2/v1/
#
# [realm2]
# key = realm2key
# key2 = realm2key2
# cluster_clustername3 = https://host3/v1/
# cluster_clustername4 = https://host4/v1/


# Each section name is the name of a sync realm. A sync realm is a set of
# clusters that have agreed to allow container syncing with each other. Realm
# names will be considered case insensitive.
#
# The key is the overall cluster-to-cluster key used in combination with the
# external users' key that they set on their containers' X-Container-Sync-Key
# metadata header values. These keys will be used to sign each request the
# container sync daemon makes and used to validate each incoming container sync
# request.
#
# The key2 is optional and is an additional key incoming requests will be
# checked against. This is so you can rotate keys if you wish; you move the
# existing key to key2 and make a new key value.
#
# Any values in the realm section whose names begin with cluster_ will indicate
# the name and endpoint of a cluster and will be used by external users in
# their containers' X-Container-Sync-To metadata header values with the format
# "realm_name/cluster_name/container_name". Realm and cluster names are
# considered case insensitive.
#
# The endpoint is what the container sync daemon will use when sending out
# requests to that cluster. Keep in mind this endpoint must be reachable by all
# container servers, since that is where the container sync daemon runs. Note
# that the endpoint ends with /v1/ and that the container sync daemon will then
# add the account/container/obj name after that.
#
# Distribute this container-sync-realms.conf file to all your proxy servers
# and container servers.
swift-2.7.1/etc/internal-client.conf-sample0000664000567000056710000000204613024044352022006 0ustar  jenkinsjenkins00000000000000[DEFAULT]
# swift_dir = /etc/swift
# user = swift
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
swift-2.7.1/etc/object-server.conf-sample0000664000567000056710000002777313024044354021510 0ustar  jenkinsjenkins00000000000000[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6000
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
# expiring_objects_container_divisor = 86400
# expiring_objects_account_name = expiring_objects
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.  NOTE: if servers_per_port is set, this setting is
# ignored.
# workers = auto
#
# Make object-server run this many worker processes per unique port of
# "local" ring devices across all storage policies.  This can help provide
# the isolation of threads_per_disk without the severe overhead.  The default
# value of 0 disables this feature.
# servers_per_port = 0
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes you'd like fallocate to
# reserve, whether there is space for the given file size or not.
# fallocate_reserve = 0
#
# Time to wait while attempting to connect to another backend node.
# conn_timeout = 0.5
# Time to wait while sending each chunk of data to another backend node.
# node_timeout = 3
# Time to wait while sending a container update on object update.
# container_update_timeout = 1.0
# Time to wait while receiving each chunk of data from a client or another
# backend node.
# client_timeout = 60
#
# network_chunk_size = 65536
# disk_chunk_size = 65536

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object
# You can override the default log routing for this app here:
# set log_name = object-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# max_upload_time = 86400
#
# slow is the minimum total amount of seconds an object PUT/DELETE request
# takes. If it is faster, the object server will sleep this amount of time minus
# the already passed transaction time.  This is only useful for simulating slow
# devices on storage nodes during testing and development.
# slow = 0
#
# Objects smaller than this are not evicted from the buffercache once read
# keep_cache_size = 5242880
#
# If true, objects for authenticated GET requests may be kept in buffer cache
# if small enough
# keep_cache_private = false
#
# on PUTs, sync data every n MB
# mb_per_sync = 512
#
# Comma separated list of headers that can be set in metadata on an object.
# This list is in addition to X-Object-Meta-* headers and cannot include
# Content-Type, etag, Content-Length, or deleted
# allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object
#
# auto_create_account_prefix = .
#
# A value of 0 means "don't use thread pools". A reasonable starting point is
# 4.
# threads_per_disk = 0
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server".
# replication_server = false
#
# Set to restrict the number of concurrent incoming SSYNC requests
# Set to 0 for unlimited
# Note that SSYNC requests are only used by the object reconstructor or the
# object replicator when configured to use ssync.
# replication_concurrency = 4
#
# Restricts incoming SSYNC requests to one per device,
# replication_concurrency above allowing. This can help control I/O to each
# device, but you may wish to set this to False to allow multiple SSYNC
# requests (up to the above replication_concurrency setting) per device.
# replication_one_per_device = True
#
# Number of seconds to wait for an existing replication device lock before
# giving up.
# replication_lock_timeout = 15
#
# These next two settings control when the SSYNC subrequest handler will
# abort an incoming SSYNC attempt. An abort will occur if there are at
# least threshold number of failures and the value of failures / successes
# exceeds the ratio. The defaults of 100 and 1.0 means that at least 100
# failures have to occur and there have to be more failures than successes for
# an abort to occur.
# replication_failure_threshold = 100
# replication_failure_ratio = 1.0
#
# Use splice() for zero-copy object GETs. This requires Linux kernel
# version 3.0 or greater. If you set "splice = yes" but the kernel
# does not support it, error messages will appear in the object server
# logs at startup, but your object servers should continue to function.
#
# splice = no

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
#recon_cache_path = /var/cache/swift
#recon_lock_path = /var/lock

[object-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# daemonize = on
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# concurrency = 1
# stats_interval = 300
#
# default is rsync, alternative is ssync
# sync_method = rsync
#
# max duration of a partition rsync
# rsync_timeout = 900
#
# bandwidth limit for rsync in kB/s. 0 means unlimited
# rsync_bwlimit = 0
#
# passed to rsync for io op timeout
# rsync_io_timeout = 30
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# NOTE: Objects that are already compressed (for example: .tar.gz, .mp3) might
# slow down the syncing process.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::object
#
# node_timeout = 
# max duration of an http request; this is for REPLICATE finalization calls and
# so should be longer than node_timeout
# http_timeout = 60
#
# attempts to kill all workers if nothing replicates for lockup_timeout seconds
# lockup_timeout = 1800
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# ring_check_interval = 15
# recon_cache_path = /var/cache/swift
#
# limits how long rsync error log lines are
# 0 means to log the entire line
# rsync_error_log_line_length = 0
#
# handoffs_first and handoff_delete are options for a special case such as a
# disk filling up in the cluster. These two options SHOULD NOT BE CHANGED,
# except in such extreme situations (e.g. disks are full or about to fill up;
# in any case, DO NOT let your drives fill up).
# handoffs_first is the flag to replicate handoffs prior to canonical
# partitions. It allows forcing the syncing and deleting of handoffs to happen
# quickly. If set to a True value (e.g. "True" or "1"), partitions
# that are not supposed to be on the node will be replicated first.
# handoffs_first = False
#
# handoff_delete is the number of replicas which are ensured in swift.
# If a number less than the number of replicas is set, the object-replicator
# may delete local handoffs even though not all replicas are ensured in the
# cluster. The object-replicator will remove local handoff partition directories
# after syncing a partition when the number of successful responses is greater
# than or equal to this number. By default (auto), handoff partitions will be
# removed when they have successfully replicated to all the canonical nodes.
# handoff_delete = auto

[object-reconstructor]
# You can override the default log routing for this app here (don't use set!):
# Unless otherwise noted, each setting below has the same meaning as described
# in the [object-replicator] section, however these settings apply to the EC
# reconstructor
#
# log_name = object-reconstructor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# daemonize = on
#
# Time in seconds to wait between reconstruction passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# concurrency = 1
# stats_interval = 300
# node_timeout = 10
# http_timeout = 60
# lockup_timeout = 1800
# reclaim_age = 604800
# ring_check_interval = 15
# recon_cache_path = /var/cache/swift
# handoffs_first = False

[object-updater]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-updater
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300
# concurrency = 1
# node_timeout = 
# slowdown will sleep that amount between objects
# slowdown = 0.01
#
# recon_cache_path = /var/cache/swift

[object-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Time in seconds to wait between auditor passes
# interval = 30
#
# You can set the disk chunk size that the auditor uses making it larger if
# you like for more efficient local auditing of larger objects
# disk_chunk_size = 65536
# files_per_second = 20
# concurrency = 1
# bytes_per_second = 10000000
# log_time = 3600
# zero_byte_files_per_second = 50
# recon_cache_path = /var/cache/swift

# Takes a comma separated list of ints. If set, the object auditor will
# increment a counter for every object whose size is <= to the given break
# points and report the result after a full scan.
# object_size_stats =

# The auditor will cleanup old rsync tempfiles after they are "old
# enough" to delete.  You can configure the time elapsed in seconds
# before rsync tempfiles will be unlinked, or the default value of
# "auto" try to use object-replicator's rsync_timeout + 900 and fallback
# to 86400 (1 day).
# rsync_tempfile_timeout = auto

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers, which should inherit from the
# Python standard profiler. Currently the supported values can be 'cProfile',
# 'eventlet.green.profile', etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with an unique value like: /var/log/swift/profile/object.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful, this option will enable profiler to dump data into the file with
# time stamp which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shutdown.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false
swift-2.7.1/etc/object-expirer.conf-sample0000664000567000056710000000755713024044354021656 0ustar  jenkinsjenkins00000000000000[DEFAULT]
# swift_dir = /etc/swift
# user = swift
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[object-expirer]
# interval = 300
# auto_create_account_prefix = .
# expiring_objects_account_name = expiring_objects
# report_interval = 300
# concurrency is the level of concurrency to use to do the work; this value
# must be set to at least 1
# concurrency = 1
# processes is how many parts to divide the work into, one part per process
#   that will be doing the work
# processes set to 0 means that a single process will be doing all the work
# processes can also be specified on the command line and will override the
#   config value
# processes = 0
# process is which of the parts a particular process will work on
# process can also be specified on the command line and will override the config
#   value
# process is "zero based", if you want to use 3 processes, you should run
#  processes with process set to 0, 1, and 2
# process = 0
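#
# As an illustration only (not a configuration option): the work can be split
# across three concurrently running expirer processes with invocations such as
# the following (flag names assume the standard swift-object-expirer command
# line; check --help on your installation):
#   swift-object-expirer /etc/swift/object-expirer.conf --processes 3 --process 0
#   swift-object-expirer /etc/swift/object-expirer.conf --processes 3 --process 1
#   swift-object-expirer /etc/swift/object-expirer.conf --processes 3 --process 2
#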
# The expirer will re-attempt expiring if the source object is not available
# up to reclaim_age seconds before it gives up and deletes the entry in the
# queue.
# reclaim_age = 604800
# recon_cache_path = /var/cache/swift

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging
# If not set, logging directives from [DEFAULT] without "access_" will be used
# access_log_name = swift
# access_log_facility = LOG_LOCAL0
# access_log_level = INFO
# access_log_address = /dev/log
#
# If set, access_log_udp_host will override access_log_address
# access_log_udp_host =
# access_log_udp_port = 514
#
# You can use log_statsd_* from [DEFAULT] or override them here:
# access_log_statsd_host =
# access_log_statsd_port = 8125
# access_log_statsd_default_sample_rate = 1.0
# access_log_statsd_sample_rate_factor = 1.0
# access_log_statsd_metric_prefix =
# access_log_headers = false
#
# If access_log_headers is True and access_log_headers_only is set only
# these headers are logged. Multiple headers can be defined as comma separated
# list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
# access_log_headers_only =
#
# By default, the X-Auth-Token is logged. To obscure the value,
# set reveal_sensitive_prefix to the number of characters to log.
# For example, if set to 12, only the first 12 characters of the
# token appear in the log. An unauthorized access of the log file
# won't allow unauthorized usage of the token. However, the first
# 12 or so characters is unique enough that you can trace/debug
# token usage. Set to 0 to suppress the token completely (replaced
# by '...' in the log).
# Note: reveal_sensitive_prefix will not affect the value
# logged with access_log_headers=True.
# reveal_sensitive_prefix = 16
#
# What HTTP methods are allowed for StatsD logging (comma-sep); request methods
# not in this list will have "BAD_METHOD" for the  portion of the metric.
# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
swift-2.7.1/etc/container-reconciler.conf-sample0000664000567000056710000000257413024044354023033 0ustar  jenkinsjenkins00000000000000[DEFAULT]
# swift_dir = /etc/swift
# user = swift
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[container-reconciler]
# The reconciler will re-attempt reconciliation if the source object is not
# available up to reclaim_age seconds before it gives up and deletes the entry
# in the queue.
# reclaim_age = 604800
# The cycle time of the daemon
# interval = 30
# Server errors from requests will be retried by default
# request_tries = 3

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
swift-2.7.1/etc/proxy-server.conf-sample0000664000567000056710000007532213024044354021414 0ustar  jenkinsjenkins00000000000000[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 8080
# bind_timeout = 30
# backlog = 4096
# swift_dir = /etc/swift
# user = swift

# Enables exposing configuration settings via HTTP GET /info.
# expose_info = true

# Key to use for admin calls that are HMAC signed.  Default is empty,
# which will disable admin calls to /info.
# admin_key = secret_admin_key
#
# Allows the ability to withhold sections from showing up in the public calls
# to /info.  You can withhold subsections by separating the dict level with a
# ".".  The following would cause the sections 'container_quotas' and 'tempurl'
# to not be listed, and the key max_failed_deletes would be removed from
# bulk_delete.  Default value is 'swift.valid_api_versions' which allows all
# registered features to be listed via HTTP GET /info except
# swift.valid_api_versions information
# disallowed_sections = swift.valid_api_versions, container_quotas, tempurl

# Use an integer to override the number of pre-forked processes that will
# accept connections.  Should default to the number of effective cpu
# cores in the system.  It's worth noting that individual workers will
# use many eventlet co-routines to service multiple concurrent requests.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# Set the following two lines to enable SSL. This is for testing only.
# cert_file = /etc/swift/proxy.crt
# key_file = /etc/swift/proxy.key
#
# expiring_objects_container_divisor = 86400
# expiring_objects_account_name = expiring_objects
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_headers = false
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# This optional suffix (default is empty) is appended to the swift transaction
# id, allowing one to easily figure out which cluster an X-Trans-Id belongs to.
# This is very useful when one is managing more than one swift cluster.
# trans_id_suffix =
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# Use a comma separated list of full url (http://foo.bar:1234,https://foo.bar)
# cors_allow_origin =
# strict_cors_mode = True
#
# client_timeout = 60
# eventlet_debug = false

[pipeline:main]
# This sample pipeline uses tempauth and is used for SAIO dev work and
# testing. See below for a pipeline using keystone.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

# The following pipeline shows keystone integration. Comment out the one
# above and uncomment this one. Additional steps for integrating keystone are
# covered further below in the filter sections for authtoken and keystoneauth.
#pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
# You can override the default log routing for this app here:
# set log_name = proxy-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_address = /dev/log
#
# log_handoffs = true
# recheck_account_existence = 60
# recheck_container_existence = 60
# object_chunk_size = 65536
# client_chunk_size = 65536
#
# How long the proxy server will wait on responses from the a/c/o servers.
# node_timeout = 10
#
# How long the proxy server will wait for an initial response and to read a
# chunk of data from the object servers while serving GET / HEAD requests.
# Timeouts from these requests can be recovered from so setting this to
# something lower than node_timeout would provide quicker error recovery
# while allowing for a longer timeout for non-recoverable requests (PUTs).
# Defaults to node_timeout, should be overridden if node_timeout is set to a
# high number to prevent client timeouts from firing before the proxy server
# has a chance to retry.
# recoverable_node_timeout = node_timeout
#
# conn_timeout = 0.5
#
# How long to wait for requests to finish after a quorum has been established.
# post_quorum_timeout = 0.5
#
# How long without an error before a node's error count is reset. This will
# also be how long before a node is reenabled after suppression is triggered.
# error_suppression_interval = 60
#
# How many errors can accumulate before a node is temporarily ignored.
# error_suppression_limit = 10
#
# If set to 'true' any authorized user may create and delete accounts; if
# 'false' no one, even authorized, can.
# allow_account_management = false
#
# Set object_post_as_copy = false to turn on fast posts where only the metadata
# changes are stored anew and the original data file is kept in place. This
# makes for quicker posts.
# object_post_as_copy = true
#
# If set to 'true' authorized accounts that do not yet exist within the Swift
# cluster will be automatically created.
# account_autocreate = false
#
# If set to a positive value, trying to create a container when the account
# already has at least this many containers will result in a 403 Forbidden.
# Note: This is a soft limit, meaning a user might exceed the cap for
# recheck_account_existence before the 403s kick in.
# max_containers_per_account = 0
#
# This is a comma separated list of account hashes that ignore the
# max_containers_per_account cap.
# max_containers_whitelist =
#
# Comma separated list of Host headers to which the proxy will deny requests.
# deny_host_headers =
#
# Prefix used when automatically creating accounts.
# auto_create_account_prefix = .
#
# Depth of the proxy put queue.
# put_queue_depth = 10
#
# Storage nodes can be chosen at random (shuffle), by using timing
# measurements (timing), or by using an explicit match (affinity).
# Using timing measurements may allow for lower overall latency, while
# using affinity allows for finer control. In both the timing and
# affinity cases, equally-sorting nodes are still randomly chosen to
# spread load.
# The valid values for sorting_method are "affinity", "shuffle", or "timing".
# sorting_method = shuffle
#
# If the "timing" sorting_method is used, the timings will only be valid for
# the number of seconds configured by timing_expiry.
# timing_expiry = 300
#
# By default on a GET/HEAD swift will connect to a storage node one at a time
# in a single thread. There is some intelligence in the order they are hit,
# however. If you turn on concurrent_gets below, then replica count threads
# will be used. With the addition of the concurrency_timeout option, this
# allows swift to send out GET/HEAD requests to the storage nodes concurrently
# and answer with the first to respond. With an EC policy the parameter only
# affects HEAD requests.
# concurrent_gets = off
#
# This parameter controls how long to wait before firing off the next
# concurrent_get thread. A value of 0 would be fully concurrent, any other
# number will stagger the firing of the threads. This number should be
# between 0 and node_timeout. The default is what ever you set for the
# conn_timeout parameter.
# concurrency_timeout = 0.5
#
# Set to the number of nodes to contact for a normal request. You can use
# '* replicas' at the end to have it use the number given times the number of
# replicas for the ring being used for the request.
# request_node_count = 2 * replicas
#
# Which backend servers to prefer on reads. Format is r<N> for region
# N or r<N>z<M> for region N, zone M. The value after the equals is
# the priority; lower numbers are higher priority.
#
# Example: first read from region 1 zone 1, then region 1 zone 2, then
# anything in region 2, then everything else:
# read_affinity = r1z1=100, r1z2=200, r2=300
# Default is empty, meaning no preference.
# read_affinity =
#
# Which backend servers to prefer on writes. Format is r<N> for region
# N or r<N>z<M> for region N, zone M. If this is set, then when
# handling an object PUT request, some number (see setting
# write_affinity_node_count) of local backend servers will be tried
# before any nonlocal ones.
#
# Example: try to write to regions 1 and 2 before writing to any other
# nodes:
# write_affinity = r1, r2
# Default is empty, meaning no preference.
# write_affinity =
#
# The number of local (as governed by the write_affinity setting)
# nodes to attempt to contact first, before any non-local ones. You
# can use '* replicas' at the end to have it use the number given
# times the number of replicas for the ring being used for the
# request.
# write_affinity_node_count = 2 * replicas
#
# These are the headers whose values will only be shown to swift_owners. The
# exact definition of a swift_owner is up to the auth system in use, but
# usually indicates administrative responsibilities.
# swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control

[filter:tempauth]
use = egg:swift#tempauth
# You can override the default log routing for this filter here:
# set log_name = tempauth
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# The reseller prefix will verify a token begins with this prefix before even
# attempting to validate it. Also, with authorization, only Swift storage
# accounts with this prefix will be authorized by this middleware. Useful if
# multiple auth systems are in use for one Swift cluster.
# The reseller_prefix may contain a comma separated list of items. The first
# item is used for the token as mentioned above. If second and subsequent
# items exist, the middleware will handle authorization for an account with
# that prefix. For example, for prefixes "AUTH, SERVICE", a path of
# /v1/SERVICE_account is handled the same as /v1/AUTH_account. If an empty
# (blank) reseller prefix is required, it must be first in the list. Two
# single quote characters indicate an empty (blank) reseller prefix.
# reseller_prefix = AUTH

#
# The require_group parameter names a group that must be presented by
# either X-Auth-Token or X-Service-Token. Usually this parameter is
# used only with multiple reseller prefixes (e.g., SERVICE_require_group=blah).
# By default, no group is needed. Do not use .admin.
# require_group =

# The auth prefix will cause requests beginning with this prefix to be routed
# to the auth subsystem, for granting tokens, etc.
# auth_prefix = /auth/
# token_life = 86400
#
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra security,
# you can set this to false.
# allow_overrides = true
#
# This specifies what scheme to return with storage urls:
# http, https, or default (chooses based on what the server is running as)
# This can be useful with an SSL load balancer in front of a non-SSL server.
# storage_url_scheme = default
#
# Lastly, you need to list all the accounts/users you want here. The format is:
#   user_<account>_<user> = <key> [group] [group] [...] [storage_url]
# or if you want underscores in <account> or <user>, you can base64 encode them
# (with no equal signs) and use this format:
#   user64_<account_b64>_<user_b64> = <key> [group] [group] [...] [storage_url]
# There are special groups of:
#   .reseller_admin = can do anything to any account for this auth
#   .admin = can do anything within the account
# If neither of these groups are specified, the user can only access containers
# that have been explicitly allowed for them by a .admin or .reseller_admin.
# The trailing optional storage_url allows you to specify an alternate url to
# hand back to the user upon authentication. If not specified, this defaults to
# $HOST/v1/<reseller_prefix>_<account> where $HOST will do its best to resolve
# to what the requester would need to use to reach this host.
# Here are example entries, required for running the tests:
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service

# To enable Keystone authentication you need to have the auth token
# middleware first to be configured. Here is an example below, please
# refer to the keystone's documentation for details about the
# different settings.
#
# You'll also need to have the keystoneauth middleware enabled and have it in
# your main pipeline, as shown in the sample pipeline at the top of this file.
#
# The following parameters are known to work with keystonemiddleware v2.3.0
# (above v2.0.0), but checking the latest information in the wiki page[1]
# is recommended.
# 1. http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html#configuration
#
# [filter:authtoken]
# paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# auth_uri = http://keystonehost:5000
# auth_url = http://keystonehost:35357
# auth_plugin = password
# project_domain_id = default
# user_domain_id = default
# project_name = service
# username = swift
# password = password
#
# delay_auth_decision defaults to False, but leaving it as false will
# prevent other auth systems, staticweb, tempurl, formpost, and ACLs from
# working. This value must be explicitly set to True.
# delay_auth_decision = False
#
# cache = swift.cache
# include_service_catalog = False
#
# [filter:keystoneauth]
# use = egg:swift#keystoneauth
# The reseller_prefix option lists account namespaces that this middleware is
# responsible for. The prefix is placed before the Keystone project id.
# For example, for project 12345678, and prefix AUTH, the account is
# named AUTH_12345678 (i.e., path is /v1/AUTH_12345678/...).
# Several prefixes are allowed by specifying a comma-separated list
# as in: "reseller_prefix = AUTH, SERVICE". The empty string indicates a
# single blank/empty prefix. If an empty prefix is required in a list of
# prefixes, a value of '' (two single quote characters) indicates a
# blank/empty prefix. Except for the blank/empty prefix, an underscore ('_')
# character is appended to the value unless already present.
# reseller_prefix = AUTH
#
# The user must have at least one role named by operator_roles on a
# project in order to create, delete and modify containers and objects
# and to set and read privileged headers such as ACLs.
# If there are several reseller prefix items, you can prefix the
# parameter so it applies only to those accounts (for example
# the parameter SERVICE_operator_roles applies to the /v1/SERVICE_
# path). If you omit the prefix, the option applies to all reseller
# prefix items. For the blank/empty prefix, prefix with '' (do not put
# underscore after the two single quote characters).
# operator_roles = admin, swiftoperator
#
# The reseller admin role has the ability to create and delete accounts
# reseller_admin_role = ResellerAdmin
#
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra security,
# you can set this to false.
# allow_overrides = true
#
# If the service_roles parameter is present, an X-Service-Token must be
# present in the request that when validated, grants at least one role listed
# in the parameter. The X-Service-Token may be scoped to any project.
# If there are several reseller prefix items, you can prefix the
# parameter so it applies only to those accounts (for example
# the parameter SERVICE_service_roles applies to the /v1/SERVICE_
# path). If you omit the prefix, the option applies to all reseller
# prefix items. For the blank/empty prefix, prefix with '' (do not put
# underscore after the two single quote characters).
# By default, no service_roles are required.
# service_roles =
#
# For backwards compatibility, keystoneauth will match names in cross-tenant
# access control lists (ACLs) when both the requesting user and the tenant
# are in the default domain, i.e. the domain to which existing tenants are
# migrated. The default_domain_id value configured here should be the same as
# the value used during migration of tenants to keystone domains.
# default_domain_id = default
#
# For a new installation, or an installation in which keystone projects may
# move between domains, you should disable backwards compatible name matching
# in ACLs by setting allow_names_in_acls to false:
# allow_names_in_acls = true

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
# This facility may be used to temporarily remove a Swift node from a load
# balancer pool during maintenance or upgrade (remove the file to allow the
# node back into the load balancer pool).
# disable_path =

[filter:cache]
use = egg:swift#memcache
# You can override the default log routing for this filter here:
# set log_name = cache
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# If not set here, the value for memcache_servers will be read from
# memcache.conf (see memcache.conf-sample) or lacking that file, it will
# default to the value below. You can specify multiple servers separated with
# commas, as in: 10.1.2.3:11211,10.1.2.4:11211 (IPv6 addresses must
# follow rfc3986 section-3.2.2, i.e. [::1]:11211)
# memcache_servers = 127.0.0.1:11211
#
# Sets how memcache values are serialized and deserialized:
# 0 = older, insecure pickle serialization
# 1 = json serialization but pickles can still be read (still insecure)
# 2 = json serialization only (secure and the default)
# If not set here, the value for memcache_serialization_support will be read
# from /etc/swift/memcache.conf (see memcache.conf-sample).
# To avoid an instant full cache flush, existing installations should
# upgrade with 0, then set to 1 and reload, then after some time (24 hours)
# set to 2 and reload.
# In the future, the ability to use pickle serialization will be removed.
# memcache_serialization_support = 2
#
# Sets the maximum number of connections to each memcached server per worker
# memcache_max_connections = 2
#
# More options documented in memcache.conf-sample

[filter:ratelimit]
use = egg:swift#ratelimit
# You can override the default log routing for this filter here:
# set log_name = ratelimit
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# clock_accuracy should represent how accurate the proxy servers' system
# clocks are relative to each other. 1000 means that all the proxies' clocks
# are accurate to each other within 1 millisecond. No ratelimit should be
# higher than the clock accuracy.
# clock_accuracy = 1000
#
# max_sleep_time_seconds = 60
#
# log_sleep_time_seconds of 0 means disabled
# log_sleep_time_seconds = 0
#
# Allows for slow rates (e.g. running up to 5 seconds behind) to catch up.
# rate_buffer_seconds = 5
#
# account_ratelimit of 0 means disabled
# account_ratelimit = 0

# DEPRECATED- these will continue to work but will be replaced
# by the X-Account-Sysmeta-Global-Write-Ratelimit flag.
# Please see ratelimiting docs for details.
# these are comma separated lists of account names
# account_whitelist = a,b
# account_blacklist = c,d

# with container_ratelimit_x = r
# for containers of size x, limit write requests per second to r.  The
# container rate will be linearly interpolated from the values given. With
# the values below, a container of size 5 will get a rate of 75 (see the
# sketch after the example values).
# container_ratelimit_0 = 100
# container_ratelimit_10 = 50
# container_ratelimit_50 = 20
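#
# A minimal sketch (illustrative only, not Swift's actual implementation) of
# how such a linearly interpolated limit could be computed from the example
# values above:
#
#   def interpolated_rate(size, points):
#       # points: sorted list of (container_size, max_writes_per_sec)
#       for (s1, r1), (s2, r2) in zip(points, points[1:]):
#           if s1 <= size <= s2:
#               return r1 + (r2 - r1) * (size - s1) / float(s2 - s1)
#       return points[-1][1]
#
#   interpolated_rate(5, [(0, 100), (10, 50), (50, 20)])  # -> 75.0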

# Similarly to the above container-level write limits, the following will limit
# container GET (listing) requests.
# container_listing_ratelimit_0 = 100
# container_listing_ratelimit_10 = 50
# container_listing_ratelimit_50 = 20

[filter:domain_remap]
use = egg:swift#domain_remap
# You can override the default log routing for this filter here:
# set log_name = domain_remap
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# storage_domain = example.com
# path_root = v1

# Browsers can convert a host header to lowercase, so check that the reseller
# prefix on the account is the correct case. This is done by comparing the
# items in the reseller_prefixes config option to the found prefix. If they
# match except for case, the item from reseller_prefixes will be used
# instead of the found reseller prefix. When none match, the default reseller
# prefix is used. When no default reseller prefix is configured, any request
# with an account prefix not in that list will be ignored by this middleware.
# reseller_prefixes = AUTH
# default_reseller_prefix =
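#
# A simplified sketch (an illustration, not the middleware's exact code) of
# the case-insensitive prefix correction described above:
#
#   def normalize_prefix(found_prefix, reseller_prefixes, default_prefix=None):
#       # return the canonical prefix when it matches ignoring case,
#       # otherwise fall back to the configured default (may be None)
#       for prefix in reseller_prefixes:
#           if found_prefix.lower() == prefix.lower():
#               return prefix
#       return default_prefix
#
#   normalize_prefix('auth', ['AUTH'])  # -> 'AUTH'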

[filter:catch_errors]
use = egg:swift#catch_errors
# You can override the default log routing for this filter here:
# set log_name = catch_errors
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log

[filter:cname_lookup]
# Note: this middleware requires python-dnspython
use = egg:swift#cname_lookup
# You can override the default log routing for this filter here:
# set log_name = cname_lookup
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# Specify the storage_domain that matches your cloud; multiple domains
# can be specified, separated by a comma.
# storage_domain = example.com
#
# lookup_depth = 1

# Note: Put staticweb just after your auth filter(s) in the pipeline
[filter:staticweb]
use = egg:swift#staticweb
# You can override the default log routing for this filter here:
# set log_name = staticweb
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log

# Note: Put tempurl before dlo, slo and your auth filter(s) in the pipeline
[filter:tempurl]
use = egg:swift#tempurl
# The methods allowed with Temp URLs.
# methods = GET HEAD PUT POST DELETE
#
# The headers to remove from incoming requests. Simply a whitespace-delimited
# list of header names; names can optionally end with '*' to indicate a
# prefix match. incoming_allow_headers is a list of exceptions to these
# removals.
# incoming_remove_headers = x-timestamp
#
# The headers allowed as exceptions to incoming_remove_headers. Simply a
# whitespace-delimited list of header names; names can optionally end with
# '*' to indicate a prefix match.
# incoming_allow_headers =
#
# The headers to remove from outgoing responses. Simply a whitespace-delimited
# list of header names; names can optionally end with '*' to indicate a
# prefix match. outgoing_allow_headers is a list of exceptions to these
# removals.
# outgoing_remove_headers = x-object-meta-*
#
# The headers allowed as exceptions to outgoing_remove_headers. Simply a
# whitespace-delimited list of header names; names can optionally end with
# '*' to indicate a prefix match.
# outgoing_allow_headers = x-object-meta-public-*
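#
# A rough sketch (illustrative only) of the '*' prefix matching used by the
# remove/allow header lists above:
#
#   def header_matches(header, patterns):
#       header = header.lower()
#       for pattern in patterns:
#           pattern = pattern.lower()
#           if pattern.endswith('*'):
#               if header.startswith(pattern[:-1]):
#                   return True
#           elif header == pattern:
#               return True
#       return False
#
#   header_matches('X-Object-Meta-Color', ['x-object-meta-*'])  # -> True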

# Note: Put formpost just before your auth filter(s) in the pipeline
[filter:formpost]
use = egg:swift#formpost

# Note: Just needs to be placed before the proxy-server in the pipeline.
[filter:name_check]
use = egg:swift#name_check
# forbidden_chars = '"`<>
# maximum_length = 255
# forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$
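#
# A minimal sketch (not the middleware's actual code) of how a request path
# could be checked against the options above:
#
#   import re
#
#   def name_allowed(path, forbidden_chars='\'"`<>', maximum_length=255,
#                    forbidden_regexp=r'/\./|/\.\./|/\.$|/\.\.$'):
#       if any(c in path for c in forbidden_chars):
#           return False
#       if len(path) > maximum_length:
#           return False
#       return not re.search(forbidden_regexp, path)
#
#   name_allowed('/v1/AUTH_test/c/o')       # -> True
#   name_allowed('/v1/AUTH_test/c/../o')    # -> False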

[filter:list-endpoints]
use = egg:swift#list_endpoints
# list_endpoints_path = /endpoints/

[filter:proxy-logging]
use = egg:swift#proxy_logging
# If not set, logging directives from [DEFAULT] without "access_" will be used
# access_log_name = swift
# access_log_facility = LOG_LOCAL0
# access_log_level = INFO
# access_log_address = /dev/log
#
# If set, access_log_udp_host will override access_log_address
# access_log_udp_host =
# access_log_udp_port = 514
#
# You can use log_statsd_* from [DEFAULT] or override them here:
# access_log_statsd_host =
# access_log_statsd_port = 8125
# access_log_statsd_default_sample_rate = 1.0
# access_log_statsd_sample_rate_factor = 1.0
# access_log_statsd_metric_prefix =
# access_log_headers = false
#
# If access_log_headers is True and access_log_headers_only is set, only
# these headers are logged. Multiple headers can be defined as a comma
# separated list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
# access_log_headers_only =
#
# By default, the X-Auth-Token is logged. To obscure the value,
# set reveal_sensitive_prefix to the number of characters to log.
# For example, if set to 12, only the first 12 characters of the
# token appear in the log. An unauthorized access of the log file
# won't allow unauthorized usage of the token. However, the first
# 12 or so characters are unique enough that you can trace/debug
# token usage. Set to 0 to suppress the token completely (replaced
# by '...' in the log).
# Note: reveal_sensitive_prefix will not affect the value
# logged with access_log_headers=True.
# reveal_sensitive_prefix = 16
#
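# A small sketch (illustrative, not the exact logging code) of the obscuring
# behaviour described above:
#
#   def obscure_token(token, reveal_sensitive_prefix=16):
#       if not token:
#           return token
#       return token[:reveal_sensitive_prefix] + '...'
#
#   obscure_token('AUTH_tk0123456789abcdef0123456789abcdef', 12)
#   # -> 'AUTH_tk01234...'
#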
# What HTTP methods are allowed for StatsD logging (comma-sep); request methods
# not in this list will have "BAD_METHOD" for the method portion of the metric.
# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
#
# Note: The double proxy-logging in the pipeline is not a mistake. The
# left-most proxy-logging is there to log requests that were handled in
# middleware and never made it through to the right-most middleware (and
# proxy server). Double logging is prevented for normal requests. See
# proxy-logging docs.

# Note: Put before both ratelimit and auth in the pipeline.
[filter:bulk]
use = egg:swift#bulk
# max_containers_per_extraction = 10000
# max_failed_extractions = 1000
# max_deletes_per_request = 10000
# max_failed_deletes = 1000

# In order to keep a connection active during a potentially long bulk request,
# Swift may return whitespace prepended to the actual response body. This
# whitespace will be yielded no more than every yield_frequency seconds.
# yield_frequency = 10

# Note: The following parameter is used during a bulk delete of objects and
# their container. Such a delete can frequently fail because it is very likely
# that all replicated objects have not yet been deleted by the time the
# middleware gets a successful response. The number of retries can be
# configured here; the wait between each retry will be 1.5**retry seconds.

# delete_container_retry_count = 0
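
# For example (a sketch, not the middleware's actual loop; it assumes the
# retry counter starts at 1), with delete_container_retry_count = 3 the
# waits between attempts would be:
#
#   retry_count = 3
#   waits = [1.5 ** retry for retry in range(1, retry_count + 1)]
#   # -> [1.5, 2.25, 3.375] seconds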

# Note: Put after auth and staticweb in the pipeline.
[filter:slo]
use = egg:swift#slo
# max_manifest_segments = 1000
# max_manifest_size = 2097152
#
# Rate limiting applies only to segments smaller than this size (bytes).
# rate_limit_under_size = 1048576
#
# Start rate-limiting SLO segment serving after the Nth small segment of a
# segmented object.
# rate_limit_after_segment = 10
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second. 0 means no rate-limiting.
# rate_limit_segments_per_sec = 1
#
# Time limit on GET requests (seconds)
# max_get_time = 86400
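#
# A rough sketch (illustrative only, not the middleware's actual code) of
# when serving a segment would be throttled under the rate_limit_* options
# above:
#
#   def should_ratelimit(segment_index, segment_size,
#                        rate_limit_after_segment=10,
#                        rate_limit_under_size=1048576):
#       # only small segments count, and only after the first N of them
#       return (segment_size < rate_limit_under_size and
#               segment_index > rate_limit_after_segment)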

# Note: Put after auth and staticweb in the pipeline.
# If you don't put it in the pipeline, it will be inserted for you.
[filter:dlo]
use = egg:swift#dlo
# Start rate-limiting DLO segment serving after the Nth segment of a
# segmented object.
# rate_limit_after_segment = 10
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second. 0 means no rate-limiting.
# rate_limit_segments_per_sec = 1
#
# Time limit on GET requests (seconds)
# max_get_time = 86400

# Note: Put after auth in the pipeline.
[filter:container-quotas]
use = egg:swift#container_quotas

# Note: Put after auth in the pipeline.
[filter:account-quotas]
use = egg:swift#account_quotas

[filter:gatekeeper]
use = egg:swift#gatekeeper
# Set this to false if you want to allow clients to set arbitrary X-Timestamps
# on uploaded objects. This may be used to preserve timestamps when migrating
# from a previous storage system, but risks allowing users to upload
# difficult-to-delete data.
# shunt_inbound_x_timestamp = true
#
# You can override the default log routing for this filter here:
# set log_name = gatekeeper
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log

[filter:container_sync]
use = egg:swift#container_sync
# Set this to false if you want to disallow any full url values to be set for
# any new X-Container-Sync-To headers. This will keep any new full urls from
# coming in, but won't change any existing values already in the cluster.
# Updating those will have to be done manually, as the true realm
# endpoint cannot always be guessed.
# allow_full_urls = true
# Set this to specify this cluster's //realm/cluster as "current" in /info
# current = //REALM/CLUSTER

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after catch_errors, gatekeeper and healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers, which should inherit from the
# Python standard profiler. Currently supported values include 'cProfile',
# 'eventlet.green.profile', etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/proxy.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# The profile data will be dumped to local disk at this interval, using the
# naming rule above.
# dump_interval = 5.0
#
# Be careful: this option will enable the profiler to dump data into files
# with timestamps, which means lots of files will pile up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false

# Note: Put after slo, dlo in the pipeline.
# If you don't put it in the pipeline, it will be inserted automatically.
[filter:versioned_writes]
use = egg:swift#versioned_writes
# Enables using versioned writes middleware and exposing configuration
# settings via HTTP GET /info.
# WARNING: Setting this option bypasses the "allow_versions" option
# in the container configuration file, which will eventually be
# deprecated. See documentation for more details.
# allow_versioned_writes = false
swift-2.7.1/etc/swift.conf-sample0000664000567000056710000001660113024044354020056 0ustar  jenkinsjenkins00000000000000[swift-hash]

# swift_hash_path_suffix and swift_hash_path_prefix are used as part of the
# hashing algorithm when determining data placement in the cluster.
# These values should remain secret and MUST NOT change once a cluster has
# been deployed. A simplified sketch of how they feed the placement hash
# follows the example values below.
# Use only printable chars (python -c "import string; print(string.printable)")

swift_hash_path_suffix = changeme
swift_hash_path_prefix = changeme
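
# For illustration only, a simplified sketch of how these values feed the
# placement hash (see swift.common.utils.hash_path for the real
# implementation):
#
#   from hashlib import md5
#
#   def hash_path(account, container=None, obj=None,
#                 prefix='changeme', suffix='changeme'):
#       path = '/' + '/'.join(p for p in (account, container, obj) if p)
#       return md5((prefix + path + suffix).encode('utf8')).hexdigest()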

# storage policies are defined here and determine various characteristics
# about how objects are stored and treated.  Policies are specified by name on
# a per container basis.  Names are case-insensitive.  The policy index is
# specified in the section header and is used internally.  The policy with
# index 0 is always used for legacy containers and can be given a name for use
# in metadata however the ring file name will always be 'object.ring.gz' for
# backwards compatibility.  If no policies are defined a policy with index 0
# will be automatically created for backwards compatibility and given the name
# Policy-0.  A default policy is used when creating new containers when no
# policy is specified in the request.  If no other policies are defined the
# policy with index 0 will be declared the default.  If multiple policies are
# defined you must define a policy with index 0 and you must specify a
# default.  It is recommended you always define a section for
# storage-policy:0. Aliases are not required when defining a storage policy.
#
# A 'policy_type' argument is also supported but is not mandatory.  Default
# policy type 'replication' is used when 'policy_type' is unspecified.
[storage-policy:0]
name = Policy-0
default = yes
#policy_type = replication
aliases = yellow, orange

# the following section would declare a policy called 'silver', the number of
# replicas will be determined by how the ring is built.  In this example the
# 'silver' policy could have a lower or higher # of replicas than the
# 'Policy-0' policy above.  The ring filename will be 'object-1.ring.gz'.  You
# may only specify one storage policy section as the default.  If you changed
# this section to specify 'silver' as the default, when a client creates a new
# container without a policy specified, it will get the 'silver' policy because
# this config has specified it as the default.  However if a legacy container
# (one created with a pre-policy version of swift) is accessed, it is known
# implicitly to be assigned to the policy with index 0 as opposed to the
# current default. Note that even without specifying any aliases, a policy
# always has at least the default name stored in aliases because this field is
# used to contain all human readable names for a storage policy.
#
#[storage-policy:1]
#name = silver
#policy_type = replication

# The following declares a storage policy of type 'erasure_coding' which uses
# Erasure Coding for data reliability. Please refer to Swift documentation for
# details on how the 'erasure_coding' storage policy is implemented.
#
# Swift uses PyECLib, a Python Erasure coding API library, for encode/decode
# operations.  Please refer to Swift documentation for details on how to
# install PyECLib.
#
# When defining an EC policy, 'policy_type' needs to be 'erasure_coding' and
# EC configuration parameters 'ec_type', 'ec_num_data_fragments' and
# 'ec_num_parity_fragments' must be specified.  'ec_type' is chosen from the
# list of EC backends supported by PyECLib.  The ring configured for the
# storage policy must have its "replica" count configured to
# 'ec_num_data_fragments' + 'ec_num_parity_fragments' - this requirement is
# validated when services start.  'ec_object_segment_size' is the amount of
# data that will be buffered up before feeding a segment into the
# encoder/decoder.  More information about these configuration options and
# supported `ec_type` schemes is available in the Swift documentation.  Please
# refer to Swift documentation for details on how to configure EC policies.
#
# The example 'deepfreeze10-4' policy defined below is a _sample_
# configuration with an alias of 'df10-4' as well as 10 'data' and 4 'parity'
# fragments. 'ec_type' defines the Erasure Coding scheme.
# 'liberasurecode_rs_vand' (Reed-Solomon Vandermonde) is used as an example
# below.
#
#[storage-policy:2]
#name = deepfreeze10-4
#aliases = df10-4
#policy_type = erasure_coding
#ec_type = liberasurecode_rs_vand
#ec_num_data_fragments = 10
#ec_num_parity_fragments = 4
#ec_object_segment_size = 1048576
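#
# As an illustrative check (not something this file performs itself), the
# object ring built for the example policy above would need its replica
# count to equal the sum of the fragment counts:
#
#   ec_num_data_fragments = 10
#   ec_num_parity_fragments = 4
#   required_ring_replicas = ec_num_data_fragments + ec_num_parity_fragments
#   # -> 14; this requirement is validated when services start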


# The swift-constraints section sets the basic constraints on data
# saved in the swift cluster. These constraints are automatically
# published by the proxy server in responses to /info requests.

[swift-constraints]

# max_file_size is the largest "normal" object that can be saved in
# the cluster. This is also the limit on the size of each segment of
# a "large" object when using the large object manifest support.
# This value is set in bytes. Setting it to lower than 1MiB will cause
# some tests to fail. It is STRONGLY recommended to leave this value at
# the default (5 * 2**30 + 2).

#max_file_size = 5368709122


# max_meta_name_length is the max number of bytes in the utf8 encoding
# of the name portion of a metadata header.

#max_meta_name_length = 128


# max_meta_value_length is the max number of bytes in the utf8 encoding
# of a metadata value

#max_meta_value_length = 256


# max_meta_count is the max number of metadata keys that can be stored
# on a single account, container, or object

#max_meta_count = 90


# max_meta_overall_size is the max number of bytes in the utf8 encoding
# of the metadata (keys + values)

#max_meta_overall_size = 4096

# max_header_size is the max number of bytes in the utf8 encoding of each
# header. Using 8192 as default because eventlet use 8192 as max size of
# header line. This value may need to be increased when using identity
# v3 API tokens including more than 7 catalog entries.
# See also include_service_catalog in proxy-server.conf-sample
# (documented in overview_auth.rst)

#max_header_size = 8192


# By default the maximum number of allowed headers depends on the number of max
# allowed metadata settings plus a default value of 32 for regular http
# headers.  If for some reason this is not enough (custom middleware for
# example) it can be increased with the extra_header_count constraint.

#extra_header_count = 0


# max_object_name_length is the max number of bytes in the utf8 encoding
# of an object name

#max_object_name_length = 1024


# container_listing_limit is the default (and max) number of items
# returned for a container listing request

#container_listing_limit = 10000


# account_listing_limit is the default (and max) number of items returned
# for an account listing request
#account_listing_limit = 10000


# max_account_name_length is the max number of bytes in the utf8 encoding
# of an account name

#max_account_name_length = 256


# max_container_name_length is the max number of bytes in the utf8 encoding
# of a container name

#max_container_name_length = 256


# By default all REST API calls should use "v1" or "v1.0" as the version string,
# for example "/v1/account". This can be manually overridden to make this
# backward-compatible, in case a different version string has been used before.
# Use a comma-separated list in case of multiple allowed versions, for example
# valid_api_versions = v0,v1,v2
# This is only enforced for account, container and object requests. The allowed
# api versions are by default excluded from /info.

# valid_api_versions = v1,v1.0
swift-2.7.1/etc/container-server.conf-sample0000664000567000056710000001651013024044354022207 0ustar  jenkinsjenkins00000000000000[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6001
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# This is a comma separated list of hosts allowed in the X-Container-Sync-To
# field for containers. This is the old style of container sync. It is
# strongly recommended to use the new style of a separate
# container-sync-realms.conf -- see container-sync-realms.conf-sample
# allowed_sync_hosts = 127.0.0.1
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space overhead, you can turn this on to
# preallocate disk space for SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes you'd like fallocate to
# reserve, whether there is space for the given file size or not.
# fallocate_reserve = 0

[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container
# You can override the default log routing for this app here:
# set log_name = container-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# node_timeout = 3
# conn_timeout = 0.5
# allow_versions = false
# auto_create_account_prefix = .
#
# Configure whether this server handles replication verbs, non-replication
# verbs, or both.
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server".
# replication_server = false

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
#recon_cache_path = /var/cache/swift

[container-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Allow rsync to compress data which is transmitted to the destination node
# during sync. However, this is applicable only when the destination node is
# in a different region than the local one.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::container
#
# recon_cache_path = /var/cache/swift

[container-updater]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-updater
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300
# concurrency = 4
# node_timeout = 3
# conn_timeout = 0.5
#
# slowdown will sleep that amount between containers
# slowdown = 0.01
#
# Seconds to suppress updating an account that has generated an error
# account_suppression_time = 60
#
# recon_cache_path = /var/cache/swift

[container-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each container at most once per interval
# interval = 1800
#
# containers_per_second = 200
# recon_cache_path = /var/cache/swift

[container-sync]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-sync
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# If you need to use an HTTP Proxy, set it here; defaults to no proxy.
# You can also set this to a comma separated list of HTTP Proxies and they will
# be randomly used (simple load balancing).
# sync_proxy = http://10.1.1.1:8888,http://10.1.1.2:8888
#
# Will sync each container at most once per interval
# interval = 300
#
# Maximum amount of time to spend syncing each container per pass
# container_time = 60
#
# Maximum amount of time in seconds for the connection attempt
# conn_timeout = 5
# Server errors from requests will be retried by default
# request_tries = 3
#
# Internal client config file path
# internal_client_conf_path = /etc/swift/internal-client.conf

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers, which should inherit from the
# Python standard profiler. Currently supported values include 'cProfile',
# 'eventlet.green.profile', etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/container.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# The profile data will be dumped to local disk at this interval, using the
# naming rule above.
# dump_interval = 5.0
#
# Be careful: this option will enable the profiler to dump data into files
# with timestamps, which means lots of files will pile up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false
swift-2.7.1/etc/account-server.conf-sample0000664000567000056710000001475613024044354021673 0ustar  jenkinsjenkins00000000000000[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6002
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space overhead, you can turn this on to
# preallocate disk space for SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes you'd like fallocate to
# reserve, whether there is space for the given file size or not.
# fallocate_reserve = 0

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account
# You can override the default log routing for this app here:
# set log_name = account-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# auto_create_account_prefix = .
#
# Configure whether this server handles replication verbs, non-replication
# verbs, or both.
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server". Default is empty.
# replication_server = false

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
# recon_cache_path = /var/cache/swift

[account-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one account. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Allow rsync to compress data which is transmitted to the destination node
# during sync. However, this is applicable only when the destination node is
# in a different region than the local one.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::account
#
# recon_cache_path = /var/cache/swift

[account-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each account at most once per interval
# interval = 1800
#
# accounts_per_second = 200
# recon_cache_path = /var/cache/swift

[account-reaper]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-reaper
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# concurrency = 25
# interval = 3600
# node_timeout = 10
# conn_timeout = 0.5
#
# Normally, the reaper begins deleting account information for deleted accounts
# immediately; you can set this to delay its work however. The value is in
# seconds; 2592000 = 30 days for example.
# delay_reaping = 0
#
# If the account fails to be reaped due to a persistent error, the
# account reaper will log a message such as:
#     Account <name> has not been reaped since <time>
# You can search logs for this message if space is not being reclaimed
# after you delete account(s).
# Default is 2592000 seconds (30 days). This is in addition to any time
# requested by delay_reaping.
# reap_warn_after = 2592000

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers, which should inherit from the
# Python standard profiler. Currently supported values include 'cProfile',
# 'eventlet.green.profile', etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/account.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# The profile data will be dumped to local disk at this interval, using the
# naming rule above.
# dump_interval = 5.0
#
# Be careful: this option will enable the profiler to dump data into files
# with timestamps, which means lots of files will pile up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false
swift-2.7.1/etc/mime.types-sample0000664000567000056710000000052113024044352020060 0ustar  jenkinsjenkins00000000000000#########################################################
# A nice place to put custom Mime-Types for Swift       #
# Please enter Mime-Types in standard mime.types format #
# Mime-Type Extension ex. image/jpeg jpg                #
#########################################################

#EX. Mime-Type Extension
#    foo/bar   foo


swift-2.7.1/etc/dispersion.conf-sample0000664000567000056710000000200013024044354021065 0ustar  jenkinsjenkins00000000000000[dispersion]
# Please create a new account solely for using the dispersion tools; this
# helps keep your own data clean.
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_key = testing
# auth_version = 1.0
#
# NOTE: If you want to use keystone (auth version 2.0), then its configuration
# would look something like:
# auth_url = http://localhost:5000/v2.0/
# auth_user = tenant:user
# auth_key = password
# auth_version = 2.0
#
# NOTE: If you want to use keystone (auth version 3.0), then its configuration
# would look something like:
# auth_url = http://localhost:5000/v3/
# auth_user = user
# auth_key = password
# auth_version = 3.0
# project_name = project
# project_domain_name = project_domain
# user_domain_name = user_domain
#
# endpoint_type = publicURL
# keystone_api_insecure = no
#
# swift_dir = /etc/swift
# dispersion_coverage = 1.0
# retries = 5
# concurrency = 25
# container_populate = yes
# object_populate = yes
# container_report = yes
# object_report = yes
# dump_json = no
swift-2.7.1/.manpages0000775000567000056710000000054513024044352015616 0ustar  jenkinsjenkins00000000000000#!/bin/sh

RET=0
for MAN in doc/manpages/* ; do
    OUTPUT=$(LC_ALL=en_US.UTF-8 MANROFFSEQ='' MANWIDTH=80 man --warnings -E UTF-8 -l \
        -Tutf8 -Z "$MAN" 2>&1 >/dev/null)
    if [ -n "$OUTPUT" ] ; then
        RET=1
        echo "$MAN:"
        echo "$OUTPUT"
    fi
done

if [ "$RET" -eq "0" ] ; then
    echo "All manpages are fine"
fi

exit "$RET"
swift-2.7.1/.mailmap0000664000567000056710000001337713024044354015451 0ustar  jenkinsjenkins00000000000000Greg Holt  gholt 
Greg Holt  gholt 
Greg Holt  gholt 
Greg Holt  gholt 
Greg Holt  
Greg Holt 
John Dickinson  
Michael Barton  
Michael Barton  
Michael Barton  Mike Barton
Clay Gerrard  
Clay Gerrard  
Clay Gerrard  
Clay Gerrard  clayg 
David Goetz  
David Goetz  
Anne Gentle  
Anne Gentle  annegentle
Fujita Tomonori 
Greg Lange  
Greg Lange  
Chmouel Boudjnah  
Gaurav B. Gangalwar  gaurav@gluster.com <>
Joe Arnold  
Kapil Thangavelu  kapil.foss@gmail.com <>
Samuel Merritt  
Morita Kazutaka 
Zhongyue Luo  
Russ Nelson  
Marcelo Martins  
Andrew Clay Shafer  
Soren Hansen  
Soren Hansen  
Ye Jia Xu  monsterxx03 
Victor Rodionov  
Florian Hines  
Jay Payne  
Doug Weimer  
Li Riqiang  lrqrun 
Cory Wright  
Julien Danjou  
David Hadas  
Yaguang Wang  ywang19 
Liu Siqi  dk647 
James E. Blair  
Kun Huang  
Michael Shuler  
Ilya Kharin  
Dmitry Ukov  Ukov Dmitry 
Tom Fifield  Tom Fifield 
Sascha Peilicke  Sascha Peilicke 
Zhenguo Niu  
Peter Portante  
Christian Schwede  
Christian Schwede  
Constantine Peresypkin  
Madhuri Kumari  madhuri 
Morgan Fainberg  
Hua Zhang  
Yummy Bian  
Alistair Coles  
Tong Li  
Paul Luse  
Yuan Zhou  
Jola Mirecka  
Ning Zhang  
Mauro Stettler  
Pawel Palucki  
Guang Yee  
Jing Liuqing  
Lorcan Browne  
Eohyung Lee  
Harshit Chitalia  
Richard Hawkins 
Sarvesh Ranjan 
Minwoo Bae  Minwoo B
Jaivish Kothari  
Michael Matur 
Kazuhiro Miyahara 
Alexandra Settle 
Kenichiro Matsuda 
Atsushi Sakai 
Takashi Natsume 
Nakagawa Masaaki  nakagawamsa
Romain Le Disez  Romain LE DISEZ
Donagh McCabe  
Eamonn O'Toole  
Gerry Drudy  
Mark Seger  
Timur Alperovich  
Mehdi Abaakouk  
Richard Hawkins  
Ondrej Novy 
Peter Lisak 
Ke Liang 
Daisuke Morita  
Andreas Jaeger  
Hugo Kuo 
Gage Hugo 
Oshrit Feder 
Larry Rensing 
Ben Keller 
Chaozhe Chen 
swift-2.7.1/.unittests0000775000567000056710000000114313024044352016060 0ustar  jenkinsjenkins00000000000000#!/bin/bash

TOP_DIR=$(python -c "import os; print os.path.dirname(os.path.realpath('$0'))")

python -c 'from distutils.version import LooseVersion as Ver; import nose, sys; sys.exit(0 if Ver(nose.__version__) >= Ver("1.2.0") else 1)'
if [ $? != 0 ]; then
    cover_branches=""
else
    # Having the HTML reports is REALLY useful for achieving 100% branch
    # coverage.
    cover_branches="--cover-branches --cover-html --cover-html-dir=$TOP_DIR/cover"
fi
cd $TOP_DIR/test/unit
nosetests --exe --with-coverage --cover-package swift --cover-erase $cover_branches $@
rvalue=$?
rm -f .coverage
cd -
exit $rvalue
swift-2.7.1/CHANGELOG0000664000567000056710000021344213024044354015235 0ustar  jenkinsjenkins00000000000000swift (2.7.1, stable release update)

    * Closed a bug where ssync may have written bad fragment data in
      some circumstances. A check was added to ensure the correct number
      of bytes is written for a fragment before finalizing the write.
      Also, erasure coded fragment metadata will now be validated on read
      requests and, if bad data is found, the fragment will be quarantined.

    * Fixed regression in consolidate_hashes that occurred when a new
      file was stored to a new suffix in a non-empty partition. This bug
      was introduced in 2.7.0 and could cause an increase in rsync
      replication stats during and after upgrade, due to inconsistent
      hashing of partition suffixes.

    * Fixed non-deterministic suffix updates in hashes.pkl where a partition
      may be updated much less often than expected.

    * Fixed a rare infinite loop in `swift-ring-builder` while placing parts.

    * Fixed upgrade bug in versioned_writes where older containers servers
      may have caused out-of-order restores.

    * The object auditor now ignores files in the devices directory when
      auditing objects.

    * Removed "in-process-" from func env tox name to work with upstream CI.

swift (2.7.0, OpenStack Mitaka)

    * Bump PyECLib requirement to >= 1.2.0

    * Update container on fast-POST

      "Fast-POST" is the mode where `object_post_as_copy` is set to
      `False` in the proxy server config. This mode now allows for
      fast, efficient updates of metadata without needing to fully
      recopy the contents of the object. While the default still is
      `object_post_as_copy` as True, the plan is to change the default
      to False and then deprecate post-as-copy functionality in later
      releases. Fast-POST now supports container-sync functionality.

    * Add concurrent reads option to proxy.

      This change adds 2 new parameters to enable and control concurrent
      GETs in Swift, these are `concurrent_gets` and `concurrency_timeout`.

      `concurrent_gets` allows you to turn on or off concurrent
      GETs; when on, it will set the GET/HEAD concurrency to the
      replica count. And in the case of EC HEADs it will set it to
      ndata. The proxy will then serve only the first valid source to
      respond. This applies to all account, container, and replicated
      object GETs and HEADs. For EC only HEAD requests are affected.
      The default for `concurrent_gets` is off.

      `concurrency_timeout` is related to `concurrent_gets` and is
      the amount of time to wait before firing the next thread. A
      value of 0 will fire at the same time (fully concurrent), but
      setting another value will stagger the firing allowing you the
      ability to give a node a short chance to respond before firing
      the next. This value is a float and should be somewhere between
      0 and `node_timeout`. The default is `conn_timeout`, meaning by
      default it will stagger the firing.

    * Added an operational procedures guide to the docs. It can be
      found at http://swift.openstack.org/ops_runbook/index.html and
      includes information on detecting and handling day-to-day
      operational issues in a Swift cluster.

    * Make `handoffs_first` a more useful mode for the object replicator.

      The `handoffs_first` replication mode is used during periods of
      problematic cluster behavior (e.g. full disks) when replication
      needs to quickly drain partitions from a handoff node and move
      them to a primary node.

      Previously, `handoffs_first` would sort that handoff work before
      "normal" replication jobs, but the normal replication work could
      take quite some time and result in handoffs not being drained
      quickly enough.

      In order to focus on getting handoff partitions off the node
      `handoffs_first` mode will now abort the current replication
      sweep before attempting any primary suffix syncing if any of the
      handoff partitions were not removed for any reason - and start
      over with replication of handoffs jobs as the highest priority.

      Note that `handoffs_first` being enabled will emit a warning on
      start up, even if no handoff jobs fail, because of the negative
      impact it can have during normal operations by dog-piling on a
      node that was temporarily unavailable.

    * By default, inbound `X-Timestamp` headers are now disallowed
      (except when in an authorized container-sync request). This
      header is useful for allowing data migration from other storage
      systems to Swift and keeping the original timestamp of the data.
      If you have this migration use case (or any other requirement on
      allowing the clients to set an object's timestamp), set the
      `shunt_inbound_x_timestamp` config variable to False in the
      gatekeeper middleware config section of the proxy server config.

    * Requesting a SLO manifest file with the query parameters
      "?multipart-manifest=get&format=raw" will return the contents of
      the manifest in the format in which it was originally sent by the client.
      The "format=raw" is new.

    * Static web page listings can now be rendered with a custom
      label. By default listings are rendered with a label of:
      "Listing of /v1///". This change adds
      a new custom metadata key/value pair
      `X-Container-Meta-Web-Listings-Label: My Label` that when set,
      will cause the following: "Listing of My Label/" to be
      rendered instead.
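
      For example (illustrative only; the endpoint and token below are
      placeholders):

          import requests

          token = 'AUTH_tk_placeholder'
          requests.post(
              'http://proxy.example.com/v1/AUTH_test/www',
              headers={'X-Auth-Token': token,
                       'X-Container-Meta-Web-Listings-Label': 'My Label'})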

    * Previously, static large objects (SLOs) had a minimum segment
      size (default to 1MiB). This limit has been removed, but small
      segments will be ratelimited. The config parameter
      `rate_limit_under_size` controls the definition of "small"
      segments (1MiB by default), and `rate_limit_segments_per_sec`
      controls how many segments per second can be served (default is 1).
      With the default values, the effective behavior is identical to the
      previous behavior when serving SLOs.

    * Container sync has been improved to perform a HEAD on the remote
      side of the sync for each object being synced. If the object
      exists on the remote side, container-sync will no longer
      transfer the object, thus significantly lowering the network
      requirements to use the feature.

    * The object auditor will now clean up any old, stale rsync temp
      files that it finds. These rsync temp files are left if the
      rsync process fails without completing a full transfer of an
      object. Since these files can be large, the temp files may end
      up filling a disk. The new auditor functionality will reap these
      rsync temp files if they are old. The new object-auditor config
      variable `rsync_tempfile_timeout` is the number of seconds old a
      tempfile must be before it is reaped. By default, this variable
      is set to "auto" or the rsync_timeout plus 900 seconds (falling
      back to a value of 1 day).

    * The Erasure Code reconstruction process has been made more
      efficient by not syncing data files when only the durable commit
      file is missing.

    * Fixed a bug where 304 and 416 response may not have the right
      Etag and Accept-Ranges headers when the object is stored in an
      Erasure Coded policy.

    * Versioned writes now correctly stores the date of previous versions
      using GMT instead of local time.

    * The deprecated Keystone middleware option is_admin has been removed.

    * Fixed log format in object auditor.

    * The zero-byte mode (ZBF) of the object auditor will now properly
      observe the `--once` option.

    * Swift keeps track, internally, of "dirty" parts of the partition
      keyspace with a "hashes.pkl" file. Operations on this file no
      longer require a read-modify-write cycle and use a new
      "hashes.invalid" file to track dirty partitions. This change
      will improve end-user performance for PUT and DELETE operations.

    * The object replicator's succeeded and failed counts are now logged.

    * `swift-recon` can now query hosts by storage policy.

    * The log_statsd_host value can now be an IPv6 address or a hostname
      which only resolves to an IPv6 address.

    * Erasure coded fragments now properly call fallocate to reserve disk
      space before being written.

    * Various other minor bug fixes and improvements.

swift (2.6.0)

    * Dependency changes
      - Updated minimum version of eventlet to 0.17.4 to support IPv6.

      - Updated the minimum version of PyECLib to 1.0.7.

    * The ring rebalancing algorithm was updated to better handle edge cases
      and to give better (more balanced) rings in the general case. New rings
      will have better initial placement, capacity adjustments will move less
      data for better balance, and existing rings that were imbalanced should
      start to become better balanced as they go through rebalance cycles.

    * Added container and account reverse listings.

      A GET request to an account or container resource with a "reverse=true"
      query parameter will return the listing in reverse order. When
      iterating over pages of reverse listings, the relative order of marker
      and end_marker are swapped.
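
      A hypothetical example of requesting a reverse container listing
      (the endpoint and token are placeholders):

          import requests

          token = 'AUTH_tk_placeholder'
          resp = requests.get(
              'http://proxy.example.com/v1/AUTH_test/c',
              params={'reverse': 'true', 'format': 'json'},
              headers={'X-Auth-Token': token})
          names = [obj['name'] for obj in resp.json()]  # reversed order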

    * Storage policies now support having more than one name.

      This allows operators to fix a typo without breaking existing clients,
      or, alternatively, have "short names" for policies. This is implemented
      with the "aliases" config key in the storage policy config in
      swift.conf. The aliases value is a list of names that the storage
      policy may also be identified by. The storage policy "name" is used to
      report the policy to users (eg in container headers). The aliases have
      the same naming restrictions as the policy's primary name.

    * The object auditor learned the "interval" config value to control the
      time between each audit pass.

    * `swift-recon --all` now includes the config checksum check.

    * `swift-init` learned the --kill-after-timeout option to force a service
      to quit (SIGKILL) after a designated time.

    * `swift-recon` now correctly shows timestamps in UTC instead of local
      time.

    * Fixed bug where `swift-ring-builder` couldn't select device id 0.

    * Documented the previously undocumented
      `swift-ring-builder pretend_min_part_hours_passed` command.

    * The "node_timeout" config value now accepts decimal values.

    * `swift-ring-builder` now properly removes devices with zero weight.

    * `swift-init` return codes are updated via "--strict" and "--non-strict"
      options. Please see the usage string for more information.

    * `swift-ring-builder` now reports the min_part_hours lockout time
      remaining

    * Container sync has been improved to more quickly find and iterate over
      the containers to be synced. This reduced server load and lowers the
      time required to see data propagate between two clusters. Please see
      http://swift.openstack.org/overview_container_sync.html for more details
      about the new on-disk structure for tracking synchronized containers.

    * A container POST will now update that container's put-timestamp value.

    * TempURL header restrictions are now exposed in /info.

    * Error messages on static large object manifest responses have been
      greatly improved.

    * Closed a bug where an unfinished read of a large object would leak a
      socket file descriptor and a small amount of memory. (CVE-2016-0738)

    * Fixed an issue where a zero-byte object PUT with an incorrect Etag
      would return a 503.

    * Fixed an error when a static large object manifest references the same
      object more than once.

    * Improved performance of finding handoff nodes if a zone is empty.

    * Fixed duplication of headers in Access-Control-Expose-Headers on CORS
      requests.

    * Fixed handling of IPv6 connections to memcache pools.

    * Continued work towards python 3 compatibility.

    * Various other minor bug fixes and improvements.

swift (2.5.0, OpenStack Liberty)

    * Added the ability to specify ranges for Static Large Object (SLO)
      segments.

    * Replicator configs now support an "rsync_module" value to allow
      for per-device rsync modules. This setting gives operators the
      ability to fine-tune replication traffic in a Swift cluster and
      isolate replication disk IO to a particular device. Please see
      the docs and sample config files for more information and
      examples.

    * Significant work has gone in to testing, fixing, and validating
      Swift's erasure code support at different scales.

    * Swift now emits StatsD metrics on a per-policy basis.

    * Fixed an issue with Keystone integration where a COPY request to a
      service account may have succeeded even if a service token was not
      included in the request.

    * Ring validation now warns if a placement partition gets assigned to the
      same device multiple times. This happens when devices in the ring are
      unbalanced (e.g. two servers where one server has significantly more
      available capacity).

    * Various other minor bug fixes and improvements.

swift (2.4.0)

    * Dependency changes

      - Added six requirement. This is part of an ongoing effort to add
        support for Python 3.

      - Dropped support for Python 2.6.

    * Config changes

      - Recent versions of Python restrict the number of headers allowed in a
        request to 100. This number may be too low for custom middleware. The
        new "extra_header_count" config value in swift.conf can be used to
        increase the number of headers allowed.

      - Renamed "run_pause" setting to "interval" (current configs with
        run_pause still work). Future versions of Swift may remove the
        run_pause setting.

    * Versioned writes middleware

      The versioned writes feature has been refactored and reimplemented as
      middleware. You should explicitly add the versioned_writes middleware to
      your proxy pipeline, but do not remove or disable the existing container
      server config setting ("allow_versions"), if it is currently enabled.
      The existing container server config setting enables existing
      containers to continue being versioned. Please see
      http://swift.openstack.org/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster
      for further upgrade notes.

    * Allow 1+ object-servers-per-disk deployment

      Enabled by a new > 0 integer config value, "servers_per_port" in the
      [DEFAULT] config section for object-server and/or replication server
      configs. The setting's integer value determines how many different
      object-server workers handle requests for any single unique local port
      in the ring. In this mode, the parent swift-object-server process
      continues to run as the original user (i.e. root if low-port binding
      is required), binds to all ports as defined in the ring, and forks off
      the specified number of workers per listen socket. The child, per-port
      servers drop privileges and behave pretty much how object-server workers
      always have, except that because the ring has unique ports per disk, the
      object-servers will only be handling requests for a single disk. The
      parent process detects dead servers and restarts them (with the correct
      listen socket), starts missing servers when an updated ring file is
      found with a device on the server with a new port, and kills extraneous
      servers when their port is found to no longer be in the ring. The ring
      files are stat'ed at most every "ring_check_interval" seconds, as
      configured in the object-server config (same default of 15s).

      In testing, this deployment configuration (with a value of 3) lowers
      request latency, improves requests per second, and isolates slow disk
      IO as compared to the existing "workers" setting. To use this, each
      device must be added to the ring using a different port.

    * Do container listing updates in another (green)thread

      The object server has learned the "container_update_timeout" setting
      (with a default of 1 second). This value is the number of seconds that
      the object server will wait for the container server to update the
      listing before returning the status of the object PUT operation.

      Previously, the object server would wait up to 3 seconds for the
      container server response. The new behavior dramatically lowers object
      PUT latency when container servers in the cluster are busy (e.g. when
      the container is very large). Setting the value too low may result in a
      client PUT'ing an object and not being able to immediately find it in
      listings. Setting it too high will increase latency for clients when
      container servers are busy.

    * TempURL fixes (closes CVE-2015-5223)

      Do not allow PUT tempurls to create pointers to other data.
      Specifically, disallow the creation of DLO object manifests via a PUT
      tempurl. This prevents discoverability attacks which can use any PUT
      tempurl to probe for private data by creating a DLO object manifest and
      then using the PUT tempurl to head the object.

    * Ring changes

      - Partition placement no longer uses the port number to place
        partitions. This improves dispersion in small clusters running one
        object server per drive, and it does not affect dispersion in
        clusters running one object server per server.

      - Added ring-builder-analyzer tool to more easily test and analyze a
        series of ring management operations.

      - Stop moving partitions unnecessarily when overload is on.

    * Significant improvements and bug fixes have been made to erasure code
      support. This feature is suitable for beta testing, but it is not yet
      ready for broad production usage.

    * Bulk upload now treats user xattrs on files in the given archive as
      object metadata on the resulting created objects.

    * Emit warning log in object replicator if "handoffs_first" or
      "handoff_delete" is set.

    * Enable object replicator's failure count in swift-recon.

    * Added storage policy support to dispersion tools.

    * Support keystone v3 domains in swift-dispersion.

    * Added domain_remap information to the /info endpoint.

    * Added support for a "default_reseller_prefix" in domain_remap
      middleware config.

    * Allow SLO PUTs to forgo per-segment integrity checks. Previously, each
      segment referenced in the manifest also needed the correct etag and
      bytes setting. These fields now allow the "null" value to skip those
      particular checks on the given segment.
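
      A hedged sketch of such a manifest upload (the endpoint, token,
      segment names, etag placeholder and the requests library are
      assumptions):

          import json
          import requests

          token = 'AUTH_tk_example'
          url = 'http://proxy.example.com:8080/v1/AUTH_test/cont/manifest'

          manifest = [
              # Fully checked segment: etag and size must match the segment.
              {'path': '/segments/seg1',
               'etag': 'etag-of-seg1',      # placeholder md5 value
               'size_bytes': 1048576},
              # Skip per-segment checks by passing null (None in Python).
              {'path': '/segments/seg2', 'etag': None, 'size_bytes': None},
          ]
          requests.put(url, params={'multipart-manifest': 'put'},
                       data=json.dumps(manifest),
                       headers={'X-Auth-Token': token})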

    * Allow rsync to use compression via a "rsync_compress" config. If set to
      true, compression is only enabled for an rsync to a device in a
      different region. In some cases, this can speed up cross-region
      replication data transfer.

    * Added time synchronization check in swift-recon (the --time option).

    * The account reaper now runs faster on large accounts.

    * Various other minor bug fixes and improvements.


swift (2.3.0, OpenStack Kilo)

    * Erasure Code support (beta)

      Swift now supports an erasure-code (EC) storage policy type. This allows
      deployers to achieve very high durability with less raw capacity as used
      in replicated storage. However, EC requires more CPU and network
      resources, so it is not good for every use case. EC is great for storing
      large, infrequently accessed data in a single region.

      Swift's implementation of erasure codes is meant to be transparent to
      end users. There is no API difference between replicated storage and
      EC storage.

      To support erasure codes, Swift now depends on PyECLib and
      liberasurecode. liberasurecode is a pluggable library that allows for
      the actual EC algorithm to be implemented in a library of your choosing.

      As a beta release, EC support is nearly fully feature complete, but it
      is lacking support for some features (like multi-range reads) and has
      not had a full performance characterization. This feature relies on
      ssync for durability. Deployers are urged to do extensive testing and
      not deploy production data using an erasure code storage policy.

      Full docs are at http://swift.openstack.org/overview_erasure_code.html

    * Add support for container TempURL Keys.

    * Make more memcache options configurable. connection_timeout,
      pool_timeout, tries, and io_timeout are all now configurable.

    * Swift now supports composite tokens. This allows another service to
      act on behalf of a user, but only with that user's consent.
      See http://swift.openstack.org/overview_auth.html for more details.

    * Multi-region replication was improved. When replicating data to a
      different region, only one replica will be pushed per replication
      cycle. This gives the remote region a chance to replicate the data
      locally instead of pushing more data over the inter-region network.

    * Internal requests from the ratelimit middleware now properly log a
      swift_source. See http://swift.openstack.org/logs.html for details.

    * Improved storage policy support for quarantine stats in swift-recon.

    * The proxy log line now includes the request's storage policy index.

    * Ring checker has been added to swift-recon to validate if rings are
      built correctly. As part of this feature, storage servers have learned
      the OPTIONS verb.

    * Add support for x-remove- headers for container-sync.

    * Rings now support hostnames instead of just IP addresses.

    * Swift now enforces that the API version on a request is valid. Valid
      versions are configured via the valid_api_versions setting in swift.conf

    * Various other minor bug fixes and improvements.


swift (2.2.2)

    * Data placement changes

      This release has several major changes to data placement in Swift in
      order to better handle different deployment patterns. First, with an
      unbalance-able ring, fewer partitions will move if the movement doesn't
      result in any better dispersion across failure domains. Also, empty
      (partition weight of zero) devices will no longer keep partitions after
      rebalancing when there is an unbalance-able ring.

      Second, the notion of "overload" has been added to Swift's rings. This
      allows devices to take some extra partitions (more than would normally
      be allowed by the device weight) so that smaller and unbalanced clusters
      will have less data movement between servers, zones, or regions if there
      is a failure in the cluster.

      Finally, rings have a new metric called "dispersion". This is the
      percentage of partitions in the ring that have too many replicas in a
      particular failure domain. For example, if you have three servers in a
      cluster but two replicas for a partition get placed onto the same
      server, that partition will count towards the dispersion metric. A
      lower value is better, and the value can be used to find the proper
      value for "overload".

      The overload and dispersion metrics have been exposed in the
      swift-ring-builder CLI tools.

      See http://docs.openstack.org/developer/swift/overview_ring.html
      for more info on how data placement works now.

    * Improve replication of large out-of-sync, out-of-date containers.

    * Added console logging to swift-drive-audit with a new log_to_console
      config option (default False).

    * Optimize replication when a device and/or partition is specified.

    * Fix dynamic large object manifests getting versioned. This was not
      intended and did not work. Now it is properly prevented.

    * Fix the GET's response code when there is a missing segment in a
      large object manifest.

    * Change black/white listing in ratelimit middleware to use sysmeta.
      Instead of using the config option, operators can set
      "X-Account-Sysmeta-Global-Write-Ratelimit: WHITELIST" or
      "X-Account-Sysmeta-Global-Write-Ratelimit: BLACKLIST" on an account to
      whitelist or blacklist it for ratelimiting. Note: the existing
      config options continue to work.

    * Use TCP_NODELAY on outgoing connections.

    * Improve object-replicator startup time.

    * Implement OPTIONS verb for storage nodes.

    * Various other minor bug fixes and improvements.


swift (2.2.1)

    * Swift now rejects object names with Unicode surrogates.

    * Return 403 (instead of 413) on unauthorized upload when over account
      quota.

    * Fix a rare condition when a rebalance could cause swift-ring-builder
      to crash. This would only happen on old ring files when "rebalance"
      was the first command run.

    * Storage node error limits now survive a ring reload.

    * Speed up reading and writing xattrs for object metadata by using larger
      xattr value sizes. The change moves from 254-byte values to 64 KiB
      values. There is no migration issue with this.

    * Deleted containers beyond the reclaim age are now properly reclaimed.

    * Full Simplified Chinese translation (zh_CN locale) for errors and logs.

    * Container quota is now properly enforced during cross-account COPY.

    * ssync replication now properly uses the configured replication_ip.

    * Fixed issue where ssync did not replicate custom object headers.

    * swift-drive-audit now has the 'unmount_failed_device' config option
      (defaults to True) that controls whether the process will unmount
      failed drives.

    * swift-drive-audit will now dump drive error rates to a recon file.
      The file location is controlled by the 'recon_cache_path' config value
      and it includes each drive and its associated number of errors.

    * When a filesystem doesn't support xattr, the object server now returns
      a 507 Insufficient Storage error to the proxy server.

    * Clean up account and container partition directories if they are
      empty. This keeps the system healthy and prevents a large number
      of empty directories from slowing down the replication process.

    * Show the sum of every policy's amount of async pendings in swift-recon.

    * Various other minor bug fixes and improvements.


swift (2.2.0, OpenStack Juno)

    * Added support for Keystone v3 auth.

      Keystone v3 introduced the concept of "domains" and user names
      are no longer unique across domains. Swift's Keystone integration
      now requires that ACLs be set on IDs, which are unique across
      domains, and further restricts setting new ACLs to only use IDs.

      Please see http://swift.openstack.org/overview_auth.html for
      more information on configuring Swift and Keystone together.
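
      As an illustration only (the IDs, URLs and the requests library are
      placeholders; check the auth docs above for the exact ACL syntax),
      an ACL entry now names a project ID and user ID rather than names:

          import requests

          token = 'AUTH_tk_example'
          container = 'http://proxy.example.com:8080/v1/AUTH_proj/shared'

          # Grant read access to one user, identified by IDs that are
          # unique across Keystone domains.
          requests.post(container, headers={
              'X-Auth-Token': token,
              'X-Container-Read': 'a1b2c3projectid:d4e5f6userid',
          })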

    * Swift now supports server-side account-to-account copy. Server-
      side copy in Swift requires the X-Copy-From header (on a PUT)
      or the Destination header (on a COPY). To initiate an account-to-
      account copy, the existing header value remains the same, but the
      X-Copy-From-Account header (on a PUT) or the Destination-Account
      (on a COPY) are used to indicate the proper account.
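
      A minimal sketch of the PUT form of such a copy (account names,
      URLs and the requests library are assumptions):

          import requests

          token = 'AUTH_tk_example'
          dest = 'http://proxy.example.com:8080/v1/AUTH_dest/cont/obj'

          # Copy cont/obj from account AUTH_src into the destination above.
          requests.put(dest, headers={
              'X-Auth-Token': token,
              'X-Copy-From': '/cont/obj',
              'X-Copy-From-Account': 'AUTH_src',
          })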

    * Limit partition movement when adding a new placement tier.

      When adding a new placement tier (server, zone, or region), Swift
      previously attempted to move all placement partitions, regardless
      of the space available on the new tier, to ensure the best possible
      durability. Unfortunately, this could result in too many partitions
      being moved all at once to a new tier. Swift's ring-builder now
      ensures that only the correct number of placement partitions are
      rebalanced, and thus makes adding capacity to the cluster more
      efficient.

    * Per-storage-policy container counts are now reported in an
      account's response headers.

    * Swift will now reject, with a 4xx series response, GET requests
      with more than 50 ranges, more than 3 overlapping ranges, or more
      than 8 non-increasing ranges.

    * The bind_port config setting is now required to be explicitly set.

    * The object server can now use splice() for a zero-copy GET
      response. This feature is enabled with the "splice" config variable
      in the object server config and defaults to off. Also, this feature
      only works on recent Linux kernels (AF_ALG sockets must be
      supported). A zero-copy GET response can significantly reduce CPU
      requirements for object servers.

    * Added "--no-overlap" option to swift-dispersion populate so that
      multiple runs of the tool can add coverage without overlapping
      existing monitored partitions.

    * swift-recon now supports filtering by region.

    * Various other minor bug fixes and improvements.

swift (2.1.0)

    * swift-ring-builder placement was improved to allow gradual addition
      of new regions without causing a massive migration of data to the new
      region. The change was to prefer device weight first, then look at
      failure domains.

    * Logging updates

      - Eliminated "Handoff requested (N)" log spam.

      - Added process pid to the end of storage node log lines.

      - Container auditor now logs a warning if the devices path contains a
        non-directory.

      - Object daemons now send a user-agent string with their full name.

    * 412 and 416 responses are no longer tracked as errors in the StatsD
      messages from the backend servers.

    * Parallel object auditor

      The object auditor can now be controlled with a "concurrency" config
      value that allows multiple auditor processes to run at once. Using
      multiple parallel auditor processes can speed up the overall auditor
      cycle time.

    * The object updater will now concurrently update each necessary node
      in a new greenthread.

    * TempURL updates

      - The default allowed methods have changed to also allow POST and
        DELETE. The new default list is "GET HEAD PUT POST DELETE".

      - TempURLs for POST now also allow HEAD, matching existing GET and PUT
        functionality.

      - Added filename*= support to TempURL Content-Disposition response
        header.

    * X-Delete-At/After can now be used with the FormPost middleware.

    * Make swift-form-signature output a sample form.
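
      For illustration, a hedged sketch of computing the signature such a
      form carries (all values below are placeholders; the HMAC-SHA1
      scheme shown is the one described in the formpost documentation):

          import hmac
          from hashlib import sha1
          from time import time

          path = '/v1/AUTH_test/container/object_prefix'
          redirect = 'https://example.com/done'
          max_file_size = 104857600
          max_file_count = 10
          expires = int(time() + 600)
          key = 'mykey'   # the account's temp-url key

          hmac_body = '%s\n%s\n%s\n%s\n%s' % (
              path, redirect, max_file_size, max_file_count, expires)
          signature = hmac.new(key.encode('utf-8'),
                               hmac_body.encode('utf-8'),
                               sha1).hexdigest()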

    * Add v2 API to list endpoints middleware

      The new API adds better support for storage policies and changes the
      response from a list of backend urls to a dictionary with the keys
      "endpoints" and "headers". The endpoints key contains a list of the
      backend urls, and the headers key is a dictionary of headers to send
      along with the backend request.
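
      A sketch of the difference in response shape (the URLs and header
      shown are illustrative placeholders, not real output):

          # v1 response: a bare list of backend URLs.
          v1 = ['http://10.0.0.1:6000/sda1/123/AUTH_test/cont/obj',
                'http://10.0.0.2:6000/sdb1/123/AUTH_test/cont/obj']

          # v2 response: a dict with "endpoints" plus "headers" to send
          # along with any backend request (e.g. a storage policy index).
          v2 = {
              'endpoints': v1,
              'headers': {'X-Backend-Storage-Policy-Index': '0'},
          }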

    * Added allow_account_management and account_autocreate values to /info
      responses.

    * Enable object system metadata on PUTs (Note: POST support is ongoing).

    * Various other minor bug fixes and improvements.

swift (2.0.0)

    * Storage policies

      Storage policies allow deployers to configure multiple object rings
      and expose them to end users on a per-container basis. Deployers
      can create policies based on hardware performance, regions, or other
      criteria and independently choose different replication factors on
      them. A policy is set on a Swift container at container creation
      time and cannot be changed.

      Full docs are at http://swift.openstack.org/overview_policies.html
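
      For example (a hedged sketch; the policy name, URL, token and the
      requests library are assumptions), the policy is chosen with a
      header at container creation time:

          import requests

          token = 'AUTH_tk_example'
          url = 'http://proxy.example.com:8080/v1/AUTH_test/fast-container'

          # X-Storage-Policy picks one of the configured policies; it
          # cannot be changed once the container exists.
          requests.put(url, headers={'X-Auth-Token': token,
                                     'X-Storage-Policy': 'gold'})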

    * Add profiling middleware in Swift

      The profile middleware provides a tool to profile Swift
      code on the fly and collects statistical data for performance
      analysis. A native simple Web UI is also provided to help
      query and visualize the data.

    * Add --quoted option to swift-temp-url

    * swift-recon now supports checking the md5sum of swift.conf, which
      helps deployers verify configurations are consistent across a cluster.

    * Users can now set the transaction id suffix by passing in
      a value in the X-Trans-Id-Extra header.

    * New log_max_line_length option caps the maximum length of a log line.

    * Support If-[Un]Modified-Since for object HEAD

    * Added missing constraints and ratelimit parameters to /info

    * Add ability to remove subsections from /info

    * Unify logging for account, container, and object server processes
      to provide a consistent message format. This change reorders the
      fields logged for the account server.

    * Add targeted config loading to swift-init. This allows an easier
      and more explicit way to tell swift-init to run specific server
      process configurations.

    * Properly quote www-authenticate (CVE-2014-3497)

    * Fix logging issue when services stop on py26.

    * Change the default logged length of the auth token to 16.

    * Explicitly set permissions on generated ring files to 0644

    * Fix file uploads larger than 2GiB in the formpost feature

    * Fixed issue where large objects would fail to download if the
      auth token expired partway through the download

    * Various other minor bug fixes and improvements

swift (1.13.1, OpenStack Icehouse)

    * Change the behavior of CORS responses to better match the spec

      A new proxy config variable (strict_cors_mode, defaulting to True)
      has been added. Setting it to False keeps the old behavior. For
      an overview of old versus new behavior, please see
      https://review.openstack.org/#/c/69419/

    * Invert the responsibility of the two instances of proxy-logging in
      the proxy pipeline

      The first proxy_logging middleware instance in the pipeline to
      receive a request now marks that request as handled by it. As a
      result, the left-most proxy_logging middleware handles logging for
      all client requests, and the right-most proxy_logging middleware
      handles all other requests initiated from within the pipeline to
      its left. This fixes an issue where logging for large object
      requests did not properly record bandwidth.

    * Added swift-container-info and swift-account-info tools

    * Allow specification of object devices for audit

    * Dynamic large object COPY requests with ?multipart-manifest=get
      now work as expected

    * When a client is downloading a large object and one of the segment
      reads gets bad data, Swift will now immediately abort the request.

    * Fix ring-builder crash when a ring partition was assigned to a
      deleted device, zero-weighted device, and normal device

    * Make probetests work with conf.d configs

    * Various other minor bug fixes and improvements.

swift (1.13.0)

    * Account-level ACLs and ACL format v2

      Accounts now have a new privileged header to represent ACLs or
      any other form of account-level access control. The value of
      the header is a JSON dictionary string to be interpreted by the
      auth system. A reference implementation is given in TempAuth.
      Please see the full docs at
      http://swift.openstack.org/overview_auth.html
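
      A hedged sketch of setting such an ACL against the TempAuth
      reference implementation (names, URLs and the requests library are
      assumptions):

          import json
          import requests

          admin_token = 'AUTH_tk_admin_example'
          account = 'http://proxy.example.com:8080/v1/AUTH_test'

          # The header value is a JSON dict interpreted by the auth
          # system; TempAuth understands "admin", "read-write" and
          # "read-only" keys.
          acl = {'read-only': ['test2:tester2'],
                 'read-write': ['test3:tester3']}
          requests.post(account, headers={
              'X-Auth-Token': admin_token,
              'X-Account-Access-Control': json.dumps(acl),
          })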

    * Added a WSGI environment flag to stop swob from always using
      absolute location. This is useful if middleware needs to use
      out-of-spec Location headers in a response.

    * Container sync proxies now support simple load balancing

    * Config option to lower the timeout for recoverable object GETs

    * Add a way to ratelimit all writes to an account

    * Allow multiple storage_domain values in cname_lookup middleware

    * Moved all DLO functionality into middleware

      The proxy will automatically insert the dlo middleware at an
      appropriate place in the pipeline the same way it does with the
      gatekeeper middleware. Clusters will still support DLOs after upgrade
      even with an old config file that doesn't mention dlo at all.

    * Remove python-swiftclient dependency

    * Add secondary groups to process user during privilege escalation

    * When logging request headers, it is now possible to specify
      specifically which headers should be logged

    * Added log_requests config parameter to account and container servers
      to match the parameter in the object server. This allows a deployer
      to turn off log messages for these processes.

    * Ensure swift.source is set for DLO/SLO requests

    * Fixed an issue where overwriting segments in a dynamic manifest
      could cause issues on pipelined requests.

    * Properly handle COPY verb in container quota middleware

    * Improved StaticWeb 404 error message on web-listings and index

    * Various other minor bug fixes and improvements.

swift (1.12.0)

    * Several important pieces of information have been added to /info:

       - Configured constraints are included and allow a client to discover
         the limits on names and object sizes that the cluster supports.

       - The supported tempurl methods are now included.

       - Static large object constraints are now included.

    * The Last-Modified header value returned will now be the object's
      timestamp rounded up to the next second. This allows subsequent
      requests with If-[un]modified-Since to use the Last-Modified
      value as expected.
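
      A brief sketch of the round trip this enables (URL, token and the
      requests library are assumptions):

          import requests

          token = 'AUTH_tk_example'
          url = 'http://proxy.example.com:8080/v1/AUTH_test/cont/obj'

          resp = requests.get(url, headers={'X-Auth-Token': token})
          last_modified = resp.headers['Last-Modified']

          # Because Last-Modified is rounded up to the next second,
          # replaying it in If-Modified-Since now yields 304 Not Modified
          # for an unchanged object.
          resp = requests.get(url, headers={
              'X-Auth-Token': token,
              'If-Modified-Since': last_modified})
          assert resp.status_code == 304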

    * Non-integer values for if-delete-at headers will now properly
      report a 400 error instead of a 503.

    * Fix object versioning with non-ASCII container names.

    * Bulk delete with POST now works properly.

    * Generic means for persisting system metadata

      Swift now supports system-level metadata on accounts and
      containers. System metadata provides a means to store internal
      custom metadata with associated Swift resources in a safe and
      secure fashion without actually having to plumb custom metadata
      through the core swift servers. The new gatekeeper middleware
      prevents this system metadata from leaking into the request or
      being set by a client.

    * catch_errors and gatekeeper middleware are now forced into the proxy
      pipeline if not explicitly referenced.

    * New container sync configuration option, separating the end user
      from knowing the required end point and adding more secure
      signed requests. See
      http://swift.openstack.org/overview_container_sync.html for full
      information.

    * bulk middleware now can be configured to retry deleting containers.

    * The default yield_frequency used to keep client connections alive
      during slow bulk requests was reduced from 60 seconds to 10 seconds.
      While this is a change to a default, it should not affect deployments
      and there is no migration process needed.

    * Swift processes will attempt to set RLIMIT_NPROC to 8192.

    * Server processes will now exit with a non-zero error code on config
      errors.

    * Warn if read_affinity is configured but not enabled.

    * Fix checkmount error parsing in swift-recon.

    * Log at warn level when an object is quarantined.

    * Fixed CVE-2014-0006 to avoid a potential timing attack with tempurl.

    * Various other minor bug fixes and improvements.


swift (1.11.0)

    * Added discoverable capabilities

      A Swift proxy server now by default (although it can be turned off)
      will respond to requests to /info. The response to these requests
      include information about the cluster and can be used by clients to
      determine which features are supported in the cluster.
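
      A small sketch of querying it (the proxy URL is a placeholder and
      the requests library is an assumption):

          import requests

          # /info is served at the root of the proxy, before the /v1 API.
          info = requests.get('http://proxy.example.com:8080/info').json()

          # Cluster constraints live under the "swift" key; enabled
          # middleware show up as their own top-level keys.
          print(info['swift']['max_file_size'])
          print(sorted(info.keys()))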

    * Object replication ssync (an rsync alternative)

      A Swift storage node can now be configured to use Swift primitives
      for replication transport instead of rsync. This is an experimental
      feature that is not yet considered production ready.

    * If a source times out on an object server read, try another one
      of them with a modified range.

    * The proxy now responds to many types of requests as soon as it
      has a quorum. This can help speed up responses (without
      changing the results), especially when one node is acting up.
      There is a post_quorum_timeout config value that can tune how
      long to wait for requests to finish after a quorum has been
      established.

    * Add accurate timestamps in proxy log lines for the start and
      end of a request. These are added as new fields on the end of
      the existing log lines, and therefore should not break
      existing, well-behaved log processors.

    * Add an "inline" query parameter to tempurl

      By default, temporary URLs add a "Content-Disposition" header
      that forces many clients to download the object. Now, temporary
      URLs support an optional "inline" query parameter that will
      force a "Content-Disposition: inline" header to be added to the
      response, overriding the default.
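
      A hedged sketch of building such a URL (the key, host and expiry
      are placeholders; the HMAC-SHA1 scheme is the one described in the
      TempURL documentation):

          import hmac
          from hashlib import sha1
          from time import time

          key = 'mykey'        # account's X-Account-Meta-Temp-URL-Key
          method = 'GET'
          expires = int(time() + 3600)
          path = '/v1/AUTH_test/container/object'

          hmac_body = '%s\n%s\n%s' % (method, expires, path)
          sig = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
                         sha1).hexdigest()

          # Append &inline to get "Content-Disposition: inline" instead
          # of the attachment default.
          url = ('https://swift.example.com%s?temp_url_sig=%s'
                 '&temp_url_expires=%d&inline' % (path, sig, expires))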

    * Use TCP_NODELAY for created sockets. This can dramatically
      lower latency for small object workloads.

    * DiskFile API, with reference implementation

      The DiskFile abstraction for talking to data on disk has been
      refactored to allow alternate implementations to be developed.
      Included in the codebase is an in-memory reference
      implementation. For full documentation, please see the developer
      documentation. The DiskFile API is still a work in progress and
      is not yet finalized.

    * Removal of swift-bench

      The included benchmarking tool swift-bench has been extracted
      from the codebase and is now in its own repository at
      https://github.com/openstack/swift-bench. New swift-bench
      binaries and packages may be found on PyPI at
      https://pypi.python.org/pypi/swift-bench

    * Bulk delete now also supports the POST verb, in addition to DELETE

    * Added functionality to the swift-ring-builder to support
      limited recreation of ring builder files from the ring file itself.

    * HEAD on account now returns 410 if account was deleted and
      not yet reaped. The old behavior was to return a 404.

    * Fixed a bug introduced since the 1.10.0 release that
      prevented expired objects from being removed from the system.
      This resulted in orphaned expired objects taking up space on
      the system but inaccessible to the API. This regression and
      fix are only important if you have deployed code since the
      1.10.0 release. For a full discussion, including a script that
      can be used to clean up orphaned objects, see
      https://bugs.launchpad.net/swift/+bug/1257330

    * Tie socket write buffer size to server chunk size parameter. This
      pairs the underlying network buffer size with the size of data
      that Swift attempts to read from the connection, thereby
      improving efficiency and throughput on connections.

    * Fix 500 from account-quota middleware. If a user had set
      X-Account-Meta-Quota-Bytes to something non-integer prior to
      the installation of the account-quota middleware, then the
      quota check would choke on it. Now a non-integer value is
      treated as "no quota".

    * Quarantine objects with busted metadata. Before, if you
      encountered an object with corrupt or missing xattrs, the
      object server would return a 500 on GET, and wouldn't quarantine
      anything. Now the object server returns a 404 for that GET and
      the corrupted file is quarantined, thus giving replication a
      chance to fix it.

    * Fix quarantine and error counts in audit logs

    * Report transaction ID in failure exception logs

    * Make pbr a build-time only dependency

    * Worked around a bug in eventlet 0.9.16 where the size of the
      memcache connection pools would grow unbounded.

    * Tempurl keys are now properly stored as utf8

    * Fixed an issue where concurrent PUT requests to accounts or
      containers may result in errors due to locked databases.

    * Handle copy requests in account and container quota middleware

    * Now ensure that a WWW-Authenticate header is on all 401 responses

    * Various other bug fixes and improvements

swift (1.10.0, OpenStack Havana)

    * Added support for pooling memcache connections

    * Added support to replicating handoff partitions first in object
      replication. Can also configure how many remote nodes a storage node
      must talk to before removing a local handoff partition.

    * Fixed bug where memcache entries would not expire

    * Much faster calculation for choosing handoff nodes

    * Added container listing ratelimiting

    * Fixed issue where the proxy would continue to read from a storage
      server even after a client had disconnected

    * Added support for headers that are only visible to the owner of a Swift
      account

    * Fixed ranged GET with If-None-Match

    * Fixed an issue where rings may not be balanced after initial creation

    * Fixed internationalization support

    * Return the correct etag for a static large object on the PUT response

    * Allow users to extract archives to containers with ACLs set

    * Fix support for range requests against static large objects

    * Now logs x-copy-from header in a useful place

    * Reverted back to old XML output of account and container listings to
      ensure older clients do not break

    * Account quotas now appropriately handle copy requests

    * Fix issue with UTF-8 handling in versioned writes

    * Various other bug fixes and improvements, including support for running
      Swift under Pypy and continuing work to support storage policies

swift (1.9.1)

    * Disallow PUT, POST, and DELETE requests from creating older tombstone
      files, preventing the possibility of filling up the disk and removing
      unnecessary container updates.

    * Set default wsgi workers to cpu_count

      Change the default value of wsgi workers from 1 to auto. The new
      default value for workers in the proxy, container, account & object
      wsgi servers will spawn as many workers per process as you have cpu
      cores. This will not be ideal for some configurations, but it's much
      more likely to produce a successful out of the box deployment.

    * Added reveal_sensitive_prefix config setting to filter the auth token
      logged by the proxy server.

    * Ensure Keystone's reseller prefix ends with an underscore. Previously
      this was a recommendation--now it is enforced.

    * Added log_file_pattern config to swift-drive-audit for drive errors

    * Add support for telling Swift to detect a content type on a request.

    * Additional object stats are now logged in the object auditor

    * Moved the DiskFile interface into its own module

    * Ensure the SQLite cursors are closed when creating functions

    * Better support for valid Accept headers

    * In Keystone, don't allow users to delete their own account

    * Return a UTC timezone designator in container listings

    * Ensure that users can't remove their account quotas

    * Allow floating point value for dispersion coverage

    * Fix incorrect error page handling in staticweb

    * Add utf-8 charset to multipart-manifest=get response.

    * Allow dispersion tools to use keystone server with insecure certificate

    * Ensure that files are always closed in tests

    * Use OpenStack's "Hacking" guidelines for code formatting

    * Various other minor bug fixes and improvements

swift (1.9.0)

    * Global clusters support

      The "region" concept introduced in Swift 1.8.0 has been augmented with
      support for using a separate replication network and configuring read
      and write affinity. These features combine to offer support for a
      single Swift cluster spanning a wide geographic area.

    * Disk performance

      The object server now can be configured to use threadpools to increase
      performance and smooth out latency throughout the system. Also, many
      disk operations were reordered to increase reliability and improve
      performance.

    * Added config file conf.d support

      Allow Swift daemons and servers to optionally accept a directory as the
      configuration parameter. This allows different parts of the config file
      to be managed separately, eg each middleware could use a separate file
      for its particular config settings.

    * Allow two TempURL keys per account

      By adding a second key, a user can safely rotate keys and prevent URLs
      already in use from becoming invalid. TempURL middleware has also been
      updated to allow a configurable set of allowed methods and to prevent
      a bug related to content-disposition names.

    * Added crossdomain.xml middleware. See
      http://docs.openstack.org/developer/swift/crossdomain.html for details

    * Added rsync bandwidth limit setting for object replicator

    * Transaction ID updated to include the time and an optional suffix

    * Added x-remove-versions-location header to disable versioned writes

    * Improvements to support for Keystone ACLs

    * Added parallelism to object expirer daemon

    * Added support for ring hash prefix in addition to the existing suffix

    * Allow all headers requested for CORS

    * Stop getting useless bytes on manifest Range requests

    * Improved container-sync resiliency

    * Added example Apache config files. See
      http://docs.openstack.org/developer/swift/apache_deployment_guide.html
      for more info

    * If an account is marked as deleted but hasn't been reaped and is still
      on disk, responses will include an "X-Account-Status" header

    * Fix 503 on account/container HEAD with invalid format

    * Added extra safety on account-level DELETE when using bulk deletes

    * Made colons quote-safe in logs (mainly for IPv6)

    * Fixed bug with bulk delete max items

    * Fixed static large object manifest range requests

    * Prevent static large objects from containing other static large objects

    * Fixed issue with use of delimiter in container queries where some
      objects would not be listed

    * Various other minor bug fixes and improvements

swift (1.8.0, OpenStack Grizzly)

    * Make rings' replica count adjustable

    * Added a region tier to the ring above zones

    * Added timing-based sorting of object servers on read requests

    * Added support for auto-extract archive uploads

    * Added support for bulk delete requests

    * Added support for large objects with static manifests

    * Added list_endpoints middleware to provide an API for determining where
      the ring places data

    * proxy-logging middleware can now handle logging for other middleware

      proxy-logging should be used twice in the proxy pipeline. The first
      handles middleware logs for requests that never made it all the way
      to the server. The last handles requests that do make it to the server.

      This is a change that may require an update to your proxy server
      config file or custom middleware that you may be using. See the full
      docs at http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.proxy_logging.

    * Changed the default sample rate for a few high-traffic requests.

      Added log_statsd_sample_rate_factor to globally tune the StatsD
      sample rate. This tunable can be used to reduce StatsD traffic
      proportionally for all metrics and is intended to replace
      log_statsd_default_sample_rate, which is left alone for
      backward-compatibility, should anyone be using it.

    * Added swift_hash_path_prefix option to swift.conf

      New deployments are advised to set this value to a random secret
      to protect against hash collisions

    * Added user-managed container quotas

    * Added support for account-level quotas managed by an auth reseller

    * Added --run-dir option to swift-init

    * Added more options to swift-bench

    * Added support for CORS "actual requests"

    * Added fallocate_reserve option to protect against full drives

    * Allow ring rebalance to take a seed

    * Ring serialization will now produce the same gzip file (Py2.7)

    * Added support to swift-drive-audit for handling rotated logs

    * Added first-byte latency timings for GET requests

    * Added per disk PUT timing monitoring support

    * Added speed limit options for DB auditor

    * Force log entries to be one line

    * Ensure that fsync is used and not just fdatasync

    * Improved handoff node selection

    * Deprecated keystone is_admin feature

    * Fix large objects with unicode in the segment names

    * Update Swift's MemcacheRing to provide API compatibility with
      standard Python memcache libraries

    * Various other minor bug fixes and improvements

swift (1.7.6)

    * Better tempauth storage URL guessing

    * Added --top option to swift-recon -d

    * Allow optional, temporary healthcheck failure

    * keystoneauth middleware now supports cross-tenant ACLs

    * Add dispersion report flags to limit reports

    * Add config option to turn eventlet debug on/off

    * Added override option for swift-init's KILL_WAIT

    * Added oldest and most recent replication pass to swift-recon

    * Fixed 500 error response when GETing a many-segment manifest

    * Memcached keys now use a delta timeout when possible

    * Refactor DiskFile to hide temp file names and exts

    * Remove IP-based container-sync ACLs from auth middlewares

    * Fixed bug in deleting memcached account info data

    * Fixed lazy-listing of object manifest segments

    * Fixed bug where a ? in the object name caused an error

    * Swift now returns 406 if it can't satisfy Accept

    * Fix infinite recursion bug in object replicator

    * Swift will now reject names with NULL characters

    * Fixed object-auditor logging to use a minimum of unix sockets

    * Various other minor bug fixes and improvements

swift (1.7.5)

    * Support OPTIONS verb, including CORS preflight requests

    * Added support for custom log handlers

    * Range support is extended to support GET requests with multiple ranges.
      Multi-range GETs are not yet supported against large-object manifests.

    * Cluster constraints are now settable by config

    * Replicators can now run against specific devices or partitions

    * swift-bench now supports running on multiple cores and multiple servers

    * Added partition option to swift-get-nodes

    * Allow underscores in account and user in tempauth via base64 encodings

    * New option to the dispersion report to output the missing partitions

    * Changed storage server StatsD metrics to report timings instead of
      counts for errors. See the admin guide for the updated metric names.

    * Removed a dependency on WebOb and replaced it with an internal module

    * Fixed config parsing in swift-bench -x

    * Fixed sample_rate in StatsD logging

    * Track unlinks of async_pendings with StatsD

    * Remove double GET on range requests

    * Allow unsetting of X-Container-Sync-To and ACL headers

    * DB reclamation now removes empty suffix directories

    * Fix non-standard 100-continue behavior

    * Allow object-expirer to delete the last copy of a versioned object

    * Only set TCP_KEEPIDLE on systems where it is supported

    * Fix stdin flush and fdatasync issues on BSD platforms

    * Allow object-expirer to delete the last version of an object

    * Various other minor bug fixes and improvements

swift (1.7.4, OpenStack Folsom)

    * Fix issue where early client disconnects may have caused a memory leak

swift (1.7.2)

    * Fix issue where memcache serialization was not properly loading
      the config value

swift (1.7.0)

    * Use custom encoding for ring data instead of pickle

      Serialize RingData in a versioned, custom format which is a combination
      of a JSON-encoded header and .tostring() dumps of the
      replica2part2dev_id arrays. This format deserializes hundreds of times
      faster than rings serialized with Python 2.7's pickle (a significant
      performance regression for ring loading between Python 2.6 and Python
      2.7). Fixes bug 1031954.

      The new implementation is backward-compatible; if a ring
      does not begin with a new-style magic string, it is assumed to be an
      old-style pickle-dumped ring and is handled as before. So new Swift
      code can read old rings, but old Swift code will not be able to read
      newly-serialized rings.

    * Do not use pickle for serialization in memcache, but JSON

      To avoid issues on upgrades (inability to read pickled values, and cache
      poisoning for old servers not understanding JSON), we add a
      memcache_serialization_support configuration option, with the following
      values:

       0 = older, insecure pickle serialization
       1 = json serialization but pickles can still be read (still insecure)
       2 = json serialization only (secure and the default)

      To avoid an instant full cache flush, existing installations should
      upgrade with 0, then set to 1 and reload, then after some time (24
      hours) set to 2 and reload. Support for 0 and 1 will be removed in
      future versions.

    * Update proxy-server StatsD logging. This is a significant change to the
      existing StatsD integration. Docs for this feature can be found in
      doc/source/admin_guide.rst.

    * Improved swift-bench to allow random object sizes and better usability

    * Updated probe tests

    * Replicator removal metrics are now generated on a per-device basis

    * Made object replicator locking more optimistic

    * Split proxy-server code into separate modules

    * Fixed bug where swift-recon would not report all unmounted drives

    * Fixed issue where a LockTimeout may have caused a file descriptor to
      not be closed properly

    * Fixed a bug where an error may have caused the proxy to stop returning
      data to a client

    * Fixed bug where expirer would get confused by odd deletion times

    * Fixed a bug where auto-creating accounts would return an error if they
      were recreated after being deleted

    * Fix when rate_limit_after_segment kicks in

    * fallocate() failures properly return HTTPInsufficientStorage from
      object-server before reading from wsgi.input, allowing the proxy
      server to quickly error_limit that node

    * Fixed error with large object manifests and x-newest headers on GET

    * Various other minor bug fixes and improvements

swift (1.6.0)

    * Removed bin/swift and swift/common/client.py from the swift repo. These
      tools are now managed in the python-swiftclient project. The
      python-swiftclient project is a second deliverable of the openstack
      swift project.

    * Moved swift_auth (openstack keystone) middleware from keystone project
      into swift project

    * Made dispersion report work with any replica count other than 3. This
      substantially affects the JSON output of the dispersion report, and any
      tools written to consume this output will need to be updated.

    * Added Solaris (Illumos) compatibility

    * Added -a option to swift-get-nodes to show all handoffs

    * Add UDP protocol support for logger

    * Added config options for rate limiting of large object downloads.

    * Added config option `log_handoffs` (defaults to True) to proxy server
      to log and update statsd with information about when a handoff node is
      used. This is helpful to track the health of the cluster.

    * swift-bench can now use auth 2.0

    * Support forbidding substrings based on a regexp in name_filter
      middleware

    * Hardened internal server processes so only authorized methods can be
      called.

    * Made ranged requests on large objects work correctly when the size of
      the manifest file is not 0 bytes

    * Added option to dispersion report to print 404s to stdout

    * Fix object replication on older rsync versions when using ipv4

    * Fixed bug with container reclaim/report race

    * Make object server's caching more configurable.

    * Check disk failure before syncing for each partition

    * Allow special characters to be referenced by manifest objects

    * Validate devices and partitions to avoid directory traversals

    * Support WebOb 1.2

    * Ensure that accessing the ring devs reloads the ring if necessary.
      Specifically, this allows replication to work when it has been started
      with an empty ring.

    * Various other minor bug fixes and improvements

swift (1.5.0)

    * New option to toggle SQLite database preallocation with account
      and container servers.

      IMPORTANT:
      The default for database preallocation is now off when before
      it was always on. This will affect performance on clusters that
      use standard drives with shared account, container, object
      servers. Such deployments will need to update their
      configurations to turn database preallocation back on (see
      account-server.conf-sample and container-server.conf.sample
      files).

      If you are using dedicated account and container servers with
      SSDs, you should defragment your file systems after upgrade and
      should notice dramatically less disk usage.

    * swift3 middleware removed and moved to http://github.com/fujita/swift3.
      This will require a config change in the proxy server and adds a new
      dependency for deployers using this middleware.

    * Moved proxy server logging to middleware. This requires a config change
      in the proxy server.

    * Added object versioning feature. (See docs for full description)

    * Add statsd logging throughout the system (beta, some event names may
      change)

    * Expanded swift-recon middleware support

    * The ring builder now supports as-unique-as-possible partition
      placement, unified balancing methods, and can work on more than one
      device at a time.

    * Numerous bug fixes to StaticWeb (previously unusable at scale).

    * Bug fixes to all middleware to allow passthrough requests under various
      conditions and to share pre-authed request code (which previously had
      differing behaviors and interaction bugs).

    * Bug fix to object expirer that could cause infinite looping.

    * Added optional delay to account reaping.

    * Async-pending write optimization.

    * Dispersion tools now support multiple auth versions

    * Updated man pages

    * Proxy server can now deny requests to particular hostnames

    * Updated docs for domain remap middleware

    * Updated docs for cname lookup middleware

    * Made swift CLI binary easier to wrap

    * Proxy will now also return X-Timestamp header

    * Added associated projects doc as a place to track ecosystem projects

    * end_marker made consistent across both object and container listings

    * Various other minor bug fixes and improvements

swift (1.4.8, OpenStack Essex)

    * Added optional max_containers_per_account restriction

    * Added alternate metadata header removal method

    * Added optional name_check middleware filter

    * Added support for venv-based test runs with tox

    * StaticWeb behavior change with X-Web-Mode: true and
      non-StaticWeb-enabled containers (immediately 404s instead of passing
      the request on down the WSGI pipeline).

    * Fixed typo in swift-dispersion-report JSON output.

    * Swift-Recon-related fix to create temporary files on the same disk as
      their final destinations.

    * Updated return codes in swift3 middleware

    * Fixed swift3 middleware to allow Content-Range header in response

    * Updated swift.common.client and swift CLI tool with auth 2.0 changes

    * Swift CLI tool now supports common openstack auth args

    * Body of HTTP responses now included in error messages of swift CLI tool

    * Refactored some ring building functions for clarity and simplicity

swift (1.4.7)

    * Improvements to account and container replication.

    * Fix for account servers allowing .pending to exist before .db.

    * Fixed possible key-guessing exploit in formpost.

    * Fixed bug in ring builder when removing a large percentage of devices.

    * Swift CLI tool now supports openstack-standard CLI flags.

    * New JSON output option for swift-dispersion-report.

    * Removed old stats tools.

    * Other bug fixes and documentation updates.

swift (1.4.6)

    * TempURL and FormPost middleware added

    * Added memcache.conf option

    * Dropped eval-based json parser fallback

    * Properly lose all groups when dropping privileges

    * Fix permissions when creating files

    * Fixed bug regarding negative Content-Length in requests

    * Consistent formatting on Last-Modified response header

    * Added timeout option to swift-recon

    * Allow arguments to be passed to nosetest

    * Removed tools/rfc.sh

    * Other minor bug fixes

swift (1.4.5)

    * New swift-orphans and swift-oldies command line tools to detect
      orphaned Swift processes and long running processes.

    * Command line tool "swift" now supports marker queries.

    * StaticWeb middleware improved to save an extra request when
      possible.

    * Updated swift-init to support swift-object-expirer.

    * Fixed object replicator timeout handling [bug 814263].

    * Fixed accept header 503 vs. 400 [bug 891247].

    * More exception handling for auditors.

    * Doc updates for PPA [bug 905608].

    * Doc updates to explain replication more clearly [bug 906976].

    * Updated SAIO instructions to no longer mention ~/swift/trunk.

    * Fixed docstrings in the ring code.

    * PEP8 Updates.

swift (1.4.4)

    * Fixes to prevent socket hoarding (memory leak)

    * Add sockstat info to recon.

    * Fixed leak from SegmentedIterable.

    * Fixed bufferedhttp to deref socks and fps.

    * Add support for OS Auth API version 2.

    * Make Eventlet's WSGI server log differently.

    * Updated TimeoutError and except Exception refs.

    * Fixed time-sensitive tests.

    * Fixed object manifest etags.

    * Fixes for swift-recon disk usage distribution graph.

    * Adding new manpages for configuration files.

    * Change bzr to swift in getting_started doc.

    * Fixes the HTTPConflict import.

    * Expiring Objects Support.

    * Fixing bug with x-trans-id.

    * Requote the source when doing a COPY.

    * Add documentation for Swift Recon.

    * Make drive audit regexes detect 4-letter drives.

    * Added the account/container/object name to the ratelimit error
      messages.

    * Query only specific zone via swift-recon.

swift (1.4.3, OpenStack Diablo)

    * Additional quarantine catching code.

    * Added client_ip to all proxy log lines not otherwise containing it.

    * Content-Type is now application/xml for "GET services/bucket" swift3
      middleware requests.

    * Alpha release of the Swift Recon Experiment

    * Fix last modified date for swift3 middleware.

    * Fix to clear account/container metadata on account/container deletion.

    * Fix for corner case regarding X-Newest.

    * Fix for object auditor running out of file descriptors.

    * Fix to return all proper headers for manifest objects.

    * Fix to the swift tool to strip any leading slashes on file names when
      uploading.

swift (1.4.2)

    * Removed stats/logging code from Swift [now in separate slogging project].

    * Container Synchronization Feature - First Edition

    * Fix swift3 authentication bug about the Date and X-Amz-Date handling.

    * Changing ratelimiting so that it only limits PUTs/DELETEs.

    * Object POSTs are implemented as COPYs now by default (you can revert to
      previous implementation with conf object_post_as_copy = false)

    * You can specify X-Newest: true on GETs and HEADs to indicate you want
      Swift to query all backend copies and return the newest version
      retrieved.
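
      As a hypothetical example (URL, token and the requests library are
      assumptions):

          import requests

          token = 'AUTH_tk_example'
          url = 'http://proxy.example.com:8080/v1/AUTH_test/cont/obj'

          # Ask the proxy to consult all replicas and return the newest
          # copy rather than the first one that answers.
          resp = requests.get(url, headers={'X-Auth-Token': token,
                                            'X-Newest': 'true'})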

    * Object COPY requests now always copy the newest object they can find.

    * Account and container GETs and HEADs now shuffle the nodes they use to
      balance load.

    * Fixed the infinite charset: utf-8 bug

    * Fixed a bug where drop_buffer_cache() didn't work on systems where
      off_t isn't 64 bits.

swift (1.4.1)

    * st renamed to swift

    * swauth was separated from swift. It is now its own project and can be
      found at https://github.com/gholt/swauth.

    * tempauth middleware added as an extremely limited auth system for dev
      work.

    * Account and container listings now properly labeled UTF-8 (previously the
      label was "utf8").

    * Accounts are auto-created if an auth token is valid when the
      account_autocreate proxy config parameter is set to true.

swift (1.4.0)

    * swift-bench now cleans up containers it creates.

    * WSGI servers now load WSGI filters and applications after forking for
      better plugin support.

    * swauth-cleanup-tokens now handles 404s on token containers and tokens
      better.

    * Proxy logs the remote IP address as the client IP in the absence of
      X-Forwarded-For and X-Cluster-Client-IP headers instead of - like it did
      before.

    * Swift3 WSGI middleware added support for param-signed URLs.

    * swauth- scripts now exit with proper exit codes.

    * Fixed a bug where allowed_headers weren't honored for HEAD requests.

    * Double quarantining of corrupted sqlite3 databases now works.

    * Fix for the object replicator breaking when run with no objects on
      the server.

    * Added the Accept-Ranges header to GET and HEAD requests.

    * When a single object has multiple async pending updates on a single
      device, only the latest async pending is now sent.

    * Fixed issue of Swift3 WSGI middleware not working correctly with '/' in
      object names.

    * Renamed swift-stats-* to swift-dispersion-* to avoid confusion with log
      stats stuff.

    * Added X-Trans-Id transaction id header to every response.

    * Fixed a Python 2.7 compatibility problem.

    * Now using bracketed notation for ip literals in rsync calls, so
      compressed ipv6 literals work.

    * Added a container stats collector and refactoring some of the stats code.

    * Changed subdir nodes in XML formatted object listings to align with
      object nodes. Now: <subdir name="foo"><name>foo</name></subdir>
      Before: <subdir name="foo" />.

    * Fixed bug in Swauth to support for multiple swauth instances.

    * swift-ring-builder: Added list_parts command which shows common
      partitions for a given list of devices.

    * Object auditor now shows better statistics updates in the logs.

    * Stats uploaders now allow overrides for source_filename_pattern and
      new_log_cutoff values.

---

Changelog entries for previous versions are incomplete

swift (1.3.0, OpenStack Cactus)

swift (1.2.0, OpenStack Bexar)

swift (1.1.0, OpenStack Austin)

swift (1.0.0, Initial Release)
swift-2.7.1/.testr.conf0000664000567000056710000000034513024044352016103 0ustar  jenkinsjenkins00000000000000[DEFAULT]
test_command=SWIFT_TEST_DEBUG_LOGS=${SWIFT_TEST_DEBUG_LOGS} ${PYTHON:-python} -m subunit.run discover -t ./ ${TESTS_DIR:-./test/functional/} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
swift-2.7.1/.coveragerc0000664000567000056710000000014513024044352016134 0ustar  jenkinsjenkins00000000000000[run]
branch = True
omit = /usr*,setup.py,*egg*,.venv/*,.tox/*,test/*

[report]
ignore_errors = True
swift-2.7.1/babel.cfg0000664000567000056710000000002113024044352015532 0ustar  jenkinsjenkins00000000000000[python: **.py]

swift-2.7.1/.alltests0000775000567000056710000000076613024044354015665 0ustar  jenkinsjenkins00000000000000#!/bin/bash

TOP_DIR=$(python -c "import os; print os.path.dirname(os.path.realpath('$0'))")

echo "==== Unit tests ===="
resetswift
$TOP_DIR/.unittests $@
rvalue=$?
if [ $rvalue != 0 ] ; then
    exit $rvalue
fi

echo "==== Func tests ===="
resetswift
startmain
$TOP_DIR/.functests $@
rvalue=$?
if [ $rvalue != 0 ] ; then
    exit $rvalue
fi

echo "==== Probe tests ===="
resetswift
$TOP_DIR/.probetests $@
rvalue=$?
if [ $rvalue != 0 ] ; then
    exit $rvalue
fi

echo "All tests runs fine"

exit 0

swift-2.7.1/requirements.txt0000664000567000056710000000070313024044354017301 0ustar  jenkinsjenkins00000000000000# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

dnspython>=1.12.0;python_version<'3.0'
dnspython3>=1.12.0;python_version>='3.0'
eventlet>=0.17.4  # MIT
greenlet>=0.3.1
netifaces>=0.5,!=0.10.0,!=0.10.1
pastedeploy>=1.3.3
six>=1.9.0
xattr>=0.4
PyECLib>=1.2.0                          # BSD
swift-2.7.1/tox.ini0000664000567000056710000000462213024044354015334 0ustar  jenkinsjenkins00000000000000[tox]
envlist = py34,py27,pep8
minversion = 1.6
skipsdist = True

[testenv]
usedevelop = True
install_command = pip install --allow-external netifaces --allow-insecure netifaces -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
         NOSE_WITH_COVERAGE=1
         NOSE_COVER_BRANCHES=1
deps =
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/test-requirements.txt
commands = find . -type f -name "*.py[c|o]" -delete
           find . -type d -name "__pycache__" -delete
           nosetests {posargs:test/unit}
whitelist_externals = find
passenv = SWIFT_* *_proxy

[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
         NOSE_WITH_COVERAGE=1
         NOSE_COVER_BRANCHES=1
         NOSE_COVER_HTML=1
         NOSE_COVER_HTML_DIR={toxinidir}/cover

[testenv:py34]
commands =
  nosetests test/unit/common/test_exceptions.py

[testenv:pep8]
basepython = python2.7
commands =
  flake8 {posargs:swift test doc setup.py}
  flake8 --filename=swift* bin

[testenv:py3pep8]
basepython = python3
install_command = echo {packages}
commands =
  # Gross hack. There's no other way to get it to /not/ install swift itself
  # (which triggers installing eventlet) but also get flake8 installed.
  pip install flake8
  flake8 swift test doc setup.py
  flake8 --filename=swift* bin

[testenv:func]
commands = ./.functests {posargs}

[testenv:func-fast-post]
commands = ./.functests {posargs}
setenv = SWIFT_TEST_IN_PROCESS=1
         SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY=False

[testenv:venv]
commands = {posargs}

[testenv:docs]
commands = python setup.py build_sphinx

[testenv:bandit]
deps = -r{toxinidir}/test-requirements.txt
commands = bandit -c bandit.yaml -r swift bin -n 5 -p gate

[flake8]
# it's not a bug that we aren't using all of hacking, ignore:
# F812: list comprehension redefines ...
# H101: Use TODO(NAME)
# H202: assertRaises Exception too broad
# H233: Python 3.x incompatible use of print operator
# H301: one import per line
# H306: imports not in alphabetical order (time, os)
# H401: docstring should not start with a space
# H403: multi line docstrings should end on a new line
# H404: multi line docstring should start without a leading new line
# H405: multi line docstring summary not separated with an empty line
# H501: Do not use self.__dict__ for string formatting
# H703: Multiple positional placeholders
ignore = F812,H101,H202,H233,H301,H306,H401,H403,H404,H405,H501,H703
exclude = .venv,.tox,dist,*egg
show-source = True
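
# Example invocations (assuming tox is installed): "tox -e pep8" runs only the
# flake8 checks configured above, "tox -e func" drives ./.functests, and
# "tox -e cover" writes an HTML coverage report under ./cover.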
swift-2.7.1/setup.cfg0000664000567000056710000000757413024044470015652 0ustar  jenkinsjenkins00000000000000[metadata]
name = swift
summary = OpenStack Object Storage
description-file = 
	README.md
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier = 
	Development Status :: 5 - Production/Stable
	Environment :: OpenStack
	Intended Audience :: Information Technology
	Intended Audience :: System Administrators
	License :: OSI Approved :: Apache Software License
	Operating System :: POSIX :: Linux
	Programming Language :: Python
	Programming Language :: Python :: 2
	Programming Language :: Python :: 2.7

[pbr]
skip_authors = True
skip_changelog = True

[files]
packages = 
	swift
scripts = 
	bin/swift-account-audit
	bin/swift-account-auditor
	bin/swift-account-info
	bin/swift-account-reaper
	bin/swift-account-replicator
	bin/swift-account-server
	bin/swift-config
	bin/swift-container-auditor
	bin/swift-container-info
	bin/swift-container-replicator
	bin/swift-container-server
	bin/swift-container-sync
	bin/swift-container-updater
	bin/swift-container-reconciler
	bin/swift-reconciler-enqueue
	bin/swift-dispersion-populate
	bin/swift-dispersion-report
	bin/swift-drive-audit
	bin/swift-form-signature
	bin/swift-get-nodes
	bin/swift-init
	bin/swift-object-auditor
	bin/swift-object-expirer
	bin/swift-object-info
	bin/swift-object-replicator
	bin/swift-object-reconstructor
	bin/swift-object-server
	bin/swift-object-updater
	bin/swift-oldies
	bin/swift-orphans
	bin/swift-proxy-server
	bin/swift-recon
	bin/swift-recon-cron
	bin/swift-ring-builder
	bin/swift-ring-builder-analyzer
	bin/swift-temp-url

[entry_points]
paste.app_factory = 
	proxy = swift.proxy.server:app_factory
	object = swift.obj.server:app_factory
	mem_object = swift.obj.mem_server:app_factory
	container = swift.container.server:app_factory
	account = swift.account.server:app_factory
paste.filter_factory = 
	healthcheck = swift.common.middleware.healthcheck:filter_factory
	crossdomain = swift.common.middleware.crossdomain:filter_factory
	memcache = swift.common.middleware.memcache:filter_factory
	ratelimit = swift.common.middleware.ratelimit:filter_factory
	cname_lookup = swift.common.middleware.cname_lookup:filter_factory
	catch_errors = swift.common.middleware.catch_errors:filter_factory
	domain_remap = swift.common.middleware.domain_remap:filter_factory
	staticweb = swift.common.middleware.staticweb:filter_factory
	tempauth = swift.common.middleware.tempauth:filter_factory
	keystoneauth = swift.common.middleware.keystoneauth:filter_factory
	recon = swift.common.middleware.recon:filter_factory
	tempurl = swift.common.middleware.tempurl:filter_factory
	formpost = swift.common.middleware.formpost:filter_factory
	name_check = swift.common.middleware.name_check:filter_factory
	bulk = swift.common.middleware.bulk:filter_factory
	container_quotas = swift.common.middleware.container_quotas:filter_factory
	account_quotas = swift.common.middleware.account_quotas:filter_factory
	proxy_logging = swift.common.middleware.proxy_logging:filter_factory
	dlo = swift.common.middleware.dlo:filter_factory
	slo = swift.common.middleware.slo:filter_factory
	list_endpoints = swift.common.middleware.list_endpoints:filter_factory
	gatekeeper = swift.common.middleware.gatekeeper:filter_factory
	container_sync = swift.common.middleware.container_sync:filter_factory
	xprofile = swift.common.middleware.xprofile:filter_factory
	versioned_writes = swift.common.middleware.versioned_writes:filter_factory

[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source

[egg_info]
tag_build = 
tag_date = 0
tag_svn_revision = 0

[compile_catalog]
directory = swift/locale
domain = swift

[update_catalog]
domain = swift
output_dir = swift/locale
input_file = swift/locale/swift.pot

[extract_messages]
keywords = _ l_ lazy_gettext
mapping_file = babel.cfg
output_file = swift/locale/swift.pot

[nosetests]
exe = 1
verbosity = 2
detailed-errors = 1
cover-package = swift
cover-html = true
cover-erase = true

swift-2.7.1/swift/0000775000567000056710000000000013024044470015150 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/__init__.py0000664000567000056710000000305313024044352017261 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import gettext

import pkg_resources

try:
    # First, try to get our version out of PKG-INFO. If we're installed,
    # this'll let us find our version without pulling in pbr. After all, if
    # we're installed on a system, we're not in a Git-managed source tree, so
    # pbr doesn't really buy us anything.
    __version__ = __canonical_version__ = pkg_resources.get_provider(
        pkg_resources.Requirement.parse('swift')).version
except pkg_resources.DistributionNotFound:
    # No PKG-INFO? We're probably running from a checkout, then. Let pbr do
    # its thing to figure out a version number.
    import pbr.version
    _version_info = pbr.version.VersionInfo('swift')
    __version__ = _version_info.release_string()
    __canonical_version__ = _version_info.version_string()

_localedir = os.environ.get('SWIFT_LOCALEDIR')
_t = gettext.translation('swift', localedir=_localedir, fallback=True)


def gettext_(msg):
    return _t.gettext(msg)
swift-2.7.1/swift/proxy/0000775000567000056710000000000013024044470016331 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/proxy/__init__.py0000664000567000056710000000000013024044352020427 0ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/proxy/controllers/0000775000567000056710000000000013024044470020677 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/proxy/controllers/base.py0000664000567000056710000021543113024044354022172 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# NOTE: swift_conn
# You'll see swift_conn passed around a few places in this file. This is the
# source bufferedhttp connection of whatever it is attached to.
#   It is used when early termination of reading from the connection should
# happen, such as when a range request is satisfied but there's still more the
# source connection would like to send. To prevent having to read all the data
# that could be left, the source connection can be .close()d and then reads
# commence to empty out any buffers.
#   These shenanigans are to ensure all related objects can be garbage
# collected. We've seen objects hang around forever otherwise.

from six.moves.urllib.parse import quote

import os
import time
import functools
import inspect
import itertools
import operator
from sys import exc_info
from swift import gettext_ as _

from eventlet import sleep
from eventlet.timeout import Timeout
import six

from swift.common.wsgi import make_pre_authed_env
from swift.common.utils import Timestamp, config_true_value, \
    public, split_path, list_from_csv, GreenthreadSafeIterator, \
    GreenAsyncPile, quorum_size, parse_content_type, \
    document_iters_to_http_response_body
from swift.common.bufferedhttp import http_connect
from swift.common.exceptions import ChunkReadTimeout, ChunkWriteTimeout, \
    ConnectionTimeout, RangeAlreadyComplete
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.http import is_informational, is_success, is_redirection, \
    is_server_error, HTTP_OK, HTTP_PARTIAL_CONTENT, HTTP_MULTIPLE_CHOICES, \
    HTTP_BAD_REQUEST, HTTP_NOT_FOUND, HTTP_SERVICE_UNAVAILABLE, \
    HTTP_INSUFFICIENT_STORAGE, HTTP_UNAUTHORIZED, HTTP_CONTINUE
from swift.common.swob import Request, Response, Range, \
    HTTPException, HTTPRequestedRangeNotSatisfiable, HTTPServiceUnavailable, \
    status_map
from swift.common.request_helpers import strip_sys_meta_prefix, \
    strip_user_meta_prefix, is_user_meta, is_sys_meta, is_sys_or_user_meta, \
    http_response_to_document_iters
from swift.common.storage_policy import POLICIES


def update_headers(response, headers):
    """
    Helper function to update headers in the response.

    :param response: swob.Response object
    :param headers: dictionary headers
    """
    if hasattr(headers, 'items'):
        headers = headers.items()
    for name, value in headers:
        if name == 'etag':
            response.headers[name] = value.replace('"', '')
        elif name not in ('date', 'content-length', 'content-type',
                          'connection', 'x-put-timestamp', 'x-delete-after'):
            response.headers[name] = value


def source_key(resp):
    """
    Provide the timestamp of the swift http response as a floating
    point value.  Used as a sort key.

    :param resp: bufferedhttp response object
    """
    return Timestamp(resp.getheader('x-backend-timestamp') or
                     resp.getheader('x-put-timestamp') or
                     resp.getheader('x-timestamp') or 0)


def delay_denial(func):
    """
    Decorator to declare which methods should have any swift.authorize call
    delayed. This is so the method can load the Request object up with
    additional information that may be needed by the authorization system.

    :param func: function for which authorization will be delayed
    """
    func.delay_denial = True
    return func


def get_account_memcache_key(account):
    cache_key, env_key = _get_cache_key(account, None)
    return cache_key


def get_container_memcache_key(account, container):
    if not container:
        raise ValueError("container not provided")
    cache_key, env_key = _get_cache_key(account, container)
    return cache_key


def _prep_headers_to_info(headers, server_type):
    """
    Helper method that iterates once over a dict of headers,
    converting all keys to lower case and separating
    into subsets containing user metadata, system metadata
    and other headers.
    """
    meta = {}
    sysmeta = {}
    other = {}
    for key, val in dict(headers).items():
        lkey = key.lower()
        if is_user_meta(server_type, lkey):
            meta[strip_user_meta_prefix(server_type, lkey)] = val
        elif is_sys_meta(server_type, lkey):
            sysmeta[strip_sys_meta_prefix(server_type, lkey)] = val
        else:
            other[lkey] = val
    return other, meta, sysmeta
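
# Illustrative split (header values assumed): for server_type 'account', a
# headers dict such as {'X-Account-Meta-Temp-Key': 'k',
# 'X-Account-Sysmeta-Foo': 'bar', 'Content-Length': '0'} yields
# other={'content-length': '0'}, meta={'temp-key': 'k'} and
# sysmeta={'foo': 'bar'}.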


def headers_to_account_info(headers, status_int=HTTP_OK):
    """
    Construct a cacheable dict of account info based on response headers.
    """
    headers, meta, sysmeta = _prep_headers_to_info(headers, 'account')
    return {
        'status': status_int,
        # 'container_count' anomaly:
        # Previous code sometimes expects an int, sometimes a string
        # Current code aligns to str and None, yet translates to int in
        # deprecated functions as needed
        'container_count': headers.get('x-account-container-count'),
        'total_object_count': headers.get('x-account-object-count'),
        'bytes': headers.get('x-account-bytes-used'),
        'meta': meta,
        'sysmeta': sysmeta
    }


def headers_to_container_info(headers, status_int=HTTP_OK):
    """
    Construct a cacheable dict of container info based on response headers.
    """
    headers, meta, sysmeta = _prep_headers_to_info(headers, 'container')
    return {
        'status': status_int,
        'read_acl': headers.get('x-container-read'),
        'write_acl': headers.get('x-container-write'),
        'sync_key': headers.get('x-container-sync-key'),
        'object_count': headers.get('x-container-object-count'),
        'bytes': headers.get('x-container-bytes-used'),
        'versions': headers.get('x-versions-location'),
        'storage_policy': headers.get('x-backend-storage-policy-index', '0'),
        'cors': {
            'allow_origin': meta.get('access-control-allow-origin'),
            'expose_headers': meta.get('access-control-expose-headers'),
            'max_age': meta.get('access-control-max-age')
        },
        'meta': meta,
        'sysmeta': sysmeta
    }


def headers_to_object_info(headers, status_int=HTTP_OK):
    """
    Construct a cacheable dict of object info based on response headers.
    """
    headers, meta, sysmeta = _prep_headers_to_info(headers, 'object')
    info = {'status': status_int,
            'length': headers.get('content-length'),
            'type': headers.get('content-type'),
            'etag': headers.get('etag'),
            'meta': meta,
            'sysmeta': sysmeta
            }
    return info


def cors_validation(func):
    """
    Decorator to check if the request is a CORS request and if so, if it's
    valid.

    :param func: function to check
    """
    @functools.wraps(func)
    def wrapped(*a, **kw):
        controller = a[0]
        req = a[1]

        # The logic here was interpreted from
        #    http://www.w3.org/TR/cors/#resource-requests

        # Is this a CORS request?
        req_origin = req.headers.get('Origin', None)
        if req_origin:
            # Yes, this is a CORS request so test if the origin is allowed
            container_info = \
                controller.container_info(controller.account_name,
                                          controller.container_name, req)
            cors_info = container_info.get('cors', {})

            # Call through to the decorated method
            resp = func(*a, **kw)

            if controller.app.strict_cors_mode and \
                    not controller.is_origin_allowed(cors_info, req_origin):
                return resp

            # Expose,
            #  - simple response headers,
            #    http://www.w3.org/TR/cors/#simple-response-header
            #  - swift specific: etag, x-timestamp, x-trans-id
            #  - user metadata headers
            #  - headers provided by the user in
            #    x-container-meta-access-control-expose-headers
            if 'Access-Control-Expose-Headers' not in resp.headers:
                expose_headers = set([
                    'cache-control', 'content-language', 'content-type',
                    'expires', 'last-modified', 'pragma', 'etag',
                    'x-timestamp', 'x-trans-id'])
                for header in resp.headers:
                    if header.startswith('X-Container-Meta') or \
                            header.startswith('X-Object-Meta'):
                        expose_headers.add(header.lower())
                if cors_info.get('expose_headers'):
                    expose_headers = expose_headers.union(
                        [header_line.strip().lower()
                         for header_line in
                         cors_info['expose_headers'].split(' ')
                         if header_line.strip()])
                resp.headers['Access-Control-Expose-Headers'] = \
                    ', '.join(expose_headers)

            # The user agent won't process the response if the Allow-Origin
            # header isn't included
            if 'Access-Control-Allow-Origin' not in resp.headers:
                if cors_info['allow_origin'] and \
                        cors_info['allow_origin'].strip() == '*':
                    resp.headers['Access-Control-Allow-Origin'] = '*'
                else:
                    resp.headers['Access-Control-Allow-Origin'] = req_origin

            return resp
        else:
            # Not a CORS request so make the call as normal
            return func(*a, **kw)

    return wrapped
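
# Illustrative usage sketch (decorator order is an assumption; see the actual
# account/container/object controllers): a verb handler is typically wrapped
# along the lines of
#
#     @public
#     @cors_validation
#     def GET(self, req):
#         ...
#
# so the CORS response headers are only added when the incoming request
# carries an Origin header.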


def get_object_info(env, app, path=None, swift_source=None):
    """
    Get the info structure for an object, based on env and app.
    This is useful to middlewares.

    .. note::

        This call bypasses auth. Success does not imply that the request has
        authorization to the object.
    """
    (version, account, container, obj) = \
        split_path(path or env['PATH_INFO'], 4, 4, True)
    info = _get_object_info(app, env, account, container, obj,
                            swift_source=swift_source)
    if not info:
        info = headers_to_object_info({}, 0)
    return info


def get_container_info(env, app, swift_source=None):
    """
    Get the info structure for a container, based on env and app.
    This is useful to middlewares.

    .. note::

        This call bypasses auth. Success does not imply that the request has
        authorization to the container.
    """
    (version, account, container, unused) = \
        split_path(env['PATH_INFO'], 3, 4, True)
    info = get_info(app, env, account, container, ret_not_found=True,
                    swift_source=swift_source)
    if not info:
        info = headers_to_container_info({}, 0)
    info.setdefault('storage_policy', '0')
    return info


def get_account_info(env, app, swift_source=None):
    """
    Get the info structure for an account, based on env and app.
    This is useful to middlewares.

    .. note::

        This call bypasses auth. Success does not imply that the request has
        authorization to the account.

    :raises ValueError: when path can't be split(path, 2, 4)
    """
    (version, account, _junk, _junk) = \
        split_path(env['PATH_INFO'], 2, 4, True)
    info = get_info(app, env, account, ret_not_found=True,
                    swift_source=swift_source)
    if not info:
        info = headers_to_account_info({}, 0)
    if info.get('container_count') is None:
        info['container_count'] = 0
    else:
        info['container_count'] = int(info['container_count'])
    return info
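
# Minimal middleware-style sketch (hypothetical middleware; only the helper
# calls above are real): cached info can be consulted without re-authorizing,
# e.g. inside a middleware's __call__:
#
#     container_info = get_container_info(env, self.app, swift_source='EX')
#     account_info = get_account_info(env, self.app, swift_source='EX')
#     # container_info['object_count'] and account_info['container_count']
#     # are then available without another backend round trip if cached.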


def _get_cache_key(account, container):
    """
    Get the keys for both memcache (cache_key) and env (env_key)
    where info about accounts and containers is cached

    :param   account: The name of the account
    :param container: The name of the container (or None if account)
    :returns: a tuple of (cache_key, env_key)
    """

    if container:
        cache_key = 'container/%s/%s' % (account, container)
    else:
        cache_key = 'account/%s' % account
    # Use a unique environment cache key per account and per container.
    # This allows caching both account and container and ensures that when we
    # copy this env to form a new request, it won't accidentally reuse the
    # old container or account info
    env_key = 'swift.%s' % cache_key
    return cache_key, env_key
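
# For example, _get_cache_key('AUTH_test', 'photos') returns
# ('container/AUTH_test/photos', 'swift.container/AUTH_test/photos'), while
# _get_cache_key('AUTH_test', None) returns
# ('account/AUTH_test', 'swift.account/AUTH_test').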


def get_object_env_key(account, container, obj):
    """
    Get the keys for env (env_key) where info about object is cached

    :param   account: The name of the account
    :param container: The name of the container
    :param obj: The name of the object
    :returns: a string env_key
    """
    env_key = 'swift.object/%s/%s/%s' % (account,
                                         container, obj)
    return env_key


def _set_info_cache(app, env, account, container, resp):
    """
    Cache info in both memcache and env.

    Caching is used to avoid unnecessary calls to account & container servers.
    This is a private function that is being called by GETorHEAD_base and
    by clear_info_cache.
    Any attempt to GET or HEAD from the container/account server should use
    the GETorHEAD_base interface which would then set the cache.

    :param  app: the application object
    :param  account: the unquoted account name
    :param  container: the unquoted container name or None
    :param resp: the response received or None if info cache should be cleared
    """

    if container:
        cache_time = app.recheck_container_existence
    else:
        cache_time = app.recheck_account_existence
    cache_key, env_key = _get_cache_key(account, container)

    if resp:
        if resp.status_int == HTTP_NOT_FOUND:
            cache_time *= 0.1
        elif not is_success(resp.status_int):
            cache_time = None
    else:
        cache_time = None

    # Next actually set both memcache and the env cache
    memcache = getattr(app, 'memcache', None) or env.get('swift.cache')
    if not cache_time:
        env.pop(env_key, None)
        if memcache:
            memcache.delete(cache_key)
        return

    if container:
        info = headers_to_container_info(resp.headers, resp.status_int)
    else:
        info = headers_to_account_info(resp.headers, resp.status_int)
    if memcache:
        memcache.set(cache_key, info, time=cache_time)
    env[env_key] = info
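
# Illustrative timing (actual values come from the proxy configuration): with
# recheck_container_existence set to 60, a successful container HEAD result is
# cached for 60 seconds, a 404 for only 6 seconds (cache_time * 0.1), and any
# other error response is not cached at all.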


def _set_object_info_cache(app, env, account, container, obj, resp):
    """
    Cache object info in the WSGI env. Do not cache object information in
    memcache. This is an intentional omission as it would lead
    to cache pressure. This is a per-request cache.

    Caching is used to avoid unnecessary calls to object servers.
    This is a private function that is being called by GETorHEAD_base.
    Any attempt to GET or HEAD from the object server should use
    the GETorHEAD_base interface which would then set the cache.

    :param  app: the application object
    :param  account: the unquoted account name
    :param  container: the unquoted container name or None
    :param  obj: the unquoted object name or None
    :param resp: the response received or None if info cache should be cleared
    """

    env_key = get_object_env_key(account, container, obj)

    if not resp:
        env.pop(env_key, None)
        return

    info = headers_to_object_info(resp.headers, resp.status_int)
    env[env_key] = info


def clear_info_cache(app, env, account, container=None):
    """
    Clear the cached info in both memcache and env

    :param  app: the application object
    :param  account: the account name
    :param  container: the container name or None if clearing account info
    """
    _set_info_cache(app, env, account, container, None)


def _get_info_cache(app, env, account, container=None):
    """
    Get the cached info from env or memcache (if used) in that order
    Used for both account and container info
    A private function used by get_info

    :param  app: the application object
    :param  env: the environment used by the current request
    :returns: the cached info or None if not cached
    """

    cache_key, env_key = _get_cache_key(account, container)
    if env_key in env:
        return env[env_key]
    memcache = getattr(app, 'memcache', None) or env.get('swift.cache')
    if memcache:
        info = memcache.get(cache_key)
        if info:
            for key in info:
                if isinstance(info[key], six.text_type):
                    info[key] = info[key].encode("utf-8")
                if isinstance(info[key], dict):
                    for subkey, value in info[key].items():
                        if isinstance(value, six.text_type):
                            info[key][subkey] = value.encode("utf-8")
            env[env_key] = info
        return info
    return None


def _prepare_pre_auth_info_request(env, path, swift_source):
    """
    Prepares a pre authed request to obtain info using a HEAD.

    :param env: the environment used by the current request
    :param path: The unquoted request path
    :param swift_source: value for swift.source in WSGI environment
    :returns: the pre authed request
    """
    # Set the env for the pre_authed call without a query string
    newenv = make_pre_authed_env(env, 'HEAD', path, agent='Swift',
                                 query_string='', swift_source=swift_source)
    # This is a sub request for container metadata; drop the Origin header from
    # the request so that it is not treated as a CORS request.
    newenv.pop('HTTP_ORIGIN', None)
    # Note that Request.blank expects quoted path
    return Request.blank(quote(path), environ=newenv)


def get_info(app, env, account, container=None, ret_not_found=False,
             swift_source=None):
    """
    Get the info about accounts or containers

    Note: This call bypasses auth. Success does not imply that the
          request has authorization to the info.

    :param app: the application object
    :param env: the environment used by the current request
    :param account: The unquoted name of the account
    :param container: The unquoted name of the container (or None if account)
    :returns: the cached info or None if cannot be retrieved
    """
    info = _get_info_cache(app, env, account, container)
    if info:
        if ret_not_found or is_success(info['status']):
            return info
        return None
    # Not in cache, let's try the account servers
    path = '/v1/%s' % account
    if container:
        # First check whether the account exists
        if not get_info(app, env, account) and not account.startswith(
                getattr(app, 'auto_create_account_prefix', '.')):
            return None
        path += '/' + container

    req = _prepare_pre_auth_info_request(
        env, path, (swift_source or 'GET_INFO'))
    # Whenever we do a GET/HEAD, the GETorHEAD_base will set the info in
    # the environment under environ[env_key] and in memcache. We will
    # pick the one from environ[env_key] and use it to set the caller env
    resp = req.get_response(app)
    cache_key, env_key = _get_cache_key(account, container)
    try:
        info = resp.environ[env_key]
        env[env_key] = info
        if ret_not_found or is_success(info['status']):
            return info
    except (KeyError, AttributeError):
        pass
    return None


def _get_object_info(app, env, account, container, obj, swift_source=None):
    """
    Get the info about object

    Note: This call bypasses auth. Success does not imply that the
          request has authorization to the info.

    :param app: the application object
    :param env: the environment used by the current request
    :param account: The unquoted name of the account
    :param container: The unquoted name of the container
    :param obj: The unquoted name of the object
    :returns: the cached info or None if cannot be retrieved
    """
    env_key = get_object_env_key(account, container, obj)
    info = env.get(env_key)
    if info:
        return info
    # Not in cache, let's try the object servers
    path = '/v1/%s/%s/%s' % (account, container, obj)
    req = _prepare_pre_auth_info_request(env, path, swift_source)
    # Whenever we do a GET/HEAD, the GETorHEAD_base will set the info in
    # the environment under environ[env_key]. We will
    # pick the one from environ[env_key] and use it to set the caller env
    resp = req.get_response(app)
    try:
        info = resp.environ[env_key]
        env[env_key] = info
        return info
    except (KeyError, AttributeError):
        pass
    return None


def close_swift_conn(src):
    """
    Force close the http connection to the backend.

    :param src: the response from the backend
    """
    try:
        # Since the backends set "Connection: close" in their response
        # headers, the response object (src) is solely responsible for the
        # socket. The connection object (src.swift_conn) has no references
        # to the socket, so calling its close() method does nothing, and
        # therefore we don't do it.
        #
        # Also, since calling the response's close() method might not
        # close the underlying socket but only decrement some
        # reference-counter, we have a special method here that really,
        # really kills the underlying socket with a close() syscall.
        src.nuke_from_orbit()  # it's the only way to be sure
    except Exception:
        pass


def bytes_to_skip(record_size, range_start):
    """
    Assume an object is composed of N records, where the first N-1 are all
    the same size and the last is at most that large, but may be smaller.

    When a range request is made, it might start with a partial record. This
    must be discarded, lest the consumer get bad data. This is particularly
    true of suffix-byte-range requests, e.g. "Range: bytes=-12345" where the
    size of the object is unknown at the time the request is made.

    This function computes the number of bytes that must be discarded to
    ensure only whole records are yielded. Erasure-code decoding needs this.

    This function could have been inlined, but it took enough tries to get
    right that some targeted unit tests were desirable, hence its extraction.
    """
    return (record_size - (range_start % record_size)) % record_size
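
# For example, with 1 MiB records and a range starting at byte 1500000,
# bytes_to_skip(1048576, 1500000) == 597152: the partial first record is
# discarded so that decoding resumes on a record boundary.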


class ResumingGetter(object):
    def __init__(self, app, req, server_type, node_iter, partition, path,
                 backend_headers, concurrency=1, client_chunk_size=None,
                 newest=None):
        self.app = app
        self.node_iter = node_iter
        self.server_type = server_type
        self.partition = partition
        self.path = path
        self.backend_headers = backend_headers
        self.client_chunk_size = client_chunk_size
        self.skip_bytes = 0
        self.used_nodes = []
        self.used_source_etag = ''
        self.concurrency = concurrency

        # stuff from request
        self.req_method = req.method
        self.req_path = req.path
        self.req_query_string = req.query_string
        if newest is None:
            self.newest = config_true_value(req.headers.get('x-newest', 'f'))
        else:
            self.newest = newest

        # populated when finding source
        self.statuses = []
        self.reasons = []
        self.bodies = []
        self.source_headers = []
        self.sources = []

        # populated from response headers
        self.start_byte = self.end_byte = self.length = None

    def fast_forward(self, num_bytes):
        """
        Will skip num_bytes into the current ranges.

        :params num_bytes: the number of bytes that have already been read on
                           this request. This will change the Range header
                           so that the next req will start where it left off.

        :raises ValueError: if invalid range header
        :raises HTTPRequestedRangeNotSatisfiable: if begin + num_bytes
                                                  > end of range + 1
        :raises RangeAlreadyComplete: if begin + num_bytes == end of range + 1
        """
        if 'Range' in self.backend_headers:
            req_range = Range(self.backend_headers['Range'])

            begin, end = req_range.ranges[0]
            if begin is None:
                # this is a -50 range req (last 50 bytes of file)
                end -= num_bytes
            else:
                begin += num_bytes
            if end and begin == end + 1:
                # we sent out exactly the first range's worth of bytes, so
                # we're done with it
                raise RangeAlreadyComplete()
            elif end and begin > end:
                raise HTTPRequestedRangeNotSatisfiable()
            elif end and begin:
                req_range.ranges = [(begin, end)] + req_range.ranges[1:]
            elif end:
                req_range.ranges = [(None, end)] + req_range.ranges[1:]
            else:
                req_range.ranges = [(begin, None)] + req_range.ranges[1:]

            self.backend_headers['Range'] = str(req_range)
        else:
            self.backend_headers['Range'] = 'bytes=%d-' % num_bytes
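
    # For example, a backend Range of 'bytes=0-1023' becomes 'bytes=100-1023'
    # after fast_forward(100), a suffix range of 'bytes=-50' becomes
    # 'bytes=-40' after fast_forward(10), and with no Range header at all
    # fast_forward(100) simply sets 'bytes=100-'.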

    def pop_range(self):
        """
        Remove the first byterange from our Range header.

        This is used after a byterange has been completely sent to the
        client; this way, should we need to resume the download from another
        object server, we do not re-fetch byteranges that the client already
        has.

        If we have no Range header, this is a no-op.
        """
        if 'Range' in self.backend_headers:
            try:
                req_range = Range(self.backend_headers['Range'])
            except ValueError:
                # there's a Range header, but it's garbage, so get rid of it
                self.backend_headers.pop('Range')
                return
            begin, end = req_range.ranges.pop(0)
            if len(req_range.ranges) > 0:
                self.backend_headers['Range'] = str(req_range)
            else:
                self.backend_headers.pop('Range')

    def learn_size_from_content_range(self, start, end, length):
        """
        If client_chunk_size is set, makes sure we yield things starting on
        chunk boundaries based on the Content-Range header in the response.

        Sets our Range header's first byterange to the value learned from
        the Content-Range header in the response; if we were given a
        fully-specified range (e.g. "bytes=123-456"), this is a no-op.

        If we were given a half-specified range (e.g. "bytes=123-" or
        "bytes=-456"), then this changes the Range header to a
        semantically-equivalent one *and* it lets us resume on a proper
        boundary instead of just in the middle of a piece somewhere.
        """
        if length == 0:
            return

        if self.client_chunk_size:
            self.skip_bytes = bytes_to_skip(self.client_chunk_size, start)

        if 'Range' in self.backend_headers:
            try:
                req_range = Range(self.backend_headers['Range'])
                new_ranges = [(start, end)] + req_range.ranges[1:]
            except ValueError:
                new_ranges = [(start, end)]
        else:
            new_ranges = [(start, end)]

        self.backend_headers['Range'] = (
            "bytes=" + (",".join("%s-%s" % (s if s is not None else '',
                                            e if e is not None else '')
                                 for s, e in new_ranges)))
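
    # For example, if the client asked for 'bytes=-456' of a 1000-byte object
    # and the backend responded with Content-Range 'bytes 544-999/1000', the
    # backend Range header is rewritten to 'bytes=544-999' so that a resumed
    # GET starts at a well-defined offset.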

    def is_good_source(self, src):
        """
        Indicates whether or not the request made to the backend found
        what it was looking for.

        :param src: the response from the backend
        :returns: True if found, False if not
        """
        if self.server_type == 'Object' and src.status == 416:
            return True
        return is_success(src.status) or is_redirection(src.status)

    def response_parts_iter(self, req):
        source, node = self._get_source_and_node()
        it = None
        if source:
            it = self._get_response_parts_iter(req, node, source)
        return it

    def _get_response_parts_iter(self, req, node, source):
        # Someday we can replace this [mess] with python 3's "nonlocal"
        source = [source]
        node = [node]

        try:
            client_chunk_size = self.client_chunk_size
            node_timeout = self.app.node_timeout
            if self.server_type == 'Object':
                node_timeout = self.app.recoverable_node_timeout

            # This is safe; it sets up a generator but does not call next()
            # on it, so no IO is performed.
            parts_iter = [
                http_response_to_document_iters(
                    source[0], read_chunk_size=self.app.object_chunk_size)]

            def get_next_doc_part():
                while True:
                    try:
                        # This call to next() performs IO when we have a
                        # multipart/byteranges response; it reads the MIME
                        # boundary and part headers.
                        #
                        # If we don't have a multipart/byteranges response,
                        # but just a 200 or a single-range 206, then this
                        # performs no IO, and either just returns source or
                        # raises StopIteration.
                        with ChunkReadTimeout(node_timeout):
                            # if StopIteration is raised, it escapes and is
                            # handled elsewhere
                            start_byte, end_byte, length, headers, part = next(
                                parts_iter[0])
                        return (start_byte, end_byte, length, headers, part)
                    except ChunkReadTimeout:
                        new_source, new_node = self._get_source_and_node()
                        if new_source:
                            self.app.exception_occurred(
                                node[0], _('Object'),
                                _('Trying to read during GET (retrying)'))
                            # Close-out the connection as best as possible.
                            if getattr(source[0], 'swift_conn', None):
                                close_swift_conn(source[0])
                            source[0] = new_source
                            node[0] = new_node
                            # This is safe; it sets up a generator but does
                            # not call next() on it, so no IO is performed.
                            parts_iter[0] = http_response_to_document_iters(
                                new_source,
                                read_chunk_size=self.app.object_chunk_size)
                        else:
                            raise StopIteration()

            def iter_bytes_from_response_part(part_file):
                nchunks = 0
                buf = ''
                bytes_used_from_backend = 0
                while True:
                    try:
                        with ChunkReadTimeout(node_timeout):
                            chunk = part_file.read(self.app.object_chunk_size)
                            nchunks += 1
                            buf += chunk
                    except ChunkReadTimeout:
                        exc_type, exc_value, exc_traceback = exc_info()
                        if self.newest or self.server_type != 'Object':
                            six.reraise(exc_type, exc_value, exc_traceback)
                        try:
                            self.fast_forward(bytes_used_from_backend)
                        except (HTTPException, ValueError):
                            six.reraise(exc_type, exc_value, exc_traceback)
                        except RangeAlreadyComplete:
                            break
                        buf = ''
                        new_source, new_node = self._get_source_and_node()
                        if new_source:
                            self.app.exception_occurred(
                                node[0], _('Object'),
                                _('Trying to read during GET (retrying)'))
                            # Close-out the connection as best as possible.
                            if getattr(source[0], 'swift_conn', None):
                                close_swift_conn(source[0])
                            source[0] = new_source
                            node[0] = new_node
                            # This is safe; it just sets up a generator but
                            # does not call next() on it, so no IO is
                            # performed.
                            parts_iter[0] = http_response_to_document_iters(
                                new_source,
                                read_chunk_size=self.app.object_chunk_size)

                            try:
                                _junk, _junk, _junk, _junk, part_file = \
                                    get_next_doc_part()
                            except StopIteration:
                                # Tried to find a new node from which to
                                # finish the GET, but failed. There's
                                # nothing more to do here.
                                return
                        else:
                            six.reraise(exc_type, exc_value, exc_traceback)
                    else:
                        if buf and self.skip_bytes:
                            if self.skip_bytes < len(buf):
                                buf = buf[self.skip_bytes:]
                                bytes_used_from_backend += self.skip_bytes
                                self.skip_bytes = 0
                            else:
                                self.skip_bytes -= len(buf)
                                bytes_used_from_backend += len(buf)
                                buf = ''

                        if not chunk:
                            if buf:
                                with ChunkWriteTimeout(
                                        self.app.client_timeout):
                                    bytes_used_from_backend += len(buf)
                                    yield buf
                                buf = ''
                            break

                        if client_chunk_size is not None:
                            while len(buf) >= client_chunk_size:
                                client_chunk = buf[:client_chunk_size]
                                buf = buf[client_chunk_size:]
                                with ChunkWriteTimeout(
                                        self.app.client_timeout):
                                    yield client_chunk
                                bytes_used_from_backend += len(client_chunk)
                        else:
                            with ChunkWriteTimeout(self.app.client_timeout):
                                yield buf
                            bytes_used_from_backend += len(buf)
                            buf = ''

                        # This is for fairness; if the network is outpacing
                        # the CPU, we'll always be able to read and write
                        # data without encountering an EWOULDBLOCK, and so
                        # eventlet will not switch greenthreads on its own.
                        # We do it manually so that clients don't starve.
                        #
                        # The number 5 here was chosen by making stuff up.
                        # It's not every single chunk, but it's not too big
                        # either, so it seemed like it would probably be an
                        # okay choice.
                        #
                        # Note that we may trampoline to other greenthreads
                        # more often than once every 5 chunks, depending on
                        # how blocking our network IO is; the explicit sleep
                        # here simply provides a lower bound on the rate of
                        # trampolining.
                        if nchunks % 5 == 0:
                            sleep()

            try:
                while True:
                    start_byte, end_byte, length, headers, part = \
                        get_next_doc_part()
                    self.learn_size_from_content_range(
                        start_byte, end_byte, length)
                    part_iter = iter_bytes_from_response_part(part)
                    yield {'start_byte': start_byte, 'end_byte': end_byte,
                           'entity_length': length, 'headers': headers,
                           'part_iter': part_iter}
                    self.pop_range()
            except StopIteration:
                req.environ['swift.non_client_disconnect'] = True

        except ChunkReadTimeout:
            self.app.exception_occurred(node[0], _('Object'),
                                        _('Trying to read during GET'))
            raise
        except ChunkWriteTimeout:
            self.app.logger.warning(
                _('Client did not read from proxy within %ss') %
                self.app.client_timeout)
            self.app.logger.increment('client_timeouts')
        except GeneratorExit:
            if not req.environ.get('swift.non_client_disconnect'):
                self.app.logger.warning(_('Client disconnected on read'))
        except Exception:
            self.app.logger.exception(_('Trying to send to client'))
            raise
        finally:
            # Close-out the connection as best as possible.
            if getattr(source[0], 'swift_conn', None):
                close_swift_conn(source[0])

    @property
    def last_status(self):
        if self.statuses:
            return self.statuses[-1]
        else:
            return None

    @property
    def last_headers(self):
        if self.source_headers:
            return self.source_headers[-1]
        else:
            return None

    def _make_node_request(self, node, node_timeout, logger_thread_locals):
        self.app.logger.thread_locals = logger_thread_locals
        if node in self.used_nodes:
            return False
        start_node_timing = time.time()
        try:
            with ConnectionTimeout(self.app.conn_timeout):
                conn = http_connect(
                    node['ip'], node['port'], node['device'],
                    self.partition, self.req_method, self.path,
                    headers=self.backend_headers,
                    query_string=self.req_query_string)
            self.app.set_node_timing(node, time.time() - start_node_timing)

            with Timeout(node_timeout):
                possible_source = conn.getresponse()
                # See NOTE: swift_conn at top of file about this.
                possible_source.swift_conn = conn
        except (Exception, Timeout):
            self.app.exception_occurred(
                node, self.server_type,
                _('Trying to %(method)s %(path)s') %
                {'method': self.req_method, 'path': self.req_path})
            return False
        if self.is_good_source(possible_source):
            # 404 if we know we don't have a synced copy
            if not float(possible_source.getheader('X-PUT-Timestamp', 1)):
                self.statuses.append(HTTP_NOT_FOUND)
                self.reasons.append('')
                self.bodies.append('')
                self.source_headers.append([])
                close_swift_conn(possible_source)
            else:
                if self.used_source_etag:
                    src_headers = dict(
                        (k.lower(), v) for k, v in
                        possible_source.getheaders())

                    if self.used_source_etag != src_headers.get(
                            'x-object-sysmeta-ec-etag',
                            src_headers.get('etag', '')).strip('"'):
                        self.statuses.append(HTTP_NOT_FOUND)
                        self.reasons.append('')
                        self.bodies.append('')
                        self.source_headers.append([])
                        return False

                self.statuses.append(possible_source.status)
                self.reasons.append(possible_source.reason)
                self.bodies.append(None)
                self.source_headers.append(possible_source.getheaders())
                self.sources.append((possible_source, node))
                if not self.newest:  # one good source is enough
                    return True
        else:
            self.statuses.append(possible_source.status)
            self.reasons.append(possible_source.reason)
            self.bodies.append(possible_source.read())
            self.source_headers.append(possible_source.getheaders())
            if possible_source.status == HTTP_INSUFFICIENT_STORAGE:
                self.app.error_limit(node, _('ERROR Insufficient Storage'))
            elif is_server_error(possible_source.status):
                self.app.error_occurred(
                    node, _('ERROR %(status)d %(body)s '
                            'From %(type)s Server') %
                    {'status': possible_source.status,
                     'body': self.bodies[-1][:1024],
                     'type': self.server_type})
        return False

    def _get_source_and_node(self):
        self.statuses = []
        self.reasons = []
        self.bodies = []
        self.source_headers = []
        self.sources = []

        nodes = GreenthreadSafeIterator(self.node_iter)

        node_timeout = self.app.node_timeout
        if self.server_type == 'Object' and not self.newest:
            node_timeout = self.app.recoverable_node_timeout

        pile = GreenAsyncPile(self.concurrency)

        for node in nodes:
            pile.spawn(self._make_node_request, node, node_timeout,
                       self.app.logger.thread_locals)
            _timeout = self.app.concurrency_timeout \
                if pile.inflight < self.concurrency else None
            if pile.waitfirst(_timeout):
                break
        else:
            # ran out of nodes, see if any stragglers will finish
            any(pile)

        if self.sources:
            self.sources.sort(key=lambda s: source_key(s[0]))
            source, node = self.sources.pop()
            for src, _junk in self.sources:
                close_swift_conn(src)
            self.used_nodes.append(node)
            src_headers = dict(
                (k.lower(), v) for k, v in
                source.getheaders())

            # Save off the source etag so that, if we lose the connection
            # and have to resume from a different node, we can be sure that
            # we have the same object (replication) or a fragment archive
            # from the same object (EC). Otherwise, if the cluster has two
            # versions of the same object, we might end up switching between
            # old and new mid-stream and giving garbage to the client.
            self.used_source_etag = src_headers.get(
                'x-object-sysmeta-ec-etag',
                src_headers.get('etag', '')).strip('"')
            return source, node
        return None, None


class GetOrHeadHandler(ResumingGetter):
    def _make_app_iter(self, req, node, source):
        """
        Returns an iterator over the contents of the source (via its read
        func).  There is also quite a bit of cleanup to ensure garbage
        collection works and the underlying socket of the source is closed.

        :param req: incoming request object
        :param source: The httplib.Response object this iterator should read
                       from.
        :param node: The node the source is reading from, for logging purposes.
        """

        ct = source.getheader('Content-Type')
        if ct:
            content_type, content_type_attrs = parse_content_type(ct)
            is_multipart = content_type == 'multipart/byteranges'
        else:
            is_multipart = False

        boundary = "dontcare"
        if is_multipart:
            # we need some MIME boundary; fortunately, the object server has
            # furnished one for us, so we'll just re-use it
            boundary = dict(content_type_attrs)["boundary"]

        parts_iter = self._get_response_parts_iter(req, node, source)

        def add_content_type(response_part):
            response_part["content_type"] = \
                HeaderKeyDict(response_part["headers"]).get("Content-Type")
            return response_part

        return document_iters_to_http_response_body(
            (add_content_type(pi) for pi in parts_iter),
            boundary, is_multipart, self.app.logger)

    def get_working_response(self, req):
        source, node = self._get_source_and_node()
        res = None
        if source:
            res = Response(request=req)
            res.status = source.status
            update_headers(res, source.getheaders())
            if req.method == 'GET' and \
                    source.status in (HTTP_OK, HTTP_PARTIAL_CONTENT):
                res.app_iter = self._make_app_iter(req, node, source)
                # See NOTE: swift_conn at top of file about this.
                res.swift_conn = source.swift_conn
            if not res.environ:
                res.environ = {}
            res.environ['swift_x_timestamp'] = \
                source.getheader('x-timestamp')
            res.accept_ranges = 'bytes'
            res.content_length = source.getheader('Content-Length')
            if source.getheader('Content-Type'):
                res.charset = None
                res.content_type = source.getheader('Content-Type')
        return res


class NodeIter(object):
    """
    Yields nodes for a ring partition, skipping over error
    limited nodes and stopping at the configurable number of nodes. If a
    node yielded subsequently gets error limited, an extra node will be
    yielded to take its place.

    Note that if you're going to iterate over this concurrently from
    multiple greenthreads, you'll want to use a
    swift.common.utils.GreenthreadSafeIterator to serialize access.
    Otherwise, you may get ValueErrors from concurrent access. (You also
    may not, depending on how logging is configured, the vagaries of
    socket IO and eventlet, and the phase of the moon.)

    :param app: a proxy app
    :param ring: ring to get yield nodes from
    :param partition: ring partition to yield nodes for
    :param node_iter: optional iterable of nodes to try. Useful if you
        want to filter or reorder the nodes.
    """

    def __init__(self, app, ring, partition, node_iter=None):
        self.app = app
        self.ring = ring
        self.partition = partition

        part_nodes = ring.get_part_nodes(partition)
        if node_iter is None:
            node_iter = itertools.chain(
                part_nodes, ring.get_more_nodes(partition))
        num_primary_nodes = len(part_nodes)
        self.nodes_left = self.app.request_node_count(num_primary_nodes)
        self.expected_handoffs = self.nodes_left - num_primary_nodes

        # Use of list() here forcibly yanks the first N nodes (the primary
        # nodes) from node_iter, so the rest of its values are handoffs.
        self.primary_nodes = self.app.sort_nodes(
            list(itertools.islice(node_iter, num_primary_nodes)))
        self.handoff_iter = node_iter

    def __iter__(self):
        self._node_iter = self._node_gen()
        return self

    def log_handoffs(self, handoffs):
        """
        Log handoff requests if handoff logging is enabled and the
        handoff was not expected.

        We only log handoffs when we've pushed the handoff count further
        than we would have expected under normal circumstances, that is
        (request_node_count - num_primaries); when the handoff count goes
        higher than that, it means one of the primaries must have been
        skipped because of error limiting before we consumed all of our
        nodes_left.
        """
        if not self.app.log_handoffs:
            return
        extra_handoffs = handoffs - self.expected_handoffs
        if extra_handoffs > 0:
            self.app.logger.increment('handoff_count')
            self.app.logger.warning(
                'Handoff requested (%d)' % handoffs)
            if (extra_handoffs == len(self.primary_nodes)):
                # all the primaries were skipped, and handoffs didn't help
                self.app.logger.increment('handoff_all_count')

    def _node_gen(self):
        for node in self.primary_nodes:
            if not self.app.error_limited(node):
                yield node
                if not self.app.error_limited(node):
                    self.nodes_left -= 1
                    if self.nodes_left <= 0:
                        return
        handoffs = 0
        for node in self.handoff_iter:
            if not self.app.error_limited(node):
                handoffs += 1
                self.log_handoffs(handoffs)
                yield node
                if not self.app.error_limited(node):
                    self.nodes_left -= 1
                    if self.nodes_left <= 0:
                        return

    def next(self):
        return next(self._node_iter)

    def __next__(self):
        return self.next()


class Controller(object):
    """Base WSGI controller class for the proxy"""
    server_type = 'Base'

    # Ensure these are all lowercase
    pass_through_headers = []

    def __init__(self, app):
        """
        Creates a controller attached to an application instance

        :param app: the application instance
        """
        self.account_name = None
        self.app = app
        self.trans_id = '-'
        self._allowed_methods = None

    @property
    def allowed_methods(self):
        if self._allowed_methods is None:
            self._allowed_methods = set()
            all_methods = inspect.getmembers(self, predicate=inspect.ismethod)
            for name, m in all_methods:
                if getattr(m, 'publicly_accessible', False):
                    self._allowed_methods.add(name)
        return self._allowed_methods
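    # Illustrative sketch (not part of the original source): allowed_methods
    # is driven by the @public decorator, which sets publicly_accessible on a
    # handler so the inspection above can find it, e.g.:
    #
    #     @public
    #     def GET(self, req):          # ends up in self.allowed_methods
    #         return self.GETorHEAD(req)
    #
    #     def _helper(self, req):      # undecorated, so it is not exposed
    #         ...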

    def _x_remove_headers(self):
        """
        Returns a list of headers that must not be sent to the backend

        :returns: a list of headers
        """
        return []

    def transfer_headers(self, src_headers, dst_headers):
        """
        Transfer legal headers from an original client request to a dictionary
        that will be used as the headers of the backend request

        :param src_headers: A dictionary of the original client request headers
        :param dst_headers: A dictionary of the backend request headers
        """
        st = self.server_type.lower()

        x_remove = 'x-remove-%s-meta-' % st
        dst_headers.update((k.lower().replace('-remove', '', 1), '')
                           for k in src_headers
                           if k.lower().startswith(x_remove) or
                           k.lower() in self._x_remove_headers())

        dst_headers.update((k.lower(), v)
                           for k, v in src_headers.items()
                           if k.lower() in self.pass_through_headers or
                           is_sys_or_user_meta(st, k))
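    # Worked example (hypothetical header values) for an Account controller
    # (server_type 'Account'): given
    #
    #     src_headers = {'X-Account-Meta-Color': 'blue',
    #                    'X-Remove-Account-Meta-Size': 'x'}
    #
    # transfer_headers() would add 'x-account-meta-color': 'blue' (user
    # metadata is passed through) and 'x-account-meta-size': '' (the
    # '-remove' marker is stripped and the value blanked so the backend
    # deletes that piece of metadata).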

    def generate_request_headers(self, orig_req=None, additional=None,
                                 transfer=False):
        """
        Create a dictionary of headers to be used in backend requests

        :param orig_req: the original request sent by the client to the proxy
        :param additional: additional headers to send to the backend
        :param transfer: If True, transfer headers from original client request
        :returns: a dictionary of headers
        """
        # Use the additional headers first so they don't overwrite the headers
        # we require.
        headers = HeaderKeyDict(additional) if additional else HeaderKeyDict()
        if transfer:
            self.transfer_headers(orig_req.headers, headers)
        headers.setdefault('x-timestamp', Timestamp(time.time()).internal)
        if orig_req:
            referer = orig_req.as_referer()
        else:
            referer = ''
        headers['x-trans-id'] = self.trans_id
        headers['connection'] = 'close'
        headers['user-agent'] = 'proxy-server %s' % os.getpid()
        headers['referer'] = referer
        return headers

    def account_info(self, account, req=None):
        """
        Get account information, and also verify that the account exists.

        :param account: name of the account to get the info for
        :param req: caller's HTTP request context object (optional)
        :returns: tuple of (account partition, account nodes, container_count)
                  or (None, None, None) if it does not exist
        """
        partition, nodes = self.app.account_ring.get_nodes(account)
        if req:
            env = getattr(req, 'environ', {})
        else:
            env = {}
        info = get_info(self.app, env, account)
        if not info:
            return None, None, None
        if info.get('container_count') is None:
            container_count = 0
        else:
            container_count = int(info['container_count'])
        return partition, nodes, container_count

    def container_info(self, account, container, req=None):
        """
        Get container information and thereby verify container existence.
        This will also verify account existence.

        :param account: account name for the container
        :param container: container name to look up
        :param req: caller's HTTP request context object (optional)
        :returns: dict containing at least container partition ('partition'),
                  container nodes ('nodes'), container read
                  acl ('read_acl'), container write acl ('write_acl'),
                  and container sync key ('sync_key').
                  Values are set to None if the container does not exist.
        """
        part, nodes = self.app.container_ring.get_nodes(account, container)
        if req:
            env = getattr(req, 'environ', {})
        else:
            env = {}
        info = get_info(self.app, env, account, container)
        if not info:
            info = headers_to_container_info({}, 0)
            info['partition'] = None
            info['nodes'] = None
        else:
            info['partition'] = part
            info['nodes'] = nodes
        if info.get('storage_policy') is None:
            info['storage_policy'] = 0
        return info
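    # Illustrative usage sketch (assuming a controller that already has
    # account/container names set, as the object controller later in this
    # tree does):
    #
    #     info = self.container_info(self.account_name, self.container_name,
    #                                req)
    #     if not info['nodes']:          # container (or account) missing
    #         return HTTPNotFound(request=req)
    #     policy_index = info['storage_policy']
    #     req.acl = info['read_acl']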

    def _make_request(self, nodes, part, method, path, headers, query,
                      logger_thread_locals):
        """
        Iterates over the given node iterator, sending an HTTP request to one
        node at a time.  The first non-informational, non-server-error
        response is returned.  If no non-informational, non-server-error
        response is received from any of the nodes, returns None.

        :param nodes: an iterator of the backend server and handoff servers
        :param part: the partition number
        :param method: the method to send to the backend
        :param path: the path to send to the backend
                     (full path ends up being /<$device>/<$part>/<$path>)
        :param headers: dictionary of headers
        :param query: query string to send to the backend.
        :param logger_thread_locals: The thread local values to be set on the
                                     self.app.logger to retain transaction
                                     logging information.
        :returns: a tuple of (status, reason, headers, body) from the first
                  usable response, or None if no such response was received
        """
        self.app.logger.thread_locals = logger_thread_locals
        for node in nodes:
            try:
                start_node_timing = time.time()
                with ConnectionTimeout(self.app.conn_timeout):
                    conn = http_connect(node['ip'], node['port'],
                                        node['device'], part, method, path,
                                        headers=headers, query_string=query)
                    conn.node = node
                self.app.set_node_timing(node, time.time() - start_node_timing)
                with Timeout(self.app.node_timeout):
                    resp = conn.getresponse()
                    if not is_informational(resp.status) and \
                            not is_server_error(resp.status):
                        return resp.status, resp.reason, resp.getheaders(), \
                            resp.read()
                    elif resp.status == HTTP_INSUFFICIENT_STORAGE:
                        self.app.error_limit(node,
                                             _('ERROR Insufficient Storage'))
                    elif is_server_error(resp.status):
                        self.app.error_occurred(
                            node, _('ERROR %(status)d '
                                    'Trying to %(method)s %(path)s '
                                    'From Container Server') % {
                                        'status': resp.status,
                                        'method': method,
                                        'path': path})
            except (Exception, Timeout):
                self.app.exception_occurred(
                    node, self.server_type,
                    _('Trying to %(method)s %(path)s') %
                    {'method': method, 'path': path})

    def make_requests(self, req, ring, part, method, path, headers,
                      query_string='', overrides=None):
        """
        Sends an HTTP request to multiple nodes and aggregates the results.
        It attempts the primary nodes concurrently, then iterates over the
        handoff nodes as needed.

        :param req: a request sent by the client
        :param ring: the ring used for finding backend servers
        :param part: the partition number
        :param method: the method to send to the backend
        :param path: the path to send to the backend
                     (full path ends up being  /<$device>/<$part>/<$path>)
        :param headers: a list of dicts, where each dict represents one
                        backend request that should be made.
        :param query_string: optional query string to send to the backend
        :param overrides: optional return status override map used to override
                          the returned status of a request.
        :returns: a swob.Response object
        """
        start_nodes = ring.get_part_nodes(part)
        nodes = GreenthreadSafeIterator(self.app.iter_nodes(ring, part))
        pile = GreenAsyncPile(len(start_nodes))
        for head in headers:
            pile.spawn(self._make_request, nodes, part, method, path,
                       head, query_string, self.app.logger.thread_locals)
        response = []
        statuses = []
        for resp in pile:
            if not resp:
                continue
            response.append(resp)
            statuses.append(resp[0])
            if self.have_quorum(statuses, len(start_nodes)):
                break
        # give any pending requests *some* chance to finish
        finished_quickly = pile.waitall(self.app.post_quorum_timeout)
        for resp in finished_quickly:
            if not resp:
                continue
            response.append(resp)
            statuses.append(resp[0])
        while len(response) < len(start_nodes):
            response.append((HTTP_SERVICE_UNAVAILABLE, '', '', ''))
        statuses, reasons, resp_headers, bodies = zip(*response)
        return self.best_response(req, statuses, reasons, bodies,
                                  '%s %s' % (self.server_type, req.method),
                                  overrides=overrides, headers=resp_headers)

    def _quorum_size(self, n):
        """
        Number of successful backend responses needed for the proxy to
        consider the client request successful.
        """
        return quorum_size(n)

    def have_quorum(self, statuses, node_count, quorum=None):
        """
        Given a list of statuses from several requests, determine if
        a quorum response can already be decided.

        :param statuses: list of statuses returned
        :param node_count: number of nodes being queried (basically ring count)
        :param quorum: number of statuses required for quorum
        :returns: True or False, depending on if quorum is established
        """
        if quorum is None:
            quorum = self._quorum_size(node_count)
        if len(statuses) >= quorum:
            for hundred in (HTTP_CONTINUE, HTTP_OK, HTTP_MULTIPLE_CHOICES,
                            HTTP_BAD_REQUEST):
                if sum(1 for s in statuses
                       if hundred <= s < hundred + 100) >= quorum:
                    return True
        return False
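    # Worked example (assuming _quorum_size(3) == 2, i.e. a simple majority):
    #
    #     have_quorum([201, 201], 3)       -> True   (two 2xx responses)
    #     have_quorum([201, 404], 3)       -> False  (no class has 2 yet)
    #     have_quorum([404, 404, 201], 3)  -> True   (two 4xx responses)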

    def best_response(self, req, statuses, reasons, bodies, server_type,
                      etag=None, headers=None, overrides=None,
                      quorum_size=None):
        """
        Given a list of responses from several servers, choose the best to
        return to the API.

        :param req: swob.Request object
        :param statuses: list of statuses returned
        :param reasons: list of reasons for each status
        :param bodies: bodies of each response
        :param server_type: type of server the responses came from
        :param etag: etag
        :param headers: headers of each response
        :param overrides: overrides to apply when lacking quorum
        :param quorum_size: quorum size to use
        :returns: swob.Response object with the correct status, body, etc. set
        """
        if quorum_size is None:
            quorum_size = self._quorum_size(len(statuses))

        resp = self._compute_quorum_response(
            req, statuses, reasons, bodies, etag, headers,
            quorum_size=quorum_size)
        if overrides and not resp:
            faked_up_status_indices = set()
            transformed = []
            for (i, (status, reason, hdrs, body)) in enumerate(zip(
                    statuses, reasons, headers, bodies)):
                if status in overrides:
                    faked_up_status_indices.add(i)
                    transformed.append((overrides[status], '', '', ''))
                else:
                    transformed.append((status, reason, hdrs, body))
            statuses, reasons, headers, bodies = zip(*transformed)
            resp = self._compute_quorum_response(
                req, statuses, reasons, bodies, etag, headers,
                indices_to_avoid=faked_up_status_indices,
                quorum_size=quorum_size)

        if not resp:
            resp = HTTPServiceUnavailable(request=req)
            self.app.logger.error(_('%(type)s returning 503 for %(statuses)s'),
                                  {'type': server_type, 'statuses': statuses})

        return resp
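    # Illustrative behaviour sketch (assuming a quorum of 2 out of 3): with
    # statuses (404, 404, 204) the 2xx class never reaches quorum but the 4xx
    # class does, so _compute_quorum_response below picks the highest status
    # in that class and best_response returns a 404 even though one backend
    # answered 204.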

    def _compute_quorum_response(self, req, statuses, reasons, bodies, etag,
                                 headers, quorum_size, indices_to_avoid=()):
        if not statuses:
            return None
        for hundred in (HTTP_OK, HTTP_MULTIPLE_CHOICES, HTTP_BAD_REQUEST):
            hstatuses = \
                [(i, s) for i, s in enumerate(statuses)
                 if hundred <= s < hundred + 100]
            if len(hstatuses) >= quorum_size:
                try:
                    status_index, status = max(
                        ((i, stat) for i, stat in hstatuses
                            if i not in indices_to_avoid),
                        key=operator.itemgetter(1))
                except ValueError:
                    # All statuses were indices to avoid
                    continue
                resp = status_map[status](request=req)
                resp.status = '%s %s' % (status, reasons[status_index])
                resp.body = bodies[status_index]
                if headers:
                    update_headers(resp, headers[status_index])
                if etag:
                    resp.headers['etag'] = etag.strip('"')
                return resp
        return None

    @public
    def GET(self, req):
        """
        Handler for HTTP GET requests.

        :param req: The client request
        :returns: the response to the client
        """
        return self.GETorHEAD(req)

    @public
    def HEAD(self, req):
        """
        Handler for HTTP HEAD requests.

        :param req: The client request
        :returns: the response to the client
        """
        return self.GETorHEAD(req)

    def autocreate_account(self, req, account):
        """
        Autocreate an account

        :param req: request leading to this autocreate
        :param account: the unquoted account name
        """
        partition, nodes = self.app.account_ring.get_nodes(account)
        path = '/%s' % account
        headers = {'X-Timestamp': Timestamp(time.time()).internal,
                   'X-Trans-Id': self.trans_id,
                   'Connection': 'close'}
        # transfer any x-account-sysmeta headers from original request
        # to the autocreate PUT
        headers.update((k, v)
                       for k, v in req.headers.items()
                       if is_sys_meta('account', k))
        resp = self.make_requests(Request.blank('/v1' + path),
                                  self.app.account_ring, partition, 'PUT',
                                  path, [headers] * len(nodes))
        if is_success(resp.status_int):
            self.app.logger.info('autocreate account %r' % path)
            clear_info_cache(self.app, req.environ, account)
        else:
            self.app.logger.warning('Could not autocreate account %r' % path)

    def GETorHEAD_base(self, req, server_type, node_iter, partition, path,
                       concurrency=1, client_chunk_size=None):
        """
        Base handler for HTTP GET or HEAD requests.

        :param req: swob.Request object
        :param server_type: server type used in logging
        :param node_iter: an iterator to obtain nodes from
        :param partition: partition
        :param path: path for the request
        :param concurrency: number of requests to run concurrently
        :param client_chunk_size: chunk size for response body iterator
        :returns: swob.Response object
        """
        backend_headers = self.generate_request_headers(
            req, additional=req.headers)

        handler = GetOrHeadHandler(self.app, req, self.server_type, node_iter,
                                   partition, path, backend_headers,
                                   concurrency,
                                   client_chunk_size=client_chunk_size)
        res = handler.get_working_response(req)

        if not res:
            res = self.best_response(
                req, handler.statuses, handler.reasons, handler.bodies,
                '%s %s' % (server_type, req.method),
                headers=handler.source_headers)
        try:
            (vrs, account, container) = req.split_path(2, 3)
            _set_info_cache(self.app, req.environ, account, container, res)
        except ValueError:
            pass
        try:
            (vrs, account, container, obj) = req.split_path(4, 4, True)
            _set_object_info_cache(self.app, req.environ, account,
                                   container, obj, res)
        except ValueError:
            pass
        # if a backend policy index is present in resp headers, translate it
        # here into the friendly policy name
        if 'X-Backend-Storage-Policy-Index' in res.headers and \
                is_success(res.status_int):
            policy = \
                POLICIES.get_by_index(
                    res.headers['X-Backend-Storage-Policy-Index'])
            if policy:
                res.headers['X-Storage-Policy'] = policy.name
            else:
                self.app.logger.error(
                    'Could not translate %s (%r) from %r to policy',
                    'X-Backend-Storage-Policy-Index',
                    res.headers['X-Backend-Storage-Policy-Index'], path)
        return res

    def is_origin_allowed(self, cors_info, origin):
        """
        Is the given Origin allowed to make requests to this resource?

        :param cors_info: the resource's CORS related metadata headers
        :param origin: the origin making the request
        :return: True or False
        """
        allowed_origins = set()
        if cors_info.get('allow_origin'):
            allowed_origins.update(
                [a.strip()
                 for a in cors_info['allow_origin'].split(' ')
                 if a.strip()])
        if self.app.cors_allow_origin:
            allowed_origins.update(self.app.cors_allow_origin)
        return origin in allowed_origins or '*' in allowed_origins
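    # Illustrative sketch (hypothetical metadata values): with
    #
    #     cors_info = {'allow_origin':
    #                  'http://foo.example http://bar.example'}
    #
    # is_origin_allowed(cors_info, 'http://foo.example') is True, while
    # is_origin_allowed(cors_info, 'http://evil.example') is False unless the
    # proxy's cors_allow_origin setting (or a '*' entry) permits it.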

    @public
    def OPTIONS(self, req):
        """
        Base handler for OPTIONS requests

        :param req: swob.Request object
        :returns: swob.Response object
        """
        # Prepare the default response
        headers = {'Allow': ', '.join(self.allowed_methods)}
        resp = Response(status=200, request=req, headers=headers)

        # If this isn't a CORS pre-flight request then return now
        req_origin_value = req.headers.get('Origin', None)
        if not req_origin_value:
            return resp

        # This is a CORS preflight request so check it's allowed
        try:
            container_info = \
                self.container_info(self.account_name,
                                    self.container_name, req)
        except AttributeError:
            # This should only happen for requests to the Account. A future
            # change could allow CORS requests to the Account level as well.
            return resp

        cors = container_info.get('cors', {})

        # If the CORS origin isn't allowed, return a 401
        if not self.is_origin_allowed(cors, req_origin_value) or (
                req.headers.get('Access-Control-Request-Method') not in
                self.allowed_methods):
            resp.status = HTTP_UNAUTHORIZED
            return resp

        # Allow all headers requested in the request. The CORS
        # specification does leave the door open for this, as mentioned in
        # http://www.w3.org/TR/cors/#resource-preflight-requests
        # Note: since the list of headers can be unbounded,
        # simply echoing back the requested headers can be enough.
        allow_headers = set()
        if req.headers.get('Access-Control-Request-Headers'):
            allow_headers.update(
                list_from_csv(req.headers['Access-Control-Request-Headers']))

        # Populate the response with the CORS preflight headers
        if cors.get('allow_origin') and \
                cors.get('allow_origin').strip() == '*':
            headers['access-control-allow-origin'] = '*'
        else:
            headers['access-control-allow-origin'] = req_origin_value
        if cors.get('max_age') is not None:
            headers['access-control-max-age'] = cors.get('max_age')
        headers['access-control-allow-methods'] = \
            ', '.join(self.allowed_methods)
        if allow_headers:
            headers['access-control-allow-headers'] = ', '.join(allow_headers)
        resp.headers = headers

        return resp
swift-2.7.1/swift/proxy/controllers/account.py0000664000567000056710000001550413024044354022713 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from six.moves.urllib.parse import unquote

from swift import gettext_ as _

from swift.account.utils import account_listing_response
from swift.common.request_helpers import get_listing_content_type
from swift.common.middleware.acl import parse_acl, format_acl
from swift.common.utils import public
from swift.common.constraints import check_metadata
from swift.common import constraints
from swift.common.http import HTTP_NOT_FOUND, HTTP_GONE
from swift.proxy.controllers.base import Controller, clear_info_cache
from swift.common.swob import HTTPBadRequest, HTTPMethodNotAllowed
from swift.common.request_helpers import get_sys_meta_prefix


class AccountController(Controller):
    """WSGI controller for account requests"""
    server_type = 'Account'

    def __init__(self, app, account_name, **kwargs):
        Controller.__init__(self, app)
        self.account_name = unquote(account_name)
        if not self.app.allow_account_management:
            self.allowed_methods.remove('PUT')
            self.allowed_methods.remove('DELETE')

    def add_acls_from_sys_metadata(self, resp):
        if resp.environ['REQUEST_METHOD'] in ('HEAD', 'GET', 'PUT', 'POST'):
            prefix = get_sys_meta_prefix('account') + 'core-'
            name = 'access-control'
            (extname, intname) = ('x-account-' + name, prefix + name)
            acl_dict = parse_acl(version=2, data=resp.headers.pop(intname))
            if acl_dict:  # treat empty dict as empty header
                resp.headers[extname] = format_acl(
                    version=2, acl_dict=acl_dict)

    def GETorHEAD(self, req):
        """Handler for HTTP GET/HEAD requests."""
        if len(self.account_name) > constraints.MAX_ACCOUNT_NAME_LENGTH:
            resp = HTTPBadRequest(request=req)
            resp.body = 'Account name length of %d longer than %d' % \
                        (len(self.account_name),
                         constraints.MAX_ACCOUNT_NAME_LENGTH)
            return resp

        partition = self.app.account_ring.get_part(self.account_name)
        concurrency = self.app.account_ring.replica_count \
            if self.app.concurrent_gets else 1
        node_iter = self.app.iter_nodes(self.app.account_ring, partition)
        resp = self.GETorHEAD_base(
            req, _('Account'), node_iter, partition,
            req.swift_entity_path.rstrip('/'), concurrency)
        if resp.status_int == HTTP_NOT_FOUND:
            if resp.headers.get('X-Account-Status', '').lower() == 'deleted':
                resp.status = HTTP_GONE
            elif self.app.account_autocreate:
                resp = account_listing_response(self.account_name, req,
                                                get_listing_content_type(req))
        if req.environ.get('swift_owner'):
            self.add_acls_from_sys_metadata(resp)
        else:
            for header in self.app.swift_owner_headers:
                resp.headers.pop(header, None)
        return resp

    @public
    def PUT(self, req):
        """HTTP PUT request handler."""
        if not self.app.allow_account_management:
            return HTTPMethodNotAllowed(
                request=req,
                headers={'Allow': ', '.join(self.allowed_methods)})
        error_response = check_metadata(req, 'account')
        if error_response:
            return error_response
        if len(self.account_name) > constraints.MAX_ACCOUNT_NAME_LENGTH:
            resp = HTTPBadRequest(request=req)
            resp.body = 'Account name length of %d longer than %d' % \
                        (len(self.account_name),
                         constraints.MAX_ACCOUNT_NAME_LENGTH)
            return resp
        account_partition, accounts = \
            self.app.account_ring.get_nodes(self.account_name)
        headers = self.generate_request_headers(req, transfer=True)
        clear_info_cache(self.app, req.environ, self.account_name)
        resp = self.make_requests(
            req, self.app.account_ring, account_partition, 'PUT',
            req.swift_entity_path, [headers] * len(accounts))
        self.add_acls_from_sys_metadata(resp)
        return resp

    @public
    def POST(self, req):
        """HTTP POST request handler."""
        if len(self.account_name) > constraints.MAX_ACCOUNT_NAME_LENGTH:
            resp = HTTPBadRequest(request=req)
            resp.body = 'Account name length of %d longer than %d' % \
                        (len(self.account_name),
                         constraints.MAX_ACCOUNT_NAME_LENGTH)
            return resp
        error_response = check_metadata(req, 'account')
        if error_response:
            return error_response
        account_partition, accounts = \
            self.app.account_ring.get_nodes(self.account_name)
        headers = self.generate_request_headers(req, transfer=True)
        clear_info_cache(self.app, req.environ, self.account_name)
        resp = self.make_requests(
            req, self.app.account_ring, account_partition, 'POST',
            req.swift_entity_path, [headers] * len(accounts))
        if resp.status_int == HTTP_NOT_FOUND and self.app.account_autocreate:
            self.autocreate_account(req, self.account_name)
            resp = self.make_requests(
                req, self.app.account_ring, account_partition, 'POST',
                req.swift_entity_path, [headers] * len(accounts))
        self.add_acls_from_sys_metadata(resp)
        return resp

    @public
    def DELETE(self, req):
        """HTTP DELETE request handler."""
        # Extra safety in case someone typos a query string for an
        # account-level DELETE request that was really meant to be caught by
        # some middleware.
        if req.query_string:
            return HTTPBadRequest(request=req)
        if not self.app.allow_account_management:
            return HTTPMethodNotAllowed(
                request=req,
                headers={'Allow': ', '.join(self.allowed_methods)})
        account_partition, accounts = \
            self.app.account_ring.get_nodes(self.account_name)
        headers = self.generate_request_headers(req)
        clear_info_cache(self.app, req.environ, self.account_name)
        resp = self.make_requests(
            req, self.app.account_ring, account_partition, 'DELETE',
            req.swift_entity_path, [headers] * len(accounts))
        return resp
swift-2.7.1/swift/proxy/controllers/__init__.py0000664000567000056710000000172613024044352023015 0ustar  jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.proxy.controllers.base import Controller
from swift.proxy.controllers.info import InfoController
from swift.proxy.controllers.obj import ObjectControllerRouter
from swift.proxy.controllers.account import AccountController
from swift.proxy.controllers.container import ContainerController

__all__ = [
    'AccountController',
    'ContainerController',
    'Controller',
    'InfoController',
    'ObjectControllerRouter',
]
swift-2.7.1/swift/proxy/controllers/info.py0000664000567000056710000000727613024044352022217 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
from time import time

from swift.common.utils import public, get_hmac, get_swift_info, \
    streq_const_time
from swift.proxy.controllers.base import Controller, delay_denial
from swift.common.swob import HTTPOk, HTTPForbidden, HTTPUnauthorized


class InfoController(Controller):
    """WSGI controller for info requests"""
    server_type = 'Info'

    def __init__(self, app, version, expose_info, disallowed_sections,
                 admin_key):
        Controller.__init__(self, app)
        self.expose_info = expose_info
        self.disallowed_sections = disallowed_sections
        self.admin_key = admin_key
        self.allowed_hmac_methods = {
            'HEAD': ['HEAD', 'GET'],
            'GET': ['GET']}

    @public
    @delay_denial
    def GET(self, req):
        return self.GETorHEAD(req)

    @public
    @delay_denial
    def HEAD(self, req):
        return self.GETorHEAD(req)

    @public
    @delay_denial
    def OPTIONS(self, req):
        return HTTPOk(request=req, headers={'Allow': 'HEAD, GET, OPTIONS'})

    def GETorHEAD(self, req):
        """Handler for HTTP GET/HEAD requests."""
        """
        Handles requests to /info
        Should return a WSGI-style callable (such as swob.Response).

        :param req: swob.Request object
        """
        if not self.expose_info:
            return HTTPForbidden(request=req)

        admin_request = False
        sig = req.params.get('swiftinfo_sig', '')
        expires = req.params.get('swiftinfo_expires', '')

        if sig != '' or expires != '':
            admin_request = True
            if not self.admin_key:
                return HTTPForbidden(request=req)
            try:
                expires = int(expires)
            except ValueError:
                return HTTPUnauthorized(request=req)
            if expires < time():
                return HTTPUnauthorized(request=req)

            valid_sigs = []
            for method in self.allowed_hmac_methods[req.method]:
                valid_sigs.append(get_hmac(method,
                                           '/info',
                                           expires,
                                           self.admin_key))

            # While it's true that any() will short-circuit, this doesn't
            # affect the timing-attack resistance since the only way this will
            # short-circuit is when a valid signature is passed in.
            is_valid_hmac = any(streq_const_time(valid_sig, sig)
                                for valid_sig in valid_sigs)
            if not is_valid_hmac:
                return HTTPUnauthorized(request=req)

        headers = {}
        if 'Origin' in req.headers:
            headers['Access-Control-Allow-Origin'] = req.headers['Origin']
            headers['Access-Control-Expose-Headers'] = ', '.join(
                ['x-trans-id'])

        info = json.dumps(get_swift_info(
            admin=admin_request, disallowed_sections=self.disallowed_sections))

        return HTTPOk(request=req,
                      headers=headers,
                      body=info,
                      content_type='application/json; charset=UTF-8')
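    # Illustrative sketch of how an admin client could form a signed /info
    # request matching the check above (hypothetical key and expiry values):
    #
    #     from swift.common.utils import get_hmac
    #     expires = int(time()) + 60
    #     sig = get_hmac('GET', '/info', expires, admin_key)
    #     # then: GET /info?swiftinfo_sig=<sig>&swiftinfo_expires=<expires>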
swift-2.7.1/swift/proxy/controllers/obj.py0000664000567000056710000032703013024044354022031 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# NOTE: swift_conn
# You'll see swift_conn passed around a few places in this file. This is the
# source bufferedhttp connection of whatever it is attached to.
#   It is used when early termination of reading from the connection should
# happen, such as when a range request is satisfied but there's still more the
# source connection would like to send. To prevent having to read all the data
# that could be left, the source connection can be .close()d, after which any
# remaining reads just empty out the buffers.
#   These shenanigans are to ensure all related objects can be garbage
# collected. We've seen objects hang around forever otherwise.

import six
from six.moves.urllib.parse import unquote, quote

import collections
import itertools
import json
import mimetypes
import time
import math
import random
from hashlib import md5
from swift import gettext_ as _

from greenlet import GreenletExit
from eventlet import GreenPile
from eventlet.queue import Queue
from eventlet.timeout import Timeout

from swift.common.utils import (
    clean_content_type, config_true_value, ContextPool, csv_append,
    GreenAsyncPile, GreenthreadSafeIterator, Timestamp,
    normalize_delete_at_timestamp, public, get_expirer_container,
    document_iters_to_http_response_body, parse_content_range,
    quorum_size, reiterate, close_if_possible)
from swift.common.bufferedhttp import http_connect
from swift.common.constraints import check_metadata, check_object_creation, \
    check_copy_from_header, check_destination_header, \
    check_account_format
from swift.common import constraints
from swift.common.exceptions import ChunkReadTimeout, \
    ChunkWriteTimeout, ConnectionTimeout, ResponseTimeout, \
    InsufficientStorage, FooterNotSupported, MultiphasePUTNotSupported, \
    PutterConnectError, ChunkReadError
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.http import (
    is_informational, is_success, is_client_error, is_server_error,
    is_redirection, HTTP_CONTINUE, HTTP_CREATED, HTTP_MULTIPLE_CHOICES,
    HTTP_INTERNAL_SERVER_ERROR, HTTP_SERVICE_UNAVAILABLE,
    HTTP_INSUFFICIENT_STORAGE, HTTP_PRECONDITION_FAILED, HTTP_CONFLICT,
    HTTP_UNPROCESSABLE_ENTITY, HTTP_REQUESTED_RANGE_NOT_SATISFIABLE)
from swift.common.storage_policy import (POLICIES, REPL_POLICY, EC_POLICY,
                                         ECDriverError, PolicyError)
from swift.proxy.controllers.base import Controller, delay_denial, \
    cors_validation, ResumingGetter
from swift.common.swob import HTTPAccepted, HTTPBadRequest, HTTPNotFound, \
    HTTPPreconditionFailed, HTTPRequestEntityTooLarge, HTTPRequestTimeout, \
    HTTPServerError, HTTPServiceUnavailable, Request, \
    HTTPClientDisconnect, HTTPUnprocessableEntity, Response, HTTPException, \
    HTTPRequestedRangeNotSatisfiable, Range, HTTPInternalServerError
from swift.common.request_helpers import is_sys_or_user_meta, is_sys_meta, \
    remove_items, copy_header_subset


def copy_headers_into(from_r, to_r):
    """
    Will copy desired headers from from_r to to_r
    :param from_r: a swob Request or Response
    :param to_r: a swob Request or Response
    """
    pass_headers = ['x-delete-at']
    for k, v in from_r.headers.items():
        if is_sys_or_user_meta('object', k) or k.lower() in pass_headers:
            to_r.headers[k] = v


def check_content_type(req):
    if not req.environ.get('swift.content_type_overridden') and \
            ';' in req.headers.get('content-type', ''):
        for param in req.headers['content-type'].split(';')[1:]:
            if param.lstrip().startswith('swift_'):
                return HTTPBadRequest("Invalid Content-Type, "
                                      "swift_* is not a valid parameter name.")
    return None


class ObjectControllerRouter(object):

    policy_type_to_controller_map = {}

    @classmethod
    def register(cls, policy_type):
        """
        Decorator for Storage Policy implementations to register
        their ObjectController implementations.

        This also fills in a policy_type attribute on the class.
        """
        def register_wrapper(controller_cls):
            if policy_type in cls.policy_type_to_controller_map:
                raise PolicyError(
                    '%r is already registered for the policy_type %r' % (
                        cls.policy_type_to_controller_map[policy_type],
                        policy_type))
            cls.policy_type_to_controller_map[policy_type] = controller_cls
            controller_cls.policy_type = policy_type
            return controller_cls
        return register_wrapper
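    # Illustrative usage sketch: policy-specific controllers register
    # themselves with the router, e.g. (as the replicated and EC controllers
    # do further down in this module):
    #
    #     @ObjectControllerRouter.register(REPL_POLICY)
    #     class ReplicatedObjectController(BaseObjectController):
    #         ...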

    def __init__(self):
        self.policy_to_controller_cls = {}
        for policy in POLICIES:
            self.policy_to_controller_cls[policy] = \
                self.policy_type_to_controller_map[policy.policy_type]

    def __getitem__(self, policy):
        return self.policy_to_controller_cls[policy]


class BaseObjectController(Controller):
    """Base WSGI controller for object requests."""
    server_type = 'Object'

    def __init__(self, app, account_name, container_name, object_name,
                 **kwargs):
        Controller.__init__(self, app)
        self.account_name = unquote(account_name)
        self.container_name = unquote(container_name)
        self.object_name = unquote(object_name)

    def iter_nodes_local_first(self, ring, partition):
        """
        Yields nodes for a ring partition.

        If the 'write_affinity' setting is non-empty, then this will yield N
        local nodes (as defined by the write_affinity setting) first, then the
        rest of the nodes as normal. It is a re-ordering of the nodes such
        that the local ones come first; no node is omitted. The effect is
        that the request will be serviced by local object servers first, but
        nonlocal ones will be employed if not enough local ones are available.

        :param ring: ring to get nodes from
        :param partition: ring partition to yield nodes for
        """

        is_local = self.app.write_affinity_is_local_fn
        if is_local is None:
            return self.app.iter_nodes(ring, partition)

        primary_nodes = ring.get_part_nodes(partition)
        num_locals = self.app.write_affinity_node_count(len(primary_nodes))

        all_nodes = itertools.chain(primary_nodes,
                                    ring.get_more_nodes(partition))
        first_n_local_nodes = list(itertools.islice(
            six.moves.filter(is_local, all_nodes), num_locals))

        # refresh it; it moved when we computed first_n_local_nodes
        all_nodes = itertools.chain(primary_nodes,
                                    ring.get_more_nodes(partition))
        local_first_node_iter = itertools.chain(
            first_n_local_nodes,
            six.moves.filter(lambda node: node not in first_n_local_nodes,
                             all_nodes))

        return self.app.iter_nodes(
            ring, partition, node_iter=local_first_node_iter)
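    # Worked example (hypothetical ring with 3 primaries and write affinity
    # selecting 2 local nodes): with primaries [r1, r2, l1] and handoffs
    # [l2, h1, ...], first_n_local_nodes becomes [l1, l2], so the iterator
    # yields l1, l2 first and then r1, r2, h1, ... -- every node is still
    # visited, only the order changes.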

    def GETorHEAD(self, req):
        """Handle HTTP GET or HEAD requests."""
        container_info = self.container_info(
            self.account_name, self.container_name, req)
        req.acl = container_info['read_acl']
        # pass the policy index to storage nodes via req header
        policy_index = req.headers.get('X-Backend-Storage-Policy-Index',
                                       container_info['storage_policy'])
        policy = POLICIES.get_by_index(policy_index)
        obj_ring = self.app.get_object_ring(policy_index)
        req.headers['X-Backend-Storage-Policy-Index'] = policy_index
        if 'swift.authorize' in req.environ:
            aresp = req.environ['swift.authorize'](req)
            if aresp:
                return aresp
        partition = obj_ring.get_part(
            self.account_name, self.container_name, self.object_name)
        node_iter = self.app.iter_nodes(obj_ring, partition)

        resp = self._reroute(policy)._get_or_head_response(
            req, node_iter, partition, policy)

        if ';' in resp.headers.get('content-type', ''):
            resp.content_type = clean_content_type(
                resp.headers['content-type'])
        return resp

    @public
    @cors_validation
    @delay_denial
    def GET(self, req):
        """Handler for HTTP GET requests."""
        return self.GETorHEAD(req)

    @public
    @cors_validation
    @delay_denial
    def HEAD(self, req):
        """Handler for HTTP HEAD requests."""
        return self.GETorHEAD(req)

    @public
    @cors_validation
    @delay_denial
    def POST(self, req):
        """HTTP POST request handler."""
        if self.app.object_post_as_copy:
            req.method = 'PUT'
            req.path_info = '/v1/%s/%s/%s' % (
                self.account_name, self.container_name, self.object_name)
            req.headers['Content-Length'] = 0
            req.headers['X-Copy-From'] = quote('/%s/%s' % (self.container_name,
                                               self.object_name))
            req.environ['swift.post_as_copy'] = True
            req.environ['swift_versioned_copy'] = True
            resp = self.PUT(req)
            # Older editions returned 202 Accepted on object POSTs, so we'll
            # convert any 201 Created responses to that for compatibility with
            # picky clients.
            if resp.status_int != HTTP_CREATED:
                return resp
            return HTTPAccepted(request=req)
        else:
            error_response = check_metadata(req, 'object')
            if error_response:
                return error_response
            container_info = self.container_info(
                self.account_name, self.container_name, req)
            container_partition = container_info['partition']
            containers = container_info['nodes']
            req.acl = container_info['write_acl']
            if 'swift.authorize' in req.environ:
                aresp = req.environ['swift.authorize'](req)
                if aresp:
                    return aresp
            if not containers:
                return HTTPNotFound(request=req)

            req, delete_at_container, delete_at_part, \
                delete_at_nodes = self._config_obj_expiration(req)

            # pass the policy index to storage nodes via req header
            policy_index = req.headers.get('X-Backend-Storage-Policy-Index',
                                           container_info['storage_policy'])
            obj_ring = self.app.get_object_ring(policy_index)
            req.headers['X-Backend-Storage-Policy-Index'] = policy_index
            partition, nodes = obj_ring.get_nodes(
                self.account_name, self.container_name, self.object_name)

            req.headers['X-Timestamp'] = Timestamp(time.time()).internal

            headers = self._backend_requests(
                req, len(nodes), container_partition, containers,
                delete_at_container, delete_at_part, delete_at_nodes)
            return self._post_object(req, obj_ring, partition, headers)

    def _backend_requests(self, req, n_outgoing,
                          container_partition, containers,
                          delete_at_container=None, delete_at_partition=None,
                          delete_at_nodes=None):
        policy_index = req.headers['X-Backend-Storage-Policy-Index']
        policy = POLICIES.get_by_index(policy_index)
        headers = [self.generate_request_headers(req, additional=req.headers)
                   for _junk in range(n_outgoing)]

        def set_container_update(index, container):
            headers[index]['X-Container-Partition'] = container_partition
            headers[index]['X-Container-Host'] = csv_append(
                headers[index].get('X-Container-Host'),
                '%(ip)s:%(port)s' % container)
            headers[index]['X-Container-Device'] = csv_append(
                headers[index].get('X-Container-Device'),
                container['device'])

        for i, container in enumerate(containers):
            i = i % len(headers)
            set_container_update(i, container)

        # If the number of container updates is not enough for the number of
        # replicas (or fragments), spread the extra updates across the
        # headers, pigeonhole style.
        # TODO?: apply these to X-Delete-At-Container?
        n_updates_needed = min(policy.quorum + 1, n_outgoing)
        container_iter = itertools.cycle(containers)
        existing_updates = len(containers)
        while existing_updates < n_updates_needed:
            set_container_update(existing_updates, next(container_iter))
            existing_updates += 1

        for i, node in enumerate(delete_at_nodes or []):
            i = i % len(headers)

            headers[i]['X-Delete-At-Container'] = delete_at_container
            headers[i]['X-Delete-At-Partition'] = delete_at_partition
            headers[i]['X-Delete-At-Host'] = csv_append(
                headers[i].get('X-Delete-At-Host'),
                '%(ip)s:%(port)s' % node)
            headers[i]['X-Delete-At-Device'] = csv_append(
                headers[i].get('X-Delete-At-Device'),
                node['device'])

        return headers
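    # Illustrative sketch of the resulting per-node headers (hypothetical
    # values): each outgoing header dict names the container replica that the
    # object server should update asynchronously after the object request,
    # e.g.
    #
    #     headers[0]['X-Container-Partition'] = container_partition
    #     headers[0]['X-Container-Host'] = '10.0.0.1:6001'
    #     headers[0]['X-Container-Device'] = 'sdb1'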

    def _await_response(self, conn, **kwargs):
        with Timeout(self.app.node_timeout):
            if conn.resp:
                return conn.resp
            else:
                return conn.getresponse()

    def _get_conn_response(self, conn, req, logger_thread_locals, **kwargs):
        self.app.logger.thread_locals = logger_thread_locals
        try:
            resp = self._await_response(conn, **kwargs)
            return (conn, resp)
        except (Exception, Timeout):
            self.app.exception_occurred(
                conn.node, _('Object'),
                _('Trying to get final status of PUT to %s') % req.path)
        return (None, None)

    def _get_put_responses(self, req, conns, nodes, **kwargs):
        """
        Collect replicated object responses.
        """
        statuses = []
        reasons = []
        bodies = []
        etags = set()

        pile = GreenAsyncPile(len(conns))
        for conn in conns:
            pile.spawn(self._get_conn_response, conn,
                       req, self.app.logger.thread_locals)

        def _handle_response(conn, response):
            statuses.append(response.status)
            reasons.append(response.reason)
            bodies.append(response.read())
            if response.status == HTTP_INSUFFICIENT_STORAGE:
                self.app.error_limit(conn.node,
                                     _('ERROR Insufficient Storage'))
            elif response.status >= HTTP_INTERNAL_SERVER_ERROR:
                self.app.error_occurred(
                    conn.node,
                    _('ERROR %(status)d %(body)s From Object Server '
                      're: %(path)s') %
                    {'status': response.status,
                     'body': bodies[-1][:1024], 'path': req.path})
            elif is_success(response.status):
                etags.add(response.getheader('etag').strip('"'))

        for (conn, response) in pile:
            if response:
                _handle_response(conn, response)
                if self.have_quorum(statuses, len(nodes)):
                    break

        # give any pending requests *some* chance to finish
        finished_quickly = pile.waitall(self.app.post_quorum_timeout)
        for (conn, response) in finished_quickly:
            if response:
                _handle_response(conn, response)

        while len(statuses) < len(nodes):
            statuses.append(HTTP_SERVICE_UNAVAILABLE)
            reasons.append('')
            bodies.append('')
        return statuses, reasons, bodies, etags

    def _config_obj_expiration(self, req):
        delete_at_container = None
        delete_at_part = None
        delete_at_nodes = None

        req = constraints.check_delete_headers(req)

        if 'x-delete-at' in req.headers:
            x_delete_at = int(normalize_delete_at_timestamp(
                int(req.headers['x-delete-at'])))

            req.environ.setdefault('swift.log_info', []).append(
                'x-delete-at:%s' % x_delete_at)

            delete_at_container = get_expirer_container(
                x_delete_at, self.app.expiring_objects_container_divisor,
                self.account_name, self.container_name, self.object_name)

            delete_at_part, delete_at_nodes = \
                self.app.container_ring.get_nodes(
                    self.app.expiring_objects_account, delete_at_container)

        return req, delete_at_container, delete_at_part, delete_at_nodes
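    # Illustrative sketch (hypothetical timestamp): a client PUT carrying
    # 'X-Delete-At: 1510000000' makes the proxy compute the expirer container
    # for that timestamp and pass the X-Delete-At-{Container,Partition,Host,
    # Device} headers to the object servers via _backend_requests() above, so
    # the object expirer can later find and delete the object.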

    def _handle_copy_request(self, req):
        """
        This method handles copying objects based on values set in the headers
        'X-Copy-From' and 'X-Copy-From-Account'

        Note that if the incoming request has some conditional headers (e.g.
        'Range', 'If-Match'), the *source* object will be evaluated against
        those headers, i.e. a PUT with both 'X-Copy-From' and 'Range' will
        make a partial copy as a new object.

        This method was added as part of the refactoring of the PUT method and
        the functionality is expected to be moved to middleware
        """
        if req.environ.get('swift.orig_req_method', req.method) != 'POST':
            req.environ.setdefault('swift.log_info', []).append(
                'x-copy-from:%s' % req.headers['X-Copy-From'])
        ver, acct, _rest = req.split_path(2, 3, True)
        src_account_name = req.headers.get('X-Copy-From-Account', None)
        if src_account_name:
            src_account_name = check_account_format(req, src_account_name)
        else:
            src_account_name = acct
        src_container_name, src_obj_name = check_copy_from_header(req)
        source_header = '/%s/%s/%s/%s' % (
            ver, src_account_name, src_container_name, src_obj_name)
        source_req = req.copy_get()

        # make sure the source request uses its own container_info
        source_req.headers.pop('X-Backend-Storage-Policy-Index', None)
        source_req.path_info = source_header
        source_req.headers['X-Newest'] = 'true'
        if 'swift.post_as_copy' in req.environ:
            # We're COPYing one object over itself because of a POST; rely on
            # the PUT for write authorization, don't require read authorization
            source_req.environ['swift.authorize'] = lambda req: None
            source_req.environ['swift.authorize_override'] = True

        orig_obj_name = self.object_name
        orig_container_name = self.container_name
        orig_account_name = self.account_name
        sink_req = Request.blank(req.path_info,
                                 environ=req.environ, headers=req.headers)

        self.object_name = src_obj_name
        self.container_name = src_container_name
        self.account_name = src_account_name

        source_resp = self.GET(source_req)

        # This gives middlewares a way to change the source; for example,
        # this lets you COPY a SLO manifest and have the new object be the
        # concatenation of the segments (like what a GET request gives
        # the client), not a copy of the manifest file.
        hook = req.environ.get(
            'swift.copy_hook',
            (lambda source_req, source_resp, sink_req: source_resp))
        source_resp = hook(source_req, source_resp, sink_req)

        # reset names
        self.object_name = orig_obj_name
        self.container_name = orig_container_name
        self.account_name = orig_account_name

        if source_resp.status_int >= HTTP_MULTIPLE_CHOICES:
            # this is a bit of ugly code, but I'm willing to live with it
            # until copy request handling moves to middleware
            return source_resp, None, None, None
        if source_resp.content_length is None:
            # This indicates a transfer-encoding: chunked source object,
            # which currently only happens because there are more than
            # CONTAINER_LISTING_LIMIT segments in a segmented object. In
            # this case, we're going to refuse to do the server-side copy.
            raise HTTPRequestEntityTooLarge(request=req)
        if source_resp.content_length > constraints.MAX_FILE_SIZE:
            raise HTTPRequestEntityTooLarge(request=req)

        data_source = iter(source_resp.app_iter)
        sink_req.content_length = source_resp.content_length
        sink_req.etag = source_resp.etag

        # we no longer need the X-Copy-From header
        del sink_req.headers['X-Copy-From']
        if 'X-Copy-From-Account' in sink_req.headers:
            del sink_req.headers['X-Copy-From-Account']
        if not req.content_type_manually_set:
            sink_req.headers['Content-Type'] = \
                source_resp.headers['Content-Type']

        fresh_meta_flag = config_true_value(
            sink_req.headers.get('x-fresh-metadata', 'false'))

        if fresh_meta_flag or 'swift.post_as_copy' in sink_req.environ:
            # post-as-copy: ignore new sysmeta, copy existing sysmeta
            condition = lambda k: is_sys_meta('object', k)
            remove_items(sink_req.headers, condition)
            copy_header_subset(source_resp, sink_req, condition)
        else:
            # copy/update existing sysmeta and user meta
            copy_headers_into(source_resp, sink_req)
            copy_headers_into(req, sink_req)

        # copy over x-static-large-object for POSTs and manifest copies
        if 'X-Static-Large-Object' in source_resp.headers and \
                (req.params.get('multipart-manifest') == 'get' or
                 'swift.post_as_copy' in req.environ):
            sink_req.headers['X-Static-Large-Object'] = \
                source_resp.headers['X-Static-Large-Object']

        req = sink_req

        def update_response(req, resp):
            acct, path = source_resp.environ['PATH_INFO'].split('/', 3)[2:4]
            resp.headers['X-Copied-From-Account'] = quote(acct)
            resp.headers['X-Copied-From'] = quote(path)
            if 'last-modified' in source_resp.headers:
                resp.headers['X-Copied-From-Last-Modified'] = \
                    source_resp.headers['last-modified']
            copy_headers_into(req, resp)
            return resp

        # this is a bit of ugly code, but I'm willing to live with it
        # until copy request handling moves to middleware
        return None, req, data_source, update_response

    def _update_content_type(self, req):
        # Sometimes the 'content-type' header exists, but is set to None.
        req.content_type_manually_set = True
        detect_content_type = \
            config_true_value(req.headers.get('x-detect-content-type'))
        if detect_content_type or not req.headers.get('content-type'):
            guessed_type, _junk = mimetypes.guess_type(req.path_info)
            req.headers['Content-Type'] = guessed_type or \
                'application/octet-stream'
            if detect_content_type:
                req.headers.pop('x-detect-content-type')
            else:
                req.content_type_manually_set = False

    def _update_x_timestamp(self, req):
        # Used by container sync feature
        if 'x-timestamp' in req.headers:
            try:
                req_timestamp = Timestamp(req.headers['X-Timestamp'])
            except ValueError:
                raise HTTPBadRequest(
                    request=req, content_type='text/plain',
                    body='X-Timestamp should be a UNIX timestamp float value; '
                         'was %r' % req.headers['x-timestamp'])
            req.headers['X-Timestamp'] = req_timestamp.internal
        else:
            req.headers['X-Timestamp'] = Timestamp(time.time()).internal
        return None

    def _check_failure_put_connections(self, conns, req, nodes, min_conns):
        """
        Identify any failed connections and check minimum connection count.
        """
        if req.if_none_match is not None and '*' in req.if_none_match:
            statuses = [conn.resp.status for conn in conns if conn.resp]
            if HTTP_PRECONDITION_FAILED in statuses:
                # If we find any copy of the file, it shouldn't be uploaded
                self.app.logger.debug(
                    _('Object PUT returning 412, %(statuses)r'),
                    {'statuses': statuses})
                raise HTTPPreconditionFailed(request=req)

        if any(conn for conn in conns if conn.resp and
               conn.resp.status == HTTP_CONFLICT):
            status_times = ['%(status)s (%(timestamp)s)' % {
                'status': conn.resp.status,
                'timestamp': HeaderKeyDict(
                    conn.resp.getheaders()).get(
                        'X-Backend-Timestamp', 'unknown')
            } for conn in conns if conn.resp]
            self.app.logger.debug(
                _('Object PUT returning 202 for 409: '
                  '%(req_timestamp)s <= %(timestamps)r'),
                {'req_timestamp': req.timestamp.internal,
                 'timestamps': ', '.join(status_times)})
            raise HTTPAccepted(request=req)

        self._check_min_conn(req, conns, min_conns)

    def _connect_put_node(self, nodes, part, path, headers,
                          logger_thread_locals):
        """
        Make connection to storage nodes

        Connects to the first working node that it finds in the nodes iterator
        and sends over the request headers. Returns an HTTPConnection
        object to handle the rest of the streaming.

        This method must be implemented by each policy ObjectController.

        :param nodes: an iterator of the target storage nodes
        :param part: ring partition number
        :param path: the object path to send to the storage node
        :param headers: request headers
        :param logger_thread_locals: The thread local values to be set on the
                                     self.app.logger to retain transaction
                                     logging information.
        :return: HTTPConnection object
        """
        raise NotImplementedError()

    def _get_put_connections(self, req, nodes, partition, outgoing_headers,
                             policy, expect):
        """
        Establish connections to storage nodes for a PUT request.
        """
        obj_ring = policy.object_ring
        node_iter = GreenthreadSafeIterator(
            self.iter_nodes_local_first(obj_ring, partition))
        pile = GreenPile(len(nodes))

        for nheaders in outgoing_headers:
            if expect:
                nheaders['Expect'] = '100-continue'
            pile.spawn(self._connect_put_node, node_iter, partition,
                       req.swift_entity_path, nheaders,
                       self.app.logger.thread_locals)

        conns = [conn for conn in pile if conn]

        return conns

    def _check_min_conn(self, req, conns, min_conns, msg=None):
        msg = msg or 'Object PUT returning 503, %(conns)s/%(nodes)s ' \
            'required connections'

        if len(conns) < min_conns:
            self.app.logger.error((msg),
                                  {'conns': len(conns), 'nodes': min_conns})
            raise HTTPServiceUnavailable(request=req)

    def _store_object(self, req, data_source, nodes, partition,
                      outgoing_headers):
        """
        This method is responsible for establishing connections
        with the storage nodes and sending the data to each one of those
        nodes. The process of transferring data is specific to each
        storage policy, so each policy-specific ObjectController must
        provide its own implementation of this method.

        :param req: the PUT Request
        :param data_source: an iterator of the source of the data
        :param nodes: an iterator of the target storage nodes
        :param partition: ring partition number
        :param outgoing_headers: system headers to storage nodes
        :return: Response object
        """
        raise NotImplementedError()

    def _delete_object(self, req, obj_ring, partition, headers):
        """
        Send an object DELETE request to the storage nodes. Subclasses of
        BaseObjectController can provide their own implementation
        of this method.

        :param req: the DELETE Request
        :param obj_ring: the object ring
        :param partition: ring partition number
        :param headers: system headers to storage nodes
        :return: Response object
        """
        # When deleting objects treat a 404 status as 204.
        status_overrides = {404: 204}
        resp = self.make_requests(req, obj_ring,
                                  partition, 'DELETE', req.swift_entity_path,
                                  headers, overrides=status_overrides)
        return resp

    def _post_object(self, req, obj_ring, partition, headers):
        """
        Send an object POST request to the storage nodes.

        :param req: the POST Request
        :param obj_ring: the object ring
        :param partition: ring partition number
        :param headers: system headers to storage nodes
        :return: Response object
        """
        resp = self.make_requests(req, obj_ring, partition,
                                  'POST', req.swift_entity_path, headers)
        return resp

    @public
    @cors_validation
    @delay_denial
    def PUT(self, req):
        """HTTP PUT request handler."""
        if req.if_none_match is not None and '*' not in req.if_none_match:
            # Sending an etag with if-none-match isn't currently supported
            return HTTPBadRequest(request=req, content_type='text/plain',
                                  body='If-None-Match only supports *')
        container_info = self.container_info(
            self.account_name, self.container_name, req)
        policy_index = req.headers.get('X-Backend-Storage-Policy-Index',
                                       container_info['storage_policy'])
        obj_ring = self.app.get_object_ring(policy_index)
        container_nodes = container_info['nodes']
        container_partition = container_info['partition']
        partition, nodes = obj_ring.get_nodes(
            self.account_name, self.container_name, self.object_name)

        # pass the policy index to storage nodes via req header
        req.headers['X-Backend-Storage-Policy-Index'] = policy_index
        req.acl = container_info['write_acl']
        req.environ['swift_sync_key'] = container_info['sync_key']

        # is request authorized
        if 'swift.authorize' in req.environ:
            aresp = req.environ['swift.authorize'](req)
            if aresp:
                return aresp

        if not container_info['nodes']:
            return HTTPNotFound(request=req)

        # update content type in case it is missing
        self._update_content_type(req)

        # check constraints on object name and request headers
        error_response = check_object_creation(req, self.object_name) or \
            check_content_type(req)
        if error_response:
            return error_response

        self._update_x_timestamp(req)

        # check if request is a COPY of an existing object
        source_header = req.headers.get('X-Copy-From')
        if source_header:
            error_response, req, data_source, update_response = \
                self._handle_copy_request(req)
            if error_response:
                return error_response
        else:
            def reader():
                try:
                    return req.environ['wsgi.input'].read(
                        self.app.client_chunk_size)
                except (ValueError, IOError) as e:
                    raise ChunkReadError(str(e))
            data_source = iter(reader, '')
            update_response = lambda req, resp: resp

        # check if object is set to be automatically deleted (i.e. expired)
        req, delete_at_container, delete_at_part, \
            delete_at_nodes = self._config_obj_expiration(req)

        # add special headers to be handled by storage nodes
        outgoing_headers = self._backend_requests(
            req, len(nodes), container_partition, container_nodes,
            delete_at_container, delete_at_part, delete_at_nodes)

        # send object to storage nodes
        resp = self._store_object(
            req, data_source, nodes, partition, outgoing_headers)
        return update_response(req, resp)

    @public
    @cors_validation
    @delay_denial
    def DELETE(self, req):
        """HTTP DELETE request handler."""
        container_info = self.container_info(
            self.account_name, self.container_name, req)
        # pick the policy index from the request, falling back to the
        # container's storage policy
        policy_index = req.headers.get('X-Backend-Storage-Policy-Index',
                                       container_info['storage_policy'])
        obj_ring = self.app.get_object_ring(policy_index)
        # pass the policy index to storage nodes via req header
        req.headers['X-Backend-Storage-Policy-Index'] = policy_index
        container_partition = container_info['partition']
        containers = container_info['nodes']
        req.acl = container_info['write_acl']
        req.environ['swift_sync_key'] = container_info['sync_key']
        if 'swift.authorize' in req.environ:
            aresp = req.environ['swift.authorize'](req)
            if aresp:
                return aresp
        if not containers:
            return HTTPNotFound(request=req)
        partition, nodes = obj_ring.get_nodes(
            self.account_name, self.container_name, self.object_name)
        # Used by container sync feature
        if 'x-timestamp' in req.headers:
            try:
                req_timestamp = Timestamp(req.headers['X-Timestamp'])
            except ValueError:
                return HTTPBadRequest(
                    request=req, content_type='text/plain',
                    body='X-Timestamp should be a UNIX timestamp float value; '
                         'was %r' % req.headers['x-timestamp'])
            req.headers['X-Timestamp'] = req_timestamp.internal
        else:
            req.headers['X-Timestamp'] = Timestamp(time.time()).internal

        headers = self._backend_requests(
            req, len(nodes), container_partition, containers)
        return self._delete_object(req, obj_ring, partition, headers)

    def _reroute(self, policy):
        """
        For COPY requests we need to make sure the controller instance the
        request is routed through is the correct type for the policy.
        """
        if not policy:
            raise HTTPServiceUnavailable('Unknown Storage Policy')
        if policy.policy_type != self.policy_type:
            controller = self.app.obj_controller_router[policy](
                self.app, self.account_name, self.container_name,
                self.object_name)
        else:
            controller = self
        return controller

    @public
    @cors_validation
    @delay_denial
    def COPY(self, req):
        """HTTP COPY request handler."""
        if not req.headers.get('Destination'):
            return HTTPPreconditionFailed(request=req,
                                          body='Destination header required')
        dest_account = self.account_name
        if 'Destination-Account' in req.headers:
            dest_account = req.headers.get('Destination-Account')
            dest_account = check_account_format(req, dest_account)
            req.headers['X-Copy-From-Account'] = self.account_name
            self.account_name = dest_account
            del req.headers['Destination-Account']
        dest_container, dest_object = check_destination_header(req)

        source = '/%s/%s' % (self.container_name, self.object_name)
        self.container_name = dest_container
        self.object_name = dest_object
        # re-write the existing request as a PUT instead of creating a new one
        # since this one is already attached to the posthooklogger
        # TODO: Swift now has proxy-logging middleware instead of the
        #       posthooklogger used before, i.e. we no longer have to keep
        #       code that depends on the eventlet.posthooks sequence, IMHO.
        #       However, creating a new sub-request might hide some bugs
        #       behind the request, so we should discuss which approach
        #       (new-sub-request vs re-write-existing-request) is suitable
        #       for Swift. [kota_]
        req.method = 'PUT'
        req.path_info = '/v1/%s/%s/%s' % \
                        (dest_account, dest_container, dest_object)
        req.headers['Content-Length'] = 0
        req.headers['X-Copy-From'] = quote(source)
        del req.headers['Destination']

        container_info = self.container_info(
            dest_account, dest_container, req)
        dest_policy = POLICIES.get_by_index(container_info['storage_policy'])

        return self._reroute(dest_policy).PUT(req)


@ObjectControllerRouter.register(REPL_POLICY)
class ReplicatedObjectController(BaseObjectController):

    def _get_or_head_response(self, req, node_iter, partition, policy):
        concurrency = self.app.get_object_ring(policy.idx).replica_count \
            if self.app.concurrent_gets else 1
        resp = self.GETorHEAD_base(
            req, _('Object'), node_iter, partition,
            req.swift_entity_path, concurrency)
        return resp

    def _connect_put_node(self, nodes, part, path, headers,
                          logger_thread_locals):
        """
        Make a connection for a replicated object.

        Connects to the first working node that it finds in the nodes iterator
        and sends over the request headers. Returns an HTTPConnection
        object to handle the rest of the streaming.
        """
        self.app.logger.thread_locals = logger_thread_locals
        for node in nodes:
            try:
                start_time = time.time()
                with ConnectionTimeout(self.app.conn_timeout):
                    conn = http_connect(
                        node['ip'], node['port'], node['device'], part, 'PUT',
                        path, headers)
                self.app.set_node_timing(node, time.time() - start_time)
                with Timeout(self.app.node_timeout):
                    resp = conn.getexpect()
                if resp.status == HTTP_CONTINUE:
                    conn.resp = None
                    conn.node = node
                    return conn
                elif (is_success(resp.status)
                      or resp.status in (HTTP_CONFLICT,
                                         HTTP_UNPROCESSABLE_ENTITY)):
                    conn.resp = resp
                    conn.node = node
                    return conn
                elif headers['If-None-Match'] is not None and \
                        resp.status == HTTP_PRECONDITION_FAILED:
                    conn.resp = resp
                    conn.node = node
                    return conn
                elif resp.status == HTTP_INSUFFICIENT_STORAGE:
                    self.app.error_limit(node, _('ERROR Insufficient Storage'))
                elif is_server_error(resp.status):
                    self.app.error_occurred(
                        node,
                        _('ERROR %(status)d Expect: 100-continue '
                          'From Object Server') % {
                              'status': resp.status})
            except (Exception, Timeout):
                self.app.exception_occurred(
                    node, _('Object'),
                    _('Expect: 100-continue on %s') % path)

    def _send_file(self, conn, path):
        """Method for a file PUT coro"""
        while True:
            chunk = conn.queue.get()
            if not conn.failed:
                try:
                    with ChunkWriteTimeout(self.app.node_timeout):
                        conn.send(chunk)
                except (Exception, ChunkWriteTimeout):
                    conn.failed = True
                    self.app.exception_occurred(
                        conn.node, _('Object'),
                        _('Trying to write to %s') % path)
            conn.queue.task_done()

    def _transfer_data(self, req, data_source, conns, nodes):
        """
        Transfer data for a replicated object.

        This method was added in the PUT method extraction change
        """
        min_conns = quorum_size(len(nodes))
        bytes_transferred = 0
        try:
            with ContextPool(len(nodes)) as pool:
                for conn in conns:
                    conn.failed = False
                    conn.queue = Queue(self.app.put_queue_depth)
                    pool.spawn(self._send_file, conn, req.path)
                while True:
                    with ChunkReadTimeout(self.app.client_timeout):
                        try:
                            chunk = next(data_source)
                        except StopIteration:
                            if req.is_chunked:
                                for conn in conns:
                                    conn.queue.put('0\r\n\r\n')
                            break
                    bytes_transferred += len(chunk)
                    if bytes_transferred > constraints.MAX_FILE_SIZE:
                        raise HTTPRequestEntityTooLarge(request=req)
                    for conn in list(conns):
                        if not conn.failed:
                            conn.queue.put(
                                '%x\r\n%s\r\n' % (len(chunk), chunk)
                                if req.is_chunked else chunk)
                        else:
                            conn.close()
                            conns.remove(conn)
                    self._check_min_conn(
                        req, conns, min_conns,
                        msg='Object PUT exceptions during'
                            ' send, %(conns)s/%(nodes)s required connections')
                for conn in conns:
                    if conn.queue.unfinished_tasks:
                        conn.queue.join()
            conns = [conn for conn in conns if not conn.failed]
            self._check_min_conn(
                req, conns, min_conns,
                msg='Object PUT exceptions after last send, '
                '%(conns)s/%(nodes)s required connections')
        except ChunkReadTimeout as err:
            self.app.logger.warning(
                _('ERROR Client read timeout (%ss)'), err.seconds)
            self.app.logger.increment('client_timeouts')
            raise HTTPRequestTimeout(request=req)
        except HTTPException:
            raise
        except ChunkReadError:
            req.client_disconnect = True
            self.app.logger.warning(
                _('Client disconnected without sending last chunk'))
            self.app.logger.increment('client_disconnects')
            raise HTTPClientDisconnect(request=req)
        except Timeout:
            self.app.logger.exception(
                _('ERROR Exception causing client disconnect'))
            raise HTTPClientDisconnect(request=req)
        except Exception:
            self.app.logger.exception(
                _('ERROR Exception transferring data to object servers '
                  '%(path)s'), {'path': req.path})
            raise HTTPInternalServerError(request=req)
        if req.content_length and bytes_transferred < req.content_length:
            req.client_disconnect = True
            self.app.logger.warning(
                _('Client disconnected without sending enough data'))
            self.app.logger.increment('client_disconnects')
            raise HTTPClientDisconnect(request=req)

    def _store_object(self, req, data_source, nodes, partition,
                      outgoing_headers):
        """
        Store a replicated object.

        This method is responsible for establishing connection
        with storage nodes and sending object to each one of those
        nodes. After sending the data, the "best" response will be
        returned based on statuses from all connections
        """
        policy_index = req.headers.get('X-Backend-Storage-Policy-Index')
        policy = POLICIES.get_by_index(policy_index)
        if not nodes:
            return HTTPNotFound()

        # RFC2616:8.2.3 disallows 100-continue without a body
        if (req.content_length > 0) or req.is_chunked:
            expect = True
        else:
            expect = False
        conns = self._get_put_connections(req, nodes, partition,
                                          outgoing_headers, policy, expect)
        min_conns = quorum_size(len(nodes))
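        # quorum_size() is a simple majority of the replica count (e.g.
        # quorum_size(3) == 2), so a three-replica PUT needs at least two
        # good connections before we start streaming data.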
        try:
            # check that a minimum number of connections were established and
            # meet all the correct conditions set in the request
            self._check_failure_put_connections(conns, req, nodes, min_conns)

            # transfer data
            self._transfer_data(req, data_source, conns, nodes)

            # get responses
            statuses, reasons, bodies, etags = self._get_put_responses(
                req, conns, nodes)
        except HTTPException as resp:
            return resp
        finally:
            for conn in conns:
                conn.close()

        if len(etags) > 1:
            self.app.logger.error(
                _('Object servers returned %s mismatched etags'), len(etags))
            return HTTPServerError(request=req)
        etag = etags.pop() if len(etags) else None
        resp = self.best_response(req, statuses, reasons, bodies,
                                  _('Object PUT'), etag=etag)
        resp.last_modified = math.ceil(
            float(Timestamp(req.headers['X-Timestamp'])))
        return resp


class ECAppIter(object):
    """
    WSGI iterable that decodes EC fragment archives (or portions thereof)
    into the original object (or portions thereof).

    :param path: object's path, sans v1 (e.g. /a/c/o)

    :param policy: storage policy for this object

    :param internal_parts_iters: list of the response-document-parts
        iterators for the backend GET responses. For an M+K erasure code,
        the caller must supply M such iterables.

    :param range_specs: list of dictionaries describing the ranges requested
        by the client. Each dictionary contains the start and end of the
        client's requested byte range as well as the start and end of the EC
        segments containing that byte range.

    :param fa_length: length of the fragment archive, in bytes, if the
        response is a 200. If it's a 206, then this is ignored.

    :param obj_length: length of the object, in bytes. Learned from the
        headers in the GET response from the object server.

    :param logger: a logger
    """
    def __init__(self, path, policy, internal_parts_iters, range_specs,
                 fa_length, obj_length, logger):
        self.path = path
        self.policy = policy
        self.internal_parts_iters = internal_parts_iters
        self.range_specs = range_specs
        self.fa_length = fa_length
        self.obj_length = obj_length if obj_length is not None else 0
        self.boundary = ''
        self.logger = logger

        self.mime_boundary = None
        self.learned_content_type = None
        self.stashed_iter = None

    def close(self):
        for it in self.internal_parts_iters:
            close_if_possible(it)

    def kickoff(self, req, resp):
        """
        Start pulling data from the backends so that we can learn things like
        the real Content-Type that might only be in the multipart/byteranges
        response body. Update our response accordingly.

        Also, this is the first point at which we can learn the MIME
        boundary that our response has in the headers. We grab that so we
        can also use it in the body.

        :returns: None
        :raises: HTTPException on error
        """
        self.mime_boundary = resp.boundary

        self.stashed_iter = reiterate(self._real_iter(req, resp.headers))

        if self.learned_content_type is not None:
            resp.content_type = self.learned_content_type
        resp.content_length = self.obj_length

    def _next_range(self):
        # Each FA part should have approximately the same headers. We really
        # only care about Content-Range and Content-Type, and that'll be the
        # same for all the different FAs.
        frag_iters = []
        headers = None
        for parts_iter in self.internal_parts_iters:
            part_info = next(parts_iter)
            frag_iters.append(part_info['part_iter'])
            headers = part_info['headers']
        headers = HeaderKeyDict(headers)
        return headers, frag_iters

    def _actual_range(self, req_start, req_end, entity_length):
        try:
            rng = Range("bytes=%s-%s" % (
                req_start if req_start is not None else '',
                req_end if req_end is not None else ''))
        except ValueError:
            return (None, None)

        rfl = rng.ranges_for_length(entity_length)
        if not rfl:
            return (None, None)
        else:
            # ranges_for_length() adds 1 to the last byte's position
            # because webob once made a mistake
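            # Illustration: Range("bytes=0-99").ranges_for_length(1000)
            # gives [(0, 100)], and the -1 below hands back the inclusive
            # HTTP-style (0, 99).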
            return (rfl[0][0], rfl[0][1] - 1)

    def _fill_out_range_specs_from_obj_length(self, range_specs):
        # Add a few fields to each range spec:
        #
        #  * resp_client_start, resp_client_end: the actual bytes that will
        #      be delivered to the client for the requested range. This may
        #      differ from the requested bytes if, say, the requested range
        #      overlaps the end of the object.
        #
        #  * resp_segment_start, resp_segment_end: the actual offsets of the
        #      segments that will be decoded for the requested range. These
        #      differ from resp_client_start/end in that these are aligned
        #      to segment boundaries, while resp_client_start/end are not
        #      necessarily so.
        #
        #  * satisfiable: a boolean indicating whether the range is
        #      satisfiable or not (i.e. the requested range overlaps the
        #      object in at least one byte).
        #
        # This is kept separate from _fill_out_range_specs_from_fa_length()
        # because this computation can be done with just the response
        # headers from the object servers (in particular
        # X-Object-Sysmeta-Ec-Content-Length), while the computation in
        # _fill_out_range_specs_from_fa_length() requires the beginnings of
        # the response bodies.
        for spec in range_specs:
            cstart, cend = self._actual_range(
                spec['req_client_start'],
                spec['req_client_end'],
                self.obj_length)
            spec['resp_client_start'] = cstart
            spec['resp_client_end'] = cend
            spec['satisfiable'] = (cstart is not None and cend is not None)

            sstart, send = self._actual_range(
                spec['req_segment_start'],
                spec['req_segment_end'],
                self.obj_length)

            seg_size = self.policy.ec_segment_size
            if spec['req_segment_start'] is None and sstart % seg_size != 0:
                # Segment start may, in the case of a suffix request, need
                # to be rounded up (not down!) to the nearest segment boundary.
                # This reflects the trimming of leading garbage (partial
                # fragments) from the retrieved fragments.
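                # For example, with a 1024-byte segment size, a computed
                # sstart of 1536 is bumped up to the 2048 boundary.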
                sstart += seg_size - (sstart % seg_size)

            spec['resp_segment_start'] = sstart
            spec['resp_segment_end'] = send

    def _fill_out_range_specs_from_fa_length(self, fa_length, range_specs):
        # Add two fields to each range spec:
        #
        #  * resp_fragment_start, resp_fragment_end: the start and end of
        #      the fragments that compose this byterange. These values are
        #      aligned to fragment boundaries.
        #
        # This way, ECAppIter has the knowledge it needs to correlate
        # response byteranges with requested ones for when some byteranges
        # are omitted from the response entirely and also to put the right
        # Content-Range headers in a multipart/byteranges response.
        for spec in range_specs:
            fstart, fend = self._actual_range(
                spec['req_fragment_start'],
                spec['req_fragment_end'],
                fa_length)
            spec['resp_fragment_start'] = fstart
            spec['resp_fragment_end'] = fend

    def __iter__(self):
        if self.stashed_iter is not None:
            return iter(self.stashed_iter)
        else:
            raise ValueError("Failed to call kickoff() before __iter__()")

    def _real_iter(self, req, resp_headers):
        if not self.range_specs:
            client_asked_for_range = False
            range_specs = [{
                'req_client_start': 0,
                'req_client_end': (None if self.obj_length is None
                                   else self.obj_length - 1),
                'resp_client_start': 0,
                'resp_client_end': (None if self.obj_length is None
                                    else self.obj_length - 1),
                'req_segment_start': 0,
                'req_segment_end': (None if self.obj_length is None
                                    else self.obj_length - 1),
                'resp_segment_start': 0,
                'resp_segment_end': (None if self.obj_length is None
                                     else self.obj_length - 1),
                'req_fragment_start': 0,
                'req_fragment_end': self.fa_length - 1,
                'resp_fragment_start': 0,
                'resp_fragment_end': self.fa_length - 1,
                'satisfiable': self.obj_length > 0,
            }]
        else:
            client_asked_for_range = True
            range_specs = self.range_specs

        self._fill_out_range_specs_from_obj_length(range_specs)

        multipart = (len([rs for rs in range_specs if rs['satisfiable']]) > 1)
        # Multipart responses are not required to be in the same order as
        # the Range header; the parts may be in any order the server wants.
        # Further, if multiple ranges are requested and only some are
        # satisfiable, then only the satisfiable ones appear in the response
        # at all. Thus, we cannot simply iterate over range_specs in order;
        # we must use the Content-Range header from each part to figure out
        # what we've been given.
        #
        # We do, however, make the assumption that all the object-server
        # responses have their ranges in the same order. Otherwise, a
        # streaming decode would be impossible.

        def convert_ranges_iter():
            seen_first_headers = False
            ranges_for_resp = {}

            while True:
                # this'll raise StopIteration and exit the loop
                next_range = self._next_range()

                headers, frag_iters = next_range
                content_type = headers['Content-Type']

                content_range = headers.get('Content-Range')
                if content_range is not None:
                    fa_start, fa_end, fa_length = parse_content_range(
                        content_range)
                elif self.fa_length <= 0:
                    fa_start = None
                    fa_end = None
                    fa_length = 0
                else:
                    fa_start = 0
                    fa_end = self.fa_length - 1
                    fa_length = self.fa_length

                if not seen_first_headers:
                    # This is the earliest we can possibly do this. On a
                    # 200 or 206-single-byterange response, we can learn
                    # the FA's length from the HTTP response headers.
                    # However, on a 206-multiple-byteranges response, we
                    # don't learn it until the first part of the
                    # response body, in the headers of the first MIME
                    # part.
                    #
                    # Similarly, the content type of a
                    # 206-multiple-byteranges response is
                    # "multipart/byteranges", not the object's actual
                    # content type.
                    self._fill_out_range_specs_from_fa_length(
                        fa_length, range_specs)

                    satisfiable = False
                    for range_spec in range_specs:
                        satisfiable |= range_spec['satisfiable']
                        key = (range_spec['resp_fragment_start'],
                               range_spec['resp_fragment_end'])
                        ranges_for_resp.setdefault(key, []).append(range_spec)

                    # The client may have asked for an unsatisfiable set of
                    # ranges, but when converted to fragments, the object
                    # servers see it as satisfiable. For example, imagine a
                    # request for bytes 800-900 of a 750-byte object with a
                    # 1024-byte segment size. The object servers will see a
                    # request for bytes 0-${fragsize-1}, and that's
                    # satisfiable, so they return 206. It's not until we
                    # learn the object size that we can check for this
                    # condition.
                    #
                    # Note that some unsatisfiable ranges *will* be caught
                    # by the object servers, like bytes 1800-1900 of a
                    # 100-byte object with 1024-byte segments. That's not
                    # what we're dealing with here, though.
                    if client_asked_for_range and not satisfiable:
                        req.environ[
                            'swift.non_client_disconnect'] = True
                        raise HTTPRequestedRangeNotSatisfiable(
                            request=req, headers=resp_headers)
                    self.learned_content_type = content_type
                    seen_first_headers = True

                range_spec = ranges_for_resp[(fa_start, fa_end)].pop(0)
                seg_iter = self._decode_segments_from_fragments(frag_iters)
                if not range_spec['satisfiable']:
                    # This'll be small; just a single small segment. Discard
                    # it.
                    for x in seg_iter:
                        pass
                    continue

                byterange_iter = self._iter_one_range(range_spec, seg_iter)

                converted = {
                    "start_byte": range_spec["resp_client_start"],
                    "end_byte": range_spec["resp_client_end"],
                    "content_type": content_type,
                    "part_iter": byterange_iter}

                if self.obj_length is not None:
                    converted["entity_length"] = self.obj_length
                yield converted

        return document_iters_to_http_response_body(
            convert_ranges_iter(), self.mime_boundary, multipart, self.logger)

    def _iter_one_range(self, range_spec, segment_iter):
        client_start = range_spec['resp_client_start']
        client_end = range_spec['resp_client_end']
        segment_start = range_spec['resp_segment_start']
        segment_end = range_spec['resp_segment_end']

        # It's entirely possible that the client asked for a range that
        # includes some bytes we have and some we don't; for example, a
        # range of bytes 1000-20000000 on a 1500-byte object.
        segment_end = (min(segment_end, self.obj_length - 1)
                       if segment_end is not None
                       else self.obj_length - 1)
        client_end = (min(client_end, self.obj_length - 1)
                      if client_end is not None
                      else self.obj_length - 1)
        num_segments = int(
            math.ceil(float(segment_end + 1 - segment_start)
                      / self.policy.ec_segment_size))
        # We get full segments here, but the client may have requested a
        # byte range that begins or ends in the middle of a segment.
        # Thus, we have some amount of overrun (extra decoded bytes)
        # that we trim off so the client gets exactly what they
        # requested.
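        # For instance, with 64-byte segments (and an object longer than
        # 256 bytes), a client range of bytes 100-199 decodes segment
        # bytes 64-255, giving a start_overrun of 36 and an end_overrun
        # of 56.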
        start_overrun = client_start - segment_start
        end_overrun = segment_end - client_end

        for i, next_seg in enumerate(segment_iter):
            # We may have a start_overrun of more than one segment in
            # the case of suffix-byte-range requests. However, we never
            # have an end_overrun of more than one segment.
            if start_overrun > 0:
                seglen = len(next_seg)
                if seglen <= start_overrun:
                    start_overrun -= seglen
                    continue
                else:
                    next_seg = next_seg[start_overrun:]
                    start_overrun = 0

            if i == (num_segments - 1) and end_overrun:
                next_seg = next_seg[:-end_overrun]

            yield next_seg

    def _decode_segments_from_fragments(self, fragment_iters):
        # Decodes the fragments from the object servers and yields one
        # segment at a time.
        queues = [Queue(1) for _junk in range(len(fragment_iters))]

        def put_fragments_in_queue(frag_iter, queue):
            try:
                for fragment in frag_iter:
                    if fragment.startswith(' '):
                        raise Exception('Leading whitespace on fragment.')
                    queue.put(fragment)
            except GreenletExit:
                # killed by contextpool
                pass
            except ChunkReadTimeout:
                # unable to resume in GetOrHeadHandler
                self.logger.exception("Timeout fetching fragments for %r" %
                                      self.path)
            except:  # noqa
                self.logger.exception("Exception fetching fragments for %r" %
                                      self.path)
            finally:
                queue.resize(2)  # ensure there's room
                queue.put(None)
                frag_iter.close()

        with ContextPool(len(fragment_iters)) as pool:
            for frag_iter, queue in zip(fragment_iters, queues):
                pool.spawn(put_fragments_in_queue, frag_iter, queue)

            while True:
                fragments = []
                for queue in queues:
                    fragment = queue.get()
                    queue.task_done()
                    fragments.append(fragment)

                # If any object server connection yields out a None, we're
                # done.  Either they are all None and we've finished
                # successfully, or some unrecoverable failure has left us
                # with an unreconstructible list of fragments - so we'll
                # break out of the iteration so WSGI can tear down the
                # broken connection.
                if not all(fragments):
                    break
                try:
                    segment = self.policy.pyeclib_driver.decode(fragments)
                except ECDriverError:
                    self.logger.exception("Error decoding fragments for %r" %
                                          self.path)
                    raise

                yield segment

    def app_iter_range(self, start, end):
        return self

    def app_iter_ranges(self, ranges, content_type, boundary, content_size):
        return self


def client_range_to_segment_range(client_start, client_end, segment_size):
    """
    Takes a byterange from the client and converts it into a byterange
    spanning the necessary segments.

    Handles prefix, suffix, and fully-specified byte ranges.

    Examples:
        client_range_to_segment_range(100, 700, 512) = (0, 1023)
        client_range_to_segment_range(100, 700, 256) = (0, 767)
        client_range_to_segment_range(300, None, 256) = (256, None)

    :param client_start: first byte of the range requested by the client
    :param client_end: last byte of the range requested by the client
    :param segment_size: size of an EC segment, in bytes

    :returns: a 2-tuple (seg_start, seg_end) where

      * seg_start is the first byte of the first segment, or None if this is
        a suffix byte range

      * seg_end is the last byte of the last segment, or None if this is a
        prefix byte range
    """
    # the index of the first byte of the first segment
    segment_start = (
        int(client_start // segment_size)
        * segment_size) if client_start is not None else None
    # the index of the last byte of the last segment
    segment_end = (
        # bytes M-
        None if client_end is None else
        # bytes M-N
        (((int(client_end // segment_size) + 1)
          * segment_size) - 1) if client_start is not None else
        # bytes -N: we get some extra bytes to make sure we
        # have all we need.
        #
        # To see why, imagine a 100-byte segment size, a
        # 340-byte object, and a request for the last 50
        # bytes. Naively requesting the last 100 bytes would
        # result in a truncated first segment and hence a
        # truncated download. (Of course, the actual
        # obj-server requests are for fragments, not
        # segments, but that doesn't change the
        # calculation.)
        #
        # This does mean that we fetch an extra segment if
        # the object size is an exact multiple of the
        # segment size. It's a little wasteful, but it's
        # better to be a little wasteful than to get some
        # range requests completely wrong.
        (int(math.ceil((
            float(client_end) / segment_size) + 1))  # nsegs
         * segment_size))
    return (segment_start, segment_end)


def segment_range_to_fragment_range(segment_start, segment_end, segment_size,
                                    fragment_size):
    """
    Takes a byterange spanning some segments and converts that into a
    byterange spanning the corresponding fragments within their fragment
    archives.

    Handles prefix, suffix, and fully-specified byte ranges.

    :param segment_start: first byte of the first segment
    :param segment_end: last byte of the last segment
    :param segment_size: size of an EC segment, in bytes
    :param fragment_size: size of an EC fragment, in bytes

    :returns: a 2-tuple (frag_start, frag_end) where

      * frag_start is the first byte of the first fragment, or None if this
        is a suffix byte range

      * frag_end is the last byte of the last fragment, or None if this is a
        prefix byte range
    """
    # Note: segment_start and (segment_end + 1) are
    # multiples of segment_size, so we don't have to worry
    # about integer math giving us rounding troubles.
    #
    # There's a whole bunch of +1 and -1 in here; that's because HTTP wants
    # byteranges to be inclusive of the start and end, so e.g. bytes 200-300
    # is a range containing 101 bytes. Python has half-inclusive ranges, of
    # course, so we have to convert back and forth. We try to keep things in
    # HTTP-style byteranges for consistency.

    # the index of the first byte of the first fragment
    fragment_start = ((
        segment_start / segment_size * fragment_size)
        if segment_start is not None else None)
    # the index of the last byte of the last fragment
    fragment_end = (
        # range unbounded on the right
        None if segment_end is None else
        # range unbounded on the left; no -1 since we're
        # asking for the last N bytes, not to have a
        # particular byte be the last one
        ((segment_end + 1) / segment_size
         * fragment_size) if segment_start is None else
        # range bounded on both sides; the -1 is because the
        # rest of the expression computes the length of the
        # fragment, and a range of N bytes starts at index M
        # and ends at M + N - 1.
        ((segment_end + 1) / segment_size * fragment_size) - 1)
    return (fragment_start, fragment_end)
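
# Worked example for the two helpers above (illustrative only): with
# 1024-byte segments encoded into 300-byte fragments, a client range of
# bytes 100-700 maps to segment bytes 0-1023 and then to fragment bytes
# 0-299 in each fragment archive:
#
#   client_range_to_segment_range(100, 700, 1024)        -> (0, 1023)
#   segment_range_to_fragment_range(0, 1023, 1024, 300)  -> (0, 299)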


NO_DATA_SENT = 1
SENDING_DATA = 2
DATA_SENT = 3
DATA_ACKED = 4
COMMIT_SENT = 5


class ECPutter(object):
    """
    An HTTP PUT request that supports streaming.

    This exists mostly to wrap up the fact that all EC PUTs are chunked
    (because of the MIME boundary footer trick) and to handle the first
    half of the two-phase PUT conversation.
    """
    def __init__(self, conn, node, resp, path, connect_duration,
                 mime_boundary):
        # Note: you probably want to call ECPutter.connect() instead of
        # instantiating one of these directly.
        self.conn = conn
        self.node = node
        self.resp = resp
        self.path = path
        self.connect_duration = connect_duration
        # for handoff nodes node_index is None
        self.node_index = node.get('index')
        self.mime_boundary = mime_boundary
        self.chunk_hasher = md5()

        self.failed = False
        self.queue = None
        self.state = NO_DATA_SENT

    def current_status(self):
        """
        Returns the current status of the response.

        A response starts off with no current status, then may or may not have
        a status of 100 for some time, and then ultimately has a final status
        like 200, 404, et cetera.
        """
        return self.resp.status

    def await_response(self, timeout, informational=False):
        """
        Get 100-continue response indicating the end of 1st phase of a 2-phase
        commit or the final response, i.e. the one with status >= 200.

        Might or might not actually wait for anything. If we said Expect:
        100-continue but got back a non-100 response, that'll be the thing
        returned, and we won't do any network IO to get it. OTOH, if we got
        a 100 Continue response and sent up the PUT request's body, then
        we'll actually read the 2xx-5xx response off the network here.

        :returns: HTTPResponse
        :raises: Timeout if the response took too long
        """
        conn = self.conn
        with Timeout(timeout):
            if not conn.resp:
                if informational:
                    self.resp = conn.getexpect()
                else:
                    self.resp = conn.getresponse()
            return self.resp

    def spawn_sender_greenthread(self, pool, queue_depth, write_timeout,
                                 exception_handler):
        """Call before sending the first chunk of request body"""
        self.queue = Queue(queue_depth)
        pool.spawn(self._send_file, write_timeout, exception_handler)

    def wait(self):
        if self.queue.unfinished_tasks:
            self.queue.join()

    def _start_mime_doc_object_body(self):
        self.queue.put("--%s\r\nX-Document: object body\r\n\r\n" %
                       (self.mime_boundary,))

    def send_chunk(self, chunk):
        if not chunk:
            # If we're not using chunked transfer-encoding, sending a 0-byte
            # chunk is just wasteful. If we *are* using chunked
            # transfer-encoding, sending a 0-byte chunk terminates the
            # request body. Neither one of these is good.
            return
        elif self.state == DATA_SENT:
            raise ValueError("called send_chunk after end_of_object_data")

        if self.state == NO_DATA_SENT and self.mime_boundary:
            # We're sending the object plus other stuff in the same request
            # body, all wrapped up in multipart MIME, so we'd better start
            # off the MIME document before sending any object data.
            self._start_mime_doc_object_body()
            self.state = SENDING_DATA

        self.queue.put(chunk)

    def end_of_object_data(self, footer_metadata):
        """
        Call when there is no more data to send.

        :param footer_metadata: dictionary of metadata items
        """
        if self.state == DATA_SENT:
            raise ValueError("called end_of_object_data twice")
        elif self.state == NO_DATA_SENT and self.mime_boundary:
            self._start_mime_doc_object_body()

        footer_body = json.dumps(footer_metadata)
        footer_md5 = md5(footer_body).hexdigest()

        tail_boundary = ("--%s" % (self.mime_boundary,))

        message_parts = [
            ("\r\n--%s\r\n" % self.mime_boundary),
            "X-Document: object metadata\r\n",
            "Content-MD5: %s\r\n" % footer_md5,
            "\r\n",
            footer_body, "\r\n",
            tail_boundary, "\r\n",
        ]
        self.queue.put("".join(message_parts))

        self.queue.put('')
        self.state = DATA_SENT

    def send_commit_confirmation(self):
        """
        Call when more than a quorum of 2XX responses have been received.
        Send commit confirmations to all object nodes to finalize the PUT.
        """
        if self.state == COMMIT_SENT:
            raise ValueError("called send_commit_confirmation twice")

        self.state = DATA_ACKED

        if self.mime_boundary:
            body = "put_commit_confirmation"
            tail_boundary = ("--%s--" % (self.mime_boundary,))
            message_parts = [
                "X-Document: put commit\r\n",
                "\r\n",
                body, "\r\n",
                tail_boundary,
            ]
            self.queue.put("".join(message_parts))

        self.queue.put('')
        self.state = COMMIT_SENT

    def _send_file(self, write_timeout, exception_handler):
        """
        Method for a file PUT coro. Takes chunks from a queue and sends them
        down a socket.

        If something goes wrong, the "failed" attribute will be set to true
        and the exception handler will be called.
        """
        while True:
            chunk = self.queue.get()
            if not self.failed:
                to_send = "%x\r\n%s\r\n" % (len(chunk), chunk)
                try:
                    with ChunkWriteTimeout(write_timeout):
                        self.conn.send(to_send)
                except (Exception, ChunkWriteTimeout):
                    self.failed = True
                    exception_handler(self.conn.node, _('Object'),
                                      _('Trying to write to %s') % self.path)
            self.queue.task_done()

    @classmethod
    def connect(cls, node, part, path, headers, conn_timeout, node_timeout,
                chunked=False, expected_frag_archive_size=None):
        """
        Connect to a backend node and send the headers.

        :returns: Putter instance

        :raises: ConnectionTimeout if initial connection timed out
        :raises: ResponseTimeout if header retrieval timed out
        :raises: InsufficientStorage on 507 response from node
        :raises: PutterConnectError on non-507 server error response from node
        :raises: FooterNotSupported if need_metadata_footer is set but
                 backend node can't process footers
        :raises: MultiphasePUTNotSupported if need_multiphase_support is
                 set but backend node can't handle multiphase PUT
        """
        mime_boundary = "%.64x" % random.randint(0, 16 ** 64)
        headers = HeaderKeyDict(headers)
        # We're going to be adding some unknown amount of data to the
        # request, so we can't use an explicit content length, and thus
        # we must use chunked encoding.
        headers['Transfer-Encoding'] = 'chunked'
        headers['Expect'] = '100-continue'

        # make sure this isn't there
        headers.pop('Content-Length')
        headers['X-Backend-Obj-Content-Length'] = expected_frag_archive_size

        headers['X-Backend-Obj-Multipart-Mime-Boundary'] = mime_boundary

        headers['X-Backend-Obj-Metadata-Footer'] = 'yes'

        headers['X-Backend-Obj-Multiphase-Commit'] = 'yes'

        start_time = time.time()
        with ConnectionTimeout(conn_timeout):
            conn = http_connect(node['ip'], node['port'], node['device'],
                                part, 'PUT', path, headers)
        connect_duration = time.time() - start_time

        with ResponseTimeout(node_timeout):
            resp = conn.getexpect()

        if resp.status == HTTP_INSUFFICIENT_STORAGE:
            raise InsufficientStorage

        if is_server_error(resp.status):
            raise PutterConnectError(resp.status)

        if is_informational(resp.status):
            continue_headers = HeaderKeyDict(resp.getheaders())
            can_send_metadata_footer = config_true_value(
                continue_headers.get('X-Obj-Metadata-Footer', 'no'))
            can_handle_multiphase_put = config_true_value(
                continue_headers.get('X-Obj-Multiphase-Commit', 'no'))

            if not can_send_metadata_footer:
                raise FooterNotSupported()

            if not can_handle_multiphase_put:
                raise MultiphasePUTNotSupported()

        conn.node = node
        conn.resp = None
        if is_success(resp.status) or resp.status == HTTP_CONFLICT:
            conn.resp = resp
        elif (headers.get('If-None-Match', None) is not None and
              resp.status == HTTP_PRECONDITION_FAILED):
            conn.resp = resp

        return cls(conn, node, resp, path, connect_duration, mime_boundary)


def chunk_transformer(policy, nstreams):
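    """
    Coroutine that buffers incoming client chunks into whole EC segments
    and, whenever one or more full segments are buffered, yields a list of
    per-node fragment strings (or None if more input is needed).  Sending
    an empty string flushes any buffered tail bytes.

    Illustrative driving pattern (mirroring how _transfer_data uses it):

        transform = chunk_transformer(policy, nstreams)
        transform.send(None)           # prime the coroutine
        frags = transform.send(chunk)  # None until a full segment buffers
        tail = transform.send('')      # flush whatever is left
    """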
    segment_size = policy.ec_segment_size

    buf = collections.deque()
    total_buf_len = 0

    chunk = yield
    while chunk:
        buf.append(chunk)
        total_buf_len += len(chunk)
        if total_buf_len >= segment_size:
            chunks_to_encode = []
            # extract as many chunks as we can from the input buffer
            while total_buf_len >= segment_size:
                to_take = segment_size
                pieces = []
                while to_take > 0:
                    piece = buf.popleft()
                    if len(piece) > to_take:
                        buf.appendleft(piece[to_take:])
                        piece = piece[:to_take]
                    pieces.append(piece)
                    to_take -= len(piece)
                    total_buf_len -= len(piece)
                chunks_to_encode.append(''.join(pieces))

            frags_by_byte_order = []
            for chunk_to_encode in chunks_to_encode:
                frags_by_byte_order.append(
                    policy.pyeclib_driver.encode(chunk_to_encode))
            # Sequential calls to encode() have given us a list that
            # looks like this:
            #
            # [[frag_A1, frag_B1, frag_C1, ...],
            #  [frag_A2, frag_B2, frag_C2, ...], ...]
            #
            # What we need is a list like this:
            #
            # [(frag_A1 + frag_A2 + ...),  # destined for node A
            #  (frag_B1 + frag_B2 + ...),  # destined for node B
            #  (frag_C1 + frag_C2 + ...),  # destined for node C
            #  ...]
            obj_data = [''.join(frags)
                        for frags in zip(*frags_by_byte_order)]
            chunk = yield obj_data
        else:
            # didn't have enough data to encode
            chunk = yield None

    # Now we've gotten an empty chunk, which indicates end-of-input.
    # Take any leftover bytes and encode them.
    last_bytes = ''.join(buf)
    if last_bytes:
        last_frags = policy.pyeclib_driver.encode(last_bytes)
        yield last_frags
    else:
        yield [''] * nstreams


def trailing_metadata(policy, client_obj_hasher,
                      bytes_transferred_from_client,
                      fragment_archive_index):
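    """
    Build the metadata footer sent to each object server at the end of an
    EC PUT (whole-object etag/size, fragment archive index, and EC scheme
    details).
    """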
    return {
        # etag and size values are being added twice here.
        # The container override header is used to update the container db
        # with these values as they represent the correct etag and size for
        # the whole object and not just the FA.
        # The object sysmeta headers will be saved on each FA of the object.
        'X-Object-Sysmeta-EC-Etag': client_obj_hasher.hexdigest(),
        'X-Object-Sysmeta-EC-Content-Length':
        str(bytes_transferred_from_client),
        'X-Backend-Container-Update-Override-Etag':
        client_obj_hasher.hexdigest(),
        'X-Backend-Container-Update-Override-Size':
        str(bytes_transferred_from_client),
        'X-Object-Sysmeta-Ec-Frag-Index': str(fragment_archive_index),
        # These fields are for debuggability,
        # AKA "what is this thing?"
        'X-Object-Sysmeta-EC-Scheme': policy.ec_scheme_description,
        'X-Object-Sysmeta-EC-Segment-Size': str(policy.ec_segment_size),
    }


@ObjectControllerRouter.register(EC_POLICY)
class ECObjectController(BaseObjectController):
    def _fragment_GET_request(self, req, node_iter, partition, policy):
        """
        Makes a GET request for a fragment.
        """
        backend_headers = self.generate_request_headers(
            req, additional=req.headers)

        getter = ResumingGetter(self.app, req, 'Object', node_iter,
                                partition, req.swift_entity_path,
                                backend_headers,
                                client_chunk_size=policy.fragment_size,
                                newest=False)
        return (getter, getter.response_parts_iter(req))

    def _convert_range(self, req, policy):
        """
        Take the requested range(s) from the client and convert them to
        range(s) to be sent to the object servers.

        This includes widening requested ranges to full segments, then
        converting those ranges to fragments so that we retrieve the minimum
        number of fragments from the object server.

        Mutates the request passed in.

        Returns a list of range specs (dictionaries with the different byte
        indices in them).
        """
        # Since segments and fragments have different sizes, we need
        # to modify the Range header sent to the object servers to
        # make sure we get the right fragments out of the fragment
        # archives.
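        # Worked example (hypothetical sizes): with an ec_segment_size of
        # 1048576 and a fragment_size of, say, 262144, a client request for
        # bytes=100-200 is first widened to the whole first segment
        # (bytes 0-1048575) and then scaled down to bytes 0-262143 of each
        # fragment archive.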
        segment_size = policy.ec_segment_size
        fragment_size = policy.fragment_size

        range_specs = []
        new_ranges = []
        for client_start, client_end in req.range.ranges:
            # TODO: coalesce ranges that overlap segments. For
            # example, "bytes=0-10,20-30,40-50" with a 64 KiB
            # segment size will result in a Range header in the
            # object request of "bytes=0-65535,0-65535,0-65535",
            # which is wasteful. We should be smarter and only
            # request that first segment once.
            segment_start, segment_end = client_range_to_segment_range(
                client_start, client_end, segment_size)

            fragment_start, fragment_end = \
                segment_range_to_fragment_range(
                    segment_start, segment_end,
                    segment_size, fragment_size)

            new_ranges.append((fragment_start, fragment_end))
            range_specs.append({'req_client_start': client_start,
                                'req_client_end': client_end,
                                'req_segment_start': segment_start,
                                'req_segment_end': segment_end,
                                'req_fragment_start': fragment_start,
                                'req_fragment_end': fragment_end})

        req.range = "bytes=" + ",".join(
            "%s-%s" % (s if s is not None else "",
                       e if e is not None else "")
            for s, e in new_ranges)
        return range_specs

    def _get_or_head_response(self, req, node_iter, partition, policy):
        req.headers.setdefault("X-Backend-Etag-Is-At",
                               "X-Object-Sysmeta-Ec-Etag")

        if req.method == 'HEAD':
            # no fancy EC decoding here, just one plain old HEAD request to
            # one object server because all fragments hold all metadata
            # information about the object.
            concurrency = policy.ec_ndata if self.app.concurrent_gets else 1
            resp = self.GETorHEAD_base(
                req, _('Object'), node_iter, partition,
                req.swift_entity_path, concurrency)
        else:  # GET request
            orig_range = None
            range_specs = []
            if req.range:
                orig_range = req.range
                range_specs = self._convert_range(req, policy)

            safe_iter = GreenthreadSafeIterator(node_iter)
            # Sending the request concurrently to all nodes and taking the
            # first response isn't useful for EC, since every node holds a
            # different fragment. EC also has its own implementation of
            # concurrent GETs to ec_ndata nodes, so we don't need to plumb a
            # concurrency value through to ResumingGetter.
            with ContextPool(policy.ec_ndata) as pool:
                pile = GreenAsyncPile(pool)
                for _junk in range(policy.ec_ndata):
                    pile.spawn(self._fragment_GET_request,
                               req, safe_iter, partition,
                               policy)

                bad_gets = []
                etag_buckets = collections.defaultdict(list)
                best_etag = None
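                # Bucket fragment GET responses by the whole-object etag
                # stored in sysmeta; best_etag tracks the largest bucket,
                # which is the object version we will try to decode once it
                # has ec_ndata matching responses.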
                for get, parts_iter in pile:
                    if is_success(get.last_status):
                        etag = HeaderKeyDict(
                            get.last_headers)['X-Object-Sysmeta-Ec-Etag']
                        etag_buckets[etag].append((get, parts_iter))
                        if etag != best_etag and (
                                len(etag_buckets[etag]) >
                                len(etag_buckets[best_etag])):
                            best_etag = etag
                    else:
                        bad_gets.append((get, parts_iter))
                    matching_response_count = max(
                        len(etag_buckets[best_etag]), len(bad_gets))
                    if (policy.ec_ndata - matching_response_count >
                            pile._pending) and node_iter.nodes_left > 0:
                        # we need more matching responses to reach ec_ndata
                        # than we have pending gets, as long as we still have
                        # nodes in node_iter we can spawn another
                        pile.spawn(self._fragment_GET_request, req,
                                   safe_iter, partition, policy)

            req.range = orig_range
            if len(etag_buckets[best_etag]) >= policy.ec_ndata:
                # headers can come from any of the getters
                resp_headers = HeaderKeyDict(
                    etag_buckets[best_etag][0][0].source_headers[-1])
                resp_headers.pop('Content-Range', None)
                eccl = resp_headers.get('X-Object-Sysmeta-Ec-Content-Length')
                obj_length = int(eccl) if eccl is not None else None

                # This is only true if we didn't get a 206 response, but
                # that's the only time this is used anyway.
                fa_length = int(resp_headers['Content-Length'])
                app_iter = ECAppIter(
                    req.swift_entity_path,
                    policy,
                    [iterator for getter, iterator in etag_buckets[best_etag]],
                    range_specs, fa_length, obj_length,
                    self.app.logger)
                resp = Response(
                    request=req,
                    headers=resp_headers,
                    conditional_response=True,
                    app_iter=app_iter)
                try:
                    app_iter.kickoff(req, resp)
                except HTTPException as err_resp:
                    # catch any HTTPException response here so that we can
                    # process response headers uniformly in _fix_response
                    resp = err_resp
            else:
                statuses = []
                reasons = []
                bodies = []
                headers = []
                for getter, body_parts_iter in bad_gets:
                    statuses.extend(getter.statuses)
                    reasons.extend(getter.reasons)
                    bodies.extend(getter.bodies)
                    headers.extend(getter.source_headers)
                resp = self.best_response(
                    req, statuses, reasons, bodies, 'Object',
                    headers=headers)
        self._fix_response(resp)
        return resp

    def _fix_response(self, resp):
        # EC fragment archives each have different bytes, hence different
        # etags. However, they all have the original object's etag stored in
        # sysmeta, so we copy that here (if it exists) so the client gets it.
        resp.headers['Etag'] = resp.headers.get('X-Object-Sysmeta-Ec-Etag')
        if (is_success(resp.status_int) or is_redirection(resp.status_int) or
                resp.status_int == HTTP_REQUESTED_RANGE_NOT_SATISFIABLE):
            resp.accept_ranges = 'bytes'
        if is_success(resp.status_int):
            resp.headers['Content-Length'] = resp.headers.get(
                'X-Object-Sysmeta-Ec-Content-Length')
            resp.fix_conditional_response()

    def _connect_put_node(self, node_iter, part, path, headers,
                          logger_thread_locals):
        """
        Make a connection for an erasure encoded object.

        Connects to the first working node that it finds in node_iter and sends
        over the request headers. Returns a Putter to handle the rest of the
        streaming, or None if no working nodes were found.
        """
        # the object server will get different bytes, so these
        # values do not apply (Content-Length might, in general, but
        # in the specific case of replication vs. EC, it doesn't).
        client_cl = headers.pop('Content-Length', None)
        headers.pop('Etag', None)

        expected_frag_size = None
        if client_cl:
            policy_index = int(headers.get('X-Backend-Storage-Policy-Index'))
            policy = POLICIES.get_by_index(policy_index)
            # TODO: PyECLib <= 1.2.0 may return segment info that differs
            # from the input (for aligned-data efficiency), but Swift never
            # segments that way. So, until PyECLib is fixed, calculate the
            # fragment length Swift will actually send to the object server
            # by making two separate get_segment_info calls:
            # policy.fragment_size covers the full-size segments, and the
            # call below covers the last (short) segment.

            # number of full segments, excluding any short tail - use
            # integer (truncating) division
            num_fragments = int(client_cl) // policy.ec_segment_size
            expected_frag_size = policy.fragment_size * num_fragments

            # calculate the tail fragment_size by hand and add it to
            # expected_frag_size
            last_segment_size = int(client_cl) % policy.ec_segment_size
            if last_segment_size:
                last_info = policy.pyeclib_driver.get_segment_info(
                    last_segment_size, policy.ec_segment_size)
                expected_frag_size += last_info['fragment_size']
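            # Illustrative arithmetic (hypothetical values): with
            # ec_segment_size = 1048576, fragment_size = 262144 and a client
            # Content-Length of 2621440 (2.5 segments), expected_frag_size
            # is 2 * 262144 plus whatever fragment_size get_segment_info
            # reports for the 524288-byte tail segment.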

        self.app.logger.thread_locals = logger_thread_locals
        for node in node_iter:
            try:
                putter = ECPutter.connect(
                    node, part, path, headers,
                    conn_timeout=self.app.conn_timeout,
                    node_timeout=self.app.node_timeout,
                    expected_frag_archive_size=expected_frag_size)
                self.app.set_node_timing(node, putter.connect_duration)
                return putter
            except InsufficientStorage:
                self.app.error_limit(node, _('ERROR Insufficient Storage'))
            except PutterConnectError as e:
                self.app.error_occurred(
                    node, _('ERROR %(status)d Expect: 100-continue '
                            'From Object Server') % {
                                'status': e.status})
            except (Exception, Timeout):
                self.app.exception_occurred(
                    node, _('Object'),
                    _('Expect: 100-continue on %s') % path)

    def _determine_chunk_destinations(self, putters):
        """
        Given a list of putters, return a dict where the key is the putter
        and the value is the node index to use.

        This is done so that we line up handoffs using the same node index
        (in the primary part list) as the primary that the handoff is standing
        in for.  This lets erasure-code fragment archives wind up on the
        preferred local primary nodes when possible.
        """
        # Give each putter a "chunk index": the index of the
        # transformed chunk that we'll send to it.
        #
        # For primary nodes, that's just its index (primary 0 gets
        # chunk 0, primary 1 gets chunk 1, and so on). For handoffs,
        # we assign the chunk index of a missing primary.
        handoff_conns = []
        chunk_index = {}
        for p in putters:
            if p.node_index is not None:
                chunk_index[p] = p.node_index
            else:
                handoff_conns.append(p)

        # Note: we may have more holes than handoffs. This is okay; it
        # just means that we failed to connect to one or more storage
        # nodes. Holes occur when a storage node is down, in which
        # case the connection is not replaced, and when a storage node
        # returns 507, in which case a handoff is used to replace it.
        holes = [x for x in range(len(putters))
                 if x not in chunk_index.values()]
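        # e.g. with six putters where primaries 0, 2, 4 and 5 connected and
        # two handoffs stand in for the rest, chunk_index starts as
        # {p0: 0, p2: 2, p4: 4, p5: 5}, holes is [1, 3], and the handoffs
        # are assigned chunk indexes 1 and 3 below.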

        for hole, p in zip(holes, handoff_conns):
            chunk_index[p] = hole
        return chunk_index

    def _transfer_data(self, req, policy, data_source, putters, nodes,
                       min_conns, etag_hasher):
        """
        Transfer data for an erasure coded object.

        This method was added in the PUT method extraction change
        """
        bytes_transferred = 0
        chunk_transform = chunk_transformer(policy, len(nodes))
        chunk_transform.send(None)

        def send_chunk(chunk):
            if etag_hasher:
                etag_hasher.update(chunk)
            backend_chunks = chunk_transform.send(chunk)
            if backend_chunks is None:
                # If there aren't enough bytes buffered for erasure encoding
                # (or whatever transform we're doing), the transform gives
                # us None.
                return

            for putter in list(putters):
                backend_chunk = backend_chunks[chunk_index[putter]]
                if not putter.failed:
                    putter.chunk_hasher.update(backend_chunk)
                    putter.send_chunk(backend_chunk)
                else:
                    putters.remove(putter)
            self._check_min_conn(
                req, putters, min_conns, msg='Object PUT exceptions during'
                ' send, %(conns)s/%(nodes)s required connections')

        try:
            with ContextPool(len(putters)) as pool:

                # build our chunk index dict to place handoffs at the same
                # partition node index as the primaries they are covering
                chunk_index = self._determine_chunk_destinations(putters)

                for putter in putters:
                    putter.spawn_sender_greenthread(
                        pool, self.app.put_queue_depth, self.app.node_timeout,
                        self.app.exception_occurred)
                while True:
                    with ChunkReadTimeout(self.app.client_timeout):
                        try:
                            chunk = next(data_source)
                        except StopIteration:
                            break
                    bytes_transferred += len(chunk)
                    if bytes_transferred > constraints.MAX_FILE_SIZE:
                        raise HTTPRequestEntityTooLarge(request=req)

                    send_chunk(chunk)

                if req.content_length and (
                        bytes_transferred < req.content_length):
                    req.client_disconnect = True
                    self.app.logger.warning(
                        _('Client disconnected without sending enough data'))
                    self.app.logger.increment('client_disconnects')
                    raise HTTPClientDisconnect(request=req)

                computed_etag = (etag_hasher.hexdigest()
                                 if etag_hasher else None)
                received_etag = req.headers.get(
                    'etag', '').strip('"')
                if (computed_etag and received_etag and
                   computed_etag != received_etag):
                    raise HTTPUnprocessableEntity(request=req)

                send_chunk('')  # flush out any buffered data

                for putter in putters:
                    trail_md = trailing_metadata(
                        policy, etag_hasher,
                        bytes_transferred,
                        chunk_index[putter])
                    trail_md['Etag'] = \
                        putter.chunk_hasher.hexdigest()
                    putter.end_of_object_data(trail_md)

                for putter in putters:
                    putter.wait()

                # for storage policies requiring 2-phase commit (e.g.
                # erasure coding), enforce >= 'quorum' number of
                # 100-continue responses - this indicates successful
                # object data and metadata commit and is a necessary
                # condition to be met before starting 2nd PUT phase
                final_phase = False
                need_quorum = True
                statuses, reasons, bodies, _junk, quorum = \
                    self._get_put_responses(
                        req, putters, len(nodes), final_phase,
                        min_conns, need_quorum=need_quorum)
                if not quorum:
                    self.app.logger.error(
                        _('Not enough object servers ack\'ed (got %d)'),
                        statuses.count(HTTP_CONTINUE))
                    raise HTTPServiceUnavailable(request=req)

                elif not self._have_adequate_informational(
                        statuses, min_conns):
                    resp = self.best_response(req, statuses, reasons, bodies,
                                              _('Object PUT'),
                                              quorum_size=min_conns)
                    if is_client_error(resp.status_int):
                        # A 4xx in this phase indicates a broken conversation
                        # between the proxy-server and the object-server
                        # (even if it's HTTP_UNPROCESSABLE_ENTITY), so regard
                        # it as HTTPServiceUnavailable.
                        raise HTTPServiceUnavailable(request=req)
                    else:
                        # Other errors should use raw best_response
                        raise resp

                # quorum achieved, start 2nd phase - send commit
                # confirmation to participating object servers
                # so they write a .durable state file indicating
                # a successful PUT
                for putter in putters:
                    putter.send_commit_confirmation()
                for putter in putters:
                    putter.wait()
        except ChunkReadTimeout as err:
            self.app.logger.warning(
                _('ERROR Client read timeout (%ss)'), err.seconds)
            self.app.logger.increment('client_timeouts')
            raise HTTPRequestTimeout(request=req)
        except ChunkReadError:
            req.client_disconnect = True
            self.app.logger.warning(
                _('Client disconnected without sending last chunk'))
            self.app.logger.increment('client_disconnects')
            raise HTTPClientDisconnect(request=req)
        except HTTPException:
            raise
        except Timeout:
            self.app.logger.exception(
                _('ERROR Exception causing client disconnect'))
            raise HTTPClientDisconnect(request=req)
        except Exception:
            self.app.logger.exception(
                _('ERROR Exception transferring data to object servers %s'),
                {'path': req.path})
            raise HTTPInternalServerError(request=req)

    def _have_adequate_responses(
            self, statuses, min_responses, conditional_func):
        """
        Given a list of statuses from several requests, determine whether a
        satisfactory number of nodes have responded with 1xx or 2xx statuses,
        i.e. whether the transaction can be deemed successful enough to send
        a success response to the client.

        :param statuses: list of statuses returned so far
        :param min_responses: minimal pass criterion for number of successes
        :param conditional_func: a callable function to check http status code
        :returns: True or False, depending on current number of successes
        """
        if sum(1 for s in statuses if (conditional_func(s))) >= min_responses:
            return True
        return False

    def _have_adequate_successes(self, statuses, min_responses):
        """
        Partial application of _have_adequate_responses for 2xx statuses.
        """
        return self._have_adequate_responses(
            statuses, min_responses, is_success)

    def _have_adequate_informational(self, statuses, min_responses):
        """
        Partial application of _have_adequate_responses for 1xx statuses.
        """
        return self._have_adequate_responses(
            statuses, min_responses, is_informational)

    def _await_response(self, conn, final_phase):
        return conn.await_response(
            self.app.node_timeout, not final_phase)

    def _get_conn_response(self, conn, req, logger_thread_locals,
                           final_phase, **kwargs):
        self.app.logger.thread_locals = logger_thread_locals
        try:
            resp = self._await_response(conn, final_phase=final_phase,
                                        **kwargs)
        except (Exception, Timeout):
            resp = None
            if final_phase:
                status_type = 'final'
            else:
                status_type = 'commit'
            self.app.exception_occurred(
                conn.node, _('Object'),
                _('Trying to get %s status of PUT to %s') % (
                    status_type, req.path))
        return (conn, resp)

    def _get_put_responses(self, req, putters, num_nodes, final_phase,
                           min_responses, need_quorum=True):
        """
        Collect erasure coded object responses.

        Collect object responses to a PUT request and determine if a
        satisfactory number of nodes have returned success.  Returns
        statuses, a quorum result if indicated by 'need_quorum', and etags
        if this is the final phase of a multiphase PUT transaction.

        :param req: the request
        :param putters: list of putters for the request
        :param num_nodes: number of nodes involved
        :param final_phase: boolean indicating if this is the last phase
        :param min_responses: minimum needed when not requiring quorum
        :param need_quorum: boolean indicating if quorum is required
        """
        statuses = []
        reasons = []
        bodies = []
        etags = set()

        pile = GreenAsyncPile(len(putters))
        for putter in putters:
            if putter.failed:
                continue
            pile.spawn(self._get_conn_response, putter, req,
                       self.app.logger.thread_locals, final_phase=final_phase)

        def _handle_response(putter, response):
            statuses.append(response.status)
            reasons.append(response.reason)
            if final_phase:
                body = response.read()
            else:
                body = ''
            bodies.append(body)
            if response.status == HTTP_INSUFFICIENT_STORAGE:
                putter.failed = True
                self.app.error_limit(putter.node,
                                     _('ERROR Insufficient Storage'))
            elif response.status >= HTTP_INTERNAL_SERVER_ERROR:
                putter.failed = True
                self.app.error_occurred(
                    putter.node,
                    _('ERROR %(status)d %(body)s From Object Server '
                      're: %(path)s') %
                    {'status': response.status,
                     'body': body[:1024], 'path': req.path})
            elif is_success(response.status):
                etags.add(response.getheader('etag').strip('"'))

        quorum = False
        for (putter, response) in pile:
            if response:
                _handle_response(putter, response)
                if self._have_adequate_successes(statuses, min_responses):
                    break
            else:
                putter.failed = True

        # give any pending requests *some* chance to finish
        finished_quickly = pile.waitall(self.app.post_quorum_timeout)
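        # waitall() hands back whichever (putter, response) pairs completed
        # within post_quorum_timeout; anything still outstanding is not
        # counted here.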
        for (putter, response) in finished_quickly:
            if response:
                _handle_response(putter, response)

        if need_quorum:
            if final_phase:
                while len(statuses) < num_nodes:
                    statuses.append(HTTP_SERVICE_UNAVAILABLE)
                    reasons.append('')
                    bodies.append('')
            else:
                # intermediate response phase - set quorum to True only if
                # enough responses agree on the same status class; any class
                # counts here except 5xx
                if self.have_quorum(statuses, num_nodes, quorum=min_responses):
                    quorum = True

        return statuses, reasons, bodies, etags, quorum

    def _store_object(self, req, data_source, nodes, partition,
                      outgoing_headers):
        """
        Store an erasure coded object.
        """
        policy_index = int(req.headers.get('X-Backend-Storage-Policy-Index'))
        policy = POLICIES.get_by_index(policy_index)
        # Since the request body sent from client -> proxy is not
        # the same as the request body sent proxy -> object, we
        # can't rely on the object-server to do the etag checking -
        # so we have to do it here.
        etag_hasher = md5()

        min_conns = policy.quorum
        putters = self._get_put_connections(
            req, nodes, partition, outgoing_headers,
            policy, expect=True)

        try:
            # check that a minimum number of connections were established and
            # meet all the correct conditions set in the request
            self._check_failure_put_connections(putters, req, nodes, min_conns)

            self._transfer_data(req, policy, data_source, putters,
                                nodes, min_conns, etag_hasher)
            final_phase = True
            need_quorum = False
            # The .durable file will propagate in a replicated fashion; if
            # one exists, the reconstructor will spread it around.
            # To avoid successfully writing an object but then refusing to
            # serve it on a subsequent GET because we don't have enough
            # durable data fragments, we require the same number of durable
            # writes as quorum fragment writes.  If object servers become
            # able to serve their non-durable fragment archives in the
            # future, we may be able to reduce this quorum count.
            min_conns = policy.quorum
            putters = [p for p in putters if not p.failed]
            # ignore response etags, and quorum boolean
            statuses, reasons, bodies, _etags, _quorum = \
                self._get_put_responses(req, putters, len(nodes),
                                        final_phase, min_conns,
                                        need_quorum=need_quorum)
        except HTTPException as resp:
            return resp

        etag = etag_hasher.hexdigest()
        resp = self.best_response(req, statuses, reasons, bodies,
                                  _('Object PUT'), etag=etag,
                                  quorum_size=min_conns)
        resp.last_modified = math.ceil(
            float(Timestamp(req.headers['X-Timestamp'])))
        return resp
swift-2.7.1/swift/proxy/controllers/container.py0000664000567000056710000002430213024044354023235 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift import gettext_ as _
import time

from six.moves.urllib.parse import unquote
from swift.common.utils import public, csv_append, Timestamp
from swift.common.constraints import check_metadata
from swift.common import constraints
from swift.common.http import HTTP_ACCEPTED, is_success
from swift.proxy.controllers.base import Controller, delay_denial, \
    cors_validation, clear_info_cache
from swift.common.storage_policy import POLICIES
from swift.common.swob import HTTPBadRequest, HTTPForbidden, \
    HTTPNotFound


class ContainerController(Controller):
    """WSGI controller for container requests"""
    server_type = 'Container'

    # Ensure these are all lowercase
    pass_through_headers = ['x-container-read', 'x-container-write',
                            'x-container-sync-key', 'x-container-sync-to',
                            'x-versions-location']

    def __init__(self, app, account_name, container_name, **kwargs):
        Controller.__init__(self, app)
        self.account_name = unquote(account_name)
        self.container_name = unquote(container_name)

    def _x_remove_headers(self):
        st = self.server_type.lower()
        return ['x-remove-%s-read' % st,
                'x-remove-%s-write' % st,
                'x-remove-versions-location',
                'x-remove-%s-sync-key' % st,
                'x-remove-%s-sync-to' % st]

    def _convert_policy_to_index(self, req):
        """
        Helper method to convert a policy name (from a request from a client)
        to a policy index (for a request to a backend).

        :param req: incoming request
        """
        policy_name = req.headers.get('X-Storage-Policy')
        if not policy_name:
            return
        policy = POLICIES.get_by_name(policy_name)
        if not policy:
            raise HTTPBadRequest(request=req,
                                 content_type="text/plain",
                                 body=("Invalid %s '%s'"
                                       % ('X-Storage-Policy', policy_name)))
        if policy.is_deprecated:
            body = 'Storage Policy %r is deprecated' % (policy.name)
            raise HTTPBadRequest(request=req, body=body)
        return int(policy)

    def clean_acls(self, req):
        if 'swift.clean_acl' in req.environ:
            for header in ('x-container-read', 'x-container-write'):
                if header in req.headers:
                    try:
                        req.headers[header] = \
                            req.environ['swift.clean_acl'](header,
                                                           req.headers[header])
                    except ValueError as err:
                        return HTTPBadRequest(request=req, body=str(err))
        return None

    def GETorHEAD(self, req):
        """Handler for HTTP GET/HEAD requests."""
        if not self.account_info(self.account_name, req)[1]:
            if 'swift.authorize' in req.environ:
                aresp = req.environ['swift.authorize'](req)
                if aresp:
                    return aresp
            return HTTPNotFound(request=req)
        part = self.app.container_ring.get_part(
            self.account_name, self.container_name)
        concurrency = self.app.container_ring.replica_count \
            if self.app.concurrent_gets else 1
        node_iter = self.app.iter_nodes(self.app.container_ring, part)
        resp = self.GETorHEAD_base(
            req, _('Container'), node_iter, part,
            req.swift_entity_path, concurrency)
        if 'swift.authorize' in req.environ:
            req.acl = resp.headers.get('x-container-read')
            aresp = req.environ['swift.authorize'](req)
            if aresp:
                return aresp
        if not req.environ.get('swift_owner', False):
            for key in self.app.swift_owner_headers:
                if key in resp.headers:
                    del resp.headers[key]
        return resp

    @public
    @delay_denial
    @cors_validation
    def GET(self, req):
        """Handler for HTTP GET requests."""
        return self.GETorHEAD(req)

    @public
    @delay_denial
    @cors_validation
    def HEAD(self, req):
        """Handler for HTTP HEAD requests."""
        return self.GETorHEAD(req)

    @public
    @cors_validation
    def PUT(self, req):
        """HTTP PUT request handler."""
        error_response = \
            self.clean_acls(req) or check_metadata(req, 'container')
        if error_response:
            return error_response
        policy_index = self._convert_policy_to_index(req)
        if not req.environ.get('swift_owner'):
            for key in self.app.swift_owner_headers:
                req.headers.pop(key, None)
        if len(self.container_name) > constraints.MAX_CONTAINER_NAME_LENGTH:
            resp = HTTPBadRequest(request=req)
            resp.body = 'Container name length of %d longer than %d' % \
                        (len(self.container_name),
                         constraints.MAX_CONTAINER_NAME_LENGTH)
            return resp
        account_partition, accounts, container_count = \
            self.account_info(self.account_name, req)
        if not accounts and self.app.account_autocreate:
            self.autocreate_account(req, self.account_name)
            account_partition, accounts, container_count = \
                self.account_info(self.account_name, req)
        if not accounts:
            return HTTPNotFound(request=req)
        if self.app.max_containers_per_account > 0 and \
                container_count >= self.app.max_containers_per_account and \
                self.account_name not in self.app.max_containers_whitelist:
            container_info = \
                self.container_info(self.account_name, self.container_name,
                                    req)
            if not is_success(container_info.get('status')):
                resp = HTTPForbidden(request=req)
                resp.body = 'Reached container limit of %s' % \
                    self.app.max_containers_per_account
                return resp
        container_partition, containers = self.app.container_ring.get_nodes(
            self.account_name, self.container_name)
        headers = self._backend_requests(req, len(containers),
                                         account_partition, accounts,
                                         policy_index)
        clear_info_cache(self.app, req.environ,
                         self.account_name, self.container_name)
        resp = self.make_requests(
            req, self.app.container_ring,
            container_partition, 'PUT', req.swift_entity_path, headers)
        return resp

    @public
    @cors_validation
    def POST(self, req):
        """HTTP POST request handler."""
        error_response = \
            self.clean_acls(req) or check_metadata(req, 'container')
        if error_response:
            return error_response
        if not req.environ.get('swift_owner'):
            for key in self.app.swift_owner_headers:
                req.headers.pop(key, None)
        account_partition, accounts, container_count = \
            self.account_info(self.account_name, req)
        if not accounts:
            return HTTPNotFound(request=req)
        container_partition, containers = self.app.container_ring.get_nodes(
            self.account_name, self.container_name)
        headers = self.generate_request_headers(req, transfer=True)
        clear_info_cache(self.app, req.environ,
                         self.account_name, self.container_name)
        resp = self.make_requests(
            req, self.app.container_ring, container_partition, 'POST',
            req.swift_entity_path, [headers] * len(containers))
        return resp

    @public
    @cors_validation
    def DELETE(self, req):
        """HTTP DELETE request handler."""
        account_partition, accounts, container_count = \
            self.account_info(self.account_name, req)
        if not accounts:
            return HTTPNotFound(request=req)
        container_partition, containers = self.app.container_ring.get_nodes(
            self.account_name, self.container_name)
        headers = self._backend_requests(req, len(containers),
                                         account_partition, accounts)
        clear_info_cache(self.app, req.environ,
                         self.account_name, self.container_name)
        resp = self.make_requests(
            req, self.app.container_ring, container_partition, 'DELETE',
            req.swift_entity_path, headers)
        # Indicates no server had the container
        if resp.status_int == HTTP_ACCEPTED:
            return HTTPNotFound(request=req)
        return resp

    def _backend_requests(self, req, n_outgoing, account_partition, accounts,
                          policy_index=None):
        additional = {'X-Timestamp': Timestamp(time.time()).internal}
        if policy_index is None:
            additional['X-Backend-Storage-Policy-Default'] = \
                int(POLICIES.default)
        else:
            additional['X-Backend-Storage-Policy-Index'] = str(policy_index)
        headers = [self.generate_request_headers(req, transfer=True,
                                                 additional=additional)
                   for _junk in range(n_outgoing)]

        for i, account in enumerate(accounts):
            i = i % len(headers)
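            # i round-robins over the outgoing container requests, so each
            # container server is told about a (roughly) distinct subset of
            # the account replicas to update.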

            headers[i]['X-Account-Partition'] = account_partition
            headers[i]['X-Account-Host'] = csv_append(
                headers[i].get('X-Account-Host'),
                '%(ip)s:%(port)s' % account)
            headers[i]['X-Account-Device'] = csv_append(
                headers[i].get('X-Account-Device'),
                account['device'])

        return headers
swift-2.7.1/swift/proxy/server.py0000664000567000056710000006264513024044354020227 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mimetypes
import os
import socket
from swift import gettext_ as _
from random import shuffle
from time import time
import functools
import sys

from eventlet import Timeout
import six

from swift import __canonical_version__ as swift_version
from swift.common import constraints
from swift.common.storage_policy import POLICIES
from swift.common.ring import Ring
from swift.common.utils import cache_from_env, get_logger, \
    get_remote_client, split_path, config_true_value, generate_trans_id, \
    affinity_key_function, affinity_locality_predicate, list_from_csv, \
    register_swift_info
from swift.common.constraints import check_utf8, valid_api_version
from swift.proxy.controllers import AccountController, ContainerController, \
    ObjectControllerRouter, InfoController
from swift.proxy.controllers.base import get_container_info, NodeIter
from swift.common.swob import HTTPBadRequest, HTTPForbidden, \
    HTTPMethodNotAllowed, HTTPNotFound, HTTPPreconditionFailed, \
    HTTPServerError, HTTPException, Request, HTTPServiceUnavailable
from swift.common.exceptions import APIVersionError


# List of entry points for mandatory middlewares.
#
# Fields:
#
# "name" (required) is the entry point name from setup.py.
#
# "after_fn" (optional) a function that takes a PipelineWrapper object as its
# single argument and returns a list of middlewares that this middleware
# should come after. Any middlewares in the returned list that are not present
# in the pipeline will be ignored, so you can safely name optional middlewares
# to come after. For example, ["catch_errors", "bulk"] would install this
# middleware after catch_errors and bulk if both were present, but if bulk
# were absent, would just install it after catch_errors.

required_filters = [
    {'name': 'catch_errors'},
    {'name': 'gatekeeper',
     'after_fn': lambda pipe: (['catch_errors']
                               if pipe.startswith('catch_errors')
                               else [])},
    {'name': 'dlo', 'after_fn': lambda _junk: [
        'staticweb', 'tempauth', 'keystoneauth',
        'catch_errors', 'gatekeeper', 'proxy_logging']},
    {'name': 'versioned_writes', 'after_fn': lambda _junk: [
        'slo', 'dlo', 'staticweb', 'tempauth', 'keystoneauth',
        'catch_errors', 'gatekeeper', 'proxy_logging']}]


class Application(object):
    """WSGI application for the proxy server."""

    def __init__(self, conf, memcache=None, logger=None, account_ring=None,
                 container_ring=None):
        if conf is None:
            conf = {}
        if logger is None:
            self.logger = get_logger(conf, log_route='proxy-server')
        else:
            self.logger = logger

        self._error_limiting = {}

        swift_dir = conf.get('swift_dir', '/etc/swift')
        self.swift_dir = swift_dir
        self.node_timeout = float(conf.get('node_timeout', 10))
        self.recoverable_node_timeout = float(
            conf.get('recoverable_node_timeout', self.node_timeout))
        self.conn_timeout = float(conf.get('conn_timeout', 0.5))
        self.client_timeout = int(conf.get('client_timeout', 60))
        self.put_queue_depth = int(conf.get('put_queue_depth', 10))
        self.object_chunk_size = int(conf.get('object_chunk_size', 65536))
        self.client_chunk_size = int(conf.get('client_chunk_size', 65536))
        self.trans_id_suffix = conf.get('trans_id_suffix', '')
        self.post_quorum_timeout = float(conf.get('post_quorum_timeout', 0.5))
        self.error_suppression_interval = \
            int(conf.get('error_suppression_interval', 60))
        self.error_suppression_limit = \
            int(conf.get('error_suppression_limit', 10))
        self.recheck_container_existence = \
            int(conf.get('recheck_container_existence', 60))
        self.recheck_account_existence = \
            int(conf.get('recheck_account_existence', 60))
        self.allow_account_management = \
            config_true_value(conf.get('allow_account_management', 'no'))
        self.object_post_as_copy = \
            config_true_value(conf.get('object_post_as_copy', 'true'))
        self.container_ring = container_ring or Ring(swift_dir,
                                                     ring_name='container')
        self.account_ring = account_ring or Ring(swift_dir,
                                                 ring_name='account')
        # ensure rings are loaded for all configured storage policies
        for policy in POLICIES:
            policy.load_ring(swift_dir)
        self.obj_controller_router = ObjectControllerRouter()
        self.memcache = memcache
        mimetypes.init(mimetypes.knownfiles +
                       [os.path.join(swift_dir, 'mime.types')])
        self.account_autocreate = \
            config_true_value(conf.get('account_autocreate', 'no'))
        self.auto_create_account_prefix = (
            conf.get('auto_create_account_prefix') or '.')
        self.expiring_objects_account = self.auto_create_account_prefix + \
            (conf.get('expiring_objects_account_name') or 'expiring_objects')
        self.expiring_objects_container_divisor = \
            int(conf.get('expiring_objects_container_divisor') or 86400)
        self.max_containers_per_account = \
            int(conf.get('max_containers_per_account') or 0)
        self.max_containers_whitelist = [
            a.strip()
            for a in conf.get('max_containers_whitelist', '').split(',')
            if a.strip()]
        self.deny_host_headers = [
            host.strip() for host in
            conf.get('deny_host_headers', '').split(',') if host.strip()]
        self.log_handoffs = config_true_value(conf.get('log_handoffs', 'true'))
        self.cors_allow_origin = [
            a.strip()
            for a in conf.get('cors_allow_origin', '').split(',')
            if a.strip()]
        self.strict_cors_mode = config_true_value(
            conf.get('strict_cors_mode', 't'))
        self.node_timings = {}
        self.timing_expiry = int(conf.get('timing_expiry', 300))
        self.sorting_method = conf.get('sorting_method', 'shuffle').lower()
        self.concurrent_gets = \
            config_true_value(conf.get('concurrent_gets'))
        self.concurrency_timeout = float(conf.get('concurrency_timeout',
                                                  self.conn_timeout))
        value = conf.get('request_node_count', '2 * replicas').lower().split()
        if len(value) == 1:
            rnc_value = int(value[0])
            self.request_node_count = lambda replicas: rnc_value
        elif len(value) == 3 and value[1] == '*' and value[2] == 'replicas':
            rnc_value = int(value[0])
            self.request_node_count = lambda replicas: rnc_value * replicas
        else:
            raise ValueError(
                'Invalid request_node_count value: %r' % ''.join(value))
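        # e.g. "request_node_count = 3" allows up to 3 nodes to be tried for
        # a request, while the default "2 * replicas" allows up to 6 nodes
        # on a 3-replica ring.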
        try:
            self._read_affinity = read_affinity = conf.get('read_affinity', '')
            self.read_affinity_sort_key = affinity_key_function(read_affinity)
        except ValueError as err:
            # make the message a little more useful
            raise ValueError("Invalid read_affinity value: %r (%s)" %
                             (read_affinity, err.message))
        try:
            write_affinity = conf.get('write_affinity', '')
            self.write_affinity_is_local_fn \
                = affinity_locality_predicate(write_affinity)
        except ValueError as err:
            # make the message a little more useful
            raise ValueError("Invalid write_affinity value: %r (%s)" %
                             (write_affinity, err.message))
        value = conf.get('write_affinity_node_count',
                         '2 * replicas').lower().split()
        if len(value) == 1:
            wanc_value = int(value[0])
            self.write_affinity_node_count = lambda replicas: wanc_value
        elif len(value) == 3 and value[1] == '*' and value[2] == 'replicas':
            wanc_value = int(value[0])
            self.write_affinity_node_count = \
                lambda replicas: wanc_value * replicas
        else:
            raise ValueError(
                'Invalid write_affinity_node_count value: %r' % ''.join(value))
        # swift_owner_headers are stripped by the account and container
        # controllers; we should extend header stripping to object controller
        # when a privileged object header is implemented.
        swift_owner_headers = conf.get(
            'swift_owner_headers',
            'x-container-read, x-container-write, '
            'x-container-sync-key, x-container-sync-to, '
            'x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, '
            'x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, '
            'x-account-access-control')
        self.swift_owner_headers = [
            name.strip().title()
            for name in swift_owner_headers.split(',') if name.strip()]
        # Initialization was successful, so now apply the client chunk size
        # parameter as the default read / write buffer size for the network
        # sockets.
        #
        # NOTE WELL: This is a class setting, so until we can set this on a
        # per-connection basis, this affects reading and writing on ALL
        # sockets, those between the proxy servers and external clients, and
        # those between the proxy servers and the other internal servers.
        #
        # ** Because it affects the client as well, we currently use the
        # client chunk size as the governor rather than the object chunk
        # size.
        socket._fileobject.default_bufsize = self.client_chunk_size
        self.expose_info = config_true_value(
            conf.get('expose_info', 'yes'))
        self.disallowed_sections = list_from_csv(
            conf.get('disallowed_sections', 'swift.valid_api_versions'))
        self.admin_key = conf.get('admin_key', None)
        register_swift_info(
            version=swift_version,
            strict_cors_mode=self.strict_cors_mode,
            policies=POLICIES.get_policy_info(),
            allow_account_management=self.allow_account_management,
            account_autocreate=self.account_autocreate,
            **constraints.EFFECTIVE_CONSTRAINTS)

    def check_config(self):
        """
        Check the configuration for possible errors
        """
        if self._read_affinity and self.sorting_method != 'affinity':
            self.logger.warning(
                "sorting_method is set to '%s', not 'affinity'; "
                "read_affinity setting will have no effect." %
                self.sorting_method)

    def get_object_ring(self, policy_idx):
        """
        Get the ring object to use to handle a request based on its policy.

        :param policy_idx: policy index as defined in swift.conf

        :returns: appropriate ring object
        """
        return POLICIES.get_object_ring(policy_idx, self.swift_dir)

    def get_controller(self, req):
        """
        Get the controller to handle a request.

        :param req: the request
        :returns: tuple of (controller class, path dictionary)

        :raises: ValueError (thrown by split_path) if given invalid path
        """
        if req.path == '/info':
            d = dict(version=None,
                     expose_info=self.expose_info,
                     disallowed_sections=self.disallowed_sections,
                     admin_key=self.admin_key)
            return InfoController, d

        version, account, container, obj = split_path(req.path, 1, 4, True)
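        # e.g. a path of "/v1/AUTH_test/cont/obj" yields version 'v1',
        # account 'AUTH_test', container 'cont' and object 'obj'; the
        # trailing components are None for account- or container-level
        # requests.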
        d = dict(version=version,
                 account_name=account,
                 container_name=container,
                 object_name=obj)
        if account and not valid_api_version(version):
            raise APIVersionError('Invalid path')
        if obj and container and account:
            info = get_container_info(req.environ, self)
            policy_index = req.headers.get('X-Backend-Storage-Policy-Index',
                                           info['storage_policy'])
            policy = POLICIES.get_by_index(policy_index)
            if not policy:
                # This indicates that a new policy has been created,
                # with rings, deployed, released (i.e. deprecated =
                # False), used by a client to create a container via
                # another proxy that was restarted after the policy
                # was released, and is now cached - all before this
                # worker was HUPed to stop accepting new
                # connections.  There should never be an "unknown"
                # index - but when there is - it's probably operator
                # error and hopefully temporary.
                raise HTTPServiceUnavailable('Unknown Storage Policy')
            return self.obj_controller_router[policy], d
        elif container and account:
            return ContainerController, d
        elif account and not container and not obj:
            return AccountController, d
        return None, d

    def __call__(self, env, start_response):
        """
        WSGI entry point.
        Wraps env in swob.Request object and passes it down.

        :param env: WSGI environment dictionary
        :param start_response: WSGI callable
        """
        try:
            if self.memcache is None:
                self.memcache = cache_from_env(env, True)
            req = self.update_request(Request(env))
            return self.handle_request(req)(env, start_response)
        except UnicodeError:
            err = HTTPPreconditionFailed(
                request=req, body='Invalid UTF8 or contains NULL')
            return err(env, start_response)
        except (Exception, Timeout):
            start_response('500 Server Error',
                           [('Content-Type', 'text/plain')])
            return ['Internal server error.\n']

    def update_request(self, req):
        if 'x-storage-token' in req.headers and \
                'x-auth-token' not in req.headers:
            req.headers['x-auth-token'] = req.headers['x-storage-token']
        return req

    def handle_request(self, req):
        """
        Entry point for proxy server.
        Should return a WSGI-style callable (such as swob.Response).

        :param req: swob.Request object
        """
        try:
            self.logger.set_statsd_prefix('proxy-server')
            if req.content_length and req.content_length < 0:
                self.logger.increment('errors')
                return HTTPBadRequest(request=req,
                                      body='Invalid Content-Length')

            try:
                if not check_utf8(req.path_info):
                    self.logger.increment('errors')
                    return HTTPPreconditionFailed(
                        request=req, body='Invalid UTF8 or contains NULL')
            except UnicodeError:
                self.logger.increment('errors')
                return HTTPPreconditionFailed(
                    request=req, body='Invalid UTF8 or contains NULL')

            try:
                controller, path_parts = self.get_controller(req)
                p = req.path_info
                if isinstance(p, six.text_type):
                    p = p.encode('utf-8')
            except APIVersionError:
                self.logger.increment('errors')
                return HTTPBadRequest(request=req)
            except ValueError:
                self.logger.increment('errors')
                return HTTPNotFound(request=req)
            if not controller:
                self.logger.increment('errors')
                return HTTPPreconditionFailed(request=req, body='Bad URL')
            if self.deny_host_headers and \
                    req.host.split(':')[0] in self.deny_host_headers:
                return HTTPForbidden(request=req, body='Invalid host header')

            self.logger.set_statsd_prefix('proxy-server.' +
                                          controller.server_type.lower())
            controller = controller(self, **path_parts)
            if 'swift.trans_id' not in req.environ:
                # if this wasn't set by an earlier middleware, set it now
                trans_id_suffix = self.trans_id_suffix
                trans_id_extra = req.headers.get('x-trans-id-extra')
                if trans_id_extra:
                    trans_id_suffix += '-' + trans_id_extra[:32]
                trans_id = generate_trans_id(trans_id_suffix)
                req.environ['swift.trans_id'] = trans_id
                self.logger.txn_id = trans_id
            req.headers['x-trans-id'] = req.environ['swift.trans_id']
            controller.trans_id = req.environ['swift.trans_id']
            self.logger.client_ip = get_remote_client(req)
            try:
                handler = getattr(controller, req.method)
                getattr(handler, 'publicly_accessible')
            except AttributeError:
                allowed_methods = getattr(controller, 'allowed_methods', set())
                return HTTPMethodNotAllowed(
                    request=req, headers={'Allow': ', '.join(allowed_methods)})
            old_authorize = None
            if 'swift.authorize' in req.environ:
                # We call authorize before the handler, always. If authorized,
                # we remove the swift.authorize hook so it isn't ever called
                # again. If not authorized, we return the denial unless the
                # controller's method indicates it'd like to gather more
                # information and try again later.
                resp = req.environ['swift.authorize'](req)
                if not resp and not req.headers.get('X-Copy-From-Account') \
                        and not req.headers.get('Destination-Account'):
                    # No resp means authorized, no delayed recheck required.
                    old_authorize = req.environ['swift.authorize']
                else:
                    # Response indicates denial, but we might delay the denial
                    # and recheck later. If not delayed, return the error now.
                    if not getattr(handler, 'delay_denial', None):
                        return resp
            # Save off original request method (GET, POST, etc.) in case it
            # gets mutated during handling.  This way logging can display the
            # method the client actually sent.
            req.environ['swift.orig_req_method'] = req.method
            try:
                if old_authorize:
                    req.environ.pop('swift.authorize', None)
                return handler(req)
            finally:
                if old_authorize:
                    req.environ['swift.authorize'] = old_authorize
        except HTTPException as error_response:
            return error_response
        except (Exception, Timeout):
            self.logger.exception(_('ERROR Unhandled exception in request'))
            return HTTPServerError(request=req)
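
    # Note on the dispatch above: controller methods exposed to clients are
    # decorated with @public (swift.common.utils), which sets
    # publicly_accessible on the handler; any other method name falls into
    # the HTTPMethodNotAllowed branch.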

    def sort_nodes(self, nodes):
        '''
        Sorts nodes in-place (and returns the sorted list) according to
        the configured strategy. The default "sorting" is to randomly
        shuffle the nodes. If the "timing" strategy is chosen, the nodes
        are sorted according to the stored timing data.
        '''
        # In the case of timing sorting, shuffling ensures that close timings
        # (i.e. within the rounding resolution) won't prefer one over another.
        # Python's sort is stable (http://wiki.python.org/moin/HowTo/Sorting/)
        shuffle(nodes)
        if self.sorting_method == 'timing':
            now = time()

            def key_func(node):
                timing, expires = self.node_timings.get(node['ip'], (-1.0, 0))
                return timing if expires > now else -1.0
            nodes.sort(key=key_func)
        elif self.sorting_method == 'affinity':
            nodes.sort(key=self.read_affinity_sort_key)
        return nodes
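
    # Minimal sketch (hypothetical addresses and timings) of the 'timing'
    # strategy, assuming sorting_method = 'timing': nodes with fresher,
    # lower recorded timings sort first.
    #
    #   app.set_node_timing({'ip': '10.0.0.1'}, 0.002)
    #   app.set_node_timing({'ip': '10.0.0.2'}, 0.250)
    #   app.sort_nodes([{'ip': '10.0.0.2'}, {'ip': '10.0.0.1'}])
    #   # -> [{'ip': '10.0.0.1'}, {'ip': '10.0.0.2'}]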

    def set_node_timing(self, node, timing):
        if self.sorting_method != 'timing':
            return
        now = time()
        timing = round(timing, 3)  # sort timings to the millisecond
        self.node_timings[node['ip']] = (timing, now + self.timing_expiry)

    def _error_limit_node_key(self, node):
        return "{ip}:{port}/{device}".format(**node)

    def error_limited(self, node):
        """
        Check if the node is currently error limited.

        :param node: dictionary of node to check
        :returns: True if error limited, False otherwise
        """
        now = time()
        node_key = self._error_limit_node_key(node)
        error_stats = self._error_limiting.get(node_key)

        if error_stats is None or 'errors' not in error_stats:
            return False
        if 'last_error' in error_stats and error_stats['last_error'] < \
                now - self.error_suppression_interval:
            self._error_limiting.pop(node_key, None)
            return False
        limited = error_stats['errors'] > self.error_suppression_limit
        if limited:
            self.logger.debug(
                _('Node error limited %(ip)s:%(port)s (%(device)s)'), node)
        return limited

    def error_limit(self, node, msg):
        """
        Mark a node as error limited. This immediately pretends the
        node received enough errors to trigger error suppression. Use
        this for errors like Insufficient Storage. For other errors
        use :func:`error_occurred`.

        :param node: dictionary of node to error limit
        :param msg: error message
        """
        node_key = self._error_limit_node_key(node)
        error_stats = self._error_limiting.setdefault(node_key, {})
        error_stats['errors'] = self.error_suppression_limit + 1
        error_stats['last_error'] = time()
        self.logger.error(_('%(msg)s %(ip)s:%(port)s/%(device)s'),
                          {'msg': msg, 'ip': node['ip'],
                          'port': node['port'], 'device': node['device']})
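
    # Hedged usage sketch (hypothetical node dict): after an explicit
    # error_limit() call, error_limited() keeps returning True until
    # error_suppression_interval (60s by default) elapses with no new errors.
    #
    #   node = {'ip': '10.0.0.3', 'port': 6000, 'device': 'sdb1'}
    #   app.error_limit(node, 'ERROR Insufficient Storage')
    #   app.error_limited(node)  # -> True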

    def _incr_node_errors(self, node):
        node_key = self._error_limit_node_key(node)
        error_stats = self._error_limiting.setdefault(node_key, {})
        error_stats['errors'] = error_stats.get('errors', 0) + 1
        error_stats['last_error'] = time()

    def error_occurred(self, node, msg):
        """
        Handle logging and error accounting when a node error occurs.

        :param node: dictionary of node to handle errors for
        :param msg: error message
        """
        self._incr_node_errors(node)
        self.logger.error(_('%(msg)s %(ip)s:%(port)s/%(device)s'),
                          {'msg': msg, 'ip': node['ip'],
                          'port': node['port'], 'device': node['device']})

    def iter_nodes(self, ring, partition, node_iter=None):
        return NodeIter(self, ring, partition, node_iter=node_iter)

    def exception_occurred(self, node, typ, additional_info,
                           **kwargs):
        """
        Handle logging of generic exceptions.

        :param node: dictionary of node to log the error for
        :param typ: server type
        :param additional_info: additional information to log
        """
        self._incr_node_errors(node)
        if 'level' in kwargs:
            log = functools.partial(self.logger.log, kwargs.pop('level'))
            if 'exc_info' not in kwargs:
                kwargs['exc_info'] = sys.exc_info()
        else:
            log = self.logger.exception
        log(_('ERROR with %(type)s server %(ip)s:%(port)s/%(device)s'
              ' re: %(info)s'),
            {'type': typ, 'ip': node['ip'],
             'port': node['port'], 'device': node['device'],
             'info': additional_info},
            **kwargs)

    def modify_wsgi_pipeline(self, pipe):
        """
        Called during WSGI pipeline creation. Modifies the WSGI pipeline
        context to ensure that mandatory middleware is present in the pipeline.

        :param pipe: A PipelineWrapper object
        """
        pipeline_was_modified = False
        for filter_spec in reversed(required_filters):
            filter_name = filter_spec['name']
            if filter_name not in pipe:
                afters = filter_spec.get('after_fn', lambda _junk: [])(pipe)
                insert_at = 0
                for after in afters:
                    try:
                        insert_at = max(insert_at, pipe.index(after) + 1)
                    except ValueError:  # not in pipeline; ignore it
                        pass
                self.logger.info(
                    'Adding required filter %s to pipeline at position %d' %
                    (filter_name, insert_at))
                ctx = pipe.create_filter(filter_name)
                pipe.insert_filter(ctx, index=insert_at)
                pipeline_was_modified = True

        if pipeline_was_modified:
            self.logger.info("Pipeline was modified. New pipeline is \"%s\".",
                             pipe)
        else:
            self.logger.debug("Pipeline is \"%s\"", pipe)


def app_factory(global_conf, **local_conf):
    """paste.deploy app factory for creating WSGI proxy apps."""
    conf = global_conf.copy()
    conf.update(local_conf)
    app = Application(conf)
    app.check_config()
    return app
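
# Typical paste.deploy wiring for this factory (a configuration sketch; the
# section name below follows the sample proxy-server.conf):
#
#   [app:proxy-server]
#   use = egg:swift#proxy
#   account_autocreate = true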
swift-2.7.1/swift/locale/0000775000567000056710000000000013024044470016407 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/ru/0000775000567000056710000000000013024044470017035 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/ru/LC_MESSAGES/0000775000567000056710000000000013024044470020622 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/ru/LC_MESSAGES/swift.po0000664000567000056710000013014113024044354022317 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# Lucas Palm , 2015. #zanata
# OpenStack Infra , 2015. #zanata
# Filatov Sergey , 2016. #zanata
# Grigory Mokhin , 2016. #zanata
# Ilya Alekseyev , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-03-27 11:17+0000\n"
"Last-Translator: Ilya Alekseyev \n"
"Language: ru\n"
"Plural-Forms: nplurals=4; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n"
"%10<=4 && (n%100<12 || n%100>14) ? 1 : n%10==0 || (n%10>=5 && n%10<=9) || (n"
"%100>=11 && n%100<=14)? 2 : 3);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Russian\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"Завершение работы пользователя"

#, python-format
msgid " - %s"
msgstr " - %s"

#, python-format
msgid " - parallel, %s"
msgstr " - параллельно, %s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"Проверено суффиксов: %(checked)d - хэшировано: %(hashed).2f%%, "
"синхронизировано: %(synced).2f%%"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "Ответили как размонтированные: %(ip)s/%(device)s"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"Реконструированно разделов: %(reconstructed)d/%(total)d (%(percentage).2f%%) "
"partitions of %(device)d/%(dtotal)d (%(dpercentage).2f%%) за время "
"%(time).2fs (%(rate).2f/sec, осталось: %(remaining)s)"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"Реплицировано разделов: %(replicated)d/%(total)d (%(percentage).2f%%) за "
"время %(time).2f с (%(rate).2f/с, осталось: %(remaining)s)"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s успешно, %(failure)s с ошибками"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s возвратил 503 для %(statuses)s"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d не запущен (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "Возможно, %s (%s) остановлен"

#, python-format
msgid "%s already started..."
msgstr "%s уже запущен..."

#, python-format
msgid "%s does not exist"
msgstr "%s не существует"

#, python-format
msgid "%s is not mounted"
msgstr "%s не смонтирован"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s ответил как размонтированный"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s выполняется (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s: %s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s: соединение сброшено на другой стороне"

#, python-format
msgid ", %s containers deleted"
msgstr ", удалено контейнеров: %s"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ", осталось контейнеров (возможно): %s"

#, python-format
msgid ", %s containers remaining"
msgstr ", осталось контейнеров: %s"

#, python-format
msgid ", %s objects deleted"
msgstr ", удалено объектов: %s"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ", осталось объектов (возможно): %s"

#, python-format
msgid ", %s objects remaining"
msgstr ", осталось объектов: %s"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ", прошло: %.02fs"

msgid ", return codes: "
msgstr ", коды возврата: "

msgid "Account"
msgstr "Учетная запись"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "Учетная запись %s не очищалась после %s"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "Проверка учетной записи в \"однократном\" режиме завершена: %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "Проход контроля учетной записи выполнен: %.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr ""
"Попытка репликации %(count)d баз данных за %(time).5f секунд (%(rate).5f/s)"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "Контроль %s не выполнен: %s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "Неправильный код возврата rsync: %(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "Начать проверку учетной записи в \"однократном\" режиме"

msgid "Begin account audit pass."
msgstr "Начать проход проверки учетной записи."

msgid "Begin container audit \"once\" mode"
msgstr "Начать проверку контейнера в \"однократном\" режиме"

msgid "Begin container audit pass."
msgstr "Начать проход проверки контейнера."

msgid "Begin container sync \"once\" mode"
msgstr "Начать синхронизацию контейнера в \"однократном\" режиме"

msgid "Begin container update single threaded sweep"
msgstr "Начать однонитевую сплошную проверку обновлений контейнера"

msgid "Begin container update sweep"
msgstr "Начать сплошную проверку обновлений контейнера"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "Начать проверку объекта в режиме \"%s\" (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr "Начать однонитевую сплошную проверку обновлений объекта"

msgid "Begin object update sweep"
msgstr "Начать сплошную проверку обновлений объекта"

#, python-format
msgid "Beginning pass on account %s"
msgstr "Начинается проход для учетной записи %s"

msgid "Beginning replication run"
msgstr "Запуск репликации"

msgid "Broker error trying to rollback locked connection"
msgstr "Ошибка посредника при попытке отката заблокированного соединения"

#, python-format
msgid "Can not access the file %s."
msgstr "Отсутствует доступ к файлу %s."

#, python-format
msgid "Can not load profile data from %s."
msgstr "Не удается загрузить данные профайла из %s."

#, python-format
msgid "Cannot read %s (%s)"
msgstr "Невозможно прочитать %s (%s)"

#, python-format
msgid "Cannot write %s (%s)"
msgstr "Невозможно записать %s (%s)"

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "Клиент не прочитал данные из proxy в %ss"

msgid "Client disconnected on read"
msgstr "Клиент отключен во время чтения"

msgid "Client disconnected without sending enough data"
msgstr "Клиент отключен без отправки данных"

msgid "Client disconnected without sending last chunk"
msgstr "Клиент отключился, не отправив последний фрагмент данных"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr ""
"Путь клиента %(client)s не соответствует пути в метаданных объекта %(meta)s"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"Опция internal_client_conf_path конфигурации не определена. Используется  "
"конфигурация по умолчанию. Используйте intenal-client.conf-sample для "
"информации об опциях"

msgid "Connection refused"
msgstr "Соединение отклонено"

msgid "Connection timeout"
msgstr "Тайм-аут соединения"

msgid "Container"
msgstr "контейнер"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "Проверка контейнера в \"однократном\" режиме завершена: %.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "Проход проверки контейнера завершен: %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "Синхронизация контейнера в \"однократном\" режиме завершена: %.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Сплошная однонитевая проверка обновлений контейнера завершена: "
"%(elapsed).02fs, успешно: %(success)s, сбоев: %(fail)s, без изменений: "
"%(no_change)s"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "Сплошная проверка обновлений контейнера завершена: %.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Сплошная проверка обновлений контейнера в %(path)s завершена: "
"%(elapsed).02fs, успешно: %(success)s, сбоев: %(fail)s, без изменений: "
"%(no_change)s"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr "Не удалось подключиться к порту %s:%s по истечении %s секунд"

#, python-format
msgid "Could not load %r: %s"
msgstr "Не удалось загрузить %r: %s"

#, python-format
msgid "Data download error: %s"
msgstr "Ошибка загрузки данных: %s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "Проход устройств выполнен: %.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "Каталог %r не связан со стратегией policy (%s)"

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "Ошибка %(db_file)s: %(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "Ошибка %(status)d %(body)s из сервера %(type)s"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "Ошибка %(status)d %(body)s, ответ от сервера объекта: %(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "Ошибка %(status)d. Ожидаемое значение от сервера объекта: 100-continue"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr ""
"Ошибка %(status)d. попытка выполнить метод %(method)s %(path)s из сервера "
"контейнера"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"Ошибка: обновление учетной записи не выполнено для %(ip)s:%(port)s/"
"%(device)s (операция будет повторена позднее): Ответ: %(status)s %(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"Ошибка: обновление учетной записи не выполнено, в запросе указано разное "
"число хостов и устройств: \"%s\" и \"%s\""

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "Ошибка: Неправильный запрос %(status)s из %(host)s"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "Ошибка: тайм-аут чтения клиента (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"Ошибка. Обновление контейнера не выполнено (сохранение асинхронных "
"обновлений будет выполнено позднее): %(status)d ответ от %(ip)s:%(port)s/"
"%(dev)s"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"Ошибка: обновление контейнера не выполнено, в запросе указано разное число "
"хостов и устройств: \"%s\" и \"%s\""

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "Ошибка: не удалось получить сведения об учетной записи %s"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "Ошибка: не удалось получить информацию о контейнере %s"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr "Ошибка: ошибка закрытия DiskFile %(data_file)s: %(exc)s : %(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "Ошибка. Исключительная ситуация при отключении клиента"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr ""
"ОШИБКА. Исключительная ситуация при передаче данных на серверы объектов %s"

msgid "ERROR Failed to get my own IPs?"
msgstr "Ошибка: не удалось получить собственные IP-адреса?"

msgid "ERROR Insufficient Storage"
msgstr "Ошибка - недостаточно памяти"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr ""
"Ошибка: контроль объекта %(obj)s не выполнен, объект помещен в карантин: "
"%(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "Ошибка Pickle, %s помещается в карантин"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "Ошибка: удаленный накопитель не смонтирован %s"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "Ошибка синхронизации %(db_file)s %(row)s"

#, python-format
msgid "ERROR Syncing %s"
msgstr "Ошибка синхронизации %s"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "Ошибка при попытке контроля %s"

msgid "ERROR Unhandled exception in request"
msgstr "Ошибка. Необрабатываемая исключительная ситуация в запросе"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "Ошибка: ошибка __call__ в %(method)s %(path)s "

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr ""
"Ошибка: обновление учетной записи не выполнено для %(ip)s:%(port)s/"
"%(device)s (операция будет повторена позднее)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr ""
"Ошибка: обновление учетной записи не выполнено для %(ip)s:%(port)s/"
"%(device)s (операция будет повторена позднее): "

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr ""
"Ошибка выполнения асинхронной передачи ожидающего файла с непредвиденным "
"именем %s"

msgid "ERROR auditing"
msgstr "ОШИБКА контроля"

#, python-format
msgid "ERROR auditing: %s"
msgstr "Ошибка контроля: %s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"Ошибка. Обновление контейнера не выполнена с %(ip)s:%(port)s/%(dev)s "
"(сохранение асинхронного обновления будет выполнено позднее)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "Ошибка чтения ответа HTTP из %s"

#, python-format
msgid "ERROR reading db %s"
msgstr "Ошибка чтения базы данных %s"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "Ошибка: команда rsync не выполнена с кодом %(code)s: %(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "Ошибка синхронизации %(file)s с узлом %(node)s"

msgid "ERROR trying to replicate"
msgstr "Ошибка при попытке репликации"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "Ошибка при попытке очистки %s"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr ""
"Ошибка с сервером %(type)s %(ip)s:%(port)s/%(device)s, возврат: %(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "Ошибка при загрузки скрытых объектов из %s: "

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "Ошибка с удаленным сервером %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr "Ошибка:  не удалось получить пути к разделам накопителей: %s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "Ошибка: ошибка при извлечении сегментов"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "Ошибка: не удалось получить доступ к %(path)s: %(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "Ошибка: не удалось запустить процесс контроля: %s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "Ошибка действия %(action)s для сохранения в кэш памяти: %(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "Ошибка кодирования в UTF-8: %s"

msgid "Error hashing suffix"
msgstr "Ошибка хэширования суффикса"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "Ошибка в %r с mtime_check_interval: %s"

#, python-format
msgid "Error limiting server %s"
msgstr "Ошибка ограничения сервера %s"

msgid "Error listing devices"
msgstr "Ошибка при выводе списка устройств"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "Ошибка при выводе результатов профилирования: %s"

msgid "Error parsing recon cache file"
msgstr "Ошибка анализа файла кэша recon"

msgid "Error reading recon cache file"
msgstr "Ошибка чтения файла кэша recon"

msgid "Error reading ringfile"
msgstr "Ошибка при чтении ringfile"

msgid "Error reading swift.conf"
msgstr "Ошибка чтения swift.conf"

msgid "Error retrieving recon data"
msgstr "Ошибка при получении данных recon"

msgid "Error syncing handoff partition"
msgstr "Ошибка при синхронизации раздела передачи управления"

msgid "Error syncing partition"
msgstr "Ошибка синхронизации раздела"

#, python-format
msgid "Error syncing with node: %s"
msgstr "Ошибка синхронизации с узлом %s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr ""
"Ошибка при попытке перекомпоновки стратегии  %(path)s: номер#%(policy)d "
"фрагмент#%(frag_index)s"

msgid "Error: An error occurred"
msgstr "Ошибка: произошла ошибка"

msgid "Error: missing config path argument"
msgstr "Ошибка: отсутствует аргумент пути конфигурации"

#, python-format
msgid "Error: unable to locate %s"
msgstr "Ошибка: не удалось найти %s"

msgid "Exception dumping recon cache"
msgstr "Исключительная ситуация при создании кэша recon"

msgid "Exception in top-level account reaper loop"
msgstr ""
"Исключительная ситуация в цикле чистильщика учетных записей верхнего уровня"

msgid "Exception in top-level replication loop"
msgstr "Исключительная ситуация в цикле репликации верхнего уровня"

msgid "Exception in top-levelreconstruction loop"
msgstr "Исключение в цикле реконструкции верхнего уровня"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "Исключительная ситуация во время удаления контейнера %s %s"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "Исключительная ситуация во время удаления объекта %s %s %s"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Исключительная ситуация в %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Exception with account %s"
msgstr "Исключительная ситуация в учетной записи %s"

#, python-format
msgid "Exception with containers for account %s"
msgstr "Исключительная ситуация в контейнерах для учетной записи %s"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr ""
"Исключительная ситуация в объектах для контейнера %(container)s для учетной "
"записи %(account)s"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "Ожидаемое значение: 100-continue в %s"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr "Следующая цепочка CNAME для %(given_domain)s в %(found_domain)s"

msgid "Found configs:"
msgstr "Обнаружены конфигурации:"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr ""
"В режиме передачи управления не все операции завершены. Принудительное "
"завершение текущего прохода репликации."

msgid "Host unreachable"
msgstr "Хост недоступен"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "Не завершен проход для учетной записи %s"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "Недопустимый формат X-Container-Sync-To %r"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "Недопустимый хост %r в X-Container-Sync-To"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "Недопустимая ожидающая запись %(file)s: %(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "Недопустимый ответ %(resp)s  от  %(full_path)s"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "Недопустимый ответ %(resp)s от %(ip)s"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"Недопустимая схема %r в X-Container-Sync-To, допустимые значения: \"//\", "
"\"http\" или \"https\"."

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "Принудительное завершение долго выполняющегося rsync: %s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "Загрузка JSON из %s провалилась (%s)"

msgid "Lockup detected.. killing live coros."
msgstr "Обнаружена блокировка.. принудительное завершение работающих модулей."

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "Преобразовано %(given_domain)s в %(found_domain)s"

#, python-format
msgid "No %s running"
msgstr "%s не выполняется"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "Отсутствует конечная точка кластера для %r %r"

#, python-format
msgid "No permission to signal PID %d"
msgstr "Нет прав доступа для отправки сигнала в PID %d"

#, python-format
msgid "No policy with index %s"
msgstr "Не найдено стратегии с индексом %s"

#, python-format
msgid "No realm key for %r"
msgstr "Отсутствует ключ области для %r"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "Не устройстве %s (%s) закончилось место"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "Ограниченная ошибка узла %(ip)s:%(port)s (%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "Недостаточное число подтверждений с серверов объектов (получено %d)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr ""
"Не найдено: %(sync_from)r => %(sync_to)r                       - объект "
"%(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "Ничего не реконструировано за %s с."

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "Ничего не реплицировано за %s с."

msgid "Object"
msgstr "Объект"

msgid "Object PUT"
msgstr "Функция PUT объекта"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr ""
"Функция PUT объекта возвратила 202 для 409: %(req_timestamp)s <= "
"%(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "Функция PUT объекта возвратила 412, %(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Контроль объекта (%(type)s) в режиме \"%(mode)s\" завершен: %(elapsed).02fs. "
"Всего в карантине: %(quars)d, всего ошибок: %(errors)d, всего файлов/с: "
"%(frate).2f, всего байт/с: %(brate).2f, время контроля: %(audit).2f, "
"скорость: %(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Проверка объекта (%(type)s). После %(start_time)s: локально: успешно - "
"%(passes)d, в карантине - %(quars)d, файлов с ошибками %(errors)d в секунду: "
"%(frate).2f , байт/с: %(brate).2f, общее время: %(total).2f, время контроля: "
"%(audit).2f, скорость: %(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "Состояние контроля объекта: %s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "Реконструкция объекта выполнена (однократно). (%.02f мин.)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "Реконструкция объекта выполнена. (%.02f мин.)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "Репликация объекта выполнена (однократно). (%.02f мин.)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "Репликация объекта выполнена. (%.02f мин.)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr "Серверы объектов вернули несоответствующие etag: %s"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Сплошная однонитевая проверка обновлений объекта завершена: %(elapsed).02fs, "
"%(success)s успешно, %(fail)s с ошибками"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "Сплошная проверка обновлений объекта завершена: %.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Сплошная проверка обновлений объекта на устройстве %(device)s завершена: "
"%(elapsed).02fs, успешно: %(success)s, ошибка: %(fail)s"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr "В X-Container-Sync-To не разрешены параметры, запросы и фрагменты"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr ""
"Время раздела: максимум: %(max).4fs, минимум: %(min).4fs, среднее: %(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr "Проход запущен; возможных контейнеров: %s; возможных объектов: %s"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "Проход выполнен за %ds; устарело объектов: %d"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "Проход выполняется до настоящего времени %ds; устарело объектов: %d"

msgid "Path required in X-Container-Sync-To"
msgstr "Требуется путь в X-Container-Sync-To"

#, python-format
msgid "Problem cleaning up %s"
msgstr "Неполадка при очистке %s"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "Возникла проблема при очистке %s (%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "Возникла неполадка при записи файла сохраняемого состояния %s (%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "Ошибка профилирования: %s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"%(hsh_path)s помещен в карантин в %(quar_path)s, так как не является "
"каталогом"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"%(object_path)s помещен в карантин в %(quar_path)s, так как не является "
"каталогом"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "%s помещено в карантин %s из-за базы данных %s"

#, python-format
msgid "Quarantining DB %s"
msgstr "БД %s помещена в карантин"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""
"Протокол тайм-аута при ограничении скорости %(sleep)s для %(account)s/"
"%(container)s/%(object)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "Удалено баз данных: %(remove)d"

#, python-format
msgid "Removing %s objects"
msgstr "Удаление объектов %s"

#, python-format
msgid "Removing partition: %s"
msgstr "Удаление раздела: %s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr "Удаление файла pid %(pid_file)s с ошибочным pid %(pid)d"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "Удаление pid файла %s с неверным pid-ом"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Удаление устаревшего файла pid %s"

msgid "Replication run OVER"
msgstr "Репликация запущена поверх"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "Возвращено 497 из-за черного списка: %s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"Возвращено 498 для %(meth)s в %(acc)s/%(cont)s/%(obj)s . Ratelimit "
"(максимальная задержка): %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr ""
"Обнаружено изменение кольца. Принудительное завершение текущего прохода "
"реконструкции."

msgid "Ring change detected. Aborting current replication pass."
msgstr ""
"Обнаружено кольцевое изменение. Принудительное завершение текущего прохода "
"репликации."

#, python-format
msgid "Running %s once"
msgstr "Однократное выполнение %s"

msgid "Running object reconstructor in script mode."
msgstr "Запуск утилиты реконструкции объектов в режиме скрипта."

msgid "Running object replicator in script mode."
msgstr "Запуск утилиты репликации объектов в режиме сценариев."

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "Сигнал: %s, pid: %s, сигнал: %s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"За %(time)s операций синхронизировано %(sync)s [удалено: %(delete)s, "
"добавлено: %(put)s], пропущено: %(skip)s, ошибки: %(fail)s"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"Выполнено проверок учетной записи: %(time)s, из них успешно: %(passed)s, с "
"ошибками: %(failed)s "

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr ""
"Выполнено проверок контейнера: %(time)s, из них успешно: %(pass)s, с "
"ошибками: %(fail)s "

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "%(device)s будет пропущен, так как он не смонтирован"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "%s будет пропущен, так как он не смонтирован"

#, python-format
msgid "Starting %s"
msgstr "Запуск %s"

msgid "Starting object reconstruction pass."
msgstr "Запуск прохода реконструкции объектов."

msgid "Starting object reconstructor in daemon mode."
msgstr "Запуск утилиты реконструкции объектов в режиме демона."

msgid "Starting object replication pass."
msgstr "Запуск прохода репликации объектов."

msgid "Starting object replicator in daemon mode."
msgstr "Запуск утилиты репликации объектов в режиме демона."

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "Успешное выполнение rsync для %(src)s на %(dst)s (%(time).03f)"

msgid "The file type are forbidden to access!"
msgstr "Запрещен доступ к этому типу файла!"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"Общее число %(key)s для контейнера (%(total)s) не соответствует сумме "
"%(key)s в стратегиях (%(sum)s)"

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "Тайм-аут действия %(action)s для сохранения в кэш памяти: %(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Исключение по таймауту %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "Попытка выполнения метода %(method)s %(path)s"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "Попытка GET-запроса %(full_path)s"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "Попытка получения состояния %s операции PUT в %s"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "Попытка получения конечного состояния PUT в %s"

msgid "Trying to read during GET"
msgstr "Попытка чтения во время операции GET"

msgid "Trying to read during GET (retrying)"
msgstr "Попытка чтения во время операции GET (выполняется повтор)"

msgid "Trying to send to client"
msgstr "Попытка отправки клиенту"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "Попытка синхронизации суффиксов с  %s"

#, python-format
msgid "Trying to write to %s"
msgstr "Попытка записи в %s"

msgid "UNCAUGHT EXCEPTION"
msgstr "Необрабатываемая исключительная ситуация"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "Не удалось найти раздел конфигурации %s в %s"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr "Не удалось загрузить клиент из конфигурации: %r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr "Не удалось найти %s в libc.  Оставлено как no-op."

#, python-format
msgid "Unable to locate config for %s"
msgstr "Не удалось найти конфигурационный файл для %s"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "Не удается найти конфигурации с номером %s для %s"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr ""
"Не удалось найти fallocate, posix_fallocate в libc.  Оставлено как no-op."

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "Не удалось выполнить функцию fsync() для каталога %s: %s"

#, python-format
msgid "Unable to read config from %s"
msgstr "Не удалось прочитать конфигурацию из %s"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "Синхронизация %(sync_from)r => %(sync_to)r без прав доступа"

#, python-format
msgid "Unexpected response: %s"
msgstr "Непредвиденный ответ: %s"

msgid "Unhandled exception"
msgstr "Необработанная исключительная ситуация"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr ""
"Неизвестное исключение в GET-запросе: %(account)r %(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "Отчет об обновлении для %(container)s %(dbfile)s не выполнен"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "Отчет об обновлении отправлен для %(container)s %(dbfile)s"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"Предупреждение: SSL должен быть включен только в целях тестирования. "
"Используйте внешнее завершение SSL для развертывания в рабочем режиме."

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr ""
"Предупреждение: не удалось изменить предельное значение для дескриптора "
"файла. Запущен без прав доступа root?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr ""
"Предупреждение: не удалось изменить предельное значение для числа процессов. "
"Запущен без прав доступа root?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr ""
"Предупреждение: не удалось изменить предельное значение для памяти. Запущен "
"без прав доступа root?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "Система ожидала %s секунд для %s завершения; освобождение"

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr "Система ожидала %s секунд для %s завершения; Принудительное завершение"

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr ""
"Предупреждение: не удается ограничить скорость без клиента с кэшированием "
"памяти"

#, python-format
msgid "method %s is not allowed."
msgstr "Метод %s не разрешен."

msgid "no log file found"
msgstr "Не найден файл протокола"

msgid "odfpy not installed."
msgstr "Библиотека odfpy не установлена."

#, python-format
msgid "plotting results failed due to %s"
msgstr "Ошибка в результатах plotting из-за %s"

msgid "python-matplotlib not installed."
msgstr "Библиотека python-matplotlib не установлена."
swift-2.7.1/swift/locale/ja/0000775000567000056710000000000013024044470017001 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/ja/LC_MESSAGES/0000775000567000056710000000000013024044470020566 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/ja/LC_MESSAGES/swift.po0000664000567000056710000011453313024044354022272 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# Sasuke(Kyohei MORIYAMA) <>, 2015
# Akihiro Motoki , 2015. #zanata
# OpenStack Infra , 2015. #zanata
# Tom Cocozzello , 2015. #zanata
# 笹原 昌美 , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-03-29 05:40+0000\n"
"Last-Translator: 笹原 昌美 \n"
"Language: ja\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Japanese\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"ユーザー終了"

#, python-format
msgid " - %s"
msgstr " - %s"

#, python-format
msgid " - parallel, %s"
msgstr " - パラレル、%s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"%(checked)d サフィックスが検査されました - ハッシュ済み %(hashed).2f%%、同期"
"済み %(synced).2f%%"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s はアンマウントとして応答しました"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(device)d/%(dtotal)d (%(dpercentage).2f%%) デバイスの %(reconstructed)d/"
"%(total)d (%(percentage).2f%%) パーティションが %(time).2fs で再構成されまし"
"た (%(rate).2f/秒、残り %(remaining)s)"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(replicated)d/%(total)d (%(percentage).2f%%) パーティションが%(time).2fs で"
"複製されました (%(rate).2f/秒、残り %(remaining)s)"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "成功 %(success)s、失敗 %(failure)s"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s が %(statuses)s について 503 を返しています"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d が実行されていません (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s (%s) が停止された可能性があります"

#, python-format
msgid "%s already started..."
msgstr "%s は既に開始されています..."

#, python-format
msgid "%s does not exist"
msgstr "%s が存在しません"

#, python-format
msgid "%s is not mounted"
msgstr "%s がマウントされていません"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s はアンマウントとして応答しました"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s が実行中 (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s: %s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s: 接続がピアによってリセットされました"

#, python-format
msgid ", %s containers deleted"
msgstr "、%s コンテナーが削除されました"

#, python-format
msgid ", %s containers possibly remaining"
msgstr "、%s コンテナーが残っていると思われます"

#, python-format
msgid ", %s containers remaining"
msgstr "、%s コンテナーが残っています"

#, python-format
msgid ", %s objects deleted"
msgstr "、%s オブジェクトが削除されました"

#, python-format
msgid ", %s objects possibly remaining"
msgstr "、%s オブジェクトが残っていると思われます"

#, python-format
msgid ", %s objects remaining"
msgstr "、%s オブジェクトが残っています"

#, python-format
msgid ", elapsed: %.02fs"
msgstr "、経過時間: %.02fs"

msgid ", return codes: "
msgstr "、戻りコード: "

msgid "Account"
msgstr "アカウント"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "アカウント %s は %s 以降リープされていません"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "アカウント監査 \"once\" モードが完了しました: %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "アカウント監査の処理が完了しました: %.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr "%(time).5f 秒で %(count)d 個の DB の複製を試行しました (%(rate).5f/s)"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "%s の監査が失敗しました: %s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "正しくない再同期戻りコード: %(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "アカウント監査 \"once\" モードの開始"

msgid "Begin account audit pass."
msgstr "アカウント監査パスを開始します。"

msgid "Begin container audit \"once\" mode"
msgstr "コンテナー監査「once」モードの開始"

msgid "Begin container audit pass."
msgstr "コンテナー監査パスを開始します。"

msgid "Begin container sync \"once\" mode"
msgstr "コンテナー同期「once」モードの開始"

msgid "Begin container update single threaded sweep"
msgstr "コンテナー更新単一スレッド化スイープの開始"

msgid "Begin container update sweep"
msgstr "コンテナー更新スイープの開始"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "オブジェクト監査「%s」モードの開始 (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr "オブジェクト更新単一スレッド化スイープの開始"

msgid "Begin object update sweep"
msgstr "オブジェクト更新スイープの開始"

#, python-format
msgid "Beginning pass on account %s"
msgstr "アカウント %s でパスを開始中"

msgid "Beginning replication run"
msgstr "複製の実行を開始中"

msgid "Broker error trying to rollback locked connection"
msgstr "ロック済み接続のロールバックを試行中のブローカーエラー"

#, python-format
msgid "Can not access the file %s."
msgstr "ファイル %s にアクセスできません。"

#, python-format
msgid "Can not load profile data from %s."
msgstr "プロファイルデータを %s からロードできません。"

#, python-format
msgid "Cannot read %s (%s)"
msgstr "%s を読み取ることができません (%s)"

#, python-format
msgid "Cannot write %s (%s)"
msgstr "%s を書き込むことができません (%s)"

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "クライアントは %s 内のプロキシーからの読み取りを行いませんでした"

msgid "Client disconnected on read"
msgstr "クライアントが読み取り時に切断されました"

msgid "Client disconnected without sending enough data"
msgstr "十分なデータを送信せずにクライアントが切断されました"

msgid "Client disconnected without sending last chunk"
msgstr "最後のチャンクを送信せずにクライアントが切断されました"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr ""
"クライアントパス %(client)s はオブジェクトメタデータ %(meta)s に保管されたパ"
"スに一致しません"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"設定オプション internal_client_conf_path が定義されていません。デフォルト設定"
"を使用しています。オプションについては internal-client.conf-sample を参照して"
"ください"

msgid "Connection refused"
msgstr "接続が拒否されました"

msgid "Connection timeout"
msgstr "接続がタイムアウトになりました"

msgid "Container"
msgstr "コンテナー"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "コンテナー監査「once」モードが完了しました: %.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "コンテナー監査の処理が完了しました: %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "コンテナー同期「once」モードが完了しました: %.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"コンテナー更新単一スレッド化スイープが完了しました: %(elapsed).02fs、成功 "
"%(success)s、失敗 %(fail)s、未変更 %(no_change)s"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "コンテナー更新スイープが完了しました: %.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"%(path)s のコンテナー更新スイープが完了しました: %(elapsed).02fs、成功 "
"%(success)s、失敗 %(fail)s、未変更 %(no_change)s"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr "%s 秒間の試行後に %s:%s にバインドできませんでした"

#, python-format
msgid "Could not load %r: %s"
msgstr "%r をロードできませんでした: %s"

#, python-format
msgid "Data download error: %s"
msgstr "データダウンロードエラー: %s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "デバイスの処理が完了しました: %.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "ディレクトリー %r は有効なポリシーにマップしていません (%s) "

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "エラー %(db_file)s: %(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "エラー %(status)d: %(type)s サーバーからの %(body)s"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "エラー %(status)d: オブジェクトサーバーからの %(body)s、re: %(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "エラー %(status)d: 予期: オブジェクトサーバーからの 100-continue"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr "エラー %(status)d: コンテナーサーバーから %(method)s %(path)s を試行中"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"エラー: アカウント更新が %(ip)s:%(port)s/%(device)s で失敗しました(後で再試行"
"されます): 応答 %(status)s %(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"エラー: アカウント更新に失敗しました。要求内のホスト数およびデバイス数が異な"
"ります: 「%s」vs「%s」"

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "エラー: ホスト %(host)s からの応答 %(status)s が正しくありません"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "エラー: クライアント読み取りがタイムアウトになりました (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"エラー: コンテナー更新に失敗しました (後の非同期更新のために保存中): %(ip)s:"
"%(port)s/%(dev)s からの %(status)d 応答"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"エラー: コンテナー更新に失敗しました。要求内のホスト数およびデバイス数が異な"
"ります: 「%s」vs「%s」"

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "ERROR アカウント情報 %s が取得できませんでした"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "エラー: コンテナー情報 %s を取得できませんでした"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr ""
"エラー: DiskFile %(data_file)s を閉じることができません: %(exc)s : %(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "エラー: 例外によりクライアントが切断されています"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr "エラー: オブジェクトサーバー %s へのデータ転送で例外が発生しました"

msgid "ERROR Failed to get my own IPs?"
msgstr "エラー: 自分の IP の取得に失敗?"

msgid "ERROR Insufficient Storage"
msgstr "エラー: ストレージが不足しています"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr "エラー: オブジェクト %(obj)s は監査に失敗し、検疫されました: %(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "エラー: ピックルの問題、%s を検疫します"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "エラー: リモートドライブに %s がマウントされていません"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "%(db_file)s %(row)s の同期エラー"

#, python-format
msgid "ERROR Syncing %s"
msgstr "%s の同期エラー"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "%s の監査を試行中にエラーが発生しました"

msgid "ERROR Unhandled exception in request"
msgstr "エラー: 要求で未処理例外が発生しました"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "エラー: %(method)s %(path)s での __call__ エラー"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr ""
"エラー: アカウント更新が %(ip)s:%(port)s/%(device)s で失敗しました(後で再試行"
"されます)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr ""
"エラー: アカウント更新が %(ip)s:%(port)s/%(device)s で失敗しました(後で再試行"
"されます): "

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr "エラー: 予期しない名前 %s を持つファイルを非同期保留中"

msgid "ERROR auditing"
msgstr "監査エラー"

#, python-format
msgid "ERROR auditing: %s"
msgstr "監査エラー: %s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"エラー: コンテナー更新が %(ip)s:%(port)s/%(dev)s で失敗しました (後の非同期更"
"新のために保存中)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "%s からの HTTP 応答の読み取りエラー"

#, python-format
msgid "ERROR reading db %s"
msgstr "DB %s の読み取りエラー"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "エラー: %(code)s との再同期に失敗しました: %(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "ノード %(node)s との %(file)s の同期エラー"

msgid "ERROR trying to replicate"
msgstr "複製の試行エラー"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "%s のクリーンアップを試行中にエラーが発生しました"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr ""
"%(type)s サーバー %(ip)s:%(port)s/%(device)s でのエラー、返された値: %(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "%s からの抑止のロードでエラーが発生しました: "

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "リモートサーバー %(ip)s:%(port)s/%(device)s でのエラー"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr "エラー: ドライブパーティションに対するパスの取得に失敗しました: %s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "エラー: セグメントの取得中にエラーが発生しました"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "エラー: %(path)s にアクセスできません: %(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "エラー: 監査を実行できません: %s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "memcached %(server)s に対する %(action)s がエラーになりました"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "UTF-8 へのエンコードエラー: %s"

msgid "Error hashing suffix"
msgstr "サフィックスのハッシュエラー"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "mtime_check_interval で %r にエラーがあります: %s"

#, python-format
msgid "Error limiting server %s"
msgstr "サーバー %s の制限エラー"

msgid "Error listing devices"
msgstr "デバイスのリストエラー"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "レンダリングプロファイル結果でのエラー: %s"

msgid "Error parsing recon cache file"
msgstr "再構成キャッシュファイルの構文解析エラー"

msgid "Error reading recon cache file"
msgstr "再構成キャッシュファイルの読み取りエラー"

msgid "Error reading ringfile"
msgstr "リングファイルの読み取りエラー"

msgid "Error reading swift.conf"
msgstr "swift.conf の読み取りエラー"

msgid "Error retrieving recon data"
msgstr "再構成データの取得エラー"

msgid "Error syncing handoff partition"
msgstr "ハンドオフパーティションの同期エラー"

msgid "Error syncing partition"
msgstr "パーティションとの同期エラー"

#, python-format
msgid "Error syncing with node: %s"
msgstr "ノードとの同期エラー: %s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr ""
"%(path)s の再構築を試行中にエラーが発生しました。ポリシー #%(policy)d フラグ"
"メント #%(frag_index)s"

msgid "Error: An error occurred"
msgstr "エラー: エラーが発生しました"

msgid "Error: missing config path argument"
msgstr "エラー: 構成パス引数がありません"

#, python-format
msgid "Error: unable to locate %s"
msgstr "エラー: %s が見つかりません"

msgid "Exception dumping recon cache"
msgstr "再構成キャッシュのダンプで例外が発生しました"

msgid "Exception in top-level account reaper loop"
msgstr "最上位アカウントリーパーループで例外が発生しました"

msgid "Exception in top-level replication loop"
msgstr "最上位複製ループで例外が発生しました"

msgid "Exception in top-levelreconstruction loop"
msgstr "最上位再構成ループで例外が発生しました"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "コンテナー %s %s の削除中に例外が発生しました"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "オブジェクト %s %s %s の削除中に例外が発生しました"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s で例外が発生しました"

#, python-format
msgid "Exception with account %s"
msgstr "アカウント %s で例外が発生しました"

#, python-format
msgid "Exception with containers for account %s"
msgstr "アカウント %s のコンテナーで例外が発生しました"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr ""
"アカウント %(account)s のコンテナー %(container)s のオブジェクトで例外が発生"
"しました"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "予期: %s での 100-continue"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s から %(found_domain)s へ CNAME チェーンをフォロー中"

msgid "Found configs:"
msgstr "構成が見つかりました:"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr ""
"ハンドオフのファーストモードにハンドオフが残っています。現行複製パスを打ち切"
"ります。"

msgid "Host unreachable"
msgstr "ホストが到達不能です"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "アカウント %s での不完全なパス"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "X-Container-Sync-To 形式 %r が無効です"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "無効なホスト %r が X-Container-Sync-To にあります"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "無効な保留中項目 %(file)s: %(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "%(full_path)s からの応答 %(resp)s が無効です"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "%(ip)s からの応答 %(resp)s が無効です"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"無効なスキーム %r が X-Container-Sync-To にあります。「//」、「http」、"
"「https」のいずれかでなければなりません。"

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "長期実行の再同期を強制終了中: %s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "%s からの JSON のロードが失敗しました (%s)"

msgid "Lockup detected.. killing live coros."
msgstr "ロックが検出されました.. ライブ coros を強制終了中"

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s が %(found_domain)s にマップされました"

#, python-format
msgid "No %s running"
msgstr "%s が実行されていません"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "%r %r のエンドポイントクラスターがありません"

#, python-format
msgid "No permission to signal PID %d"
msgstr "PID %d にシグナル通知する許可がありません"

#, python-format
msgid "No policy with index %s"
msgstr "インデックス %s のポリシーはありません"

#, python-format
msgid "No realm key for %r"
msgstr "%r のレルムキーがありません"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "%s 用のデバイス容量が残っていません (%s)"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "ノードエラー制限 %(ip)s:%(port)s (%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "肯定応答を返したオブジェクト・サーバーが不十分です (%d 取得)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr ""
"不検出 %(sync_from)r => %(sync_to)r                       - オブジェクト "
"%(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "%s 秒間で何も再構成されませんでした。"

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "%s 秒間で何も複製されませんでした。"

msgid "Object"
msgstr "オブジェクト"

msgid "Object PUT"
msgstr "オブジェクト PUT"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr ""
"オブジェクト PUT が 409 に対して 202 を返しています: %(req_timestamp)s<= "
"%(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "オブジェクト PUT が 412 を返しています。%(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"オブジェクト監査 (%(type)s) 「%(mode)s」モード完了: %(elapsed).02fs。合計検疫"
"済み: %(quars)d、合計エラー: %(errors)d、合計ファイル/秒: %(frate).2f、合計バ"
"イト/秒: %(brate).2f、監査時間: %(audit).2f、率: %(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"オブジェクト監査 (%(type)s)。%(start_time)s 以降: ローカル: 合格した監査 "
"%(passes)d、検疫済み %(quars)d、エラー %(errors)d、ファイル/秒: %(frate).2f、"
"バイト/秒: %(brate).2f、合計時間: %(total).2f、監査時間: %(audit).2f、率: "
"%(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "オブジェクト監査統計: %s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "オブジェクト再構成が完了しました (1 回)。(%.02f 分)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "オブジェクト再構成が完了しました。(%.02f 分)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "オブジェクト複製が完了しました (1 回)。(%.02f 分)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "オブジェクト複製が完了しました。(%.02f 分)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr "オブジェクトサーバーが %s 個の不一致 etag を返しました"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"オブジェクト更新単一スレッド化スイープが完了しました: %(elapsed).02fs、成功 "
"%(success)s、失敗 %(fail)s"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "オブジェクト更新スイープが完了しました: %.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"%(device)s のオブジェクト更新スイープが完了しました: %(elapsed).02fs、成功 "
"%(success)s、失敗 %(fail)s"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr ""
"パラメーター、照会、およびフラグメントは X-Container-Sync-To で許可されていま"
"せん"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr "パーティション時間: 最大 %(max).4fs、最小 %(min).4fs、中間 %(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr ""
"パスの開始中。%s コンテナーおよび %s オブジェクトが存在する可能性があります"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "%d でパスが完了しました。%d オブジェクトの有効期限が切れました"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "現在までのパス %d。%d オブジェクトの有効期限が切れました"

msgid "Path required in X-Container-Sync-To"
msgstr "X-Container-Sync-To にパスが必要です"

#, python-format
msgid "Problem cleaning up %s"
msgstr "%s のクリーンアップ中に問題が発生しました"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "%s のクリーンアップ中に問題が発生しました (%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "永続状態ファイル %s の書き込み中に問題が発生しました (%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "プロファイル作成エラー: %s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"ディレクトリーではないため、%(hsh_path)s は %(quar_path)s へ検疫されました"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"ディレクトリーではないため、%(object_path)s は %(quar_path)s へ検疫されました"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "%s から %s が検疫されました (%s データベースが原因)"

#, python-format
msgid "Quarantining DB %s"
msgstr "DB %s の検疫中"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""
"Ratelimit スリープログ: %(account)s/%(container)s/%(object)s の %(sleep)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "%(remove)d 個の DB が削除されました"

#, python-format
msgid "Removing %s objects"
msgstr "%s オブジェクトの削除中"

#, python-format
msgid "Removing partition: %s"
msgstr "パーティションの削除中: %s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr "正しくない pid %(pid)d の pid ファイル %(pid_file)s を削除中"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "無効な pid の pid ファイル %s を削除中"

#, python-format
msgid "Removing stale pid file %s"
msgstr "失効した pid ファイル %s を削除中"

msgid "Replication run OVER"
msgstr "複製の実行が終了しました"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "ブラックリスティングのため 497 を返しています: %s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"%(acc)s/%(cont)s/%(obj)s に対する %(meth)s に関して 498 を返しています。"
"Ratelimit (最大スリープ) %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr "リング変更が検出されました。現行再構成パスを打ち切ります。"

msgid "Ring change detected. Aborting current replication pass."
msgstr "リング変更が検出されました。現行複製パスを打ち切ります。"

#, python-format
msgid "Running %s once"
msgstr "%s を 1 回実行中"

msgid "Running object reconstructor in script mode."
msgstr "スクリプトモードでオブジェクトリコンストラクターを実行中です。"

msgid "Running object replicator in script mode."
msgstr "スクリプトモードでオブジェクトレプリケーターを実行中です。"

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "%s のシグナル通知、pid: %s シグナル: %s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"%(time)s 以降: 同期済み %(sync)s [削除 %(delete)s、書き込み %(put)s]、スキッ"
"プ %(skip)s、失敗 %(fail)s"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"%(time)s 以降: アカウント監査: 合格した監査 %(passed)s、不合格の監"
"査%(failed)s"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr ""
"%(time)s 以降: コンテナー監査: 合格した監査 %(pass)s、不合格の監査%(fail)s"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "%(device)s はマウントされていないため、スキップされます"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "マウントされていないため、 %s をスキップします"

#, python-format
msgid "Starting %s"
msgstr "%s を開始しています"

msgid "Starting object reconstruction pass."
msgstr "オブジェクト再構成パスを開始中です。"

msgid "Starting object reconstructor in daemon mode."
msgstr "オブジェクトリコンストラクターをデーモンモードで開始中です。"

msgid "Starting object replication pass."
msgstr "オブジェクト複製パスを開始中です。"

msgid "Starting object replicator in daemon mode."
msgstr "オブジェクトレプリケーターをデーモンモードで開始中です。"

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "%(dst)s での %(src)s の再同期が成功しました (%(time).03f)"

msgid "The file type are forbidden to access!"
msgstr "このファイルタイプにはアクセスが禁止されています"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"コンテナーの合計 %(key)s (%(total)s) がポリシー全体の合計 %(key)s(%(sum)s) に"
"一致しません"

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "memcached %(server)s に対する %(action)s がタイムアウトになりました"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s のタイムアウト例外"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "%(method)s %(path)s を試行中"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "GET %(full_path)s を試行中"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "%s への PUT の状況 %s の取得を試行中"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "%s への PUT の最終状況の取得を試行中"

msgid "Trying to read during GET"
msgstr "GET 時に読み取りを試行中"

msgid "Trying to read during GET (retrying)"
msgstr "GET 時に読み取りを試行中 (再試行中)"

msgid "Trying to send to client"
msgstr "クライアントへの送信を試行中"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "%s でサフィックスの同期を試行中"

#, python-format
msgid "Trying to write to %s"
msgstr "%s への書き込みを試行中"

msgid "UNCAUGHT EXCEPTION"
msgstr "キャッチされていない例外"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "%s 構成セクションが %s に見つかりません"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr "設定から内部クライアントをロードできません: %r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr "%s が libc に見つかりません。no-op として終了します。"

#, python-format
msgid "Unable to locate config for %s"
msgstr "%s の設定が見つかりません"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "%s の設定番号 %s が見つかりません"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr ""
"fallocate、posix_fallocate が libc に見つかりません。no-op として終了します。"

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "ディレクトリー %s で fsync() を実行できません: %s"

#, python-format
msgid "Unable to read config from %s"
msgstr "構成を %s から読み取ることができません"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "非認証 %(sync_from)r => %(sync_to)r"

#, python-format
msgid "Unexpected response: %s"
msgstr "予期しない応答: %s"

msgid "Unhandled exception"
msgstr "未処理例外"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr ""
"GET を試行中に不明な例外が発生しました: %(account)r %(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "%(container)s %(dbfile)s に関する更新レポートが失敗しました"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "%(container)s %(dbfile)s に関する更新レポートが送信されました"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"警告: SSL を有効にするのはテスト目的のみでなければなりません。製品のデプロイ"
"には外部 SSL 終端を使用してください。"

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr "警告: ファイル記述子制限を変更できません。非ルートとして実行しますか?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr "警告: 最大処理限界を変更できません。非ルートとして実行しますか?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr "警告: メモリー制限を変更できません。非ルートとして実行しますか?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "%s 秒間、%s の停止を待機しました。中止します"

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr "%s 秒間、%s の停止を待機しました。強制終了します"

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr "警告: memcached クライアントなしで ratelimit を行うことはできません"

#, python-format
msgid "method %s is not allowed."
msgstr "メソッド %s は許可されていません。"

msgid "no log file found"
msgstr "ログファイルが見つかりません"

msgid "odfpy not installed."
msgstr "odfpy がインストールされていません。"

#, python-format
msgid "plotting results failed due to %s"
msgstr "%s が原因で結果のプロットに失敗しました"

msgid "python-matplotlib not installed."
msgstr "python-matplotlib がインストールされていません。"
swift-2.7.1/swift/locale/zh_TW/0000775000567000056710000000000013024044470017442 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/zh_TW/LC_MESSAGES/0000775000567000056710000000000013024044470021227 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/zh_TW/LC_MESSAGES/swift.po0000664000567000056710000010231513024044354022726 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# Jennifer , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-21 08:17+0000\n"
"Last-Translator: Jennifer \n"
"Language: zh-TW\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Chinese (Taiwan)\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"使用者退出"

#, python-format
msgid " - %s"
msgstr " - %s"

#, python-format
msgid " - parallel, %s"
msgstr " - 平行,%s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"已檢查 %(checked)d 個字尾 - %(hashed).2f%% 個已雜湊,%(synced).2f%% 個已同步"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s 已回應為未裝載"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"在 %(time).2f 秒內重新建構了 %(device)d/%(dtotal)d (%(dpercentage).2f%%) 個裝"
"置的 %(reconstructed)d/%(total)d (%(percentage).2f%%) 個分割區(%(rate).2f/"
"秒,剩餘 %(remaining)s)"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"已在 %(time).2f 秒內抄寫了 %(replicated)d/%(total)d (%(percentage).2f%%) 個分"
"割區(%(rate).2f/秒,剩餘 %(remaining)s)"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s 個成功,%(failure)s 個失敗"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s 針對 %(statuses)s 正在傳回 503"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d 未在執行中 (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s (%s) 似乎已停止"

#, python-format
msgid "%s already started..."
msgstr "%s 已啟動..."

#, python-format
msgid "%s does not exist"
msgstr "%s 不存在"

#, python-format
msgid "%s is not mounted"
msgstr "未裝載 %s"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s 已回應為未裝載"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s 在執行中 (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s:%s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s:已由對等項目重設連線"

#, python-format
msgid ", %s containers deleted"
msgstr ",已刪除 %s 個儲存器"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ",可能剩餘 %s 個儲存器"

#, python-format
msgid ", %s containers remaining"
msgstr ",剩餘 %s 個儲存器"

#, python-format
msgid ", %s objects deleted"
msgstr ",已刪除 %s 個物件"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ",可能剩餘 %s 個物件"

#, python-format
msgid ", %s objects remaining"
msgstr ",剩餘 %s 個物件"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ",經歷時間:%.02fs"

msgid ", return codes: "
msgstr ",回覆碼:"

msgid "Account"
msgstr "帳戶"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "尚未回收帳戶 %s(自 %s 之後)"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "帳戶審核「一次性」模式已完成:%.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "帳戶審核通過已完成:%.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr "已嘗試在 %(time).5f 秒內抄寫 %(count)d 個資料庫 (%(rate).5f/s)"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "%s 的審核失敗:%s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "不當的遠端同步回覆碼:%(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "開始帳戶審核「一次性」模式"

msgid "Begin account audit pass."
msgstr "開始帳戶審核通過。"

msgid "Begin container audit \"once\" mode"
msgstr "開始儲存器審核「一次性」模式"

msgid "Begin container audit pass."
msgstr "開始儲存器審核通過。"

msgid "Begin container sync \"once\" mode"
msgstr "開始儲存器同步「一次性」模式"

msgid "Begin container update single threaded sweep"
msgstr "開始儲存器更新單一執行緒清理"

msgid "Begin container update sweep"
msgstr "開始儲存器更新清理"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "開始物件審核 \"%s\" 模式 (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr "開始物件更新單一執行緒清理"

msgid "Begin object update sweep"
msgstr "開始物件更新清理"

#, python-format
msgid "Beginning pass on account %s"
msgstr "正在開始帳戶 %s 上的通過"

msgid "Beginning replication run"
msgstr "正在開始抄寫執行"

msgid "Broker error trying to rollback locked connection"
msgstr "嘗試回復已鎖定的連線時發生分配管理系統錯誤"

#, python-format
msgid "Can not access the file %s."
msgstr "無法存取檔案 %s。"

#, python-format
msgid "Can not load profile data from %s."
msgstr "無法從 %s 載入設定檔資料。"

#, python-format
msgid "Cannot read %s (%s)"
msgstr "無法讀取 %s (%s)"

#, python-format
msgid "Cannot write %s (%s)"
msgstr "無法寫入 %s (%s)"

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "用戶端未在 %s 秒內從 Proxy 中讀取"

msgid "Client disconnected on read"
msgstr "用戶端在讀取時中斷連線"

msgid "Client disconnected without sending enough data"
msgstr "用戶端已中斷連線,未傳送足夠的資料"

msgid "Client disconnected without sending last chunk"
msgstr "用戶端已中斷連線,未傳送最後一個片段"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr "用戶端路徑 %(client)s 與物件 meta 資料%(meta)s 中儲存的路徑不符"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"未定義配置選項 internal_client_conf_path。將使用預設配置,請參閱 internal-"
"client.conf-sample 以取得選項"

msgid "Connection refused"
msgstr "連線遭拒"

msgid "Connection timeout"
msgstr "連線逾時"

msgid "Container"
msgstr "儲存器"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "儲存器審核「一次性」模式已完成:%.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "儲存器審核通過已完成:%.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "儲存器同步「一次性」模式已完成:%.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"儲存器更新單一執行緒清理已完成:%(elapsed).02fs,%(success)s 個成"
"功,%(fail)s 個失敗,%(no_change)s 個無變更"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "儲存器更新清理已完成:%.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"%(path)s 的儲存器更新清理已完成:%(elapsed).02fs,%(success)s 個成"
"功,%(fail)s 個失敗,%(no_change)s 個無變更"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr "嘗試 %s 秒後仍無法連結至 %s:%s"

#, python-format
msgid "Could not load %r: %s"
msgstr "無法載入 %r:%s"

#, python-format
msgid "Data download error: %s"
msgstr "資料下載錯誤:%s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "裝置通過已完成:%.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "目錄 %r 未對映至有效的原則 (%s)"

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "錯誤:%(db_file)s:%(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "錯誤:%(status)d %(body)s 來自 %(type)s 伺服器"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "錯誤:%(status)d %(body)s 來自物件伺服器 re:%(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "錯誤:%(status)d 預期:100 - 繼續自物件伺服器"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr "錯誤:%(status)d 正在嘗試來自儲存器伺服器的 %(method)s %(path)s"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"錯誤:%(ip)s:%(port)s/%(device)s 的帳戶更新失敗(將稍後重試):回應 "
"%(status)s %(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr "錯誤:帳戶更新失敗:要求中的主機與裝置數目不同:\"%s\" 對 \"%s\""

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "錯誤:來自 %(host)s 的回應 %(status)s 不當"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "錯誤:用戶端讀取逾時(%s 秒)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"錯誤:儲存器更新失敗(儲存以稍後進行非同步更新):%(status)d 回應(來自 "
"%(ip)s:%(port)s/%(dev)s)"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr "錯誤:儲存器更新失敗:要求中的主機數目與裝置數目不同:\"%s\" 對 \"%s\""

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "錯誤:無法取得帳戶資訊 %s"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "錯誤:無法取得儲存器資訊 %s"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr "錯誤:磁碟檔 %(data_file)s 關閉失敗:%(exc)s:%(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "錯誤:異常狀況造成用戶端中斷連線"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr "錯誤:將資料轉送至物件伺服器 %s 時發生異常狀況"

msgid "ERROR Failed to get my own IPs?"
msgstr "錯誤:無法取得我自己的 IP?"

msgid "ERROR Insufficient Storage"
msgstr "錯誤:儲存體不足"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr "錯誤:物件 %(obj)s 審核失敗,並且已予以隔離:%(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "錯誤:嚴重問題,正在隔離 %s"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "錯誤:未裝載遠端磁碟機 %s"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "同步 %(db_file)s %(row)s 時發生錯誤"

#, python-format
msgid "ERROR Syncing %s"
msgstr "同步 %s 時發生錯誤"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "嘗試審核 %s 時發生錯誤"

msgid "ERROR Unhandled exception in request"
msgstr "錯誤:要求中有未處理的異常狀況"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "錯誤:%(method)s %(path)s 發生 __call__ 錯誤"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr "錯誤:%(ip)s:%(port)s/%(device)s 的帳戶更新失敗(將稍後重試)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr "錯誤:%(ip)s:%(port)s/%(device)s 的帳戶更新失敗(將稍後重試):"

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr "錯誤:非同步擱置檔案具有非預期的名稱 %s"

msgid "ERROR auditing"
msgstr "審核時發生錯誤"

#, python-format
msgid "ERROR auditing: %s"
msgstr "審核時發生錯誤:%s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"錯誤:%(ip)s:%(port)s/%(dev)s 的儲存器更新失敗(儲存以稍後進行非同步更新)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "從 %s 讀取 HTTP 回應時發生錯誤"

#, python-format
msgid "ERROR reading db %s"
msgstr "讀取資料庫 %s 時發生錯誤"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "錯誤:遠端同步失敗,%(code)s:%(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "將 %(file)s 與節點 %(node)s 進行同步時發生錯誤"

msgid "ERROR trying to replicate"
msgstr "嘗試抄寫時發生錯誤"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "嘗試清除 %s 時發生錯誤"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr "%(type)s 伺服器發生錯誤:%(ip)s:%(port)s/%(device)s,re:%(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "從 %s 載入抑制時發生錯誤:"

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "遠端伺服器發生錯誤:%(ip)s:%(port)s/%(device)s"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr "錯誤:無法取得磁碟機分割區的路徑:%s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "錯誤:擷取區段時發生錯誤"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "錯誤:無法存取 %(path)s:%(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "錯誤:無法執行審核:%s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "對 memcached 執行%(action)s作業時發生錯誤:%(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "編碼為 UTF-8 時發生錯誤:%s"

msgid "Error hashing suffix"
msgstr "混合字尾時發生錯誤"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "在 mtime_check_interval 中,%r 發生錯誤:%s"

#, python-format
msgid "Error limiting server %s"
msgstr "限制伺服器 %s 時發生錯誤"

msgid "Error listing devices"
msgstr "列出裝置時發生錯誤"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "呈現側寫結果時發生錯誤:%s"

msgid "Error parsing recon cache file"
msgstr "剖析 recon 快取檔案時發生錯誤"

msgid "Error reading recon cache file"
msgstr "讀取 recon 快取檔案時發生錯誤"

msgid "Error reading ringfile"
msgstr "讀取 ringfile 時發生錯誤"

msgid "Error reading swift.conf"
msgstr "讀取 swift.conf 時發生錯誤"

msgid "Error retrieving recon data"
msgstr "擷取 recon 資料時發生錯誤"

msgid "Error syncing handoff partition"
msgstr "同步遞交分割區時發生錯誤"

msgid "Error syncing partition"
msgstr "同步分割區時發生錯誤"

#, python-format
msgid "Error syncing with node: %s"
msgstr "與節點同步時發生錯誤:%s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr "嘗試重建 %(path)s 原則 #%(policy)d 分段 #%(frag_index)s 時發生錯誤"

msgid "Error: An error occurred"
msgstr "錯誤:發生錯誤"

msgid "Error: missing config path argument"
msgstr "錯誤:遺漏配置路徑引數"

#, python-format
msgid "Error: unable to locate %s"
msgstr "錯誤:找不到 %s"

msgid "Exception dumping recon cache"
msgstr "傾出 recon 快取時發生異常狀況"

msgid "Exception in top-level account reaper loop"
msgstr "最上層帳戶收割者迴圈發生異常狀況"

msgid "Exception in top-level replication loop"
msgstr "最上層抄寫迴圈中發生異常狀況"

msgid "Exception in top-levelreconstruction loop"
msgstr "最上層重新建構迴圈中發生異常狀況"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "刪除儲存器 %s %s 時發生異常狀況"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "刪除物件 %s %s %s 時發生異常狀況"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s  發生異常狀況"

#, python-format
msgid "Exception with account %s"
msgstr "帳戶 %s 發生異常狀況"

#, python-format
msgid "Exception with containers for account %s"
msgstr "帳戶 %s 的儲存器發生異常狀況"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr "針對帳戶 %(account)s,儲存器 %(container)s 的物件發生異常狀況"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "預期 100 - 在 %s 上繼續"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr "遵循 %(given_domain)s 到 %(found_domain)s 的 CNAME 鏈"

msgid "Found configs:"
msgstr "找到配置:"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr "「遞交作業最先」模式仍有剩餘的遞交作業。正在中斷現行抄寫傳遞。"

msgid "Host unreachable"
msgstr "無法呼叫到主機"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "帳戶 %s 上的通過未完成"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "無效的 X-Container-Sync-To 格式 %r"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "X-Container-Sync-To 中的主機 %r 無效"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "無效的擱置項目 %(file)s:%(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "來自 %(full_path)s 的回應 %(resp)s 無效"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "來自 %(ip)s 的回應 %(resp)s 無效"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"X-Container-Sync-To 中的架構 %r 無效,必須是 \"//\"、\"http\" 或 \"https\"。"

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "正在結束長時間執行的遠端同步:%s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "從 %s 載入 JSON 失敗 (%s)"

msgid "Lockup detected.. killing live coros."
msgstr "偵測到鎖定。正在結束即時 coros。"

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "已將 %(given_domain)s 對映至 %(found_domain)s"

#, python-format
msgid "No %s running"
msgstr "沒有 %s 在執行中"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "沒有 %r %r 的叢集端點"

#, python-format
msgid "No permission to signal PID %d"
msgstr "沒有傳送 PID %d 信號的許可權"

#, python-format
msgid "No policy with index %s"
msgstr "沒有具有索引 %s 的原則"

#, python-format
msgid "No realm key for %r"
msgstr "沒有 %r 的範圍金鑰"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "裝置上沒有用於 %s 的剩餘空間 (%s)"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "節點錯誤限制 %(ip)s:%(port)s (%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "未確認足夠的物件伺服器(已取得 %d)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr ""
"找不到 %(sync_from)r => %(sync_to)r                       - 物件 %(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "%s 秒未重新建構任何內容。"

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "未抄寫任何項目達 %s 秒。"

msgid "Object"
msgstr "物件"

msgid "Object PUT"
msgstr "物件 PUT"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr "物件 PUT 針對 409 正在傳回 202:%(req_timestamp)s <= %(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "物件 PUT 正在傳回 412,%(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"物件審核 (%(type)s) \"%(mode)s\" 模式已完成:%(elapsed).02fs。已隔離項目數總"
"計:%(quars)d,錯誤數總計:%(errors)d,檔案數/秒總計:%(frate).2f,位元組數/"
"秒總計:%(brate).2f,審核時間:%(audit).2f,速率:%(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"物件審核 (%(type)s)。自 %(start_time)s 以來:本端:%(passes)d 個已通"
"過,%(quars)d 個已隔離,%(errors)d 個錯誤,檔案數/秒:%(frate).2f,位元組數/"
"秒:%(brate).2f,時間總計:%(total).2f,審核時間:%(audit).2f,速率:"
"%(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "物件審核統計資料:%s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "物件重新建構完成(一次性)。(%.02f 分鐘)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "物件重新建構完成。(%.02f 分鐘)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "物件抄寫完成(一次性)。(%.02f 分鐘)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "物件抄寫完成。(%.02f 分鐘)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr "物件伺服器已傳回 %s 個不符的 etag"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"物件更新單一執行緒清理已完成:%(elapsed).02f 秒,%(success)s 個成"
"功,%(fail)s 個失敗"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "物件更新清理已完成:%.02f 秒"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"%(device)s 的物件更新清理已完成:%(elapsed).02f 秒,%(success)s 個成"
"功,%(fail)s 個失敗"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr "X-Container-Sync-To 中不容許參數、查詢及片段"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr "分割區時間:上限 %(max).4fs,下限 %(min).4fs,中間 %(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr "通過正在開始;%s 個可能的儲存器;%s 個可能的物件"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "已在 %d 秒內完成通過;%d 個物件已過期"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "到目前為止,通過執行了 %d 秒;%d 個物件已過期"

msgid "Path required in X-Container-Sync-To"
msgstr "X-Container-Sync-To 中需要的路徑"

#, python-format
msgid "Problem cleaning up %s"
msgstr "清除 %s 時發生問題"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "清除 %s 時發生問題  (%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "寫入可延續狀態檔 %s 時發生問題 (%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "側寫錯誤:%s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr "已將 %(hsh_path)s 隔離至 %(quar_path)s,因為它不是目錄"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr "已將 %(object_path)s 隔離至 %(quar_path)s,因為它不是目錄"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "已將 %s 隔離至 %s,原因是 %s 資料庫"

#, python-format
msgid "Quarantining DB %s"
msgstr "正在隔離資料庫 %s"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr "%(account)s/%(container)s/%(object)s 的限制速率休眠日誌:%(sleep)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "已移除 %(remove)d 個資料庫"

#, python-format
msgid "Removing %s objects"
msgstr "正在移除 %s 物件"

#, python-format
msgid "Removing partition: %s"
msgstr "正在移除分割區:%s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr "正在移除具有錯誤 PID %(pid)d 的 PID 檔 %(pid_file)s"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "正在移除具有無效 PID 的 PID 檔 %s"

#, python-format
msgid "Removing stale pid file %s"
msgstr "正在移除過時 PID 檔案 %s"

msgid "Replication run OVER"
msgstr "抄寫執行結束"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "由於黑名單,正在傳回 497:%s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"正在將 %(meth)s 的 498 傳回至 %(acc)s/%(cont)s/%(obj)s。限制速率(休眠上"
"限)%(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr "偵測到環變更。正在中斷現行重新建構傳遞。"

msgid "Ring change detected. Aborting current replication pass."
msgstr "偵測到環變更。正在中斷現行抄寫傳遞。"

#, python-format
msgid "Running %s once"
msgstr "正在執行 %s 一次"

msgid "Running object reconstructor in script mode."
msgstr "正在 Script 模式下執行物件重新建構器。"

msgid "Running object replicator in script mode."
msgstr "正在 Script 模式下執行物件抄寫器"

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "信號 %s  PID:%s  信號:%s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"自 %(time)s 以來:已同步 %(sync)s 個 [已刪除 %(delete)s 個,已放置 %(put)s "
"個],已跳過 %(skip)s 個,%(fail)s 個失敗"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"自 %(time)s 以來:帳戶審核:%(passed)s 個已通過審核,%(failed)s 個失敗審核"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr ""
"自 %(time)s 以來:儲存器審核:%(pass)s 個已通過審核,%(fail)s 個失敗審核"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "正在跳過 %(device)s,因為它未裝載"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "正在跳過 %s,原因是它未裝載"

#, python-format
msgid "Starting %s"
msgstr "正在啟動 %s"

msgid "Starting object reconstruction pass."
msgstr "正在啟動物件重新建構傳遞。"

msgid "Starting object reconstructor in daemon mode."
msgstr "正在常駐程式模式下啟動物件重新建構器。"

msgid "Starting object replication pass."
msgstr "正在啟動物件抄寫傳遞。"

msgid "Starting object replicator in daemon mode."
msgstr "正在常駐程式模式下啟動物件抄寫器。"

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "已順利遠端同步 %(dst)s 中的 %(src)s (%(time).03f)"

msgid "The file type are forbidden to access!"
msgstr "禁止此檔案類型進行存取!"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"儲存器的 %(key)s 總計 (%(total)s) 不符合原則中的 %(key)s 總和 (%(sum)s) "

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "對 memcached 執行%(action)s作業時逾時:%(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s  發生逾時異常狀況"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "正在嘗試 %(method)s %(path)s"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "正在嘗試對 %(full_path)s 執行 GET 動作"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "正在嘗試讓 PUT 的 %s 狀態變為 %s"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "正在嘗試讓 PUT 的最終狀態變為 %s"

msgid "Trying to read during GET"
msgstr "正在嘗試於 GET 期間讀取"

msgid "Trying to read during GET (retrying)"
msgstr "正在嘗試於 GET 期間讀取(正在重試)"

msgid "Trying to send to client"
msgstr "正在嘗試傳送至用戶端"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "正在嘗試與 %s 同步字尾"

#, python-format
msgid "Trying to write to %s"
msgstr "正在嘗試寫入至 %s"

msgid "UNCAUGHT EXCEPTION"
msgstr "未捕捉的異常狀況"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "找不到 %s 配置區段(在 %s 中)"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr "無法從配置載入內部用戶端:%r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr "在 libc 中找不到 %s。保留為 no-op。"

#, python-format
msgid "Unable to locate config for %s"
msgstr "找不到 %s 的配置"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "找不到配置號碼 %s(針對 %s)"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr "在 libc 中找不到 fallocate、posix_fallocate。保留為 no-op。"

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "無法對目錄 %s 執行 fsync():%s"

#, python-format
msgid "Unable to read config from %s"
msgstr "無法從 %s 讀取配置"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "未鑑別 %(sync_from)r => %(sync_to)r"

#, python-format
msgid "Unexpected response: %s"
msgstr "非預期的回應:%s"

msgid "Unhandled exception"
msgstr "未處理的異常狀況"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr ""
"嘗試執行 GET 動作時發生不明異常狀況:%(account)r %(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "%(container)s %(dbfile)s 的更新報告失敗"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "已傳送 %(container)s %(dbfile)s 的更新報告"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"警告:應該僅啟用 SSL 以用於測試目的。使用外部 SSL 終止以進行正式作業部署。"

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr "警告:無法修改檔案描述子限制。以非 root 使用者身分執行?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr "警告:無法修改程序數目上限限制。以非 root 使用者身分執行?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr "警告:無法修改記憶體限制。以非 root 使用者身分執行?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "已等待 %s 秒以讓 %s 當掉;正在放棄"

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr "已等待 %s 秒以讓 %s 當掉;正在結束"

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr "警告:無法在沒有 memcached 用戶端的情況下限制速率"

#, python-format
msgid "method %s is not allowed."
msgstr "不容許使用方法 %s。"

msgid "no log file found"
msgstr "找不到日誌檔"

msgid "odfpy not installed."
msgstr "未安裝 odfpy。"

#, python-format
msgid "plotting results failed due to %s"
msgstr "由於 %s,繪製結果失敗"

msgid "python-matplotlib not installed."
msgstr "未安裝 python-matplotlib。"
swift-2.7.1/swift/locale/tr_TR/0000775000567000056710000000000013024044470017441 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/tr_TR/LC_MESSAGES/0000775000567000056710000000000013024044470021226 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/tr_TR/LC_MESSAGES/swift.po0000664000567000056710000010045213024044354022725 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# İşbaran Akçayır , 2015
# OpenStack Infra , 2015. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-09-04 07:42+0000\n"
"Last-Translator: İşbaran Akçayır \n"
"Language: tr-TR\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Turkish (Turkey)\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"kullanıcı çıktı"

#, python-format
msgid " - %s"
msgstr " - %s"

#, python-format
msgid " - parallel, %s"
msgstr " - paralel, %s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"%(checked)d sonek kontrol edildi - %(hashed).2f%% özetlenen, %(synced).2f%% "
"eşzamanlanan"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s bağlı değil olarak yanıt verdi"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(device)d/%(dtotal)d (%(dpercentage).2f%%) aygıtın %(reconstructed)d/"
"%(total)d (%(percentage).2f%%) bölümü %(time).2fs (%(rate).2f/sn, "
"%(remaining)s kalan) içinde yeniden oluşturuldu"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(replicated)d/%(total)d (%(percentage).2f%%) bölüm %(time).2fs (%(rate).2f/"
"sn, %(remaining)s kalan) içinde çoğaltıldı"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s başarı, %(failure)s başarısızlık"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s %(statuses)s için 503 döndürüyor"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d çalışmıyor (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s (%s) durmuş gibi görünüyor"

#, python-format
msgid "%s already started..."
msgstr "%s zaten başlatıldı..."

#, python-format
msgid "%s does not exist"
msgstr "%s mevcut değil"

#, python-format
msgid "%s is not mounted"
msgstr "%s bağlı değil"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s bağlı değil olarak yanıt verdi"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s çalışıyor (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s: %s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s: Bağlantı eş tarafından sıfırlandı"

#, python-format
msgid ", %s containers deleted"
msgstr ", %s kap silindi"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ", %s kap kaldı muhtemelen"

#, python-format
msgid ", %s containers remaining"
msgstr ", %s kap kaldı"

#, python-format
msgid ", %s objects deleted"
msgstr ", %s nesne silindi"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ", %s nesne kaldı muhtemelen"

#, python-format
msgid ", %s objects remaining"
msgstr ", %s nesne kaldı"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ", geçen süre: %.02fs"

msgid ", return codes: "
msgstr ", dönen kodlar: "

msgid "Account"
msgstr "Hesap"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "Hesap %s %s'den beri biçilmedi"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "Hesap denetimi \"bir kere\" kipi tamamlandı: %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "Hesap denetimi geçişi tamamlandı: %.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr "%(count)d db %(time).5f saniyede çoğaltılmaya çalışıldı (%(rate).5f/s)"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "Denetim %s için başarısız: %s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "Kötü rsync dönüş kodu: %(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "Hesap denetimi \"bir kere\" kipini başlat"

msgid "Begin account audit pass."
msgstr "Hesap denetimi başlatma geçildi."

msgid "Begin container audit \"once\" mode"
msgstr "Kap denetimine \"bir kere\" kipinde başla"

msgid "Begin container audit pass."
msgstr "Kap denetimi geçişini başlat."

msgid "Begin container sync \"once\" mode"
msgstr "Kap eşzamanlamayı \"bir kere\" kipinde başlat"

msgid "Begin container update single threaded sweep"
msgstr "Kap güncelleme tek iş iplikli süpürmeye başla"

msgid "Begin container update sweep"
msgstr "Kap güncelleme süpürmesine başla"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "Nesne denetimini \"%s\" kipinde başlat (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr "Nesne güncelleme tek iş iplikli süpürmeye başla"

msgid "Begin object update sweep"
msgstr "Nesne güncelleme süpürmesine başla"

#, python-format
msgid "Beginning pass on account %s"
msgstr "%s hesabı üzerinde geçiş başlatılıyor"

msgid "Beginning replication run"
msgstr "Çoğaltmanın çalıştırılmasına başlanıyor"

msgid "Broker error trying to rollback locked connection"
msgstr "Kilitli bağlantı geri alınmaya çalışılırken vekil hatası"

#, python-format
msgid "Can not access the file %s."
msgstr "%s dosyasına erişilemiyor."

#, python-format
msgid "Can not load profile data from %s."
msgstr "%s'den profil verisi yüklenemiyor."

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "İstemci %ss içinde vekilden okumadı"

msgid "Client disconnected on read"
msgstr "İstemci okuma sırasında bağlantıyı kesti"

msgid "Client disconnected without sending enough data"
msgstr "İstemci yeterli veri göndermeden bağlantıyı kesti"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr ""
"İstemci yolu %(client)s nesne metadata'sında kayıtlı yol ile eşleşmiyor "
"%(meta)s"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"Yapılandırma seçeneği internal_client_conf_path belirtilmemiş. Varsayılan "
"yapılandırma kullanılıyor, seçenekleri çin internal-client.conf-sample'a "
"bakın"

msgid "Connection refused"
msgstr "Bağlantı reddedildi"

msgid "Connection timeout"
msgstr "Bağlantı zaman aşımına uğradı"

msgid "Container"
msgstr "Kap"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "Kap denetimi \"bir kere\" kipinde tamamlandı: %.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "Kap denetim geçişi tamamlandı: %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "Kap eşzamanlama \"bir kere\" kipinde tamamlandı: %.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Kap güncelleme tek iş iplikli süpürme tamamlandı: %(elapsed).02fs, "
"%(success)s başarılı, %(fail)s başarısız, %(no_change)s değişiklik yok"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "Kap güncelleme süpürme tamamlandı: %.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"%(path)s in kap güncelleme süpürmesi tamamlandı: %(elapsed).02fs, "
"%(success)s başarılı, %(fail)s başarısız, %(no_change)s değişiklik yok"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr "%s:%s'e bağlanılamadı, %s saniye beklendi"

#, python-format
msgid "Could not load %r: %s"
msgstr "%r yüklenemedi: %s"

#, python-format
msgid "Data download error: %s"
msgstr "Veri indirme hatası: %s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "Aygıtlar geçişi tamamlandı: %.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "Dizin %r geçerli bir ilkeye eşleştirilmemiş (%s)"

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "HATA %(db_file)s: %(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "HATA %(status)d %(body)s %(type)s Sunucudan"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "HATA %(status)d %(body)s Nesne Sunucu re'den: %(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "HATA %(status)d Beklenen: 100-Nesne Sunucusundan devam et"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr "HATA %(status)d Kap Sunucusundan %(method)s %(path)s denenirken"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"HATA %(ip)s:%(port)s/%(device)s ile hesap güncelleme başarısız (sonra tekrar "
"denenecek): Yanıt %(status)s %(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"HATA Hesap güncelleme başarısız: istekte farklı sayıda  istemci ve aygıt "
"var: \"%s\" \"%s\""

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "HATA %(host)s dan kötü yanıt %(status)s"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "HATA İstemci okuma zaman aşımına uğradı (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"HATA Kap güncelleme başarısız (daha sonraki async güncellemesi için "
"kaydediliyor): %(ip)s:%(port)s/%(dev)s den %(status)d yanıtı"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"HATA Kap güncelleme başarısız: istekte farklı sayıda istemci ve aygıt var: "
"\"%s\" e karşı \"%s\""

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "HATA hesap bilgisi %s alınamadı"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "HATA %s kap bilgisi alınamadı"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr "HATA %(data_file)s disk dosyası kapatma başarısız: %(exc)s : %(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "HATA İstisna istemci bağlantısının kesilmesine neden oluyor"

msgid "ERROR Failed to get my own IPs?"
msgstr "Kendi IP'lerimi alırken HATA?"

msgid "ERROR Insufficient Storage"
msgstr "HATA Yetersiz Depolama"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr ""
"HATA Nesne %(obj)s denetimde başarısız oldu ve karantinaya alındı: %(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "HATA Picke problemi, %s karantinaya alınıyor"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "HATA Uzak sürücü bağlı değil %s"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "HATA %(db_file)s %(row)s eşzamanlamada"

#, python-format
msgid "ERROR Syncing %s"
msgstr "HATA %s Eşzamanlama"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "HATA %s denetimi denemesinde"

msgid "ERROR Unhandled exception in request"
msgstr "HATA İstekte ele alınmayan istisna var"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "ERROR __call__ hatası %(method)s %(path)s "

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr ""
"HATA %(ip)s:%(port)s/%(device)s ile hesap güncelleme başarısız (sonra "
"yeniden denenecek)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr ""
"HATA hesap güncelleme başarısız %(ip)s:%(port)s/%(device)s (sonra tekrar "
"denenecek):"

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr "HATA beklenmeyen isimli async bekleyen dosya %s"

msgid "ERROR auditing"
msgstr "denetlemede HATA"

#, python-format
msgid "ERROR auditing: %s"
msgstr "HATA denetim: %s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"HATA kap güncelleme %(ip)s:%(port)s/%(dev)s ile başarısız oldu (sonraki "
"async güncellemesi için kaydediliyor)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "%s'den HTTP yanıtı okumada HATA"

#, python-format
msgid "ERROR reading db %s"
msgstr "%s veri tabanı okumada HATA"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "HATA rsync %(code)s ile başarısız oldu: %(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "%(node)s düğümlü %(file)s eş zamanlamada HATA"

msgid "ERROR trying to replicate"
msgstr "Çoğaltmaya çalışmada HATA"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "%s temizlenmeye çalışırken HATA"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr "HATA %(type)s sunucusu %(ip)s:%(port)s/%(device)s re: %(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "HATA %s den baskılamaların yüklenmesinde: "

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "HATA uzuk sunucuda %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr "HATA:  Sürücü bölümlerine olan yollar alınamadı: %s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "HATA: Dilimler alınırken bir hata oluştu"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "HATA: %(path)s e erişilemiyor: %(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "HATA: Denetim çalıştırılamıyor: %s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "Memcached'e hata %(action)s: %(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "UTF-8 ile kodlama hatası: %s"

msgid "Error hashing suffix"
msgstr "Sonek özetini çıkarmada hata"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "mtime_check_interval ile %r de hata: %s"

#, python-format
msgid "Error limiting server %s"
msgstr "%s sunucusu sınırlandırılırken hata"

msgid "Error listing devices"
msgstr "Aygıtları listelemede hata"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "Profilleme sonuçlarının gerçeklenmesinde hata: %s"

msgid "Error parsing recon cache file"
msgstr "Recon zula dosyasını ayrıştırmada hata"

msgid "Error reading recon cache file"
msgstr "Recon zula dosyası okumada hata"

msgid "Error reading ringfile"
msgstr "Halka dosyası okunurken hata"

msgid "Error reading swift.conf"
msgstr "swift.conf okunurken hata"

msgid "Error retrieving recon data"
msgstr "Recon verisini almada hata"

msgid "Error syncing handoff partition"
msgstr "Devir bölümünü eş zamanlamada hata"

msgid "Error syncing partition"
msgstr "Bölüm eşzamanlamada hata"

#, python-format
msgid "Error syncing with node: %s"
msgstr "Düğüm ile eş zamanlamada hata: %s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr ""
"Yeniden inşa denenirken hata %(path)s policy#%(policy)d frag#%(frag_index)s"

msgid "Error: An error occurred"
msgstr "Hata: Bir hata oluştu"

msgid "Error: missing config path argument"
msgstr "Hata: yapılandırma yolu değişkeni eksik"

#, python-format
msgid "Error: unable to locate %s"
msgstr "Hata: %s bulunamıyor"

msgid "Exception dumping recon cache"
msgstr "Yeniden bağlanma zulasının dökümünde istisna"

msgid "Exception in top-level account reaper loop"
msgstr "Üst seviye hesap biçme döngüsünde istisna"

msgid "Exception in top-level replication loop"
msgstr "Üst seviye çoğaltma döngüsünde istisna"

msgid "Exception in top-levelreconstruction loop"
msgstr "Üst seviye yeniden oluşturma döngüsünde istisna"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "%s %s kabı silinirken istisna"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "%s %s %s nesnesi silinirken istisna"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s ile istisna"

#, python-format
msgid "Exception with account %s"
msgstr "%s hesabında istisna"

#, python-format
msgid "Exception with containers for account %s"
msgstr "%s hesabı için kaplarla ilgili istisna"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr "%(account)s hesabı için %(container)s kabı için nesneler için istisna"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "Beklenen: 100-%s üzerinden devam et"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s den %(found_domain)s e CNAME zinciri takip ediliyor"

msgid "Found configs:"
msgstr "Yapılandırmalar bulundu:"

msgid "Host unreachable"
msgstr "İstemci erişilebilir değil"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "%s hesabından tamamlanmamış geçiş"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "Geçersix X-Container-Sync-To biçimi %r"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "X-Container-Sync-To'da geçersiz istemci %r"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "Geçersiz bekleyen girdi %(file)s: %(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "%(full_path)s den geçersiz yanıt %(resp)s"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "%(ip)s den geçersiz yanıt %(resp)s"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"X-Container-Sync-To'da geçersiz şema %r, \"//\", \"http\", veya \"https\" "
"olmalı."

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "Uzun süre çalışan rsync öldürülüyor: %s"

msgid "Lockup detected.. killing live coros."
msgstr "Kilitleme algılandı.. canlı co-rutinler öldürülüyor."

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s %(found_domain)s eşleştirildi"

#, python-format
msgid "No %s running"
msgstr "Çalışan %s yok"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "%r %r için küme uç noktası yok"

#, python-format
msgid "No permission to signal PID %d"
msgstr "%d PID'ine sinyalleme izni yok"

#, python-format
msgid "No policy with index %s"
msgstr "%s indisine sahip ilke yok"

#, python-format
msgid "No realm key for %r"
msgstr "%r için realm anahtarı yok"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "Aygıtta %s için boş alan kalmadı (%s)"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "Düğüm hatası sınırlandı %(ip)s:%(port)s (%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "Yeterince nesne sunucu ack'lenmedi (%d alındı)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr ""
"Bulunamadı %(sync_from)r => %(sync_to)r            - nesne %(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "%s saniye boyunca hiçbir şey yeniden oluşturulmadı."

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "%s saniyedir hiçbir şey çoğaltılmadı."

msgid "Object"
msgstr "Nesne"

msgid "Object PUT"
msgstr "Nesne PUT"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr "Nesne PUT 409 için 202 döndürüyor: %(req_timestamp)s <= %(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "Nesne PUT 412 döndürüyor, %(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Nesne denetimi (%(type)s) \"%(mode)s\" kipinde tamamlandı: %(elapsed).02fs. "
"Toplam karantina: %(quars)d, Toplam hata: %(errors)d, Toplam dosya/sn: "
"%(frate).2f, Toplam bayt/sn: %(brate).2f, Denetleme zamanı: %(audit).2f, "
"Oran: %(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "Nesne denetim istatistikleri: %s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "Nesne yeniden oluşturma tamamlandı (bir kere). (%.02f dakika)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "Nesne yeniden oluşturma tamamlandı. (%.02f dakika)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "Nesne çoğaltma tamamlandı (bir kere). (%.02f dakika)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "Nesne çoğaltma tamamlandı. (%.02f dakika)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr "Nesne sunucuları %s eşleşmeyen etag döndürdü"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Nesne güncelleme tek iş iplikli süpürme tamamlandı: %(elapsed).02fs, "
"%(success)s başarılı, %(fail)s başarısız"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "Nesne güncelleme süpürmesi tamamlandı: %.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"%(device)s ın nesne güncelleme süpürmesi tamamlandı: %(elapsed).02fs, "
"%(success)s başarılı, %(fail)s başarısız"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr "X-Container-Sync-To'da parametre, sorgular, ve parçalara izin verilmez"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr ""
"Bölüm zamanları: azami %(max).4fs, asgari %(min).4fs, ortalama %(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr "Geçiş başlıyor; %s olası kap; %s olası nesne"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "Geçiş %ds de tamamlandı; %d nesnenin süresi doldu"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "Şimdiye kadarki geçiş %ds; %d nesnenin süresi doldu"

msgid "Path required in X-Container-Sync-To"
msgstr "X-Container-Sync-To'de yol gerekli"

#, python-format
msgid "Problem cleaning up %s"
msgstr "%s temizliğinde problem"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "%s temizlemede problem (%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "Dayanıklı durum dosyas %s ile ilgili problem (%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "Profilleme Hatası: %s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr "%(hsh_path)s %(quar_path)s karantinasına alındı çünkü bir dizin değil"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"Bir dizin olmadığından %(object_path)s %(quar_path)s e karantinaya alındı"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "%s %s'e karantinaya alındı %s veri tabanı sebebiyle"

#, python-format
msgid "Quarantining DB %s"
msgstr "DB %s karantinaya alınıyor"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""
"Oran sınırı uyku kaydı: %(account)s/%(container)s/%(object)s için %(sleep)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "%(remove)d db silindi"

#, python-format
msgid "Removing %s objects"
msgstr "%s nesne kaldırılıyor"

#, python-format
msgid "Removing partition: %s"
msgstr "Bölüm kaldırılıyor: %s"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "Geçersiz pid'e sahip pid dosyası %s siliniyor"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Askıdaki pid dosyası siliniyor %s"

msgid "Replication run OVER"
msgstr "Çoğaltma çalışması BİTTİ"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "Kara listeleme yüzünden 497 döndürülüyor: %s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"%(acc)s/%(cont)s/%(obj)s ye %(meth)s için 498 döndürülüyor. Oran sınırı "
"(Azami uyku) %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr ""
"Zincir değişikliği algılandı. Mevcut yeniden oluşturma geçişi iptal ediliyor."

msgid "Ring change detected. Aborting current replication pass."
msgstr "Zincir değişimi algılandı. Mevcut çoğaltma geçişi iptal ediliyor."

#, python-format
msgid "Running %s once"
msgstr "%s bir kere çalıştırılıyor"

msgid "Running object reconstructor in script mode."
msgstr "Nesne yeniden oluşturma betik kipinde çalıştırılıyor."

msgid "Running object replicator in script mode."
msgstr "Nesne çoğaltıcı betik kipinde çalıştırılıyor."

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "Sinyal %s  pid: %s  sinyal: %s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"%(time)s den beri: %(sync)s eşzamanlandı [%(delete)s silme, %(put)s koyma], "
"%(skip)s atlama, %(fail)s başarısız"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"%(time)s den beri: Hesap denetimleri: %(passed)s denetimi geçti, %(failed)s "
"denetimi geçemedi"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr ""
"%(time)s den beri: Kap denetimleri: %(pass)s denetimi geçti, %(fail)s "
"denetimde başarısız"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "Bağlı olmadığından %(device)s atlanıyor"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "Bağlı olmadığından %s atlanıyor"

#, python-format
msgid "Starting %s"
msgstr "%s başlatılıyor"

msgid "Starting object reconstruction pass."
msgstr "Nesne yeniden oluşturma geçişi başlatılıyor."

msgid "Starting object reconstructor in daemon mode."
msgstr "Nesne yeniden oluşturma artalan işlemi kipinde başlatılıyor."

msgid "Starting object replication pass."
msgstr "Nesne çoğaltma geçişi başlatılıyor."

msgid "Starting object replicator in daemon mode."
msgstr "Nesne çoğaltıcı artalan işlemi kipinde başlatılıyor."

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "%(dst)s (%(time).03f) de %(src)s başarılı rsync'i"

msgid "The file type are forbidden to access!"
msgstr "Dosya türüne erişim yasaklanmış!"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"(%(total)s) kabı için %(key)s toplamı ilkeler arasındaki %(key)s toplamıyla "
"eşleşmiyor (%(sum)s)"

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "Memcached'e zaman aşımı %(action)s: %(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s ile zaman aşımı istisnası"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "%(method)s %(path)s deneniyor"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "%(full_path)s GET deneniyor"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "%s'e PUT'un %s durumu alınmaya çalışılıyor"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "%s'e PUT için son durum alınmaya çalışılıyor"

msgid "Trying to read during GET"
msgstr "GET sırasında okuma deneniyor"

msgid "Trying to read during GET (retrying)"
msgstr "GET sırasında okuma deneniyor (yeniden deneniyor)"

msgid "Trying to send to client"
msgstr "İstemciye gönderilmeye çalışılıyor"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "%s e sahip son ekler eşzamanlanmaya çalışılıyor"

#, python-format
msgid "Trying to write to %s"
msgstr "%s'e yazmaya çalışılıyor"

msgid "UNCAUGHT EXCEPTION"
msgstr "YAKALANMAYAN İSTİSNA"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "%s yapılandırma kısmı %s'de bulunamıyor"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr "Yapılandırmadan dahili istemci yüklenemedi: %r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr "%s libc'de bulunamadı.  No-op olarak çıkılıyor."

#, python-format
msgid "Unable to locate config for %s"
msgstr "%s için yapılandırma bulunamıyor"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "Yapılandırma sayısı %s %s için bulunamıyor"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr ""
"fallocate, posix_fallocate libc'de bulunamadı.  No-op olarak çıkılıyor."

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "%s dizininde fsynıc() yapılamıyor: %s"

#, python-format
msgid "Unable to read config from %s"
msgstr "%s'den yapılandırma okunamıyor"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "%(sync_from)r => %(sync_to)r yetki al"

#, python-format
msgid "Unexpected response: %s"
msgstr "Beklenmeyen yanıt: %s"

msgid "Unhandled exception"
msgstr "Yakalanmamış istisna"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr "GET sırasında bilinmeyen istisna: %(account)r %(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "%(container)s %(dbfile)s için güncelleme raporu başarısız"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "%(container)s %(dbfile)s için güncelleme raporu gönderildi"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"UYARI: SSL yalnızca test amaçlı etkinleştirilmelidir. Üretim için kurulumda "
"harici SSL sonlandırma kullanın."

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr "UYARI: Dosya göstericisi sınırı değiştirilemiyor.  Root değil misiniz?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr "UYARI: Azami süreç limiti değiştirilemiyor.  Root değil misiniz?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr "UYARI: Hafıza sınırı değiştirilemiyor.  Root değil misiniz?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "%s saniye %s'in ölmesi için beklendi; vaz geçiliyor"

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr "Uyarı: Memcached istemcisi olmadan oran sınırlama yapılamaz"

#, python-format
msgid "method %s is not allowed."
msgstr "%s metoduna izin verilmez."

msgid "no log file found"
msgstr "kayıt dosyası bulunamadı"

msgid "odfpy not installed."
msgstr "odfpy kurulu değil."

#, python-format
msgid "plotting results failed due to %s"
msgstr "çizdirme sonuçlaru %s sebebiyle başarısız"

msgid "python-matplotlib not installed."
msgstr "python-matplotlib kurulu değil."
swift-2.7.1/swift/locale/pt_BR/0000775000567000056710000000000013024044470017415 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/pt_BR/LC_MESSAGES/0000775000567000056710000000000013024044470021202 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/pt_BR/LC_MESSAGES/swift.po0000664000567000056710000010571713024044354022712 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# Andre Campos Bezerra , 2015
# Lucas Ribeiro , 2014
# thiagol , 2015
# Volmar Oliveira Junior , 2014
# Carlos Marques , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-26 10:29+0000\n"
"Last-Translator: Carlos Marques \n"
"Language: pt-BR\n"
"Plural-Forms: nplurals=2; plural=(n > 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Portuguese (Brazil)\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"encerramento do usuário"

#, python-format
msgid " - %s"
msgstr " - %s"

#, python-format
msgid " - parallel, %s"
msgstr " - paralelo, %s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"%(checked)d sufixos verificados – %(hashed).2f%% em hash, %(synced).2f%% "
"sincronizados"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s respondeu como desmontado"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partições de %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) dispositivos reconstruídos em %(time).2fs "
"(%(rate).2f/sec, %(remaining)s restantes)"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partições replicadas em "
"%(time).2fs (%(rate).2f/seg, %(remaining)s restantes)"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s sucessos, %(failure)s falhas"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s retornando 503 para %(statuses)s"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d não está em execução (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s (%s) parece ter sido interrompido"

#, python-format
msgid "%s already started..."
msgstr "%s já iniciado..."

#, python-format
msgid "%s does not exist"
msgstr "%s não existe"

#, python-format
msgid "%s is not mounted"
msgstr "%s não está montado"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s respondeu como não montado"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s em execução (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s: %s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s: Reconfiguração da conexão por peer"

#, python-format
msgid ", %s containers deleted"
msgstr ", %s containers excluídos"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ", %s contêineres possivelmente restando"

#, python-format
msgid ", %s containers remaining"
msgstr ", %s contêineres restando"

#, python-format
msgid ", %s objects deleted"
msgstr ", %s objetos excluídos"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ", %s objetos possivelmente restando"

#, python-format
msgid ", %s objects remaining"
msgstr ", %s objetos restando"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ", decorrido: %.02fs"

msgid ", return codes: "
msgstr ", códigos de retorno:"

msgid "Account"
msgstr "Conta"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "A conta %s não foi colhida desde %s"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "Auditoria de conta em modo \"único\" finalizado: %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "Passo de auditoria de conta finalizado: %.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr ""
"Tentativa de replicação do %(count)d dbs em%(time).5f segundos (%(rate).5f/s)"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "A Auditoria Falhou para %s: %s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "Código de retorno de ressincronização inválido: %(ret)d <-%(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "Iniciar auditoria de conta em modo \"único\""

msgid "Begin account audit pass."
msgstr "Iniciando passo de auditoria de conta."

msgid "Begin container audit \"once\" mode"
msgstr "Iniciar o modo \"único\" da auditoria do contêiner"

msgid "Begin container audit pass."
msgstr "Iniciar a aprovação da auditoria do contêiner."

msgid "Begin container sync \"once\" mode"
msgstr "Iniciar o modo \"único\" de sincronização do contêiner"

msgid "Begin container update single threaded sweep"
msgstr "Iniciar a varredura de encadeamento único da atualização do contêiner"

msgid "Begin container update sweep"
msgstr "Iniciar a varredura de atualização do contêiner"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "Iniciar o modo \"%s\" da auditoria de objeto (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr "Iniciar a varredura de encadeamento único da atualização do objeto"

msgid "Begin object update sweep"
msgstr "Iniciar a varredura da atualização do objeto"

#, python-format
msgid "Beginning pass on account %s"
msgstr "Iniciando o passo na conta %s"

msgid "Beginning replication run"
msgstr "Iniciando execução de replicação"

msgid "Broker error trying to rollback locked connection"
msgstr "Erro do Broker ao tentar retroceder a conexão bloqueada"

#, python-format
msgid "Can not access the file %s."
msgstr "Não é possível acessar o arquivo %s."

#, python-format
msgid "Can not load profile data from %s."
msgstr "Não é possível carregar dados do perfil a partir de %s."

#, python-format
msgid "Cannot read %s (%s)"
msgstr "Não é possível ler %s (%s)"

#, python-format
msgid "Cannot write %s (%s)"
msgstr "Não é possível gravar %s (%s)"

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "O cliente não leu no proxy dentro de %ss"

msgid "Client disconnected on read"
msgstr "Cliente desconectado durante leitura"

msgid "Client disconnected without sending enough data"
msgstr "Cliente desconecatdo sem ter enviado dados suficientes"

msgid "Client disconnected without sending last chunk"
msgstr "Cliente desconectado sem ter enviado o último chunk"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr ""
"O caminho do cliente %(client)s não corresponde ao caminho armazenado nos "
"metadados do objeto %(meta)s"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"Opção de configuração internal_client_conf_path não definida. Usando a "
"configuração padrão. Consulte internal-client.conf-sample para obter opções"

msgid "Connection refused"
msgstr "Conexão recusada"

msgid "Connection timeout"
msgstr "Tempo limite de conexão"

msgid "Container"
msgstr "Contêiner"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "Modo \"único\" da auditoria do contêiner concluído: %.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "Aprovação da auditoria do contêiner concluída: %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "Modo \"único\" de sincronização do contêiner concluído: %.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Varredura de encadeamento único da atualização do contêiner concluída: "
"%(elapsed).02fs, %(success)s com sucesso, %(fail)s com falha, %(no_change)s "
"sem mudanças"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "Varredura da atualização do contêiner concluída: %.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Varredura da atualização do contêiner de %(path)s concluída: "
"%(elapsed).02fs, %(success)s com sucesso, %(fail)s com falha, %(no_change)s "
"sem mudanças"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr "Não foi possível conectar a %s:%s após tentar por %s segundos"

#, python-format
msgid "Could not load %r: %s"
msgstr "Não é possível carregar %r: %s"

#, python-format
msgid "Data download error: %s"
msgstr "Erro ao fazer download de dados: %s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "Passo de dispositivos finalizados: %.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "O diretório %r não está mapeado para uma política válida (%s)"

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "ERRO %(db_file)s: %(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "ERRO %(status)d %(body)s Do Servidor %(type)s"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "ERRO %(status)d %(body)s No Servidor de Objetos re: %(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "ERRO %(status)d Expectativa: 100-continuar Do Servidor de Objeto"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr "ERRO %(status)d Tentando %(method)s %(path)s Do Servidor de Contêiner"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"ERRO A atualização da conta falhou com %(ip)s:%(port)s/%(device)s (tente "
"novamente mais tarde): Resposta %(status)s %(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"ERRO A atualização da conta falhou: números diferentes de hosts e "
"dispositivos na solicitação: \"%s\" vs \"%s\""

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "ERRO Resposta inválida %(status)s a partir de %(host)s"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "ERRO Tempo limite de leitura do cliente (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"ERRO A atualização do contêiner falhou (salvando para atualização assíncrona "
"posterior): %(status)d resposta do %(ip)s:%(port)s/%(dev)s"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"ERRO A atualização do contêiner falhou: números diferentes de hosts e "
"dispositivos na solicitação: \"%s\" vs \"%s\""

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "ERRO Não foi possível recuperar as informações da conta %s"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "ERRO Não foi possível obter informações do contêiner %s"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr "ERROR DiskFile %(data_file)s falha ao fechar: %(exc)s : %(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "ERRO Exceção ao causar desconexão do cliente"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr "ERRO Exceção ao transferir dados para os servidores de objeto %s"

msgid "ERROR Failed to get my own IPs?"
msgstr "ERRO Falha ao obter meus próprios IPs?"

msgid "ERROR Insufficient Storage"
msgstr "ERRO Armazenamento Insuficiente"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr "ERRO O objeto %(obj)s falhou ao auditar e ficou em quarentena: %(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "ERRO Problema de seleção, colocando em quarentena %s"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "ERRO Drive remoto não montado %s"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "ERRO Sincronizando %(db_file)s %(row)s"

#, python-format
msgid "ERROR Syncing %s"
msgstr "ERRO Sincronizando %s"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "ERRO Tentando auditar %s"

msgid "ERROR Unhandled exception in request"
msgstr "ERRO Exceção não manipulada na solicitação"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "ERROR __call__ erro com %(method)s %(path)s"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr ""
"ERRO A atualização da conta falhou com %(ip)s:%(port)s/%(device)s (tente "
"novamente mais tarde)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr ""
"ERRO A atualização da conta falhou com %(ip)s:%(port)s/%(device)s (tente "
"novamente mais tarde): "

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr "ERRO arquivo pendente assíncrono com nome inesperado %s"

msgid "ERROR auditing"
msgstr "Erro na auditoria"

#, python-format
msgid "ERROR auditing: %s"
msgstr "ERRO ao auditar: %s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"ERRO A atualização de contêiner falhou com %(ip)s:%(port)s/%(dev)s (salvando "
"para atualização assíncrona posterior)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "ERRO ao ler a resposta HTTP de %s"

#, python-format
msgid "ERROR reading db %s"
msgstr "ERRO ao ler o BD  %s"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "ERRO A ressincronização falhou com %(code)s: %(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "ERRO ao sincronizar %(file)s com o nó %(node)s"

msgid "ERROR trying to replicate"
msgstr "ERRO ao tentar replicar"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "ERRO ao tentar limpar %s"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr "ERRO com %(type)s do servidor %(ip)s:%(port)s/%(device)s re: %(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "ERRO com as supressões de carregamento a partir de %s: "

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "ERRO com o servidor remoto %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr "ERRO: Falha ao obter caminhos para partições de unidade: %s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "ERRO: Ocorreu um erro ao recuperar segmentos"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "ERRO: Não é possível acessar %(path)s: %(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "ERRO: Não é possível executar a auditoria: %s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "Erro %(action)s para memcached: %(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "Erro ao codificar para UTF-8: %s"

msgid "Error hashing suffix"
msgstr "Erro ao efetuar hash do sufixo"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "Erro em %r com mtime_check_interval: %s"

#, python-format
msgid "Error limiting server %s"
msgstr "Erro ao limitar o servidor %s"

msgid "Error listing devices"
msgstr "Erro ao listar dispositivos"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "Erro na renderização de resultados de criação de perfil: %s"

msgid "Error parsing recon cache file"
msgstr "Erro ao analisar o arquivo de cache de reconhecimento"

msgid "Error reading recon cache file"
msgstr "Erro ao ler o arquivo de cache de reconhecimento"

msgid "Error reading ringfile"
msgstr "Erro ao ler ringfile"

msgid "Error reading swift.conf"
msgstr "Erro ao ler swift.conf"

msgid "Error retrieving recon data"
msgstr "Erro ao recuperar dados de reconhecimento"

msgid "Error syncing handoff partition"
msgstr "Erro ao sincronizar a partição de handoff"

msgid "Error syncing partition"
msgstr "Erro ao sincronizar partição"

#, python-format
msgid "Error syncing with node: %s"
msgstr "Erro ao sincronizar com o nó: %s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr ""
"Erro ao tentar reconstruir %(path)s policy#%(policy)d frag#%(frag_index)s"

msgid "Error: An error occurred"
msgstr "Erro: Ocorreu um erro"

msgid "Error: missing config path argument"
msgstr "Erro: argumento do caminho de configuração ausente"

#, python-format
msgid "Error: unable to locate %s"
msgstr "Erro: não é possível localizar %s"

msgid "Exception dumping recon cache"
msgstr "Exceção de dump de cache de reconhecimento"

msgid "Exception in top-level account reaper loop"
msgstr "Exceção no loop do removedor da conta de nível superior"

msgid "Exception in top-level replication loop"
msgstr "Exceção no loop de replicação de nível superior"

msgid "Exception in top-levelreconstruction loop"
msgstr "Exceção no loop de reconstrução de nível superior"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "Exceção ao excluir o contêiner %s %s"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "Exceção ao excluir objeto %s %s %s"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Exceção com %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Exception with account %s"
msgstr "Exceção com a conta %s"

#, python-format
msgid "Exception with containers for account %s"
msgstr "Exceção com os containers para a conta %s"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr ""
"Exceção com objetos para o container %(container)s para conta %(account)s"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "Expectativa: 100-continuar em %s"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr "Cadeia CNAME a seguir para %(given_domain)s para %(found_domain)s"

msgid "Found configs:"
msgstr "Configurações localizadas:"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr ""
"O primeiro modo de handoffs ainda possui handoffs. Interrompendo a aprovação "
"da replicação atual."

msgid "Host unreachable"
msgstr "Destino inacessível"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "Passo incompleto na conta %s"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "Formato X-Container-Sync-To inválido %r"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "Host inválido %r em X-Container-Sync-To"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "Entrada pendente inválida %(file)s: %(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "Resposta inválida %(resp)s a partir de %(full_path)s"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "Resposta inválida %(resp)s a partir de %(ip)s"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"Esquema inválido %r em X-Container-Sync-To, deve ser \" // \", \"http\" ou "
"\"https\"."

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "Eliminando a ressincronização de longa execução: %s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "Falha ao carregar JSON a partir do %s (%s)"

msgid "Lockup detected.. killing live coros."
msgstr "Bloqueio detectado... eliminando núcleos em tempo real."

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s mapeado para %(found_domain)s"

#, python-format
msgid "No %s running"
msgstr "Nenhum %s em execução"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "Nenhum terminal de cluster para %r %r"

#, python-format
msgid "No permission to signal PID %d"
msgstr "Nenhuma permissão para PID do sinal %d"

#, python-format
msgid "No policy with index %s"
msgstr "Nenhuma política com índice %s"

#, python-format
msgid "No realm key for %r"
msgstr "Nenhuma chave do domínio para %r"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "Nenhum espaço deixado no dispositivo para %s (%s)"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "Erro de nó limitado %(ip)s:%(port)s (%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "Servidores de objeto insuficientes  confirmados (obtidos %d)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr ""
"Não localizado %(sync_from)r => %(sync_to)r                    – objeto "
"%(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "Nada foi reconstruído durante %s segundos."

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "Nada foi replicado durante %s segundos."

msgid "Object"
msgstr "Objeto"

msgid "Object PUT"
msgstr "Objeto PUT "

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr ""
"Objeto PUT retornando 202 para 409: %(req_timestamp)s < = %(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "PUT de objeto retornando 412, %(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Modo \"%(mode)s\" da auditoria de objeto (%(type)s) concluído: "
"%(elapsed).02fs. Total em quarentena: %(quars)d, Total de erros: %(errors)d, "
"Total de arquivos/seg: %(frate).2f, Total de bytes/seg: %(brate).2f, Tempo "
"de auditoria: %(audit).2f, Taxa: %(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Auditoria de objeto (%(type)s). Desde %(start_time)s: Localmente: %(passes)d "
"aprovado, %(quars)d em quarentena, %(errors)d erros, arquivos/s: "
"%(frate).2f, bytes/seg: %(brate).2f, Tempo total: %(total).2f, Tempo de "
"auditoria: %(audit).2f, Taxa: %(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "Estatísticas de auditoria do objeto: %s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "Reconstrução do objeto concluída (única). (%.02f minutos)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "Reconstrução do objeto concluída. (%.02f minutos)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "Replicação do objeto concluída (única). (%.02f minutos)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "Replicação do objeto concluída. (%.02f minutos)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr "Servidores de objeto retornaram %s etags incompatíveis"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Varredura de encadeamento único da atualização do objeto concluída: "
"%(elapsed).02fs, %(success)s com sucesso, %(fail)s com falha"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "Varredura da atualização de objeto concluída: %.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Varredura da atualização do objeto de %(device)s concluída: %(elapsed).02fs, "
"%(success)s com sucesso, %(fail)s com falha"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr ""
"Parâmetros, consultas e fragmentos não permitidos em X-Container-Sync-To"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr ""
"Tempos de partição: máximo %(max).4fs, mínimo %(min).4fs, médio %(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr "Início da aprovação; %s contêineres possíveis; %s objetos possíveis"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "Aprovação concluída em %ds; %d objetos expirados"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "Aprovados até o momento %ds; %d objetos expirados"

msgid "Path required in X-Container-Sync-To"
msgstr "Caminho necessário em X-Container-Sync-To"

#, python-format
msgid "Problem cleaning up %s"
msgstr "Problema ao limpar %s"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "Problema ao limpar %s (%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "Problema ao gravar arquivo de estado durável %s (%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "Erro da Criação de Perfil: %s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"Em quarentena %(hsh_path)s para %(quar_path)s porque ele não é um diretório"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"%(object_path)s colocado em quarentena para %(quar_path)s porque ele não é "
"um diretório"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "Em quarentena %s para %s devido ao banco de dados %s "

#, python-format
msgid "Quarantining DB %s"
msgstr "Colocando o BD %s em quarentena"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""
"Log de suspensão do limite de taxa: %(sleep)s para %(account)s/%(container)s/"
"%(object)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "Dbs %(remove)d removido"

#, python-format
msgid "Removing %s objects"
msgstr "Removendo %s objetos"

#, python-format
msgid "Removing partition: %s"
msgstr "Removendo partição: %s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr "Removendo arquivo pid %(pid_file)s com pid errado %(pid)d"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "Removendo o arquivo pid %s com pid inválido"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Removendo o arquivo pid %s antigo"

msgid "Replication run OVER"
msgstr "Execução de replicação TERMINADA"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "Retornando 497 por causa da listad e bloqueio: %s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"Retornando 498 para %(meth)s para %(acc)s/%(cont)s/%(obj)s. Limite de taxa "
"(Suspensão Máxima) %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr ""
"Mudança no anel detectada. Interrompendo a aprovação da reconstrução atual."

msgid "Ring change detected. Aborting current replication pass."
msgstr ""
"Mudança no anel detectada. Interrompendo a aprovação da replicação atual."

#, python-format
msgid "Running %s once"
msgstr "Executando %s uma vez,"

msgid "Running object reconstructor in script mode."
msgstr "Executando o reconstrutor do objeto no modo de script."

msgid "Running object replicator in script mode."
msgstr "Executando replicador do objeto no modo de script."

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "PID %s do sinal: %s sinal: %s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"Desde %(time)s: %(sync)s sincronizados [%(delete)s exclusões, %(put)s "
"colocações], %(skip)s ignorados, %(fail)s com falha"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"Desde %(time)s: Auditoria de contas: %(passed)s auditorias aprovadas,"
"%(failed)s auditorias com falha"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr ""
"Desde %(time)s: Auditorias do contêiner: %(pass)s auditoria aprovada, "
"%(fail)s auditoria com falha"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "Ignorando %(device)s porque não está montado"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "Ignorando %s porque não está montado"

#, python-format
msgid "Starting %s"
msgstr "Iniciando %s"

msgid "Starting object reconstruction pass."
msgstr "Iniciando a aprovação da reconstrução de objeto."

msgid "Starting object reconstructor in daemon mode."
msgstr "Iniciando o reconstrutor do objeto no modo daemon."

msgid "Starting object replication pass."
msgstr "Iniciando a aprovação da replicação de objeto."

msgid "Starting object replicator in daemon mode."
msgstr "Iniciando o replicador do objeto no modo daemon."

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "Ressincronização bem-sucedida de %(src)s em %(dst)s (%(time).03f)"

msgid "The file type are forbidden to access!"
msgstr "O tipo de arquivo é de acesso proibido!"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"O total %(key)s para o contêiner (%(total)s) não confere com a soma %(key)s "
"pelas politicas (%(sum)s)"

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "Tempo limite %(action)s para memcached: %(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Exceção de tempo limite com %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "Tentando %(method)s %(path)s"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "Tentando GET %(full_path)s"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "Tentando obter o status %s do PUT para o %s"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "Tentando obter o status final do PUT para o %s"

msgid "Trying to read during GET"
msgstr "Tentando ler durante GET"

msgid "Trying to read during GET (retrying)"
msgstr "Tentando ler durante GET (tentando novamente)"

msgid "Trying to send to client"
msgstr "Tentando enviar para o cliente"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "Tentando sincronizar sufixos com %s"

#, python-format
msgid "Trying to write to %s"
msgstr "Tentando gravar em %s"

msgid "UNCAUGHT EXCEPTION"
msgstr "EXCEÇÃO NÃO CAPTURADA"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "Não é possível localizar a seção de configuração %s em %s"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr ""
"Não é possível carregar cliente interno a partir da configuração: %r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr "Não é possível localizar %s em libc. Saindo como um não operacional."

#, python-format
msgid "Unable to locate config for %s"
msgstr "Não é possível localizar configuração para %s"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "Não é possível localizar o número de configuração %s para %s"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr ""
"Não é possível localizar fallocate, posix_fallocate em libc. Saindo como um "
"não operacional."

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "Não é possível executar fsync() no diretório %s: %s"

#, python-format
msgid "Unable to read config from %s"
msgstr "Não é possível ler a configuração a partir de %s"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "Não autorizado %(sync_from)r => %(sync_to)r"

#, python-format
msgid "Unexpected response: %s"
msgstr "Resposta inesperada: %s"

msgid "Unhandled exception"
msgstr "Exceção não manipulada"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr "Exceção inesperada ao tentar GET: %(account)r %(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "Atualize o relatório com falha para %(container)s %(dbfile)s"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "Atualize o relatório enviado para %(container)s %(dbfile)s"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"AVISO: O SSL deve ser ativado somente para fins de teste. Use rescisão SSL "
"externa para uma implementação de produção."

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr ""
"AVISO: Não é possível modificar o limite do descritor de arquivo. Executar "
"como não raiz?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr ""
"AVISO: Não é possível modificar o limite máximo do processo. Executar como "
"não raiz?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr ""
"AVISO: Não é possível modificar o limite de memória. Executar como não raiz?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "Esperou %s segundos para %s eliminar; desistindo"

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr "Esperou %s segundos para %s eliminar; eliminando"

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr ""
"Aviso: Não é possível estabelecer um limite de taxa sem um cliente memcached"

#, python-format
msgid "method %s is not allowed."
msgstr "O método %s não é permitido."

msgid "no log file found"
msgstr "Nenhum arquivo de log encontrado"

msgid "odfpy not installed."
msgstr "odfpy não está instalado."

#, python-format
msgid "plotting results failed due to %s"
msgstr "A plotagem de resultados falhou devido a %s"

msgid "python-matplotlib not installed."
msgstr "python-matplotlib não instalado."
swift-2.7.1/swift/locale/de/0000775000567000056710000000000013024044470016777 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/de/LC_MESSAGES/0000775000567000056710000000000013024044470020564 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/de/LC_MESSAGES/swift.po0000664000567000056710000010754513024044354022275 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# Andreas Jaeger , 2014
# Ettore Atalan , 2014-2015
# Jonas John , 2015
# Frank Kloeker , 2016. #zanata
# Monika Wolf , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-12 10:35+0000\n"
"Last-Translator: Monika Wolf \n"
"Language: de\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: German\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"Durch Benutzer beendet"

#, python-format
msgid " - %s"
msgstr " - %s"

#, python-format
msgid " - parallel, %s"
msgstr " - parallel, %s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"%(checked)d Suffixe überprüft - %(hashed).2f%% hashverschlüsselt, "
"%(synced).2f%% synchronisiert"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s zurückgemeldet als ausgehängt"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) Partitionen von %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) Geräten rekonstruiert in  %(time).2fs "
"(%(rate).2f/sec, %(remaining)s verbleibend)"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(replicated)d/%(total)d (%(percentage).2f%%) Partitionen repliziert in "
"%(time).2fs (%(rate).2f/s, %(remaining)s verbleibend)"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s Erfolge, %(failure)s Fehlschläge"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s gab 503 für %(statuses)s zurück"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d läuft nicht (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s (%s) scheinbar gestoppt"

#, python-format
msgid "%s already started..."
msgstr "%s bereits gestartet..."

#, python-format
msgid "%s does not exist"
msgstr "%s existiert nicht"

#, python-format
msgid "%s is not mounted"
msgstr "%s ist nicht eingehängt"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s zurückgemeldet als ausgehängt"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s läuft (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s: %s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s: Verbindung zurückgesetzt durch Peer"

#, python-format
msgid ", %s containers deleted"
msgstr ", %s Container gelöscht"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ", %s Container möglicherweise verbleibend"

#, python-format
msgid ", %s containers remaining"
msgstr ", %s Container verbleibend"

#, python-format
msgid ", %s objects deleted"
msgstr ", %s Objekte gelöscht"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ", %s Objekte möglicherweise verbleibend"

#, python-format
msgid ", %s objects remaining"
msgstr ", %s Objekte verbleibend"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ", vergangen: %.02fs"

msgid ", return codes: "
msgstr ", Rückgabecodes: "

msgid "Account"
msgstr "Konto"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "Konto %s wurde nicht aufgeräumt seit %s"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "Kontoprüfungsmodus \"once\" abgeschlossen: %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "Kontoprüfungsdurchlauf abgeschlossen: %.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr ""
"Versuch, %(count)d Datenbanken in %(time).5f Sekunden zu replizieren "
"(%(rate).5f/s)"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "Prüfung fehlgeschlagen für %s: %s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "Falscher rsync-Rückgabecode: %(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "Kontoprüfungsmodus \"once\" wird gestartet"

msgid "Begin account audit pass."
msgstr "Kontoprüfungsdurchlauf wird gestartet."

msgid "Begin container audit \"once\" mode"
msgstr "Containerprüfungsmodus \"once\" wird gestartet"

msgid "Begin container audit pass."
msgstr "Containerprüfungsdurchlauf wird gestartet."

msgid "Begin container sync \"once\" mode"
msgstr "Containersynchronisationsmodus \"once\" wird gestartet"

msgid "Begin container update single threaded sweep"
msgstr "Einzelthread-Scanvorgang für Containeraktualisierung wird gestartet"

msgid "Begin container update sweep"
msgstr "Scanvorgang für Containeraktualisierung wird gestartet"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "Objektprüfung mit \"%s\"-Modus wird gestartet (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr "Einzelthread-Scanvorgang für Objektaktualisierung wird gestartet"

msgid "Begin object update sweep"
msgstr "Scanvorgang für Objektaktualisierung wird gestartet"

#, python-format
msgid "Beginning pass on account %s"
msgstr "Durchlauf für Konto %s wird gestartet"

msgid "Beginning replication run"
msgstr "Replizierungsdurchlauf wird gestartet"

msgid "Broker error trying to rollback locked connection"
msgstr ""
"Brokerfehler beim Versuch, für eine gesperrte Verbindung ein Rollback "
"durchzuführen"

#, python-format
msgid "Can not access the file %s."
msgstr "Kann nicht auf die Datei %s zugreifen."

#, python-format
msgid "Can not load profile data from %s."
msgstr "Die Profildaten von %s können nicht geladen werden."

#, python-format
msgid "Cannot read %s (%s)"
msgstr "%s (%s) kann nicht gelesen werden."

#, python-format
msgid "Cannot write %s (%s)"
msgstr "Schreiben von %s (%s) nicht möglich."

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "Client konnte nicht innerhalb von %ss vom Proxy lesen"

msgid "Client disconnected on read"
msgstr "Client beim Lesen getrennt"

msgid "Client disconnected without sending enough data"
msgstr "Client getrennt ohne dem Senden von genügend Daten"

msgid "Client disconnected without sending last chunk"
msgstr ""
"Die Verbindung zum Client wurde getrennt, bevor der letzte Chunk gesendet "
"wurde. "

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr ""
"Clientpfad %(client)s entspricht nicht dem in den Objektmetadaten "
"gespeicherten Pfad %(meta)s"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"Konfigurationsoption internal_client_conf_path nicht definiert. "
"Standardkonfiguration wird verwendet. Informationen zu den Optionen finden "
"Sie in internal-client.conf-sample."

msgid "Connection refused"
msgstr "Verbindung abgelehnt"

msgid "Connection timeout"
msgstr "Verbindungszeitüberschreitung"

msgid "Container"
msgstr "Container"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "Containerprüfungsmodus \"once\" abgeschlossen: %.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "Containerprüfungsdurchlauf abgeschlossen: %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "Containersynchronisationsmodus \"once\" abgeschlossen: %.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Einzelthread-Scanvorgang für Containeraktualisierung abgeschlossen: "
"%(elapsed).02fs, %(success)s Erfolge, %(fail)s Fehler, %(no_change)s ohne "
"Änderungen"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "Scanvorgang für Containeraktualisierung abgeschlossen: %.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Scanvorgang für Containeraktualisierung von %(path)s abgeschlossen: "
"%(elapsed).02fs, %(success)s Erfolge, %(fail)s Fehler, %(no_change)s ohne "
"Änderungen"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr "Keine Bindung an %s:%s möglich nach Versuch über %s Sekunden"

#, python-format
msgid "Could not load %r: %s"
msgstr "Konnte %r nicht laden: %s"

#, python-format
msgid "Data download error: %s"
msgstr "Fehler beim Downloaden von Daten: %s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "Gerätedurchgang abgeschlossen: %.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr ""
"Das Verzeichnis %r kann keiner gültigen Richtlinie (%s) zugeordnet werden."

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "FEHLER %(db_file)s: %(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "FEHLER %(status)d %(body)s von %(type)s Server"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "FEHLER %(status)d %(body)s Vom Objektserver bezüglich: %(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "FEHLER %(status)d Erwartet: 100-continue von Objektserver"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr "FEHLER %(status)d Versuch, %(method)s %(path)sAus Container-Server"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"FEHLER Kontoaktualisierung fehlgeschlagen mit %(ip)s:%(port)s/%(device)s "
"(wird zu einem späteren Zeitpunkt erneut versucht): Antwort %(status)s "
"%(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"FEHLER Kontoaktualisierung fehlgeschlagen: Unterschiedliche Anzahl von Hosts "
"und Einheiten in der Anforderung: \"%s\" contra \"%s\""

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "FEHLER Falsche Rückmeldung %(status)s von %(host)s"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "FEHLER Client-Lesezeitüberschreitung (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"FEHLER Containeraktualisierung fehlgeschlagen (wird für asynchrone "
"Aktualisierung zu einem späteren Zeitpunkt gespeichert): %(status)d Antwort "
"von %(ip)s:%(port)s/%(dev)s"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"FEHLER Containeraktualisierung fehlgeschlagen: Unterschiedliche Anzahl von "
"Hosts und Einheiten in der Anforderung: \"%s\" contra \"%s\""

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "FEHLER Kontoinfo %s konnte nicht abgerufen werden"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "FEHLER Containerinformation %s konnte nicht geholt werden"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr ""
"FEHLER Fehler beim Schließen von DiskFile %(data_file)s: %(exc)s : %(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr ""
"FEHLER Ausnahme, die zu einer Unterbrechung der Verbindung zum Client führt"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr "FEHLER: Ausnahme bei der Übertragung von Daten an die Ojektserver %s"

msgid "ERROR Failed to get my own IPs?"
msgstr "FEHLER Eigene IPs konnten nicht abgerufen werden?"

msgid "ERROR Insufficient Storage"
msgstr "FEHLER Nicht genügend Speicher"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr ""
"FEHLER Objekt %(obj)s hat die Prüfung nicht bestanden und wurde unter "
"Quarantäne gestellt: %(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "FEHLER Pickle-Problem, %s wird unter Quarantäne gestellt"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "FEHLER Entferntes Laufwerk nicht eingehängt %s"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "FEHLER beim Synchronisieren %(db_file)s %(row)s"

#, python-format
msgid "ERROR Syncing %s"
msgstr "FEHLER beim Synchronisieren %s"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "FEHLER beim Versuch, %s zu prüfen"

msgid "ERROR Unhandled exception in request"
msgstr "FEHLER Nicht behandelte Ausnahme in Anforderung"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "FEHLER __call__-Fehler mit %(method)s %(path)s "

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr ""
"FEHLER Containeraktualisierung fehlgeschlagen mit %(ip)s:%(port)s/%(device)s "
"(wird zu einem späteren Zeitpunkt erneut versucht)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr ""
"FEHLER Kontoaktualisierung fehlgeschlagen mit %(ip)s:%(port)s/%(device)s "
"(wird später erneut versucht): "

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr "FEHLER asynchrone anstehende Datei mit unerwartetem Namen %s"

msgid "ERROR auditing"
msgstr "FEHLER bei der Prüfung"

#, python-format
msgid "ERROR auditing: %s"
msgstr "FEHLER bei der Prüfung: %s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"FEHLER Containeraktualisierung fehlgeschlagen mit %(ip)s:%(port)s/%(dev)s "
"(wird für asynchrone Aktualisierung zu einem späteren Zeitpunkt gespeichert)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "FEHLER beim Lesen der HTTP-Antwort von %s"

#, python-format
msgid "ERROR reading db %s"
msgstr "FEHLER beim Lesen der Datenbank %s"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "FEHLER rsync fehlgeschlagen mit %(code)s: %(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr ""
"FEHLER beim Synchronisieren von %(file)s Dateien mit dem Knoten %(node)s"

msgid "ERROR trying to replicate"
msgstr "FEHLER beim Versuch zu replizieren"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "FEHLER beim Versuch, %s zu bereinigen"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr "FEHLER mit %(type)s Server %(ip)s:%(port)s/%(device)s AW: %(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "FEHLER beim Laden von Unterdrückungen von %s: "

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "FEHLER mit entferntem Server %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr ""
"FEHLER:  Pfade zu Laufwerkpartitionen konnten nicht abgerufen werden: %s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "FEHLER: Beim Abrufen von Segmenten ist ein Fehler aufgetreten"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "FEHLER: Auf %(path)s kann nicht zugegriffen werden: %(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "FEHLER: Prüfung konnte nicht durchgeführt werden: %s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "Fehler %(action)s für memcached: %(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "Fehler beim Kodieren nach UTF-8: %s"

msgid "Error hashing suffix"
msgstr "Fehler beim Hashing des Suffix"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "Fehler in %r mit mtime_check_interval: %s"

#, python-format
msgid "Error limiting server %s"
msgstr "Fehler beim Begrenzen des Servers %s"

msgid "Error listing devices"
msgstr "Fehler beim Auflisten der Geräte"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "Fehler beim Wiedergeben der Profilerstellungsergebnisse: %s"

msgid "Error parsing recon cache file"
msgstr "Fehler beim Analysieren von recon-Zwischenspeicherdatei"

msgid "Error reading recon cache file"
msgstr "Fehler beim Lesen von recon-Zwischenspeicherdatei"

msgid "Error reading ringfile"
msgstr "Fehler beim Lesen der Ringdatei"

msgid "Error reading swift.conf"
msgstr "Fehler beim Lesen der swift.conf"

msgid "Error retrieving recon data"
msgstr "Fehler beim Abrufen der recon-Daten"

msgid "Error syncing handoff partition"
msgstr "Fehler bei der Synchronisierung der Übergabepartition"

msgid "Error syncing partition"
msgstr "Fehler beim Syncen der Partition"

#, python-format
msgid "Error syncing with node: %s"
msgstr "Fehler beim Synchronisieren mit Knoten: %s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr ""
"Fehler bei Versuch, erneuten Build zu erstellen für %(path)s policy#"
"%(policy)d frag#%(frag_index)s"

msgid "Error: An error occurred"
msgstr "Fehler: Ein Fehler ist aufgetreten"

msgid "Error: missing config path argument"
msgstr "Fehler: fehlendes Konfigurationspfadargument"

#, python-format
msgid "Error: unable to locate %s"
msgstr "Fehler: %s kann nicht lokalisiert werden"

msgid "Exception dumping recon cache"
msgstr "Ausnahme beim Löschen von recon-Cache"

msgid "Exception in top-level account reaper loop"
msgstr "Ausnahme in Reaper-Loop für Konto der höchsten Ebene"

msgid "Exception in top-level replication loop"
msgstr "Ausnahme in Replizierungsloop der höchsten Ebene"

msgid "Exception in top-levelreconstruction loop"
msgstr "Ausnahme in Rekonstruktionsloop der höchsten Ebene"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "Ausnahme beim Löschen von Container %s %s"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "Ausnahme beim Löschen von Objekt %s %s %s"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Ausnahme bei %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Exception with account %s"
msgstr "Ausnahme mit Account %s"

#, python-format
msgid "Exception with containers for account %s"
msgstr "Ausnahme bei Containern für Konto %s"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr ""
"Ausnahme bei Objekten für Container %(container)s für Konto %(account)s"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "Erwartet: 100-continue auf %s"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr "CNAME-Kette für %(given_domain)s bis %(found_domain)s wird gefolgt"

msgid "Found configs:"
msgstr "Gefundene Konfigurationen:"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr ""
"Der Modus 'handoffs_first' ist noch nicht abgeschlossen. Der aktuelle "
"Replikationsdurchgang wird abgebrochen."

msgid "Host unreachable"
msgstr "Host nicht erreichbar"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "Unvollständiger Durchgang auf Konto %s"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "Ungültiges X-Container-Sync-To-Format %r"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "Ungültiger Host %r in X-Container-Sync-To"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "Ungültiger ausstehender Eintrag %(file)s: %(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "Ungültige Rückmeldung %(resp)s von %(full_path)s"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "Ungültige Rückmeldung %(resp)s von %(ip)s"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"Ungültiges Schema %r in X-Container-Sync-To, muss \"//\", \"http\" oder "
"\"https\" sein."

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "Lange laufendes rsync wird gekillt: %s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "Laden von JSON aus %s fehlgeschlagen: (%s)"

msgid "Lockup detected.. killing live coros."
msgstr "Suche erkannt. Live-Coros werden gelöscht."

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s zugeordnet zu %(found_domain)s"

#, python-format
msgid "No %s running"
msgstr "Kein %s läuft"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "Kein Cluster-Endpunkt für %r %r"

#, python-format
msgid "No permission to signal PID %d"
msgstr "Keine Berechtigung zu Signal-Programmkennung %d"

#, python-format
msgid "No policy with index %s"
msgstr "Keine Richtlinie mit Index %s"

#, python-format
msgid "No realm key for %r"
msgstr "Kein Bereichsschlüssel für %r"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "Kein freier Speicherplatz im Gerät für %s (%s) vorhanden."

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "Knotenfehler begrenzt %(ip)s:%(port)s (%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "Es wurden nicht genügend Objektserver bestätigt (got %d)."

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr ""
"Nicht gefunden %(sync_from)r => %(sync_to)r                       - Objekt "
"%(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "Für %s Sekunden nichts rekonstruiert."

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "Für %s Sekunden nichts repliziert."

msgid "Object"
msgstr "Objekt"

msgid "Object PUT"
msgstr "Objekt PUT"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr ""
"PUT-Operation für ein Objekt gibt 202 für 409 zurück: %(req_timestamp)s <= "
"%(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "Objekt PUT Rückgabe 412, %(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Objektprüfung (%(type)s) \"%(mode)s\" Modus abgeschlossen: %(elapsed).02fs. "
"Unter Quarantäne gestellt insgesamt: %(quars)d, Fehler insgesamt: "
"%(errors)d, Dateien/s insgesamt: %(frate).2f, Bytes/s insgesamt: "
"%(brate).2f, Prüfungszeit: %(audit).2f, Geschwindigkeit: %(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Objektprüfung (%(type)s). Seit %(start_time)s: Lokal: %(passes)d übergeben, "
"%(quars)d unter Quarantäne gestellt, %(errors)d Fehler, Dateien/s: "
"%(frate).2f, Bytes/s: %(brate).2f, Zeit insgesamt: %(total).2f, "
"Prüfungszeit: %(audit).2f, Geschwindigkeit: %(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "Objektprüfungsstatistik: %s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "Objektrekonstruktion vollständig (einmal). (%.02f Minuten)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "Objektrekonstruktion vollständig. (%.02f Minuten)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "Objektreplizierung abgeschlossen (einmal). (%.02f Minuten)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "Objektreplikation vollständig. (%.02f Minuten)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr "Objektserver haben %s nicht übereinstimmende Etags zurückgegeben"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Einzelthread-Scanvorgang für Objektaktualisierung abgeschlossen: "
"%(elapsed).02fs, %(success)s Erfolge, %(fail)s Fehler"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "Scanvorgang für Objektaktualisierung abgeschlossen: %.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Scanvorgang für Objektaktualisierung von %(device)s abgeschlossen: "
"%(elapsed).02fs, %(success)s Erfolge, %(fail)s Fehler"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr ""
"Parameter, Abfragen und Fragmente nicht zulässig in X-Container-Sync-To"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr ""
"Partitionszeiten: max. %(max).4fs, min. %(min).4fs, durchschnittl. %(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr "Durchlauf wird gestartet; %s mögliche Container; %s mögliche Objekte"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "Durchgang abgeschlossen in %ds; %d Objekte abgelaufen"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "Bisherige Durchgänge %ds; %d Objekte abgelaufen"

msgid "Path required in X-Container-Sync-To"
msgstr "Pfad in X-Container-Sync-To ist erforderlich"

#, python-format
msgid "Problem cleaning up %s"
msgstr "Problem bei der Bereinigung von %s"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "Problem bei der Bereinigung von %s (%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "Problem beim Schreiben der langlebigen Statusdatei %s (%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "Fehler bei der Profilerstellung: %s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"%(hsh_path)s bis %(quar_path)s wurden unter Quarantäne gestellt, da es sich "
"nicht um ein Verzeichnis handelt"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"%(object_path)s bis %(quar_path)s wurden unter Quarantäne gestellt, da es "
"sich nicht um ein Verzeichnis handelt"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "%s unter Quarantäne gestellt in %s aufgrund von %s-Datenbank"

#, python-format
msgid "Quarantining DB %s"
msgstr "Datenbank %s wird unter Quarantäne gestellt"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""
"Inaktivitätsprotokoll für Geschwindigkeitsbegrenzung: %(sleep)s für "
"%(account)s/%(container)s/%(object)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "%(remove)d Datenbanken entfernt"

#, python-format
msgid "Removing %s objects"
msgstr "%s Objekte werden entfernt"

#, python-format
msgid "Removing partition: %s"
msgstr "Partition wird entfernt: %s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr "PID-Datei %(pid_file)s mit falscher PID %(pid)d wird entfernt"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "PID-Datei %s mit ungültiger PID wird entfernt."

#, python-format
msgid "Removing stale pid file %s"
msgstr "Veraltete PID-Datei %s wird entfernt"

msgid "Replication run OVER"
msgstr "Replizierungsdurchlauf ABGESCHLOSSEN"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "497 wird aufgrund von Blacklisting zurückgegeben: %s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"498 wird für %(meth)s auf %(acc)s/%(cont)s/%(obj)s zurückgegeben. "
"Geschwindigkeitsbegrenzung (Max. Inaktivität) %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr ""
"Ringänderung erkannt. Aktueller Rekonstruktionsdurchgang wird abgebrochen."

msgid "Ring change detected. Aborting current replication pass."
msgstr ""
"Ringänderung erkannt. Aktueller Replizierungsdurchlauf wird abgebrochen."

#, python-format
msgid "Running %s once"
msgstr "%s läuft einmal"

msgid "Running object reconstructor in script mode."
msgstr "Objektrekonstruktor läuft im Skriptmodus."

msgid "Running object replicator in script mode."
msgstr "Objektreplikator läuft im Skriptmodus."

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "Signal %s  PID: %s  Signal: %s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"Seit %(time)s: %(sync)s synchronisiert [%(delete)s Löschungen, %(put)s "
"Puts], %(skip)s übersprungen, %(fail)s fehlgeschlagen"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"Seit %(time)s: Kontoprüfungen: %(passed)s bestandene Prüfung,%(failed)s "
"nicht bestandene Prüfung"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr ""
"Seit %(time)s: Containerprüfungen: %(pass)s bestandene Prüfung, %(fail)s "
"nicht bestandene Prüfung"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "%(device)s wird übersprungen, da nicht angehängt"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "%s wird übersprungen, weil es nicht eingehängt ist"

#, python-format
msgid "Starting %s"
msgstr "%s wird gestartet"

msgid "Starting object reconstruction pass."
msgstr "Objektrekonstruktionsdurchgang wird gestartet."

msgid "Starting object reconstructor in daemon mode."
msgstr "Objektrekonstruktor wird im Daemon-Modus gestartet."

msgid "Starting object replication pass."
msgstr "Objektreplikationsdurchgang wird gestartet."

msgid "Starting object replicator in daemon mode."
msgstr "Objektreplikator wird im Dämonmodus gestartet."

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "Erfolgreiches rsync von %(src)s um %(dst)s (%(time).03f)"

msgid "The file type are forbidden to access!"
msgstr "Auf den Dateityp darf nicht zugegriffen werden!"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"Die Gesamtsumme an %(key)s für den Container (%(total)s) entspricht nicht "
"der Summe der %(key)s für alle Richtlinien (%(sum)s)"

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "Zeitlimit %(action)s für memcached: %(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Zeitüberschreitungsausnahme bei %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "Versuch, %(method)s %(path)s"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "Versuch, %(full_path)s mit GET abzurufen"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "Es wird versucht, %s-Status von PUT für %s abzurufen."

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "Versuch, den finalen Status von PUT für %s abzurufen"

msgid "Trying to read during GET"
msgstr "Versuch, während des GET-Vorgangs zu lesen"

msgid "Trying to read during GET (retrying)"
msgstr "Versuch, während des GET-Vorgangs zu lesen (Wiederholung)"

msgid "Trying to send to client"
msgstr "Versuch, an den Client zu senden"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "Es wird versucht, Suffixe mit %s zu synchronisieren."

#, python-format
msgid "Trying to write to %s"
msgstr "Versuch, an %s zu schreiben"

msgid "UNCAUGHT EXCEPTION"
msgstr "NICHT ABGEFANGENE AUSNAHME"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "%s-Konfigurationsabschnitt in %s kann nicht gefunden werden"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr ""
"Interner Client konnte nicht aus der Konfiguration geladen werden:  %r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr ""
"%s konnte nicht in libc gefunden werden. Wird als Nullbefehl verlassen."

#, python-format
msgid "Unable to locate config for %s"
msgstr "Konfiguration für %s wurde nicht gefunden."

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "Konfigurationsnummer %s für %s wurde nicht gefunden."

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr ""
"fallocate, posix_fallocate konnte nicht in libc gefunden werden. Wird als "
"Nullbefehl verlassen."

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "fsync() kann für Verzeichnis %s nicht ausgeführt werden: %s"

#, python-format
msgid "Unable to read config from %s"
msgstr "Konfiguration aus %s kann nicht gelesen werden"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "Nicht genehmigte %(sync_from)r => %(sync_to)r"

#, python-format
msgid "Unexpected response: %s"
msgstr "Unerwartete Antwort: %s"

msgid "Unhandled exception"
msgstr "Nicht behandelte Exception"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr ""
"Unbekannte Ausnahme bei GET-Versuch: %(account)r %(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "Aktualisierungsbericht fehlgeschlagen für %(container)s %(dbfile)s"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "Aktualisierungsbericht gesendet für %(container)s %(dbfile)s"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"WARNUNG: SSL sollte nur zu Testzwecken aktiviert werden. Verwenden Sie die "
"externe SSL-Beendigung für eine Implementierung in der Produktionsumgebung."

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr ""
"WARNUNG: Grenzwert für Dateideskriptoren kann nicht geändert werden.  Wird "
"nicht als Root ausgeführt?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr ""
"WARNUNG: Grenzwert für maximale Verarbeitung kann nicht geändert werden.  "
"Wird nicht als Root ausgeführt?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr ""
"WARNUNG: Grenzwert für Speicher kann nicht geändert werden.  Wird nicht als "
"Root ausgeführt?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "Hat %s Sekunden für %s zum Erlöschen gewartet; Gibt auf"

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr "Hat %s Sekunden für %s zum Erlöschen gewartet. Wird abgebrochen."

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr ""
"Warnung: Geschwindigkeitsbegrenzung kann nicht ohne memcached-Client "
"durchgeführt werden"

#, python-format
msgid "method %s is not allowed."
msgstr "Methode %s ist nicht erlaubt."

msgid "no log file found"
msgstr "keine Protokolldatei gefunden"

msgid "odfpy not installed."
msgstr "odfpy ist nicht installiert."

#, python-format
msgid "plotting results failed due to %s"
msgstr ""
"Die grafische Darstellung der Ergebnisse ist fehlgeschlagen aufgrund von %s"

msgid "python-matplotlib not installed."
msgstr "python-matplotlib ist nicht installiert."
swift-2.7.1/swift/locale/ko_KR/0000775000567000056710000000000013024044470017414 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/ko_KR/LC_MESSAGES/0000775000567000056710000000000013024044470021201 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/ko_KR/LC_MESSAGES/swift.po0000664000567000056710000010642013024044354022701 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# Mario Cho , 2014
# Ying Chun Guo , 2015
# Sungjin Kang , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-19 03:34+0000\n"
"Last-Translator: SeYeon Lee \n"
"Language: ko-KR\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Korean (South Korea)\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"사용자 종료"

#, python-format
msgid " - %s"
msgstr " - %s"

#, python-format
msgid " - parallel, %s"
msgstr " - 병렬, %s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"%(checked)d개 접미부를 검사함 - %(hashed).2f%%개 해시됨, %(synced).2f%%개 동"
"기화됨"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s에서 마운트 해제된 것으로 응답함"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(device)d/%(dtotal)d (%(dpercentage).2f%%) 장치 중 %(reconstructed)d/"
"%(total)d (%(percentage).2f%%)개의 파티션이 %(time).2fs (%(rate).2f/sec, "
"%(remaining)s 남음)에 재구성됨"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(replicated)d/%(total)d(%(percentage).2f%%)개 파티션이 %(time).2f초"
"(%(rate).2f/초, %(remaining)s 남음) 안에 복제됨"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s개 성공, %(failure)s개 실패"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s에서 %(statuses)s에 대해 503을 리턴함"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d이(가) 실행되지 않음(%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s(%s)이(가) 중지됨"

#, python-format
msgid "%s already started..."
msgstr "%s이(가) 이미 시작되었음..."

#, python-format
msgid "%s does not exist"
msgstr "%s이(가) 존재하지 않음"

#, python-format
msgid "%s is not mounted"
msgstr "%s이(가) 마운트되지 않음"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s이(가) 마운트 해제된 것으로 응답"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s 실행 중(%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s: %s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s: 피어에서 연결 재설정"

#, python-format
msgid ", %s containers deleted"
msgstr ", %s 지워진 컨테이너"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ", %s 여분의 컨테이너"

#, python-format
msgid ", %s containers remaining"
msgstr ", %s 남은 컨테이너"

#, python-format
msgid ", %s objects deleted"
msgstr ", %s 지워진 오브젝트"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ", %s o여분의 오브젝트"

#, python-format
msgid ", %s objects remaining"
msgstr ", %s 남은 오브젝트"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ", 경과됨: %.02fs"

msgid ", return codes: "
msgstr ", 반환 코드들:"

msgid "Account"
msgstr "계정"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "Account %s을(를) %s 이후에 얻지 못함"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "Account 감사 \"once\"모드가 완료: %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "정상으로 판정난 account: %.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr ""
"%(time).5f초(%(rate).5f/s)에 %(count)d개의 데이터베이스를 복제하려고 함"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "검사 중 오류 %s: %s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "잘못된 rsync 리턴 코드: %(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "Account 감사 \"once\"모드로 시작"

msgid "Begin account audit pass."
msgstr "Account 검사 시작."

msgid "Begin container audit \"once\" mode"
msgstr "컨테이너 감사 \"once\" 모드 시작"

msgid "Begin container audit pass."
msgstr "컨테이너 감사 전달이 시작됩니다."

msgid "Begin container sync \"once\" mode"
msgstr "컨테이너 동기화 \"once\" 모드 시작"

msgid "Begin container update single threaded sweep"
msgstr "컨테이너 업데이트 단일 스레드 스윕 시작"

msgid "Begin container update sweep"
msgstr "컨테이너 업데이트 스윕 시작"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "오브젝트 감사 \"%s\" 모드(%s%s) 시작"

msgid "Begin object update single threaded sweep"
msgstr "오브젝트 업데이트 단일 스레드 스윕 시작"

msgid "Begin object update sweep"
msgstr "오브젝트 업데이트 스윕 시작"

#, python-format
msgid "Beginning pass on account %s"
msgstr "Account 패스 시작 %s"

msgid "Beginning replication run"
msgstr "복제 실행 시작"

msgid "Broker error trying to rollback locked connection"
msgstr "잠긴 연결을 롤백하는 중 브로커 오류 발생"

#, python-format
msgid "Can not access the file %s."
msgstr "파일 %s에 액세스할 수 없습니다."

#, python-format
msgid "Can not load profile data from %s."
msgstr "%s에서 프로파일 데이터를 로드할 수 없습니다."

#, python-format
msgid "Cannot read %s (%s)"
msgstr "%s을(를) 읽을 수 없음(%s)"

#, python-format
msgid "Cannot write %s (%s)"
msgstr "%s을(를) 쓸 수 없음(%s)"

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "클라이언트에서 %ss 내에 프록시를 읽을 수 없었음"

msgid "Client disconnected on read"
msgstr "읽기 시 클라이언트 연결이 끊어짐"

msgid "Client disconnected without sending enough data"
msgstr "데이터를 모두 전송하기 전에 클라이언트 연결이 끊어짐"

msgid "Client disconnected without sending last chunk"
msgstr "마지막 청크를 전송하기 전에 클라이언트 연결이 끊어짐"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr ""
"클라이언트 경로 %(client)s이(가) 오브젝트 메타데이터 %(meta)s에 저장된 경로"
"와 일치하지 않음"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"구성 옵션 internal_client_conf_path가 정의되지 않았습니다. 기본 구성 사용 시 "
"internal-client.conf-sample에서 옵션을 참조하십시오."

msgid "Connection refused"
msgstr "연결이 거부됨"

msgid "Connection timeout"
msgstr "연결 제한시간 초과"

msgid "Container"
msgstr "컨테이너"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "컨테이너 감사 \"once\" 모드 완료: %.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "컨테이너 감사 전달 완료: %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "컨테이너 동기화 \"once\" 모드 완료: %.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"컨테이너 업데이트 단일 스레드 스윕 완료: %(elapsed).02fs, %(success)s개 성"
"공, %(fail)s개 실패, %(no_change)s개 변경 없음"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "컨테이너 업데이트 스윕 완료: %.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"%(path)s의 컨테이너 업데이트 스윕 완료: %(elapsed).02fs, %(success)s개 성공, "
"%(fail)s개 실패, %(no_change)s개 변경 없음"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr "%s초 동안 시도한 후 %s:%s에 바인드할 수 없음"

#, python-format
msgid "Could not load %r: %s"
msgstr "%r을(를) 로드할 수 없음: %s"

#, python-format
msgid "Data download error: %s"
msgstr "데이터 다운로드 오류: %s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "장치 패스 완료 : %.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "%r 디렉토리가 올바른 정책(%s)에 맵핑되지 않음"

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "오류 %(db_file)s: %(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "오류 %(status)d %(body)s, %(type)s 서버 발신"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "오류 %(status)d %(body)s, 오브젝트 서버 발신, 회신: %(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "오류 %(status)d. 예상: 100-continue, 오브젝트 서버 발신"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr "오류 %(status)d, 컨테이너 서버에서 %(method)s %(path)s 시도 중"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"오류. %(ip)s:%(port)s/%(device)s(으)로 계정 업데이트 실패(나중에 다시 시도): "
"응답 %(status)s %(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"오류. 계정 업데이트 실패: 다음 요청에서 호스트 및 디바이스 수가 서로 다름: "
"\"%s\" 대 \"%s\""

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "오류. %(host)s의 잘못된 응답 %(status)s"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "ERROR 클라이언트 읽기 시간 초과 (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"오류. 컨테이너 업데이트 실패(이후 비동기 업데이트용으로 저장): %(status)d응"
"답. 출처: %(ip)s:%(port)s/%(dev)s"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"오류. 컨테이너 업데이트 실패: 다음 요청에서 호스트 및 디바이스 수가 서로 다"
"름: \"%s\" 대 \"%s\""

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "오류는 %s의 account 정보를 얻을 수 없습니다"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "오류. 컨테이너 정보 %s을(를) 가져올 수 없음"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr "오류. 디스크 파일 %(data_file)s 닫기 실패: %(exc)s : %(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "오류. 예외로 인해 클라이언트 연결이 끊어짐"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr "ERROR 오브젝트 서버 %s에 데이터를 전송하는 중에 예외 발생"

msgid "ERROR Failed to get my own IPs?"
msgstr "오류. 자체 IP를 가져오는 중 오류 발생 여부"

msgid "ERROR Insufficient Storage"
msgstr "오류. 스토리지 공간이 충분하지 않음"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr "오류. 오브젝트 %(obj)s의 감사가 실패하여 격리됨: %(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "오류. 문제가 발생함, %s 격리 중"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "오류. 원격 드라이브가 마운트되지 않음. %s"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "%(db_file)s %(row)s 동기화 오류"

#, python-format
msgid "ERROR Syncing %s"
msgstr "%s 동기화 오류"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "%s 감사 중 오류 발생"

msgid "ERROR Unhandled exception in request"
msgstr "오류. 요청에 처리되지 않은 예외가 있음"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "오류. %(method)s %(path)s에 __call__ 오류 발생"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr ""
"오류. %(ip)s:%(port)s/%(device)s(으)로 계정 업데이트 실패(나중에 다시 시도)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr ""
"오류. %(ip)s:%(port)s/%(device)s(으)로 계정 업데이트 실패(나중에 다시 시도): "

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr "오류. 비동기 보류 파일에 예상치 못한 이름 %s을(를) 사용함"

msgid "ERROR auditing"
msgstr "검사 오류"

#, python-format
msgid "ERROR auditing: %s"
msgstr "감사 오류: %s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"오류. %(ip)s:%(port)s/%(dev)s(으)로 컨테이너 업데이트 실패(이후 비동기 업데이"
"트용으로 저장)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "%s에서 HTTP 응답을 읽는 중 오류 발생"

#, python-format
msgid "ERROR reading db %s"
msgstr "데이터베이스 %s을(를) 읽는 중 오류 발생"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "오류. %(code)s의 rsync가 실패함: %(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "%(file)s을(를) 노드 %(node)s과(와) 동기화하는 중 오류 발생"

msgid "ERROR trying to replicate"
msgstr "복제 중 오류 발생"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "%s 정리 중 오류 발생"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr "%(type)s 서버 %(ip)s:%(port)s/%(device)s 오류, 회신: %(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "%s에서 억제를 로드하는 중 오류 발생: "

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "원격 서버 %(ip)s:%(port)s/%(device)s에 오류 발생"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr "오류: 드라이브 파티션에 대한 경로를 가져오지 못함: %s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "오류: 세그먼트를 검색하는 중 오류 발생"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "오류: %(path)s에 액세스할 수 없음: %(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "오류: 감사를 실행할 수 없음: %s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "Memcached에 대한 %(action)s 오류: %(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "UTF-8: %s 으로 변환 오류"

msgid "Error hashing suffix"
msgstr "접미부를 해싱하는 중 오류 발생"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "%r에서 mtime_check_interval 오류 발생: %s"

#, python-format
msgid "Error limiting server %s"
msgstr "서버 %s 제한 오류"

msgid "Error listing devices"
msgstr "디바이스 나열 중 오류 발생"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "프로파일링 결과를 렌더링하는 중 오류 발생: %s"

msgid "Error parsing recon cache file"
msgstr "조정 캐시 파일을 구문 분석하는 중 오류 발생"

msgid "Error reading recon cache file"
msgstr "조정 캐시 파일을 읽는 중 오류 발생"

msgid "Error reading ringfile"
msgstr "링 파일을 읽는 중 오류 발생"

msgid "Error reading swift.conf"
msgstr "swift.conf를 읽는 중 오류 발생"

msgid "Error retrieving recon data"
msgstr "조정 데이터를 검색하는 중에 오류 발생"

msgid "Error syncing handoff partition"
msgstr "핸드오프 파티션 동기화 중 오류 발생"

msgid "Error syncing partition"
msgstr "파티션 동기 오류 "

#, python-format
msgid "Error syncing with node: %s"
msgstr "노드 동기 오류: %s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr ""
"%(path)s policy#%(policy)d frag#%(frag_index)s을(를) 다시 빌드하려는 중 오류 "
"발생"

msgid "Error: An error occurred"
msgstr "오류: 오류 발생"

msgid "Error: missing config path argument"
msgstr "오류: 구성 경로 인수 누락"

#, python-format
msgid "Error: unable to locate %s"
msgstr "오류: %s을(를) 찾을 수 없음"

msgid "Exception dumping recon cache"
msgstr "조정 캐시 덤프 중 예외 발생"

msgid "Exception in top-level account reaper loop"
msgstr "최상위 account 루프의 예외 "

msgid "Exception in top-level replication loop"
msgstr "최상위 레벨 복제 루프에서 예외 발생"

msgid "Exception in top-levelreconstruction loop"
msgstr "최상위 레벨 재구성 루프에서 예외 발생"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "컨테이너 %s %s 삭제 중 예외 발생"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "오브젝트 %s %s %s 삭제 중 예외 발생"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s 예외"

#, python-format
msgid "Exception with account %s"
msgstr "예외 계정 %s"

#, python-format
msgid "Exception with containers for account %s"
msgstr "계정 콘테이너의 예외 %s"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr ""
"Account %(account)s의 컨테이너 %(container)s에 대한 오브젝트에 예외 발생"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "%s에서 100-continue 예상"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s에서 %(found_domain)s(으)로의 다음 CNAME 체인"

msgid "Found configs:"
msgstr "구성 발견:"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr ""
"핸드오프 첫 모드에 여전히 핸드오프가 남아 있습니다. 현재 복제 전달을 중단합니"
"다."

msgid "Host unreachable"
msgstr "호스트 도달 불가능"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "계정 패스 미완료 %s"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "올바르지 않은 X-Container-Sync-To 형식 %r"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "X-Container-Sync-To에 올바르지 않은 호스트 %r이(가) 있음"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "올바르지 않은 보류 항목 %(file)s: %(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "%(full_path)s에서 올바르지 않은 응답 %(resp)s"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "%(ip)s의 올바르지 않은 응답 %(resp)s"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"X-Container-Sync-To 올바르지 않은 스키마 %r이(가) 있습니다. \"//\", \"http\" "
"또는 \"https\"여야 합니다."

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "장기 실행 중인 rsync 강제 종료: %s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "%s에서 JSON 로드 실패(%s)"

msgid "Lockup detected.. killing live coros."
msgstr "잠금 발견.. 활성 coros를 강제 종료합니다."

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s을(를) %(found_domain)s(으)로 맵핑함"

#, python-format
msgid "No %s running"
msgstr "%s이(가) 실행되지 않음"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "%r %r에 대한 클러스터 엔드포인트가 없음"

#, python-format
msgid "No permission to signal PID %d"
msgstr "PID %d을(를) 표시할 권한이 없음"

#, python-format
msgid "No policy with index %s"
msgstr "인덱스가 %s인 정책이 없음"

#, python-format
msgid "No realm key for %r"
msgstr "%r에 대한 영역 키가 없음"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "%s의 장치 왼쪽에 공백이 없음(%s)"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "노드 오류로 %(ip)s:%(port)s(%(device)s)이(가) 제한됨"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "승인된 오브젝트 서버가 부족함(%d을(를) 받음)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr ""
"찾을 수 없음 %(sync_from)r => %(sync_to)r                       - 오브젝"
"트%(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "%s초 동안 재구성된 것이 없습니다."

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "%s초 동안 복제된 것이 없습니다."

msgid "Object"
msgstr "오브젝트"

msgid "Object PUT"
msgstr "Object PUT"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr ""
"Object PUT에서 409에 대해 202를 리턴함: %(req_timestamp)s <= %(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "Object PUT에서 412를 리턴함, %(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"오브젝트 감사(%(type)s) \"%(mode)s\" 모드 완료: %(elapsed).02fs. 총 격리 항"
"목: %(quars)d, 총 오류 수: %(errors)d, 총 파일/초: %(frate).2f, 총 바이트/"
"초: %(brate).2f, 감사 시간: %(audit).2f, 속도: %(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"오브젝트 감사(%(type)s). %(start_time)s 이후: 로컬: %(passes)d개 통과, "
"%(quars)d개 격리, %(errors)d개 오류, 파일/초: %(frate).2f, 바이트/초: "
"%(brate).2f, 총 시간: %(total).2f, 감사 시간: %(audit).2f, 속도: "
"%(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "오브젝트 감사 통계: %s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "오브젝트 재구성 완료(일 회). (%.02f분)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "오브젝트 재구성 완료. (%.02f분)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "오브젝트 복제 완료(일 회). (%.02f분)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "오브젝트 복제 완료. (%.02f분)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr "오브젝트 서버에서 %s개의 불일치 etag를 리턴함"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"오브젝트 업데이트 단일 스레드 스윕 완료: %(elapsed).02fs, %(success)s개 성"
"공, %(fail)s개 실패"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "오브젝트 업데이트 스윕 완료: %.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"%(device)s의 오브젝트 업데이트 스윕 완료: %(elapsed).02fs, %(success)s개 성"
"공, %(fail)s개 실패"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr "X-Container-Sync-To에 매개변수, 조회, 단편이 허용되지 않음"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr "파티션 시간: 최대 %(max).4f초, 최소 %(min).4f초, 중간 %(med).4f초"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr "전달 시작, %s개의 컨테이너 사용 가능, %s개의 오브젝트 사용 가능"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "%d초 안에 전달이 완료됨. %d개의 오브젝트가 만료됨"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "현재 %d개 전달, %d개의 오브젝트가 만료됨"

msgid "Path required in X-Container-Sync-To"
msgstr "X-Container-Sync-To에 경로가 필요함"

#, python-format
msgid "Problem cleaning up %s"
msgstr "%s 정리 문제 발생"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "%s 정리 문제 발생(%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "지속적인 상태 파일 %s 쓰기 오류(%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "프로파일링 오류: %s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr "디렉토리가 아니어서 %(hsh_path)s을(를) %(quar_path)s에 격리함"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr "디렉토리가 아니어서 %(object_path)s을(를) %(quar_path)s에 격리함"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "%s을(를) %s에 격리. 원인: %s 데이터베이스"

#, python-format
msgid "Quarantining DB %s"
msgstr "데이터베이스 %s 격리"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""
"%(account)s/%(container)s/%(object)s에 대한 Ratelimit 휴면 로그: %(sleep)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "%(remove)d 데이터베이스를 제거함"

#, python-format
msgid "Removing %s objects"
msgstr "%s 오브젝트 제거 중"

#, python-format
msgid "Removing partition: %s"
msgstr "파티션 제거: %s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr "잘못된 pid %(pid)d의 pid 파일 %(pid_file)s 제거"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "PID가 올바르지 않은 pid 파일 %s 제거"

#, python-format
msgid "Removing stale pid file %s"
msgstr "시간이 경과된 pid 파일 %s을(를) 제거하는 중 "

msgid "Replication run OVER"
msgstr "복제 실행 대상"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "블랙리스트 지정으로 인해 497이 리턴됨: %s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"%(acc)s/%(cont)s/%(obj)s(으)로 %(meth)s에 대한 498을 리턴합니다. 전송률 제한"
"(최대 휴면) %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr "링 변경이 발견되었습니다. 현재 재구성 전달을 중단합니다."

msgid "Ring change detected. Aborting current replication pass."
msgstr "Ring 변경이 발견되었습니다. 현재 복제 전달을 중단합니다."

#, python-format
msgid "Running %s once"
msgstr "%s을(를) 한 번 실행"

msgid "Running object reconstructor in script mode."
msgstr "오브젝트 재구성자를 스크립트 모드로 실행 중입니다."

msgid "Running object replicator in script mode."
msgstr "오브젝트 복제자를 스크립트 모드로 실행 중입니다."

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "신호 %s  pid: %s  신호: %s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"%(time)s 이후: %(sync)s 동기화됨 [%(delete)s 삭제, %(put)s 배치], %(skip)s 건"
"너뜀, %(fail)s 실패"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"검사 경과 시간 %(time)s: Account 검사A: %(passed)s 정상 ,%(failed)s 실패"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr "%(time)s 이후: 컨테이너 감사: %(pass)s 감사 전달, %(fail)s 감사 실패"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "마운트되지 않았으므로 %(device)s을(를) 건너뜀"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "마운트되지 않는 %s를 건너 뛰기"

#, python-format
msgid "Starting %s"
msgstr "%s 시작 중"

msgid "Starting object reconstruction pass."
msgstr "오브젝트 재구성 전달을 시작합니다."

msgid "Starting object reconstructor in daemon mode."
msgstr "오브젝트 재구성자를 디먼 모드로 시작합니다."

msgid "Starting object replication pass."
msgstr "오브젝트 복제 전달을 시작합니다."

msgid "Starting object replicator in daemon mode."
msgstr "오브젝트 복제자를 디먼 모드로 시작합니다."

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "%(dst)s(%(time).03f)에서 %(src)s의 rsync 성공"

msgid "The file type are forbidden to access!"
msgstr "이 파일 유형에 대한 액세스가 금지되었습니다!"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"컨테이너의 총 %(key)s가 (%(total)s) 과  %(key)s의 총합 (%(sum)s)가 일치하지 "
"않습니다."

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "Memcached에 대한 %(action)s 제한시간 초과: %(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s에서 제한시간 초과 예외 발생"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "%(method)s %(path)s 시도 중"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "GET %(full_path)s 시도 중"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "PUT의 %s 상태를 %s(으)로 가져오는 중"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "PUT의 최종 상태를 %s(으)로 가져오는 중"

msgid "Trying to read during GET"
msgstr "가져오기 중 읽기를 시도함"

msgid "Trying to read during GET (retrying)"
msgstr "가져오기(재시도) 중 읽기를 시도함"

msgid "Trying to send to client"
msgstr "클라이언트로 전송 시도 중"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "%s과(와) 접미사를 동기화하려고 시도"

#, python-format
msgid "Trying to write to %s"
msgstr "%s에 쓰기 시도 중"

msgid "UNCAUGHT EXCEPTION"
msgstr "미발견 예외"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "%s 구성 섹션을 %s에서 찾을 수 없음"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr "구성에서 내부 클라이언트를 로드할 수 없음: %r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr "Libc에서 %s을(를) 찾을 수 없습니다. no-op로 남겨 둡니다."

#, python-format
msgid "Unable to locate config for %s"
msgstr "%s의 구성을 찾을 수 없음"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "구성 번호 %s을(를) 찾을 수 없음(대상: %s)"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr ""
"Libc에서 fallocate, posix_fallocate를 찾을 수 없습니다. no-op로 남겨 둡니다."

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "%s 디렉토리에서 fsync()를 수행할 수 없음: %s"

#, python-format
msgid "Unable to read config from %s"
msgstr "%s에서 구성을 읽을 수 없음"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "권한 부여 해제 %(sync_from)r => %(sync_to)r"

#, python-format
msgid "Unexpected response: %s"
msgstr "예상치 않은 응답: %s"

msgid "Unhandled exception"
msgstr "처리되지 않은 예외"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr ""
"GET을 시도하는 중 알 수 없는 예외 발생: %(account)r %(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "%(container)s %(dbfile)s의 업데이트 보고서 실패"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "%(container)s %(dbfile)s의 업데이트 보고서를 발송함"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"경고: SSL은 테스트용으로만 사용해야 합니다. 프로덕션 배치에는 외부 SSL 종료"
"를 사용하십시오."

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr ""
"경고: 파일 디스크립터 한계를 수정할 수 없습니다. 비루트로 실행 중인지 확인하"
"십시오."

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr ""
"경고: 최대 프로세스 한계를 수정할 수 없습니다. 비루트로 실행 중인지 확인하십"
"시오."

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr ""
"경고: 메모리 한계를 수정할 수 없습니다. 비루트로 실행 중인지 확인하십시오."

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "%s초 동안 %s의 종료를 대기함, 포기하는 중"

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr "%s초 동안 %s을(를) 대기, 강제 종료 중"

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr "경고: memcached 클라이언트 없이 전송률을 제한할 수 없음"

#, python-format
msgid "method %s is not allowed."
msgstr "메소드 %s이(가) 허용되지 않습니다."

msgid "no log file found"
msgstr "로그 파일을 찾을 수 없음"

msgid "odfpy not installed."
msgstr "odfpy가 설치되어 있지 않습니다."

#, python-format
msgid "plotting results failed due to %s"
msgstr "%s(으)로 인해 결과 표시 실패"

msgid "python-matplotlib not installed."
msgstr "python-matplotlib가 설치되어 있지 않습니다."
swift-2.7.1/swift/locale/it/0000775000567000056710000000000013024044470017023 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/it/LC_MESSAGES/0000775000567000056710000000000013024044470020610 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/it/LC_MESSAGES/swift.po0000664000567000056710000010761013024044354022312 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# OpenStack Infra , 2015. #zanata
# Tom Cocozzello , 2015. #zanata
# Alessandra , 2016. #zanata
# Remo Mattei , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-03-22 05:31+0000\n"
"Last-Translator: Remo Mattei \n"
"Language: it\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Italian\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"l'utente è uscito"

#, python-format
msgid " - %s"
msgstr " - %s"

#, python-format
msgid " - parallel, %s"
msgstr " - parallelo, %s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"%(checked)d suffissi controllati - %(hashed).2f%% con hash, %(synced).2f%% "
"sincronizzati"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s ha risposto come smontato"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partizioni di %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) dispositivi ricostruiti in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s rimanenti)"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partizioni replicate in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s rimanenti)"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s operazioni con esito positivo, %(failure)s errori"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s restituisce 503 per %(statuses)s"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d non in esecuzione (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s (%s) sembra essere stato arrestato"

#, python-format
msgid "%s already started..."
msgstr "%s già avviato..."

#, python-format
msgid "%s does not exist"
msgstr "%s non esiste"

#, python-format
msgid "%s is not mounted"
msgstr "%s non è montato"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s ha risposto come smontato"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s in esecuzione (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s: %s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s: Connessione reimpostata dal peer"

#, python-format
msgid ", %s containers deleted"
msgstr ", %s contenitori eliminati"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ", %s contenitori probabilmente rimanenti"

#, python-format
msgid ", %s containers remaining"
msgstr ", %s contenitori rimanenti"

#, python-format
msgid ", %s objects deleted"
msgstr ", %s oggetti eliminati"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ", %s oggetti probabilmente rimanenti"

#, python-format
msgid ", %s objects remaining"
msgstr ", %s oggetti rimanenti"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ", trascorso: %.02fs"

msgid ", return codes: "
msgstr ", codici di ritorno: "

msgid "Account"
msgstr "Conto"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "Account %s non utilizzato da %s"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "Modalità \"once\" verifica account completata: %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "Trasmissione verifica account completata: %.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr ""
"È stato eseguito un tentativo di replicare %(count)d dbs in %(time).5f "
"secondi (%(rate).5f/s)"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "Verifica non riuscita per %s: %s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "Codice di ritorno rsync errato: %(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "Avvio modalità \"once\" verifica account"

msgid "Begin account audit pass."
msgstr "Avvio trasmissione verifica account."

msgid "Begin container audit \"once\" mode"
msgstr "Avvio modalità \"once\" verifica contenitore"

msgid "Begin container audit pass."
msgstr "Avvio trasmissione verifica contenitore."

msgid "Begin container sync \"once\" mode"
msgstr "Avvio della modalità \"once\" di sincronizzazione contenitore"

msgid "Begin container update single threaded sweep"
msgstr "Avvio pulizia a singolo thread aggiornamento contenitore"

msgid "Begin container update sweep"
msgstr "Avvio pulizia aggiornamento contenitore"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "Avvio modalità \"%s\" verifica oggetto (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr "Avvio pulizia a singolo thread aggiornamento oggetto"

msgid "Begin object update sweep"
msgstr "Avvio pulizia aggiornamento oggetto"

#, python-format
msgid "Beginning pass on account %s"
msgstr "Avvio della trasmissione sull'account %s"

msgid "Beginning replication run"
msgstr "Avvio replica"

msgid "Broker error trying to rollback locked connection"
msgstr ""
"Errore del broker durante il tentativo di eseguire il rollback della "
"connessione bloccata"

#, python-format
msgid "Can not access the file %s."
msgstr "Impossibile accedere al file %s."

#, python-format
msgid "Can not load profile data from %s."
msgstr "Impossibile caricare i dati del profilo da %s."

#, python-format
msgid "Cannot read %s (%s)"
msgstr "Non e' possibile leggere %s (%s)"

#, python-format
msgid "Cannot write %s (%s)"
msgstr "Non e' possibile scriver %s (%s)"

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "Il client non ha eseguito la lettura dal proxy in %ss"

msgid "Client disconnected on read"
msgstr "Client scollegato alla lettura"

msgid "Client disconnected without sending enough data"
msgstr "Client disconnesso senza inviare dati sufficienti"

msgid "Client disconnected without sending last chunk"
msgstr "Client disconnesso senza inviare l'ultima porzione"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr ""
"Il percorso del client %(client)s non corrisponde al percorso memorizzato "
"nei metadati dell'oggetto %(meta)s"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"Opzione di configurazione internal_client_conf_path non definita. Viene "
"utilizzata la configurazione predefinita, vedere l'esempio internal-client."
"conf-sample per le opzioni"

msgid "Connection refused"
msgstr "Connessione rifiutata"

msgid "Connection timeout"
msgstr "Timeout della connessione"

msgid "Container"
msgstr "Contenitore"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "Modalità \"once\" verifica contenitore completata: %.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "Trasmissione verifica contenitore completata: %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr ""
"Modalità \"once\" di sincronizzazione del contenitore completata: %.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Pulizia a singolo thread aggiornamento contenitore completata: "
"%(elapsed).02fs, %(success)s operazioni con esito positivo, %(fail)s errori, "
"%(no_change)s senza modifiche"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "Pulizia aggiornamento contenitore completata: %.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Pulizia aggiornamento contenitore di %(path)s completata: %(elapsed).02fs, "
"%(success)s operazioni con esito positivo, %(fail)s errori, %(no_change)s "
"senza modifiche"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr ""
"Impossibile effettuare il bind a %s:%s dopo aver provato per %s secondi"

#, python-format
msgid "Could not load %r: %s"
msgstr "Impossibile caricare %r: %s"

#, python-format
msgid "Data download error: %s"
msgstr "Errore di download dei dati: %s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "Trasmissione dei dispositivi completata: %.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "La directory %r non è associata ad una politica valida (%s)"

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "ERRORE %(db_file)s: %(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "ERRORE %(status)d %(body)s dal server %(type)s"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "ERRORE %(status)d %(body)s Dal server degli oggetti re: %(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "ERRORE %(status)d Previsto: 100-continue dal server degli oggetti"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr ""
"ERRORE %(status)d Tentativo di %(method)s %(path)s dal server contenitore"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"ERRORE Aggiornamento dell'account non riuscito con %(ip)s:%(port)s/"
"%(device)s (verrà eseguito un nuovo tentativo successivamente): Risposta "
"%(status)s %(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"ERRORE Aggiornamento dell'account non riuscito: numero differente di host e "
"dispositivi nella richiesta: \"%s\" vs \"%s\""

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "ERRORE Risposta errata %(status)s da %(host)s"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "ERRORE Timeout di lettura del client (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"ERRORE Aggiornamento del contenitore non riuscito (salvataggio per "
"l'aggiornamento asincrono successivamente): %(status)d risposta da %(ip)s:"
"%(port)s/%(dev)s"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"ERRORE Aggiornamento del contenitore non riuscito: numero differente di host "
"e dispositivi nella richiesta: \"%s\" vs \"%s\""

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "ERRORE Impossibile ottenere le informazioni sull'account %s"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "ERRORE Impossibile ottenere le informazioni sul contenitore %s"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr "ERRORE Errore di chiusura DiskFile %(data_file)s: %(exc)s : %(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "ERRORE Eccezione che causa la disconnessione del client"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr ""
"ERRORE Eccezione durante il trasferimento di dati nel server degli oggetti %s"

msgid "ERROR Failed to get my own IPs?"
msgstr "ERRORE Impossibile ottenere i propri IP?"

msgid "ERROR Insufficient Storage"
msgstr "ERRORE Memoria insufficiente"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr ""
"ERRORE L'oggetto %(obj)s non ha superato la verifica ed è stato inserito "
"nella quarantena: %(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "ERRORE Problema relativo a pickle, inserimento di %s nella quarantena"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "ERRORE Unità remota non montata %s"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "ERRORE durante la sincronizzazione di %(db_file)s %(row)s"

#, python-format
msgid "ERROR Syncing %s"
msgstr "ERRORE durante la sincronizzazione di %s"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "ERRORE durante il tentativo di eseguire la verifica %s"

msgid "ERROR Unhandled exception in request"
msgstr "ERRORE Eccezione non gestita nella richiesta"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "ERRORE  errore __call__ con %(method)s %(path)s "

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr ""
"ERRORE aggiornamento dell'account non riuscito con %(ip)s:%(port)s/"
"%(device)s (verrà eseguito un nuovo tentativo successivamente)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr ""
"ERRORE aggiornamento dell'account non riuscito con %(ip)s:%(port)s/"
"%(device)s (verrà eseguito un nuovo tentativo successivamente): "

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr "ERRORE file in sospeso asincrono con nome non previsto %s"

msgid "ERROR auditing"
msgstr "ERRORE durante la verifica"

#, python-format
msgid "ERROR auditing: %s"
msgstr "ERRORE durante la verifica: %s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"ERRORE aggiornamento del contenitore non riuscito con %(ip)s:%(port)s/"
"%(dev)s (salvataggio per aggiornamento asincrono successivamente)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "ERRORE durante la lettura della risposta HTTP da %s"

#, python-format
msgid "ERROR reading db %s"
msgstr "ERRORE durante la lettura del db %s"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "ERRORE rsync non riuscito con %(code)s: %(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "ERRORE durante la sincronizzazione di %(file)s con il nodo %(node)s"

msgid "ERROR trying to replicate"
msgstr "ERRORE durante il tentativo di eseguire la replica"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "ERRORE durante il tentativo di ripulire %s"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr ""
"ERRORE relativo al server %(type)s %(ip)s:%(port)s/%(device)s re: %(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "ERRORE relativo al caricamento delle eliminazioni da %s: "

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "ERRORE relativo al server remoto %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr "ERRORE:  Impossibile ottenere i percorsi per gestire le partizioni: %s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "ERRORE: Si è verificato un errore durante il richiamo dei segmenti"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "ERRORE: Impossibile accedere a %(path)s: %(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "ERRORE: Impossibile eseguire la verifica: %s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "Errore di %(action)s su memcached: %(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "Errore durante la codifica in UTF-8: %s"

msgid "Error hashing suffix"
msgstr "Errore durante l'hash del suffisso"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "Errore in %r con mtime_check_interval: %s"

#, python-format
msgid "Error limiting server %s"
msgstr "Errore durante la limitazione del server %s"

msgid "Error listing devices"
msgstr "Errore durante l'elenco dei dispositivi"

#, python-format
msgid "Error on render profiling results: %s"
msgstr ""
"Errore durante la visualizzazione dei risultati della creazione dei profili: "
"%s"

msgid "Error parsing recon cache file"
msgstr "Errore durante l'analisi del file della cache di riconoscimento"

msgid "Error reading recon cache file"
msgstr "Errore durante la lettura del file della cache di riconoscimento"

msgid "Error reading ringfile"
msgstr "Errore durante la lettura del ringfile"

msgid "Error reading swift.conf"
msgstr "Errore durante la lettura di swift.conf"

msgid "Error retrieving recon data"
msgstr "Errore durante il richiamo dei dati di riconoscimento"

msgid "Error syncing handoff partition"
msgstr "Errore durante la sincronizzazione della partizione di passaggio"

msgid "Error syncing partition"
msgstr "Errore durante la sincronizzazione della partizione"

#, python-format
msgid "Error syncing with node: %s"
msgstr "Errore durante la sincronizzazione con il nodo: %s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr ""
"Errore nel tentativo di ricreare %(path)s policy#%(policy)d frag#"
"%(frag_index)s"

msgid "Error: An error occurred"
msgstr "Errore: Si è verificato un errore"

msgid "Error: missing config path argument"
msgstr "Errore: Argomento path della configurazione mancante"

#, python-format
msgid "Error: unable to locate %s"
msgstr "Errore: impossibile individuare %s"

msgid "Exception dumping recon cache"
msgstr "Eccezione durante il dump della cache di recon"

msgid "Exception in top-level account reaper loop"
msgstr "Eccezione nel loop reaper dell'account di livello superiore"

msgid "Exception in top-level replication loop"
msgstr "Eccezione nel loop di replica di livello superiore"

msgid "Exception in top-levelreconstruction loop"
msgstr "Eccezione nel loop di ricostruzione di livello superiore"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "Eccezione durante l'eliminazione del contenitore %s %s"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "Eccezione durante l'eliminazione dell'oggetto %s %s %s"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Eccezione relativa a %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Exception with account %s"
msgstr "Eccezione relativa all'account %s"

#, python-format
msgid "Exception with containers for account %s"
msgstr "Eccezione relativa ai contenitori per l'account %s"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr ""
"Eccezione relativa agli oggetti per il contenitore %(container)s per "
"l'account %(account)s"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "Previsto: 100-continue su %s"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr ""
"Viene seguita la catena CNAME per %(given_domain)s verso %(found_domain)s"

msgid "Found configs:"
msgstr "Configurazioni trovate:"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr ""
"Nella prima modalità di passaggio ci sono ancora passaggi restanti. "
"Interruzione del passaggio di replica corrente."

msgid "Host unreachable"
msgstr "Host non raggiungibile"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "Trasmissione non completa sull'account %s"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "Formato X-Container-Sync-To non valido %r"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "Host non valido %r in X-Container-Sync-To"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "Voce in sospeso non valida %(file)s: %(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "Risposta non valida %(resp)s da %(full_path)s"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "Risposta non valida %(resp)s da %(ip)s"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"Schema non valido %r in X-Container-Sync-To, deve essere \"//\", \"http\" "
"oppure \"https\"."

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "Chiusura rsync ad elaborazione prolungata: %s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "Caricamento JSON dal %s fallito (%s)"

msgid "Lockup detected.. killing live coros."
msgstr "Blocco rilevato... chiusura dei coros attivi."

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s associato a %(found_domain)s"

#, python-format
msgid "No %s running"
msgstr "Nessun %s in esecuzione"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "Nessun endpoint del cluster per %r %r"

#, python-format
msgid "No permission to signal PID %d"
msgstr "Nessuna autorizzazione per la segnalazione del PID %d"

#, python-format
msgid "No policy with index %s"
msgstr "Nessuna politica con indice %s"

#, python-format
msgid "No realm key for %r"
msgstr "Nessuna chiave dell'area di autenticazione per %r"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "Nessuno spazio rimasto sul dispositivo per %s (%s)"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "Errore del nodo limitato %(ip)s:%(port)s (%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "Server degli oggetti riconosciuti non sufficienti (got %d)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr "%(sync_from)r => %(sync_to)r non trovato - oggetto %(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "Nessun elemento ricostruito per %s secondi."

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "Nessun elemento replicato per %s secondi."

msgid "Object"
msgstr "Oggetto"

msgid "Object PUT"
msgstr "PUT dell'oggetto"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr ""
"Il PUT dell'oggetto ha restituito 202 per 409: %(req_timestamp)s <= "
"%(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "Il PUT dell'oggetto ha restituito 412, %(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Modalità \"%(mode)s\" (%(type)s) verifica oggetto completata: "
"%(elapsed).02fs. Totale in quarantena: %(quars)d, Totale errori: %(errors)d, "
"Totale file/sec: %(frate).2f, Totale byte/sec: %(brate).2f, Tempo verifica: "
"%(audit).2f, Velocità: %(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Verifica oggetto (%(type)s). A partire da %(start_time)s: In locale: "
"%(passes)d passati, %(quars)d in quarantena, %(errors)d errori file/sec: "
"%(frate).2f , byte/sec: %(brate).2f, Tempo totale: %(total).2f, Tempo "
"verifica: %(audit).2f, Velocità: %(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "Statistiche verifica oggetto: %s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "Ricostruzione dell'oggetto completata (una volta). (%.02f minuti)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "Ricostruzione dell'oggetto completata. (%.02f minuti)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "Replica dell'oggetto completata (una volta). (%.02f minuti)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "Replica dell'oggetto completata. (%.02f minuti)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr "I server dell'oggetto hanno restituito %s etag senza corrispondenza"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Pulizia a singolo thread aggiornamento oggetto completata: %(elapsed).02fs, "
"%(success)s operazioni con esito positivo, %(fail)s errori"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "Pulizia aggiornamento oggetto completata: %.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Pulizia aggiornamento oggetto di %(device)s completata: %(elapsed).02fs, "
"%(success)s operazioni con esito positivo, %(fail)s errori"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr "Parametri, query e frammenti non consentiti in X-Container-Sync-To"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr "Tempi partizione: max %(max).4fs, min %(min).4fs, med %(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr ""
"Avvio della trasmissione; %s contenitori possibili; %s oggetti possibili"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "Trasmissione completata in %ds; %d oggetti scaduti"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "Trasmissione eseguita fino ad ora %ds; %d oggetti scaduti"

msgid "Path required in X-Container-Sync-To"
msgstr "Percorso richiesto in X-Container-Sync-To"

#, python-format
msgid "Problem cleaning up %s"
msgstr "Problema durante la ripulitura di %s"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "Problema durante la ripulitura di %s (%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "Problema durante la scrittura del file obsoleto duraturo %s (%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "Errore di creazione dei profili: %s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"%(hsh_path)s inserito in quarantena in %(quar_path)s perché non è una "
"directory"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"%(object_path)s inserito in quarantena in %(quar_path)s perché non è una "
"directory"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "%s inserito in quarantena in %s a causa del database %s"

#, python-format
msgid "Quarantining DB %s"
msgstr "Inserimento in quarantena del DB %s"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""
"Log di sospensione Ratelimit: %(sleep)s per %(account)s/%(container)s/"
"%(object)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "Rimossi %(remove)d dbs"

#, python-format
msgid "Removing %s objects"
msgstr "Rimozione di oggetti %s"

#, python-format
msgid "Removing partition: %s"
msgstr "Rimozione della partizione: %s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr "Rimozione del file pid %(pid_file)s con pid non valido %(pid)d"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "Rimozione del file pid %s con pid non valido"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Rimozione del file pid %s obsoleto in corso"

msgid "Replication run OVER"
msgstr "Esecuzione della replica TERMINATA"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "Viene restituito il codice 497 a causa della blacklist: %s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"Viene restituito 498 per %(meth)s a %(acc)s/%(cont)s/%(obj)s . Ratelimit "
"(numero massimo sospensioni) %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr ""
"Modifica ring rilevata. Interruzione della trasmissione della ricostruzione "
"corrente."

msgid "Ring change detected. Aborting current replication pass."
msgstr ""
"Modifica ring rilevata. Interruzione della trasmissione della replica "
"corrente."

#, python-format
msgid "Running %s once"
msgstr "Esecuzione di %s una volta"

msgid "Running object reconstructor in script mode."
msgstr ""
"Esecuzione del programma di ricostruzione dell'oggetto in modalità script."

msgid "Running object replicator in script mode."
msgstr "Esecuzione del programma di replica dell'oggetto in modalità script."

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "Segnale %s  pid: %s  segnale: %s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"A partire da %(time)s: %(sync)s sincronizzati [%(delete)s eliminazioni, "
"%(put)s inserimenti], %(skip)s ignorati, %(fail)s non riusciti"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"A partire da %(time)s: Verifiche account: %(passed)s verifiche superate, "
"%(failed)s verifiche non superate"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr ""
"A partire da %(time)s: Verifiche contenitore: %(pass)s verifiche superate, "
"%(fail)s verifiche non superate"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "%(device)s viene ignorato perché non è montato"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "%s viene ignorato perché non è montato"

#, python-format
msgid "Starting %s"
msgstr "Avvio di %s"

msgid "Starting object reconstruction pass."
msgstr "Avvio della trasmissione della ricostruzione dell'oggetto."

msgid "Starting object reconstructor in daemon mode."
msgstr "Avvio del programma di ricostruzione dell'oggetto in modalità daemon."

msgid "Starting object replication pass."
msgstr "Avvio della trasmissione della replica dell'oggetto."

msgid "Starting object replicator in daemon mode."
msgstr "Avvio del programma di replica dell'oggetto in modalità daemon."

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "Rsync di %(src)s eseguito correttamente su %(dst)s (%(time).03f)"

msgid "The file type are forbidden to access!"
msgstr "Non è consentito l'accesso a questo tipo di file!"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"Il numero totale di %(key)s per il contenitore (%(total)s) non corrisponde "
"alla somma di %(key)s tra le politiche (%(sum)s)"

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "Timeout di %(action)s su memcached: %(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Eccezione di timeout con %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "Tentativo di %(method)s %(path)s"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "Tentativo di eseguire GET %(full_path)s"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "Tentativo di acquisire  lo stato %s di PUT su %s"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "Tentativo di acquisire lo stato finale di PUT su %s"

msgid "Trying to read during GET"
msgstr "Tentativo di lettura durante GET"

msgid "Trying to read during GET (retrying)"
msgstr "Tentativo di lettura durante GET (nuovo tentativo)"

msgid "Trying to send to client"
msgstr "Tentativo di invio al client"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "Tentativo di sincronizzazione dei suffissi con %s"

#, python-format
msgid "Trying to write to %s"
msgstr "Tentativo di scrittura in %s"

msgid "UNCAUGHT EXCEPTION"
msgstr "ECCEZIONE NON RILEVATA"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "Impossibile trovare la sezione di configurazione %s in %s"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr "Impossibile caricare il client interno dalla configurazione: %r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr "Impossibile individuare %s in libc.  Lasciato come no-op."

#, python-format
msgid "Unable to locate config for %s"
msgstr "Impossibile individuare la configurazione per %s"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "Impossibile individuare il numero di configurazione %s per %s"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr ""
"Impossibile individuare fallocate, posix_fallocate in libc.  Lasciato come "
"no-op."

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "Impossibile eseguire fsync() sulla directory %s: %s"

#, python-format
msgid "Unable to read config from %s"
msgstr "Impossibile leggere la configurazione da %s"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "%(sync_from)r => %(sync_to)r non autorizzato"

#, python-format
msgid "Unexpected response: %s"
msgstr "Risposta imprevista: %s"

msgid "Unhandled exception"
msgstr "Eccezione non gestita"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr ""
"Eccezione imprevista nel tentativo di eseguire GET: %(account)r "
"%(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "Report di aggiornamento non riuscito per %(container)s %(dbfile)s"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "Report di aggiornamento inviato per %(container)s %(dbfile)s"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"AVVERTENZA: SSL deve essere abilitato solo per scopi di test. Utilizzare la "
"terminazione SSL esterna per una distribuzione di produzione."

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr ""
"AVVERTENZA: Impossibile modificare il limite del descrittore del file. "
"Eseguire come non-root?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr ""
"AVVERTENZA: Impossibile modificare il limite del numero massimo di processi. "
"Eseguire come non-root?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr ""
"AVVERTENZA: Impossibile modificare il limite di memoria. Eseguire come non-"
"root?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr ""
"Sono trascorsi %s secondi in attesa che %s venga interrotto; operazione "
"terminata"

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr ""
"Sono trascorsi %s secondi in attesa che %s venga interrotto; operazione "
"terminata"

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr "Avvertenza: impossibile eseguire ratelimit senza un client memcached"

#, python-format
msgid "method %s is not allowed."
msgstr "il metodo %s non è consentito."

msgid "no log file found"
msgstr "nessun file di log trovato"

msgid "odfpy not installed."
msgstr "odfpy non installato."

#, python-format
msgid "plotting results failed due to %s"
msgstr "tracciamento dei risultati non riuscito a causa di %s"

msgid "python-matplotlib not installed."
msgstr "python-matplotlib non installato."
swift-2.7.1/swift/locale/fr/0000775000567000056710000000000013024044470017016 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/fr/LC_MESSAGES/0000775000567000056710000000000013024044470020603 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/fr/LC_MESSAGES/swift.po0000664000567000056710000011075313024044354022307 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# Maxime COQUEREL , 2014
# Martine Marin , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-12 09:55+0000\n"
"Last-Translator: Martine Marin \n"
"Language: fr\n"
"Plural-Forms: nplurals=2; plural=(n > 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: French\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"l'utilisateur quitte le programme"

#, python-format
msgid " - %s"
msgstr "- %s"

#, python-format
msgid " - parallel, %s"
msgstr "- parallel, %s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"%(checked)d suffixe(s) vérifié(s) - %(hashed).2f%% haché(s), %(synced).2f%% "
"synchronisé(s)"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s démonté (d'après la réponse)"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions sur %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) périphériques reconstruites en %(time).2fs "
"(%(rate).2f/sec, %(remaining)s restantes)"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions répliquées en "
"%(time).2fs (%(rate).2f/sec ; %(remaining)s restante(s))"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s succès, %(failure)s échec(s)"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s : renvoi de l'erreur 503 pour %(statuses)s"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d n'est pas en cours d'exécution (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s (%s) semble s'être arrêté"

#, python-format
msgid "%s already started..."
msgstr "%s déjà démarré..."

#, python-format
msgid "%s does not exist"
msgstr "%s n'existe pas"

#, python-format
msgid "%s is not mounted"
msgstr "%s n'est pas monté"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s ont été identifié(es) comme étant démonté(es)"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s en cours d'exécution (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s : %s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s : Connexion réinitialisée par l'homologue"

#, python-format
msgid ", %s containers deleted"
msgstr ", %s conteneurs supprimés"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ", %s conteneur(s) restant(s), le cas échéant"

#, python-format
msgid ", %s containers remaining"
msgstr ", %s conteneur(s) restant(s)"

#, python-format
msgid ", %s objects deleted"
msgstr ", %s objets supprimés"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ", %s objet(s) restant(s), le cas échéant"

#, python-format
msgid ", %s objects remaining"
msgstr ", %s objet(s) restant(s)"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ", temps écoulé : %.02fs"

msgid ", return codes: "
msgstr ", codes retour : "

msgid "Account"
msgstr "Compte"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "Le compte %s n'a pas été collecté depuis %s"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "Audit de compte en mode \"once\" terminé : %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "Session d'audit de compte terminée : %.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr ""
"Tentative de réplication de %(count)d bases de données en %(time).5f "
"secondes (%(rate).5f/s)"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "Echec de l'audit pour %s : %s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "Code retour rsync non valide : %(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "Démarrer l'audit de compte en mode \"once\" (une fois)"

msgid "Begin account audit pass."
msgstr "Démarrer la session d'audit de compte."

msgid "Begin container audit \"once\" mode"
msgstr "Démarrer l'audit de conteneur en mode \"once\" (une fois)"

msgid "Begin container audit pass."
msgstr "Démarrer la session d'audit de conteneur."

msgid "Begin container sync \"once\" mode"
msgstr "Démarrer la synchronisation de conteneurs en mode \"once\" (une fois)"

msgid "Begin container update single threaded sweep"
msgstr ""
"Démarrer le balayage des mises à jour du conteneur (unité d'exécution unique)"

msgid "Begin container update sweep"
msgstr "Démarrer le balayage des mises à jour du conteneur"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "Démarrer l'audit d'objet en mode \"%s\" (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr ""
"Démarrer le balayage des mises à jour d'objet (unité d'exécution unique)"

msgid "Begin object update sweep"
msgstr "Démarrer le balayage des mises à jour d'objet"

#, python-format
msgid "Beginning pass on account %s"
msgstr "Démarrage de la session d'audit sur le compte %s"

msgid "Beginning replication run"
msgstr "Démarrage du cycle de réplication"

msgid "Broker error trying to rollback locked connection"
msgstr ""
"Erreur de courtier lors d'une tentative d'annulation d'une connexion "
"verrouillée"

#, python-format
msgid "Can not access the file %s."
msgstr "Impossible d'accéder au fichier %s."

#, python-format
msgid "Can not load profile data from %s."
msgstr "Impossible de charger des données de profil depuis %s."

#, python-format
msgid "Cannot read %s (%s)"
msgstr "Impossible de lire %s (%s)"

#, python-format
msgid "Cannot write %s (%s)"
msgstr "Impossible d'écrire %s (%s)"

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "Le client n'a pas lu les données du proxy en %s s"

msgid "Client disconnected on read"
msgstr "Client déconnecté lors de la lecture"

msgid "Client disconnected without sending enough data"
msgstr "Client déconnecté avant l'envoi de toutes les données requises"

msgid "Client disconnected without sending last chunk"
msgstr "Le client a été déconnecté avant l'envoi du dernier bloc"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr ""
"Le chemin d'accès au client %(client)s ne correspond pas au chemin stocké "
"dans les métadonnées d'objet %(meta)s"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"L'option de configuration internal_client_conf_path n'est pas définie. La "
"configuration par défaut est utilisée. Consultez les options dans internal-"
"client.conf-sample."

msgid "Connection refused"
msgstr "Connexion refusée"

msgid "Connection timeout"
msgstr "Dépassement du délai d'attente de connexion"

msgid "Container"
msgstr "Conteneur"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "Audit de conteneur en mode \"once\" terminé : %.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "Session d'audit de conteneur terminée : %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "Synchronisation de conteneurs en mode \"once\" terminée : %.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Le balayage des mises à jour du conteneur (unité d'exécution unique) est "
"terminé : %(elapsed).02fs, %(success)s succès, %(fail)s échec(s), "
"%(no_change)s inchangé(s)"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "Le balayage des mises à jour du conteneur est terminé : %.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Le balayage des mises à jour du conteneur (%(path)s) est terminé : "
"%(elapsed).02fs, %(success)s succès, %(fail)s échec(s), %(no_change)s "
"inchangé(s)"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr "Liaison impossible à %s:%s après une tentative de %s secondes"

#, python-format
msgid "Could not load %r: %s"
msgstr "Impossible de charger  %r : %s"

#, python-format
msgid "Data download error: %s"
msgstr "Erreur de téléchargement des données : %s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "Session d'audit d'unités terminée : %.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "Le répertoire %r n'est pas mappé à une stratégie valide (%s)"

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "ERREUR %(db_file)s : %(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "ERREUR %(status)d %(body)s depuis le serveur %(type)s"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "ERREUR %(status)d %(body)s depuis le serveur d'objets. Réf. : %(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr ""
"ERREUR %(status)d Attendu(s) : 100 - poursuivre depuis le serveur d'objets"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr ""
"ERREUR %(status)d Tentative d'exécution de %(method)s %(path)s à partir du "
"serveur de conteneur"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"ERREUR Echec de la mise à jour du compte avec %(ip)s:%(port)s/%(device)s "
"(une nouvelle tentative sera effectuée ultérieurement). Réponse %(status)s "
"%(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"ERREUR Echec de la mise à jour du compte. Le nombre d'hôtes et le nombre "
"d'unités diffèrent dans la demande : \"%s\" / \"%s\""

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "ERREUR Réponse incorrecte %(status)s de %(host)s"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "ERREUR Dépassement du délai de lecture du client (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"ERREUR Echec de la mise à jour du conteneur (sauvegarde pour mise à jour "
"asynchrone ultérieure) : réponse %(status)d renvoyée par %(ip)s:%(port)s/"
"%(dev)s"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"ERREUR Echec de la mise à jour du conteneur. Le nombre d'hôtes et le nombre "
"d'unités diffèrent dans la demande : \"%s\" / \"%s\""

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "ERREUR Impossible d'obtenir les infos de compte %s"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "ERREUR Impossible d'obtenir les infos de conteneur %s"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr ""
"ERREUR Incident de fermeture du fichier disque %(data_file)s : %(exc)s : "
"%(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "ERREUR Exception entraînant la déconnexion du client"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr ""
"ERREUR Exception lors du transfert de données vers des serveurs d'objets %s"

msgid "ERROR Failed to get my own IPs?"
msgstr "ERREUR Impossible d'obtenir mes propres adresses IP ?"

msgid "ERROR Insufficient Storage"
msgstr "ERREUR Stockage insuffisant"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr ""
"ERREUR L'objet %(obj)s a échoué à l'audit et a été mis en quarantaine : "
"%(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "ERREUR Problème lié à Pickle. Mise en quarantaine de %s"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "ERREUR Unité distante %s non montée"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "ERREUR lors de la synchronisation de %(db_file)s %(row)s"

#, python-format
msgid "ERROR Syncing %s"
msgstr "ERREUR lors de la synchronisation de %s"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "ERREUR lors de la tentative d'audit de %s"

msgid "ERROR Unhandled exception in request"
msgstr "ERREUR Exception non gérée dans la demande"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "ERROR __call__ error sur %(method)s %(path)s "

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr ""
"ERREUR Echec de la mise à jour du compte avec %(ip)s:%(port)s/%(device)s "
"(une nouvelle tentative sera effectuée ultérieurement)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr ""
"ERREUR Echec de la mise à jour du compte avec %(ip)s:%(port)s/%(device)s "
"(une nouvelle tentative sera effectuée ultérieurement) : "

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr ""
"ERREUR Le fichier des mises à jour asynchrones en attente porte un nom "
"inattendu %s"

msgid "ERROR auditing"
msgstr "Erreur d'audit"

#, python-format
msgid "ERROR auditing: %s"
msgstr "ERREUR d'audit : %s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"ERREUR Echec de la mise à jour du conteneur avec %(ip)s:%(port)s/%(dev)s "
"(sauvegarde pour mise à jour asynchrone ultérieure)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "Erreur de lecture de la réponse HTTP depuis %s"

#, python-format
msgid "ERROR reading db %s"
msgstr "ERREUR de lecture de la base de données %s"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "ERREUR Echec de rsync avec %(code)s : %(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "ERREUR de synchronisation de %(file)s avec le noeud %(node)s"

msgid "ERROR trying to replicate"
msgstr "ERREUR lors de la tentative de réplication"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "ERREUR lors de la tentative de nettoyage de %s"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr ""
"ERREUR liée au serveur %(type)s %(ip)s:%(port)s/%(device)s. Réf. : %(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "ERREUR de chargement des suppressions de %s : "

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "ERREUR liée au serveur distant %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr ""
"ERREUR : Echec de l'obtention des chemins d'accès aux partitions d'unité : %s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "ERREUR : Une erreur s'est produite lors de l'extraction des segments"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "ERREUR : Impossible d'accéder à %(path)s : %(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "ERREUR : Impossible d'exécuter l'audit : %s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "Erreur de %(action)s dans memcached : %(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "Erreur d'encodage UTF-8 : %s"

msgid "Error hashing suffix"
msgstr "Erreur suffixe hashing"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "Erreur dans %r liée à mtime_check_interval : %s"

#, python-format
msgid "Error limiting server %s"
msgstr "Erreur de limitation du serveur %s"

msgid "Error listing devices"
msgstr "Erreur lors du listage des unités"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "Erreur de rendu des résultats de profilage : %s"

msgid "Error parsing recon cache file"
msgstr "Erreur lors de l'analyse syntaxique du fichier cache Recon"

msgid "Error reading recon cache file"
msgstr "Erreur de lecture du fichier cache Recon"

msgid "Error reading ringfile"
msgstr "Erreur de lecture du fichier Ring"

msgid "Error reading swift.conf"
msgstr "Erreur de lecture de swift.conf"

msgid "Error retrieving recon data"
msgstr "Erreur lors de l'extraction des données Recon"

msgid "Error syncing handoff partition"
msgstr "Erreur lors de la synchronisation de la partition de transfert"

msgid "Error syncing partition"
msgstr "Erreur de synchronisation de la partition"

#, python-format
msgid "Error syncing with node: %s"
msgstr "Erreur de synchronisation avec le noeud : %s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr ""
"Une erreur est survenue lors de la tentative de régénération de %(path)s "
"policy#%(policy)d frag#%(frag_index)s"

msgid "Error: An error occurred"
msgstr "Erreur : une erreur s'est produite"

msgid "Error: missing config path argument"
msgstr "Erreur : Argument de configuration du chemin manquant"

#, python-format
msgid "Error: unable to locate %s"
msgstr "Erreur : impossible de localiser %s"

msgid "Exception dumping recon cache"
msgstr "Exception lors du vidage de cache recon"

msgid "Exception in top-level account reaper loop"
msgstr "Exception dans la boucle du collecteur de compte de niveau supérieur"

msgid "Exception in top-level replication loop"
msgstr "Exception dans la boucle de réplication de niveau supérieur"

msgid "Exception in top-levelreconstruction loop"
msgstr "Exception dans la boucle de reconstruction de niveau supérieur"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "Exception lors de la suppression du conteneur %s %s"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "Exception lors de la suppression de l'objet %s %s %s"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Exception liée à %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Exception with account %s"
msgstr "Exception avec le compte %s"

#, python-format
msgid "Exception with containers for account %s"
msgstr "Exception avec les conteneurs pour le compte %s"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr ""
"Exception liée aux objets pour le conteneur %(container)s et le compte "
"%(account)s"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "Attendus(s) : 100 - poursuivre sur %s"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr ""
"Suivi de la chaîne CNAME pour %(given_domain)s jusqu'à %(found_domain)s"

msgid "Found configs:"
msgstr "Configurations trouvées :"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr ""
"Le premier mode de transferts contient d'autres transferts. Abandon de la "
"session de réplication en cours."

msgid "Host unreachable"
msgstr "Hôte inaccessible"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "Session d'audit incomplète sur le compte %s"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "Format X-Container-Sync-To %r non valide"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "Hôte %r non valide dans X-Container-Sync-To"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "Entrée en attente non valide %(file)s : %(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "Réponse %(resp)s non valide de %(full_path)s"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "Réponse %(resp)s non valide de %(ip)s"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"Schéma %r non valide dans X-Container-Sync-To. Doit être \"//\", \"http\" ou "
"\"https\"."

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "Arrêt de l'opération rsync à exécution longue : %s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "Echec du chargement du fichier JSON depuis %s (%s)"

msgid "Lockup detected.. killing live coros."
msgstr "Blocage détecté. Arrêt des coroutines actives."

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "%(given_domain)s mappé avec %(found_domain)s"

#, python-format
msgid "No %s running"
msgstr "Pas de %s en cours d'exécution"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "Aucun noeud final de cluster pour %r %r"

#, python-format
msgid "No permission to signal PID %d"
msgstr "Aucun droit pour signaler le PID %d"

#, python-format
msgid "No policy with index %s"
msgstr "Aucune statégie avec un index de type %s"

#, python-format
msgid "No realm key for %r"
msgstr "Aucune clé de domaine pour %r"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "Plus d'espace disponible sur le périphérique pour %s (%s)"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr ""
"Noeud marqué avec limite d'erreurs (error_limited) %(ip)s:%(port)s "
"(%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr ""
"Le nombre de serveurs d'objets reconnus n'est pas suffisant (%d obtenus)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr ""
"Introuvable : %(sync_from)r => %(sync_to)r                       - objet "
"%(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "Aucun élément reconstruit pendant %s secondes."

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "Aucun élément répliqué pendant %s secondes."

msgid "Object"
msgstr "Objet"

msgid "Object PUT"
msgstr "Opération d'insertion (PUT) d'objet"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr ""
"L'opération d'insertion (PUT) d'objet a renvoyé l'erreur 202 pour 409 : "
"%(req_timestamp)s <= %(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr ""
"L'opération d'insertion (PUT) d'objet a renvoyé l'erreur 412, %(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"L'audit d'objet (%(type)s) en mode \"%(mode)s\" est terminé : "
"%(elapsed).02fs. Nombre total mis en quarantaine : %(quars)d. Nombre total "
"d'erreurs : %(errors)d. Nombre total de fichiers/sec : %(frate).2f. Nombre "
"total d'octets/sec : %(brate).2f. Durée d'audit : %(audit).2f. Taux : "
"%(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Audit d'objet (%(type)s). Depuis %(start_time)s, localement : %(passes)d "
"succès. %(quars)d en quarantaine. %(errors)d erreurs. Fichiers/sec : "
"%(frate).2f. Octets/sec : %(brate).2f. Durée totale : %(total).2f. Durée "
"d'audit : %(audit).2f. Taux : %(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "Statistiques de l'audit d'objet : %s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr ""
"La reconstruction d'objet en mode once (une fois) est terminée. (%.02f "
"minutes)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "Reconstruction d'objet terminée. (%.02f minutes)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr ""
"La réplication d'objet en mode once (une fois) est terminée. (%.02f minutes)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "Réplication d'objet terminée. (%.02f minutes)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr ""
"Des serveurs d'objets ont renvoyé %s  balises d'entité (etag) non "
"concordantes"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Le balayage des mises à jour d'objet (unité d'exécution unique) est "
"terminé : %(elapsed).02fs, %(success)s succès, %(fail)s échec(s)"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "Le balayage des mises à jour d'objet est terminé : %.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Le balayage des mises à jour d'objet (%(device)s) est terminé : "
"%(elapsed).02fs, %(success)s succès, %(fail)s échec(s)"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr ""
"Paramètres, requêtes et fragments non autorisés dans X-Container-Sync-To"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr ""
"Temps de partition : maximum %(max).4fs, minimum %(min).4fs, moyenne "
"%(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr "Début de session. %s conteneur(s) possible(s). %s objet(s) possible(s)"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "Session terminée dans %ds. %d objet(s) arrivé(s) à expiration"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "Session jusqu'à %ds. %d objet(s) arrivé(s) à expiration"

msgid "Path required in X-Container-Sync-To"
msgstr "Chemin requis dans X-Container-Sync-To"

#, python-format
msgid "Problem cleaning up %s"
msgstr "Problème lors du nettoyage de %s"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "Problème lors du nettoyage de %s (%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr ""
"Un problème est survenu lors de l'écriture du fichier d'état durable %s (%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "Erreur de profilage : %s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"%(hsh_path)s n'est pas un répertoire et a donc été mis en quarantaine dans "
"%(quar_path)s"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"%(object_path)s n'est pas un répertoire et a donc été mis en quarantaine "
"dans %(quar_path)s"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "Mise en quarantaine de %s dans %s en raison de la base de données %s"

#, python-format
msgid "Quarantining DB %s"
msgstr "Mise en quarantaine de la base de données %s"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""
"Journal de mise en veille Ratelimit : %(sleep)s pour %(account)s/"
"%(container)s/%(object)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "%(remove)d bases de données ont été retirées"

#, python-format
msgid "Removing %s objects"
msgstr "Suppression de %s objets"

#, python-format
msgid "Removing partition: %s"
msgstr "Suppression de la partition : %s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr ""
"Supression du fichier pid %(pid_file)s, comportant un PID incorrect %(pid)d"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "Suppression du fichier pid %s  comportant un PID non valide"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Suppression du fichier pid %s périmé"

msgid "Replication run OVER"
msgstr "Le cycle de réplication est terminé"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "Renvoi de 497 en raison du placement sur liste noire : %s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"Renvoi de 498 pour %(meth)s jusqu'à %(acc)s/%(cont)s/%(obj)s . Ratelimit "
"(Max Sleep) %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr ""
"Changement d'anneau détecté. Abandon de la session de reconstruction en "
"cours."

msgid "Ring change detected. Aborting current replication pass."
msgstr ""
"Changement d'anneau détecté. Abandon de la session de réplication en cours."

#, python-format
msgid "Running %s once"
msgstr "Exécution unique de %s"

msgid "Running object reconstructor in script mode."
msgstr "Exécution du reconstructeur d'objet en mode script."

msgid "Running object replicator in script mode."
msgstr "Exécution du réplicateur d'objet en mode script."

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "PID %s du signal : %s signal : %s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"Depuis %(time)s : %(sync)s synchronisé(s) [%(delete)s suppression(s), "
"%(put)s insertion(s)], %(skip)s ignoré(s), %(fail)s échec(s)"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"Depuis %(time)s : audits de compte : %(passed)s succès, %(failed)s échec(s)"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr ""
"Depuis %(time)s : audits de conteneur : %(pass)s succès, %(fail)s échec(s)"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "%(device)s est ignoré car il n'est pas monté"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "%s est ignoré car il n'est pas monté"

#, python-format
msgid "Starting %s"
msgstr "Démarrage de %s"

msgid "Starting object reconstruction pass."
msgstr "Démarrage de la session de reconstruction d'objet."

msgid "Starting object reconstructor in daemon mode."
msgstr "Démarrage du reconstructeur d'objet en mode démon."

msgid "Starting object replication pass."
msgstr "Démarrage de la session de réplication d'objet."

msgid "Starting object replicator in daemon mode."
msgstr "Démarrage du réplicateur d'objet en mode démon."

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "Succès de rsync pour %(src)s dans %(dst)s (%(time).03f)"

msgid "The file type are forbidden to access!"
msgstr "Accès interdit au type de fichier"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"Le total %(key)s du conteneur (%(total)s) ne correspond pas à la somme de "
"%(key)s dans les différentes règles (%(sum)s)"

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "Délai d'attente de %(action)s dans memcached : %(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr ""
"Exception liée à un dépassement de délai concernant %(ip)s:%(port)s/"
"%(device)s"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "Tentative d'exécution de %(method)s %(path)s"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "Tentative d'obtention (GET) de %(full_path)s"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "Tentative d'obtention du statut de l'opération PUT %s sur %s"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "Tentative d'obtention du statut final de l'opération PUT sur %s"

msgid "Trying to read during GET"
msgstr "Tentative de lecture pendant une opération GET"

msgid "Trying to read during GET (retrying)"
msgstr "Tentative de lecture pendant une opération GET (nouvelle tentative)"

msgid "Trying to send to client"
msgstr "Tentative d'envoi au client"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "Tentative de synchronisation de suffixes à l'aide de %s"

#, python-format
msgid "Trying to write to %s"
msgstr "Tentative d'écriture sur %s"

msgid "UNCAUGHT EXCEPTION"
msgstr "EXCEPTION NON INTERCEPTEE"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "Impossible de trouver la section de configuration %s dans %s"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr ""
"Impossible de charger le client interne depuis la configuration : %r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr ""
"Impossible de localiser %s dans libc. Laissé comme action nulle (no-op)."

#, python-format
msgid "Unable to locate config for %s"
msgstr "Impossible de trouver la configuration pour %s"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "Impossible de trouver la configuration portant le numéro %s pour %s"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr ""
"Impossible de localiser fallocate, posix_fallocate dans libc. Laissé comme "
"action nulle (no-op)."

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "Impossible d'exécuter fsync() dans le répertoire %s : %s"

#, python-format
msgid "Unable to read config from %s"
msgstr "Impossible de lire le fichier de configuration depuis %s"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "Non autorisé : %(sync_from)r => %(sync_to)r"

#, python-format
msgid "Unexpected response: %s"
msgstr "Réponse inattendue : %s"

msgid "Unhandled exception"
msgstr "Exception non prise en charge"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr ""
"Une exception inconnue s'est produite pendant une opération GET : "
"%(account)r %(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "Echec du rapport de mise à jour pour %(container)s %(dbfile)s"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "Rapport de mise à jour envoyé pour %(container)s %(dbfile)s"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"AVERTISSEMENT : SSL ne doit être activé qu'à des fins de test. Utilisez la "
"terminaison SSL externe pour un déploiement en production."

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr ""
"AVERTISSEMENT : Impossible de modifier la limite de descripteur de fichier. "
"Exécution en tant que non root ?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr ""
"AVERTISSEMENT : Impossible de modifier la limite maximale de processus. "
"Exécution en tant que non root ?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr ""
"AVERTISSEMENT : Impossible de modifier la limite de mémoire. Exécution en "
"tant que non root ?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "Attente de %s secondes pour la fin de %s ; abandon..."

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr "Attente de %s secondes pour la fin de %s . Arrêt en cours..."

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr "Avertissement : impossible d'appliquer Ratelimit sans client memcached"

#, python-format
msgid "method %s is not allowed."
msgstr "Méthode %s non autorisée."

msgid "no log file found"
msgstr "Pas de fichier log trouvé"

msgid "odfpy not installed."
msgstr "odfpy n'est pas installé."

#, python-format
msgid "plotting results failed due to %s"
msgstr "Echec du traçage des résultats. Cause : %s"

msgid "python-matplotlib not installed."
msgstr "python-matplotlib non installé."
swift-2.7.1/swift/locale/zh_CN/0000775000567000056710000000000013024044470017410 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/zh_CN/LC_MESSAGES/0000775000567000056710000000000013024044470021175 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/zh_CN/LC_MESSAGES/swift.po0000664000567000056710000010013013024044354022665 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# Pearl Yajing Tan(Seagate Tech) , 2014
# Linda , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-29 02:44+0000\n"
"Last-Translator: Linda \n"
"Language: zh-CN\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Chinese (China)\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"用户退出"

#, python-format
msgid " - %s"
msgstr "- %s"

#, python-format
msgid " - parallel, %s"
msgstr "-平行,%s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr "已检查 %(checked)d 后缀 - 已散列 %(hashed).2f%%,已同步 %(synced).2f%%"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s 的回应为未安装"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(device)d/%(dtotal)d (%(dpercentage).2f%%) 设备的 %(reconstructed)d/"
"%(total)d (%(percentage).2f%%) 分区已于 %(time).2fs 重构(%(rate).2f/秒,剩"
"余  %(remaining)s)"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"已复制 %(time).2fs (%(rate).2f/sec 中的 %(replicated)d/"
"%(total)d(%(percentage).2f%%) 分区,%(remaining)s 剩余)"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s 成功,%(failure)s 失败"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s 返回 %(statuses)s 的 503"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d 未在运行 (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s (%s) 显示已停止"

#, python-format
msgid "%s already started..."
msgstr "%s 已启动..."

#, python-format
msgid "%s does not exist"
msgstr "%s 不存在"

#, python-format
msgid "%s is not mounted"
msgstr "未安装 %s"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s 响应为未安装"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s 正在运行 (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s:%s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s:已由同级重置连接"

#, python-format
msgid ", %s containers deleted"
msgstr ",删除容器 %s"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ",可能剩余容器 %s"

#, python-format
msgid ", %s containers remaining"
msgstr ",剩余容器 %s"

#, python-format
msgid ", %s objects deleted"
msgstr ",删除对象 %s"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ",可能剩余对象 %s"

#, python-format
msgid ", %s objects remaining"
msgstr ",剩余对象 %s"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ",耗时:%.02fs"

msgid ", return codes: "
msgstr ",返回代码:"

msgid "Account"
msgstr "帐号"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "账号 %s 自 %s 起未被获取"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "帐户审计“once”模式完成: %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "帐户审计完成:%.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr "已尝试复制 %(time).5f seconds (%(rate).5f/s) 中的 %(count)d dbs"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "审计失败 %s:%s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "Bad rsync 返还代码:%(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "开始帐户审计“once”模式"

msgid "Begin account audit pass."
msgstr "开始帐户审计通过。"

msgid "Begin container audit \"once\" mode"
msgstr "开始容器审计“once”模式"

msgid "Begin container audit pass."
msgstr "开始通过容器审计。"

msgid "Begin container sync \"once\" mode"
msgstr "开始容器同步“once”模式"

msgid "Begin container update single threaded sweep"
msgstr "开始容器更新单线程扫除"

msgid "Begin container update sweep"
msgstr "开始容器更新扫除"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "开始对象审计“%s”模式 (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr "开始对象更新单线程扫除"

msgid "Begin object update sweep"
msgstr "开始对象更新扫除"

#, python-format
msgid "Beginning pass on account %s"
msgstr "开始传递帐户 %s"

msgid "Beginning replication run"
msgstr "开始复制运行"

msgid "Broker error trying to rollback locked connection"
msgstr "尝试回滚已锁定的链接时发生代理程序错误"

#, python-format
msgid "Can not access the file %s."
msgstr "无法访问文件 %s。"

#, python-format
msgid "Can not load profile data from %s."
msgstr "无法从 %s 载入概要分析数据。"

#, python-format
msgid "Cannot read %s (%s)"
msgstr "无法读取 %s (%s)"

#, python-format
msgid "Cannot write %s (%s)"
msgstr "无法写入 %s (%s)"

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "客户端尚未从 %ss 中的代理读取"

msgid "Client disconnected on read"
msgstr "读取时客户端断开连接"

msgid "Client disconnected without sending enough data"
msgstr "在发送足够数据前客户机断开了连接"

msgid "Client disconnected without sending last chunk"
msgstr "客户机已断开连接而未发送最后一个数据块"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr "客户机路径 %(client)s 与对象元数据 %(meta)s 中存储的路径不匹配"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"未定义配置选项 internal_client_conf_path。正在使用缺省配置。请参阅 internal-"
"client.conf-sample 以了解各个选项"

msgid "Connection refused"
msgstr "连接被拒绝"

msgid "Connection timeout"
msgstr "连接超时"

msgid "Container"
msgstr "容器"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "容器审计“once”模式完成:%.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "容器审计通过完成: %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "容器同步“once”模式完成:%.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"容器更新单线程清理完成:%(elapsed).02fs,%(success)s 成功,%(fail)s 失"
"败,%(no_change)s 无更改"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "容器更新扫除完成:%.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"%(path)s 容器更新清理完成:%(elapsed).02fs,%(success)s 成功,%(fail)s 失"
"败,%(no_change)s 无更改"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr "尝试过 %s 秒后仍无法捆绑至 %s:%s"

#, python-format
msgid "Could not load %r: %s"
msgstr "无法载入 %r:%s"

#, python-format
msgid "Data download error: %s"
msgstr "数据下载错误:%s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "设备通过完成:%.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "目录 %r 未映射至有效策略 (%s)"

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "错误 %(db_file)s:%(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "错误:来自 %(type)s 服务器的 %(status)d %(body)s "

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "错误 %(status)d %(body)s 来自对象服务器 re:%(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "发生 %(status)d 错误,需要 100 - 从对象服务器继续"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr "尝试从容器服务器执行 %(method)s %(path)s 时发生 %(status)d 错误"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"错误:帐号更新失败:%(ip)s:%(port)s/%(device)s(稍后尝试):回应 %(status)s "
"%(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr "错误:帐号更新失败:本机数量与请求的设备数量不同:“%s”对“%s”"

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "错误:来自 %(host)s 的错误响应 %(status)s"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "错误:客户机读取超时 (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"错误:容器更新失败(稍后保存异步更新):来自%(ip)s:%(port)s/%(dev)s 的 "
"%(status)d 响应"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr "错误:容器更新失败:主机数量和设备数量不符合请求:“%s”对“%s”"

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "错误:无法获取帐户信息 %s"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "错误:无法获取容器信息 %s"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr "错误:磁盘文件 %(data_file)s 关闭失败:%(exc)s:%(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "错误:异常导致客户机断开连接"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr "错误:向对象服务器 %s 传输数据时发生异常"

msgid "ERROR Failed to get my own IPs?"
msgstr "错误:未能获取我自己的 IP?"

msgid "ERROR Insufficient Storage"
msgstr "错误:存储空间不足"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr "错误:对象 %(obj)s 审计失败并被隔离:%(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "错误 Pickle 问题,隔离 %s"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "错误:未安装远程驱动 %s"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "同步 %(db_file)s %(row)s 时出错"

#, python-format
msgid "ERROR Syncing %s"
msgstr "同步时发生错误 %s"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "错误:尝试开始审计 %s"

msgid "ERROR Unhandled exception in request"
msgstr "错误:未处理请求中的异常"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "%(method)s %(path)s 出现错误__call__ error"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr "错误:帐号更新失败 %(ip)s:%(port)s/%(device)s(稍后尝试)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr "错误:帐号更新失败 %(ip)s:%(port)s/%(device)s (稍后尝试):"

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr "同步具有意外名称的暂挂文件 %s"

msgid "ERROR auditing"
msgstr "审计时发生错误"

#, python-format
msgid "ERROR auditing: %s"
msgstr "审计错误:%s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr "错误:容器更新失败 %(ip)s:%(port)s/%(dev)s(正在保存,稍后同步更新)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "从 %s 读取 HTTP 响应时出错"

#, python-format
msgid "ERROR reading db %s"
msgstr "读取数据库 %s 时出错"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "错误:rsync 失败 %(code)s:%(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "错误:同步具有节点 %(node)s 的 %(file)s "

msgid "ERROR trying to replicate"
msgstr "尝试复制时发生错误"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "尝试清理 %s 时出错"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr "%(type)s 服务器发生错误 %(ip)s:%(port)s/%(device)s 响应:%(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "从 %s 载入压缩时出错"

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "远程服务器发生错误 %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr "错误:未能获取驱动器分区的路径:%s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "错误:检索段时出错"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "出错,无法访问 %(path)s:%(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "错误:无法执行审计:%s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "内存高速缓存时出错 %(action)s:%(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "UTF-8 编码错误:%s"

msgid "Error hashing suffix"
msgstr "执行散列后缀时发生错误"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "带有 mtime_check_interval 的 %r 中出现错误:%s"

#, python-format
msgid "Error limiting server %s"
msgstr "限制服务器 %s 时出错 "

msgid "Error listing devices"
msgstr "列示设备时出错"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "呈现概要分析结果时出错:%s"

msgid "Error parsing recon cache file"
msgstr "解析 recon 高速缓存文件时出错"

msgid "Error reading recon cache file"
msgstr "读取 recon 高速缓存文件时出错"

msgid "Error reading ringfile"
msgstr "读取 ringfile 时出错"

msgid "Error reading swift.conf"
msgstr "读取 swift.conf 时出错"

msgid "Error retrieving recon data"
msgstr "检索 recon 数据时出错"

msgid "Error syncing handoff partition"
msgstr "同步切换分区时发生错误"

msgid "Error syncing partition"
msgstr "同步分区时发生错误"

#, python-format
msgid "Error syncing with node: %s"
msgstr "同步节点 %s 时发生错误"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr "尝试重建 %(path)s 策略 #%(policy)d frag#%(frag_index)s 时出错"

msgid "Error: An error occurred"
msgstr "错误:发生了错误"

msgid "Error: missing config path argument"
msgstr "错误:缺少配置路径参数"

#, python-format
msgid "Error: unable to locate %s"
msgstr "错误:无法找到 %s"

msgid "Exception dumping recon cache"
msgstr "执行 dump recon 高速缓存时出现异常"

msgid "Exception in top-level account reaper loop"
msgstr "顶级帐户 reaper 环中出现异常"

msgid "Exception in top-level replication loop"
msgstr "顶级复制环中出现异常"

msgid "Exception in top-levelreconstruction loop"
msgstr " top-levelreconstruction 环中发生异常"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "删除容器时出现异常 %s %s"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "删除对象时出现异常 %s %s %s"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s 出现异常"

#, python-format
msgid "Exception with account %s"
msgstr "帐户 %s 出现异常"

#, python-format
msgid "Exception with containers for account %s"
msgstr "帐户 %s 的容器出现异常"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr "帐户 %(account)s 的容器%(container)s 的对象出现异常"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "期望:100-在 %s 上继续"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr "跟随 CNAME 链从 %(given_domain)s 到 %(found_domain)s"

msgid "Found configs:"
msgstr "找到配置:"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr "Handoffs 优先方式仍有 handoffs。正在中止当前复制过程。"

msgid "Host unreachable"
msgstr "无法连接到主机"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "传递帐户 %s 未完成"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "无效的 X-Container-Sync-To 格式 %r"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "X-Container-Sync-To 中无效的主机 %r"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "无效的暂挂输入 %(file)s:%(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "从 %(full_path)s 返回了无效响应 %(resp)s"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "来自 %(ip)s 的无效回应 %(resp)s"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr "在 X-Container-Sync-To 中 %r 是无效的方案,须为“//”、“http”或“https”。"

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "终止长时间运行同步:%s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "从 %s 载入 JSON 失败 (%s)"

msgid "Lockup detected.. killing live coros."
msgstr "检测到锁定。终止实时 coros"

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "将 %(given_domain)s 映射到 %(found_domain)s"

#, python-format
msgid "No %s running"
msgstr "没有 %s 正在运行"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "没有 %r %r 的集群端点"

#, python-format
msgid "No permission to signal PID %d"
msgstr "没有权限发送信号 PID %d"

#, python-format
msgid "No policy with index %s"
msgstr "没有具有索引 %s 的策略"

#, python-format
msgid "No realm key for %r"
msgstr "没有 %r 的域键"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "设备上没有可容纳 %s (%s) 的空间"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "节点错误极限 %(ip)s:%(port)s (%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "没有足够的对象服务器应答(收到 %d)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr "未找到 %(sync_from)r => %(sync_to)r - object %(obj_name)r"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "过去 %s 秒未重构任何对象。"

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "%s 秒无复制"

msgid "Object"
msgstr "对象"

msgid "Object PUT"
msgstr "对象 PUT"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr ""
"对象 PUT 正在返回 202(对于 409):%(req_timestamp)s 小于或等于 "
"%(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "对象 PUT 返还 412,%(statuses)r "

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"对象审计 (%(type)s) “%(mode)s”模式完成:%(elapsed).02fs。隔离总数:"
"%(quars)d,错误总数:%(errors)d,文件/秒总和:%(frate).2f,字节/秒总和:"
"%(brate).2f,审计时间:%(audit).2f,速率:%(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"对象审计 (%(type)s)。自 %(start_time)s 开始:本地:%(passes)d 通"
"过,%(quars)d 隔离,%(errors)d 个错误,文件/秒:%(frate).2f,字节/秒:"
"%(brate).2f,总时间:%(total).2f,审计时间:%(audit).2f,速率:"
"%(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "对象审计统计信息:%s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "对象重构完成(一次)。(%.02f 分钟)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "对象重构完成。(%.02f 分钟)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "对象复制完成(一次)。(%.02f 分钟)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "对象复制完成。(%.02f 分钟)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr "对象服务器返回了 %s 不匹配的 etags"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"对象更新单线程清理完成:%(elapsed).02fs,%(success)s 个成功,%(fail)s 个失败"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "对象更新扫除完成:%.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"%(device)s 的对象更新清理完成:%(elapsed).02fs,%(success)s 个成功,%(fail)s "
"个失败"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr "在 X-Container-Sync-To 中,不允许使用变量、查询和碎片"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr "分区次数:最大值 %(max).4fs、最小值 %(min).4fs、中间值 %(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr "开始通过;%s 可能的容器;%s 可能的对象"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "%ds 中的通过完成;%d 对象过期"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "%ds 目前通过;%d 对象过期"

msgid "Path required in X-Container-Sync-To"
msgstr "在 X-Container-Sync-To 中路径是必须的"

#, python-format
msgid "Problem cleaning up %s"
msgstr "问题清除 %s"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "清除 %s (%s) 时发生了问题"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "编写可持续状态文件 %s (%s) 时发生了问题"

#, python-format
msgid "Profiling Error: %s"
msgstr "概要分析时出错:%s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr "隔离 %(hsh_path)s 和 %(quar_path)s,因为它不是目录"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr "将 %(object_path)s 隔离至 %(quar_path)s,因为它不是目录"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "隔离 %s 到 %s,因为 %s 数据库"

#, python-format
msgid "Quarantining DB %s"
msgstr "隔离数据库 %s"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr "速率限制休眠日志:%(account)s/%(container)s/%(object)s 的 %(sleep)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "已删除 %(remove)d dbs"

#, python-format
msgid "Removing %s objects"
msgstr "正在移除 %s 个对象"

#, python-format
msgid "Removing partition: %s"
msgstr "移除分区:%s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr "移除 pid 文件 %(pid_file)s 失败,pid %(pid)d 不正确"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "正在移除带有无效 pid 的 pid 文件 %s"

#, python-format
msgid "Removing stale pid file %s"
msgstr "移除原有 pid 文件%s"

msgid "Replication run OVER"
msgstr "复制运行结束"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "返回 497,因为黑名单:%s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"返回 %(meth)s 的 498 至 %(acc)s/%(cont)s/%(obj)s。速率限制 (Max Sleep) %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr "检测到环更改。正在中止当前重构过程。"

msgid "Ring change detected. Aborting current replication pass."
msgstr "检测到环更改。正在中止当前复制过程。"

#, python-format
msgid "Running %s once"
msgstr "运行 %s 一次"

msgid "Running object reconstructor in script mode."
msgstr "正以脚本方式运行对象重构程序。"

msgid "Running object replicator in script mode."
msgstr "以脚本模式运行对象复制程序"

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "信号 %s  pid:%s  信号:%s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"自 %(time)s 起:已同步 %(sync)s [%(delete)s deletes, %(put)s puts],已跳过 "
"%(skip)s,%(fail)s 失败"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr "自 %(time)s 开始:帐户审计:%(passed)s 通过审计,%(failed)s 审计失败"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr "自 %(time)s 起:容器审计:%(pass)s 通过审计, %(fail)s 审计失败"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "跳过 %(device)s,因为它为安装"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "跳过 %s,因为它未安装"

#, python-format
msgid "Starting %s"
msgstr "启动 %s"

msgid "Starting object reconstruction pass."
msgstr "正在启动对象重构过程。"

msgid "Starting object reconstructor in daemon mode."
msgstr "正以守护程序方式启动对象重构程序。"

msgid "Starting object replication pass."
msgstr "开始对象复制过程。"

msgid "Starting object replicator in daemon mode."
msgstr "以守护模式开始对象复制程序。"

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "成功的异步 %(src)s 于 %(dst)s (%(time).03f)"

msgid "The file type are forbidden to access!"
msgstr "禁止访问该文件类型!"

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"容器 (%(total)s) 的总计 %(key)s 与跨策略 (%(sum)s) 的 %(key)s 总数不匹配"

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "内存高速缓存时超时 %(action)s:%(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s 发生超时异常"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "尝试执行%(method)s %(path)s"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "正尝试获取 %(full_path)s"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "正尝试将 PUT 的 %s 状态发送至 %s"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "尝试获取 PUT 至 %s 的最后状态"

msgid "Trying to read during GET"
msgstr "执行 GET 时尝试读取"

msgid "Trying to read during GET (retrying)"
msgstr "执行 GET 时尝试读取(重试)"

msgid "Trying to send to client"
msgstr "尝试发送到客户端"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "正尝试使后缀与 %s 同步"

#, python-format
msgid "Trying to write to %s"
msgstr "尝试写入 %s"

msgid "UNCAUGHT EXCEPTION"
msgstr "未捕获的异常"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "无法在 %s 中找到 %s 配置部分"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr "无法从配置装入内部客户机:%r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr "无法找到 libc 中的 %s。保留为 no-op。"

#, python-format
msgid "Unable to locate config for %s"
msgstr "找不到 %s 的配置"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "找不到 %s 的配置编号 %s"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr "无法找到 fallocate、posix_fallocate。保存为 no-op。"

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "无法在目录 %s 上执行 fsync():%s"

#, python-format
msgid "Unable to read config from %s"
msgstr "无法从 %s 读取配置"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "未授权 %(sync_from)r => %(sync_to)r"

#, python-format
msgid "Unexpected response: %s"
msgstr "意外响应:%s"

msgid "Unhandled exception"
msgstr "未处理的异常"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr "尝试获取 %(account)r %(container)r %(object)r 时发生未知异常"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "由于 %(container)s %(dbfile)s,更新报告失败"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "更新报告发至 %(container)s %(dbfile)s"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr "警告:SSL 仅可以做测试使用。产品部署时,请使用外部 SSL 终端"

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr "警告:无法修改文件描述限制。是否按非 root 用户运行?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr "警告:无法修改最大进程限制。是否按非 root 用户运行?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr "警告:无法修改内存限制。是否按非 root 用户运行?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "已消耗 %s 等待 %s 停止;放弃"

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr "已消耗 %s 秒等待 %s 终止;正在终止"

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr "警告:没有内存高速缓存客户端无法进行速率限制"

#, python-format
msgid "method %s is not allowed."
msgstr "不允许方法 %s。"

msgid "no log file found"
msgstr "找不到日志文件"

msgid "odfpy not installed."
msgstr "odfpy 未安装。"

#, python-format
msgid "plotting results failed due to %s"
msgstr "绘制结果图标时失败,因为 %s"

msgid "python-matplotlib not installed."
msgstr "python-matplotlib 未安装。"
swift-2.7.1/swift/locale/es/0000775000567000056710000000000013024044470017016 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/es/LC_MESSAGES/0000775000567000056710000000000013024044470020603 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/locale/es/LC_MESSAGES/swift.po0000664000567000056710000010737213024044354022312 0ustar  jenkinsjenkins00000000000000# Translations template for swift.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the swift project.
#
# Translators:
# Eugènia Torrella , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift 2.7.1.dev7\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-28 15:21+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-28 08:44+0000\n"
"Last-Translator: Eugènia Torrella \n"
"Language: es\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Spanish\n"

msgid ""
"\n"
"user quit"
msgstr ""
"\n"
"el usuario ha salido"

#, python-format
msgid " - %s"
msgstr " - %s"

#, python-format
msgid " - parallel, %s"
msgstr " - paralelo, %s"

#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% synced"
msgstr ""
"%(checked)d sufijos comprobados - %(hashed).2f%% con hash, %(synced).2f%% "
"sincronizados"

#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s han respondido como desmontados"

#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#, python-format
msgid ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) partitions of %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) devices reconstructed in %(time).2fs "
"(%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(reconstructed)d/%(total)d (%(percentage).2f%%) particiones de %(device)d/"
"%(dtotal)d (%(dpercentage).2f%%) dispositivos reconstruidos en %(time).2fs "
"(%(rate).2f/sec, %(remaining)s restantes)"

#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""
"%(replicated)d/%(total)d (%(percentage).2f%%) particiones replicadas en "
"%(time).2fs (%(rate).2f/segundo, %(remaining)s restantes)"

#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s éxitos, %(failure)s fallos"

#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s devuelve 503 para %(statuses)s"

#, python-format
msgid "%s #%d not running (%s)"
msgstr "%s #%d no está en ejecución (%s)"

#, python-format
msgid "%s (%s) appears to have stopped"
msgstr "%s (%s) parece haberse detenido"

#, python-format
msgid "%s already started..."
msgstr "%s ya está iniciado..."

#, python-format
msgid "%s does not exist"
msgstr "%s no existe"

#, python-format
msgid "%s is not mounted"
msgstr "%s no está montado"

#, python-format
msgid "%s responded as unmounted"
msgstr "%s ha respondido como desmontado"

#, python-format
msgid "%s running (%s - %s)"
msgstr "%s en ejecución (%s - %s)"

#, python-format
msgid "%s: %s"
msgstr "%s: %s"

#, python-format
msgid "%s: Connection reset by peer"
msgstr "%s: Restablecimiento de conexión por igual"

#, python-format
msgid ", %s containers deleted"
msgstr ", %s contenedores suprimidos"

#, python-format
msgid ", %s containers possibly remaining"
msgstr ", %s contenedores posiblemente restantes"

#, python-format
msgid ", %s containers remaining"
msgstr ", %s contenedores restantes"

#, python-format
msgid ", %s objects deleted"
msgstr ", %s objetos suprimidos"

#, python-format
msgid ", %s objects possibly remaining"
msgstr ", %s objetos posiblemente restantes"

#, python-format
msgid ", %s objects remaining"
msgstr ", %s objectos restantes"

#, python-format
msgid ", elapsed: %.02fs"
msgstr ", transcurrido: %.02fs"

msgid ", return codes: "
msgstr ", códigos de retorno:"

msgid "Account"
msgstr "Cuenta"

#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "La cuenta %s no se ha cosechado desde %s"

#, python-format
msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "Auditoría de cuenta en modalidad de \"una vez\" finalizada: %.02fs"

#, python-format
msgid "Account audit pass completed: %.02fs"
msgstr "Paso de auditoría de cuenta finalizado: %.02fs"

#, python-format
msgid ""
"Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr ""
"Se han intentado replicar %(count)d bases de datos en %(time).5f segundos "
"(%(rate).5f/s)"

#, python-format
msgid "Audit Failed for %s: %s"
msgstr "Ha fallado la auditoría para %s: %s"

#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "Código de retorno de resincronización erróneo: %(ret)d <- %(args)s"

msgid "Begin account audit \"once\" mode"
msgstr "Comenzar auditoría de cuenta en modalidad de \"una vez\""

msgid "Begin account audit pass."
msgstr "Comenzar a pasar la auditoría de cuenta."

msgid "Begin container audit \"once\" mode"
msgstr "Comenzar auditoría de contenedor en modalidad de \"una vez\""

msgid "Begin container audit pass."
msgstr "Comenzar a pasar la auditoría de contenedor."

msgid "Begin container sync \"once\" mode"
msgstr "Comenzar sincronización de contenedor en modalidad de \"una vez\""

msgid "Begin container update single threaded sweep"
msgstr "Comenzar el barrido de hebra única de actualización del contenedor"

msgid "Begin container update sweep"
msgstr "Comenzar el barrido de actualización del contenedor"

#, python-format
msgid "Begin object audit \"%s\" mode (%s%s)"
msgstr "Comenzar auditoría de objetos en modalidad \"%s\" (%s%s)"

msgid "Begin object update single threaded sweep"
msgstr "Comenzar el barrido de hebra única de actualización del objeto"

msgid "Begin object update sweep"
msgstr "Comenzar el barrido de actualización del objeto"

#, python-format
msgid "Beginning pass on account %s"
msgstr "Iniciando el paso en la cuenta %s"

msgid "Beginning replication run"
msgstr "Iniciando la ejecución de la replicación"

msgid "Broker error trying to rollback locked connection"
msgstr "Error de intermediario al intentar retrotraer una conexión bloqueada"

#, python-format
msgid "Can not access the file %s."
msgstr "No se puede acceder al archivo %s."

#, python-format
msgid "Can not load profile data from %s."
msgstr "No se pueden cargar los datos de perfil desde %s."

#, python-format
msgid "Cannot read %s (%s)"
msgstr "No se puede leer %s (%s)"

#, python-format
msgid "Cannot write %s (%s)"
msgstr "No se puede escribir en %s (%s)"

#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "El cliente pudo realizar la lectura desde el proxy en %ss"

msgid "Client disconnected on read"
msgstr "El cliente se ha desconectado durante la lectura"

msgid "Client disconnected without sending enough data"
msgstr "El cliente se ha desconectado sin enviar suficientes datos"

msgid "Client disconnected without sending last chunk"
msgstr "El cliente se ha desconectado sin enviar el último fragmento"

#, python-format
msgid ""
"Client path %(client)s does not match path stored in object metadata %(meta)s"
msgstr ""
"La vía de acceso de cliente %(client)s no coincide con la vía de acceso "
"almacenada en los metadatos de objeto %(meta)s"

msgid ""
"Configuration option internal_client_conf_path not defined. Using default "
"configuration, See internal-client.conf-sample for options"
msgstr ""
"La opción de configuración internal_client_conf_path no está definida. Se "
"utilizará la configuración predeterminada, Consulte internal-client.conf-"
"sample para ver las opciones"

msgid "Connection refused"
msgstr "Conexión rechazada"

msgid "Connection timeout"
msgstr "Tiempo de espera de conexión agotado"

msgid "Container"
msgstr "Contenedor"

#, python-format
msgid "Container audit \"once\" mode completed: %.02fs"
msgstr "Auditoría de contenedor en modalidad de \"una vez\" finalizada: %.02fs"

#, python-format
msgid "Container audit pass completed: %.02fs"
msgstr "Paso de auditoría de contenedor finalizado: %.02fs"

#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr ""
"Sincronización de contenedor en modalidad de \"una vez\" finalizada: %.02fs"

#, python-format
msgid ""
"Container update single threaded sweep completed: %(elapsed).02fs, "
"%(success)s successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Barrido de hebra única de actualización del contenedor finalizado: "
"%(elapsed).02fs, %(success)s con éxito, %(fail)s con fallos, %(no_change)s "
"sin cambios"

#, python-format
msgid "Container update sweep completed: %.02fs"
msgstr "Barrido de actualización del contenedor finalizado: %.02fs"

#, python-format
msgid ""
"Container update sweep of %(path)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures, %(no_change)s with no changes"
msgstr ""
"Barrido de actualización del contenedor de %(path)s finalizado: "
"%(elapsed).02fs, %(success)s con éxito, %(fail)s con fallos, %(no_change)s "
"sin cambios"

#, python-format
msgid "Could not bind to %s:%s after trying for %s seconds"
msgstr ""
"No se ha podido enlazar a %s:%s después de intentarlo durante %s segundos"

#, python-format
msgid "Could not load %r: %s"
msgstr "No se ha podido cargar %r: %s"

#, python-format
msgid "Data download error: %s"
msgstr "Error de descarga de datos: %s"

#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "Paso de dispositivos finalizado: %.02fs"

#, python-format
msgid "Directory %r does not map to a valid policy (%s)"
msgstr "El directorio %r no está correlacionado con una política válida (%s)"

#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "ERROR %(db_file)s: %(validate_sync_to_err)s"

#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "ERROR %(status)d %(body)s Desde el servidor %(type)s"

#, python-format
msgid "ERROR %(status)d %(body)s From Object Server re: %(path)s"
msgstr "ERROR %(status)d %(body)s Desde el servidor de objeto re: %(path)s"

#, python-format
msgid "ERROR %(status)d Expect: 100-continue From Object Server"
msgstr "ERROR %(status)d Esperado: 100-continuo Desde el servidor de objeto"

#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr ""
"ERROR %(status)d Intentando %(method)s %(path)sDesde el servidor de "
"contenedor"

#, python-format
msgid ""
"ERROR Account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): Response %(status)s %(reason)s"
msgstr ""
"ERROR La actualización de la cuenta ha fallado con %(ip)s:%(port)s/"
"%(device)s (se volverá a intentar más tarde): Respuesta %(status)s %(reason)s"

#, python-format
msgid ""
"ERROR Account update failed: different  numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"ERROR La actualización de la cuenta ha fallado: hay números distintos de "
"hosts y dispositivos en la solicitud: \"%s\" frente a \"%s\""

#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "ERROR Respuesta errónea %(status)s desde %(host)s"

#, python-format
msgid "ERROR Client read timeout (%ss)"
msgstr "ERROR Tiempo de espera de lectura de cliente agotado (%ss)"

#, python-format
msgid ""
"ERROR Container update failed (saving for async update later): %(status)d "
"response from %(ip)s:%(port)s/%(dev)s"
msgstr ""
"ERROR La actualización del contenedor ha fallado (guardando para una "
"actualización asíncrona posterior): %(status)d respuesta desde %(ip)s:"
"%(port)s/%(dev)s"

#, python-format
msgid ""
"ERROR Container update failed: different numbers of hosts and devices in "
"request: \"%s\" vs \"%s\""
msgstr ""
"ERROR La actualización del contenedor ha fallado: hay números distintos de "
"hosts y dispositivos en la solicitud: \"%s\" frente a \"%s\""

#, python-format
msgid "ERROR Could not get account info %s"
msgstr "ERROR No se ha podido obtener la información de cuenta %s"

#, python-format
msgid "ERROR Could not get container info %s"
msgstr "ERROR No se ha podido obtener la información de contenedor %s"

#, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr ""
"ERROR Fallo al cerrar el archivo de disco %(data_file)s: %(exc)s : %(stack)s"

msgid "ERROR Exception causing client disconnect"
msgstr "ERROR Excepción que provoca la desconexión del cliente"

#, python-format
msgid "ERROR Exception transferring data to object servers %s"
msgstr "ERROR Excepción al transferir datos a los servidores de objetos %s"

msgid "ERROR Failed to get my own IPs?"
msgstr "ERROR ¿No puedo obtener mis propias IP?"

msgid "ERROR Insufficient Storage"
msgstr "ERROR No hay suficiente almacenamiento"

#, python-format
msgid "ERROR Object %(obj)s failed audit and was quarantined: %(err)s"
msgstr ""
"ERROR La auditoría del objeto %(obj)s ha fallado y se ha puesto en "
"cuarentena: %(err)s"

#, python-format
msgid "ERROR Pickle problem, quarantining %s"
msgstr "ERROR Problema de desorden, poniendo %s en cuarentena"

#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "ERROR Unidad remota no montada %s"

#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "ERROR al sincronizar %(db_file)s %(row)s"

#, python-format
msgid "ERROR Syncing %s"
msgstr "ERROR al sincronizar %s"

#, python-format
msgid "ERROR Trying to audit %s"
msgstr "ERROR al intentar la auditoría de %s"

msgid "ERROR Unhandled exception in request"
msgstr "ERROR Excepción no controlada en la solicitud"

#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "ERROR Error de __call__ con %(method)s %(path)s "

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later)"
msgstr ""
"ERROR La actualización de la cuenta ha fallado con %(ip)s:%(port)s/"
"%(device)s (se volverá a intentar más tarde)"

#, python-format
msgid ""
"ERROR account update failed with %(ip)s:%(port)s/%(device)s (will retry "
"later): "
msgstr ""
"ERROR Ha fallado la actualización de la cuenta con %(ip)s:%(port)s/"
"%(device)s (se volverá a intentar más tarde): "

#, python-format
msgid "ERROR async pending file with unexpected name %s"
msgstr ""
"ERROR Archivo pendiente de sincronización asíncrona con nombre inesperado %s"

msgid "ERROR auditing"
msgstr "ERROR en la auditoría"

#, python-format
msgid "ERROR auditing: %s"
msgstr "ERROR en la auditoría: %s"

#, python-format
msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for async "
"update later)"
msgstr ""
"ERROR La actualización del contenedor ha fallado con %(ip)s:%(port)s/%(dev)s "
"(guardando para una actualización asíncrona posterior)"

#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "ERROR al leer la respuesta HTTP desde %s"

#, python-format
msgid "ERROR reading db %s"
msgstr "ERROR al leer la base de datos %s"

#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "ERROR La resincronización ha fallado con %(code)s: %(args)s"

#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "ERROR al sincronizar %(file)s con el nodo %(node)s"

msgid "ERROR trying to replicate"
msgstr "ERROR al intentar la replicación"

#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "ERROR al intentar limpiar %s"

#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr "ERROR con el servidor %(type)s %(ip)s:%(port)s/%(device)s re: %(info)s"

#, python-format
msgid "ERROR with loading suppressions from %s: "
msgstr "ERROR con las supresiones de carga desde %s: "

#, python-format
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "ERROR con el servidor remoto %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "ERROR:  Failed to get paths to drive partitions: %s"
msgstr ""
"ERROR: No se han podido obtener las vías de acceso a las particiones de "
"unidad: %s"

msgid "ERROR: An error occurred while retrieving segments"
msgstr "ERROR: se ha producido un error al recuperar los segmentos"

#, python-format
msgid "ERROR: Unable to access %(path)s: %(error)s"
msgstr "ERROR: no se ha podido acceder a %(path)s: %(error)s"

#, python-format
msgid "ERROR: Unable to run auditing: %s"
msgstr "ERROR: no se ha podido ejecutar la auditoría: %s"

#, python-format
msgid "Error %(action)s to memcached: %(server)s"
msgstr "%(action)s de error para memcached: %(server)s"

#, python-format
msgid "Error encoding to UTF-8: %s"
msgstr "Error en la codificación a UTF-8: %s"

msgid "Error hashing suffix"
msgstr "Error en el hash del sufijo"

#, python-format
msgid "Error in %r with mtime_check_interval: %s"
msgstr "Error en %r con mtime_check_interval: %s"

#, python-format
msgid "Error limiting server %s"
msgstr "Error al limitar el servidor %s"

msgid "Error listing devices"
msgstr "Error al mostrar los dispositivos"

#, python-format
msgid "Error on render profiling results: %s"
msgstr "Error al representar los resultados de perfil: %s"

msgid "Error parsing recon cache file"
msgstr "Error al analizar el archivo de memoria caché de recon"

msgid "Error reading recon cache file"
msgstr "Error al leer el archivo de memoria caché de recon"

msgid "Error reading ringfile"
msgstr "Error al leer el archivo de anillo"

msgid "Error reading swift.conf"
msgstr "Error al leer swift.conf"

msgid "Error retrieving recon data"
msgstr "Error al recuperar los datos de recon"

msgid "Error syncing handoff partition"
msgstr "Error al sincronizar la partición de transferencia"

msgid "Error syncing partition"
msgstr "Error al sincronizar la partición"

#, python-format
msgid "Error syncing with node: %s"
msgstr "Error en la sincronización con el nodo: %s"

#, python-format
msgid "Error trying to rebuild %(path)s policy#%(policy)d frag#%(frag_index)s"
msgstr ""
"Error al intentar reconstruir %(path)s policy#%(policy)d frag#%(frag_index)s"

msgid "Error: An error occurred"
msgstr "Error: se ha producido un error"

msgid "Error: missing config path argument"
msgstr "Error: falta el argumento de vía de acceso de configuración"

#, python-format
msgid "Error: unable to locate %s"
msgstr "Error: no se ha podido localizar %s"

msgid "Exception dumping recon cache"
msgstr "Excepción al volcar la memoria caché de recon"

msgid "Exception in top-level account reaper loop"
msgstr "Excepción en el bucle cosechador de cuentas de nivel superior"

msgid "Exception in top-level replication loop"
msgstr "Excepción en el bucle de réplica de nivel superior"

msgid "Exception in top-levelreconstruction loop"
msgstr "Excepción en el bucle de reconstrucción de nivel superior"

#, python-format
msgid "Exception while deleting container %s %s"
msgstr "Excepción al suprimir el contenedor %s %s"

#, python-format
msgid "Exception while deleting object %s %s %s"
msgstr "Excepción al suprimir el objeto %s %s %s"

#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Excepción con %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Exception with account %s"
msgstr "Excepción con la cuenta %s"

#, python-format
msgid "Exception with containers for account %s"
msgstr "Excepción con los contenedores para la cuenta %s"

#, python-format
msgid ""
"Exception with objects for container %(container)s for account %(account)s"
msgstr ""
"Excepción con objetos para el contenedor %(container)s para la cuenta "
"%(account)s"

#, python-format
msgid "Expect: 100-continue on %s"
msgstr "Esperado: 100-continuo en %s"

#, python-format
msgid "Following CNAME chain for  %(given_domain)s to %(found_domain)s"
msgstr "Siguiente cadena CNAME de %(given_domain)s a %(found_domain)s"

msgid "Found configs:"
msgstr "Configuraciones encontradas:"

msgid ""
"Handoffs first mode still has handoffs remaining.  Aborting current "
"replication pass."
msgstr ""
"El modo de transferencias primero aún tiene transferencias restantes. "
"Abortando el paso de réplica actual."

msgid "Host unreachable"
msgstr "Host no alcanzable"

#, python-format
msgid "Incomplete pass on account %s"
msgstr "Paso incompleto en la cuenta %s"

#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "Formato de X-Container-Sync-To no válido %r"

#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "Host no válido %r en X-Container-Sync-To"

#, python-format
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "Entrada pendiente no válida %(file)s: %(entry)s"

#, python-format
msgid "Invalid response %(resp)s from %(full_path)s"
msgstr "Respuesta no válida %(resp)s de %(full_path)s"

#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "Respuesta no válida %(resp)s desde %(ip)s"

#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""
"Esquema no válido %r en X-Container-Sync-To, debe ser \"//\", \"http\" o "
"\"https\"."

#, python-format
msgid "Killing long-running rsync: %s"
msgstr "Interrumpiendo resincronización (rsync) de larga duración: %s"

#, python-format
msgid "Loading JSON from %s failed (%s)"
msgstr "Error al cargar JSON desde %s (%s)"

msgid "Lockup detected.. killing live coros."
msgstr "Bloqueo detectado. Interrumpiendo coros activos."

#, python-format
msgid "Mapped %(given_domain)s to %(found_domain)s"
msgstr "Se ha correlacionado %(given_domain)s con %(found_domain)s"

#, python-format
msgid "No %s running"
msgstr "Ningún %s en ejecución"

#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "No hay ningún punto final de clúster para %r %r"

#, python-format
msgid "No permission to signal PID %d"
msgstr "No tiene permiso para señalar el PID %d"

#, python-format
msgid "No policy with index %s"
msgstr "No hay ninguna política que tenga el índice %s"

#, python-format
msgid "No realm key for %r"
msgstr "No hay ninguna clave de dominio para %r"

#, python-format
msgid "No space left on device for %s (%s)"
msgstr "No queda espacio libre en el dispositivo para %s (%s)"

#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "Error de nodo limitado %(ip)s:%(port)s (%(device)s)"

#, python-format
msgid "Not enough object servers ack'ed (got %d)"
msgstr "No hay suficientes servidores de objetos reconocidos (constan %d)"

#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r                       - object "
"%(obj_name)r"
msgstr ""
"No se ha encontrado %(sync_from)r => %(sync_to)r                       - "
"objeto %(obj_name)rd"

#, python-format
msgid "Nothing reconstructed for %s seconds."
msgstr "No se ha reconstruido nada durante %s segundos."

#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "No se ha replicado nada durante %s segundos."

msgid "Object"
msgstr "Objeto"

msgid "Object PUT"
msgstr "Objeto PUT"

#, python-format
msgid "Object PUT returning 202 for 409: %(req_timestamp)s <= %(timestamps)r"
msgstr ""
"El objeto PUT devuelve 202 para 409: %(req_timestamp)s <= %(timestamps)r"

#, python-format
msgid "Object PUT returning 412, %(statuses)r"
msgstr "El objeto PUT devuelve 412, %(statuses)r"

#, python-format
msgid ""
"Object audit (%(type)s) \"%(mode)s\" mode completed: %(elapsed).02fs. Total "
"quarantined: %(quars)d, Total errors: %(errors)d, Total files/sec: "
"%(frate).2f, Total bytes/sec: %(brate).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Auditoría de objetos (%(type)s) en modalidad \"%(mode)s\" finalizada: "
"%(elapsed).02fs. Total en cuarentena: %(quars)d, Errores totales: "
"%(errors)d, Archivos totales por segundo: %(frate).2f, Bytes totales por "
"segundo: %(brate).2f, Tiempo de auditoría: %(audit).2f, Velocidad: "
"%(audit_rate).2f"

#, python-format
msgid ""
"Object audit (%(type)s). Since %(start_time)s: Locally: %(passes)d passed, "
"%(quars)d quarantined, %(errors)d errors, files/sec: %(frate).2f, bytes/sec: "
"%(brate).2f, Total time: %(total).2f, Auditing time: %(audit).2f, Rate: "
"%(audit_rate).2f"
msgstr ""
"Auditoría de objetos (%(type)s). Desde %(start_time)s: Localmente: "
"%(passes)d han pasado, %(quars)d en cuarentena, %(errors)d errores, archivos "
"por segundo: %(frate).2f , bytes por segundo: %(brate).2f, Tiempo total: "
"%(total).2f, Tiempo de auditoría: %(audit).2f, Velocidad: %(audit_rate).2f"

#, python-format
msgid "Object audit stats: %s"
msgstr "Estadísticas de auditoría de objetos: %s"

#, python-format
msgid "Object reconstruction complete (once). (%.02f minutes)"
msgstr "Reconstrucción de objeto finalizada (una vez). (%.02f minutos)"

#, python-format
msgid "Object reconstruction complete. (%.02f minutes)"
msgstr "Reconstrucción de objeto finalizada. (%.02f minutos)"

#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "Réplica de objeto finalizada (una vez). (%.02f minutos)"

#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "Réplica de objeto finalizada. (%.02f minutos)"

#, python-format
msgid "Object servers returned %s mismatched etags"
msgstr ""
"Los servidores de objeto han devuelvo %s etiquetas (etags) no coincidentes"

#, python-format
msgid ""
"Object update single threaded sweep completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Barrido de hebra única de actualización del objeto finalizado: "
"%(elapsed).02fs, %(success)s éxitos, %(fail)s fallos"

#, python-format
msgid "Object update sweep completed: %.02fs"
msgstr "Barrido de actualización del objeto finalizado: %.02fs"

#, python-format
msgid ""
"Object update sweep of %(device)s completed: %(elapsed).02fs, %(success)s "
"successes, %(fail)s failures"
msgstr ""
"Barrido de actualización del objeto de %(device)s finalizado: "
"%(elapsed).02fs, %(success)s con éxito, %(fail)s fallos"

msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr ""
"Parámetros, consultas y fragmentos no permitidos en X-Container-Sync-To"

#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr ""
"Tiempos de partición: máximo %(max).4fs, mínimo %(min).4fs, medio %(med).4fs"

#, python-format
msgid "Pass beginning; %s possible containers; %s possible objects"
msgstr "Inicio del paso; %s posibles contenedores; %s posibles objetos"

#, python-format
msgid "Pass completed in %ds; %d objects expired"
msgstr "Paso completado en %ds; %d objetos caducados"

#, python-format
msgid "Pass so far %ds; %d objects expired"
msgstr "Paso hasta ahora %ds; %d objetos caducados"

msgid "Path required in X-Container-Sync-To"
msgstr "La vía de acceso es obligatoria en X-Container-Sync-To"

#, python-format
msgid "Problem cleaning up %s"
msgstr "Problema al limpiar %s"

#, python-format
msgid "Problem cleaning up %s (%s)"
msgstr "Problema al limpiar %s (%s)"

#, python-format
msgid "Problem writing durable state file %s (%s)"
msgstr "Problema al escribir en el archivo de estado durable %s (%s)"

#, python-format
msgid "Profiling Error: %s"
msgstr "Error de perfil: %s"

#, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"Se ha puesto en cuarentena %(hsh_path)s en %(quar_path)s debido a que no es "
"un directorio"

#, python-format
msgid ""
"Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr ""
"Se ha puesto en cuarentena %(object_path)s en %(quar_path)s debido a que no "
"es un directorio"

#, python-format
msgid "Quarantined %s to %s due to %s database"
msgstr "%s de %s en cuarentena debido a la base de datos %s"

#, python-format
msgid "Quarantining DB %s"
msgstr "Poniendo en cuarentena la base de datos %s"

#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""
"Ajuste de límite de registro de suspensión: %(sleep)s para %(account)s/"
"%(container)s/%(object)s"

#, python-format
msgid "Removed %(remove)d dbs"
msgstr "Se han eliminado %(remove)d bases de datos"

#, python-format
msgid "Removing %s objects"
msgstr "Eliminando %s objetos"

#, python-format
msgid "Removing partition: %s"
msgstr "Eliminando partición: %s"

#, python-format
msgid "Removing pid file %(pid_file)s with wrong pid %(pid)d"
msgstr ""
"Eliminando el archivo PID %(pid_file)s que tiene el PID no válido %(pid)d"

#, python-format
msgid "Removing pid file %s with invalid pid"
msgstr "Eliminando el archivo PID %s, que tiene un PID no válido"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Eliminando el archivo PID obsoleto %s"

msgid "Replication run OVER"
msgstr "Ejecución de la replicación finalizada"

#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "Se devuelve 497 debido a las listas negras: %s"

#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""
"Se devuelven 498 de %(meth)s a %(acc)s/%(cont)s/%(obj)s. Ajuste de límite "
"(suspensión máxima) %(e)s"

msgid "Ring change detected. Aborting current reconstruction pass."
msgstr ""
"Cambio de anillo detectado. Abortando el pase de reconstrucción actual."

msgid "Ring change detected. Aborting current replication pass."
msgstr "Cambio de anillo detectado. Abortando el paso de réplica actual."

#, python-format
msgid "Running %s once"
msgstr "Ejecutando %s una vez"

msgid "Running object reconstructor in script mode."
msgstr "Ejecutando reconstructor de objeto en modo script."

msgid "Running object replicator in script mode."
msgstr "Ejecutando el replicador de objetos en modalidad de script."

#, python-format
msgid "Signal %s  pid: %s  signal: %s"
msgstr "Señal %s  pid: %s  señal: %s"

#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], %(skip)s "
"skipped, %(fail)s failed"
msgstr ""
"Desde las %(time)s: %(sync)s se han sincronizado [%(delete)s supresiones, "
"%(put)s colocaciones], %(skip)s se han omitido, %(fail)s han fallado"

#, python-format
msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed "
"audit"
msgstr ""
"Desde las %(time)s: Auditorías de cuenta: %(passed)s han pasado la auditoría,"
"%(failed)s han fallado la auditoría"

#, python-format
msgid ""
"Since %(time)s: Container audits: %(pass)s passed audit, %(fail)s failed "
"audit"
msgstr ""
"Desde las %(time)s: Auditorías de contenedor: %(pass)s han pasado la "
"auditoría,%(fail)s han fallado la auditoría"

#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "Omitiendo %(device)s, ya que no está montado"

#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "Omitiendo %s, ya que no está montado"

#, python-format
msgid "Starting %s"
msgstr "Iniciando %s"

msgid "Starting object reconstruction pass."
msgstr "Iniciando el paso de reconstrucción de objeto."

msgid "Starting object reconstructor in daemon mode."
msgstr "Iniciando reconstructor de objeto en modo daemon."

msgid "Starting object replication pass."
msgstr "Iniciando el paso de réplica de objeto."

msgid "Starting object replicator in daemon mode."
msgstr "Iniciando el replicador de objetos en modalidad de daemon."

#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr ""
"Resincronización de %(src)s realizada con éxito en %(dst)s (%(time).03f)"

msgid "The file type are forbidden to access!"
msgstr "El acceso al tipo de archivo está prohibido."

#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of "
"%(key)s across policies (%(sum)s)"
msgstr ""
"El total de %(key)s del contenedor (%(total)s) no coincide con la suma de "
"%(key)s en las políticas (%(sum)s)"

#, python-format
msgid "Timeout %(action)s to memcached: %(server)s"
msgstr "%(action)s de tiempo de espera para memcached: %(server)s"

#, python-format
msgid "Timeout Exception with %(ip)s:%(port)s/%(device)s"
msgstr "Excepción de tiempo de espera superado con %(ip)s:%(port)s/%(device)s"

#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "Intentando %(method)s %(path)s"

#, python-format
msgid "Trying to GET %(full_path)s"
msgstr "Intentando hacer un GET  de %(full_path)s"

#, python-format
msgid "Trying to get %s status of PUT to %s"
msgstr "Intentando obtener el estado %s de PUT en %s"

#, python-format
msgid "Trying to get final status of PUT to %s"
msgstr "Intentando obtener el estado final de PUT en %s"

msgid "Trying to read during GET"
msgstr "Intentado leer durante GET"

msgid "Trying to read during GET (retrying)"
msgstr "Intentando leer durante GET (reintentando)"

msgid "Trying to send to client"
msgstr "Intentando enviar al cliente"

#, python-format
msgid "Trying to sync suffixes with %s"
msgstr "Intentando sincronizar los sufijos con %s"

#, python-format
msgid "Trying to write to %s"
msgstr "Intentando escribir en %s"

msgid "UNCAUGHT EXCEPTION"
msgstr "EXCEPCIÓN NO DETECTADA"

#, python-format
msgid "Unable to find %s config section in %s"
msgstr "No se ha podido encontrar la sección de configuración %s en %s"

#, python-format
msgid "Unable to load internal client from config: %r (%s)"
msgstr ""
"No se puede cargar el cliente interno a partir de la configuración: %r (%s)"

#, python-format
msgid "Unable to locate %s in libc.  Leaving as a no-op."
msgstr "No se ha podido localizar %s en libc. Se dejará como no operativo."

#, python-format
msgid "Unable to locate config for %s"
msgstr "No se ha podido encontrar la configuración de %s"

#, python-format
msgid "Unable to locate config number %s for %s"
msgstr "No se ha podido encontrar el número de configuración %s de %s"

msgid ""
"Unable to locate fallocate, posix_fallocate in libc.  Leaving as a no-op."
msgstr ""
"No se ha podido localizar fallocate, posix_fallocate en libc. Se dejará como "
"no operativo."

#, python-format
msgid "Unable to perform fsync() on directory %s: %s"
msgstr "No se puede realizar fsync() en el directorio %s: %s"

#, python-format
msgid "Unable to read config from %s"
msgstr "No se ha podido leer la configuración de %s"

#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "%(sync_from)r => %(sync_to)r sin autorización"

#, python-format
msgid "Unexpected response: %s"
msgstr "Respuesta inesperada : %s "

msgid "Unhandled exception"
msgstr "Excepción no controlada"

#, python-format
msgid "Unknown exception trying to GET: %(account)r %(container)r %(object)r"
msgstr ""
"Se ha producido una excepción desconocida al intentar hacer un GET de: "
"%(account)r %(container)r %(object)r"

#, python-format
msgid "Update report failed for %(container)s %(dbfile)s"
msgstr "Informe de actualización fallido para %(container)s %(dbfile)s"

#, python-format
msgid "Update report sent for %(container)s %(dbfile)s"
msgstr "Informe de actualización enviado para %(container)s %(dbfile)s"

msgid ""
"WARNING: SSL should only be enabled for testing purposes. Use external SSL "
"termination for a production deployment."
msgstr ""
"AVISO: SSL sólo se debe habilitar con fines de prueba. Utilice la "
"terminación de SSL externa para un despliegue de producción."

msgid "WARNING: Unable to modify file descriptor limit.  Running as non-root?"
msgstr ""
"AVISO: no se ha podido modificar el límite del descriptor de archivos. ¿Está "
"operando como no root?"

msgid "WARNING: Unable to modify max process limit.  Running as non-root?"
msgstr ""
"AVISO: no se ha podido modificar el límite máximo de procesos. ¿Está "
"operando como no root?"

msgid "WARNING: Unable to modify memory limit.  Running as non-root?"
msgstr ""
"AVISO: no se ha podido modificar el límite de memoria. ¿Está operando como "
"no root?"

#, python-format
msgid "Waited %s seconds for %s to die; giving up"
msgstr "Se han esperado %s segundos a que terminara %s; abandonando"

#, python-format
msgid "Waited %s seconds for %s to die; killing"
msgstr "Se han esperado %s segundos a que terminara %s; terminando"

msgid "Warning: Cannot ratelimit without a memcached client"
msgstr ""
"Aviso: no se puede ajustar el límite sin un cliente almacenado en memoria "
"caché"

#, python-format
msgid "method %s is not allowed."
msgstr "el método %s no está permitido."

msgid "no log file found"
msgstr "no se ha encontrado ningún archivo de registro"

msgid "odfpy not installed."
msgstr "odfpy no está instalado."

#, python-format
msgid "plotting results failed due to %s"
msgstr "error en el trazado de resultados debido a %s"

msgid "python-matplotlib not installed."
msgstr "python-matplotlib no está instalado."
swift-2.7.1/swift/common/0000775000567000056710000000000013024044470016440 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/common/container_sync_realms.py0000664000567000056710000001364313024044354023403 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import errno
import hashlib
import hmac
import os
import time

from six.moves import configparser

from swift import gettext_ as _
from swift.common.utils import get_valid_utf8_str


class ContainerSyncRealms(object):
    """
    Loads and parses the container-sync-realms.conf, occasionally
    checking the file's mtime to see if it needs to be reloaded.
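
    A minimal sketch of the expected file layout (the realm name, cluster
    names and endpoints below are made-up examples, not required values)::

        [DEFAULT]
        mtime_check_interval = 300

        [realm1]
        key = realm1key
        key2 = realm1key2
        cluster_name1 = https://host1/v1/
        cluster_name2 = https://host2/v1/

    Each section other than DEFAULT names a realm; ``key``/``key2`` are the
    realm keys and any ``cluster_``-prefixed option maps a cluster name to
    its endpoint.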
    """

    def __init__(self, conf_path, logger):
        self.conf_path = conf_path
        self.logger = logger
        self.next_mtime_check = 0
        self.mtime_check_interval = 300
        self.conf_path_mtime = 0
        self.data = {}
        self.reload()

    def reload(self):
        """Forces a reload of the conf file."""
        self.next_mtime_check = 0
        self.conf_path_mtime = 0
        self._reload()

    def _reload(self):
        now = time.time()
        if now >= self.next_mtime_check:
            self.next_mtime_check = now + self.mtime_check_interval
            try:
                mtime = os.path.getmtime(self.conf_path)
            except OSError as err:
                if err.errno == errno.ENOENT:
                    log_func = self.logger.debug
                else:
                    log_func = self.logger.error
                log_func(_('Could not load %r: %s'), self.conf_path, err)
            else:
                if mtime != self.conf_path_mtime:
                    self.conf_path_mtime = mtime
                    try:
                        conf = configparser.SafeConfigParser()
                        conf.read(self.conf_path)
                    except configparser.ParsingError as err:
                        self.logger.error(
                            _('Could not load %r: %s'), self.conf_path, err)
                    else:
                        try:
                            self.mtime_check_interval = conf.getint(
                                'DEFAULT', 'mtime_check_interval')
                            self.next_mtime_check = \
                                now + self.mtime_check_interval
                        except configparser.NoOptionError:
                            self.mtime_check_interval = 300
                            self.next_mtime_check = \
                                now + self.mtime_check_interval
                        except (configparser.ParsingError, ValueError) as err:
                            self.logger.error(
                                _('Error in %r with mtime_check_interval: %s'),
                                self.conf_path, err)
                        realms = {}
                        for section in conf.sections():
                            realm = {}
                            clusters = {}
                            for option, value in conf.items(section):
                                if option in ('key', 'key2'):
                                    realm[option] = value
                                elif option.startswith('cluster_'):
                                    clusters[option[8:].upper()] = value
                            realm['clusters'] = clusters
                            realms[section.upper()] = realm
                        self.data = realms

    def realms(self):
        """Returns a list of realms."""
        self._reload()
        return self.data.keys()

    def key(self, realm):
        """Returns the key for the realm."""
        self._reload()
        result = self.data.get(realm.upper())
        if result:
            result = result.get('key')
        return result

    def key2(self, realm):
        """Returns the key2 for the realm."""
        self._reload()
        result = self.data.get(realm.upper())
        if result:
            result = result.get('key2')
        return result

    def clusters(self, realm):
        """Returns a list of clusters for the realm."""
        self._reload()
        result = self.data.get(realm.upper())
        if result:
            result = result.get('clusters')
            if result:
                result = result.keys()
        return result or []

    def endpoint(self, realm, cluster):
        """Returns the endpoint for the cluster in the realm."""
        self._reload()
        result = None
        realm_data = self.data.get(realm.upper())
        if realm_data:
            cluster_data = realm_data.get('clusters')
            if cluster_data:
                result = cluster_data.get(cluster.upper())
        return result

    def get_sig(self, request_method, path, x_timestamp, nonce, realm_key,
                user_key):
        """
        Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for
        the information given.

        :param request_method: HTTP method of the request.
        :param path: The path to the resource.
        :param x_timestamp: The X-Timestamp header value for the request.
        :param nonce: A unique value for the request.
        :param realm_key: Shared secret at the cluster operator level.
        :param user_key: Shared secret at the user's container level.
        :returns: hexdigest str of the HMAC-SHA1 for the request.
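
        For example, a caller could sign a request like this (``realms`` is
        assumed to be an instance of this class and every literal below is a
        made-up placeholder)::

            sig = realms.get_sig(
                'PUT', '/v1/AUTH_a/c/o', '1470000000.00000',
                'some-nonce', realms.key('realm1'), 'user-secret')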
        """
        nonce = get_valid_utf8_str(nonce)
        realm_key = get_valid_utf8_str(realm_key)
        user_key = get_valid_utf8_str(user_key)
        return hmac.new(
            realm_key,
            '%s\n%s\n%s\n%s\n%s' % (
                request_method, path, x_timestamp, nonce, user_key),
            hashlib.sha1).hexdigest()
swift-2.7.1/swift/common/middleware/0000775000567000056710000000000013024044470020555 5ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/common/middleware/dlo.py0000664000567000056710000004721513024044354021717 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Middleware that will provide Dynamic Large Object (DLO) support.

---------------
Using ``swift``
---------------

The quickest way to try out this feature is to use the ``swift`` command-line
tool included with the `python-swiftclient`_ library.  You can use the ``-S``
option to specify the segment size to use when splitting a large file. For
example::

    swift upload test_container -S 1073741824 large_file

This would split the large_file into 1G segments and begin uploading those
segments in parallel. Once all the segments have been uploaded, ``swift`` will
then create the manifest file so the segments can be downloaded as one.

So now, the following ``swift`` command would download the entire large
object::

    swift download test_container large_file

The ``swift`` command uses a strict convention for its segmented object
support. In the above example it will upload all the segments into a
second container named test_container_segments. These segments will
have names like large_file/1290206778.25/21474836480/00000000,
large_file/1290206778.25/21474836480/00000001, etc.

The main benefit for using a separate container is that the main container
listings will not be polluted with all the segment names. The reason for using
the segment name format of <name>/<timestamp>/<size>/<segment> is so that an
upload of a new file with the same name won't overwrite the contents of the
first until the last moment when the manifest file is updated.

``swift`` will manage these segment files for you, deleting old segments on
deletes and overwrites, etc. You can override this behavior with the
``--leave-segments`` option if desired; this is useful if you want to have
multiple versions of the same large object available.

.. _`python-swiftclient`: http://github.com/openstack/python-swiftclient

----------
Direct API
----------

You can also work with the segments and manifests directly with HTTP
requests instead of having ``swift`` do that for you. You can just
upload the segments like you would any other object and the manifest
is just a zero-byte (not enforced) file with an extra
``X-Object-Manifest`` header.

All the object segments need to be in the same container, have a common object
name prefix, and sort in the order in which they should be concatenated.
Object names are sorted lexicographically as UTF-8 byte strings.
They don't have to be in the same container as the manifest file will be, which
is useful to keep container listings clean as explained above with ``swift``.

The manifest file is simply a zero-byte (not enforced) file with the extra
``X-Object-Manifest: <container>/<prefix>`` header, where ``<container>`` is
the container the object segments are in and ``<prefix>`` is the common prefix
for all the segments.

It is best to upload all the segments first and then create or update the
manifest. In this way, the full object won't be available for downloading
until the upload is complete. Also, you can upload a new set of segments to
a second location and then update the manifest to point to this new location.
During the upload of the new segments, the original manifest will still be
available to download the first set of segments.

.. note::

    The manifest file should have no content. However, this is not enforced.
    If the manifest path itself conforms to the container/prefix specified in
    X-Object-Manifest, and the manifest has some content/data in it, that
    content is also treated as a segment and becomes part of the concatenated
    GET response. The order of concatenation follows the usual DLO logic:
    segments are concatenated in the order returned when their names are
    sorted.


Here's an example using ``curl`` with tiny 1-byte segments::

    # First, upload the segments
    curl -X PUT -H 'X-Auth-Token: <token>' \
        http://<storage_url>/container/myobject/00000001 --data-binary '1'
    curl -X PUT -H 'X-Auth-Token: <token>' \
        http://<storage_url>/container/myobject/00000002 --data-binary '2'
    curl -X PUT -H 'X-Auth-Token: <token>' \
        http://<storage_url>/container/myobject/00000003 --data-binary '3'

    # Next, create the manifest file
    curl -X PUT -H 'X-Auth-Token: <token>' \
        -H 'X-Object-Manifest: container/myobject/' \
        http://<storage_url>/container/myobject --data-binary ''

    # And now we can download the segments as a single object
    curl -H 'X-Auth-Token: <token>' \
        http://<storage_url>/container/myobject
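
The same workflow can also be driven from Python with the
`python-swiftclient`_ library. This is only a minimal sketch: the auth URL
and credentials are placeholders for a local tempauth-style setup, and the
container and object names are arbitrary::

    from swiftclient.client import Connection

    conn = Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                      user='test:tester', key='testing')

    # Upload the segments like any other objects; their names share the
    # prefix "myobject/" and sort in concatenation order.
    for i, chunk in enumerate(('1', '2', '3')):
        conn.put_object('container', 'myobject/%08d' % i, chunk)

    # Create the zero-byte manifest pointing at the segment prefix.
    conn.put_object('container', 'myobject', '',
                    headers={'X-Object-Manifest': 'container/myobject/'})

    # Downloading the manifest now returns the concatenated segments.
    headers, body = conn.get_object('container', 'myobject')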
"""

import json
import os

import six
from six.moves.configparser import ConfigParser, NoSectionError, NoOptionError
from six.moves.urllib.parse import unquote

from hashlib import md5
from swift.common import constraints
from swift.common.exceptions import ListingIterError, SegmentError
from swift.common.http import is_success
from swift.common.swob import Request, Response, \
    HTTPRequestedRangeNotSatisfiable, HTTPBadRequest, HTTPConflict
from swift.common.utils import get_logger, \
    RateLimitedIterator, read_conf_dir, quote, close_if_possible, \
    closing_if_possible
from swift.common.request_helpers import SegmentedIterable
from swift.common.wsgi import WSGIContext, make_subrequest


class GetContext(WSGIContext):
    def __init__(self, dlo, logger):
        super(GetContext, self).__init__(dlo.app)
        self.dlo = dlo
        self.logger = logger

    def _get_container_listing(self, req, version, account, container,
                               prefix, marker=''):
        con_req = make_subrequest(
            req.environ, path='/'.join(['', version, account, container]),
            method='GET',
            headers={'x-auth-token': req.headers.get('x-auth-token')},
            agent=('%(orig)s ' + 'DLO MultipartGET'), swift_source='DLO')
        con_req.query_string = 'format=json&prefix=%s' % quote(prefix)
        if marker:
            con_req.query_string += '&marker=%s' % quote(marker)

        con_resp = con_req.get_response(self.dlo.app)
        if not is_success(con_resp.status_int):
            return con_resp, None
        with closing_if_possible(con_resp.app_iter):
            return None, json.loads(''.join(con_resp.app_iter))

    def _segment_listing_iterator(self, req, version, account, container,
                                  prefix, segments, first_byte=None,
                                  last_byte=None):
        # It's sort of hokey that this thing takes in the first page of
        # segments as an argument, but we need to compute the etag and content
        # length from the first page, and it's better to have a hokey
        # interface than to make redundant requests.
        if first_byte is None:
            first_byte = 0
        if last_byte is None:
            last_byte = float("inf")

        marker = ''
        while True:
            for segment in segments:
                seg_length = int(segment['bytes'])

                if first_byte >= seg_length:
                    # don't need any bytes from this segment
                    first_byte = max(first_byte - seg_length, -1)
                    last_byte = max(last_byte - seg_length, -1)
                    continue
                elif last_byte < 0:
                    # no bytes are needed from this or any future segment
                    break

                seg_name = segment['name']
                if isinstance(seg_name, six.text_type):
                    seg_name = seg_name.encode("utf-8")

                # (obj path, etag, size, first byte, last byte)
                yield ("/" + "/".join((version, account, container,
                                       seg_name)),
                       # We deliberately omit the etag and size here;
                       # SegmentedIterable will check size and etag if
                       # specified, but we don't want it to. DLOs only care
                       # that the objects' names match the specified prefix.
                       None, None,
                       (None if first_byte <= 0 else first_byte),
                       (None if last_byte >= seg_length - 1 else last_byte))

                first_byte = max(first_byte - seg_length, -1)
                last_byte = max(last_byte - seg_length, -1)

            if len(segments) < constraints.CONTAINER_LISTING_LIMIT:
                # a short page means that we're done with the listing
                break
            elif last_byte < 0:
                break

            marker = segments[-1]['name']
            error_response, segments = self._get_container_listing(
                req, version, account, container, prefix, marker)
            if error_response:
                # we've already started sending the response body to the
                # client, so all we can do is raise an exception to make the
                # WSGI server close the connection early
                close_if_possible(error_response.app_iter)
                raise ListingIterError(
                    "Got status %d listing container /%s/%s" %
                    (error_response.status_int, account, container))

    def get_or_head_response(self, req, x_object_manifest,
                             response_headers=None):
        if response_headers is None:
            response_headers = self._response_headers

        container, obj_prefix = x_object_manifest.split('/', 1)
        container = unquote(container)
        obj_prefix = unquote(obj_prefix)

        # manifest might point to a different container
        req.acl = None
        version, account, _junk = req.split_path(2, 3, True)
        error_response, segments = self._get_container_listing(
            req, version, account, container, obj_prefix)
        if error_response:
            return error_response
        have_complete_listing = len(segments) < \
            constraints.CONTAINER_LISTING_LIMIT

        first_byte = last_byte = None
        actual_content_length = None
        content_length_for_swob_range = None
        if req.range and len(req.range.ranges) == 1:
            content_length_for_swob_range = sum(o['bytes'] for o in segments)

            # This is a hack to handle suffix byte ranges (e.g. "bytes=-5"),
            # which we can't honor unless we have a complete listing.
            _junk, range_end = req.range.ranges_for_length(float("inf"))[0]

            # If this is all the segments, we know whether or not this
            # range request is satisfiable.
            #
            # Alternately, we may not have all the segments, but this range
            # falls entirely within the first page's segments, so we know
            # that it is satisfiable.
            if (have_complete_listing
               or range_end < content_length_for_swob_range):
                byteranges = req.range.ranges_for_length(
                    content_length_for_swob_range)
                if not byteranges:
                    return HTTPRequestedRangeNotSatisfiable(request=req)
                first_byte, last_byte = byteranges[0]
                # For some reason, swob.Range.ranges_for_length adds 1 to the
                # last byte's position.
                last_byte -= 1
                actual_content_length = last_byte - first_byte + 1
            else:
                # The range may or may not be satisfiable, but we can't tell
                # based on just one page of listing, and we're not going to go
                # get more pages because that would use up too many resources,
                # so we ignore the Range header and return the whole object.
                actual_content_length = None
                content_length_for_swob_range = None
                req.range = None

        response_headers = [
            (h, v) for h, v in response_headers
            if h.lower() not in ("content-length", "content-range")]

        if content_length_for_swob_range is not None:
            # Here, we have to give swob a big-enough content length so that
            # it can compute the actual content length based on the Range
            # header. This value will not be visible to the client; swob will
            # substitute its own Content-Length.
            #
            # Note: if the manifest points to at least CONTAINER_LISTING_LIMIT
            # segments, this may be less than the sum of all the segments'
            # sizes. However, it'll still be greater than the last byte in the
            # Range header, so it's good enough for swob.
            response_headers.append(('Content-Length',
                                     str(content_length_for_swob_range)))
        elif have_complete_listing:
            actual_content_length = sum(o['bytes'] for o in segments)
            response_headers.append(('Content-Length',
                                     str(actual_content_length)))

        if have_complete_listing:
            response_headers = [(h, v) for h, v in response_headers
                                if h.lower() != "etag"]
            etag = md5()
            for seg_dict in segments:
                etag.update(seg_dict['hash'].strip('"'))
            response_headers.append(('Etag', '"%s"' % etag.hexdigest()))

        app_iter = None
        if req.method == 'GET':
            listing_iter = RateLimitedIterator(
                self._segment_listing_iterator(
                    req, version, account, container, obj_prefix, segments,
                    first_byte=first_byte, last_byte=last_byte),
                self.dlo.rate_limit_segments_per_sec,
                limit_after=self.dlo.rate_limit_after_segment)

            app_iter = SegmentedIterable(
                req, self.dlo.app, listing_iter, ua_suffix="DLO MultipartGET",
                swift_source="DLO", name=req.path, logger=self.logger,
                max_get_time=self.dlo.max_get_time,
                response_body_length=actual_content_length)

            try:
                app_iter.validate_first_segment()
            except (SegmentError, ListingIterError):
                return HTTPConflict(request=req)

        resp = Response(request=req, headers=response_headers,
                        conditional_response=True,
                        app_iter=app_iter)

        return resp

    def handle_request(self, req, start_response):
        """
        Take a GET or HEAD request, and if it is for a dynamic large object
        manifest, return an appropriate response.

        Otherwise, simply pass it through.
        """
        resp_iter = self._app_call(req.environ)

        # make sure this response is for a dynamic large object manifest
        for header, value in self._response_headers:
            if (header.lower() == 'x-object-manifest'):
                close_if_possible(resp_iter)
                response = self.get_or_head_response(req, value)
                return response(req.environ, start_response)
        else:
            # Not a dynamic large object manifest; just pass it through.
            start_response(self._response_status,
                           self._response_headers,
                           self._response_exc_info)
            return resp_iter


class DynamicLargeObject(object):
    def __init__(self, app, conf):
        self.app = app
        self.logger = get_logger(conf, log_route='dlo')

        # DLO functionality used to live in the proxy server, not middleware,
        # so let's try to go find config values in the proxy's config section
        # to ease cluster upgrades.
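        # For illustration only, an older proxy-server.conf might still carry
        # these settings (hypothetical values) in its app section:
        #
        #   [app:proxy-server]
        #   rate_limit_after_segment = 10
        #   rate_limit_segments_per_sec = 1
        #   max_get_time = 86400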
        self._populate_config_from_old_location(conf)

        self.max_get_time = int(conf.get('max_get_time', '86400'))
        self.rate_limit_after_segment = int(conf.get(
            'rate_limit_after_segment', '10'))
        self.rate_limit_segments_per_sec = int(conf.get(
            'rate_limit_segments_per_sec', '1'))

    def _populate_config_from_old_location(self, conf):
        if ('rate_limit_after_segment' in conf or
                'rate_limit_segments_per_sec' in conf or
                'max_get_time' in conf or
                '__file__' not in conf):
            return

        cp = ConfigParser()
        if os.path.isdir(conf['__file__']):
            read_conf_dir(cp, conf['__file__'])
        else:
            cp.read(conf['__file__'])

        try:
            pipe = cp.get("pipeline:main", "pipeline")
        except (NoSectionError, NoOptionError):
            return

        proxy_name = pipe.rsplit(None, 1)[-1]
        proxy_section = "app:" + proxy_name
        for setting in ('rate_limit_after_segment',
                        'rate_limit_segments_per_sec',
                        'max_get_time'):
            try:
                conf[setting] = cp.get(proxy_section, setting)
            except (NoSectionError, NoOptionError):
                pass

    def __call__(self, env, start_response):
        """
        WSGI entry point
        """
        req = Request(env)
        try:
            vrs, account, container, obj = req.split_path(4, 4, True)
        except ValueError:
            return self.app(env, start_response)

        # install our COPY-callback hook
        env['swift.copy_hook'] = self.copy_hook(
            env.get('swift.copy_hook',
                    lambda src_req, src_resp, sink_req: src_resp))

        if ((req.method == 'GET' or req.method == 'HEAD') and
                req.params.get('multipart-manifest') != 'get'):
            return GetContext(self, self.logger).\
                handle_request(req, start_response)
        elif req.method == 'PUT':
            error_response = self._validate_x_object_manifest_header(req)
            if error_response:
                return error_response(env, start_response)
        return self.app(env, start_response)

    def _validate_x_object_manifest_header(self, req):
        """
        Make sure that X-Object-Manifest is valid if present.
        """
        if 'X-Object-Manifest' in req.headers:
            value = req.headers['X-Object-Manifest']
            container = prefix = None
            try:
                container, prefix = value.split('/', 1)
            except ValueError:
                pass
            if not container or not prefix or '?' in value or '&' in value or \
                    prefix.startswith('/'):
                return HTTPBadRequest(
                    request=req,
                    body=('X-Object-Manifest must be in the '
                          'format container/prefix'))

    def copy_hook(self, inner_hook):

        def dlo_copy_hook(source_req, source_resp, sink_req):
            x_o_m = source_resp.headers.get('X-Object-Manifest')
            if x_o_m:
                if source_req.params.get('multipart-manifest') == 'get':
                    # To copy the manifest, we let the copy proceed as normal,
                    # but ensure that X-Object-Manifest is set on the new
                    # object.
                    sink_req.headers['X-Object-Manifest'] = x_o_m
                else:
                    ctx = GetContext(self, self.logger)
                    source_resp = ctx.get_or_head_response(
                        source_req, x_o_m, source_resp.headers.items())
            return inner_hook(source_req, source_resp, sink_req)

        return dlo_copy_hook


def filter_factory(global_conf, **local_conf):
    conf = global_conf.copy()
    conf.update(local_conf)

    def dlo_filter(app):
        return DynamicLargeObject(app, conf)
    return dlo_filter
swift-2.7.1/swift/common/middleware/list_endpoints.py0000664000567000056710000002354213024044354024174 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
List endpoints for an object, account or container.

This middleware makes it possible to integrate swift with software
that relies on data locality information to avoid network overhead,
such as Hadoop.

Using the original API, answers requests of the form::

    /endpoints/{account}/{container}/{object}
    /endpoints/{account}/{container}
    /endpoints/{account}
    /endpoints/v1/{account}/{container}/{object}
    /endpoints/v1/{account}/{container}
    /endpoints/v1/{account}

with a JSON-encoded list of endpoints of the form::

    http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
    http://{server}:{port}/{dev}/{part}/{acc}/{cont}
    http://{server}:{port}/{dev}/{part}/{acc}

correspondingly, e.g.::

    http://10.1.1.1:6000/sda1/2/a/c2/o1
    http://10.1.1.1:6000/sda1/2/a/c2
    http://10.1.1.1:6000/sda1/2/a

Using the v2 API, answers requests of the form::

    /endpoints/v2/{account}/{container}/{object}
    /endpoints/v2/{account}/{container}
    /endpoints/v2/{account}

with a JSON-encoded dictionary containing a key 'endpoints' that maps to a list
of endpoints having the same form as described above, and a key 'headers' that
maps to a dictionary of headers that should be sent with a request made to
the endpoints, e.g.::

    { "endpoints": {"http://10.1.1.1:6010/sda1/2/a/c3/o1",
                    "http://10.1.1.1:6030/sda3/2/a/c3/o1",
                    "http://10.1.1.1:6040/sda4/2/a/c3/o1"},
      "headers": {"X-Backend-Storage-Policy-Index": "1"}}

In this example, the 'headers' dictionary indicates that requests to the
endpoint URLs should include the header 'X-Backend-Storage-Policy-Index: 1'
because the object's container is using storage policy index 1.
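
A request against a hypothetical deployment could then look like (hostname
and port are illustrative)::

    curl http://proxy.example.com:8080/endpoints/v2/a/c3/o1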

The '/endpoints/' path is customizable ('list_endpoints_path'
configuration parameter).

Intended for consumption by third-party services living inside the
cluster (as the endpoints make sense only inside the cluster behind
the firewall); potentially written in a different language.

This is why it's provided as a REST API and not just a Python API:
to avoid requiring clients to write their own ring parsers in their
languages, and to avoid the necessity to distribute the ring file
to clients and keep it up-to-date.

Note that the call is not authenticated, which means that a proxy
with this middleware enabled should not be open to an untrusted
environment (everyone can query the locality data using this middleware).
"""

import json

from six.moves.urllib.parse import quote, unquote

from swift.common.ring import Ring
from swift.common.utils import get_logger, split_path
from swift.common.swob import Request, Response
from swift.common.swob import HTTPBadRequest, HTTPMethodNotAllowed
from swift.common.storage_policy import POLICIES
from swift.proxy.controllers.base import get_container_info

RESPONSE_VERSIONS = (1.0, 2.0)


class ListEndpointsMiddleware(object):
    """
    List endpoints for an object, account or container.

    See above for a full description.

    Uses configuration parameter `swift_dir` (default `/etc/swift`).

    :param app: The next WSGI filter or app in the paste.deploy
                chain.
    :param conf: The configuration dict for the middleware.
    """

    def __init__(self, app, conf):
        self.app = app
        self.logger = get_logger(conf, log_route='endpoints')
        self.swift_dir = conf.get('swift_dir', '/etc/swift')
        self.account_ring = Ring(self.swift_dir, ring_name='account')
        self.container_ring = Ring(self.swift_dir, ring_name='container')
        self.endpoints_path = conf.get('list_endpoints_path', '/endpoints/')
        if not self.endpoints_path.endswith('/'):
            self.endpoints_path += '/'
        self.default_response_version = 1.0
        self.response_map = {
            1.0: self.v1_format_response,
            2.0: self.v2_format_response,
        }

    def get_object_ring(self, policy_idx):
        """
        Get the ring object to use to handle a request based on its policy.

        :param policy_idx: policy index as defined in swift.conf
        :returns: appropriate ring object
        """
        return POLICIES.get_object_ring(policy_idx, self.swift_dir)

    def _parse_version(self, raw_version):
        err_msg = 'Unsupported version %r' % raw_version
        try:
            version = float(raw_version.lstrip('v'))
        except ValueError:
            raise ValueError(err_msg)
        if not any(version == v for v in RESPONSE_VERSIONS):
            raise ValueError(err_msg)
        return version

    def _parse_path(self, request):
        """
        Parse path parts of request into a tuple of version, account,
        container, obj.  Unspecified path parts are filled in as None,
        except version which is always returned as a float using the
        configured default response version if not specified in the
        request.

        :param request: the swob request

        :returns: parsed path parts as a tuple with version filled in as
                  configured default response version if not specified.
        :raises: ValueError if path is invalid, message will say why.
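
        For illustration, with the default '/endpoints/' path (hypothetical
        account/container/object names)::

            /endpoints/v2/a/c/o -> (2.0, 'a', 'c', 'o')
            /endpoints/a/c      -> (1.0, 'a', 'c', None)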
        """
        clean_path = request.path[len(self.endpoints_path) - 1:]
        # try to peel off version
        try:
            raw_version, rest = split_path(clean_path, 1, 2, True)
        except ValueError:
            raise ValueError('No account specified')
        try:
            version = self._parse_version(raw_version)
        except ValueError:
            if raw_version.startswith('v') and '_' not in raw_version:
                # looks more like an invalid version than an account
                raise
            # probably no version specified, but if the client really
            # said /endpoints/v_3/account they'll probably be sorta
            # confused by the useless response and lack of error.
            version = self.default_response_version
            rest = clean_path
        else:
            rest = '/' + rest if rest else '/'
        try:
            account, container, obj = split_path(rest, 1, 3, True)
        except ValueError:
            raise ValueError('No account specified')
        return version, account, container, obj

    def v1_format_response(self, req, endpoints, **kwargs):
        return Response(json.dumps(endpoints),
                        content_type='application/json')

    def v2_format_response(self, req, endpoints, storage_policy_index,
                           **kwargs):
        resp = {
            'endpoints': endpoints,
            'headers': {},
        }
        if storage_policy_index is not None:
            resp['headers'][
                'X-Backend-Storage-Policy-Index'] = str(storage_policy_index)
        return Response(json.dumps(resp),
                        content_type='application/json')

    def __call__(self, env, start_response):
        request = Request(env)
        if not request.path.startswith(self.endpoints_path):
            return self.app(env, start_response)

        if request.method != 'GET':
            return HTTPMethodNotAllowed(
                req=request, headers={"Allow": "GET"})(env, start_response)

        try:
            version, account, container, obj = self._parse_path(request)
        except ValueError as err:
            return HTTPBadRequest(str(err))(env, start_response)

        if account is not None:
            account = unquote(account)
        if container is not None:
            container = unquote(container)
        if obj is not None:
            obj = unquote(obj)

        storage_policy_index = None
        if obj is not None:
            container_info = get_container_info(
                {'PATH_INFO': '/v1/%s/%s' % (account, container)},
                self.app, swift_source='LE')
            storage_policy_index = container_info['storage_policy']
            obj_ring = self.get_object_ring(storage_policy_index)
            partition, nodes = obj_ring.get_nodes(
                account, container, obj)
            endpoint_template = 'http://{ip}:{port}/{device}/{partition}/' + \
                                '{account}/{container}/{obj}'
        elif container is not None:
            partition, nodes = self.container_ring.get_nodes(
                account, container)
            endpoint_template = 'http://{ip}:{port}/{device}/{partition}/' + \
                                '{account}/{container}'
        else:
            partition, nodes = self.account_ring.get_nodes(
                account)
            endpoint_template = 'http://{ip}:{port}/{device}/{partition}/' + \
                                '{account}'

        endpoints = []
        for node in nodes:
            endpoint = endpoint_template.format(
                ip=node['ip'],
                port=node['port'],
                device=node['device'],
                partition=partition,
                account=quote(account),
                container=quote(container or ''),
                obj=quote(obj or ''))
            endpoints.append(endpoint)

        resp = self.response_map[version](
            request, endpoints=endpoints,
            storage_policy_index=storage_policy_index)
        return resp(env, start_response)


def filter_factory(global_conf, **local_conf):
    conf = global_conf.copy()
    conf.update(local_conf)

    def list_endpoints_filter(app):
        return ListEndpointsMiddleware(app, conf)

    return list_endpoints_filter
swift-2.7.1/swift/common/middleware/__init__.py0000664000567000056710000000000013024044352022653 0ustar  jenkinsjenkins00000000000000swift-2.7.1/swift/common/middleware/xprofile.py0000664000567000056710000002262713024044354022771 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Profiling middleware for Swift Servers.

The current implementation is based on an eventlet-aware profiler. (More
profilers could be added in the future to collect additional data for
analysis.) It profiles all incoming requests and accumulates cpu timing
statistics for performance tuning and optimization. A mini web UI is also
provided for analyzing the profiling data; it can be accessed at the URLs
below.

Index page for browsing profile data::

    http://SERVER_IP:PORT/__profile__

List all profiles, returning profile ids in json format::

    http://SERVER_IP:PORT/__profile__/
    http://SERVER_IP:PORT/__profile__/all

Retrieve specific profile data in different formats::

    http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
    http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
    http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]

Retrieve metrics from a specific function in json format::

    http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
    http://SERVER_IP:PORT/__profile__/current/NFL?format=json
    http://SERVER_IP:PORT/__profile__/all/NFL?format=json

    NFL is defined as the concatenation of the file name, function name and
    first line number.
    e.g.::
        account.py:50(GETorHEAD)
    or with full path:
        opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)

    A list of URL examples:

    http://localhost:8080/__profile__    (proxy server)
    http://localhost:6000/__profile__/all    (object server)
    http://localhost:6001/__profile__/current    (container server)
    http://localhost:6002/__profile__/12345?format=json    (account server)

The profiling middleware can be configured in the paste file for WSGI servers
such as the proxy, account, container and object servers. Please refer to the
sample configuration files in the etc directory.
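
A hypothetical paste filter section might look like the following (the egg
entry point name and the values shown are assumptions; the option names are
those read by this middleware)::

    [filter:xprofile]
    use = egg:swift#xprofile
    log_filename_prefix = /tmp/log/swift/profile/default.profile
    dump_interval = 5.0
    path = __profile__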

The profiling data is provided in four formats: binary (by default), json,
csv and ods spreadsheet; the spreadsheet format requires installing the
odfpy library.

    sudo pip install odfpy

There's also a simple visualization capability, enabled via the matplotlib
toolkit; matplotlib must be installed if you want to use it to visualize the
statistics.

    sudo apt-get install python-matplotlib
"""

import os
import sys
import time

from eventlet import greenthread, GreenPool, patcher
import eventlet.green.profile as eprofile
import six
from six.moves import urllib

from swift import gettext_ as _
from swift.common.utils import get_logger, config_true_value
from swift.common.swob import Request
from x_profile.exceptions import NotFoundException, MethodNotAllowed,\
    ProfileException
from x_profile.html_viewer import HTMLViewer
from x_profile.profile_model import ProfileLog


DEFAULT_PROFILE_PREFIX = '/tmp/log/swift/profile/default.profile'

# unwind the iterator; it may call start_response, do lots of work, etc
PROFILE_EXEC_EAGER = """
app_iter = self.app(environ, start_response)
app_iter_ = list(app_iter)
if hasattr(app_iter, 'close'):
    app_iter.close()
"""

# don't unwind the iterator (don't consume resources)
PROFILE_EXEC_LAZY = """
app_iter_ = self.app(environ, start_response)
"""

thread = patcher.original('thread')  # non-monkeypatched module needed


# This monkey patch fixes a problem in the eventlet profile tool, which
# cannot accumulate profiling results across multiple calls of runcall
# and runctx.
def new_setup(self):
    self._has_setup = True
    self.cur = None
    self.timings = {}
    self.current_tasklet = greenthread.getcurrent()
    self.thread_id = thread.get_ident()
    self.simulate_call("profiler")


def new_runctx(self, cmd, globals, locals):
    if not getattr(self, '_has_setup', False):
        self._setup()
    try:
        return self.base.runctx(self, cmd, globals, locals)
    finally:
        self.TallyTimings()


def new_runcall(self, func, *args, **kw):
    if not getattr(self, '_has_setup', False):
        self._setup()
    try:
        return self.base.runcall(self, func, *args, **kw)
    finally:
        self.TallyTimings()


class ProfileMiddleware(object):

    def __init__(self, app, conf):
        self.app = app
        self.logger = get_logger(conf, log_route='profile')
        self.log_filename_prefix = conf.get('log_filename_prefix',
                                            DEFAULT_PROFILE_PREFIX)
        dirname = os.path.dirname(self.log_filename_prefix)
        # Note: this may fail with a permission error; it is better to
        # create the directory and grant the current user access to it
        # in advance.
        if not os.path.exists(dirname):
            os.makedirs(dirname)
        self.dump_interval = float(conf.get('dump_interval', 5.0))
        self.dump_timestamp = config_true_value(conf.get(
            'dump_timestamp', 'no'))
        self.flush_at_shutdown = config_true_value(conf.get(
            'flush_at_shutdown', 'no'))
        self.path = conf.get('path', '__profile__').replace('/', '')
        self.unwind = config_true_value(conf.get('unwind', 'no'))
        self.profile_module = conf.get('profile_module',
                                       'eventlet.green.profile')
        self.profiler = get_profiler(self.profile_module)
        self.profile_log = ProfileLog(self.log_filename_prefix,
                                      self.dump_timestamp)
        self.viewer = HTMLViewer(self.path, self.profile_module,
                                 self.profile_log)
        self.dump_pool = GreenPool(1000)
        self.last_dump_at = None

    def __del__(self):
        if self.flush_at_shutdown:
            self.profile_log.clear(str(os.getpid()))

    def _combine_body_qs(self, request):
        wsgi_input = request.environ['wsgi.input']
        query_dict = request.params
        qs_in_body = wsgi_input.read()
        query_dict.update(urllib.parse.parse_qs(qs_in_body,
                                                keep_blank_values=True,
                                                strict_parsing=False))
        return query_dict

    def dump_checkpoint(self):
        current_time = time.time()
        if self.last_dump_at is None or self.last_dump_at +\
                self.dump_interval < current_time:
            self.dump_pool.spawn_n(self.profile_log.dump_profile,
                                   self.profiler, os.getpid())
            self.last_dump_at = current_time

    def __call__(self, environ, start_response):
        request = Request(environ)
        path_entry = request.path_info.split('/')
        # hijack the favicon request sent by the browser so that it doesn't
        # invoke the profiling hook and contaminate the data.
        if path_entry[1] == 'favicon.ico':
            start_response('200 OK', [])
            return ''
        elif path_entry[1] == self.path:
            try:
                self.dump_checkpoint()
                query_dict = self._combine_body_qs(request)
                content, headers = self.viewer.render(request.url,
                                                      request.method,
                                                      path_entry,
                                                      query_dict,
                                                      self.renew_profile)
                start_response('200 OK', headers)
                if isinstance(content, six.text_type):
                    content = content.encode('utf-8')
                return [content]
            except MethodNotAllowed as mx:
                start_response('405 Method Not Allowed', [])
                return '%s' % mx
            except NotFoundException as nx:
                start_response('404 Not Found', [])
                return '%s' % nx
            except ProfileException as pf:
                start_response('500 Internal Server Error', [])
                return '%s' % pf
            except Exception as ex:
                start_response('500 Internal Server Error', [])
                return _('Error on render profiling results: %s') % ex
        else:
            _locals = locals()
            code = self.unwind and PROFILE_EXEC_EAGER or\
                PROFILE_EXEC_LAZY
            self.profiler.runctx(code, globals(), _locals)
            app_iter = _locals['app_iter_']
            self.dump_checkpoint()
            return app_iter

    def renew_profile(self):
        self.profiler = get_profiler(self.profile_module)


def get_profiler(profile_module):
    if profile_module == 'eventlet.green.profile':
        eprofile.Profile._setup = new_setup
        eprofile.Profile.runctx = new_runctx
        eprofile.Profile.runcall = new_runcall
    # hack to import the profile module by name; also supported on python 2.6
    __import__(profile_module)
    return sys.modules[profile_module].Profile()


def filter_factory(global_conf, **local_conf):
    conf = global_conf.copy()
    conf.update(local_conf)

    def profile_filter(app):
        return ProfileMiddleware(app, conf)

    return profile_filter
swift-2.7.1/swift/common/middleware/memcache.py0000664000567000056710000001111113024044352022663 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from six.moves.configparser import ConfigParser, NoSectionError, NoOptionError

from swift.common.memcached import (MemcacheRing, CONN_TIMEOUT, POOL_TIMEOUT,
                                    IO_TIMEOUT, TRY_COUNT)


class MemcacheMiddleware(object):
    """
    Caching middleware that manages caching in swift.
    """

    def __init__(self, app, conf):
        self.app = app
        self.memcache_servers = conf.get('memcache_servers')
        serialization_format = conf.get('memcache_serialization_support')
        try:
            # Originally, while we documented using memcache_max_connections,
            # we only accepted max_connections
            max_conns = int(conf.get('memcache_max_connections',
                                     conf.get('max_connections', 0)))
        except ValueError:
            max_conns = 0

        memcache_options = {}
        if (not self.memcache_servers
                or serialization_format is None
                or max_conns <= 0):
            path = os.path.join(conf.get('swift_dir', '/etc/swift'),
                                'memcache.conf')
            memcache_conf = ConfigParser()
            if memcache_conf.read(path):
                # if memcache.conf exists we'll start with those base options
                try:
                    memcache_options = dict(memcache_conf.items('memcache'))
                except NoSectionError:
                    pass

                if not self.memcache_servers:
                    try:
                        self.memcache_servers = \
                            memcache_conf.get('memcache', 'memcache_servers')
                    except (NoSectionError, NoOptionError):
                        pass
                if serialization_format is None:
                    try:
                        serialization_format = \
                            memcache_conf.get('memcache',
                                              'memcache_serialization_support')
                    except (NoSectionError, NoOptionError):
                        pass
                if max_conns <= 0:
                    try:
                        new_max_conns = \
                            memcache_conf.get('memcache',
                                              'memcache_max_connections')
                        max_conns = int(new_max_conns)
                    except (NoSectionError, NoOptionError, ValueError):
                        pass

        # memcache.conf options form the base for this middleware, but if the
        # same option is also set in the filter section of the proxy config,
        # the filter value takes precedence because it is more specific.
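        # e.g. (hypothetical) if memcache.conf sets memcache_servers =
        # 10.0.0.1:11211 and the [filter:cache] section of the proxy config
        # sets memcache_servers = 10.0.0.2:11211, the proxy value wins.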
        memcache_options.update(conf)
        connect_timeout = float(memcache_options.get(
            'connect_timeout', CONN_TIMEOUT))
        pool_timeout = float(memcache_options.get(
            'pool_timeout', POOL_TIMEOUT))
        tries = int(memcache_options.get('tries', TRY_COUNT))
        io_timeout = float(memcache_options.get('io_timeout', IO_TIMEOUT))

        if not self.memcache_servers:
            self.memcache_servers = '127.0.0.1:11211'
        if max_conns <= 0:
            max_conns = 2
        if serialization_format is None:
            serialization_format = 2
        else:
            serialization_format = int(serialization_format)

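        # serialization_format mapping (as used below): 0 allows pickle for
        # both reads and writes, 1 allows unpickling existing entries but
        # writes json, 2 uses json only.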
        self.memcache = MemcacheRing(
            [s.strip() for s in self.memcache_servers.split(',') if s.strip()],
            connect_timeout=connect_timeout,
            pool_timeout=pool_timeout,
            tries=tries,
            io_timeout=io_timeout,
            allow_pickle=(serialization_format == 0),
            allow_unpickle=(serialization_format <= 1),
            max_conns=max_conns)

    def __call__(self, env, start_response):
        env['swift.cache'] = self.memcache
        return self.app(env, start_response)


def filter_factory(global_conf, **local_conf):
    conf = global_conf.copy()
    conf.update(local_conf)

    def cache_filter(app):
        return MemcacheMiddleware(app, conf)

    return cache_filter
swift-2.7.1/swift/common/middleware/ratelimit.py0000664000567000056710000003160613024044352023126 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
from swift import gettext_ as _

import eventlet

from swift.common.utils import cache_from_env, get_logger, register_swift_info
from swift.proxy.controllers.base import get_account_info, get_container_info
from swift.common.memcached import MemcacheConnectionError
from swift.common.swob import Request, Response


def interpret_conf_limits(conf, name_prefix, info=None):
    """
    Parses general rate limit params, looking for keys in the provided conf
    that start with the provided name_prefix, and returns lists for both
    internal use and for /info registration

    :param conf: conf dict to parse
    :param name_prefix: prefix of config params to look for
    :param info: set to also return information for /info registration
    """
    conf_limits = []
    for conf_key in conf:
        if conf_key.startswith(name_prefix):
            cont_size = int(conf_key[len(name_prefix):])
            rate = float(conf[conf_key])
            conf_limits.append((cont_size, rate))

    conf_limits.sort()
    ratelimits = []
    conf_limits_info = list(conf_limits)
    while conf_limits:
        cur_size, cur_rate = conf_limits.pop(0)
        if conf_limits:
            next_size, next_rate = conf_limits[0]
            slope = (float(next_rate) - float(cur_rate)) \
                / (next_size - cur_size)

            def new_scope(cur_size, slope, cur_rate):
                # create a new scope so cur_size, slope and cur_rate are
                # captured by value for this limit's line function
                return lambda x: (x - cur_size) * slope + cur_rate
            line_func = new_scope(cur_size, slope, cur_rate)
        else:
            line_func = lambda x: cur_rate

        ratelimits.append((cur_size, cur_rate, line_func))
    if info is None:
        return ratelimits
    else:
        return ratelimits, conf_limits_info


def get_maxrate(ratelimits, size):
    """
    Returns number of requests allowed per second for given size.
    """
    last_func = None
    if size:
        size = int(size)
        for ratesize, rate, func in ratelimits:
            if size < ratesize:
                break
            last_func = func
        if last_func:
            return last_func(size)
    return None


class MaxSleepTimeHitError(Exception):
    pass


class RateLimitMiddleware(object):
    """
    Rate limiting middleware

    Rate limits requests on both an Account and Container level.  Limits are
    configurable.
    """

    BLACK_LIST_SLEEP = 1

    def __init__(self, app, conf, logger=None):

        self.app = app
        self.logger = logger or get_logger(conf, log_route='ratelimit')
        self.memcache_client = None
        self.account_ratelimit = float(conf.get('account_ratelimit', 0))
        self.max_sleep_time_seconds = \
            float(conf.get('max_sleep_time_seconds', 60))
        self.log_sleep_time_seconds = \
            float(conf.get('log_sleep_time_seconds', 0))
        self.clock_accuracy = int(conf.get('clock_accuracy', 1000))
        self.rate_buffer_seconds = int(conf.get('rate_buffer_seconds', 5))
        self.ratelimit_whitelist = \
            [acc.strip() for acc in
                conf.get('account_whitelist', '').split(',') if acc.strip()]
        self.ratelimit_blacklist = \
            [acc.strip() for acc in
                conf.get('account_blacklist', '').split(',') if acc.strip()]
        self.container_ratelimits = interpret_conf_limits(
            conf, 'container_ratelimit_')
        self.container_listing_ratelimits = interpret_conf_limits(
            conf, 'container_listing_ratelimit_')

    def get_container_size(self, env):
        rv = 0
        container_info = get_container_info(
            env, self.app, swift_source='RL')
        if isinstance(container_info, dict):
            rv = container_info.get(
                'object_count', container_info.get('container_size', 0))
        return rv

    def get_ratelimitable_key_tuples(self, req, account_name,
                                     container_name=None, obj_name=None,
                                     global_ratelimit=None):
        """
        Returns a list of key (used in memcache), ratelimit tuples. Keys
        should be checked in order.

        :param req: swob request
        :param account_name: account name from path
        :param container_name: container name from path
        :param obj_name: object name from path
        :param global_ratelimit: this account has an account wide
                                 ratelimit on all writes combined
        """
        keys = []
        # COPYs are not limited

        if self.account_ratelimit and \
                account_name and container_name and not obj_name and \
                req.method in ('PUT', 'DELETE'):
            keys.append(("ratelimit/%s" % account_name,
                         self.account_ratelimit))

        if account_name and container_name and obj_name and \
                req.method in ('PUT', 'DELETE', 'POST', 'COPY'):
            container_size = self.get_container_size(req.environ)
            container_rate = get_maxrate(
                self.container_ratelimits, container_size)
            if container_rate:
                keys.append((
                    "ratelimit/%s/%s" % (account_name, container_name),
                    container_rate))

        if account_name and container_name and not obj_name and \
                req.method == 'GET':
            container_size = self.get_container_size(req.environ)
            container_rate = get_maxrate(
                self.container_listing_ratelimits, container_size)
            if container_rate:
                keys.append((
                    "ratelimit_listing/%s/%s" % (account_name, container_name),
                    container_rate))

        if account_name and req.method in ('PUT', 'DELETE', 'POST', 'COPY'):
            if global_ratelimit:
                try:
                    global_ratelimit = float(global_ratelimit)
                    if global_ratelimit > 0:
                        keys.append((
                            "ratelimit/global-write/%s" % account_name,
                            global_ratelimit))
                except ValueError:
                    pass

        return keys

    def _get_sleep_time(self, key, max_rate):
        '''
        Returns the amount of time (a float in seconds) that the app
        should sleep.

        :param key: a memcache key
        :param max_rate: maximum rate allowed in requests per second
        :raises: MaxSleepTimeHitError if max sleep time is exceeded.
        '''
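        # Illustrative numbers (hypothetical): with max_rate = 10 and
        # clock_accuracy = 1000, each request advances the stored running
        # clock by 100 units; if, after the increment, that clock is 250
        # units ahead of "now", this request sleeps for 0.15 seconds.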
        try:
            now_m = int(round(time.time() * self.clock_accuracy))
            time_per_request_m = int(round(self.clock_accuracy / max_rate))
            running_time_m = self.memcache_client.incr(
                key, delta=time_per_request_m)
            need_to_sleep_m = 0
            if (now_m - running_time_m >
                    self.rate_buffer_seconds * self.clock_accuracy):
                next_avail_time = int(now_m + time_per_request_m)
                self.memcache_client.set(key, str(next_avail_time),
                                         serialize=False)
            else:
                need_to_sleep_m = \
                    max(running_time_m - now_m - time_per_request_m, 0)

            max_sleep_m = self.max_sleep_time_seconds * self.clock_accuracy
            if max_sleep_m - need_to_sleep_m <= self.clock_accuracy * 0.01:
                # treat this request as a no-op: undo the increment before
                # rejecting it
                self.memcache_client.decr(key, delta=time_per_request_m)
                raise MaxSleepTimeHitError(
                    "Max Sleep Time Exceeded: %.2f" %
                    (float(need_to_sleep_m) / self.clock_accuracy))

            return float(need_to_sleep_m) / self.clock_accuracy
        except MemcacheConnectionError:
            return 0

    def handle_ratelimit(self, req, account_name, container_name, obj_name):
        '''
        Performs rate limiting and account white/black listing.  Sleeps
        if necessary. If self.memcache_client is not set, immediately returns
        None.

        :param account_name: account name from path
        :param container_name: container name from path
        :param obj_name: object name from path
        '''
        if not self.memcache_client:
            return None

        try:
            account_info = get_account_info(req.environ, self.app,
                                            swift_source='RL')
            account_global_ratelimit = \
                account_info.get('sysmeta', {}).get('global-write-ratelimit')
        except ValueError:
            account_global_ratelimit = None

        if account_name in self.ratelimit_whitelist or \
                account_global_ratelimit == 'WHITELIST':
            return None

        if account_name in self.ratelimit_blacklist or \
                account_global_ratelimit == 'BLACKLIST':
            self.logger.error(_('Returning 497 because of blacklisting: %s'),
                              account_name)
            eventlet.sleep(self.BLACK_LIST_SLEEP)
            return Response(status='497 Blacklisted',
                            body='Your account has been blacklisted',
                            request=req)

        for key, max_rate in self.get_ratelimitable_key_tuples(
                req, account_name, container_name=container_name,
                obj_name=obj_name, global_ratelimit=account_global_ratelimit):
            try:
                need_to_sleep = self._get_sleep_time(key, max_rate)
                if self.log_sleep_time_seconds and \
                        need_to_sleep > self.log_sleep_time_seconds:
                    self.logger.warning(
                        _("Ratelimit sleep log: %(sleep)s for "
                          "%(account)s/%(container)s/%(object)s"),
                        {'sleep': need_to_sleep, 'account': account_name,
                         'container': container_name, 'object': obj_name})
                if need_to_sleep > 0:
                    eventlet.sleep(need_to_sleep)
            except MaxSleepTimeHitError as e:
                self.logger.error(
                    _('Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s '
                      '. Ratelimit (Max Sleep) %(e)s'),
                    {'meth': req.method, 'acc': account_name,
                     'cont': container_name, 'obj': obj_name, 'e': str(e)})
                error_resp = Response(status='498 Rate Limited',
                                      body='Slow down', request=req)
                return error_resp
        return None

    def __call__(self, env, start_response):
        """
        WSGI entry point.
        Wraps env in swob.Request object and passes it down.

        :param env: WSGI environment dictionary
        :param start_response: WSGI callable
        """
        req = Request(env)
        if self.memcache_client is None:
            self.memcache_client = cache_from_env(env)
        if not self.memcache_client:
            self.logger.warning(
                _('Warning: Cannot ratelimit without a memcached client'))
            return self.app(env, start_response)
        try:
            version, account, container, obj = req.split_path(1, 4, True)
        except ValueError:
            return self.app(env, start_response)
        ratelimit_resp = self.handle_ratelimit(req, account, container, obj)
        if ratelimit_resp is None:
            return self.app(env, start_response)
        else:
            return ratelimit_resp(env, start_response)


def filter_factory(global_conf, **local_conf):
    """
    paste.deploy app factory for creating WSGI proxy apps.
    """
    conf = global_conf.copy()
    conf.update(local_conf)

    account_ratelimit = float(conf.get('account_ratelimit', 0))
    max_sleep_time_seconds = \
        float(conf.get('max_sleep_time_seconds', 60))
    container_ratelimits, cont_limit_info = interpret_conf_limits(
        conf, 'container_ratelimit_', info=1)
    container_listing_ratelimits, cont_list_limit_info = \
        interpret_conf_limits(conf, 'container_listing_ratelimit_', info=1)
    # not all limits are exposed (intentionally)
    register_swift_info('ratelimit',
                        account_ratelimit=account_ratelimit,
                        max_sleep_time_seconds=max_sleep_time_seconds,
                        container_ratelimits=cont_limit_info,
                        container_listing_ratelimits=cont_list_limit_info)

    def limit_filter(app):
        return RateLimitMiddleware(app, conf)

    return limit_filter
swift-2.7.1/swift/common/middleware/versioned_writes.py0000664000567000056710000006116013024044354024527 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Object versioning in swift is implemented by setting a flag on the container
to tell swift to version all objects in the container. The flag is the
``X-Versions-Location`` header on the container, and its value is the
container where the versions are stored. It is recommended to use a different
``X-Versions-Location`` container for each container that is being versioned.

When data is ``PUT`` into a versioned container (a container with the
versioning flag turned on), the existing data in the file is redirected to a
new object and the data in the ``PUT`` request is saved as the data for the
versioned object. The new object name (for the previous version) is
``<length><object_name>/<timestamp>``, where ``length``
is the 3-character zero-padded hexadecimal length of the ``<object_name>`` and
``<timestamp>`` is the timestamp of when the previous version was created.

A ``GET`` to a versioned object will return the current version of the object
without having to do any request redirects or metadata lookups.

A ``POST`` to a versioned object will update the object metadata as normal,
but will not create a new version of the object. In other words, new versions
are only created when the content of the object changes.

A ``DELETE`` to a versioned object will only remove the current version of the
object. If you have 5 total versions of the object, you must delete the
object 5 times to completely remove the object.

--------------------------------------------------
How to Enable Object Versioning in a Swift Cluster
--------------------------------------------------

This middleware was written as an effort to refactor parts of the proxy server,
so this functionality was already available in previous releases and every
attempt was made to maintain backwards compatibility. To allow operators to
perform a seamless upgrade, it is not required to add the middleware to the
proxy pipeline and the flag ``allow_versions`` in the container server
configuration files is still valid. In future releases, ``allow_versions``
will be deprecated in favor of adding this middleware to the pipeline to enable
or disable the feature.

In case the middleware is added to the proxy pipeline, you must also
set ``allow_versioned_writes`` to ``True`` in the middleware options
to enable the information about this middleware to be returned in a /info
request.
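
A hypothetical filter section for the proxy pipeline might look like the
following (the egg entry point name is assumed; the option name is the one
read by this middleware)::

    [filter:versioned_writes]
    use = egg:swift#versioned_writes
    allow_versioned_writes = true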

Upgrade considerations: If ``allow_versioned_writes`` is set in the filter
configuration, you can leave the ``allow_versions`` flag in the container
server configuration files untouched. If you decide to disable or remove the
``allow_versions`` flag, you must re-set any existing containers that had
the 'X-Versions-Location' flag configured so that it can now be tracked by the
versioned_writes middleware.

-----------------------
Examples Using ``curl``
-----------------------

First, create a container with the ``X-Versions-Location`` header or add the
header to an existing container. Also make sure the container referenced by
the ``X-Versions-Location`` exists. In this example, the name of that
container is "versions"::

    curl -i -XPUT -H "X-Auth-Token: " \
-H "X-Versions-Location: versions" http:///container
    curl -i -XPUT -H "X-Auth-Token: " http:///versions

Create an object (the first version)::

    curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject

Now create a new version of that object::

    curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject

See a listing of the older versions of the object::

    curl -i -H "X-Auth-Token: " \
http:///versions?prefix=008myobject/

Now delete the current version of the object and see that the older version is
gone from 'versions' container and back in 'container' container::

    curl -i -XDELETE -H "X-Auth-Token: " \
http:///container/myobject
    curl -i -H "X-Auth-Token: " \
http:///versions?prefix=008myobject/
    curl -i -XGET -H "X-Auth-Token: " \
http:///container/myobject

---------------------------------------------------
How to Disable Object Versioning in a Swift Cluster
---------------------------------------------------

If you want to disable all functionality, set ``allow_versioned_writes`` to
``False`` in the middleware options.

Disable versioning from a container (x is any value except empty)::

    curl -i -XPOST -H "X-Auth-Token: " \
-H "X-Remove-Versions-Location: x" http:///container
"""

import calendar
import json
import six
from six.moves.urllib.parse import quote, unquote
import time
from swift.common.utils import get_logger, Timestamp, \
    register_swift_info, config_true_value
from swift.common.request_helpers import get_sys_meta_prefix
from swift.common.wsgi import WSGIContext, make_pre_authed_request
from swift.common.swob import Request, HTTPException
from swift.common.constraints import (
    check_account_format, check_container_format, check_destination_header)
from swift.proxy.controllers.base import get_container_info
from swift.common.http import (
    is_success, is_client_error, HTTP_NOT_FOUND)
from swift.common.swob import HTTPPreconditionFailed, HTTPServiceUnavailable, \
    HTTPServerError
from swift.common.exceptions import (
    ListingIterNotFound, ListingIterError)


class VersionedWritesContext(WSGIContext):

    def __init__(self, wsgi_app, logger):
        WSGIContext.__init__(self, wsgi_app)
        self.logger = logger

    def _listing_iter(self, account_name, lcontainer, lprefix, req):
        try:
            for page in self._listing_pages_iter(account_name, lcontainer,
                                                 lprefix, req.environ):
                for item in page:
                    yield item
        except ListingIterNotFound:
            pass
        except HTTPPreconditionFailed:
            raise HTTPPreconditionFailed(request=req)
        except ListingIterError:
            raise HTTPServerError(request=req)

    def _in_proxy_reverse_listing(self, account_name, lcontainer, lprefix,
                                  env, failed_marker, failed_listing):
        '''Get the complete prefix listing and reverse it on the proxy.

        This is only necessary if we encounter a response from a
        container-server that does not respect the ``reverse`` param
        included by default in ``_listing_pages_iter``. This may happen
        during rolling upgrades from pre-2.6.0 swift.

        :param failed_marker: the marker that was used when we encountered
                              the non-reversed listing
        :param failed_listing: the non-reversed listing that was encountered.
                               If ``failed_marker`` is blank, we can use this
                               to save ourselves a request
        :returns: an iterator over all objects starting with ``lprefix`` (up
                  to but not including the failed marker) in reverse order
        '''
        complete_listing = []
        if not failed_marker:
            # We've never gotten a reversed listing. So save a request and
            # use the failed listing.
            complete_listing.extend(failed_listing)
            marker = complete_listing[-1]['name'].encode('utf8')
        else:
            # We've gotten at least one reversed listing. Have to start at
            # the beginning.
            marker = ''

        # First, take the *entire* prefix listing into memory
        try:
            for page in self._listing_pages_iter(
                    account_name, lcontainer, lprefix,
                    env, marker, end_marker=failed_marker, reverse=False):
                complete_listing.extend(page)
        except ListingIterNotFound:
            pass

        # Now that we've got everything, return the whole listing as one giant
        # reversed page
        return reversed(complete_listing)

    def _listing_pages_iter(self, account_name, lcontainer, lprefix,
                            env, marker='', end_marker='', reverse=True):
        '''Get "pages" worth of objects that start with a prefix.

        The optional keyword arguments ``marker``, ``end_marker``, and
        ``reverse`` are used similar to how they are for containers. We're
        either coming:

           - directly from ``_listing_iter``, in which case none of the
             optional args are specified, or

           - from ``_in_proxy_reverse_listing``, in which case ``reverse``
             is ``False`` and both ``marker`` and ``end_marker`` are specified
             (although they may still be blank).
        '''
        while True:
            lreq = make_pre_authed_request(
                env, method='GET', swift_source='VW',
                path='/v1/%s/%s' % (account_name, lcontainer))
            lreq.environ['QUERY_STRING'] = \
                'format=json&prefix=%s&marker=%s' % (
                    quote(lprefix), quote(marker))
            if end_marker:
                lreq.environ['QUERY_STRING'] += '&end_marker=%s' % (
                    quote(end_marker))
            if reverse:
                lreq.environ['QUERY_STRING'] += '&reverse=on'
            lresp = lreq.get_response(self.app)
            if not is_success(lresp.status_int):
                if lresp.status_int == HTTP_NOT_FOUND:
                    raise ListingIterNotFound()
                elif is_client_error(lresp.status_int):
                    raise HTTPPreconditionFailed()
                else:
                    raise ListingIterError()

            if not lresp.body:
                break

            sublisting = json.loads(lresp.body)
            if not sublisting:
                break

            # When using the ``reverse`` param, check that the listing is
            # actually reversed
            first_item = sublisting[0]['name'].encode('utf-8')
            last_item = sublisting[-1]['name'].encode('utf-8')
            page_is_after_marker = marker and first_item > marker
            if reverse and (first_item < last_item or page_is_after_marker):
                # Apparently there's at least one pre-2.6.0 container server
                yield self._in_proxy_reverse_listing(
                    account_name, lcontainer, lprefix,
                    env, marker, sublisting)
                return

            marker = last_item
            yield sublisting

    def handle_obj_versions_put(self, req, object_versions,
                                object_name, policy_index):
        ret = None

        # do a HEAD request to check object versions
        _headers = {'X-Newest': 'True',
                    'X-Backend-Storage-Policy-Index': policy_index,
                    'x-auth-token': req.headers.get('x-auth-token')}

        # make a pre-authed request in case the user has write access
        # to the container but not read access. This was allowed before
        # this middleware existed, so keep the same behavior here.
        head_req = make_pre_authed_request(
            req.environ, path=req.path_info,
            headers=_headers, method='HEAD', swift_source='VW')
        hresp = head_req.get_response(self.app)

        is_dlo_manifest = 'X-Object-Manifest' in req.headers or \
                          'X-Object-Manifest' in hresp.headers

        # if there's an existing object, then copy it to
        # X-Versions-Location
        if is_success(hresp.status_int) and not is_dlo_manifest:
            lcontainer = object_versions.split('/')[0]
            prefix_len = '%03x' % len(object_name)
            lprefix = prefix_len + object_name + '/'
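            # Versioned copies are named <length-prefix><object-name>/<ts>,
            # where the length prefix is the object name's length as three
            # hex digits. For example (illustrative), an object named
            # "myobj" (5 chars) gives lprefix "005myobj/" and an archived
            # copy such as "005myobj/1451533440.00000".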
            ts_source = hresp.environ.get('swift_x_timestamp')
            if ts_source is None:
                ts_source = calendar.timegm(time.strptime(
                                            hresp.headers['last-modified'],
                                            '%a, %d %b %Y %H:%M:%S GMT'))
            new_ts = Timestamp(ts_source).internal
            vers_obj_name = lprefix + new_ts
            copy_headers = {
                'Destination': '%s/%s' % (lcontainer, vers_obj_name),
                'x-auth-token': req.headers.get('x-auth-token')}

            # The COPY implementation sets X-Newest to True when it
            # internally does a GET on the source object, so we don't have
            # to explicitly set it in the request headers here.
            copy_req = make_pre_authed_request(
                req.environ, path=req.path_info,
                headers=copy_headers, method='COPY', swift_source='VW')
            copy_resp = copy_req.get_response(self.app)

            if is_success(copy_resp.status_int):
                # successfully versioned the existing object;
                # return None and handle the original request
                ret = None
            else:
                if is_client_error(copy_resp.status_int):
                    # missing container or bad permissions
                    ret = HTTPPreconditionFailed(request=req)
                else:
                    # could not copy the data, bail
                    ret = HTTPServiceUnavailable(request=req)

        else:
            if hresp.status_int == HTTP_NOT_FOUND or is_dlo_manifest:
                # nothing to version
                # return None and handle original request
                ret = None
            else:
                # if not HTTP_NOT_FOUND, return error immediately
                ret = hresp

        return ret

    def handle_obj_versions_delete(self, req, object_versions,
                                   account_name, container_name, object_name):
        lcontainer = object_versions.split('/')[0]
        prefix_len = '%03x' % len(object_name)
        lprefix = prefix_len + object_name + '/'

        item_iter = self._listing_iter(account_name, lcontainer, lprefix, req)

        authed = False
        for previous_version in item_iter:
            if not authed:
                # we're about to start making COPY requests - need to
                # validate the write access to the versioned container
                if 'swift.authorize' in req.environ:
                    container_info = get_container_info(
                        req.environ, self.app)
                    req.acl = container_info.get('write_acl')
                    aresp = req.environ['swift.authorize'](req)
                    if aresp:
                        return aresp
                    authed = True

            # there are older versions so copy the previous version to the
            # current object and delete the previous version
            prev_obj_name = previous_version['name'].encode('utf-8')

            copy_path = '/v1/' + account_name + '/' + \
                        lcontainer + '/' + prev_obj_name

            copy_headers = {'X-Newest': 'True',
                            'Destination': container_name + '/' + object_name,
                            'x-auth-token': req.headers.get('x-auth-token')}

            copy_req = make_pre_authed_request(
                req.environ, path=copy_path,
                headers=copy_headers, method='COPY', swift_source='VW')
            copy_resp = copy_req.get_response(self.app)

            # if the version isn't there, keep trying with previous version
            if copy_resp.status_int == HTTP_NOT_FOUND:
                continue

            if not is_success(copy_resp.status_int):
                if is_client_error(copy_resp.status_int):
                    # some user error, maybe permissions
                    return HTTPPreconditionFailed(request=req)
                else:
                    # could not copy the data, bail
                    return HTTPServiceUnavailable(request=req)

            # reset these because the COPY changed them
            new_del_req = make_pre_authed_request(
                req.environ, path=copy_path, method='DELETE',
                swift_source='VW')
            req = new_del_req

            # remove 'X-If-Delete-At', since it is not for the older copy
            if 'X-If-Delete-At' in req.headers:
                del req.headers['X-If-Delete-At']
            break

        # handle DELETE request here in case it was modified
        return req.get_response(self.app)

    def handle_container_request(self, env, start_response):
        app_resp = self._app_call(env)
        if self._response_headers is None:
            self._response_headers = []
        sysmeta_version_hdr = get_sys_meta_prefix('container') + \
            'versions-location'
        location = ''
        for key, val in self._response_headers:
            if key.lower() == sysmeta_version_hdr:
                location = val

        if location:
            self._response_headers.extend([('X-Versions-Location', location)])

        start_response(self._response_status,
                       self._response_headers,
                       self._response_exc_info)
        return app_resp


class VersionedWritesMiddleware(object):

    def __init__(self, app, conf):
        self.app = app
        self.conf = conf
        self.logger = get_logger(conf, log_route='versioned_writes')

    def container_request(self, req, start_response, enabled):
        sysmeta_version_hdr = get_sys_meta_prefix('container') + \
            'versions-location'

        # set version location header as sysmeta
        if 'X-Versions-Location' in req.headers:
            val = req.headers.get('X-Versions-Location')
            if val:
                # Unlike previous versions, we now return an error if the
                # user tries to set the versions location while the feature
                # is explicitly disabled.
                if not config_true_value(enabled) and \
                        req.method in ('PUT', 'POST'):
                    raise HTTPPreconditionFailed(
                        request=req, content_type='text/plain',
                        body='Versioned Writes is disabled')

                location = check_container_format(req, val)
                req.headers[sysmeta_version_hdr] = location

                # reset the original header so that sysmeta becomes the only
                # source of the versions location
                req.headers['X-Versions-Location'] = ''

                # if both headers are in the same request, adding a location
                # takes precedence over removing one
                if 'X-Remove-Versions-Location' in req.headers:
                    del req.headers['X-Remove-Versions-Location']
            else:
                # empty value is the same as X-Remove-Versions-Location
                req.headers['X-Remove-Versions-Location'] = 'x'

        # handle removing versions container
        val = req.headers.get('X-Remove-Versions-Location')
        if val:
            req.headers.update({sysmeta_version_hdr: ''})
            req.headers.update({'X-Versions-Location': ''})
            del req.headers['X-Remove-Versions-Location']

        # send request and translate sysmeta headers from response
        vw_ctx = VersionedWritesContext(self.app, self.logger)
        return vw_ctx.handle_container_request(req.environ, start_response)

    def object_request(self, req, version, account, container, obj,
                       allow_versioned_writes):
        account_name = unquote(account)
        container_name = unquote(container)
        object_name = unquote(obj)
        container_info = None
        resp = None
        is_enabled = config_true_value(allow_versioned_writes)
        if req.method in ('PUT', 'DELETE'):
            container_info = get_container_info(
                req.environ, self.app)
        elif req.method == 'COPY' and 'Destination' in req.headers:
            if 'Destination-Account' in req.headers:
                account_name = req.headers.get('Destination-Account')
                account_name = check_account_format(req, account_name)
            container_name, object_name = check_destination_header(req)
            req.environ['PATH_INFO'] = "/%s/%s/%s/%s" % (
                version, account_name, container_name, object_name)
            container_info = get_container_info(
                req.environ, self.app)

        if not container_info:
            return self.app

        # To maintain backwards compatibility, the container's versions
        # location may be stored as sysmeta or not, so check both. If it is
        # stored as sysmeta, honor it only if the middleware is enabled. If
        # sysmeta is not set but the 'versions' property is set in
        # container_info, the feature is enabled for backwards compatibility.
        object_versions = container_info.get(
            'sysmeta', {}).get('versions-location')
        if object_versions and isinstance(object_versions, six.text_type):
            object_versions = object_versions.encode('utf-8')
        elif not object_versions:
            object_versions = container_info.get('versions')
            # if allow_versioned_writes is not set in the configuration files
            # but 'versions' is configured, enable feature to maintain
            # backwards compatibility
            if not allow_versioned_writes and object_versions:
                is_enabled = True

        if is_enabled and object_versions:
            object_versions = unquote(object_versions)
            vw_ctx = VersionedWritesContext(self.app, self.logger)
            if req.method in ('PUT', 'COPY'):
                policy_idx = req.headers.get(
                    'X-Backend-Storage-Policy-Index',
                    container_info['storage_policy'])
                resp = vw_ctx.handle_obj_versions_put(
                    req, object_versions, object_name, policy_idx)
            else:  # handle DELETE
                resp = vw_ctx.handle_obj_versions_delete(
                    req, object_versions, account_name,
                    container_name, object_name)

        if resp:
            return resp
        else:
            return self.app

    def __call__(self, env, start_response):
        # making a duplicate, because if this is a COPY request, we will
        # modify the PATH_INFO to find out if the 'Destination' is in a
        # versioned container
        req = Request(env.copy())
        try:
            (version, account, container, obj) = req.split_path(3, 4, True)
        except ValueError:
            return self.app(env, start_response)

        # If allow_versioned_writes is set in the filter configuration, the
        # middleware becomes the authority on whether object versioning is
        # enabled or not. If it is not set, the option in the container
        # configuration is still checked for backwards compatibility.

        # For a container request, first just check whether the option is
        # set at all (it may be either true or false). If set, check whether
        # the feature is enabled when actually trying to set the container
        # header. If not set, let the request be handled by the container
        # server for backwards compatibility.
        # For an object request, also check whether the option is set (true
        # or false). If set, check whether the feature is enabled when
        # checking the versions container in the sysmeta property. If it is
        # not set, check the 'versions' property in container_info.
        allow_versioned_writes = self.conf.get('allow_versioned_writes')
        if allow_versioned_writes and container and not obj:
            try:
                return self.container_request(req, start_response,
                                              allow_versioned_writes)
            except HTTPException as error_response:
                return error_response(env, start_response)
        elif obj and req.method in ('PUT', 'COPY', 'DELETE'):
            try:
                return self.object_request(
                    req, version, account, container, obj,
                    allow_versioned_writes)(env, start_response)
            except HTTPException as error_response:
                return error_response(env, start_response)
        else:
            return self.app(env, start_response)


def filter_factory(global_conf, **local_conf):
    conf = global_conf.copy()
    conf.update(local_conf)
    if config_true_value(conf.get('allow_versioned_writes')):
        register_swift_info('versioned_writes')

    def obj_versions_filter(app):
        return VersionedWritesMiddleware(app, conf)

    return obj_versions_filter
swift-2.7.1/swift/common/middleware/recon.py0000664000567000056710000004103213024044354022236 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import errno
import json
import os
import time
from swift import gettext_ as _

from swift import __version__ as swiftver
from swift.common.storage_policy import POLICIES
from swift.common.swob import Request, Response
from swift.common.utils import get_logger, config_true_value, \
    SWIFT_CONF_FILE
from swift.common.constraints import check_mount
from resource import getpagesize
from hashlib import md5


class ReconMiddleware(object):
    """
    Recon middleware used for monitoring.

    /recon/load|mem|async... will return various system metrics.

    Needs to be added to the pipeline and requires a filter
    declaration in the object-server.conf:

    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift
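
    For example, a GET of ``/recon/diskusage`` on an object server returns a
    JSON list with one entry per device, of the form (values are
    illustrative)::

        [{"device": "sdb1", "mounted": true,
          "size": 100000000, "used": 40000000, "avail": 60000000}]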
    """

    def __init__(self, app, conf, *args, **kwargs):
        self.app = app
        self.devices = conf.get('devices', '/srv/node')
        swift_dir = conf.get('swift_dir', '/etc/swift')
        self.logger = get_logger(conf, log_route='recon')
        self.recon_cache_path = conf.get('recon_cache_path',
                                         '/var/cache/swift')
        self.object_recon_cache = os.path.join(self.recon_cache_path,
                                               'object.recon')
        self.container_recon_cache = os.path.join(self.recon_cache_path,
                                                  'container.recon')
        self.account_recon_cache = os.path.join(self.recon_cache_path,
                                                'account.recon')
        self.drive_recon_cache = os.path.join(self.recon_cache_path,
                                              'drive.recon')
        self.account_ring_path = os.path.join(swift_dir, 'account.ring.gz')
        self.container_ring_path = os.path.join(swift_dir, 'container.ring.gz')

        self.rings = [self.account_ring_path, self.container_ring_path]
        # include all object ring files (for all policies)
        for policy in POLICIES:
            self.rings.append(os.path.join(swift_dir,
                                           policy.ring_name + '.ring.gz'))

        self.mount_check = config_true_value(conf.get('mount_check', 'true'))

    def _from_recon_cache(self, cache_keys, cache_file, openr=open):
        """retrieve values from a recon cache file

        :params cache_keys: list of cache items to retrieve
        :params cache_file: cache file to retrieve items from.
        :params openr: open to use [for unittests]
        :return: dict of cache items and their values or none if not found
        """
        try:
            with openr(cache_file, 'r') as f:
                recondata = json.load(f)
                return dict((key, recondata.get(key)) for key in cache_keys)
        except IOError:
            self.logger.exception(_('Error reading recon cache file'))
        except ValueError:
            self.logger.exception(_('Error parsing recon cache file'))
        except Exception:
            self.logger.exception(_('Error retrieving recon data'))
        return dict((key, None) for key in cache_keys)

    def get_version(self):
        """get swift version"""
        verinfo = {'version': swiftver}
        return verinfo

    def get_mounted(self, openr=open):
        """get ALL mounted fs from /proc/mounts"""
        mounts = []
        with openr('/proc/mounts', 'r') as procmounts:
            for line in procmounts:
                mount = {}
                mount['device'], mount['path'], opt1, opt2, opt3, \
                    opt4 = line.rstrip().split()
                mounts.append(mount)
        return mounts

    def get_load(self, openr=open):
        """get info from /proc/loadavg"""
        loadavg = {}
        with openr('/proc/loadavg', 'r') as f:
            onemin, fivemin, ftmin, tasks, procs = f.read().rstrip().split()
        loadavg['1m'] = float(onemin)
        loadavg['5m'] = float(fivemin)
        loadavg['15m'] = float(ftmin)
        loadavg['tasks'] = tasks
        loadavg['processes'] = int(procs)
        return loadavg

    def get_mem(self, openr=open):
        """get info from /proc/meminfo"""
        meminfo = {}
        with openr('/proc/meminfo', 'r') as memlines:
            for i in memlines:
                entry = i.rstrip().split(":")
                meminfo[entry[0]] = entry[1].strip()
        return meminfo

    def get_async_info(self):
        """get # of async pendings"""
        return self._from_recon_cache(['async_pending'],
                                      self.object_recon_cache)

    def get_driveaudit_error(self):
        """get # of drive audit errors"""
        return self._from_recon_cache(['drive_audit_errors'],
                                      self.drive_recon_cache)

    def get_replication_info(self, recon_type):
        """get replication info"""
        replication_list = ['replication_time',
                            'replication_stats',
                            'replication_last']
        if recon_type == 'account':
            return self._from_recon_cache(replication_list,
                                          self.account_recon_cache)
        elif recon_type == 'container':
            return self._from_recon_cache(replication_list,
                                          self.container_recon_cache)
        elif recon_type == 'object':
            replication_list += ['object_replication_time',
                                 'object_replication_last']
            return self._from_recon_cache(replication_list,
                                          self.object_recon_cache)
        else:
            return None

    def get_device_info(self):
        """get devices"""
        try:
            return {self.devices: os.listdir(self.devices)}
        except Exception:
            self.logger.exception(_('Error listing devices'))
            return {self.devices: None}

    def get_updater_info(self, recon_type):
        """get updater info"""
        if recon_type == 'container':
            return self._from_recon_cache(['container_updater_sweep'],
                                          self.container_recon_cache)
        elif recon_type == 'object':
            return self._from_recon_cache(['object_updater_sweep'],
                                          self.object_recon_cache)
        else:
            return None

    def get_expirer_info(self, recon_type):
        """get expirer info"""
        if recon_type == 'object':
            return self._from_recon_cache(['object_expiration_pass',
                                           'expired_last_pass'],
                                          self.object_recon_cache)

    def get_auditor_info(self, recon_type):
        """get auditor info"""
        if recon_type == 'account':
            return self._from_recon_cache(['account_audits_passed',
                                           'account_auditor_pass_completed',
                                           'account_audits_since',
                                           'account_audits_failed'],
                                          self.account_recon_cache)
        elif recon_type == 'container':
            return self._from_recon_cache(['container_audits_passed',
                                           'container_auditor_pass_completed',
                                           'container_audits_since',
                                           'container_audits_failed'],
                                          self.container_recon_cache)
        elif recon_type == 'object':
            return self._from_recon_cache(['object_auditor_stats_ALL',
                                           'object_auditor_stats_ZBF'],
                                          self.object_recon_cache)
        else:
            return None

    def get_unmounted(self):
        """list unmounted (failed?) devices"""
        mountlist = []
        for entry in os.listdir(self.devices):
            if not os.path.isdir(os.path.join(self.devices, entry)):
                continue

            try:
                mounted = check_mount(self.devices, entry)
            except OSError as err:
                mounted = str(err)
            mpoint = {'device': entry, 'mounted': mounted}
            if mpoint['mounted'] is not True:
                mountlist.append(mpoint)
        return mountlist

    def get_diskusage(self):
        """get disk utilization statistics"""
        devices = []
        for entry in os.listdir(self.devices):
            if not os.path.isdir(os.path.join(self.devices, entry)):
                continue

            try:
                mounted = check_mount(self.devices, entry)
            except OSError as err:
                devices.append({'device': entry, 'mounted': str(err),
                                'size': '', 'used': '', 'avail': ''})
                continue

            if mounted:
                path = os.path.join(self.devices, entry)
                disk = os.statvfs(path)
                capacity = disk.f_bsize * disk.f_blocks
                available = disk.f_bsize * disk.f_bavail
                used = disk.f_bsize * (disk.f_blocks - disk.f_bavail)
                devices.append({'device': entry, 'mounted': True,
                                'size': capacity, 'used': used,
                                'avail': available})
            else:
                devices.append({'device': entry, 'mounted': False,
                                'size': '', 'used': '', 'avail': ''})
        return devices

    def get_ring_md5(self, openr=open):
        """get all ring md5sum's"""
        sums = {}
        for ringfile in self.rings:
            md5sum = md5()
            if os.path.exists(ringfile):
                try:
                    with openr(ringfile, 'rb') as f:
                        block = f.read(4096)
                        while block:
                            md5sum.update(block)
                            block = f.read(4096)
                    sums[ringfile] = md5sum.hexdigest()
                except IOError as err:
                    sums[ringfile] = None
                    if err.errno != errno.ENOENT:
                        self.logger.exception(_('Error reading ringfile'))
        return sums

    def get_swift_conf_md5(self, openr=open):
        """get md5 of swift.conf"""
        md5sum = md5()
        try:
            with openr(SWIFT_CONF_FILE, 'r') as fh:
                chunk = fh.read(4096)
                while chunk:
                    md5sum.update(chunk)
                    chunk = fh.read(4096)
        except IOError as err:
            if err.errno != errno.ENOENT:
                self.logger.exception(_('Error reading swift.conf'))
            hexsum = None
        else:
            hexsum = md5sum.hexdigest()
        return {SWIFT_CONF_FILE: hexsum}

    def get_quarantine_count(self):
        """get obj/container/account quarantine counts"""
        qcounts = {"objects": 0, "containers": 0, "accounts": 0,
                   "policies": {}}
        qdir = "quarantined"
        for device in os.listdir(self.devices):
            qpath = os.path.join(self.devices, device, qdir)
            if os.path.exists(qpath):
                for qtype in os.listdir(qpath):
                    qtgt = os.path.join(qpath, qtype)
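                    # A directory's link count is 2 plus one per
                    # subdirectory, so st_nlink - 2 approximates the number
                    # of quarantined entries under this type directory.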
                    linkcount = os.lstat(qtgt).st_nlink
                    if linkcount > 2:
                        if qtype.startswith('objects'):
                            if '-' in qtype:
                                pkey = qtype.split('-', 1)[1]
                            else:
                                pkey = '0'
                            qcounts['policies'].setdefault(pkey,
                                                           {'objects': 0})
                            qcounts['policies'][pkey]['objects'] \
                                += linkcount - 2
                            qcounts['objects'] += linkcount - 2
                        else:
                            qcounts[qtype] += linkcount - 2
        return qcounts

    def get_socket_info(self, openr=open):
        """
        get info from /proc/net/sockstat and sockstat6

        Note: The mem value is actually kernel pages, but we return bytes
        allocated based on the system's page size.
        """
        sockstat = {}
        try:
            with openr('/proc/net/sockstat', 'r') as proc_sockstat:
                for entry in proc_sockstat:
                    if entry.startswith("TCP: inuse"):
                        tcpstats = entry.split()
                        sockstat['tcp_in_use'] = int(tcpstats[2])
                        sockstat['orphan'] = int(tcpstats[4])
                        sockstat['time_wait'] = int(tcpstats[6])
                        sockstat['tcp_mem_allocated_bytes'] = \
                            int(tcpstats[10]) * getpagesize()
        except IOError as e:
            if e.errno != errno.ENOENT:
                raise
        try:
            with openr('/proc/net/sockstat6', 'r') as proc_sockstat6:
                for entry in proc_sockstat6:
                    if entry.startswith("TCP6: inuse"):
                        sockstat['tcp6_in_use'] = int(entry.split()[2])
        except IOError as e:
            if e.errno != errno.ENOENT:
                raise
        return sockstat

    def get_time(self):
        """get current time"""

        return time.time()

    def GET(self, req):
        root, rcheck, rtype = req.split_path(1, 3, True)
        all_rtypes = ['account', 'container', 'object']
        if rcheck == "mem":
            content = self.get_mem()
        elif rcheck == "load":
            content = self.get_load()
        elif rcheck == "async":
            content = self.get_async_info()
        elif rcheck == 'replication' and rtype in all_rtypes:
            content = self.get_replication_info(rtype)
        elif rcheck == 'replication' and rtype is None:
            # handle old style object replication requests
            content = self.get_replication_info('object')
        elif rcheck == "devices":
            content = self.get_device_info()
        elif rcheck == "updater" and rtype in ['container', 'object']:
            content = self.get_updater_info(rtype)
        elif rcheck == "auditor" and rtype in all_rtypes:
            content = self.get_auditor_info(rtype)
        elif rcheck == "expirer" and rtype == 'object':
            content = self.get_expirer_info(rtype)
        elif rcheck == "mounted":
            content = self.get_mounted()
        elif rcheck == "unmounted":
            content = self.get_unmounted()
        elif rcheck == "diskusage":
            content = self.get_diskusage()
        elif rcheck == "ringmd5":
            content = self.get_ring_md5()
        elif rcheck == "swiftconfmd5":
            content = self.get_swift_conf_md5()
        elif rcheck == "quarantined":
            content = self.get_quarantine_count()
        elif rcheck == "sockstat":
            content = self.get_socket_info()
        elif rcheck == "version":
            content = self.get_version()
        elif rcheck == "driveaudit":
            content = self.get_driveaudit_error()
        elif rcheck == "time":
            content = self.get_time()
        else:
            content = "Invalid path: %s" % req.path
            return Response(request=req, status="404 Not Found",
                            body=content, content_type="text/plain")
        if content is not None:
            return Response(request=req, body=json.dumps(content),
                            content_type="application/json")
        else:
            return Response(request=req, status="500 Server Error",
                            body="Internal server error.",
                            content_type="text/plain")

    def __call__(self, env, start_response):
        req = Request(env)
        if req.path.startswith('/recon/'):
            return self.GET(req)(env, start_response)
        else:
            return self.app(env, start_response)


def filter_factory(global_conf, **local_conf):
    conf = global_conf.copy()
    conf.update(local_conf)

    def recon_filter(app):
        return ReconMiddleware(app, conf)
    return recon_filter
swift-2.7.1/swift/common/middleware/container_sync.py0000664000567000056710000001430513024044354024151 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from swift.common.container_sync_realms import ContainerSyncRealms
from swift.common.swob import HTTPBadRequest, HTTPUnauthorized, wsgify
from swift.common.utils import (
    config_true_value, get_logger, register_swift_info, streq_const_time)
from swift.proxy.controllers.base import get_container_info


class ContainerSync(object):
    """
    WSGI middleware that validates an incoming container sync request
    using the container-sync-realms.conf style of container sync.
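
    A minimal filter section for the proxy pipeline might look like this
    (a sketch; the ``current`` value is illustrative)::

        [filter:container_sync]
        use = egg:swift#container_sync
        # current = //REALM/CLUSTER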
    """

    def __init__(self, app, conf, logger=None):
        self.app = app
        self.conf = conf
        self.logger = logger or get_logger(conf, log_route='container_sync')
        self.realms_conf = ContainerSyncRealms(
            os.path.join(
                conf.get('swift_dir', '/etc/swift'),
                'container-sync-realms.conf'),
            self.logger)
        self.allow_full_urls = config_true_value(
            conf.get('allow_full_urls', 'true'))
        # configure current realm/cluster for /info
        self.realm = self.cluster = None
        current = conf.get('current', None)
        if current:
            try:
                self.realm, self.cluster = (p.upper() for p in
                                            current.strip('/').split('/'))
            except ValueError:
                self.logger.error('Invalid current //REALM/CLUSTER (%s)',
                                  current)
        self.register_info()

    def register_info(self):
        dct = {}
        for realm in self.realms_conf.realms():
            clusters = self.realms_conf.clusters(realm)
            if clusters:
                dct[realm] = {'clusters': dict((c, {}) for c in clusters)}
        if self.realm and self.cluster:
            try:
                dct[self.realm]['clusters'][self.cluster]['current'] = True
            except KeyError:
                self.logger.error('Unknown current //REALM/CLUSTER (%s)',
                                  '//%s/%s' % (self.realm, self.cluster))
        register_swift_info('container_sync', realms=dct)

    @wsgify
    def __call__(self, req):
        if not self.allow_full_urls:
            sync_to = req.headers.get('x-container-sync-to')
            if sync_to and not sync_to.startswith('//'):
                raise HTTPBadRequest(
                    body='Full URLs are not allowed for X-Container-Sync-To '
                         'values. Only realm values of the format '
                         '//realm/cluster/account/container are allowed.\n',
                    request=req)
        auth = req.headers.get('x-container-sync-auth')
        if auth:
            valid = False
            auth = auth.split()
            if len(auth) != 3:
                req.environ.setdefault('swift.log_info', []).append(
                    'cs:not-3-args')
            else:
                realm, nonce, sig = auth
                realm_key = self.realms_conf.key(realm)
                realm_key2 = self.realms_conf.key2(realm)
                if not realm_key:
                    req.environ.setdefault('swift.log_info', []).append(
                        'cs:no-local-realm-key')
                else:
                    info = get_container_info(
                        req.environ, self.app, swift_source='CS')
                    user_key = info.get('sync_key')
                    if not user_key:
                        req.environ.setdefault('swift.log_info', []).append(
                            'cs:no-local-user-key')
                    else:
                        # x-timestamp headers get shunted by gatekeeper
                        if 'x-backend-inbound-x-timestamp' in req.headers:
                            req.headers['x-timestamp'] = req.headers.pop(
                                'x-backend-inbound-x-timestamp')

                        expected = self.realms_conf.get_sig(
                            req.method, req.path,
                            req.headers.get('x-timestamp', '0'), nonce,
                            realm_key, user_key)
                        expected2 = self.realms_conf.get_sig(
                            req.method, req.path,
                            req.headers.get('x-timestamp', '0'), nonce,
                            realm_key2, user_key) if realm_key2 else expected
                        if not streq_const_time(sig, expected) and \
                                not streq_const_time(sig, expected2):
                            req.environ.setdefault(
                                'swift.log_info', []).append('cs:invalid-sig')
                        else:
                            req.environ.setdefault(
                                'swift.log_info', []).append('cs:valid')
                            valid = True
            if not valid:
                exc = HTTPUnauthorized(
                    body='X-Container-Sync-Auth header not valid; '
                         'contact cluster operator for support.',
                    headers={'content-type': 'text/plain'},
                    request=req)
                exc.headers['www-authenticate'] = ' '.join([
                    'SwiftContainerSync',
                    exc.www_authenticate().split(None, 1)[1]])
                raise exc
            else:
                req.environ['swift.authorize_override'] = True
        if req.path == '/info':
            # Ensure /info requests get the freshest results
            self.register_info()
        return self.app


def filter_factory(global_conf, **local_conf):
    conf = global_conf.copy()
    conf.update(local_conf)
    register_swift_info('container_sync')

    def cache_filter(app):
        return ContainerSync(app, conf)

    return cache_filter
swift-2.7.1/swift/common/middleware/acl.py0000664000567000056710000002626013024044354021675 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json

from swift.common.utils import urlparse


def clean_acl(name, value):
    """
    Returns a cleaned ACL header value, validating that it meets the formatting
    requirements for standard Swift ACL strings.

    The ACL format is::

        [item[,item...]]

    Each item can be a group name to give access to or a referrer designation
    to grant or deny based on the HTTP Referer header.

    The referrer designation format is::

        .r:[-]value

    The ``.r`` can also be written as ``.ref``, ``.referer``, or
    ``.referrer``, though it will be shortened to just ``.r`` to reduce the
    header's character count.

    The value can be ``*`` to specify any referrer host is allowed access, a
    specific host name like ``www.example.com``, or if it has a leading period
    ``.`` or leading ``*.`` it is a domain name specification, like
    ``.example.com`` or ``*.example.com``. The leading minus sign ``-``
    indicates referrer hosts that should be denied access.

    Referrer access is applied in the order they are specified. For example,
    .r:.example.com,.r:-thief.example.com would allow all hosts ending with
    .example.com except for the specific host thief.example.com.

    Example valid ACLs::

        .r:*
        .r:*,.r:-.thief.com
        .r:*,.r:.example.com,.r:-thief.example.com
        .r:*,.r:-.thief.com,bobs_account,sues_account:sue
        bobs_account,sues_account:sue

    Example invalid ACLs::

        .r:
        .r:-

    By default, allowing read access via .r will not allow listing objects in
    the container -- just retrieving objects from the container. To turn on
    listings, use the .rlistings directive.

    Also, .r designations aren't allowed in headers whose names include the
    word 'write'.

    ACLs that are "messy" will be cleaned up. Examples:

    ======================  ======================
    Original                Cleaned
    ----------------------  ----------------------
    ``bob, sue``            ``bob,sue``
    ``bob , sue``           ``bob,sue``
    ``bob,,,sue``           ``bob,sue``
    ``.referrer : *``       ``.r:*``
    ``.ref:*.example.com``  ``.r:.example.com``
    ``.r:*, .rlistings``    ``.r:*,.rlistings``
    ======================  ======================
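
    For example, a minimal sketch of cleaning a header value (the header
    name and value here are illustrative)::

        >>> clean_acl('X-Container-Read', '.referrer : * , bobs_account')
        '.r:*,bobs_account'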

    :param name: The name of the header being cleaned, such as X-Container-Read
                 or X-Container-Write.
    :param value: The value of the header being cleaned.
    :returns: The value, cleaned of extraneous formatting.
    :raises ValueError: If the value does not meet the ACL formatting
                        requirements; the error message will indicate why.
    """
    name = name.lower()
    values = []
    for raw_value in value.split(','):
        raw_value = raw_value.strip()
        if not raw_value:
            continue
        if ':' not in raw_value:
            values.append(raw_value)
            continue
        first, second = (v.strip() for v in raw_value.split(':', 1))
        if not first or not first.startswith('.'):
            values.append(raw_value)
        elif first in ('.r', '.ref', '.referer', '.referrer'):
            if 'write' in name:
                raise ValueError('Referrers not allowed in write ACL: '
                                 '%s' % repr(raw_value))
            negate = False
            if second and second.startswith('-'):
                negate = True
                second = second[1:].strip()
            if second and second != '*' and second.startswith('*'):
                second = second[1:].strip()
            if not second or second == '.':
                raise ValueError('No host/domain value after referrer '
                                 'designation in ACL: %s' % repr(raw_value))
            values.append('.r:%s%s' % ('-' if negate else '', second))
        else:
            raise ValueError('Unknown designator %s in ACL: %s' %
                             (repr(first), repr(raw_value)))
    return ','.join(values)


def format_acl_v1(groups=None, referrers=None, header_name=None):
    """
    Returns a standard Swift ACL string for the given inputs.

    The caller is responsible for ensuring that the ``referrers`` parameter is
    only given if the ACL is being generated for X-Container-Read.
    (X-Container-Write and the account ACL headers don't support referrers.)

    :param groups: a list of groups (and/or members in most auth systems) to
                   grant access
    :param referrers: a list of referrer designations (without the leading .r:)
    :param header_name: (optional) header name of the ACL we're preparing, for
                        clean_acl; if None, returned ACL won't be cleaned
    :returns: a Swift ACL string for use in X-Container-{Read,Write},
              X-Account-Access-Control, etc.
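
    For example, a small sketch (the group and referrer values are
    illustrative)::

        >>> format_acl_v1(groups=['bobs_account'], referrers=['*'],
        ...               header_name='X-Container-Read')
        'bobs_account,.r:*'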
    """
    groups, referrers = groups or [], referrers or []
    referrers = ['.r:%s' % r for r in referrers]
    result = ','.join(groups + referrers)
    return (clean_acl(header_name, result) if header_name else result)


def format_acl_v2(acl_dict):
    """
    Returns a version-2 Swift ACL JSON string.

    HTTP headers for Version 2 ACLs have the following form:
      Header-Name: {"arbitrary":"json","encoded":"string"}

    JSON will be forced ASCII (containing six-char \uNNNN sequences rather
    than UTF-8; UTF-8 is valid JSON but clients vary in their support for
    UTF-8 headers), and without extraneous whitespace.

    Advantages over V1: forward compatibility (new keys don't cause parsing
    exceptions); Unicode support; no reserved words (you can have a user
    named .rlistings if you want).
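
    For example, a small sketch (the dict contents are illustrative)::

        >>> format_acl_v2({'admin': ['alice'], 'read-only': ['bob']})
        '{"admin":["alice"],"read-only":["bob"]}'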

    :param acl_dict: dict of arbitrary data to put in the ACL; see specific
                     auth systems such as tempauth for supported values
    :returns: a JSON string which encodes the ACL
    """
    return json.dumps(acl_dict, ensure_ascii=True, separators=(',', ':'),
                      sort_keys=True)


def format_acl(version=1, **kwargs):
    """
    Compatibility wrapper to help migrate ACL syntax from version 1 to 2.
    Delegates to the appropriate version-specific format_acl method, defaulting
    to version 1 for backward compatibility.

    :param kwargs: keyword args appropriate for the selected ACL syntax version
                   (see :func:`format_acl_v1` or :func:`format_acl_v2`)
    """
    if version == 1:
        return format_acl_v1(
            groups=kwargs.get('groups'), referrers=kwargs.get('referrers'),
            header_name=kwargs.get('header_name'))
    elif version == 2:
        return format_acl_v2(kwargs.get('acl_dict'))
    raise ValueError("Invalid ACL version: %r" % version)


def parse_acl_v1(acl_string):
    """
    Parses a standard Swift ACL string into a referrers list and groups list.

    See :func:`clean_acl` for documentation of the standard Swift ACL format.

    :param acl_string: The standard Swift ACL string to parse.
    :returns: A tuple of (referrers, groups) where referrers is a list of
              referrer designations (without the leading .r:) and groups is a
              list of groups to allow access.
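
    For example, a small sketch (the ACL string is illustrative)::

        >>> parse_acl_v1('.r:*,.r:-thief.example.com,bobs_account')
        (['*', '-thief.example.com'], ['bobs_account'])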
    """
    referrers = []
    groups = []
    if acl_string:
        for value in acl_string.split(','):
            if value.startswith('.r:'):
                referrers.append(value[len('.r:'):])
            else:
                groups.append(value)
    return referrers, groups


def parse_acl_v2(data):
    """
    Parses a version-2 Swift ACL string and returns a dict of ACL info.

    :param data: string containing the ACL data in JSON format
    :returns: A dict (possibly empty) containing ACL info, e.g.:
              {"groups": [...], "referrers": [...]}
    :returns: None if data is None, is not valid JSON or does not parse
        as a dict
    :returns: empty dictionary if data is an empty string
    """
    if data is None:
        return None
    if data == '':
        return {}
    try:
        result = json.loads(data)
        return (result if type(result) is dict else None)
    except ValueError:
        return None


def parse_acl(*args, **kwargs):
    """
    Compatibility wrapper to help migrate ACL syntax from version 1 to 2.
    Delegates to the appropriate version-specific parse_acl method, attempting
    to determine the version from the types of args/kwargs.

    :param args: positional args for the selected ACL syntax version
    :param kwargs: keyword args for the selected ACL syntax version
                   (see :func:`parse_acl_v1` or :func:`parse_acl_v2`)
    :returns: the return value of :func:`parse_acl_v1` or :func:`parse_acl_v2`
    """
    version = kwargs.pop('version', None)
    if version in (1, None):
        return parse_acl_v1(*args)
    elif version == 2:
        return parse_acl_v2(*args, **kwargs)
    else:
        raise ValueError('Unknown ACL version: parse_acl(%r, %r)' %
                         (args, kwargs))


def referrer_allowed(referrer, referrer_acl):
    """
    Returns True if the referrer should be allowed based on the referrer_acl
    list (as returned by :func:`parse_acl`).

    See :func:`clean_acl` for documentation of the standard Swift ACL format.

    :param referrer: The value of the HTTP Referer header.
    :param referrer_acl: The list of referrer designations as returned by
                         :func:`parse_acl`.
    :returns: True if the referrer should be allowed; False if not.
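
    For example, a small sketch (the referrer and ACL list are
    illustrative)::

        >>> referrer_allowed('http://www.example.com/index.html',
        ...                  ['*', '-thief.example.com'])
        True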
    """
    allow = False
    if referrer_acl:
        rhost = urlparse(referrer or '').hostname or 'unknown'
        for mhost in referrer_acl:
            if mhost.startswith('-'):
                mhost = mhost[1:]
                if mhost == rhost or (mhost.startswith('.') and
                                      rhost.endswith(mhost)):
                    allow = False
            elif mhost == '*' or mhost == rhost or \
                    (mhost.startswith('.') and rhost.endswith(mhost)):
                allow = True
    return allow


def acls_from_account_info(info):
    """
    Extract the account ACLs from the given account_info, and return the ACLs.

    :param info: a dict of the form returned by get_account_info
    :returns: None (no ACL system metadata is set), or a dict of the form::
       {'admin': [...], 'read-write': [...], 'read-only': [...]}

    :raises ValueError: on a syntactically invalid header
    """
    acl = parse_acl(
        version=2, data=info.get('sysmeta', {}).get('core-access-control'))
    if acl is None:
        return None
    admin_members = acl.get('admin', [])
    readwrite_members = acl.get('read-write', [])
    readonly_members = acl.get('read-only', [])
    if not any((admin_members, readwrite_members, readonly_members)):
        return None
    return {
        'admin': admin_members,
        'read-write': readwrite_members,
        'read-only': readonly_members,
    }
swift-2.7.1/swift/common/middleware/staticweb.py0000664000567000056710000005446313024044354023131 0ustar  jenkinsjenkins00000000000000# Copyright (c) 2010-2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
This StaticWeb WSGI middleware will serve container data as a static web site
with index file and error file resolution and optional file listings. This mode
is normally only active for anonymous requests. When using keystone for
authentication set ``delay_auth_decision = true`` in the authtoken middleware
configuration in your ``/etc/swift/proxy-server.conf`` file.  If you want to
use it with authenticated requests, set the ``X-Web-Mode: true`` header on the
request.

The ``staticweb`` filter should be added to the pipeline in your
``/etc/swift/proxy-server.conf`` file just after any auth middleware. Also, the
configuration section for the ``staticweb`` middleware itself needs to be
added. For example::

    [DEFAULT]
    ...

    [pipeline:main]
    pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth
               staticweb proxy-logging proxy-server

    ...

    [filter:staticweb]
    use = egg:swift#staticweb

Any publicly readable containers (for example, ``X-Container-Read: .r:*``, see
:ref:`acls` for more information on this) will be checked for
X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values::

    X-Container-Meta-Web-Index  <index.name>
    X-Container-Meta-Web-Error  <error.name.suffix>

If X-Container-Meta-Web-Index is set, any <index.name> files will be served
without having to specify the <index.name> part. For instance, setting
``X-Container-Meta-Web-Index: index.html`` will be able to serve the object
.../pseudo/path/index.html with just .../pseudo/path or .../pseudo/path/

If X-Container-Meta-Web-Error is set, any errors (currently just 401
Unauthorized and 404 Not Found) will instead serve the
.../<status.code><error.name.suffix> object. For instance, setting
``X-Container-Meta-Web-Error: error.html`` will serve .../404error.html for
requests for paths not found.

For pseudo paths that have no <index.name>, this middleware can serve HTML file
listings if you set the ``X-Container-Meta-Web-Listings: true`` metadata item
on the container.

If listings are enabled, the listings can have a custom style sheet by setting
the X-Container-Meta-Web-Listings-CSS header. For instance, setting
``X-Container-Meta-Web-Listings-CSS: listing.css`` will make listings link to
the .../listing.css style sheet. If you "view source" in your browser on a
listing page, you will see the well defined document structure that can be
styled.

By default, the listings will be rendered with a label of
"Listing of /v1/account/container/path".  This can be altered by
setting a ``X-Container-Meta-Web-Listings-Label: