Published 2025. 2. 9. 22:41
[Cloud] Openstack - 3. Block Node Project

1. [Cloud] Openstack - Caracal Deployment Overview
2. [Cloud] Openstack - 1-1. Controller Node (Preprocess, Environment, Keystone, Glance)
3. [Cloud] Openstack - 1-2. Controller Node (Placement, Nova, Neutron)
4. [Cloud] Openstack - 1-3. Controller Node (Cinder, Swift)
5. [Cloud] Openstack - 1-4. Controller Node (Horizon)
6. [Cloud] Openstack - 2. Compute Node
7. [Cloud] Openstack - 3. Block Node ←
8. [Cloud] Openstack - 4. Horizon Dashboard Console Improvements

Preprocess


vi /etc/hosts

127.0.0.1       localhost
#127.0.1.1      block-virtual-machine

192.168.2.10    controller

192.168.2.20    compute1
192.168.2.10    compute2
192.168.2.30    compute3

192.168.2.30    block
192.168.2.30    swift
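
With the hosts file in place, name resolution from the block node can be sanity-checked; a quick check, assuming the controller is reachable on the management network:

ping -c 1 controller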

Network Setting

- Disable NetworkManager

update-rc.d -f NetworkManager remove
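
Note that update-rc.d only touches SysV init scripts; on Ubuntu 22.04, which uses systemd, the same intent is usually expressed as:

systemctl disable --now NetworkManager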

- Configure netplan

vi /etc/netplan/01-network-manager-all.yaml

# Let NetworkManager manage all devices on this system
network:
  ethernets:
    ens33:
      addresses:
        - 192.168.2.30/24
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
      routes:
        - to: default
          via: 192.168.2.1
    ens34:
      dhcp4: false
  version: 2

netplan apply
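
After applying, it is worth confirming that ens33 picked up the static address and the default route, for example:

ip addr show ens33
ip route show default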

- Partition /dev/sdb

fdisk /dev/sdb
	p : print the partition table
	n : create a new partition
	w : write changes and exit

Split /dev/sdb1 and /dev/sdb2 roughly 50 : 50 (sdb1 will go to Cinder LVM, sdb2 to Swift).
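
A quick way to confirm the resulting layout before moving on (exact sizes depend on the disk):

lsblk /dev/sdb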

 

Environment


 

 

Reference: Environment — Installation Guide documentation (docs.openstack.org)

NTP Server

- Install chrony

apt install chrony -y

- vi /etc/chrony/chrony.conf

server controller iburst

- Restart the NTP service

service chrony restart
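
To verify that chrony is actually syncing against the controller, check the source list; controller should appear, ideally marked with ^* or ^+ once it is selected:

chronyc sources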

OpenStack packages for Ubuntu

- OpenStack 2024.1 Caracal for Ubuntu 22.04 LTS

add-apt-repository cloud-archive:caracal

- Client Installation

apt install python3-openstackclient
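
A minimal sanity check that the client installed correctly:

openstack --version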

 

Neutron (Network)


 

 

Reference: Install and configure compute node — Neutron 23.3.1.dev15 documentation (docs.openstack.org)

- Install the components

apt install neutron-openvswitch-agent

- vi /etc/neutron/neutron.conf

[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
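
The change above only takes effect once the agent is restarted. The upstream compute-node guide also configures the ML2 Open vSwitch agent itself (/etc/neutron/plugins/ml2/openvswitch_agent.ini), which is not shown here; assuming that file is already set up the same way as on the compute nodes, a restart is enough:

service neutron-openvswitch-agent restart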

 

Storage Node (Cinder) Installation


apt install lvm2 thin-provisioning-tools -y

- Turn the disk used for storage into the cinder-volumes volume group

pvcreate /dev/sdb1
vgcreate cinder-volumes /dev/sdb1

 

- What if LVM partitions already exist?

# List all existing partitions with fdisk
fdisk -l
# Delete any leftover cinder-mapping-* partitions
lvremove /dev/cinder-mapping~~~

If they still won't delete:
	fuser -kuc /dev/cinder-volumes/volume-39b4bd66-938b-4297-a621-868696fcf20b
	lvremove -f /dev/cinder-volumes/volume-39b4bd66-938b-4297-a621-868696fcf20b
<https://itguava.tistory.com/105>
If a volume keeps lingering in the dashboard, or is stuck detaching forever, force-remove it in the database:
openstack volume list
mysql
	use cinder;
	update volumes set deleted=1,status='deleted',deleted_at=now(),updated_at=now() where deleted=0 and id='1f4518ec-7ec7-45d8-940b-4c0ffdeb0ca8';

# Check the LVM volume groups
vgdisplay

# Now the volume group can be removed
vgremove -f <vg_name>

# You could also inspect the PVs with pvdisplay and remove them, but since the PV is just something like /dev/sdb, that is usually unnecessary.
- LVM configuration

vi /etc/lvm/lvm.conf
devices {
	#...
	filter = ["a|/dev/sdb1|", "r|.*|"]
}
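
One caveat from the upstream storage-node guide: if the operating system disk of this node also sits on LVM, its device must be accepted by the filter as well, otherwise the OS volumes become invisible to the LVM tools. A sketch assuming a hypothetical OS disk at /dev/sda:

devices {
	# accept the OS disk and the Cinder partition, reject everything else
	filter = ["a|/dev/sda|", "a|/dev/sdb1|", "r|.*|"]
}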

 

apt install cinder-volume tgt -y

 

Cinder configuration

vi /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = lioadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
enabled_backends = lvm

my_ip = 192.168.2.30
transport_url = rabbit://openstack:RABBIT_PASS@controller
glance_api_servers = http://controller:9292

backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = http://controller:8080/v1

[database]
#connection = sqlite:////var/lib/cinder/cinder.sqlite
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

vi /etc/tgt/conf.d/cinder.conf
	include /var/lib/cinder/volumes/*

vi /etc/tgt/conf.d/cinder_tgt.conf
	include /var/lib/cinder/volumes/*

service tgt restart
service cinder-volume restart
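
Once tgt and cinder-volume are running, the block node should show up from the controller (run with admin credentials sourced); the cinder-volume service for a host like block@lvm is expected to report as up:

openstack volume service list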

 

Backup Service


- Install the packages

apt install cinder-backup

- Edit the /etc/cinder/cinder.conf file and complete the following actions:

[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = http://controller:8080/v1

- The backup_swift_url value above is the URL of the Object Storage service. It can be found by showing the object-store API endpoints:

openstack catalog show object-store

- Restart the Block Storage backup service:

service cinder-backup restart

 

Swift (Object Storage)


 

 

Reference: Object Storage Install Guide — Swift 2.35.0.dev148 documentation (docs.openstack.org)

Install and configure the storage nodes

Reference: Install and configure the storage nodes — Swift 2.35.0.dev148 documentation (docs.openstack.org)

Prerequisites

1. Install the supporting utility packages:

apt-get install xfsprogs rsync -y

2. Format the /dev/sdb2 device as XFS:

mkfs.xfs -f /dev/sdb2

	meta-data=/dev/sdb2              isize=512    agcount=4, agsize=3276736 blks
	         =                       sectsz=512   attr=2, projid32bit=1
	         =                       crc=1        finobt=1, sparse=1, rmapbt=0
	         =                       reflink=1    bigtime=0 inobtcount=0
	data     =                       bsize=4096   blocks=13106944, imaxpct=25
	         =                       sunit=0      swidth=0 blks
	naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
	log      =internal log           bsize=4096   blocks=6399, version=2
	         =                       sectsz=512   sunit=0 blks, lazy-count=1
	realtime =none                   extsz=4096   blocks=0, rtextents=0

3. Create the mount point directory structure:

mkdir -p /srv/node/sdb2

4. Find the UUID of the new partitions:

blkid

	/dev/sdb2: UUID="b080bf66-66e9-453d-8fb7-c1dfe691fc93" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="cbcd9e31-02"
	/dev/sdb1: UUID="Lg01lO-sjkd-rciO-LfDY-SUbS-o4SQ-bESsdW" TYPE="LVM2_member" PARTUUID="cbcd9e31-01"

5. Edit the /etc/fstab file and add the following to it

/dev/disk/by-uuid/b080bf66-66e9-453d-8fb7-c1dfe691fc93 /srv/node/sdb2 xfs defaults,noatime,nodiratime 0 0

6. Mount the devices:

mount /srv/node/sdb2
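
A quick check that the XFS partition is mounted where Swift expects it:

df -h /srv/node/sdb2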

7. Create or edit the /etc/rsyncd.conf file to contain the following

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.2.30

[account]
path            = /srv/node
read only       = false
write only      = no
list            = yes
incoming chmod  = 0644
outgoing chmod  = 0644
max connections = 25
lock file =     /var/lock/account.lock


[container]
path            = /srv/node
read only       = false
write only      = no
list            = yes
incoming chmod  = 0644
outgoing chmod  = 0644
max connections = 25
lock file =     /var/lock/container.lock

[object]
path            = /srv/node
read only       = false
write only      = no
list            = yes
incoming chmod  = 0644
outgoing chmod  = 0644
max connections = 25
lock file =     /var/lock/object.lock

[swift_server]
path            = /etc/swift
read only       = true
write only      = no
list            = yes
incoming chmod  = 0644
outgoing chmod  = 0644
max connections = 5
lock file =     /var/lock/swift_server.lock

8. Edit the /etc/default/rsync file and enable the rsync service

RSYNC_ENABLE=true

9. Start the rsync service:

service rsync start
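
If rsync started correctly, listing the modules it exports from this node should show the account, container, object, and swift_server sections defined above:

rsync 192.168.2.30::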

Install and configure components

1. Install the packages:

apt-get install swift swift-account swift-container swift-object

2. Edit the /etc/swift/account-server.conf file and complete the following actions

[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
log_level = DEBUG
reclaim_age = 604800
# keep_idle = 600
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# Hashing algorithm for log anonymization. Must be one of algorithms supported
# by Python's hashlib.
# log_anonymization_method = MD5
#
# Salt added during log anonymization
# log_anonymization_salt =
#
# Template used to format logs. All words surrounded by curly brackets
# will be substituted with the appropriate values
# log_format = {remote_addr} - - [{time.d}/{time.b}/{time.Y}:{time.H}:{time.M}:{time.S} +0000] "{method} {path}" {status} {content_length} "{referer}" "{txn_id}" "{user_agent}" {trans_time:.4f} "{additional_info}" {pid} {policy_index}
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space usage in overhead, you can turn this
# on to preallocate disk space with SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# Enable this option to log all sqlite3 queries (requires python >=3.3)
# db_query_logging = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account
# You can override the default log routing for this app here:
# set log_name = account-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# You can disable REPLICATE handling (default is to allow it). When deploying
# a cluster with a separate replication network, you'll want multiple
# account-server processes running: one for client-driven traffic and another
# for replication traffic. The server handling client-driven traffic may set
# this to false. If there is only one account-server process, leave this as
# true.
# replication_server = true
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
#
# You can set fallocate_reserve to the number of bytes or percentage
# of disk space you'd like kept free at all times. If the disk's free
# space falls below this value, then PUT, POST, and REPLICATE requests
# will be denied until the disk has more space available. Percentage
# will be used if the value ends with a '%'.
# fallocate_reserve = 1%

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[filter:backend_ratelimit]
use = egg:swift#backend_ratelimit
# Config options can optionally be loaded from a separate config file. Config
# options in this section will be used unless the same option is found in the
# config file, in which case the config file option will be used. See the
# backend-ratelimit.conf-sample file for details of available config options.
# backend_ratelimit_conf_path = /etc/swift/backend-ratelimit.conf

# The minimum interval between attempts to reload any config file at
# backend_ratelimit_conf_path while the server is running. A value of 0 means
# that the file is loaded at start-up but not subsequently reloaded. Note that
# config options in this section are never reloaded after start-up.
# config_reload_interval = 60

[account-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
# interval = 30.0
# run_pause is deprecated, use interval instead
# run_pause = 30.0
#
# Process at most this many databases per second
# databases_per_second = 50
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::account
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
#
# The handoffs_only and handoff_delete options are for special-case emergency
# situations such as full disks in the cluster. These options SHOULD NOT
# BE ENABLED except in emergencies. When handoffs_only mode is enabled
# the replicator will *only* replicate from handoff nodes to primary
# nodes and will not sync primary nodes with other primary nodes.
#
# This has two main effects: first, the replicator becomes much more
# effective at removing misplaced databases, thereby freeing up disk
# space at a much faster pace than normal. Second, the replicator does
# not sync data between primary nodes, so out-of-sync account and
# container listings will not resolve while handoffs_only is enabled.
#
# This mode is intended to allow operators to temporarily sacrifice
# consistency in order to gain faster rebalancing, such as during a
# capacity addition with nearly-full disks. It is not intended for
# long-term use.
#
# handoffs_only = no
#
# handoff_delete is the number of replicas which are ensured in swift.
# If the number less than the number of replicas is set, account-replicator
# could delete local handoffs even if all replicas are not ensured in the
# cluster. The replicator would remove local handoff account database after
# syncing when the number of successful responses is greater than or equal to
# this number. By default(auto), handoff partitions will be
# removed  when it has successfully replicated to all the canonical nodes.
# handoff_delete = auto

[account-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each account at most once per interval
# interval = 1800.0
#
# accounts_per_second = 200
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[account-reaper]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-reaper
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# concurrency = 25
# interval = 3600.0
# node_timeout = 10
# conn_timeout = 0.5
#
# Normally, the reaper begins deleting account information for deleted accounts
# immediately; you can set this to delay its work however. The value is in
# seconds; 2592000 = 30 days for example. The sum of this value and the
# container-updater interval should be less than the account-replicator
# reclaim_age. This ensures that once the account-reaper has deleted a
# container there is sufficient time for the container-updater to report to the
# account before the account DB is removed.
# delay_reaping = 0
#
# If the account fails to be reaped due to a persistent error, the
# account reaper will log a message such as:
#     Account <name> has not been reaped since <date>
# You can search logs for this message if space is not being reclaimed
# after you delete account(s).
# Default is 2592000 seconds (30 days). This is in addition to any time
# requested by delay_reaping.
# reap_warn_after = 2592000
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers which should inherit from python
# standard profiler. Currently the supported value can be 'cProfile',
# 'eventlet.green.profile' etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/account.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful, this option will enable profiler to dump data into the file with
# time stamp which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shutdown.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false

3. Edit the /etc/swift/container-server.conf file and complete the following actions

[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
log_level = DEBUG
reclaim_age = 604800
# keep_idle = 600
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# This is a comma separated list of hosts allowed in the X-Container-Sync-To
# field for containers. This is the old-style of using container sync. It is
# strongly recommended to use the new style of a separate
# container-sync-realms.conf -- see container-sync-realms.conf-sample
# allowed_sync_hosts = 127.0.0.1
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# Hashing algorithm for log anonymization. Must be one of algorithms supported
# by Python's hashlib.
# log_anonymization_method = MD5
#
# Salt added during log anonymization
# log_anonymization_salt =
#
# Template used to format logs. All words surrounded by curly brackets
# will be substituted with the appropriate values
# log_format = {remote_addr} - - [{time.d}/{time.b}/{time.Y}:{time.H}:{time.M}:{time.S} +0000] "{method} {path}" {status} {content_length} "{referer}" "{txn_id}" "{user_agent}" {trans_time:.4f} "{additional_info}" {pid} {policy_index}
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space usage in overhead, you can turn this
# on to preallocate disk space with SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# Enable this option to log all sqlite3 queries (requires python >=3.3)
# db_query_logging = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container
# You can override the default log routing for this app here:
# set log_name = container-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# node_timeout = 3
# conn_timeout = 0.5
# allow_versions = false
#
# You can disable REPLICATE handling (default is to allow it). When deploying
# a cluster with a separate replication network, you'll want multiple
# container-server processes running: one for client-driven traffic and another
# for replication traffic. The server handling client-driven traffic may set
# this to false. If there is only one container-server process, leave this as
# true.
# replication_server = true
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
#
# You can set fallocate_reserve to the number of bytes or percentage
# of disk space you'd like kept free at all times. If the disk's free
# space falls below this value, then PUT, POST, and REPLICATE requests
# will be denied until the disk has more space available. Percentage
# will be used if the value ends with a '%'.
# fallocate_reserve = 1%

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[filter:backend_ratelimit]
use = egg:swift#backend_ratelimit
# Config options can optionally be loaded from a separate config file. Config
# options in this section will be used unless the same option is found in the
# config file, in which case the config file option will be used. See the
# backend-ratelimit.conf-sample file for details of available config options.
# backend_ratelimit_conf_path = /etc/swift/backend-ratelimit.conf

# The minimum interval between attempts to reload any config file at
# backend_ratelimit_conf_path while the server is running. A value of 0 means
# that the file is loaded at start-up but not subsequently reloaded. Note that
# config options in this section are never reloaded after start-up.
# config_reload_interval = 60

[container-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
# interval = 30.0
# run_pause is deprecated, use interval instead
# run_pause = 30.0
#
# Process at most this many databases per second
# databases_per_second = 50
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::container
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
#
# The handoffs_only and handoff_delete options are for special-case emergency
# situations such as full disks in the cluster. These options SHOULD NOT
# BE ENABLED except in emergencies. When handoffs_only mode is enabled
# the replicator will *only* replicate from handoff nodes to primary
# nodes and will not sync primary nodes with other primary nodes.
#
# This has two main effects: first, the replicator becomes much more
# effective at removing misplaced databases, thereby freeing up disk
# space at a much faster pace than normal. Second, the replicator does
# not sync data between primary nodes, so out-of-sync account and
# container listings will not resolve while handoffs_only is enabled.
#
# This mode is intended to allow operators to temporarily sacrifice
# consistency in order to gain faster rebalancing, such as during a
# capacity addition with nearly-full disks. It is not intended for
# long-term use.
#
# handoffs_only = no
#
# handoff_delete is the number of replicas which are ensured in swift.
# If the number less than the number of replicas is set, container-replicator
# could delete local handoffs even if all replicas are not ensured in the
# cluster. The replicator would remove local handoff container database after
# syncing when the number of successful responses is greater than or equal to
# this number. By default(auto), handoff partitions will be
# removed  when it has successfully replicated to all the canonical nodes.
# handoff_delete = auto

[container-updater]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-updater
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300.0
# concurrency = 4
# node_timeout = 3
# conn_timeout = 0.5
#
# Send at most this many container updates per second
# containers_per_second = 50
#
# slowdown will sleep that amount between containers. Deprecated; use
# containers_per_second instead.
# slowdown = 0.01
#
# Seconds to suppress updating an account that has generated an error
# account_suppression_time = 60
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[container-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each container at most once per interval
# interval = 1800.0
#
# containers_per_second = 200
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[container-sync]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-sync
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# If you need to use an HTTP Proxy, set it here; defaults to no proxy.
# You can also set this to a comma separated list of HTTP Proxies and they will
# be randomly used (simple load balancing).
# sync_proxy = http://10.1.1.1:8888,http://10.1.1.2:8888
#
# Will sync each container at most once per interval
# interval = 300.0
#
# Maximum amount of time to spend syncing each container per pass
# container_time = 60
#
# Maximum amount of time in seconds for the connection attempt
# conn_timeout = 5
# Server errors from requests will be retried by default
# request_tries = 3
#
# Internal client config file path
# internal_client_conf_path = /etc/swift/internal-client.conf
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers which should inherit from python
# standard profiler. Currently the supported value can be 'cProfile',
# 'eventlet.green.profile' etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/container.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful, this option will enable profiler to dump data into the file with
# time stamp which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shutdown.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false

[container-sharder]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-sharder
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Container sharder specific settings
#
# If the auto_shard option is true then the sharder will automatically select
# containers to shard, scan for shard ranges, and select shards to shrink.
# The default is false.
# Warning: auto-sharding is still under development and should not be used in
# production; do not set this option to true in a production cluster.
# auto_shard = false
#
# When auto-sharding is enabled shard_container_threshold defines the object
# count at which a container with container-sharding enabled will start to
# shard. shard_container_threshold also indirectly determines the defaults for
# rows_per_shard, shrink_threshold and expansion_limit.
# shard_container_threshold = 1000000
#
# rows_per_shard determines the initial nominal size of shard containers. The
# default is shard_container_threshold // 2
# rows_per_shard = 500000
#
# Minimum size of the final shard range. If this is greater than one then the
# final shard range may be extended to more than rows_per_shard in order to
# avoid a further shard range with less than minimum_shard_size rows. The
# default value is rows_per_shard // 5.
# minimum_shard_size = 100000
#
# When auto-sharding is enabled shrink_threshold defines the object count
# below which a 'donor' shard container will be considered for shrinking into
# another 'acceptor' shard container. The default is determined by
# shard_shrink_point. If set, shrink_threshold will take precedence over
# shard_shrink_point.
# shrink_threshold =
#
# When auto-sharding is enabled shard_shrink_point defines the object count
# below which a 'donor' shard container will be considered for shrinking into
# another 'acceptor' shard container. shard_shrink_point is a percentage of
# shard_container_threshold e.g. the default value of 10 means 10% of the
# shard_container_threshold.
# Deprecated: shrink_threshold is recommended and if set will take precedence
# over shard_shrink_point.
# shard_shrink_point = 10
#
# When auto-sharding is enabled expansion_limit defines the maximum
# allowed size of an acceptor shard container after having a donor merged into
# it. The default is determined by shard_shrink_merge_point.
# If set, expansion_limit will take precedence over shard_shrink_merge_point.
# expansion_limit =
#
# When auto-sharding is enabled shard_shrink_merge_point defines the maximum
# allowed size of an acceptor shard container after having a donor merged into
# it. Shard_shrink_merge_point is a percentage of shard_container_threshold.
# e.g. the default value of 75 means that the projected sum of a donor object
# count and acceptor count must be less than 75% of shard_container_threshold
# for the donor to be allowed to merge into the acceptor.
#
# For example, if the shard_container_threshold is 1 million,
# shard_shrink_point is 10, and shard_shrink_merge_point is 75 then a shard will
# be considered for shrinking if it has less than or equal to 100 thousand
# objects but will only merge into an acceptor if the combined object count
# would be less than or equal to 750 thousand objects.
# Deprecated: expansion_limit is recommended and if set will take precedence
# over shard_shrink_merge_point.
# shard_shrink_merge_point = 75
#
# When auto-sharding is enabled shard_scanner_batch_size defines the maximum
# number of shard ranges that will be found each time the sharder daemon visits
# a sharding container. If necessary the sharder daemon will continue to search
# for more shard ranges each time it visits the container.
# shard_scanner_batch_size = 10
#
# cleave_batch_size defines the number of shard ranges that will be cleaved
# each time the sharder daemon visits a sharding container.
# cleave_batch_size = 2
#
# cleave_row_batch_size defines the size of batches of object rows read from a
# sharding container and merged to a shard container during cleaving.
# cleave_row_batch_size = 10000
#
# max_expanding defines the maximum number of shards that could be expanded in a
# single cycle of the sharder. Defaults to unlimited (-1).
# max_expanding = -1
#
# max_shrinking defines the maximum number of shards that should be shrunk into
# each expanding shard. Defaults to 1.
# NOTE: Using values greater than 1 may result in temporary gaps in object listings
# until all selected shards have shrunk.
# max_shrinking = 1
#
# Defines the number of successfully replicated shard dbs required when
# cleaving a previously uncleaved shard range before the sharder will progress
# to the next shard range. The value should be less than or equal to the
# container ring replica count. The default of 'auto' causes the container ring
# quorum value to be used. This option only applies to the container-sharder
# replication and does not affect the number of shard container replicas that
# will eventually be replicated by the container-replicator.
# shard_replication_quorum = auto
#
# Defines the number of successfully replicated shard dbs required when
# cleaving a shard range that has been previously cleaved on another node
# before the sharder will progress to the next shard range. The value should be
# less than or equal to the container ring replica count. The default of 'auto'
# causes the shard_replication_quorum value to be used. This option only
# applies to the container-sharder replication and does not affect the number
# of shard container replicas that will eventually be replicated by the
# container-replicator.
# existing_shard_replication_quorum = auto
#
# The sharder uses an internal client to create and make requests to
# containers. The absolute path to the client config file can be configured.
# internal_client_conf_path = /etc/swift/internal-client.conf
#
# The number of time the internal client will retry requests.
# request_tries = 3
#
# Each time the sharder dumps stats to the recon cache file it includes a list
# of containers that appear to need sharding but are not yet sharding. By
# default this list is limited to the top 5 containers, ordered by object
# count. The limit may be changed by setting recon_candidates_limit to an
# integer value. A negative value implies no limit.
# recon_candidates_limit = 5
#
# As the sharder visits each container that's currently sharding it dumps to
# recon their current progress. To be able to mark their progress as completed
# this in-progress check will need to monitor containers that have just
# completed sharding. The recon_sharded_timeout parameter says for how long a
# container whose just finished sharding should be checked by the in-progress
# check. This is to allow anything monitoring the sharding recon dump to have
# enough time to collate and see things complete. The time is capped at
# reclaim_age, so this parameter should be less than or equal to reclaim_age.
# The default is 12 hours (12 x 60 x 60)
# recon_sharded_timeout = 43200
#
# Maximum amount of time in seconds after sharding has been started on a shard
# container and before it's considered as timeout. After this amount of time,
# sharder will warn that a container DB has not completed sharding.
# The default is 48 hours (48 x 60 x 60)
# container_sharding_timeout = 172800
#
# Some sharder states lead to repeated messages of 'Reclaimable db stuck
# waiting for shrinking' on every sharder cycle. To reduce noise in logs,
# this message will be suppressed for some time after its last emission.
# Default is 24 hours.
# periodic_warnings_interval = 86400
#
# Large databases tend to take a while to work with, but we want to make sure
# we write down our progress. Use a larger-than-normal broker timeout to make
# us less likely to bomb out on a LockTimeout.
# broker_timeout = 60
#
# Time in seconds to wait between emitting stats to logs
# stats_interval = 3600.0
#
# Time in seconds to wait between sharder cycles
# interval = 30.0
#
# Process at most this many databases per second
# databases_per_second = 50
#
# The container-sharder accepts the following configuration options as defined
# in the container-replicator section:
#
# per_diff = 1000
# max_diffs = 100
# concurrency = 8
# node_timeout = 10
# conn_timeout = 0.5
# reclaim_age = 604800
# rsync_compress = no
# rsync_module = {replication_ip}::container
# recon_cache_path = /var/cache/swift
#

4. Edit the /etc/swift/object-server.conf file and complete the following actions

[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
log_level = DEBUG
reclaim_age = 604800
# keep_idle = 600
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
# expiring_objects_container_divisor = 86400
# expiring_objects_account_name = expiring_objects
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.  NOTE: if servers_per_port is set, this setting is
# ignored.
# workers = auto
#
# Make object-server run this many worker processes per unique port of "local"
# ring devices across all storage policies. The default value of 0 disables this
# feature.
# servers_per_port = 0
#
# If running in a container, servers_per_port may not be able to use the
# bind_ip to lookup the ports in the ring.  You may instead override the port
# lookup in the ring using the ring_ip.  Any devices/ports associated with the
# ring_ip will be used when listening on the configured bind_ip address.
# ring_ip = <bind_ip>
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# Hashing algorithm for log anonymization. Must be one of algorithms supported
# by Python's hashlib.
# log_anonymization_method = MD5
#
# Salt added during log anonymization
# log_anonymization_salt =
#
# Template used to format logs. All words surrounded by curly brackets
# will be substituted with the appropriate values
# log_format = {remote_addr} - - [{time.d}/{time.b}/{time.Y}:{time.H}:{time.M}:{time.S} +0000] "{method} {path}" {status} {content_length} "{referer}" "{txn_id}" "{user_agent}" {trans_time:.4f} "{additional_info}" {pid} {policy_index}
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# Time to wait while attempting to connect to another backend node.
# conn_timeout = 0.5
# Time to wait while sending each chunk of data to another backend node.
# node_timeout = 3
# Time to wait while sending a container update on object update.
# container_update_timeout = 1.0
# Time to wait while receiving each chunk of data from a client or another
# backend node.
# client_timeout = 60.0
#
# network_chunk_size = 65536
# disk_chunk_size = 65536
#
# Reclamation of tombstone files is performed primarily by the replicator and
# the reconstructor but the object-server and object-auditor also reference
# this value - it should be the same for all object services in the cluster,
# and not greater than the container services reclaim_age
# reclaim_age = 604800
#
# Non-durable data files may also get reclaimed if they are older than
# reclaim_age, but not if the time they were written to disk (i.e. mtime) is
# less than commit_window seconds ago. The commit_window also prevents the
# reconstructor removing recently written non-durable data files from a handoff
# node after reverting them to a primary. This gives the object-server a window
# in which to finish a concurrent PUT on a handoff and mark the data durable. A
# commit_window greater than zero is strongly recommended to avoid unintended
# removal of data files that were about to become durable; commit_window should
# be much less than reclaim_age.
# commit_window = 60.0
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object
# You can override the default log routing for this app here:
# set log_name = object-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# max_upload_time = 86400
#
# slow is the total amount of seconds an object PUT/DELETE request takes at
# least. If it is faster, the object server will sleep this amount of time minus
# the already passed transaction time.  This is only useful for simulating slow
# devices on storage nodes during testing and development.
# slow = 0
#
# Objects smaller than this are not evicted from the buffercache once read
# keep_cache_size = 5242880
#
# If true, objects for authenticated GET requests may be kept in buffer cache
# if small enough
# keep_cache_private = false
#
# If true, SLO object's manifest file for GET requests may be kept in buffer cache
# if smaller than 'keep_cache_size'. And this config will only matter when
# 'keep_cache_private' is false.
# keep_cache_slo_manifest = false
#
# cooperative_period defines how frequent object server GET request will
# perform the cooperative yielding during iterating the disk chunks. For
# example, value of '5' will insert one sleep() after every 5 disk_chunk_size
# chunk reads. A value of '0' (the default) will turn off cooperative yielding.
# cooperative_period = 0
#
# on PUTs, sync data every n MB
# mb_per_sync = 512
#
# Comma separated list of headers that can be set in metadata on an object.
# This list is in addition to X-Object-Meta-* headers and cannot include
# Content-Type, etag, Content-Length, or deleted
# allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object, Cache-Control, Content-Language, Expires, X-Robots-Tag

# The number of threads in eventlet's thread pool. Most IO will occur
# in the object server's main thread, but certain "heavy" IO
# operations will occur in separate IO threads, managed by eventlet.
#
# The default value is auto, whose actual value is dependent on the
# servers_per_port value:
#
#  - When servers_per_port is zero, the default value of
#    eventlet_tpool_num_threads is empty, which uses eventlet's default
#    (currently 20 threads).
#
#  - When servers_per_port is nonzero, the default value of
#    eventlet_tpool_num_threads is 1.
#
# But you may override this value to any integer value.
#
# Note that this value is threads per object-server process, so to
# compute the total number of IO threads on a node, you must multiply
# this by the number of object-server processes on the node.
#
# eventlet_tpool_num_threads = auto

# You can disable REPLICATE and SSYNC handling (default is to allow it). When
# deploying a cluster with a separate replication network, you'll want multiple
# object-server processes running: one for client-driven traffic and another
# for replication traffic. The server handling client-driven traffic may set
# this to false. If there is only one object-server process, leave this as
# true.
# replication_server = true
#
# Set to restrict the number of concurrent incoming SSYNC requests
# Set to 0 for unlimited
# Note that SSYNC requests are only used by the object reconstructor or the
# object replicator when configured to use ssync.
# replication_concurrency = 4
#
# Set to restrict the number of concurrent incoming SSYNC requests per
# device; set to 0 for unlimited requests per device. This can help control
# I/O to each device. This does not override replication_concurrency described
# above, so you may need to adjust both parameters depending on your hardware
# or network capacity.
# replication_concurrency_per_device = 1
#
# Number of seconds to wait for an existing replication device lock before
# giving up.
# replication_lock_timeout = 15
#
# These next two settings control when the SSYNC subrequest handler will
# abort an incoming SSYNC attempt. An abort will occur if there are at
# least threshold number of failures and the value of failures / successes
# exceeds the ratio. The defaults of 100 and 1.0 means that at least 100
# failures have to occur and there have to be more failures than successes for
# an abort to occur.
# replication_failure_threshold = 100
# replication_failure_ratio = 1.0
#
# Use splice() for zero-copy object GETs. This requires Linux kernel
# version 3.0 or greater. If you set "splice = yes" but the kernel
# does not support it, error messages will appear in the object server
# logs at startup, but your object servers should continue to function.
#
# splice = no
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

[filter:backend_ratelimit]
use = egg:swift#backend_ratelimit
# Config options can optionally be loaded from a separate config file. Config
# options in this section will be used unless the same option is found in the
# config file, in which case the config file option will be used. See the
# backend-ratelimit.conf-sample file for details of available config options.
# backend_ratelimit_conf_path = /etc/swift/backend-ratelimit.conf

# The minimum interval between attempts to reload any config file at
# backend_ratelimit_conf_path while the server is running. A value of 0 means
# that the file is loaded at start-up but not subsequently reloaded. Note that
# config options in this section are never reloaded after start-up.
# config_reload_interval = 60

[object-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# daemonize = on
#
# Time in seconds to wait between replication passes
# interval = 30.0
# run_pause is deprecated, use interval instead
# run_pause = 30.0
#
# Number of concurrent replication jobs to run. This is per-process,
# so replicator_workers=W and concurrency=C will result in W*C
# replication jobs running at once.
# concurrency = 1
#
# Number of worker processes to use. No matter how big this number is,
# at most one worker per disk will be used. 0 means no forking; all work
# is done in the main process.
# replicator_workers = 0
#
# stats_interval = 300.0
#
# default is rsync, alternative is ssync
# sync_method = rsync
#
# max duration of a partition rsync
# rsync_timeout = 900
#
# bandwidth limit for rsync in kB/s. 0 means unlimited. rsync 3.2.2 and later
# accept suffixed values like 10M or 1.5G; see the --bwlimit option for rsync(1)
# rsync_bwlimit = 0
#
# passed to rsync for both io op timeout and connection timeout
# rsync_io_timeout = 30
#
# Allow rsync to compress data which is transmitted to the destination node
# during sync. However, this is applicable only when the destination node is in
# a different region than the local one.
# NOTE: Objects that are already compressed (for example: .tar.gz, .mp3) might
# slow down the syncing process.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::object
#
# node_timeout = <whatever's in the DEFAULT section or 10>
# max duration of an http request; this is for REPLICATE finalization calls and
# so should be longer than node_timeout
# http_timeout = 60
#
# attempts to kill all workers if nothing replicates for lockup_timeout seconds
# lockup_timeout = 1800
#
# ring_check_interval = 15.0
# recon_cache_path = /var/cache/swift
#
# By default, per-file rsync transfers are logged at debug if successful and
# error on failure. During large rebalances (which both increase the number
# of diskfiles transferred and increase the likelihood of failures), this
# can overwhelm log aggregation while providing little useful insight.
# Change this to false to disable per-file logging.
# log_rsync_transfers = true
#
# limits how long rsync error log lines are
# 0 means to log the entire line
# rsync_error_log_line_length = 0
#
# handoffs_first and handoff_delete are options for a special case
# such as disks filling up in the cluster. These two options SHOULD NOT BE
# CHANGED, except in such extreme situations (e.g. disks are full
# or about to fill up; in any case, DO NOT let your drives fill up).
# handoffs_first is the flag to replicate handoffs prior to canonical
# partitions. It allows handoffs to be synced and deleted quickly.
# If set to a True value (e.g. "True" or "1"), partitions
# that are not supposed to be on the node will be replicated first.
# handoffs_first = False
#
# handoff_delete is the number of replicas which are ensured in swift.
# If a number less than the number of replicas is set, the object-replicator
# could delete local handoffs even if not all replicas are ensured in the
# cluster. The object-replicator removes local handoff partition directories
# after syncing a partition when the number of successful responses is greater
# than or equal to this number. By default (auto), handoff partitions will be
# removed only when they have successfully replicated to all the canonical nodes.
# handoff_delete = auto
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[object-reconstructor]
# You can override the default log routing for this app here (don't use set!):
# Unless otherwise noted, each setting below has the same meaning as described
# in the [object-replicator] section, however these settings apply to the EC
# reconstructor
#
# log_name = object-reconstructor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# daemonize = on
#
# Time in seconds to wait between reconstruction passes
# interval = 30.0
# run_pause is deprecated, use interval instead
# run_pause = 30.0
#
# Maximum number of worker processes to spawn.  Each worker will handle a
# subset of devices.  Devices will be assigned evenly among the workers so that
# workers cycle at similar intervals (which can lead to fewer workers than
# requested).  You can not have more workers than devices.  If you have no
# devices only a single worker is spawned.
# reconstructor_workers = 0
#
# concurrency = 1
# stats_interval = 300.0
# node_timeout = 10
# http_timeout = 60
# lockup_timeout = 1800
# ring_check_interval = 15.0
# recon_cache_path = /var/cache/swift
#
# The handoffs_only mode option is for special case emergency situations during
# rebalance such as disk full in the cluster.  This option SHOULD NOT BE
# CHANGED, except for extreme situations.  When handoffs_only mode is enabled
# the reconstructor will *only* revert fragments from handoff nodes to primary
# nodes and will not sync primary nodes with neighboring primary nodes.  This
# will force the reconstructor to sync and delete handoffs' fragments more
# quickly and minimize the time of the rebalance by limiting the number of
# rebuilds.  The handoffs_only option is only for temporary use and should be
# disabled as soon as the emergency situation has been resolved.  When
# handoffs_only is not set, the deprecated handoffs_first option will be
# honored as a synonym, but may be ignored in a future release.
# handoffs_only = False
#
# The default strategy for unmounted drives will stage rebuilt data on a
# handoff node until updated rings are deployed.  Because fragments are rebuilt
# on offset handoffs based on fragment index and the proxy limits how deep it
# will search for EC frags we restrict how many nodes we'll try.  Setting to 0
# will disable rebuilds to handoffs and only rebuild fragments for unmounted
# devices to mounted primaries after a ring change.
# Setting to -1 means "no limit".
# rebuild_handoff_node_count = 2
#
# By default the reconstructor attempts to revert all objects from handoff
# partitions in a single batch using a single SSYNC request. In exceptional
# circumstances max_objects_per_revert can be used to temporarily limit the
# number of objects reverted by each reconstructor revert type job. If more
# than max_objects_per_revert are available in a sender's handoff partition,
# the remaining objects will remain in the handoff partition and will not be
# reverted until the next time the reconstructor visits that handoff partition
# i.e. with this option set, a single cycle of the reconstructor may not
# completely revert all handoff partitions. The option has no effect on
# reconstructor sync type jobs between primary partitions. A value of 0 (the
# default) means there is no limit.
# max_objects_per_revert = 0
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
#
# When upgrading from liberasurecode<=1.5.0, you may want to continue writing
# legacy CRCs until all nodes are upgraded and capable of reading fragments
# with zlib CRCs. liberasurecode>=1.6.2 checks for the environment variable
# LIBERASURECODE_WRITE_LEGACY_CRC; if set (value doesn't matter), it will use
# its legacy CRC. Set this option to true or false to ensure the environment
# variable is or is not set. Leave the option blank or absent to not touch
# the environment (default). For more information, see
# https://bugs.launchpad.net/liberasurecode/+bug/1886088
# write_legacy_ec_crc =
#
# When attempting to reconstruct a missing fragment on another node from a
# fragment on the local node, the reconstructor may fail to fetch sufficient
# fragments to reconstruct the missing fragment. This may be because most or
# all of the remote fragments have been deleted, and the local fragment is
# stale, in which case the reconstructor will never succeed in reconstructing
# the apparently missing fragment and will log errors. If the object's
# tombstones have been reclaimed then the stale fragment will never be deleted
# (see https://bugs.launchpad.net/swift/+bug/1655608). If an operator suspects
# that stale fragments have been re-introduced to the cluster and is seeing
# error logs similar to those in the bug report, then the quarantine_threshold
# option may be set to a value greater than zero. This enables the
# reconstructor to quarantine the stale fragments when it fails to fetch more
# than the quarantine_threshold number of fragments (including the stale
# fragment) during an attempt to reconstruct. For example, setting the
# quarantine_threshold to 1 would cause a fragment to be quarantined if no
# other fragments can be fetched. The value may be reset to zero after the
# reconstructor has run on all affected nodes and the error logs are no longer
# seen.
# Note: the quarantine_threshold applies equally to all policies, but for each
# policy it is effectively capped at (ec_ndata - 1) so that a fragment is never
# quarantined when sufficient fragments exist to reconstruct the object.
# quarantine_threshold = 0
#
# Fragments are not quarantined until they are older than
# quarantine_age, which defaults to the value of reclaim_age.
# quarantine_age =
#
# Sets the maximum number of nodes to which requests will be made before
# quarantining a fragment. You can use '* replicas' at the end to have it use
# the number given times the number of replicas for the ring being used for the
# requests. The minimum number of nodes to which requests are made is the
# number of replicas for the policy minus 1 (the node on which the fragment is
# to be rebuilt). The minimum is only exceeded if request_node_count is
# greater, and only for the purposes of quarantining.
# request_node_count = 2 * replicas

[object-updater]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-updater
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300.0
# node_timeout = <whatever's in the DEFAULT section or 10>
#
# updater_workers controls how many processes the object updater will
# spawn, while concurrency controls how many async_pending records
# each updater process will operate on at any one time. With
# concurrency=C and updater_workers=W, there will be up to W*C
# async_pending records being processed at once.
# concurrency = 8
# updater_workers = 1
#
# Send at most this many object updates per second
# objects_per_second = 50
#
# Send at most this many object updates per bucket per second. The value must
# be a float greater than or equal to 0. Set to 0 for unlimited.
# max_objects_per_container_per_second = 0
#
# The per_container ratelimit implementation uses a hashring to constrain
# memory requirements.  Orders of magnitude more buckets will use (nominally)
# more memory, but will ratelimit smaller groups of containers. The value must
# be an integer greater than 0.
# per_container_ratelimit_buckets = 1000
#
# Updates that cannot be sent due to per-container rate-limiting may be
# deferred and re-tried at the end of the updater cycle. This option constrains
# the size of the in-memory data structure used to store deferred updates.
# Must be an integer value greater than or equal to 0.
# max_deferred_updates = 10000
#
# slowdown will sleep that amount between objects. Deprecated; use
# objects_per_second instead.
# slowdown = 0.01
#
# Log stats (at INFO level) every report_interval seconds. This
# logging is per-process, so with concurrency > 1, the logs will
# contain one stats log per worker process every report_interval
# seconds.
# report_interval = 300.0
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[object-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Time in seconds to wait between auditor passes
# interval = 30.0
#
# You can set the disk chunk size that the auditor uses, making it larger if
# you like for more efficient local auditing of larger objects.
# disk_chunk_size = 65536
# files_per_second = 20
# concurrency = 1
# bytes_per_second = 10000000
# log_time = 3600
# zero_byte_files_per_second = 50
# recon_cache_path = /var/cache/swift

# Takes a comma separated list of ints. If set, the object auditor will
# increment a counter for every object whose size is <= to the given break
# points and report the result after a full scan.
# object_size_stats =
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

# The auditor will clean up old rsync tempfiles after they are "old
# enough" to delete.  You can configure the time elapsed in seconds
# before rsync tempfiles will be unlinked, or leave the default value of
# "auto", which tries to use the object-replicator's rsync_timeout + 900 and
# falls back to 86400 (1 day).
# rsync_tempfile_timeout = auto

# A comma-separated list of watcher entry points. This lets operators
# programmatically see audited objects.
#
# The entry point group name is "swift.object_audit_watcher". If your
# setup.py has something like this:
#
# entry_points={'swift.object_audit_watcher': [
#     'some_watcher = some_module:Watcher']}
#
# then you would enable it with "watchers = some_package#some_watcher".
# For example, the built-in reference implementation is enabled as
# "watchers = swift#dark_data".
#
# watchers =

# Watcher-specific parameters can be added in a section with a name
# [object-auditor:watcher:some_package#some_watcher]. The following
# example uses the built-in reference watcher.
#
# [object-auditor:watcher:swift#dark_data]
#
# Action type can be 'log' (default), 'delete', or 'quarantine'.
# action=log
#
# The watcher ignores objects younger than a certain minimum age.
# This prevents spurious actions upon fresh objects while container
# listings eventually settle.
# grace_age=604800

[object-expirer]
# If this is true, this expirer will execute tasks from the legacy expirer task
# queue; at least one object server should run with dequeue_from_legacy = true
# dequeue_from_legacy = false
#
# Note: Be careful not to enable ``dequeue_from_legacy`` on too many expirers
# as all legacy tasks are stored in a single hidden account and the same hidden
# containers. On a large cluster one may inadvertently make the
# account/container servers for these hidden resources too busy.
#
# Note: the processes and process options can only be used in conjunction with
# nodes using `dequeue_from_legacy = true`.  These options are ignored on nodes
# with `dequeue_from_legacy = false`.
#
# processes is how many parts to divide the legacy work into, one part per
# process that will be doing the work
# processes set to 0 means that a single legacy process will be doing all the work
# processes can also be specified on the command line and will override the
# config value
# processes = 0
#
# process is which of the parts a particular legacy process will work on
# process can also be specified on the command line and will override the config
# value
# process is "zero-based"; if you want to use 3 processes, you should run
# processes with process set to 0, 1, and 2
# process = 0
#
# internal_client_conf_path = /etc/swift/internal-client.conf
#
# You can override the default log routing for this app here (don't use set!):
# log_name = object-expirer
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300.0
#
# report_interval = 300.0
#
# request_tries is the number of times the expirer's internal client will
# attempt any given request in the event of failure. The default is 3.
# request_tries = 3
#
# concurrency is the level of concurrency to use to do the work; this value
# must be set to at least 1
# concurrency = 1
#
# deletes can be ratelimited to prevent the expirer from overwhelming the cluster
# tasks_per_second = 50.0
#
# The expirer will re-attempt expiring if the source object is not available
# up to reclaim_age seconds before it gives up and deletes the entry in the
# queue.
# reclaim_age = 604800
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are realtime, best-effort and idle. I/O niceness
# priority is a number which goes from 0 to 7. The higher the value, the lower
# the I/O priority of the process. Work only with ionice_class.
# ionice_class =
# ionice_priority =
#
# The expirer can delay the reaping of expired objects on disk (and in
# container listings) with an account level or container level delay_reaping
# time.
# After the delay_reaping time has passed objects will be reaped as normal.
# You may configure this delay_reaping value in seconds with dynamic config
# option names prefixed with delay_reaping_<ACCT> for account level delays
# and delay_reaping_<ACCT>/<CNTR> for container level delays.
# Special characters in <ACCT> or <CNTR> should be quoted.
# The delay_reaping value should be a float value greater than or equal to
# zero.
# A container level delay_reaping does not require an account level
# delay_reaping but overrides the account level delay_reaping for the same
# account if it exists.
# For example:
# delay_reaping_AUTH_test = 300.0
# delay_reaping_AUTH_test2 = 86400.0
# delay_reaping_AUTH_test/test = 400.0
# delay_reaping_AUTH_test/test2 = 600.0
# delay_reaping_AUTH_test/special%0Achars%3Dshould%20be%20quoted
# N.B. By default no delay_reaping value is configured for any accounts or
# containers.

[filter:xprofile]
use = egg:swift#xprofile
# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
# This option enables you to switch profilers, which should inherit from the
# python standard profiler. Currently the supported values are 'cProfile',
# 'eventlet.green.profile', etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/object.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# The profile data will be dumped to local disk based on the above naming rule
# at this interval.
# dump_interval = 5.0
#
# Be careful: this option will enable the profiler to dump data into files with
# timestamps, which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false

[object-relinker]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-relinker
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Start up to this many sub-processes to process disks in parallel. Each disk
# will be handled by at most one child process. By default, one process is
# spawned per disk.
# workers = auto
#
# Target this many relinks/cleanups per second for each worker, to reduce the
# likelihood that the added I/O from a partition-power increase impacts
# client traffic. Use zero for unlimited.
# files_per_second = 0.0
#
# stats_interval = 300.0
# recon_cache_path = /var/cache/swift
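
The sample file above is long, but only a handful of options actually need editing on a storage node. For reference, the values below are roughly what the official object-storage-node guide changes in this file (/etc/swift/object-server.conf); the bind_ip shown is an assumption based on the block node address (192.168.2.30) used earlier in this post, so adjust it to your own management IP:

[DEFAULT]
# management IP of this storage node (assumed from the hosts file above)
bind_ip = 192.168.2.30
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock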

5. vi /etc/swift/swift.conf

[swift-hash]

# swift_hash_path_suffix and swift_hash_path_prefix are used as part of the
# hashing algorithm when determining data placement in the cluster.
# These values should remain secret and MUST NOT change
# once a cluster has been deployed.
# Use only printable chars (python -c "import string; print(string.printable)")

swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX

# Storage policies are defined here and determine various characteristics
# about how objects are stored and treated. More documentation can be found at
# https://docs.openstack.org/swift/latest/overview_policies.html.

# Client requests specify a policy on a per container basis using the policy
# name. Internally the policy name is mapped to the policy index specified in
# the policy's section header in this config file. Policy names are
# case-insensitive and, to avoid confusion with index names, should not be
# numbers.
#
# The policy with index 0 is always used for legacy containers and can be given
# a name for use in metadata however the ring file name will always be
# 'object.ring.gz' for backwards compatibility.  If no policies are defined a
# policy with index 0 will be automatically created for backwards compatibility
# and given the name Policy-0.  A default policy is used when creating new
# containers when no policy is specified in the request.  If no other policies
# are defined the policy with index 0 will be declared the default.  If
# multiple policies are defined you must define a policy with index 0 and you
# must specify a default.  It is recommended you always define a section for
# storage-policy:0.
#
# A 'policy_type' argument is also supported but is not mandatory.  Default
# policy type 'replication' is used when 'policy_type' is unspecified.
#
# A 'diskfile_module' optional argument lets you specify an alternate backend
# object storage plug-in architecture. The default is
# "egg:swift#replication.fs", or "egg:swift#erasure_coding.fs", depending on
# the policy type.
#
# Aliases for the storage policy name may be defined, but are not required.
#
[storage-policy:0]
name = Policy-0
default = yes
#policy_type = replication
#diskfile_module = egg:swift#replication.fs
#aliases = yellow, orange

# The following section would declare a policy called 'silver', the number of
# replicas will be determined by how the ring is built.  In this example the
# 'silver' policy could have a lower or higher # of replicas than the
# 'Policy-0' policy above.  The ring filename will be 'object-1.ring.gz'.  You
# may only specify one storage policy section as the default.  If you change
# this section to specify 'silver' as the default, then when a client creates a
# new container w/o a policy specified, it will get the 'silver' policy because
# this config has specified it as the default.  However if a legacy container
# (one created with a pre-policy version of swift) is accessed, it is known
# implicitly to be assigned to the policy with index 0 as opposed to the
# current default. Note that even without specifying any aliases, a policy
# always has at least the default name stored in aliases because this field is
# used to contain all human readable names for a storage policy.
#
#[storage-policy:1]
#name = silver
#policy_type = replication
#diskfile_module = egg:swift#replication.fs

# The following declares a storage policy of type 'erasure_coding' which uses
# Erasure Coding for data reliability. Please refer to Swift documentation for
# details on how the 'erasure_coding' storage policy is implemented.
#
# Swift uses PyECLib, a Python Erasure coding API library, for encode/decode
# operations.  Please refer to Swift documentation for details on how to
# install PyECLib.
#
# When defining an EC policy, 'policy_type' needs to be 'erasure_coding' and
# EC configuration parameters 'ec_type', 'ec_num_data_fragments' and
# 'ec_num_parity_fragments' must be specified.  'ec_type' is chosen from the
# list of EC backends supported by PyECLib.  The ring configured for the
# storage policy must have its "replica" count configured to
# 'ec_num_data_fragments' + 'ec_num_parity_fragments' - this requirement is
# validated when services start.  'ec_object_segment_size' is the amount of
# data that will be buffered up before feeding a segment into the
# encoder/decoder.  More information about these configuration options and
# supported 'ec_type' schemes is available in the Swift documentation.  See
# https://docs.openstack.org/swift/latest/overview_erasure_code.html
# for more information on how to configure EC policies.
#
# The example 'deepfreeze10-4' policy defined below is a _sample_
# configuration with an alias of 'df10-4' as well as 10 'data' and 4 'parity'
# fragments. 'ec_type' defines the Erasure Coding scheme.
# 'liberasurecode_rs_vand' (Reed-Solomon Vandermonde) is used as an example
# below.
#
#[storage-policy:2]
#name = deepfreeze10-4
#aliases = df10-4
#policy_type = erasure_coding
#diskfile_module = egg:swift#erasure_coding.fs
#ec_type = liberasurecode_rs_vand
#ec_num_data_fragments = 10
#ec_num_parity_fragments = 4
#ec_object_segment_size = 1048576
#
# Duplicated EC fragments is proof-of-concept experimental support to enable
# Global Erasure Coding policies with multiple regions acting as independent
# failure domains.  Do not change the default except in development/testing.
#ec_duplication_factor = 1

# The swift-constraints section sets the basic constraints on data
# saved in the swift cluster. These constraints are automatically
# published by the proxy server in responses to /info requests.

[swift-constraints]

# max_file_size is the largest "normal" object that can be saved in
# the cluster. This is also the limit on the size of each segment of
# a "large" object when using the large object manifest support.
# This value is set in bytes. Setting it to lower than 1MiB will cause
# some tests to fail. It is STRONGLY recommended to leave this value at
# the default (5 * 2**30 + 2).

#max_file_size = 5368709122


# max_meta_name_length is the max number of bytes in the utf8 encoding
# of the name portion of a metadata header.

#max_meta_name_length = 128


# max_meta_value_length is the max number of bytes in the utf8 encoding
# of a metadata value

#max_meta_value_length = 256


# max_meta_count is the max number of metadata keys that can be stored
# on a single account, container, or object

#max_meta_count = 90


# max_meta_overall_size is the max number of bytes in the utf8 encoding
# of the metadata (keys + values)

#max_meta_overall_size = 4096

# max_header_size is the max number of bytes in the utf8 encoding of each
# header. Using 8192 as the default because eventlet uses 8192 as the max size
# of a header line. This value may need to be increased when using identity
# v3 API tokens including more than 7 catalog entries.
# See also include_service_catalog in proxy-server.conf-sample
# (documented at https://docs.openstack.org/swift/latest/overview_auth.html)

#max_header_size = 8192


# By default the maximum number of allowed headers depends on the number of max
# allowed metadata settings plus a default value of 36 for swift internally
# generated headers and regular http headers.  If for some reason this is not
# enough (custom middleware for example) it can be increased with the
# extra_header_count constraint.

#extra_header_count = 0


# max_object_name_length is the max number of bytes in the utf8 encoding
# of an object name

#max_object_name_length = 1024


# container_listing_limit is the default (and max) number of items
# returned for a container listing request

#container_listing_limit = 10000


# account_listing_limit is the default (and max) number of items returned
# for an account listing request
#account_listing_limit = 10000


# max_account_name_length is the max number of bytes in the utf8 encoding
# of an account name

#max_account_name_length = 256


# max_container_name_length is the max number of bytes in the utf8 encoding
# of a container name

#max_container_name_length = 256


# By default all REST API calls should use "v1" or "v1.0" as the version string,
# for example "/v1/account". This can be manually overridden to make this
# backward-compatible, in case a different version string has been used before.
# Use a comma-separated list in case of multiple allowed versions, for example
# valid_api_versions = v0,v1,v2
# This is only enforced for account, container and object requests. The allowed
# api versions are by default excluded from /info.

# valid_api_versions = v1,v1.0

# The prefix used for hidden auto-created accounts, for example accounts in
# which shard containers are created. It defaults to '.'; don't change it.

# auto_create_account_prefix = .
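
Replace the HASH_PATH_SUFFIX and HASH_PATH_PREFIX placeholders above with unique secret values, and make sure they are identical to the values already set in swift.conf on the controller node. If you still need to generate them, one simple way is:

# prints a random 20-character hex string; run once per value and reuse the
# same pair on every node in the cluster
openssl rand -hex 10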

6. vi /etc/swift/internal-client.conf

[DEFAULT]
# swift_dir = /etc/swift
# user = swift
# You can specify default log routing here if you want:
# Note: the 'set' syntax is necessary to override the log_name that some
# daemons specify when instantiating an internal client.
# set log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[pipeline:main]
# Note: gatekeeper middleware is not allowed in the internal client pipeline
pipeline = catch_errors proxy-logging cache symlink proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
# See proxy-server.conf-sample for options

[filter:symlink]
use = egg:swift#symlink
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
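
If /dev/sdb2 (the half of the disk set aside for Swift in the fdisk step at the top of this post) has not already been formatted and mounted earlier, it needs an XFS filesystem mounted under /srv/node before the directories below are created. A minimal sketch, assuming /dev/sdb2 is dedicated to Swift:

apt install xfsprogs -y
# format the Swift partition as XFS (destroys any existing data on it)
mkfs.xfs /dev/sdb2
mkdir -p /srv/node/sdb2
# mount it automatically at boot
echo '/dev/sdb2 /srv/node/sdb2 xfs noatime 0 2' >> /etc/fstab
mount /srv/node/sdb2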

7. Ensure proper ownership of the mount point directory structure:

mkdir -p /srv/node/sdb2/accounts
mkdir -p /srv/node/sdb2/contain
chown -R swift:swift /srv/node

8. Create the recon directory and ensure proper ownership of it

mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

9. Re-apply swift ownership to the accounts directory:

chown -R swift. /srv/node/sdb2/accounts

Finalize installation for Ubuntu and Debian

 

Finalize installation — Swift 2.35.0.dev148 documentation

 

docs.openstack.org

After running "Create and distribute initial rings" on the controller node, copy account.ring.gz, container.ring.gz, and object.ring.gz to the block node before proceeding with the steps below.
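
For example, the ring files can be copied over from the controller with scp (assuming the hostnames from /etc/hosts and root SSH access between the nodes):

# run on the controller node
scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz root@block:/etc/swift/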

1. On the storage nodes, start the Object Storage services:

swift-init all start
systemctl restart rsync \
    swift-account swift-account-auditor swift-account-replicator \
    swift-container swift-container-auditor swift-container-replicator swift-container-updater \
    swift-object swift-object-auditor swift-object-replicator swift-object-updater
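
To confirm everything came up, a quick status check (a sketch; swift-init ships with the swift package) can be run on the block node:

# summary of all Swift daemons on this node
swift-init all status
# or inspect individual units
systemctl status swift-object swift-container swift-account --no-pager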