Rally
Rally is a Golang service for controlling and managing Ceph storage. Rally uses many of the bindings provided by go-ceph, the Go library for the Ceph API. Rally currently only uses Ceph for storage; however, it can be expanded to use additional storage substrates.
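As a point of reference, a minimal sketch of connecting to a cluster with the go-ceph bindings looks roughly like the following (this is illustrative only, not rally's actual code):

package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
)

func main() {
	// Create a connection handle and load the default ceph.conf/keyring.
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	// List the pools visible to this client.
	pools, err := conn.ListPools()
	if err != nil {
		panic(err)
	}
	fmt.Println(pools)
}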
Building Rally
Rally currently only supports building on Debian systems/containers. To build on your local machine you will need to install the dependencies, then run:
make
To build the code in the containers use:
sudo make container-build
and to build the Debian packages:
sudo ./build-deb-in-ctr.sh
sudo is not needed if the executing user is in the docker group.
Testing Rally
Within the test/raven directory is a testing setup that builds an environment for testing rally. This environment requires raven to use; packages are available for Debian, Ubuntu, and Fedora 33.
Run run.sh to create the testing environment. Afterwards you can run tests.sh to run a suite of unit tests across the miniature cluster. You can access nodes in the test environment through raven:
# sudo rvn status
INFO[0001] nodes
INFO[0001] ceph0 running success 172.22.0.244
INFO[0001] ceph1 running success 172.22.0.132
INFO[0001] ceph2 running success 172.22.0.163
INFO[0001] ceph3 running success 172.22.0.107
INFO[0001] client running success 172.22.0.235
INFO[0001] switches
INFO[0001] switch1 running success 172.22.0.95
INFO[0001] external links
# eval $(rvn ssh ceph0)
ceph0$ cd /tmp/rally/build
ceph0:/tmp/rally/build$ ./rallyctl rally show users
Rally users:
bob
jim
jake
Running Rally
There are two rally binaries that are generated: rallyd and rallyctl. rallyd is the rally server (daemon). By default, rallyd will listen on port 9950, which for containers is exposed by the RALLY_PORT environment variable. The container also accepts etcd endpoint environment variables: ETCD_HOST, ETCD_PORT, and ETCD_TLS. rallyctl is the command line client for controlling the rally service.
The rallyd binary or container is meant to be run on a node which can communicate with the ceph cluster via the ceph command line and has access to the correct cephx permissions. rallyctl does not have any such requirements as it accesses rally and ceph data through rallyd's gRPC interface.
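For example, a hypothetical container invocation wiring up these variables might look like the following (the image name, addresses, and ETCD_TLS value are placeholders):

docker run -d \
  -e RALLY_PORT=9950 \
  -e ETCD_HOST=10.0.0.1 \
  -e ETCD_PORT=2379 \
  -e ETCD_TLS=false \
  rally/rallyd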
Inner workings of rally
Rally is split into multiple components; at the moment those components are cephfs and rally. cephfs controls and manages the ceph data, while rally controls the user management. This will allow for the expansion of rally to additional filesystems beyond ceph.
A user is the fundamental element of rally; it correlates to a piece of storage. At the moment this is a 1:n ratio: for every piece of storage there is a user, but the existence of a rally user does not imply that storage exists. All rally API calls operate on a user. There are, however, some administrative calls that can access the underlying storage; these provide a constrained interface for cases where administration through the storage platform itself (ceph in this case) is undesirable.
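Conceptually, the user object stored in etcd mirrors the JSON shown in the scenarios below; a rough Go sketch of that shape (field names inferred from the etcd output in this document, not rally's actual types) is:

// User is the fundamental rally element. Storage is empty when no
// backing storage has been associated with the user.
type User struct {
	Username  string                 `json:"username"`
	Longevity map[string]interface{} `json:"longevity"` // concrete type not shown in the examples; a map is assumed
	Owner     string                 `json:"owner"`
	Storage   string                 `json:"storage,omitempty"`
}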
These differences can be illustrated by showing how rallyctl interacts with rally. In this first scenario we will create a rally user.
# rallyctl rally create user jim
Rally user jim created.
/rally/users/jim
{
  "username": "jim",
  "longevity": {},
  "owner": "jim"
}
Note that we did not specify any backing storage, so running the ceph command ceph auth ls would not show the user jim. Jim is an abstract rally user with no storage associated. This command is only used in the case of administration; a normal rally call would associate the storage as well, such as in the example below.
# rallyctl cephfs create user bob 10G
Rally filesystem object bob created.
Here we are interacting with cephfs, and while you will see we also create a rally user, this is mainly to enforce that all storage is tied to a user.
/rally/ceph/bob
{
  "username": "bob",
  "secret": "AQBvUd1ftL/4KRAAblhD+1h9PvjSYVK1ftY6EA==",
  "mon": {
    "": "r"
  },
  "mds": {
    "": "r",
    "path=/rally/users/bob": "rw"
  },
  "cephfs": {
    "Path": "/rally/users/bob"
  }
}
/rally/users/bob
{
  "username": "bob",
  "longevity": {},
  "owner": "bob",
  "storage": "ceph"
}
We will dissect the state that is generated in etcd to understand what rally is doing when a user is created. Without going too in depth on how ceph works, there are two main ceph services for ceph filesystems: the ceph monitor (mon) and the ceph metadata service (mds).
"mon": {
"": "r"
},
"mds": {
"": "r",
"path=/rally/users/bob": "rw"
},
Rally stores within the ceph object the permissions for each service. Giving the monitor r (read) permission allows the user to read their permissions and otherwise communicate with the monitor. It also provides read permissions for the mds service. On the next line, rally also specifies that the path=/rally/users/bob has rw (read/write) permissions, meaning, for the most part, that the user bob can read and write data to that path.
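Rally applies these capabilities through the go-ceph bindings, but the roughly equivalent manual command (shown here only for illustration) would be:

ceph auth get-or-create client.bob mon 'allow r' mds 'allow r, allow rw path=/rally/users/bob'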
These permissions can be seen by running ceph auth ls, which gives us:
client.bob
    key: AQBvUd1ftL/4KRAAblhD+1h9PvjSYVK1ftY6EA==
    caps: [mds] allow r, allow rw path=/rally/users/bob
    caps: [mon] allow r
The last bit of ceph-related data stored by rally is the secret or key. This is a cephx-encoded key which encodes the permissions of the client and is used by the user/client to access their data. This key should be kept private when possible. It can be retrieved using rallyctl:
# rallyctl rally show user bob
Details for bob:
Secret: AQBvUd1ftL/4KRAAblhD+1h9PvjSYVK1ftY6EA==
Mount: 10.0.0.10:6789,10.0.0.11:6789,10.0.0.12:6789:/rally/users/bob
On top of returning the secret key, this also returns the information needed by the client in order to mount their storage. The mount string contains an ip:port combination for each of the three ceph monitors, followed by the path to their storage.
Foundry also contains details on how clients in general can modify their /etc/fstab to mount these volumes over the network.
e.g.
10.0.0.10:6789,10.0.0.11:6789,10.0.0.12:6789:/rally/users/bob /mnt/path ceph rw,relatime,name=bob,secret=AQBvUd1ftL/4KRAAblhD+1h9PvjSYVK1ftY6EA==,acl,_netdev 0 0
Note that this will require the ceph-common package in order to use ceph kernel mounting.
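For a one-off mount instead of an fstab entry, the equivalent kernel mount command would look like the following (the mount point /mnt/path is just an example):

sudo mount -t ceph 10.0.0.10:6789,10.0.0.11:6789,10.0.0.12:6789:/rally/users/bob /mnt/path -o name=bob,secret=AQBvUd1ftL/4KRAAblhD+1h9PvjSYVK1ftY6EA==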
Integrations with Merge Technologies
Rally will mainly interact only with the cogs, although rally data will be plumbed from a user's experiment definition (xir) to the merge portal, where it is parsed by the realization service, then passed through to the materialization service, to the site commander, and finally to the cogs.
An example of xir for a user can be found here.
Quotas
Quotas are not completely implemented. Quotas in ceph are meant to be cooperative, so while rally is not currently storing that value, it is up to clients to enforce it; see the foundry MR for more details.
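For reference, cooperative cephfs quotas are conventionally set as extended attributes on the client's mounted directory; a hypothetical example capping bob's directory at 10 GB would be:

setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/path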
Configuration file
The default file can be found in debian/config/config.yml, and it is written to /etc/rally/config.yml on a system that used the Debian package to install rally. The configuration file is essential for large deployment environments.
services:
  etcd:
    address: localhost
    port: 2379
    # Provide TLS settings as follows
    #TLS:
    #  Cacert: /etc/cogs/ca.pem
    #  Cert: /etc/cogs/etcd.pem
    #  Key: /etc/cogs/etcd-key.pem
    timeout: 10
  rally:
    address: localhost
    port: 9950
    timeout: 10
  ceph:
    address: localhost
    port: 6789
    timeout: 10
cephfs:
  name: rallyfs
  datapool: rallyfs-data
  metadatapool: rallyfs-meta
  quota: 1GB
  root: rally
  users: users
  owners: owners
rados:
  pool: rados
  quota: 1GB
  pgnum: 1024
rally:
  mount: "/mnt/rally"
Services
The services section of the configuration file controls the incoming and outgoing network configuration details: where and how to reach etcd and ceph, which rally connects to, as well as where to host the rally service.
Cephfs
name, datapool, and metadatapool are all used in creating or accessing the ceph filesystem. root, users, and owners are keys that are used to associate a prefix with both etcd and a filesystem path in order to maintain a hierarchical structure.
Rados
Not yet discussed is rados block storage. There is support in rally to allow users to create block storage devices. However, there has not been work yet on allowing those block devices to be shared over the network. For now this section is under development.
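For illustration, creating a block device image in the configured pool with the standard rbd tool would look like the following (the image name is hypothetical, and rally performs such operations through the go-ceph bindings rather than the CLI):

rbd create rados/bob-disk --size 1024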
Rally
mount is where the root mount of the ceph filesystem lives on the monitor hosts.
Contribute
Please feel free to contribute by opening Issues and submitting Pull Requests. All issues are welcome.