m0t1fs

Enumerations

enum {
  M0_AVI_FS_OPEN = M0_AVI_M0T1FS_RANGE_START + 1,
  M0_AVI_FS_LOOKUP,
  M0_AVI_FS_CREATE,
  M0_AVI_FS_READ,
  M0_AVI_FS_WRITE,
  M0_AVI_FS_IO_DESCR,
  M0_AVI_FS_IO_MAP
}
 

Detailed Description

Overview

m0t1fs is a Motr client file-system for Linux. It is implemented as a kernel module.

Function Specification

m0t1fs has a flat file-system structure, i.e. no directories except the root. m0t1fs does not support caching; all read-write requests are forwarded directly to the servers.

By default m0t1fs uses end-point address 0:12345:45:6 as its local address. This address can be changed with the local_addr module parameter, e.g. to make m0t1fs use 172.18.50.40@o2ib1:12345:34:1 as its end-point address, load the module with the command:

sudo insmod m0tr.ko local_addr="172.18.50.40@o2ib1:12345:34:1"

m0t1fs can be mounted with the mount command:

mount -t m0t1fs -o <options_list> dontcare <dir_name>

where <options_list> is a comma-separated list of option=value elements. The currently supported options are:

Logical Specification

mount/unmount:

m0t1fs establishes rpc-connections and rpc-sessions with all the services obtained from configuration data. If multiple services have the same end-point address, a separate rpc-connection is established with each service, i.e. if N services share an end-point address, there will be N rpc-connections leading to the same target end-point.

The rpc-connections and rpc-sessions will be terminated at unmount time.

Pools and Pool versions:

m0t1fs can work with multiple pools and pool versions. A pool version comprises a set of services and the devices attached to them. A layout and a pool machine are associated with each pool version. A pool version creates a device-to-io-service map during initialisation.

During mount, m0t1fs finds a valid pool and pool version to start with, one that has no devices from the failure set. Once a pool version is obtained, if its corresponding pool already exists then the pool version is added to it; otherwise a new pool is initialised.
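
The selection logic can be pictured with the sketch below. It is illustrative only, under assumed names; the types and helpers (pool_version, pv_has_failed_device, select_pool_version) are stand-ins, not the actual Motr structures or API.

#include <stdbool.h>
#include <stddef.h>

struct pool;

/* Illustrative sketch: pick the first pool version with no devices from
 * the failure set and attach it to its pool, falling back to a freshly
 * initialised pool when none exists yet. Not the Motr API. */
struct pool_version {
        bool         pv_has_failed_device;   /* any device in the failure set? */
        struct pool *pv_pool;                /* owning pool, NULL if none yet */
};

struct pool {
        struct pool_version *p_versions[16];
        int                  p_nr;
};

static struct pool_version *
select_pool_version(struct pool_version *pvs, int nr, struct pool *new_pool)
{
        int i;

        for (i = 0; i < nr; ++i) {
                struct pool_version *pv = &pvs[i];
                struct pool         *p;

                if (pv->pv_has_failed_device)
                        continue;                  /* skip versions touched by failures */
                p = pv->pv_pool != NULL ? pv->pv_pool : new_pool;
                p->p_versions[p->p_nr++] = pv;     /* add the version to the pool */
                pv->pv_pool = p;
                return pv;
        }
        return NULL;                               /* no usable pool version */
}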

Containers and component objects:

An io service provides access to storage objects, an md service provides access to md objects, and an rm service provides access to resources. Containers are used to migrate and locate objects. Each container is identified by a container-id. Storage objects and md objects are identified by a fid, which is a pair <container-id, key>. All objects belonging to the same container have the same value of fid.container_id, which is equal to the id of that container.

"Container location map" maps container-id to service.

Even though containers are not yet implemented, the notion of container-id is required in order to locate the service serving an object identified by a fid.

Currently m0t1fs implements a simple (and temporary) mechanism to build the container location map. The number of containers is equal to P + 2, where P is the pool width; the 2 additional containers are used by meta-data and resource management. The pool width is a file-system parameter obtained from configuration.
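
As an illustration (not the actual Motr code), this temporary scheme can be sketched as a lookup from container-id to service role, assuming container-ids run from 0 to P + 1:

#include <stdio.h>
#include <stdint.h>

/* Sketch of the temporary container layout: container 0 is for meta-data,
 * containers 1..P map to io-services, and the last container is used by
 * resource management. P is the pool width. */
enum role { ROLE_MD, ROLE_IO, ROLE_RM };

static enum role container_role(uint64_t container_id, uint64_t P)
{
        if (container_id == 0)
                return ROLE_MD;
        if (container_id <= P)
                return ROLE_IO;
        return ROLE_RM;
}

int main(void)
{
        const char *names[] = { "md-service", "io-service", "rm-service" };
        uint64_t    P = 4;                    /* example pool width */
        uint64_t    id;

        for (id = 0; id <= P + 1; ++id)
                printf("container %llu -> %s\n",
                       (unsigned long long)id, names[container_role(id, P)]);
        return 0;
}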

Assume a user-visible file F. A gob representing F is assigned fid <0, K>, where K is taken from a monotonically increasing counter (m0t1fs_sb::csb_next_key). Container-id 0 is mapped to the md-service by the container location map. Container-id P + 1 is mapped to the rm-service.

File F has P component objects, with fids { <i, K> | i = 1, 2, ..., P }, where P is given by the pool_width mount option. The mapping <gob_fid, cob_index> -> cob_fid is implemented using linear enumeration (B * x + A) with both the A and B parameters set to 1. The container location map maps container-ids 1 to P to io-services.
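
The fid arithmetic can be illustrated with the following self-contained sketch; the struct fid here is a simplified <container-id, key> pair for illustration, not the definition used by Motr, and gob_to_cob_fid is a hypothetical helper name.

#include <stdio.h>
#include <stdint.h>

/* Simplified <container-id, key> pair; the real fid type lives in the
 * Motr sources. */
struct fid {
        uint64_t f_container;
        uint64_t f_key;
};

/* Linear enumeration B * x + A with A = B = 1: component object x (0-based)
 * of a gob <0, K> gets cob fid <x + 1, K>, i.e. containers 1..P for a pool
 * of width P. */
static struct fid gob_to_cob_fid(struct fid gob, uint64_t cob_index)
{
        const uint64_t A = 1, B = 1;
        struct fid     cob = {
                .f_container = B * cob_index + A,
                .f_key       = gob.f_key
        };
        return cob;
}

int main(void)
{
        struct fid gob = { .f_container = 0, .f_key = 42 };   /* <0, K> */
        uint64_t   P   = 4;                                   /* pool_width */
        uint64_t   x;

        for (x = 0; x < P; ++x) {
                struct fid cob = gob_to_cob_fid(gob, x);
                printf("cob %llu of gob <0, %llu>: <%llu, %llu>\n",
                       (unsigned long long)x,
                       (unsigned long long)gob.f_key,
                       (unsigned long long)cob.f_container,
                       (unsigned long long)cob.f_key);
        }
        return 0;
}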

The container location map is populated at mount time and is part of a pool version.

Directory Operations:

To create a regular file, m0t1fs sends cob create requests to the mds (for the global object, aka gob) and to the io-services (for the component objects). Because the mds is not yet implemented, m0t1fs does not send a cob create request to any mds; instead, all directory entries are maintained in an in-memory list in the root inode itself.

If component object creation fails, m0t1fs does not attempt to clean up the component objects that were successfully created. This should be handled by the dtm component, which is not yet implemented.

Read/Write:

m0t1fs currently supports only full-stripe IO, i.e. (iosize % (nr_data_units * stripe_unit_size) == 0).
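
For example, the full-stripe condition can be checked as in the sketch below; the helper name and the parameter values are illustrative only.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Full-stripe condition from above: iosize must be a whole multiple of
 * nr_data_units * stripe_unit_size. */
static bool is_full_stripe(uint64_t iosize, uint64_t nr_data_units,
                           uint64_t stripe_unit_size)
{
        return iosize % (nr_data_units * stripe_unit_size) == 0;
}

int main(void)
{
        uint64_t nr_data_units = 4, unit_size = 4096;   /* example values */

        printf("%d\n", is_full_stripe(4 * 4096, nr_data_units, unit_size)); /* 1 */
        printf("%d\n", is_full_stripe(3 * 4096, nr_data_units, unit_size)); /* 0 */
        return 0;
}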

Read-write operations on a file are not synchronised.

m0t1fs does not cache any data.

For simplicity, m0t1fs performs synchronous rpc with io-services to read/write component objects.

Enumeration Type Documentation

anonymous enum

Enumerator
M0_AVI_FS_OPEN 
M0_AVI_FS_LOOKUP 
M0_AVI_FS_CREATE 
M0_AVI_FS_READ 
M0_AVI_FS_WRITE 
M0_AVI_FS_IO_DESCR 
M0_AVI_FS_IO_MAP 

Definition at line 36 of file m0t1fs_addb2.h.