Motr M0
Test List

Page confd Internals

obj_serialize() will be tested.

{fetch,update}_next_state() will be tested.

Load a predefined configuration object from the configuration db and check its predefined value.

Load a predefined configuration directory from the db and check the predefined values of its entries.

Fetch a non-existent configuration object from the configuration db.

Page Detailed Level Design for read-modify-write IO requests.

Prepare a simple read IO request and enable the fault injection point in order to simulate degraded mode read IO. Keep a known pattern of data in the data units so that the contents of the lost data unit can be verified after the degraded mode read IO completes.

Prepare a read-modify-write IO request with one parity group and enable the fault injection point in io_bottom_half. Keep only one valid data unit in the parity group so as to ensure the read-old approach is used. Trigger the fault injection point to simulate degraded mode read IO and verify the contents of the read buffers after the degraded mode read IO completes. The contents of the read buffers should match the expected data pattern.

Prepare a read-modify-write IO request with one parity group and enable the fault injection point in io_bottom_half. Keep all but one of the data units in the parity group valid so as to ensure the read-rest approach is used. Trigger the fault injection point to simulate degraded mode read IO and verify the contents of the read buffers after the degraded mode read IO completes. The contents of the read buffers should match the expected data pattern.
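
The "known data pattern" used by the three cases above can be produced and checked with a simple fill/verify pair. The sketch below is a hypothetical helper (not part of the Motr test suite), assuming the pattern is a pure function of the unit number and byte offset:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical pattern: every byte is derived from the unit number
     * and its offset within the unit, so any corruption is detectable. */
    static uint8_t pattern_byte(uint64_t unit_no, size_t off)
    {
            return (uint8_t)(unit_no * 131 + off * 7 + 1);
    }

    static void pattern_fill(uint8_t *buf, size_t len, uint64_t unit_no)
    {
            size_t i;

            for (i = 0; i < len; ++i)
                    buf[i] = pattern_byte(unit_no, i);
    }

    /* True iff the buffer still holds the expected pattern, e.g. after
     * a degraded mode read has reconstructed the lost data unit. */
    static bool pattern_verify(const uint8_t *buf, size_t len,
                               uint64_t unit_no)
    {
            size_t i;

            for (i = 0; i < len; ++i)
                    if (buf[i] != pattern_byte(unit_no, i))
                            return false;
            return true;
    }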

In order to exercise the use-case where SNS repair is yet to start on the file,

  • Write 2 files, one which is sufficiently big in size (for instance, worth thousands of parity groups in size) and another which is smaller (worth one/two parity groups in size).
  • Keep a known data pattern in the smaller file which can be validated for correctness.
  • Write the big file first to m0t1fs and then the smaller one.
  • Start SNS repair manually by specifying failed device.
  • Repair will start in lexicographical order and will engage the bigger file first.
  • Issue a write IO request on the smaller file immediately (while repair is going on), which exercises the use-case of a file that is yet to be repaired by the SNS repair process.

In order to exercise the use-case where SNS repair has completed for the given file,

  • Write 2 files, one which is sufficiently big in size (worth thousands of parity groups in size) and another which is smaller (worth one/two parity groups in size).
  • Keep a known data pattern in the smaller file in order to verify the write IO later.
  • Write the smaller file first to m0t1fs and then the bigger one.
  • Start SNS repair manually by specifying failed device.
  • Repair will start in lexicographical order and will engage the smaller file first.
  • Issue a write IO request immediately (while repair is going on) on the smaller file, which exercises the use-case of a file that has already been repaired by SNS repair.

Issue a full parity group size IO and check if it is successful. This test case should assert that full parity group IO remains intact with the new changes.

Issue a partial parity group read IO and check if it is successful. This test case should assert that partial parity group read IO is working properly.

Issue a partial parity group write IO and check if it is successful. This should confirm that partial parity group write IO is working properly.

Write a very small amount of data (10-20 bytes) to a newly created file and check if it is successful. This should stress two boundary conditions:

  • a partial parity group write IO request and
  • unavailability of all data units in a parity group. In this case, the non-existent data units will be treated as zero-filled buffers and the parity will be calculated accordingly (see the sketch below).
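
For the second boundary condition, the sketch below shows how parity can be computed when absent data units are treated as zero-filled buffers. XOR parity and the unit/group sizes are assumed purely for illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    enum { UNIT_SIZE = 4096, DATA_UNITS = 8 };

    /* XOR parity over a parity group in which only the 'present' units
     * carry user data; a missing unit is treated as zero-filled, and
     * since x ^ 0 == x it can simply be skipped. */
    static void parity_compute(const unsigned char *unit[DATA_UNITS],
                               const bool present[DATA_UNITS],
                               unsigned char parity[UNIT_SIZE])
    {
            size_t u, i;

            memset(parity, 0, UNIT_SIZE);
            for (u = 0; u < DATA_UNITS; ++u) {
                    if (!present[u])
                            continue; /* zero unit contributes nothing */
                    for (i = 0; i < UNIT_SIZE; ++i)
                            parity[i] ^= unit[u][i];
            }
    }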

Kernel mode fault injection can be used to inject failure codes into IO path and check for results.

Test the read-rest case. If an IO request spans a parity group partially, and reading the rest of the parity group units is more economical (in terms of IO requests) than reading the spanned extent, the feature reads the rest of the parity group and calculates the new parity. For instance, in an 8+1+1 layout, the first 5 units are overwritten. In this case, the code should read the remaining 3 units, calculate the new parity, and write 9 pages in total.

Test the read-old case. If an IO request spans a parity group partially, and reading the old units and calculating parity iteratively is more economical than reading the whole parity group, the feature reads the old extent and calculates parity iteratively. For instance, in an 8+1+1 layout, the first 2 units are overwritten. In this case, the code should read the old data from these 2 units and the old parity, calculate the new parity iteratively, and write 3 units (2 data + 1 parity).
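
The choice between read-old and read-rest reduces to comparing how many units must be read before the new parity can be computed. A minimal sketch of that selection rule, consistent with the two 8+1+1 examples above (N data units and K parity units per group; "spanned" is the number of data units the request overwrites):

    #include <stdio.h>

    /* Units to read before new parity can be computed:
     *  - read-old : old copies of the spanned data units plus old parity;
     *  - read-rest: the unspanned remainder of the parity group. */
    static unsigned read_old_cost(unsigned spanned, unsigned K)
    {
            return spanned + K;
    }

    static unsigned read_rest_cost(unsigned N, unsigned spanned)
    {
            return N - spanned;
    }

    int main(void)
    {
            unsigned N = 8, K = 1; /* the 8+1+1 layout from the examples */
            unsigned spanned;

            for (spanned = 1; spanned < N; ++spanned)
                    printf("spanned=%u -> %s\n", spanned,
                           read_old_cost(spanned, K) <=
                           read_rest_cost(N, spanned) ?
                           "read-old" : "read-rest");
            /* spanned=2 selects read-old; spanned=5 selects read-rest. */
            return 0;
    }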

Page Distributed File Lock DLD

1) Lock usage when no other thread is using the lock

  • wait mode: gets the lock

2) Lock usage when a local thread is holding the lock

  • wait mode: gets the lock when the other thread releases the lock

3) Lock usage when a remote thread is holding the lock

  • wait mode: gets the lock when the other thread releases the lock

4) lib/ut/mutex.c like test

  • A set of arbitrary threads performs lock and unlock operations. Verify that the number of operations matches the expected result (see the sketch below).
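
A minimal sketch of such a test, using POSIX threads and a plain mutex for illustration; the real unit test would exercise the distributed file lock API instead:

    #include <assert.h>
    #include <pthread.h>

    enum { NR_THREADS = 8, NR_ROUNDS = 1000 };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long   counter;

    /* Each thread bumps the shared counter under the lock; the final
     * value proves every lock/unlock pair took effect exactly once. */
    static void *worker(void *arg)
    {
            int i;

            (void)arg;
            for (i = 0; i < NR_ROUNDS; ++i) {
                    pthread_mutex_lock(&lock);
                    ++counter;
                    pthread_mutex_unlock(&lock);
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t t[NR_THREADS];
            int       i;

            for (i = 0; i < NR_THREADS; ++i)
                    pthread_create(&t[i], NULL, worker, NULL);
            for (i = 0; i < NR_THREADS; ++i)
                    pthread_join(&t[i], NULL);
            assert(counter == (unsigned long)NR_THREADS * NR_ROUNDS);
            return 0;
    }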

Page Distributed Lock DLD

1) Request read lock when RW lock is not held by anyone else

  • result: lock is granted immediately

2) Request read lock when RW lock is held by another reader

  • result: lock is granted immediately

3) Request read lock when RW lock is held by writer

  • result: lock is granted after writer releases it

4) Request write lock when RW lock is not held by anyone else

  • result: lock is granted immediately

5) Request write lock when RW lock is held by reader

  • result: lock is granted after reader releases it

6) Request write lock when RW lock is held by writer

  • result: lock is granted after writer releases it
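
The six cases above can be demonstrated locally with a POSIX read-write lock, which has the same grant semantics; the distributed implementation extends them over RPC. A minimal sketch:

    #include <assert.h>
    #include <pthread.h>

    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

    int main(void)
    {
            /* Cases 1-2: a read lock is granted immediately, even while
             * another reader already holds the lock. */
            assert(pthread_rwlock_rdlock(&rw) == 0);
            assert(pthread_rwlock_tryrdlock(&rw) == 0);

            /* Cases 5-6: a write lock cannot be granted while the lock
             * is held; trywrlock fails instead of blocking. */
            assert(pthread_rwlock_trywrlock(&rw) != 0);

            pthread_rwlock_unlock(&rw);
            pthread_rwlock_unlock(&rw);

            /* Case 4: with no other holders, the write lock is granted
             * immediately. */
            assert(pthread_rwlock_trywrlock(&rw) == 0);
            pthread_rwlock_unlock(&rw);
            return 0;
    }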

Page DLD of configuration caching

m0_conf_cache operations will be tested.

Path operations will be tested. This includes checking validity of various paths.

Object operations will be tested. This includes allocation, comparison with on-wire representation, stub enrichment.

m0_confstr_parse() will be tested.

path_walk() will be tested.

m0_confc_open*() and m0_confc_close() will be tested.

Cache operations will be tested. This includes cache_add(), object_enrich(), cache_grow(), and cache_preload().

Page FDMI Detailed Design

m0_fdmi_source_register() assigns correct private callbacks (see m0_fdmi_src::fs_record_post())

m0_fdmi_source_deregister() clears private callbacks in source registration structure

m0_fdmi__record_post() puts the FDMI record into a list and wakes up the source dock FOM if the list was empty

m0_fdmi__handle_reply() calls the registered m0_fdmi_src::fs_put() in case of an RPC packet sending failure

m0_fdmi__handle_release() calls the registered m0_fdmi_src::fs_put()

m0_fdmi__src_dock_fom_start() starts FOM correctly

m0_fdmi__src_dock_fom_stop() wakes up FOM so it can stop itself

process_fdmi_rec() calls the registered source callback m0_fdmi_src::fs_begin() and applies all filters stored in filterc for this record type, calling the registered source callback m0_fdmi_src::fs_node_eval() when necessary. For each matched filter, the registered source callback m0_fdmi_src::fs_get() is called.
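
A schematic of the control flow described above; all types and helpers below are illustrative stand-ins, not the actual Motr definitions:

    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-ins for the FDMI record and filter descriptors. */
    struct fdmi_record { int fr_type; int fr_refc; };
    struct fdmi_filter {
            int   ff_type;
            bool (*ff_match)(const struct fdmi_record *);
    };

    /* Stand-ins for m0_fdmi_src::fs_begin() and ::fs_get(). */
    static void src_begin(struct fdmi_record *rec) { (void)rec; }
    static void src_get(struct fdmi_record *rec)   { rec->fr_refc++; }

    /* Apply every filter stored for this record type; take one
     * reference per matched filter. Filter evaluation may call back
     * into the source (fs_node_eval()) to examine record fields. */
    static unsigned process_record(struct fdmi_record *rec,
                                   const struct fdmi_filter *flt,
                                   size_t nr)
    {
            unsigned matched = 0;
            size_t   i;

            src_begin(rec);
            for (i = 0; i < nr; ++i) {
                    if (flt[i].ff_type == rec->fr_type &&
                        flt[i].ff_match(rec)) {
                            src_get(rec);
                            matched++;
                    }
            }
            return matched;
    }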

sd_fom_process_matched_filters() creates 'FDMI record notification' FOPs for all matched filters and sends them over RPC.

fdmi_rr_fom_tick() calls the registered source callback m0_fdmi_src::fs_put() and posts a reply with m0_fop_fdmi_rec_release_reply::frrr_rc set to 0

m0_fdmi__plugin_dock_init() and m0_fdmi__plugin_dock_start() initialise plugin dock correctly

m0_fdmi__plugin_dock_stop() and m0_fdmi__plugin_dock_fini() finalise the plugin dock correctly, unregistering any filter and record registrations that remain at that moment

Plugin correctly obtains private API interface with m0_fdmi_plugin_dock_api_get()

register_filter() successfully registers a filter with correct filter attributes, setting the filter registration to the deactivated state

register_filter() fails with incorrect filter attributes

enable_filters() correctly activates/deactivates filter registration entries with known filter ids and, when provided with unknown filter ids, ignores them without failing

release_fdmi_rec() successfully finds the record with a known id and decrements its reference counter; when the counter reaches zero, a release request is issued, resulting in a pdock_record_release() call (see the sketch below)
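
A minimal sketch of that release path, with hypothetical names; the reference counter guarantees that the release request is issued exactly once, by the last holder:

    #include <assert.h>

    struct rec_reg { unsigned long rr_ref; };

    /* Drop one plugin reference; the record is handed back to the
     * source (pdock_record_release() above) only when the last
     * reference goes away. */
    static void rec_put(struct rec_reg *r,
                        void (*release)(struct rec_reg *))
    {
            assert(r->rr_ref > 0);
            if (--r->rr_ref == 0)
                    release(r);
    }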

deregister_plugin() successfully unregisters all known filter registrations with the ids provided by the plugin

m0_fdmi__pdock_fdmi_record_register() is able to create record registration entry with complete and correctly built FOP

pdock_fom_create() is able to create a FOM when provided with a complete and correctly built FOP, and registers the FDMI record for further processing

pdock_fom_create() terminates correctly when the FOP does not carry enough information to register the FDMI record, or when memory allocation fails. Even if FOM creation is cancelled, a release request is posted anyway whenever the FDMI record id is identifiable in the FOP.

pdock_fom_tick__init() correctly replies to source over RPC and initialises record processing context

pdock_fom_tick__feed_plugin_with_rec() correctly iterates through filter ids, calls the plugin back, and is able to find the existing FDMI record registration each time the plugin accepts the record

pdock_fom_tick__finish_with_rec() is able to find the processed record and correctly unregister it

m0_filterc_ops::fco_start is called and successfully completes when FDMI service is started (which means filterc has started successfully).

m0_filterc_start() is able to connect to confd and load filters from it.

m0_filterc_stop() is able to properly finalise filterc.

m0_fdmi_eval_flt() evaluates results properly (for each supported operator and each type, including error cases e.g. unsupported operand types).

m0_fdmi_eval_flt() properly handles source-specific operations, specified using m0_fdmi_eval_add_op_cb().

m0_xcode_print() and m0_xcode_read() are able to digest filter structures (definitions).

m0_fdmi_flt_node_to_str() and m0_fdmi_flt_node_from_str() are able to serialize/deserialize filter definitions properly.

m0_fol_fdmi_src_init() call followed by m0_fol_fdmi_src_fini() call work as expected.

ffs_op_node_eval() extracts values as expected.

ffs_op_get() and ffs_op_put() do not modify counters.

ffs_op_encode() succeeds and encodes data properly.

ffs_op_decode() succeeds and the decoded data matches the original encoded record.

ffs_op_begin() does not modify counters.

ffs_op_end() decreases transaction counter by one (m0_be_tx_put()).

m0_fol_fdmi_src_fini() is able to handle case when there are "un-processed" FDMI records.

m0_fol_fdmi_post_record() calls required Source Dock methods, saves record into internal hash, and calls m0_be_tx_get().

m0_fdmi_conn_pool_init() initializes internals properly.

m0_fdmi_conn_pool_get() creates a new connection if no existing one is found.

m0_fdmi_conn_pool_get() returns a new session on an existing connection if one is found.

m0_fdmi_conn_pool_get() and m0_fdmi_conn_pool_put() handle counters properly.

m0_fdmi_conn_pool_fini() succeeds.

Page Layout DB DLD

1) Registering layout types including PDCLUST and COMPOSITE types.

2) Unregistering layout types including PDCLUST and COMPOSITE types.

3) Registering each of LIST and LINEAR enum types.

4) Unregistering each of LIST and LINEAR enum types.

5) Encoding a layout with each of the layout types and enum types.

6) Decoding a layout with each of the layout types and enum types.

7) Adding layouts with all the possible combinations of all the layout types and enumeration types.

8) Deleting layouts with all the possible combinations of all the layout types and enumeration types.

9) Updating layouts with all the possible combinations of all the layout types and enumeration types.

10) Reading a layout with all the possible combinations of all the layout types and enumeration types.

11) Checking DB persistence by comparing a layout with the layout read from the DB, for all the possible combinations of all the layout types and enumeration types.

12) Covering all the negative test cases.

13) Covering all the error cases.

Page LNet Buffer Event Circular Queue DLD

Initializing a queue of minimum size 2

Successfully producing an element

Successfully consuming an element

Failing to consume an element because the queue is empty

Initializing a queue of larger size

Repeating the producing and consuming tests

Concurrently producing and consuming elements
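
The behaviour exercised above can be read against a classic ring buffer, sketched below with hypothetical names; the actual DLD adds lock-free producer/consumer semantics on top of the same invariants. One slot is always kept empty so that "full" and "empty" remain distinguishable, which is why 2 is the minimum size:

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct cqueue {
            size_t cq_size;     /* number of slots, >= 2 */
            size_t cq_prod;     /* next slot to produce into */
            size_t cq_cons;     /* next slot to consume from */
            int    cq_slot[16];
    };

    static bool cq_produce(struct cqueue *q, int val)
    {
            size_t next = (q->cq_prod + 1) % q->cq_size;

            if (next == q->cq_cons)
                    return false;            /* full */
            q->cq_slot[q->cq_prod] = val;
            q->cq_prod = next;
            return true;
    }

    static bool cq_consume(struct cqueue *q, int *val)
    {
            if (q->cq_cons == q->cq_prod)
                    return false;            /* empty */
            *val = q->cq_slot[q->cq_cons];
            q->cq_cons = (q->cq_cons + 1) % q->cq_size;
            return true;
    }

    int main(void)
    {
            struct cqueue q = { .cq_size = 2 };
            int           v;

            assert(!cq_consume(&q, &v));     /* consuming from empty fails */
            assert(cq_produce(&q, 42));      /* producing succeeds */
            assert(cq_consume(&q, &v) && v == 42);
            return 0;
    }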

Page LNet Transport Device DLD

Initializing the device causes it to be registered and visible in the file system.

The device can be opened and closed.

Reading or writing the device fails.

Unsupported ioctl requests fail.

An nlx_core_domain can be initialized and finalized, testing common code paths and the strategy of pinning and unpinning pages.

An nlx_core_domain is initialized; several nlx_core_transfer_mc objects are started and then stopped, the domain is finalized, and the device is closed. No cleanup is necessary.

An nlx_core_domain is initialized and the same nlx_core_transfer_mc object is started twice; the error is detected. The remaining transfer machine is stopped. The device is closed. No cleanup is necessary.

An nlx_core_domain and several nlx_core_transfer_mc objects can be registered; then the device is closed and cleanup occurs.

Page LNet Transport DLD

Multiple domain creation will be tested.

Buffer registration and deregistration will be tested.

Multiple transfer machine creation will be tested.

Test that the processor affinity bitmask is set in the TM.

Transfer machine state change functionality will be tested.

Initiation of buffer operations will be tested.

Delivery of synthetic buffer events will be tested, including multiple receive buffer events for a single receive buffer. Both asynchronous and synchronous styles of buffer delivery will be tested.

Management of the reference-counted end point objects will be tested; the addresses themselves don't have to be valid for these tests.

Encoding and decoding of the network buffer descriptor will be tested.

Orderly finalization will be tested.

The bulkping system test program will be updated to include support for the LNet transport. This program will be used to test communication between end points on the same system and between remote systems. The program will offer the ability to dynamically allocate a transfer machine identifier when operating in client mode.

Page LNet Transport Kernel Core DLD

The correct sequence of LNet operations is issued for each type of buffer operation with a fake LNet API.

The callback subroutine properly delivers events to the buffer event queue, including single and multiple events for receive buffers with a fake LNet API.

The dynamic assignment of transfer machine identifiers will be tested with a fake LNet API.

Test the parsing of LNet addresses with the real LNet API.

Test each type of buffer operation, including single and multiple events for receive buffers with the real LNet API.

Page Motr Network Benchmark

Ping message send/recv over loopback device.

Concurrent ping messages send/recv over loopback device.

Bulk active send/passive receive over loopback device.

Bulk passive send/active receive over loopback device.

Statistics for sample with one value.

Statistics for sample with ten values.

Merge two m0_net_test_stats structures with m0_net_test_stats_add_stats().
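
The merge in the last case is possible without the raw samples if the structure keeps additive aggregates. A sketch under that assumption (the actual m0_net_test_stats layout may differ):

    struct stats {
            unsigned long st_count;
            double        st_min;
            double        st_max;
            double        st_sum;
            double        st_sum_sqr;
    };

    /* Fold 'src' into 'dst': counts, sums and sums of squares add up,
     * min/max combine, so the mean and standard deviation of the
     * merged sample remain computable. */
    static void stats_add_stats(struct stats *dst, const struct stats *src)
    {
            if (src->st_count == 0)
                    return;
            if (dst->st_count == 0 || src->st_min < dst->st_min)
                    dst->st_min = src->st_min;
            if (dst->st_count == 0 || src->st_max > dst->st_max)
                    dst->st_max = src->st_max;
            dst->st_count   += src->st_count;
            dst->st_sum     += src->st_sum;
            dst->st_sum_sqr += src->st_sum_sqr;
    }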

Script for tool ping/bulk testing with two test nodes.

Script for network benchmark ping/bulk self-testing over loopback device on single node.

Page SNS copy machine DLD

Test01: If an aggregation group has a single copy packet, the transformation function should be a NO-OP.

Test02: Test if all copy packets of an aggregation group get collected.

Test03: Test the transformation function. Input: two bufvecs, src and dest, to be XORed. Output: the XORed result stored in the dest bufvec.
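
A minimal sketch of that transformation, using flat buffers for illustration; the real test operates on m0_bufvec segments:

    #include <stddef.h>

    /* XOR src into dest in place, leaving the result in dest. */
    static void xor_into(unsigned char *dest, const unsigned char *src,
                         size_t len)
    {
            size_t i;

            for (i = 0; i < len; ++i)
                    dest[i] ^= src[i];
    }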