Contributing to SCST
If you would like to contribute to SCST development, you can do so in many ways:
- By sending donations. They will be spent on further work to make SCST better as well as on providing better support and troubleshooting for you. Donations can be made on a one-time or recurring basis, by companies or individuals.
- By sending patches that fix bugs or implement new functionality. See below for a list of possible SCST improvements together with some implementation ideas.
- By writing or updating documentation to keep it complete and up to date. For instance, the SCST internals description document is quite outdated in some areas. In particular, many functions have been renamed since it was written. It would be good to bring it up to date.
- By reporting bugs or other problems.
Possible SCST extensions and improvements
Zero-copy FILEIO for READ-direction commands
At the moment, SCST in FILEIO mode uses the standard Linux read() and write() syscall paths, which copy data between the page cache and the supplied buffer. Zero-copy FILEIO would use page cache data directly. This would be a major performance improvement, especially for fast hardware like InfiniBand, because it would eliminate the data copy latency as well as considerably reduce the CPU and memory bandwidth load. This proposal is limited to READs only, because zero-copy for WRITEs is a lot harder to implement, so it is worth doing zero-copy for READs and WRITEs separately.
The main idea is to add one more flag, O_ZEROCOPY, to the filp_open() "flags" parameter (alongside O_RDONLY, O_DIRECT, etc.), which would be available only if the caller is in kernel space. In this case fd->f_op->readv(), do_sync_readv_writev(), etc. would receive as the data buffer pointer not a real data buffer, but a pointer to an empty SG vector. Then:
- Generic buffer allocation in SCST would not be used; instead, vdisk_parse() would allocate the SG vector, but would not fill it with actual pages.
- In generic_file_aio_read(), if the O_ZEROCOPY flag was set, do_generic_file_read() would be called with the last parameter set to a pointer to a new function file_zero_copy_read_actor() instead of file_read_actor().
- Function file_zero_copy_read_actor() would be basically the same as file_read_actor(), but, instead of copying data using the __copy_to_user*() functions, it would add the supplied page at the appropriate place in the SG vector received in desc->arg.buf and take a reference on that page, i.e. get_page() it.
- In vdisk_devtype.on_free_cmd(), which doesn't exist yet, all pages from the SG vector would be dereferenced, i.e. put_page(), and then the SG vector itself would be freed.
That's all. For WRITEs the current code path would remain unchanged.
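To make the idea more concrete, below is a rough, untested sketch of such an actor. It assumes the older read_descriptor_t-based actor interface of do_generic_file_read() that this proposal refers to; the zero_copy_cursor structure and its fields are purely hypothetical.

/*
 * Hypothetical zero-copy read actor. The read_descriptor_t-based
 * interface follows the older do_generic_file_read() actor API
 * mentioned above; zero_copy_cursor is an invented helper structure.
 */
struct zero_copy_cursor {
    struct scatterlist *sg;    /* command's SG vector */
    int idx;                   /* next free SG entry */
};

static int file_zero_copy_read_actor(read_descriptor_t *desc,
                                     struct page *page,
                                     unsigned long offset,
                                     unsigned long size)
{
    struct zero_copy_cursor *cur = desc->arg.data;
    unsigned long count = desc->count;

    if (size > count)
        size = count;

    /* Keep the page referenced until vdisk_devtype.on_free_cmd(). */
    get_page(page);
    sg_set_page(&cur->sg[cur->idx++], page, size, offset);

    desc->count = count - size;
    desc->written += size;
    return size;
}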
Zero-copy FILEIO for WRITE-direction commands
The implementation should be similar to zero-copy FILEIO for READ commands and should be done after it. All incoming data should be inserted into the page cache and then dereferenced in vdisk_devtype.on_free_cmd(). The main problem is the insertion of data pages into the page cache, namely the locking issues related to it; they should be carefully investigated.
Dynamic I/O flow control
At the moment, if one or several initiators simultaneously send too many commands to a target, especially in seek-intensive workloads, the target can get overloaded and become unable to finish commands in time. In such cases you can see messages on the initiator(s) about aborting commands or resetting the target. See the SCST core README section "What if target's backstorage is too slow" for more details. To fix this problem it is necessary to implement dynamic I/O flow control in the SCST core.
The flow control, generally, is quite simple. Each SCST command has a timeout value, which is set by the corresponding dev handler. The SCST core should keep the device's queue depth at a level where the worst command's execution time, i.e. the time between scst_rx_cmd() and scst_finish_cmd(), stays between something like timeout/10 and timeout/5. So, command execution times should be checked and:
- If it's > timeout/5, then the new queue depth should be set to max(1, cur_depth/2).
- If it's < timeout/10, then the new queue depth should be set to min(MAX_DEPTH, cur_depth+1). This shouldn't be done too often; once in a few minutes should be sufficient.
The above is, of course, an oversimplification to convey the idea. An implementation that takes real-life cases into account should be as follows:
1. There are several parameters:
- P - load watch period. During this period all the statistics are gathered and processed.
- MN - underload ratio divisor, which sets the underload portion of the timeout. If the longest execution time among all commands completed during period P is below timeout/MN, the corresponding device is considered underloaded.
- MX - overload ratio divisor, which sets the overload portion of the timeout. If the longest execution time among all commands completed during period P is above timeout/MX, the corresponding device is considered overloaded.
- I - step by which the device's queue size will be increased if the device is considered underloaded.
- D - divisor by which the device's queue size will be decreased if the device is considered overloaded.
- QI - quick fall interval. See the description of the Q parameter.
- Q - quick fall ratio divisor. If the longest execution time of a completed command is above timeout/Q and the time since the previous quick fall is greater than QI, the corresponding device is considered heavily overloaded. The quick fall is needed to handle cases when the load on a device suddenly increases to a level it can't handle properly.
- QD - divisor by which the device's queue size will be decreased if the device is considered heavily overloaded.
The default values should be something like: P=15 sec., MN=20, MX=10, Q=3, I=1, D=2, QI=5 sec., QD=10.
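For illustration only, these defaults could be expressed as compile-time constants; the macro names below are hypothetical and are reused in the sketches further down.

/* Hypothetical default values of the flow control parameters. */
#define SCST_FLOW_P    (15 * HZ)    /* load watch period */
#define SCST_FLOW_MN   20           /* underload ratio divisor */
#define SCST_FLOW_MX   10           /* overload ratio divisor */
#define SCST_FLOW_Q    3            /* quick fall ratio divisor */
#define SCST_FLOW_I    1            /* queue depth increase step */
#define SCST_FLOW_D    2            /* queue depth decrease divisor */
#define SCST_FLOW_QI   (5 * HZ)     /* quick fall interval */
#define SCST_FLOW_QD   10           /* quick fall decrease divisor */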
2. There are the following new variables in struct scst_device:
- queue_depth - current queue depth.
- max_exec_ratio - maximum ratio of a command's execution time to its timeout seen during period P (stored as a percentage, see below).
- queue_was_full - flag marking that the queue was full at least once during period P.
- quick_fall_time - time of the last quick fall.
- flow_lock - protects the flow control related variables, where needed.
- ...
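As a rough illustration, the new fields could look like the following; the exact types, the delayed work member and any further fields are assumptions.

struct scst_device {
    /* ... existing fields ... */

    /* Hypothetical dynamic I/O flow control fields */
    int queue_depth;                /* current queue depth */
    int max_exec_ratio;             /* max (exec time)*100/timeout seen during P */
    bool queue_was_full;            /* queue was full at least once during P */
    unsigned long quick_fall_time;  /* jiffies of the last quick fall */
    spinlock_t flow_lock;           /* protects the flow control fields */
    struct delayed_work flow_work;  /* periodic flow control work, run every P */

    /* ... existing fields ... */
};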
3. The command processing path should be as follows:
- In scst_rx_cmd() the start time of the command is recorded (already done).
- In __scst_init_cmd(), if dev->dev_cmd_count == dev->queue_depth, dev->queue_was_full is set to true.
- In scst_finish_cmd(), dev->max_exec_ratio is set to max(dev->max_exec_ratio, (cmd's exec time)*100/cmd->timeout).
- If in scst_finish_cmd() the cmd's exec time is above cmd->timeout/Q and the time since the latest quick fall is above QI, then:
- dev->queue_depth is set to max(1, dev->queue_depth/QD).
- The flow control period is reset, i.e. started again, including setting dev->max_exec_ratio to 0 and dev->quick_fall_time to jiffies.
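A minimal sketch of how the completion path could update these fields follows; the scst_cmd fields start_time and timeout, and the helper name itself, are assumptions for illustration.

/* Hypothetical hook called from scst_finish_cmd(). */
static void scst_flow_control_cmd_done(struct scst_device *dev,
                                       struct scst_cmd *cmd)
{
    unsigned long exec_time = jiffies - cmd->start_time;
    int ratio = exec_time * 100 / cmd->timeout;

    spin_lock(&dev->flow_lock);

    if (ratio > dev->max_exec_ratio)
        dev->max_exec_ratio = ratio;

    /* Quick fall: one very long command, and at least QI has passed
     * since the previous quick fall. */
    if (exec_time > cmd->timeout / SCST_FLOW_Q &&
        time_after(jiffies, dev->quick_fall_time + SCST_FLOW_QI)) {
        dev->queue_depth = max(1, dev->queue_depth / SCST_FLOW_QD);
        /* Restart the flow control period. */
        dev->max_exec_ratio = 0;
        dev->quick_fall_time = jiffies;
    }

    spin_unlock(&dev->flow_lock);
}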
4. There should be a work item which, once every P seconds, will check dev->max_exec_ratio and then:
- If the device is neither underloaded nor overloaded, i.e. max_exec_ratio is between the limits defined by MN and MX, do nothing.
- If the device was underloaded:
- if dev->queue_was_full is false, then do nothing.
- if dev->queue_was_full is true, then set dev->queue_depth to min(SCST_MAX_DEV_COMMANDS, dev->queue_depth + I).
- If the device was overloaded, then set dev->queue_depth to max(1, dev->queue_depth/D).
Then the flow control period is reset, i.e. started again, including setting dev->max_exec_ratio to 0 and dev->quick_fall_time to jiffies.
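A rough sketch of that periodic work, reusing the hypothetical fields and parameter macros sketched above, could look like this:

/* Hypothetical flow control work, scheduled once every P seconds. */
static void scst_flow_control_work_fn(struct work_struct *work)
{
    struct scst_device *dev =
        container_of(work, struct scst_device, flow_work.work);

    spin_lock(&dev->flow_lock);

    if (dev->max_exec_ratio < 100 / SCST_FLOW_MN) {
        /* Underloaded: grow the queue only if it was actually full. */
        if (dev->queue_was_full)
            dev->queue_depth = min(SCST_MAX_DEV_COMMANDS,
                                   dev->queue_depth + SCST_FLOW_I);
    } else if (dev->max_exec_ratio > 100 / SCST_FLOW_MX) {
        /* Overloaded: shrink the queue. */
        dev->queue_depth = max(1, dev->queue_depth / SCST_FLOW_D);
    }

    /* Restart the flow control period. */
    dev->max_exec_ratio = 0;
    dev->queue_was_full = false;
    dev->quick_fall_time = jiffies;

    spin_unlock(&dev->flow_lock);

    schedule_delayed_work(&dev->flow_work, SCST_FLOW_P);
}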
That's all. Then only support needs to be added for initiators, like iSCSI ones, which don't react to QUEUE FULL by decreasing the number of queued commands. Such initiators expect the target to control the size of the queue, e.g. through MaxCmdSN for iSCSI.
For that, at stage 2 of the dynamic flow control development, the following should be done:
- A new callback on_queue_depth_adjustment() should be added to struct scst_tgt_template.
- If the target driver defined it, on_queue_depth_adjustment() should be called each time after dev->queue_depth has changed. In this callback the target driver should adjust its internal queue depth, e.g. for the iSCSI target, so that MaxCmdSN in the replies is set correctly.
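For illustration, the callback could be declared roughly as follows; the exact prototype is an assumption:

struct scst_tgt_template {
    /* ... existing fields ... */

    /* Hypothetical callback, invoked after dev->queue_depth has changed. */
    void (*on_queue_depth_adjustment)(struct scst_tgt *tgt,
                                      struct scst_device *dev,
                                      int new_queue_depth);

    /* ... existing fields ... */
};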
Then, at the last stage of the development, logic should be added to avoid scheduling the flow control work on idle devices.
Support for O_DIRECT in scst_vdisk handler
At the moment, the scst_vdisk handler doesn't support the O_DIRECT option and the possibility to set it was disabled. This limitation is caused by the Linux kernel's expectation that memory supplied to the read() and write() functions with the O_DIRECT flag is mapped into some user space application.
It is relatively easy to remove that limitation. The function dio_refill_pages() should be modified to check, before calling get_user_pages(), whether current->mm is not NULL. If it is NULL, then, instead of calling get_user_pages(), dio->pages should be filled with pages taken directly from dio->curr_user_address. Each such page should be referenced with page_cache_get(). That's all.
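A rough, untested sketch of the kernel-caller path follows. It is based on the older fs/direct-io.c layout this text refers to (struct dio with curr_user_address, pages[], head and tail), and it assumes the buffer is linearly mapped kernel memory, so virt_to_page() is applicable.

/* Hypothetical helper for the proposed kernel-caller case;
 * dio_refill_pages() would call it when current->mm == NULL
 * instead of calling get_user_pages(). */
static int dio_refill_pages_kernel(struct dio *dio, int nr_pages)
{
    unsigned long addr = dio->curr_user_address;
    int i;

    for (i = 0; i < nr_pages; i++) {
        /* The "user address" of a kernel caller points to kernel
         * memory, so translate it to pages directly. */
        struct page *page = virt_to_page(addr + i * PAGE_SIZE);

        page_cache_get(page);  /* extra reference, dropped when the dio completes */
        dio->pages[i] = page;
    }

    dio->head = 0;
    dio->tail = nr_pages;
    return 0;
}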
Refactoring of command execution path in scst_vdisk handler
At the moment, the command execution function vdisk_do_job() in the scst_vdisk handler is overcomplicated and not very efficient. It would be good to replace all those ugly "switch" statements by choosing the handler for each SCSI command via an indirect function call through an array of function pointers.
I.e., there should be an array vdisk_exec_fns with 256 entries of function pointers of the type:
int (*cmd_exec_fn)(struct scst_cmd *cmd);
Then vdisk_do_job() should look like
static int vdisk_do_job(struct scst_cmd *cmd)
{
return vdisk_exec_fns[cmd->cdb[0]](cmd);
}
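For illustration, the array could be filled with designated initializers and the lookup extended with a fallback for unsupported opcodes. The handler names below are hypothetical; only the opcode macros are standard kernel ones.

typedef int (*vdisk_exec_fn_t)(struct scst_cmd *cmd);

/* Hypothetical per-opcode dispatch table; entries left out stay NULL. */
static vdisk_exec_fn_t vdisk_exec_fns[256] = {
    [READ_6]        = vdisk_exec_read,
    [READ_10]       = vdisk_exec_read,
    [READ_16]       = vdisk_exec_read,
    [WRITE_6]       = vdisk_exec_write,
    [WRITE_10]      = vdisk_exec_write,
    [WRITE_16]      = vdisk_exec_write,
    [READ_CAPACITY] = vdisk_exec_read_capacity,
    /* ... remaining opcodes ... */
};

static int vdisk_do_job(struct scst_cmd *cmd)
{
    vdisk_exec_fn_t fn = vdisk_exec_fns[cmd->cdb[0]];

    if (unlikely(fn == NULL))
        return vdisk_exec_invalid_opcode(cmd);  /* hypothetical fallback */

    return fn(cmd);
}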
Solve SG IO count limitation issue in pass-through mode
In pass-through mode (i.e. using the pass-through device handlers like scst_tape, etc.), SCSI commands coming from remote initiators are passed to the local SCSI hardware on the target as is, without any modifications. Like any other hardware, the local SCSI hardware cannot handle commands whose amount of data and/or number of scatter-gather segments exceed certain limits. For some commands SCST can split them into subcommands and thereby work around this problem, but that isn't always possible. For instance, for tapes, splitting write commands may mean corrupting the tape data.
If you have this issue you will see symptoms like small transfers working well while large transfers stall, and messages like "Unable to complete command due to SG IO count limitation" are printed in the kernel logs.
The only complete way to fix this problem is to allocate data buffers with a number of entries within the SG IO count limitation. In sgv_big_order_alloc.diff you can find a possible way to solve this issue.
You can also look at the patch sgv_big_order_alloc-sfw5-rc3.diff created by Frank Zago for SCST 2.0.0; it was submitted too late to be included in that release. An update for SCST trunk is welcome!
Memory registration
In some cases a target driver might need to register memory used for data buffers with the hardware. At the moment, none of the SCST target drivers, including the InfiniBand SRP target driver, need that feature. But if such a feature is needed in the future, it can easily be added by extending the SCST SGV cache. The SCST SGV cache is a memory management subsystem in SCST. It doesn't free each no longer used data buffer back to the system, but keeps it for a while to let it be reused by a subsequent command, which reduces command processing latency and, hence, improves performance.
To support memory buffer registration, it can be extended in the following way:
1. Struct scst_tgt_template would be extended to have 2 new callbacks:
- int register_buffer(struct scst_cmd *cmd)
- int unregister_buffer(unsigned long mem_priv, void *scst_priv)
2. SCST core would be extended to have 4 new functions:
- int scst_mem_registered(struct scst_cmd *cmd)
- int scst_mem_deregistered(void *scst_priv)
- int scst_set_mem_priv(struct scst_cmd *cmd, unsigned long mem_priv)
- unsigned long scst_get_mem_priv(struct scst_cmd *cmd)
3. The workflow would be the following:
- If the target driver defined the register_buffer() and unregister_buffer() callbacks, the SCST core would allocate a dedicated SGV cache for each instance of struct scst_tgt, i.e. for each target.
- When there is an SGV cache miss during memory buffer allocation for a command, SCST would check whether the register_buffer() callback was defined in the target driver's template and, if so, would call it.
- In the register_buffer() callback the target driver would do the actions necessary to start registration of the command's memory buffer.
- After the register_buffer() callback returns, the SCST core would suspend processing of the corresponding command and would switch to processing the next commands.
- After the memory registration has finished, the target driver would call scst_set_mem_priv() to associate the memory buffer with some internal data.
- Then the target driver would call scst_mem_registered() and SCST would resume processing the command. The functions scst_set_mem_priv() and scst_mem_registered() can be called from inside register_buffer(); in this case the SCST core would continue processing the command immediately without suspending it.
- After the command has finished, the corresponding memory buffer would remain in the SGV cache in the registered state and would be reused by subsequent commands. For each of them the target driver can at any time retrieve the data associated with the registered buffer by using scst_get_mem_priv().
- When the SGV cache decides that it is time to free the memory buffer, it would call the target driver's unregister_buffer() callback.
- In this callback the target driver would do the actions necessary to start deregistration of the command's memory buffer.
- After the unregister_buffer() callback returns, the SGV cache would postpone freeing the corresponding buffer and would switch to its other work.
- After the memory deregistration has finished, the target driver would call scst_mem_deregistered() and pass to it the scst_priv pointer received in unregister_buffer(). Then the memory buffer would be freed by the SGV cache. The function scst_mem_deregistered() can be called from inside unregister_buffer(); in this case the SGV cache would free the buffer immediately without postponing.
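For illustration, a target driver that can register buffers synchronously might implement the callback roughly as follows, using the proposed API from items 1 and 2. The function my_tgt_register_dma(), its return type, the return-value conventions and the use of the scst_cmd_get_sg()/scst_cmd_get_sg_cnt() accessors are assumptions.

/* Hypothetical synchronous implementation of the proposed callback.
 * my_tgt_register_dma() stands for the hardware-specific registration. */
static int my_tgt_register_buffer(struct scst_cmd *cmd)
{
    struct my_tgt_mem *mem;

    mem = my_tgt_register_dma(scst_cmd_get_sg(cmd),
                              scst_cmd_get_sg_cnt(cmd));
    if (mem == NULL)
        return -ENOMEM;

    /* Associate the hardware handle with the buffer... */
    scst_set_mem_priv(cmd, (unsigned long)mem);

    /* ...and, since registration completed synchronously, tell SCST
     * right away, so the command is not suspended (see the workflow
     * above). */
    scst_mem_registered(cmd);

    return 0;
}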
SCST usage with non-SCSI transports
SCST might also be used with non-SCSI-speaking transports, like NBD or AoE. Such cooperation would allow them to use an SCST-emulated backend.
For user space targets this is trivial: they should simply use SCST-emulated devices locally via the scst_local module.
For in-kernel non-SCSI target drivers it's a bit more complicated. They should implement a small layer which would translate their internal READ/WRITE requests into the corresponding SCSI commands and, on the way back, SCSI status and sense codes into their internal status codes.
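As a rough sketch of what such a translation could look like for the data path, assuming the request fits into a READ(10)/WRITE(10) CDB; the function name and parameters are hypothetical:

/* Hypothetical translation of an internal block request into a
 * READ(10)/WRITE(10) CDB for submission to SCST. */
static void my_build_rw10_cdb(u8 *cdb, bool is_write, u32 lba, u16 nr_blocks)
{
    memset(cdb, 0, 10);
    cdb[0] = is_write ? WRITE_10 : READ_10;  /* 0x2A / 0x28 */
    cdb[2] = (lba >> 24) & 0xff;             /* logical block address */
    cdb[3] = (lba >> 16) & 0xff;
    cdb[4] = (lba >> 8) & 0xff;
    cdb[5] = lba & 0xff;
    cdb[7] = (nr_blocks >> 8) & 0xff;        /* transfer length in blocks */
    cdb[8] = nr_blocks & 0xff;
}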
iSER target
The iSER (iSCSI Extensions for RDMA) protocol accelerates iSCSI by allowing direct data transfers using RDMA services (iWARP or InfiniBand), bypassing the regular heavyweight and CPU-consuming TCP/IP data transfer path.
It would be good to add support for iSER in iSCSI-SCST.
GET CONFIGURATION command
The SCSI command GET CONFIGURATION is mandatory for SCSI multimedia devices, like CD/DVD-ROMs or recorders; see the MMC standard. Currently SCST lacks support for it, which leads to problems with some programs that depend on the result of GET CONFIGURATION.
It would be good to add support for it in the SCST core.
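For illustration, a minimal implementation could start by returning only the 8-byte feature header defined by MMC (a 4-byte Data Length, 2 reserved bytes and a 2-byte Current Profile); the function name and the chosen profile value are assumptions, and real support would also have to emit the feature descriptors requested by the initiator.

/* Hypothetical minimal GET CONFIGURATION (opcode 0x46) response:
 * only the MMC feature header, no feature descriptors yet. */
static int vcdrom_build_get_configuration(u8 *buf, int buf_len)
{
    u16 current_profile = 0x0008;   /* assumed: CD-ROM profile */
    u32 data_len = 4;               /* bytes following the Data Length field */

    if (buf_len < 8)
        return -EINVAL;

    memset(buf, 0, 8);
    buf[0] = (data_len >> 24) & 0xff;   /* Data Length */
    buf[1] = (data_len >> 16) & 0xff;
    buf[2] = (data_len >> 8) & 0xff;
    buf[3] = data_len & 0xff;
    buf[6] = (current_profile >> 8) & 0xff;   /* Current Profile */
    buf[7] = current_profile & 0xff;

    return 8;   /* number of valid bytes produced */
}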
Per-device suspending
Currently, before doing any management operation the SCST core performs so-called "activities suspending", i.e. it suspends newly arriving SCSI commands and waits until the currently executing ones have finished. This simplifies internal locking and reference counting a lot, but has the drawback that it is global, i.e. it affects all devices and SCSI commands, even ones which don't participate in the management operation. In the majority of regular cases it works pretty well, but sometimes it can be a problem. For instance, if a SCSI command needs a large amount of execution time (hours for some tape operations), the management command and all other SCSI commands will wait until it has finished. Even worse, if a user space dev handler hangs and stops processing commands, any SCST management command will not be able to complete and will fail with a timeout until the user space dev handler gets killed.
The global suspending should be changed to more fine-grained per-device suspending, applied only in cases where it's really needed, like device unregistration. This is a very tricky task, because all the internal SCST locking would have to be reworked.