Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D51727
This moves the checks previously under #ifdef STRICT in
nvmf_nqn_valid() into a separate helper for userland. This
requires that the NQN starts with "nqn.YYYY-MM." followed by at
least one additional character.
Reviewed by: asomers
Differential Revision: https://reviews.freebsd.org/D48767
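
Illustratively, the strict prefix check has roughly this shape (a
sketch only; the helper name here is hypothetical and the in-tree
code may differ):

#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper mirroring the strict check described above. */
static bool
nqn_has_strict_prefix(const char *nqn)
{
	/* "nqn.YYYY-MM." is 12 characters, plus at least one more. */
	if (strlen(nqn) < 13 || strncmp(nqn, "nqn.", 4) != 0)
		return (false);
	for (int i = 4; i < 8; i++)		/* YYYY */
		if (!isdigit((unsigned char)nqn[i]))
			return (false);
	if (nqn[8] != '-')
		return (false);
	if (!isdigit((unsigned char)nqn[9]) ||
	    !isdigit((unsigned char)nqn[10]))	/* MM */
		return (false);
	return (nqn[11] == '.');
}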
Use a timer in the nvmf(4) driver to periodically trigger a devctl
"RECONNECT" notification. A trigger in the /etc/devd/nvmf.conf file
invokes "nvmecontrol reconnect nvmeX" upon each notification. This
differs from iSCSI, which uses a dedicated daemon (iscsid(8)) to wait
inside a custom ioctl for an iSCSI initiator event to occur, but I
think this design might be simpler.
Similar to nvme-cli, the interval between reconnection attempts is
specified in seconds by the --reconnect-delay argument to the connect
and reconnect commands. Note that nvme-cli uses -c as the short
option for this flag, but that letter was already taken, so
nvmecontrol uses -r.
The default is 10 seconds to match Linux.
In addition, a second timeout can be used to force a full detach of a
disconnected nvmeX device after the controller loss timeout
expires. The timeout for this is specified in seconds by the
--ctrl-loss-tmo/-l options (identical to nvme-cli). The default is
600 seconds.
Either of these timers can be disabled by setting the timer to 0. In
that case, the associated action (devctl notifications or full detach)
will not occur after a disconnect.
Note that this adds a dedicated taskqueue for nvmf tasks instead of
using taskqueue_thread, as the controller loss task could deadlock
waiting for the completion of other tasks queued to taskqueue_thread.
(Specifically, tearing down the CAM SIM can trigger
destroy_dev_sched_cb(), which waits for the callback to run, but the
callback is scheduled to run in a task on taskqueue_thread. Possibly,
destroy_dev_sched should be using a dedicated taskqueue.)
Reviewed by: imp (earlier version)
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D50222
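
As a sketch of the mechanism (assuming a simplified softc; the
in-tree code differs in detail), the timer re-arms itself and posts
the devctl notification that devd matches:

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/bus.h>
#include <sys/callout.h>

/* Simplified stand-in for the real nvmf(4) softc. */
struct nvmf_softc_sketch {
	struct callout	reconnect_callout;
	u_int		reconnect_delay;	/* seconds; 0 disables */
	char		devname[16];		/* e.g. "nvme0" */
};

static void
nvmf_reconnect_timer(void *arg)
{
	struct nvmf_softc_sketch *sc = arg;

	/* devd's nvmf.conf rule runs "nvmecontrol reconnect nvmeX". */
	devctl_notify("nvme", sc->devname, "RECONNECT", NULL);
	callout_schedule(&sc->reconnect_callout, sc->reconnect_delay * hz);
}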
This returns an nvlist indicating if a Fabrics host is connected and
the time of the most recent disconnection.
Reviewed by: imp
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D48219
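
Consuming the result from userland might look like this (the element
names "connected" and "last_disconnect" are illustrative guesses, not
the committed interface):

#include <sys/nv.h>
#include <stdint.h>
#include <stdio.h>

static void
print_connection_status(const nvlist_t *nvl)
{
	if (nvlist_get_bool(nvl, "connected"))
		printf("connected\n");
	else
		printf("disconnected since %ju\n",
		    (uintmax_t)nvlist_get_number(nvl, "last_disconnect"));
}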
Save more data associated with a new association, including the network
address of the remote controller. This permits reconnecting an
association without providing the address or other details. To use
this new mode, provide only an existing device ID to nvmecontrol's
reconnect command. An address can still be provided to request a
different address or other different settings for the new association.
The saved data includes an entire Discovery Log page entry, with the
aim of being compatible with other transports in the future. When a remote
controller is connected to via a Discovery Log page entry (nvmecontrol
connect-all), the raw entry is used. When a remote controller is
connected to via an explicit address, an entry is synthesized from the
parameters.
Note that this is a pseudo-ABI break for the ioctls used by nvmf(4) in
that the nvlists for handoff and reconnect now use a slightly
different set of elements. Since this is only present in main I did
not bother implementing compatibility shims.
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D48214
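
Synthesizing an entry from explicit parameters might look roughly
like this (field names follow the NVMe Fabrics spec; the in-tree
structure and code may differ):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the real Discovery Log page entry structure. */
struct dle_sketch {
	uint8_t	trtype;			/* transport type */
	uint8_t	adrfam;			/* address family */
	char	trsvcid[32];		/* service ID (port) */
	char	traddr[256];		/* transport address */
	char	subnqn[256];		/* subsystem NQN */
};

static void
synthesize_dle(struct dle_sketch *dle, const char *addr, const char *port,
    const char *subnqn)
{
	memset(dle, 0, sizeof(*dle));
	dle->trtype = 3;		/* TCP per the NVMe spec */
	snprintf(dle->traddr, sizeof(dle->traddr), "%s", addr);
	snprintf(dle->trsvcid, sizeof(dle->trsvcid), "%s", port);
	snprintf(dle->subnqn, sizeof(dle->subnqn), "%s", subnqn);
}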
For requests that handoff queues from userspace to the kernel as well
as the request to fetch reconnect parameters from the kernel, switch
from using flat structures to nvlists. In particular, this will
permit adding support for additional transports in the future without
breaking the ABI of the structures.
Note that this is an ABI break for the ioctls used by nvmf(4) and
nvmft(4). Since this is only present in main I did not bother
implementing compatibility shims.
Inspired by: imp (suggestion on a different review)
Reviewed by: imp
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D48230
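
The payoff is that a transport can add elements without changing a
struct layout. A sketch of packing handoff parameters with libnv
(element names are illustrative, not the committed interface):

#include <sys/nv.h>
#include <stdbool.h>

static nvlist_t *
pack_handoff_params(int qp_socket, unsigned int qsize, bool admin)
{
	nvlist_t *nvl;

	nvl = nvlist_create(0);
	nvlist_add_string(nvl, "transport", "tcp");
	nvlist_add_bool(nvl, "admin", admin);
	nvlist_add_number(nvl, "qsize", qsize);
	/* TCP-specific element; another transport would add its own. */
	nvlist_add_descriptor(nvl, "fd", qp_socket);
	return (nvl);
}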
PDU data alignment (PDA) isn't necessarily a power of 2, just a
multiple of 4, so use roundup() instead of roundup2() to compute the
PDU data offset (PDO).
Sponsored by: Chelsio Communications
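
A worked example of why this matters, using the <sys/param.h> macros:

#include <sys/param.h>
#include <assert.h>

int
main(void)
{
	/* A PDA of 12 is a valid multiple of 4 but not a power of 2. */
	assert(roundup(20, 12) == 24);	/* next multiple of 12: correct */
	assert(roundup2(20, 12) == 20);	/* (20 + 11) & ~11 == 20: wrong */
	return (0);
}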
The caller-supplied PDU data alignment (PDA) field from
nvmf_association_params is the caller's restriction on data alignment
(so affects received PDUs), and the PDA value received from the other
end is the remote end's restriction (so affects transmitted PDUs).
I had these backwards so that if the remote end advertised a PDA it
was used as the receive PDA instead of the transmit PDA.
Sponsored by: Chelsio Communications
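
In other words (variable names here are illustrative):

#include <stdint.h>

struct tcp_pda_sketch {
	uint8_t	rxpda;		/* alignment for PDUs we receive */
	uint8_t	txpda;		/* alignment for PDUs we transmit */
};

static void
set_pdas(struct tcp_pda_sketch *qp, uint8_t local_pda, uint8_t remote_pda)
{
	qp->rxpda = local_pda;	/* our nvmf_association_params PDA */
	qp->txpda = remote_pda;	/* PDA advertised by the other end */
}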
After building packages we have a number of new
and updated Makefile.depend files
Reviewed by: stevek
Sponsored by: Chelsio Communications
The spec says MAXH2CDATA is "a multiple of dwords and should be no
less than 4,096".
Sponsored by: Chelsio Communications
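
The corresponding check is straightforward (a sketch; the in-tree
validation may be structured differently):

#include <stdbool.h>
#include <stdint.h>

static bool
maxh2cdata_valid(uint32_t maxh2cdata)
{
	/* "a multiple of dwords and should be no less than 4,096" */
	return (maxh2cdata % 4 == 0 && maxh2cdata >= 4096);
}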
This prevents stack garbage from leaking into the cdata used for the
userspace I/O controller in nvmfd(8).
Sponsored by: Chelsio Communications
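
The general shape of such a fix (a sketch with a stand-in structure;
the real one is the NVMe controller data):

#include <string.h>

/* Stand-in for the real controller data structure. */
struct cdata_sketch {
	char	sn[20];
	char	mn[40];
	/* ... many more fields, some never explicitly set ... */
};

static void
init_cdata(struct cdata_sketch *cdata)
{
	/* Zero everything first so unset fields and padding don't
	 * leak stack garbage to the remote host. */
	memset(cdata, 0, sizeof(*cdata));
	/* ... fill in individual fields ... */
}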
In nvmf_host_fetch_discovery_log_page(), the log variable may have been
allocated on the heap during the first loop iteration, and should be
free()'d before returning on error.
Reported by: Coverity
CID: 1545034
Sponsored by: The FreeBSD Foundation
Reviewed by: imp,jhb
Pull Request: https://github.com/freebsd/freebsd-src/pull/1239
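
The shape of the fix, reduced to its essentials (a sketch, not the
actual libnvmf code):

#include <stdlib.h>

static int
fetch_log_sketch(void **outp, int (*fetch)(void *, size_t), size_t len)
{
	void *log;

	if ((log = malloc(len)) == NULL)
		return (-1);
	if (fetch(log, len) != 0) {
		free(log);		/* the fix: don't leak on error */
		return (-1);
	}
	*outp = log;
	return (0);
}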
Add parentheses to ensure the correct order of operations.
Reported by: GCC
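
The class of bug GCC flags here looks like this (illustrative; not
the actual expression from this commit):

#include <stdbool.h>

static bool
precedence_example(int flags, int mask)
{
	/* "flags & mask == 1" parses as "flags & (mask == 1)" because
	 * == binds tighter than &; parenthesize the intended operand. */
	return ((flags & mask) == 1);
}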
libnvmf provides APIs for transmitting and receiving Command and
Response capsules along with data associated with NVMe commands.
Capsules are represented by 'struct nvmf_capsule' objects.
Capsules are transmitted and received on queue pairs represented by
'struct nvmf_qpair' objects.
Queue pairs belong to an association represented by a 'struct
nvmf_association' object.
libnvmf provides additional helper APIs to assist with constructing
command capsules for a host, response capsules for a controller,
connecting queue pairs to a remote controller and optionally
offloading connected queues to an in-kernel host, accepting queue pair
connections from remote hosts and optionally offloading connected
queues to an in-kernel controller, constructing controller data
structures for local controllers, etc.
libnvmf also includes an internal transport abstraction as well as an
implementation of a userspace TCP transport.
libnvmf is primarily intended for ease of use and low-traffic use cases
such as establishing connections that are handed off to the kernel.
As such, it uses a simple API built on blocking I/O.
For a host, a consumer first populates a 'struct
nvmf_association_params' with a set of parameters shared by all queue
pairs for a single association (such as whether or not to use SQ flow
control and header and data digests) and creates a 'struct
nvmf_association' object. The consumer is responsible for
establishing a TCP socket for each queue pair. This socket is
included in the 'struct nvmf_qpair_params' passed to 'nvmf_connect' to
complete transport-specific negotiation, send a Fabrics Connect
command, and wait for the Connect reply. Upon success, a new 'struct
nvmf_qpair' object is returned. This queue pair can then be used to
send and receive capsules. A command capsule is allocated, populated
with an SQE and optional data buffer, and transmitted via
nvmf_host_transmit_command. The consumer can then wait for a reply
via nvmf_host_wait_for_response. The library also provides some
wrapper functions such as nvmf_read_property and nvmf_write_property
which send a command and wait for a response synchronously.
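Compressed into code, the host-side flow looks roughly like this.
The prototypes below are simplified stand-ins so the steps read as a
program; libnvmf's actual signatures take more parameters (queue IDs,
sizes, NQNs, and so on):

#include <stdbool.h>
#include <stdint.h>

struct nvmf_association;
struct nvmf_qpair;

/* Simplified assumed prototypes, not libnvmf's real ones. */
struct nvmf_association *nvmf_allocate_association(int trtype, bool ctlr,
    const void *aparams);
struct nvmf_qpair *nvmf_connect(struct nvmf_association *na,
    const void *qparams);
int nvmf_read_property(struct nvmf_qpair *qp, uint32_t off, uint8_t size,
    uint64_t *value);

static int
host_sketch(int tcp_fd)
{
	struct nvmf_association *na;
	struct nvmf_qpair *qp;
	struct { int fd; } qparams = { tcp_fd };  /* caller-made socket */
	uint64_t csts;

	/* Association-wide parameters (digests, SQ flow control). */
	na = nvmf_allocate_association(3 /* TCP */, false, NULL);
	if (na == NULL)
		return (-1);

	/* Transport negotiation plus the Fabrics Connect exchange. */
	qp = nvmf_connect(na, &qparams);
	if (qp == NULL)
		return (-1);

	/* Synchronous wrapper: send a capsule, wait for the reply. */
	return (nvmf_read_property(qp, 0x1c /* CSTS */, 4, &csts));
}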
For a controller, a consumer uses a single association for a set of
incoming connections. A consumer can choose to use multiple
associations (e.g. a separate association for connections to a
discovery controller listening on a different port than I/O
controllers). The consumer is responsible for accepting TCP sockets
directly, but once a socket has been accepted it is passed to
nvmf_accept to perform transport-specific negotiation and wait for the
Connect command. Similar to nvmf_connect, nvmf_accept returns a newly
constructed nvmf_qpair. However, in contrast to nvmf_connect,
nvmf_accept does not complete the Fabrics negotiation. The consumer
must explicitly send a response capsule before waiting for additional
command capsules to arrive. In particular, in the kernel offload
case, the Connect command and data are provided to the kernel
controller and the Connect response capsule is sent by the kernel once
it is ready to handle the new queue pair.
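The accept side, under the same caveat about simplified prototypes:

#include <sys/socket.h>
#include <unistd.h>

struct nvmf_association;
struct nvmf_qpair;

/* Simplified assumed prototype. */
struct nvmf_qpair *nvmf_accept(struct nvmf_association *na,
    const void *qparams);

static void
accept_loop_sketch(struct nvmf_association *na, int listen_fd)
{
	for (;;) {
		int s = accept(listen_fd, NULL, NULL);
		if (s < 0)
			continue;
		struct { int fd; } qparams = { s };

		/*
		 * Negotiates the transport and waits for the Connect
		 * command, but does not send the Connect response;
		 * that is up to the consumer (or the kernel after a
		 * handoff).
		 */
		struct nvmf_qpair *qp = nvmf_accept(na, &qparams);
		if (qp == NULL) {
			close(s);
			continue;
		}
		/* ... send the Connect response, then serve or hand off ... */
	}
}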
For userspace controller command handling, the consumer uses
nvmf_controller_receive_capsule to wait for a command capsule.
nvmf_receive_controller_data is used to retrieve any data from a
command (e.g. the data for a WRITE command). It can be called
multiple times to split the data transfer into smaller sizes.
nvmf_send_controller_data is used to send data to a remote host in
response to a command. It also sends a response capsule indicating
success, or an error if an internal error occurs. nvmf_send_response
is used to send a response without associated data. There are also
several convenience wrappers such as nvmf_send_success and
nvmf_send_generic_error.
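And the per-command loop for a userspace controller, again with
simplified assumed prototypes around the function names described
above:

#include <stddef.h>
#include <stdint.h>

struct nvmf_qpair;
struct nvmf_capsule;

/* Simplified assumed prototypes. */
int nvmf_controller_receive_capsule(struct nvmf_qpair *qp,
    struct nvmf_capsule **ncp);
int nvmf_receive_controller_data(struct nvmf_capsule *nc, uint32_t off,
    void *buf, size_t len);
int nvmf_send_success(struct nvmf_capsule *nc);
int nvmf_send_generic_error(struct nvmf_capsule *nc, uint8_t status);

static void
serve_one_command(struct nvmf_qpair *qp)
{
	struct nvmf_capsule *nc;
	char buf[4096];

	if (nvmf_controller_receive_capsule(qp, &nc) != 0)
		return;

	/* For a WRITE-style command: pull the data in, possibly in
	 * multiple smaller chunks. */
	if (nvmf_receive_controller_data(nc, 0, buf, sizeof(buf)) != 0) {
		nvmf_send_generic_error(nc, 0x06 /* internal error */);
		return;
	}

	/* ... execute the command against local state ... */
	nvmf_send_success(nc);
}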
Reviewed by: imp
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D44710