<feed xmlns='http://www.w3.org/2005/Atom'>
<title>src/sys/rpc, branch releng/9.3</title>
<subtitle>FreeBSD source tree</subtitle>
<id>https://cgit-dev.freebsd.org/src/atom?h=releng%2F9.3</id>
<link rel='self' href='https://cgit-dev.freebsd.org/src/atom?h=releng%2F9.3'/>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/'/>
<updated>2014-05-16T15:53:27Z</updated>
<entry>
<title>MFC: r265240</title>
<updated>2014-05-16T15:53:27Z</updated>
<author>
<name>Christian Brueffer</name>
<email>brueffer@FreeBSD.org</email>
</author>
<published>2014-05-16T15:53:27Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=9024499b25d2b85fdac7b153c58fbfbcca7e22eb'/>
<id>urn:sha1:9024499b25d2b85fdac7b153c58fbfbcca7e22eb</id>
<content type='text'>
Properly free resources in case of error.

CID:		1007032
Found with:	Coverity Prevent(tm)
</content>
</entry>
<entry>
<title>MFC r261449:</title>
<updated>2014-02-07T05:23:04Z</updated>
<author>
<name>Alexander Motin</name>
<email>mav@FreeBSD.org</email>
</author>
<published>2014-02-07T05:23:04Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=d6112afe6ab57c59849be84ac45bad791f4c4dc9'/>
<id>urn:sha1:d6112afe6ab57c59849be84ac45bad791f4c4dc9</id>
<content type='text'>
Fix lock acquisition in the case where no request space is available, missed in r260097.
</content>
</entry>
<entry>
<title>Fix build on stable/9.</title>
<updated>2014-01-23T17:27:16Z</updated>
<author>
<name>Alexander Motin</name>
<email>mav@FreeBSD.org</email>
</author>
<published>2014-01-23T17:27:16Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=d5e74ed89d55fcad0f96c7053269087b3816cc09'/>
<id>urn:sha1:d5e74ed89d55fcad0f96c7053269087b3816cc09</id>
<content type='text'>
I am sorry. :(
</content>
</entry>
<entry>
<title>MFC r260229, r260258, r260367, r260390, r260459, r260648:</title>
<updated>2014-01-23T00:46:29Z</updated>
<author>
<name>Alexander Motin</name>
<email>mav@FreeBSD.org</email>
</author>
<published>2014-01-23T00:46:29Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=f16f013c51280c2ac373f4d46b927556cd449966'/>
<id>urn:sha1:f16f013c51280c2ac373f4d46b927556cd449966</id>
<content type='text'>
Rework NFS Duplicate Request Cache cleanup logic.

 - Introduce an additional hash to group requests by the hash of their
sockref.  This allows TCP acknowledgements to be processed without looping
through the entire cache, and as a result allows it to be done every time.
 - Introduce additional callbacks to notify the application layer about
socket disconnection.  Without this, the last few requests processed just
before a socket disconnected never had their ACKs processed and stayed stuck
in the cache for many hours.
 - Implement a transport-specific method for tracking reply acknowledgements.
The new implementation does not cross multiple stack layers to get the data
and does not have the race conditions that previously left some requests
stuck in the cache.  This could be done more efficiently at the sockbuf
layer, but that would break some KBIs, and I don't know of other consumers
for it aside from NFS.
 - Instead of traversing the whole DRC twice per request, run cleaning only
once per request, and except under some conditions traverse only a single
hash slot at a time.

Together this limits NFS DRC growth to situations with real connectivity
problems.  If the network is working well, and so all replies are
acknowledged, the cache remains almost empty even after hours of heavy load.
Without this change, on the same test the cache grew to many thousands of
requests even with a perfectly working local network.

As another result, this reduces the CPU time spent on DRC handling during
the SPEC NFS benchmark from about 10% to 0.5%.
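
The first point can be sketched in userspace; the structures and names below are invented stand-ins, without the kernel's locking: a second hash keyed by the socket reference groups each connection's cached replies, so processing a TCP acknowledgement walks one slot instead of the whole cache.

```python
# Toy model of hashing DRC entries by sockref; not the kernel code.
NSLOTS = 8
slots = {s: [] for s in range(NSLOTS)}  # slot -> list of (sockref, acked_seq)

def drc_insert(sockref, acked_seq):
    slots[sockref % NSLOTS].append((sockref, acked_seq))

def drc_ack(sockref, ack):
    """Retire replies of one connection once the peer has ACKed past them."""
    slot = slots[sockref % NSLOTS]      # only this slot is ever examined
    keep, retired = [], 0
    for entry in slot:
        if entry[0] == sockref and ack >= entry[1]:
            retired += 1                # reply acknowledged: drop it
        else:
            keep.append(entry)          # other connection, or not ACKed yet
    slots[sockref % NSLOTS] = keep
    return retired

drc_insert(5, 100)
drc_insert(5, 200)
drc_insert(7, 50)
assert drc_ack(5, 150) == 1   # only the first reply is fully acknowledged
assert drc_ack(7, 60) == 1    # found without touching slot 5 at all
```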

Sponsored by:   iXsystems, Inc.
</content>
</entry>
<entry>
<title>MFC r260097:</title>
<updated>2014-01-23T00:45:20Z</updated>
<author>
<name>Alexander Motin</name>
<email>mav@FreeBSD.org</email>
</author>
<published>2014-01-23T00:45:20Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=18c4a9798c36105fd96ae1d8ba48eec2d3c72cac'/>
<id>urn:sha1:18c4a9798c36105fd96ae1d8ba48eec2d3c72cac</id>
<content type='text'>
Move most of the NFS file handle affinity code out from under the heavily
contended global RPC thread pool lock and protect it with its own set of
locks.

On synthetic benchmarks this improves the peak NFS request rate by 40%.
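
The locking change can be sketched with ordinary threads; the bucket layout and names are invented, not the kernel's: affinity state is split into hashed buckets, each with its own lock, so work on unrelated file handles no longer serializes on one global lock.

```python
# Per-bucket locking sketch using Python threads instead of kernel mutexes.
import threading

FHA_NBUCKETS = 16
buckets = [{"lock": threading.Lock(), "count": 0} for _ in range(FHA_NBUCKETS)]

def fha_bucket(fh):
    # Pick the bucket (and therefore the lock) from the file-handle hash.
    return buckets[fh % FHA_NBUCKETS]

def worker(fh, iterations):
    b = fha_bucket(fh)
    for _ in range(iterations):
        with b["lock"]:          # only this bucket's lock, not a global one
            b["count"] += 1

threads = [threading.Thread(target=worker, args=(i, 10000)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert all(buckets[i]["count"] == 10000 for i in range(4))
```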
</content>
</entry>
<entry>
<title>MFC r260036:</title>
<updated>2014-01-23T00:44:45Z</updated>
<author>
<name>Alexander Motin</name>
<email>mav@FreeBSD.org</email>
</author>
<published>2014-01-23T00:44:45Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=99feeb7e6922507481c66a112e70caa362a1ee03'/>
<id>urn:sha1:99feeb7e6922507481c66a112e70caa362a1ee03</id>
<content type='text'>
Introduce xprt_inactive_self() -- a variant for use when it is certain that
the port is assigned to a thread, for example within receive handlers.  In
that case the function reduces to a single assignment and can avoid locking.
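
A simplified sketch of the shortcut, with stand-in types for the svc structures: the general deactivation path must take the pool lock because the port may sit on the active list, but when the caller knows the port is assigned to the current thread it cannot be on that list, so a plain store suffices.

```python
# Stand-in model; the real svcxprt and pool locking are more involved.
import threading

pool_lock = threading.Lock()

class SvcXprt:
    def __init__(self):
        self.active = True
        self.on_active_list = True   # stand-in for sp_active membership

def xprt_inactive(xprt):
    with pool_lock:                  # may need to unlink from the pool list
        xprt.on_active_list = False
        xprt.active = False

def xprt_inactive_self(xprt):
    # Caller guarantees the port is assigned to this thread, hence not on
    # the active list: a single assignment, the pool lock is never taken.
    xprt.active = False

a, b = SvcXprt(), SvcXprt()
b.on_active_list = False             # b is assigned to "our" thread
xprt_inactive(a)
xprt_inactive_self(b)
assert not a.active and not a.on_active_list
assert not b.active
```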
</content>
</entry>
<entry>
<title>MFC r260031:</title>
<updated>2014-01-23T00:44:14Z</updated>
<author>
<name>Alexander Motin</name>
<email>mav@FreeBSD.org</email>
</author>
<published>2014-01-23T00:44:14Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=4029dec2a33d462ee899e114a024165ca5ae49eb'/>
<id>urn:sha1:4029dec2a33d462ee899e114a024165ca5ae49eb</id>
<content type='text'>
In addition to r259632, completely block receive upcalls if we have more
data than we need.  This reduces lock pressure from the xprt_active() side.
</content>
</entry>
<entry>
<title>MFC r259828:</title>
<updated>2014-01-23T00:42:55Z</updated>
<author>
<name>Alexander Motin</name>
<email>mav@FreeBSD.org</email>
</author>
<published>2014-01-23T00:42:55Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=7690a974609b937f99d644a0ff14a131c553d934'/>
<id>urn:sha1:7690a974609b937f99d644a0ff14a131c553d934</id>
<content type='text'>
Fix a bug introduced in r259632 that triggered an infinite loop in some cases.
</content>
</entry>
<entry>
<title>MFC r259659, r259662:</title>
<updated>2014-01-23T00:41:23Z</updated>
<author>
<name>Alexander Motin</name>
<email>mav@FreeBSD.org</email>
</author>
<published>2014-01-23T00:41:23Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=4ce8d27998c3d0e09d53fd899d80b849a10b1e0c'/>
<id>urn:sha1:4ce8d27998c3d0e09d53fd899d80b849a10b1e0c</id>
<content type='text'>
Remove several linear list traversals per request from the RPC server code.

  Do not insert active ports into the pool-&gt;sp_active list if they are
successfully assigned to some thread.  This makes the list include only
ports that really require attention, so the traversal can be reduced to
simply taking the first one.

  Remove an idle thread from the pool-&gt;sp_idlethreads list when assigning
some work (a port with requests) to it.  That again makes it possible to
replace list traversals with simply taking the first element.
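
The second point can be sketched with a plain queue standing in for pool-&gt;sp_idlethreads: because a thread is unlinked the moment work is assigned, every element is known to be idle, and choosing one is simply taking the head, with no traversal. Names and structures below are invented.

```python
# Take-first idle-thread sketch; a deque stands in for the kernel list.
from collections import deque

sp_idlethreads = deque()

class SvcThread:
    def __init__(self, tid):
        self.tid = tid
        self.has_work = False

def assign_work():
    """Hand a port with requests to an idle thread, if any."""
    if not sp_idlethreads:
        return None
    st = sp_idlethreads.popleft()     # head is always usable: all are idle
    st.has_work = True                # unlinked on assignment, stays off list
    return st

t1, t2 = SvcThread(1), SvcThread(2)
sp_idlethreads.append(t1)
sp_idlethreads.append(t2)
assert assign_work() is t1 and t1.has_work
assert assign_work() is t2
assert assign_work() is None          # list drained
```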
</content>
</entry>
<entry>
<title>MFC r259632:</title>
<updated>2014-01-23T00:40:28Z</updated>
<author>
<name>Alexander Motin</name>
<email>mav@FreeBSD.org</email>
</author>
<published>2014-01-23T00:40:28Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src/commit/?id=d2845e7e0395b5138d7dc7ff462e57d0dc91524e'/>
<id>urn:sha1:d2845e7e0395b5138d7dc7ff462e57d0dc91524e</id>
<content type='text'>
Rework flow control for the connection-oriented (TCP) RPC server.

  When processing the receive buffer, write the amount of data expected in
the present request record into the socket's so_rcv.sb_lowat to make the
stack aware of our needs.  While processing subsequent upcalls, ignore them
until the socket has collected enough data to be read and processed in one
turn.
  This change reduces the number of context switches and other operations
in the RPC stack during large NFS writes (especially via non-Jumbo networks)
by an order of magnitude.

  After processing the current packet, take another look into the pending
buffer to find out whether the next packet has already been received.  If
not, deactivate this port right there, without making the RPC code push the
port to another thread just to find that there is nothing to do.  If the
next packet has been received partially, also deactivate the port, but in
addition update the socket's so_rcv.sb_lowat so we are not woken up
prematurely.
  This change additionally reduces the number of context switches per NFS
request by about half.
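
The low-water computation can be sketched as follows. RPC over TCP frames each record with a 4-byte big-endian mark (RFC 5531) whose low 31 bits give the fragment length; the helper name is invented and the socket plumbing is omitted, this only demonstrates the arithmetic that would feed so_rcv.sb_lowat.

```python
# Record-mark arithmetic sketch; record_lowat() is an invented name.
def record_lowat(buf, have):
    """Bytes still needed before one complete record can be processed."""
    if 4 > have:
        return 4 - have                    # still collecting the record mark
    mark = int.from_bytes(buf[:4], "big")
    fraglen = mark % 0x80000000            # strip the last-fragment flag bit
    if have - 4 >= fraglen:
        return 0                           # full record buffered: process it
    return fraglen - (have - 4)            # sleep until the rest arrives

# Record mark: last-fragment bit set, fragment length 100.
buf = bytes([0x80, 0x00, 0x00, 0x64]) + b"data"
assert record_lowat(buf, 2) == 2      # the mark itself is incomplete
assert record_lowat(buf, 8) == 96     # 4 of 100 payload bytes buffered
assert record_lowat(buf, 104) == 0    # whole record present
```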
</content>
</entry>
</feed>
