<feed xmlns='http://www.w3.org/2005/Atom'>
<title>src-test/sys/net/pfvar.h, branch main</title>
<subtitle>FreeBSD source tree</subtitle>
<id>https://cgit-dev.freebsd.org/src-test/atom?h=main</id>
<link rel='self' href='https://cgit-dev.freebsd.org/src-test/atom?h=main'/>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/'/>
<updated>2020-12-23T11:03:21Z</updated>
<entry>
<title>pf: Use counter(9) for pf_state byte/packet tracking</title>
<updated>2020-12-23T11:03:21Z</updated>
<author>
<name>Kristof Provost</name>
<email>kp@FreeBSD.org</email>
</author>
<published>2020-12-23T08:37:59Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=1c00efe98ed7d103b9684ff692ffd5e3b64d0237'/>
<id>urn:sha1:1c00efe98ed7d103b9684ff692ffd5e3b64d0237</id>
<content type='text'>
This improves cache behaviour by not writing to the same variable from
multiple cores simultaneously.

pf_state is only used in the kernel, so it can be safely modified.

Reviewed by:	Lutz Donnerhacke, philip
MFC after:	1 week
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27661
</content>
</entry>
<entry>
<title>pf: Fix unaligned checksum updates</title>
<updated>2020-12-23T11:03:20Z</updated>
<author>
<name>Kristof Provost</name>
<email>kp@FreeBSD.org</email>
</author>
<published>2020-12-20T20:06:32Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=c3f69af03ae7acc167cc1151f0c1ecc5e014ce4e'/>
<id>urn:sha1:c3f69af03ae7acc167cc1151f0c1ecc5e014ce4e</id>
<content type='text'>
The algorithm we use to update checksums only works correctly if the
updated data is aligned on 16-bit boundaries (relative to the start of
the packet).

Import the OpenBSD fix for this issue.

PR:		240416
Obtained from:	OpenBSD
MFC after:	1 week
Reviewed by:	tuexen (previous version)
Differential Revision:	https://reviews.freebsd.org/D27696
</content>
</entry>
<entry>
<title>net: clean up empty lines in .c and .h files</title>
<updated>2020-09-01T21:19:14Z</updated>
<author>
<name>Mateusz Guzik</name>
<email>mjg@FreeBSD.org</email>
</author>
<published>2020-09-01T21:19:14Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=662c13053f4bf2d6245ba7e2b66c10d1cd5c1fb9'/>
<id>urn:sha1:662c13053f4bf2d6245ba7e2b66c10d1cd5c1fb9</id>
<content type='text'>
</content>
</entry>
<entry>
<title>pf: Add a new zone for per-table entry counters.</title>
<updated>2020-05-16T00:28:12Z</updated>
<author>
<name>Mark Johnston</name>
<email>markj@FreeBSD.org</email>
</author>
<published>2020-05-16T00:28:12Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=c1be839971100cbfb6f758dc7d1613c280e6a373'/>
<id>urn:sha1:c1be839971100cbfb6f758dc7d1613c280e6a373</id>
<content type='text'>
Right now we optionally allocate 8 counters per table entry, so in
addition to memory consumed by counters, we require 8 pointers worth of
space in each entry even when counters are not allocated (the default).

Instead, define a UMA zone that returns contiguous per-CPU counter
arrays for use in table entries.  On amd64 this reduces sizeof(struct
pfr_kentry) from 216 to 160.  The smaller size also results in better
slab efficiency, so memory usage for large tables is reduced by about
28%.

Reviewed by:	kp
MFC after:	2 weeks
Differential Revision:	https://reviews.freebsd.org/D24843
</content>
</entry>
<entry>
<title>pf: Use counter(9) in pf tables.</title>
<updated>2019-03-15T11:08:44Z</updated>
<author>
<name>Kristof Provost</name>
<email>kp@FreeBSD.org</email>
</author>
<published>2019-03-15T11:08:44Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=59048686917881a536c48c1cdb45f7029331f759'/>
<id>urn:sha1:59048686917881a536c48c1cdb45f7029331f759</id>
<content type='text'>
The counters of pf tables are updated outside the rule lock, so concurrent
state updates might overwrite each other. Furthermore, allocation and
freeing of counters happen outside the lock as well.

Use counter(9) for the counters, and always allocate the counter table
element, so that the race condition cannot happen any more.

PR:		230619
Submitted by:	Kajetan Staszkiewicz &lt;vegeta@tuxpowered.net&gt;
Reviewed by:	glebius
MFC after:	2 weeks
Differential Revision:	https://reviews.freebsd.org/D19558
</content>
</entry>
<entry>
<title>Reduce the time it takes the kernel to install a new PF config containing a large number of queues</title>
<updated>2019-02-11T05:17:31Z</updated>
<author>
<name>Patrick Kelsey</name>
<email>pkelsey@FreeBSD.org</email>
</author>
<published>2019-02-11T05:17:31Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=8f2ac656906a7d498bd6784a09ceeed9f953e2ff'/>
<id>urn:sha1:8f2ac656906a7d498bd6784a09ceeed9f953e2ff</id>
<content type='text'>
In general, the time savings come from separating the active and
inactive queues lists into separate interface and non-interface queue
lists, and changing the rule and queue tag management from list-based
to hash-based.

In HFSC, a linear scan of the class table during each queue destroy
was also eliminated.

There are now two new tunables to control the hash size used for each
tag set (default for each is 128):

net.pf.queue_tag_hashsize
net.pf.rule_tag_hashsize

Reviewed by:	kp
MFC after:	1 week
Sponsored by:	RG Nets
Differential Revision:	https://reviews.freebsd.org/D19131
</content>
</entry>
<entry>
<title>pfsync: Handle syncdev going away</title>
<updated>2018-11-02T16:57:23Z</updated>
<author>
<name>Kristof Provost</name>
<email>kp@FreeBSD.org</email>
</author>
<published>2018-11-02T16:57:23Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=fbbf436d56a307944c0cd0097492ddcb70b57490'/>
<id>urn:sha1:fbbf436d56a307944c0cd0097492ddcb70b57490</id>
<content type='text'>
If the syncdev is removed we no longer need to clean up the multicast
entry we've got set up for that device.

Pass the ifnet detach event through pf to pfsync, remove our
multicast handle, and mark us as no longer having a syncdev.

Note that this callback is always installed, even if the pfsync
interface is disabled (and thus it's not a per-vnet callback pointer).

MFC after:	2 weeks
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D17502
</content>
</entry>
<entry>
<title>pfsync: Make pfsync callbacks per-vnet</title>
<updated>2018-11-02T16:47:07Z</updated>
<author>
<name>Kristof Provost</name>
<email>kp@FreeBSD.org</email>
</author>
<published>2018-11-02T16:47:07Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=5f6cf24e2da6f22e5aeea2bc7ae83da5d01682c4'/>
<id>urn:sha1:5f6cf24e2da6f22e5aeea2bc7ae83da5d01682c4</id>
<content type='text'>
The callbacks are installed and removed depending on the state of the
pfsync device, which is per-vnet. The callbacks must also be per-vnet.

MFC after:	2 weeks
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D17499
</content>
</entry>
<entry>
<title>pf: Limit the fragment entry queue length to 64 per bucket.</title>
<updated>2018-11-02T15:32:04Z</updated>
<author>
<name>Kristof Provost</name>
<email>kp@FreeBSD.org</email>
</author>
<published>2018-11-02T15:32:04Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=790194cd472b1d17e08940e9f839322abcf14ec9'/>
<id>urn:sha1:790194cd472b1d17e08940e9f839322abcf14ec9</id>
<content type='text'>
So we have a global limit of 1024 fragments, but it is fine-grained to
the region of the packet.  Smaller packets may have fewer fragments.
This costs another 16 bytes of memory per reassembly and divides the
worst case for searching by 8.

Obtained from:	OpenBSD
Differential Revision:	https://reviews.freebsd.org/D17734
</content>
</entry>
<entry>
<title>pf: Split the fragment reassembly queue into smaller parts</title>
<updated>2018-11-02T15:26:51Z</updated>
<author>
<name>Kristof Provost</name>
<email>kp@FreeBSD.org</email>
</author>
<published>2018-11-02T15:26:51Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test/commit/?id=fd2ea405e601bd5e240153c5de0f7c264946ce6f'/>
<id>urn:sha1:fd2ea405e601bd5e240153c5de0f7c264946ce6f</id>
<content type='text'>
Remember 16 entry points based on the fragment offset.  Instead of
a worst case of 8196 list traversals we now check a maximum of 512
list entries or 16 array elements.

Obtained from:	OpenBSD
Differential Revision:	https://reviews.freebsd.org/D17733
</content>
</entry>
</feed>
