-rw-r--r--  documentation/content/en/books/developers-handbook/_index.adoc  9
-rw-r--r--  documentation/content/en/books/developers-handbook/book.adoc  9
-rw-r--r--  documentation/content/en/books/developers-handbook/introduction/_index.adoc  13
-rw-r--r--  documentation/content/en/books/developers-handbook/ipv6/_index.adoc  318
-rw-r--r--  documentation/content/en/books/developers-handbook/kernelbuild/_index.adoc  10
-rw-r--r--  documentation/content/en/books/developers-handbook/kerneldebug/_index.adoc  182
-rw-r--r--  documentation/content/en/books/developers-handbook/l10n/_index.adoc  82
-rw-r--r--  documentation/content/en/books/developers-handbook/parti.adoc  2
-rw-r--r--  documentation/content/en/books/developers-handbook/policies/_index.adoc  97
-rw-r--r--  documentation/content/en/books/developers-handbook/secure/_index.adoc  115
-rw-r--r--  documentation/content/en/books/developers-handbook/sockets/_index.adoc  413
-rw-r--r--  documentation/content/en/books/developers-handbook/testing/_index.adoc  34
-rw-r--r--  documentation/content/en/books/developers-handbook/tools/_index.adoc  468
-rw-r--r--  documentation/content/en/books/developers-handbook/x86/_index.adoc  1060
14 files changed, 2051 insertions, 761 deletions
diff --git a/documentation/content/en/books/developers-handbook/_index.adoc b/documentation/content/en/books/developers-handbook/_index.adoc
index c30bf0645d..43db14ab19 100644
--- a/documentation/content/en/books/developers-handbook/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/_index.adoc
@@ -48,12 +48,15 @@ include::../../../../shared/en/urls.adoc[]
endif::[]
[.abstract-title]
-[abstract]
Abstract
-Welcome to the Developers' Handbook. This manual is a _work in progress_ and is the work of many individuals. Many sections do not yet exist and some of those that do exist need to be updated. If you are interested in helping with this project, send email to the {freebsd-doc}.
+Welcome to the Developers' Handbook.
+This manual is a _work in progress_ and is the work of many individuals.
+Many sections do not yet exist and some of those that do exist need to be updated.
+If you are interested in helping with this project, send email to the {freebsd-doc}.
-The latest version of this document is always available from the link:https://www.FreeBSD.org[FreeBSD World Wide Web server]. It may also be downloaded in a variety of formats and compression options from the link:https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous link:{handbook}#mirrors-ftp/[mirror sites].
+The latest version of this document is always available from the link:https://www.FreeBSD.org[FreeBSD World Wide Web server].
+It may also be downloaded in a variety of formats and compression options from the link:https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous link:{handbook}#mirrors-ftp/[mirror sites].
'''
diff --git a/documentation/content/en/books/developers-handbook/book.adoc b/documentation/content/en/books/developers-handbook/book.adoc
index 1e85eb8196..f874755ccd 100644
--- a/documentation/content/en/books/developers-handbook/book.adoc
+++ b/documentation/content/en/books/developers-handbook/book.adoc
@@ -61,12 +61,15 @@ include::../../../../shared/en/urls.adoc[]
endif::[]
[.abstract-title]
-[abstract]
Abstract
-Welcome to the Developers' Handbook. This manual is a _work in progress_ and is the work of many individuals. Many sections do not yet exist and some of those that do exist need to be updated. If you are interested in helping with this project, send email to the {freebsd-doc}.
+Welcome to the Developers' Handbook.
+This manual is a _work in progress_ and is the work of many individuals.
+Many sections do not yet exist and some of those that do exist need to be updated.
+If you are interested in helping with this project, send email to the {freebsd-doc}.
-The latest version of this document is always available from the link:https://www.FreeBSD.org[FreeBSD World Wide Web server]. It may also be downloaded in a variety of formats and compression options from the link:https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous link:{handbook}#mirrors-ftp/[mirror sites].
+The latest version of this document is always available from the link:https://www.FreeBSD.org[FreeBSD World Wide Web server].
+It may also be downloaded in a variety of formats and compression options from the link:https://download.freebsd.org/ftp/doc/[FreeBSD FTP server] or one of the numerous link:{handbook}#mirrors-ftp/[mirror sites].
'''
diff --git a/documentation/content/en/books/developers-handbook/introduction/_index.adoc b/documentation/content/en/books/developers-handbook/introduction/_index.adoc
index c7c9ad74d0..35e8e65fca 100644
--- a/documentation/content/en/books/developers-handbook/introduction/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/introduction/_index.adoc
@@ -37,9 +37,15 @@ toc::[]
[[introduction-devel]]
== Developing on FreeBSD
-So here we are. System all installed and you are ready to start programming. But where to start? What does FreeBSD provide? What can it do for me, as a programmer?
+So here we are.
+The system is all installed and you are ready to start programming.
+But where to start? What does FreeBSD provide? What can it do for me, as a programmer?
-These are some questions which this chapter tries to answer. Of course, programming has different levels of proficiency like any other trade. For some it is a hobby, for others it is their profession. The information in this chapter might be aimed toward the beginning programmer; indeed, it could serve useful for the programmer unfamiliar with the FreeBSD platform.
+These are some questions which this chapter tries to answer.
+Of course, programming has different levels of proficiency like any other trade.
+For some it is a hobby, for others it is their profession.
+The information in this chapter is aimed toward the beginning programmer;
+indeed, it could prove useful to a programmer unfamiliar with the FreeBSD platform.
[[introduction-bsdvision]]
== The BSD Vision
@@ -64,7 +70,8 @@ From Scheifler & Gettys: "X Window System"
[[introduction-layout]]
== The Layout of /usr/src
-The complete source code to FreeBSD is available from our public repository. The source code is normally installed in [.filename]#/usr/src# which contains the following subdirectories:
+The complete source code to FreeBSD is available from our public repository.
+The source code is normally installed in [.filename]#/usr/src#, which contains the following subdirectories:
[.informaltable]
[cols="1,1", frame="none", options="header"]
diff --git a/documentation/content/en/books/developers-handbook/ipv6/_index.adoc b/documentation/content/en/books/developers-handbook/ipv6/_index.adoc
index 6546bec79c..9596e12e78 100644
--- a/documentation/content/en/books/developers-handbook/ipv6/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/ipv6/_index.adoc
@@ -36,18 +36,22 @@ toc::[]
[[ipv6-implementation]]
== IPv6/IPsec Implementation
-This section should explain IPv6 and IPsec related implementation internals. These functionalities are derived from http://www.kame.net/[KAME project]
+This section should explain IPv6 and IPsec related implementation internals.
+These functionalities are derived from the http://www.kame.net/[KAME project].
[[ipv6details]]
=== IPv6
==== Conformance
-The IPv6 related functions conforms, or tries to conform to the latest set of IPv6 specifications. For future reference we list some of the relevant documents below (_NOTE_: this is not a complete list - this is too hard to maintain...).
+The IPv6-related functions conform, or try to conform, to the latest set of IPv6 specifications.
+For future reference we list some of the relevant documents below (_NOTE_: this is not a complete list - that would be too hard to maintain...).
For details please refer to specific chapter in the document, RFCs, manual pages, or comments in the source code.
-Conformance tests have been performed on the KAME STABLE kit at TAHI project. Results can be viewed at http://www.tahi.org/report/KAME/[http://www.tahi.org/report/KAME/]. We also attended University of New Hampshire IOL tests (http://www.iol.unh.edu/[http://www.iol.unh.edu/]) in the past, with our past snapshots.
+Conformance tests have been performed on the KAME STABLE kit at the TAHI project.
+Results can be viewed at http://www.tahi.org/report/KAME/[http://www.tahi.org/report/KAME/].
+We also attended the University of New Hampshire IOL tests (http://www.iol.unh.edu/[http://www.iol.unh.edu/]) in the past, with our earlier snapshots.
* RFC1639: FTP Operation Over Big Address Records (FOOBAR)
@@ -140,55 +144,91 @@ Conformance tests have been performed on the KAME STABLE kit at TAHI project. Re
[[neighbor-discovery]]
==== Neighbor Discovery
-Neighbor Discovery is fairly stable. Currently Address Resolution, Duplicated Address Detection, and Neighbor Unreachability Detection are supported. In the near future we will be adding Proxy Neighbor Advertisement support in the kernel and Unsolicited Neighbor Advertisement transmission command as admin tool.
+Neighbor Discovery is fairly stable.
+Currently Address Resolution, Duplicate Address Detection, and Neighbor Unreachability Detection are supported.
+In the near future we will be adding Proxy Neighbor Advertisement support in the kernel and an Unsolicited Neighbor Advertisement transmission command as an administration tool.
-If DAD fails, the address will be marked "duplicated" and message will be generated to syslog (and usually to console). The "duplicated" mark can be checked with man:ifconfig[8]. It is administrators' responsibility to check for and recover from DAD failures. The behavior should be improved in the near future.
+If DAD fails, the address will be marked "duplicated" and a message will be generated to syslog (and usually to the console).
+The "duplicated" mark can be checked with man:ifconfig[8].
+It is the administrator's responsibility to check for and recover from DAD failures.
+The behavior should be improved in the near future.
-Some of the network driver loops multicast packets back to itself, even if instructed not to do so (especially in promiscuous mode). In such cases DAD may fail, because DAD engine sees inbound NS packet (actually from the node itself) and considers it as a sign of duplicate. You may want to look at #if condition marked "heuristics" in sys/netinet6/nd6_nbr.c:nd6_dad_timer() as workaround (note that the code fragment in "heuristics" section is not spec conformant).
+Some network drivers loop multicast packets back to themselves, even if instructed not to do so (especially in promiscuous mode).
+In such cases DAD may fail, because the DAD engine sees an inbound NS packet (actually from the node itself) and considers it a sign of a duplicate.
+You may want to look at the #if condition marked "heuristics" in sys/netinet6/nd6_nbr.c:nd6_dad_timer() as a workaround (note that the code fragment in the "heuristics" section is not spec conformant).
Neighbor Discovery specification (RFC2461) does not talk about neighbor cache handling in the following cases:
. when there was no neighbor cache entry, node received unsolicited RS/NS/NA/redirect packet without link-layer address
. neighbor cache handling on medium without link-layer address (we need a neighbor cache entry for IsRouter bit)
-For first case, we implemented workaround based on discussions on IETF ipngwg mailing list. For more details, see the comments in the source code and email thread started from (IPng 7155), dated Feb 6 1999.
+For the first case, we implemented a workaround based on discussions on the IETF ipngwg mailing list.
+For more details, see the comments in the source code and the email thread starting from (IPng 7155), dated Feb 6 1999.
-IPv6 on-link determination rule (RFC2461) is quite different from assumptions in BSD network code. At this moment, no on-link determination rule is supported where default router list is empty (RFC2461, section 5.2, last sentence in 2nd paragraph - note that the spec misuse the word "host" and "node" in several places in the section).
+The IPv6 on-link determination rule (RFC2461) is quite different from the assumptions in BSD network code.
+At this moment, no on-link determination rule is supported where the default router list is empty (RFC2461, section 5.2, last sentence in the 2nd paragraph - note that the spec misuses the words "host" and "node" in several places in the section).
-To avoid possible DoS attacks and infinite loops, only 10 options on ND packet is accepted now. Therefore, if you have 20 prefix options attached to RA, only the first 10 prefixes will be recognized. If this troubles you, please ask it on FREEBSD-CURRENT mailing list and/or modify nd6_maxndopt in [.filename]#sys/netinet6/nd6.c#. If there are high demands we may provide sysctl knob for the variable.
+To avoid possible DoS attacks and infinite loops, only 10 options on an ND packet are accepted now.
+Therefore, if you have 20 prefix options attached to an RA, only the first 10 prefixes will be recognized.
+If this troubles you, please raise it on the FREEBSD-CURRENT mailing list and/or modify nd6_maxndopt in [.filename]#sys/netinet6/nd6.c#.
+If there is high demand we may provide a sysctl knob for the variable.
[[ipv6-scope-index]]
==== Scope Index
-IPv6 uses scoped addresses. Therefore, it is very important to specify scope index (interface index for link-local address, or site index for site-local address) with an IPv6 address. Without scope index, scoped IPv6 address is ambiguous to the kernel, and kernel will not be able to determine the outbound interface for a packet.
+IPv6 uses scoped addresses.
+Therefore, it is very important to specify the scope index (the interface index for a link-local address, or the site index for a site-local address) with an IPv6 address.
+Without a scope index, a scoped IPv6 address is ambiguous to the kernel, and the kernel will not be able to determine the outbound interface for a packet.
-Ordinary userland applications should use advanced API (RFC2292) to specify scope index, or interface index. For similar purpose, sin6_scope_id member in sockaddr_in6 structure is defined in RFC2553. However, the semantics for sin6_scope_id is rather vague. If you care about portability of your application, we suggest you to use advanced API rather than sin6_scope_id.
+Ordinary userland applications should use the advanced API (RFC2292) to specify the scope index, or interface index.
+For a similar purpose, the sin6_scope_id member of the sockaddr_in6 structure is defined in RFC2553.
+However, the semantics of sin6_scope_id are rather vague.
+If you care about the portability of your application, we suggest you use the advanced API rather than sin6_scope_id.
-In the kernel, an interface index for link-local scoped address is embedded into 2nd 16bit-word (3rd and 4th byte) in IPv6 address. For example, you may see something like:
+In the kernel, the interface index for a link-local scoped address is embedded into the 2nd 16-bit word (the 3rd and 4th bytes) of the IPv6 address.
+For example, you may see something like:
[source,bash]
....
fe80:1::200:f8ff:fe01:6317
....
-in the routing table and interface address structure (struct in6_ifaddr). The address above is a link-local unicast address which belongs to a network interface whose interface identifier is 1. The embedded index enables us to identify IPv6 link local addresses over multiple interfaces effectively and with only a little code change.
+in the routing table and the interface address structure (struct in6_ifaddr).
+The address above is a link-local unicast address which belongs to a network interface whose interface identifier is 1.
+The embedded index enables us to identify IPv6 link-local addresses over multiple interfaces effectively and with only a little code change.
-Routing daemons and configuration programs, like man:route6d[8] and man:ifconfig[8], will need to manipulate the "embedded" scope index. These programs use routing sockets and ioctls (like SIOCGIFADDR_IN6) and the kernel API will return IPv6 addresses with 2nd 16bit-word filled in. The APIs are for manipulating kernel internal structure. Programs that use these APIs have to be prepared about differences in kernels anyway.
+Routing daemons and configuration programs, like man:route6d[8] and man:ifconfig[8], will need to manipulate the "embedded" scope index.
+These programs use routing sockets and ioctls (like SIOCGIFADDR_IN6), and the kernel API will return IPv6 addresses with the 2nd 16-bit word filled in.
+The APIs are for manipulating kernel internal structures.
+Programs that use these APIs have to be prepared for differences between kernels anyway.
-When you specify scoped address to the command line, NEVER write the embedded form (such as ff02:1::1 or fe80:2::fedc). This is not supposed to work. Always use standard form, like ff02::1 or fe80::fedc, with command line option for specifying interface (like `ping6 -I ne0 ff02::1`). In general, if a command does not have command line option to specify outgoing interface, that command is not ready to accept scoped address. This may seem to be opposite from IPv6's premise to support "dentist office" situation. We believe that specifications need some improvements for this.
+When you specify a scoped address on the command line, NEVER write the embedded form (such as ff02:1::1 or fe80:2::fedc).
+This is not supposed to work.
+Always use the standard form, like ff02::1 or fe80::fedc, with a command line option for specifying the interface (like `ping6 -I ne0 ff02::1`).
+In general, if a command does not have a command line option to specify the outgoing interface, that command is not ready to accept scoped addresses.
+This may seem to be the opposite of IPv6's premise of supporting the "dentist office" situation.
+We believe that the specifications need some improvement here.
-Some of the userland tools support extended numeric IPv6 syntax, as documented in [.filename]#draft-ietf-ipngwg-scopedaddr-format-00.txt#. You can specify outgoing link, by using name of the outgoing interface like "fe80::1%ne0". This way you will be able to specify link-local scoped address without much trouble.
+Some of the userland tools support extended numeric IPv6 syntax, as documented in [.filename]#draft-ietf-ipngwg-scopedaddr-format-00.txt#.
+You can specify the outgoing link by using the name of the outgoing interface, like "fe80::1%ne0".
+This way you will be able to specify a link-local scoped address without much trouble.
-To use this extension in your program, you will need to use man:getaddrinfo[3], and man:getnameinfo[3] with NI_WITHSCOPEID. The implementation currently assumes 1-to-1 relationship between a link and an interface, which is stronger than what specs say.
+To use this extension in your program, you will need to use man:getaddrinfo[3] and man:getnameinfo[3] with NI_WITHSCOPEID.
+The implementation currently assumes a 1-to-1 relationship between a link and an interface, which is stronger than what the specs say.
[[ipv6-pnp]]
==== Plug and Play
-Most of the IPv6 stateless address autoconfiguration is implemented in the kernel. Neighbor Discovery functions are implemented in the kernel as a whole. Router Advertisement (RA) input for hosts is implemented in the kernel. Router Solicitation (RS) output for endhosts, RS input for routers, and RA output for routers are implemented in the userland.
+Most of the IPv6 stateless address autoconfiguration is implemented in the kernel.
+Neighbor Discovery functions are implemented in the kernel as a whole.
+Router Advertisement (RA) input for hosts is implemented in the kernel.
+Router Solicitation (RS) output for endhosts, RS input for routers, and RA output for routers are implemented in userland.
===== Assignment of link-local, and special addresses
-IPv6 link-local address is generated from IEEE802 address (Ethernet MAC address). Each of interface is assigned an IPv6 link-local address automatically, when the interface becomes up (IFF_UP). Also, direct route for the link-local address is added to routing table.
+An IPv6 link-local address is generated from the IEEE802 address (Ethernet MAC address).
+Each interface is assigned an IPv6 link-local address automatically when the interface becomes up (IFF_UP).
+Also, a direct route for the link-local address is added to the routing table.
Here is an output of netstat command:
@@ -200,17 +240,30 @@ fe80:1::%ed0/64 link#1 UC ed0
fe80:2::%ep0/64 link#2 UC ep0
....
-Interfaces that has no IEEE802 address (pseudo interfaces like tunnel interfaces, or ppp interfaces) will borrow IEEE802 address from other interfaces, such as Ethernet interfaces, whenever possible. If there is no IEEE802 hardware attached, a last resort pseudo-random value, MD5(hostname), will be used as source of link-local address. If it is not suitable for your usage, you will need to configure the link-local address manually.
+Interfaces that have no IEEE802 address (pseudo interfaces like tunnel interfaces, or ppp interfaces) will borrow an IEEE802 address from other interfaces, such as Ethernet interfaces, whenever possible.
+If there is no IEEE802 hardware attached, a last-resort pseudo-random value, MD5(hostname), will be used as the source of the link-local address.
+If it is not suitable for your usage, you will need to configure the link-local address manually.
-If an interface is not capable of handling IPv6 (such as lack of multicast support), link-local address will not be assigned to that interface. See section 2 for details.
+If an interface is not capable of handling IPv6 (such as lacking multicast support), a link-local address will not be assigned to that interface.
+See section 2 for details.
-Each interface joins the solicited multicast address and the link-local all-nodes multicast addresses (e.g., fe80::1:ff01:6317 and ff02::1, respectively, on the link the interface is attached). In addition to a link-local address, the loopback address (::1) will be assigned to the loopback interface. Also, ::1/128 and ff01::/32 are automatically added to routing table, and loopback interface joins node-local multicast group ff01::1.
+Each interface joins the solicited-node multicast address and the link-local all-nodes multicast address (e.g., ff02::1:ff01:6317 and ff02::1, respectively, on the link the interface is attached to).
+In addition to a link-local address, the loopback address (::1) will be assigned to the loopback interface.
+Also, ::1/128 and ff01::/32 are automatically added to the routing table, and the loopback interface joins the node-local multicast group ff01::1.
===== Stateless address autoconfiguration on Hosts
-In IPv6 specification, nodes are separated into two categories: _routers_ and _hosts_. Routers forward packets addressed to others, hosts does not forward the packets. net.inet6.ip6.forwarding defines whether this node is router or host (router if it is 1, host if it is 0).
+In the IPv6 specification, nodes are separated into two categories: _routers_ and _hosts_.
+Routers forward packets addressed to others; hosts do not forward packets. net.inet6.ip6.forwarding defines whether this node is a router or a host (router if it is 1, host if it is 0).
-When a host hears Router Advertisement from the router, a host may autoconfigure itself by stateless address autoconfiguration. This behavior can be controlled by net.inet6.ip6.accept_rtadv (host autoconfigures itself if it is set to 1). By autoconfiguration, network address prefix for the receiving interface (usually global address prefix) is added. Default route is also configured. Routers periodically generate Router Advertisement packets. To request an adjacent router to generate RA packet, a host can transmit Router Solicitation. To generate a RS packet at any time, use the _rtsol_ command. man:rtsold[8] daemon is also available. man:rtsold[8] generates Router Solicitation whenever necessary, and it works great for nomadic usage (notebooks/laptops). If one wishes to ignore Router Advertisements, use sysctl to set net.inet6.ip6.accept_rtadv to 0.
+When a host hears a Router Advertisement from a router, it may autoconfigure itself by stateless address autoconfiguration.
+This behavior can be controlled by net.inet6.ip6.accept_rtadv (the host autoconfigures itself if it is set to 1).
+By autoconfiguration, a network address prefix for the receiving interface (usually a global address prefix) is added.
+A default route is also configured. Routers periodically generate Router Advertisement packets.
+To request an adjacent router to generate an RA packet, a host can transmit a Router Solicitation.
+To generate an RS packet at any time, use the _rtsol_ command. The man:rtsold[8] daemon is also available.
+man:rtsold[8] generates Router Solicitations whenever necessary, and works well for nomadic usage (notebooks/laptops).
+If one wishes to ignore Router Advertisements, use sysctl to set net.inet6.ip6.accept_rtadv to 0.
To generate Router Advertisement from a router, use the man:rtadvd[8] daemon.
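Taken together, the knobs discussed in this section might be pinned at boot with a fragment along these lines (a hedged sketch; the knob names are those given above, but check your system's documentation for exact placement):

```
# /etc/sysctl.conf (sketch)
# Host: do not forward, autoconfigure from Router Advertisements
net.inet6.ip6.forwarding=0
net.inet6.ip6.accept_rtadv=1
# Router: invert the two values above (and run rtadvd to send RAs)
```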
@@ -219,7 +272,8 @@ Note that, IPv6 specification assumes the following items, and nonconforming cas
* Only hosts will listen to router advertisements
* Hosts have single network interface (except loopback)
-Therefore, this is unwise to enable net.inet6.ip6.accept_rtadv on routers, or multi-interface host. A misconfigured node can behave strange (nonconforming configuration allowed for those who would like to do some experiments).
+Therefore, it is unwise to enable net.inet6.ip6.accept_rtadv on routers or multi-interface hosts.
+A misconfigured node can behave strangely (nonconforming configuration is allowed for those who would like to do some experiments).
To summarize the sysctl knob:
@@ -238,30 +292,39 @@ To summarize the sysctl knob:
(out-of-scope of spec)
....
-RFC2462 has validation rule against incoming RA prefix information option, in 5.5.3 (e). This is to protect hosts from malicious (or misconfigured) routers that advertise very short prefix lifetime. There was an update from Jim Bound to ipngwg mailing list (look for "(ipng 6712)" in the archive) and it is implemented Jim's update.
+RFC2462 has a validation rule against the incoming RA prefix information option, in 5.5.3 (e).
+This is to protect hosts from malicious (or misconfigured) routers that advertise a very short prefix lifetime.
+There was an update from Jim Bound to the ipngwg mailing list (look for "(ipng 6712)" in the archive) and Jim's update is implemented.
See <<neighbor-discovery,23.5.1.2>> in the document for relationship between DAD and autoconfiguration.
[[gif]]
==== Generic Tunnel Interface
-GIF (Generic InterFace) is a pseudo interface for configured tunnel. Details are described in man:gif[4]. Currently
+GIF (Generic InterFace) is a pseudo interface for configured tunnels.
+Details are described in man:gif[4]. Currently
* v6 in v6
* v6 in v4
* v4 in v6
* v4 in v4
-are available. Use man:gifconfig[8] to assign physical (outer) source and destination address to gif interfaces. Configuration that uses same address family for inner and outer IP header (v4 in v4, or v6 in v6) is dangerous. It is very easy to configure interfaces and routing tables to perform infinite level of tunneling. _Please be warned_.
+are available. Use man:gifconfig[8] to assign the physical (outer) source and destination addresses to gif interfaces.
+A configuration that uses the same address family for the inner and outer IP headers (v4 in v4, or v6 in v6) is dangerous.
+It is very easy to configure interfaces and routing tables to perform an infinite level of tunneling.
+_Please be warned_.
-gif can be configured to be ECN-friendly. See <<ipsec-ecn,23.5.4.5>> for ECN-friendliness of tunnels, and man:gif[4] for how to configure.
+gif can be configured to be ECN-friendly.
+See <<ipsec-ecn,23.5.4.5>> for ECN-friendliness of tunnels, and man:gif[4] for how to configure.
-If you would like to configure an IPv4-in-IPv6 tunnel with gif interface, read man:gif[4] carefully. You will need to remove IPv6 link-local address automatically assigned to the gif interface.
+If you would like to configure an IPv4-in-IPv6 tunnel with a gif interface, read man:gif[4] carefully.
+You will need to remove the IPv6 link-local address automatically assigned to the gif interface.
[[ipv6-sas]]
==== Source Address Selection
-Current source selection rule is scope oriented (there are some exceptions - see below). For a given destination, a source IPv6 address is selected by the following rule:
+The current source selection rule is scope oriented (there are some exceptions - see below).
+For a given destination, a source IPv6 address is selected by the following rules:
. If the source address is explicitly specified by the user (e.g., via the advanced API), the specified address is used.
. If there is an address assigned to the outgoing interface (which is usually determined by looking up the routing table) that has the same scope as the destination address, the address is used.
@@ -271,16 +334,29 @@ This is the most typical case.
. If there is no address that satisfies the above condition, and destination address is site local scope, choose a site local address assigned to one of the interfaces on the sending node.
. If there is no address that satisfies the above condition, choose the address associated with the routing table entry for the destination. This is the last resort, which may cause scope violation.
-For instance, ::1 is selected for ff01::1, fe80:1::200:f8ff:fe01:6317 for fe80:1::2a0:24ff:feab:839b (note that embedded interface index - described in <<ipv6-scope-index,23.5.1.3>> - helps us choose the right source address. Those embedded indices will not be on the wire). If the outgoing interface has multiple address for the scope, a source is selected longest match basis (rule 3). Suppose 2001:0DB8:808:1:200:f8ff:fe01:6317 and 2001:0DB8:9:124:200:f8ff:fe01:6317 are given to the outgoing interface. 2001:0DB8:808:1:200:f8ff:fe01:6317 is chosen as the source for the destination 2001:0DB8:800::1.
+For instance, ::1 is selected for ff01::1, and fe80:1::200:f8ff:fe01:6317 for fe80:1::2a0:24ff:feab:839b (note that the embedded interface index - described in <<ipv6-scope-index,23.5.1.3>> - helps us choose the right source address.
+Those embedded indices will not appear on the wire).
+If the outgoing interface has multiple addresses for the scope, a source is selected on a longest match basis (rule 3).
+Suppose 2001:0DB8:808:1:200:f8ff:fe01:6317 and 2001:0DB8:9:124:200:f8ff:fe01:6317 are given to the outgoing interface. 2001:0DB8:808:1:200:f8ff:fe01:6317 is chosen as the source for the destination 2001:0DB8:800::1.
-Note that the above rule is not documented in the IPv6 spec. It is considered "up to implementation" item. There are some cases where we do not use the above rule. One example is connected TCP session, and we use the address kept in tcb as the source. Another example is source address for Neighbor Advertisement. Under the spec (RFC2461 7.2.2) NA's source should be the target address of the corresponding NS's target. In this case we follow the spec rather than the above longest-match rule.
+Note that the above rule is not documented in the IPv6 spec. It is considered an "up to implementation" item.
+There are some cases where we do not use the above rule.
+One example is a connected TCP session, where we use the address kept in the tcb as the source.
+Another example is the source address for a Neighbor Advertisement.
+Under the spec (RFC2461 7.2.2) an NA's source should be the target address of the corresponding NS's target.
+In this case we follow the spec rather than the above longest-match rule.
-For new connections (when rule 1 does not apply), deprecated addresses (addresses with preferred lifetime = 0) will not be chosen as source address if other choices are available. If no other choices are available, deprecated address will be used as a last resort. If there are multiple choice of deprecated addresses, the above scope rule will be used to choose from those deprecated addresses. If you would like to prohibit the use of deprecated address for some reason, configure net.inet6.ip6.use_deprecated to 0. The issue related to deprecated address is described in RFC2462 5.5.4 (NOTE: there is some debate underway in IETF ipngwg on how to use "deprecated" address).
+For new connections (when rule 1 does not apply), deprecated addresses (addresses with preferred lifetime = 0) will not be chosen as the source address if other choices are available.
+If no other choices are available, a deprecated address will be used as a last resort.
+If there are multiple deprecated addresses to choose from, the above scope rule will be used to choose among them.
+If you would like to prohibit the use of deprecated addresses for some reason, configure net.inet6.ip6.use_deprecated to 0.
+The issue related to deprecated addresses is described in RFC2462 5.5.4 (NOTE: there is some debate underway in IETF ipngwg on how to use "deprecated" addresses).
[[ipv6-jumbo]]
==== Jumbo Payload
-The Jumbo Payload hop-by-hop option is implemented and can be used to send IPv6 packets with payloads longer than 65,535 octets. But currently no physical interface whose MTU is more than 65,535 is supported, so such payloads can be seen only on the loopback interface (i.e., lo0).
+The Jumbo Payload hop-by-hop option is implemented and can be used to send IPv6 packets with payloads longer than 65,535 octets.
+But currently no physical interface whose MTU is more than 65,535 is supported, so such payloads can be seen only on the loopback interface (i.e., lo0).
If you want to try jumbo payloads, you first have to reconfigure the kernel so that the MTU of the loopback interface is more than 65,535 bytes; add the following to the kernel configuration file:
@@ -288,16 +364,22 @@ If you want to try jumbo payloads, you first have to reconfigure the kernel so t
and recompile the new kernel.
-Then you can test jumbo payloads by the man:ping6[8] command with -b and -s options. The -b option must be specified to enlarge the size of the socket buffer and the -s option specifies the length of the packet, which should be more than 65,535. For example, type as follows:
+Then you can test jumbo payloads by the man:ping6[8] command with -b and -s options.
+The -b option must be specified to enlarge the size of the socket buffer and the -s option specifies the length of the packet, which should be more than 65,535.
+For example, type as follows:
[source,bash]
....
% ping6 -b 70000 -s 68000 ::1
....
-The IPv6 specification requires that the Jumbo Payload option must not be used in a packet that carries a fragment header. If this condition is broken, an ICMPv6 Parameter Problem message must be sent to the sender. specification is followed, but you cannot usually see an ICMPv6 error caused by this requirement.
+The IPv6 specification requires that the Jumbo Payload option must not be used in a packet that carries a fragment header.
+If this condition is broken, an ICMPv6 Parameter Problem message must be sent to the sender.
+The specification is followed, but you cannot usually see an ICMPv6 error caused by this requirement.
-When an IPv6 packet is received, the frame length is checked and compared to the length specified in the payload length field of the IPv6 header or in the value of the Jumbo Payload option, if any. If the former is shorter than the latter, the packet is discarded and statistics are incremented. You can see the statistics as output of man:netstat[8] command with `-s -p ip6' option:
+When an IPv6 packet is received, the frame length is checked and compared to the length specified in the payload length field of the IPv6 header or in the value of the Jumbo Payload option, if any.
+If the former is shorter than the latter, the packet is discarded and statistics are incremented.
+You can see the statistics in the output of the man:netstat[8] command with the `-s -p ip6` option:
[source,bash]
....
@@ -307,45 +389,72 @@ When an IPv6 packet is received, the frame length is checked and compared to the
1 with data size < data length
....
-So, kernel does not send an ICMPv6 error unless the erroneous packet is an actual Jumbo Payload, that is, its packet size is more than 65,535 bytes. As described above, currently no physical interface with such a huge MTU is supported, so it rarely returns an ICMPv6 error.
+So, the kernel does not send an ICMPv6 error unless the erroneous packet is an actual Jumbo Payload, that is, its packet size is more than 65,535 bytes.
+As described above, currently no physical interface with such a huge MTU is supported, so it rarely returns an ICMPv6 error.
-TCP/UDP over jumbogram is not supported at this moment. This is because we have no medium (other than loopback) to test this. Contact us if you need this.
+TCP/UDP over jumbogram is not supported at this moment.
+This is because we have no medium (other than loopback) to test this.
+Contact us if you need this.
-IPsec does not work on jumbograms. This is due to some specification twists in supporting AH with jumbograms (AH header size influences payload length, and this makes it real hard to authenticate inbound packet with jumbo payload option as well as AH).
+IPsec does not work on jumbograms.
+This is due to some specification twists in supporting AH with jumbograms (the AH header size influences the payload length, and this makes it really hard to authenticate an inbound packet with the jumbo payload option as well as AH).
-There are fundamental issues in *BSD support for jumbograms. We would like to address those, but we need more time to finalize these. To name a few:
+There are fundamental issues in *BSD support for jumbograms.
+We would like to address those, but we need more time to finalize these.
+To name a few:
+
+* The mbuf pkthdr.len field is typed as "int" in 4.4BSD, so it will not hold a jumbogram with len > 2G on 32bit architecture CPUs.
+If we would like to support jumbogram properly, the field must be expanded to hold 4G + IPv6 header + link-layer header.
+Therefore, it must be expanded to at least int64_t (u_int32_t is NOT enough).
-* mbuf pkthdr.len field is typed as "int" in 4.4BSD, so it will not hold jumbogram with len > 2G on 32bit architecture CPUs. If we would like to support jumbogram properly, the field must be expanded to hold 4G + IPv6 header + link-layer header. Therefore, it must be expanded to at least int64_t (u_int32_t is NOT enough).
* We mistakingly use "int" to hold packet length in many places. We need to convert them into larger integral type. It needs a great care, as we may experience overflow during packet length computation.
* We mistakingly check for ip6_plen field of IPv6 header for packet payload length in various places. We should be checking mbuf pkthdr.len instead. ip6_input() will perform sanity check on jumbo payload option on input, and we can safely use mbuf pkthdr.len afterwards.
* TCP code needs a careful update in bunch of places, of course.
==== Loop Prevention in Header Processing
-IPv6 specification allows arbitrary number of extension headers to be placed onto packets. If we implement IPv6 packet processing code in the way BSD IPv4 code is implemented, kernel stack may overflow due to long function call chain. sys/netinet6 code is carefully designed to avoid kernel stack overflow, so sys/netinet6 code defines its own protocol switch structure, as "struct ip6protosw" (see [.filename]#netinet6/ip6protosw.h#). There is no such update to IPv4 part (sys/netinet) for compatibility, but small change is added to its pr_input() prototype. So "struct ipprotosw" is also defined. As a result, if you receive IPsec-over-IPv4 packet with massive number of IPsec headers, kernel stack may blow up. IPsec-over-IPv6 is okay. (Of-course, for those all IPsec headers to be processed, each such IPsec header must pass each IPsec check. So an anonymous attacker will not be able to do such an attack.)
+The IPv6 specification allows an arbitrary number of extension headers to be placed in packets.
+If we implemented the IPv6 packet processing code the way the BSD IPv4 code is implemented, the kernel stack might overflow due to a long function call chain.
+The sys/netinet6 code is carefully designed to avoid kernel stack overflow, so the sys/netinet6 code defines its own protocol switch structure, "struct ip6protosw" (see [.filename]#netinet6/ip6protosw.h#).
+There is no such update to the IPv4 part (sys/netinet) for compatibility, but a small change has been added to its pr_input() prototype.
+So "struct ipprotosw" is also defined.
+As a result, if you receive an IPsec-over-IPv4 packet with a massive number of IPsec headers, the kernel stack may blow up.
+IPsec-over-IPv6 is okay.
+(Of course, for all those IPsec headers to be processed, each such IPsec header must pass each IPsec check.
+So an anonymous attacker will not be able to mount such an attack.)
[[icmpv6]]
==== ICMPv6
+After RFC2463 was published, the IETF ipngwg decided to disallow ICMPv6 error packets against ICMPv6 redirects, to prevent ICMPv6 storms on a network medium.
+This is already implemented in the kernel.
==== Applications
For userland programming, we support IPv6 socket API as specified in RFC2553, RFC2292 and upcoming Internet drafts.
-TCP/UDP over IPv6 is available and quite stable. You can enjoy man:telnet[1], man:ftp[1], man:rlogin[1], man:rsh[1], man:ssh[1], etc. These applications are protocol independent. That is, they automatically chooses IPv4 or IPv6 according to DNS.
+TCP/UDP over IPv6 is available and quite stable.
+You can enjoy man:telnet[1], man:ftp[1], man:rlogin[1], man:rsh[1], man:ssh[1], etc.
+These applications are protocol independent.
+That is, they automatically choose IPv4 or IPv6 according to DNS.
==== Kernel Internals
While ip_forward() calls ip_output(), ip6_forward() directly calls if_output() since routers must not divide IPv6 packets into fragments.
-ICMPv6 should contain the original packet as long as possible up to 1280. UDP6/IP6 port unreach, for instance, should contain all extension headers and the *unchanged* UDP6 and IP6 headers. So, all IP6 functions except TCP never convert network byte order into host byte order, to save the original packet.
+ICMPv6 should contain the original packet as long as possible, up to 1280 bytes.
+UDP6/IP6 port unreach, for instance, should contain all extension headers and the *unchanged* UDP6 and IP6 headers.
+So, all IP6 functions except TCP never convert network byte order into host byte order, in order to preserve the original packet.
-tcp_input(), udp6_input() and icmp6_input() can not assume that IP6 header is preceding the transport headers due to extension headers. So, in6_cksum() was implemented to handle packets whose IP6 header and transport header is not continuous. TCP/IP6 nor UDP6/IP6 header structures do not exist for checksum calculation.
+tcp_input(), udp6_input() and icmp6_input() cannot assume that the IP6 header precedes the transport headers, due to extension headers.
+So, in6_cksum() was implemented to handle packets whose IP6 header and transport header are not contiguous.
+Neither TCP/IP6 nor UDP6/IP6 header structures exist for checksum calculation.
-To process IP6 header, extension headers and transport headers easily, network drivers are now required to store packets in one internal mbuf or one or more external mbufs. A typical old driver prepares two internal mbufs for 96 - 204 bytes data, however, now such packet data is stored in one external mbuf.
+To process IP6 header, extension headers and transport headers easily, network drivers are now required to store packets in one internal mbuf or one or more external mbufs.
+A typical old driver prepares two internal mbufs for 96 - 204 bytes of data; now, however, such packet data is stored in one external mbuf.
-`netstat -s -p ip6` tells you whether or not your driver conforms such requirement. In the following example, "cce0" violates the requirement. (For more information, refer to Section 2.)
+`netstat -s -p ip6` tells you whether or not your driver conforms to this requirement.
+In the following example, "cce0" violates the requirement.
+(For more information, refer to Section 2.)
[source,bash]
....
@@ -358,19 +467,23 @@ Mbuf statistics:
0 two or more ext mbuf
....
-Each input function calls IP6_EXTHDR_CHECK in the beginning to check if the region between IP6 and its header is continuous. IP6_EXTHDR_CHECK calls m_pullup() only if the mbuf has M_LOOP flag, that is, the packet comes from the loopback interface. m_pullup() is never called for packets coming from physical network interfaces.
+Each input function calls IP6_EXTHDR_CHECK at the beginning to check if the region between the IP6 header and its upper-layer header is contiguous.
+IP6_EXTHDR_CHECK calls m_pullup() only if the mbuf has M_LOOP flag, that is, the packet comes from the loopback interface.
+m_pullup() is never called for packets coming from physical network interfaces.
Both IP and IP6 reassemble functions never call m_pullup().
[[ipv6-wildcard-socket]]
==== IPv4 Mapped Address and IPv6 Wildcard Socket
-RFC2553 describes IPv4 mapped address (3.7) and special behavior of IPv6 wildcard bind socket (3.8). The spec allows you to:
+RFC2553 describes IPv4 mapped address (3.7) and special behavior of IPv6 wildcard bind socket (3.8).
+The spec allows you to:
* Accept IPv4 connections by AF_INET6 wildcard bind socket.
* Transmit IPv4 packet over AF_INET6 socket by using special form of the address like ::ffff:10.1.1.1.
-but the spec itself is very complicated and does not specify how the socket layer should behave. Here we call the former one "listening side" and the latter one "initiating side", for reference purposes.
+but the spec itself is very complicated and does not specify how the socket layer should behave.
+Here we call the former one "listening side" and the latter one "initiating side", for reference purposes.
You can perform wildcard bind on both of the address families, on the same port.
@@ -390,15 +503,29 @@ The following sections will give you more details, and how you can configure the
Comments on listening side:
-It looks that RFC2553 talks too little on wildcard bind issue, especially on the port space issue, failure mode and relationship between AF_INET/INET6 wildcard bind. There can be several separate interpretation for this RFC which conform to it but behaves differently. So, to implement portable application you should assume nothing about the behavior in the kernel. Using man:getaddrinfo[3] is the safest way. Port number space and wildcard bind issues were discussed in detail on ipv6imp mailing list, in mid March 1999 and it looks that there is no concrete consensus (means, up to implementers). You may want to check the mailing list archives.
+It seems that RFC2553 says too little about the wildcard bind issue, especially the port space issue, failure modes, and the relationship between AF_INET/INET6 wildcard binds.
+There can be several separate interpretations of this RFC which conform to it but behave differently.
+So, to implement a portable application you should assume nothing about the behavior in the kernel.
+Using man:getaddrinfo[3] is the safest way.
+Port number space and wildcard bind issues were discussed in detail on the ipv6imp mailing list in mid March 1999, and it looks like there is no concrete consensus (meaning it is up to implementers).
+You may want to check the mailing list archives.
If a server application would like to accept IPv4 and IPv6 connections, there will be two alternatives.
-One is using AF_INET and AF_INET6 socket (you will need two sockets). Use man:getaddrinfo[3] with AI_PASSIVE into ai_flags, and man:socket[2] and man:bind[2] to all the addresses returned. By opening multiple sockets, you can accept connections onto the socket with proper address family. IPv4 connections will be accepted by AF_INET socket, and IPv6 connections will be accepted by AF_INET6 socket.
+One is using AF_INET and AF_INET6 sockets (you will need two sockets).
+Use man:getaddrinfo[3] with AI_PASSIVE in ai_flags, then man:socket[2] and man:bind[2] for all the addresses returned.
+By opening multiple sockets, you can accept connections on the socket with the proper address family.
+IPv4 connections will be accepted by the AF_INET socket, and IPv6 connections will be accepted by the AF_INET6 socket.
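The two-socket approach can be sketched as follows. This is an illustrative sketch only (shown in Python, whose socket module wraps the same getaddrinfo/socket/bind calls); port 0 is a placeholder so the kernel picks a free port, and address families that are unusable on the local host are simply skipped:

```python
import socket

# Sketch of the two-socket approach: one wildcard listening socket per
# address family returned by getaddrinfo().  Port 0 is a placeholder so
# the kernel assigns a free port; a real server would use its own port.
listeners = []
for family, socktype, proto, _canon, addr in socket.getaddrinfo(
        None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE):
    try:
        s = socket.socket(family, socktype, proto)
        s.bind(addr)       # wildcard bind for this address family
        s.listen(5)
    except OSError:
        continue           # this family is not usable on this host
    listeners.append(s)

# IPv4 connections arrive on the AF_INET socket, IPv6 connections on
# the AF_INET6 socket.
for s in listeners:
    print(s.family, s.getsockname())
```

Each accepted connection then naturally carries the address family of the socket it arrived on.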
-Another way is using one AF_INET6 wildcard bind socket. Use man:getaddrinfo[3] with AI_PASSIVE into ai_flags and with AF_INET6 into ai_family, and set the 1st argument hostname to NULL. And man:socket[2] and man:bind[2] to the address returned. (should be IPv6 unspecified addr). You can accept either of IPv4 and IPv6 packet via this one socket.
+Another way is using one AF_INET6 wildcard bind socket.
+Use man:getaddrinfo[3] with AI_PASSIVE in ai_flags and with AF_INET6 in ai_family, and set the first argument, hostname, to NULL.
+Then man:socket[2] and man:bind[2] to the address returned (it should be the IPv6 unspecified address).
+You can accept both IPv4 and IPv6 packets via this one socket.
-To support only IPv6 traffic on AF_INET6 wildcard binded socket portably, always check the peer address when a connection is made toward AF_INET6 listening socket. If the address is IPv4 mapped address, you may want to reject the connection. You can check the condition by using IN6_IS_ADDR_V4MAPPED() macro.
+To portably support only IPv6 traffic on an AF_INET6 wildcard bound socket, always check the peer address when a connection is made to the AF_INET6 listening socket.
+If the address is an IPv4 mapped address, you may want to reject the connection.
+You can check the condition by using the IN6_IS_ADDR_V4MAPPED() macro.
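As an illustrative sketch (not part of the original text), the same test can be expressed in Python, whose ipaddress module exposes the IPv4-mapped check that IN6_IS_ADDR_V4MAPPED() performs on a struct in6_addr:

```python
import ipaddress

def is_v4mapped(peer):
    """Rough userland analogue of the IN6_IS_ADDR_V4MAPPED() macro:
    true when an IPv6 address has the form ::ffff:a.b.c.d."""
    addr = ipaddress.ip_address(peer)
    return addr.version == 6 and addr.ipv4_mapped is not None

# An IPv4 peer seen through an AF_INET6 wildcard socket is mapped;
# a v6-only server may choose to reject it.
print(is_v4mapped("::ffff:10.1.1.1"))
# A native IPv6 peer is not mapped.
print(is_v4mapped("2001:db8::1"))
```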
To resolve this issue more easily, there is system dependent man:setsockopt[2] option, IPV6_BINDV6ONLY, used like below.
@@ -421,15 +548,23 @@ Advise to application implementers: to implement a portable IPv6 application (wh
* If you would like to connect to destination, use man:getaddrinfo[3] and try all the destination returned, like man:telnet[1] does.
* Some of the IPv6 stack is shipped with buggy man:getaddrinfo[3]. Ship a minimal working version with your application and use that as last resort.
-If you would like to use AF_INET6 socket for both IPv4 and IPv6 outgoing connection, you will need to use man:getipnodebyname[3]. When you would like to update your existing application to be IPv6 aware with minimal effort, this approach might be chosen. But please note that it is a temporal solution, because man:getipnodebyname[3] itself is not recommended as it does not handle scoped IPv6 addresses at all. For IPv6 name resolution, man:getaddrinfo[3] is the preferred API. So you should rewrite your application to use man:getaddrinfo[3], when you get the time to do it.
+If you would like to use an AF_INET6 socket for both IPv4 and IPv6 outgoing connections, you will need to use man:getipnodebyname[3].
+When you would like to update your existing application to be IPv6 aware with minimal effort, this approach might be chosen.
+But please note that it is a temporary solution, because man:getipnodebyname[3] itself is not recommended as it does not handle scoped IPv6 addresses at all.
+For IPv6 name resolution, man:getaddrinfo[3] is the preferred API.
+So you should rewrite your application to use man:getaddrinfo[3] when you get the time to do it.
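The "try all the destinations returned, like telnet(1) does" pattern can be sketched as follows (an illustrative sketch in Python, whose socket module wraps the same getaddrinfo/connect calls):

```python
import socket

def connect_any(host, port):
    """Try every address returned by getaddrinfo(), in order, until one
    connect() succeeds -- the pattern man:telnet[1] uses.  Returns the
    connected socket, or re-raises the last error."""
    last_err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        s = None
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s       # first address that works wins
        except OSError as err:
            last_err = err
            if s is not None:
                s.close()
    raise last_err if last_err else OSError("no addresses returned")
```

The caller never needs to know whether the name resolved to IPv4 addresses, IPv6 addresses, or both.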
-When writing applications that make outgoing connections, story goes much simpler if you treat AF_INET and AF_INET6 as totally separate address family. {set,get}sockopt issue goes simpler, DNS issue will be made simpler. We do not recommend you to rely upon IPv4 mapped address.
+When writing applications that make outgoing connections, the story becomes much simpler if you treat AF_INET and AF_INET6 as totally separate address families.
+The {set,get}sockopt issues become simpler, and DNS issues are made simpler.
+We do not recommend relying upon IPv4 mapped addresses.
===== unified tcp and inpcb code
-FreeBSD 4.x uses shared tcp code between IPv4 and IPv6 (from sys/netinet/tcp*) and separate udp4/6 code. It uses unified inpcb structure.
+FreeBSD 4.x uses shared tcp code between IPv4 and IPv6 (from sys/netinet/tcp*) and separate udp4/6 code.
+It uses unified inpcb structure.
-The platform can be configured to support IPv4 mapped address. Kernel configuration is summarized as follows:
+The platform can be configured to support IPv4 mapped addresses.
+Kernel configuration is summarized as follows:
* By default, AF_INET6 socket will grab IPv4 connections in certain condition, and can initiate connection to IPv4 destination embedded in IPv4 mapped IPv6 address.
* You can disable it on entire system with sysctl like below.
@@ -438,7 +573,8 @@ The platform can be configured to support IPv4 mapped address. Kernel configurat
====== Listening Side
-Each socket can be configured to support special AF_INET6 wildcard bind (enabled by default). You can disable it on each socket basis with man:setsockopt[2] like below.
+Each socket can be configured to support the special AF_INET6 wildcard bind (enabled by default).
+You can disable it on a per-socket basis with man:setsockopt[2] like below.
[.programlisting]
....
@@ -461,7 +597,10 @@ FreeBSD 4.x supports outgoing connection to IPv4 mapped address (::ffff:10.1.1.1
==== sockaddr_storage
-When RFC2553 was about to be finalized, there was discussion on how struct sockaddr_storage members are named. One proposal is to prepend "__" to the members (like "__ss_len") as they should not be touched. The other proposal was not to prepend it (like "ss_len") as we need to touch those members directly. There was no clear consensus on it.
+When RFC2553 was about to be finalized, there was discussion on how struct sockaddr_storage members should be named.
+One proposal was to prepend "__" to the members (like "__ss_len") as they should not be touched.
+The other proposal was not to prepend it (like "ss_len") as we need to touch those members directly.
+There was no clear consensus on it.
As a result, RFC2553 defines struct sockaddr_storage as follows:
@@ -489,7 +628,8 @@ In December 1999, it was agreed that RFC2553bis should pick the latter (XNET) de
Current implementation conforms to XNET definition, based on RFC2553bis discussion.
-If you look at multiple IPv6 implementations, you will be able to see both definitions. As an userland programmer, the most portable way of dealing with it is to:
+If you look at multiple IPv6 implementations, you will be able to see both definitions.
+As a userland programmer, the most portable way of dealing with it is to:
. ensure ss_family and/or ss_len are available on the platform, by using GNU autoconf,
. have -Dss_family=__ss_family to unify all occurrences (including header file) into __ss_family, or
@@ -508,9 +648,11 @@ Now following two items are required to be supported by standard drivers:
. mbuf clustering requirement. In this stable release, we changed MINCLSIZE into MHLEN+1 for all the operating systems in order to make all the drivers behave as we expect.
. multicast. If man:ifmcstat[8] yields no multicast group for a interface, that interface has to be patched.
-If any of the drivers do not support the requirements, then the drivers cannot be used for IPv6 and/or IPsec communication. If you find any problem with your card using IPv6/IPsec, then, please report it to the {freebsd-bugs}.
+If any of the drivers do not support these requirements, then those drivers cannot be used for IPv6 and/or IPsec communication.
+If you find any problem with your card when using IPv6/IPsec, then please report it to the {freebsd-bugs}.
-(NOTE: In the past we required all PCMCIA drivers to have a call to in6_ifattach(). We have no such requirement any more)
+(NOTE: In the past we required all PCMCIA drivers to have a call to in6_ifattach().
+We have no such requirement any more.)
=== Translator
@@ -532,23 +674,38 @@ IPsec is mainly organized by three components.
==== Policy Management
-The kernel implements experimental policy management code. There are two way to manage security policy. One is to configure per-socket policy using man:setsockopt[2]. In this cases, policy configuration is described in man:ipsec_set_policy[3]. The other is to configure kernel packet filter-based policy using PF_KEY interface, via man:setkey[8].
+The kernel implements experimental policy management code.
+There are two ways to manage security policy.
+One is to configure a per-socket policy using man:setsockopt[2].
+In this case, the policy configuration is described in man:ipsec_set_policy[3].
+The other is to configure a kernel packet filter-based policy using the PF_KEY interface, via man:setkey[8].
The policy entry is not re-ordered with its indexes, so the order of entry when you add is very significant.
==== Key Management
-The key management code implemented in this kit (sys/netkey) is a home-brew PFKEY v2 implementation. This conforms to RFC2367.
+The key management code implemented in this kit (sys/netkey) is a home-brew PFKEY v2 implementation.
+This conforms to RFC2367.
-The home-brew IKE daemon, "racoon" is included in the kit (kame/kame/racoon). Basically you will need to run racoon as daemon, then set up a policy to require keys (like `ping -P 'out ipsec esp/transport//use'`). The kernel will contact racoon daemon as necessary to exchange keys.
+The home-brew IKE daemon, "racoon" is included in the kit (kame/kame/racoon).
+Basically you will need to run racoon as a daemon, then set up a policy to require keys (like `ping -P 'out ipsec esp/transport//use'`).
+The kernel will contact the racoon daemon as necessary to exchange keys.
==== AH and ESP Handling
-IPsec module is implemented as "hooks" to the standard IPv4/IPv6 processing. When sending a packet, ip{,6}_output() checks if ESP/AH processing is required by checking if a matching SPD (Security Policy Database) is found. If ESP/AH is needed, {esp,ah}{4,6}_output() will be called and mbuf will be updated accordingly. When a packet is received, {esp,ah}4_input() will be called based on protocol number, i.e., (*inetsw[proto])(). {esp,ah}4_input() will decrypt/check authenticity of the packet, and strips off daisy-chained header and padding for ESP/AH. It is safe to strip off the ESP/AH header on packet reception, since we will never use the received packet in "as is" form.
+The IPsec module is implemented as "hooks" into the standard IPv4/IPv6 processing.
+When sending a packet, ip{,6}_output() checks if ESP/AH processing is required by checking if a matching SPD (Security Policy Database) entry is found.
+If ESP/AH is needed, {esp,ah}{4,6}_output() will be called and the mbuf will be updated accordingly.
+When a packet is received, {esp,ah}4_input() will be called based on the protocol number, i.e., (*inetsw[proto])().
+{esp,ah}4_input() will decrypt/check the authenticity of the packet, and strip off the daisy-chained header and padding for ESP/AH.
+It is safe to strip off the ESP/AH header on packet reception, since we will never use the received packet in "as is" form.
-By using ESP/AH, TCP4/6 effective data segment size will be affected by extra daisy-chained headers inserted by ESP/AH. Our code takes care of the case.
+When using ESP/AH, the effective TCP4/6 data segment size will be affected by the extra daisy-chained headers inserted by ESP/AH.
+Our code takes care of this case.
-Basic crypto functions can be found in directory "sys/crypto". ESP/AH transform are listed in {esp,ah}_core.c with wrapper functions. If you wish to add some algorithm, add wrapper function in {esp,ah}_core.c, and add your crypto algorithm code into sys/crypto.
+Basic crypto functions can be found in the directory "sys/crypto".
+ESP/AH transforms are listed in {esp,ah}_core.c with wrapper functions.
+If you wish to add an algorithm, add a wrapper function in {esp,ah}_core.c, and add your crypto algorithm code into sys/crypto.
Tunnel mode is partially supported in this release, with the following restrictions:
@@ -562,7 +719,8 @@ The IPsec code in the kernel conforms (or, tries to conform) to the following st
"old IPsec" specification documented in [.filename]#rfc182[5-9].txt#
-"new IPsec" specification documented in [.filename]#rfc240[1-6].txt#, [.filename]#rfc241[01].txt#, [.filename]#rfc2451.txt# and [.filename]#draft-mcdonald-simple-ipsec-api-01.txt# (draft expired, but you can take from link:ftp://ftp.kame.net/pub/internet-drafts/[ ftp://ftp.kame.net/pub/internet-drafts/]). (NOTE: IKE specifications, [.filename]#rfc241[7-9].txt# are implemented in userland, as "racoon" IKE daemon)
+"new IPsec" specification documented in [.filename]#rfc240[1-6].txt#, [.filename]#rfc241[01].txt#, [.filename]#rfc2451.txt# and [.filename]#draft-mcdonald-simple-ipsec-api-01.txt# (draft expired, but you can get it from link:ftp://ftp.kame.net/pub/internet-drafts/[ ftp://ftp.kame.net/pub/internet-drafts/]).
+(NOTE: IKE specifications, [.filename]#rfc241[7-9].txt# are implemented in userland, as "racoon" IKE daemon)
Currently supported algorithms are:
@@ -608,16 +766,21 @@ The following algorithms are NOT supported:
** HMAC MD5 with 128bit crypto checksum + 64bit replay prevention ([.filename]#rfc2085.txt#)
** keyed SHA1 with 160bit crypto checksum + 32bit padding ([.filename]#rfc1852.txt#)
-IPsec (in kernel) and IKE (in userland as "racoon") has been tested at several interoperability test events, and it is known to interoperate with many other implementations well. Also, current IPsec implementation as quite wide coverage for IPsec crypto algorithms documented in RFC (we cover algorithms without intellectual property issues only).
+IPsec (in kernel) and IKE (in userland, as "racoon") have been tested at several interoperability test events, and they are known to interoperate well with many other implementations.
+Also, the current IPsec implementation has quite wide coverage of the IPsec crypto algorithms documented in RFCs (we cover only algorithms without intellectual property issues).
[[ipsec-ecn]]
==== ECN Consideration on IPsec Tunnels
ECN-friendly IPsec tunnel is supported as described in [.filename]#draft-ipsec-ecn-00.txt#.
-Normal IPsec tunnel is described in RFC2401. On encapsulation, IPv4 TOS field (or, IPv6 traffic class field) will be copied from inner IP header to outer IP header. On decapsulation outer IP header will be simply dropped. The decapsulation rule is not compatible with ECN, since ECN bit on the outer IP TOS/traffic class field will be lost.
+A normal IPsec tunnel is described in RFC2401.
+On encapsulation, the IPv4 TOS field (or the IPv6 traffic class field) will be copied from the inner IP header to the outer IP header.
+On decapsulation, the outer IP header will be simply dropped.
+The decapsulation rule is not compatible with ECN, since the ECN bit in the outer IP TOS/traffic class field will be lost.
-To make IPsec tunnel ECN-friendly, we should modify encapsulation and decapsulation procedure. This is described in http://www.aciri.org/floyd/papers/draft-ipsec-ecn-00.txt[ http://www.aciri.org/floyd/papers/draft-ipsec-ecn-00.txt], chapter 3.
+To make IPsec tunnels ECN-friendly, we should modify the encapsulation and decapsulation procedures.
+This is described in http://www.aciri.org/floyd/papers/draft-ipsec-ecn-00.txt[ http://www.aciri.org/floyd/papers/draft-ipsec-ecn-00.txt], chapter 3.
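As a rough sketch of the modified decapsulation rule (our reading of the draft; the exact bit layout and rule text should be checked against the draft itself), a Congestion Experienced mark set on the outer header in transit is folded back into the inner header instead of being dropped with it:

```python
# In the RFC2481-era ECN layout assumed here, CE (Congestion
# Experienced) is the least significant bit of the TOS/traffic class
# octet; check the draft for the exact bit positions.
ECN_CE = 0x01

def ecn_friendly_decap(outer_tos, inner_tos):
    """Sketch of ECN-friendly decapsulation: instead of discarding the
    outer header outright, propagate a CE mark placed on it by routers
    along the tunnel into the inner header's TOS/traffic class."""
    if outer_tos & ECN_CE:
        return inner_tos | ECN_CE
    return inner_tos

# A CE mark applied to the outer header in transit survives decapsulation:
print(hex(ecn_friendly_decap(0x01, 0x02)))
```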
IPsec tunnel implementation can give you three behaviors, by setting net.inet.ipsec.ecn (or net.inet6.ipsec6.ecn) to some value:
@@ -662,6 +825,7 @@ http://www.aciri.org/floyd/papers/draft-ipsec-ecn-00.txt[ http://www.aciri.org/f
==== Interoperability
-Here are (some of) platforms that KAME code have tested IPsec/IKE interoperability in the past. Note that both ends may have modified their implementation, so use the following list just for reference purposes.
+Here are (some of) the platforms with which the KAME code has tested IPsec/IKE interoperability in the past.
+Note that both ends may have modified their implementation, so use the following list just for reference purposes.
Altiga, Ashley-laurent (vpcom.com), Data Fellows (F-Secure), Ericsson ACC, FreeS/WAN, HITACHI, IBM AIX(R), IIJ, Intel, Microsoft(R) Windows NT(R), NIST (linux IPsec + plutoplus), Netscreen, OpenBSD, RedCreek, Routerware, SSH, Secure Computing, Soliton, Toshiba, VPNet, Yamaha RT100i
diff --git a/documentation/content/en/books/developers-handbook/kernelbuild/_index.adoc b/documentation/content/en/books/developers-handbook/kernelbuild/_index.adoc
index e7fb85fa07..bf127de265 100644
--- a/documentation/content/en/books/developers-handbook/kernelbuild/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/kernelbuild/_index.adoc
@@ -32,19 +32,23 @@ include::shared/en/urls.adoc[]
toc::[]
-Being a kernel developer requires understanding of the kernel build process. To debug the FreeBSD kernel it is required to be able to build one. There are two known ways to do so:
+Being a kernel developer requires an understanding of the kernel build process.
+To debug the FreeBSD kernel, it is necessary to be able to build one.
+There are two known ways to do so:
The supported procedure to build and install a kernel is documented in the link:{handbook}#kernelconfig-building[Building and Installing a Custom Kernel] chapter of the FreeBSD Handbook.
[NOTE]
====
-It is supposed that the reader of this chapter is familiar with the information described in the link:{handbook}#kernelconfig-building[Building and Installing a Custom Kernel] chapter of the FreeBSD Handbook. If this is not the case, please read through the above mentioned chapter to understand how the build process works.
+It is assumed that the reader of this chapter is familiar with the information described in the link:{handbook}#kernelconfig-building[Building and Installing a Custom Kernel] chapter of the FreeBSD Handbook.
+If this is not the case, please read through the above-mentioned chapter to understand how the build process works.
====
[[kernelbuild-traditional]]
== Building the Faster but Brittle Way
-Building the kernel this way may be useful when working on the kernel code and it may actually be faster than the documented procedure when only a single option or two were tweaked in the kernel configuration file. On the other hand, it might lead to unexpected kernel build breakage.
+Building the kernel this way may be useful when working on the kernel code, and it may actually be faster than the documented procedure when only one or two options have been tweaked in the kernel configuration file.
+On the other hand, it might lead to unexpected kernel build breakage.
[.procedure]
. Run man:config[8] to generate the kernel source code:
diff --git a/documentation/content/en/books/developers-handbook/kerneldebug/_index.adoc b/documentation/content/en/books/developers-handbook/kerneldebug/_index.adoc
index daea455841..817ef1f7de 100644
--- a/documentation/content/en/books/developers-handbook/kerneldebug/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/kerneldebug/_index.adoc
@@ -38,13 +38,20 @@ toc::[]
[[kerneldebug-obtain]]
== Obtaining a Kernel Crash Dump
-When running a development kernel (e.g., FreeBSD-CURRENT), such as a kernel under extreme conditions (e.g., very high load averages, tens of thousands of connections, exceedingly high number of concurrent users, hundreds of man:jail[8]s, etc.), or using a new feature or device driver on FreeBSD-STABLE (e.g., PAE), sometimes a kernel will panic. In the event that it does, this chapter will demonstrate how to extract useful information out of a crash.
+When running a development kernel (e.g., FreeBSD-CURRENT), running a kernel under extreme conditions (e.g., very high load averages, tens of thousands of connections, an exceedingly high number of concurrent users, hundreds of man:jail[8]s, etc.),
+or using a new feature or device driver on FreeBSD-STABLE (e.g., PAE), sometimes the kernel will panic.
+In the event that it does, this chapter will demonstrate how to extract useful information from a crash.
-A system reboot is inevitable once a kernel panics. Once a system is rebooted, the contents of a system's physical memory (RAM) is lost, as well as any bits that are on the swap device before the panic. To preserve the bits in physical memory, the kernel makes use of the swap device as a temporary place to store the bits that are in RAM across a reboot after a crash. In doing this, when FreeBSD boots after a crash, a kernel image can now be extracted and debugging can take place.
+A system reboot is inevitable once a kernel panics.
+Once a system is rebooted, the contents of its physical memory (RAM) are lost, as are any bits that were on the swap device before the panic.
+To preserve the contents of physical memory, the kernel uses the swap device as a temporary place to store the contents of RAM across a reboot after a crash.
+Thus, when FreeBSD boots after a crash, a kernel image can be extracted and debugging can take place.
[NOTE]
====
-A swap device that has been configured as a dump device still acts as a swap device. Dumps to non-swap devices (such as tapes or CDRWs, for example) are not supported at this time. A "swap device" is synonymous with a "swap partition."
+A swap device that has been configured as a dump device still acts as a swap device.
+Dumps to non-swap devices (such as tapes or CDRWs, for example) are not supported at this time.
+A "swap device" is synonymous with a "swap partition."
====
Several types of kernel crash dumps are available:
@@ -63,11 +70,15 @@ Minidumps are the default dump type as of FreeBSD 7.0, and in most cases will ca
[[config-dumpdev]]
=== Configuring the Dump Device
-Before the kernel will dump the contents of its physical memory to a dump device, a dump device must be configured. A dump device is specified by using the man:dumpon[8] command to tell the kernel where to save kernel crash dumps. The man:dumpon[8] program must be called after the swap partition has been configured with man:swapon[8]. This is normally handled by setting the `dumpdev` variable in man:rc.conf[5] to the path of the swap device (the recommended way to extract a kernel dump) or `AUTO` to use the first configured swap device. The default for `dumpdev` is `AUTO` in HEAD, and changed to `NO` on RELENG_* branches (except for RELENG_7, which was left set to `AUTO`). On FreeBSD 9.0-RELEASE and later versions, bsdinstall will ask whether crash dumps should be enabled on the target system during the install process.
+Before the kernel can dump the contents of its physical memory to a dump device, a dump device must be configured.
+A dump device is specified by using the man:dumpon[8] command to tell the kernel where to save kernel crash dumps.
+The man:dumpon[8] program must be called after the swap partition has been configured with man:swapon[8].
+This is normally handled by setting the `dumpdev` variable in man:rc.conf[5] to the path of the swap device (the recommended way to extract a kernel dump) or to `AUTO` to use the first configured swap device.
+The default for `dumpdev` is `AUTO` in HEAD, and is changed to `NO` on RELENG_* branches (except for RELENG_7, which was left set to `AUTO`).
+On FreeBSD 9.0-RELEASE and later versions, bsdinstall will ask whether crash dumps should be enabled on the target system during the install process.
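For instance, a minimal man:rc.conf[5] fragment enabling crash dumps might look like this (the explicit device path is illustrative; use your own swap device):

[source,bash]
....
# /etc/rc.conf -- illustrative values
dumpdev="AUTO"          # or an explicit swap device, e.g., /dev/ada0s1b
dumpdir="/var/crash"    # where savecore(8) places extracted dumps (default)
....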
[TIP]
====
-
Check [.filename]#/etc/fstab# or man:swapinfo[8] for a list of swap devices.
====
@@ -87,16 +98,23 @@ Also, remember that the contents of [.filename]#/var/crash# is sensitive and ver
[[extract-dump]]
=== Extracting a Kernel Dump
-Once a dump has been written to a dump device, the dump must be extracted before the swap device is mounted. To extract a dump from a dump device, use the man:savecore[8] program. If `dumpdev` has been set in man:rc.conf[5], man:savecore[8] will be called automatically on the first multi-user boot after the crash and before the swap device is mounted. The location of the extracted core is placed in the man:rc.conf[5] value `dumpdir`, by default [.filename]#/var/crash# and will be named [.filename]#vmcore.0#.
+Once a dump has been written to a dump device, the dump must be extracted before the swap device is mounted.
+To extract a dump from a dump device, use the man:savecore[8] program.
+If `dumpdev` has been set in man:rc.conf[5], man:savecore[8] will be called automatically on the first multi-user boot after the crash, before the swap device is mounted.
+The extracted core is placed in the directory given by the man:rc.conf[5] variable `dumpdir` (by default [.filename]#/var/crash#) and will be named [.filename]#vmcore.0#.
-In the event that there is already a file called [.filename]#vmcore.0# in [.filename]#/var/crash# (or whatever `dumpdir` is set to), the kernel will increment the trailing number for every crash to avoid overwriting an existing [.filename]#vmcore# (e.g., [.filename]#vmcore.1#). man:savecore[8] will always create a symbolic link to named [.filename]#vmcore.last# in [.filename]#/var/crash# after a dump is saved. This symbolic link can be used to locate the name of the most recent dump.
+In the event that there is already a file called [.filename]#vmcore.0# in [.filename]#/var/crash# (or whatever `dumpdir` is set to), the kernel will increment the trailing number for every crash to avoid overwriting an existing [.filename]#vmcore# (e.g., [.filename]#vmcore.1#).
+man:savecore[8] will always create a symbolic link named [.filename]#vmcore.last# in [.filename]#/var/crash# after a dump is saved.
+This symbolic link can be used to locate the most recent dump.
-The man:crashinfo[8] utility generates a text file containing a summary of information from a full memory dump or minidump. If `dumpdev` has been set in man:rc.conf[5], man:crashinfo[8] will be invoked automatically after man:savecore[8]. The output is saved to a file in `dumpdir` named [.filename]#core.txt.N#.
+The man:crashinfo[8] utility generates a text file containing a summary of information from a full memory dump or minidump.
+If `dumpdev` has been set in man:rc.conf[5], man:crashinfo[8] will be invoked automatically after man:savecore[8].
+The output is saved to a file in `dumpdir` named [.filename]#core.txt.N#.
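Assuming the defaults, a post-crash inspection session might look like this (the file names shown are illustrative):

[source,bash]
....
# ls /var/crash
core.txt.0      info.0          vmcore.0        vmcore.last
# less /var/crash/core.txt.0
....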
[TIP]
====
-
-If you are testing a new kernel but need to boot a different one in order to get your system up and running again, boot it only into single user mode using the `-s` flag at the boot prompt, and then perform the following steps:
+If you are testing a new kernel but need to boot a different one in order to get your system up and running again,
+boot it only into single user mode using the `-s` flag at the boot prompt, and then perform the following steps:
[source,bash]
....
@@ -106,12 +124,16 @@ If you are testing a new kernel but need to boot a different one in order to get
# exit # exit to multi-user
....
-This instructs man:savecore[8] to extract a kernel dump from [.filename]#/dev/ad0s1b# and place the contents in [.filename]#/var/crash#. Do not forget to make sure the destination directory [.filename]#/var/crash# has enough space for the dump. Also, do not forget to specify the correct path to your swap device as it is likely different than [.filename]#/dev/ad0s1b#!
+This instructs man:savecore[8] to extract a kernel dump from [.filename]#/dev/ad0s1b# and place the contents in [.filename]#/var/crash#.
+Do not forget to make sure the destination directory [.filename]#/var/crash# has enough space for the dump.
+Also, do not forget to specify the correct path to your swap device, as it is likely different from [.filename]#/dev/ad0s1b#!
====
=== Testing Kernel Dump Configuration
-The kernel includes a man:sysctl[8] node that requests a kernel panic. This can be used to verify that your system is properly configured to save kernel crash dumps. You may wish to remount existing file systems as read-only in single user mode before triggering the crash to avoid data loss.
+The kernel includes a man:sysctl[8] node that requests a kernel panic.
+This can be used to verify that your system is properly configured to save kernel crash dumps.
+You may wish to remount existing file systems as read-only in single user mode before triggering the crash to avoid data loss.
[source,bash]
....
@@ -131,7 +153,9 @@ After rebooting, your system should save a dump in [.filename]#/var/crash# along
[NOTE]
====
-This section covers man:kgdb[1]. The latest version is included in the package:devel/gdb[]. An older version is also present in FreeBSD 11 and earlier.
+This section covers man:kgdb[1].
+The latest version is included in the package:devel/gdb[].
+An older version is also present in FreeBSD 11 and earlier.
====
To enter into the debugger and begin getting information from the dump, start kgdb:
@@ -141,14 +165,16 @@ To enter into the debugger and begin getting information from the dump, start kg
# kgdb -n N
....
-Where _N_ is the suffix of the [.filename]#vmcore.N# to examine. To open the most recent dump use:
+Where _N_ is the suffix of the [.filename]#vmcore.N# to examine.
+To open the most recent dump use:
[source,bash]
....
# kgdb -n last
....
-Normally, man:kgdb[1] should be able to locate the kernel running at the time the dump was generated. If it is not able to locate the correct kernel, pass the pathname of the kernel and dump as two arguments to kgdb:
+Normally, man:kgdb[1] should be able to locate the kernel running at the time the dump was generated.
+If it is not able to locate the correct kernel, pass the pathname of the kernel and dump as two arguments to kgdb:
[source,bash]
....
@@ -157,7 +183,12 @@ Normally, man:kgdb[1] should be able to locate the kernel running at the time th
You can debug the crash dump using the kernel sources just like you can for any other program.
-This dump is from a 5.2-BETA kernel and the crash comes from deep within the kernel. The output below has been modified to include line numbers on the left. This first trace inspects the instruction pointer and obtains a back trace. The address that is used on line 41 for the `list` command is the instruction pointer and can be found on line 17. Most developers will request having at least this information sent to them if you are unable to debug the problem yourself. If, however, you do solve the problem, make sure that your patch winds its way into the source tree via a problem report, mailing lists, or by being able to commit it!
+This dump is from a 5.2-BETA kernel and the crash comes from deep within the kernel.
+The output below has been modified to include line numbers on the left.
+This first trace inspects the instruction pointer and obtains a back trace.
+The address that is used on line 41 for the `list` command is the instruction pointer and can be found on line 17.
+Most developers will ask for at least this information if you are unable to debug the problem yourself.
+If, however, you do solve the problem, make sure your patch makes its way into the source tree via a problem report, the mailing lists, or by committing it yourself!
[source,bash]
....
@@ -255,16 +286,19 @@ This dump is from a 5.2-BETA kernel and the crash comes from deep within the ker
[TIP]
====
-
-If your system is crashing regularly and you are running out of disk space, deleting old [.filename]#vmcore# files in [.filename]#/var/crash# could save a considerable amount of disk space!
+If your system is crashing regularly and you are running out of disk space,
+deleting old [.filename]#vmcore# files in [.filename]#/var/crash# could save a considerable amount of disk space!
====
[[kerneldebug-online-ddb]]
== On-Line Kernel Debugging Using DDB
-While `kgdb` as an off-line debugger provides a very high level of user interface, there are some things it cannot do. The most important ones being breakpointing and single-stepping kernel code.
+While `kgdb` as an off-line debugger provides a very high-level user interface, there are some things it cannot do.
+The most important ones are setting breakpoints and single-stepping through kernel code.
-If you need to do low-level debugging on your kernel, there is an on-line debugger available called DDB. It allows setting of breakpoints, single-stepping kernel functions, examining and changing kernel variables, etc. However, it cannot access kernel source files, and only has access to the global and static symbols, not to the full debug information like `kgdb` does.
+If you need to do low-level debugging on your kernel, there is an on-line debugger available called DDB.
+It allows setting of breakpoints, single-stepping kernel functions, examining and changing kernel variables, etc.
+However, it cannot access kernel source files, and it only has access to global and static symbols, not the full debug information available to `kgdb`.
To configure your kernel to include DDB, add the options
[.programlisting]
@@ -277,20 +311,32 @@ options KDB
options DDB
....
-to your config file, and rebuild. (See link:{handbook}/[The FreeBSD Handbook] for details on configuring the FreeBSD kernel).
+to your config file, and rebuild.
+(See link:{handbook}/[The FreeBSD Handbook] for details on configuring the FreeBSD kernel).
-Once your DDB kernel is running, there are several ways to enter DDB. The first, and earliest way is to use the boot flag `-d`. The kernel will start up in debug mode and enter DDB prior to any device probing. Hence you can even debug the device probe/attach functions. To use this, exit the loader's boot menu and enter `boot -d` at the loader prompt.
+Once your DDB kernel is running, there are several ways to enter DDB.
+The first, and earliest, way is to use the boot flag `-d`.
+The kernel will start up in debug mode and enter DDB prior to any device probing.
+Hence you can even debug the device probe/attach functions.
+To use this, exit the loader's boot menu and enter `boot -d` at the loader prompt.
-The second scenario is to drop to the debugger once the system has booted. There are two simple ways to accomplish this. If you would like to break to the debugger from the command prompt, simply type the command:
+The second scenario is to drop to the debugger once the system has booted.
+There are two simple ways to accomplish this.
+If you would like to break to the debugger from the command prompt, simply type the command:
[source,bash]
....
# sysctl debug.kdb.enter=1
....
-Alternatively, if you are at the system console, you may use a hot-key on the keyboard. The default break-to-debugger sequence is kbd:[Ctrl+Alt+ESC]. For syscons, this sequence can be remapped and some of the distributed maps out there do this, so check to make sure you know the right sequence to use. There is an option available for serial consoles that allows the use of a serial line BREAK on the console line to enter DDB (`options BREAK_TO_DEBUGGER` in the kernel config file). It is not the default since there are a lot of serial adapters around that gratuitously generate a BREAK condition, for example when pulling the cable.
+Alternatively, if you are at the system console, you may use a hot-key on the keyboard.
+The default break-to-debugger sequence is kbd:[Ctrl+Alt+ESC].
+For syscons, this sequence can be remapped and some of the distributed maps out there do this, so check to make sure you know the right sequence to use.
+There is an option available for serial consoles that allows the use of a serial line BREAK on the console line to enter DDB (`options BREAK_TO_DEBUGGER` in the kernel config file).
+It is not the default since there are a lot of serial adapters around that gratuitously generate a BREAK condition, for example when pulling the cable.
-The third way is that any panic condition will branch to DDB if the kernel is configured to use it. For this reason, it is not wise to configure a kernel with DDB for a machine running unattended.
+The third way is that any panic condition will branch to DDB if the kernel is configured to use it.
+For this reason, it is not wise to configure a kernel with DDB for a machine running unattended.
To obtain the unattended functionality, add:
@@ -301,14 +347,17 @@ options KDB_UNATTENDED
to the kernel configuration file and rebuild/reinstall.
-The DDB commands roughly resemble some `gdb` commands. The first thing you probably need to do is to set a breakpoint:
+The DDB commands roughly resemble some `gdb` commands.
+The first thing you probably need to do is to set a breakpoint:
[source,bash]
....
break function-name address
....
-Numbers are taken hexadecimal by default, but to make them distinct from symbol names; hexadecimal numbers starting with the letters `a-f` need to be preceded with `0x` (this is optional for other numbers). Simple expressions are allowed, for example: `function-name + 0x103`.
+Numbers are taken as hexadecimal by default, but to make them distinct from symbol names,
+hexadecimal numbers starting with the letters `a-f` must be preceded by `0x` (this is optional for other numbers).
+Simple expressions are allowed, for example: `function-name + 0x103`.
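For instance, assuming a kernel symbol named `fork` (the symbol and offset here are purely illustrative), a breakpoint a few instructions past the start of the function could be set as:

[source,bash]
....
break fork + 0x10
....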
To exit the debugger and continue execution, type:
@@ -334,7 +383,8 @@ If you want to remove a breakpoint, use
del address-expression
....
-The first form will be accepted immediately after a breakpoint hit, and deletes the current breakpoint. The second form can remove any breakpoint, but you need to specify the exact address; this can be obtained from:
+The first form will be accepted immediately after a breakpoint hit, and deletes the current breakpoint.
+The second form can remove any breakpoint, but you need to specify the exact address; this can be obtained from:
[source,bash]
....
@@ -364,7 +414,8 @@ This will step into functions, but you can make DDB trace them until the matchin
[NOTE]
====
-This is different from ``gdb``'s `next` statement; it is like ``gdb``'s `finish`. Pressing kbd:[n] more than once will cause a continue.
+This is different from ``gdb``'s `next` statement; it is like ``gdb``'s `finish`.
+Pressing kbd:[n] more than once will cause a continue.
====
To examine data from memory, use (for example):
@@ -377,7 +428,9 @@ To examine data from memory, use (for example):
x/s stringbuf
....
-for word/halfword/byte access, and hexadecimal/decimal/character/ string display. The number after the comma is the object count. To display the next 0x10 items, simply use:
+for word/halfword/byte access, and hexadecimal/decimal/character/string display.
+The number after the comma is the object count.
+To display the next 0x10 items, simply use:
[source,bash]
....
@@ -401,7 +454,8 @@ To modify memory, use the write command:
w/w 0xf0010030 0 0
....
-The command modifier (`b`/`h`/`w`) specifies the size of the data to be written, the first following expression is the address to write to and the remainder is interpreted as data to write to successive memory locations.
+The command modifier (`b`/`h`/`w`) specifies the size of the data to be written;
+the first expression that follows is the address to write to, and the remainder is interpreted as data to write to successive memory locations.
If you need to know the current registers, use:
@@ -440,7 +494,9 @@ For a man:ps[1] style summary of all running processes, use:
ps
....
-Now you have examined why your kernel failed, and you wish to reboot. Remember that, depending on the severity of previous malfunctioning, not all parts of the kernel might still be working as expected. Perform one of the following actions to shut down and reboot your system:
+Now you have examined why your kernel failed, and you wish to reboot.
+Remember that, depending on the severity of the previous malfunction, not all parts of the kernel might still be working as expected.
+Perform one of the following actions to shut down and reboot your system:
[source,bash]
....
@@ -454,7 +510,8 @@ This will cause your kernel to dump core and reboot, so you can later analyze th
call boot(0)
....
-Might be a good way to cleanly shut down the running system, `sync()` all disks, and finally, in some cases, reboot. As long as the disk and filesystem interfaces of the kernel are not damaged, this could be a good way for an almost clean shutdown.
+This might be a good way to cleanly shut down the running system, `sync()` all disks, and finally, in some cases, reboot.
+As long as the disk and filesystem interfaces of the kernel are not damaged, this could be a good way for an almost clean shutdown.
[source,bash]
....
@@ -470,16 +527,28 @@ If you need a short command summary, simply type:
help
....
-It is highly recommended to have a printed copy of the man:ddb[4] manual page ready for a debugging session. Remember that it is hard to read the on-line manual while single-stepping the kernel.
+It is highly recommended to have a printed copy of the man:ddb[4] manual page ready for a debugging session.
+Remember that it is hard to read the on-line manual while single-stepping the kernel.
[[kerneldebug-online-gdb]]
== On-Line Kernel Debugging Using Remote GDB
This feature has been supported since FreeBSD 2.2, and it is actually a very neat one.
-GDB has already supported _remote debugging_ for a long time. This is done using a very simple protocol along a serial line. Unlike the other methods described above, you will need two machines for doing this. One is the host providing the debugging environment, including all the sources, and a copy of the kernel binary with all the symbols in it, and the other one is the target machine that simply runs a similar copy of the very same kernel (but stripped of the debugging information).
+GDB has already supported _remote debugging_ for a long time.
+This is done using a very simple protocol along a serial line.
+Unlike the other methods described above, you will need two machines for doing this.
+One is the host providing the debugging environment, including all the sources, and a copy of the kernel binary with all the symbols in it,
+and the other one is the target machine that simply runs a similar copy of the very same kernel (but stripped of the debugging information).
-You should configure the kernel in question with `config -g` if building the "traditional" way. If building the "new" way, make sure that `makeoptions DEBUG=-g` is in the configuration. In both cases, include `DDB` in the configuration, and compile it as usual. This gives a large binary, due to the debugging information. Copy this kernel to the target machine, strip the debugging symbols off with `strip -x`, and boot it using the `-d` boot option. Connect the serial line of the target machine that has "flags 080" set on its uart device to any serial line of the debugging host. See man:uart[4] for information on how to set the flags on an uart device. Now, on the debugging machine, go to the compile directory of the target kernel, and start `gdb`:
+You should configure the kernel in question with `config -g` if building the "traditional" way.
+If building the "new" way, make sure that `makeoptions DEBUG=-g` is in the configuration.
+In both cases, include `DDB` in the configuration, and compile it as usual.
+This gives a large binary, due to the debugging information.
+Copy this kernel to the target machine, strip the debugging symbols off with `strip -x`, and boot it using the `-d` boot option.
+Connect the serial line of the target machine that has "flags 080" set on its uart device to any serial line of the debugging host.
+See man:uart[4] for information on how to set the flags on an uart device.
+Now, on the debugging machine, go to the compile directory of the target kernel, and start `gdb`:
[source,bash]
....
@@ -515,7 +584,9 @@ DDB will respond with:
Next trap will enter GDB remote protocol mode
....
-Every time you type `gdb`, the mode will be toggled between remote GDB and local DDB. In order to force a next trap immediately, simply type `s` (step). Your hosting GDB will now gain control over the target kernel:
+Every time you type `gdb`, the mode will be toggled between remote GDB and local DDB.
+In order to force the next trap immediately, simply type `s` (step).
+Your hosting GDB will now gain control over the target kernel:
[source,bash]
....
@@ -525,19 +596,27 @@ Debugger (msg=0xf01b0383 "Boot flags requested debugger")
(kgdb)
....
-You can use this session almost as any other GDB session, including full access to the source, running it in gud-mode inside an Emacs window (which gives you an automatic source code display in another Emacs window), etc.
+You can use this session almost as any other GDB session, including full access to the source,
+running it in gud-mode inside an Emacs window (which gives you an automatic source code display in another Emacs window), etc.
[[kerneldebug-console]]
== Debugging a Console Driver
-Since you need a console driver to run DDB on, things are more complicated if the console driver itself is failing. You might remember the use of a serial console (either with modified boot blocks, or by specifying `-h` at the `Boot:` prompt), and hook up a standard terminal onto your first serial port. DDB works on any configured console driver, including a serial console.
+Since you need a console driver to run DDB on, things are more complicated if the console driver itself is failing.
+You might remember the use of a serial console (either with modified boot blocks, or by specifying `-h` at the `Boot:` prompt),
+and hooking up a standard terminal to your first serial port.
+DDB works on any configured console driver, including a serial console.
[[kerneldebug-deadlocks]]
== Debugging Deadlocks
-You may experience so called deadlocks, a situation where a system stops doing useful work. To provide a helpful bug report in this situation, use man:ddb[4] as described in the previous section. Include the output of `ps` and `trace` for suspected processes in the report.
+You may experience so-called deadlocks, situations in which a system stops doing useful work.
+To provide a helpful bug report in this situation, use man:ddb[4] as described in the previous section.
+Include the output of `ps` and `trace` for suspected processes in the report.
-If possible, consider doing further investigation. The recipe below is especially useful if you suspect that a deadlock occurs in the VFS layer. Add these options to the kernel configuration file.
+If possible, consider doing further investigation.
+The recipe below is especially useful if you suspect that a deadlock occurs in the VFS layer.
+Add these options to the kernel configuration file.
[.programlisting]
....
@@ -558,13 +637,22 @@ To obtain meaningful backtraces for threaded processes, use `thread thread-id` t
[[kerneldebug-dcons]]
== Kernel debugging with Dcons
-man:dcons[4] is a very simple console driver that is not directly connected with any physical devices. It just reads and writes characters from and to a buffer in a kernel or loader. Due to its simple nature, it is very useful for kernel debugging, especially with a FireWire(R) device. Currently, FreeBSD provides two ways to interact with the buffer from outside of the kernel using man:dconschat[8].
+man:dcons[4] is a very simple console driver that is not directly connected with any physical devices.
+It just reads and writes characters from and to a buffer in a kernel or loader.
+Due to its simple nature, it is very useful for kernel debugging, especially with a FireWire(R) device.
+Currently, FreeBSD provides two ways to interact with the buffer from outside of the kernel using man:dconschat[8].
=== Dcons over FireWire(R)
-Most FireWire(R) (IEEE1394) host controllers are based on the OHCI specification that supports physical access to the host memory. This means that once the host controller is initialized, we can access the host memory without the help of software (kernel). We can exploit this facility for interaction with man:dcons[4]. man:dcons[4] provides similar functionality as a serial console. It emulates two serial ports, one for the console and DDB, the other for GDB. Since remote memory access is fully handled by the hardware, the man:dcons[4] buffer is accessible even when the system crashes.
+Most FireWire(R) (IEEE1394) host controllers are based on the OHCI specification that supports physical access to the host memory.
+This means that once the host controller is initialized, we can access the host memory without the help of software (kernel).
+We can exploit this facility for interaction with man:dcons[4].
+man:dcons[4] provides functionality similar to that of a serial console.
+It emulates two serial ports, one for the console and DDB, the other for GDB.
+Since remote memory access is fully handled by the hardware, the man:dcons[4] buffer is accessible even when the system crashes.
-FireWire(R) devices are not limited to those integrated into motherboards. PCI cards exist for desktops, and a cardbus interface can be purchased for laptops.
+FireWire(R) devices are not limited to those integrated into motherboards.
+PCI cards exist for desktops, and a cardbus interface can be purchased for laptops.
==== Enabling FireWire(R) and Dcons support on the target machine
@@ -588,7 +676,8 @@ Add `LOADER_FIREWIRE_SUPPORT=YES` in [.filename]#/etc/make.conf# and rebuild man
To enable man:dcons[4] as an active low-level console, add `boot_multicons="YES"` to [.filename]#/boot/loader.conf#.
-Here are a few configuration examples. A sample kernel configuration file would contain:
+Here are a few configuration examples.
+A sample kernel configuration file would contain:
[source,bash]
....
@@ -689,7 +778,8 @@ LANG=C ddd --debugger kgdb kernel /dev/fwmem0.2
=== Dcons with KVM
-We can directly read the man:dcons[4] buffer via [.filename]#/dev/mem# for live systems, and in the core dump for crashed systems. These give you similar output to `dmesg -a`, but the man:dcons[4] buffer includes more information.
+We can directly read the man:dcons[4] buffer via [.filename]#/dev/mem# for live systems, and in the core dump for crashed systems.
+These give you similar output to `dmesg -a`, but the man:dcons[4] buffer includes more information.
==== Using Dcons with KVM
diff --git a/documentation/content/en/books/developers-handbook/l10n/_index.adoc b/documentation/content/en/books/developers-handbook/l10n/_index.adoc
index 19838d15b2..a8c81ee243 100644
--- a/documentation/content/en/books/developers-handbook/l10n/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/l10n/_index.adoc
@@ -35,59 +35,91 @@ toc::[]
[[l10n-programming]]
== Programming I18N Compliant Applications
-To make your application more useful for speakers of other languages, we hope that you will program I18N compliant. The GNU gcc compiler and GUI libraries like QT and GTK support I18N through special handling of strings. Making a program I18N compliant is very easy. It allows contributors to port your application to other languages quickly. Refer to the library specific I18N documentation for more details.
+To make your application more useful for speakers of other languages, we hope that you will make it I18N compliant.
+The GNU gcc compiler and GUI libraries like QT and GTK support I18N through special handling of strings.
+Making a program I18N compliant is very easy.
+It allows contributors to port your application to other languages quickly.
+Refer to the library-specific I18N documentation for more details.
-In contrast with common perception, I18N compliant code is easy to write. Usually, it only involves wrapping your strings with library specific functions. In addition, please be sure to allow for wide or multibyte character support.
+In contrast with common perception, I18N compliant code is easy to write.
+Usually, it only involves wrapping your strings with library-specific functions.
+In addition, please be sure to allow for wide or multibyte character support.
=== A Call to Unify the I18N Effort
-It has come to our attention that the individual I18N/L10N efforts for each country has been repeating each others' efforts. Many of us have been reinventing the wheel repeatedly and inefficiently. We hope that the various major groups in I18N could congregate into a group effort similar to the Core Team's responsibility.
+It has come to our attention that the individual I18N/L10N efforts for each country have been repeating each other's efforts.
+Many of us have been reinventing the wheel repeatedly and inefficiently.
+We hope that the various major groups in I18N could congregate into a group effort similar to the Core Team's responsibility.
-Currently, we hope that, when you write or port I18N programs, you would send it out to each country's related FreeBSD mailing list for testing. In the future, we hope to create applications that work in all the languages out-of-the-box without dirty hacks.
+Currently, we hope that, when you write or port I18N programs, you will send them out to each country's related FreeBSD mailing list for testing.
+In the future, we hope to create applications that work in all the languages out-of-the-box without dirty hacks.
-The {freebsd-i18n} has been established. If you are an I18N/L10N developer, please send your comments, ideas, questions, and anything you deem related to it.
+The {freebsd-i18n} has been established.
+If you are an I18N/L10N developer, please send your comments, ideas, questions, and anything you deem related to it.
=== Perl and Python
-Perl and Python have I18N and wide character handling libraries. Please use them for I18N compliance.
+Perl and Python have I18N and wide character handling libraries.
+Please use them for I18N compliance.
[[posix-nls]]
== Localized Messages with POSIX.1 Native Language Support (NLS)
-Beyond the basic I18N functions, like supporting various input encodings or supporting national conventions, such as the different decimal separators, at a higher level of I18N, it is possible to localize the messages written to the output by the various programs. A common way of doing this is using the POSIX.1 NLS functions, which are provided as a part of the FreeBSD base system.
+Beyond basic I18N functions, such as supporting various input encodings or national conventions like different decimal separators, it is possible, at a higher level of I18N, to localize the messages written to the output by the various programs.
+A common way of doing this is using the POSIX.1 NLS functions, which are provided as a part of the FreeBSD base system.
[[nls-catalogs]]
=== Organizing Localized Messages into Catalog Files
-POSIX.1 NLS is based on catalog files, which contain the localized messages in the desired encoding. The messages are organized into sets and each message is identified by an integer number in the containing set. The catalog files are conventionally named after the locale they contain localized messages for, followed by the `.msg` extension. For instance, the Hungarian messages for ISO8859-2 encoding should be stored in a file called [.filename]#hu_HU.ISO8859-2#.
+POSIX.1 NLS is based on catalog files, which contain the localized messages in the desired encoding.
+The messages are organized into sets and each message is identified by an integer number in the containing set.
+The catalog files are conventionally named after the locale they contain localized messages for, followed by the `.msg` extension.
+For instance, the Hungarian messages for ISO8859-2 encoding should be stored in a file called [.filename]#hu_HU.ISO8859-2#.
-These catalog files are common text files that contain the numbered messages. It is possible to write comments by starting the line with a `$` sign. Set boundaries are also separated by special comments, where the keyword `set` must directly follow the `$` sign. The `set` keyword is then followed by the set number. For example:
+These catalog files are common text files that contain the numbered messages.
+It is possible to write comments by starting the line with a `$` sign.
+Set boundaries are also separated by special comments, where the keyword `set` must directly follow the `$` sign.
+The `set` keyword is then followed by the set number.
+For example:
[.programlisting]
....
$set 1
....
-The actual message entries start with the message number and followed by the localized message. The well-known modifiers from man:printf[3] are accepted:
+The actual message entries start with the message number and are followed by the localized message.
+The well-known modifiers from man:printf[3] are accepted:
[.programlisting]
....
15 "File not found: %s\n"
....
-The language catalog files have to be compiled into a binary form before they can be opened from the program. This conversion is done with the man:gencat[1] utility. Its first argument is the filename of the compiled catalog and its further arguments are the input catalogs. The localized messages can also be organized into more catalog files and then all of them can be processed with man:gencat[1].
+The language catalog files have to be compiled into a binary form before they can be opened from the program.
+This conversion is done with the man:gencat[1] utility.
+Its first argument is the filename of the compiled catalog and its further arguments are the input catalogs.
+The localized messages can also be organized into more catalog files and then all of them can be processed with man:gencat[1].
[[nls-using]]
=== Using the Catalog Files from the Source Code
-Using the catalog files is simple. To use the related functions, [.filename]#nl_types.h# must be included. Before using a catalog, it has to be opened with man:catopen[3]. The function takes two arguments. The first parameter is the name of the installed and compiled catalog. Usually, the name of the program is used, such as grep. This name will be used when looking for the compiled catalog file. The man:catopen[3] call looks for this file in [.filename]#/usr/share/nls/locale/catname# and in [.filename]#/usr/local/share/nls/locale/catname#, where `locale` is the locale set and `catname` is the catalog name being discussed. The second parameter is a constant, which can have two values:
+Using the catalog files is simple.
+To use the related functions, [.filename]#nl_types.h# must be included.
+Before using a catalog, it has to be opened with man:catopen[3].
+The function takes two arguments.
+The first parameter is the name of the installed and compiled catalog.
+Usually, the name of the program is used, such as `grep`.
+This name will be used when looking for the compiled catalog file.
+The man:catopen[3] call looks for this file in [.filename]#/usr/share/nls/locale/catname# and in [.filename]#/usr/local/share/nls/locale/catname#, where `locale` is the locale set and `catname` is the catalog name being discussed.
+The second parameter is a constant, which can have two values:
* `NL_CAT_LOCALE`, which means that the used catalog file will be based on `LC_MESSAGES`.
* `0`, which means that `LANG` has to be used to open the proper catalog.
-The man:catopen[3] call returns a catalog identifier of type `nl_catd`. Please refer to the manual page for a list of possible returned error codes.
+The man:catopen[3] call returns a catalog identifier of type `nl_catd`.
+Please refer to the manual page for a list of possible returned error codes.
-After opening a catalog man:catgets[3] can be used to retrieve a message. The first parameter is the catalog identifier returned by man:catopen[3], the second one is the number of the set, the third one is the number of the messages, and the fourth one is a fallback message, which will be returned if the requested message cannot be retrieved from the catalog file.
+After opening a catalog man:catgets[3] can be used to retrieve a message.
+The first parameter is the catalog identifier returned by man:catopen[3], the second one is the number of the set, the third one is the number of the message, and the fourth one is a fallback message, which will be returned if the requested message cannot be retrieved from the catalog file.
After using the catalog file, it must be closed by calling man:catclose[3], which has one argument, the catalog id.
@@ -153,7 +185,8 @@ printf(getstr(1));
==== Reducing Strings to Localize
-There is a good way of reducing the strings that need to be localized by using libc error messages. This is also useful to just avoid duplication and provide consistent error messages for the common errors that can be encountered by a great many of programs.
+There is a good way of reducing the strings that need to be localized by using libc error messages.
+This is also useful to avoid duplication and to provide consistent error messages for the common errors that can be encountered by a great many programs.
First, here is an example that does not use libc error messages:
@@ -178,7 +211,9 @@ if (!S_ISDIR(st.st_mode)) {
}
....
-In this example, the custom string is eliminated, thus translators will have less work when localizing the program and users will see the usual "Not a directory" error message when they encounter this error. This message will probably seem more familiar to them. Please note that it was necessary to include [.filename]#errno.h# in order to directly access `errno`.
+In this example, the custom string is eliminated, thus translators will have less work when localizing the program and users will see the usual "Not a directory" error message when they encounter this error.
+This message will probably seem more familiar to them.
+Please note that it was necessary to include [.filename]#errno.h# in order to directly access `errno`.
It is worth noting that there are cases when `errno` is set automatically by a preceding call, so it is not necessary to set it explicitly:
@@ -193,9 +228,13 @@ if ((p = malloc(size)) == NULL)
[[nls-mk]]
=== Making use of [.filename]#bsd.nls.mk#
-Using the catalog files requires few repeatable steps, such as compiling the catalogs and installing them to the proper location. In order to simplify this process even more, [.filename]#bsd.nls.mk# introduces some macros. It is not necessary to include [.filename]#bsd.nls.mk# explicitly, it is pulled in from the common Makefiles, such as [.filename]#bsd.prog.mk# or [.filename]#bsd.lib.mk#.
+Using the catalog files requires a few repeatable steps, such as compiling the catalogs and installing them to the proper location.
+In order to simplify this process even more, [.filename]#bsd.nls.mk# introduces some macros.
+It is not necessary to include [.filename]#bsd.nls.mk# explicitly; it is pulled in from the common Makefiles, such as [.filename]#bsd.prog.mk# or [.filename]#bsd.lib.mk#.
-Usually it is enough to define `NLSNAME`, which should have the catalog name mentioned as the first argument of man:catopen[3] and list the catalog files in `NLS` without their `.msg` extension. Here is an example, which makes it possible to to disable NLS when used with the code examples before. The `WITHOUT_NLS` man:make[1] variable has to be defined in order to build the program without NLS support.
+Usually it is enough to define `NLSNAME`, which should hold the catalog name mentioned as the first argument of man:catopen[3], and to list the catalog files in `NLS` without their `.msg` extension.
+Here is an example, which makes it possible to disable NLS when used with the earlier code examples.
+The `WITHOUT_NLS` man:make[1] variable has to be defined in order to build the program without NLS support.
[.programlisting]
....
@@ -208,4 +247,9 @@ CFLAGS+= -DWITHOUT_NLS
.endif
....
-Conventionally, the catalog files are placed under the [.filename]#nls# subdirectory and this is the default behavior of [.filename]#bsd.nls.mk#. It is possible, though to override the location of the catalogs with the `NLSSRCDIR` man:make[1] variable. The default name of the precompiled catalog files also follow the naming convention mentioned before. It can be overridden by setting the `NLSNAME` variable. There are other options to fine tune the processing of the catalog files but usually it is not needed, thus they are not described here. For further information on [.filename]#bsd.nls.mk#, please refer to the file itself, it is short and easy to understand.
+Conventionally, the catalog files are placed under the [.filename]#nls# subdirectory and this is the default behavior of [.filename]#bsd.nls.mk#.
+It is possible, though, to override the location of the catalogs with the `NLSSRCDIR` man:make[1] variable.
+The default name of the precompiled catalog files also follows the naming convention mentioned before.
+It can be overridden by setting the `NLSNAME` variable.
+There are other options for fine-tuning the processing of the catalog files, but they are usually not needed, so they are not described here.
+For further information on [.filename]#bsd.nls.mk#, please refer to the file itself; it is short and easy to understand.
diff --git a/documentation/content/en/books/developers-handbook/parti.adoc b/documentation/content/en/books/developers-handbook/parti.adoc
index 28dc649a3e..547f228dd5 100644
--- a/documentation/content/en/books/developers-handbook/parti.adoc
+++ b/documentation/content/en/books/developers-handbook/parti.adoc
@@ -1,6 +1,6 @@
---
title: Part I. Basics
-prev: books/developers-handbook/preface
+prev: books/developers-handbook/
next: books/developers-handbook/introduction
---
diff --git a/documentation/content/en/books/developers-handbook/policies/_index.adoc b/documentation/content/en/books/developers-handbook/policies/_index.adoc
index 317c660e8a..f6d3a44cba 100644
--- a/documentation/content/en/books/developers-handbook/policies/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/policies/_index.adoc
@@ -39,12 +39,14 @@ This chapter documents various guidelines and policies in force for the FreeBSD
[[policies-style]]
== Style Guidelines
-Consistent coding style is extremely important, particularly with large projects like FreeBSD. Code should follow the FreeBSD coding styles described in man:style[9] and man:style.Makefile[5].
+Consistent coding style is extremely important, particularly with large projects like FreeBSD.
+Code should follow the FreeBSD coding styles described in man:style[9] and man:style.Makefile[5].
[[policies-maintainer]]
== `MAINTAINER` on Makefiles
-If a particular portion of the FreeBSD [.filename]#src/# distribution is being maintained by a person or group of persons, this is communicated through an entry in [.filename]#src/MAINTAINERS#. Maintainers of ports within the Ports Collection express their maintainership to the world by adding a `MAINTAINER` line to the [.filename]#Makefile# of the port in question:
+If a particular portion of the FreeBSD [.filename]#src/# distribution is being maintained by a person or group of persons, this is communicated through an entry in [.filename]#src/MAINTAINERS#.
+Maintainers of ports within the Ports Collection express their maintainership to the world by adding a `MAINTAINER` line to the [.filename]#Makefile# of the port in question:
[.programlisting]
....
@@ -53,8 +55,9 @@ MAINTAINER= email-addresses
[TIP]
====
-
-For other parts of the repository, or for sections not listed as having a maintainer, or when you are unsure who the active maintainer is, try looking at the recent commit history of the relevant parts of the source tree. It is quite often the case that a maintainer is not explicitly named, but the people who are actively working in a part of the source tree for, say, the last couple of years are interested in reviewing changes. Even if this is not specifically mentioned in the documentation or the source itself, asking for a review as a form of courtesy is a very reasonable thing to do.
+For other parts of the repository, or for sections not listed as having a maintainer, or when you are unsure who the active maintainer is, try looking at the recent commit history of the relevant parts of the source tree.
+It is quite often the case that a maintainer is not explicitly named, but the people who are actively working in a part of the source tree for, say, the last couple of years are interested in reviewing changes.
+Even if this is not specifically mentioned in the documentation or the source itself, asking for a review as a form of courtesy is a very reasonable thing to do.
====
The role of the maintainer is as follows:
@@ -66,13 +69,20 @@ The role of the maintainer is as follows:
[[policies-contributed]]
== Contributed Software
-Some parts of the FreeBSD distribution consist of software that is actively being maintained outside the FreeBSD project. For historical reasons, we call this _contributed_ software. Some examples are sendmail, gcc and patch.
+Some parts of the FreeBSD distribution consist of software that is actively being maintained outside the FreeBSD project.
+For historical reasons, we call this _contributed_ software.
+Some examples are sendmail, gcc and patch.
-Over the last couple of years, various methods have been used in dealing with this type of software and all have some number of advantages and drawbacks. No clear winner has emerged.
+Over the last couple of years, various methods have been used in dealing with this type of software and all have some number of advantages and drawbacks.
+No clear winner has emerged.
-Since this is the case, after some debate one of these methods has been selected as the "official" method and will be required for future imports of software of this kind. Furthermore, it is strongly suggested that existing contributed software converge on this model over time, as it has significant advantages over the old method, including the ability to easily obtain diffs relative to the "official" versions of the source by everyone (even without direct repository access). This will make it significantly easier to return changes to the primary developers of the contributed software.
+Since this is the case, after some debate one of these methods has been selected as the "official" method and will be required for future imports of software of this kind.
+Furthermore, it is strongly suggested that existing contributed software converge on this model over time, as it has significant advantages over the old method, including the ability to easily obtain diffs relative to the "official" versions of the source by everyone (even without direct repository access).
+This will make it significantly easier to return changes to the primary developers of the contributed software.
-Ultimately, however, it comes down to the people actually doing the work. If using this model is particularly unsuited to the package being dealt with, exceptions to these rules may be granted only with the approval of the core team and with the general consensus of the other developers. The ability to maintain the package in the future will be a key issue in the decisions.
+Ultimately, however, it comes down to the people actually doing the work.
+If using this model is particularly unsuited to the package being dealt with, exceptions to these rules may be granted only with the approval of the core team and with the general consensus of the other developers.
+The ability to maintain the package in the future will be a key issue in the decisions.
[NOTE]
====
@@ -87,9 +97,12 @@ This section describes the vendor import procedure with Subversion in details.
[.procedure]
. *Preparing the Tree*
+
-If this is your first import after the switch to SVN, you will have to flatten and clean up the vendor tree, and bootstrap merge history in the main tree. If not, you can safely omit this step.
+If this is your first import after the switch to SVN, you will have to flatten and clean up the vendor tree, and bootstrap merge history in the main tree.
+If not, you can safely omit this step.
+
-During the conversion from CVS to SVN, vendor branches were imported with the same layout as the main tree. For example, the foo vendor sources ended up in [.filename]#vendor/foo/dist/contrib/foo#, but it is pointless and rather inconvenient. What we really want is to have the vendor source directly in [.filename]#vendor/foo/dist#, like this:
+During the conversion from CVS to SVN, vendor branches were imported with the same layout as the main tree.
+For example, the foo vendor sources ended up in [.filename]#vendor/foo/dist/contrib/foo#, but this layout is pointless and rather inconvenient.
+What we really want is to have the vendor source directly in [.filename]#vendor/foo/dist#, like this:
+
[source,bash]
....
@@ -101,14 +114,19 @@ During the conversion from CVS to SVN, vendor branches were imported with the sa
% svn commit
....
+
-Note that, the `propdel` bit is necessary because starting with 1.5, Subversion will automatically add `svn:mergeinfo` to any directory you copy or move. In this case, you will not need this information, since you are not going to merge anything from the tree you deleted.
+Note that the `propdel` bit is necessary because starting with 1.5, Subversion will automatically add `svn:mergeinfo` to any directory you copy or move.
+In this case, you will not need this information, since you are not going to merge anything from the tree you deleted.
+
[NOTE]
====
-You may want to flatten the tags as well. The procedure is exactly the same. If you do this, put off the commit until the end.
+You may want to flatten the tags as well.
+The procedure is exactly the same.
+If you do this, put off the commit until the end.
====
+
-Check the [.filename]#dist# tree and perform any cleanup that is deemed to be necessary. You may want to disable keyword expansion, as it makes no sense on unmodified vendor code. In some cases, it can be even be harmful.
+Check the [.filename]#dist# tree and perform any cleanup that is deemed to be necessary.
+You may want to disable keyword expansion, as it makes no sense on unmodified vendor code.
+In some cases, it can even be harmful.
+
[source,bash]
....
@@ -128,9 +146,12 @@ Bootstrapping of `svn:mergeinfo` on the target directory (in the main tree) to t
With some shells, the `^` in the above command may need to be escaped with a backslash.
. *Importing New Sources*
+
-Prepare a full, clean tree of the vendor sources. With SVN, we can keep a full distribution in the vendor tree without bloating the main tree. Import everything but merge only what is needed.
+Prepare a full, clean tree of the vendor sources.
+With SVN, we can keep a full distribution in the vendor tree without bloating the main tree.
+Import everything but merge only what is needed.
+
-Note that you will need to add any files that were added since the last vendor import, and remove any that were removed. To facilitate this, you should prepare sorted lists of the contents of the vendor tree and of the sources you are about to import:
+Note that you will need to add any files that were added since the last vendor import, and remove any that were removed.
+To facilitate this, you should prepare sorted lists of the contents of the vendor tree and of the sources you are about to import:
+
[source,bash]
....
@@ -167,8 +188,9 @@ Let us put this together:
+
[WARNING]
====
-
-If there are new directories in the new distribution, the last command will fail. You will have to add the directories, and run it again. Conversely, if any directories were removed, you will have to remove them manually.
+If there are new directories in the new distribution, the last command will fail.
+You will have to add the directories, and run it again.
+Conversely, if any directories were removed, you will have to remove them manually.
====
+
Check properties on any new files:
@@ -183,7 +205,8 @@ Check properties on any new files:
You are ready to commit, but you should first check the output of `svn stat` and `svn diff` to make sure everything is in order.
====
+
-Once you have committed the new vendor release, you should tag it for future reference. The best and quickest way is to do it directly in the repository:
+Once you have committed the new vendor release, you should tag it for future reference.
+The best and quickest way is to do it directly in the repository:
+
[source,bash]
....
@@ -199,7 +222,8 @@ If you choose to do the copy in the checkout instead, do not forget to remove th
. *Merging to __-HEAD__*
+
-After you have prepared your import, it is time to merge. Option `--accept=postpone` tells SVN not to handle merge conflicts yet, because they will be taken care of manually:
+After you have prepared your import, it is time to merge.
+Option `--accept=postpone` tells SVN not to handle merge conflicts yet, because they will be taken care of manually:
+
[source,bash]
....
@@ -219,18 +243,27 @@ Resolve any conflicts, and make sure that any files that were added or removed i
+
[NOTE]
====
-With SVN, there is no concept of on or off the vendor branch. If a file that previously had local modifications no longer does, just remove any left-over cruft, such as FreeBSD version tags, so it no longer shows up in diffs against the vendor tree.
+With SVN, there is no concept of on or off the vendor branch.
+If a file that previously had local modifications no longer does, just remove any left-over cruft, such as FreeBSD version tags,
+so it no longer shows up in diffs against the vendor tree.
====
+
If any changes are required for the world to build with the new sources, make them now - and test until you are satisfied that everything builds and runs correctly.
. *Commit*
+
-Now, you are ready to commit. Make sure you get everything in one go. Ideally, you would have done all steps in a clean tree, in which case you can just commit from the top of that tree. That is the best way to avoid surprises. If you do it properly, the tree will move atomically from a consistent state with the old code to a consistent state with the new code.
+Now, you are ready to commit.
+Make sure you get everything in one go.
+Ideally, you would have done all steps in a clean tree, in which case you can just commit from the top of that tree.
+That is the best way to avoid surprises.
+If you do it properly, the tree will move atomically from a consistent state with the old code to a consistent state with the new code.
[[policies-encumbered]]
== Encumbered Files
-It might occasionally be necessary to include an encumbered file in the FreeBSD source tree. For example, if a device requires a small piece of binary code to be loaded to it before the device will operate, and we do not have the source to that code, then the binary file is said to be encumbered. The following policies apply to including encumbered files in the FreeBSD source tree.
+It might occasionally be necessary to include an encumbered file in the FreeBSD source tree.
+For example, if a device requires a small piece of binary code to be loaded to it before the device will operate,
+and we do not have the source to that code, then the binary file is said to be encumbered.
+The following policies apply to including encumbered files in the FreeBSD source tree.
. Any file which is interpreted or executed by the system CPU(s) and not in source format is encumbered.
. Any file with a license more restrictive than BSD or GNU is encumbered.
@@ -251,7 +284,8 @@ It might occasionally be necessary to include an encumbered file in the FreeBSD
[[policies-shlib]]
== Shared Libraries
-If you are adding shared library support to a port or other piece of software that does not have one, the version numbers should follow these rules. Generally, the resulting numbers will have nothing to do with the release version of the software.
+If you are adding shared library support to a port or other piece of software that does not have one, the version numbers should follow these rules.
+Generally, the resulting numbers will have nothing to do with the release version of the software.
The three principles of shared library building are:
@@ -261,13 +295,22 @@ The three principles of shared library building are:
For instance, added functions and bugfixes result in the minor version number being bumped, while deleted functions, changed function call syntax, etc. will force the major version number to change.
-Stick to version numbers of the form major.minor (`_x_._y_`). Our a.out dynamic linker does not handle version numbers of the form `_x_._y_._z_` well. Any version number after the `_y_` (i.e., the third digit) is totally ignored when comparing shared lib version numbers to decide which library to link with. Given two shared libraries that differ only in the "micro" revision, `ld.so` will link with the higher one. That is, if you link with [.filename]#libfoo.so.3.3.3#, the linker only records `3.3` in the headers, and will link with anything starting with `_libfoo.so.3_._(anything >= 3)_._(highest available)_`.
+Stick to version numbers of the form major.minor (`_x_._y_`).
+Our a.out dynamic linker does not handle version numbers of the form `_x_._y_._z_` well.
+Any version number after the `_y_` (i.e., the third digit) is totally ignored when comparing shared lib version numbers to decide which library to link with.
+Given two shared libraries that differ only in the "micro" revision, `ld.so` will link with the higher one.
+That is, if you link with [.filename]#libfoo.so.3.3.3#, the linker only records `3.3` in the headers, and will link with anything starting with `_libfoo.so.3_._(anything >= 3)_._(highest available)_`.
[NOTE]
====
-`ld.so` will always use the highest "minor" revision. For instance, it will use [.filename]#libc.so.2.2# in preference to [.filename]#libc.so.2.0#, even if the program was initially linked with [.filename]#libc.so.2.0#.
+`ld.so` will always use the highest "minor" revision.
+For instance, it will use [.filename]#libc.so.2.2# in preference to [.filename]#libc.so.2.0#, even if the program was initially linked with [.filename]#libc.so.2.0#.
====
-In addition, our ELF dynamic linker does not handle minor version numbers at all. However, one should still specify a major and minor version number as our [.filename]#Makefile#'s "do the right thing" based on the type of system.
+In addition, our ELF dynamic linker does not handle minor version numbers at all.
+However, one should still specify a major and minor version number as our [.filename]#Makefile#s "do the right thing" based on the type of system.
-For non-port libraries, it is also our policy to change the shared library version number only once between releases. In addition, it is our policy to change the major shared library version number only once between major OS releases (i.e., from 6.0 to 7.0). When you make a change to a system library that requires the version number to be bumped, check the [.filename]#Makefile#'s commit logs. It is the responsibility of the committer to ensure that the first such change since the release will result in the shared library version number in the [.filename]#Makefile# to be updated, and any subsequent changes will not.
+For non-port libraries, it is also our policy to change the shared library version number only once between releases.
+In addition, it is our policy to change the major shared library version number only once between major OS releases (i.e., from 6.0 to 7.0).
+When you make a change to a system library that requires the version number to be bumped, check the [.filename]#Makefile#'s commit logs.
+It is the responsibility of the committer to ensure that the first such change since the release results in the shared library version number in the [.filename]#Makefile# being updated, and that any subsequent changes do not.
diff --git a/documentation/content/en/books/developers-handbook/secure/_index.adoc b/documentation/content/en/books/developers-handbook/secure/_index.adoc
index 851f40a714..fac35d948e 100644
--- a/documentation/content/en/books/developers-handbook/secure/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/secure/_index.adoc
@@ -41,20 +41,36 @@ This chapter describes some of the security issues that have plagued UNIX(R) pro
[[secure-philosophy]]
== Secure Design Methodology
-Writing secure applications takes a very scrutinous and pessimistic outlook on life. Applications should be run with the principle of "least privilege" so that no process is ever running with more than the bare minimum access that it needs to accomplish its function. Previously tested code should be reused whenever possible to avoid common mistakes that others may have already fixed.
+Writing secure applications requires a very careful and pessimistic outlook on life.
+Applications should be run with the principle of "least privilege" so that no process is ever running with more than the bare minimum access that it needs to accomplish its function.
+Previously tested code should be reused whenever possible to avoid common mistakes that others may have already fixed.
-One of the pitfalls of the UNIX(R) environment is how easy it is to make assumptions about the sanity of the environment. Applications should never trust user input (in all its forms), system resources, inter-process communication, or the timing of events. UNIX(R) processes do not execute synchronously so logical operations are rarely atomic.
+One of the pitfalls of the UNIX(R) environment is how easy it is to make assumptions about the sanity of the environment.
+Applications should never trust user input (in all its forms), system resources, inter-process communication, or the timing of events.
+UNIX(R) processes do not execute synchronously so logical operations are rarely atomic.
[[secure-bufferov]]
== Buffer Overflows
-Buffer Overflows have been around since the very beginnings of the von Neumann crossref:bibliography[cod,1] architecture. They first gained widespread notoriety in 1988 with the Morris Internet worm. Unfortunately, the same basic attack remains effective today. By far the most common type of buffer overflow attack is based on corrupting the stack.
+Buffer Overflows have been around since the very beginnings of the von Neumann crossref:bibliography[cod,1] architecture.
+They first gained widespread notoriety in 1988 with the Morris Internet worm.
+Unfortunately, the same basic attack remains effective today.
+By far the most common type of buffer overflow attack is based on corrupting the stack.
-Most modern computer systems use a stack to pass arguments to procedures and to store local variables. A stack is a last in first out (LIFO) buffer in the high memory area of a process image. When a program invokes a function a new "stack frame" is created. This stack frame consists of the arguments passed to the function as well as a dynamic amount of local variable space. The "stack pointer" is a register that holds the current location of the top of the stack. Since this value is constantly changing as new values are pushed onto the top of the stack, many implementations also provide a "frame pointer" that is located near the beginning of a stack frame so that local variables can more easily be addressed relative to this value. crossref:bibliography[cod,1] The return address for function calls is also stored on the stack, and this is the cause of stack-overflow exploits since overflowing a local variable in a function can overwrite the return address of that function, potentially allowing a malicious user to execute any code he or she wants.
+Most modern computer systems use a stack to pass arguments to procedures and to store local variables.
+A stack is a last in first out (LIFO) buffer in the high memory area of a process image.
+When a program invokes a function a new "stack frame" is created.
+This stack frame consists of the arguments passed to the function as well as a dynamic amount of local variable space.
+The "stack pointer" is a register that holds the current location of the top of the stack.
+Since this value is constantly changing as new values are pushed onto the top of the stack,
+many implementations also provide a "frame pointer" that is located near the beginning of a stack frame so that local variables can more easily be addressed relative to this value.
+crossref:bibliography[cod,1] The return address for function calls is also stored on the stack, and this is the cause of stack-overflow exploits since overflowing a local variable in a function can overwrite the return address of that function, potentially allowing a malicious user to execute any code he or she wants.
-Although stack-based attacks are by far the most common, it would also be possible to overrun the stack with a heap-based (malloc/free) attack.
+Although stack-based attacks are by far the most common,
+it would also be possible to overrun the stack with a heap-based (malloc/free) attack.
-The C programming language does not perform automatic bounds checking on arrays or pointers as many other languages do. In addition, the standard C library is filled with a handful of very dangerous functions.
+The C programming language does not perform automatic bounds checking on arrays or pointers as many other languages do.
+In addition, the standard C library is filled with a handful of very dangerous functions.
[.informaltable]
[cols="1,1", frame="none"]
@@ -131,51 +147,82 @@ Obviously more malicious input can be devised to execute actual compiled instruc
=== Avoiding Buffer Overflows
-The most straightforward solution to the problem of stack-overflows is to always use length restricted memory and string copy functions. `strncpy` and `strncat` are part of the standard C library. These functions accept a length value as a parameter which should be no larger than the size of the destination buffer. These functions will then copy up to 'length' bytes from the source to the destination. However there are a number of problems with these functions. Neither function guarantees NUL termination if the size of the input buffer is as large as the destination. The length parameter is also used inconsistently between strncpy and strncat so it is easy for programmers to get confused as to their proper usage. There is also a significant performance loss compared to `strcpy` when copying a short string into a large buffer since `strncpy` NUL fills up the size specified.
+The most straightforward solution to the problem of stack-overflows is to always use length restricted memory and string copy functions.
+`strncpy` and `strncat` are part of the standard C library.
+These functions accept a length value as a parameter which should be no larger than the size of the destination buffer.
+These functions will then copy up to 'length' bytes from the source to the destination.
+However there are a number of problems with these functions.
+Neither function guarantees NUL termination if the source string is at least as large as the destination buffer.
+The length parameter is also used inconsistently between `strncpy` and `strncat`, so it is easy for programmers to get confused as to their proper usage.
+There is also a significant performance loss compared to `strcpy` when copying a short string into a large buffer since `strncpy` NUL fills up the size specified.
-Another memory copy implementation exists to get around these problems. The `strlcpy` and `strlcat` functions guarantee that they will always null terminate the destination string when given a non-zero length argument.
+Another memory copy implementation exists to get around these problems.
+The `strlcpy` and `strlcat` functions guarantee that they will always null terminate the destination string when given a non-zero length argument.
==== Compiler based run-time bounds checking
-Unfortunately there is still a very large assortment of code in public use which blindly copies memory around without using any of the bounded copy routines we just discussed. Fortunately, there is a way to help prevent such attacks - run-time bounds checking, which is implemented by several C/C++ compilers.
+Unfortunately there is still a very large assortment of code in public use which blindly copies memory around without using any of the bounded copy routines we just discussed.
+Fortunately, there is a way to help prevent such attacks - run-time bounds checking, which is implemented by several C/C++ compilers.
-ProPolice is one such compiler feature, and is integrated into man:gcc[1] versions 4.1 and later. It replaces and extends the earlier StackGuard man:gcc[1] extension.
+ProPolice is one such compiler feature, and is integrated into man:gcc[1] versions 4.1 and later.
+It replaces and extends the earlier StackGuard man:gcc[1] extension.
-ProPolice helps to protect against stack-based buffer overflows and other attacks by laying pseudo-random numbers in key areas of the stack before calling any function. When a function returns, these "canaries" are checked and if they are found to have been changed the executable is immediately aborted. Thus any attempt to modify the return address or other variable stored on the stack in an attempt to get malicious code to run is unlikely to succeed, as the attacker would have to also manage to leave the pseudo-random canaries untouched.
+ProPolice helps to protect against stack-based buffer overflows and other attacks by laying pseudo-random numbers in key areas of the stack before calling any function.
+When a function returns, these "canaries" are checked and if they are found to have been changed the executable is immediately aborted.
+Thus any attempt to modify the return address or other variable stored on the stack in an attempt to get malicious code to run is unlikely to succeed, as the attacker would have to also manage to leave the pseudo-random canaries untouched.
Recompiling your application with ProPolice is an effective means of stopping most buffer-overflow attacks, but it can still be compromised.
==== Library based run-time bounds checking
-Compiler-based mechanisms are completely useless for binary-only software for which you cannot recompile. For these situations there are a number of libraries which re-implement the unsafe functions of the C-library (`strcpy`, `fscanf`, `getwd`, etc..) and ensure that these functions can never write past the stack pointer.
+Compiler-based mechanisms are completely useless for binary-only software for which you cannot recompile.
+For these situations there are a number of libraries which re-implement the unsafe functions of the C library (`strcpy`, `fscanf`, `getwd`, etc.) and ensure that these functions can never write past the stack pointer.
* libsafe
* libverify
* libparanoia
-Unfortunately these library-based defenses have a number of shortcomings. These libraries only protect against a very small set of security related issues and they neglect to fix the actual problem. These defenses may fail if the application was compiled with -fomit-frame-pointer. Also, the LD_PRELOAD and LD_LIBRARY_PATH environment variables can be overwritten/unset by the user.
+Unfortunately these library-based defenses have a number of shortcomings.
+These libraries only protect against a very small set of security related issues and they neglect to fix the actual problem.
+These defenses may fail if the application was compiled with `-fomit-frame-pointer`.
+Also, the `LD_PRELOAD` and `LD_LIBRARY_PATH` environment variables can be overwritten/unset by the user.
[[secure-setuid]]
== SetUID issues
-There are at least 6 different IDs associated with any given process, and you must therefore be very careful with the access that your process has at any given time. In particular, all seteuid applications should give up their privileges as soon as it is no longer required.
+There are at least 6 different IDs associated with any given process, and you must therefore be very careful with the access that your process has at any given time.
+In particular, all seteuid applications should give up their privileges as soon as they are no longer required.
-The real user ID can only be changed by a superuser process. The login program sets this when a user initially logs in and it is seldom changed.
+The real user ID can only be changed by a superuser process.
+The login program sets this when a user initially logs in and it is seldom changed.
-The effective user ID is set by the `exec()` functions if a program has its seteuid bit set. An application can call `seteuid()` at any time to set the effective user ID to either the real user ID or the saved set-user-ID. When the effective user ID is set by `exec()` functions, the previous value is saved in the saved set-user-ID.
+The effective user ID is set by the `exec()` functions if a program has its seteuid bit set.
+An application can call `seteuid()` at any time to set the effective user ID to either the real user ID or the saved set-user-ID.
+When the effective user ID is set by `exec()` functions, the previous value is saved in the saved set-user-ID.
[[secure-chroot]]
== Limiting your program's environment
-The traditional method of restricting a process is with the `chroot()` system call. This system call changes the root directory from which all other paths are referenced for a process and any child processes. For this call to succeed the process must have execute (search) permission on the directory being referenced. The new environment does not actually take effect until you `chdir()` into your new environment. It should also be noted that a process can easily break out of a chroot environment if it has root privilege. This could be accomplished by creating device nodes to read kernel memory, attaching a debugger to a process outside of the man:chroot[8] environment, or in many other creative ways.
+The traditional method of restricting a process is with the `chroot()` system call.
+This system call changes the root directory from which all other paths are referenced for a process and any child processes.
+For this call to succeed the process must have execute (search) permission on the directory being referenced.
+The new environment does not actually take effect until you `chdir()` into your new environment.
+It should also be noted that a process can easily break out of a chroot environment if it has root privilege.
+This could be accomplished by creating device nodes to read kernel memory, attaching a debugger to a process outside of the man:chroot[8] environment, or in many other creative ways.
-The behavior of the `chroot()` system call can be controlled somewhat with the kern.chroot_allow_open_directories `sysctl` variable. When this value is set to 0, `chroot()` will fail with EPERM if there are any directories open. If set to the default value of 1, then `chroot()` will fail with EPERM if there are any directories open and the process is already subject to a `chroot()` call. For any other value, the check for open directories will be bypassed completely.
+The behavior of the `chroot()` system call can be controlled somewhat with the `kern.chroot_allow_open_directories` sysctl variable.
+When this value is set to 0, `chroot()` will fail with EPERM if there are any directories open.
+If set to the default value of 1, then `chroot()` will fail with EPERM if there are any directories open and the process is already subject to a `chroot()` call.
+For any other value, the check for open directories will be bypassed completely.
=== FreeBSD's jail functionality
-The concept of a Jail extends upon the `chroot()` by limiting the powers of the superuser to create a true `virtual server'. Once a prison is set up all network communication must take place through the specified IP address, and the power of "root privilege" in this jail is severely constrained.
+The concept of a Jail builds upon `chroot()` by limiting the powers of the superuser to create a true "virtual server".
+Once a prison is set up all network communication must take place through the specified IP address, and the power of "root privilege" in this jail is severely constrained.
-While in a prison, any tests of superuser power within the kernel using the `suser()` call will fail. However, some calls to `suser()` have been changed to a new interface `suser_xxx()`. This function is responsible for recognizing or denying access to superuser power for imprisoned processes.
+While in a prison, any tests of superuser power within the kernel using the `suser()` call will fail.
+However, some calls to `suser()` have been changed to a new interface `suser_xxx()`.
+This function is responsible for recognizing or denying access to superuser power for imprisoned processes.
A superuser process within a jailed environment has the power to:
@@ -187,26 +234,40 @@ A superuser process within a jailed environment has the power to:
* Set attributes of a vnode such as file permission, owner, group, size, access time, and modification time.
* Bind to privileged ports in the Internet domain (ports < 1024)
-`Jail` is a very useful tool for running applications in a secure environment but it does have some shortcomings. Currently, the IPC mechanisms have not been converted to the `suser_xxx` so applications such as MySQL cannot be run within a jail. Superuser access may have a very limited meaning within a jail, but there is no way to specify exactly what "very limited" means.
+`Jail` is a very useful tool for running applications in a secure environment but it does have some shortcomings.
+Currently, the IPC mechanisms have not been converted to the `suser_xxx()` interface, so applications such as MySQL cannot be run within a jail.
+Superuser access may have a very limited meaning within a jail, but there is no way to specify exactly what "very limited" means.
=== POSIX(R).1e Process Capabilities
POSIX(R) has released a working draft that adds event auditing, access control lists, fine grained privileges, information labeling, and mandatory access control.
-This is a work in progress and is the focus of the http://www.trustedbsd.org/[TrustedBSD] project. Some of the initial work has been committed to FreeBSD-CURRENT (cap_set_proc(3)).
+This is a work in progress and is the focus of the http://www.trustedbsd.org/[TrustedBSD] project.
+Some of the initial work has been committed to FreeBSD-CURRENT (cap_set_proc(3)).
[[secure-trust]]
== Trust
-An application should never assume that anything about the users environment is sane. This includes (but is certainly not limited to): user input, signals, environment variables, resources, IPC, mmaps, the filesystem working directory, file descriptors, the # of open files, etc.
+An application should never assume that anything about the user's environment is sane.
+This includes (but is certainly not limited to): user input, signals, environment variables, resources, IPC, mmaps, the filesystem working directory, file descriptors, the number of open files, etc.
-You should never assume that you can catch all forms of invalid input that a user might supply. Instead, your application should use positive filtering to only allow a specific subset of inputs that you deem safe. Improper data validation has been the cause of many exploits, especially with CGI scripts on the world wide web. For filenames you need to be extra careful about paths ("../", "/"), symbolic links, and shell escape characters.
+You should never assume that you can catch all forms of invalid input that a user might supply.
+Instead, your application should use positive filtering to only allow a specific subset of inputs that you deem safe.
+Improper data validation has been the cause of many exploits, especially with CGI scripts on the world wide web.
+For filenames you need to be extra careful about paths ("../", "/"), symbolic links, and shell escape characters.
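
As a concrete sketch of positive filtering (function name and the particular safe character set are our assumptions, not Handbook code), accept a filename only if every character belongs to an explicit allow-list, instead of trying to blacklist `../`, `/`, and shell metacharacters:

```c
#include <string.h>

/* Hypothetical positive filter: reject anything not in the safe set,
 * plus empty names and names starting with '.' (covers "." and ".."). */
int filename_ok(const char *name)
{
    const char *safe = "abcdefghijklmnopqrstuvwxyz"
                       "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                       "0123456789._-";

    if (name[0] == '\0' || name[0] == '.')
        return 0;
    /* strspn() counts the leading run of safe characters; the name is
     * acceptable only if that run covers the whole string. */
    return strspn(name, safe) == strlen(name);
}
```

With this filter, `"report-2.txt"` is accepted, while `"../etc/passwd"` and `"a;rm -rf"` are rejected without ever enumerating the dangerous characters.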
-Perl has a really cool feature called "Taint" mode which can be used to prevent scripts from using data derived outside the program in an unsafe way. This mode will check command line arguments, environment variables, locale information, the results of certain syscalls (`readdir()`, `readlink()`, `getpwxxx()`), and all file input.
+Perl has a really cool feature called "Taint" mode which can be used to prevent scripts from using data derived outside the program in an unsafe way.
+This mode will check command line arguments, environment variables, locale information, the results of certain syscalls (`readdir()`, `readlink()`, `getpwxxx()`), and all file input.
[[secure-race-conditions]]
== Race Conditions
-A race condition is anomalous behavior caused by the unexpected dependence on the relative timing of events. In other words, a programmer incorrectly assumed that a particular event would always happen before another.
+A race condition is anomalous behavior caused by the unexpected dependence on the relative timing of events.
+In other words, a programmer incorrectly assumed that a particular event would always happen before another.
-Some of the common causes of race conditions are signals, access checks, and file opens. Signals are asynchronous events by nature so special care must be taken in dealing with them. Checking access with `access(2)` then `open(2)` is clearly non-atomic. Users can move files in between the two calls. Instead, privileged applications should `seteuid()` and then call `open()` directly. Along the same lines, an application should always set a proper umask before `open()` to obviate the need for spurious `chmod()` calls.
+Some of the common causes of race conditions are signals, access checks, and file opens.
+Signals are asynchronous events by nature so special care must be taken in dealing with them.
+Checking access with `access(2)` then `open(2)` is clearly non-atomic.
+Users can move files in between the two calls.
+Instead, privileged applications should `seteuid()` and then call `open()` directly.
+Along the same lines, an application should always set a proper umask before `open()` to obviate the need for spurious `chmod()` calls.
diff --git a/documentation/content/en/books/developers-handbook/sockets/_index.adoc b/documentation/content/en/books/developers-handbook/sockets/_index.adoc
index 338ecaf7c0..77e73f4ce9 100644
--- a/documentation/content/en/books/developers-handbook/sockets/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/sockets/_index.adoc
@@ -37,88 +37,149 @@ toc::[]
[[sockets-synopsis]]
== Synopsis
-BSD sockets take interprocess communications to a new level. It is no longer necessary for the communicating processes to run on the same machine. They still _can_, but they do not have to.
+BSD sockets take interprocess communications to a new level.
+It is no longer necessary for the communicating processes to run on the same machine.
+They still _can_, but they do not have to.
-Not only do these processes not have to run on the same machine, they do not have to run under the same operating system. Thanks to BSD sockets, your FreeBSD software can smoothly cooperate with a program running on a Macintosh(R), another one running on a Sun(TM) workstation, yet another one running under Windows(R) 2000, all connected with an Ethernet-based local area network.
+Not only do these processes not have to run on the same machine, they do not have to run under the same operating system.
+Thanks to BSD sockets, your FreeBSD software can smoothly cooperate with a program running on a Macintosh(R), another one running on a Sun(TM) workstation, yet another one running under Windows(R) 2000, all connected with an Ethernet-based local area network.
But your software can equally well cooperate with processes running in another building, or on another continent, inside a submarine, or a space shuttle.
-It can also cooperate with processes that are not part of a computer (at least not in the strict sense of the word), but of such devices as printers, digital cameras, medical equipment. Just about anything capable of digital communications.
+It can also cooperate with processes that are not part of a computer (at least not in the strict sense of the word), but are part of such devices as printers, digital cameras, and medical equipment.
+Just about anything capable of digital communications.
[[sockets-diversity]]
== Networking and Diversity
-We have already hinted on the _diversity_ of networking. Many different systems have to talk to each other. And they have to speak the same language. They also have to _understand_ the same language the same way.
+We have already hinted at the _diversity_ of networking.
+Many different systems have to talk to each other.
+And they have to speak the same language.
+They also have to _understand_ the same language the same way.
-People often think that _body language_ is universal. But it is not. Back in my early teens, my father took me to Bulgaria. We were sitting at a table in a park in Sofia, when a vendor approached us trying to sell us some roasted almonds.
+People often think that _body language_ is universal.
+But it is not.
+Back in my early teens, my father took me to Bulgaria.
+We were sitting at a table in a park in Sofia, when a vendor approached us trying to sell us some roasted almonds.
-I had not learned much Bulgarian by then, so, instead of saying no, I shook my head from side to side, the "universal" body language for _no_. The vendor quickly started serving us some almonds.
+I had not learned much Bulgarian by then, so, instead of saying no, I shook my head from side to side, the "universal" body language for _no_.
+The vendor quickly started serving us some almonds.
-I then remembered I had been told that in Bulgaria shaking your head sideways meant _yes_. Quickly, I started nodding my head up and down. The vendor noticed, took his almonds, and walked away. To an uninformed observer, I did not change the body language: I continued using the language of shaking and nodding my head. What changed was the _meaning_ of the body language. At first, the vendor and I interpreted the same language as having completely different meaning. I had to adjust my own interpretation of that language so the vendor would understand.
+I then remembered I had been told that in Bulgaria shaking your head sideways meant _yes_.
+Quickly, I started nodding my head up and down.
+The vendor noticed, took his almonds, and walked away.
+To an uninformed observer, I did not change the body language: I continued using the language of shaking and nodding my head.
+What changed was the _meaning_ of the body language.
+At first, the vendor and I interpreted the same language as having completely different meaning.
+I had to adjust my own interpretation of that language so the vendor would understand.
-It is the same with computers: The same symbols may have different, even outright opposite meaning. Therefore, for two computers to understand each other, they must not only agree on the same _language_, but on the same _interpretation_ of the language.
+It is the same with computers: The same symbols may have different, even outright opposite meaning.
+Therefore, for two computers to understand each other, they must not only agree on the same _language_,
+but on the same _interpretation_ of the language.
[[sockets-protocols]]
== Protocols
-While various programming languages tend to have complex syntax and use a number of multi-letter reserved words (which makes them easy for the human programmer to understand), the languages of data communications tend to be very terse. Instead of multi-byte words, they often use individual _bits_. There is a very convincing reason for it: While data travels _inside_ your computer at speeds approaching the speed of light, it often travels considerably slower between two computers.
+While various programming languages tend to have complex syntax and use a number of multi-letter reserved words (which makes them easy for the human programmer to understand),
+the languages of data communications tend to be very terse.
+Instead of multi-byte words, they often use individual _bits_.
+There is a very convincing reason for it: While data travels _inside_ your computer at speeds approaching the speed of light,
+it often travels considerably slower between two computers.
As the languages used in data communications are so terse, we usually refer to them as _protocols_ rather than languages.
-As data travels from one computer to another, it always uses more than one protocol. These protocols are _layered_. The data can be compared to the inside of an onion: You have to peel off several layers of "skin" to get to the data. This is best illustrated with a picture:
+As data travels from one computer to another, it always uses more than one protocol.
+These protocols are _layered_.
+The data can be compared to the inside of an onion: You have to peel off several layers of "skin" to get to the data.
+This is best illustrated with a picture:
.Protocol Layers
image::layers.png[]
In this example, we are trying to get an image from a web page we are connected to via an Ethernet.
-The image consists of raw data, which is simply a sequence of RGB values that our software can process, i.e., convert into an image and display on our monitor.
+The image consists of raw data, which is simply a sequence of RGB values that our software can process, i.e.,
+convert into an image and display on our monitor.
-Alas, our software has no way of knowing how the raw data is organized: Is it a sequence of RGB values, or a sequence of grayscale intensities, or perhaps of CMYK encoded colors? Is the data represented by 8-bit quanta, or are they 16 bits in size, or perhaps 4 bits? How many rows and columns does the image consist of? Should certain pixels be transparent?
+Alas, our software has no way of knowing how the raw data is organized:
+Is it a sequence of RGB values, or a sequence of grayscale intensities, or perhaps of CMYK encoded colors?
+Is the data represented by 8-bit quanta, or are they 16 bits in size, or perhaps 4 bits?
+How many rows and columns does the image consist of?
+Should certain pixels be transparent?
I think you get the picture...
-To inform our software how to handle the raw data, it is encoded as a PNG file. It could be a GIF, or a JPEG, but it is a PNG.
+To inform our software how to handle the raw data, it is encoded as a PNG file.
+It could be a GIF, or a JPEG, but it is a PNG.
And PNG is a protocol.
At this point, I can hear some of you yelling, _"No, it is not! It is a file format!"_
-Well, of course it is a file format. But from the perspective of data communications, a file format is a protocol: The file structure is a _language_, a terse one at that, communicating to our _process_ how the data is organized. Ergo, it is a _protocol_.
+Well, of course it is a file format.
+But from the perspective of data communications, a file format is a protocol:
+The file structure is a _language_, a terse one at that, communicating to our _process_ how the data is organized.
+Ergo, it is a _protocol_.
-Alas, if all we received was the PNG file, our software would be facing a serious problem: How is it supposed to know the data is representing an image, as opposed to some text, or perhaps a sound, or what not? Secondly, how is it supposed to know the image is in the PNG format as opposed to GIF, or JPEG, or some other image format?
+Alas, if all we received was the PNG file, our software would be facing a serious problem:
+How is it supposed to know the data represents an image, as opposed to some text, or perhaps a sound, or what not?
+Secondly, how is it supposed to know the image is in the PNG format as opposed to GIF, or JPEG, or some other image format?
-To obtain that information, we are using another protocol: HTTP. This protocol can tell us exactly that the data represents an image, and that it uses the PNG protocol. It can also tell us some other things, but let us stay focused on protocol layers here.
+To obtain that information, we are using another protocol: HTTP.
+This protocol can tell us exactly that the data represents an image, and that it uses the PNG protocol.
+It can also tell us some other things, but let us stay focused on protocol layers here.
-So, now we have some data wrapped in the PNG protocol, wrapped in the HTTP protocol. How did we get it from the server?
+So, now we have some data wrapped in the PNG protocol, wrapped in the HTTP protocol.
+How did we get it from the server?
-By using TCP/IP over Ethernet, that is how. Indeed, that is three more protocols. Instead of continuing inside out, I am now going to talk about Ethernet, simply because it is easier to explain the rest that way.
+By using TCP/IP over Ethernet, that is how.
+Indeed, that is three more protocols.
+Instead of continuing inside out, I am now going to talk about Ethernet, simply because it is easier to explain the rest that way.
-Ethernet is an interesting system of connecting computers in a _local area network_ (LAN). Each computer has a _network interface card_ (NIC), which has a unique 48-bit ID called its _address_. No two Ethernet NICs in the world have the same address.
+Ethernet is an interesting system of connecting computers in a _local area network_ (LAN).
+Each computer has a _network interface card_ (NIC), which has a unique 48-bit ID called its _address_.
+No two Ethernet NICs in the world have the same address.
-These NICs are all connected with each other. Whenever one computer wants to communicate with another in the same Ethernet LAN, it sends a message over the network. Every NIC sees the message. But as part of the Ethernet _protocol_, the data contains the address of the destination NIC (among other things). So, only one of all the network interface cards will pay attention to it, the rest will ignore it.
+These NICs are all connected with each other.
+Whenever one computer wants to communicate with another in the same Ethernet LAN, it sends a message over the network.
+Every NIC sees the message.
+But as part of the Ethernet _protocol_, the data contains the address of the destination NIC (among other things).
+So, only one of all the network interface cards will pay attention to it, the rest will ignore it.
-But not all computers are connected to the same network. Just because we have received the data over our Ethernet does not mean it originated in our own local area network. It could have come to us from some other network (which may not even be Ethernet based) connected with our own network via the Internet.
+But not all computers are connected to the same network.
+Just because we have received the data over our Ethernet does not mean it originated in our own local area network.
+It could have come to us from some other network (which may not even be Ethernet based) connected with our own network via the Internet.
-All data is transferred over the Internet using IP, which stands for _Internet Protocol_. Its basic role is to let us know where in the world the data has arrived from, and where it is supposed to go to. It does not _guarantee_ we will receive the data, only that we will know where it came from _if_ we do receive it.
+All data is transferred over the Internet using IP, which stands for _Internet Protocol_.
+Its basic role is to let us know where in the world the data has arrived from, and where it is supposed to go to.
+It does not _guarantee_ we will receive the data, only that we will know where it came from _if_ we do receive it.
-Even if we do receive the data, IP does not guarantee we will receive various chunks of data in the same order the other computer has sent it to us. So, we can receive the center of our image before we receive the upper left corner and after the lower right, for example.
+Even if we do receive the data, IP does not guarantee we will receive various chunks of data in the same order the other computer has sent it to us.
+So, we can receive the center of our image before we receive the upper left corner and after the lower right, for example.
It is TCP (_Transmission Control Protocol_) that asks the sender to resend any lost data and that places it all into the proper order.
-All in all, it took _five_ different protocols for one computer to communicate to another what an image looks like. We received the data wrapped into the PNG protocol, which was wrapped into the HTTP protocol, which was wrapped into the TCP protocol, which was wrapped into the IP protocol, which was wrapped into the Ethernet protocol.
+All in all, it took _five_ different protocols for one computer to communicate to another what an image looks like.
+We received the data wrapped into the PNG protocol, which was wrapped into the HTTP protocol, which was wrapped into the TCP protocol, which was wrapped into the IP protocol, which was wrapped into the Ethernet protocol.
-Oh, and by the way, there probably were several other protocols involved somewhere on the way. For example, if our LAN was connected to the Internet through a dial-up call, it used the PPP protocol over the modem which used one (or several) of the various modem protocols, et cetera, et cetera, et cetera...
+Oh, and by the way, there probably were several other protocols involved somewhere on the way.
+For example, if our LAN was connected to the Internet through a dial-up call,
+it used the PPP protocol over the modem which used one (or several) of the various modem protocols, et cetera, et cetera, et cetera...
As a developer you should be asking by now, _"How am I supposed to handle it all?"_
-Luckily for you, you are _not_ supposed to handle it all. You _are_ supposed to handle some of it, but not all of it. Specifically, you need not worry about the physical connection (in our case Ethernet and possibly PPP, etc). Nor do you need to handle the Internet Protocol, or the Transmission Control Protocol.
+Luckily for you, you are _not_ supposed to handle it all.
+You _are_ supposed to handle some of it, but not all of it.
+Specifically, you need not worry about the physical connection (in our case Ethernet and possibly PPP, etc).
+Nor do you need to handle the Internet Protocol, or the Transmission Control Protocol.
-In other words, you do not have to do anything to receive the data from the other computer. Well, you do have to _ask_ for it, but that is almost as simple as opening a file.
+In other words, you do not have to do anything to receive the data from the other computer.
+Well, you do have to _ask_ for it, but that is almost as simple as opening a file.
-Once you have received the data, it is up to you to figure out what to do with it. In our case, you would need to understand the HTTP protocol and the PNG file structure.
+Once you have received the data, it is up to you to figure out what to do with it.
+In our case, you would need to understand the HTTP protocol and the PNG file structure.
-To use an analogy, all the internetworking protocols become a gray area: Not so much because we do not understand how it works, but because we are no longer concerned about it. The sockets interface takes care of this gray area for us:
+To use an analogy, all the internetworking protocols become a gray area:
+Not so much because we do not understand how it works, but because we are no longer concerned about it.
+The sockets interface takes care of this gray area for us:
.Sockets Covered Protocol Layers
image::slayers.png[]
@@ -128,16 +189,21 @@ We only need to understand any protocols that tell us how to _interpret the data
[[sockets-model]]
== The Sockets Model
-BSD sockets are built on the basic UNIX(R) model: _Everything is a file._ In our example, then, sockets would let us receive an _HTTP file_, so to speak. It would then be up to us to extract the _PNG file_ from it.
+BSD sockets are built on the basic UNIX(R) model: _Everything is a file._
+In our example, then, sockets would let us receive an _HTTP file_, so to speak.
+It would then be up to us to extract the _PNG file_ from it.
-Due to the complexity of internetworking, we cannot just use the `open` system call, or the `open()` C function. Instead, we need to take several steps to "opening" a socket.
+Due to the complexity of internetworking, we cannot just use the `open` system call, or the `open()` C function.
+Instead, we need to take several steps to "open" a socket.
-Once we do, however, we can start treating the _socket_ the same way we treat any _file descriptor_: We can `read` from it, `write` to it, `pipe` it, and, eventually, `close` it.
+Once we do, however, we can start treating the _socket_ the same way we treat any _file descriptor_:
+We can `read` from it, `write` to it, `pipe` it, and, eventually, `close` it.
[[sockets-essential-functions]]
== Essential Socket Functions
-While FreeBSD offers different functions to work with sockets, we only _need_ four to "open" a socket. And in some cases we only need two.
+While FreeBSD offers different functions to work with sockets, we only _need_ four to "open" a socket.
+And in some cases we only need two.
[[sockets-client-server]]
=== The Client-Server Difference
@@ -150,39 +216,51 @@ Typically, one of the ends of a socket-based data communication is a _server_, t
[[sockets-socket]]
===== `socket`
-The one function used by both, clients and servers, is man:socket[2]. It is declared this way:
+The one function used by both clients and servers is man:socket[2].
+It is declared this way:
[.programlisting]
....
int socket(int domain, int type, int protocol);
....
-The return value is of the same type as that of `open`, an integer. FreeBSD allocates its value from the same pool as that of file handles. That is what allows sockets to be treated the same way as files.
+The return value is of the same type as that of `open`, an integer.
+FreeBSD allocates its value from the same pool as that of file handles.
+That is what allows sockets to be treated the same way as files.
-The `domain` argument tells the system what _protocol family_ you want it to use. Many of them exist, some are vendor specific, others are very common. They are declared in [.filename]#sys/socket.h#.
+The `domain` argument tells the system what _protocol family_ you want it to use.
+Many of them exist; some are vendor specific, others are very common.
+They are declared in [.filename]#sys/socket.h#.
Use `PF_INET` for UDP, TCP and other Internet protocols (IPv4).
-Five values are defined for the `type` argument, again, in [.filename]#sys/socket.h#. All of them start with "`SOCK_`". The most common one is `SOCK_STREAM`, which tells the system you are asking for a _reliable stream delivery service_ (which is TCP when used with `PF_INET`).
+Five values are defined for the `type` argument, again, in [.filename]#sys/socket.h#.
+All of them start with "`SOCK_`".
+The most common one is `SOCK_STREAM`, which tells the system you are asking for a _reliable stream delivery service_ (which is TCP when used with `PF_INET`).
If you asked for `SOCK_DGRAM`, you would be requesting a _connectionless datagram delivery service_ (in our case, UDP).
If you wanted to be in charge of the low-level protocols (such as IP), or even network interfaces (e.g., the Ethernet), you would need to specify `SOCK_RAW`.
-Finally, the `protocol` argument depends on the previous two arguments, and is not always meaningful. In that case, use `0` for its value.
+Finally, the `protocol` argument depends on the previous two arguments, and is not always meaningful.
+In that case, use `0` for its value.
[NOTE]
.The Unconnected Socket
====
-Nowhere, in the `socket` function have we specified to what other system we should be connected. Our newly created socket remains _unconnected_.
+Nowhere in the `socket` function have we specified to what other system we should be connected.
+Our newly created socket remains _unconnected_.
-This is on purpose: To use a telephone analogy, we have just attached a modem to the phone line. We have neither told the modem to make a call, nor to answer if the phone rings.
+This is on purpose: To use a telephone analogy, we have just attached a modem to the phone line.
+We have neither told the modem to make a call, nor to answer if the phone rings.
====
[[sockets-sockaddr]]
===== `sockaddr`
-Various functions of the sockets family expect the address of (or pointer to, to use C terminology) a small area of the memory. The various C declarations in the [.filename]#sys/socket.h# refer to it as `struct sockaddr`. This structure is declared in the same file:
+Various functions of the sockets family expect the address of (or pointer to, to use C terminology) a small area of the memory.
+The various C declarations in the [.filename]#sys/socket.h# refer to it as `struct sockaddr`.
+This structure is declared in the same file:
[.programlisting]
....
@@ -198,11 +276,14 @@ struct sockaddr {
#define SOCK_MAXADDRLEN 255 /* longest possible addresses */
....
-Please note the _vagueness_ with which the `sa_data` field is declared, just as an array of `14` bytes, with the comment hinting there can be more than `14` of them.
+Please note the _vagueness_ with which the `sa_data` field is declared, just as an array of `14` bytes,
+with the comment hinting there can be more than `14` of them.
-This vagueness is quite deliberate. Sockets is a very powerful interface. While most people perhaps think of it as nothing more than the Internet interface-and most applications probably use it for that nowadays-sockets can be used for just about _any_ kind of interprocess communications, of which the Internet (or, more precisely, IP) is only one.
+This vagueness is quite deliberate.
+Sockets is a very powerful interface.
+While most people perhaps think of it as nothing more than the Internet interface (and most applications probably use it for that nowadays), sockets can be used for just about _any_ kind of interprocess communications, of which the Internet (or, more precisely, IP) is only one.
-The [.filename]#sys/socket.h# refers to the various types of protocols sockets will handle as _address families_, and lists them right before the definition of `sockaddr`:
+The [.filename]#sys/socket.h# refers to the various types of protocols sockets will handle as _address families_,
+and lists them right before the definition of `sockaddr`:
[.programlisting]
....
@@ -254,7 +335,8 @@ The [.filename]#sys/socket.h# refers to the various types of protocols sockets w
#define AF_MAX 37
....
-The one used for IP is AF_INET. It is a symbol for the constant `2`.
+The one used for IP is AF_INET.
+It is a symbol for the constant `2`.
It is the _address family_ listed in the `sa_family` field of `sockaddr` that decides how exactly the vaguely named bytes of `sa_data` will be used.
@@ -281,7 +363,11 @@ image::sain.png[]
The three important fields are `sin_family`, which is byte 1 of the structure, `sin_port`, a 16-bit value found in bytes 2 and 3, and `sin_addr`, a 32-bit integer representation of the IP address, stored in bytes 4-7.
-Now, let us try to fill it out. Let us assume we are trying to write a client for the _daytime_ protocol, which simply states that its server will write a text string representing the current date and time to port 13. We want to use TCP/IP, so we need to specify `AF_INET` in the address family field. `AF_INET` is defined as `2`. Let us use the IP address of `192.43.244.18`, which is the time server of US federal government (`time.nist.gov`).
+Now, let us try to fill it out.
+Let us assume we are trying to write a client for the _daytime_ protocol, which simply states that its server will write a text string representing the current date and time to port 13.
+We want to use TCP/IP, so we need to specify `AF_INET` in the address family field.
+`AF_INET` is defined as `2`.
+Let us use the IP address of `192.43.244.18`, which is the time server of the US federal government (`time.nist.gov`).
.Specific example of sockaddr_in
image::sainfill.png[]
@@ -302,7 +388,9 @@ In addition, `in_addr_t` is a 32-bit integer.
The `192.43.244.18` is just a convenient notation of expressing a 32-bit integer by listing all of its 8-bit bytes, starting with the _most significant_ one.
-So far, we have viewed `sockaddr` as an abstraction. Our computer does not store `short` integers as a single 16-bit entity, but as a sequence of 2 bytes. Similarly, it stores 32-bit integers as a sequence of 4 bytes.
+So far, we have viewed `sockaddr` as an abstraction.
+Our computer does not store `short` integers as a single 16-bit entity, but as a sequence of 2 bytes.
+Similarly, it stores 32-bit integers as a sequence of 4 bytes.
Suppose we coded something like this:
@@ -315,7 +403,8 @@ sa.sin_addr.s_addr = (((((192 << 8) | 43) << 8) | 244) << 8) | 18;
What would the result look like?
-Well, that depends, of course. On a Pentium(R), or other x86, based computer, it would look like this:
+Well, that depends, of course.
+On a Pentium(R) or other x86-based computer, it would look like this:
.sockaddr_in on an Intel system
image::sainlsb.png[]
@@ -325,9 +414,12 @@ On a different system, it might look like this:
.sockaddr_in on an MSB system
image::sainmsb.png[]
-And on a PDP it might look different yet. But the above two are the most common ways in use today.
+And on a PDP it might look different yet.
+But the above two are the most common ways in use today.
-Ordinarily, wanting to write portable code, programmers pretend that these differences do not exist. And they get away with it (except when they code in assembly language). Alas, you cannot get away with it that easily when coding for sockets.
+Ordinarily, wanting to write portable code, programmers pretend that these differences do not exist.
+And they get away with it (except when they code in assembly language).
+Alas, you cannot get away with it that easily when coding for sockets.
Why?
@@ -337,19 +429,26 @@ You might be wondering, _"So, will sockets not handle it for me?"_
It will not.
-While that answer may surprise you at first, remember that the general sockets interface only understands the `sa_len` and `sa_family` fields of the `sockaddr` structure. You do not have to worry about the byte order there (of course, on FreeBSD `sa_family` is only 1 byte anyway, but many other UNIX(R) systems do not have `sa_len` and use 2 bytes for `sa_family`, and expect the data in whatever order is native to the computer).
+While that answer may surprise you at first, remember that the general sockets interface only understands the `sa_len` and `sa_family` fields of the `sockaddr` structure.
+You do not have to worry about the byte order there (of course, on FreeBSD `sa_family` is only 1 byte anyway, but many other UNIX(R) systems do not have `sa_len` and use 2 bytes for `sa_family`, and expect the data in whatever order is native to the computer).
-But the rest of the data is just `sa_data[14]` as far as sockets goes. Depending on the _address family_, sockets just forwards that data to its destination.
+But the rest of the data is just `sa_data[14]` as far as sockets goes.
+Depending on the _address family_, sockets just forwards that data to its destination.
-Indeed, when we enter a port number, it is because we want the other computer to know what service we are asking for. And, when we are the server, we read the port number so we know what service the other computer is expecting from us. Either way, sockets only has to forward the port number as data. It does not interpret it in any way.
+Indeed, when we enter a port number, it is because we want the other computer to know what service we are asking for.
+And, when we are the server, we read the port number so we know what service the other computer is expecting from us.
+Either way, sockets only has to forward the port number as data.
+It does not interpret it in any way.
-Similarly, we enter the IP address to tell everyone on the way where to send our data to. Sockets, again, only forwards it as data.
+Similarly, we enter the IP address to tell everyone on the way where to send our data to.
+Sockets, again, only forwards it as data.
That is why we (the _programmers_, not the _sockets_) have to distinguish between the byte order used by our computer and a conventional byte order used to send the data to the other computer.
We will call the byte order our computer uses the _host byte order_, or just the _host order_.
-There is a convention of sending the multi-byte data over IP _MSB first_. This, we will refer to as the _network byte order_, or simply the _network order_.
+There is a convention of sending the multi-byte data over IP _MSB first_.
+This, we will refer to as the _network byte order_, or simply the _network order_.
Now, if we compiled the above code for an Intel based computer, our _host byte order_ would produce:
@@ -363,7 +462,8 @@ image::sainmsb.png[]
Unfortunately, our _host order_ is the exact opposite of the _network order_.
-We have several ways of dealing with it. One would be to _reverse_ the values in our code:
+We have several ways of dealing with it.
+One would be to _reverse_ the values in our code:
[.programlisting]
....
@@ -372,26 +472,39 @@ sa.sin_port = 13 << 8;
sa.sin_addr.s_addr = (((((18 << 8) | 244) << 8) | 43) << 8) | 192;
....
-This will _trick_ our compiler into storing the data in the _network byte order_. In some cases, this is exactly the way to do it (e.g., when programming in assembly language). In most cases, however, it can cause a problem.
+This will _trick_ our compiler into storing the data in the _network byte order_.
+In some cases, this is exactly the way to do it (e.g., when programming in assembly language).
+In most cases, however, it can cause a problem.
-Suppose, you wrote a sockets-based program in C. You know it is going to run on a Pentium(R), so you enter all your constants in reverse and force them to the _network byte order_. It works well.
+Suppose you wrote a sockets-based program in C.
+You know it is going to run on a Pentium(R), so you enter all your constants in reverse and force them to the _network byte order_.
+It works well.
-Then, some day, your trusted old Pentium(R) becomes a rusty old Pentium(R). You replace it with a system whose _host order_ is the same as the _network order_. You need to recompile all your software. All of your software continues to perform well, except the one program you wrote.
+Then, some day, your trusted old Pentium(R) becomes a rusty old Pentium(R).
+You replace it with a system whose _host order_ is the same as the _network order_.
+You need to recompile all your software.
+All of your software continues to perform well, except the one program you wrote.
-You have since forgotten that you had forced all of your constants to the opposite of the _host order_. You spend some quality time tearing out your hair, calling the names of all gods you ever heard of (and some you made up), hitting your monitor with a nerf bat, and performing all the other traditional ceremonies of trying to figure out why something that has worked so well is suddenly not working at all.
+You have since forgotten that you had forced all of your constants to the opposite of the _host order_.
+You spend some quality time tearing out your hair, calling the names of all gods you ever heard of (and some you made up),
+hitting your monitor with a nerf bat, and performing all the other traditional ceremonies of trying to figure out why something that has worked so well is suddenly not working at all.
Eventually, you figure it out, say a couple of swear words, and start rewriting your code.
-Luckily, you are not the first one to face the problem. Someone else has created the man:htons[3] and man:htonl[3] C functions to convert a `short` and `long` respectively from the _host byte order_ to the _network byte order_, and the man:ntohs[3] and man:ntohl[3] C functions to go the other way.
+Luckily, you are not the first one to face the problem.
+Someone else has created the man:htons[3] and man:htonl[3] C functions to convert a `short` and `long` respectively from the _host byte order_ to the _network byte order_, and the man:ntohs[3] and man:ntohl[3] C functions to go the other way.
-On _MSB-first_ systems these functions do nothing. On _LSB-first_ systems they convert values to the proper order.
+On _MSB-first_ systems these functions do nothing.
+On _LSB-first_ systems they convert values to the proper order.
So, regardless of what system your software is compiled on, your data will end up in the correct order if you use these functions.
[[sockets-client-functions]]
==== Client Functions
-Typically, the client initiates the connection to the server. The client knows which server it is about to call: It knows its IP address, and it knows the _port_ the server resides at. It is akin to you picking up the phone and dialing the number (the _address_), then, after someone answers, asking for the person in charge of wingdings (the _port_).
+Typically, the client initiates the connection to the server.
+The client knows which server it is about to call: It knows its IP address, and it knows the _port_ the server resides at.
+It is akin to you picking up the phone and dialing the number (the _address_), then, after someone answers, asking for the person in charge of wingdings (the _port_).
[[sockets-connect]]
===== `connect`
@@ -403,11 +516,16 @@ Once a client has created a socket, it needs to connect it to a specific port on
int connect(int s, const struct sockaddr *name, socklen_t namelen);
....
-The `s` argument is the socket, i.e., the value returned by the `socket` function. The `name` is a pointer to `sockaddr`, the structure we have talked about extensively. Finally, `namelen` informs the system how many bytes are in our `sockaddr` structure.
+The `s` argument is the socket, i.e., the value returned by the `socket` function.
+The `name` is a pointer to `sockaddr`, the structure we have talked about extensively.
+Finally, `namelen` informs the system how many bytes are in our `sockaddr` structure.
-If `connect` is successful, it returns `0`. Otherwise it returns `-1` and stores the error code in `errno`.
+If `connect` is successful, it returns `0`.
+Otherwise it returns `-1` and stores the error code in `errno`.
-There are many reasons why `connect` may fail. For example, with an attempt to an Internet connection, the IP address may not exist, or it may be down, or just too busy, or it may not have a server listening at the specified port. Or it may outright _refuse_ any request for specific code.
+There are many reasons why `connect` may fail.
+For example, when attempting an Internet connection, the IP address may not exist, the host may be down or too busy, or it may not have a server listening at the specified port.
+Or it may outright _refuse_ the request.
[[sockets-first-client]]
===== Our First Client
@@ -468,12 +586,16 @@ Go ahead, enter it in your editor, save it as [.filename]#daytime.c#, then compi
%
....
-In this case, the date was June 19, 2001, the time was 02:29:25 UTC. Naturally, your results will vary.
+In this case, the date was June 19, 2001, the time was 02:29:25 UTC.
+Naturally, your results will vary.
[[sockets-server-functions]]
==== Server Functions
-The typical server does not initiate the connection. Instead, it waits for a client to call it and request services. It does not know when the client will call, nor how many clients will call. It may be just sitting there, waiting patiently, one moment, The next moment, it can find itself swamped with requests from a number of clients, all calling in at the same time.
+The typical server does not initiate the connection.
+Instead, it waits for a client to call it and request services.
+It does not know when the client will call, nor how many clients will call.
+It may be just sitting there, waiting patiently, one moment.
+The next moment, it can find itself swamped with requests from a number of clients, all calling in at the same time.
The sockets interface offers three basic functions to handle this.
@@ -482,21 +604,27 @@ The sockets interface offers three basic functions to handle this.
Ports are like extensions to a phone line: After you dial a number, you dial the extension to get to a specific person or department.
-There are 65535 IP ports, but a server usually processes requests that come in on only one of them. It is like telling the phone room operator that we are now at work and available to answer the phone at a specific extension. We use man:bind[2] to tell sockets which port we want to serve.
+There are 65535 IP ports, but a server usually processes requests that come in on only one of them.
+It is like telling the phone room operator that we are now at work and available to answer the phone at a specific extension.
+We use man:bind[2] to tell sockets which port we want to serve.
[.programlisting]
....
int bind(int s, const struct sockaddr *addr, socklen_t addrlen);
....
-Beside specifying the port in `addr`, the server may include its IP address. However, it can just use the symbolic constant INADDR_ANY to indicate it will serve all requests to the specified port regardless of what its IP address is. This symbol, along with several similar ones, is declared in [.filename]#netinet/in.h#
+Beside specifying the port in `addr`, the server may include its IP address.
+However, it can just use the symbolic constant INADDR_ANY to indicate it will serve all requests to the specified port regardless of what its IP address is.
+This symbol, along with several similar ones, is declared in [.filename]#netinet/in.h#.
[.programlisting]
....
#define INADDR_ANY (u_int32_t)0x00000000
....
-Suppose we were writing a server for the _daytime_ protocol over TCP/IP. Recall that it uses port 13. Our `sockaddr_in` structure would look like this:
+Suppose we were writing a server for the _daytime_ protocol over TCP/IP.
+Recall that it uses port 13.
+Our `sockaddr_in` structure would look like this:
.Example Server sockaddr_in
image::sainserv.png[]
@@ -504,7 +632,8 @@ image::sainserv.png[]
[[sockets-listen]]
===== `listen`
-To continue our office phone analogy, after you have told the phone central operator what extension you will be at, you now walk into your office, and make sure your own phone is plugged in and the ringer is turned on. Plus, you make sure your call waiting is activated, so you can hear the phone ring even while you are talking to someone.
+To continue our office phone analogy, after you have told the phone central operator what extension you will be at, you now walk into your office, and make sure your own phone is plugged in and the ringer is turned on.
+Plus, you make sure your call waiting is activated, so you can hear the phone ring even while you are talking to someone.
The server ensures all of that with the man:listen[2] function.
@@ -513,12 +642,15 @@ The server ensures all of that with the man:listen[2] function.
int listen(int s, int backlog);
....
-In here, the `backlog` variable tells sockets how many incoming requests to accept while you are busy processing the last request. In other words, it determines the maximum size of the queue of pending connections.
+Here, the `backlog` argument tells sockets how many incoming requests to accept while you are busy processing the last request.
+In other words, it determines the maximum size of the queue of pending connections.
[[sockets-accept]]
===== `accept`
-After you hear the phone ringing, you accept the call by answering the call. You have now established a connection with your client. This connection remains active until either you or your client hang up.
+After you hear the phone ringing, you accept the call by answering it.
+You have now established a connection with your client.
+This connection remains active until either you or your client hang up.
The server accepts the connection by using the man:accept[2] function.
@@ -527,20 +659,27 @@ The server accepts the connection by using the man:accept[2] function.
int accept(int s, struct sockaddr *addr, socklen_t *addrlen);
....
-Note that this time `addrlen` is a pointer. This is necessary because in this case it is the socket that fills out `addr`, the `sockaddr_in` structure.
+Note that this time `addrlen` is a pointer.
+This is necessary because in this case it is the socket that fills out `addr`, the `sockaddr_in` structure.
-The return value is an integer. Indeed, the `accept` returns a _new socket_. You will use this new socket to communicate with the client.
+The return value is an integer.
+Indeed, `accept` returns a _new socket_.
+You will use this new socket to communicate with the client.
What happens to the old socket? It continues to listen for more requests (remember the `backlog` variable we passed to `listen`?) until we `close` it.
-Now, the new socket is meant only for communications. It is fully connected. We cannot pass it to `listen` again, trying to accept additional connections.
+Now, the new socket is meant only for communications.
+It is fully connected.
+We cannot pass it to `listen` again, trying to accept additional connections.
[[sockets-first-server]]
===== Our First Server
-Our first server will be somewhat more complex than our first client was: Not only do we have more sockets functions to use, but we need to write it as a daemon.
+Our first server will be somewhat more complex than our first client was:
+Not only do we have more sockets functions to use, but we need to write it as a daemon.
-This is best achieved by creating a _child process_ after binding the port. The main process then exits and returns control to the shell (or whatever program invoked it).
+This is best achieved by creating a _child process_ after binding the port.
+The main process then exits and returns control to the shell (or whatever program invoked it).
The child calls `listen`, then starts an endless loop, which accepts a connection, serves it, and eventually closes its socket.
@@ -636,7 +775,9 @@ int main() {
}
....
-We start by creating a socket. Then we fill out the `sockaddr_in` structure in `sa`. Note the conditional use of INADDR_ANY:
+We start by creating a socket.
+Then we fill out the `sockaddr_in` structure in `sa`.
+Note the conditional use of INADDR_ANY:
[.programlisting]
....
@@ -644,11 +785,20 @@ if (INADDR_ANY)
sa.sin_addr.s_addr = htonl(INADDR_ANY);
....
-Its value is `0`. Since we have just used `bzero` on the entire structure, it would be redundant to set it to `0` again. But if we port our code to some other system where INADDR_ANY is perhaps not a zero, we need to assign it to `sa.sin_addr.s_addr`. Most modern C compilers are clever enough to notice that INADDR_ANY is a constant. As long as it is a zero, they will optimize the entire conditional statement out of the code.
+Its value is `0`.
+Since we have just used `bzero` on the entire structure, it would be redundant to set it to `0` again.
+But if we port our code to some other system where INADDR_ANY is perhaps not a zero, we need to assign it to `sa.sin_addr.s_addr`.
+Most modern C compilers are clever enough to notice that INADDR_ANY is a constant.
+As long as it is a zero, they will optimize the entire conditional statement out of the code.
-After we have called `bind` successfully, we are ready to become a _daemon_: We use `fork` to create a child process. In both, the parent and the child, the `s` variable is our socket. The parent process will not need it, so it calls `close`, then it returns `0` to inform its own parent it had terminated successfully.
+After we have called `bind` successfully, we are ready to become a _daemon_: We use `fork` to create a child process.
+In both the parent and the child, the `s` variable is our socket.
+The parent process will not need it, so it calls `close`, then it returns `0` to inform its own parent that it has terminated successfully.
-Meanwhile, the child process continues working in the background. It calls `listen` and sets its backlog to `4`. It does not need a large value here because _daytime_ is not a protocol many clients request all the time, and because it can process each request instantly anyway.
+Meanwhile, the child process continues working in the background.
+It calls `listen` and sets its backlog to `4`.
+It does not need a large value here because _daytime_ is not a protocol many clients request all the time, and because it can process each request instantly anyway.
Finally, the daemon starts an endless loop, which performs the following steps:
@@ -662,17 +812,30 @@ We can _generalize_ this, and use it as a model for many other servers:
.Sequential Server
image::serv.png[]
-This flowchart is good for _sequential servers_, i.e., servers that can serve one client at a time, just as we were able to with our _daytime_ server. This is only possible whenever there is no real "conversation" going on between the client and the server: As soon as the server detects a connection to the client, it sends out some data and closes the connection. The entire operation may take nanoseconds, and it is finished.
+This flowchart is good for _sequential servers_, i.e., servers that can serve one client at a time, just as we were able to with our _daytime_ server.
+This is only possible whenever there is no real "conversation" going on between the client and the server:
+As soon as the server detects a connection to the client, it sends out some data and closes the connection.
+The entire operation may take nanoseconds, and it is finished.
-The advantage of this flowchart is that, except for the brief moment after the parent ``fork``s and before it exits, there is always only one _process_ active: Our server does not take up much memory and other system resources.
+The advantage of this flowchart is that, except for the brief moment after the parent ``fork``s and before it exits, there is always only one _process_ active: Our server does not take up much memory and other system resources.
-Note that we have added _initialize daemon_ in our flowchart. We did not need to initialize our own daemon, but this is a good place in the flow of the program to set up any `signal` handlers, open any files we may need, etc.
+Note that we have added _initialize daemon_ in our flowchart.
+We did not need to initialize our own daemon, but this is a good place in the flow of the program to set up any `signal` handlers, open any files we may need, etc.
-Just about everything in the flow chart can be used literally on many different servers. The _serve_ entry is the exception. We think of it as a _"black box"_, i.e., something you design specifically for your own server, and just "plug it into the rest."
+Just about everything in the flow chart can be used literally on many different servers.
+The _serve_ entry is the exception.
+We think of it as a _"black box"_, i.e., something you design specifically for your own server, and just "plug it into the rest."
-Not all protocols are that simple. Many receive a request from the client, reply to it, then receive another request from the same client. As a result, they do not know in advance how long they will be serving the client. Such servers usually start a new process for each client. While the new process is serving its client, the daemon can continue listening for more connections.
+Not all protocols are that simple.
+Many receive a request from the client, reply to it, then receive another request from the same client.
+As a result, they do not know in advance how long they will be serving the client.
+Such servers usually start a new process for each client.
+While the new process is serving its client, the daemon can continue listening for more connections.
-Now, go ahead, save the above source code as [.filename]#daytimed.c# (it is customary to end the names of daemons with the letter `d`). After you have compiled it, try running it:
+Now, go ahead, save the above source code as [.filename]#daytimed.c# (it is customary to end the names of daemons with the letter `d`).
+After you have compiled it, try running it:
[source,bash]
....
@@ -681,7 +844,8 @@ bind: Permission denied
%
....
-What happened here? As you will recall, the _daytime_ protocol uses port 13. But all ports below 1024 are reserved to the superuser (otherwise, anyone could start a daemon pretending to serve a commonly used port, while causing a security breach).
+What happened here? As you will recall, the _daytime_ protocol uses port 13.
+But all ports below 1024 are reserved to the superuser (otherwise, anyone could start a daemon pretending to serve a commonly used port, while causing a security breach).
Try again, this time as the superuser:
@@ -701,9 +865,12 @@ bind: Address already in use
#
....
-Every port can only be bound by one program at a time. Our first attempt was indeed successful: It started the child daemon and returned quietly. It is still running and will continue to run until you either kill it, or any of its system calls fail, or you reboot the system.
+Every port can only be bound by one program at a time.
+Our first attempt was indeed successful: It started the child daemon and returned quietly.
+It is still running and will continue to run until you either kill it, or any of its system calls fail, or you reboot the system.
-Fine, we know it is running in the background. But is it working? How do we know it is a proper _daytime_ server? Simple:
+Fine, we know it is running in the background.
+But is it working? How do we know it is a proper _daytime_ server? Simple:
[source,bash]
....
@@ -719,9 +886,12 @@ Connection closed by foreign host.
%
....
-telnet tried the new IPv6, and failed. It retried with IPv4 and succeeded. The daemon works.
+telnet tried the new IPv6 protocol first, and failed.
+It retried with IPv4 and succeeded.
+The daemon works.
-If you have access to another UNIX(R) system via telnet, you can use it to test accessing the server remotely. My computer does not have a static IP address, so this is what I did:
+If you have access to another UNIX(R) system via telnet, you can use it to test accessing the server remotely.
+My computer does not have a static IP address, so this is what I did:
[source,bash]
....
@@ -753,12 +923,16 @@ Connection closed by foreign host.
%
....
-By the way, telnet prints the _Connection closed by foreign host_ message after our daemon has closed the socket. This shows us that, indeed, using `fclose(client);` in our code works as advertised.
+By the way, telnet prints the _Connection closed by foreign host_ message after our daemon has closed the socket.
+This shows us that, indeed, using `fclose(client);` in our code works as advertised.
[[sockets-helper-functions]]
== Helper Functions
-FreeBSD C library contains many helper functions for sockets programming. For example, in our sample client we hard coded the `time.nist.gov` IP address. But we do not always know the IP address. Even if we do, our software is more flexible if it allows the user to enter the IP address, or even the domain name.
+The FreeBSD C library contains many helper functions for sockets programming.
+For example, in our sample client we hard coded the `time.nist.gov` IP address.
+But we do not always know the IP address.
+Even if we do, our software is more flexible if it allows the user to enter the IP address, or even the domain name.
[[sockets-gethostbyname]]
=== `gethostbyname`
@@ -771,7 +945,8 @@ struct hostent * gethostbyname(const char *name);
struct hostent * gethostbyname2(const char *name, int af);
....
-Both return a pointer to the `hostent` structure, with much information about the domain. For our purposes, the `h_addr_list[0]` field of the structure points at `h_length` bytes of the correct address, already stored in the _network byte order_.
+Both return a pointer to the `hostent` structure, with much information about the domain.
+For our purposes, the `h_addr_list[0]` field of the structure points at `h_length` bytes of the correct address, already stored in the _network byte order_.
This allows us to create a much more flexible-and much more useful-version of our daytime program:
@@ -830,9 +1005,14 @@ int main(int argc, char *argv[]) {
}
....
-We now can type a domain name (or an IP address, it works both ways) on the command line, and the program will try to connect to its _daytime_ server. Otherwise, it will still default to `time.nist.gov`. However, even in this case we will use `gethostbyname` rather than hard coding `192.43.244.18`. That way, even if its IP address changes in the future, we will still find it.
+We now can type a domain name (or an IP address, it works both ways) on the command line, and the program will try to connect to its _daytime_ server.
+Otherwise, it will still default to `time.nist.gov`.
+However, even in this case we will use `gethostbyname` rather than hard coding `192.43.244.18`.
+That way, even if its IP address changes in the future, we will still find it.
-Since it takes virtually no time to get the time from your local server, you could run daytime twice in a row: First to get the time from `time.nist.gov`, the second time from your own system. You can then compare the results and see how exact your system clock is:
+Since it takes virtually no time to get the time from your local server, you could run daytime twice in a row:
+First to get the time from `time.nist.gov`, the second time from your own system.
+You can then compare the results and see how accurate your system clock is:
[source,bash]
....
@@ -848,7 +1028,8 @@ As you can see, my system was two seconds ahead of the NIST time.
[[sockets-getservbyname]]
=== `getservbyname`
-Sometimes you may not be sure what port a certain service uses. The man:getservbyname[3] function, also declared in [.filename]#netdb.h# comes in very handy in those cases:
+Sometimes you may not be sure what port a certain service uses.
+The man:getservbyname[3] function, also declared in [.filename]#netdb.h#, comes in very handy in those cases:
[.programlisting]
....
@@ -870,26 +1051,42 @@ struct servent *se;
sa.sin_port = se->s_port;
....
-You usually do know the port. But if you are developing a new protocol, you may be testing it on an unofficial port. Some day, you will register the protocol and its port (if nowhere else, at least in your [.filename]#/etc/services#, which is where `getservbyname` looks). Instead of returning an error in the above code, you just use the temporary port number. Once you have listed the protocol in [.filename]#/etc/services#, your software will find its port without you having to rewrite the code.
+You usually do know the port.
+But if you are developing a new protocol, you may be testing it on an unofficial port.
+Some day, you will register the protocol and its port (if nowhere else, at least in your [.filename]#/etc/services#, which is where `getservbyname` looks).
+Instead of returning an error in the above code, you just use the temporary port number.
+Once you have listed the protocol in [.filename]#/etc/services#, your software will find its port without you having to rewrite the code.
[[sockets-concurrent-servers]]
== Concurrent Servers
-Unlike a sequential server, a _concurrent server_ has to be able to serve more than one client at a time. For example, a _chat server_ may be serving a specific client for hours-it cannot wait till it stops serving a client before it serves the next one.
+Unlike a sequential server, a _concurrent server_ has to be able to serve more than one client at a time.
+For example, a _chat server_ may be serving a specific client for hours-it cannot wait till it stops serving a client before it serves the next one.
This requires a significant change in our flowchart:
.Concurrent Server
image::serv2.png[]
-We moved the _serve_ from the _daemon process_ to its own _server process_. However, because each child process inherits all open files (and a socket is treated just like a file), the new process inherits not only the _"accepted handle,"_ i.e., the socket returned by the `accept` call, but also the _top socket_, i.e., the one opened by the top process right at the beginning.
+We moved the _serve_ from the _daemon process_ to its own _server process_.
+However, because each child process inherits all open files (and a socket is treated just like a file), the new process inherits not only the _"accepted handle,"_ i.e., the socket returned by the `accept` call, but also the _top socket_, i.e., the one opened by the top process right at the beginning.
-However, the _server process_ does not need this socket and should `close` it immediately. Similarly, the _daemon process_ no longer needs the _accepted socket_, and not only should, but _must_ `close` it-otherwise, it will run out of available _file descriptors_ sooner or later.
+However, the _server process_ does not need this socket and should `close` it immediately.
+Similarly, the _daemon process_ no longer needs the _accepted socket_, and not only should, but _must_ `close` it-otherwise, it will run out of available _file descriptors_ sooner or later.
-After the _server process_ is done serving, it should close the _accepted socket_. Instead of returning to `accept`, it now exits.
+After the _server process_ is done serving, it should close the _accepted socket_.
+Instead of returning to `accept`, it now exits.
-Under UNIX(R), a process does not really _exit_. Instead, it _returns_ to its parent. Typically, a parent process ``wait``s for its child process, and obtains a return value. However, our _daemon process_ cannot simply stop and wait. That would defeat the whole purpose of creating additional processes. But if it never does `wait`, its children will become _zombies_-no longer functional but still roaming around.
+Under UNIX(R), a process does not really _exit_.
+Instead, it _returns_ to its parent.
+Typically, a parent process ``wait``s for its child process, and obtains a return value.
+However, our _daemon process_ cannot simply stop and wait.
+That would defeat the whole purpose of creating additional processes.
+But if it never does `wait`, its children will become _zombies_-no longer functional but still roaming around.
-For that reason, the _daemon process_ needs to set _signal handlers_ in its _initialize daemon_ phase. At least a SIGCHLD signal has to be processed, so the daemon can remove the zombie return values from the system and release the system resources they are taking up.
+For that reason, the _daemon process_ needs to set _signal handlers_ in its _initialize daemon_ phase.
+At least a SIGCHLD signal has to be processed, so the daemon can remove the zombie return values from the system and release the system resources they are taking up.
-That is why our flowchart now contains a _process signals_ box, which is not connected to any other box. By the way, many servers also process SIGHUP, and typically interpret as the signal from the superuser that they should reread their configuration files. This allows us to change settings without having to kill and restart these servers.
+That is why our flowchart now contains a _process signals_ box, which is not connected to any other box.
+By the way, many servers also process SIGHUP, and typically interpret it as the signal from the superuser that they should reread their configuration files.
+This allows us to change settings without having to kill and restart these servers.
diff --git a/documentation/content/en/books/developers-handbook/testing/_index.adoc b/documentation/content/en/books/developers-handbook/testing/_index.adoc
index 06b2bbc620..95c5cab0e8 100644
--- a/documentation/content/en/books/developers-handbook/testing/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/testing/_index.adoc
@@ -64,7 +64,8 @@ If the system must be connected to a public network, watch out for spikes of bro
* Try to keep the temperature as stable as possible around the machine. This affects both quartz crystals and disk drive algorithms. To get real stable clock, consider stabilized clock injection. E.g., get a OCXO + PLL, inject output into clock circuits instead of motherboard xtal. Contact {phk} for more information about this.
* Run the test at least 3 times but it is better to run more than 20 times both for "before" and "after" code. Try to interleave if possible (i.e.: do not run 20 times before then 20 times after), this makes it possible to spot environmental effects. Do not interleave 1:1, but 3:3, this makes it possible to spot interaction effects.
+
-A good pattern is: `bababa{bbbaaa}*`. This gives hint after the first 1+1 runs (so it is possible to stop the test if it goes entirely the wrong way), a standard deviation after the first 3+3 (gives a good indication if it is going to be worth a long run) and trending and interaction numbers later on.
+A good pattern is: `bababa{bbbaaa}*`.
+This gives a hint after the first 1+1 runs (so it is possible to stop the test if it goes entirely the wrong way), a standard deviation after the first 3+3 (a good indication of whether it is going to be worth a long run), and trending and interaction numbers later on.
* Use man:ministat[1] to see if the numbers are significant. Consider buying "Cartoon guide to statistics" ISBN: 0062731025, highly recommended, if you have forgotten or never learned about standard deviation and Student's T.
* Do not use background man:fsck[8] unless the test is a benchmark of background `fsck`. Also, disable `background_fsck` in [.filename]#/etc/rc.conf# unless the benchmark is not started at least 60+"``fsck`` runtime" seconds after the boot, as man:rc[8] wakes up and checks if `fsck` needs to run on any file systems when background `fsck` is enabled. Likewise, make sure there are no snapshots lying around unless the benchmark is a test with snapshots.
* If the benchmark show unexpected bad performance, check for things like high interrupt volume from an unexpected source. Some versions of ACPI have been reported to "misbehave" and generate excess interrupts. To help diagnose odd test results, take a few snapshots of `vmstat -i` and look for anything unusual.
@@ -82,30 +83,39 @@ The source Tinderbox consists of:
* A set of build servers that continually test the tip of the most important FreeBSD code branches.
* A webserver that keeps a complete set of Tinderbox logs and displays an up-to-date summary.
-The scripts are maintained and were developed by {des}, and are now written in Perl, a move on from their original incarnation as shell scripts. All scripts and configuration files are kept in https://www.freebsd.org/cgi/cvsweb.cgi/projects/tinderbox/[/projects/tinderbox/].
+The scripts are maintained and were developed by {des}, and are now written in Perl, a move on from their original incarnation as shell scripts.
+All scripts and configuration files are kept in https://www.freebsd.org/cgi/cvsweb.cgi/projects/tinderbox/[/projects/tinderbox/].
For more information about the tinderbox and tbmaster scripts at this stage, see their respective man pages: tinderbox(1) and tbmaster(1).
== The index.cgi Script
-The [.filename]#index.cgi# script generates the HTML summary of tinderbox and tbmaster logs. Although originally intended to be used as a CGI script, as indicated by its name, this script can also be run from the command line or from a man:cron[8] job, in which case it will look for logs in the directory where the script is located. It will automatically detect context, generating HTTP headers when it is run as a CGI script. It conforms to XHTML standards and is styled using CSS.
+The [.filename]#index.cgi# script generates the HTML summary of tinderbox and tbmaster logs.
+Although originally intended to be used as a CGI script, as indicated by its name, this script can also be run from the command line or from a man:cron[8] job, in which case it will look for logs in the directory where the script is located.
+It will automatically detect context, generating HTTP headers when it is run as a CGI script.
+It conforms to XHTML standards and is styled using CSS.
-The script starts in the `main()` block by attempting to verify that it is running on the official Tinderbox website. If it is not, a page indicating it is not an official website is produced, and a URL to the official site is provided.
+The script starts in the `main()` block by attempting to verify that it is running on the official Tinderbox website.
+If it is not, a page indicating it is not an official website is produced, and a URL to the official site is provided.
-Next, it scans the log directory to get an inventory of configurations, branches and architectures for which log files exist, to avoid hard-coding a list into the script and potentially ending up with blank rows or columns. This information is derived from the names of the log files matching the following pattern:
+Next, it scans the log directory to get an inventory of configurations, branches and architectures for which log files exist, to avoid hard-coding a list into the script and potentially ending up with blank rows or columns.
+This information is derived from the names of the log files matching the following pattern:
[.programlisting]
....
tinderbox-$config-$branch-$arch-$machine.{brief,full}
....
-The configurations used on the official Tinderbox build servers are named for the branches they build. For example, the `releng_8` configuration is used to build `RELENG_8` as well as all still-supported release branches.
+The configurations used on the official Tinderbox build servers are named for the branches they build.
+For example, the `releng_8` configuration is used to build `RELENG_8` as well as all still-supported release branches.
Once all of this startup procedure has been successfully completed, `do_config()` is called for each configuration.
The `do_config()` function generates HTML for a single Tinderbox configuration.
-It works by first generating a header row, then iterating over each branch build with the specified configuration, producing a single row of results for each in the following manner:
+It works by first generating a header row, then iterating over each branch build with the specified configuration, producing a single row of results for each in the following manner:
* For each item:
@@ -126,13 +136,15 @@ It works by first generating a header row, then iterating over each branch build
The `success()` function mentioned above scans a brief log file for the string "tinderbox run completed" in order to determine whether the build was successful.
-Configurations and branches are sorted according to their branch rank. This is computed as follows:
+Configurations and branches are sorted according to their branch rank.
+This is computed as follows:
* `HEAD` and `CURRENT` have rank 9999.
* `RELENG_x` has rank __``xx``__99.
* `RELENG_x_y` has rank _xxyy_.
-This means that `HEAD` always ranks highest, and `RELENG` branches are ranked in numerical order, with each `STABLE` branch ranking higher than the release branches forked off of it. For instance, for FreeBSD 8, the order from highest to lowest would be:
+This means that `HEAD` always ranks highest, and `RELENG` branches are ranked in numerical order, with each `STABLE` branch ranking higher than the release branches forked off of it.
+For instance, for FreeBSD 8, the order from highest to lowest would be:
* `RELENG_8` (branch rank 899).
* `RELENG_8_3` (branch rank 803).
@@ -140,7 +152,9 @@ This means that `HEAD` always ranks highest, and `RELENG` branches are ranked in
* `RELENG_8_1` (branch rank 801).
* `RELENG_8_0` (branch rank 800).
-The colors that Tinderbox uses for each cell in the table are defined by CSS. Successful builds are displayed with green text; unsuccessful builds are displayed with red text. The color fades as time passes since the corresponding build, with every half an hour bringing the color closer to grey.
+The colors that Tinderbox uses for each cell in the table are defined by CSS.
+Successful builds are displayed with green text; unsuccessful builds are displayed with red text.
+The color fades as time passes since the corresponding build, with every half an hour bringing the color closer to grey.
== Official Build Servers
diff --git a/documentation/content/en/books/developers-handbook/tools/_index.adoc b/documentation/content/en/books/developers-handbook/tools/_index.adoc
index f9ff8200f6..d72fb6a590 100644
--- a/documentation/content/en/books/developers-handbook/tools/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/tools/_index.adoc
@@ -39,29 +39,50 @@ toc::[]
[[tools-synopsis]]
== Synopsis
-This chapter is an introduction to using some of the programming tools supplied with FreeBSD, although much of it will be applicable to many other versions of UNIX(R). It does _not_ attempt to describe coding in any detail. Most of the chapter assumes little or no previous programming knowledge, although it is hoped that most programmers will find something of value in it.
+This chapter is an introduction to using some of the programming tools supplied with FreeBSD, although much of it will be applicable to many other versions of UNIX(R).
+It does _not_ attempt to describe coding in any detail.
+Most of the chapter assumes little or no previous programming knowledge, although it is hoped that most programmers will find something of value in it.
[[tools-intro]]
== Introduction
-FreeBSD offers an excellent development environment. Compilers for C and C++ and an assembler come with the basic system, not to mention classic UNIX(R) tools such as `sed` and `awk`. If that is not enough, there are many more compilers and interpreters in the Ports collection. The following section, <<tools-programming,Introduction to Programming>>, lists some of the available options. FreeBSD is very compatible with standards such as POSIX(R) and ANSI C, as well with its own BSD heritage, so it is possible to write applications that will compile and run with little or no modification on a wide range of platforms.
+FreeBSD offers an excellent development environment.
+Compilers for C and C++ and an assembler come with the basic system, not to mention classic UNIX(R) tools such as `sed` and `awk`.
+If that is not enough, there are many more compilers and interpreters in the Ports collection.
+The following section, <<tools-programming,Introduction to Programming>>, lists some of the available options.
+FreeBSD is very compatible with standards such as POSIX(R) and ANSI C, as well as with its own BSD heritage, so it is possible to write applications that will compile and run with little or no modification on a wide range of platforms.
-However, all this power can be rather overwhelming at first if you have never written programs on a UNIX(R) platform before. This document aims to help you get up and running, without getting too deeply into more advanced topics. The intention is that this document should give you enough of the basics to be able to make some sense of the documentation.
+However, all this power can be rather overwhelming at first if you have never written programs on a UNIX(R) platform before.
+This document aims to help you get up and running, without getting too deeply into more advanced topics.
+The intention is that this document should give you enough of the basics to be able to make some sense of the documentation.
Most of the document requires little or no knowledge of programming, although it does assume a basic competence with using UNIX(R) and a willingness to learn!
[[tools-programming]]
== Introduction to Programming
-A program is a set of instructions that tell the computer to do various things; sometimes the instruction it has to perform depends on what happened when it performed a previous instruction. This section gives an overview of the two main ways in which you can give these instructions, or "commands" as they are usually called. One way uses an _interpreter_, the other a _compiler_. As human languages are too difficult for a computer to understand in an unambiguous way, commands are usually written in one or other languages specially designed for the purpose.
+A program is a set of instructions that tell the computer to do various things; sometimes the instruction it has to perform depends on what happened when it performed a previous instruction.
+This section gives an overview of the two main ways in which you can give these instructions, or "commands" as they are usually called.
+One way uses an _interpreter_, the other a _compiler_.
+As human languages are too difficult for a computer to understand in an unambiguous way, commands are usually written in one of the languages specially designed for the purpose.
=== Interpreters
-With an interpreter, the language comes as an environment, where you type in commands at a prompt and the environment executes them for you. For more complicated programs, you can type the commands into a file and get the interpreter to load the file and execute the commands in it. If anything goes wrong, many interpreters will drop you into a debugger to help you track down the problem.
+With an interpreter, the language comes as an environment, where you type in commands at a prompt and the environment executes them for you.
+For more complicated programs, you can type the commands into a file and get the interpreter to load the file and execute the commands in it.
+If anything goes wrong, many interpreters will drop you into a debugger to help you track down the problem.
-The advantage of this is that you can see the results of your commands immediately, and mistakes can be corrected readily. The biggest disadvantage comes when you want to share your programs with someone. They must have the same interpreter, or you must have some way of giving it to them, and they need to understand how to use it. Also users may not appreciate being thrown into a debugger if they press the wrong key! From a performance point of view, interpreters can use up a lot of memory, and generally do not generate code as efficiently as compilers.
+The advantage of this is that you can see the results of your commands immediately, and mistakes can be corrected readily.
+The biggest disadvantage comes when you want to share your programs with someone.
+They must have the same interpreter, or you must have some way of giving it to them, and they need to understand how to use it.
+Also, users may not appreciate being thrown into a debugger if they press the wrong key!
+From a performance point of view, interpreters can use up a lot of memory, and generally do not generate code as efficiently as compilers.
-In my opinion, interpreted languages are the best way to start if you have not done any programming before. This kind of environment is typically found with languages like Lisp, Smalltalk, Perl and Basic. It could also be argued that the UNIX(R) shell (`sh`, `csh`) is itself an interpreter, and many people do in fact write shell "scripts" to help with various "housekeeping" tasks on their machine. Indeed, part of the original UNIX(R) philosophy was to provide lots of small utility programs that could be linked together in shell scripts to perform useful tasks.
+In my opinion, interpreted languages are the best way to start if you have not done any programming before.
+This kind of environment is typically found with languages like Lisp, Smalltalk, Perl and Basic.
+It could also be argued that the UNIX(R) shell (`sh`, `csh`) is itself an interpreter, and many people do in fact write shell "scripts" to help with various "housekeeping" tasks on their machine.
+Indeed, part of the original UNIX(R) philosophy was to provide lots of small utility programs that could be linked together in shell scripts to perform useful tasks.
=== Interpreters Available with FreeBSD
@@ -70,16 +91,23 @@ Here is a list of interpreters that are available from the FreeBSD Ports Collect
Instructions on how to get and install applications from the Ports Collection can be found in the link:{handbook}#ports-using/[Ports section] of the handbook.
BASIC::
-Short for Beginner's All-purpose Symbolic Instruction Code. Developed in the 1950s for teaching University students to program and provided with every self-respecting personal computer in the 1980s, BASIC has been the first programming language for many programmers. It is also the foundation for Visual Basic.
+Short for Beginner's All-purpose Symbolic Instruction Code.
+Developed in the 1960s for teaching university students to program, and provided with every self-respecting personal computer in the 1980s, BASIC has been the first programming language for many programmers.
+It is also the foundation for Visual Basic.
+
The Bywater Basic Interpreter can be found in the Ports Collection as package:lang/bwbasic[] and the Phil Cockroft's Basic Interpreter (formerly Rabbit Basic) is available as package:lang/pbasic[].
Lisp::
-A language that was developed in the late 1950s as an alternative to the "number-crunching" languages that were popular at the time. Instead of being based on numbers, Lisp is based on lists; in fact, the name is short for "List Processing". It is very popular in AI (Artificial Intelligence) circles.
+A language that was developed in the late 1950s as an alternative to the "number-crunching" languages that were popular at the time.
+Instead of being based on numbers, Lisp is based on lists; in fact, the name is short for "List Processing".
+It is very popular in AI (Artificial Intelligence) circles.
+
Lisp is an extremely powerful and sophisticated language, but can be rather large and unwieldy.
+
-Various implementations of Lisp that can run on UNIX(R) systems are available in the Ports Collection for FreeBSD. GNU Common Lisp can be found as package:lang/gcl[]. CLISP by Bruno Haible and Michael Stoll is available as package:lang/clisp[]. For CMUCL, which includes a highly-optimizing compiler too, or simpler Lisp implementations like SLisp, which implements most of the Common Lisp constructs in a few hundred lines of C code, package:lang/cmucl[] and package:lang/slisp[] are available respectively.
+Various implementations of Lisp that can run on UNIX(R) systems are available in the Ports Collection for FreeBSD.
+GNU Common Lisp can be found as package:lang/gcl[].
+CLISP by Bruno Haible and Michael Stoll is available as package:lang/clisp[].
+CMUCL, which also includes a highly optimizing compiler, is available as package:lang/cmucl[], and SLisp, a simpler implementation that provides most of the Common Lisp constructs in a few hundred lines of C code, as package:lang/slisp[].
Perl::
Very popular with system administrators for writing scripts; also often used on World Wide Web servers for writing CGI scripts.
@@ -87,47 +115,70 @@ Very popular with system administrators for writing scripts; also often used on
Perl is available in the Ports Collection as package:lang/perl5.24[] for all FreeBSD releases.
Scheme::
-A dialect of Lisp that is rather more compact and cleaner than Common Lisp. Popular in Universities as it is simple enough to teach to undergraduates as a first language, while it has a high enough level of abstraction to be used in research work.
+A dialect of Lisp that is rather more compact and cleaner than Common Lisp.
+Popular in universities, as it is simple enough to teach to undergraduates as a first language,
+while it has a high enough level of abstraction to be used in research work.
+
-Scheme is available from the Ports Collection as package:lang/elk[] for the Elk Scheme Interpreter. The MIT Scheme Interpreter can be found in package:lang/mit-scheme[] and the SCM Scheme Interpreter in package:lang/scm[].
+Scheme is available from the Ports Collection as package:lang/elk[] for the Elk Scheme Interpreter.
+The MIT Scheme Interpreter can be found in package:lang/mit-scheme[] and the SCM Scheme Interpreter in package:lang/scm[].
Icon::
-Icon is a high-level language with extensive facilities for processing strings and structures. The version of Icon for FreeBSD can be found in the Ports Collection as package:lang/icon[].
+Icon is a high-level language with extensive facilities for processing strings and structures.
+The version of Icon for FreeBSD can be found in the Ports Collection as package:lang/icon[].
Logo::
-Logo is a language that is easy to learn, and has been used as an introductory programming language in various courses. It is an excellent tool to work with when teaching programming to smaller age groups, as it makes creation of elaborate geometric shapes an easy task.
+Logo is a language that is easy to learn, and has been used as an introductory programming language in various courses.
+It is an excellent tool to work with when teaching programming to smaller age groups, as it makes creation of elaborate geometric shapes an easy task.
+
The latest version of Logo for FreeBSD is available from the Ports Collection in package:lang/logo[].
Python::
-Python is an Object-Oriented, interpreted language. Its advocates argue that it is one of the best languages to start programming with, since it is relatively easy to start with, but is not limited in comparison to other popular interpreted languages that are used for the development of large, complex applications (Perl and Tcl are two other languages that are popular for such tasks).
+Python is an object-oriented, interpreted language.
+Its advocates argue that it is one of the best languages to start programming with, since it is relatively easy to start with, but is not limited in comparison to other popular interpreted languages that are used for the development of large, complex applications (Perl and Tcl are two other languages that are popular for such tasks).
+
The latest version of Python is available from the Ports Collection in package:lang/python[].
Ruby::
-Ruby is an interpreter, pure object-oriented programming language. It has become widely popular because of its easy to understand syntax, flexibility when writing code, and the ability to easily develop and maintain large, complex programs.
+Ruby is an interpreted, pure object-oriented programming language.
+It has become widely popular because of its easy-to-understand syntax, flexibility when writing code, and the ability to easily develop and maintain large, complex programs.
+
Ruby is available from the Ports Collection as package:lang/ruby25[].
Tcl and Tk::
-Tcl is an embeddable, interpreted language, that has become widely used and became popular mostly because of its portability to many platforms. It can be used both for quickly writing small, prototype applications, or (when combined with Tk, a GUI toolkit) fully-fledged, featureful programs.
+Tcl is an embeddable, interpreted language that has become widely used, mostly because of its portability to many platforms.
+It can be used for quickly writing small prototype applications, or (when combined with Tk, a GUI toolkit) for fully-fledged, featureful programs.
+
-Various versions of Tcl are available as ports for FreeBSD. The latest version, Tcl 8.5, can be found in package:lang/tcl87[].
+Various versions of Tcl are available as ports for FreeBSD.
+The latest version, Tcl 8.7, can be found in package:lang/tcl87[].
=== Compilers
-Compilers are rather different. First of all, you write your code in a file (or files) using an editor. You then run the compiler and see if it accepts your program. If it did not compile, grit your teeth and go back to the editor; if it did compile and gave you a program, you can run it either at a shell command prompt or in a debugger to see if it works properly.footnote:[If you run it in the shell, you may get a core dump.]
+Compilers are rather different.
+First of all, you write your code in a file (or files) using an editor.
+You then run the compiler and see if it accepts your program.
+If it did not compile, grit your teeth and go back to the editor;
+if it did compile and gave you a program, you can run it either at a shell command prompt or in a debugger to see if it works properly.footnote:[If you run it in the shell, you may get a core dump.]
-Obviously, this is not quite as direct as using an interpreter. However it allows you to do a lot of things which are very difficult or even impossible with an interpreter, such as writing code which interacts closely with the operating system-or even writing your own operating system! It is also useful if you need to write very efficient code, as the compiler can take its time and optimize the code, which would not be acceptable in an interpreter. Moreover, distributing a program written for a compiler is usually more straightforward than one written for an interpreter-you can just give them a copy of the executable, assuming they have the same operating system as you.
+Obviously, this is not quite as direct as using an interpreter.
+However it allows you to do a lot of things which are very difficult or even impossible with an interpreter,
+such as writing code which interacts closely with the operating system, or even writing your own operating system!
+It is also useful if you need to write very efficient code, as the compiler can take its time and optimize the code,
+which would not be acceptable in an interpreter.
+Moreover, distributing a program written for a compiler is usually more straightforward than distributing one written for an interpreter: you can just give people a copy of the executable, assuming they have the same operating system as you.
-As the edit-compile-run-debug cycle is rather tedious when using separate programs, many commercial compiler makers have produced Integrated Development Environments (IDEs for short). FreeBSD does not include an IDE in the base system, but package:devel/kdevelop[] is available in the Ports Collection and many use Emacs for this purpose. Using Emacs as an IDE is discussed in <<emacs>>.
+As the edit-compile-run-debug cycle is rather tedious when using separate programs, many commercial compiler makers have produced Integrated Development Environments (IDEs for short).
+FreeBSD does not include an IDE in the base system, but package:devel/kdevelop[] is available in the Ports Collection and many use Emacs for this purpose.
+Using Emacs as an IDE is discussed in <<emacs>>.
[[tools-compiling]]
== Compiling with `cc`
-This section deals with the gcc and clang compilers for C and C++, since they come with the FreeBSD base system. Starting with FreeBSD 10.X `clang` is installed as `cc`. The details of producing a program with an interpreter vary considerably between interpreters, and are usually well covered in the documentation and on-line help for the interpreter.
+This section deals with the gcc and clang compilers for C and C++, since they come with the FreeBSD base system.
+Starting with FreeBSD 10.X, `clang` is installed as `cc`.
+The details of producing a program with an interpreter vary considerably between interpreters, and are usually well covered in the documentation and on-line help for the interpreter.
-Once you have written your masterpiece, the next step is to convert it into something that will (hopefully!) run on FreeBSD. This usually involves several steps, each of which is done by a separate program.
+Once you have written your masterpiece, the next step is to convert it into something that will (hopefully!) run on FreeBSD.
+This usually involves several steps, each of which is done by a separate program.
[.procedure]
. Pre-process your source code to remove comments and do other tricks like expanding macros in C.
@@ -139,7 +190,8 @@ Once you have written your masterpiece, the next step is to convert it into some
. Work out how to produce something that the system's run-time loader will be able to load into memory and run.
. Finally, write the executable on the filesystem.
-The word _compiling_ is often used to refer to just steps 1 to 4-the others are referred to as _linking_. Sometimes step 1 is referred to as _pre-processing_ and steps 3-4 as _assembling_.
+The word _compiling_ is often used to refer to just steps 1 to 4; the others are referred to as _linking_.
+Sometimes step 1 is referred to as _pre-processing_ and steps 3-4 as _assembling_.
Fortunately, almost all this detail is hidden from you, as `cc` is a front end that manages calling all these programs with the right arguments for you; simply typing
@@ -148,16 +200,20 @@ Fortunately, almost all this detail is hidden from you, as `cc` is a front end t
% cc foobar.c
....
-will cause [.filename]#foobar.c# to be compiled by all the steps above. If you have more than one file to compile, just do something like
+will cause [.filename]#foobar.c# to be compiled by all the steps above.
+If you have more than one file to compile, just do something like
[source,bash]
....
% cc foo.c bar.c
....
-Note that the syntax checking is just that-checking the syntax. It will not check for any logical mistakes you may have made, like putting the program into an infinite loop, or using a bubble sort when you meant to use a binary sort.footnote:[In case you did not know, a binary sort is an efficient way of sorting things into order and a bubble sort is not.]
+Note that the syntax checking is just that: checking the syntax.
+It will not check for any logical mistakes you may have made, like putting the program into an infinite loop,
+or using a bubble sort when you meant to use a binary sort.footnote:[In case you did not know, a binary sort is an efficient way of sorting things into order and a bubble sort is not.]
-There are lots and lots of options for `cc`, which are all in the manual page. Here are a few of the most important ones, with examples of how to use them.
+There are lots and lots of options for `cc`, which are all in the manual page.
+Here are a few of the most important ones, with examples of how to use them.
`-o _filename_`::
The output name of the file. If you do not use this option, `cc` will produce an executable called [.filename]#a.out#.footnote:[The reasons for this are buried in the mists of history.]
@@ -169,17 +225,23 @@ The output name of the file. If you do not use this option, `cc` will produce an
....
`-c`::
-Just compile the file, do not link it. Useful for toy programs where you just want to check the syntax, or if you are using a [.filename]#Makefile#.
+Just compile the file, do not link it.
+Useful for toy programs where you just want to check the syntax, or if you are using a [.filename]#Makefile#.
+
[source,bash]
....
% cc -c foobar.c
....
+
-This will produce an _object file_ (not an executable) called [.filename]#foobar.o#. This can be linked together with other object files into an executable.
+This will produce an _object file_ (not an executable) called [.filename]#foobar.o#.
+This can be linked together with other object files into an executable.
`-g`::
-Create a debug version of the executable. This makes the compiler put information into the executable about which line of which source file corresponds to which function call. A debugger can use this information to show the source code as you step through the program, which is _very_ useful; the disadvantage is that all this extra information makes the program much bigger. Normally, you compile with `-g` while you are developing a program and then compile a "release version" without `-g` when you are satisfied it works properly.
+Create a debug version of the executable.
+This makes the compiler put information into the executable about which line of which source file corresponds to which function call.
+A debugger can use this information to show the source code as you step through the program, which is _very_ useful;
+the disadvantage is that all this extra information makes the program much bigger.
+Normally, you compile with `-g` while you are developing a program and then compile a "release version" without `-g` when you are satisfied it works properly.
+
[source,bash]
@@ -190,7 +252,9 @@ Create a debug version of the executable. This makes the compiler put informatio
This will produce a debug version of the program. footnote:[Note, we did not use the -o flag to specify the executable name, so we will get an executable called a.out. Producing a debug version called foobar is left as an exercise for the reader!]
`-O`::
-Create an optimized version of the executable. The compiler performs various clever tricks to try to produce an executable that runs faster than normal. You can add a number after the `-O` to specify a higher level of optimization, but this often exposes bugs in the compiler's optimizer.
+Create an optimized version of the executable.
+The compiler performs various clever tricks to try to produce an executable that runs faster than normal.
+You can add a number after the `-O` to specify a higher level of optimization, but this often exposes bugs in the compiler's optimizer.
+
[source,bash]
....
@@ -199,20 +263,27 @@ Create an optimized version of the executable. The compiler performs various cle
+
This will produce an optimized version of [.filename]#foobar#.
-The following three flags will force `cc` to check that your code complies to the relevant international standard, often referred to as the ANSI standard, though strictly speaking it is an ISO standard.
+The following three flags will force `cc` to check that your code complies with the relevant international standard,
+often referred to as the ANSI standard, though strictly speaking it is an ISO standard.
`-Wall`::
-Enable all the warnings which the authors of `cc` believe are worthwhile. Despite the name, it will not enable all the warnings `cc` is capable of.
+Enable all the warnings which the authors of `cc` believe are worthwhile.
+Despite the name, it will not enable all the warnings `cc` is capable of.
`-ansi`::
-Turn off most, but not all, of the non-ANSI C features provided by `cc`. Despite the name, it does not guarantee strictly that your code will comply to the standard.
+Turn off most, but not all, of the non-ANSI C features provided by `cc`.
+Despite the name, it does not strictly guarantee that your code will comply with the standard.
`-pedantic`::
Turn off _all_ ``cc``'s non-ANSI C features.
-Without these flags, `cc` will allow you to use some of its non-standard extensions to the standard. Some of these are very useful, but will not work with other compilers-in fact, one of the main aims of the standard is to allow people to write code that will work with any compiler on any system. This is known as _portable code_.
+Without these flags, `cc` will allow you to use some of its non-standard extensions to the standard.
+Some of these are very useful, but will not work with other compilers; in fact,
+one of the main aims of the standard is to allow people to write code that will work with any compiler on any system.
+This is known as _portable code_.
-Generally, you should try to make your code as portable as possible, as otherwise you may have to completely rewrite the program later to get it to work somewhere else-and who knows what you may be using in a few years time?
+Generally, you should try to make your code as portable as possible,
+as otherwise you may have to completely rewrite the program later to get it to work somewhere else; and who knows what you may be using in a few years' time?
[source,bash]
....
@@ -224,9 +295,12 @@ This will produce an executable [.filename]#foobar# after checking [.filename]#f
`-l__library__`::
Specify a function library to be used at link time.
+
-The most common example of this is when compiling a program that uses some of the mathematical functions in C. Unlike most other platforms, these are in a separate library from the standard C one and you have to tell the compiler to add it.
+The most common example of this is when compiling a program that uses some of the mathematical functions in C.
+Unlike most other platforms, these are in a separate library from the standard C one and you have to tell the compiler to add it.
+
-The rule is that if the library is called [.filename]#libsomething.a#, you give `cc` the argument `-l__something__`. For example, the math library is [.filename]#libm.a#, so you give `cc` the argument `-lm`. A common "gotcha" with the math library is that it has to be the last library on the command line.
+The rule is that if the library is called [.filename]#libsomething.a#, you give `cc` the argument `-l__something__`.
+For example, the math library is [.filename]#libm.a#, so you give `cc` the argument `-lm`.
+A common "gotcha" with the math library is that it has to be the last library on the command line.
+
[source,bash]
....
@@ -235,7 +309,8 @@ The rule is that if the library is called [.filename]#libsomething.a#, you give
+
This will link the math library functions into [.filename]#foobar#.
+
-If you are compiling C++ code, use {c-plus-plus-command}. {c-plus-plus-command} can also be invoked as {clang-plus-plus-command} on FreeBSD.
+If you are compiling C++ code, use {c-plus-plus-command}.
+{c-plus-plus-command} can also be invoked as {clang-plus-plus-command} on FreeBSD.
+
[source,bash]
....
@@ -292,11 +367,13 @@ like you said I should, but I get this when I run it:
This is not the right answer! What is going on?
-When the compiler sees you call a function, it checks if it has already seen a prototype for it. If it has not, it assumes the function returns an int, which is definitely not what you want here.
+When the compiler sees you call a function, it checks if it has already seen a prototype for it.
+If it has not, it assumes the function returns an int, which is definitely not what you want here.
==== So how do I fix this?
-The prototypes for the mathematical functions are in [.filename]#math.h#. If you include this file, the compiler will be able to find the prototype and it will stop doing strange things to your calculation!
+The prototypes for the mathematical functions are in [.filename]#math.h#.
+If you include this file, the compiler will be able to find the prototype and it will stop doing strange things to your calculation!
[.programlisting]
....
@@ -319,7 +396,8 @@ If you are using any of the mathematical functions, _always_ include [.filename]
==== I compiled a file called foobar.c and I cannot find an executable called foobar. Where has it gone?
-Remember, `cc` will call the executable [.filename]#a.out# unless you tell it differently. Use the `-o _filename_` option:
+Remember, `cc` will call the executable [.filename]#a.out# unless you tell it differently.
+Use the `-o _filename_` option:
[source,bash]
....
@@ -328,11 +406,13 @@ Remember, `cc` will call the executable [.filename]#a.out# unless you tell it di
==== OK, I have an executable called foobar, I can see it when I run ls, but when I type in foobar at the command prompt it tells me there is no such file. Why can it not find it?
-Unlike MS-DOS(R), UNIX(R) does not look in the current directory when it is trying to find out which executable you want it to run, unless you tell it to. Type `./foobar`, which means "run the file called [.filename]#foobar# in the current directory."
+Unlike MS-DOS(R), UNIX(R) does not look in the current directory when it is trying to find out which executable you want it to run, unless you tell it to.
+Type `./foobar`, which means "run the file called [.filename]#foobar# in the current directory."
==== I called my executable test, but nothing happens when I run it. What is going on?
-Most UNIX(R) systems have a program called `test` in [.filename]#/usr/bin# and the shell is picking that one up before it gets to checking the current directory. Either type:
+Most UNIX(R) systems have a program called `test` in [.filename]#/usr/bin# and the shell is picking that one up before it gets to checking the current directory.
+Either type:
[source,bash]
....
@@ -343,7 +423,8 @@ or choose a better name for your program!
==== I compiled my program and it seemed to run all right at first, then there was an error and it said something about core dumped. What does that mean?
-The name _core dump_ dates back to the very early days of UNIX(R), when the machines used core memory for storing data. Basically, if the program failed under certain conditions, the system would write the contents of core memory to disk in a file called [.filename]#core#, which the programmer could then pore over to find out what went wrong.
+The name _core dump_ dates back to the very early days of UNIX(R), when the machines used core memory for storing data.
+Basically, if the program failed under certain conditions, the system would write the contents of core memory to disk in a file called [.filename]#core#, which the programmer could then pore over to find out what went wrong.
==== Fascinating stuff, but what I am supposed to do now?
@@ -351,7 +432,8 @@ Use a debugger to analyze the core (see <<debugging>>).
==== When my program dumped core, it said something about a segmentation fault. What is that?
-This basically means that your program tried to perform some sort of illegal operation on memory; UNIX(R) is designed to protect the operating system and other programs from rogue programs.
+This basically means that your program tried to perform some sort of illegal operation on memory;
+UNIX(R) is designed to protect the operating system and other programs from rogue programs.
Common causes for this are:
@@ -371,7 +453,8 @@ char *foo;
strcpy(foo, "bang!");
....
+
-The pointer will have some random value that, with luck, will point into an area of memory that is not available to your program and the kernel will kill your program before it can do any damage. If you are unlucky, it will point somewhere inside your own program and corrupt one of your data structures, causing the program to fail mysteriously.
+The pointer will have some random value that, with luck, will point into an area of memory that is not available to your program and the kernel will kill your program before it can do any damage.
+If you are unlucky, it will point somewhere inside your own program and corrupt one of your data structures, causing the program to fail mysteriously.
* Trying to access past the end of an array, eg
+
[.programlisting]
@@ -406,11 +489,14 @@ free(foo);
free(foo);
....
-Making one of these mistakes will not always lead to an error, but they are always bad practice. Some systems and compilers are more tolerant than others, which is why programs that ran well on one system can crash when you try them on an another.
+Making one of these mistakes will not always lead to an error, but they are always bad practice.
+Some systems and compilers are more tolerant than others,
+which is why programs that ran well on one system can crash when you try them on another.
==== Sometimes when I get a core dump it says bus error. It says in my UNIX(R) book that this means a hardware problem, but the computer still seems to be working. Is this true?
-No, fortunately not (unless of course you really do have a hardware problem...). This is usually another way of saying that you accessed memory in a way you should not have.
+No, fortunately not (unless of course you really do have a hardware problem...).
+This is usually another way of saying that you accessed memory in a way you should not have.
==== This dumping core business sounds as though it could be quite useful, if I can make it happen when I want to. Can I do this, or do I have to wait until there is an error?
@@ -430,11 +516,14 @@ to find out the process ID of your program, and do
where `_pid_` is the process ID you looked up.
-This is useful if your program has got stuck in an infinite loop, for instance. If your program happens to trap SIGABRT, there are several other signals which have a similar effect.
+This is useful if your program has got stuck in an infinite loop, for instance.
+If your program happens to trap SIGABRT, there are several other signals which have a similar effect.
-Alternatively, you can create a core dump from inside your program, by calling the `abort()` function. See the manual page of man:abort[3] to learn more.
+Alternatively, you can create a core dump from inside your program, by calling the `abort()` function.
+See the manual page of man:abort[3] to learn more.
-If you want to create a core dump from outside your program, but do not want the process to terminate, you can use the `gcore` program. See the manual page of man:gcore[1] for more information.
+If you want to create a core dump from outside your program, but do not want the process to terminate, you can use the `gcore` program.
+See the manual page of man:gcore[1] for more information.
[[tools-make]]
== Make
@@ -450,24 +539,31 @@ When you are working on a simple program with only one or two source files, typi
is not too bad, but it quickly becomes very tedious when there are several files-and it can take a while to compile, too.
-One way to get around this is to use object files and only recompile the source file if the source code has changed. So we could have something like:
+One way to get around this is to use object files and only recompile the source file if the source code has changed.
+So we could have something like:
[source,bash]
....
% cc file1.o file2.o … file37.c …
....
-if we had changed [.filename]#file37.c#, but not any of the others, since the last time we compiled. This may speed up the compilation quite a bit, but does not solve the typing problem.
+if we had changed [.filename]#file37.c#, but not any of the others, since the last time we compiled.
+This may speed up the compilation quite a bit, but does not solve the typing problem.
Or we could write a shell script to solve the typing problem, but it would have to re-compile everything, making it very inefficient on a large project.
What happens if we have hundreds of source files lying about? What if we are working in a team with other people who forget to tell us when they have changed one of their source files that we use?
-Perhaps we could put the two solutions together and write something like a shell script that would contain some kind of magic rule saying when a source file needs compiling. Now all we need now is a program that can understand these rules, as it is a bit too complicated for the shell.
+Perhaps we could put the two solutions together and write something like a shell script that would contain some kind of magic rule saying when a source file needs compiling.
+Now all we need is a program that can understand these rules, as it is a bit too complicated for the shell.
-This program is called `make`. It reads in a file, called a _makefile_, that tells it how different files depend on each other, and works out which files need to be re-compiled and which ones do not. For example, a rule could say something like "if [.filename]#fromboz.o# is older than [.filename]#fromboz.c#, that means someone must have changed [.filename]#fromboz.c#, so it needs to be re-compiled." The makefile also has rules telling make _how_ to re-compile the source file, making it a much more powerful tool.
+This program is called `make`.
+It reads in a file, called a _makefile_, that tells it how different files depend on each other, and works out which files need to be re-compiled and which ones do not.
+For example, a rule could say something like "if [.filename]#fromboz.o# is older than [.filename]#fromboz.c#, that means someone must have changed [.filename]#fromboz.c#, so it needs to be re-compiled."
+The makefile also has rules telling make _how_ to re-compile the source file, making it a much more powerful tool.
-Makefiles are typically kept in the same directory as the source they apply to, and can be called [.filename]#makefile#, [.filename]#Makefile# or [.filename]#MAKEFILE#. Most programmers use the name [.filename]#Makefile#, as this puts it near the top of a directory listing, where it can easily be seen.footnote:[They do not use the MAKEFILE form as block capitals are often used for documentation files like README.]
+Makefiles are typically kept in the same directory as the source they apply to, and can be called [.filename]#makefile#, [.filename]#Makefile# or [.filename]#MAKEFILE#.
+Most programmers use the name [.filename]#Makefile#, as this puts it near the top of a directory listing, where it can easily be seen.footnote:[They do not use the MAKEFILE form as block capitals are often used for documentation files like README.]
=== Example of Using `make`
@@ -481,13 +577,23 @@ foo: foo.c
It consists of two lines, a dependency line and a creation line.
-The dependency line here consists of the name of the program (known as the _target_), followed by a colon, then whitespace, then the name of the source file. When `make` reads this line, it looks to see if [.filename]#foo# exists; if it exists, it compares the time [.filename]#foo# was last modified to the time [.filename]#foo.c# was last modified. If [.filename]#foo# does not exist, or is older than [.filename]#foo.c#, it then looks at the creation line to find out what to do. In other words, this is the rule for working out when [.filename]#foo.c# needs to be re-compiled.
+The dependency line here consists of the name of the program (known as the _target_),
+followed by a colon, then whitespace, then the name of the source file.
+When `make` reads this line, it looks to see if [.filename]#foo# exists;
+if it exists, it compares the time [.filename]#foo# was last modified to the time [.filename]#foo.c# was last modified.
+If [.filename]#foo# does not exist, or is older than [.filename]#foo.c#, it then looks at the creation line to find out what to do.
+In other words, this is the rule for working out when [.filename]#foo.c# needs to be re-compiled.
-The creation line starts with a tab (press kbd:[tab]) and then the command you would type to create [.filename]#foo# if you were doing it at a command prompt. If [.filename]#foo# is out of date, or does not exist, `make` then executes this command to create it. In other words, this is the rule which tells make how to re-compile [.filename]#foo.c#.
+The creation line starts with a tab (press kbd:[tab]) and then the command you would type to create [.filename]#foo# if you were doing it at a command prompt.
+If [.filename]#foo# is out of date, or does not exist, `make` then executes this command to create it.
+In other words, this is the rule which tells make how to re-compile [.filename]#foo.c#.
-So, when you type `make`, it will make sure that [.filename]#foo# is up to date with respect to your latest changes to [.filename]#foo.c#. This principle can be extended to [.filename]#Makefile#'s with hundreds of targets-in fact, on FreeBSD, it is possible to compile the entire operating system just by typing `make world` in the appropriate directory!
+So, when you type `make`, it will make sure that [.filename]#foo# is up to date with respect to your latest changes to [.filename]#foo.c#.
+This principle can be extended to [.filename]#Makefiles# with hundreds of targets-in fact, on FreeBSD,
+it is possible to compile the entire operating system just by typing `make world` in the appropriate directory!
-Another useful property of makefiles is that the targets do not have to be programs. For instance, we could have a make file that looks like this:
+Another useful property of makefiles is that the targets do not have to be programs.
+For instance, we could have a makefile that looks like this:
[.programlisting]
....
@@ -505,17 +611,25 @@ We can tell make which target we want to make by typing:
% make target
....
-`make` will then only look at that target and ignore any others. For example, if we type `make foo` with the makefile above, make will ignore the `install` target.
+`make` will then only look at that target and ignore any others.
+For example, if we type `make foo` with the makefile above, make will ignore the `install` target.
-If we just type `make` on its own, make will always look at the first target and then stop without looking at any others. So if we typed `make` here, it will just go to the `foo` target, re-compile [.filename]#foo# if necessary, and then stop without going on to the `install` target.
+If we just type `make` on its own, make will always look at the first target and then stop without looking at any others.
+So if we typed `make` here, it will just go to the `foo` target, re-compile [.filename]#foo# if necessary, and then stop without going on to the `install` target.
-Notice that the `install` target does not actually depend on anything! This means that the command on the following line is always executed when we try to make that target by typing `make install`. In this case, it will copy [.filename]#foo# into the user's home directory. This is often used by application makefiles, so that the application can be installed in the correct directory when it has been correctly compiled.
+Notice that the `install` target does not actually depend on anything!
+This means that the command on the following line is always executed when we try to make that target by typing `make install`.
+In this case, it will copy [.filename]#foo# into the user's home directory.
+This is often used by application makefiles, so that the application can be installed in the correct directory when it has been correctly compiled.
-This is a slightly confusing subject to try to explain. If you do not quite understand how `make` works, the best thing to do is to write a simple program like "hello world" and a make file like the one above and experiment. Then progress to using more than one source file, or having the source file include a header file. `touch` is very useful here-it changes the date on a file without you having to edit it.
+This is a slightly confusing subject to try to explain.
+If you do not quite understand how `make` works, the best thing to do is to write a simple program like "hello world" and a makefile like the one above and experiment.
+Then progress to using more than one source file, or having the source file include a header file.
+`touch` is very useful here-it changes the date on a file without you having to edit it.
=== Make and include-files
-C code often starts with a list of files to include, for example stdio.h. Some of these files are system-include files, some of them are from the project you are now working on:
+C code often starts with a list of files to include, for example stdio.h.
+Some of these files are system-include files, some of them are from the project you are now working on:
[.programlisting]
....
@@ -532,7 +646,11 @@ To make sure that this file is recompiled the moment [.filename]#foo.h# is chang
foo: foo.c foo.h
....
-The moment your project is getting bigger and you have more and more own include-files to maintain, it will be a pain to keep track of all include files and the files which are depending on it. If you change an include-file but forget to recompile all the files which are depending on it, the results will be devastating. `clang` has an option to analyze your files and to produce a list of include-files and their dependencies: `-MM`.
+As your project grows and you have more and more include-files of your own to maintain,
+it becomes a pain to keep track of all the include-files and the files which depend on them.
+If you change an include-file but forget to recompile all the files which depend on it,
+the results will be devastating.
+`clang` has an option to analyze your files and to produce a list of include-files and their dependencies: `-MM`.
If you add this to your Makefile:
@@ -555,7 +673,10 @@ Do not forget to run `make depend` each time you add an include-file to one of y
=== FreeBSD Makefiles
-Makefiles can be rather complicated to write. Fortunately, BSD-based systems like FreeBSD come with some very powerful ones as part of the system. One very good example of this is the FreeBSD ports system. Here is the essential part of a typical ports [.filename]#Makefile#:
+Makefiles can be rather complicated to write.
+Fortunately, BSD-based systems like FreeBSD come with some very powerful ones as part of the system.
+One very good example of this is the FreeBSD ports system.
+Here is the essential part of a typical ports [.filename]#Makefile#:
[.programlisting]
....
@@ -578,15 +699,22 @@ Now, if we go to the directory for this port and type `make`, the following happ
Now I think you will agree that is rather impressive for a four line script!
-The secret lies in the last line, which tells `make` to look in the system makefile called [.filename]#bsd.port.mk#. It is easy to overlook this line, but this is where all the clever stuff comes from-someone has written a makefile that tells `make` to do all the things above (plus a couple of other things I did not mention, including handling any errors that may occur) and anyone can get access to that just by putting a single line in their own make file!
+The secret lies in the last line, which tells `make` to look in the system makefile called [.filename]#bsd.port.mk#.
+It is easy to overlook this line, but this is where all the clever stuff comes from-someone has written a makefile that tells `make` to do all the things above (plus a couple of other things I did not mention,
+including handling any errors that may occur) and anyone can get access to that just by putting a single line in their own make file!
-If you want to have a look at these system makefiles, they are in [.filename]#/usr/share/mk#, but it is probably best to wait until you have had a bit of practice with makefiles, as they are very complicated (and if you do look at them, make sure you have a flask of strong coffee handy!)
+If you want to have a look at these system makefiles, they are in [.filename]#/usr/share/mk#,
+but it is probably best to wait until you have had a bit of practice with makefiles,
+as they are very complicated (and if you do look at them, make sure you have a flask of strong coffee handy!)
=== More Advanced Uses of `make`
-`Make` is a very powerful tool, and can do much more than the simple example above shows. Unfortunately, there are several different versions of `make`, and they all differ considerably. The best way to learn what they can do is probably to read the documentation-hopefully this introduction will have given you a base from which you can do this.
+`Make` is a very powerful tool, and can do much more than the simple example above shows.
+Unfortunately, there are several different versions of `make`, and they all differ considerably.
+The best way to learn what they can do is probably to read the documentation-hopefully this introduction will have given you a base from which you can do this.
-The version of make that comes with FreeBSD is the Berkeley make; there is a tutorial for it in [.filename]#/usr/share/doc/psd/12.make#. To view it, do
+The version of make that comes with FreeBSD is the Berkeley make; there is a tutorial for it in [.filename]#/usr/share/doc/psd/12.make#.
+To view it, do
[source,bash]
....
@@ -595,9 +723,12 @@ The version of make that comes with FreeBSD is the Berkeley make; there is a tut
in that directory.
-Many applications in the ports use GNU make, which has a very good set of "info" pages. If you have installed any of these ports, GNU make will automatically have been installed as `gmake`. It is also available as a port and package in its own right.
+Many applications in the ports use GNU make, which has a very good set of "info" pages.
+If you have installed any of these ports, GNU make will automatically have been installed as `gmake`.
+It is also available as a port and package in its own right.
-To view the info pages for GNU make, you will have to edit [.filename]#dir# in the [.filename]#/usr/local/info# directory to add an entry for it. This involves adding a line like
+To view the info pages for GNU make, you will have to edit [.filename]#dir# in the [.filename]#/usr/local/info# directory to add an entry for it.
+This involves adding a line like
[.programlisting]
....
@@ -611,20 +742,34 @@ to the file. Once you have done this, you can type `info` and then select [.guim
=== Introduction to Available Debuggers
-Using a debugger allows running the program under more controlled circumstances. Typically, it is possible to step through the program a line at a time, inspect the value of variables, change them, tell the debugger to run up to a certain point and then stop, and so on. It is also possible to attach to a program that is already running, or load a core file to investigate why the program crashed. It is even possible to debug the kernel, though that is a little trickier than the user applications we will be discussing in this section.
+Using a debugger allows running the program under more controlled circumstances.
+Typically, it is possible to step through the program a line at a time, inspect the value of variables, change them, tell the debugger to run up to a certain point and then stop, and so on.
+It is also possible to attach to a program that is already running, or load a core file to investigate why the program crashed.
+It is even possible to debug the kernel, though that is a little trickier than the user applications we will be discussing in this section.
-This section is intended to be a quick introduction to using debuggers and does not cover specialized topics such as debugging the kernel. For more information about that, refer to crossref:kerneldebug[kerneldebug,Kernel Debugging].
+This section is intended to be a quick introduction to using debuggers and does not cover specialized topics such as debugging the kernel.
+For more information about that, refer to crossref:kerneldebug[kerneldebug,Kernel Debugging].
-The standard debugger supplied with FreeBSD {rel121-current} is called `lldb` (LLVM debugger). As it is part of the standard installation for that release, there is no need to do anything special to use it. It has good command help, accessible via the `help` command, as well as https://lldb.llvm.org/[a web tutorial and documentation].
+The standard debugger supplied with FreeBSD {rel121-current} is called `lldb` (LLVM debugger).
+As it is part of the standard installation for that release, there is no need to do anything special to use it.
+It has good command help, accessible via the `help` command, as well as https://lldb.llvm.org/[a web tutorial and documentation].
[NOTE]
====
-The `lldb` command is available for FreeBSD {rel113-current} link:{handbook}#ports-using/[from ports or packages] as package:devel/llvm[]. This will install the default version of lldb (currently 9.0).
+The `lldb` command is available for FreeBSD {rel113-current} link:{handbook}#ports-using/[from ports or packages] as package:devel/llvm[].
+This will install the default version of lldb (currently 9.0).
====
-The other debugger available with FreeBSD is called `gdb` (GNU debugger). Unlike lldb, it is not installed by default on FreeBSD {rel121-current}; to use it, link:{handbook}#ports-using/[install] package:devel/gdb[] from ports or packages. The version installed by default on FreeBSD {rel113-current} is old; instead, install package:devel/gdb[] there as well. It has quite good on-line help, as well as a set of info pages.
+The other debugger available with FreeBSD is called `gdb` (GNU debugger).
+Unlike lldb, it is not installed by default on FreeBSD {rel121-current};
+to use it, link:{handbook}#ports-using/[install] package:devel/gdb[] from ports or packages.
+The version installed by default on FreeBSD {rel113-current} is old; instead, install package:devel/gdb[] there as well.
+It has quite good on-line help, as well as a set of info pages.
-Which one to use is largely a matter of taste. If familiar with one only, use that one. People familiar with neither or both but wanting to use one from inside Emacs will need to use `gdb` as `lldb` is unsupported by Emacs. Otherwise, try both and see which one you prefer.
+Which one to use is largely a matter of taste.
+If familiar with one only, use that one.
+People familiar with neither or both but wanting to use one from inside Emacs will need to use `gdb` as `lldb` is unsupported by Emacs.
+Otherwise, try both and see which one you prefer.
=== Using lldb
@@ -639,7 +784,9 @@ Start up lldb by typing
==== Running a Program with lldb
-Compile the program with `-g` to get the most out of using `lldb`. It will work without, but will only display the name of the function currently running, instead of the source code. If it displays a line like:
+Compile the program with `-g` to get the most out of using `lldb`.
+It will work without, but will only display the name of the function currently running, instead of the source code.
+If it displays a line like:
[source,bash]
....
@@ -650,15 +797,20 @@ Breakpoint 1: where = temp`main, address = …
[TIP]
====
-
-Most `lldb` commands have shorter forms that can be used instead. The longer forms are used here for clarity.
+Most `lldb` commands have shorter forms that can be used instead.
+The longer forms are used here for clarity.
====
-At the `lldb` prompt, type `breakpoint set -n main`. This will tell the debugger not to display the preliminary set-up code in the program being run and to stop execution at the beginning of the program's code. Now type `process launch` to actually start the program- it will start at the beginning of the set-up code and then get stopped by the debugger when it calls `main()`.
+At the `lldb` prompt, type `breakpoint set -n main`.
+This will tell the debugger not to display the preliminary set-up code in the program being run and to stop execution at the beginning of the program's code.
+Now type `process launch` to actually start the program; it will start at the beginning of the set-up code and then get stopped by the debugger when it calls `main()`.
-To step through the program a line at a time, type `thread step-over`. When the program gets to a function call, step into it by typing `thread step-in`. Once in a function call, return from it by typing `thread step-out` or use `up` and `down` to take a quick look at the caller.
+To step through the program a line at a time, type `thread step-over`.
+When the program gets to a function call, step into it by typing `thread step-in`.
+Once in a function call, return from it by typing `thread step-out` or use `up` and `down` to take a quick look at the caller.
-Here is a simple example of how to spot a mistake in a program with `lldb`. This is our program (with a deliberate mistake):
+Here is a simple example of how to spot a mistake in a program with `lldb`.
+This is our program (with a deliberate mistake):
[.programlisting]
....
@@ -757,7 +909,8 @@ frame #1: 0x000000000020130b temp`main at temp.c:9:2 lldb displays stack frame
(int) i = -5360 lldb displays -5360
....
-Oh dear! Looking at the code, we forgot to initialize i. We meant to put
+Oh dear!
+Looking at the code, we forgot to initialize i.
+We meant to put
[.programlisting]
....
@@ -770,18 +923,25 @@ main() {
...
....
-but we left the `i=5;` line out. As we did not initialize i, it had whatever number happened to be in that area of memory when the program ran, which in this case happened to be `-5360`.
+but we left the `i=5;` line out.
+As we did not initialize i, it had whatever number happened to be in that area of memory when the program ran,
+which in this case happened to be `-5360`.
[NOTE]
====
-The `lldb` command displays the stack frame every time we go into or out of a function, even if we are using `up` and `down` to move around the call stack. This shows the name of the function and the values of its arguments, which helps us keep track of where we are and what is going on. (The stack is a storage area where the program stores information about the arguments passed to functions and where to go when it returns from a function call.)
+The `lldb` command displays the stack frame every time we go into or out of a function, even if we are using `up` and `down` to move around the call stack.
+This shows the name of the function and the values of its arguments, which helps us keep track of where we are and what is going on.
+(The stack is a storage area where the program stores information about the arguments passed to functions and where to go when it returns from a function call.)
====
==== Examining a Core File with lldb
-A core file is basically a file which contains the complete state of the process when it crashed. In "the good old days", programmers had to print out hex listings of core files and sweat over machine code manuals, but now life is a bit easier. Incidentally, under FreeBSD and other 4.4BSD systems, a core file is called [.filename]#progname.core# instead of just [.filename]#core#, to make it clearer which program a core file belongs to.
+A core file is basically a file which contains the complete state of the process when it crashed.
+In "the good old days", programmers had to print out hex listings of core files and sweat over machine code manuals, but now life is a bit easier.
+Incidentally, under FreeBSD and other 4.4BSD systems, a core file is called [.filename]#progname.core# instead of just [.filename]#core#, to make it clearer which program a core file belongs to.
-To examine a core file, specify the name of the core file in addition to the program itself. Instead of starting up `lldb` in the usual way, type `lldb -c _progname_.core -- _progname_`
+To examine a core file, specify the name of the core file in addition to the program itself.
+Instead of starting up `lldb` in the usual way, type `lldb -c _progname_.core -- _progname_`
The debugger will display something like this:
@@ -793,7 +953,10 @@ Core file '/home/pauamma/tmp/[.filename]#progname.core#' (x86_64) was loaded.
(lldb)
....
-In this case, the program was called [.filename]#progname#, so the core file is called [.filename]#progname.core#. The debugger does not display why the program crashed or where. For this, use `thread backtrace all`. This will also show how the function where the program dumped core was called.
+In this case, the program was called [.filename]#progname#, so the core file is called [.filename]#progname.core#.
+The debugger does not display why the program crashed or where.
+For this, use `thread backtrace all`.
+This will also show how the function where the program dumped core was called.
[source,bash,subs="verbatim,quotes"]
....
@@ -805,11 +968,15 @@ In this case, the program was called [.filename]#progname#, so the core file is
(lldb)
....
-`SIGSEGV` indicates that the program tried to access memory (run code or read/write data usually) at a location that does not belong to it, but does not give any specifics. For that, look at the source code at line 10 of file temp2.c, in `bazz()`. The backtrace also says that in this case, `bazz()` was called from `main()`.
+`SIGSEGV` indicates that the program tried to access memory (run code or read/write data usually) at a location that does not belong to it, but does not give any specifics.
+For that, look at the source code at line 10 of file temp2.c, in `bazz()`.
+The backtrace also says that in this case, `bazz()` was called from `main()`.
==== Attaching to a Running Program with lldb
-One of the neatest features about `lldb` is that it can attach to a program that is already running. Of course, that requires sufficient permissions to do so. A common problem is stepping through a program that forks and wanting to trace the child, but the debugger will only trace the parent.
+One of the neatest features about `lldb` is that it can attach to a program that is already running.
+Of course, that requires sufficient permissions to do so.
+A common problem is stepping through a program that forks and wanting to trace the child, but the debugger will only trace the parent.
To do that, start up another `lldb`, use `ps` to find the process ID for the child, and do
@@ -843,10 +1010,12 @@ Now all that is needed is to attach to the child, set PauseMode to `0` with `exp
[NOTE]
====
-The described functionality is available starting with LLDB version 12.0.0. Users of FreeBSD releases containing an earlier LLDB version may wish to use the snapshot available in link:{handbook}#ports-using/[ports or packages], as package:devel/llvm-devel[].
+The described functionality is available starting with LLDB version 12.0.0.
+Users of FreeBSD releases containing an earlier LLDB version may wish to use the snapshot available in link:{handbook}#ports-using/[ports or packages], as package:devel/llvm-devel[].
====
-Starting with LLDB 12.0.0, remote debugging is supported on FreeBSD. This means that `lldb-server` can be started to debug a program on one host, while the interactive `lldb` client connects to it from another one.
+Starting with LLDB 12.0.0, remote debugging is supported on FreeBSD.
+This means that `lldb-server` can be started to debug a program on one host, while the interactive `lldb` client connects to it from another one.
To launch a new process to be debugged remotely, run `lldb-server` on the remote server by typing
@@ -864,7 +1033,8 @@ Start `lldb` locally and type the following command to connect to the remote ser
(lldb) gdb-remote host:port
....
-`lldb-server` can also attach to a running process. To do that, type the following on the remote server:
+`lldb-server` can also attach to a running process.
+To do that, type the following on the remote server:
[source,bash]
....
@@ -882,7 +1052,8 @@ Start up gdb by typing
% gdb progname
....
-although many people prefer to run it inside Emacs. To do this, type:
+although many people prefer to run it inside Emacs.
+To do this, type:
[source,bash]
....
@@ -893,7 +1064,9 @@ Finally, for those finding its text-based command-prompt style off-putting, ther
==== Running a Program with gdb
-Compile the program with `-g` to get the most out of using `gdb`. It will work without, but will only display the name of the function currently running, instead of the source code. A line like:
+Compile the program with `-g` to get the most out of using `gdb`.
+It will work without, but will only display the name of the function currently running, instead of the source code.
+A line like:
[source,bash]
....
@@ -902,11 +1075,16 @@ Compile the program with `-g` to get the most out of using `gdb`. It will work w
when `gdb` starts up means that the program was not compiled with `-g`.
-At the `gdb` prompt, type `break main`. This will tell the debugger to skip the preliminary set-up code in the program being run and to stop execution at the beginning of the program's code. Now type `run` to start the program- it will start at the beginning of the set-up code and then get stopped by the debugger when it calls `main()`.
+At the `gdb` prompt, type `break main`.
+This will tell the debugger to skip the preliminary set-up code in the program being run and to stop execution at the beginning of the program's code.
+Now type `run` to start the program- it will start at the beginning of the set-up code and then get stopped by the debugger when it calls `main()`.
-To step through the program a line at a time, press `n`. When at a function call, step into it by pressing `s`. Once in a function call, return from it by pressing `f`, or use `up` and `down` to take a quick look at the caller.
+To step through the program a line at a time, press `n`.
+When at a function call, step into it by pressing `s`.
+Once in a function call, return from it by pressing `f`, or use `up` and `down` to take a quick look at the caller.
-Here is a simple example of how to spot a mistake in a program with `gdb`. This is our program (with a deliberate mistake):
+Here is a simple example of how to spot a mistake in a program with `gdb`.
+This is our program (with a deliberate mistake):
[.programlisting]
....
@@ -972,7 +1150,8 @@ Hang on a minute! How did anint get to be `4231`? Was it not set to `5` in `main
$1 = 4231 gdb displays 4231
....
-Oh dear! Looking at the code, we forgot to initialize i. We meant to put
+Oh dear! Looking at the code, we forgot to initialize i.
+We meant to put
[.programlisting]
....
@@ -985,18 +1164,25 @@ main() {
...
....
-but we left the `i=5;` line out. As we did not initialize i, it had whatever number happened to be in that area of memory when the program ran, which in this case happened to be `4231`.
+but we left the `i=5;` line out.
+As we did not initialize i, it had whatever number happened to be in that area of memory when the program ran,
+which in this case happened to be `4231`.
[NOTE]
====
-The `gdb` command displays the stack frame every time we go into or out of a function, even if we are using `up` and `down` to move around the call stack. This shows the name of the function and the values of its arguments, which helps us keep track of where we are and what is going on. (The stack is a storage area where the program stores information about the arguments passed to functions and where to go when it returns from a function call.)
+The `gdb` command displays the stack frame every time we go into or out of a function, even if we are using `up` and `down` to move around the call stack.
+This shows the name of the function and the values of its arguments, which helps us keep track of where we are and what is going on.
+(The stack is a storage area where the program stores information about the arguments passed to functions and where to go when it returns from a function call.)
====
==== Examining a Core File with gdb
-A core file is basically a file which contains the complete state of the process when it crashed. In "the good old days", programmers had to print out hex listings of core files and sweat over machine code manuals, but now life is a bit easier. Incidentally, under FreeBSD and other 4.4BSD systems, a core file is called [.filename]#progname.core# instead of just [.filename]#core#, to make it clearer which program a core file belongs to.
+A core file is basically a file which contains the complete state of the process when it crashed.
+In "the good old days", programmers had to print out hex listings of core files and sweat over machine code manuals, but now life is a bit easier.
+Incidentally, under FreeBSD and other 4.4BSD systems, a core file is called [.filename]#progname.core# instead of just [.filename]#core#, to make it clearer which program a core file belongs to.
-To examine a core file, start up `gdb` in the usual way. Instead of typing `break` or `run`, type
+To examine a core file, start up `gdb` in the usual way.
+Instead of typing `break` or `run`, type
[source,bash]
....
@@ -1022,9 +1208,12 @@ Cannot access memory at address 0x7020796d.
(gdb)
....
-In this case, the program was called [.filename]#progname#, so the core file is called [.filename]#progname.core#. We can see that the program crashed due to trying to access an area in memory that was not available to it in a function called `bazz`.
+In this case, the program was called [.filename]#progname#, so the core file is called [.filename]#progname.core#.
+We can see that the program crashed due to trying to access an area in memory that was not available to it in a function called `bazz`.
-Sometimes it is useful to be able to see how a function was called, as the problem could have occurred a long way up the call stack in a complex program. `bt` causes `gdb` to print out a back-trace of the call stack:
+Sometimes it is useful to be able to see how a function was called,
+as the problem could have occurred a long way up the call stack in a complex program.
+`bt` causes `gdb` to print out a back-trace of the call stack:
[source,bash]
....
@@ -1035,11 +1224,14 @@ Sometimes it is useful to be able to see how a function was called, as the probl
(gdb)
....
-The `end()` function is called when a program crashes; in this case, the `bazz()` function was called from `main()`.
+The `end()` function is called when a program crashes;
+in this case, the `bazz()` function was called from `main()`.
==== Attaching to a Running Program with gdb
-One of the neatest features about `gdb` is that it can attach to a program that is already running. Of course, that requires sufficient permissions to do so. A common problem is stepping through a program that forks and wanting to trace the child, but the debugger will only trace the parent.
+One of the neatest features about `gdb` is that it can attach to a program that is already running.
+Of course, that requires sufficient permissions to do so.
+A common problem is stepping through a program that forks and wanting to trace the child, but the debugger will only trace the parent.
To do that, start up another `gdb`, use `ps` to find the process ID for the child, and do
@@ -1074,7 +1266,8 @@ Now all that is needed is to attach to the child, set PauseMode to `0`, and wait
=== Emacs
-Emacs is a highly customizable editor-indeed, it has been customized to the point where it is more like an operating system than an editor! Many developers and sysadmins do in fact spend practically all their time working inside Emacs, leaving it only to log out.
+Emacs is a highly customizable editor-indeed, it has been customized to the point where it is more like an operating system than an editor!
+Many developers and sysadmins do in fact spend practically all their time working inside Emacs, leaving it only to log out.
It is impossible even to summarize everything Emacs can do here, but here are some of the features of interest to developers:
@@ -1091,29 +1284,45 @@ And doubtless many more that have been overlooked.
Emacs can be installed on FreeBSD using the package:editors/emacs[] port.
-Once it is installed, start it up and do `C-h t` to read an Emacs tutorial-that means hold down kbd:[control], press kbd:[h], let go of kbd:[control], and then press kbd:[t]. (Alternatively, you can use the mouse to select [.guimenuitem]#Emacs Tutorial# from the menu:Help[] menu.)
+Once it is installed, start it up and do `C-h t` to read an Emacs tutorial-that means hold down kbd:[control], press kbd:[h], let go of kbd:[control], and then press kbd:[t].
+(Alternatively, you can use the mouse to select [.guimenuitem]#Emacs Tutorial# from the menu:Help[] menu.)
-Although Emacs does have menus, it is well worth learning the key bindings, as it is much quicker when you are editing something to press a couple of keys than to try to find the mouse and then click on the right place. And, when you are talking to seasoned Emacs users, you will find they often casually throw around expressions like "`M-x replace-s RET foo RET bar RET`" so it is useful to know what they mean. And in any case, Emacs has far too many useful functions for them to all fit on the menu bars.
+Although Emacs does have menus, it is well worth learning the key bindings,
+as it is much quicker when you are editing something to press a couple of keys than to try to find the mouse and then click on the right place.
+And, when you are talking to seasoned Emacs users, you will find they often casually throw around expressions like "`M-x replace-s RET foo RET bar RET`" so it is useful to know what they mean.
+And in any case, Emacs has far too many useful functions for them to all fit on the menu bars.
-Fortunately, it is quite easy to pick up the key-bindings, as they are displayed next to the menu item. My advice is to use the menu item for, say, opening a file until you understand how it works and feel confident with it, then try doing C-x C-f. When you are happy with that, move on to another menu command.
+Fortunately, it is quite easy to pick up the key-bindings, as they are displayed next to the menu item.
+My advice is to use the menu item for, say, opening a file until you understand how it works and feel confident with it, then try doing `C-x C-f`.
+When you are happy with that, move on to another menu command.
-If you cannot remember what a particular combination of keys does, select [.guimenuitem]#Describe Key# from the menu:Help[] menu and type it in-Emacs will tell you what it does. You can also use the [.guimenuitem]#Command Apropos# menu item to find out all the commands which contain a particular word in them, with the key binding next to it.
+If you cannot remember what a particular combination of keys does, select [.guimenuitem]#Describe Key# from the menu:Help[] menu and type it in-Emacs will tell you what it does.
+You can also use the [.guimenuitem]#Command Apropos# menu item to find out all the commands which contain a particular word in them, with the key binding next to it.
-By the way, the expression above means hold down the kbd:[Meta] key, press kbd:[x], release the kbd:[Meta] key, type `replace-s` (short for `replace-string`-another feature of Emacs is that you can abbreviate commands), press the kbd:[return] key, type `foo` (the string you want replaced), press the kbd:[return] key, type bar (the string you want to replace `foo` with) and press kbd:[return] again. Emacs will then do the search-and-replace operation you have just requested.
+By the way, the expression above means hold down the kbd:[Meta] key, press kbd:[x], release the kbd:[Meta] key, type `replace-s` (short for `replace-string`-another feature of Emacs is that you can abbreviate commands), press the kbd:[return] key, type `foo` (the string you want replaced), press the kbd:[return] key, type `bar` (the string you want to replace `foo` with) and press kbd:[return] again.
+Emacs will then do the search-and-replace operation you have just requested.
-If you are wondering what on earth kbd:[Meta] is, it is a special key that many UNIX(R) workstations have. Unfortunately, PC's do not have one, so it is usually kbd:[alt] (or if you are unlucky, the kbd:[escape] key).
+If you are wondering what on earth kbd:[Meta] is, it is a special key that many UNIX(R) workstations have.
+Unfortunately, PCs do not have one, so it is usually kbd:[alt] (or if you are unlucky, the kbd:[escape] key).
-Oh, and to get out of Emacs, do `C-x C-c` (that means hold down the kbd:[control] key, press kbd:[x], press kbd:[c] and release the kbd:[control] key). If you have any unsaved files open, Emacs will ask you if you want to save them. (Ignore the bit in the documentation where it says `C-z` is the usual way to leave Emacs-that leaves Emacs hanging around in the background, and is only really useful if you are on a system which does not have virtual terminals).
+Oh, and to get out of Emacs, do `C-x C-c` (that means hold down the kbd:[control] key, press kbd:[x], press kbd:[c] and release the kbd:[control] key).
+If you have any unsaved files open, Emacs will ask you if you want to save them.
+(Ignore the bit in the documentation where it says `C-z` is the usual way to leave Emacs-that leaves Emacs hanging around in the background, and is only really useful if you are on a system which does not have virtual terminals).
=== Configuring Emacs
Emacs does many wonderful things; some of them are built in, some of them need to be configured.
-Instead of using a proprietary macro language for configuration, Emacs uses a version of Lisp specially adapted for editors, known as Emacs Lisp. Working with Emacs Lisp can be quite helpful if you want to go on and learn something like Common Lisp. Emacs Lisp has many features of Common Lisp, although it is considerably smaller (and thus easier to master).
+Instead of using a proprietary macro language for configuration, Emacs uses a version of Lisp specially adapted for editors, known as Emacs Lisp.
+Working with Emacs Lisp can be quite helpful if you want to go on and learn something like Common Lisp.
+Emacs Lisp has many features of Common Lisp, although it is considerably smaller (and thus easier to master).
The best way to learn Emacs Lisp is to download the link:ftp://ftp.gnu.org/old-gnu/emacs/elisp-manual-19-2.4.tar.gz[Emacs Tutorial]
-However, there is no need to actually know any Lisp to get started with configuring Emacs, as I have included a sample [.filename]#.emacs#, which should be enough to get you started. Just copy it into your home directory and restart Emacs if it is already running; it will read the commands from the file and (hopefully) give you a useful basic setup.
+However, there is no need to actually know any Lisp to get started with configuring Emacs,
+as I have included a sample [.filename]#.emacs#, which should be enough to get you started.
+Just copy it into your home directory and restart Emacs if it is already running;
+it will read the commands from the file and (hopefully) give you a useful basic setup.
=== A Sample [.filename]#.emacs#
@@ -1427,14 +1636,17 @@ and then you can edit the file in your Emacs!footnote:[Many Emacs users set thei
Now, this is all very well if you only want to program in the languages already catered for in [.filename]#.emacs# (C, C++, Perl, Lisp and Scheme), but what happens if a new language called "whizbang" comes out, full of exciting features?
-The first thing to do is find out if whizbang comes with any files that tell Emacs about the language. These usually end in [.filename]#.el#, short for "Emacs Lisp". For example, if whizbang is a FreeBSD port, we can locate these files by doing
+The first thing to do is find out if whizbang comes with any files that tell Emacs about the language.
+These usually end in [.filename]#.el#, short for "Emacs Lisp".
+For example, if whizbang is a FreeBSD port, we can locate these files by doing
[source,bash]
....
% find /usr/ports/lang/whizbang -name "*.el" -print
....
-and install them by copying them into the Emacs site Lisp directory. On FreeBSD, this is [.filename]#/usr/local/share/emacs/site-lisp#.
+and install them by copying them into the Emacs site Lisp directory.
+On FreeBSD, this is [.filename]#/usr/local/share/emacs/site-lisp#.
So for example, if the output from the find command was
@@ -1450,7 +1662,9 @@ we would do
# cp /usr/ports/lang/whizbang/work/misc/whizbang.el /usr/local/share/emacs/site-lisp
....
-Next, we need to decide what extension whizbang source files have. Let us say for the sake of argument that they all end in [.filename]#.wiz#. We need to add an entry to our [.filename]#.emacs# to make sure Emacs will be able to use the information in [.filename]#whizbang.el#.
+Next, we need to decide what extension whizbang source files have.
+Let us say for the sake of argument that they all end in [.filename]#.wiz#.
+We need to add an entry to our [.filename]#.emacs# to make sure Emacs will be able to use the information in [.filename]#whizbang.el#.
Find the auto-mode-alist entry in [.filename]#.emacs# and add a line for whizbang, such as:
@@ -1465,7 +1679,8 @@ Find the auto-mode-alist entry in [.filename]#.emacs# and add a line for whizban
This means that Emacs will automatically go into `whizbang-mode` when you edit a file ending in [.filename]#.wiz#.
-Just below this, you will find the font-lock-auto-mode-list entry. Add `whizbang-mode` to it like so:
+Just below this, you will find the font-lock-auto-mode-list entry.
+Add `whizbang-mode` to it like so:
[.programlisting]
....
@@ -1477,7 +1692,8 @@ Just below this, you will find the font-lock-auto-mode-list entry. Add `whizbang
This means that Emacs will always enable `font-lock-mode` (ie syntax highlighting) when editing a [.filename]#.wiz# file.
-And that is all that is needed. If there is anything else you want done automatically when you open up [.filename]#.wiz#, you can add a `whizbang-mode hook` (see `my-scheme-mode-hook` for a simple example that adds `auto-indent`).
+And that is all that is needed.
+If there is anything else you want done automatically when you open up [.filename]#.wiz#, you can add a `whizbang-mode hook` (see `my-scheme-mode-hook` for a simple example that adds `auto-indent`).
[[tools-reading]]
== Further Reading
diff --git a/documentation/content/en/books/developers-handbook/x86/_index.adoc b/documentation/content/en/books/developers-handbook/x86/_index.adoc
index 54794e1fe6..9d463b5e49 100644
--- a/documentation/content/en/books/developers-handbook/x86/_index.adoc
+++ b/documentation/content/en/books/developers-handbook/x86/_index.adoc
@@ -37,15 +37,21 @@ _This chapter was written by {stanislav}._
[[x86-intro]]
== Synopsis
-Assembly language programming under UNIX(R) is highly undocumented. It is generally assumed that no one would ever want to use it because various UNIX(R) systems run on different microprocessors, so everything should be written in C for portability.
+Assembly language programming under UNIX(R) is highly undocumented.
+It is generally assumed that no one would ever want to use it because various UNIX(R) systems run on different microprocessors, so everything should be written in C for portability.
-In reality, C portability is quite a myth. Even C programs need to be modified when ported from one UNIX(R) to another, regardless of what processor each runs on. Typically, such a program is full of conditional statements depending on the system it is compiled for.
+In reality, C portability is quite a myth.
+Even C programs need to be modified when ported from one UNIX(R) to another, regardless of what processor each runs on.
+Typically, such a program is full of conditional statements depending on the system it is compiled for.
Even if we believe that all of UNIX(R) software should be written in C, or some other high-level language, we still need assembly language programmers: Who else would write the section of C library that accesses the kernel?
In this chapter I will attempt to show you how you can use assembly language writing UNIX(R) programs, specifically under FreeBSD.
-This chapter does not explain the basics of assembly language. There are enough resources about that (for a complete online course in assembly language, see Randall Hyde's http://webster.cs.ucr.edu/[Art of Assembly Language]; or if you prefer a printed book, take a look at Jeff Duntemann's Assembly Language Step-by-Step (ISBN: 0471375233). However, once the chapter is finished, any assembly language programmer will be able to write programs for FreeBSD quickly and efficiently.
+This chapter does not explain the basics of assembly language.
+There are enough resources about that (for a complete online course in assembly language, see Randall Hyde's http://webster.cs.ucr.edu/[Art of Assembly Language];
+or if you prefer a printed book, take a look at Jeff Duntemann's Assembly Language Step-by-Step (ISBN: 0471375233)).
+However, once the chapter is finished, any assembly language programmer will be able to write programs for FreeBSD quickly and efficiently.
Copyright (R) 2000-2001 G. Adam Stanislav. All rights reserved.
@@ -57,18 +63,25 @@ Copyright (R) 2000-2001 G. Adam Stanislav. All rights reserved.
The most important tool for assembly language programming is the assembler, the software that converts assembly language code into machine language.
-Two very different assemblers are available for FreeBSD. One is man:as[1], which uses the traditional UNIX(R) assembly language syntax. It comes with the system.
+Two very different assemblers are available for FreeBSD.
+One is man:as[1], which uses the traditional UNIX(R) assembly language syntax.
+It comes with the system.
-The other is /usr/ports/devel/nasm. It uses the Intel syntax. Its main advantage is that it can assemble code for many operating systems. It needs to be installed separately, but is completely free.
+The other is /usr/ports/devel/nasm.
+It uses the Intel syntax.
+Its main advantage is that it can assemble code for many operating systems.
+It needs to be installed separately, but is completely free.
-This chapter uses nasm syntax because most assembly language programmers coming to FreeBSD from other operating systems will find it easier to understand. And, because, quite frankly, that is what I am used to.
+This chapter uses nasm syntax because most assembly language programmers coming to FreeBSD from other operating systems will find it easier to understand.
+And, because, quite frankly, that is what I am used to.
[[x86-the-linker]]
=== The Linker
The output of the assembler, like that of any compiler, needs to be linked to form an executable file.
-The standard man:ld[1] linker comes with FreeBSD. It works with the code assembled with either assembler.
+The standard man:ld[1] linker comes with FreeBSD.
+It works with the code assembled with either assembler.
[[x86-system-calls]]
== System Calls
@@ -76,11 +89,14 @@ The standard man:ld[1] linker comes with FreeBSD. It works with the code assemb
[[x86-default-calling-convention]]
=== Default Calling Convention
-By default, the FreeBSD kernel uses the C calling convention. Further, although the kernel is accessed using `int 80h`, it is assumed the program will call a function that issues `int 80h`, rather than issuing `int 80h` directly.
+By default, the FreeBSD kernel uses the C calling convention.
+Further, although the kernel is accessed using `int 80h`, it is assumed the program will call a function that issues `int 80h`, rather than issuing `int 80h` directly.
-This convention is very convenient, and quite superior to the Microsoft(R) convention used by MS-DOS(R). Why? Because the UNIX(R) convention allows any program written in any language to access the kernel.
+This convention is very convenient, and quite superior to the Microsoft(R) convention used by MS-DOS(R).
+Why? Because the UNIX(R) convention allows any program written in any language to access the kernel.
-An assembly language program can do that as well. For example, we could open a file:
+An assembly language program can do that as well.
+For example, we could open a file:
[.programlisting]
....
@@ -98,9 +114,12 @@ open:
ret
....
-This is a very clean and portable way of coding. If you need to port the code to a UNIX(R) system which uses a different interrupt, or a different way of passing parameters, all you need to change is the kernel procedure.
+This is a very clean and portable way of coding.
+If you need to port the code to a UNIX(R) system which uses a different interrupt, or a different way of passing parameters, all you need to change is the kernel procedure.
-But assembly language programmers like to shave off cycles. The above example requires a `call/ret` combination. We can eliminate it by ``push``ing an extra dword:
+But assembly language programmers like to shave off cycles.
+The above example requires a `call/ret` combination.
+We can eliminate it by ``push``ing an extra dword:
[.programlisting]
....
@@ -119,9 +138,14 @@ The `5` that we have placed in `EAX` identifies the kernel function, in this cas
[[x86-alternate-calling-convention]]
=== Alternate Calling Convention
-FreeBSD is an extremely flexible system. It offers other ways of calling the kernel. For it to work, however, the system must have Linux emulation installed.
+FreeBSD is an extremely flexible system.
+It offers other ways of calling the kernel.
+For this to work, however, the system must have Linux emulation installed.
-Linux is a UNIX(R) like system. However, its kernel uses the same system-call convention of passing parameters in registers MS-DOS(R) does. As with the UNIX(R) convention, the function number is placed in `EAX`. The parameters, however, are not passed on the stack but in `EBX, ECX, EDX, ESI, EDI, EBP`:
+Linux is a UNIX(R) like system.
+However, its kernel uses the same system-call convention as MS-DOS(R): passing parameters in registers.
+As with the UNIX(R) convention, the function number is placed in `EAX`.
+The parameters, however, are not passed on the stack but in `EBX, ECX, EDX, ESI, EDI, EBP`:
[.programlisting]
....
@@ -133,11 +157,15 @@ open:
int 80h
....
-This convention has a great disadvantage over the UNIX(R) way, at least as far as assembly language programming is concerned: Every time you make a kernel call you must `push` the registers, then `pop` them later. This makes your code bulkier and slower. Nevertheless, FreeBSD gives you a choice.
+This convention has a great disadvantage over the UNIX(R) way, at least as far as assembly language programming is concerned:
+Every time you make a kernel call you must `push` the registers, then `pop` them later.
+This makes your code bulkier and slower.
+Nevertheless, FreeBSD gives you a choice.
-If you do choose the Linux convention, you must let the system know about it. After your program is assembled and linked, you need to brand the executable:
+If you do choose the Linux convention, you must let the system know about it.
+After your program is assembled and linked, you need to brand the executable:
-[source,bash]
+[source,shell]
....
% brandelf -t Linux filename
....
@@ -145,21 +173,27 @@ If you do choose the Linux convention, you must let the system know about it. Af
[[x86-use-geneva]]
=== Which Convention Should You Use?
-If you are coding specifically for FreeBSD, you should always use the UNIX(R) convention: It is faster, you can store global variables in registers, you do not have to brand the executable, and you do not impose the installation of the Linux emulation package on the target system.
+If you are coding specifically for FreeBSD, you should always use the UNIX(R) convention:
+It is faster, you can store global variables in registers, you do not have to brand the executable,
+and you do not impose the installation of the Linux emulation package on the target system.
-If you want to create portable code that can also run on Linux, you will probably still want to give the FreeBSD users as efficient a code as possible. I will show you how you can accomplish that after I have explained the basics.
+If you want to create portable code that can also run on Linux, you will probably still want to give FreeBSD users code that is as efficient as possible.
+I will show you how you can accomplish that after I have explained the basics.
[[x86-call-numbers]]
=== Call Numbers
-To tell the kernel which system service you are calling, place its number in `EAX`. Of course, you need to know what the number is.
+To tell the kernel which system service you are calling, place its number in `EAX`.
+Of course, you need to know what the number is.
[[x86-the-syscalls-file]]
==== The [.filename]#syscalls# File
-The numbers are listed in [.filename]#syscalls#. `locate syscalls` finds this file in several different formats, all produced automatically from [.filename]#syscalls.master#.
+The numbers are listed in [.filename]#syscalls#.
+`locate syscalls` finds this file in several different formats, all produced automatically from [.filename]#syscalls.master#.
-You can find the master file for the default UNIX(R) calling convention in [.filename]#/usr/src/sys/kern/syscalls.master#. If you need to use the other convention implemented in the Linux emulation mode, read [.filename]#/usr/src/sys/i386/linux/syscalls.master#.
+You can find the master file for the default UNIX(R) calling convention in [.filename]#/usr/src/sys/kern/syscalls.master#.
+If you need to use the other convention implemented in the Linux emulation mode, read [.filename]#/usr/src/sys/i386/linux/syscalls.master#.
[NOTE]
====
@@ -182,47 +216,56 @@ etc...
It is the leftmost column that tells us the number to place in `EAX`.
-The rightmost column tells us what parameters to `push`. They are ``push``ed _from right to left_.
+The rightmost column tells us what parameters to `push`.
+They are ``push``ed _from right to left_.
For example, to `open` a file, we need to `push` the `mode` first, then `flags`, then the address at which the `path` is stored.
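Putting the two together, a caller could invoke the `open` wrapper shown earlier like this; the pushes below follow the rightmost [.filename]#syscalls# column read from right to left (the label and the `path`/`flags`/`mode` symbols are illustrative, not taken from the handbook's listings):

```nasm
open:
	mov	eax, 5		; SYS_open, from the leftmost syscalls column
	int	80h
	ret

	; caller: open(path, flags, mode) -- parameters pushed right to left
	push	dword mode
	push	dword flags
	push	dword path
	call	open
	add	esp, byte 12	; C convention: the caller cleans up the stack
```

Note that, as with any C-convention call, the three dwords stay on the stack until the caller removes them after the wrapper returns.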
[[x86-return-values]]
== Return Values
-A system call would not be useful most of the time if it did not return some kind of a value: The file descriptor of an open file, the number of bytes read to a buffer, the system time, etc.
+A system call would not be useful most of the time if it did not return some kind of a value:
+The file descriptor of an open file, the number of bytes read to a buffer, the system time, etc.
-Additionally, the system needs to inform us if an error occurs: A file does not exist, system resources are exhausted, we passed an invalid parameter, etc.
+Additionally, the system needs to inform us if an error occurs:
+A file does not exist, system resources are exhausted, we passed an invalid parameter, etc.
[[x86-man-pages]]
=== Man Pages
-The traditional place to look for information about various system calls under UNIX(R) systems are the manual pages. FreeBSD describes its system calls in section 2, sometimes in section 3.
+The traditional place to look for information about various system calls under UNIX(R) systems is the manual pages.
+FreeBSD describes its system calls in section 2, sometimes in section 3.
For example, man:open[2] says:
[.blockquote]
-If successful, `open()` returns a non-negative integer, termed a file descriptor. It returns `-1` on failure, and sets `errno` to indicate the error.
+If successful, `open()` returns a non-negative integer, termed a file descriptor.
+It returns `-1` on failure, and sets `errno` to indicate the error.
The assembly language programmer new to UNIX(R) and FreeBSD will immediately ask the puzzling question: Where is `errno` and how do I get to it?
[NOTE]
====
-The information presented in the manual pages applies to C programs. The assembly language programmer needs additional information.
+The information presented in the manual pages applies to C programs.
+The assembly language programmer needs additional information.
====
[[x86-where-return-values]]
=== Where Are the Return Values?
-Unfortunately, it depends... For most system calls it is in `EAX`, but not for all. A good rule of thumb, when working with a system call for the first time, is to look for the return value in `EAX`. If it is not there, you need further research.
+Unfortunately, it depends...
+For most system calls it is in `EAX`, but not for all.
+A good rule of thumb, when working with a system call for the first time, is to look for the return value in `EAX`.
+If it is not there, you need further research.
[NOTE]
====
-I am aware of one system call that returns the value in `EDX`: `SYS_fork`. All others I have worked with use `EAX`. But I have not worked with them all yet.
+I am aware of one system call that returns the value in `EDX`: `SYS_fork`.
+All others I have worked with use `EAX`.
+But I have not worked with them all yet.
====
[TIP]
====
-
If you cannot find the answer here or anywhere else, study libc source code and see how it interfaces with the kernel.
====
@@ -231,30 +274,40 @@ If you cannot find the answer here or anywhere else, study libc source code and
Actually, nowhere...
-`errno` is part of the C language, not the UNIX(R) kernel. When accessing kernel services directly, the error code is returned in `EAX`, the same register the proper return value generally ends up in.
+`errno` is part of the C language, not the UNIX(R) kernel.
+When accessing kernel services directly, the error code is returned in `EAX`, the same register the proper return value generally ends up in.
-This makes perfect sense. If there is no error, there is no error code. If there is an error, there is no return value. One register can contain either.
+This makes perfect sense.
+If there is no error, there is no error code.
+If there is an error, there is no return value.
+One register can contain either.
[[x86-how-to-know-error]]
=== Determining an Error Occurred
When using the standard FreeBSD calling convention, the `carry flag` is cleared upon success, set upon failure.
-When using the Linux emulation mode, the signed value in `EAX` is non-negative upon success, and contains the return value. In case of an error, the value is negative, i.e., `-errno`.
+When using the Linux emulation mode, the signed value in `EAX` is non-negative upon success, and contains the return value.
+In case of an error, the value is negative, i.e., `-errno`.
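In C terms, a wrapper around a Linux-style raw result might look like the following sketch (`unwrap_linux_result` is a hypothetical illustrative name; the raw value stands in for what comes back in `EAX`):

```c
#include <errno.h>

/* Sketch: convert a Linux-style raw return value, where a negative
 * result means -errno, into the C convention of returning -1 and
 * setting errno.  The -4095..-1 range is the error band used by the
 * kernel ABI. */
static long unwrap_linux_result(long raw)
{
    if (raw < 0 && raw > -4096) {
        errno = (int)-raw;   /* e.g., -2 becomes errno == ENOENT */
        return -1;
    }
    return raw;              /* success: the value itself */
}
```

The FreeBSD convention needs no such unwrapping in assembly: the carry flag alone tells success from failure.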
[[x86-portable-code]]
== Creating Portable Code
-Portability is generally not one of the strengths of assembly language. Yet, writing assembly language programs for different platforms is possible, especially with nasm. I have written assembly language libraries that can be assembled for such different operating systems as Windows(R) and FreeBSD.
+Portability is generally not one of the strengths of assembly language.
+Yet, writing assembly language programs for different platforms is possible, especially with nasm.
+I have written assembly language libraries that can be assembled for such different operating systems as Windows(R) and FreeBSD.
It is all the more possible when you want your code to run on two platforms which, while different, are based on similar architectures.
-For example, FreeBSD is UNIX(R), Linux is UNIX(R) like. I only mentioned three differences between them (from an assembly language programmer's perspective): The calling convention, the function numbers, and the way of returning values.
+For example, FreeBSD is UNIX(R), while Linux is UNIX(R)-like.
+I only mentioned three differences between them (from an assembly language programmer's perspective):
+The calling convention, the function numbers, and the way of returning values.
[[x86-deal-with-function-numbers]]
=== Dealing with Function Numbers
-In many cases the function numbers are the same. However, even when they are not, the problem is easy to deal with: Instead of using numbers in your code, use constants which you have declared differently depending on the target architecture:
+In many cases the function numbers are the same.
+However, even when they are not, the problem is easy to deal with:
+Instead of using numbers in your code, use constants which you have declared differently depending on the target architecture:
[.programlisting]
....
@@ -323,16 +376,22 @@ kernel:
[[x86-deal-with-other-portability]]
=== Dealing with Other Portability Issues
-The above solutions can handle most cases of writing code portable between FreeBSD and Linux. Nevertheless, with some kernel services the differences are deeper.
+The above solutions can handle most cases of writing code portable between FreeBSD and Linux.
+Nevertheless, with some kernel services the differences are deeper.
-In that case, you need to write two different handlers for those particular system calls, and use conditional assembly. Luckily, most of your code does something other than calling the kernel, so usually you will only need a few such conditional sections in your code.
+In that case, you need to write two different handlers for those particular system calls, and use conditional assembly.
+Luckily, most of your code does something other than calling the kernel, so usually you will only need a few such conditional sections in your code.
[[x86-portable-library]]
=== Using a Library
-You can avoid portability issues in your main code altogether by writing a library of system calls. Create a separate library for FreeBSD, a different one for Linux, and yet other libraries for more operating systems.
+You can avoid portability issues in your main code altogether by writing a library of system calls.
+Create a separate library for FreeBSD, a different one for Linux, and yet other libraries for more operating systems.
-In your library, write a separate function (or procedure, if you prefer the traditional assembly language terminology) for each system call. Use the C calling convention of passing parameters. But still use `EAX` to pass the call number in. In that case, your FreeBSD library can be very simple, as many seemingly different functions can be just labels to the same code:
+In your library, write a separate function (or procedure, if you prefer the traditional assembly language terminology) for each system call.
+Use the C calling convention of passing parameters.
+But still use `EAX` to pass the call number in.
+In that case, your FreeBSD library can be very simple, as many seemingly different functions can be just labels to the same code:
[.programlisting]
....
@@ -343,7 +402,8 @@ sys.close:
ret
....
-Your Linux library will require more different functions. But even here you can group system calls using the same number of parameters:
+Your Linux library will require more different functions.
+But even here you can group system calls using the same number of parameters:
[.programlisting]
....
@@ -370,7 +430,11 @@ sys.err:
ret
....
-The library approach may seem inconvenient at first because it requires you to produce a separate file your code depends on. But it has many advantages: For one, you only need to write it once and can use it for all your programs. You can even let other assembly language programmers use it, or perhaps use one written by someone else. But perhaps the greatest advantage of the library is that your code can be ported to other systems, even by other programmers, by simply writing a new library without any changes to your code.
+The library approach may seem inconvenient at first because it requires you to produce a separate file your code depends on.
+But it has many advantages: For one, you only need to write it once and can use it for all your programs.
+You can even let other assembly language programmers use it, or perhaps use one written by someone else.
+But perhaps the greatest advantage of the library is that your code can be ported to other systems, even by other programmers, by simply writing a new library without any changes to your code.
If you do not like the idea of having a library, you can at least place all your system calls in a separate assembly language file and link it with your main program. Here, again, all porters have to do is create a new object file to link with your main program.
@@ -379,11 +443,13 @@ If you do not like the idea of having a library, you can at least place all your
If you are releasing your software as (or with) source code, you can use macros and place them in a separate file, which you include in your code.
-Porters of your software will simply write a new include file. No library or external object file is necessary, yet your code is portable without any need to edit the code.
+Porters of your software will simply write a new include file.
+No library or external object file is necessary, yet your code is portable without any need to edit the code.
[NOTE]
====
-This is the approach we will use throughout this chapter. We will name our include file [.filename]#system.inc#, and add to it whenever we deal with a new system call.
+This is the approach we will use throughout this chapter.
+We will name our include file [.filename]#system.inc#, and add to it whenever we deal with a new system call.
====
We can start our [.filename]#system.inc# by declaring the standard file descriptors:
@@ -428,7 +494,8 @@ We create a macro which takes one argument, the syscall number:
%endmacro
....
-Finally, we create macros for each syscall. These macros take no arguments.
+Finally, we create macros for each syscall.
+These macros take no arguments.
[.programlisting]
....
@@ -451,7 +518,8 @@ Finally, we create macros for each syscall. These macros take no arguments.
; [etc...]
....
-Go ahead, enter it into your editor and save it as [.filename]#system.inc#. We will add more to it as we discuss more syscalls.
+Go ahead, enter it into your editor and save it as [.filename]#system.inc#.
+We will add more to it as we discuss more syscalls.
[[x86-first-program]]
== Our First Program
@@ -480,30 +548,40 @@ We are now ready for our first program, the mandatory Hello, World!
Here is what it does: Line 1 includes the defines, the macros, and the code from [.filename]#system.inc#.
-Lines 3-5 are the data: Line 3 starts the data section/segment. Line 4 contains the string "Hello, World!" followed by a new line (`0Ah`). Line 5 creates a constant that contains the length of the string from line 4 in bytes.
+Lines 3-5 are the data: Line 3 starts the data section/segment.
+Line 4 contains the string "Hello, World!" followed by a new line (`0Ah`).
+Line 5 creates a constant that contains the length of the string from line 4 in bytes.
-Lines 7-16 contain the code. Note that FreeBSD uses the _elf_ file format for its executables, which requires every program to start at the point labeled `_start` (or, more precisely, the linker expects that). This label has to be global.
+Lines 7-16 contain the code.
+Note that FreeBSD uses the _elf_ file format for its executables, which requires every program to start at the point labeled `_start` (or, more precisely, the linker expects that).
+This label has to be global.
Lines 10-13 ask the system to write `hbytes` bytes of the `hello` string to `stdout`.
-Lines 15-16 ask the system to end the program with the return value of `0`. The `SYS_exit` syscall never returns, so the code ends there.
+Lines 15-16 ask the system to end the program with the return value of `0`.
+The `SYS_exit` syscall never returns, so the code ends there.
[NOTE]
====
-If you have come to UNIX(R) from MS-DOS(R) assembly language background, you may be used to writing directly to the video hardware. You will never have to worry about this in FreeBSD, or any other flavor of UNIX(R). As far as you are concerned, you are writing to a file known as [.filename]#stdout#. This can be the video screen, or a telnet terminal, or an actual file, or even the input of another program. Which one it is, is for the system to figure out.
+If you have come to UNIX(R) from an MS-DOS(R) assembly language background, you may be used to writing directly to the video hardware.
+You will never have to worry about this in FreeBSD, or any other flavor of UNIX(R).
+As far as you are concerned, you are writing to a file known as [.filename]#stdout#.
+This can be the video screen, or a telnet terminal, or an actual file, or even the input of another program.
+Which one it is, is for the system to figure out.
====
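For comparison, the heart of the program expressed through the C library's thin wrapper over the same kernel service might be sketched like this (`say_hello` is an illustrative name; the assembly version invokes `write` directly via `int 80h`):

```c
#include <unistd.h>   /* write(2), the same service the assembly calls */

/* Mirror of lines 10-13 of the assembly version: ask the system to
 * write the bytes of hello to stdout (file descriptor 1).  Returns
 * the number of bytes the kernel reports written. */
static ssize_t say_hello(void)
{
    static const char hello[] = "Hello, World!\n";
    return write(STDOUT_FILENO, hello, sizeof(hello) - 1);
}
```

Either way, the kernel decides where the bytes of [.filename]#stdout# actually end up.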
[[x86-assemble-1]]
=== Assembling the Code
-Type the code (except the line numbers) in an editor, and save it in a file named [.filename]#hello.asm#. You need nasm to assemble it.
+Type the code (except the line numbers) in an editor, and save it in a file named [.filename]#hello.asm#.
+You need nasm to assemble it.
[[x86-get-nasm]]
==== Installing nasm
If you do not have nasm, type:
-[source,bash]
+[source,shell]
....
% su
Password:your root password
@@ -519,12 +597,13 @@ Either way, FreeBSD will automatically download nasm from the Internet, compile
[NOTE]
====
-If your system is not FreeBSD, you need to get nasm from its https://sourceforge.net/projects/nasm[home page]. You can still use it to assemble FreeBSD code.
+If your system is not FreeBSD, you need to get nasm from its https://sourceforge.net/projects/nasm[home page].
+You can still use it to assemble FreeBSD code.
====
Now you can assemble, link, and run the code:
-[source,bash]
+[source,shell]
....
% nasm -f elf hello.asm
% ld -s -o hello hello.o
@@ -538,7 +617,8 @@ Hello, World!
A common type of UNIX(R) application is a filter: a program that reads data from [.filename]#stdin#, processes it somehow, then writes the result to [.filename]#stdout#.
-In this chapter, we shall develop a simple filter, and learn how to read from [.filename]#stdin# and write to [.filename]#stdout#. This filter will convert each byte of its input into a hexadecimal number followed by a blank space.
+In this chapter, we shall develop a simple filter, and learn how to read from [.filename]#stdin# and write to [.filename]#stdout#.
+This filter will convert each byte of its input into a hexadecimal number followed by a blank space.
[.programlisting]
....
@@ -583,26 +663,38 @@ _start:
sys.exit
....
-In the data section we create an array called `hex`. It contains the 16 hexadecimal digits in ascending order. The array is followed by a buffer which we will use for both input and output. The first two bytes of the buffer are initially set to `0`. This is where we will write the two hexadecimal digits (the first byte also is where we will read the input). The third byte is a space.
+In the data section we create an array called `hex`.
+It contains the 16 hexadecimal digits in ascending order.
+The array is followed by a buffer which we will use for both input and output.
+The first two bytes of the buffer are initially set to `0`.
+This is where we will write the two hexadecimal digits (the first byte is also where we will read the input).
+The third byte is a space.
The code section consists of four parts: Reading the byte, converting it to a hexadecimal number, writing the result, and eventually exiting the program.
-To read the byte, we ask the system to read one byte from [.filename]#stdin#, and store it in the first byte of the `buffer`. The system returns the number of bytes read in `EAX`. This will be `1` while data is coming, or `0`, when no more input data is available. Therefore, we check the value of `EAX`. If it is `0`, we jump to `.done`, otherwise we continue.
+To read the byte, we ask the system to read one byte from [.filename]#stdin#, and store it in the first byte of the `buffer`.
+The system returns the number of bytes read in `EAX`.
+This will be `1` while data is coming, or `0`, when no more input data is available.
+Therefore, we check the value of `EAX`.
+If it is `0`, we jump to `.done`, otherwise we continue.
[NOTE]
====
For simplicity's sake, we are ignoring the possibility of an error condition at this time.
====
-The hexadecimal conversion reads the byte from the `buffer` into `EAX`, or actually just `AL`, while clearing the remaining bits of `EAX` to zeros. We also copy the byte to `EDX` because we need to convert the upper four bits (nibble) separately from the lower four bits. We store the result in the first two bytes of the buffer.
+The hexadecimal conversion reads the byte from the `buffer` into `EAX`, or actually just `AL`, while clearing the remaining bits of `EAX` to zeros.
+We also copy the byte to `EDX` because we need to convert the upper four bits (nibble) separately from the lower four bits.
+We store the result in the first two bytes of the buffer.
-Next, we ask the system to write the three bytes of the buffer, i.e., the two hexadecimal digits and the blank space, to [.filename]#stdout#. We then jump back to the beginning of the program and process the next byte.
+Next, we ask the system to write the three bytes of the buffer, i.e., the two hexadecimal digits and the blank space, to [.filename]#stdout#.
+We then jump back to the beginning of the program and process the next byte.
Once there is no more input left, we ask the system to exit our program, returning a zero, which is the traditional value meaning the program was successful.
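The conversion step, the table lookup on each nibble, can be sketched in C as follows (illustrative names; the assembly does the same lookup using the `hex` array with `AL` and `EDX`):

```c
/* Sketch of the conversion: split a byte into its upper and lower
 * nibbles and look each up in the same table the assembly version
 * keeps in its data section. */
static const char hex_digits[16] = "0123456789ABCDEF";

static void byte_to_hex(unsigned char b, char out[2])
{
    out[0] = hex_digits[(b >> 4) & 0x0F];  /* upper four bits */
    out[1] = hex_digits[b & 0x0F];         /* lower four bits */
}
```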
Go ahead, and save the code in a file named [.filename]#hex.asm#, then type the following (the `^D` means press the control key and type `D` while holding the control key down):
-[source,bash]
+[source,shell]
....
% nasm -f elf hex.asm
% ld -s -o hex hex.o
@@ -614,10 +706,13 @@ Hello, World!
[NOTE]
====
-If you are migrating to UNIX(R) from MS-DOS(R), you may be wondering why each line ends with `0A` instead of `0D 0A`. This is because UNIX(R) does not use the cr/lf convention, but a "new line" convention, which is `0A` in hexadecimal.
+If you are migrating to UNIX(R) from MS-DOS(R), you may be wondering why each line ends with `0A` instead of `0D 0A`.
+This is because UNIX(R) does not use the CR/LF convention, but a "new line" convention, which is `0A` in hexadecimal.
====
-Can we improve this? Well, for one, it is a bit confusing because once we have converted a line of text, our input no longer starts at the beginning of the line. We can modify it to print a new line instead of a space after each `0A`:
+Can we improve this?
+Well, for one, it is a bit confusing because once we have converted a line of text, our input no longer starts at the beginning of the line.
+We can modify it to print a new line instead of a space after each `0A`:
[.programlisting]
....
@@ -671,13 +766,16 @@ _start:
sys.exit
....
-We have stored the space in the `CL` register. We can do this safely because, unlike Microsoft(R) Windows(R), UNIX(R) system calls do not modify the value of any register they do not use to return a value in.
+We have stored the space in the `CL` register.
+We can do this safely because, unlike Microsoft(R) Windows(R), UNIX(R) system calls do not modify the value of any register they do not use to return a value in.
-That means we only need to set `CL` once. We have, therefore, added a new label `.loop` and jump to it for the next byte instead of jumping at `_start`. We have also added the `.hex` label so we can either have a blank space or a new line as the third byte of the `buffer`.
+That means we only need to set `CL` once.
+We have, therefore, added a new label `.loop` and jump to it for the next byte instead of jumping to `_start`.
+We have also added the `.hex` label so we can either have a blank space or a new line as the third byte of the `buffer`.
Once you have changed [.filename]#hex.asm# to reflect these changes, type:
-[source,bash]
+[source,shell]
....
% nasm -f elf hex.asm
% ld -s -o hex hex.o
@@ -689,16 +787,22 @@ Here I come!
^D %
....
-That looks better. But this code is quite inefficient! We are making a system call for every single byte twice (once to read it, another time to write the output).
+That looks better.
+But this code is quite inefficient!
+We are making two system calls for every single byte (one to read it, another to write the output).
[[x86-buffered-io]]
== Buffered Input and Output
-We can improve the efficiency of our code by buffering our input and output. We create an input buffer and read a whole sequence of bytes at one time. Then we fetch them one by one from the buffer.
+We can improve the efficiency of our code by buffering our input and output.
+We create an input buffer and read a whole sequence of bytes at one time.
+Then we fetch them one by one from the buffer.
-We also create an output buffer. We store our output in it until it is full. At that time we ask the kernel to write the contents of the buffer to [.filename]#stdout#.
+We also create an output buffer.
+We store our output in it until it is full.
+At that time we ask the kernel to write the contents of the buffer to [.filename]#stdout#.
-The program ends when there is no more input. But we still need to ask the kernel to write the contents of our output buffer to [.filename]#stdout# one last time, otherwise some of our output would make it to the output buffer, but never be sent out. Do not forget that, or you will be wondering why some of your output is missing.
+The program ends when there is no more input.
+But we still need to ask the kernel to write the contents of our output buffer to [.filename]#stdout# one last time, otherwise some of our output would make it to the output buffer, but never be sent out.
+Do not forget that, or you will be wondering why some of your output is missing.
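The output side of this scheme might be sketched in C like this (hypothetical helper names; the assembly version keeps the buffer in `.bss` and the counters in registers rather than variables):

```c
#include <unistd.h>   /* write(2) */

#define OBUF_SIZE 4096

static char   obuf[OBUF_SIZE];  /* output buffer (.bss in the assembly) */
static size_t ocount;           /* bytes currently buffered */

/* Flush: ask the kernel to write the buffered bytes to stdout.
 * Calling this one last time before exiting is what keeps the tail
 * of the output from being lost. */
static void flush_output(void)
{
    if (ocount > 0) {
        write(STDOUT_FILENO, obuf, ocount);
        ocount = 0;
    }
}

/* Buffered "putchar": store the byte, flushing when the buffer fills. */
static void put_byte(char c)
{
    obuf[ocount++] = c;
    if (ocount == OBUF_SIZE)
        flush_output();
}
```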
[.programlisting]
....
@@ -797,15 +901,22 @@ write:
ret
....
-We now have a third section in the source code, named `.bss`. This section is not included in our executable file, and, therefore, cannot be initialized. We use `resb` instead of `db`. It simply reserves the requested size of uninitialized memory for our use.
+We now have a third section in the source code, named `.bss`.
+This section is not included in our executable file, and, therefore, cannot be initialized.
+We use `resb` instead of `db`.
+It simply reserves the requested size of uninitialized memory for our use.
-We take advantage of the fact that the system does not modify the registers: We use registers for what, otherwise, would have to be global variables stored in the `.data` section. This is also why the UNIX(R) convention of passing parameters to system calls on the stack is superior to the Microsoft convention of passing them in the registers: We can keep the registers for our own use.
+We take advantage of the fact that the system does not modify the registers:
+We use registers for what, otherwise, would have to be global variables stored in the `.data` section.
+This is also why the UNIX(R) convention of passing parameters to system calls on the stack is superior to the Microsoft convention of passing them in the registers: We can keep the registers for our own use.
-We use `EDI` and `ESI` as pointers to the next byte to be read from or written to. We use `EBX` and `ECX` to keep count of the number of bytes in the two buffers, so we know when to dump the output to, or read more input from, the system.
+We use `EDI` and `ESI` as pointers to the next byte to be read from or written to.
+We use `EBX` and `ECX` to keep count of the number of bytes in the two buffers, so we know when to dump the output to, or read more input from, the system.
Let us see how it works now:
-[source,bash]
+[source,shell]
....
% nasm -f elf hex.asm
% ld -s -o hex hex.o
@@ -817,7 +928,9 @@ Here I come!
^D %
....
-Not what you expected? The program did not print the output until we pressed `^D`. That is easy to fix by inserting three lines of code to write the output every time we have converted a new line to `0A`. I have marked the three lines with > (do not copy the > in your [.filename]#hex.asm#).
+Not what you expected?
+The program did not print the output until we pressed `^D`.
+That is easy to fix by inserting three lines of code to write the output every time we have converted a newline character (`0A`).
+I have marked the three lines with > (do not copy the > in your [.filename]#hex.asm#).
[.programlisting]
....
@@ -921,7 +1034,7 @@ write:
Now, let us see how it works:
-[source,bash]
+[source,shell]
....
% nasm -f elf hex.asm
% ld -s -o hex hex.o
@@ -937,7 +1050,8 @@ Not bad for a 644-byte executable, is it!
[NOTE]
====
-This approach to buffered input/output still contains a hidden danger. I will discuss-and fix-it later, when I talk about the <<x86-buffered-dark-side,dark side of buffering>>.
+This approach to buffered input/output still contains a hidden danger.
+I will discuss and fix it later, when I talk about the <<x86-buffered-dark-side,dark side of buffering>>.
====
[[x86-ungetc]]
@@ -945,25 +1059,37 @@ This approach to buffered input/output still contains a hidden danger. I will di
[WARNING]
====
-
-This may be a somewhat advanced topic, mostly of interest to programmers familiar with the theory of compilers. If you wish, you may <<x86-command-line,skip to the next section>>, and perhaps read this later.
+This may be a somewhat advanced topic, mostly of interest to programmers familiar with the theory of compilers.
+If you wish, you may <<x86-command-line,skip to the next section>>, and perhaps read this later.
====
-While our sample program does not require it, more sophisticated filters often need to look ahead. In other words, they may need to see what the next character is (or even several characters). If the next character is of a certain value, it is part of the token currently being processed. Otherwise, it is not.
+While our sample program does not require it, more sophisticated filters often need to look ahead.
+In other words, they may need to see what the next character is (or even several characters).
+If the next character is of a certain value, it is part of the token currently being processed.
+Otherwise, it is not.
-For example, you may be parsing the input stream for a textual string (e.g., when implementing a language compiler): If a character is followed by another character, or perhaps a digit, it is part of the token you are processing. If it is followed by white space, or some other value, then it is not part of the current token.
+For example, you may be parsing the input stream for a textual string (e.g., when implementing a language compiler):
+If a character is followed by another character, or perhaps a digit, it is part of the token you are processing.
+If it is followed by white space, or some other value, then it is not part of the current token.
This presents an interesting problem: How to return the next character back to the input stream, so it can be read again later?
-One possible solution is to store it in a character variable, then set a flag. We can modify `getchar` to check the flag, and if it is set, fetch the byte from that variable instead of the input buffer, and reset the flag. But, of course, that slows us down.
+One possible solution is to store it in a character variable, then set a flag.
+We can modify `getchar` to check the flag, and if it is set, fetch the byte from that variable instead of the input buffer, and reset the flag.
+But, of course, that slows us down.
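That slower, flag-based alternative would look something like this in C (a sketch with hypothetical names; `read_from_buffer` stands in for the real buffered read):

```c
/* Sketch of the flag-based pushback: one saved character plus a flag
 * that the reader checks before touching the input buffer. */
static int saved_char;
static int have_saved;   /* the flag */

/* Placeholder for the real buffered read. */
static int read_from_buffer(void)
{
    return 'X';
}

static void unget_char(int c)
{
    saved_char = c;
    have_saved = 1;
}

static int get_char(void)
{
    if (have_saved) {    /* extra test on every call: the slowdown */
        have_saved = 0;
        return saved_char;
    }
    return read_from_buffer();
}
```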
-The C language has an `ungetc()` function, just for that purpose. Is there a quick way to implement it in our code? I would like you to scroll back up and take a look at the `getchar` procedure and see if you can find a nice and fast solution before reading the next paragraph. Then come back here and see my own solution.
+The C language has an `ungetc()` function, just for that purpose.
+Is there a quick way to implement it in our code?
+I would like you to scroll back up and take a look at the `getchar` procedure and see if you can find a nice and fast solution before reading the next paragraph.
+Then come back here and see my own solution.
The key to returning a character back to the stream is in how we are getting the characters to start with:
-First we check if the buffer is empty by testing the value of `EBX`. If it is zero, we call the `read` procedure.
+First we check if the buffer is empty by testing the value of `EBX`.
+If it is zero, we call the `read` procedure.
-If we do have a character available, we use `lodsb`, then decrease the value of `EBX`. The `lodsb` instruction is effectively identical to:
+If we do have a character available, we use `lodsb`, then decrease the value of `EBX`.
+The `lodsb` instruction is effectively identical to:
[.programlisting]
....
@@ -971,7 +1097,9 @@ mov al, [esi]
inc esi
....
-The byte we have fetched remains in the buffer until the next time `read` is called. We do not know when that happens, but we do know it will not happen until the next call to `getchar`. Hence, to "return" the last-read byte back to the stream, all we have to do is decrease the value of `ESI` and increase the value of `EBX`:
+The byte we have fetched remains in the buffer until the next time `read` is called.
+We do not know when that happens, but we do know it will not happen until the next call to `getchar`.
+Hence, to "return" the last-read byte back to the stream, all we have to do is decrease the value of `ESI` and increase the value of `EBX`:
[.programlisting]
....
@@ -981,17 +1109,22 @@ ungetc:
ret
....
-But, be careful! We are perfectly safe doing this if our look-ahead is at most one character at a time. If we are examining more than one upcoming character and call `ungetc` several times in a row, it will work most of the time, but not all the time (and will be tough to debug). Why?
+But, be careful!
+We are perfectly safe doing this if our look-ahead is at most one character at a time.
+If we are examining more than one upcoming character and call `ungetc` several times in a row, it will work most of the time, but not all the time (and will be tough to debug).
+Why?
-Because as long as `getchar` does not have to call `read`, all of the pre-read bytes are still in the buffer, and our `ungetc` works without a glitch. But the moment `getchar` calls `read`, the contents of the buffer change.
+Because as long as `getchar` does not have to call `read`, all of the pre-read bytes are still in the buffer, and our `ungetc` works without a glitch.
+But the moment `getchar` calls `read`, the contents of the buffer change.
We can always rely on `ungetc` working properly on the last character we have read with `getchar`, but not on anything we have read before that.
If your program reads more than one byte ahead, you have at least two choices:
-If possible, modify the program so it only reads one byte ahead. This is the simplest solution.
+If possible, modify the program so it only reads one byte ahead.
+This is the simplest solution.
-If that option is not available, first of all determine the maximum number of characters your program needs to return to the input stream at one time. Increase that number slightly, just to be sure, preferably to a multiple of 16-so it aligns nicely. Then modify the `.bss` section of your code, and create a small "spare" buffer right before your input buffer, something like this:
+If that option is not available, first of all determine the maximum number of characters your program needs to return to the input stream at one time.
+Increase that number slightly, just to be sure, preferably to a multiple of 16 so it aligns nicely.
+Then modify the `.bss` section of your code, and create a small "spare" buffer right before your input buffer, something like this:
[.programlisting]
....
@@ -1017,22 +1150,31 @@ With this modification, you can call `ungetc` up to 17 times in a row safely (th
[[x86-command-line]]
== Command Line Arguments
-Our hex program will be more useful if it can read the names of an input and output file from its command line, i.e., if it can process the command line arguments. But... Where are they?
+Our hex program will be more useful if it can read the names of an input and output file from its command line, i.e., if it can process the command line arguments.
+But... Where are they?
-Before a UNIX(R) system starts a program, it ``push``es some data on the stack, then jumps at the `_start` label of the program. Yes, I said jumps, not calls. That means the data can be accessed by reading `[esp+offset]`, or by simply ``pop``ping it.
+Before a UNIX(R) system starts a program, it ``push``es some data on the stack, then jumps to the `_start` label of the program.
+Yes, I said jumps, not calls.
+That means the data can be accessed by reading `[esp+offset]`, or by simply ``pop``ping it.
-The value at the top of the stack contains the number of command line arguments. It is traditionally called `argc`, for "argument count."
+The value at the top of the stack contains the number of command line arguments.
+It is traditionally called `argc`, for "argument count."
-Command line arguments follow next, all `argc` of them. These are typically referred to as `argv`, for "argument value(s)." That is, we get `argv[0]`, `argv[1]`, `...`, `argv[argc-1]`. These are not the actual arguments, but pointers to arguments, i.e., memory addresses of the actual arguments. The arguments themselves are NUL-terminated character strings.
+Command line arguments follow next, all `argc` of them.
+These are typically referred to as `argv`, for "argument value(s)."
+That is, we get `argv[0]`, `argv[1]`, `...`, `argv[argc-1]`.
+These are not the actual arguments, but pointers to arguments, i.e., memory addresses of the actual arguments.
+The arguments themselves are NUL-terminated character strings.
The `argv` list is followed by a NULL pointer, which is simply a `0`. There is more, but this is enough for our purposes right now.
[NOTE]
====
-If you have come from the MS-DOS(R) programming environment, the main difference is that each argument is in a separate string. The second difference is that there is no practical limit on how many arguments there can be.
+If you have come from the MS-DOS(R) programming environment, the main difference is that each argument is in a separate string.
+The second difference is that there is no practical limit on how many arguments there can be.
====
-Armed with this knowledge, we are almost ready for the next version of [.filename]#hex.asm#. First, however, we need to add a few lines to [.filename]#system.inc#:
+Armed with this knowledge, we are almost ready for the next version of [.filename]#hex.asm#.
+First, however, we need to add a few lines to [.filename]#system.inc#:
First, we need to add two new entries to our list of system call numbers:
@@ -1203,23 +1345,34 @@ write:
ret
....
-In our `.data` section we now have two new variables, `fd.in` and `fd.out`. We store the input and output file descriptors here.
+In our `.data` section we now have two new variables, `fd.in` and `fd.out`.
+We store the input and output file descriptors here.
In the `.text` section we have replaced the references to `stdin` and `stdout` with `[fd.in]` and `[fd.out]`.
-The `.text` section now starts with a simple error handler, which does nothing but exit the program with a return value of `1`. The error handler is before `_start` so we are within a short distance from where the errors occur.
+The `.text` section now starts with a simple error handler, which does nothing but exit the program with a return value of `1`.
+The error handler is before `_start` so we are within a short distance from where the errors occur.
-Naturally, the program execution still begins at `_start`. First, we remove `argc` and `argv[0]` from the stack: They are of no interest to us (in this program, that is).
+Naturally, the program execution still begins at `_start`.
+First, we remove `argc` and `argv[0]` from the stack: They are of no interest to us (in this program, that is).
-We pop `argv[1]` to `ECX`. This register is particularly suited for pointers, as we can handle NULL pointers with `jecxz`. If `argv[1]` is not NULL, we try to open the file named in the first argument. Otherwise, we continue the program as before: Reading from `stdin`, writing to `stdout`. If we fail to open the input file (e.g., it does not exist), we jump to the error handler and quit.
+We pop `argv[1]` to `ECX`.
+This register is particularly suited for pointers, as we can handle NULL pointers with `jecxz`.
+If `argv[1]` is not NULL, we try to open the file named in the first argument.
+Otherwise, we continue the program as before: Reading from `stdin`, writing to `stdout`.
+If we fail to open the input file (e.g., it does not exist), we jump to the error handler and quit.
-If all went well, we now check for the second argument. If it is there, we open the output file. Otherwise, we send the output to `stdout`. If we fail to open the output file (e.g., it exists and we do not have the write permission), we, again, jump to the error handler.
+If all went well, we now check for the second argument.
+If it is there, we open the output file.
+Otherwise, we send the output to `stdout`.
+If we fail to open the output file (e.g., it exists and we do not have write permission), we again jump to the error handler.
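The same open-or-fall-back flow can be sketched in C (the helper name is ours, not the handbook's; the assembly version does this inline): with no file name we fall back to a default descriptor, and a failed `open()` does what the assembly error handler does, exit with status 1.

```c
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical helper mirroring the logic described above. */
int open_or_default(const char *name, int flags, int def_fd)
{
    if (name == NULL)
        return def_fd;          /* no argument: stdin or stdout */
    int fd = open(name, flags, 0644);
    if (fd < 0)
        exit(1);                /* the error handler: quit with 1 */
    return fd;
}
```

In the program itself, `fd.in` and `fd.out` would be filled in from `argv[1]` and `argv[2]`, exactly as the text describes.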
The rest of the code is the same as before, except we close the input and output files before exiting, and, as mentioned, we use `[fd.in]` and `[fd.out]`.
Our executable is now a whopping 768 bytes long.
-Can we still improve it? Of course! Every program can be improved. Here are a few ideas of what we could do:
+Can we still improve it? Of course! Every program can be improved.
+Here are a few ideas of what we could do:
* Have our error handler print a message to `stderr`.
* Add error handlers to the `read` and `write` functions.
@@ -1232,21 +1385,27 @@ I shall leave these enhancements as an exercise to the reader: You already know
[[x86-environment]]
== UNIX(R) Environment
-An important UNIX(R) concept is the environment, which is defined by _environment variables_. Some are set by the system, others by you, yet others by the shell, or any program that loads another program.
+An important UNIX(R) concept is the environment, which is defined by _environment variables_.
+Some are set by the system, others by you, yet others by the shell, or any program that loads another program.
[[x86-find-environment]]
=== How to Find Environment Variables
-I said earlier that when a program starts executing, the stack contains `argc` followed by the NULL-terminated `argv` array, followed by something else. The "something else" is the _environment_, or, to be more precise, a NULL-terminated array of pointers to _environment variables_. This is often referred to as `env`.
+I said earlier that when a program starts executing, the stack contains `argc` followed by the NULL-terminated `argv` array, followed by something else.
+The "something else" is the _environment_, or, to be more precise, a NULL-terminated array of pointers to _environment variables_.
+This is often referred to as `env`.
-The structure of `env` is the same as that of `argv`, a list of memory addresses followed by a NULL (`0`). In this case, there is no `"envc"`-we figure out where the array ends by searching for the final NULL.
+The structure of `env` is the same as that of `argv`, a list of memory addresses followed by a NULL (`0`).
+In this case, there is no `"envc"`; we figure out where the array ends by searching for the final NULL.
-The variables usually come in the `name=value` format, but sometimes the `=value` part may be missing. We need to account for that possibility.
+The variables usually come in the `name=value` format, but sometimes the `=value` part may be missing.
+We need to account for that possibility.
[[x86-webvar]]
=== webvars
-I could just show you some code that prints the environment the same way the UNIX(R) env command does. But I thought it would be more interesting to write a simple assembly language CGI utility.
+I could just show you some code that prints the environment the same way the UNIX(R) env command does.
+But I thought it would be more interesting to write a simple assembly language CGI utility.
[[x86-cgi]]
==== CGI: a Quick Overview
@@ -1260,15 +1419,18 @@ I have a http://www.whizkidtech.redprince.net/cgi-bin/tutorial[detailed CGI tuto
[NOTE]
====
-While certain _environment variables_ use standard names, others vary, depending on the web server. That makes webvars quite a useful diagnostic tool.
+While certain _environment variables_ use standard names, others vary, depending on the web server.
+That makes webvars quite a useful diagnostic tool.
====
[[x86-webvars-the-code]]
==== The Code
-Our webvars program, then, must send out the HTTP header followed by some HTML mark-up. It then must read the _environment variables_ one by one and send them out as part of the HTML page.
+Our webvars program, then, must send out the HTTP header followed by some HTML mark-up.
+It then must read the _environment variables_ one by one and send them out as part of the HTML page.
-The code follows. I placed comments and explanations right inside the code:
+The code follows.
+I placed comments and explanations right inside the code:
[.programlisting]
....
@@ -1455,32 +1617,45 @@ repne scasb
sys.exit
....
-This code produces a 1,396-byte executable. Most of it is data, i.e., the HTML mark-up we need to send out.
+This code produces a 1,396-byte executable.
+Most of it is data, i.e., the HTML mark-up we need to send out.
Assemble and link it as usual:
-[source,bash]
+[source,shell]
....
% nasm -f elf webvars.asm
% ld -s -o webvars webvars.o
....
-To use it, you need to upload [.filename]#webvars# to your web server. Depending on how your web server is set up, you may have to store it in a special [.filename]#cgi-bin# directory, or perhaps rename it with a [.filename]#.cgi# extension.
+To use it, you need to upload [.filename]#webvars# to your web server.
+Depending on how your web server is set up, you may have to store it in a special [.filename]#cgi-bin# directory, or perhaps rename it with a [.filename]#.cgi# extension.
-Then you need to use your browser to view its output. To see its output on my web server, please go to http://www.int80h.org/webvars/[http://www.int80h.org/webvars/]. If curious about the additional environment variables present in a password protected web directory, go to http://www.int80h.org/private/[http://www.int80h.org/private/], using the name `asm` and password `programmer`.
+Then you need to use your browser to view its output.
+To see its output on my web server, please go to http://www.int80h.org/webvars/[http://www.int80h.org/webvars/].
+If curious about the additional environment variables present in a password protected web directory, go to http://www.int80h.org/private/[http://www.int80h.org/private/], using the name `asm` and password `programmer`.
[[x86-files]]
== Working with Files
-We have already done some basic file work: We know how to open and close them, how to read and write them using buffers. But UNIX(R) offers much more functionality when it comes to files. We will examine some of it in this section, and end up with a nice file conversion utility.
+We have already done some basic file work: We know how to open and close files, how to read and write them using buffers.
+But UNIX(R) offers much more functionality when it comes to files.
+We will examine some of it in this section, and end up with a nice file conversion utility.
-Indeed, let us start at the end, that is, with the file conversion utility. It always makes programming easier when we know from the start what the end product is supposed to do.
+Indeed, let us start at the end, that is, with the file conversion utility.
+It always makes programming easier when we know from the start what the end product is supposed to do.
-One of the first programs I wrote for UNIX(R) was link:ftp://ftp.int80h.org/unix/tuc/[tuc], a text-to-UNIX(R) file converter. It converts a text file from other operating systems to a UNIX(R) text file. In other words, it changes from different kind of line endings to the newline convention of UNIX(R). It saves the output in a different file. Optionally, it converts a UNIX(R) text file to a DOS text file.
+One of the first programs I wrote for UNIX(R) was link:ftp://ftp.int80h.org/unix/tuc/[tuc], a text-to-UNIX(R) file converter.
+It converts a text file from other operating systems to a UNIX(R) text file.
+In other words, it converts the various kinds of line endings to the newline convention of UNIX(R).
+It saves the output in a different file.
+Optionally, it converts a UNIX(R) text file to a DOS text file.
-I have used tuc extensively, but always only to convert from some other OS to UNIX(R), never the other way. I have always wished it would just overwrite the file instead of me having to send the output to a different file. Most of the time, I end up using it like this:
+I have used tuc extensively, but always only to convert from some other OS to UNIX(R), never the other way.
+I have always wished it would just overwrite the file instead of me having to send the output to a different file.
+Most of the time, I end up using it like this:
-[source,bash]
+[source,shell]
....
% tuc myfile tempfile
% mv tempfile myfile
@@ -1488,7 +1663,7 @@ I have used tuc extensively, but always only to convert from some other OS to UN
It would be nice to have a ftuc, i.e., _fast tuc_, and use it like this:
-[source,bash]
+[source,shell]
....
% ftuc myfile
....
@@ -1499,7 +1674,10 @@ At first sight, such a file conversion is very simple: All you have to do is str
If you answered yes, think again: That approach will work most of the time (at least with MS DOS text files), but will fail occasionally.
-The problem is that not all non UNIX(R) text files end their line with the carriage return / line feed sequence. Some use carriage returns without line feeds. Others combine several blank lines into a single carriage return followed by several line feeds. And so on.
+The problem is that not all non-UNIX(R) text files end their lines with the carriage return / line feed sequence.
+Some use carriage returns without line feeds.
+Others combine several blank lines into a single carriage return followed by several line feeds.
+And so on.
A text file converter, then, must be able to handle any possible line endings:
@@ -1513,11 +1691,17 @@ It should also handle files that use some kind of a combination of the above (e.
[[x86-finite-state-machine]]
=== Finite State Machine
-The problem is easily solved by the use of a technique called _finite state machine_, originally developed by the designers of digital electronic circuits. A _finite state machine_ is a digital circuit whose output is dependent not only on its input but on its previous input, i.e., on its state. The microprocessor is an example of a _finite state machine_: Our assembly language code is assembled to machine language in which some assembly language code produces a single byte of machine language, while others produce several bytes. As the microprocessor fetches the bytes from the memory one by one, some of them simply change its state rather than produce some output. When all the bytes of the op code are fetched, the microprocessor produces some output, or changes the value of a register, etc.
+The problem is easily solved by the use of a technique called _finite state machine_, originally developed by the designers of digital electronic circuits.
+A _finite state machine_ is a digital circuit whose output is dependent not only on its input but on its previous input, i.e., on its state.
+The microprocessor is an example of a _finite state machine_: Our assembly language code is assembled to machine language in which some assembly language code produces a single byte of machine language, while others produce several bytes.
+As the microprocessor fetches the bytes from the memory one by one, some of them simply change its state rather than produce some output.
+When all the bytes of the op code are fetched, the microprocessor produces some output, or changes the value of a register, etc.
-Because of that, all software is essentially a sequence of state instructions for the microprocessor. Nevertheless, the concept of _finite state machine_ is useful in software design as well.
+Because of that, all software is essentially a sequence of state instructions for the microprocessor.
+Nevertheless, the concept of _finite state machine_ is useful in software design as well.
-Our text file converter can be designer as a _finite state machine_ with three possible states. We could call them states 0-2, but it will make our life easier if we give them symbolic names:
+Our text file converter can be designed as a _finite state machine_ with three possible states.
+We could call them states 0-2, but it will make our life easier if we give them symbolic names:
* ordinary
* cr
@@ -1525,17 +1709,23 @@ Our text file converter can be designer as a _finite state machine_ with three p
Our program will start in the ordinary state. During this state, the program action depends on its input as follows:
-* If the input is anything other than a carriage return or line feed, the input is simply passed on to the output. The state remains unchanged.
-* If the input is a carriage return, the state is changed to cr. The input is then discarded, i.e., no output is made.
-* If the input is a line feed, the state is changed to lf. The input is then discarded.
+* If the input is anything other than a carriage return or line feed, the input is simply passed on to the output.
+The state remains unchanged.
+* If the input is a carriage return, the state is changed to cr.
+The input is then discarded, i.e., no output is made.
+* If the input is a line feed, the state is changed to lf.
+The input is then discarded.
-Whenever we are in the cr state, it is because the last input was a carriage return, which was unprocessed. What our software does in this state again depends on the current input:
+Whenever we are in the cr state, it is because the last input was a carriage return, which was unprocessed.
+What our software does in this state again depends on the current input:
* If the input is anything other than a carriage return or line feed, output a line feed, then output the input, then change the state to ordinary.
* If the input is a carriage return, we have received two (or more) carriage returns in a row. We discard the input, we output a line feed, and leave the state unchanged.
* If the input is a line feed, we output the line feed and change the state to ordinary. Note that this is not the same as the first case above - if we tried to combine them, we would be outputting two line feeds instead of one.
-Finally, we are in the lf state after we have received a line feed that was not preceded by a carriage return. This will happen when our file already is in UNIX(R) format, or whenever several lines in a row are expressed by a single carriage return followed by several line feeds, or when line ends with a line feed / carriage return sequence. Here is how we need to handle our input in this state:
+Finally, we are in the lf state after we have received a line feed that was not preceded by a carriage return.
+This will happen when our file already is in UNIX(R) format, or whenever several lines in a row are expressed by a single carriage return followed by several line feeds, or when a line ends with a line feed / carriage return sequence.
+Here is how we need to handle our input in this state:
* If the input is anything other than a carriage return or line feed, we output a line feed, then output the input, then change the state to ordinary. This is exactly the same action as in the cr state upon receiving the same kind of input.
* If the input is a carriage return, we discard the input, we output a line feed, then change the state to ordinary.
@@ -1544,26 +1734,35 @@ Finally, we are in the lf state after we have received a line feed that was not
[[x86-final-state]]
==== The Final State
-The above _finite state machine_ works for the entire file, but leaves the possibility that the final line end will be ignored. That will happen whenever the file ends with a single carriage return or a single line feed. I did not think of it when I wrote tuc, just to discover that occasionally it strips the last line ending.
+The above _finite state machine_ works for the entire file, but leaves the possibility that the final line end will be ignored.
+That will happen whenever the file ends with a single carriage return or a single line feed.
+I did not think of it when I wrote tuc, just to discover that occasionally it strips the last line ending.
-This problem is easily fixed by checking the state after the entire file was processed. If the state is not ordinary, we simply need to output one last line feed.
+This problem is easily fixed by checking the state after the entire file was processed.
+If the state is not ordinary, we simply need to output one last line feed.
[NOTE]
====
-Now that we have expressed our algorithm as a _finite state machine_, we could easily design a dedicated digital electronic circuit (a "chip") to do the conversion for us. Of course, doing so would be considerably more expensive than writing an assembly language program.
+Now that we have expressed our algorithm as a _finite state machine_, we could easily design a dedicated digital electronic circuit (a "chip") to do the conversion for us.
+Of course, doing so would be considerably more expensive than writing an assembly language program.
====
[[x86-tuc-counter]]
==== The Output Counter
-Because our file conversion program may be combining two characters into one, we need to use an output counter. We initialize it to `0`, and increase it every time we send a character to the output. At the end of the program, the counter will tell us what size we need to set the file to.
+Because our file conversion program may be combining two characters into one, we need to use an output counter.
+We initialize it to `0`, and increase it every time we send a character to the output.
+At the end of the program, the counter will tell us what size we need to set the file to.
[[x86-software-fsm]]
=== Implementing FSM in Software
-The hardest part of working with a _finite state machine_ is analyzing the problem and expressing it as a _finite state machine_. That accomplished, the software almost writes itself.
+The hardest part of working with a _finite state machine_ is analyzing the problem and expressing it as a _finite state machine_.
+That accomplished, the software almost writes itself.
-In a high-level language, such as C, there are several main approaches. One is to use a `switch` statement which chooses what function should be run. For example,
+In a high-level language, such as C, there are several main approaches.
+One is to use a `switch` statement which chooses what function should be run.
+For example,
[.programlisting]
....
@@ -1595,27 +1794,43 @@ Yet another is to have `state` be a function pointer, set to point at the approp
(*state)(inputchar);
....
-This is the approach we will use in our program because it is very easy to do in assembly language, and very fast, too. We will simply keep the address of the right procedure in `EBX`, and then just issue:
+This is the approach we will use in our program because it is very easy to do in assembly language, and very fast, too.
+We will simply keep the address of the right procedure in `EBX`, and then just issue:
[.programlisting]
....
call ebx
....
-This is possibly faster than hardcoding the address in the code because the microprocessor does not have to fetch the address from the memory-it is already stored in one of its registers. I said _possibly_ because with the caching modern microprocessors do, either way may be equally fast.
+This is possibly faster than hardcoding the address in the code because the microprocessor does not have to fetch the address from memory; it is already stored in one of its registers.
+I said _possibly_ because with the caching modern microprocessors do, either way may be equally fast.
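For reference, here is the whole three-state machine as a C sketch using that function-pointer dispatch. The handler names (`ordinary`, `seen_cr`, `seen_lf`) and the buffer-based driver are ours, purely for illustration; the behavior of the lf state on a second line feed is our assumption of symmetry with the cr state.

```c
#include <stdio.h>

#define CR '\r'
#define LF '\n'

char out[4096];                 /* converted output (sketch only)   */
int  outcount;                  /* the output counter from the text */

static void put(int c) { out[outcount++] = (char)c; }

static void ordinary(int c);
static void seen_cr(int c);
static void seen_lf(int c);

/* "state" holds the address of the current handler; calling
 * through it is the C analogue of "call ebx". */
static void (*state)(int) = ordinary;

static void ordinary(int c)
{
    if (c == CR)      state = seen_cr;  /* discard, remember the CR */
    else if (c == LF) state = seen_lf;  /* discard, remember the LF */
    else              put(c);           /* pass anything else along */
}

static void seen_cr(int c)
{
    put(LF);                    /* the pending CR becomes one LF */
    if (c == CR)
        return;                 /* another CR: stay in the cr state */
    if (c != LF)
        put(c);                 /* ordinary character: output it too */
    state = ordinary;
}

static void seen_lf(int c)
{
    put(LF);                    /* the pending LF is output as-is */
    if (c == LF)
        return;                 /* another LF: stay in the lf state */
    if (c != CR)
        put(c);                 /* a CR after LF is discarded */
    state = ordinary;
}

/* Run the machine over a buffer; if we did not end in the
 * ordinary state, emit one last line feed so the final line
 * ending is not lost. */
int convert(const char *in, int len)
{
    state = ordinary;
    outcount = 0;
    for (int i = 0; i < len; i++)
        (*state)((unsigned char)in[i]);
    if (state != ordinary)
        put(LF);
    return outcount;
}
```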
[[memory-mapped-files]]
=== Memory Mapped Files
-Because our program works on a single file, we cannot use the approach that worked for us before, i.e., to read from an input file and to write to an output file.
+Because our program works on a single file, we cannot use the approach that worked for us before, i.e., to read from an input file and to write to an output file.
-UNIX(R) allows us to map a file, or a section of a file, into memory. To do that, we first need to open the file with the appropriate read/write flags. Then we use the `mmap` system call to map it into the memory. One nice thing about `mmap` is that it automatically works with virtual memory: We can map more of the file into the memory than we have physical memory available, yet still access it through regular memory op codes, such as `mov`, `lods`, and `stos`. Whatever changes we make to the memory image of the file will be written to the file by the system. We do not even have to keep the file open: As long as it stays mapped, we can read from it and write to it.
+UNIX(R) allows us to map a file, or a section of a file, into memory.
+To do that, we first need to open the file with the appropriate read/write flags.
+Then we use the `mmap` system call to map it into the memory.
+One nice thing about `mmap` is that it automatically works with virtual memory:
+We can map more of the file into the memory than we have physical memory available, yet still access it through regular memory op codes, such as `mov`, `lods`, and `stos`.
+Whatever changes we make to the memory image of the file will be written to the file by the system.
+We do not even have to keep the file open: As long as it stays mapped, we can read from it and write to it.
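From C, the same flow looks like this minimal sketch (the helper and its job are ours, purely for illustration): open read/write, map the whole file `MAP_SHARED`, modify it through memory, and let the system write the change back.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper: uppercase the first byte of a file by
 * editing its memory-mapped image. */
int uppercase_first_byte(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;

    struct stat sb;
    if (fstat(fd, &sb) < 0 || sb.st_size == 0) {
        close(fd);
        return -1;
    }

    char *p = mmap(NULL, (size_t)sb.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping survives the close */
    if (p == MAP_FAILED)
        return -1;

    if (p[0] >= 'a' && p[0] <= 'z')
        p[0] -= 'a' - 'A';      /* modify the file through memory */

    munmap(p, (size_t)sb.st_size);
    return 0;
}
```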
-The 32-bit Intel microprocessors can access up to four gigabytes of memory - physical or virtual. The FreeBSD system allows us to use up to a half of it for file mapping.
+The 32-bit Intel microprocessors can access up to four gigabytes of memory, physical or virtual.
+The FreeBSD system allows us to use up to a half of it for file mapping.
-For simplicity sake, in this tutorial we will only convert files that can be mapped into the memory in their entirety. There are probably not too many text files that exceed two gigabytes in size. If our program encounters one, it will simply display a message suggesting we use the original tuc instead.
+For simplicity's sake, in this tutorial we will only convert files that can be mapped into the memory in their entirety.
+There are probably not too many text files that exceed two gigabytes in size.
+If our program encounters one, it will simply display a message suggesting we use the original tuc instead.
-If you examine your copy of [.filename]#syscalls.master#, you will find two separate syscalls named `mmap`. This is because of evolution of UNIX(R): There was the traditional BSD `mmap`, syscall 71. That one was superseded by the POSIX(R) `mmap`, syscall 197. The FreeBSD system supports both because older programs were written by using the original BSD version. But new software uses the POSIX(R) version, which is what we will use.
+If you examine your copy of [.filename]#syscalls.master#, you will find two separate syscalls named `mmap`.
+This is because of the evolution of UNIX(R): There was the traditional BSD `mmap`, syscall 71.
+That one was superseded by the POSIX(R) `mmap`, syscall 197.
+The FreeBSD system supports both because older programs were written by using the original BSD version.
+But new software uses the POSIX(R) version, which is what we will use.
The [.filename]#syscalls.master# lists the POSIX(R) version like this:
@@ -1625,15 +1840,17 @@ The [.filename]#syscalls.master# lists the POSIX(R) version like this:
int flags, int fd, long pad, off_t pos); }
....
-This differs slightly from what man:mmap[2] says. That is because man:mmap[2] describes the C version.
+This differs slightly from what man:mmap[2] says.
+That is because man:mmap[2] describes the C version.
-The difference is in the `long pad` argument, which is not present in the C version. However, the FreeBSD syscalls add a 32-bit pad after ``push``ing a 64-bit argument. In this case, `off_t` is a 64-bit value.
+The difference is in the `long pad` argument, which is not present in the C version.
+However, the FreeBSD syscalls add a 32-bit pad after ``push``ing a 64-bit argument.
+In this case, `off_t` is a 64-bit value.
When we are finished working with a memory-mapped file, we unmap it with the `munmap` syscall:
[TIP]
====
-
For an in-depth treatment of `mmap`, see W. Richard Stevens' http://www.int80h.org/cgi-bin/isbn?isbn=0130810819[Unix Network Programming, Volume 2, Chapter 12].
====
@@ -1642,27 +1859,34 @@ For an in-depth treatment of `mmap`, see W. Richard Stevens' http://www.int80h.o
Because we need to tell `mmap` how many bytes of the file to map into the memory, and because we want to map the entire file, we need to determine the size of the file.
-We can use the `fstat` syscall to get all the information about an open file that the system can give us. That includes the file size.
+We can use the `fstat` syscall to get all the information about an open file that the system can give us.
+That includes the file size.
-Again, [.filename]#syscalls.master# lists two versions of `fstat`, a traditional one (syscall 62), and a POSIX(R) one (syscall 189). Naturally, we will use the POSIX(R) version:
+Again, [.filename]#syscalls.master# lists two versions of `fstat`, a traditional one (syscall 62), and a POSIX(R) one (syscall 189).
+Naturally, we will use the POSIX(R) version:
[.programlisting]
....
189 STD POSIX { int fstat(int fd, struct stat *sb); }
....
-This is a very straightforward call: We pass to it the address of a `stat` structure and the descriptor of an open file. It will fill out the contents of the `stat` structure.
+This is a very straightforward call: We pass to it the address of a `stat` structure and the descriptor of an open file.
+It will fill out the contents of the `stat` structure.
-I do, however, have to say that I tried to declare the `stat` structure in the `.bss` section, and `fstat` did not like it: It set the carry flag indicating an error. After I changed the code to allocate the structure on the stack, everything was working fine.
+I do, however, have to say that I tried to declare the `stat` structure in the `.bss` section, and `fstat` did not like it: It set the carry flag indicating an error.
+After I changed the code to allocate the structure on the stack, everything was working fine.
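The C equivalent is a one-liner around the same syscall (the helper name is ours, not the handbook's): `fstat()` fills in a `stat` structure, allocated on the stack as the text recommends, and `st_size` is the length we would hand to `mmap`.

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper: return a file's size in bytes, or -1. */
long file_size(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat sb;             /* on the stack, not in .bss */
    int rc = fstat(fd, &sb);
    close(fd);
    return rc < 0 ? -1 : (long)sb.st_size;
}
```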
[[x86-ftruncate]]
=== Changing the File Size
-Because our program may combine carriage return / line feed sequences into straight line feeds, our output may be smaller than our input. However, since we are placing our output into the same file we read the input from, we may have to change the size of the file.
+Because our program may combine carriage return / line feed sequences into straight line feeds, our output may be smaller than our input.
+However, since we are placing our output into the same file we read the input from, we may have to change the size of the file.
-The `ftruncate` system call allows us to do just that. Despite its somewhat misleading name, the `ftruncate` system call can be used to both truncate the file (make it smaller) and to grow it.
+The `ftruncate` system call allows us to do just that.
+Despite its somewhat misleading name, the `ftruncate` system call can be used to both truncate the file (make it smaller) and to grow it.
-And yes, we will find two versions of `ftruncate` in [.filename]#syscalls.master#, an older one (130), and a newer one (201). We will use the newer one:
+And yes, we will find two versions of `ftruncate` in [.filename]#syscalls.master#, an older one (130), and a newer one (201).
+We will use the newer one:
[.programlisting]
....
@@ -1674,7 +1898,9 @@ Please note that this one contains a `int pad` again.
[[x86-ftuc]]
=== ftuc
-We now know everything we need to write ftuc. We start by adding some new lines in [.filename]#system.inc#. First, we define some constants and structures, somewhere at or near the beginning of the file:
+We now know everything we need to write ftuc.
+We start by adding some new lines in [.filename]#system.inc#.
+First, we define some constants and structures, somewhere at or near the beginning of the file:
[.programlisting]
....
@@ -1992,8 +2218,9 @@ align 4
[WARNING]
====
-
-Do not use this program on files stored on a disk formatted by MS-DOS(R) or Windows(R). There seems to be a subtle bug in the FreeBSD code when using `mmap` on these drives mounted under FreeBSD: If the file is over a certain size, `mmap` will just fill the memory with zeros, and then copy them to the file overwriting its contents.
+Do not use this program on files stored on a disk formatted by MS-DOS(R) or Windows(R).
+There seems to be a subtle bug in the FreeBSD code when using `mmap` on these drives mounted under FreeBSD:
+If the file is over a certain size, `mmap` will just fill the memory with zeros, and then copy them to the file overwriting its contents.
====
[[x86-one-pointed-mind]]
@@ -2001,7 +2228,8 @@ Do not use this program on files stored on a disk formatted by MS-DOS(R) or Wind
As a student of Zen, I like the idea of a one-pointed mind: Do one thing at a time, and do it well.
-This, indeed, is very much how UNIX(R) works as well. While a typical Windows(R) application is attempting to do everything imaginable (and is, therefore, riddled with bugs), a typical UNIX(R) program does only one thing, and it does it well.
+This, indeed, is very much how UNIX(R) works as well.
+While a typical Windows(R) application is attempting to do everything imaginable (and is, therefore, riddled with bugs), a typical UNIX(R) program does only one thing, and it does it well.
The typical UNIX(R) user then essentially assembles his own applications by writing a shell script which combines the various existing programs by piping the output of one program to the input of another.
@@ -2012,13 +2240,17 @@ When writing your own UNIX(R) software, it is generally a good idea to see what
I will illustrate this principle with a specific real-life example I was faced with recently:
-I needed to extract the 11th field of each record from a database I downloaded from a web site. The database was a CSV file, i.e., a list of _comma-separated values_. That is quite a standard format for sharing data among people who may be using different database software.
+I needed to extract the 11th field of each record from a database I downloaded from a web site.
+The database was a CSV file, i.e., a list of _comma-separated values_.
+That is quite a standard format for sharing data among people who may be using different database software.
-The first line of the file contains the list of various fields separated by commas. The rest of the file contains the data listed line by line, with values separated by commas.
+The first line of the file contains the list of various fields separated by commas.
+The rest of the file contains the data listed line by line, with values separated by commas.
I tried awk, using the comma as a separator. But because several lines contained a quoted comma, awk was extracting the wrong field from those lines.
-Therefore, I needed to write my own software to extract the 11th field from the CSV file. However, going with the UNIX(R) spirit, I only needed to write a simple filter that would do the following:
+Therefore, I needed to write my own software to extract the 11th field from the CSV file.
+However, going with the UNIX(R) spirit, I only needed to write a simple filter that would do the following:
* Remove the first line from the file;
* Change all unquoted commas to a different character;
@@ -2026,7 +2258,9 @@ Therefore, I needed to write my own software to extract the 11th field from the
Strictly speaking, I could use sed to remove the first line from the file, but doing so in my own program was very easy, so I decided to do it and reduce the size of the pipeline.
-At any rate, writing a program like this took me about 20 minutes. Writing a program that extracts the 11th field from the CSV file would take a lot longer, and I could not reuse it to extract some other field from some other database.
+At any rate, writing a program like this took me about 20 minutes.
+Writing a program that extracts the 11th field from the CSV file would take a lot longer, and I could not reuse it to extract some other field from some other database.
This time I decided to let it do a little more work than a typical tutorial program would:
@@ -2036,26 +2270,33 @@ This time I decided to let it do a little more work than a typical tutorial prog
Here is its usage message:
-[source,bash]
+[source,shell]
....
Usage: csv [-t<delim>] [-c<comma>] [-p] [-o <outfile>] [-i <infile>]
....
All parameters are optional, and can appear in any order.
-The `-t` parameter declares what to replace the commas with. The `tab` is the default here. For example, `-t;` will replace all unquoted commas with semicolons.
+The `-t` parameter declares what to replace the commas with.
+The `tab` is the default here.
+For example, `-t;` will replace all unquoted commas with semicolons.
-I did not need the `-c` option, but it may come in handy in the future. It lets me declare that I want a character other than a comma replaced with something else. For example, `-c@` will replace all at signs (useful if you want to split a list of email addresses to their user names and domains).
+I did not need the `-c` option, but it may come in handy in the future.
+It lets me declare that I want a character other than a comma replaced with something else.
+For example, `-c@` will replace all at signs (useful if you want to split a list of email addresses to their user names and domains).
-The `-p` option preserves the first line, i.e., it does not delete it. By default, we delete the first line because in a CSV file it contains the field names rather than data.
+The `-p` option preserves the first line, i.e., it does not delete it.
+By default, we delete the first line because in a CSV file it contains the field names rather than data.
-The `-i` and `-o` options let me specify the input and the output files. Defaults are [.filename]#stdin# and [.filename]#stdout#, so this is a regular UNIX(R) filter.
+The `-i` and `-o` options let me specify the input and the output files.
+Defaults are [.filename]#stdin# and [.filename]#stdout#, so this is a regular UNIX(R) filter.
-I made sure that both `-i filename` and `-ifilename` are accepted. I also made sure that only one input and one output files may be specified.
+I made sure that both `-i filename` and `-ifilename` are accepted.
+I also made sure that only one input file and one output file may be specified.
To get the 11th field of each record, I can now do:
-[source,bash]
+[source,shell]
....
% csv '-t;' data.csv | awk '-F;' '{print $11}'
....
@@ -2339,20 +2580,30 @@ write:
ret
....
-Much of it is taken from [.filename]#hex.asm# above. But there is one important difference: I no longer call `write` whenever I am outputting a line feed. Yet, the code can be used interactively.
+Much of it is taken from [.filename]#hex.asm# above.
+But there is one important difference: I no longer call `write` whenever I am outputting a line feed.
+Yet, the code can be used interactively.
-I have found a better solution for the interactive problem since I first started writing this chapter. I wanted to make sure each line is printed out separately only when needed. After all, there is no need to flush out every line when used non-interactively.
+I have found a better solution for the interactive problem since I first started writing this chapter.
+I wanted to make sure each line is printed out separately only when needed.
+After all, there is no need to flush out every line when used non-interactively.
-The new solution I use now is to call `write` every time I find the input buffer empty. That way, when running in the interactive mode, the program reads one line from the user's keyboard, processes it, and sees its input buffer is empty. It flushes its output and reads the next line.
+The new solution I use now is to call `write` every time I find the input buffer empty.
+That way, when running in the interactive mode, the program reads one line from the user's keyboard, processes it, and sees its input buffer is empty.
+It flushes its output and reads the next line.
[[x86-buffered-dark-side]]
==== The Dark Side of Buffering
-This change prevents a mysterious lockup in a very specific case. I refer to it as the _dark side of buffering_, mostly because it presents a danger that is not quite obvious.
+This change prevents a mysterious lockup in a very specific case.
+I refer to it as the _dark side of buffering_, mostly because it presents a danger that is not quite obvious.
-It is unlikely to happen with a program like the csv above, so let us consider yet another filter: In this case we expect our input to be raw data representing color values, such as the _red_, _green_, and _blue_ intensities of a pixel. Our output will be the negative of our input.
+It is unlikely to happen with a program like the csv above, so let us consider yet another filter:
+In this case we expect our input to be raw data representing color values, such as the _red_, _green_, and _blue_ intensities of a pixel.
+Our output will be the negative of our input.
-Such a filter would be very simple to write. Most of it would look just like all the other filters we have written so far, so I am only going to show you its inner loop:
+Such a filter would be very simple to write.
+Most of it would look just like all the other filters we have written so far, so I am only going to show you its inner loop:
[.programlisting]
....
@@ -2365,7 +2616,8 @@ Such a filter would be very simple to write. Most of it would look just like all
Because this filter works with raw data, it is unlikely to be used interactively.
-But it could be called by image manipulation software. And, unless it calls `write` before each call to `read`, chances are it will lock up.
+But it could be called by image manipulation software.
+And, unless it calls `write` before each call to `read`, chances are it will lock up.
Here is what might happen:
@@ -2381,9 +2633,12 @@ Here is what might happen:
. The image editor will read from the other pipe, connected to the `fd.out` of our filter so it can set the first row of the output image _before_ it sends us the second row of the input.
. The _kernel_ suspends the image editor until it receives some output from our filter, so it can pass it on to the image editor.
-At this point our filter waits for the image editor to send it more data to process, while the image editor is waiting for our filter to send it the result of the processing of the first row. But the result sits in our output buffer.
+At this point our filter waits for the image editor to send it more data to process, while the image editor is waiting for our filter to send it the result of the processing of the first row.
+But the result sits in our output buffer.
-The filter and the image editor will continue waiting for each other forever (or, at least, until they are killed). Our software has just entered a crossref:secure[secure-race-conditions,race condition].
+The filter and the image editor will continue waiting for each other forever (or, at least, until they are killed).
+Our software has just entered a crossref:secure[secure-race-conditions,race condition].
This problem does not exist if our filter flushes its output buffer _before_ asking the _kernel_ for more input data.
@@ -2397,17 +2652,21 @@ Yet, never does assembly language shine more than when we create highly optimize
[[x86-fpu-organization]]
=== Organization of the FPU
-The FPU consists of 8 80-bit floating-point registers. These are organized in a stack fashion-you can `push` a value on TOS (_top of stack_) and you can `pop` it.
+The FPU consists of eight 80-bit floating-point registers.
+These are organized in a stack fashion-you can `push` a value on TOS (_top of stack_) and you can `pop` it.
That said, the assembly language op codes are not `push` and `pop` because those are already taken.
-You can `push` a value on TOS by using `fld`, `fild`, and `fbld`. Several other op codes let you `push` many common _constants_-such as _pi_-on the TOS.
+You can `push` a value on TOS by using `fld`, `fild`, and `fbld`.
+Several other op codes let you `push` many common _constants_-such as _pi_-on the TOS.
-Similarly, you can `pop` a value by using `fst`, `fstp`, `fist`, `fistp`, and `fbstp`. Actually, only the op codes that end with a _p_ will literally `pop` the value, the rest will `store` it somewhere else without removing it from the TOS.
+Similarly, you can `pop` a value by using `fst`, `fstp`, `fist`, `fistp`, and `fbstp`.
+Actually, only the op codes that end with a _p_ will literally `pop` the value, the rest will `store` it somewhere else without removing it from the TOS.
We can transfer the data between the TOS and the computer memory either as a 32-bit, 64-bit, or 80-bit _real_, a 16-bit, 32-bit, or 64-bit _integer_, or an 80-bit _packed decimal_.
-The 80-bit _packed decimal_ is a special case of _binary coded decimal_ which is very convenient when converting between the ASCII representation of data and the internal data of the FPU. It allows us to use 18 significant digits.
+The 80-bit _packed decimal_ is a special case of _binary coded decimal_ which is very convenient when converting between the ASCII representation of data and the internal data of the FPU.
+It allows us to use 18 significant digits.
No matter how we represent data in the memory, the FPU always stores it in the 80-bit _real_ format in its registers.
@@ -2417,22 +2676,25 @@ We can perform mathematical operations on the TOS: We can calculate its _sine_,
We can also _multiply_ or _divide_ it by, _add_ it to, or _subtract_ it from, any of the FPU registers (including itself).
-The official Intel op code for the TOS is `st`, and for the _registers_ `st(0)`-`st(7)`. `st` and `st(0)`, then, refer to the same register.
+The official Intel op code for the TOS is `st`, and for the _registers_ `st(0)`-`st(7)`.
+`st` and `st(0)`, then, refer to the same register.
-For whatever reasons, the original author of nasm has decided to use different op codes, namely `st0`-`st7`. In other words, there are no parentheses, and the TOS is always `st0`, never just `st`.
+For whatever reasons, the original author of nasm has decided to use different op codes, namely `st0`-`st7`.
+In other words, there are no parentheses, and the TOS is always `st0`, never just `st`.
[[x86-fpu-packed-decimal]]
==== The Packed Decimal Format
-The _packed decimal_ format uses 10 bytes (80 bits) of memory to represent 18 digits. The number represented there is always an _integer_.
+The _packed decimal_ format uses 10 bytes (80 bits) of memory to represent 18 digits.
+The number represented there is always an _integer_.
[TIP]
====
-
You can use it to get decimal places by multiplying the TOS by a power of 10 first.
====
-The highest bit of the highest byte (byte 9) is the _sign bit_: If it is set, the number is _negative_, otherwise, it is _positive_. The rest of the bits of this byte are unused/ignored.
+The highest bit of the highest byte (byte 9) is the _sign bit_: If it is set, the number is _negative_, otherwise, it is _positive_.
+The rest of the bits of this byte are unused/ignored.
The remaining 9 bytes store the 18 digits of the number: 2 digits per byte.
@@ -2458,7 +2720,9 @@ Remember that, or you will be pulling your hair out in desperation!
[NOTE]
====
-The book to read-if you can find it-is Richard Startz' http://www.amazon.com/exec/obidos/ASIN/013246604X/whizkidtechnomag[8087/80287/80387 for the IBM PC & Compatibles]. Though it does seem to take the fact about the little-endian storage of the _packed decimal_ for granted. I kid you not about the desperation of trying to figure out what was wrong with the filter I show below _before_ it occurred to me I should try the little-endian order even for this type of data.
+The book to read-if you can find it-is Richard Startz' http://www.amazon.com/exec/obidos/ASIN/013246604X/whizkidtechnomag[8087/80287/80387 for the IBM PC & Compatibles].
+Though it does seem to take the fact about the little-endian storage of the _packed decimal_ for granted.
+I kid you not about the desperation of trying to figure out what was wrong with the filter I show below _before_ it occurred to me I should try the little-endian order even for this type of data.
====
[[x86-pinhole-photography]]
@@ -2473,7 +2737,10 @@ Our next filter will help us whenever we want to build a _pinhole camera_, so, w
The easiest way to describe any camera ever built is as some empty space enclosed in some lightproof material, with a small hole in the enclosure.
-The enclosure is usually sturdy (e.g., a box), though sometimes it is flexible (the bellows). It is quite dark inside the camera. However, the hole lets light rays in through a single point (though in some cases there may be several). These light rays form an image, a representation of whatever is outside the camera, in front of the hole.
+The enclosure is usually sturdy (e.g., a box), though sometimes it is flexible (the bellows).
+It is quite dark inside the camera.
+However, the hole lets light rays in through a single point (though in some cases there may be several).
+These light rays form an image, a representation of whatever is outside the camera, in front of the hole.
If some light sensitive material (such as film) is placed inside the camera, it can capture the image.
@@ -2482,9 +2749,12 @@ The hole often contains a _lens_, or a lens assembly, often called the _objectiv
[[x86-the-pinhole]]
==== The Pinhole
-But, strictly speaking, the lens is not necessary: The original cameras did not use a lens but a _pinhole_. Even today, _pinholes_ are used, both as a tool to study how cameras work, and to achieve a special kind of image.
+But, strictly speaking, the lens is not necessary: The original cameras did not use a lens but a _pinhole_.
+Even today, _pinholes_ are used, both as a tool to study how cameras work, and to achieve a special kind of image.
-The image produced by the _pinhole_ is all equally sharp. Or _blurred_. There is an ideal size for a pinhole: If it is either larger or smaller, the image loses its sharpness.
+The image produced by the _pinhole_ is all equally sharp.
+Or _blurred_.
+There is an ideal size for a pinhole: If it is either larger or smaller, the image loses its sharpness.
[[x86-focal-length]]
==== Focal Length
@@ -2496,14 +2766,19 @@ This ideal pinhole diameter is a function of the square root of _focal length_,
D = PC * sqrt(FL)
....
-In here, `D` is the ideal diameter of the pinhole, `FL` is the focal length, and `PC` is a pinhole constant. According to Jay Bender, its value is `0.04`, while Kenneth Connors has determined it to be `0.037`. Others have proposed other values. Plus, this value is for the daylight only: Other types of light will require a different constant, whose value can only be determined by experimentation.
+In here, `D` is the ideal diameter of the pinhole, `FL` is the focal length, and `PC` is a pinhole constant.
+According to Jay Bender, its value is `0.04`, while Kenneth Connors has determined it to be `0.037`.
+Others have proposed other values.
+Plus, this value is for daylight only: Other types of light will require a different constant, whose value can only be determined by experimentation.
[[x86-f-number]]
==== The F-Number
-The f-number is a very useful measure of how much light reaches the film. A light meter can determine that, for example, to expose a film of specific sensitivity with f5.6 mkay require the exposure to last 1/1000 sec.
+The f-number is a very useful measure of how much light reaches the film.
+A light meter can determine that, for example, to expose a film of specific sensitivity with f5.6 may require the exposure to last 1/1000 sec.
-It does not matter whether it is a 35-mm camera, or a 6x9cm camera, etc. As long as we know the f-number, we can determine the proper exposure.
+It does not matter whether it is a 35-mm camera, or a 6x9cm camera, etc.
+As long as we know the f-number, we can determine the proper exposure.
The f-number is easy to calculate:
@@ -2512,9 +2787,12 @@ The f-number is easy to calculate:
F = FL / D
....
-In other words, the f-number equals the focal length divided by the diameter of the pinhole. It also means a higher f-number either implies a smaller pinhole or a larger focal distance, or both. That, in turn, implies, the higher the f-number, the longer the exposure has to be.
+In other words, the f-number equals the focal length divided by the diameter of the pinhole.
+It also means a higher f-number either implies a smaller pinhole or a larger focal distance, or both.
+That, in turn, implies that the higher the f-number, the longer the exposure has to be.
-Furthermore, while pinhole diameter and focal distance are one-dimensional measurements, both, the film and the pinhole, are two-dimensional. That means that if you have measured the exposure at f-number `A` as `t`, then the exposure at f-number `B` is:
+Furthermore, while pinhole diameter and focal distance are one-dimensional measurements, both the film and the pinhole are two-dimensional.
+That means that if you have measured the exposure at f-number `A` as `t`, then the exposure at f-number `B` is:
[.programlisting]
....
@@ -2528,16 +2806,21 @@ While many modern cameras can change the diameter of their pinhole, and thus the
To allow for different f-numbers, cameras typically contained a metal plate with several holes of different sizes drilled into them.
-Their sizes were chosen according to the above formula in such a way that the resultant f-number was one of standard f-numbers used on all cameras everywhere. For example, a very old Kodak Duaflex IV camera in my possession has three such holes for f-numbers 8, 11, and 16.
+Their sizes were chosen according to the above formula in such a way that the resultant f-number was one of the standard f-numbers used on all cameras everywhere.
+For example, a very old Kodak Duaflex IV camera in my possession has three such holes for f-numbers 8, 11, and 16.
-A more recently made camera may offer f-numbers of 2.8, 4, 5.6, 8, 11, 16, 22, and 32 (as well as others). These numbers were not chosen arbitrarily: They all are powers of the square root of 2, though they may be rounded somewha.
+A more recently made camera may offer f-numbers of 2.8, 4, 5.6, 8, 11, 16, 22, and 32 (as well as others).
+These numbers were not chosen arbitrarily: They all are powers of the square root of 2, though they may be rounded somewhat.
[[x86-f-stop]]
==== The F-Stop
-A typical camera is designed in such a way that setting any of the normalized f-numbers changes the feel of the dial. It will naturally _stop_ in that position. Because of that, these positions of the dial are called f-stops.
+A typical camera is designed in such a way that setting any of the normalized f-numbers changes the feel of the dial.
+It will naturally _stop_ in that position.
+Because of that, these positions of the dial are called f-stops.
-Since the f-numbers at each stop are powers of the square root of 2, moving the dial by 1 stop will double the amount of light required for proper exposure. Moving it by 2 stops will quadruple the required exposure. Moving the dial by 3 stops will require the increase in exposure 8 times, etc.
+Since the f-numbers at each stop are powers of the square root of 2, moving the dial by 1 stop will double the amount of light required for proper exposure.
+Moving it by 2 stops will quadruple the required exposure.
+Moving the dial by 3 stops will require the increase in exposure 8 times, etc.
[[x86-pinhole-software]]
=== Designing the Pinhole Software
@@ -2547,7 +2830,8 @@ We are now ready to decide what exactly we want our pinhole software to do.
[[xpinhole-processing-input]]
==== Processing Program Input
-Since its main purpose is to help us design a working pinhole camera, we will use the _focal length_ as the input to the program. This is something we can determine without software: Proper focal length is determined by the size of the film and by the need to shoot "regular" pictures, wide angle pictures, or telephoto pictures.
+Since its main purpose is to help us design a working pinhole camera, we will use the _focal length_ as the input to the program.
+This is something we can determine without software: Proper focal length is determined by the size of the film and by the need to shoot "regular" pictures, wide angle pictures, or telephoto pictures.
Most of the programs we have written so far worked with individual characters, or bytes, as their input: The hex program converted individual bytes into a hexadecimal number, the csv program either let a character through, or deleted it, or changed it to a different character, etc.
@@ -2557,22 +2841,30 @@ But our pinhole program cannot just work with individual characters, it has to d
For example, if we want the program to calculate the pinhole diameter (and other values we will discuss later) at the focal lengths of `100 mm`, `150 mm`, and `210 mm`, we may want to enter something like this:
-[source,bash]
+[source,shell]
....
100, 150, 210
....
-Our program needs to consider more than a single byte of input at a time. When it sees the first `1`, it must understand it is seeing the first digit of a decimal number. When it sees the `0` and the other `0`, it must know it is seeing more digits of the same number.
+Our program needs to consider more than a single byte of input at a time.
+When it sees the first `1`, it must understand it is seeing the first digit of a decimal number.
+When it sees the `0` and the other `0`, it must know it is seeing more digits of the same number.
-When it encounters the first comma, it must know it is no longer receiving the digits of the first number. It must be able to convert the digits of the first number into the value of `100`. And the digits of the second number into the value of `150`. And, of course, the digits of the third number into the numeric value of `210`.
+When it encounters the first comma, it must know it is no longer receiving the digits of the first number.
+It must be able to convert the digits of the first number into the value of `100`.
+And the digits of the second number into the value of `150`.
+And, of course, the digits of the third number into the numeric value of `210`.
We need to decide what delimiters to accept: Do the input numbers have to be separated by a comma? If so, how do we treat two numbers separated by something else?
-Personally, I like to keep it simple. Something either is a number, so I process it. Or it is not a number, so I discard it. I do not like the computer complaining about me typing in an extra character when it is _obvious_ that it is an extra character. Duh!
+Personally, I like to keep it simple.
+Something either is a number, so I process it.
+Or it is not a number, so I discard it.
+I do not like the computer complaining about me typing in an extra character when it is _obvious_ that it is an extra character.
+Duh!
Plus, it allows me to break up the monotony of computing and type in a query instead of just a number:
-[source,bash]
+[source,shell]
....
What is the best pinhole diameter for the
focal length of 150?
@@ -2580,7 +2872,7 @@ What is the best pinhole diameter for the
There is no reason for the computer to spit out a number of complaints:
-[source,bash]
+[source,shell]
....
Syntax error: What
Syntax error: is
@@ -2590,18 +2882,23 @@ Syntax error: best
Et cetera, et cetera, et cetera.
-Secondly, I like the `#` character to denote the start of a comment which extends to the end of the line. This does not take too much effort to code, and lets me treat input files for my software as executable scripts.
+Secondly, I like the `#` character to denote the start of a comment which extends to the end of the line.
+This does not take too much effort to code, and lets me treat input files for my software as executable scripts.
In our case, we also need to decide what units the input should come in: We choose _millimeters_ because that is how most photographers measure the focus length.
Finally, we need to decide whether to allow the use of the decimal point (in which case we must also consider the fact that much of the world uses a decimal _comma_).
-In our case allowing for the decimal point/comma would offer a false sense of precision: There is little if any noticeable difference between the focus lengths of `50` and `51`, so allowing the user to input something like `50.5` is not a good idea. This is my opinion, mind you, but I am the one writing this program. You can make other choices in yours, of course.
+In our case allowing for the decimal point/comma would offer a false sense of precision: There is little if any noticeable difference between the focal lengths of `50` and `51`, so allowing the user to input something like `50.5` is not a good idea.
+This is my opinion, mind you, but I am the one writing this program.
+You can make other choices in yours, of course.
[[x86-pinhole-options]]
==== Offering Options
-The most important thing we need to know when building a pinhole camera is the diameter of the pinhole. Since we want to shoot sharp images, we will use the above formula to calculate the pinhole diameter from focal length. As experts are offering several different values for the `PC` constant, we will need to have the choice.
+The most important thing we need to know when building a pinhole camera is the diameter of the pinhole.
+Since we want to shoot sharp images, we will use the above formula to calculate the pinhole diameter from focal length.
+As experts are offering several different values for the `PC` constant, we will need to offer a choice.
It is traditional in UNIX(R) programming to have two main ways of choosing program parameters, plus to have a default for the time the user does not make a choice.
@@ -2609,19 +2906,33 @@ Why have two ways of choosing?
One is to allow a (relatively) _permanent_ choice that applies automatically each time the software is run without us having to tell it over and over what we want it to do.
-The permanent choices may be stored in a configuration file, typically found in the user's home directory. The file usually has the same name as the application but is started with a dot. Often _"rc"_ is added to the file name. So, ours could be [.filename]#~/.pinhole# or [.filename]#~/.pinholerc#. (The [.filename]#~/# means current user's home directory.)
+The permanent choices may be stored in a configuration file, typically found in the user's home directory.
+The file usually has the same name as the application but starts with a dot.
+Often _"rc"_ is added to the file name.
+So, ours could be [.filename]#~/.pinhole# or [.filename]#~/.pinholerc#.
+(The [.filename]#~/# means current user's home directory.)
-The configuration file is used mostly by programs that have many configurable parameters. Those that have only one (or a few) often use a different method: They expect to find the parameter in an _environment variable_. In our case, we might look at an environment variable named `PINHOLE`.
+The configuration file is used mostly by programs that have many configurable parameters.
+Those that have only one (or a few) often use a different method:
+They expect to find the parameter in an _environment variable_.
+In our case, we might look at an environment variable named `PINHOLE`.
-Usually, a program uses one or the other of the above methods. Otherwise, if a configuration file said one thing, but an environment variable another, the program might get confused (or just too complicated).
+Usually, a program uses one or the other of the above methods.
+Otherwise, if a configuration file said one thing, but an environment variable another, the program might get confused (or just too complicated).
Because we only need to choose _one_ such parameter, we will go with the second method and search the environment for a variable named `PINHOLE`.
-The other way allows us to make _ad hoc_ decisions: _"Though I usually want you to use 0.039, this time I want 0.03872."_ In other words, it allows us to _override_ the permanent choice.
+The other way allows us to make _ad hoc_ decisions: _"Though I usually want you to use 0.039, this time I want 0.03872."_
+In other words, it allows us to _override_ the permanent choice.
This type of choice is usually done with command line parameters.
-Finally, a program _always_ needs a _default_. The user may not make any choices. Perhaps he does not know what to choose. Perhaps he is "just browsing." Preferably, the default will be the value most users would choose anyway. That way they do not need to choose. Or, rather, they can choose the default without an additional effort.
+Finally, a program _always_ needs a _default_.
+The user may not make any choices.
+Perhaps he does not know what to choose.
+Perhaps he is "just browsing."
+Preferably, the default will be the value most users would choose anyway.
+That way they do not need to choose.
+Or, rather, they can choose the default without an additional effort.
Given this system, the program may find conflicting options, and handle them this way:
@@ -2634,19 +2945,25 @@ We also need to decide what _format_ our `PC` option should have.
At first sight, it seems obvious to use the `PINHOLE=0.04` format for the environment variable, and `-p0.04` for the command line.
-Allowing that is actually a security risk. The `PC` constant is a very small number. Naturally, we will test our software using various small values of `PC`. But what will happen if someone runs the program choosing a huge value?
+Allowing that is actually a security risk.
+The `PC` constant is a very small number.
+Naturally, we will test our software using various small values of `PC`.
+But what will happen if someone runs the program choosing a huge value?
It may crash the program because we have not designed it to handle huge numbers.
-Or, we may spend more time on the program so it can handle huge numbers. We might do that if we were writing commercial software for computer illiterate audience.
+Or, we may spend more time on the program so it can handle huge numbers.
+We might do that if we were writing commercial software for a computer-illiterate audience.
Or, we might say, _"Tough! The user should know better."_
-Or, we just may make it impossible for the user to enter a huge number. This is the approach we will take: We will use an _implied 0._ prefix.
+Or, we just may make it impossible for the user to enter a huge number.
+This is the approach we will take: We will use an _implied 0._ prefix.
-In other words, if the user wants `0.04`, we will expect him to type `-p04`, or set `PINHOLE=04` in his environment. So, if he says `-p9999999`, we will interpret it as ``0.9999999``-still ridiculous but at least safer.
+In other words, if the user wants `0.04`, we will expect him to type `-p04`, or set `PINHOLE=04` in his environment.
+So, if he says `-p9999999`, we will interpret it as ``0.9999999``-still ridiculous but at least safer.
-Secondly, many users will just want to go with either Bender's constant or Connors' constant. To make it easier on them, we will interpret `-b` as identical to `-p04`, and `-c` as identical to `-p037`.
+Secondly, many users will just want to go with either Bender's constant or Connors' constant.
+To make it easier on them, we will interpret `-b` as identical to `-p04`, and `-c` as identical to `-p037`.
[[x86-pinhole-output]]
==== The Output
@@ -2655,21 +2972,30 @@ We need to decide what we want our software to send to the output, and in what f
Since our input allows for an unspecified number of focal length entries, it makes sense to use a traditional database-style output of showing the result of the calculation for each focal length on a separate line, while separating all values on one line by a `tab` character.
-Optionally, we should also allow the user to specify the use of the CSV format we have studied earlier. In this case, we will print out a line of comma-separated names describing each field of every line, then show our results as before, but substituting a `comma` for the `tab`.
+Optionally, we should also allow the user to specify the use of the CSV format we have studied earlier.
+In this case, we will print out a line of comma-separated names describing each field of every line, then show our results as before, but substituting a `comma` for the `tab`.
-We need a command line option for the CSV format. We cannot use `-c` because that already means _use Connors' constant_. For some strange reason, many web sites refer to CSV files as _"Excel spreadsheet"_ (though the CSV format predates Excel). We will, therefore, use the `-e` switch to inform our software we want the output in the CSV format.
+We need a command line option for the CSV format.
+We cannot use `-c` because that already means _use Connors' constant_.
+For some strange reason, many web sites refer to CSV files as _"Excel spreadsheet"_ (though the CSV format predates Excel).
+We will, therefore, use the `-e` switch to inform our software we want the output in the CSV format.
-We will start each line of the output with the focal length. This may sound repetitious at first, especially in the interactive mode: The user types in the focal length, and we are repeating it.
+We will start each line of the output with the focal length.
+This may sound repetitious at first, especially in the interactive mode:
+The user types in the focal length, and we are repeating it.
-But the user can type several focal lengths on one line. The input can also come in from a file or from the output of another program. In that case the user does not see the input at all.
+But the user can type several focal lengths on one line.
+The input can also come in from a file or from the output of another program.
+In that case the user does not see the input at all.
By the same token, the output can go to a file which we will want to examine later, or it could go to the printer, or become the input of another program.
So, it makes perfect sense to start each line with the focal length as entered by the user.
-No, wait! Not as entered by the user. What if the user types in something like this:
+No, wait! Not as entered by the user.
+What if the user types in something like this:
-[source,bash]
+[source,shell]
....
00000000150
....
@@ -2682,16 +3008,20 @@ But...
What if the user types something like this:
-[source,bash]
+[source,shell]
....
17459765723452353453534535353530530534563507309676764423
....
-Ha! The packed decimal FPU format lets us input 18-digit numbers. But the user has entered more than 18 digits. How do we handle that?
+Ha! The packed decimal FPU format lets us input 18-digit numbers.
+But the user has entered more than 18 digits.
+How do we handle that?
Well, we _could_ modify our code to read the first 18 digits, enter it to the FPU, then read more, multiply what we already have on the TOS by 10 raised to the number of additional digits, then `add` to it.
-Yes, we could do that. But in _this_ program it would be ridiculous (in a different one it may be just the thing to do): Even the circumference of the Earth expressed in millimeters only takes 11 digits. Clearly, we cannot build a camera that large (not yet, anyway).
+Yes, we could do that.
+But in _this_ program it would be ridiculous (in a different one it may be just the thing to do): Even the circumference of the Earth expressed in millimeters only takes 11 digits.
+Clearly, we cannot build a camera that large (not yet, anyway).
So, if the user enters such a huge number, he is either bored, or testing us, or trying to break into the system, or playing games-doing anything but designing a pinhole camera.
@@ -2699,12 +3029,13 @@ What will we do?
We will slap him in the face, in a manner of speaking:
-[source,bash]
+[source,shell]
....
17459765723452353453534535353530530534563507309676764423 ??? ??? ??? ??? ???
....
-To achieve that, we will simply ignore any leading zeros. Once we find a non-zero digit, we will initialize a counter to `0` and start taking three steps:
+To achieve that, we will simply ignore any leading zeros.
+Once we find a non-zero digit, we will initialize a counter to `0` and start taking three steps:
[.procedure]
. Send the digit to the output.
@@ -2716,39 +3047,55 @@ Now, while we are taking these three steps, we also need to watch out for one of
* If the counter grows above 18, we stop appending to the buffer. We continue reading the digits and sending them to the output.
* If, or rather _when_, the next input character is not a digit, we are done inputting for now.
+
-Incidentally, we can simply discard the non-digit, unless it is a `#`, which we must return to the input stream. It starts a comment, so we must see it after we are done producing output and start looking for more input.
+Incidentally, we can simply discard the non-digit, unless it is a `#`, which we must return to the input stream.
+It starts a comment, so we must see it after we are done producing output and start looking for more input.
That still leaves one possibility uncovered: If all the user enters is a zero (or several zeros), we will never find a non-zero to display.
-We can determine this has happened whenever our counter stays at `0`. In that case we need to send `0` to the output, and perform another "slap in the face":
+We can determine this has happened whenever our counter stays at `0`.
+In that case we need to send `0` to the output, and perform another "slap in the face":
-[source,bash]
+[source,shell]
....
0 ??? ??? ??? ??? ???
....
Once we have displayed the focal length and determined it is valid (greater than `0` but not exceeding 18 digits), we can calculate the pinhole diameter.
-It is not by coincidence that _pinhole_ contains the word _pin_. Indeed, many a pinhole literally is a _pin hole_, a hole carefully punched with the tip of a pin.
+It is not by coincidence that _pinhole_ contains the word _pin_.
+Indeed, many a pinhole literally is a _pin hole_, a hole carefully punched with the tip of a pin.
-That is because a typical pinhole is very small. Our formula gets the result in millimeters. We will multiply it by `1000`, so we can output the result in _microns_.
+That is because a typical pinhole is very small.
+Our formula gets the result in millimeters.
+We will multiply it by `1000`, so we can output the result in _microns_.
At this point we have yet another trap to face: _Too much precision._
-Yes, the FPU was designed for high precision mathematics. But we are not dealing with high precision mathematics. We are dealing with physics (optics, specifically).
+Yes, the FPU was designed for high precision mathematics.
+But we are not dealing with high precision mathematics.
+We are dealing with physics (optics, specifically).
-Suppose we want to convert a truck into a pinhole camera (we would not be the first ones to do that!). Suppose its box is `12` meters long, so we have the focal length of `12000`. Well, using Bender's constant, it gives us square root of `12000` multiplied by `0.04`, which is `4.381780460` millimeters, or `4381.780460` microns.
+Suppose we want to convert a truck into a pinhole camera (we would not be the first ones to do that!).
+Suppose its box is `12` meters long, so we have the focal length of `12000`.
+Well, using Bender's constant, it gives us square root of `12000` multiplied by `0.04`, which is `4.381780460` millimeters, or `4381.780460` microns.
-Put either way, the result is absurdly precise. Our truck is not _exactly_ `12000` millimeters long. We did not measure its length with such a precision, so stating we need a pinhole with the diameter of `4.381780460` millimeters is, well, deceiving. `4.4` millimeters would do just fine.
+Put either way, the result is absurdly precise.
+Our truck is not _exactly_ `12000` millimeters long.
+We did not measure its length with such a precision, so stating we need a pinhole with the diameter of `4.381780460` millimeters is, well, deceiving.
+`4.4` millimeters would do just fine.
[NOTE]
====
-I "only" used ten digits in the above example. Imagine the absurdity of going for all 18!
+I "only" used ten digits in the above example.
+Imagine the absurdity of going for all 18!
====
-We need to limit the number of significant digits of our result. One way of doing it is by using an integer representing microns. So, our truck would need a pinhole with the diameter of `4382` microns. Looking at that number, we still decide that `4400` microns, or `4.4` millimeters is close enough.
+We need to limit the number of significant digits of our result.
+One way of doing it is by using an integer representing microns.
+So, our truck would need a pinhole with the diameter of `4382` microns.
+Looking at that number, we still decide that `4400` microns, or `4.4` millimeters is close enough.
-Additionally, we can decide that no matter how big a result we get, we only want to display four significant digits (or any other number of them, of course). Alas, the FPU does not offer rounding to a specific number of digits (after all, it does not view the numbers as decimal but as binary).
+Additionally, we can decide that no matter how big a result we get, we only want to display four significant digits (or any other number of them, of course).
+Alas, the FPU does not offer rounding to a specific number of digits (after all, it does not view the numbers as decimal but as binary).
We, therefore, must devise an algorithm to reduce the number of significant digits.
@@ -2762,18 +3109,27 @@ Here is mine (I think it is awkward-if you know a better one, _please_, let me k
[NOTE]
====
-The `10000` is only good if you want _four_ significant digits. For any other number of significant digits, replace `10000` with `10` raised to the number of significant digits.
+The `10000` is only good if you want _four_ significant digits.
+For any other number of significant digits, replace `10000` with `10` raised to the number of significant digits.
====
We will, then, output the pinhole diameter in microns, rounded off to four significant digits.
-At this point, we know the _focal length_ and the _pinhole diameter_. That means we have enough information to also calculate the _f-number_.
+At this point, we know the _focal length_ and the _pinhole diameter_.
+That means we have enough information to also calculate the _f-number_.
-We will display the f-number, rounded to four significant digits. Chances are the f-number will tell us very little. To make it more meaningful, we can find the nearest _normalized f-number_, i.e., the nearest power of the square root of 2.
+We will display the f-number, rounded to four significant digits.
+Chances are the f-number will tell us very little.
+To make it more meaningful, we can find the nearest _normalized f-number_, i.e., the nearest power of the square root of 2.
-We do that by multiplying the actual f-number by itself, which, of course, will give us its `square`. We will then calculate its base-2 logarithm, which is much easier to do than calculating the base-square-root-of-2 logarithm! We will round the result to the nearest integer. Next, we will raise 2 to the result. Actually, the FPU gives us a good shortcut to do that: We can use the `fscale` op code to "scale" 1, which is analogous to ``shift``ing an integer left. Finally, we calculate the square root of it all, and we have the nearest normalized f-number.
+We do that by multiplying the actual f-number by itself, which, of course, will give us its `square`.
+We will then calculate its base-2 logarithm, which is much easier to do than calculating the base-square-root-of-2 logarithm!
+We will round the result to the nearest integer.
+Next, we will raise 2 to the result.
+Actually, the FPU gives us a good shortcut to do that: We can use the `fscale` op code to "scale" 1, which is analogous to ``shift``ing an integer left.
+Finally, we calculate the square root of it all, and we have the nearest normalized f-number.
-If all that sounds overwhelming-or too much work, perhaps-it may become much clearer if you see the code. It takes 9 op codes altogether:
+If all that sounds overwhelming-or too much work, perhaps-it may become much clearer if you see the code.
+It takes 9 op codes altogether:
[.programlisting]
....
@@ -2788,92 +3144,143 @@ fmul st0, st0
fstp st1
....
-The first line, `fmul st0, st0`, squares the contents of the TOS (top of the stack, same as `st`, called `st0` by nasm). The `fld1` pushes `1` on the TOS.
+The first line, `fmul st0, st0`, squares the contents of the TOS (top of the stack, same as `st`, called `st0` by nasm).
+The `fld1` pushes `1` on the TOS.
-The next line, `fld st1`, pushes the square back to the TOS. At this point the square is both in `st` and `st(2)` (it will become clear why we leave a second copy on the stack in a moment). `st(1)` contains `1`.
+The next line, `fld st1`, pushes the square back to the TOS.
+At this point the square is both in `st` and `st(2)` (it will become clear why we leave a second copy on the stack in a moment).
+`st(1)` contains `1`.
-Next, `fyl2x` calculates base-2 logarithm of `st` multiplied by `st(1)`. That is why we placed `1` on `st(1)` before.
+Next, `fyl2x` calculates base-2 logarithm of `st` multiplied by `st(1)`.
+That is why we placed `1` on `st(1)` before.
At this point, `st` contains the logarithm we have just calculated, `st(1)` contains the square of the actual f-number we saved for later.
-`frndint` rounds the TOS to the nearest integer. `fld1` pushes a `1`. `fscale` shifts the `1` we have on the TOS by the value in `st(1)`, effectively raising 2 to `st(1)`.
+`frndint` rounds the TOS to the nearest integer.
+`fld1` pushes a `1`.
+`fscale` shifts the `1` we have on the TOS by the value in `st(1)`, effectively raising 2 to `st(1)`.
Finally, `fsqrt` calculates the square root of the result, i.e., the nearest normalized f-number.
-We now have the nearest normalized f-number on the TOS, the base-2 logarithm rounded to the nearest integer in `st(1)`, and the square of the actual f-number in `st(2)`. We are saving the value in `st(2)` for later.
+We now have the nearest normalized f-number on the TOS, the base-2 logarithm rounded to the nearest integer in `st(1)`, and the square of the actual f-number in `st(2)`.
+We are saving the value in `st(2)` for later.
-But we do not need the contents of `st(1)` anymore. The last line, `fstp st1`, places the contents of `st` to `st(1)`, and pops. As a result, what was `st(1)` is now `st`, what was `st(2)` is now `st(1)`, etc. The new `st` contains the normalized f-number. The new `st(1)` contains the square of the actual f-number we have stored there for posterity.
+But we do not need the contents of `st(1)` anymore.
+The last line, `fstp st1`, places the contents of `st` to `st(1)`, and pops.
+As a result, what was `st(1)` is now `st`, what was `st(2)` is now `st(1)`, etc.
+The new `st` contains the normalized f-number.
+The new `st(1)` contains the square of the actual f-number we have stored there for posterity.
-At this point, we are ready to output the normalized f-number. Because it is normalized, we will not round it off to four significant digits, but will send it out in its full precision.
+At this point, we are ready to output the normalized f-number.
+Because it is normalized, we will not round it off to four significant digits, but will send it out in its full precision.
-The normalized f-number is useful as long as it is reasonably small and can be found on our light meter. Otherwise we need a different method of determining proper exposure.
+The normalized f-number is useful as long as it is reasonably small and can be found on our light meter.
+Otherwise we need a different method of determining proper exposure.
Earlier we have figured out the formula of calculating proper exposure at an arbitrary f-number from that measured at a different f-number.
-Every light meter I have ever seen can determine proper exposure at f5.6. We will, therefore, calculate an _"f5.6 multiplier,"_ i.e., by how much we need to multiply the exposure measured at f5.6 to determine the proper exposure for our pinhole camera.
+Every light meter I have ever seen can determine proper exposure at f5.6.
+We will, therefore, calculate an _"f5.6 multiplier,"_ i.e., by how much we need to multiply the exposure measured at f5.6 to determine the proper exposure for our pinhole camera.
From the above formula we know this factor can be calculated by dividing our f-number (the actual one, not the normalized one) by `5.6`, and squaring the result.
Mathematically, dividing the square of our f-number by the square of `5.6` will give us the same result.
-Computationally, we do not want to square two numbers when we can only square one. So, the first solution seems better at first.
+Computationally, we do not want to square two numbers when we can only square one.
+So, the first solution seems better at first.
But...
-`5.6` is a _constant_. We do not have to have our FPU waste precious cycles. We can just tell it to divide the square of the f-number by whatever `5.6²` equals to. Or we can divide the f-number by `5.6`, and then square the result. The two ways now seem equal.
+`5.6` is a _constant_.
+We do not have to have our FPU waste precious cycles.
+We can just tell it to divide the square of the f-number by whatever `5.6²` equals.
+Or we can divide the f-number by `5.6`, and then square the result.
+The two ways now seem equal.
But, they are not!
-Having studied the principles of photography above, we remember that the `5.6` is actually square root of 2 raised to the fifth power. An _irrational_ number. The square of this number is _exactly_ `32`.
+Having studied the principles of photography above, we remember that the `5.6` is actually square root of 2 raised to the fifth power.
+An _irrational_ number.
+The square of this number is _exactly_ `32`.
-Not only is `32` an integer, it is a power of 2. We do not need to divide the square of the f-number by `32`. We only need to use `fscale` to shift it right by five positions. In the FPU lingo it means we will `fscale` it with `st(1)` equal to `-5`. That is _much faster_ than a division.
+Not only is `32` an integer, it is a power of 2.
+We do not need to divide the square of the f-number by `32`.
+We only need to use `fscale` to shift it right by five positions.
+In the FPU lingo it means we will `fscale` it with `st(1)` equal to `-5`.
+That is _much faster_ than a division.
-So, now it has become clear why we have saved the square of the f-number on the top of the FPU stack. The calculation of the f5.6 multiplier is the easiest calculation of this entire program! We will output it rounded to four significant digits.
+So, now it has become clear why we have saved the square of the f-number on the top of the FPU stack.
+The calculation of the f5.6 multiplier is the easiest calculation of this entire program!
+We will output it rounded to four significant digits.
-There is one more useful number we can calculate: The number of stops our f-number is from f5.6. This may help us if our f-number is just outside the range of our light meter, but we have a shutter which lets us set various speeds, and this shutter uses stops.
+There is one more useful number we can calculate: The number of stops our f-number is from f5.6.
+This may help us if our f-number is just outside the range of our light meter, but we have a shutter which lets us set various speeds, and this shutter uses stops.
-Say, our f-number is 5 stops from f5.6, and the light meter says we should use 1/1000 sec. Then we can set our shutter speed to 1/1000 first, then move the dial by 5 stops.
+Say, our f-number is 5 stops from f5.6, and the light meter says we should use 1/1000 sec.
+Then we can set our shutter speed to 1/1000 first, then move the dial by 5 stops.
-This calculation is quite easy as well. All we have to do is to calculate the base-2 logarithm of the f5.6 multiplier we had just calculated (though we need its value from before we rounded it off). We then output the result rounded to the nearest integer. We do not need to worry about having more than four significant digits in this one: The result is most likely to have only one or two digits anyway.
+This calculation is quite easy as well.
+All we have to do is to calculate the base-2 logarithm of the f5.6 multiplier we had just calculated (though we need its value from before we rounded it off).
+We then output the result rounded to the nearest integer.
+We do not need to worry about having more than four significant digits in this one: The result is most likely to have only one or two digits anyway.
[[x86-fpu-optimizations]]
=== FPU Optimizations
In assembly language we can optimize the FPU code in ways impossible in high languages, including C.
-Whenever a C function needs to calculate a floating-point value, it loads all necessary variables and constants into FPU registers. It then does whatever calculation is required to get the correct result. Good C compilers can optimize that part of the code really well.
+Whenever a C function needs to calculate a floating-point value, it loads all necessary variables and constants into FPU registers.
+It then does whatever calculation is required to get the correct result.
+Good C compilers can optimize that part of the code really well.
-It "returns" the value by leaving the result on the TOS. However, before it returns, it cleans up. Any variables and constants it used in its calculation are now gone from the FPU.
+It "returns" the value by leaving the result on the TOS.
+However, before it returns, it cleans up.
+Any variables and constants it used in its calculation are now gone from the FPU.
It cannot do what we just did above: We calculated the square of the f-number and kept it on the stack for later use by another function.
-We _knew_ we would need that value later on. We also knew we had enough room on the stack (which only has room for 8 numbers) to store it there.
+We _knew_ we would need that value later on.
+We also knew we had enough room on the stack (which only has room for 8 numbers) to store it there.
A C compiler has no way of knowing that a value it has on the stack will be required again in the very near future.
-Of course, the C programmer may know it. But the only recourse he has is to store the value in a memory variable.
+Of course, the C programmer may know it.
+But the only recourse he has is to store the value in a memory variable.
That means, for one, the value will be changed from the 80-bit precision used internally by the FPU to a C _double_ (64 bits) or even _single_ (32 bits).
-That also means that the value must be moved from the TOS into the memory, and then back again. Alas, of all FPU operations, the ones that access the computer memory are the slowest.
+That also means that the value must be moved from the TOS into the memory, and then back again.
+Alas, of all FPU operations, the ones that access the computer memory are the slowest.
So, whenever programming the FPU in assembly language, look for the ways of keeping intermediate results on the FPU stack.
We can take that idea even further! In our program we are using a _constant_ (the one we named `PC`).
-It does not matter how many pinhole diameters we are calculating: 1, 10, 20, 1000, we are always using the same constant. Therefore, we can optimize our program by keeping the constant on the stack all the time.
+It does not matter how many pinhole diameters we are calculating: 1, 10, 20, 1000, we are always using the same constant.
+Therefore, we can optimize our program by keeping the constant on the stack all the time.
-Early on in our program, we are calculating the value of the above constant. We need to divide our input by `10` for every digit in the constant.
+Early on in our program, we are calculating the value of the above constant.
+We need to divide our input by `10` for every digit in the constant.
-It is much faster to multiply than to divide. So, at the start of our program, we divide `10` into `1` to obtain `0.1`, which we then keep on the stack: Instead of dividing the input by `10` for every digit, we multiply it by `0.1`.
+It is much faster to multiply than to divide.
+So, at the start of our program, we divide `10` into `1` to obtain `0.1`, which we then keep on the stack: Instead of dividing the input by `10` for every digit, we multiply it by `0.1`.
-By the way, we do not input `0.1` directly, even though we could. We have a reason for that: While `0.1` can be expressed with just one decimal place, we do not know how many _binary_ places it takes. We, therefore, let the FPU calculate its binary value to its own high precision.
+By the way, we do not input `0.1` directly, even though we could.
+We have a reason for that: While `0.1` can be expressed with just one decimal place, we do not know how many _binary_ places it takes.
+We, therefore, let the FPU calculate its binary value to its own high precision.
-We are using other constants: We multiply the pinhole diameter by `1000` to convert it from millimeters to microns. We compare numbers to `10000` when we are rounding them off to four significant digits. So, we keep both, `1000` and `10000`, on the stack. And, of course, we reuse the `0.1` when rounding off numbers to four digits.
+We are using other constants: We multiply the pinhole diameter by `1000` to convert it from millimeters to microns.
+We compare numbers to `10000` when we are rounding them off to four significant digits.
+So, we keep both, `1000` and `10000`, on the stack.
+And, of course, we reuse the `0.1` when rounding off numbers to four digits.
-Last but not least, we keep `-5` on the stack. We need it to scale the square of the f-number, instead of dividing it by `32`. It is not by coincidence we load this constant last. That makes it the top of the stack when only the constants are on it. So, when the square of the f-number is being scaled, the `-5` is at `st(1)`, precisely where `fscale` expects it to be.
+Last but not least, we keep `-5` on the stack.
+We need it to scale the square of the f-number, instead of dividing it by `32`.
+It is not by coincidence we load this constant last.
+That makes it the top of the stack when only the constants are on it.
+So, when the square of the f-number is being scaled, the `-5` is at `st(1)`, precisely where `fscale` expects it to be.
-It is common to create certain constants from scratch instead of loading them from the memory. That is what we are doing with `-5`:
+It is common to create certain constants from scratch instead of loading them from the memory.
+That is what we are doing with `-5`:
[.programlisting]
....
@@ -2889,8 +3296,8 @@ We can generalize all these optimizations into one rule: _Keep repeat values on
[TIP]
====
-
-_PostScript(R)_ is a stack-oriented programming language. There are many more books available about PostScript(R) than about the FPU assembly language: Mastering PostScript(R) will help you master the FPU.
+_PostScript(R)_ is a stack-oriented programming language.
+There are many more books available about PostScript(R) than about the FPU assembly language: Mastering PostScript(R) will help you master the FPU.
====
[[x86-pinhole-the-code]]
@@ -3611,15 +4018,20 @@ When we have no more input, it can mean one of two things:
For that reason, we have modified our `getchar` and our `read` routines to return with the `carry flag` _clear_ whenever we are fetching another character from the input, or the `carry flag` _set_ whenever there is no more input.
-Of course, we are still using assembly language magic to do that! Take a good look at `getchar`. It _always_ returns with the `carry flag` _clear_.
+Of course, we are still using assembly language magic to do that! Take a good look at `getchar`.
+It _always_ returns with the `carry flag` _clear_.
Yet, our main code relies on the `carry flag` to tell it when to quit-and it works.
-The magic is in `read`. Whenever it receives more input from the system, it just returns to `getchar`, which fetches a character from the input buffer, _clears_ the `carry flag` and returns.
+The magic is in `read`.
+Whenever it receives more input from the system, it just returns to `getchar`, which fetches a character from the input buffer, _clears_ the `carry flag` and returns.
-But when `read` receives no more input from the system, it does _not_ return to `getchar` at all. Instead, the `add esp, byte 4` op code adds `4` to `ESP`, _sets_ the `carry flag`, and returns.
+But when `read` receives no more input from the system, it does _not_ return to `getchar` at all.
+Instead, the `add esp, byte 4` op code adds `4` to `ESP`, _sets_ the `carry flag`, and returns.
-So, where does it return to? Whenever a program uses the `call` op code, the microprocessor ``push``es the return address, i.e., it stores it on the top of the stack (not the FPU stack, the system stack, which is in the memory). When a program uses the `ret` op code, the microprocessor ``pop``s the return value from the stack, and jumps to the address that was stored there.
+So, where does it return to? Whenever a program uses the `call` op code, the microprocessor ``push``es the return address, i.e.,
+it stores it on the top of the stack (not the FPU stack, the system stack, which is in the memory).
+When a program uses the `ret` op code, the microprocessor ``pop``s the return value from the stack, and jumps to the address that was stored there.
But since we added `4` to `ESP` (which is the stack pointer register), we have effectively given the microprocessor a minor case of _amnesia_: It no longer remembers it was `getchar` that ``call``ed `read`.
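For readers who want to see the trick in isolation, here is a minimal NASM-style sketch (with hypothetical labels and elided buffer handling, not the actual pinhole source):

```nasm
; Sketch of the carry-flag convention (hypothetical labels).
; getchar calls read whenever its buffer is empty.
getchar:
	; ... if the buffer is not empty, fetch the next byte ...
	clc			; more input: return with carry clear
	ret

read:
	; ... ask the kernel for more input ...
	; if we received some, just "ret" back into getchar
	; if we received none (end of input):
	add	esp, byte 4	; discard getchar's return address
	stc			; signal end of input with carry set
	ret			; returns to getchar's caller, not to getchar
```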
@@ -3630,24 +4042,31 @@ Other than that, the `bcdload` routine is caught up in the middle of a Lilliputi
It is converting the text representation of a number into that number: The text is stored in the big-endian order, but the _packed decimal_ is little-endian.
-To solve the conflict, we use the `std` op code early on. We cancel it with `cld` later on: It is quite important we do not `call` anything that may depend on the default setting of the _direction flag_ while `std` is active.
+To solve the conflict, we use the `std` op code early on.
+We cancel it with `cld` later on: It is quite important we do not `call` anything that may depend on the default setting of the _direction flag_ while `std` is active.
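A minimal sketch of the idea (hypothetical labels and registers, not the actual `bcdload` source) looks like this: with the direction flag set, `lodsb` walks the text right to left.

```nasm
; Sketch: reading ASCII digits backwards with std (illustrative only).
; Assumes ECX holds the digit count and buffer holds the text.
	std			; direction flag set: lodsb now decrements ESI
	lea	esi, [buffer+ecx-1]	; start at the last character
.next:
	lodsb			; fetch a digit, moving right to left
	; ... pack two digits per byte here ...
	loop	.next
	cld			; restore the default before calling anything else
```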
Everything else in this code should be quite clear, provided you have read the entire chapter that precedes it.
-It is a classical example of the adage that programming requires a lot of thought and only a little coding. Once we have thought through every tiny detail, the code almost writes itself.
+It is a classical example of the adage that programming requires a lot of thought and only a little coding.
+Once we have thought through every tiny detail, the code almost writes itself.
[[x86-pinhole-using]]
=== Using pinhole
-Because we have decided to make the program _ignore_ any input except for numbers (and even those inside a comment), we can actually perform _textual queries_. We do not _have to_, but we _can_.
+Because we have decided to make the program _ignore_ any input except for numbers (and even those inside a comment), we can actually perform _textual queries_.
+We do not _have to_, but we _can_.
In my humble opinion, forming a textual query, instead of having to follow a very strict syntax, makes software much more user-friendly.
-Suppose we want to build a pinhole camera to use the 4x5 inch film. The standard focal length for that film is about 150mm. We want to _fine-tune_ our focal length so the pinhole diameter is as round a number as possible. Let us also suppose we are quite comfortable with cameras but somewhat intimidated by computers. Rather than just have to type in a bunch of numbers, we want to _ask_ a couple of questions.
+Suppose we want to build a pinhole camera to use the 4x5 inch film.
+The standard focal length for that film is about 150mm.
+We want to _fine-tune_ our focal length so the pinhole diameter is as round a number as possible.
+Let us also suppose we are quite comfortable with cameras but somewhat intimidated by computers.
+Rather than just have to type in a bunch of numbers, we want to _ask_ a couple of questions.
Our session might look like this:
-[source,bash]
+[source,shell]
....
% pinhole
@@ -3690,7 +4109,8 @@ You have probably seen shell _scripts_ that start with:
...because the blank space after the `#!` is optional.
-Whenever UNIX(R) is asked to run an executable file which starts with the `#!`, it assumes the file is a script. It adds the command to the rest of the first line of the script, and tries to execute that.
+Whenever UNIX(R) is asked to run an executable file which starts with the `#!`, it assumes the file is a script.
+It adds the command to the rest of the first line of the script, and tries to execute that.
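The mechanism is easy to observe by naming a program that merely echoes its arguments as the "interpreter". This is a hypothetical demonstration, not part of the pinhole program:

```shell
# Create a script whose "interpreter" is /bin/echo, so the command
# line the kernel constructs becomes visible.
cat > demo <<'EOF'
#!/bin/echo interpreter-args
EOF
chmod 755 demo
./demo extra
# prints: interpreter-args ./demo extra
```

The kernel appends the script's path and the original arguments to the interpreter line, which is exactly what makes the pinhole scripts below work.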
Suppose now that we have installed pinhole in /usr/local/bin/. We can write a script to calculate various pinhole diameters suitable for the focal lengths commonly used with 120 film.
@@ -3716,7 +4136,7 @@ Because 120 is a medium size film, we may name this file medium.
We can set its permissions to execute, and run it as if it were a program:
-[source,bash]
+[source,shell]
....
% chmod 755 medium
% ./medium
@@ -3724,14 +4144,14 @@ We can set its permissions to execute, and run it as if it were a program:
UNIX(R) will interpret that last command as:
-[source,bash]
+[source,shell]
....
% /usr/local/bin/pinhole -b -i ./medium
....
It will run that command and display:
-[source,bash]
+[source,shell]
....
80 358 224 256 1562 11
30 219 137 128 586 9
@@ -3746,21 +4166,22 @@ It will run that command and display:
Now, let us enter:
-[source,bash]
+[source,shell]
....
% ./medium -c
....
UNIX(R) will treat that as:
-[source,bash]
+[source,shell]
....
% /usr/local/bin/pinhole -b -i ./medium -c
....
-That gives it two conflicting options: `-b` and `-c` (Use Bender's constant and use Connors' constant). We have programmed it so later options override early ones-our program will calculate everything using Connors' constant:
+That gives it two conflicting options: `-b` and `-c` (Use Bender's constant and use Connors' constant).
+We have programmed it so later options override early ones-our program will calculate everything using Connors' constant:
-[source,bash]
+[source,shell]
....
80 331 242 256 1826 11
30 203 148 128 685 9
@@ -3773,9 +4194,10 @@ That gives it two conflicting options: `-b` and `-c` (Use Bender's constant and
140 438 320 362 3196 12
....
-We decide we want to go with Bender's constant after all. We want to save its values as a comma-separated file:
+We decide we want to go with Bender's constant after all.
+We want to save its values as a comma-separated file:
-[source,bash]
+[source,shell]
....
% ./medium -b -e > bender
% cat bender
@@ -3795,18 +4217,24 @@ focal length in millimeters,pinhole diameter in microns,F-number,normalized F-nu
[[x86-caveats]]
== Caveats
-Assembly language programmers who "grew up" under MS-DOS(R) and Windows(R) often tend to take shortcuts. Reading the keyboard scan codes and writing directly to video memory are two classical examples of practices which, under MS-DOS(R) are not frowned upon but considered the right thing to do.
+Assembly language programmers who "grew up" under MS-DOS(R) and Windows(R) often tend to take shortcuts.
+Reading the keyboard scan codes and writing directly to video memory are two classical examples of practices which, under MS-DOS(R), are not frowned upon but are considered the right thing to do.
The reason? Both the PC BIOS and MS-DOS(R) are notoriously slow when performing these operations.
-You may be tempted to continue similar practices in the UNIX(R) environment. For example, I have seen a web site which explains how to access the keyboard scan codes on a popular UNIX(R) clone.
+You may be tempted to continue similar practices in the UNIX(R) environment.
+For example, I have seen a web site which explains how to access the keyboard scan codes on a popular UNIX(R) clone.
That is generally a _very bad idea_ in the UNIX(R) environment! Let me explain why.
[[x86-protected]]
=== UNIX(R) Is Protected
-For one thing, it may simply not be possible. UNIX(R) runs in protected mode. Only the kernel and device drivers are allowed to access hardware directly. Perhaps a particular UNIX(R) clone will let you read the keyboard scan codes, but chances are a real UNIX(R) operating system will not. And even if one version may let you do it, the next one may not, so your carefully crafted software may become a dinosaur overnight.
+For one thing, it may simply not be possible.
+UNIX(R) runs in protected mode.
+Only the kernel and device drivers are allowed to access hardware directly.
+Perhaps a particular UNIX(R) clone will let you read the keyboard scan codes, but chances are a real UNIX(R) operating system will not.
+And even if one version may let you do it, the next one may not, so your carefully crafted software may become a dinosaur overnight.
[[x86-abstraction]]
=== UNIX(R) Is an Abstraction
@@ -3815,30 +4243,45 @@ But there is a much more important reason not to try accessing the hardware dire
_UNIX(R) is an abstraction!_
-There is a major difference in the philosophy of design between MS-DOS(R) and UNIX(R). MS-DOS(R) was designed as a single-user system. It is run on a computer with a keyboard and a video screen attached directly to that computer. User input is almost guaranteed to come from that keyboard. Your program's output virtually always ends up on that screen.
+There is a major difference in the philosophy of design between MS-DOS(R) and UNIX(R).
+MS-DOS(R) was designed as a single-user system.
+It is run on a computer with a keyboard and a video screen attached directly to that computer.
+User input is almost guaranteed to come from that keyboard.
+Your program's output virtually always ends up on that screen.
-This is NEVER guaranteed under UNIX(R). It is quite common for a UNIX(R) user to pipe and redirect program input and output:
+This is NEVER guaranteed under UNIX(R).
+It is quite common for a UNIX(R) user to pipe and redirect program input and output:
-[source,bash]
+[source,shell]
....
% program1 | program2 | program3 > file1
....
-If you have written program2, your input does not come from the keyboard but from the output of program1. Similarly, your output does not go to the screen but becomes the input for program3 whose output, in turn, goes to [.filename]#file1#.
+If you have written program2, your input does not come from the keyboard but from the output of program1.
+Similarly, your output does not go to the screen but becomes the input for program3 whose output, in turn, goes to [.filename]#file1#.
-But there is more! Even if you made sure that your input comes from, and your output goes to, the terminal, there is no guarantee the terminal is a PC: It may not have its video memory where you expect it, nor may its keyboard be producing PC-style scan codes. It may be a Macintosh(R), or any other computer.
+But there is more! Even if you made sure that your input comes from, and your output goes to, the terminal, there is no guarantee the terminal is a PC: It may not have its video memory where you expect it, nor may its keyboard be producing PC-style scan codes.
+It may be a Macintosh(R), or any other computer.
Now you may be shaking your head: My software is in PC assembly language, how can it run on a Macintosh(R)? But I did not say your software would be running on a Macintosh(R), only that its terminal may be a Macintosh(R).
-Under UNIX(R), the terminal does not have to be directly attached to the computer that runs your software, it can even be on another continent, or, for that matter, on another planet. It is perfectly possible that a Macintosh(R) user in Australia connects to a UNIX(R) system in North America (or anywhere else) via telnet. The software then runs on one computer, while the terminal is on a different computer: If you try to read the scan codes, you will get the wrong input!
+Under UNIX(R), the terminal does not have to be directly attached to the computer that runs your software, it can even be on another continent, or, for that matter, on another planet.
+It is perfectly possible that a Macintosh(R) user in Australia connects to a UNIX(R) system in North America (or anywhere else) via telnet.
+The software then runs on one computer, while the terminal is on a different computer: If you try to read the scan codes, you will get the wrong input!
-Same holds true about any other hardware: A file you are reading may be on a disk you have no direct access to. A camera you are reading images from may be on a space shuttle, connected to you via satellites.
+The same holds true for any other hardware: A file you are reading may be on a disk you have no direct access to.
+A camera you are reading images from may be on a space shuttle, connected to you via satellites.
-That is why under UNIX(R) you must never make any assumptions about where your data is coming from and going to. Always let the system handle the physical access to the hardware.
+That is why under UNIX(R) you must never make any assumptions about where your data is coming from and going to.
+Always let the system handle the physical access to the hardware.
[NOTE]
====
-These are caveats, not absolute rules. Exceptions are possible. For example, if a text editor has determined it is running on a local machine, it may want to read the scan codes directly for improved control. I am not mentioning these caveats to tell you what to do or what not to do, just to make you aware of certain pitfalls that await you if you have just arrived to UNIX(R) form MS-DOS(R). Of course, creative people often break rules, and it is OK as long as they know they are breaking them and why.
+These are caveats, not absolute rules.
+Exceptions are possible.
+For example, if a text editor has determined it is running on a local machine, it may want to read the scan codes directly for improved control.
+I am not mentioning these caveats to tell you what to do or what not to do, just to make you aware of certain pitfalls that await you if you have just arrived at UNIX(R) from MS-DOS(R).
+Of course, creative people often break rules, and it is OK as long as they know they are breaking them and why.
====
[[x86-acknowledgements]]
@@ -3846,7 +4289,8 @@ These are caveats, not absolute rules. Exceptions are possible. For example, if
This tutorial would never have been possible without the help of many experienced FreeBSD programmers from the {freebsd-hackers}, many of whom have patiently answered my questions, and pointed me in the right direction in my attempts to explore the inner workings of UNIX(R) system programming in general and FreeBSD in particular.
-Thomas M. Sommers opened the door for me . His https://web.archive.org/web/20090914064615/http://www.codebreakers-journal.com/content/view/262/27[How do I write "Hello, world" in FreeBSD assembler?] web page was my first encounter with an example of assembly language programming under FreeBSD.
+Thomas M. Sommers opened the door for me.
+His https://web.archive.org/web/20090914064615/http://www.codebreakers-journal.com/content/view/262/27[How do I write "Hello, world" in FreeBSD assembler?] web page was my first encounter with an example of assembly language programming under FreeBSD.
Jake Burkholder has kept the door open by willingly answering all of my questions and supplying me with example assembly language source code.