The InterNiche IPv6 stack uses essentially the same MAC layer API (defined in the NicheStack manual) as all other InterNiche products, with one additional requirement: the multicast feature, which is optional in IPv4, is required by IPv6. There is also a measurable performance penalty for MAC drivers that do not support the scatter/gather method of chaining PACKET structures together.
Legacy MAC drivers (from products which predate the IPv6 release) should work with IPv6 as long as they implement the multicast option.
As mentioned in Section 9.2, the IPv6 stack may prepare a packet to be sent in multiple discontinuous buffers. These buffers are managed as a linked list of PACKET structures. The pk_prev and pk_next members of each PACKET point to the previous and next members of the list, respectively. Each PACKET's nb_plen field gives the number of bytes in its segment, and its nb_prot field points to the segment data.
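For illustration, the PACKET members named in this section might be declared as in the sketch below. This is an abbreviated, assumed layout; the authoritative definition is in the NicheStack headers, and all other members are omitted.

    /* Abbreviated PACKET sketch; field names are those used in this
     * section, all other members are omitted. */
    typedef struct netbuf
    {
       struct netbuf * pk_prev;  /* previous PACKET in list (NULL at head) */
       struct netbuf * pk_next;  /* next PACKET in list (NULL at tail) */
       char *          nb_prot;  /* pointer to this segment's data */
       unsigned        nb_plen;  /* bytes of data in this segment */
       unsigned        nb_tlen;  /* total bytes in the list (see below) */
       /* ...remaining members omitted... */
    } *PACKET;

    /* Totaling the data in a list by traversal: */
    unsigned total = 0;
    PACKET p;
    for (p = pkt; p; p = p->pk_next)
       total += p->nb_plen;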
MAC drivers may optionally provide support for sending these PACKET lists. The MAC driver indicates this support to the stack by setting the NF_GATHER bit in the net structure's nb_flags field. If this bit is set, "scattered" packets will be passed to the interface's n_pkt_send() routine for sending, and the driver is responsible for collecting the separated data segments and sending them as a contiguous MAC packet.
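For example, a driver that implements gathering could advertise the capability at initialization time; a minimal sketch, assuming ifp points to the driver's net structure:

    /* During driver initialization: advertise scatter/gather support
     * so the stack passes linked PACKET lists to n_pkt_send(). */
    ifp->nb_flags |= NF_GATHER;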
Legacy drivers, or drivers which cannot efficiently send linked lists of PACKETs, should not set the NF_GATHER bit in the net structure's nb_flags field. This causes the IPv6 code to assemble the linked list of PACKETs into a single large packet before passing it to n_pkt_send(). Because this involves copying most of the packet data, drivers that can support the linked lists should do so.
Receiving packets with NF_GATHER support is no different from receiving without it. The IP level code assumes received packets are in a single contiguous buffer, with the total data length given by nb_plen. The only issue worth noting is that it is good form to set the nb_tlen field (see below) as well as the nb_plen field.
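In a receive handler this amounts to one extra assignment; a minimal sketch, where frame_length is a hypothetical variable holding the received byte count:

    /* Received data is one contiguous segment, so the total length
     * equals the segment length. */
    pkt->nb_plen = frame_length;
    pkt->nb_tlen = pkt->nb_plen;   /* good form: keep the total in sync */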
IPv6 packets that are being prepared for sending are usually in linked lists as described above. The older data length field, nb_plen, is used to indicate how much data is in a single PACKET's buffer. When multiple PACKETs are linked into a list, it is convenient to have a total length for all PACKETs in the list. The nb_tlen field serves this function.
MAC drivers which support scatter/gather may use this field to determine buffer requirements without having to traverse the linked list of PACKETs. The nb_tlen field is only guaranteed to be accurate in the first PACKET of a linked list. This PACKET can be identified by its pk_prev field being NULL.
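The sketch below shows how a gather-capable n_pkt_send() handler might use nb_tlen with hardware that has a scatter-capable DMA engine. The MY_HW_MAX_FRAME limit, the my_hw_*() routines, and the error code are hypothetical stand-ins for device-specific code; error handling and PACKET freeing are omitted.

    /* Hypothetical gather-capable send routine, registered as the
     * interface's n_pkt_send() handler. */
    int my_pkt_send(PACKET pkt)
    {
       PACKET p;

       /* pkt is the head of the list (pk_prev == NULL), so nb_tlen is
        * valid and can be checked against the device's frame limit
        * without traversing the list. */
       if (pkt->nb_tlen > MY_HW_MAX_FRAME)    /* hypothetical limit */
          return -1;                          /* hypothetical error code */

       /* Queue one transmit descriptor per segment; the hardware sends
        * the segments back-to-back as one contiguous MAC frame. */
       for (p = pkt; p; p = p->pk_next)
          my_hw_add_tx_desc(p->nb_prot, p->nb_plen);   /* hypothetical */

       my_hw_start_tx();                      /* hypothetical */
       return 0;
    }

A driver without scatter-capable DMA could instead use nb_tlen to allocate a single transmit buffer and copy each segment into it, though at that point the copy cost approaches what the stack's own assembly would incur.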
The one change that may be required for a MAC driver to support IPv6 is support for multicast packets. Multicast was an optional driver feature in previous releases, and the specification for driver support of multicast packets has not changed.
The key point is that the MAC driver must receive all multicast packets with a destination MAC address matching any of the addresses registered with the driver via calls to the driver's n_mcastlist() routine.
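One possible shape for this routine is sketched below. The exact argument type for n_mcastlist() is defined by the NicheStack API; here it is assumed to receive the head of the interface's current multicast list, and the list member names and my_hw_*() filter routines are hypothetical.

    /* Hypothetical n_mcastlist() handler: re-program the hardware's
     * multicast filter from the registered address list. */
    int my_mcastlist(struct mc_entry * mlist)   /* assumed list type */
    {
       struct mc_entry * mp;

       my_hw_clear_mcast_filter();              /* hypothetical */

       /* Ensure every registered multicast MAC address passes the
        * receive filter. */
       for (mp = mlist; mp; mp = mp->mc_next)   /* assumed members */
          my_hw_add_mcast_addr(mp->mc_mac);

       return 0;
    }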
Programming multicast addresses into Ethernet devices can be error prone, and some Ethernet hardware has limited space to store multicast addresses. For these reasons, many Ethernet devices offer a feature that allows the hardware to receive (and send) all multicast packets. This frees the programmer from having to keep the multicast address list in the hardware synchronized with the lists in the IPv6 layers, and considerably simplifies all aspects of programming the hardware.
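With such hardware, the handler sketched above can collapse to a stub; a minimal sketch, again with a hypothetical hardware routine:

    /* Hypothetical n_mcastlist() handler for hardware with a "receive
     * all multicast" mode: the list contents no longer matter. */
    int my_mcastlist(struct mc_entry * mlist)
    {
       (void)mlist;                  /* list is ignored */
       my_hw_enable_all_multi();     /* hypothetical: set ALL_MULTI mode */
       return 0;
    }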
Since multicast addresses make up a small portion of the traffic on most IPv6 networks, this "generic multicast" approach generally does not have problems with excessive interrupts from uninteresting multicast packets; however, system engineers should keep the possibility in mind. Some forms of video and audio streaming rely on heavy amounts of multicast traffic. If these or similar applications become popular in the future, they could create problems for embedded devices with underpowered CPUs.