Moving data-link encapsulation logic onto special-purpose VLSI ASICs on network adapters frees a node's CPU to do other work. Perhaps the best argument against this offloading is cost savings: the host CPU is already paid for. But as the price per transistor falls there will be less and less reason to run this software on the host CPU, and network cards may in fact move up the OSI model to include Network and perhaps eventually Transport logic and beyond.
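As a concrete sketch of what is being offloaded, the following Python builds a minimal Ethernet II frame in software; on most hosts this header-and-trailer work runs in NIC silicon rather than on the CPU. The function name and padding details here are illustrative, not any particular driver's API.

```python
import zlib

def encapsulate(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Data-link encapsulation: MAC header + padded payload + trailing FCS."""
    if len(payload) < 46:                          # pad to the Ethernet minimum payload
        payload = payload + b"\x00" * (46 - len(payload))
    body = dst_mac + src_mac + ethertype.to_bytes(2, "big") + payload
    fcs = zlib.crc32(body).to_bytes(4, "little")   # Ethernet's trailing CRC-32
    return body + fcs
```

The CRC-32 computed here is exactly the per-frame work (alongside serialization and media access) that adapter ASICs take off the host's hands.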

Kurose and Ross (2004) describe this relationship as follows: 'an adapter is a semi-autonomous unit. For example, an adapter can receive a frame, determine if a frame is in error and discard the frame without notifying its "parent" node'. So really, we can define any degree of autonomy suitable to our applications' needs and our hosts' capabilities. Whether and how this represents an advantage depends very much on the available host CPU overhead and on what one is trying to achieve by involving the CPU. Many of today's NICs have quite sophisticated processing units of their own, so the odds of a general-purpose CPU being able to process frames as quickly are low. However, if logic has to be applied to network traffic anyway, applying it in the CPU may represent an advantage over the traffic having to traverse the bus both inbound to the CPU and outbound back to the NIC and 'wire'.

One historic reason to process network traffic with a general-purpose CPU is multiprotocol routing (PSINet, 2000). Many commodity computers can be, and have been, applied to this task, and because some of the evaluations that must be applied to the traffic go all the way down to layer two (Novell 802.3 versus Ethernet II framing, for example), the node could just as well do the work itself, leveraging simple NICs on the fastest bus available. Many old Cisco routers used the same commodity Motorola 68xxx CPUs as the Apples and Sun-3s of their day, rather blurring the line between a special-purpose ASIC and a general-purpose processor running specialized software. While there are certainly benefits of cost and flexibility when one uses a node to process the lower layers of the OSI stack, this is unlikely to be a high-performance configuration. However, this 'routing vs. switching' dilemma (Duffy, 2007) may be becoming less onerous outside of the network core, where performance truly does remain critical, as CPUs become more powerful and less expensive.
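The layer-two evaluations mentioned above can be sketched in software. The following Python, a simplified illustration rather than production code, verifies a frame's FCS the way an adapter might before silently discarding a bad frame, and distinguishes Ethernet II from IEEE 802.3 framing (including Novell's 'raw' variant) by the type/length field:

```python
import zlib

ETHERTYPE_MIN = 0x0600  # type/length values >= 1536 are EtherTypes

def classify_framing(frame: bytes) -> str:
    """Distinguish framing by the two bytes after the MAC addresses."""
    if len(frame) < 16:
        return "runt"
    type_or_len = int.from_bytes(frame[12:14], "big")
    if type_or_len >= ETHERTYPE_MIN:
        return "Ethernet II"           # the field is an EtherType
    # Otherwise the field is an IEEE 802.3 length; peek at the payload start
    if frame[14:16] == b"\xff\xff":
        return "Novell raw 802.3"      # IPX checksum field is always 0xFFFF
    if frame[14:16] == b"\xaa\xaa":
        return "802.3 LLC/SNAP"
    return "802.3 with 802.2 LLC"

def fcs_ok(frame: bytes) -> bool:
    """Verify the trailing CRC-32, as an adapter does before deciding
    to discard a bad frame without notifying its 'parent' node."""
    return zlib.crc32(frame[:-4]) == int.from_bytes(frame[-4:], "little")
```

A multiprotocol router built on a commodity CPU must make exactly this kind of per-frame decision in software, which is why the traffic's trips across the bus matter so much.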
An interesting parallel to these developments is the fate of other special-purpose VLSIs such as the GPU. It seems that no matter how specialized an IC's design becomes, it may still lend itself to other applications (Gharaibeh et al., 2010). To me, then, the resolution of this specific issue, whether a CPU should be involved in network tasks that might 'normally' be handled by a NIC, seems very much determined by factors of price and rationale.


Duffy, J. (2007) Routing vs. Switching [Online]. Available from: http://www.networkworld.com/news/2007/102607-arguments-routing-switching.html?nwwpkg=50arguments (Accessed: 7 November, 2010)


Gharaibeh, A., Al-Kiswany, S. & Ripeanu, M. (2010) CrystalGPU: Transparent and Efficient Utilization of GPU Power [Online]. Available from: http://arxiv.org/ftp/arxiv/papers/1005/1005.1695.pdf (Accessed: 7 November, 2010)


Kurose, J.F. & Ross, K.W. (2004) Computer Networking: A Top-Down Approach Featuring the Internet. 4th ed. Pearson Education, Inc.


PSINet (2000) Certified Software: Novell Multiprotocol Router [Online]. Available from: http://www.support.psi.com/support/common/sw/mpr/index.html (Accessed: 7 November, 2010)