NFS Sensitivity to High Performance Networks

Wednesday, March 15, 2000 - 17:30
TH 331
Richard Martin, Rutgers University

This work examines NFS sensitivity to the performance characteristics of emerging networks. We adopt an unusual method of inserting controlled delays into live systems to measure sensitivity to basic network parameters. We develop a simple queuing model of an NFS server and show that it reasonably characterizes our two live systems running the SPECsfs benchmark. Using the techniques in this work, we can infer the structure of servers from published SPEC results. Our results show that NFS servers are most sensitive to processor overhead; it can be the limiting factor with even a modest number of disks. Continued reductions in processor overhead will be necessary to realize performance gains from future multi-gigabit networks. NFS can tolerate network latency in the regime of newer LANs and IP switches. Due to NFS's historically high mix of small metadata operations, NFS is quite insensitive to network bandwidth. Finally, we find that the protocol enhancements in NFS version 3 tolerate high latencies better than version 2 of the protocol.
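The bottleneck argument above can be sketched with a toy two-stage model: an NFS server whose peak throughput is bounded by the slower of its CPU (per-operation processor overhead) and its disk array. This is only an illustration in the spirit of the talk's queuing model, not its actual model; all parameter values below are invented for the example.

```python
# Toy two-stage bottleneck model of an NFS server (illustrative only).
# Stage 1: CPU, charged a fixed per-operation overhead.
# Stage 2: disk array of num_disks disks serving in parallel.
# Peak throughput is bounded by the slower stage.

def peak_ops_per_sec(cpu_overhead_us, disk_service_us, num_disks):
    """Throughput bound (ops/s) for a simple two-stage pipeline model."""
    cpu_limit = 1e6 / cpu_overhead_us               # CPU-bound rate
    disk_limit = num_disks * 1e6 / disk_service_us  # disk-array rate
    return min(cpu_limit, disk_limit)

# Hypothetical numbers: 250 us of per-op CPU overhead, 1 ms effective
# disk service time (cache hits included). The CPU becomes the limiting
# stage once even a modest number of disks is attached.
for disks in (1, 2, 4, 8, 16):
    rate = peak_ops_per_sec(cpu_overhead_us=250,
                            disk_service_us=1000,
                            num_disks=disks)
    bottleneck = "CPU" if rate == 1e6 / 250 else "disk"
    print(f"{disks:2d} disks -> {rate:6.0f} ops/s ({bottleneck}-bound)")
```

With these made-up parameters the server is disk-bound at one or two disks but CPU-bound from four disks onward, which is the shape of the result the abstract reports: adding disks or bandwidth stops helping, and only reducing processor overhead raises the ceiling.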


Richard Martin joined Rutgers University as an Assistant Professor in the Computer Science Department in 1999. Dr. Martin has published extensively in the areas of large-scale cluster computing, parallel processing, and high-performance messaging. A consistent theme throughout Dr. Martin's research is the interaction of live systems with modeling and performance analysis.

His work on high-performance messaging software was incorporated into the Inktomi search engine, a commercially successful spinoff of the Berkeley NOW project. A recent collaboration with researchers at the University of Washington resulted in extensive design and analysis of high-performance, deeply embedded processors, including applications to video-on-demand and Internet routing. His current research interests include massive data aggregation, soft real-time query processing, and highly fault-tolerant systems.