In general:

NFS is a pretty fast protocol for accessing files over a network.

It has less overhead than SMB/CIFS and therefore achieves better performance.

Method 1) rsyncing to an NFS-mounted QNAP NAS ran at 20.7 MByte/s.

Method 2) rsyncing directly over SSH to the QNAP only reaches 3-4 MByte/s (the server's CPU – "Feroceon 88F6281 rev 1 (v5l) @ 1.2 GHz with BogoMIPS : 1196.85" according to cat /proc/cpuinfo – is not very powerful and cannot encrypt/decrypt the SSH traffic any faster).

While I can't really judge it from a reliability and security perspective – it is pretty easy and fast to get going with NFS.

My experience with Method 1:

Warning: data was lost because something seems to go wrong during the rsync-to-NFS-mount process.

Data gets corrupted. This still has to be investigated.

reading on that topic: https://research.cs.wisc.edu/wind/Publications/NFSCorruption-storagess07.pdf

download mirror: the-effects-of-metadata-corruption-on-nfs-swetha-krishnan-giridhar-ravipati-andrea-c-arpaci-dusseau-remzi-h-arpaci-dusseau-barton-p-mille-nfscorruption-storagess07.pdf

server: QNAP TS-219 QTS 4.1.4 Build 20150522

Version of NFS

uname -a; # QNAP uses EXT4
Linux QNAP 3.4.6 #1 Fri May 22 07:56:30 CST 2015 armv5tel unknown

mount; # DATA is ext4
/dev/md0 on /share/MD0_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,nodelalloc,noacl)
nfsd on /proc/fs/nfsd type nfsd (rw)
cat /proc/fs/nfsd/versions
+2 +3 -4 -4.1
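
To cross-check which protocol version a client actually negotiates (further below the mount output shows vers=3), one could run the following on the client – a small sketch, assuming the nfs-common / nfs-utils tools are installed:

# show all mounted NFS shares together with their negotiated options (vers=, rsize=, wsize=, proto=, ...)
nfsstat -m

# alternatively, read the same options straight from the kernel's mount table
grep nfs /proc/mounts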

cat /proc/fs/nfs/exports
# Version 1.1
# Path Client(Flags) # IPs
/share/MD0_DATA/DATA *(rw,insecure,no_root_squash,async,wdelay,no_subtree_check,uuid=60dd2e14:9a01561b:00000000:00000000)
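
For comparison, an equivalent hand-written /etc/exports entry would look roughly like the sketch below (on the QNAP this export line is generated by the firmware's web UI; the uuid= and wdelay options shown above are added automatically and are not written by hand):

# /etc/exports – export DATA read-write to all hosts, without root squashing, with async writes
/share/MD0_DATA/DATA *(rw,insecure,no_root_squash,async,no_subtree_check)

# re-export after editing /etc/exports
exportfs -ra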

dmesg|grep nfs

[ 66.638725] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[ 214.072501] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory

lsmod | grep nfs

nfsd 231700 12 fnotify, Live 0xbf349000
exportfs 2885 1 nfsd, Live 0xbf345000
nfs 251340 0 - Live 0xbf2f2000
auth_rpcgss 30572 2 nfsd,nfs, Live 0xbf2e4000
lockd 59814 2 nfsd,nfs, Live 0xbf2cd000
sunrpc 167759 14 nfsd,nfs,auth_rpcgss,lockd, Live 0xbf291000

client:

uname -a; # it's a pretty up-to-date Debian system using ext3
Linux debian 3.16.0-4-686-pae #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) i686 GNU/Linux

mount; # client is using ext3

/dev/sda5 on / type ext3 (rw,relatime,errors=remount-ro,data=ordered)

dmesg|grep nfs
[ 2.517324] FS-Cache: Netfs 'nfs' registered for caching
[ 2.522955] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
dmesg|grep NFS
[ 2.509406] RPC: Registered tcp NFSv4.1 backchannel transport module.
[13274.540947] NFS: Registering the id_resolver key type

How the share was mounted:

# command used to mount the NFS share
mount 192.168.1.123:/DATA /mnt/qnap;

192.168.1.123:/DATA on /mnt/qnap type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.123,mountvers=3,mountport=48394,mountproto=udp,local_lock=none,addr=192.168.1.123)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
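
To make this mount permanent, an /etc/fstab entry on the client could look like the following sketch – it reuses the export path and mount point from above, and the explicit options merely spell out what the kernel negotiated by default here (they are not what was actually configured for this test):

# /etc/fstab
192.168.1.123:/DATA  /mnt/qnap  nfs  vers=3,hard,proto=tcp,rsize=32768,wsize=32768  0  0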

“This is a much-improved Linux NFS server with support for NFSv3 as well as NFSv2. NFSv4 is being worked on. These patches are considered stable and are indeed shipping with most distributions. The stock Linux 2.2 NFS server can’t be used as a cross-platform file server”

I previously lost (mostly backup) data because of ext4 partitions going corrupt on an 800€ "CHEAP TAIWANESE" QNAP NAS.

So QNAP seems to turn Open Source into LOW "TAIWANESE Quality" software. "GOOD JOB GUYS".

QNAP Systems, Inc. (Chinese: 威聯通科技) is a Taiwanese corporation that specializes in providing networked solutions for file sharing, virtualization, storage management and surveillance applications to address corporate, SMB (NOOO!), SOHO and home user needs.

QNAP seems to pack all kinds of useless features into their firmware, which bloats the whole thing up, increases complexity and the probability of failures and errors, and wastes resources.

I REALLY HATE THAT "Video-Transcoding" feature being enabled by default, which CONSUMES A LOT OF RESOURCES, USES A LOT OF CPU AND DECREASES THE LIFETIME OF YOUR HARD DISKS.

I WANT MY NAS TO RELIABLY STORE FILES – I DO NOT CARE IF I CAN PLAY TETRIS ON IT!

The QNAP hardware might be okay – but the software is surely NOT.

Testing transfer methods: rsync via NFS vs. rsync via SSH – SSH is slower but WON

# test: transfer a large file, then md5sum the copy and the original afterwards
rsync -r -vvv --progress /home/username/Downloads/somefile.extension /mnt/qnap/test/somefile.extension
sending incremental file list
[sender] make_file(somefile.extension,*,0)
send_file_list done
send_files starting
server_recv(2) starting pid=10655
recv_file_name(somefile.extension)
received 1 names
recv_file_list done
get_local_name count=1 /mnt/qnap/test/somefile.extension
generator starting pid=10655
delta-transmission disabled for local transfer or --whole-file
recv_generator(somefile.extension,1)
send_files(1, /home/username/Downloads/somefile.extension)
send_files mapped /home/username/Downloads/somefile.extension of size 612752227
calling match_sums /home/username/Downloads/somefile.extension
somefile.extension
583,892,992 95% 14.48MB/s 0:00:01
sending file_sum
false_alarms=0 hash_hits=0 matches=0
612,752,227 100% 22.04MB/s 0:00:26 (xfr#1, to-chk=0/1)
sender finished /home/username/Downloads/somefile.extension
generate_files phase=1
recv_files(1) starting
recv_files(somefile.extension)
recv mapped somefile.extension of size 612752227
got file_sum
renaming .somefile.extension.Ioy6xm to somefile.extension
send_files phase=1
recv_files phase=1
generate_files phase=2
send_files phase=2
send files finished
total: matches=0 hash_hits=0 false_alarms=0 data=612752227
recv_files phase=2
recv_files finished
generate_files phase=3
generate_files finished

sent 612,901,985 bytes received 1,106 bytes 21,505,371.61 bytes/sec
total size is 612,752,227 speedup is 1.00
[sender] _exit_cleanup(code=0, file=main.c, line=1183): about to call exit(0)

# RSYNC-NFS transferred file IS CORRUPT, VLC CAN NOT PLAY IT!!!!
md5sum /home/username/Downloads/somefile.extension /mnt/qnap/test/somefile.extension
f6eda1c26066d65f535cc30db6ed474a /home/username/Downloads/somefile.extension
3fdfa4c91f040631314f5923358502de /mnt/qnap/test/somefile.extension

# RSYNC-SSH transferred file, FILE WAS CORRECTLY TRANSFERRED!!!!

rsync -r -vvv --progress /home/username/Downloads/somefile.extension admin@192.168.1.123:/share/MD0_DATA/DATA/test/somefile.extension

md5sum /home/username/Downloads/somefile.extension /mnt/qnap/test/somefile.extension
f6eda1c26066d65f535cc30db6ed474a /home/username/Downloads/somefile.extension
f6eda1c26066d65f535cc30db6ed474a /mnt/qnap/test/somefile.extension
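
To rule out a one-off fluke, the copy-and-compare test can be repeated in a loop – a minimal sketch of the method used above, with the same placeholder file name and mount point:

# repeat the NFS copy five times and compare checksums after each round
SRC=/home/username/Downloads/somefile.extension
DST=/mnt/qnap/test/somefile.extension
for i in 1 2 3 4 5; do
    rsync --progress "$SRC" "$DST"
    if [ "$(md5sum < "$SRC")" = "$(md5sum < "$DST")" ]; then
        echo "run $i: OK"
    else
        echo "run $i: CHECKSUM MISMATCH - file corrupted"
    fi
done

rsync also has a --checksum (-c) switch that forces a full checksum comparison between source and destination on a repeated run, instead of deciding by size and modification time only.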

nfsd – man page

Name

nfsd – special filesystem for controlling Linux NFS server

Synopsis

mount -t nfsd nfsd /proc/fs/nfsd

Description

The nfsd filesystem is a special filesystem which provides access to the Linux NFS server.

The filesystem consists of a single directory which contains a number of files.

These files are actually gateways into the NFS server.

Writing to them can affect the server.

Reading from them can provide information about the server.

This file system is only available in Linux 2.6 and later series kernels (and in the later parts of the 2.5 development series leading up to 2.6).

This man page does not apply to 2.4 and earlier.

As well as this filesystem, there are a collection of files in the procfs filesystem (normally mounted at /proc) which are used to control the NFS server.

This manual page describes all of these files.

The exportfs and mountd programs (part of the nfs-utils package) expect to find this filesystem mounted at

/proc/fs/nfsd or

/proc/fs/nfs.

If it is not mounted, they will fall back on 2.4-style functionality.

This involves accessing the NFS server via a system call.

This system call is scheduled to be removed after the 2.6 kernel series.

Details

The three files in the nfsd filesystem are:

exports
This file contains a list of filesystems that are currently exported and clients that each filesystem is exported to, together with a list of export options for that client/filesystem pair. This is similar to the /proc/fs/nfs/exports file in 2.4. One difference is that a client doesn't necessarily correspond to just one host. It can respond to a large collection of hosts that are being treated identically. Each line of the file contains a path name, a client name, and a number of options in parentheses. Any space, tab, newline or back-slash character in the path name or client name will be replaced by a backslash followed by the octal ASCII code for that character.
threads
This file represents the number of nfsd threads currently running. Reading it will show the number of threads. Writing an ASCII decimal number will cause the number of threads to be changed (increased or decreased as necessary) to achieve that number. (A short usage sketch follows after this list.)
filehandle
This is a somewhat unusual file in that what is read from it depends on what was just written to it. It provides a transactional interface where a program can open the file, write a request, and read a response. If two separate programs open, write, and read at the same time, their requests will not be mixed up. The request written to filehandle should be a client name, a path name, and a number of bytes. This should be followed by a newline, with white-space separating the fields, and octal quoting of special characters.

On writing this, the program will be able to read back a filehandle for that path as exported to the given client. The filehandle's length will be at most the number of bytes given.

The filehandle will be represented in hex with a leading ‘\x’.
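
A quick usage sketch for the threads file described above (the value 8 is only an example, not a recommendation):

# show how many nfsd threads are currently running
cat /proc/fs/nfsd/threads

# change the number of nfsd threads to 8
echo 8 > /proc/fs/nfsd/threads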

The directory /proc/net/rpc in the procfs filesystem contains a number of files and directories. The files contain statistics that can be displayed using the nfsstat program. The directories contain information about various caches that the NFS server maintains to keep track of access permissions that different clients have for different filesystems. The caches are:
auth.domain
This cache maps the name of a client (or domain) to an internal data structure. The only access that is possible is to flush the cache.
auth.unix.ip
This cache contains a mapping from IP address to the name of the authentication domain that the IP address should be treated as part of.
nfsd.export
This cache contains a mapping from directory and domain to export options.
nfsd.fh
This cache contains a mapping from domain and a filesystem identifier to a directory. The filesystem identifier is stored in the filehandles and consists of a number indicating the type of identifier and a number of hex bytes indicating the content of the identifier.
Each directory representing a cache can hold from 1 to 3 files. They are:
flush
When a number of seconds since epoch (1 Jan 1970) is written to this file, all entries in the cache that were last updated before that time become invalidated and will be flushed out. Writing 1 will flush everything. This is the only file that will always be present. (A usage sketch follows after this list.)
content
This file, if present, contains a textual representation of every entry in the cache, one per line. If an entry is still in the cache (because it is actively being used) but has expired or is otherwise invalid, it will be presented as a comment (with a leading hash character).
channel
This file, if present, acts as a channel for requests from the kernel-based NFS server to be passed to a user-space program for handling. When the kernel needs some information which isn't in the cache, it makes a line appear in the channel file giving the key for the information. A user-space program should read this, find the answer, and write a line containing the key, an expiry time, and the content. For example the kernel might make
nfsd 127.0.0.1
appear in the auth.unix.ip/channel file. The user-space program might then write
nfsd 127.0.0.1 1057206953 localhost
to indicate that 127.0.0.1 should map to localhost, at least for now. If the program uses select(2) or poll(2) to discover if it can read from the channel then it will never see an end-of-file, but when all requests have been answered, it will block until another request appears.
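
A small usage sketch for the flush and content files described above (flushing only invalidates cached entries and forces re-lookups, it does not change the configuration):

# invalidate every entry in the IP-address-to-domain cache
echo 1 > /proc/net/rpc/auth.unix.ip/flush

# inspect the current cache entries (if the content file is present)
cat /proc/net/rpc/auth.unix.ip/content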
In the /proc filesystem there are 4 files that can be used to enable extra tracing of nfsd and related code. They are:
/proc/sys/sunrpc/nfs_debug
/proc/sys/sunrpc/nfsd_debug
/proc/sys/sunrpc/nlm_debug
/proc/sys/sunrpc/rpc_debug
They control tracing for the NFS client, the NFS server, the Network Lock Manager (lockd) and the underlying RPC layer respectively. Decimal numbers can be read from or written to these files. Each number represents a bit-pattern where bits that are set cause certain classes of tracing to be enabled. Consult the kernel header files to find out which numbers correspond to which tracing classes.
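
For example – a sketch; the exact bit meanings depend on the kernel headers, so the value 32767 is used here simply as "enable everything" and 0 as "off":

# read the current nfsd debug flags
cat /proc/sys/sunrpc/nfsd_debug

# enable all nfsd tracing classes (very noisy, output goes to the kernel log / dmesg)
echo 32767 > /proc/sys/sunrpc/nfsd_debug

# turn tracing off again
echo 0 > /proc/sys/sunrpc/nfsd_debug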

See Also

rpc.nfsd(8), exports(5), nfsstat(8), mountd(8), exportfs(8).

Author

NeilBrown

Wiki: History of NFS

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984,[1] allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. NFS is an open standard defined in a Request for Comments (RFC), allowing anyone to implement the protocol.

Versions and variations

Sun used version 1 only for in-house experimental purposes. When the development team added substantial changes to NFS version 1 and released it outside of Sun, they decided to release the new version as v2, so that version interoperation and RPC version fallback could be tested.[2]

NFSv2

Version 2 of the protocol (defined in RFC 1094, March 1989) originally operated only over User Datagram Protocol (UDP). Its designers meant to keep the server side stateless, with locking (for example) implemented outside of the core protocol. People involved in the creation of NFS version 2 include Russel Sandberg, Bob Lyon, Bill Joy, Steve Kleiman, and others.[1][3]

The Virtual File System interface allowed a modular implementation, reflected in a simple protocol.

By February 1986, implementations were demonstrated for operating systems such as System V release 2, DOS, and VAX/VMS using Eunice.[3]

NFSv2 only allowed the first 2 GB of a file to be read due to 32-bit limitations.

NFSv3

Version 3 (RFC 1813, June 1995) added:

  • support for 64-bit file sizes and offsets, to handle files larger than 2 gigabytes (GB);
  • support for asynchronous writes on the server, to improve write performance;
  • additional file attributes in many replies, to avoid the need to re-fetch them;
  • a READDIRPLUS operation, to get file handles and attributes along with file names when scanning a directory;
  • assorted other improvements.

The first NFS Version 3 proposal within Sun Microsystems was created not long after the release of NFS Version 2. The principal motivation was an attempt to mitigate the performance issue of the synchronous write operation in NFS Version 2.[4] By July 1992, implementation practice had solved many shortcomings of NFS Version 2, leaving only lack of large file support (64-bit file sizes and offsets) a pressing issue. This became an acute pain point for Digital Equipment Corporation with the introduction of a 64-bit version of Ultrix to support their newly released 64-bit RISC processor, the Alpha 21064. At the time of introduction of Version 3, vendor support for TCP as a transport-layer protocol began increasing. While several vendors had already added support for NFS Version 2 with TCP as a transport, Sun Microsystems added support for TCP as a transport for NFS at the same time it added support for Version 3. Using TCP as a transport made using NFS over a WAN more feasible, and allowed the use of larger read and write transfer sizes beyond the 8 KB limit imposed by User Datagram Protocol (UDP).

NFSv4

Version 4 (RFC 3010, December 2000; revised in RFC 3530, April 2003 and again in RFC 7530, March 2015), influenced by Andrew File System (AFS) and Server Message Block (SMB, also termed CIFS), includes performance improvements, mandates strong security, and introduces a stateful protocol.[5]

Version 4 became the first version developed with the Internet Engineering Task Force (IETF) after Sun Microsystems handed over the development of the NFS protocols.

NFS version 4.1 (RFC 5661, January 2010) aims to provide protocol support to take advantage of clustered server deployments including the ability to provide scalable parallel access to files distributed among multiple servers (pNFS extension).

NFS version 4.2 is currently being developed.[6]

Other extensions

WebNFS, an extension to Version 2 and Version 3, allows NFS to integrate more easily into Web-browsers and to enable operation through firewalls.

In 2007, Sun Microsystems open-sourced their client-side WebNFS implementation.[7]

Various side-band protocols have become associated with NFS.

Platforms

NFS is often used with Unix operating systems (such as Solaris, AIX and HP-UX) and Unix-like operating systems (such as Linux and FreeBSD). It is also available to operating systems such as the classic Mac OS, OpenVMS, Microsoft Windows,[citation needed] Novell NetWare, and IBM AS/400. Alternative remote file access protocols include the Server Message Block (SMB, also termed CIFS), Apple Filing Protocol (AFP), NetWare Core Protocol (NCP), and OS/400 File Server file system (QFileSvr.400).

SMB and NetWare Core Protocol (NCP) occur more often than NFS on systems running Microsoft Windows; AFP occurs more often than NFS in Apple Macintosh systems; and QFileSvr.400 occurs more often in AS/400 systems. Haiku recently[when?] added NFSv4 support as part of a Google Summer of Code project.

(Figure: NFS specint2008 performance comparison, as of 22 November 2013)

Typical implementation

Assuming a Unix-style scenario in which one machine (the client) needs access to data stored on another machine (the NFS server):

  1. The server implements NFS daemon processes, running by default as nfsd, to make its data generically available to clients.
  2. The server administrator determines what to make available, exporting the names and parameters of directories, typically using the /etc/exports configuration file and the exportfs command.
  3. The server security-administration ensures that it can recognize and approve validated clients.
  4. The server network configuration ensures that appropriate clients can negotiate with it through any firewall system.
  5. The client machine requests access to exported data, typically by issuing a mount command. (The client asks the server (rpcbind) which port the NFS server is using, the client connects to the NFS server (nfsd), nfsd passes the request to mountd)
  6. If all goes well, users on the client machine can then view and interact with mounted filesystems on the server within the parameters permitted.

Note that automation of the NFS mounting process may take place — perhaps using /etc/fstab and/or automounting facilities.
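
Condensed into commands, the sequence above could look like this sketch – the host name, network and paths are made up for illustration, and a stock Linux NFS server (nfs-kernel-server) and client (nfs-common) are assumed:

# --- on the server ---
# 1) export a directory by adding a line to /etc/exports
echo '/srv/export 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
# 2) tell the running NFS server to re-read its export table
exportfs -ra

# --- on the client ---
# 3) mount the export manually ...
mount -t nfs server.example.com:/srv/export /mnt/export
# 4) ... or have /etc/fstab mount it automatically at boot
echo 'server.example.com:/srv/export /mnt/export nfs defaults 0 0' >> /etc/fstab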

Protocol development

1980s

NFS and ONC figured prominently in the network-computing war between Sun Microsystems and Apollo Computer, and later the UNIX wars (ca 1987-1996) between AT&T Corporation and Sun on one side, and Digital Equipment, HP, and IBM on the other.

During the development of the ONC protocol (called SunRPC at the time), only Apollo’s Network Computing System (NCS) offered comparable functionality.

Two competing groups developed over fundamental differences in the two remote procedure call systems.

Arguments focused on the method for data-encoding — ONC’s External Data Representation (XDR) always rendered integers in big-endian order,

even if both peers of the connection had little-endian machine-architectures, whereas NCS’s method attempted to avoid byte-swap whenever two peers shared a common endianness in their machine-architectures.

An industry-group called the Network Computing Forum formed (March 1987) in an (ultimately unsuccessful) attempt to reconcile the two network-computing environments.

Later,[when?] Sun and AT&T announced they would jointly develop AT&T’s UNIX System V Release 4.

This caused many of AT&T’s other licensees of UNIX System V to become concerned that this would put Sun in an advantaged position, and ultimately led to Digital Equipment, HP, IBM, and others forming the Open Software Foundation (OSF) in 1988.

Ironically, Sun and AT&T had formerly competed over Sun’s NFS versus AT&T’s Remote File System (RFS), and the quick adoption of NFS over RFS by Digital Equipment, HP, IBM, and many other computer vendors tipped the majority of users in favor of NFS.

NFS interoperability was aided by events called “Connectathons” starting in 1986 that allowed vendor-neutral testing of implementations with each other.[10]

OSF adopted the Distributed Computing Environment (DCE) and the DCE Distributed File System (DFS) over Sun/ONC RPC and NFS.

DFS used DCE as the RPC, and DFS derived from the Andrew File System (AFS);

DCE itself derived from a suite of technologies, including Apollo’s NCS and Kerberos.[citation needed]

1990s

Sun Microsystems and the Internet Society (ISOC) reached an agreement to cede “change control” of ONC RPC so that the ISOC’s engineering-standards body, the Internet Engineering Task Force (IETF), could publish standards documents (RFCs) related to ONC RPC protocols and could extend ONC RPC.

OSF attempted to make DCE RPC an IETF standard, but ultimately proved unwilling to give up change control.

Later, the IETF chose to extend ONC RPC by adding a new authentication flavor based on Generic Security Services Application Program Interface (GSSAPI), RPCSEC GSS, to meet IETF requirements that protocol standards have adequate security.

Later, Sun and ISOC reached a similar agreement to give ISOC change control over NFS, although writing the contract carefully to exclude NFS version 2 and version 3.

Instead, ISOC gained the right to add new versions to the NFS protocol, which resulted in IETF specifying NFS version 4 in 2003.

2000s

By the 21st century, neither DFS nor AFS had achieved any major commercial success as compared to SMB-CIFS or NFS.

IBM, which had formerly acquired the primary commercial vendor of DFS and AFS, Transarc, donated most of the AFS source code to the free software community in 2000.

The OpenAFS project lives on.

In early 2005, IBM announced end of sales for AFS and DFS.

In January, 2010, Panasas proposed an NFSv4.1 based on their Parallel NFS (pNFS) technology claiming to improve data-access parallelism[11] capability.

The NFSv4.1 protocol defines a method of separating the filesystem meta-data from file data location;

it goes beyond the simple name/data separation by striping the data amongst a set of data servers.

This differs from the traditional NFS server which holds the names of files and their data under the single umbrella of the server.

Some products are multi-node NFS servers, but the participation of the client in separation of meta-data and data is limited.

The NFSv4.1 pNFS server is a set of server resources or components; these are assumed to be controlled by the meta-data server.

The pNFS client still accesses one meta-data server for traversal or interaction with the namespace;

when the client moves data to and from the server it may directly interact with the set of data servers belonging to the pNFS server collection.

The NFSv4.1 client can be enabled to be a direct participant in the exact location of file data and to avoid solitary interaction with one NFS server when moving data.

In addition to pNFS, NFSv4.1 provides a number of further protocol enhancements.


External links

  • RFCs:
    • RFC 5661 – Network File System (NFS) Version 4 Minor Version 1 Protocol
    • RFC 5403 – RPCSEC_GSS Version 2
    • RFC 3530 – NFS Version 4 Protocol Specification
    • RFC 2054 – WebNFS Specification
    • RFC 2339 – Sun/ISOC NFS Change Control Agreement
    • RFC 2203 – RPCSEC_GSS Specification
    • RFC 1813 – NFS Version 3 Protocol Specification
    • RFC 1790 – Sun/ISOC ONC RPC Change Control Agreement
    • RFC 1094 – NFS Version 2 Protocol Specification

Source: https://en.wikipedia.org/wiki/Network_File_System

What is Connectathon ?

In 1986, Sun Microsystems sponsored the first Connectathon™ event, a unique forum for testing software and hardware interoperability. Connectathon is a network proving ground allowing vendors to test their interoperability solutions, with special emphasis on NFS™ and Internet protocols.

Over the years, the vendor-neutral Connectathon has attracted a large number of development engineers from all major computer systems companies and a wide variety of software vendors. All have the common goal of making heterogeneous multivendor networking a reality. Now plans are being drawn to celebrate Connectathon’s 13th year.

Connectathon is an excellent opportunity for vendors to verify that their distributed computing software interoperates with a wide range of client/server implementations on different operating systems. Everything from laptops to supercomputers can be linked together under one roof, encouraging interaction among vendors, engineers and developers in a confidential atmosphere. Implementations are tested and debugged at Connectathon. There are panel discussions as well as open sessions on the latest developments in technologies and solutions by Connectathon participants.

Connectathon is a place where engineers can gather without marketing hype and can exchange ideas and information.

At Connectathon 99 we are expanding testing to include Y2K compatibility as well as Gigabit Ethernet based on vendor interest.

Source: https://web.archive.org/web/19990128152940/http://www.connectathon.org/#whatis

Connectathon 99 Technologies

The Connectathon 99 technologies offered for testing are listed below along with their test coordinator and Email address. Those with a TBD coordinator are still being considered for testing and may be added if there is enough interest.

Note: The test suites available for download are those that were used for Connectathon `98. Some of these test suites will be updated prior to Connectathon `99. Contact the coordinator for information about the availability of updated test suites.

 

Technology                          | Coordinator                 | Tests
NFS versions 2 and 3 & Lock Manager | Mike Kupfer and Rob Thurlow | nfstests
NFS Version 4                       | Spencer Shepler             | (no tests yet)
WebNFS                              | Agnes Jacob                 | Test Suite
NIS/NIS+                            | Anup Sekhar                 | nisplustests
TI-RPC                              | Devesh Shah                 | tirpc.tar.Z
Kerberos                            | Mike Saltz                  | kerberos.tar.Z
Automounter                         | Theresa Lingutla-Raj        | autotests.tar.Z
IPv6                                | Bill Lenharth               | Tests
DHCP                                | Mike Carney                 | dhcp_tests.tar.Z
Network Computers                   | Steve Drach                 | The Open Group tests available at Connectathon
LDAP                                | Ludovic Poitou              | tests
Service Location Protocol           | Charles Perkins             | Test Suites
ATM                                 | Ed Von Adelung              | (no tests)
Gigabit Ethernet                    | Mohan Srinivasan            | (no tests)
Y2K Compatibility                   | TBD                         | (no tests yet)
Fiber Channel                       | TBD                         | (no tests)

Technology testing coordinators will moderate the testing processes of a specific technology. If you are interested in moderating the testing of a technology, or would simply like to see a technology listed above be included for testing, please send mail to cthon@sun.com.

Connectathon Network Information

Connectathon ’99’s network is a 10/100baseT network, with a full complement of hubs, switches, and routers that allow any-to-any, any-to-many, or point-to-point connections.

(For Connectathon veterans, please note we will no longer provide converters to 10base2 or AUI. Please remember to bring your own converter.)

Each drop in every booth is a home run to a large patch panel in the Network Operations Center (NOC). Design goals call for a test suite server for every six booths. Each test server contains the test suites (hence its name) for all protocols being tested. In addition, each server has a floor map that allows for ease in locating other participants.

At a minimum, RIP will be supported on the network, with DNS, NIS+, and NIS running throughout.

Diagnostic equipment will be provided to aid in protocol troubleshooting. Although not directly connected to the Internet, access to external web servers is permitted from the Connectathon network, via an ISDN line. The NOC will be staffed during regular business hours.

If you have further questions, please send e-mail to cthon@sun.com

Connectathon Hotel Information

A limited number of rooms are being held for Connectathon 99 registrants only until February 17, 1999. To receive a special discounted rate at the hotel below, just mention that you will be attending Sun Microsystems’ Connectathon and make your reservations quickly. Rooms are being held on a first-come basis.

Crowne Plaza
Downtown San Jose
(408) 998 0400
$129/night

The Crowne Plaza (formerly Holiday Inn) is adjacent to Parkside Hall.

Documents / RFCs related to NFS

nfs-network-file-system-protocol-specification-rfc1094

nfs-v3-rfc-rfc1813.txt

Where did it come from?

How does it work? Structure of NFS server and client.

What are NFS’s advantages and shortfalls?

NFS MANPAGE:

nfs-man

nfs-v3-rfc-rfc1813

nfs-network-file-system-protocol-specification-rfc1094

the-effects-of-metadata-corruption-on-nfs-swetha-krishnan-giridhar-ravipati-andrea-c-arpaci-dusseau-remzi-h-arpaci-dusseau-barton-p-mille-nfscorruption-storagess07

mount.nfs.manpage.txt
