notes on configuring coda in netbsd 3.0
for the purposes of this exercise, consider a network with three coda servers, one of which is the SCM (master database server) and the other two of which are secondaries. there are also a number of homogeneous clients, one of which is represented by CL1. the diagram below describes the network both physically and logically,
[network diagram]
we will assume that all systems are sun ultrasparc systems running netbsd 3.0 in 32-bit mode. we should be able to make this work with any version of coda newer than 6.0.14 with corresponding versions of the rpc2, lwp, and rvm libraries.
so the first thing we will want to do is to get the coda servers up and running. this simply requires downloading the proper coda, rpc2, lwp, and rvm sources and building them like any other program. it used to be necessary to apply a few patches to make this work on sparc platforms, but i believe those have all been merged at the time of this writing, so that no patches are required.
i always build it myself, and i am assuming that you are doing the same here, or are using a package that follows a similar path convention, namely that the installation prefix is /usr/local.
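the build itself looks something like the below, assuming the usual configure/make tarballs. treat this as a sketch: the version numbers in the tarball names are whatever you downloaded, so i have globbed them here. lwp goes in first, since rpc2 and rvm depend on it, and coda itself goes last.

```
# (cd lwp-*  && ./configure --prefix=/usr/local && make && make install)
# (cd rvm-*  && ./configure --prefix=/usr/local && make && make install)
# (cd rpc2-* && ./configure --prefix=/usr/local && make && make install)
# (cd coda-* && ./configure --prefix=/usr/local && make && make install)
```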
so we will set up our SCM first. fortunately, there is a nice script which automates most of the work for us. on server S1, we invoke the script,
# /usr/local/sbin/vice-setup
specify parameters as prompted by vice-setup. pick some good strings for the update token, auth2 token, and volutil authorization token, and take note of them somewhere safe; these will be shared secrets among all the coda servers. we specify that yes, this system is the SCM, and that it will have a server id of 1. we will create the rvm log in a file, /rvm_log, of 20 megabytes, and the rvm data partition in a file, /rvm_data, of 315 megabytes. these are fairly middle of the road values for these parameters. specify that we will store our files in the directory /vicepa, and that configuration files go in /vice. we will set the maximum number of files to 2 million. and yes, we would like vice-setup to modify rc.local so that coda starts automatically on boot.
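for reference, here is roughly how my answers to vice-setup came out. this is a paraphrased summary, not the literal prompt text, and the tokens are of course placeholders you should replace with your own secrets.

```
is this the SCM?             yes
server id:                   1
update/auth2/volutil tokens: (three good secret strings)
rvm log:                     /rvm_log   (20 MB)
rvm data:                    /rvm_data  (315 MB)
file data goes in:           /vicepa
configuration goes in:       /vice
maximum files:               2000000
start coda on boot:          yes
```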
you will also be prompted to create an administrative username while setting up the SCM. i suggest you pick something neutral like "admin" for the coda root user, rather than using your own username, which you can set up later as a regular user account. you also need to pick a starting UID. choose something that will synch up with your UID allocation policy in /etc/passwd on all your clients. i picked 100 and worked my policy around that.
the administrative user that you create will have the default password "changeme".
at this point, coda should nominally be configured on server 1. i would just reboot server 1 and make sure that when it comes up, all the coda daemons are running properly (rpc2portmap, auth2, updatesrv, updateclnt, codasrv). once server 1 is up and running, we can move on to configuring server 2.
we configure server 2 using the vice-setup script, with exactly the same parameters as server 1. the only differences are that server 2 is not the SCM, and when prompted for the identity of the SCM, we give it the FQDN of server 1. we will give server 2 the server id number of 2.
if you see some nonsense about authorization failing while trying to connect to the SCM for update, or something like that, you can just ignore it; nothing seems to be hurt by it.
manually invoke auth2 and updateclnt on server 2. now go to server 1 and make sure that the file /vice/db/servers lists both servers with their ids, along the lines of,

server1		1
server2		2
also look at /vice/db/servers on server 2 and make sure it looks like that too. you will have to keep these in synch manually as you add and remove servers -- i am not sure why updateclnt does not track it. once the /vice/db/servers file is the same on both systems -- in synch -- stop then restart the coda services on server 1,
# rc.coda stop ; rc.coda start
then once coda comes back up on server 1, stop then restart coda on server 2 as well. this should give us a two server coda cell, with both of them in synch.
for server S3, we do the same thing. we use vice-setup to configure it with exactly the same parameters as server 2, except that no, it is not the SCM, and, when prompted, we give it the FQDN of the SCM. we assign server 3 a server id number of 3. manually fire up auth2 and updateclnt on server 3. then we make sure that the /vice/db/servers file on all three servers looks along the lines of,

server1		1
server2		2
server3		3
then we use rc.coda to stop and restart the coda processes on each server sequentially in order to get them all to synch up. stop and restart coda first on server 1, then on server 2, then on server 3. this should give us a mostly working three server coda cell comprising three servers, each with one data partition, /vicepa.
we do need to tie up a few loose ends. we want to make sure that the /etc/hosts file on each coda server lists all the neighbor servers. this lets coda come up even when something is going wrong with the nameservice. that is, each should contain lines of the form,

<address of server1>	server1
<address of server2>	server2
<address of server3>	server3

with the real addresses of your servers filled in.
we also need to edit the /vice/db/vicetab file on each server. by default, the vicetab file on a server will only list the partitions local to that server. for example, on server 1, the vicetab file would look like,

server1	/vicepa	ftree	width=128,depth=3
what we need to do is edit the /vice/db/vicetab file such that it shows the vice partitions that are available on all servers. since each server has one partition, /vicepa, we modify the vicetab file so that it appears as so,

server1	/vicepa	ftree	width=128,depth=3
server2	/vicepa	ftree	width=128,depth=3
server3	/vicepa	ftree	width=128,depth=3
this is another one of those files that must be manually propagated and updated among all your coda servers for everything to work correctly. every time you add or remove a coda data partition, you need to make sure that it is listed in the vicetab file on each coda server in the cell -- not just the vicetab file of the server on which it resides.
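since the /vice/db/servers and /vice/db/vicetab files have to be kept in step by hand, a tiny helper can save some typing. this is just my own sketch, not part of coda: the hostnames are placeholders, and the echo makes it a dry run that prints the scp commands rather than running them. drop the echo to really copy.

```shell
#!/bin/sh
# dry-run sketch: print the scp commands that would push the manually
# replicated /vice/db files from the SCM to each other server.
# hostnames are placeholders; remove the echo to really copy.
push_vice_dbs() {
    for host in "$@"; do
        for f in /vice/db/servers /vice/db/vicetab; do
            echo scp "$f" "${host}:${f}"
        done
    done
}

push_vice_dbs server2 server3
```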
when you ran vice-setup, it already created the root volume of your cell. it is not necessary to do anything special with volutil or the like to get one. this is a common misconception that i don't feel is really addressed anywhere; a lot of people start mucking around with volutil, damage something, and end up having to trash everything and start configuring from scratch.
now, we have to work on the clients -- first to get them to see coda at all, and then a little more to get the nice presentation that we really want from this sort of global network filesystem.
we first need to configure the kernel for coda client support. the very first step in this is making sure that your architecture has a major number configured for the coda vfs device. go to /usr/src/sys/arch/yourarch/conf and edit the file majors.yourarch. make sure that it contains a line that looks like,
device-major vcoda char 47 vcoda
where the fourth field contains some free major number. i happen to use 47 because it is a free major number in the netbsd sparc kernel, originally intended for this purpose. some netbsd platforms like macppc have other drivers already sitting on major 47; you can pick another major, or just get rid of the old driver. your choice.
now we make the coda device. go to /dev and make sure there isn't already a device with the major,minor number that we are choosing, which in my case is 47,0. then, make the device,
# mknod /dev/cfs0 c 47 0
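the "make sure there isn't a device already" check can be sketched with a small helper like the one below. this is my own function, not part of netbsd or coda; it scans a long listing for character devices whose major number matches the one we want, so empty output means the major is free in that directory.

```shell
#!/bin/sh
# sketch: print the names of character devices under a directory whose
# major number matches the given one. on a long listing, char devices
# start with "c" and the major appears in field 5 as e.g. "47,";
# $5+0 strips the trailing comma numerically.
majors_in_use() {
    dir=$1
    major=$2
    ls -lL "$dir" 2>/dev/null |
        awk -v m="$major" '$1 ~ /^c/ && $5+0 == m { print $NF }'
}

majors_in_use /dev 47
```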
now, make sure that your kernel configuration file contains the following lines,
file-system coda
pseudo-device vcoda 4
and build a new kernel. reboot with the new kernel. once the system comes back up, we can configure venus, the coda client. we do this with the venus-setup program,
# /usr/local/sbin/venus-setup
i run venus-setup pretty much by the book, with the few defaults suggested. a cache size of 50 megabytes works fine for me. make sure that you call venus in your rc.local file on startup,
/usr/local/sbin/venus &
so that it starts up on boot. now we need to configure venus with a realm for our domain, which lets us present the three servers as one distinct filespace. edit the file /usr/local/etc/coda/venus.conf so that it contains a line naming our realm, which will be the default realm for commands such as "clog" when invoked with no arguments. also make sure that the line pointing at the realms file is not commented out and names a valid realms file. now we will edit the /usr/local/etc/coda/realms file ourselves to define our realm. make sure that it contains a line with our realm name followed by the names of our three servers.
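a sketch of what i end up with in those two files is below. the realm name "yourrealm" and the server names are placeholders, and i believe the venus.conf options are spelled realm= and realmtab=, but check the comments in your own venus.conf in case your version names them differently.

```
/usr/local/etc/coda/venus.conf:

    realm=yourrealm
    realmtab=/usr/local/etc/coda/realms

/usr/local/etc/coda/realms:

    yourrealm    server1 server2 server3
```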
typically we choose our realm name such that it matches our DNS domain name, but there is no forced requirement for this. now i would just reboot our client and make sure that venus comes up. we should then be able to log in and look at our newly configured coda cell,
% cd /coda/
% ls
now, we might as well log in as our administrative user and change the password to something better.
% clog admin
enter the password of "changeme". this should authenticate us as the admin user. we can check that this was successful with the "ctokens" command,
% ctokens
Tokens held by the Cache Manager for admin:
        Coda user id:    100
        Expiration time: Sun Jan 28 12:55:38 2007
assuming the admin user has a coda user id of 100. use the "cpasswd" command to set the password to something secure. you use the same method to authenticate to coda with your own id as well. this is something that you will have to do deliberately with the "clog" command, as no mechanism exists to automatically get coda tokens for you at login time. i personally add the "clog" command to my .login file.
once you authenticate to coda, you should be able to work within it just as if it were any other branch of the filesystem. the only other really interesting commands from the user side are "cfs listacl", which lists the access control list (acl) on a directory, and "cfs setacl", which modifies the access control list on a directory for which you have permission to manage acls.
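for example, to hand a user read and lookup rights on a directory, one would do something like the below. the path and username here are made up for illustration; the rights string is built from coda's acl letters (rlidwka).

```
% cfs setacl /coda/yourrealm/playground jdoe rl
% cfs listacl /coda/yourrealm/playground
```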
to make volumes in coda, it is necessary to log into the SCM machine and become root (unlike afs, where you can make volumes anywhere). you use the createvol_rep command,
# createvol_rep volumename
which in the above example would create a volume with the primary replica on server1:/vicepa and two secondary replicas on server2:/vicepa and server3:/vicepa. you can create as many or as few replicas as you want; i just wouldn't recommend trying anything crazy like putting two replicas on the same server and partition.
now you need to actually make a mount point for your volume. this requires logging into a client system and using the "clog" command to become coda admin. then use the "cfs mkm" command to set up a mount point,
% cfs mkm /coda/ volume
which will mount the volume with name "volume" at the directory /coda/. this would then appear to us in the filesystem,
% cd /coda/
% ls
another thing that is useful is the procedure for making users. this requires the use of two commands, both of which must be run on the SCM as root -- "pdbtool" and "au" -- as shown below.
# pdbtool
pdbtool> nui username userid
pdbtool> exit
# au -h nu
when prompted for Your Vice Name, enter the coda admin username. when prompted for your password, enter the coda admin password. when prompted for the Vice user, enter the username of the account that you just created with pdbtool. when prompted for a Vice password, enter a temporary password like "changeme".
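as a concrete example, creating a hypothetical user jdoe with uid 105 would go like this; the SCM hostname server1 here is also made up, so substitute your own.

```
# pdbtool
pdbtool> nui jdoe 105
pdbtool> exit
# au -h server1 nu
```

jdoe can then clog in with the temporary password and change it with cpasswd.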
this completes the job of setting up coda on all servers as well as on any number of clients.