CernVM-FS 0.2.68 Released
19/04/2011 CernVM-FS 0.2.68
CernVM-FS 0.2.68 is out. It comes with a modified proxy server notation and improvements to dynamic reconfiguration. The client now distinguishes between a direct connection to a web server and a connection through a proxy.
New Proxy Server Notation
At the suggestion of Dave Dykstra, the meaning of the proxy notation has changed. The proxy chain is still a ring buffer, but it is now composed of several "load-balance groups". A load-balance group combines fail-over and load-balance functionality. Within a load-balance group, a proxy server is always selected at random. A load-balance group fails only if all of its defined proxies fail in a row. So, a specification such as CVMFS_HTTP_PROXY="A|B|C;D" means: try one of the three proxies A, B, C at random; if it fails, try one of the two remaining proxies at random; if the second one fails, try the remaining one. If A, B, and C are altogether unavailable, the load-balance group fails and CernVM-FS switches to the next load-balance group.
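The group structure of the notation can be illustrated with a small bash sketch (hostnames are placeholders, not real proxies):

```shell
# Illustrative only: split a proxy specification the way the client
# interprets it.  Hosts A, B, C are placeholders.
spec="http://A:3128|http://B:3128;http://C:3128"

# ';' separates load-balance groups:
IFS=';' read -ra groups <<< "$spec"

# '|' separates the proxies within one group:
for group in "${groups[@]}"; do
  IFS='|' read -ra proxies <<< "$group"
  echo "load-balance group with ${#proxies[@]} proxy server(s)"
done
# Output:
#   load-balance group with 2 proxy server(s)
#   load-balance group with 1 proxy server(s)
```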
The recently introduced + syntax is still accepted but deprecated.
Note that as of this revision, a proxy server is necessary in order to mount repositories. If you are a roaming user without a proxy server, specify CVMFS_HTTP_PROXY=DIRECT.
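For a roaming laptop, the setting would go into the local client configuration; the file name below follows the usual CernVM-FS convention and is given as an example:

```shell
# /etc/cvmfs/default.local -- minimal roaming setup (illustrative)
# DIRECT tells the client to contact the web servers without a proxy.
CVMFS_HTTP_PROXY=DIRECT
```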
New Multiple Server Syntax
In order to be consistent with the proxy notation, the mirror server notation now accepts the semicolon (';') as separator as well. Don't forget to put the mirror list in double quotes, since the semicolon has a special meaning in bash! The previous comma separator is deprecated. Please also note that the mirror servers are usually set in the domain-specific configuration, e.g. in /etc/cvmfs/domain.d/cern.ch.local
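For example, a two-mirror list would look like this (hypothetical hostnames); note the double quotes around the semicolon-separated value:

```shell
# Hypothetical mirror list; the quotes are required because ';' is a
# command separator in bash.
CVMFS_SERVER_URL="http://mirrorA.example.org/sw;http://mirrorB.example.org/sw"
```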
In order to use the recently created mirror servers of the repositories hosted at cernvm-webfs.cern.ch, please create the file /etc/cvmfs/domain.d/cern.ch.local and set CVMFS_SERVER_URL to the Stratum 1 server closest to you: CERN, RAL, or BNL. If you are a roaming user and don't connect through a proxy server, CernVM-FS automatically selects the closest mirror server based on the network round-trip time.
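A sketch of such a file, with a placeholder hostname since the actual Stratum 1 addresses and URL layout are not reproduced here (replace the placeholder with the mirror closest to you):

```shell
# /etc/cvmfs/domain.d/cern.ch.local -- illustrative only
# <stratum1-host> stands for the CERN, RAL, or BNL mirror address;
# the path component depends on the repository layout.
CVMFS_SERVER_URL="http://<stratum1-host>/<repository-path>"
```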
Please note: the BNL Stratum 1 server is not yet a production service. If you want to connect your site to this mirror server, please contact us (or John DeStefano at BNL directly) first. The mirror servers at CERN, RAL, and BNL ("CernVM-FS Stratum 1") will soon become the default for all repositories in the cern.ch domain. They will replace the current http://cernvm-webfs.cern.ch
The Stratum 1 CernVM-FS web servers listen on port 80 and port 8000.
There are two new parameters to configure the timeouts for network operations: CVMFS_TIMEOUT_DIRECT applies when there is no proxy server active, and CVMFS_TIMEOUT applies when the connection goes through a proxy. The defaults are 10 seconds (direct connection) and 5 seconds (proxy connection), respectively.
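Both parameters can be set in the local client configuration, e.g. (the values shown are just the documented defaults):

```shell
# /etc/cvmfs/default.local -- network timeouts in seconds
CVMFS_TIMEOUT=5          # used when connecting through a proxy
CVMFS_TIMEOUT_DIRECT=10  # used for direct connections
```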
The default time-to-live for file catalogs has been decreased from 1 day to 1 hour. As a result, changes to the repositories in the cern.ch domain should become visible after at most 65 minutes (up to 5 extra minutes due to HTTP caches).
The cvmfs-talk utility understands a couple of new commands: mountpoint, proxy rebalance, proxy info, proxy set, proxy group switch, host switch, host set, timeout info, and timeout set. Run cvmfs-talk without arguments to find an explanation of the new commands.
The new cvmfs-talk commands are used by the new reload command of the cvmfs service, which dynamically adjusts the proxies, hosts, and timeouts without remounting the repositories.
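On a node with a mounted repository, the reload could be triggered and then inspected like this (a sketch: the exact cvmfs-talk invocation for selecting a repository may differ, and the commands require a running CernVM-FS client):

```shell
# Apply configuration changes without remounting (requires root):
service cvmfs reload

# Inspect the running instance afterwards:
cvmfs-talk proxy info
cvmfs-talk timeout info
```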
The Fuse module now comes with an implementation of the statfs system call that reports the local cache usage. In effect, the df utility can be used to check cache consumption. If no cache quota is specified, CernVM-FS reports the available space of the file system containing the cache directory as available cache space.
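With the statfs support in place, cache usage shows up in ordinary tools; the mount point below is an example:

```shell
# Reports size/used/available of the local CernVM-FS cache for this
# mount point; 'atlas.cern.ch' is an example repository name.
df -h /cvmfs/atlas.cern.ch
```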
This and That
- cvmfs_config chksetup reports autofs problems as warnings, not as errors
- cvmfs_config chksetup reports broken proxy/host combinations
- A rather serious bug found by Steve Traylen has been fixed: under certain circumstances, CernVM-FS could end up with junk in the local cache as a consequence of a proxy or host fail-over
- By default, strict mount is enabled, i.e. the mount helpers won't mount any repository that is not listed in CVMFS_REPOSITORIES.
- Several bugs are fixed in the replica utilities
- Optimization and parallelization of the cvmfs_sync.bin utility in order to speed up the publishing step.
Migration from CernVM-FS 0.2.61
This migration is supposed to be smooth. Make sure that CVMFS_HTTP_PROXY is defined. Check that all your required repositories are indeed listed in CVMFS_REPOSITORIES.
Please update with caution. For large sites, we suggest starting with a couple of worker nodes and updating the rest of the worker nodes gradually if everything turns out to work fine. Make sure that all jobs are drained prior to the migration.