CernVM-FS Configuration Examples


Please follow the instructions in the quickstart chapter of the CernVM-FS technical report.

Select Repositories

Let's assume you want to support ATLAS and LHCb software. Create or edit /etc/cvmfs/default.local and set
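A minimal sketch of what /etc/cvmfs/default.local could contain for this scenario, using fully qualified repository names and including the dependencies listed in the table below (adjust the list to your site's needs):

```shell
# /etc/cvmfs/default.local
# ATLAS and LHCb software, plus the ATLAS conditions database and the
# Grid UI repository that lhcb.cern.ch depends on
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,grid.cern.ch
```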


The following table lists the available repositories and the repositories they rely on.
Note: For Grid sites deploying the Grid UI by other means, please ignore the CernVM-FS grid.cern.ch dependency.
Note: The 2.1 client introduces a shared cache, so repository-specific quota settings should be avoided.  Instead, set a quota of at least 40G in /etc/cvmfs/default.local and ensure that the file system hosting the cache has at least 15% additional safety margin.

Repository            Description                      Dependencies          Recommended min. quota
atlas.cern.ch         ATLAS experiment software        atlas-condb.cern.ch   10G
atlas-condb.cern.ch   ATLAS conditions database
cms.cern.ch           CMS experiment software          grid.cern.ch
lhcb.cern.ch          LHCb experiment software         grid.cern.ch          5G (ideally 10G)
lhcb-conddb.cern.ch   LHCb conditions database                               1.5G
na61.cern.ch          NA61 experiment software         sft.cern.ch
boss.cern.ch          BES experiment software
grid.cern.ch          Grid User Interface
sft.cern.ch           LCG Applications Area software
geant4.cern.ch        Geant4 software                  sft.cern.ch

Change Cache Quota and Location

Let's assume we want to have a 40G shared cache for all the configured repositories.  The cache should be located in the existing directory /var/scratch/cvmfs.

If you're about to change an existing cache location or decrease the quota, first run /sbin/service cvmfs restartclean (CernVM-FS 2.0.X) or cvmfs_config wipecache (CernVM-FS 2.1.X) in order to wipe the current cache. Create or edit /etc/cvmfs/default.local and set
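A sketch of the corresponding settings, assuming the standard CVMFS_QUOTA_LIMIT and CVMFS_CACHE_BASE client parameters (the quota is given in megabytes):

```shell
# /etc/cvmfs/default.local
CVMFS_QUOTA_LIMIT=40000               # 40G shared soft quota, in MB
CVMFS_CACHE_BASE=/var/scratch/cvmfs   # must be on local storage
```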


Make the changes effective by /sbin/service cvmfs restart (CernVM-FS 2.0.X) or cvmfs_config reload (CernVM-FS 2.1.X).

In practice, 20G should be more than enough for running jobs.  Since the CernVM-FS quota is a soft quota, ensure that the file system hosting the cache has an additional 15% free space.

Note: The cache location has to be on local storage.  Also take care that tmpwatch doesn't clean the cvmfs cache directory behind your back.

Select the Grid UI Version in the grid.cern.ch Repository

The grid.cern.ch repository provides two symbolic links that facilitate selecting the version of the Grid UI software that is supposed to be used on a particular grid site.  The symbolic link /cvmfs/grid.cern.ch/default points to a reasonably recent version that should fit in many cases.

The symbolic link /cvmfs/grid.cern.ch/glite is a so-called variant symbolic link.  It can be customized by sites.  The target of this symbolic link will be set by CernVM-FS at mount time depending on the GLITE_VERSION environment variable.  With the cvmfs-init-scripts package in version 1.0.20 or newer, /cvmfs/grid.cern.ch/glite will point to /cvmfs/grid.cern.ch/default by default.  In order to change that to a different target, set GLITE_VERSION=<new_target> in any of the CernVM-FS configuration files for grid.cern.ch, for instance in /etc/cvmfs/default.local.
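For example, to pin the glite link to a specific UI release, a setting along these lines could be used (the version string here is purely illustrative; use a target that actually exists in the repository):

```shell
# /etc/cvmfs/config.d/grid.cern.ch.local (or /etc/cvmfs/default.local)
GLITE_VERSION=3.2.11-1   # hypothetical example target
```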

Specify Local Site Proxies

Let's assume you have two Squid proxies (squid1 and squid2), both listening on port 3128, which you want to configure load-balanced and with fail-over in case one of them is offline.  Create or edit /etc/cvmfs/default.local and set
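A sketch of the proxy setting for this scenario: proxies within one group, separated by "|", are load-balanced, and the client fails over to the other group member if one is offline (the host names squid1 and squid2 are the placeholders from the text above):

```shell
# /etc/cvmfs/default.local
CVMFS_HTTP_PROXY="http://squid1:3128|http://squid2:3128"
```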


followed by /sbin/service cvmfs reload (CernVM-FS 2.0.X) or cvmfs_config reload (CernVM-FS 2.1.X).

Use the CernVM-FS Stratum 1 Mirror Servers

Create or edit /etc/cvmfs/domain.d/cern.ch.local (not /etc/cvmfs/default.local) and set
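A sketch of the Stratum 1 list, using the CERN, RAL, and BNL hosts named on this page; semicolons separate fail-over alternatives, and @org@ is expanded by CernVM-FS to the repository name.  The /opt/@org@ path follows the CERN server convention mentioned in this section and is assumed to apply to the other hosts as well:

```shell
# /etc/cvmfs/domain.d/cern.ch.local
CVMFS_SERVER_URL="http://cvmfs-stratum-one.cern.ch:8000/opt/@org@;http://cernvmfs.gridpp.rl.ac.uk:8000/opt/@org@;http://cvmfs.racf.bnl.gov:8000/opt/@org@"
```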


followed by /sbin/service cvmfs reload (CernVM-FS 2.0.X) or cvmfs_config reload (CernVM-FS 2.1.X).  Switch the URLs in order to try the closest servers first, e.g. choose the RAL URL first if you are in the U.K.  In case of failure, CernVM-FS will fail-over to the next available host.  Specify the hosts using standard HTTP port 80 (like http://cvmfs-stratum-one.cern.ch/opt/@org@) if port 8000 is blocked at your site.

Additional Stratum 1 servers are at ASGC in Taiwan (http://cvmfs02.grid.sinica.edu.tw:8000) and at Fermilab near Chicago (http://cvmfs.fnal.gov:8000).

Setup a Local Squid Server

Squid is very powerful and has lots of configuration and tuning options. For CernVM-FS we require only very basic static content caching. If you already have a Frontier Squid installed (http://frontier.cern.ch), you can use it for cvmfs as well.

Otherwise, start from a standard Scientific Linux 5 or 6 Squid and adjust as follows. Browse through /etc/squid/squid.conf and make sure the following lines are present:

max_filedesc 8192
maximum_object_size 1024 MB
cache_mem 128 MB
maximum_object_size_in_memory 128 KB
# 50 GB disk cache
cache_dir ufs /var/spool/squid 50000 16 256

Check your Squid configuration with squid -k parse. Create the hard disk cache area with squid -z. In order to make the increased number of file descriptors effective for Squid, execute ulimit -n 8192 prior to starting the Squid service.
If you're using ACLs, don't forget to add ACL allow rules for the Stratum 1 servers, for example like this:

acl cvmfs dst cvmfs-stratum-one.cern.ch
acl cvmfs dst cernvmfs.gridpp.rl.ac.uk
acl cvmfs dst cvmfs.racf.bnl.gov
acl cvmfs dst cvmfs02.grid.sinica.edu.tw
acl cvmfs dst cvmfs.fnal.gov
acl cvmfs dst cvmfs-atlas-nightlies.cern.ch
http_access allow cvmfs