
CernVM-FS 0.2.76 Released

08/06/2011 CernVM-FS 0.2.76

Note (04/07/2011): cvmfs-0.2.76 is a client-only bugfix release for cvmfs-0.2.71. It fixes the following bugs:

  • Messages were broadcast to all logs (maillog, boot.log, etc.). This has been fixed; CernVM-FS now logs only to /var/log/messages.
  • In case of a system crash, the CernVM-FS internal cache database might be corrupted after reboot. A corrupted cache database is now rebuilt automatically on mount instead of causing the mount to fail.
  • The cvmfs_fsck utility had very high memory consumption.  This has been fixed.
  • cvmfs_fsck now correctly detects temporary file catalogs that can occur in the cache directory of a running instance.
  • The cvmfs_config setup command produced a stray file named "1" because of a faulty stderr redirection. This has been fixed.

Upgrading from 0.2.70/71 should be smooth: update the package and wait for the automounter to reload the cvmfs2 binary.

Note: cvmfs-0.2.71 is a client-only bugfix release for cvmfs-0.2.70. It fixes false error messages for ls -l caused by wrong return values of getxattr().

Description

CernVM-FS 0.2.70 is out. It includes several bugfixes and performance improvements. The server tools come with a new interface, cvmfs_server, which greatly simplifies repository creation for small VOs.

This release goes along with cvmfs-init-scripts 1.0.12 (see also here). The new init scripts reflect that a couple of repositories switched to the new namespace.

If you're new to CernVM-FS, have a look at the examples page.

Bugfixes

  • A bug in the CernVM-FS inode handling occasionally confused the directory cycle detection of the GNU coreutils file traversal code (used in du, find, …). This has been fixed.
  • A bug in the optimization for loading nested file catalogs could prevent CernVM-FS from applying new nested file catalogs; the problem persisted across restarts of CernVM-FS. Newly implemented logic for loading nested file catalogs fixes this.
  • After applying a new file catalog, there was a short period during which the Linux kernel buffers could serve outdated objects. This has been fixed by draining the Linux kernel buffers prior to applying any new file catalog.
  • The locking scheme for the CernVM-FS internal cache database has been changed so that it tolerates deletion of the cache database of a mounted repository. The cache database can be deleted by cvmfs_fsck in fix mode. See also Savannah.

Performance Improvements

  • CernVM-FS now uses zlib 1.2.5, which has significantly better compression and decompression rates than the zlib 1.2.3 shipped with SLC5.
  • Using an exclusive lock for the cache database speeds up the open() call.
  • The CernVM-FS Fuse module now uses jemalloc instead of the glibc standard memory allocator. The jemalloc allocator copes much better with the memory fragmentation caused by libfuse when very large amounts of metadata are touched within a few seconds.

Extended Attributes

CernVM-FS now supports the getxattr() system call. Currently there are two supported attributes:

  • hash: Shows the SHA-1 hash of a regular file as listed in the file catalog.
  • revision: Shows the file catalog revision of the mounted root catalog, an auto-increment counter that is increased on every synchronization of the shadow tree and the repository. The value is the same for all directories, symbolic links, and regular files of the mount point.

Extended attributes can be queried using the attr command. For instance, attr -g hash /cvmfs/atlas.cern.ch/software/ChangeLog returns the SHA-1 key of the file at hand.
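
For programmatic access, the same information can be read with the getxattr() system call directly. The following minimal C sketch is illustrative only: the attribute name "user.hash" is an assumption (the attr tool prepends the "user." namespace by default), while the path is the example from above.

  /* Minimal sketch: read the CernVM-FS "hash" attribute via getxattr(2).
     The attribute name "user.hash" is an illustrative assumption. */
  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/xattr.h>

  int main(void) {
    const char *path = "/cvmfs/atlas.cern.ch/software/ChangeLog";
    char value[256];
    ssize_t len = getxattr(path, "user.hash", value, sizeof(value) - 1);
    if (len < 0) {
      perror("getxattr");
      return 1;
    }
    value[len] = '\0';   /* getxattr() does not null-terminate the value */
    printf("hash: %s\n", value);
    return 0;
  }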

Admin Interface

There is a new cvmfs-talk command, remount, which checks for a new repository revision independently of the file catalog's TTL.

There is a new parameter CVMFS_MAX_TTL, which enforces a maximum file catalog TTL. Note that shorter TTLs will result in more load on your local Squids. This parameter is covered by service cvmfs reload.
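
As a sketch only, the parameter could be set in the local client configuration along the following lines; the file name /etc/cvmfs/default.local and the unit of the value are assumptions here, not statements from this release's documentation.

  # /etc/cvmfs/default.local (file name assumed)
  CVMFS_MAX_TTL=120   # enforce a maximum catalog TTL; the unit is assumed

As noted above, the parameter can be applied with service cvmfs reload.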

The cvmfs_fsck utility returns proper return values, comparable to the system's fsck. See also Savannah.

CernVM-FS Server Tools

With cvmfs_server, this release comes with a new interface to create and maintain CernVM-FS repositories for small VOs. The tool expects an SL5 distribution with a running httpd service and no CernVM-FS client utilities installed; the server utilities and the client utilities are mutually exclusive. The cvmfs_server tool uses the /srv/cvmfs area as storage, so if you want to use a large hard disk, mount it there beforehand. Note that the software signing key and the release manager machine certificate are newly created as well; in particular, they differ from those of the CERN repositories.

The server tools configuration has been moved from /etc/cvmfsdrc(.local) to /etc/cvmfs/server.(conf|local) in order to be more consistent with the rest of the CernVM-FS configuration.

The replication configuration is now mainly done in /etc/cvmfs/replica.repositories. This file contains a list of repositories to replicate in the following format:
Repository Name|URL|Public Signing Key|Destination Directory|Parallel Connections|Timeout|Retries
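
For illustration, an entry following this format might look like the line below; every value is a hypothetical placeholder, not a recommended or real configuration.

  example.cern.ch|http://cvmfs.example.org/opt/example|/etc/cvmfs/keys/example.cern.ch.pub|/srv/cvmfs/example.cern.ch|8|30|3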

This release also includes a couple of bugfixes and performance improvements.

Note: the CernVM-FS server tools are still considered to be experimental unless packaged in a CernVM virtual machine.

Migration from CernVM-FS 0.2.68

The client migration with RPMs should be smooth. This might not apply to cvmfs-init-scripts; before upgrading cvmfs-init-scripts, have a look here.

If you build from sources, remove --enable-libfuse-builtin from the configure options and add --enable-zlib-builtin (see the sketch below). If you don't use the mount scripts provided by CernVM-FS, note that the catalog_timeout mount option has disappeared while the use_ino mount option has been added.
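
As a rough sketch, the change to a source build looks as follows; any further configure options are placeholders for whatever your existing build already uses.

  # before (0.2.68):  ./configure --enable-libfuse-builtin ...
  # now (0.2.70+):    ./configure --enable-zlib-builtin ...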

For the server tools, note that the configuration files have changed location. For the replica tools, note that configuration is done differently than before.

Please update with caution. For large sites, we suggest starting with a couple of worker nodes and updating the rest gradually once everything turns out to work fine. Make sure that all jobs are drained prior to the migration.