A.1 Specific Information for Remote Sites and Institutions



Goals of this page:

This page provides links to CMS-affiliated, non-CERN, site- and institution-specific information that users working from these institutions need in order to complete the tutorials and exercises in the workbook.


Site administrators and/or system administrators are encouraged to provide any necessary information that deviates from the workbook instructions such that a CMS user working from your site/institution can accomplish all the tasks in the workbook. If the information is brief, you may include it directly in the current page. Otherwise, please provide links on this page pointing to the information. The information may reside on web pages that you maintain elsewhere, or you may create new workbook pages from this page. For the latter, see the twiki formatting help (available from any editing screen) and WorkBookContributors.

Examples include but are not limited to:

  • how to get computing privileges and accounts at the site/institution,
  • login information to local clusters that maintain the CMS environment,
  • information about grid resources (besides LCG) that are available, and so on.
We organize the sites/institutions by affiliation with Tier-1 sites.



There are various facilities provided at CERN for CMS use. Please check the facility page to find out which are available to you. A set of tools is available for interacting with the CERN storage systems.

U.S. Tier-1 site affiliates

U.S. CMS is a collaboration of US scientists participating in the CMS experiment. The CMS T1 site in the U.S. is at Fermilab (also known as FNAL), which houses the LHC Physics Center, a location for CMS physicists to find experts on all aspects of data analysis, particle ID, software, and event processing within the US, during hours convenient for U.S.-based physicists.

Find USCMS-specific software and computing information at User Computing. We collect some of the information from the USCMS web site here for convenience.



Getting a Fermilab Account

If you are eligible for a Fermilab account (e.g., if you are affiliated with the CMS Software and Computing activities at Fermilab), see How to Get Started or Renew your Privileges at Fermilab. You must be registered with Fermilab before you can get any accounts. The web page How to get a CMS Computing Account at Fermilab outlines all the necessary steps to get a CMS-specific account at Fermilab and points you to the online forms that must be filled out.

Login to CMSLPC

To log on to the CMSLPC cluster at Fermilab, obtain your FNAL Kerberos credentials on your local computer:

kinit username@FNAL.GOV

then log in to the CMSLPC cluster as follows:

To connect to the Scientific Linux 6 (SL6) cluster, do:

ssh username@cmslpc-sl6.fnal.gov

And to connect to the SL5 cluster, do:

ssh username@cmslpc-sl5.fnal.gov

More instructions on how to get access to the CMSLPC cluster are given at How to get access to the (CMSLPC) cluster.
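If you log in often, it can be convenient to let SSH forward your Kerberos credentials automatically. A minimal ~/.ssh/config sketch, assuming an OpenSSH client built with GSSAPI support (the host pattern matches the cluster names above; this is a convenience, not a requirement):

```text
# ~/.ssh/config fragment: use Kerberos (GSSAPI) for the CMSLPC hosts
Host cmslpc*.fnal.gov
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials yes
```

With this in place, `ssh username@cmslpc-sl6.fnal.gov` will use the ticket obtained with kinit instead of prompting for a password.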

Set up your CMS Environment

General CMS Software Environment

The CMS software environment is set by sourcing the appropriate environment setup script according to the user's shell:

In tcsh, csh:

source /cvmfs/cms.cern.ch/cmsset_default.csh

In bash, sh:

source /cvmfs/cms.cern.ch/cmsset_default.sh

This will set general CMS software environment variables, extend the user's $PATH to include CMS specific utilities and tools, and define aliases used in the CMS software projects.
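The choice between the two setup scripts above depends only on the shell in use. As an illustrative sketch, the selection logic can be written as a small helper; the /cvmfs path is the one quoted above, but the helper function itself is just an example, not part of the CMS tools:

```shell
# Pick the CMS environment setup script matching a given shell name.
# CMS_BASE is the standard /cvmfs location used in the text above.
CMS_BASE=/cvmfs/cms.cern.ch

cms_setup_script() {
    # $1 is the shell name, e.g. "bash", "sh", "csh", or "tcsh"
    case "$1" in
        csh|tcsh) echo "$CMS_BASE/cmsset_default.csh" ;;
        *)        echo "$CMS_BASE/cmsset_default.sh"  ;;
    esac
}

cms_setup_script tcsh   # prints /cvmfs/cms.cern.ch/cmsset_default.csh
cms_setup_script bash   # prints /cvmfs/cms.cern.ch/cmsset_default.sh
```

After sourcing the script that the helper selects, the cmsrel and cmsenv aliases mentioned in the text become available.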

Mass Storage

Users may request a directory in the LPC mass storage area by opening a service desk ticket (servicedesk@fnal.gov -> Service Catalog -> Scientific Computing -> CMS Storage Space Request) or by sending email to cms-t1@fnal.gov.

More info can be found at EOS Mass Storage at the LPC. An older link is still reachable at: Mass Storage.
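Interaction with the LPC EOS area typically goes through the XRootD client tools. A hedged sketch, assuming the redirector name cmseos.fnal.gov and a user area under /store/user/<username> (both may differ; check the EOS Mass Storage at the LPC page for the authoritative names):

```shell
# List the contents of your EOS user area (placeholder path).
xrdfs root://cmseos.fnal.gov ls /store/user/<username>

# Copy a local file into EOS, then copy it back out.
xrdcp myfile.root root://cmseos.fnal.gov//store/user/<username>/myfile.root
xrdcp root://cmseos.fnal.gov//store/user/<username>/myfile.root copy.root
```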

Grid computing

The US CMS Grid is part of the worldwide LHC Computing Grid for LHC science analysis. The US CMS grid environment is part of the Open Science Grid (OSG) infrastructure. See US CMS Grid Services and Open Science Grid. The instructions on page Starting on the GRID are valid for Fermilab users.

For information on DOEGrids grid certificates, Fermilab users should see Certificates at Fermilab.


As at other sites, you have to set up the Grid UI, your CMSSW release, and then CRAB, but the commands are slightly different. They are listed here.

CRAB, as set up at FNAL, has a couple of extra features that you may find useful. First, if your dataset is available at one of the US Tier-2 or Tier-3 sites, you may be able to use scheduler = condor_g in your crab.cfg file to access that data. This tends to be somewhat more reliable than scheduler = glite. Second, if your data is located at the Fermilab Tier-1, you can use the LPC batch system to access it. Directions for setting up and running this are given at CRAB on LPC CAF. In both cases, you cannot use CRAB's server mode with these options, so be sure you don't have the use_server or server_name options in your crab.cfg.
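The scheduler choice above goes in the [CRAB] section of crab.cfg. A minimal sketch, assuming a CRAB2-style configuration file; the dataset and pset names below are placeholders, not real values:

```ini
# crab.cfg sketch (placeholder dataset and pset names)
[CRAB]
jobtype = cmssw
# condor_g for data hosted at a US Tier-2/Tier-3; the default is glite.
scheduler = condor_g
# Do not add use_server or server_name here: server mode does not work
# with these options.

[CMSSW]
datasetpath = /SomePrimaryDataset/SomeEra/AOD
pset = my_analysis_cfg.py
```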

Batch system

The batch system available to users of the CMSLPC cluster is Condor, which allows the user to submit jobs to the production farm. The use of this batch system is described on the following page: Batch System. If you are submitting cmsRun jobs, it is recommended that you use the CRAB mechanism described above, as it already does everything correctly for you. It is much easier, both for users and for the support group.
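For jobs submitted directly (rather than through CRAB), Condor expects a submit description file. A minimal sketch with placeholder file and executable names; the exact requirements depend on the LPC Condor configuration described on the Batch System page:

```text
# myjob.jdl - minimal Condor submit description (placeholder names)
universe   = vanilla
executable = myscript.sh
output     = myjob_$(Cluster).out
error      = myjob_$(Cluster).err
log        = myjob_$(Cluster).log
queue 1
```

Submit with condor_submit myjob.jdl and monitor the job with condor_q.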

Germany Tier-1 site affiliates

This information is included from the original FSP-CMS Analysis Support page.

DESY Hamburg

Information about local computing resources at DESY Hamburg can be found in LocalComputingIssuesHamburg.

Italy Tier-1 site affiliates

UK Tier-1 site affiliates

Imperial College London

Complete information for users wishing to work from Imperial College London can be found in ImperialCollegeLondonWorkBook.

Login platforms at OSG Tier-2s

Some of the OSG Tier-2s provide a login platform for their "local users". To get access to this login platform you should email the site contact listed in CMS.SiteDB.

CMSSW is installed uniformly in $OSG_APP/cmssoft on all of the OSG sites. To get started, you generally need to first set up your environment by doing one of the following, depending on which shell you are working in:

In tcsh, csh:

source $OSG_APP/cmssoft/cms/cmsset_default.csh

In bash, sh:

source $OSG_APP/cmssoft/cms/cmsset_default.sh

Review status

Reviewer/Editor and Date    Comments
JohnStupak - 15-September-2013 Review with minor changes
FedorRatnikov - 10-Feb-2010 added DCMS details
Main.fkw - 22 Jul 2007 added info on OSG Tier-2s login platforms
AnneHeavey - 03 Aug 2006 more additions to FNAL info
JennyWilliams - 05 Dec 2006 tidied up a bit
Main.gartung - 23 May 2007 updated instructions for running at Fermilab
AlanStone - 14 Oct 2008 Updated links to new USCMS & FNAL Computing docs

SudhirMalik - 26 Nov 2008 (FNAL)

Topic revision: r1 - 2016-03-11 - eduardo