The UPRM HEP T3 cluster is composed of one head node (cms-hn), one compute element (cms-grid0), one storage element (cms-se), one squid (and GUMS) server (cms-squid0), three interactive nodes (with private IPs: desktop-0-0, desktop-0-1, desktop-0-2), and 26 worker nodes (18 nodes with Xeon W3550 CPUs @ 3.07 GHz and 8 nodes with Xeon Sandy Bridge E5-1620 CPUs @ 3.60 GHz) configured to run both local and grid jobs. The worker nodes also share their local disks in a Hadoop Distributed File System (HDFS) with a current capacity of 120 TB, which can grow as more disks or nodes are added. Three GridFTP servers are configured: one on cms-grid0 to access the data on RAID disks, and the other two on cms-se and desktop-0-0 to access data in our HDFS. We have 3 TB of /home disk space and ~30 TB of RAID storage for large datasets (old storage). Finally, our newest tool is an XRootD server configured on cms-se, which lets jobs read data while they run. The cluster is managed by Rocks and is designed to have full T3 capability, including a storage element. It is on the Open Science Grid (OSG), affiliated with the CMS virtual organization (VO).
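As an illustration of the three data-access paths described above (XRootD, GridFTP, and HDFS), typical commands might look like the following sketch. The hostnames, ports, usernames, and paths are assumptions for illustration only and are not taken from the actual site configuration; these commands only work on a node with the corresponding clients installed and valid grid credentials.

```shell
# Hedged sketch: typical data-access commands on a T3 site like this one.
# Hostnames, ports, and paths below are illustrative assumptions.

# Read a file through the XRootD server on cms-se while a job runs:
xrdcp root://cms-se//store/user/jdoe/sample.root /tmp/sample.root

# Copy a file in through one of the GridFTP doors backed by HDFS:
globus-url-copy file:///tmp/sample.root \
    gsiftp://cms-se:2811/hadoop/store/user/jdoe/sample.root

# Check current HDFS capacity and usage from a worker or interactive node:
hdfs dfsadmin -report | head -n 5
```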
Head node (cluster frontend): (custom-built)
Interactive node (aka. desktop-0-0): Acmemicro barebone 5036T-T
Interactive node (aka. desktop-0-1 and desktop-0-2):
18 Worker nodes (aka. compute): Acmemicro barebone 5036T-T
8 Worker nodes (aka. sandy):
Acme AS-424JS1 SAS/SATA 4U 24-bay SAS JBOD storage subsystem
2 switches: PowerConnect 6248
The operating system is Scientific Linux 6.5 plus the latest security updates, on top of Rocks-6.1.1
VMs are managed by KVM
Grid packages come from the OSG repository (series 3.3.x)
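As a sketch of how a node would pull packages from the OSG 3.3 series on Scientific Linux 6, the steps below follow the standard OSG release-RPM pattern. The exact repository URL and the choice of `osg-wn-client` as the package to install are assumptions for illustration, not confirmed details of this site's setup.

```shell
# Hedged sketch: enabling the OSG 3.3 yum repository on an SL6/EL6 node.
# The release-RPM URL follows the usual OSG convention and is an assumption here.
rpm -Uvh https://repo.opensciencegrid.org/osg/3.3/osg-3.3-el6-release-latest.rpm

# EPEL is a prerequisite for most OSG packages:
yum install -y epel-release

# Example package: the worker-node client stack.
yum install -y osg-wn-client
```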