Facilities

MRI Instruments

For more information, please see our MRI Center section.

Computing Infrastructure

The CIND manages 65 terabytes of acquired and processed data storage on three Windows Server 2003 Enterprise Edition servers. Most of this storage is maintained on compressed file systems to conserve space; the estimated uncompressed volume is between 200 and 400 terabytes, an effective compression ratio of roughly 3:1 to 6:1. Another 30 terabytes of acquired and processed data storage are hosted on a Hewlett-Packard PolyServe clustered file system, which distributes file server traffic evenly across all nodes in the cluster, eliminating bottlenecks that can occur during processing.

To process and manage the data, the CIND has a network of 18 computers that use the Sun Grid Engine (SGE) to distribute processing jobs among them; a sketch of a typical job submission appears below. Each system has two quad-core CPUs and 16 gigabytes of main memory and can run eight jobs simultaneously, one per core, giving the grid a total capacity of 144 simultaneous computing jobs. To support visualization and minor processing needs, the CIND has three computers running CentOS 5 available for shared use by lab members.
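
As an illustration of the SGE workflow, the following minimal Python sketch submits an array job to the grid. The script name process_scan.sh and the task count are hypothetical placeholders, not actual CIND pipeline components.

    #!/usr/bin/env python
    """Submit an SGE array job, one task per scan (sketch only)."""
    import subprocess

    N_TASKS = 144  # hypothetical task count; SGE queues tasks beyond free cores

    # qsub -cwd runs the job in the current directory; -t 1-N creates an
    # array job whose tasks each read their index from SGE_TASK_ID.
    subprocess.run(
        ["qsub", "-cwd", "-t", "1-%d" % N_TASKS, "process_scan.sh"],
        check=True,
    )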

For parallel processing applications, the CIND has acquired a Beowulf compute cluster under the Research Resource grant; it currently consists of 28 compute nodes directed by a head node. Each compute node has two quad-core CPUs and 64 gigabytes of main memory. The cluster also includes two Nvidia graphics processing units (GPUs), which give application developers the option of using their highly parallel architecture to accelerate image analysis and reconstruction.
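
On a Beowulf cluster, work is typically partitioned across nodes with a message-passing library such as MPI. The sketch below assumes the common mpi4py bindings and a hypothetical process_slice function standing in for one unit of image analysis; it is not an actual CIND application.

    #!/usr/bin/env python
    """Round-robin distribution of image slices across MPI ranks (sketch)."""
    from mpi4py import MPI

    def process_slice(i):
        # Hypothetical placeholder for real image analysis on slice i.
        return i * i

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's index, 0..size-1
    size = comm.Get_size()   # total processes launched across the cluster

    N_SLICES = 224           # hypothetical; e.g. one slice per core

    # Each rank processes every size-th slice.
    local = [process_slice(i) for i in range(rank, N_SLICES, size)]

    # Collect per-rank results on rank 0 for final assembly.
    results = comm.gather(local, root=0)
    if rank == 0:
        print("gathered results from %d ranks" % size)

Such a program would be launched across the cluster nodes with mpirun.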

To assist in processing very large datasets, we have a Dell PowerEdge R910 system. This system runs Red Hat Enterprise Linux 5 with 1 TB of RAM and four processors providing a total of 32 processing cores, along with 13 TB of local disk storage to hold data during processing.
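
Because all 32 cores of the R910 share its 1 TB of memory, large jobs can often be parallelized with ordinary shared-memory tools rather than MPI. The following minimal Python sketch uses the standard multiprocessing module; analyze_chunk is a hypothetical placeholder, not an actual CIND routine.

    #!/usr/bin/env python
    """Shared-memory parallelism on a single large-memory host (sketch)."""
    from multiprocessing import Pool

    def analyze_chunk(chunk_id):
        # Hypothetical placeholder: each worker processes one piece of a
        # dataset that fits entirely in the machine's RAM.
        return chunk_id

    if __name__ == "__main__":
        pool = Pool(processes=32)  # one worker per core on the R910
        results = pool.map(analyze_chunk, range(32))
        pool.close()
        pool.join()
        print(len(results), "chunks processed")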

The principal computing servers and storage systems are housed in a computer room within the CIND building. This room has two dedicated air-conditioning units and a 40 kVA Power Distribution Unit that provides battery backup in case of a power failure. The room also has an Infrastructure Manager that can safely shut down all systems if the batteries run low.