CS Facilities Summary

The Department of Computer Science maintains computing facilities to satisfy a variety of research and educational needs. The Computer Science Laboratory staff of four installs, tests, and maintains computing facilities for the Department. Faculty, graduate student, and staff offices are equipped with workstations and/or personal computers.

The Department also maintains research systems to meet the specific needs of individual research projects. This infrastructure, described below, is funded by donations from industry partners and equipment grants from the National Science Foundation CISE Directorate, including a 1999 Academic Research Instrumentation grant.

The Department, through the University, is linked to the 10 gigabit-per-second network ring established by the North Carolina Research & Education Network (NCREN). NCREN joins universities and industry in the Research Triangle Park area, with connections to commodity Internet providers, Internet2, and National LambdaRail.

The following sections summarize the Department's core facilities and the supplemental computing facilities available at Duke University and Research Triangle Park.

DEPARTMENT INFRASTRUCTURE

The Department of Computer Science occupies quarters in the Leon Levine Science Research Center (LSRC), an $80 million facility constructed to encourage innovative collaborations among the sciences, engineering, environmental studies, and medicine. The Department's space is designed for flexibility and to allow easy expansion of its state-of-the-art communications infrastructure. There is ample space for laboratories and workrooms to support research and group interactions.

The Computer Science Laboratory provides the basic computing infrastructure for the Department. The lab staff is responsible for installing and maintaining all Department computers and providing all required network services within the Department. In addition, the staff works with faculty and students to accommodate special research and academic needs not covered by the general computing infrastructure.

Workstations - The Department maintains a variety of desktop and server computing resources. All faculty and graduate student desktops are equipped with Dell Linux workstations with advanced graphics displays and at least 8 GB of RAM. In addition, public computer kiosks are strategically located around the Department to provide student and guest access to Department resources.

File Servers - The collective disk space on the servers totals well over 30 TB; this space is available to all machines within the Department and does not include hundreds of gigabytes of local scratch space on each machine. Primary user file space is provided with a pair of mirrored Network Appliance FAS3240 servers with 21 TB of disk space, connected to a gigabit ethernet feed for fast access. This space, along with many other project spaces, is backed up on a nightly basis.
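
As an illustration of how this shared space looks from an individual machine, the short Python sketch below reports capacity on a few mount points. The paths are hypothetical placeholders, not documented Department mounts.

    import shutil

    # Hypothetical mount points standing in for the NetApp-served user
    # space and local scratch; the real Department paths may differ.
    mounts = ["/home", "/var/tmp"]

    for path in mounts:
        usage = shutil.disk_usage(path)  # (total, used, free) in bytes
        print(f"{path}: {usage.free / 1e9:,.1f} GB free"
              f" of {usage.total / 1e9:,.1f} GB")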

Networking - The Department operates a 10 Gb backbone network, interconnected via fiber to gigabit network switches. Fiber-optic links provide dual 10 Gb connections to the main campus network, and the University's external link provides a full 10 Gb connection to Internet2. All intra-department connections are at least 100 Mb Ethernet, with 10 Gb Ethernet connections for many of the heavily used servers. The Department has complete 802.11a/b/g/n wireless coverage to support mobile computing.

Network Services - The Department provides many of its own services, independent of the University, in order to provide a flexible infrastructure that can respond rapidly to the research requirements of the faculty and students. These services include Domain Name Service (DNS) for the 14 subnets the Department maintains, intra-department routing, and independent email, printing, and web services. Dynamic Host Configuration Protocol (DHCP) service distributes IP addresses, allowing faculty and students to operate their own personal computers on the Department network.
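
For instance, any machine on a Department subnet can resolve names through this DNS service using the standard system resolver; the Python sketch below shows a forward and a reverse lookup. The host name is a hypothetical example, not a documented Department host.

    import socket

    host = "login.cs.duke.edu"  # hypothetical example host name

    addr = socket.gethostbyname(host)        # forward (A-record) lookup
    name, _, _ = socket.gethostbyaddr(addr)  # reverse (PTR) lookup
    print(f"{host} -> {addr} -> {name}")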

In addition to these general servers and services, the Department maintains a number of notable research resources, described in the following section.

RESEARCH FACILITIES

In addition to the common infrastructure, the Department maintains resources for specific research applications. Below are brief descriptions of the Department's principal laboratory facilities (or collaboratories, as we refer to them).

Systems and Architecture Laboratories - The Systems/Architecture group maintains server clusters and storage systems for experimental research, including interdisciplinary projects involving researchers in massive-data algorithms and the computational sciences. As of 2007, the cluster comprised roughly 500 rack-mounted servers with gigabit ethernet links and 20 terabytes of colocated network storage arrays. This infrastructure is used for research projects in computer architecture, distributed systems, operating systems, networking and network storage, utility computing, large-scale Internet services, data-intensive systems, and databases. It is funded in part by donations from industry partners including IBM, Intel, Network Appliance, Hewlett-Packard, Cisco, and Myricom.

Applied Geometry Collaboratory - This laboratory supports algorithmic and software development and high-end visualization for problems in systems biology, Geographic Information Systems (GIS), terrain modeling, ecological forecasting, and spatial and temporal databases. The lab houses a front-projected stereographic visual display with four free-field speakers. The room supports real-time 3D human-computer interaction via a long-range Polhemus FASTRAK system that tracks the position and orientation of a person's head and hand. The graphics and audio display can be controlled by an SGI Onyx2, an SGI Octane, or a PC. The Onyx2 has three InfiniteReality2 graphics pipelines, two audio rendering boards (each with 8-channel ADAT digital output), 6 GB of memory, ten 250 MHz R10000 CPUs, and 128 GB of disk. To support large-scale computing on massive-dataset problems, the laboratory contains six workstations/servers (Intel, Sun, Alpha), each with several gigabytes of memory and at least 180 GB of external (disk) storage. We have also recently added two Dell XPS 720 workstations with 768 MB NVIDIA GeForce 8800 Ultra GPUs, whose performance we exploit for general-purpose scientific computing. These machines are also connected to a large disk array, as well as to the systems/architecture clusters through a fast, dedicated link. In addition, the lab houses a 3D Thermojet solid-object printer and various sophisticated workstations.
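
As a rough illustration of the kind of GPU offload involved, the sketch below times a matrix multiply on the device. It assumes a CUDA-capable GPU and the CuPy library, which are illustrative choices on our part, not the lab's documented toolchain.

    import cupy as cp  # assumes a CUDA-capable GPU and CuPy installed

    n = 4096
    a = cp.random.random((n, n), dtype=cp.float32)
    b = cp.random.random((n, n), dtype=cp.float32)

    start = cp.cuda.Event()
    stop = cp.cuda.Event()
    start.record()
    c = a @ b          # the multiply runs on the GPU
    stop.record()
    stop.synchronize()  # wait for the kernel before reading the timer
    print(f"{n}x{n} matmul: {cp.cuda.get_elapsed_time(start, stop):.1f} ms")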

AI, Robotics, and Computer Vision Collaboratory - In addition to a variety of Sun UltraSparc and Dell PC workstations, this facility is home to advanced robotics and computer vision research equipment. The lab includes a high-quality Riegl LMS Z390i laser range finder for determining shapes and positions. A six-camera Motion Analysis Hawk motion capture system allows detailed recording of the motions of people or vehicles. This 200 frames-per-second system is synchronized, through a SMPTE genlock circuit and time-code generator, to three high-definition, 30 frames-per-second Canon XL-H2 video cameras with microphones. A full suite of digital still and video cameras, three stereo vision systems, a photography-grade lighting system, and background curtains allow standard A/V data to be recorded simultaneously with the motion and shape measurements. An iRobot ATRV-Junior robot vehicle, donated by Science Applications International Corporation (SAIC), is used in both research and education programs. It features a laser range finder, sonar sensing, stereo vision, GPS, voice recognition, and a series of other attachments that make it a versatile research vehicle.

Donald Laboratory - The lab contains three SGI workstations, nine dual-boot Linux/Windows dual-processor Pentium workstations, assorted other Unix workstations, and several high-end Wintel workstations, plus computational servers with large disks, RAID, and backup systems. Servers and clusters are maintained in a central machine room in the LSRC. We have access to experimental facilities for structural and molecular biology, including NMR, X-ray diffraction, CD spectroscopy, HPLC, column chromatography, gel electrophoresis, mass spectrometry (GC-MS, LC-MS, and MS-MS) of biomolecules, and PCR.

The lab is fully equipped to overexpress and purify proteins and protein complexes, and to perform activity assays. We have a -20°C freezer and a Revco Ultima II -80°C freezer, plus a VWR GDM-47 chromatography refrigerator. Assorted electrophoresis equipment is available for both nucleic acid and protein gels. A New Brunswick Excella E25R refrigerated shaking incubator, a Mettler XS64 analytical balance, a top-loading Mettler PB8000-S/FACT balance, a pH meter, and a Bio-Rad PCR thermocycler are also available in the Donald lab. The lab also has a chemical fume hood, a laminar flow hood, an Eppendorf 5424 centrifuge, a Hermle Z 300 K refrigerated centrifuge, and a VWR water bath (14 L, ambient to 100°C). Adjacent to our laboratory in the French Center are numerous shared bacterial incubators (including a New Brunswick Innova 44 and a New Brunswick Innova 4300), cell-harvesting centrifuges (Beckman Allegra 6KR, Sorvall RC-5B), both a French press and a sonicator for lysing bacteria, and centrifuges for clearing cell lysates (Sorvall RC-5B, Beckman Avanti J-25). Our lab also has access to an Amersham Biosciences AKTA FPLC, a Bio-Rad XR gel documentation system, and a Hydro water deionizer. Available equipment in the Biochemistry Department (LSRC) includes a second Amersham Biosciences AKTA FPLC, both a Bio Logic QFM-400 rapid-quench and an Applied Photophysics SX.18MV stopped-flow reaction analyzer, and assorted spectrophotometers for UV/Vis absorbance (Agilent HP 8453), fluorescence detection (Perkin Elmer LS 50B), and circular dichroism (Aviv Biomedical 202). Dr. Donald's wet lab also contains three Windows/Linux PCs.

In addition, Dr. Donald's laboratory has a Dell PowerEdge 1950 cluster with 16 nodes (32 dual-core Xeon 5150 processors). Each node has 4 GB of RAM, an 80 GB SATA drive, and dual embedded Gigabit Ethernet NICs.

UNIVERSITY FACILITIES

The backbone and other university-level infrastructure needs of the University are maintained by a central IT organization, the Office of Information Technology (OIT). OIT is responsible for the operation, testing, support, and engineering of the campus-wide data, voice, and video communications infrastructure. This includes the design and subsequent implementation of structured wiring and switching systems; enterprise-level servers, including Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) servers; routing systems; and wireless systems.

Duke University's high-speed backbone, DukeNet, provides researchers, staff, faculty, and students with a robust, redundant conduit for data. The backbone consists of Cisco routers with redundant 10 Gb Ethernet links. Most buildings on campus are wired with Category 5 cabling and supply 100 Mb/1 Gb Ethernet ports to each desktop. Servers and high-speed research workstations can be provided with 1 Gb or 10 Gb Ethernet ports as needed. Building networks connect to the backbone via dual 10 Gb Ethernet uplinks.

The Duke Shared Cluster Resource (DSCR) is a shared computational cluster of over 600 machines (1,152 processors) to which we have access. Processor speeds range from 2.8 GHz to 3.6 GHz. While the cluster is shared by the entire Duke community, it provides a useful resource for computational science.
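
A typical use of such a cluster is an embarrassingly parallel parameter sweep. The Python sketch below shows the single-node version of such a job; on the DSCR itself, many such jobs would be submitted through the shared batch scheduler rather than run interactively, and the workload shown is a made-up stand-in.

    from multiprocessing import Pool
    import random

    def trial(seed: int) -> float:
        """Stand-in for one computational-science simulation run."""
        rng = random.Random(seed)
        return sum(rng.random() for _ in range(100_000)) / 100_000

    if __name__ == "__main__":
        with Pool() as pool:  # one worker per local core
            results = pool.map(trial, range(64))
        print(f"mean over {len(results)} trials:"
              f" {sum(results) / len(results):.4f}")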

The Duke University High Resolution Nuclear Magnetic Resonance (NMR) Spectroscopy and X-ray Crystallography Shared Resource provides state-of-the-art instrumentation and methods to our laboratory. The Shared Resource operates and supports advanced instrumentation and technology, with a professional staff of three experienced Ph.D. scientists and an instrument specialist/engineer. Major equipment currently installed in this facility includes two fully equipped X-ray crystallography systems with R-Axis IV and R-Axis II area detectors, focusing mirrors, and liquid nitrogen cooling systems. NMR equipment includes Varian Inova spectrometers operating at 800 MHz, 600 MHz, 500 MHz (2), 400 MHz, and 300 MHz (2) with heteronuclear multi-channel capabilities. Both the 600 and 800 MHz NMR spectrometers are fully configured with four channels and 1H, 13C, and 15N triple-resonance cold probes with Z-axis gradients. All spectrometers are capable of deuterium decoupling. A full complement of homo- and heteronuclear experiments has been implemented on these spectrometers and is routinely used to study biological macromolecules. Users of this facility also have access to the Southeast Regional Collaborative Access Team beamlines at the APS synchrotron X-ray source at Argonne National Laboratory and to the 900 MHz NMR spectrometer at the University of Georgia. The Shared Resource is located in a custom-designed wing of the LSRC, in close proximity to the Computer Science and Biochemistry Departments.

The University also maintains a campus-wide CIFS file system infrastructure with terabytes of storage; a campus-wide electronic mail infrastructure supporting over 35,000 mailboxes and handling in excess of a million messages a day; and server-based file, authentication, directory, web, name, and other network services.

EXTERNAL COMPUTING FACILITIES

The campus connects to the commodity Internet, Internet2, and the National LambdaRail through NCREN, which currently provides OC-192 data rates (10 gigabits per second) to member institutions.

NCREN members share three commodity Internet gateways, plus one gateway each to Internet2 and NLR. Current service levels and providers are as follows (the arithmetic behind the OC-n rates is sketched just after the list):

Sprint : OC-12 to commodity Internet (622 Mbps)
UUNET : OC-12 to commodity Internet (622 Mbps)
Qwest : OC-12 to commodity Internet (622 Mbps)
National LambdaRail : OC-192 to NLR (10 Gbps)
Abilene : OC-192 to Internet2 (10 Gbps)
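
The OC-n figures above follow directly from the SONET hierarchy, in which each level is a multiple of the 51.84 Mbps OC-1 rate; the 622 Mbps and 10 Gbps figures quoted are rounded values. A quick check in Python:

    # SONET line rates are multiples of the OC-1 rate of 51.84 Mbps.
    OC1_MBPS = 51.84

    for n in (12, 192):
        mbps = n * OC1_MBPS
        print(f"OC-{n}: {mbps:,.2f} Mbps = {mbps / 1000:.2f} Gbps")
    # OC-12:  622.08 Mbps  (quoted as 622 Mbps)
    # OC-192: 9,953.28 Mbps (quoted as 10 Gbps)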

Routing decisions are made at NCREN, eliminating the need for researchers to determine routes; all traffic destined for an Internet2 member institution is automatically routed over the Abilene link. For detailed information on NCREN, visit https://www.mcnc.org/collateral/north-carolina-research-education-network.html.
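
Conceptually, this is a longest-prefix-match decision; the Python sketch below mimics it with a toy routing table. The prefixes are reserved documentation addresses chosen for illustration, not real NCREN routes.

    import ipaddress

    # Toy table: a default route plus two hypothetical member prefixes.
    routes = {
        ipaddress.ip_network("0.0.0.0/0"): "commodity Internet gateway",
        ipaddress.ip_network("198.51.100.0/24"): "Abilene (Internet2)",
        ipaddress.ip_network("203.0.113.0/24"): "National LambdaRail",
    }

    def next_hop(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        # Longest-prefix match: the most specific matching route wins.
        best = max((net for net in routes if addr in net),
                   key=lambda net: net.prefixlen)
        return routes[best]

    print(next_hop("198.51.100.7"))  # -> Abilene (Internet2)
    print(next_hop("192.0.2.1"))     # -> commodity Internet gateway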

Revised September 2013