For projects involving high-performance computing and graphics, the laboratory contains workstations equipped with cutting-edge 3D accelerator cards running under the Linux, OS X, and Windows operating systems. In addition, a number of small-scale prototyping systems are available, including a 4-node PS3 cluster, a 2-node NVIDIA Tesla system, and a 2-node ATI FireStream system. Intel Larrabee nodes will also be available to the project.
Computational resources are provided by several clusters. The Solver cluster includes a total of 80 CPUs (76 Model 852 and 4 Model 248), 616GB of RAM and a total of 20.2TB of storage space (14.4TB shared centralized Fibre Channel storage and 5.8TB of local SCSI Ultra320 storage). Three dual processor Xeon file servers, each with multi-terabyte RAID systems and connected via dual gigabit interfaces, provide networked storage for research projects.
The Stanford Parallel Processing Lab (PPL) has experimental computing systems that include: (1) a large Niagara II-based SMP with 64 cores, 256 threads, and 0.5TB of memory, which is interesting for understanding the scalability of shared-memory algorithms; and (2) an Opteron SMP system with a cache-coherent interface to an FPGA, which is useful for exploring the low-latency coupling of domain-specific accelerators to general-purpose processors.
The Stanford Army High Performance Computing Research Center (AHPCRC) cluster has 216 compute nodes and 7 support nodes. Each compute node has a total of 16GB of memory and consists of two quad-core Intel Xeon E5345 processors running at 2.33 GHz inside a Dell PowerEdge 1950. The system has a high-speed DDR InfiniBand interconnect from Cisco. The attached Lustre file system provides 100TB of high-speed storage via Data Direct Networks S2A 9550 controllers and SAF4248 disk arrays. GPU processing is available using four NVIDIA Tesla S1070 high-performance computing systems.
Stanford's High Performance Computing Center gives us access to a 560-node cluster. Each node contains two Intel Westmere processors, for 12 cores per node (24 threads with hyperthreading enabled), running at 2.67 GHz with 36GB of RAM; the cluster provides 400TB of disk and an InfiniBand interconnect.
To facilitate research in shape acquisition and computational imaging, we have a number of special-purpose input and output devices. These include a laser triangulation range scanner custom-built by Cyberware for scanning large statues, a smaller Cyberware Model 15 laser triangulation rangefinder, a 3D Scanners Ltd. handheld laser triangulation rangefinder, a Cyra time-of-flight laser rangefinder, two Faro digitizing articulated arms (Bronze and Silver Series), a unique two-armed spherical gonioreflectometer, and a reconfigurable array of 128 custom CMOS-based cameras with supporting electronics. We share motion-capture facilities with the Stanford Neuromuscular Biomechanics Lab. These include a Motion Analysis Corp. system using 8 EAGLE-4 4MP CCD cameras capable of full-frame capture at 200Hz and reduced-frame capture at up to 10kHz.
To support all our research projects, we have a variety of digital still and video cameras, several kinds of studio lighting rigs, and an optical bench well-stocked with optical "tinkertoys". A small video lab permits video to be played and edited across a number of analog and digital signal formats and media types. Finally, we can produce printed materials on a variety of small-to-large-format color and B&W printers and an Epson 9800 plotter.
The laboratory is managed by John Gerth. See Pat Hanrahan, Leo Guibas, Marc Levoy, or Ron Fedkiw for information about getting access to the lab.
The graphics laboratory was initially supported by an NSF CISE Research Infrastructure Grant entitled "High Performance Graphics and Imaging". Additional equipment support has been provided by Intel, NVIDIA, SGI, and Sony.