A Powerful Distributed Cyberinfrastructure to Support
Data-Intensive Scientific Research and Collaboration




Optical Network is Key to Next-Generation Research Cyberinfrastructure

At the TeraGrid ’08 Conference, UC San Diego’s Smarr Urges University Campuses
to Remove Network Bottlenecks for Supercomputer Users

San Diego, CA and Las Vegas, NV, June 11, 2008 -- The director of the California Institute for Telecommunications and Information Technology (Calit2), a partnership of UC San Diego and UC Irvine, said today that all the pieces are in place for a revolution in the usability of remote high performance computers to advance science across many disciplines. He urged early-adopter application scientists to drive the creation of end-to-end dedicated lightpaths connecting remote supercomputers to their labs, greatly enhancing their local capability to visually analyze the massive datasets generated by TeraGrid's terascale-to-petascale computers.

Calit2 Director Larry Smarr (at podium) delivers his talk to TeraGrid '08. Also pictured (l-r): John Towns, NCSA Leader for Persistent Infrastructure; Steve Meacham, NSF Program Director, Office of Cyberinfrastructure; and Jay Boisseau, Director, Texas Advanced Computing Center, UT Austin.

In a featured keynote today at the TeraGrid ’08 Conference being held in Las Vegas this week, Calit2 Director Larry Smarr said “the last ten years have established the state, regional, national, and global optical networks needed for this revolution, but the bottleneck is on the user’s campus.” However, as a result of research funded by the National Science Foundation (NSF), there is now a clear path forward to removing this last bottleneck.

This opens the possibility for end users of the NSF’s TeraGrid to begin to adopt these optical network technologies. 

[Smarr's slides from TeraGrid '08 in Las Vegas are available for download; the Calit2 director's other recent presentations are also available from the site.]

The TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities from eleven partner sites around the country.


Smarr (left) with NCMIR director Mark Ellisman in front of an early OptIPortal at Calit2 on the UCSD campus.

"The NSF-funded OptIPuter project has been exploring for six years how user-controlled, wide-area, high-bandwidth lightpaths – termed lambdas – on fiber optics can provide direct uncongested access to global data repositories, scientific instruments and high performance computational resources from the researchers’ Linux clusters in their campus laboratories," said Smarr. “This research is now being rapidly adopted because universities are beginning to acquire lambda access through state or regional optical networks interconnected with the National LambdaRail, the Internet2 Dynamic Circuit Network, and the Global Lambda Integrated Facility."

The OptIPuter project, led by Smarr, is not designed to scale to millions of sites like the normal shared Internet, but to create private networks with much higher levels of data volume, accuracy, and timeliness for a few data-intensive research and education sites. Led by Calit2, the San Diego Supercomputer Center (SDSC), and the University of Illinois at Chicago’s Electronic Visualization Laboratory (EVL), OptIPuter ties together the efforts of researchers from over a dozen campuses.

Calit2's Smarr explains how OptIPuter technologies can help link scientists' labs to TeraGrid resources.

The OptIPuter uses dedicated lightpaths to form end-to-end uncongested 1- or 10-Gbps Internet protocol (IP) networks. The OptIPuter’s dedicated network infrastructure – and supporting software – has a number of significant advantages over shared Internet connections, including high bandwidth, controlled performance (no jitter), lower cost per unit bandwidth, and security.  “The OptIPuter essentially completes the Grid program,” said Smarr. “In addition to allowing the end user to discover, reserve, and integrate remote computers, storage, and instruments, the OptIPuter enables the user to do the same for dedicated lambdas, creating a high-performance LambdaGrid.”

In his talk, Smarr described how the user-configurable OptIPuter global platform is already being used for research in collaborative work environments, digital cinema, biomedical instrumentation, and marine microbial metagenomics. He issued a challenge to the TeraGrid users to begin to adopt this technology to support remote use of the TeraGrid resources.

“OptIPuter technologies can enhance the ability of scientists to use remote high-performance computing resources from their local labs, particularly applications with persistent large data flows, real-time visualization and collaboration, and remote steering,” Smarr said.

Researchers at Calit2 in front of UCSD's HIPerSpace wall, a 220-million-pixel tiled OptIPortal display system.

A key OptIPuter technology, the OptIPortal, was prototyped by EVL and developed by Calit2 under the NSF-funded OptIPuter partnership. The OptIPortal is a networked, scalable, high-resolution tiled LCD display system driven by a PC graphics cluster. Designed for the user’s laboratory, each OptIPortal can be constructed with commodity commercial displays and processors. While most of the PC clusters run Linux, some run on Mac clusters (Calit2@UC Irvine and UCSD’s Scripps Institution of Oceanography) or on Windows clusters (UCSD’s National Center for Microscopy and Imaging Research).

“OptIPortals are the appropriate termination device for 10Gbps lambdas, allowing the end user to choose the right amount of local storage, compute, and graphics capacity needed for their application,” said Smarr. “In addition, the tiled walls provide the scalable pixel real estate necessary to visually analyze the complexity of supercomputing runs.”

The OptIPuter project prefers that OptIPortal clusters run SDSC’s Rocks, an open-source Linux cluster distribution that enables end users to easily build computational clusters, grid endpoints, and visualization tiled-display walls. Rocks is developed under an NSF-funded SDCI project led by SDSC’s Philip Papadopoulos, who is also a co-principal investigator on the OptIPuter project. There are currently over 1,300 registered clusters running Rocks, providing a global and vibrant open-source software community. The Rocks “Rolls” provide a convenient method of distributing software innovations coming from community members.

The 200-million-pixel HIPerWall at Calit2 on the UC Irvine campus is now part of the OptIPortal Collaboratory.

OptIPortals range in size from four to 60 tiles, offering screen resolutions from 8 million pixels up to the nearly ¼-billion-pixel HIPerSpace wall – the highest-resolution display system in the world, located in the Calit2 building on the UCSD campus. OptIPortals need not be restricted to planar tiled walls, Smarr said, showing pictures of Calit2’s StarCAVE immersive environment, driven by 34 high-definition projectors, and a 60-LCD semi-cylindrical tiled-wall autostereo Varrier display. Both provide three-dimensional virtual reality, are driven by the same type of Linux clusters that drive the HIPerWall, and are connected at multiples of 10Gbps to the OptIPuter.
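The tile counts map directly onto the quoted resolutions. As a rough sketch (the per-panel resolutions are assumptions, not figures from the article: 1920×1080 tiles for small walls, 2560×1600 tiles for a HIPerSpace-class wall), the arithmetic checks out:

```python
# Back-of-the-envelope tile math; panel resolutions below are assumptions.
def wall_pixels(tiles, width, height):
    """Total pixel count of a tiled wall of `tiles` identical panels."""
    return tiles * width * height

small = wall_pixels(4, 1920, 1080)    # 4-tile OptIPortal with 1080p panels
large = wall_pixels(60, 2560, 1600)   # 60-tile wall with 2560x1600 panels

print(f"{small / 1e6:.1f} Mpixels")   # ~8 Mpixels, the low end quoted
print(f"{large / 1e6:.0f} Mpixels")   # ~246 Mpixels, "nearly 1/4 billion"
```

With 60 higher-resolution tiles the total lands at about 246 million pixels, consistent with the "nearly ¼-billion-pixel" description of HIPerSpace.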

To handle multi-gigabit video streams, OptIPuter researchers at EVL developed the Scalable Adaptive Graphics Environment (SAGE), specialized graphics middleware that supports collaborative scientific visualization environments with potentially hundreds of megapixels of contiguous display resolution. In collaborative scientific visualization, it is crucial to share high-resolution imagery as well as high-definition video among groups of collaborators at local or remote sites.

Calit2's Smarr in San Diego leads a "virtual high-five" with colleagues at the University of Melbourne during the dedication of the 'OzIPortal' (visible on screen).

SAGE enables the real-time streaming of extremely high-resolution content – such as ultra-high-resolution 2D and 3D computer graphics from remote rendering and compute clusters and storage devices, as well as high-definition video camera output – to scalable tiled display walls over high-speed networks. SAGE serves as a window manager, allowing users to move, resize, and overlap windows as easily as on standard desktop computers. SAGE also offers standard desktop collaboration tools, such as an image viewer, a video player, and desktop sharing, enabling participants to resize, pan, zoom, and move through the data.

In addition to SAGE, other windowing software environments have been developed by research groups that were not part of the original NSF proposal, including the Calit2 lab of UCSD Professor Falko Kuester, developer of CGLX (Cross-Platform Cluster Graphics Library). CGLX is a flexible, transparent OpenGL-based graphics framework for distributed high performance visualization systems such as OptIPortals. The framework was developed to enable OpenGL programs to be executed on visualization clusters such as high-resolution tiled display systems, and to maximize the achievable performance and resolution for OpenGL-based applications on such systems.

To overcome performance- and configuration-related challenges in networked display environments, CGLX launches and manages instances of an application on all rendering nodes through a lightweight, thread-based network communication layer. A GLUT-like (OpenGL Utility Toolkit) interface is exposed to the OptIPortal user, which intercepts and interprets OpenGL calls and provides a distributed, large-scale, multi-tile OpenGL context. CGLX provides distributed, parallelized rendering of OpenGL applications with access to all OpenGL extensions supported by the graphics hardware. OptIPortal applications using CGLX include a real-time viewer for gigapixel images and image collections, video playback, video streaming, and visualization of multi-dimensional models.

Although scalable visualization displays have been under development for over a decade, first as arrays of projectors, the use of commodity hardware and open-source software in the OptIPortal makes this visualization power affordable to individual researchers. The typical cost of an N-tiled wall is about the same as that of N/2 deskside PCs. As a result, adoption of OptIPortals has been rapid over the past two years. Besides the United States, there are OptIPortals installed in Australia, Taiwan, China, Japan, Korea, Canada, the UK, the Netherlands, Switzerland, the Czech Republic, and Russia, as well as at a number of corporations.

However, there has been a critical “missing link” blocking widespread adoption of the OptIPuter/OptIPortal metacomputer: few campuses have installed the optical fiber paths needed to connect from the regional optical network campus gateway to the end user. Smarr quoted NSF Director Arden Bement, who three years ago said prophetically: “Those massive conduits [e.g., NLR lambdas] are reduced to two-lane roads at most college and university campuses. Improving cyberinfrastructure will transform the capabilities of campus-based scientists.”

To make effective use of the 10Gbps lightpaths from the TeraGrid resources to the campus gateways, Smarr said, “the user’s campus must invest in the equivalent of city ‘data freeway’ systems of switched optical fibers connecting the campus gateway to specific buildings and inside the buildings to the user’s lab.”

A full-scale experiment of this vision is underway at UCSD with funds provided by the campus and an NSF-funded Major Research Instrumentation grant called Quartzite, which has SDSC’s Papadopoulos as PI and Calit2’s Smarr as one of the co-PIs. The Quartzite optical infrastructure includes a hybrid packet-circuit switched environment, interconnecting over 45 installed 10Gbps channels crisscrossing the UC San Diego campus, with 15 more planned by the end of this year. More than 400 endpoints are connected to Quartzite through access or direct connection to the core switch. Geographically, these endpoints are located in seven different buildings, including 17 laboratories within those buildings. Large projects (CAMERA, CineGrid) use Quartzite directly.

The Quartzite switching complex is able to switch packets, wavelengths, or entire fiber paths, allowing fast configuration, under software control, of the different types of network layouts and capabilities required by the end user. This year the optical complex will provide an aggregate bandwidth of ~½ Terabit/sec from dedicated lightpaths coming into a central, reconfigurable switching complex and from there connecting to UCSD researchers. The testbed also enables a broad set of “Green Cyberinfrastructure” research projects to be conducted on a campus scale. As a result, UCSD can experiment with one model of the “campus of the future,” from which robust solutions can be provided to other interested campuses.
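The quoted aggregate of ~½ Terabit/sec follows directly from the channel counts given for Quartzite, as a quick sanity check shows:

```python
# Aggregate Quartzite bandwidth from the channel counts in the text:
# 45 installed 10Gbps channels plus 15 more planned this year.
installed, planned = 45, 15
gbps_per_channel = 10

total_gbps = (installed + planned) * gbps_per_channel
print(total_gbps, "Gbps =", total_gbps / 1000, "Tbit/s")  # 600 Gbps = 0.6 Tbit/s
```

600 Gbps is the "~½ Terabit/sec" figure, rounded to the nearest half terabit.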

“Quartzite provides the ‘golden spike’ which allows completion of end-to-end 10Gbps lightpaths running from TeraGrid sites to the remote user’s lab,” said Smarr, adding: “Like the OptIPortal, Quartzite was designed using commercial technologies that can be easily installed on any campus.”

With this complete end-to-end OptIPuter now in hand, the stage is set for a wide variety of applications to be developed over this global high performance cyberinfrastructure.  “When we were conceptualizing the OptIPuter seven years ago, I always thought that remote supercomputer users would provide the killer applications,” said Smarr, the founding director in 1985 of the National Center for Supercomputing Applications (NCSA).  “TeraGrid users are located in research campuses across the nation, but they all share the characteristic that they need to carry out interactive visual analysis of massive datasets generated by a remote supercomputer.”

Smarr showed examples of DoE, NASA, and NSF supercomputer centers that have large tiled projector walls on site for visual analysis of such complex results. “The time has come to take that capability out to end users in their labs with local OptIPortals connected to the supercomputer center using the OptIPuter,” said Smarr. “I believe that we will see early adopters step forward in the next year to set up prototypes of this cyberarchitecture.”

A cube of a simulated universe, two billion light years on a side, revealing the large-scale structure of galaxy clusters. The Enzo adaptive-mesh simulation achieves a dynamic range of 65,000 in spatial scale. The simulation ran for over one million CPU-hours on Lawrence Livermore National Laboratory's Thunder supercomputer. The data analysis and visualization were run on SDSC’s Data Star and NCSA’s Cobalt.
Science credit: Michael Norman (UCSD)
Image credit: M. Hall (NCSA)

Smarr described the work of one such early adopter, Michael Norman, UCSD Professor of Physics, recently named SDSC’s Chief Scientific Officer. Norman is designing an OptIPortal in the new SDSC building, to be dedicated in October 2008, for use by his Laboratory for Computational Astrophysics. It will be connected over the UCSD optical complex (described above) to the TeraGrid 10Gbps backbone and National LambdaRail and used to visualize results from his cosmology simulations on the NSF’s Petascale Track II machines at the Texas Advanced Computing Center and at the University of Tennessee/Oak Ridge National Laboratory’s National Institute for Computational Sciences. Norman plans to stage and analyze the terabytes of data generated at SDSC, using the campus optical fiber network to move the data into specialized OptIPortals at Calit2, such as the StarCAVE and HIPerSpace wall.

To make this OptIPuter distributed analysis more efficient, EVL has developed LambdaRAM, which can prefetch data from disk storage and temporarily store it in the cluster’s Random Access Memory (RAM), masking the substantial disk I/O latency, and then move the data from this “staging” computer to the computer running the simulation. Smarr showed how NASA Goddard Space Flight Center in Maryland uses the OptIPuter and LambdaRAM to optimize the use of NLR for severe storm and hurricane forecasts carried out at the Project Columbia supercomputer at NASA Ames in Mountain View, California, and to zoom and pan interactively through ultra-high-resolution images on local OptIPortals at Goddard. EVL modified LambdaRAM so that it would work seamlessly with legacy applications to locally access large data files generated by the remote supercomputer.
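As an illustrative sketch only (the class and method names below are hypothetical, not EVL's actual LambdaRAM interface), the read-ahead idea described above can be modeled as a small prefetching block cache: blocks are fetched into RAM before the reader asks for them, so sequential reads do not stall on slow storage.

```python
import collections

class PrefetchCache:
    """Toy model of LambdaRAM-style prefetching (hypothetical API).
    Blocks are fetched ahead of the reader and kept in RAM, masking
    the latency of the slow backing store."""

    def __init__(self, fetch_block, lookahead=4, capacity=64):
        self.fetch_block = fetch_block          # slow fetch (disk/network)
        self.lookahead = lookahead              # blocks to prefetch ahead
        self.capacity = capacity                # max blocks held in RAM
        self.cache = collections.OrderedDict()  # block_id -> data, LRU order

    def read(self, block_id):
        # Pull the requested block plus the next `lookahead` blocks into RAM.
        for b in range(block_id, block_id + self.lookahead + 1):
            if b not in self.cache:
                self.cache[b] = self.fetch_block(b)
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)  # evict least recently used
        self.cache.move_to_end(block_id)
        return self.cache[block_id]

# Demo: count how often the slow backing store is actually hit.
fetches = []
cache = PrefetchCache(lambda b: fetches.append(b) or f"block-{b}")
cache.read(0)        # fetches blocks 0..4 in one go
hit = cache.read(1)  # block 1 already in RAM; only block 5 is newly fetched
```

After the two reads, only six fetches have gone to the backing store; the second read is served from RAM, which is the latency-masking effect the article describes.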

Finally, Smarr described how, with the integration of high definition and digital cinema video streams, which easily fit inside a 10Gbps lightpath, the OptIPuter architecture is rapidly creating an OptIPlanet Collaboratory in which multiple scientists can analyze a complex dataset while seeing and talking to each other as if they were physically in the same room. Smarr showed photos of “telepresence” sessions in January and May 2008 where this was demonstrated on a global basis between Calit2 at UC San Diego and the 100-Megapixel ‘OzIPortal,’ constructed earlier this year at the University of Melbourne in Australia, connected over a transpacific gigabit lightpath on Australia's Academic and Research Network (AARNet). “Petascale problems will require geographically distributed multidisciplinary teams analyzing enormous data sets—a perfect application of the OptIPlanet Collaboratory,” said Smarr.

In conclusion, Smarr said, “After a decade of research carried out at dozens of institutions, we are seeing the OptIPuter take off on a global basis. I look forward to working with many of the TeraGrid ‘08 participants as they become early adopters of this innovative, high performance cyberinfrastructure—rebalancing the local analysis and network connectivity with the awesome growth NSF has made possible in the emerging petascale computers.”

In addition to Smarr and Papadopoulos, co-principal investigators on the OptIPuter initiative include Calit2’s Thomas DeFanti; Jason Leigh, from the University of Illinois at Chicago; and Mark Ellisman, from UC San Diego. The project manager is Maxine Brown, from the University of Illinois at Chicago. Andrew Chien, now Vice President of Research at Intel, served as the system software architect while he was at UCSD.

The NSF-funded TeraGrid links compute resources among 11 partner sites across the U.S. It currently has a combined compute capability approaching one petaflop (10^15 calculations per second), roughly equal to the computing power of about 200,000 typical laptops.
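Working backward from those figures, the comparison implies roughly 5 GFLOPS per typical 2008-era laptop; the per-laptop number is an inference from the arithmetic, not a figure stated in the text:

```python
# One petaflop spread over 200,000 laptops implies the per-laptop rate
# assumed by the comparison (an inference, not stated in the article).
petaflop = 1e15   # calculations per second
laptops = 200_000

per_laptop = petaflop / laptops
print(per_laptop / 1e9, "GFLOPS per laptop")  # 5.0
```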

For details on the OptIPuter project, see www.optiputer.net, which includes 250 reports and publications.

Related Links

Presentation by Larry Smarr to TeraGrid '08
California Institute for Telecommunications and Information Technology (Calit2)
San Diego Supercomputer Center (SDSC)
SDSC Rocks Project
National LambdaRail
Internet2 Dynamic Circuit Network 
National Science Foundation
Cross-Platform Cluster Graphics Library

Media Contacts

Doug Ramsey, Calit2 Communications, 858 822-5825, or dramsey@ucsd.edu
Warren R. Froelich, SDSC Communications, 858 822-3622 or froelich@sdsc.edu
Jan Zverina, SDSC Communications, 858 534-5111 or jzverina@sdsc.edu 
Laura Wolf, University of Illinois at Chicago, Electronic Visualization Laboratory, 312 996-3002 or laura@evl.uic.edu