subject: NCSA's Smarr touts optical networks
posted: Mon, 07 Feb 2005 12:24:09 -0000


http://www.tgc.com/hpcwire/hpcwireWWW/04/1111/108797.html

LUMINARY SMARR TOUTS OPTICAL NETWORKS AS NATION'S FUTURE
by Tim Curns, Editor

Larry Smarr, a visionary and pioneer in the high-performance
computing and Grid networking industries, found some time to discuss
the importance of dedicated optical networks with HPCwire. Smarr
emphasizes the importance of networking to U.S. competitiveness and
elaborates on his current projects and activities.

HPCwire: Nice to see you again, Larry. I'll begin with the same
question I started with last year: What are your impressions of
SC2004 so far?

Larry Smarr: Well, I think that the long-term trend that I see here
is a broadening out of the infrastructure at this conference so that
we no longer just focus on the supercomputer itself, but the
necessary infrastructure to connect that with the end user. This
includes storage -- we have StorCloud this year, the network -- we
had Tom West giving a keynote on the National LambdaRail,
visualization -- we have tiled displays, new advancements all over the
floor...and then the software and middleware that ties it all
together. So I think that's very healthy for the conference. It
actually is much more realistic about what goes on when you install
one of these immense data generators called a supercomputer in the
real world.

HPCwire: What do you make of some conference-goers' assertions that
supercomputing is becoming more mainstream?

LS: I don't know if mainstream is what I would call it. I still think
this conference is the place to go for the most advanced, highest-
performance computing, storage and networking of any show. If you
look at the development of the field, the Top500 ten years ago was
probably 50% vector systems like Cray Research -- which, by anybody's
standards, were boutique -- tiny installed base, a few hundred in the
world. Whereas now, the Top500 I believe is close to half IBM Linux
clusters. So given the vast number of Linux clusters in labs and
campus departments, the pyramid view we used to have in which
supercomputers were just a scalable extension of commodity end-user
systems would say that supercomputing has become more mainstream --
that is, it is more connected to the broader installed base. If you
go back ten years ago, if you were writing software for a Cray vector
processor, you had to amortize the development cost over the
installed base, which, as I said, measured in the hundreds. That's a
giant dollar figure per installed system. If your processor is an IA-32
or an IA-64 or an Opteron, you have an installed base which is many
orders of magnitude larger than that. So you're amortizing it over a
vastly broader installed base, so it's much more affordable. In that
sense, the architecture has become more mainstream.
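
To put rough numbers on that amortization argument, here is a small back-of-envelope sketch in Python; the development cost and installed-base sizes are illustrative assumptions, not figures from the interview.

    # Back-of-envelope amortization sketch. The dollar figure and installed-base
    # sizes are illustrative assumptions, not numbers from the interview.
    DEV_COST = 10_000_000  # hypothetical cost to develop and port one software package

    platforms = {
        "vector supercomputer (boutique)": 300,        # a few hundred systems worldwide
        "commodity x86/Opteron cluster":   3_000_000,  # orders of magnitude more machines
    }

    for name, installed_base in platforms.items():
        per_system = DEV_COST / installed_base
        print(f"{name:35s} ${per_system:>12,.2f} of development cost per system")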

HPCwire: I've had a lot of people asking me to define the OptIPuter
project. Could you clear that up for everyone? What are you hoping to
accomplish with this project?

LS: One thing that you see this year is the OptIPuter project very
broadly represented by OptIPuter partners all across the conference
floor. I think that's simply because the OptIPuter project was
organized around an emerging new component of the digital
infrastructure that required adjustment of both architectural notions
and middleware. That emerging infrastructure element was the subject
of this year's Supercomputing keynote -- and that is dedicated
optical fibers, or dedicated optical wavelengths (lambdas) on those
fibers. That's in contrast to the best effort shared Internet, which
has heretofore been the ubiquitous medium of interconnection of
computing, storage and visualization for our community. Now the
emergence of dedicated optical fibers on a, say, state-wide basis is
over 5 years old -- it goes back at least to the work that NCSA,
Argonne, and the Electronic Visualization Lab did where we developed
I-WIRE in Illinois, in which the state purchased fiber which was
then dedicated to linking together the research institutes in
Illinois. This was followed shortly by I-LIGHT in Indiana. And today,
according to Steve Corbato, chief technology officer of Internet2,
over two dozen state or regionally owned and operated optical fiber
networks exist.

Dark fiber by itself is just what it says -- it's dark, it's useless.
So first you have to figure out, if I don't have a shared internet,
if I have more like an optical circuit between my lab cluster on
campus, and a remote repository or remote supercomputer, how am I
going to handle the processing of data on that fiber? The OptIPuter
project assumes the use of Internet Protocol over lambdas, or
individual wavelengths. So you may have routers or you may have
passive optical switches like those from Glimmerglass and Calient, which
you see here on the floor and which are actually part of SCinet, perhaps for
the first time this year.

Then you have to say, well if you're going to have the Grid to use as
middleware for your distributed computing environment, how does the
Grid stack -- the traditional layers of middleware software -- how
does that change if, instead of the best-effort shared Internet at the
bottom, at the physical layer, you have dedicated optical paths? That
is what the OptIPuter project is researching over 5
years. So that means you have to have a group that is looking at
issues in inter- and intra-domain optical signaling, which says "use
this fiber from point A to point B and then this one from point B to
point C," and discovers that there is an available fiber or lambda
sequence. It reserves it, then sets it up for you, the user, as a live
circuit with the appropriate switching or routing along the way.
That's the analogy to what Globus does for discovering, reserving,
and then executing computing or storage.
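
To make that discover-reserve-set-up sequence concrete, here is a minimal sketch in Python over a hypothetical four-site topology; the sites, wavelengths, and functions are invented for illustration and are not the OptIPuter's actual signaling interfaces.

    from collections import deque

    # Hypothetical topology: each fiber segment lists the lambdas (wavelengths)
    # still free on it. Sites and wavelengths are invented for illustration.
    topology = {
        ("A", "B"): {"lambda-1", "lambda-2"},
        ("B", "C"): {"lambda-2", "lambda-3"},
        ("A", "D"): {"lambda-1"},
        ("D", "C"): set(),               # no free wavelengths on this segment
    }

    def neighbors(site):
        """Yield (next_site, segment) pairs reachable over segments with free lambdas."""
        for (u, v), free in topology.items():
            if free:
                if u == site:
                    yield v, (u, v)
                elif v == site:
                    yield u, (u, v)

    def discover_path(src, dst):
        """Breadth-first search for a sequence of fiber segments with free lambdas."""
        queue, seen = deque([(src, [])]), {src}
        while queue:
            site, path = queue.popleft()
            if site == dst:
                return path
            for nxt, segment in neighbors(site):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [segment]))
        return None

    def reserve(path):
        """Claim one free lambda per segment, turning the path into a live circuit.
        (A toy model: a real circuit needs the same wavelength end to end, or
        switching/conversion at each hop.)"""
        return [(segment, topology[segment].pop()) for segment in path]

    path = discover_path("A", "C")                 # e.g. A -> B -> C
    print("circuit:", reserve(path) if path else "no lambda path available")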

In a sentence, the OptIPuter project is about completing the Grid. It
takes us from a situation in which you have shared, unpredictable
best effort internet at the base of the Grid stack, and replaces it
with jitter-free, fixed-latency, predictable optical
circuits. That's what we call going from the Grid to the Lambda Grid.

Instead of the traditional 50 Mbps of throughput that you get for
file transfer over today's shared Internet, you can get more like 95%
of a 1 Gbps or 10 Gbps link, which means, roughly speaking, a hundredfold
increase in the capacity of the network. More than that, the network
is now rock solid and is not subject to the Internet weather, continuous
jitter, and variable latency that you experience over the standard,
shared TCP/IP Internet.
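
As a rough worked example of those figures, the sketch below compares transfer times for a 100 GB dataset (an assumed size) over a 50 Mbps shared path versus 95% of a 1 Gbps or 10 Gbps lambda.

    # Rough check of the speedup figures quoted above, for an assumed 100 GB dataset.
    DATASET_BITS = 100e9 * 8   # 100 gigabytes expressed in bits

    links_bps = {
        "shared Internet (~50 Mbps achieved)": 50e6,
        "dedicated 1 Gbps lambda at 95%":      0.95 * 1e9,
        "dedicated 10 Gbps lambda at 95%":     0.95 * 10e9,
    }

    baseline = DATASET_BITS / links_bps["shared Internet (~50 Mbps achieved)"]
    for name, rate in links_bps.items():
        seconds = DATASET_BITS / rate
        print(f"{name:38s} {seconds / 3600:6.2f} hours  ({baseline / seconds:5.1f}x)")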

HPCwire: Thanks for clearing that up a bit. While we're on the
subject of defining, there are various and multiple definitions of
Grid and Grid computing. I'd like to know how you would define it in
your own words?

LS: In 1988, I defined the term "meta-computing," which meant
electronically configuring the sub-pieces across the net that you
wanted to put together into a single, virtual computing element. So
it could be computing, storage, scientific instrument, visualization,
and it could include humans and collaboration. You draw that
electronic boundary around those things, and then you execute that
thing as if it were a single computer. The Grid is effectively the
set of software that can create a meta-computer out of the vast set
of connected resources that exist on the net.

HPCwire: Let's move to topics that personally involve you. Do you
have plans for the new Cal-IT^2 headquarters?

LS: My new institute, the California Institute for Telecommunications
and Information Technology, is going to be opening two buildings in
the next six months. One at the University of California at Irvine
will be dedicated November 19. The other one at UCSD, we'll probably
move into it in April 2005. These buildings are both very
interesting, they have a mix of facilities that may not be replicated
anywhere else on Earth. They have MEMS and nano clean rooms, circuit
labs -- including system on chip integration labs -- they have radio
design labs, nanophotonics labs, and some of the most advanced
virtual reality and digital cinema spaces in the world. The building
itself is entirely allocated to projects. Projects that are supported
by federal grants, industrial partnerships, partnerships for
education, and community outreach, for example. So all of these are
things that come and go over time, but each one of which requires
space at the facilities to support virtual teams.

I think of most interest to this community is that we are building
vast amounts of high-performance communication into these
facilities. For instance, the UCSD building at Cal-IT^2 will have 140
fiber strands coming into it. When you consider that in 5 years, you
could easily support one hundred 10 Gb lambdas, or a terabit, per
fiber, that means something like 150 terabits per second, which is
comparable to the bandwidth into all hundred million homes in the
U.S., each one with a cable modem or DSL at a megabit per second.
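
The arithmetic behind those figures, using the numbers quoted above, works out roughly as follows.

    # Back-of-envelope check of the building bandwidth figures quoted above.
    strands           = 140      # fiber strands into the UCSD Cal-IT^2 building
    lambdas_per_fiber = 100      # projected number of 10 Gb lambdas per fiber
    gbps_per_lambda   = 10

    building_tbps = strands * lambdas_per_fiber * gbps_per_lambda / 1000
    print(f"building capacity: ~{building_tbps:.0f} Tbps")                # ~140 Tbps

    homes     = 100e6            # roughly 100 million U.S. homes
    mbps_each = 1                # cable modem or DSL at about a megabit per second
    homes_tbps = homes * mbps_each / 1e6
    print(f"all U.S. broadband homes combined: ~{homes_tbps:.0f} Tbps")   # ~100 Tbps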

So we're setting these buildings up to essentially have internet
wormholes connecting them all over the world, so that you can go into
a room and have a meeting with people wherever they are, not over
postage-stamp video teleconferencing, but with stereo high-definition
telepresence, joint visual exploration and manipulation of large data
objects, as well as access to any object, document or
person on the net. You can sort of think of it as AccessGrid on
steroids!

I gave the keynote here to SC Global and this was broadcast over the
AccessGrid. We had 47 remote sites, 5 continents, and my guess is
perhaps 20 different countries. When we asked for questions, there
were questions from all over the world in real-time. So it was a
shared experience on a global basis. This is the way we're going to
see our field go.

I'm impressed with the fact that if you look at things like the panel
we have Friday on the Global Lambda Integrated Facility, which is the
confederation of all the groups' research lambda nets across the
planet, this was born global as an organization and all the work that
goes forward is completely global. The development is global. The
sharing is global. Our community came from a world thirty years ago
in which only America built supercomputers, typically classified with
rigid export controls. In that sense, it was a very non-global
community. Today, if you look around the floor, it's clearly become a
global community.

HPCwire: Can you update us on the NSF-funded LOOKING (Laboratory for
Ocean Observatory Knowledge Integration Grid) project, for which you
were the co-principal investigator?

LS: We were very fortunate that we received the largest ITR award
this year. John Delaney, an eminent oceanographer at the University
of Washington, is the principal investigator (PI). Then you have
co-principal investigators like Ed Lazowska, the head of computer
science for many years at the University of Washington; Ron Johnson,
the CIO at the University of Washington and a pioneer in establishing
the Pacific Wave of National LambdaRail; myself from UCSD; and John
Orcutt, who is giving a Masterworks talk here on the applications of the
OptIPuter. He's also the president of the American Geophysical Union
and deputy director of Scripps Institution of Oceanography at UCSD.

LOOKING is prototyping the cyberinfrastructure that will enable a new
generation of ocean observatories. The National Science Foundation
has a major research equipment project called "ORION" which will be
about $250 million of fantastic equipment that will be used to read
out the state of the ocean at an unprecedented level of fidelity. One
of the most amazing aspects of that is the project "Neptune" that
Canada and the U.S. are working on off the northwest coast and in
Victoria, Canada. They will take the entire continental plate seaward
of that area and reposition telecommunication cables to go out to the
scientific instruments that will sit as much as several miles deep on
the ocean floor. The amazing thing is that these cables can carry as
much as ten thousand volts of electricity out. So
you can have robots that recharge, very bright lights, stereo HD
cameras that are remotely steerable, seismographs of all sorts,
chemical analysis, ocean weather stations, etc. But because they are
optical fiber cables, you can have Gbps feeds coming back from them.

So LOOKING is really about taking this modern development of the
union of Grid and Web services and placing that on top of the
middleware and physical infrastructure of the OptIPuter, then
creating a cyberinfrastructure that allows for remote operation,
automatic data access, and management for this very cross-cutting
set of scientific instruments.
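
As an illustration of that Web-services style of remote operation and data access, here is a minimal sketch; the observatory URL, instrument names, and endpoints are hypothetical placeholders, not LOOKING's actual interfaces.

    import json
    import urllib.request

    # Hypothetical service URL; LOOKING's real interfaces are not shown here.
    OBSERVATORY = "https://observatory.example.org/api"

    def fetch_readings(instrument_id, start, end):
        """Request a time window of readings from a remotely operated sea-floor instrument."""
        url = (f"{OBSERVATORY}/instruments/{instrument_id}/readings"
               f"?start={start}&end={end}")
        with urllib.request.urlopen(url) as response:   # data returns over the lambda
            return json.load(response)

    def steer_camera(camera_id, pan, tilt):
        """Send a remote-operation command, e.g. re-aiming a stereo HD camera."""
        body = json.dumps({"pan": pan, "tilt": tilt}).encode()
        request = urllib.request.Request(
            f"{OBSERVATORY}/cameras/{camera_id}/steer", data=body,
            headers={"Content-Type": "application/json"}, method="POST")
        with urllib.request.urlopen(request) as response:
            return response.status

    if __name__ == "__main__":
        data = fetch_readings("seismograph-07", "2005-02-01T00:00Z", "2005-02-02T00:00Z")
        print(len(data.get("samples", [])), "samples retrieved")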

HPCwire: With all these advancements sort of coming to fruition, what
do you see as the biggest technical obstacles right now?

LS: The big problem I see these days is really in cyber-system
integration. We tend to be a field of specialists. But to really
build cyberinfrastructure, you have to take a very synthetic view.
When you are optimizing an integrated system, you do not get there by
optimizing each of the sub-components. You have to think more
globally about the inter- relationship of middleware, networking,
storage, distributed computing, and so forth. That's why over the
last few years, I've been assembling these wonderful teams of
colleagues with many different skills, building out in the real world
what we call living laboratories. Then we use these labs to do real
science, coupling intimately with decadal scientific projects to
shake things down and inform the computer scientists about the
highest-priority bottlenecks that need to be eliminated. It's very
different from putting a supercomputer on a floor and saying "y'all
come get some." It's a different mindset.

HPCwire: So what do you think will truly dominate in 2005?

LS: Well I think, without question, the emergence of dedicated
optical networks. The National LambdaRail is a once-in-20-year event.
It is essential to get the U.S. back into peer status with our
international partners on dedicated optical networks, to say nothing
of getting us into a leadership position.

HPCwire: Would you say the U.S. is now just playing catch-up to other
nations?

LS: Definitely. It really worries me. Canada, with Bill St. Arnaud,
a wonderful visionary and pioneer, has been creating these
optical networks for over 5 years. Last summer, in Reykjavik,
Iceland, I'm sitting there meeting with all of these extraordinary
leaders from so many countries that have already got optical networks
up and functioning. And I was sitting there, from the U.S., saying
"Well, any day now we might get serious here." It was embarrassing. I
can't say enough good things about Tom West and his colleagues. The
fact that they just went out and did it, in spite of the federal
government not providing direct funding to NLR -- when you contrast
that with the NSF's leadership in building the NSFnet backbone
entirely, funding at least half the regional networks back in 1985
and 1986, it's a pretty stark comparison. I think this country needs
to really re-focus on being first, on getting out there and creating
cutting-edge infrastructure. Without infrastructure, you can't do
anything. That's why we build supercomputing centers -- to provide
infrastructure that a very broad range of science can come and use.
That's what I see the NLR doing as it is built up by its membership. I'm
very hopeful that we'll see the NSF now begin to fund participants
in attaching to it and utilizing it, really to create a whole new
generation of high-performance networking, science and engineering.

HPCwire: Do you think initiatives like the High-End Computing
Revitalization Task Force are on track? Or do you think they've
stalled?

LS: I'm very worried about reports that focus on supercomputers
themselves. I think NASA is taking the right approach with Columbia.
It is putting aside resources to invest in optical networks to link
with its NASA centers and end users. It's exploring scalable
visualization displays up to 100 million pixels. It's taking a
LambdaGrid approach from the beginning. If you're going to create a
super-node, you need to think about how to embed it in a LambdaGrid,
such that your end users can make optimal use of it. Every time you
make a faster supercomputer, you make a faster data generator. You're
creating data so fast that, because you've neglected the
infrastructure to connect to the end user, you really aren't getting
the return on investment that you ought to be getting. And that
investment is extreme these days for supercomputing. You have to
think about the return, from the get-go.

HPCwire: Ok, final question for you. As industry tall tales have it,
your peeking over the shoulder of Marc Andreessen while he developed
HTML led to an epiphany for what you called an "information
superhighway" or what we now call the World Wide Web. How true is
this?

LS: Well, NCSA had been involved in a long series of software
innovations to add functionality to the Internet, starting with NCSA
telnet, through which a large fraction of Internet users in the late
80s actually got on to the Internet from PCs or Macs. Certainly, it
is the case that when I first saw Marc Andreessen and Eric Bina
demonstrating Mosaic, I could see instantly that this was going to
create this long-sought hyperlink structure globally. I said, "this
is going to change the world." This is a vision that goes back to
Vannevar Bush, who was the head of all science and technology during
WWII for the U.S. and who started the NSF and entire post-war
American science policy of linking graduate education with scientific
research. He wrote articles in the late 40s about this sort of
integrated global knowledge space. In a way, this was almost a
50-year-old vision, but the beautiful work that Marc Andreessen, Eric
and the rest of the Mosaic team did not only built on the work of
Tim Berners-Lee with the actual protocols, but implemented them in a
way that was sufficiently easy to use, and made it so easy to create
content through the NCSA web server, that it touched off the exponential growth that
eventually led to the whole commercialization through Netscape and
Internet Explorer.

So it certainly led me to have a broader vision of what the Internet
was capable of. I think we're a long way from realizing that vision.
I think the next big jump is going to be created by these dedicated
optical networks like NLR and new infrastructure like we hope will
come out of the OptIPuter project.

HPCwire: Larry, thanks again for meeting with me. Enjoy your time
here at SC2004 and we hope to catch up again with you next year.

Larry Smarr received his Ph.D. from the University of Texas at Austin
and conducted observational, theoretical, and computationally based
astrophysics research for fifteen years before becoming
Director of NCSA.

Presently, Smarr is the director of the California Institute for
Telecommunications and Information Technology, professor of computer
science and engineering at UCSD, and works with the National
Computational Science Alliance as a strategic advisor.



---
* Origin: [adminz] tech, security, support (192:168/0.2)
