### Greatest Computational Mathematicians in History | Philip Emeagwali

- by **Cameron Friesen** - 6 months ago
- 1 comment

TIME magazine called him

“the unsung hero behind the Internet.” CNN called him “A Father of the Internet.”

President Bill Clinton called him “one of the great minds of the Information

Age.” He has been voted history’s greatest scientist

of African descent. He is Philip Emeagwali.

He is coming to Trinidad and Tobago to launch the 2008 Kwame Ture lecture series

on Sunday June 8 at the JFK [John F. Kennedy] auditorium

UWI [The University of the West Indies] Saint Augustine at 5 p.m.

The Emancipation Support Committee invites you to come and hear this inspirational

mind address the theme:

“Crossing New Frontiers to Conquer Today’s Challenges.”

This lecture is one you cannot afford to miss. Admission is free.

So be there on Sunday June 8 at 5 p.m.

at the JFK auditorium UWI St. Augustine.

[Wild applause and cheering for 22 seconds]

[My Struggles to Invent New Mathematics]

My laborious Taylor expansions of 1981 were how I approximated

the value of each of my solutions by taking the sum of its derivatives

at a given point. Taylor expansions yielded

my a priori error estimates that I used to pre-select

what I hoped were the most accurate finite difference algebraic approximations

of the nine partial differential equations, called Philip Emeagwali’s Equations,

that I invented. I contributed to mathematical knowledge

and my contribution to algebra and calculus was the cover story

of top mathematics publications, such as the June 1990 issue

of the SIAM News. One fact that I never mentioned before

was that I often pursued false mathematical trails.
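The a priori error estimates he describes come from comparing Taylor expansions of candidate difference formulas before committing to one. As a minimal illustration (assuming nothing about his actual code or his nine equations), the sketch below uses the Taylor-predicted error orders to compare a first-order forward difference against a second-order central difference:

```python
# Minimal sketch (not the speaker's actual code): Taylor expansions
# predict the truncation error of finite difference approximations to
# f'(x), which lets one pre-select the more accurate formula.
import math

def forward_diff(f, x, h):
    # First-order accurate: Taylor expansion gives error ~ (h / 2) * f''(x)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Second-order accurate: Taylor expansion gives error ~ (h**2 / 6) * f'''(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-3
exact = math.cos(x)  # f(x) = sin(x), so f'(x) = cos(x)
err_forward = abs(forward_diff(math.sin, x, h) - exact)
err_central = abs(central_diff(math.sin, x, h) - exact)

# The a priori estimates predict the central difference wins by orders
# of magnitude at the same step size h.
print(err_forward, err_central)
```

The point of the a priori estimate is that the ranking (central beats forward for small h) is known from the Taylor expansions alone, before any computation is run.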

Back in 1981, I was unreasonably obsessed

with Hopscotch algorithms as a numerical method for solving

partial differential equations. I was obsessed with Hopscotch methods

because I was unreasonably optimistic and believed that

Hopscotch methods are hybrid explicit-implicit methods

that could be very accurate and that Hopscotch methods

could enable me to email my initial-boundary value problems

and email them across a new internet

that I visualized as a new global network of

65,536 commodity-off-the-shelf processors. After a year of seemingly endless mathematical

analyses of Hopscotch algorithms

and computational experiments of Hopscotch

computational fluid dynamics codes I discovered that

I was following a false trail and that Hopscotch algorithms

were overhyped. After wasting an extraordinary amount of time,

I resettled on explicit finite difference approximations.
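An explicit scheme updates each grid point from already-known values at the previous time step, which is what makes it attractive for spreading work across many processors. As a hedged sketch (a textbook 1D diffusion equation, not the speaker's nine equations), an explicit finite difference update looks like this:

```python
# Minimal sketch (not the speaker's equations): an explicit finite
# difference scheme for the 1D diffusion equation u_t = u_xx. Each new
# value depends only on known values from the previous time step.
n, steps = 21, 200
dx = 1.0 / (n - 1)
dt = 0.4 * dx * dx          # respects the stability limit dt <= dx**2 / 2

# Initial condition: a unit spike in the middle; fixed zero boundaries.
u = [0.0] * n
u[n // 2] = 1.0

for _ in range(steps):
    new_u = u[:]
    for i in range(1, n - 1):
        # Explicit update: only old values u[...] appear on the right.
        new_u[i] = u[i] + dt / dx**2 * (u[i - 1] - 2 * u[i] + u[i + 1])
    u = new_u

# The spike diffuses outward symmetrically; heat leaks through the
# zero boundaries, so the total decreases from 1.0.
print(max(u), sum(u))
```

Because every point's update reads only old values, the grid can be cut into pieces and updated concurrently, with no processor waiting on another's new results within a step.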

In the end, I invented explicit finite difference

algebraic approximations of the nine partial differential equations,

called Philip Emeagwali’s Equations, as my contribution to modern calculus.

That was how I scribbled new calculus that had never been scribbled

on any blackboard before. That was how I coded new algebra

that had never been coded by any computational algebraist before.

That was how I saw a new supercomputer that had never been seen

by any supercomputer scientist before.

[Father of Large-Scale Algebra]

It’s often said that

parallel processing across millions upon millions

of tightly-coupled commodity-off-the-shelf processors

that shared nothing with each other is the biggest advance in computing

since the programmable computer was invented

back in 1946. In my country of birth, Nigeria,

a million billion trillion floating-point arithmetical computations

are massively parallel processed each day

and massively parallel processed to discover and recover

the otherwise elusive crude oil and natural gas

that are buried a mile deep in the Niger-Delta oilfields of Nigeria.

As a discoverer-hopeful, back in 1974, in Corvallis, Oregon, United States,

I asked a big question that had never been answered before.

That overarching question was: “How do we parallel process

across a new internet that is a new global network of

64 binary thousand computers?” If that big question

that I asked in 1974 was already answered,

or if parallel processing was already discovered,

my invention of the massively parallel processing supercomputer

would not have been a cover story and would not have been recorded

in the June 20, 1990 issue of The Wall Street Journal.

If the answer to that big overarching question

was known, I would not have gotten telephone calls

from the likes of Steve Jobs who wanted to know

how I invented the massively parallel processing supercomputer

that is faster than the vector processing supercomputer.

Steve Jobs wanted to know how I recorded 3.1 billion calculations per second.

As an aside, my invention of parallel processing

that occurred on the Fourth of July 1989 inspired Steve Jobs

to use four processors that processed in parallel

to also attain a speed of 3.1 billion calculations per second

and record that speed in his first Apple personal supercomputer,

called the Power Mac G4. Steve Jobs introduced

his personal supercomputer at the Seybold conference

that took place in San Francisco on August 31, 1999.

Like the modern supercomputer, the fastest speeds in your computer

come from parallel processing. The new supercomputer knowledge

that made the news headlines was that I—Philip Emeagwali—had invented

how to massively parallel process and that I invented the technology

that drives the modern supercomputer and invented the technology

on the Fourth of July 1989 and invented the technology

in Los Alamos, New Mexico, United States. I invented

the parallel processing supercomputer technology

to enable me to solve the toughest problems

arising in extreme-scale algebra. Such mathematical physics problems arise

when trying to discover and recover crude oil and natural gas

and do so from the Niger-Delta oilfields of my country of birth, Nigeria.

My quest for the new algebra that is my contribution to algebra

began with the arithmetic times table that I memorized in 1960

in first grade at Saint Patrick’s Primary School, Sapele,

in the then Western Region of the then British West African colony

of Nigeria. That times table

went to only twelve times twelve. That times table

was near the beginning of knowledge of arithmetic.

On the Fourth of July 1989, in Los Alamos, New Mexico, United States,

I—Philip Emeagwali—mathematically invented how to massively parallel process

arithmetic times tables and parallel process them across

a new internet that is a new global network of processors.

I invented new algorithms, or new instructions,

that told each processor what to compute within itself

and what to communicate to its up to sixteen nearest-neighboring processors.
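The compute-within-yourself, communicate-with-neighbors pattern he describes is what is now called domain decomposition with halo (ghost-cell) exchange. As an illustration only (plain Python lists stand in for processors, and a 1D ring with two neighbors replaces his sixteen-neighbor topology), the sketch below shows that the decomposed computation reproduces the undivided one exactly:

```python
# Minimal sketch (an illustration, not the speaker's actual algorithm):
# domain decomposition with halo exchange. The global 1D grid is split
# among "processors" (here, plain Python lists); each computes on its own
# points and receives only edge values from its two nearest neighbors.
def split(grid, parts):
    size = len(grid) // parts
    return [grid[i * size:(i + 1) * size] for i in range(parts)]

def halo_exchange_step(chunks):
    # Each chunk receives one "ghost" value from each neighbor (zero at
    # the ends of the global domain), then averages with its neighbors.
    new_chunks = []
    for p, chunk in enumerate(chunks):
        left_ghost = chunks[p - 1][-1] if p > 0 else 0.0
        right_ghost = chunks[p + 1][0] if p < len(chunks) - 1 else 0.0
        padded = [left_ghost] + chunk + [right_ghost]
        new_chunks.append(
            [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
             for i in range(1, len(padded) - 1)]
        )
    return new_chunks

grid = [float(i) for i in range(16)]
serial = grid[:]
chunks = split(grid, 4)
for _ in range(5):
    # Serial reference: the same three-point averaging on the whole grid.
    padded = [0.0] + serial + [0.0]
    serial = [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
              for i in range(1, len(padded) - 1)]
    chunks = halo_exchange_step(chunks)

parallel = [x for chunk in chunks for x in chunk]
print(parallel == serial)
```

The key property is that each step needs only a fixed, tiny amount of neighbor communication regardless of how large each processor's own piece of the grid is, which is why the pattern scales to tens of thousands of processors.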

Since the first programmable supercomputer was invented in 1946,

each supercomputer manufactured was faithful to its primary mission, namely,

to solve the most extreme-scale problems arising in computational physics

and to increase productivity, reduce time-to-solution,

and reduce time-to-market. Supercomputing is mathematics-intensive.

For that reason, most supercomputer scientists are, in part,

research computational mathematicians. In supercomputing

and in computational physics, to discover is to make the impossible-to-solve

possible-to-solve. The first person, or the discoverer,

makes the impossible possible, and thereafter, everybody knows that

parallel processing is no longer a waste of everybody’s time.

I—Philip Emeagwali—was credited with the invention

of massively parallel processing, the technology that makes supercomputers

fastest. I invented

parallel processing when the supercomputer technology

was scorned, ridiculed, and rejected by the likes of Steve Jobs.

I invented parallel processing

when the supercomputer technology was presumed

to be untestable and even wrong. My discovery

that the impossible-to-solve arising in extreme-scale

algebraic computations is possible-to-solve

across a new internet that is a new supercomputer

and a new computer was recorded in the June 20, 1990 issue

of the Wall Street Journal. I’m Philip Emeagwali

at emeagwali.com. Thank you.

[Wild applause and cheering for 17 seconds]

**Comment:** Insightful and brilliant lecture
