Will computers ever be smarter than us?

Published Thursday, Aug. 25, 2011 4:32PM EDT
Report on Business magazine: The September issue

Moore’s Law is generating a lot of animated discussion these days. First described in a 1965 paper by Intel Corp. co-founder Gordon Moore, the axiom predicts that the number of transistors that can be squeezed onto an integrated circuit doubles every two years or so. Moore himself expected computing power to expand at that pace for only a decade.
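Moore’s doubling rule is simple compound growth, and a few lines make the numbers concrete (a minimal sketch; the function name and the illustrative starting count are mine, not the article’s):

```python
def projected_transistors(initial_count, years, doubling_period=2.0):
    """Project transistor count under Moore's Law: the count
    doubles every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Ten doublings over twenty years is roughly a thousand-fold increase:
# 2,300 * 2**10 = 2,355,200.
print(projected_transistors(2_300, 20))
```

Twenty years at a two-year doubling period is ten doublings, which is why even a modest-sounding rule produces explosive totals over decades.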

He was right and then some. Moore’s Law has held true in chip manufacture for 45 years. The newest models at Intel, IBM and Qualcomm, in fact, are threatening to blow past Moore’s Law, with size and power improvements that exceed even the blistering pace he predicted. Among the latest doodads: a 3-D transistor unveiled by Intel in May, with conducting channels that stick up slightly, allowing electrons to move up and down as well as left and right.

Yet silicon is a physical thing, governed by physical laws. At some point, when transistors have shrunk to the size of atoms, it will be impossible to make them any smaller. That physical limit suggests that the growth of computing power will slow and eventually hit a wall. Depending on which theorist you ask, Moore’s Law will likely hit its expiry date between 2015 and 2020.

Should we care? Absolutely, argues Michio Kaku in his latest book, Physics of the Future. An esteemed American physicist and co-founder of string field theory, Kaku writes that our future economic prosperity will pivot on the discovery of a suitable replacement for silicon. This is where much of the recent animated discussion comes from. It is focused on new research out of the Cavendish Laboratory, the department of physics at Cambridge University. A study published in July provided new insights into “spintronics,” a potentially revolutionary way of transferring information. Conventional electronics rely on harnessing the charge of electrons. Spintronics depends, instead, on manipulating an electron’s spin and transforming it into a so-called spin current, which can then be used to store and transfer information in a way that generates little or no heat.

Obstacles remain, including harnessing enough spin current to meet the electricity requirements of existing computers and devices. There’s also work to be done to integrate spin current with existing semiconductor technology. Still, the idea is intriguing and would, if commercialized, offer a way of climbing back on Moore’s exponential growth curve even after reaching silicon’s limits.

Of course, if the idea of computers operating without electricity or batteries seems like science fiction, it’s nothing compared to what some theorists foresee if computing power keeps growing exponentially for decades. One concern is that we will develop computers that are more intelligent than humans. This scenario is known as the “technological singularity”: a point beyond which the future may become impossible to understand.

Supercomputers could enable an “intelligence explosion,” says futurist Ray Kurzweil. Once those computers learn to evolve, they may choose a future not at all to our liking—cyborgs and all. As artificial intelligence theorist Eliezer Yudkowsky memorably put it: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

Perhaps our best hope is that 25-year Silicon Valley veteran Martin Ford is right. In his book The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, Ford predicts a “technology paradox” that might precede singularity: So many jobs in the economy are automated that consumer demand plummets, destroying the incentive to invest in the technologies necessary to bring the singularity about.

Not a great outcome, but better than choosing between technological stagnation via Moore’s expiry or potential human extinction via its continuance.

2045: The Year Man Becomes Immortal

By LEV GROSSMAN Thursday, Feb. 10, 2011

Technologist Raymond Kurzweil has a radical vision for humanity’s immortal future

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I’ve Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.
On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.
Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil’s age than by anything he’d actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she’d been President Lyndon Johnson’s first-grade teacher.
But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It’s an act of self-expression; you’re not supposed to be able to do it if you don’t have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.
That was Kurzweil’s real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we’re approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they’re getting faster is increasing.
True? True.
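“Faster faster” is just the signature of exponential growth: not only does speed increase, each successive gain is bigger than the last. A toy illustration with made-up relative speeds (doubling every two years, per Moore’s Law; the numbers are illustrative, not benchmarks):

```python
# Relative speed doubling every two years, sampled at two-year intervals.
speeds = [2.0 ** (year / 2) for year in range(0, 12, 2)]
gains = [later - earlier for earlier, later in zip(speeds, speeds[1:])]

print(speeds)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
print(gains)   # [1.0, 2.0, 4.0, 8.0, 16.0] -- each gain exceeds the last
```

The absolute gain over each two-year step is itself doubling, which is exactly the “rate of getting faster is increasing” claim.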
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there’s no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks to play Farmville.
Probably. It’s impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you’d be as smart as they would be. But there are a lot of theories about it. Maybe we’ll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we’ll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

The difficult thing to keep sight of when you’re talking about the Singularity is that even though it sounds like science fiction, it isn’t, any more than a weather forecast is science fiction. It’s not a fringe idea; it’s a serious hypothesis about the future of life on Earth. There’s an intellectual gag reflex that kicks in any time you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears, on its face, preposterous, it’s an idea that rewards sober, careful evaluation.
