Thursday 8 November 2012

the first website in the world


The first website on the World Wide Web went live 21 years ago, in August 1991. The site explained the concept and history of the Web, provided links to all "the world's online information" — a list that lengthened as the Web grew — and outlined the process by which people could improve and expand the Web. It was created by Tim Berners-Lee, then a computer scientist at the European Organization for Nuclear Research (CERN) in Geneva.
The website's homepage, titled "World Wide Web," has been archived in its original form here (found via the tech blog Gizmodo). In heavily hyperlinked Times New Roman text set against a white background, the page defined the World Wide Web as "a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents."
Another page within the site directed readers to everything that was then available online, such as a page each devoted to law, the Bible, song lyrics and politics. The site requested that people "mail www-request@info.cern.ch if you know of online information not in these lists."
Berners-Lee, who invented the World Wide Web to create a depot of publicly available information that could be accessed over the Internet, wrote on his "executive summary" page, "The project is based on the philosophy that much academic information should be freely available to anyone. It aims to allow information sharing within internationally dispersed teams, and the dissemination of information by support groups. Originally aimed at the High Energy Physics community, it has spread to other areas and attracted much interest in user support, resource discovery and collaborative work areas."

history of ingenuity


An Oldowan tool, the earliest type of stone tool. These were used by hominins in Africa from 2.6 million years ago up until 1.7 million years ago.
Credit: José-Manuel Benito Álvarez | Creative Commons
A history of ingenuity
Humans are an ingenious species. From the moment someone bashed a rock on the ground to make the first sharp-edged tool, to the development of Mars rovers and the Internet, several key advancements stand out as particularly revolutionary. These are our picks for the 10 most important inventions of all time.


Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry.
CREDIT: Creative Commons Attribution-Share Alike 3.0 Unported | Zephyris
In 1958, a Texas Instruments engineer named Jack Kilby cast a pattern onto the surface of an 11-millimeter-long "chip" of semiconducting germanium, creating the first ever integrated circuit. Because the circuit contained a single transistor — a sort of miniature switch — the chip could hold one "bit" of data: either a 1 or a 0, depending on the transistor's configuration.
Since then, and with unflagging consistency, engineers have managed to double the number of transistors they can fit on computer chips every two years. They do it by regularly halving the size of transistors. Today, after dozens of iterations of this doubling and halving rule, transistors measure just a few atoms across, and a typical computer chip holds 9 million of them per square millimeter. Computers with more transistors can perform more computations per second (because there are more transistors available for firing), and are therefore more powerful. The doubling of computing power every two years is known as "Moore's law," after Gordon Moore, the Intel engineer who first noticed the trend in 1965.
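To make the doubling rule concrete, here is a minimal back-of-envelope sketch in Python; the starting year, starting count and two-year period are illustrative assumptions, not figures taken from the article.

```python
# Rough Moore's-law projection: transistor counts doubling every two years.
# base_year, base_count and period are illustrative placeholders, not data
# from the article.
def transistors(year, base_year=1965, base_count=64, period=2):
    """Return a projected transistor count, doubling once per `period` years."""
    doublings = (year - base_year) // period
    return base_count * 2 ** doublings

for year in (1965, 1975, 1995, 2015):
    print(year, f"{transistors(year):,}")
```

The point of the exercise is only that repeated doubling grows explosively: a modest starting count becomes hundreds of millions within a few decades.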
Moore's law renders last year's laptop models defunct, and it will undoubtedly make next year's tech devices breathtakingly small and fast compared to today's. But consumerism aside, where is the exponential growth in computing power ultimately headed? Will computers eventually outsmart humans? And will they ever stop becoming more powerful?
The singularity
Many scientists believe the exponential growth in computing power leads inevitably to a future moment when computers will attain human-level intelligence: an event known as the "singularity." And according to some, the time is nigh.
Physicist, author and self-described "futurist" Ray Kurzweil has predicted that computers will come to par with humans within two decades. He told Time Magazine last year that engineers will successfully reverse-engineer the human brain by the mid-2020s, and by the end of that decade, computers will be capable of human-level intelligence.
The conclusion follows from projecting Moore's law into the future. If the doubling of computing power every two years continues to hold, "then by 2030 whatever technology we're using will be sufficiently small that we can fit all the computing power that's in a human brain into a physical volume the size of a brain," explained Peter Denning, distinguished professor of computer science at the Naval Postgraduate School and an expert on innovation in computing. "Futurists believe that's what you need for artificial intelligence. At that point, the computer starts thinking for itself." [How to Build a Human Brain]
What happens next is uncertain — and has been the subject of speculation since the dawn of computing.
"Once the machine thinking method has started, it would not take long to outstrip our feeble powers," Alan Turing said in 1951 at a talk entitled "Intelligent Machinery: A heretical theory," presented at the University of Manchester in the United Kingdom. "At some stage therefore we should have to expect the machines to take control." The British mathematician I.J. Good hypothesized that "ultraintelligent" machines, once created, could design even better machines. "There would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make," he wrote.
Buzz about the coming singularity has escalated to such a pitch that there's even a book coming out next month, called "Singularity Rising" (BenBella Books), by James Miller, an associate professor of economics at Smith College, about how to survive in a post-singularity world. [Could the Internet Ever Be Destroyed?]
Brain-like processing
But not everyone puts stock in this notion of a singularity, or thinks we'll ever reach it. "A lot of brain scientists now believe the complexity of the brain is so vast that even if we could build a computer that mimics the structure, we still don't know if the thing we build would be able to function as a brain," Denning told Life's Little Mysteries. Perhaps without sensory inputs from the outside world, computers could never become self-aware.
Others argue that Moore's law will soon start to break down, or that it has already. The argument stems from the fact that engineers can't miniaturize transistors much more than they already have, because they're already pushing atomic limits. "When there are only a few atoms in a transistor, you can no longer guarantee that a few atoms behave as they're supposed to," Denning explained. On the atomic scale, bizarre quantum effects set in. Transistors no longer maintain a single state represented by a "1" or a "0," but instead vacillate unpredictably between the two states, rendering circuits and data storage unreliable. The other limiting factor, Denning says, is that transistors give off heat when they switch between states, and when too many transistors, regardless of their size, are crammed together onto a single silicon chip, the heat they collectively emit melts the chip.
For these reasons, some scientists say computing power is approaching its zenith. "Already we see a slowing down of Moore's law," the theoretical physicist Michio Kaku said in a BigThink lecture in May.
But if that's the case, it's news to many. Doyne Farmer, a professor of mathematics at Oxford University who studies the evolution of technology, says there is little evidence for an end to Moore's law. "I am willing to bet that there is insufficient data to draw a conclusion that a slowing down [of Moore's law] has been observed," Farmer told Life's Little Mysteries. He says computers continue to grow more powerful as they become more brain-like.
Computers can already perform individual operations orders of magnitude faster than humans can, Farmer said; meanwhile, the human brain remains far superior at parallel processing, or performing multiple operations at once. For most of the past half-century, engineers made computers faster by increasing the number of transistors in their processors, but they only recently began "parallelizing" computer processors. To work around the fact that individual processors can't be packed with extra transistors, engineers have begun upping computing power by building multi-core processors, or systems of chips that perform calculations in parallel. "This controls the heat problem, because you can slow down the clock," Denning explained. "Imagine that every time the processor's clock ticks, the transistors fire. So instead of trying to speed up the clock to run all these transistors at faster rates, you can keep the clock slow and have parallel activity on all the chips." He says Moore's law will probably continue because the number of cores in computer processors will go on doubling every two years.
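As a rough illustration of the idea Denning describes (a generic sketch, not code from the article), the same job can be split across several worker processes, one per core, so that no single core has to run faster:

```python
# Minimal sketch (not from the article) of parallel processing: split one
# big job across several worker processes, one per core, and combine the
# partial results instead of asking a single core to run faster.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))   # work runs in parallel
    print(total)
```

Each worker handles a quarter of the range and the results are combined at the end, which is "parallel activity on all the chips" in miniature.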
And because parallelization is the key to complexity, "In a sense multi-core processors make computers work more like the brain," Farmer told Life's Little Mysteries.
And then there's the future possibility of quantum computing, a relatively new field that attempts to harness the uncertainty inherent in quantum states in order to perform vastly more complex calculations than are feasible with today's computers. Whereas conventional computers store information in bits, quantum computers store information in qubits: particles, such as atoms or photons, whose states are "entangled" with one another, so that a change to one of the particles affects the states of all the others. Through entanglement, a single operation performed on a quantum computer theoretically allows the instantaneous performance of an inconceivably huge number of calculations, and each additional particle added to the system of entangled particles doubles the performance capabilities of the computer.
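The "each additional particle doubles the capability" claim can at least be illustrated from the classical side (a hedged sketch, not the article's own math): describing an n-qubit register classically takes 2^n complex amplitudes, so every extra qubit doubles the bookkeeping a conventional computer would need.

```python
# Sketch of why each extra qubit "doubles" things: describing an n-qubit
# state classically takes 2**n complex amplitudes.
import numpy as np

def uniform_superposition(n_qubits):
    dim = 2 ** n_qubits                      # doubles with every added qubit
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (1, 2, 10, 20):
    state = uniform_superposition(n)
    print(f"{n:2d} qubits -> {len(state):,} amplitudes")
```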
If physicists manage to harness the potential of quantum computers — something they are struggling to do — Moore's law will certainly hold far into the future, they say.
Ultimate limit
If Moore's law does hold, and computer power continues to rise exponentially (either through human ingenuity or under its own ultraintelligent steam), is there a point when the progress will be forced to stop? Physicists Lawrence Krauss and Glenn Starkman say "yes." In 2005, they calculated that Moore's law can only hold so long before computers actually run out of matter and energy in the universe to use as bits. Ultimately, computers will not be able to expand further; they will not be able to co-opt enough material to double their number of bits every two years, because the universe will be accelerating apart too fast for them to catch up and encompass more of it.
So, if Moore's law continues to hold as accurately as it has so far, when do Krauss and Starkman say computers must stop growing? Projections indicate that the computer will encompass the entire reachable universe, turning every bit of matter and energy into a part of its circuitry, in 600 years' time.
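The rough arithmetic behind that figure (a sketch of the scale involved, not Krauss and Starkman's actual calculation) is simple: 600 years at one doubling every two years is 300 doublings.

```python
# 600 years of Moore's law at one doubling every two years.
from math import log10

doublings = 600 // 2                  # 300 doublings
growth = 2 ** doublings
print(f"{doublings} doublings -> growth factor of roughly 10^{log10(growth):.0f}")
# About 10^90 -- far more bits than the roughly 10^80 particles usually
# estimated for the observable universe, so the doubling has to stop.
```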
That might seem very soon. "Nevertheless, Moore's law is an exponential law," Starkman, a physicist at Case Western Reserve University, told Life's Little Mysteries. You can only double the number of bits so many times before you require the entire universe.
Personally, Starkman thinks Moore's law will break down long before the ultimate computer eats the universe. In fact, he thinks computers will stop getting more powerful in about 30 years. Ultimately, there's no telling what will happen. We might reach the singularity — the point when computers become conscious, take over, and then start to self-improve. Or maybe we won't. This month, Denning has a new paper out in the journal Communications of the ACM, called "Don't feel bad if you can't predict the future." It's about all the people who have tried to do so in the past, and failed.


Friday 19 October 2012

COMPUTERS LIFE


Now is the time to start putting existing technology to work for you.  The future is now.  We provide a number of new services that will change the way businesses collect their accounts receivable.
Automated bank drafts have been used by insurance companies to collect monthly payments for over 30 years. Now, thanks to personal computer technology, large and small businesses can collect their receivables each month on a consistent, dependable basis. Paper transactions can be moved electronically through the banking system in the same way a credit card transaction is processed remotely. Checks are electronically deposited into your account without a trip to the bank.
Every day, businesses are realizing the benefits of electronic commerce. Today's merchants are employing this technology not only to improve their bottom line, but to enhance customer service and satisfaction.
We can bring the future to your business now.

LIFE WITHOUT COMPUTERS


A Life Without Computers

For me, computers have dominated my life and I’m not sure how I feel about that.

When my family got our first computer, I was 8 years old, spent most of my time outside and had no idea what the Internet was.
A friend of my dad’s somehow had an extra, somewhat older Macintosh computer that he generously gave us, which we (the kids) joyfully accepted.
Since I had enough siblings to fill a small orphanage, we had to take turns using the computer and I frequently felt as though the waiting was unbearable.
Once my turn to use the old Macintosh came around, I would either play Wheel of Fortune, Solitaire, or explore all the other files and programs I could gain access to.  
I couldn’t really tell you why I found the device so fascinating; it was as if something was telling me it would become a major part of all of our lives later down the road, which made me want to know everything about it.
Sometimes it would take 5-10 minutes just for the dinosaur to boot up; sometimes it wouldn’t boot up at all, and I remember thinking that it would be nice once they figured out all the kinks with the over-sized calculator in front of me.
However, if someone had told me back then that one day I would have a cell phone more powerful than a hundred computers, I’d probably have become terrified. I have no doubt that my dreams would then have been filled with Terminator-like scenes in which my house and/or family would be getting vaporized by intelligent computers and cell phones.
Thankfully, no one ever told me about the future of technology and I thoroughly enjoyed the time I was given on the family computer…sometimes sneaking on it when I wasn’t supposed to, but little sisters don’t tattle as long as you hit them.
Kidding….
Now zip up to the present day, and I find myself not only with a cell phone 100 times more powerful than my old Macintosh, but constantly surrounded by high-powered (compared to prehistoric times) computers, with the Internet at my fingertips.
At work, I use a computer to look up student financial accounts and can instantly find an answer to practically anything I can think of by using Google. At home, I use a computer to write new material for my website; where I also spend a ridiculous amount of time designing and editing.
In between and during my time at home and work, I use a smartphone to communicate with other humans and for anything else under the sun, because there’s most likely an app for it.
This description of how I spend the majority of my time is undoubtedly identical to that of millions of others (such as yourself) who find themselves immersed in technology at every turn.
I sometimes attempt to imagine what it would be like to not be a member of Generation Y and have an understanding of what the world was like before all this technology came into the picture.
How did people stay in touch with friends? How did they survive without being able to text?
What did people do with their free time at home? Sit on the couch and knit?
Because I do not know the answers to those questions, something inside of me feels sad, like I missed out on life before computers controlled the world.
Sure, I had 7 years of life before getting a computer, but what 7-year-old observes and remembers the world around him? I was too busy playing in the woods, catching snakes and scaring my sisters to think about the non-computer-dominated world around me.
This morning, while at work, I was trying to decide if I thought all this technology was a good thing by weighing the pros and cons. I found myself unable to come to a conclusion, simply because I don’t have anything else to compare it to.
Of course I can read all I want about what the 80s, 70s and 60s were like, but reading can only take your imagination so far.
And don’t get me wrong, I understand that technology saves countless lives, makes contacting your 7th grade best friend a cinch and (apparently) makes life easier.
But all the previous generations didn’t seem to have a problem living without Facebook, Google and computers in their pockets.
I’m not sure how they did it, but they obviously made it just fine using other ways to communicate (e.g. talking in person, if you can fathom that).
Now that I think about it, if someone had told me, when I was 8, that one day life would completely revolve around computers, I’d probably have let my siblings have all the computer time they wanted and enjoyed the few precious computer-free years I had left.
For the rest of my life I will either be on a computer, near one, or have a computer in my pocket. Then, when I die, I’ll most likely be hooked up to one.
So I guess it doesn’t really matter how I feel about our computer-run world; there’s not much choice in the matter and, if I protest, one will probably vaporize me.
A Life Without Computers? I don’t understand that sentence.