[color=darkblue]I read somewhere that CPUs are about to hit a wall.
Because of "Moore's Law" -- CPUs doubling in power every 18 months -- if CPUs have get any smaller, quantum tunneling problems will kick in.
If they stay the same size, but with just more transistors put on them, they'll need liquid nitrogen to keep from overheating.
My questions....
1] Why can't they just be made bigger?
Would an extra square centimeter/millimeter or 2 be that big of a deal?
2] I always read how Gallium Arsenide was supposed to replace silicon. What happened?
3] What's next? BioCPUs? HoloCPUs? NanoCPUs?
Later![/color]
:coolmac: :coolmac: :coolmac:
They have gotten bigger -- line a 6502 or 8088 up against a G3 and you'll see what I mean.
Then again, I suppose it could just be extra pins for things like 32/64-bit address, data, and control buses. Does anyone know if this is the case?
Joel
Current data suggests that CPUs made of silicon could go down to around 35nm and still work reasonably well -- but that's only a few years away. The problem with making a bigger CPU is that an enormous die brings a lot of timing and latency problems: copper wiring can only carry a signal so far within a clock cycle. As for gallium arsenide, I've heard it's extremely expensive to work with, and the fact that it's a poison in the first place doesn't give it a lot of momentum either.
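Here's a rough back-of-envelope on the die-size problem, in Python. The wire speed and die sizes are just illustrative assumptions (real RC delay on narrow copper wires makes things considerably worse):
[code]
# Back-of-envelope: why a physically bigger die runs into timing trouble.
# Assumption (illustrative only): signals cross the die at ~0.5c; real
# RC-limited copper interconnect is slower than this.

C = 3.0e8                    # speed of light, m/s
SIGNAL_SPEED = 0.5 * C       # assumed on-chip signal speed

def crossing_time_ns(die_edge_mm):
    """Time for a signal to cross one edge of the die, in nanoseconds."""
    return (die_edge_mm / 1000.0) / SIGNAL_SPEED * 1e9

for clock_ghz in (1.0, 3.0):
    period_ns = 1.0 / clock_ghz
    for edge_mm in (10, 20, 40):   # 10mm-ish today vs. "just make it bigger"
        print(f"{edge_mm}mm die @ {clock_ghz}GHz: edge-to-edge "
              f"{crossing_time_ns(edge_mm):.3f}ns vs clock period {period_ns:.3f}ns")
[/code]
At a few GHz, crossing a die only a couple of centimeters wide already eats a big chunk of the clock period, which is the timing problem in a nutshell.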
Two proposed solutions to the upcoming silicon wall are doped artificial-diamond transistors and carbon nanotubes. Artificially grown diamonds doped with certain chemicals can be made into transistors with a much higher heat tolerance and a smaller theoretical size than silicon allows. This method is possible with today's technology, and copper wiring would most likely be replaced with nanotube wiring. Nanotube transistors can be made as small as nanotubes themselves can be made and coupled together with the required layers. The first transistor-like nanotube construction was only recently made, and the technology just isn't here yet for a mass-produced product. The way CPUs are designed is changing, and in the future parallelism and efficiency will be extremely important.
IBM is developing, and has a working prototype of, the "Cell" processor -- that's what's going into the PlayStation 3. It runs at speeds of up to around 4.2GHz right now.
And don't fall into the Intel marketing trap of MHz-MHz-MHz. The Pentium M can perform as well as or better than a P4 running at much higher clock rates, while using about 1/5 the power. Efficient CPUs were mostly ignored in favor of high clock rates that could be used as marketing tools. Intel learned a hard lesson with the NetBurst architecture and is now backtracking, going down the road it should have been on all along. The P4 never looked that great to me, and now I'm being shown that my gut feelings were right. Intel missed the 4GHz mark and has abandoned the P4 line. It had been projected to go to 10GHz, remember? The main question is: why? Who needs 10GHz when you have to lower the per-clock efficiency of the CPU just to get it to go that fast? "What's GHz got to do, got to do with it?"
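To put the efficiency point in concrete terms, the usual rule of thumb is throughput = instructions-per-cycle x clock frequency. A tiny Python sketch (the IPC and wattage numbers below are made-up placeholders, not real benchmark figures):
[code]
# Rule of thumb: useful work per second ~= IPC * clock frequency.
# The IPC and power figures here are illustrative placeholders only.

chips = {
    # name:        (clock_ghz, instructions_per_cycle, watts)
    "Pentium M":   (1.6,       2.0,                    25),
    "P4 NetBurst": (3.2,       1.0,                    100),
}

for name, (ghz, ipc, watts) in chips.items():
    gips = ghz * ipc   # billions of instructions per second
    print(f"{name:12s}: {gips:.1f} GIPS at {watts}W "
          f"-> {gips / watts:.3f} GIPS per watt")
[/code]
With those (made-up) numbers the two chips do the same work per second, but one does it on a quarter of the power -- which is exactly why raw GHz is a lousy yardstick.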
There was an article on Slashdot a few months back about some dude who got a P4 overclocked to something like 7GHz using liquid nitrogen. It was a big story because he didn't just overclock it, jump into the CMOS setup, and take a pic of it saying "7GHz" -- he was able to actually boot into Windows and run benchmarks.
[color=blue]That answers my question!
Thanks again!!
Later![/color]
:coolmac: :coolmac: :coolmac:
Booting into Windows at 7GHz is nuts... but I bet some of the dual-core AMD stuff would do what that one did... imagine if you overclocked that?
Here's what I thought I knew about optical CPUs > "Optical CPUs"
:coolmac: :coolmac: :coolmac:
This is similar to what happened in the automotive industry during the last 50 years. The big three American auto manufacturers were hell-bent on making their cars go faster. Their answer? Increase engine displacement and put bigger carburetors on those engines. This remained the status quo until the 1970s, when the Japanese and European automakers really started to make inroads into the American market, and as such, their innovations started to attract the attention of the Americans. Technology such as multi-valve systems, fuel injection, weight-reduction techniques, etc. became alternatives to traditional thinking.
IMHO, this is what needs to happen in the computer market: Software needs to be refined to require fewer and fewer processor cycles. Processors need to move more and more bits per cycle. I think, most importantly, people need to realize that there is a limit to what computers can do.
People don't expect to drive 1000mph in their Honda Civic, why should they expect the equivalent from their PC?
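On the software side, "fewer processor cycles for the same job" mostly comes down to better algorithms. A contrived little Python illustration:
[code]
# "Fewer cycles for the same result" usually means a better algorithm.
# Contrived example: summing 1..n by brute force vs. the closed form.

import timeit

n = 1_000_000

def brute_force():
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def closed_form():
    return n * (n + 1) // 2

assert brute_force() == closed_form()
print("loop:   ", timeit.timeit(brute_force, number=10))
print("formula:", timeit.timeit(closed_form, number=10))
[/code]
Same answer, a tiny fraction of the work -- no new fab process required.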
Cheers,
The Czar
The real big leap is not going to be in single-core CPU speed, but in how much each core on the chip contributes to the overall benchmark. Theoretically, you could have 4-6 cores on a cube-like chip, with a connector on each face. Kinda like a Borg Cube. Who knows, these could actually be light-based, as someone suggested; it might even mean fibre-optic boards in the future, or a whole new type of CPU altogether. My prediction is that pretty soon we'll see multiple CPUs on one chip running at lower clock speeds but achieving higher benchmarks. Also, within the next 7 years, I predict maybe 128-bit CPUs using cell-based processing. Then it's up to the OS to make it usable.
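The "more cores at lower clocks" idea really just means splitting one workload across several processors. A minimal Python sketch of the shape of it (the work function and chunk sizes are arbitrary, not a real benchmark):
[code]
# Minimal sketch: the same workload run on one core vs. split across several.
# The work function and chunk sizes are arbitrary illustrations.

from multiprocessing import Pool

def crunch(chunk):
    """Stand-in for some CPU-bound work on a slice of the data."""
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    N = 10_000_000
    data = range(N)
    chunks = [range(i, min(i + 2_500_000, N)) for i in range(0, N, 2_500_000)]

    # Single core: one long job.
    serial = crunch(data)

    # Four cores: four shorter jobs running at the same time.
    with Pool(processes=4) as pool:
        parallel = sum(pool.map(crunch, chunks))

    assert serial == parallel
    print("Same answer either way; the parallel run finishes sooner "
          "if the OS can spread it across real cores.")
[/code]
Of course the software (and the OS scheduler) has to actually split the work up, which is why the OS point at the end matters.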
I read a while ago that there are processors that actually "learn" as data is processed, to make the most of each CPU clock cycle. These generally involve a standard doped pathway, plus additional pathways that evolve as the data is processed, in order to gain efficiency. In effect, it is sort of like a brain: it starts off as a simple processor and "learns" the best way to calculate the data, reaching out through the die and making more connections. It repeats this until it is at maximum potential, and then settles in for the data processing.
I have seen the article on several sites, the first being /. (Slashdot) and then some other places. It is really cool and has a lot of potential, and I would love to see it implemented.
For instance, starting out with video it will give average performance, but maybe a minute into the data calculation it will have made the best pathways and become far above standard for video encoding/decoding. But when the data type switches (say, to photos), it starts over, reverts to the standard die, and builds from there. This would actually eliminate the need for special instruction sets for media, as the CPU would develop the best way of moving the data through the chip on its own.
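Obviously nobody outside a fab can show the actual hardware, but the "learn the best pathway, then settle in" idea can be sketched in software as a little auto-tuning dispatcher. Everything below is a made-up Python illustration of the concept, not how those chips actually work:
[code]
# Software sketch of "learn the best pathway for this workload, then settle in".
# A tiny auto-tuner: it times each candidate routine a few times, locks onto
# whichever ran fastest for the current data type, and starts over when the
# data type changes. Purely illustrative.

import time

class AdaptiveDispatcher:
    def __init__(self, candidates, trials=3):
        self.candidates = candidates   # interchangeable routines
        self.trials = trials
        self._reset()

    def _reset(self):
        self.timings = {fn: [] for fn in self.candidates}
        self.best = None
        self.data_type = None

    def run(self, data):
        if type(data) is not self.data_type:    # workload changed: relearn
            self._reset()
            self.data_type = type(data)
        if self.best is not None:               # already "settled in"
            return self.best(data)
        fn = min(self.candidates, key=lambda f: len(self.timings[f]))
        start = time.perf_counter()
        result = fn(data)
        self.timings[fn].append(time.perf_counter() - start)
        if all(len(t) >= self.trials for t in self.timings.values()):
            avg = lambda f: sum(self.timings[f]) / len(self.timings[f])
            self.best = min(self.timings, key=avg)
        return result

# Example: two interchangeable ways to "process" a block of numbers.
dispatcher = AdaptiveDispatcher([sum, lambda xs: sum(sorted(xs))])
for _ in range(10):
    dispatcher.run(list(range(100_000)))
[/code]
The hardware version would be doing this with physical pathways instead of timed subroutines, but the pattern is the same: explore, measure, commit, and re-explore when the workload changes.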
That, I believe, is the way CPUs will be 20 years from now. I have also heard of "organic" data processing too, but I doubt that will take off in even 100 years. Probably because some "bacteria rights" organization will step in.