New 8-core PowerMacs. . . no real interest here?

Offline
Last seen: 1 year 4 months ago
Joined: Dec 19 2003 - 18:53
Posts: 906
New 8-core PowerMacs. . . no real interest here?

So I noticed that when Apple announced the 8-core PowerMacs, there didn't seem to be any conversation sparked here. I know the release was somewhat expected, but it still seems like a nice, fast machine. It does feel a bit like an interim product, shipping with neither OS X 10.5 nor a Blu-ray drive (I think the OS will provide the drivers for the drive, so the two are linked, IMHO).

Are most of the speed-intensive applications taking advantage of multiple processors with multi-threaded programming these days?

Mutant_Pie

coius's picture
Offline
Last seen: 10 years 8 months ago
Joined: Aug 25 2004 - 13:56
Posts: 1975
I think OS X Naturally Manages the cores

i.e., it shifts work off to each core when an app needs priority. I don't remember whether Unix/Linux apps need to be written for multi-core systems to actually take advantage of them. Having direct control of the cores is better than having the OS take care of it, but I still think the benefit is there.
Same as with 32-bit apps on a 64-bit system: I honestly don't think an app has to be 64-bit to benefit from a 64-bit CPU.

All in all, it's a nice machine, but I think the speed of these machines is getting ridiculous, and it's not being used to its full potential. Kind of like 32- vs. 64-bit apps: not every app out there ships multi-core or 64-bit executables, but software will eventually catch up, and the speed will become more apparent.
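A concrete way to see the distinction (a minimal Python sketch of my own, not anything from Apple's scheduler): the OS can place threads and processes on any core, but it cannot parallelize a single-threaded computation by itself; the program has to split the work.

```python
# Sketch: a CPU-bound job only uses a second core if the program
# itself divides the work. The OS scheduler can run the pieces on
# any core, but a single-threaded loop stays on one core at a time.
from multiprocessing import Pool

def partial_sum(bounds):
    # Sum of squares over a half-open range [lo, hi).
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def serial(n):
    # Single-threaded version: one core, no matter how many exist.
    return sum(i * i for i in range(n))

def parallel(n, workers=2):
    # Split the range into one chunk per worker process.
    step = n // workers
    chunks = [(k * step, (k + 1) * step) for k in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer either way; only the parallel one can occupy two cores.
    assert parallel(100_000) == serial(100_000)
```

The same idea applies whether the unit of work is a process or a thread: an app sees a multi-core speedup only if it was written to hand out independent chunks of work.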

Offline
Last seen: 14 years 11 months ago
Joined: Aug 23 2005 - 20:36
Posts: 126
Probably because nobody can a

Probably because nobody can afford it. :P

MacTrash_1's picture
Offline
Last seen: 17 years 7 months ago
Joined: Dec 20 2003 - 10:38
Posts: 318
Re: Probably because nobody can a

Probably because nobody can afford it. :P

Really.......

From about $3,800 for the cheapest configuration to $17,000 with all the bells and whistles you can get. But that higher price does include dual 30" Cinema HD displays, so you can save a little money if you already have the dual Cinemas.

Everybody has those already, right ?

And you get free shipping too !

I have never even owned a G5, so it's definitely out of my price range.

It is quite a machine though..... but so is a new car.

Offline
Last seen: 14 years 11 months ago
Joined: Aug 23 2005 - 20:36
Posts: 126
I don't get what you are sayi

I don't get what you are saying. Are you saying $3,800 is cheap for a computer?

coius's picture
Offline
Last seen: 10 years 8 months ago
Joined: Aug 25 2004 - 13:56
Posts: 1975
doesn't require a Mac Pro to get the features of Core 2

We got our current iMac with a 2.0 GHz Intel Core 2 Duo. It runs pretty fast, and I *DO* see the second core being utilized. When I boot into Windows on it and run an app, it runs at half the speed of the same app in OS X. I'm pretty sure this is because Windows XP Pro does NOT take full advantage of the dual-core setup. As soon as I boot into OS X and run the program, it blows away the same app run in Windows (I used HandBrake and MediaFork, which are the same thing now).

I was getting around 40 fps in Windows encoding Xvid, and about 100-120 fps in MediaFork on OS X. And supposedly the Windows version takes advantage of dual-core/multi-CPU setups. I just think Windows is not seeing it. It DOES say the second core is there, but when I run Windows, CPU usage tops out at 50%. And yes, I have reinstalled.

Unless Apple's driver disc has bad drivers, it's probably that Windows doesn't recognize and utilize the full multi-core setup.

iamdigitalman's picture
Offline
Last seen: 2 years 4 months ago
Joined: Mar 1 2004 - 22:18
Posts: 629
I would love one of those, bu

I would love one of those, but it's way out of my price range. My B&W suits me fine, but when Leopard drops, I would love to use it, as I assume it will be even snappier, if I know Apple. Sure, I can run XPostFacto, but why not upgrade?

Even the price of the 1.6 GHz G5 is out of my range. I would love something with dual processors, so even a dual G4, or a Sawtooth with a dual-CPU upgrade, would suit me.

-digital ;)

catmistake's picture
Offline
Last seen: 3 years 2 weeks ago
Joined: Dec 20 2003 - 10:38
Posts: 1100
Xserve

So I noticed when Apple announced the 8-core PowerMacs there didn't seem to be any conversation sparked here

I thought the same thing when the new Xserve was released: 32 GB max RAM, and not a peep. So, I say in jest, the new Mac Pro just doesn't hold enough RAM for my appetites.

Are most of the speed-intensive applications taking advantage of multiple processors with multi-threaded programming these days?

Well, the OS, at least, was able to utilize 8 cores long before 8 cores were available. I imagine Apple's own apps will be first out of the gate to utilize the power.

mmphosis's picture
Offline
Last seen: 4 days 10 hours ago
Joined: Aug 18 2005 - 16:26
Posts: 442
yawn

I think that having eight Intel cores in a machine is boring -- not to mention gougingly overpriced. I am waiting for massively parallel cores of the non-Intel variety for pennies apiece. We already have cheap commodity SDRAM. It is only a matter of time before massively parallel CPUs can be purchased and installed the same way SDRAM is today.

Multi-threading is a waste of time -- in so many ways! It's a very poor and backward way to do multi-processing. There are easier ways to write software that scales exceedingly well to multiple CPUs.

iantm's picture
Offline
Last seen: 3 years 8 months ago
Joined: Apr 2 2005 - 14:01
Posts: 709
My take

The Mac Pro is a pretty sweet piece of hardware. I had a 2.66 GHz quad-core unit from Apple for evaluation at work. It had 4 GB of RAM and was wicked fast. With the expansion options, it'll make for a great server.

As for the new Xserve, yawn. I have a very low opinion of the Xserve, as it lacks sufficient expansion. For a cluster, Xserves are great. As a standalone web server, it's great. As a file/backup server coupled with an Xserve RAID, it's alright. Unfortunately, I'd like to be able to hook up a tape drive, and FireWire tape drives really aren't going to do the trick. I'm looking for some high-performance SCSI, and there's no available PCI slot for that (it's taken up by the video card, which I can't remove due to work regulations).

If there were a 2U-4U Xserve with more than three drive bays and three PCI slots, I'd be all over it. When it comes time to phase out the Xserve presently in service, I'm going with a Mac Pro: room for a Fibre Channel card, a SCSI card, 3 TB of internal mass storage, and at a lower cost than an Xserve (factoring in OS X Server with unlimited clients). It's a no-brainer.

Now, the Xserve is a much more service friendly device, but that doesn't really make up for the lack of connectivity and expansion.

The 8-core Mac Pro is cool, but unless you're doing video rendering, massive number crunching, gene sequencing, or serving (and even then you'll not see much improvement going from 4 to 8 cores), you'll never see the power of the machine. That's why I recommend iMacs to everyone at work. For the needs of most end users, a MacBook/MacBook Pro or iMac is more than adequate, and far less expensive. The 20" iMac is an incredibly great machine at a great price. (As a general rule, with no upgrades save for maybe RAM, a five-year service life can be expected from these machines, making the total cost of the computer itself around $300 a year.) I have one on my own desk.

catmistake's picture
Offline
Last seen: 3 years 2 weeks ago
Joined: Dec 20 2003 - 10:38
Posts: 1100
I thought I was picky!

As for the new Xserve, yawn. I have a very low opinion of the Xserve as it lacks sufficient expansion.

It has one less slot than the Mac Pro (OK, maybe 1.5 fewer slots), but maybe that deficiency is enough to be constrictive. Something like this takes care of that.

For the needs of most end users, a MacBook/MacBook Pro or iMac is more than adequate, and far more inexpensive.

And now I go off topic...
Assume that most people in a business office aren't power users. Most of their time is probably spent in a browser, and most of their productive time is probably spent in various Microsoft Office apps or their analogs. Why is it that the same application, over time, needs more and more power? Why does an office worker need a 2 GHz machine today when they had most of that functionality with an 80 MHz 486? Microsoft Word used to fit on a floppy, and the functionality of the version that fit on a floppy (v4 or below, I think) covers what most people want to do most of the time (simple word processing). Just what the heck is going on here?

Offline
Last seen: 1 year 4 months ago
Joined: Dec 19 2003 - 18:53
Posts: 906
Re: I thought I was picky!

Why is it that the same application, over time, needs more and more power? Why does an office worker need a 2 GHz machine today when they had most of that functionality with an 80 MHz 486? Microsoft Word used to fit on a floppy, and the functionality of the version that fit on a floppy (v4 or below, I think) covers what most people want to do most of the time (simple word processing). Just what the heck is going on here?

Because of the desire to have "what is new and better." I would also suggest that "mission creep" for the bells and whistles in the OS has a lot to do with it, along with the increased use of graphics, sound, and video integration, which can now be built into many applications, plus ever-expanding video resolution and memory storage/handling.

Expanding your analogy: as of early 2000, I was still using AppleWorks 3 (circa 1985) with the Beagle Bros expansion on an 8-bit Apple //e (1.5 MHz, though it would move a lot faster on the 4 MHz //c+ that I have). It had an integrated word processor with graphics and font capabilities that could output to a laser printer, plus a built-in spreadsheet, a database program, and HELP windows too. The vector-based Double Hi-Res graphics program, The Graphics Magician by Penguin Software ("F***-You!" Penguin Publishing House and your army of lawyers!), used the same basic methods as Adobe's Illustrator, about two years before the Macintosh came out (for Hi-Res graphics, and about six months afterwards for DHR).

Mutant_Pie

BDub's picture
Offline
Last seen: 2 years 9 months ago
Joined: Dec 20 2003 - 10:38
Posts: 703
Re: yawn

Multi-threading is a waste of time -- in so many ways! It's a very poor and backward way to do multi-processing. There are easier ways to write software that scales exceedingly well to multiple CPUs.

Could you elaborate on that? I was under the impression that using threads with the right design patterns is currently the best way to speed up your app on a multi-CPU system.

Eudimorphodon's picture
Offline
Last seen: 1 week 5 days ago
Joined: Dec 21 2003 - 14:14
Posts: 1207
Re: My take

As for the new Xserve, yawn. I have a very low opinion of the Xserve as it lacks sufficient expansion. For a cluster, Xserves are great. As a standalone web server, it's great. As a file/backup server when coupled with an Xserve RAID, it's alright. Unfortunately - I'd like to be able to hook up a tape drive, and firewire tape drives really aren't going to do the trick. I'm looking for some high performace SCSI - no available PCI slot for that. (it's taken up by the video card, that I can't remove due to work regulations).

Just for the record, the Intel Xserve has VGA integrated onto the motherboard, so both its expansion slots are free. (And one of them can be configured with *either* a PCI-X or an 8x PCIe riser.) That's about all the expansion you can reasonably expect from a 1U.

I guess personally I find the Xserve pretty ho-hum, since I deal all day with Dell 1950 and 2950 servers, which are essentially identical hardware. Since I'm obligated to use either FreeBSD (our poison of choice for application servers) or Linux (for VMware Server), being able to run OS X Server is a complete non-issue. The Xserve *is* a pretty good deal for what it costs. If it weren't for the *major* discounts we get from Dell buying in bulk, the per-unit price for the Xserve would be about $2,000 less than a *roughly* equivalent 1950. (The advantage of the 1950 is that a hardware SAS/SATA RAID controller/backplane is a low-cost "slotless" option. The Xserve only offers software RAID internally.)

As for the OctoCore Mac Pro, well, I guess I'm just not the Mac Pro's target customer. I recently turned down an offer to upgrade my desktop workstation to a Dell Precision 490 (again, basically the same hardware) simply because I don't need quad cores to run KDE and a desktop full of xterms. Unless you're *specifically* doing nothing but number-crunching, even quad cores (let alone 8 cores) start getting you into "diminishing returns" territory. Memory bandwidth is shared across all the cores on a die (and to some extent across both dies, since cache coherency has to be maintained), and with 8 cores the indications are that bandwidth starvation can be a serious problem for certain applications. For a mere mortal end user who wants their video games to go fast, a system based on a high-bandwidth motherboard (Nvidia nForce 680i SLI chipset or the like) with a single dual- or quad-core CPU and a huge hairy-chested video card (GeForce 8800 or similar) will substantially outperform a server-chipset-based system like the Mac Pro. The sticker price might actually end up in the same ballpark, which means that someone actually looking to spend that sort of money on a computer should carefully weigh their requirements and intended applications before buying. Someone dropping four grand on the Mac Pro thinking it'll kick butt at the LAN party is bound to be disappointed when they set it next to their buddy's similarly priced Alienware box.

If the future of rock and roll is really going to be about cramming more and more CPU cores into your box, then the industry (including Apple) is going to have to start resorting to NUMA architectures, and writing effective software for those isn't easy.

--Peace

iantm's picture
Offline
Last seen: 3 years 8 months ago
Joined: Apr 2 2005 - 14:01
Posts: 709
Re: I thought I was picky!

As for the new Xserve, yawn. I have a very low opinion of the Xserve as it lacks sufficient expansion.

It has one less slot than the Mac Pro (OK, maybe 1.5 fewer slots), but maybe that deficiency is enough to be constrictive. Something like this takes care of that.

For the needs of most end users, a MacBook/MacBook Pro or iMac is more than adequate, and far more inexpensive.

And now I go off topic...
Assume that most people in a business office aren't power users. Most of their time is probably spent in a browser, and most of their productive time is probably spent in various Microsoft Office apps or their analogs. Why is it that the same application, over time, needs more and more power? Why does an office worker need a 2 GHz machine today when they had most of that functionality with an 80 MHz 486? Microsoft Word used to fit on a floppy, and the functionality of the version that fit on a floppy (v4 or below, I think) covers what most people want to do most of the time (simple word processing). Just what the heck is going on here?

A fair number of my end users are just browsing, though the scientific apps in use tend to get more and more demanding. That's the weird thing with scientific apps: some get upgraded every couple of months, while some have yet to make the jump to OS X, with an AGP Graphics G4 tower running OS 9 as the recommended platform.

Sadly, the software people and hardware people are working together to keep each other in business.

As for the Xserve concern: I don't feel that I should have to spend additional money to address an existing hardware bottleneck. When you take Fibre Channel cards, SCSI cards, and video cards into account, the two PCI slots become very limiting, at least on the Xserve G5. On the Intel-based Xserve, with onboard video, one PCI slot is freed up, but then there's no room for growth should I need to add another interface beyond SCSI and Fibre Channel.

catmistake's picture
Offline
Last seen: 3 years 2 weeks ago
Joined: Dec 20 2003 - 10:38
Posts: 1100
Re: I thought I was picky!

I don't feel that I should have to spend additional money to address an existing hardware bottleneck

fair enough

When you take fibre channel cards, scsi cards, and video cards into account - the two pci slots become very limiting - at least on the Xserve G5.

Aha! Sounds like you're talking about already-owned production/dev machines. But isn't there an option on the new Xserve for onboard video? On the other hand, the servernistas here would say, with a cruel smirk, "you don't need video!" But... well... much of the ease of use of OS X Server is lost in the CLI.

catmistake's picture
Offline
Last seen: 3 years 2 weeks ago
Joined: Dec 20 2003 - 10:38
Posts: 1100
Re: My take

I recently turned down an offer to upgrade my desktop workstation to a ... simply because I don't need quad cores ....

How do you ever expect to become a resource hog with that attitude?

Dr. Webster's picture
Offline
Last seen: 6 hours 27 min ago
Joined: Dec 19 2003 - 17:34
Posts: 1760
Re: My take

I guess personally I find the Xserve pretty ho-hum since I deal all day with Dell 1950 and 2950 servers which are essentially identical hardware.

I do too; I just deployed a pair of Intel SC5400RA boxes (two quad-core 2.66 GHz Clovertown Xeons, 16 GB RAM, and 1.2 TB of RAID 10 in each box) and a pair of SR2520SAs (which are pretty similar, architecture-wise, to the Xserve) for VMware Server. Compared to the raw power in those four boxes (24 cores and 48 GB RAM total), we'd have to drop a good chunk of change for comparable power in Xserves. Especially since we got the chassis and CPUs from Intel Engineering for free.

As for the OctoCore Mac Pro, well, I guess I'm just not the Mac Pro's target customer. I recently turned down an offer to upgrade my desktop workstation to a Dell Precision 490 (again, basically the same hardware) simply because I don't need quad cores to run KDE and a desktop full of Xterms. Unless you're *specifically* doing nothing but number-crunching even quad-cores (let alone 8 cores) start getting you into "diminishing returns" territory. Memory bandwidth is shared across all the cores on a die (and to some extent across both dies, since cache coherency has to be maintained), and with 8 cores indications are that bandwidth starvation can be a serious problem for certain applications. For a mere mortal end user who wants their video games to go fast a system based on a high-bandwidth motherboard (Nvidia 680 SLI chipset or the like) with a single dual or quad-core CPU and huge hairy-chested video card (GeForce 8800 or similar) will substantially outperform a server-chipset-based system like the Mac Pro. The sticker price might actually end up in the same ballpark, which means that someone actually looking to spend that sort of money on a computer should carefully weigh what their requirements are and what applications they intend to run before buying. Someone dropping four grand on the MacPro thinking it'll kick butt at the LAN party is bound to be disappointed when they set it next to their buddy's similarly priced AlienWare box.
--Peace

With the launch of products like Final Cut Server, I don't really see the need for massive amounts of raw power on the desktop. I foresee a lot of the power-hungry apps moving to the back end, provided they don't need to give real-time results (as with gaming). Why pay for 10 video editors to have high-powered desktops, with most of that power going unused (the actual editing doesn't require a whole lot; it's the final rendering that's the hog), when you can keep a single high-powered server cranking along at 100% all the time and buy less expensive desktops?

mmphosis's picture
Offline
Last seen: 4 days 10 hours ago
Joined: Aug 18 2005 - 16:26
Posts: 442
the right design patterns

Perhaps, we are talking about the same thing.

"I was under the impression that using threads with the right design patterns is currently the best way to speed up your app on a multi-CPU system." I think that today's CPU and operating-system makers would like to impress me with this statement. I am not impressed. And I don't expect you to be convinced by anything I write, either. I wouldn't rule out multi-threading, but I think that newer, smaller embedded systems are going down a different track altogether (i.e., Erlang, functional languages, continuations).

The multi-threading model may speed up an application under a proprietary operating system, or under a software system that is based on an underlying multi-threading model, but that doesn't mean these threading models are the "right" design pattern. Instead of multi-threading, I would prefer terms like multi-processing, concurrency, asynchronous queuing, and continuations to express "the right design pattern" for my software. I am talking about exposing the innards of the so-called kernel/operating system. (Of course, proprietary makers don't want to do this.) I think it is useful to remove the underlying bloat of a threading model and allow software to evolve organically, from the lowest level to the highest. I would stop following the dictates of someone else's threading model. Interspersing proprietary, one-way-only threading code throughout my software is a recipe for a big mess. Consciously writing my software with concurrency where concurrency makes the most sense seems like the right design pattern to me. I think this also makes it dead simple to plug in a (slow, legacy) threading scheme after the fact, rather than having to rewrite.

Legacy multi-threading models are also prone to numerous problems: locking, blocking, waiting, synchronous waits, and possible lag and failures due to critical timing. Software in these models is overcomplicated by protected modes, protected code, semaphores, and other weirdness. I think that a collaborative/cooperative model is more in tune with the way one might actually write software.
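To illustrate the contrast being drawn (a minimal Python sketch of my own, not code from any system discussed here): shared mutable state forces every writer to take a lock, while a message-passing design confines the state to a single consumer and needs no lock at all.

```python
# Two ways to total up work from several threads.
import threading
import queue

def locked_counter(n_threads, increments):
    # Shared-state style: every writer must acquire the lock.
    total = 0
    lock = threading.Lock()
    def worker():
        nonlocal total
        for _ in range(increments):
            with lock:  # the "weirdness": serialize every single update
                total += 1
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

def queued_counter(n_threads, increments):
    # Message-passing style: workers send results; one consumer owns the total.
    q = queue.Queue()
    def worker():
        q.put(increments)  # a message, not a shared variable
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(q.get() for _ in range(n_threads))

assert locked_counter(4, 1000) == queued_counter(4, 1000) == 4000
```

Both give the same answer; the difference is where the coordination lives, which is roughly the design question this post is raising.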

iantm's picture
Offline
Last seen: 3 years 8 months ago
Joined: Apr 2 2005 - 14:01
Posts: 709
Re: I thought I was picky!

I don't feel that I should have to spend additional money to address an existing hardware bottleneck

fair enough

When you take fibre channel cards, scsi cards, and video cards into account - the two pci slots become very limiting - at least on the Xserve G5.

Aha! Sounds like you're talking about already owned, prod/dev machines, but isn't there an option on the new xserve for onboard video? On the other hand, the servernistas here would say, with a cruel smirk, "you don't need video!" But... well... much of the ease of use of OS X Server is lost in the CLI.

The machine I'm working with is an existing Xserve G5 that I inherited responsibility for from my predecessor. Since it's a production machine in the server room, it has to be connected to the kvm so the other server guys can check on it while I'm away. Unfortunately, the lack of expansion, even on the Intel Xserve with onboard video is a bit of a turnoff. I realize that it's because of the 1U form factor. I just wish that Apple had a more robust option for a rack mount server. Mounting the Mac Pro should be entertaining, to say the least.

The educational pricing on a specced out Mac Pro is pretty decent for what is included (3.0 quad core, 4gb ram, 3.0tb mass storage, fibre channel, os x server, and applcare for $6250) while a comparably equipped Xserve (less 1 750gb hard drive) is $8943. Hmmm, decisions, decisions. I'll spend my employer's money the way I would spend mine - the best bang for the buck.

BDub's picture
Offline
Last seen: 2 years 9 months ago
Joined: Dec 20 2003 - 10:38
Posts: 703
Re: the right design patterns

I wouldn't rule out multi-threading, but I think that newer smaller embedded systems are going on a different track altogether. (ie. Erlang, functional languages, continuations.)

I just took a look at the Wikipedia article on Erlang. I don't see how Erlang processes are any different from decently implemented threads; I'm sure I've missed something, though. I do like that the language looks like it was built with multithreading as a central characteristic.

As far as functional languages go, I've had basic exposure to Scheme, but that's it. My understanding of it is that it uses threading.


The multi-threading model may speed up an application under a proprietary operating system, or a software system that is based on an underlying multi-threading model, but that doesn't mean that these threading models are the "right" design pattern. Instead of multi-threading, I would prefer to use terms like multi-processing, concurrency, asyncronous queuing, and continuations to express "the right design pattern" for my software.

We could just as easily call threading 'programming with concurrently running, centrally scheduled processes' if it's a semantic issue. By the way, I'm using the term "design pattern" as it's used in the Gang of Four book, to describe a general approach to solving a problem. Erlang seems to use a multithreaded implementation of the actor pattern. Traditional threading models don't make this impossible, and they give you the freedom to use other design patterns as appropriate.
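A rough sketch of the actor pattern mentioned here (my own Python illustration, not Erlang's actual implementation): each actor is a thread that owns its state and changes it only in response to messages in its mailbox, so no locks are needed around the state.

```python
# Minimal actor: a thread plus a mailbox queue.
import threading
import queue

class Actor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.state = []  # touched only by the actor's own thread
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:  # poison pill: shut the actor down
                break
            self.state.append(msg.upper())  # react to the message

    def send(self, msg):
        self.mailbox.put(msg)  # any thread may send; only the actor mutates

    def stop(self):
        self.mailbox.put(None)
        self._thread.join()

a = Actor()
for word in ["hello", "world"]:
    a.send(word)
a.stop()
assert a.state == ["HELLO", "WORLD"]
```

This is the multithreaded-actor shape in miniature: the queue serializes access to the state, so the "locking" is implicit in the message ordering.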

I am talking about exposing the inards of the so-called kernel/operating system. (Of course, proprietary makers don't want to do this.) I think it is useful to remove the underlying bloat of a threading model and allow software to evolve organically, from the lowest to the highest level. I would stop following the dictate of someone else's threading model.

I'm not sure what you're actually saying here. Are you suggesting that anyone who needs to write something with concurrency should rewrite their own handling system? Why does the OS's licensing matter, when most threading is done in user space?


Interspersing proprietary one-way only threading code throughout my software is a recipe for a big mess. Consciously writing my software with concurrency where concurrency makes the most sense seems like the right design pattern to me.

Again, I'm not sure why the licensing matters. As far as consciously making choices about where concurrency matters, I agree. I don't think you /can/ write concurrent software without understanding where it matters and how it affects your data.


I think this also makes it dead simple to plug in a (slow and legacy) threading scheme after the fact rather than have to rewrite.

Could you clarify? I think I'm misreading this.


Legacy multi-threading models are also prone to numerous problems: locking, blocking, waiting, synchronous waits, and possible lag and failures due to critical timing. Software in these models is overcomplicated by protected modes, protected code, semaphores, and other weirdness.

Fair enough; maybe a message-queuing system like Erlang seems to use (again, I don't know anything about it) might be superior. Locking et al. only tend to be used when you're working with concurrent accesses to a single data structure, though, so I'm not sure how that'd help.


I think that a collaborative / cooperative model is more in tune to the way one might actually write software.

Could you define a collaborative/cooperative model, or give some idea of where I could read about it? I think we're talking about message queuing again, but I'm not sure.

Jon
Jon's picture
Offline
Last seen: 13 years 6 months ago
Joined: Dec 20 2003 - 10:38
Posts: 2804
If Apple ever added boot time

If Apple ever added boot time support for this technology then you'd have your solution.

mmphosis's picture
Offline
Last seen: 4 days 10 hours ago
Joined: Aug 18 2005 - 16:26
Posts: 442
Re: the right design patterns

We could just as easily call threading 'programming with concurrently running, centrally scheduled processes' if it's a semantic issue.

Yes, it's semantics.

It's also about running software on multiple processors. And it's about programming with concurrently running, distributed, unscheduled processes. And it's about memory sharing, which, as you noted, gets messy with the traditional threading models. It's difficult to share memory between two remote (maybe really remote) and possibly very different CPU architectures -- and I think there is an opportunity in this.

A lot of software has been constrained and locked into running on a single Intel x86 processor. I see there are a few people on this thread who don't seem to mind billing someone else to purchase the latest behemoth. Myself, I think the future lies in commoditized CPUs, something Intel is probably scared of: masses of CPUs that can be purchased cheaply and installed easily, just like SDRAM today. I also predict, and hope, that SDRAM will replace traditional hard drives very soon.

By the way, I'm using the term "Design Pattern" as used in the Gang of Four book, to describe a general approach to solving a problem.

Excellent.

Are you suggesting that anyone who needs to write something with concurrency should rewrite their own handling system?

I wouldn't rule that out. Look into what is required; it's just as much about which tools are required as about which tools can be removed altogether. Maybe multiple CPUs won't help at all. Probably there are processes that can run concurrently -- that in itself isn't new, but having massive numbers of CPUs is. I think that concurrency makes things easier, without having to add and use a lot of unneeded complexity.

Why does the OS's licensing matter, when most threading is done in user space?

Power: speed, cost, and quality. I would even remove the distinction of user space. Sandboxes are way more useful.

I don't think you /can/ write concurrent software without understanding where it matters and how it affects your data.

Well, you can write anything you want and that's the problem. You are right in looking into "understanding where it matters and how it affects your data." This is great and it is what I am talking about.

I think this also makes it dead simple to plug in a (slow and legacy) threading scheme after the fact rather than have to rewrite.

Could you clarify? I think I'm misreading this.

hardware >> Mach threads >> pthreads >> Mac OS X threading model >> Mac OS X Timer >> Timer glue code >> my Timer code.

hardware >> my Timer code.

Could you define a collaborative/cooperative model, or give some idea of where I could read about it?

Message queuing might be one way to express this. I like the idea of design patterns that you mention. Take a step back, consciously look at what is required, and organize, reuse, remove, refactor, rewrite -- basic software design without doing weird stuff (like pre-emptive multi-threading on a single CPU).

Offline
Last seen: 6 years 7 months ago
Joined: Dec 20 2003 - 10:38
Posts: 851
It should work fine. It frea

It should work fine. It freaking flies on my 4-core Sossaman machine.

catmistake's picture
Offline
Last seen: 3 years 2 weeks ago
Joined: Dec 20 2003 - 10:38
Posts: 1100
[quote]If Apple ever added bo

If Apple ever added boot time support for this technology

That's so weird!

just when I was thinking "The Apple XBlade," blade server solutions...
