Pass4sure 9L0-060 dumps | 9L0-060 real questions |

9L0-060 Mac OS X 10.4 Service and Support

Study guide prepared by Apple dumps experts: 9L0-060 dumps and real questions

100% real questions - exam pass guarantee with high marks - just memorize the answers

9L0-060 exam dumps source : Mac OS X 10.4 Service and Support

Test code : 9L0-060
Test name : Mac OS X 10.4 Service and Support
Vendor name : Apple
Exam questions : 50 real questions

Little effort required to prepare with the 9L0-060 real question bank.
I'm impressed to see comments confirming that the 9L0-060 braindump is updated. The changes are very recent and I did not expect to find them anywhere else. I just took my first 9L0-060 exam, so this one will be the next step. Going to order soon.

These 9L0-060 questions and solutions work in the real test.
I was quite lazy and didn't want to work hard, constantly searching for shortcuts and easy techniques. While I was taking an IT course, 9L0-060 turned out to be very difficult for me and I couldn't find any study guide. Then I heard about this website, which is well known in the marketplace. I got the material and my problems were resolved within a few days of starting with it. The sample and practice questions helped me a lot in my preparation for the 9L0-060 test, and I secured good marks as well. That was entirely thanks to killexams.

Do you want the latest dumps for the 9L0-060 exam? This is the right place.
I solved all the questions in only half the allotted time in my 9L0-060 exam. I will be able to use the study guide for other tests as well. Many thanks for the help. I have to say that with your study and practice materials, I passed my 9L0-060 paper with good marks, and that is thanks to the homework done with your software.

A wonderful supply of great real exam questions with correct solutions.
Studying for the 9L0-060 exam was tough going. With so many complicated subjects to cover, the material built my confidence for passing the exam by taking me through the core questions on each topic. It paid off, as I passed the exam with a good score of 84%. Some of the questions came twisted, but the answers that matched helped me mark the right ones.

It's good to read books for the 9L0-060 exam, but ensure your success with these exam questions.
Just passed the 9L0-060 exam with this braindump. I can confirm that it is 99% valid and includes all of this year's updates. I only got 2 questions wrong, so I'm very excited and relieved.

Good to hear that actual test questions for the 9L0-060 exam are available.
Passed the 9L0-060 exam with 99% marks. Excellent, considering only 15 days of preparation time. All credit goes to the questions and answers from killexams. Its high-quality material made preparation so easy that I could even understand the hard subjects. Thanks a lot for providing such a clear and effective study guide. I hope your team keeps developing more such guides for other IT certification tests.

Got most of the 9L0-060 quiz questions I prepared in the real exam.
If you need proper 9L0-060 training on how the exam works and what it covers, don't waste time and pick this as an excellent source of help. I took the 9L0-060 training and also opted for this great exam simulator, getting the best preparation ever. It guided me through every aspect of the 9L0-060 exam and provided the best questions and answers I have ever seen. The study guides were also very helpful.

The right place to find the latest 9L0-060 dumps.
I have renewed my membership this time for the 9L0-060 exam. My involvement with the site is so important that I can't imagine giving it up by letting my membership lapse. I can simply trust it for my exams. This site can help me achieve my 9L0-060 accreditation and help me get above 95% marks in the exam. You are all doing a great job. Keep it up!

It's great to have 9L0-060 actual test questions.
Hey friends! Gotta pass the 9L0-060 exam with no time for studying? Don't worry, I can solve your problem if you believe me. I had a similar scenario, as time was short. Textbooks didn't help. So I looked for an easy solution and found one with killexams. Their questions and answers worked well for me, helped me clear the concepts and memorize the tough ones. I found all the questions the same as in the guide and scored well. Very helpful stuff, killexams.

Read books for 9L0-060 knowledge, but ensure your success with these exam questions.
Passing the 9L0-060 exam was long overdue, as I was extremely busy with my office assignments. However, when I found these questions and answers, they absolutely inspired me to take the test. They were genuinely supportive and cleared all my doubts on the 9L0-060 subject matter. I felt very fortunate to pass the exam with a huge 97% score. A great achievement indeed, and all credit goes to you for this excellent help.

Apple Mac OS X 10.4

While it is a hard task to pick reliable certification question/answer resources, since people get scammed by choosing the wrong provider, killexams.com makes sure to serve its customers best by keeping its exam dumps updated and valid. Most of the "scam report" complaints come from customers who then come to us for the braindumps and pass their exams cheerfully and effortlessly. We never compromise on our review, reputation and quality, because killexams review, killexams reputation and killexams customer confidence are important to us. Above all, we take care of our review, reputation, scam-report grievances, trust, validity and reports. If you see any false report posted by our rivals under names like "killexams scam report" or "killexams complaint," simply remember that there are always bad people trying to harm the reputation of good services for their own advantage. There are a great many satisfied clients that pass their exams using killexams.com braindumps, killexams PDF questions, killexams practice questions and the killexams exam simulator. Visit killexams.com, try our sample questions and test braindumps and our exam simulator, and you will realize that killexams.com is the best braindumps site.

Back to Bootcamp Menu


Guarantee your success with this 9L0-060 question bank. Apple certification study guides are put together by IT specialists. A great many people complain that there are too many questions in such a huge number of practice exams and study resources, and that they cannot afford to deal with the cost of any more. Having specialists compile such a far-reaching guide, while still making sure that everything worth knowing is covered, addresses exactly that.

We have tested and approved 9L0-060 exams. killexams.com presents the most accurate and up-to-date IT exam materials, covering nearly all knowledge points. With the help of our 9L0-060 exam materials, you don't need to waste time reading a bulk of reference books; you just need to spend 10-20 hours to master our 9L0-060 real questions and answers. We offer both a PDF version and a software version of the exam questions and answers. The software version lets candidates simulate the Apple 9L0-060 exam in a realistic environment. Huge discount coupons and promo codes are as follows:
WC2017 : 60% discount coupon for all tests on the website
PROF17 : 10% discount coupon for orders over $69
DEAL17 : 15% discount coupon for orders over $99
DECSPECIAL : 10% special discount coupon for all orders
killexams.com helps a great many candidates pass their exams and get their certifications. We have a large number of successful reviews. Our dumps are reliable, affordable, updated and of truly high quality, built to overcome the difficulties of any IT certification exam. killexams.com exam dumps are updated regularly, and material is released periodically. The latest dumps are available at testing centers, with whom we maintain our relationship to obtain the most recent material.

The exam questions for the 9L0-060 Mac OS X 10.4 Service and Support exam come in two available formats, a PDF and a practice test program. The PDF file carries all of the exam questions and answers, which makes your preparation less laborious, while the practice test program is the complementary part of the exam product, serving for self-assessment of your progress. The assessment tool also highlights your weak areas, where you need to put in more effort so you can improve. We recommend you try the free demo: you will see its simple UI and find it easy to change the prep mode. Note, however, that the real 9L0-060 exam has a wider variety of questions than the demo version. If you are satisfied with the demo, you can buy the real 9L0-060 exam questions. killexams.com offers 3 months of free updates for the 9L0-060 Mac OS X 10.4 Service and Support exam questions. Our expert team is always available on the back end, updating the material as and when required.



Exam Simulator : Pass4sure 9L0-060 Exam Simulator

View Complete list of Brain dumps


Mac OS X 10.4 Service and Support


Watch: Mac OS X 10.4 Running in Windows Alternative ReactOS via PearPC Emulator

The ReactOS project recently showcased on YouTube that it's possible to virtualize the Mac OS X 10.4 operating system on their free and open-source Windows-alternative operating system.

Our "Watch" series of articles continues today with a very interesting one where you can see Mac OS X 10.4 Tiger running inside the ReactOS computer operating system, which we believe has come a long way and is beginning to look like a viable alternative to Microsoft's Windows 7 or Vista operating systems, perfect for desktop computers and laptops.

The latest release, ReactOS 0.4.8, showed us last month that it's now possible to use Windows 10, Windows 8, and Windows Vista software on the free and open-source operating system that's binary compatible with computer programs and device drivers made for Windows.

It also introduced initial support for reading data from NTFS-formatted drives, a new app similar to the DrWatson32 tool from Windows, and some user-visible changes like support for balloon notifications in the system tray area and support for unmounting network drives directly from the file explorer.

You can now emulate various operating systems inside ReactOS

You'd assume that ReactOS barely runs some Windows apps, but its innovative development team wants to prove us otherwise, and they recently managed to record a screen capture of the Mac OS X 10.4 "Tiger" operating system running inside ReactOS via the well-known PearPC architecture-independent PowerPC platform emulator.

Why would you run an older version of Mac OS X inside ReactOS? Well, that doesn't really matter; what's notable here is that you can use the PearPC emulator to virtualize various other PowerPC operating systems, including Mac OS X, Darwin, and GNU/Linux, which is both educational and entertaining. Check it out in action below!

Mac OS X 10.6 Snow Leopard: the Ars Technica review

    Mac OS X 10.4 Tiger: 150+ new features

    In June of 2004, during the WWDC keynote address, Steve Jobs revealed Mac OS X 10.4 Tiger to developers and the public for the first time. When the finished product arrived in April of 2005, Tiger was the biggest, most important, most feature-packed release in the history of Mac OS X by a wide margin. Apple's marketing drive reflected this, touting "over 150 new features."

    All those new features took time. Since its introduction in 2001, there had been at least one major release of Mac OS X each year. Tiger took over a year and a half to arrive. At the time, it definitely seemed worth the wait. Tiger was a hit with users and developers. Apple took the lesson to heart and quickly set expectations for the next major release of Mac OS X, Leopard. Through various channels, Apple communicated its intention to move from a 12-month to an 18-month release cycle for Mac OS X. Leopard was officially scheduled for "spring 2007."

    As the date approached, Apple's marketing machine trod a predictable path.

    Steve Jobs at WWDC 2007, touting 300 new features in Mac OS X 10.5 Leopard

    Apple even went so far as to list all 300 new features on its website. As it turns out, "spring" was a bit optimistic. Leopard actually shipped at the end of October 2007, nearly two and a half years after Tiger. Did Leopard really have twice as many new features as Tiger? That's debatable. What's certain is that Leopard included a solid crop of new features and technologies, many of which we now take for granted. (For example, have you had a discussion with a potential Mac user since the release of Leopard without mentioning Time Machine? I certainly haven't.)

    Mac OS X appeared to be maturing. The progression was clear: longer release cycles, more features. What would Mac OS X 10.6 be like? Would it arrive three and a half years after Leopard? Would it include 500 new features? A thousand?

    At WWDC 2009, Bertrand Serlet announced a move that he described as "unprecedented" in the PC industry.

    Mac OS X 10.6 - Read Bertrand's lips: No New Features!

    That's right, the next major release of Mac OS X would have no new features. The product name reflected this: "Snow Leopard." Mac OS X 10.6 would merely be a variant of Leopard. Better, faster, more refined, more... uh... snowy.

    This was a risky strategy for Apple. After the rapid-fire updates of 10.1, 10.2, and 10.3, followed by the flood of new features and APIs in 10.4 and 10.5, could Apple really get away with calling a "time out?" I imagine Bertrand was really sweating this announcement up on the stage at WWDC in front of a live audience of Mac developers. Their reaction? Spontaneous applause. There were even a few hoots and whistles.

    Many of these same developers applauded the "150+ new features" in Tiger and the "300 new features" in Leopard at past WWDCs. Now they were applauding zero new features for Snow Leopard? What explains this?

    It probably helps to know that the "0 New Features" slide came at the end of an hour-long presentation detailing the major new APIs and technologies in Snow Leopard. It was also quickly followed by a back-pedaling ("well, there is one new feature...") slide describing the addition of Microsoft Exchange support. In isolation, "no new features" may appear to imply stagnation. In context, however, it served as a developer-friendly affirmation.

    The overall message from Apple to developers was something like this: "We're adding a ton of new things to Mac OS X that will help you write better applications and make your existing code run faster, and we're going to make sure that all this new stuff is rock-solid and as bug-free as possible. We're not going to overextend ourselves adding a raft of new customer-facing, marketing-friendly features. Instead, we're going to concentrate 100% on the things that touch you, the developers."

    But if Snow Leopard is a love letter to developers, is it a Dear John letter to users? You know, those people that the marketing department might so crudely refer to as "customers." What's in it for them? Believe it or not, the sales pitch to users is actually quite similar. As exhausting as it has been for developers to keep up with Apple's seemingly never-ending stream of new APIs, it can be just as taxing for customers to stay on top of Mac OS X's features. Exposé, a new Finder, Spotlight, a new Dock, Time Machine, a new Finder again, a new iLife and iWork almost every year, and on and on. And as much as developers hate bugs in Apple's APIs, users who experience those bugs as application crashes have just as much reason to be annoyed.

    Enter Snow Leopard: the release where we all get a break from the new-features/new-bugs treadmill of Mac OS X development. That's the pitch.

    Uncomfortable realities

    But wait a second, didn't I just mention an "hour-long presentation" about Snow Leopard featuring "major new APIs and technologies?" When speaking to developers, Apple's message of "no new features" is another way of saying "no new bugs." Snow Leopard is supposed to fix old bugs without introducing new ones. But nothing says "new bugs, coming right up" quite like major new APIs. So which is it?

    Similarly, for users, "no new features" connotes stability and reliability. But if Snow Leopard includes enough changes to the core OS to fill an hour-long overview session at WWDC more than a year before its release, can Apple really make good on this promise? Or will users end up with all the disadvantages of a feature-packed release like Tiger or Leopard—the inevitable 10.x.0 bugs, the unfamiliar, untried new functionality—but without any of the actual new features?

    Yes, it's enough to make one quite cynical about Apple's true motivations. To cast some more fuel on the fire, have a look at the Mac OS X release timeline below. Next to each release, I've included a list of its most significant features.

    Mac OS X release timeline

    That curve is taking on a decidedly droopy shape, as if it's being weighed down by the ever-increasing number of new features. (The releases are distributed uniformly on the Y axis.) Maybe you think it's reasonable for the time between releases to stretch out as each one brings a heavier load of goodies than the last, but keep in mind the logical consequence of such a curve over the long haul.

    And yeah, there's a little upwards kick at the end for 10.6, but remember, this is supposed to be the "no new features" release. Version 10.1 had a similar no-frills focus but took a heck of a lot less time to arrive.

    Looking at this graph, it's hard not to wonder if there's something siphoning resources from the Mac OS X development effort. Maybe, say, some project that's in the first two or three major releases of its life, still in that steep, early section of its own timeline graph. Yes, I'm talking about the iPhone, specifically iPhone OS. The iPhone business has exploded onto Apple's balance sheets like no other product before, even the iPod. It's also accruing developers at an alarming rate.

    It's not a stretch to imagine that many of the artists and developers who piled on the user-visible features in Mac OS X 10.4 and 10.5 have been reassigned to iPhone OS (temporarily or otherwise). After all, Mac OS X and iPhone OS share the same core operating system, the same language for GUI development, and many of the same APIs. Some workforce migration seems inevitable.

    And let's not forget the "Mac OS X" technologies that we later learned were developed for the iPhone and just happened to be announced for the Mac first (because the iPhone was still a secret), like Core Animation and code signing. Such conspiracy theories certainly aren't helped by WWDC keynote snubs and other indignities suffered by Mac OS X and the Mac in general since the iPhone arrived on the scene. And so, on top of everything else, Snow Leopard is tasked with restoring some luster to Mac OS X.

    Got all that? A nearly two-year development cycle, but no new features. Major new frameworks for developers, but few new bugs. Significant changes to the core OS, but more reliability. And a franchise rejuvenation with few user-visible changes.

    It's enough to turn a leopard white.

    The price of entry

    Snow Leopard's opening overture to consumers is its price: $29 for those upgrading from Leopard. The debut release of Mac OS X 10.0 and the last four major releases have all been $129, with no special pricing for upgrades. After eight years of this kind of fiscal disciplining, Leopard users may well be tempted to stop reading right now and just go pick up a copy. Snow Leopard's upgrade price is well under the impulse purchase threshold for many people. Twenty-nine dollars plus some minimal level of faith in Apple's ability to improve the OS with each release, and boom, instant purchase.

    Still here? Good, because there's something else you need to know about Snow Leopard. It's an overture of a different sort, less of a come-on and more of a spur. Snow Leopard will only run on Macs with Intel CPUs. Sorry (again), PowerPC fans, but this is the end of the line for you. The transition to Intel was announced over four years ago, and the last new PowerPC Mac was released in October 2005. It's time.

    But if Snow Leopard is meant to prod the PowerPC holdouts into the Intel age, its "no new features" stance (and the accompanying lack of added visual flair) is working against it. For those running Leopard on a PowerPC-based Mac, there's precious little in Snow Leopard to help push them over the (likely) four-digit price wall of a new Mac. For PowerPC Mac owners, the threshold for a new Mac purchase remains mostly unchanged. When their old Mac breaks or seems too slow, they'll go out and buy a new one, and it'll come with Snow Leopard pre-installed.

    If Snow Leopard does end up motivating new Mac purchases by PowerPC owners, it will probably be the result of resignation rather than inspiration. An Intel-only Snow Leopard is most significant for what it isn't: a further extension of PowerPC life support on the Mac platform.

    The final interesting group is owners of Intel-based Macs that are still running Mac OS X 10.4 Tiger. Apple shipped Intel Macs with Tiger installed for a little over one year and nine months. Owners of these machines who never upgraded to Leopard are not eligible for the $29 upgrade to Snow Leopard. They're also apparently not eligible to purchase Snow Leopard for the traditional $129 price. Here's what Apple has to say about Snow Leopard's pricing (emphasis added).

    Mac OS X version 10.6 Snow Leopard will be available as an upgrade to Mac OS X version 10.5 Leopard in September 2009 [...] The Snow Leopard single user license will be available for a suggested retail price of $29 (US) and the Snow Leopard Family Pack, a single household, five-user license, will be available for a suggested price of $49 (US). For Tiger® users with an Intel-based Mac, the Mac Box Set includes Mac OS X Snow Leopard, iLife® '09 and iWork® '09 and will be available for a suggested price of $169 (US) and a Family Pack is available for a suggested price of $229 (US).

    Ignoring the family packs for a moment, this means that Snow Leopard will either be free with your new Mac, $29 if you're already running Leopard, or $169 if you have an Intel Mac running Tiger. People upgrading from Tiger will get the latest version of iLife and iWork in the bargain (if that's the appropriate term), whether they want them or not. It sure seems like there's an obvious place in this lineup for a $129 offering of Snow Leopard on its own. Then again, perhaps it all comes down to how, exactly, Apple enforces the $29 Snow Leopard upgrade policy.

    (As an aside to non-Mac users, note that the non-server version of Mac OS X has no per-user serial number and no activation scheme of any kind, and never has. "Registration" with Apple during the Mac OS X install process is entirely optional and is only used to collect demographic information. Failing to register (or entering entirely bogus registration information) has no effect on your ability to run the OS. This is considered a genuine advantage of Mac OS X, but it also means that Apple has no reliable record of who, exactly, is a "legitimate" owner of Leopard.)

    One possibility was that the $29 Snow Leopard upgrade DVD would only install on top of an existing installation of Leopard. Apple has done this type of thing before, and it bypasses any proof-of-purchase annoyances. It would, however, introduce a new problem. In the event of a hard drive failure or a simple decision to reinstall from scratch, owners of the $29 Snow Leopard upgrade would be forced to first install Leopard and then install Snow Leopard on top of it, perhaps more than doubling the installation time—and quintupling the annoyance.

    Given Apple's history in this area, no one should have been surprised to find out that Apple chose the much simpler option: the $29 "upgrade" DVD of Snow Leopard will, in fact, install on any supported Mac, whether or not it has Leopard installed. It will even install onto an entirely empty hard drive.

    To be clear, installing the $29 upgrade to Snow Leopard on a system not already running a properly licensed copy of Leopard is a violation of the end-user license agreement that comes with the product. But Apple's decision is a refreshing change: rewarding honest people with a hassle-free product rather than trying to punish dishonest people by treating everyone like a criminal. This "honor system" upgrade enforcement policy partially explains the huge jump to $169 for the Mac Box Set, which ends up re-framed as an honest person's way to get iLife and iWork at their usual prices, plus Snow Leopard for $11 more.
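    As a quick sanity check of that "plus Snow Leopard for $11 more" framing, here is the arithmetic, assuming (my assumption, not stated in the review) that iLife '09 and iWork '09 each carried a $79 suggested retail price at the time:

```python
# Back-of-the-envelope check on the Mac Box Set framing.
# Assumption: iLife '09 and iWork '09 each retailed for $79.
ILIFE, IWORK, BOX_SET = 79, 79, 169

snow_leopard_share = BOX_SET - (ILIFE + IWORK)
print(snow_leopard_share)  # -> 11
```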

    And yes, speaking of installing, let's finally get on with it.


    Apple claims that Snow Leopard's installation process is "up to 45% faster." Installation times vary wildly depending on the speed, contents, and fragmentation of the target disk, the speed of the optical drive, and so on. Installation also only happens once, and it's not really an interesting process unless something goes terribly wrong. Still, if Apple's going to make such a claim, it's worth checking out.

    To eliminate as many variables as possible, I installed both Leopard and Snow Leopard from one hard disk onto another (empty) one. It should be noted that this change negates some of Snow Leopard's most notable installation optimizations, which are focused on reducing random data access from the optical disc.

    Even with this disadvantage, the Snow Leopard installation took about 20% less time than the Leopard installation. That's well short of Apple's "up to 45%" claim, but see above (and don't forget the "up to" weasel words). Both versions installed in less than 30 minutes.

    What is striking about Snow Leopard's installation is how quickly the initial Spotlight indexing process completed. Here, Snow Leopard was 74% faster in my testing. Again, the times are small (5:49 vs. 3:20) and again, new installations on empty disks are not the norm. But the shorter wait for Spotlight indexing is worth noting because it's the first indication most users will get that Snow Leopard means business when it comes to performance.
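    The "74% faster" figure follows from the two reported times. A short sketch of the arithmetic, where "X% faster" is read as the ratio of the old time to the new time, minus one (my reading of the convention, not spelled out in the review):

```python
# How roughly 74% falls out of the reported Spotlight indexing times.
def to_seconds(mmss: str) -> int:
    """Convert an m:ss string like '5:49' to a number of seconds."""
    minutes, seconds = map(int, mmss.split(":"))
    return minutes * 60 + seconds

leopard = to_seconds("5:49")        # 349 seconds
snow_leopard = to_seconds("3:20")   # 200 seconds

# Speedup as work rate: old time over new time, minus one.
speedup = leopard / snow_leopard - 1
print(f"{speedup * 100:.1f}% faster")  # -> 74.5% faster
```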

    Another notable thing about installation is what's not installed by default: Rosetta, the facility that allows PowerPC binaries to run on Intel Macs. Okay Apple, we get it. PowerPC is a stiff, bereft of life. It rests in peace. It's rung down the curtain and joined the choir invisible. As far as Apple is concerned, PowerPC is an ex-ISA.

    But not installing Rosetta by default? That seems a little harsh, even foolhardy. What's going to happen when all those users upgrade to Snow Leopard and then double-click what they've probably long since forgotten is a PowerPC application? Perhaps surprisingly, this is what happens:

    Rosetta: auto-installed for your convenience

    That's what I saw when I tried to launch Disk Inventory X on Snow Leopard, an application that, yes, I had long since forgotten was PowerPC-only. After I clicked the "Install" button, I actually expected to be prompted to insert the installer DVD. Instead, Snow Leopard reached out over the network, pulled down Rosetta from an Apple server, and installed it.

    Rosetta auto-install

    No reboot was required, and Disk Inventory X launched successfully after the Rosetta installation completed. Mac OS X has not historically made much use of the install-on-demand approach to system software components, but the facility used to install Rosetta appears quite robust. Upon clicking "Install," an XML property list containing a vast catalog of available Mac OS X packages was downloaded. Snow Leopard uses the same facility to download and install printer drivers on demand, saving another trip to the installer DVD. I hope this technique gains even wider use in the future.
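    For readers unfamiliar with the format, an XML property list of the general kind described above is easy to inspect programmatically. A minimal sketch using Python's standard plistlib; the catalog layout here (a "Packages" array of name/size dictionaries) is hypothetical, not Apple's actual schema:

```python
# Parse a small, hand-written XML property list resembling a package
# catalog. The "Packages" structure is illustrative only.
import plistlib

catalog_xml = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Packages</key>
    <array>
        <dict>
            <key>Name</key><string>Rosetta</string>
            <key>Size</key><integer>3145728</integer>
        </dict>
        <dict>
            <key>Name</key><string>Printer Drivers</string>
            <key>Size</key><integer>104857600</integer>
        </dict>
    </array>
</dict>
</plist>
"""

catalog = plistlib.loads(catalog_xml)
for pkg in catalog["Packages"]:
    print(pkg["Name"], pkg["Size"])
```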

    Installation footprint

    Rosetta aside, Snow Leopard simply puts fewer bits on your disk. Apple claims it "takes up less than half the disk space of the previous version," and that's no lie. A clean, default install (including fully-generated Spotlight indexes) is 16.8 GB for Leopard and 5.9 GB for Snow Leopard. (Incidentally, these numbers are both powers-of-two measurements; see sidebar.)

    A gigabyte by any other name

    Snow Leopard has another trick up its sleeve when it comes to disk usage. The Snow Leopard Finder considers 1 GB to be equal to 10^9 (1,000,000,000) bytes, whereas the Leopard Finder—and, it should be noted, every version of the Finder before it—equates 1 GB to 2^30 (1,073,741,824) bytes. This has the effect of making your hard disk suddenly appear larger after installing Snow Leopard. For example, my "1 TB" hard drive shows up in the Leopard Finder as having a capacity of 931.19 GB. In Snow Leopard, it's 999.86 GB. As you might have guessed, hard disk manufacturers use the powers-of-ten system. It's all quite a mess, really. Though I come down pretty firmly on the powers-of-two side of the fence, I can't blame Apple too much for wanting to match up nicely with the long-established (but still dumb, mind you) hard disk vendors' capacity measurement standard.
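    The arithmetic is easy to check. Taking the drive's marketed capacity at its nominal 10^12 bytes (actual formatted capacity is slightly lower, which is why the figures above read 931.19 and 999.86 rather than these round numbers):

```python
marketed_tb = 10**12  # "1 TB" as the drive vendor counts it

binary_gb  = marketed_tb / 2**30   # Leopard Finder: 1 GB = 1,073,741,824 bytes
decimal_gb = marketed_tb / 10**9   # Snow Leopard Finder: 1 GB = 1,000,000,000 bytes

print(round(binary_gb, 2))   # 931.32
print(round(decimal_gb, 2))  # 1000.0
```

Same disk, same bits; only the definition of "GB" changed.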

    Snow Leopard has several weight loss secrets. The first is obvious: no PowerPC support means no PowerPC code in executables. Recall the maximum possible binary payload in a Leopard executable: 32-bit PowerPC, 64-bit PowerPC, x86, and x86_64. Now cross half of those architectures off the list. Granted, very few applications in Leopard included 64-bit code of any kind, but it's a 50% reduction in size for executables no matter how you slice it.

    Of course, not all the files in the operating system are executables. There are data files, images, audio files, even a little video. But most of those non-executable files have one thing in common: they're usually stored in compressed file formats. Images are PNGs or JPEGs, audio is AAC, video is MPEG-4, and even preference files and other property lists now default to a compact binary format rather than XML.

    In Snow Leopard, other kinds of files climb on board the compression bandwagon. To give just one example, ninety-seven percent of the executable files in Snow Leopard are compressed. How compressed? Let's look:

    % cd Applications/
    % ls -l Mail
    -rwxr-xr-x@ 1 root wheel 0 Jun 18 19:35 Mail

    Boy, that's, uh, pretty small, huh? Is this really an executable or what? Let's check our assumptions.

    % file Applications/
    Applications/ empty

    Yikes! What's going on here? Well, what I didn't show you is that the commands above were run from a Leopard system looking at a Snow Leopard disk. In fact, all compressed Snow Leopard files appear to contain zero bytes when viewed from a pre-Snow Leopard version of Mac OS X. (They look and act perfectly normal when booted into Snow Leopard, of course.)

    So, where's the data? The little "@" at the end of the permissions string in the ls output above (a feature introduced in Leopard) provides a clue. Though the Mail executable has a zero file size, it does have some extended attributes:

    % xattr -l Applications/
    0000   00 00 01 00 00 2C F5 F2 00 2C F4 F2 00 00 00 32    .....,...,.....2
    0010   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
    (184,159 lines snipped)
    2CF610 63 6D 70 66 00 00 00 0A 00 01 FF FF 00 00 00 00    cmpf............
    2CF620 00 00 00 00                                        ....
    0000   66 70 6D 63 04 00 00 00 A0 82 72 00 00 00 00 00    fpmc......r.....

    Ah, there's all the data. But wait, it's in the resource fork? Weren't those deprecated about eight years ago? Indeed they were. What you're witnessing here is yet another addition to Apple's favorite file system hobbyhorse, HFS+.

    At the dawn of Mac OS X, Apple added journaling, symbolic links, and hard links. In Tiger, extended attributes and access control lists were incorporated. In Leopard, HFS+ gained support for hard links to directories. In Snow Leopard, HFS+ learns another new trick: per-file compression.

    The presence of the attribute is the first hint that this file is compressed. This attribute is actually hidden from the xattr command when booted into Snow Leopard. But from a Leopard system, which has no knowledge of its special significance, it shows up as plain as day.

    Even more information is revealed with the help of Mac OS X Internals guru Amit Singh's hfsdebug program, which has quietly been updated for Snow Leopard.

    % hfsdebug /Applications/
    ...
    compression magic = cmpf
    compression type  = 4 (resource fork has compressed data)
    uncompressed size = 7500336 bytes

    And sure enough, as we saw, the resource fork does indeed contain the compressed data. Still, why the resource fork? It's all part of Apple's usual, ingenious backward-compatibility gymnastics. A recent example is the way that hard links to directories show up—and function—as aliases when viewed from a pre-Leopard version of Mac OS X.

    In the case of HFS+ compression, Apple was (understandably) unable to make pre-Snow Leopard systems read and interpret the compressed data, which is stored in ways that did not exist at the time those earlier operating systems were written. But rather than letting applications (and users) running on pre-10.6 systems choke on—or worse, corrupt through modification—the unexpectedly compressed file contents, Apple has chosen to hide the compressed data instead.

    And where can the complete contents of a potentially large file be hidden in such a way that pre-Snow Leopard systems can still copy that file without the loss of data? Why, in the resource fork, of course. The Finder has always correctly preserved Mac-specific metadata and both the resource and data forks when moving or duplicating files. In Leopard, even the lowly cp and rsync commands will do the same. So while it may be a little bit spooky to see all those "empty" 0 KB files when looking at a Snow Leopard disk from a pre-Snow Leopard OS, the chance of data loss is small, even if you move or copy one of the files.

    The resource fork isn't the only place where Apple has decided to smuggle compressed data. For smaller files, hfsdebug shows the following:

    % hfsdebug /etc/asl.conf
    ...
    compression magic = cmpf
    compression type  = 3 (xattr has compressed data)
    uncompressed size = 860 bytes

    Here, the data is small enough to be stored entirely within an extended attribute, albeit in compressed form. And then, the final frontier:

    % hfsdebug /Volumes/Snow Time/Applications/
    ...
    compression magic = cmpf
    compression type  = 3 (xattr has inline data)
    uncompressed size = 8 bytes

    That's right, an entire file's contents stored uncompressed in an extended attribute. In the case of a standard PkgInfo file like this one, those contents are the four-byte classic Mac OS type and creator codes.

    % xattr -l Applications/
    0000   66 70 6D 63 03 00 00 00 08 00 00 00 00 00 00 00    fpmc............
    0010   FF 41 50 50 4C 65 6D 61 6C                         .APPLemal

    There's still the same "fpmc..." preamble seen in all the earlier examples of the attribute, but at the end of the value, the expected data appears as plain as day: type code "APPL" (application) and creator code "emal" (for the Mail application—cute, as per classic Mac OS tradition).
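    Reading the dump above as little-endian fields makes the layout clear. Here is a minimal decoder for just this 25-byte value; the field layout is inferred from the hfsdebug output shown earlier, and treating the 0xFF byte as an "inline data is stored uncompressed" marker is my assumption, not documented behavior.

```python
import struct

# The 25 bytes from the xattr dump of Mail's PkgInfo attribute.
raw = bytes.fromhex(
    "66706D63"          # magic: 'fpmc'
    "03000000"          # compression type 3 (data lives in the xattr itself)
    "0800000000000000"  # uncompressed size: 8 bytes (64-bit little-endian)
    "FF"                # marker byte (assumed: inline payload is uncompressed)
    "4150504C656D616C"  # payload: 'APPL' + 'emal'
)

# 16-byte header: 4-byte magic, 32-bit type, 64-bit uncompressed size.
magic, ctype, usize = struct.unpack_from("<4sIQ", raw, 0)

# Payload follows the 17-byte preamble (16-byte header + marker byte).
payload = raw[17:17 + usize]

print(magic, ctype, usize, payload)  # b'fpmc' 3 8 b'APPLemal'
```

The type and size decoded here match what hfsdebug reported for this file: type 3, eight uncompressed bytes.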

    You may be wondering, if this is all about data compression, how does storing eight uncompressed bytes plus a 17-byte preamble in an extended attribute save any disk space? The answer to that lies in how HFS+ allocates disk space. When storing information in a data or resource fork, HFS+ allocates space in multiples of the file system's allocation block size (4 KB, by default). So those eight bytes will take up a minimum of 4,096 bytes if stored in the traditional way. When allocating disk space for extended attributes, however, the allocation block size is not a factor; the data is packed in much more tightly. In the end, the actual space saved by storing those 25 bytes of data in an extended attribute is over 4,000 bytes.
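    The arithmetic, as a sketch (ignoring the B-tree bookkeeping overhead in the Attributes File):

```python
ALLOC_BLOCK = 4096  # HFS+ default allocation block size

# Eight bytes in a data fork still occupy one whole allocation block...
fork_cost = ALLOC_BLOCK

# ...while the same payload plus its 17-byte preamble packs into the
# Attributes File with no block-size rounding.
attr_cost = 17 + 8

print(fork_cost - attr_cost)  # 4071 bytes saved for this one tiny file
```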

    But compression isn't just about saving disk space. It's also a classic example of trading CPU cycles for decreased I/O latency and bandwidth. Over the past few decades, CPU performance has gotten better (and computing resources more plentiful—more on that later) at a much faster rate than disk performance has increased. Modern hard disk seek times and rotational delays are still measured in milliseconds. In one millisecond, a 2 GHz CPU goes through two million cycles. And then, of course, there's still the actual data transfer time to consider.

    Granted, several levels of caching throughout the OS and hardware work mightily to hide these delays. But those bits have to come off the disk at some point to fill those caches. Compression means that fewer bits have to be transferred. Given the almost comical glut of CPU resources on a modern multi-core Mac under normal use, the total time needed to transfer a compressed payload from the disk and use the CPU to decompress its contents into memory will still usually be far less than the time it'd take to transfer the data in uncompressed form.
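    The tradeoff is easy to demonstrate with any general-purpose deflate-style compressor. This sketch just illustrates the ratio on repetitive data; it is not Apple's on-disk format, merely the same bargain: a few CPU cycles buy back many bits that never have to cross the disk interface.

```python
import zlib

# Repetitive data, as text and executable payloads often are.
data = b"The quick brown fox jumps over the lazy dog. " * 2000

# Compress once (as if writing to disk)...
packed = zlib.compress(data, 6)
print(len(data), "bytes uncompressed,", len(packed), "bytes on the wire")

# ...and decompress on read; the round trip is lossless.
restored = zlib.decompress(packed)
assert restored == data
```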

    That explains the potential performance benefits of transferring less data, but the use of extended attributes to store file contents can actually make things faster, as well. It all has to do with data locality.

    If there's one thing that slows down a hard disk more than transferring a large amount of data, it's moving its heads from one part of the disk to another. Every move means time for the head to start moving, then stop, then ensure that it's correctly positioned over the desired location, then wait for the spinning disk to put the desired bits beneath it. These are all real, physical, moving parts, and it's amazing that they do their dance as quickly and efficiently as they do, but physics has its limits. These motions are the true performance killers for rotational storage like hard disks.

    The HFS+ volume format stores all its information about files—metadata—in two primary locations on disk: the Catalog File, which stores file dates, permissions, ownership, and a host of other things, and the Attributes File, which stores "named forks."

    Extended attributes in HFS+ are implemented as named forks in the Attributes File. But unlike resource forks, which can be very large (up to the maximum file size supported by the file system), extended attributes in HFS+ are stored "inline" in the Attributes File. In practice, this means a limit of about 128 bytes per attribute. But it also means that the disk head doesn't need to take a trip to another part of the disk to get the actual data.

    As you can imagine, the disk blocks that make up the Catalog and Attributes files are frequently accessed, and therefore more likely than most to be in a cache somewhere. All of this conspires to make the complete storage of a file, including both its metadata and its data, within the B-tree-structured Catalog and Attributes files an overall performance win. Even an eight-byte payload that balloons to 25 bytes is not a concern, as long as it's still less than the allocation block size for normal data storage, and as long as it all fits within a B-tree node in the Attributes File that the OS has to read in its entirety anyway.

    There are other significant contributions to Snow Leopard's reduced disk footprint (e.g., the removal of unnecessary localizations and "designable.nib" files) but HFS+ compression is by far the most technically interesting.

    Installer intelligence

    Apple makes two other interesting promises about the installation process:

    Snow Leopard checks your applications to make sure they're compatible and sets aside any programs known to be incompatible. In case a power outage interrupts your installation, it can start again without losing any data.

    The setting aside of "known incompatible" applications is undoubtedly a response to the "blue screen" problems some users encountered when upgrading from Tiger to Leopard two years ago, which were caused by the presence of incompatible—and some would say "illicit"—third-party system extensions. I have a decidedly pragmatic view of such software, and I'm glad to see Apple taking a similarly practical approach to minimizing its impact on users.

    Apple can't be expected to detect and disable all potentially incompatible software, of course. I suspect only the most common or highest-profile risky software is detected. If you're a developer, this installer feature may be a good way to find out if you're on Apple's sh*t list.

    As for continuing an installation after a power failure, I didn't have the guts to test this feature. (I also have a UPS.) For long-running processes like installation, this kind of added robustness is welcome, especially on battery-powered devices like laptops.

    I mention these two details of the installation process mostly because they highlight the kinds of things that are possible when developers at Apple are given time to polish their respective components of the OS. You might think that the installer team would be hard-pressed to come up with enough to do during a nearly two-year development cycle. That's clearly not the case, and customers will reap the benefits.

    Snow Leopard's new looks

    I've long yearned for Apple to make a clean break, at least visually, from Mac OS X's Aqua past. Alas, I will be waiting a bit longer, because Snow Leopard ushers in no such revolution. And yet here I am, beneath a familiar-looking section heading that seems to suggest otherwise. The truth is, Snow Leopard actually changes the appearance of nearly every pixel on your screen—but not in the way you might imagine.

    Since the dawn of color on the Macintosh, the operating system has used a default output gamma correction value of 1.8. Meanwhile, Windows—aka the rest of the world—has used a value of 2.2. Though this may not seem significant to anyone but professional graphic artists, the difference is usually obvious to even a casual observer when viewing the same image on both kinds of displays side by side.

    Though Mac users will probably instinctively prefer the 1.8 gamma image that they're used to, Apple has decided that this historical difference is more trouble than it's worth. The default output gamma correction value in Snow Leopard is now 2.2, just like everyone else's. Done and done.
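    For the curious, the conversion itself is simple power-law math. This sketch re-encodes a normalized channel value so that a pixel authored for a 1.8-gamma display produces the same light output on a 2.2-gamma display (a simplification that ignores the full color-management pipeline):

```python
def regamma(value, old=1.8, new=2.2):
    """Decode a 0.0-1.0 channel value with the old display gamma,
    then re-encode it for the new one."""
    linear = value ** old          # what the old display actually emitted
    return linear ** (1.0 / new)   # the stored value the new display needs

# A mid-gray authored for gamma 1.8 must be stored brighter (~0.567)
# to look the same on a gamma 2.2 display.
print(round(regamma(0.5), 3))
```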

    If they notice at all, users will likely experience this change as a feeling that the Snow Leopard user interface has a bit more contrast than Leopard's. This is reinforced by the new default desktop background, a re-drawn, more saturated version of Leopard's default desktop. (Note that these are two entirely different images and not an attempt to demonstrate the effects of different gamma correction settings.)

    Leopard
    Snow Leopard
    Dock Exposé spotlight effect

    But even beyond color correction, true to form, Apple could not resist adding a few graphical tweaks to the Snow Leopard interface. The most obvious changes are related to the Dock. First, there's the new "spotlight" look triggered by a click-and-hold on an application icon in the Dock. (This activates Exposé, but only for the windows belonging to the application that was clicked. More later.)

    Furthermore, any and all pop-up menus on the Dock—and only on the Dock—have a unique look in Snow Leopard, complete with a custom selection appearance (which, for a change, does a passable job of matching the system-wide selection appearance setting).

    New Dock menu appearance. Mmmm… arbitrary.

    For Mac users of a certain age, these menus may bring to mind Apple's Hi-Tech appearance theme from the bad old days of Copland. They're actually considerably more subtle, however. Note the translucent edges which accentuate the rounded corners. The gradient on the selection highlight is also admirably restrained.

    Nevertheless, this is an entirely new look for a single (albeit commonly used) application, and it does clash a bit with the default "slanty, shiny shelf" appearance of the Dock. But I've already had my say about that, and more. If the motto of Snow Leopard's appearance was to "first, do no harm," then I suppose I'm inclined to give it a passing grade—almost.

    If I had to characterize what's wrong with Snow Leopard's visual additions with just two words, it'd be these: everything fades. Apple has sprinkled Core Animation fairy dust over seemingly every application in Snow Leopard. If any part of the user interface appears, disappears, or changes in any significant way, it's accompanied by an animation and one or more fades.

    In moderation, such effects are fine. But in several instances, Snow Leopard crosses the line. Or rather, it crosses my line, which, it should be noted, is located far inside the territories of Candy Land. Others with a much lower tolerance for animations who are already galled by the frippery in Leopard and earlier releases will find little to like in Snow Leopard's visual changes.

    The one that really drove me over the edge is the fussy little dance of the filename area that occurs in the Finder (surprise!) when renaming a file on the desktop. There's just something about so many cross-fades, color changes, and text offsets occurring so rapidly and concentrated into such a small area that makes me want to scream. And whether or not I'm actually waiting for these animations to finish before I can continue to use my computer, it certainly feels that way sometimes.

    Still, I must unenthusiastically predict that most normal people (i.e., the ones who will not read this entire article) will either find these added visual touches delightful, or (much more likely) not notice them at all.


    Animation aside, the visual sameness of Snow Leopard presents a bit of a marketing challenge for Apple. Even beyond the obvious problem of how to promote an operating system upgrade with "no new features" to consumers, there's the issue of how to get people to notice that this new product exists at all.

    In the run-up to Snow Leopard's release, Apple stuck to a modified version of Leopard's outer space theme. It was in the keynote slideshows, on the WWDC banners, on the developer release DVDs, and all over the Mac OS X section of Apple's website. The header image from Apple's Mac OS X webpage as of a week before Snow Leopard's release appears below. It's pretty cut and dried: outer space, stars, rich purple nebula, lens flare.

    Snow. The final frontier.

    Then came the golden master of Snow Leopard, which, in a pleasant change from past releases, was distributed to developers a few weeks before Snow Leopard hit the shelves. Its installer introduced an entirely different look which, as it turns out, was carried over to the retail packaging. For a change, let's line up the discs instead of the packaging (which is rapidly shrinking to barely contain the disc anyway). Here's Mac OS X 10.0 through 10.6, top to bottom and left to right. (The 10.0 and 10.1 discs looked essentially identical and have been coalesced.)

    One of these things is not like the others…

    Yep, it's a snow leopard. With actual snow on it. It's a bit on the nose for my taste, but it's not without its charms. And it does have one big thing going for it: it's immediately recognizable as something new and different. "Unmistakable" is how I'd sum up the packaging. Eight years of the giant, centered, variously adorned "X" and then boom: a cat. There's little chance that anyone who's seen Leopard sitting on the shelf of their local Apple store for the past two years will fail to notice that this is a new product.

    (If you'd like your own picture of Snowy the snow leopard (that's right, I've named him), Apple was kind enough to include a desktop background image with the OS. Self-loathing Windows users may download it directly.)

    Warning: internals ahead

    We've arrived at the start of the customary "internals" section. Snow Leopard is all about internal changes, and this is reflected in the content of this review. If you're only interested in the user-visible changes, you can skip ahead, but you'll be missing out on the meat of this review and the heart of Apple's new OS.

    64-bit: the road leads ever on

    Mac OS X started its journey to 64-bit back in 2003 with the release of Panther, which included the bare minimum support for the then-new PowerPC G5 64-bit CPU. In 2005, Tiger brought with it the ability to create true 64-bit processes—as long as they didn't link with any of the GUI libraries. Finally, Leopard in 2007 included support for 64-bit GUI applications. But again, there was a caveat: 64-bit support extended to Cocoa applications only. It was, effectively, the end of the road for Carbon.

    Despite Leopard's seemingly impressive 64-bit bona fides, there are a few more steps before Mac OS X can attain complete 64-bit nirvana. The diagrams below illustrate.

    Diagrams: 64-bit support in Mac OS X 10.4 Tiger, Mac OS X 10.5 Leopard, and Mac OS X 10.6 Snow Leopard

    As we'll see, all that yellow in the Snow Leopard diagram represents its capability, not necessarily its default mode of operation.


    Snow Leopard is the first version of Mac OS X to ship with a 64-bit kernel ("K64" in Apple's parlance), but it's not enabled by default on most systems. The reason for this is simple. Recall that there's no "mixed mode" in Mac OS X. At runtime, a process is either 32-bit or 64-bit, and can only load other code—libraries, plug-ins, etc.—of the same kind.

    An important class of plug-ins loaded by the kernel is device drivers. Were Snow Leopard to default to the 64-bit kernel, only 64-bit device drivers would load. And seeing as Snow Leopard is the first version of Mac OS X to include a 64-bit kernel, there'd be precious few of those on customers' systems on launch day.

    And so, by default, Snow Leopard boots with a 64-bit kernel only on Xserves from 2008 or later. I guess the assumption is that all of the devices commonly attached to an Xserve will be supported by 64-bit drivers supplied by Apple in Snow Leopard itself.

    Perhaps surprisingly, not all Macs with 64-bit processors are even able to boot into the 64-bit kernel. Though this may change in subsequent point releases of Snow Leopard, the table below lists all the Macs that are either capable of or default to booting K64. (To find the "Model name" of your Mac, select "About This Mac" from the Apple menu, then click the "More info…" button and read the "Model Identifier" line in the window that appears.)

    Product                    Model name      K64 status
    Early 2008 Mac Pro         MacPro3,1       Capable
    Early 2008 Xserve          Xserve2,1       Default
    MacBook Pro 15"/17"        MacBookPro4,1   Capable
    iMac                       iMac8,1         Capable
    UniBody MacBook Pro 15"    MacBookPro5,1   Capable
    UniBody MacBook Pro 17"    MacBookPro5,2   Capable
    Mac Pro                    MacPro4,1       Capable
    iMac                       iMac9,1         Capable
    Early 2009 Xserve          Xserve3,1       Default

    For all K64-capable Macs, boot while holding down the "6" and "4" keys simultaneously to select the 64-bit kernel. For a more permanent solution, use the nvram command to add arch=x86_64 to your boot-args string, or edit the file /Library/Preferences/SystemConfiguration/ and add arch=x86_64 to the Kernel Flags string:

    ...
    <key>Kernel</key>
    <string>mach_kernel</string>
    <key>Kernel Flags</key>
    <string>arch=x86_64</string>
    ...

    To switch back to the 32-bit kernel, hold down the "3" and "2" keys during boot, or use one of the techniques above, replacing "x86_64" with "i386".

    We've already discussed why, at least initially, you probably won't want to boot into K64. But as Snow Leopard adoption ramps up and 64-bit updates of existing kernel extensions become available, why might you actually want to use the 64-bit kernel?

    The first reason has to do with RAM, and not in the way you might think. Though Leopard uses a 32-bit kernel, Macs running Leopard can hold and use far more RAM than the 4 GB limit the "32-bit" qualifier might seem to imply. But as RAM sizes increase, there's another concern: address space exhaustion—not for applications, but for the kernel itself.

    As a 32-bit process, the kernel itself is limited to a 32-bit (i.e., 4GB) address space. That may not seem like a problem; after all, should the kernel really need more than 4GB of memory to do its job? But recall that part of the kernel's job is to track and manage system memory. The kernel uses a 64-byte structure to track the status of each 4KB page of RAM used on the system.

    That's 64 bytes, not kilobytes. It hardly seems like a lot. But now consider a Mac in the not-too-distant future containing 96GB of RAM. (If this sounds ridiculous to you, think of how ridiculous the 8GB of RAM in the Mac I'm typing on right now would have sounded to you five years ago.) Tracking 96GB of RAM requires 1.5GB of kernel address space. Using more than a third of the kernel's address space just to track memory is a pretty uncomfortable situation.
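    The arithmetic behind that 1.5GB figure:

```python
PAGE_SIZE   = 4 * 1024  # bytes per physical page
PAGE_RECORD = 64        # bytes of kernel bookkeeping per page

ram = 96 * 2**30             # a hypothetical 96GB Mac
pages = ram // PAGE_SIZE     # 25,165,824 pages to track
overhead = pages * PAGE_RECORD

print(overhead / 2**30)  # 1.5 (GB of kernel address space, out of 4 total)
```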

    A 64-bit kernel, on the other hand, has a virtually unlimited kernel address space (16 exabytes). K64 is an inevitable necessity, given the rapidly increasing size of system memory. Though you may not need it today on the desktop, it's already common for servers to have double-digit gigabytes of RAM installed.

    The other thing K64 has going for it is speed. The x86 instruction set architecture has had a bit of a tortured history. When designing the x86-64 64-bit extension of the x86 architecture, AMD took the opportunity to leave behind some of the ugliness of the past and embrace more modern features: more registers, new addressing modes, non-stack-based floating point capabilities, etc. K64 reaps these benefits. Apple makes the following claims about its performance:

  • 250% faster system call entry point
  • 70% faster user/kernel memory copy

    Focused benchmarking would bear these out, I'm sure. But in daily use, you're unlikely to be able to attribute any particular performance boost to the kernel. Think of K64 as removing bottlenecks from the few (usually server-based) applications that actually do exercise these aspects of the kernel heavily.

    If it makes you feel better to know that your kernel is operating more efficiently, and that, were you to actually have 96GB of RAM installed, you would not risk starving the kernel of address space, and if you don't have any 32-bit drivers that you absolutely need to use, then by all means, boot into the 64-bit kernel.

    For everyone else, my advice is to be glad that K64 will be ready and waiting for you when you eventually do need it—and please do encourage all the vendors that make kernel extensions you care about to add K64 support as soon as possible.

    Finally, this is worth repeating: please keep in mind that you do not need to run the 64-bit kernel in order to run 64-bit applications or install more than 4GB of RAM in your Mac. Applications run just fine in 64-bit mode on top of the 32-bit kernel, and even in earlier versions of Mac OS X it's been possible to install and take advantage of much more than 4GB of RAM.

    64-bit applications

    While Leopard may have brought with it support for 64-bit GUI applications, it actually included very few of them. In fact, by my count, only two 64-bit GUI applications shipped with Leopard: Xcode (an optional install) and Chess. And though Leopard made it possible for third-party developers to produce 64-bit (albeit Leopard-only) GUI applications, very few have—sometimes due to unfortunate realities, but most often because there's been no good reason to do so and abandon users of Mac OS X 10.4 or earlier in the process.

    Apple is now pushing the 64-bit transition much harder. This starts with leading by example. Snow Leopard ships with only four end-user GUI applications that are not 64-bit: iTunes, Grapher, Front Row, and DVD Player. Everything else is 64-bit. The Finder, the Dock, Mail, TextEdit, Safari, iChat, Address Book, Dashboard, Help Viewer, Installer, Terminal, Calculator—you name it, it's 64-bit.

    The second big carrot (or stick, depending on how you look at it) is the continued lack of 32-bit support for new APIs and technologies. Leopard started the trend, leaving deprecated APIs behind and only porting the new ones to 64-bit. The improved Objective-C 2.0 runtime introduced in Leopard was also 64-bit-only.

    Snow Leopard continues along similar lines. The Objective-C 2.1 runtime's non-fragile instance variables, exception model unified with C++, and faster vtable dispatch remain available only to 64-bit applications. But the most significant new 64-bit-only API is QuickTime X—significant enough to be addressed separately, so stay tuned.

    64-bits or bust

    All of this is Apple's not-so-subtle way of telling developers that the time to move to 64-bit is now, and that 64-bit should be the default for all new applications, whether a developer thinks it's "needed" or not. In most cases, these new APIs have no intrinsic connection to 64-bit. Apple has simply chosen to use them as additional forms of persuasion.

    Despite all of the above, I'd still call Snow Leopard merely the penultimate step in Mac OS X's journey to be 64-bit from top to bottom. I fully expect Mac OS X 10.7 to boot into the 64-bit kernel by default, to ship with 64-bit versions of all applications, plug-ins, and kernel extensions, and to leave even more legacy and deprecated APIs to fade away in the land of 32-bit.

    QuickTime X

    Apple did something a bit odd in Leopard when it neglected to port the C-based QuickTime API to 64-bit. At the time, it didn't seem like such a big deal. Mac OS X's transition to 64-bit had already spanned many years and several major versions. One could imagine that it just wasn't yet QuickTime's turn to go 64-bit.

    As it turns out, my terse but pessimistic assessment of the situation at the time was accurate: QuickTime got the "Carbon treatment." Like Carbon, the venerable QuickTime API that we know and love will not be making the transition to 64-bit—ever.

    To be clear, QuickTime the technology and QuickTime the brand will most definitely be coming to 64-bit. What's being left behind in 32-bit-only form is the C-based API introduced in 1991 and built upon for 18 years thereafter. Its replacement in the world of 64-bit in Snow Leopard is the aptly named QuickTime X.

    The "X" in QuickTime X, like the one in in Mac OS X, is pronounced "ten." This is but the first of many eerie parallels. like Mac OS X before it, QuickTime X:

  • aims to make a clean break from its predecessor
  • is based on technology originally developed for another platform
  • includes transparent compatibility with its earlier incarnation
  • promises better performance and a more modern architecture
  • lacks many notable features in its initial release

    Let's take these one at a time. First, why is a clean break needed? Put simply, QuickTime is old—really old. The horribly blocky, postage-stamp-size video displayed by its initial release in 1991 was considered a technological tour de force.

    At the time, the fastest Macintosh money could buy contained a 25 MHz CPU. The ridiculous chart to the right is meant to hammer home this point. Forward-thinking design can only get you so far. The shape of the world a technology is born into eventually, inevitably dictates its fate. This is especially true for long-lived APIs like QuickTime with a strong bent towards backward compatibility.

    As the first successful implementation of video on a personal computer, it's frankly amazing that the QuickTime API has lasted as long as it has. But the world has moved on. Just as Mac OS found itself mired in a ghetto of cooperative multitasking and unprotected memory, QuickTime limps into 2009 with antiquated notions of concurrency and subsystem layering baked into its design.

    When it came time to write the video-handling code for the iPhone, the latest version of QuickTime, QuickTime 7, simply wasn't up to the task. It had grown too bloated and inefficient during its life on the desktop, and it lacked good support for the GPU-accelerated video playback needed to handle modern video codecs on a handheld (even one with a CPU sixteen times the clock speed of any available in a Mac when QuickTime 1.0 was released). And so, Apple created a tight, modern, GPU-friendly video playback engine that could fit comfortably within the RAM and CPU constraints of the iPhone.

    Hmm. An aging desktop video API in need of a replacement. A fresh, new video library with good performance even on (comparatively) anemic hardware. Apple connected the dots. But the trick is always in the transition. Happily, this is Apple's forte. QuickTime itself has already lived on three different CPU architectures and three entirely different operating systems.

    The switch to 64-bit is yet another (albeit less dramatic) inflection point, and Apple has chosen it to mark the boundary between the old QuickTime 7 and the new QuickTime X. It's done this in Snow Leopard by limiting all use of QuickTime by 64-bit applications to the QTKit Objective-C framework.

    QTKit's new world order

    QTKit is not new; it began its life in 2005 as a more native-feeling interface to QuickTime 7 for Cocoa applications. This extra layer of abstraction is the key to the QuickTime X transition. QTKit now hides within its object-oriented walls both QuickTime 7 and QuickTime X. Applications use QTKit as before, and behind the scenes QTKit will choose whether to use QuickTime 7 or QuickTime X to fulfill each request.

    If QuickTime X is so much better, why doesn't QTKit use it for everything? The answer is that QuickTime X, like its Mac OS X namesake, has very limited capabilities in its initial release. While QuickTime X supports playback, capture, and exporting, it does not support general-purpose video editing. It also supports only "modern" video formats—basically, anything that can be played by an iPod, iPhone, or Apple TV. As for other video codecs, well, you can forget about handling them with plug-ins because QuickTime X doesn't support those either.

    For every one of the cases where QuickTime X is not up to the job, QuickTime 7 will fill in. Cutting, copying, and pasting portions of a video? QuickTime 7. Extracting individual tracks from a movie? QuickTime 7. Playing any movie not natively supported by an existing Apple handheld device? QuickTime 7. Augmenting QuickTime's codec support using a plug-in of any kind? You guessed it: QuickTime 7.

    But wait a second. If QTKit is the only way for a 64-bit application to use QuickTime, and QTKit multiplexes between QuickTime 7 and QuickTime X behind the scenes, and QuickTime 7 is 32-bit-only, and Mac OS X does not support "mixed mode" processes that can execute both 32-bit and 64-bit code, then how the heck does a 64-bit process do anything that requires the QuickTime 7 back-end?

    To find out, fire up the new 64-bit QuickTime Player application (which will be addressed separately later) and open a movie that requires QuickTime 7. Let's say, one that uses the Sorenson video codec. (Remember that? Good times.) Sure enough, it plays just fine. But search for "QuickTime" in the Activity Monitor application and you'll see this:

    Pretty sneaky, sis: 32-bit QTKitServer process

    And the answer is revealed. When a 64-bit application using QTKit requires the services of the 32-bit-only QuickTime 7 back-end, QTKit spawns a separate 32-bit QTKitServer process to do the work and communicate the results back to the originating 64-bit process. If you leave Activity Monitor open while using the new QuickTime Player application, you can watch the QTKitServer processes come and go as needed. This is all handled transparently by the QTKit framework; the application itself need not be aware of these machinations.

    Yes, it's going to be a long, long time before QuickTime 7 disappears completely from Mac OS X (at least Apple was kind enough not to call it "QuickTime Classic"), but the path forward is clear. With each new release of Mac OS X, expect the capabilities of QuickTime X to expand, and the number of things that still require QuickTime 7 to decrease. In Mac OS X 10.7, for example, I imagine that QuickTime X will gain support for plug-ins. And surely by Mac OS X 10.8, QuickTime X will have complete video editing support. All of this will happen beneath the unifying facade of QTKit until, eventually, the QuickTime 7 back-end is no longer needed at all.

    Say what you mean

    In the meantime, perhaps surprisingly, many of the current limitations of QuickTime X actually highlight its unique advantages and inform the evolving QTKit API. Though there is no direct way for a developer to request that QTKit use the QuickTime X back-end, there are several indirect means to influence the decision. The key is the QTKit API, which relies heavily on the concept of intent.

    QuickTime versions 1 through 7 use a single representation of all media resources internally: a Movie object. This representation includes information about the individual tracks that make up the movie, the sample tables for each track, and so on—all the information QuickTime needs to understand and manipulate the media.

    This sounds great until you realize that doing anything with a media resource in QuickTime requires the construction of this comprehensive Movie object. Consider playing an MP3 file with QuickTime, for example. QuickTime must create its internal Movie object representation of the MP3 file before it can begin playback. Unfortunately, the MP3 container format seldom contains comprehensive information about the structure of the audio. It's usually just a stream of packets. QuickTime must laboriously scan and parse the entire audio stream in order to complete the Movie object.

    QuickTime 7 and earlier versions make this process less painful by doing the scanning and parsing incrementally in the background. You can see this in many QuickTime-based player applications in the form of a progress bar overlaid on the movie controller. The image below shows a 63MB MP3 podcast loading in the Leopard version of QuickTime Player. The shaded portion of the movie timeline slowly fills the dotted area from left to right.

    QuickTime 7 doing more work than necessary

    Though playback can begin almost immediately (provided you play from the beginning, that is), it's worthwhile to take a step back and consider what's going on here. QuickTime is creating a Movie object suitable for any operation that QuickTime can perform: editing, track extraction or addition, exporting, you name it. But what if all I want to do is play the file?

    The trouble is, the QuickTime 7 API lacks a way to express this kind of intent. There is no way to say to QuickTime 7, "Just open this file as quickly as possible so that I can play it. Don't bother reading every single byte of the file from the disk and parsing it to determine its structure just in case I decide to edit or export the content. That is not my intent. Please, just open it for playback."

    The QTKit API in Snow Leopard provides exactly this capability. In fact, the only way to be eligible for the QuickTime X back-end at all is to explicitly express your intent not to do anything QuickTime X cannot handle. Furthermore, any attempt to perform an operation that lies outside your previously expressed intent will cause QTKit to raise an exception.

    The intent mechanism is also the way that the new features of QuickTime X are exposed, such as the ability to asynchronously load large or distantly located (e.g., over a slow network link) movie files without blocking the UI running on the main thread of the application.

    Indeed, there are many reasons to do what it takes to get on board the QuickTime X train. For the media formats it supports, QuickTime X is less taxing on the CPU during playback than QuickTime 7. (This is beyond the fact that QuickTime X does not waste time preparing its internal representation of the movie for editing and export when playback is all that's desired.) QuickTime X also supports GPU-accelerated playback of H.264, but, in this initial release, only on Macs equipped with an NVIDIA 9400M GPU (i.e., some 2009 iMacs and several models of MacBooks from 2008 and 2009). Finally, QuickTime X includes comprehensive ColorSync support for video, which is long overdue.

    The X factor

    This is just the start of a long journey for QuickTime X, and seemingly not a very auspicious one, at that. A QuickTime engine with no editing support? No plug-ins? It seems ridiculous to release it at all. But this has been Apple's way in recent years: steady, deliberate progress. Apple aims to ship no features before their time.

    As anxious as developers may be for a full-featured, 64-bit successor to the QuickTime 7 engine, Apple itself is sitting on top of one of the largest QuickTime-riddled (and Carbon-addled, to boot) code bases in the industry: Final Cut Studio. Thus far, it remains stuck in 32-bit. To say that Apple is "highly motivated" to extend the capabilities of QuickTime X would be an understatement.

    Nevertheless, don't expect Apple to rush forward foolishly. Duplicating the functionality of a continually developed, 18-year-old API will not happen overnight. It will take years, and it will be even longer before every important Mac OS X application is updated to use QTKit exclusively. Transitions. Gotta love 'em.

    File system API unification

    Mac OS X has historically supported many different ways of referring to files on disk from within an application. Plain old paths (e.g., /Users/john/Documents/myfile) are supported at the lowest levels of the operating system. They're simple and predictable, but perhaps not such a great choice as the only way an application tracks files. Consider what happens if an application opens a file based on a path string, then the user moves that file somewhere else while it's still being edited. When the application is told to save the file, if it only has the file path to work with, it will end up creating a new file at the old location, which is almost certainly not what the user wanted.

    Classic Mac OS had a more sophisticated internal representation of files that enabled it to track files independent of their actual locations on disk. This was done with the help of the unique file IDs supported by HFS/HFS+. The Mac OS X incarnation of this concept is the FSRef data type.

    Finally, in the modern age, URLs have become the de facto representation for files that may be located somewhere other than the local machine. URLs can also refer to local files, but in that case they have all the same disadvantages as file paths.

    This diversity of data types is reflected in Mac OS X's file system APIs. Some functions take file paths as arguments, some expect opaque references to files, and still others work only with URLs. Programs that use these APIs often spend a lot of their time converting file references from one representation to another.

    The situation is similar when it comes to getting information about files. There are a large number of file system metadata retrieval functions at all levels of the operating system, and no single one of them is comprehensive. Getting all the available information about a file on disk requires making several separate calls, each of which may expect a different type of file reference as an argument.

    Here's an example Apple provided at WWDC. Opening a single file in the Leopard version of the Preview image viewer application results in:

  • Four conversions of an FSRef to a file path
  • Ten conversions of a file path to an FSRef
  • Twenty-five calls to getattrlist()
  • Eight calls to stat()/lstat()
  • Four calls to open()/close()
    In Snow Leopard, Apple has created a new, unified, comprehensive set of file system APIs built around a single data type: URLs. But these are URL "objects"—namely, the opaque data types NSURL and CFURL, with a toll-free bridge between them—that have been imbued with all the desirable attributes of an FSRef.

    Apple settled on these data types because their opaque nature allowed this kind of enhancement, and because there are so many existing APIs that use them. URLs are also the most future-proof of all the choices, with the scheme portion providing nearly unlimited flexibility for new data types and access mechanisms. The new file system APIs built around these opaque URL types support caching and metadata prefetching for a further performance boost.

    There's also a new on-disk representation called a Bookmark (not to be confused with a browser bookmark), which is like a more network-savvy replacement for classic Mac OS aliases. Bookmarks are the most robust way to create a reference to a file from within another file. It's also possible to attach arbitrary metadata to each Bookmark. For example, if an application wants to keep a persistent list of "favorite" files plus some application-specific information about them, and it wants to be resilient to any movement of these files behind its back, Bookmarks are the best tool for the job.

    I mention all of this not because I expect file system APIs to be all that interesting to people without my particular fascination with this part of the operating system, but because, like Core Text before it, it's an indication of exactly how young Mac OS X really is as a platform. Even after seven major releases, Mac OS X is still struggling to move out from the shadow of its three ancestors: NeXTSTEP, classic Mac OS, and BSD Unix. Or perhaps it just goes to show how ruthlessly Apple's core OS team is driven to replace old and crusty APIs and data types with new, more modern versions.

    It will be a long time before the benefits of these changes trickle down (or is it up?) to end-users in the form of Mac applications that are written or modified to use these new APIs. Most well-written Mac applications already exhibit most of the desirable behavior. For example, the TextEdit application in Leopard will correctly detect when a file it's working on has moved.

    TextEdit: a good Mac OS X citizen

    Of course, the key modifier here is "well-written." Simplifying the file system APIs means that more developers will be willing to expend the effort—now greatly reduced—to provide such user-friendly behaviors. The accompanying performance boost is just icing on the cake, and one more reason for developers to alter their existing, working applications to use these new APIs.

    Doing more with more

    Moore's Law is widely cited in technology circles—and also widely misunderstood. It's most often used as shorthand for "computers double in speed every year or so," but that's not what Gordon Moore wrote at all. His 1965 article in Electronics magazine touched on many topics in the semiconductor industry, but if it had to be summed up in a single "law," it would be, roughly, that the number of transistors that fit onto a square inch of silicon doubles every 12 months.

    Moore later revised that to two years, but the time period is not what people get wrong. The problem is confusing a doubling of transistor density with a doubling of "computer speed." (Even more problematic is declaring a "law" based on a single paper from 1965, but we'll put that aside for now. For a more thorough discussion of Moore's Law, please read this classic article by Jon Stokes.)

    For decades, each increase in transistor density was, in fact, accompanied by a comparable increase in computing speed, thanks to ever-rising clock speeds and the dawn of superscalar execution. This worked great—existing code ran faster on each new CPU—until the grim realities of power density put an end to the fun.

    Moore's Law continues, at least for now, but our ability to make code run faster with each new increase in transistor density has slowed considerably. The free lunch is over. CPU clock speeds have stagnated for years, sometimes actually going backwards. (The latest top-of-the-line 2009 Mac Pro contains a 2.93 GHz CPU, whereas the 2008 model could be equipped with a 3.2 GHz CPU.) Adding execution units to a CPU has also long since reached the point of diminishing returns, given the limits of instruction-level parallelism in common application code.

    And yet we've still got all these new transistors raining down on us, more every year. The challenge is to find new ways to use them to actually make computers faster.

    Thus far, the semiconductor industry's answer has been to give us more of what we already have. Where once a CPU contained a single logical processing unit, now CPUs in even the lowliest desktop computers contain two processor cores, with high-end models sporting two chips with eight logical cores each. Granted, the cores themselves are also getting faster, usually by doing more at the same clock speed as their predecessors, but that's not happening at nearly the rate that the cores are multiplying.

    Unfortunately, generally speaking, a dual-core CPU will not run your application twice as fast as a single-core CPU. In fact, your application probably won't run any faster at all unless it was written to take advantage of more than a single logical CPU. Presented with a glut of transistors, chipmakers have turned around and provided more computing resources than programmers know what to do with, transferring much of the responsibility for making computers faster to the software guys.

    We're with the operating system and we're here to help

    It's into this environment that Snow Leopard is born. If there's one responsibility (aside from security) that an operating system vendor should feel in the year 2009, it's finding a way for applications—and the OS itself—to utilize the ever-growing wealth of computing resources at their disposal. If I had to pick a single technological "theme" for Snow Leopard, this would be it: helping developers utilize all this newfound silicon; helping them do more with more.

    To that end, Snow Leopard includes two significant new APIs backed by several smaller, but equally important, infrastructure improvements. We'll start at the bottom with, believe it or not, the compiler.

    LLVM and Clang

    Apple made a strategic investment in the LLVM open source project several years ago. I covered the fundamentals of LLVM in my Leopard review. (If you're not up to speed, please catch up on the topic before continuing.) In it, I described how Leopard used LLVM to provide dramatically more efficient JIT-compiled software implementations of OpenGL functions. I ended with the following admonition:

    Don't be misled by its humble use in Leopard; Apple has grand plans for LLVM. How grand? How about swapping out the guts of the gcc compiler Mac OS X uses now and replacing them with the LLVM equivalents? That project is well underway. Not ambitious enough? How about ditching gcc entirely, replacing it with a completely new LLVM-based (but gcc-compatible) compiler system? That project is called Clang, and it's already yielded some impressive performance results.

    With the introduction of Snow Leopard, it's official: Clang and LLVM are the Apple compiler strategy going forward. LLVM even has a snazzy new logo, a not-so-subtle homage to a well-known compiler design textbook:

    LLVM! Clang! Rawr!

    Apple now offers a total of four compilers for Mac OS X: GCC 4.0, GCC 4.2, LLVM-GCC 4.2 (the GCC 4.2 front-end combined with an LLVM back-end), and Clang, in order of increasing LLVM-ness. Here's a diagram:

    Mac OS X compilers

    All of these compilers are binary-compatible on Mac OS X, which means you can, for example, build a library with one compiler and link it into an executable built with another. They're also all command-line and source-compatible—in theory, anyway. Clang does not yet support some of the more esoteric features of GCC. Clang also only supports C, Objective-C, and a little bit of C++ (Clang(uage), get it?), whereas GCC supports many more languages. Apple is committed to full C++ support for Clang, and hopes to work out the remaining GCC incompatibilities during Snow Leopard's lifetime.

    Clang brings with it the two headline attributes you expect in a hot new compiler: shorter compile times and faster executables. In Apple's testing with its own applications such as iCal, Address Book, and Xcode itself, plus third-party applications like Adium and Growl, Clang compiles nearly three times faster than GCC 4.2. As for the speed of the finished product, the LLVM back-end, whether used in Clang or in LLVM-GCC, produces executables that are 5-25% faster than those generated by GCC 4.2.

    Clang is also more developer-friendly than its GCC predecessors. I concede that this topic doesn't have much to do with taking advantage of multiple CPU cores and so on, but it's sure to be the first thing that a developer actually notices when using Clang. Indulge me.

    For starters, Clang is embeddable, so Xcode can use the same compiler infrastructure for interactive features within the IDE (symbol look-up, code completion, etc.) as it uses to compile the final executable. Clang also creates and preserves more extensive metadata while compiling, resulting in much better error reporting. For example, when GCC tells you this:

    GCC error message for an unknown type

    It's not exactly clear what the problem is, especially if you're new to C programming. Yes, all you hotshots already know what the problem is (especially if you saw this example at WWDC), but I think everyone can agree that this error, generated by Clang, is a lot more helpful:

    Clang error message for an unknown type

    Maybe a novice still wouldn't know what to do, but at least it's clear where the problem lies. Figuring out why the compiler doesn't know about NSString is a much more focused task than can be derived from GCC's cryptic error.

    Even when the message is clear, the context may not be. Take this error from GCC:

    GCC error message for bad operands

    Sure, but there are four "+" operators on that single line. Which one has the problematic operands? Thanks to its more extensive metadata, Clang can pinpoint the problem:

    Clang error message for bad operands

    Sometimes the error is perfectly clear, but it just seems a bit off, like this case where jumping to the error as reported by GCC puts you on the line below where you actually want to add the missing semicolon:

    GCC error message for missing semicolon

    The little things count, you know? Clang goes that extra mile:

    Clang error message for missing semicolon

    Believe it or not, stuff like this means a lot to developers. And then there are the not-so-little things that mean even more, like the LLVM-powered static analyzer. The image below shows how the static analyzer displays its discovery of a possible bug.

    OH HAI I found UR BUG

    Aside from the whimsy of the little arrows (which, admit it, are adorable), the actual bug it's highlighting is something that every programmer can imagine creating (say, through some hasty editing). The static analyzer has determined that there's at least one path through this set of nested conditionals that leaves the myName variable uninitialized, thus making the attempt to send the mutableCopy message in the final line potentially dangerous.

    I'm sure Apple is going hog-wild running the static analyzer on all of its applications and the operating system itself. The prospect of an automated way to discover bugs that may have existed for years in the depths of a huge codebase is almost pornographic to developers—platform owners in particular. To the degree that Mac OS X 10.6.0 is more bug-free than the previous 10.x.0 releases, LLVM surely deserves some significant part of the credit.

    Master of the house

    By committing to a Clang/LLVM-powered future, Apple has finally taken complete control of its development platform. The CodeWarrior experience apparently convinced Apple that it's unwise to rely on a third party for its platform's development tools. Though it's taken many years, I think even the most diehard Metrowerks fan would have to agree that Xcode in Snow Leopard is now a pretty damn good IDE.

    After years of struggling with the disconnect between the goals of the GCC project and its own compiler needs, Apple has finally cut the apron strings. OK, granted, GCC 4.2 is still the default compiler in Snow Leopard, but this is a transitional phase. Clang is the recommended compiler, and the focus of all of Apple's future efforts.

    I know what you're thinking. This is all swell, but how are these compilers helping developers better leverage the expanding swarm of transistors at their disposal? As you'll see in the following sections, LLVM's scaly, metallic head pops up in a few key places.


    Blocks

    In Snow Leopard, Apple has introduced a C language extension called "blocks." Blocks add closures and anonymous functions to C and the C-derived languages C++, Objective-C, and Objective-C++.

    These features have been available in dynamic programming languages such as Lisp, Smalltalk, Perl, Python, Ruby, and even the unassuming JavaScript for a long time (decades, in the case of Lisp—a fact gladly offered by its practitioners). While dynamic-language programmers take closures and anonymous functions for granted, those who work with more traditional, statically compiled languages such as C and its derivatives may find them quite exotic. As for non-programmers, they likely have no interest in this topic at all. But I'm going to attempt an explanation nonetheless, as blocks form the foundation of some other interesting technologies to be discussed later.

    Perhaps the simplest way to describe blocks is that they make functions another form of data. C-derived languages already have function pointers, which can be passed around like data, but these can only point to functions created at compile time. The only way to influence the behavior of such a function is by passing different arguments to it or by setting global variables that are then accessed from within the function. Both of these approaches have big disadvantages.

    Passing arguments becomes cumbersome as their number and complexity grows. Also, you may have limited control over the arguments that will be passed to your function, as is often the case with callbacks. To compensate, you may have to bundle up all of your interesting state into a context object of some kind. But when, how, and by whom that context data will be disposed of can be difficult to pin down. Often, a second callback is required for this. It's all quite a pain.

    As for the use of global variables, in addition to being a well-known anti-pattern, it's also not thread-safe. Making it so requires locks or some other form of mutual exclusion to prevent multiple invocations of the same function from stepping on each other's toes. And if there's anything worse than navigating a sea of callback-based APIs, it's manually dealing with thread safety issues.

    Blocks bypass all of these problems by allowing functional blobs of code—blocks—to be defined at runtime. It's easiest to understand with an example. I'm going to start with JavaScript, which has slightly friendlier syntax, but the concepts are the same.

    b = get_number_from_user();
    multiplier = function(a) { return a * b };

    Here I've created a function named multiplier that takes a single argument, a, and multiplies it by a second value, b, that's provided by the user at runtime. If the user supplied the number 2, then a call to multiplier(5) would return the value 10.

    b = get_number_from_user(); // assume it's 2
    multiplier = function(a) { return a * b };
    r = multiplier(5); // 5 * 2 = 10

    Here's the example above done with blocks in C.

    b = get_number_from_user(); // assume it's 2
    multiplier = ^ int (int a) { return a * b; };
    r = multiplier(5); // 5 * 2 = 10

    By comparing the JavaScript code to the C version, I hope you can see how it works. In the C example, that little caret ^ is the key to the syntax for blocks. It's kind of ugly, but it's very C-like in that it parallels the existing C syntax for function pointers, with ^ in place of *, as this example shows:

    /* A function that takes a single integer argument and returns a
       pointer to a function that takes two integer arguments and
       returns a floating-point number. */
    float (*func2(int a))(int, int);

    /* A function that takes a single integer argument and returns a
       block that takes two integer arguments and returns a
       floating-point number. */
    float (^func1(int a))(int, int);

    You'll just have to trust me when I say that this syntax actually makes sense to seasoned C programmers.

    Now then, does this mean that C is suddenly a dynamic, high-level language like JavaScript or Lisp? Hardly. The existing distinction between the stack and the heap, the rules governing automatic and static variables, and so on are all still in full effect. Plus, now there's a whole new set of rules for how blocks interact with each of these things. There's even a new __block storage type attribute to further control the scope and lifetime of values used in blocks.

    All of that said, blocks are still a big win in C. Thanks to blocks, the friendlier APIs long enjoyed by dynamic languages are now possible in C-derived languages. For example, suppose you want to apply some operation to every line in a file. Doing so in a low-level language like C requires some amount of boilerplate code to open and read from the file, handle any errors, read each line into a buffer, and clean up at the end.

    FILE *fp = fopen(filename, "r");
    if (fp == NULL) {
        perror("Unable to open file");
    } else {
        char line[MAX_LINE];
        while (fgets(line, MAX_LINE, fp)) {
            work;
            work;
            work;
        }
        fclose(fp);
    }

    The "work; work; work;" part is an abstract representation of what you're planning to do to each line of the file. The rest is the literal boilerplate code. If you find yourself having to apply varying operations to every line of many different files, this boilerplate code gets tedious.

    What you'd like to be able to do is factor it out into a function that you can call. But then you're faced with the problem of how to express the operation you'd like to perform on each line of the file. In the middle of each block of boilerplate may be many lines of code expressing the operation to be applied. This code may reference or modify local variables which are affected by the runtime behavior of the program, so traditional function pointers won't work. What to do?

    Thanks to blocks, you can define a function that takes a filename and a block as arguments. This gets all the uninteresting code out of your face.

    foreach_line(filename, ^ (char *line) {
        work;
        work;
        work;
    });

    What's left is a much clearer expression of your intent, with less surrounding noise. The argument after filename is a block literal that takes a line of text as an argument.
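    For compilers without the blocks extension, the same factoring can be sketched in plain C with a function pointer plus a context pointer. The names foreach_line and count_line are hypothetical, and the extra void *ctx plumbing is exactly the bookkeeping that blocks' automatic variable capture makes unnecessary.

```c
#include <stdio.h>

#define MAX_LINE 4096

/* Hypothetical foreach_line: the boilerplate lives here, once.
   A function pointer plus a context pointer stands in for a block. */
static int foreach_line(const char *filename,
                        void (*callback)(char *line, void *ctx),
                        void *ctx)
{
    FILE *fp = fopen(filename, "r");
    if (fp == NULL) {
        perror("Unable to open file");
        return -1;
    }
    char line[MAX_LINE];
    while (fgets(line, MAX_LINE, fp))
        callback(line, ctx);
    fclose(fp);
    return 0;
}

/* Example callback: count lines through the context pointer. */
static void count_line(char *line, void *ctx)
{
    (void)line;          /* line contents unused in this demo */
    ++*(int *)ctx;
}
```

    With a block, the counter would simply be a captured __block variable; the version above has to thread it through by hand.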

    Even when the volume of boilerplate is small, the simplicity and clarity bonus is still worthwhile. Consider the simplest possible loop that executes a fixed number of times. In C-based languages, even that basic construct offers a surprising number of opportunities for bugs. Let's do_something() 10 times:

    for (int i = 0; i <= 10; i++) {
        do_something();
    }

    Oops, I've got a little bug there, don't I? It happens to the best of us. But why should this code be more complicated than the sentence describing it? Do something 10 times! I never want to screw that up again. Blocks can help. If we just invest a little effort up front to define a helper function:

    typedef void (^work_t)(void);

    void repeat(int n, work_t block)
    {
        for (int i = 0; i < n; ++i)
            block();
    }

    We can banish the bug for good. Now, repeating any arbitrary block of code a specific number of times is all but idiot-proof:

    repeat(10, ^{ do_something(); });
    repeat(20, ^{ do_other_thing(); });

    And remember, the block argument to repeat() can contain exactly the same kind of code, literally copied and pasted, that would have appeared within a traditional for loop.
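    The same "write the loop bound once" payoff can be had in plain C with a function pointer, for compilers without the blocks extension. The names repeat_fn, calls, and do_something below are assumed for this sketch:

```c
/* Plain C sketch of the repeat() helper, assuming no blocks extension.
   The loop bound is written once, here, so callers can't get it wrong. */
static void repeat_fn(int n, void (*fn)(void))
{
    for (int i = 0; i < n; ++i)   /* '<', not '<=': the fix lives in one place */
        fn();
}

/* Demo work function: just counts how many times it was invoked. */
static int calls = 0;
static void do_something(void) { ++calls; }
```

    The function-pointer version can't capture local state the way a block can, but the off-by-one bug is eliminated just the same.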

    All these possibilities and more have been well explored by dynamic languages: map, reduce, collect, etc. Welcome, C programmers, to a higher order.

    Apple has taken these lessons to heart, adding over 100 new APIs that use blocks in Snow Leopard. Many of these APIs would not be possible at all without blocks, and all of them are more elegant and concise than they would be otherwise.

    It's Apple's intention to submit blocks as an official extension to one or more of the C-based languages, though it's not yet clear which standards bodies are receptive to the proposal. For now, blocks are supported by all four of Apple's compilers in Mac OS X.

    Concurrency in the real world: a prelude

    The struggle to make efficient use of a large number of independent computing devices is not new. For decades, the field of high-performance computing has tackled this problem. The challenges faced by people writing software for supercomputers many years ago have now trickled down to desktop and even mobile computing platforms.

    In the PC industry, some people saw this coming earlier than others. Almost 20 years ago, Be Inc. was formed around the strategy of creating a PC platform unconstrained by legacy limitations and entirely prepared for the coming abundance of independent computing units on the desktop. To that end, Be created the BeBox, a dual-CPU desktop computer, and BeOS, a brand-new operating system.

    The signature catch phrase for BeOS was "pervasive multithreading." The BeBox and other machines running BeOS leveraged every ounce of the diminutive (by today's standards, anyway) computing resources at their disposal. The demos were impressive. A dual 66 MHz machine (don't make me show another graph) could play multiple videos simultaneously while also playing several audio tracks from a CD—some backwards—and all the while, the user interface remained completely responsive.

    Let me tell you, having lived through this period myself, the experience was mind-blowing at the time. BeOS created instant converts out of hundreds of technology enthusiasts, many of whom maintain that today's desktop computing experience still doesn't match the responsiveness of BeOS. This is certainly true emotionally, if not necessarily literally.

    After nearly purchasing Be in the late 1990s, Apple bought NeXT instead, and the rest is history. But had Apple gone with Plan Be instead, Mac developers might have had a rough road ahead. While all that pervasive multithreading made for impressive technology demos and a great user experience, it could be extremely demanding on the programmer. BeOS was all about threads, going so far as to have a separate thread for each window. Whether you liked it or not, your BeOS program was going to be multithreaded.

    Parallel programming is notoriously hard, with the manual management of POSIX-style threads representing the deep end of that pool. The best programmers in the world are hard-pressed to create large multithreaded programs in low-level languages like C or C++ without finding themselves impaled on the spikes of deadlock, race conditions, and other perils inherent in the use of multiple simultaneous threads of execution that share the same memory space. Extremely careful application of locking primitives is required to avoid performance-robbing levels of contention for shared data—and the bugs, oh the bugs! The term "Heisenbug" may as well have been invented for multithreaded programming.

    Nineteen years after Be tilted at the windmill of the widening swath of silicon in desktop PCs, the challenge has only grown. Those transistors are out there, man—more than ever before. Single-threaded programs on today's high-end desktop Macs, even when using "100%" CPU, fill but a single glowing tower in a sea of sixteen otherwise vacant lanes on a CPU monitor window.

    A wide-open plain of transistors

    And woe be unto the user if that pegged CPU core is running the main thread of a GUI application on Mac OS X. A CPU-saturated main thread means no new user inputs are being pulled off the event queue by the application. A few seconds of that and an old friend makes its appearance: the spinning beach ball of death.


    Nooooooooo!!! Image from The Iconfactory

    This is the enemy: hardware with more computing resources than programmers know what to do with, most of it completely idle, and all the while the user is utterly blocked in his attempts to use the application. What's Snow Leopard's answer? Read on…

    Grand Central Dispatch

    Apple's GCD branding: Railfan service

    Snow Leopard's answer to the concurrency conundrum is called Grand Central Dispatch (GCD). As with QuickTime X, the name is extremely apt, though this is not entirely clear until you understand the technology.

    The first thing to know about GCD is that it's not a new Cocoa framework or similar special-purpose frill off to the side. It's a plain C library baked into the lowest levels of Mac OS X. (It's in libSystem, which incorporates libc and the other code that sits at the very bottom of userspace.)

    There's no need to link in a new library to use GCD in your program. Just #include <dispatch/dispatch.h> and you're off to the races. The fact that GCD is a C library means that it can be used from all of the C-derived languages supported on Mac OS X: Objective-C, C++, and Objective-C++.

    Queues and threads

    GCD is built on a few simple entities. Let's start with queues. A queue in GCD is just what it sounds like. Tasks are enqueued, and then dequeued in FIFO order. (That's "First In, First Out," just like the checkout line at the supermarket, for those who don't know and don't want to follow the link.) Dequeuing a task means handing it off to a thread where it will execute and do its actual work.

    Though GCD queues will hand tasks off to threads in FIFO order, several tasks from the same queue may be running in parallel at any given time. This animation demonstrates.

    A Grand Central Dispatch queue in action

    You'll notice that task B completed before task A. Though dequeuing is FIFO, task completion is not. Also note that even though there were three tasks enqueued, only two threads were used. This is an important feature of GCD which we'll discuss shortly.

    But first, let's look at the other kind of queue. A serial queue works just like a plain queue, except that it only executes one task at a time. That means task completion in a serial queue is also FIFO. Serial queues can be created explicitly, just like plain queues, but each application also has an implicit "main queue," which is a serial queue that runs on the main thread.
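    The FIFO hand-off order described above can be illustrated with a toy array-backed queue in plain C. The fifo_t type and its helpers are hypothetical, invented for this sketch; GCD's real queues are opaque types managed by libdispatch, not anything like this.

```c
/* Minimal array-backed FIFO: enqueue at the tail, dequeue from the head.
   Toy sketch only -- fixed capacity, no overflow or thread-safety checks. */
#define QCAP 16

typedef struct {
    int items[QCAP];
    int head, tail;      /* dequeue from head, enqueue at tail */
} fifo_t;

static void fifo_push(fifo_t *q, int task) { q->items[q->tail++ % QCAP] = task; }
static int  fifo_pop(fifo_t *q)            { return q->items[q->head++ % QCAP]; }
static int  fifo_empty(const fifo_t *q)    { return q->head == q->tail; }
```

    Tasks pushed 1, 2, 3 pop back out as 1, 2, 3. The difference between a plain queue and a serial queue is not this dequeue order, which both share, but whether a second task may start running before the first finishes.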

    The animation above shows threads appearing as work needs to be done, and disappearing as they're no longer needed. Where do these threads come from, and where do they go when they're done? GCD maintains a global pool of threads which it hands out to queues as they're needed. When a queue has no more pending tasks to run on a thread, the thread goes back into the pool.

    This is an extremely important aspect of GCD's design. Perhaps surprisingly, one of the most difficult parts of extracting maximum performance using traditional, manually managed threads is figuring out exactly how many threads to create. Too few, and you risk leaving hardware idle. Too many, and you start to spend a significant amount of time simply shuffling threads in and out of the available processor cores.

    Let's say a program has a problem that can be split into eight separate, independent units of work. If this program then creates four threads on an eight-core machine, is this an example of creating too many or too few threads? Trick question! The answer is that it depends on what else is happening on the system.

    If six of the eight cores are totally saturated doing some other work, then creating four threads will just require the OS to waste time rotating those four threads through the two available cores. But wait, what if the process that was saturating those six cores finishes? Now there are eight available cores but only four threads, leaving half the cores idle.

    With the exception of programs that can reasonably expect to have the entire machine to themselves when they run, there's no way for a programmer to know ahead of time exactly how many threads he should create. Of the available cores on a particular machine, how many are in use? If more become available, how will my program know?

    The bottom line is that the optimal number of threads to keep in flight at any given time is best determined by a single, globally aware entity. In Snow Leopard, that entity is GCD. It will keep zero threads in its pool if there are no queues that have tasks to run. As tasks are dequeued, GCD will create and dole out threads in a way that optimizes the use of the available hardware. GCD knows how many cores the system has, and it knows how many threads are currently executing tasks. When a queue no longer needs a thread, it's returned to the pool where GCD can hand it out to another queue that has a task ready to be dequeued.
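    That "globally aware" sizing starts from knowing the core count. A minimal sketch of the query using POSIX sysconf follows; online_cores is a hypothetical helper, not GCD's actual implementation, which folds in load and priority as well.

```c
#include <unistd.h>

/* Ask the OS how many processors are currently online -- the kind of
   fact a globally aware entity like GCD bases its pool size on. */
static long online_cores(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    return (n > 0) ? n : 1;   /* fall back to 1 if the query fails */
}
```

    Note that this is a point-in-time answer; part of GCD's value is that it keeps re-evaluating as system conditions change, which no one-shot query can do.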

    There are further optimizations inherent in this scheme. In Mac OS X, threads are relatively heavyweight. Each thread maintains its own set of register values, stack pointer, and program counter, plus kernel data structures tracking its security credentials, scheduling priority, set of pending signals and signal masks, etc. It all adds up to over 512 KB of overhead per thread. Create a thousand threads and you've just burned about half a gigabyte of memory and kernel resources on overhead alone, before even considering the actual data within each thread.

    Compare a thread's 512 KB of baggage with GCD queues, which have a mere 256 bytes of overhead. Queues are very lightweight, and developers are encouraged to create as many of them as they need—thousands, even. In the earlier animation, when the queue was given two threads to process its three tasks, it executed two tasks on one of the threads. Not only are threads heavyweight in terms of memory overhead, they're also relatively expensive to create. Creating a new thread for each task would be the worst possible scenario. Every time GCD can use a thread to execute more than one task, it's a win for overall system efficiency.

    Remember the problem of the programmer trying to figure out how many threads to create? Using GCD, he doesn't have to worry about that at all. Instead, he can concentrate entirely on the optimal concurrency of his algorithm in the abstract. If the best-case scenario for his problem would use 500 concurrent tasks, then he can go ahead and create 500 GCD queues and divide his work among them. GCD will figure out how many actual threads to create to do the work. Furthermore, it will adjust the number of threads dynamically as the conditions on the system change.

    But perhaps most importantly, as new hardware is released with more and more CPU cores, the programmer does not need to change his application at all. Thanks to GCD, it will transparently take advantage of any and all available computing resources, up to—but not past!—the optimal amount of concurrency as originally defined by the programmer when he chose how many queues to create.

    But wait, there's more! GCD queues can actually be arranged in arbitrarily complex directed acyclic graphs. (Actually, they can be cyclic too, but then the behavior is undefined. Don't do that.) Queue hierarchies can be used to funnel tasks from disparate subsystems into a narrower set of centrally controlled queues, or to force a set of plain queues to delegate to a serial queue, effectively serializing them all indirectly.

    There are also several levels of priority for queues, dictating how often and with what urgency threads are distributed to them from the pool. Queues can be suspended, resumed, and cancelled. Queues can also be grouped, allowing all tasks distributed to the group to be tracked and accounted for as a unit.

    Overall, GCD's use of queues and threads forms a simple, elegant, but also extremely pragmatic architecture.


    Okay, so GCD is a great way to make efficient use of the available hardware. But is it really any better than BeOS's approach to multithreading? We've already seen a few ways that GCD avoids the pitfalls of BeOS (e.g., the reuse of threads and the maintenance of a global pool of threads that's correctly sized for the available hardware). But what about the problem of overwhelming the programmer by requiring threads in places where they complicate, rather than enhance, the application?

    GCD embodies a philosophy that is at the opposite end of the spectrum from BeOS's "pervasive multithreading" design. Rather than achieving responsiveness by getting every possible component of an application running concurrently on its own thread (and paying a heavy price in terms of complex data sharing and locking concerns), GCD encourages a much more limited, hierarchical approach: a main application thread where all the user events are processed and the interface is updated, and worker threads doing specific jobs as needed.

    In other words, GCD doesn't require developers to think about how best to split the work of their application into multiple concurrent threads (though when they're ready to do that, GCD will be willing and able to help). At its most basic level, GCD aims to encourage developers to move from thinking synchronously to thinking asynchronously. Something like this: "Write your application as usual, but if there's any part of its operation that can reasonably be expected to take more than a few seconds to complete, then for the love of Zarzycki, get it off the main thread!"

    That's it; no more, no less. Beach ball banishment is the cornerstone of user interface responsiveness. In some respects, everything else is gravy. But most developers know this intuitively, so why do they still see the beach ball in Mac OS X applications? Why don't all applications already execute all of their potentially long-running tasks on background threads?

    A few reasons have been mentioned already (e.g., the difficulty of knowing how many threads to create), but the big one is much more pragmatic. Spinning off a thread and collecting its result has always been a bit of a pain. It's not so much that it's technically difficult; it's just that it's such a clear break from coding the actual work of your application to coding all this task-management plumbing. And so, especially in borderline cases, like an operation that may take 3 to 5 seconds, developers just do it synchronously and move on to the next thing.

    Unfortunately, there's a surprising number of very common things that an application can do that execute quickly most of the time, but have the potential to take much longer than a few seconds when something goes wrong. Anything that touches the file system may stall at the lowest levels of the OS (e.g., within blocking read() and write() calls) and be subject to a very long (or at least an "unexamined-by-the-application-developer") timeout. The same goes for name lookups (e.g., DNS or LDAP), which almost always execute instantly, but catch many applications completely off-guard when they start taking their sweet time to return a result. Thus, even the most meticulously constructed Mac OS X applications can end up throwing the beach ball in the user's face from time to time.

    With GCD, Apple is saying it doesn't have to be this way. For example, suppose a document-based application has a button that, when clicked, will analyze the current document and display some interesting statistics about it. In the common case, this analysis should execute in under a second, so the following code is used to connect the button with an action:

    - (IBAction)analyzeDocument:(NSButton *)sender
    {
        NSDictionary *stats = [myDoc analyze];
        [myModel setDict:stats];
        [myStatsView setNeedsDisplay:YES];
        [stats release];
    }

    The first line of the function body analyzes the document, the second line updates the application's internal state, and the third line tells the application that the statistics view needs to be updated to reflect this new state. It all follows a very common pattern, and it works great as long as none of these steps—which are all running on the main thread, remember—takes too long. After the user presses the button, the main thread of the application needs to handle that user input as quickly as possible so it can get back to the main event loop to process the next user action.

    The code above works great until a user opens a very large or very complex document. Suddenly, the "analyze" step doesn't take one or two seconds, but 15 or 30 seconds instead. Hello, beach ball. And still, the developer is likely to hem and haw: "This is really an exceptional situation. Most of my users will never open such a large file. And anyway, I really don't want to start reading documentation about threads and adding all that extra code to this simple, four-line function. The plumbing would dwarf the code that does the actual work!"

    Well, what if I told you that you could move the document analysis to the background by adding just two lines of code (okay, and two lines of closing braces), all located within the existing function? No application-global objects, no thread management, no callbacks, no argument marshalling, no context objects, not even any additional variables. Behold, Grand Central Dispatch:

    - (IBAction)analyzeDocument:(NSButton *)sender
    {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            NSDictionary *stats = [myDoc analyze];
            dispatch_async(dispatch_get_main_queue(), ^{
                [myModel setDict:stats];
                [myStatsView setNeedsDisplay:YES];
                [stats release];
            });
        });
    }

    There's a hell of a lot packed into those two lines of code. All of the functions in GCD begin with dispatch_, and you can see four such calls in the code above. The key to the minimal invasiveness of this code is revealed in the second argument to the two dispatch_async() calls. Thus far, I've been discussing "units of work" without specifying how, exactly, GCD models such a thing. The answer, now revealed, should appear obvious in retrospect: blocks! The ability of blocks to capture the surrounding context is what allows these GCD calls to be dropped right into some existing code without requiring any additional setup or re-factoring or other contortions in service of the API.

    But the best part of this code is how it deals with the problem of detecting when the background task completes and then showing the result. In the synchronous code, the analyze method call and the code to update the application display simply appear in the desired sequence within the function. In the asynchronous code, miraculously, this is still the case. Here's how it works.

    The outer dispatch_async() call puts a task on a global concurrent GCD queue. That task, represented by the block passed as the second argument, contains the potentially time-consuming analyze method call, plus another call to dispatch_async() that puts a task onto the main queue—a serial queue that runs on the main thread, remember—to update the application's user interface.

    User interface updates must all be done from the main thread in a Cocoa application, so the code in the inner block could not be executed anywhere else. But rather than having the background thread send some kind of special-purpose notification back to the main thread when the analyze method call completes (and then adding some code to the application to detect and handle this notification), the work that needs to be done on the main thread to update the display is encapsulated in yet another block within the larger one. When the analyze call is done, the inner block is put onto the main queue where it will (eventually) run on the main thread and do its work of updating the display.

    Simple, elegant, and effective. And for developers, no more excuses.

    Believe it or not, it's just as easy to take a serial implementation of a series of independent operations and parallelize it. The code below does work on count elements of data, one after the other, and then summarizes the results once all the elements have been processed.

    for (i = 0; i < count; i++) {
        results[i] = do_work(data, i);
    }
    total = summarize(results, count);

    Now here's the parallel version, which puts a separate task for each element onto a global concurrent queue. (Again, it's up to GCD to determine how many threads to actually use to execute the tasks.)

    dispatch_apply(count, dispatch_get_global_queue(0, 0), ^(size_t i) {
        results[i] = do_work(data, i);
    });
    total = summarize(results, count);

    And there you have it: a for loop replaced with a concurrency-enabled equivalent with one line of code. No preparation, no additional variables, no impossible decisions about the optimal number of threads, no extra work required to wait for all the independent tasks to complete. (The dispatch_apply() call will not return until all the tasks it has dispatched have completed.) Stunning.
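    To appreciate what dispatch_apply() is hiding, here is the shape of the same parallel loop hand-built on POSIX threads. The names parallel_apply, worker, and square_into are hypothetical, and unlike GCD this naive sketch always spawns a fixed number of threads instead of sizing the pool for the hardware and current load.

```c
#include <pthread.h>
#include <stddef.h>

#define NWORKERS 4  /* fixed; GCD would choose this dynamically */

typedef void (*apply_fn)(size_t i, void *ctx);

typedef struct {
    size_t start, end;   /* half-open range of indices for this worker */
    apply_fn fn;
    void *ctx;
} span_t;

static void *worker(void *arg)
{
    span_t *s = arg;
    for (size_t i = s->start; i < s->end; i++)
        s->fn(i, s->ctx);
    return NULL;
}

/* Like dispatch_apply, does not return until every index is processed. */
static void parallel_apply(size_t count, apply_fn fn, void *ctx)
{
    pthread_t tids[NWORKERS];
    span_t spans[NWORKERS];
    size_t chunk = (count + NWORKERS - 1) / NWORKERS;

    for (int w = 0; w < NWORKERS; w++) {
        size_t start = (size_t)w * chunk;
        spans[w].start = start < count ? start : count;
        spans[w].end = start + chunk < count ? start + chunk : count;
        spans[w].fn = fn;
        spans[w].ctx = ctx;
        pthread_create(&tids[w], NULL, worker, &spans[w]);
    }
    for (int w = 0; w < NWORKERS; w++)
        pthread_join(tids[w], NULL);
}

/* Demo work function: each index writes only its own slot, so no locking. */
static void square_into(size_t i, void *ctx)
{
    ((int *)ctx)[i] = (int)(i * i);
}
```

    Even this stripped-down version needs a context struct, a chunking policy, and explicit joins; the one-line dispatch_apply() call absorbs all of it, and lets GCD pick the thread count besides.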

    Grand Central Awesome

    Of all the APIs added in Snow Leopard, Grand Central Dispatch has the most far-reaching implications for the future of Mac OS X. Never before has it been so easy to do work asynchronously and to spread workloads across many CPUs.

    When I first heard about Grand Central Dispatch, I was extremely skeptical. The greatest minds in computer science have been working for decades on the problem of how best to extract parallelism from computing workloads. Now here was Apple, apparently promising to solve this problem. Ridiculous.

    But Grand Central Dispatch doesn't actually address this issue at all. It offers no help whatsoever in deciding how to split your work up into independently executable tasks—that is, deciding what pieces can or should be executed asynchronously or in parallel. That's still entirely up to the developer (and still a difficult problem). What GCD does instead is much more pragmatic. Once a developer has identified something that can be split off into a separate task, GCD makes it as easy and non-invasive as possible to actually do so.

    The use of FIFO queues, and especially the existence of serial queues, seems counter to the spirit of ubiquitous concurrency. But we've seen where the Platonic ideal of multithreading leads, and it's not a pleasant place for developers.

    One of Apple's slogans for Grand Central Dispatch is "islands of serialization in a sea of concurrency." That does a great job of capturing the practical reality of adding more concurrency to run-of-the-mill desktop applications. Those islands are what insulate developers from the thorny problems of simultaneous data access, deadlock, and other pitfalls of multithreading. Developers are encouraged to identify functions of their applications that would be better executed off the main thread, even if they're made up of several sequential or otherwise partially interdependent tasks. GCD makes it easy to break off the entire unit of work while maintaining the existing order and dependencies between subtasks.

    Those with some multithreaded programming experience may be unimpressed with GCD. So Apple made a thread pool. Big deal. They've been around forever. But the angels are in the details. Yes, the implementation of queues and threads has an elegant simplicity, and baking it into the lowest levels of the OS really helps to lower the perceived barrier to entry, but it's the API built around blocks that makes Grand Central Dispatch so attractive to developers. Just as Time Machine was "the first backup system people will actually use," Grand Central Dispatch is poised to finally spread the heretofore dark art of asynchronous application design to all Mac OS X developers. I can't wait.

    OpenCL

    Somehow, OpenCL got in on the "core" branding

    So far, we've seen a few examples of doing more with more: a new, more modern compiler infrastructure that supports an important new language feature, and a powerful, pragmatic concurrency API built on top of the new compilers' support for said language feature. All this goes a long way towards helping developers and the OS itself make maximum use of the available hardware.

    But CPUs are not the only components experiencing a glut of transistors. When it comes to the proliferation of independent computation engines, another piece of silicon inside every Mac is the undisputed title holder: the GPU.

    The numbers tell the tale. While Mac CPUs contain up to four cores (which may show up as eight logical cores thanks to symmetric multithreading), high-end GPUs contain well over 200 processor cores. While CPUs are just now edging over 100 GFLOPS, the best GPUs are capable of over 1,000 GFLOPS. That's one trillion floating-point operations per second. And like CPUs, GPUs now come more than one to a board.

    Writing for the GPU

    Unfortunately, the cores on a GPU are not general-purpose processors (at least not yet). They're much simpler computing engines that have evolved from the fixed-function silicon of their ancestors, which could not be programmed directly at all. They don't support the rich set of instructions available on CPUs, the maximum size of the programs that will run is often limited and very small, and not all of the features of the industry-standard IEEE floating-point computation specification are supported.

    Today's GPUs can be programmed, but the most common forms of programmability are still firmly planted in the world of graphics programming: vertex shaders, geometry shaders, pixel shaders. Most of the languages used to program GPUs are similarly graphically focused: HLSL, GLSL, Cg.

    Nevertheless, there are computational tasks outside the realm of graphics that are a good fit for GPU hardware. It would be nice if there were a non-graphics-oriented language to write them in. Creating such a thing is quite a challenge, however. GPU hardware varies wildly in every imaginable way: number and type of execution units, available data formats, instruction sets, memory architecture, you name it. Programmers don't want to be exposed to these differences, but it's difficult to work around the complete absence of a feature or the unavailability of a particular data type.

    GPU vendor NVIDIA gave it a shot, however, and produced CUDA: a subset of the C language with extensions for vector data types, data storage specifiers that reflect the typical GPU memory hierarchy, and several bundled computational libraries. CUDA is but one entrant in the burgeoning GPGPU field (General-Purpose computing on Graphics Processing Units). But coming from a GPU vendor, it faces an uphill battle with developers who really want a vendor-agnostic solution.

    In the world of 3D programming, OpenGL fills that role. As you've surely guessed by now, OpenCL aims to do the same for general-purpose computation. In fact, OpenCL is supported by the same consortium as OpenGL: the ominously named Khronos Group. But make no mistake, OpenCL is Apple's baby.

    Apple understood that OpenCL's best chance of success was to become an industry standard, not just an Apple technology. To make that happen, Apple needed the cooperation of the top GPU vendors, plus an agreement with an established, widely recognized standards body. It took a while, but now it's all come together.

    OpenCL is a lot like CUDA. It uses a C-like language with vector extensions, it has a similar model of the memory hierarchy, and so on. This is no surprise, considering how closely Apple worked with NVIDIA during the development of OpenCL. There's also no way any of the big GPU vendors would radically alter their hardware to support an as-yet-unproven standard, so OpenCL had to work well with GPUs already designed to support CUDA, GLSL, and other existing GPU programming languages.

    The OpenCL difference

    This is all well and good, but to have any impact on the day-to-day life of Mac users, developers actually have to use OpenCL in their applications. Historically, GPGPU programming languages have not seen much use in traditional desktop applications. There are several reasons for this.

    Early on, writing programs for the GPU often required the exhaust of vendor-specific assembly languages that were far removed from the experience of writing a typical desktop application using a synchronous GUI API. The more C-like languages that came later remained either graphics-focused, vendor-specific, or both. Unless running code on the GPU would accelerate a core component of an application by an order of magnitude, most developers noiseless could not exist bothered to navigate this alien world.

    And even if the GPU did give a huge accelerate boost, relying on graphics hardware for general-purpose computation was very likely to narrow the potential audience for an application. Many older GPUs, especially those found in laptops, cannot accelerate languages like CUDA at all.

    Apple's key conclusion in the design of OpenCL was to allow OpenCL programs to accelerate not just on GPUs, but on CPUs as well. An OpenCL program can query the hardware it's running on and enumerate outright eligible OpenCL devices, categorized as CPUs, GPUs, or dedicated OpenCL accelerators (the IBM Cell Blade server—yes, that Cell—is apparently one such device). The program can then dispatch its OpenCL tasks to any available device. It's too practicable to create a solitary ratiocinative device consisting of any combination of eligible computing resources: two GPUs, a GPU and two CPUs, etc.

    The advantages of being able to run OpenCL programs on both CPUs and GPUs are obvious. Every Mac running Snow Leopard, not just those with recent-model GPUs, can run a program that contains OpenCL code. But there's more to it than that.

    Certain kinds of algorithms actually run faster on high-end multi-core CPUs than on even the very fastest available GPUs. At WWDC 2009, an engineer from Electronic Arts demonstrated an OpenCL port of a skinning engine from one of its games running over four times faster on a four-core Mac Pro than on an NVIDIA GeForce GTX 285. Restructuring the algorithm and making many other changes to better suit the limitations (and strengths) of the GPU pushed it back ahead of the CPU by a wide margin, but sometimes you just want the code you have to run well as-is. Being able to target the CPU is extremely useful in those cases.

    Moreover, writing vector code for Intel CPUs "the old-fashioned way" can be a real pain. There's MMX, SSE, SSE2, SSE3, and SSE4 to deal with, all with slightly different capabilities, and all of which force the programmer to write code like this:

    r1 = _mm_mul_ps(m1, _mm_add_ps(x1, x2));

    OpenCL's native support for vector types de-clutters the code considerably:

    r1 = m1 * (x1 + x2);

    Similarly, OpenCL's support for implicit parallelism makes it much easier to take advantage of multiple CPU cores. Rather than writing all the logic to split your data into pieces and distribute those pieces to the parallel-computing hardware, OpenCL lets you write just the code to operate on a single piece of the data and then send it, along with the entire block of data and the desired level of parallelism, to the computing device.

    This arrangement is taken for granted in traditional graphics programming, where code implicitly works on all pixels in a texture or all vertices in a polygon; the programmer only needs to write code that will be in the "inner loop," so to speak. An API with support for this kind of parallelism that runs on CPUs as well as GPUs fills an important gap.
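    To make the "inner loop" idea concrete, here is a minimal, hypothetical OpenCL kernel (the names are invented for illustration). Each invocation processes one element; the runtime decides how to fan invocations out across the chosen device:

    ```c
    /* Runs once per element of the data block. get_global_id(0)
       tells each invocation which piece is "its" piece. */
    __kernel void scale_sum(__global const float4 *x1,
                            __global const float4 *x2,
                            __global const float4 *m1,
                            __global float4 *r1)
    {
        size_t i = get_global_id(0);
        r1[i] = m1[i] * (x1[i] + x2[i]);
    }
    ```

    The host enqueues this kernel with clEnqueueNDRangeKernel, passing only the total number of elements; there is no explicit threading or chunking logic anywhere in the kernel itself.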

    Writing to OpenCL also future-proofs task- or data-parallel code. Just as the same OpenGL code gets faster and faster as newer, more powerful GPUs are released, so too will OpenCL code perform better as CPUs and GPUs get faster. The extra layer of abstraction that OpenCL provides makes this possible. For example, though vector code written several years ago using MMX got faster as CPU clock speeds increased, a more significant performance boost likely requires porting the code to one of the newer SSE instruction sets.

    As newer, more powerful vector instruction sets and parallel hardware become available, Apple will update its OpenCL implementations to take advantage of them, just as video card makers and OS vendors update their OpenGL drivers to take advantage of faster GPUs. Meanwhile, the application developer's code remains unchanged. Not even a recompile is required.

    Here be dragons (and trains)

    How, you may wonder, can the same compiled code end up executing using SSE2 on one machine and SSE4 on another, or on an NVIDIA GPU on one machine and an ATI GPU on another? To do so would require translating the device-independent OpenCL code to the instruction set of the target computing device at runtime. When running on a GPU, OpenCL must also ship the data and the newly translated code over to the video card and collect the results at the end. When running on the CPU, OpenCL must arrange for the requested level of parallelism by creating and distributing threads appropriately to the available cores.

    Well, wouldn't you know it? Apple just happens to have two technologies that solve these exact problems.

    Want to compile code "just in time" and ship it off to a computing device? That's what LLVM was born to do—and, indeed, what Apple did with it in Leopard, albeit on a more limited scale. OpenCL is a natural extension of that work. LLVM allows Apple to write a single code generator for each target instruction set, and concentrate all of its effort on a single device-independent code optimizer. There's no longer any need to duplicate these tasks, using one compiler to create the static application executable and having to jury-rig another for just-in-time compilation.

    (Oh, and by the way, remember Core Image? That's another API that needs to compile code just-in-time and ship it off to execute on parallel hardware like GPUs and multi-core CPUs. In Snow Leopard, Core Image has been re-implemented using OpenCL, producing a hefty 25% overall performance boost.)

    To handle task parallelism and thread provisioning, OpenCL is built on top of Grand Central Dispatch. This is such a natural fit that it's a bit surprising that the OpenCL API doesn't use blocks. I assume Apple decided that it shouldn't press its luck when it comes to getting its home-grown technologies adopted by other vendors. This decision already seems to be paying off, as AMD has its own OpenCL implementation under way.

    The top of the pyramid

    Though the underlying technologies, Clang, blocks, and Grand Central Dispatch, will undoubtedly be more widely used by developers, OpenCL represents the culmination of that particular technological thread in Snow Leopard. This is the gold standard of software engineering: creating a new public API by building it on top of lower-level, but equally well-designed and implemented public APIs.

    A unified abstraction for the ever-growing heterogeneous collection of parallel computing silicon in desktop computers was sorely needed. We've got an increasing population of powerful CPU cores, but they still exist in numbers that are orders of magnitude lower than the hundreds of processing units in modern GPUs. On the other hand, GPUs still have a ways to go to catch up with the power and flexibility of a full-fledged CPU core. But even with all the differences, writing code exclusively for either one of those worlds still smacks of leaving money on the table.

    With OpenCL in hand, there's no longer a need to put all your eggs in one silicon basket. And with the advent of hybrid CPU/GPU efforts like Intel's Larrabee, which uses CPU-caliber processing engines, but in much higher numbers, OpenCL may prove even more important in the coming years.

    Transistor harvest

    Collectively, the concurrency-enabling features introduced in Snow Leopard represent the biggest boost to asynchronous and parallel software development in any Mac OS X release—perhaps in any desktop operating system release ever. It may be hard for end-users to get excited about "plumbing" technologies like Grand Central Dispatch and OpenCL, let alone compilers and programming language features, but it's upon these foundations that developers will create ever-more-impressive edifices of software. And if those applications tower over their synchronous, serial predecessors, it will be because they stand on the shoulders of giants.

    QuickTime Player's new icon (Not a fan)

    QuickTime Player

    There's been some confusion surrounding QuickTime in Snow Leopard. The earlier section about QuickTime X explains what you need to know about the present and future of QuickTime as a technology and an API. But a few of Apple's decisions—and the extremely overloaded meaning of the word "QuickTime" in the minds of consumers—have blurred the picture somewhat.

    The first head-scratcher occurs during installation. If you happen to click on the "Customize…" button during installation, you'll see the following options:

    QuickTime 7 is an optional install?

    We've already talked about Rosetta being an optional install, but QuickTime 7 too? Isn't QuickTime severely crippled without QuickTime 7? Why in the world would that be an optional install?

    Well, there's no need to panic. That item in the installer should actually read "QuickTime Player 7." QuickTime 7, the old but extremely capable media framework discussed earlier, is installed by default in Snow Leopard—in fact, it's mandatory. But the player application, the one with the old blue "Q" icon, the one that many casual users actually think of as being "QuickTime," has been replaced with a new QuickTime-X-savvy version sporting a pudgy new icon (see above right).

    The new player application is a big departure from the old. Obviously, it leverages QuickTime X for more efficient video playback, but the user interface is also completely new. Gone are the gray border and bottom-mounted playback controls of the old QuickTime Player, replaced by a frameless window with a black title bar and a floating, movable set of controls.

    The new QuickTime Player: boldly going where NicePlayer has gone before

    It's like a combination of the window treatment of the excellent NicePlayer application and the full-screen playback controls from the old QuickTime Player. I'm a bit bothered by two things. First, the ever-so-slightly clipped corners seem like a bad idea. Am I just supposed to give up those dozen-or-so pixels? NicePlayer does it right, showing crisp, square corners.

    Second, the floating playback controls obscure the movie. What if I'm scrubbing around looking for something in that part of the frame? Yes, you can move the controls, but what if I'm looking for something in an unknown location in the frame? Also, the title bar obscures an entire swath of the top of the frame, and it can't be moved. I appreciate the compactness of this approach, but it'd be nice if the title bar overlap could be disabled and the controls could be dragged off the movie entirely and docked to the bottom or something.

    (One blessing for people who share my OCD tendencies: if you move the floating controls, they don't remember their position the next time you open a movie. Why is that a blessing? Because if it worked the other way, we'd all spend way too much time fretting about our inability to restore the controller to its default, precisely centered position. Sad, but true.)

    The new QuickTime Player presents a decidedly iMovie-like (or is it iPhone-like, nowadays?) interface for trimming video. Still-frame thumbnails are placed side-by-side to form a timeline, with adjustable stops at each end for trimming.

    Trimming in the new QuickTime Player

    Holding down the option key changes the thumbnail timeline to an audio waveform display:

    Trimming with audio waveform view

    In both the video and audio cases, I have to wonder exactly how useful the fancy timeline appearances are. The audio waveform is quite small and compressed, and the limited horizontal space of the in-window display means a movie can only show a handful of video frames in its timeline. Also, if there's any ability to make fine adjustments using something other than extremely careful mouse movements (which are necessarily subject to a limited resolution), I couldn't find it. Final Cut Pro this is not.

    QuickTime Player has learned another new trick: screen recording. The controls are limited, so more demanding users will still have a need for a full-featured screen recorder, but QuickTime Player gets the job done.

    Screen recording in QuickTime Player

    There's also an audio-only option, with a similarly simplified collection of settings.

    Audio recording

    Finally, the new QuickTime Player has the ability to upload a movie directly to YouTube and MobileMe, send one via e-mail, or add it to your iTunes library. The export options are also vastly simplified, with preset options for iPhone/iPod, Apple TV, and HD 480p and 720p.

    Unfortunately, the list of things you can't do with the new QuickTime Player is quite long. You can't cut, copy, and paste arbitrary portions of a movie (trimming only affects the ends); you can't extract or delete individual tracks or overlay one track onto another (optionally scaling to fit); you can't export a movie by choosing from the full set of available QuickTime audio and video codecs. All of these things were possible with the old QuickTime Player—if, that is, you paid the $30 for a QuickTime Pro license. In the past, I've described this extra fee as "criminally stupid", but the features it enabled in QuickTime Player were really useful.

    It's tempting to attribute their absence in the new QuickTime Player to the previously discussed limitations of QuickTime X. But the new QuickTime Player is built on top of QTKit, which serves as a front-end for both QuickTime X and QuickTime 7. And it does, after all, include some limited editing features like trimming, plus some previously "Pro"-only features like full-screen playback. Also, the new QuickTime Player can indeed play movies using third-party plug-ins—a feature clearly powered by QuickTime 7.

    Well, Snow Leopard has an extremely pleasant surprise waiting for you if you install the optional QuickTime Player 7. When I did so, what I got was the old QuickTime Player—somewhat insultingly installed in the "Utilities" folder—with all of its "Pro" features permanently unlocked. Yes, the tyranny of QuickTime Pro seems to be at an end…

    QuickTime Pro: now free for everyone?

    …but perhaps the key word above is "seems," because QuickTime Player 7 does not have all "Pro" features unlocked for everyone. I installed Snow Leopard onto an empty disk, and QuickTime 7 was not automatically installed (as it is when the installer detects an existing QuickTime Pro license on the target disk). After booting from my fresh Snow Leopard volume, I manually installed the "QuickTime 7" optional component using the Snow Leopard installer disk.

    The result for me was a QuickTime Player 7 application with all Pro features unlocked and with no visible QuickTime Pro registration information. I did, however, have a QuickTime Pro license on one of the attached drives. Apparently, the installer detected this and gave me an unlocked QuickTime Player 7 application, even though the boot volume never had a QuickTime Pro license on it.

    The Dock

    The new appearance of some aspects of the Dock is accompanied by some new functionality as well. Clicking and holding on a running application's Dock icon now triggers Exposé, but only for the windows belonging to that application. Dragging a file onto a docked application icon and holding it there for a bit produces the same result. You can then continue that same drag onto one of the Exposé window thumbnails and hover there a bit to bring that window to the front and drop the file into it. It's a pretty handy technique, once you get in the habit of doing it.

    The Exposé display itself has also changed. Minimized windows are now displayed in smaller form at the bottom of the screen, below a thin line.

    Dock Exposé with new placement of minimized windows

    In the screenshot above, you'll notice that none of the minimized windows appear in my Dock. That's thanks to another welcome addition: the ability to minimize windows "into" the application icon. You'll find the setting for this in the Dock's preference pane.

    New Dock preference: Minimize windows into application icon
    Minimized windows in a Dock application menu (minimized window denoted by a diamond)

    Once set, minimized windows will slide behind the icon of their parent application and then disappear. To get them back, either right-click the application icon (see right) or trigger Exposé.

    The Dock's grid view for folders now incorporates a scroll bar when there are too many items to fit comfortably. Clicking on a folder icon in the grid now shows that folder's contents within the grid, allowing you to navigate down several folders to find a buried item. A small "back" navigation button appears once you descend.

    These are all useful new behaviors, and quite a bonus considering the supposed "no new features" stance of Snow Leopard. But the fundamental nature of the Dock remains the same. Users who want a more flexible or more powerful application launcher/folder organizer/window minimization system must still either sacrifice some functionality (e.g., Dock icon badges and bounce notifications) or continue to use the Dock in addition to a third-party application.

    The option to keep minimized windows from cluttering up the Dock was long overdue. But my enthusiasm is tempered by my frustration at the continued inability to click on a docked folder and have it open in the Finder, while also retaining the ability to drag items into that folder. This was the default behavior for docked folders for the first six years of Mac OS X's life, but it changed in Leopard. Snow Leopard does not improve matters.

    Docking an alias to a folder provides the single-click-open behavior, but items cannot be dragged into a docked folder alias for some inexplicable reason. (Radar 5775786, closed in March 2008 with the terse explanation, "not currently supported.") Worse, dragging an item to a docked folder alias looks like it will work (the icon highlights), but upon release, the dragged item simply springs back to its original location. I really hoped this one would get fixed in Snow Leopard. No such luck.

    Dock grid view's in-place navigation with back button

    The Finder

    One of the earliest leaked screenshots of Snow Leopard included an innocuous-looking "Get Info" window for the Finder, presumably to show that its version number had been updated to 10.6. The more interesting tidbit of information it revealed was that the Finder in Snow Leopard was a 64-bit application.

    The Mac OS X Finder started its life as the designated "dog food" application for the Carbon backward-compatibility API for Mac OS X. Over the years, the Finder has been a frequent target of dissatisfaction and scorn. Those ill feelings frequently spilled over into the parallel debate over API supremacy: Carbon vs. Cocoa.

    "The Finder sucks because it's a Carbon app. What it needs is a Cocoa Finder! Surely that will solve all its woes." Well, Snow Leopard features a 64-bit Finder, and as we all know, Carbon was not ported to 64-bit. Et voilà! A Cocoa Finder in Snow Leopard. (More on the woes in a bit.)

    The conversion to Cocoa followed the Snow Leopard formula: no new features… except for maybe one or two. And so, the "new" Cocoa Finder looks and works almost exactly like the old Carbon Finder. The biggest indicator of its "Cocoa-ness" is the extensive use of Core Animation transitions. For example, when a Finder window makes its schizophrenic transformation from a sidebar-bedecked browser window to its minimally adorned form, it no longer happens in a blink. Instead, the sidebar slides away and fades, the toolbar shrinks, and everything tucks in to form its new shape.

    Despite crossing the line in a few cases, the Core Animation transitions do make the application feel more polished, and yes, "more Cocoa." And presumably the use of Cocoa made it so darn easy to add features that the developers just couldn't resist throwing in a few.

    The number-one feature request from heavy column-view users has finally been implemented: sortable columns. The sort order applies to all columns at once, which isn't as nice as per-column sorting, but it's much better than nothing at all. The sort order can be set using a menu command (each of which has a keyboard shortcut) or by right-clicking in an unoccupied area of a column and selecting from the resulting context menu.

    Column view sorting context menu
    Column view sorting menu

    Even the lowly icon view has been enhanced in Snow Leopard. Every icon-view window now includes a small slider to control the size of the icons.

    The Finder's icon view with its new slider control

    This may seem a bit odd—how often do people change icon sizes?—but it makes much more sense in the context of previewing images in the Finder. This use case is made even more relevant by the recent expansion of the maximum icon size to 512x512 pixels.

    The icon previews themselves have been enhanced to better match the abilities available in Quick Look. Put it all together and you can smoothly zoom a small PDF icon, for example, into the impressively high-fidelity preview shown below, complete with the ability to turn pages. One press of the space bar and you'll progress to the even larger and more flexible Quick Look view. It's a pretty smooth experience.

    Not your father's icon: 512x512 pixels of multi-page PDF previewing

    QuickTime previews have been similarly enhanced. As you zoom in on the icon, it transforms into a miniature movie player, adorned with an odd circular progress indicator. Assuming users are willing to wrangle with the vagaries of the Finder's view settings successfully enough to get icon view to stick for the windows where it's most useful, I think that odd little slider is actually going to get a lot of use.

    The Finder's QuickTime preview. (The "glare" overlay is a bit much.)

    List view also has a few enhancements—accidental, incidental, or otherwise. The drag area for each list view item now spans the entire line. In Leopard, though the entire line was highlighted, only the file name or icon portion could be dragged. Trying to drag anywhere else just extended the selection to other items in the list view as the cursor was moved. I'm not sure whether this change in behavior is intentional or if it's just an unexamined consequence of the underlying control used for list view in the new Cocoa Finder. Either way, thumbs up.

    Double-clicking on the dividing line between two column headers in list view will "right-size" that column. For most columns, this means expanding or shrinking to minimally fit the widest value in the column. Date headers will progressively shrink to show less verbose date formats. Supposedly, this worked intermittently in Leopard as well. But whether Cocoa is bringing this feature for the first time or is just making it work correctly for the first time, it's a change for the better.

    Searching using the Finder's browser view is greatly improved by the implementation of one of those little things that many users have been clamoring for year after year. There's now a preference to select the default scope of the search field in the Finder window toolbar. Can I get an amen?

    Default Finder search location: configurable at last.

    Along similar lines, there are other long-desired enhancements that go a long way towards making the desktop environment feel more solid. A good example is the improved handling of the dreaded "cannot eject, disk in use" error. The obvious follow-up question from the user is, "Okay, so what's using it?" Snow Leopard finally provides that information.

    No more guessing

    (Yes, Mac OS X will refuse to eject a disk if your current working directory in a command-line shell is on that disk. Kind of cool, but also kind of annoying.)

    Another possible user response to a disk-in-use error is, "I don't care. I'm in a hurry. Just eject it!" That's an option now as well.

    Forcible ejection in progress

    Hm, but why did I get information about the offending application in one dialog, and an option to force ejection in the other, but neither one presented both choices? It's a mystery to me, but presumably it's related to exactly what information the Finder has about the contention for the disk. (As always, the lsof command is available if you want to figure it out the old-fashioned way.)


    So does the new Cocoa Finder finally banish all of those embarrassing bugs from the bad old days of Carbon? Not quite. This is essentially the "1.0" release of the Cocoa Finder, and it has its share of 1.0 bugs. Here's one discovered by Glen Aspeslagh (see image right).

    Do you see it? If not, look closer at the order of the dates in the supposedly sorted "Date Modified" column. So yeah, that old Finder magic has not been entirely extinguished.

    There also remains some weirdness in the operation of the icon grid. In a view where grid snap is turned on (or is enabled transiently by holding down the command key during a drag), icons seem terrified of each other, leaving huge distances between themselves and their neighbors when they select which grid spot to snap to. It's as if the Finder lives in mortal dread that one of these files will someday get a 200-character filename that will overlap with a neighboring file's name.

    The worst incarnation of this behavior happens along the right edge of the screen, where mounted volumes appear on the desktop. (Incidentally, this is not the default; if you want to see disks on your desktop, you must enable this preference in the Finder.) When I mount a new disk, I'm often surprised to see where it ends up appearing. If there are any icons remotely close to the right edge of the screen, the disk icon will refuse to appear there. Again, the Finder is not avoiding any actual name or icon overlap. It appears to be avoiding the mere possibility of overlap at some unspecified point in the future. Silly.

    Finder report card

    Overall, the Snow Leopard Finder takes several significant steps forward—64-bit/Cocoa future-proofing, a few new, useful features, added polish—and only a few shuffles backwards with the slight overuse of animation and the continued presence of some puzzling bugs. Considering how long it took the Carbon Finder to get to its pre-Snow-Leopard feature set and level of polish, it's quite an achievement for a Cocoa Finder to match or exceed its predecessor in its very first release. I'm sure the Carbon vs. Cocoa warriors would have had a field day with that statement, were Carbon not put out to pasture in Leopard. But it was, and to the victor go the spoils.


    Snow Leopard's headline "one new feature" is support for Microsoft Exchange. This appears to be, at least partially, yet another hand-me-down from the iPhone, which gained support for Exchange in its 2.0 release and expanded on it in 3.0. Snow Leopard's Exchange support is woven throughout the expected crop of applications in Mac OS X: iCal, Mail, and Address Book.

    The big caveat is that it will only work with a server running Exchange 2007 (Service Pack 1, Update Rollup 4) or later. While I'm sure Microsoft greatly appreciates any additional upgrade revenue this decision provides, it means that for users whose workplaces are still running older versions of Exchange, Snow Leopard's "Exchange support" might as well not exist.

    Those users are probably already running the only other viable Mac OS X Exchange client, Microsoft Entourage, so they'll likely just sit tight and wait for their IT departments to upgrade. Meanwhile, Microsoft is already making overtures to these users with the promised creation—finally—of an honest-to-goodness version of Outlook for Mac OS X.

    In my admittedly brief testing, Snow Leopard's Exchange support seems to work as expected. I had to have one of the Microsoft mavens in the Ars Orbiting HQ spin up an Exchange 2007 server just for the purposes of this review. However it was configured, all I had to enter in the Mail application was my full name, e-mail address, and password, and it automatically discovered all relevant settings and configured iCal and Address Book for me.

    Exchange setup: surprisingly easy

    Windows users are no doubt accustomed to this kind of Exchange integration, but it's the first time I've seen it on the Mac platform—and that includes my many years of using Entourage.

    Access to Exchange-related features is decidedly subdued, in keeping with the existing interfaces of Mail, iCal, and Address Book. If you're expecting the swarm of panels and toolbar buttons found in Outlook on Windows, you're in for a bit of a shock. For example, here's the "detail" view of a meeting in iCal.

    iCal event detail

    Clicking the "edit" button hardly reveals more.

    Event editor: that's it?

    The "availability" window also includes the bare minimum number of controls and displays to get the job done.

    Meeting availability checker

    The integration into Mail and Address Book is even more subtle—almost entirely transparent. This is to be construed as a feature, I suppose. But though I don't know enough about Exchange to be completely sure, I can't shake the feeling that there are Exchange features that remain inaccessible from Mac OS X clients. For example, how do I book a "resource" for a meeting? If there's a way to do so, I couldn't discover it.

    Still, even basic Exchange integration out of the box goes a long way towards making Mac OS X more welcome in corporate environments. It remains to be seen how convinced IT managers are of the "realness" of Snow Leopard's Exchange integration. But I've got to think that being able to send and receive mail, create and respond to meeting invitations, and use the global corporate address book is enough for any Mac user to get along reasonably well in an Exchange-centric environment.


    The thing is, there's not really much to say about performance in Snow Leopard. Dozens of benchmark graphs lead to the very simple conclusion: Snow Leopard is faster than Leopard. Not shockingly so, at least in the aggregate, but it's faster. And while isolating one particular subsystem with a micro-benchmark may reveal some impressive numbers, it's the way these small changes combine to improve the real-world experience of using the system that really makes a difference.

    One illustration Apple gave at WWDC was making an initial Time Machine backup over the network to a Time Capsule. Apple's approach to optimizing this operation was to address each and every subsystem involved.

    Time Machine itself was given support for overlapping I/O. Spotlight indexing, which happens on Time Machine volumes as well, was identified as another time-consuming task involved in backups, so its performance was improved. The networking code was enhanced to take advantage of hardware-accelerated checksums where possible, and the software checksum code was hand-tuned for maximum performance. The performance of HFS+ journaling, which accompanies each file system metadata update, was also improved. For Time Machine backups that write to disk images rather than native HFS+ file systems, Apple added support for concurrent access to disk images. The amount of network traffic produced by AFP during backups has also been reduced.
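    "Overlapping I/O" simply means that reads and writes proceed concurrently instead of in strict alternation. A toy sketch of the idea (not Apple's code): a reader thread keeps a bounded queue of chunks full while the writer drains it, so the source and destination devices stay busy at the same time.

```python
# Minimal overlapping-I/O pipeline: reader and writer run concurrently,
# decoupled by a bounded queue, instead of read-then-write in lockstep.
import queue
import threading

def copy_overlapped(read_chunk, write_chunk, depth=4):
    q = queue.Queue(maxsize=depth)          # bounded buffer between the two

    def reader():
        while True:
            chunk = read_chunk()
            q.put(chunk)
            if chunk is None:               # sentinel: source exhausted
                return

    t = threading.Thread(target=reader)
    t.start()
    written = 0
    while (chunk := q.get()) is not None:   # writer drains in parallel
        write_chunk(chunk)
        written += 1
    t.join()
    return written

# Toy usage: "read" from a list, "write" by appending to another.
src = iter([b"a", b"b", b"c", None])
dst = []
n = copy_overlapped(lambda: next(src), dst.append)
print(n)  # → 3
```

    With real disk and network I/O, the reader can be filling the queue while the writer is blocked on the network, which is where the speedup comes from.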

    All of this adds up to a respectable 55% overall improvement in the speed of an initial Time Machine backup. And, of course, the performance improvements to the individual subsystems benefit all applications that use them, not just Time Machine.

    This holistic approach to performance improvement is not likely to knock anyone's socks off, but every time you run across a piece of functionality in Snow Leopard that disproportionately benefits from one of these optimized subsystems, it's a pleasure.

    For example, Snow Leopard shuts down and restarts much faster than Leopard. I'm not talking about boot time; I mean the time between the selection of the Shutdown or Restart command and when the system turns off or begins its new boot cycle. Leopard doesn't take long at all to do this; only a few seconds when there are no applications open. But in Snow Leopard, it's so fast that I often thought the operating system had crashed rather than shut down cleanly. (That's actually not too far from the truth.)

    The performance boosts offered by earlier major releases of Mac OS X still dwarf Snow Leopard's speedup, but that's mostly because Mac OS X was so excruciatingly sluggish in its early years. It's easy to create a huge performance delta when you're starting from something abysmally slow. The fact that Snow Leopard achieves consistent, measurable improvements over the already-speedy Leopard is all the more impressive.

    And yes, for the seventh consecutive time, a new release of Mac OS X is faster on the same hardware than its predecessor. (And for the first time ever, it's smaller, too.) What more can you ask for, really? Even that old performance bugaboo, window resizing, has been completely vanquished. Grab the corner of a fully-populated iCal window—the worst-case scenario for window resizing in the old days—and shake it as fast as you can. Your cursor will never be more than a few millimeters from the window's grab handle; it tracks your frantic motion perfectly. On most Macs, this is actually true in Leopard as well. It just goes to show how far Mac OS X has come on the performance front. These days, we all just take it for granted, which is exactly the way it should be.

    Grab bag

    In the "grab bag" section, I usually examine smaller, mostly unrelated features that don't warrant full-blown sections of their own. But when it comes to user-visible features, Snow Leopard is kind of "all grab bag," if you know what I mean. Apple's even got its own version in the form of a giant webpage of "refinements." I'll probably overlap with some of those, but there'll be a few new ones here as well.

    New columns in open/save dialogs

    The list view in open and save dialog boxes now supports more than just "Name" and "Date Modified" columns. Right-click on any column to get a choice of additional columns to display. I've wanted this feature for a long time, and I'm glad someone finally had time to implement it.

    Configurable columns in open/save dialogs

    Improved scanner support

    The bundled Image Capture application now has the ability to talk to a wide range of scanners. I plugged in my Epson Stylus CX7800, a device that previously required third-party software in order to use the scanning feature, and Image Capture detected it immediately.

    Epson scanner + Image Capture - Epson software

    Image Capture is also not a bad little scanning application. It has pretty good automatic object detection, including support for multiple objects, obviating the need to manually crop items. Given the sometimes-questionable quality of third-party printer and scanner drivers for Mac OS X, the ability to use a bundled application is welcome.

    System Preferences bit wars

    System Preferences, like virtually all other applications in Snow Leopard, is 64-bit. But since 64-bit applications can't load 32-bit plug-ins, that presents a problem for the existing crop of 32-bit third-party preference panes. System Preferences handles this situation with a reasonable amount of grace. On launch, it will display icons for all installed preference panes, 64-bit or 32-bit. But if you click on a 32-bit preference pane, you'll be presented with a notification like this:

    64-bit application vs. 32-bit plug-in: fight!

    Click "OK" and System Preferences will relaunch in 32-bit mode, which is conveniently indicated in the title bar. Since all of the first-party preference panes are compiled for both 64-bit and 32-bit operation, System Preferences does not need to relaunch again for the duration of its use. This raises the question: why not have System Preferences launch in 32-bit mode all the time? I suspect it's just another way for Apple to "encourage" developers to build 64-bit-compatible binaries.
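    How does the system know which architectures a plug-in supports? Mac OS X "universal" binaries start with a fat header listing each contained architecture. A rough illustration of reading that header (the constants are from Apple's <mach-o/fat.h>; this is a teaching sketch, not a full Mach-O parser, and the sample header below is fabricated):

```python
# Parse the fat (universal binary) header of a Mach-O file to see which
# CPU architectures it contains — the same information System Preferences
# needs when deciding whether a preference pane can load into a 64-bit host.
import struct

FAT_MAGIC = 0xCAFEBABE          # big-endian universal-binary magic number
CPU_TYPE_X86 = 0x00000007       # 32-bit Intel
CPU_TYPE_X86_64 = 0x01000007    # 64-bit Intel (CPU_ARCH_ABI64 | x86)

def fat_architectures(data):
    """Return the cputype values found in a universal binary's fat header."""
    magic, nfat = struct.unpack(">II", data[:8])
    if magic != FAT_MAGIC:
        return []               # thin binary, or not Mach-O at all
    archs = []
    for i in range(nfat):
        # each fat_arch record is five big-endian 32-bit fields
        cputype, *_ = struct.unpack(">5I", data[8 + 20 * i : 28 + 20 * i])
        archs.append(cputype)
    return archs

# Fabricated two-arch header: i386 + x86_64 (offsets/sizes zeroed for brevity).
header = struct.pack(">II", FAT_MAGIC, 2)
header += struct.pack(">5I", CPU_TYPE_X86, 3, 0, 0, 0)
header += struct.pack(">5I", CPU_TYPE_X86_64, 3, 0, 0, 0)
print([hex(a) for a in fat_architectures(header)])  # → ['0x7', '0x1000007']
```

    A pane whose bundle executable lists only `0x7` is the kind that forces the 32-bit relaunch described above.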

    Safari plug-ins

    The inability of 64-bit applications to load 32-bit plug-ins is a problem for Safari as well. Plug-ins are so important to the Web experience that relaunching in 32-bit mode is not really an option. You'd probably need to relaunch as soon as you visited your first webpage. But Apple does want Safari to run in 64-bit mode due to some significant performance enhancements in the JavaScript engine and other areas of the application that are not available in 32-bit mode.

    Apple's solution is similar to what it did with QuickTime X and 32-bit QuickTime 7 plug-ins. Safari will run 32-bit plug-ins in separate 32-bit processes as needed.

    Separate processes for 32-bit Safari plug-ins

    This has the added, extremely significant benefit of isolating potentially buggy plug-ins. According to the automated crash reporting built into Mac OS X, Apple has said that the number one cause of crashes is Web browser plug-ins. That's not the number one cause of crashes in Safari, mind you; it's the number one cause when considering all crashes of all applications in Mac OS X. (And though it was not mentioned by name, I think we all know the primary culprit.)

    As you can see above, the QuickTime browser plug-in gets the same treatment as Flash and other third-party 32-bit Safari plug-ins. All of this means that when a plug-in crashes, Safari in Snow Leopard does not. The window or tab containing the crashing plug-in doesn't even close. You can simply click the reload button and give the problematic plug-in another chance to function correctly.

    While this is still far from the much more robust approach employed by Google Chrome, where each tab lives in its own independent process, if Apple's crash statistics are to be believed, isolating plug-ins may deliver most of the benefit of truly separate processes with a significantly less radical change to the Safari application itself.
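    The protection being described is just ordinary process isolation. A toy demonstration (not Safari's actual mechanism): run the "plug-in" code in its own interpreter process, and a hard crash in it merely produces a nonzero exit status that the host can observe and recover from.

```python
# Toy process-isolation demo: a crashing "plug-in" kills only its own
# process; the host survives, notices the exit status, and can retry.
import subprocess
import sys

def run_plugin(code):
    """Run plug-in code in a separate interpreter; return its exit status."""
    return subprocess.run([sys.executable, "-c", code]).returncode

crash = "import os; os._exit(1)"   # simulate a native crash in the plug-in
ok = "pass"                        # a well-behaved plug-in

print(run_plugin(crash), run_plugin(ok))  # → 1 0
```

    The host's "reload" button is just calling `run_plugin` again; nothing in the host process was damaged by the crash.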

    Resolution independence

    When we last left Mac OS X in its seemingly interminable march towards a truly scalable user interface, it was almost ready for prime time. I'm sad to say that resolution independence was obviously not a priority in Snow Leopard, because it hasn't gotten any better, and may have actually regressed a bit. Here's what TextEdit looks like at a 2.0 scale factor in Leopard and Snow Leopard.

    TextEdit at scale factor 2.0 in Leopard / TextEdit at scale factor 2.0 in Snow Leopard

    Yep, it's a bummer. I still recall Apple advising developers to have their applications ready for resolution independence by 2008. That's one of the few dates that the Jobs-II-era Apple has not been able to hit, and it's getting later all the time. On the other hand, it's not like 200-DPI monitors are raining from the sky either. But I'd really like to see Apple get going on this. It will undoubtedly take a long time for everything to look and work correctly, so let's get started.

    Terminal splitters

    The Terminal application in Tiger and earlier versions of Mac OS X allowed each of its windows to be split horizontally into two separate panes. This was invaluable for referencing earlier text in the scrollback while typing commands at the prompt. Sadly, the splitter feature disappeared in Leopard. In Snow Leopard, it's back with a vengeance.

    Arbitrary splitters, baby!

    (Now if only my favorite text editor would get on board the train to splittersville.)

    Terminal in Snow Leopard also defaults to the new Menlo font. But contrary to earlier reports, the One True Monospaced Font, Monaco, is most definitely still included in Snow Leopard (see screenshot above) and it works just fine.

    System Preferences shuffle

    The seemingly obligatory rearrangement of preference panes in the System Preferences application accompanying each release of Mac OS X continues in Snow Leopard.

    System Preferences: shuffled yet again / System Preferences (not running) with Dock menu

    This time, the "Keyboard & Mouse" preference pane is split into separate "Keyboard" and "Mouse" panes, "International" becomes "Language & Text," and the "Internet & Network" section becomes "Internet & Wireless" and adopts the Bluetooth preference pane.

    Someday in the faraway future, perhaps Apple will finally arrive at the "ultimate" arrangement of preference panes and we can all finally go more than two years without our muscle memory being disrupted.

    Before moving on, System Preferences has one neat trick. You can launch directly into a specific preference pane by right-clicking on System Preferences's Dock icon. This works even when System Preferences is not yet running. Kind of creepy, but useful.

    Core location

    One more gift from the iPhone, Core Location, allows Macs to figure out where in the world they are. The "Date & Time" preference pane offers to set your time zone automatically based on your current location using this newfound ability.

    Set your Mac's time zone automatically based on your current location, thanks to Core Location.

    Keyboard magic

    Snow Leopard includes a simple facility for system-wide text auto-correction and expansion, accessible from the "Language & Text" preference pane. It's not quite ready to give a dedicated third-party application a run for its money, but hey, it's free.
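    The core of any text-expansion feature is just a substitution table consulted as you finish typing a token. A bare-bones sketch of that logic (the shortcuts here are invented; the system feature is, of course, far more integrated than this):

```python
# Minimal text-expansion logic: replace known shortcut tokens with their
# expansions, leaving everything else untouched.
SUBS = {"omw": "On my way!", "(c)": "\u00a9"}  # hypothetical shortcut table

def expand(text):
    """Expand each whitespace-delimited token found in the table."""
    return " ".join(SUBS.get(word, word) for word in text.split(" "))

print(expand("omw see you soon"))  # → On my way! see you soon
```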

    Global text expansion and auto-correction

    The keyboard shortcuts preference pane has also been rearranged. Now, instead of a single, long list of system-wide keyboard shortcuts, they're arranged into categories. This reduces clutter, but it also makes it a bit more difficult to find the shortcut you're interested in.

    Keyboard shortcuts: now with categories

    The sleeping Mac dilemma

    I don't like to leave my Mac Pro turned on 24 hours a day, especially during the summer in my un-air-conditioned house. But I do want to have access to the files on my Mac when I'm elsewhere—at work, on the road, etc. It is possible to wake a sleeping Mac remotely, but doing so requires being on the same local network.

    My solution has been to leave a smaller, more power-efficient laptop on at all times on the same network as my Mac Pro. To wake my Mac Pro remotely, I ssh into the laptop, then send the magic "wake up" packet to my Mac Pro. (For this to work, the "Wake for Ethernet network administrator access" checkbox must be checked in the "Energy Saver" preference pane in System Preferences.)
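    The magic "wake up" packet is a standard Wake-on-LAN frame: six bytes of 0xFF followed by the target's MAC address repeated sixteen times, usually sent as a UDP broadcast. A minimal sketch (the MAC address is made up):

```python
# Build and send a Wake-on-LAN "magic packet": 6 bytes of 0xFF followed by
# the target MAC address repeated 16 times, broadcast over UDP.
import socket

def magic_packet(mac):
    """Return the 102-byte WoL payload for a MAC like 'aa:bb:cc:dd:ee:ff'."""
    hw = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + hw * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(magic_packet(mac), (broadcast, port))
    s.close()

print(len(magic_packet("aa:bb:cc:dd:ee:ff")))  # → 102
```

    The sleeping machine's Ethernet hardware watches for this pattern addressed to its own MAC and powers the system back up.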

    Snow Leopard provides a way to do this without leaving any of my computers running all day. When a Mac running Snow Leopard is put to sleep, it attempts to hand off ownership of its IP address to its router. (This only works with an AirPort Extreme base station from 2007 or later, or a Time Capsule from 2008 or later with the latest (7.4.2) firmware installed.) The router then listens for any attempt to connect to that IP address. When one occurs, it wakes up the original owner, hands back the IP address, and forwards traffic appropriately.

    You can even wake some recent-model Macs over WiFi. Combined with MobileMe's "Back to My Mac" dynamic DNS thingamabob, it means I can leave all my Macs asleep and still have access to their contents anytime, anywhere.

    Back to my hack

    As has become traditional, this new release of Mac OS X makes life a bit harder for developers whose software works by patching the in-memory representation of other running applications or the operating system itself. This includes Input Managers, SIMBL plug-ins, and of course the dreaded "Haxies."

    Input Managers get the worst of it. They've actually been unsupported and non-functional in 64-bit applications since Leopard. That wasn't such a big deal when Mac OS X shipped with a whopping two 64-bit applications. But now, with almost every application in Snow Leopard going 64-bit, it's suddenly very significant.

    Thanks to Safari's lack of an officially sanctioned extension mechanism, developers looking to enhance its functionality have most often resorted to the use of Input Managers and SIMBL (which is an Input-Manager-based framework). A 64-bit Safari puts a damper on that entire market. Though it is possible to manually set Safari to launch in 32-bit mode—Get Info on the application in the Finder and click a checkbox—ideally, this is not something developers want to force users to do.

    Happily, at least one commonly used Safari enhancement has the good fortune to be built on top of the officially supported browser plug-in API used by Flash, QuickTime, etc. But that may not be a feasible approach for Safari extensions that enhance functionality in ways not tied directly to the display of particular types of content within a webpage.

    Though I plan to run Safari in its default 64-bit mode, I'll really miss Saft, a Safari extension I use for session restoration (yes, I know Safari has this feature, but it's activated manually—the horror) and address bar shortcuts (e.g., "w noodles" to look up "noodles" in Wikipedia). I'm hoping that ingenious developers will find a way to overcome this new challenge. They always seem to, in the end. (Or Apple could add a proper extension system to Safari, of course. But I'm not holding my breath.)

    As for the Haxies, those usually break with each major operating system update as a matter of course. And each time, those determined fellows at Unsanity, against all odds, manage to keep their software working. I salute them for their effort. I delayed upgrading to Leopard for a long time based solely on the absence of my beloved WindowShade X. I hope I don't have to wait too long for a Snow-Leopard-compatible version.

    The general trend in Mac OS X is away from any sort of involuntary memory space sharing, and towards "external" plug-ins that live in their own, separate processes. Even contextual menu plug-ins in the Finder have been disabled, replaced by an enhanced, but still less-powerful Services API. Again, I have faith that developers will adapt. But the waiting is the hardest part.


    It looks like we'll all be waiting a while longer for a file system in shining armor to supplant the venerable HFS+ (11 years young!) as the default file system in Mac OS X. Despite rumors, official declarations, and much actual pre-release code, support for the impressive ZFS file system is not present in Snow Leopard.

    That's a shame, because Time Machine practically cries out for some ZFS magic. What's more, Apple seems to agree, as evidenced by a post from an Apple employee to a ZFS mailing list last year. When asked about a ZFS-savvy implementation of Time Machine, the reply was encouraging: "This one is important and likely will come sometime, but not for SL." ("SL" is short for Snow Leopard.)

    There are many reasons why ZFS (or a file system with similar features) is a perfect fit for Time Machine, but the most important is its ability to send only the block-level changes during each backup. As Time Machine is currently implemented, if you make a small change to a giant file, the entire file is copied to the Time Machine volume during the next backup. This is extremely wasteful and time-consuming, especially for large files that are modified constantly during the day (e.g., Entourage's e-mail database). Time Machine running on top of ZFS could transfer just the changed disk blocks (a maximum of 128KB each in ZFS, and usually much smaller).
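    To see why block-level deltas matter, here's a toy differ (emphatically not ZFS, which tracks changed blocks natively rather than hashing after the fact): split a file into fixed-size blocks, hash each, and report which blocks differ between two versions. Those blocks are the only data a block-aware backup would need to copy.

```python
# Toy block-level change detection: hash fixed-size blocks of two file
# versions and report the indices that differ (including appended blocks).
import hashlib

BLOCK = 128 * 1024  # ZFS's maximum record size, per the article

def block_hashes(data, size=BLOCK):
    return [hashlib.sha256(data[i:i + size]).digest()
            for i in range(0, len(data), size)]

def changed_blocks(old, new, size=BLOCK):
    a, b = block_hashes(old, size), block_hashes(new, size)
    return [i for i in range(max(len(a), len(b)))
            if i >= len(a) or i >= len(b) or a[i] != b[i]]

# A 1 MiB "mail database" with a single byte flipped: one block to copy,
# versus the whole megabyte under a whole-file backup scheme.
old = bytes(1024 * 1024)
new = old[:300_000] + b"\x01" + old[300_001:]
print(changed_blocks(old, new))  # → [2]
```

    One changed byte means one 128KB block on the wire instead of the entire file, which is the whole argument for a block-aware Time Machine.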

    ZFS would also bring vastly increased robustness for data and metadata, a pooled storage model, constant-time snapshots and clones, and a pony. People sometimes ask what, exactly, is wrong with HFS+. Aside from its obvious lack of the features just listed, HFS+ is limited in many ways by its dated design, which is based on HFS, a twenty-five-year-old file system.

    To give just one example, the centrally located Catalog File, which must be updated for each change to the file system's structure, is a frequent and inevitable source of contention. Modern file systems usually spread their metadata around, both for robustness (multiple copies are often kept in separate locations on the disk) and to allow for better concurrency.

    Practically speaking, think about those times when you run Disk Utility on an HFS+ volume and it finds (and hopefully repairs) a bunch of errors. That's bad, okay? That's something that should not happen with a modern, thoroughly checksummed, always-consistent-on-disk file system unless there are hardware problems (and a ZFS storage pool can actually deal with that as well). And yet it happens all the time with HFS+ disks in Mac OS X when various bits of metadata get corrupted or become out of date.

    Apple gets by year after year, tacking new features onto HFS+ with duct tape and a prayer, but at a certain point there simply has to be a successor—whether it's ZFS, a home-grown Apple file system, or something else entirely. My fingers are crossed for Mac OS X 10.7.

    The future soon

    Creating an operating system is as much a social exercise as a technological one. Creating a platform, even more so. All of Snow Leopard's considerable technical achievements are not just designed to benefit users; they're also intended to goad, persuade, and otherwise herd developers in the direction that Apple feels will be most profitable for the future of the platform.

    For this to work, Snow Leopard has to actually find its way into the hands of customers. The pricing helps a lot there. But even if Snow Leopard were free, there's still some cost to the consumer—in time, worry, software updates, etc.—when performing a major operating system upgrade. The same goes for developers, who must, at the very least, certify that their existing applications run correctly on the new OS.

    The usual way to overcome this kind of upgrade hesitation has been to pack the OS with new features. New features sell, and the more copies of the new operating system in use, the more motivated developers are to update their applications to not just run on the new OS, but also take advantage of its new abilities.

    A major operating system upgrade with "no new features" must play by a different set of rules. Every party involved expects some counterbalance to the lack of new features. In Snow Leopard, developers stand to reap the biggest benefits thanks to an impressive set of new technologies, many of which cover areas previously unaddressed in Mac OS X. Apple clearly feels that the future of the platform depends on much better utilization of computing resources, and is doing everything it can to make it easy for developers to move in this direction.

    Though it's obvious that Snow Leopard includes fewer external features than its predecessor, I'd wager that it has just as many, if not more, internal changes than Leopard. This, I fear, means that the initial release of Snow Leopard will likely suffer the typical 10.x.0 bugs. There have already been reports of new bugs introduced to existing APIs in Snow Leopard. This is the exact opposite of Snow Leopard's implied promise to users and developers that it would concentrate on making existing features faster and more robust without introducing new functionality and the accompanying new bugs.

    On the other side of the coin, I imagine all the teams at Apple that worked on Snow Leopard absolutely reveled in the opportunity to polish their particular subsystems without being burdened by supporting the marketing-driven feature-of-the-month. In any long-lived software product, there needs to be this kind of release valve every few years, lest the entire code base go off into the weeds.

    There's been one other "no new features" release of Mac OS X. Mac OS X 10.1, released a mere six months after version 10.0, was handed out for free by Apple at the 2001 Seybold publishing conference and, later, at Apple retail stores. It was also available from Apple's online store for $19.95 (along with a copy of Mac OS 9.2.1 for use in the Classic environment). This was a different time for Mac OS X. Versions 10.0 and 10.1 were slow, incomplete, and extremely immature; the transition from classic Mac OS was far from over.

    Judged as a modern incarnation of the 10.1 release, Snow Leopard looks pretty darned good. The pricing is similar, and the benefits—to developers and to users—are greater. So is the risk. But again, that has more to do with how horrible Mac OS X 10.0 was. Choosing not to upgrade to 10.1 was unthinkable. Waiting a while to upgrade to Snow Leopard is reasonable if you want to be sure that all the software you care about is compatible. But don't wait too long, because at $29 for the upgrade, I expect Snow Leopard adoption to be quite rapid. Software that will run only on Snow Leopard may be here before you know it.

    Should you buy Mac OS X Snow Leopard? If you're already running Leopard, then the answer is a resounding "yes." If you're still running Tiger, well, then it's probably time for a new Mac anyway. When you buy one, it'll come with Snow Leopard.

    As for the future, it's tempting to view Snow Leopard as the "tick" in a new Intel-style "tick-tock" release strategy for Mac OS X: radical new features in version 10.7 followed by more Snow-Leopard-style refinements in 10.8, and so on, alternating between "feature" and "refinement" releases. Apple has not even hinted that it's considering this kind of plan, but I think there's a lot to recommend it.

    Snow Leopard is a unique and fascinating release, unlike any that have come before it in both scope and intention. At some point, Mac OS X will surely need to get back on the bullet-point-features bandwagon. But for now, I'm content with Snow Leopard. It's the Mac OS X I know and love, but with more of the things that make it slow and quirky engineered away.

    Looking back

    This is the tenth review of a full Mac OS X release, public beta, or developer preview to run on Ars, dating back to December 1999 and Mac OS X DP2. If you want to jump into the Wayback Machine and see how far Apple has come with Snow Leopard (or just want to bone up on all of the big cat monikers), we've gone through the archives and dug up some of our older Mac OS X articles. Happy reading!

  • Five years of Mac OS X, March 24, 2006
  • Mac OS X 10.5 Leopard, October 28, 2007
  • Mac OS X 10.4 Tiger, April 28, 2005
  • Mac OS X 10.3 Panther, November 9, 2003
  • Mac OS X 10.2 Jaguar, September 5, 2002
  • Mac OS X 10.1 (Puma), October 15, 2001
  • Mac OS X 10.0 (Cheetah), April 2, 2001
  • Mac OS X Public Beta, October 3, 2000
  • Mac OS X Q & A, June 20, 2000
  • Mac OS X DP4, May 24, 2000
  • Mac OS X DP3: Trial by Water, February 28, 2000
  • Mac OS X Update: Quartz & Aqua, January 17, 2000
  • Mac OS X DP2, December 14, 1999

  • Inside Mac OS X 10.7 Lion: iChat 6 adds Yahoo IM, account integration, web page sharing



    The next version of Mac OS X will deliver an updated version of iChat capable of logging into Yahoo IM accounts, providing an improved experience when using multiple accounts, and adding web page sharing in iChat Theater.

    The fragmented world of IM

    iChat, Apple's bundled instant messenger app that evolved from its origins as an AIM chat client into a video-enabled IM client supporting the open XMPP/Jabber protocol, now adds support for a third IM protocol: Yahoo IM.

    Unlike email, which all providers support via common standards, IM has long been partitioned into proprietary silos, with AOL, ICQ, Skype, Yahoo, and Microsoft MSN/Live Messenger all using their own closed systems for text and video chat (Apple's .Mac/MobileMe service uses AOL's IM servers, and AOL provides an SMS gateway that enables iChat users to send SMS messages to mobile numbers).

    Yahoo and Microsoft announced a chat-only gateway between their services, as did AOL and ICQ (AOL until recently owned ICQ). In other cases, however, anyone who wants to exchange messages with users on different systems must use a multiprotocol chat client and create separate accounts for each service. Alternatively, you can configure your own gateway server or use an external gateway that logs into your account on another service to relay IM messages between the incompatible account types.

    Mac OS X Lion's iChat 6 now adds support for Yahoo IM, although the service appears to support only text chats, not Yahoo audio or video chats. Because of the chat gateway between Yahoo and MSN, this should enable iChat users to reach both populations of chat accounts, in addition to existing support for Jabber/Gmail IM and video chat and AOL/ICQ/MobileMe IM and video chat. However, while the Lion iChat client does work with Yahoo accounts, the gateway between Yahoo and MSN does not actually appear to work.

    Evolution of iChat

    Starting with iChat AV in 2003, Apple added support for SIP, standards-based video chat. The next year, AOL added compatible video chat to its AOL IM client for Windows, allowing Mac and Windows users to video chat over the AIM network.

    Alongside the release of Mac OS X 10.4 Tiger, Apple shipped a new version of iChat with support for the open XMPP (Extensible Messaging and Presence Protocol, originally named Jabber), enabling iChat to work with both AIM and open clients, including Apple's own iChat Server in Mac OS X Server and, later, Google's GTalk service. iChat's Bonjour chat among local users on the same network also uses XMPP.

    In Mac OS X 10.5 Leopard, Apple integrated support for VNC screen sharing, allowing users to share their desktop remotely as a video chat over either the XMPP or AIM protocols. Apple also released iChat Theater, which could share a photo, video, or any document supported by Quick Look over a video chat (below).

    Third-party support for tapping into these features was also added via a new Instant Message framework.


    IIA [3 Certification Exam(s) ]
    IIBA [2 Certification Exam(s) ]
    IISFA [1 Certification Exam(s) ]
    Intel [2 Certification Exam(s) ]
    IQN [1 Certification Exam(s) ]
    IRS [1 Certification Exam(s) ]
    ISA [1 Certification Exam(s) ]
    ISACA [4 Certification Exam(s) ]
    ISC2 [6 Certification Exam(s) ]
    ISEB [24 Certification Exam(s) ]
    Isilon [4 Certification Exam(s) ]
    ISM [6 Certification Exam(s) ]
    iSQI [7 Certification Exam(s) ]
    ITEC [1 Certification Exam(s) ]
    Juniper [65 Certification Exam(s) ]
    LEED [1 Certification Exam(s) ]
    Legato [5 Certification Exam(s) ]
    Liferay [1 Certification Exam(s) ]
    Logical-Operations [1 Certification Exam(s) ]
    Lotus [66 Certification Exam(s) ]
    LPI [24 Certification Exam(s) ]
    LSI [3 Certification Exam(s) ]
    Magento [3 Certification Exam(s) ]
    Maintenance [2 Certification Exam(s) ]
    McAfee [8 Certification Exam(s) ]
    McData [3 Certification Exam(s) ]
    Medical [69 Certification Exam(s) ]
    Microsoft [375 Certification Exam(s) ]
    Mile2 [3 Certification Exam(s) ]
    Military [1 Certification Exam(s) ]
    Misc [1 Certification Exam(s) ]
    Motorola [7 Certification Exam(s) ]
    mySQL [4 Certification Exam(s) ]
    NBSTSA [1 Certification Exam(s) ]
    NCEES [2 Certification Exam(s) ]
    NCIDQ [1 Certification Exam(s) ]
    NCLEX [2 Certification Exam(s) ]
    Network-General [12 Certification Exam(s) ]
    NetworkAppliance [39 Certification Exam(s) ]
    NI [1 Certification Exam(s) ]
    NIELIT [1 Certification Exam(s) ]
    Nokia [6 Certification Exam(s) ]
    Nortel [130 Certification Exam(s) ]
    Novell [37 Certification Exam(s) ]
    OMG [10 Certification Exam(s) ]
    Oracle [282 Certification Exam(s) ]
    P&C [2 Certification Exam(s) ]
    Palo-Alto [4 Certification Exam(s) ]
    PARCC [1 Certification Exam(s) ]
    PayPal [1 Certification Exam(s) ]
    Pegasystems [12 Certification Exam(s) ]
    PEOPLECERT [4 Certification Exam(s) ]
    PMI [15 Certification Exam(s) ]
    Polycom [2 Certification Exam(s) ]
    PostgreSQL-CE [1 Certification Exam(s) ]
    Prince2 [6 Certification Exam(s) ]
    PRMIA [1 Certification Exam(s) ]
    PsychCorp [1 Certification Exam(s) ]
    PTCB [2 Certification Exam(s) ]
    QAI [1 Certification Exam(s) ]
    QlikView [1 Certification Exam(s) ]
    Quality-Assurance [7 Certification Exam(s) ]
    RACC [1 Certification Exam(s) ]
    Real-Estate [1 Certification Exam(s) ]
    RedHat [8 Certification Exam(s) ]
    RES [5 Certification Exam(s) ]
    Riverbed [8 Certification Exam(s) ]
    RSA [15 Certification Exam(s) ]
    Sair [8 Certification Exam(s) ]
    Salesforce [5 Certification Exam(s) ]
    SANS [1 Certification Exam(s) ]
    SAP [98 Certification Exam(s) ]
    SASInstitute [15 Certification Exam(s) ]
    SAT [1 Certification Exam(s) ]
    SCO [10 Certification Exam(s) ]
    SCP [6 Certification Exam(s) ]
    SDI [3 Certification Exam(s) ]
    See-Beyond [1 Certification Exam(s) ]
    Siemens [1 Certification Exam(s) ]
    Snia [7 Certification Exam(s) ]
    SOA [15 Certification Exam(s) ]
    Social-Work-Board [4 Certification Exam(s) ]
    SpringSource [1 Certification Exam(s) ]
    SUN [63 Certification Exam(s) ]
    SUSE [1 Certification Exam(s) ]
    Sybase [17 Certification Exam(s) ]
    Symantec [135 Certification Exam(s) ]
    Teacher-Certification [4 Certification Exam(s) ]
    The-Open-Group [8 Certification Exam(s) ]
    TIA [3 Certification Exam(s) ]
    Tibco [18 Certification Exam(s) ]
    Trainers [3 Certification Exam(s) ]
    Trend [1 Certification Exam(s) ]
    TruSecure [1 Certification Exam(s) ]
    USMLE [1 Certification Exam(s) ]
    VCE [6 Certification Exam(s) ]
    Veeam [2 Certification Exam(s) ]
    Veritas [33 Certification Exam(s) ]
    Vmware [58 Certification Exam(s) ]
    Wonderlic [2 Certification Exam(s) ]
    Worldatwork [2 Certification Exam(s) ]
    XML-Master [3 Certification Exam(s) ]
    Zend [6 Certification Exam(s) ]
