Yearly Archives: 2008

Followup on: MobileMe Problems Show Apple Needs an Infrastructure Lesson

Chuqui 3.0: MobileMe Problems Show Apple Needs an Infrastructure Lesson:


That this release was botched isn’t about Apple not having a clue, but about the MobileMe people blowing it (I can think of any number of scenarios — scaling this stuff is hard). The technical failures seem to have been capacity planning mistakes more than anything else, if I’m guessing right, but the ultimate failure was not being willing to tell Steve “we aren’t ready” and taking that heat. They thought they could release and make it work, and guessed very wrong (or thought they were in good shape, which is worse).


I want to thank everyone who’s read, linked, emailed or commented on my thoughts on MobileMe the last few days. It’s been really interesting to see the reaction and hear the feedback. There’s a great and fascinating comment thread on that posting that I encourage everyone to read.

I have to admit that my first reaction when I realized this thing was going to get huge readership was “man, am I going to get in trouble again?” — then I remembered I didn’t work at Apple any more. But old habits die hard, it seems.

The feedback from “the inside” was heartening. Thanks, all. And to those of you emailing me from your Apple accounts: what were you thinking? (grin) I’ll simply say that the reaction from that quarter indicates to me I was fairly close to the mark, and leave it at that.

And now, to make a few comments on the comments…

I think the credit card transaction delay example is a bad one, as it’s by design. Apple aims to gather up multiple purchases for a single credit card transaction to reduce fees.


There are a couple of reasons for this; three, actually. One, pulling together a bunch of small transactions into one larger one limits the card charges to Apple, so it pays less in processing fees. Two, SAP is, pure and simple, not a real-time processing system, so you HAVE to batch stuff into it; you cannot scale SAP to handle what iTunes does in real time. Three, it allows Apple to moderate the flow of transactions during peak times and spread the load out; it’s a form of scaling that uses the quieter times to avoid having to throw ever more hardware at a problem just for the peak loads.
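
To make the batching idea concrete, here’s a minimal sketch of the first and third points: queue purchases per card, then settle each card as a single charge during a quiet window. Everything in it (the names, the threshold, the charge hook) is invented for illustration; it says nothing about how Apple’s actual billing pipeline is built:

```python
from collections import defaultdict

# Invented threshold -- a real billing pipeline would tune this.
MIN_BATCH_TOTAL_CENTS = 1000          # don't settle tiny totals right away

pending = defaultdict(list)           # card_id -> queued (amount_cents, item)

def record_purchase(card_id, amount_cents, item):
    """Queue a purchase instead of charging the card immediately."""
    pending[card_id].append((amount_cents, item))

def flush_batches(charge_fn, force=False):
    """Settle each card's queue as ONE transaction, paying one
    per-transaction fee instead of many. Run during quiet periods to
    spread the load; force=True drains stragglers at end of cycle."""
    for card_id in list(pending):
        total = sum(amount for amount, _ in pending[card_id])
        if force or total >= MIN_BATCH_TOTAL_CENTS:
            charge_fn(card_id, total)  # one charge covers the whole batch
            del pending[card_id]
```

The settled batches are then what gets posted into SAP after the fact, which is part of why the charge on your statement can trail the purchase by a day or more.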

The second reason (SAP is not a real-time system) is one not well appreciated. Apple’s IT crews have done some unbelievable work around SAP, and people don’t appreciate just how critical this is to the company’s success. This is one part of it — SAP simply isn’t capable of doing what Apple needs it to do, and Apple’s geeks have found ways to beat it into submission. Mac OS X and Aperture and iLife get the coverage in the press, but down in the trenches are a bunch of people working their butts off doing stuff that’d make most CIOs drool.

By the way, every time I see someone say “Apple doesn’t do Enterprise” or “Apple doesn’t get the Enterprise”, I have to laugh. If you’re a CIO or an IT/datacenter wonk, you would find a briefing by Apple on Enterprise stuff eye-opening. Niall O’Connor and his band of gleeful leprechauns are doing stuff (and doing it on Xserves to a huge degree) that’ll fry your brain. I spent my last decade at Apple in IT, and got to see (and work with, and sometimes fight with) those folks a lot, and they’ve really put together a top-notch crew and a top-flight operation. Apple runs the largest global single-instance SAP environment in existence, and the entire company is driven by that beast and the tools they’ve built. Amazing stuff.


I find it curious that people would hold up the iTunes Store as a good example of an Apple service done well. The release of iPhone OS 2.0 was not too long ago, and the store aspect of that release was a complete debacle. That wasn’t the first time the store went down hard, and I’m sure it won’t be the last.


Okay, here’s a basic reality: sometimes, there’s absolutely nothing you can do. Sometimes, you simply can’t scale to meet the demand of a high-profile, high-demand launch. Everything I’ve seen about the iPhone, iPhone 2.0 and the App Store indicates that interest and demand are something like an order of magnitude larger than anyone expected in their wildest dreams. They seem to have planned for a couple of thousand developers; they got tens of thousands. Entire cities camped out to buy phones on day one (it seems).

So sometimes, you do your best, you know there are going to be problems, you weather them as well as you can, and hope it’s good enough. Eddy’s group has done wonders at hiding launch problems and solving them on the fly — I’ve literally seen flocks of people flying pallets of Xserves down hallways, forming a fire-wagon line to get them unboxed, racked, networked up and into the load balancers, just to try to catch up with capacity demand on a launch day. You try that some day….

But sometimes, nothing you do is enough. By the way, have I pointed out that I don’t ever, EVER join you fanboys in those day 1 lines? I mean, seriously — you do this to yourselves, and Apple does a great job of protecting you from yourselves in this rather silly game. But throw enough drooling geeks at an intro and stuff’s going to break, because capacity is always finite; it’s merely a matter of where you draw that line.


I don’t remember Aperture ever being anywhere near state-of-the-art. I do remember a rapid fire series of price cuts when it failed to shift enough units, though.


It was first to market by a good measure, and redefined how photos were handled on the computer. Unfortunately, there were internal problems on the team (according to rumors), and getting it all straightened out took a while. By the time Aperture 2.0 finally saw the light of day, Lightroom had caught up and passed it, and even Photoshop CS3 and Bridge had made good progress. I was an early adopter of Aperture, and I admit I finally gave up and moved to CS3/Bridge when I got tired of waiting. This weekend, I’ll be trying out the Lightroom 2.0 release and seeing if I like it (I was whelmed by LR 1 and didn’t bother; right now, I use Bridge and CS3). Aperture 2.0 caught up with, but didn’t really leapfrog, Lightroom 1, and Lightroom 2 really blows it away from what I can see, so Apple’s got a problem in trying to get Aperture back into this game seriously. Too bad. I admit it: I really tried to get onto the Aperture team before I left Apple, that’s how much I loved the initial product. Today, I own Aperture 2.0, but I’ve moved everything out of it to Bridge.

I’ve talked about Aperture a lot on my blog in the past; you can find most of that here. It’s a great idea that took too long to mature. Oh well.


I regularly see iTunes Store errors here. Maybe the Canadian one is less stable, but I always thought they shared the same back end.


It may have changed by now, but it all used to be one big glop o’ back end, which created lots of complications. Like my last big project at Apple, it was successful beyond its architecture, and you end up spending your time keeping things working while you figure out how to rebuild them. I know they had a team involved in re-architecting stuff; I assume some or all of that is in place now (I hope).

And I’ll close with this: I hopped on MobileMe about a week after release; you’ll note that I’ve publicized my email address as chuqui@me.com (which shouldn’t surprise anyone, given I used to use me@chuqui.com — and that should still forward, FWIW). I’m also using it to sync between my Macs, publish calendars, etc. The first week or so after I moved was a bit rocky, but these days it seems solid and stable. I’m not pushing the envelope, but it’s the environment I was hoping to see: my stuff syncs across machines fine, so I don’t have to think about it. My email is in one place, and it’s someone else’s problem to maintain and support the hardware. I long ago gave up the idea of running all of my own stuff, because I found I was spending all of my time supporting myself and not actually creating new stuff. Now, if you’re a geek who likes to do that, fine. Me? I’d rather be doing photography and writing than patching DNS servers, at least in my off-hours. Go figure.

So MobileMe is worth the cost, hands down. And I expect it’ll only get better. After the rumored September announcements, I’ll make a decision about whether to go with an iPhone 3G or an iPod touch (or both); until then, my Blackjack does just fine. And by not being in the day 1 line follies, I avoid the day 1 glitches, too. Some of you could learn from that thought, but probably won’t.

Oh, and this may offend some of you, but what the heck. We used to see these lines forming, and watch the blog reactions and the like with both amusement and fear. That people are that involved and dedicated to Apple “stuff” is both great, and a great responsibility. And we recognized that. We also mentioned William Shatner a lot…

Take care.

MobileMe Problems Show Apple Needs an Infrastructure Lesson

August 8 update: I’ve written some followup thoughts on this message here.

MobileMe Problems Show Apple Needs an Infrastructure Lesson – GigaOM:


Steve Jobs, in an internal email seen by Ars Technica, makes clear that he’s upset about the botched launch of MobileMe, Apple’s new online suite of applications that has been plagued with bugs, including being flat-out unavailable to some for days at a time.


Or as I have been saying to folks here at work, “just imagine Steve Jobs wandering the halls with a flamethrower in hand, asking random people ‘do you work on MobileMe?’”

I expect a bunch of friends and people I know were involved in that project, and I feel really bad for them. But the reality is, the thing wasn’t ready and the release got botched. And Steve and Apple aren’t terribly tolerant of that kind of major screwup. I expect heads have rolled and there are a few tanned hides waiting for the welts to go away.


“It was a mistake to launch MobileMe at the same time as iPhone 3G, iPhone 2.0 software and the App Store,” he says. “We all had more than enough to do, and MobileMe could have been delayed without consequence.”

There are two aspects to this. Steve is absolutely right — but also remember that ultimately, it was Steve’s call to go live (or not). He’s never been afraid to say “this ain’t ready” and pull something from a release; his rehearsals for Macworld keynotes are legendary (and sometimes brutal), and stuff has literally disappeared in the final 24 hours when he wasn’t satisfied with it.

So Steve has some responsibility here as well, but with a caveat: someone he depended on to tell him what reality was told him it was ready to roll, and Steve believed him. Whoever told him that was wrong, and made everyone (including Steve and Apple) look bad. That’s not a good way to advance your career at Apple.


In his email, Jobs says: “The MobileMe launch clearly demonstrates that we have more to learn about Internet services.” You can say that again. The big question in the wake of the MobileMe debacle is whether or not the company even knows how to plan for heavy load.


Or not. Gruber nails this (see below). MobileMe is a tiny thing compared to iTunes. Apple gets it, and executes it amazingly well.

That this release was botched isn’t about Apple not having a clue, but about the MobileMe people blowing it (I can think of any number of scenarios — scaling this stuff is hard). The technical failures seem to have been capacity planning mistakes more than anything else, if I’m guessing right, but the ultimate failure was not being willing to tell Steve “we aren’t ready” and taking that heat. They thought they could release and make it work, and guessed very wrong (or thought they were in good shape, which is worse).



I have picked up some tidbits from my Internet infrastructure sources, who tell me that:

* There is no unified IT plan vis-a-vis applications; each has its own set of servers, IT practices and release scenarios.
* Developers do testing, load testing and infrastructure planning, all of which is implemented by someone else.
* There’s no unified monitoring system.
* They use Oracle on Sun servers for the databases and everything has its own SAN storage. They do not use active Oracle RAC; it is all single-instance, on one box, with a secondary failover.
* Apparently they are putting web servers and app servers on the same machines, which causes performance problems.

One of my sources opined that Apple clearly wasn’t too savvy about all the progress made in infrastructure over the past few years. If this insinuation is indeed true, then there is no way Apple can get over its current spate of problems. It needs a crash course in infrastructure and Internet services. Apple’s problem is that it doesn’t seem to have recognized the fact that it’s in the business of network-enabled hardware.


Not completely true, not necessarily a bad thing.

Some areas of Apple “run their own show”, effectively using Apple’s IT datacenters as hosting facilities. Others build and operate within Apple’s IT infrastructure. One of the groups that basically runs its own IT outside of Apple’s core IT group is Eddy Cue’s — because of the way the stuff Eddy is in charge of gets built and managed.

There are unified monitoring services — and each service also tends to run a layer above that to monitor specific details. That’s not a negative.

The Oracle/Sun single-instance thing? True to a degree, but I don’t see it as a negative. And don’t forget, Apple runs the largest global single-instance SAP environment on this stuff. It’s not exactly doing things wrong.

The bottom line is — Apple’s got its act together here better than these informants want to imply. The failures aren’t because Apple doesn’t know how to do this — it does — it’s because this project got botched.

And now Eddy has been brought in to fix it, which means it’s going to get fixed.

Eddy’s name isn’t familiar to most Apple people, but he’s in his way as important to Apple’s success as Jonathan Ive. His specialty: the back-end infrastructures that make Apple’s online universe tick. His groups did the Apple online store, iTools (later .Mac), the iTunes Store, etc., etc. It’s the not-sexy part of the company, but it’s the guts that make all of the sexy front ends actually work.

I’m actually amazed that Eddy hasn’t been poached by a startup, much as I’m amazed that Tim Cook hasn’t been poached — but the reality is that if you survive and become one of the inner core of people Steve trusts (and that ain’t easy), you tend to stay. Apple doesn’t get poached by startups or other places at the exec level often. Anyone notice?

A lot of that is because it’s not easy working for Steve, but if you can do it, you get to do really great stuff, and that’s addictive. Trust me. You just don’t see people running off from Apple to CEO a startup the way you do from Yahoo or Google, not out of the top few levels of the company.

Eddy’s real specialty is being able to take what Steve asks for, implement it, hit the target dates, make it work, and KEEP THE DAMN THING A SECRET UNTIL STEVE ANNOUNCES IT. That’s a big reason why his team is self-contained. It also means his people can do what needs to be done to implement things that never existed before and don’t fit into normal IT “this is how we do things” standards. He and his teams spend most of their time off in uncharted territory, where being innovative and flexible is a must, and yet they have to do it at huge scale.

On the other hand, Eddy’s no easier to work with than Steve is, for obvious reasons. I invariably warned people not to hire into his groups unless they wanted to donate their lives to the cause. When I was there, I worked pretty closely with various parts of his world, and it was populated in equal parts by people who were just as maniacal about this as Eddy and Steve, and people who were in the process of burning out. Not much middle ground (but it works).

(Full disclosure time: Laurie worked with Eddy way back when; me, I once almost got re-orged into his world until management remembered my vow to die before working for him, and re-arranged reality to fit (otherwise, lists.apple.com never would have existed). But I had a chance to deal with him while I was there, and I’ve got a lot more respect for him now than I used to. I still wouldn’t want to work in the kind of grind his organization demands, but it does pretty good work under really scary conditions.)

So you can bet, MobileMe will get fixed.


The looks, UI and edge devices are only as good as the networking experience — whether it comes from Apple or from its partners. MobileMe could just be the canary in the coal mine as far as the Cupertino Kingdom is concerned. MobileMe isn’t that big a portion of their revenues right now, but what happens when the problems hit the iTunes store? Imagine the uproar when your 3G connections slow to a crawl because AT&T’s wireless backhaul can’t handle the traffic surge.

It might not be a problem of Apple’s making but the company will face the brunt of the backlash. Remember, most of us instinctively blame the device first, then curse the carrier.


Daring Fireball has the right view here:

Daring Fireball Linked List: Om Malik on MobileMe’s Infrastructure:


But the iTunes Store does gangbuster traffic and has a terrific track record for uptime. The message I read from yesterday’s reorg that put MobileMe under Eddy Cue (Apple’s VP for iTunes) is that MobileMe could and should be as responsive and reliable as the iTunes Store.


Om just doesn’t know the Apple internals very well. This wasn’t Apple failing; it was one group within Apple blowing chunks. That happens — remember when Aperture was state of the art? Now it’s fighting to catch up with Lightroom, and may simply never regain that dominance. Oh well.

Apple has the expertise; this isn’t a case of MobileMe problems crawling out into iTunes, but of Apple bringing the iTunes expertise into MobileMe. And having thrown Eddy Cue at the problem, that’s exactly what’s going to happen here.

Reports of Usenet’s Death Are Greatly Exaggerated

Reports of Usenet’s Death Are Greatly Exaggerated:


Is Usenet dead, as Sascha posits? I don’t think so. As long as there are folks who think a command line is better than a mouse, the original text-only social network will live on. Sure, ISPs will shut down access out of misled kiddie porn fears but the real pros know where to go to get their angst-filled, nit-picking, obsessive fix.


Nope. USENET is dead. Dead, dead, dead. Really dead.

The fact that there are small enclaves of civilization holed up in boarded-up bars, hiding from the zombies that have destroyed the city, doesn’t mean the city isn’t destroyed, folks.

This isn’t “yeah, there are some issues, but things are generally okay” out on USENET. This is “hello, I’m Charlton Heston, and I’m the Omega Man” out on USENET. There is some humanity alive out there, but the zombies have won and the city is destroyed.

New and interesting uses for webmail…

For the last couple of weeks, people at work have heard me muttering in the halls about “those damn geeks”. I’ve been chasing down and cleaning up after a group that’s been using the webmail system as a distribution system for — stuff. Mostly warez, cracks and video, from what I can tell.

Since this seems to be fairly widespread and flying under the radar at most sites I’ve talked to about this, I thought I’d give it some wider visibility and go into some of the details.

I want to emphasize this part:

Let me say right up front: no system cracking involved here, no security issues, no hacks, no cracks, no leaks, no bugs. They are simply using these systems as designed, not doing anything to penetrate or compromise the system.

Nothing was hacked in any way; this is purely (in its way) a social engineering hack taking advantage of free webmail sites all around the internet — I saw at least 15 involved in my investigation.

I’d noticed some changes in network usage on the site over the previous couple of months; bandwidth usage had doubled in both May and June, far beyond what I thought normal given the growth in new users we were seeing. It didn’t seem too serious, though, so I stuffed it in the back of my head to investigate at some point.

Early July hits and I look at the numbers again — and in the first 7 days of July we’d used 10X the network bandwidth we used in all of June. We’re talking an order-of-magnitude change, for no good reason.

That’s generally a bad thing. So I went looking….

What I found was both fascinating and a little depressing: a group of people based in Poland had turned public webmail systems into the equivalent of a BitTorrent network.

Let me say right up front: no system cracking involved here, no security issues, no hacks, no cracks, no leaks, no bugs. They are simply using these systems as designed, not doing anything to penetrate or compromise the system.

Here’s how it seems to work: when they have a package to distribute, it’s split into pieces small enough to be attached to and sent as emails; most webmail systems allow attachments up to about 10 megabytes. The pieces were MIME-encoded as standard attachments, although the details of name and type seemed to be ignored (lots of PowerPoint files, in theory).

Then accounts were created on various webmail sites; in my sample of addresses, I see over a dozen different sites being used. The person doing all of this then emails the files to those mailboxes, where they sit. Anyone who wants that set of files only has to get the access information for one of the accounts, log in via IMAP and let their email client download them. It looks like any given package is stored on between 3 and 8 different webmail accounts.
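
To make the mechanics concrete, here’s a minimal sketch of the split-and-mail step. Every name, size and helper in it is invented for illustration; this is my reconstruction of the idea, not their actual tooling:

```python
import smtplib
from email.message import EmailMessage

CHUNK_SIZE = 9 * 1024 * 1024  # stay under the typical ~10 MB attachment cap

def split_file(path, chunk_size=CHUNK_SIZE):
    """Yield the file in attachment-sized pieces."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

def mail_chunks(path, drop_accounts, sender, smtp_host="localhost"):
    """Mail every piece to each drop mailbox. The MIME name and type are
    window dressing; the downloader's reassembly script ignores them and
    just concatenates the parts in sequence order."""
    with smtplib.SMTP(smtp_host) as smtp:
        for seq, chunk in enumerate(split_file(path)):
            for account in drop_accounts:
                msg = EmailMessage()
                msg["From"] = sender
                msg["To"] = account
                msg["Subject"] = f"slides-{seq:03d}"  # innocuous-looking
                msg.add_attachment(chunk, maintype="application",
                                   subtype="vnd.ms-powerpoint",
                                   filename=f"slides-{seq:03d}.ppt")
                smtp.send_message(msg)
```

The download side is just an IMAP fetch of those mailboxes plus a script that strips the attachments and concatenates them in order.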

Account creation seems to be semi-automated. All account names follow a similar format: a semi-random “word” followed by a 1-3 digit number. Passwords use the same format (but are never the same), ditto the “from” address and the Return-Path in the headers of the emails. Sometimes the files are stored in more than one account on a single webmail site (another reason why I think this is at least semi-automated), but generally a package is sent to 4-6 webmail accounts on 4-6 different sites.

It looks like the actual account creation is manual, or semi-manual, because some of the sites involved use CAPTCHA on account creation and that isn’t stopping them. I don’t think this setup is sophisticated enough to have cracked CAPTCHA, so there are people involved in the setup. I think the account naming and packaging are automated, but people handle the account creation and uploading. Once someone downloads the emails, there seems to be another script to put it all back together again, because it doesn’t depend on the MIME data in the message for naming or decoding — in fact, that stuff is set up to make the content itself look (at least casually) innocent.

There’s obviously a web site somewhere that tells you how to access the mailbox to get the content, but I haven’t gone looking for it.

If you think about it, this is a pretty nice hack. With BitTorrent being scrutinized by many ISPs, they’ve set up a fairly low-tech, under-the-radar way of distributing “stuff” without easy detection. The original distributor only has to upload the files once, and the rest of the resource costs are borne by the mail systems — the webmail site pays for the network to accept the files, the disk to store them, and the network to distribute them back out.

Needless to say, I spent some time shutting all of this down. All told, I identified and closed a couple of hundred accounts that accounted for over 200 gigabytes of disk storage, and the network bandwidth they were starting to suck up was going to be measured in terabytes, and we’re a fairly small webmail site right now. One can only wonder what they’re doing to some other sites….

The group is based in Poland, and 99% of the access to these files also came from Polish IP ranges. Fortunately, once you know what to look for, it’s fairly easy to find these accounts, given the standardized naming, the limited IP range they’re coming from, and the exceptionally large average message size. The last is the easiest tell: no “real” webmail account (at least on our system) has an average message size over 5 MB. Even accounts where users park files in IMAP folders for storage tend to have no more than a 1 MB average message size.
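
As a sketch of how cheap that kind of check is, assuming a hypothetical all_accounts() feed of per-account stats (the field names, thresholds and IP prefixes are placeholders, not our real values):

```python
import re

FLAGGED_PREFIXES = ("83.", "91.")              # placeholder ranges, not the real ones
AVG_SIZE_FLAG = 5 * 1024 * 1024                # no legit account here averages > 5 MB/message
NAME_PATTERN = re.compile(r"^[a-z]+\d{1,3}$")  # semi-random "word" plus 1-3 digits

def suspicious(acct):
    """Match the abuse profile: formulaic login name, huge average
    message size, and access from the flagged address ranges."""
    if acct["message_count"] == 0:
        return False
    avg_size = acct["bytes_stored"] / acct["message_count"]
    return (avg_size > AVG_SIZE_FLAG
            and NAME_PATTERN.match(acct["login"]) is not None
            and acct["last_ip"].startswith(FLAGGED_PREFIXES))

# all_accounts() stands in for however your system enumerates accounts.
to_close = [a["login"] for a in all_accounts() if suspicious(a)]
```

Run nightly, something like this is cheap enough to be a routine report rather than a one-off investigation.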

This group spent some time experimenting with the site, evidently to see if we were paying attention. The earliest record I can find of them accessing the site is in April. In June they ramped up their volume significantly, and in July they opened the floodgates (and I found it four days later, fortunately). It’s hard to tell from the outside whether they were testing to see if we’d catch them and then ramped up when they felt safe, or whether this is a new network that finally came online as they finished building it. Either way, it’s clear there’s a lot of network being used on a lot of webmail systems globally by these guys.

How to stop this? No easy answers. They aren’t really “doing” anything we don’t allow; it’s more a Terms of Service issue around content, and one that needs policing. If the account creation were fully automated we could possibly plug that hole (and probably should on general principles; CAPTCHA might not stop this, but it can’t hurt — then again, some of the webmail sites being used have CAPTCHA enabled and it didn’t stop them). On the other hand, there’s no reason we should feel obliged to let them pass around warez on our dime — and they only spend network uploading a package once, while the webmail sites pay the bandwidth to accept it and to deliver it as often as it gets downloaded, plus the disk storage and the typical overhead of backups and so on.

What it really goes to show is that people will find interesting uses for any publicly available technology, whether or not you intended it to be used that way. It also, I think, means we should be aware of what those possible uses might be and see if we can shape our systems to discourage the ones we don’t like. For instance, a 5 megabyte limit on attachments might have discouraged these guys, but wouldn’t significantly impact “normal” users — I found very, very few emails on the system that large.

One of the things I’ve been pondering is how to automate finding, or setting alarms on, this kind of “non-standard” behavior; quotas solve some problems, but not this one. I wrote a script that finds accounts with really large average message sizes. Automating that process, monitoring or rate-limiting network usage on a per-account basis, or simply looking at the accounts with the highest network usage would all be ways to attack it.

Things that definitely don’t help with this kind of problem: quotas, looking for accounts at or close to quota, accounts with a large number of logins, or even usage from many different IP addresses. None of those applied here, and I also didn’t see any significant sign of multiple simultaneous users. The things I think of as “obvious” signs of abuse are missing; it’s a different set of parameters that becomes visible once you look.

One option I’m just starting to investigate is coming up with some kind of “typical” network usage per user, sort of a capacity-planning number — and if the system deviates from it significantly, that’s a hint you need to look in more detail. I want to avoid having to monitor at the per-user level to the greatest extent possible, and instead find metrics at the system-usage level that tell me whether the system is within expected usage ranges.
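
As a rough illustration of the shape of that check (the numbers are invented; this is just the statistics, not a production monitor):

```python
from statistics import mean, stdev

def usage_alarm(todays_bytes_per_user, history, sigmas=3.0):
    """Compare today's system-wide per-user usage against the recent
    baseline; a large deviation means 'go look in detail' without
    having to monitor individual users."""
    baseline, spread = mean(history), stdev(history)
    if abs(todays_bytes_per_user - baseline) > sigmas * spread:
        return f"{todays_bytes_per_user:.0f} B/user vs baseline {baseline:.0f} -- investigate"
    return None

# A quiet week of per-user averages, then a July-style spike:
history = [2.1e6, 2.3e6, 2.2e6, 2.4e6, 2.2e6, 2.3e6, 2.1e6]
print(usage_alarm(2.4e6, history))   # None -- within the normal range
print(usage_alarm(24e6, history))    # flags the 10X jump
```

The appeal is that it’s one number per day for the whole system; you only drill down to per-account data when the alarm trips.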

In reality, there’s nothing “wrong” going on here other than the sheer size of the operation and the costs it involves (and the fact that most of the content is likely illegal). Technically it’s pretty simple and straightforward — a nice hack — shifting the cost of distribution off to others in a way that’s (in theory) low-key enough not to be noticed, at least until they get greedy about resource consumption. If they hadn’t spiked usage in July like they did, I might not have gotten around to chasing them for a while.

My ultimate takeaway, though, is that users’ use cases for a technology are rarely the same as the developers’. Sometimes users innovate in really interesting and positive ways, sometimes they distribute warez — but either way, people are going to see opportunities in your technology, and that should be part of the discussion when designing it.

My suggestion: if you run a webmail site that allows users to create accounts, you might just want to look and see what you find. Might surprise you.

Oh, for what it’s worth, I held off posting this for a bit because I gave advance warning to the other sites I found involved. Of the 15 or so abuse@ accounts I sent the details to (including accounts, IP ranges, Received header data, etc.), one responded immediately and started their own search-and-destroy operation — they happened to be one of the larger “white label” webmail providers, so that’ll shut down any number of the domains involved.

But three of the webmail sites had their abuse@ addresses bounce as user unknown. One sent me email letting me know he was on holiday for a few weeks (in Italian). And from the rest, including the two Polish ISPs where all of the upload activity initiated, total silence. Oh well. Kinda sad, but hey, it’s their network bill; if they don’t mind paying it, I shouldn’t complain… And I just did a check of our site to see if they took the hint: I see no sign of them creating new accounts or doing any kind of activity, so I think they’re gone. Well, for now. I’ll know if they come back…

Breaking the 200 barrier… (with a bullet!)

With everything going on, I was wondering if I’d ever get past the 200th bird on my birdwatching life list. I set myself two goals for birding in 2008: 200 species on the life list, and being the first to discover a notable bird in the area.

The latter is really a function of luck, time spent birding and a bit more luck; I’ve come close a couple of times in the last year, but it’s never been confirmed. It’ll happen when it happens.

But I’ve finally been able to do a bit of birding again, and I’ve now shot past 200 species. I’m now thinking I might amend the goal to 200 species for the year and see what happens.

Bird #200 was, of all things, a Barn Owl. There’s a Barn Owl in a box at the Don Edwards EEC; I went out there on the 11th to see if I could find the Wilson’s Phalaropes (no luck, because I was limited in how far I could walk out after them), and realized I’d never logged the owl onto my list. Looked in the box, it looked back and blinked. Done.

Leading up to 200 were a couple of nice birds: 199 was Snowy Plover, down at Bolsa Chica (yes, I’m spending a LOT of time in SoCal these days, and birding Bolsa Chica on the way home most trips; it’s a nice place to visit and a good break after the fun of Southern Cal right now). 198 was Blue-gray Gnatcatcher, a bird I’ve missed multiple times, even when Bob Power has pointed it out — yet when I was reworking my photo library, there was a bird from 2006 labeled “sparrow” that I saw at a glance was wrong; a close look showed it to be a Blue-gray Gnatcatcher, so I added it to the list retroactively.

Also added to my list via photo evidence were Gila Woodpecker and Northern Cardinal from a trip to Tucson in the 1990s, and Rhinoceros Auklet from a 2005 trip to Victoria at Ogden Point; those made 201, 202 and 203. Today I added Wilson’s Phalarope and the Ruff out at the EEC for 204 and 205, and I could have added a Pacific Golden-Plover, but my ankle just wasn’t up to the walk. The walk out to Island 4 wasn’t bad, but the ankle stiffened up while I watched the Ruff, and the walk back got pretty brutal; still, it’s slowly improving.

Nice to know that despite everything going on the last few months, at least one goal is accomplished…

On the way back from SoCal yesterday with the first batch of Dad’s “stuff” to be sorted and organized, and with the key estate issues taken care of (at least this round), I hit Bolsa Chica again and got some really nice Snowy Plover photos, as well as some Least Tern chicks, and got to watch a Black Skimmer on the hunt again. Fascinating, weird-but-beautiful birds, Black Skimmers.

Some quick notes on today’s birding trip:

I headed out this morning to Don Edwards in search of fame and fortune, or at least a Wilson’s Phalarope. Starting out around 10, I walked out to Island 4 and back, running into numerous other birders out searching for same or better.

It was a very successful day. The Ruff continued on Island 4, living most of the time on the far side of the island but popping into sight every so often; while I was there, it came into full view three times, and popped its head up once more, over about 30-40 minutes.

Other birders reported the golden-plover continuing on Island 5, but my ankle was already complaining, so I gave it a pass (sigh. But the right decision for me).

There was a Wilson’s Phalarope at the eastern edge of Island 4, another on Island 3, and a third in the shallows on the south side of the berm across from Island 4, but not great numbers. I found two Ruddy Turnstones on Island 3. Walking back towards the parking lot near Island 2, I had a sparrow fly past me; I chased it a bit before it flew off into the brush, and it looks (I think) like a moulting juvenile Savannah Sparrow.

From talking to the other birders, the Black Tern had been a no-show that morning. I’d stopped to rest the ankle near Island 1 on the way back (about 12:30ish) and noticed a tern out on the algae mat beyond Island 1. It was only there for a minute or so, but I got the scope on it and it was a Black Tern (much darker underwings than a Forster’s, and much different flying habits; it was flying maybe 1-3 feet over the water and dipping in to skim, much like a Black Skimmer, rather than plunge-diving — very distinctive once you see it). It flew off toward Island 1 and I thought it landed near the pelicans, but I couldn’t find it again; it was definitely there, if only for a very short period of time.

In the reeds of the marsh between the EEC and the pond, I spent some time trying to coax the Marsh Wrens into view; one finally came out, though there were four or five in the reeds. While doing that I had another bird fly through and perch; my initial thought was warbler. When I got my binocs on it, the face seemed more like a kinglet, but it had bright yellow on the chest. Coming home and researching, I realize now it was a female Common Yellowthroat, so my first guess was pretty close (I was initially thinking Yellow-rumped, but there was no yellow on top or back).

A couple of birders reported a Peregrine playing around near Island 1; I didn’t see it. The terns did, and weren’t happy.

Other birds seen included Canada Geese (which seemed to be migrators, not feral, and not terribly friendly), Snowy and Great Egrets, Black-crowned Night-Heron, one Great Blue Heron, White Pelican, a few Mallards and a couple of Pied-billed Grebes, Double-crested Cormorants (lots of blonde younger ones), Turkey Vulture, lots of stilts and avocets, two really, really, REALLY cute baby stilts still in down on one of the islands (3, I think), one practicing catching bugs, one practicing swimming, yellowlegs, dowitchers, Red-necked Phalaropes (50+), Western and Least Sandpipers (my brain cramp of the day: “Least Sandpiper, that’s a lifer. Yeah, right. It’s Semipalmated I need… gah”), swallows, Anna’s, and the usual cast of characters.

The golden-plover, by reports, has moved onto Island 5 and evidently went to sleep there, so it may hang around. The Ruff is definitely hanging around and well worth going to look for; patience is needed because of its tendency to wander the far side of the island. When I was there, it’d make an appearance every 5-15 minutes for a bit. The Black Tern is around; look for the tern that isn’t acting like the Forster’s — it tends to fly much closer to the water and swoop/skim rather than dive/plunge.

(And Ruff is 204 on the life list, Wilson’s Phalarope 205, and Black Tern 174 on the year list… finally over 200…)

Apple’s Joswiak dishes on missing features

Macworld | iPhone Central | Apple’s Joswiak dishes on missing features:

When asked about cut-and-paste support, something that many iPhone users—ourselves included—have clamored for, Joz said that the feature simply didn’t make—if you’ll pardon the expression—the cut on Apple’s priority list for the latest software release. There’s nothing against cut-and-paste, Joz claimed, it’s just that other features were determined to be more in demand.

I think Joswiak and Apple are being a bit disingenuous here, but for all of the right reasons. There’s a deeper issue that needs to be examined, but it boils down to a few key points:

1) Once you implement something, it’s really hard to throw it out and replace it with something better. Apple’s more willing to do this than most companies — think, for instance, of the “new” iMovie in iLife ’08 and the whining that happened. (Apple was right, IMHO, but that’s a different blog entry; most of the whining came from folks who honestly should have moved to Final Cut Express long ago and were pissed when Apple took iMovie back to being a “my mother can use this” entry-level app.)

2) One of Apple’s core values is “do it right”.

3) Something as core as cut and paste isn’t shipping until it passes the “Steve test”; and Steve is not big on “well, it’ll do”.

4) It’s easy to do cut and paste badly on an iPhone. Or even to do it in a “hey, this doesn’t suck” way. But doing it the Apple way?

Basically, I think the real reason this doesn’t exist is that Apple knows once they implement it, they’re stuck with it, so they’d rather not do it at all until they can do it right.

And they’re right. It’s a lot easier to fix “we don’t have cut and paste” than “damn, but cut and paste sucks”.

But it’s a lot easier politically to simply say “hey, there are other priorities”. To a degree, he’s right; the priority he’s implying but not explicitly stating is “we want to make sure it works like an Apple product and doesn’t suck” first.

And that’s why Apple has sold a million of these buggers already: because they are careful about core functionality and compromises, and the geeks know it. Few companies are willing to play the “better to not do than do badly” game, much less to Apple’s “… than do so-so” standard.

As an aside, since the Xbox 360/Netflix agreement has brought it forward again: this is why Apple hasn’t done a PVR or PVR software for the Apple TV. There are so many factors out of its control — anyone who’s hooked one of these bastards up understands — that building a PVR that “works like an Apple” is somewhere beyond difficult and toward impossible. (Which is why so many of us would love something like this; it solves a problem nobody’s really solved, even TiVo, where interfacing to random cable boxes in random ways is still a bit of a horror show.)

FWIW, I like the Xbox/Netflix deal. Its impact on Apple and iTunes is less than most people think, because it really comes down to whether you prefer a subscription model (Netflix) or a pay-per-view model (Apple), and neither model really matters for online video until both platforms fix the “there’s no freaking content” problem — the amount of downloadable content on Netflix is still a tiny proportion of its library, and bluntly, Netflix’s real value is in its library, not its technology. Which is why, every time I talked to someone in the iTunes group when I was at Apple, I used to harp on “we have to buy Netflix” until they finally told me to just shut up… But iTunes with a subscription and PPV model, Netflix’s library depth, and an Apple TV is one hell of a business proposition… Still is, but Apple never showed any significant interest in it, even though some of us wandering the project and its peripheries thought it was a killer combo.

But that’s ultimately why I haven’t bought an Apple TV (or a Roku) — neither gets me access to much of the content I want, which is the library beyond the last three years of hit movies. Talk to me when I can stream, say, The Big Chill or season 5 of M*A*S*H to my Apple TV at PPV prices.

Looking at Sharks 2009…

So the dust is settling, the new Sharks roster is taking shape, and I’m finally back at a point in my life where blogging seems not only possible but interesting. It’s been an interesting three months.

So now we can start to look at the Sharks for 2008-2009 and see if this is a better team. Is it?

First, coaching: I like the hiring of McLellan, but it’s not without risks. Sometimes a really top-notch assistant coach is — a top-notch assistant coach. He could be the next Bruce Boudreau or John Anderson, but he could also be Dave Lewis or Wayne Cashman, two guys who tried to make the leap to NHL coach and found out they made damn good assistant coaches. Or he could be Kevin Constantine, who’s a pretty damn good coach, just not at the NHL level.

So the move is not without risk, but the Sharks aren’t afraid of taking risks, and I think this one makes sense. I’m a lot happier with the idea of bringing in a new voice that has some ability to relate to younger kids than to bring in a “safe” retread who overplays veterans and doesn’t grow his players. I think it’s a good hiring, and I’m looking forward to seeing how he fills out his assistants.

A while back, I wrote about what I thought should be (or would be) changed in the roster this offseason. A few highlights and lowlights:

Two for Elbowing: Picking up pieces and an update on Michalek – The San Jose Mercury News Sharks Hockey Blog -:


The Sharks are a damn good team, but it’s clear changes need to be made for the team to get better.

I’d like to see Nabokov backed off to 60-65 games next year (his going to the world championships notwithstanding). Rest him a bit more, keep him a bit fresher.

If that means bringing back Boucher, or someone else, so be it.

And so it is.


Core group (do not touch under penalty of death):

Thornton
Marleau
Pavelski
Mitchell
Grier
Clowe
Setoguchi

Not coming back:

Curtis Brown (Sorry, Brownie, but I think it’s time).

All of which happened. I honestly felt a top-six forward would go — I’m happy that McLellan and Wilson think this group can be kept together and improved without being swapped around.


Players I expect back, but who aren’t “no trade under any circumstance” types (as part of the right deal? Sure):

Tomas Plihal
Patrick Rissmiller
Jody Shelley
Marcel Goc

I want to see come back:

Jeremy Roenick

Rissmiller was allowed to leave, Shelley is back, and as far as I can tell, Plihal and Goc are still unsigned. Plihal will be; Goc, I’m not so sure. If, of this crew, we lose Rissmiller (like him, but replaceable) and Goc (like him, but somewhat disappointing), I don’t think the Sharks miss a beat. No game changers.

So where does this put the Sharks?

Thornton-Cheechoo-Michalek
Pavelski-Marleau-Clowe
Grier-Roenick-Setoguchi
Shelley-Plihal (probably)-Mitchell

and two black aces to be named; Goc may be one of them.

It’s kinda hard to complain about this roster, especially if they play to potential. So I won’t. You can see why Rissmiller wasn’t kept when guys like Setoguchi and Mitchell and Plihal are having to fight for third-line time.

Now the fun begins. The defense was the thin spot in depth last year; I thought going into the season it was good enough. I was wrong. This year, it’s looking a lot different:


Core group (do not touch under penalty of death):

Douglas Murray (what an improvement this year!)

Matt Carle (struggled at times, but seems to be growing into it; I’d hate to give up too soon)

M-E Vlasic (wow; at his age?)

Craig Rivet

I’d like to see back:

Brian Campbell (but not for Phaneuf money; if someone wants to pay him that, be my guest. He’s missing that “punk brat” aspect to his game, which keeps him a rung below Phaneuf on the ladder. But $25m over 5 years? Sure. Just not $30m over 5.)

Hint: I expect Campbell to stay. He seems happy. He likes playing 30 minutes a game. Why screw it up?

Not coming back:

Sandis Ozolinsh: thanks, Sandis, for everything.

Alexei Semenov: ditto. Neither of them is NHL caliber in today’s NHL.

Kyle McLaren: love his guts and drive, but his knees are problematic. I think it’s time to consider an upgrade.

So one of my “untouchables” goes away in Matt Carle, but we’re getting (if the rumors are true) Dan Boyle in return. We lose Brian Campbell, but for the money he’s getting, I hope Chicago enjoys his play. Carle was a lot more expendable to me than Vlasic, so I’m happy.

So our D now looks like:

Rob Blake-Dan Boyle
Vlasic-Rivet
Murray-Lukowich (rumored coming from Tampa)

McLaren

Again, not much to complain about here. Blake/Boyle/Lukowich instead of Campbell/Carle and either Semenov or Ozolinsh? It’s a more veteran crew, but I like what Blake brings to the team in intangibles, even if we’re giving up some youth to get it (indirectly, because losing Campbell makes bringing Blake in and getting Boyle possible — although I get the impression Wilson was going to bring Blake in anyway).

This is a team that’s now completely oriented towards the next two seasons. Yeah, after that we’ll have to see about bringing youth in and reloading, but that’s Wilson’s problem for later. This team needed to be about “NOW OR ELSE”, and now it truly is.

In retrospect, two problems last year:

No backup goalie to take the load off Nabby and limit his playing time a bit. I don’t think this really hurt the Sharks, but I don’t want my goalie playing that many games.

The defense was too young and too thin; having to patch it with Semenov and Ozolinsh was the red flag, and it proved accurate.


One thing I can guarantee: Doug Wilson will do something completely different than this, and when he does, I’ll go “wow, I never would have thought of that” and like it. Which is why he’s GM, and I’m a blogger…

Well, actually, Wilson’s done pretty much what I expected: couldn’t re-sign Campbell, went and got Boyle. I have to admit that Rob Blake was one of the guys I thought would be great on the Sharks — only I never thought he’d leave the Kings, so I didn’t really consider it an option. Fortunately, Wilson did, and Lombardi (if you ask me) misstepped here. But more on that later. Thanks, Dean; I expect Detroit will send you flowers for helping us (not).

I can’t see how Wilson could have handled this better, given the things not under his control (Campbell couldn’t be brought back without seriously overpaying him, which the Sharks don’t do). I’m really happy they didn’t move Marleau, and I’m really happy they didn’t make any “make a splash” moves at the draft, overpaying to do so. It’s all a very solid, methodical, well-thought-out strategy.

So far, a great offseason. And what it ends up doing is sending a big message to the players: no excuses. Now, it’s up to the players.

Can’t wait for camp.

.Mac Morphs into MobileMe

TidBITS Networking: .Mac Morphs into MobileMe:


Although still costing $99 per year (with a free 60-day trial), the idea is that MobileMe is less a separate service and more of an extension of what you already do on your Mac, PC, iPhone, or iPod touch. For example, your email messages and mailboxes will apparently instantly be the same, whether on your iPhone or your computer, a feature that many users should welcome with open arms. And, contacts and calendar items will sync automatically.


Exactly. While some of the geek pundits are downplaying MobileMe and its cost, they keep listening to the echo chamber and not paying attention to the core market .Mac served and MobileMe is going to dominate. For me, the improvements announced today are wonderful — assuming Apple comes through in delivering AND delivers them in a reliable form.

So I’ll likely give it a few weeks, then hook up a family pack (for myself, Laurie and Mom to each have our own environment) and move my email from my hosting service and Google over to MobileMe. For something that’s (a) tightly integrated, (b) reliable, and (c) works appropriately between the web, my Macs (plural), and my iPhone (when I buy it), that’s worth the money.

Yeah, I COULD patch it all together, and in fact I have for the most part, having used things like Gmail, Google Calendar, Google Bookmarks, Spanning Sync or Plaxo or whatever… But to pay a few bucks to let someone else do the grunt work, and to have a single support contact for everything? Let the geeks geek; I want to USE the damn thing, not maintain it.

And this is one of those places where the echo chamber of geekdom falls down badly. It never got .Mac, and I’m sure MobileMe’s going to get ripped for the price (again), the lack of social networking tools, etc., etc. What they don’t get is that a huge audience doesn’t care about or want those features. Their idea of social networking is passing around email to church group members, coordinating calendars for play dates and Little League, and sending photos to Grandma in Sun City.

So Apple’s never gotten much geek cred for .Mac, but it keeps getting good numbers of happy, paying customers. And now the service takes a quantum leap forward and starts integrating the cloud into Mac OS X. This opens the door to all sorts of things down the road beyond “Exchange for the masses”.

I love it. And I love the new iPhone. Both will be part of my toolbox in the next few months, once they prove themselves out.

To all the folks back at Apple who worked their butts off to build this, way to go. I like it.